Science.gov

Sample records for earth fault management

  1. Managing Fault Management Development

    NASA Technical Reports Server (NTRS)

    McDougal, John M.

    2010-01-01

    As the complexity of space missions grows, development of Fault Management (FM) capabilities is an increasingly common driver for significant cost overruns late in the development cycle. FM issues and the resulting cost overruns are rarely caused by a lack of technology, but rather by a lack of planning and emphasis by project management. A recent NASA FM Workshop brought together FM practitioners from a broad spectrum of institutions, mission types, and functional roles to identify the drivers underlying FM overruns and recommend solutions. They identified a number of areas in which increased program and project management focus can be used to control FM development cost growth. These include up-front planning for FM as a distinct engineering discipline; managing different, conflicting, and changing institutional goals and risk postures; ensuring the necessary resources for a disciplined, coordinated approach to end-to-end fault management engineering; and monitoring FM coordination across all mission systems.

  2. Flight elements: Fault detection and fault management

    NASA Technical Reports Server (NTRS)

    Lum, H.; Patterson-Hine, A.; Edge, J. T.; Lawler, D.

    1990-01-01

    Fault management for an intelligent computational system must be developed using a top-down, integrated engineering approach. The proposed approach integrates the overall environment involving sensors and their associated data; design knowledge capture; operations; fault detection, identification, and reconfiguration; testability; causal models, including digraph matrix analysis; and overall performance impacts on the hardware and software architecture. A real-time intelligent fault detection and management system will be achieved through several objectives: development of fault tolerance/FDIR requirements and specifications at the systems level that carry through from conceptual design to implementation and mission operations; implementation of monitoring, diagnosis, and reconfiguration at all system levels, providing fault isolation and system integration; optimization of system operations to manage degraded system performance through system integration; and reduction of development and operations costs through the implementation of an intelligent real-time fault detection and fault management system and an information management system.

  3. Fault Management Metrics

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.
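
    To make the probabilistic summing concrete, the sketch below combines per-failure detection, isolation, and response effectiveness into an overall FM effectiveness estimate, weighted by failure probability. The field names, numbers, and the simple product model are illustrative assumptions, not the paper's actual formulation.

        # Hedged sketch: probability-weighted combination of per-failure FM
        # control-loop metrics into one effectiveness number. The dataclass
        # fields and the product model are illustrative, not the paper's.
        from dataclasses import dataclass

        @dataclass
        class FailureMode:
            name: str
            probability: float   # chance the failure occurs during the mission
            p_detect: float      # detection effectiveness (state estimation)
            p_isolate: float     # isolation/identification effectiveness
            p_respond: float     # response determination x response effectiveness

        def fm_effectiveness(modes):
            """Probability that FM preserves the protected goal, given a failure."""
            total = sum(m.probability for m in modes)
            preserved = sum(m.probability * m.p_detect * m.p_isolate * m.p_respond
                            for m in modes)
            return preserved / total if total else 1.0

        modes = [FailureMode("thruster stuck-open", 0.02, 0.99, 0.95, 0.90),
                 FailureMode("star tracker dropout", 0.05, 0.97, 0.90, 0.98)]
        print(f"overall FM effectiveness: {fm_effectiveness(modes):.3f}")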

  4. Fault Management Guiding Principles

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; Friberg, Kenneth H.; Fesq, Lorraine; Barley, Bryan

    2011-01-01

    Regardless of mission type (deep space or low Earth orbit, robotic or human spaceflight), Fault Management (FM) is a critical aspect of NASA space missions. As the complexity of space missions grows, the complexity of the supporting FM systems increases in turn. Data on recent NASA missions show that development of FM capabilities is a common driver for significant cost overruns late in the project development cycle. Efforts to understand the drivers behind these cost overruns, spearheaded by NASA's Science Mission Directorate (SMD), indicate that they are primarily caused by the growing complexity of FM systems and the lack of maturity of FM as an engineering discipline. NASA can and does develop FM systems that effectively protect mission functionality and assets. The cost growth results from a lack of FM planning and emphasis by project management, as well as from the maturity of FM as an engineering discipline, which lags behind that of other engineering disciplines. As a step toward controlling the cost growth associated with FM development, SMD has commissioned a multi-institution team to develop a practitioner's handbook representing best practices for the end-to-end processes involved in engineering FM systems. While currently concentrating primarily on FM for science missions, the expectation is that this handbook will grow into a NASA-wide handbook, serving as a companion to the NASA Systems Engineering Handbook. This paper presents a snapshot of the principles that have been identified to guide FM development from cradle to grave. The principles range from considerations for integrating FM into the project and SE organizational structure, to the relationship between FM designs and mission risk, to the use of the various tools of FM (e.g., redundancy) to meet the FM goal of protecting mission functionality and assets.

  5. Fault management and systems knowledge

    DOT National Transportation Integrated Search

    2016-12-01

    Pilots are asked to manage faults during flight operations. This leads to the training question of the type and depth of system knowledge required to respond to these faults. Based on discussions with multiple airline operators, there is agreement th...

  6. Fault Management Design Strategies

    NASA Technical Reports Server (NTRS)

    Day, John C.; Johnson, Stephen B.

    2014-01-01

    Development of dependable systems relies on the ability of the system to determine and respond to off-nominal system behavior. Specification and development of these fault management capabilities must be done in a structured and principled manner to improve our understanding of these systems, and to make significant gains in dependability (safety, reliability, and availability). Prior work has described a fundamental taxonomy and theory of System Health Management (SHM), and of its operational subset, Fault Management (FM). This conceptual foundation provides a basis for developing a framework to design and implement FM design strategies that protect mission objectives and account for system design limitations. Selection of an SHM strategy has implications for the functions required to perform the strategy, and it places constraints on the set of possible design solutions. The framework developed in this paper provides a rigorous and principled approach to classifying SHM strategies, as well as methods for determination and implementation of SHM strategies. An illustrative example is used to describe the application of the framework and the resulting benefits to system and FM design and dependability.

  7. Fault management for data systems

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Iverson, David L.; Patterson-Hine, F. Ann

    1993-01-01

    Issues related to automating the process of fault management (fault diagnosis and response) for data management systems are considered. Substantial benefits are to be gained by successful automation of this process, particularly for large, complex systems. The use of graph-based models to develop a computer-assisted fault management system is advocated. The general problem is described, and the motivation behind choosing graph-based models over other approaches for developing fault diagnosis computer programs is outlined. Some existing work in the area of graph-based fault diagnosis is reviewed, and a new fault management method, developed from existing methods, is offered. Our method is applied to an automatic telescope system intended as a prototype for future lunar telescope programs. Finally, an application of our method to general data management systems is described.
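
    A minimal sketch of the graph-based idea: in a failure-propagation digraph, an edge A -> B means a failure of A can produce the symptom observed at B, and a candidate root cause is any node whose reachable set explains every observed symptom. The example graph and node names are invented for illustration; the paper's method is more elaborate.

        # Hedged sketch of graph-based fault isolation. Edge A -> B reads
        # "failure of A can produce the failure/symptom at B"; a candidate
        # root cause is any node whose reachable set covers all symptoms.
        def reachable(graph, node, seen=None):
            seen = set() if seen is None else seen
            seen.add(node)
            for nxt in graph.get(node, ()):
                if nxt not in seen:
                    reachable(graph, nxt, seen)
            return seen

        def candidate_causes(graph, symptoms):
            nodes = set(graph) | {v for vs in graph.values() for v in vs}
            return sorted(n for n in nodes if symptoms <= reachable(graph, n))

        graph = {"power_bus": ["sensor_A", "heater"],   # invented example model
                 "sensor_A": ["telemetry_A"],
                 "heater": ["temp_low"]}
        print(candidate_causes(graph, {"telemetry_A", "temp_low"}))  # ['power_bus']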

  8. Fault Current Distribution and Pole Earth Potential Rise (EPR) Under Substation Fault

    NASA Astrophysics Data System (ADS)

    Nnassereddine, M.; Rizk, J.; Hellany, A.; Nagrial, M.

    2013-09-01

    New high-voltage (HV) substations are fed by transmission lines. The position of these lines necessitates earthing design to ensure safety compliance of the system. Conductive structures such as steel or concrete poles are widely used in HV transmission mains. The earth potential rise (EPR) generated by a fault at the substation could result in an unsafe condition. This article discusses EPR due to a substation fault. The pole EPR under a substation fault is assessed with and without consideration of mutual impedance. Split factor determination with and without the mutual impedance of the line is also discussed. Furthermore, a simplified formula to compute the pole grid current under a substation fault is included. The article also introduces the n factor, which determines the number of poles that require earthing assessment under a substation fault. A case study is shown.
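
    For orientation, the sketch below uses the textbook relationship EPR = Z_grid x I_grid, with a split factor apportioning the total fault current between the earth grid and metallic return paths. It is a simplified stand-in, not the article's formulas, which additionally account for mutual impedance and the n factor.

        # Hedged sketch using the textbook relationship EPR = Z_grid * I_grid,
        # where a split factor Sf apportions the total earth-fault current
        # between the earth grid and metallic return paths (earth wires, cable
        # sheaths). Mutual impedance and the article's n factor are omitted.
        def pole_epr(fault_current_a, split_factor, grid_impedance_ohm):
            """Earth potential rise (volts) at an earthing system."""
            grid_current = split_factor * fault_current_a
            return grid_current * grid_impedance_ohm

        # 10 kA substation fault, 60% returning through the earth grid,
        # 0.5 ohm earthing impedance -> 3000 V EPR (illustrative numbers).
        print(pole_epr(10_000, 0.6, 0.5))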

  9. Managing Space System Faults: Coalescing NASA's Views

    NASA Technical Reports Server (NTRS)

    Muirhead, Brian; Fesq, Lorraine

    2012-01-01

    Managing faults and their resultant failures is a fundamental and critical part of developing and operating aerospace systems. Yet, recent studies have shown that the engineering "discipline" required to manage faults is neither widely recognized nor evenly practiced within the NASA community. Attempts to simply name this discipline in recent years have been fraught with controversy among members of the Integrated Systems Health Management (ISHM), Fault Management (FM), Fault Protection (FP), Hazard Analysis (HA), and Aborts communities. Approaches to managing space system faults typically are unique to each organization, with little commonality in the architectures, processes, and practices across the industry.

  10. Fault tolerant data management system

    NASA Technical Reports Server (NTRS)

    Gustin, W. M.; Smither, M. A.

    1972-01-01

    Described in detail are: (1) results obtained in modifying the onboard data management system software to a multiprocessor fault tolerant system; (2) a functional description of the prototype buffer I/O units; (3) description of modification to the ACADC and stimuli generating unit of the DTS; and (4) summaries and conclusions on techniques implemented in the rack and prototype buffers. Also documented is the work done in investigating techniques of high speed (5 Mbps) digital data transmission in the data bus environment. The application considered is a multiport data bus operating with the following constraints: no preferred stations; random bus access by all stations; all stations equally likely to source or sink data; no limit to the number of stations along the bus; no branching of the bus; and no restriction on station placement along the bus.

  11. Fault Management Techniques in Human Spaceflight Operations

    NASA Technical Reports Server (NTRS)

    O'Hagan, Brian; Crocker, Alan

    2006-01-01

    This paper discusses human spaceflight fault management operations. Fault detection and response capabilities available in the current US human spaceflight programs, Space Shuttle and International Space Station, are described while emphasizing system design impacts on operational techniques and constraints. Preflight and inflight processes, along with products used to anticipate, mitigate, and respond to failures, are introduced. Examples of operational products used to support failure responses are presented. Possible improvements in the state of the art, as well as prioritization and success criteria for their implementation, are proposed. This paper describes how the architecture of a command and control system impacts operations in areas such as the required fault response times, automated vs. manual fault responses, use of workarounds, etc. The architecture includes the use of redundancy at the system and software function level, software capabilities, use of intelligent or autonomous systems, number and severity of software defects, etc. This in turn drives which Caution and Warning (C&W) events should be annunciated, C&W event classification, operator display designs, crew training, flight control team training, and procedure development. Other factors impacting operations are the complexity of a system, the skills needed to understand and operate a system, and the use of commonality vs. optimized solutions for software and responses. Fault detection, annunciation, safing responses, and recovery capabilities are explored using real examples to uncover underlying philosophies and constraints. These factors directly impact operations in that the crew and flight control team need to understand what happened, why it happened, what the system is doing, and what, if any, corrective actions they need to perform. If a fault results in multiple C&W events, or if several faults occur simultaneously, the root cause(s) of the fault(s), as well as their vehicle-wide impacts, must be

  12. NASA Spacecraft Fault Management Workshop Results

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn; McDougal, John; Barley, Bryan; Fesq, Lorraine; Stephens, Karen

    2010-01-01

    Fault Management is a critical aspect of deep-space missions. For the purposes of this paper, fault management is defined as the ability of a system to detect, isolate, and mitigate events that impact, or have the potential to impact, nominal mission operations. Fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for 5 missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that 4 out of the 5 missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants across government, industry, and academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing these issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and

  13. Formal Validation of Fault Management Design Solutions

    NASA Technical Reports Server (NTRS)

    Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John

    2013-01-01

    The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as the action language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.
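
    The flavor of the approach can be conveyed with a toy executable fault-protection state machine and a bounded, exhaustive check of a safety property, in the spirit of model checking. The states, events, and property below are invented stand-ins; the actual work models SysML State Machines and verifies generated Java code with jpf-Statechart.

        # Hedged toy stand-in for an executable fault-protection model plus a
        # bounded exhaustive check of a safety property, in the spirit of
        # model checking. States, events, and the property are invented.
        from itertools import product

        TRANSITIONS = {("NOMINAL", "fault_detected"): "DIAGNOSING",
                       ("DIAGNOSING", "isolated"): "SAFE_MODE",
                       ("DIAGNOSING", "cleared"): "NOMINAL",
                       ("SAFE_MODE", "recovered"): "NOMINAL"}
        EVENTS = ["fault_detected", "isolated", "cleared", "recovered"]

        # Property: while a fault is active (detected, neither cleared nor
        # recovered), the machine must never sit in NOMINAL.
        for seq in product(EVENTS, repeat=5):
            state, fault_active = "NOMINAL", False
            for ev in seq:
                state = TRANSITIONS.get((state, ev), state)  # else stay put
                if ev == "fault_detected":
                    fault_active = True
                elif ev in ("cleared", "recovered"):
                    fault_active = False
                assert not (fault_active and state == "NOMINAL"), seq
        print("safety property holds for all event sequences up to length 5")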

  14. Implementation of Integrated System Fault Management Capability

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Schmalzel, John; Morris, Jon; Smith, Harvey; Turowski, Mark

    2008-01-01

    Fault management supports the rocket engine test mission with highly reliable and accurate measurements while improving availability and lifecycle costs. CORE ELEMENTS: architecture, taxonomy, and ontology (ATO) for DIaK (data, information, and knowledge) management; Intelligent Sensor Processes; Intelligent Element Processes; Intelligent Controllers; Intelligent Subsystem Processes; Intelligent System Processes; Intelligent Component Processes.

  15. A System for Fault Management and Fault Consequences Analysis for NASA's Deep Space Habitat

    NASA Technical Reports Server (NTRS)

    Colombano, Silvano; Spirkovska, Liljana; Baskaran, Vijaykumar; Aaseng, Gordon; McCann, Robert S.; Ossenfort, John; Smith, Irene; Iverson, David L.; Schwabacher, Mark

    2013-01-01

    NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies with ground control of as long as 20+ minutes make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and Graphic User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy.

  16. The Development of NASA's Fault Management Handbook

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine

    2011-01-01

    A disciplined approach to Fault Management (FM) has not always been emphasized by projects, contributing to major schedule and cost overruns: (1) often faults aren't addressed until the nominal spacecraft design is fairly stable; (2) FM design is relegated to after-the-fact patchwork, a Band-Aid approach. Progress is being made on a number of fronts outside of the Handbook effort: (1) processes, practices, and tools are being developed at some Centers and institutions; (2) management recognition: Constellation FM roles, Discovery/New Frontiers mission reviews; (3) potential technology solutions: new approaches could avoid many current pitfalls, including (3a) new FM architectures, such as a model-based approach integrated with NASA's MBSE (Model-Based Systems Engineering) efforts, and (3b) NASA's Office of the Chief Technologist: FM is identified in seven of NASA's 14 Space Technology Roadmaps, an opportunity to coalesce and establish a thrust area to progressively develop new FM techniques. The FM Handbook will help ensure that future missions do not encounter the same FM-related problems as previous missions. Version 1 of the FM Handbook is a good start: (1) an Agency-wide Version 2 is still needed to expand the Handbook to other areas, especially crewed missions; (2) outreach to other organizations is still needed to develop a common understanding and vocabulary. The Handbook doesn't, and can't, address all Workshop recommendations; how to address programmatic and infrastructure issues still needs to be identified.

  17. Fault Management Practice: A Roadmap for Improvement

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Oberhettinger, David

    2010-01-01

    Autonomous fault management (FM) is critical for deep space and planetary missions where the limited communication opportunities may prevent timely intervention by ground control. Evidence of pervasive architecture, design, and verification/validation problems with NASA FM engineering has been revealed both during technical reviews of spaceflight missions and in flight. These problems include FM design changes required late in the life-cycle, insufficient project insight into the extent of FM testing required, unexpected test results that require resolution, spacecraft operational limitations because certain functions were not tested, and in-flight anomalies and mission failures attributable to fault management. A recent NASA initiative has characterized the FM state-of-practice throughout the spacecraft development community and identified common NASA, DoD, and commercial concerns that can be addressed in the near term through the development of a FM Practitioner's Handbook and the formation of a FM Working Group. Initial efforts will focus on standardizing FM terminology, establishing engineering processes and tools, and training.

  18. The Development of NASA's Fault Management Handbook

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine

    2011-01-01

    A disciplined approach to Fault Management (FM) has not always been emphasized by projects, contributing to major schedule and cost overruns. Progress is being made on a number of fronts outside of the Handbook effort: (1) processes, practices, and tools are being developed at some Centers and institutions; (2) management recognition: Constellation FM roles, Discovery/New Frontiers mission reviews; (3) potential technology solutions: new approaches could avoid many current pitfalls, including (3a) new FM architectures, such as a model-based approach integrated with NASA's MBSE efforts, and (3b) NASA's Office of the Chief Technologist: FM is identified in seven of NASA's 14 Space Technology Roadmaps, an opportunity to coalesce and establish a thrust area to progressively develop new FM techniques. The FM Handbook will help ensure that future missions do not encounter the same FM-related problems as previous missions. Version 1 of the FM Handbook is a good start.

  19. A System for Fault Management for NASA's Deep Space Habitat

    NASA Technical Reports Server (NTRS)

    Colombano, Silvano P.; Spirkovska, Liljana; Aaseng, Gordon B.; Mccann, Robert S.; Baskaran, Vijayakumar; Ossenfort, John P.; Smith, Irene Skupniewicz; Iverson, David L.; Schwabacher, Mark A.

    2013-01-01

    NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies with ground control of as long as 20+ minutes make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and Graphic User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy.

  20. Managing systems faults on the commercial flight deck: Analysis of pilots' organization and prioritization of fault management information

    NASA Technical Reports Server (NTRS)

    Rogers, William H.

    1993-01-01

    In rare instances, flight crews of commercial aircraft must manage complex systems faults in addition to all their normal flight tasks. Pilot errors in fault management have been attributed, at least in part, to an incomplete or inaccurate awareness of the fault situation. The current study is part of a program aimed at assuring that the types of information potentially available from an intelligent fault management aiding concept developed at NASA Langley called 'Faultfinder' (see Abbott, Schutte, Palmer, and Ricks, 1987) are an asset rather than a liability: additional information should improve pilot performance and aircraft safety, but it should not confuse, distract, overload, mislead, or generally exacerbate already difficult circumstances.

  21. Analytical Approaches to Guide SLS Fault Management (FM) Development

    NASA Technical Reports Server (NTRS)

    Patterson, Jonathan D.

    2012-01-01

    Extensive analysis is needed to determine the right set of FM capabilities that provides the most coverage without significantly increasing the cost and complexity of the overall vehicle systems or degrading reliability through false positives and false negatives (FP/FN). Strong collaboration with the stakeholders is required to support determination of the best triggers and response options. The SLS fault management process has been documented in the Space Launch System Program (SLSP) Fault Management Plan (SLS-PLAN-085).

  22. On-board fault management for autonomous spacecraft

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Stephan, Amy; Doyle, Susan C.; Martin, Eric; Sellers, Suzanne

    1991-01-01

    The dynamic nature of the Cargo Transfer Vehicle's (CTV) mission and the high level of autonomy required mandate a complete fault management system capable of operating under uncertain conditions. Such a fault management system must take into account the current mission phase and the environment (including the target vehicle), as well as the CTV's state of health. This level of capability is beyond the scope of current on-board fault management systems. This presentation discusses work in progress at TRW to apply artificial intelligence to the problem of on-board fault management; the goal of this work is to develop fault management systems that can meet the needs of spacecraft that have long-range autonomy requirements. We have implemented a model-based approach to fault detection and isolation that does not require explicit characterization of failures prior to launch. It is thus able to detect failures that were not considered in the failure modes and effects analysis. We have applied this technique to several different subsystems and tested our approach against both simulations and an electrical power system hardware testbed. We present findings from simulation and hardware tests which demonstrate the ability of our model-based system to detect and isolate failures, and describe our work in porting the Ada version of this system to a flight-qualified processor. We also discuss current research aimed at expanding our system to monitor the entire spacecraft.

  23. Optimal Management of Redundant Control Authority for Fault Tolerance

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Ju, Jianhong

    2000-01-01

    This paper is intended to demonstrate the feasibility of a solution to a fault tolerant control problem. It explains, through a numerical example, the design and operation of a novel scheme for fault tolerant control. The fundamental principle of the scheme was formalized in [5] based on the notion of normalized nonspecificity. The novelty lies in the use of a reliability criterion for redundancy management, which therefore leads to a high overall system reliability.

  24. Absence of earthquake correlation with Earth tides: An indication of high preseismic fault stress rate

    Vidale, J.E.; Agnew, D.C.; Johnston, M.J.S.; Oppenheimer, D.H.

    1998-01-01

    Because the rate of stress change from the Earth tides exceeds that from tectonic stress accumulation, tidal triggering of earthquakes would be expected if the final hours of loading of the fault were at the tectonic rate and if rupture began soon after the achievement of a critical stress level. We analyze the tidal stresses and stress rates on the fault planes and at the times of 13,042 earthquakes which are so close to the San Andreas and Calaveras faults in California that we may take the fault plane to be known. We find that the stresses and stress rates from Earth tides at the times of earthquakes are distributed in the same way as tidal stresses and stress rates at random times. While the rate of earthquakes when the tidal stress promotes failure is 2% higher than when the stress does not, this difference in rate is not statistically significant. This lack of tidal triggering implies that preseismic stress rates in the nucleation zones of earthquakes are at least 0.15 bar/h just preceding seismic failure, much above the long-term tectonic stress rate of 10⁻⁴ bar/h.
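
    A back-of-envelope check (illustrative, not the paper's statistics) shows why a 2% rate excess over 13,042 events is insignificant: treating the split between tidally promoted and inhibited times as a fair coin, the excess amounts to roughly one standard deviation.

        # If tidal stress promoted failure for half of all random times, a
        # 2% higher rate maps to an observed fraction near 0.505.
        import math

        n = 13_042
        observed_fraction = 0.505               # rate ratio 1.02 -> 0.505
        excess = (observed_fraction - 0.5) * n  # ~65 extra events
        sigma = math.sqrt(n * 0.5 * 0.5)        # binomial standard deviation
        print(f"z = {excess / sigma:.2f}")      # ~1.1, below ~2 for 95% conf.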

  25. Response of faults to climate-driven changes in ice and water volumes on Earth's surface.

    PubMed

    Hampel, Andrea; Hetzel, Ralf; Maniatis, Georgios

    2010-05-28

    Numerical models including one or more faults in a rheologically stratified lithosphere show that climate-induced variations in ice and water volumes on Earth's surface considerably affect the slip evolution of both thrust and normal faults. In general, the slip rate and hence the seismicity of a fault decreases during loading and increases during unloading. Here, we present several case studies to show that a postglacial slip rate increase occurred on faults worldwide in regions where ice caps and lakes decayed at the end of the last glaciation. Of note is that the postglacial amplification of seismicity was not restricted to the areas beneath the large Laurentide and Fennoscandian ice sheets but also occurred in regions affected by smaller ice caps or lakes, e.g. the Basin-and-Range Province. Our results have important consequences not only for the interpretation of palaeoseismological records from faults in these regions but also for the evaluation of future seismicity in regions currently affected by deglaciation, such as Greenland and Antarctica: shrinkage of the modern ice sheets owing to global warming may ultimately lead to an increase in earthquake frequency in these regions.

  26. The Earth isn't flat: The (large) influence of topography on geodetic fault slip imaging.

    NASA Astrophysics Data System (ADS)

    Thompson, T. B.; Meade, B. J.

    2017-12-01

    While earthquakes both occur near and generate steep topography, most geodetic slip inversions assume that the Earth's surface is flat. We have developed a new boundary element tool, Tectosaur, with the capability to study fault and earthquake problems including complex fault system geometries, topography, material property contrasts, and millions of elements. Using Tectosaur, we study the model error induced by neglecting topography in both idealized synthetic fault models and for the cases of the Mw 7.3 Landers and Mw 8.0 Wenchuan earthquakes. Near the steepest topography, we find the use of flat Earth dislocation models may induce errors of more than 100% in the inferred slip magnitude and rake. In particular, neglecting topographic effects leads to an inferred shallow slip deficit. Thus, we propose that the shallow slip deficit observed in several earthquakes may be an artefact resulting from the systematic use of elastic dislocation models assuming a flat Earth. Finally, using this study as an example, we emphasize the dangerous potential for forward model errors to be amplified by an order of magnitude in inverse problems.

  27. Current Fault Management Trends in NASA's Planetary Spacecraft

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.

    2009-01-01

    The key product of this three-day workshop is a NASA White Paper that documents lessons learned from previous missions, recommended best practices, and future opportunities for investments in the fault management domain. This paper summarizes the findings and recommendations that are captured in the White Paper.

  28. Geodetic Network Design and Optimization on the Active Tuzla Fault (Izmir, Turkey) for Disaster Management

    PubMed Central

    Halicioglu, Kerem; Ozener, Haluk

    2008-01-01

    Both seismological and geodynamic research emphasize that the Aegean Region, which comprises the Hellenic Arc, the Greek mainland and Western Turkey is the most seismically active region in Western Eurasia. The convergence of the Eurasian and African lithospheric plates forces a westward motion on the Anatolian plate relative to the Eurasian one. Western Anatolia is a valuable laboratory for Earth Science research because of its complex geological structure. Izmir is a large city in Turkey with a population of about 2.5 million that is at great risk from big earthquakes. Unfortunately, previous geodynamics studies performed in this region are insufficient or cover large areas instead of specific faults. The Tuzla Fault, which is aligned trending NE–SW between the town of Menderes and Cape Doganbey, is an important fault in terms of seismic activity and its proximity to the city of Izmir. This study aims to perform a large scale investigation focusing on the Tuzla Fault and its vicinity for better understanding of the region's tectonics. In order to investigate the crustal deformation along the Tuzla Fault and Izmir Bay, a geodetic network has been designed and optimizations were performed. This paper suggests a schedule for a crustal deformation monitoring study which includes research on the tectonics of the region, network design and optimization strategies, theory and practice of processing. The study is also open for extension in terms of monitoring different types of fault characteristics. A one-dimensional fault model with two parameters – standard strike-slip model of dislocation theory in an elastic half-space – is formulated in order to determine which sites are suitable for the campaign based geodetic GPS measurements. Geodetic results can be used as a background data for disaster management systems. PMID:27873783
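
    The two-parameter model referred to is presumably the standard Savage-Burford form for an elastic half-space, in which the fault-parallel surface velocity is v(x) = (V/pi) arctan(x/D) for deep slip rate V and locking depth D. The sketch below evaluates it with illustrative parameters, not Tuzla Fault results; sites within a few locking depths of the trace, where the velocity gradient is steepest, constrain the parameters best.

        # Hedged sketch of the standard two-parameter elastic half-space
        # strike-slip model (Savage-Burford form): v(x) = (V/pi)*atan(x/D).
        # Slip rate V and locking depth D below are illustrative only.
        import math

        def fault_parallel_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
            return (slip_rate_mm_yr / math.pi) * math.atan(x_km / locking_depth_km)

        for x in (-50, -10, -1, 1, 10, 50):  # distance from the fault trace, km
            v = fault_parallel_velocity(x, slip_rate_mm_yr=10.0, locking_depth_km=12.0)
            print(f"x = {x:+4d} km   v = {v:+5.2f} mm/yr")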

  29. Geodetic Network Design and Optimization on the Active Tuzla Fault (Izmir, Turkey) for Disaster Management.

    PubMed

    Halicioglu, Kerem; Ozener, Haluk

    2008-08-19

    Both seismological and geodynamic research emphasize that the Aegean Region, which comprises the Hellenic Arc, the Greek mainland and Western Turkey is the most seismically active region in Western Eurasia. The convergence of the Eurasian and African lithospheric plates forces a westward motion on the Anatolian plate relative to the Eurasian one. Western Anatolia is a valuable laboratory for Earth Science research because of its complex geological structure. Izmir is a large city in Turkey with a population of about 2.5 million that is at great risk from big earthquakes. Unfortunately, previous geodynamics studies performed in this region are insufficient or cover large areas instead of specific faults. The Tuzla Fault, which is aligned trending NE-SW between the town of Menderes and Cape Doganbey, is an important fault in terms of seismic activity and its proximity to the city of Izmir. This study aims to perform a large scale investigation focusing on the Tuzla Fault and its vicinity for better understanding of the region's tectonics. In order to investigate the crustal deformation along the Tuzla Fault and Izmir Bay, a geodetic network has been designed and optimizations were performed. This paper suggests a schedule for a crustal deformation monitoring study which includes research on the tectonics of the region, network design and optimization strategies, theory and practice of processing. The study is also open for extension in terms of monitoring different types of fault characteristics. A one-dimensional fault model with two parameters - standard strike-slip model of dislocation theory in an elastic half-space - is formulated in order to determine which sites are suitable for the campaign based geodetic GPS measurements. Geodetic results can be used as a background data for disaster management systems.

  30. Automated fault-management in a simulated spaceflight micro-world

    NASA Technical Reports Server (NTRS)

    Lorenz, Bernd; Di Nocera, Francesco; Rottger, Stefan; Parasuraman, Raja

    2002-01-01

    BACKGROUND: As human spaceflight missions extend in duration and distance from Earth, a self-sufficient crew will bear far greater onboard responsibility and authority for mission success. This will increase the need for automated fault management (FM). Human factors issues in the use of such systems include maintenance of cognitive skill, situational awareness (SA), trust in automation, and workload. This study examined the human performance consequences of operator use of intelligent FM support in interaction with an autonomous, space-related, atmospheric control system. METHODS: An expert system representing a model-based reasoning agent supported operators at a low level of automation (LOA) by a computerized fault-finding guide, at a medium LOA by an automated diagnosis and recovery advisory, and at a high LOA by automated diagnosis and recovery implementation, subject to operator approval or veto. Ten percent of the experimental trials involved complete failure of FM support. RESULTS: Benefits of automation were reflected in more accurate diagnoses, shorter fault identification time, and reduced subjective operator workload. Unexpectedly, fault identification times deteriorated more at the medium than at the high LOA during automation failure. Analyses of information sampling behavior showed that offloading operators from recovery implementation during reliable automation enabled operators at high LOA to engage in fault assessment activities. CONCLUSIONS: The potential threat to SA imposed by high-level automation, in which decision advisories are automatically generated, need not inevitably be counteracted by choosing a lower LOA. Instead, freeing operator cognitive resources by automatic implementation of recovery plans at a higher LOA can promote better fault comprehension, so long as the automation interface is designed to support efficient information sampling.

  31. Management Approach for Earth Venture Instrument

    NASA Technical Reports Server (NTRS)

    Hope, Diane L.; Dutta, Sanghamitra

    2013-01-01

    The Earth Venture Instrument (EVI) element of the Earth Venture Program calls for developing instruments for participation on a NASA-arranged spaceflight mission of opportunity to conduct innovative, integrated, hypothesis- or scientific question-driven approaches to pressing Earth system science issues. This paper discusses the EVI element and the management approach being used to manage both an instrument development activity and the host accommodations activity. In particular, the focus is on the approach being used for the first EVI (EVI-1) selected instrument, Tropospheric Emissions: Monitoring of Pollution (TEMPO), which will be hosted on a commercial GEO satellite, and on some of the challenges encountered to date and the corresponding mitigations associated with the management structure for the TEMPO Mission and the architecture of EVI.

  32. Vehicle fault diagnostics and management system

    NASA Astrophysics Data System (ADS)

    Gopal, Jagadeesh; Gowthamsachin

    2017-11-01

    This project applies a kind of advanced automatic identification technology that is more and more widely used in the fields of transportation and logistics. It covers the main functions of vehicle management and vehicle speed limiting and control. The system starts with an authentication process to keep itself secure. Sensors are connected to an STM32 board, which in turn is connected to the car through an Ethernet cable, as Ethernet is capable of sending large amounts of data at high speed. The technology involved clearly shows how a careful combination of software and hardware can produce an extremely cost-effective solution to a problem.

  33. Fault Management Architectures and the Challenges of Providing Software Assurance

    NASA Technical Reports Server (NTRS)

    Savarino, Shirley; Fitz, Rhonda; Fesq, Lorraine; Whitman, Gerek

    2015-01-01

    Fault Management (FM) is focused on safety, the preservation of assets, and maintaining the desired functionality of the system. How FM is implemented varies among missions. Common to most missions is system complexity due to a need to establish a multi-dimensional structure across hardware, software and spacecraft operations. FM is necessary to identify and respond to system faults, mitigate technical risks and ensure operational continuity. Generally, FM architecture, implementation, and software assurance efforts increase with mission complexity. Because FM is a systems engineering discipline with a distributed implementation, providing efficient and effective verification and validation (V&V) is challenging. A breakout session at the 2012 NASA Independent Verification & Validation (IV&V) Annual Workshop titled "V&V of Fault Management: Challenges and Successes" exposed this issue in terms of V&V for a representative set of architectures. NASA's Software Assurance Research Program (SARP) has provided funds to NASA IV&V to extend the work performed at the Workshop session in partnership with NASA's Jet Propulsion Laboratory (JPL). NASA IV&V will extract FM architectures across the IV&V portfolio and evaluate the data set, assess visibility for validation and test, and define software assurance methods that could be applied to the various architectures and designs. This SARP initiative focuses efforts on FM architectures from critical and complex projects within NASA. The identification of particular FM architectures and associated V&V/IV&V techniques provides a data set that can enable improved assurance that a system will adequately detect and respond to adverse conditions. Ultimately, results from this activity will be incorporated into the NASA Fault Management Handbook providing dissemination across NASA, other agencies and the space community. This paper discusses the approach taken to perform the evaluations and preliminary findings from the research.

  34. Fault Management Architectures and the Challenges of Providing Software Assurance

    NASA Technical Reports Server (NTRS)

    Savarino, Shirley; Fitz, Rhonda; Fesq, Lorraine; Whitman, Gerek

    2015-01-01

    Satellite system Fault Management (FM) is focused on safety, the preservation of assets, and maintaining the desired functionality of the system. How FM is implemented varies among missions. Common to most is system complexity due to a need to establish a multi-dimensional structure across hardware, software and operations. This structure is necessary to identify and respond to system faults, mitigate technical risks and ensure operational continuity. These architecture, implementation and software assurance efforts increase with mission complexity. Because FM is a systems engineering discipline with a distributed implementation, providing efficient and effective verification and validation (V&V) is challenging. A breakout session at the 2012 NASA Independent Verification & Validation (IV&V) Annual Workshop titled "V&V of Fault Management: Challenges and Successes" exposed these issues in terms of V&V for a representative set of architectures. NASA IV&V is funded by NASA's Software Assurance Research Program (SARP), in partnership with NASA's Jet Propulsion Laboratory (JPL), to extend the work performed at the Workshop session. NASA IV&V will extract FM architectures across the IV&V portfolio and evaluate the data set for robustness, assess visibility for validation and test, and define software assurance methods that could be applied to the various architectures and designs. This work focuses efforts on FM architectures from critical and complex projects within NASA. The identification of particular FM architectures, visibility, and associated V&V/IV&V techniques provides a data set that can enable higher assurance that a satellite system will adequately detect and respond to adverse conditions. Ultimately, results from this activity will be incorporated into the NASA Fault Management Handbook, providing dissemination across NASA, other agencies and the satellite community. This paper discusses the approach taken to perform the evaluations and preliminary findings from the

  35. V&V of Fault Management: Challenges and Successes

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Costello, Ken; Ohi, Don; Lu, Tiffany; Newhouse, Marilyn

    2013-01-01

    This paper describes the results of a special breakout session of the NASA Independent Verification and Validation (IV&V) Workshop held in the fall of 2012 entitled "V&V of Fault Management: Challenges and Successes." The NASA IV&V Program is in a unique position to interact with projects across all of the NASA development domains. Using this unique opportunity, the IV&V program convened a breakout session to enable IV&V teams to share their challenges and successes with respect to the V&V of Fault Management (FM) architectures and software. The presentations and discussions provided practical examples of pitfalls encountered while performing V&V of FM, including the lack of consistent designs for implementing fault monitors and the fact that FM information is not centralized but scattered among many diverse project artifacts. The discussions also solidified the need for an early commitment to developing FM in parallel with the spacecraft systems, as well as clearly defining FM terminology within a project.

  36. Fault Management Technology Maturation for NASA's Constellation Program

    NASA Technical Reports Server (NTRS)

    Waterman, Robert D.

    2010-01-01

    This slide presentation reviews the maturation of fault management technology in preparation for the Constellation Program. It includes a review of the Space Shuttle Main Engine (SSME) and a discussion of a couple of incidents with the shuttle main engine and tanking that indicated the necessity for predictive maintenance. Included is a review of the planned Ares I-X Ground Diagnostic Prototype (GDP) and further information about detection and isolation of faults using the Testability Engineering and Maintenance System (TEAMS). Another system being readied for use, the Inductive Monitoring System (IMS), detects anomalies: the IMS automatically learns how the system behaves and alerts operations if the current behavior is anomalous. The comparison of STS-83 and STS-107 (i.e., the Columbia accident) is shown as an example of the anomaly detection capabilities.
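
    As a rough illustration of inductive monitoring, the sketch below learns clusters of nominal telemetry vectors and flags new vectors that lie far from every learned cluster. The nearest-centroid formulation, data, and threshold are simplifying assumptions; IMS itself builds min/max bounds over parameter groups.

        # Hedged, simplified illustration of inductive monitoring: learn
        # centroids of nominal telemetry with a tiny k-means, then flag
        # vectors far from every centroid. IMS proper uses min/max bounds.
        import math

        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

        def mean(vectors):
            return [sum(c) / len(vectors) for c in zip(*vectors)] if vectors else None

        def train(nominal, k=2, iters=20):
            centroids = nominal[:k]
            for _ in range(iters):
                groups = [[] for _ in centroids]
                for v in nominal:
                    groups[min(range(k), key=lambda j: dist(v, centroids[j]))].append(v)
                centroids = [mean(g) or c for g, c in zip(groups, centroids)]
            return centroids

        centroids = train([[20.1, 3.2], [20.3, 3.1], [25.0, 4.0], [24.8, 4.1]])
        for sample in ([20.2, 3.15], [31.0, 7.5]):
            d = min(dist(sample, c) for c in centroids)
            print(sample, "ANOMALOUS" if d > 1.0 else "nominal", f"(d={d:.2f})")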

  37. Fault management for the Space Station Freedom control center

    NASA Technical Reports Server (NTRS)

    Clark, Colin; Jowers, Steven; Mcnenny, Robert; Culbert, Chris; Kirby, Sarah; Lauritsen, Janet

    1992-01-01

    This paper describes model-based reasoning fault isolation in complex systems using automated digraph analysis. It discusses the use of the digraph representation as the paradigm for modeling physical systems and a method for executing these failure models to provide real-time failure analysis. It also discusses the generality, ease of development and maintenance, complexity management, and susceptibility to verification and validation of digraph failure models. It specifically describes how a NASA-developed digraph evaluation tool and an automated process working with that tool can identify failures in a monitored system when supplied with one or more fault indications. This approach is well suited to commercial applications of real-time failure analysis in complex systems because it is both powerful and cost effective.

  38. Health management and controls for Earth-to-orbit propulsion systems

    NASA Astrophysics Data System (ADS)

    Bickford, R. L.

    1995-03-01

    Avionics and health management technologies increase the safety and reliability while decreasing the overall cost for Earth-to-orbit (ETO) propulsion systems. New ETO propulsion systems will depend on highly reliable fault tolerant flight avionics, advanced sensing systems and artificial intelligence aided software to ensure critical control, safety and maintenance requirements are met in a cost effective manner. Propulsion avionics consist of the engine controller, actuators, sensors, software and ground support elements. In addition to control and safety functions, these elements perform system monitoring for health management. Health management is enhanced by advanced sensing systems and algorithms which provide automated fault detection and enable adaptive control and/or maintenance approaches. Aerojet is developing advanced fault tolerant rocket engine controllers which provide very high levels of reliability. Smart sensors and software systems which significantly enhance fault coverage and enable automated operations are also under development. Smart sensing systems, such as flight capable plume spectrometers, have reached maturity in ground-based applications and are suitable for bridging to flight. Software to detect failed sensors has reached similar maturity. This paper will discuss fault detection and isolation for advanced rocket engine controllers as well as examples of advanced sensing systems and software which significantly improve component failure detection for engine system safety and health management.
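
    Sensor data validation of the kind described commonly layers range, rate-of-change, and cross-channel consistency checks. The sketch below is a generic, hedged illustration with invented channel names and thresholds, not Aerojet's algorithms.

        # Hedged generic sketch of layered sensor validation: range check,
        # rate-of-change check, then a cross-channel consistency check that
        # disqualifies the channel farthest from the median of valid readings.
        def validate(history, channels, lo, hi, max_step, max_spread):
            flags = {}
            for name, value in channels.items():
                ok = lo <= value <= hi                        # range check
                prev = history.get(name)
                if ok and prev is not None:
                    ok = abs(value - prev) <= max_step        # rate-of-change
                flags[name] = ok
                history[name] = value
            good = [channels[n] for n in channels if flags[n]]
            if good and max(good) - min(good) > max_spread:   # cross-channel
                med = sorted(good)[len(good) // 2]
                worst = max(flags, key=lambda n: abs(channels[n] - med) if flags[n] else -1.0)
                flags[worst] = False
            return flags

        history = {}
        print(validate(history, {"pt1": 101.2, "pt2": 100.9, "pt3": 140.0},
                       lo=0, hi=200, max_step=5, max_spread=3))
        # -> pt3 is flagged invalid even though it passes the range check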

  39. Assurance of Fault Management: Risk-Significant Adverse Condition Awareness

    NASA Technical Reports Server (NTRS)

    Fitz, Rhonda

    2016-01-01

    Fault Management (FM) systems are ranked high in risk-based assessments of criticality within flight software, emphasizing the importance of establishing highly competent domain expertise to provide assurance for NASA projects, especially as spaceflight systems continue to increase in complexity. Insight into specific characteristics of FM architectures seen embedded within safety- and mission-critical software systems analyzed by the NASA Independent Verification & Validation (IV&V) Program has been enhanced with an FM Technical Reference (TR) suite. Benefits are aimed beyond the IV&V community to those that seek ways to efficiently and effectively provide software assurance to reduce the FM risk posture of NASA and other space missions. The identification of particular FM architectures, visibility, and associated IV&V techniques provides a TR suite that enables greater assurance that critical software systems will adequately protect against faults and respond to adverse conditions. The role FM has with regard to overall asset protection of flight software systems is being addressed with the development of an adverse condition (AC) database encompassing flight software vulnerabilities. Identification of potential off-nominal conditions and analysis to determine how a system responds to these conditions are important aspects of hazard analysis and fault management. Understanding what ACs the mission may face, and ensuring they are prevented or addressed, is the responsibility of the assurance team, which necessarily should have insight into ACs beyond those defined by the project itself. Research efforts sponsored by NASA's Office of Safety and Mission Assurance defined terminology, categorized data fields, and designed a baseline repository that centralizes and compiles a comprehensive listing of ACs and correlated data relevant across many NASA missions. This prototype tool helps projects improve analysis by tracking ACs, and allowing queries based on project, mission

  40. Evolution of shuttle avionics redundancy management/fault tolerance

    NASA Technical Reports Server (NTRS)

    Boykin, J. C.; Thibodeau, J. R.; Schneider, H. E.

    1985-01-01

    The challenge of providing redundancy management (RM) and fault tolerance to meet the Shuttle Program requirements of fail operational/fail safe for the avionics systems was complicated by the critical program constraints of weight, cost, and schedule. The basic, and sometimes false, effectiveness of less-than-pure RM designs is addressed. Evolution of the multiple input selection filter (the heart of the RM function) is discussed, with emphasis on the subtle interactions of the flight control system that were found to be potentially catastrophic. Several other general RM development problems are discussed, with particular emphasis on the inertial measurement unit RM, indicative of the complexity of managing that three-string system and its critical interfaces with the guidance and control systems.
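
    A minimal sketch of a multiple input selection filter: with three trusted inputs, select the middle value; track persistent miscompares against the selected value to declare an input failed; with two inputs left, average them. The tolerance and persistence limit are illustrative assumptions, not Shuttle values.

        # Hedged sketch of a multiple input selection filter: mid-value
        # select with three trusted inputs, average with two, and exclusion
        # of an input after persistent miscompares. Limits are illustrative.
        MISCOMPARE_LIMIT = 3
        TOLERANCE = 0.5

        def select(inputs, miscompares):
            good = [n for n in inputs if miscompares[n] < MISCOMPARE_LIMIT]
            values = sorted(inputs[n] for n in good)
            selected = (values[len(values) // 2] if len(values) >= 3
                        else sum(values) / len(values))
            for n in good:                    # track persistent disagreement
                if abs(inputs[n] - selected) > TOLERANCE:
                    miscompares[n] += 1
                else:
                    miscompares[n] = 0
            return selected

        miscompares = {"imu_a": 0, "imu_b": 0, "imu_c": 0}
        for _ in range(4):
            reading = {"imu_a": 1.00, "imu_b": 1.02, "imu_c": 9.90}
            print(select(reading, miscompares), miscompares)
        # imu_c is excluded from selection once it reaches 3 miscompares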

  41. Health management and controls for earth to orbit propulsion systems

    NASA Technical Reports Server (NTRS)

    Bickford, R. L.

    1992-01-01

    Fault detection and isolation for advanced rocket engine controllers are discussed, focusing on advanced sensing systems and software which significantly improve component failure detection for engine safety and health management. Aerojet's Space Transportation Main Engine controller for the National Launch System is the state of the art in fault tolerant engine avionics. Health management systems provide high levels of automated fault coverage and significantly improve vehicle delivered reliability and lower preflight operations costs. Key technologies, including the sensor data validation algorithms and flight capable spectrometers, have been demonstrated in ground applications and are found to be suitable for bridging programs into flight applications.

  42. Disease management programmes in Germany: a fundamental fault.

    PubMed

    Felder, Stefan

    2006-12-01

    In 2001 Germany introduced disease management programmes (DMPs) in order to give sick funds an incentive to improve the treatment of the chronically ill. By 1 March 2005, a total of 3275 programmes had been approved, 2760 for diabetes, 390 for breast cancer and 125 for coronary heart disease, covering roughly 1 million patients. German DMPs show a major fault regarding financial incentives. Sick funds increase their transfers from the risk adjustment scheme when their clients enroll in DMPs. Since this money is a lump sum, sick funds do not necessarily foster treatment of the chronically ill. Similarly, reimbursement of physicians is also not well targeted to the needs of DMPs. Preliminary evidence points to poor performance of German DMPs.

  3. Orion GN&C Fault Management System Verification: Scope And Methodology

    NASA Technical Reports Server (NTRS)

    Brown, Denise; Weiler, David; Flanary, Ronald

    2016-01-01

    In order to ensure long-term ability to meet mission goals and to provide for the safety of the public, ground personnel, and any crew members, nearly all spacecraft include a fault management (FM) system. For a manned vehicle such as Orion, the safety of the crew is of paramount importance. The goal of the Orion Guidance, Navigation and Control (GN&C) fault management system is to detect, isolate, and respond to faults before they can result in harm to the human crew or loss of the spacecraft. Verification of fault management/fault protection capability is challenging due to the large number of possible faults in a complex spacecraft, the inherent unpredictability of faults, the complexity of interactions among the various spacecraft components, and the inability to easily quantify human reactions to failure scenarios. The Orion GN&C Fault Detection, Isolation, and Recovery (FDIR) team has developed a methodology for bounding the scope of FM system verification while ensuring sufficient coverage of the failure space and providing high confidence that the fault management system meets all safety requirements. The methodology utilizes a swarm search algorithm to identify failure cases that can result in catastrophic loss of the crew or the vehicle and rare event sequential Monte Carlo to verify safety and FDIR performance requirements.
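
    As a toy illustration of why rare-event techniques matter when verifying requirements on very small probabilities, the sketch below contrasts naive Monte Carlo with importance sampling on a stand-in failure probability of about 3e-5 (illustrative only; the Orion team's rare event sequential Monte Carlo is a more sophisticated relative of this idea):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# "Failure" = a standard normal score exceeding 4 (true p ~ 3.2e-5).
x = rng.standard_normal(N)
p_naive = (x > 4.0).mean()          # often exactly 0 at this sample size

# Importance sampling: draw near the failure region with proposal N(4, 1)
# and reweight by the density ratio f(y)/g(y) = exp(8 - 4y).
y = rng.normal(4.0, 1.0, N)
w = np.exp(8.0 - 4.0 * y)
p_is = np.mean(np.where(y > 4.0, w, 0.0))

print(f"naive: {p_naive:.2e}  importance-sampled: {p_is:.2e}")
```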

  4. Developing a Fault Management Guidebook for Nasa's Deep Space Robotic Missions

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Jacome, Raquel Weitl

    2015-01-01

    NASA designs and builds systems that achieve incredibly ambitious goals, as evidenced by the Curiosity rover traversing on Mars, the highly complex International Space Station orbiting our Earth, and the compelling plans for capturing, retrieving and redirecting an asteroid into a lunar orbit to create a nearby target to be investigated by astronauts. In order to accomplish these feats, the missions must be imbued with sufficient knowledge and capability not only to realize the goals, but also to identify and respond to off-nominal conditions. Fault Management (FM) is the discipline of establishing how a system will respond to preserve its ability to function even in the presence of faults. In 2012, NASA released a draft FM Handbook in an attempt to coalesce the field by establishing a unified terminology and a common process for designing FM mechanisms. However, FM approaches are very diverse across NASA, especially between the different mission types such as Earth orbiters, launch vehicles, deep space robotic vehicles and human spaceflight missions, and the authors were challenged to capture and represent all of these views. The authors recognized that a necessary precursor step is for each sub-community to codify its FM policies, practices and approaches in individual, focused guidebooks. Then, the sub-communities can look across NASA to better understand the different ways off-nominal conditions are addressed, and to seek commonality or at least an understanding of the multitude of FM approaches. This paper describes the development of the "Deep Space Robotic Fault Management Guidebook," which is intended to be the first of NASA's FM guidebooks. Its purpose is to be a field-guide for FM practitioners working on deep space robotic missions, as well as a planning tool for project managers. Publication of this Deep Space Robotic FM Guidebook is expected in early 2015. The guidebook will be posted on NASA's Engineering Network on the FM Community of Practice

  5. Development of Asset Fault Signatures for Prognostic and Health Management in the Nuclear Industry

    SciT

    Vivek Agarwal; Nancy J. Lybeck; Randall Bickford

    2014-06-01

    Proactive online monitoring in the nuclear industry is being explored using the Electric Power Research Institute’s Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. The FW-PHM Suite is a set of web-based diagnostic and prognostic tools and databases that serves as an integrated health monitoring architecture. The FW-PHM Suite has four main modules: Diagnostic Advisor, Asset Fault Signature (AFS) Database, Remaining Useful Life Advisor, and Remaining Useful Life Database. This paper focuses on development of asset fault signatures to assess the health status of generator step-up transformers and emergency diesel generators in nuclear power plants. Asset fault signatures describe the distinctive features based on technical examinations that can be used to detect a specific fault type. At the most basic level, fault signatures are comprised of an asset type, a fault type, and a set of one or more fault features (symptoms) that are indicative of the specified fault. The AFS Database is populated with asset fault signatures via a content development exercise that is based on the results of intensive technical research and on the knowledge and experience of technical experts. The developed fault signatures capture this knowledge and implement it in a standardized approach, thereby streamlining the diagnostic and prognostic process. This will support the automation of proactive online monitoring techniques in nuclear power plants to diagnose incipient faults, perform proactive maintenance, and estimate the remaining useful life of assets.
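
    The signature structure described (asset type, fault type, feature set) maps naturally onto a small data model. A hedged sketch of how a diagnostic advisor might rank signatures against observed symptoms, with invented catalog entries:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FaultSignature:
    asset_type: str
    fault_type: str
    features: frozenset  # symptoms indicative of the fault

# Hypothetical catalog entries, not actual AFS Database content.
CATALOG = [
    FaultSignature("emergency diesel generator", "bearing wear",
                   frozenset({"vibration high", "oil temp high"})),
    FaultSignature("emergency diesel generator", "injector fouling",
                   frozenset({"exhaust temp spread", "power droop"})),
]

def rank(asset_type, observed):
    """Score each signature by the fraction of its features observed."""
    candidates = [s for s in CATALOG if s.asset_type == asset_type]
    scored = [(len(s.features & observed) / len(s.features), s)
              for s in candidates]
    return sorted(scored, key=lambda t: t[0], reverse=True)

for score, sig in rank("emergency diesel generator",
                       frozenset({"vibration high", "oil temp high"})):
    print(f"{score:.2f}  {sig.fault_type}")
```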

  6. Sharing Earth Observation Data for Health Management

    NASA Astrophysics Data System (ADS)

    Cox, E. L., Jr.

    2015-12-01

    While the global community is struck by pandemics and epidemics from time to time, the ability to fully utilize earth observations and integrate environmental information has been limited until recently. Mature scientific understanding is making new levels of situational awareness possible when and if the relevant data are available and shared in a timely and usable manner. Satellite and other remote sensing tools have been used to observe, monitor, assess and predict weather and water impacts for decades. In the last few years much of this has included a focus on the ability to monitor changes on climate scales that suggest changes in the quantity and quality of ecosystem resources, or the "one-health" approach, where trans-disciplinary links between environmental, animal and vegetative health may provide indications of the best ways to manage susceptibility to infectious disease or outbreaks. But the scale of impacts and the availability of information from earth observing satellites, airborne platforms, health tracking systems and surveillance networks offer new integrated tools. This presentation will describe several recent events, such as Superstorm Sandy in the United States and the Ebola outbreak in Africa, where public health and health infrastructure have been exposed to environmental hazards, and lessons learned from disaster response in the ability to share data have been effective in risk reduction.

  7. Technical Reference Suite Addressing Challenges of Providing Assurance for Fault Management Architectural Design

    NASA Technical Reports Server (NTRS)

    Fitz, Rhonda; Whitman, Gerek

    2016-01-01

    Research into complexities of software systems Fault Management (FM) and how architectural design decisions affect safety, preservation of assets, and maintenance of desired system functionality has coalesced into a technical reference (TR) suite that advances the provision of safety and mission assurance. The NASA Independent Verification and Validation (IV&V) Program, with Software Assurance Research Program support, extracted FM architectures across the IV&V portfolio to evaluate robustness, assess visibility for validation and test, and define software assurance methods applied to the architectures and designs. This investigation spanned IV&V projects with seven different primary developers, a wide range of sizes and complexities, and encompassed Deep Space Robotic, Human Spaceflight, and Earth Orbiter mission FM architectures. The initiative continues with an expansion of the TR suite to include Launch Vehicles, adding the benefit of investigating differences intrinsic to model-based FM architectures and insight into complexities of FM within an Agile software development environment, in order to improve awareness of how nontraditional processes affect FM architectural design and system health management. The identification of particular FM architectures, visibility, and associated IV&V techniques provides a TR suite that enables greater assurance that critical software systems will adequately protect against faults and respond to adverse conditions. Additionally, the role FM has with regard to strengthened security requirements, with potential to advance overall asset protection of flight software systems, is being addressed with the development of an adverse conditions database encompassing flight software vulnerabilities. Capitalizing on the established framework, this TR suite provides assurance capability for a variety of FM architectures and varied development approaches. Research results are being disseminated across NASA, other agencies, and the

  8. Evaluating Fault Management Operations Concepts for Next-Generation Spacecraft: What Eye Movements Tell Us

    NASA Technical Reports Server (NTRS)

    Hayashi, Miwa; Ravinder, Ujwala; McCann, Robert S.; Beutter, Brent; Spirkovska, Lily

    2009-01-01

    Performance enhancements associated with selected forms of automation were quantified in a recent human-in-the-loop evaluation of two candidate operational concepts for fault management on next-generation spacecraft. The baseline concept, called Elsie, featured a full-suite of "soft" fault management interfaces. However, operators were forced to diagnose malfunctions with minimal assistance from the standalone caution and warning system. The other concept, called Besi, incorporated a more capable C&W system with an automated fault diagnosis capability. Results from analyses of participants' eye movements indicate that the greatest empirical benefit of the automation stemmed from eliminating the need for text processing on cluttered, text-rich displays.

  9. Results from the NASA Spacecraft Fault Management Workshop: Cost Drivers for Deep Space Missions

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; McDougal, John; Barley, Bryan; Stephens Karen; Fesq, Lorraine M.

    2010-01-01

    Fault Management, the detection of and response to in-flight anomalies, is a critical aspect of deep-space missions. Fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for five missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that four out of the five missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants across government, industry, and academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing these issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and tools that have not kept pace with the increasing complexity of mission requirements and spacecraft systems. This paper summarizes the

  10. Dynamic earthquake rupture simulation on nonplanar faults embedded in 3D geometrically complex, heterogeneous Earth models

    NASA Astrophysics Data System (ADS)

    Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.

    2014-12-01

    Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated, far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a numerical method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along rough faults; c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th order accurate in the interior and 3rd order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.
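
    A drastically simplified sketch of the summation-by-parts/penalty machinery the abstract describes, applied to 1D advection rather than 3D elasticity, using a second-order operator and classical RK4 in place of the authors' sixth-order stencils and low-storage scheme:

```python
import numpy as np

N, L, c = 201, 1.0, 1.0
h = L / (N - 1)
x = np.linspace(0.0, L, N)

# Second-order SBP first-derivative operator: central in the interior,
# one-sided at the boundaries; the diagonal SBP norm H has H[0,0] = h/2.
D = np.zeros((N, N))
D[0, :2] = [-1.0 / h, 1.0 / h]
D[-1, -2:] = [-1.0 / h, 1.0 / h]
for i in range(1, N - 1):
    D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h

g = lambda t: np.sin(2 * np.pi * (0.0 - c * t))  # inflow boundary data

def rhs(u, t):
    r = -c * (D @ u)
    # SAT penalty weakly enforces the inflow condition; the coefficient
    # sigma = -c is energy stable, and 2/h is 1/H[0,0].
    r[0] += -c * (2.0 / h) * (u[0] - g(t))
    return r

u, t, dt = np.sin(2 * np.pi * x), 0.0, 0.4 * h / c
while t < 0.5:                       # classical RK4 time stepping
    k1 = rhs(u, t)
    k2 = rhs(u + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = rhs(u + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = rhs(u + dt * k3, t + dt)
    u, t = u + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4), t + dt

print("max error:", np.abs(u - np.sin(2 * np.pi * (x - c * t))).max())
```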

  11. Technical Reference Suite Addressing Challenges of Providing Assurance for Fault Management Architectural Design

    NASA Technical Reports Server (NTRS)

    Fitz, Rhonda; Whitman, Gerek

    2016-01-01

    Research into complexities of software systems Fault Management (FM) and how architectural design decisions affect safety, preservation of assets, and maintenance of desired system functionality has coalesced into a technical reference (TR) suite that advances the provision of safety and mission assurance. The NASA Independent Verification and Validation (IVV) Program, with Software Assurance Research Program support, extracted FM architectures across the IVV portfolio to evaluate robustness, assess visibility for validation and test, and define software assurance methods applied to the architectures and designs. This investigation spanned IVV projects with seven different primary developers, a wide range of sizes and complexities, and encompassed Deep Space Robotic, Human Spaceflight, and Earth Orbiter mission FM architectures. The initiative continues with an expansion of the TR suite to include Launch Vehicles, adding the benefit of investigating differences intrinsic to model-based FM architectures and insight into complexities of FM within an Agile software development environment, in order to improve awareness of how nontraditional processes affect FM architectural design and system health management.

  12. Automated Generation of Fault Management Artifacts from a Simple System Model

    NASA Technical Reports Server (NTRS)

    Kennedy, Andrew K.; Day, John C.

    2013-01-01

    Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA) through querying a representation of the system in a SysML model. This work builds off the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and it was restructured in an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior, and depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
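
    To give a flavor of the automation described, the sketch below (invented component names, not the SMAP model) walks a toy system graph and emits FMEA rows by propagating each failure mode to downstream consumers:

```python
import csv, sys

# Toy system model: each component lists failure modes and what it feeds.
MODEL = {
    "battery":   {"modes": ["cell short", "open circuit"], "feeds": ["power bus"]},
    "power bus": {"modes": ["overvoltage"],                "feeds": ["radio", "computer"]},
    "radio":     {"modes": ["tx stuck on"],                "feeds": []},
    "computer":  {"modes": ["watchdog reset loop"],        "feeds": []},
}

def downstream(name):
    """All components transitively fed by `name` (breadth-first)."""
    seen, queue = set(), list(MODEL[name]["feeds"])
    while queue:
        n = queue.pop(0)
        if n not in seen:
            seen.add(n)
            queue.extend(MODEL[n]["feeds"])
    return sorted(seen)

writer = csv.writer(sys.stdout)
writer.writerow(["component", "failure mode", "potential downstream effects"])
for name, data in MODEL.items():
    for mode in data["modes"]:
        writer.writerow([name, mode, "; ".join(downstream(name)) or "local only"])
```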

  13. Waste Management with Earth Observation Technologies

    NASA Astrophysics Data System (ADS)

    Margarit, Gerard; Tabasco, A.

    2010-05-01

    The range of applications where Earth Observation (EO) can be useful has increased notably due to the maturity reached in the adopted technology and techniques. In most cases, EO provides a means to remotely monitor particular variables and parameters with a more efficient usage of the available resources. Typical examples are environmental monitoring (forest, marine, resources…), precision farming, security and surveillance (land, maritime…) and risk/disaster management (subsidence, volcanoes…). In this context, this paper presents a methodology to monitor waste disposal sites with EO. In particular, the explored technology is Interferometric Synthetic Aperture Radar (InSAR), which applies the interferometric concept to SAR images. SAR is an advanced radar concept able to acquire 2D coherent microwave reflectivity images of large scenes (tens of thousands of kilometres) with fine resolution (< 1 m). The main product of InSAR is the Digital Elevation Model (DEM), which provides key information about the three-dimensional configuration of a scene, that is, a height map of the scene. In practice, this represents an alternative way to obtain the same information that in-situ altimetry can provide. In the case of waste management, InSAR has been used to evaluate the potential of EO to monitor the disposed volume over a specific range of time. This activity has been developed in collaboration with the Agència de Resídus de Catalunya (ARC) (the Waste Agency of Catalonia), Spain, in the framework of a pilot project. The motivation comes from a new law promoted by the regional Government that taxes the volume of disposed waste. This law obliges ARC to verify that the real volume matches the figures provided by the waste processing firms so that they cannot commit illegal actions. Right now, this task is performed with in-situ altimetry. But despite the accurate results, this option is completely inefficient and limits the number of polls that
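
    The volume-monitoring step reduces to differencing two InSAR-derived DEMs of the disposal site and integrating over the cell area. A minimal numpy sketch with synthetic elevation grids (assumed 5 m posting and 0.3 m DEM accuracy):

```python
import numpy as np

cell_area = 5.0 * 5.0  # m^2 per DEM cell (assumed 5 m posting)

rng = np.random.default_rng(0)
dem_before = 100.0 + rng.normal(0.0, 0.1, (200, 200))   # baseline survey
dem_after = dem_before.copy()
dem_after[80:120, 80:120] += 3.0                        # 3 m of new waste

dh = dem_after - dem_before
dh[np.abs(dh) < 0.3] = 0.0          # suppress change below DEM accuracy
volume = dh.sum() * cell_area       # m^3 of disposed material

print(f"disposed volume ~ {volume:,.0f} m^3")  # ~ 40*40 cells * 3 m * 25 m^2
```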

  14. Management approach recommendations. Earth Observatory Satellite system definition study (EOS)

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Management analyses and tradeoffs were performed to determine the most cost effective management approach for the Earth Observatory Satellite (EOS) Phase C/D. The basic objectives of the management approach are identified. Some of the subjects considered are as follows: (1) contract startup phase, (2) project management control system, (3) configuration management, (4) quality control and reliability engineering requirements, and (5) the parts procurement program.

  15. Fleet-Wide Prognostic and Health Management Suite: Asset Fault Signature Database

    SciT

    Vivek Agarwal; Nancy J. Lybeck; Randall Bickford

    Proactive online monitoring in the nuclear industry is being explored using the Electric Power Research Institute’s Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. The FW-PHM Suite is a set of web-based diagnostic and prognostic tools and databases that serves as an integrated health monitoring architecture. The FW-PHM Suite has four main modules: (1) Diagnostic Advisor, (2) Asset Fault Signature (AFS) Database, (3) Remaining Useful Life Advisor, and (4) Remaining Useful Life Database. The paper focuses on the AFS Database of the FW-PHM Suite, which is used to catalog asset fault signatures. A fault signature is a structured representation of the information that an expert would use to first detect and then verify the occurrence of a specific type of fault. The fault signatures developed to assess the health status of generator step-up transformers are described in the paper. The developed fault signatures capture this knowledge and implement it in a standardized approach, thereby streamlining the diagnostic and prognostic process. This will support the automation of proactive online monitoring techniques in nuclear power plants to diagnose incipient faults, perform proactive maintenance, and estimate the remaining useful life of assets.

  16. Anisotropy of Earth's D'' layer and stacking faults in the MgSiO3 post-perovskite phase.

    PubMed

    Oganov, Artem R; Martonák, Roman; Laio, Alessandro; Raiteri, Paolo; Parrinello, Michele

    2005-12-22

    The post-perovskite phase of (Mg,Fe)SiO3 is believed to be the main mineral phase of the Earth's lowermost mantle (the D'' layer). Its properties explain numerous geophysical observations associated with this layer: for example, the D'' discontinuity, its topography and seismic anisotropy within the layer. Here we use a novel simulation technique, first-principles metadynamics, to identify a family of low-energy polytypic stacking-fault structures intermediate between the perovskite and post-perovskite phases. Metadynamics trajectories identify plane sliding involving the formation of stacking faults as the most favourable pathway for the phase transition, and as a likely mechanism for plastic deformation of perovskite and post-perovskite. In particular, the predicted slip planes are {010} for perovskite (consistent with experiment) and {110} for post-perovskite (in contrast to the previously expected {010} slip planes). Dominant slip planes define the lattice preferred orientation and elastic anisotropy of the texture. The {110} slip planes in post-perovskite require a much smaller degree of lattice preferred orientation to explain geophysical observations of shear-wave anisotropy in the D'' layer.

  17. Concurrent development of fault management hardware and software in the SSM/PMAD. [Space Station Module/Power Management And Distribution

    NASA Technical Reports Server (NTRS)

    Freeman, Kenneth A.; Walsh, Rick; Weeks, David J.

    1988-01-01

    Space Station issues in fault management are discussed. The system background is described with attention given to design guidelines and power hardware. A contractually developed fault management system, FRAMES, is integrated with the energy management functions, the control switchgear, and the scheduling and operations management functions. The constraints that shaped the FRAMES system and its implementation are considered.

  18. Operations management system advanced automation: Fault detection isolation and recovery prototyping

    NASA Technical Reports Server (NTRS)

    Hanson, Matt

    1990-01-01

    The purpose of this project is to address the global fault detection, isolation and recovery (FDIR) requirements for Operations Management System (OMS) automation within the Space Station Freedom program. This shall be accomplished by developing a selected FDIR prototype for the Space Station Freedom distributed processing systems. The prototype shall be based on advanced automation methodologies in addition to traditional software methods to meet the requirements for automation. A secondary objective is to expand the scope of the prototyping to encompass multiple aspects of station-wide fault management (SWFM) as discussed in OMS requirements documentation.

  19. Operator Performance Evaluation of Fault Management Interfaces for Next-Generation Spacecraft

    NASA Technical Reports Server (NTRS)

    Hayashi, Miwa; Ravinder, Ujwala; Beutter, Brent; McCann, Robert S.; Spirkovska, Lilly; Renema, Fritz

    2008-01-01

    In the cockpit of NASA's next generation of spacecraft, most vehicle commanding will be carried out via electronic interfaces instead of hard cockpit switches. Checklists will also be displayed and completed on electronic procedure viewers rather than on paper. Transitioning to electronic cockpit interfaces opens up opportunities for more automated assistance, including automated root-cause diagnosis capability. The paper reports an empirical study evaluating two potential concepts for fault management interfaces incorporating two different levels of automation. The operator performance benefits produced by automation were assessed. Also, some design recommendations for spacecraft fault management interfaces are discussed.

  20. Breaking down barriers in cooperative fault management: Temporal and functional information displays

    NASA Technical Reports Server (NTRS)

    Potter, Scott S.; Woods, David D.

    1994-01-01

    At the highest level, the fundamental question addressed by this research is how to aid human operators engaged in dynamic fault management. In dynamic fault management there is some underlying dynamic process (an engineered or physiological process referred to as the monitored process - MP) whose state changes over time and whose behavior must be monitored and controlled. In these types of applications (dynamic, real-time systems), a vast array of sensor data is available to provide information on the state of the MP. Faults disturb the MP and diagnosis must be performed in parallel with responses to maintain process integrity and to correct the underlying problem. These situations frequently involve time pressure, multiple interacting goals, high consequences of failure, and multiple interleaved tasks.

  1. Redundancy management for efficient fault recovery in NASA's distributed computing system

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw; Pandya, Mihir; Yau, Kitty

    1991-01-01

    The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding of computational graphs of tasks in the system architecture and reconfiguration of these tasks after a failure has occurred. The computational structure represented by a path and the complete binary tree was considered and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
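
    One classical ingredient of such embeddings can be made concrete: a pipeline of 2^d tasks embeds into a d-dimensional hypercube with every consecutive pair on adjacent nodes via the binary-reflected Gray code. A short sketch:

```python
def gray(i: int) -> int:
    """Binary-reflected Gray code: consecutive codes differ in one bit."""
    return i ^ (i >> 1)

d = 4                                  # 16-node hypercube
path = [gray(i) for i in range(2 ** d)]

# Verify the embedding: each hop in the task pipeline crosses exactly
# one hypercube link (Hamming distance 1 between consecutive nodes).
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(path, path[1:]))
print(path)  # node assignment for tasks 0..15
```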

  2. Nonlinear waves in earth crust faults: application to regular and slow earthquakes

    NASA Astrophysics Data System (ADS)

    Gershenzon, Naum; Bambakidis, Gust

    2015-04-01

    The genesis, development and cessation of regular earthquakes continue to be major problems of modern geophysics. How are earthquakes initiated? What factors determine the rupture velocity, slip velocity, rise time and geometry of rupture? How do accumulated stresses relax after the main shock? These and other questions still need to be answered. In addition, slow slip events have attracted much attention as an additional source for monitoring fault dynamics. Recently discovered phenomena such as deep non-volcanic tremor (NVT), low frequency earthquakes (LFE), very low frequency earthquakes (VLF), and episodic tremor and slip (ETS) have enhanced and complemented our knowledge of fault dynamics. At the same time, these phenomena give rise to new questions about their genesis, properties and relation to regular earthquakes. We have developed a model of macroscopic dry friction which efficiently describes laboratory frictional experiments [1], basic properties of regular earthquakes including post-seismic stress relaxation [3], the occurrence of ambient and triggered NVT [4], and ETS events [5, 6]. Here we will discuss the basics of the model and its geophysical applications. References [1] Gershenzon N.I. & G. Bambakidis (2013) Tribology International, 61, 11-18, http://dx.doi.org/10.1016/j.triboint.2012.11.025 [2] Gershenzon, N.I., G. Bambakidis and T. Skinner (2014) Lubricants 2014, 2, 1-x manuscripts; doi:10.3390/lubricants20x000x; arXiv:1411.1030v2 [3] Gershenzon N.I., Bykov V. G. and Bambakidis G., (2009) Physical Review E 79, 056601 [4] Gershenzon, N. I, G. Bambakidis, (2014a), Bull. Seismol. Soc. Am., 104, 4, doi: 10.1785/0120130234 [5] Gershenzon, N. I., G. Bambakidis, E. Hauser, A. Ghosh, and K. C. Creager (2011), Geophys. Res. Lett., 38, L01309, doi:10.1029/2010GL045225. [6] Gershenzon, N.I. and G. Bambakidis (2014) Bull. Seismol. Soc. Am., (in press); arXiv:1411.1020

  3. An operational, multistate, earth observation data management system

    NASA Technical Reports Server (NTRS)

    Eastwood, L. F., Jr.; Hill, C. T.; Morgan, R. P.; Gohagan, J. K.; Hays, T. R.; Ballard, R. J.; Crnkovich, G. G.; Schaeffer, M. A.

    1977-01-01

    State, local, and regional agencies involved in natural resources management were investigated as potential users of satellite remotely sensed data. This group's needs are assessed and alternative data management systems serving some of those needs are outlined. It is concluded that an operational earth observation data management system will be of most use to these user agencies if it provides a full range of information services -- from raw data acquisition to interpretation and dissemination of final information products.

  4. Geophysical character of the intraplate Wabash Fault System from the Wabash EarthScope FlexArray

    NASA Astrophysics Data System (ADS)

    Conder, J. A.; Zhu, L.; Wood, J. D.

    2017-12-01

    The Wabash Seismic Array was an EarthScope funded FlexArray deployment across the Wabash Fault System. The Wabash system is long known for oil and gas production. The fault system is often characterized as an intraplate seismic zone as it has produced several earthquakes above M4 in the last 50 years and potentially several above M7 in the Holocene. While earthquakes are far less numerous in the Wabash system than in the nearby New Madrid seismic zone, the seismic moment is nearly twice that of New Madrid over the past 50 years. The array consisted of 45 broadband instruments deployed across the axis to study the larger structure and 3 smaller phased arrays of 9 short-period instruments each to get a better sense of the local seismic output of smaller events. First results from the northern phased array indicate that seismicity in the Wabash behaves markedly differently than in New Madrid, with a low b-value around 0.7. Receiver functions show a 50 km thick crust beneath the system, thickening somewhat to the west. A variable-depth, positive-amplitude conversion in the deep crust gives evidence for a rift pillow at the base of the system within a dense lowermost crustal layer. Low Vs and a moderate negative amplitude conversion in the mid crust suggest a possible weak zone that could localize deformation. Shear wave splitting shows fast directions consistent with absolute plate motion across the system. Split times drop in magnitude to 0.5-0.7 seconds within the valley while in the 1-1.5 second range outside the valley. This magnitude decrease suggests a change in mantle signature beneath the fault system, possibly resulting from a small degree of local flow in the asthenosphere either along axis (as may occur with a thinned lithosphere) or by vertical flow (e.g., from delamination or dripping). We are building a 2D tomographic model across the region, relying primarily on teleseismic body waves. The tomography will undoubtedly show variations in crustal structure

  5. On the management and processing of earth resources information

    NASA Technical Reports Server (NTRS)

    Skinner, C. W.; Gonzalez, R. C.

    1973-01-01

    The basic concepts of a recently completed large-scale earth resources information system plan are reported. Attention is focused throughout the paper on the information management and processing requirements. After the development of the principal system concepts, a model system for implementation at the state level is discussed.

  6. An operational, multistate, earth observation data management system

    NASA Technical Reports Server (NTRS)

    Eastwood, L. F., Jr.; Hays, T. R.; Hill, C. T.; Ballard, R. J.; Morgan, R. P.; Crnkovich, G. G.; Gohagan, J. K.; Schaeffer, M. A.

    1977-01-01

    The purpose of this paper is to investigate a group of potential users of satellite remotely sensed data - state, local, and regional agencies involved in natural resources management. We assess this group's needs in five states and outline alternative data management systems to serve some of those needs. We conclude that an operational Earth Observation Data Management System (EODMS) will be of most use to these user agencies if it provides a full range of information services - from raw data acquisition to interpretation and dissemination of final information products.

  7. Dream project: Applications of earth observations to disaster risk management

    NASA Astrophysics Data System (ADS)

    Dyke, G.; Gill, S.; Davies, R.; Betorz, F.; Andalsvik, Y.; Cackler, J.; Dos Santos, W.; Dunlop, K.; Ferreira, I.; Kebe, F.; Lamboglia, E.; Matsubara, Y.; Nikolaidis, V.; Ostoja-Starzewski, S.; Sakita, M.; Verstappen, N.

    2011-01-01

    The field of disaster risk management is relatively new and takes a structured approach to managing uncertainty related to the threat of natural and man-made disasters. Disaster risk management consists primarily of risk assessment and the development of strategies to mitigate disaster risk. This paper will discuss how increasing both Earth observation data and information technology capabilities can contribute to disaster risk management, particularly in Belize. The paper presents the results and recommendations of a project conducted by an international and interdisciplinary team of experts at the 2009 session of the International Space University in NASA Ames Research Center (California, USA). The aim is to explore the combination of current, planned and potential space-aided, airborne, and ground-based Earth observation tools, the emergence of powerful new web-based and mobile data management tools, and how this combination can support and improve the emerging field of disaster risk management. The starting point of the project was the World Bank's Comprehensive Approach to Probabilistic Risk Assessment (CAPRA) program, focused in Central America. This program was used as a test bed to analyze current space technologies used in risk management and develop new strategies and tools to be applied in other regions around the world.

  8. An architecture for automated fault diagnosis. [Space Station Module/Power Management And Distribution

    NASA Technical Reports Server (NTRS)

    Ashworth, Barry R.

    1989-01-01

    A description is given of the SSM/PMAD power system automation testbed, which was developed using a systems engineering approach. The architecture includes a knowledge-based system and has been successfully used in power system management and fault diagnosis. Architectural issues which affect overall system activities and performance are examined. The knowledge-based system is discussed along with its associated automation implications, and interfaces throughout the system are presented.

  9. Failure detection and fault management techniques for flush airdata sensing systems

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.

    1992-01-01

    Methods based on chi-squared analysis are presented for detecting system and individual-port failures in the high-angle-of-attack flush airdata sensing (HI-FADS) system on the NASA F-18 High Alpha Research Vehicle. The HI-FADS hardware is introduced, and the aerodynamic model describes measured pressure in terms of dynamic pressure, angle of attack, angle of sideslip, and static pressure. Chi-squared analysis is described in the presentation of the concept for failure detection and fault management, which includes nominal, iteration, and fault-management modes. A matrix of pressure orifices arranged in concentric circles on the nose of the aircraft provides the measurements that are applied to the regression algorithms. The sensing techniques are applied to F-18 flight data, and two examples are given of the computed angle-of-attack time histories. The failure-detection and fault-management techniques permit the matrix to be multiply redundant, and the chi-squared analysis is shown to be useful in the detection of failures.
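
    A hedged sketch of the chi-squared residual test on a simplified pressure model (pressures linear in dynamic and static pressure for assumed port angles; not the actual HI-FADS aerodynamic model):

```python
import numpy as np
from scipy import stats

theta = np.deg2rad([0, 20, 40, 60, 80])      # assumed port incidence angles
A = np.column_stack([np.cos(theta) ** 2, np.ones_like(theta)])
sigma = 20.0                                  # Pa, assumed sensor noise

rng = np.random.default_rng(2)
qc, p_inf = 5000.0, 101_325.0                # true dynamic/static pressure
p = A @ [qc, p_inf] + rng.normal(0.0, sigma, theta.size)
p[3] += 400.0                                 # inject a port failure

est, *_ = np.linalg.lstsq(A, p, rcond=None)   # fit (qc, p_inf)
resid = (p - A @ est) / sigma
chi2 = (resid ** 2).sum()
dof = theta.size - 2
limit = stats.chi2.ppf(0.999, dof)

print(f"chi2 = {chi2:.1f}  limit = {limit:.1f}")
if chi2 > limit:
    print("system-level failure declared; worst port:", np.argmax(np.abs(resid)))
```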

  10. Transforming Water Management: an Emerging Promise of Integrated Earth Observations

    NASA Astrophysics Data System (ADS)

    Lawford, R. G.

    2011-12-01

    Throughout its history, civilization has relied on technology to facilitate many of its advances. New innovations and technologies have often provided strategic advantages that have led to transformations in institutions, economies and ultimately societies. Observational and information technologies are leading to significant developments in the water sector. After a brief introduction tracing the role of observational technologies in the areas of hydrology and water cycle science, this talk explores the existing and potential contributions of remote sensing data in water resource management around the world. In particular, it outlines the steps being undertaken by the Group on Earth Observations (GEO) and its Water Task to facilitate capacity building efforts in water management using Earth Observations in Asia, Africa, and Latin America and the Caribbean. Success stories on the benefits of using Earth Observations and applying GEO principles are provided. While GEO and its capacity building efforts are contributing to the transformation of water management through interoperability, data sharing, and capacity building, the full potential of these contributions has not been realized because impediments and challenges still remain.

  11. Program on Earth Observation Data Management Systems (EODMS), appendixes

    NASA Technical Reports Server (NTRS)

    Eastwood, L. F., Jr.; Gohagan, J. K.; Hill, C. T.; Morgan, R. P.; Bay, S. M.; Foutch, T. K.; Hays, T. R.; Ballard, R. J.; Makin, K. P.; Power, M. A.

    1976-01-01

    The needs of state, regional, and local agencies involved in natural resources management in Illinois, Iowa, Minnesota, Missouri, and Wisconsin are investigated to determine the design of satellite remotely sensed derivable information products. It is concluded that an operational Earth Observation Data Management System (EODMS) will be most beneficial if it provides a full range of services - from raw data acquisition to interpretation and dissemination of final information products. Included is a cost and performance analysis of alternative processing centers, and an assessment of the impacts of policy, regulation, and government structure on implementing large scale use of remote sensing technology in this community of users.

  12. Depending on Partnerships to Manage NASA's Earth Science Data

    NASA Astrophysics Data System (ADS)

    Behnke, J.; Lindsay, F. E.; Lowe, D. R.

    2015-12-01

    increase in user demand that has occurred over the past 15 years. We will present how EOSDIS relies on partnerships to support the challenges of managing NASA's Earth Science data.

  13. Kwf-Grid workflow management system for Earth science applications

    NASA Astrophysics Data System (ADS)

    Tran, V.; Hluchy, L.

    2009-04-01

    In this paper, we present a workflow management tool for Earth science applications in EGEE. The workflow management tool was originally developed within the K-wf Grid project for GT4 middleware and has many advanced features such as semi-automatic workflow composition, a user-friendly GUI for managing workflows, and knowledge management. In EGEE, we are porting the workflow management tool to gLite middleware for Earth science applications. The K-wf Grid workflow management system was developed within "Knowledge-based Workflow System for Grid Applications" under the 6th Framework Programme. The workflow management system is intended to: - semi-automatically compose a workflow of Grid services, - execute the composed workflow application in a Grid computing environment, - monitor the performance of the Grid infrastructure and the Grid applications, - analyze the resulting monitoring information, - capture the knowledge that is contained in the information by means of intelligent agents, - and finally to reuse the joined knowledge gathered from all participating users in a collaborative way in order to efficiently construct workflows for new Grid applications. K-wf Grid workflow engines can support different types of jobs (e.g. GRAM jobs, web services) in a workflow. A new class of gLite job has been added to the system, which allows the system to manage and execute gLite jobs in the EGEE infrastructure. The GUI has been adapted to the requirements of EGEE users, and a new credential management servlet has been added to the portal. Porting the K-wf Grid workflow management system to gLite allows EGEE users to use the system and benefit from its advanced features. The system is primarily tested and evaluated with applications from ES clusters.

  14. Fault recovery characteristics of the fault tolerant multi-processor

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1990-01-01

    The fault handling performance of the fault tolerant multiprocessor (FTMP) was investigated. Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once every 500 faults, on average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine, or lying, faults. It is pointed out that these weak areas in the FTMP's design increase the probability that, for any hardware fault, a good LRU (line replaceable unit) is mistakenly disabled by the fault management software. It is concluded that fault injection can help detect and analyze the behavior of a system in the ultra-reliable regime. Although fault injection testing cannot be exhaustive, it has been demonstrated that it provides a unique capability to unmask problems and to characterize the behavior of a fault-tolerant system.
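
    The failure mode described, in which a good LRU is disabled because a lying fault splits the evidence, shows up even in a toy pairwise-comparison isolator (invented values; not the FTMP logic):

```python
from collections import Counter

# One exchange round: what each good observer received from each sender.
# The faulty unit F sends A the correct value but sends B garbage.
received = {
    "A": {"B": 10.0, "F": 10.0},
    "B": {"A": 10.0, "F": 99.0},
}
own = {"A": 10.0, "B": 10.0}

suspicion = Counter()
for observer, msgs in received.items():
    for sender, value in msgs.items():
        if abs(value - own[observer]) > 0.5:
            # A disagreement implicates both parties equally.
            suspicion[sender] += 1
            suspicion[observer] += 1

print(suspicion)  # Counter({'F': 1, 'B': 1}) -- the syndrome is a tie,
# so a tie-break can disable the good unit B instead of the lying unit F.
```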

  15. Application of Fault Management Theory to the Quantitative Selection of a Launch Vehicle Abort Trigger Suite

    NASA Technical Reports Server (NTRS)

    Lo, Yunnhon; Johnson, Stephen B.; Breckenridge, Jonathan T.

    2014-01-01

    This paper describes the quantitative application of the theory of System Health Management and its operational subset, Fault Management, to the selection of abort triggers for a human-rated launch vehicle, the United States' National Aeronautics and Space Administration's (NASA) Space Launch System (SLS). The results demonstrate the efficacy of the theory to assess the effectiveness of candidate failure detection and response mechanisms to protect humans from time-critical and severe hazards. The quantitative method was successfully used on the SLS to aid selection of its suite of abort triggers.
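
    The quantitative core of such a selection can be sketched as a probability tradeoff: a trigger pays its way only if the crew-loss risk it removes by catching real failures exceeds the risk it adds through false-positive aborts. A toy calculation with invented numbers (not SLS figures):

```python
# Per-mission probabilities (all values hypothetical).
p_failure = 1e-3          # probability the covered failure occurs
p_loc_uncaught = 0.9      # P(loss of crew) if it occurs undetected
p_loc_abort = 0.1         # P(loss of crew) during an abort
coverage = 0.95           # trigger detects the failure in time
p_false_alarm = 2e-4      # trigger fires with no real failure

risk_without = p_failure * p_loc_uncaught
risk_with = (p_failure * (coverage * p_loc_abort
                          + (1 - coverage) * p_loc_uncaught)
             + p_false_alarm * p_loc_abort)

print(f"P(LOC) without trigger: {risk_without:.2e}")
print(f"P(LOC) with trigger:    {risk_with:.2e}")
print("include trigger" if risk_with < risk_without else "reject trigger")
```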

  16. Application of Fault Management Theory to the Quantitative Selection of a Launch Vehicle Abort Trigger Suite

    NASA Technical Reports Server (NTRS)

    Lo, Yunnhon; Johnson, Stephen B.; Breckenridge, Jonathan T.

    2014-01-01

    This paper describes the quantitative application of the theory of System Health Management and its operational subset, Fault Management, to the selection of Abort Triggers for a human-rated launch vehicle, the United States' National Aeronautics and Space Administration's (NASA) Space Launch System (SLS). The results demonstrate the efficacy of the theory to assess the effectiveness of candidate failure detection and response mechanisms to protect humans from time-critical and severe hazards. The quantitative method was successfully used on the SLS to aid selection of its suite of Abort Triggers.

  17. Reply to comments by Ahmad et al. on: Shah, A. A., 2013. Earthquake geology of Kashmir Basin and its implications for future large earthquakes International Journal of Earth Sciences DOI:10.1007/s00531-013-0874-8 and on Shah, A. A., 2015. Kashmir Basin Fault and its tectonic significance in NW Himalaya, Jammu and Kashmir, India, International Journal of Earth Sciences DOI:10.1007/s00531-015-1183-1

    NASA Astrophysics Data System (ADS)

    Shah, A. A.

    2016-03-01

    Shah (Int J Earth Sci 102:1957-1966, 2013) mapped major unknown faults and fault segments in the Kashmir basin using geomorphological techniques. The major trace of an out-of-sequence thrust fault was named the Kashmir basin fault (KBF) because it runs through the middle of the Kashmir basin, and active movement on it has backtilted and uplifted most of the basin. Ahmad et al. (Int J Earth Sci, 2015) have disputed the existence of the KBF and maintained that the faults identified by Shah (Int J Earth Sci 102:1957-1966, 2013) were already mapped as inferred faults by earlier workers. The early works, however, show a major normal fault, or a minor out-of-sequence reverse fault, and none have shown a major thrust fault.

  18. Orbital debris and near-Earth environmental management: A chronology

    NASA Technical Reports Server (NTRS)

    Portree, David S. F.; Loftus, Joseph P., Jr.

    1993-01-01

    This chronology covers the 32-year history of orbital debris and near-Earth environmental concerns. It tracks near-Earth environmental hazard creation, research, observation, experimentation, management, mitigation, protection, and policy-making, with emphasis on the orbital debris problem. Included are the Project West Ford experiments; Soviet ASAT tests and U.S. Delta upper stage explosions; the Ariane V16 explosion, U.N. treaties pertinent to near-Earth environmental problems, the PARCS tests; space nuclear power issues, the SPS/orbital debris link; Space Shuttle and space station orbital debris issues; the Solwind ASAT test; milestones in theory and modeling the Cosmos 954, Salyut 7, and Skylab reentries; the orbital debris/meteoroid research link; detection system development; orbital debris shielding development; popular culture and orbital debris; Solar Max results; LDEF results; orbital debris issues peculiar to geosynchronous orbit, including reboost policies and the stable plane; seminal papers, reports, and studies; the increasing effects of space activities on astronomy; and growing international awareness of the near-Earth environment.

  19. Development of the self-learning machine for creating models of microprocessor of single-phase earth fault protection devices in networks with isolated neutral voltage above 1000 V

    NASA Astrophysics Data System (ADS)

    Utegulov, B. B.; Utegulov, A. B.; Meiramova, S.

    2018-02-01

    The paper proposes the development of a self-learning machine for creating models of microprocessor-based single-phase earth fault protection devices in networks with an isolated neutral voltage above 1000 V. Such a self-learning machine makes it possible to effectively implement mathematical models that automatically change the settings of single-phase earth fault protection devices.

  20. Application of a Multimedia Service and Resource Management Architecture for Fault Diagnosis

    PubMed Central

    Castro, Alfonso; Sedano, Andrés A.; García, Fco. Javier; Villoslada, Eduardo

    2017-01-01

    Nowadays, the complexity of global video products has substantially increased. They are composed of several associated services whose functionalities need to adapt across heterogeneous networks with different technologies and administrative domains. Each of these domains has different operational procedures; therefore, the comprehensive management of multi-domain services presents serious challenges. This paper discusses an approach to service management linking fault diagnosis system and Business Processes for Telefónica’s global video service. The main contribution of this paper is the proposal of an extended service management architecture based on Multi Agent Systems able to integrate the fault diagnosis with other different service management functionalities. This architecture includes a distributed set of agents able to coordinate their actions under the umbrella of a Shared Knowledge Plane, inferring and sharing their knowledge with semantic techniques and three types of automatic reasoning: heterogeneous, ontology-based and Bayesian reasoning. This proposal has been deployed and validated in a real scenario in the video service offered by Telefónica Latam. PMID:29283398

  1. Application of a Multimedia Service and Resource Management Architecture for Fault Diagnosis.

    PubMed

    Castro, Alfonso; Sedano, Andrés A; García, Fco Javier; Villoslada, Eduardo; Villagrá, Víctor A

    2017-12-28

    Nowadays, the complexity of global video products has substantially increased. They are composed of several associated services whose functionalities need to adapt across heterogeneous networks with different technologies and administrative domains. Each of these domains has different operational procedures; therefore, the comprehensive management of multi-domain services presents serious challenges. This paper discusses an approach to service management linking fault diagnosis system and Business Processes for Telefónica's global video service. The main contribution of this paper is the proposal of an extended service management architecture based on Multi Agent Systems able to integrate the fault diagnosis with other different service management functionalities. This architecture includes a distributed set of agents able to coordinate their actions under the umbrella of a Shared Knowledge Plane, inferring and sharing their knowledge with semantic techniques and three types of automatic reasoning: heterogeneous, ontology-based and Bayesian reasoning. This proposal has been deployed and validated in a real scenario in the video service offered by Telefónica Latam.
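
    Of the three reasoning styles mentioned, the Bayesian one is easiest to make concrete: a naive-Bayes diagnosis over invented fault and symptom probabilities (illustrative only, not Telefónica's knowledge base):

```python
# Prior fault probabilities and P(symptom | fault), all hypothetical.
priors = {"encoder overload": 0.02, "CDN cache miss storm": 0.05,
          "transport link flap": 0.03}
likelihood = {
    "encoder overload":     {"frame drops": 0.9, "high latency": 0.3},
    "CDN cache miss storm": {"frame drops": 0.2, "high latency": 0.8},
    "transport link flap":  {"frame drops": 0.7, "high latency": 0.7},
}
observed = ["frame drops", "high latency"]

# Posterior proportional to prior times the likelihood of each symptom;
# unseen symptoms get a small default probability.
posterior = {}
for fault, prior in priors.items():
    p = prior
    for s in observed:
        p *= likelihood[fault].get(s, 0.01)
    posterior[fault] = p

total = sum(posterior.values())
for fault, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{fault:22s} {p / total:.2f}")
```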

  2. Risk-Significant Adverse Condition Awareness Strengthens Assurance of Fault Management Systems

    NASA Technical Reports Server (NTRS)

    Fitz, Rhonda

    2017-01-01

    As spaceflight systems increase in complexity, Fault Management (FM) systems are ranked high in risk-based assessment of software criticality, emphasizing the importance of establishing highly competent domain expertise to provide assurance. Adverse conditions (ACs) and specific vulnerabilities encountered by safety- and mission-critical software systems have been identified through efforts to reduce the risk posture of software-intensive NASA missions. Acknowledgement of potential off-nominal conditions and analysis to determine software system resiliency are important aspects of hazard analysis and FM. A key component of assuring FM is an assessment of how well software addresses susceptibility to failure through consideration of ACs. Focus on significant risk predicted through experienced analysis conducted at the NASA Independent Verification & Validation (IV&V) Program enables the scoping of effective assurance strategies with regard to overall asset protection of complex spaceflight as well as ground systems. Research efforts sponsored by NASA's Office of Safety and Mission Assurance defined terminology, categorized data fields, and designed a baseline repository that centralizes and compiles a comprehensive listing of ACs and correlated data relevant across many NASA missions. This prototype tool helps projects improve analysis by tracking ACs and allowing queries based on project, mission type, domain/component, causal fault, and other key characteristics. Vulnerability in off-nominal situations, architectural design weaknesses, and unexpected or undesirable system behaviors in reaction to faults are curtailed with the awareness of ACs and risk-significant scenarios modeled for analysts through this database. Integration within the Enterprise Architecture at NASA IV&V enables interfacing with other tools and datasets, technical support, and accessibility across the Agency. This paper discusses the development of an improved workflow process utilizing this

  3. Risk-Significant Adverse Condition Awareness Strengthens Assurance of Fault Management Systems

    NASA Technical Reports Server (NTRS)

    Fitz, Rhonda

    2017-01-01

    As spaceflight systems increase in complexity, Fault Management (FM) systems are ranked high in risk-based assessment of software criticality, emphasizing the importance of establishing highly competent domain expertise to provide assurance. Adverse conditions (ACs) and specific vulnerabilities encountered by safety- and mission-critical software systems have been identified through efforts to reduce the risk posture of software-intensive NASA missions. Acknowledgement of potential off-nominal conditions and analysis to determine software system resiliency are important aspects of hazard analysis and FM. A key component of assuring FM is an assessment of how well software addresses susceptibility to failure through consideration of ACs. Focus on significant risk predicted through experienced analysis conducted at the NASA Independent Verification & Validation (IV&V) Program enables the scoping of effective assurance strategies with regard to overall asset protection of complex spaceflight as well as ground systems. Research efforts sponsored by NASA's Office of Safety and Mission Assurance (OSMA) defined terminology, categorized data fields, and designed a baseline repository that centralizes and compiles a comprehensive listing of ACs and correlated data relevant across many NASA missions. This prototype tool helps projects improve analysis by tracking ACs and allowing queries based on project, mission type, domain/component, causal fault, and other key characteristics. Vulnerability in off-nominal situations, architectural design weaknesses, and unexpected or undesirable system behaviors in reaction to faults are curtailed with the awareness of ACs and risk-significant scenarios modeled for analysts through this database. Integration within the Enterprise Architecture at NASA IV&V enables interfacing with other tools and datasets, technical support, and accessibility across the Agency. This paper discusses the development of an improved workflow process utilizing

  4. The Ural-Herirud transcontinental postcollisional strike-slip fault and its role in the formation of the Earth's crust

    NASA Astrophysics Data System (ADS)

    Leonov, Yu. G.; Volozh, Yu. A.; Antipov, M. P.; Kheraskova, T. N.

    2015-11-01

    The paper considers the morphology, deep structure, and geodynamic features of the Ural-Herirud postorogenic strike-slip fault (UH fault), along which the Moho (the "M") shifts along the entire axial zone of the Ural Orogen, then further to the south across the Scythian-Turan Plate to the Herirud sublatitudinal fault in Afghanistan. The postcollisional character of dextral displacements along the Ural-Herirud fault and its Triassic-Jurassic age are proven. We have estimated the scale of displacements and made an attempt to make a paleoreconstruction, illustrating the relationship between the Variscides of the Urals and the Tien Shan before tectonic displacements. The analysis of new data includes the latest generation of 1: 200000 geological maps and the regional seismic profiling data obtained in the most elevated part of the Urals (from the seismic profile of the Middle Urals in the north to the Uralseis seismic profile in the south), as well as within the sedimentary cover of the Turan Plate, from Mugodzhary to the southern boundaries of the former water area of the Aral Sea. General typomorphic signs of transcontinental strike-slip fault systems are considered and the structural model of the Ural-Herirud postcollisional strike-slip fault is presented.

  5. An expert systems approach to automated fault management in a regenerative life support subsystem

    NASA Technical Reports Server (NTRS)

    Malin, J. T.; Lance, N., Jr.

    1986-01-01

    This paper describes FIXER, a prototype expert system for automated fault management in a regenerative life support subsystem typical of Space Station applications. The development project provided an evaluation of the use of expert systems technology to enhance controller functions in space subsystems. The software development approach permitted evaluation of the effectiveness of direct involvement of the expert in design and development. The approach also permitted intensive observation of the knowledge and methods of the expert. This paper describes the development of the prototype expert system and presents results of the evaluation.

  6. A New Kinematic Model for Polymodal Faulting: Implications for Fault Connectivity

    NASA Astrophysics Data System (ADS)

    Healy, D.; Rizzo, R. E.

    2015-12-01

    Conjugate, or bimodal, fault patterns dominate the geological literature on shear failure. Based on Anderson's (1905) application of the Mohr-Coulomb failure criterion, these patterns have been interpreted from all tectonic regimes, including normal, strike-slip and thrust (reverse) faulting. However, a fundamental limitation of the Mohr-Coulomb failure criterion - and of others that assume faults form parallel to the intermediate principal stress - is that only plane strain can result from slip on the conjugate faults. Deformation in the Earth is widely accepted as being three-dimensional, with truly triaxial stresses and strains. Polymodal faulting, with three or more sets of faults forming and slipping simultaneously, can generate three-dimensional strains from truly triaxial stresses. Laboratory experiments and outcrop studies have verified the occurrence of polymodal fault patterns in nature. The connectivity of polymodal fault networks differs significantly from that of conjugate fault networks, and this presents challenges to our understanding of faulting and an opportunity to improve our understanding of seismic hazards and fluid flow. Polymodal fault patterns will, in general, have more connected nodes in 2D (and more branch lines in 3D) than comparable conjugate (bimodal) patterns. The anisotropy of permeability is therefore expected to be very different in rocks with polymodal fault patterns than in rocks with conjugate fault patterns, and this has implications for the development of hydrocarbon reservoirs, the genesis of ore deposits and the management of aquifers. In this contribution, I assess the published evidence and models for polymodal faulting before presenting a novel kinematic model for general triaxial strain in the brittle field.

  7. Sorption of the Rare Earth Elements and Yttrium (REE-Y) in calcite: the mechanism of a new effective tool in identifying paleoearthquakes on carbonate faults

    NASA Astrophysics Data System (ADS)

    Moraetis, Daniel; Mouslopoulou, Vasiliki; Pratikakis, Alexandros

    2015-04-01

    A new tool for identifying paleoearthquakes on carbonate faults has been successfully tested on two carbonate faults in southern Europe (the Magnola Fault in Italy and the Spili Fault in Greece): the Rare Earth Element and Yttrium (REE-Y) method (Manighetti et al., 2010; Mouslopoulou et al., 2011). The method is based on the property of the calcite in limestone scarps to absorb REE and Y from the soil during its residence beneath the ground surface (e.g., before its exhumation due to earthquakes). Although the method is established, the details of the enrichment mechanism are poorly investigated. Here we use published data together with new information from pot experiments to shed light on the sorption mechanism and the time effectiveness of the REE-Y method. Data from the Magnola and Spili faults show that the average chemical enrichment is ~45% in REE-Y, while the denudation rate of the enriched zones is ~1% higher every 400 years due to exposure of the fault scarp to weathering. They also show that the chemical enrichment is significant even for short periods of residence time (e.g., ~100 years). To better understand the enrichment mechanism, we performed a series of pot experiments in which carbonate tiles extracted from the Spili Fault were buried in soil collected from the hanging-wall of the same fault. We irrigated the pots with artificial rain equivalent to 5 years of rainfall in Crete, at temperatures of 15 °C and 25 °C. We then performed sorption isotherm, kinetic and pH-edge tests for europium (Eu), cerium (Ce) and ytterbium (Yb), which occur in the calcite minerals. The processes of adsorption and precipitation in the batch experiments are simulated with the Mineql software. The pot experiments indicate incorporation of REE and Y into the surface of the carbonate tile that is in contact with the soil. The pH of the leached solution during the rain application ranged from 7.6 to 8.3. Nutrient release like Ca is higher in the leached

  8. Persistent Identifiers in Earth science data management environments

    NASA Astrophysics Data System (ADS)

    Weigel, Tobias; Stockhause, Martina; Lautenschlager, Michael

    2014-05-01

    Globally resolvable Persistent Identifiers (PIDs) that carry additional context information (which can be any form of metadata) are increasingly used by data management infrastructures for fundamental tasks. The notion of a Persistent Identifier is originally an abstract concept that aims to provide identifiers that are quality-controlled and maintained beyond the lifetime of the original issuer, for example through the use of redirection mechanisms. Popular implementations of the PID concept are, for example, the Handle System and the DOI System based on it. These systems also move beyond the simple identification concept by providing facilities that can hold additional context information. In the Earth sciences and beyond, data managers are increasingly attracted to PIDs because of the opportunities these facilities provide; however, long-term viable principles and mechanisms for efficient organization of PIDs and context information are not yet available or well established. In this respect, promising techniques are to type the information that is associated with PIDs and to construct actionable collections of PIDs. There are two main drivers for extended PID usage: Earth science data management middleware use cases and applications geared towards scientific end-users. Motivating scenarios from data management include hierarchical data and metadata management, consistent data tracking and improvements in the accountability of processes. If PIDs are consistently assigned to data objects, context information can be carried over to subsequent data life cycle stages much more easily. This can also ease data migration from one major curation domain to another, e.g. from early dissemination within research communities to formal publication and long-term archival stages, and it can help to document processes across technical and organizational boundaries. For scientific end users, application scenarios include for example more personalized data citation and improvements in the
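
    The two key techniques named in this abstract - typing the information associated with a PID and building actionable PID collections - can be illustrated with a toy in-memory resolver; the real Handle/DOI systems are networked services, and all identifiers and type names below are invented:

      # Toy PID registry: records carry typed values, and a collection is
      # itself a PID whose typed values list its members.
      pid_registry = {}

      class PIDRecord:
          def __init__(self, pid, typed_values):
              self.pid = pid
              self.typed_values = typed_values   # type name -> value
              pid_registry[pid] = self

      def resolve(pid):
          """Resolve a PID to its typed context information."""
          return pid_registry[pid].typed_values

      PIDRecord("hdl:21.T100/data-001",
                {"URL": "https://example.org/dataset.nc",
                 "CHECKSUM": "sha256:0f2a",   # illustrative value
                 "COLLECTION": "hdl:21.T100/coll-01"})
      PIDRecord("hdl:21.T100/coll-01",
                {"KIND": "collection",
                 "MEMBERS": ["hdl:21.T100/data-001"]})

      print(resolve("hdl:21.T100/data-001")["COLLECTION"])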

  9. Design for interaction between humans and intelligent systems during real-time fault management

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Schreckenghost, Debra L.; Thronesbery, Carroll G.

    1992-01-01

    Initial results are reported to provide guidance and assistance for designers of intelligent systems and their human interfaces. The objective is to achieve more effective human-computer interaction (HCI) for real time fault management support systems. Studies of the development of intelligent fault management systems within NASA have resulted in a new perspective of the user. If the user is viewed as one of the subsystems in a heterogeneous, distributed system, system design becomes the design of a flexible architecture for accomplishing system tasks with both human and computer agents. HCI requirements and design should be distinguished from user interface (displays and controls) requirements and design. Effective HCI design for multi-agent systems requires explicit identification of the activities and information that support coordination and communication between agents. The effects of HCI design on overall system design are characterized, and approaches to addressing HCI requirements in system design are identified. The results include definition of (1) guidance based on information-level requirements analysis of HCI, (2) high-level requirements for a design methodology that integrates the HCI perspective into system design, and (3) requirements for embedding HCI design tools into intelligent system development environments.

  10. CONFIG - Adapting qualitative modeling and discrete event simulation for design of fault management systems

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Basham, Bryan D.

    1989-01-01

    CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.
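
    As a flavor of what "qualitative modeling plus discrete-event simulation" means here - and only as a sketch with invented component names, not CONFIG's actual implementation - a faulty mode can be modeled as a mode change whose effects fire after a time delay:

      # Minimal discrete-event loop: mode changes are events; a faulty mode
      # schedules a delayed downstream effect.
      import heapq

      events = []                       # (time, component, new_mode)
      modes = {"pump": "normal", "valve": "normal"}

      def schedule(t, component, mode):
          heapq.heappush(events, (t, component, mode))

      def on_mode_change(t, component, mode):
          modes[component] = mode
          print(f"t={t}: {component} -> {mode}")
          if component == "valve" and mode == "stuck-closed":
              schedule(t + 5, "pump", "cavitating")   # delayed fault effect

      schedule(0, "valve", "stuck-closed")
      while events:
          t, comp, mode = heapq.heappop(events)
          on_mode_change(t, comp, mode)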

  11. Failure mode effect analysis and fault tree analysis as a combined methodology in risk management

    NASA Astrophysics Data System (ADS)

    Wessiani, N. A.; Yoshio, F.

    2018-04-01

    Many studies have reported the implementation of Failure Mode Effect Analysis (FMEA) and Fault Tree Analysis (FTA) as methods in risk management. However, most of these studies choose only one of the two methods in their risk management methodology. Combining the two methods, on the other hand, reduces the drawbacks each method has when implemented separately. This paper aims to combine the methodologies of FMEA and FTA in assessing risk. A case study in a metal company illustrates how this methodology can be implemented. In the case study, the combined methodology assesses the internal risks that occur in the production process. Those internal risks should then be mitigated based on their level of risk.
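
    The two halves of the combined methodology reduce to simple arithmetic: FMEA ranks failure modes by a Risk Priority Number, and FTA propagates basic-event probabilities through logic gates. A sketch, with scale values and probabilities invented for illustration:

      # FMEA: RPN = severity x occurrence x detection (each on a 1-10 scale).
      def rpn(severity, occurrence, detection):
          return severity * occurrence * detection

      # FTA gates for independent basic events.
      def and_gate(*probs):             # all inputs must fail
          p = 1.0
          for q in probs:
              p *= q
          return p

      def or_gate(*probs):              # any single input failing suffices
          p = 1.0
          for q in probs:
              p *= (1.0 - q)
          return 1.0 - p

      print(rpn(8, 3, 4), rpn(5, 6, 2))    # 96 vs 60: first mode ranks higher
      # Top event = (pump A fails AND pump B fails) OR controller fails.
      print(or_gate(and_gate(0.01, 0.01), 0.001))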

  12. Intelligent fault diagnosis and failure management of flight control actuation systems

    NASA Technical Reports Server (NTRS)

    Bonnice, William F.; Baker, Walter

    1988-01-01

    The real-time fault diagnosis and failure management (FDFM) of current operational and experimental dual tandem aircraft flight control system actuators was investigated. Dual tandem actuators were studied because of the active FDFM capability required to manage the redundancy of these actuators. The FDFM methods used on current dual tandem actuators were determined by examining six specific actuators. The FDFM capability on these six actuators was also evaluated. One approach for improving the FDFM capability on dual tandem actuators may be through the application of artificial intelligence (AI) technology. Existing AI approaches and applications of FDFM were examined and evaluated. Based on the general survey of AI FDFM approaches, the potential role of AI technology for real-time actuator FDFM was determined. Finally, FDFM and maintainability improvements for dual tandem actuators were recommended.

  13. Goal-Function Tree Modeling for Systems Engineering and Fault Management

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Breckenridge, Jonathan T.

    2013-01-01

    The draft NASA Fault Management (FM) Handbook (2012) states that Fault Management (FM) is a "part of systems engineering", and that it "demands a system-level perspective" (NASA-HDBK-1002, 7). What, exactly, is the relationship between systems engineering and FM? To NASA, systems engineering (SE) is "the art and science of developing an operable system capable of meeting requirements within often opposed constraints" (NASA/SP-2007-6105, 3). Systems engineering starts with the elucidation and development of requirements, which set the goals that the system is to achieve. To achieve these goals, the systems engineer typically defines functions, and the functions in turn are the basis for design trades to determine the best means to perform the functions. System Health Management (SHM), by contrast, defines "the capabilities of a system that preserve the system's ability to function as intended" (Johnson et al., 2011, 3). Fault Management, in turn, is the operational subset of SHM, which detects current or future failures, and takes operational measures to prevent or respond to these failures. Failure, in turn, is the "unacceptable performance of intended function" (Johnson 2011, 605). Thus the relationship of SE to FM is that SE defines the functions and the design to perform those functions to meet system goals and requirements, while FM detects the inability to perform those functions and takes action. SHM and FM are in essence "the dark side" of SE. For every function to be performed (SE), there is the possibility that it is not successfully performed (SHM); FM defines the means to operationally detect and respond to this lack of success. We can also describe this in terms of goals: for every goal to be achieved, there is the possibility that it is not achieved; FM defines the means to operationally detect and respond to this inability to achieve the goal. This brief description of the relationships between SE, SHM, and FM provides hints to a modeling approach to
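
    The duality described here - every SE-defined function paired with an FM detection and response - can be caricatured in a few lines of Python; all names and thresholds are invented:

      # SE supplies a function intended to achieve a goal; FM wraps it with
      # failure detection ("unacceptable performance of intended function")
      # and an operational response.
      def function_pressurize(state):
          state["tank_pressure"] += 10
          return state

      def goal_met(state):
          return state["tank_pressure"] >= 30

      def fm_response(state):
          state["mode"] = "safe"        # operational action on failure
          return state

      state = {"tank_pressure": 15, "mode": "nominal"}
      state = function_pressurize(state)
      if not goal_met(state):           # 25 < 30: goal not achieved
          state = fm_response(state)
      print(state)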

  14. Fault diagnosis

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision-aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining the psychology literature on how humans perform diagnosis. The diagnosis decision-aiding concept developed based on those requirements takes abnormal sensor readings, as identified by a fault monitor, as input. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about the behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues in presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to
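
    Draphys itself is not reproduced here, but its core move - using a model of physical connections to find fault sources whose downstream propagation explains all abnormal readings - can be sketched as a graph search (component names invented):

      # Candidate sources are components whose reachable set covers every
      # symptom, allowing diagnosis of propagation as the system operates.
      propagates_to = {                  # physical/functional connections
          "engine": ["pump"],
          "pump": ["hydraulic line"],
          "hydraulic line": ["actuator"],
      }

      def downstream(component, graph):
          seen, stack = set(), [component]
          while stack:
              for nxt in graph.get(stack.pop(), []):
                  if nxt not in seen:
                      seen.add(nxt)
                      stack.append(nxt)
          return seen

      def candidate_sources(symptoms, graph):
          return [c for c in graph
                  if symptoms <= downstream(c, graph) | {c}]

      print(candidate_sources({"hydraulic line", "actuator"}, propagates_to))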

  15. Program on Earth Observation Data Management Systems (EODMS)

    NASA Technical Reports Server (NTRS)

    Eastwood, L. F., Jr.; Gohagan, J. K.; Hill, C. T.; Morgan, R. P.; Hays, T. R.; Ballard, R. J.; Crnkovick, G. R.; Schaeffer, M. A.

    1976-01-01

    An assessment was made of the needs of a group of potential users of satellite remotely sensed data (state, regional, and local agencies) involved in natural resources management in five states, and alternative data management systems to satisfy these needs are outlined. Tasks described include: (1) a comprehensive data needs analysis of state and local users; (2) the design of remote sensing-derivable information products that serve priority state and local data needs; (3) a cost and performance analysis of alternative processing centers for producing these products; (4) an assessment of the impacts of policy, regulation and government structure on implementing large-scale use of remote sensing technology in this community of users; and (5) the elaboration of alternative institutional arrangements for operational Earth Observation Data Management Systems (EODMS). It is concluded that an operational EODMS will be of most use to state, regional, and local agencies if it provides a full range of information services -- from raw data acquisition to interpretation and dissemination of final information products.

  16. Product quality management based on CNC machine fault prognostics and diagnosis

    NASA Astrophysics Data System (ADS)

    Kozlov, A. M.; Al-jonid, Kh M.; Kozlov, A. A.; Antar, Sh D.

    2018-03-01

    This paper presents a new fault classification model and an integrated approach to fault diagnosis which involves the combination of ideas from Neuro-Fuzzy Networks (NF), Dynamic Bayesian Networks (DBN) and the Particle Filtering (PF) algorithm on a single platform. In the new model, faults are categorized in two aspects, namely first- and second-degree faults. First-degree faults are instantaneous in nature, and second-degree faults are evolutional and appear as a developing phenomenon which starts from the initial stage, goes through the development stage and finally ends at the mature stage. These categories of faults have a lifetime which is inversely proportional to a machine tool's life according to a modified version of Taylor's equation. For fault diagnosis, this framework consists of two phases: the first focuses on fault prognosis, which is done online, and the second is concerned with fault diagnosis, which depends on both off-line and on-line modules. In the first phase, a neuro-fuzzy predictor is used to decide whether to embark on Condition-Based Maintenance (CBM) or fault diagnosis, based on the severity of a fault. The second phase only comes into action when an evolving fault goes beyond a critical threshold limit, called the CBM limit, and a command is issued for fault diagnosis. During this phase, DBN and PF techniques are used as an intelligent fault diagnosis system to determine the severity, time and location of the fault. The feasibility of this approach was tested in a simulation environment using a CNC machine as a case study, and the results were studied and analyzed.
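
    The hand-off logic between the two phases amounts to comparing a predicted fault severity against escalating thresholds; a sketch with invented threshold values:

      # Online predictor output (e.g. from a neuro-fuzzy model) is dispatched
      # to monitoring, CBM, or full diagnosis depending on severity.
      CBM_LIMIT = 0.7        # above this, condition-based maintenance starts
      DIAG_LIMIT = 0.9       # above this, fault diagnosis is commanded

      def dispatch(severity):
          if severity >= DIAG_LIMIT:
              return "run fault diagnosis (severity, time, location)"
          if severity >= CBM_LIMIT:
              return "schedule condition-based maintenance"
          return "continue online monitoring"

      for s in (0.4, 0.75, 0.95):
          print(s, "->", dispatch(s))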

  17. Risk management of PPP project in the preparation stage based on Fault Tree Analysis

    NASA Astrophysics Data System (ADS)

    Xing, Yuanzhi; Guan, Qiuling

    2017-03-01

    The risk management of PPP (Public Private Partnership) projects can improve the level of risk control between government departments and private investors, so as to enable more beneficial decisions, reduce investment losses and achieve mutual benefit. This paper therefore takes the risks in the PPP project preparation stage as its research object, identifying and confirming four types of risk. Fault tree analysis (FTA) is used to evaluate the risk factors belonging to the different parts and to quantify the degree of influence of each risk on the basis of risk identification. In addition, the order of importance of the risk factors in the PPP project preparation stage is determined by calculating the structural importance of each unit. The results show that the accuracy of government decision-making, the rationality of private investors' fund allocation and the instability of market returns are the main factors generating shared risk in the project.
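
    Structural importance - the ordering criterion used in this paper - counts the fraction of states of the other basic events in which a given event is critical to the top event. A sketch with an invented three-factor fault tree:

      # Birnbaum structural importance for a small fault tree.
      from itertools import product

      def top_event(x):
          # Invented tree: top = risk1 OR (risk2 AND risk3); 1 = occurs.
          return x[0] or (x[1] and x[2])

      def structural_importance(i, n, phi):
          others = [j for j in range(n) if j != i]
          critical = 0
          for state in product((0, 1), repeat=n - 1):
              s = [0] * n
              for j, v in zip(others, state):
                  s[j] = v
              s[i] = 1
              hi = phi(s)
              s[i] = 0
              critical += (hi != phi(s))
          return critical / 2 ** (n - 1)

      for i in range(3):
          print(f"risk {i + 1}: {structural_importance(i, 3, top_event)}")
      # -> risk 1 (0.75) outranks risks 2 and 3 (0.25 each)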

  18. Goal-Function Tree Modeling for Systems Engineering and Fault Management

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Breckenridge, Jonathan T.

    2013-01-01

    This paper describes a new representation that enables rigorous definition and decomposition of both nominal and off-nominal system goals and functions: the Goal-Function Tree (GFT). GFTs extend the concept and process of functional decomposition, utilizing state variables as a key mechanism to ensure physical and logical consistency and completeness of the decomposition of goals (requirements) and functions, and enabling full and complete traceability to the design. The GFT also provides means to define and represent off-nominal goals and functions that are activated when the system's nominal goals are not met. The physical accuracy of the GFT, and its ability to represent both nominal and off-nominal goals, enables the GFT to be used for various analyses of the system, including assessments of the completeness and traceability of system goals and functions, the coverage of fault management failure detections, and the definition of system failure scenarios.

  19. On providing the fault-tolerant operation of information systems based on open content management systems

    NASA Astrophysics Data System (ADS)

    Kratov, Sergey

    2018-01-01

    Modern information systems designed to serve a wide range of users, regardless of their subject area, are increasingly based on Web technologies and are available to users via the Internet. The article discusses the issues of providing fault-tolerant operation of such information systems based on free and open source content management systems. The toolkit available to administrators of such systems is shown, and scenarios for using these tools are described. Options for organizing backups and restoring the operability of systems after failures are suggested. Application of the proposed methods and approaches provides continuous monitoring of the state of the systems, timely response to the emergence of possible problems, and their prompt solution.
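
    One of the scenarios the article alludes to - scheduled database backups plus a liveness probe that triggers restoration - might look like the following sketch; the site URL, database name and paths are placeholders, not any specific CMS's actual tooling:

      import subprocess, time, urllib.request

      SITE = "https://example.org/"

      def backup():
          """Dump the CMS database to a timestamped file (placeholder names)."""
          path = f"/var/backups/cms-{time.strftime('%Y%m%d-%H%M%S')}.sql"
          subprocess.run(["mysqldump", "cms_db", "--result-file", path],
                         check=True)
          return path

      def healthy(url=SITE, timeout=10):
          """Liveness probe: does the site answer with HTTP 200?"""
          try:
              return urllib.request.urlopen(url, timeout=timeout).status == 200
          except OSError:
              return False

      def monitor(restore):
          """Continuous-monitoring step: restore if the probe fails."""
          if not healthy():
              restore()     # e.g. replay the latest dump, restart services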

  20. Model-Based Data Integration and Process Standardization Techniques for Fault Management: A Feasibility Study

    NASA Technical Reports Server (NTRS)

    Haste, Deepak; Ghoshal, Sudipto; Johnson, Stephen B.; Moore, Craig

    2018-01-01

    This paper describes the theory and considerations in the application of model-based techniques to assimilate information from disjoint knowledge sources for performing NASA's Fault Management (FM)-related activities using the TEAMS® toolset. FM consists of the operational mitigation of existing and impending spacecraft failures. NASA's FM directives have both design-phase and operational-phase goals. This paper highlights recent studies by QSI and DST of the capabilities required in the TEAMS® toolset for conducting FM activities with the aim of reducing operating costs, increasing autonomy, and conforming to time schedules. These studies use and extend the analytic capabilities of QSI's TEAMS® toolset to conduct a range of FM activities within a centralized platform.

  1. Protecting Against Faults in JPL Spacecraft

    NASA Technical Reports Server (NTRS)

    Morgan, Paula

    2007-01-01

    A paper discusses techniques for protecting against faults in spacecraft designed and operated by NASA's Jet Propulsion Laboratory (JPL). The paper addresses, more specifically, fault-protection requirements and techniques common to most JPL spacecraft (in contradistinction to unique, mission-specific techniques), standard practices in the implementation of these techniques, and fault-protection software architectures. Common requirements include those to protect onboard command, data-processing, and control computers; protect against loss of Earth/spacecraft radio communication; maintain safe temperatures; and recover from power overloads. The paper describes fault-protection techniques as part of a fault-management strategy that also includes functional redundancy, redundant hardware, and autonomous monitoring of (1) the operational and health statuses of spacecraft components, (2) temperatures inside and outside the spacecraft, and (3) allocation of power. The strategy also provides for preprogrammed automated responses to anomalous conditions. In addition, the software running in almost every JPL spacecraft incorporates a general-purpose "Safe Mode" response algorithm that configures the spacecraft in a lower-power state that is safe and predictable, thereby facilitating diagnosis of more complex faults by a team of human experts on Earth.
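
    The "Safe Mode" idea - shed non-essential loads and fall back to a low-power, predictable state pending ground diagnosis - can be caricatured as follows; all subsystem names and limits are invented for illustration and are not JPL flight software:

      LIMITS = {"battery_v": (24.0, 34.0), "tank_temp_c": (-10.0, 45.0)}
      NONESSENTIAL = ["science_instrument", "high_gain_transmitter"]

      def anomalous(telemetry):
          """Return the telemetry channels that are out of limits."""
          return [k for k, (lo, hi) in LIMITS.items()
                  if not lo <= telemetry[k] <= hi]

      def safe_mode(spacecraft):
          for load in NONESSENTIAL:
              spacecraft[load] = "off"             # shed power loads
          spacecraft["attitude"] = "sun-pointed"   # safe, predictable state
          spacecraft["downlink"] = "low-rate"      # await ground diagnosis
          return spacecraft

      telemetry = {"battery_v": 22.5, "tank_temp_c": 20.0}
      if anomalous(telemetry):                     # battery under-voltage
          print(safe_mode({"science_instrument": "on",
                           "high_gain_transmitter": "on"}))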

  2. On the Possibility of Estimation of the Earth Crust's Properties from the Observations of Electric Field of Electrokinetic Origin, Generated by Tidal Deformation within the Fault Zone

    NASA Astrophysics Data System (ADS)

    Alekseev, D. A.; Gokhberg, M. B.

    2018-05-01

    A 2-D boundary problem formulation in terms of pore pressure in the Biot poroelasticity model is discussed, with application to a vertical-contact model representing a fault zone structure, mechanically excited by a lunar-solar tidal deformation wave. A parametrization of the problem in terms of permeability and Biot's modulus contrasts is proposed, and its numerical solution is obtained for a series of models differing in the values of these parameters. The behavior of pore pressure and its gradient is analyzed, and from these the electric field of electrokinetic origin is calculated. The possibility of estimating the elastic properties and permeability of geological formations from observations of the horizontal and vertical electric field measured inside the medium and at the Earth's surface near the block boundary is discussed.
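
    The governing relations are not reproduced in this record; for orientation only, a standard uncoupled form of the Biot pore-pressure equation with tidal strain forcing, together with the usual streaming-potential relation for the electrokinetic field, reads as follows (the paper's specific boundary formulation may differ):

      \frac{\partial p}{\partial t} = c\,\nabla^2 p - \alpha M \frac{\partial \varepsilon_v}{\partial t},
      \qquad c = \frac{k M}{\mu},
      \qquad \mathbf{E} = -\nabla\phi, \quad \phi \approx C_s\,p, \quad C_s = \frac{\varepsilon_f \zeta}{\mu\,\sigma},

    where p is the pore pressure, \varepsilon_v the tidal volumetric strain, \alpha the Biot-Willis coefficient, M the Biot modulus, k the permeability, \mu the fluid viscosity, and C_s the streaming-potential coupling coefficient (Helmholtz-Smoluchowski form, with fluid permittivity \varepsilon_f, zeta potential \zeta, and bulk conductivity \sigma).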

  3. Earth observation for regional scale environmental and natural resources management

    NASA Astrophysics Data System (ADS)

    Bernknopf, R.; Brookshire, D.; Faulkner, S.; Chivoiu, B.; Bridge, B.; Broadbent, C.

    2013-12-01

    Earth observations (EO) provide critical information to natural resource assessment. Three examples are presented: conserving potable groundwater in intense agricultural regions, maximizing ecosystem service benefits at regional scales from afforestation investment and management, and enabling integrated natural and behavioral sciences for resource management and policy analysis. In each of these cases EO of different resolutions are used in different ways to help in the classification, characterization, and availability of natural resources and ecosystem services. To inform decisions, each example includes a spatiotemporal economic model to optimize the net societal benefits of resource development and exploitation. 1) EO is used for monitoring land use in intensively cultivated agricultural regions. Archival imagery is coupled to a hydrogeological process model to evaluate the tradeoff between agrochemical use and retention of potable groundwater. EO is used to couple individual producers and regional resource managers using information from markets and natural systems to aid in the objective of maximizing agricultural production and maintaining groundwater quality. The contribution of EO is input to a nitrate loading and transport model to estimate the cumulative impact on groundwater at specified distances from specific sites (wells) for 35 Iowa counties and two aquifers. 2) Land use/land cover (LULC) derived from EO is used to compare biological carbon sequestration alternatives and their provisioning of ecosystem services. EO is used to target land attributes that are more or less desirable for enhancing ecosystem services in two parishes in Louisiana. Ecological production functions are coupled with value data to maximize the expected return on investment in carbon sequestration and other ancillary ecosystem services while minimizing the risk. 3) Environmental and natural resources management decisions employ probabilistic estimates of yet-to-find or yet

  4. Mechanisms, Monitoring and Modeling Earth Fissure generation and Fault activation due to subsurface Fluid exploitation (M3EF3): A UNESCO-IGCP project in partnership with the UNESCO-IHP Working Group on Land Subsidence

    NASA Astrophysics Data System (ADS)

    Teatini, P.; Carreon-Freyre, D.; Galloway, D. L.; Ye, S.

    2015-12-01

    Land subsidence due to groundwater extraction was recently mentioned as one of the most urgent threats to sustainable development in the latest UNESCO IHP-VIII (2014-2020) strategic plan. Although advances have been made in understanding, monitoring, and predicting subsidence, the influence of differential vertical compaction, horizontal displacements, and hydrostratigraphic and structural features in groundwater systems on localized near-surface ground ruptures is still poorly understood. The nature of ground failure may range from fissuring, i.e., formation of an open crack, to faulting, i.e., differential offset of the opposite sides of the failure plane. Ground ruptures associated with differential subsidence have been reported from many alluvial basins in semiarid and arid regions, e.g. China, India, Iran, Mexico, Saudi Arabia, Spain, and the United States. These ground ruptures strongly impact urban, industrial, and agricultural infrastructures, and affect socio-economic and cultural development. Leveraging previous collaborations, this year the UNESCO Working Group on Land Subsidence began the scientific cooperative project M3EF3 in collaboration with the UNESCO International Geosciences Programme (IGCP n.641; www.igcp641.org) to improve understanding of the processes involved in ground rupturing associated with the exploitation of subsurface fluids, and to facilitate the transfer of knowledge regarding sustainable groundwater management practices in vulnerable aquifer systems. The project is developing effective tools to help manage geologic risks associated with these types of hazards, and formulating recommendations pertaining to the sustainable use of subsurface fluid resources for urban and agricultural development in susceptible areas. The partnership between the UNESCO IHP and IGCP is ensuring that multiple scientific competencies required to optimally investigate earth fissuring and faulting caused by groundwater withdrawals are being employed.

  5. Goal-Function Tree Modeling for Systems Engineering and Fault Management

    NASA Technical Reports Server (NTRS)

    Patterson, Jonathan D.; Johnson, Stephen B.

    2013-01-01

    The draft NASA Fault Management (FM) Handbook (2012) states that Fault Management (FM) is a "part of systems engineering", and that it "demands a system-level perspective" (NASA-HDBK-1002, 7). What, exactly, is the relationship between systems engineering and FM? To NASA, systems engineering (SE) is "the art and science of developing an operable system capable of meeting requirements within often opposed constraints" (NASA/SP-2007-6105, 3). Systems engineering starts with the elucidation and development of requirements, which set the goals that the system is to achieve. To achieve these goals, the systems engineer typically defines functions, and the functions in turn are the basis for design trades to determine the best means to perform the functions. System Health Management (SHM), by contrast, defines "the capabilities of a system that preserve the system's ability to function as intended" (Johnson et al., 2011, 3). Fault Management, in turn, is the operational subset of SHM, which detects current or future failures, and takes operational measures to prevent or respond to these failures. Failure, in turn, is the "unacceptable performance of intended function" (Johnson 2011, 605). Thus the relationship of SE to FM is that SE defines the functions and the design to perform those functions to meet system goals and requirements, while FM detects the inability to perform those functions and takes action. SHM and FM are in essence "the dark side" of SE. For every function to be performed (SE), there is the possibility that it is not successfully performed (SHM); FM defines the means to operationally detect and respond to this lack of success. We can also describe this in terms of goals: for every goal to be achieved, there is the possibility that it is not achieved; FM defines the means to operationally detect and respond to this inability to achieve the goal. This brief description of the relationships between SE, SHM, and FM provides hints to a modeling approach to

  6. Earth

    2012-01-30

    Behold one of the more detailed images of the Earth yet created. This Blue Marble Earth montage shown above -- created from photographs taken by the Visible/Infrared Imager Radiometer Suite (VIIRS) instrument on board the new Suomi NPP satellite -- shows many stunning details of our home planet. The Suomi NPP satellite was launched last October and renamed last week after Verner Suomi, commonly deemed the father of satellite meteorology. The composite was created from the data collected during four orbits of the robotic satellite taken earlier this month and digitally projected onto the globe. Many features of North America and the Western Hemisphere are particularly visible on a high resolution version of the image. http://photojournal.jpl.nasa.gov/catalog/PIA18033

  7. BioEarth: Envisioning and developing a new regional earth system model to inform natural and agricultural resource management

    DOE PAGES

    Adam, Jennifer C.; Stephens, Jennie C.; Chung, Serena H.; ...

    2014-04-24

    Uncertainties in global change impacts and the complexities associated with the interconnected cycling of nitrogen, carbon, and water present daunting management challenges. Existing models provide detailed information on specific sub-systems (e.g., land, air, water, and economics). An increasing awareness of the unintended consequences of management decisions resulting from the interconnectedness of these sub-systems, however, necessitates coupled regional earth system models (EaSMs). Decision makers' needs and priorities can be integrated into the model design and development processes to enhance decision-making relevance and "usability" of EaSMs. BioEarth is a research initiative currently under development, with a focus on the U.S. Pacific Northwest region, that explores the coupling of multiple stand-alone EaSMs to generate usable information for resource decision-making. Direct engagement between model developers and non-academic stakeholders involved in resource and environmental management decisions throughout the model development process is a critical component of this effort. BioEarth utilizes a bottom-up approach for its land surface model that preserves fine spatial-scale sensitivities and lateral hydrologic connectivity, which makes it unique among many regional EaSMs. Here, we describe the BioEarth initiative and highlight opportunities and challenges associated with coupling multiple stand-alone models to generate usable information for agricultural and natural resource decision-making.

  8. Application of NASA management approach to solve complex problems on earth

    NASA Technical Reports Server (NTRS)

    Potate, J. S.

    1972-01-01

    The application of NASA management approach to solving complex problems on earth is discussed. The management of the Apollo program is presented as an example of effective management techniques. Four key elements of effective management are analyzed. Photographs of the Cape Kennedy launch sites and supporting equipment are included to support the discussions.

  9. Perspective View, Garlock Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    California's Garlock Fault, marking the northwestern boundary of the Mojave Desert, lies at the foot of the mountains, running from the lower right to the top center of this image, which was created with data from NASA's Shuttle Radar Topography Mission (SRTM), flown in February 2000. The data will be used by geologists studying fault dynamics and landforms resulting from active tectonics. These mountains are the southern end of the Sierra Nevada, and the prominent canyon emerging at the lower right is Lone Tree canyon. In the distance, the San Gabriel Mountains cut across from the left side of the image. At their base lies the San Andreas Fault, which meets the Garlock Fault near the left edge at Tejon Pass. The dark linear feature running from lower right to upper left is State Highway 14, leading from the town of Mojave in the distance to Inyokern and the Owens Valley in the north. The lighter parallel lines are dirt roads related to power lines and the Los Angeles Aqueduct, which run along the base of the mountains.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast

  10. Abnormal fault-recovery characteristics of the fault-tolerant multiprocessor uncovered using a new fault-injection methodology

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1991-01-01

    An investigation was made in AIRLAB of the fault-handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault-handling errors detected during fault-injection experiments were characterized. In these fault-injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine, or lying, faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design of and interface between the simplex error-detection hardware and the error-processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replaceable unit (LRU) is mistakenly disabled by the fault management software.
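
    The reported rate - one wrong-unit disable per 500 faults - compounds over the faults handled in a mission; a short illustration of that arithmetic:

      # Probability of at least one good-LRU disable over k handled faults,
      # taking the observed per-fault rate of 1/500 at face value.
      P = 1 / 500
      for k in (10, 100, 500):
          print(k, round(1 - (1 - P) ** k, 3))   # -> 0.02, 0.181, 0.632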

  11. Using GIS in an Earth Sciences Field Course for Quantitative Exploration, Data Management and Digital Mapping

    ERIC Educational Resources Information Center

    Marra, Wouter A.; van de Grint, Liesbeth; Alberti, Koko; Karssenberg, Derek

    2017-01-01

    Field courses are essential for subjects like Earth Sciences, Geography and Ecology. In these topics, GIS is used to manage and analyse spatial data, and offers quantitative methods that are beneficial for fieldwork. This paper presents changes made to a first-year Earth Sciences field course in the French Alps, where new GIS methods were…

  12. Management Approach for NASA's Earth Venture-1 (EV-1) Airborne Science Investigations

    NASA Technical Reports Server (NTRS)

    Guillory, Anthony R.; Denkins, Todd C.; Allen, B. Danette

    2013-01-01

    The Earth System Science Pathfinder (ESSP) Program Office (PO) is responsible for programmatic management of the National Aeronautics and Space Administration's (NASA) Science Mission Directorate's (SMD) Earth Venture (EV) missions. EV is composed of both orbital and suborbital Earth science missions. The first of the Earth Venture missions is EV-1: Principal Investigator-led, temporally-sustained, suborbital (airborne) science investigations cost-capped at $30M each over five years. Traditional orbital procedures, processes and standards used to manage previous ESSP missions, while effective, are disproportionately comprehensive for suborbital missions. Conversely, existing airborne practices are primarily intended for smaller, temporally shorter investigations, and are traditionally managed directly by a program scientist rather than a program office such as ESSP. In 2010, ESSP crafted a management approach for the successful implementation of the EV-1 missions within the constructs of current governance models. NASA Research and Technology Program and Project Management Requirements form the foundation of the approach for EV-1. Additionally, requirements from other existing NASA Procedural Requirements (NPRs), systems engineering guidance and management handbooks were adapted to manage programmatic, technical, schedule and cost elements and risk. As the EV-1 missions near the end of their successful execution and project lifecycle, and the submission deadline for the next mission proposals approaches, the ESSP PO has taken the lessons learned and updated the programmatic management approach for all future Earth Venture Suborbital (EVS) missions, for an even more flexible and streamlined management approach.

  13. Fault Tree Analysis as a Planning and Management Tool: A Case Study

    ERIC Educational Resources Information Center

    Witkin, Belle Ruth

    1977-01-01

    Fault Tree Analysis is an operations research technique used to analyse the most probable modes of failure in a system, in order to redesign or monitor the system more closely and thereby increase its likelihood of success. (Author)

  14. Exploring Best Practices for Research Data Management in Earth Science through Collaborating with University Libraries

    NASA Astrophysics Data System (ADS)

    Wang, T.; Branch, B. D.

    2013-12-01

    Earth Science research data, their data management, informatics processing and data curation are valuable in allowing earth scientists to make new discoveries. But how to actively manage these research assets so that they remain safe, secure, accessible and reusable over the long term is a major challenge. Nowadays, the data deluge makes this challenge even more difficult. To address the growing demand for managing earth science data, the Council on Library and Information Resources (CLIR) partners with the Library and Technology Services (LTS) of Lehigh University and Purdue University Libraries (PUL) in hosting postdoctoral fellows in data curation activity. This inter-disciplinary fellowship program, funded by the Sloan Foundation, innovatively connects university libraries and earth science departments and provides earth science Ph.D.'s opportunities to use their research experience in earth science and the data curation training received during their fellowship to explore best practices for research data management in earth science. In the process of exploring best practices for data curation in earth science, the CLIR Data Curation Fellows have accumulated rich experiences and insights on the data management behaviors and needs of earth scientists. Specifically, Ting Wang, the postdoctoral fellow at Lehigh University, has worked together with the LTS support team for the College of Arts and Sciences, Web Specialists and the High Performance Computing Team to assess and meet the data management needs of researchers at the Department of Earth and Environmental Sciences (EES). By interviewing the faculty members and graduate students at EES, the fellow has identified a variety of data-related challenges in different research fields of earth science, such as climate, ecology, geochemistry, geomorphology, etc. The investigation findings of the fellow also support the LTS in developing campus infrastructure for long-term data management in the sciences. Likewise

  15. Interacting faults

    NASA Astrophysics Data System (ADS)

    Peacock, D. C. P.; Nixon, C. W.; Rotevatn, A.; Sanderson, D. J.; Zuluaga, L. F.

    2017-04-01

    The way that faults interact with each other controls fault geometries, displacements and strains. Faults rarely occur individually but as sets or networks, with the arrangement of these faults producing a variety of different fault interactions. Fault interactions are characterised in terms of the following: 1) Geometry - the spatial arrangement of the faults. Interacting faults may or may not be geometrically linked (i.e. physically connected), when fault planes share an intersection line. 2) Kinematics - the displacement distributions of the interacting faults and whether the displacement directions are parallel, perpendicular or oblique to the intersection line. Interacting faults may or may not be kinematically linked, where the displacements, stresses and strains of one fault influences those of the other. 3) Displacement and strain in the interaction zone - whether the faults have the same or opposite displacement directions, and if extension or contraction dominates in the acute bisector between the faults. 4) Chronology - the relative ages of the faults. This characterisation scheme is used to suggest a classification for interacting faults. Different types of interaction are illustrated using metre-scale faults from the Mesozoic rocks of Somerset and examples from the literature.

  16. Model Meets Data: Challenges and Opportunities to Implement Land Management in Earth System Models

    NASA Astrophysics Data System (ADS)

    Pongratz, J.; Dolman, A. J.; Don, A.; Erb, K. H.; Fuchs, R.; Herold, M.; Jones, C.; Luyssaert, S.; Kuemmerle, T.; Meyfroidt, P.

    2016-12-01

    Land-based demand for food and fibre is projected to increase in the future. In light of global sustainability challenges only part of this increase will be met by expansion of land use into relatively untouched regions. Additional demand will have to be fulfilled by intensification and other adjustments in management of land that already is under agricultural and forestry use. Such land management today occurs on about half of the ice-free land surface, as compared to only about one quarter that has undergone a change in land cover. As the number of studies revealing substantial biogeophysical and biogeochemical effects of land management is increasing, moving beyond land cover change towards including land management has become a key focus for Earth system modeling. However, a basis for prioritizing land management activities for implementation in models is lacking. We lay this basis for prioritization in a collaborative project across the disciplines of Earth system modeling, land system science, and Earth observation. We first assess the status and plans of implementing land management in Earth system and dynamic global vegetation models. A clear trend towards higher complexity of land use representation is visible. We then assess five criteria for prioritizing the implementation of land management activities: (1) spatial extent, (2) evidence for substantial effects on the Earth system, (3) process understanding, (4) possibility to link the management activity to existing concepts and structures of models, (5) availability of data required as model input. While the first three criteria have been assessed by an earlier study for ten common management activities, we review strategies for implementation in models and the availability of required datasets. We can thus evaluate the management activities for their performance in terms of importance for the Earth system, possibility of technical implementation in models, and data availability. This synthesis reveals

  17. Model meets data: Challenges and opportunities to implement land management in Earth System Models

    NASA Astrophysics Data System (ADS)

    Pongratz, Julia; Dolman, Han; Don, Axel; Erb, Karl-Heinz; Fuchs, Richard; Herold, Martin; Jones, Chris; Luyssaert, Sebastiaan; Kuemmerle, Tobias; Meyfroidt, Patrick; Naudts, Kim

    2017-04-01

    Land-based demand for food and fibre is projected to increase in the future. In light of global sustainability challenges only part of this increase will be met by expansion of land use into relatively untouched regions. Additional demand will have to be fulfilled by intensification and other adjustments in management of land that already is under agricultural and forestry use. Such land management today occurs on about half of the ice-free land surface, as compared to only about one quarter that has undergone a change in land cover. As the number of studies revealing substantial biogeophysical and biogeochemical effects of land management is increasing, moving beyond land cover change towards including land management has become a key focus for Earth system modeling. However, a basis for prioritizing land management activities for implementation in models is lacking. We lay this basis for prioritization in a collaborative project across the disciplines of Earth system modeling, land system science, and Earth observation. We first assess the status and plans of implementing land management in Earth system and dynamic global vegetation models. A clear trend towards higher complexity of land use representation is visible. We then assess five criteria for prioritizing the implementation of land management activities: (1) spatial extent, (2) evidence for substantial effects on the Earth system, (3) process understanding, (4) possibility to link the management activity to existing concepts and structures of models, (5) availability of data required as model input. While the first three criteria have been assessed by an earlier study for ten common management activities, we review strategies for implementation in models and the availability of required datasets. We can thus evaluate the management activities for their performance in terms of importance for the Earth system, possibility of technical implementation in models, and data availability. This synthesis reveals

  18. Application of Fault Management Theory to the Quantitative Selection of a Launch Vehicle Abort Trigger Suite

    NASA Technical Reports Server (NTRS)

    Lo, Yunnhon; Johnson, Stephen B.; Breckenridge, Jonathan T.

    2014-01-01

    The theory of System Health Management (SHM) and of its operational subset Fault Management (FM) states that FM is implemented as a "meta" control loop, known as an FM Control Loop (FMCL). The FMCL detects that all or part of a system is now failed, or in the future will fail (that is, cannot be controlled within acceptable limits to achieve its objectives), and takes a control action (a response) to return the system to a controllable state. In terms of control theory, the effectiveness of each FMCL is estimated based on its ability to correctly estimate the system state, and on the speed of its response to the current or impending failure effects. This paper describes how this theory has been successfully applied on the National Aeronautics and Space Administration's (NASA) Space Launch System (SLS) Program to quantitatively estimate the effectiveness of proposed abort triggers so as to select the most effective suite to protect the astronauts from catastrophic failure of the SLS. The premise behind this process is to be able to quantitatively provide the value versus risk trade-off for any given abort trigger, allowing decision makers to make more informed decisions. All current and planned crewed launch vehicles have some form of vehicle health management system integrated with an emergency launch abort system to ensure crew safety. While the design can vary, the underlying principle is the same: detect imminent catastrophic vehicle failure, initiate launch abort, and extract the crew to safety. Abort triggers are the detection mechanisms that identify that a catastrophic launch vehicle failure is occurring or is imminent and cause the initiation of a notification to the crew vehicle that the escape system must be activated. While ensuring that the abort triggers provide this function, designers must also ensure that the abort triggers do not signal that a catastrophic failure is imminent when in fact the launch vehicle can successfully achieve orbit. That is
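
    The quantitative trade the paper describes - each candidate trigger contributes failure-detection coverage but also a false-alarm probability that itself carries abort risk - can be sketched numerically; every number and trigger name below is invented for illustration:

      # Score a trigger suite by net risk reduction: benefit from aborts on
      # real failures minus risk added by false-alarm aborts.
      P_FAIL = 1 / 250                # assumed catastrophic-failure probability
      TRIGGERS = {                    # name: (coverage, false-alarm prob.)
          "chamber_pressure": (0.60, 1e-4),
          "tvc_actuator":     (0.35, 5e-5),
          "nav_divergence":   (0.25, 3e-4),
      }

      def suite_score(names, abort_survival=0.95):
          miss, false_alarm = 1.0, 0.0
          for name in names:
              cov, fa = TRIGGERS[name]
              miss *= 1.0 - cov       # assume independent detections
              false_alarm += fa       # rare events: probabilities add
          benefit = P_FAIL * (1.0 - miss) * abort_survival
          cost = false_alarm * (1.0 - abort_survival)
          return benefit - cost

      print(suite_score(["chamber_pressure", "tvc_actuator"]))
      print(suite_score(TRIGGERS))    # all three triggers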

  19. CEOS Contributions to Informing Energy Management and Policy Decision Making Using Space-Based Earth Observations

    NASA Technical Reports Server (NTRS)

    Eckman, Richard S.

    2009-01-01

    Earth observations are playing an increasingly significant role in informing decision making in the energy sector. In renewable energy applications, space-based observations now routinely augment sparse ground-based observations used as input for renewable energy resource assessment applications. As one of the nine Group on Earth Observations (GEO) societal benefit areas, the enhancement of management and policy decision making in the energy sector is receiving attention in activities conducted by the Committee on Earth Observation Satellites (CEOS). CEOS has become the "space arm" for the implementation of the Global Earth Observation System of Systems (GEOSS) vision. It is directly supporting the space-based, near-term tasks articulated in the GEO three-year work plan. This paper describes a coordinated program of demonstration projects conducted by CEOS member agencies and partners to utilize Earth observations to enhance energy management end-user decision support systems. I discuss the importance of engagement with stakeholders and understanding their decision support needs in successfully increasing the uptake of Earth observation products for societal benefit. Several case studies are presented, demonstrating the importance of providing data sets in formats and units familiar and immediately usable by decision makers. These projects show the utility of Earth observations to enhance renewable energy resource assessment in the developing world, forecast space-weather impacts on the power grid, and improve energy efficiency in the built environment.

  20. Data management and analysis for the Earth System Grid

    NASA Astrophysics Data System (ADS)

    Williams, D. N.; Ananthakrishnan, R.; Bernholdt, D. E.; Bharathi, S.; Brown, D.; Chen, M.; Chervenak, A. L.; Cinquini, L.; Drach, R.; Foster, I. T.; Fox, P.; Hankin, S.; Henson, V. E.; Jones, P.; Middleton, D. E.; Schwidder, J.; Schweitzer, R.; Schuler, R.; Shoshani, A.; Siebenlist, F.; Sim, A.; Strand, W. G.; Wilhelmi, N.; Su, M.

    2008-07-01

    The international climate community is expected to generate hundreds of petabytes of simulation data within the next five to seven years. This data must be accessed and analyzed by thousands of analysts worldwide in order to provide accurate and timely estimates of the likely impact of climate change on physical, biological, and human systems. Climate change is thus not only a scientific challenge of the first order but also a major technological challenge. In order to address this technological challenge, the Earth System Grid Center for Enabling Technologies (ESG-CET) has been established within the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC)-2 program, with support from the offices of Advanced Scientific Computing Research and Biological and Environmental Research. ESG-CET's mission is to provide climate researchers worldwide with access to the data, information, models, analysis tools, and computational capabilities required to make sense of enormous climate simulation datasets. Its specific goals are to (1) make data more useful to climate researchers by developing Grid technology that enhances data usability; (2) meet specific distributed database, data access, and data movement needs of national and international climate projects; (3) provide a universal and secure web-based data access portal for broad multi-model data collections; and (4) provide a wide-range of Grid-enabled climate data analysis tools and diagnostic methods to international climate centers and U.S. government agencies. Building on the successes of the previous Earth System Grid (ESG) project, which has enabled thousands of researchers to access tens of terabytes of data from a small number of ESG sites, ESG-CET is working to integrate a far larger number of distributed data providers, high-bandwidth wide-area networks, and remote computers in a highly collaborative problem-solving environment.

  1. Near-Real-Time Earth Observation Data Supporting Wildfire Management

    NASA Astrophysics Data System (ADS)

    Ambrosia, V. G.; Zajkowski, T.; Quayle, B.

    2013-12-01

    During disaster events, the most critical element needed by responding personnel and management teams is situational intelligence / awareness. During rapidly-evolving events such as wildfires, timely information is critical to save lives, property, and resources. The wildfire management agencies in the US rely heavily on remote sensing information both from airborne platforms and from orbital assets. The ability to readily have information from those systems, not just data, is critical to effective control and damage mitigation. NASA has been collaborating with the USFS to mature and operationalize various asset-information capabilities to effect improved knowledge of fire-prone areas, monitor wildfire events in real-time, assess the effectiveness of fire management strategies, and provide rapid, post-fire assessment for recovery operations. Specific examples of near-real-time remote sensing asset utility include daily MODIS data employed to assess fire potential / wildfire hazard areas and provide national-scale hot-spot detection, airborne thermal sensor data collected during wildfire events to inform management strategies, EO-1 ALI 'pointable' satellite sensor data to assess fire-retardant application effectiveness, and Landsat 8 and other sensor data to derive burn severity indices for post-fire remediation work. Cases in which near-real-time data were used operationally during the previous few fire seasons will be presented.

  2. Definition of Earth Resource Policy and Management Problems in California

    NASA Technical Reports Server (NTRS)

    Churchman, C. W.; Clark, I.

    1971-01-01

    Management planning for the California water survey considers the use of satellite and airplane remote sensing information on water-source, -center, and -sink geographies. A model is developed for estimating the social benefit of water resource information and for identifying the most important types of resource information relevant to regulatory agencies and the private sector.

  3. Run Environment and Data Management for Earth System Models

    NASA Astrophysics Data System (ADS)

    Widmann, H.; Lautenschlager, M.; Fast, I.; Legutke, S.

    2009-04-01

    The Integrating Model and Data Infrastructure (IMDI) developed and maintained by the Model and Data Group (M&D) comprises the Standard Compile Environment (SCE) and the Standard Run Environment (SRE). The IMDI software has a modular design, which allows a suite of model components to be combined and coupled, and the resulting tasks to be executed independently on various platforms. Furthermore, the modular structure enables extension to new model combinations and new platforms. The SRE presented here enables earth system model experiments to be configured and carried out, from model integration through to storage and visualization of data. We focus on recently implemented tasks such as synchronous database filling, graphical monitoring, and automatic generation of metadata in XML form during run time. We also address the capability to run experiments in heterogeneous IT environments with different computing systems for model integration, data processing, and storage. These features are demonstrated for model configurations and platforms used in current or upcoming projects, e.g. MILLENNIUM or IPCC AR5.

  4. Knowledge Acquisition and Management for the NASA Earth Exchange (NEX)

    NASA Astrophysics Data System (ADS)

    Votava, P.; Michaelis, A.; Nemani, R. R.

    2013-12-01

    NASA Earth Exchange (NEX) is a data, computing, and knowledge collaboratory that houses NASA satellite, climate, and ancillary data, where a focused community can come together to share modeling and analysis codes, scientific results, knowledge, and expertise on a centralized platform with access to large supercomputing resources. As more and more projects are executed on NEX, we are increasingly focusing on capturing the knowledge of NEX users and providing mechanisms for sharing it with the community in order to facilitate reuse and accelerate research. There are many possible knowledge contributions to NEX: a wiki entry on the NEX portal contributed by a developer, information extracted from a publication in an automated way, or a workflow captured during code execution on the supercomputing platform. The goal of the NEX knowledge platform is to capture and organize this information and make it easily accessible to the NEX community and beyond. The knowledge acquisition process consists of three main facets - data and metadata, workflows and processes, and web-based information. Once the knowledge is acquired, it is processed in a number of ways, ranging from custom metadata parsers to entity extraction using natural language processing techniques. The processed information is linked with existing taxonomies and aligned with an internal ontology (which heavily reuses a number of external ontologies). This forms a knowledge graph that can then be used to improve users' search query results as well as provide additional analytics capabilities to the NEX system. Such a knowledge graph will be an important building block in creating a dynamic knowledge base for the NEX community, where knowledge is both generated and easily shared.
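
    Knowledge captured this way lends itself to an RDF-based graph. The following is a minimal Python sketch using rdflib, linking a workflow, a dataset, and a publication so a search can traverse their relationships; the namespace, URIs, and property names are invented for illustration and are not the actual NEX ontology.

    ```python
    # pip install rdflib
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    # Hypothetical namespace and terms, for illustration only (not the NEX ontology).
    NEX = Namespace("http://example.org/nex#")

    g = Graph()
    g.bind("nex", NEX)

    workflow = NEX["workflow/ndvi-trend-001"]   # captured during code execution
    dataset = NEX["dataset/modis-ndvi"]         # data/metadata facet
    paper = NEX["publication/example-2013"]     # web-based information facet

    g.add((workflow, RDF.type, NEX.Workflow))
    g.add((dataset, RDF.type, NEX.Dataset))
    g.add((workflow, NEX.consumed, dataset))
    g.add((paper, NEX.describes, workflow))
    g.add((dataset, RDFS.label, Literal("MODIS NDVI time series")))

    # A search can then traverse the graph, e.g. every workflow that used a dataset:
    for wf in g.subjects(NEX.consumed, dataset):
        print(wf)
    ```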

  5. Perspective View, San Andreas Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The prominent linear feature straight down the center of this perspective view is California's famous San Andreas Fault. The image, created with data from NASA's Shuttle Radar Topography Mission (SRTM), will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This segment of the fault lies west of the city of Palmdale, Calif., about 100 kilometers (about 60 miles) northwest of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. Two large mountain ranges are visible, the San Gabriel Mountains on the left and the Tehachapi Mountains in the upper right. Another fault, the Garlock Fault, lies at the base of the Tehachapis; the San Andreas and the Garlock Faults meet in the center distance near the town of Gorman. In the distance, over the Tehachapi Mountains, is California's Central Valley. Along the foothills in the right-hand part of the image is the Antelope Valley, including the Antelope Valley California Poppy Reserve. The data used to create this image were acquired by SRTM aboard the Space Shuttle Endeavour, launched on February 11, 2000.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    SRTM uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994.

  6. NASA Earth Observations Informing Energy Management Decision Making

    NASA Technical Reports Server (NTRS)

    Eckman, Richard; Stackhouse, Paul

    2017-01-01

    The Energy Sector is experiencing increasing impacts from severe weather and shifting climatic trends, as well as facing a changing political climate, adding uncertainty for stakeholders as they make short- and long-term planning investments. Climate changes such as prolonged extreme heat and drought (leading to wildfire spread, for example), sea level rise, and extreme storms are changing the ways that utilities operate. Energy infrastructure located in coastal or flood-prone areas faces inundation risks, such as damage to energy facilities. The use of renewable energy resources is increasing, requiring more information about their intermittency and spatial patterns. In light of these challenges, public and private stakeholders have collaborated to identify potential data sources, tools, and programmatic ideas. For example, utilities across the country are using cutting-edge technology and data to plan for and adapt to these changes. In the Federal Government, NASA has invested in preliminary work to identify needs and opportunities for satellite data in energy sector applications, and the Department of Energy has similarly brought together stakeholders to understand the landscape of climate vulnerability and resilience for utilities and others. However, have these efforts improved community-scale resilience and adaptation efforts? Further, some communities are more vulnerable to climate change and infrastructure impacts than others. This session has two goals. First, panelists seek to share existing and ongoing efforts related to energy management. Second, the session seeks to engage with attendees via group knowledge exchange to connect national energy management efforts to local practice for increased community resilience.

  7. Earth Observatory Satellite system definition study. Report no. 4: Management approach recommendations

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A management approach for the Earth Observatory Satellite (EOS) which will meet the challenge of a constrained cost environment is presented. Areas of consideration are contracting techniques, test philosophy, reliability and quality assurance requirements, commonality options, and documentation and control requirements. The various functional areas which were examined for cost reduction possibilities are identified. The recommended management approach is developed to show the primary and alternative methods.

  8. Intelligent fault management for the Space Station active thermal control system

    NASA Technical Reports Server (NTRS)

    Hill, Tim; Faltisco, Robert M.

    1992-01-01

    The Thermal Advanced Automation Project (TAAP) approach and architecture is described for automating the Space Station Freedom (SSF) Active Thermal Control System (ATCS). The baseline functionality and advanced automation techniques for Fault Detection, Isolation, and Recovery (FDIR) will be compared and contrasted. Advanced automation techniques such as rule-based systems and model-based reasoning should be utilized to efficiently control, monitor, and diagnose this extremely complex physical system. TAAP is developing advanced FDIR software for use on the SSF thermal control system. The goal of TAAP is to join Knowledge-Based System (KBS) technology, using a combination of rules and model-based reasoning, with conventional monitoring and control software in order to maximize autonomy of the ATCS. TAAP's predecessor was NASA's Thermal Expert System (TEXSYS) project which was the first large real-time expert system to use both extensive rules and model-based reasoning to control and perform FDIR on a large, complex physical system. TEXSYS showed that a method is needed for safely and inexpensively testing all possible faults of the ATCS, particularly those potentially damaging to the hardware, in order to develop a fully capable FDIR system. TAAP therefore includes the development of a high-fidelity simulation of the thermal control system. The simulation provides realistic, dynamic ATCS behavior and fault insertion capability for software testing without hardware related risks or expense. In addition, thermal engineers will gain greater confidence in the KBS FDIR software than was possible prior to this kind of simulation testing. The TAAP KBS will initially be a ground-based extension of the baseline ATCS monitoring and control software and could be migrated on-board as additional computation resources are made available.
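
    As a rough illustration of how rule-based and model-based reasoning combine in an FDIR loop of the kind TAAP describes, the sketch below checks a small rule base against sensor values and compares a measurement to a toy model prediction. All sensor names, thresholds, and the model itself are invented for the sketch, not taken from the SSF ATCS design.

    ```python
    # Illustrative FDIR monitor: rules flag known symptoms; a model-based
    # residual check flags deviations the rules do not anticipate.

    def predicted_outlet_temp(inlet_temp: float, flow_rate: float) -> float:
        """Toy physical model of a coolant loop (assumed, not the ATCS model)."""
        heat_load_kw = 5.0
        return inlet_temp + heat_load_kw / max(flow_rate, 1e-6)

    RULES = [
        # (fault name, predicate over the sensor dictionary)
        ("pump_pressure_low", lambda s: s["pump_pressure"] < 150.0),
        ("outlet_temp_high",  lambda s: s["outlet_temp"] > 45.0),
    ]

    def fdir_step(sensors: dict) -> list:
        faults = [name for name, rule in RULES if rule(sensors)]
        # Model-based reasoning: compare measurement to the model prediction.
        residual = sensors["outlet_temp"] - predicted_outlet_temp(
            sensors["inlet_temp"], sensors["flow_rate"])
        if abs(residual) > 3.0:
            faults.append("model_residual_anomaly")
        return faults

    print(fdir_step({"pump_pressure": 140.0, "inlet_temp": 20.0,
                     "flow_rate": 0.5, "outlet_temp": 38.0}))
    # -> ['pump_pressure_low', 'model_residual_anomaly']
    ```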

  9. Expert systems applied to fault isolation and energy storage management, phase 2

    NASA Technical Reports Server (NTRS)

    1987-01-01

    A user's guide for the Fault Isolation and Energy Storage (FIES) II system is provided. Included are a brief discussion of the background and scope of this project, a discussion of basic and advanced operating installation and problem determination procedures for the FIES II system and information on hardware and software design and implementation. A number of appendices are provided including a detailed specification for the microprocessor software, a detailed description of the expert system rule base and a description and listings of the LISP interface software.

  10. Integrating emerging earth science technologies into disaster risk management: an enterprise architecture approach

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Hao, W.; Chettri, S. R.

    2014-12-01

    Disaster risk management has grown to rely on earth observations, multi-source data analysis, numerical modeling, and interagency information sharing. The practice and outcomes of disaster risk management will likely undergo further change as several emerging earth science technologies come of age: mobile devices; location-based services; ubiquitous sensors; drones; small satellites; satellite direct readout; Big Data analytics; cloud computing; Web services for predictive modeling, semantic reconciliation, and collaboration; and many others. Integrating these new technologies well requires developing and adapting them to meet current needs; but also rethinking current practice to draw on new capabilities to reach additional objectives. This requires a holistic view of the disaster risk management enterprise and of the analytical or operational capabilities afforded by these technologies. One helpful tool for this assessment, the GEOSS Architecture for the Use of Remote Sensing Products in Disaster Management and Risk Assessment (Evans & Moe, 2013), considers all phases of the disaster risk management lifecycle for a comprehensive set of natural hazard types, and outlines common clusters of activities and their use of information and computation resources. We are using these architectural views, together with insights from current practice, to highlight effective, interrelated roles for emerging earth science technologies in disaster risk management. These roles may be helpful in creating roadmaps for research and development investment at national and international levels.

  11. Why the 2002 Denali fault rupture propagated onto the Totschunda fault: implications for fault branching and seismic hazards

    Schwartz, David P.; Haeussler, Peter J.; Seitz, Gordon G.; Dawson, Timothy E.

    2012-01-01

    The propagation of the rupture of the Mw7.9 Denali fault earthquake from the central Denali fault onto the Totschunda fault has provided a basis for dynamic models of fault branching in which the angle of the regional or local prestress relative to the orientation of the main fault and branch plays a principal role in determining which fault branch is taken. GeoEarthScope LiDAR and paleoseismic data allow us to map the structure of the Denali-Totschunda fault intersection and evaluate controls of fault branching from a geological perspective. LiDAR data reveal the Denali-Totschunda fault intersection is structurally simple with the two faults directly connected. At the branch point, 227.2 km east of the 2002 epicenter, the 2002 rupture diverges southeast to become the Totschunda fault. We use paleoseismic data to propose that differences in the accumulated strain on each fault segment, which express differences in the elapsed time since the most recent event, were one important control of the branching direction. We suggest that data on event history, slip rate, paleo offsets, fault geometry and structure, and connectivity, especially on high slip rate-short recurrence interval faults, can be used to assess the likelihood of branching and its direction. Analysis of the Denali-Totschunda fault intersection has implications for evaluating the potential for a rupture to propagate across other types of fault intersections and for characterizing sources of future large earthquakes.

  12. Pegasus Workflow Management System: Helping Applications From Earth and Space

    NASA Astrophysics Data System (ADS)

    Mehta, G.; Deelman, E.; Vahi, K.; Silva, F.

    2010-12-01

    Pegasus WMS is a Workflow Management System that can manage large-scale scientific workflows across Grid, local, and Cloud resources simultaneously. Pegasus WMS provides a means for representing the workflow of an application in an abstract XML form, agnostic of the resources available to run it and the location of data and executables. It then compiles these workflows into concrete plans by querying catalogs and farming computations across local and distributed computing resources, as well as emerging commercial and community cloud environments, in an easy and reliable manner. Pegasus WMS optimizes the execution as well as data movement by leveraging existing Grid and cloud technologies via a flexible pluggable interface, and provides advanced features like reusing existing data, automatic cleanup of generated data, and recursive workflows with deferred planning. It also captures all the provenance of the workflow from the planning stage to the execution of the generated data, helping scientists to accurately measure the performance of their workflows as well as investigate data reproducibility issues. Pegasus WMS was initially developed as part of the GriPhyN project to support large-scale high-energy physics and astrophysics experiments. Direct funding from the NSF enabled support for a wide variety of applications from diverse domains including earthquake simulation, bacterial RNA studies, helioseismology, and ocean modeling. Earthquake Simulation: Pegasus WMS was recently used in a large-scale production run in 2009 by the Southern California Earthquake Center to run 192 million loosely coupled tasks and about 2000 tightly coupled MPI-style tasks on national cyberinfrastructure to generate a probabilistic seismic hazard map of the Southern California region. SCEC ran 223 workflows over a period of eight weeks, using on average 4,420 cores, with a peak of 14,540 cores. A total of 192 million files were produced, totaling about 165TB, out of which 11TB of data was saved
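
    The core idea of an abstract workflow that is later compiled into a concrete plan can be sketched in a few lines of Python. This is an illustration of the concept only, not the Pegasus API: Pegasus itself represents workflows in XML (DAX) and resolves logical files through its catalogs.

    ```python
    # Plain-Python illustration: jobs and logical files are declared with no
    # reference to concrete resources; a planner later binds them to a site.
    from dataclasses import dataclass, field

    @dataclass
    class Job:
        name: str
        inputs: list = field(default_factory=list)
        outputs: list = field(default_factory=list)

    @dataclass
    class AbstractWorkflow:
        jobs: list = field(default_factory=list)
        deps: list = field(default_factory=list)   # (parent, child) pairs

        def plan(self, replica_catalog: dict, site: str) -> list:
            """Bind logical file names to physical locations for one site.
            (A real planner would also order jobs topologically via self.deps.)"""
            plan = []
            for job in self.jobs:
                paths = [replica_catalog.get(f, f"{site}:/scratch/{f}")
                         for f in job.inputs]
                plan.append(f"run {job.name} on {site} with inputs {paths}")
            return plan

    wf = AbstractWorkflow()
    extract = Job("extract_seismograms", inputs=["rupture.grid"], outputs=["seis.dat"])
    hazard = Job("hazard_curve", inputs=["seis.dat"], outputs=["curve.png"])
    wf.jobs += [extract, hazard]
    wf.deps.append((extract, hazard))        # hazard depends on extract

    print("\n".join(wf.plan({"rupture.grid": "gsiftp://host/data/rupture.grid"},
                            "hpc-site")))
    ```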

  13. RESOURCESAT-2: a mission for Earth resources management

    NASA Astrophysics Data System (ADS)

    Venkata Rao, M.; Gupta, J. P.; Rattan, Ram; Thyagarajan, K.

    2006-12-01

    The Indian Space Research Organisation (ISRO) established an operational remote sensing satellite system by launching its first satellite, IRS-1A, in 1988, followed by a series of IRS spacecraft. The IRS-1C/1D satellites, with their unique combination of payloads, have taken a lead position in the global remote sensing scenario. Realising the growing user demands for a "multi" level approach in terms of spatial, spectral, temporal, and radiometric resolutions, ISRO identified Resourcesat as a continuity as well as an improved remote sensing satellite. Resourcesat-1 (IRS-P6) was launched in October 2003 using the PSLV launch vehicle and is in operational service. Resourcesat-2 is its follow-on mission, scheduled for launch in 2008. Each Resourcesat satellite carries three electro-optical cameras as its payload: LISS-3, LISS-4, and AWIFS. All three are multi-spectral push-broom scanners with linear array CCDs as detectors. LISS-3 and AWIFS operate in four identical spectral bands in the VIS-NIR-SWIR range, while LISS-4 is a high resolution camera with three spectral bands in the VIS-NIR range. In order to meet the stringent requirements of band-to-band registration and platform stability, several improvements have been incorporated in the mainframe bus configuration, such as wide-field star trackers, precision gyroscopes, and an on-board GPS receiver. Resourcesat data find application in several areas, such as agricultural crop discrimination and monitoring, crop acreage/yield estimation, precision farming, water resources, forest mapping, rural infrastructure development, and disaster management, to name a few. A brief description of the payload cameras, spacecraft bus elements, operational modes, and a few applications is presented.

  14. Earth-Mars Telecommunications and Information Management System (TIMS): Antenna Visibility Determination, Network Simulation, and Management Models

    NASA Technical Reports Server (NTRS)

    Odubiyi, Jide; Kocur, David; Pino, Nino; Chu, Don

    1996-01-01

    This report presents the results of our research on the Earth-Mars Telecommunications and Information Management System (TIMS) network modeling and unattended network operations. The primary focus of our research is to investigate the feasibility of the TIMS architecture, which links the Earth-based Mars Operations Control Center, Science Data Processing Facility, Mars Network Management Center, and the Deep Space Network of antennae to the relay satellites and other communication network elements based in the Mars region. The investigation was enhanced by developing Build 3 of the TIMS network modeling and simulation model. The results of several 'what-if' scenarios are reported, along with descriptions of the upgraded antenna visibility determination software and the unattended network management prototype.

  15. Policy Document on Earth Observation for Urban Planning and Management: State of the Art and Recommendations for Application of Earth Observation in Urban Planning

    NASA Technical Reports Server (NTRS)

    Nichol, Janet; King, Bruce; Xiaoli, Ding; Dowman, Ian; Quattrochi, Dale; Ehlers, Manfred

    2007-01-01

    A policy document on earth observation for urban planning and management resulting from a workshop held in Hong Kong in November 2006 is presented. The aim of the workshop was to provide a forum for researchers and scientists specializing in earth observation to interact with practitioners working in different aspects of city planning, in a complex and dynamic city, Hong Kong. A summary of the current state of the art, limitations, and recommendations for the use of earth observation in urban areas is presented here as a policy document.

  16. Fault finder

    DOEpatents

    Bunch, Richard H.

    1986-01-01

    A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
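
    The distance calculation behind such two-ended fault location can be illustrated as follows, assuming synchronized clocks at the master and remote units and a known propagation speed; the numbers are invented, and the patent's actual signal processing is more involved.

    ```python
    # Two-ended fault location from the difference in arrival times at the
    # master and remote units, both referenced to a synchronized clock.

    def fault_distance_km(line_length_km: float, t_master_s: float,
                          t_remote_s: float, wave_speed_km_s: float = 2.9e5) -> float:
        """Distance from the master unit to the fault.

        If the fault is x km from the master, the disturbance arrives at
        t_master = t0 + x/v and t_remote = t0 + (L - x)/v, so
        x = (L + v * (t_master - t_remote)) / 2.
        """
        return (line_length_km + wave_speed_km_s * (t_master_s - t_remote_s)) / 2.0

    # Example: a fault 60 km from the master on a 100 km line.
    t0, v = 0.0, 2.9e5
    print(fault_distance_km(100.0, t0 + 60.0 / v, t0 + 40.0 / v))  # ~60.0
    ```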

  17. Low cost management of replicated data in fault-tolerant distributed systems

    NASA Technical Reports Server (NTRS)

    Joseph, Thomas A.; Birman, Kenneth P.

    1990-01-01

    Many distributed systems replicate data for fault tolerance or availability. In such systems, a logical update on a data item results in a physical update on a number of copies. The synchronization and communication required to keep the copies of replicated data consistent introduce a delay when operations are performed. A technique is described that relaxes the usual degree of synchronization, permitting replicated data items to be updated concurrently with other operations, while at the same time ensuring that correctness is not violated. The additional concurrency thus obtained results in better response time when performing operations on replicated data. How this technique performs in conjunction with a roll-back and a roll-forward failure recovery mechanism is also discussed.
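
    A loose Python sketch of the relaxed-synchronization idea follows: a logical update is acknowledged immediately, and each replica applies buffered updates later in a fixed sequence order so that all copies converge. This illustrates the general technique only, not the paper's specific protocol or its roll-back/roll-forward recovery mechanisms.

    ```python
    # Deferred, ordered application of replicated updates: the caller is not
    # blocked on synchronization, yet replicas converge to the same state.
    from collections import deque

    class Replica:
        def __init__(self):
            self.data = {}
            self.pending = deque()   # updates received but not yet applied
            self.applied_seq = 0

        def receive(self, seq: int, key: str, value):
            self.pending.append((seq, key, value))

        def apply_pending(self):
            # Apply in sequence order so every replica reaches the same state.
            for seq, key, value in sorted(self.pending):
                if seq == self.applied_seq + 1:
                    self.data[key] = value
                    self.applied_seq = seq
            self.pending = deque((s, k, v) for s, k, v in self.pending
                                 if s > self.applied_seq)

    replicas = [Replica(), Replica()]

    def logical_update(seq: int, key: str, value):
        for r in replicas:
            r.receive(seq, key, value)    # returns immediately; no global lock

    logical_update(1, "x", 42)
    logical_update(2, "x", 43)
    for r in replicas:
        r.apply_pending()                 # deferred, concurrent with other work
    print(replicas[0].data, replicas[1].data)   # both converge to {'x': 43}
    ```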

  18. Perspective View, San Andreas Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The prominent linear feature straight down the center of this perspective view is the San Andreas Fault in an image created with data from NASA's Shuttle Radar Topography Mission (SRTM), which will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This segment of the fault lies west of the city of Palmdale, California, about 100 kilometers (about 60 miles) northwest of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. This area is at the junction of two large mountain ranges, the San Gabriel Mountains on the left and the Tehachapi Mountains on the right. Quail Lake Reservoir sits in the topographic depression created by past movement along the fault. Interstate 5 is the prominent linear feature starting at the left edge of the image and continuing into the fault zone, passing eventually over Tejon Pass into the Central Valley, visible at the upper left.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994.

  19. Increasing the Use of Earth Science Data and Models in Air Quality Management.

    PubMed

    Milford, Jana B; Knight, Daniel

    2017-04-01

    In 2010, the U.S. National Aeronautics and Space Administration (NASA) initiated the Air Quality Applied Science Team (AQAST) as a 5-year, $17.5-million award with 19 principal investigators. AQAST aims to increase the use of Earth science products in air quality-related research and to help meet air quality managers' information needs. We conducted a Web-based survey and a limited number of follow-up interviews to investigate federal, state, tribal, and local air quality managers' perspectives on usefulness of Earth science data and models, and on the impact AQAST has had. The air quality managers we surveyed identified meeting the National Ambient Air Quality Standards for ozone and particulate matter, emissions from mobile sources, and interstate air pollution transport as top challenges in need of improved information. Most survey respondents viewed inadequate coverage or frequency of satellite observations, data uncertainty, and lack of staff time or resources as barriers to increased use of satellite data by their organizations. Managers who have been involved with AQAST indicated that the program has helped build awareness of NASA Earth science products, and assisted their organizations with retrieval and interpretation of satellite data and with application of global chemistry and climate models. AQAST has also helped build a network between researchers and air quality managers with potential for further collaborations. NASA's Air Quality Applied Science Team (AQAST) aims to increase the use of satellite data and global chemistry and climate models for air quality management purposes, by supporting research and tool development projects of interest to both groups. Our survey and interviews of air quality managers indicate they found value in many AQAST projects and particularly appreciated the connections to the research community that the program facilitated. Managers expressed interest in receiving continued support for their organizations' use of

  20. An Information Architect's View of Earth Observations for Disaster Risk Management

    NASA Astrophysics Data System (ADS)

    Moe, K.; Evans, J. D.; Cappelaere, P. G.; Frye, S. W.; Mandl, D.; Dobbs, K. E.

    2014-12-01

    Satellite observations play a significant role in supporting disaster response and risk management; however, data complexity is a barrier to broader use, especially by the public. In December 2013 the Committee on Earth Observation Satellites Working Group on Information Systems and Services documented a high-level reference model for the use of Earth observation satellites and associated products to support disaster risk management within the Global Earth Observation System of Systems context. The enterprise architecture identified the important role of user access to all key functions supporting situational awareness and decision-making. This paper focuses on the need to develop actionable information products from these Earth observations to simplify the discovery, access and use of tailored products. To this end, our team has developed an Open GeoSocial API proof-of-concept for GEOSS. We envision public access to mobile apps available on smart phones using common browsers where users can set up a profile and specify a region of interest for monitoring events such as floods and landslides. Information about susceptibility and weather forecasts about flood risks can be accessed. Users can generate geo-located information and photos of local events, and these can be shared on social media. The information architecture can address usability challenges to transform sensor data into actionable information, based on the terminology of the emergency management community responsible for informing the public. This paper describes the approach to collecting relevant material from the disasters and risk management community to address the end user needs for information. The resulting information architecture addresses the structural design of the shared information in the disasters and risk management enterprise. Key challenges are organizing and labeling information to support both online user communities and machine-to-machine processing for automated product generation.

  1. Large earthquakes and creeping faults

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  2. Models meet data: Challenges and opportunities in implementing land management in Earth system models.

    PubMed

    Pongratz, Julia; Dolman, Han; Don, Axel; Erb, Karl-Heinz; Fuchs, Richard; Herold, Martin; Jones, Chris; Kuemmerle, Tobias; Luyssaert, Sebastiaan; Meyfroidt, Patrick; Naudts, Kim

    2018-04-01

    As the applications of Earth system models (ESMs) move from general climate projections toward questions of mitigation and adaptation, the inclusion of land management practices in these models becomes crucial. We carried out a survey among modeling groups to show an evolution from models able only to deal with land-cover change to more sophisticated approaches that also allow for the partial integration of land management changes. For the longer term, a comprehensive land management representation can be anticipated for all major models. To guide the prioritization of implementation, we evaluate ten land management practices (forestry harvest, tree species selection, grazing and mowing harvest, crop harvest, crop species selection, irrigation, wetland drainage, fertilization, tillage, and fire) for (1) their importance to the Earth system, (2) the possibility of implementing them in state-of-the-art ESMs, and (3) the availability of required input data. Matching these criteria, we identify "low-hanging fruits" for inclusion in ESMs, such as basic implementations of crop and forestry harvest and fertilization. We also identify research requirements for specific communities to address the remaining land management practices. Data availability severely hampers modeling the most extensive land management practice, grazing and mowing harvest, and is a limiting factor for a comprehensive implementation of most other practices. Inadequate process understanding hampers even a basic assessment of crop species selection and tillage effects. The need for multiple advanced model structures will be the challenge for a comprehensive implementation of most practices, but considerable synergy can be gained by using the same structures for different practices. A continuous and closer collaboration of the modeling, Earth observation, and land system science communities is thus required to achieve the inclusion of land management in ESMs. © 2017 John Wiley & Sons Ltd.

  3. NASA's EOSDIS Cumulus: Ingesting, Archiving, Managing, and Distributing Earth Science Data from the Commercial Cloud

    NASA Technical Reports Server (NTRS)

    Baynes, Katie; Ramachandran, Rahul; Pilone, Dan; Quinn, Patrick; Gilman, Jason; Schuler, Ian; Jazayeri, Alireza

    2017-01-01

    NASA's Earth Observing System Data and Information System (EOSDIS) has been working towards a vision of a cloud-based, highly-flexible ingest, archive, management, and distribution system for its ever-growing and evolving data holdings. This system, Cumulus, is emerging from its prototyping stages and is poised to make a huge impact on how NASA manages and disseminates its Earth science data. This talk will outline the motivation for this work, present the achievements and hurdles of the past 18 months, and chart a course for the future expansion of Cumulus. We will explore not just the technical but also the socio-technical challenges that we face in evolving a system of this magnitude into the cloud, and how we are rising to meet those challenges through open collaboration and intentional stakeholder engagement.

  4. Earth Observatory Satellite system definition study. Report 4: Low cost management approach and recommendations

    NASA Technical Reports Server (NTRS)

    1974-01-01

    An analysis of low cost management approaches for the development of the Earth Observatory Satellite (EOS) is presented. The factors of the program which tend to increase costs are identified. The NASA/Industry interface is stressed to show how the interface can be improved to produce reduced program costs. Techniques and examples of cost reduction which can be applied to the EOS program are tabulated. Specific recommendations for actions to be taken to reduce costs in prescribed areas are submitted.

  5. Earth Science Keyword Stewardship: Access and Management through NASA's Global Change Master Directory (GCMD) Keyword Management System (KMS)

    NASA Astrophysics Data System (ADS)

    Stevens, T.; Olsen, L. M.; Ritz, S.; Morahan, M.; Aleman, A.; Cepero, L.; Gokey, C.; Holland, M.; Cordova, R.; Areu, S.; Cherry, T.; Tran-Ho, H.

    2012-12-01

    Discovering Earth science data can be complex if the catalog holding the data lacks structure. Controlled keyword vocabularies within metadata catalogues can improve data discovery. NASA's Global Change Master Directory (GCMD) Keyword Management System (KMS) is a recently released RESTful web service for managing and providing access to controlled keywords (science keywords, service keywords, platforms, instruments, providers, locations, projects, data resolution, etc.). The KMS introduces a completely new paradigm for the use and management of the keywords and allows access to these keywords as SKOS Concepts (RDF), OWL, standard XML, and CSV. A universally unique identifier (UUID) is automatically assigned to each keyword, which uniquely identifies each concept and its associated information. A component of the KMS is the keyword manager, an internal tool that allows GCMD science coordinators to manage concepts. This includes adding, modifying, and deleting broader, narrower, or related concepts and associated definitions. The controlled keyword vocabulary represents over 20 years of effort and collaboration with the Earth science community. The maintenance, stability, and ongoing vigilance in maintaining mutually exclusive and parallel keyword lists are important for a "normalized" search and discovery, and provide a unique advantage for the science community. Modifications and additions are made based on community suggestions and internal review. To help maintain keyword integrity, science keyword rules and procedures for modification of keywords were developed. This poster will highlight the use of the KMS as a beneficial service for the stewardship and access of the GCMD keywords. Users will learn how to access the KMS and utilize the keywords. Best practices for managing an extensive keyword hierarchy will also be discussed. Participants will learn the process for making keyword suggestions, which subsequently help in building a controlled keyword
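
    Retrieving a keyword scheme from a RESTful service like the KMS might look like the following Python sketch. The base URL, path, and format parameter are assumptions made for illustration; consult the GCMD KMS documentation for the actual interface.

    ```python
    # Sketch: fetch a controlled keyword scheme as CSV over HTTP.
    import requests

    KMS_BASE = "https://gcmd.earthdata.nasa.gov/kms"  # assumed base URL

    def fetch_scheme_csv(scheme: str = "sciencekeywords") -> str:
        # Endpoint shape is an assumption, not a documented contract.
        resp = requests.get(f"{KMS_BASE}/concepts/concept_scheme/{scheme}",
                            params={"format": "csv"}, timeout=30)
        resp.raise_for_status()
        return resp.text

    if __name__ == "__main__":
        print(fetch_scheme_csv()[:400])  # first few lines of the hierarchy
    ```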

  6. Derailment-based Fault Tree Analysis on Risk Management of Railway Turnout Systems

    NASA Astrophysics Data System (ADS)

    Dindar, Serdar; Kaewunruen, Sakdirat; An, Min; Gigante-Barrera, Ángel

    2017-10-01

    Railway turnouts are fundamental mechanical infrastructures that allow rolling stock to divert from one direction to another. Because a turnout comprises a large number of engineering subsystems, e.g. track, signalling, and earthworks, these subsystems can fail through various kinds of failure mechanisms, any of which could contribute to a catastrophic event. A derailment, one of the undesirable events in railway operation, occurs rarely but often results in damage to rolling stock and railway infrastructure, disrupts service, and has the potential to cause casualties and even loss of lives. As a result, it is quite significant that a well-designed risk analysis is performed to create awareness of hazards and to identify which parts of the system may be at risk. This study focuses on all types of environment-based failures arising from the numerous contributing factors noted officially in accident reports. The risk analysis is designed to help industry minimise the occurrence of accidents at railway turnouts. The methodology of the study relies on accurate assessment of derailment likelihood and is based on a statistical, multiple-factor integrated accident rate analysis. The study establishes product risks and faults and shows the impact of potential failure processes using Boolean algebra.
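
    The Boolean-algebra core of such a fault tree analysis reduces to combining basic-event probabilities through AND and OR gates. The sketch below shows the mechanics under an independence assumption; the events and probabilities are invented for illustration, not taken from the study.

    ```python
    # Minimal fault tree evaluation: OR and AND gates over independent events.

    def and_gate(*probs: float) -> float:
        p = 1.0
        for q in probs:
            p *= q
        return p

    def or_gate(*probs: float) -> float:
        p_none = 1.0
        for q in probs:
            p_none *= (1.0 - q)
        return 1.0 - p_none

    # Basic events (annual probabilities, invented for illustration):
    worn_switch_blade = 1e-3
    ice_obstruction = 5e-4
    signalling_error = 2e-4
    overspeed = 1e-3

    # Top event: derailment at a turnout if the turnout fails mechanically,
    # OR a signalling error coincides with an overspeeding train.
    turnout_mechanical = or_gate(worn_switch_blade, ice_obstruction)
    derailment = or_gate(turnout_mechanical, and_gate(signalling_error, overspeed))
    print(f"P(derailment) = {derailment:.6f}")
    ```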

  7. The Heritage of Earth Science Applications in Policy, Business, and Management of Natural Resources

    NASA Astrophysics Data System (ADS)

    Macauley, M.

    2012-12-01

    From the first hand-held cameras on the Gemini space missions to present day satellite instruments, Earth observations have enhanced the management of natural resources including water, land, and air. Applications include the development of new methodology (for example, developing and testing algorithms or demonstrating how data can be used) and the direct use of data in decisionmaking and policy implementation. Using well-defined bibliographic search indices to systematically survey a broad social science literature, this project enables identification of a host of well-documented, practical and direct applications of Earth science data in resource management. This literature has not previously been well surveyed, aggregated, or analyzed for the heritage of lessons learned in practical application of Earth science data. In the absence of such a survey, the usefulness of Earth science data is underestimated and the factors that make people want to use -- and able to use -- the data are poorly understood. The project extends and updates previous analysis of social science applications of Landsat data to show their contemporary, direct use in new policy, business, and management activities and decisionmaking. The previous surveys (for example, Blumberg and Jacobson 1997; National Research Council 1998) find that the earliest attempts to use data are almost exclusively testing of methodology rather than direct use in resource management. Examples of methodology prototyping include Green et al. (1997) who demonstrate use of remote sensing to detect and monitor changes in land cover and use, Cowen et al. (1995) who demonstrate design and integration of GIS for environmental applications, Hutchinson (1991) who shows uses of data for famine early warning, and Brondizio et al. (1996) who show the link of thematic mapper data with botanical data. Blumberg and Jacobson (in Acevedo et al. 1996) show use of data in a study of urban development in the San Francisco Bay and the

  8. Semantics-enabled knowledge management for global Earth observation system of systems

    NASA Astrophysics Data System (ADS)

    King, Roger L.; Durbha, Surya S.; Younan, Nicolas H.

    2007-10-01

    The Global Earth Observation System of Systems (GEOSS) is a distributed system of systems built on current international cooperation efforts among existing Earth observing and processing systems. The goal is to formulate an end-to-end process that enables the collection and distribution of accurate, reliable Earth Observation data, information, products, and services to both suppliers and consumers worldwide. One of the critical components in the development of such systems is the ability to obtain seamless access to data across geopolitical boundaries. In order to gain support and willingness to participate by countries around the world in such an endeavor, it is necessary to devise mechanisms whereby the data and the intellectual capital are protected through procedures that implement the policies specific to a country. Earth Observations (EO) are obtained from a multitude of sources and require coordination among different agencies and user groups to come to a shared understanding of a set of concepts involved in a domain. It is envisaged that the data and information in a GEOSS context will be unprecedented, and the current data archiving and delivery methods need to be transformed into ones that allow realization of seamless interoperability. Thus, EO data integration is dependent on the resolution of conflicts arising from a variety of areas. Modularization is inevitable in distributed environments to facilitate flexible and efficient reuse of existing ontologies. Therefore, we propose a modular-ontology-based knowledge management framework for GEOSS and present methods to enable efficient reasoning in such systems.
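
    Modular reuse of existing ontologies is commonly expressed with owl:imports. A minimal Python sketch with rdflib follows, using hypothetical placeholder URIs rather than any actual GEOSS ontology modules.

    ```python
    # Sketch: declare an ontology module that imports an external module
    # instead of redefining its concepts locally, so reasoning can stay
    # scoped to only the modules a query actually needs.
    from rdflib import Graph, URIRef
    from rdflib.namespace import OWL, RDF

    geoss_core = URIRef("http://example.org/geoss/eo-core")      # hypothetical
    units_module = URIRef("http://example.org/external/units")   # reused module

    g = Graph()
    g.add((geoss_core, RDF.type, OWL.Ontology))
    g.add((geoss_core, OWL.imports, units_module))

    print(g.serialize(format="turtle"))
    ```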

  9. Applications of Earth Observations for Fisheries Management: An analysis of socioeconomic benefits

    NASA Astrophysics Data System (ADS)

    Friedl, L.; Kiefer, D. A.; Turner, W.

    2013-12-01

    This paper will discuss the socioeconomic impacts of a project applying Earth observations and models to support management and conservation of tuna and other marine resources in the eastern Pacific Ocean. A project team created a software package that produces statistical analyses and dynamic maps of habitat for pelagic ocean biota. The tool integrates sea surface temperature and chlorophyll imagery from MODIS, ocean circulation models, and other data products. The project worked with the Inter-American Tropical Tuna Commission, which issues fishery management information, such as stock assessments, for the eastern Pacific region. The Commission uses the tool and broader habitat information to produce better estimates of stock and thus improve their ability to identify species that could be at risk of overfishing. The socioeconomic analysis quantified the relative value that Earth observations contributed to accurate stock size assessments through improvements in calculating population size. The analysis team calculated the first-order economic costs of a fishery collapse (or shutdown), and they calculated the benefits of improved estimates that reduce the uncertainty of stock size and thus reduce the risk of fishery collapse. The team estimated that the project reduced the probability of collapse of different fisheries, and the analysis generated net present values of risk mitigation. USC led the project with sponsorship from the NASA Earth Science Division's Applied Sciences Program, which conducted the socioeconomic impact analysis. The paper will discuss the project and focus primarily on the analytic methods, impact metrics, and the results of the socioeconomic benefits analysis.

  10. MER surface fault protection system

    NASA Technical Reports Server (NTRS)

    Neilson, Tracy

    2005-01-01

    The Mars Exploration Rovers' surface fault protection design was influenced by the fact that the solar-powered rovers must recharge their batteries during the day to survive the night. The rovers needed to autonomously maintain thermal stability and initiate safe and reliable communication with orbiting assets or directly with Earth, all while maintaining energy balance. This paper will describe the system fault protection design for the surface phase of the mission.

  11. Momentum Management for the NASA Near Earth Asteroid Scout Solar Sail Mission

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew; Diedrich, Benjamin L.; Orphee, Juan; Stiltner, Brandon; Becker, Christopher

    2017-01-01

    The Momentum Management (MM) system is described for the NASA Near Earth Asteroid Scout (NEA Scout) cubesat solar sail mission. Unlike many solar sail mission proposals that used solar torque as the primary or only attitude control system, NEA Scout uses small reaction wheels (RW) and a reaction control system (RCS) with cold gas thrusters, as described in the abstract "Solar Sail Attitude Control System for Near Earth Asteroid Scout Cubesat Mission." The reaction wheels allow fine pointing and higher rates with low mass actuators to meet the science, communication, and trajectory guidance requirements. The MM system keeps the speed of the wheels within their operating margins using a combination of solar torque and the RCS.
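
    The momentum-management loop can be caricatured in a few lines: an external solar torque steadily loads the wheels, and an RCS pulse unloads them near saturation. The thresholds, impulse size, and constant solar-torque model below are invented for the sketch and are not NEA Scout flight parameters.

    ```python
    # Toy momentum-management step: absorb external torque into wheel momentum,
    # and dump a fixed increment via RCS when nearing the wheel operating limit.

    def manage_momentum(h_wheel_nms: float,
                        h_limit_nms: float = 0.01,
                        rcs_impulse_nms: float = 0.004,
                        solar_torque_nm: float = 1e-7,
                        dt_s: float = 60.0) -> float:
        # Solar pressure on the sail applies a small torque each step,
        # which the wheels absorb as stored momentum.
        h_wheel_nms += solar_torque_nm * dt_s
        # Unload: when the wheel approaches its limit, an RCS pulse dumps
        # a fixed increment of momentum in the opposing direction.
        if abs(h_wheel_nms) > h_limit_nms:
            h_wheel_nms -= rcs_impulse_nms * (1 if h_wheel_nms > 0 else -1)
        return h_wheel_nms

    h = 0.0
    for step in range(2000):
        h = manage_momentum(h)
    print(f"wheel momentum after 2000 steps: {h:.4f} N*m*s")  # stays bounded
    ```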

  12. A Comparison of Global Indexing Schemes to Facilitate Earth Science Data Management

    NASA Astrophysics Data System (ADS)

    Griessbaum, N.; Frew, J.; Rilee, M. L.; Kuo, K. S.

    2017-12-01

    Recent advances in database technology have led to systems optimized for managing petabyte-scale multidimensional arrays. These array databases are a good fit for subsets of the Earth's surface that can be projected into a rectangular coordinate system with acceptable geometric fidelity. However, for global analyses, array databases must address the same distortions and discontinuities that apply to map projections in general. The array database SciDB supports enormous databases spread across thousands of computing nodes. Additionally, the following SciDB characteristics are particularly germane to the coordinate system problem: SciDB efficiently stores and manipulates sparse (i.e. mostly empty) arrays. SciDB arrays have 64-bit indexes. SciDB supports user-defined data types, functions, and operators. We have implemented two geospatial indexing schemes in SciDB. The simplest uses two array dimensions to represent longitude and latitude. For representation as 64-bit integers, the coordinates are multiplied by a scale factor large enough to yield an appropriate Earth surface resolution (e.g., a scale factor of 100,000 yields a resolution of approximately 1m at the equator). Aside from the longitudinal discontinuity, the principal disadvantage of this scheme is its fixed scale factor. The second scheme uses a single array dimension to represent the bit-codes for locations in a hierarchical triangular mesh (HTM) coordinate system. A HTM maps the Earth's surface onto an octahedron, and then recursively subdivides each triangular face to the desired resolution. Earth surface locations are represented as the concatenation of an octahedron face code and a quadtree code within the face. Unlike our integerized lat-lon scheme, the HTM allows objects of different sizes (e.g., pixels with differing resolutions) to be represented in the same indexing scheme. We present an evaluation of the relative utility of these two schemes for managing and analyzing MODIS swath data.
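
    The first scheme is simple to sketch: scale and offset latitude and longitude into non-negative integer indexes. The scale factor of 100,000 (about 1 m at the equator) is the one quoted above; the exact offsetting and packing below follow the description but may differ in detail from the SciDB implementation.

    ```python
    # Integerized lat/lon indexing: degrees scaled to 64-bit-safe integers.

    SCALE = 100_000   # ~1 m resolution at the equator, per the record

    def latlon_to_index(lat: float, lon: float) -> tuple:
        """Map (lat, lon) in degrees to non-negative integer array indexes."""
        i = round((lat + 90.0) * SCALE)    # 0 .. 18,000,000
        j = round((lon + 180.0) * SCALE)   # 0 .. 36,000,000
        return i, j

    def index_to_latlon(i: int, j: int) -> tuple:
        return i / SCALE - 90.0, j / SCALE - 180.0

    i, j = latlon_to_index(34.3917, -118.5426)   # near Palmdale, CA
    print(i, j, index_to_latlon(i, j))
    ```

    The fixed scale factor noted as this scheme's principal disadvantage is visible here: every object is forced to the same resolution, which is what the HTM scheme avoids.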

  13. An expert system for fault management assistance on a space sleep experiment

    NASA Technical Reports Server (NTRS)

    Atamer, A.; Delaney, M.; Young, L. R.

    2002-01-01

    The expert system, Principal Investigator-in-a-box, or [PI], was designed to assist astronauts or other operators in performing experiments outside their expertise. Currently, the software helps astronauts calibrate instruments for a Sleep and Respiration Experiment without contact with the investigator on the ground. It flew on the Space Shuttle missions STS-90 and STS-95. [PI] displays electrophysiological signals in real time, alerts astronauts via the indicator lights when poor signal quality is detected, and advises astronauts on how to restore good signal quality. Thirty subjects received training on the sleep instrumentation and the [PI] interface. A beneficial effect of [PI] and training was reduced troubleshooting time. [PI] benefited subjects on the most difficult scenarios, even though its lights were not 100% accurate. Further, questionnaires showed that most subjects preferred monitoring waveforms with [PI] assistance rather than monitoring waveforms alone. This study addresses the problems of complex troubleshooting and the extended time between training and execution that are common to many human operator situations on Earth, such as power plant operation and marine exploration.

  14. Satellite and earth science data management activities at the U.S. geological survey's EROS data center

    Carneggie, David M.; Metz, Gary G.; Draeger, William C.; Thompson, Ralph J.

    1991-01-01

    The U.S. Geological Survey's Earth Resources Observation Systems (EROS) Data Center, the national archive for Landsat data, has 20 years of experience in acquiring, archiving, processing, and distributing Landsat and earth science data. The Center is expanding its satellite and earth science data management activities to support the U.S. Global Change Research Program and the National Aeronautics and Space Administration (NASA) Earth Observing System Program. The Center's current and future data management activities focus on land data and include: satellite and earth science data set acquisition, development and archiving; data set preservation, maintenance and conversion to more durable and accessible archive medium; development of an advanced Land Data Information System; development of enhanced data packaging and distribution mechanisms; and data processing, reprocessing, and product generation systems.

  15. Earth Observations

    2011-05-28

    ISS028-E-006059 (28 May 2011) --- One of the Expedition 28 crew members, photographing Earth images onboard the International Space Station while docked with the space shuttle Endeavour and flying at an altitude of just under 220 miles, captured this frame of the Salton Sea. The body of water, easily identifiable from low orbit spacecraft, is a saline, endorheic rift lake located directly on the San Andreas Fault. The agricultural area is within the Coachella Valley.

  16. Online fault adaptive control for efficient resource management in Advanced Life Support Systems.

    PubMed

    Abdelwahed, Sherif; Wu, Jian; Biswas, Gautam; Ramirez, John; Manders, Eric-J

    2005-01-01

    This article presents the design and implementation of a controller scheme for efficient resource management in Advanced Life Support Systems. In the proposed approach, a switching hybrid system model is used to represent the dynamics of the system components and their interactions. The operational specifications for the controller are represented by utility functions, and the corresponding resource management problem is formulated as a safety control problem. The controller is designed as a limited-horizon online supervisory controller that performs a limited forward search on the state-space of the system at each time step, and uses the utility functions to decide on the best action. The feasibility and accuracy of the online algorithm can be assessed at design time. We demonstrate the effectiveness of the scheme by running a set of experiments on the Reverse Osmosis (RO) subsystem of the Water Recovery System (WRS).

  17. Online fault adaptive control for efficient resource management in Advanced Life Support Systems

    NASA Technical Reports Server (NTRS)

    Abdelwahed, Sherif; Wu, Jian; Biswas, Gautam; Ramirez, John; Manders, Eric-J

    2005-01-01

    This article presents the design and implementation of a controller scheme for efficient resource management in Advanced Life Support Systems. In the proposed approach, a switching hybrid system model is used to represent the dynamics of the system components and their interactions. The operational specifications for the controller are represented by utility functions, and the corresponding resource management problem is formulated as a safety control problem. The controller is designed as a limited-horizon online supervisory controller that performs a limited forward search on the state-space of the system at each time step, and uses the utility functions to decide on the best action. The feasibility and accuracy of the online algorithm can be assessed at design time. We demonstrate the effectiveness of the scheme by running a set of experiments on the Reverse Osmosis (RO) subsystem of the Water Recovery System (WRS).
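
    The limited-horizon control idea in the two records above can be sketched as a forward search over action sequences, scoring terminal states with a utility function and committing only to the first action. The toy dynamics and utility below are stand-ins invented for illustration, not the Reverse Osmosis subsystem model.

    ```python
    # Limited-horizon online supervisory control: enumerate action plans to a
    # fixed depth, simulate the switched dynamics, pick the best first action.
    from itertools import product

    ACTIONS = ["pump_on", "pump_off"]

    def step(state: float, action: str) -> float:
        """Toy switched dynamics for a stored-water level (not the RO model)."""
        return state + (0.8 if action == "pump_on" else -0.5)

    def utility(state: float) -> float:
        """Prefer staying near a setpoint of 10.0 (a safety-style specification)."""
        return -abs(state - 10.0)

    def choose_action(state: float, horizon: int = 3) -> str:
        best_action, best_value = ACTIONS[0], float("-inf")
        for plan in product(ACTIONS, repeat=horizon):
            s = state
            for a in plan:
                s = step(s, a)
            if utility(s) > best_value:
                best_action, best_value = plan[0], utility(s)
        return best_action

    s = 8.0
    for _ in range(5):
        a = choose_action(s)
        s = step(s, a)
        print(a, round(s, 2))
    ```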

  18. Enhancing Earth Observation and Modeling for Tsunami Disaster Response and Management

    NASA Astrophysics Data System (ADS)

    Koshimura, Shunichi; Post, Joachim

    2017-04-01

    In the aftermath of catastrophic natural disasters, such as earthquakes and tsunamis, our society has experienced significant difficulties in assessing disaster impact within a limited amount of time. In recent years, the quality of satellite sensors and access to and use of satellite imagery and services have greatly improved. More and more space agencies have embraced data-sharing policies that facilitate access to archived and up-to-date imagery. Tremendous progress has been achieved through the continuous development of powerful algorithms and software packages to manage and process geospatial data and to disseminate imagery and geospatial datasets in near-real time via geo-web-services, which can be used in disaster-risk management and emergency response efforts. Satellite Earth observations now offer consistent coverage and scope to provide a synoptic overview of large areas, repeated regularly. These can be used to compare risk across different countries, day and night, in all weather conditions, and in trans-boundary areas. At the same time, modern computing power and advanced sensor networks have enabled great advances in real-time simulation. The data and information derived from satellite Earth observations, integrated with in situ information and simulation modeling, provide unique value and the necessary complement to socio-economic data. Emphasis also needs to be placed on ensuring space-based data and information are used in existing and planned national and local disaster risk management systems, together with other data and information sources, as a way to strengthen the resilience of communities. Through case studies of the 2011 Great East Japan earthquake and tsunami disaster, we aim to discuss how Earth observations and modeling, in combination with local, in situ data and information sources, can support the decision-making process before, during, and after a disaster strikes.

  19. The San Andreas Fault and a Strike-slip Fault on Europa

    NASA Technical Reports Server (NTRS)

    1998-01-01

    materials, but may be filled in mostly by sedimentary and erosional material deposited from above. Comparisons between faults on Europa and Earth may generate ideas useful in the study of terrestrial faulting.

    One theory is that fault motion on Europa is induced by the pull of variable daily tides generated by Jupiter's gravitational tug on Europa. The tidal tension opens the fault; subsequent tidal stress causes it to move lengthwise in one direction. Then the tidal forces close the fault up again. This prevents the area from moving back to its original position. If it moves forward with the next daily tidal cycle, the result is a steady accumulation of these lengthwise offset motions.

    Unlike Europa, here on Earth, large strike-slip faults such as the San Andreas are set in motion not by tidal pull, but by plate tectonic forces from the planet's mantle.

    North is to the top of the picture. The Earth picture (left) shows a LandSat Thematic Mapper image acquired in the infrared (1.55 to 1.75 micrometers) by LandSat5 on Friday, October 20th 1989 at 10:21 am. The original resolution was 28.5 meters per picture element.

    The Europa picture (right) is centered at 66 degrees south latitude and 195 degrees west longitude. The highest resolution frames, obtained at 40 meters per picture element with a spacecraft range of less than 4200 kilometers (2600 miles), are set in the context of lower resolution regional frames obtained at 200 meters per picture element and a range of 22,000 kilometers (13,600 miles). The images were taken on September 26, 1998 by the Solid State Imaging (SSI) system on NASA's Galileo spacecraft.

    The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC.

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images

  20. Advanced cloud fault tolerance system

    NASA Astrophysics Data System (ADS)

    Sumangali, K.; Benny, Niketa

    2017-11-01

    Cloud computing has become a prevalent on-demand service on the internet to store, manage and process data. One pitfall that accompanies cloud computing is the failures that can be encountered in the cloud. To overcome these failures, we require a fault tolerance mechanism to abstract faults away from users. We have proposed a fault-tolerant architecture, which is a combination of proactive and reactive fault tolerance. This architecture essentially increases the reliability and the availability of the cloud. In the future, we would like to compare evaluations of our proposed architecture with existing architectures and further improve it.

  1. Fault-tolerant processing system

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L. (Inventor)

    1996-01-01

    A fault-tolerant, fiber optic interconnect, or backplane, which serves as a via for data transfer between modules. Fault tolerance algorithms are embedded in the backplane by dividing the backplane into a read bus and a write bus and placing a redundancy management unit (RMU) between the read bus and the write bus so that all data transmitted by the write bus is subjected to the fault tolerance algorithms before the data is passed for distribution to the read bus. The RMU provides both backplane control and fault tolerance.
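
    As a rough illustration of the redundancy-management step described above, the sketch below majority-votes redundant copies of a transmitted word and flags dissenting channels before the value is released. The function and data are hypothetical, not the RMU's actual algorithms.

      from collections import Counter

      def vote(replicas):
          """Majority-vote redundant copies of a word; flag dissenting channels.

          Hypothetical illustration of a fault tolerance step an RMU might
          apply to write-bus data before releasing it to the read bus."""
          value, count = Counter(replicas).most_common(1)[0]
          if count <= len(replicas) // 2:
              raise ValueError("no majority: possible multiple faults")
          return value, [i for i, v in enumerate(replicas) if v != value]

      # Three redundant copies of one data word; channel 2 disagrees.
      print(vote([0xA5, 0xA5, 0x5A]))  # (165, [2])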

  2. Food, water, and fault lines: Remote sensing opportunities for earthquake-response management of agricultural water.

    PubMed

    Rodriguez, Jenna; Ustin, Susan; Sandoval-Solis, Samuel; O'Geen, Anthony Toby

    2016-09-15

    Earthquakes often cause destructive and unpredictable changes that can affect local hydrology (e.g. groundwater elevation or reduction) and thus disrupt land uses and human activities. Prolific agricultural regions overlie seismically active areas, emphasizing the importance of improving our understanding and monitoring of hydrologic and agricultural systems following a seismic event. Thorough data collection is necessary for an adequate post-earthquake crop management response; however, the large spatial extent of an earthquake's impact makes collecting robust data sets for identifying the locations and magnitudes of these impacts challenging. Observing hydrologic responses to earthquakes is not a novel concept, yet there is a lack of methods and tools for assessing an earthquake's impacts upon regional hydrology and agricultural systems. The objective of this paper is to describe how remote sensing imagery, methods and tools allow detecting crop responses and damage incurred after earthquakes because of changes in the regional hydrology. Many remote sensing datasets are long archived with extensive coverage and with well-documented methods to assess plant-water relations. We thus connect remote sensing of plant-water relations to its utility in agriculture using a post-earthquake agrohydrologic remote sensing (PEARS) framework, specifically in agro-hydrologic relationships associated with recent earthquake events that will lead to improved water management. Copyright © 2016 Elsevier B.V. All rights reserved.
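
    One common building block for frameworks of this kind is pre-/post-event differencing of a vegetation index. The Python sketch below flags pixels whose NDVI drops sharply after an event; it is an illustrative fragment with invented toy reflectance arrays, not the PEARS framework itself.

      import numpy as np

      def ndvi(red, nir):
          """Normalized Difference Vegetation Index from red/NIR reflectance."""
          return (nir - red) / (nir + red + 1e-9)

      def crop_stress_mask(pre, post, drop=0.15):
          """Flag pixels whose NDVI fell by more than `drop` after the event."""
          return (ndvi(*pre) - ndvi(*post)) > drop

      # Toy 2x2 scene: (red, nir) reflectance arrays before and after the event.
      pre  = (np.full((2, 2), 0.1), np.full((2, 2), 0.6))
      post = (np.array([[0.1, 0.3], [0.1, 0.1]]),
              np.array([[0.6, 0.3], [0.6, 0.5]]))
      print(crop_stress_mask(pre, post))  # True where vegetation vigor dropped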

  3. Design Methods and Practices for Fault Prevention and Management in Spacecraft

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.

    2005-01-01

    Integrated Systems Health Management (ISHM) is intended to become a critical capability for all space, lunar and planetary exploration vehicles and systems at NASA. Monitoring and managing the health state of diverse components, subsystems, and systems is a difficult task that will become more challenging when implemented for long-term, evolving deployments. A key technical challenge will be to ensure that the ISHM technologies are reliable, effective, and low cost, resulting in turn in safe, reliable, and affordable missions. To ensure safety and reliability, ISHM functionality, decisions and knowledge have to be incorporated into the product lifecycle as early as possible, and ISHM must be considered as an essential element of models developed and used in various stages during system design. During early stage design, many decisions and tasks are still open, including sensor and measurement point selection, modeling and model-checking, diagnosis, signature and data fusion schemes, presenting the best opportunity to catch and prevent potential failures and anomalies in a cost-effective way. Using appropriate formal methods during early design, the design teams can systematically explore risks without committing to design decisions too early. However, the nature of ISHM knowledge and data is detailed, relying on high-fidelity, detailed models, whereas the earlier stages of the product lifecycle utilize low-fidelity, high-level models of systems and their functionality. We currently lack the tools and processes necessary for integrating ISHM into the vehicle system/subsystem design. As a result, most existing ISHM-like technologies are retrofits that were done after the system design was completed. It is very expensive, and sometimes futile, to retrofit a system health management capability into existing systems. Last-minute retrofits result in unreliable systems, ineffective solutions, and excessive costs (e.g., Space Shuttle TPS monitoring which was considered

  4. Class D Management Implementation Approach of the First Orbital Mission of the Earth Venture Series

    NASA Technical Reports Server (NTRS)

    Wells, James E.; Scherrer, John; Law, Richard; Bonniksen, Chris

    2013-01-01

    A key element of the National Research Council's Earth Science and Applications Decadal Survey called for the creation of the Venture Class line of low-cost research and application missions within NASA (National Aeronautics and Space Administration). One key component of the architecture chosen by NASA within the Earth Venture line is a series of self-contained stand-alone spaceflight science missions called "EV-Mission". The first mission chosen for this competitively selected, cost and schedule capped, Principal Investigator-led opportunity is the CYclone Global Navigation Satellite System (CYGNSS). As specified in the defining Announcement of Opportunity, the Principal Investigator is held responsible for successfully achieving the science objectives of the selected mission, and he/she has a significant amount of freedom in choosing the management approach to obtain those results, as long as it meets the intent of key NASA guidance like NPR 7120.5 and 7123. CYGNSS is classified under NPR 7120.5E guidance as a Category 3 (low priority, low cost) mission and carries a Class D risk classification (low priority, high risk) per NPR 8705.4. As defined in the NPR guidance, Class D risk classification allows for a relatively broad range of implementation strategies. The management approach that will be utilized on CYGNSS is a streamlined implementation that starts with a higher risk tolerance posture at NASA, and that philosophy flows all the way down to the individual part level.

  5. Failure detection and fault management techniques for flush airdata sensing systems

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.

    1992-01-01

    A high-angle-of-attack flush airdata sensing system was installed and flight tested on the F-18 High Alpha Research Vehicle at NASA-Dryden. This system uses a matrix of pressure orifices arranged in concentric circles on the nose of the vehicle to determine angles of attack, angles of sideslip, dynamic pressure, and static pressure, as well as other airdata parameters. Results presented use an arrangement of 11 symmetrically distributed ports on the aircraft nose. Experience with data from this sensing system indicates that the primary concern for real-time implementation is the detection and management of overall system and individual pressure sensor failures. The multiple-port sensing system is more tolerant of small disturbances in the measured pressure data than conventional probe-based intrusive airdata systems. However, under adverse circumstances, large undetected failures in individual pressure ports can result in algorithm divergence and catastrophic failure of the entire system. How system and individual port failures may be detected using chi-square analysis is shown. Once identified, the effects of failures are eliminated using weighted least squares.
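
    The detect-then-eliminate loop described above can be sketched as a weighted least-squares fit with a chi-square-style residual gate: ports whose normalized residuals exceed a threshold are de-weighted and the fit is repeated. The Python sketch below uses a generic linear model and invented numbers, not the actual airdata relations.

      import numpy as np

      def fit_and_screen(H, z, sigma, thresh=9.0):
          """Weighted least squares with a chi-square-style residual gate.

          Fit observations z to a linear model H @ x, test normalized
          residuals, and refit with the worst failed channel de-weighted."""
          w = 1.0 / sigma**2
          for _ in range(len(z)):
              W = np.diag(w)
              x = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
              r2 = w * (z - H @ x) ** 2            # normalized squared residuals
              worst = int(np.argmax(r2))
              if r2[worst] < thresh:               # all residuals pass the gate
                  return x, w
              w[worst] = 1e-12                     # eliminate the failed port
          raise RuntimeError("too many failed ports")

      # Five ports observing two parameters; port 3 is stuck at zero.
      H = np.array([[1, 0.2], [1, 0.4], [1, 0.6], [1, 0.8], [1, 1.0]])
      z = H @ np.array([2.0, 3.0])
      z[3] = 0.0
      x, w = fit_and_screen(H, z, sigma=np.full(5, 0.05))
      print(np.round(x, 2), int(np.argmin(w)))     # [2. 3.] 3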

  6. Fault-Related Sanctuaries

    NASA Astrophysics Data System (ADS)

    Piccardi, L.

    2001-12-01

    Beyond the study of historical surface faulting events, this work investigates the possibility, in specific cases, of identifying pre-historical events whose memory survives in myths and legends. The myths of many famous sacred places of the ancient world contain relevant telluric references: "sacred" earthquakes, openings to the Underworld and/or chthonic dragons. Given the strong correspondence with local geological evidence, these myths may be considered as describing natural phenomena. It has been possible in this way to shed light on the geologic origin of famous myths (Piccardi, 1999, 2000 and 2001). Interdisciplinary research reveals that the origin of several ancient sanctuaries may be linked in particular to peculiar geological phenomena observed on local active faults (such as ground shaking and coseismic surface ruptures, gas and flame emissions, and strong underground rumblings). In many of these sanctuaries the sacred area lies directly above the active fault. In a few cases, faulting has also affected the archaeological relics, right through the main temple (e.g. Delphi, Cnidus, Hierapolis of Phrygia). As such, the arrangement of the cult sites and the content of the relative myths suggest that specific points along the trace of active faults have been noticed in the past and worshiped as special 'sacred' places, most likely interpreted as Hades' Doors. The mythological stratification of most of these sanctuaries dates back to prehistory, and points to a common derivation from the cult of the Mother Goddess (the Lady of the Doors), which was widespread since at least 25000 BC. The cult itself was later reconverted into various different divinities, while the 'sacred doors' of the Great Goddess and/or the dragons (offspring of Mother Earth and generally regarded as Keepers of the Doors) persisted in more recent mythologies. Piccardi L., 1999: The "Footprints" of the Archangel: Evidence of Early-Medieval Surface Faulting at Monte Sant'Angelo (Gargano, Italy

  7. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The development of the Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large, complex systems engineering challenge, being addressed in part by focusing on the specific subsystems' handling of off-nominal missions and fault tolerance. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA also has formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. Risk reduction is addressed by working with other organizations such as S
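
    In spirit, a VMET-style check runs each M&FM algorithm over configurable nominal and off-nominal cases and tallies expected versus actual trips. A deliberately toy Python illustration, with an invented over-pressure monitor and made-up test cases (not SLS algorithms or data):

      def over_pressure(trace, limit=310.0):
          """Toy monitor: trip when tank pressure exceeds its redline."""
          return any(p > limit for p in trace)

      test_cases = {  # name: (pressure trace, expected trip?)
          "nominal_ascent":    ([300.0, 302.0, 301.0], False),
          "slow_overpressure": ([300.0, 308.0, 315.0], True),
          "sensor_spike":      ([300.0, 500.0, 301.0], True),
      }

      for name, (trace, expected) in test_cases.items():
          verdict = "PASS" if over_pressure(trace) == expected else "FAIL"
          print(f"{name}: {verdict}")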

  8. Structural Health and Prognostics Management for Offshore Wind Turbines: Sensitivity Analysis of Rotor Fault and Blade Damage with O&M Cost Modeling

    SciT

    Myrent, Noah J.; Barrett, Natalie C.; Adams, Douglas E.

    2014-07-01

    Operations and maintenance costs for offshore wind plants are significantly higher than the current costs for land-based (onshore) wind plants. One way to reduce these costs would be to implement a structural health and prognostic management (SHPM) system as part of a condition-based maintenance paradigm with smart load management, and utilize a state-based cost model to assess the economics associated with use of the SHPM system. To facilitate the development of such a system, a multi-scale modeling and simulation approach developed in prior work is used to identify how the underlying physics of the system are affected by the presence of damage and faults, and how these changes manifest themselves in the operational response of a full turbine. In the present report, this methodology was used to investigate two case studies on a 5-MW offshore wind turbine: (1) the effects of rotor imbalance due to pitch error (aerodynamic imbalance) and mass imbalance, and (2) disbond of the shear web. Sensitivity analyses were carried out for the detection strategies of rotor imbalance and shear web disbond developed in prior work by evaluating the robustness of key measurement parameters in the presence of varying wind speeds, horizontal shear, and turbulence. Detection strategies were refined for these fault mechanisms and probabilities of detection were calculated. For all three fault mechanisms, the probability of detection was 96% or higher for the optimized wind speed ranges of the laminar, 30% horizontal shear, and 60% horizontal shear wind profiles. The revised cost model provided insight into the estimated savings in operations and maintenance costs as they relate to the characteristics of the SHPM system. The integration of the health monitoring information and O&M cost versus damage/fault severity information provides the initial steps to identify processes to reduce operations and maintenance costs for an offshore wind farm while increasing turbine
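
    A probability of detection of the sort quoted above can be estimated empirically: simulate a damage-sensitive feature under varying inflow, set a threshold from the healthy population, and count exceedances in the faulted population. A minimal sketch with invented Gaussian feature distributions (not the study's simulations):

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical damage-sensitive feature (e.g., a once-per-revolution
      # vibration amplitude) simulated under turbulence for a healthy rotor
      # and a rotor with imbalance.
      healthy = rng.normal(1.0, 0.2, 10_000)
      faulted = rng.normal(2.0, 0.3, 10_000)

      threshold = np.quantile(healthy, 0.99)     # ~1% false-alarm rate
      pod = np.mean(faulted > threshold)         # empirical detection rate
      print(f"probability of detection: {pod:.1%}")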

  9. Strong ground motions generated by earthquakes on creeping faults

    Harris, Ruth A.; Abrahamson, Norman A.

    2014-01-01

    A tenet of earthquake science is that faults are locked in position until they abruptly slip during the sudden strain-relieving events that are earthquakes. Whereas it is expected that locked faults, when they finally do slip, will produce noticeable ground shaking, it is uncertain how the ground shakes during earthquakes on creeping faults. Creeping faults are rare throughout much of the Earth's continental crust, but there is a group of them in the San Andreas fault system. Here we evaluate the strongest ground motions from the largest well-recorded earthquakes on creeping faults. We find that the peak ground motions generated by the creeping fault earthquakes are similar to the peak ground motions generated by earthquakes on locked faults. Our findings imply that buildings near creeping faults need to be designed to withstand the same level of shaking as those constructed near locked faults.

  10. How do normal faults grow?

    NASA Astrophysics Data System (ADS)

    Jackson, Christopher; Bell, Rebecca; Rotevatn, Atle; Tvedt, Anette

    2016-04-01

    Normal faulting accommodates stretching of the Earth's crust, and it is arguably the most fundamental tectonic process leading to continent rupture and oceanic crust emplacement. Furthermore, the incremental and finite geometries associated with normal faulting dictate landscape evolution, sediment dispersal and hydrocarbon systems development in rifts. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, thus assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; thus, it may be argued, it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. However, in this talk we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because application of one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate

  11. Quantifying Anderson's fault types

    Simpson, R.W.

    1997-01-01

    Anderson [1905] explained three basic types of faulting (normal, strike-slip, and reverse) in terms of the shape of the causative stress tensor and its orientation relative to the Earth's surface. Quantitative parameters can be defined which contain information about both shape and orientation [Célérier, 1995], thereby offering a way to distinguish fault-type domains on plots of regional stress fields and to quantify, for example, the degree of normal-faulting tendencies within strike-slip domains. This paper offers a geometrically motivated generalization of Angelier's [1979, 1984, 1990] shape parameters φ and ψ to new quantities named Aφ and Aψ. In their simple forms, Aφ varies from 0 to 1 for normal, 1 to 2 for strike-slip, and 2 to 3 for reverse faulting, and Aψ ranges from 0° to 60°, 60° to 120°, and 120° to 180°, respectively. After scaling, Aφ and Aψ agree to within 2% (or 1°), a difference of little practical significance, although Aψ has smoother analytical properties. A formulation distinguishing horizontal axes as well as the vertical axis is also possible, yielding an Aφ ranging from -3 to +3 and Aψ from -180° to +180°. The geometrically motivated derivation in three-dimensional stress space presented here may aid intuition and offers a natural link with traditional ways of plotting yield and failure criteria. Examples are given, based on models of Bird [1996] and Bird and Kong [1994], of the use of Anderson fault parameters Aφ and Aψ for visualizing tectonic regimes defined by regional stress fields. Copyright 1997 by the American Geophysical Union.
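
    For readers who want the mechanics: a widely used form of the Aφ parameter combines the stress shape ratio φ = (σ2 − σ3)/(σ1 − σ3) with a regime index n = 0, 1, 2 (normal, strike-slip, reverse) as Aφ = (n + 0.5) + (−1)^n (φ − 0.5), which reproduces the 0-1, 1-2, 2-3 ranges quoted above. A small Python sketch with invented stress values:

      def a_phi(s1, s2, s3, regime):
          """Aphi from principal stresses s1 >= s2 >= s3 and faulting regime.

          phi = (s2 - s3)/(s1 - s3) is the stress shape ratio; n = 0, 1, 2
          encodes normal, strike-slip, and reverse faulting, reproducing the
          0-1, 1-2, and 2-3 ranges quoted in the abstract."""
          phi = (s2 - s3) / (s1 - s3)
          n = {"normal": 0, "strike-slip": 1, "reverse": 2}[regime]
          return (n + 0.5) + (-1) ** n * (phi - 0.5)

      print(a_phi(100.0, 60.0, 20.0, "strike-slip"))  # 1.5, mid strike-slip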

  12. The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform

    NASA Astrophysics Data System (ADS)

    Xie, Qingyun

    2016-06-01

    This paper summarizes the general requirements and specific characteristics of both geospatial raster database management system and raster data processing platform from a domain-specific perspective as well as from a computing point of view. It also discusses the need of tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global scale and high performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. Basically, as a database management system, GeoRaster defines an integrated raster data model, supports image compression, data manipulation, general and spatial indices, content and context based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.

  13. Long-Term Soil Experiments: A Key to Managing Earth's Rapidly Changing Critical Zones

    NASA Astrophysics Data System (ADS)

    Richter, D., Jr.

    2014-12-01

    In a few decades, managers of Earth's Critical Zones (biota, humans, land, and water) will be challenged to double food and fiber production and diminish adverse effects of management on the wider environment. To meet these challenges, an array of scientific approaches is being used to increase understanding of Critical Zone functioning and evolution, and one of these approaches needs to be long-term soil field studies, to move us beyond black-boxing the belowground Critical Zone, i.e., to further our understanding of the processes driving changes in the soil environment. Long-term soil experiments (LTSEs) provide direct observations of soil change and functioning across time scales of decades, data critical for biological, biogeochemical, and environmental assessments of sustainability; for predictions of soil fertility, productivity, and soil-environment interactions; and for developing models at a wide range of temporal and spatial scales. Unfortunately, LTSEs globally are not in a good state: they take years to mature, are vulnerable to loss, and even today remain to be fully inventoried. Of the 250 LTSEs in a web-based network, results demonstrate that soils and belowground Critical Zones are highly dynamic and responsive to human management. The objective of this study is to review the contemporary state of LTSEs and consider how they contribute to three open questions: (1) can soils sustain a doubling of food production in the coming decades without further impinging on the wider environment, (2) how do soils interact with the global C cycle, and (3) how can soil management establish greater control over nutrient cycling. While LTSEs produce significant data and perspectives for all three questions, there is on-going need and opportunity for reviews of the long-term soil-research base, for establishment of an efficiently run network of LTSEs aimed at sustainability and improving management control over C and nutrient cycling, and for research teams that

  14. DREAM: Distributed Resources for the Earth System Grid Federation (ESGF) Advanced Management

    NASA Astrophysics Data System (ADS)

    Williams, D. N.

    2015-12-01

    The data associated with climate research is often generated, accessed, stored, and analyzed on a mix of unique platforms. The volume, variety, velocity, and veracity of this data creates unique challenges as climate research attempts to move beyond stand-alone platforms to a system that truly integrates dispersed resources. Today, sharing data across multiple facilities is often a challenge due to the large variance in supporting infrastructures. This results in data being accessed and downloaded many times, which requires significant amounts of resources, places a heavy analytic development burden on the end users, and leads to mismanaged resources. Working across U.S. federal agencies, international agencies, and multiple worldwide data centers, and spanning seven international network organizations, the Earth System Grid Federation (ESGF) has begun to solve this problem. Its architecture employs a system of geographically distributed peer nodes that are independently administered yet united by common federation protocols and application programming interfaces. However, significant challenges remain, including workflow provenance, modular and flexible deployment, scalability of a diverse set of computational resources, and more. Expanding on the existing ESGF, the Distributed Resources for the Earth System Grid Federation Advanced Management (DREAM) project will ensure that the access, storage, movement, and analysis of the large quantities of data that are processed and produced by diverse science projects can be dynamically distributed with proper resource management. This system will enable data from an infinite number of diverse sources to be organized and accessed from anywhere on any device (including mobile platforms). The approach offers a powerful roadmap for the creation and integration of a unified knowledge base of an entire ecosystem, including its many geophysical, geographical, social, political, agricultural, energy, transportation, and cyber aspects. The

  15. Integration of Earth System Models and Workflow Management under iRODS for the Northeast Regional Earth System Modeling Project

    NASA Astrophysics Data System (ADS)

    Lengyel, F.; Yang, P.; Rosenzweig, B.; Vorosmarty, C. J.

    2012-12-01

    The Northeast Regional Earth System Model (NE-RESM, NSF Award #1049181) integrates weather research and forecasting models, terrestrial and aquatic ecosystem models, a water balance/transport model, and mesoscale and energy systems input-output economic models developed by an interdisciplinary research team from academia and government with expertise in physics, biogeochemistry, engineering, energy, economics, and policy. NE-RESM is intended to forecast the implications of planning decisions on the region's environment, ecosystem services, energy systems and economy through the 21st century. Integration of model components and the development of cyberinfrastructure for interacting with the system is facilitated with the integrated Rule Oriented Data System (iRODS), a distributed data grid that provides archival storage with metadata facilities and a rule-based workflow engine for automating and auditing scientific workflows.

  16. Preliminary evaluation of effects of best management practices in the Black Earth Creek, Wisconsin, priority watershed

    Walker, J.F.; Graczyk, D.J.; Olem, H.

    1993-01-01

    Nonpoint-source contamination accounts for a substantial part of the water quality problems in many watersheds. The Wisconsin Nonpoint Source Water Pollution Abatement Program provides matching money for voluntary implementation of various best management practices (BMPs). The effectiveness of BMPs on a drainage-basin scale has not been adequately assessed in Wisconsin by use of data collected before and after BMP implementation. The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources, monitored water quality in the Black Earth Creek watershed in southern Wisconsin from October 1984 through September 1986 (pre-BMP conditions). BMP implementation began during the summer of 1989 and is planned to continue through 1993. Data collection resumed in fall 1989 and is intended to provide information during the transitional period of BMP implementation (1990-93) and 2 years of post-BMP conditions (1994-95). Preliminary results presented for two subbasins in the Black Earth Creek watershed (Brewery and Garfoot Creeks) are based on data collected during pre-BMP conditions and the first 3 years of the transitional period. The analysis includes the use of regressions to control for natural variability in the data and, hence, enhance the ability to detect changes. Data collected to date (1992) indicate statistically significant differences in storm mass transport of suspended sediment and ammonia nitrogen at Brewery Creek. The central tendency of the regression residuals has decreased with the implementation of BMPs; hence, the improvement in water quality in the Brewery Creek watershed is likely a result of BMP implementation. Differences in storm mass transport at Garfoot Creek were not detected, primarily because of an insufficient number of storms in the transitional period. As practice implementation continues, the additional data will be used to determine the level of management which results in significant improvements in water
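
    The regression-residual idea above can be sketched simply: fit pre-BMP storm loads against storm size to absorb natural variability, then examine whether post-implementation storms plot consistently below the pre-BMP line. A Python illustration with invented data, not the study's measurements:

      import numpy as np

      # Pre-BMP storms: regress log load on log storm depth to absorb
      # natural variability (all numbers invented for illustration).
      pre_rain = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # storm depth (inches)
      pre_load = np.array([0.9, 2.1, 2.8, 4.2, 4.9])   # sediment load (tons)
      coef = np.polyfit(np.log(pre_rain), np.log(pre_load), 1)

      # Transitional-period storms: residuals below zero suggest loads have
      # fallen beneath the pre-BMP norm for storms of the same size.
      post_rain = np.array([0.8, 1.2, 2.2])
      post_load = np.array([1.0, 1.5, 2.6])
      residuals = np.log(post_load) - np.polyval(coef, np.log(post_rain))
      print(np.round(residuals, 2))   # consistently negative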

  17. How Do Normal Faults Grow?

    NASA Astrophysics Data System (ADS)

    Jackson, C. A. L.; Bell, R. E.; Rotevatn, A.; Tvedt, A. B. M.

    2015-12-01

    Normal faulting accommodates stretching of the Earth's crust and is one of the fundamental controls on landscape evolution and sediment dispersal in rift basins. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, thus assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; thus, it may be argued, it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. However, in this talk we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because application of one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate that, in the case of seismic-scale growth faults, growth strata thickness patterns and relay zone kinematics, rather than displacement backstripping, should be assessed to directly constrain

  18. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASAs Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms a dedicated test-bed was developed for full Vehicle Management End-to-End Testing (VMET). For addressing fault management (FM

  19. Supporting Management of European Refugee Streams by Earth Observation and Geoinformation

    NASA Astrophysics Data System (ADS)

    Komp, K.-U.; Müterthies, A.

    2016-06-01

    The sharp increase in refugee numbers arriving in the European Union has recently caused major and manifold challenges for the member states and their administrative services. Location-based situation reports and maps may support refugee management from the local to the European level. The first support is mapping the geographical distribution of migrating people, which requires more or less real-time data. The actual data sources are location-related observations along the routes of refugees, current satellite observations and data mining results. These tools and data are used to monitor spatial distributions as well as to extrapolate the arrival of refugees for the subsequent weeks. The second support is the short-term update of the location of initial registration facilities and first reception facilities, their capacities, and their occupancy. The third management level is the systematic inquiry for unoccupied housing facilities and for empty places within built-up areas. Geo-coded data sets of house numbers have to be cross-referenced with city maps and communal inhabitants' address data. The legal aspects of data mining and secured access to personal data are strictly controlled by the administration, allowing only limited access and distribution of data and results. The paper does not disclose new scientific progress in Earth Observation and GIS, but demonstrates an urgently needed new combination of existing methods to support actual needs. The societal benefits of EO/GIS are no longer just potential possibilities, but actual results in real political, administrative and humanitarian day-to-day reality.

  20. Managing Earth to Make Future Development More Sustainable: Learning From a Megacity Like Hong Kong

    NASA Astrophysics Data System (ADS)

    Yim, W. W.; Ollier, C. D.

    2008-12-01

    Selected recent findings related to climate change in Hong Kong include: (1) The Hong Kong seafloor has yielded a ~0.5-million year record of climate and sea-level changes. (2) Greenhouse gases produced naturally from sub-aerially exposed continental shelves were a probable forcing mechanism in triggering the termination of past ice ages. (3) An analysis of annual mean temperature records has revealed that the urban heat island effect has contributed ~75 % of the warming. (4) Past volcanic eruptions are found to lower Hong Kong's temperature and to cause extremely dry and wet years. (5) No evidence can be found for an increase in frequency and intensity of typhoons based on the analysis of an 8,000-year record in the Pearl River Estuary. (6) The observed rate of sea-level rise in the South China Sea is much slower than the predictions of the IPCC Fourth Assessment. For the Earth's management, population growth and the depletion of non-renewable resources must be recognized as unsustainable. The human impact on the natural hydrological cycle is an important forcing mechanism in climate change. In order to delay the demise of the human race, management must include curbing population growth and much more waste recycling than at present.

  1. Spacecraft fault tolerance: The Magellan experience

    NASA Technical Reports Server (NTRS)

    Kasuda, Rick; Packard, Donna Sexton

    1993-01-01

    Interplanetary and Earth-orbiting missions are now imposing unique fault tolerance requirements upon spacecraft design. Mission success is the prime motivator for building spacecraft with fault-tolerant systems. The Magellan spacecraft had many such requirements imposed upon its design. Magellan met these requirements by building redundancy into all the major subsystem components and designing the onboard hardware and software with the capability to detect a fault, isolate it to a component, and issue commands to achieve a back-up configuration. This discussion is limited to fault protection, which is the autonomous capability to respond to a fault. The Magellan fault protection design is discussed, as well as the developmental and flight experiences and a summary of the lessons learned.

  2. Alpine Fault, New Zealand, SRTM Shaded Relief and Colored Height

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The Alpine fault runs parallel to, and just inland of, much of the west coast of New Zealand's South Island. This view was created from the near-global digital elevation model produced by the Shuttle Radar Topography Mission (SRTM) and is almost 500 kilometers (just over 300 miles) wide. Northwest is toward the top. The fault is extremely distinct in the topographic pattern, nearly slicing this scene in half lengthwise.

    In a regional context, the Alpine fault is part of a system of faults that connects a west dipping subduction zone to the northeast with an east dipping subduction zone to the southwest, both of which occur along the juncture of the Indo-Australian and Pacific tectonic plates. Thus, the fault itself constitutes the major surface manifestation of the plate boundary here. Offsets of streams and ridges evident in the field, and in this view of SRTM data, indicate right-lateral fault motion. But convergence also occurs across the fault, and this causes the continued uplift of the Southern Alps, New Zealand's largest mountain range, along the southeast side of the fault.

    Two visualization methods were combined to produce this image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the northwest-southeast (image top to bottom) direction, so that northwest slopes appear bright and southeast slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and tan, to white at the highest elevations.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect

  3. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    The engineering development of the new Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal missions and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team for addressing fault management early in the development lifecycle for the SLS initiative. As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that

  4. A decision analysis approach for risk management of near-earth objects

    NASA Astrophysics Data System (ADS)

    Lee, Robert C.; Jones, Thomas D.; Chapman, Clark R.

    2014-10-01

    Risk management of near-Earth objects (NEOs; e.g., asteroids and comets) that can potentially impact Earth is an important issue that took on added urgency with the Chelyabinsk event of February 2013. Thousands of NEOs large enough to cause substantial damage are known to exist, although only a small fraction of these have the potential to impact Earth in the next few centuries. The probability and location of a NEO impact are subject to complex physics and great uncertainty, and consequences can range from minimal to devastating, depending upon the size of the NEO and location of impact. Deflecting a potential NEO impactor would be complex and expensive, and inter-agency and international cooperation would be necessary. Such deflection campaigns may be risky in themselves, and mission failure may result in unintended consequences. The benefits, risks, and costs of different potential NEO risk management strategies have not been compared in a systematic fashion. We present a decision analysis framework addressing this hazard. Decision analysis is the science of informing difficult decisions. It is inherently multi-disciplinary, especially with regard to managing catastrophic risks. Note that risk analysis clarifies the nature and magnitude of risks, whereas decision analysis guides rational risk management. Decision analysis can be used to inform strategic, policy, or resource allocation decisions. First, a problem is defined, including the decision situation and context. Second, objectives are defined, based upon what the different decision-makers and stakeholders (i.e., participants in the decision) value as important. Third, quantitative measures or scales for the objectives are determined. Fourth, alternative choices or strategies are defined. Fifth, the problem is then quantitatively modeled, including probabilistic risk analysis, and the alternatives are ranked in terms of how well they satisfy the objectives. Sixth, sensitivity analyses are performed in
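
    Steps three through five of the process above amount to scoring alternatives against weighted objectives and ranking them. A deliberately simplified Python sketch, with invented objectives, weights, and scores (not values from the paper):

      # Invented objectives, weights, and 0-10 scores for illustration only.
      weights = {"lives_protected": 0.5, "cost": 0.3, "mission_risk": 0.2}

      alternatives = {
          "do_nothing":        {"lives_protected": 1, "cost": 10, "mission_risk": 10},
          "civil_defense":     {"lives_protected": 5, "cost": 7,  "mission_risk": 9},
          "kinetic_deflector": {"lives_protected": 9, "cost": 3,  "mission_risk": 4},
      }

      def score(name):
          """Weighted sum of an alternative's scores over all objectives."""
          return sum(w * alternatives[name][obj] for obj, w in weights.items())

      for name in sorted(alternatives, key=score, reverse=True):
          print(f"{name}: {score(name):.1f}")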

  5. Managing the Risk of Triggered Seismicity: Can We Identify (and Avoid) Potentially Active Faults? - A Practical Case Study in Oklahoma

    NASA Astrophysics Data System (ADS)

    Zoback, M. D.; Alt, R. C., II; Walsh, F. R.; Walters, R. J.

    2014-12-01

    It is well known that throughout the central and eastern U.S. there has been a marked increase in seismicity since 2009, at least some of which appears to be related to increased wastewater injection. No area has seen a greater increase in seismicity than Oklahoma. In this paper, we utilize newly available information on in situ stress orientation and relative magnitudes, the distribution of high-volume injection wells, and knowledge of the intervals used for wastewater disposal to identify the factors potentially contributing to the occurrence of triggered seismicity. While there are a number of sites where in situ stress data have been successfully used to identify potentially active faults, we are investigating whether this methodology can be implemented throughout a state utilizing the types of information frequently available in areas of oil and gas development. As an initial test of this concept, we have been compiling stress orientation data from wells throughout Oklahoma provided by private industry. Over fifty new high-quality data points, principally drilling-induced tensile fractures observed in image logs, result in a greatly improved understanding of the stress field in much of the state. A relatively uniform ENE direction of maximum compressive stress is observed, although stress orientations (and possibly relative stress magnitudes) differ in the southern and southwestern parts of the state. The proposed methodology can be tested in the area of the NE-trending fault that produced the M 5+ earthquakes in the Prague, OK sequence in 2011, and the Meers fault in southwestern OK, which produced a M~7 reverse faulting earthquake about 1100 years ago. This methodology can also be used to essentially rule out slip on other major faults in the area, such as the ~N-S trending Nemaha fault system. Additional factors leading to the occurrence of relatively large triggered earthquakes in Oklahoma are 1) the overall increase in injection volumes throughout the state in recent

  6. An integrated study of earth resources in the state of California using remote sensing techniques. [water and forest management

    NASA Technical Reports Server (NTRS)

    Colwell, R. N.

    1974-01-01

    Progress and results of an integrated study of California's water resources are discussed. The investigation concerns itself primarily with the usefulness of remote sensing in relation to two categories of problems: (1) water supply; and (2) water demand. Also considered is its applicability to forest management and timber inventory. The cost effectiveness and utility of remote sensors such as the Earth Resources Technology Satellite for water and timber management are presented.

  7. Active faults in Africa: a review

    NASA Astrophysics Data System (ADS)

    Skobelev, S. F.; Hanon, M.; Klerkx, J.; Govorova, N. N.; Lukina, N. V.; Kazmin, V. G.

    2004-03-01

    The active fault database and Map of Active Faults in Africa, at a scale of 1:5,000,000, were compiled according to the ILP Project II-2 "World Map of Major Active Faults". The data were collected in the Royal Museum of Central Africa, Tervuren, Belgium, and in the Geological Institute, Moscow, where the final edition was carried out. Active faults of Africa form three groups. The first group is represented by thrusts and reverse faults associated with compressed folds in northwest Africa. They belong to the western part of the Alpine-Central Asian collision belt. The faults disturb only the Earth's crust, and some of them do not penetrate deeper than the sedimentary cover. The second group comprises the faults of the Great African rift system. The faults form the known Western and Eastern branches, which are rifts with anomalous mantle below. The deep-seated mantle "hot" anomaly probably relates to the eastern volcanic branch. In the north, it joins the Aden-Red Sea rift zone. Active faults in Egypt, Libya and Tunisia may represent a link between the East African rift system and the Pantellerian rift zone in the Mediterranean. The third group includes rare faults in the west of Equatorial Africa. The data were scarce, so most of the faults of this group were identified solely by interpretation of space imagery and seismicity. Some longer faults of the group may continue the transverse faults of the Atlantic and thus can penetrate into the mantle. This seems evident for the Cameroon fault line.

  8. Can Earth System Model Provide Reasonable Natural Runoff Estimates to Support Water Management Studies?

    NASA Astrophysics Data System (ADS)

    Kao, S. C.; Shi, X.; Kumar, J.; Ricciuto, D. M.; Mao, J.; Thornton, P. E.

    2017-12-01

    With the concern of a changing hydrologic regime, there is a crucial need to better understand how water availability may change and influence water management decisions under projected future climate conditions. Although surface hydrology has long been simulated by the land model within the Earth System Modeling (ESM) framework, given the coarse horizontal resolution and lack of engineering-level calibration, raw runoff from ESMs is generally discarded by water resource managers when conducting hydro-climate impact assessments. To identify a likely path to improving the credibility of ESM-simulated natural runoff, we conducted a regional model simulation using the land component (ALM) of the Accelerated Climate Modeling for Energy (ACME) version 1, focusing on the conterminous United States (CONUS). Two very different forcing data sets, including (1) the conventional 0.5° CRUNCEP (v5, 1901-2013) and (2) the 1-km Daymet (v3, 1980-2013) aggregated to 0.5°, were used to conduct 20th-century transient simulations with satellite phenology. Additional meteorologic and hydrologic observations, including PRISM precipitation and U.S. Geological Survey WaterWatch runoff, were used for model evaluation. For various CONUS hydrologic regions (such as the Pacific Northwest), we found that Daymet can significantly improve the reasonableness of simulated ALM runoff even without intensive calibration. The large dry bias of CRUNCEP precipitation (evaluated against PRISM) in multiple CONUS hydrologic regions is believed to be the main reason for runoff underestimation. The results suggest that when driven with skillful precipitation estimates, ESMs have the ability to produce reasonable natural runoff estimates to support further water management studies. Nevertheless, model calibration will be required for regions (such as the Upper Colorado) where poor performance is shown for multiple different forcings.
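
    Evaluations of this kind typically reduce to a few skill scores comparing simulated and observed runoff. A minimal Python sketch of relative bias and Nash-Sutcliffe efficiency, with invented numbers (not the study's data or its exact metrics):

      import numpy as np

      def evaluate_runoff(sim, obs):
          """Relative bias and Nash-Sutcliffe efficiency of simulated runoff."""
          bias = (sim.mean() - obs.mean()) / obs.mean()
          nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
          return bias, nse

      obs = np.array([10.0, 30.0, 55.0, 40.0, 15.0])   # observed runoff (mm)
      sim = np.array([ 8.0, 26.0, 50.0, 38.0, 12.0])   # dry-biased simulation
      bias, nse = evaluate_runoff(sim, obs)
      print(f"bias: {bias:+.1%}, NSE: {nse:.2f}")      # bias: -10.7%, NSE: 0.96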

  9. Discover Earth

    NASA Technical Reports Server (NTRS)

    Steele, Colleen

    1998-01-01

    Discover Earth is a NASA-sponsored project for teachers of grades 5-12, designed to: (1) enhance understanding of the Earth as an integrated system; (2) enhance the interdisciplinary approach to science instruction; and (3) provide classroom materials that focus on those goals. Discover Earth is conducted by the Institute for Global Environmental Strategies in collaboration with Dr. Eric Barron, Director, Earth System Science Center, The Pennsylvania State University; and Dr. Robert Hudson, Chair, the Department of Meteorology, University of Maryland at College Park. The enclosed materials: (1) represent only part of the Discover Earth materials; (2) were developed by classroom teachers who are participating in the Discover Earth project; (3) utilize an investigative approach and online data; and (4) can be effectively adjusted to classrooms with greater/without technology access. The Discover Earth classroom materials focus on the Earth system and key issues of global climate change, including topics such as the greenhouse effect, clouds and Earth's radiation balance, surface hydrology and land cover, and volcanoes and climate change. All the materials developed to date are available online at http://www.strategies.org. You are encouraged to submit comments and recommendations about these materials to the Discover Earth project manager; contact information is listed below. You are welcome to duplicate all these materials.

  10. Role of reservoir simulation in development and management of complexly-faulted, multiple-reservoir Dulang field, offshore Malaysia: Holistic strategies

    SciT

    Sonrexa, K.; Aziz, A.; Solomon, G.J.

    1995-10-01

    The Dulang field, discovered in 1981, is a major oil field located offshore Malaysia in the Malay Basin. The Dulang Unit Area constitutes the central part of this exceedingly heterogeneous field. The Unit Area consists of 19 stacked shaly sandstone reservoirs which are divided into about 90 compartments with multiple fluid contacts owing to severe faulting. Current estimates put the Original-Oil-In-Place (OOIP) in the neighborhood of 700 million stock tank barrels (MMSTB). Production commenced in March 1991 and the current production is more than 50,000 barrels of oil per day (BOPD). In addition to other more conventional means, reservoir simulation has been employed from the very start as a vital component of the overall strategy to develop and manage this challenging field. More than 10 modeling studies have been completed by Petronas Carigali Sdn. Bhd. (Carigali) at various times during the short life of this field thus far. To add to that, Esso Production Malaysia Inc. (EPMI) has simultaneously conducted a number of independent studies. These studies have dealt with undersaturated compartments as well as those with small and large gas caps. They have paved the way for improved reservoir characterization, optimum development planning and prudent production practices. This paper discusses the modeling approaches and highlights the crucial role these studies have played on an ongoing basis in the development and management of the complexly-faulted, multi-reservoir Dulang Unit Area.

  11. Evolution of Information Management at the GSFC Earth Sciences (GES) Data and Information Services Center (DISC): 2006-2007

    NASA Technical Reports Server (NTRS)

    Kempler, Steven; Lynnes, Christopher; Vollmer, Bruce; Alcott, Gary; Berrick, Stephen

    2009-01-01

    Increasingly sophisticated National Aeronautics and Space Administration (NASA) Earth science missions have driven their associated data and data management systems from providing simple point-to-point archiving and retrieval to performing user-responsive distributed multisensor information extraction. To fully maximize the use of remote-sensor-generated Earth science data, NASA recognized the need for data systems that provide data access and manipulation capabilities responsive to research brought forth by advancing scientific analysis and the need to maximize the use and usability of the data. The decision by NASA to purposely evolve the Earth Observing System Data and Information System (EOSDIS) at the Goddard Space Flight Center (GSFC) Earth Sciences (GES) Data and Information Services Center (DISC) and other information management facilities was timely and appropriate. The GES DISC evolution focused on replacing the EOSDIS Core System (ECS) by reusing the in-house-developed, disk-based Simple, Scalable, Script-based Science Product Archive (S4PA) data management system and migrating data to the disk archives. The transition was completed in December 2007.

  12. Archive Management of NASA Earth Observation Data to Support Cloud Analysis

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Baynes, Kathleen; McInerney, Mark

    2017-01-01

    NASA collects, processes and distributes petabytes of Earth Observation (EO) data from satellites, aircraft, in situ instruments and model output, with an order of magnitude increase expected by 2024. Cloud-based web object storage (WOS) of these data can simplify the execution of such an increase. More importantly, it can also facilitate user analysis of those volumes by making the data available to the massively parallel computing power in the cloud. However, storing EO data in cloud WOS has a ripple effect throughout the NASA archive system with unexpected challenges and opportunities. One challenge is modifying data servicing software (such as Web Coverage Service servers) to access and subset data that are no longer on a directly accessible file system, but rather in cloud WOS. Opportunities include refactoring of the archive software to a cloud-native architecture; virtualizing data products by computing on demand; and reorganizing data to be more analysis-friendly.

  13. Academic and research capacity development in Earth observation for environmental management

    NASA Astrophysics Data System (ADS)

    Cassells, Gemma; Woodhouse, Iain H.; Patenaude, Genevieve; Tembo, Mavuto

    2011-10-01

    Sustainable environmental management is one of the key development goals of the 21st century. The importance of Earth observation (EO) for addressing current environmental problems is well recognized. Most developing countries are highly susceptible to environmental degradation; however, the capacity to monitor these changes is predominantly located in the developed world. Decades of aid and effort have been invested in capacity development (CD) with the goal of ensuring sustainable development. Academics, given their level of freedom and their wider interest in teaching and knowledge transfer, are ideally placed to act as catalysts for capacity building. In this letter, we make a novel investigation into the extent to which the EO academic research community is engaged in capacity development. Using the Web of Knowledge publication database (http://wok.mimas.ac.uk), we examined the geographical distribution of published EO related research (a) by country as object of research and (b) by authors' country of affiliation. Our results show that, while a significant proportion of EO research (44%) has developing countries as their object of research, less than 3% of publications have authors working in, or affiliated to, a developing country (excluding China, India and Brazil, which are not only countries in transition but also have well-established EO capacity). These patterns appear consistent over the past 20 years. Despite the wide awareness of the importance of CD, we show that significant progress on this front is required. We therefore propose a number of recommendations and best practices to ease collaboration and open access.

  14. Against the Grain: The Influence of Changing Agricultural Management on the Earth System

    NASA Astrophysics Data System (ADS)

    Foley, J. A.

    2007-12-01

    The rise of modern agriculture was one of the most transformative events in human history, and has forever changed our relationship to the natural world. By clearing tropical forests, practicing subsistence agriculture on marginal lands, and intensifying industrialized farmland production, agricultural practices are changing the world's landscapes in pervasive ways. In the past decade, we have made tremendous progress in monitoring agricultural expansion from satellites and modeling the associated environmental impacts. Over the same period, the Earth System Science research community has begun to recognize the importance of agricultural lands, particularly as they continue expanding at the expense of important natural ecosystems, potentially altering the planet's carbon cycle and climate. With the advent of new remote sensing and global modeling methods, several efforts have documented the expansion of agricultural lands, the corresponding loss of natural ecosystems, and how this may influence the earth system. But the geographic expansion of agricultural lands is not the whole story. While significant agricultural expansion (or extensification) has occurred in the past few decades, the intensification of agricultural practices, under the aegis of the "Green Revolution", has dramatically altered the relationship between humans and environmental systems across the world. Simply put, many of the world's existing agricultural lands are being used much more intensively as opportunities for agricultural expansion are exhausted elsewhere. In the last 40 years, global agricultural production has more than doubled, although global cropland has increased by only 12%, mainly through the use of high-yielding varieties of grain, increased reliance on irrigation, massive increases in chemical fertilization, and increased mechanization. Indeed, in the past 40 years there has been a 700% increase in global fertilizer use and a 70% increase in irrigated cropland area

  15. System for Earth Sample Registration SESAR: Services for IGSN Registration and Sample Metadata Management

    NASA Astrophysics Data System (ADS)

    Chan, S.; Lehnert, K. A.; Coleman, R. J.

    2011-12-01

    SESAR, the System for Earth Sample Registration, is an online registry for physical samples collected for Earth and environmental studies. SESAR generates and administers the International Geo Sample Number (IGSN), a unique identifier for samples that is dramatically advancing interoperability amongst information systems for sample-based data. SESAR was developed to provide the complete range of registry services, including definition of IGSN syntax and metadata profiles, registration and validation of name spaces requested by users, tools for users to submit and manage sample metadata, validation of submitted metadata, generation and validation of the unique identifiers, archiving of sample metadata, and public or private access to the sample metadata catalog. With the development of SESAR v3, we placed particular emphasis on creating enhanced tools that make metadata submission easier and more efficient for users, and that provide superior functionality for users to manage metadata of their samples in their private workspace, MySESAR. For example, SESAR v3 includes a module where users can generate custom spreadsheet templates to enter metadata for their samples, then upload these templates online for sample registration. Once the content of the template is uploaded, it is displayed online in an editable grid format. Validation rules are executed in real time on the grid data to ensure data integrity. Other new features of SESAR v3 include the capability to transfer ownership of samples to other SESAR users, the ability to upload and store images and other files in a sample metadata profile, and the tracking of changes to sample metadata profiles. In the next version of SESAR (v3.5), we will further improve the discovery, sharing, and registration of samples. For example, we are developing a more comprehensive suite of web services that will allow discovery and registration access to SESAR from external systems. Both batch and individual registrations will be possible
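
    The real-time grid validation described above is essentially a rule set applied to each uploaded row. A minimal sketch of that idea follows; the field names and rules are hypothetical stand-ins for illustration, not SESAR's actual schema or API.

        # Hypothetical row-level checks in the spirit of the grid validation
        # described above; REQUIRED fields and range rules are illustrative.
        REQUIRED = {"sample_name", "material", "latitude", "longitude"}

        def validate_row(row):
            errors = [f"missing field: {f}" for f in REQUIRED - row.keys()]
            lat, lon = row.get("latitude"), row.get("longitude")
            if lat is not None and not -90.0 <= float(lat) <= 90.0:
                errors.append("latitude out of range")
            if lon is not None and not -180.0 <= float(lon) <= 180.0:
                errors.append("longitude out of range")
            return errors

        rows = [
            {"sample_name": "CORE-001", "material": "Rock", "latitude": 40.7, "longitude": -73.9},
            {"sample_name": "CORE-002", "material": "Rock", "latitude": 95.0, "longitude": -73.9},
        ]
        for i, row in enumerate(rows):
            print(i, validate_row(row) or "ok")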

  16. San Andreas Fault in the Carrizo Plain

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The 1,200-kilometer (800-mile) San Andreas is the longest fault in California and one of the longest in North America. This perspective view of a portion of the fault was generated using data from the Shuttle Radar Topography Mission (SRTM), which flew on NASA's Space Shuttle last February, and an enhanced, true-color Landsat satellite image. The view shown looks southeast along the San Andreas where it cuts along the base of the mountains in the Temblor Range near Bakersfield. The fault is the distinctively linear feature to the right of the mountains. To the left of the range is a portion of the agriculturally rich San Joaquin Valley. In the background is the snow-capped peak of Mt. Pinos at an elevation of 2,692 meters (8,831 feet). The complex topography in the area is some of the most spectacular along the course of the fault. To the right of the fault is the famous Carrizo Plain. Dry conditions on the plain have helped preserve the surface trace of the fault, which is scrutinized by both amateur and professional geologists. In 1857, one of the largest earthquakes ever recorded in the United States occurred just north of the Carrizo Plain. With an estimated magnitude of 8.0, the quake severely shook buildings in Los Angeles, caused significant surface rupture along a 350-kilometer (220-mile) segment of the fault, and was felt as far away as Las Vegas, Nev. This portion of the San Andreas is an important area of study for seismologists. For visualization purposes, topographic heights displayed in this image are exaggerated two times.

    The elevation data used in this image was acquired by SRTM aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of Earth's land surface. To collect the 3-D SRTM data, engineers added a mast 60

  17. Terra Sapiens: The Role of Science in Fostering a Wisely Managed Earth

    NASA Astrophysics Data System (ADS)

    Grinspoon, D. H.

    2013-12-01

    Carl Sagan was sometimes shunned by the scientific community for his successful popularizations, but another factor was his activism on issues such as nuclear weapons and climate change. The question of whether Earth has entered a new geological epoch characterized by human influence has gained significance beyond the narrow question of stratigraphic nomenclature. The anthropocene has raised new questions about the 'nature of nature', about the false - or at least fluid - dichotomy between wild and managed environments, about what it is that, in a world already profoundly altered by human activities, we should be trying to conserve, and ultimately about how humanity can learn to live comfortably with world-changing technology. It also raises challenging questions about the role of scientists in the public arena. Astrobiology is largely a scientific study of the relationship between planets and life. On Earth this relationship has taken a dramatic new turn - a planetary transformation potentially as significant as the origin of life, the great oxygenation or the Cambrian 'explosion'. We are not the first species to cause catastrophic change in the quest for a new energy source. The cyanobacteria, in perfecting photosynthesis, liberated vast quantities of free oxygen, wreaking havoc on the global biosphere and climate. And yet, obviously, there seems to be something important differentiating us from cyanobacteria. When we try to describe that difference we use poorly defined (some may even say ironic) words like 'intelligence', 'consciousness', 'foresight', 'awareness' and 'responsibility.' Looking at the anthropocene as an event in planetary evolution gives us new perspective on the meaning of these terms. We may also ask if these phenomena could somehow be unique to Earth and if, given the plethora of exponential changes occurring now, they can become part of a stable or long-lived planetary epoch. It can be shown quantitatively that the prospect for successful

  18. National Aeronautics and Space Administration (NASA) Earth Science Research for Energy Management. Part 1; Overview of Energy Issues and an Assessment of the Potential for Application of NASA Earth Science Research

    NASA Technical Reports Server (NTRS)

    Zell, E.; Engel-Cox, J.

    2005-01-01

    Effective management of energy resources is critical for the U.S. economy, the environment, and, more broadly, for sustainable development and alleviating poverty worldwide. The scope of energy management is broad, ranging from energy production and end use to emissions monitoring and mitigation and long-term planning. Given the extensive NASA Earth science research on energy and related weather and climate-related parameters, and rapidly advancing energy technologies and applications, there is great potential for increased application of NASA Earth science research to selected energy management issues and decision support tools. The NASA Energy Management Program Element is already involved in a number of projects applying NASA Earth science research to energy management issues, with a focus on solar and wind renewable energy and developing interests in energy modeling, short-term load forecasting, energy efficient building design, and biomass production.

  19. Earth Observing System (EOS)/Advanced Microwave Sounding Unit-A (AMSU-A): Calibration management plan

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is the Calibration Management Plan for the Earth Observing System/Advanced Microwave Sounding Unit-A (AMSU-A). The plan defines calibration requirements, calibration equipment, and calibration methods for the AMSU-A, a 15 channel passive microwave radiometer that will be used for measuring global atmospheric temperature profiles from the EOS polar orbiting observatory. The AMSU-A system will also provide data to verify and augment that of the Atmospheric Infrared Sounder.

  20. Approach to Managing MEaSUREs Data at the GSFC Earth Science Data and Information Services Center (GES DISC)

    NASA Technical Reports Server (NTRS)

    Vollmer, Bruce; Kempler, Steven J.; Ramapriyan, Hampapuram K.

    2009-01-01

    A major need stated by the NASA Earth science research strategy is to develop long-term, consistent, and calibrated data and products that are valid across multiple missions and satellite sensors (NASA Solicitation for Making Earth System data records for Use in Research Environments (MEaSUREs), 2006-2010). Selected projects create long-term records of a given parameter, called Earth Science Data Records (ESDRs), based on mature algorithms that bring together continuous multi-sensor data. ESDRs and their associated algorithms, vetted by the appropriate community, are archived at a NASA-affiliated data center for archive, stewardship, and distribution. See http://measures-projects.gsfc.nasa.gov/ for more details. This presentation describes the NASA GSFC Earth Science Data and Information Services Center (GES DISC) approach to managing the MEaSUREs ESDR datasets assigned to the GES DISC (energy/water cycle related and atmospheric composition ESDRs). The GES DISC will utilize its experience to integrate existing and proven reusable data management components to accommodate the new ESDRs. Components include a data archive system (S4PA), a data discovery and access system (Mirador), and various web services for data access. In addition, if determined to be useful to the user community, the Giovanni data exploration tool will be made available for ESDRs. The GES DISC data integration methodology to be used for the MEaSUREs datasets is presented. The goals of this presentation are to share an approach to ESDR integration, and to initiate discussions amongst the data centers, data managers, and data providers for the purpose of gaining efficiencies in data management for MEaSUREs projects.

  1. Earth Observations in Support of Offshore Wind Energy Management in the Euro-Atlantic Region

    NASA Astrophysics Data System (ADS)

    Liberato, M. L. R.

    2017-12-01

    Climate change is one of the most important challenges of the 21st century, and the energy sector is a major contributor to GHG emissions. Greater attention has therefore been given to evaluating offshore wind energy potential along coastal areas, as offshore wind energy is expected to be more efficient and cost-effective in the near future. Europe has been developing offshore sites for over two decades, with annual capacity growing at gigawatt levels. Portugal is among these countries, with the development of the 25 MW WindFloat Atlantic wind farm project. The international scientific community has developed robust capability in research on the climate system components and their interactions. Climate scientists have gained expertise in the observation and analysis of the climate system as well as in the improvement of modeling and predictive capabilities. Developments in climate science are advancing our understanding and prediction of the variability and change of Earth's climate on all space and time scales, while improving skilful climate assessments and tools for dealing with the future challenges of a warming planet. However, the availability of ever larger datasets amplifies the complexity of manipulating, representing, and subsequently analyzing and interpreting those datasets. Today the challenge is to translate scientific understanding of the climate system into climate information for society and decision makers. Here we discuss the development of an integration tool for multidisciplinary research, which allows access, management, tailored pre-processing, and visualization of datasets, crucial to foster research as a service to society. One application is the assessment and monitoring of renewable energy variability, such as wind or solar energy, at several time and space scales. We demonstrate the ability of the e-science platform for planning, monitoring and management of renewable energy, particularly offshore wind energy in the Euro

  2. SCAP: a new methodology for safety management based on feedback from credible accident-probabilistic fault tree analysis system.

    PubMed

    Khan, F I; Iqbal, A; Ramesh, N; Abbasi, S A

    2001-10-12

    As it is conventionally done, strategies for incorporating accident-prevention measures in any hazardous chemical process industry are developed on the basis of input from risk assessment. However, the two steps, risk assessment and hazard reduction (or safety) measures, are not linked interactively in the existing methodologies. This prevents a quantitative assessment of the impacts of safety measures on risk control. We have made an attempt to develop a methodology in which risk assessment steps are interactively linked with the implementation of safety measures. The resultant system tells us the extent to which risk is reduced by each successive safety measure. It also tells, based on sophisticated maximum credible accident analysis (MCAA) and probabilistic fault tree analysis (PFTA), whether a given unit can ever be made 'safe'. The application of the methodology has been illustrated with a case study.
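
    The feedback loop the authors describe, recomputing the fault-tree top event after each safety measure, can be sketched in a few lines. The single OR gate, event names, probabilities, and reduction factors below are illustrative assumptions, not the paper's case study.

        # Recompute a top-event probability as successive safety measures scale
        # down basic-event probabilities. The tree here is one OR gate over
        # independent basic events; all numbers are illustrative.
        def or_gate(probs):
            p = 1.0
            for q in probs:
                p *= (1.0 - q)
            return 1.0 - p

        events = {"pump_seal_leak": 0.02, "relief_valve_fail": 0.01, "operator_error": 0.05}
        measures = [  # (safety measure, event affected, reduction factor)
            ("dual mechanical seal", "pump_seal_leak", 0.2),
            ("operator training",    "operator_error", 0.5),
        ]

        print(f"baseline top-event probability: {or_gate(events.values()):.4f}")
        for name, event, factor in measures:
            events[event] *= factor
            print(f"after {name}: {or_gate(events.values()):.4f}")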

  3. On inclusion of water resource management in Earth system models - Part 1: Problem definition and representation of water demand

    NASA Astrophysics Data System (ADS)

    Nazemi, A.; Wheater, H. S.

    2015-01-01

    Human activities have caused various changes to the Earth system, and hence the interconnections between human activities and the Earth system should be recognized and reflected in models that simulate Earth system processes. One key anthropogenic activity is water resource management, which determines the dynamics of human-water interactions in time and space and controls human livelihoods and economy, including energy and food production. There are immediate needs to include water resource management in Earth system models. First, the extent of human water requirements is increasing rapidly at the global scale and it is crucial to analyze the possible imbalance between water demand and supply under various scenarios of climate change and across various temporal and spatial scales. Second, recent observations show that human-water interactions, manifested through water resource management, can substantially alter the terrestrial water cycle, affect land-atmosphere feedbacks and may further interact with climate and contribute to sea-level change. Due to the importance of water resource management in determining the future of the global water and climate cycles, the World Climate Research Program's Global Energy and Water Exchanges project (WCRP-GEWEX) has recently identified gaps in describing human-water interactions as one of the grand challenges in Earth system modeling (GEWEX, 2012). Here, we divide water resource management into two interdependent elements, related firstly to water demand and secondly to water supply and allocation. In this paper, we survey the current literature on how various components of water demand have been included in large-scale models, in particular land surface and global hydrological models. Issues of water supply and allocation are addressed in a companion paper. The available algorithms to represent the dominant demands are classified based on the demand type, mode of simulation and underlying modeling assumptions. We discuss
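
    Among the demand components such surveys classify, irrigation is typically the dominant term, and a crop-coefficient style parameterization makes the idea concrete. The sketch below is a generic illustration under assumed values, not a scheme taken from this paper:

        # A common crop-coefficient style parameterization of irrigation demand:
        # demand is the portion of crop water use not met by effective rainfall.
        # The Kc, reference ET, and precipitation values are illustrative.
        def net_irrigation_demand(kc, et0_mm, effective_precip_mm):
            crop_water_use = kc * et0_mm  # potential crop evapotranspiration
            return max(0.0, crop_water_use - effective_precip_mm)

        # e.g. a mid-season crop (Kc = 1.15) in a month with 180 mm reference ET
        # and 40 mm effective precipitation:
        print(f"{net_irrigation_demand(1.15, 180.0, 40.0):.0f} mm of irrigation water")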

  4. Challenges of agricultural monitoring: integration of the Open Farm Management Information System into GEOSS and Digital Earth

    NASA Astrophysics Data System (ADS)

    Řezník, T.; Kepka, M.; Charvát, K.; Charvát, K., Jr.; Horáková, S.; Lukas, V.

    2016-04-01

    From a global perspective, agriculture is the single largest user of freshwater resources, with countries using an average of 70% of their surface water supplies. An essential proportion of agricultural water is recycled back to surface water and/or groundwater. Agricultural water pollution is therefore the subject of (inter)national legislation, such as the Clean Water Act in the United States of America, the European Water Framework Directive, and the Law of the People's Republic of China on the Prevention and Control of Water Pollution. Regular monitoring by means of sensor networks is needed in order to provide evidence of water pollution in agriculture. This paper describes the benefits of, and open issues stemming from, regular sensor monitoring provided by an Open Farm Management Information System. Emphasis is placed on descriptions of the processes and functionalities available to users, the underlying open data model, and definitions of open and lightweight application programming interfaces for the efficient management of collected (spatial) data. The presented Open Farm Management Information System has already been successfully registered under Phase 8 of the Global Earth Observation System of Systems (GEOSS) Architecture Implementation Pilot in order to support the wide variety of demands that are primarily aimed at agriculture pollution monitoring. The final part of the paper deals with the integration of the Open Farm Management Information System into the Digital Earth framework.

  5. Airborne hunt for faults in the Portland-Vancouver area

    Blakely, Richard J.; Wells, Ray E.; Yelin, Thomas S.; Stauffer, Peter H.; Hendley, James W.

    1996-01-01

    Geologic hazards in the Portland-Vancouver area include faults entirely hidden by river sediments, vegetation, and urban development. A recent aerial geophysical survey revealed patterns in the Earth's magnetic field that confirm the existence of a previously suspected fault running through Portland. It also indicated that this fault may pose a significant seismic threat. This discovery has enabled the residents of the populous area to better prepare for future earthquakes.

  6. Fault tolerant operation of switched reluctance machine

    NASA Astrophysics Data System (ADS)

    Wang, Wei

    The energy crisis and environmental challenges have driven industry towards more energy-efficient solutions. With nearly 60% of electricity consumed by various electric machines in the industry sector, advancement in the efficiency of the electric drive system is of vital importance. Adjustable speed drive systems (ASDS) provide excellent speed regulation and dynamic performance as well as dramatically improved system efficiency compared with conventional motors without electronic drives. Industry has witnessed tremendous growth in ASDS applications, not only as a driving force but also as an electric auxiliary system replacing bulky and low-efficiency auxiliary hydraulic and mechanical systems. With the vast penetration of ASDS, fault tolerant operation capability is more widely recognized as an important feature of drive performance, especially for aerospace and automotive applications and other industrial drive applications demanding high reliability. The Switched Reluctance Machine (SRM), a low-cost, highly reliable electric machine with fault tolerant operation capability, has drawn substantial attention in the past three decades. Nevertheless, SRM is not free of faults. Certain faults, such as converter faults, sensor faults (including position sensor faults), winding shorts, and eccentricity, are commonly shared among all ASDS. In this dissertation, a thorough understanding of various faults and their influence on the transient and steady-state performance of SRM is developed via simulation and experimental study, providing the necessary knowledge for fault detection and post-fault management. Lumped-parameter models are established for fast real-time simulation and drive control. Based on the behavior of the faults, a fault detection scheme is developed for the purpose of fast and reliable fault diagnosis. In order to improve the SRM power and torque capacity under faults, maximum torque per ampere excitation is conceptualized and validated through theoretical analysis and

  7. Forest Management in Earth System Modelling: a Vertically Discretised Canopy Description for ORCHIDEE and Effects on European Climate Since 1750

    NASA Astrophysics Data System (ADS)

    McGrath, M.; Luyssaert, S.; Naudts, K.; Chen, Y.; Ryder, J.; Otto, J.; Valade, A.

    2015-12-01

    Forest management has the potential to impact surface physical characteristics to the same degree that changes in land cover do. The impacts of land cover changes on the global climate are well-known. Despite an increasingly detailed understanding of the potential for forest management to affect climate, none of the current generation of Earth system models account for forest management through their land surface modules. We addressed this gap by developing and reparameterizing the ORCHIDEE land surface model to simulate the biogeochemical and biophysical effects of forest management. Through vertical discretization of the forest canopy and corresponding modifications to the energy budget, radiation transfer, and carbon allocation, forest management can now be simulated much more realistically on the global scale. This model was used to explore the effect of forest management on European climate since 1750. Reparameterization was carried out to replace generic forest plant functional types with real tree species, covering the most dominant species across the continent. Historical forest management and land cover maps were created to run the simulations from 1600 until the present day. The model was coupled to the atmospheric model LMDz to explore differences in climate between 1750 and 2010 and attribute those differences to changes in atmospheric carbon dioxide concentrations and concurrent warming, land cover, species composition, and wood extraction. Although Europe's forests are considered a carbon sink in this century, our simulations show that modern forests are still experiencing a carbon debt compared to their historical values.

  8. Incorporating agricultural management into an earth system model for the Pacific Northwest region: Interactions between climate, hydrology, agriculture, and economics

    NASA Astrophysics Data System (ADS)

    Chinnayakanahalli, K.; Adam, J. C.; Stockle, C.; Nelson, R.; Brady, M.; Rajagopalan, K.; Barber, M. E.; Dinesh, S.; Malek, K.; Yorgey, G.; Kruger, C.; Marsh, T.; Yoder, J.

    2011-12-01

    For better management and decision making in the face of climate change, earth system models must explicitly account for natural resource and agricultural management activities. Incorporating crop system, water management, and economic models into an earth system modeling framework can help in answering questions related to the impacts of climate change on irrigation water and crop productivity, how agricultural producers can adapt to anticipated climate change, and how agricultural practices can mitigate climate change. Herein we describe the coupling of the Variable Infiltration Capacity (VIC) land surface model, which solves the water and energy balances of the hydrologic cycle at regional scales, with a crop-growth model, CropSyst. This new model, VIC-CropSyst, is the land surface model that will be used in a new regional-scale model development project focused on the Pacific Northwest, termed BioEarth. Here we describe the VIC-CropSyst coupling process and its application over the Columbia River basin (CRB) using agricultural-specific land cover information. The Washington State Department of Agriculture (WSDA) and U.S. Department of Agriculture (USDA) cropland data layers were used to identify agricultural land use patterns, in which both irrigated and dry land crops were simulated. The VIC-CropSyst model was applied over the CRB for the historical period of 1976-2006 to establish a baseline for surface water availability, irrigation demand, and crop production. The model was then applied under future (2030s) climate change scenarios derived from statistically-downscaled Global Circulation Model output under two emission scenarios (A1B and B1). Differences between simulated future and historical irrigation demand, irrigation water availability, and crop production were used in an economics model to identify the most economically viable future cropping pattern. The economics model was run under varying scenarios of regional growth, trade, water pricing, and

  9. Why the Petascale era will drive improvements in the management of the full lifecycle of earth science data.

    NASA Astrophysics Data System (ADS)

    Wyborn, L.

    2012-04-01

    The advent of the petascale era, in both storage and compute facilities, will offer new opportunities for earth scientists to transform the way they do their science and to undertake cross-disciplinary science at a global scale. No longer will data have to be averaged and subsampled: it can be analysed to its fullest resolution at national or even global scales. Much larger data volumes can be analysed in single passes and at higher resolution: large scale cross domain science is now feasible. However, in general, earth sciences have been slow to capitalise on the potential of these new petascale compute facilities: many struggle to even use terascale facilities. Making use of these new facilities will require a vast improvement in the management of the full life cycle of data: in reality it will need to be transformed. Many of our current issues with earth science data are historic and stem from the limitations of early data storage systems. As storage was so expensive, metadata was usually stored separately from the data and attached as a readme file. Likewise, attributes that defined uncertainty, reliability and traceability were recorded in lab notebooks and rarely stored with the data. Data were routinely transferred as files. The new opportunities make the traditional paradigm of discover, display, download, and process locally too limited. For data access and assimilation to be improved, data will need to be self-describing. For heterogeneous data to be rapidly integrated, attributes such as reliability, uncertainty and traceability will need to be systematically recorded with each observation. The petascale era also requires that individual data files be transformed and aggregated into calibrated data arrays or data cubes. Standards become critical and are the enablers of integration. These changes are common to almost every science discipline. What makes earth sciences unique is that many domains record time series data, particularly in the
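
    The self-describing requirement, with uncertainty and traceability carried alongside each observation rather than in a separate readme, maps naturally onto labeled-array tooling. A minimal sketch using xarray is below; it is one possible realization, not one prescribed by the author, and the variable, station, and version names are invented for illustration.

        import numpy as np
        import xarray as xr

        # Each observation carries its uncertainty as a coordinate, and
        # provenance lives in the metadata attributes, so the array remains
        # interpretable without a separate readme file.
        temperature = xr.DataArray(
            np.array([14.2, 14.5, 14.1]),
            dims="time",
            coords={
                "time": [0, 1, 2],                         # time step index
                "uncertainty": ("time", [0.3, 0.3, 0.4]),  # per-observation 1-sigma
            },
            attrs={"units": "degC", "source": "station_042", "processing": "v2.1 calibrated"},
        )
        print(temperature)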

  10. Influence of slip-surface geometry on earth-flow deformation, Montaguto earth flow, southern Italy

    Guerriero, L.; Coe, Jeffrey A.; Revellio, P.; Grelle, G.; Pinto, F.; Guadagno, F.

    2016-01-01

    We investigated relations between slip-surface geometry and deformational structures and hydrologic features at the Montaguto earth flow in southern Italy between 1954 and 2010. We used 25 boreholes, 15 static cone-penetration tests, and 22 shallow-seismic profiles to define the geometry of basal- and lateral-slip surfaces; and 9 multitemporal maps to quantify the spatial and temporal distribution of normal faults, thrust faults, back-tilted surfaces, strike-slip faults, flank ridges, folds, ponds, and springs. We infer that the slip surface is a repeating series of steeply sloping surfaces (risers) and gently sloping surfaces (treads). Stretching of earth-flow material created normal faults at risers, and shortening of earth-flow material created thrust faults, back-tilted surfaces, and ponds at treads. Individual pairs of risers and treads formed quasi-discrete kinematic zones within the earth flow that operated in unison to transmit pulses of sediment along the length of the flow. The locations of strike-slip faults, flank ridges, and folds were not controlled by basal-slip surface topography but were instead dependent on earth-flow volume and lateral changes in the direction of the earth-flow travel path. The earth-flow travel path was strongly influenced by inactive earth-flow deposits and pre-earth-flow drainages whose positions were determined by tectonic structures. The implications of our results that may be applicable to other earth flows are that structures with strikes normal to the direction of earth-flow motion (e.g., normal faults and thrust faults) can be used as a guide to the geometry of basal-slip surfaces, but that depths to the slip surface (i.e., the thickness of an earth flow) will vary as sediment pulses are transmitted through a flow.

  11. Active faults newly identified in Pacific Northwest

    NASA Astrophysics Data System (ADS)

    Balcerak, Ernie

    2012-05-01

    The Bellingham Basin, which lies north of Seattle and south of Vancouver around the border between the United States and Canada in the northern part of the Cascadia subduction zone, is important for understanding the regional tectonic setting and current high rates of crustal deformation in the Pacific Northwest. Using a variety of new data, Kelsey et al. identified several active faults in the Bellingham Basin that had not been previously known. These faults lie more than 60 kilometers farther north of the previously recognized northern limit of active faulting in the area. The authors note that the newly recognized faults could produce earthquakes with magnitudes between 6 and 6.5 and thus should be considered in hazard assessments for the region. (Journal of Geophysical Research-Solid Earth, doi:10.1029/2011JB008816, 2012)

  12. Computing Fault Displacements from Surface Deformations

    NASA Technical Reports Server (NTRS)

    Lyzenga, Gregory; Parker, Jay; Donnellan, Andrea; Panero, Wendy

    2006-01-01

    Simplex is a computer program that calculates locations and displacements of subterranean faults from data on Earth-surface deformations. The calculation involves inversion of a forward model (given a point source representing a fault in an isotropic, elastic half-space, the forward model calculates the resulting surface deformations, displacements, and strains). The inversion involves the use of nonlinear, multiparameter estimation techniques. The input surface-deformation data can be in multiple formats, with absolute or differential positioning. The input data can be derived from multiple sources, including interferometric synthetic-aperture radar, the Global Positioning System, and strain meters. Parameters can be constrained or free. Estimates can be calculated for single or multiple faults. Estimates of parameters are accompanied by reports of their covariances and uncertainties. Simplex has been tested extensively against forward models and against other means of inverting geodetic data and seismic observations. This work
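
    This is not Simplex's own forward model, but the invert-a-forward-model idea can be illustrated with the classic two-dimensional screw-dislocation model for a long strike-slip fault, u(x) = (slip/pi) * arctan(x/depth), fitting slip and locking depth to surface velocities. Everything below is a self-contained toy with synthetic data.

        import numpy as np
        from scipy.optimize import least_squares

        def forward(params, x):
            """Fault-parallel surface velocity at distance x (km) from the trace."""
            slip, depth = params
            return (slip / np.pi) * np.arctan2(x, depth)

        # Synthetic "observations" from a fault with 30 mm/yr slip, 12 km depth.
        rng = np.random.default_rng(0)
        x_obs = np.linspace(-50.0, 50.0, 40)
        u_obs = forward([30.0, 12.0], x_obs) + rng.normal(0, 0.5, x_obs.size)

        # Nonlinear multiparameter estimation from a rough initial guess.
        fit = least_squares(lambda p: forward(p, x_obs) - u_obs, x0=[10.0, 5.0])
        print("estimated slip (mm/yr), locking depth (km):", fit.x)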

  13. Applications notice. [application of space techniques to earth resources, environment management, and space processing

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The discipline programs of the Space and Terrestrial (S&T) Applications Program are described and examples of research areas of current interest are given. Applications of space techniques to improve conditions on Earth are summarized. Discipline programs discussed include: resource observations; environmental observations; communications; materials processing in space; and applications systems/information systems. Information on the format for submission of unsolicited proposals for research related to the S&T Applications Program is given.

  14. Use of data from space for earth resources exploration and management in Alabama

    NASA Technical Reports Server (NTRS)

    Lamoreaux, P. E.; Henry, H. R.

    1972-01-01

    The University of Alabama, the Geological Survey of Alabama, and the George C. Marshall Space Flight Center are involved in an interagency, interdisciplinary effort to use remotely sensed, multispectral observations to yield improved and timely assessment of earth resources and environmental quality in Alabama. It is the goal of this effort to interpret these data and provide them in a format which is meaningful to and readily usable by agencies, industries, and individuals who are potential users throughout the State.

  15. NASA Earth Observations Informing Renewable Energy Management and Policy Decision Making

    NASA Technical Reports Server (NTRS)

    Eckman, Richard S.; Stackhouse, Paul W., Jr.

    2008-01-01

    The NASA Applied Sciences Program partners with domestic and international governmental organizations, universities, and private entities to improve their decisions and assessments. These improvements are enabled by using the knowledge generated from research resulting from spacecraft observations and model predictions conducted by NASA and providing these as inputs to the decision support and scenario assessment tools used by partner organizations. The Program is divided into eight societal benefit areas, aligned in general with the Global Earth Observation System of Systems (GEOSS) themes. The Climate Application of the Applied Sciences Program has as one of its focuses, efforts to provide for improved decisions and assessments in the areas of renewable energy technologies, energy efficiency, and climate change impacts. The goals of the Applied Sciences Program are aligned with national initiatives such as the U.S. Climate Change Science and Technology Programs and with those of international organizations including the Group on Earth Observations (GEO) and the Committee on Earth Observation Satellites (CEOS). Activities within the Program are funded principally through proposals submitted in response to annual solicitations and reviewed by peers.

  16. Fault orientations in extensional and conjugate strike-slip environments and their implications

    Thatcher, W.; Hill, D.P.

    1991-01-01

    Seismically active conjugate strike-slip faults in California and Japan typically have mutually orthogonal right- and left-lateral fault planes. Normal-fault dips at earthquake nucleation depths are concentrated between 40° and 50°. The observed orientations and their strong clustering are surprising, because conventional faulting theory suggests fault initiation with conjugate planes intersecting at 60° and 120° and a 60° normal-fault dip, or fault reactivation with a broad range of permitted orientations. The observations place new constraints on the mechanics of fault initiation, rotation, and evolutionary development. We speculate that the data could be explained by fault rotation into the observed orientations and deactivation for greater rotation, or by formation of localized shear zones beneath the brittle-ductile transition in Earth's crust. Initiation as weak frictional faults seems unlikely. -Authors
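
    The 60° figures quoted from conventional theory come from Coulomb-Anderson arithmetic: failure planes form at 45° - phi/2 to the maximum compressive stress, where phi = arctan(mu) is the internal friction angle. A worked sketch of that arithmetic, assuming a typical mu of 0.6, is below.

        import math

        # Coulomb-Anderson prediction: for a normal-faulting regime (sigma-1
        # vertical) the fault dip is 45 + phi/2 degrees; conjugate strike-slip
        # planes intersect at 90 - phi degrees across sigma-1.
        mu = 0.6                           # assumed typical internal friction
        phi = math.degrees(math.atan(mu))  # ~31 degrees

        print(f"predicted normal-fault dip: {45.0 + phi / 2.0:.1f} deg")       # ~60 deg
        print(f"conjugate planes intersect at: {90.0 - phi:.1f} deg")          # ~60 deg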

  17. Is Earth F**ked? Dynamical Futility of Global Environmental Management and Possibilities for Sustainability via Direct Action Activism

    NASA Astrophysics Data System (ADS)

    Werner, B.

    2012-12-01

    Environmental challenges are dynamically generated within the dominant global culture principally by the mismatch between short-time-scale market and political forces driving resource extraction/use and longer-time-scale accommodations of the Earth system to these changes. Increasing resource demand is leading to the development of two-way, nonlinear interactions between human societies and environmental systems that are becoming global in extent, either through globalized markets and other institutions or through coupling to global environmental systems such as climate. These trends are further intensified by dissipation-reducing technological advances in transactions, communication and transport, which suppress emergence of longer-time-scale economic and political levels of description and facilitate long-distance connections, and by predictive environmental modeling, which strengthens human connections to a short-time-scale virtual Earth, and weakens connections to the longer time scales of the actual Earth. Environmental management seeks to steer fast scale economic and political interests of a coupled human-environmental system towards longer-time-scale consideration of benefits and costs by operating within the confines of the dominant culture using a linear, engineering-type connection to the system. Perhaps as evidenced by widespread inability to meaningfully address such global environmental challenges as climate change and soil degradation, nonlinear connections reduce the ability of managers to operate outside coupled human-environmental systems, decreasing their effectiveness in steering towards sustainable interactions and resulting in managers slaved to short-to-intermediate-term interests. In sum, the dynamics of the global coupled human-environmental system within the dominant culture precludes management for stable, sustainable pathways and promotes instability. Environmental direct action, resistance taken from outside the dominant culture, as in

  18. The "It's Not My Fault!" Exercise: Exploring the Causes and Consequences of Managers' Explanations for Poor Performance

    ERIC Educational Resources Information Center

    Paglis, Laura L.

    2008-01-01

    Experienced managers know that perceptions matter greatly when it comes to working effectively with employees. The task for organizational behavior (OB) instructors, especially in the undergraduate classroom, is to make the perceptions topic come alive for students who may not appreciate at first the application and significance of this subject…

  19. Tools for developing a quality management program: proactive tools (process mapping, value stream mapping, fault tree analysis, and failure mode and effects analysis).

    PubMed

    Rath, Frank

    2008-01-01

    This article examines the concepts of quality management (QM) and quality assurance (QA), as well as the current state of QM and QA practices in radiotherapy. A systematic approach incorporating a series of industrial engineering-based tools is proposed, which can be applied in health care organizations proactively to improve process outcomes, reduce risk and/or improve patient safety, improve through-put, and reduce cost. This tool set includes process mapping and process flowcharting, failure modes and effects analysis (FMEA), value stream mapping, and fault tree analysis (FTA). Many health care organizations do not have experience in applying these tools and therefore do not understand how and when to use them. As a result there are many misconceptions about how to use these tools, and they are often incorrectly applied. This article describes these industrial engineering-based tools and also how to use them, when they should be used (and not used), and the intended purposes for their use. In addition the strengths and weaknesses of each of these tools are described, and examples are given to demonstrate the application of these tools in health care settings.
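
    Of the tools listed, FMEA is the most arithmetic: each failure mode is scored for severity, occurrence, and detectability, and modes are ranked by the product of the three, the Risk Priority Number (RPN). A minimal sketch follows, with invented radiotherapy-flavored failure modes and scores; real FMEA scoring scales and modes come from the clinical team.

        # Minimal FMEA sketch: rank failure modes by Risk Priority Number,
        # RPN = severity x occurrence x detectability (each commonly scored
        # 1-10). The failure modes and scores below are illustrative only.
        failure_modes = [
            ("wrong patient setup",    9, 3, 4),
            ("dose miscalculation",   10, 2, 3),
            ("interrupted treatment",  4, 5, 2),
        ]

        ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
        for name, sev, occ, det in ranked:
            print(f"{name:25s} RPN = {sev * occ * det}")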

  20. Fault zone hydrogeology

    NASA Astrophysics Data System (ADS)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

    Deformation along faults in the shallow crust (< 1 km) introduces permeability heterogeneity and anisotropy, which has an important impact on processes such as regional groundwater flow, hydrocarbon migration, and hydrothermal fluid circulation. Fault zones have the capacity to be hydraulic conduits connecting shallow and deep geological environments, but simultaneously the fault cores of many faults often form effective barriers to flow. The direct evaluation of the impact of faults on fluid flow patterns remains a challenge and requires a multidisciplinary research effort of structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface- and subsurface observations from diverse rock types, from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the disciplines of structural geology and hydrogeology to co-evolve and
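
    The conduit-barrier behavior described above follows from standard series/parallel averaging: flow across a layered fault zone samples a harmonic mean (dominated by the low-permeability core), while flow along it samples an arithmetic mean (dominated by the damage zone). A sketch of that arithmetic, with illustrative permeabilities and widths, is below.

        # Why a low-permeability fault core makes the zone a barrier across
        # strike but leaves it a conduit along strike. Values are illustrative.
        def harmonic_mean(ks, widths):
            return sum(widths) / sum(w / k for k, w in zip(ks, widths))

        def arithmetic_mean(ks, widths):
            return sum(k * w for k, w in zip(ks, widths)) / sum(widths)

        k = [1e-13, 1e-18, 1e-13]  # damage zone, fault core, damage zone (m^2)
        w = [10.0, 0.5, 10.0]      # widths (m)

        print(f"across-fault permeability: {harmonic_mean(k, w):.2e} m^2")    # barrier
        print(f"along-fault permeability:  {arithmetic_mean(k, w):.2e} m^2")  # conduit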

  1. Scaling Forest Management Practices in Earth System Models: Case Study of Southeast and Pacific Northwest Forests

    NASA Astrophysics Data System (ADS)

    Pourmokhtarian, A.; Becknell, J. M.; Hall, J.; Desai, A. R.; Boring, L. R.; Duffy, P.; Staudhammer, C. L.; Starr, G.; Dietze, M.

    2014-12-01

    A wide array of human-induced factors can alter the structure and function of forests, including climate change, natural disturbance, and management. While there have been numerous studies on climate change impacts on forests, interactions of management with changing climate and natural disturbance are poorly studied. Forecasts of the range of plausible responses of forests to climate change and management are needed for informed decision making on new management approaches under changing climate, as well as adaptation strategies for coming decades. Terrestrial biosphere models (TBMs) provide an excellent opportunity to investigate and assess simultaneous responses of terrestrial ecosystems to climatic perturbations and management across multiple spatio-temporal scales, but currently do not represent a wide array of management activities known to impact carbon, water, surface energy fluxes, and biodiversity. The Ecosystem Demography model 2 (ED2) incorporates non-linear impacts of fine-scale (~10^-1 km) heterogeneity in ecosystem structure both horizontally and vertically at a plant level. Therefore it is an ideal candidate for incorporating different forest management practices and testing various hypotheses under changing climate and across various spatial scales. The management practices that we implemented were: clear-cut, conversion, planting, partial harvest, low-intensity fire, restoration, salvage, and herbicide. The results were validated against observed data across 8 different sites in the U.S. Southeast (Duke Forest, Joseph Jones Ecological Research Center, North Carolina Loblolly Pine, and Ordway-Swisher Biological Station) and Pacific Northwest (Metolius Research Natural Area, H.J. Andrews Experimental Forest, Wind River Field Station, and Mount Rainier National Park). These sites differ with regard to climate, vegetation, soil, and historical land disturbance as well as management approaches. Results showed that different management practices could successfully

  2. Machine Learning of Fault Friction

    NASA Astrophysics Data System (ADS)

    Johnson, P. A.; Rouet-Leduc, B.; Hulbert, C.; Marone, C.; Guyer, R. A.

    2017-12-01

    We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to this acoustic data in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus comprising fault blocks surrounding fault gouge made of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load) and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick slip and slow slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774. Rouet-Leduc, B. et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025
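
    The train-then-test-on-unseen-data workflow described above can be sketched with scikit-learn. The code below is a toy stand-in, with a synthetic "acoustic" feature tracking a fabricated stress history; it is not the authors' data, feature set, or pipeline.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # Synthetic stand-in: a fake stress history over slip cycles, and an
        # "acoustic variance" feature that (by construction) tracks the stress.
        rng = np.random.default_rng(42)
        n_windows = 500
        stress = np.sin(np.linspace(0, 20, n_windows)) + 1.5
        signal_var = stress**2 + rng.normal(0, 0.1, n_windows)
        features = np.column_stack([signal_var, np.sqrt(np.abs(signal_var))])

        # Train on early windows, then score on later windows the model
        # has never seen, mirroring the procedure described in the abstract.
        split = 400
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(features[:split], stress[:split])
        print("R^2 on held-out windows:", model.score(features[split:], stress[split:]))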

  3. Anatomy of landslides along the Dead Sea Transform Fault System in NW Jordan

    NASA Astrophysics Data System (ADS)

    Dill, H. G.; Hahne, K.; Shaqour, F.

    2012-03-01

    In the mountainous region north of Amman, Jordan, Cenomanian calcareous rocks are being monitored constantly for mass wasting processes, which occasionally cause severe damage to the Amman-Irbid Highway. Satellite remote sensing data (Landsat TM, ASTER, and SRTM) and ground measurements are applied to investigate the anatomy of landslides along the Dead Sea Transform Fault System (DSTFS), a prominent strike-slip fault. The joints and faults pertinent to the DSTFS match the architectural elements identified in landslides of different sizes. This similarity attests to a close genetic relation between the tectonic setting of one of the most prominent fault zones on Earth and modern geomorphologic processes. Six indicators stand out in particular: 1) The fractures developing in N-S and splay faults represent the N-S lateral movement of the DSTFS. They governed the position of the landslides. 2) Cracks and faults aligned NE-SW to NNW-SSW were caused by compressional stress. They were subsequently reactivated during extensional processes and used in some cases as slip planes during mass wasting. 3) Minor landslides with NE-SW straight scarps were derived from compressional features which were turned into slip planes during the incipient stages of mass wasting. They occur mainly along the slopes in small wadis or where a wide wadi narrows upstream. 4) Major landslides with curved instead of straight scarps and rotational slides are representative of a more advanced level of mass wasting. These areas, encountered mainly in large wadis with steep slopes or where slopes are undercut by road construction works, should be marked as high-risk areas on maps and in land management projects. 5) The spatial relation between minor faults and slope angle is crucial to the vulnerability of the areas in terms of mass wasting. 6) Springs lined up along faults cause serious problems for engineering geology in that they step up the behavior of marly

  4. Earth meandering

    NASA Astrophysics Data System (ADS)

    Asadiyan, H.; Zamani, A.

    2009-04-01

    In this paper we try to set aside the current Global Tectonic Model and look at the tectonic evolution of the Earth from a new point of view. Our new dynamic model is based on the study of river meandering (RM), from which we infer a new concept, Earth meandering (EM). In a universal gravitational field, if we consider a clockwise spiral galaxy model rotating above the Ninety East Ridge (the geotectonic axis, GA), this system, applying a torsion field (like the geomagnetic field) in a side direction from the Rocky Mountains (west geotectonic pole, WGP) to the Tibetan Plateau, TP (east geotectonic pole, EGP), appears to pull mass from the WGP and push it toward the EGP through its rolling dynamics. According to this idea, we see in the topographic map that North America and Greenland are pulled like a tongue from the Pacific mouth toward the TP. In effect, this system rolls or meanders the Earth over itself fractally, from small scales to large scales, and what we see in river meandering and Earth meandering are two faces of one coin. A river transports water and sediments from high elevation to lower elevation; likewise in EM, mass is transported from high altitude (the Rocky Mountains) to lower altitude (the Himalaya) along an 'S'-shaped geodetic line, an optimum path connecting points of higher to lower altitude as a kind of Euler Elastica (EE). These curves are responsible for mass spreading (source) and mass concentration (sink). In this regard, the tilt of the Earth's spin axis plays an important role. The 'S' curves are part of a sigmoidal shape formed by the intersection of the Earth's rolling with the Earth globe, and are the actual feature of transform faults and river meanders. The longitudinal profile of mature rivers, as part of an 'S' curve, is also a kind of EE. The 'S' that bounds the whole Earth is named S-1 (S order 1), and the cube corresponding to it, which represents Earth fracturing at the global scale, is named C-1 (cube order 1, or side-vergence cube, SVC). C-1 is the biggest cycle of the spiral polygon, so it is not completely closed and has a separation of about the diameter of C-7. Inside the SVC we introduce a cone

  5. Archive Management of NASA Earth Observation Data to Support Cloud Analysis

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Baynes, Kathleen; McInerney, Mark A.

    2017-01-01

    NASA collects, processes and distributes petabytes of Earth Observation (EO) data from satellites, aircraft, in situ instruments and model output, with an order of magnitude increase expected by 2024. Cloud-based web object storage (WOS) of these data can simplify accommodating such an increase. More importantly, it can also facilitate user analysis of those volumes by making the data available to the massively parallel computing power in the cloud. However, storing EO data in cloud WOS has a ripple effect throughout the NASA archive system with unexpected challenges and opportunities. One challenge is modifying data servicing software (such as Web Coverage Service servers) to access and subset data that are no longer on a directly accessible file system, but rather in cloud WOS. Opportunities include refactoring of the archive software to a cloud-native architecture; virtualizing data products by computing on demand; and reorganizing data to be more analysis-friendly.
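
    As a rough illustration of the access pattern described here, the sketch below opens a granule held in S3-style cloud object storage and extracts a spatial subset without downloading the whole file. It assumes the s3fs, xarray, and h5netcdf packages are available; the bucket, object key, variable name, and coordinate ranges are hypothetical placeholders.

    # Minimal sketch: subset an EO granule directly from cloud web object
    # storage (WOS). Bucket, key, and variable names are hypothetical.
    import s3fs            # filesystem-like interface to S3 object storage
    import xarray as xr    # labeled N-D arrays with lazy access

    fs = s3fs.S3FileSystem(anon=True)  # anonymous/public access for this sketch

    # Open the remote NetCDF/HDF5 object as a file-like stream; only the
    # byte ranges needed for the requested subset are actually fetched.
    with fs.open("example-eo-archive/granules/example_granule.nc", "rb") as f:
        ds = xr.open_dataset(f, engine="h5netcdf")
        # A spatial subset ("coverage"), analogous to a WCS request, assuming
        # ascending lat/lon coordinates in the file.
        subset = ds["surface_temperature"].sel(
            lat=slice(30, 45), lon=slice(-110, -90)
        ).load()           # materialize just this slice in memory

    print(subset.shape)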

  6. Engaging and Empowering the National Park Service to apply Earth Observations to Management Decisions

    NASA Astrophysics Data System (ADS)

    Clayton, A.; Ross, K. W.; Crepps, G.; Childs-Gleason, L. M.; Ruiz, M. L.; Rogers, L.; Allsbrook, K. N.

    2017-12-01

    Since 2015, the NASA DEVELOP National Program has partnered with the National Park Service (NPS), engaging more than 120 program participants working on over 22 projects across approximately 27 unique park units. These projects examined a variety of cultural and environmental concerns facing the NPS including landscape disturbance, invasive species mapping, archaeological site preservation, and water resources monitoring. DEVELOP, part of NASA's Applied Sciences' Capacity Building program, conducts 10-week feasibility projects which demonstrate the utility of NASA's Earth observations as an additional tool for decision-making processes. This presentation will highlight several of these projects and discuss the progress of capacity building working with individual, regional, and institutional elements within the National Park Service.

  7. Fault-Tree Compiler

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Boerschlein, David P.

    1993-01-01

    The Fault-Tree Compiler (FTC) program is a software tool used to calculate the probability of the top event in a fault tree. Five gate types are allowed in a fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault-tree definition feature, which simplifies the tree-description process and reduces execution time. A set of programs has been created forming the basis for a reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and the FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.
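
    The five gate types named above are straightforward to evaluate when the basic events are statistically independent. The sketch below is not the FTC tool itself; it is a minimal calculator, under an assumption of independence throughout, applied to a small example tree.

    # Top-event probability for the five gate types the abstract lists,
    # assuming statistically independent basic events. Illustrative only.
    from math import prod

    def p_and(ps):                      # all inputs fail
        return prod(ps)

    def p_or(ps):                       # at least one input fails
        return 1.0 - prod(1.0 - p for p in ps)

    def p_xor(p1, p2):                  # exactly one of two inputs fails
        return p1 * (1.0 - p2) + p2 * (1.0 - p1)

    def p_invert(p):                    # complement of the input event
        return 1.0 - p

    def p_m_of_n(m, ps):                # at least m of the n inputs fail
        dist = [1.0]                    # dist[k] = P(exactly k failures so far)
        for p in ps:
            nxt = [0.0] * (len(dist) + 1)
            for k, q in enumerate(dist):
                nxt[k] += q * (1.0 - p)
                nxt[k + 1] += q * p
            dist = nxt
        return sum(dist[m:])

    # Example top event: (A AND B) OR (2-of-3 over C, D, E)
    print(p_or([p_and([1e-3, 2e-3]), p_m_of_n(2, [1e-2, 1e-2, 1e-2])]))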

  8. Teaching earth science

    Alpha, Tau Rho; Diggles, Michael F.

    1998-01-01

    This CD-ROM contains 17 teaching tools: 16 interactive HyperCard 'stacks' and a printable model. They are separated into the following categories: Geologic Processes, Earthquakes and Faulting, and Map Projections and Globes. A 'navigation' stack, Earth Science, is provided as a 'launching' place from which to access all of the other stacks. You can also open the HyperCard Stacks folder and launch any of the 16 stacks yourself. In addition, a 17th tool, Earth and Tectonic Globes, is provided as a printable document. Each of the tools can be copied onto a 1.4-MB floppy disk and distributed freely.

  9. Coordinated Fault Tolerance for High-Performance Computing

    SciT

    Dongarra, Jack; Bosilca, George; et al.

    2013-04-08

    Our work toward the goal of end-to-end fault tolerance has focused on two areas: (1) improving fault tolerance in software currently available and widely used throughout the HEC domain, and (2) using fault information exchange and coordination to achieve holistic, system-wide fault tolerance, and understanding how to design and implement interfaces for integrating fault-tolerance features across multiple layers of the software stack—from the application, math libraries, and programming-language runtime to other common system software such as job schedulers, resource managers, and monitoring tools.

  10. Earth Science

    1994-09-02

    This image depicts a full view of the Earth, taken by the Geostationary Operational Environmental Satellite (GOES-8). The red and green channels represent visible data, while the blue channel represents inverted 11-micron infrared data. The north and south poles were not actually observed by GOES-8; to produce this image, the poles were taken from a GOES-7 image. Owned and operated by the National Oceanic and Atmospheric Administration (NOAA), GOES satellites provide the kind of continuous monitoring necessary for intensive data analysis. They circle the Earth in a geosynchronous orbit, which means they orbit the equatorial plane of the Earth at a speed matching the Earth's rotation. This allows them to hover continuously over one position on the surface. The geosynchronous plane is about 35,800 km (22,300 miles) above the Earth, high enough to allow the satellites a full-disc view of the Earth. Because they stay above a fixed spot on the surface, they provide a constant vigil for the atmospheric triggers for severe weather conditions such as tornadoes, flash floods, hail storms, and hurricanes. When these conditions develop, the GOES satellites are able to monitor storm development and track their movements. NASA manages the design and launch of the spacecraft. NASA launched the first GOES for NOAA in 1975 and followed it with another in 1977. Currently, the United States is operating GOES-8, positioned at 75 west longitude and the equator, and GOES-10, which is positioned at 135 west longitude and the equator. (GOES-9, which malfunctioned in 1998, is being stored in orbit as an emergency backup should either GOES-8 or GOES-10 fail.) GOES-11 was launched on May 3, 2000 and GOES-12 on July 23, 2001; both are being stored in orbit as fully functioning replacements for GOES-8 or GOES-10 on failure.

  11. Classroom management at the university level: lessons from a former high school earth science teacher

    NASA Astrophysics Data System (ADS)

    Lazar, C.

    2009-12-01

    Just a few days before my career as a fledgling science teacher began in a large public high school in New York City, a mentor suggested I might get some ideas about how to run a classroom from a book called The First Days Of School by Harry Wong. Although the book seemed to concentrate more on elementary students, I found that many of the principles in the book worked well for high school students. Even as I have begun to teach at the university level, many of Wong’s themes have persisted in my teaching style. Wong’s central thesis is that for learning to occur, a teacher must create the proper environment. In education jargon, a good climate for learning is generated via classroom management, an array of methods used by elementary and secondary school teachers to provide structure and routine to a class period via a seamless flow of complementary activities. Many college professors would likely consider classroom management to be chiefly a set of rules to maintain discipline and order among an otherwise unruly herd of schoolchildren, and therefore not a useful concept for mature university students. However, classroom management is much deeper than mere rules for behavior; it is an approach to instructional design that considers the classroom experience holistically. A typical professorial management style is to lecture for an hour or so and ask students to demonstrate learning via examinations several times in a semester. In contrast, a good high school teacher will manage a class from bell-to-bell to create a natural order and flow to a given lesson. In this presentation, I will argue for an approach to college lesson design similar to the classroom management style commonly employed by high school and elementary school teachers. I will suggest some simple, practical techniques learned during my high school experience that work just as well in college: warm-up and practice problems, time management, group activities, bulletin boards, learning environment

  12. FTAPE: A fault injection tool to measure fault tolerance

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-01-01

    The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.
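
    The core idea, injecting faults while a workload is running and reporting an errors/fault ratio, can be conveyed with a toy example. Everything below (the trivial workload, the 16-bit word model, the counts) is invented for illustration and is not FTAPE itself.

    # Toy illustration of workload-coupled fault injection: flip random bits
    # in a workload's data, then report the errors/fault ratio.
    import random

    def workload(data):
        # A trivial "computation": per-element transform standing in for real work.
        return [x * 2 + 1 for x in data]

    def inject_bit_flip(data):
        i = random.randrange(len(data))       # pick a random word
        bit = 1 << random.randrange(16)       # pick a random bit in a 16-bit word
        data[i] ^= bit                        # the injected fault

    random.seed(0)
    n_faults, n_errors = 100, 0
    for _ in range(n_faults):
        data = list(range(256))
        golden = workload(data)               # fault-free reference output
        inject_bit_flip(data)                 # fault injected into workload data
        n_errors += sum(a != b for a, b in zip(workload(data), golden))

    print(f"errors/fault ratio: {n_errors / n_faults:.2f}")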

  13. Earthquake Nucleation and Fault Slip: Possible Experiments on a Natural Fault

    NASA Astrophysics Data System (ADS)

    Germanovich, L. N.; Murdoch, L. C.; Garagash, D.; Reches, Z.; Martel, S. J.; Johnston, M. J.; Ebenhack, J.; Gwaba, D.

    2011-12-01

    High-resolution deformation and seismic observations are usually made only near the Earth's surface, kilometers away from where earthquakes nucleate on active faults, and are limited by inverse-cube-distance attenuation and ground noise. We have developed an experimental approach that aims at reactivating faults in situ using thermal techniques and fluid injection, which modify the in-situ stresses and the fault strength until the fault slips. Mines where in-situ stresses are sufficient to drive faulting present an opportunity to conduct such experiments. The former Homestake gold mine in South Dakota is a good example. During our recent field work in the Homestake mine, we found a large fault that intersects multiple mine levels. The size and distinct structure of this fault make it a promising target for in-situ reactivation, which would likely be localized on a crack-like patch. Slow patch propagation, moderated by the injection rate and the rate of change of the background stresses, may become unstable, leading to the nucleation of a dynamic earthquake rupture. Our analyses for the Homestake fault conditions indicate that this transition occurs for a patch size of ~1 m. This represents a fundamental limitation for laboratory experiments and necessitates larger-scale field tests of ~10-100 m. The opportunity to observe earthquake nucleation on the Homestake fault is feasible because slip could be initiated at a pre-defined location and time, with instrumentation placed as close as a few meters from the nucleation site. Designing the experiment requires a detailed assessment of the state of stress in the vicinity of the fault. This is being conducted by simulating changes in pore pressure and effective stresses accompanying dewatering of the mine, and by evaluating in-situ stress measurements in light of a regional stress field modified by local perturbations caused by the mine workings.

  14. Fire, Earth and Wind: Managing Risk in Today's Schools Part 2--The Environment

    ERIC Educational Resources Information Center

    Weeks, Richard

    2010-01-01

    Because school business officials are pushed to make difficult decisions quickly when it comes to risk management, they should be aware of the issues associated with environmental safety. School business officials are integral members of the teams that handle crises--big and small--in the school district. A crisis may be as straightforward as the…

  15. Earth, Wind, and Fire: Managing Risk in Today's Schools Part 1--Fire!

    ERIC Educational Resources Information Center

    Weeks, Richard

    2010-01-01

    If one word can characterize what may be troubling about risk management in today's schools, it is "complacency." Complacency is a negative behavior that could entrap people into letting their guard down. In "The School Business Administrator," authors Kenneth Stevenson and Don Tharpe write: "A successful school business administrator has a…

  16. Quaternary Geology and Surface Faulting Hazard: Active and Capable Faults in Central Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Falcucci, E.; Gori, S.

    2015-12-01

    The 2009 L'Aquila earthquake (Mw 6.1), in central Italy, raised the issue of surface faulting hazard in Italy, since large urban areas were affected by surface displacement along the causative structure, the Paganica fault. Since then, guidelines for microzonation have been drawn up that take into consideration the problem of surface faulting in Italy, laying the basis for future regulations about the related hazard, similarly to other countries (e.g., USA). More specific guidelines on the management of areas affected by active and capable faults (i.e., faults able to produce surface faulting) are going to be released by the National Department of Civil Protection; these would define zonation of areas affected by active and capable faults, with prescriptions for land use planning. As such, the guidelines raise the problem of the time interval and the general operational criteria used to assess fault capability for the Italian territory. As for the chronology, a review of the international literature and regulations allowed Galadini et al. (2012) to propose different time intervals depending on the ongoing tectonic regime - compressive or extensional - which encompass the Quaternary. As for the operational criteria, detailed analysis of the large body of work dealing with active faulting in Italy shows that investigations based exclusively on surface morphological features (e.g., fault plane exposure) or on indirect investigations (geophysical data) are not sufficient, or are even unreliable, for establishing the presence of an active and capable fault; instead, more accurate geological information on the Quaternary space-time evolution of the areas affected by such tectonic structures is needed. A test area in which active and capable faults can first be mapped based on such a classical but still effective methodological approach is the central Apennines. Reference: Galadini F., Falcucci E., Galli P., Giaccio B., Gori S., Messina P., Moro M., Saroli M., Scardia G., Sposato A. (2012). Time

  17. Managing the Earth's Biggest Mass Gathering Event and WASH Conditions: Maha Kumbh Mela (India).

    PubMed

    Baranwal, Annu; Anand, Ankit; Singh, Ravikant; Deka, Mridul; Paul, Abhishek; Borgohain, Sunny; Roy, Nobhojit

    2015-04-13

    Mass gatherings involving large numbers of people make the planning and management of an event a difficult task. Kumbh Mela is one such internationally famous religious mass gathering. It creates the substantial challenge of building a temporary city in which millions of people can stay for a defined period of time. The arrangements need to allow this very large number of people to reside with proper human waste disposal, medical services, adequate supplies of food and clean water, transportation, etc. We report a case study of the Maha Kumbh, 2013, which focuses on the management and planning that went into the preparation of the Kumbh Mela and on understanding its water, sanitation and hygiene conditions. It was an observational cross-sectional study; the field work was done over 13 days, from 21 January to 2 February 2013. Our findings suggest that the Mela committee and all other agencies involved in Mela management were successful in supervising the event and making it convenient, efficient and safe. Health care services and water, sanitation and hygiene conditions were found to be satisfactory. The Bhule Bhatke Kendra (center for helping people who got separated from their families) had the major task of finding missing people and reuniting them with their families. Some of the shortfalls identified were that drainage was a major problem and some fire incidents were reported. Therefore, improvements in drainage facilities and a reduction in fire incidents are essential to making the Mela cleaner and safer. The number of persons per toilet was high and there were no separate toilets for males and females; special facilities and separate toilets for men and women would improve their stay at the Mela. Adoption of modern methods and technologies is likely to help support crowd management and improve water, sanitation and hygiene conditions in the continuously expanding Kumbh Mela in the coming years.

  18. Earth Observing System (EOS)/Advanced Microwave Sounding Unit A (AMSU-A) configuration management plan

    NASA Technical Reports Server (NTRS)

    Cavanaugh, J.

    1994-01-01

    This plan describes methods and procedures Aerojet will follow in the implementation of configuration control for each established baseline. The plan is written in response to the GSFC EOS CM Plan 420-02-02, dated January 1990, and also meets the requirements specified in DOD-STD-480, DOD-D 1000B, MIL-STD-483A, and MIL-STD-490B. The plan establishes the configuration management process to be used for the deliverable hardware, software, and firmware of the EOS/AMSU-A during development, design, fabrication, test, and delivery. This revision includes minor updates to reflect Aerojet's CM policies.

  19. Peru Water Resources: Integrating NASA Earth Observations into Water Resource Planning and Management in Perus La Libertad Region

    NASA Technical Reports Server (NTRS)

    Padgett-Vasquez, Steve; Steentofte, Catherine; Holbrook, Abigail

    2014-01-01

    Developing countries often struggle with providing water security and sanitation services to their populations. An important aspect of improving security and sanitation is developing a comprehensive understanding of the country's water budget. Water For People, a non-profit organization dedicated to providing clean drinking water, is working with the Peruvian government to develop a water budget for the La Libertad region of Peru, which includes the creation of an extensive watershed management plan. Currently, the data archive of the variables necessary to create the water management plan is extremely limited. Implementing NASA Earth observations has bolstered the dataset being used by Water For People, and the METRIC (Mapping EvapoTranspiration at High Resolution and Internalized Calibration) model has allowed estimation of evapotranspiration values for the region. Landsat 8 imagery and the DEM (Digital Elevation Model) from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) sensor onboard Terra were used to derive land cover information, in conjunction with local weather data for Cascas from Peru's National Meteorological and Hydrological Service (SENAMHI). Python was used to combine the input variables and METRIC model calculations to approximate evapotranspiration values for the Ochape sub-basin of the Chicama River watershed. Once calculated, the evapotranspiration values and methodology were shared with Water For People to help supplement their decision support tools in the La Libertad region of Peru and potentially apply the methodology in other areas of need.
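
    As a greatly simplified stand-in for the per-pixel combination step described above (the real METRIC model solves a full surface energy balance), the sketch below derives NDVI from hypothetical Landsat 8 band reflectances and scales a reference evapotranspiration value by an NDVI-based fraction. It shows only the plumbing, not the actual METRIC calculations.

    # Simplified illustration: combine Landsat-derived NDVI with a reference
    # ET value, per pixel. NOT the METRIC model; values are hypothetical.
    import numpy as np

    def ndvi(nir, red):
        """NDVI from Landsat 8 band 5 (NIR) and band 4 (red) reflectance."""
        return (nir - red) / (nir + red + 1e-12)   # epsilon avoids divide-by-zero

    def et_estimate(nir, red, et_ref_mm_day):
        """Crude ET (mm/day): reference ET scaled by an NDVI-derived fraction."""
        frac = np.clip(ndvi(nir, red), 0.0, 1.0)   # bare soil ~0, dense crop ~1
        return frac * et_ref_mm_day

    # Hypothetical 2x2 reflectance tiles and a weather-station reference ET.
    nir = np.array([[0.45, 0.50], [0.30, 0.05]])
    red = np.array([[0.10, 0.08], [0.15, 0.04]])
    print(et_estimate(nir, red, et_ref_mm_day=5.0))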

  20. A virtual, interactive and dynamic excursion in Google Earth on soil management and conservation (AgroGeovid)

    NASA Astrophysics Data System (ADS)

    Vanwalleghem, Tom; Giráldez, Juan Vicente

    2013-04-01

    Many courses on natural resources require hands-on practical knowledge and experience that students traditionally could only acquire through expensive and time-consuming field excursions. New technologies and social media, however, provide an interesting alternative for training students and helping them improve their practical knowledge. AgroGeovid is a virtual excursion, based on Google Earth, YouTube, Facebook and Twitter, that is aimed at agricultural engineering students but is equally useful for any student interested in soil management and conservation, e.g., in geography, geology and environmental resources. AgroGeovid provides the framework for teachers and students to upload geotagged photos, comments and discussions. After the initial startup phase, in which the teacher uploaded material on, e.g., soil erosion phenomena, soil conservation structures and different soil management strategies under different agronomic systems, students contributed their own material gathered throughout the academic year. All students decided to contribute via Facebook instead of Twitter, which was not known to most of them. The final result was a visual and dynamic tool that students could use to train and perfect skills learned in the classroom, using case studies and examples from their immediate environment.
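
    A minimal sketch of the underlying mechanism, publishing a geotagged photo as a Google Earth layer, is shown below. The site name, coordinates, and image URL are hypothetical; a real tool of this kind would emit one placemark per student contribution.

    # Write a KML placemark whose description embeds a geotagged photo, so it
    # can be opened as a Google Earth layer. All names/values are hypothetical.
    KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2">
      <Document>
        <Placemark>
          <name>{name}</name>
          <description><![CDATA[<img src="{photo_url}" width="400"/><p>{note}</p>]]></description>
          <Point><coordinates>{lon},{lat},0</coordinates></Point>
        </Placemark>
      </Document>
    </kml>"""

    with open("erosion_site.kml", "w", encoding="utf-8") as f:
        f.write(KML_TEMPLATE.format(
            name="Rill erosion, olive orchard",
            photo_url="https://example.org/photos/rill_erosion.jpg",
            note="Observed after autumn storms; conventional tillage plot.",
            lon=-4.72, lat=37.89,   # hypothetical site near Cordoba, Spain
        ))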

  1. Solar Torque Management for the Near Earth Asteroid Scout CubeSat Using Center of Mass Position Control

    NASA Technical Reports Server (NTRS)

    Orphee, Juan; Heaton, Andrew; Diedrich, Ben; Stiltner, Brandon C.

    2018-01-01

    A novel mechanism, the Active Mass Translator (AMT), has been developed for the NASA Near Earth Asteroid (NEA) Scout mission to autonomously manage the spacecraft momentum. The NEA Scout CubeSat will launch as a secondary payload onboard Exploration Mission 1 of the Space Launch System. To accomplish its mission, the CubeSat will be propelled by an 86 square-meter solar sail during its two-year journey to reach asteroid 1991VG. NEA Scout's primary attitude control system uses reaction wheels for holding attitude and performing slew maneuvers, while a cold gas reaction control system performs the initial detumble and early trajectory correction maneuvers. The AMT control system requirements, feedback architecture, and control performance will be presented. The AMT reduces the amount of reaction control propellant needed for momentum management and allows for smaller capacity reaction wheels suitable for the limited 6U spacecraft volume. The reduced spacecraft mass allows higher in-space solar sail acceleration, thus reducing time-of-flight. The reduced time-of-flight opens the range of possible missions, which is limited by the lifetime of typical non-radiation tolerant CubeSat avionics exposed to the deep-space environment.
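
    The trim principle is simple to state: the solar-pressure force on the sail, acting through the offset between the center of pressure and the center of mass, produces a torque, so translating mass changes the moment arm. The back-of-envelope sketch below uses the 86 square-meter sail area from the abstract; the reflectivity and disturbance torque are assumed values for illustration only.

    # Back-of-envelope sketch of the center-of-mass trim idea behind the AMT.
    # All numbers other than the sail area are illustrative assumptions.
    SOLAR_PRESSURE = 4.54e-6       # N/m^2 at 1 AU (perfectly absorbing)
    AREA = 86.0                    # m^2 sail area (from the abstract)
    REFLECTIVITY = 0.9             # assumed sail reflectivity

    f_srp = SOLAR_PRESSURE * AREA * (1.0 + REFLECTIVITY)   # N, normal incidence

    # Moment arm needed to cancel a given disturbance torque about one axis:
    disturbance_torque = 2.0e-6    # N*m, hypothetical residual torque
    trim_offset = disturbance_torque / f_srp               # meters of CM travel

    print(f"SRP force: {f_srp*1e3:.3f} mN, required CM trim: {trim_offset*100:.2f} cm")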

  2. Fault-Tolerant Heat Exchanger

    NASA Technical Reports Server (NTRS)

    Izenson, Michael G.; Crowley, Christopher J.

    2005-01-01

    A compact, lightweight heat exchanger has been designed to be fault-tolerant in the sense that a single-point leak would not cause mixing of heat-transfer fluids. This particular heat exchanger is intended to be part of the temperature-regulation system for habitable modules of the International Space Station and to function with water and ammonia as the heat-transfer fluids. The basic fault-tolerant design is adaptable to other heat-transfer fluids and heat exchangers for applications in which mixing of heat-transfer fluids would pose toxic, explosive, or other hazards: Examples could include fuel/air heat exchangers for thermal management on aircraft, process heat exchangers in the cryogenic industry, and heat exchangers used in chemical processing. The reason this heat exchanger can tolerate a single-point leak is that the heat-transfer fluids are everywhere separated by a vented volume and at least two seals. The combination of fault tolerance, compactness, and light weight is implemented in a unique heat-exchanger core configuration: Each fluid passage is entirely surrounded by a vented region bridged by solid structures through which heat is conducted between the fluids. Precise, proprietary fabrication techniques make it possible to manufacture the vented regions and heat-conducting structures with very small dimensions to obtain a very large coefficient of heat transfer between the two fluids. A large heat-transfer coefficient favors compact design by making it possible to use a relatively small core for a given heat-transfer rate. Calculations and experiments have shown that in most respects, the fault-tolerant heat exchanger can be expected to equal or exceed the performance of the non-fault-tolerant heat exchanger that it is intended to supplant (see table). The only significant disadvantages are a slight weight penalty and a small decrease in the mass-specific heat transfer.
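
    The link between a large heat-transfer coefficient and a compact core follows from the standard sizing relation A = Q / (U x LMTD). The sketch below, with illustrative numbers only, shows the required area dropping tenfold as the coefficient rises tenfold.

    # Quick sizing sketch: why a large heat-transfer coefficient U permits a
    # compact core. Duty and temperature values are illustrative only.
    from math import log

    def lmtd(dt_in, dt_out):
        """Log-mean temperature difference between the hot and cold streams."""
        return (dt_in - dt_out) / log(dt_in / dt_out)

    Q = 2000.0                          # W, heat duty
    dT = lmtd(dt_in=15.0, dt_out=5.0)   # K, terminal temperature differences

    for U in (200.0, 2000.0):           # W/(m^2*K): modest vs. very high
        print(f"U = {U:6.0f} W/m2K -> required area A = {Q / (U * dT):.3f} m2")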

  3. Fault tectonics and earthquake hazards in parts of southern California. [peninsular ranges, Garlock fault, Salton Trough area, and western Mojave Desert]

    NASA Technical Reports Server (NTRS)

    Merifield, P. M. (Principal Investigator); Lamar, D. L.; Gazley, C., Jr.; Lamar, J. V.; Stratton, R. H.

    1976-01-01

    The author has identified the following significant results. Four previously unknown faults were discovered in basement terrane of the Peninsular Ranges. These have been named the San Ysidro Creek fault, Thing Valley fault, Canyon City fault, and Warren Canyon fault. In addition, fault gouge and breccia were recognized along the San Diego River fault. Study of features on Skylab imagery and review of geologic and seismic data suggest that the risk of a damaging earthquake is greater along the northwestern portion of the Elsinore fault than along the southeastern portion. Physiographic indicators of active faulting along the Garlock fault identifiable in Skylab imagery include scarps, linear ridges, shutter ridges, faceted ridges, linear valleys, undrained depressions and offset drainage. The following previously unrecognized fault segments are postulated for the Salton Trough area: (1) an extension of a previously known fault in the San Andreas fault set located southeast of the Salton Sea; (2) an extension of the active San Jacinto fault zone along a tonal change in cultivated fields across Mexicali Valley (the tonal change may represent different soil conditions on opposite sides of a fault). For the Skylab and LANDSAT images studied, pseudocolor transformations offer no advantages over the original images in the recognition of faults. Alluvial deposits of different ages, a marble unit and iron oxide gossans of the Mojave Mining District are more readily differentiated on images prepared from ratios of individual bands of the S-192 multispectral scanner data. The San Andreas fault was also made more distinct in the 8/2 and 9/2 band ratios by enhancement of vegetation differences on opposite sides of the fault. Preliminary analysis indicates a significant earth resources potential for the discrimination of soil and rock types, including mineral alteration zones. This application should be actively pursued.

  4. A Unified Nonlinear Adaptive Approach for Detection and Isolation of Engine Faults

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong; Farfan-Ramos, Luis; Simon, Donald L.

    2010-01-01

    A challenging problem in aircraft engine health management (EHM) system development is to detect and isolate faults in system components (i.e., compressor, turbine), actuators, and sensors. Existing nonlinear EHM methods often deal with component faults, actuator faults, and sensor faults separately, which may potentially lead to incorrect diagnostic decisions and unnecessary maintenance. Therefore, it would be ideal to address sensor faults, actuator faults, and component faults under one unified framework. This paper presents a systematic and unified nonlinear adaptive framework for detecting and isolating sensor faults, actuator faults, and component faults for aircraft engines. The fault detection and isolation (FDI) architecture consists of a parallel bank of nonlinear adaptive estimators. Adaptive thresholds are appropriately designed such that, in the presence of a particular fault, all components of the residual generated by the adaptive estimator corresponding to the actual fault type remain below their thresholds. If the faults are sufficiently different, then at least one component of the residual generated by each remaining adaptive estimator should exceed its threshold. Therefore, based on the specific response of the residuals, sensor faults, actuator faults, and component faults can be isolated. The effectiveness of the approach was evaluated using the NASA C-MAPSS turbofan engine model, and simulation results are presented.
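
    The isolation decision described above reduces to a simple test once residuals and adaptive thresholds are available: the isolated fault is the one hypothesis whose estimator keeps every residual component below its threshold while all other estimators exceed theirs. The sketch below hard-codes placeholder residual and threshold values rather than running real adaptive estimators.

    # Isolation logic over a parallel bank of estimators: declare the single
    # hypothesis whose residuals all stay below threshold. Values are
    # placeholders, not outputs of actual adaptive estimators.
    import numpy as np

    def isolate_fault(residuals, thresholds):
        """residuals/thresholds: dicts mapping fault hypothesis -> component array."""
        consistent = [h for h, r in residuals.items()
                      if np.all(np.abs(r) < thresholds[h])]
        # Unique isolation requires exactly one hypothesis to remain consistent.
        return consistent[0] if len(consistent) == 1 else None

    residuals = {
        "sensor_bias":     np.array([0.02, 0.01, 0.03]),  # matched: stays small
        "actuator_fault":  np.array([0.40, 0.02, 0.05]),  # mismatched: exceeds
        "component_fault": np.array([0.05, 0.55, 0.02]),  # mismatched: exceeds
    }
    thresholds = {h: np.array([0.1, 0.1, 0.1]) for h in residuals}
    print(isolate_fault(residuals, thresholds))  # -> "sensor_bias"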

  5. Fault Tree Analysis.

    PubMed

    McElroy, Lisa M; Khorzad, Rebeca; Rowe, Theresa A; Abecassis, Zachary A; Apley, Daniel W; Barnard, Cynthia; Holl, Jane L

    The purpose of this study was to use fault tree analysis to evaluate the adequacy of quality reporting programs in identifying root causes of postoperative bloodstream infection (BSI). A systematic review of the literature was used to construct a fault tree to evaluate 3 postoperative BSI reporting programs: National Surgical Quality Improvement Program (NSQIP), Centers for Medicare and Medicaid Services (CMS), and The Joint Commission (JC). The literature review revealed 699 eligible publications, 90 of which were used to create the fault tree containing 105 faults. A total of 14 identified faults are currently mandated for reporting to NSQIP, 5 to CMS, and 3 to JC; 2 or more programs require 4 identified faults. The fault tree identifies numerous contributing faults to postoperative BSI and reveals substantial variation in the requirements and ability of national quality data reporting programs to capture these potential faults. Efforts to prevent postoperative BSI require more comprehensive data collection to identify the root causes and develop high-reliability improvement strategies.

  6. Integrating land management into Earth system models: the importance of land use transitions at sub-grid-scale

    NASA Astrophysics Data System (ADS)

    Pongratz, Julia; Wilkenskjeld, Stiig; Kloster, Silvia; Reick, Christian

    2014-05-01

    Recent studies indicate that changes in surface climate and carbon fluxes caused by land management (i.e., modifications of vegetation structure without changing the type of land cover) can be as large as those caused by land cover change. Further, such effects may occur over substantial areas: while about one quarter of the land surface has undergone land cover change, another fifty percent is managed. This calls for the integration of management processes into Earth system models (ESMs). This integration increases the importance of awareness and agreement on how to diagnose effects of land use in ESMs, to avoid additional model spread and thus unnecessary uncertainties in carbon budget estimates. Process understanding of management effects, their model implementation, and data availability on management type and extent all pose challenges. In this respect, a significant step forward has been made in the framework of the current IPCC's CMIP5 simulations (Coupled Model Intercomparison Project Phase 5): the climate simulations were driven with the same harmonized land use dataset that, unlike most datasets commonly used before, included information on two important types of management: wood harvest and shifting cultivation. However, these new aspects were employed by only part of the CMIP5 models, while most models continued to use the associated land cover maps. Here, we explore the consequences for the carbon cycle of including subgrid-scale land transformations ("gross transitions"), such as shifting cultivation, as an example of the current state of implementation of land management in ESMs. Accounting for gross transitions is expected to increase land use emissions because it represents simultaneous clearing and regrowth of natural vegetation in different parts of the grid cell, reducing standing carbon stocks. This process cannot be captured by prescribing land cover maps ("net transitions"). Using the MPI-ESM we find that ignoring gross transitions
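
    A toy calculation makes the gross/net difference concrete: suppose 10% of a grid cell is cleared while 8% regrows within the same step, so net land-cover maps see only a 2% change. The carbon densities and cell size below are illustrative assumptions, not MPI-ESM values.

    # Toy numbers: why sub-grid "gross" transitions raise land-use emissions
    # relative to "net" transitions. All values are illustrative.
    C_FOREST, C_CROP = 15.0, 2.0        # kg C / m^2, standing carbon densities
    CELL_AREA = 1.0e10                  # m^2 (a ~100 km x 100 km cell)

    cleared, regrown = 0.10, 0.08       # fractions of the cell, same time step

    # Gross accounting: clearing emits immediately; regrowth recovers carbon
    # only slowly, so within the step it contributes little (here: nothing).
    gross_emission = cleared * CELL_AREA * (C_FOREST - C_CROP)

    # Net accounting: only the 2% net forest loss is ever seen.
    net_emission = (cleared - regrown) * CELL_AREA * (C_FOREST - C_CROP)

    print(f"gross: {gross_emission/1e9:.1f} Tg C, net: {net_emission/1e9:.1f} Tg C")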

  7. Utilizing NASA Earth Observations to Monitor Land Management Practices and the Development of Marshlands to Rice Fields in Rwanda

    NASA Astrophysics Data System (ADS)

    Dusabimana, M. R.; Blach, D.; Mwiza, F.; Muzungu, E.; Swaminathan, R.; Tate, Z.

    2014-12-01

    Rwanda, a small country with the highest population density in Sub-Saharan Africa, is one of the world's poorest countries. Although agriculture is the backbone of the Rwandan economy, agricultural productivity is extremely low. Over 90% of the population is engaged in subsistence farming and only 52% of the total land surface area is arable. Of this land, approximately 165,000 hectares are marshlands, of which only 57% has been cultivated. The Rwandan government has invested in the advancement of agriculture through activities such as irrigation, marshland reclamation, and crop regionalization. In 2001, the Ministry of Agriculture and Animal Resources (MINAGRI) released the Rural Sector Support Program (RSSP), which aimed at converting marshlands into rice fields at various development sites across the country. The focus of this project was to monitor rice fields in Rwanda utilizing NASA Earth observations such as Landsat 5 Thematic Mapper and Landsat 8 Operational Land Imager. The Modified Normalized Difference Water Index (MNDWI) was used to depict the progress of marshland-to-rice-field conversion, as it highlights the presence of irrigated rice fields against the surrounding area. Additionally, the Decision Support System for Agrotechnology Transfer (DSSAT) was used to estimate rice yield at RSSP sites. Various simulations were run to find optimal conditions for cultivating the highest yield on a given farm. Furthermore, soil erosion susceptibility masks were created by combining factors derived from ASTER, MERRA, and ground truth data using the Revised Universal Soil Loss Equation (RUSLE). The end results, maps, and tutorials were delivered to partners and policy makers in Rwanda to help them make informed decisions. Earth observations can clearly be used to monitor agricultural and land management practices as a cost-effective method that will enable farmers to improve crop yield production and food security.
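
    The MNDWI computation itself is a one-liner per pixel, MNDWI = (Green - SWIR) / (Green + SWIR), with positive values flagging open water and flooded fields. In the sketch below the band arrays are hypothetical reflectance tiles (Landsat 8: band 3 = green, band 6 = SWIR-1), and the zero threshold would be tuned per scene in practice.

    # MNDWI to highlight irrigated/flooded pixels. Band values hypothetical.
    import numpy as np

    def mndwi(green, swir):
        return (green - swir) / (green + swir + 1e-12)

    green = np.array([[0.12, 0.30], [0.10, 0.25]])
    swir  = np.array([[0.20, 0.05], [0.18, 0.08]])

    index = mndwi(green, swir)
    flooded = index > 0.0           # simple threshold; tune per scene
    print(index.round(2), flooded, sep="\n")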

  8. High-Intensity Radiated Field Fault-Injection Experiment for a Fault-Tolerant Distributed Communication System

    NASA Technical Reports Server (NTRS)

    Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven

    2010-01-01

    Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.

  9. Porosity variations in and around normal fault zones: implications for fault seal and geomechanics

    NASA Astrophysics Data System (ADS)

    Healy, David; Neilson, Joyce; Farrell, Natalie; Timms, Nick; Wilson, Moyra

    2015-04-01

    clear lithofacies control on the Vp-porosity and the Vs-Vp relationships for faulted limestones. Using porosity patterns quantified in naturally deformed rocks we have modelled their effect on the mechanical stability of fluid-saturated fault zones in the subsurface. Poroelasticity theory predicts that variations in fluid pressure could influence fault stability. Anisotropic patterns of porosity in and around fault zones can - depending on their orientation and intensity - lead to an increase in fault stability in response to a rise in fluid pressure, and a decrease in fault stability for a drop in fluid pressure. These predictions are the exact opposite of the accepted role of effective stress in fault stability. Our work has provided new data on the spatial and statistical variation of porosity in fault zones. Traditionally considered as an isotropic and scalar value, porosity and pore networks are better considered as anisotropic and as scale-dependent statistical distributions. The geological processes controlling the evolution of porosity are complex. Quantifying patterns of porosity variation is an essential first step in a wider quest to better understand deformation processes in and around normal fault zones. Understanding porosity patterns will help us to make more useful predictive tools for all agencies involved in the study and management of fluids in the subsurface.
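
    For reference, the conventional effective-stress bookkeeping that these results challenge is the Coulomb failure stress change, dCFS = d(tau) - mu * (d(sigma_n) - dP), under which a pore-pressure rise is always destabilizing. The sketch below evaluates that standard relation with illustrative numbers; the paper's point is that anisotropic pore fabrics can reverse the sign.

    # Conventional Coulomb failure stress change on a fault; positive dCFS
    # means movement toward failure. Values are illustrative only.
    def delta_cfs(d_tau, d_sigma_n, d_pressure, mu=0.6):
        """Coulomb failure stress change (MPa); compression positive."""
        return d_tau - mu * (d_sigma_n - d_pressure)

    # No change in applied loads, pore pressure rises by 1 MPa:
    print(delta_cfs(d_tau=0.0, d_sigma_n=0.0, d_pressure=1.0))  # +0.6 -> destabilizing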

  10. Perspective view, Landsat overlay San Andreas Fault, Palmdale, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The prominent linear feature straight down the center of this perspective view is the San Andreas Fault. This segment of the fault lies near the city of Palmdale, California (the flat area in the right half of the image) about 60 kilometers (37 miles) north of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. Two large mountain ranges are visible, the San Gabriel Mountains on the left and the Tehachapi Mountains in the upper right. The Lake Palmdale Reservoir, approximately 1.5 kilometers (0.9 miles) across, sits in the topographic depression created by past movement along the fault. Highway 14 is the prominent linear feature starting at the lower left edge of the image and continuing along the far side of the reservoir. The patterns of residential and agricultural development around Palmdale are seen in the Landsat imagery in the right half of the image. SRTM topographic data will be used by geologists studying fault dynamics and landforms resulting from active tectonics.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture

  11. Fault detection and fault tolerance in robotics

    NASA Technical Reports Server (NTRS)

    Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.

    1992-01-01

    Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.

  12. Homogenisation in project management for large German research projects in the Earth system sciences: overcoming the institutional coordination bias

    NASA Astrophysics Data System (ADS)

    Rauser, Florian; Vamborg, Freja

    2016-04-01

    The interdisciplinary project on High Definition Clouds and Precipitation for advancing climate prediction, HD(CP)2 (hdcp2.eu), is an example of the trend in fundamental research in Europe toward increasingly large national and international research programs that require strong scientific coordination. The current system has traditionally been host-based: project coordination activities and funding are placed at the host institute of the project's central lead PI. This approach is simple and has the advantage of close collaboration between the project coordinator and the lead PI, while exhibiting a list of strong, inherent disadvantages that are also mentioned in this session's description: no community development of best practice, lack of integration between similar projects, inefficient methodology development and usage, and finally poor career development opportunities for the coordinators. Project coordinators often leave the project before it is finalized, leaving some of the fundamentally important closing processes to the PIs. This systematically prevents the creation of professional science management expertise within academia, leading to an imbalance that limits the ability of large research programs to inform future funding decisions. Project coordinators in academia often do not work in a professional project office environment that could distribute activities and use professional tools and methods across different projects. Instead, every new project manager has to approach the methodological work anew (communication infrastructure, meetings, reporting), even though the technical needs of large research projects are similar. This decreases the efficiency of the coordination and leads to funding that is effectively misallocated. We propose to challenge this system by creating a permanent, virtual "Centre for Earth System Science Management CESSMA" (cessma.com), changing the approach from host-based to centre-based. This should

  13. Modeling, Detection, and Disambiguation of Sensor Faults for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai F.; Curran, Simon

    2009-01-01

    Sensor faults continue to be a major hurdle for systems health management to reach its full potential. At the same time, few recorded instances of sensor faults exist. It is equally difficult to seed particular sensor faults. Therefore, research is underway to better understand the different fault modes seen in sensors and to model the faults. The fault models can then be used in simulated sensor fault scenarios to ensure that algorithms can distinguish between sensor faults and system faults. The paper illustrates the work with data collected from an electro-mechanical actuator in an aerospace setting, equipped with temperature, vibration, current, and position sensors. The most common sensor faults, such as bias, drift, scaling, and dropout were simulated and injected into the experimental data, with the goal of making these simulations as realistic as feasible. A neural network based classifier was then created and tested on both experimental data and the more challenging randomized data sequences. Additional studies were also conducted to determine sensitivity of detection and disambiguation efficacy to severity of fault conditions.
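
    The four fault modes named above have simple mathematical forms, which is what makes simulated injection into clean experimental traces practical. The sketch below applies each mode to a synthetic signal; onset time and magnitudes are arbitrary placeholders, not the paper's values.

    # Inject the four sensor fault modes (bias, drift, scaling, dropout)
    # into a clean signal. Magnitudes are arbitrary placeholders.
    import numpy as np

    def inject(signal, mode, t0):
        """Return a copy of `signal` with fault `mode` starting at sample t0."""
        s = signal.copy()
        n = len(s)
        if mode == "bias":                       # constant offset after onset
            s[t0:] += 0.5
        elif mode == "drift":                    # linearly growing offset
            s[t0:] += 0.01 * np.arange(n - t0)
        elif mode == "scaling":                  # multiplicative gain error
            s[t0:] *= 1.3
        elif mode == "dropout":                  # output stuck at last good value
            s[t0:] = s[t0 - 1]
        return s

    t = np.linspace(0, 10, 500)
    clean = np.sin(t)                            # stand-in for a position sensor
    faulty = {m: inject(clean, m, t0=250) for m in
              ("bias", "drift", "scaling", "dropout")}
    print({m: round(float(v[-1]), 3) for m, v in faulty.items()})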

  14. Solar system fault detection

    DOEpatents

    Farrington, R.B.; Pruett, J.C. Jr.

    1984-05-14

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.

  15. Solar system fault detection

    DOEpatents

    Farrington, Robert B.; Pruett, Jr., James C.

    1986-01-01

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.
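
    The scheme described in both patent records amounts to fixed boolean combinations of per-sensor condition flags driving fault indicators. The sketch below shows the shape of such logic with invented conditions and combinations; it is not the patented circuit.

    # Combine per-sensor condition outputs in pre-established ways to drive
    # fault indicators. Conditions and combinations are invented examples.
    def indicators(pump_on, flow_ok, collector_hot, tank_warming):
        return {
            # Pump commanded on but no flow: likely pump or plumbing fault.
            "pump_fault": pump_on and not flow_ok,
            # Collector hot with flow, yet tank not warming: heat-loss fault.
            "loss_fault": collector_hot and flow_ok and not tank_warming,
            # Collector hot but pump never commanded: controller fault.
            "controller_fault": collector_hot and not pump_on,
        }

    print(indicators(pump_on=True, flow_ok=False,
                     collector_hot=True, tank_warming=False))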

  16. Towards a standard licensing scheme for the access and use of satellite earth observation data for disaster management

    NASA Astrophysics Data System (ADS)

    Clark, Nathan E.

    2017-10-01

    This paper explores from the view of the data recipient and user the complexities of creating a common licensing scheme for the access and use of satellite earth observation (EO) data in international disaster management (DM) activities. EO data contributions in major disaster events often involve numerous data providers with separate licensing mechanisms for controlling the access, uses, and distribution of data by the end users. A lack of standardization among the terminology, wording, and conditions within these licenses creates a complex legal environment for users, and often prevents them from using, sharing and combining datasets in an effective and timely manner. It also creates uncertainty among data providers as to the types of licensing controls that should be applied in disaster scenarios. This paper builds from an ongoing comparative analysis of the common and conflicting conditions among data licenses that must be addressed in order to facilitate easier access and use of EO data within the DM sector and offers recommendations towards the alignment of the structural and technical aspects of licenses among data providers.

  17. A Geosynchronous Synthetic Aperture Provides for Disaster Management, Measurement of Soil Moisture, and Measurement of Earth-Surface Dynamics

    NASA Technical Reports Server (NTRS)

    Madsen, Soren; Komar, George (Technical Monitor)

    2001-01-01

    A GEO-based Synthetic Aperture Radar (SAR) could provide daily coverage of essentially all of North and South America with very good temporal coverage within the mapped area. This affords a key capability for disaster management, tectonic mapping and modeling, and vegetation mapping. The fine temporal sampling makes this system particularly useful for disaster management of flooding, hurricanes, and earthquakes. By using a fairly long wavelength, changing water boundaries caused by storms or flooding could be monitored in near real-time. This coverage would also provide revolutionary capabilities in the field of radar interferometry, including the capability to study the interferometric signature immediately before and after an earthquake, thus allowing unprecedented studies of Earth-surface dynamics. Pre-eruptive volcano dynamics could be studied, as well as pre-seismic deformation, one of the most controversial and elusive aspects of earthquakes. Interferometric correlation would similarly allow near real-time mapping of surface changes caused by volcanic eruptions, mud slides, or fires. Finally, a GEO SAR provides an optimum configuration for soil moisture measurement, which requires a high temporal sampling rate (1-2 days) with a moderate spatial resolution (1 km or better). From a technological point of view, the largest challenges involved in developing a geosynchronous SAR capability relate to the very large slant range from the radar to the mapped area. This leads to requirements for high power or, alternatively, a very large antenna; the ability to steer the mapping area to the left and right of the satellite; and control of the elevation and azimuth angles. The weight of this system is estimated at 2750 kg, and it would require 20 kW of DC power. Such a system would provide up to a 600 km ground swath in a strip-mapping mode and 4000 km dual-sided mapping in a scan-SAR mode.

  18. Earth Observations

    2013-06-21

    ISS036-E-011034 (21 June 2013) --- The Salton Trough is featured in this image photographed by an Expedition 36 crew member on the International Space Station. The Imperial and Coachella Valleys of southern California – and the corresponding Mexicali Valley and Colorado River Delta in Mexico – are part of the Salton Trough, a large geologic structure known to geologists as a graben or rift valley that extends into the Gulf of California. The trough is a geologically complex zone formed by interaction of the San Andreas transform fault system that is, broadly speaking, moving southern California towards Alaska; and the northward motion of the Gulf of California segment of the East Pacific Rise that continues to widen the Gulf of California by sea-floor spreading. According to scientists, sediments deposited by the Colorado River have been filling the northern rift valley (the Salton Trough) for the past several million years, excluding the waters of the Gulf of California and providing a fertile environment – together with irrigation—for the development of extensive agriculture in the region (visible as green and yellow-brown fields at center). The Salton Sea, a favorite landmark of astronauts in low Earth orbit, was formed by an irrigation canal rupture in 1905, and today is sustained by agricultural runoff water. A wide array of varying landforms and land uses in the Salton Trough are visible from space. In addition to the agricultural fields and Salton Sea, easily visible metropolitan areas include Yuma, AZ (lower left); Mexicali, Baja California, Mexico (center); and the San Diego-Tijuana conurbation on the Pacific Coast (right). The approximately 72-kilometer-long Algodones Dunefield is visible at lower left.

  19. Methods to enhance seismic faults and construct fault surfaces

    NASA Astrophysics Data System (ADS)

    Wu, Xinming; Zhu, Zhihui

    2017-10-01

    Faults are often apparent as reflector discontinuities in a seismic volume. Numerous types of fault attributes have been proposed to highlight fault positions in a seismic volume by measuring reflection discontinuities. These attribute volumes, however, can be sensitive to noise and to stratigraphic features that are also apparent as discontinuities in a seismic volume. We propose a matched filtering method to enhance a precomputed fault attribute volume and simultaneously estimate fault strikes and dips. In this method, a set of efficient 2D exponential filters, oriented by all possible combinations of strike and dip angles, are applied to the input attribute volume to find the maximum filtering responses at all samples in the volume. These maximum filtering responses are recorded to obtain the enhanced fault attribute volume, while the strike and dip angles that yield the maximum responses are recorded to obtain volumes of fault strikes and dips. In doing this, we assume that a fault surface is locally planar, so that a 2D smoothing filter yields a maximum response when the smoothing plane coincides with a local fault plane. With the enhanced fault attribute volume and the estimated fault strike and dip volumes, we then compute oriented fault samples on the ridges of the enhanced fault attribute volume, with each sample oriented by the estimated fault strike and dip. Fault surfaces can be constructed by directly linking the oriented fault samples with consistent fault strikes and dips. For complicated cases with missing fault samples and noisy samples, we further propose a perceptual grouping method to infer fault surfaces that reasonably fit the positions and orientations of the fault samples. We apply these methods to 3D synthetic and real examples and successfully extract multiple intersecting fault surfaces and complete fault surfaces without holes.
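
    A 2D toy version of the oriented-maximum idea conveys the algorithm: convolve the attribute image with short, exponentially weighted line filters at each candidate angle, keep the per-pixel maximum response as the enhanced attribute, and the argmax angle as the orientation estimate. The kernel parameters and synthetic data below are arbitrary; the method in the paper scans strike and dip jointly in 3D.

    # 2D toy of oriented matched filtering of a fault-attribute image.
    import numpy as np
    from scipy.ndimage import convolve

    def line_kernel(angle_deg, half_len=6, decay=3.0, size=15):
        """Exponentially weighted line filter oriented at angle_deg."""
        k = np.zeros((size, size))
        c = size // 2
        a = np.deg2rad(angle_deg)
        for t in range(-half_len, half_len + 1):
            r = int(round(c + t * np.sin(a)))
            q = int(round(c + t * np.cos(a)))
            k[r, q] += np.exp(-abs(t) / decay)
        return k / k.sum()

    attr = np.random.rand(64, 64) * 0.1            # noisy background attribute
    rr = np.arange(64)
    attr[rr, np.clip(rr // 2 + 10, 0, 63)] = 1.0   # synthetic dipping "fault"

    angles = np.arange(0, 180, 15)
    responses = np.stack([convolve(attr, line_kernel(a)) for a in angles])
    enhanced = responses.max(axis=0)               # enhanced fault attribute
    best_dip = angles[responses.argmax(axis=0)]    # angle of maximum response
    print("peak enhanced response:", round(float(enhanced.max()), 3))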

  20. Soil and crop management experiments in the Laboratory Biosphere: an analogue system for the Mars on Earth(R) facility.

    PubMed

    Silverstone, S; Nelson, M; Alling, A; Allen, J P

    2005-01-01

    During the years 2002 and 2003, three closed-system experiments were carried out in the "Laboratory Biosphere" facility located in Santa Fe, New Mexico. The program involved experimentation with "Hoyt" soybeans (experiment #1), USU Apogee wheat (experiment #2) and TU-82-155 sweet potato (experiment #3) using a 5.37 m2 soil planting bed 30 cm deep. The soil texture, 40% clay, 31% sand and 28% silt (a clay loam), was collected from an organic farm in New Mexico to avoid chemical residues. Soil management practices involved minimal tillage, mulching, returning crop residues to the soil after each experiment and increasing soil biota by introducing worms, soil bacteria and mycorrhizal fungi. The high pH of the original soil appeared to be a factor affecting the first two experiments. Hence, between experiments #2 and #3, the top 15 cm of the soil was amended with a mix of peat moss, green sand, humates and pumice to improve soil texture, lower soil pH and increase nutrient availability. This lowered the initial pH from 8.0 to 6.7 at the start of experiment #3; at the end of the experiment, the pH was 7.6. Soil nitrogen and phosphorus were adequate, but some chlorosis was evident in the first two experiments. Aphid infestation was the only crop pest problem during the three experiments and was handled by an introduction of Hippodamia convergens. Experimentation showed there were environmental differences even in this 1200-cubic-foot ecological system facility, such as temperature and humidity gradients due to ventilation and airflow patterns, which resulted in variations in plant growth and yield. Additional humidifiers were added to counteract low humidity and helped optimize conditions for the sweet potato experiment. The experience and information gained from these experiments are being applied to the future design of the Mars On Earth(R) facility (Silverstone et al., Development and research program for a soil

  1. Soil and crop management experiments in the Laboratory Biosphere: An analogue system for the Mars on Earth® facility

    NASA Astrophysics Data System (ADS)

    Silverstone, S.; Nelson, M.; Alling, A.; Allen, J. P.

    During the years 2002 and 2003, three closed-system experiments were carried out in the "Laboratory Biosphere" facility located in Santa Fe, New Mexico. The program involved experimentation with "Hoyt" soybeans (experiment #1), USU Apogee wheat (experiment #2) and TU-82-155 sweet potato (experiment #3) using a 5.37 m2 soil planting bed 30 cm deep. The soil texture, 40% clay, 31% sand and 28% silt (a clay loam), was collected from an organic farm in New Mexico to avoid chemical residues. Soil management practices involved minimal tillage, mulching, returning crop residues to the soil after each experiment and increasing soil biota by introducing worms, soil bacteria and mycorrhizal fungi. The high pH of the original soil appeared to be a factor affecting the first two experiments. Hence, between experiments #2 and #3, the top 15 cm of the soil was amended with a mix of peat moss, green sand, humates and pumice to improve soil texture, lower soil pH and increase nutrient availability. This lowered the initial pH from 8.0 to 6.7 at the start of experiment #3; at the end of the experiment, the pH was 7.6. Soil nitrogen and phosphorus were adequate, but some chlorosis was evident in the first two experiments. Aphid infestation was the only crop pest problem during the three experiments and was handled by an introduction of Hippodamia convergens. Experimentation showed there were environmental differences even in this 1200-cubic-foot ecological system facility, such as temperature and humidity gradients due to ventilation and airflow patterns, which resulted in variations in plant growth and yield. Additional humidifiers were added to counteract low humidity and helped optimize conditions for the sweet potato experiment. The experience and information gained from these experiments are being applied to the future design of the Mars On Earth® facility (Silverstone et al., Development and research program for a soil

  2. Block rotations, fault domains and crustal deformation in the western US

    NASA Technical Reports Server (NTRS)

    Nur, Amos

    1990-01-01

    The aim of the project was to develop a 3D model of crustal deformation by distributed fault sets and to test the model results in the field. In the first part of the project, Nur's 2D model (1986) was generalized to 3D. In Nur's model, the frictional strength of the rocks and faults of a domain provides a tight constraint on the amount of rotation that a fault set can undergo during block rotation. Domains of fault sets are commonly found in regions where deformation is distributed across a region. The interaction of each fault set causes the fault-bounded blocks to rotate. The work done towards quantifying the rotation of fault sets in a 3D stress field is briefly summarized. In the second part of the project, field studies were carried out in Israel, Nevada and China. These studies combined the paleomagnetic and structural information necessary to test the block rotation model results. In accordance with the model, the field studies demonstrate that faults and the attending fault-bounded blocks slip and rotate away from the direction of maximum compression when deformation is distributed across fault sets. Slip and rotation of fault sets may continue as long as the Earth's crustal strength is not exceeded; more optimally oriented faults must then form for subsequent deformation to occur. Eventually the block rotation mechanism may create a complex pattern of intersecting generations of faults.

  3. Postglacial rebound and fault instability in Fennoscandia

    NASA Astrophysics Data System (ADS)

    Wu, Patrick; Johnston, Paul; Lambeck, Kurt

    1999-12-01

    The best available rebound model is used to investigate the role that postglacial rebound plays in triggering seismicity in Fennoscandia. The salient features of the model include tectonic stress due to spreading at the North Atlantic Ridge, overburden pressure, gravitationally self-consistent ocean loading, and the realistic deglaciation history and compressible earth model which best fits the sea-level and ice data in Fennoscandia. The model predicts the spatio-temporal evolution of the state of stress, the magnitude of fault instability, the timing of the onset of this instability, and the mode of failure of lateglacial and postglacial seismicity. The consistency of the predictions with the observations suggests that postglacial rebound is probably the cause of the large postglacial thrust faults observed in Fennoscandia. The model also predicts a uniform stress field and instability in central Fennoscandia for the present, with thrust faulting as the predicted mode of failure. However, the lack of spatial correlation of the present seismicity with the region of uplift, and the existence of strike-slip and normal modes of current seismicity are inconsistent with this model. Further unmodelled factors such as the presence of high-angle faults in the central region of uplift along the Baltic coast would be required in order to explain the pattern of seismicity today in terms of postglacial rebound stress. The sensitivity of the model predictions to the effects of compressibility, tectonic stress, viscosity and ice model is also investigated. For sites outside the ice margin, it is found that the mode of failure is sensitive to the presence of tectonic stress and that the onset timing is also dependent on compressibility. For sites within the ice margin, the effect of Earth rheology is shown to be small. However, ice load history is shown to have larger effects on the onset time of earthquakes and the magnitude of fault instability.
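    For readers unfamiliar with how "fault instability" is quantified in rebound studies of this kind, a commonly used Coulomb-type measure (stated here as background, not necessarily in the exact form these authors use) is the fault stability margin: the difference between the frictional strength of a fault plane and the shear stress acting on it,

$$ \mathrm{FSM} = \mu\,(\sigma_n - p) - |\tau|, $$

    where $\mu$ is the friction coefficient, $\sigma_n$ the normal stress on the fault, $p$ the pore pressure and $\tau$ the shear stress. Rebound stresses that drive the FSM toward zero or below bring the fault to failure, and the orientation of the stresses at failure determines the predicted mode (thrust, normal or strike-slip).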

  4. Fault detection and isolation

    NASA Technical Reports Server (NTRS)

    Bernath, Greg

    1994-01-01

    In order for a current satellite-based navigation system (such as the Global Positioning System, GPS) to meet integrity requirements, there must be a way of detecting erroneous measurements, without help from outside the system. This process is called Fault Detection and Isolation (FDI). Fault detection requires at least one redundant measurement, and can be done with a parity space algorithm. The best way around the fault isolation problem is not necessarily isolating the bad measurement, but finding a new combination of measurements which excludes it.
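    The parity-space test mentioned above is easy to state concretely. The sketch below (Python, illustrative only; the measurement matrix, data and threshold are invented, not the author's implementation) projects a redundant measurement vector onto the left null space of the measurement matrix, where any consistent measurement set maps to nearly zero and a faulty one does not.

```python
import numpy as np

# Minimal parity-space fault detection sketch. Assumes a linear model
# z = H x + noise with more measurements than unknowns, so that at
# least one redundant measurement exists.

def parity_residual(H, z):
    """Project measurements z onto the parity space (left null space of H)."""
    U, _, _ = np.linalg.svd(H)
    r = np.linalg.matrix_rank(H)
    P = U[:, r:].T          # rows span the left null space: P @ H ~= 0
    return P @ z            # ~0 when all measurements are consistent

# Illustrative example: three redundant measurements of one scalar state.
H = np.array([[1.0], [1.0], [1.0]])
z_good = np.array([10.0, 10.1, 9.9])
z_bad = np.array([10.0, 10.1, 14.0])   # third measurement is erroneous

threshold = 0.5  # illustrative; in practice set from noise statistics
for z in (z_good, z_bad):
    norm = np.linalg.norm(parity_residual(H, z))
    print("fault detected" if norm > threshold else "measurements consistent")
```

    As the abstract notes, detection alone needs only this one residual; isolating or excluding the bad measurement requires repeating the test on subsets of the measurements.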

  5. High level organizing principles for display of systems fault information for commercial flight crews

    NASA Technical Reports Server (NTRS)

    Rogers, William H.; Schutte, Paul C.

    1993-01-01

    Advanced fault management aiding concepts for commercial pilots are being developed in a research program at NASA Langley Research Center. One aim of this program is to re-evaluate current design principles for display of fault information to the flight crew: (1) from a cognitive engineering perspective and (2) in light of the availability of new types of information generated by advanced fault management aids. The study described in this paper specifically addresses principles for organizing fault information for display to pilots based on their mental models of fault management.

  6. SUMC fault tolerant computer system

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The results of the trade studies are presented. These trades cover: establishing the basic configuration, establishing the CPU/memory configuration, establishing an approach to crosstrapping interfaces, defining the requirements of the redundancy management unit (RMU), establishing a spare plane switching strategy for the fault-tolerant memory (FTM), and identifying the most cost-effective way of extending the memory addressing capability beyond the 64 K-bytes (K=1024) of the SUMC-II B. The results of the design are compiled in the Contract End Item (CEI) Specification for the NASA Standard Spacecraft Computer II (NSSC-II), IBM 7934507, which also covers the implementation of the FTM and the memory address expansion.

  7. Ste. Genevieve Fault Zone, Missouri and Illinois. Final report

    SciT

    Nelson, W.J.; Lumm, D.K.

    1985-07-01

    The Ste. Genevieve Fault Zone is a major structural feature which strikes NW-SE for about 190 km on the NE flank of the Ozark Dome. There is up to 900 m of vertical displacement on high-angle normal and reverse faults in the fault zone. At both ends the Ste. Genevieve Fault Zone dies out into a monocline. Two periods of faulting occurred: the first in late Middle Devonian time and the second from latest Mississippian through early Pennsylvanian time, with possible minor post-Pennsylvanian movement. No evidence was found to support the hypothesis that the Ste. Genevieve Fault Zone is part of a northwestward extension of the late Precambrian-early Cambrian Reelfoot Rift. The magnetic and gravity anomalies cited in support of the "St. Louis arm" of the Reelfoot Rift possibly reflect deep crustal features underlying and older than the volcanic terrain of the St. Francois Mountains (1.2 to 1.5 billion years old). In regard to neotectonics, no displacements of Quaternary sediments have been detected, but small earthquakes occur from time to time along the Ste. Genevieve Fault Zone. Many faults in the zone appear capable of slipping under the current stress regime of east-northeast to west-southwest horizontal compression. We conclude that the zone may continue to experience small earth movements, but catastrophic quakes similar to those at New Madrid in 1811-12 are unlikely. 32 figs., 1 tab.

  8. Hayward Fault, California Interferogram

    2000-08-17

    This image of California's Hayward fault is an interferogram created using a pair of images taken by ESA's ERS-1 and ERS-2 satellites in June 1992 and September 1997 over the central San Francisco Bay in California.

  9. Hayward Fault, California Interferogram

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This image of California's Hayward fault is an interferogram created using a pair of Synthetic Aperture Radar (SAR) images combined to measure changes in the surface that may have occurred between the times the two images were taken.

    The images were collected by the European Space Agency's Remote Sensing satellites ERS-1 and ERS-2 in June 1992 and September 1997 over the central San Francisco Bay in California.

    The radar image data are shown as a gray-scale image, with the interferometric measurements that show the changes rendered in color. Only the urbanized area could be mapped with these data. The color changes from orange tones to blue tones across the Hayward fault (marked by a thin red line) show about 2-3 centimeters (0.8-1.1 inches) of gradual displacement or movement of the southwest side of the fault. The block west of the fault moved horizontally toward the northwest during the 63 months between the acquisition of the two SAR images. This fault movement is called aseismic creep because the fault moved slowly without generating an earthquake.

    Scientists are using SAR interferometry along with other data collected on the ground to monitor this fault motion in an attempt to estimate the probability of an earthquake on the Hayward fault, which last had a major earthquake, of magnitude 7, in 1868. This analysis indicates that the northern part of the Hayward fault is creeping all the way from the surface to a depth of 12 kilometers (7.5 miles), suggesting that the potential for a large earthquake on the northern Hayward fault might be less than previously thought. The blue area to the west (lower left) of the fault near the center of the image appears to have moved upward relative to the nearby yellow and orange areas by about 2 centimeters (0.8 inches). The cause of this apparent motion is not yet confirmed, but the rise of groundwater levels during the time between the images may have caused the reversal of a small portion of the subsidence that

  10. Cable-fault locator

    NASA Technical Reports Server (NTRS)

    Cason, R. L.; Mcstay, J. J.; Heymann, A. P., Sr.

    1979-01-01

    Inexpensive system automatically indicates location of short-circuited section of power cable. Monitor does not require that cable be disconnected from its power source or that test signals be applied. Instead, ground-current sensors are installed in manholes or at other selected locations along cable run. When fault occurs, sensors transmit information about fault location to control center. Repair crew can be sent to location and cable can be returned to service with minimum of downtime.

  11. The Earth System Grid Federation (ESGF): Climate Science Infrastructure for Large-scale Data Management and Dissemination

    NASA Astrophysics Data System (ADS)

    Williams, D. N.

    2015-12-01

    Progress in understanding and predicting climate change requires advanced tools to securely store, manage, access, process, analyze, and visualize enormous and distributed data sets. Only then can climate researchers understand the effects of climate change across all scales and use this information to inform policy decisions. With the advent of major international climate modeling intercomparisons, a need emerged within the climate-change research community to develop efficient, community-based tools to obtain relevant meteorological and other observational data, develop custom computational models, and export analysis tools for climate-change simulations. While many nascent efforts to fill these gaps appeared, they were not integrated and therefore did not benefit from collaborative development. Sharing huge data sets was difficult, and the lack of data standards prevented the merger of output data from different modeling groups. Thus began one of the largest-ever collaborative data efforts in climate science, resulting in the Earth System Grid Federation (ESGF), which is now used to disseminate model, observational, and reanalysis data for research assessed by the Intergovernmental Panel on Climate Change (IPCC). Today, ESGF is an open-source petabyte-level data storage and dissemination operational code-base that manages secure resources essential for climate change study. It is designed to remain robust even as data volumes grow exponentially. The internationally distributed, peer-to-peer ESGF "data cloud" archive represents the culmination of an effort that began in the late 1990s. ESGF portals are gateways to scientific data collections hosted at sites around the globe that allow the user to register and potentially access the entire ESGF network of data and services. The growing international interest in ESGF development efforts has attracted many others who want to make their data more widely available and easy to use. For example, the World Climate

  12. Fault lubrication during earthquakes.

    PubMed

    Di Toro, G; Han, R; Hirose, T; De Paola, N; Nielsen, S; Mizoguchi, K; Ferri, F; Cocco, M; Shimamoto, T

    2011-03-24

    The determination of rock friction at seismic slip rates (about 1 m/s) is of paramount importance in earthquake mechanics, as fault friction controls the stress drop, the mechanical work and the frictional heat generated during slip. Given the difficulty in determining friction by seismological methods, constraints must instead be derived from experimental studies. Here we review a large set of published and unpublished experiments (∼300) performed in rotary shear apparatus at slip rates of 0.1-2.6 m/s. The experiments indicate a significant decrease in friction (of up to one order of magnitude), which we term fault lubrication, both for cohesive (silicate-built, quartz-built and carbonate-built) rocks and non-cohesive rocks (clay-rich, anhydrite, gypsum and dolomite gouges) typical of crustal seismogenic sources. The available mechanical work and the associated temperature rise in the slipping zone trigger a number of physicochemical processes (gelification, decarbonation and dehydration reactions, melting and so on) whose products are responsible for fault lubrication. The similarity between (1) experimental and natural fault products and (2) the mechanical work measures resulting from these laboratory experiments and seismological estimates suggests that it is reasonable to extrapolate the experimental data to conditions typical of earthquake nucleation depths (7-15 km). It seems that faults are lubricated during earthquakes, irrespective of the fault rock composition and of the specific weakening mechanism involved.
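    As a quantitative anchor for the quantities named above (a standard formulation offered for context, not taken verbatim from the paper): the mechanical work per unit fault area dissipated during slip, and the corresponding adiabatic temperature rise in a slipping zone of thickness $w$, are

$$ W = \int_0^{t_f} \tau(t)\,v(t)\,dt, \qquad \Delta T \approx \frac{W}{\rho\,c\,w}, $$

    where $\tau$ is the shear stress, $v$ the slip rate, $t_f$ the slip duration, $\rho$ the rock density and $c$ its specific heat capacity. At crustal seismogenic stresses and $v \approx 1$ m/s, $\Delta T$ in a millimetre-scale slip zone can reach many hundreds of degrees, which is what activates the weakening processes the abstract lists.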

  13. Geomorphic expression of strike-slip faults: field observations vs. analog experiments: preliminary results

    NASA Astrophysics Data System (ADS)

    Hsieh, S. Y.; Neubauer, F.; Genser, J.

    2012-04-01

    The aim of this project is to study the surface expression of strike-slip faults, with the main goal of finding rules for how these structures can be extrapolated to depth. In the first step, several basic properties of the fault architecture are in focus: (1) Is it possible to define the fault architecture by studying surface structures of the damage zone vs. the fault core, particularly the width of the damage zone? (2) Which second-order structures define the damage zone of strike-slip faults, and how do these relate to those reported in basement strike-slip fault analog experiments? (3) Besides classical fault-bend structures, is there a systematic along-strike variation of the damage zone width, and to which properties does this variation relate? We study the above-mentioned properties on the dextral Altyn fault, one of the largest strike-slip faults on Earth, which has the advantage of having developed in a fully arid climate. The Altyn fault includes a ca. 250 to 600 m wide fault valley, usually with the trace of the active fault in its center. The fault valley is confined by basement highs, from which alluvial fans develop towards the center of the fault valley. The active fault trace is marked by small-scale pressure ridges and offset alluvial fans. The basement highs confining the fault valley are several kilometers long and ca. 0.5 to 1 km wide, bounded by rotated dextral anti-Riedel faults and internally structured by a regular fracture pattern. Dextral anti-Riedel faults are often cut by Riedel faults. Consequently, the Altyn fault comprises a several-kilometer-wide damage zone. The fault core zone is a barrier to fluid flow, and the few springs of the region are located on the margin of the fault valley, implying that the fractured basement highs act as the reservoir; consequently, the southern Silk Road followed the Altyn fault valley. The preliminary data show that two or more orders of structures exist. Small-scale structures develop during a single earthquake. These finally

  14. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1993-01-01

    Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide an additional theoretical and empirical basis for estimating the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.

  15. Modeling in the State Flow Environment to Support Launch Vehicle Verification Testing for Mission and Fault Management Algorithms in the NASA Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Berg, Peter; England, Dwight; Johnson, Stephen B.

    2016-01-01

    Analysis methods and testing processes are essential activities in the engineering development and verification of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS). Central to mission success is reliable verification of the Mission and Fault Management (M&FM) algorithms for the SLS launch vehicle (LV) flight software. This is particularly difficult because M&FM algorithms integrate and operate LV subsystems, which consist of diverse forms of hardware and software themselves, with equally diverse integration from the engineering disciplines of LV subsystems. M&FM operation of SLS requires a changing mix of LV automation. During pre-launch the LV is primarily operated by the Kennedy Space Center (KSC) Ground Systems Development and Operations (GSDO) organization with some LV automation of time-critical functions, and much more autonomous LV operations during ascent that have crucial interactions with the Orion crew capsule, its astronauts, and with mission controllers at the Johnson Space Center. M&FM algorithms must perform all nominal mission commanding via the flight computer to control LV states from pre-launch through disposal and also address failure conditions by initiating autonomous or commanded aborts (crew capsule escape from the failing LV), redundancy management of failing subsystems and components, and safing actions to reduce or prevent threats to ground systems and crew. To address the criticality of the verification testing of these algorithms, the NASA M&FM team has utilized the State Flow environment (SFE) with its existing Vehicle Management End-to-End Testbed (VMET) platform which also hosts vendor-supplied physics-based LV subsystem models. The human-derived M&FM algorithms are designed and vetted in Integrated Development Teams composed of design and development disciplines such as Systems Engineering, Flight Software (FSW), Safety and Mission Assurance (S&MA) and major subsystems and vehicle elements

  16. Google Haul Out: Earth Observation Imagery and Digital Aerial Surveys in Coastal Wildlife Management and Abundance Estimation

    PubMed Central

    Moxley, Jerry H.; Bogomolni, Andrea; Hammill, Mike O.; Moore, Kathleen M. T.; Polito, Michael J.; Sette, Lisa; Sharp, W. Brian; Waring, Gordon T.; Gilbert, James R.; Halpin, Patrick N.; Johnston, David W.

    2017-01-01

    As the sampling frequency and resolution of Earth observation imagery increase, there are growing opportunities for novel applications in population monitoring. New methods are required to apply established analytical approaches to data collected from new observation platforms (e.g., satellites and unmanned aerial vehicles). Here, we present a method that estimates regional seasonal abundances for an understudied and growing population of gray seals (Halichoerus grypus) in southeastern Massachusetts, using opportunistic observations in Google Earth imagery. Abundance estimates are derived from digital aerial survey counts by adapting established correction-based analyses with telemetry behavioral observation to quantify survey biases. The result is a first regional understanding of gray seal abundance in the northeast US through opportunistic Earth observation imagery and repurposed animal telemetry data. As species observation data from Earth observation imagery become more ubiquitous, such methods provide a robust, adaptable, and cost-effective solution to monitoring animal colonies and understanding species abundances. PMID:29599542

  17. Google Haul Out: Earth Observation Imagery and Digital Aerial Surveys in Coastal Wildlife Management and Abundance Estimation.

    PubMed

    Moxley, Jerry H; Bogomolni, Andrea; Hammill, Mike O; Moore, Kathleen M T; Polito, Michael J; Sette, Lisa; Sharp, W Brian; Waring, Gordon T; Gilbert, James R; Halpin, Patrick N; Johnston, David W

    2017-08-01

    As the sampling frequency and resolution of Earth observation imagery increase, there are growing opportunities for novel applications in population monitoring. New methods are required to apply established analytical approaches to data collected from new observation platforms (e.g., satellites and unmanned aerial vehicles). Here, we present a method that estimates regional seasonal abundances for an understudied and growing population of gray seals (Halichoerus grypus) in southeastern Massachusetts, using opportunistic observations in Google Earth imagery. Abundance estimates are derived from digital aerial survey counts by adapting established correction-based analyses with telemetry behavioral observation to quantify survey biases. The result is a first regional understanding of gray seal abundance in the northeast US through opportunistic Earth observation imagery and repurposed animal telemetry data. As species observation data from Earth observation imagery become more ubiquitous, such methods provide a robust, adaptable, and cost-effective solution to monitoring animal colonies and understanding species abundances.

  18. Earth Observatory Satellite system definition study. Report no. 3: Design/cost tradeoff studies. Appendix D: EOS configuration design data. Part 2: Data management system configuration

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The Earth Observatory Satellite (EOS) data management system (DMS) is discussed. The DMS is composed of several subsystems or system elements which have basic purposes and are connected together so that the DMS can support the EOS program by providing the following: (1) payload data acquisition and recording, (2) data processing and product generation, (3) spacecraft and processing management and control, and (4) data user services. The configuration and purposes of the primary or high-data rate system and the secondary or local user system are explained. Diagrams of the systems are provided to support the systems analysis.

  19. Earth Science Informatics - Overview

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.

    2015-01-01

    Over the last 10-15 years, significant advances have been made in information management, there are an increasing number of individuals entering the field of information management as it applies to Geoscience and Remote Sensing data, and the field of informatics has come into its own. Informatics is the science and technology of applying computers and computational methods to the systematic analysis, management, interchange, and representation of science data, information, and knowledge. Informatics also includes the use of computers and computational methods to support decision making and applications. Earth Science Informatics (ESI, a.k.a. geoinformatics) is the application of informatics in the Earth science domain. ESI is a rapidly developing discipline integrating computer science, information science, and Earth science. Major national and international research and infrastructure projects in ESI have been carried out or are on-going. Notable among these are: the Global Earth Observation System of Systems (GEOSS), the European Commission's INSPIRE, the U.S. NSDI and Geospatial One-Stop, the NASA EOSDIS, and the NSF DataONE, EarthCube and Cyberinfrastructure for Geoinformatics. More than 18 departments and agencies in the U.S. federal government have been active in Earth science informatics. All major space agencies in the world have been involved in ESI research and application activities. In the United States, the Federation of Earth Science Information Partners (ESIP), whose membership includes nearly 150 organizations (government, academic and commercial) dedicated to managing, delivering and applying Earth science data, has been working on many ESI topics since 1998. The Committee on Earth Observation Satellites' (CEOS) Working Group on Information Systems and Services (WGISS) has been actively coordinating ESI activities among the space agencies. Keywords: Remote Sensing; Earth Science Informatics; Data Systems; Data Services; Metadata.

  20. A wideband magnetoresistive sensor for monitoring dynamic fault slip in laboratory fault friction experiments

    Kilgore, Brian D.

    2017-01-01

    A non-contact, wideband method of sensing dynamic fault slip in laboratory geophysical experiments employs an inexpensive magnetoresistive sensor, a small neodymium rare-earth magnet, and user-built, application-specific wideband signal conditioning. The magnetoresistive sensor generates a voltage proportional to the changing angles of the magnetic flux lines, generated by differential motion or rotation of the nearby magnet, through the sensor. The performance of an array of these sensors compares favorably to other conventional position-sensing methods employed at multiple locations along a 2 m long × 0.4 m deep laboratory strike-slip fault. For these magnetoresistive sensors, the lack of the resonance signals commonly encountered with cantilever-type position sensor mounting, the wideband response (DC to ≈100 kHz) that exceeds the capabilities of many traditional position sensors, and the small space required on the sample make them attractive options for capturing high-speed fault slip measurements in these laboratory experiments. An unanticipated observation of this study is the apparent sensitivity of this sensor to high-frequency electromagnetic signals associated with fault rupture and (or) rupture propagation, which may offer new insights into the physics of earthquake faulting.

  1. The Bear River Fault Zone, Wyoming and Utah: Complex Ruptures on a Young Normal Fault

    NASA Astrophysics Data System (ADS)

    Schwartz, D. P.; Hecker, S.; Haproff, P.; Beukelman, G.; Erickson, B.

    2012-12-01

    The Bear River fault zone (BRFZ), a set of normal fault scarps located in the Rocky Mountains at the eastern margin of Basin and Range extension, is a rare example of a nascent surface-rupturing fault. Paleoseismic investigations (West, 1994; this study) indicate that the entire neotectonic history of the BRFZ may consist of two large surface-faulting events in the late Holocene. We have estimated a maximum per-event vertical displacement of 6-6.5 m at the south end of the fault where it abuts the north flank of the east-west-trending Uinta Mountains. However, large hanging-wall depressions resulting from back rotation, which front scarps that locally exceed 15 m in height, are prevalent along the main trace, obscuring the net displacement and its along-strike distribution. The modest length (~35 km) of the BRFZ indicates ruptures with a large displacement-to-length ratio, which implies earthquakes with a high static stress drop. The BRFZ is one of several immature (low cumulative displacement) normal faults in the Rocky Mountain region that appear to produce high-stress-drop earthquakes. West (1992) interpreted the BRFZ as an extensionally reactivated ramp of the late Cretaceous-early Tertiary Hogsback thrust. LiDAR data on the southern section of the fault and Google Earth imagery show that these young ruptures are more extensive than currently mapped, with newly identified large (>10 m) antithetic scarps and footwall graben. The scarps of the BRFZ extend across a 2.5-5.0 km-wide zone, making this the widest and most complex Holocene surface rupture in the Intermountain West. The broad distribution of late Holocene scarps is consistent with reactivation of shallow bedrock structures, but the overall geometry of the BRFZ at depth and its extent into the seismogenic zone are uncertain.

  2. TWT transmitter fault prediction based on ANFIS

    NASA Astrophysics Data System (ADS)

    Li, Mengyan; Li, Junshan; Li, Shuangshuang; Wang, Wenqing; Li, Fen

    2017-11-01

    Fault prediction is an important component of health management and plays an important role in guaranteeing the reliability of complex electronic equipment. The transmitter is a unit with a high failure rate, and degradation of the TWT cathode is a common transmitter fault. In this dissertation, a model based on a set of key parameters of the TWT is proposed. By choosing proper parameters and applying an adaptive neuro-fuzzy inference system (ANFIS) training model, this method, combined with the analytic hierarchy process (AHP), has a certain reference value for the overall health judgment of TWT transmitters.

  3. Engaging teachers, interpreters and emergency management educators in disaster preparedness and EarthScope science through joint professional development workshops (Invited)

    NASA Astrophysics Data System (ADS)

    Pratt-Sitaula, B. A.; Lillie, R. J.; Butler, R. F.; Hunter, N.; Magura, B.; Groom, R.; Hedeen, C. D.; Johnson, J. A.; Ault, C.; Olds, S. E.

    2013-12-01

    The same geological forces that form the spectacular beaches and headlands of the Pacific Northwest also threaten lives and infrastructure with earthquakes and tsunamis. A new project called the Cascadia EarthScope, Earthquake, and Tsunami Education Program (CEETEP), is helping to mitigate the effects of these potential disasters through collaboration building and professional development for K-12 teachers, park and museum interpreters, and emergency management outreach educators in communities along the Oregon and Washington coast. Tens of thousands of Oregon and Washington residents live within severe earthquake-shaking and tsunami-inundation zones, and millions of tourists visit state and federal parks in these same areas each year. Teachers in the K-12 school systems convey some basics about geological hazards to their students, and park rangers and museum educators likewise engage visitors at their sites. Emergency management educators make regular presentations to local residents about disaster preparedness. CEETEP is strengthening these efforts by providing community-based workshops that bring together all of these professionals to review the basic science of earthquakes and tsunamis, learn about EarthScope and other research efforts that monitor the dynamic Earth in the region, and develop ways to collectively engage students and the general public on the mitigation of coastal geologic hazards. As part of a nationwide effort, the NSF EarthScope Program has been deploying hundreds of seismic, GPS, and other geophysical instruments to measure movement of the Earth's crust and detect earthquakes along the Cascadia Subduction Zone. These instruments provide detail for ongoing research showing that coastal regions are storing energy that will be released in the next great Cascadia earthquake, with the resulting tsunami arriving onshore in 30 minutes or less. CEETEP is helping to convey these cutting-edge findings to coastal educators and fulfill Earth

  4. Advanced information processing system: The Army Fault-Tolerant Architecture detailed design overview

    NASA Technical Reports Server (NTRS)

    Harper, Richard E.; Babikyan, Carol A.; Butler, Bryan P.; Clasen, Robert J.; Harris, Chris H.; Lala, Jaynarayan H.; Masotto, Thomas K.; Nagle, Gail A.; Prizant, Mark J.; Treadwell, Steven

    1994-01-01

    The Army Avionics Research and Development Activity (AVRADA) is pursuing programs that would enable effective and efficient management of the large amounts of situational data that occur during tactical rotorcraft missions. The Computer Aided Low Altitude Night Helicopter Flight Program has identified automated Terrain Following/Terrain Avoidance, Nap of the Earth (TF/TA, NOE) operation as a key enabling technology for advanced tactical rotorcraft to enhance mission survivability and mission effectiveness. The processing of critical information at low altitudes with short reaction times is life-critical and mission-critical, necessitating an ultra-reliable, high-throughput computing platform for dependable service for flight control, fusion of sensor data, route planning, near-field/far-field navigation, and obstacle avoidance operations. To address these needs, the Army Fault Tolerant Architecture (AFTA) is being designed and developed. This computer system is based upon the Fault Tolerant Parallel Processor (FTPP) developed by Charles Stark Draper Laboratory (CSDL). AFTA is a hard real-time, Byzantine fault-tolerant parallel processor programmed in the Ada language. This document describes the results of the Detailed Design (Phases 2 and 3 of a 3-year project) of the AFTA development. This document contains detailed descriptions of the program objectives, the TF/TA NOE application requirements, architecture, hardware design, operating systems design, systems performance measurements and analytical models.

  5. Tailoring Earth Observation To Ranchers For Improved Land Management And Profitability: The VegMachine Online Project

    NASA Astrophysics Data System (ADS)

    Scarth, P.; Trevithick, B.; Beutel, T.

    2016-12-01

    VegMachine Online is a freely available browser application that allows ranchers across Australia to view and interact with satellite-derived ground cover state and change maps for their property and extract this information in graphical form using interactive tools. It supports the delivery and communication of a massive earth observation data set in an accessible, producer-friendly way. Around 250,000 Landsat TM, ETM+ and OLI images were acquired across Australia, converted to terrain-corrected surface reflectance and masked for cloud, cloud shadow, terrain shadow and water. More than 2500 field sites across the Australian rangelands were used to derive endmembers used in a constrained unmixing approach to estimate the per-pixel proportion of bare ground, green and non-green vegetation for all images. A seasonal medoid compositing method was used to produce national fractional cover virtual mosaics for each three-month period since 1988. The time series of the green fraction is used to estimate the persistent green due to tree and shrub canopies, and this estimate is used to correct the fractional cover to ground cover for our mixed tree-grass rangeland systems. Finally, deciles are produced for key metrics every season to track each pixel's standing relative to the entire time series. These data are delivered through time-series-enabled web mapping services and customised web processing services that enable the full time series over any spatial extent to be interrogated in seconds via a RESTful interface. These services interface with a front-end browser application that provides product visualization for any date in the time series, tools to draw or import polygon boundaries, plot time-series ground cover comparisons, look at the effect of historical rainfall, and run the revised universal soil loss equation interactively to assess the effect of proposed changes in cover retention. VegMachine Online is already being used by ranchers monitoring paddock condition
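    To make the "constrained unmixing" step concrete, here is a minimal sketch (Python; the endmember spectra, band count and weighting are invented for illustration and are not VegMachine's actual values) of estimating non-negative, approximately sum-to-one cover fractions from a pixel's reflectance:

```python
import numpy as np
from scipy.optimize import nnls

# Hedged sketch of constrained linear unmixing: estimate per-pixel
# fractions of bare, green and non-green cover from reflectance, given
# endmember spectra. Values below are illustrative only.

def unmix(reflectance, endmembers, weight=100.0):
    """Non-negative, approximately sum-to-one fractional cover estimate.

    endmembers: (n_bands, 3) matrix; columns = bare, green, non-green.
    """
    # Appending a heavily weighted row of ones softly enforces the
    # sum-to-one constraint inside the non-negative least-squares solve.
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(reflectance, weight)
    fractions, _ = nnls(A, b)
    return fractions

endmembers = np.array([      # 4 bands x 3 endmembers (illustrative)
    [0.30, 0.05, 0.20],
    [0.35, 0.08, 0.25],
    [0.40, 0.45, 0.30],
    [0.45, 0.25, 0.40],
])
pixel = endmembers @ np.array([0.5, 0.3, 0.2])  # synthetic mixed pixel
print(unmix(pixel, endmembers))                 # ~ [0.5, 0.3, 0.2]
```

    The weighted sum-to-one row is a standard augmentation trick for fully constrained unmixing; operational systems tune the weight against sensor noise.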

  6. Map and Data for Quaternary Faults and Fault Systems on the Island of Hawai`i

    Cannon, Eric C.; Burgmann, Roland; Crone, Anthony J.; Machette, Michael N.; Dart, Richard L.

    2007-01-01

    and catalog of data, both in Adobe Acrobat PDF format. The senior authors (Eric C. Cannon and Roland Burgmann) compiled the fault data as part of ongoing studies of active faulting on the Island of Hawai`i. The USGS is responsible for organizing and integrating the State or regional products under their National Seismic Hazard Mapping project, including the coordination and oversight of contributions from individuals and groups (Michael N. Machette and Anthony J. Crone), database design and management (Kathleen M. Haller), and digitization and analysis of map data (Richard L. Dart). After being released as an Open-File Report, the data in this report will be available online at http://earthquake.usgs.gov/regional/qfaults/, the USGS Quaternary Fault and Fold Database of the United States.

  7. Layered clustering multi-fault diagnosis for hydraulic piston pump

    NASA Astrophysics Data System (ADS)

    Du, Jun; Wang, Shaoping; Zhang, Haiyan

    2013-04-01

    Efficient diagnosis is very important for improving the reliability and performance of an aircraft hydraulic piston pump, and it is one of the key technologies in prognostic and health management systems. In practice, due to the harsh working environment and heavy working loads, multiple faults of an aircraft hydraulic pump may occur simultaneously after long periods of operation. However, most existing diagnosis methods can only distinguish pump faults that occur individually. Therefore, a new method needs to be developed to realize effective diagnosis of simultaneous multiple faults in an aircraft hydraulic pump. In this paper, a new method based on a layered clustering algorithm is proposed to diagnose multiple faults of an aircraft hydraulic pump that occur simultaneously. Intensive failure mechanism analyses of the five main types of faults are carried out, and based on these analyses the optimal combination and layout of diagnostic sensors is attained. A three-layered diagnosis reasoning engine is designed according to the faults' risk priority numbers and the characteristics of different fault feature extraction methods. The most serious failures are first distinguished with individual signal processing. For the two harder-to-separate faults, i.e., swash plate eccentricity and incremental clearance increase between piston and slipper, a clustering diagnosis algorithm based on the statistical average relative power difference (ARPD) is proposed. By effectively enhancing the fault features of these two faults, the ARPDs calculated from vibration signals are employed to complete the hypothesis testing. The ARPDs of the different faults follow different probability distributions. Compared with the classical fast Fourier transform-based spectrum diagnosis method, the experimental results demonstrate that the proposed algorithm can diagnose the multiple faults, which occur synchronously, with higher precision and reliability.
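    The ARPD statistic itself is not spelled out in the abstract; the sketch below (Python) shows one plausible reading of an average relative power difference, comparing band powers of a test vibration signal against a healthy baseline. The band count, signals and normalization are assumptions for illustration, not the paper's definition.

```python
import numpy as np

# Hedged sketch of an average-relative-power-difference (ARPD) feature:
# split each signal's power spectrum into bands, then average the
# relative change of the test signal's band powers vs. a healthy baseline.

def band_powers(signal, n_bands=8):
    """Total power in n_bands equal slices of the one-sided spectrum."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return np.array([band.sum() for band in np.array_split(spectrum, n_bands)])

def arpd(test_signal, baseline_signal, n_bands=8):
    p_test = band_powers(test_signal, n_bands)
    p_base = band_powers(baseline_signal, n_bands)
    return np.mean((p_test - p_base) / (p_base + 1e-12))

# Illustrative use: a fault that adds a strong tone inflates the ARPD.
t = np.arange(0, 1, 1 / 1000)                         # 1 s at 1 kHz
healthy = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)
faulty = healthy + 0.8 * np.sin(2 * np.pi * 220 * t)  # extra 220 Hz tone
print(arpd(healthy, healthy), arpd(faulty, healthy))  # ~0 vs. clearly > 0
```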

  8. The Talas-Fergana Fault, Kirghiz and Kazakh, USSR

    Wallace, R.E.

    1976-01-01

    The great Talas-Fergana fault transects the Soviet republic of Kirghiz in Soviet Central Asia and extends southeastward into China and northwestward into Kazakh SSR (figs. 1 and 2). This great rupture in the Earth's crust rivals the San Andreas fault in California; it is long (approximately 900 kilometers), complex, and possibly has a lateral displacement of hundreds of kilometers similar to that on the San Andreas fault. The Soviet geologist V. S. Burtman suggested that right-lateral offset of 250 kilometers has occurred, citing a shift of Devonian rocks as evidence (fig. 3). By no means do all Soviet geologists agree. Some hold the view that there is no lateral displacement along the Talas-Fergana fault and that the anomalous distribution of Paleozoic rocks is a result of the original position of deposition. 

  9. A fuzzy decision tree for fault classification.

    PubMed

    Zio, Enrico; Baraldi, Piero; Popescu, Irina C

    2008-02-01

    In plant accident management, the control room operators are required to identify the causes of the accident based on the different patterns of evolution that the monitored process variables develop. This task is often quite challenging, given the large number of process parameters monitored and the intense emotional state under which it is performed. To aid the operators, various techniques of fault classification have been engineered. An important requirement for their practical application is the physical interpretability of the relationships among the process variables underpinning the fault classification. In this view, the present work propounds a fuzzy approach to fault classification, which relies on fuzzy if-then rules inferred from the clustering of available pre-classified signal data, which are then organized in a logical and transparent decision tree structure. The advantages offered by the proposed approach are precisely that a transparent fault classification model is mined out of the signal data and that the underlying physical relationships among the process variables are easily interpretable as linguistic if-then rules that can be explicitly visualized in the decision tree structure. The approach is applied to a case study regarding the classification of simulated faults in the feedwater system of a boiling water reactor.
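    A toy example makes the "linguistic if-then rule" idea tangible. The sketch below (Python) evaluates two hand-written fuzzy rules with triangular membership functions; the variables, membership parameters and fault labels are invented for illustration and have nothing to do with the authors' reactor case study.

```python
# Toy fuzzy if-then fault classification in the spirit of the abstract
# (not the authors' model). Inputs are assumed normalized to [0, 1].

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def classify(flow, temp):
    # Rule 1: IF flow is LOW AND temp is HIGH THEN fault = "pump degradation"
    mu1 = min(tri(flow, 0.0, 0.2, 0.5), tri(temp, 0.6, 0.9, 1.2))
    # Rule 2: IF flow is NORMAL AND temp is NORMAL THEN fault = "none"
    mu2 = min(tri(flow, 0.3, 0.6, 0.9), tri(temp, 0.2, 0.5, 0.8))
    # Winner-take-all over rule activation strengths.
    return max([("pump degradation", mu1), ("none", mu2)], key=lambda r: r[1])

print(classify(flow=0.25, temp=0.85))  # -> ('pump degradation', 0.83...)
```

    Each rule's antecedent strength is the minimum of its membership degrees, which is exactly the kind of transparent, visualizable relationship the abstract argues for.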

  10. Automatic Fault Characterization via Abnormality-Enhanced Classification

    SciT

    Bronevetsky, G; Laguna, I; de Supinski, B R

    Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.

  11. Earth Science Informatics - Overview

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.

    2017-01-01

    Over the last 10-15 years, significant advances have been made in information management, there are an increasing number of individuals entering the field of information management as it applies to Geoscience and Remote Sensing data, and the field of informatics has come into its own. Informatics is the science and technology of applying computers and computational methods to the systematic analysis, management, interchange, and representation of science data, information, and knowledge. Informatics also includes the use of computers and computational methods to support decision making and applications. Earth Science Informatics (ESI, a.k.a. geoinformatics) is the application of informatics in the Earth science domain. ESI is a rapidly developing discipline integrating computer science, information science, and Earth science. Major national and international research and infrastructure projects in ESI have been carried out or are on-going. Notable among these are: the Global Earth Observation System of Systems (GEOSS), the European Commission's INSPIRE, the U.S. NSDI and Geospatial One-Stop, the NASA EOSDIS, and the NSF DataONE, EarthCube and Cyberinfrastructure for Geoinformatics. More than 18 departments and agencies in the U.S. federal government have been active in Earth science informatics. All major space agencies in the world have been involved in ESI research and application activities. In the United States, the Federation of Earth Science Information Partners (ESIP), whose membership includes over 180 organizations (government, academic and commercial) dedicated to managing, delivering and applying Earth science data, has been working on many ESI topics since 1998. The Committee on Earth Observation Satellites' (CEOS) Working Group on Information Systems and Services (WGISS) has been actively coordinating ESI activities among the space agencies.

  12. Earth Science Informatics - Overview

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.

    2017-01-01

    Over the last 10-15 years, significant advances have been made in information management, there are an increasing number of individuals entering the field of information management as it applies to Geoscience and Remote Sensing data, and the field of informatics has come into its own. Informatics is the science and technology of applying computers and computational methods to the systematic analysis, management, interchange, and representation of science data, information, and knowledge. Informatics also includes the use of computers and computational methods to support decision making and applications. Earth Science Informatics (ESI, a.k.a. geoinformatics) is the application of informatics in the Earth science domain. ESI is a rapidly developing discipline integrating computer science, information science, and Earth science. Major national and international research and infrastructure projects in ESI have been carried out or are on-going. Notable among these are: the Global Earth Observation System of Systems (GEOSS), the European Commission's INSPIRE, the U.S. NSDI and Geospatial One-Stop, the NASA EOSDIS, and the NSF DataONE, EarthCube and Cyberinfrastructure for Geoinformatics. More than 18 departments and agencies in the U.S. federal government have been active in Earth science informatics. All major space agencies in the world have been involved in ESI research and application activities. In the United States, the Federation of Earth Science Information Partners (ESIP), whose membership includes over 180 organizations (government, academic and commercial) dedicated to managing, delivering and applying Earth science data, has been working on many ESI topics since 1998. The Committee on Earth Observation Satellites' (CEOS) Working Group on Information Systems and Services (WGISS) has been actively coordinating ESI activities among the space agencies. The talk will present an overview of current efforts in ESI, the role members of IEEE GRSS play, and discuss

  13. Earth Observation

    2014-06-01

    ISS040-E-006327 (1 June 2014) --- A portion of the International Space Station's solar array panels and Earth's horizon are featured in this image photographed by an Expedition 40 crew member on the space station.

  14. Geophysical Characterization of the Hilton Creek Fault System

    NASA Astrophysics Data System (ADS)

    Lacy, A. K.; Macy, K. P.; De Cristofaro, J. L.; Polet, J.

    2016-12-01

    The Long Valley Caldera straddles the eastern edge of the Sierra Nevada Batholith and the western edge of the Basin and Range Province, and represents one of the largest caldera complexes on Earth. The caldera is intersected by numerous fault systems, including the Hartley Springs Fault System, the Round Valley Fault System, the Long Valley Ring Fault System, and the Hilton Creek Fault System, which is our main region of interest. The Hilton Creek Fault System appears as a single NW-striking fault, dipping to the NE, from Davis Lake in the south to the southern rim of the Long Valley Caldera. Inside the caldera, it splays into numerous parallel faults that extend toward the resurgent dome. Seismicity in the area increased significantly in May 1980, following a series of large earthquakes in the vicinity of the caldera and a subsequent large earthquake swarm which has been suggested to be the result of magma migration. A large portion of the earthquake swarms in the Long Valley Caldera occurs on or around the Hilton Creek Fault splays. We are conducting an interdisciplinary geophysical study of the Hilton Creek Fault System from just south of the onset of splay faulting to its extension into the dome of the caldera. Our investigation includes ground-based magnetic field measurements, high-resolution total station elevation profiles, Structure-from-Motion derived topography, and an analysis of earthquake focal mechanisms and statistics. Preliminary analysis of topographic profiles, approximately 1 km in length, reveals the presence of at least three distinct fault splays within the caldera with vertical offsets of 0.5 to 1.0 meters. More detailed topographic mapping is expected to highlight smaller structures. We are also generating maps of the variation in b-value along different portions of the Hilton Creek system to determine whether we can detect any transition to more swarm-like behavior towards the north. We will show maps of magnetic anomalies, topography
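    For context, b-value mapping of the kind mentioned above usually rests on the Aki (1965) maximum-likelihood estimator, with Utsu's correction when magnitudes are binned. The sketch below (Python) shows that standard estimator on synthetic magnitudes; it is not the authors' data or code, and the completeness magnitude Mc is assumed known.

```python
import numpy as np

# Standard Aki (1965) maximum-likelihood b-value estimate. dm is Utsu's
# correction for magnitudes binned to width dm (0 here because the
# synthetic magnitudes below are continuous).

def b_value(magnitudes, mc, dm=0.0):
    m = np.asarray(magnitudes)
    m = m[m >= mc]                     # use only the complete catalogue
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Synthetic Gutenberg-Richter catalogue with true b = 1: magnitudes above
# Mc are exponentially distributed with scale log10(e) / b.
rng = np.random.default_rng(0)
mags = 1.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=2000)
print(b_value(mags, mc=1.0))           # ~1.0; higher b is more swarm-like
```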

  15. Illuminating Northern California’s Active Faults

    Prentice, Carol S.; Crosby, Christopher J.; Whitehill, Caroline S.; Arrowsmith, J. Ramon; Furlong, Kevin P.; Philips, David A.

    2009-01-01

    Newly acquired light detection and ranging (lidar) topographic data provide a powerful community resource for the study of landforms associated with the plate boundary faults of northern California (Figure 1). In the spring of 2007, GeoEarthScope, a component of the EarthScope Facility construction project funded by the U.S. National Science Foundation, acquired approximately 2000 square kilometers of airborne lidar topographic data along major active fault zones of northern California. These data are now freely available in point cloud (x, y, z coordinate data for every laser return), digital elevation model (DEM), and KMZ (zipped Keyhole Markup Language, for use in Google Earth™ and other similar software) formats through the GEON OpenTopography Portal (http://www.OpenTopography.org/data). Importantly, vegetation can be digitally removed from lidar data, producing high-resolution images (0.5- or 1.0-meter DEMs) of the ground surface beneath forested regions that reveal landforms typically obscured by the vegetation canopy (Figure 2).

  16. Digital release of the Alaska Quaternary fault and fold database

    NASA Astrophysics Data System (ADS)

    Koehler, R. D.; Farrell, R.; Burns, P.; Combellick, R. A.; Weakland, J. R.

    2011-12-01

    The Alaska Division of Geological & Geophysical Surveys (DGGS) has designed a Quaternary fault and fold database for Alaska in conformance with standards defined by the U.S. Geological Survey for the national Quaternary fault and fold database. Alaska is the most seismically active region of the United States, yet little information exists on the location, style of deformation, and slip rates of its Quaternary faults. Thus, to provide an accurate, user-friendly, reference-based fault inventory to the public, we are producing a digital GIS shapefile of Quaternary fault traces and compiling summary information on each fault. Here, we present relevant information pertaining to the digital GIS shapefile and the online access and availability of the Alaska database. This database will be useful for engineering geologic studies; geologic, geodetic, and seismic research; and policy planning. The data will also contribute to the fault source database being constructed by the Global Earthquake Model (GEM) Faulted Earth project, which is developing tools to better assess earthquake risk. We derived the initial list of Quaternary active structures from The Neotectonic Map of Alaska (Plafker et al., 1994) and supplemented it with more recent data where available. Due to the limited level of knowledge on Quaternary faults in Alaska, pre-Quaternary fault traces from the Plafker map are shown as a layer in our digital database so users may view a more accurate distribution of mapped faults, and to suggest the possibility that some older traces may be active yet un-studied. The database will be updated as new information is developed. We selected each fault by reviewing the literature and georegistered the faults from 1:250,000-scale paper maps contained in 1970s-vintage and earlier bedrock maps; source map scales range from 1:20,000 to 1:500,000. Fault parameters in our GIS fault attribute tables include fault name, age, slip rate, slip sense, dip direction, fault line type

  17. DIFFERENTIAL FAULT SENSING CIRCUIT

    DOEpatents

    Roberts, J.H.

    1961-09-01

    A differential fault sensing circuit is designed for detecting arcing in high-voltage vacuum tubes arranged in parallel. A circuit is provided which senses differences in voltages appearing between corresponding elements likely to fault. Sensitivity of the circuit is adjusted to some level above which arcing will cause detectable differences in voltage. For particular corresponding elements, a group of pulse transformers are connected in parallel with diodes connected across the secondaries thereof so that only voltage excursions are transmitted to a thyratron which is biased to the sensitivity level mentioned.

  18. Fault tolerant linear actuator

    DOEpatents

    Tesar, Delbert

    2004-09-14

    In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.

  19. Computer hardware fault administration

    DOEpatents

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
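    A toy sketch (Python) of the failover idea the patent abstract describes: locate the defective link in the first network and route around it through the second, independent network. The topologies, node labels and BFS routing below are invented for illustration; the patent itself does not specify this algorithm.

```python
from collections import deque

# Route a message from src to dst, avoiding links flagged as defective;
# fall back to the second, independent network when the first has no path.

def bfs_path(adj, src, dst, bad_links=frozenset()):
    """Shortest-hop path in adjacency dict adj, skipping defective links."""
    prev, queue, seen = {src: None}, deque([src]), {src}
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj[node]:
            if nxt not in seen and frozenset((node, nxt)) not in bad_links:
                seen.add(nxt)
                prev[nxt] = node
                queue.append(nxt)
    return None  # no route in this network

primary   = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}         # a line of nodes
secondary = {0: [3, 1], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # a ring

defective = {frozenset((1, 2))}                 # fault located on link 1-2
route = bfs_path(primary, 0, 3, defective) or bfs_path(secondary, 0, 3)
print(route)   # primary is severed, so the secondary network yields [0, 3]
```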

  20. [Establishment of malaria early warning system in Jiangsu Province II application of digital earth system in malaria epidemic management and surveillance].

    PubMed

    Wang, Wei-Ming; Zhou, Hua-Yun; Liu, Yao-Bao; Li, Ju-Lin; Cao, Yuan-Yuan; Cao, Jun

    2013-04-01

    To explore a new mode of malaria elimination through the application of a digital earth system in malaria epidemic management and surveillance. While investigating malaria cases and handling epidemic areas in Jiangsu Province in 2011, we used a JISIBAO UniStrong G330 GIS data acquisition unit (GPS) to collect the latitude and longitude of each case location, and then established a landmark library of early-warning areas and an image management system using Google Earth Free 6.2 and its image processing software. A total of 374 malaria cases were reported in Jiangsu Province in 2011. Among them, there were 13 local vivax malaria cases, 11 vivax malaria cases imported from other provinces, 20 abroad-imported vivax malaria cases, 309 abroad-imported falciparum malaria cases, 7 abroad-imported quartan malaria cases (Plasmodium malariae infection), and 14 abroad-imported ovale malaria cases (P. ovale infection). Analysis with the Google Earth mapping system showed that these malaria cases had a certain degree of aggregation, except the abroad-imported quartan malaria cases, which were highly sporadic. The local vivax malaria cases were mainly concentrated in Sihong County; the vivax malaria cases imported from other provinces were mainly concentrated in Suzhou City and Wuxi City; the abroad-imported vivax malaria cases were concentrated in Nanjing City; the abroad-imported falciparum malaria cases clustered in the middle part of Jiangsu Province; and the abroad-imported ovale malaria cases clustered in Liyang City. The operation of Google Earth Free 6.2 is simple, convenient and quick, and could help public health authorities make decisions on malaria prevention and control, including the use of funds and other health resources.
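
    The landmark library described here amounts to turning GPS case coordinates into Google Earth placemarks. A minimal sketch of that step, with invented placeholder cases rather than actual surveillance records, is to write a KML file directly:

      # Invented placeholder case records (not actual surveillance data).
      cases = [
          {"id": "case-001", "type": "local vivax", "lat": 33.46, "lon": 118.22},
          {"id": "case-002", "type": "imported falciparum", "lat": 32.39, "lon": 119.41},
      ]

      placemarks = "\n".join(
          "  <Placemark><name>{id} ({type})</name>"
          "<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>".format(**c)
          for c in cases
      )

      kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
             '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
             + placemarks + '\n</Document>\n</kml>\n')

      # Google Earth can open the resulting file as a landmark layer.
      with open("malaria_cases.kml", "w") as f:
          f.write(kml)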

  1. Fault tree models for fault tolerant hypercube multiprocessors

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Tuazon, Jezus O.

    1991-01-01

    Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.

  2. Local precision nets for monitoring movements of faults and large engineering structures

    NASA Technical Reports Server (NTRS)

    Henneberg, H. G.

    1978-01-01

    Local high-precision geodetic nets were installed along the Bocono Fault to observe possible horizontal crustal deformations and movements. In the fault area there are a few large structures, which are also included in the investigation. In the near future, measurements will be extended to other sites on the Bocono Fault and also to the El Pilar Fault. In the same way and by similar methods, high-precision geodetic nets are used in Venezuela to observe the behavior of large structures, such as bridges and large dams, and of earth-surface deformations due to industrial activities.

  3. Earth Observation

    NASA Technical Reports Server (NTRS)

    1994-01-01

    For pipeline companies, mapping, facilities inventory, pipe inspection, environmental reporting, and similar tasks are a monumental undertaking. An Automated Mapping/Facilities Management/Geographic Information System (AM/FM/GIS) is the solution, but building one is costly and time consuming. James W. Sewall Company, an AM/FM/GIS consulting firm, proposed an EOCAP project to Stennis Space Center (SSC) to develop a computerized system for storage and retrieval of digital aerial photography. This would provide its customer, Algonquin Gas Transmission Company, with an accurate inventory of rights-of-way locations and pipeline surroundings. The project took four years to complete, and an important byproduct was SSC's Digital Aerial Rights-of-Way Monitoring System (DARMS). DARMS saves substantial time and money. EOCAP enabled Sewall to develop new products and expand its customer base, and Algonquin now manages regulatory requirements more efficiently and accurately. Because changes on Earth's surface are accelerating, planners and resource managers must assess the consequences of change as quickly and accurately as possible. Pacific Meridian Resources and NASA's Stennis Space Center developed a system for monitoring changes in land cover and use, which incorporated the latest change-detection technologies. The goal of this EOCAP project was to tailor existing technologies into a system that could be commercialized. Landsat imagery enabled Pacific Meridian to identify areas that had sustained substantial vegetation loss. The project was successful, and Pacific Meridian's annual revenues have substantially increased. EOCAP provides government co-funding to encourage private investment in and broader use of NASA remote sensing technology.

  4. Fault compaction and overpressured faults: results from a 3-D model of a ductile fault zone

    NASA Astrophysics Data System (ADS)

    Fitzenz, D. D.; Miller, S. A.

    2003-10-01

    A model of a ductile fault zone is incorporated into a forward 3-D earthquake model to better constrain fault-zone hydraulics. The conceptual framework of the model fault zone was chosen such that two distinct parts are recognized. The fault core, characterized by a relatively low permeability, is composed of a coseismic fault surface embedded in a visco-elastic volume that can creep and compact. The fault core is surrounded by, and mostly sealed from, a high-permeability damaged zone. The model fault properties correspond explicitly to those of the coseismic fault core. Porosity and pore pressure evolve to account for the viscous compaction of the fault core, while stresses evolve in response to the applied tectonic loading and to shear creep of the fault itself. A small diffusive leakage is allowed in and out of the fault zone. Coseismically, porosity is created to account for frictional dilatancy. We show that, in the case of a 3-D fault model with no in-plane flow and constant fluid compressibility, pore pressures do not drop to hydrostatic levels after a seismic rupture, leading to an overpressured, weak fault. Since pore pressure plays a key role in fault behaviour, we investigate coseismic changes in hydraulic properties. In the full 3-D model, pore pressures vary instantaneously by the poroelastic effect during the propagation of the rupture. Once the stress state stabilizes, pore pressures are incrementally redistributed in the failed patch. We show that the significant effect of pressure-dependent fluid compressibility in the no in-plane flow case becomes a secondary effect when the other spatial dimensions are considered, because in-plane flow with a near-lithostatically pressured neighbourhood equilibrates at a pressure much higher than hydrostatic levels, forming persistent high-pressure fluid compartments. If the observed faults are not all overpressured and weak, other mechanisms, not included in this model, must be at work in nature, which need to be

  5. DisasterHub: A mobile application for enabling crowd generated data fusion in Earth Observation disaster management services

    NASA Astrophysics Data System (ADS)

    Tsironis, Vassilis; Herekakis, Themistocles; Tsouni, Alexia; Kontoes, Charalampos Haris

    2016-04-01

    The rapid changes in climate over the last decades, together with the growth of the human population, have created a fragile biosphere, prone to natural and man-made disasters that result in massive flows of environmental migrants and great disturbances of ecosystems. The magnitude of the latest great disasters has demonstrated the need for high-quality Earth Observation (EO) services for disaster risk reduction and emergency support (DRR & EMS). The EO community runs ambitious initiatives to generate services with direct impact on the biosphere, and intends to stimulate wider participation of citizens, enabling openness through the Open Innovation paradigm. This in turn results in the tremendous growth of open-source software technologies associated with the web, social media, mobile devices and crowdsourcing. The Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing of the National Observatory of Athens has developed, in the framework of the BEYOND Centre of Excellence for EO-based monitoring of Natural Disasters (http://www.beyond-eocenter.eu), a rich ecosystem of Copernicus-compliant services addressing diverse hazardous phenomena caused by climate and weather extremes (fires, floods, windstorms, heat waves), atmospheric disturbances (smoke, dust, ozone, UV), and geo-hazards (earthquakes, landslides, volcanoes). Several services are delivered in near-real time to the public and to institutional authorities at national and regional level in southeastern Europe. Specific ones have been recognized worldwide for their innovation and operational aspects (e.g. FIREHUB was awarded first prize as Best Service Challenge in the Copernicus Masters Competition, 2014). However, a communication gap still exists between the BEYOND ecosystem and those directly affected by natural disasters: the citizens and emergency response managers. This disruption of information flow between interested parties is addressed

  6. Row fault detection system

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN

    2008-10-14

    An apparatus, program product and method check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine the presence of a faulty node or connection.
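
    A toy version of this row-scan logic is sketched below: each node records which neighbors it heard from during the concurrent exchange, and the pattern of missed messages localizes a suspect node or link. The exchange results are simulated, not taken from real hardware.

      def diagnose_row(heard):
          """heard[i] is the set of neighbor indices node i received messages from
          during one concurrent exchange along the row. A node missed by all of
          its neighbors is a suspect node; a one-sided miss is a suspect link."""
          n = len(heard)
          suspects = []
          for i in range(n):
              neighbors = [j for j in (i - 1, i + 1) if 0 <= j < n]
              missed = [j for j in neighbors if i not in heard[j]]
              if missed and len(missed) == len(neighbors):
                  suspects.append(("node", i))
              else:
                  suspects.extend(("link", (i, j)) for j in missed)
          return suspects

      # Simulated exchange over 4 nodes in a row: node 2 transmits nothing.
      heard = [{1}, {0}, {1, 3}, set()]
      print(diagnose_row(heard))   # -> [('node', 2)]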

  7. Row fault detection system

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN

    2012-02-07

    An apparatus, program product and method check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.

  8. Row fault detection system

    DOEpatents

    Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward

    2010-02-23

    An apparatus and program product check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.

  9. Perspective View, Garlock Fault

    2000-04-20

    California's Garlock Fault, marking the northwestern boundary of the Mojave Desert, lies at the foot of the mountains, running from the lower right to the top center of this image, which was created with data from NASA's Shuttle Radar Topography Mission.

  10. Fault-Mechanism Simulator

    ERIC Educational Resources Information Center

    Guyton, J. W.

    1972-01-01

    An inexpensive, simple mechanical model of a fault can be produced to simulate the effects leading to an earthquake. This model has been used successfully with students from elementary to college levels and can be demonstrated to classes as large as thirty students. (DF)

  11. Dynamic Fault Detection Chassis

    SciT

    Mize, Jeffery J

    2007-01-01

    The high-frequency switching megawatt-class High Voltage Converter Modulator (HVCM) developed by Los Alamos National Laboratory for the Oak Ridge National Laboratory's Spallation Neutron Source (SNS) is now in operation. One of the major problems with the modulator systems is shoot-thru conditions that can occur in an IGBT H-bridge topology, resulting in large fault currents and device failure within a few microseconds. The Dynamic Fault Detection Chassis (DFDC) is a fault monitoring system; it monitors transformer flux saturation using a window comparator, and dV/dt events on the cathode voltage caused by any abnormality such as capacitor breakdown, transformer primary turns shorts, or dielectric breakdown between the transformer primary and secondary. If faults are detected, the DFDC will inhibit the IGBT gate drives and shut the system down, significantly reducing the possibility of a shoot-thru condition or other equipment-damaging events. In this paper, we present system integration considerations and performance characteristics of the DFDC, and discuss its ability to significantly reduce costly downtime for the entire facility.
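
    In software terms, the two checks described resemble a window comparator plus a slew-rate (dV/dt) detector that latches a fault and inhibits the gate drives. The sketch below is a loose analogue with invented thresholds and waveform values, not the DFDC's actual hardware logic.

      def dfdc_trip(samples, dt, v_low, v_high, dvdt_limit):
          """Return the sample index at which a fault latches: a value outside
          the comparator window, or a rate of change beyond the dV/dt limit."""
          for i in range(1, len(samples)):
              out_of_window = not (v_low <= samples[i] <= v_high)
              slew = abs(samples[i] - samples[i - 1]) / dt
              if out_of_window or slew > dvdt_limit:
                  return i  # inhibit IGBT gate drives and shut down here
          return None

      # Invented cathode-voltage samples (kV) with an abrupt collapse at index 3.
      samples = [120.0, 119.5, 120.2, 60.0, 5.0]
      print(dfdc_trip(samples, dt=1e-6, v_low=100.0, v_high=130.0, dvdt_limit=5e6))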

  12. Fault isolation techniques

    NASA Technical Reports Server (NTRS)

    Dumas, A.

    1981-01-01

    Three major areas that are considered in the development of an overall maintenance scheme of computer equipment are described. The areas of concern related to fault isolation techniques are: the programmer (or user), company and its policies, and the manufacturer of the equipment.

  13. Faults and Flows

    2014-10-20

    Lava flows of Daedalia Planum can be seen at the top and bottom of this image from NASA's 2001 Mars Odyssey spacecraft. The ridge and linear depression in the central part of the image are part of Mangala Fossa, a fault-bounded graben.

  14. Recent Progresses in Incorporating Human Land-Water Management into Global Land Surface Models Toward Their Integration into Earth System Models

    NASA Technical Reports Server (NTRS)

    Pokhrel, Yadu N.; Hanasaki, Naota; Wada, Yoshihide; Kim, Hyungjun

    2016-01-01

    The global water cycle has been profoundly affected by human land-water management. As changes in the water cycle on land can affect the functioning of a wide range of biophysical and biogeochemical processes of the Earth system, it is essential to represent human land-water management in Earth system models (ESMs). During the recent past, noteworthy progress has been made in large-scale modeling of human impacts on the water cycle, but sufficient advancements have not yet been made in integrating the newly developed schemes into ESMs. This study reviews the progress made in incorporating human factors in large-scale hydrological models and their integration into ESMs. The study focuses primarily on the recent advancements and existing challenges in incorporating human impacts in global land surface models (LSMs) as a way forward to the development of ESMs with humans as integral components, but a brief review of global hydrological models (GHMs) is also provided. The study begins with a general overview of human impacts on the water cycle. Then, the algorithms currently employed to represent irrigation, reservoir operation, and groundwater pumping are discussed. Next, methodological deficiencies in current modeling approaches and existing challenges are identified. Furthermore, light is shed on the sources of uncertainties associated with model parameterizations, grid resolution, and datasets used for forcing and validation. Finally, representing human land-water management in LSMs is highlighted as an important research direction toward developing integrated models using ESM frameworks for the holistic study of human-water interactions within the Earth system.

  15. The engine fuel system fault analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Song, Hanqiang; Yang, Changsheng; Zhao, Wei

    2017-05-01

    To improve the reliability of the engine fuel system, typical fault factors of the engine fuel system were analyzed from the points of view of structure and function. The fault characteristics were obtained by building the fuel system fault tree. By applying failure mode and effects analysis (FMEA), several factors of the key component, the fuel regulator, were obtained, including the fault modes, the fault causes, and the fault influences. All of this lays the foundation for the subsequent development of a fault diagnosis system.
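
    For independent basic events, the top-event probability of such a fault tree reduces to simple gate algebra: an AND gate multiplies input probabilities, and an OR gate combines them as one minus the product of the complements. The sketch below evaluates a toy fuel-regulator tree; the event names and probabilities are invented, not taken from the paper.

      from math import prod

      def or_gate(probs):
          """Probability that at least one independent input event occurs."""
          return 1 - prod(1 - p for p in probs)

      def and_gate(probs):
          """Probability that all independent input events occur."""
          return prod(probs)

      # Invented basic-event probabilities for a toy fuel-regulator fault tree.
      spring_fatigue   = 1e-3
      valve_sticking   = 5e-4
      sensor_drift     = 2e-3
      controller_error = 1e-4

      regulator_fails   = or_gate([spring_fatigue, valve_sticking])
      bad_fuel_command  = and_gate([sensor_drift, controller_error])
      fuel_system_fault = or_gate([regulator_fails, bad_fuel_command])
      print(f"top-event probability: {fuel_system_fault:.6f}")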

  16. The stress shadow effect: a mechanical analysis of the evenly-spaced parallel strike-slip faults in the San Andreas fault system

    NASA Astrophysics Data System (ADS)

    Zuza, A. V.; Yin, A.; Lin, J. C.

    2015-12-01

    -slip fault systems, both on Earth and throughout the solar system (e.g., the Tiger Stripe Fractures on Enceladus).

  17. Internal Structure of Taiwan Chelungpu Fault Zone Gouges

    NASA Astrophysics Data System (ADS)

    Song, Y.; Song, S.; Tang, M.; Chen, F.; Chen, Y.

    2005-12-01

    Gouge formation is found in brittle faults at all scales (1). This fine-grained gouge is thought to control earthquake instability, so investigating gouge textures and compositions is very important to an understanding of the earthquake process. Employing transmission electron microscopy (TEM) and a new transmission X-ray microscope (TXM), we study the internal structure of fault zone gouges from the cores of the Taiwan Chelungpu-fault Drilling Project (TCDP), which drilled into the fault zone of the 1999 Chi-Chi earthquake. The X-ray microscope is installed at beamline BL01B of the Taiwan Light Source, National Synchrotron Radiation Research Center (NSRRC). It provides 2D imaging and 3D tomography at energies of 8-11 keV with a spatial resolution of 25-60 nm, and is equipped with Zernike phase-contrast capability for imaging light materials. In this work, we show measurements of gouge texture, particle size distribution and the 3D structure of the ultracataclasite in fault gouges within 12 cm of core at about 1111.29 m depth. These characterizations of the transition from the fault core to the damage zone are related to comminution and fracture energy in earthquake faulting. The TXM data show that the particle sizes of the ultracataclasite are between 150 nm and 900 nm in diameter. We will continue analyzing the particle size distribution, porosity and 3D structure of the fault zone gouges across the transition from the fault core to the damage zone, to understand comminution and fracture surface energy in earthquake faulting (2-5). The results may clarify the implications for the nucleation, growth, transition, structure and permeability of fault zones (6-8). Furthermore, it may be possible to infer the mechanism of faulting, the physical and chemical properties of the fault, and the nucleation of earthquakes. References: 1) B. Wilson, T. Dewers, Z. Reches and J. Brune, Nature, 434 (2005) 749. 2) S. E. Schulz and J. P. Evans

  18. Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning

    NASA Astrophysics Data System (ADS)

    Rouet-Leduc, B.; Hulbert, C.; Ren, C. X.; Bolton, D. C.; Marone, C.; Johnson, P. A.

    2017-12-01

    Fault friction controls nearly all aspects of fault rupture, yet it can only be measured in the laboratory. Here we describe laboratory experiments in which acoustic emissions are recorded from the fault. We find that by applying a machine learning approach known as "extreme gradient boosting trees" to the continuous acoustic signal, the fault friction can be directly inferred, showing that instantaneous characteristics of the acoustic signal are a fingerprint of the frictional state. This machine-learning-based inference leads to a simple law linking the acoustic signal to the friction state, which holds for every stress cycle the laboratory fault goes through. The approach uses no measured parameter other than instantaneous statistics of the acoustic signal. This finding may be important for inferring frictional characteristics from seismic waves in the Earth, where fault friction cannot be measured.
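
    The pipeline, instantaneous statistics of the acoustic signal fed to gradient-boosted trees, can be caricatured in a few lines. The sketch below substitutes scikit-learn's GradientBoostingRegressor for the authors' "extreme gradient boosting trees" and trains on synthetic data in which friction modulates acoustic variance; the features and signal model are invented.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(0)

      def window_features(signal):
          """Instantaneous statistics of one acoustic window; as in the abstract,
          no other measured parameter is used."""
          return [signal.mean(), signal.std(), np.percentile(signal, 90),
                  ((signal[:-1] * signal[1:]) < 0).mean()]  # zero-crossing rate

      # Synthetic stand-in data: friction state modulates acoustic amplitude.
      friction = np.linspace(0.3, 0.7, 500)
      windows = [rng.normal(0.0, 0.1 + f, size=256) for f in friction]
      X = np.array([window_features(w) for w in windows])

      model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
      model.fit(X[:400], friction[:400])
      print("held-out R^2:", model.score(X[400:], friction[400:]))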

  19. Fault linkage and continental breakup

    NASA Astrophysics Data System (ADS)

    Cresswell, Derren; Lymer, Gaël; Reston, Tim; Stevenson, Carl; Bull, Jonathan; Sawyer, Dale; Morgan, Julia

    2017-04-01

    The magma-poor rifted margin off the west coast of Galicia (NW Spain) has provided some of the key observations in the development of models describing the final stages of rifting and continental breakup. In 2013, we collected a 68 x 20 km 3D seismic survey across the Galicia margin, NE Atlantic. Processing through to 3D pre-stack time migration (12.5 m bin size) and 3D depth conversion reveals the key structures, including an underlying detachment fault (the S detachment) and the intra-block and inter-block faults. These data reveal multiple phases of faulting which, overlapping spatially and temporally, have thinned the crust to between zero and a few km thickness, producing 'basement windows' where crustal basement has been completely pulled apart and sediments lie directly on the mantle. Two approximately N-S trending fault systems are observed: 1) a margin-proximal system of two linked faults that are the upward extension (breakaway faults) of the S; in the south they form one surface that splays northward to form two faults with an intervening fault block. These faults were thus demonstrably active at one time rather than sequentially. 2) An oceanward relay structure that shows clear along-strike linkage. Faults within the relay trend NE-SW and heavily dissect the basement. The main block-bounding faults can be traced from the S detachment through the basement into, and heavily deforming, the syn-rift sediments where they die out, suggesting that the faults propagated up from the S detachment surface. Analysis of the fault heaves and associated maps at different structural levels shows complementary fault systems. The pattern of faulting suggests a variation in the main tectonic transport direction moving oceanward. This might be interpreted as a temporal change during sequential faulting; however, the transfer of extension between faults and the lateral variability of fault blocks suggest that many of the faults across the 3D volume were active at least in part

  20. Randomness fault detection system

    NASA Technical Reports Server (NTRS)

    Russell, B. Don (Inventor); Aucoin, B. Michael (Inventor); Benner, Carl L. (Inventor)

    1996-01-01

    A method and apparatus are provided for detecting a fault on a power line carrying a line parameter such as a load current. The apparatus monitors and analyzes the load current to obtain an energy value. The energy value is compared to a threshold value stored in a buffer; if the energy value is greater than the threshold value, a counter is incremented. If the energy value is greater than a high-value threshold or less than a low-value threshold, then a second counter is incremented. If the difference between two subsequent energy values is greater than a constant, then a third counter is incremented. A fault signal is issued if the first counter is greater than a counter limit value and either the second counter is greater than a second limit value or the third counter is greater than a third limit value.
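
    The counter logic in this abstract translates almost directly into code. The sketch below follows the description; the specific thresholds, limits and energy values are placeholders, not values from the patent.

      def randomness_fault_detector(energies, threshold, high, low, delta,
                                    limit1, limit2, limit3):
          """Apply the three-counter test to a sequence of per-interval energy
          values; return True if a fault signal should be issued."""
          c1 = c2 = c3 = 0
          prev = None
          for e in energies:
              if e > threshold:                      # first counter
                  c1 += 1
              if e > high or e < low:                # second counter
                  c2 += 1
              if prev is not None and abs(e - prev) > delta:
                  c3 += 1                            # third counter
              prev = e
          return c1 > limit1 and (c2 > limit2 or c3 > limit3)

      # Placeholder energy sequence with erratic, arcing-like excursions.
      energies = [1.0, 1.1, 4.0, 0.2, 5.2, 0.1, 4.8, 1.0]
      print(randomness_fault_detector(energies, threshold=1.5, high=4.5, low=0.15,
                                      delta=3.0, limit1=2, limit2=1, limit3=2))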

  1. Shaded Relief with Height as Color, Kunlun fault, east-central Tibet

    NASA Technical Reports Server (NTRS)

    2002-01-01

    These two images show exactly the same area, part of the Kunlun fault in northern Tibet. The image on the left was created using the best global topographic data set previously available, the U.S. Geological Survey's GTOPO30. In contrast, the much more detailed image on the right was generated with data from the Shuttle Radar Topography Mission, which collected enough measurements to map 80 percent of Earth's landmass at this level of precision.

    The area covered is the western part of the Kunlun fault, at the north edge of east-central Tibet. The sharp line marking the southern edge of the mountains, running left to right across the scene, represents a strike-slip fault, much like California's San Andreas Fault, which is more than 1,000 kilometers (621 miles) long. The most recent earthquake on the Kunlun fault occurred on November 14, 2001. At a magnitude of 8.1, it produced a surface break over 350 kilometers (217 miles) long. Preliminary reports indicate a maximum offset of 7 meters (23 feet) in the central section of the break. This five-kilometer (three mile) high area is uninhabited by humans, so there was little damage reported, despite the large magnitude. Shuttle Radar Topography Mission maps of active faults in Tibet and other parts of the world provide geologists with a unique tool for determining how active a fault is and the probability of future large earthquakes on the fault. This is done by both measuring offsets in topographic features and using the SRTM digital map as a baseline for processing data from orbiting satellites using the techniques of radar interferometry. Based on geologic evidence, the Kunlun fault's long-term slip rate is believed to be about 11 millimeters per year (0.4 inches per year). The Kunlun fault and the Altyn Tagh fault, 400 kilometers (249 miles) to the north, are two major faults that help accommodate the ongoing collision between the Indian and Asian tectonic plates.

    In contrast with the wealth of detail

  2. Fault tolerant control laws

    NASA Technical Reports Server (NTRS)

    Ly, U. L.; Ho, J. K.

    1986-01-01

    A systematic procedure for the synthesis of control laws tolerant of actuator failures is presented. Two design methods were used to synthesize fault tolerant controllers: the conventional LQ design method and a direct feedback controller design method, SANDY. The latter method is used primarily to streamline the full-state LQ feedback design into a practical, implementable output feedback controller structure. To achieve robustness to control actuator failure, the redundant surfaces are properly balanced according to their control effectiveness. A simple gain schedule based on the landing-gear up/down logic, involving only three gains, was developed to handle three design flight conditions: Mach 0.25 and Mach 0.60 at 5,000 ft and Mach 0.90 at 20,000 ft. The fault tolerant control law developed in this study provides good stability augmentation and performance for the relaxed static stability aircraft. The augmented aircraft responses are found to be invariant to the presence of a failure. Furthermore, single-loop stability margins of +6 dB in gain and +30 deg in phase were achieved, along with -40 dB/decade rolloff at high frequency.

  3. The Sorong Fault Zone, Indonesia: Mapping a Fault Zone Offshore

    NASA Astrophysics Data System (ADS)

    Melia, S.; Hall, R.

    2017-12-01

    The Sorong Fault Zone is a left-lateral strike-slip fault zone in eastern Indonesia, extending westwards from the Bird's Head peninsula of West Papua towards Sulawesi. It is the result of interactions between the Pacific, Caroline, Philippine Sea, and Australian Plates and much of it is offshore. Previous research on the fault zone has been limited by the low resolution of available data offshore, leading to debates over the extent, location, and timing of movements, and the tectonic evolution of eastern Indonesia. Different studies have shown it north of the Sula Islands, truncated south of Halmahera, continuing to Sulawesi, or splaying into a horsetail fan of smaller faults. Recently acquired high resolution multibeam bathymetry of the seafloor (with a resolution of 15-25 meters), and 2D seismic lines, provide the opportunity to trace the fault offshore. The position of different strands can be identified. On land, SRTM topography shows that in the northern Bird's Head the fault zone is characterised by closely spaced E-W trending faults. NW of the Bird's Head offshore there is a fold and thrust belt which terminates some strands. To the west of the Bird's Head offshore the fault zone diverges into multiple strands trending ENE-WSW. Regions of Riedel shearing are evident west of the Bird's Head, indicating sinistral strike-slip motion. Further west, the ENE-WSW trending faults turn to an E-W trend and there are at least three fault zones situated immediately south of Halmahera, north of the Sula Islands, and between the islands of Sanana and Mangole where the fault system terminates in horsetail strands. South of the Sula islands some former normal faults at the continent-ocean boundary with the North Banda Sea are being reactivated as strike-slip faults. The fault zone does not currently reach Sulawesi. The new fault map differs from previous interpretations concerning the location, age and significance of different parts of the Sorong Fault Zone. Kinematic

  4. Seismic Hazard and Fault Length

    NASA Astrophysics Data System (ADS)

    Black, N. M.; Jackson, D. D.; Mualchin, L.

    2005-12-01

    If mx is the largest earthquake magnitude that can occur on a fault, then what is mp, the largest magnitude that should be expected during the planned lifetime of a particular structure? Most approaches to these questions rely on an estimate of the Maximum Credible Earthquake, obtained by regression (e.g. Wells and Coppersmith, 1994) of fault length (or area) against magnitude. Our work differs in two ways. First, we modify the traditional approach to measuring fault length to allow for hidden fault complexity and multi-fault rupture. Second, we use a magnitude-frequency relationship to calculate the largest magnitude expected to occur within a given time interval. Often fault length is poorly defined and multiple faults rupture together in a single event. Therefore, we need to expand the definition of a mapped fault length to obtain a more accurate estimate of the maximum magnitude. In previous work, we compared fault length vs. rupture length for post-1975 earthquakes in Southern California, and found that mapped fault length and rupture length are often unequal; in several cases rupture broke beyond the previously mapped fault traces. To expand the geologic definition of fault length we outlined several guidelines: 1) if a fault truncates at young Quaternary alluvium, the fault line should be inferred underneath the younger sediments; 2) faults striking within 45° of one another should be treated as a continuous fault line; and 3) a step-over can link together faults at least 5 km apart. These definitions were applied to fault lines in Southern California. For example, many of the along-strike fault lines in the Mojave Desert are treated as a single fault trending from the Pinto Mountain fault to the Garlock fault. In addition, the Rose Canyon and Newport-Inglewood faults are treated as a single fault line. We used these more generous fault lengths, and the Wells and Coppersmith regression, to estimate the maximum magnitude (mx) for the major faults in
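
    Regressions of the Wells and Coppersmith (1994) type take the form M = a + b log10(L). As a hedged illustration of how linking fault traces raises the estimated maximum magnitude, the sketch below uses a of roughly 5.16 and b of roughly 1.12, the commonly quoted strike-slip surface-rupture-length coefficients; verify them against the original paper before any real hazard use.

      from math import log10

      def max_magnitude(length_km, a=5.16, b=1.12):
          """Wells-and-Coppersmith-style regression M = a + b*log10(L).
          Coefficients here are the commonly quoted strike-slip values and
          should be checked against the 1994 paper before real use."""
          return a + b * log10(length_km)

      # Linking along-strike segments lengthens the fault and raises mx.
      print(round(max_magnitude(100), 2))   # single mapped trace
      print(round(max_magnitude(250), 2))   # expanded, linked trace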

  5. Earth Wisdom.

    ERIC Educational Resources Information Center

    Van Matre, Steve

    1985-01-01

    In our human-centered ignorance and arrogance we are rapidly destroying the earth. We must start helping people understand the big picture of ecological concepts, what these concepts mean for our own lives, and how we must begin to change our lifestyles in order to live more harmoniously with the earth. (JHZ)

  6. Earth Science

    1976-01-01

    The LAGEOS I (Laser Geodynamics Satellite) was developed and launched by the Marshall Space Flight Center on May 4, 1976, from Vandenberg Air Force Base, California. The two-foot diameter satellite orbited the Earth from pole to pole and measured the movements of the Earth's surface.

  7. NASA/Caltech Team Images Nepal Quake Fault Rupture, Surface Movements

    2015-05-04

    Using a combination of GPS-measured ground motion data, satellite radar data, and seismic observations from instruments distributed around the world, scientists have constructed preliminary estimates of how much the fault responsible for the April 25, 2015, magnitude 7.8 Gorkha earthquake in Nepal moved below Earth's surface (Figure 1). This information is useful for understanding not only what happened in the earthquake but also the potential for future events. It can also be used to infer a map of how Earth's surface moved due to the earthquake over a broader region (Figure 2). The maps created from these data can be viewed at PIA19384. In the first figure, the modeled slip on the fault is shown as viewed from above and indicated by the colors and contours within the rectangle. The peak slip in the fault exceeds 19.7 feet (6 meters). The ground motion measured with GPS is shown by the red and purple arrows and was used to develop the fault slip model. In the second figure, color represents vertical movement and the scaled arrows indicate direction and magnitude of horizontal movement. In both figures, aftershocks are indicated by red dots. Background color and shaded relief reflect regional variations in topography. The barbed lines show where the main fault reaches Earth's surface. The main fault dives northward into the Earth below the Himalaya. http://photojournal.jpl.nasa.gov/catalog/PIA19384

  8. The applicability of remote sensing to Earth biological problems. Part 2: The potential of remote sensing in pest management

    NASA Technical Reports Server (NTRS)

    Polhemus, J. T.

    1980-01-01

    Five troublesome insect pest groups were chosen for study, representing a broad spectrum of life cycles, ecological indicators, pest management strategies, and remote sensing requirements. Background data and field study results are discussed for each insect group. Specific groups studied include tsetse flies, locusts, western rangeland grasshoppers, range caterpillars, and mosquitoes. It is concluded that remote sensing methods are applicable to the pest management of the insect groups studied.

  9. Ancient Earth, Alien Earths Event

    2014-08-20

    Panelists pose for a group photo at the “Ancient Earth, Alien Earths” Event at NASA Headquarters in Washington, DC Wednesday, August 20, 2014. The event was sponsored by NASA, the National Science Foundation (NSF), and the Smithsonian Institution and highlighted how research on early Earth could help guide our search for habitable planets orbiting other stars. Photo Credit: (NASA/Aubrey Gemignani)

  10. Fluid involvement in normal faulting

    NASA Astrophysics Data System (ADS)

    Sibson, Richard H.

    2000-04-01

    Evidence of fluid interaction with normal faults comes from their varied role as flow barriers or conduits in hydrocarbon basins and as hosting structures for hydrothermal mineralisation, and from fault-rock assemblages in exhumed footwalls of steep active normal faults and metamorphic core complexes. These last suggest involvement of predominantly aqueous fluids over a broad depth range, with implications for fault shear resistance and the mechanics of normal fault reactivation. A general downwards progression in fault rock assemblages (high-level breccia-gouge (often clay-rich) → cataclasites → phyllonites → mylonite → mylonitic gneiss with the onset of greenschist phyllonites occurring near the base of the seismogenic crust) is inferred for normal fault zones developed in quartzo-feldspathic continental crust. Fluid inclusion studies in hydrothermal veining from some footwall assemblages suggest a transition from hydrostatic to suprahydrostatic fluid pressures over the depth range 3-5 km, with some evidence for near-lithostatic to hydrostatic pressure cycling towards the base of the seismogenic zone in the phyllonitic assemblages. Development of fault-fracture meshes through mixed-mode brittle failure in rock-masses with strong competence layering is promoted by low effective stress in the absence of thoroughgoing cohesionless faults that are favourably oriented for reactivation. Meshes may develop around normal faults in the near-surface under hydrostatic fluid pressures to depths determined by rock tensile strength, and at greater depths in overpressured portions of normal fault zones and at stress heterogeneities, especially dilational jogs. Overpressures localised within developing normal fault zones also determine the extent to which they may reutilise existing discontinuities (for example, low-angle thrust faults). Brittle failure mode plots demonstrate that reactivation of existing low-angle faults under vertical σ1 trajectories is only likely if

  11. Statistical mechanics and scaling of fault populations with increasing strain in the Corinth Rift

    NASA Astrophysics Data System (ADS)

    Michas, Georgios; Vallianatos, Filippos; Sammonds, Peter

    2015-12-01

    Scaling properties of fracture/fault systems are studied in order to characterize the mechanical properties of rocks and to provide insight into the mechanisms that govern fault growth. A comprehensive image of the fault network in the Corinth Rift, Greece, obtained through numerous field studies and marine geophysical surveys, allows for the first time such a study over the entire area of the Rift. We compile a detailed fault map of the area and analyze the scaling properties of fault trace-lengths by using a statistical mechanics model, derived in the framework of generalized statistical mechanics and associated maximum entropy principle. By using this framework, a range of asymptotic power-law to exponential-like distributions are derived that can well describe the observed scaling patterns of fault trace-lengths in the Rift. Systematic variations and in particular a transition from asymptotic power-law to exponential-like scaling are observed to be a function of increasing strain in distinct strain regimes in the Rift, providing quantitative evidence for such crustal processes in a single tectonic setting. These results indicate the organization of the fault system as a function of brittle strain in the Earth's crust and suggest there are different mechanisms for fault growth in the distinct parts of the Rift. In addition, other factors such as fault interactions and the thickness of the brittle layer affect how the fault system evolves in time. The results suggest that regional strain, fault interactions and the boundary condition of the brittle layer may control fault growth and the fault network evolution in the Corinth Rift.
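
    In generalized (Tsallis-type) statistical mechanics, the family of distributions that interpolates between the power-law and exponential limits described here is usually written as a q-exponential. A common form, stated as an assumption about the class of model rather than a quotation from the paper, is

      P(>L) \propto \exp_q(-L/L_0), \qquad
      \exp_q(x) = \left[ 1 + (1-q)\,x \right]^{1/(1-q)}

    which decays asymptotically as a power law with exponent 1/(q-1) for q > 1 and recovers the ordinary exponential as q -> 1, matching the power-law-to-exponential transition with increasing strain described above.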

  12. Fault trees and sequence dependencies

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta; Boyd, Mark A.; Bavuso, Salvatore J.

    1990-01-01

    One of the frequently cited shortcomings of fault-tree models, their inability to model so-called sequence dependencies, is discussed. Several sources of such sequence dependencies are discussed, and new fault-tree gates to capture this behavior are defined. These complex behaviors can be included in present fault-tree models because they utilize a Markov solution. The utility of the new gates is demonstrated by presenting several models of the fault-tolerant parallel processor, which include both hot and cold spares.

  13. Fault-Tree Compiler Program

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Martensen, Anna L.

    1992-01-01

    FTC, Fault-Tree Compiler program, is reliability-analysis software tool used to calculate probability of top event of fault tree. Five different types of gates allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language of FTC easy to understand and use. Program supports hierarchical fault-tree-definition feature simplifying process of description of tree and reduces execution time. Solution technique implemented in FORTRAN, and user interface in Pascal. Written to run on DEC VAX computer operating under VMS operating system.
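
    For independent inputs, the five gate types can be mirrored in a few lines of probability arithmetic. This sketches only the underlying algebra, not FTC's actual solution technique; the example tree and probabilities are invented.

      from itertools import combinations
      from math import prod

      def p_and(ps):   return prod(ps)
      def p_or(ps):    return 1 - prod(1 - p for p in ps)
      def p_invert(p): return 1 - p
      def p_xor(p, q): return p * (1 - q) + q * (1 - p)  # exactly one of two

      def p_m_of_n(m, ps):
          """Probability that at least m of the independent events occur."""
          n = len(ps)
          return sum(
              prod(ps[i] if i in idx else 1 - ps[i] for i in range(n))
              for k in range(m, n + 1)
              for idx in combinations(range(n), k)
          )

      # Invented top event: 2-of-3 sensor failures OR (pump AND valve failure).
      top = p_or([p_m_of_n(2, [1e-2, 1e-2, 1e-2]), p_and([1e-3, 2e-3])])
      print(top)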

  14. Physical fault tolerance of nanoelectronics.

    PubMed

    Szkopek, Thomas; Roychowdhury, Vwani P; Antoniadis, Dimitri A; Damoulakis, John N

    2011-04-29

    The error rate in complementary transistor circuits is suppressed exponentially in electron number, arising from an intrinsic physical implementation of fault-tolerant error correction. Contrariwise, explicit assembly of gates into the most efficient known fault-tolerant architecture is characterized by a subexponential suppression of error rate with electron number, and incurs significant overhead in wiring and complexity. We conclude that it is more efficient to prevent logical errors with physical fault tolerance than to correct logical errors with fault-tolerant architecture.

  15. Final Technical Report: PV Fault Detection Tool.

    SciT

    King, Bruce Hardison; Jones, Christian Birk

    The PV Fault Detection Tool project plans to demonstrate that the FDT can (a) detect catastrophic and degradation faults and (b) identify the type of fault. This will be accomplished by collecting fault signatures using different instruments and integrating this information to establish a logical controller for detecting, diagnosing and classifying each fault.
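
    One way to read "a logical controller for detecting, diagnosing and classifying each fault" is as a rule table over measured fault signatures. The sketch below is an invented stand-in: the signature fields, thresholds and class labels are assumptions, not the project's actual design.

      def classify_pv_fault(v_oc, i_sc, insulation_mohm):
          """Toy rule table mapping normalized string measurements to a fault
          class; all thresholds are illustrative placeholders."""
          if insulation_mohm < 1.0:
              return "ground fault (catastrophic)"
          if i_sc < 0.1:
              return "open circuit / disconnection"
          if v_oc < 0.8:
              return "degradation: module mismatch or shading"
          return "no fault detected"

      # v_oc and i_sc are normalized to their expected (nameplate) values.
      print(classify_pv_fault(v_oc=0.95, i_sc=0.05, insulation_mohm=50.0))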

  16. Mechanism of Earth Fissures in Beijing,China

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Gong, H.; Gu, Z.; Wang, R.; Jia, S.; Li, X.

    2013-12-01

    Earth fissures are natural hazards that can arise through several different mechanisms. Beijing, located in the north of the North China Plain, China, has undergone extensive fissuring over the last twenty years. These fissures have caused serious damage to homes, farmland and infrastructure. Previous investigations show that the distribution and direction of the major earth fissures mostly parallel active faults, such as the Huangzhuang-Gaoliying Fault. Hence, tectonic movements were thought to be the major cause of fissuring in this region, but subsidence caused by overdraft and other geological, hydrological and mechanical factors may also play important roles in forming earth fissures. The purpose of this work was to further explore the causes of the earth fissures and their mechanisms of formation using field investigations, geophysical surveys, geotechnical tests and numerical analysis. The results indicate that over-extraction of groundwater and differential subsidence are the major causes of the fissuring. Active faulting and fault zones provide an ideal condition for stress to accumulate. The earth fissures occur when the accumulated stress exceeds the strength of the soil, or when coupled with other processes by which the strength of the soil material is reduced. Survey and simulation results reveal the complex pattern of earth fissures, including tensile deformation, vertical offset and rotation. Potential locations for future damage were also evaluated. Keywords: Earth Fissure; Mechanism; Beijing; Subsidence; Tectonic Movement; Geophysical Survey

  17. Cross-Cutting Faults

    NASA Technical Reports Server (NTRS)

    2005-01-01

    16 May 2005 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows cross-cutting fault scarps among graben features in northern Tempe Terra. Graben form in regions where the crust of the planet has been extended; such features are common in the regions surrounding the vast 'Tharsis Bulge' on Mars.

    Location near: 43.7°N, 90.2°W Image width: 3 km (1.9 mi) Illumination from: lower left Season: Northern Summer

  18. Fault current limiter

    DOEpatents

    Darmann, Francis Anthony

    2013-10-08

    A fault current limiter (FCL) includes a series of high-permeability posts that collectively define a core for the FCL. A DC coil, for the purpose of saturating a portion of the high-permeability posts, surrounds the complete structure outside an enclosure in the form of a vessel. The vessel contains a dielectric insulation medium. AC coils, for transporting AC current, are wound on insulating formers and electrically interconnected to each other in such a manner that the senses of the magnetic fields produced by each AC coil in the corresponding high-permeability core are opposing. Insulation barriers between phases improve the dielectric withstand properties of the dielectric medium.

  19. Earth Observation

    2013-08-20

    Earth observation taken during day pass by an Expedition 36 crew member on board the International Space Station (ISS). Per Twitter message: Looking southwest over northern Africa. Libya, Algeria, Niger.

  20. Earth Observation

    2014-09-01

    Earth Observation taken during a night pass by the Expedition 40 crew aboard the International Space Station (ISS). Folder lists this as: New Zealand Aurora night pass. On crewmember's Flickr page - Look straight down into an aurora.

  1. Earth Observation

    2014-06-07

    ISS040-E-008174 (7 June 2014) --- Layers of Earth's atmosphere, brightly colored as the sun rises, are featured in this image photographed by an Expedition 40 crew member on the International Space Station.

  2. Earth Observation

    2014-06-02

    ISS040-E-006817 (2 June 2014) --- Intersecting the thin line of Earth's atmosphere, International Space Station solar array wings are featured in this image photographed by an Expedition 40 crew member on the International Space Station.

  3. Earth Science

    1992-07-18

    Workers at Launch Complex 17 Pad A, Kennedy Space Center (KSC), encapsulate the Geomagnetic Tail (GEOTAIL) spacecraft (upper) and attached Payload Assist Module-D upper stage (lower) in the protective payload fairing. The GEOTAIL project was designed to study the effects of Earth's magnetic field. The solar wind draws the Earth's magnetic field into a long tail on the night side of the Earth and stores energy in the stretched field lines of the magnetotail. During active periods, the tail couples with the near-Earth magnetosphere, sometimes releasing energy stored in the tail and activating auroras in the polar ionosphere. GEOTAIL measures the flow of energy and its transformation in the magnetotail and will help clarify the mechanisms that control the input, transport, storage, release, and conversion of mass, momentum, and energy in the magnetotail.

  4. Discover Earth

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Discover Earth is a NASA-funded project for teachers of grades 5-12 who want to expand their knowledge of the Earth system, and prepare to become master teachers who promote Earth system science in their own schools, counties, and throughout their state. Participants from the following states are invited to apply: Connecticut, Delaware, Maine, Maryland, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, Vermont, and Washington, DC. Teachers selected for the project participate in a two-week summer workshop conducted at the University of Maryland, College Park; develop classroom-ready materials during the workshop for broad dissemination; conduct a minimum of two peer training activities during the coming school year; and participate in other enrichment/education opportunities as available and desired. Discover Earth is a team effort that utilizes expertise from a range of contributors, and balances science content with hands-on classroom applications.

  5. Earth Observation

    2014-05-31

    Earth Observation taken during a day pass by the Expedition 40 crew aboard the International Space Station (ISS). Folder lists this as: CEO - Arena de Sao Paolo. View used for Twitter message: Cloudy skies over São Paulo Brazil

  6. Earth Observation

    2013-07-26

    Earth observation taken during day pass by an Expedition 36 crew member on board the International Space Station (ISS). Per Twitter message: Never tire of finding shapes in the clouds! These look very botanical to me. Simply perfect.

  7. Earth Observation

    2014-06-12

    Earth Observation taken during a day pass by the Expedition 40 crew aboard the International Space Station (ISS). Folder lists this as: Moon, Japan, Kamchatka with a wild cloud. Part of a solar array is also visible.

  8. Earth Science

    1990-10-24

    Solar Vector Magnetograph is used to predict solar flares, and other activities associated with sun spots. This research provides new understanding about weather on the Earth, and solar-related conditions in orbit.

  9. Earth Observation

    2013-08-03

    Earth observation taken during day pass by an Expedition 36 crew member on board the International Space Station (ISS). Per Twitter message: Perhaps a dandelion losing its seeds in the wind? Love clouds!

  10. Earth Observation

    2014-06-27

    Earth Observation taken during a day pass by the Expedition 40 crew aboard the International Space Station (ISS). Part of Space Station Remote Manipulator System (SSRMS) is visible. Folder lists this as: the Middle East, Israel.

  11. Earth Observations

    2010-06-16

    ISS024-E-006136 (16 June 2010) --- Polar mesospheric clouds, illuminated by an orbital sunrise, are featured in this image photographed by an Expedition 24 crew member on the International Space Station. Polar mesospheric, or noctilucent ('night shining'), clouds are observed from both Earth's surface and in orbit by crew members aboard the space station. They are called night-shining clouds as they are usually seen at twilight. Following the setting of the sun below the horizon and darkening of Earth's surface, these high clouds are still briefly illuminated by sunlight. Occasionally the ISS orbital track becomes nearly parallel to Earth's day/night terminator for a time, allowing polar mesospheric clouds to be visible to the crew at times other than the usual twilight due to the space station altitude. This unusual photograph shows polar mesospheric clouds illuminated by the rising, rather than setting, sun at center right. Low clouds on the horizon appear yellow and orange, while higher clouds and aerosols are illuminated a brilliant white. Polar mesospheric clouds appear as light blue ribbons extending across the top of the image. These clouds typically occur at high latitudes of both the Northern and Southern Hemispheres, and at fairly high altitudes of 76-85 kilometers (near the boundary between the mesosphere and thermosphere atmospheric layers). The ISS was located over the Greek island of Kos in the Aegean Sea (near the southwestern coastline of Turkey) when the image was taken at approximately midnight local time. The orbital complex was tracking northeastward, nearly parallel to the terminator, making it possible to observe an apparent 'sunrise' located almost due north. A similar unusual alignment of the ISS orbit track, terminator position, and seasonal position of Earth's orbit around the sun allowed for striking imagery of polar mesospheric clouds over the Southern Hemisphere earlier this year.

  12. Earth Rotation

    NASA Technical Reports Server (NTRS)

    Dickey, Jean O.

    1995-01-01

    The study of the Earth's rotation in space (encompassing Universal Time (UT1), length of day, polar motion, and the phenomena of precession and nutation) addresses the complex nature of Earth orientation changes, the mechanisms of excitation of these changes, and their geophysical implications in a broad variety of areas. In the absence of internal sources of energy or interactions with astronomical objects, the Earth would move as a rigid body with its various parts (the crust, mantle, inner and outer cores, atmosphere and oceans) rotating together at a constant fixed rate. In reality, the world is considerably more complicated, as is schematically illustrated. The rotation rate of the Earth's crust is not constant, but exhibits complicated fluctuations in speed amounting to several parts in 10^8 [corresponding to a variation of several milliseconds (ms) in the length of day (LOD)] and about one part in 10^6 in the orientation of the rotation axis relative to the solid Earth's axis of figure (polar motion). These changes occur over a broad spectrum of time scales, ranging from hours to centuries and longer, reflecting the fact that they are produced by a wide variety of geophysical and astronomical processes. Geodetic observations of Earth rotation changes thus provide insights into the geophysical processes illustrated, which are often difficult to obtain by other means. In addition, these measurements are required for engineering purposes. Theoretical studies of Earth rotation variations are based on the application of Euler's dynamical equations to the problem of finding the response of a slightly deformable solid Earth to a variety of surface and internal stresses.

  13. AGSM Functional Fault Models for Fault Isolation Project

    NASA Technical Reports Server (NTRS)

    Harp, Janicce Leshay

    2014-01-01

    This project implements functional fault models to automate the isolation of failures during ground systems operations. FFMs will also be used to recommend sensor placement to improve fault isolation capabilities. The project enables the delivery of system health advisories to ground system operators.

  14. Forest management in Earth system modelling: a vertically discretised canopy description for ORCHIDEE and the modifications to the energy, water and carbon fluxes

    NASA Astrophysics Data System (ADS)

    Naudts, Kim; Ryder, James; McGrath, Matthew J.; Otto, Juliane; Chen, Yiying; Valade, Aude; Bellasen, Valentin; Ghattas, Josefine; Haverd, Vanessa; MacBean, Natasha; Maignan, Fabienne; Peylin, Philippe; Pinty, Bernard; Solyga, Didier; Vuichard, Nicolas; Luyssaert, Sebastiaan

    2015-04-01

    Since 70% of global forests are managed and forests impact the global carbon cycle and the energy exchange with the overlying atmosphere, forest management has the potential to mitigate climate change. Yet, none of the land surface models used in Earth system models, and therefore none of today's predictions of future climate, account for the interactions between climate and forest management. We addressed this gap in modelling capability by developing and parametrizing a version of the land surface model ORCHIDEE to simulate the biogeochemical and biophysical effects of forest management. The most significant changes between the new model called ORCHIDEE-CAN and the standard version of ORCHIDEE are the allometric-based allocation of carbon to leaf, root, wood, fruit and reserve pools; the transmittance, absorbance and reflectance of radiation within the canopy; and the vertical discretisation of the energy budget calculations. In addition, conceptual changes towards a better process representation occurred for the interaction of radiation with snow, the hydraulic architecture of plants, the representation of forest management and a numerical solution for the photosynthesis formalism of Farquhar, von Caemmerer and Berry. For consistency reasons, these changes were extensively linked throughout the code. Parametrization was revisited after introducing twelve new parameter sets that represent specific tree species or genera rather than a group of unrelated species, as is the case in widely used plant functional types. Performance of the new model was compared against the trunk and validated against independent spatially explicit data for basal area, tree height, canopy structure, GPP, albedo and evapotranspiration over Europe. For all tested variables ORCHIDEE-CAN outperformed the trunk regarding its ability to reproduce large-scale spatial patterns as well as their inter-annual variability over Europe. Depending on the data stream, ORCHIDEE-CAN had a 67 to 92

  15. A PC based fault diagnosis expert system

    NASA Technical Reports Server (NTRS)

    Marsh, Christopher A.

    1990-01-01

    The Integrated Status Assessment (ISA) prototype expert system performs system-level fault diagnosis using rules and models created by the user. The ISA evolved from concept to a stand-alone demonstration prototype using OPS5 on a LISP machine. The LISP-based prototype was rewritten in C and the C Language Integrated Production System (CLIPS) to run on a personal computer (PC) and a graphics workstation. The ISA prototype has been used to demonstrate fault diagnosis functions of Space Station Freedom's Operations Management System (OMS). This paper describes the development of the ISA prototype from early concepts to the current PC/workstation version used today and describes future areas of development for the prototype.

  16. Earth Observing System/Advanced Microwave Sounding Unit-A (EOS/AMSU-A) software management plan

    NASA Technical Reports Server (NTRS)

    Schwantje, Robert

    1994-01-01

    This document defines the responsibilities for the management of the life-cycle development of the flight software installed in the AMSU-A instruments, and the ground support software used in the test and integration of the AMSU-A instruments.

  17. Reliability of Fault Tolerant Control Systems. Part 1

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2001-01-01

    This paper reports Part I of a two-part effort intended to delineate the relationship between reliability and fault-tolerant control in a quantitative manner. Reliability analysis of fault-tolerant control systems is performed using Markov models. Reliability properties peculiar to fault-tolerant control systems are emphasized; as a consequence, coverage of failures through redundancy management can be severely limited. It is shown that in the early life of a system composed of highly reliable subsystems, the reliability of the overall system is affine with respect to coverage, and inadequate coverage induces dominant single-point failures. The utility of some existing software tools for assessing the reliability of fault-tolerant control systems is also discussed. Coverage modeling is attempted in Part II in a way that captures its dependence on the control performance and on the diagnostic resolution.
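
    The affine dependence on coverage can be reproduced with the textbook three-state Markov model of a duplex system, a generic example rather than the paper's own models. A minimal sketch in Python, assuming a per-unit failure rate lam and coverage c:

        import math

        def duplex_reliability(lam, t, c):
            """Closed-form reliability of a duplex system from the standard
            three-state Markov model: both-up -> one-up (covered failure,
            probability c) -> failed; an uncovered failure fails the system."""
            p2 = math.exp(-2.0 * lam * t)             # both units still up
            p1 = 2.0 * c * (math.exp(-lam * t) - p2)  # survived one covered failure
            return p2 + p1

        lam, t = 1e-4, 100.0   # highly reliable units, early life (lam*t << 1)
        for c in (0.90, 0.95, 0.99, 1.00):
            print(f"coverage {c:.2f}: R = {duplex_reliability(lam, t, c):.6f}")
        # R is exactly affine in c for this model; the shortfall (1 - c) drives
        # the dominant single-point failure term in early life.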

  18. Orogen-scale uplift in the central Italian Apennines drives episodic behaviour of earthquake faults.

    PubMed

    Cowie, P A; Phillips, R J; Roberts, G P; McCaffrey, K; Zijerveld, L J J; Gregory, L C; Faure Walker, J; Wedmore, L N J; Dunai, T J; Binnie, S A; Freeman, S P H T; Wilcken, K; Shanks, R P; Huismans, R S; Papanikolaou, I; Michetti, A M; Wilkinson, M

    2017-03-21

    Many areas of the Earth's crust deform by distributed extensional faulting and complex fault interactions are often observed. Geodetic data generally indicate a simpler picture of continuum deformation over decades but relating this behaviour to earthquake occurrence over centuries, given numerous potentially active faults, remains a global problem in hazard assessment. We address this challenge for an array of seismogenic faults in the central Italian Apennines, where crustal extension and devastating earthquakes occur in response to regional surface uplift. We constrain fault slip-rates since ~18 ka using variations in cosmogenic 36Cl measured on bedrock scarps, mapped using LiDAR and ground penetrating radar, and compare these rates to those inferred from geodesy. The 36Cl data reveal that individual faults typically accumulate meters of displacement relatively rapidly over several thousand years, separated by similar-length time intervals when slip-rates are much lower, and activity shifts between faults across strike. Our rates agree with continuum deformation rates when averaged over long temporal or spatial scales (10^4 yr; 10^2 km) but over shorter timescales most of the deformation may be accommodated by <30% of the across-strike fault array. We attribute the shifts in activity to temporal variations in the mechanical work of faulting.

  19. Nearly frictionless faulting by unclamping in long-term interaction models

    Parsons, T.

    2002-01-01

    In defiance of direct rock-friction observations, some transform faults appear to slide with little resistance. In this paper finite element models are used to show how strain energy is minimized by interacting faults that can cause long-term reduction in fault-normal stresses (unclamping). A model fault contained within a sheared elastic medium concentrates stress at its end points with increasing slip. If accommodating structures free up the ends, then the fault responds by rotating, lengthening, and unclamping. This concept is illustrated by a comparison between simple strike-slip faulting and a mid-ocean-ridge model with the same total transform length; calculations show that the more complex system unclamps the transforms and operates at lower energy. In another example, the overlapping San Andreas fault system in the San Francisco Bay region is modeled; this system is complicated by junctions and stepovers. A finite element model indicates that the normal stress along parts of the faults could be reduced to hydrostatic levels after ~60-100 k.y. of system-wide slip. If this process occurs in the earth, then parts of major transform fault zones could appear nearly frictionless.

  20. SFT: Scalable Fault Tolerance

    SciT

    Petrini, Fabrizio; Nieplocha, Jarek; Tipparaju, Vinod

    2006-04-15

    In this paper we will present a new technology that we are currently developing within the SFT: Scalable Fault Tolerance FastOS project which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continuing execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency—requiring no changes to user applications. Our technology is based on a global coordination mechanism, which enforces transparent recovery lines in the system, and TICK, a lightweight, incremental checkpointing software architecture implemented as a Linux kernel module. TICK is completely user-transparent and does not require any changes to user code or system libraries; it is highly responsive: an interrupt, such as a timer interrupt, can trigger a checkpoint in as little as 2.5 μs; and it supports incremental and full checkpoints with minimal overhead—less than 6% with full checkpointing to disk performed as frequently as once per minute.
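
    TICK itself runs as a Linux kernel module; the toy Python sketch below (page size and interface are hypothetical) only illustrates the core idea of incremental checkpointing, namely persisting just the pages written since the last checkpoint:

        class IncrementalCheckpointer:
            """Toy model of incremental checkpointing: a full checkpoint copies
            every page, an incremental one copies only pages written since the
            last checkpoint (the dirty set)."""

            PAGE = 4096  # hypothetical page size in bytes

            def __init__(self, num_pages):
                self.pages = [bytes(self.PAGE)] * num_pages
                self.snapshot = list(self.pages)   # last checkpointed image
                self.dirty = set()                 # pages written since then

            def write(self, page_no, data):
                self.pages[page_no] = data
                self.dirty.add(page_no)            # track modifications

            def checkpoint(self, incremental=True):
                to_save = list(self.dirty) if incremental else list(range(len(self.pages)))
                for p in to_save:
                    self.snapshot[p] = self.pages[p]
                self.dirty.clear()
                return len(to_save)                # pages actually persisted

        ckpt = IncrementalCheckpointer(1024)
        ckpt.write(3, b"x" * 4096)
        ckpt.write(7, b"y" * 4096)
        print(ckpt.checkpoint())   # 2 pages saved instead of 1024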

  1. From Science Reserves to Sustainable Multiple Uses beyond Earth orbit: Evaluating Issues on the Path towards Balanced Environmental Management on Planetary Bodies

    NASA Astrophysics Data System (ADS)

    Race, Margaret

    Over the past five decades, our understanding of space beyond Earth orbit has been shaped by a succession of mainly robotic missions whose technologies have enabled scientists to answer diverse science questions about celestial bodies across the solar system. For all that time, exploration has been guided by planetary protection policies and principles promulgated by COSPAR and based on provisions in Article IX of the Outer Space Treaty of 1967. Over time, implementation of the various COSPAR planetary protection policies has sought to avoid harmful forward and backward contamination in order to ensure the integrity of science findings, guide activities on different celestial bodies, and appropriately protect Earth whenever extraterrestrial materials have been returned. The recent increased interest in extending both human missions and commercial activities beyond Earth orbit has prompted discussions in various quarters about the need for updating policies and guidelines to ensure responsible, balanced space exploration and use by all parties, regardless of whether activities are undertaken by governmental or non-governmental entities. Already, numerous researchers and workgroups have suggested a range of different ways to manage activities on celestial environments (e.g., wilderness parks, exclusion zones, special regions, claims, national research bases, environmental impact assessments, etc.). While the suggestions are useful in thinking about how to manage future space activities, they are not based on any systematically applied or commonly accepted criteria (scientific or otherwise). In addition, they are borrowed from terrestrial approaches for environmental protection, which may or may not have direct applications to space environments. As noted in a recent COSPAR-PEX workshop (GWU 2012), there are no clear definitions of issues such as harmful contamination, the environment to be protected, or what are considered reasonable activity or impacts for particular

  2. Borehole Strainmeters and the monitoring of the North Anatolian Fault in the Marmara Sea.

    NASA Astrophysics Data System (ADS)

    Johnson, W.; Mencin, D.; Bilham, R. G.; Gottlieb, M. H.; Van Boskirk, E.; Hodgkinson, K. M.; Mattioli, G. S.; Acarel, D.; Bulut, F.; Bohnhoff, M.; Ergintav, S.; Bal, O.; Ozener, H.

    2016-12-01

    Twice in the past 1000 years a sequence of large earthquakes has propagated from east to west along the North Anatolian fault (NAF) in Turkey towards Istanbul, with the final earthquake in the sequence destroying the city. This occurred most recently in 1509. The population of greater Istanbul is 20 million and the next large earthquake of the current sequence is considered imminent. The most likely location for a major earthquake on the NAF is considered the Marmara-Sea/Princes-Island segment south and southeast of Istanbul [Bohnhoff et al., 2013]. Insights into the nucleation and future behavior of this segment of the NAF are anticipated from measuring deformation near the fault, and in particular possible aseismic slip processes on the fault that may precede as well as accompany any future rupture. Aseismic slip processes near the western end of the Izmit rupture, where it passes offshore beneath the Sea of Marmara, have been successfully monitored using InSAR, GPS, and creepmeters. A 1-mm-amplitude, 24-hour creep event was recorded by our creepmeter near Izmit in 2015. These instruments and methods are of limited utility in monitoring the submarine portion of the NAF. Data from numerous borehole strainmeters (BSM) along the San Andreas Fault, including those that were installed and maintained as part of the EarthScope Plate Boundary Observatory (PBO), demonstrate that the characteristics of creep propagation events with sub-cm slip amplitudes can be quantified for slip events at 10 km source-to-sensor distances. Such distances are comparable to those between the mainland and the submarine NAF, with some islands allowing installations within 3 km of the fault. In a collaborative program (GeoGONAF) between the National Science Foundation, GeoForschungsZentrum, the Turkish Disaster and Emergency Management Authority, and the Kandilli Observatory, we installed an array of six PBO-type BSM systems, which include strainmeters and seismometers, around the eastern

  3. Earth: Earth Science and Health

    NASA Technical Reports Server (NTRS)

    Maynard, Nancy G.

    2001-01-01

    A major new NASA initiative on environmental change and health has been established to promote the application of Earth science remote sensing data, information, observations, and technologies to issues of human health. NASA's Earth Science suite of Earth-observing instruments is now providing improved observations, science data, and advanced technologies concerning the Earth's land, atmosphere, and oceans. These new space-based resources are being combined with other agency and university resources, data integration and fusion technologies, geographic information systems (GIS), and the spectrum of tools available from the public health community, making it possible to better understand how the environment and climate are linked to specific diseases, to improve outbreak prediction, and to minimize disease risk. This presentation is an overview of NASA's tools, capabilities, and research advances in this initiative.

  4. On the design of fault-tolerant robotic manipulator systems

    NASA Technical Reports Server (NTRS)

    Tesar, Delbert

    1993-01-01

    Robotic systems are finding increasing use in space applications. Many of these devices are going to be operational on board the Space Station Freedom. Fault tolerance has been deemed necessary because of the criticality of the tasks and the inaccessibility of the systems to maintenance and repair. Design for fault tolerance in manipulator systems is an area within robotics that is without precedent in the literature. In this paper, we will attempt to lay down the foundations for such a technology. Design for fault tolerance demands new and special approaches to design, often at considerable variance from established design practices. These design aspects, together with reliability evaluation and modeling tools, are presented. Mechanical architectures that employ protective redundancies at many levels and have a modular architecture are then studied in detail. Once a mechanical architecture for fault tolerance has been derived, the chronological stages of operational fault tolerance are investigated. Failure detection, isolation, and estimation methods are surveyed, and such methods for robot sensors and actuators are derived. Failure recovery methods are also presented for each of the protective layers of redundancy. Failure recovery tactics often span all of the layers of a control hierarchy. Thus, a unified framework for decision-making and control, which orchestrates both the nominal redundancy management tasks and the failure management tasks, has been derived. The well-developed field of fault-tolerant computers is studied next, and some design principles relevant to the design of fault-tolerant robot controllers are abstracted. Conclusions are drawn, and a road map for the design of fault-tolerant manipulator systems is laid out with recommendations for a 10 DOF arm with dual actuators at each joint.

  5. Deconvoluting complex structural histories archived in brittle fault zones

    NASA Astrophysics Data System (ADS)

    Viola, G.; Scheiber, T.; Fredin, O.; Zwingmann, H.; Margreth, A.; Knies, J.

    2016-11-01

    Brittle deformation can saturate the Earth's crust with faults and fractures in an apparently chaotic fashion. The details of brittle deformational histories and implications on, for example, seismotectonics and landscape, can thus be difficult to untangle. Fortunately, brittle faults archive subtle details of the stress and physical/chemical conditions at the time of initial strain localization and eventual subsequent slip(s). Hence, reading those archives offers the possibility to deconvolute protracted brittle deformation. Here we report K-Ar isotopic dating of synkinematic/authigenic illite coupled with structural analysis to illustrate an innovative approach to the high-resolution deconvolution of brittle faulting and fluid-driven alteration of a reactivated fault in western Norway. Permian extension preceded coaxial reactivation in the Jurassic and Early Cretaceous fluid-related alteration with pervasive clay authigenesis. This approach represents important progress towards time-constrained structural models, where illite characterization and K-Ar analysis are a fundamental tool to date faulting and alteration in crystalline rocks.

  6. San Andreas fault geometry in the Parkfield, California, region

    Simpson, R.W.; Barall, M.; Langbein, J.; Murray, J.R.; Rymer, M.J.

    2006-01-01

    In map view, aftershocks of the 2004 Parkfield earthquake lie along a line that forms a straighter connection between San Andreas fault segments north and south of the Parkfield reach than does the mapped trace of the fault itself. A straightedge laid on a geologic map of Central California reveals a ~50-km-long asymmetric northeastward warp in the Parkfield reach of the fault. The warp tapers gradually as it joins the straight, creeping segment of the San Andreas to the north-west, but bends abruptly across Cholame Valley at its southeast end to join the straight, locked segment that last ruptured in 1857. We speculate that the San Andreas fault surface near Parkfield has been deflected in its upper ~6 km by nonelastic behavior of upper crustal rock units. These units and the fault surface itself are warped during periods between large 1857-type earthquakes by the presence of the 1857-locked segment to the south, which buttresses intermittent coseismic and continuous aseismic slip on the Parkfield reach. Because of nonelastic behavior, the warping is not completely undone when an 1857-type event occurs, and the upper portion of the three-dimensional fault surface is slowly ratcheted into an increasingly prominent bulge. Ultimately, the fault surface probably becomes too deformed for strike-slip motion, and a new, more vertical connection to the Earth's surface takes over, perhaps along the Southwest Fracture Zone. When this happens a wedge of material currently west of the main trace will be stranded on the east side of the new main trace.

  7. Ancient Earth, Alien Earths Event

    2014-08-20

    Panelists discuss how research on early Earth could help guide our search for habitable planets orbiting other stars at the “Ancient Earth, Alien Earths” Event at NASA Headquarters in Washington, DC Wednesday, August 20, 2014. The event was sponsored by NASA, the National Science Foundation (NSF), and the Smithsonian Institution and was moderated by Dr. David H. Grinspoon, Senior Scientist at the Planetary Science Institute. Photo Credit: (NASA/Aubrey Gemignani)

  8. Ancient Earth, Alien Earths Event

    2014-08-20

    Dr. David H. Grinspoon, Senior Scientist, Planetary Science Institute, moderates a panel at the “Ancient Earth, Alien Earths” Event at NASA Headquarters in Washington, DC Wednesday, August 20, 2014. The event was sponsored by NASA, the National Science Foundation (NSF), and the Smithsonian Institution and highlighted how research on early Earth could help guide our search for habitable planets orbiting other stars. Photo Credit: (NASA/Aubrey Gemignani)

  9. Ancient Earth, Alien Earths Event

    2014-08-20

    An audience member asks the panelists a question at the “Ancient Earth, Alien Earths” Event at NASA Headquarters in Washington, DC Wednesday, August 20, 2014. The event was sponsored by NASA, the National Science Foundation (NSF), and the Smithsonian Institution and was moderated by Dr. David H. Grinspoon, Senior Scientist at the Planetary Science Institute. Six scientists discussed how research on early Earth could help guide our search for habitable planets orbiting other stars. Photo Credit: (NASA/Aubrey Gemignani)

  10. Accelerometer having integral fault null

    NASA Astrophysics Data System (ADS)

    Bozeman, Richard J., Jr.

    1995-08-01

    An improved accelerometer is introduced. It comprises a transducer responsive to vibration in machinery, which produces an electrical signal related to the magnitude and frequency of the vibration, and a decoding circuit responsive to the transducer signal, which produces a first fault signal and processes it into a second fault signal in which ground shift effects are nullified.

  11. Accelerometer having integral fault null

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1995-01-01

    An improved accelerometer is introduced. It comprises a transducer responsive to vibration in machinery, which produces an electrical signal related to the magnitude and frequency of the vibration, and a decoding circuit responsive to the transducer signal, which produces a first fault signal and processes it into a second fault signal in which ground shift effects are nullified.

  12. Soil carbon management in large-scale Earth system modelling: implications for crop yields and nitrogen leaching

    NASA Astrophysics Data System (ADS)

    Olin, S.; Lindeskog, M.; Pugh, T. A. M.; Schurgers, G.; Wårlind, D.; Mishurov, M.; Zaehle, S.; Stocker, B. D.; Smith, B.; Arneth, A.

    2015-11-01

    Croplands are vital ecosystems for human well-being and provide important ecosystem services such as crop yields, retention of nitrogen and carbon storage. At large (regional to global) scales, assessments of how these different services will vary in space and time, especially in response to cropland management, are scarce. We explore cropland management alternatives and the effect these can have on future C and N pools and fluxes using the land-use-enabled dynamic vegetation model LPJ-GUESS (Lund-Potsdam-Jena General Ecosystem Simulator). Simulated crop production, cropland carbon storage, carbon sequestration and nitrogen leaching from croplands are evaluated and discussed. Compared to the version of LPJ-GUESS that does not include land-use dynamics, estimates of soil carbon stocks and nitrogen leaching from terrestrial to aquatic ecosystems were improved. Our model experiments allow us to investigate trade-offs between these ecosystem services that can be provided from agricultural fields. These trade-offs are evaluated for current land use and climate and further explored for future conditions within the two future climate change scenarios, RCP (Representative Concentration Pathway) 2.6 and 8.5. Our results show that the potential for carbon sequestration due to typical cropland management practices such as no-till management and cover crops proposed in previous studies is not realised, globally or over larger climatic regions. Our results highlight important considerations to be made when modelling C-N interactions in agricultural ecosystems under future environmental change and the effects these have on terrestrial biogeochemical cycles.

  13. Management of space networks

    NASA Technical Reports Server (NTRS)

    Markley, R. W.; Williams, B. F.

    1993-01-01

    NASA has proposed missions to the Moon and Mars that reflect three areas of emphasis: human presence, exploration, and space resource development for the benefit of Earth. A major requirement for such missions is a robust and reliable communications architecture. Network management--the ability to maintain some degree of human and automatic control over the span of the network from the space elements to the end users on Earth--is required to realize such robust and reliable communications. This article addresses several of the architectural issues associated with space network management. Round-trip delays, such as the 5- to 40-min delays in the Mars case, introduce a host of problems that must be solved by delegating significant control authority to remote nodes. Therefore, management hierarchy is one of the important architectural issues. The article addresses these concerns and proposes a network management approach based on emerging standards that covers the needs for fault, configuration, and performance management; delegated control authority; and hierarchical reporting of events. A relatively simple approach based on standards was demonstrated in the DSN 2000 Information Systems Laboratory, and the results are described.

  14. Interactions between Polygonal Normal Faults and Larger Normal Faults, Offshore Nova Scotia, Canada

    NASA Astrophysics Data System (ADS)

    Pham, T. Q. H.; Withjack, M. O.; Hanafi, B. R.

    2017-12-01

    Polygonal faults, small normal faults with polygonal arrangements that form in fine-grained sedimentary rocks, can influence ground-water flow and hydrocarbon migration. Using well and 3D seismic-reflection data, we have examined the interactions between polygonal faults and larger normal faults on the passive margin of offshore Nova Scotia, Canada. The larger normal faults strike approximately E-W to NE-SW. Growth strata indicate that the larger normal faults were active in the Late Cretaceous (i.e., during the deposition of the Wyandot Formation) and during the Cenozoic. The polygonal faults were also active during the Cenozoic because they affect the top of the Wyandot Formation, a fine-grained carbonate sedimentary rock, and the overlying Cenozoic strata. Thus, the larger normal faults and the polygonal faults were both active during the Cenozoic. The polygonal faults far from the larger normal faults have a wide range of orientations. Near the larger normal faults, however, most polygonal faults have preferred orientations, either striking parallel or perpendicular to the larger normal faults. Some polygonal faults nucleated at the tip of a larger normal fault, propagated outward, and linked with a second larger normal fault. The strike of these polygonal faults changed as they propagated outward, ranging from parallel to the strike of the original larger normal fault to orthogonal to the strike of the second larger normal fault. These polygonal faults hard-linked the larger normal faults at and above the level of the Wyandot Formation but not below it. We argue that the larger normal faults created stress-enhancement and stress-reorientation zones for the polygonal faults. Numerous small, polygonal faults formed in the stress-enhancement zones near the tips of larger normal faults. Stress-reorientation zones surrounded the larger normal faults far from their tips. Fewer polygonal faults are present in these zones, and, more importantly, most polygonal faults

  15. Building Thematic and Integrated Services for European Solid Earth Sciences: the EPOS Integrated Approach

    NASA Astrophysics Data System (ADS)

    Harrison, M.; Cocco, M.

    2017-12-01

    EPOS (European Plate Observing System) has been designed with the vision of creating a pan-European infrastructure for solid Earth science to support a safe and sustainable society. In accordance with this scientific vision, the EPOS mission is to integrate the diverse and advanced European Research Infrastructures for solid Earth science relying on new e-science opportunities to monitor and unravel the dynamic and complex Earth System. EPOS will enable innovative multidisciplinary research for a better understanding of the Earth's physical and chemical processes that control earthquakes, volcanic eruptions, ground instability and tsunamis as well as the processes driving tectonics and Earth's surface dynamics. To accomplish its mission, EPOS is engaging different stakeholders to allow the Earth sciences to open new horizons in our understanding of the planet. EPOS also aims at contributing to prepare society for geo-hazards and to responsibly manage the exploitation of geo-resources. Through integration of data, models and facilities, EPOS will allow the Earth science community to make a step change in developing new concepts and tools for key answers to scientific and socio-economic questions concerning geo-hazards and geo-resources as well as Earth sciences applications to the environment and human welfare. The research infrastructures (RIs) that EPOS is coordinating include: i) distributed geophysical observing systems (seismological and geodetic networks); ii) local observatories (including geomagnetic, near-fault and volcano observatories); iii) analytical and experimental laboratories; iv) integrated satellite data and geological information services; v) new services for natural and anthropogenic hazards; vi) access to geo-energy test beds. Here we present the activities planned for the implementation phase focusing on the TCS, the ICS and on their interoperability. We will discuss the data, data-products, software and services (DDSS) presently under

  16. Effects of Fault Segmentation, Mechanical Interaction, and Structural Complexity on Earthquake-Generated Deformation

    NASA Astrophysics Data System (ADS)

    Haddad, David Elias

    Earth's topographic surface forms an interface across which the geodynamic and geomorphic engines interact. This interaction is best observed along crustal margins where topography is created by active faulting and sculpted by geomorphic processes. Crustal deformation manifests as earthquakes at centennial to millennial timescales. Given that nearly half of Earth's human population lives along active fault zones, a quantitative understanding of the mechanics of earthquakes and faulting is necessary to build accurate earthquake forecasts. My research relies on the quantitative documentation of the geomorphic expression of large earthquakes and the physical processes that control their spatiotemporal distributions. The first part of my research uses high-resolution topographic lidar data to quantitatively document the geomorphic expression of historic and prehistoric large earthquakes. Lidar data allow for enhanced visualization and reconstruction of structures and stratigraphy exposed by paleoseismic trenches. Lidar surveys of fault scarps formed by the 1992 Landers earthquake document the centimeter-scale erosional landforms developed by repeated winter storm-driven erosion. The second part of my research employs a quasi-static numerical earthquake simulator to explore the effects of fault roughness, friction, and structural complexities on earthquake-generated deformation. My experiments show that fault roughness plays a critical role in determining fault-to-fault rupture jumping probabilities. These results corroborate the accepted 3-5 km rupture jumping distance for smooth faults. However, my simulations show that the rupture jumping threshold distance is highly variable for rough faults due to heterogeneous elastic strain energies. Furthermore, fault roughness controls spatiotemporal variations in slip rates such that rough faults exhibit lower slip rates relative to their smooth counterparts. The central implication of these results lies in guiding the

  17. Lacustrine Paleoseismology Reveals Earthquake Segmentation of the Alpine Fault, New Zealand

    NASA Astrophysics Data System (ADS)

    Howarth, J. D.; Fitzsimons, S.; Norris, R.; Langridge, R. M.

    2013-12-01

    Transform plate boundary faults accommodate high rates of strain and are capable of producing large (Mw>7.0) to great (Mw>8.0) earthquakes that pose significant seismic hazard. The Alpine Fault in New Zealand is one of the longest, straightest and fastest slipping plate boundary transform faults on Earth and produces earthquakes at quasi-periodic intervals. Theoretically, the fault's linearity, isolation from other faults and quasi-periodicity should promote the generation of earthquakes that have similar magnitudes over multiple seismic cycles. We test the hypothesis that the Alpine Fault produces quasi-regular earthquakes that contiguously rupture the southern and central fault segments, using a novel lacustrine paleoseismic proxy to reconstruct spatial and temporal patterns of fault rupture over the last 2000 years. In three lakes located close to the Alpine Fault the last nine earthquakes are recorded as megaturbidites formed by co-seismic subaqueous slope failures, which occur when shaking exceeds Modified Mercalli (MM) VII. When the fault ruptures adjacent to a lake the co-seismic megaturbidites are overlain by stacks of turbidites produced by enhanced fluvial sediment fluxes from earthquake-induced landslides. The turbidite stacks record shaking intensities of MM>IX in the lake catchments and can be used to map the spatial location of fault rupture. The lake records can be dated precisely, facilitating meaningful along strike correlations, and the continuous records allow earthquakes closely spaced in time on adjacent fault segments to be distinguished. The results show that while multi-segment ruptures of the Alpine Fault occurred during most seismic cycles, sequential earthquakes on adjacent segments and single segment ruptures have also occurred. The complexity of the fault rupture pattern suggests that the subtle variations in fault geometry, sense of motion and slip rate that have been used to distinguish the central and southern segments of the Alpine

  18. Earth Science

    1994-03-08

    Workers at the Astrotech processing facility in Titusville prepared for a news media showing of the Geostationary Operational Environmental Satellite-1 (GOES-1). GOES-1 was the first in a new generation of weather satellites deployed above Earth. It was the first 3-axis, body-stabilized meteorological satellite to be used by the National Oceanic and Atmospheric Administration (NOAA) and NASA. These features allowed GOES-1 to continuously monitor the Earth, rather than viewing it just five percent of the time as was the case with spin-stabilized meteorological satellites. GOES-1 also has independent imaging and sounding instruments which can operate simultaneously yet independently. As a result, observations provided by each instrument will not be interrupted. The imager produces visual and infrared images of the Earth's surface, oceans, cloud cover and severe storm development, while the prime sounding products include vertical temperature and moisture profiles, and layer mean moisture.

  19. Length-displacement scaling of thrust faults on the Moon and the formation of uphill-facing scarps

    NASA Astrophysics Data System (ADS)

    Roggon, Lars; Hetzel, Ralf; Hiesinger, Harald; Clark, Jaclyn D.; Hampel, Andrea; van der Bogert, Carolyn H.

    2017-08-01

    Fault populations on terrestrial planets exhibit a linear relationship between their length, L, and the maximum displacement, D, which implies a constant D/L ratio during fault growth. Although it is known that D/L ratios of faults are typically a few percent on Earth and 0.2-0.8% on Mars and Mercury, the D/L ratios of lunar faults are not well characterized. Quantifying the D/L ratios of faults on the Moon is, however, crucial for a better understanding of lunar tectonics, including for studies of the amount of global lunar contraction. Here, we use high-resolution digital terrain models to perform a topographic analysis of four lunar thrust faults - Simpelius-1, Morozov (S1), Fowler, and Racah X-1 - that range in length from 1.3 km to 15.4 km. First, we determine the along-strike variation of the vertical displacement from ≥ 20 topographic profiles across each fault. For measuring the vertical displacements, we use a method that is commonly applied to fault scarps on Earth and that does not require detrending of the profiles. The resulting profiles show that the displacement changes gradually along strike, with maximum vertical displacements ranging from 17 ± 2 m for Simpelius-1 to 192 ± 30 m for Racah X-1. Assuming a fault dip of 30° yields maximum total displacements (D) that are twice as large as the vertical displacements. The linear relationship between D and L supports the inference that lunar faults gradually accumulate displacement as they propagate laterally. For the faults we investigated, the D/L ratio is ∼2.3%, an order of magnitude higher than theoretical predictions for the Moon, but similar to values for faults on Earth. We also employ finite-element modeling and a Mohr circle stress analysis to investigate why many lunar thrust faults, including three of those studied here, form uphill-facing scarps. Our analysis shows that fault slip is preferentially initiated on planes that dip in the same direction as the topography, because
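
    The conversion from vertical displacement to total displacement follows directly from the assumed dip, D = throw / sin(dip), so a 30° dip doubles the vertical value. A quick check in Python using the Racah X-1 figures quoted above:

        import math

        throw = 192.0             # maximum vertical displacement of Racah X-1, in m
        length = 15.4e3           # fault length, in m
        dip = math.radians(30.0)  # assumed fault dip

        D = throw / math.sin(dip)  # total displacement resolved on the fault plane
        print(f"D = {D:.0f} m, D/L = {100 * D / length:.1f}%")
        # -> D = 384 m, D/L = 2.5%, consistent with the ~2.3% ratio reported
        #    across the four faults.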

  20. Fault Tolerant State Machines

    NASA Technical Reports Server (NTRS)

    Burke, Gary R.; Taft, Stephanie

    2004-01-01

    State machines are commonly used to control sequential logic in FPGAs and ASICs. An errant state machine can cause considerable damage to the device it is controlling. For example, in space applications the FPGA might be controlling pyros, which when fired at the wrong time will cause a mission failure. Even a well-designed state machine can be subject to random errors as a result of SEUs from the radiation environment in space. There are various ways to encode the states of a state machine, and the type of encoding makes a large difference in the susceptibility of the state machine to radiation. In this paper we compare four methods of state machine encoding and determine which method gives the best fault tolerance, as well as the resources needed for each method.
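
    One measure underlying such comparisons is the minimum Hamming distance between valid state codes: at distance 1 a single SEU silently reaches another legal state, while at distance 2 or more (as in one-hot encoding) a single upset produces an illegal code that detection logic can trap. A small generic illustration in Python (these encodings stand in for, and are not necessarily, the paper's four methods):

        from itertools import combinations

        def hamming(a, b):
            return bin(a ^ b).count("1")

        def min_distance(codes):
            """Smallest Hamming distance between any two valid state codes."""
            return min(hamming(a, b) for a, b in combinations(codes, 2))

        binary = [0b00, 0b01, 0b10, 0b11]            # dense binary encoding
        one_hot = [0b0001, 0b0010, 0b0100, 0b1000]   # one flip-flop per state

        for name, codes in (("binary", binary), ("one-hot", one_hot)):
            print(f"{name}: min Hamming distance = {min_distance(codes)}")
        # binary: distance 1, so one SEU lands on another legal state undetected;
        # one-hot: distance 2, so one SEU yields an illegal code that can be trapped.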

  1. Arc fault detection system

    DOEpatents

    Jha, K.N.

    1999-05-18

    An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between current flowing from secondary windings located in a current transformer coupled to a power supply side of a switchboard, and a total current induced in secondary windings coupled to a load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and a power supply to remove the supply of power to the switchboard. 1 fig.
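
    The trip decision reduces to comparing the supply-side current against the total load-side current: a sustained imbalance means current is escaping into an arc inside the switchboard. A minimal sketch of that logic in Python (the 5 A pickup value is an arbitrary placeholder, not taken from the patent):

        def differential_relay(i_supply, i_load, pickup=5.0):
            """Toy differential check: trip the upstream breaker when the
            supply-side and load-side currents (in amperes) differ by more
            than the pickup value, indicating current lost to an arc fault
            inside the switchboard."""
            return abs(i_supply - i_load) > pickup

        print(differential_relay(100.2, 100.0))   # False: normal load current
        print(differential_relay(112.0, 100.0))   # True: 12 A unaccounted for -> trip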

  2. Arc fault detection system

    DOEpatents

    Jha, Kamal N.

    1999-01-01

    An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between current flowing from secondary windings located in a current transformer coupled to a power supply side of a switchboard, and a total current induced in secondary windings coupled to a load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and a power supply to remove the supply of power to the switchboard.

  3. Fault Tolerant Cache Schemes

    NASA Astrophysics Data System (ADS)

    Tu, H.-Yu.; Tasneem, Sarah

    Most modern microprocessors employ on-chip cache memories to meet memory bandwidth demands. These caches now occupy a greater share of chip real estate. Also, the continuous down-scaling of transistors increases the possibility of defects in the cache area, which already occupies more than 50% of chip area. For this reason, various techniques have been proposed to tolerate defects in cache blocks. These techniques can be classified into three categories, namely, cache line disabling, replacement with spare blocks, and decoder reconfiguration without spare blocks. This chapter examines each of these fault-tolerant techniques for a fixed, typical size and organization of L1 cache, through extended simulation of the individual techniques using the SPEC2000 benchmarks. The design and characteristics of each technique are summarized with a view to evaluating the scheme. We then present our simulation results and a comparative study of the three methods.
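
    Cache line disabling, the first of the three schemes, sacrifices capacity instead of adding spares: any access that indexes a line marked defective is forced to miss and is served from memory. A toy direct-mapped model in Python (the sizes and the faulty-line set are hypothetical):

        class FaultTolerantCache:
            """Toy direct-mapped cache illustrating cache line disabling: a line
            whose SRAM block fails test is marked unusable, and accesses that
            map to it simply become misses served by memory."""

            def __init__(self, num_lines, faulty_lines):
                self.num_lines = num_lines
                self.disabled = set(faulty_lines)   # from manufacturing/BIST test
                self.tags = {}

            def access(self, address):
                line = address % self.num_lines
                if line in self.disabled:
                    return "miss (line disabled)"   # always bypass the bad line
                tag = address // self.num_lines
                if self.tags.get(line) == tag:
                    return "hit"
                self.tags[line] = tag               # fill on miss
                return "miss"

        cache = FaultTolerantCache(num_lines=8, faulty_lines={3})
        print(cache.access(11))   # maps to line 3 -> always a miss
        print(cache.access(9))    # line 1: miss, then hit on re-access
        print(cache.access(9))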

  4. San Andreas-sized Strike-slip Fault on Europa

    NASA Technical Reports Server (NTRS)

    1998-01-01

    opens the fault and subsequent tidal stress causes it to move lengthwise in one direction. Then tidal forces close the fault again, preventing the area from moving back to its original position. Daily tidal cycles produce a steady accumulation of lengthwise offset motions. Here on Earth, unlike Europa, large strike-slip faults like the San Andreas are set in motion by plate tectonic forces.

    North is to the top of the picture and the sun illuminates the surface from the top. The image, centered at 66 degrees south latitude and 195 degrees west longitude, covers an area approximately 300 by 203 kilometers (185 by 125 miles). The pictures were taken on September 26, 1998 by Galileo's solid-state imaging system.

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo

  5. 3D Fault Network of the Murchison Domain, Yilgarn Craton

    NASA Astrophysics Data System (ADS)

    Murdie, Ruth; Gessner, Klaus

    2014-05-01

    The architecture of Archean granite-greenstone terranes is often controlled by networks of 10 km to 100 km-scale shear zones that record displacement under amphibolite facies to greenschist facies metamorphic conditions. The geometry of such crustal-scale 'fault networks' has been shown to be highly relevant to understanding the tectonic and metamorphic history of granite-greenstone terranes, as well as the availability of structurally controlled fluid pathways related to magmatic and hydrothermal mineralization. The Neoarchean Yilgarn Craton and the Proterozoic orogens around its margins constitute one of Earth's greatest mineral treasure troves, including iron, gold, copper and nickel mineral deposits. Whereas the Yilgarn Craton is one of the best studied Archean cratons, its enormous size and limited outcrop hinder a better understanding of what controls the distribution of these vast resources and what geodynamic processes were involved in the tectonic assembly of this part of the Australian continent. Here we present a network of the major faults of the NW Yilgarn Craton from the Yalgar Fault, the Murchison Domain's NW contact with the Narryer Terrane, to the Ida Fault, its boundary with the Eastern Goldfields Superterrane. The model has been constructed from various geophysical and geological data, including potential field grids, Geological Survey of Western Australia map sheets, seismic reflection surveys and magnetotelluric traverses. The northern extremity of the modelled area is bounded by the Proterozoic cover and the southern limit has been arbitrarily chosen to include various greenstone belts. In the west, the major faults in the upper crust, such as the Carbar and Chundaloo Shear Zones, dip steeply towards the west and then flatten off at depth. They form complex branching fault systems that bound the greenstone belts in a series of stacked faults. East of the Ida Fault, in the far east of the model, the faults have been integrated with Geoscience Australia

  6. The Najd Fault System of Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Stüwe, Kurt; Kadi, Khalid; Abu-Alam, Tamer; Hassan, Mahmoud

    2014-05-01

    The Najd Fault System of the Arabian-Nubian Shield is considered to be the largest Proterozoic shear zone system on Earth. The shear zone was active during the late stages of the Pan-African evolution and is known to be responsible for the exhumation of fragments of juvenile Proterozoic continental crust that form a series of basement domes across the shield areas of Egypt and Saudi Arabia. A three-year research project funded by the Austrian Science Fund (FWF) and supported by the Saudi Geological Survey (SGS) has focused on structural mapping, petrology and geochronology of the shear zone system in order to constrain the age and mechanisms of exhumation of the domes, with a focus on the Saudi Arabian side of the Red Sea. We recognise important differences in comparison with the basement domes in the Eastern Desert of Egypt. In particular, high-grade metamorphic rocks are not exclusively confined to basement domes surrounded by shear zones, but also occur within the shear zones themselves. Moreover, we recognise both extensional and transpressive regimes to be responsible for the exhumation of high-grade metamorphic rocks in different parts of the shield. We suggest that these apparent structural differences between different sub-regions of the shield largely reflect different timing of activity of various branches of the Najd Fault System. In order to tackle the ill-resolved timing of the Najd Fault System, zircon geochronology is performed on intrusive rocks with different cross-cutting relationships to the shear zone. We are able to constrain an age between 580 Ma and 605 Ma for one of the major branches of the shear zone, namely the Ajjaj shear zone. In our contribution we present a strain map for the shield as well as early geochronological data for selected shear zone branches.

  7. Earth Observation

    2010-08-23

    ISS024-E-016042 (23 Aug. 2010) --- This nighttime view, captured by one of the Expedition 24 crew members aboard the International Space Station some 220 miles above Earth, looks southward from central Romania over the Aegean Sea toward Greece and includes Thessaloniki (near center), the larger bright mass of Athens (left center), and the Macedonian capital of Skopje (lower right). Center point coordinates of the area pictured are 46.4 degrees north latitude and 25.5 degrees east longitude. The picture was taken in August and was physically brought back to Earth on a disk with the return of the Expedition 25 crew in November 2010.

  8. Earth Observation

    2014-07-19

    ISS040-E-070412 (19 July 2014) --- One of the Expedition 40 crew members aboard the Earth-orbiting International Space Station recorded this July 19 panorama featuring wildfires which are plaguing the Northwest and causing widespread destruction. (Note: south is at the top of the frame). The orbital outpost was flying 223 nautical miles above Earth at the time of the photo. Parts of Oregon and Washington are included in the scene. Mt. Jefferson, Three Sisters and Mt. St. Helens are all snow-capped and visible in the photo, and the Columbia River can also be delineated.

  9. Earth Observation

    2014-07-19

    ISS040-E-070424 (19 July 2014) --- One of the Expedition 40 crew members aboard the Earth-orbiting International Space Station recorded this July 19 image of wildfires which are plaguing the Northwest and causing widespread destruction. The orbital outpost was flying 223 nautical miles above Earth at the time of the photo. Lightning has been given as the cause of the Ochoco Complex fires in the Ochoco National Forest in central Oregon. The complex has gotten larger since this photo was taken.

  10. Earth observation

    2014-09-04

    ISS040-E-129950 (4 Sept. 2014) --- In this photograph, taken by one of the Expedition 40 crew members aboard the Earth-orbiting International Space Station, the orange spot located in the very center is the sun, which appears to be sitting on Earth's limb. At far right, a small bright spot is believed to be a reflection from somewhere in the camera system or something on the orbital outpost. When the photograph was exposed, the orbital outpost was flying at an altitude of 226 nautical miles above a point near French Polynesia in the Pacific Ocean.

  11. Earth Science

    2004-08-13

    This panoramic view of Hurricane Charley was photographed by the Expedition 9 crew aboard the International Space Station (ISS) on August 13, 2004, at a vantage point just north of Tampa, Florida. The small eye was not visible in this view, but the raised cloud tops near the center coincide roughly with the time that the storm began to rapidly strengthen. The category 2 hurricane was moving north-northwest at 18 mph packing winds of 105 mph. Crew Earth Observations record Earth surface changes over time, as well as more fleeting events such as storms, floods, fires, and volcanic eruptions.

  12. Earth Science

    2004-09-11

    This image hosts a look at the eye of Hurricane Ivan, one of the strongest hurricanes on record, as the storm topped the western Caribbean Sea on Saturday, September 11, 2004. The hurricane was photographed by astronaut Edward M. (Mike) Fincke from aboard the International Space Station (ISS) at an altitude of approximately 230 miles. At the time, sustained winds in the eye wall of the category 5 storm were reported at about 160 mph. Crew Earth Observations record Earth surface changes over time, as well as more fleeting events such as storms, floods, fires, and volcanic eruptions.

  13. Earth Science

    2004-09-15

    Except for a small portion of the International Space Station (ISS) in the foreground, Hurricane Ivan, one of the strongest hurricanes on record, fills this image over the northern Gulf of Mexico. As the downgraded category 4 storm approached landfall on the Alabama coast Wednesday afternoon on September 15, 2004, sustained winds in the eye wall were reported at about 135 mph. The hurricane was photographed by astronaut Edward M. (Mike) Fincke from aboard the ISS at an altitude of approximately 230 miles. Crew Earth Observations record Earth surface changes over time, as well as more fleeting events such as storms, floods, fires, and volcanic eruptions.

  14. Earth Science

    2004-09-15

    This image hosts a look into the eye of Hurricane Ivan, one of the strongest hurricanes on record, as the storm approached landfall on the central Gulf coast Wednesday afternoon on September 15, 2004. The hurricane was photographed by astronaut Edward M. (Mike) Fincke from aboard the International Space Station (ISS) at an altitude of approximately 230 miles. At the time, sustained winds in the eye wall were reported at about 135 mph as the downgraded category 4 storm approached the Alabama coast. Crew Earth Observations record Earth surface changes over time, as well as more fleeting events such as storms, floods, fires, and volcanic eruptions.

  15. Soil carbon management in large-scale Earth system modelling: implications for crop yields and nitrogen leaching

    NASA Astrophysics Data System (ADS)

    Olin, S.; Lindeskog, M.; Pugh, T. A. M.; Schurgers, G.; Wårlind, D.; Mishurov, M.; Zaehle, S.; Stocker, B. D.; Smith, B.; Arneth, A.

    2015-06-01

    We explore cropland management alternatives and the effect these can have on future C and N pools and fluxes using the land-use-enabled dynamic vegetation model LPJ-GUESS. Simulated crop production, cropland carbon storage, carbon sequestration and nitrogen leaching from croplands are evaluated and discussed. Compared to the version of LPJ-GUESS that does not include land-use dynamics, estimates of soil carbon stocks and nitrogen leaching from terrestrial to aquatic ecosystems were improved. We explore trade-offs between important ecosystem services that can be provided from agricultural fields such as crop yields, retention of nitrogen and carbon storage. These trade-offs are evaluated for current land use and climate and further explored for future conditions within the two future climate change scenarios, RCP 2.6 and 8.5. Our results show that the potential for carbon sequestration due to typical cropland management practices such as no-till and cover crops proposed in the literature is not realised, globally or over larger climatic regions. Our results highlight important considerations to be made when modelling C-N interactions in agricultural ecosystems under future environmental change, and the effects these have on terrestrial biogeochemical cycles.

  16. Integration and Value of Earth Observations Data for Water Management Decision-Making in the Western U.S.

    NASA Astrophysics Data System (ADS)

    Larsen, S. G.; Willardson, T.

    2017-12-01

    Some exciting new science and tools are under development for water management decision-making in the Western U.S. This session will highlight a number of examples where remotely-sensed observation data has been directly beneficial to water resource stakeholders, and discuss the steps needed between receipt of the data and their delivery as a finished data product or tool. We will explore case studies of how NASA scientists and researchers have worked together with western state water agencies and other stakeholders as a team to develop and interpret remotely-sensed data observations, implement easy-to-use software and tools, train team members on their operation, and transition those tools into the institution's workflows. The benefits of integrating these tools into stakeholder, agency, and end-user operations can be seen on the ground, when water is optimally managed for the decision-maker's objectives. These cases also point to the importance of building relationships and conduits for communication between researchers and their institutional counterparts.

  17. Integration and Value of Earth Observations Data for Water Management Decision-Making in the Western U.S.

    NASA Astrophysics Data System (ADS)

    Larsen, S. G.; Willardson, T.

    2016-12-01

    Some exciting new science and tools are under development for water management decision-making in the Western U.S. This session will highlight a number of examples where remotely-sensed observation data has been directly beneficial to water resource stakeholders, and discuss the steps needed between receipt of the data and their delivery as a finished data product or tool. We will explore case studies of how NASA scientists and researchers have worked together with western state water agencies and other stakeholders as a team to develop and interpret remotely-sensed data observations, implement easy-to-use software and tools, train team members on their operation, and transition those tools into the institution's workflows. The benefits of integrating these tools into stakeholder, agency, and end-user operations can be seen on the ground, when water is optimally managed for the decision-maker's objectives. These cases also point to the importance of building relationships and conduits for communication between researchers and their institutional counterparts.

  18. Functional Fault Modeling Conventions and Practices for Real-Time Fault Isolation

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara

    2010-01-01

    The purpose of this paper is to present the conventions, best practices, and processes that were established based on the prototype development of a Functional Fault Model (FFM) for a Cryogenic System that would be used for real-time Fault Isolation in a Fault Detection, Isolation, and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real-time. The FFMs were created offline but would eventually be used by a real-time reasoner to isolate faults in a Cryogenic System. Through their development and review, a set of modeling conventions and best practices were established. The prototype FFM development also provided a pathfinder for future FFM development processes. This paper documents the rationale and considerations for robust FFMs that can easily be transitioned to a real-time operating environment.
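
    At run time, isolation with an FFM amounts to matching the set of out-of-family measurements against each failure mode's predicted signature. A deliberately simplified sketch in Python (the failure modes and measurement names are invented, not taken from the Cryogenic System model):

        # Each failure mode maps to the set of measurements it drives out of
        # family; all names here are hypothetical, for illustration only.
        FFM = {
            "LH2_valve_stuck": {"LH2_flow", "tank_pressure"},
            "pressure_xducer_bias": {"tank_pressure"},
            "LH2_leak": {"LH2_flow", "tank_level", "tank_pressure"},
        }

        def isolate(anomalous):
            """Return the failure modes whose predicted signature matches the
            observed set of anomalous measurements exactly (a deliberately
            simplified matching rule)."""
            return [mode for mode, signature in FFM.items() if signature == anomalous]

        print(isolate({"tank_pressure"}))              # ['pressure_xducer_bias']
        print(isolate({"LH2_flow", "tank_pressure"}))  # ['LH2_valve_stuck']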

  19. Digital Earth - A sustainable Earth

    NASA Astrophysics Data System (ADS)

    Mahavir

    2014-02-01

    All life, particularly human life, cannot be sustainable unless complemented with shelter, poverty reduction, provision of basic infrastructure and services, equal opportunities and social justice. Yet, in the context of cities, it is believed that they can accommodate more and more people, endlessly, regardless of their carrying capacity and increasing ecological footprint. The 'inclusion' of more and more people in the purview of development is often limited to social and economic inclusion rather than spatial and ecological inclusion. Economic investment decisions are also not always supported with spatial planning decisions. Most planning for a sustainable Earth, whether at the level of a rural settlement, city, region, nation or the globe, fails on the capacity and capability fronts. In India, for example, out of some 8,000 towns and cities, Master Plans exist for only about 1,800. A chapter on sustainability or the environment is neither statutorily compulsory nor the norm for these Master Plans. Geospatial technologies including Remote Sensing, GIS, the Indian National Spatial Data Infrastructure (NSDI), the Indian National Urban Information Systems (NUIS), the Indian Environmental Information System (ENVIS), and the Indian National GIS (NGIS) have the potential to map, analyse, visualize and support sustainable development decisions based on participatory social, economic and spatial inclusion. A sustainable Earth, at all scales, is a logical and natural outcome of a digitally mapped, conceived and planned Earth. Digital Earth, in fact, itself offers a platform to dovetail the ecological, social and economic considerations in transforming it into a sustainable Earth.

  20. Use of Google Earth to strengthen public health capacity and facilitate management of vector-borne diseases in resource-poor environments.

    PubMed

    Lozano-Fuentes, Saul; Elizondo-Quiroga, Darwin; Farfan-Ale, Jose Arturo; Loroño-Pino, Maria Alba; Garcia-Rejon, Julian; Gomez-Carro, Salvador; Lira-Zumbardo, Victor; Najera-Vazquez, Rosario; Fernandez-Salas, Ildefonso; Calderon-Martinez, Joaquin; Dominguez-Galera, Marco; Mis-Avila, Pedro; Morris, Natashia; Coleman, Michael; Moore, Chester G; Beaty, Barry J; Eisen, Lars

    2008-09-01

    Novel, inexpensive solutions are needed for improved management of vector-borne and other diseases in resource-poor environments. Emerging free software providing access to satellite imagery and simple editing tools (e.g. Google Earth) complements existing geographic information system (GIS) software and provides new opportunities for: (i) strengthening overall public health capacity through development of information for city infrastructures; and (ii) display of public health data directly on an image of the physical environment. We used freely accessible satellite imagery and a set of feature-making tools included in the software (allowing for production of polygons, lines and points) to generate information for city infrastructure and to display disease data in a dengue decision support system (DDSS) framework. Two cities in Mexico (Chetumal and Merida) were used to demonstrate that a basic representation of city infrastructure useful as a spatial backbone in a DDSS can be rapidly developed at minimal cost. Data layers generated included labelled polygons representing city blocks, lines representing streets, and points showing the locations of schools and health clinics. City blocks were colour-coded to show presence of dengue cases. The data layers were successfully imported into GIS software in shapefile format. The combination of Google Earth and free GIS software (e.g. HealthMapper, developed by WHO, and SIGEpi, developed by PAHO) has tremendous potential to strengthen overall public health capacity and facilitate decision support system approaches to prevention and control of vector-borne diseases in resource-poor environments.
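
    Because Google Earth reads KML directly, a colour-coded city-block layer of the kind described can be produced with nothing beyond the Python standard library. A minimal sketch (the block number and corner coordinates are placeholders near Chetumal; the colour string uses KML's aabbggrr byte order):

        # Emit one colour-coded city-block polygon as KML; block number and
        # corner coordinates are hypothetical placeholders near Chetumal.
        kml = """<?xml version="1.0" encoding="UTF-8"?>
        <kml xmlns="http://www.opengis.net/kml/2.2">
          <Document>
            <Style id="dengueBlock">
              <PolyStyle><color>7f0000ff</color></PolyStyle>  <!-- aabbggrr: translucent red -->
            </Style>
            <Placemark>
              <name>Block 42 - dengue cases present</name>
              <styleUrl>#dengueBlock</styleUrl>
              <Polygon><outerBoundaryIs><LinearRing><coordinates>
                -88.3060,18.5001,0 -88.3050,18.5001,0 -88.3050,18.5010,0
                -88.3060,18.5010,0 -88.3060,18.5001,0
              </coordinates></LinearRing></outerBoundaryIs></Polygon>
            </Placemark>
          </Document>
        </kml>"""

        with open("block42.kml", "w") as f:
            f.write(kml)   # the file opens directly in Google Earth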

  1. Improving Multiple Fault Diagnosability using Possible Conflicts

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2012-01-01

    Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.
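    To make the isolation idea concrete, here is a toy sketch (ours; the fault signatures are invented, and the real framework is event-based and far richer): a multiple-fault candidate set is retained only if its members' predicted signatures can jointly explain every observed residual deviation.

      from itertools import combinations

      # Hypothetical single-fault signatures: residual -> predicted deviation.
      SIGNATURES = {
          "f1": {"r1": "+", "r2": "0"},
          "f2": {"r1": "+", "r2": "-"},
          "f3": {"r1": "0", "r2": "-"},
      }

      def consistent(fault_set, observed):
          # Due to masking, a candidate set need only cover each deviation.
          return all(any(SIGNATURES[f].get(r) == dev for f in fault_set)
                     for r, dev in observed.items())

      def isolate(observed, max_faults=2):
          faults = list(SIGNATURES)
          return [c for k in range(1, max_faults + 1)
                  for c in combinations(faults, k) if consistent(c, observed)]

      print(isolate({"r1": "+", "r2": "-"}))  # ('f2',), ('f1', 'f3'), ...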

  2. Complex Plate Tectonic Features on Planetary Bodies: Analogs from Earth

    NASA Astrophysics Data System (ADS)

    Stock, J. M.; Smrekar, S. E.

    2016-12-01

    We review the types and scales of observations needed on other rocky planetary bodies (e.g., Mars, Venus, exoplanets) to evaluate evidence of present or past plate motions. Earth's plate boundaries were initially simplified into three basic types (ridges, trenches, and transform faults). Previous studies examined the Moon, Mars, Venus, Mercury and icy moons such as Europa for evidence of features including linear rifts, arcuate convergent zones, strike-slip faults, and distributed deformation (rifting or folding). Yet several aspects merit further consideration. 1) Is the feature active or fossil? Earth's active mid-ocean ridges are bathymetric highs, and seafloor depth increases on either side; whereas fossil mid-ocean ridges may be as deep as the surrounding abyssal plain with no major rift valley, although with a minor gravity low (e.g., Osbourn Trough, W. Pacific Ocean). Fossil trenches have less topographic relief than active trenches (e.g., the fossil trench along the Patton Escarpment, west of California). 2) On Earth, fault patterns of spreading centers depend on volcanism. Excess volcanism reduces faulting. Fault visibility increases as spreading rates slow, or as magmatism decreases, producing high-angle normal faults parallel to the spreading center. At magma-poor spreading centers, high-resolution bathymetry shows low-angle detachment faults with large-scale mullions and striations parallel to plate motion (e.g., Mid-Atlantic Ridge, Southwest Indian Ridge). 3) Sedimentation on Earth masks features that might be visible on a non-erosional planet. Subduction zones on Earth in areas of low sedimentation have clear trench-parallel faults causing flexural deformation of the downgoing plate; in highly sedimented subduction zones, no such faults can be seen, and there may be no bathymetric trench at all. 4) Areas of Earth with broad upwelling, such as the North Fiji Basin, have complex plate tectonic patterns with many individual but poorly linked ridge

  3. Subaru FATS (fault tracking system)

    NASA Astrophysics Data System (ADS)

    Winegar, Tom W.; Noumaru, Junichi

    2000-07-01

    The Subaru Telescope requires a fault tracking system to record the problems and questions that staff experience during their work, and the solutions provided by technical experts to these problems and questions. The system records each fault and routes it to a pre-selected 'solution-provider' for each type of fault. The solution provider analyzes the fault and writes a solution that is routed back to the fault reporter and recorded in a 'knowledge-base' for future reference. The specifications of our fault tracking system were unique. (1) Dual language capacity -- Our staff speak both English and Japanese. Our contractors speak Japanese. (2) Heterogeneous computers -- Our computer workstations are a mixture of SPARCstations, Macintosh and Windows computers. (3) Integration with prime contractors -- Mitsubishi and Fujitsu are primary contractors in the construction of the telescope. In many cases, our 'experts' are our contractors. (4) Operator scheduling -- Our operators spend 50% of their work-month operating the telescope, the other 50% is spent working day shift at the base facility in Hilo, or day shift at the summit. We plan for 8 operators, with a frequent rotation. We need to keep all operators informed on the current status of all faults, no matter the operator's location.
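    The core routing rule lends itself to a few lines of Python; this sketch is ours, with hypothetical fault types and provider names, not the actual FATS implementation:

      SOLUTION_PROVIDERS = {          # fault type -> pre-selected expert
          "telescope_drive": "contractor_a",
          "control_software": "contractor_b",
          "instrument": "staff_engineer",
      }
      knowledge_base = []             # archived (fault, solution) records

      def report_fault(fault_type, description, reporter):
          return {"type": fault_type, "desc": description, "reporter": reporter,
                  "routed_to": SOLUTION_PROVIDERS[fault_type]}

      def resolve(fault, solution):
          fault["solution"] = solution
          knowledge_base.append(fault)   # searchable by operators at any site
          return fault["reporter"]       # the solution is routed back

      ticket = report_fault("instrument", "CCD readout noise", "operator_1")
      resolve(ticket, "Replaced preamp board and recalibrated bias levels.")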

  4. Massive Hydrothermal Flows of Fluids and Heat: Earth Constraints and Ocean World Considerations

    NASA Astrophysics Data System (ADS)

    Fisher, A. T.

    2018-05-01

    This presentation reviews the hydrogeologic nature of Earth's ocean crust and evidence for massive flows of low-temperature (≤70°C), seafloor hydrothermal circulation through ridge flanks, including the influence of crustal relief and crustal faults.

  5. Earth Observation

    2014-08-10

    ISS040-E-091158 (10 Aug. 2014) --- One of the Expedition 40 crew members 225 nautical miles above Earth onboard the International Space Station used a 200mm lens to record this image of Hawke's Bay, New Zealand on Aug. 10, 2014. Napier and the bay area's most populous area are at bottom center of the frame.

  6. Earth Observation

    2013-06-13

    ISS036-E-007619 (13 June 2013) --- To a crew member aboard the International Space Station, the home planet is seen from many different angles and perspectives, as evidenced by this Expedition 36 image of Earth's atmosphere partially obscured by one of the orbital outpost's solar panels.

  7. Think Earth.

    ERIC Educational Resources Information Center

    Niedermeyer, Fred; Ice, Kay

    1992-01-01

    Describes a series of environmental education instructional units for grades K-6 developed by the Think Earth Consortium that cover topics such as conservation, pollution control, and waste reduction. Provides testimony from one sixth-grade teacher who field-tested the second-grade unit. (MDH)

  8. Earth Observation

    2014-09-01

    Earth Observation taken during a night pass by the Expedition 40 crew aboard the International Space Station (ISS). Folder lists this as: New Zealand Aurora night pass. Docked Soyuz and Progress spacecraft are visible. On the crew member's Flickr page: "The Moon, about to dive into a glowing ocean of green."

  9. Earth Observation

    2013-07-21

    Earth observation taken during a night pass by an Expedition 36 crew member on board the International Space Station (ISS). Per Twitter message this is labeled as: Tehran, Iran. Lights along the coast of the Caspian Sea visible through clouds. July 21.

  10. Earth Observation

    2013-05-19

    ISS036-E-002224 (21 May 2013) --- The sun is captured in a "starburst" mode over Earth's horizon by one of the Expedition 36 crew members as the orbital outpost was above a point in southwestern Minnesota on May 21, 2013.

  11. Earth Algebra.

    ERIC Educational Resources Information Center

    Schaufele, Christopher; Zumoff, Nancy

    Earth Algebra is an entry level college algebra course that incorporates the spirit of the National Council of Teachers of Mathematics (NCTM) Curriculum and Evaluation Standards for School Mathematics at the college level. The context of the course places mathematics at the center of one of the major current concerns of the world. Through…

  12. Earth Science

    1993-03-29

    The Small Expendable Deployer System (SEDS) is a tethered data-collecting satellite intended to demonstrate a versatile and economical way of delivering smaller payloads to higher orbits or downward toward Earth's atmosphere. The 19th Navstar Global Positioning System satellite mission joined previously launched satellites used for navigation and geodetic studies. These satellites are used commercially as well as by the military.

  13. Earth Observation

    2014-06-14

    ISS040-E-011868 (14 June 2014) --- The dark waters of the Salton Sea stand out against neighboring cultivation and desert sands in the middle of the Southern California desert, as photographed by one of the Expedition 40 crew members aboard the Earth-orbiting International Space Station on June 14, 2014.

  14. Earth Observation

    2013-08-03

    Earth observation taken during a day pass by an Expedition 36 crew member on board the International Space Station (ISS). Per Twitter message: From the southernmost point of orbit over the South Pacific, all clouds seemed to be leading to the South Pole.

  15. Earth Sky

    1965-12-16

    S65-63282 (16 Dec. 1965) --- Area of Indian Ocean, just east of the island of Madagascar, as seen from the Gemini-6 spacecraft during its 15th revolution of Earth. Land mass at top of picture is the Malagasy Republic (Madagascar). Photo credit: NASA or National Aeronautics and Space Administration

  16. Rare earths

    Gambogi, J.

    2013-01-01

    Global mine production of rare earths was estimated to have declined slightly in 2012 relative to 2011 (Fig. 1). Production in China was estimated to have decreased to 95 from 105 kt (104,700 from 115,700 st) in 2011, while new mine production in the United States and Australia increased.

  17. Earth Observation

    2013-07-04

    ISS036-E-015354 (4 July 2013) --- A number of Quebec, Canada wildfires near the Manicouagan Reservoir (seen at lower left) were recorded as part of a series of photographs taken and downlinked to Earth on July 4 by the Expedition 36 crew members aboard the International Space Station.

  18. Earth Observation

    2013-07-04

    ISS036-E-015355 (4 July 2013) --- A number of Quebec, Canada wildfires near the Manicouagan Reservoir (seen at bottom center) were recorded in a series of photographs taken and downlinked to Earth on July 4 by the Expedition 36 crew members aboard the International Space Station.

  19. Earth Observation

    2013-07-03

    ISS036-E-015292 (3 July 2013) --- A number of Quebec, Canada wildfires southeast of James Bay were recorded as part of a series of photographs taken and downlinked to Earth on July 3-4 by the Expedition 36 crew members aboard the International Space Station. This image was recorded on July 3.

  20. Earth Observation

    2013-07-04

    ISS036-E-015342 (4 July 2013) --- A number of Quebec, Canada wildfires southeast of James Bay were recorded as part of a series of photographs taken and downlinked to Earth on July 4 by the Expedition 36 crew members aboard the International Space Station.

  1. Earth Observation

    2013-07-04

    ISS036-E-015335 (4 July 2013) --- A number of Quebec, Canada wildfires southeast of James Bay were recorded as part of a series of photographs taken and downlinked to Earth on July 4 by the Expedition 36 crew members aboard the International Space Station.

  2. Earth Observation

    2014-06-12

    Earth Observation taken during a day pass by the Expedition 40 crew aboard the International Space Station (ISS). Folder lists this as: Moon, Japan, Kamchatka with a wild cloud. Part of the U.S. Lab and PMM are also visible.

  3. Earth Observation

    2013-08-29

    ISS036-E-038117 (29 Aug. 2013) --- One of the Expedition 36 crew members aboard the Earth-orbiting International Space Station photographed massive smoke plumes from the California wildfires. When this image was exposed on Aug. 29, the orbital outpost was approximately 220 miles above a point located at 38.6 degrees north latitude and 123.2 degrees west longitude.

  4. Earth Observation

    2013-08-29

    ISS036-E-038114 (29 Aug. 2013) --- One of the Expedition 36 crew members aboard the Earth-orbiting International Space Station photographed massive smoke plumes from the California wildfires. When this image was exposed on Aug. 29, the orbital outpost was approximately 220 miles above a point located at 38.6 degrees north latitude and 123.3 degrees west longitude.

  5. Earth Observations

    2014-11-18

    ISS042E006751 (11/08/2014) --- Earth observation taken from the International Space Station of the coastline of the United Arab Emirates. The large wheel along the coast center left is "Jumeirah" Palm Island, with a conference center, hotels, recreation areas and a large marine zoo.

  6. Earth Moon

    1998-06-08

    NASA's Galileo spacecraft took this image of Earth's moon on December 7, 1992 on its way to explore the Jupiter system in 1995-97. The distinct bright ray crater at the bottom of the image is the Tycho impact basin. http://photojournal.jpl.nasa.gov/catalog/PIA00405

  7. Earth's horizon

    2005-07-30

    S114-E-6076 (30 July 2005) --- The blackness of space and Earth’s horizon form the backdrop for this view of the extended Space Shuttle Discovery’s remote manipulator system (RMS) robotic arm while docked to the International Space Station during the STS-114 mission.

  8. Crescent Earth and Moon

    NASA Technical Reports Server (NTRS)

    1977-01-01

    This picture of a crescent-shaped Earth and Moon -- the first of its kind ever taken by a spacecraft -- was recorded Sept. 18, 1977, by NASA's Voyager 1 when it was 7.25 million miles (11.66 million kilometers) from Earth. The Moon is at the top of the picture and beyond the Earth as viewed by Voyager. In the picture are eastern Asia, the western Pacific Ocean and part of the Arctic. Voyager 1 was directly above Mt. Everest (on the night side of the planet at 25 degrees north latitude) when the picture was taken. The photo was made from three images taken through color filters, then processed by the Jet Propulsion Laboratory's Image Processing Lab. Because the Earth is many times brighter than the Moon, the Moon was artificially brightened by a factor of three relative to the Earth by computer enhancement so that both bodies would show clearly in the print. Voyager 2 was launched Aug. 20, 1977, followed by Voyager 1 on Sept. 5, 1977, en route to encounters at Jupiter in 1979 and Saturn in 1980 and 1981. JPL manages the Voyager mission for NASA's Office of Space Science.

  9. Probabilistic fault tree analysis of a radiation treatment system.

    PubMed

    Ekaette, Edidiong; Lee, Robert C; Cooke, David L; Iftody, Sandra; Craighead, Peter

    2007-12-01

    Inappropriate administration of radiation for cancer treatment can result in severe consequences such as premature death or appreciably impaired quality of life. There has been little study of vulnerable treatment process components and their contribution to the risk of radiation treatment (RT). In this article, we describe the application of probabilistic fault tree methods to assess the probability of radiation misadministration to patients at a large cancer treatment center. We conducted a systematic analysis of the RT process that identified four process domains: Assessment, Preparation, Treatment, and Follow-up. For the Preparation domain, we analyzed possible incident scenarios via fault trees. For each task, we also identified existing quality control measures. To populate the fault trees we used subjective probabilities from experts and compared results with incident report data. Both the fault tree and the incident report analysis revealed simulation tasks to be most prone to incidents, and the treatment prescription task to be least prone to incidents. The probability of a Preparation domain incident was estimated to be in the range of 0.1-0.7% based on incident reports, which is comparable to the mean value of 0.4% from the fault tree analysis using probabilities from the expert elicitation exercise. In conclusion, an analysis of part of the RT system using a fault tree populated with subjective probabilities from experts was useful in identifying vulnerable components of the system, and provided quantitative data for risk management.
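    The arithmetic behind such a fault tree is compact; the sketch below (ours, with invented probabilities, assuming independent basic events) shows how OR and AND gates combine expert-elicited values into a domain-level incident probability:

      def p_or(*ps):   # P(at least one of several independent events)
          out = 1.0
          for p in ps:
              out *= 1.0 - p
          return 1.0 - out

      def p_and(*ps):  # P(all independent events occur)
          out = 1.0
          for p in ps:
              out *= p
          return out

      # Hypothetical Preparation-domain subtasks with elicited probabilities.
      p_simulation = p_or(0.002, 0.001)    # simulation incidents dominate
      p_prescription = p_and(0.01, 0.05)   # prescription protected by a QC check
      p_preparation = p_or(p_simulation, p_prescription)
      # Prints ~0.0035, i.e. within the reported 0.1-0.7% range.
      print(f"P(Preparation incident) = {p_preparation:.4f}")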

  10. The Role of Interdisciplinary Earth Science in the Assessment of Regional Land Subsidence Hazards: Toward Sustainable Management of Global Land and Subsurface-Fluid Resources

    NASA Astrophysics Data System (ADS)

    Galloway, D. L.

    2012-12-01

    Land-level lowering or land subsidence is a consequence of many local- and regional-scale physical, chemical or biologic processes affecting soils and geologic materials. The principal processes can be natural or anthropogenic, and include consolidation or compaction, karst or pseudokarst, hydrocompaction of collapsible soils, mining, oxidation of organic soils, erosive piping, tectonism, and volcanism. In terms of affected area, there are two principal regional-scale anthropogenic processes—compaction of compressible subsurface materials owing to the extraction of subsurface fluids (principally groundwater, oil and gas) and oxidation and compaction accompanying drainage of organic soils—which cause significant hazards related to flooding and infrastructure damage that are amenable to resource management measures. The importance of even small magnitude (< 10 mm/yr) subsidence rates in coastal areas is amplified by its contribution to relative sea-level rise compared to estimated rates of rising eustatic sea levels (2-3 mm/yr) attributed to global climate change. Multi- or interdisciplinary [scientific] studies, including those focused on geodetic, geologic, geophysical, hydrologic, hydrogeologic, geomechanical, geochemical, and biologic factors, improve understanding of these subsidence processes. Examples include geodetic measurement and analysis techniques, such as Global Positioning System (GPS), Light Detection and Ranging (LiDAR) and Interferometric Synthetic Aperture Radar (InSAR), which have advanced our capabilities to detect, measure and monitor land-surface motion at multiple scales. Improved means for simulating aquifer-system and hydrocarbon-reservoir deformation, and the oxidation and compaction of organic soils are leading to refined predictive capabilities. The role of interdisciplinary earth science in improving the characterization of land subsidence attributed to subsurface fluid withdrawals and the oxidation and compaction of organic soils is

  11. Fault Identification by Unsupervised Learning Algorithm

    NASA Astrophysics Data System (ADS)

    Nandan, S.; Mannu, U.

    2012-12-01

    Contemporary fault identification techniques predominantly rely on the surface expression of the fault. This biased observation is inadequate to yield detailed fault structures in areas with surface cover such as cities, deserts and vegetation, or to capture the changes in fault patterns with depth. Furthermore, it is difficult to estimate fault structures which do not generate any surface rupture. Many disastrous events have been attributed to these blind faults. Faults and earthquakes are very closely related, as earthquakes occur on faults and faults grow by accumulation of coseismic rupture. For better seismic risk evaluation it is imperative to recognize and map these faults. We implement a novel approach to identify seismically active fault planes from a three-dimensional hypocenter distribution by making use of unsupervised learning algorithms. We employ the K-means clustering algorithm and the Expectation Maximization (EM) algorithm, modified to identify planar structures in the spatial distribution of hypocenters after filtering out isolated events. We examine the difference between the faults reconstructed by deterministic assignment in K-means and probabilistic assignment in the EM algorithm. The method is conceptually identical to methodologies developed by Ouillon et al. (2008, 2010) and has been extensively tested on synthetic data. We determined the sensitivity of the methodology to uncertainties in hypocenter location, density of clustering and cross-cutting fault structures. The method has been applied to datasets from two contrasting regions. While the Kumaon Himalaya is a convergent plate boundary, Koyna-Warna lies in the middle of the Indian Plate but has a history of triggered seismicity. The reconstructed faults were validated by examining the fault orientation of mapped faults and the focal mechanisms of these events determined through waveform inversion. The reconstructed faults could be used to solve the fault plane ambiguity in focal mechanism determination and constrain the fault
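    A condensed sketch of this pipeline (ours, on synthetic data) clusters 3-D hypocenters with K-means and recovers each cluster's plane orientation from the smallest principal component:

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      # Synthetic catalogue: two noisy planar point clouds standing in for faults.
      plane1 = rng.normal(0.0, [5.0, 5.0, 0.1], (300, 3))
      plane2 = rng.normal(0.0, [0.1, 5.0, 5.0], (300, 3)) + [10.0, 0.0, 0.0]
      hypocenters = np.vstack([plane1, plane2])

      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(hypocenters)

      for k in range(2):
          pts = hypocenters[labels == k]
          centered = pts - pts.mean(axis=0)
          # The singular vector with the smallest singular value is the
          # normal of the best-fitting plane through the cluster.
          _, _, vt = np.linalg.svd(centered, full_matrices=False)
          print(f"cluster {k}: {len(pts)} events, plane normal ~ {np.round(vt[-1], 2)}")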

  12. Fault Injection Campaign for a Fault Tolerant Duplex Framework

    NASA Technical Reports Server (NTRS)

    Sacco, Gian Franco; Ferraro, Robert D.; von llmen, Paul; Rennels, Dave A.

    2007-01-01

    Fault tolerance is an efficient approach adopted to avoid or reduce the damage of a system failure. In this work we present the results of a fault injection campaign we conducted on the Duplex Framework (DF). The DF is software developed by the UCLA group [1, 2] that takes a fault-tolerant approach, allowing two replicas of the same process to run on two different nodes of a commercial off-the-shelf (COTS) computer cluster. A third process, running on a different node, constantly monitors the results computed by the two replicas, and restarts the two replica processes if an inconsistency in their computation is detected. This approach is very cost-efficient and can be adopted to control processes on spacecraft, where the fault rate produced by cosmic rays is not very high.
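    The monitor logic reduces to compare-and-restart; the following schematic sketch (ours, not the UCLA code) captures it, with the replicas run locally rather than on separate cluster nodes and worker.py standing in as a hypothetical replica program:

      import subprocess

      REPLICA_CMD = ["python", "worker.py"]   # hypothetical replica program

      def run_replica():
          result = subprocess.run(REPLICA_CMD, capture_output=True, text=True)
          return result.stdout.strip()

      def monitor(max_restarts=3):
          for attempt in range(max_restarts):
              a, b = run_replica(), run_replica()
              if a == b:            # replicas agree: accept the result
                  return a
              print(f"mismatch on attempt {attempt}; restarting both replicas")
          raise RuntimeError("replicas kept disagreeing; suspect a hard fault")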

  13. Earth Observations

    2011-06-02

    ISS028-E-006830 (2 June 2011) --- Okavango Swamp in Botswana is featured in this image photographed by an Expedition 28 crew member on the International Space Station. This short focal-length photograph shows the entire Okavango "delta," a swampland known in Southern Africa as the "Jewel of the Kalahari Desert". This enormous pristine wetland of forest, wildlife, and freshwater almost miraculously appears in a desert where surface water is typically non-existent. The water comes from the Okavango River which rises in the high-rainfall zone of southern Angola, hundreds of kilometers to the northwest. The dark green forested floodplain is approximately 10 kilometers wide where it enters the view (left). The Okavango then enters a rift basin which allows the river to spread out, forming the wetland. The width of the rift determines the dimensions of the delta: 150 kilometers from apex to the linear downstream margin (right). The apex fault is more difficult to discern, but two fault lines actually define the downstream margin; the fault traces are indicated by linear stream channels and vegetation patterns oriented at nearly right angles to the southeast-trending distributary channels at center. The distributary channels carry sediment from the Okavango River that is deposited within the rift basin. Over time, a fan-shaped morphology of the deposits has developed, leading to characterization of the wetland as the Okavango "delta". The drying trend from higher rainfall in the north (left) to less rainfall in central Botswana (right) is shown by the change from the greens of denser savanna vegetation to browns of an open "thornscrub" savanna. More subtle distinctions appear: the distributary arms of the delta include tall, permanent riverine and seasonal forest (dark green), with grasses and other savanna vegetation (light green) on floodplains, which appear well watered in this image. Linear dunes, emplaced by constant winds from the east during drier climates, appear as

  14. Earth Observations

    2010-09-09

    ISS024-E-014071 (9 Sept. 2010) --- This striking panoramic view of the southwestern USA and Pacific Ocean is an oblique image photographed by an Expedition 24 crew member looking outwards at an angle from the International Space Station (ISS). While most unmanned orbital satellites view Earth from a nadir perspective, in other words collecting data with a "straight down" viewing geometry, crew members onboard the space station can acquire imagery at a wide range of viewing angles using handheld digital cameras. The ISS nadir point (the point on Earth's surface directly below the spacecraft) was located in northwestern Arizona, approximately 260 kilometers to the east-southeast, when this image was taken. The image includes parts of the States of Arizona, Nevada, Utah, and California together with a small segment of the Baja California, Mexico coastline at center left. Several landmarks and physiographic features are readily visible. The Las Vegas, NV metropolitan area appears as a gray region adjacent to the Spring Mountains and Sheep Range (both covered by white clouds). The Grand Canyon, located on the Colorado Plateau in Arizona, is visible (lower left) to the east of Las Vegas with the blue waters of Lake Mead in between. The image also includes the Mojave Desert, stretching north from the Salton Sea (left) to the Sierra Nevada mountain range. The Sierra Nevada range is roughly 640 kilometers long (north-south) and forms the boundary between the Central Valley of California and the adjacent Basin and Range. The Basin and Range is so called due to the pattern of long linear valleys separated by parallel linear mountain ranges; this landscape, formed by extension and thinning of Earth's crust, is particularly visible at right.

  15. Low-Power Fault Tolerance for Spacecraft FPGA-Based Numerical Computing

    DTIC Science & Technology

    2006-09-01

    Faults, while undesirable, are not necessarily harmful. Our intent is to prevent errors by properly managing faults. This research focuses on developing fault-tolerant

  16. Finding faults with the data

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    Rudolph Giuliani and Hillary Rodham Clinton are crisscrossing upstate New York looking for votes in the U.S. Senate race. Also cutting back and forth across upstate New York are hundreds of faults of a kind characterized by very sporadic seismic activity, according to Robert Jacobi, professor of geology at the University of Buffalo (UB), who conducted research with fellow UB geology professor John Fountain. "We have proof that upstate New York is crisscrossed by faults," Jacobi said. "In the past, the Appalachian Plateau, which stretches from Albany to Buffalo, was considered a pretty boring place structurally without many faults or folds of any significance."

  17. Ancient Earth, Alien Earths Event

    2014-08-20

    Dr. Shawn Domagal-Goldman, Research Space Scientist, NASA Goddard Space Flight Center, speaks on a panel at the “Ancient Earth, Alien Earths” Event at NASA Headquarters in Washington, DC Wednesday, August 20, 2014. The event was sponsored by NASA, the National Science Foundation (NSF), and the Smithsonian Institution and was moderated by Dr. David H. Grinspoon, Senior Scientist at the Planetary Science Institute. Six scientists discussed how research on early Earth could help guide our search for habitable planets orbiting other stars. Photo Credit: (NASA/Aubrey Gemignani)

  18. Ancient Earth, Alien Earths Event

    2014-08-20

    Dr. Phoebe Cohen, Professor of Geosciences, Williams College, speaks on a panel at the “Ancient Earth, Alien Earths” Event at NASA Headquarters in Washington, DC Wednesday, August 20, 2014. The event was sponsored by NASA, the National Science Foundation (NSF), and the Smithsonian Institution and was moderated by Dr. David H. Grinspoon, Senior Scientist at the Planetary Science Institute. Six scientists discussed how research on early Earth could help guide our search for habitable planets orbiting other stars. Photo Credit: (NASA/Aubrey Gemignani)

  19. Ancient Earth, Alien Earths Event

    2014-08-20

    Dr. Christopher House, Professor of Geosciences, Pennsylvania State University, speaks on a panel at the “Ancient Earth, Alien Earths” Event at NASA Headquarters in Washington, DC Wednesday, August 20, 2014. The event was sponsored by NASA, the National Science Foundation (NSF), and the Smithsonian Institution and was moderated by Dr. David H. Grinspoon, Senior Scientist at the Planetary Science Institute. Six scientists discussed how research on early Earth could help guide our search for habitable planets orbiting other stars. Photo Credit: (NASA/Aubrey Gemignani)

  20. Ancient Earth, Alien Earths Event

    2014-08-20

    Dr. Dawn Sumner, Professor of Geology, UC Davis, speaks on a panel at the “Ancient Earth, Alien Earths” Event at NASA Headquarters in Washington, DC Wednesday, August 20, 2014. The event was sponsored by NASA, the National Science Foundation (NSF), and the Smithsonian Institution and was moderated by Dr. David H. Grinspoon, Senior Scientist at the Planetary Science Institute. Six scientists discussed how research on early Earth could help guide our search for habitable planets orbiting other stars. Photo Credit: (NASA/Aubrey Gemignani)

  1. Ancient Earth, Alien Earths Event

    2014-08-20

    Dr. Timothy Lyons, Professor of Biogeochemistry, UC Riverside, speaks on a panel at the “Ancient Earth, Alien Earths” Event at NASA Headquarters in Washington, DC Wednesday, August 20, 2014. The event was sponsored by NASA, the National Science Foundation (NSF), and the Smithsonian Institution and was moderated by Dr. David H. Grinspoon, Senior Scientist at the Planetary Science Institute. Six scientists discussed how research on early Earth could help guide our search for habitable planets orbiting other stars. Photo Credit: (NASA/Aubrey Gemignani)

  2. Soft-Fault Detection Technologies Developed for Electrical Power Systems

    NASA Technical Reports Server (NTRS)

    Button, Robert M.

    2004-01-01

    The NASA Glenn Research Center, partner universities, and defense contractors are working to develop intelligent power management and distribution (PMAD) technologies for future spacecraft and launch vehicles. The goals are to provide higher performance (efficiency, transient response, and stability), higher fault tolerance, and higher reliability through the application of digital control and communication technologies. It is also expected that these technologies will eventually reduce the design, development, manufacturing, and integration costs for large, electrical power systems for space vehicles. The main focus of this research has been to incorporate digital control, communications, and intelligent algorithms into power electronic devices such as direct-current to direct-current (dc-dc) converters and protective switchgear. These technologies, in turn, will enable revolutionary changes in the way electrical power systems are designed, developed, configured, and integrated in aerospace vehicles and satellites. Initial successes in integrating modern, digital controllers have proven that transient response performance can be improved using advanced nonlinear control algorithms. One technology being developed includes the detection of "soft faults," those not typically covered by current systems in use today. Soft faults include arcing faults, corona discharge faults, and undetected leakage currents. Using digital control and advanced signal analysis algorithms, we have shown that it is possible to reliably detect arcing faults in high-voltage dc power distribution systems (see the preceding photograph). Another research effort has shown that low-level leakage faults and cable degradation can be detected by analyzing power system parameters over time. This additional fault detection capability will result in higher reliability for long-lived power systems such as reusable launch vehicles and space exploration missions.
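    As a hedged illustration of the trend-analysis idea mentioned above (ours, not Glenn's algorithms; the threshold and data are invented), low-level leakage growth can be flagged by fitting a slope to recent current samples:

      import numpy as np

      def leakage_trend(currents_mA, window=50, slope_limit=0.02):
          """Flag cable degradation when the fitted slope of leakage current
          over the most recent window exceeds a configured limit (mA/sample)."""
          recent = np.asarray(currents_mA[-window:])
          t = np.arange(len(recent))
          slope = np.polyfit(t, recent, 1)[0]
          return slope > slope_limit, slope

      # Healthy cable: flat noise. Degrading cable: slowly rising leakage.
      rng = np.random.default_rng(1)
      healthy = rng.normal(1.0, 0.05, 200)
      degrading = healthy + np.linspace(0.0, 5.0, 200)
      print(leakage_trend(healthy))     # (False, ~0)
      print(leakage_trend(degrading))   # (True, ~0.025)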

  3. Developing a Hayward Fault Greenbelt in Fremont, California

    NASA Astrophysics Data System (ADS)

    Blueford, J. R.

    2007-12-01

    The Math Science Nucleus, an educational non-profit, in cooperation with the City of Fremont and the U.S. Geological Survey, has concluded that outdoor and indoor exhibits highlighting the Hayward Fault are a spectacular and educational way of illustrating the power of earthquakes. Several projects are emerging that use the Hayward Fault to illustrate to the public and school groups that faults mold the landscape upon which they live. One area that is already developed, Tule Ponds at Tyson Lagoon, is owned by the Alameda County Flood Control and Conservation District and managed by the Math Science Nucleus. This 17-acre site illustrates two traces of the Hayward Fault (active and inactive), whose sediments record over 4,000 years of activity. Another project is selecting an area in Fremont where a permanent trench or outdoor earthquake exhibit can be created so that people can see seismic stratigraphic features of the Hayward Fault. This would be part of a 3-mile Earthquake Greenbelt from Tyson Lagoon to the proposed Irvington BART Station. Informational kiosks or markers and a "yellow brick road" of earthquake facts could allow visitors to take an exciting and educational tour of the Hayward Fault's surface features in Fremont. Visitors would see the effects of fault movement first-hand, and the tours would include preparedness information. As these plans emerge, a permanent indoor exhibit is being developed at the Children's Natural History Museum in Fremont. This exhibit will be a model of the Earthquake Greenbelt. It will also allow people to see a scale model of how the Hayward Fault unearthed the Pleistocene (Irvingtonian) fossil bed and created traps for underground aquifers as well as surface sag ponds.

  4. Earthquake-origin expansion of the Earth inferred from a spherical-Earth elastic dislocation theory

    NASA Astrophysics Data System (ADS)

    Xu, Changyi; Sun, Wenke

    2014-12-01

    In this paper, we propose an approach to compute the coseismic change in the Earth's volume based on a spherical-Earth elastic dislocation theory. We present a general expression of the Earth's volume change for three typical dislocations: shear, tensile and explosion sources. We conduct a case study for the 2004 Sumatra earthquake (Mw9.3), the 2010 Chile earthquake (Mw8.8), the 2011 Tohoku-Oki earthquake (Mw9.0) and the 2013 Okhotsk Sea earthquake (Mw8.3). The results show that mega-thrust earthquakes make the Earth expand, while earthquakes along normal faults make the Earth contract. We compare the volume changes computed for finite fault models and for a point source of the 2011 Tohoku-Oki earthquake (Mw9.0). The large difference between the results indicates that the coseismic changes in the Earth's volume (or mean radius) depend strongly on the earthquake's focal mechanism, especially the depth and the dip angle. We then estimate the cumulative volume change caused by historical earthquakes (Mw ≥ 7.0) since 1960, and obtain an Earth mean-radius expansion rate of about 0.011 mm yr-1.
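    The link between a volume change dV and the quoted mean-radius rate follows from the differential of V = (4/3) pi R^3, i.e. dR = dV / (4 pi R^2); a back-of-the-envelope check in Python (ours, not the paper's code):

      import math

      R = 6.371e6                      # Earth mean radius, m
      def radius_change(dV):           # dV in m^3
          return dV / (4 * math.pi * R**2)

      # The quoted 0.011 mm/yr radius rate corresponds to a volume-change
      # rate of roughly 4*pi*R^2 * 0.011 mm per year.
      dV_per_year = 4 * math.pi * R**2 * 0.011e-3
      print(f"dV ~ {dV_per_year:.2e} m^3/yr")                  # ~5.6e9 m^3/yr
      print(f"dR ~ {radius_change(dV_per_year)*1e3:.3f} mm")   # recovers 0.011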

  5. Fault Model Development for Fault Tolerant VLSI Design

    DTIC Science & Technology

    1988-05-01

    BRIDGING FAULTS: A bridging fault in a digital circuit connects two or more conducting paths of the circuit. The resistance...

  6. Geological modeling of a fault zone in clay rocks at the Mont-Terri laboratory (Switzerland)

    NASA Astrophysics Data System (ADS)

    Kakurina, M.; Guglielmi, Y.; Nussbaum, C.; Valley, B.

    2016-12-01

    Clay-rich formations are considered to be a natural barrier to the migration of radionuclides or fluids (water, hydrocarbons, CO2). However, little is known about the architecture of faults affecting clay formations because of their rapid alteration at the Earth's surface. The Mont Terri Underground Research Laboratory provides exceptional conditions to investigate an un-weathered, perfectly exposed clay fault zone architecture and to conduct fault activation experiments that allow exploring the conditions for the stability of such clay faults. Here we show first results from a detailed geological model of the Mont Terri Main Fault architecture, built in GoCad software from a detailed structural analysis of six fully cored and logged boreholes, 30 to 50 m long and 3 to 15 m apart, crossing the fault zone. These high-definition geological data were acquired within the Fault Slip (FS) experiment project, which consisted of fluid injections into different intervals within the fault, using the SIMFIP probe, to explore the conditions for the fault's mechanical and seismic stability. The Mont Terri Main Fault "core" consists of a thrust zone about 0.8 to 3 m wide that is bounded by two major fault planes. Between these planes, there is an assembly of distinct slickensided surfaces and various facies including scaly clays, fault gouge and fractured zones. Scaly clay, including S-C bands and microfolds, occurs in larger zones at the top and bottom of the Main Fault. A cm-thin layer of gouge, known to accommodate high strain, runs along the upper fault zone boundary. The non-scaly part mainly consists of undeformed rock blocks bounded by slickensides. Such complexity, as well as the continuity of the two major surfaces, is hard to correlate between the different boreholes even with the high density of geological data within the relatively small volume of the experiment. This may show that poor strain localization occurred during faulting, giving some perspective on the potential for

  7. Lessons Learned in the Livingstone 2 on Earth Observing One Flight Experiment

    NASA Technical Reports Server (NTRS)

    Hayden, Sandra C.; Sweet, Adam J.; Shulman, Seth

    2005-01-01

    The Livingstone 2 (L2) model-based diagnosis software is a reusable diagnostic tool for monitoring complex systems. In 2004, L2 was integrated with the JPL Autonomous Sciencecraft Experiment (ASE) and deployed on board Goddard's Earth Observing One (EO-1) remote sensing satellite, to monitor and diagnose the EO-1 space science instruments and imaging sequence. This paper reports on lessons learned from this flight experiment. The goals for this experiment, including validation of minimum success criteria and of a series of diagnostic scenarios, have all been successfully met. Long-term operations in space are on-going, as a test of the maturity of the system, with L2 performance remaining flawless. L2 has demonstrated the ability to track the state of the system during nominal operations, detect simulated abnormalities in operations, and isolate failures to their root-cause fault. Specific advances demonstrated include diagnosis of ambiguity groups rather than a single fault candidate; hypothesis revision given new sensor evidence about the state of the system; and the capability to check for faults in a dynamic system without having to wait until the system is quiescent. The major benefits of this advanced health management technology are increased mission duration and reliability through intelligent fault protection, and robust autonomous operations with reduced dependency on supervisory operations from Earth. The workload for operators will be reduced by telemetry of processed state-of-health information rather than raw data. The long-term vision is that of making diagnosis available to the onboard planner or executive, allowing autonomy software to re-plan in order to work around known component failures. For a system that is expected to evolve substantially over its lifetime, as for the International Space Station, the model-based approach has definite advantages over rule-based expert systems and limit-checking fault protection systems, as these do not

  8. Oceanic transform faults: how and why do they form? (Invited)

    NASA Astrophysics Data System (ADS)

    Gerya, T.

    2013-12-01

    transform faults. Offsets along the transform faults change continuously with time by asymmetric plate growth and discontinuously by ridge jumps. The ridge instability is governed by rheological weakening of active fault structures. The instability is most efficient for slow to intermediate spreading rates, whereas ultraslow and (ultra)fast spreading rates tend to destabilize transform faults (Gerya, 2010; Püthe and Gerya, 2013) References Gerya, T. (2010) Dynamical instability produces transform faults at mid-ocean ridges. Science, 329, 1047-1050. Gerya, T. (2012) Origin and models of oceanic transform faults. Tectonophys., 522-523, 34-56 Gerya, T.V. (2013a) Three-dimensional thermomechanical modeling of oceanic spreading initiation and evolution. Phys. Earth Planet. Interiors, 214, 35-52. Gerya, T.V. (2013b) Initiation of transform faults at rifted continental margins: 3D petrological-thermomechanical modeling and comparison to the Woodlark Basin. Petrology, 21, 1-10. Püthe, C., Gerya, T.V. (2013) Dependence of mid-ocean ridge morphology on spreading rate in numerical 3-D models. Gondwana Res., DOI: http://dx.doi.org/10.1016/j.gr.2013.04.005 Taylor, B., Goodliffe, A., Martinez, F. (2009) Initiation of transform faults at rifted continental margins. Comptes Rendus Geosci., 341, 428-438.

  9. Strain-dependent Damage Evolution and Velocity Reduction in Fault Zones Induced by Earthquake Rupture

    NASA Astrophysics Data System (ADS)

    Zhong, J.; Duan, B.

    2009-12-01

    Low-velocity fault zones (LVFZs) with reduced seismic velocities relative to the surrounding wall rocks are widely observed around active faults. The presence of such a zone will affect rupture propagation, near-field ground motion, and off-fault damage in subsequent earthquakes. In this study, we quantify the reduction of seismic velocities caused by dynamic rupture on a 2D planar fault surrounded by a low-velocity fault zone. First, we implement the damage rheology (Lyakhovsky et al. 1997) in EQdyna (Duan and Oglesby 2006), an explicit dynamic finite element code. We further extend this damage rheology model to include the dependence of strains on crack density. Then, we quantify the off-fault continuum damage distribution and velocity reduction induced by earthquake rupture in the presence of a preexisting LVFZ. We find that the presence of a LVFZ affects the tempo-spatial distributions of off-fault damage. Because some damage parameters lack constraints, we further investigate the relationship between velocity reduction and these damage parameters with a large suite of numerical simulations. Slip velocity, slip, and near-field ground motions computed from damage rheology are also compared with those from off-fault elastic or elastoplastic responses. We find that the reduction in elastic moduli during dynamic rupture has a profound impact on these quantities.

  10. The Design of a Fault-Tolerant COTS-Based Bus Architecture for Space Applications

    NASA Technical Reports Server (NTRS)

    Chau, Savio N.; Alkalai, Leon; Tai, Ann T.

    2000-01-01

    The high-performance, scalability and miniaturization requirements together with the power, mass and cost constraints mandate the use of commercial-off-the-shelf (COTS) components and standards in the X2000 avionics system architecture for deep-space missions. In this paper, we report our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. While the COTS standard IEEE 1394 adequately supports power management, high performance and scalability, its topological criteria impose restrictions on fault tolerance realization. To circumvent the difficulties, we derive a "stack-tree" topology that not only complies with the IEEE 1394 standard but also facilitates fault tolerance realization in a spaceborne system with limited dedicated resource redundancies. Moreover, by exploiting pertinent standard features of the 1394 interface which are not purposely designed for fault tolerance, we devise a comprehensive set of fault detection mechanisms to support the fault-tolerant bus architecture.

  11. ISHM-oriented adaptive fault diagnostics for avionics based on a distributed intelligent agent system

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Zhong, Zhengqiang; Xu, Lei

    2015-10-01

    In this paper, an integrated system health management (ISHM)-oriented adaptive fault diagnostic system and model for avionics is proposed. With avionics becoming increasingly complicated, precise and comprehensive avionics fault diagnosis has become an extremely complex task. In the proposed fault diagnostic system, specific approaches, such as the artificial immune system, the intelligent agent system and the Dempster-Shafer evidence theory, are used to conduct deep avionics fault diagnosis. Through this proposed fault diagnostic system, efficient and accurate diagnosis can be achieved. A numerical example is conducted to apply the proposed hybrid diagnostics to a set of radar transmitters on an avionics system and to illustrate that the proposed system and model have the ability to achieve efficient and accurate fault diagnosis. By analyzing the diagnostic system's feasibility and practicality, the advantages of this system are demonstrated.
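    One ingredient named above, Dempster-Shafer evidence combination, is easy to sketch; the fault hypotheses and mass values below are invented for illustration (ours, not the paper's numbers):

      from itertools import product

      def combine(m1, m2):
          """Dempster's rule: combine two mass functions over frozenset focal
          elements, renormalizing away the conflicting mass."""
          combined, conflict = {}, 0.0
          for (a, wa), (b, wb) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + wa * wb
              else:
                  conflict += wa * wb
          norm = 1.0 - conflict
          return {k: v / norm for k, v in combined.items()}

      F1, F2 = frozenset({"f1"}), frozenset({"f2"})
      BOTH = F1 | F2
      # Two diagnostic agents report evidence about a radar transmitter.
      agent_a = {F1: 0.6, BOTH: 0.4}
      agent_b = {F1: 0.5, F2: 0.3, BOTH: 0.2}
      print(combine(agent_a, agent_b))   # belief concentrates on f1 (~0.76)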

  12. "Handling" seismic hazard: 3D printing of California Faults

    NASA Astrophysics Data System (ADS)

    Kyriakopoulos, C.; Potter, M.; Richards-Dinger, K. B.

    2017-12-01

    As earth scientists, we face the challenge of how to explain and represent our work and achievements to the general public. Nowadays, this problem is partially alleviated by the use of modern visualization tools such as advanced scientific software (Paraview.org), high-resolution monitors, elaborate video simulations, and even 3D virtual reality goggles. However, the ability to manipulate and examine a physical object in 3D is still an important tool for connecting with the public. For that reason, we are presenting a scaled 3D-printed version of the complex network of earthquake faults active in California, based on that used by the Uniform California Earthquake Rupture Forecast 3 (UCERF3) (Field et al., 2013). We start from the fault geometry in the UCERF3.1 deformation model files. These files contain information such as the coordinates of the surface traces of the faults, dip angle, and depth extent. The faults specified in the above files are triangulated at 1 km resolution and exported as a facet (.fac) file. The facet file is later imported into the Trelis 15.1 mesh generator (csimsoft.com). We use Trelis to perform the following three operations. First, we scale down the model so that 100 mm corresponds to 100 km. Second, we "thicken" the walls of the faults; a wall thickness of at least 1 mm is necessary in 3D printing. We thicken the fault geometry by 1 mm on each side of the faults for a total of 2 mm thickness. Third, we break down the model into parts that will fit the printing bed size (approximately 25 x 20 cm). Finally, each part is exported in stereolithography format (.stl). For our project, we are using the 3D printing facility within the Creat'R Lab in the UC Riverside Orbach Science Library. The 3D printer is a MakerBot Replicator Desktop, 5th Generation. The print resolution is 0.2 mm (standard quality). The printing material is MakerBot PLA filament, 1.75 mm diameter, large spool, green. The most complex part of the display model requires approximately 17
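    The scaling and export step can also be sketched outside Trelis; a minimal example (ours, assuming the numpy-stl package and a hypothetical input mesh with coordinates in metres):

      from stl import mesh

      fault = mesh.Mesh.from_file("ucerf3_fault.stl")  # hypothetical input
      SCALE = 1e-3   # metres -> printer millimetres, i.e. 100 km -> 100 mm
      fault.vectors *= SCALE

      # Re-centre on the origin so the printed tile sits on the build plate.
      fault.vectors -= fault.vectors.mean(axis=(0, 1))
      fault.save("ucerf3_fault_scaled.stl")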

  13. 20 CFR 410.561b - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Fault. 410.561b Section 410.561b Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, TITLE IV-BLACK LUNG BENEFITS (1969- ) Payment of Benefits § 410.561b Fault. Fault as used in without fault (see § 410...

  14. Expert System Detects Power-Distribution Faults

    NASA Technical Reports Server (NTRS)

    Walters, Jerry L.; Quinn, Todd M.

    1994-01-01

    Autonomous Power Expert (APEX) computer program is prototype expert-system program detecting faults in electrical-power-distribution system. Assists human operators in diagnosing faults and deciding what adjustments or repairs needed for immediate recovery from faults or for maintenance to correct initially nonthreatening conditions that could develop into faults. Written in Lisp.

  15. 20 CFR 410.561b - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Fault. 410.561b Section 410.561b Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, TITLE IV-BLACK LUNG BENEFITS (1969- ) Payment of Benefits § 410.561b Fault. Fault as used in without fault (see § 410...

  16. Fault Tolerance Middleware for a Multi-Core System

    NASA Technical Reports Server (NTRS)

    Some, Raphael R.; Springer, Paul L.; Zima, Hans P.; James, Mark; Wagner, David A.

    2012-01-01

    Fault Tolerance Middleware (FTM) provides a framework to run on a dedicated core of a multi-core system and handles detection of single-event upsets (SEUs), and the responses to those SEUs, occurring in an application running on multiple cores of the processor. This software was written expressly for a multi-core system and can support different kinds of fault strategies, such as introspection, algorithm-based fault tolerance (ABFT), and triple modular redundancy (TMR). It focuses on providing fault tolerance for the application code, and represents the first step in a plan to eventually include fault tolerance in message passing and the FTM itself. In the multi-core system, the FTM resides on a single, dedicated core, separate from the cores used by the application. This is done in order to isolate the FTM from application faults and to allow it to swap out any application core for a substitute. The structure of the FTM consists of an interface to a fault tolerant strategy module, a responder module, a fault manager module, an error factory, and an error mapper that determines the severity of the error. In the present reference implementation, the only fault tolerant strategy implemented is introspection. The introspection code waits for an application node to send an error notification to it. It then uses the error factory to create an error object, and at this time, a severity level is assigned to the error. The introspection code uses its built-in knowledge base to generate a recommended response to the error. Responses might include ignoring the error, logging it, rolling back the application to a previously saved checkpoint, swapping in a new node to replace a bad one, or restarting the application. The original error and recommended response are passed to the top-level fault manager module, which invokes the response. The responder module also notifies the introspection module of the generated response. This provides additional information to the
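    A structural sketch of the response flow described above (ours, not the flight code; the error kinds, severity levels, and responses are invented):

      from dataclasses import dataclass

      @dataclass
      class Error:
          source_core: int
          kind: str
          severity: str

      SEVERITY = {"seu_corrected": "low", "seu_uncorrected": "high"}
      RESPONSES = {"low": "log", "high": "rollback_to_checkpoint"}

      def error_factory(source_core, kind):
          # The error factory assigns a severity at creation time.
          return Error(source_core, kind, SEVERITY.get(kind, "high"))

      def introspection(notification):
          # Introspection waits for a notification, builds the error object,
          # and recommends a response from its knowledge base.
          err = error_factory(*notification)
          return err, RESPONSES[err.severity]

      def fault_manager(notification):
          # The top-level fault manager invokes the recommended response.
          err, response = introspection(notification)
          print(f"core {err.source_core}: {err.kind} -> {response}")
          return response

      fault_manager((3, "seu_uncorrected"))   # rollback_to_checkpoint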

  17. Cell boundary fault detection system

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN

    2009-05-05

    A method determines a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.

  18. The fault-tree compiler

    NASA Technical Reports Server (NTRS)

    Martensen, Anna L.; Butler, Ricky W.

    1987-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise to five digits (within the limits of double-precision floating-point arithmetic). The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
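    Of the gate types listed, only M OF N needs more than a one-line formula; a small sketch (ours, assuming independent inputs) enumerates the cases:

      from itertools import combinations

      def p_m_of_n(m, probs):
          """P(at least m of the given independent events occur)."""
          n = len(probs)
          total = 0.0
          for k in range(m, n + 1):
              for idx in combinations(range(n), k):
                  p = 1.0
                  for i in range(n):
                      p *= probs[i] if i in idx else (1.0 - probs[i])
                  total += p
          return total

      # 2-of-3 gate with equal inputs: 3 * 0.1^2 * 0.9 + 0.1^3 = 0.028
      print(p_m_of_n(2, [0.1, 0.1, 0.1]))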

  19. Differential Fault Analysis on CLEFIA

    NASA Astrophysics Data System (ADS)

    Chen, Hua; Wu, Wenling; Feng, Dengguo

    CLEFIA is a new 128-bit block cipher recently proposed by the Sony Corporation. The fundamental structure of CLEFIA is a generalized Feistel structure consisting of 4 data lines. In this paper, the strength of CLEFIA against the differential fault attack is explored. Our attack adopts the byte-oriented model of random faults. By randomly inducing a one-byte fault in one round, four byte faults can be obtained simultaneously in the next round, which efficiently reduces the total number of fault inductions required in the attack. After attacking the encryptions of the last several rounds, the original secret key can be recovered based on some analysis of the key schedule. The data complexity analysis and experiments show that only about 18 faulty ciphertexts are needed to recover the entire 128-bit secret key, and about 54 faulty ciphertexts for 192/256-bit keys.

  20. Faulted Layers in Collapse Pits

    2016-04-06

    This image shows a set of coalesced collapse pits in western Valles Marineris as seen by NASA Mars Reconnaissance Orbiter. Fine layers are exposed in the walls of the pits, and in some places those layers are displaced by faults.

  1. Surface faulting. A preliminary view

    Sharp, R.V.

    1989-01-01

    This description of surface faulting near Spitak, Armenia, is based on a field inspection made December 22-26, 1988. The surface rupture west of Spitak, displacement of the ground surface, pre-earthquake surface expressions of the fault, and photolineaments in landsat images are described and surface faulting is compared to aftershocks. It is concluded that the 2 meters of maximum surface displacement fits well within the range of reliably measured maximum surface offsets for historic reverse and oblique-reverse faulting events throughout the world. By contrast, the presently known length of surface rupture near Spitak, between 8 and 13 km, is shorter than any other reverse or oblique-reverse event of magnitude greater than 6.0. This may be a reason to suppose that additional surface rupture might remain unmapped.

  2. Conditions of Fissuring in a Pumped-Faulted Aquifer System

    NASA Astrophysics Data System (ADS)

    Hernandez-Marin, M.; Burbey, T. J.

    2007-12-01

    Earth fissuring associated with subsidence from groundwater pumping is problematic in many arid-zone, heavily pumped basins such as Las Vegas Valley. Long-term pumping at rates considerably greater than the natural recharge rate has stressed the heterogeneous aquifer system, resulting in a complex stress-strain regime. A rigorous artificial recharge program coupled with increased surface-water importation has allowed water levels to recover appreciably, which has led to surface rebound in some localities. Nonetheless, new fissures continue to appear, particularly near basin-fill faults that behave as barriers to subsidence bowls. The purpose of this research is to develop a series of computational models to better understand the influence that structure (faults), pumping, and hydrostratigraphy have on the generation and propagation of fissures. The hydrostratigraphy of Las Vegas Valley consists of aquifers, aquitards and a relatively dry vadose zone that may be as thick as 100 m in much of the valley. Quaternary faults are typically depicted as scarps resulting from pre-pumping extensional tectonic events and are probably not responsible for the observed strain. The models developed to simulate the stress-strain and deformation processes in the faulted, pumped aquifer-aquitard system of Las Vegas use the ABAQUS CAE (Complete ABAQUS Environment) software system. ABAQUS is a sophisticated engineering-industry finite-element modeling package capable of simulating the complex fault-fissure system described here. A brittle failure criterion based on the tensile strength of the materials and the acting stresses (from previous models) is being used to understand how and where fissures are likely to form. Hypothetical simulations include the role that faults and the vadose zone may play in fissure formation

  3. Probing Earth's State of Stress

    NASA Astrophysics Data System (ADS)

    Delorey, A. A.; Maceira, M.; Johnson, P. A.; Coblentz, D. D.

    2016-12-01

    The state of stress in the Earth's crust is a fundamental physical property that controls both engineered and natural systems. Engineered environments, including those for hydrocarbon, geothermal energy, and mineral extraction, as well as those for storage of wastewater, carbon dioxide, and nuclear fuel, are as important as ever to our economy and environment. Yet it is at the spatial scales relevant to these activities that stress is least understood. Additionally, in engineered environments the rate of change in the stress field can be much higher than that of natural systems. In order to use subsurface resources more safely and effectively, we need to understand stress at the relevant temporal and spatial scales. We will present our latest results characterizing the state of stress in the Earth at scales relevant to engineered environments. Two important components of the state of stress are the orientation and magnitude of the stress tensor, and a measure of how close faults are to failure. The stress tensor at any point in a reservoir or repository has contributions from both far-field tectonic stress and local density heterogeneity. We jointly invert seismic (body and surface wave) and gravity data for a self-consistent model of elastic moduli and density, and use the model to calculate the contribution of local heterogeneity to the total stress field. We then combine local and plate-scale contributions, using local indicators for calibration and ground truth. In addition, we will present results from an analysis of the quantity and pattern of microseismicity as an indicator of critically stressed faults. Faults are triggered by transient stresses only when critically stressed (near failure). We show that tidal stresses can trigger earthquakes in both tectonic and reservoir environments and can reveal both stress and poroelastic conditions.
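
    A common way to quantify "how close faults are to failure", as the abstract puts it, is the Coulomb failure stress on a fault plane. The sketch below uses illustrative numbers; the study's actual calculation combines inverted local stresses with plate-scale contributions.

        def coulomb_stress(tau, sigma_n, pore_pressure, mu=0.6):
            """CFS = shear stress - friction * effective normal stress.
            A fault with CFS near zero is critically stressed, so small
            transients such as tidal stresses can trigger slip."""
            return tau - mu * (sigma_n - pore_pressure)

        # 30 MPa shear stress, 55 MPa normal stress, 5 MPa pore pressure:
        print(coulomb_stress(30e6, 55e6, 5e6))  # 0.0 -> critically stressed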

  4. Earth Observation

    2014-07-19

    ISS040-E-070439 (19 July 2014) --- One of the Expedition 40 crew members aboard the Earth-orbiting International Space Station recorded this July 19 image of wildfires which are plaguing the Northwest and causing widespread destruction. The orbital outpost was flying 223 nautical miles above a point on Earth located at 48.0 degrees north latitude and 116.9 degrees west longitude when the image was exposed. The state of Washington is especially affected by the fires, many of which have been blamed on lightning. This particular fire was part of the Carlton Complex Fire, located near the city of Brewster in north central Washington. The reservoir visible near the center of the image is Banks Lake.

  5. Managing Earth's Future: Global Self-Restraint for the Common Good or Domination by Incentive and Power?

    NASA Astrophysics Data System (ADS)

    Anbar, A. D.; Hartnett, H. E.; Rowan, L. R.; Caldeira, K.

    2016-12-01

    , planetary management will be largely reactive, driven by competition among those with incentive and power. With better policies in place, we can look forward to a future of continuous innovation and ever-improving well-being, with stable populations and diminishing environmental impact.

  6. Fault-tolerant rotary actuator

    DOEpatents

    Tesar, Delbert

    2006-10-17

    A fault-tolerant actuator module, in a single containment shell, containing two actuator subsystems that are either asymmetrically or symmetrically laid out is provided. Fault tolerance in the actuators of the present invention is achieved by the employment of dual sets of equal resources. Dual resources are integrated into single modules, with each having the external appearance and functionality of a single set of resources.

  7. Seismic fault zone trapped noise

    NASA Astrophysics Data System (ADS)

    Hillers, G.; Campillo, M.; Ben-Zion, Y.; Roux, P.

    2014-07-01

    Systematic velocity contrasts across and within fault zones can lead to head and trapped waves that provide direct information on structural units that are important for many aspects of earthquake and fault mechanics. Here we construct trapped waves from the scattered seismic wavefield recorded by a fault zone array. The frequency-dependent interaction between the ambient wavefield and the fault zone environment is studied using properties of the noise correlation field. A critical frequency fc ≈ 0.5 Hz defines a threshold above which the in-fault scattered wavefield has increased isotropy and coherency compared to the ambient noise. The increased randomization of in-fault propagation directions produces a wavefield that is trapped in a waveguide/cavity-like structure associated with the low-velocity damage zone. Dense spatial sampling allows the resolution of a near-field focal spot, which emerges from the superposition of a collapsing, time reversed wavefront. The shape of the focal spot depends on local medium properties, and a focal spot-based fault normal distribution of wave speeds indicates a ˜50% velocity reduction consistent with estimates from a far-field travel time inversion. The arrival time pattern of a synthetic correlation field can be tuned to match properties of an observed pattern, providing a noise-based imaging tool that can complement analyses of trapped ballistic waves. The results can have wide applicability for investigating the internal properties of fault damage zones, because mechanisms controlling the emergence of trapped noise have less limitations compared to trapped ballistic waves.

  8. Fault Tree Analysis: A Bibliography

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Fault tree analysis is a top-down approach to the identification of process hazards. It is one of the best methods for systematically identifying and graphically displaying the many ways something can go wrong. This bibliography references 266 documents in the NASA STI Database that contain the major concepts, fault tree analysis and risk and probability theory, in the basic index or major subject terms. An abstract is included with most citations, followed by the applicable subject terms.
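
    For readers new to the topic, the quantitative core of fault tree analysis is small: basic-event probabilities are combined upward through AND/OR gates to the top event. A minimal sketch, assuming independent basic events with made-up probabilities:

        def or_gate(*p):
            # P(at least one event occurs), independent events.
            q = 1.0
            for pi in p:
                q *= 1.0 - pi
            return 1.0 - q

        def and_gate(*p):
            # P(all events occur), independent events.
            r = 1.0
            for pi in p:
                r *= pi
            return r

        # TOP = (pump A fails AND pump B fails) OR control power lost
        p_top = or_gate(and_gate(1e-2, 1e-2), 1e-4)
        print(f"P(top event) = {p_top:.3e}")  # ~2.0e-4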

  9. Hardware Fault Simulator for Microprocessors

    NASA Technical Reports Server (NTRS)

    Hess, L. M.; Timoc, C. C.

    1983-01-01

    A breadboarded circuit is faster and more thorough than a software simulator. An elementary fault simulator for an AND gate uses three gates and a shift register to simulate stuck-at-one or stuck-at-zero conditions at the inputs and output. Experimental results showed that the hardware fault simulator for a microprocessor produced results two orders of magnitude faster than a software simulator, with one test applied every 4 microseconds.
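
    A software analogue of the elementary stuck-at simulator described above can be written in a few lines; this illustrates the fault model only, not the NASA breadboard circuit:

        FAULT_FREE, STUCK_AT_0, STUCK_AT_1 = range(3)

        def faulty_line(value, fault):
            # A stuck-at fault forces a signal line to a constant.
            if fault == STUCK_AT_0:
                return 0
            if fault == STUCK_AT_1:
                return 1
            return value

        def and_gate(a, b, fa=FAULT_FREE, fb=FAULT_FREE, fy=FAULT_FREE):
            y = faulty_line(a, fa) & faulty_line(b, fb)
            return faulty_line(y, fy)

        # A test vector detects a fault when the faulty output differs
        # from the fault-free output; (1, 1) detects output stuck-at-0:
        print(and_gate(1, 1), and_gate(1, 1, fy=STUCK_AT_0))  # 1 0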

  10. Earth Observation

    2014-07-25

    ISS040-E-081008 (25 July 2014) --- One of the Expedition 40 crew members aboard the International Space Station, flying 225 nautical miles above Earth, photographed this image of the Tifernine dunes and the Tassili Najjer Mountains in Algeria. The area is about 800 miles south, southeast of Algiers, the capital of Algeria. The dunes are in excess of 1,000 feet in height.

  11. Earth Observation

    2014-07-15

    ISS040-E-063578 (15 July 2014) --- One of the Expedition 40 crew members aboard the Earth-orbiting International Space Station, flying some 225 nautical miles above the Caribbean Sea in the early morning hours of July 15, photographed this north-looking panorama that includes parts of Cuba, the Bahamas and Florida, and even runs into several other areas in the southeastern U.S. The long stretch of lights to the left of center frame gives the shape of Miami.

  12. Earth Science

    1991-01-01

    In July 1990, the Marshall Space Flight Center, in a joint project with the Department of Defense/Air Force Space Test Program, launched the Combined Release and Radiation Effects Satellite (CRRES) using an Atlas I launch vehicle. The mission was designed to study the effects of artificial ion clouds produced by chemical releases on the Earth's ionosphere and magnetosphere, and to monitor the effects of space radiation environment on sophisticated electronics.

  13. Earth Observation

    2011-06-27

    ISS028-E-009979 (27 June 2011) --- The Massachusetts coastline is featured in this image photographed by an Expedition 28 crew member on the International Space Station. The Crew Earth Observations team at NASA Johnson Space Center sends specific ground targets for photography up to the station crew on a daily basis, but sometimes the crew takes imagery on their own of striking displays visible from orbit. One such display, often visible to the ISS crew due to their ability to look outwards at angles between 0 and 90 degrees, is sunglint on the waters of Earth. Sunglint is caused by sunlight reflecting off of a water surface, much as light reflects from a mirror, directly towards the observer. Roughness variations of the water surface scatter the light, blurring the reflection and producing the typical silvery sheen of the sunglint area. The point of maximum sunglint is centered within Cape Cod Bay, the body of water partially enclosed by the "hook" of Cape Cod in Massachusetts (bottom). Cape Cod was formally designated a National Seashore in 1966. Sunglint off the water provides sharp contrast with the coastline and the nearby islands of Martha's Vineyard and Nantucket (lower left), both popular destinations for tourists and summer residents. To the north, rocky Cape Ann extends out into the Atlantic Ocean; the border with New Hampshire is located approximately 30 kilometers up the coast. Further to the west, the eastern half of Long Island, New York is visible emerging from extensive cloud cover over the mid-Atlantic and Midwestern States. Persistent storm tracks had been contributing to record flooding along rivers in the Midwest at the time this image was taken in late June 2011. Thin blue layers of the atmosphere, contrasted against the darkness of space, are visible extending along the Earth's curvature at top.

  14. Cloudy Earth

    2015-05-08

    Decades of satellite observations and astronaut photographs show that clouds dominate space-based views of Earth. One study based on nearly a decade of satellite data estimated that about 67 percent of Earth’s surface is typically covered by clouds. This is especially the case over the oceans, where other research shows less than 10 percent of the sky is completely clear of clouds at any one time. Over land, 30 percent of skies are completely cloud free. Earth’s cloudy nature is unmistakable in this global cloud fraction map, based on data collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite. While MODIS collects enough data to make a new global map of cloudiness every day, this version of the map shows an average of all of the satellite’s cloud observations between July 2002 and April 2015. Colors range from dark blue (no clouds) to light blue (some clouds) to white (frequent clouds). Read more here: 1.usa.gov/1P6lbMU Credit: NASA Earth Observatory

  15. Earth Observation

    2011-08-02

    ISS028-E-020276 (2 Aug. 2011) --- This photograph of polar mesospheric clouds was acquired at an altitude of just over 202 nautical miles (about 322 kilometers) in the evening hours (03:19:54 Greenwich Mean Time) on Aug. 2, 2011, as the International Space Station was passing over the English Channel. The nadir coordinates of the station were 49.1 degrees north latitude and 5.5 degrees west longitude. Polar mesospheric clouds (also known as noctilucent, or "night-shining", clouds) are transient, upper atmospheric phenomena that are usually observed in the summer months at high latitudes (greater than 50 degrees) of both the Northern and Southern Hemispheres. They appear bright and cloudlike while in deep twilight. They are illuminated by sunlight when the lower layers of the atmosphere are in the darkness of Earth's shadow. The horizon of Earth appears at the bottom of the image, with some layers of the lower atmosphere already illuminated by the rising sun. The higher, bluish-colored clouds look much like wispy cirrus clouds, which can be found as high as 60,000 feet (18 kilometers) in the atmosphere. However, noctilucent clouds, as seen here, are observed in the mesosphere at altitudes of 250,000 to 280,000 feet (about 76 to 85 kilometers). Astronaut observations of polar mesospheric clouds over northern Europe in the summer are not uncommon.

  16. Applying Earth Observation Data to agriculture risk management: a public-private collaboration to develop drought maps in North-East China

    NASA Astrophysics Data System (ADS)

    Surminski, S.; Holt Andersen, B.; Hohl, R.; Andersen, S.

    2012-04-01

    Earth Observation Data (EO) can improve climate risk assessment, particularly in developing countries where densities of weather stations are low. Access to data that reflects exposure to weather and climate risks is a key condition for any successful risk management approach. This is of particular importance in the context of agriculture and drought risk, where historical data sets, accurate current data about crop growth and weather conditions, as well as information about potential future changes based on climate projections and socio-economic factors are all relevant, but often not available to stakeholders. Efforts to overcome these challenges in using EO data have so far been predominantly focused on developed countries, where satellite-derived Normalized Difference Vegetation Indexes (NDVI) and the MERIS Global Vegetation Indexes (MGVI) are already used within the agricultural sector for assessing and managing crop risks and to parameterize crop yields. This paper assesses how public-private collaboration can foster the application of these data techniques. The findings are based on a pilot project in North-East China, where severe droughts frequently impact the country's largest corn and soybean areas. With support from the European Space Agency (ESA), a consortium of meteorological experts, mapping firms and (re)insurance experts has worked to explore the potential use and value of EO data for managing crop risk and assessing exposure to drought for four provinces in North-East China (Heilongjiang, Jilin, Inner Mongolia and Liaoning). Combining NDVI and MGVI data with meteorological observations, to help alleviate shortcomings of NDVI specific to crop types and region, has resulted in the development of new drought maps for the period 2000-2011 in digital format at a high resolution (1x1 km). The observed benefits of this data application range from improved risk management to cost-effective drought monitoring and claims verification for insurance purposes.
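
    The NDVI referred to above is a simple band ratio, so its computation is straightforward; the reflectance values below are illustrative:

        def ndvi(nir: float, red: float) -> float:
            # Normalized Difference Vegetation Index from near-infrared
            # and red reflectances.
            return (nir - red) / (nir + red)

        print(f"healthy crop:     {ndvi(0.50, 0.08):+.2f}")  # ~ +0.72
        print(f"drought-stressed: {ndvi(0.30, 0.15):+.2f}")  # ~ +0.33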

  17. Aeromagnetic anomalies over faulted strata

    Grauch, V.J.S.; Hudson, Mark R.

    2011-01-01

    High-resolution aeromagnetic surveys are now an industry standard and they commonly detect anomalies that are attributed to faults within sedimentary basins. However, detailed studies identifying geologic sources of magnetic anomalies in sedimentary environments are rare in the literature. Opportunities to study these sources have come from well-exposed sedimentary basins of the Rio Grande rift in New Mexico and Colorado. High-resolution aeromagnetic data from these areas reveal numerous, curvilinear, low-amplitude (2–15 nT at 100-m terrain clearance) anomalies that consistently correspond to intrasedimentary normal faults (Figure 1). Detailed geophysical and rock-property studies provide evidence for the magnetic sources at several exposures of these faults in the central Rio Grande rift (summarized in Grauch and Hudson, 2007, and Hudson et al., 2008). A key result is that the aeromagnetic anomalies arise from the juxtaposition of magnetically differing strata at the faults as opposed to chemical processes acting at the fault zone. The studies also provide (1) guidelines for understanding and estimating the geophysical parameters controlling aeromagnetic anomalies at faulted strata (Grauch and Hudson), and (2) observations on key geologic factors that are favorable for developing similar sedimentary sources of aeromagnetic anomalies elsewhere (Hudson et al.).

  18. Normal fault earthquakes or graviquakes

    PubMed Central

    Doglioni, C.; Carminati, E.; Petricca, P.; Riguzzi, F.

    2015-01-01

    Earthquakes dissipate energy through elastic waves. Canonically, this is the elastic energy accumulated during the interseismic period. However, in crustal extensional settings, gravity is the main energy source for hangingwall collapse along the fault. The gravitational potential energy involved is about 100 times larger than the energy implied by the observed magnitude, far more than enough to explain the earthquake. Therefore, normal faults have a different mechanism of energy accumulation and dissipation (graviquakes) with respect to other tectonic settings (strike-slip and contractional), where elastic energy allows motion even against gravity. The bigger the involved volume, the larger the magnitude. The steeper the normal fault, the larger the vertical displacement and the larger the seismic energy released. Normal faults activate preferentially at dips of about 60° but can be shallower in low-friction rocks. In rocks with low static friction, the fault may partly creep, dissipating gravitational energy without releasing a great amount of seismic energy. The maximum volume involved in graviquakes is smaller than in the other tectonic settings, since the activated fault length is at most about three times the hypocentre depth; this explains their higher b-value and the lower magnitude of the largest recorded events. Having a different phenomenology, graviquakes show peculiar precursors. PMID:26169163

  19. Passive fault current limiting device

    DOEpatents

    Evans, D.J.; Cha, Y.S.

    1999-04-06

    A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit in a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel connected coils of copper, a high temperature superconductor or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux densities produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils which results in an increase in the impedance in the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. The major voltage during a fault condition is in the coils wound on the common core in a preferred embodiment. 6 figs.
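
    The flux-cancellation principle of the patent can be illustrated with the standard formula for two identical, magnetically opposed coils connected in parallel; the component values here are illustrative, not from the patent:

        def parallel_opposed_inductance(L, k):
            # Effective inductance of two identical parallel coils with
            # opposing mutual coupling M = k*L: L_eff = L * (1 - k) / 2.
            return L * (1.0 - k) / 2.0

        L, k = 0.10, 0.98      # 100 mH coils, tightly coupled common core
        normal = parallel_opposed_inductance(L, k)  # fluxes nearly cancel
        faulted = L            # fault element removes one coil: full L left
        print(f"normal: {normal * 1e3:.1f} mH, fault: {faulted * 1e3:.1f} mH")
        # normal: 1.0 mH, fault: 100.0 mH -> impedance rises ~100x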

  20. Passive fault current limiting device

    DOEpatents

    Evans, Daniel J.; Cha, Yung S.

    1999-01-01

    A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit in a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel connected coils of copper, a high temperature superconductor or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux densities produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils which results in an increase in the impedance in the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. The major voltage during a fault condition is in the coils wound on the common core in a preferred embodiment.

  1. Software Fault Tolerance: A Tutorial

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2000-01-01

    Because of our present inability to produce error-free software, software fault tolerance is and will continue to be an important consideration in software systems. The root cause of software design errors is the complexity of the systems. Compounding the problems in building correct software is the difficulty in assessing the correctness of software for highly complex systems. After a brief overview of the software development processes, we note how hard-to-detect design faults are likely to be introduced during development and how software faults tend to be state-dependent and activated by particular input sequences. Although component reliability is an important quality measure for system level analysis, software reliability is hard to characterize and the use of post-verification reliability estimates remains a controversial issue. For some applications software safety is more important than reliability, and fault tolerance techniques used in those applications are aimed at preventing catastrophes. Single version software fault tolerance techniques discussed include system structuring and closure, atomic actions, inline fault detection, exception handling, and others. Multiversion techniques are based on the assumption that software built differently should fail differently and thus, if one of the redundant versions fails, it is expected that at least one of the other versions will provide an acceptable output. Recovery blocks, N-version programming, and other multiversion techniques are reviewed.
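
    Of the techniques surveyed, the recovery block is perhaps the easiest to show compactly. A minimal sketch (the routines and acceptance test are illustrative):

        def recovery_block(inputs, alternates, acceptance_test):
            # Try the primary routine first, then independently designed
            # alternates; each attempt gets a fresh copy of the inputs,
            # a cheap stand-in for checkpoint-and-restore.
            for attempt in alternates:
                result = attempt(list(inputs))
                if acceptance_test(result):
                    return result
            raise RuntimeError("all alternates failed the acceptance test")

        primary = lambda xs: xs           # stand-in for a faulty fast sort
        alternate = lambda xs: sorted(xs)
        is_sorted = lambda xs: all(a <= b for a, b in zip(xs, xs[1:]))
        print(recovery_block([3, 1, 2], [primary, alternate], is_sorted))
        # -> [1, 2, 3], produced by the alternate after the primary failed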

  2. Challenges in the Management and Stewardship of Airborne Observational Data at the National Center for Atmospheric Research (NCAR) Earth Observing Laboratory (EOL)

    NASA Astrophysics Data System (ADS)

    Aquino, J.; Daniels, M. D.

    2015-12-01

    The National Science Foundation (NSF) provides the National Center for Atmospheric Research (NCAR) Earth Observing Laboratory (EOL) funding for the operation, maintenance and upgrade of two research aircraft: the NSF/NCAR High-performance Instrumented Airborne Platform for Environmental Research (HIAPER) Gulfstream V and the NSF/NCAR Hercules C-130. A suite of in-situ and remote sensing airborne instruments housed at the EOL Research Aviation Facility (RAF) provides a basic set of measurements that are typically deployed on most airborne field campaigns. In addition, instruments to address more specific research requirements are provided by collaborating participants from universities, industry, NASA, NOAA or other agencies (referred to as Principal Investigator, or PI, instruments). At the 2014 AGU Fall Meeting, a poster (IN13B-3639) was presented outlining the components of airborne data management, including field-phase data collection, formats, data archival and documentation, version control, storage practices, stewardship and obsolete data formats, and public data access. This talk will cover lessons learned, challenges associated with the above components, and current developments to address these challenges, including: tracking data workflows for aircraft instrumentation to facilitate identification, and correction, of gaps in these workflows; implementation of dataset versioning guidelines; and assignment of Digital Object Identifiers (DOIs) to data and instrumentation to facilitate tracking data and facility use in publications.

  3. Tutorial: Advanced fault tree applications using HARP

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.

    1993-01-01

    Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.

  4. Continuous Record of Permeability inside the Wenchuan Earthquake Fault Zone

    NASA Astrophysics Data System (ADS)

    Xue, Lian; Li, Haibing; Brodsky, Emily

    2013-04-01

    Faults are complex hydrogeological structures which include a highly permeable damage zone with fracture-dominated permeability. Since fractures are generated by earthquakes, we would expect that in the aftermath of a large earthquake the permeability would be transiently high in a fault zone. Over time, the permeability may recover due to a combination of chemical and mechanical processes. However, in situ fault zone hydrological properties are difficult to measure and have never been directly constrained on a fault zone immediately after a large earthquake. In this work, we use the water level response to solid Earth tides to constrain the hydraulic properties inside the Wenchuan Earthquake Fault Zone. The transmissivity and storage determine the phase and amplitude response of the water level to the tidal loading. By measuring the phase and amplitude response, we can constrain the average hydraulic properties of the damage zone at 800-1200 m below the surface (~200-600 m from the principal slip zone). We use Markov chain Monte Carlo methods to evaluate the phase and amplitude responses and the corresponding errors for the largest semidiurnal Earth tide, M2, in the time domain. The average phase lag is ~30°, and the average amplitude response is 6×10^-7 strain/m. Assuming an isotropic, homogeneous and laterally extensive aquifer, the measured phase and amplitude response yield an average storage coefficient S of 2×10^-4 and an average transmissivity T of 6×10^-7 m^2/s. Calculating the hydraulic diffusivity as D = T/S yields D = 3×10^-3 m^2/s, which is two orders of magnitude larger than pump-test values on the Chelungpu Fault, the site of the Mw 7.6 Chi-Chi earthquake. If this value is representative of the fault zone, it means that hydrologic processes should have an effect on the earthquake rupture process. Because the measurement is made through continuous monitoring, we can track the evolution of the hydraulic properties.
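
    The diffusivity quoted above follows directly from the reported transmissivity and storage coefficient; checking the arithmetic:

        T = 6e-7   # transmissivity, m^2/s (from the abstract)
        S = 2e-4   # storage coefficient, dimensionless (from the abstract)
        D = T / S  # hydraulic diffusivity
        print(f"D = {D:.0e} m^2/s")  # 3e-03 m^2/s, as reported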

  5. Continuous Record of Permeability inside the Wenchuan Earthquake Fault Zone

    NASA Astrophysics Data System (ADS)

    Xue, L.; Li, H.; Brodsky, E. E.; Wang, H.; Pei, J.

    2012-12-01

    Faults are complex hydrogeological structures which include a highly permeable damage zone with fracture-dominated permeability. Since fractures are generated by earthquakes, we would expect that in the aftermath of a large earthquake the permeability would be transiently high in a fault zone. Over time, the permeability may recover due to a combination of chemical and mechanical processes. However, in situ fault zone hydrological properties are difficult to measure and have never been directly constrained on a fault zone immediately after a large earthquake. In this work, we use the water level response to solid Earth tides to constrain the hydraulic properties inside the Wenchuan Earthquake Fault Zone. The transmissivity and storage determine the phase and amplitude response of the water level to the tidal loading. By measuring the phase and amplitude response, we can constrain the average hydraulic properties of the damage zone at 800-1200 m below the surface (~200-600 m from the principal slip zone). We use Markov chain Monte Carlo methods to evaluate the phase and amplitude responses and the corresponding errors for the largest semidiurnal Earth tide, M2, in the time domain. The average phase lag is ~30°, and the average amplitude response is 6×10^-7 strain/m. Assuming an isotropic, homogeneous and laterally extensive aquifer, the measured phase and amplitude response yield an average storage coefficient S of 2×10^-4 and an average transmissivity T of 6×10^-7 m^2/s. Calculating the hydraulic diffusivity as D = T/S yields D = 3×10^-3 m^2/s, which is two orders of magnitude larger than pump-test values on the Chelungpu Fault, the site of the Mw 7.6 Chi-Chi earthquake. If this value is representative of the fault zone, it means that hydrologic processes should have an effect on the earthquake rupture process. Because the measurement is made through continuous monitoring, we can track the evolution of the hydraulic properties.

  6. Quantifying Coseismic Normal Fault Rupture at the Seafloor: The 2004 Les Saintes Earthquake Along the Roseau Fault (French Antilles)

    NASA Astrophysics Data System (ADS)

    Olive, J. A. L.; Escartin, J.; Leclerc, F.; Garcia, R.; Gracias, N.; Odemar Science Party, T.

    2016-12-01

    While >70% of Earth's seismicity is submarine, almost all observations of earthquake-related ruptures and surface deformation are restricted to subaerial environments. Such observations are critical for understanding fault behavior and associated hazards (including tsunamis), but are not routinely conducted at the seafloor due to obvious constraints. During the 2013 ODEMAR cruise we used autonomous and remotely operated vehicles to map the Roseau normal fault (Lesser Antilles), source of the 2004 Mw 6.3 earthquake and associated tsunami (<3.5 m run-up). These vehicles acquired acoustic (multibeam bathymetry) and optical data (video and electronic images) spanning from regional (>1 km) to outcrop (<1 m) scales. These high-resolution submarine observations, analogous to those routinely conducted subaerially, rely on advanced image and video processing techniques, such as mosaicking and structure-from-motion (SFM). We identify sub-vertical fault slip planes along the Roseau scarp, displaying coseismic deformation structures undoubtedly due to the 2004 event. First, video mosaicking allows us to identify the freshly exposed fault plane at the base of one of these scarps. A maximum vertical coseismic displacement of 0.9 m can be measured from the video-derived terrain models and the texture-mapped imagery, which have better resolution (<10 cm) than any available acoustic systems. Second, seafloor photomosaics allow us to identify and map both additional sub-vertical fault scarps, and cracks and fissures at their base, recording hangingwall damage from the same event. These observations provide critical parameters to understand the seismic cycle and long-term seismic behavior of this submarine fault. Our work demonstrates the feasibility of extensive, high-resolution underwater surveys using underwater vehicles and novel imaging techniques, thereby opening new possibilities to study recent seafloor changes associated with tectonic, volcanic, or hydrothermal activity.

  7. Award ER25750: Coordinated Infrastructure for Fault Tolerance Systems Indiana University Final Report

    SciT

    Lumsdaine, Andrew

    2013-03-08

    The main purpose of the Coordinated Infrastructure for Fault Tolerance in Systems initiative has been to conduct research with a goal of providing end-to-end fault tolerance on a systemwide basis for applications and other system software. While fault tolerance has been an integral part of most high-performance computing (HPC) system software developed over the past decade, it has been treated mostly as a collection of isolated stovepipes. Visibility and response to faults has typically been limited to the particular hardware and software subsystems in which they are initially observed. Little fault information is shared across subsystems, allowing little flexibility or control on a system-wide basis, making it practically impossible to provide cohesive end-to-end fault tolerance in support of scientific applications. As an example, consider faults such as communication link failures that can be seen by a network library but are not directly visible to the job scheduler, or consider faults related to node failures that can be detected by system monitoring software but are not inherently visible to the resource manager. If information about such faults could be shared by the network libraries or monitoring software, then other system software, such as a resource manager or job scheduler, could ensure that failed nodes or failed network links were excluded from further job allocations and that further diagnosis could be performed. As a founding member and one of the lead developers of the Open MPI project, our efforts over the course of this project have been focused on making Open MPI more robust to failures by supporting various fault tolerance techniques, and using fault information exchange and coordination between MPI and the HPC system software stack from the application, numeric libraries, and programming language runtime to other common system components such as jobs schedulers, resource managers, and monitoring tools.

  8. Mission Adaptive Uas Capabilities for Earth Science and Resource Assessment

    NASA Astrophysics Data System (ADS)

    Dunagan, S.; Fladeland, M.; Ippolito, C.; Knudson, M.; Young, Z.

    2015-04-01

    Unmanned aircraft systems (UAS) are important assets for accessing high-risk airspace and incorporate technologies for sensor coordination, onboard processing, telecommunication, unconventional flight control, and ground-based monitoring and optimization. These capabilities permit adaptive mission management in the face of complex requirements and chaotic external influences. NASA Ames Research Center has led a number of Earth science remote sensing missions directed at the assessment of natural resources, and here we describe two resource mapping problems whose mission characteristics require a mission adaptive capability extensible to other resource assessment challenges. One example involves the requirement for careful control over solar angle geometry for passive reflectance measurements. This constraint exists when collecting imaging spectroscopy data over vegetation for time series analysis or for the coastal ocean, where solar angle combines with sea state to produce surface glint that can obscure the signal. Furthermore, the primary flight control imperative to minimize tracking error must be balanced against the requirement to minimize aircraft motion artifacts in the spatial measurement distribution. A second example involves mapping of natural resources in the Earth's crust using precision magnetometry. In this case the vehicle flight path must be oriented to optimize magnetic flux gradients over a spatial domain having continually emerging features, while optimizing the efficiency of the spatial mapping task. These requirements were highlighted in recent Earth Science missions including the OCEANIA mission, directed at improving the capability for spectral and radiometric reflectance measurements in the coastal ocean, and the Surprise Valley Mission, directed at mapping sub-surface mineral composition and faults using high-sensitivity magnetometry. This paper reports the development of specific aircraft control approaches to incorporate the unusual and

  9. Mission Adaptive UAS Platform for Earth Science Resource Assessment

    NASA Technical Reports Server (NTRS)

    Dunagan, S.; Fladeland, M.; Ippolito, C.; Knudson, M.

    2015-01-01

    NASA Ames Research Center has led a number of important Earth science remote sensing missions including several directed at the assessment of natural resources. A key asset for accessing high-risk airspace has been the 180 kg class SIERRA UAS platform, providing mission durations of up to 8 hrs at altitudes up to 3 km. Recent improvements to this mission capability are embodied in the incipient SIERRA-B variant. Two resource mapping problems having unusual mission characteristics requiring a mission adaptive capability are explored here. One example involves the requirement for careful control over solar angle geometry for passive reflectance measurements. This challenges the management of resources in the coastal ocean, where solar angle combines with sea state to produce surface glint that can obscure the ocean color signal. Furthermore, as for all scanning imager applications, the primary flight control priority to fly the UAS directly to the next waypoint must be balanced against the requirement to minimize roll and crab effects in the imagery. A second example involves the mapping of natural resources in the Earth's crust using precision magnetometry. In this case the vehicle flight path must be oriented to optimize magnetic flux gradients over a spatial domain having continually emerging features, while optimizing the efficiency of the spatial mapping task. These requirements were highlighted in several recent Earth Science missions including the October 2013 OCEANIA mission, directed at improving the capability for hyperspectral reflectance measurements in the coastal ocean, and the Surprise Valley Mission, directed at mapping sub-surface mineral composition and faults using high-sensitivity magnetometry. This paper reports the development of specific aircraft control approaches to incorporate the unusual and demanding requirements to manage solar angle, aircraft attitude and flight path orientation, and efficient (directly geo-rectified) surface and sub

  10. Moving Closer to EarthScope: A Major New Initiative for the Earth Sciences*

    NASA Astrophysics Data System (ADS)

    Simpson, D.; Blewitt, G.; Ekstrom, G.; Henyey, T.; Hickman, S.; Prescott, W.; Zoback, M.

    2002-12-01

    EarthScope is a scientific research and infrastructure initiative designed to provide a suite of new observational facilities to address fundamental questions about the evolution of continents and the processes responsible for earthquakes and volcanic eruptions. The integrated observing systems that will comprise EarthScope capitalize on recent developments in sensor technology and communications to provide Earth scientists with synoptic and high-resolution data derived from a variety of geophysical sensors. An array of 400 broadband seismometers will spend more than ten years crossing the contiguous 48 states and Alaska to image features that make up the internal structure of the continent and underlying mantle. Additional seismic and electromagnetic instrumentation will be available for high resolution imaging of geological targets of special interest. A network of continuously recording Global Positioning System (GPS) receivers and sensitive borehole strainmeters will be installed along the western U.S. plate boundary. These sensors will measure how western North America is deforming, what motions occur along faults, how earthquakes start, and how magma flows beneath active volcanoes. A four-kilometer deep observatory bored directly into the San Andreas fault will provide the first opportunity to observe directly the conditions under which earthquakes occur, to collect fault rocks and fluids for laboratory study, and to monitor continuously an active fault zone at depth. All data from the EarthScope facilities will be openly available in real-time to maximize participation from the scientific community and to provide on-going educational outreach to students and the public. EarthScope's sensors will revolutionize observational Earth science in terms of the quantity, quality and spatial extent of the data they provide. Turning these data into exciting scientific discovery will require new modes of experimentation and interdisciplinary cooperation from the Earth

  11. Coordinated Fault-Tolerance for High-Performance Computing Final Project Report

    SciT

    Panda, Dhabaleswar Kumar; Beckman, Pete

    2011-07-28

    With the Coordinated Infrastructure for Fault Tolerance Systems (CIFTS, as the original project came to be called) project, our aim has been to understand and tackle the following broad research questions, the answers to which will help the HEC community analyze and shape the direction of research in the field of fault tolerance and resiliency on future high-end leadership systems. Will availability of global fault information, obtained by fault information exchange between the different HEC software on a system, allow individual system software to better detect, diagnose, and adaptively respond to faults? If fault-awareness is raised throughout the system through fault information exchange, is it possible to get all system software working together to provide a more comprehensive end-to-end fault management on the system? What are the missing fault-tolerance features that widely used HEC system software lacks today that would inhibit such software from taking advantage of systemwide global fault information? What are the practical limitations of a systemwide approach for end-to-end fault management based on fault awareness and coordination? What mechanisms, tools, and technologies are needed to bring about fault awareness and coordination of responses on a leadership-class system? What standards, outreach, and community interaction are needed for adoption of the concept of fault awareness and coordination for fault management on future systems? Keeping our overall objectives in mind, the CIFTS team has taken a parallel fourfold approach. Our central goal was to design and implement a light-weight, scalable infrastructure with a simple, standardized interface to allow communication of fault-related information through the system and facilitate coordinated responses. This work led to the development of the Fault Tolerance Backplane (FTB) publish-subscribe API specification, together with a reference implementation and several experimental implementations on
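
    The publish-subscribe pattern behind the FTB can be sketched in a few lines; this illustrates the pattern only, not the actual FTB API:

        from collections import defaultdict

        class FaultBackplane:
            # Minimal publish-subscribe bus for fault events.
            def __init__(self):
                self._subs = defaultdict(list)

            def subscribe(self, event_kind, handler):
                self._subs[event_kind].append(handler)

            def publish(self, event_kind, **details):
                for handler in self._subs[event_kind]:
                    handler(details)

        ftb = FaultBackplane()
        # A scheduler learns of node failures seen by the monitoring layer
        # and can exclude those nodes from further job allocations:
        ftb.subscribe("node.failure",
                      lambda e: print(f"scheduler: excluding {e['node']}"))
        ftb.publish("node.failure", node="n042", cause="ECC errors")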

  12. A simulation of the San Andreas fault experiment

    NASA Technical Reports Server (NTRS)

    Agreen, R. W.; Smith, D. E.

    1973-01-01

    The San Andreas Fault Experiment, which employs two laser tracking systems for measuring the relative motion of two points on opposite sides of the fault, was simulated for an eight year observation period. The two tracking stations are located near San Diego on the western side of the fault and near Quincy on the eastern side; they are roughly 900 kilometers apart. Both will simultaneously track laser reflector equipped satellites as they pass near the stations. Tracking of the Beacon Explorer C Spacecraft was simulated for these two stations during August and September for eight consecutive years. An error analysis of the recovery of the relative location of Quincy from the data was made, allowing for model errors in the mass of the earth, the gravity field, solar radiation pressure, atmospheric drag, errors in the position of the San Diego site, and laser systems range biases and noise. The results of this simulation indicate that the distance of Quincy from San Diego will be determined each year with a precision of about 10 centimeters. This figure is based on the accuracy of earth models and other parameters available in 1972.

  13. NASA to Survey Earth's Resources

    NASA Technical Reports Server (NTRS)

    Mittauer, R. T.

    1971-01-01

    A wide variety of the natural resources of earth and man's management of them will be studied by an initial group of foreign and domestic scientists tentatively chosen by the National Aeronautics and Space Administration to analyze data to be gathered by two earth-orbiting spacecraft. The spacecraft are the first Earth Resources Technology Satellite (ERTS-A) and the manned Skylab which will carry an Earth Resources Experiment Package (EREP). In the United States, the initial experiments will study the feasibility of remote sensing from a satellite in gathering information on ecological problems. The objective of both ERTS and EREP aboard Skylab is to obtain multispectral images of the surface of the earth with high resolution remote sensors and to process and distribute the images to scientific users in a wide variety of disciplines. The ERTS-A, EREP, and Skylab systems are described and their operation is discussed.

  14. Fracture structures of active Nojima fault, Japan, revealed by borehole televiewer imaging

    NASA Astrophysics Data System (ADS)

    Nishiwaki, T.; Lin, A.

    2017-12-01

    Most large intraplate earthquakes occur as slip on mature active faults, so any investigation of the seismic faulting process and assessment of seismic hazards requires an understanding of the nature of active fault damage zones as seismogenic sources. In this study, we focus on the fracture structures of the Nojima Fault (NF), which triggered the 1995 Kobe Mw 7.2 earthquake, using ultrasonic borehole televiewer (BHTV) images of the borehole wall. The borehole used in this study was drilled through the NF to 1000 m depth by the science project Drilling into Fault Damage Zone (DFDZ) in 2016 (Lin, 2016; Miyawaki et al., 2016). At depths of <230 m, the borehole penetrates weakly consolidated sandstone and conglomerate of the Plio-Pleistocene Osaka Group and mudstone and sandstone of the Miocene Kobe Group. The basement rock at depths of >230 m consists of pre-Neogene granitic rock. Based on the observations of cores and analysis of the BHTV images, the main fault plane was identified at a depth of 529.3 m, with a 15-cm-thick fault gouge zone and a damage zone 100 m wide developed on both sides of the main fault plane. Analysis of the BHTV images shows that the fractures are concentrated in two groups: one striking N45°E (Group 1), parallel to the general trend of the NF, and another striking N70°E (Group 2), oblique to the fault at an angle of 20°. It is well known that Riedel shear structures are common within strike-slip fault zones. Previous studies show that the NF is a right-lateral strike-slip fault with a minor thrust component, and that the fault damage zone is characterized by Riedel shear structures dominated by Y shears (main faults), R shears and P foliations (Lin, 2001). We interpret the fractures of Group 1 as Y Riedel fault shears and those of Group 2 as R shears. Such Riedel shear structures indicate that the NF is a right-lateral strike-slip fault which is activated under a regional stress field oriented to the

  15. ARGES: an Expert System for Fault Diagnosis Within Space-Based ECLS Systems

    NASA Technical Reports Server (NTRS)

    Pachura, David W.; Suleiman, Salem A.; Mendler, Andrew P.

    1988-01-01

    ARGES (Atmospheric Revitalization Group Expert System) is a demonstration prototype expert system for fault management for the Solid Amine, Water Desorbed (SAWD) CO2 removal assembly, associated with the Environmental Control and Life Support (ECLS) System. ARGES monitors and reduces data in real time from either the SAWD controller or a simulation of the SAWD assembly. It can detect gradual degradations or predict failures. This allows graceful shutdown and scheduled maintenance, which reduces crew maintenance overhead. Status and fault information is presented in a user interface that simulates what would be seen by a crewperson. The user interface employs animated color graphics and an object-oriented approach to provide detailed status information, fault identification, and explanation of reasoning in a rapidly assimilated manner. In addition, ARGES recommends possible courses of action for predicted and actual faults. ARGES is seen as a forerunner of AI-based fault management systems for manned space systems.
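
    The kind of gradual-degradation detection ARGES performs can be approximated by a simple projected-trend rule. The sketch below is illustrative only (a crude linear fit over recent samples), not the ARGES knowledge base:

        def degradation_alarm(samples, limit, horizon=10.0):
            # Fit a linear trend to equally spaced samples and warn if the
            # value is projected to cross `limit` within `horizon` further
            # time steps.
            n = len(samples)
            t = range(n)
            t_mean, x_mean = (n - 1) / 2.0, sum(samples) / n
            slope = (sum((ti - t_mean) * (xi - x_mean)
                         for ti, xi in zip(t, samples))
                     / sum((ti - t_mean) ** 2 for ti in t))
            return samples[-1] + slope * horizon >= limit

        co2 = [0.30, 0.32, 0.35, 0.39, 0.44]  # rising CO2 pressure, kPa
        print(degradation_alarm(co2, limit=0.70))  # True -> maintenance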

  16. The Australian Computational Earth Systems Simulator

    NASA Astrophysics Data System (ADS)

    Mora, P.; Muhlhaus, H.; Lister, G.; Dyskin, A.; Place, D.; Appelbe, B.; Nimmervoll, N.; Abramson, D.

    2001-12-01

    Numerical simulation of the physics and dynamics of the entire earth system offers an outstanding opportunity for advancing earth system science and technology but represents a major challenge due to the range of scales and physical processes involved, as well as the magnitude of the software engineering effort required. However, new simulation and computer technologies are bringing this objective within reach. Under a special competitive national funding scheme to establish new Major National Research Facilities (MNRF), the Australian government together with a consortium of Universities and research institutions have funded construction of the Australian Computational Earth Systems Simulator (ACcESS). The Simulator or computational virtual earth will provide the research infrastructure to the Australian earth systems science community required for simulations of dynamical earth processes at scales ranging from microscopic to global. It will consist of thematic supercomputer infrastructure and an earth systems simulation software system. The Simulator models and software will be constructed over a five year period by a multi-disciplinary team of computational scientists, mathematicians, earth scientists, civil engineers and software engineers. The construction team will integrate numerical simulation models (3D discrete elements/lattice solid model, particle-in-cell large deformation finite-element method, stress reconstruction models, multi-scale continuum models etc) with geophysical, geological and tectonic models, through advanced software engineering and visualization technologies. When fully constructed, the Simulator aims to provide the software and hardware infrastructure needed to model solid earth phenomena including global scale dynamics and mineralisation processes, crustal scale processes including plate tectonics, mountain building, interacting fault system dynamics, and micro-scale processes that control the geological, physical and dynamic

  17. LiDAR-Assisted identification of an active fault near Truckee, California

    Hunter, L.E.; Howle, J.F.; Rose, R.S.; Bawden, G.W.

    2011-01-01

    We use high-resolution (1.5-2.4 points/m2) bare-earth airborne Light Detection and Ranging (LiDAR) imagery to identify, map, constrain, and visualize fault-related geomorphology in densely vegetated terrain surrounding Martis Creek Dam near Truckee, California. Bare-earth LiDAR imagery reveals a previously unrecognized and apparently youthful right-lateral strike-slip fault that exhibits laterally continuous tectonic geomorphic features over a 35-km-long zone. If these interpretations are correct, the fault, herein named the Polaris fault, may represent a significant seismic hazard to the greater Truckee-Lake Tahoe and Reno-Carson City regions. Three-dimensional modeling of an offset late Quaternary terrace riser indicates a minimum tectonic slip rate of 0.4 ± 0.1 mm/yr. Mapped fault patterns are fairly typical of regional patterns elsewhere in the northern Walker Lane and are in strong coherence with moderate magnitude historical seismicity of the immediate area, as well as the current regional stress regime. Based on a range of surface-rupture lengths and depths to the base of the seismogenic zone, we estimate a maximum earthquake magnitude (M) for the Polaris fault to be between 6.4 and 6.9.
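
    The quoted magnitude range is consistent with standard empirical scaling between surface-rupture length and magnitude, for example the Wells and Coppersmith (1994) regression for strike-slip faults (used here for illustration; the authors' exact method is not stated in the abstract):

        import math

        def magnitude_from_srl(srl_km: float) -> float:
            # Wells & Coppersmith (1994) strike-slip surface-rupture-length
            # regression: M = 5.16 + 1.12 * log10(SRL in km).
            return 5.16 + 1.12 * math.log10(srl_km)

        for srl in (15, 35):  # partial vs full rupture of the 35-km zone
            print(f"SRL = {srl:2d} km -> M ~ {magnitude_from_srl(srl):.1f}")
        # SRL = 15 km -> M ~ 6.5 ; SRL = 35 km -> M ~ 6.9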

  18. Critical fault patterns determination in fault-tolerant computer systems

    NASA Technical Reports Server (NTRS)

    Mccluskey, E. J.; Losq, J.

    1978-01-01

    The method proposed tries to enumerate all the critical fault-patterns (successive occurrences of failures) without analyzing every single possible fault. The conditions for the system to be operating in a given mode can be expressed in terms of the static states. Thus, one can find all the system states that correspond to a given critical mode of operation. The next step consists in analyzing the fault-detection mechanisms, the diagnosis algorithm and the process of switch control. From them, one can find all the possible system configurations that can result from a failure occurrence. Thus, one can list all the characteristics, with respect to detection, diagnosis, and switch control, that failures must have to constitute critical fault-patterns. Such an enumeration of the critical fault-patterns can be directly used to evaluate the overall system tolerance to failures. Present research is focused on how to efficiently make use of these system-level characteristics to enumerate all the failures that verify these characteristics.

  19. Fault Analysis in Solar Photovoltaic Arrays

    NASA Astrophysics Data System (ADS)

    Zhao, Ye

    Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown to, at times, prevent the fault current protection devices to trip. A small-scale experimental PV benchmark system has been developed in Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low irradiance condition. The other is a fault evolution in a PV array during night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition". However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" and "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.
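
    The current-limited nature of PV faults described above is easy to see with a back-of-envelope model; all numbers below are illustrative:

        def backfed_fault_current(n_strings, i_sc, irradiance_frac):
            # Rough worst case: the (n_strings - 1) healthy strings
            # back-feed the faulted one; PV current scales roughly
            # linearly with irradiance.
            return (n_strings - 1) * i_sc * irradiance_frac

        i_sc = 8.0             # string short-circuit current at full sun, A
        fuse = 1.56 * i_sc     # a typical series-fuse rating, ~12.5 A
        for g in (1.0, 0.25):  # full sun vs low irradiance
            i = backfed_fault_current(3, i_sc, g)
            verdict = "fuse trips" if i > fuse else "hidden fault"
            print(f"G = {g:.2f}: ~{i:.1f} A vs {fuse:.1f} A -> {verdict}")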

  20. Comparative study of two active faults in different stages of the earthquake cycle in central Japan -The Atera fault (with 1586 Tensho earthquake) and the Nojima fault (with 1995 Kobe earthquake)-

    NASA Astrophysics Data System (ADS)

    Matsuda, T.; Omura, K.; Ikeda, R.

    2003-12-01

    National Research Institute for Earth Science and Disaster Prevention (NIED) has been conducting "fault zone drilling". Fault zone drilling is especially important in understanding the structure, composition, and physical properties of an active fault. In the Chubu district of central Japan, large active faults such as the Atotsugawa (with the 1858 Hietsu earthquake) and the Atera (with the 1586 Tensho earthquake) faults exist. After the occurrence of the 1995 Kobe earthquake, the importance of direct measurements in fault zones by drilling has been widely recognized. Here we describe the Atera fault and the Nojima fault; the two faults are similar in geological setting (mostly composed of granitic rocks), which makes a comparative drilling study straightforward. The features of the Atera fault, which was displaced by the 1586 Tensho earthquake, are as follows. Its total length is about 70 km, and its general trend is N45°W with left-lateral strike slip. The slip rate is estimated as 3-5 m per 1000 years. Seismicity is very low at present, and lithologies around the fault are basically granitic rocks and rhyolite. Six boreholes have been drilled, to depths of 400 m to 630 m. Four of these boreholes (Hatajiri, Fukuoka, Ueno and Kawaue) are located on a line crossing perpendicular to the Atera fault. In the Kawaue well, mostly fractured and altered granitic rock continued from the surface to the bottom at 630 m. X-ray fluorescence (XRF) analysis was conducted on core samples, using the glass bead method, to estimate the abundances of major chemical elements. The H2O+ contents are about 0.5 to 2.5 weight percent. This fractured zone is also characterized in the logging data by low resistivity, low P-wave velocity, low density and high neutron porosity. The 1995 Kobe (Hyogo-ken Nanbu) earthquake occurred along the NE-SW-trending Rokko-Awaji fault system, and the Nojima fault appeared on the surface on Awaji Island when this