Science.gov

Sample records for faulting

  1. Zipper Faults

    NASA Astrophysics Data System (ADS)

    Platt, J. P.; Passchier, C. W.

    2015-12-01

    Intersecting simultaneously active pairs of faults with different orientations and opposing slip sense ("conjugate faults") present geometrical and kinematic problems. Such faults rarely offset each other, even when they have displacements of many km. A simple solution to the problem is that the two faults merge, either zippering up or unzippering, depending on the relationship between the angle of intersection and the slip senses. A widely recognized example of this is the so-called blind front developed in some thrust belts, where a backthrust branches off a decollement surface at depth. The decollement progressively unzippers, so that its hanging wall becomes the hanging wall of the backthrust, and its footwall becomes the footwall of the active decollement. The opposite situation commonly arises in core complexes, where conjugate low-angle normal faults merge to form a single detachment; in this case the two faults zipper up. Analogous situations may arise for conjugate pairs of strike-slip faults. We present kinematic and geometrical analyses of the Garlock and San Andreas faults in California, the Najd fault system in Saudi Arabia, the North and East Anatolian faults, the Karakoram and Altyn Tagh faults in Tibet, and the Tonale and Giudicarie faults in the southern Alps, all of which appear to have undergone zippering over distances of several tens to hundreds of km. The zippering process may produce complex and significant patterns of strain and rotation in the surrounding rocks, particularly if the angle between the zippered faults is large. A zippering fault may be inactive during active movement on the intersecting faults, or it may have a slip rate that differs from either fault. Intersecting conjugate ductile shear zones behave in the same way on outcrop and micro-scales.

  2. Fault finder

    DOEpatents

    Bunch, Richard H.

    1986-01-01

    A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
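
    As a rough illustration of the double-ended timing principle such locators rely on (a generic sketch, not the patent's circuitry; the line length, propagation velocity, and time stamps below are assumed values):

      # Two-ended fault location from synchronized time stamps (illustrative values).
      LINE_LENGTH_KM = 120.0   # assumed line length between master and remote units
      V_KM_PER_US = 0.290      # assumed surge propagation velocity (~0.97 c)

      def fault_distance_from_master(t_master_us, t_remote_us):
          # d = (L - v * (t_remote - t_master)) / 2 : the earlier arrival marks the closer terminal
          return (LINE_LENGTH_KM - V_KM_PER_US * (t_remote_us - t_master_us)) / 2.0

      # Fault transient reaches the master 50 us before the remote unit:
      print(fault_distance_from_master(t_master_us=100.0, t_remote_us=150.0))  # ~52.75 km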

  3. Fault diagnosis

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed based on those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to

  4. Fault mechanics

    SciTech Connect

    Segall, P.

    1991-01-01

    Recent observational, experimental, and theoretical modeling studies of fault mechanics are discussed in a critical review of U.S. research from the period 1987-1990. Topics examined include interseismic strain accumulation, coseismic deformation, postseismic deformation, and the earthquake cycle; long-term deformation; fault friction and the instability mechanism; pore pressure and normal stress effects; instability models; strain measurements prior to earthquakes; stochastic modeling of earthquakes; and deep-focus earthquakes. Maps, graphs, and a comprehensive bibliography are provided. 220 refs.

  5. Fault slip distribution and fault roughness

    NASA Astrophysics Data System (ADS)

    Candela, Thibault; Renard, François; Schmittbuhl, Jean; Bouchon, Michel; Brodsky, Emily E.

    2011-11-01

    We present analysis of the spatial correlations of seismological slip maps and fault topography roughness, illuminating their identical self-affine exponent. Though the complexity of the coseismic spatial slip distribution can be intuitively associated with geometrical or stress heterogeneities along the fault surface, this has never been demonstrated. Based on new measurements of fault surface topography and on statistical analyses of kinematic inversions of slip maps, we propose a model, which quantitatively characterizes the link between slip distribution and fault surface roughness. Our approach can be divided into two complementary steps: (i) Using a numerical computation, we estimate the influence of fault roughness on the frictional strength (pre-stress). We model a fault as a rough interface where elastic asperities are squeezed. The Hurst exponent ?, characterizing the self-affinity of the frictional strength field, approaches ?, where ? is the roughness exponent of the fault surface in the direction of slip. (ii) Using a quasi-static model of fault propagation, which includes the effect of long-range elastic interactions and spatial correlations in the frictional strength, the spatial slip correlation is observed to scale as ?, where ? represents the Hurst exponent of the slip distribution. Under the assumption that the origin of the spatial fluctuations in frictional strength along faults is the elastic squeeze of fault asperities, we show that self-affine geometrical properties of fault surface roughness control slip correlations and that ?. Given that ? for a wide range of faults (various accumulated displacement, host rock and slip movement), we predict that ?. Even if our quasi-static fault model is more relevant for creeping faults, the spatial slip correlations observed are consistent with those of seismological slip maps. A consequence is that the self-affinity property of slip roughness may be explained by fault geometry without considering
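
    The self-affine scaling invoked above can be illustrated with a generic windowed-RMS estimate of the Hurst exponent on a synthetic profile; this sketch is not the authors' spectral analysis, and the profile length, target exponent, and window sizes are assumptions:

      import numpy as np

      rng = np.random.default_rng(0)

      def synthetic_self_affine(n, hurst):
          """Spectral synthesis of a 1-D self-affine profile: amplitude ~ k**-(0.5 + H)."""
          k = np.fft.rfftfreq(n)
          amp = np.zeros_like(k)
          amp[1:] = k[1:] ** -(0.5 + hurst)
          phase = np.exp(2j * np.pi * rng.random(len(k)))
          return np.fft.irfft(amp * phase, n)

      def hurst_from_rms(profile, window_sizes):
          """Fit sigma(L) ~ L**H, where sigma is the RMS roughness of detrended windows of length L."""
          pts = []
          for w in window_sizes:
              rms = []
              for i in range(0, len(profile) - w, w):
                  seg = profile[i:i + w]
                  x = np.arange(w)
                  seg = seg - np.polyval(np.polyfit(x, seg, 1), x)   # remove the local trend
                  rms.append(seg.std())
              pts.append((np.log(w), np.log(np.mean(rms))))
          slope, _ = np.polyfit(*zip(*pts), 1)
          return slope

      profile = synthetic_self_affine(2 ** 14, hurst=0.8)
      print(hurst_from_rms(profile, window_sizes=[16, 32, 64, 128, 256, 512]))  # roughly recovers H ~ 0.8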

  6. Flight elements: Fault detection and fault management

    NASA Technical Reports Server (NTRS)

    Lum, H.; Patterson-Hine, A.; Edge, J. T.; Lawler, D.

    1990-01-01

    Fault management for an intelligent computational system must be developed using a top down integrated engineering approach. An approach proposed includes integrating the overall environment involving sensors and their associated data; design knowledge capture; operations; fault detection, identification, and reconfiguration; testability; causal models including digraph matrix analysis; and overall performance impacts on the hardware and software architecture. Implementation of the concept to achieve a real time intelligent fault detection and management system will be accomplished via the implementation of several objectives, which are: Development of fault tolerant/FDIR requirement and specification from a systems level which will carry through from conceptual design through implementation and mission operations; Implementation of monitoring, diagnosis, and reconfiguration at all system levels providing fault isolation and system integration; Optimize system operations to manage degraded system performance through system integration; and Lower development and operations costs through the implementation of an intelligent real time fault detection and fault management system and an information management system.

  7. Fault damage zones

    NASA Astrophysics Data System (ADS)

    Kim, Young-Seog; Peacock, David C. P.; Sanderson, David J.

    2004-03-01

    Damage zones show very similar geometries across a wide range of scales and fault types, including strike-slip, normal and thrust faults. We use a geometric classification of damage zones into tip-, wall-, and linking-damage zones, based on their location around faults. These classes can be sub-divided in terms of fault and fracture patterns within the damage zone. A variety of damage zone structures can occur at mode II tips of strike-slip faults, including wing cracks, horsetail fractures, antithetic faults, and synthetic branch faults. Wall damage zones result from the propagation of mode II and mode III fault tips through a rock, or from damage associated with the increase in slip on a fault. Wall damage zone structures include extension fractures, antithetic faults, synthetic faults, and rotated blocks with associated triangular openings. The damage formed at the mode III tips of strike-slip faults (e.g. observed in cliff sections) are classified as wall damage zones, because the damage zone structures are distributed along a fault trace in map view. Mixed-mode tips are likely to show characteristics of both mode II and mode III tips. Linking damage zones are developed at steps between two sub-parallel faults, and the structures developed depend on whether the step is extensional or contractional. Extension fractures and pull-aparts typically develop in extensional steps, whilst solution seams, antithetic faults and synthetic faults commonly develop in contractional steps. Rotated blocks, isolated lenses or strike-slip duplexes may occur in both extensional and contractional steps. Damage zone geometries and structures are strongly controlled by the location around a fault, the slip mode at a fault tip, and by the evolutionary stage of the fault. Although other factors control the nature of damage zones (e.g. lithology, rheology and stress system), the three-dimensional fault geometry and slip mode at each tip must be considered to gain an understanding of

  8. Fault tree handbook

    SciTech Connect

    Haasl, D.F.; Roberts, N.H.; Vesely, W.E.; Goldberg, F.F.

    1981-01-01

    This handbook describes a methodology for reliability analysis of complex systems such as those which comprise the engineered safety features of nuclear power generating stations. After an initial overview of the available system analysis approaches, the handbook focuses on a description of the deductive method known as fault tree analysis. The following aspects of fault tree analysis are covered: basic concepts for fault tree analysis; basic elements of a fault tree; fault tree construction; probability, statistics, and Boolean algebra for the fault tree analyst; qualitative and quantitative fault tree evaluation techniques; and computer codes for fault tree evaluation. Also discussed are several example problems illustrating the basic concepts of fault tree construction and evaluation.
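
    A toy version of the quantitative evaluation step the handbook describes, restricted to independent basic events and AND/OR gates (the event names and probabilities are invented for illustration):

      # Minimal quantitative fault tree evaluation with independent basic events.
      def gate_and(*p):                 # output fails only if every input fails
          prob = 1.0
          for x in p:
              prob *= x
          return prob

      def gate_or(*p):                  # output fails if any input fails
          prob_none = 1.0
          for x in p:
              prob_none *= (1.0 - x)
          return 1.0 - prob_none

      # hypothetical basic-event failure probabilities (per demand)
      pump_a, pump_b, offsite_power, relief_valve = 1e-3, 1e-3, 1e-4, 5e-4

      loss_of_cooling = gate_or(gate_and(pump_a, pump_b), offsite_power)
      top_event = gate_or(loss_of_cooling, relief_valve)
      print(f"P(top event) = {top_event:.3e}")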

  9. Fault zone hydrogeology

    NASA Astrophysics Data System (ADS)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

    Deformation along faults in the shallow crust (< 1 km) introduces permeability heterogeneity and anisotropy, which has an important impact on processes such as regional groundwater flow, hydrocarbon migration, and hydrothermal fluid circulation. Fault zones have the capacity to be hydraulic conduits connecting shallow and deep geological environments, but simultaneously the fault cores of many faults often form effective barriers to flow. The direct evaluation of the impact of faults to fluid flow patterns remains a challenge and requires a multidisciplinary research effort of structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface- and subsurface observations from diverse rock types from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the discipline of structural geology and hydrogeology to co-evolve and

  10. Fault recovery characteristics of the fault tolerant multi-processor

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1990-01-01

    The fault handling performance of the fault tolerant multiprocessor (FTMP) was investigated. Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles byzantine or lying faults. It is pointed out that these weak areas in the FTMP's design increase the probability that, for any hardware fault, a good LRU (line replaceable unit) is mistakenly disabled by the fault management software. It is concluded that fault injection can help detect and analyze the behavior of a system in the ultra-reliable regime. Although fault injection testing cannot be exhaustive, it has been demonstrated that it provides a unique capability to unmask problems and to characterize the behavior of a fault-tolerant system.

  11. Fault model development for fault tolerant VLSI design

    NASA Astrophysics Data System (ADS)

    Hartmann, C. R.; Lala, P. K.; Ali, A. M.; Visweswaran, G. S.; Ganguly, S.

    1988-05-01

    Fault models provide systematic and precise representations of physical defects in microcircuits in a form suitable for simulation and test generation. The current difficulty in testing VLSI circuits can be attributed to the tremendous increase in design complexity and the inappropriateness of traditional stuck-at fault models. This report develops fault models for three different types of common defects that are not accurately represented by the stuck-at fault model. The faults examined in this report are: bridging faults, transistor stuck-open faults, and transient faults caused by alpha particle radiation. A generalized fault model could not be developed for the three fault types. However, microcircuit behavior and fault detection strategies are described for the bridging, transistor stuck-open, and transient (alpha particle strike) faults. The results of this study can be applied to the simulation and analysis of faults in fault tolerant VLSI circuits.
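
    A minimal gate-level sketch of why a bridging defect needs tests beyond the stuck-at model; the two-gate netlist and the wired-AND bridge behavior are assumptions chosen only for illustration:

      from itertools import product

      # Fault-free circuit: an inverter and a buffer driving two observable outputs.
      def good(a, b):
          return (1 - a, b)                  # (out1, out2)

      # Same circuit with a wired-AND bridging fault between the two output nets.
      def bridged(a, b):
          shorted = (1 - a) & b              # both shorted nets settle to the AND of their drivers
          return (shorted, shorted)

      # Exhaustive search for input vectors that expose the bridge at an output.
      tests = [v for v in product((0, 1), repeat=2) if good(*v) != bridged(*v)]
      print(tests)                           # -> [(0, 0), (1, 1)]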

  12. FTAPE: A fault injection tool to measure fault tolerance

    NASA Astrophysics Data System (ADS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1994-07-01

    The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.
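
    The two reported measures follow directly from injection-campaign bookkeeping; in the sketch below the baseline runtime and campaign counts are invented numbers, not FTAPE results:

      # errors/fault ratio and performance degradation from hypothetical campaign logs
      baseline_runtime_s = 120.0                      # workload runtime with no injection (assumed)
      campaigns = {
          "cpu":    {"faults": 500, "errors": 1375, "runtime_s": 131.0},
          "memory": {"faults": 500, "errors": 2210, "runtime_s": 127.5},
      }
      for name, c in campaigns.items():
          ratio = c["errors"] / c["faults"]
          degradation = (c["runtime_s"] - baseline_runtime_s) / baseline_runtime_s
          print(f"{name}: errors/fault = {ratio:.2f}, degradation = {degradation:.1%}")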

  13. FTAPE: A fault injection tool to measure fault tolerance

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1994-01-01

    The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.

  14. FTAPE: A fault injection tool to measure fault tolerance

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-01-01

    The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.

  15. Isolability of faults in sensor fault diagnosis

    NASA Astrophysics Data System (ADS)

    Sharifi, Reza; Langari, Reza

    2011-10-01

    A major concern with fault detection and isolation (FDI) methods is their robustness with respect to noise and modeling uncertainties. With this in mind, several approaches have been proposed to minimize the vulnerability of FDI methods to these uncertainties. But, apart from the algorithm used, there is a theoretical limit on the minimum effect of noise on detectability and isolability. This limit has been quantified in this paper for the problem of sensor fault diagnosis based on direct redundancies. In this study, first a geometric approach to sensor fault detection is proposed. The sensor fault is isolated based on the direction of residuals found from a residual generator. This residual generator can be constructed from an input-output or a Principal Component Analysis (PCA) based model. The simplicity of this technique, compared to the existing methods of sensor fault diagnosis, allows for more rational formulation of the isolability concepts in linear systems. Using this residual generator and the assumption of Gaussian noise, the effect of noise on isolability is studied, and the minimum magnitude of isolable fault in each sensor is found based on the distribution of noise in the measurement system. Finally, some numerical examples are presented to clarify this approach.
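
    A compact numerical sketch of the residual-generator idea in its PCA form: residuals live in the low-variance subspace of nominal data, and a sensor fault is isolated by matching the residual direction against each sensor's signature. The plant dimensions, noise level, and fault size below are assumptions:

      import numpy as np

      rng = np.random.default_rng(0)

      # hypothetical plant: 5 sensors observing 2 latent physical variables plus noise
      n_samples, n_sensors, n_latent = 2000, 5, 2
      X = rng.standard_normal((n_samples, n_latent)) @ rng.standard_normal((n_latent, n_sensors))
      X += 0.05 * rng.standard_normal(X.shape)

      mean = X.mean(axis=0)
      _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
      W = Vt[n_latent:]                   # residual generator: rows span the low-variance subspace

      def isolate_sensor_fault(y):
          r = W @ (y - mean)                                   # residual vector
          signatures = W / np.linalg.norm(W, axis=0)           # direction of a fault on each sensor
          scores = np.abs(r @ signatures)                      # match residual against each signature
          return int(np.argmax(scores)), float(np.linalg.norm(r))

      y = X[10].copy()
      y[3] += 1.0                                              # additive fault on sensor index 3
      print(isolate_sensor_fault(y))                           # expected: sensor 3, clearly nonzero norm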

  16. Three-dimensional fault drawing

    SciTech Connect

    Dongan, L.

    1992-01-01

    In this paper, the author presents a structural interpretation method based on three-dimensional fault drawing. Fault closure must be based on geological theory, spatial plotting principles, and the limitations of seismic exploration. Geological structure can be well ascertained by analysing the shapes and interrelations of faults drawn through reasonable fault-point closure and fault-point correlation. In this method, the interrelation of fault points is determined by first closing corresponding fault points in intersecting sections and then correlating the relevant fault points. Because fault-point correlation is not carried out on the base map, its correctness can be improved greatly. Three-dimensional fault closure is achieved by iterative revision, with the closure grid densified gradually. The distribution of the major fault system is determined before that of secondary faults. Fault interpretation on a workstation also follows this procedure.

  17. How Faults Shape the Earth.

    ERIC Educational Resources Information Center

    Bykerk-Kauffman, Ann

    1992-01-01

    Presents fault activity with an emphasis on earthquakes and changes in continent shapes. Identifies three types of fault movement: normal, reverse, and strike-slip faults. Discusses the seismic gap theory, plate tectonics, and the principle of superposition. Vignettes portray fault movement, and the locations of the San Andreas fault and epicenters of…

  18. Fault detection and fault tolerance in robotics

    NASA Technical Reports Server (NTRS)

    Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.

    1992-01-01

    Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.

  19. Normal faults, normal friction?

    NASA Astrophysics Data System (ADS)

    Collettini, Cristiano; Sibson, Richard H.

    2001-10-01

    Debate continues as to whether normal faults may be seismically active at very low dips (δ < 30°) in the upper continental crust. An updated compilation of dip estimates (n = 25) has been prepared from focal mechanisms of shallow, intracontinental, normal-slip earthquakes (M > 5.5; slip vector raking 90° ± 30° in the fault plane) where the rupture plane is unambiguously discriminated. The dip distribution for these moderate-to-large normal fault ruptures extends from 65° > δ > 30°, corresponding to a range, 25° < θr < 60°, for the reactivation angle between the fault and inferred vertical σ1. In a comparable data set previously obtained for reverse fault ruptures (n = 33), the active dip distribution is 10° < δ = θr < 60°. For vertical and horizontal σ1 trajectories within extensional and compressional tectonic regimes, respectively, dip-slip reactivation is thus restricted to faults oriented at θr ≤ 60° to inferred σ1. Apparent lockup at θr ≈ 60° in each dip distribution and a dominant 30° ± 5° peak in the reverse fault dip distribution, are both consistent with a friction coefficient μs ≈ 0.6, toward the bottom of Byerlee's experimental range, though localized fluid overpressuring may be needed for reactivation of less favorably oriented faults.
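
    The quoted lock-up geometry follows from the standard reactivation condition for a cohesionless fault; a small worked example, taking the friction coefficient as 0.6 as in the abstract, is:

      import numpy as np

      def reactivation_stress_ratio(theta_deg, mu=0.6):
          """Effective stress ratio sigma1'/sigma3' needed to reactivate a cohesionless fault
          oriented at angle theta to sigma1: R = (1 + mu*cot(theta)) / (1 - mu*tan(theta))."""
          t = np.radians(theta_deg)
          denom = 1.0 - mu * np.tan(t)
          return (1.0 + mu / np.tan(t)) / denom if denom > 0 else float("inf")

      for theta in (30, 45, 55, 59):
          print(f"theta = {theta:2d} deg  ->  R = {reactivation_stress_ratio(theta):8.2f}")

      # R diverges (frictional lock-up) where tan(theta) = 1/mu:
      print("lock-up angle ~", round(np.degrees(np.arctan(1 / 0.6)), 1), "deg")   # ~59 deg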

  20. Solar system fault detection

    DOEpatents

    Farrington, R.B.; Pruett, J.C. Jr.

    1984-05-14

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.

  1. Solar system fault detection

    DOEpatents

    Farrington, Robert B.; Pruett, Jr., James C.

    1986-01-01

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.

  2. How clays weaken faults.

    NASA Astrophysics Data System (ADS)

    van der Pluijm, Ben A.; Schleicher, Anja M.; Warr, Laurence N.

    2010-05-01

    The weakness of upper crustal faults has been variably attributed to (i) low values of normal stress, (ii) elevated pore-fluid pressure, and (iii) low frictional strength. Direct observations on natural fault rocks provide new evidence for the role of frictional properties on fault strength, as illustrated by our recent work on samples from the San Andreas Fault Observatory at Depth (SAFOD) drillhole at Parkfield, California. Mudrock samples from fault zones at ~3066 m and ~3296 m measured depth show variably spaced and interconnected networks of displacement surfaces that consist of host rock particles that are abundantly coated by polished films with occasional striations. Transmission electron microscopy and X-ray diffraction study of the surfaces reveal the occurrence of neocrystallized thin-film clay coatings containing illite-smectite (I-S) and chlorite-smectite (C-S) phases. X-ray texture goniometry shows that the crystallographic fabric of these fault rocks is characteristically low, in spite of an abundance of clay phases. 40Ar/39Ar dating of the illitic mixed-layered coatings demonstrates recent crystallization and reveals the initiation of an "older" fault strand (~8 Ma) at 3066 m measured depth, and a "younger" fault strand (~4 Ma) at 3296 m measured depth. Today, the younger strand is the site of active creep behavior, reflecting continued activation of these clay-weakened zones. We propose that the majority of slow fault creep is controlled by the high density of thin (< 100 nm thick) nano-coatings on fracture surfaces, which become sufficiently smectite-rich and interconnected at low angles to allow slip with minimal breakage of stronger matrix clasts. Displacements are accommodated by localized frictional slip along coated particle surfaces and hydrated smectitic phases, in combination with intracrystalline deformation of the clay lattice, associated with extensive mineral dissolution, mass transfer and continued growth of expandable layers. The

  3. The Kunlun Fault

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Kunlun fault is one of the gigantic strike-slip faults that bound the north side of Tibet. Left-lateral motion along the 1,500-kilometer (932-mile) length of the Kunlun has occurred uniformly for the last 40,000 years at a rate of 1.1 centimeter per year, creating a cumulative offset of more than 400 meters. In this image, two splays of the fault are clearly seen crossing from east to west. The northern fault juxtaposes sedimentary rocks of the mountains against alluvial fans. Its trace is also marked by lines of vegetation, which appear red in the image. The southern, younger fault cuts through the alluvium. A dark linear area in the center of the image is wet ground where groundwater has ponded against the fault. Measurements from the image of displacements of young streams that cross the fault show 15 to 75 meters (16 to 82 yards) of left-lateral offset. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) acquired the visible light and near infrared scene on July 20, 2000. Image courtesy NASA/GSFC/MITI/ERSDAC/JAROS, and the U.S./Japan ASTER Science Team
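
    The quoted cumulative offset is consistent with the stated slip rate and time span:

      1.1\ \mathrm{cm\,yr^{-1}} \times 40\,000\ \mathrm{yr} = 44\,000\ \mathrm{cm} = 440\ \mathrm{m} \;(>400\ \mathrm{m})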

  4. Fault detection and isolation

    NASA Technical Reports Server (NTRS)

    Bernath, Greg

    1994-01-01

    In order for a current satellite-based navigation system (such as the Global Positioning System, GPS) to meet integrity requirements, there must be a way of detecting erroneous measurements, without help from outside the system. This process is called Fault Detection and Isolation (FDI). Fault detection requires at least one redundant measurement, and can be done with a parity space algorithm. The best way around the fault isolation problem is not necessarily isolating the bad measurement, but finding a new combination of measurements which excludes it.
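
    A bare-bones numerical sketch of the parity-space step mentioned above; the satellite geometry, noise level, and detection threshold are all assumed for illustration:

      import numpy as np

      rng = np.random.default_rng(1)

      # hypothetical linearized GPS geometry: 6 pseudoranges, 4 unknowns (x, y, z, clock)
      H = np.hstack([rng.uniform(-1, 1, (6, 3)), np.ones((6, 1))])

      # parity matrix: rows form an orthonormal basis of the left null space of H
      Q, _ = np.linalg.qr(H, mode="complete")
      V = Q[:, 4:].T                                   # shape (2, 6), satisfies V @ H = 0

      sigma = 0.01                                     # assumed measurement noise level
      x_true = np.array([10.0, -5.0, 3.0, 1.0])
      y = H @ x_true + sigma * rng.standard_normal(6)
      y[2] += 5.0                                      # bias fault injected on measurement 2

      p = V @ y                                        # parity vector: independent of x_true
      detected = np.linalg.norm(p) > 5 * sigma         # crude threshold (assumed)
      scores = np.abs(p @ V) / np.linalg.norm(V, axis=0)   # a fault on channel j pushes p along V[:, j]
      print(detected, int(np.argmax(scores)))          # expected: True 2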

  5. Measuring fault tolerance with the FTAPE fault injection tool

    NASA Astrophysics Data System (ADS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-05-01

    This paper describes FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The major parts of the tool include a system-wide fault-injector, a workload generator, and a workload activity measurement tool. The workload creates high stress conditions on the machine. Using stress-based injection, the fault injector is able to utilize knowledge of the workload activity to ensure a high level of fault propagation. The errors/fault ratio, performance degradation, and number of system crashes are presented as measures of fault tolerance.

  6. Measuring fault tolerance with the FTAPE fault injection tool

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-01-01

    This paper describes FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The major parts of the tool include a system-wide fault-injector, a workload generator, and a workload activity measurement tool. The workload creates high stress conditions on the machine. Using stress-based injection, the fault injector is able to utilize knowledge of the workload activity to ensure a high level of fault propagation. The errors/fault ratio, performance degradation, and number of system crashes are presented as measures of fault tolerance.

  7. OpenStudio - Fault Modeling

    Energy Science and Technology Software Center (ESTSC)

    2014-09-19

    This software record documents the OpenStudio fault model development portion of the Fault Detection and Diagnostics LDRD project. The software provides a suite of OpenStudio measures (scripts) for modeling typical HVAC system faults in commercial buildings and also includes supporting materials: example projects and OpenStudio measures for reporting fault costs and energy impacts.

  8. Hayward Fault, California Interferogram

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This image of California's Hayward fault is an interferogram created using a pair of images taken by Synthetic Aperture Radar(SAR) combined to measure changes in the surface that may have occurred between the time the two images were taken.

    The images were collected by the European Space Agency's Remote Sensing satellites ERS-1 and ERS-2 in June 1992 and September 1997 over the central San Francisco Bay in California.

    The radar image data are shown as a gray-scale image, with the interferometric measurements that show the changes rendered in color. Only the urbanized area could be mapped with these data. The color changes from orange tones to blue tones across the Hayward fault (marked by a thin red line) show about 2-3 centimeters (0.8-1.1 inches) of gradual displacement or movement of the southwest side of the fault. The block west of the fault moved horizontally toward the northwest during the 63 months between the acquisition of the two SAR images. This fault movement is called aseismic creep because the fault moved slowly without generating an earthquake.

    Scientists are using the SAR interferometry along with other data collected on the ground to monitor this fault motion in an attempt to estimate the probability of earthquake on the Hayward fault, which last had a major earthquake of magnitude 7 in 1868. This analysis indicates that the northern part of the Hayward fault is creeping all the way from the surface to a depth of 12 kilometers (7.5 miles). This suggests that the potential for a large earthquake on the northern Hayward fault might be less than previously thought. The blue area to the west (lower left) of the fault near the center of the image seemed to move upward relative to the yellow and orange areas nearby by about 2 centimeters (0.8 inches). The cause of this apparent motion is not yet confirmed, but the rise of groundwater levels during the time between the images may have caused the reversal of a small portion of the subsidence that

  9. Cable-fault locator

    NASA Technical Reports Server (NTRS)

    Cason, R. L.; Mcstay, J. J.; Heymann, A. P., Sr.

    1979-01-01

    Inexpensive system automatically indicates location of short-circuited section of power cable. Monitor does not require that cable be disconnected from its power source or that test signals be applied. Instead, ground-current sensors are installed in manholes or at other selected locations along cable run. When fault occurs, sensors transmit information about fault location to control center. Repair crew can be sent to location and cable can be returned to service with minimum of downtime.

  10. Fault rupture segmentation

    NASA Astrophysics Data System (ADS)

    Cleveland, Kenneth Michael

    A critical foundation to earthquake study and hazard assessment is the understanding of controls on fault rupture, including segmentation. Key challenges to understanding fault rupture segmentation include, but are not limited to: What determines if a fault segment will rupture in a single great event or multiple moderate events? How is slip along a fault partitioned between seismic and aseismic components? How does the seismicity of a fault segment evolve over time? How representative are past events for assessing future seismic hazards? In order to address the difficult questions regarding fault rupture segmentation, new methods must be developed that utilize the information available. Much of the research presented in this study focuses on the development of new methods for attacking the challenges of understanding fault rupture segmentation. Not only do these methods exploit a broader band of information within the waveform than has traditionally been used, but they also lend themselves to the inclusion of even more seismic phases providing deeper understandings. Additionally, these methods are designed to be fast and efficient with large datasets, allowing them to utilize the enormous volume of data available. Key findings from this body of work include a demonstration that a focus on fundamental earthquake properties at regional scales can provide a general understanding of fault rupture segmentation. We present a more modern, waveform-based method that locates events using cross-correlation of the Rayleigh waves. Additionally, cross-correlation values can also be used to calculate precise earthquake magnitudes. Finally, insight regarding earthquake rupture directivity can be easily and quickly exploited using cross-correlation of surface waves.
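
    At their core, the cross-correlation measurements described here reduce to picking the lag and peak value of a normalized correlation between two waveforms; a self-contained sketch with a synthetic wavelet pair (the sampling, shift, and wavelet shape are assumptions) is:

      import numpy as np

      def relative_delay(trace_a, trace_b, dt):
          """Lag (s) of the correlation peak and the normalized coefficient there.
          Positive lag means trace_a arrives later than trace_b."""
          a = (trace_a - trace_a.mean()) / trace_a.std()
          b = (trace_b - trace_b.mean()) / trace_b.std()
          cc = np.correlate(a, b, mode="full") / len(a)
          k = int(np.argmax(cc))
          return (k - (len(b) - 1)) * dt, float(cc[k])

      # synthetic check: the same Rayleigh-like wavelet recorded 1.5 s later
      dt = 0.1
      t = np.arange(0.0, 200.0, dt)
      reference = np.exp(-((t - 60.0) / 8.0) ** 2) * np.sin(2 * np.pi * 0.05 * t)
      delayed = np.exp(-((t - 61.5) / 8.0) ** 2) * np.sin(2 * np.pi * 0.05 * (t - 1.5))
      print(relative_delay(delayed, reference, dt))   # ~(+1.5 s, coefficient near 1)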

  11. Fault lubrication during earthquakes.

    PubMed

    Di Toro, G; Han, R; Hirose, T; De Paola, N; Nielsen, S; Mizoguchi, K; Ferri, F; Cocco, M; Shimamoto, T

    2011-03-24

    The determination of rock friction at seismic slip rates (about 1 m s(-1)) is of paramount importance in earthquake mechanics, as fault friction controls the stress drop, the mechanical work and the frictional heat generated during slip. Given the difficulty in determining friction by seismological methods, elucidating constraints are derived from experimental studies. Here we review a large set of published and unpublished experiments (∼300) performed in rotary shear apparatus at slip rates of 0.1-2.6 m s(-1). The experiments indicate a significant decrease in friction (of up to one order of magnitude), which we term fault lubrication, both for cohesive (silicate-built, quartz-built and carbonate-built) rocks and non-cohesive rocks (clay-rich, anhydrite, gypsum and dolomite gouges) typical of crustal seismogenic sources. The available mechanical work and the associated temperature rise in the slipping zone trigger a number of physicochemical processes (gelification, decarbonation and dehydration reactions, melting and so on) whose products are responsible for fault lubrication. The similarity between (1) experimental and natural fault products and (2) mechanical work measures resulting from these laboratory experiments and seismological estimates suggests that it is reasonable to extrapolate experimental data to conditions typical of earthquake nucleation depths (7-15 km). It seems that faults are lubricated during earthquakes, irrespective of the fault rock composition and of the specific weakening mechanism involved. PMID:21430777

  12. Packaged Fault Model for Geometric Segmentation of Active Faults Into Earthquake Source Faults

    NASA Astrophysics Data System (ADS)

    Nakata, T.; Kumamoto, T.

    2004-12-01

    In Japan, the empirical formula proposed by Matsuda (1975), based mainly on the length of historical surface fault ruptures and magnitude, is generally applied to estimate the size of future earthquakes from the extent of existing active faults for seismic hazard assessment. Therefore, the validity of the active fault length and defining individual segment boundaries where propagating ruptures terminate are essential and crucial to the reliability of the assessments. It is, however, not likely for us to clearly identify the behavioral earthquake segments from observation of surface faulting during the historical period, because most of the active faults have longer recurrence intervals than 1000 years in Japan. Besides, uncertainties of the datasets obtained mainly from fault trenching studies are quite large for fault grouping/segmentation. This is why new methods or criteria should be applied for active fault grouping/segmentation, and one of the candidates may be a geometric criterion of active faults. Matsuda (1990) used "five kilometers" as a critical distance for grouping and separation of neighboring active faults. On the other hand, Nakata and Goto (1998) proposed the geometric criteria such as (1) branching features of active fault traces and (2) characteristic pattern of vertical-slip distribution along the fault traces as tools to predict rupture length of future earthquakes. The branching during the fault rupture propagation is regarded as an effective energy dissipation process and could result in final rupture termination. With respect to the characteristic pattern of vertical-slip distribution, especially with strike-slip components, the up-thrown sides along the faults are, in general, located on the fault blocks in the direction of relative strike-slip. Applying these new geometric criteria to the high-resolution active fault distribution maps, the fault grouping/segmentation could be more practically conducted. We tested this model

  13. Fault Roughness Records Strength

    NASA Astrophysics Data System (ADS)

    Brodsky, E. E.; Candela, T.; Kirkpatrick, J. D.

    2014-12-01

    Fault roughness is commonly ~0.1-1% at the outcrop exposure scale. More mature faults are smoother than less mature ones, but the overall range of roughness is surprisingly limited which suggests dynamic control. In addition, the power spectra of many exposed fault surfaces follow a single power law over scales from millimeters to 10's of meters. This is another surprising observation as distinct structures such as slickenlines and mullions are clearly visible on the same surfaces at well-defined scales. We can reconcile both observations by suggesting that the roughness of fault surfaces is controlled by the maximum strain that can be supported elastically in the wallrock. If the fault surface topography requires more than 0.1-1% strain, it fails. Invoking wallrock strength explains two additional observations on the Corona Heights fault for which we have extensive roughness data. Firstly, the surface is isotropic below a scale of 30 microns and has grooves at larger scales. Samples from at least three other faults (Dixie Valley, Mount St. Helens and San Andreas) also are isotropic at scales below 10's of microns. If grooves can only persist when the walls of the grooves have a sufficiently low slope to maintain the shape, this scale of isotropy can be predicted based on the measured slip perpendicular roughness data. The observed 30 micron scale at Corona Heights is consistent with an elastic strain of 0.01 estimated from the observed slip perpendicular roughness with a Hurst exponent of 0.8. The second observation at Corona Heights is that slickenlines are not deflected around meter-scale mullions. Yielding of these mullions at centimeter to meter scale is predicted from the slip parallel roughness as measured here. The success of the strain criterion for Corona Heights supports it as the appropriate control on fault roughness. Micromechanically, the criterion implies that failure of the fault surface is a continual process during slip. Macroscopically, the

  14. Fault reactivation control on normal fault growth: an experimental study

    NASA Astrophysics Data System (ADS)

    Bellahsen, Nicolas; Daniel, Jean Marc

    2005-04-01

    Field studies frequently emphasize how fault reactivation is involved in the deformation of the upper crust. However, this phenomenon is generally neglected (except in inversion models) in analogue and numerical models performed to study fault network growth. Using sand/silicon analogue models, we show how pre-existing discontinuities can control the geometry and evolution of a younger fault network. The models show that the reactivation of pre-existing discontinuities and their orientation control: (i) the evolution of the main fault orientation distribution through time, (ii) the geometry of relay fault zones, (iii) the geometry of small scale faulting, and (iv) the geometry and location of fault-controlled basins and depocenters. These results are in good agreement with natural fault networks observed in both the Gulf of Suez and Lake Tanganyika. They demonstrate that heterogeneities such as pre-existing faults should be included in models designed to understand the behavior and the tectonic evolution of sedimentary basins.

  15. Validated Fault Tolerant Architectures for Space Station

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.

    1990-01-01

    Viewgraphs on validated fault tolerant architectures for space station are presented. Topics covered include: fault tolerance approach; advanced information processing system (AIPS); and fault tolerant parallel processor (FTPP).

  16. Fault terminations, Seminoe Mountains, Wyoming

    SciTech Connect

    Dominic, J.B.; McConnell, D.A. (Dept. of Geology)

    1992-01-01

    Two basement-involved faults terminate in folds in the Seminoe Mountains. Mesoscopic and macroscopic structures in sedimentary rocks provide clues to the interrelationship of faults and folds in this region, and on the linkage between faulting and folding in general. The Hurt Creek fault trends 320° and has maximum separation of 1.5 km measured at the basement/cover contact. Separation on the fault decreases upsection to zero within the Jurassic Sundance Formation. Unfaulted rock units form an anticline around the fault tip. The complementary syncline is angular with planar limbs and a narrow hinge zone. The syncline axial trace intersects the fault in the footwall at the basement/cover cut-off. Map patterns are interpreted to show thickening of Mesozoic units adjacent to the syncline hinge. In contrast, extensional structures are common in the faulted anticline within the Permian Goose Egg and Triassic Chugwater Formations. A hanging wall splay fault loses separation into the Goose Egg formation which is thinned by 50% at the fault tip. Mesoscopic normal faults are oriented 320-340° and have an average inclination of 75° SW. Megaboudins of Chugwater are present in the footwall of the Hurt Creek fault, immediately adjacent to the fault trace. The Black Canyon fault transported Precambrian-Pennsylvanian rocks over Pennsylvanian Tensleep sandstone. This fault is layer-parallel at the top of the Tensleep and loses separation along strike into an unfaulted syncline in the Goose Egg Formation. Shortening in the pre-Permian units is accommodated by slip on the basement-involved Black Canyon fault. Equivalent shortening in Permian-Cretaceous units occurs on a system of "thin-skinned" thrust faults.

  17. Cable fault locator research

    NASA Astrophysics Data System (ADS)

    Cole, C. A.; Honey, S. K.; Petro, J. P.; Phillips, A. C.

    1982-07-01

    Cable fault location and the construction of four field test units are discussed. Swept frequency sounding of mine cables with RF signals was the technique most thoroughly investigated. The swept frequency technique is supplemented with a form of moving target indication to provide a method for locating the position of a technician along a cable and relative to a suspected fault. Separate, more limited investigations involved high voltage time domain reflectometry and acoustical probing of mine cables. Particular areas of research included microprocessor-based control of the swept frequency system, a microprocessor based fast Fourier transform for spectral analysis, and RF synthesizers.
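
    One common way to turn a swept-frequency measurement into a fault distance is to read the beat frequency between the outgoing sweep and its reflection (an FMCW-style relationship); the sketch below shows only that relationship, not the report's hardware, and the cable velocity, sweep rate, and fault distance are assumed:

      import numpy as np

      v_cable = 2.0e8          # assumed propagation velocity in the cable, m/s
      sweep_rate = 1.0e10      # assumed sweep rate, Hz/s
      fs, T = 1.0e6, 10e-3     # sample rate and observation time of the mixed-down signal
      d_true = 750.0           # assumed distance to the fault, m

      # a reflection from distance d returns after tau = 2 d / v and beats at f = sweep_rate * tau
      t = np.arange(0.0, T, 1.0 / fs)
      tau = 2.0 * d_true / v_cable
      beat = np.cos(2.0 * np.pi * sweep_rate * tau * t)          # idealized mixer output

      spectrum = np.abs(np.fft.rfft(beat))
      freqs = np.fft.rfftfreq(len(beat), 1.0 / fs)
      f_beat = freqs[np.argmax(spectrum[1:]) + 1]                # skip the DC bin
      print(f_beat * v_cable / (2.0 * sweep_rate))               # recovered distance, ~750 m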

  18. DIFFERENTIAL FAULT SENSING CIRCUIT

    DOEpatents

    Roberts, J.H.

    1961-09-01

    A differential fault sensing circuit is designed for detecting arcing in high-voltage vacuum tubes arranged in parallel. A circuit is provided which senses differences in voltages appearing between corresponding elements likely to fault. Sensitivity of the circuit is adjusted to some level above which arcing will cause detectable differences in voltage. For particular corresponding elements, a group of pulse transformers are connected in parallel with diodes connected across the secondaries thereof so that only voltage excursions are transmitted to a thyratron which is biased to the sensitivity level mentioned.

  19. Computer hardware fault administration

    DOEpatents

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.

  20. Ius Chasma Fault

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-415, 8 July 2003

    This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a 'text-book example' of an offset in layered rock caused by a fault. The offset is most easily seen near the upper right of the image. The martian crust is faulted, and the planet has probably experienced 'earthquakes' (or, marsquakes) in the past. This scene is located on the floor of Ius Chasma near 7.8°S, 80.6°W. Sunlight illuminates the scene from the upper left.

  1. Fault tolerant linear actuator

    DOEpatents

    Tesar, Delbert

    2004-09-14

    In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.

  2. Fault displacement hazard for strike-slip faults

    USGS Publications Warehouse

    Petersen, M.D.; Dawson, T.E.; Chen, R.; Cao, T.; Wills, C.J.; Schwartz, D.P.; Frankel, A.D.

    2011-01-01

    In this paper we present a methodology, data, and regression equations for calculating the fault rupture hazard at sites near steeply dipping, strike-slip faults. We collected and digitized on-fault and off-fault displacement data for 9 global strike-slip earthquakes ranging from moment magnitude M 6.5 to M 7.6 and supplemented these with displacements from 13 global earthquakes compiled by Wesnousky (2008), who considers events up to M 7.9. Displacements on the primary fault fall off at the rupture ends and are often measured in meters, while displacements on secondary (off-fault) or distributed faults may measure a few centimeters up to more than a meter and decay with distance from the rupture. Probability of earthquake rupture is less than 15% for cells 200 m × 200 m and is less than 2% for 25 m × 25 m cells at distances greater than 200 m from the primary-fault rupture. Therefore, the hazard for off-fault ruptures is much lower than the hazard near the fault. Our data indicate that rupture displacements up to 35 cm can be triggered on adjacent faults at distances out to 10 km or more from the primary-fault rupture. An example calculation shows that, for an active fault which has repeated large earthquakes every few hundred years, fault rupture hazard analysis should be an important consideration in the design of structures or lifelines that are located near the principal fault, within about 150 m of well-mapped active faults with a simple trace and within 300 m of faults with poorly defined or complex traces.
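
    The hazard calculation implied here chains an earthquake-occurrence term with conditional rupture and displacement terms; the sketch below shows only that structure, using placeholder decay functions and coefficients that are not the regressions of this paper:

      import numpy as np

      def p_offfault_exceedance(distance_m, disp_cm, annual_rate=1/300.0, t_years=50.0):
          """Toy PFDHA-style chain for a site off the principal trace.
          Every functional form and constant here is an illustrative placeholder."""
          p_event = 1.0 - np.exp(-annual_rate * t_years)        # at least one earthquake in t_years
          p_slip_at_cell = 0.10 * np.exp(-distance_m / 200.0)   # secondary rupture reaches the cell
          p_exceed = np.exp(-disp_cm / 10.0)                     # displacement exceeds disp_cm, given slip
          return p_event * p_slip_at_cell * p_exceed

      print(p_offfault_exceedance(distance_m=100.0, disp_cm=5.0))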

  3. Fault tree models for fault tolerant hypercube multiprocessors

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Tuazon, Jezus O.

    1991-01-01

    Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.

  4. Characteristics of On-fault and Off-fault displacement of various fault types based on numerical simulation

    NASA Astrophysics Data System (ADS)

    Inoue, N.; Kitada, N.; Takemura, K.

    2015-12-01

    There are two types of fault displacement related to an earthquake fault: on-fault displacement and off-fault displacement. Off-fault displacement should be evaluated for important facilities, such as nuclear installations. Probabilistic Fault Displacement Hazard Analysis (PFDHA) is being developed on the basis of PSHA. PFDHA estimates on-fault and off-fault displacement. For estimation, PFDHA uses distance-displacement functions, which are constructed from field measurement data. However, observed displacement data are still sparse, especially for off-fault displacement. For nuclear installations, estimation of off-fault displacement is more important than that of on-fault displacement. We carried out numerical fault displacement simulations to assist in understanding the distance-displacement relations of on-fault and off-fault displacement for different fault types: normal, reverse, and strike-slip. We used Okada's dislocation method. The displacements were calculated based on a single fault model with several rakes of slip. On-fault displacements (along the fault profile) of each fault type show a similar trend. Off-fault displacements (on profiles across the fault) of the vertical (reverse and normal) fault types show rapidly decreasing displacement on the footwall side. In the presentation, we will show the displacement profiles and also stress, strain, and so on. The dislocation model cannot express discontinuous displacements. In the future, we will apply various numerical simulation methods (Finite Element Method, Distinct Element Method) in order to evaluate off-fault displacements. We will also compare numerical simulation results with observed data.
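
    The study uses Okada's 3-D dislocation solution; as a much simpler stand-in that still shows how surface displacement varies across the trace, the classical 2-D screw-dislocation profile for a vertical strike-slip fault can be written as follows (the slip amount and depth range are assumed values):

      import numpy as np

      def strike_slip_surface_u(x_km, slip_m=2.0, top_km=0.0, bottom_km=10.0):
          """Fault-parallel surface displacement for uniform slip between two depths on a
          vertical strike-slip fault (2-D screw dislocation, antisymmetric about the trace):
              u(x) = (s / pi) * [arctan(bottom / x) - arctan(top / x)]."""
          x = np.asarray(x_km, dtype=float)
          return (slip_m / np.pi) * (np.arctan(bottom_km / x) - np.arctan(top_km / x))

      x = np.array([-30.0, -10.0, -1.0, 1.0, 10.0, 30.0])      # km from the trace (x = 0 excluded)
      print(strike_slip_surface_u(x))                           # approaches +/- slip/2 near the trace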

  5. The property of fault zone and fault activity of Shionohira Fault, Fukushima, Japan

    NASA Astrophysics Data System (ADS)

    Seshimo, K.; Aoki, K.; Tanaka, Y.; Niwa, M.; Kametaka, M.; Sakai, T.; Tanaka, Y.

    2015-12-01

    The April 11, 2011 Fukushima-ken Hamadori Earthquake (hereafter the 4.11 earthquake) formed co-seismic surface ruptures trending in the NNW-SSE direction in Iwaki City, Fukushima Prefecture, which were newly named as the Shionohira Fault by Ishiyama et al. (2011). This earthquake was characterized by a westward dipping normal slip faulting, with a maximum displacement of about 2 m (e.g., Kurosawa et al., 2012). To the south of the area, the same trending lineaments were recognized to exist even though no surface ruptures occurred by the earthquake. In an attempt to elucidate the differences of active and non-active segments of the fault, this report discusses the results of observation of fault outcrops along the Shionohira Fault as well as the Coulomb stress calculations. Only a few outcrops have basement rocks of both the hanging-wall and foot-wall of the fault plane. Three of these outcrops (Kyodo-gawa, Shionohira and Betto) were selected for investigation. In addition, a fault outcrop (Nameishi-minami) located about 300 m south of the southern tip of the surface ruptures was investigated. The authors carried out observations of outcrops, polished slabs and thin sections, and performed X-ray diffraction (XRD) to fault materials. As a result, the fault zones originating from schists were investigated at Kyodo-gawa and Betto. A thick fault gouge was cut by a fault plane of the 4.11 earthquake in each outcrop. The fault materials originating from schists were fault bounded with (possibly Neogene) weakly deformed sandstone at Shionohira. A thin fault gouge was found along the fault plane of 4.11 earthquake. A small-scale fault zone with thin fault gouge was observed in Nameishi-minami. According to XRD analysis, smectite was detected in the gouges from Kyodo-gawa, Shionohira and Betto, while not in the gouge from Nameishi-minami.

  6. Towards Fault Resilient Global Arrays

    SciTech Connect

    Tipparaju, Vinod; Krishnan, Manoj Kumar; Palmer, Bruce J.; Petrini, Fabrizio; Nieplocha, Jaroslaw

    2007-09-03

    The focus of the current paper is adding fault resiliency to Global Arrays. We extended the GA toolkit to provide a minimal level of capabilities enabling the programmer to implement fault resiliency at the user level. Our fault-recovery approach is programmer assisted and based on frequent incremental checkpoints and rollback recovery. In addition, it relies on a pool of spare nodes that are used to replace a failing node. We demonstrate the usefulness of fault resilient Global Arrays in an application context.

  7. Row fault detection system

    DOEpatents

    Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward

    2012-02-07

    An apparatus, program product and method check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.
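
    The neighbor-exchange check can be illustrated with a small simulation: every node in the row exchanges a message with its left and right neighbors, and the pattern of failed links points to the faulty node or connection. The row size and the injected fault below are assumptions for illustration, not details of the patented apparatus.

```python
def neighbor_check(n_nodes, faulty=None):
    """Simulate each node exchanging a message with its adjacent neighbors.
    Returns the set of links (i, i+1) on which communication failed."""
    failed_links = set()
    for i in range(n_nodes - 1):
        # A link check fails if either endpoint is the faulty node.
        if faulty is not None and faulty in (i, i + 1):
            failed_links.add((i, i + 1))
    return failed_links

def diagnose(n_nodes, failed_links):
    """Nodes adjacent to every failed link are the prime suspects."""
    if not failed_links:
        return []
    return [node for node in range(n_nodes)
            if all(node in link for link in failed_links)]

if __name__ == "__main__":
    n = 8
    links = neighbor_check(n, faulty=3)
    print("failed links:", sorted(links))        # [(2, 3), (3, 4)]
    print("suspect nodes:", diagnose(n, links))  # [3]
```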

  8. Row fault detection system

    DOEpatents

    Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward

    2010-02-23

    An apparatus and program product check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.

  9. Dynamic Fault Detection Chassis

    SciTech Connect

    Mize, Jeffery J

    2007-01-01

    The high-frequency switching, megawatt-class High Voltage Converter Modulator (HVCM) developed by Los Alamos National Laboratory for the Oak Ridge National Laboratory's Spallation Neutron Source (SNS) is now in operation. One of the major problems with the modulator systems is shoot-through conditions that can occur in an IGBT H-bridge topology, resulting in large fault currents and device failure within a few microseconds. The Dynamic Fault Detection Chassis (DFDC) is a fault monitoring system; it monitors transformer flux saturation using a window comparator, and dV/dt events on the cathode voltage caused by any abnormality such as capacitor breakdown, shorted transformer primary turns, or dielectric breakdown between the transformer primary and secondary. If faults are detected, the DFDC inhibits the IGBT gate drives and shuts the system down, significantly reducing the possibility of a shoot-through condition or other equipment-damaging events. In this paper, we present system integration considerations and performance characteristics of the DFDC, and discuss its ability to significantly reduce costly downtime for the entire facility.
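
    The two detection criteria described (a window comparator on the flux signal and a dV/dt check on the cathode voltage) can be sketched in software form. The thresholds and waveforms below are illustrative assumptions, not DFDC design values.

```python
import numpy as np

def dfdc_monitor(flux, v_cathode, t, flux_lo=-0.9, flux_hi=0.9, dvdt_limit=5e9):
    """Return the index of the first sample that trips either the flux window
    comparator or the dV/dt detector, or None if no fault is seen."""
    dvdt = np.gradient(v_cathode, t)                      # V/s
    window_trip = (flux < flux_lo) | (flux > flux_hi)     # flux outside safe window
    dvdt_trip = np.abs(dvdt) > dvdt_limit                 # abnormally fast voltage swing
    trips = np.flatnonzero(window_trip | dvdt_trip)
    return int(trips[0]) if trips.size else None

if __name__ == "__main__":
    t = np.linspace(0.0, 100e-6, 2001)             # 100 microseconds of samples
    flux = 0.8 * np.sin(2 * np.pi * 20e3 * t)      # normalized core flux, within window
    v_cathode = 1e5 * np.ones_like(t)              # ~100 kV cathode voltage (assumed)
    v_cathode[1200:] = 0.0                         # sudden collapse, e.g. a breakdown
    i = dfdc_monitor(flux, v_cathode, t)
    if i is not None:
        print(f"fault detected at t = {t[i]*1e6:.2f} us -> inhibit IGBT gates")
```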

  10. Row fault detection system

    SciTech Connect

    Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward

    2008-10-14

    An apparatus, program product and method checks for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.

  11. Fault-Related Sanctuaries

    NASA Astrophysics Data System (ADS)

    Piccardi, L.

    2001-12-01

    Beyond the study of historical surface faulting events, this work investigates the possibility, in specific cases, of identifying pre-historical events whose memory survives in myths and legends. The myths of many famous sacred places of the ancient world contain relevant telluric references: "sacred" earthquakes, openings to the Underworld and/or chthonic dragons. Given the strong correspondence with local geological evidence, these myths may be considered as describing natural phenomena. It has been possible in this way to shed light on the geologic origin of famous myths (Piccardi, 1999, 2000 and 2001). Interdisciplinary research reveals that the origin of several ancient sanctuaries may be linked in particular to peculiar geological phenomena observed on local active faults (such as ground shaking and coseismic surface ruptures, gas and flame emissions, and strong underground rumblings). In many of these sanctuaries the sacred area lies directly above the active fault. In a few cases, faulting has also affected the archaeological relics, right through the main temple (e.g. Delphi, Cnidus, Hierapolis of Phrygia). As such, the arrangement of the cult sites and the content of the related myths suggest that specific points along the trace of active faults were noticed in the past and worshipped as special 'sacred' places, most likely interpreted as Hades' Doors. The mythological stratification of most of these sanctuaries dates back to prehistory and points to a common derivation from the cult of the Mother Goddess (the Lady of the Doors), which was widespread since at least 25,000 BC. The cult itself was later reconverted into various different divinities, while the 'sacred doors' of the Great Goddess and/or the dragons (offspring of Mother Earth and generally regarded as Keepers of the Doors) persisted in more recent mythologies. Piccardi L., 1999: The "Footprints" of the Archangel: Evidence of Early-Medieval Surface Faulting at Monte Sant'Angelo (Gargano, Italy

  12. Quantifying Anderson's fault types

    USGS Publications Warehouse

    Simpson, R.W.

    1997-01-01

    Anderson [1905] explained three basic types of faulting (normal, strike-slip, and reverse) in terms of the shape of the causative stress tensor and its orientation relative to the Earth's surface. Quantitative parameters can be defined which contain information about both shape and orientation [Célérier, 1995], thereby offering a way to distinguish fault-type domains on plots of regional stress fields and to quantify, for example, the degree of normal-faulting tendencies within strike-slip domains. This paper offers a geometrically motivated generalization of Angelier's [1979, 1984, 1990] shape parameters Φ and Ψ to new quantities named Aφ and Aψ. In their simple forms, Aφ varies from 0 to 1 for normal, 1 to 2 for strike-slip, and 2 to 3 for reverse faulting, and Aψ ranges from 0° to 60°, 60° to 120°, and 120° to 180°, respectively. After scaling, Aφ and Aψ agree to within 2% (or 1°), a difference of little practical significance, although Aφ has smoother analytical properties. A formulation distinguishing horizontal axes as well as the vertical axis is also possible, yielding an Aφ ranging from -3 to +3 and Aψ from -180° to +180°. The geometrically motivated derivation in three-dimensional stress space presented here may aid intuition and offers a natural link with traditional ways of plotting yield and failure criteria. Examples are given, based on models of Bird [1996] and Bird and Kong [1994], of the use of Anderson fault parameters Aφ and Aψ for visualizing tectonic regimes defined by regional stress fields. Copyright 1997 by the American Geophysical Union.
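
    For readers who want to compute the fault-type parameter, a minimal sketch follows using the commonly cited form of the Aφ parameter, built from the stress-shape ratio φ = (σ2 - σ3)/(σ1 - σ3) and an Andersonian regime index n (0 = normal, 1 = strike-slip, 2 = reverse). The example stresses are assumed values.

```python
def a_phi(sigma1, sigma2, sigma3, regime):
    """Fault-type parameter in the commonly cited form
    A_phi = (n + 0.5) + (-1)**n * (phi - 0.5).
    regime: 0 for normal, 1 for strike-slip, 2 for reverse faulting.
    Returns a value in [0, 1] (normal), [1, 2] (strike-slip) or [2, 3] (reverse)."""
    phi = (sigma2 - sigma3) / (sigma1 - sigma3)   # stress shape ratio, between 0 and 1
    return (regime + 0.5) + (-1) ** regime * (phi - 0.5)

if __name__ == "__main__":
    # Assumed principal stresses (MPa), most to least compressive.
    print(a_phi(100.0, 60.0, 40.0, regime=1))   # strike-slip regime -> between 1 and 2
    print(a_phi(100.0, 55.0, 10.0, regime=0))   # normal regime     -> between 0 and 1
```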

  13. Earthquakes and fault creep on the northern San Andreas fault

    USGS Publications Warehouse

    Nason, R.

    1979-01-01

    At present there is an absence of both fault creep and small earthquakes on the northern San Andreas fault, which had a magnitude 8 earthquake with 5 m of slip in 1906. The fault has apparently been dormant after the 1906 earthquake. One possibility is that the fault is 'locked' in some way and only produces great earthquakes. An alternative possibility, presented here, is that the lack of current activity on the northern San Andreas fault is because of a lack of sufficient elastic strain after the 1906 earthquake. This is indicated by geodetic measurements at Fort Ross in 1874, 1906 (post-earthquake), and 1969, which show that the strain accumulation in 1969 (69 × 10⁻⁶ engineering strain) was only about one-third of the strain release (rebound) in the 1906 earthquake (200 × 10⁻⁶ engineering strain). The large difference in seismicity before and after 1906, with many strong local earthquakes from 1836 to 1906, but only a few strong earthquakes from 1906 to 1976, also indicates a difference of elastic strain. The geologic characteristics (serpentine, fault straightness) of most of the northern San Andreas fault are very similar to the characteristics of the fault south of Hollister, where fault creep is occurring. Thus, the current absence of fault creep on the northern fault segment is probably due to a lack of sufficient elastic strain at the present time. © 1979.

  14. An empirical comparison of software fault tolerance and fault elimination

    NASA Technical Reports Server (NTRS)

    Shimeall, Timothy J.; Leveson, Nancy G.

    1991-01-01

    Reliability is an important concern in the development of software for modern systems. Some researchers have hypothesized that particular fault-handling approaches or techniques are so effective that other approaches or techniques are superfluous. The authors have performed a study that compares two major approaches to the improvement of software, software fault elimination and software fault tolerance, by examination of the fault detection obtained by five techniques: run-time assertions, multi-version voting, functional testing augmented by structural testing, code reading by stepwise abstraction, and static data-flow analysis. This study has focused on characterizing the sets of faults detected by the techniques and on characterizing the relationships between these sets of faults. The results of the study show that none of the techniques studied is necessarily redundant to any combination of the others. Further results reveal strengths and weaknesses in the fault detection by the techniques studied and suggest directions for future research.

  15. Fault diagnosis of analog circuits

    SciTech Connect

    Bandler, J.W.; Salama, A.E.

    1985-08-01

    In this paper, various fault location techniques in analog networks are described and compared. The emphasis is on the more recent developments in the subject. Four main approaches for fault location are addressed, examined, and illustrated using simple network examples. In particular, we consider the fault dictionary approach, the parameter identification approach, the fault verification approach, and the approximation approach. Theory and algorithms that are associated with these approaches are reviewed and problems of their practical application are identified. Associated with the fault dictionary approach we consider fault dictionary construction techniques, methods of optimum measurement selection, different fault isolation criteria, and efficient fault simulation techniques. Parameter identification techniques that utilize either linear or nonlinear systems of equations to identify all network elements are examined very thoroughly. Under fault verification techniques we discuss node-fault diagnosis, branch-fault diagnosis, subnetwork testability conditions as well as combinatorial techniques, the failure bound technique, and the network decomposition technique. For the approximation approach we consider probabilistic methods and optimization-based methods. The artificial intelligence technique and the different measures of testability are also considered. The main features of the techniques considered are summarized in a comparative table. An extensive, but not exhaustive, bibliography is provided.
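
    Of the four approaches, the fault dictionary approach lends itself to a compact illustration: simulate the circuit under the nominal condition and under each hypothesized fault, store the resulting measurement signatures, then match an observed measurement to the nearest stored signature. The toy two-resistor divider, the fault list and the measurement below are assumptions for illustration only.

```python
import math

def simulate(vin, r1, r2):
    """Toy circuit under test: resistive divider.
    Signature = (tap voltage in V, supply current in mA)."""
    i = vin / (r1 + r2)
    return (i * r2, i * 1e3)

def build_dictionary(vin=10.0, r1=1e3, r2=2e3):
    """Simulate the nominal circuit and a set of single hard faults."""
    cases = {
        "nominal":  (r1, r2),
        "R1 open":  (1e9, r2),    # open modeled as a very large resistance
        "R1 short": (1e-3, r2),   # short modeled as a very small resistance
        "R2 open":  (r1, 1e9),
        "R2 short": (r1, 1e-3),
    }
    return {name: simulate(vin, a, b) for name, (a, b) in cases.items()}

def locate_fault(measurement, dictionary):
    """Nearest-signature fault isolation."""
    return min(dictionary, key=lambda name: math.dist(dictionary[name], measurement))

if __name__ == "__main__":
    fault_dict = build_dictionary()
    # Observed: tap stuck near the supply rail, supply current roughly 5 mA.
    print(locate_fault((9.9, 4.9), fault_dict))   # expected: "R1 short"
```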

  16. Fault intersections along the Hosgri Fault Zone, Central California

    NASA Astrophysics Data System (ADS)

    Watt, J. T.; Johnson, S. Y.; Langenheim, V. E.

    2011-12-01

    It is well established that stresses concentrate at fault intersections or bends when subjected to tectonic loading, making focused studies of these areas particularly important for seismic hazard analysis. In addition, detailed fault models can be used to investigate how slip on one fault might transfer to another during an earthquake. We combine potential-field, high-resolution seismic-reflection, and multibeam bathymetry data with existing geologic and seismicity data to investigate the fault geometry and connectivity of the Hosgri, Los Osos, and Shoreline faults offshore of San Luis Obispo, California. The intersection of the Hosgri and Los Osos faults in Estero Bay is complex. The offshore extension of the Los Osos fault, as imaged with multibeam and high-resolution seismic data, is characterized by a west-northwest-trending zone (1-3 km wide) of near-vertical faulting. Three distinct strands (northern, central, and southern) are visible on shallow seismic reflection profiles. The steep dip, combined with dramatic changes in reflection character across mapped faults within this zone, suggests horizontal offset of rock units and argues for predominantly strike-slip motion; however, the present orientation of the fault zone suggests oblique slip. As the Los Osos fault zone approaches the Hosgri fault, the northern and central strands become progressively more northwest-trending, in line with the Hosgri fault. The northern strand runs subparallel to the Hosgri fault along the edge of a long-wavelength magnetic anomaly, intersecting the Hosgri fault southwest of Point Estero. Geophysical modeling suggests the northern strand dips 70° to the northeast, which is in agreement with earthquake focal mechanisms that parallel this strand. The central strand bends northward and intersects the Hosgri fault directly west of Morro Rock, corresponding to an area of compressional deformation visible in shallow seismic-reflection profiles. The southern strand of the Los Osos

  17. Fault Scarp Offsets and Fault Population Analysis on Dione

    NASA Astrophysics Data System (ADS)

    Tarlow, S.; Collins, G. C.

    2010-12-01

    Cassini images of Dione show several fault zones cutting through the moon's icy surface. We have measured the displacement and length of 271 faults, and estimated the strain occurring in 6 different fault zones. These measurements allow us to quantify the total amount of surface strain on Dione as well as constrain what processes might have caused these faults to form. Though we do not have detailed topography across fault scarps on Dione, we can use their projected size on the camera plane to estimate their heights, assuming a reasonable surface slope. Starting with high resolution images of Dione obtained by the Cassini ISS, we marked points from the top to the bottom of each fault scarp to measure the fault's projected displacement and its orientation along strike. Line and sample information for the measurements were then processed through ISIS to derive latitude/longitude information and pixel dimensions. We then calculate the three-dimensional orientation of a vector running from the bottom to the top of the fault scarp, assuming a 45 degree angle with respect to the surface, and project this vector onto the spacecraft camera plane. This projected vector gives us a correction factor to estimate the actual vertical displacement of the fault scarp. This process was repeated many times for each fault, to show variations of displacement along the length of the fault. To compare each fault to its neighbors and see how strain was accommodated across a population of faults, we divided the faults into fault zones, and created new coordinate systems oriented along the central axis of each fault zone. We could then quantify the amount of fault overlap and add the displacement of overlapping faults to estimate the amount of strain accommodated in each zone. Faults in the southern portion of Padua have a strain of 0.031 ± 0.0097, central Padua exhibits a strain of 0.032 ± 0.012, and faults in northern Padua have a strain of 0.025 ± 0.0080. The western faults of
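
    The projection correction the authors describe can be sketched as a small vector computation: build the bottom-to-top scarp vector assuming a 45° face, project it onto the plane perpendicular to the camera line of sight, and take the ratio of true vertical offset to projected length as the correction factor. The geometry below (local east/north/up frame, strike azimuth, viewing direction, measured apparent offset) is an assumed illustration, not data from the study.

```python
import numpy as np

def scarp_correction_factor(strike_az_deg, face_angle_deg, look_dir):
    """Ratio of true vertical displacement to the scarp vector's projected
    length on the camera plane (the plane normal to the look direction)."""
    strike = np.radians(strike_az_deg)
    dip_dir = strike + np.pi / 2                      # face assumed to rise normal to strike
    face = np.radians(face_angle_deg)
    # Unit vector from scarp bottom to scarp top in a local (east, north, up) frame.
    v = np.array([np.cos(face) * np.sin(dip_dir),
                  np.cos(face) * np.cos(dip_dir),
                  np.sin(face)])
    c = np.asarray(look_dir, dtype=float)
    c /= np.linalg.norm(c)                            # camera line of sight
    v_proj = v - np.dot(v, c) * c                     # projection onto the camera plane
    return v[2] / np.linalg.norm(v_proj)              # vertical component / apparent length

if __name__ == "__main__":
    # Assumed example: scarp striking N60E, 45 degree face, oblique viewing geometry.
    factor = scarp_correction_factor(60.0, 45.0, look_dir=[0.3, -0.4, -0.87])
    apparent_offset_m = 120.0                         # length measured on the image plane
    print(f"estimated vertical displacement: {factor * apparent_offset_m:.1f} m")
```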

  18. Abnormal fault-recovery characteristics of the fault-tolerant multiprocessor uncovered using a new fault-injection methodology

    NASA Astrophysics Data System (ADS)

    Padilla, Peter A.

    1991-03-01

    An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.

  19. Abnormal fault-recovery characteristics of the fault-tolerant multiprocessor uncovered using a new fault-injection methodology

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1991-01-01

    An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.

  20. Holocene faulting on the Mission fault, northwest Montana

    SciTech Connect

    Ostenaa, D.A.; Klinger, R.E.; Levish, D.R.

    1993-04-01

    South of Flathead Lake, fault scarps on late Quaternary surfaces are nearly continuous for 45 km along the western flank of the Mission Range. On late Pleistocene alpine lateral moraines, scarp heights reach a maximum of 17 m. Scarp heights on post-glacial Lake Missoula surfaces range from 2.6-7.2 m and maximum scarp angles range from 10°-24°. The stratigraphy exposed in seven trenches across the fault demonstrates that the post-glacial Lake Missoula scarps resulted from at least two surface-faulting events. Larger scarp heights on late Pleistocene moraines suggest a possible third event. This yields an estimated recurrence of 4-8 kyr. Analyses of scarp profiles show that the age of the most recent surface faulting is middle Holocene, consistent with stratigraphic evidence found in the trenches. Rupture length and displacement imply earthquake magnitudes of 7 to 7.5. Previous studies have not identified geologic evidence of late Quaternary surface faulting in the Rocky Mountain Trench or on faults north of the Lewis and Clark line despite abundant historic seismicity in the Flathead Lake area. In addition to the Mission fault, reconnaissance studies have located late Quaternary fault scarps along portions of faults bordering the Jocko and Thompson Valleys. These are the first documented late Pleistocene/Holocene faults north of the Lewis and Clark line in Montana and should greatly revise estimates of earthquake hazards in this region.

  1. Managing Fault Management Development

    NASA Technical Reports Server (NTRS)

    McDougal, John M.

    2010-01-01

    As the complexity of space missions grows, development of Fault Management (FM) capabilities is an increasingly common driver for significant cost overruns late in the development cycle. FM issues and the resulting cost overruns are rarely caused by a lack of technology, but rather by a lack of planning and emphasis by project management. A recent NASA FM Workshop brought together FM practitioners from a broad spectrum of institutions, mission types, and functional roles to identify the drivers underlying FM overruns and recommend solutions. They identified a number of areas in which increased program and project management focus can be used to control FM development cost growth. These include up-front planning for FM as a distinct engineering discipline; managing different, conflicting, and changing institutional goals and risk postures; ensuring the necessary resources for a disciplined, coordinated approach to end-to-end fault management engineering; and monitoring FM coordination across all mission systems.

  2. Dynamic faulting on a conjugate fault system detected by near-fault tilt measurements

    NASA Astrophysics Data System (ADS)

    Fukuyama, Eiichi

    2015-03-01

    There have been reports of conjugate faults that have ruptured during earthquakes. However, it is still unclear whether or not these conjugate faults ruptured coseismically during earthquakes. In this paper, we investigated near-fault ground tilt motions observed at the IWTH25 station during the 2008 Iwate-Miyagi Nairiku earthquake (Mw 6.9). Since near-fault tilt motion is very sensitive to the fault geometry on which the slip occurs during an earthquake, these data make it possible to distinguish between the main fault rupture and a rupture on the conjugate fault. We examined several fault models that have already been proposed and confirmed that only the models with a conjugate fault could explain the tilt data observed at IWTH25. The results support the existence of simultaneous conjugate faulting during the main rupture. This will contribute to the understanding of earthquake rupture dynamics because the conjugate rupture releases the same shear strain as that released on the main fault, and thus it has been considered quite difficult for both ruptures to accelerate simultaneously.

  3. Fault management for data systems

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Iverson, David L.; Patterson-Hine, F. Ann

    1993-01-01

    Issues related to automating the process of fault management (fault diagnosis and response) for data management systems are considered. Substantial benefits are to be gained by successful automation of this process, particularly for large, complex systems. The use of graph-based models to develop a computer assisted fault management system is advocated. The general problem is described and the motivation behind choosing graph-based models over other approaches for developing fault diagnosis computer programs is outlined. Some existing work in the area of graph-based fault diagnosis is reviewed, and a new fault management method which was developed from existing methods is offered. Our method is applied to an automatic telescope system intended as a prototype for future lunar telescope programs. Finally, an application of our method to general data management systems is described.
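
    A minimal sketch of the graph-based idea: represent fault propagation as a directed graph from components to the sensors and subsystems they affect, then keep as candidates only the components whose downstream reach explains every abnormal sensor reading. The component and sensor names below are hypothetical, not the systems modeled in the paper.

```python
def reachable(graph, start):
    """All nodes reachable from `start` (simple depth-first search), including start."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

def candidate_faults(graph, abnormal_sensors):
    """Components whose downstream effects cover every abnormal sensor reading."""
    abnormal = set(abnormal_sensors)
    return [comp for comp in graph if abnormal <= reachable(graph, comp)]

if __name__ == "__main__":
    # Hypothetical propagation graph: component -> directly affected nodes/sensors.
    edges = {
        "pump":          ["line_pressure"],
        "line_pressure": ["pressure_sensor", "actuator"],
        "actuator":      ["position_sensor"],
        "power_bus":     ["pump", "actuator"],
    }
    # Prints every component whose effects can explain both abnormal readings.
    print(candidate_faults(edges, ["pressure_sensor", "position_sensor"]))
```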

  4. Fluid involvement in normal faulting

    NASA Astrophysics Data System (ADS)

    Sibson, Richard H.

    2000-04-01

    Evidence of fluid interaction with normal faults comes from their varied role as flow barriers or conduits in hydrocarbon basins and as hosting structures for hydrothermal mineralisation, and from fault-rock assemblages in exhumed footwalls of steep active normal faults and metamorphic core complexes. These last suggest involvement of predominantly aqueous fluids over a broad depth range, with implications for fault shear resistance and the mechanics of normal fault reactivation. A general downwards progression in fault rock assemblages (high-level breccia-gouge (often clay-rich) → cataclasites → phyllonites → mylonite → mylonitic gneiss with the onset of greenschist phyllonites occurring near the base of the seismogenic crust) is inferred for normal fault zones developed in quartzo-feldspathic continental crust. Fluid inclusion studies in hydrothermal veining from some footwall assemblages suggest a transition from hydrostatic to suprahydrostatic fluid pressures over the depth range 3-5 km, with some evidence for near-lithostatic to hydrostatic pressure cycling towards the base of the seismogenic zone in the phyllonitic assemblages. Development of fault-fracture meshes through mixed-mode brittle failure in rock-masses with strong competence layering is promoted by low effective stress in the absence of thoroughgoing cohesionless faults that are favourably oriented for reactivation. Meshes may develop around normal faults in the near-surface under hydrostatic fluid pressures to depths determined by rock tensile strength, and at greater depths in overpressured portions of normal fault zones and at stress heterogeneities, especially dilational jogs. Overpressures localised within developing normal fault zones also determine the extent to which they may reutilise existing discontinuities (for example, low-angle thrust faults). Brittle failure mode plots demonstrate that reactivation of existing low-angle faults under vertical σ1 trajectories is only likely if

  5. Experimental Fault Reactivation on Favourably and Unfavourably Oriented Faults

    NASA Astrophysics Data System (ADS)

    Mitchell, T. M.; Sibson, R. H.; Renner, J.; Toy, V. G.; di Toro, G.; Smith, S. A.

    2010-12-01

    In this study, we introduce work which aims to assess the loading of faults to failure under different stress regimes in a triaxial deformation apparatus. We explore experimentally the reshear of an existing fault in various orientations for particular values of (σ1 - σ3) and σ3' for contrasting loading systems - load-strengthening (equivalent to a thrust fault) with σ1' increasing at constant σ3', versus load-weakening (equivalent to a normal fault) with σ3' reducing under constant σ1'. Experiments are conducted on sawcut granite samples with fault angles at a variety of orientations relative to σ1, ranging from an optimal orientation for reactivation to lock-up angles where new faults form in preference to reactivation of the existing sawcut orientation. Prefailure and postfailure behaviour is compared in terms of damage zone development by monitoring variations in ultrasonic velocity and acoustic emission behaviour. For example, damage surrounding unfavourably oriented faults is significantly higher than that seen around favourably oriented faults, owing to the greater maximum stresses attained prior to unstable slip, which is reflected in the increased acoustic emission activity leading up to failure. In addition, we also experimentally explore the reshear of natural pseudotachylytes (PSTs) from two different fault zones: the Gole Larghe Fault, Adamello, Italy, in which the PSTs are hosted in relatively isotropic tonalite (at laboratory sample scale), and the Alpine Fault, New Zealand, in which the PSTs are hosted in highly anisotropic foliated schist. We test whether PSTs will reshear in both rock types under the right conditions, or whether new fractures in the wall rock will form in preference to reactivating the PST (PST shear strength being higher than that of the host rock). Are PSTs representative of one slip event?
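
    The competition between resliding the sawcut and breaking a new fault is often framed with the standard frictional reactivation relation for a cohesionless plane lying at angle θr to σ1: σ1'/σ3' = (1 + μ cot θr) / (1 - μ tan θr), which diverges at the lock-up angle where tan θr = 1/μ. The sketch below, with an assumed friction coefficient, simply tabulates that ratio; it is an illustration of the concept, not the experimental procedure of the study.

```python
import math

def reactivation_ratio(theta_deg, mu=0.6):
    """Effective stress ratio sigma1'/sigma3' required to reshear a cohesionless
    fault lying at angle theta to sigma1 (Amontons friction, coefficient mu)."""
    theta = math.radians(theta_deg)
    denom = 1.0 - mu * math.tan(theta)
    if denom <= 0.0:
        return float("inf")        # beyond lock-up: new faults form instead
    return (1.0 + mu / math.tan(theta)) / denom

if __name__ == "__main__":
    mu = 0.6
    optimal = 0.5 * math.degrees(math.atan(1.0 / mu))   # about 29.5 degrees for mu = 0.6
    print(f"optimal angle: {optimal:.1f} deg, ratio {reactivation_ratio(optimal, mu):.2f}")
    for theta in (15, 30, 45, 60, 75):
        print(f"theta = {theta:2d} deg -> sigma1'/sigma3' = {reactivation_ratio(theta, mu):.2f}")
```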

  6. Fault-tolerant processing system

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L. (Inventor)

    1996-01-01

    A fault-tolerant, fiber optic interconnect, or backplane, which serves as a via for data transfer between modules. Fault tolerance algorithms are embedded in the backplane by dividing the backplane into a read bus and a write bus and placing a redundancy management unit (RMU) between the read bus and the write bus so that all data transmitted by the write bus is subjected to the fault tolerance algorithms before the data is passed for distribution to the read bus. The RMU provides both backplane control and fault tolerance.

  7. Fault interaction near Hollister, California

    SciTech Connect

    Mavko, G.M.

    1982-09-10

    A numerical model is used to study fault stress and slip near Hollister, California. The geometrically complex system of interacting faults, including the San Andreas, Calaveras, Sargent, and Busch faults, is approximated with a two-dimensional distribution of short planar fault segments in an elastic medium. The steady stress and slip rate are simulated by specifying frictional strength and stepping the remote stress ahead in time. The resulting computed fault stress is roughly proportional to the observed spatial density of small earthquakes, suggesting that the distinction between segments characterized by earthquakes and those with aseismic creep results, in part, from geometry. A nonsteady simulation is made by introducing, in addition, stress drops for individual moderate earthquakes. A close fit of observed creep with calculated slip on the Calaveras and San Andreas faults suggests that many changes in creep rate (averaged over several months) are caused by local moderate earthquakes. In particular, a 3-year creep lag preceding the August 6, 1979, Coyote Lake earthquake on the Calaveras fault seems to have been a direct result of the November 28, 1974, Thanksgiving Day earthquake on the Busch fault. Computed lags in slip rate preceding some other moderate earthquakes in the area are also due to earlier earthquakes. Although the response of the upper 1 km of the fault zone may cause some individual creep events and introduce delays in others, the long-term rate appears to reflect deep slip.

  8. Fault interaction near Hollister, California

    NASA Astrophysics Data System (ADS)

    Mavko, Gerald M.

    1982-09-01

    A numerical model is used to study fault stress and slip near Hollister, California. The geometrically complex system of interacting faults, including the San Andreas, Calaveras, Sargent, and Busch faults, is approximated with a two-dimensional distribution of short planar fault segments in an elastic medium. The steady stress and slip rate are simulated by specifying frictional strength and stepping the remote stress ahead in time. The resulting computed fault stress is roughly proportional to the observed spatial density of small earthquakes, suggesting that the distinction between segments characterized by earthquakes and those with aseismic creep results, in part, from geometry. A nonsteady simulation is made by introducing, in addition, stress drops for individual moderate earthquakes. A close fit of observed creep with calculated slip on the Calaveras and San Andreas faults suggests that many changes in creep rate (averaged over several months) are caused by local moderate earthquakes. In particular, a 3-year creep lag preceding the August 6, 1979, Coyote Lake earthquake on the Calaveras fault seems to have been a direct result of the November 28, 1974, Thanksgiving Day earthquake on the Busch fault. Computed lags in slip rate preceding some other moderate earthquakes in the area are also due to earlier earthquakes. Although the response of the upper 1 km of the fault zone may cause some individual creep events and introduce delays in others, the long-term rate appears to reflect deep slip.

  9. Fault welding by pseudotachylyte generation

    NASA Astrophysics Data System (ADS)

    Mitchell, T. M.; Toy, V. G.; Di Toro, G.; Renner, J.

    2014-12-01

    During earthquakes, frictional melts can localize on slip surfaces and dramatically weaken faults by melt lubrication. Once seismic slip is arrested, the melt cools and solidifies to form pseudotachylyte (PST), the presence of which is commonly used to infer earthquake slip on ancient exhumed faults. Little is known about the effect of solidified melt on the strength of faults directly preceding a subsequent earthquake. We performed triaxial deformation experiments on cores of tonalite (Gole Larghe fault zone, N. Italy) and mylonite (Alpine fault, New Zealand) in order to assess the strength of PST-bearing faults in the lab. Three types of sample (intact, sawcut and PST-bearing) were prepared for each rock type and were cored so that the sawcut, PST and foliation planes were orientated at 35° to the length of the core and the direction of σ1, i.e., a favorable orientation for reactivation. This choice of samples allowed us to compare the strength of a 'pre-earthquake' fault (sawcut) to that of a 'post-earthquake' fault with solidified frictional melt, and to assess their strength relative to intact samples. Our results show that PST veins effectively weld fault surfaces together, allowing previously faulted rocks to regain cohesive strengths comparable to that of intact rock. Shearing of the PST is not favored, but subsequent failure and slip is accommodated on new faults nucleating at other zones of weakness. Thus, the mechanism of coseismic weakening by melt lubrication does not necessarily facilitate long-term interseismic deformation localization, at least at the scale of these experiments. In natural fault zones, PSTs are often found distributed over multiple adjacent fault planes or other zones of weakness such as foliation planes. We also modeled the temperature distribution in and around a PST using an approximation for cooling of a thin, infinite sheet by conduction perpendicular to its margins at ambient temperatures commensurate with the depth of PST formation
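
    The cooling-sheet approximation mentioned at the end can be written down directly from the standard conduction solution for an infinite planar sheet of half-width a, initially at the melt temperature and embedded in host rock at ambient temperature (Carslaw and Jaeger). All numbers below (vein half-width, temperatures, diffusivity) are assumed for illustration and are not the values used in the study.

```python
import math

def sheet_temperature(x, t, a, t_melt, t_host, kappa=1e-6):
    """Temperature at distance x from the centre of a planar melt sheet of
    half-width a, a time t after emplacement (1D conductive cooling)."""
    s = 2.0 * math.sqrt(kappa * t)
    return t_host + 0.5 * (t_melt - t_host) * (
        math.erf((a - x) / s) + math.erf((a + x) / s))

if __name__ == "__main__":
    a = 0.005                          # 5 mm half-width pseudotachylyte vein (assumed)
    t_melt, t_host = 1450.0, 250.0     # deg C; assumed melt and ambient temperatures
    for t in (0.1, 1.0, 10.0, 60.0):   # seconds after slip arrest
        centre = sheet_temperature(0.0, t, a, t_melt, t_host)
        margin = sheet_temperature(a, t, a, t_melt, t_host)
        print(f"t = {t:5.1f} s  centre {centre:7.1f} C  margin {margin:7.1f} C")
```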

  10. Perspective View, Garlock Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    California's Garlock Fault, marking the northwestern boundary of the Mojave Desert, lies at the foot of the mountains, running from the lower right to the top center of this image, which was created with data from NASA's Shuttle Radar Topography Mission (SRTM), flown in February 2000. The data will be used by geologists studying fault dynamics and landforms resulting from active tectonics. These mountains are the southern end of the Sierra Nevada, and the prominent canyon emerging at the lower right is Lone Tree Canyon. In the distance, the San Gabriel Mountains cut across from the left side of the image. At their base lies the San Andreas Fault, which meets the Garlock Fault near the left edge at Tejon Pass. The dark linear feature running from lower right to upper left is State Highway 14, leading from the town of Mojave in the distance to Inyokern and the Owens Valley in the north. The lighter parallel lines are dirt roads related to power lines and the Los Angeles Aqueduct, which run along the base of the mountains.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast

  11. Fault current limiter

    DOEpatents

    Darmann, Francis Anthony

    2013-10-08

    A fault current limiter (FCL) includes a series of high-permeability posts that collectively define a core for the FCL. A DC coil, for the purpose of saturating a portion of the high-permeability posts, surrounds the complete structure outside of an enclosure in the form of a vessel. The vessel contains a dielectric insulation medium. AC coils, for transporting AC current, are wound on insulating formers and electrically interconnected to each other in a manner such that the senses of the magnetic field produced by each AC coil in the corresponding high-permeability core are opposing. There are insulation barriers between phases to improve the dielectric withstand properties of the dielectric medium.

  12. 20 CFR 404.507 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Officer § 404.507 Fault. Fault as used in without fault (see § 404.506 and 42 CFR 405.355) applies only to the individual. Although the Administration may have been at fault in making the overpayment, that... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false Fault. 404.507 Section 404.507...

  13. 20 CFR 404.507 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Officer § 404.507 Fault. Fault as used in without fault (see § 404.506 and 42 CFR 405.355) applies only to the individual. Although the Administration may have been at fault in making the overpayment, that... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false Fault. 404.507 Section 404.507...

  14. 20 CFR 404.507 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Officer § 404.507 Fault. Fault as used in without fault (see § 404.506 and 42 CFR 405.355) applies only to the individual. Although the Administration may have been at fault in making the overpayment, that... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Fault. 404.507 Section 404.507...

  15. 20 CFR 404.507 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Officer § 404.507 Fault. Fault as used in without fault (see § 404.506 and 42 CFR 405.355) applies only to the individual. Although the Administration may have been at fault in making the overpayment, that... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Fault. 404.507 Section 404.507...

  16. Final Technical Report: PV Fault Detection Tool.

    SciTech Connect

    King, Bruce Hardison; Jones, Christian Birk

    2015-12-01

    The PV Fault Detection Tool project plans to demonstrate that the FDT can (a) detect catastrophic and degradation faults and (b) identify the type of fault. This will be accomplished by collecting fault signatures using different instruments and integrating this information to establish a logical controller for detecting, diagnosing and classifying each fault.

  17. Central Asia Active Fault Database

    NASA Astrophysics Data System (ADS)

    Mohadjer, Solmaz; Ehlers, Todd A.; Kakar, Najibullah

    2014-05-01

    The ongoing collision of the Indian subcontinent with Asia controls active tectonics and seismicity in Central Asia. This motion is accommodated by faults that have historically caused devastating earthquakes and continue to pose serious threats to the population at risk. Despite international and regional efforts to assess seismic hazards in Central Asia, little attention has been given to the development of a comprehensive database for active faults in the region. To address this issue and to better understand the distribution and level of seismic hazard in Central Asia, we are developing a publicly available database for active faults of Central Asia (including but not limited to Afghanistan, Tajikistan, Kyrgyzstan, northern Pakistan and western China) using ArcGIS. The database is designed to allow users to store, map and query important fault parameters such as fault location, displacement history, rate of movement, and other data relevant to seismic hazard studies including fault trench locations, geochronology constraints, and seismic studies. Data sources integrated into the database include previously published maps and scientific investigations as well as strain rate measurements and historic and recent seismicity. In addition, high resolution Quickbird, Spot, and Aster imagery are used for selected features to locate and measure offset of landforms associated with Quaternary faulting. These features are individually digitized and linked to attribute tables that provide a description for each feature. Preliminary observations include inconsistent and sometimes inaccurate information for faults documented in different studies. For example, the Darvaz-Karakul fault, which roughly defines the western margin of the Pamir, has been mapped with differences in location of up to 12 kilometers. The sense of motion for this fault ranges from unknown to thrust and strike-slip in three different studies despite documented left-lateral displacements of Holocene and late

  18. Dynamics of fault interaction - Parallel strike-slip faults

    NASA Astrophysics Data System (ADS)

    Harris, Ruth A.; Day, Steven M.

    1993-03-01

    We use a 2D finite difference computer program to study the effect of fault steps on dynamic ruptures. Our results indicate that a strike-slip earthquake is unlikely to jump a fault step wider than 5 km, in correlation with field observations of moderate to great-sized earthquakes. We also find that dynamically propagating ruptures can jump both compressional and dilational fault steps, although wider dilational fault steps can be jumped. Dilational steps tend to delay the rupture for a longer time than compressional steps do. This delay leads to a slower apparent rupture velocity in the vicinity of dilational steps. These 'dry' cases assumed hydrostatic or greater pore-pressures but did not include the effects of changing pore pressures. In an additional study, we simulated the dynamic effects of a fault rupture on 'undrained' pore fluids to test Sibson's (1985, 1986) suggestion that 'wet' dilational steps are a barrier to rupture propagation. Our numerical results validate Sibson's hypothesis.

  19. Fault Management Design Strategies

    NASA Technical Reports Server (NTRS)

    Day, John C.; Johnson, Stephen B.

    2014-01-01

    Development of dependable systems relies on the ability of the system to determine and respond to off-nominal system behavior. Specification and development of these fault management capabilities must be done in a structured and principled manner to improve our understanding of these systems, and to make significant gains in dependability (safety, reliability and availability). Prior work has described a fundamental taxonomy and theory of System Health Management (SHM), and of its operational subset, Fault Management (FM). This conceptual foundation provides a basis for developing a framework to design and implement FM design strategies that protect mission objectives and account for system design limitations. Selection of an SHM strategy has implications for the functions required to perform the strategy, and it places constraints on the set of possible design solutions. The framework developed in this paper provides a rigorous and principled approach to classifying SHM strategies, as well as methods for determination and implementation of SHM strategies. An illustrative example is used to describe the application of the framework and the resulting benefits to system and FM design and dependability.

  20. SFT: Scalable Fault Tolerance

    SciTech Connect

    Petrini, Fabrizio; Nieplocha, Jarek; Tipparaju, Vinod

    2006-04-15

    In this paper we will present a new technology that we are currently developing within the SFT: Scalable Fault Tolerance FastOS project, which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continuing execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency, requiring no changes to user applications. Our technology is based on a global coordination mechanism that enforces transparent recovery lines in the system, and TICK, a lightweight, incremental checkpointing software architecture implemented as a Linux kernel module. TICK is completely user-transparent and does not require any changes to user code or system libraries; it is highly responsive: an interrupt, such as a timer interrupt, can trigger a checkpoint in as little as 2.5 μs; and it supports incremental and full checkpoints with minimal overhead, less than 6% even with full checkpointing to disk performed as frequently as once per minute.
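
    Incremental checkpointing of the kind TICK performs (saving only memory that changed since the previous checkpoint) can be sketched at user level by hashing fixed-size blocks and writing only the dirty ones. The block size and file layout below are illustrative assumptions, not TICK's kernel-level mechanism.

```python
import hashlib, pickle

BLOCK = 4096  # bytes per block (assumed granularity)

def split_blocks(data: bytes):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def incremental_checkpoint(data: bytes, prev_hashes, path):
    """Write only blocks whose hash changed since the last checkpoint.
    Returns the new hash list and the number of blocks written."""
    blocks = split_blocks(data)
    hashes = [hashlib.sha256(b).hexdigest() for b in blocks]
    dirty = {i: blocks[i] for i, h in enumerate(hashes)
             if i >= len(prev_hashes) or h != prev_hashes[i]}
    with open(path, "wb") as f:
        pickle.dump(dirty, f)
    return hashes, len(dirty)

if __name__ == "__main__":
    state = bytearray(16 * BLOCK)            # 16 blocks of application memory
    hashes, n = incremental_checkpoint(bytes(state), [], "ckpt_full.pkl")
    print("full checkpoint wrote", n, "blocks")        # -> 16
    state[5 * BLOCK] = 0xFF                  # touch a single block
    hashes, n = incremental_checkpoint(bytes(state), hashes, "ckpt_incr.pkl")
    print("incremental checkpoint wrote", n, "blocks") # -> 1
```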

  1. Colorado Regional Faults

    DOE Data Explorer

    Hussein, Khalid

    2012-02-01

    Citation Information: Originator: Earth Science & Observation Center (ESOC), CIRES, University of Colorado at Boulder Originator: Colorado Geological Survey (CGS) Publication Date: 2012 Title: Regional Faults Edition: First Publication Information: Publication Place: Earth Science & Observation Center, Cooperative Institute for Research in Environmental Science, University of Colorado, Boulder Publisher: Earth Science & Observation Center (ESOC), CIRES, University of Colorado at Boulder Description: This layer contains the regional faults of Colorado Spatial Domain: Extent: Top: 4543192.100000 m Left: 144385.020000 m Right: 754585.020000 m Bottom: 4094592.100000 m Contact Information: Contact Organization: Earth Science & Observation Center (ESOC), CIRES, University of Colorado at Boulder Contact Person: Khalid Hussein Address: CIRES, Ekeley Building Earth Science & Observation Center (ESOC) 216 UCB City: Boulder State: CO Postal Code: 80309-0216 Country: USA Contact Telephone: 303-492-6782 Spatial Reference Information: Coordinate System: Universal Transverse Mercator (UTM) WGS 1984 Zone 13N False Easting: 500000.00000000 False Northing: 0.00000000 Central Meridian: -105.00000000 Scale Factor: 0.99960000 Latitude of Origin: 0.00000000 Linear Unit: Meter Datum: World Geodetic System 1984 (WGS 1984) Prime Meridian: Greenwich Angular Unit: Degree Digital Form: Format Name: Shape file

  2. Fault deformation mechanisms and fault rocks in micritic limestones: Examples from Corinth rift normal faults

    NASA Astrophysics Data System (ADS)

    Bussolotto, M.; Benedicto, A.; Moen-Maurel, L.; Invernizzi, C.

    2015-08-01

    A multidisciplinary study investigates the influence of different parameters on fault rock architecture development along normal faults affecting non-porous carbonates of the Corinth rift southern margin. Here, some fault systems cut the same carbonate unit (Pindus), and the gradual and fast uplift since the initiation of the rift led to the exhumation of deep parts of the older faults. This exceptional context allows superficial active fault zones and old exhumed fault zones to be compared. Our approach includes field studies, micro-structural analyses (optical microscope and cathodoluminescence), geochemical analyses (δ13C, δ18O, trace elements) and fluid inclusion microthermometry of syn-kinematic calcite cements. Our main results, in a depth window ranging from 0 m to about 2500 m, are: i) all cements precipitated from meteoric fluids in a closed or open circulation system depending on depth; ii) depth (in terms of P/T conditions) determines the development of some structures and their sealing; iii) lithology (marly levels) influences the type of structures and their cohesive/non-cohesive nature; iv) early distributed rather than final total displacement along the main fault plane is responsible for the fault zone architecture; v) petrophysical properties of each fault zone depend on the variable combination of these factors.

  3. Chip level simulation of fault tolerant computers

    NASA Technical Reports Server (NTRS)

    Armstrong, J. R.

    1982-01-01

    Chip-level modeling techniques in the evaluation of fault tolerant systems were researched. A fault tolerant computer was modeled. An efficient approach to functional fault simulation was developed. Simulation software was also developed.

  4. 20 CFR 404.507 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Officer § 404.507 Fault. Fault as used in without fault (see § 404.506 and 42 CFR 405.355) applies only to..., educational, or linguistic limitations (including any lack of facility with the English language)...

  5. Accelerometer having integral fault null

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1995-01-01

    An improved accelerometer is introduced. It comprises a transducer responsive to vibration in machinery which produces an electrical signal related to the magnitude and frequency of the vibration; and a decoding circuit responsive to the transducer signal which produces a first fault signal to produce a second fault signal in which ground shift effects are nullified.

  6. Experimental Fault Reactivation on Favourably and Unfavourably Oriented Faults

    NASA Astrophysics Data System (ADS)

    Mitchell, T. M.; Renner, J.; Sibson, R. H.

    2011-12-01

    In this study, we assess the loading of faults to failure under different stress regimes in a triaxial deformation apparatus, both in dry and saturated conditions. We explore experimentally the reshear of an existing fault in various orientations for particular values of (σ1 - σ3) and σ3' for contrasting loading systems - load-strengthening (equivalent to a thrust fault) with σ1' increasing at constant σ3', versus load-weakening (equivalent to a normal fault) with reducing σ3' under constant σ1'. Experiments are conducted on sawcut granite samples with fault angles at a variety of orientations relative to σ1, ranging from an optimal orientation for reactivation to lockup angles where new faults are formed in preference to reactivating the existing sawcut orientation. Prefailure and postfailure behaviour is compared in terms of damage zone development via monitoring variations in ultrasonic velocity and acoustic emission behaviour. For example, damage surrounding unfavourably oriented faults is significantly higher than that seen around favourably orientated faults due to greater maximum stresses attained prior to unstable slip, which is reflected by the increased acoustic emission activity leading up to failure. In addition, we explore reshear conditions under an initial condition of (σ1' = σ3'), then inducing reshear on the existing fault first by increasing σ1' (load-strengthening), then by decreasing σ3' (load-weakening), again comparing relative damage zone development and acoustic emission levels. In saturated experiments, we explore the values of pore fluid pressure (Pf) needed for re-shear to occur in preference to the formation of a new fault. Typically a limiting factor in conventional triaxial experiments performed in compression is that Pf cannot exceed the confining pressure (σ2 and σ3). By employing a sample assembly that allows deformation while the loading piston is in extension, it enables us to achieve pore pressures in

  7. How do normal faults grow?

    NASA Astrophysics Data System (ADS)

    Jackson, Christopher; Bell, Rebecca; Rotevatn, Atle; Tvedt, Anette

    2016-04-01

    Normal faulting accommodates stretching of the Earth's crust, and it is arguably the most fundamental tectonic process leading to continent rupture and oceanic crust emplacement. Furthermore, the incremental and finite geometries associated with normal faulting dictate landscape evolution, sediment dispersal and hydrocarbon systems development in rifts. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, thus assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; thus, it may be argued, it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. However, in this talk we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because application of one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate

  8. Differential Fault Analysis of Rabbit

    NASA Astrophysics Data System (ADS)

    Kircanski, Aleksandar; Youssef, Amr M.

    Rabbit is a high speed scalable stream cipher with a 128-bit key and a 64-bit initialization vector. It has passed all three stages of the ECRYPT stream cipher project and is a member of the eSTREAM software portfolio. In this paper, we present a practical fault analysis attack on Rabbit. The fault model in which we analyze the cipher is one in which the attacker is assumed to be able to fault a random bit of the internal state of the cipher but cannot control the exact location of injected faults. Our attack requires around 128-256 faults and a precomputed table of size 2^41.6 bytes, and recovers the complete internal state of Rabbit in about 2^38 steps.

  9. The Lawanopo Fault, central Sulawesi, East Indonesia

    NASA Astrophysics Data System (ADS)

    Natawidjaja, Danny Hilman; Daryono, Mudrik R.

    2015-04-01

    The dominant tectonic-force factor in Sulawesi Island is the westward intrusion of the Banggai-Sula microplate, driven by the 12 mm/year westward motion of the Pacific Plate relative to Eurasia. This tectonic intrusion is accommodated by a series of major left-lateral strike-slip fault zones including the Sorong, Sula-Sorong, Matano, Palukoro, and Lawanopo fault zones. The Lawanopo Fault has been considered an active left-lateral strike-slip fault. The natural exposures of the Lawanopo Fault are clear, marked by breaks and lineaments in the topography along the fault line, and it also serves as a tectonic boundary between different rock assemblages. Inspection of 5 m-grid IFSAR DEMs and field checks show that the fault traces are visible as lineaments of topographic slope breaks, linear ridges and stream valleys, and ridge neckings, and they are also associated with hydrothermal deposits and hot springs. These are characteristics of a young fault, so its morphological expression is still visible. However, fault scarps and other morpho-tectonic features appear to have been diffused by erosion and young sediment deposition. No fresh fault scarps, stream deflections or offsets, or any influence of fault movements on recent landscapes is observed along the fault traces. Hence, the fault does not show any evidence of recent activity. This is consistent with the lack of seismicity on the fault.

  10. Fault Tolerant State Machines

    NASA Technical Reports Server (NTRS)

    Burke, Gary R.; Taft, Stephanie

    2004-01-01

    State machines are commonly used to control sequential logic in FPGAs and ASICs. An errant state machine can cause considerable damage to the device it is controlling. For example, in space applications the FPGA might be controlling pyros, which, if fired at the wrong time, will cause a mission failure. Even a well-designed state machine can be subject to random errors as a result of SEUs from the radiation environment in space. There are various ways to encode the states of a state machine, and the type of encoding makes a large difference in the susceptibility of the state machine to radiation. In this paper we compare four methods of state machine encoding and determine which method gives the best fault tolerance, as well as the resources needed for each method.
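
    The susceptibility argument can be made concrete by looking at the minimum Hamming distance between state code words: with plain binary encoding a single upset can land on another valid state, whereas encodings with larger minimum distance force single-bit flips into invalid, detectable codes. The sketch below compares three illustrative encodings of a four-state machine; the encodings are examples chosen here, not necessarily the four methods compared in the paper.

```python
from itertools import combinations

def min_hamming(codes):
    """Smallest pairwise Hamming distance among the state code words."""
    return min(sum(a != b for a, b in zip(x, y))
               for x, y in combinations(codes, 2))

# Four states encoded three different ways (illustrative).
ENCODINGS = {
    "binary":  ["00", "01", "10", "11"],
    "one-hot": ["0001", "0010", "0100", "1000"],
    "parity":  ["000", "011", "101", "110"],   # binary code plus a parity bit
}

if __name__ == "__main__":
    for name, codes in ENCODINGS.items():
        d = min_hamming(codes)
        if d == 1:
            note = "a single SEU can reach another valid state"
        else:
            note = "a single SEU always lands on an invalid, detectable code"
        print(f"{name:8s} min Hamming distance {d}: {note}")
```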

  11. Arc fault detection system

    DOEpatents

    Jha, K.N.

    1999-05-18

    An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between current flowing from secondary windings located in a current transformer coupled to a power supply side of a switchboard, and a total current induced in secondary windings coupled to a load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and a power supply to remove the supply of power to the switchboard. 1 fig.

  12. Arc fault detection system

    DOEpatents

    Jha, Kamal N.

    1999-01-01

    An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between current flowing from secondary windings located in a current transformer coupled to a power supply side of a switchboard, and a total current induced in secondary windings coupled to a load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and a power supply to remove the supply of power to the switchboard.

  13. Faulted Sedimentary Rocks

    NASA Technical Reports Server (NTRS)

    2004-01-01

    27 June 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows some of the layered, sedimentary rock outcrops that occur in a crater located at 8°N, 7°W, in western Arabia Terra. Dark layers and dark sand have enhanced the contrast of this scene. In the upper half of the image, one can see numerous lines that offset the layers. These lines are faults along which the rocks have broken and moved. The regularity of layer thickness and erosional expression are taken as evidence that the crater in which these rocks occur might once have been a lake. The image covers an area about 1.9 km (1.2 mi) wide. Sunlight illuminates the scene from the lower left.

  14. Comparison of Cenozoic Faulting at the Savannah River Site to Fault Characteristics of the Atlantic Coast Fault Province: Implications for Fault Capability

    SciTech Connect

    Cumbest, R.J.

    2000-11-14

    This study compares the faulting observed on the Savannah River Site and vicinity with the faults of the Atlantic Coastal Fault Province and concludes that both sets of faults exhibit the same general characteristics and are closely associated. Based on the strength of this association, it is concluded that the faults observed on the Savannah River Site and vicinity are in fact part of the Atlantic Coastal Fault Province. Inclusion in this group means that the historical precedent established by decades of previous studies on the seismic hazard potential of the Atlantic Coastal Fault Province is relevant to faulting at the Savannah River Site. That is, since these faults are genetically related, the conclusion of ''not capable'' reached in past evaluations applies. In addition, this study establishes a set of criteria by which individual faults may be evaluated in order to assess their inclusion in the Atlantic Coastal Fault Province and the related association of the ''not capable'' conclusion.

  15. Improving Multiple Fault Diagnosability using Possible Conflicts

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2012-01-01

    Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.
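
    The toy sketch below is not the authors' framework; it only illustrates, under assumed single-fault signature sequences, why multiple faults hurt diagnosability: a multiple fault can manifest as any interleaving of its members' event sequences, and two fault candidates are indistinguishable whenever their possible sequences overlap.

```python
# Toy illustration of event-based multiple-fault diagnosability (not the paper's
# algorithm): a multiple fault can manifest as any interleaving of the individual
# faults' qualitative signature sequences, so two fault sets are indistinguishable
# if their sets of possible observable sequences overlap.
def interleavings(a, b):
    if not a: return {b}
    if not b: return {a}
    return {(a[0],) + rest for rest in interleavings(a[1:], b)} | \
           {(b[0],) + rest for rest in interleavings(a, b[1:])}

# Assumed single-fault signatures over observable residual events (illustrative).
SIG = {"f1": ("r1+",), "f2": ("r2-",), "f3": ("r1+", "r2-")}

def sequences(fault_set):
    seqs = {()}
    for f in fault_set:
        seqs = {s for base in seqs for s in interleavings(base, SIG[f])}
    return seqs

def diagnosable(set_a, set_b):
    return not (sequences(set_a) & sequences(set_b))

if __name__ == "__main__":
    print(diagnosable(("f1", "f2"), ("f3",)))   # False: {f1, f2} can mimic {f3}
    print(diagnosable(("f1",), ("f2",)))        # True: fully distinguishable
```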

  16. ANNs pinpoint underground distribution faults

    SciTech Connect

    Glinkowski, M.T.; Wang, N.C.

    1995-10-01

    Many offline fault location techniques in power distribution circuits involve patrolling along the lines or cables. In overhead distribution lines, most failures can be located quickly by visual inspection without the aid of special equipment. However, locating a fault in underground cable systems is more difficult. It involves additional equipment (e.g., thumpers, radars, etc.) to convert the invisible cable fault into other forms of signals, such as acoustic sound and electromagnetic pulses. Trained operators must carry the equipment above the ground, follow the path of the signal, and draw lines on their maps in order to locate the fault. Sometimes, even smelling the burnt cable fault is a way of detecting the problem. These techniques are time consuming, not always reliable, and, as in the case of high-voltage dc thumpers, can cause additional damage to the healthy parts of the cable circuit. Online fault location in power networks that involve interconnected lines (cables) and multiterminal sources continues to receive great attention, with limited success in techniques that would provide simple and practical solutions. This article features a new online fault location technique that uses the pattern recognition capability of artificial neural networks (ANNs) and utilizes new capabilities of modern protective relaying hardware. The output of the neural network can be graphically displayed as a simple three-dimensional (3-D) chart that provides an operator with an instantaneous indication of the location of the fault.
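
    A hedged sketch of the general idea, not the article's relay-based implementation: train a small neural network on synthetic (measurement, distance) pairs and use it to estimate fault location from new measurements. The feature choice, the synthetic data model, and the use of scikit-learn's MLPRegressor are all assumptions for illustration.

```python
# Hedged sketch: a small neural network mapping fault-transient features to a
# fault distance, in the spirit of ANN-based fault location.  The features
# (fault-current and voltage magnitudes at the substation) and the synthetic
# training data are illustrative assumptions only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500
distance_km = rng.uniform(0.1, 10.0, n)          # true fault distances
fault_resistance = rng.uniform(0.0, 5.0, n)      # ohms
# Crude synthetic relay measurements: fault current falls and the voltage sag
# eases as the fault moves farther from the substation.
i_mag = 1000.0 / (1.0 + 0.8 * distance_km + fault_resistance) + rng.normal(0, 2, n)
v_mag = 1.0 - 0.9 / (1.0 + 0.5 * distance_km) + rng.normal(0, 0.01, n)
X = np.column_stack([i_mag, v_mag])

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X, distance_km)

# Estimate the location of "new" faults from their measured features.
print(model.predict(X[:3]))
print(distance_km[:3])
```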

  17. Subaru FATS (fault tracking system)

    NASA Astrophysics Data System (ADS)

    Winegar, Tom W.; Noumaru, Junichi

    2000-07-01

    The Subaru Telescope requires a fault tracking system to record the problems and questions that staff experience during their work, and the solutions provided by technical experts to these problems and questions. The system records each fault and routes it to a pre-selected 'solution-provider' for each type of fault. The solution provider analyzes the fault and writes a solution that is routed back to the fault reporter and recorded in a 'knowledge-base' for future reference. The specifications of our fault tracking system were unique. (1) Dual language capacity -- Our staff speak both English and Japanese. Our contractors speak Japanese. (2) Heterogeneous computers -- Our computer workstations are a mixture of SPARCstations, Macintosh and Windows computers. (3) Integration with prime contractors -- Mitsubishi and Fujitsu are primary contractors in the construction of the telescope. In many cases, our 'experts' are our contractors. (4) Operator scheduling -- Our operators spend 50% of their work-month operating the telescope, the other 50% is spent working day shift at the base facility in Hilo, or day shift at the summit. We plan for 8 operators, with a frequent rotation. We need to keep all operators informed on the current status of all faults, no matter the operator's location.

  18. The Dynamics of Fault Zones

    NASA Astrophysics Data System (ADS)

    Mooney, W. D.; Beroza, G.; Kind, R.

    2006-05-01

    Geophysical studies of the Earth's crust, including fault zones, have developed over the past 80 years. Among the first methods to be employed, seismic refraction and reflection profiles were recorded in the North American Gulf Coast to detect salt domes, which were known to trap hydrocarbons. Seismic methods continue to be the most important geophysical technique in use today due to their relatively high accuracy, high resolution, and great depth of penetration. However, in the past decade, a much expanded repertoire of seismic and non-seismic techniques has been brought to bear on studies of the Earth's crust and uppermost mantle. Important insights have also been obtained using seismic tomography, measurements of seismic anisotropy, fault zone guided waves, borehole surveys, and geo-electrical, magnetic, and gravity methods. In this presentation, we briefly review recent geophysical progress in the study of the structure and internal properties of fault zones, from their surface exposures to their lower limit. We focus on the structure of faults within continental crystalline and competent sedimentary rock rather than within the overlying, poorly consolidated sedimentary rocks. A significant body of literature exists for oceanic fracture zones; however, due to space limitations we restrict this review to faults within and at the margins of the continents. We also address some unanswered questions, including: 1) Does fault-zone complexity, as observed at the surface, extend to great depth, or do active faults become thin simple planes at depth? and 2) How is crustal deformation accommodated within the lithospheric mantle?

  19. Fault Injection Campaign for a Fault Tolerant Duplex Framework

    NASA Technical Reports Server (NTRS)

    Sacco, Gian Franco; Ferraro, Robert D.; von Allmen, Paul; Rennels, Dave A.

    2007-01-01

    Fault tolerance is an efficient approach adopted to avoid or reduce the damage of a system failure. In this work we present the results of a fault injection campaign we conducted on the Duplex Framework (DF). The DF is software developed by the UCLA group [1, 2] that uses a fault-tolerant approach and allows two replicas of the same process to run on two different nodes of a commercial off-the-shelf (COTS) computer cluster. A third process, running on a different node, constantly monitors the results computed by the two replicas and restarts the two replica processes if an inconsistency in their computation is detected. This approach is very cost efficient and can be adopted to control processes on spacecraft where the fault rate produced by cosmic rays is not very high.

  20. Finding faults with the data

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    Rudolph Giuliani and Hillary Rodham Clinton are crisscrossing upstate New York looking for votes in the U.S. Senate race. Also cutting back and forth across upstate New York are hundreds of faults of a kind characterized by very sporadic seismic activity, according to Robert Jacobi, professor of geology at the University of Buffalo (UB), who conducted research with fellow UB geology professor John Fountain. "We have proof that upstate New York is crisscrossed by faults," Jacobi said. "In the past, the Appalachian Plateau—which stretches from Albany to Buffalo—was considered a pretty boring place structurally, without many faults or folds of any significance."

  1. Granular packings and fault zones

    PubMed

    Astrom; Herrmann; Timonen

    2000-01-24

    The failure of a two-dimensional packing of elastic grains is analyzed using a numerical model. The packing fails through formation of shear bands or faults. During failure there is a separation of the system into two grain-packing states. In a shear band, local "rotating bearings" are spontaneously formed. The bearing state is favored in a shear band because it has a low stiffness against shearing. The "seismic activity" distribution in the packing has the same characteristics as that of the earthquake distribution in tectonic faults. The directions of the principal stresses in a bearing are reminiscent of those found at the San Andreas Fault. PMID:11017335

  2. Method of locating ground faults

    NASA Astrophysics Data System (ADS)

    Patterson, Richard L.; Rose, Allen H.; Cull, Ronald C.

    1994-11-01

    The present invention discloses a method of detecting and locating current imbalances such as ground faults in multiwire systems using the Faraday effect. As an example, for 2-wire or 3-wire (1 ground wire) electrical systems, light is transmitted along an optical path which is exposed to magnetic fields produced by currents flowing in the hot and neutral wires. The rotations produced by these two magnetic fields cancel each other; therefore, light on the optical path is not affected by either. However, when a ground fault occurs, the optical path is exposed to a net Faraday effect rotation due to the current imbalance, thereby exposing the ground fault.
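
    A minimal numerical sketch of the detection principle (the sensitivity constant, trip threshold, and current values are illustrative assumptions, not from the patent): the net Faraday rotation scales with the difference between hot and neutral currents, so it is near zero for a healthy circuit and nonzero when current leaks to ground.

```python
# Hedged sketch of Faraday-effect ground-fault sensing: the polarization rotation
# seen along the optical path scales with the *net* enclosed current (hot minus
# neutral), so it is ~0 when all current returns on the neutral and nonzero when
# some current leaks to ground.  Scale factor and threshold are assumptions.
ROTATION_PER_AMP_DEG = 0.05     # assumed sensor sensitivity, degrees per ampere
TRIP_THRESHOLD_DEG = 0.1        # assumed detection threshold

def faraday_rotation(hot_amps, neutral_amps):
    return ROTATION_PER_AMP_DEG * (hot_amps - neutral_amps)

def ground_fault(hot_amps, neutral_amps):
    return abs(faraday_rotation(hot_amps, neutral_amps)) > TRIP_THRESHOLD_DEG

if __name__ == "__main__":
    print(ground_fault(10.0, 10.0))   # False: fields cancel, no net rotation
    print(ground_fault(10.0, 7.5))    # True: 2.5 A returning via ground
```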

  3. Method of locating ground faults

    NASA Technical Reports Server (NTRS)

    Patterson, Richard L. (Inventor); Rose, Allen H. (Inventor); Cull, Ronald C. (Inventor)

    1994-01-01

    The present invention discloses a method of detecting and locating current imbalances such as ground faults in multiwire systems using the Faraday effect. As an example, for 2-wire or 3-wire (1 ground wire) electrical systems, light is transmitted along an optical path which is exposed to magnetic fields produced by currents flowing in the hot and neutral wires. The rotations produced by these two magnetic fields cancel each other; therefore, light on the optical path is not affected by either. However, when a ground fault occurs, the optical path is exposed to a net Faraday effect rotation due to the current imbalance, thereby exposing the ground fault.

  4. Fault-free performance validation of fault-tolerant multiprocessors

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Feather, Frank E.; Grizzaffi, Ann Marie; Segall, Zary Z.; Siewiorek, Daniel P.

    1987-01-01

    A validation methodology for testing the performance of fault-tolerant computer systems was developed and applied to the Fault-Tolerant Multiprocessor (FTMP) at NASA-Langley's AIRLAB facility. This methodology was claimed to be general enough to apply to any ultrareliable computer system. The goal of this research was to extend the validation methodology and to demonstrate the robustness of the validation methodology by its more extensive application to NASA's Fault-Tolerant Multiprocessor System (FTMP) and to the Software Implemented Fault-Tolerance (SIFT) Computer System. Furthermore, the performance of these two multiprocessors was compared by conducting similar experiments. An analysis of the results shows high level language instruction execution times for both SIFT and FTMP were consistent and predictable, with SIFT having greater throughput. At the operating system level, FTMP consumes 60% of the throughput for its real-time dispatcher and 5% on fault-handling tasks. In contrast, SIFT consumes 16% of its throughput for the dispatcher, but consumes 66% in fault-handling software overhead.

  5. 20 CFR 410.561b - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Fault. 410.561b Section 410.561b Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, TITLE IV-BLACK LUNG BENEFITS (1969- ) Payment of Benefits § 410.561b Fault. Fault as used in without fault (see §...

  6. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the...

  7. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the...

  8. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the...

  9. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the...

  10. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the...

  11. 20 CFR 410.561b - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Fault. 410.561b Section 410.561b Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, TITLE IV-BLACK LUNG BENEFITS (1969- ) Payment of Benefits § 410.561b Fault. Fault as used in without fault (see §...

  12. Normal faults geometry and morphometry on Mars

    NASA Astrophysics Data System (ADS)

    Vaz, D. A.; Spagnuolo, M. G.; Silvestro, S.

    2014-04-01

    In this report, we show how normal fault scarp geometry and degradation history can be assessed using high-resolution imagery and topography. We show how the initial geometry of the faults can be inferred from faulted craters, and we demonstrate how a comparative morphometric analysis of fault scarps can be used to study erosion rates through time on Mars.

  13. Spontaneous rupture on irregular faults

    NASA Astrophysics Data System (ADS)

    Liu, C.

    2014-12-01

    It is now known (e.g., Robinson et al., 2006) that when ruptures propagate around bends, the rupture velocity decreases. In the extreme case, a large bend in the fault can stop the rupture. We develop a 2-D finite difference method to simulate spontaneous dynamic rupture on irregular faults. This method is based on a second-order leap-frog finite difference scheme on a uniform mesh of triangles. A relaxation method is used to generate an irregular, fault geometry-conforming mesh from the uniform mesh. Through this numerical coordinate mapping, the elastic wave equations are transformed and solved in a curvilinear coordinate system. Extensive numerical experiments using the linear slip-weakening law will be shown to demonstrate the effect of fault geometry on rupture properties. A long-term goal is to simulate the strong ground motion in the vicinity of bends, jogs, etc.

  14. The fault-tree compiler

    NASA Technical Reports Server (NTRS)

    Martensen, Anna L.; Butler, Ricky W.

    1987-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise to five digits (within the limits of double-precision floating-point arithmetic). The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
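
    The sketch below is not the compiler's actual solution technique; it is a small evaluator, assuming statistically independent basic events, that computes the top-event probability for the five gate types named above by enumerating basic-event outcomes (practical only for small trees).

```python
# Minimal fault-tree top-event probability calculation supporting AND, OR,
# EXCLUSIVE OR (exactly one input true), INVERT and M-of-N gates.  Assumes
# independent basic events and evaluates exactly by enumerating outcomes,
# which is only feasible for small trees.
from itertools import product

def gate(kind, children, m=None):
    return {"kind": kind, "children": children, "m": m}

def basic(name):
    return {"kind": "BASIC", "name": name}

def evaluate(node, outcome):
    k = node["kind"]
    if k == "BASIC":
        return outcome[node["name"]]
    vals = [evaluate(c, outcome) for c in node["children"]]
    if k == "AND":    return all(vals)
    if k == "OR":     return any(vals)
    if k == "XOR":    return sum(vals) == 1
    if k == "INVERT": return not vals[0]
    if k == "M_OF_N": return sum(vals) >= node["m"]
    raise ValueError(k)

def top_probability(tree, probs):
    names = list(probs)
    total = 0.0
    for bits in product([False, True], repeat=len(names)):
        outcome = dict(zip(names, bits))
        p = 1.0
        for name, failed in outcome.items():
            p *= probs[name] if failed else 1.0 - probs[name]
        if evaluate(tree, outcome):
            total += p
    return total

if __name__ == "__main__":
    # Top event: (A AND B) OR at-least-2-of {C, D, E}
    tree = gate("OR", [gate("AND", [basic("A"), basic("B")]),
                       gate("M_OF_N", [basic("C"), basic("D"), basic("E")], m=2)])
    print(top_probability(tree, {"A": 1e-3, "B": 2e-3, "C": 1e-2, "D": 1e-2, "E": 1e-2}))
```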

  15. Cell boundary fault detection system

    DOEpatents

    Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward

    2009-05-05

    A method determines a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.

  16. A fault-tolerant clock

    NASA Technical Reports Server (NTRS)

    Daley, W. P.; Mckenna, J. F., Jr.

    1973-01-01

    Computers must operate correctly even though one or more of their components have failed. An electronic clock has been designed to be insensitive to the occurrence of faults; it is a substantial advance over any known clock.

  17. Weakening inside incipient thrust fault

    NASA Astrophysics Data System (ADS)

    Lacroix, B.; Tesei, T.; Collettini, C.; Oliot, E.

    2013-12-01

    In fold-and-thrust belts, shortening is mainly accommodated by thrust faults that nucleate along décollement levels. Geological and geophysical evidence suggests that these faults might be weak because of a combination of processes such as pressure solution, phyllosilicate reorientation and delamination, and fluid pressurization. In this study we aim to decipher the processes and the kinetics responsible for weakening of tectonic décollements. We studied the Millaris thrust (Southern Pyrenees): a fault representative of a décollement in its incipient stage. This fault accommodated a total shortening of about 30 meters and consists of a 10 m thick, intensely foliated phyllonite developed inside a homogeneous marly unit. Detailed chemical and mineralogical analyses have been carried out to characterize the mineralogical changes, chemical transfers, and volume change in the fault zone compared to the non-deformed parent sediments. We also carried out microstructural analysis on natural and experimentally deformed rocks. Illite and chlorite are the main hydrous minerals. Inside the fault zone, illite minerals are oriented along the schistosity whereas chlorite coats the shear surfaces. Mass balance calculations demonstrate a volume loss of up to 50% for calcite inside the fault zone (and therefore a relative increase in phyllosilicate content) because of calcite pressure solution. We performed friction experiments in a biaxial deformation apparatus using intact rocks from the Millaris fault and its host sediments, sheared in the in-situ geometry. We imposed a range of normal stresses (10 to 50 MPa), sliding velocity steps (3-100 μm/s), and slide-hold-slide sequences (3 to 1000 s holds) under saturated conditions. Mechanical results demonstrate that both fault rocks and parent sediments are weaker than average geological materials (friction μ<<0.6) and have velocity-strengthening behavior because of the presence of phyllosilicate horizons. Fault rocks are

  18. Hardware Fault Simulator for Microprocessors

    NASA Technical Reports Server (NTRS)

    Hess, L. M.; Timoc, C. C.

    1983-01-01

    A breadboarded circuit is faster and more thorough than a software simulator. An elementary fault simulator for an AND gate uses three gates and a shift register to simulate stuck-at-one or stuck-at-zero conditions at the inputs and output. Experimental results showed that the hardware fault simulator for a microprocessor gave faster results than a software simulator, by two orders of magnitude, with one test being applied every 4 microseconds.
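
    A software analogue of the idea (the original simulator was hardware; the gate and node names here are assumptions): simulate a fault-free 2-input AND gate alongside copies with a stuck-at-0 or stuck-at-1 node, and list which test vectors expose each fault.

```python
# Software sketch of stuck-at fault simulation for a 2-input AND gate
# (the original work did this in hardware; this is only an illustration).

def and_gate(a, b, fault=None):
    """fault is None or a (node, stuck_value) pair; nodes are 'a', 'b', 'y'."""
    if fault:
        node, value = fault
        if node == "a": a = value
        if node == "b": b = value
    y = a & b
    if fault and fault[0] == "y":
        y = fault[1]
    return y

FAULTS = [(n, v) for n in ("a", "b", "y") for v in (0, 1)]

if __name__ == "__main__":
    for fault in FAULTS:
        detecting = [(a, b) for a in (0, 1) for b in (0, 1)
                     if and_gate(a, b) != and_gate(a, b, fault)]
        print(f"stuck-at-{fault[1]} on {fault[0]!r} detected by vectors {detecting}")
```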

  19. Fault-tolerant rotary actuator

    DOEpatents

    Tesar, Delbert

    2006-10-17

    A fault-tolerant actuator module, in a single containment shell, containing two actuator subsystems that are either asymmetrically or symmetrically laid out is provided. Fault tolerance in the actuators of the present invention is achieved by the employment of dual sets of equal resources. Dual resources are integrated into single modules, with each having the external appearance and functionality of a single set of resources.

  20. Seismic fault zone trapped noise

    NASA Astrophysics Data System (ADS)

    Hillers, G.; Campillo, M.; Ben-Zion, Y.; Roux, P.

    2014-07-01

    Systematic velocity contrasts across and within fault zones can lead to head and trapped waves that provide direct information on structural units that are important for many aspects of earthquake and fault mechanics. Here we construct trapped waves from the scattered seismic wavefield recorded by a fault zone array. The frequency-dependent interaction between the ambient wavefield and the fault zone environment is studied using properties of the noise correlation field. A critical frequency fc ≈ 0.5 Hz defines a threshold above which the in-fault scattered wavefield has increased isotropy and coherency compared to the ambient noise. The increased randomization of in-fault propagation directions produces a wavefield that is trapped in a waveguide/cavity-like structure associated with the low-velocity damage zone. Dense spatial sampling allows the resolution of a near-field focal spot, which emerges from the superposition of a collapsing, time reversed wavefront. The shape of the focal spot depends on local medium properties, and a focal spot-based fault normal distribution of wave speeds indicates a ˜50% velocity reduction consistent with estimates from a far-field travel time inversion. The arrival time pattern of a synthetic correlation field can be tuned to match properties of an observed pattern, providing a noise-based imaging tool that can complement analyses of trapped ballistic waves. The results can have wide applicability for investigating the internal properties of fault damage zones, because mechanisms controlling the emergence of trapped noise have less limitations compared to trapped ballistic waves.

  1. Fault Tree Analysis: A Bibliography

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Fault tree analysis is a top-down approach to the identification of process hazards. It is one of the best methods for systematically identifying and graphically displaying the many ways something can go wrong. This bibliography references 266 documents in the NASA STI Database that contain the major concepts, fault tree analysis, risk, and probability theory, in the basic index or major subject terms. An abstract is included with most citations, followed by the applicable subject terms.

  2. Aeromagnetic anomalies over faulted strata

    USGS Publications Warehouse

    Grauch, V.J.S.; Hudson, Mark R.

    2011-01-01

    High-resolution aeromagnetic surveys are now an industry standard and they commonly detect anomalies that are attributed to faults within sedimentary basins. However, detailed studies identifying geologic sources of magnetic anomalies in sedimentary environments are rare in the literature. Opportunities to study these sources have come from well-exposed sedimentary basins of the Rio Grande rift in New Mexico and Colorado. High-resolution aeromagnetic data from these areas reveal numerous, curvilinear, low-amplitude (2–15 nT at 100-m terrain clearance) anomalies that consistently correspond to intrasedimentary normal faults (Figure 1). Detailed geophysical and rock-property studies provide evidence for the magnetic sources at several exposures of these faults in the central Rio Grande rift (summarized in Grauch and Hudson, 2007, and Hudson et al., 2008). A key result is that the aeromagnetic anomalies arise from the juxtaposition of magnetically differing strata at the faults as opposed to chemical processes acting at the fault zone. The studies also provide (1) guidelines for understanding and estimating the geophysical parameters controlling aeromagnetic anomalies at faulted strata (Grauch and Hudson), and (2) observations on key geologic factors that are favorable for developing similar sedimentary sources of aeromagnetic anomalies elsewhere (Hudson et al.).

  3. Software Fault Tolerance: A Tutorial

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2000-01-01

    Because of our present inability to produce error-free software, software fault tolerance is and will continue to be an important consideration in software systems. The root cause of software design errors is the complexity of the systems. Compounding the problems in building correct software is the difficulty in assessing the correctness of software for highly complex systems. After a brief overview of the software development processes, we note how hard-to-detect design faults are likely to be introduced during development and how software faults tend to be state-dependent and activated by particular input sequences. Although component reliability is an important quality measure for system level analysis, software reliability is hard to characterize and the use of post-verification reliability estimates remains a controversial issue. For some applications software safety is more important than reliability, and fault tolerance techniques used in those applications are aimed at preventing catastrophes. Single version software fault tolerance techniques discussed include system structuring and closure, atomic actions, inline fault detection, exception handling, and others. Multiversion techniques are based on the assumption that software built differently should fail differently and thus, if one of the redundant versions fails, it is expected that at least one of the other versions will provide an acceptable output. Recovery blocks, N-version programming, and other multiversion techniques are reviewed.

  4. Passive fault current limiting device

    DOEpatents

    Evans, D.J.; Cha, Y.S.

    1999-04-06

    A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit in a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel connected coils of copper, a high temperature superconductor or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux densities produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils which results in an increase in the impedance in the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. The major voltage during a fault condition is in the coils wound on the common core in a preferred embodiment. 6 figs.

  5. Passive fault current limiting device

    DOEpatents

    Evans, Daniel J.; Cha, Yung S.

    1999-01-01

    A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit in a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel connected coils of copper, a high temperature superconductor or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux densities produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils which results in an increase in the impedance in the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. The major voltage during a fault condition is in the coils wound on the common core in a preferred embodiment.

  6. Fault diagnosis of power systems

    SciTech Connect

    Sekine, Y. ); Akimoto, Y. ); Kunugi, M. )

    1992-05-01

    Fault diagnosis of power systems plays a crucial role in power system monitoring and control, ensuring a stable supply of electrical power to consumers. In the case of multiple faults or incorrect operation of protective devices, fault diagnosis requires judgment of complex conditions at various levels. For this reason, research into the application of knowledge-based systems got an early start, and reports of such systems have appeared in many papers. In this paper, these systems are classified by the method of inference utilized in the knowledge-based systems for fault diagnosis of power systems. The characteristics of each class and corresponding issues, as well as the state-of-the-art techniques for improving their performance, are presented. Additional topics covered are user interfaces, interfaces with energy management systems (EMSs), and expert system development tools for fault diagnosis. Results and evaluation of actual operation in the field are also discussed. Knowledge-based fault diagnosis of power systems will continue to see wider use.

  7. Normal fault earthquakes or graviquakes.

    PubMed

    Doglioni, C; Carminati, E; Petricca, P; Riguzzi, F

    2015-01-01

    Earthquakes are dissipations of energy through elastic waves. Canonically, this is the elastic energy accumulated during the interseismic period. However, in crustal extensional settings, gravity is the main energy source for hangingwall fault collapse. The gravitational potential energy is about 100 times larger than that corresponding to the observed magnitude, far more than enough to explain the earthquake. Therefore, normal faults have a different mechanism of energy accumulation and dissipation (graviquakes) with respect to other tectonic settings (strike-slip and contractional), where elastic energy allows motion even against gravity. The bigger the involved volume, the larger their magnitude. The steeper the normal fault, the larger the vertical displacement and the larger the seismic energy released. Normal faults activate preferentially at about 60° but can be shallower in low-friction rocks. In low static-friction rocks, the fault may partly creep, dissipating gravitational energy without releasing a great amount of seismic energy. The maximum volume involved by graviquakes is smaller than in other tectonic settings, the activated fault being at most about three times the hypocentre depth, which explains their higher b-value and the lower magnitude of the largest recorded events. Having a different phenomenology, graviquakes show peculiar precursors. PMID:26169163

  8. Normal fault earthquakes or graviquakes

    PubMed Central

    Doglioni, C.; Carminati, E.; Petricca, P.; Riguzzi, F.

    2015-01-01

    Earthquakes are dissipations of energy through elastic waves. Canonically, this is the elastic energy accumulated during the interseismic period. However, in crustal extensional settings, gravity is the main energy source for hangingwall fault collapse. The gravitational potential energy is about 100 times larger than that corresponding to the observed magnitude, far more than enough to explain the earthquake. Therefore, normal faults have a different mechanism of energy accumulation and dissipation (graviquakes) with respect to other tectonic settings (strike-slip and contractional), where elastic energy allows motion even against gravity. The bigger the involved volume, the larger their magnitude. The steeper the normal fault, the larger the vertical displacement and the larger the seismic energy released. Normal faults activate preferentially at about 60° but can be shallower in low-friction rocks. In low static-friction rocks, the fault may partly creep, dissipating gravitational energy without releasing a great amount of seismic energy. The maximum volume involved by graviquakes is smaller than in other tectonic settings, the activated fault being at most about three times the hypocentre depth, which explains their higher b-value and the lower magnitude of the largest recorded events. Having a different phenomenology, graviquakes show peculiar precursors. PMID:26169163
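
    A back-of-the-envelope sketch of the kind of energy comparison the abstract makes, using the standard Gutenberg-Richter energy-magnitude relation log10 Es[J] = 1.5 M + 4.8; the hangingwall volume, subsidence, and rock density below are illustrative assumptions, not values from the paper.

```python
# Back-of-the-envelope sketch (illustrative assumptions, not the paper's numbers):
# gravitational potential energy released by a collapsing hangingwall versus the
# seismic energy implied by a magnitude via log10(Es[J]) = 1.5*M + 4.8.

RHO = 2700.0      # assumed rock density, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2

def gravitational_energy(volume_km3, mean_subsidence_m):
    volume_m3 = volume_km3 * 1e9
    return RHO * G * volume_m3 * mean_subsidence_m      # joules

def seismic_energy(magnitude):
    return 10 ** (1.5 * magnitude + 4.8)                # joules

if __name__ == "__main__":
    # Assumed M ~6.5 normal-faulting event: a ~1500 km^3 hangingwall wedge
    # dropping ~0.5 m on average (purely illustrative geometry).
    e_grav = gravitational_energy(1500.0, 0.5)
    e_seis = seismic_energy(6.5)
    print(f"gravitational energy ~{e_grav:.2e} J, seismic energy ~{e_seis:.2e} J, "
          f"ratio ~{e_grav / e_seis:.0f}")
```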

  9. Nonlinear Network Dynamics on Earthquake Fault Systems

    SciTech Connect

    Rundle, Paul B.; Rundle, John B.; Tiampo, Kristy F.; Sa Martins, Jorge S.; McGinnis, Seth; Klein, W.

    2001-10-01

    Earthquake faults occur in interacting networks having emergent space-time modes of behavior not displayed by isolated faults. Using simulations of the major faults in southern California, we find that the physics depends on the elastic interactions among the faults defined by network topology, as well as on the nonlinear physics of stress dissipation arising from friction on the faults. Our results have broad applications to other leaky threshold systems such as integrate-and-fire neural networks.

  10. Tutorial: Advanced fault tree applications using HARP

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.

    1993-01-01

    Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.

  11. Fault Management Guiding Principles

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; Friberg, Kenneth H.; Fesq, Lorraine; Barley, Bryan

    2011-01-01

    Regardless of the mission type: deep space or low Earth orbit, robotic or human spaceflight, Fault Management (FM) is a critical aspect of NASA space missions. As the complexity of space missions grows, the complexity of supporting FM systems increases in turn. Data on recent NASA missions show that development of FM capabilities is a common driver for significant cost overruns late in the project development cycle. Efforts to understand the drivers behind these cost overruns, spearheaded by NASA's Science Mission Directorate (SMD), indicate that they are primarily caused by the growing complexity of FM systems and the lack of maturity of FM as an engineering discipline. NASA can and does develop FM systems that effectively protect mission functionality and assets. The cost growth results from a lack of FM planning and emphasis by project management, as well as the maturity of FM as an engineering discipline, which lags behind the maturity of other engineering disciplines. As a step towards controlling the cost growth associated with FM development, SMD has commissioned a multi-institution team to develop a practitioner's handbook representing best practices for the end-to-end processes involved in engineering FM systems. While currently concentrating primarily on FM for science missions, the expectation is that this handbook will grow into a NASA-wide handbook, serving as a companion to the NASA Systems Engineering Handbook. This paper presents a snapshot of the principles that have been identified to guide FM development from cradle to grave. The principles range from considerations for integrating FM into the project and SE organizational structure, to the relationship between FM designs and mission risk, to the use of the various tools of FM (e.g., redundancy) to meet the FM goal of protecting mission functionality and assets.

  12. Fault Analysis in Solar Photovoltaic Arrays

    NASA Astrophysics Data System (ADS)

    Zhao, Ye

    Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase the reliability, efficiency, and safety of PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown to, at times, prevent the fault current protection devices from tripping. A small-scale experimental PV benchmark system has been developed at Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low-irradiance conditions. The other is a fault that evolves in a PV array during the night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition" conditions. However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" or "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.

  13. Faulting processes at high fluid pressures: An example of fault valve behavior from the Wattle Gully Fault, Victoria, Australia

    NASA Astrophysics Data System (ADS)

    Cox, Stephen F.

    1995-07-01

    The internal structures of the Wattle Gully Fault provide insights about the mechanics and dynamics of fault systems exhibiting fault valve behavior in high fluid pressure regimes. This small, high-angle reverse fault zone developed at temperatures near 300°C in the upper crust, late during mid-Devonian regional crustal shortening in central Victoria, Australia. The Wattle Gully Fault forms part of a network of faults that focused upward migration of fluids generated by metamorphism and devolatilisation at deeper crustal levels. The fault has a length of around 800 m and a maximum displacement of 50 m and was oriented at 60° to 80° to the maximum principal stress during faulting. The structure was therefore severely misoriented for frictional reactivation. This factor, together with the widespread development of steeply dipping fault fill quartz veins and associated subhorizontal extension veins within the fault zone, indicates that faulting occurred at low shear stresses and in a near-lithostatic fluid pressure regime. The internal structures of these veins, and overprinting relationships between veins and faults, indicate that vein development was intimately associated with faulting and involved numerous episodes of fault dilatation and hydrothermal sealing and slip, together with repeated hydraulic extension fracturing adjacent to slip surfaces. The geometries, distribution and internal structures of veins in the Wattle Gully Fault Zone are related to variations in shear stress, fluid pressure, and near-field principal stress orientations during faulting. Vein opening is interpreted to have been controlled by repeated fluid pressure fluctuations associated with cyclic, deformation-induced changes in fault permeability during fault valve behavior. Rates of recovery of shear stress and fluid pressure after rupture events are interpreted to be important factors controlling time dependence of fault shear strength and slip recurrence. Fluctuations in shear stress

  14. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1991-01-01

    Twenty independently developed but functionally equivalent software versions were used to investigate and compare empirically some properties of N-version programming, Recovery Block, and Consensus Recovery Block, using the majority and consensus voting algorithms. This was also compared with another hybrid fault-tolerant scheme called Acceptance Voting, using dynamic versions of consensus and majority voting. Consensus voting provides adaptation of the voting strategy to varying component reliability, failure correlation, and output space characteristics. Since failure correlation among versions effectively reduces the cardinality of the space in which the voter makes decisions, consensus voting is usually preferable to simple majority voting in any fault-tolerant system. When versions have considerably different reliabilities, the version with the best reliability will perform better than any of the fault-tolerant techniques.
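
    A minimal sketch of the two voting strategies compared in the study, applied to one set of version outputs; the tie-breaking rule for consensus voting is an assumption, not necessarily the authors' implementation.

```python
# Sketch of majority vs. consensus voting over N software-version outputs.
# The tie-breaking rule for consensus voting is an assumption.
from collections import Counter
import random

def majority_vote(outputs):
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None   # None = no majority

def consensus_vote(outputs, rng=random):
    counts = Counter(outputs)
    best = max(counts.values())
    candidates = [v for v, c in counts.items() if c == best]
    return rng.choice(candidates)    # assumed: break ties at random

if __name__ == "__main__":
    outputs = [3.14, 3.14, 2.71, 1.41, 2.71]   # five versions, correlated failures
    print(majority_vote(outputs))    # None: no value has more than half the votes
    print(consensus_vote(outputs))   # 3.14 or 2.71 (largest agreement groups tie)
```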

  15. Fault zone connectivity: slip rates on faults in the san francisco bay area, california.

    PubMed

    Bilham, R; Bodin, P

    1992-10-01

    The slip rate of a fault segment is related to the length of the fault zone of which it is part. In turn, the slip rate of a fault zone is related to its connectivity with adjoining or contiguous fault zones. The observed variation in slip rate on fault segments in the San Francisco Bay area in California is consistent with connectivity between the Hayward, Calaveras, and San Andreas fault zones. Slip rates on the southern Hayward fault taper northward from a maximum of more than 10 millimeters per year and are sensitive to the active length of the Maacama fault. PMID:17835127

  16. Reconsidering Fault Slip Scaling

    NASA Astrophysics Data System (ADS)

    Gomberg, J. S.; Wech, A.; Creager, K. C.; Obara, K.; Agnew, D. C.

    2015-12-01

    The scaling of fault slip events given by the relationship between the scalar moment M0 and duration T potentially provides key constraints on the underlying physics controlling slip. Many studies have suggested that measurements of M0 and T are related as M0 = Kf T^3 for 'fast' slip events (earthquakes) and M0 = Ks T for 'slow' slip events, in which Kf and Ks are proportionality constants, although some studies have inferred intermediate relations. Here 'slow' and 'fast' refer to slip front propagation velocities, either so slow that seismic radiation is too small or long period to be measurable or fast enough that dynamic processes may be important for the slip process and measurable seismic waves radiate. Numerous models have been proposed to explain the differing M0-T scaling relations. We show that a single, simple dislocation model of slip events within a bounded slip zone may explain nearly all M0-T observations. Rather than different scaling for fast and slow populations, we suggest that within each population the scaling changes from M0 proportional to T^3 to M0 proportional to T when the slipping area reaches the slip zone boundaries and transitions from unbounded, 2-dimensional to bounded, 1-dimensional growth. This transition has not been apparent previously for slow events because data have sampled only the bounded regime and may be obscured for earthquakes when observations from multiple tectonic regions are combined. We have attempted to sample the expected transition between bounded and unbounded regimes for the slow slip population, measuring tremor cluster parameters from catalogs for Japan and Cascadia and using them as proxies for small slow slip event characteristics. For fast events we employed published earthquake slip models. Observations corroborate our hypothesis, but highlight observational difficulties. We find that M0-T observations for both slow and fast slip events, spanning 12 orders of magnitude in M0, are consistent with a single model based on dislocation
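
    A small sketch of the single scaling law suggested here: moment grows as T^3 while the slipping patch expands in two dimensions, then linearly in T once the patch saturates the slip-zone boundary. The constant Kf and the saturation duration are illustrative parameters, not fitted values from the study.

```python
# Sketch of the bounded-growth moment-duration scaling: unbounded 2-D growth
# (M0 proportional to T^3) transitioning to bounded 1-D growth (M0 proportional
# to T) once the slipping area reaches the slip-zone boundary.  Kf and T_sat are
# illustrative parameters only.
def moment(T, Kf=1.0e16, T_sat=10.0):
    """Scalar moment (N*m) after duration T (s) under the bounded-growth model."""
    if T <= T_sat:
        return Kf * T ** 3                        # unbounded, 2-D growth: M0 ~ T^3
    slope_at_sat = 3.0 * Kf * T_sat ** 2          # keep dM0/dT continuous at T_sat
    return Kf * T_sat ** 3 + slope_at_sat * (T - T_sat)   # bounded, 1-D: M0 ~ T

if __name__ == "__main__":
    for T in (1.0, 5.0, 10.0, 100.0, 1000.0):
        print(f"T = {T:7.1f} s  ->  M0 = {moment(T):.2e} N*m")
```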

  17. Rupture interaction with fault jogs

    NASA Astrophysics Data System (ADS)

    Sibson, Richard H.

    Propagation of moderate to large earthquake ruptures within major transcurrent fault systems is affected by their large-scale brittle infrastructure, comprising echelon segmentation and curvature of principal slip surfaces (PSS) within typically ~1 km wide main fault zones. These PSS irregularities are classified into dilational and antidilational fault jogs depending on the tendency for areal increase or reduction, respectively, across the jog structures. High precision microearthquake studies show that the jogs often extend throughout the seismogenic regime to depths of around 10 km. On geomorphic evidence, the larger jogs may persist for periods >10^5 years. While antidilational jogs form obstacles to both short- and long-term displacements, dilational jogs appear to act as kinetic barriers capable of perturbing or arresting earthquake ruptures, but allowing time-dependent slip transfer. In the case of antidilational jogs slip transfer is accommodated by widespread subsidiary faulting, but for dilational jogs it additionally involves extensional fracture opening localized in the echelon stepover. In fluid-saturated crust, the rapid opening of linking extensional fracture systems to allow passage of earthquake ruptures is opposed by induced suctions which scale with the width of the jog. Rupture arrest at dilational jogs may then be followed by delayed slip transfer as fluid pressures reequilibrate by diffusion. Aftershock distributions associated with the different fault jogs reflect these contrasts in their internal structure and mechanical response.

  18. Faulting in porous carbonate grainstones

    NASA Astrophysics Data System (ADS)

    Tondi, Emanuele; Agosta, Fabrizio

    2010-05-01

    In the recent past, a new faulting mechanism has been documented within porous carbonate grainstones. This mechanism is due to strain localization into narrow tabular bands characterized by both volumetric and shear strain; for this reason, these features are named compactive shear bands. In the field, compactive shear bands are easily recognizable because they are lightly coloured with respect to the parent rock, and/or show a positive relief because of their increased resistance to weathering. Both characteristics, light colours and positive relief, are a consequence of the compaction processes that characterize these bands, which are the simplest structural element that forms within porous carbonate grainstones. With ongoing deformation, single compactive shear bands, which accommodate only a few mm of displacement, may evolve into zones of compactive shear bands and, finally, into well-developed faults characterized by slip surfaces and fault rocks. Field analysis conducted in key areas of Italy allowed us to document different modes of interaction and linkage among the compactive shear bands: (i) a simple divergence of two different compactive shear bands from an original one, (ii) extensional and contractional jogs formed by two continuous, interacting compactive shear bands, and (iii) eye structures formed by collinear interacting compactive shear bands, which have already been described for deformation bands in sandstones. The last two types of interaction may localize the formation of compaction bands, which are characterized by a pronounced component of compaction and negligible components of shearing, and/or pressure solution seams. All the aforementioned types of interaction and linkage can occur at any deformation stage, whether single bands, zones of bands, or well-developed faults. The transition from one deformation process to another, which is likely to be controlled by changes in the material properties, is recorded by different ratios and

  19. Intelligent fault-tolerant controllers

    NASA Technical Reports Server (NTRS)

    Huang, Chien Y.

    1987-01-01

    A system with fault tolerant controls is one that can detect, isolate, and estimate failures and perform necessary control reconfiguration based on this new information. Artificial intelligence (AI) is concerned with semantic processing, and it has evolved to include the topics of expert systems and machine learning. This research represents an attempt to apply AI to fault tolerant controls, hence, the name intelligent fault tolerant control (IFTC). A generic solution to the problem is sought, providing a system based on logic in addition to analytical tools, and offering machine learning capabilities. The advantages are that redundant system specific algorithms are no longer needed, that reasonableness is used to quickly choose the correct control strategy, and that the system can adapt to new situations by learning about its effects on system dynamics.

  20. Transient Faults in Computer Systems

    NASA Technical Reports Server (NTRS)

    Masson, Gerald M.

    1993-01-01

    A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.

  1. Approximate Entropy Based Fault Localization and Fault Type Recognition for Non-solidly Earthed Network

    NASA Astrophysics Data System (ADS)

    Pang, Qingle; Liu, Xinyun; Sun, Bo; Ling, Qunli

    2012-12-01

    For non-solidly earthed networks, the localization of single-phase grounding faults has been a problem. A novel fault localization and fault type recognition method for single-phase grounding faults based on approximate entropy is presented. The approximate entropies of the transient zero-sequence current at the two ends of a healthy section are approximately equal, and their ratio is close to 1. On the contrary, the approximate entropies at the two ends of the faulted section are different, and the ratio is far from 1. In this way, the fault section is located. For the same fault section, the smaller the fault resistance, the larger the approximate entropy of the transient zero-sequence current. According to the relationship between approximate entropy and fault resistance, the fault type is determined. The method has the advantages of transferring less data and not requiring accurate synchronous sampling. The simulation results show that the proposed method is feasible and accurate.
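
    A hedged sketch of the quantity involved: the standard approximate-entropy (ApEn) statistic applied to synthetic "zero-sequence current" records at two ends of a section, with the healthy-section ratio near 1 and the faulted-section ratio departing from it. The embedding dimension, tolerance, and signals are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: approximate entropy (ApEn) of a signal and the ratio test
# described in the abstract.  The embedding dimension, tolerance and the
# synthetic "zero-sequence current" signals are illustrative assumptions.
import numpy as np

def approximate_entropy(u, m=2, r=None):
    u = np.asarray(u, dtype=float)
    if r is None:
        r = 0.2 * np.std(u)                      # common choice of tolerance
    def phi(m):
        n = len(u) - m + 1
        x = np.array([u[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of embedded vectors.
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        c = np.sum(d <= r, axis=1) / n
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.linspace(0, 0.2, 400)
    healthy_end_a = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.normal(size=t.size)
    healthy_end_b = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.normal(size=t.size)
    faulted_end   = np.sin(2 * np.pi * 50 * t) + 0.8 * rng.normal(size=t.size)

    ratio_healthy = approximate_entropy(healthy_end_a) / approximate_entropy(healthy_end_b)
    ratio_faulted = approximate_entropy(healthy_end_a) / approximate_entropy(faulted_end)
    print(f"healthy-section ratio ~{ratio_healthy:.2f} (close to 1)")
    print(f"faulted-section ratio ~{ratio_faulted:.2f} (far from 1)")
```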

  2. InSAR measurements around active faults: creeping Philippine Fault and un-creeping Alpine Fault

    NASA Astrophysics Data System (ADS)

    Fukushima, Y.

    2013-12-01

    Recently, interferometric synthetic aperture radar (InSAR) time-series analyses have been frequently applied to measure the time-series of small and quasi-steady displacements in wide areas. Large efforts in the methodological developments have been made to pursue higher temporal and spatial resolutions by using frequently acquired SAR images and detecting more pixels that exhibit phase stability. While such a high resolution is indispensable for tracking displacements of man-made and other small-scale structures, it is not necessarily needed and can be unnecessarily computer-intensive for measuring the crustal deformation associated with active faults and volcanic activities. I apply a simple and efficient method to measure the deformation around the Alpine Fault in the South Island of New Zealand, and the Philippine Fault in the Leyte Island. I use a small-baseline subset (SBAS) analysis approach (Berardino, et al., 2002). Generally, the more we average the pixel values, the more coherent the signals are. Considering that, for the deformation around active faults, the spatial resolution can be as coarse as a few hundred meters, we can severely 'multi-look' the interferograms. The two applied cases in this study benefited from this approach; I could obtain the mean velocity maps on practically the entire area without discarding decorrelated areas. The signals could have been only partially obtained by standard persistent scatterer or single-look small-baseline approaches that are much more computer-intensive. In order to further increase the signal detection capability, it is sometimes effective to introduce a processing algorithm adapted to the signal of interest. In an InSAR time-series processing, one usually needs to set the reference point because interferograms are all relative measurements. It is difficult, however, to fix the reference point when one aims to measure long-wavelength deformation signals that span the whole analysis area. This problem can be

  3. Update: San Andreas Fault experiment

    NASA Technical Reports Server (NTRS)

    Christodoulidis, D. C.; Smith, D. E.

    1984-01-01

    Satellite laser ranging techniques are used to monitor the broad motion of the tectonic plates comprising the San Andreas Fault System. The San Andreas Fault Experiment (SAFE) has progressed through upgrades to laser system hardware and improvements in the modeling capabilities of the spaceborne laser targets. Of special note is the 1976 launch of the Laser Geodynamics Satellite (LAGEOS), NASA's only completely dedicated laser satellite. The results of plate motion projected onto this 896 km measured line over the past eleven years are summarized and intercompared.

  4. Faulting at Mormon Point, Death Valley, California: A low-angle normal fault cut by high-angle faults

    NASA Astrophysics Data System (ADS)

    Keener, Charles; Serpa, Laura; Pavlis, Terry L.

    1993-04-01

    New geophysical and fault kinematic studies indicate that late Cenozoic basin development in the Mormon Point area of Death Valley, California, was accommodated by fault rotations. Three of six fault segments recognized at Mormon Point are now inactive and have been rotated to low dips during extension. The remaining three segments are now active and moderately to steeply dipping. From the geophysical data, one active segment appears to offset the low-angle faults in the subsurface of Death Valley.

  5. Maximum Magnitude in Relation to Mapped Fault Length and Fault Rupture

    NASA Astrophysics Data System (ADS)

    Black, N.; Jackson, D.; Rockwell, T.

    2004-12-01

    Earthquake hazard zones are highlighted using known fault locations and an estimate of each fault's maximum magnitude earthquake. Magnitude limits are commonly determined from fault geometry, which depends on fault length. Over the past 30 years it has become apparent that fault length is often poorly constrained and that a single event can rupture across several individual fault segments. In this study fault geometries are analyzed before and after several moderate to large magnitude earthquakes to determine how well fault length can be used to assess seismic hazard. Estimates of future earthquake magnitudes are often inferred from prior determinations of fault length, but use magnitude regressions based on rupture length. However, rupture length is not always limited to the previously estimated fault length or contained on a single fault. Therefore, the maximum magnitude for a fault may be underestimated unless the geometry and segmentation of faulting are completely understood. This study examines whether rupture/fault length can be used to accurately predict the maximum magnitude for a given fault. We examine earthquakes greater than magnitude 6.0 that occurred after 1970 in Southern California. Geologic maps, fault evaluation reports, and aerial photos that existed prior to these earthquakes are used to obtain the pre-earthquake fault lengths. Pre-earthquake fault lengths are compared with rupture lengths to determine: 1) whether fault lengths are the same before and after the ruptures, and 2) the geology and geometry of ruptures that propagated beyond the originally recognized endpoints of a mapped fault. The ruptures examined in this study typically follow one of the following models. The ruptures either: 1) are contained within the dimensions of the original fault trace, 2) break through one or both endpoints of the originally mapped fault trace, or 3) break through multiple faults, connecting segments into one large fault line. No rupture simply broke a
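
    For context, a hedged sketch of the kind of rupture-length regression the abstract refers to, using the widely cited Wells and Coppersmith (1994) all-slip-type relation as an illustration (the coefficients below come from that general regression, not from this study):

```python
import math

def magnitude_from_rupture_length(srl_km, a=5.08, b=1.16):
    """Estimate moment magnitude from surface rupture length (km).

    Coefficients follow the commonly cited Wells & Coppersmith (1994)
    all-slip-type regression M = a + b*log10(SRL); illustrative only.
    """
    return a + b * math.log10(srl_km)

# If two mapped faults (e.g., 30 km and 25 km long) rupture together,
# the combined-length estimate exceeds either single-fault estimate.
print(magnitude_from_rupture_length(30))        # ~6.8
print(magnitude_from_rupture_length(30 + 25))   # ~7.1
```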

  6. Active fault traces along Bhuj Fault and Katrol Hill Fault, and trenching survey at Wandhay, Kachchh, Gujarat, India

    NASA Astrophysics Data System (ADS)

    Morino, Michio; Malik, Javed N.; Mishra, Prashant; Bhuiyan, Chandrashekhar; Kaneko, Fumio

    2008-06-01

    Several new active fault traces were identified along the Katrol Hill Fault (KHF). A new fault (named the Bhuj Fault, BF) that extends into the Bhuj Plain was also identified. These fault traces were identified based on satellite photo interpretation and field survey. Trenches were excavated to identify the paleoseismic events, the pattern of faulting, and the nature of deformation. New active fault traces were recognized about 1 km north of the topographic boundary between the Katrol Hill and the plain area. The fault exposure along the left bank of the Khari River, with a 10 m wide shear zone in the Mesozoic rocks and displacement of the overlying Quaternary deposits, is indicative of continued tectonic activity along the ancient fault. The E-W trending active fault traces along the KHF in the western part change to NE-SW or ENE-WSW near Wandhay village. A trenching survey across a low scarp near Wandhay village reveals three major fault strands, F1, F2, and F3. These fault strands displaced older terrace deposits comprising sand, silt, and gravel units, along with overlying younger deposits (units 1 to 5) made of gravel, sand, and silt. Stratigraphic relationships indicate at least three large magnitude earthquakes along the KHF during the Late Holocene or the recent historic past.

  7. Fault seals in oil fields in Nevada

    SciTech Connect

    Foster, N.H.; Veal, H.K.; Bortz, L.C.

    1987-08-01

    Faults form seals for oil accumulations in the Eagle Springs, Trap Spring, and Blackburn fields, and probably in the Grant Canyon field, in Nevada. The main boundary fault on the east side of the Pine Valley graben forms a seal in the Blackburn field. A fault on the west side of the Trap Spring field forms a seal. In the Grant Canyon field, the main boundary fault on the east side of the Railroad Valley graben is interpreted to form a seal. Calcite deposited by hot spring activity plugs many fault zones and, in some cases, forms seals. Some fault zones have calcite mineralization up to several thousand feet wide. Within the Eagle Springs field on the east side of the Railroad Valley graben, a northeast-trending fault separates oil accumulations with different oil-water contacts. This separation indicates that the fault forms at least a partial seal within the accumulation.

  8. Parametric Modeling and Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Ju, Jianhong

    2000-01-01

    Fault tolerant control is considered for a nonlinear aircraft model expressed as a linear parameter-varying system. By proper parameterization of foreseeable faults, the linear parameter-varying system can include fault effects as additional varying parameters. A recently developed technique in fault effect parameter estimation allows us to assume that estimates of the fault effect parameters are available on-line. Reconfigurability is calculated for this model with respect to the loss of control effectiveness in order to assess, prior to control design, the model's potential to tolerate such losses. The control design is carried out by applying a polytopic method to the aircraft model. An error bound on fault effect parameter estimation is provided, within which the Lyapunov stability of the closed-loop system is robust. Our simulation results show that as long as the fault parameter estimates are sufficiently accurate, the polytopic controller can provide satisfactory fault tolerance.

  9. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1987-01-01

    Specific topics briefly addressed include: the consistent comparison problem in N-version systems; analytic models of comparison testing; fault tolerance through data diversity; and the relationship between failures caused by automatically seeded faults.

  10. Seismology: Diary of a wimpy fault

    NASA Astrophysics Data System (ADS)

    Bürgmann, Roland

    2015-05-01

    Subduction zone faults can slip slowly, generating tremor. The varying correlation between tidal stresses and tremor occurring deep in the Cascadia subduction zone suggests that the fault is inherently weak, and gets weaker as it slips.

  11. Solar Dynamic Power System Fault Diagnosis

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Dias, Lakshman G.

    1996-01-01

    The objective of this research is to conduct various fault simulation studies for diagnosing the type and location of faults in the power distribution system. Different types of faults are simulated at different locations within the distribution system, and the faulted waveforms are monitored at measurable nodes such as the outputs of the DDCUs. These fault signatures are processed using feature extractors such as the FFT and wavelet transforms. The extracted features are fed to a clustering-based neural network for training and subsequent testing using previously unseen data. Different load models consisting of constant impedance and constant power are used for the loads. Open circuit faults and short circuit faults are studied. It is concluded from the present studies that using features extracted from wavelet transforms gives better success rates during ANN testing. The trained ANNs are capable of diagnosing fault types and approximate locations in the solar dynamic power distribution system.
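
    A minimal sketch of the feature-extraction step described above, assuming a 60 Hz system and harmonic-magnitude features from an FFT (the sampling rate, feature count, and downstream classifier are assumptions):

```python
import numpy as np

def harmonic_features(waveform, fs=960.0, f0=60.0, n_harmonics=8):
    """FFT-magnitude features at the first n harmonics of the fundamental.

    The normalized harmonic magnitudes form the fault signature that would
    be fed to a clustering-based neural network classifier.
    """
    spectrum = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / fs)
    feats = np.array([spectrum[np.argmin(np.abs(freqs - k * f0))]
                      for k in range(1, n_harmonics + 1)])
    return feats / (np.linalg.norm(feats) + 1e-12)

# Illustrative waveforms: a clean 60 Hz signal vs. a distorted, decaying one.
t = np.linspace(0.0, 1.0, 960, endpoint=False)
normal = np.sin(2 * np.pi * 60 * t)
faulted = np.sin(2 * np.pi * 60 * t) * np.exp(-3 * t) + 0.3 * np.sin(2 * np.pi * 180 * t)
print(harmonic_features(normal)[:3], harmonic_features(faulted)[:3])
```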

  12. Cell boundary fault detection system

    DOEpatents

    Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward

    2011-04-19

    An apparatus and program product determine a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.

  13. Implementing fault-tolerant sensors

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith

    1989-01-01

    One aspect of fault tolerance in process control programs is the ability to tolerate sensor failure. A methodology is presented for transforming a process control program that cannot tolerate sensor failures to one that can. Additionally, a hierarchy of failure models is identified.

  14. MOS integrated circuit fault modeling

    NASA Technical Reports Server (NTRS)

    Sievers, M.

    1985-01-01

    Three digital simulation techniques for MOS integrated circuit faults were examined. These techniques embody a hierarchy of complexity bracketing the range of simulation levels. The digital approaches are: transistor-level, connector-switch-attenuator level, and gate level. The advantages and disadvantages are discussed. Failure characteristics are also described.

  15. Reliability computation using fault tree analysis

    NASA Technical Reports Server (NTRS)

    Chelson, P. O.

    1971-01-01

    A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
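
    A minimal sketch of the conditioning idea mentioned above: when the same basic failure appears in more than one fault path, the top-event probability is computed by conditioning (Shannon expansion) on that shared event rather than wrongly assuming independence. The tree encoding and probabilities below are assumptions for illustration.

```python
from itertools import product

def eval_tree(node, fixed, probs):
    """Top-event probability with some basic events fixed to 0/1 and the
    rest treated as independent with the given probabilities."""
    if isinstance(node, str):
        return float(fixed[node]) if node in fixed else probs[node]
    gate, *children = node
    vals = [eval_tree(c, fixed, probs) for c in children]
    if gate == "AND":
        p = 1.0
        for v in vals:
            p *= v
        return p
    q = 1.0                      # OR gate via the complement rule
    for v in vals:
        q *= 1.0 - v
    return 1.0 - q

def top_event_probability(tree, probs, shared):
    """Condition on basic events appearing in more than one fault path,
    then combine the conditional results weighted by their probabilities."""
    total = 0.0
    for values in product([0, 1], repeat=len(shared)):
        fixed = dict(zip(shared, values))
        weight = 1.0
        for e, v in fixed.items():
            weight *= probs[e] if v else 1.0 - probs[e]
        total += weight * eval_tree(tree, fixed, probs)
    return total

# Basic event B feeds two different fault paths, so it is conditioned on.
tree = ("OR", ("AND", "A", "B"), ("AND", "B", "C"))
probs = {"A": 0.01, "B": 0.02, "C": 0.03}
print(top_event_probability(tree, probs, shared=["B"]))   # exact: 0.000794
```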

  16. Fault tolerant software modules for SIFT

    NASA Technical Reports Server (NTRS)

    Hecht, M.; Hecht, H.

    1982-01-01

    The implementation of software fault tolerance is investigated for critical modules of the Software Implemented Fault Tolerance (SIFT) operating system to support the computational and reliability requirements of advanced fly-by-wire transport aircraft. Fault tolerant designs generated for the error reporter and global executive are examined. A description of the alternate routines, implementation requirements, and software validation is included.

  17. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  18. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  19. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  20. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Fault. 255.11 Section 255.11 Employees... § 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than...

  1. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  2. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Fault. 255.11 Section 255.11 Employees... § 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than...

  3. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  4. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  5. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  6. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Fault areas. 258.13 Section 258.13... SOLID WASTE LANDFILLS Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral expansions shall not be located within 200 feet (60 meters) of a fault that has had displacement in...

  7. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Fault. 255.11 Section 255.11 Employees... § 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than...

  8. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  9. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Fault. 255.11 Section 255.11 Employees... § 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than...

  10. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  11. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Fault areas. 258.13 Section 258.13... SOLID WASTE LANDFILLS Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral expansions shall not be located within 200 feet (60 meters) of a fault that has had displacement in...

  12. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Fault areas. 258.13 Section 258.13... SOLID WASTE LANDFILLS Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral expansions shall not be located within 200 feet (60 meters) of a fault that has had displacement in...

  13. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  14. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Fault. 255.11 Section 255.11 Employees... § 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than...

  15. FAULT & COORDINATION STUDY FOR T PLANT COMPLEX

    SciTech Connect

    MCDONALD, G.P.; BOYD-BODIAU, E.A.

    2004-09-01

    A short circuit study is performed to determine the maximum fault current that the system protective devices, transformers, and interconnections would be subject to in the event of a three-phase, phase-to-phase, or phase-to-ground fault. Generally, the short circuit study provides the worst-case fault current levels at each bus or connection point of the system.
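
    The core calculation behind such a study can be sketched as follows: the worst-case bolted three-phase fault current at a bus is approximately the prefault phase voltage divided by the Thevenin impedance seen from that bus (the voltage level and impedance below are illustrative, not values from the T Plant study):

```python
def three_phase_fault_current(v_prefault_ll, z_thevenin_ohm):
    """Worst-case bolted three-phase fault current at a bus, in amperes.

    v_prefault_ll  : prefault line-to-line voltage (volts)
    z_thevenin_ohm : Thevenin equivalent impedance seen from the faulted bus (ohms)
    """
    v_phase = v_prefault_ll / 3 ** 0.5        # line-to-neutral voltage
    return abs(v_phase / z_thevenin_ohm)

# Illustrative 13.8 kV bus behind a mostly reactive source impedance.
print(three_phase_fault_current(13_800, complex(0.05, 0.95)))   # roughly 8.4 kA
```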

  16. High temperature superconducting fault current limiter

    DOEpatents

    Hull, J.R.

    1997-02-04

    A fault current limiter for an electrical circuit is disclosed. The fault current limiter includes a high temperature superconductor in the electrical circuit. The high temperature superconductor is cooled below its critical temperature to maintain the superconducting electrical properties during operation as the fault current limiter. 15 figs.
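
    A toy model of the operating principle (not the patented device): below its critical current the superconductor adds essentially no impedance, and once the prospective fault current exceeds that threshold it quenches into a resistive state that limits the current. The numbers below are illustrative assumptions.

```python
def limited_current(v_source, r_line, i_critical, r_quench):
    """Toy quench model of a superconducting fault current limiter.

    Below the critical current the limiter adds no resistance; once the
    prospective fault current exceeds i_critical the superconductor quenches
    and inserts r_quench, clamping the fault current.
    """
    prospective = v_source / r_line                  # current with the limiter superconducting
    if prospective <= i_critical:
        return prospective
    return v_source / (r_line + r_quench)            # quenched: resistive element limits the fault

print(limited_current(v_source=480.0, r_line=0.05, i_critical=2000.0, r_quench=1.0))
# ~457 A, versus a prospective fault current of 9600 A without the limiter
```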

  17. High temperature superconducting fault current limiter

    DOEpatents

    Hull, John R.

    1997-01-01

    A fault current limiter (10) for an electrical circuit (14). The fault current limiter (10) includes a high temperature superconductor (12) in the electrical circuit (14). The high temperature superconductor (12) is cooled below its critical temperature to maintain the superconducting electrical properties during operation as the fault current limiter (10).

  18. Fault-related clay authigenesis along the Moab Fault: Implications for calculations of fault rock composition and mechanical and hydrologic fault zone properties

    NASA Astrophysics Data System (ADS)

    Solum, John G.; Davatzes, Nicholas C.; Lockner, David A.

    2010-12-01

    The presence of clays in fault rocks influences both the mechanical and hydrologic properties of clay-bearing faults; therefore, understanding the origin of clays in fault rocks and their distributions is of great importance for defining fundamental properties of faults in the shallow crust. Field mapping shows that layers of clay gouge and shale smear are common along the Moab Fault, from exposures with throws ranging from 10 to ˜1000 m. Elemental analyses of four locations along the Moab Fault show that fault rocks are enriched in clays at R191 and Bartlett Wash, but that this clay enrichment occurred at different times and was associated with different fluids. Fault rocks at Corral and Courthouse Canyons show little difference in elemental composition from the adjacent protolith, suggesting that formation of fault rocks at those locations is governed by mechanical processes. Friction tests show that these authigenic clays result in fault zone weakening, and potentially influence both the style of failure along the fault (seismogenic vs. aseismic) and the amount of fluid loss associated with coseismic dilation. Scanning electron microscopy shows that authigenesis promotes the continuity of slip surfaces, thereby enhancing seal capacity. The occurrence of this authigenesis, and its influence on the sealing properties of faults, highlights the importance of determining the processes that control this phenomenon.

  19. Fault-related clay authigenesis along the Moab Fault: Implications for calculations of fault rock composition and mechanical and hydrologic fault zone properties

    USGS Publications Warehouse

    Solum, J.G.; Davatzes, N.C.; Lockner, D.A.

    2010-01-01

    The presence of clays in fault rocks influences both the mechanical and hydrologic properties of clay-bearing faults; therefore, understanding the origin of clays in fault rocks and their distributions is of great importance for defining fundamental properties of faults in the shallow crust. Field mapping shows that layers of clay gouge and shale smear are common along the Moab Fault, from exposures with throws ranging from 10 to ~1000 m. Elemental analyses of four locations along the Moab Fault show that fault rocks are enriched in clays at R191 and Bartlett Wash, but that this clay enrichment occurred at different times and was associated with different fluids. Fault rocks at Corral and Courthouse Canyons show little difference in elemental composition from the adjacent protolith, suggesting that formation of fault rocks at those locations is governed by mechanical processes. Friction tests show that these authigenic clays result in fault zone weakening, and potentially influence both the style of failure along the fault (seismogenic vs. aseismic) and the amount of fluid loss associated with coseismic dilation. Scanning electron microscopy shows that authigenesis promotes the continuity of slip surfaces, thereby enhancing seal capacity. The occurrence of this authigenesis, and its influence on the sealing properties of faults, highlights the importance of determining the processes that control this phenomenon. © 2010 Elsevier Ltd.

  20. Ground Fault--A Health Hazard

    ERIC Educational Resources Information Center

    Jacobs, Clinton O.

    1977-01-01

    A ground fault is especially hazardous because the resistance through which the current is flowing to ground may be sufficient to cause electrocution. The Ground Fault Circuit Interrupter (G.F.C.I.) protects 15 and 25 ampere 120 volt circuits from ground fault condition. The design and examples of G.F.C.I. functions are described in this article.…

  1. Fault-crossing P delays, epicentral biasing, and fault behavior in Central California

    USGS Publications Warehouse

    Marks, S.M.; Bufe, C.G.

    1979-01-01

    The P delays across the San Andreas fault zone in central California have been determined from travel-time differences at station pairs spanning the fault, using off-fault local earthquake or quarry blast sources. Systematic delays as large as 0.4 sec have been observed for paths crossing the fault at depths of 5-10 km. These delays can account for the apparent deviation of epicenters from the mapped fault trace. The largest delays occur along the San Andreas fault between San Juan Bautista and Bear Valley and between Bitterwater Valley and Parkfield. Spatial variations in fault behavior correlate with the magnitude of the fault-crossing P delay. The delay decreases to the northwest of San Juan Bautista across the "locked" section of the San Andreas fault and also decreases to the southeast approaching Parkfield. Where the delay is large, seismicity is relatively high and the fault is creeping. © 1979.

  2. Architecture of small-scale fault zones in the context of the Leinetalgraben Fault System

    NASA Astrophysics Data System (ADS)

    Reyer, Dorothea; Philipp, Sonja L.

    2010-05-01

    Understanding fault zone properties in different geological settings is important to better assess the development and propagation of faults. In addition, it allows better evaluation of, and permeability estimates for, potential fault-related geothermal reservoirs. The Leinetalgraben fault system provides an outcrop analogue for many fault zones in the subsurface of the North German Basin. The Leinetalgraben is a N-S-trending graben structure in the south of Lower Saxony, initiated in the Jurassic, and as such part of the North German Basin. The fault system was reactivated and inverted during Alpine compression in the Tertiary. This complex geological situation was further affected by halotectonics. We can therefore find different types of fault zones, that is, normal, reverse, strike-slip and oblique-slip faults, surrounding the major Leinetalgraben boundary faults. Here we present first results of structural geological field studies on the geometry and architecture of fault zones in the Leinetalgraben Fault System at outcrop scale. We measured the orientations and displacements of 17 m-scale fault zones in limestone (Muschelkalk) outcrops, the thicknesses of their fault cores and damage zones, as well as the fracture densities and geometric parameters of the fracture systems therein. We also analysed the effects of rock heterogeneities, particularly stiffness variations between layers (mechanical layering), on the propagation of natural fractures and fault zones. The analysed fault zones predominantly show orientations similar to those of the major fault zones they surround. Other faults are conjugate or perpendicular to the major fault zones. The direction of predominant joint strike corresponds to the orientation of the fault zones in the majority of cases. The mechanical layering of the limestone and marlstone stratification obviously has great effects on fracture propagation. Already thin layers (mm- to cm-scale) of low stiffness - here marl - seem to suffice to change the

  3. Fault-Tolerant Heat Exchanger

    NASA Technical Reports Server (NTRS)

    Izenson, Michael G.; Crowley, Christopher J.

    2005-01-01

    A compact, lightweight heat exchanger has been designed to be fault-tolerant in the sense that a single-point leak would not cause mixing of heat-transfer fluids. This particular heat exchanger is intended to be part of the temperature-regulation system for habitable modules of the International Space Station and to function with water and ammonia as the heat-transfer fluids. The basic fault-tolerant design is adaptable to other heat-transfer fluids and heat exchangers for applications in which mixing of heat-transfer fluids would pose toxic, explosive, or other hazards: Examples could include fuel/air heat exchangers for thermal management on aircraft, process heat exchangers in the cryogenic industry, and heat exchangers used in chemical processing. The reason this heat exchanger can tolerate a single-point leak is that the heat-transfer fluids are everywhere separated by a vented volume and at least two seals. The combination of fault tolerance, compactness, and light weight is implemented in a unique heat-exchanger core configuration: Each fluid passage is entirely surrounded by a vented region bridged by solid structures through which heat is conducted between the fluids. Precise, proprietary fabrication techniques make it possible to manufacture the vented regions and heat-conducting structures with very small dimensions to obtain a very large coefficient of heat transfer between the two fluids. A large heat-transfer coefficient favors compact design by making it possible to use a relatively small core for a given heat-transfer rate. Calculations and experiments have shown that in most respects, the fault-tolerant heat exchanger can be expected to equal or exceed the performance of the non-fault-tolerant heat exchanger that it is intended to supplant (see table). The only significant disadvantages are a slight weight penalty and a small decrease in the mass-specific heat transfer.

  4. Fault tolerant control of spacecraft

    NASA Astrophysics Data System (ADS)

    Godard

    Autonomous multiple spacecraft formation flying space missions demand the development of reliable control systems to ensure rapid, accurate, and effective response to various attitude and formation reconfiguration commands. Keeping in mind the complexities involved in the technology development to enable spacecraft formation flying, this thesis presents the development and validation of a fault tolerant control algorithm that augments the AOCS on board a spacecraft to ensure that these challenging formation flying missions will fly successfully. Taking inspiration from the existing theory of nonlinear control, a fault-tolerant control system for the RyePicoSat missions is designed to cope with actuator faults whilst maintaining the desired degree of overall stability and performance. An autonomous fault tolerant adaptive control scheme for spacecraft equipped with redundant actuators and robust control of spacecraft in an underactuated configuration represent the two central themes of this thesis. The developed algorithms are validated using hardware-in-the-loop simulation. A reaction wheel testbed is used to validate the proposed fault tolerant attitude control scheme. A spacecraft formation flying experimental testbed is used to verify the performance of the proposed robust control scheme for underactuated spacecraft configurations. The proposed underactuated formation flying concept leads to more than 60% savings in fuel consumption when compared to a fully actuated spacecraft formation configuration. We also developed a novel attitude control methodology that requires only a single thruster to stabilize the three-axis attitude and angular velocity components of a spacecraft. Numerical simulations and hardware-in-the-loop experimental results, along with rigorous analytical stability analysis, show that the proposed methodology will greatly enhance the reliability of the spacecraft, while allowing for potentially significant overall mission cost reduction.

  5. Fault Diagnosis in HVAC Chillers

    NASA Technical Reports Server (NTRS)

    Choi, Kihoon; Namuru, Setu M.; Azam, Mohammad S.; Luo, Jianhui; Pattipati, Krishna R.; Patterson-Hine, Ann

    2005-01-01

    Modern buildings are being equipped with increasingly sophisticated power and control systems with substantial capabilities for monitoring and controlling the amenities. Operational problems associated with heating, ventilation, and air-conditioning (HVAC) systems plague many commercial buildings, often as the result of degraded equipment, failed sensors, improper installation, poor maintenance, and improperly implemented controls. Most existing HVAC fault-diagnostic schemes are based on analytical models and knowledge bases. These schemes are adequate for generic systems. However, real-world systems differ significantly from generic ones and necessitate modifications of the models and/or customization of the standard knowledge bases, which can be labor intensive. Data-driven techniques for fault detection and isolation (FDI) have a close relationship with pattern recognition, wherein one seeks to categorize the input-output data into normal or faulty classes. Owing to its simplicity and adaptability, customization of a data-driven FDI approach does not require in-depth knowledge of the HVAC system. It enables building system operators to improve energy efficiency and maintain the desired comfort level at a reduced cost. In this article, we consider a data-driven approach for FDI of chillers in HVAC systems. To diagnose the faults of interest in the chiller, we employ multiway dynamic principal component analysis (MPCA), multiway partial least squares (MPLS), and support vector machines (SVMs). The simulation of a chiller under various fault conditions is conducted using a standard chiller simulator from the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). We validated our FDI scheme using experimental data obtained from different types of chiller faults.

  6. Three-dimensional Geology of the Hayward Fault and its Correlation with Fault Behavior, Northern California

    NASA Astrophysics Data System (ADS)

    Ponce, D. A.; Graymer, R. C.; Jachens, R. C.; Simpson, R. W.; Phelps, G. A.; Wentworth, C. M.

    2004-12-01

    Relationships between fault behavior and geology along the Hayward Fault were investigated using a three-dimensional geologic model of the Hayward Fault and vicinity. The three-dimensional model, derived from geologic, geophysical, and seismicity data, allowed the construction of a 'geologic map' of the east- and west-side surfaces, maps that show the distribution of geologic units on either side of the fault that truncate against the fault surface. These two resulting geologic maps were compared with seismicity and creep along the Hayward Fault using three-dimensional visualization software. The seismic behavior of the Hayward Fault correlates with rock unit contacts along the fault, rather than with rock types across the fault. This suggests that fault activity is, in part, controlled by the physical properties of the rocks that abut the fault and not by properties of the fault zone itself. For example, far fewer earthquakes occur along the northern part of the fault, where an intensely sheared Franciscan mélange on the west side abuts the fault face, than along the region to the south where more coherent rocks of other Franciscan terranes or the Coast Range Ophiolite are present. More locally, clusters of earthquakes correlate spatially with some of the contacts between Franciscan terranes as well as mafic rocks of the Coast Range Ophiolite. Steady creep rates along the fault correlate with the lateral extent of the San Leandro gabbro, and changes in creep rate correlate with changes in geology. Although preliminary, the results of comparing fault behavior with the inferred three-dimensional geology adjacent to the Hayward Fault suggest that any attempt to understand the detailed distribution of earthquakes or creep along the fault should include consideration of the rock types that abut the fault surface. Such consideration would benefit greatly from incorporating the physical properties of the rock types along the fault into the three-dimensional geologic model.

  7. Predeployment validation of fault-tolerant systems through software-implemented fault insertion

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1989-01-01

    A fault-injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey of validation methodologies is presented. The need for fault insertion based on validation methodologies is demonstrated. The origins and models of faults, and the motivation for the FIAT concept, are reviewed. FIAT employs a validation methodology which builds confidence in the system by first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults, or the manifestations of faults, to be inserted by either seeding faults into memory or triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving the insertion of faults. A common system interface allows ease of use and decreases experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are demonstrated by two example experiments, each using a different fault-tolerance strategy.

  8. Recurrent late Quaternary surface faulting along the southern Mohawk Valley fault zone, NE California

    SciTech Connect

    Sawyer, T.L.; Hemphill-Haley, M.A. ); Page, W.D. )

    1993-04-01

    The Mohawk Valley fault zone comprises NW- to NNW-striking normal and strike-slip(?) faults that form the western edge of the Plumas province, a diffuse transitional zone between the Basin and Range and the northern Sierra Nevada. The authors' detailed evaluation of the southern part of the fault zone reveals evidence for recurrent late Pleistocene to possibly Holocene, moderate to large surface-faulting events. The southern Mohawk fault zone is a complex, 6-km-wide zone of faults and related features that extends from near the crest of the Sierra Nevada to the middle of southern Sierra Valley. The fault zone has two distinct and generally parallel subzones, 3 km apart, that are delineated by markedly different geomorphic characteristics and apparently different styles of faulting. Paleoseismic activity of the western subzone was evaluated in two trenches: one across a fault antithetic to the main range-bounding fault, and the other across a splay fault delineated by a 3.7-m-high scarp in alluvium. Stratigraphic relations, soil development, and radiocarbon dates indicate that at least four mid- to late-Pleistocene surface-faulting events, with single-event displacements in excess of 1.6 to 2.6 m, occurred along the splay fault prior to 12 ka. The antithetic fault shows evidence of three late Pleistocene events that may correspond to events documented on the splay fault, and a Holocene event that is inferred from youthful scarplets and small closed depressions.

  9. Novel neural networks-based fault tolerant control scheme with fault alarm.

    PubMed

    Shen, Qikun; Jiang, Bin; Shi, Peng; Lim, Cheng-Chew

    2014-11-01

    In this paper, the problem of adaptive active fault-tolerant control for a class of nonlinear systems with unknown actuator faults is investigated. The actuator fault is assumed to have no traditional affine appearance of the system state variables and control input. The useful property of the basis function of the radial basis function neural network (NN), which will be used in the design of the fault-tolerant controller, is explored. Based on the analysis of the design of normal and passive fault-tolerant controllers, and by using the implicit function theorem, a novel NN-based active fault-tolerant control scheme with fault alarm is proposed. Compared with results in the literature, the fault-tolerant control scheme can minimize the time delay between fault occurrence and accommodation, called the time delay due to fault diagnosis, and can reduce the adverse effect on system performance. In addition, the FTC scheme has the advantages of a passive fault-tolerant control scheme as well as the properties of the traditional active fault-tolerant control scheme. Furthermore, the fault-tolerant control scheme requires no additional fault detection and isolation model, which is necessary in the traditional active fault-tolerant control scheme. Finally, simulation results are presented to demonstrate the efficiency of the developed techniques. PMID:25014982

  10. Multiple Fault Isolation in Redundant Systems

    NASA Technical Reports Server (NTRS)

    Pattipati, Krishna R.; Patterson-Hine, Ann; Iverson, David

    1997-01-01

    Fault diagnosis in large-scale systems that are products of modern technology presents formidable challenges to manufacturers and users. This is due to the large number of failure sources in such systems and the need to quickly isolate and rectify failures with minimal downtime. In addition, for fault-tolerant systems and systems with infrequent opportunity for maintenance (e.g., the Hubble telescope, the space station), the assumption of at most a single fault in the system is unrealistic. In this project, we have developed novel block and sequential diagnostic strategies to isolate multiple faults in the shortest possible time without making the unrealistic single-fault assumption.

  11. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1986-01-01

    Multiversion or N-version programming was proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. Specific topics addressed are: failure probabilities in N-version systems, consistent comparison in N-version systems, descriptions of the faults found in the Knight and Leveson experiment, analytic models of comparison testing, characteristics of the input regions that trigger faults, fault tolerance through data diversity, and the relationship between failures caused by automatically seeded faults.

  12. Multiple Fault Isolation in Redundant Systems

    NASA Technical Reports Server (NTRS)

    Pattipati, Krishna R.

    1997-01-01

    Fault diagnosis in large-scale systems that are products of modern technology presents formidable challenges to manufacturers and users. This is due to the large number of failure sources in such systems and the need to quickly isolate and rectify failures with minimal downtime. In addition, for fault-tolerant systems and systems with infrequent opportunity for maintenance (e.g., the Hubble telescope, the space station), the assumption of at most a single fault in the system is unrealistic. In this project, we have developed novel block and sequential diagnostic strategies to isolate multiple faults in the shortest possible time without making the unrealistic single-fault assumption.

  13. Managing Space System Faults: Coalescing NASA's Views

    NASA Technical Reports Server (NTRS)

    Muirhead, Brian; Fesq, Lorraine

    2012-01-01

    Managing faults and their resultant failures is a fundamental and critical part of developing and operating aerospace systems. Yet recent studies have shown that the engineering "discipline" required to manage faults is not widely recognized nor evenly practiced within the NASA community. Attempts simply to name this discipline in recent years have been fraught with controversy among members of the Integrated Systems Health Management (ISHM), Fault Management (FM), Fault Protection (FP), Hazard Analysis (HA), and Aborts communities. Approaches to managing space system faults typically are unique to each organization, with little commonality in the architectures, processes, and practices across the industry.

  14. A Quaternary fault database for central Asia

    NASA Astrophysics Data System (ADS)

    Mohadjer, Solmaz; Ehlers, Todd Alan; Bendick, Rebecca; Stübner, Konstanze; Strube, Timo

    2016-02-01

    Earthquakes represent the highest risk in terms of potential loss of lives and economic damage for central Asian countries. Knowledge of fault location and behavior is essential in calculating and mapping seismic hazard. Previous efforts in compiling fault information for central Asia have generated a large amount of data that are published in limited-access journals with no digital maps publicly available, or are limited in their description of important fault parameters such as slip rates. This study builds on previous work by improving access to fault information through a web-based interactive map and an online database with search capabilities that allow users to organize data by different fields. The data presented in this compilation include fault location, its geographic, seismic, and structural characteristics, short descriptions, narrative comments, and references to peer-reviewed publications. The interactive map displays 1196 fault traces and 34 000 earthquake locations on a shaded-relief map. The online database contains attributes for 123 faults mentioned in the literature, with Quaternary and geodetic slip rates reported for 38 and 26 faults respectively, and earthquake history reported for 39 faults. All data are accessible for viewing and download via http://www.geo.uni-tuebingen.de/faults/. This work has implications for seismic hazard studies in central Asia as it summarizes important fault parameters, and can reduce earthquake risk by enhancing public access to information. It also allows scientists and hazard assessment teams to identify structures and regions where data gaps exist and future investigations are needed.

  15. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Vouk, Mladen A.

    1989-01-01

    Twenty functionally equivalent programs were built and tested in a multiversion software experiment. Following unit testing, all programs were subjected to an extensive system test. In the process, sixty-one distinct faults were identified among the versions. Less than 12 percent of the faults exhibited varying degrees of positive correlation. The common-cause (or similar) faults spanned as many as 14 components. However, a majority of these faults were trivial, and easily detected by proper unit and/or system testing. Only two of the seven similar faults were difficult faults, and both were caused by specification ambiguities. One of these faults exhibited a variable identical-and-wrong response span, i.e. a response span which varied with the testing conditions and input data. Techniques that could have been used to avoid the faults are discussed. For example, it was determined that back-to-back testing of 2-tuples could have been used to eliminate about 90 percent of the faults. In addition, four of the seven similar faults could have been detected by using back-to-back testing of 5-tuples. It is believed that most, if not all, similar faults could have been avoided had the specifications been written using more formal notation, had the unit testing phase been subject to more stringent standards and controls, and had better tools for measuring the quality and adequacy of the test data (e.g. coverage) been used.
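
    A minimal sketch of back-to-back testing of 2-tuples as mentioned above: run every pair of versions on the same inputs and treat any disagreement as evidence of a fault in at least one member of the pair (the toy versions below are placeholders, not the experiment's programs):

```python
from itertools import combinations

def back_to_back_test(versions, test_inputs):
    """Run every 2-tuple of versions on the same inputs and report disagreements.

    Any input on which two functionally equivalent versions disagree
    indicates a fault in at least one version of the pair.
    """
    disagreements = []
    for (name_a, fa), (name_b, fb) in combinations(versions.items(), 2):
        for x in test_inputs:
            if fa(x) != fb(x):
                disagreements.append((name_a, name_b, x))
    return disagreements

# Placeholder versions of "sum of 1..x": v2 contains a seeded off-by-one fault.
versions = {"v1": lambda x: x * (x + 1) // 2,
            "v2": lambda x: sum(range(x)),          # faulty: omits x itself
            "v3": lambda x: sum(range(x + 1))}
print(back_to_back_test(versions, test_inputs=range(5)))
```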

  16. Model-Based Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Kumar, Aditya; Viassolo, Daniel

    2008-01-01

    The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in their presence. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.

  17. Tool for Viewing Faults Under Terrain

    NASA Technical Reports Server (NTRS)

    Siegel, Herbert L.; Li, P. Peggy

    2005-01-01

    Multi Surface Light Table (MSLT) is an interactive software tool that was developed in support of the QuakeSim project, which has created an earthquake-fault database and a set of earthquake-simulation software tools. MSLT visualizes the three-dimensional geometries of faults embedded below the terrain and animates time-varying simulations of stress and slip. The fault segments, represented as rectangular surfaces at dip angles, are organized into collections, that is, faults. An interface built into MSLT queries and retrieves fault definitions from the QuakeSim fault database. MSLT also reads time-varying output from one of the QuakeSim simulation tools, called "Virtual California." Stress intensity is represented by variations in color. Slips are represented by directional indicators on the fault segments. The magnitudes of the slips are represented by the duration of the directional indicators in time. The interactive controls in MSLT provide a virtual track-ball, pan and zoom, translucency adjustment, simulation playback, and simulation movie capture. In addition, geographical information on the fault segments and faults is displayed in text windows. Because of the extensive viewing controls, faults can be seen in relation to one another, and to the terrain. These relations can be realized in simulations. Correlated slips in parallel faults are visible in the playback of Virtual California simulations.

  18. Parallel fault-tolerant robot control

    NASA Technical Reports Server (NTRS)

    Hamilton, D. L.; Bennett, J. K.; Walker, I. D.

    1992-01-01

    A shared memory multiprocessor architecture is used to develop a parallel fault-tolerant robot controller. Several versions of the robot controller are developed and compared. A robot simulation is also developed for control observation. Comparison of a serial version of the controller and a parallel version without fault tolerance showed the speedup possible with the coarse-grained parallelism currently employed. The performance degradation due to the addition of processor fault tolerance was demonstrated by comparison of these controllers with their fault-tolerant versions. Comparison of the more fault-tolerant controller with the lower-level fault-tolerant controller showed how varying the amount of redundant data affects performance. The results demonstrate the trade-off between speed performance and processor fault tolerance.

  19. Alp Transit: Crossing Faults 44 and 49

    NASA Astrophysics Data System (ADS)

    El Tani, M.; Bremen, R.

    2014-05-01

    This paper describes the crossing of faults 44 and 49 during construction of the 57 km Gotthard base tunnel of the Alp Transit project. Fault 44 is a permeable fault that triggered significant surface deformations 1,400 m above the tunnel when it was reached by the advancing excavation. The fault runs parallel to the downstream face of the Nalps arch dam, and significant deformations were measured at the dam crown. Fault 49 is sub-vertical and permeable, and runs parallel to the upstream face of the dam. It was necessary to assess the risk of crossing fault 49, as a limit had been put on the acceptable dam deformation for structural safety. The simulation model, the forecasts, and the actions decided upon when crossing the faults are presented, together with a brief description of the tunnel, the dam, and the monitoring system.

  20. Arc burst pattern analysis fault detection system

    NASA Technical Reports Server (NTRS)

    Russell, B. Don (Inventor); Aucoin, B. Michael (Inventor); Benner, Carl L. (Inventor)

    1997-01-01

    A method and apparatus are provided for detecting an arcing fault on a power line carrying a load current. Parameters indicative of power flow and possible fault events on the line, such as voltage and load current, are monitored and analyzed for the arc burst pattern exhibited by arcing faults in a power system. These arcing faults are detected by identifying bursts in each half-cycle of the fundamental current. Bursts occurring at or near a voltage peak indicate arcing on that phase. Once a faulted phase line is identified, a comparison of the current and voltage reveals whether the fault is located in the downstream direction of power flow, toward customers, or upstream, toward a generation station. If the fault is located downstream, the line is de-energized; if located upstream, the line may remain energized to prevent unnecessary power outages.
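
    A hedged sketch of the burst-pattern idea described in the abstract: split the current into half-cycles of the fundamental, estimate high-frequency burst energy in each, and weight energy occurring near the voltage peak (the sampling rate, crude high-pass filter, and windowing are assumptions, not the patented detector):

```python
import numpy as np

def arc_burst_score(voltage, current, fs=15_360.0, f0=60.0):
    """Score each half-cycle for arcing: high-frequency burst energy in the
    current, restricted to samples near the voltage peak."""
    half = int(fs / (2 * f0))                       # samples per half-cycle
    # Crude high-pass: subtract a short moving average to keep burst content.
    hf = current - np.convolve(current, np.ones(9) / 9, mode="same")
    scores = []
    for start in range(0, len(current) - half + 1, half):
        v_seg = voltage[start:start + half]
        hf_seg = hf[start:start + half]
        peak = int(np.argmax(np.abs(v_seg)))
        near_peak = np.zeros(half, dtype=bool)
        near_peak[max(0, peak - half // 8): peak + half // 8] = True
        scores.append(float(np.sum(hf_seg[near_peak] ** 2)))
    return scores   # persistently large scores suggest arcing on this phase
```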

  1. Multiple Fault Isolation in Redundant Systems

    NASA Technical Reports Server (NTRS)

    Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Iverson, David L.

    1997-01-01

    We consider the problem of sequencing tests to isolate multiple faults in redundant (fault-tolerant) systems with minimum expected testing cost (time). It can be shown that single faults and minimal faults, i.e., fault sets with the minimum number of failures whose failure signature differs from the union of the failure signatures of the individual failures, together with their failure signatures, constitute the necessary information for fault diagnosis in redundant systems. In this paper, we develop an algorithm to find all the minimal faults and their failure signatures. Then, we extend the Sure diagnostic strategies [1] of our previous work to diagnose multiple faults in redundant systems. The proposed algorithms and strategies are illustrated using several examples.
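
    A hedged sketch of the minimal-fault enumeration described above: given a way to compute the failure signature of any fault set (assumed here to come from a model or simulation of the redundant system), enumerate small fault sets whose signature differs from the union of their members' single-fault signatures, keeping only the minimal ones. The toy system model below is an assumption for illustration.

```python
from itertools import combinations

def find_minimal_faults(components, signature_of, max_size=3):
    """Enumerate 'minimal faults': smallest fault sets whose failure signature
    differs from the union of their members' single-fault signatures."""
    single = {c: signature_of(frozenset([c])) for c in components}
    minimal = {}
    for size in range(2, max_size + 1):
        for combo in combinations(components, size):
            fault_set = frozenset(combo)
            if any(m <= fault_set for m in minimal):   # contains a smaller minimal fault
                continue
            union_sig = frozenset().union(*(single[c] for c in combo))
            sig = signature_of(fault_set)
            if sig != union_sig:
                minimal[fault_set] = sig
    return minimal

# Toy system model: failing A and B together also trips test t3.
toy = {frozenset("A"): {"t1"}, frozenset("B"): {"t2"}, frozenset("C"): {"t3"},
       frozenset("AB"): {"t1", "t2", "t3"}}

def signature_of(fault_set):
    if fault_set in toy:
        return frozenset(toy[fault_set])
    return frozenset().union(*(toy[frozenset(c)] for c in fault_set))

print(find_minimal_faults("ABC", signature_of))   # {frozenset({'A','B'}): {'t1','t2','t3'}}
```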

  2. Rule-based fault diagnosis of hall sensors and fault-tolerant control of PMSM

    NASA Astrophysics Data System (ADS)

    Song, Ziyou; Li, Jianqiu; Ouyang, Minggao; Gu, Jing; Feng, Xuning; Lu, Dongbin

    2013-07-01

    Hall sensors are widely used for estimating the rotor phase of permanent magnet synchronous motors (PMSM). Because rotor position is an essential parameter of the PMSM control algorithm, Hall sensor faults are very dangerous, yet there is scarcely any research focusing on fault diagnosis and fault-tolerant control of Hall sensors used in PMSMs. From this standpoint, the Hall sensor faults which may occur during PMSM operation are theoretically analyzed. According to the analysis results, a fault diagnosis algorithm for the Hall sensors, based on three rules, is proposed to classify the fault phenomena accurately. Rotor phase estimation algorithms based on one or two Hall sensor(s) are then used to build the fault-tolerant control algorithm. The fault diagnosis algorithm can detect 60 Hall fault phenomena in total, and all detections can be completed within 1/138 of a rotor rotation period. The fault-tolerant control algorithm achieves smooth torque production, i.e., the same control effect as the normal control mode (with three Hall sensors). Finally, a PMSM bench test verifies the accuracy and rapidity of the fault diagnosis and fault-tolerant control strategies. The fault diagnosis algorithm can detect all Hall sensor faults promptly, and the fault-tolerant control algorithm allows the PMSM to operate under failure of one or two Hall sensor(s). In addition, the transitions between healthy control and fault-tolerant control are smooth, without any additional noise and harshness. The proposed algorithms can deal with Hall sensor faults of PMSMs in real applications, and can be used to realize fault diagnosis and fault-tolerant control of PMSMs.
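
    The three diagnostic rules themselves are not reproduced in the abstract; as an illustration of the kind of check involved, the sketch below validates three-sensor Hall codes (000 and 111 are always invalid, and a jump that skips a step in the expected commutation sequence indicates another fault class). The sequence convention and rules here are assumptions, not the paper's algorithm.

```python
# Valid three-sensor Hall codes, in one common electrical-rotation order.
VALID_SEQUENCE = [0b001, 0b011, 0b010, 0b110, 0b100, 0b101]

def check_hall_state(prev_code, code):
    """Classify a new Hall-sensor reading against simple validity rules.

    Returns 'ok', 'invalid_code' (000/111: a sensor stuck or disconnected),
    or 'illegal_transition' (a skipped step in the expected sequence).
    """
    if code not in VALID_SEQUENCE:
        return "invalid_code"
    if prev_code in VALID_SEQUENCE:
        i = VALID_SEQUENCE.index(prev_code)
        j = VALID_SEQUENCE.index(code)
        if j not in (i, (i + 1) % 6, (i - 1) % 6):   # standstill or one step either way
            return "illegal_transition"
    return "ok"

print(check_hall_state(0b001, 0b011))  # ok
print(check_hall_state(0b001, 0b111))  # invalid_code
print(check_hall_state(0b001, 0b110))  # illegal_transition
```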

  3. A “mesh” of crossing faults: Fault networks of southern California

    NASA Astrophysics Data System (ADS)

    Janecke, S. U.

    2009-12-01

    Detailed geologic mapping of active fault systems in the western Salton Trough and northern Peninsular Ranges of southern California makes it possible to expand the inventory of mapped and known faults by compiling and updating existing geologic maps, and analyzing high resolution imagery, LIDAR, InSAR, relocated hypocenters and other geophysical datasets. A fault map is being compiled on Google Earth and will ultimately discriminate between a range of different fault expressions: from well-mapped faults to subtle lineaments and geomorphic anomalies. The fault map shows deformation patterns in both crystalline and basinal deposits and reveals a complex fault mesh with many curious and unexpected relationships. Key findings are: 1) Many fault systems have mutually interpenetrating geometries, are grossly coeval, and allow faults to cross one another. A typical relationship reveals a dextral fault zone that appears to be continuous at the regional scale. In detail, however, there are no continuous NW-striking dextral fault traces and instead the master dextral fault is offset in a left-lateral sense by numerous crossing faults. Left-lateral faults also show small offsets where they interact with right-lateral faults. Both fault sets show evidence of Quaternary activity. Examples occur along the Clark, Coyote Creek, Earthquake Valley and Torres Martinez fault zones. 2) Fault zones cross in other ways. There are locations where active faults continue across or beneath significant structural barriers. Major fault zones like the Clark fault of the San Jacinto fault system appear to end at NE-striking sinistral fault zones (like the Extra and Pumpkin faults) that clearly cross from the SW to the NE side of the projection of the dextral traces. Despite these blocking structures, there is good evidence for continuation of the dextral faults on the opposite sides of the crossing fault array. In some instances there is clear evidence (in deep microseismic alignments of

  4. Tracing the Geomorphic Signature of Lateral Faulting

    NASA Astrophysics Data System (ADS)

    Duvall, A. R.; Tucker, G. E.

    2012-12-01

    Active strike-slip faults are among the most dangerous geologic features on Earth. Unfortunately, it is challenging to estimate their slip rates, seismic hazard, and evolution over a range of timescales. An under-exploited tool in strike-slip fault characterization is quantitative analysis of the geomorphic response to lateral fault motion to extract tectonic information directly from the landscape. Past geomorphic work of this kind has focused almost exclusively on vertical motion, despite the ubiquity of horizontal motion in crustal deformation and mountain building. We seek to address this problem by investigating the landscape response to strike-slip faulting in two ways: 1) examining the geomorphology of the Marlborough Fault System (MFS), a suite of parallel strike-slip faults within the actively deforming South Island of New Zealand, and 2) conducting controlled experiments in strike-slip landscape evolution using the CHILD landscape evolution model. The MFS offers an excellent natural experiment site because fault initiation ages and cumulative displacements decrease from north to south, whereas slip rates increase more than fourfold across a region underlain by a single bedrock unit (Torlesse Greywacke). Comparison of planform and longitudinal profiles of rivers draining the MFS reveals strong disequilibrium within tributaries that drain to active fault strands, and suggests that river capture related to fault activity may be a regular process in strike-slip fault zones. Simple model experiments support this view. Model calculations that include horizontal motion as well as vertical uplift demonstrate river lengthening and shortening due to stream capture in response to shutter ridges sliding in front of stream outlets. These results suggest that systematic variability in fluvial knickpoint location, drainage area, and incision rates along different faults or fault segments may be expected in catchments upstream of strike-slip faults and could act as useful

  5. Has the San Gabriel fault been offset

    SciTech Connect

    Sheehan, J.R.

    1988-03-01

    The San Gabriel fault (SGF) in southern California is a right-lateral, strike-slip fault extending for 85 mi in an arcuate, southwestward-bowing curve from near the San Andreas fault at Frazier Mountain to its intersection with the left-lateral San Antonio Canyon fault (SACF) in the eastern San Gabriel Mountains. Termination of the SGF at the presently active SACF is abrupt and prompts the question: has the San Gabriel fault been offset? Tectonic and geometric relationships in the area suggest that the SGF has been offset approximately 6 mi in a left-lateral sense and that the offset continuation of the SGF, across the SACF, is the right-lateral, strike-slip San Jacinto fault (SJF), which also terminates at the SACF. Reversing the left-lateral movement on the SACF to rejoin the offset ends of the SGF and SJF reveals a fault trace that is remarkably similar in geometry and movement (and perhaps in tectonic history) to the trace of the San Andreas fault through the southern part of the San Bernardino Mountains. The relationship of the Sierra Madre-Cucamonga fault system to the restored SGF-SJF fault is strikingly similar to the relationship of the Banning fault to the Mission Creek-Mill Creek portion of the San Andreas fault. Structural relations suggest that the San Gabriel-San Jacinto system predates the San Andreas fault in the eastern San Gabriel Mountains and that continuing movement on the SACF is currently affecting the trace of the San Andreas fault in the Cajon Pass area.

  6. Fault trees and imperfect coverage

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne B.

    1989-01-01

    A new algorithm is presented for solving fault trees. The algorithm includes the dynamic behavior of the fault/error handling model but obviates the need for a Markov chain solution. As the state space is expanded in a breadth-first search (as is done in the conversion to a Markov chain), each state's contribution to future states is calculated exactly. A dynamic state truncation technique is also presented; it produces bounds on the unreliability of the system by considering only part of the state space. Since the model is solved as the state space is generated, the process can be stopped as soon as the desired accuracy is reached.
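
    The algorithm above is not reproduced here; as a much simpler point of reference, the sketch below evaluates the top-event probability of a small static fault tree with independent basic events, which is the baseline quantity that the imperfect-coverage, dynamic fault/error-handling treatment generalizes. The gate structure and probabilities are hypothetical.

```python
# Basic-event failure probabilities (hypothetical).
p = {"pump_A": 0.01, "pump_B": 0.01, "valve": 0.005}

def gate_and(*probs):
    """Probability that all independent inputs fail."""
    out = 1.0
    for q in probs:
        out *= q
    return out

def gate_or(*probs):
    """Probability that at least one independent input fails."""
    out = 1.0
    for q in probs:
        out *= (1.0 - q)
    return 1.0 - out

# Top event: both redundant pumps fail, or the common valve fails.
top = gate_or(gate_and(p["pump_A"], p["pump_B"]), p["valve"])
print(f"top-event unreliability: {top:.6f}")   # ~0.0051
```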

  7. Perspective View, San Andreas Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The prominent linear feature straight down the center of this perspective view is California's famous San Andreas Fault. The image, created with data from NASA's Shuttle Radar Topography Mission (SRTM), will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This segment of the fault lies west of the city of Palmdale, Calif., about 100 kilometers (about 60 miles) northwest of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. Two large mountain ranges are visible, the San Gabriel Mountains on the left and the Tehachapi Mountains in the upper right. Another fault, the Garlock Fault, lies at the base of the Tehachapis; the San Andreas and the Garlock Faults meet in the center distance near the town of Gorman. In the distance, over the Tehachapi Mountains, is California's Central Valley. Along the foothills in the right-hand part of the image is the Antelope Valley, including the Antelope Valley California Poppy Reserve. The data used to create this image were acquired by SRTM aboard the Space Shuttle Endeavour, launched on February 11, 2000.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    SRTM uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space

  8. Heat flow, strong near-fault seismic waves, and near-fault tectonics on the central San Andreas Fault

    NASA Astrophysics Data System (ADS)

    Sleep, Norman H.

    2016-05-01

    The main San Andreas Fault strikes subparallel to compressional folds and thrust faults. Its fault-normal traction is on average a factor of γ = 1 + 2μthr(√(1 + μthr²) + μthr), where μthr is the coefficient of friction for thrust faults, times the effective lithostatic pressure. A useful upper limit for μthr of 0.6 (where γ is 3.12) is obtained from the lack of heat flow anomalies by considering off-fault convergence at a rate of 1 mm/yr for 10 km across strike. If the fault-normal traction is in fact this high, the well-known heat flow constraint of average stresses of 10-20 MPa during strike slip on the main fault becomes more severe. Only a few percent of the total slip during earthquakes can occur at the peak stress before dynamic mechanisms weaken the fault. The spatial dimension of the high-stress rupture-tip zone is ~10 m for γ = 3.12 and, for comparison, ~100 m for γ = 1. High dynamic stresses during shaking occur within these distances of the fault plane. In terms of scalars, fine-scale tectonic stresses cannot exceed the difference between failure stress and dynamic stress. Plate-scale slip causes stresses to build up near geometrical irregularities of the fault plane. Strong dynamic stresses near the rupture tip facilitate anelastic deformation with the net effects of relaxing the local deviatoric tectonic stress and accommodating deformation around the irregularities. There also is a mild tendency for near-fault material to extrude upward. Slip on minor thrust faults causes the normal traction on the main fault to be spatially variable.
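
    A quick numerical check of the stress factor quoted above, using the reconstructed expression γ = 1 + 2μthr(√(1 + μthr²) + μthr); the second value simply confirms the frictionless limit.

```python
from math import sqrt

def gamma(mu_thr):
    """Fault-normal traction factor for a given thrust-fault friction coefficient."""
    return 1.0 + 2.0 * mu_thr * (sqrt(1.0 + mu_thr ** 2) + mu_thr)

print(round(gamma(0.6), 2))   # 3.12, matching the value quoted in the abstract
print(round(gamma(0.0), 2))   # 1.0, the frictionless limit
```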

  9. Fault growth and interactions in a multiphase rift fault network: Horda Platform, Norwegian North Sea

    NASA Astrophysics Data System (ADS)

    Duffy, Oliver B.; Bell, Rebecca E.; Jackson, Christopher A.-L.; Gawthorpe, Rob L.; Whipp, Paul S.

    2015-11-01

    Physical models predict that multiphase rifts that experience a change in extension direction between stretching phases will typically develop non-colinear normal fault sets. Furthermore, multiphase rifts will display a greater frequency and range of styles of fault interactions than single-phase rifts. Although these physical models have yielded useful information on the evolution of fault networks in map view, the true 3D geometry of the faults and associated interactions are poorly understood. Here, we use an integrated 3D seismic reflection and borehole dataset to examine a range of fault interactions that occur in a natural multiphase fault network in the northern Horda Platform, northern North Sea. In particular we aim to: i) determine the range of styles of fault interaction that occur between non-colinear faults; ii) examine the typical geometries and throw patterns associated with each of these different styles; and iii) highlight the differences between single-phase and multiphase rift fault networks. Our study focuses on a ca. 350 km2 region around the >60 km long, N-S-striking Tusse Fault, a normal fault system that was active in the Permian-Triassic and again in the Late Jurassic-to-Early Cretaceous. The Tusse Fault is one of a series of large (>1500 m throw) N-S-striking faults forming part of the northern Horda Platform fault network, which includes numerous smaller (2-10 km long), lower throw (<100 m), predominantly NW-SE-striking faults that were only active during the Late Jurassic to Early Cretaceous. We examine how the 2nd-stage NW-SE-striking faults grew, interacted and linked with the N-S-striking Tusse Fault, documenting a range of interaction styles including mechanical and kinematic isolation, abutment, retardation and reactivated relays. Our results demonstrate that: i) isolated, and abutting interactions are the most common fault interaction styles in the northern Horda Platform; ii) pre-existing faults can act as sites of nucleation for

  10. New insights on Southern Coyote Creek Fault and Superstition Hills Fault

    NASA Astrophysics Data System (ADS)

    van Zandt, A. J.; Mellors, R. J.; Rockwell, T. K.; Burgess, M. K.; O'Hare, M.

    2007-12-01

    Recent field work has confirmed an extension of the southern Coyote Creek (CCF) branch of the San Jacinto fault in the western Salton trough. The fault marks the western edge of an area of subsidence caused by groundwater extraction, and field measurements suggest that recent strike-slip motion has occurred on this fault as well. We attempt to determine whether this fault connects at depth with the Superstition Hills fault (SHF) to the southeast by modeling observed surface deformation between the two faults measured by InSAR. Stacked ERS (descending) InSAR data from 1992 to 2000 is initially modeled using a finite fault in an elastic half-space. Observed deformation along the SHF and Elmore Ranch fault is modeled assuming shallow (< 5 km) creep. We test various models to explain surface deformation between the two faults.

  11. Inverter Ground Fault Overvoltage Testing

    SciTech Connect

    Hoke, Andy; Nelson, Austin; Chakraborty, Sudipta; Chebahtah, Justin; Wang, Trudie; McCarty, Michael

    2015-08-12

    This report describes testing conducted at NREL to determine the duration and magnitude of transient overvoltages created by several commercial PV inverters during ground fault conditions. For this work, a test plan developed by the Forum on Inverter Grid Integration Issues (FIGII) has been implemented in a custom test setup at NREL. Load rejection overvoltage test results were reported previously in a separate technical report.

  12. Fault detection using genetic programming

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; B. Jack, Lindsay; Nandi, Asoke K.

    2005-03-01

    Genetic programming (GP) is a stochastic process for automatically generating computer programs. GP has been applied to a wide variety of problems, too many to enumerate here. As far as the authors are aware, it has rarely been used in condition monitoring (CM). In this paper, GP is used to detect faults in rotating machinery. Feature sets from two different machines are used to examine the performance of two-class normal/fault recognition. The results are compared with a few other methods for fault detection: artificial neural networks (ANNs) have been used in this field for many years, while support vector machines (SVMs) also offer successful solutions. For ANNs and SVMs, genetic algorithms have been used to do feature selection, which is an inherent function of GP. In all cases, GP demonstrates performance equal to or better than that of the previous best-performing approaches on these data sets. The training times are also found to be considerably shorter than those of the other approaches, whilst the generated classification rules are easy to understand and independently validate.
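
    The GP system used in the paper is not described in enough detail to reproduce; the sketch below is a toy, mutation-only genetic programming loop that evolves arithmetic expression trees for two-class normal/fault recognition. The feature data, tree depth, population size, and selection scheme are all assumptions for illustration and are unrelated to the paper's machinery or datasets.

```python
import operator
import random

# An individual is an expression tree over the feature vector; a positive
# output is read as "fault", a non-positive output as "normal".
OPS = [operator.add, operator.sub, operator.mul]

def random_tree(n_feat, depth):
    """Grow a random expression tree of bounded depth."""
    if depth <= 0 or random.random() < 0.3:
        if random.random() < 0.7:
            return ("feat", random.randrange(n_feat))
        return ("const", random.uniform(-1.0, 1.0))
    op = random.choice(OPS)
    return ("op", op, random_tree(n_feat, depth - 1), random_tree(n_feat, depth - 1))

def evaluate(tree, x):
    kind = tree[0]
    if kind == "feat":
        return x[tree[1]]
    if kind == "const":
        return tree[1]
    _, op, left, right = tree
    return op(evaluate(left, x), evaluate(right, x))

def mutate(tree, n_feat, depth=3):
    """Replace a randomly chosen subtree with a freshly grown one."""
    if tree[0] != "op" or random.random() < 0.3:
        return random_tree(n_feat, depth)
    _, op, left, right = tree
    if random.random() < 0.5:
        return ("op", op, mutate(left, n_feat, depth - 1), right)
    return ("op", op, left, mutate(right, n_feat, depth - 1))

def accuracy(tree, X, y):
    preds = [1 if evaluate(tree, xi) > 0 else 0 for xi in X]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

def evolve(X, y, pop_size=60, generations=40):
    n_feat = len(X[0])
    pop = [random_tree(n_feat, 3) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda t: accuracy(t, X, y), reverse=True)
        survivors = ranked[: pop_size // 4]            # truncation selection
        pop = survivors + [mutate(random.choice(survivors), n_feat)
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=lambda t: accuracy(t, X, y))

# Hypothetical toy data: the sign of feature 2 separates fault (1) from normal (0).
X = [[0.1, 0.2, -0.8], [0.0, 0.3, -0.5], [0.2, 0.1, 0.9], [0.1, 0.4, 0.7]]
y = [0, 0, 1, 1]
best = evolve(X, y)
print("training accuracy:", accuracy(best, X, y))
```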

  13. Watching Faults Grow in Sand

    NASA Astrophysics Data System (ADS)

    Cooke, M. L.

    2015-12-01

    Accretionary sandbox experiments provide a rich environment for investigating the processes of fault development. These experiments engage students because 1) they enable direct observation of fault growth, which is impossible in the crust (type 1 physical model), 2) they are not only representational but can also be manipulated (type 2 physical model), 3) they can be used to test hypotheses (type 3 physical model) and 4) they resemble experiments performed by structural geology researchers around the world. The structural geology courses at UMass Amherst utilize a series of accretionary sandbox experiments in which students first watch a video of an experiment and then perform a group experiment. The experiments motivate discussions of what conditions the students would change and what outcomes they would expect from these changes, that is, hypothesis development. These discussions inevitably lead to calculations of the scaling relationships between model and crustal fault growth and provide insight into the crustal processes represented within the dry sand. Sketching of the experiments has been shown to be a very effective assessment method, as the students reveal which features they are analyzing. Another approach used at UMass is to set up a forensic experiment. The experiment is set up with spatially varying basal friction before the meeting, and students must figure out what the basal conditions are through the experiment. This experiment leads to discussions of equilibrium and force balance within the accretionary wedge. Displacement fields can be captured throughout the experiment using inexpensive digital image correlation techniques to foster quantitative analysis of the experiments.
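
    The inexpensive digital image correlation mentioned above can be illustrated roughly as follows: a small template from the first photograph is located inside a search window of the second by maximizing normalized cross-correlation, giving the integer-pixel displacement of that patch. The synthetic texture, window sizes, and imposed shift are assumptions; real DIC adds subpixel interpolation and strain calculation.

```python
import numpy as np

rng = np.random.default_rng(0)
img0 = rng.random((200, 200))                        # synthetic sand texture, frame 0
img1 = np.roll(img0, shift=(3, 5), axis=(0, 1))      # frame 1: field shifted 3 px down, 5 px right

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def displacement(img0, img1, y, x, half=10, search=8):
    """Integer-pixel displacement of the patch centred at (y, x)."""
    template = img0[y - half:y + half, x - half:x + half]
    best_score, best_dy, best_dx = -2.0, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            candidate = img1[y + dy - half:y + dy + half, x + dx - half:x + dx + half]
            score = ncc(template, candidate)
            if score > best_score:
                best_score, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx

print(displacement(img0, img1, 100, 100))   # expected (3, 5)
```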

  14. CONTROL AND FAULT DETECTOR CIRCUIT

    DOEpatents

    Winningstad, C.N.

    1958-04-01

    A power control and fault detector circuit for a radio-frequency system is described. The circuit controls the power output of a radio-frequency power supply to automatically start the flow of energizing power to the radio-frequency power supply and to gradually increase the power to a predetermined level which is below the point where destruction would occur upon the happening of a fault. If the radio-frequency power supply output fails to increase during this period, the control does not further increase the power. On the other hand, if the output of the radio-frequency power supply properly increases, then the control continues to increase the power to a maximum value. After the maximum value of radio-frequency output has been achieved, the control is responsive to a "fault," such as a short circuit in the radio-frequency system being driven, so that the flow of power is interrupted for an interval before the cycle is repeated.

  15. From fissure to fault: A model of fault growth in the Krafla Fissure System, NE Iceland

    NASA Astrophysics Data System (ADS)

    Bramham, Emma; Paton, Douglas; Wright, Tim

    2015-04-01

    Current models of fault growth examine the relationship of fault length (L) to vertical displacement (D), where the faults exhibit the classic fault shape of gradually increasing vertical displacement from zero at the fault tips to a maximum displacement (Dmax) at the middle of the fault. These models cannot adequately explain displacement-length observations at the Krafla fissure swarm, in Iceland's northern volcanic zone, where we observe that many of the faults with significant vertical displacements still retain fissure-like features, with no vertical displacement, along portions of their lengths. We have created a high resolution digital elevation model (DEM) of the Krafla region using airborne LiDAR and measured the displacement/length profiles of 775 faults, with lengths ranging from 10s to 1000s of metres. We have categorised the faults based on the proportion of the profile that was still fissure-like. Fully-developed faults (no fissure-like regions) were further grouped into those with profiles that had a flat-top geometry (i.e. a significant proportion of fault length with constant throw), those with a bell-shaped throw profile and those that show regions of fault linkage. We suggest that a fault can most easily accommodate stress by displacing regions that are still fissure-like, and that a fault would be more likely to accommodate stress by linkage once it has reached the maximum displacement for its fault length. Our results demonstrate that there is a pattern of growth from fissure to fault in the Dmax/L ratio of the categorised faults, and we propose a model for this growth. These data not only better constrain our understanding of how fissures develop into faults but also provide insights into how D/L profiles can depart from the typical bell-shaped distribution.

  16. Influence of fault trend, fault bends, and fault convergence on shallow structure, geomorphology, and hazards, Hosgri strike-slip fault, offshore central California

    NASA Astrophysics Data System (ADS)

    Johnson, S. Y.; Watt, J. T.; Hartwell, S. R.

    2012-12-01

    We mapped a ~94-km-long portion of the right-lateral Hosgri Fault Zone from Point Sal to Piedras Blancas in offshore central California using high-resolution seismic reflection profiles, marine magnetic data, and multibeam bathymetry. The database includes 121 seismic profiles across the fault zone and is perhaps the most comprehensive reported survey of the shallow structure of an active strike-slip fault. These data document the location, length, and near-surface continuity of multiple fault strands, highlight fault-zone heterogeneity, and demonstrate the importance of fault trend, fault bends, and fault convergences in the development of shallow structure and tectonic geomorphology. The Hosgri Fault Zone is continuous through the study area passing through a broad arc in which fault trend changes from about 338° to 328° from south to north. The southern ~40 km of the fault zone in this area is more extensional, resulting in accommodation space that is filled by deltaic sediments of the Santa Maria River. The central ~24 km of the fault zone is characterized by oblique convergence of the Hosgri Fault Zone with the more northwest-trending Los Osos and Shoreline Faults. Convergence between these faults has resulted in the formation of local restraining and releasing fault bends, transpressive uplifts, and transtensional basins of varying size and morphology. We present a hypothesis that links development of a paired fault bend to indenting and bulging of the Hosgri Fault by a strong crustal block translated to the northwest along the Shoreline Fault. Two diverging Hosgri Fault strands bounding a central uplifted block characterize the northern ~30 km of the Hosgri Fault in this area. The eastern Hosgri strand passes through releasing and restraining bends; the releasing bend is the primary control on development of an elongate, asymmetric, "Lazy Z" sedimentary basin. The western strand of the Hosgri Fault Zone passes through a significant restraining bend and

  17. Building the GEM Faulted Earth database

    NASA Astrophysics Data System (ADS)

    Litchfield, N. J.; Berryman, K. R.; Christophersen, A.; Thomas, R. F.; Wyss, B.; Tarter, J.; Pagani, M.; Stein, R. S.; Costa, C. H.; Sieh, K. E.

    2011-12-01

    The GEM Faulted Earth project aims to build a global active fault and seismic source database with a common set of strategies, standards, and formats, to be placed in the public domain. Faulted Earth is one of five hazard global components of the Global Earthquake Model (GEM) project. A key early phase of the GEM Faulted Earth project is to build a database which is flexible enough to capture existing and variable (e.g., from slow intraplate faults to fast subduction interfaces) global data, yet is not too onerous for entering new data from areas where existing databases are not available. The purpose of this talk is to give an update on progress building the GEM Faulted Earth database. The database design conceptually has two layers, (1) active faults and folds, and (2) fault sources, and automated processes are being defined to generate fault sources. These include the calculation of moment magnitude using a user-selected magnitude-length or magnitude-area scaling relation, and the calculation of recurrence interval from displacement divided by slip rate, where displacement is calculated from the seismic moment derived from the moment magnitude. The fault-based earthquake sources defined by the Faulted Earth project will then be rationalised with those defined by the other GEM global components. A web-based tool is being developed for entering individual faults and folds, and fault sources, and includes capture of additional information collected at individual sites, as well as descriptions of the data sources. GIS shapefiles of individual faults and folds, and fault sources will also be able to be uploaded. A data dictionary explaining the database design rationale, definitions of the attributes and formats, and a tool user guide is also being developed. Existing national databases will be uploaded outside of the fault compilation tool, through a process of mapping common attributes between the databases. Regional workshops are planned for compilation in areas where existing
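
    The automated fault-source calculations described above can be sketched roughly as follows; the magnitude-length coefficients (a Wells-and-Coppersmith-style fit), the rigidity, and the example fault parameters are assumptions for illustration, not values prescribed by the GEM Faulted Earth database.

```python
import math

def moment_magnitude_from_length(length_km, a=5.08, b=1.16):
    """Magnitude-length scaling Mw = a + b*log10(L); coefficients are assumed."""
    return a + b * math.log10(length_km)

def recurrence_interval(length_km, width_km, slip_rate_mm_yr, rigidity_pa=3.0e10):
    """Recurrence interval = characteristic displacement / slip rate."""
    mw = moment_magnitude_from_length(length_km)
    m0 = 10 ** (1.5 * mw + 9.05)                     # seismic moment, N*m
    area_m2 = (length_km * 1e3) * (width_km * 1e3)
    displacement_m = m0 / (rigidity_pa * area_m2)    # D = M0 / (mu * A)
    return mw, displacement_m / (slip_rate_mm_yr * 1e-3)

mw, t_rec = recurrence_interval(length_km=60.0, width_km=15.0, slip_rate_mm_yr=2.0)
print(f"Mw ~ {mw:.1f}, recurrence ~ {t_rec:.0f} yr")
```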

  18. Fault tolerant operation of switched reluctance machine

    NASA Astrophysics Data System (ADS)

    Wang, Wei

    The energy crisis and environmental challenges have driven industry towards more energy efficient solutions. With nearly 60% of electricity consumed by various electric machines in the industry sector, advancement in the efficiency of the electric drive system is of vital importance. Adjustable speed drive systems (ASDS) provide excellent speed regulation and dynamic performance as well as dramatically improved system efficiency compared with conventional motors without electronic drives. Industry has witnessed tremendous growth in ASDS applications, not only as a driving force but also as an electric auxiliary system replacing bulky and low-efficiency auxiliary hydraulic and mechanical systems. With the vast penetration of ASDS, its fault tolerant operation capability is more widely recognized as an important feature of drive performance, especially for aerospace, automotive applications and other industrial drive applications demanding high reliability. The Switched Reluctance Machine (SRM), a low cost, highly reliable electric machine with fault tolerant operation capability, has drawn substantial attention in the past three decades. Nevertheless, SRM is not free of fault. Certain faults such as converter faults, sensor faults, winding shorts, eccentricity and position sensor faults are commonly shared among all ASDS. In this dissertation, a thorough understanding of various faults and their influence on transient and steady state performance of SRM is developed via simulation and experimental study, providing necessary knowledge for fault detection and post-fault management. Lumped parameter models are established for fast real time simulation and drive control. Based on the behavior of the faults, a fault detection scheme is developed for the purpose of fast and reliable fault diagnosis. In order to improve the SRM power and torque capacity under faults, the maximum torque per ampere excitation is conceptualized and validated through theoretical analysis and

  19. A Log-Scaling Fault Tolerant Agreement Algorithm for a Fault Tolerant MPI

    SciTech Connect

    Hursey, Joshua J; Naughton, III, Thomas J; Vallee, Geoffroy R; Graham, Richard L

    2011-01-01

    The lack of fault tolerance is becoming a limiting factor for application scalability in HPC systems. The MPI does not provide standardized fault tolerance interfaces and semantics. The MPI Forum's Fault Tolerance Working Group is proposing a collective fault tolerant agreement algorithm for the next MPI standard. Such algorithms play a central role in many fault tolerant applications. This paper combines a log-scaling two-phase commit agreement algorithm with a reduction operation to provide the necessary functionality for the new collective without any additional messages. Error handling mechanisms are described that preserve the fault tolerance properties while maintaining overall scalability.

  20. West Coast Tsunami: Cascadia's Fault?

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Bernard, E. N.; Titov, V.

    2013-12-01

    The tragedies of the 2004 Sumatra and 2011 Japan tsunamis exposed the limits of our knowledge in preparing for devastating tsunamis. The 1,100-km coastline of the Pacific coast of North America has tectonic and geological settings similar to Sumatra and Japan. The geological records unambiguously show that the Cascadia fault has caused devastating tsunamis in the past, and this geological process will cause tsunamis in the future. Hypotheses of the rupture process of the Cascadia fault include a long rupture (M9.1) along the entire fault line, short ruptures (M8.8-M9.1) confined to a segment of the coastline, or a series of lesser events of M8+. Recent studies also indicate an increasing probability of a smaller rupture occurring at the south end of the Cascadia fault. Some of these hypotheses were implemented in the development of tsunami evacuation maps in Washington and Oregon. However, the developed maps do not reflect the tsunami impact implied by the most recent updates regarding the Cascadia fault rupture process. The most recent study by Wang et al. (2013) suggests a rupture pattern of high-slip patches separated by low-slip areas constrained by estimates of coseismic subsidence based on microfossil analyses. Since this study infers that a Tohoku-type earthquake could strike in the Cascadia subduction zone, how would such a tsunami affect the tsunami hazard assessment and planning along the Pacific coast of North America? The rapid development of computing technology allowed us to look into the tsunami impact caused by the above hypotheses using high-resolution models with broad coverage of the Pacific Northwest. With the slab model of McCrory et al. (2012) (as part of the USGS Slab 1.0 model) for the Cascadia earthquake, we tested the above hypotheses to assess the tsunami hazards along the entire U.S. West Coast. The modeled results indicate these hypothetical scenarios may cause runup heights very similar to those observed along Japan's coastline during the 2011

  1. A Quaternary Fault Database for Central Asia

    NASA Astrophysics Data System (ADS)

    Mohadjer, S.; Ehlers, T. A.; Bendick, R.; Stübner, K.; Strube, T.

    2015-09-01

    Earthquakes represent the highest risk in terms of potential loss of lives and economic damage for Central Asian countries. Knowledge of fault location and behavior is essential in calculating and mapping seismic hazard. Previous efforts in compiling fault information for Central Asia have generated a large amount of data that are published in limited-access journals with no digital maps publicly available, or are limited in their description of important fault parameters such as slip rates. This study builds on previous work by improving access to fault information through a web-based interactive map and an online database with search capabilities that allow users to organize data by different fields. The data presented in this compilation include fault location, its geographic, seismic and structural characteristics, short descriptions, narrative comments and references to peer-reviewed publications. The interactive map displays 1196 fault segments and 34 000 earthquake locations on a shaded-relief map. The online database contains attributes for 122 faults mentioned in the literature, with Quaternary and geodetic slip rates reported for 38 and 26 faults respectively, and earthquake history reported for 39 faults. This work has implications for seismic hazard studies in Central Asia as it summarizes important fault parameters, and can reduce earthquake risk by enhancing public access to information. It also allows scientists and hazard assessment teams to identify structures and regions where data gaps exist and future investigations are needed.

  2. Learning and diagnosing faults using neural networks

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis

    1990-01-01

    Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor data to fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor data to fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Secondly, the neural network which performs sensor fusion is structured to detect new unknown faults for which training examples were not presented during training. These methods were tested on a task of fault diagnosis by employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.

  3. Early weakening processes inside thrust fault

    NASA Astrophysics Data System (ADS)

    Lacroix, B.; Tesei, T.; Oliot, E.; Lahfid, A.; Collettini, C.

    2015-07-01

    Observations from deep boreholes at several locations worldwide, laboratory measurements of frictional strength on quartzo-feldspathic materials, and earthquake focal mechanisms indicate that crustal faults are strong (apparent friction μ ≥ 0.6). However, friction experiments on phyllosilicate-rich rocks and some geophysical data have demonstrated that some major faults are considerably weaker. This weakness is commonly considered to be characteristic of mature faults in which rocks are altered by prolonged deformation and fluid-rock interaction (i.e., the San Andreas, Zuccale, and Nankai Faults). In contrast, in this study we document fault weakening occurring along a marly shear zone in its infancy (<30 m displacement). Geochemical mass balance calculations and microstructural data show that a massive calcite departure (up to 50 vol %) from the fault rocks facilitated the concentration and reorganization of weak phyllosilicate minerals along the shear surfaces. Friction experiments carried out on intact foliated samples of host marls and fault rocks demonstrated that this structural reorganization led to significant fault weakening and that the incipient structure has strength and slip behavior comparable to that of the major weak faults previously documented. These results indicate that some faults, especially those nucleating in lithologies rich in both clays and high-solubility minerals (such as calcite), might experience rapid mineralogical and structural alteration and become weak even in the early stages of their activity.

  4. Determining Fault Orientation with Sagnac Interferometers

    NASA Astrophysics Data System (ADS)

    Gruenwald, Konstantin; Dunn, Robert

    2014-03-01

    Typically, earthquake fault ruptures emit seismic waves in directions dependent on the fault's orientation. Specifically, as the fault slips to release strain, compressional P-waves propagate parallel and perpendicular to the fault plane, and transverse S-waves propagate at 45 degree angles to the fault, a result of the double-couple model of fault slippage. Sagnac interferometers (ring-lasers) have been used to study wave components of several natural phenomena. We used the initial responses of a ring-laser to transverse S-waves to determine the orientation of the nearby Guy/Greenbrier fault, the source of an earthquake swarm in 2010-11 purportedly caused by hydraulic fracturing. This orientation was compared to the structure of the fault extracted from nearby seismograph records. Our goal was to determine if ring-lasers could reinforce or add to the models of fault orientation constructed from seismographs. The results indicate that the ring-laser's responses can aid in constructing fault orientation in a manner similar to traditional seismographs. Funded by the Arkansas Space Grant Consortium and the National Science Foundation.

  5. Perspective View, San Andreas Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The prominent linear feature straight down the center of this perspective view is the San Andreas Fault in an image created with data from NASA's Shuttle Radar Topography Mission (SRTM), which will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This segment of the fault lies west of the city of Palmdale, California, about 100 kilometers (about 60 miles) northwest of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. This area is at the junction of two large mountain ranges, the San Gabriel Mountains on the left and the Tehachapi Mountains on the right. Quail Lake Reservoir sits in the topographic depression created by past movement along the fault. Interstate 5 is the prominent linear feature starting at the left edge of the image and continuing into the fault zone, passing eventually over Tejon Pass into the Central Valley, visible at the upper left.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994

  6. Off-fault tip splay networks: A genetic and generic property of faults indicative of their long-term propagation

    NASA Astrophysics Data System (ADS)

    Perrin, Clément; Manighetti, Isabelle; Gaudemer, Yves

    2016-01-01

    We use fault maps and fault propagation evidence available in the literature to examine geometrical relations between parent faults and off-fault splays. The population includes 47 worldwide crustal faults with lengths from millimetres to thousands of kilometres and of different slip modes. We show that fault splays form adjacent to any propagating fault tip, whereas they are absent at non-propagating fault ends. Independent of fault length, slip mode, context, etc., tip splay networks have a similar fan shape widening in the direction of long-term propagation, a similar relative length and width (∼ 30 and ∼ 10% of parent fault length, respectively), and a similar range of mean angles to the parent fault (10-20°). We infer that tip splay networks are a genetic and a generic property of faults indicative of their long-term propagation. Their generic geometrical properties suggest they result from generic off-fault stress distribution at propagating fault ends.

  7. Paleomagnetic Data From the Rinconada Fault in Central California: Evidence for Off-fault Deformation

    NASA Astrophysics Data System (ADS)

    Crump, S.; Titus, S.; McGuire, Z.; Housen, B. A.

    2009-12-01

    The Rinconada fault is one of three major sub-parallel faults of the San Andreas fault system in central California. The fault has 18 km of dextral displacement since the Pliocene and up to 60 km of total displacement for the Tertiary. A fold and thrust belt is well developed in Miocene and younger sedimentary rocks on either side of the Rinconada fault. We sampled ~150 sites from the Miocene Monterey Formation within this fold and thrust belt, a unit that is often used in regional paleomagnetic studies. The sites were located within 15 km of the fault trace along a segment of the Rinconada fault that stretches from Greenfield to Paso Robles. Because this unit was deposited while the San Andreas fault system was active at this latitude, any deformation recorded by these rocks is related to plate boundary deformation. Unlike the large (>90°) rotations observed in the Transverse Ranges to the south, vertical axis rotations adjacent to the Rinconada fault are smaller (<15°) and vary with distance from the fault as well as along strike. Thus, the model for rotations from the Transverse Ranges, where large fault-bound panels rotate within a system of conjugate strike-slip faults, does not apply for this region in central California. Instead, we believe rotations occur in small fault blocks and the magnitude of rotation may be affected by local parameters such as fault geometries, specific rock types, and structural complexities. One implication of these vertical axis rotations adjacent to the Rinconada fault is that off-fault regions are accommodating some of the fault-parallel plate motion. This is important for our understanding of the partitioning of plate boundary deformation in California.

  8. Fault geometries in basement-induced wrench faulting under different initial stress states

    NASA Astrophysics Data System (ADS)

    Naylor, M. A.; Mandl, G.; Supesteijn, C. H. K.

    Scaled sandbox experiments were used to generate models for relative ages, dip, strike and three-dimensional shape of faults in basement-controlled wrench faulting. The basic fault sequence runs from early en échelon Riedel shears and splay faults through 'lower-angle' shears to P shears. The Riedel shears are concave upwards and define a tulip structure in cross-section. In three dimensions, each Riedel shear has a helicoidal form. The sequence of faults and three-dimensional geometry are rationalized in terms of the prevailing stress field and Coulomb-Mohr theory of shear failure. The stress state in the sedimentary overburden before wrenching begins has a substantial influence on the fault geometries and on the final complexity of the fault zone. With the maximum compressive stress (σ1) initially parallel to the basement fault (transtension), Riedel shears are only slightly en échelon, sub-parallel to the basement fault, steeply dipping with a reduced helicoidal aspect. Conversely, with σ1 initially perpendicular to the basement fault (transpression), Riedel shears are strongly oblique to the basement fault strike, have lower dips and an exaggerated helicoidal form; the final fault zone is both wide and complex. We find good agreement between the models and both mechanical theory and natural examples of wrench faulting.

  9. Geometrical effects of fault bends on fault frictional and mechanical behavior: insights from Distinct Element simulations

    NASA Astrophysics Data System (ADS)

    Guo, Y.; Morgan, J.

    2006-12-01

    Strike slip and transform faults often consist of nonlinear segments, i.e., restraining bends and releasing bends, that have significant impacts on the stress pattern, strain accumulation, slip rate, and therefore the variation of seismicity along these faults. In order to study the geometrical effects of nonlinear faults on fault frictional and mechanical behavior during fault loading and slip, we simulate the rupture process of faults with bends using the Distinct Element Method (DEM) in two dimensions. Breakable elastic bonds were added between adjacent, closely packed circular particles to generate fault blocks. A nonlinear fault surface with a restraining bend and a releasing bend that are symmetrically distributed was defined in the middle of the fault blocks. Deformation was introduced by pulling a spring attached to one of the fault zone boundaries at a constant velocity while keeping the other boundary fixed, producing compression and contraction along the restraining bend, and tension and dilation along the releasing bend. Significant strain is accommodated adjacent to the restraining bend by formation of secondary faults and slip along them. The slip rates, fault frictional strengths, and rupture processes are affected by multiple parameters, including bond strength, loading velocity, bend angle and amplitude. Among these parameters, bend geometry plays a more important role in determining the spatial and temporal distribution of contact slip and failure of our simulated nonlinear faults.

  10. Seismicity and fault geometry of the San Andreas fault around Parkfield, California and their implications

    NASA Astrophysics Data System (ADS)

    Kim, Woohan; Hong, Tae-Kyung; Lee, Junhyung; Taira, Taka'aki

    2016-05-01

    Fault geometry is a consequence of tectonic evolution, and it provides important information on potential seismic hazards. We investigated fault geometry and its properties in Parkfield, California on the basis of local seismicity and seismic velocity residuals refined by an adaptive-velocity hypocentral-parameter inversion method. The station correction terms from the hypocentral-parameter inversion present characteristic seismic velocity changes around the fault, suggesting low seismic velocities in the region east of the fault and high seismic velocities in the region to the west. Large seismic velocity anomalies are observed at shallow depths along the whole fault zone. At depths of 3-8 km, seismic velocity anomalies are small in the central fault zone, but are large in the northern and southern fault zones. At depths > 8 km, low seismic velocities are observed in the northern fault zone. High seismicity is observed in the Southwest Fracture Zone, which has developed beside the creeping segment of the San Andreas fault. The vertical distribution of seismicity suggests that the fault has spiral geometry, dipping NE in the northern region, nearly vertical in the central region, and SW in the southern region. The rapid twisting of the fault plane occurs in a short distance of approximately 50 km. The seismic velocity anomalies and fault geometry suggest location-dependent piecewise faulting, which may cause the periodic M6 events in the Parkfield region.

  11. Surface faulting along the Superstition Hills fault zone and nearby faults associated with the earthquakes of 24 November 1987

    USGS Publications Warehouse

    Sharp, R.V.

    1989-01-01

    The M6.2 Elmore Desert Ranch earthquake of 24 November 1987 was associated spatially and probably temporally with left-lateral surface rupture on many northeast-trending faults in and near the Superstition Hills in western Imperial Valley. Three curving discontinuous principal zones of rupture among these breaks extended northeastward from near the Superstition Hills fault zone as far as 9 km; the maximum observed surface slip, 12.5 cm, was on the northern of the three, the Elmore Ranch fault, at a point near the epicenter. Twelve hours after the Elmore Ranch earthquake, the M6.6 Superstition Hills earthquake occurred near the northwest end of the right-lateral Superstition Hills fault zone. We measured displacements over 339 days at as many as 296 sites along the Superstition Hills fault zone, and repeated measurements at 49 sites provided sufficient data to fit with a simple power law. The overall distributions of right-lateral displacement at 1 day and the estimated final slip are nearly symmetrical about the midpoint of the surface rupture. The average estimated final right-lateral slip for the Superstition Hills fault zone is ~54 cm. The average left-lateral slip for the conjugate faults trending northeastward is ~23 cm. The southernmost ruptured member of the Superstition Hills fault zone, newly named the Wienert fault, extends the known length of the zone by about 4 km. -from Authors
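
    The functional form used to fit the repeated afterslip measurements is not given in the abstract; as a hedged illustration of fitting such data with a simple power law, the sketch below fits d(t) = a*t^b to synthetic slip observations. The measurement values are invented for the demonstration and are not the published data.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b):
    """Cumulative surface slip (cm) as a power law of time since the mainshock (days)."""
    return a * np.power(t, b)

# Synthetic repeated measurements at one site (days, cm); values are illustrative only.
t_obs = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 339.0])
d_obs = np.array([10.0, 14.5, 21.0, 29.0, 41.0, 52.0])

(a, b), _ = curve_fit(power_law, t_obs, d_obs, p0=(10.0, 0.3))
print(f"a = {a:.1f} cm, b = {b:.2f}, extrapolated slip at 1 yr = {power_law(365.0, a, b):.0f} cm")
```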

  12. Networking of Near Fault Observatories in Europe

    NASA Astrophysics Data System (ADS)

    Vogfjörd, Kristín; Bernard, Pascal; Chiraluce, Lauro; Fäh, Donat; Festa, Gaetano; Zulficar, Can

    2014-05-01

    Networking of six European near-fault observatories (NFO) was established in the FP7 infrastructure project NERA (Network of European Research Infrastructures for Earthquake Risk Assessment and Mitigation). This networking has included sharing of expertise and know-how among the observatories, distribution of analysis tools and access to data. The focus of the NFOs is on research into the active processes of their respective fault zones through acquisition and analysis of multidisciplinary data. These studies include the role of fluids in fault initiation, site effects, derived processes such as earthquake-generated tsunamis and landslides, mapping the internal structure of fault systems and development of automatic early warning systems. The six fault zones are in different tectonic regimes: the South Iceland Seismic Zone (SISZ) in Iceland, the Marmara Sea in Turkey and the Corinth Rift in Greece are at plate boundaries, with strike-slip faulting characterizing the SISZ and the Marmara Sea, while normal faulting dominates in the Corinth Rift. The Alto Tiberina and Irpinia faults, dominated by low- and medium-angle normal faulting, respectively, are in the Apennine mountain range in Italy; the Valais region, characterized by both strike-slip and normal faulting, is located in the Swiss Alps. The fault structures range from well-developed long faults, such as in the Marmara Sea, to more complex networks of smaller, book-shelf faults such as in the SISZ. Earthquake hazard in the fault zones ranges from significant to substantial. The Marmara Sea and Corinth Rift are submarine, causing additional tsunami hazard, and steep slopes and sediment-filled valleys in the Valais give rise to hazards from landslides and liquefaction. Induced seismicity has repeatedly occurred in connection with geothermal drilling and water injection in the SISZ, and active volcanoes flanking the SISZ also give rise to volcanic hazard due to volcano-tectonic interaction. Organization among the

  13. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1990-01-01

    The use of back-to-back, or comparison, testing for regression testing or porting is examined. The efficiency and the cost of the strategy are compared with manual and table-driven single-version testing. Some of the key parameters that influence the efficiency and the cost of the approach are the failure identification effort during single version program testing, the extent of implemented changes, the nature of the regression test data (e.g., random), and the nature of the inter-version failure correlation and fault-masking. The advantages and disadvantages of the technique are discussed, together with some suggestions concerning its practical use.

  14. Performance Analysis on Fault Tolerant Control System

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine

    2005-01-01

    In a fault tolerant control (FTC) system, a parameter varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. In this paper, an FTC analysis framework is provided to calculate the upper bound of an induced-L2 norm of an FTC system with existence of false identification and detection time delay. The upper bound is written as a function of a fault detection time and exponential decay rates and has been used to determine which FTC law produces less performance degradation (tracking error) due to false identification. The analysis framework is applied to an FTC system of a HiMAT (Highly Maneuverable Aircraft Technology) vehicle. Index terms: fault tolerant control system, linear parameter varying system, HiMAT vehicle.

  15. Holocene fault scarps near Tacoma, Washington, USA

    USGS Publications Warehouse

    Sherrod, B.L.; Brocher, T.M.; Weaver, C.S.; Bucknam, R.C.; Blakely, R.J.; Kelsey, H.M.; Nelson, A.R.; Haugerud, R.

    2004-01-01

    Airborne laser mapping confirms that Holocene active faults traverse the Puget Sound metropolitan area, northwestern continental United States. The mapping, which detects forest-floor relief of as little as 15 cm, reveals scarps along geophysical lineaments that separate areas of Holocene uplift and subsidence. Along one such line of scarps, we found that a fault warped the ground surface between A.D. 770 and 1160. This reverse fault, which projects through Tacoma, Washington, bounds the southern and western sides of the Seattle uplift. The northern flank of the Seattle uplift is bounded by a reverse fault beneath Seattle that broke in A.D. 900-930. Observations of tectonic scarps along the Tacoma fault demonstrate that active faulting with associated surface rupture and ground motions pose a significant hazard in the Puget Sound region.

  16. A new intelligent hierarchical fault diagnosis system

    SciTech Connect

    Huang, Y.C.; Huang, C.L.; Yang, H.T.

    1997-02-01

    As a part of a substation-level decision support system, a new intelligent Hierarchical Fault Diagnosis System for on-line fault diagnosis is presented in this paper. The proposed diagnosis system divides the fault diagnosis process into two phases. Using time-stamped information from relays and breakers, phase 1 identifies the possible fault sections through Group Method of Data Handling (GMDH) networks, and phase 2 recognizes the types and detailed situations of the faults identified in phase 1 by using a fast bit-operation logical inference mechanism. The diagnosis system has been practically verified by testing on a typical Taiwan power secondary transmission system. Test results show that rapid and accurate diagnosis can be obtained, with flexibility and portability for the fault diagnosis of diverse substations.

  17. Probable origin of the Livingston Fault Zone

    NASA Astrophysics Data System (ADS)

    Monroe, Watson H.

    1991-09-01

    Most faulting in the Coastal Plain is high angle and generally normal, but the faults in the Livingston Fault Zone are all medium-angle reverse, forming a series of parallel horsts and grabens. Parallel to the fault zone are a number of phenomena all leading to the conclusion that the faults result from the solution of a late Cretaceous salt anticline by fresh groundwater, which then migrated up to the Eutaw and perhaps Tuscaloosa aquifers, causing an anomalous elongated area of highly saline water. The origin of the Livingston Fault Zone and the association of salt water in underlying aquifers is of particular importance at this time in relation to environmental concerns associated with hazardous waste management in the area.

  18. Fault-tolerant dynamic task graph scheduling

    SciTech Connect

    Kurt, Mehmet C.; Krishnamoorthy, Sriram; Agrawal, Kunal; Agrawal, Gagan

    2014-11-16

    In this paper, we present an approach to fault tolerant execution of dynamic task graphs scheduled using work stealing. In particular, we focus on selective and localized recovery of tasks in the presence of soft faults. We elicit from the user the basic task graph structure in terms of successor and predecessor relationships. The work stealing-based algorithm to schedule such a task graph is augmented to enable recovery when the data and meta-data associated with a task get corrupted. We use this redundancy, and the knowledge of the task graph structure, to selectively recover from faults with low space and time overheads. We show that the fault tolerant design retains the essential properties of the underlying work stealing-based task scheduling algorithm, and that the fault tolerant execution is asymptotically optimal when task re-execution is taken into account. Experimental evaluation demonstrates the low cost of recovery under various fault scenarios.
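
    A much-simplified illustration of selective, localized recovery in a task graph (ignoring work stealing, distribution, and metadata protection) is sketched below: when a task's stored result is found to be lost or corrupted, only that task and any missing predecessors are re-executed. The graph and tasks are hypothetical.

```python
# Hypothetical task graph: task -> list of predecessor tasks.
preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
compute = {"a": lambda deps: 1,
           "b": lambda deps: deps["a"] + 1,
           "c": lambda deps: deps["a"] * 2,
           "d": lambda deps: deps["b"] + deps["c"]}

results = {}

def run(task):
    """Execute a task, re-executing any predecessor whose result is missing."""
    deps = {}
    for p in preds[task]:
        if p not in results:          # lost result: selective, localized recovery
            run(p)
        deps[p] = results[p]
    results[task] = compute[task](deps)

for task in ("a", "b", "c", "d"):
    run(task)

# Simulate a soft fault corrupting two stored results, then recover only "d"'s subgraph.
del results["c"], results["d"]
run("d")
print(results)   # {'a': 1, 'b': 2, 'c': 2, 'd': 4}
```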

  19. Fault system polarity: A matter of chance?

    NASA Astrophysics Data System (ADS)

    Schöpfer, Martin; Childs, Conrad; Manzocchi, Tom; Walsh, John; Nicol, Andy; Grasemann, Bernhard

    2015-04-01

    Many normal fault systems and, on a smaller scale, fracture boudinage exhibit asymmetry so that one fault dip direction dominates. The fraction of throw (or heave) accommodated by faults with the same dip direction in relation to the total fault system throw (or heave) is a quantitative measure of fault system asymmetry and termed 'polarity'. It is a common belief that the formation of domino and shear band boudinage with a monoclinic symmetry requires a component of layer parallel shearing, whereas torn boudins reflect coaxial flow. Moreover, domains of parallel faults are frequently used to infer the presence of a common décollement. Here we show, using Distinct Element Method (DEM) models in which rock is represented by an assemblage of bonded circular particles, that asymmetric fault systems can emerge under symmetric boundary conditions. The pre-requisite for the development of domains of parallel faults is however that the medium surrounding the brittle layer has a very low strength. We demonstrate that, if the 'competence' contrast between the brittle layer and the surrounding material ('jacket', or 'matrix') is high, the fault dip directions and hence fault system polarity can be explained using a random process. The results imply that domains of parallel faults are, for the conditions and properties used in our models, in fact a matter of chance. Our models suggest that domino and shear band boudinage can be an unreliable shear-sense indicator. Moreover, the presence of a décollement should not be inferred on the basis of a domain of parallel faults only.
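
    The polarity measure defined above is straightforward to compute; a minimal sketch with hypothetical throw data:

```python
# Hypothetical faults in one system: (dip direction, throw in metres).
faults = [("E", 120.0), ("E", 40.0), ("W", 30.0), ("E", 60.0), ("W", 10.0)]

def polarity(faults, direction="E"):
    """Fraction of total throw carried by faults dipping in the given direction."""
    total = sum(throw for _, throw in faults)
    same = sum(throw for dip, throw in faults if dip == direction)
    return same / total

print(f"polarity (east-dipping share of throw): {polarity(faults):.2f}")   # 0.85
```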

  20. 31 CFR 29.522 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Fault. 29.522 Section 29.522 Money... Overpayments § 29.522 Fault. (a) General rule. A debtor is considered to be at fault if he or she, or any other... requirement. (3) The following factors may affect the decision as to whether the debtor is or is not at...

  1. 31 CFR 29.522 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance: Treasury 1 2013-07-01 2013-07-01 false Fault. 29.522 Section 29.522 Money... Overpayments § 29.522 Fault. (a) General rule. A debtor is considered to be at fault if he or she, or any other... requirement. (3) The following factors may affect the decision as to whether the debtor is or is not at...

  2. 31 CFR 29.522 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 31 Money and Finance: Treasury 1 2011-07-01 2011-07-01 false Fault. 29.522 Section 29.522 Money... Overpayments § 29.522 Fault. (a) General rule. A debtor is considered to be at fault if he or she, or any other... requirement. (3) The following factors may affect the decision as to whether the debtor is or is not at...

  3. 31 CFR 29.522 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance: Treasury 1 2014-07-01 2014-07-01 false Fault. 29.522 Section 29.522 Money... Overpayments § 29.522 Fault. (a) General rule. A debtor is considered to be at fault if he or she, or any other... requirement. (3) The following factors may affect the decision as to whether the debtor is or is not at...

  4. 31 CFR 29.522 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance: Treasury 1 2012-07-01 2012-07-01 false Fault. 29.522 Section 29.522 Money... Overpayments § 29.522 Fault. (a) General rule. A debtor is considered to be at fault if he or she, or any other... requirement. (3) The following factors may affect the decision as to whether the debtor is or is not at...

  5. Diagnosing process faults using neural network models

    SciTech Connect

    Buescher, K.L.; Jones, R.D.; Messina, M.J.

    1993-11-01

    In order to be of use for realistic problems, a fault diagnosis method should have the following three features. First, it should apply to nonlinear processes. Second, it should not rely on extensive amounts of data regarding previous faults. Lastly, it should detect faults promptly. The authors present such a scheme for static (i.e., non-dynamic) systems. It involves using a neural network to create an associative memory whose fixed points represent the normal behavior of the system.
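
    One way to realize this idea (a sketch under stated assumptions, not the authors' network) is to learn a low-dimensional reconstruction of normal operating data and treat inputs the model cannot reproduce as faulty; below, a PCA projection stands in for the associative memory whose fixed points encode normal behavior.

    import numpy as np

    rng = np.random.default_rng(0)

    # Normal operation: two sensors obeying sensor2 ≈ 2 * sensor1 (toy static process).
    normal = rng.normal(size=(500, 1)) @ np.array([[1.0, 2.0]]) + 0.05 * rng.normal(size=(500, 2))

    mean = normal.mean(axis=0)
    _, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
    w = vt[:1]                                    # principal direction of normal behavior

    def reconstruct(x):
        return mean + (x - mean) @ w.T @ w        # projection acts as the stable "fixed point"

    def is_faulty(x, tol=0.5):
        return np.linalg.norm(x - reconstruct(x)) > tol

    print(is_faulty(np.array([1.0, 2.0])))        # False: consistent with normal behavior
    print(is_faulty(np.array([1.0, -2.0])))       # True: violates the learned sensor relation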

  6. The fault-tolerant multiprocessor computer

    NASA Technical Reports Server (NTRS)

    Smith, T. B., III (Editor); Lala, J. H. (Editor); Goldberg, J. (Editor); Kautz, W. H. (Editor); Melliar-Smith, P. M. (Editor); Green, M. W. (Editor); Levitt, K. N. (Editor); Schwartz, R. L. (Editor); Weinstock, C. B. (Editor); Palumbo, D. L. (Editor)

    1986-01-01

    The development and evaluation of fault-tolerant computer architectures and software-implemented fault tolerance (SIFT) for use in advanced NASA vehicles and potentially in flight-control systems are described in a collection of previously published reports prepared for NASA. Topics addressed include the principles of fault-tolerant multiprocessor (FTMP) operation; processor and slave regional designs; FTMP executive, facilities, acceptance-test/diagnostic, applications, and support software; FTMP reliability and availability models; SIFT hardware design; and SIFT validation and verification.

  7. Hydrogen Embrittlement And Stacking-Fault Energies

    NASA Technical Reports Server (NTRS)

    Parr, R. A.; Johnson, M. H.; Davis, J. H.; Oh, T. K.

    1988-01-01

    Embrittlement in Ni/Cu alloys appears related to stacking-fault probabilities. Report describes attempt to show a correlation between stacking-fault energy of different Ni/Cu alloys and susceptibility to hydrogen embrittlement. Correlation could lead to more fundamental understanding and a method of predicting susceptibility of a given Ni/Cu alloy from stacking-fault energies calculated from X-ray diffraction measurements.

  8. Focused fault injection testing of software implemented fault tolerance mechanisms of Voltan TMR nodes

    NASA Astrophysics Data System (ADS)

    Tao, S.; Ezhilchelvan, P. D.; Shrivastava, S. K.

    1995-03-01

    One way of gaining confidence in the adequacy of the fault tolerance mechanisms of a system is to test the system by injecting faults and observing how it performs under faulty conditions. This paper presents an application of the focused fault injection method that has been developed for testing software-implemented fault tolerance mechanisms of distributed systems. The method exploits the object-oriented approach of software implementation to support the injection of specific classes of faults. With the focused fault injection method, the system tester is able to inject specific classes of faults (including malicious ones) such that the fault tolerance mechanisms of a target system can be tested adequately. The method has been applied to test the design and implementation of the voting, clock synchronization, and ordering modules of the Voltan TMR (triple modular redundant) node. The tests performed uncovered three flaws in the system software.
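
    A minimal, hypothetical illustration of the approach (class and method names are invented, not those of the Voltan software): object orientation lets a tester substitute a subclass that corrupts one replica's value, focusing the test on a specific fault class and checking that the majority voter masks it.

    from collections import Counter

    class MajorityVoter:
        def vote(self, replicas):
            """Return the majority value among the replica outputs."""
            value, count = Counter(replicas).most_common(1)[0]
            if count < 2:
                raise RuntimeError("no majority among replicas")
            return value

    class CorruptedReplicaInjector(MajorityVoter):
        """Focused fault injection: corrupt one replica's value before voting."""
        def vote(self, replicas):
            faulty = list(replicas)
            faulty[0] += 999                      # simulate an erroneous (or malicious) replica
            return super().vote(faulty)

    replicas = [42, 42, 42]
    print(MajorityVoter().vote(replicas))             # 42
    print(CorruptedReplicaInjector().vote(replicas))  # still 42: the single injected fault is masked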

  9. Fault Zone Guided Wave generation on the locked, late interseismic Alpine Fault, New Zealand

    NASA Astrophysics Data System (ADS)

    Eccles, J. D.; Gulley, A. K.; Malin, P. E.; Boese, C. M.; Townend, J.; Sutherland, R.

    2015-07-01

    Fault Zone Guided Waves (FZGWs) have been observed for the first time within New Zealand's transpressional continental plate boundary, the Alpine Fault, which is late in its typical seismic cycle. Ongoing study of these phases provides the opportunity to monitor interseismic conditions in the fault zone. Distinctive dispersive seismic codas (~7-35 Hz) have been recorded on shallow borehole seismometers installed within 20 m of the principal slip zone. Near the central Alpine Fault, known for low background seismicity, FZGW-generating microseismic events are located beyond the catchment-scale partitioning of the fault, indicating lateral connectivity of the low-velocity zone immediately below the near-surface segmentation. Initial modeling of the low-velocity zone indicates a waveguide width of 60-200 m with a 10-40% reduction in S-wave velocity, similar to that inferred for the fault core of other mature plate-boundary faults such as the San Andreas and North Anatolian Faults.

  10. Distributed bearing fault diagnosis based on vibration analysis

    NASA Astrophysics Data System (ADS)

    Dolenc, Boštjan; Boškoski, Pavle; Juričić, Đani

    2016-01-01

    Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The vibrational patterns generated are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. This is evaluated on several bearings with naturally born distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations in fault-free, localized and distributed fault conditions form clearly separable clusters, thus enabling diagnosis.
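
    The envelope-spectrum comparison can be sketched as follows (the synthetic signal and defect frequency are placeholders, not data from the paper): impacts at the fault repetition rate modulate a structural resonance, and the Hilbert-transform envelope exposes that rate in its spectrum.

    import numpy as np
    from scipy.signal import hilbert

    fs = 20_000                                    # sampling rate [Hz]
    t = np.arange(0, 1.0, 1 / fs)
    fault_freq = 87.0                              # assumed defect repetition rate [Hz]
    carrier = np.sin(2 * np.pi * 3_000 * t)        # resonance excited by the defect impacts
    impacts = (1 + np.sign(np.sin(2 * np.pi * fault_freq * t))) / 2
    signal = carrier * impacts + 0.1 * np.random.default_rng(0).normal(size=t.size)

    envelope = np.abs(hilbert(signal))             # demodulate the resonance
    spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(envelope.size, 1 / fs)

    # The defect repetition rate and its harmonics dominate the envelope spectrum.
    print(f"dominant envelope frequency: {freqs[np.argmax(spectrum)]:.1f} Hz")   # ~87 Hz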