Science.gov

Sample records for faulting

  1. Interacting faults

    NASA Astrophysics Data System (ADS)

    Peacock, D. C. P.; Nixon, C. W.; Rotevatn, A.; Sanderson, D. J.; Zuluaga, L. F.

    2017-04-01

    The way that faults interact with each other controls fault geometries, displacements and strains. Faults rarely occur individually but as sets or networks, with the arrangement of these faults producing a variety of different fault interactions. Fault interactions are characterised in terms of the following: 1) Geometry - the spatial arrangement of the faults. Interacting faults may or may not be geometrically linked (i.e. physically connected, with the fault planes sharing an intersection line). 2) Kinematics - the displacement distributions of the interacting faults and whether the displacement directions are parallel, perpendicular or oblique to the intersection line. Interacting faults may or may not be kinematically linked, where the displacements, stresses and strains of one fault influence those of the other. 3) Displacement and strain in the interaction zone - whether the faults have the same or opposite displacement directions, and whether extension or contraction dominates in the acute bisector between the faults. 4) Chronology - the relative ages of the faults. This characterisation scheme is used to suggest a classification for interacting faults. Different types of interaction are illustrated using metre-scale faults from the Mesozoic rocks of Somerset and examples from the literature.

  2. Faulting Mars

    NASA Image and Video Library

    2016-07-15

    This region of Xanthe Terra has mostly been contracted due to thrust faulting, but this local region shows evidence of extensional faulting, also called normal faulting. When two normal faults face each other, they create a bathtub-like depression called a "graben." http://photojournal.jpl.nasa.gov/catalog/PIA20813

  3. Zipper Faults

    NASA Astrophysics Data System (ADS)

    Platt, J. P.; Passchier, C. W.

    2015-12-01

    Intersecting simultaneously active pairs of faults with different orientations and opposing slip sense ("conjugate faults") present geometrical and kinematic problems. Such faults rarely offset each other, even when they have displacements of many km. A simple solution to the problem is that the two faults merge, either zippering up or unzippering, depending on the relationship between the angle of intersection and the slip senses. A widely recognized example of this is the so-called blind front developed in some thrust belts, where a backthrust branches off a decollement surface at depth. The decollement progressively unzippers, so that its hanging wall becomes the hanging wall of the backthrust, and its footwall becomes the footwall of the active decollement. The opposite situation commonly arises in core complexes, where conjugate low-angle normal faults merge to form a single detachment; in this case the two faults zipper up. Analogous situations may arise for conjugate pairs of strike-slip faults. We present kinematic and geometrical analyses of the Garlock and San Andreas faults in California, the Najd fault system in Saudi Arabia, the North and East Anatolian faults, the Karakoram and Altyn Tagh faults in Tibet, and the Tonale and Giudicarie faults in the southern Alps, all of which appear to have undergone zippering over distances of several tens to hundreds of km. The zippering process may produce complex and significant patterns of strain and rotation in the surrounding rocks, particularly if the angle between the zippered faults is large. A zippering fault may be inactive during active movement on the intersecting faults, or it may have a slip rate that differs from either fault. Intersecting conjugate ductile shear zones behave in the same way on outcrop and micro-scales.

  4. Fault finder

    DOEpatents

    Bunch, Richard H.

    1986-01-01

    A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
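
    The distance calculation described above can be sketched as two-ended arrival-time arithmetic. This is an illustrative reconstruction, not the patent's method: the function name, the wave speed, and the line geometry are assumptions.

```python
# Hypothetical sketch of two-ended fault location: a surge from the fault
# travels toward both units, and synchronized arrival times fix its position.
# All names and numbers are illustrative, not taken from the patent.

def fault_distance(line_length_km, wave_speed_km_s, t_master_s, t_remote_s):
    """Distance from the master unit to the fault.

    The surge reaches the master after d/v seconds and the remote after
    (L - d)/v seconds, so d = (L + v * (t_master - t_remote)) / 2.
    """
    return (line_length_km + wave_speed_km_s * (t_master_s - t_remote_s)) / 2

# Fault 30 km from the master on a 100 km line, waves at ~290,000 km/s.
v = 290_000.0
t_m = 30 / v           # arrival time at the master unit
t_r = (100 - 30) / v   # arrival time at the remote unit
print(round(fault_distance(100, v, t_m, t_r), 6))  # 30.0
```

    The clock synchronization the abstract emphasizes is what makes the difference t_master - t_remote meaningful.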

  5. Fault Branching

    NASA Astrophysics Data System (ADS)

    Dmowska, R.; Rice, J. R.; Poliakov, A. N.

    2001-12-01

    Theoretical stress analysis for a propagating shear rupture suggests that the propensity of the rupture path to branch is determined by rupture speed and by the preexisting stress state. See Poliakov, Dmowska and Rice (JGR, submitted April 2001, URL below). Deviatoric stresses near a mode II rupture tip are found to be much higher to both sides of the fault plane than directly ahead, when rupture speed becomes close to the Rayleigh speed. However, the actual pattern of predicted Coulomb failure on secondary faults is strongly dependent on the angle between the fault and the direction of maximum compression Smax in the pre-stress field. Steep Smax angles lead to more extensive failure on the extensional side, whereas shallow angles give comparable failure regions on both. Here we test such concepts against natural examples. For crustal thrust faults we may assume that Smax is horizontal. Thus nucleation on a steeply dipping plane, like the 53° dip for the 1971 San Fernando earthquake, is consistent with rupture path kinking to the extensional side, as inferred. Nucleation on a shallow dip, like the 12°-18° of the 1985 Kettleman Hills event, should activate both sides, as seems consistent with aftershock patterns. Similarly, in a strike-slip example, Smax is inferred to be at approximately 60° to the Johnson Valley fault where it branched to the extensional side onto the Landers-Kickapoo fault in the 1992 event, and this too is consistent. Further, geological examination of the activation of secondary fault features along the Johnson Valley fault and the Homestead Valley fault consistently shows that most activity occurs on the extensional side. Another strike-slip example is the Imperial Valley 1979 earthquake. The approximate Smax direction is north-south, at around 35° to the main fault, where it branched, on the extensional side, onto the Brawley fault, again interpretable with the concepts developed.

  6. Fault diagnosis

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed based on those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to

  7. Fault mechanics

    SciTech Connect

    Segall, P.

    1991-01-01

    Recent observational, experimental, and theoretical modeling studies of fault mechanics are discussed in a critical review of U.S. research from the period 1987-1990. Topics examined include interseismic strain accumulation, coseismic deformation, postseismic deformation, and the earthquake cycle; long-term deformation; fault friction and the instability mechanism; pore pressure and normal stress effects; instability models; strain measurements prior to earthquakes; stochastic modeling of earthquakes; and deep-focus earthquakes. Maps, graphs, and a comprehensive bibliography are provided. 220 refs.

  8. Flight elements: Fault detection and fault management

    NASA Technical Reports Server (NTRS)

    Lum, H.; Patterson-Hine, A.; Edge, J. T.; Lawler, D.

    1990-01-01

    Fault management for an intelligent computational system must be developed using a top-down, integrated engineering approach. The proposed approach integrates the overall environment involving sensors and their associated data; design knowledge capture; operations; fault detection, identification, and reconfiguration; testability; causal models, including digraph matrix analysis; and overall performance impacts on the hardware and software architecture. A real-time intelligent fault detection and management system will be achieved through several objectives: development of fault-tolerant/FDIR requirements and specifications at the systems level, carried through from conceptual design to implementation and mission operations; implementation of monitoring, diagnosis, and reconfiguration at all system levels, providing fault isolation and system integration; optimization of system operations to manage degraded system performance through system integration; and lower development and operations costs through an intelligent real-time fault detection and fault management system and an information management system.

  9. Fault zone hydrogeology

    NASA Astrophysics Data System (ADS)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

    Deformation along faults in the shallow crust (< 1 km) introduces permeability heterogeneity and anisotropy, which has an important impact on processes such as regional groundwater flow, hydrocarbon migration, and hydrothermal fluid circulation. Fault zones have the capacity to be hydraulic conduits connecting shallow and deep geological environments, but simultaneously the fault cores of many faults often form effective barriers to flow. The direct evaluation of the impact of faults on fluid flow patterns remains a challenge and requires a multidisciplinary research effort of structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multidisciplinary understanding of fault zone hydrogeology. We discuss surface- and subsurface observations from diverse rock types from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered and not from outcrop observations alone.
To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the discipline of structural geology and hydrogeology to co-evolve and

  10. Fault-Tree Compiler

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Boerschlein, David P.

    1993-01-01

    Fault-Tree Compiler (FTC) program is software tool used to calculate probability of top event in fault tree. Gates of five different types allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language easy to understand and use. In addition, program supports hierarchical fault-tree definition feature, which simplifies tree-description process and reduces execution time. Set of programs created forming basis for reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.
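
    As a generic illustration of the top-event arithmetic such a tool performs (not FTC's input language or algorithm; the gate structure and probabilities below are made up), AND and OR gates over independent basic events combine as follows:

```python
# Fault-tree gate arithmetic for independent basic events (illustrative).

def or_gate(probs):
    """P(at least one input fails) = 1 - prod(1 - p_i)."""
    survive = 1.0
    for p in probs:
        survive *= (1.0 - p)
    return 1.0 - survive

def and_gate(probs):
    """P(all inputs fail) = prod(p_i)."""
    fail = 1.0
    for p in probs:
        fail *= p
    return fail

# Top event: (A AND B) OR C, with made-up basic-event probabilities.
p_top = or_gate([and_gate([0.01, 0.02]), 0.005])
print(round(p_top, 6))  # 0.005199
```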

  11. Earthquake fault superhighways

    NASA Astrophysics Data System (ADS)

    Robinson, D. P.; Das, S.; Searle, M. P.

    2010-10-01

    Motivated by the observation that the rare earthquakes which propagated for significant distances at supershear speeds occurred on very long straight segments of faults, we examine every known major active strike-slip fault system on land worldwide and identify those with long (> 100 km) straight portions capable not only of sustained supershear rupture speeds but having the potential to reach compressional wave speeds over significant distances, and call them "fault superhighways". The criteria used for identifying these are discussed. These superhighways include portions of the 1000 km long Red River fault in China and Vietnam passing through Hanoi, the 1050 km long San Andreas fault in California passing close to Los Angeles, Santa Barbara and San Francisco, the 1100 km long Chaman fault system in Pakistan north of Karachi, the 700 km long Sagaing fault connecting the first and second cities of Burma, Rangoon and Mandalay, the 1600 km Great Sumatra fault, and the 1000 km Dead Sea fault. Of the 11 faults so classified, nine are in Asia and two in North America, with seven located near areas of very dense populations. Based on the current population distribution within 50 km of each fault superhighway, we find that more than 60 million people today have increased seismic hazards due to them.

  12. Trishear for curved faults

    NASA Astrophysics Data System (ADS)

    Brandenburg, J. P.

    2013-08-01

    Fault-propagation folds form an important trapping element in both onshore and offshore fold-thrust belts, and as such benefit from reliable interpretation. Building an accurate geologic interpretation of such structures requires palinspastic restorations, which are made more challenging by the interplay between folding and faulting. Trishear (Erslev, 1991; Allmendinger, 1998) is a useful tool to unravel this relationship kinematically, but is limited by a restriction to planar fault geometries, or at least planar fault segments. Here, new methods are presented for trishear along continuously curved reverse faults defining a flat-ramp transition. In these methods, rotation of the hanging wall above a curved fault is coupled to translation along a horizontal detachment. Including hanging wall rotation allows for investigation of structures with progressive backlimb rotation. Application of the new algorithms is shown for two fault-propagation fold structures: the Turner Valley Anticline in Southwestern Alberta, and the Alpha Structure in the Niger Delta.

  13. FTAPE: A fault injection tool to measure fault tolerance

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-01-01

    The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.

  14. FTAPE: A fault injection tool to measure fault tolerance

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1994-01-01

    The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.

  15. Isolability of faults in sensor fault diagnosis

    NASA Astrophysics Data System (ADS)

    Sharifi, Reza; Langari, Reza

    2011-10-01

    A major concern with fault detection and isolation (FDI) methods is their robustness with respect to noise and modeling uncertainties. With this in mind, several approaches have been proposed to minimize the vulnerability of FDI methods to these uncertainties. But, apart from the algorithm used, there is a theoretical limit on the minimum effect of noise on detectability and isolability. This limit has been quantified in this paper for the problem of sensor fault diagnosis based on direct redundancies. In this study, first a geometric approach to sensor fault detection is proposed. The sensor fault is isolated based on the direction of residuals found from a residual generator. This residual generator can be constructed from an input-output or a Principal Component Analysis (PCA) based model. The simplicity of this technique, compared to the existing methods of sensor fault diagnosis, allows for more rational formulation of the isolability concepts in linear systems. Using this residual generator and the assumption of Gaussian noise, the effect of noise on isolability is studied, and the minimum magnitude of isolable fault in each sensor is found based on the distribution of noise in the measurement system. Finally, some numerical examples are presented to clarify this approach.
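
    The direction-based isolation idea can be sketched numerically. Everything below is an illustrative construction (the measurement matrix, the SVD-based residual generator, and the scoring rule), not the paper's formulation:

```python
# Sketch: isolate a faulty sensor from the direction of the residual vector.
import numpy as np

C = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])        # 4 sensors measuring a 2-state system

# Rows of W span the left null space of C, so W @ C = 0 and the residual
# W @ y is insensitive to the true state and driven only by sensor faults.
W = np.linalg.svd(C.T)[2][2:]

def isolate(y):
    r = W @ y
    # Each sensor i has a fault signature W[:, i]; pick the best-aligned one.
    scores = [abs(r @ W[:, i]) / (np.linalg.norm(W[:, i]) + 1e-12)
              for i in range(C.shape[0])]
    return int(np.argmax(scores))

y = C @ np.array([0.5, -1.0])      # consistent, noise-free measurements
y[2] += 2.0                        # add a bias fault on sensor 2
print(isolate(y))                  # 2
```

    With noise added, the smallest isolable fault magnitude depends on how well separated these signature directions are, which is the limit the paper quantifies.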

  16. How Faults Shape the Earth.

    ERIC Educational Resources Information Center

    Bykerk-Kauffman, Ann

    1992-01-01

    Presents fault activity with an emphasis on earthquakes and changes in continent shapes. Identifies three types of fault movement: normal, reverse, and strike-slip faults. Discusses the seismic gap theory, plate tectonics, and the principle of superposition. Vignettes portray fault movement, and the locations of the San Andreas fault and epicenters of…

  18. Fault Diagnosis Method of Fault Indicator Based on Maximum Probability

    NASA Astrophysics Data System (ADS)

    Yin, Zili; Zhang, Wei

    2017-05-01

    To address fault diagnosis in distribution networks when fault indicators misreport or fail to report, the characteristics of fault indicators are analyzed and the concept of the minimum fault judgment area of the distribution network is introduced. On this basis, a mathematical model for fault diagnosis from fault indicator information is formulated. The characteristics of fault indicator signals are analyzed, and a probabilistic method for processing combined fault indicator signals, based on the two-in-three principle, is proposed. Combining the minimum fault judgment area model, the combined indicator signals and the interdependence between fault indicators, a fault diagnosis method based on maximum probability is proposed. The method rests on the similarity between the simulated fault signal and the real fault signal, and detailed formulas are given. It tolerates misreported or missing indicator information well and can determine the fault area more accurately. The probability of each candidate area is given, and alternative fault locations are provided. The proposed approach is feasible and of practical value to dispatching and maintenance personnel.
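
    The "two-in-three principle" mentioned above appears to be a majority vote over three indicator reports; the sketch below is my reading of it, not the paper's signal model:

```python
# Illustrative two-in-three vote: trust a fault signal only when at least
# two of three indicator reports agree, which tolerates one misreport or
# one failed report per group.

def two_in_three(reports):
    """reports: three booleans, True meaning the indicator signalled a fault."""
    return sum(bool(r) for r in reports) >= 2

print(two_in_three([True, True, False]))   # True  (one failed report tolerated)
print(two_in_three([False, True, False]))  # False (a lone misreport is rejected)
```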

  19. Fault detection and fault tolerance in robotics

    NASA Technical Reports Server (NTRS)

    Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.

    1992-01-01

    Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.

  20. New York Bight fault

    USGS Publications Warehouse

    Hutchinson, Deborah R.; Grow, John A.

    1985-01-01

    The fault parallels a magnetic low to the east, interpreted as a buried Mesozoic rift basin from seismic-reflection data, and a gravity low to the west, interpreted as a structure within Paleozoic rocks from well data. Whether these structures control the location of, or movement on, the fault is not clear.

  1. Solar system fault detection

    DOEpatents

    Farrington, R.B.; Pruett, J.C. Jr.

    1984-05-14

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.

  2. Solar system fault detection

    DOEpatents

    Farrington, Robert B.; Pruett, Jr., James C.

    1986-01-01

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.

  3. Characterization of leaky faults

    SciTech Connect

    Shan, Chao

    1990-05-01

    Leaky faults provide a flow path for fluids to move underground. It is very important to characterize such faults in various engineering projects. The purpose of this work is to develop mathematical solutions for this characterization. The flow of water in an aquifer system and the flow of air in the unsaturated fault-rock system were studied. If the leaky fault cuts through two aquifers, characterization of the fault can be achieved by pumping water from one of the aquifers, which are assumed to be horizontal and of uniform thickness. Analytical solutions have been developed for two cases of either a negligibly small or a significantly large drawdown in the unpumped aquifer. Some practical methods for using these solutions are presented. 45 refs., 72 figs., 11 tabs.

  4. Rough faults, distributed weakening, and off-fault deformation

    NASA Astrophysics Data System (ADS)

    Griffith, W. Ashley; Nielsen, Stefan; di Toro, Giulio; Smith, Steven A. F.

    2010-08-01

    We report systematic spatial variations in fault rocks along nonplanar strike-slip faults cross-cutting the Lake Edison Granodiorite, Sierra Nevada, California (Sierran wavy fault) and Lobbia outcrops of the Adamello Batholith in the Italian Alps (Lobbia wavy fault). In the case of the Sierran fault, pseudotachylyte formed at contractional fault bends, where it is found as thin (1-2 mm) fault-parallel veins. Epidote and chlorite developed in the same seismic context as the pseudotachylyte and are especially abundant in extensional fault bends. We argue that the presence of fluids, as illustrated by this example, does not necessarily preclude the development of frictional melt. In the case of the Lobbia fault, pseudotachylyte thickness varies along the length of the fault, but the pseudotachylyte veins thicken and pool in extensional bends. We conduct a quantitative analysis of fault roughness, microcrack distribution, stress, and friction along the Lobbia fault. Numerical modeling results show that opening in extensional bends and localized thermal weakening in contractional bends counteract resistance encountered by fault waviness, resulting in an overall weaker fault than suggested by the corresponding static friction coefficient. The models also predict static stress redistribution around bends in the faults which is consistent with distribution of microcracks, indicating significant elastic and inelastic strain energy is dissipated into the wall rocks due to nonplanar fault geometry. Together these observations suggest that damage and energy dissipation occurs along the entire nonplanar fault during slip, rather than being confined to the region close to the dynamically propagating crack tip.

  5. Methods to enhance seismic faults and construct fault surfaces

    NASA Astrophysics Data System (ADS)

    Wu, Xinming; Zhu, Zhihui

    2017-10-01

    Faults are often apparent as reflector discontinuities in a seismic volume. Numerous types of fault attributes have been proposed to highlight fault positions from a seismic volume by measuring reflection discontinuities. These attribute volumes, however, can be sensitive to noise and stratigraphic features that are also apparent as discontinuities in a seismic volume. We propose a matched filtering method to enhance a precomputed fault attribute volume, and simultaneously estimate fault strikes and dips. In this method, a set of efficient 2D exponential filters, oriented by all possible combinations of strike and dip angles, are applied to the input attribute volume to find the maximum filtering responses at all samples in the volume. These maximum filtering responses are recorded to obtain the enhanced fault attribute volume while the corresponding strike and dip angles, that yield the maximum filtering responses, are recorded to obtain volumes of fault strikes and dips. By doing this, we assume that a fault surface is locally planar, and a 2D smoothing filter will yield a maximum response if the smoothing plane coincides with a local fault plane. With the enhanced fault attribute volume and the estimated fault strike and dip volumes, we then compute oriented fault samples on the ridges of the enhanced fault attribute volume, and each sample is oriented by the estimated fault strike and dip. Fault surfaces can be constructed by directly linking the oriented fault samples with consistent fault strikes and dips. For complicated cases with missing fault samples and noisy samples, we further propose to use a perceptual grouping method to infer fault surfaces that reasonably fit the positions and orientations of the fault samples. We apply these methods to 3D synthetic and real examples and successfully extract multiple intersecting fault surfaces and complete fault surfaces without holes.
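
    The orientation-scanning step can be illustrated in a reduced form: smooth along a set of candidate line orientations at every pixel of a 2D attribute image and keep the maximum response together with the angle that produced it. The paper uses 2D exponential filters over strike/dip combinations in a 3D volume; this 2D simplification and all names are assumptions for illustration.

```python
# Reduced 2D sketch of orientation scanning for fault-attribute enhancement.
import numpy as np

def enhance(attr, angles_deg, half_len=2):
    """Return (max response, best angle) per pixel over candidate orientations."""
    ny, nx = attr.shape
    best = np.zeros_like(attr)
    best_angle = np.zeros_like(attr)
    rows = np.arange(ny)[:, None]
    cols = np.arange(nx)[None, :]
    for ang in angles_deg:
        dy, dx = np.sin(np.radians(ang)), np.cos(np.radians(ang))
        resp = np.zeros_like(attr)
        for k in range(-half_len, half_len + 1):
            yy = np.clip(np.round(rows + k * dy).astype(int), 0, ny - 1)
            xx = np.clip(np.round(cols + k * dx).astype(int), 0, nx - 1)
            resp += attr[yy, xx]       # accumulate along the candidate line
        resp /= 2 * half_len + 1
        mask = resp > best             # keep the strongest orientation per pixel
        best[mask] = resp[mask]
        best_angle[mask] = ang
    return best, best_angle

attr = np.zeros((9, 9))
attr[:, 4] = 1.0                       # a vertical "fault" stripe
best, ang = enhance(attr, [0.0, 45.0, 90.0])
print(ang[4, 4])                       # 90.0: the along-fault orientation wins
```

    The same "smooth along the structure, compare across candidates" logic carries over to the 3D strike/dip scan the abstract describes.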

  6. Fault Management Metrics

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.
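
    One way the probabilistic summing might look in practice (my construction, not the paper's formulas) is a per-failure detection outcome weighted by each failure's probability of occurrence:

```python
# Illustrative loop-level detection coverage, probability-weighted.
failures = [
    # (failure mode, probability of occurrence, detected by this loop?)
    ("sensor bias", 0.010, True),
    ("valve stuck", 0.002, True),
    ("cpu lockup",  0.001, False),
]

total = sum(p for _, p, _ in failures)
covered = sum(p for _, p, detected in failures if detected)
print(f"detection coverage: {covered / total:.3f}")  # detection coverage: 0.923
```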

  7. The Kunlun Fault

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Kunlun fault is one of the gigantic strike-slip faults that bound the north side of Tibet. Left-lateral motion along the 1,500-kilometer (932-mile) length of the Kunlun has occurred uniformly for the last 40,000 years at a rate of 1.1 centimeter per year, creating a cumulative offset of more than 400 meters. In this image, two splays of the fault are clearly seen crossing from east to west. The northern fault juxtaposes sedimentary rocks of the mountains against alluvial fans. Its trace is also marked by lines of vegetation, which appear red in the image. The southern, younger fault cuts through the alluvium. A dark linear area in the center of the image is wet ground where groundwater has ponded against the fault. Measurements from the image of displacements of young streams that cross the fault show 15 to 75 meters (16 to 82 yards) of left-lateral offset. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) acquired the visible light and near infrared scene on July 20, 2000. Image courtesy NASA/GSFC/MITI/ERSDAC/JAROS, and the U.S./Japan ASTER Science Team
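
    The quoted offset is consistent with the stated slip rate and duration; a quick check of the arithmetic:

```python
# 1.1 cm/yr sustained over 40,000 years.
slip_rate_m_per_yr = 1.1 / 100
years = 40_000
print(round(slip_rate_m_per_yr * years))  # 440, i.e. "more than 400 meters"
```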

  8. Measuring fault tolerance with the FTAPE fault injection tool

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-01-01

    This paper describes FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The major parts of the tool include a system-wide fault-injector, a workload generator, and a workload activity measurement tool. The workload creates high stress conditions on the machine. Using stress-based injection, the fault injector is able to utilize knowledge of the workload activity to ensure a high level of fault propagation. The errors/fault ratio, performance degradation, and number of system crashes are presented as measures of fault tolerance.
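The fault-tolerance measures the abstract names can be computed from campaign records as in the sketch below; the record format is a hypothetical stand-in, not FTAPE's actual output.

```python
# Sketch: errors/fault ratio, crash count, and performance degradation
# summarized over a set of fault-injection runs.

def summarize(campaign):
    """campaign: one record per injected fault, with the number of
    errors observed, whether the system crashed, and run times."""
    n = len(campaign)
    errors_per_fault = sum(r["errors"] for r in campaign) / n
    crashes = sum(1 for r in campaign if r["crashed"])
    degradation = sum(
        (r["runtime"] - r["baseline_runtime"]) / r["baseline_runtime"]
        for r in campaign
    ) / n
    return errors_per_fault, crashes, degradation

runs = [
    {"errors": 3, "crashed": False, "runtime": 11.0, "baseline_runtime": 10.0},
    {"errors": 0, "crashed": False, "runtime": 10.0, "baseline_runtime": 10.0},
    {"errors": 5, "crashed": True,  "runtime": 14.0, "baseline_runtime": 10.0},
]
epf, crashes, deg = summarize(runs)
print(epf, crashes, round(deg, 3))
```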

  9. Fault detection and isolation

    NASA Technical Reports Server (NTRS)

    Bernath, Greg

    1993-01-01

    Erroneous measurements in multisensor navigation systems must be detected and isolated. A recursive estimator can find fast growing errors; a least squares batch estimator can find slow growing errors. This process is called fault detection. A protection radius can be calculated as a function of time for a given location. This protection radius can be used to guarantee the integrity of the navigation data. Fault isolation can be accomplished using either a snapshot method or by examining the history of the fault detection statistics.

  10. Fault detection and isolation

    NASA Technical Reports Server (NTRS)

    Bernath, Greg

    1994-01-01

    In order for a current satellite-based navigation system (such as the Global Positioning System, GPS) to meet integrity requirements, there must be a way of detecting erroneous measurements, without help from outside the system. This process is called Fault Detection and Isolation (FDI). Fault detection requires at least one redundant measurement, and can be done with a parity space algorithm. The best way around the fault isolation problem is not necessarily isolating the bad measurement, but finding a new combination of measurements which excludes it.
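The detect-then-exclude idea can be illustrated on a toy scalar problem. Real GPS RAIM uses a parity-space test on the linearized pseudorange model; the mean-residual test and threshold below are simplifying assumptions.

```python
# Toy illustration of snapshot fault detection and exclusion with
# redundant measurements of one quantity.

def residual_test(measurements, threshold):
    """Detect: flag if any measurement strays too far from the mean."""
    mean = sum(measurements) / len(measurements)
    return max(abs(m - mean) for m in measurements) > threshold

def exclude_fault(measurements, threshold):
    """Isolate by exclusion: rather than naming the bad measurement,
    search for a combination of measurements that passes the test."""
    if not residual_test(measurements, threshold):
        return list(measurements)
    for i in range(len(measurements)):
        subset = measurements[:i] + measurements[i + 1:]
        if not residual_test(subset, threshold):
            return subset
    return []  # no consistent combination found

obs = [100.1, 99.9, 100.0, 104.0]  # the last measurement is erroneous
print(exclude_fault(obs, 1.0))  # → [100.1, 99.9, 100.0]
```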

  11. Fault zone structure of the Wildcat fault in Berkeley, California - Field survey and fault model test -

    NASA Astrophysics Data System (ADS)

    Ueta, K.; Onishi, C. T.; Karasaki, K.; Tanaka, S.; Hamada, T.; Sasaki, T.; Ito, H.; Tsukuda, K.; Ichikawa, K.; Goto, J.; Moriya, T.

    2010-12-01

    In order to develop hydrologic characterization technology for fault zones, it is desirable to clarify the relationship between the geologic structure and hydrologic properties of fault zones. To this end, we are performing surface-based geologic and trench investigations, geophysical surveys and borehole-based hydrologic investigations along the Wildcat fault in Berkeley, California to investigate the effect of fault zone structure on regional hydrology. The present paper outlines the fault zone structure of the Wildcat fault in Berkeley on the basis of results from trench excavation surveys. The approximately 20 - 25 km long Wildcat fault is located within the Berkeley Hills and extends northwest-southeast from Richmond to Oakland, subparallel to the Hayward fault. The Wildcat fault, which is a predominantly right-lateral strike-slip fault, steps right in a releasing bend at the Berkeley Hills region. A total of five trenches have been excavated across the fault to investigate the deformation structure of the fault zone in the bedrock. Along the Wildcat fault, multiple fault surfaces branch, bend and run parallel to one another, forming a complicated shear zone. The shear zone is ~ 300 m in width, and the fault surfaces may be classified into the following two groups: 1) Fault surfaces offsetting middle Miocene Claremont Chert on the east against late Miocene Orinda Formation and/or San Pablo Group on the west. These NNW-SSE trending fault surfaces dip 50 - 60° to the southwest. Along the fault surfaces, fault gouge up to 1 cm wide and foliated cataclasite up to 60 cm wide can be observed. S-C fabrics of the fault gouge and foliated cataclasite show normal right-slip shear sense. 2) Fault surfaces forming a positive flower structure in Claremont Chert. These NW-SE trending fault surfaces are sub-vertical or steeply dipping. Along the fault surfaces, fault gouge up to 3 cm wide and foliated cataclasite up to 200 cm wide can be observed. S-C fabrics of the fault

  12. OpenStudio - Fault Modeling

    SciTech Connect

    Frank, Stephen; Robertson, Joseph; Cheung, Howard; Horsey, Henry

    2014-09-19

    This software record documents the OpenStudio fault model development portion of the Fault Detection and Diagnostics LDRD project. The software provides a suite of OpenStudio measures (scripts) for modeling typical HVAC system faults in commercial buildings, along with supporting materials: example projects and OpenStudio measures for reporting fault costs and energy impacts.

  13. Hayward Fault, California Interferogram

    NASA Image and Video Library

    2000-08-17

    This image of California's Hayward fault is an interferogram created using a pair of images taken by the European Space Agency's ERS-1 and ERS-2 satellites in June 1992 and September 1997 over the central San Francisco Bay in California.

  14. Faults in Claritas Fossae

    NASA Image and Video Library

    2011-07-15

    NASA's Mars Reconnaissance Orbiter captured this image of the Claritas Fossae region, characterized by systems of graben. A graben forms when a block of the planet's crust drops down between two faults, due to extension, or pulling, of the crust.

  15. Glossary of normal faults

    NASA Astrophysics Data System (ADS)

    Peacock, D. C. P.; Knipe, R. J.; Sanderson, D. J.

    2000-03-01

    Increased interest in normal faults and extended terranes has led to the development of an increasingly complex terminology. The most important terms are defined in this paper, with original references being given wherever possible, along with examples of current usage.

  16. Hayward Fault, California Interferogram

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This image of California's Hayward fault is an interferogram created using a pair of images taken by Synthetic Aperture Radar(SAR) combined to measure changes in the surface that may have occurred between the time the two images were taken.

    The images were collected by the European Space Agency's Remote Sensing satellites ERS-1 and ERS-2 in June 1992 and September 1997 over the central San Francisco Bay in California.

    The radar image data are shown as a gray-scale image, with the interferometric measurements that show the changes rendered in color. Only the urbanized area could be mapped with these data. The color changes from orange tones to blue tones across the Hayward fault (marked by a thin red line) show about 2-3 centimeters (0.8-1.1 inches) of gradual displacement or movement of the southwest side of the fault. The block west of the fault moved horizontally toward the northwest during the 63 months between the acquisition of the two SAR images. This fault movement is called aseismic creep because the fault moved slowly without generating an earthquake.

    Scientists are using the SAR interferometry along with other data collected on the ground to monitor this fault motion in an attempt to estimate the probability of an earthquake on the Hayward fault, which last had a major earthquake of magnitude 7 in 1868. This analysis indicates that the northern part of the Hayward fault is creeping all the way from the surface to a depth of 12 kilometers (7.5 miles). This suggests that the potential for a large earthquake on the northern Hayward fault might be less than previously thought. The blue area to the west (lower left) of the fault near the center of the image seemed to move upward relative to the yellow and orange areas nearby by about 2 centimeters (0.8 inches). The cause of this apparent motion is not yet confirmed, but the rise of groundwater levels during the time between the images may have caused the reversal of a small portion of the subsidence that

  18. Fault tolerant magnetic bearings

    SciTech Connect

    Maslen, E.H.; Sortore, C.K.; Gillies, G.T.; Williams, R.D.; Fedigan, S.J.; Aimone, R.J.

    1999-07-01

    A fault tolerant magnetic bearing system was developed and demonstrated on a large flexible-rotor test rig. The bearing system comprises a high speed, fault tolerant digital controller, three high capacity radial magnetic bearings, one thrust bearing, conventional variable reluctance position sensors, and an array of commercial switching amplifiers. Controller fault tolerance is achieved through a very high speed voting mechanism which implements triple modular redundancy with a powered spare CPU, thereby permitting failure of up to three CPU modules without system failure. Amplifier/cabling/coil fault tolerance is achieved by using a separate power amplifier for each bearing coil and permitting amplifier reconfiguration by the controller upon detection of faults. This allows hot replacement of failed amplifiers without any system degradation and without providing any excess amplifier kVA capacity over the nominal system requirement. Implemented on a large (2440 mm in length) flexible rotor, the system shows excellent rejection of faults including the failure of three CPUs as well as failure of two adjacent amplifiers (or cabling) controlling an entire stator quadrant.

  19. Pen Branch Fault Program

    SciTech Connect

    Price, V.; Stieve, A.L.; Aadland, R.

    1990-09-28

    Evidence from subsurface mapping and seismic reflection surveys at the Savannah River Site (SRS) suggests the presence of a fault which displaces Cretaceous through Tertiary (90-35 million years ago) sediments. This feature has been described and named the Pen Branch fault (PBF) in a recent Savannah River Laboratory (SRL) paper (DP-MS-88-219). Because the fault is located near operating nuclear facilities, public perception and federal regulations require a thorough investigation of the fault to determine whether any seismic hazard exists. A phased program with various elements has been established to investigate the PBF to address the Nuclear Regulatory Commission regulatory guidelines represented in 10 CFR 100 Appendix A. The objective of the PBF program is to fully characterize the nature of the PBF (ESS-SRL-89-395). This report briefly presents the current understanding of the Pen Branch fault based on shallow drilling activities completed in the fall of 1989 (PBF well series) and subsequent core analyses (SRL-ESS-90-145). The results are preliminary and the investigations are ongoing; however, they indicate that the fault is not capable. In conjunction with the shallow drilling, other activities are planned or in progress. 7 refs., 8 figs., 1 tab.

  20. Packaged Fault Model for Geometric Segmentation of Active Faults Into Earthquake Source Faults

    NASA Astrophysics Data System (ADS)

    Nakata, T.; Kumamoto, T.

    2004-12-01

    In Japan, the empirical formula proposed by Matsuda (1975), based mainly on the length of historical surface fault ruptures and magnitude, is generally applied to estimate the size of future earthquakes from the extent of existing active faults for seismic hazard assessment. Therefore, the validity of the active fault length and the definition of individual segment boundaries where propagating ruptures terminate are crucial to the reliability of the assessments. It is, however, not likely that we can clearly identify behavioral earthquake segments from observation of surface faulting during the historical period, because most active faults in Japan have recurrence intervals longer than 1000 years. Besides, uncertainties of the datasets obtained mainly from fault trenching studies are quite large for fault grouping/segmentation. This is why new methods or criteria should be applied for active fault grouping/segmentation, and one of the candidates may be a geometric criterion of active faults. Matsuda (1990) used "five kilometers" as a critical distance for grouping and separation of neighboring active faults. On the other hand, Nakata and Goto (1998) proposed geometric criteria such as (1) branching features of active fault traces and (2) characteristic patterns of vertical-slip distribution along the fault traces as tools to predict the rupture length of future earthquakes. Branching during fault rupture propagation is regarded as an effective energy dissipation process and could result in final rupture termination. With respect to the characteristic pattern of vertical-slip distribution, especially with strike-slip components, the up-thrown sides along the faults are, in general, located on the fault blocks in the direction of relative strike-slip. Applying these new geometric criteria to high-resolution active fault distribution maps, fault grouping/segmentation could be conducted more practically. We tested this model

  1. Fault Roughness Records Strength

    NASA Astrophysics Data System (ADS)

    Brodsky, E. E.; Candela, T.; Kirkpatrick, J. D.

    2014-12-01

    Fault roughness is commonly ~0.1-1% at the outcrop exposure scale. More mature faults are smoother than less mature ones, but the overall range of roughness is surprisingly limited, which suggests dynamic control. In addition, the power spectra of many exposed fault surfaces follow a single power law over scales from millimeters to tens of meters. This is another surprising observation, as distinct structures such as slickenlines and mullions are clearly visible on the same surfaces at well-defined scales. We can reconcile both observations by suggesting that the roughness of fault surfaces is controlled by the maximum strain that can be supported elastically in the wallrock. If the fault surface topography requires more than 0.1-1% strain, it fails. Invoking wallrock strength explains two additional observations on the Corona Heights fault, for which we have extensive roughness data. Firstly, the surface is isotropic below a scale of 30 microns and has grooves at larger scales. Samples from at least three other faults (Dixie Valley, Mount St. Helens and San Andreas) are also isotropic at scales below tens of microns. If grooves can only persist when the walls of the grooves have a sufficiently low slope to maintain the shape, this scale of isotropy can be predicted based on the measured slip-perpendicular roughness data. The observed 30 micron scale at Corona Heights is consistent with an elastic strain of 0.01 estimated from the observed slip-perpendicular roughness with a Hurst exponent of 0.8. The second observation at Corona Heights is that slickenlines are not deflected around meter-scale mullions. Yielding of these mullions at centimeter to meter scale is predicted from the slip-parallel roughness as measured here. The success of the strain criterion for Corona Heights supports it as the appropriate control on fault roughness. Micromechanically, the criterion implies that failure of the fault surface is a continual process during slip. Macroscopically, the

  2. Rough Faults, Distributed Weakening, and Off-Fault Deformation

    NASA Astrophysics Data System (ADS)

    Griffith, W. A.; Nielsen, S. B.; di Toro, G.; Smith, S. A.; Niemeijer, A. R.

    2009-12-01

    We report systematic spatial variations of fault rocks along non-planar strike-slip faults cross-cutting the Lake Edison Granodiorite, Sierra Nevada, California (Sierran Wavy Fault) and the Lobbia outcrops of the Adamello Batholith in the Italian Alps (Lobbia Wavy Fault). In the case of the Sierran fault, pseudotachylyte formed at contractional fault bends, where it is found as thin (1-2 mm) fault-parallel veins. Epidote and chlorite developed in the same seismic context as the pseudotachylyte and are especially abundant in extensional fault bends. We argue that the presence of fluids, as illustrated by this example, does not necessarily preclude the development of frictional melt. In the case of the Lobbia fault, pseudotachylyte is present in variable thickness along the length of the fault, but the pseudotachylyte veins thicken and pool in extensional bends. The Lobbia fault surface is self-affine, and we conduct a quantitative analysis of microcrack distribution, stress, and friction along the fault. Numerical modeling results show that opening in extensional bends and localized thermal weakening in contractional bends counteract resistance encountered by fault waviness, resulting in an overall weaker fault than suggested by the corresponding static friction coefficient. Models also predict stress redistribution around bends in the faults which mirror microcrack distributions, indicating significant elastic and anelastic strain energy is dissipated into the wall rocks due to non-planar fault geometry. Together these observations suggest that, along non-planar faults, damage and energy dissipation occurs along the entire fault during slip, rather than being confined to the region close to the crack tip as predicted by classical fracture mechanics.

  3. Diagnosable systems for intermittent faults

    NASA Technical Reports Server (NTRS)

    Mallela, S.; Masson, G. M.

    1978-01-01

    The fault diagnosis capabilities of systems composed of interconnected units capable of testing each other are studied for the case of systems with intermittent faults. A central role is played by the concept of t(i)-fault diagnosability. A system is said to be t(i)-fault diagnosable when it is such that if no more than t(i) units are intermittently faulty then a fault-free unit will never be diagnosed as faulty and the diagnosis at any time is at worst incomplete. Necessary and sufficient conditions for t(i)-fault diagnosability are proved, and bounds for t(i) are established. The conditions are in general more restrictive than those for permanent-fault diagnosability. For intermittent faults there is only one testing strategy (repetitive testing), and consequently only one type of intermittent-fault diagnosable system.

  4. Validated Fault Tolerant Architectures for Space Station

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.

    1990-01-01

    Viewgraphs on validated fault tolerant architectures for space station are presented. Topics covered include: fault tolerance approach; advanced information processing system (AIPS); and fault tolerant parallel processor (FTPP).

  5. Changes in fault length distributions due to fault linkage

    NASA Astrophysics Data System (ADS)

    Xu, Shunshan; Nieto-Samaniego, A. F.; Alaniz-Álvarez, S. A.; Velasquillo-Martínez, L. G.; Grajales-Nishimura, J. M.; García-Hernández, J.; Murillo-Muñetón, G.

    2010-01-01

    Fault linkage plays an important role in the growth of faults. In this paper we analyze a published synthetic model to simulate fault linkage. The results of the simulation indicate that fault linkage is the cause of the shallower local slopes on the length-frequency plots. The shallower local slopes lead to two effects. First, the curves of log cumulative number against log length exhibit fluctuating shapes as reported in the literature. Second, for a given fault population, the power-law exponents after linkage are negatively related to the linked length scales. Also, we present datasets of fault length measured from four structural maps at the Cantarell oilfield in the southern Gulf of Mexico (offshore Campeche). The results demonstrate that the fault length data, corrected by seismic resolution at the tip fault zone, also exhibit fluctuating curves of log cumulative frequency vs. log length. The steps (shallower slopes) on the curves imply the scale positions of fault linkage. We conclude that fault linkage is the main reason for the fluctuating shapes of log cumulative frequency vs. log length. On the other hand, our data show that the two-tip faults are better for linear analysis between maximum displacement (D) and length (L). Evidently, two-tip faults underwent fewer fault linkages and interactions.
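The power-law exponent of a cumulative length-frequency curve can be estimated as in the sketch below, by least squares in log-log space. The fault lengths are synthetic, drawn from an exact N ~ L^(-1) population, not the Cantarell data.

```python
import math

# Sketch: estimating the power-law exponent c in N(>= L) ~ L^(-c)
# from a fault length population.

def powerlaw_exponent(lengths):
    xs = sorted(lengths, reverse=True)
    # rank i+1 = cumulative number of faults with length >= L
    pts = [(math.log(L), math.log(i + 1)) for i, L in enumerate(xs)]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    slope = sum((x - mx) * (y - my) for x, y in pts) / sum(
        (x - mx) ** 2 for x, _ in pts)
    return -slope  # slope of log N vs. log L is -c

lengths = [100.0 / k for k in range(1, 50)]  # exact N ~ L^(-1)
print(round(powerlaw_exponent(lengths), 2))  # → 1.0
```

On real data, linked faults would show up as local deviations from a single straight line in this log-log space, which is the fluctuation the abstract describes.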

  6. Fault terminations, Seminoe Mountains, Wyoming

    SciTech Connect

    Dominic, J.B.; McConnell, D.A. (Dept. of Geology)

    1992-01-01

    Two basement-involved faults terminate in folds in the Seminoe Mountains. Mesoscopic and macroscopic structures in sedimentary rocks provide clues to the interrelationship of faults and folds in this region, and on the linkage between faulting and folding in general. The Hurt Creek fault trends 320° and has maximum separation of 1.5 km measured at the basement/cover contact. Separation on the fault decreases upsection to zero within the Jurassic Sundance Formation. Unfaulted rock units form an anticline around the fault tip. The complementary syncline is angular with planar limbs and a narrow hinge zone. The syncline axial trace intersects the fault in the footwall at the basement/cover cut-off. Map patterns are interpreted to show thickening of Mesozoic units adjacent to the syncline hinge. In contrast, extensional structures are common in the faulted anticline within the Permian Goose Egg and Triassic Chugwater Formations. A hanging wall splay fault loses separation into the Goose Egg Formation, which is thinned by 50% at the fault tip. Mesoscopic normal faults are oriented 320-340° and have an average inclination of 75° SW. Megaboudins of Chugwater are present in the footwall of the Hurt Creek fault, immediately adjacent to the fault trace. The Black Canyon fault transported Precambrian-Pennsylvanian rocks over Pennsylvanian Tensleep sandstone. This fault is layer-parallel at the top of the Tensleep and loses separation along strike into an unfaulted syncline in the Goose Egg Formation. Shortening in the pre-Permian units is accommodated by slip on the basement-involved Black Canyon fault. Equivalent shortening in Permian-Cretaceous units occurs on a system of "thin-skinned" thrust faults.

  7. Fault displacement hazard for strike-slip faults

    USGS Publications Warehouse

    Petersen, M.D.; Dawson, T.E.; Chen, R.; Cao, T.; Wills, C.J.; Schwartz, D.P.; Frankel, A.D.

    2011-01-01

    In this paper we present a methodology, data, and regression equations for calculating the fault rupture hazard at sites near steeply dipping, strike-slip faults. We collected and digitized on-fault and off-fault displacement data for 9 global strike-slip earthquakes ranging from moment magnitude M 6.5 to M 7.6 and supplemented these with displacements from 13 global earthquakes compiled by Wesnousky (2008), who considers events up to M 7.9. Displacements on the primary fault fall off at the rupture ends and are often measured in meters, while displacements on secondary (off-fault) or distributed faults may measure a few centimeters up to more than a meter and decay with distance from the rupture. Probability of earthquake rupture is less than 15% for 200 m × 200 m cells and is less than 2% for 25 m × 25 m cells at distances greater than 200 m from the primary-fault rupture. Therefore, the hazard for off-fault ruptures is much lower than the hazard near the fault. Our data indicate that rupture displacements up to 35 cm can be triggered on adjacent faults at distances out to 10 km or more from the primary-fault rupture. An example calculation shows that, for an active fault which has repeated large earthquakes every few hundred years, fault rupture hazard analysis should be an important consideration in the design of structures or lifelines that are located near the principal fault, within about 150 m of well-mapped active faults with a simple trace and within 300 m of faults with poorly defined or complex traces.

  8. Fault tolerant linear actuator

    DOEpatents

    Tesar, Delbert

    2004-09-14

    In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.

  9. DIFFERENTIAL FAULT SENSING CIRCUIT

    DOEpatents

    Roberts, J.H.

    1961-09-01

    A differential fault sensing circuit is designed for detecting arcing in high-voltage vacuum tubes arranged in parallel. A circuit is provided which senses differences in voltages appearing between corresponding elements likely to fault. Sensitivity of the circuit is adjusted to some level above which arcing will cause detectable differences in voltage. For particular corresponding elements, a group of pulse transformers are connected in parallel with diodes connected across the secondaries thereof so that only voltage excursions are transmitted to a thyratron which is biased to the sensitivity level mentioned.

  10. Computer hardware fault administration

    DOEpatents

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
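The identify-and-reroute behavior can be sketched with breadth-first search over two adjacency maps. The topologies and the all-or-nothing fallback policy below are illustrative assumptions, not the patented routing algorithm.

```python
from collections import deque

# Sketch: route around a defective link in the first network by
# falling back to the second, independent network.

def bfs_route(network, src, dst):
    """Shortest hop path by breadth-first search; None if unreachable."""
    prev, frontier = {src: None}, deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in network.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                frontier.append(nxt)
    return None

def route(primary, secondary, defective_links, src, dst):
    """Prune defective links from the primary network; if no primary
    path survives, fall back to the secondary network."""
    pruned = {
        n: [m for m in nbrs
            if (n, m) not in defective_links
            and (m, n) not in defective_links]
        for n, nbrs in primary.items()
    }
    return bfs_route(pruned, src, dst) or bfs_route(secondary, src, dst)

primary = {0: [1], 1: [0, 2], 2: [1]}      # a 3-node line: 0-1-2
secondary = {0: [2], 1: [2], 2: [0, 1]}    # independent second network
print(route(primary, secondary, {(1, 2)}, 0, 2))  # → [0, 2]
```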

  11. Fault tree models for fault tolerant hypercube multiprocessors

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Tuazon, Jezus O.

    1991-01-01

    Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.

  13. Fault diagnosis of analog circuits

    NASA Astrophysics Data System (ADS)

    Bandler, J. W.; Salama, A. E.

    1985-08-01

    Theory and algorithms associated with four main categories of modern techniques used to locate faults in analog circuits are presented. These four general approaches are: the fault dictionary (FDA), the parameter identification (PIA), the fault verification (FVA), and the approximation (AA) approaches. The preliminaries and problems associated with the FDA, such as fault dictionary construction, the methods of optimum measurement selection, fault isolation criteria, and efficient methods of fault simulation, are discussed. The PIA techniques that utilize either linear or nonlinear systems of equations for identification of network elements are examined. Description of the FVA includes node-fault diagnosis, branch-fault diagnosis, subnetwork testability conditions, as well as combinatorial techniques, the failure-bound technique, and the network decomposition technique. In the AA, probabilistic methods and optimization-based methods are considered. In addition, the artificial intelligence technique and the different measures of testability are presented. A series of block diagrams is included.
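The fault dictionary approach (FDA) can be sketched as nearest-signature matching: simulate each catalogued fault once, store the measurement signature, then diagnose a circuit under test by the nearest stored signature. The signatures, fault names, and distance metric below are invented for illustration.

```python
# Sketch of fault dictionary lookup for analog circuit diagnosis.

def nearest_fault(dictionary, measured):
    """Return the catalogued fault whose signature is closest to the
    measured response (summed squared error)."""
    def dist(signature):
        return sum((a - b) ** 2 for a, b in zip(signature, measured))
    return min(dictionary, key=lambda fault: dist(dictionary[fault]))

fault_dictionary = {          # test-point voltages per fault condition
    "nominal":  [5.0, 2.5, 1.2],
    "R1 open":  [6.8, 0.1, 1.2],
    "C2 short": [5.0, 2.5, 0.0],
}
print(nearest_fault(fault_dictionary, [6.7, 0.3, 1.1]))  # → R1 open
```

The optimum measurement selection discussed in the abstract amounts to choosing test points that maximize the separation between these signatures.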

  14. Fault injection experiments using FIAT

    NASA Technical Reports Server (NTRS)

    Barton, James H.; Czeck, Edward W.; Segall, Zary Z.; Siewiorek, Daniel P.

    1990-01-01

    The results of several experiments conducted using the fault-injection-based automated testing (FIAT) system are presented. FIAT is capable of emulating a variety of distributed system architectures, and it provides the capabilities to monitor system behavior and inject faults for the purpose of experimental characterization and validation of a system's dependability. The experiments consist of exhaustively injecting three separate fault types into various locations, encompassing both the code and data portions of memory images, of two distinct applications executed with several different data values and sizes. Fault types are variations of memory bit faults. The results show that there are a limited number of system-level fault manifestations. These manifestations follow a normal distribution for each fault type. Error detection latencies are found to be normally distributed. The methodology can be used to predict the system-level fault responses during the system design stage.
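The memory bit-fault variations described above can be sketched as bit operations on a byte buffer; representing the memory image as a bytearray is an assumption for illustration, as FIAT operates on real process images.

```python
# Sketch: injecting memory bit faults into a byte buffer standing in
# for a code or data memory image.

def inject_bit_flip(image, byte_offset, bit):
    """Flip one bit of the image in place (transient bit fault)."""
    image[byte_offset] ^= 1 << bit
    return image

def inject_stuck_at(image, byte_offset, bit, value):
    """Force one bit to 0 or 1 (stuck-at variant)."""
    if value:
        image[byte_offset] |= 1 << bit
    else:
        image[byte_offset] &= ~(1 << bit) & 0xFF
    return image

mem = bytearray(b"\x00\x0f\xff")
inject_bit_flip(mem, 0, 3)      # 0x00 -> 0x08
inject_stuck_at(mem, 2, 7, 0)   # 0xff -> 0x7f
print(mem.hex())  # → 080f7f
```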

  15. The property of fault zone and fault activity of Shionohira Fault, Fukushima, Japan

    NASA Astrophysics Data System (ADS)

    Seshimo, K.; Aoki, K.; Tanaka, Y.; Niwa, M.; Kametaka, M.; Sakai, T.; Tanaka, Y.

    2015-12-01

    The April 11, 2011 Fukushima-ken Hamadori Earthquake (hereafter the 4.11 earthquake) formed co-seismic surface ruptures trending in the NNW-SSE direction in Iwaki City, Fukushima Prefecture, which were newly named the Shionohira Fault by Ishiyama et al. (2011). This earthquake was characterized by westward dipping normal slip faulting, with a maximum displacement of about 2 m (e.g., Kurosawa et al., 2012). To the south of the area, lineaments with the same trend are recognized even though no surface ruptures occurred there in the earthquake. In an attempt to elucidate the differences between active and non-active segments of the fault, this report discusses the results of observation of fault outcrops along the Shionohira Fault as well as Coulomb stress calculations. Only a few outcrops expose basement rocks of both the hanging-wall and foot-wall of the fault plane. Three of these outcrops (Kyodo-gawa, Shionohira and Betto) were selected for investigation. In addition, a fault outcrop (Nameishi-minami) located about 300 m south of the southern tip of the surface ruptures was investigated. The authors carried out observations of outcrops, polished slabs and thin sections, and performed X-ray diffraction (XRD) on fault materials. As a result, fault zones originating from schists were investigated at Kyodo-gawa and Betto. A thick fault gouge was cut by a fault plane of the 4.11 earthquake in each outcrop. The fault materials originating from schists were in fault contact with (possibly Neogene) weakly deformed sandstone at Shionohira. A thin fault gouge was found along the fault plane of the 4.11 earthquake. A small-scale fault zone with thin fault gouge was observed at Nameishi-minami. According to the XRD analysis, smectite was detected in the gouges from Kyodo-gawa, Shionohira and Betto, but not in the gouge from Nameishi-minami.

  16. The engine fuel system fault analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Song, Hanqiang; Yang, Changsheng; Zhao, Wei

    2017-05-01

To improve the reliability of the engine fuel system, the typical fault factors of the engine fuel system were analyzed from the points of view of structure and function. The fault characteristics were obtained by building the fuel system fault tree. By applying the failure mode and effects analysis (FMEA) method, several factors of the key component, the fuel regulator, were obtained, including the fault modes, the fault causes, and the fault effects. All of this laid the foundation for the subsequent development of a fault diagnosis system.
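
A fault tree combines basic component failures through AND/OR gates up to a top event. As a minimal sketch of that evaluation step (the gate structure and all event probabilities below are hypothetical illustrations, not values from the paper):

```python
# Minimal fault-tree gate evaluation, assuming independent basic events.
# The fuel-regulator events and probabilities are hypothetical.

def or_gate(*p):
    # P(at least one input event occurs) = 1 - P(none occur)
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):
    # P(all input events occur)
    q = 1.0
    for pi in p:
        q *= pi
    return q

# Hypothetical basic-event probabilities for a fuel-regulator subtree
p_spring_fatigue = 0.002
p_valve_sticking = 0.005
p_seal_leak = 0.001

# Top event: regulator fault if the valve sticks, OR the spring
# fatigues AND the seal leaks
p_regulator_fault = or_gate(p_valve_sticking,
                            and_gate(p_spring_fatigue, p_seal_leak))
print(round(p_regulator_fault, 6))
```

Real fault-tree tools additionally handle repeated events and minimal cut sets; this sketch only shows the gate arithmetic.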

  17. Perspective View, Garlock Fault

    NASA Image and Video Library

    2000-04-20

California's Garlock Fault, marking the northwestern boundary of the Mojave Desert, lies at the foot of the mountains, running from the lower right to the top center of this image, which was created with data from NASA's Shuttle Radar Topography Mission.

  18. Faults and Flows

    NASA Image and Video Library

    2014-10-20

Lava flows of Daedalia Planum can be seen in the top and bottom portions of this image from NASA's 2001 Mars Odyssey spacecraft. The ridge and linear depression in the central part of the image are part of Mangala Fossa, a fault-bounded graben.

  19. Row fault detection system

    DOEpatents

    Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward

    2010-02-23

    An apparatus and program product check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.
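
The neighbor-exchange check described above can be sketched as a serial simulation (the real apparatus runs the exchanges concurrently across the row). The fault model, node count, and function names here are hypothetical illustrations, not taken from the patent:

```python
# Sketch of a row fault check: each node exchanges a message with its
# adjacent neighbors, and the collected results are analyzed to locate
# a faulty node or link. The fault model is hypothetical.

def check_row(n_nodes, faulty=None):
    """Return the set of suspect nodes after the neighbor exchanges."""
    results = {}  # (i, j) -> True if the exchange succeeded
    for i in range(n_nodes - 1):
        j = i + 1
        # an exchange fails if either endpoint is the faulty node
        results[(i, j)] = faulty not in (i, j)
    # a node is suspect if every exchange it took part in failed
    suspects = set()
    for node in range(n_nodes):
        exchanges = [ok for (a, b), ok in results.items() if node in (a, b)]
        if exchanges and not any(exchanges):
            suspects.add(node)
    return suspects

print(check_row(8, faulty=3))  # the failed exchanges isolate node 3
print(check_row(8))            # no fault: empty set
```

Distinguishing a faulty node from a faulty link would need the per-exchange results, which this analysis step already collects.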

  20. Row fault detection system

    DOEpatents

    Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward

    2008-10-14

    An apparatus, program product and method checks for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.

  1. Dynamic Fault Detection Chassis

    SciTech Connect

    Mize, Jeffery J

    2007-01-01

The high-frequency switching megawatt-class High Voltage Converter Modulator (HVCM) developed by Los Alamos National Laboratory for the Oak Ridge National Laboratory's Spallation Neutron Source (SNS) is now in operation. One of the major problems with the modulator systems is shoot-through conditions that can occur in an IGBT H-bridge topology, resulting in large fault currents and device failure within a few microseconds. The Dynamic Fault Detection Chassis (DFDC) is a fault monitoring system; it monitors transformer flux saturation using a window comparator, and dV/dt events on the cathode voltage caused by any abnormality such as capacitor breakdown, transformer primary turn shorts, or dielectric breakdown between the transformer primary and secondary. If faults are detected, the DFDC will inhibit the IGBT gate drives and shut the system down, significantly reducing the possibility of a shoot-through condition or other equipment-damaging events. In this paper, we present system integration considerations and performance characteristics of the DFDC, and discuss its ability to significantly reduce costly downtime for the entire facility.

  2. Fault-Mechanism Simulator

    ERIC Educational Resources Information Center

    Guyton, J. W.

    1972-01-01

    An inexpensive, simple mechanical model of a fault can be produced to simulate the effects leading to an earthquake. This model has been used successfully with students from elementary to college levels and can be demonstrated to classes as large as thirty students. (DF)

  3. Row fault detection system

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN

    2012-02-07

    An apparatus, program product and method check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.

  5. Fault-Related Sanctuaries

    NASA Astrophysics Data System (ADS)

    Piccardi, L.

    2001-12-01

Beyond the study of historical surface faulting events, this work investigates the possibility, in specific cases, of identifying pre-historical events whose memory survives in myths and legends. The myths of many famous sacred places of the ancient world contain relevant telluric references: "sacred" earthquakes, openings to the Underworld and/or chthonic dragons. Given the strong correspondence with local geological evidence, these myths may be considered as describing natural phenomena. It has been possible in this way to shed light on the geologic origin of famous myths (Piccardi, 1999, 2000 and 2001). Interdisciplinary research reveals that the origin of several ancient sanctuaries may be linked in particular to peculiar geological phenomena observed on local active faults (such as ground shaking and coseismic surface ruptures, gas and flame emissions, and strong underground rumblings). In many of these sanctuaries the sacred area lies directly above the active fault. In a few cases, faulting has also affected the archaeological relics, right through the main temple (e.g. Delphi, Cnidus, Hierapolis of Phrygia). The arrangement of the cult sites and the content of the related myths suggest that specific points along the trace of active faults were noticed in the past and worshipped as special `sacred' places, most likely interpreted as Hades' Doors. The mythological stratification of most of these sanctuaries dates back to prehistory, and points to a common derivation from the cult of the Mother Goddess (the Lady of the Doors), which was widespread since at least 25000 BC. The cult itself was later reconverted into various different divinities, while the `sacred doors' of the Great Goddess and/or the dragons (offspring of Mother Earth and generally regarded as Keepers of the Doors) persisted in more recent mythologies. 
Piccardi L., 1999: The "Footprints" of the Archangel: Evidence of Early-Medieval Surface Faulting at Monte Sant'Angelo (Gargano, Italy

  6. Earthquakes and fault creep on the northern San Andreas fault

    USGS Publications Warehouse

    Nason, R.

    1979-01-01

At present there is an absence of both fault creep and small earthquakes on the northern San Andreas fault, which had a magnitude 8 earthquake with 5 m of slip in 1906. The fault has apparently been dormant since the 1906 earthquake. One possibility is that the fault is 'locked' in some way and only produces great earthquakes. An alternative possibility, presented here, is that the lack of current activity on the northern San Andreas fault is due to a lack of sufficient elastic strain after the 1906 earthquake. This is indicated by geodetic measurements at Fort Ross in 1874, 1906 (post-earthquake), and 1969, which show that the strain accumulation by 1969 (69 × 10⁻⁶ engineering strain) was only about one-third of the strain release (rebound) in the 1906 earthquake (200 × 10⁻⁶ engineering strain). The large difference in seismicity before and after 1906, with many strong local earthquakes from 1836 to 1906 but only a few strong earthquakes from 1906 to 1976, also indicates a difference in elastic strain. The geologic characteristics (serpentine, fault straightness) of most of the northern San Andreas fault are very similar to those of the fault south of Hollister, where fault creep is occurring. Thus, the current absence of fault creep on the northern fault segment is probably due to a lack of sufficient elastic strain at the present time. © 1979.

  7. Quantifying Anderson's fault types

    USGS Publications Warehouse

    Simpson, R.W.

    1997-01-01

Anderson [1905] explained three basic types of faulting (normal, strike-slip, and reverse) in terms of the shape of the causative stress tensor and its orientation relative to the Earth's surface. Quantitative parameters can be defined which contain information about both shape and orientation [Célérier, 1995], thereby offering a way to distinguish fault-type domains on plots of regional stress fields and to quantify, for example, the degree of normal-faulting tendencies within strike-slip domains. This paper offers a geometrically motivated generalization of Angelier's [1979, 1984, 1990] shape parameters φ and ψ to new quantities named Aφ and Aψ. In their simple forms, Aφ varies from 0 to 1 for normal, 1 to 2 for strike-slip, and 2 to 3 for reverse faulting, and Aψ ranges from 0° to 60°, 60° to 120°, and 120° to 180°, respectively. After scaling, Aφ and Aψ agree to within 2% (or 1°), a difference of little practical significance, although Aφ has smoother analytical properties. A formulation distinguishing horizontal axes as well as the vertical axis is also possible, yielding an Aφ ranging from -3 to +3 and Aψ from -180° to +180°. The geometrically motivated derivation in three-dimensional stress space presented here may aid intuition and offers a natural link with traditional ways of plotting yield and failure criteria. Examples are given, based on models of Bird [1996] and Bird and Kong [1994], of the use of the Anderson fault parameters Aφ and Aψ for visualizing tectonic regimes defined by regional stress fields. Copyright 1997 by the American Geophysical Union.
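
The simple-form ranges quoted in the abstract (0 to 1 normal, 1 to 2 strike-slip, 2 to 3 reverse) suggest a small classifier. This is only a sketch of that mapping; the handling of values exactly at the boundaries is my choice, not the paper's:

```python
# Map Simpson's Anderson fault parameter (simple form, range 0..3)
# to a tectonic regime, using the ranges quoted in the abstract.
# Boundary values (exactly 1.0 or 2.0) are assigned to the upper bin
# here, an arbitrary choice.

def anderson_regime(a_phi):
    if not 0.0 <= a_phi <= 3.0:
        raise ValueError("simple-form parameter lies in [0, 3]")
    if a_phi < 1.0:
        return "normal"
    if a_phi < 2.0:
        return "strike-slip"
    return "reverse"

print(anderson_regime(0.4))   # normal
print(anderson_regime(1.5))   # strike-slip
print(anderson_regime(2.8))   # reverse
```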

  8. Effect of surrounding fault on distributed fault of blind reverse fault in sedimentary basin - Uemachi Faults, Osaka Basin, Southwest Japan -

    NASA Astrophysics Data System (ADS)

    Inoue, N.

    2012-12-01

Several large cities and metropolitan areas, such as Osaka and Kobe, are located in the Osaka basin, which has been filled by the Pleistocene Osaka Group and later sediments. The basin is surrounded by E-W trending strike-slip faults and N-S trending reverse faults. The N-S trending, 42-km-long Uemachi faults traverse the central part of Osaka city. Various geological and geophysical surveys, such as seismic reflection, microtremor and gravity surveys and deep boreholes, have revealed a complex basement configuration along the Uemachi faults. The depth of the basement is shallow in the central part of the Osaka plain, and the Uemachi faults are located on the western side of this basement upland. In the central part of the Uemachi faults, the displacement decreases. The fault model of the Uemachi faults consists of two parts, north and south. The NE-SW trending branch faults, the Suminoe and Sakuragawa flexures, are also recognized around the central part on the basis of various surveys. Kusumoto et al. (2001) reported, on the basis of a dislocation model, that the surrounding faults alone can form the basement configuration without the Uemachi fault model. Inoue et al. (2011) performed various parameter studies of dislocation models and gravity changes based on a simplified fault model designed from the distribution of the real faults; that model consisted of 7 faults including the Uemachi faults. In this study, the Osaka-wan fault was also considered in the dislocation model. The results reproduce the basement configuration including the NE-SW branch faults. The basement configuration differs from the subsurface structure derived from the abundant geotechnical borehole data around the central part of the Uemachi faults. The tectonic development process, including erosion and sea-level change, is required to understand the structure from the basement to the surface of the Uemachi Fault Zone. This research is partly funded by the Comprehensive

  9. Abnormal fault-recovery characteristics of the fault-tolerant multiprocessor uncovered using a new fault-injection methodology

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1991-01-01

An investigation was made in AIRLAB of the fault-handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault-handling errors detected during fault-injection experiments were characterized. In these fault-injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on average. System design weaknesses allow active faults to exercise a part of the fault-management software that handles Byzantine, or lying, faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design of, and interface between, the simplex error-detection hardware and the error-processing software; (2) the functional capabilities of the FTMP system bus; and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault-management software.

  10. Fault intersections along the Hosgri Fault Zone, Central California

    NASA Astrophysics Data System (ADS)

    Watt, J. T.; Johnson, S. Y.; Langenheim, V. E.

    2011-12-01

    It is well-established that stresses concentrate at fault intersections or bends when subjected to tectonic loading, making focused studies of these areas particularly important for seismic hazard analysis. In addition, detailed fault models can be used to investigate how slip on one fault might transfer to another during an earthquake. We combine potential-field, high-resolution seismic-reflection, and multibeam bathymetry data with existing geologic and seismicity data to investigate the fault geometry and connectivity of the Hosgri, Los Osos, and Shoreline faults offshore of San Luis Obispo, California. The intersection of the Hosgri and Los Osos faults in Estero Bay is complex. The offshore extension of the Los Osos fault, as imaged with multibeam and high-resolution seismic data, is characterized by a west-northwest-trending zone (1-3 km wide) of near vertical faulting. Three distinct strands (northern, central, and southern) are visible on shallow seismic reflection profiles. The steep dip combined with dramatic changes in reflection character across mapped faults within this zone suggests horizontal offset of rock units and argues for predominantly strike-slip motion, however, the present orientation of the fault zone suggests oblique slip. As the Los Osos fault zone approaches the Hosgri fault, the northern and central strands become progressively more northwest-trending in line with the Hosgri fault. The northern strand runs subparallel to the Hosgri fault along the edge of a long-wavelength magnetic anomaly, intersecting the Hosgri fault southwest of Point Estero. Geophysical modeling suggests the northern strand dips 70° to the northeast, which is in agreement with earthquake focal mechanisms that parallel this strand. The central strand bends northward and intersects the Hosgri fault directly west of Morro Rock, corresponding to an area of compressional deformation visible in shallow seismic-reflection profiles. The southern strand of the Los Osos

  11. Fault linkage and continental breakup

    NASA Astrophysics Data System (ADS)

    Cresswell, Derren; Lymer, Gaël; Reston, Tim; Stevenson, Carl; Bull, Jonathan; Sawyer, Dale; Morgan, Julia

    2017-04-01

    The magma-poor rifted margin off the west coast of Galicia (NW Spain) has provided some of the key observations in the development of models describing the final stages of rifting and continental breakup. In 2013, we collected a 68 x 20 km 3D seismic survey across the Galicia margin, NE Atlantic. Processing through to 3D Pre-stack Time Migration (12.5 m bin-size) and 3D depth conversion reveals the key structures, including an underlying detachment fault (the S detachment), and the intra-block and inter-block faults. These data reveal multiple phases of faulting, which overlap spatially and temporally, have thinned the crust to between zero and a few km thickness, producing 'basement windows' where crustal basement has been completely pulled apart and sediments lie directly on the mantle. Two approximately N-S trending fault systems are observed: 1) a margin proximal system of two linked faults that are the upward extension (breakaway faults) of the S; in the south they form one surface that splays northward to form two faults with an intervening fault block. These faults were thus demonstrably active at one time rather than sequentially. 2) An oceanward relay structure that shows clear along strike linkage. Faults within the relay trend NE-SW and heavily dissect the basement. The main block bounding faults can be traced from the S detachment through the basement into, and heavily deforming, the syn-rift sediments where they die out, suggesting that the faults propagated up from the S detachment surface. Analysis of the fault heaves and associated maps at different structural levels show complementary fault systems. The pattern of faulting suggests a variation in main tectonic transport direction moving oceanward. This might be interpreted as a temporal change during sequential faulting, however the transfer of extension between faults and the lateral variability of fault blocks suggests that many of the faults across the 3D volume were active at least in part

  12. Holocene faulting on the Mission fault, northwest Montana

    SciTech Connect

    Ostenaa, D.A.; Klinger, R.E.; Levish, D.R. )

    1993-04-01

South of Flathead Lake, fault scarps on late Quaternary surfaces are nearly continuous for 45 km along the western flank of the Mission Range. On late Pleistocene alpine lateral moraines, scarp heights reach a maximum of 17 m. Scarp heights on post-glacial Lake Missoula surfaces range from 2.6-7.2 m and maximum scarp angles range from 10°-24°. The stratigraphy exposed in seven trenches across the fault demonstrates that the post-glacial Lake Missoula scarps resulted from at least two surface-faulting events. The larger scarp heights on late Pleistocene moraines suggest a possible third event. This yields an estimated recurrence of 4-8 kyr. Analyses of scarp profiles show that the age of the most recent surface faulting is middle Holocene, consistent with stratigraphic evidence found in the trenches. Rupture length and displacement imply earthquake magnitudes of 7 to 7.5. Previous studies had not identified geologic evidence of late Quaternary surface faulting in the Rocky Mountain Trench or on faults north of the Lewis and Clark line, despite abundant historic seismicity in the Flathead Lake area. In addition to the Mission fault, reconnaissance studies have located late Quaternary fault scarps along portions of faults bordering the Jocko and Thompson Valleys. These are the first documented late Pleistocene/Holocene faults north of the Lewis and Clark line in Montana and should greatly revise estimates of earthquake hazards in this region.

  13. Randomness fault detection system

    NASA Technical Reports Server (NTRS)

    Russell, B. Don (Inventor); Aucoin, B. Michael (Inventor); Benner, Carl L. (Inventor)

    1996-01-01

    A method and apparatus are provided for detecting a fault on a power line carrying a line parameter such as a load current. The apparatus monitors and analyzes the load current to obtain an energy value. The energy value is compared to a threshold value stored in a buffer. If the energy value is greater than the threshold value a counter is incremented. If the energy value is greater than a high value threshold or less than a low value threshold then a second counter is incremented. If the difference between two subsequent energy values is greater than a constant then a third counter is incremented. A fault signal is issued if the counter is greater than a counter limit value and either the second counter is greater than a second limit value or the third counter is greater than a third limit value.
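
The counter scheme described above can be sketched directly. This is a hedged reconstruction of the claim language, not the patented implementation: every threshold, limit, and sample value below is a hypothetical placeholder, and "difference between two subsequent energy values" is taken here as the absolute difference.

```python
# Sketch of the three-counter fault detector described in the claim.
# All threshold/limit values are hypothetical placeholders.

def detect_fault(energies, threshold=10.0, high=25.0, low=2.0,
                 jump=8.0, limit1=3, limit2=1, limit3=1):
    c1 = c2 = c3 = 0
    prev = None
    for e in energies:
        if e > threshold:
            c1 += 1                      # energy above the stored threshold
        if e > high or e < low:
            c2 += 1                      # outside the high/low band
        if prev is not None and abs(e - prev) > jump:
            c3 += 1                      # abrupt change between samples
        prev = e
    # fault signal: first counter over its limit AND either of the
    # other two counters over theirs
    return c1 > limit1 and (c2 > limit2 or c3 > limit3)

print(detect_fault([5.0, 5.1, 5.0, 5.2, 5.1]))            # steady load
print(detect_fault([12.0, 30.0, 14.0, 28.0, 1.0, 26.0]))  # erratic samples
```

The combination of counters is what distinguishes a genuine fault from a momentary load spike: a single excursion increments the counters but does not push them past their limits.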

  14. Managing Fault Management Development

    NASA Technical Reports Server (NTRS)

    McDougal, John M.

    2010-01-01

    As the complexity of space missions grows, development of Fault Management (FM) capabilities is an increasingly common driver for significant cost overruns late in the development cycle. FM issues and the resulting cost overruns are rarely caused by a lack of technology, but rather by a lack of planning and emphasis by project management. A recent NASA FM Workshop brought together FM practitioners from a broad spectrum of institutions, mission types, and functional roles to identify the drivers underlying FM overruns and recommend solutions. They identified a number of areas in which increased program and project management focus can be used to control FM development cost growth. These include up-front planning for FM as a distinct engineering discipline; managing different, conflicting, and changing institutional goals and risk postures; ensuring the necessary resources for a disciplined, coordinated approach to end-to-end fault management engineering; and monitoring FM coordination across all mission systems.

  15. Fault Tolerant Paradigms

    DTIC Science & Technology

    2016-02-26

AFRL-AFOSR-VA-TR-2016-0105 (BRI) Fault Tolerant Paradigms. Benjamin Ong, Michigan State University, East Lansing. Final Report, 02/26/2016. Distribution A. "...property allows the algorithm to outperform FFTW over a wide range of sparsity and noise values, and is to the best of our knowledge novel. ... The new algorithm gives excellent performance in the noisy setting without significantly increasing the computational..."

  16. Fault tolerant control laws

    NASA Technical Reports Server (NTRS)

    Ly, U. L.; Ho, J. K.

    1986-01-01

A systematic procedure for the synthesis of control laws tolerant to actuator failures is presented. Two design methods were used to synthesize fault-tolerant controllers: the conventional LQ design method and a direct feedback controller design method, SANDY. The latter method is used primarily to streamline the full-state LQ feedback design into a practical, implementable output-feedback controller structure. To achieve robustness to control actuator failure, the redundant surfaces are properly balanced according to their control effectiveness. A simple gain schedule based on the landing-gear up/down logic, involving only three gains, was developed to handle three design flight conditions: Mach 0.25 and Mach 0.60 at 5,000 ft and Mach 0.90 at 20,000 ft. The fault-tolerant control law developed in this study provides good stability augmentation and performance for the relaxed static stability aircraft. The augmented aircraft responses are found to be invariant to the presence of a failure. Furthermore, single-loop stability margins of +6 dB in gain and +30 deg in phase were achieved, along with -40 dB/decade rolloff at high frequency.

  17. Large earthquakes and creeping faults

    USGS Publications Warehouse

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  18. Large earthquakes and creeping faults

    NASA Astrophysics Data System (ADS)

    Harris, Ruth A.

    2017-03-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  19. Imaging of subsurface faults using refraction migration with fault flooding

    NASA Astrophysics Data System (ADS)

    Metwally, Ahmed; Hanafy, Sherif; Guo, Bowen; Kosmicki, Maximillian

    2017-08-01

    We propose a novel method for imaging shallow faults by migration of transmitted refraction arrivals. The assumption is that there is a significant velocity contrast across the fault boundary that is underlain by a refracting interface. This procedure, denoted as refraction migration with fault flooding, largely overcomes the difficulty in imaging shallow faults with seismic surveys. Numerical results successfully validate this method on three synthetic examples and two field-data sets. The first field-data set is next to the Gulf of Aqaba and the second example is from a seismic profile recorded in Arizona. The faults detected by refraction migration in the Gulf of Aqaba data were in agreement with those indicated in a P-velocity tomogram. However, a new fault is detected at the end of the migration image that is not clearly seen in the traveltime tomogram. This result is similar to that for the Arizona data where the refraction image showed faults consistent with those seen in the P-velocity tomogram, except that it also detected an antithetic fault at the end of the line. This fault cannot be clearly seen in the traveltime tomogram due to the limited ray coverage.

  20. Fault management for data systems

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Iverson, David L.; Patterson-Hine, F. Ann

    1993-01-01

    Issues related to automating the process of fault management (fault diagnosis and response) for data management systems are considered. Substantial benefits are to be gained by successful automation of this process, particularly for large, complex systems. The use of graph-based models to develop a computer assisted fault management system is advocated. The general problem is described and the motivation behind choosing graph-based models over other approaches for developing fault diagnosis computer programs is outlined. Some existing work in the area of graph-based fault diagnosis is reviewed, and a new fault management method which was developed from existing methods is offered. Our method is applied to an automatic telescope system intended as a prototype for future lunar telescope programs. Finally, an application of our method to general data management systems is described.
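
One common form of the graph-based approach the authors advocate is a directed failure-propagation graph, where diagnosis asks which component failures could explain all observed symptoms. The sketch below is an illustration of that general idea only, not the authors' method; the component names and edges are hypothetical.

```python
# Hedged sketch of graph-based diagnosis: edges mean "a fault here can
# propagate to". A candidate root cause is any node from which every
# observed symptom is reachable. Component names are hypothetical.

from collections import deque

def reachable(graph, start):
    """Breadth-first search: all nodes a fault at `start` can reach."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def candidate_causes(graph, symptoms):
    nodes = set(graph) | {n for vs in graph.values() for n in vs}
    return {n for n in nodes
            if set(symptoms) <= reachable(graph, n)}

propagation = {
    "power_supply": ["controller", "sensor"],
    "controller": ["actuator"],
    "sensor": [],
    "actuator": [],
}
# Both the controller and the sensor misbehave: only a power-supply
# fault reaches both symptoms in this hypothetical graph.
print(sorted(candidate_causes(propagation, ["controller", "sensor"])))
```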

  1. Mechanical stratigraphy and normal faulting

    NASA Astrophysics Data System (ADS)

    Ferrill, David A.; Morris, Alan P.; McGinnis, Ronald N.; Smart, Kevin J.; Wigginton, Sarah S.; Hill, Nicola J.

    2017-01-01

Mechanical stratigraphy encompasses the mechanical properties, thicknesses, and interface properties of rock units. Although mechanical stratigraphy often relates directly to lithostratigraphy, lithologic description alone does not adequately describe mechanical behavior. Analyses of normal faults with displacements of millimeters to tens of kilometers in mechanically layered rocks reveal that mechanical stratigraphy influences nucleation, failure mode, fault geometry, displacement gradient, displacement distribution, fault core and damage zone characteristics, and fault zone deformation processes. The relationship between normal faulting and mechanical stratigraphy can be used either to predict structural style using knowledge of mechanical stratigraphy, or conversely to interpret mechanical stratigraphy based on characterization of the structural style. This review paper explores a range of mechanical stratigraphic controls on normal faulting, illustrated by natural and modeled examples.

  2. Handling Software Faults with Redundancy

    NASA Astrophysics Data System (ADS)

    Carzaniga, Antonio; Gorla, Alessandra; Pezzè, Mauro

    Software engineering methods can increase the dependability of software systems, and yet some faults escape even the most rigorous and methodical development process. Therefore, to guarantee high levels of reliability in the presence of faults, software systems must be designed to reduce the impact of the failures caused by such faults, for example by deploying techniques to detect and compensate for erroneous runtime conditions. In this chapter, we focus on software techniques to handle software faults, and we survey several such techniques developed in the area of fault tolerance and more recently in the area of autonomic computing. Since practically all techniques exploit some form of redundancy, we consider the impact of redundancy on the software architecture, and we propose a taxonomy centered on the nature and use of redundancy in software systems. The primary utility of this taxonomy is to classify and compare techniques to handle software faults.

  3. Software Evolution and the Fault Process

    NASA Technical Reports Server (NTRS)

    Nikora, Allen P.; Munson, John C.

    1999-01-01

In developing a software system, we would like to estimate the way in which the fault content changes during its development, as well as to determine the locations having the highest concentration of faults. In the phases prior to test, however, there may be very little direct information regarding the number and location of faults. This lack of direct information requires developing a fault surrogate from which the number of faults and their locations can be estimated. We develop a fault surrogate based on changes in the fault index, a synthetic measure which has been used successfully as a fault surrogate in previous work. We show that changes in the fault index can be used to estimate the rates at which faults are inserted into a system between successive revisions. We can then continuously monitor the total number of faults inserted into a system and the residual fault content, and identify those portions of the system requiring the application of additional fault detection and removal resources.
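
One simple way to read the surrogate idea above: treat increases in a module's fault index between successive revisions as a proxy for fault insertion, ignoring decreases (which reflect code removal or simplification rather than fault removal). This is my sketch of that reading, not the authors' estimator, and the index values are hypothetical.

```python
# Sketch of a fault-insertion proxy: sum the positive fault-index
# changes across successive revisions. Index values are hypothetical.

def inserted_fault_proxy(index_history):
    """Sum of positive fault-index deltas between successive revisions."""
    return sum(max(b - a, 0.0)
               for a, b in zip(index_history, index_history[1:]))

# Hypothetical fault-index values for one module over five revisions
history = [10.0, 14.0, 12.0, 19.0, 19.5]
print(inserted_fault_proxy(history))  # 4.0 + 7.0 + 0.5 = 11.5
```

In the paper's framework such a proxy would then be calibrated against observed faults to estimate insertion rates; the calibration step is omitted here.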

  4. Final Technical Report: PV Fault Detection Tool.

    SciTech Connect

    King, Bruce Hardison; Jones, Christian Birk

    2015-12-01

    The PV Fault Detection Tool project plans to demonstrate that the FDT can (a) detect catastrophic and degradation faults and (b) identify the type of fault. This will be accomplished by collecting fault signatures using different instruments and integrating this information to establish a logical controller for detecting, diagnosing and classifying each fault.
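    The "logical controller" idea can be illustrated with a toy rule set. The thresholds, fault classes, and signal names below are our own illustrative assumptions, not the FDT project's actual logic:

```python
# Hedged sketch of rule-based PV fault classification: compare measured
# output against an expected-power model and against sibling strings.

def classify_pv_fault(measured_power, expected_power,
                      string_current, mean_string_current):
    ratio = measured_power / expected_power if expected_power else 0.0
    if ratio < 0.1:
        return "catastrophic"      # e.g. open circuit or blown fuse
    if string_current < 0.8 * mean_string_current:
        return "string fault"      # one string underperforming its peers
    if ratio < 0.9:
        return "degradation"       # gradual soiling or module ageing
    return "normal"

print(classify_pv_fault(4.2, 5.0, 7.9, 8.0))  # degradation
```

    Integrating several such signatures (power ratio, string currents, I-V features) is what lets a controller both detect a fault and identify its type.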

  5. 20 CFR 404.507 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false Fault. 404.507 Section 404.507 Employees... Officer § 404.507 Fault. Fault as used in without fault (see § 404.506 and 42 CFR 405.355) applies only to the individual. Although the Administration may have been at fault in making the overpayment,...

  6. 20 CFR 404.507 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false Fault. 404.507 Section 404.507 Employees... Officer § 404.507 Fault. Fault as used in without fault (see § 404.506 and 42 CFR 405.355) applies only to the individual. Although the Administration may have been at fault in making the overpayment,...

  7. 20 CFR 404.507 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Fault. 404.507 Section 404.507 Employees... Officer § 404.507 Fault. Fault as used in without fault (see § 404.506 and 42 CFR 405.355) applies only to the individual. Although the Administration may have been at fault in making the overpayment,...

  8. Experimental Fault Reactivation on Favourably and Unfavourably Oriented Faults

    NASA Astrophysics Data System (ADS)

    Mitchell, T. M.; Sibson, R. H.; Renner, J.; Toy, V. G.; di Toro, G.; Smith, S. A.

    2010-12-01

    In this study, we introduce work that aims to assess the loading of faults to failure under different stress regimes in a triaxial deformation apparatus. We explore experimentally the reshear of an existing fault in various orientations for particular values of (σ1 - σ3) and σ3' for contrasting loading systems - load-strengthening (equivalent to a thrust fault) with σ1' increasing at constant σ3', versus load-weakening (equivalent to a normal fault) with reducing σ3' under constant σ1'. Experiments are conducted on sawcut granite samples with fault angles at a variety of orientations relative to σ1, ranging from an optimal orientation for reactivation to lockup angles where new faults are formed in preference to reactivating the existing sawcut orientation. Prefailure and postfailure behaviour is compared in terms of damage zone development via monitoring variations in ultrasonic velocity and acoustic emission behaviour. For example, damage surrounding unfavourably oriented faults is significantly higher than that seen around favourably oriented faults due to greater maximum stresses attained prior to unstable slip, which is reflected by the increased acoustic emission activity leading up to failure. In addition, we also experimentally explore the reshear of natural pseudotachylytes (PSTs) from two different fault zones: the Gole Larghe Fault, Adamello, Italy, in which the PSTs are in relatively isotropic tonalite (at lab sample scale), and the Alpine Fault, New Zealand, in which the PSTs are in highly anisotropic foliated schist. We test whether PSTs will reshear in both rock types under the right conditions, or whether new fractures in the wall rock will form in preference to reactivating the PST (PST shear strength is higher than that of the host rock). Are PSTs representative of one slip event?

  9. SEISMOLOGY: Watching the Hayward Fault.

    PubMed

    Simpson, R W

    2000-08-18

    The Hayward fault, located on the east side of the San Francisco Bay, represents a natural laboratory for seismologists, because it does not sleep silently between major earthquakes. In his Perspective, Simpson discusses the study by Bürgmann et al., who have used powerful new techniques to study the fault. The results indicate that major earthquakes cannot originate in the northern part of the fault. However, surface-rupturing earthquakes have occurred in the area, suggesting that they originated to the north or south of the segment studied by Bürgmann et al. Fundamental questions remain regarding the mechanism by which plate tectonic stresses are transferred to the Hayward fault.

  10. Physical fault tolerance of nanoelectronics.

    PubMed

    Szkopek, Thomas; Roychowdhury, Vwani P; Antoniadis, Dimitri A; Damoulakis, John N

    2011-04-29

    The error rate in complementary transistor circuits is suppressed exponentially in electron number, arising from an intrinsic physical implementation of fault-tolerant error correction. Contrariwise, explicit assembly of gates into the most efficient known fault-tolerant architecture is characterized by a subexponential suppression of error rate with electron number, and incurs significant overhead in wiring and complexity. We conclude that it is more efficient to prevent logical errors with physical fault tolerance than to correct logical errors with fault-tolerant architecture.

  11. Stacking Faults in Cotton Fibers

    NASA Astrophysics Data System (ADS)

    Divakara, S.; Niranjana, A. R.; Siddaraju, G. N.; Somashekar, R.

    2011-07-01

    The stacking faults in different varieties of cotton fibers have been quantified using wide-angle X-ray scattering (WAXS) data. Exponential functions for the column length distribution have been used for the determination of microstructural parameters. The crystal imperfection parameters, such as crystal size, lattice strain (g in %), stacking faults (αd) and twin faults (β), have been determined by profile analysis using the Fourier method of Warren. We examined different varieties of raw cotton fibers using WAXS techniques. In all these cases we note that the stacking faults are quite significant in determining the properties of cotton fibers.

  12. Fault trees and sequence dependencies

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta; Boyd, Mark A.; Bavuso, Salvatore J.

    1990-01-01

    One of the frequently cited shortcomings of fault-tree models, their inability to model so-called sequence dependencies, is discussed. Several sources of such sequence dependencies are discussed, and new fault-tree gates to capture this behavior are defined. These complex behaviors can be included in present fault-tree models because they utilize a Markov solution. The utility of the new gates is demonstrated by presenting several models of the fault-tolerant parallel processor, which include both hot and cold spares.
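    The Markov solution that makes sequence-dependent gates tractable can be shown on the simplest such gate, a priority-AND (the gate output fails only if component A fails before component B). The failure rates and time horizon below are hypothetical, and the transient solution uses plain Euler stepping for brevity; it assumes NumPy is available:

```python
# Sketch: a priority-AND gate solved as a five-state continuous-time
# Markov chain, as required for sequence-dependent fault-tree behavior.
import numpy as np

lam_a, lam_b, t, dt = 1e-3, 2e-3, 1000.0, 0.1  # hypothetical rates/horizon
# states: 0 = both up, 1 = only A failed, 2 = only B failed,
#         3 = both failed, A first (gate fails), 4 = both failed, B first
p = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
Q = np.array([
    [-(lam_a + lam_b), lam_a, lam_b, 0.0,   0.0],
    [0.0,             -lam_b, 0.0,   lam_b, 0.0],
    [0.0,              0.0,  -lam_a, 0.0,   lam_a],
    [0.0,              0.0,   0.0,   0.0,   0.0],   # absorbing
    [0.0,              0.0,   0.0,   0.0,   0.0],   # absorbing
])
for _ in range(int(t / dt)):       # simple Euler transient solution
    p = p + dt * (p @ Q)
print(round(p[3], 4))              # P(gate failed by time t), approx. 0.231
```

    A static AND gate would lump states 3 and 4 together; distinguishing them is exactly what a combinatorial fault tree cannot express and a Markov model can.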

  13. Fault-Tree Compiler Program

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Martensen, Anna L.

    1992-01-01

    FTC, Fault-Tree Compiler program, is reliability-analysis software tool used to calculate probability of top event of fault tree. Five different types of gates allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language of FTC easy to understand and use. Program supports hierarchical fault-tree-definition feature simplifying process of description of tree and reduces execution time. Solution technique implemented in FORTRAN, and user interface in Pascal. Written to run on DEC VAX computer operating under VMS operating system.
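    Assuming independent basic events, the top-event calculation performed by a tool of this kind reduces to simple probability rules per gate. The five gate names follow the abstract; the code and the example tree are our own sketch, not the FTC implementation:

```python
# Top-event probability for the five FTC gate types, assuming
# statistically independent basic-event probabilities.
from itertools import combinations

def AND(*ps):                      # all inputs fail
    out = 1.0
    for p in ps:
        out *= p
    return out

def OR(*ps):                       # at least one input fails
    out = 1.0
    for p in ps:
        out *= (1 - p)
    return 1 - out

def XOR(p, q):                     # exactly one of two inputs fails
    return p * (1 - q) + q * (1 - p)

def INVERT(p):
    return 1 - p

def M_OF_N(m, ps):                 # at least m of the n inputs fail
    n = len(ps)
    total = 0.0
    for k in range(m, n + 1):
        for idx in combinations(range(n), k):
            term = 1.0
            for i in range(n):
                term *= ps[i] if i in idx else (1 - ps[i])
            total += term
    return total

# example tree: TOP = OR( AND(a, b), 2-of-3(c, d, e) )
a, b, c, d, e = 0.01, 0.02, 0.1, 0.1, 0.1
top = OR(AND(a, b), M_OF_N(2, [c, d, e]))
print(round(top, 6))
```

    Hierarchical tree definitions, as the abstract notes, let intermediate gates be named once and reused, which shortens both the description and the evaluation.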

  14. Fault interaction near Hollister, California

    NASA Astrophysics Data System (ADS)

    Mavko, Gerald M.

    1982-09-01

    A numerical model is used to study fault stress and slip near Hollister, California. The geometrically complex system of interacting faults, including the San Andreas, Calaveras, Sargent, and Busch faults, is approximated with a two-dimensional distribution of short planar fault segments in an elastic medium. The steady stress and slip rate are simulated by specifying frictional strength and stepping the remote stress ahead in time. The resulting computed fault stress is roughly proportional to the observed spatial density of small earthquakes, suggesting that the distinction between segments characterized by earthquakes and those with aseismic creep results, in part, from geometry. A nonsteady simulation is made by introducing, in addition, stress drops for individual moderate earthquakes. A close fit of observed creep with calculated slip on the Calaveras and San Andreas faults suggests that many changes in creep rate (averaged over several months) are caused by local moderate earthquakes. In particular, a 3-year creep lag preceding the August 6, 1979, Coyote Lake earthquake on the Calaveras fault seems to have been a direct result of the November 28, 1974, Thanksgiving Day earthquake on the Busch fault. Computed lags in slip rate preceding some other moderate earthquakes in the area are also due to earlier earthquakes. Although the response of the upper 1 km of the fault zone may cause some individual creep events and introduce delays in others, the long-term rate appears to reflect deep slip.

  15. Fault interaction near Hollister, California

    SciTech Connect

    Mavko, G.M.

    1982-09-10

    A numerical model is used to study fault stress and slip near Hollister, California. The geometrically complex system of interacting faults, including the San Andreas, Calaveras, Sargent, and Busch faults, is approximated with a two-dimensional distribution of short planar fault segments in an elastic medium. The steady stress and slip rate are simulated by specifying frictional strength and stepping the remote stress ahead in time. The resulting computed fault stress is roughly proportional to the observed spatial density of small earthquakes, suggesting that the distinction between segments characterized by earthquakes and those with aseismic creep results, in part, from geometry. A nonsteady simulation is made by introducing, in addition, stress drops for individual moderate earthquakes. A close fit of observed creep with calculated slip on the Calaveras and San Andreas faults suggests that many changes in creep rate (averaged over several months) are caused by local moderate earthquakes. In particular, a 3-year creep lag preceding the August 6, 1979, Coyote Lake earthquake on the Calaveras fault seems to have been a direct result of the November 28, 1974, Thanksgiving Day earthquake on the Busch fault. Computed lags in slip rate preceding some other moderate earthquakes in the area are also due to earlier earthquakes. Although the response of the upper 1 km of the fault zone may cause some individual creep events and introduce delays in others, the long-term rate appears to reflect deep slip.

  16. Perspective View, Garlock Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    California's Garlock Fault, marking the northwestern boundary of the Mojave Desert, lies at the foot of the mountains, running from the lower right to the top center of this image, which was created with data from NASA's Shuttle Radar Topography Mission (SRTM), flown in February 2000. The data will be used by geologists studying fault dynamics and landforms resulting from active tectonics. These mountains are the southern end of the Sierra Nevada, and the prominent canyon emerging at the lower right is Lone Tree canyon. In the distance, the San Gabriel Mountains cut across from the left side of the image. At their base lies the San Andreas Fault, which meets the Garlock Fault near the left edge at Tejon Pass. The dark linear feature running from lower right to upper left is State Highway 14, leading from the town of Mojave in the distance to Inyokern and the Owens Valley in the north. The lighter parallel lines are dirt roads related to power lines and the Los Angeles Aqueduct, which run along the base of the mountains.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast

  17. Fault Tree Handbook

    DTIC Science & Technology

    1981-01-01

    NUREG-0492 Fault Tree Handbook Date Published: January 1981 W. E. Vesely, U.S. Nuclear Regulatory Commission F. F. Goldberg, U.S. Nuclear Regulatory Commission N. H. Roberts, University of Washington D. F. Haasl, Institute of System Sciences, Inc. Systems and Reliability Research, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, D.C. 20555. For sale by the U.S

  18. Perspective View, Garlock Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    California's Garlock Fault, marking the northwestern boundary of the Mojave Desert, lies at the foot of the mountains, running from the lower right to the top center of this image, which was created with data from NASA's Shuttle Radar Topography Mission (SRTM), flown in February 2000. The data will be used by geologists studying fault dynamics and landforms resulting from active tectonics. These mountains are the southern end of the Sierra Nevada, and the prominent canyon emerging at the lower right is Lone Tree canyon. In the distance, the San Gabriel Mountains cut across from the left side of the image. At their base lies the San Andreas Fault, which meets the Garlock Fault near the left edge at Tejon Pass. The dark linear feature running from lower right to upper left is State Highway 14, leading from the town of Mojave in the distance to Inyokern and the Owens Valley in the north. The lighter parallel lines are dirt roads related to power lines and the Los Angeles Aqueduct, which run along the base of the mountains.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast

  19. Nonlinear Fault Diagnosis,

    DTIC Science & Technology

    1981-05-01

    Systems, New York, Marcel Dekker, (to appear). 3. Desoer, C.A. and E.S. Kuh, Basic Circuit Theory, McGraw-Hill, New York, 1969, pp. 423-425. 130 NONLINEAR FAULT DIAGNOSIS CONTENTS: Fault Diagnosis in Electronic Circuits, R. Saeks and R.-W. Liu...Vincentelli and R. Saeks .............. 61 Multitest Diagnosibility of Nonlinear Circuits and Systems, A. Sangiovanni-Vincentelli and R. Saeks

  20. Fault current limiter

    DOEpatents

    Darmann, Francis Anthony

    2013-10-08

    A fault current limiter (FCL) includes a series of high-permeability posts that collectively define a core for the FCL. A DC coil, for the purpose of saturating a portion of the high-permeability posts, surrounds the complete structure outside of an enclosure in the form of a vessel. The vessel contains a dielectric insulation medium. AC coils, for transporting AC current, are wound on insulating formers and electrically interconnected to each other in a manner such that the senses of the magnetic field produced by each AC coil in the corresponding high-permeability core are opposing. There are insulation barriers between phases to improve the dielectric withstand properties of the dielectric medium.

  1. Cross-Cutting Faults

    NASA Technical Reports Server (NTRS)

    2005-01-01

    16 May 2005 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows cross-cutting fault scarps among graben features in northern Tempe Terra. Graben form in regions where the crust of the planet has been extended; such features are common in the regions surrounding the vast 'Tharsis Bulge' on Mars.

    Location near: 43.7°N, 90.2°W Image width: 3 km (1.9 mi) Illumination from: lower left Season: Northern Summer

  2. Cross-Cutting Faults

    NASA Technical Reports Server (NTRS)

    2005-01-01

    16 May 2005 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows cross-cutting fault scarps among graben features in northern Tempe Terra. Graben form in regions where the crust of the planet has been extended; such features are common in the regions surrounding the vast 'Tharsis Bulge' on Mars.

    Location near: 43.7°N, 90.2°W Image width: 3 km (1.9 mi) Illumination from: lower left Season: Northern Summer

  3. Integrated design of fault reconstruction and fault-tolerant control against actuator faults using learning observers

    NASA Astrophysics Data System (ADS)

    Jia, Qingxian; Chen, Wen; Zhang, Yingchun; Li, Huayi

    2016-12-01

    This paper addresses the problem of integrated fault reconstruction and fault-tolerant control in linear systems subject to actuator faults via learning observers (LOs). A reconfigurable fault-tolerant controller is designed based on the constructed LO to compensate for the influence of actuator faults by stabilising the closed-loop system. An integrated design of the proposed LO and the fault-tolerant controller is explored such that their performance can be simultaneously considered and their coupling problem can be effectively solved. In addition, such an integrated design is formulated in terms of linear matrix inequalities (LMIs) that can be conveniently solved in a unified framework using LMI optimisation techniques. Finally, simulation studies on a micro-satellite attitude control system are provided to verify the effectiveness of the proposed approach.
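    The learning-observer idea can be illustrated on a toy scalar system: the observer iteratively updates a fault estimate from the state-estimation error, and the controller subtracts that estimate to compensate. This is our own hand-tuned construction, not the paper's satellite model or its LMI-based gain synthesis:

```python
# Toy sketch of a learning observer with fault-tolerant compensation.
# Plant: x+ = a*x + b*(u + f), where f is an unknown constant actuator fault.
a, b = 0.9, 1.0
L, K = 0.5, 0.3          # observer and learning gains (hand-picked, stable)
f = 0.3                  # true actuator fault, unknown to the controller
x, xh, fh = 1.0, 0.0, 0.0
for _ in range(200):
    u = -0.5 * x - fh                      # fault-tolerant control law
    e = x - xh                             # state estimation error
    x = a * x + b * (u + f)                # plant with faulty actuator
    xh = a * xh + b * (u + fh) + L * e     # observer using the fault estimate
    fh = fh + K * e                        # learning law: refine the estimate
# fh converges to the true fault and x is regulated to zero
print(round(fh, 3))
```

    The coupling the paper addresses is visible even here: the observer gains (L, K) and the control law must be chosen jointly, since the error dynamics of e and (f - fh) form one interconnected system.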

  4. AGSM Functional Fault Models for Fault Isolation Project

    NASA Technical Reports Server (NTRS)

    Harp, Janicce Leshay

    2014-01-01

    This project implements functional fault models to automate the isolation of failures during ground systems operations. FFMs will also be used to recommend sensor placement to improve fault isolation capabilities. The project enables the delivery of system health advisories to ground system operators.

  5. Fault Branching and Rupture Directivity

    NASA Astrophysics Data System (ADS)

    Dmowska, R.; Rice, J. R.; Kame, N.

    2002-12-01

    Can the rupture directivity of past earthquakes be inferred from fault geometry? Nakata et al. [J. Geogr., 1998] propose to relate the observed surface branching of fault systems with directivity. Their work assumes that all branches are through acute angles in the direction of rupture propagation. However, in some observed cases rupture paths seem to branch through highly obtuse angles, as if to propagate "backwards". Field examples of that are as follows: (1) Landers 1992. When crossing from the Johnson Valley to the Homestead Valley (HV) fault via the Kickapoo (Kp) fault, the rupture from Kp progressed not just forward onto the northern stretch of the HV fault, but also backwards, i.e., SSE along the HV [Sowers et al., 1994; Spotila and Sieh, 1995; Zachariasen and Sieh, 1995; Rockwell et al., 2000]. Measurements of surface slip along that backward branch, a prominent feature of 4 km length, show right-lateral slip, decreasing towards the SSE. (2) At a similar crossing from the HV to the Emerson (Em) fault, the rupture progressed backwards along different SSE splays of the Em fault [Zachariasen and Sieh, 1995]. (3) In crossing from the Em to the Camp Rock (CR) fault, again, rupture went SSE on the CR fault. (4) Hector Mine 1999. The rupture originated on a buried fault without surface trace [Li et al., 2002; Hauksson et al., 2002] and progressed bilaterally south and north. In the south it met the Lavic Lake (LL) fault and progressed south on it, but also progressed backward, i.e., NNW, along the northern stretch of the LL fault. The angle between the buried fault and the northern LL fault is around -160°, and that NNW stretch extends around 15 km. The field examples with highly obtuse branch angles suggest that there may be no simple correlation between fault geometry and rupture directivity. We propose that an important distinction is whether those obtuse branches actually involved a rupture path which directly turned through the obtuse angle (while continuing

  6. Central Asia Active Fault Database

    NASA Astrophysics Data System (ADS)

    Mohadjer, Solmaz; Ehlers, Todd A.; Kakar, Najibullah

    2014-05-01

    The ongoing collision of the Indian subcontinent with Asia controls active tectonics and seismicity in Central Asia. This motion is accommodated by faults that have historically caused devastating earthquakes and continue to pose serious threats to the population at risk. Despite international and regional efforts to assess seismic hazards in Central Asia, little attention has been given to the development of a comprehensive database for active faults in the region. To address this issue and to better understand the distribution and level of seismic hazard in Central Asia, we are developing a publicly available database for active faults of Central Asia (including but not limited to Afghanistan, Tajikistan, Kyrgyzstan, northern Pakistan and western China) using ArcGIS. The database is designed to allow users to store, map and query important fault parameters such as fault location, displacement history, rate of movement, and other data relevant to seismic hazard studies including fault trench locations, geochronology constraints, and seismic studies. Data sources integrated into the database include previously published maps and scientific investigations as well as strain rate measurements and historic and recent seismicity. In addition, high resolution Quickbird, Spot, and Aster imagery are used for selected features to locate and measure offset of landforms associated with Quaternary faulting. These features are individually digitized and linked to attribute tables that provide a description for each feature. Preliminary observations include inconsistent and sometimes inaccurate information for faults documented in different studies. For example, the Darvaz-Karakul fault, which roughly defines the western margin of the Pamir, has been mapped with differences in location of up to 12 kilometers.
The sense of motion for this fault ranges from unknown to thrust and strike-slip in three different studies despite documented left-lateral displacements of Holocene and late
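    The kind of attribute table and query such a database supports can be sketched in plain SQLite rather than ArcGIS. The field names and the numeric values below are our own illustrative assumptions, not values from the database itself:

```python
# Hedged sketch: storing and querying fault parameters of the kind the
# Central Asia database records (location omitted; values illustrative).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE faults (
    name TEXT, slip_sense TEXT, slip_rate_mm_yr REAL, trenched INTEGER)""")
db.executemany("INSERT INTO faults VALUES (?, ?, ?, ?)", [
    ("Darvaz-Karakul", "left-lateral", 10.0, 1),   # illustrative values only
    ("Example thrust", "thrust", 1.5, 0),
])
# query: trenched faults slipping faster than 5 mm/yr
rows = db.execute("""SELECT name FROM faults
                     WHERE trenched = 1 AND slip_rate_mm_yr > 5""").fetchall()
print(rows)  # [('Darvaz-Karakul',)]
```

    Keeping each fault's sense of motion, rate, and supporting evidence in queryable fields is precisely what exposes the cross-study inconsistencies the authors describe.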

  7. Structure and flow properties of syn-rift border faults: The interplay between fault damage and fault-related chemical alteration (Dombjerg Fault, Wollaston Forland, NE Greenland)

    NASA Astrophysics Data System (ADS)

    Kristensen, Thomas B.; Rotevatn, Atle; Peacock, David C. P.; Henstra, Gijs A.; Midtkandal, Ivar; Grundvåg, Sten-Andreas

    2016-11-01

    Structurally controlled, syn-rift, clastic depocentres are of economic interest as hydrocarbon reservoirs; understanding the structure of their bounding faults is of great relevance, e.g. in the assessment of fault-controlled hydrocarbon retention potential. Here we investigate the structure of the Dombjerg Fault Zone (Wollaston Forland, NE Greenland), a syn-rift border fault that juxtaposes syn-rift deep-water hanging-wall clastics against a footwall of crystalline basement. A series of discrete fault strands characterize the central fault zone, where discrete slip surfaces, fault rock assemblages and extreme fracturing are common. A chemical alteration zone (CAZ) of fault-related calcite cementation envelops the fault and places strong controls on the style of deformation, particularly in the hanging-wall. The hanging-wall damage zone includes faults, joints, veins and, outside the CAZ, disaggregation deformation bands. Footwall deformation includes faults, joints and veins. Our observations suggest that the CAZ formed during early-stage fault slip and imparted a mechanical control on later fault-related deformation. This study thus gives new insights into the structure of an exposed basin-bounding fault and highlights a spatiotemporal interplay between fault damage and chemical alteration, the latter of which is often underreported in fault studies. To better elucidate the structure, evolution and flow properties of faults (outcrop or subsurface), both fault damage and fault-related chemical alteration must be considered.

  8. Colorado Regional Faults

    SciTech Connect

    Hussein, Khalid

    2012-02-01

    Citation Information: Originator: Earth Science & Observation Center (ESOC), CIRES, University of Colorado at Boulder Originator: Colorado Geological Survey (CGS) Publication Date: 2012 Title: Regional Faults Edition: First Publication Information: Publication Place: Earth Science & Observation Center, Cooperative Institute for Research in Environmental Science, University of Colorado, Boulder Publisher: Earth Science & Observation Center (ESOC), CIRES, University of Colorado at Boulder Description: This layer contains the regional faults of Colorado Spatial Domain: Extent: Top: 4543192.100000 m Left: 144385.020000 m Right: 754585.020000 m Bottom: 4094592.100000 m Contact Information: Contact Organization: Earth Science & Observation Center (ESOC), CIRES, University of Colorado at Boulder Contact Person: Khalid Hussein Address: CIRES, Ekeley Building Earth Science & Observation Center (ESOC) 216 UCB City: Boulder State: CO Postal Code: 80309-0216 Country: USA Contact Telephone: 303-492-6782 Spatial Reference Information: Coordinate System: Universal Transverse Mercator (UTM) WGS 1984 Zone 13N False Easting: 500000.00000000 False Northing: 0.00000000 Central Meridian: -105.00000000 Scale Factor: 0.99960000 Latitude of Origin: 0.00000000 Linear Unit: Meter Datum: World Geodetic System 1984 (WGS 1984) Prime Meridian: Greenwich Angular Unit: Degree Digital Form: Format Name: Shape file

  9. Fault Management Design Strategies

    NASA Technical Reports Server (NTRS)

    Day, John C.; Johnson, Stephen B.

    2014-01-01

    Development of dependable systems relies on the ability of the system to determine and respond to off-nominal system behavior. Specification and development of these fault management capabilities must be done in a structured and principled manner to improve our understanding of these systems, and to make significant gains in dependability (safety, reliability and availability). Prior work has described a fundamental taxonomy and theory of System Health Management (SHM), and of its operational subset, Fault Management (FM). This conceptual foundation provides a basis to develop a framework to design and implement FM design strategies that protect mission objectives and account for system design limitations. Selection of an SHM strategy has implications for the functions required to perform the strategy, and it places constraints on the set of possible design solutions. The framework developed in this paper provides a rigorous and principled approach to classifying SHM strategies, as well as methods for determination and implementation of SHM strategies. An illustrative example is used to describe the application of the framework and the resulting benefits to system and FM design and dependability.

  10. Fault Management Design Strategies

    NASA Technical Reports Server (NTRS)

    Day, John C.; Johnson, Stephen B.

    2014-01-01

    Development of dependable systems relies on the ability of the system to determine and respond to off-nominal system behavior. Specification and development of these fault management capabilities must be done in a structured and principled manner to improve our understanding of these systems, and to make significant gains in dependability (safety, reliability and availability). Prior work has described a fundamental taxonomy and theory of System Health Management (SHM), and of its operational subset, Fault Management (FM). This conceptual foundation provides a basis to develop a framework to design and implement FM design strategies that protect mission objectives and account for system design limitations. Selection of an SHM strategy has implications for the functions required to perform the strategy, and it places constraints on the set of possible design solutions. The framework developed in this paper provides a rigorous and principled approach to classifying SHM strategies, as well as methods for determination and implementation of SHM strategies. An illustrative example is used to describe the application of the framework and the resulting benefits to system and FM design and dependability.

  11. SFT: Scalable Fault Tolerance

    SciTech Connect

    Petrini, Fabrizio; Nieplocha, Jarek; Tipparaju, Vinod

    2006-04-15

    In this paper we will present a new technology that we are currently developing within the SFT: Scalable Fault Tolerance FastOS project which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continuing execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency—requiring no changes to user applications. Our technology is based on a global coordination mechanism, that enforces transparent recovery lines in the system, and TICK, a lightweight, incremental checkpointing software architecture implemented as a Linux kernel module. TICK is completely user-transparent and does not require any changes to user code or system libraries; it is highly responsive: an interrupt, such as a timer interrupt, can trigger a checkpoint in as little as 2.5μs; and it supports incremental and full checkpoints with minimal overhead—less than 6% with full checkpointing to disk performed as frequently as once per minute.
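    The incremental-checkpointing idea behind TICK can be illustrated conceptually: track which memory "pages" were written since the last checkpoint and save only those. This is a plain-Python sketch of the concept, not the kernel-module implementation (which tracks dirty pages via the MMU rather than explicit writes):

```python
# Conceptual sketch of incremental checkpointing with dirty-page tracking.
class Checkpointer:
    def __init__(self, pages):
        self.pages = pages           # page_id -> bytes (simulated memory)
        self.dirty = set(pages)      # everything is dirty before checkpoint 0
        self.checkpoints = []
    def write(self, page, data):     # every write marks its page dirty
        self.pages[page] = data
        self.dirty.add(page)
    def checkpoint(self):            # save only dirty pages, then reset
        delta = {p: self.pages[p] for p in self.dirty}
        self.checkpoints.append(delta)
        self.dirty.clear()
        return len(delta)            # pages saved this checkpoint

cp = Checkpointer({0: b"a", 1: b"b", 2: b"c"})
print(cp.checkpoint())   # 3 -- first checkpoint is full
cp.write(1, b"B")
print(cp.checkpoint())   # 1 -- incremental: only the modified page
```

    The low overhead reported for TICK comes from exactly this asymmetry: between checkpoints only the dirtied fraction of memory must be written out.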

  12. Accelerometer having integral fault null

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1995-01-01

    An improved accelerometer is introduced. It comprises a transducer responsive to vibration in machinery, which produces an electrical signal related to the magnitude and frequency of the vibration, and a decoding circuit responsive to the transducer signal, which processes a first fault signal to produce a second fault signal in which ground shift effects are nullified.

  13. Naval Weapons Center Active Fault Map Series.

    DTIC Science & Technology

    1987-08-31

    SECURITY CLASSIFICATION OF THIS PAGE NWC TP 6828 CONTENTS Introduction ........... 2 Active Fault Definition ...established along the trace of the Little Lake fault zone, within the City of Ridgecrest. ACTIVE FAULT DEFINITION Although it is a commonly used term, "active fault" lacks a precise and universally accepted definition. Most workers, however, accept the following: "Active fault - a fault along

  14. Experimental Fault Reactivation on Favourably and Unfavourably Oriented Faults

    NASA Astrophysics Data System (ADS)

    Mitchell, T. M.; Renner, J.; Sibson, R. H.

    2011-12-01

    In this study, we assess the loading of faults to failure under different stress regimes in a triaxial deformation apparatus, in both dry and saturated conditions. We explore experimentally the reshear of an existing fault in various orientations for particular values of (σ_1 - σ_3) and σ_3' for contrasting loading systems: load-strengthening (equivalent to a thrust fault), with σ_1' increasing at constant σ_3', versus load-weakening (equivalent to a normal fault), with σ_3' decreasing under constant σ_1'. Experiments are conducted on sawcut granite samples with fault angles at a variety of orientations relative to σ_1, ranging from an optimal orientation for reactivation to lockup angles where new faults form in preference to reactivating the existing sawcut orientation. Prefailure and postfailure behaviour are compared in terms of damage zone development by monitoring variations in ultrasonic velocity and acoustic emission behaviour. For example, damage surrounding unfavourably oriented faults is significantly greater than that seen around favourably oriented faults, owing to the greater maximum stresses attained prior to unstable slip; this is reflected in the increased acoustic emission activity leading up to failure. In addition, we explore reshear conditions under an initial condition of σ_1' = σ_3', inducing reshear on the existing fault first by increasing σ_1' (load-strengthening) and then by decreasing σ_3' (load-weakening), again comparing relative damage zone development and acoustic emission levels. In saturated experiments, we explore the values of pore fluid pressure (P_f) needed for reshear to occur in preference to the formation of a new fault. A limiting factor in conventional triaxial experiments performed in compression is that P_f cannot exceed the confining pressure (σ_2 = σ_3). By employing a sample assembly that allows deformation while the loading piston is in extension, we can achieve pore pressures in

  15. Synchronized sampling improves fault location

    SciTech Connect

    Kezunovic, M.; Perunicic, B.

    1995-04-01

    Transmission line faults must be located accurately so that maintenance crews can reach and repair the faulted section as soon as possible. Rugged terrain and geographical layout make some sections of power transmission lines difficult to reach. In the past, a variety of fault location algorithms were introduced either as an add-on feature in protective relays or as a stand-alone implementation in fault locators. In both cases, the measurements of currents and voltages were taken at one terminal of the transmission line only. Under such conditions it may become difficult to determine the fault location accurately, since data from the other end of the transmission line are required for more precise computation. In the absence of data from the other end, existing algorithms have accuracy problems under several circumstances, such as varying switching and loading conditions, fault infeed from the other end, and random values of fault resistance. Most one-end algorithms are based on estimation of voltage and current phasors. The need to estimate phasors introduces additional difficulty in high-speed tripping situations, where the algorithms may not be fast enough to determine the fault location accurately before the current signals disappear due to relay operation and breaker opening. This article introduces a unique concept of high-speed fault location that can be implemented either as a simple add-on to digital fault recorders (DFRs) or as a new stand-alone relaying function. This advanced concept is based on the use of voltage and current samples taken synchronously at both ends of a transmission line. Such sampling can be made readily available in some new DFR designs incorporating receivers for accurate sampling-clock synchronization using the satellite Global Positioning System (GPS).
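
    The core computation behind two-end synchronized fault location can be sketched with a single-phase, lumped-impedance line model: because both terminals observe the same voltage at the fault point, the per-unit fault distance m falls out of the synchronized phasors directly. All numeric values below are invented for illustration; this is the textbook two-terminal formula, not the article's specific algorithm:

```python
# Two-end, synchronized-phasor fault location (lumped-impedance,
# single-phase sketch).
Z = complex(0.5, 5.0)          # total series line impedance (ohms)

def locate_fault(Vs, Is, Vr, Ir, Z):
    """Return the per-unit distance m from the sending end.
    Both ends see the same fault-point voltage:
        Vs - m*Z*Is = Vr - (1 - m)*Z*Ir
    which solves to m = (Vs - Vr + Z*Ir) / (Z*(Is + Ir)).
    """
    m = (Vs - Vr + Z * Ir) / (Z * (Is + Ir))
    return m.real

# Synthesize consistent measurements for a fault at m = 0.3.
m_true = 0.3
Vf = complex(40.0, -5.0)       # voltage at the fault point
Is = complex(8.0, -3.0)        # current into the line, sending end
Ir = complex(2.0, -1.0)        # current into the line, receiving end
Vs = Vf + m_true * Z * Is
Vr = Vf + (1 - m_true) * Z * Ir

m_est = locate_fault(Vs, Is, Vr, Ir, Z)
```

    Note that the formula needs no assumption about fault resistance or infeed, which is exactly why two-end synchronized data removes the main error sources of one-end methods.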

  16. Frictional Heterogeneities Along Carbonate Faults

    NASA Astrophysics Data System (ADS)

    Collettini, C.; Carpenter, B. M.; Scuderi, M.; Tesei, T.

    2014-12-01

    The understanding of fault-slip behaviour in carbonates has an important societal impact because a) a significant number of earthquakes nucleate within or propagate through these rocks, and b) half of the known petroleum reserves occur within carbonate reservoirs, which likely contain faults that experience fluid pressure fluctuations. Field studies on carbonate-bearing faults that are exhumed analogues of currently active structures of the seismogenic crust show that fault rock types are systematically controlled by the lithology of the faulted protolith: localization associated with cataclasis, thermal decomposition and plastic deformation commonly affects fault rocks in massive limestone, whereas distributed deformation, pressure-solution and frictional sliding along phyllosilicates are observed in marly rocks. In addition, hydraulic fractures, indicating cyclic fluid pressure build-ups during fault activity, are widespread. Standard double direct shear friction experiments on fault rocks from massive limestones show high friction, velocity-neutral/weakening behaviour and significant re-strengthening during hold periods. In contrast, phyllosilicate-rich shear zones are characterized by low friction, significant velocity-strengthening behaviour and no healing. We are currently running friction experiments on large rock samples (20x20 cm) in order to reproduce and characterize the interaction of the fault rock frictional heterogeneities observed in the field. In addition, we have been performing experiments at near-lithostatic fluid pressure in the double direct shear configuration within a pressure vessel to test rate-and-state friction stability under these conditions. Our combination of structural observations and mechanical data is revealing the processes and structures that underlie the broad spectrum of fault-slip behaviours recently documented by high-resolution geodetic and seismological data.

  17. Constraint of fault parameters inferred from nonplanar fault modeling

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo; Madariaga, Raul; Fukuyama, Eiichi

    2003-02-01

    We study the distribution of initial stress and frictional parameters for the 28 June 1992 Landers, California, earthquake through dynamic rupture simulation along a nonplanar fault system. We find that the observational evidence of large slip near the ground surface requires large nonzero cohesive forces in the depth-dependent friction law. This is the only way that stress can accumulate and be released at shallow depths. We then study the variation of frictional parameters along the strike of the fault. For this purpose we mapped into our segmented fault model the initial stress heterogeneity inverted by Peyrat et al. [2001] using a planar fault model. Simulations with this initial stress field improved the overall fit of the rupture process to that inferred from kinematic inversions, and also improved the fit to the ground motion observed in Southern California. In order to obtain this fit, we had to introduce additional variations of frictional parameters along the fault. The most important are a weak Kickapoo fault and a strong Johnson Valley fault.

  18. Fault Tolerant Cache Schemes

    NASA Astrophysics Data System (ADS)

    Tu, H.-Yu.; Tasneem, Sarah

    Most modern microprocessors employ on-chip cache memories to meet memory bandwidth demand. These caches now occupy a greater share of chip real estate, already more than 50% of chip area, and the continuous down-scaling of transistors increases the possibility of defects in the cache area. For this reason, various techniques have been proposed to tolerate defects in cache blocks. These techniques can be classified into three categories: cache line disabling, replacement with spare blocks, and decoder reconfiguration without spare blocks. This chapter examines each of these fault tolerant techniques for a fixed, typical size and organization of L1 cache, through extended simulation of the individual techniques using the SPEC2000 benchmarks. The design and characteristics of each technique are summarized with a view to evaluating the scheme. We then present our simulation results and a comparative study of the three methods.
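
    Of the three categories, cache line disabling is the simplest to sketch: a line flagged as defective is never filled, trading a few forced misses for correct operation. A toy direct-mapped model (sizes, names, and the access trace are invented for illustration):

```python
class DirectMappedCache:
    """Toy direct-mapped cache with cache-line disabling: lines
    flagged as defective are never filled, so accesses that map to
    them always miss (but the cache remains functionally correct)."""
    def __init__(self, n_lines, defective=()):
        self.n = n_lines
        self.disabled = set(defective)
        self.tags = [None] * n_lines   # tag stored per line

    def access(self, block_addr):
        idx = block_addr % self.n
        if idx in self.disabled:
            return False               # disabled line: forced miss
        if self.tags[idx] == block_addr:
            return True                # hit
        self.tags[idx] = block_addr    # fill on miss
        return False

cache = DirectMappedCache(8, defective={3})
hits = [cache.access(a) for a in (1, 1, 3, 3, 9, 1)]
# Block 3 maps to the disabled line and can never hit; blocks 1 and 9
# conflict on line 1 and evict each other.
```

    The miss-rate penalty of disabling grows with the fraction of defective lines, which is why the chapter weighs it against spare blocks and decoder reconfiguration.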

  19. Fault Tolerant State Machines

    NASA Technical Reports Server (NTRS)

    Burke, Gary R.; Taft, Stephanie

    2004-01-01

    State machines are commonly used to control sequential logic in FPGAs and ASICs. An errant state machine can cause considerable damage to the device it is controlling. For example, in space applications the FPGA might be controlling pyros, which if fired at the wrong time will cause a mission failure. Even a well designed state machine can be subject to random errors as a result of SEUs from the radiation environment in space. There are various ways to encode the states of a state machine, and the type of encoding makes a large difference in the susceptibility of the state machine to radiation. In this paper we compare four methods of state machine encoding to find which gives the best fault tolerance, as well as determining the resources needed for each method.
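
    Why encoding matters can be shown with a small check: in a dense binary encoding every single-bit SEU lands on another valid state and goes undetected, whereas in a one-hot encoding (minimum Hamming distance 2 between code words) every single-bit flip produces an invalid code word that detection logic can catch. A minimal sketch; the four-state example is invented and does not reproduce the paper's four encodings:

```python
from itertools import combinations

def hamming(a: int, b: int) -> int:
    """Number of bit positions in which two code words differ."""
    return bin(a ^ b).count("1")

binary = [0b00, 0b01, 0b10, 0b11]           # dense binary encoding
one_hot = [0b0001, 0b0010, 0b0100, 0b1000]  # one-hot encoding

min_d_binary = min(hamming(a, b) for a, b in combinations(binary, 2))
min_d_one_hot = min(hamming(a, b) for a, b in combinations(one_hot, 2))

def single_bit_flips(code: int, width: int):
    """All code words reachable from `code` by one SEU."""
    return [code ^ (1 << i) for i in range(width)]

# Dense encoding: every single-bit upset yields another *valid*
# state, so the error is silent.
binary_silent = all(
    f in binary for s in binary for f in single_bit_flips(s, 2))

# One-hot: every single-bit upset yields an invalid code word
# (zero or two bits set), so it is detectable.
one_hot_detected = all(
    f not in one_hot for s in one_hot for f in single_bit_flips(s, 4))
```

    The trade-off is resources: one-hot needs one flip-flop per state plus invalid-state detection logic, which is part of what the paper's comparison measures.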

  20. Faulted Sedimentary Rocks

    NASA Technical Reports Server (NTRS)

    2004-01-01

    27 June 2004. This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows some of the layered, sedimentary rock outcrops that occur in a crater located at 8°N, 7°W, in western Arabia Terra. Dark layers and dark sand have enhanced the contrast of this scene. In the upper half of the image, one can see numerous lines that offset the layers. These lines are faults along which the rocks have broken and moved. The regularity of layer thickness and erosional expression are taken as evidence that the crater in which these rocks occur might once have been a lake. The image covers an area about 1.9 km (1.2 mi) wide. Sunlight illuminates the scene from the lower left.

  1. Arc fault detection system

    DOEpatents

    Jha, K.N.

    1999-05-18

    An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between current flowing from secondary windings located in a current transformer coupled to a power supply side of a switchboard, and a total current induced in secondary windings coupled to a load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and a power supply to remove the supply of power to the switchboard. 1 fig.

  2. Arc fault detection system

    DOEpatents

    Jha, Kamal N.

    1999-01-01

    An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between current flowing from secondary windings located in a current transformer coupled to a power supply side of a switchboard, and a total current induced in secondary windings coupled to a load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and a power supply to remove the supply of power to the switchboard.

  3. Improving Multiple Fault Diagnosability using Possible Conflicts

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2012-01-01

    Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.
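
    The fault-masking problem can be illustrated with a toy signature table: if the union of two single-fault signatures equals a third fault's signature, the multiple fault is indistinguishable from the single one, and adding a decoupled residual (the role Possible Conflicts play in the paper) restores diagnosability. This is a deliberately simplified abstraction, not the paper's event-based framework; all fault and residual names are invented:

```python
# Toy fault-signature table: fault -> set of residuals it affects.
signatures = {
    "f1": {"r1"},
    "f2": {"r2"},
    "f3": {"r1", "r2"},
}

def masked(f_single, f_pair):
    """A single fault is indistinguishable from a double fault if the
    pair triggers exactly the same set of residuals."""
    combined = signatures[f_pair[0]] | signatures[f_pair[1]]
    return signatures[f_single] == combined

# With one global model, {f1, f2} together fire r1 and r2, the same
# pattern as f3 alone, so f3 vs {f1, f2} is ambiguous.
ambiguous = masked("f3", ("f1", "f2"))

# Decoupling a residual so that r3 responds to f3 only (what a
# Possible-Conflict-based model decomposition can provide) removes
# the ambiguity.
signatures["f3"] = {"r1", "r2", "r3"}
still_ambiguous = masked("f3", ("f1", "f2"))
```

    In the full framework, signatures also carry event order and symbol information, which further increases distinguishability beyond this set-based picture.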

  4. Comparison of Cenozoic Faulting at the Savannah River Site to Fault Characteristics of the Atlantic Coast Fault Province: Implications for Fault Capability

    SciTech Connect

    Cumbest, R.J.

    2000-11-14

    This study compares the faulting observed on the Savannah River Site and vicinity with the faults of the Atlantic Coast Fault Province and concludes that both sets of faults exhibit the same general characteristics and are closely associated. Based on the strength of this association, it is concluded that the faults observed on the Savannah River Site and vicinity are in fact part of the Atlantic Coast Fault Province. Inclusion in this group means that the historical precedent established by decades of previous studies on the seismic hazard potential of the Atlantic Coast Fault Province is relevant to faulting at the Savannah River Site. That is, since these faults are genetically related, the conclusion of "not capable" reached in past evaluations applies. In addition, this study establishes a set of criteria by which individual faults may be evaluated to assess their inclusion in the Atlantic Coast Fault Province and the related applicability of the "not capable" conclusion.

  5. ANNs pinpoint underground distribution faults

    SciTech Connect

    Glinkowski, M.T.; Wang, N.C.

    1995-10-01

    Many offline fault location techniques in power distribution circuits involve patrolling along the lines or cables. In overhead distribution lines, most failures can be located quickly by visual inspection without the aid of special equipment. Locating a fault in underground cable systems, however, is more difficult. It requires additional equipment (e.g., thumpers, radars) to translate the fault in the buried cable into other forms of signals, such as acoustic and electromagnetic pulses. Trained operators must carry the equipment above ground, follow the path of the signal, and draw lines on their maps in order to locate the fault. Sometimes even smelling the burnt cable is a way of detecting the problem. These techniques are time consuming, not always reliable, and, as in the case of high-voltage dc thumpers, can cause additional damage to the healthy parts of the cable circuit. Online fault location in power networks that involve interconnected lines (cables) and multiterminal sources continues to receive great attention, with limited success in techniques that would provide simple and practical solutions. This article features a new online fault location technique that uses the pattern recognition capability of artificial neural networks (ANNs) and utilizes new capabilities of modern protective relaying hardware. The output of the neural network can be graphically displayed as a simple three-dimensional (3-D) chart that gives an operator an instantaneous indication of the location of the fault.

  6. Subaru FATS (fault tracking system)

    NASA Astrophysics Data System (ADS)

    Winegar, Tom W.; Noumaru, Junichi

    2000-07-01

    The Subaru Telescope requires a fault tracking system to record the problems and questions that staff experience during their work, and the solutions provided by technical experts to these problems and questions. The system records each fault and routes it to a pre-selected 'solution-provider' for each type of fault. The solution provider analyzes the fault and writes a solution that is routed back to the fault reporter and recorded in a 'knowledge-base' for future reference. The specifications of our fault tracking system were unique. (1) Dual language capacity -- Our staff speak both English and Japanese. Our contractors speak Japanese. (2) Heterogeneous computers -- Our computer workstations are a mixture of SPARCstations, Macintosh and Windows computers. (3) Integration with prime contractors -- Mitsubishi and Fujitsu are primary contractors in the construction of the telescope. In many cases, our 'experts' are our contractors. (4) Operator scheduling -- Our operators spend 50% of their work-month operating the telescope, the other 50% is spent working day shift at the base facility in Hilo, or day shift at the summit. We plan for 8 operators, with a frequent rotation. We need to keep all operators informed on the current status of all faults, no matter the operator's location.

  7. Why the 2002 Denali fault rupture propagated onto the Totschunda fault: implications for fault branching and seismic hazards

    USGS Publications Warehouse

    Schwartz, David P.; Haeussler, Peter J.; Seitz, Gordon G.; Dawson, Timothy E.

    2012-01-01

    The propagation of the rupture of the Mw7.9 Denali fault earthquake from the central Denali fault onto the Totschunda fault has provided a basis for dynamic models of fault branching in which the angle of the regional or local prestress relative to the orientation of the main fault and branch plays a principal role in determining which fault branch is taken. GeoEarthScope LiDAR and paleoseismic data allow us to map the structure of the Denali-Totschunda fault intersection and evaluate controls of fault branching from a geological perspective. LiDAR data reveal the Denali-Totschunda fault intersection is structurally simple with the two faults directly connected. At the branch point, 227.2 km east of the 2002 epicenter, the 2002 rupture diverges southeast to become the Totschunda fault. We use paleoseismic data to propose that differences in the accumulated strain on each fault segment, which express differences in the elapsed time since the most recent event, was one important control of the branching direction. We suggest that data on event history, slip rate, paleo offsets, fault geometry and structure, and connectivity, especially on high slip rate-short recurrence interval faults, can be used to assess the likelihood of branching and its direction. Analysis of the Denali-Totschunda fault intersection has implications for evaluating the potential for a rupture to propagate across other types of fault intersections and for characterizing sources of future large earthquakes.

  8. Fault Injection Campaign for a Fault Tolerant Duplex Framework

    NASA Technical Reports Server (NTRS)

    Sacco, Gian Franco; Ferraro, Robert D.; von Allmen, Paul; Rennels, Dave A.

    2007-01-01

    Fault tolerance is an efficient approach for avoiding or reducing the damage of a system failure. In this work we present the results of a fault injection campaign conducted on the Duplex Framework (DF). The DF is software developed by the UCLA group [1, 2] that uses a fault tolerant approach, allowing two replicas of the same process to run on two different nodes of a commercial off-the-shelf (COTS) computer cluster. A third process, running on a different node, constantly monitors the results computed by the two replicas and restarts the two replica processes if an inconsistency in their computation is detected. This approach is very cost efficient and can be adopted to control processes on spacecraft, where the fault rate produced by cosmic rays is not very high.

  9. Method of locating ground faults

    NASA Technical Reports Server (NTRS)

    Patterson, Richard L. (Inventor); Rose, Allen H. (Inventor); Cull, Ronald C. (Inventor)

    1994-01-01

    The present invention discloses a method of detecting and locating current imbalances, such as ground faults, in multiwire systems using the Faraday effect. As an example, for 2-wire or 3-wire (1 ground wire) electrical systems, light is transmitted along an optical path which is exposed to the magnetic fields produced by currents flowing in the hot and neutral wires. The rotations produced by these two magnetic fields cancel each other, so light on the optical path is unaffected by either. However, when a ground fault occurs, the optical path is exposed to a net Faraday-effect rotation due to the current imbalance, thereby exposing the ground fault.
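
    The detection principle reduces to measuring the net enclosed current: equal and opposite hot and neutral currents produce zero net rotation, while any leakage to ground leaves a residual rotation proportional to the imbalance. A schematic sketch of that arithmetic; the gain constant, threshold, and function names are invented for illustration:

```python
VERDET_GAIN = 0.01   # rad per ampere of net enclosed current (made up)

def faraday_rotation(i_hot, i_neutral):
    """Rotation seen by light on a path enclosing both conductors.
    Opposite current directions make the two contributions cancel,
    so only the *net* (imbalance) current produces rotation."""
    return VERDET_GAIN * (i_hot - i_neutral)

def has_ground_fault(i_hot, i_neutral, threshold_rad=1e-4):
    """Flag a fault when the residual rotation exceeds a threshold."""
    return abs(faraday_rotation(i_hot, i_neutral)) > threshold_rad

normal = has_ground_fault(10.0, 10.0)    # balanced: no net rotation
faulted = has_ground_fault(10.0, 9.5)    # 0.5 A leaking to ground
```

    Locating the fault then follows from which sensing segments along the run see the imbalance and which do not.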

  10. Granular Packings and Fault Zones

    NASA Astrophysics Data System (ADS)

    Åström, J. A.; Herrmann, H. J.; Timonen, J.

    2000-01-01

    The failure of a two-dimensional packing of elastic grains is analyzed using a numerical model. The packing fails through formation of shear bands or faults. During failure there is a separation of the system into two grain-packing states. In a shear band, local "rotating bearings" are spontaneously formed. The bearing state is favored in a shear band because it has a low stiffness against shearing. The "seismic activity" distribution in the packing has the same characteristics as that of the earthquake distribution in tectonic faults. The directions of the principal stresses in a bearing are reminiscent of those found at the San Andreas Fault.

  11. Method of locating ground faults

    NASA Astrophysics Data System (ADS)

    Patterson, Richard L.; Rose, Allen H.; Cull, Ronald C.

    1994-11-01

    The present invention discloses a method of detecting and locating current imbalances, such as ground faults, in multiwire systems using the Faraday effect. As an example, for 2-wire or 3-wire (1 ground wire) electrical systems, light is transmitted along an optical path which is exposed to the magnetic fields produced by currents flowing in the hot and neutral wires. The rotations produced by these two magnetic fields cancel each other, so light on the optical path is unaffected by either. However, when a ground fault occurs, the optical path is exposed to a net Faraday-effect rotation due to the current imbalance, thereby exposing the ground fault.

  12. Finding faults with the data

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    Rudolph Giuliani and Hillary Rodham Clinton are crisscrossing upstate New York looking for votes in the U.S. Senate race. Also cutting back and forth across upstate New York are hundreds of faults of a kind characterized by very sporadic seismic activity, according to Robert Jacobi, professor of geology at the University of Buffalo (UB), who conducted research with fellow UB geology professor John Fountain. "We have proof that upstate New York is crisscrossed by faults," Jacobi said. "In the past, the Appalachian Plateau, which stretches from Albany to Buffalo, was considered a pretty boring place structurally, without many faults or folds of any significance."

  13. 20 CFR 410.561b - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Fault. 410.561b Section 410.561b Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, TITLE IV-BLACK LUNG BENEFITS (1969- ) Payment of Benefits § 410.561b Fault. Fault as used in without fault (see §...

  14. 20 CFR 410.561b - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Fault. 410.561b Section 410.561b Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, TITLE IV-BLACK LUNG BENEFITS (1969- ) Payment of Benefits § 410.561b Fault. Fault as used in without fault (see §...

  15. Expert System Detects Power-Distribution Faults

    NASA Technical Reports Server (NTRS)

    Walters, Jerry L.; Quinn, Todd M.

    1994-01-01

    Autonomous Power Expert (APEX) computer program is prototype expert-system program detecting faults in electrical-power-distribution system. Assists human operators in diagnosing faults and deciding what adjustments or repairs needed for immediate recovery from faults or for maintenance to correct initially nonthreatening conditions that could develop into faults. Written in Lisp.

  16. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the...

  17. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the...

  18. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the...

  19. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the...

  20. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the...

  1. Spontaneous rupture on irregular faults

    NASA Astrophysics Data System (ADS)

    Liu, C.

    2014-12-01

    It is now known (e.g. Robinson et al., 2006) that when ruptures propagate around bends, the rupture velocity decreases. In the extreme case, a large bend in the fault can stop the rupture. We develop a 2-D finite difference method to simulate spontaneous dynamic rupture on irregular faults. This method is based on a second-order leap-frog finite difference scheme on a uniform mesh of triangles. A relaxation method is used to generate an irregular, fault-geometry-conforming mesh from the uniform mesh. Through this numerical coordinate mapping, the elastic wave equations are transformed and solved in a curvilinear coordinate system. Extensive numerical experiments using the linear slip-weakening law will be shown to demonstrate the effect of fault geometry on rupture properties. A long-term goal is to simulate the strong ground motion in the vicinity of bends, jogs, etc.

  2. The fault-tree compiler

    NASA Technical Reports Server (NTRS)

    Martensen, Anna L.; Butler, Ricky W.

    1987-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M-of-N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise to five digits (within the limits of double-precision floating-point arithmetic). The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX under the VMS operating system.
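
    Assuming statistically independent basic events, the five gate types reduce to simple closed forms that a solver can evaluate bottom-up. The sketch below illustrates those formulas only; it is not the Fault Tree Compiler's actual solution technique, and the gate-function interface is invented:

```python
from itertools import combinations
from math import prod

def gate_prob(kind, probs, m=None):
    """Probability of a gate's output event, assuming statistically
    independent inputs (the usual fault-tree assumption)."""
    if kind == "AND":
        return prod(probs)
    if kind == "OR":
        return 1.0 - prod(1.0 - p for p in probs)
    if kind == "INVERT":
        (p,) = probs
        return 1.0 - p
    if kind == "XOR":            # exactly one of two inputs occurs
        a, b = probs
        return a * (1 - b) + b * (1 - a)
    if kind == "MOFN":           # at least m of the n inputs occur
        n = len(probs)
        total = 0.0
        for k in range(m, n + 1):
            for on in combinations(range(n), k):
                total += prod(probs[i] if i in on else 1 - probs[i]
                              for i in range(n))
        return total
    raise ValueError(kind)

# Tiny tree: TOP = OR(AND(a, b), c)
a, b, c = 0.1, 0.2, 0.05
top = gate_prob("OR", [gate_prob("AND", [a, b]), c])
# 2-of-3 voter with p = 0.5 inputs fails with probability 0.5
vote = gate_prob("MOFN", [0.5, 0.5, 0.5], m=2)
```

    Bottom-up evaluation like this is exact only when no basic event feeds more than one gate; shared events are where real fault-tree solvers earn their keep.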

  3. Fault Interactions in Extensional Regimes

    NASA Astrophysics Data System (ADS)

    Streepey, M.; Lithgow-Bertelloni, C.

    2001-12-01

    Studies have shown that faults generally tend to reactivate over long histories of deformation, often in spite of less favorable orientations or changing stress regimes in the region. Reactivation of faults suggests that rheology is a key determining factor in the localization of intense deformation in orogenic belts. It is evident in these studies that stresses are preferentially partitioned into pre-existing weak zones of the crust. This is shown commonly in orogenic belts, where thrust faults reactivate as normal faults during syn- to post-orogenic extension. Therefore, the interaction of faults might be an important element in the deformation of the lithosphere during pre- and post-orogenic tectonics. On shorter timescales, it has been suggested that fault interactions are commonplace in areas of active seismicity, and that those interactions can be related to earthquake triggering and therefore may be critically important in assessing the behavior of the lithosphere during deformation. We investigate this problem concentrating on the time evolution of faults in extensional regimes. Geologic evidence in ancient orogenic belts shows periods of protracted normal fault motion over timescales of hundreds of millions of years after orogenesis. This motion is likely episodic rather than continuous; however, this is not constrained by field and geochronological studies. Fault evolution on these timescales is modeled using the finite element code ABAQUS. Our elastic results show, as expected from dislocation theory, that stress shadows produced by motion along faults can be linearly superposed and that faults do not have a high degree of interaction. We have constructed new models of two-dimensional finite elements that represent a block of crust under extensional stresses. Sited in these blocks are weak zones

  4. Cell boundary fault detection system

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN

    2009-05-05

    A method determines a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.
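
    The boundary-checking idea can be sketched as follows: nodes on adjacent cell faces exchange heartbeats, and the pattern of failed pings separates a nodal fault (all of a node's links dead) from a connection fault (an isolated dead link between otherwise healthy nodes). The node names and the diagnosis rule below are invented for illustration and are not the patent's actual method:

```python
from collections import defaultdict

def diagnose(pings):
    """pings maps (node_a, node_b) -> True if the heartbeat between
    the two boundary nodes succeeded. A node whose links ALL failed
    is reported as a nodal fault; a failed link between two
    otherwise-healthy nodes is reported as a connection fault."""
    links = defaultdict(list)
    for (a, b), ok in pings.items():
        links[a].append(ok)
        links[b].append(ok)
    bad_nodes = {n for n, oks in links.items() if not any(oks)}
    bad_links = [pair for pair, ok in pings.items()
                 if not ok and not (set(pair) & bad_nodes)]
    return bad_nodes, bad_links

pings = {
    ("n0", "n4"): True,
    ("n1", "n4"): True,
    ("n1", "n5"): False,   # n5 answers nobody...
    ("n2", "n5"): False,   # ...so n5 is a nodal fault
    ("n2", "n4"): True,
    ("n0", "n3"): False,   # isolated dead link: connection fault
    ("n3", "n4"): True,
}
bad_nodes, bad_links = diagnose(pings)
```

    Restricting the analysis to cell faces keeps the number of pairwise checks proportional to the boundary, not the volume, of the cell.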

  5. Weakening inside incipient thrust fault

    NASA Astrophysics Data System (ADS)

    Lacroix, B.; Tesei, T.; Collettini, C.; Oliot, E.

    2013-12-01

    In fold-and-thrust belts, shortening is mainly accommodated by thrust faults that nucleate along décollement levels. Geological and geophysical evidence suggests that these faults might be weak because of a combination of processes such as pressure-solution, phyllosilicate reorientation and delamination, and fluid pressurization. In this study we aim to decipher the processes, and their kinetics, responsible for the weakening of tectonic décollements. We studied the Millaris thrust (Southern Pyrenees), a fault representative of a décollement in its incipient stage. This fault accommodated a total shortening of about 30 meters and consists of a 10 m thick, intensely foliated phyllonite developed inside a homogeneous marly unit. Detailed chemical and mineralogical analyses were carried out to characterize the mineralogical changes, chemical transfers and volume change in the fault zone compared with the non-deformed parent sediments. We also carried out microstructural analysis on natural and experimentally deformed rocks. Illite and chlorite are the main hydrous minerals. Inside the fault zone, illite minerals are oriented along the schistosity, whereas chlorite coats the shear surfaces. Mass balance calculations demonstrate a volume loss of up to 50% for calcite inside the fault zone (and therefore a relative increase in phyllosilicate content) because of calcite pressure-solution mechanisms. We performed friction experiments in a biaxial deformation apparatus using intact rocks from the Millaris fault and its host sediments, sheared in the in-situ geometry. We imposed a range of normal stresses (10 to 50 MPa), sliding velocity steps (3-100 μm/s) and slide-hold-slide sequences (3 to 1000 s hold) under saturated conditions. Mechanical results demonstrate that both fault rocks and parent sediments are weaker than average geological materials (friction μ << 0.6) and have velocity-strengthening behavior because of the presence of phyllosilicate horizons. Fault rocks are

  6. Fault Tree Analysis: A Bibliography

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Fault tree analysis is a top-down approach to the identification of process hazards. It is regarded as one of the best methods for systematically identifying and graphically displaying the many ways something can go wrong. This bibliography references 266 documents in the NASA STI Database that contain the major concepts, fault tree analysis and risk and probability theory, in the basic index or major subject terms. An abstract is included with most citations, followed by the applicable subject terms.
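
    As a toy illustration of the top-down logic surveyed in this bibliography (our own sketch, not drawn from any referenced document), the probability of a fault tree's top event can be computed from independent basic-event probabilities with AND/OR gate formulas:

```python
from math import prod

# Illustrative sketch (our own, not from any referenced document): the
# probability of a fault tree's top event from independent basic events.

def and_gate(*probs):
    """P(all events occur) for independent events: product of p_i."""
    return prod(probs)

def or_gate(*probs):
    """P(at least one event occurs): 1 - product of (1 - p_i)."""
    return 1.0 - prod(1.0 - p for p in probs)

# Hypothetical tree: the top event occurs if the pump fails OR both
# redundant valves fail.
p_pump = 1e-3
p_valve = 1e-2
p_top = or_gate(p_pump, and_gate(p_valve, p_valve))
print(p_top)  # ~1.1e-3: dominated by the single-point pump failure
```

    These gate formulas suffice only for independent, non-repeated basic events; real fault trees with shared events require minimal cut-set analysis rather than naive gate-by-gate combination.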

  7. Fault-tolerant rotary actuator

    DOEpatents

    Tesar, Delbert

    2006-10-17

    A fault-tolerant actuator module, in a single containment shell, containing two actuator subsystems that are either asymmetrically or symmetrically laid out is provided. Fault tolerance in the actuators of the present invention is achieved by the employment of dual sets of equal resources. Dual resources are integrated into single modules, with each having the external appearance and functionality of a single set of resources.

  8. Tutorial: Advanced fault tree applications using HARP

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.

    1993-01-01

    Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.

  9. Developing Fault Models for Space Mission Software

    NASA Technical Reports Server (NTRS)

    Nikora, Allen P.; Munson, John C.

    2003-01-01

    A viewgraph presentation on the development of fault models for space mission software is shown. The topics include: 1) Goal: Improve Understanding of Technology Fault Generation Process; 2) Required Measurement; 3) Measuring Structural Evolution; 4) Module Attributes; 5) Principal Components of Raw Metrics; 6) The Measurement Process; 7) View of Structural Evolution at the System and Module Level; 8) Identifying and Counting Faults; 9) Fault Enumeration; 10) Modeling Fault Content; 11) Modeling Results; 12) Current and Future Work; and 13) Discussion and Conclusions.

  10. Error latency estimation using functional fault modeling

    NASA Technical Reports Server (NTRS)

    Manthani, S. R.; Saxena, N. R.; Robinson, J. P.

    1983-01-01

    A complete modeling of faults at gate level for a fault tolerant computer is both infeasible and uneconomical. Functional fault modeling is an approach where units are characterized at an intermediate level and then combined to determine fault behavior. The applicability of functional fault modeling to the FTMP is studied. Using this model a forecast of error latency is made for some functional blocks. This approach is useful in representing larger sections of the hardware and aids in uncovering system level deficiencies.

  11. Fault diagnosis of power systems

    SciTech Connect

    Sekine, Y. ); Akimoto, Y. ); Kunugi, M. )

    1992-05-01

    Fault diagnosis of power systems plays a crucial role in power system monitoring and control, ensuring a stable supply of electrical power to consumers. In the case of multiple faults or incorrect operation of protective devices, fault diagnosis requires judgment of complex conditions at various levels. For this reason, research into the application of knowledge-based systems got an early start, and such systems have been reported in many papers. In this paper, these systems are classified by the method of inference utilized in the knowledge-based systems for fault diagnosis of power systems. The characteristics of each class and corresponding issues, as well as state-of-the-art techniques for improving their performance, are presented. Additional topics covered are user interfaces, interfaces with energy management systems (EMSs), and expert system development tools for fault diagnosis. Results and evaluation of actual operation in the field are also discussed. Knowledge-based fault diagnosis of power systems will continue to spread.

  12. Software Fault Tolerance: A Tutorial

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2000-01-01

    Because of our present inability to produce error-free software, software fault tolerance is and will continue to be an important consideration in software systems. The root cause of software design errors is the complexity of the systems. Compounding the problems in building correct software is the difficulty in assessing the correctness of software for highly complex systems. After a brief overview of the software development processes, we note how hard-to-detect design faults are likely to be introduced during development and how software faults tend to be state-dependent and activated by particular input sequences. Although component reliability is an important quality measure for system level analysis, software reliability is hard to characterize and the use of post-verification reliability estimates remains a controversial issue. For some applications software safety is more important than reliability, and fault tolerance techniques used in those applications are aimed at preventing catastrophes. Single version software fault tolerance techniques discussed include system structuring and closure, atomic actions, inline fault detection, exception handling, and others. Multiversion techniques are based on the assumption that software built differently should fail differently and thus, if one of the redundant versions fails, it is expected that at least one of the other versions will provide an acceptable output. Recovery blocks, N-version programming, and other multiversion techniques are reviewed.
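
    The recovery-block technique mentioned above can be sketched as follows (a minimal illustration; all routine names and the acceptance test are hypothetical, not taken from the tutorial):

```python
# Illustrative sketch (names hypothetical): a recovery block runs a primary
# routine, checks its result with an acceptance test, and falls back to
# alternates if the test fails or the routine raises.

def recovery_block(alternates, acceptance_test, x):
    """Try each alternate in order; return the first acceptable result."""
    for alt in alternates:
        try:
            result = alt(x)
        except Exception:
            continue  # a raised exception counts as a detected failure
        if acceptance_test(x, result):
            return result
    raise RuntimeError("all alternates failed the acceptance test")

# Hypothetical example: compute a square root.
def primary(x):      # deliberately faulty for some inputs
    if x > 100:
        raise ValueError("overflow path")
    return x ** 0.5

def alternate(x):    # simpler, slower fallback (bisection)
    lo, hi = 0.0, max(1.0, x)
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid < x else (lo, mid)
    return lo

accept = lambda x, r: abs(r * r - x) < 1e-6
print(recovery_block([primary, alternate], accept, 144.0))  # ~12.0
```

    The design choice here is typical of single-version-plus-fallback schemes: the alternate is intentionally simpler and more trustworthy than the primary, at the cost of performance.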

  13. Normal fault earthquakes or graviquakes.

    PubMed

    Doglioni, C; Carminati, E; Petricca, P; Riguzzi, F

    2015-07-14

    Earthquakes dissipate energy through elastic waves. Canonically, this is the elastic energy accumulated during the interseismic period. However, in crustal extensional settings, gravity is the main energy source, driving the collapse of the fault hangingwall. The gravitational potential energy is about 100 times larger than the seismic energy corresponding to the observed magnitude, far more than enough to explain the earthquake. Therefore, normal faults have a different mechanism of energy accumulation and dissipation (graviquakes) with respect to other tectonic settings (strike-slip and contractional), where elastic energy allows motion even against gravity. The bigger the involved volume, the larger the magnitude. The steeper the normal fault, the larger the vertical displacement and the larger the seismic energy released. Normal faults activate preferentially at about 60° but can be shallower in low-friction rocks. In low static-friction rocks, the fault may partly creep, dissipating gravitational energy without releasing a great amount of seismic energy. The maximum volume involved in graviquakes is smaller than in the other tectonic settings, the activated fault being at most about three times as long as the hypocentre depth, which explains their higher b-value and the lower magnitude of the largest recorded events. Having a different phenomenology, graviquakes show peculiar precursors.

  15. Passive fault current limiting device

    DOEpatents

    Evans, Daniel J.; Cha, Yung S.

    1999-01-01

    A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit in a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel-connected coils of copper, a high-temperature superconductor, or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux densities produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils, which results in an increase in the impedance of the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip, or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. In a preferred embodiment, the major voltage during a fault condition is across the coils wound on the common core.

  17. Aeromagnetic anomalies over faulted strata

    USGS Publications Warehouse

    Grauch, V.J.S.; Hudson, Mark R.

    2011-01-01

    High-resolution aeromagnetic surveys are now an industry standard and they commonly detect anomalies that are attributed to faults within sedimentary basins. However, detailed studies identifying geologic sources of magnetic anomalies in sedimentary environments are rare in the literature. Opportunities to study these sources have come from well-exposed sedimentary basins of the Rio Grande rift in New Mexico and Colorado. High-resolution aeromagnetic data from these areas reveal numerous, curvilinear, low-amplitude (2–15 nT at 100-m terrain clearance) anomalies that consistently correspond to intrasedimentary normal faults (Figure 1). Detailed geophysical and rock-property studies provide evidence for the magnetic sources at several exposures of these faults in the central Rio Grande rift (summarized in Grauch and Hudson, 2007, and Hudson et al., 2008). A key result is that the aeromagnetic anomalies arise from the juxtaposition of magnetically differing strata at the faults as opposed to chemical processes acting at the fault zone. The studies also provide (1) guidelines for understanding and estimating the geophysical parameters controlling aeromagnetic anomalies at faulted strata (Grauch and Hudson), and (2) observations on key geologic factors that are favorable for developing similar sedimentary sources of aeromagnetic anomalies elsewhere (Hudson et al.).

  18. Fault Analysis in Solar Photovoltaic Arrays

    NASA Astrophysics Data System (ADS)

    Zhao, Ye

    Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown, at times, to prevent the fault-current protection devices from tripping. A small-scale experimental PV benchmark system has been developed at Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low-irradiance conditions. The other is a fault evolving in a PV array during the night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear faults under "low irradiance" and "night-to-day transition" conditions. However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" or "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.
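
    The current-limiting nature noted in the abstract can be illustrated with a simplified single-diode cell model (our sketch; all parameter values are hypothetical round numbers, not from the thesis):

```python
import math

# Illustrative sketch (parameters hypothetical): a simplified single-diode
# PV model, I = Iph - I0*(exp(V/(n*Vt)) - 1), with series and shunt
# resistance neglected. Even at V = 0 (a dead short across the module),
# the current cannot exceed the photocurrent Iph.

def pv_current(v, iph=8.0, i0=1e-9, n=1.3, vt=0.02585, cells=60):
    """Module current (A) at terminal voltage v (V) for a 60-cell module."""
    v_cell = v / cells
    return iph - i0 * (math.exp(v_cell / (n * vt)) - 1.0)

i_sc = pv_current(0.0)    # short-circuit (fault) current: exactly Iph
i_mp = pv_current(30.0)   # current at a mid-range operating voltage
print(i_sc, i_mp)
```

    Because the short-circuit current barely exceeds the normal operating current, a series fuse rated safely above I_sc may never carry enough current to clear a fault, consistent with the thesis's conclusion.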

  19. Critical fault patterns determination in fault-tolerant computer systems

    NASA Technical Reports Server (NTRS)

    Mccluskey, E. J.; Losq, J.

    1978-01-01

    The method proposed tries to enumerate all the critical fault-patterns (successive occurrences of failures) without analyzing every single possible fault. The conditions for the system to be operating in a given mode can be expressed in terms of the static states. Thus, one can find all the system states that correspond to a given critical mode of operation. The next step consists in analyzing the fault-detection mechanisms, the diagnosis algorithm and the process of switch control. From them, one can find all the possible system configurations that can result from a failure occurrence. Thus, one can list all the characteristics, with respect to detection, diagnosis, and switch control, that failures must have to constitute critical fault-patterns. Such an enumeration of the critical fault-patterns can be directly used to evaluate the overall system tolerance to failures. Present research is focused on how to efficiently make use of these system-level characteristics to enumerate all the failures that verify these characteristics.

  20. Fault Management Guiding Principles

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; Friberg, Kenneth H.; Fesq, Lorraine; Barley, Bryan

    2011-01-01

    Regardless of the mission type, deep space or low Earth orbit, robotic or human spaceflight, Fault Management (FM) is a critical aspect of NASA space missions. As the complexity of space missions grows, the complexity of supporting FM systems increases in turn. Data on recent NASA missions show that development of FM capabilities is a common driver for significant cost overruns late in the project development cycle. Efforts to understand the drivers behind these cost overruns, spearheaded by NASA's Science Mission Directorate (SMD), indicate that they are primarily caused by the growing complexity of FM systems and the lack of maturity of FM as an engineering discipline. NASA can and does develop FM systems that effectively protect mission functionality and assets. The cost growth results from a lack of FM planning and emphasis by project management, as well as from the maturity of FM as an engineering discipline, which lags behind that of other engineering disciplines. As a step towards controlling the cost growth associated with FM development, SMD has commissioned a multi-institution team to develop a practitioner's handbook representing best practices for the end-to-end processes involved in engineering FM systems. While currently concentrating primarily on FM for science missions, the expectation is that this handbook will grow into a NASA-wide handbook, serving as a companion to the NASA Systems Engineering Handbook. This paper presents a snapshot of the principles that have been identified to guide FM development from cradle to grave. The principles range from considerations for integrating FM into the project and SE organizational structure, through the relationship between FM designs and mission risk, to the use of the various tools of FM (e.g., redundancy) to meet the FM goal of protecting mission functionality and assets.

  2. Fault branching and rupture directivity

    NASA Astrophysics Data System (ADS)

    Fliss, Sonia; Bhat, Harsha S.; Dmowska, Renata; Rice, James R.

    2005-06-01

    Could the directivity of a complex earthquake be inferred from the ruptured fault branches it created? Typically, branches develop in forward orientation, making acute angles relative to the propagation direction. Direct backward branching of the same style as the main rupture (e.g., both right lateral) is disallowed by the stress field at the rupture front. Here we propose another mechanism of backward branching. In that mechanism, rupture stops along one fault strand, radiates stress to a neighboring strand, nucleates there, and develops bilaterally, generating a backward branch. This makes diagnosing the directivity of a past earthquake difficult without detailed knowledge of the branching process. As a field example, in the Landers 1992 earthquake, rupture stopped at the northern end of the Kickapoo fault, jumped onto the Homestead Valley fault, and developed bilaterally there, NNW to continue the main rupture but also SSE for 4 km, forming a backward branch. We develop theoretical principles underlying such rupture transitions, partly from elastostatic stress analysis, and then simulate the Landers example numerically using a two-dimensional elastodynamic boundary integral equation formulation incorporating slip-weakening rupture. This reproduces the proposed backward branching mechanism based on realistic if simplified fault geometries, prestress orientation corresponding to the region, standard lab friction values for peak strength, and fracture energies characteristic of the Landers event. We also show that the seismic S ratio controls the jumpable distance and that curving of a fault toward its compressional side, like locally along the southeastern Homestead Valley fault, induces near-tip increase of compressive normal stress that slows rupture propagation.

  3. Fault detection and isolation for complex system

    NASA Astrophysics Data System (ADS)

    Jing, Chan Shi; Bayuaji, Luhur; Samad, R.; Mustafa, M.; Abdullah, N. R. H.; Zain, Z. M.; Pebrianti, Dwi

    2017-07-01

    Fault Detection and Isolation (FDI) is a method to monitor, identify, and pinpoint the type and location of a fault in a complex multiple-input multiple-output (MIMO) non-linear system. A two-wheel robot is used as the complex system in this study. The aim of the research is to design and construct a Fault Detection and Isolation algorithm. The proposed method for fault identification uses a hybrid technique that combines a Kalman filter and an Artificial Neural Network (ANN). The Kalman filter processes the data from the system's sensors and indicates faults in the sensor readings. Error prediction is based on the fault magnitude and the time of fault occurrence. The ANN is then used to determine the type of fault and isolate it in the system.
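
    A minimal sketch of the Kalman-filter side of such a scheme (our illustration, not the authors' code): a scalar filter tracks a sensor reading and flags any sample whose innovation (measurement minus prediction) exceeds a threshold, which is how a sudden bias fault shows up.

```python
import random

# Illustrative sketch (an assumption, not the paper's algorithm): a scalar
# Kalman filter with a constant-state model; a step bias fault in the
# sensor produces a large innovation that triggers an alarm.

def detect_fault(measurements, q=1e-4, r=0.04, threshold=0.5):
    x, p = measurements[0], 1.0          # state estimate and its variance
    alarms = []
    for k, z in enumerate(measurements):
        p += q                            # predict (add process noise)
        innov = z - x                     # innovation / residual
        if abs(innov) > threshold:
            alarms.append(k)
        k_gain = p / (p + r)              # Kalman gain
        x += k_gain * innov               # update state
        p *= (1.0 - k_gain)               # update variance
    return alarms

random.seed(0)
clean = [1.0 + random.gauss(0, 0.05) for _ in range(50)]
faulty = clean[:30] + [z + 1.0 for z in clean[30:]]   # bias fault at k = 30
alarms = detect_fault(faulty)
print(alarms[0])   # fault first flagged at sample 30
```

    In the paper's hybrid scheme, the flagged residuals would then feed an ANN classifier to decide the fault type; here only the detection step is sketched.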

  4. Fault seal analysis in the North Sea

    SciTech Connect

    Knott, S.D. )

    1993-05-01

    The majority of North Sea structural traps require that at least one fault be a sealing fault. Over 400 faults from 101 exploration targets and 25 oil and gas fields were analyzed in a regional study of the North Sea. The faults cut clastic successions from a variety of depositional environments (marine, paralic, and nonmarine). The emphasis of the study was on fault-related seals that act as pressure or migration barriers over geologic time. Parameters such as fault strike and throw, reservoir thickness, depth, net-to-gross ratio, porosity, and net sand connectivity were plotted against seal performance to define trends and correlations to predict fault seal characteristics. A correlation appears to exist between fault orientation and sealing, although this is not statistically significant. Sealing is proportional to fault throw normalized as a fraction of the reservoir thickness. The great majority of faults with throw greater than the thickness of the reservoir interval were sealing faults. The most useful parameters in fault seal prediction are fault displacement, net-to-gross ratio, and net sand connectivity. The conclusions of this study have general applicability to fault seal prediction in exploration, development, and production of hydrocarbons in clastic successions in the North Sea and perhaps other areas as well. 15 refs., 19 figs., 1 tab.
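
    The throw-based criterion reported above can be sketched as a first-pass classifier (the intermediate threshold and the wording are hypothetical; only the throw-greater-than-thickness rule reflects the study's finding):

```python
# Illustrative sketch of the abstract's criterion: faults whose throw
# exceeds the reservoir thickness were almost all sealing, so normalized
# throw (throw / reservoir thickness) serves as a crude seal predictor.
# The 0.5 intermediate threshold is hypothetical.

def seal_likelihood(throw_m, reservoir_thickness_m):
    """Crude three-way classification from normalized fault throw."""
    ratio = throw_m / reservoir_thickness_m
    if ratio >= 1.0:
        return "likely sealing (reservoir fully offset)"
    if ratio >= 0.5:
        return "possible seal (partial juxtaposition)"
    return "likely non-sealing (sand-on-sand juxtaposition)"

print(seal_likelihood(120.0, 80.0))   # throw > thickness: likely sealing
print(seal_likelihood(20.0, 80.0))    # small offset: likely non-sealing
```

    A production workflow would combine this with the other predictive parameters the study names (net-to-gross ratio and net sand connectivity) rather than throw alone.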

  5. Anisotropy of permeability in faulted porous sandstones

    NASA Astrophysics Data System (ADS)

    Farrell, N. J. C.; Healy, D.; Taylor, C. W.

    2014-06-01

    Studies of fault rock permeabilities advance the understanding of fluid migration patterns around faults and contribute to predictions of fault stability. In this study a new model is proposed combining brittle deformation structures formed during faulting with fluid flow through pores. It assesses the impact of faulting on the permeability anisotropy of porous sandstone, hypothesising that the formation of fault-related micro-scale deformation structures will alter the host rock porosity organisation and create new permeability pathways. Core plugs and thin sections were sampled around a normal fault and oriented with respect to the fault plane. Anisotropy of permeability was determined in three orientations to the fault plane at ambient and confining pressures. Results show that permeabilities measured parallel to fault dip were up to 10 times higher than along-strike permeabilities. Analysis of corresponding thin sections shows elongate pores oriented at a low angle to the maximum principal palaeo-stress (σ1) and parallel to fault dip, indicating that permeability anisotropy is produced by grain-scale deformation mechanisms associated with faulting. Using a soil mechanics 'void cell model' this study shows how elongate pores could be produced in faulted porous sandstone by compaction and reorganisation of grains through shearing and cataclasis.

  6. Fault prediction for nonlinear stochastic system with incipient faults based on particle filter and nonlinear regression.

    PubMed

    Ding, Bo; Fang, Huajing

    2017-03-31

    This paper is concerned with fault prediction for nonlinear stochastic systems with incipient faults. Based on the particle filter and a reasonable assumption about the incipient faults, a modified fault estimation algorithm is proposed in which the system state is estimated simultaneously. Based on the modified fault estimate, an intuitive fault detection strategy is introduced. Once an incipient fault is detected, its parameters are identified by a nonlinear regression method. Then, based on the estimated parameters, the future fault signal can be predicted. Finally, the effectiveness of the proposed method is verified by simulations of a three-tank system.
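
    A minimal bootstrap particle filter illustrating the state-estimation step (our sketch under simplified scalar assumptions, not the paper's algorithm; the slow ramp stands in for an incipient fault):

```python
import random, math

# Illustrative sketch (an assumption, not the paper's code): a bootstrap
# particle filter estimates a scalar state from noisy measurements; the
# incipient fault is modeled as a slow drift added to the true state.

def particle_filter(measurements, n=500, proc_std=0.05, meas_std=0.2):
    random.seed(1)
    particles = [random.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for z in measurements:
        # propagate with process noise, then weight by measurement likelihood
        particles = [p + random.gauss(0, proc_std) for p in particles]
        weights = [math.exp(-0.5 * ((z - p) / meas_std) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        particles = random.choices(particles, weights=weights, k=n)  # resample
    return estimates

# True state 0; an incipient ramp fault of slope 0.02 starts at sample 50.
random.seed(2)
truth = [max(0, k - 50) * 0.02 for k in range(100)]
meas = [x + random.gauss(0, 0.2) for x in truth]
est = particle_filter(meas)
print(est[-1])   # should track the ramp (truth[-1] = 0.98)
```

    In the paper's scheme the estimated fault trajectory would then be fit by nonlinear regression to extrapolate the future fault signal; only the filtering stage is sketched here.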

  7. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1991-01-01

    Twenty independently developed but functionally equivalent software versions were used to investigate and compare empirically some properties of N-version programming, Recovery Block, and Consensus Recovery Block, using the majority and consensus voting algorithms. These were also compared with another hybrid fault-tolerant scheme called Acceptance Voting, using dynamic versions of consensus and majority voting. Consensus voting provides adaptation of the voting strategy to varying component reliability, failure correlation, and output space characteristics. Since failure correlation among versions effectively reduces the cardinality of the space in which the voter makes decisions, consensus voting is usually preferable to simple majority voting in any fault-tolerant system. When versions have considerably different reliabilities, the version with the best reliability will perform better than any of the fault-tolerant techniques.
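
    The difference between majority and consensus voting can be sketched as follows (a simplified illustration based on the abstract's terminology; tie-breaking and output-space granularity are ignored):

```python
from collections import Counter

# Illustrative sketch (an assumption from the abstract's terminology):
# majority voting requires more than half of the versions to agree;
# consensus voting accepts the largest agreeing group even without a
# strict majority, adapting to partial agreement among versions.

def majority_vote(outputs):
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None   # None: no majority

def consensus_vote(outputs):
    return Counter(outputs).most_common(1)[0][0]          # plurality wins

outputs = [42, 42, 41, 40, 39]        # five versions, scattered failures
print(majority_vote(outputs))          # None -- only 2 of 5 agree
print(consensus_vote(outputs))         # 42 -- largest agreeing group
```

    This is why, as the abstract notes, consensus voting degrades more gracefully when failure correlation shrinks the effective output space: it still returns an answer when no strict majority exists.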

  8. PC-based fault finder

    SciTech Connect

    Bengiamin, N.N. ); Jensen, C.A. . Electrical Engineering Dept. Otter Tail Power Co., Fergus Falls, MN . System Protection Group); McMahon, H. )

    1993-07-01

    Electric utilities are continually pressed to stay competitive while meeting the increasing demands of today's sophisticated customers. Advances in electronic equipment and the improved array of electrically driven devices are setting new standards for improved reliability and quality of service. Besides specifications on voltage and frequency regulation and the permitted harmonic content, to name a few, the number and duration of service interruptions have a dramatic, direct effect on the customer. Accurate fault locating reduces transmission line patrolling and is of particular significance in repairing long lines in rough terrain. Shortened outage times, reduced equipment degradation and stress on the system, faster service restoration, and improved revenue are immediate outcomes of fast fault locating, which ensures minimum loss of system security. This article focuses on a PC-based (DOS) computer program that has unique features for identifying the type of fault and its location on overhead transmission/distribution lines. Balanced and unbalanced faults are identified and located accurately while accounting for changes in conductor sizes and network configuration. The presented concepts and methodologies were spurred by Otter Tail Power's need for an accurate fault locating scheme to accommodate multiple feeders with mixed line configurations. A case study based on a section of the Otter Tail network is presented to illustrate the features and capabilities of the developed software.

  9. Quaternary faults of west Texas

    SciTech Connect

    Collins, E.W.; Raney, J.A. . Bureau of Economic Geology)

    1993-04-01

    North- and northwest-striking intermontane basins and associated normal faults in West Texas and adjacent Chihuahua, Mexico, formed in response to Basin and Range tectonism that began about 24 Ma. Data on the precise ages of faulted and unfaulted Quaternary deposits are sparse. However, age estimates made on the basis of field stratigraphic relationships and the degree of calcic soil development have helped determine that many of the faults that bound the basin margins have ruptured since the middle Pleistocene and that some faults probably ruptured during the Holocene. Average recurrence intervals between surface ruptures since the middle Pleistocene appear to be relatively long, about 10,000 to 100,000 yr. Maximum throw during single rupture events has been between 1 and 3 m. Historic seismicity in West Texas is low compared to seismicity in many parts of the Basin and Range province. The largest historic earthquake, the 1931 Valentine earthquake in Ryan Flat/Lobo Valley, had a magnitude of 6.4 and no reported surface rupture. The most active Quaternary faults occur within the 120-km-long Hueco Bolson, the 70-km-long Red Light Bolson, and the >200-km-long Salt Basins/Wild Horse Flat/Lobo Valley/Ryan Flat.

  10. Where's the Hayward Fault? A Green Guide to the Fault

    USGS Publications Warehouse

    Stoffer, Philip W.

    2008-01-01

    This report describes self-guided field trips to one of North America's most dangerous earthquake faults: the Hayward Fault. Locations were chosen because of their easy access using mass transit and/or their significance relating to the natural and cultural history of the East Bay landscape. This field-trip guidebook was compiled to help commemorate the 140th anniversary of an estimated M 7.0 earthquake that occurred on the Hayward Fault at approximately 7:50 AM, October 21st, 1868. Although many reports and on-line resources have been compiled about the science and engineering associated with earthquakes on the Hayward Fault, this report has been prepared to serve as an outdoor guide to the fault for the interested public and for educators. The first chapter is a general overview of the geologic setting of the fault. This is followed by ten chapters of field trips to selected areas along the fault, or in the vicinity, where landscape, geologic, and man-made features that have relevance to understanding the nature of the fault and its earthquake history can be found. A glossary is provided to define and illustrate scientific terms used throughout this guide. A "green" theme helps conserve resources and promotes use of public transportation, where possible. Although access to all locations described in this guide is possible by car, alternative suggestions are provided. To help conserve paper, this guidebook is available on-line only; however, select pages or chapters (field trips) within this guide can be printed separately to take along on an excursion. The discussions in this paper highlight transportation alternatives to visit selected field trip locations. In some cases, combinations, such as a ride on BART and a bus, can be used instead of automobile transportation. For other locales, bicycles can be an alternative means of transportation. Transportation descriptions on selected pages are intended to help guide fieldtrip planners or participants choose trip

  11. Reconsidering Fault Slip Scaling

    NASA Astrophysics Data System (ADS)

    Gomberg, J. S.; Wech, A.; Creager, K. C.; Obara, K.; Agnew, D. C.

    2015-12-01

    The scaling of fault slip events given by the relationship between the scalar moment M0 and duration T potentially provides key constraints on the underlying physics controlling slip. Many studies have suggested that measurements of M0 and T are related as M0 = Kf T^3 for 'fast' slip events (earthquakes) and M0 = Ks T for 'slow' slip events, in which Kf and Ks are proportionality constants, although some studies have inferred intermediate relations. Here 'slow' and 'fast' refer to slip front propagation velocities, either so slow that seismic radiation is too small or long period to be measurable or fast enough that dynamic processes may be important for the slip process and measurable seismic waves radiate. Numerous models have been proposed to explain the differing M0-T scaling relations. We show that a single, simple dislocation model of slip events within a bounded slip zone may explain nearly all M0-T observations. Rather than different scaling for fast and slow populations, we suggest that within each population the scaling changes from M0 proportional to T^3 to T when the slipping area reaches the slip zone boundaries and transitions from unbounded, 2-dimensional to bounded, 1-dimensional growth. This transition has not been apparent previously for slow events because data have sampled only the bounded regime and may be obscured for earthquakes when observations from multiple tectonic regions are combined. We have attempted to sample the expected transition between bounded and unbounded regimes for the slow slip population, measuring tremor cluster parameters from catalogs for Japan and Cascadia and using them as proxies for small slow slip event characteristics. For fast events we employed published earthquake slip models. Observations corroborate our hypothesis, but highlight observational difficulties. 
We find that M0-T observations for both slow and fast slip events, spanning 12 orders of magnitude in M0, are consistent with a single model based on dislocation
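    The two scaling regimes described above can be reproduced with a toy dislocation model. This is a sketch in arbitrary units under assumed geometry (a circular patch that becomes a ribbon once it spans the zone width W); the function and parameters are illustrative, not the authors' model:

```python
import math

def moment(t, v=1.0, W=1.0, stress_drop=1.0, mu=1.0):
    """Toy dislocation model: moment vs. duration for a slip patch
    growing at speed v inside a slip zone of width W (arbitrary units).
    Unbounded (circular) growth gives M0 ~ t^3; once the patch spans
    the zone width, growth is 1-D and M0 ~ t."""
    r = v * t
    if 2 * r <= W:                      # unbounded: circular patch of radius r
        area = math.pi * r ** 2
        slip = (stress_drop / mu) * r   # slip scales with patch radius
    else:                               # bounded: ribbon of width W, length 2r
        area = W * 2 * r
        slip = (stress_drop / mu) * W / 2
    return mu * area * slip             # M0 = mu * A * D

# Scaling exponents recovered from the model:
small = math.log(moment(0.2) / moment(0.1)) / math.log(2)   # unbounded regime
large = math.log(moment(200) / moment(100)) / math.log(2)   # bounded regime
```

The exponent falls from 3 to 1 once the slipping patch spans W, mirroring the proposed unbounded-to-bounded transition within a single population.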

  12. Transient Faults in Computer Systems

    NASA Technical Reports Server (NTRS)

    Masson, Gerald M.

    1993-01-01

    A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.
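    The certification-trail idea (a primary computation emits auxiliary data that lets a cheaper secondary check certify the result) can be illustrated with a hypothetical sketch. Sorting is used as the example computation; `primary_sort` and `certify` are invented names, not the paper's code:

```python
def primary_sort(xs):
    """Primary computation: sort xs and emit a certification trail
    (the permutation of input indices that produces the output)."""
    trail = sorted(range(len(xs)), key=lambda i: xs[i])
    return [xs[i] for i in trail], trail

def certify(xs, result, trail):
    """Secondary check: verify the result in linear time using the
    trail instead of re-sorting.  Any transient fault that corrupts
    the output (or the trail) makes some check below fail."""
    # the trail must be a permutation of the input indices
    if len(trail) != len(xs) or set(trail) != set(range(len(xs))):
        return False
    # the result must match the trail applied to the input...
    if [xs[i] for i in trail] != result:
        return False
    # ...and must be in nondecreasing order
    return all(a <= b for a, b in zip(result, result[1:]))
```

Because the checker inspects the actual output, coverage does not depend on any particular error model, echoing the completeness property claimed above.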

  13. InSAR measurements around active faults: creeping Philippine Fault and un-creeping Alpine Fault

    NASA Astrophysics Data System (ADS)

    Fukushima, Y.

    2013-12-01

    Recently, interferometric synthetic aperture radar (InSAR) time-series analyses have been frequently applied to measure the time-series of small and quasi-steady displacements in wide areas. Large efforts in methodological development have been made to pursue higher temporal and spatial resolutions by using frequently acquired SAR images and detecting more pixels that exhibit phase stability. While such a high resolution is indispensable for tracking displacements of man-made and other small-scale structures, it is not necessarily needed and can be unnecessarily computer-intensive for measuring the crustal deformation associated with active faults and volcanic activities. I apply a simple and efficient method to measure the deformation around the Alpine Fault in the South Island of New Zealand, and the Philippine Fault on Leyte Island. I use a small-baseline subset (SBAS) analysis approach (Berardino et al., 2002). Generally, the more we average the pixel values, the more coherent the signals are. Considering that, for the deformation around active faults, the spatial resolution can be as coarse as a few hundred meters, we can severely 'multi-look' the interferograms. The two applied cases in this study benefited from this approach; I could obtain the mean velocity maps on practically the entire area without discarding decorrelated areas. The signals could have been only partially obtained by standard persistent scatterer or single-look small-baseline approaches that are much more computer-intensive. In order to further increase the signal detection capability, it is sometimes effective to introduce a processing algorithm adapted to the signal of interest. In InSAR time-series processing, one usually needs to set the reference point because interferograms are all relative measurements. It is difficult, however, to fix the reference point when one aims to measure long-wavelength deformation signals that span the whole analysis area. 
This problem can be
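    The severe 'multi-looking' mentioned above amounts to averaging unit phasors over coarse windows, trading resolution for coherence. A minimal sketch, assuming a small 2-D phase grid; the function and window size are illustrative, not the processing chain actually used:

```python
import cmath

def multilook(phase, k):
    """Multi-look a 2-D interferogram phase field (radians) by summing
    unit phasors over non-overlapping k x k windows, returning the
    coarse phase grid.  Averaging phasors (rather than raw phases)
    handles phase wrapping correctly."""
    rows, cols = len(phase), len(phase[0])
    out = []
    for r in range(0, rows - rows % k, k):
        row = []
        for c in range(0, cols - cols % k, k):
            s = sum(cmath.exp(1j * phase[r + i][c + j])
                    for i in range(k) for j in range(k))
            row.append(cmath.phase(s))
        out.append(row)
    return out
```

With noisy but zero-mean phase in a window, the averaged phasor points near the true phase, which is why heavily multi-looked maps retain signal even in partly decorrelated areas.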

  14. Growth of faults in crystalline rock

    NASA Astrophysics Data System (ADS)

    Martel, S. J.

    2009-04-01

    The growth of faults depends on the coupled interplay of the distribution of slip, fault geometry, the stress field in the host rock, and deformation of the host rock, which commonly is manifest in secondary fracturing. The distribution of slip along a fault depends strongly on its structure, the stress perturbation associated with its interaction with nearby faults, and its strength distribution; mechanical analyses indicate that the first two factors are more influential than the third. Slip distribution data typically are discrete, but commonly are described, either explicitly or implicitly, using continuous interpolation schemes. Where the third derivative of a continuous slip profile is discontinuous, the compatibility conditions of strain are violated, and fracturing and perturbations to fault geometry should occur. Discontinuous third derivatives accompany not only piecewise linear functions, but also functions as seemingly benign as cubic splines. The stress distribution and fracture distribution along a fault depend strongly on how the fault grows. Evidence to date indicates that a fault that nucleates along a pre-existing, nearly planar joint or a dike typically develops secondary fractures only near its tipline when the slip is small relative to the fault length. In contrast, stress concentrations and fractures are predicted where a discontinuous or non-planar fault exhibits steps and bends; field observations bear this prediction out. Secondary fracturing influences how faults grow by creating damage zones and by linking originally discontinuous elements into a single fault zone. Field observations of both strike-slip faults and dip-slip faults show that linked segments usually will not be coplanar; elastic stress analyses indicate that this is an inherent tendency of how three-dimensional faults grow. Advances in the data we collect and in the rigor and sophistication of our analyses seem essential to substantially advance our ability to successfully

  15. Faulting at Mormon Point, Death Valley, California: A low-angle normal fault cut by high-angle faults

    NASA Astrophysics Data System (ADS)

    Keener, Charles; Serpa, Laura; Pavlis, Terry L.

    1993-04-01

    New geophysical and fault kinematic studies indicate that late Cenozoic basin development in the Mormon Point area of Death Valley, California, was accommodated by fault rotations. Three of six fault segments recognized at Mormon Point are now inactive and have been rotated to low dips during extension. The remaining three segments are now active and moderately to steeply dipping. From the geophysical data, one active segment appears to offset the low-angle faults in the subsurface of Death Valley.

  16. Seismology: Diary of a wimpy fault

    NASA Astrophysics Data System (ADS)

    Bürgmann, Roland

    2015-05-01

    Subduction zone faults can slip slowly, generating tremor. The varying correlation between tidal stresses and tremor occurring deep in the Cascadia subduction zone suggests that the fault is inherently weak, and gets weaker as it slips.

  17. Parametric Modeling and Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Ju, Jianhong

    2000-01-01

    Fault tolerant control is considered for a nonlinear aircraft model expressed as a linear parameter-varying system. By proper parameterization of foreseeable faults, the linear parameter-varying system can include fault effects as additional varying parameters. A recently developed technique in fault effect parameter estimation allows us to assume that estimates of the fault effect parameters are available on-line. Reconfigurability is calculated for this model with respect to the loss of control effectiveness to assess the potential of the model to tolerate such losses prior to control design. The control design is carried out by applying a polytopic method to the aircraft model. An error bound on fault effect parameter estimation is provided, within which the Lyapunov stability of the closed-loop system is robust. Our simulation results show that as long as the fault parameter estimates are sufficiently accurate, the polytopic controller can provide satisfactory fault tolerance.

  18. Solar Dynamic Power System Fault Diagnosis

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Dias, Lakshman G.

    1996-01-01

    The objective of this research is to conduct various fault simulation studies for diagnosing the type and location of faults in the power distribution system. Different types of faults are simulated at different locations within the distribution system and the faulted waveforms are monitored at measurable nodes such as at the output of the DDCUs. These fault signatures are processed using feature extractors such as FFT and wavelet transforms. The extracted features are fed to a clustering-based neural network for training and subsequent testing using previously unseen data. Different load models consisting of constant impedance and constant power are used for the loads. Open circuit faults and short circuit faults are studied. It is concluded from the present studies that using features extracted from wavelet transforms gives better success rates during ANN testing. The trained ANNs are capable of diagnosing fault types and approximate locations in the solar dynamic power distribution system.
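    Spectral feature extraction of the kind described can be sketched with a plain DFT as a stand-in for the FFT/wavelet extractors; the bin count and example signal are hypothetical, not the study's waveforms:

```python
import cmath, math

def dft_features(signal, n_bins=4):
    """Crude spectral feature vector: normalized magnitudes of the
    first n_bins DFT coefficients of a sampled waveform.  Such a
    vector could then be fed to a classifier, as in the fault
    diagnosis scheme described above."""
    n = len(signal)
    feats = []
    for k in range(n_bins):
        coef = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                   for t in range(n))
        feats.append(abs(coef) / n)
    return feats

# Example: a pure tone at bin 1 concentrates its energy there
tone = [math.sin(2 * math.pi * t / 8) for t in range(8)]
features = dft_features(tone, n_bins=3)
```

A faulted waveform would shift energy between bins, which is the kind of signature a trained network can separate into fault classes.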

  19. An experimental study of memory fault latency

    NASA Technical Reports Server (NTRS)

    Chillarege, Ram; Iyer, Ravi K.

    1989-01-01

    The difficulty with the measurement of fault latency is due to the lack of observability of the fault occurrence and error generation instants in a production environment. The authors describe an experiment, using data from a VAX 11/780 under real workload, to study fault latency in the memory subsystem accurately. Fault latency distributions are generated for stuck-at-zero (s-a-0) and stuck-at-one (s-a-1) permanent fault models. The results show that the mean fault latency of an s-a-0 fault is nearly five times that of the s-a-1 fault. An analysis of variance is performed to quantify the relative influence of different workload measures on the evaluated latency.
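    The asymmetry between s-a-0 and s-a-1 latency is commonly attributed to memory words being mostly zeros under typical workloads, so a stuck-at-0 fault stays dormant longer. A Monte-Carlo sketch under that assumption; the bit probability `p_one` and the per-access model are illustrative, not the VAX 11/780 experiment:

```python
import random

def fault_latency(stuck_value, p_one=0.2, trials=4000, seed=1):
    """Mean fault latency (in accesses) for a stuck-at fault on one
    memory bit.  At each access the correct bit value is 1 with
    probability p_one; the fault surfaces as an error only when the
    correct value differs from the stuck value."""
    rng = random.Random(seed)
    error_prob = p_one if stuck_value == 0 else 1 - p_one
    total = 0
    for _ in range(trials):
        t = 1
        while rng.random() >= error_prob:  # fault stays latent this access
            t += 1
        total += t
    return total / trials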

  20. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1987-01-01

    Specific topics briefly addressed include: the consistent comparison problem in N-version system; analytic models of comparison testing; fault tolerance through data diversity; and the relationship between failures caused by automatically seeded faults.

  1. A summary of the active fault investigation in the extension sea area of Kikugawa fault and the Nishiyama fault , N-S direction fault in south west Japan

    NASA Astrophysics Data System (ADS)

    Abe, S.

    2010-12-01

    In this study, we carried out two active fault investigations, at the request of the Ministry of Education, Culture, Sports, Science and Technology, in the offshore extensions of the Kikugawa fault and the Nishiyama fault. Based on those results, we aim to clarify the following four matters about both active faults: (1) fault continuity between land and sea; (2) the length of the active fault; (3) the division into segments; (4) activity characteristics. In this investigation, we carried out a digital single-channel seismic reflection survey over the whole area of both active faults. In addition, a high-resolution multichannel seismic reflection survey was carried out to resolve the detailed structure of the shallow strata. Furthermore, vibrocoring was carried out to obtain information on sedimentation ages. The reflection profiles of both active faults were extremely clear. Characteristics of strike-slip faulting, such as flower structures and the dispersion of fault strands, were recognized. In addition, age analysis of the strata showed that the Holocene sediment cover on the continental shelf in this area is extremely thin. This investigation confirmed that the Kikugawa fault extends farther offshore than previous studies had indicated. In addition, the fault zone appears to widen seaward while splaying into multiple strands. At present, we think the Kikugawa fault can be divided into several segments based on the distribution of these strands. For the Nishiyama fault, reflection profiles showing the existence of the active fault were acquired in the sea between Ooshima and Kyushu. From this result and existing topographic studies on Ooshima, the Nishiyama fault and the Ooshima offshore active fault are thought to form a continuous structure. Along the Ooshima offshore active fault, the uplifted side changes, and the strike direction changes as well. 
Therefore, we

  2. Hydraulic Diagnostics and Fault Isolation Test Program.

    DTIC Science & Technology

    1987-02-13

    and Fault Isolation Test Program was to demonstrate and evaluate the practicality of a fault detection and isolation system on an aircraft. The...system capable of fault detection and isolation in a hydraulic subsystem through the use of sensors and a microprocessor (Fig. 1). The microprocessor...DISCUSSION 2.1 DESCRIPTION OF HYDRAULIC SYSTEM SIMULATOR The fault detection and isolation test arrangement consisted of a high pressure, lightweight

  3. Does fault size control earthquake size?

    NASA Astrophysics Data System (ADS)

    Holt, A.; Jackson, D.

    2003-04-01

    We examine the hypothesis that earthquake size is limited by fault length. Large earthquakes generally have longer and wider rupture surfaces and greater displacement than small ones. Several publications (e.g. Wells and Coppersmith, 1995) present regression relationships between earthquake size and extent of faulting. In these studies, earthquake size is generally measured by the seismic moment and faulting extent is measured by length or area of the rupture surface. The regression relationships are frequently used to estimate the upper magnitude limit for future earthquakes based on the dimensions of mapped faults. Two assumptions are crucial: (1) that future ruptures will be limited by the mapped fault dimensions, and (2) that the relationship between earthquake size and fault rupture is the same for ordinary earthquakes as for fault-limited ones. To test the first assumption we compared rupture lengths of California earthquakes with previously mapped fault length. There are 14 California earthquakes and 5 Italian earthquakes after 1976 for which this comparison can be made. Neither rupture length nor moment correlates significantly with prior fault length. For five California and three Italian events, the rupture extends beyond both ends of the previously mapped fault, or else there was no mapped fault. None of the 19 ruptures stopped at the end of a fault trace. For California and Italy, earthquakes don't stop at the ends of mapped faults, assumption (1) fails, and the regression relationships are not valid estimators of maximum magnitude. Today's faults were not always there, so some earthquakes must create or lengthen faults. However, we don't know whether the maverick earthquakes actually ruptured virgin rock, or whether the fault maps were merely inadequate.
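    Regressions of the kind tested here take the form M = a + b log10(L). A least-squares sketch on synthetic data; the data points and recovered coefficients are hypothetical, not the published Wells and Coppersmith values:

```python
import math

def fit_mag_vs_loglength(lengths_km, mags):
    """Ordinary least squares for M = a + b*log10(L); returns (a, b).
    Illustrative only -- inputs here are invented, not a real
    earthquake catalog."""
    xs = [math.log10(L) for L in lengths_km]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(mags) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, mags))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b
```

The abstract's point is that such a fit only bounds future magnitudes if ruptures actually stop at mapped fault ends, which the California and Italian data contradict.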

  4. Ground Fault--A Health Hazard

    ERIC Educational Resources Information Center

    Jacobs, Clinton O.

    1977-01-01

    A ground fault is especially hazardous because the resistance through which the current is flowing to ground may be sufficient to cause electrocution. The Ground Fault Circuit Interrupter (G.F.C.I.) protects 15 and 25 ampere 120 volt circuits from ground fault condition. The design and examples of G.F.C.I. functions are described in this article.…

  5. Fault-related clay authigenesis along the Moab Fault: Implications for calculations of fault rock composition and mechanical and hydrologic fault zone properties

    USGS Publications Warehouse

    Solum, J.G.; Davatzes, N.C.; Lockner, D.A.

    2010-01-01

    The presence of clays in fault rocks influences both the mechanical and hydrologic properties of clay-bearing faults; understanding the origin and distribution of clays in fault rocks is therefore of great importance for defining fundamental properties of faults in the shallow crust. Field mapping shows that layers of clay gouge and shale smear are common along the Moab Fault, from exposures with throws ranging from 10 to ~1000 m. Elemental analyses of four locations along the Moab Fault show that fault rocks are enriched in clays at R191 and Bartlett Wash, but that this clay enrichment occurred at different times and was associated with different fluids. Fault rocks at Corral and Courthouse Canyons show little difference in elemental composition from the adjacent protolith, suggesting that formation of fault rocks at those locations is governed by mechanical processes. Friction tests show that these authigenic clays result in fault zone weakening, potentially influencing both the style of failure along the fault (seismogenic vs. aseismic) and the amount of fluid loss associated with coseismic dilation. Scanning electron microscopy shows that authigenesis promotes the continuity of slip surfaces, thereby enhancing seal capacity. The occurrence of this authigenesis, and its influence on the sealing properties of faults, highlights the importance of determining the processes that control this phenomenon. © 2010 Elsevier Ltd.

  7. Measurement and application of fault latency

    NASA Technical Reports Server (NTRS)

    Shin, K. G.; Lee, Y.-H.

    1986-01-01

    The time interval between the occurrence of a fault and the detection of the error caused by the fault is divided by the generation of that error into two parts: fault latency and error latency. Since the moment of error generation is not directly observable, all related works in the literature have dealt with only the sum of fault and error latencies, thereby making the analysis of their separate effects impossible. To remedy this deficiency, (1) a new methodology for indirectly measuring fault latency is presented; (2) the distribution of fault latency is derived from the methodology; and (3) the knowledge of fault latency is applied to the analysis of two important examples. The proposed methodology has been implemented for measuring fault latency in the Fault-Tolerant Multiprocessor (FTMP) at the NASA Airlab. The experimental results show wide variations in the mean fault latencies of different function circuits within FTMP. Also, the measured distributions of fault latency are shown to have monotone hazard rates. Consequently, Gamma and Weibull distributions are selected for the least-squares fit as the distribution of fault latency.

  8. High temperature superconducting fault current limiter

    DOEpatents

    Hull, J.R.

    1997-02-04

    A fault current limiter for an electrical circuit is disclosed. The fault current limiter includes a high temperature superconductor in the electrical circuit. The high temperature superconductor is cooled below its critical temperature to maintain the superconducting electrical properties during operation as the fault current limiter. 15 figs.

  9. High temperature superconducting fault current limiter

    DOEpatents

    Hull, John R.

    1997-01-01

    A fault current limiter (10) for an electrical circuit (14). The fault current limiter (10) includes a high temperature superconductor (12) in the electrical circuit (14). The high temperature superconductor (12) is cooled below its critical temperature to maintain the superconducting electrical properties during operation as the fault current limiter (10).

  10. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Fault. 255.11 Section 255.11 Employees... § 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than...

  11. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  12. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Fault. 255.11 Section 255.11 Employees... § 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than...

  13. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  14. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Fault. 255.11 Section 255.11 Employees... § 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than...

  15. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  16. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Fault areas. 258.13 Section 258.13... SOLID WASTE LANDFILLS Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral expansions shall not be located within 200 feet (60 meters) of a fault that has had displacement in...

  17. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  18. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  19. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  20. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  1. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  2. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  3. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Fault. 255.11 Section 255.11 Employees... § 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than...

  4. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Fault. 255.11 Section 255.11 Employees... § 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than...

  5. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  6. Tsunamis and splay fault dynamics

    USGS Publications Warehouse

    Wendt, J.; Oglesby, D.D.; Geist, E.L.

    2009-01-01

    The geometry of a fault system can have significant effects on tsunami generation, but most tsunami models to date have not investigated the dynamic processes that determine which path rupture will take in a complex fault system. To gain insight into this problem, we use the 3D finite element method to model the dynamics of a plate boundary/splay fault system. We use the resulting ground deformation as a time-dependent boundary condition for a 2D shallow-water hydrodynamic tsunami calculation. We find that if the stress distribution is homogeneous, rupture remains on the plate boundary thrust. When a barrier is introduced along the strike of the plate boundary thrust, rupture propagates to the splay faults, and produces a significantly larger tsunami than in the homogeneous case. The results have implications for the dynamics of megathrust earthquakes, and also suggest that dynamic earthquake modeling may be a useful tool in tsunami research. Copyright 2009 by the American Geophysical Union.

  7. MOS integrated circuit fault modeling

    NASA Technical Reports Server (NTRS)

    Sievers, M.

    1985-01-01

    Three digital simulation techniques for MOS integrated circuit faults were examined. These techniques embody a hierarchy of complexity bracketing the range of simulation levels. The digital approaches are: transistor-level, connector-switch-attenuator level, and gate level. The advantages and disadvantages are discussed. Failure characteristics are also described.
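    Gate-level stuck-at simulation, the coarsest of the three levels above, can be sketched as follows. The tiny netlist, net names, and fault format are invented for illustration:

```python
def eval_circuit(a, b, c, fault=None):
    """Tiny gate-level netlist: out = (a AND b) OR (NOT c).
    `fault` optionally pins a named net to a stuck value,
    e.g. ('n_and', 0) for stuck-at-0 on the AND gate's output."""
    def drive(name, value):
        # a stuck-at fault overrides the net's driver
        if fault is not None and fault[0] == name:
            return fault[1]
        return value
    n_and = drive('n_and', a & b)
    n_not = drive('n_not', 1 - c)
    return drive('out', n_and | n_not)

def detects(fault, vector):
    """True if the input vector exposes the fault at the output,
    i.e. the good and faulty circuits disagree."""
    return eval_circuit(*vector) != eval_circuit(*vector, fault=fault)
```

Comparing good and faulty evaluations over a vector set is the core of gate-level fault simulation; the transistor and connector-switch-attenuator levels refine the same idea with more physical fault effects.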

  8. Fault current limiters using superconductors

    NASA Astrophysics Data System (ADS)

    Norris, W. T.; Power, A.

    Fault current limiters on power systems are intended to reduce damage from heating and electromechanical forces, to alleviate the duty on switchgear used to clear the fault, and to mitigate disturbance to unfaulted parts of the system. A basic scheme involves a super-resistor: a superconductor driven to high resistance when fault current flows, either only while the current is high during each a.c. cycle or, if the temperature of the superconductive material rises, for the full cycle. Current may be commutated from the superconductor to an impedance in parallel, thus reducing the energy dissipated at low temperature and saving refrigeration. In a super-shorted transformer the ambient-temperature primary carries the power system current; the superconductive secondary goes to a resistive condition when excessive currents flow in the primary. A super-transformer has the advantage of not needing current leads from high temperature to low temperature; it behaves as a parallel super-resistor and inductor. The super-transductor, with a superconductive d.c. bias winding, is large and has little effect on the rate of fall of current at current zero; it does little to alleviate the duty on switchgear but does reduce heating and electromechanical forces. It is fully active after a fault has been cleared. Other schemes depend on rapid recooling of the superconductor to achieve this.
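    The commutation of current into a parallel impedance can be illustrated with elementary current division. Pure resistances are assumed for simplicity, and the values are illustrative:

```python
def branch_currents(i_fault, r_sc, r_parallel):
    """Split a fault current between a superconductor of resistance
    r_sc (zero while superconducting, finite once quenched) and a
    parallel bypass resistance r_parallel.  Returns (i_sc, i_bypass)."""
    if r_sc == 0.0:                 # superconducting state: no commutation
        return i_fault, 0.0
    g_sc, g_p = 1.0 / r_sc, 1.0 / r_parallel
    i_sc = i_fault * g_sc / (g_sc + g_p)
    return i_sc, i_fault - i_sc
```

Once the superconductor quenches, most of the current (and hence most of the dissipation) moves to the room-temperature bypass, which is the refrigeration saving the abstract describes.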

  9. Fault Tolerant Frequent Pattern Mining

    SciTech Connect

    Shohdy, Sameh; Vishnu, Abhinav; Agrawal, Gagan

    2016-12-19

    FP-Growth algorithm is a Frequent Pattern Mining (FPM) algorithm that has been extensively used to study correlations and patterns in large scale datasets. While several researchers have designed distributed memory FP-Growth algorithms, it is pivotal to consider fault tolerant FP-Growth, which can address the increasing fault rates in large scale systems. In this work, we propose a novel parallel, algorithm-level fault-tolerant FP-Growth algorithm. We leverage algorithmic properties and MPI advanced features to guarantee an O(1) space complexity, achieved by using the dataset memory space itself for checkpointing. We also propose a recovery algorithm that can use in-memory and disk-based checkpointing, though in many cases the recovery can be completed without any disk access, and incurring no memory overhead for checkpointing. We evaluate our FT algorithm on a large scale InfiniBand cluster with several large datasets using up to 2K cores. Our evaluation demonstrates excellent efficiency for checkpointing and recovery in comparison to the disk-based approach. We have also observed 20x average speed-up in comparison to Spark, establishing that a well designed algorithm can easily outperform a solution based on a general fault-tolerant programming model.
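    The idea of checkpointing partial state so a fault does not force a full restart can be sketched on the first pass of FP-Growth, support counting. This is a minimal single-process sketch with a simulated fault, not the authors' MPI implementation or its in-dataset checkpoint layout:

```python
def count_supports(transactions, checkpoint_every=2, fail_at=None):
    """Support counting (the first FP-Growth pass) with simple
    in-memory checkpointing: partial counts are snapshotted every few
    transactions, and a simulated fault at index `fail_at` rolls back
    to the last checkpoint instead of restarting from scratch."""
    counts, ckpt, ckpt_pos = {}, {}, 0
    i = 0
    while i < len(transactions):
        if fail_at is not None and i == fail_at:
            counts, i = dict(ckpt), ckpt_pos   # recover from checkpoint
            fail_at = None                     # fault handled once
        for item in transactions[i]:
            counts[item] = counts.get(item, 0) + 1
        i += 1
        if i % checkpoint_every == 0:
            ckpt, ckpt_pos = dict(counts), i
    return counts
```

The recovered run reproduces the fault-free counts exactly, only redoing the work since the last checkpoint.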

  10. Cell boundary fault detection system

    DOEpatents

    Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward

    2011-04-19

    An apparatus and program product determine a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.

  11. Geometric Analyses of Rotational Faults.

    ERIC Educational Resources Information Center

    Schwert, Donald Peters; Peck, Wesley David

    1986-01-01

    Describes the use of analysis of rotational faults in undergraduate structural geology laboratories to provide students with applications of both orthographic and stereographic techniques. A demonstration problem is described, and an orthographic/stereographic solution and a reproducible block-model demonstration pattern are provided. (TW)

  12. Deep pulverization along active faults ?

    NASA Astrophysics Data System (ADS)

    Doan, M.

    2013-12-01

    Pulverization is an intense form of damage observed along some active faults. Rarely found in the field, it has been associated with dynamic damage produced by large earthquakes. Pulverization has so far been described only at the ground surface, consistent with the high-frequency tensile loading expected for earthquakes occurring along bimaterial faults. However, we discuss here a series of hints suggesting that pulverization is also expected several hundred meters deep. In the deep well drilled within the Nojima fault after the 1995 Kobe earthquake, thin sections reveal non-localized damage, with microfractures pervading a sample but with little shear disturbing the initial microstructure. In the SAFOD borehole drilled near Parkfield, Wiersberg and Erzinger (2008) performed gas monitoring while drilling and found a large amount of H2 gas in the sandstone west of the fault. They attribute this high H2 concentration to a mechanochemical origin, in accordance with examples of diffuse microfracturing found in thin sections from cores of SAFOD phase 3 and with geophysical data from logs. High strain rate experiments on both dry (Yuan et al., 2011) and wet samples (Forquin et al., 2010) show that even under confining pressures of several tens of megapascals, diffuse damage similar to pulverization is possible. This could explain the occurrence of pulverization at depth.

  13. Implementing fault-tolerant sensors

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith

    1989-01-01

    One aspect of fault tolerance in process control programs is the ability to tolerate sensor failure. A methodology is presented for transforming a process control program that cannot tolerate sensor failures to one that can. Additionally, a hierarchy of failure models is identified.
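
    Marzullo's abstract-sensor methodology represents each reading as an interval guaranteed to contain the true value; with at most f of n sensors faulty, the fused reading is the region covered by at least n - f intervals. A minimal sketch of that interval fusion (a simplified variant of the published algorithm):

```python
def fuse_intervals(intervals, f):
    """Smallest range covered by at least n - f of the (lo, hi)
    sensor intervals: the values consistent with every correct
    sensor when at most f of the n sensors are faulty."""
    need = len(intervals) - f
    events = []
    for lo, hi in intervals:
        events.append((lo, +1))
        events.append((hi, -1))
    # Sort starts before ends at equal points so touching
    # intervals count as overlapping.
    events.sort(key=lambda e: (e[0], -e[1]))
    depth, best_lo, best_hi = 0, None, None
    for x, step in events:
        if step == +1:
            depth += 1
            if depth >= need and best_lo is None:
                best_lo = x
        else:
            if depth >= need:
                best_hi = x
            depth -= 1
    return (best_lo, best_hi)

# The outlying third sensor is automatically discounted:
reading = fuse_intervals([(1, 3), (2, 4), (10, 12)], f=1)
```

    Here `reading` is (2, 3): the outlier (10, 12) cannot drag the fused value away, which is exactly the sensor-level fault tolerance the methodology provides.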

  14. MOS integrated circuit fault modeling

    NASA Technical Reports Server (NTRS)

    Sievers, M.

    1985-01-01

    Three digital simulation techniques for MOS integrated circuit faults were examined. These techniques embody a hierarchy of complexity bracketing the range of simulation levels. The digital approaches are: transistor-level, connector-switch-attenuator level, and gate level. The advantages and disadvantages are discussed. Failure characteristics are also described.

  15. Fault-Tolerant Heat Exchanger

    NASA Technical Reports Server (NTRS)

    Izenson, Michael G.; Crowley, Christopher J.

    2005-01-01

    A compact, lightweight heat exchanger has been designed to be fault-tolerant in the sense that a single-point leak would not cause mixing of heat-transfer fluids. This particular heat exchanger is intended to be part of the temperature-regulation system for habitable modules of the International Space Station and to function with water and ammonia as the heat-transfer fluids. The basic fault-tolerant design is adaptable to other heat-transfer fluids and heat exchangers for applications in which mixing of heat-transfer fluids would pose toxic, explosive, or other hazards: Examples could include fuel/air heat exchangers for thermal management on aircraft, process heat exchangers in the cryogenic industry, and heat exchangers used in chemical processing. The reason this heat exchanger can tolerate a single-point leak is that the heat-transfer fluids are everywhere separated by a vented volume and at least two seals. The combination of fault tolerance, compactness, and light weight is implemented in a unique heat-exchanger core configuration: Each fluid passage is entirely surrounded by a vented region bridged by solid structures through which heat is conducted between the fluids. Precise, proprietary fabrication techniques make it possible to manufacture the vented regions and heat-conducting structures with very small dimensions to obtain a very large coefficient of heat transfer between the two fluids. A large heat-transfer coefficient favors compact design by making it possible to use a relatively small core for a given heat-transfer rate. Calculations and experiments have shown that in most respects, the fault-tolerant heat exchanger can be expected to equal or exceed the performance of the non-fault-tolerant heat exchanger that it is intended to supplant (see table). The only significant disadvantages are a slight weight penalty and a small decrease in the mass-specific heat transfer.

  16. Fault Diagnosis in HVAC Chillers

    NASA Technical Reports Server (NTRS)

    Choi, Kihoon; Namuru, Setu M.; Azam, Mohammad S.; Luo, Jianhui; Pattipati, Krishna R.; Patterson-Hine, Ann

    2005-01-01

    Modern buildings are being equipped with increasingly sophisticated power and control systems with substantial capabilities for monitoring and controlling the amenities. Operational problems associated with heating, ventilation, and air-conditioning (HVAC) systems plague many commercial buildings, often the result of degraded equipment, failed sensors, improper installation, poor maintenance, and improperly implemented controls. Most existing HVAC fault-diagnostic schemes are based on analytical models and knowledge bases. These schemes are adequate for generic systems. However, real-world systems significantly differ from the generic ones and necessitate modifications of the models and/or customization of the standard knowledge bases, which can be labor intensive. Data-driven techniques for fault detection and isolation (FDI) have a close relationship with pattern recognition, wherein one seeks to categorize the input-output data into normal or faulty classes. Owing to their simplicity and adaptability, customization of a data-driven FDI approach does not require in-depth knowledge of the HVAC system. It enables building system operators to improve energy efficiency and maintain the desired comfort level at a reduced cost. In this article, we consider a data-driven approach for FDI of chillers in HVAC systems. To diagnose the faults of interest in the chiller, we employ multiway dynamic principal component analysis (MPCA), multiway partial least squares (MPLS), and support vector machines (SVMs). The simulation of a chiller under various fault conditions is conducted using a standard chiller simulator from the American Society of Heating, Refrigerating, and Air-conditioning Engineers (ASHRAE). We validated our FDI scheme using experimental data obtained from different types of chiller faults.
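
    As a toy illustration of the data-driven FDI idea (not the article's MPCA/MPLS/SVM pipeline), principal component analysis can be trained on healthy sensor data and a squared-prediction-error statistic used to flag faulty samples; all data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Healthy" operation: three correlated sensor channels plus noise.
t = rng.normal(size=(200, 1))
normal = np.hstack([t, 2 * t, -t]) + 0.05 * rng.normal(size=(200, 3))

mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
P = Vt[:1].T                         # retain one principal component

def spe(x):
    """Squared prediction error (Q statistic) of one sample."""
    r = (x - mean) - P @ (P.T @ (x - mean))
    return float(r @ r)

# Control limit from the 99th percentile of healthy-data SPE.
limit = np.percentile([spe(x) for x in normal], 99)

healthy = np.array([1.0, 2.0, -1.0])   # consistent with the model
faulty = np.array([1.0, 2.0, 1.0])     # third sensor biased
```

    A sample whose SPE exceeds `limit` no longer fits the correlation structure learned from normal operation, which is the data-driven fault signature.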

  17. Fault tolerant control of spacecraft

    NASA Astrophysics Data System (ADS)

    Godard

    Autonomous multiple-spacecraft formation flying space missions demand the development of reliable control systems to ensure rapid, accurate, and effective response to various attitude and formation reconfiguration commands. Keeping in mind the complexities involved in the technology development to enable spacecraft formation flying, this thesis presents the development and validation of a fault-tolerant control algorithm that augments the AOCS on board a spacecraft to ensure that these challenging formation flying missions will fly successfully. Taking inspiration from the existing theory of nonlinear control, a fault-tolerant control system for the RyePicoSat missions is designed to cope with actuator faults whilst maintaining the desired degree of overall stability and performance. An autonomous fault-tolerant adaptive control scheme for spacecraft equipped with redundant actuators, and robust control of spacecraft in an underactuated configuration, represent the two central themes of this thesis. The developed algorithms are validated using hardware-in-the-loop simulation. A reaction wheel testbed is used to validate the proposed fault-tolerant attitude control scheme. A spacecraft formation flying experimental testbed is used to verify the performance of the proposed robust control scheme for underactuated spacecraft configurations. The proposed underactuated formation flying concept leads to more than 60% savings in fuel consumption when compared to a fully actuated spacecraft formation configuration. We also developed a novel attitude control methodology that requires only a single thruster to stabilize the three-axis attitude and angular velocity components of a spacecraft. Numerical simulations and hardware-in-the-loop experimental results, along with a rigorous analytical stability analysis, show that the proposed methodology will greatly enhance the reliability of the spacecraft, while allowing for potentially significant overall mission cost reduction.

  19. Predeployment validation of fault-tolerant systems through software-implemented fault insertion

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1989-01-01

    The fault injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey of validation methodologies is presented, and the need for fault insertion within them is demonstrated. The origins and models of faults, and the motivation for the FIAT concept, are reviewed. FIAT employs a validation methodology which builds confidence in the system by first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults, or the manifestations of faults, to be inserted either by seeding faults into memory or by triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving insertion of faults. A common system interface decreases experiment development and run time and makes the environment easy to use. Fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are shown by two example experiments, each using a different fault-tolerance strategy.

  20. Novel neural networks-based fault tolerant control scheme with fault alarm.

    PubMed

    Shen, Qikun; Jiang, Bin; Shi, Peng; Lim, Cheng-Chew

    2014-11-01

    In this paper, the problem of adaptive active fault-tolerant control for a class of nonlinear systems with unknown actuator faults is investigated. The actuator fault is assumed to have no traditional affine appearance of the system state variables and control input. The useful property of the basis function of the radial basis function neural network (NN), which will be used in the design of the fault-tolerant controller, is explored. Based on the analysis of the design of normal and passive fault-tolerant controllers, and by using the implicit function theorem, a novel NN-based active fault-tolerant control scheme with fault alarm is proposed. Compared with results in the literature, the proposed scheme can minimize the time delay between fault occurrence and accommodation (the time delay due to fault diagnosis) and reduce the adverse effect on system performance. In addition, the scheme combines the advantages of a passive fault-tolerant control scheme with the properties of a traditional active fault-tolerant control scheme. Furthermore, it requires no additional fault detection and isolation model, which is necessary in traditional active fault-tolerant control schemes. Finally, simulation results are presented to demonstrate the efficiency of the developed techniques.
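
    The approximation property the scheme relies on, namely that a radial basis function network can represent an unknown fault nonlinearity, can be sketched offline (the paper adapts the weights online; the target function, centers, and widths below are illustrative):

```python
import numpy as np

# Gaussian radial basis functions on a fixed grid of centers.
centers = np.linspace(-2, 2, 9)
width = 0.5

def phi(x):
    """Design matrix of RBF activations for inputs x."""
    return np.exp(-((x[:, None] - centers) ** 2) / (2 * width ** 2))

f = lambda x: np.sin(2 * x) * np.exp(-0.1 * x ** 2)  # "unknown" fault map

# Least-squares fit of the output weights (the adaptive law would
# update these online from tracking errors instead).
x_train = np.linspace(-2, 2, 100)
w, *_ = np.linalg.lstsq(phi(x_train), f(x_train), rcond=None)

x_test = np.linspace(-1.8, 1.8, 50)
err = float(np.max(np.abs(phi(x_test) @ w - f(x_test))))
```

    The small residual `err` is what lets the controller treat the network output as a stand-in for the unknown fault term.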

  1. Multiple Fault Isolation in Redundant Systems

    NASA Technical Reports Server (NTRS)

    Pattipati, Krishna R.; Patterson-Hine, Ann; Iverson, David

    1997-01-01

    Fault diagnosis in large-scale systems that are products of modern technology presents formidable challenges to manufacturers and users, owing to the large number of failure sources in such systems and the need to quickly isolate and rectify failures with minimal down time. In addition, for fault-tolerant systems and systems with infrequent opportunity for maintenance (e.g., the Hubble telescope or the space station), the assumption of at most a single fault in the system is unrealistic. In this project, we have developed novel block and sequential diagnostic strategies to isolate multiple faults in the shortest possible time without making the unrealistic single-fault assumption.
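
    The flavor of multiple-fault isolation can be sketched with a tiny fault-test dependency model (the matrix and test names are invented; the project's block and sequential strategies additionally optimize the order in which tests are applied):

```python
from itertools import combinations

# Which faults cause each test to fail (a tiny invented D-matrix).
coverage = {
    "t1": {"f1", "f2"},
    "t2": {"f2", "f3"},
    "t3": {"f3", "f4"},
    "t4": {"f1", "f4"},
}

def diagnose(passed, failed):
    """Smallest fault sets explaining the outcomes without assuming a
    single fault: every failed test must be covered by a suspect, and
    faults covered by a passed test are exonerated."""
    exonerated = set().union(*[coverage[t] for t in passed])
    suspects = sorted(set().union(*[coverage[t] for t in failed]) - exonerated)
    for k in range(1, len(suspects) + 1):
        hits = [set(c) for c in combinations(suspects, k)
                if all(coverage[t] & set(c) for t in failed)]
        if hits:
            return hits
    return []

# t1 and t3 fail while t2 passes: f2 and f3 are exonerated, so the
# only minimal explanation is the double fault {f1, f4}.
result = diagnose(passed=["t2"], failed=["t1", "t3"])
```

    A single-fault assumption would find no consistent diagnosis here at all, which is the situation the project's multiple-fault strategies are built for.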

  2. Multiple Fault Isolation in Redundant Systems

    NASA Technical Reports Server (NTRS)

    Pattipati, Krishna R.

    1997-01-01

    Fault diagnosis in large-scale systems that are products of modern technology presents formidable challenges to manufacturers and users, owing to the large number of failure sources in such systems and the need to quickly isolate and rectify failures with minimal down time. In addition, for fault-tolerant systems and systems with infrequent opportunity for maintenance (e.g., the Hubble telescope or the space station), the assumption of at most a single fault in the system is unrealistic. In this project, we have developed novel block and sequential diagnostic strategies to isolate multiple faults in the shortest possible time without making the unrealistic single-fault assumption.

  3. Bridging faults in BiCMOS circuits

    NASA Technical Reports Server (NTRS)

    Menon, Sankaran M.; Malaiya, Yashwant K.; Jayasumana, Anura P.

    1993-01-01

    Combining the advantages of CMOS and bipolar technologies, BiCMOS is emerging as a major technology for many high-performance digital and mixed-signal applications. Recent investigations revealed that bridging faults can be a major failure mode in ICs. Effects of bridging faults in BiCMOS circuits are presented. Bridging faults between logical units without feedback and logical units with feedback are considered. Several bridging faults can be detected by monitoring the power supply current (I(sub DDQ) monitoring). Effects of bridging faults and bridging resistance on output logic levels were examined, along with their effects on noise immunity.

  4. A Quaternary fault database for central Asia

    NASA Astrophysics Data System (ADS)

    Mohadjer, Solmaz; Ehlers, Todd Alan; Bendick, Rebecca; Stübner, Konstanze; Strube, Timo

    2016-02-01

    Earthquakes represent the highest risk in terms of potential loss of lives and economic damage for central Asian countries. Knowledge of fault location and behavior is essential in calculating and mapping seismic hazard. Previous efforts in compiling fault information for central Asia have generated a large amount of data that are published in limited-access journals with no digital maps publicly available, or are limited in their description of important fault parameters such as slip rates. This study builds on previous work by improving access to fault information through a web-based interactive map and an online database with search capabilities that allow users to organize data by different fields. The data presented in this compilation include fault location, its geographic, seismic, and structural characteristics, short descriptions, narrative comments, and references to peer-reviewed publications. The interactive map displays 1196 fault traces and 34 000 earthquake locations on a shaded-relief map. The online database contains attributes for 123 faults mentioned in the literature, with Quaternary and geodetic slip rates reported for 38 and 26 faults respectively, and earthquake history reported for 39 faults. All data are accessible for viewing and download via http://www.geo.uni-tuebingen.de/faults/. This work has implications for seismic hazard studies in central Asia as it summarizes important fault parameters, and can reduce earthquake risk by enhancing public access to information. It also allows scientists and hazard assessment teams to identify structures and regions where data gaps exist and future investigations are needed.
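
    A sketch of the kind of data-gap query the compilation enables, using a hypothetical record layout (field names and values are invented, not the database's actual schema):

```python
# Hypothetical records mimicking the kinds of attributes the online
# database stores; fault names and slip rates here are invented for
# illustration, not values from the actual compilation.
faults = [
    {"name": "Fault A", "quaternary_slip_mm_yr": 2.0, "geodetic_slip_mm_yr": 2.4},
    {"name": "Fault B", "quaternary_slip_mm_yr": None, "geodetic_slip_mm_yr": 1.1},
    {"name": "Fault C", "quaternary_slip_mm_yr": 0.3, "geodetic_slip_mm_yr": None},
]

def data_gaps(records, field):
    """Faults lacking a value for the given attribute: the kind of
    query that helps target future field investigations."""
    return [r["name"] for r in records if r[field] is None]

gaps = data_gaps(faults, "quaternary_slip_mm_yr")
```

    Organizing records by field in this way is how the authors suggest identifying structures and regions where future investigations are needed.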

  5. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Vouk, Mladen A.

    1989-01-01

    Twenty functionally equivalent programs were built and tested in a multiversion software experiment. Following unit testing, all programs were subjected to an extensive system test. In the process, sixty-one distinct faults were identified among the versions. Less than 12 percent of the faults exhibited varying degrees of positive correlation. The common-cause (or similar) faults spanned as many as 14 components. However, a majority of these faults were trivial, and easily detected by proper unit and/or system testing. Only two of the seven similar faults were difficult faults, and both were caused by specification ambiguities. One of these faults exhibited a variable identical-and-wrong response span, i.e., a response span which varied with the testing conditions and input data. Techniques that could have been used to avoid the faults are discussed. For example, it was determined that back-to-back testing of 2-tuples could have been used to eliminate about 90 percent of the faults. In addition, four of the seven similar faults could have been detected by using back-to-back testing of 5-tuples. It is believed that most, if not all, similar faults could have been avoided had the specifications been written using more formal notation, had the unit testing phase been subject to more stringent standards and controls, and had better tools for measuring the quality and adequacy of the test data (e.g., coverage) been used.
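
    Back-to-back testing, the technique the authors credit with catching about 90 percent of the faults, runs functionally equivalent versions on the same inputs and flags any tuple whose outputs disagree. A minimal sketch with a deliberately seeded fault:

```python
from itertools import combinations

def back_to_back(versions, test_inputs, tuple_size=2):
    """Run functionally equivalent versions on the same inputs and
    flag every tuple whose members disagree; disagreement reveals a
    fault in at least one member without needing an output oracle."""
    disagreements = []
    for x in test_inputs:
        outputs = {name: fn(x) for name, fn in versions.items()}
        for group in combinations(sorted(versions), tuple_size):
            if len({outputs[name] for name in group}) > 1:
                disagreements.append((x, group))
    return disagreements

# Three "versions" of absolute value; v3 carries a seeded fault.
versions = {
    "v1": abs,
    "v2": lambda x: x if x >= 0 else -x,
    "v3": lambda x: x if x > 0 else -x - 1,   # wrong for x <= 0
}
found = back_to_back(versions, [-2, 0, 5])
```

    Every flagged tuple involves the faulty version, so disagreement localizes the fault even though no reference output was supplied.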

  6. Fault diagnosis for magnetic bearing systems

    NASA Astrophysics Data System (ADS)

    Tsai, Nan-Chyuan; King, Yueh-Hsun; Lee, Rong-Mao

    2009-05-01

    A full fault diagnosis scheme for active magnetic bearing (AMB) and rotor systems, which monitors the closed-loop operation and analyzes fault patterns on-line in case any malfunction occurs, is proposed in this paper. Most traditional approaches for fault diagnosis are based on actuator or sensor diagnosis individually and can only detect a single fault at a time. This research combines two diagnosis methodologies, using both state estimators and parameter estimators to detect, identify and analyze actuator and sensor faults in AMB/rotor systems. The proposed fault diagnosis algorithm not only enhances the diagnosis accuracy, but also demonstrates the capability to detect multiple sensor faults which occur concurrently. The efficacy of the presented algorithm has been verified by computer simulations and intensive experiments. The test rig for the experiments is equipped with an AMB, an interface module (dSPACE DS1104), a data acquisition unit, and a MATLAB/Simulink simulation environment. Finally, fault patterns such as bias, multiplicative loop gain variation and noise addition can be identified by the algorithm presented in this work. In other words, the proposed diagnosis algorithm is able to detect faults at the first moment, find which sensors or actuators are under failure, and identify which fault pattern the detected faults belong to.

  7. Model-Based Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Kumar, Aditya; Viassolo, Daniel

    2008-01-01

    The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in their presence. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
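
    The multiple-hypothesis idea can be sketched in isolation: estimator residuals are near zero-mean when the engine is healthy, and each fault hypothesis predicts a different residual mean (the Gaussian model, bias values, and noise level below are illustrative, not the task's actual engine models):

```python
import numpy as np

rng = np.random.default_rng(1)

# Residuals from a state estimator are roughly N(0, sigma^2) when the
# system is healthy; a sensor bias fault shifts their mean. Hypothesis
# testing picks the fault model with the highest likelihood.
sigma = 0.2
hypotheses = {"healthy": 0.0, "small bias": 0.5, "large bias": 1.0}

def classify(residuals):
    def loglik(mu):
        # Gaussian log-likelihood up to a constant shared by all models.
        return -np.sum((residuals - mu) ** 2) / (2 * sigma ** 2)
    return max(hypotheses, key=lambda h: loglik(hypotheses[h]))

faulty_residuals = 0.5 + sigma * rng.normal(size=50)
```

    Identifying which hypothesis wins, and by how much, is what lets an accommodation strategy correct for the estimated fault type and magnitude.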

  8. Tool for Viewing Faults Under Terrain

    NASA Technical Reports Server (NTRS)

    Siegel, Herbert L.; Li, P. Peggy

    2005-01-01

    Multi Surface Light Table (MSLT) is an interactive software tool that was developed in support of the QuakeSim project, which has created an earthquake-fault database and a set of earthquake-simulation software tools. MSLT visualizes the three-dimensional geometries of faults embedded below the terrain and animates time-varying simulations of stress and slip. The fault segments, represented as rectangular surfaces at dip angles, are organized into collections, that is, faults. An interface built into MSLT queries and retrieves fault definitions from the QuakeSim fault database. MSLT also reads time-varying output from one of the QuakeSim simulation tools, called "Virtual California." Stress intensity is represented by variations in color. Slips are represented by directional indicators on the fault segments. The magnitudes of the slips are represented by the duration of the directional indicators in time. The interactive controls in MSLT provide a virtual track-ball, pan and zoom, translucency adjustment, simulation playback, and simulation movie capture. In addition, geographical information on the fault segments and faults is displayed on text windows. Because of the extensive viewing controls, faults can be seen in relation to one another, and to the terrain. These relations can be realized in simulations. Correlated slips in parallel faults are visible in the playback of Virtual California simulations.

  9. Rule-based fault diagnosis of hall sensors and fault-tolerant control of PMSM

    NASA Astrophysics Data System (ADS)

    Song, Ziyou; Li, Jianqiu; Ouyang, Minggao; Gu, Jing; Feng, Xuning; Lu, Dongbin

    2013-07-01

    Hall sensors are widely used for estimating the rotor phase of permanent magnet synchronous motors (PMSMs). Rotor position is an essential parameter of the PMSM control algorithm, hence it is very dangerous if Hall sensor faults occur. But there is scarcely any research focusing on fault diagnosis and fault-tolerant control of Hall sensors used in PMSMs. From this standpoint, the Hall sensor faults which may occur during PMSM operation are theoretically analyzed. According to the analysis results, a fault diagnosis algorithm for Hall sensors, based on three rules, is proposed to classify the fault phenomena accurately. Rotor phase estimation algorithms based on one or two Hall sensors are used to construct the fault-tolerant control algorithm. The fault diagnosis algorithm can detect 60 Hall fault phenomena in total, and all detections can be completed within 1/138 of a rotor rotation period. The fault-tolerant control algorithm achieves smooth torque production, i.e., the same control effect as the normal control mode (with three Hall sensors). Finally, a PMSM bench test verifies the accuracy and rapidity of the fault diagnosis and fault-tolerant control strategies. The fault diagnosis algorithm can detect all Hall sensor faults promptly, and the fault-tolerant control algorithm allows the PMSM to ride through failures of one or two Hall sensors. In addition, the transitions between healthy-mode control and fault-tolerant control are smooth, without any additional noise and harshness. The proposed algorithms can deal with Hall sensor faults of PMSMs in real applications, realizing fault diagnosis and fault-tolerant control of PMSMs.
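
    The rule-based idea can be illustrated with the standard Hall-code constraints: the three binary Hall signals never read all-0 or all-1 on a healthy machine, and valid codes advance through a fixed six-state sequence. The sketch below uses one common commutation sequence and generic rules, not the paper's exact three-rule set:

```python
# Three Hall signals form a 3-bit code: 0b000 and 0b111 never occur
# on a healthy machine, and during rotation the code steps through a
# fixed six-state sequence. VALID and SEQ_CW below are generic
# commutation conventions, not the paper's specific rule set.
VALID = {1, 2, 3, 4, 5, 6}
SEQ_CW = [1, 3, 2, 6, 4, 5]  # one common 60-degree Hall sequence

def check(prev, curr):
    """Classify a Hall-code transition with two simple rules."""
    if curr not in VALID:
        return "invalid code"        # stuck-high/stuck-low sensor line
    if prev in VALID:
        i = SEQ_CW.index(prev)
        allowed = {prev, SEQ_CW[(i + 1) % 6], SEQ_CW[(i - 1) % 6]}
        if curr not in allowed:
            return "sequence fault"  # skipped state: suspect a sensor
    return "ok"
```

    Because each check needs only the previous and current codes, a rule set like this can run inside the control loop and trigger the switch to one- or two-sensor phase estimation as soon as a fault is flagged.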

  10. Arc burst pattern analysis fault detection system

    NASA Technical Reports Server (NTRS)

    Russell, B. Don (Inventor); Aucoin, B. Michael (Inventor); Benner, Carl L. (Inventor)

    1997-01-01

    A method and apparatus are provided for detecting an arcing fault on a power line carrying a load current. Parameters indicative of power flow and possible fault events on the line, such as voltage and load current, are monitored and analyzed for an arc burst pattern exhibited by arcing faults in a power system. These arcing faults are detected by identifying bursts of each half-cycle of the fundamental current. Bursts occurring at or near a voltage peak indicate arcing on that phase. Once a faulted phase line is identified, a comparison of the current and voltage reveals whether the fault is located in a downstream direction of power flow toward customers, or upstream toward a generation station. If the fault is located downstream, the line is de-energized, and if located upstream, the line may remain energized to prevent unnecessary power outages.
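
    A simplified sketch of the burst-pattern idea on a synthetic waveform (unity power factor is assumed so the voltage peak coincides with the centre of each half-cycle; amplitudes and thresholds are illustrative, not the patent's values):

```python
import numpy as np

fs, f0 = 6000, 60             # sample rate (Hz), fundamental (Hz)
n_half = fs // (2 * f0)       # samples per half-cycle
t = np.arange(1200) / fs      # 0.2 s of samples
current = 0.8 * np.sin(2 * np.pi * f0 * t)

# Inject short bursts at the centre of half-cycles 6, 8 and 10,
# i.e. at the voltage peaks, where arcing tends to strike.
for k in (6, 8, 10):
    c = k * n_half + n_half // 2
    current[c - 2:c + 3] += 0.6

def arcing_half_cycles(i):
    """Flag half-cycles whose residual (after removing the
    fundamental) clusters near the voltage peak: the arc burst
    pattern the detector looks for."""
    flagged = []
    for k in range(len(i) // n_half):
        seg = slice(k * n_half, (k + 1) * n_half)
        resid = np.abs(i[seg] - 0.8 * np.sin(2 * np.pi * f0 * t[seg]))
        if resid.max() > 0.3 and abs(int(np.argmax(resid)) - n_half // 2) < 5:
            flagged.append(k)
    return flagged
```

    Requiring the burst to sit near the voltage peak is what separates arcing from ordinary load transients, which have no preferred phase.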

  11. Multiple sensor fault diagnosis for dynamic processes.

    PubMed

    Li, Cheng-Chih; Jeng, Jyh-Cheng

    2010-10-01

    Modern industrial plants are usually large in scale and contain a great number of sensors. Sensor fault diagnosis is crucial and necessary for process safety and optimal operation. This paper proposes a systematic approach to detect, isolate and identify multiple sensor faults for multivariate dynamic systems. The current work first defines deviation vectors for sensor observations, and further defines and derives the basic sensor fault matrix (BSFM), consisting of the normalized basic fault vectors, by several different methods. By projecting a process deviation vector onto the space spanned by the BSFM, this research uses a vector with the resulting weights in each direction for multiple sensor fault diagnosis. This study also proposes a novel monitoring index and derives the corresponding sensor fault detectability. The study further utilizes that vector to isolate and identify multiple sensor faults, and discusses isolatability and identifiability. Simulation examples and comparison with two conventional PCA-based contribution plots are presented to demonstrate the effectiveness of the proposed methodology.
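
    The projection step can be sketched with the simplest possible BSFM, where each normalized fault direction is a unit vector (a real BSFM would be derived from process data by the methods in the paper):

```python
import numpy as np

# Simplest possible basic sensor fault matrix (BSFM): each normalized
# fault direction is a unit vector, i.e. a bias in sensor i moves the
# deviation vector along e_i.
B = np.eye(4)

def fault_weights(deviation):
    """Project the observed deviation vector onto the span of the
    basic fault directions; large weights implicate the
    corresponding sensors, allowing several faults at once."""
    w, *_ = np.linalg.lstsq(B, deviation, rcond=None)
    return w

d = np.array([0.02, 1.5, -0.01, -2.0])   # sensors 2 and 4 deviate
suspects = np.flatnonzero(np.abs(fault_weights(d)) > 0.5)
```

    Because the diagnosis reads off a weight per fault direction rather than picking a single largest contributor, two simultaneous sensor faults are isolated in one projection.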

  12. Naval weapons center active fault map series

    NASA Astrophysics Data System (ADS)

    Roquemore, G. R.; Zellmer, J. T.

    1987-08-01

    The NWC Active Fault Map Series shows the locations of active faults and features indicative of active faulting within much of Indian Wells Valley and portions of the Randsburg Wash/Mojave B test range areas of the Naval Weapons Center. Map annotations are used extensively to identify criteria employed in identifying the fault offsets, and to present other valuable data. All of the mapped faults show evidence of having moved during about the last 12,500 years or represent geologically young faults that occur within seismic gaps. Only faults that offset the surface or show other evidence of surface deformation were mapped. A portion of the City of Ridgecrest is recommended as being a Seismic Hazard Special Studies Zone in which detailed earthquake hazard studies should be required.

  13. Alp Transit: Crossing Faults 44 and 49

    NASA Astrophysics Data System (ADS)

    El Tani, M.; Bremen, R.

    2014-05-01

    This paper describes the crossing of faults 44 and 49 when constructing the 57 km Gotthard base tunnel of the Alp Transit project. Fault 44 is a permeable fault that triggered significant surface deformations 1,400 m above the tunnel when it was reached by the advancing excavation. The fault runs parallel to the downstream face of the Nalps arch dam. Significant deformations were measured at the dam crown. Fault 49 is sub-vertical and permeable, and runs parallel to the upstream face of the dam. It was necessary to assess the risk when crossing fault 49, as a limit was put on the acceptable dam deformation for structural safety. The simulation model, forecasts, and actions decided upon when crossing the faults are presented, with a brief description of the tunnel, the dam, and the monitoring system.

  14. Silica Lubrication in Faults (Invited)

    NASA Astrophysics Data System (ADS)

    Rowe, C. D.; Rempe, M.; Lamothe, K.; Kirkpatrick, J. D.; White, J. C.; Mitchell, T. M.; Andrews, M.; Di Toro, G.

    2013-12-01

    Silica-rich rocks are common in the crust, so silica lubrication may be important for causing fault weakening during earthquakes if the phenomenon occurs in nature. In laboratory friction experiments on chert, dramatic shear weakening has been attributed to amorphization and attraction of water from atmospheric humidity to form a 'silica gel'. Few observations of the slip surfaces have been reported, and the details of weakening mechanism(s) remain enigmatic. Therefore, no criteria exist on which to make comparisons of experimental materials to natural faults. We performed a series of friction experiments, characterized the materials formed on the sliding surface, and compared these to a geological fault in the same rock type. Experiments were performed in the presence of room humidity at 2.5 MPa normal stress with 3 and 30 m total displacement for a variety of slip rates (10⁻⁴-10⁻¹ m/s). The friction coefficient (μ) reduced from >0.6 to ~0.2 at 10⁻¹ m/s, but only fell to ~0.4 at 10⁻²-10⁻⁴ m/s. The slip surfaces and wear material were observed using laser confocal Raman microscopy, electron microprobe, X-ray diffraction, and transmission electron microscopy. Experiments at 10⁻¹ m/s formed wear material consisting of ≤1 μm powder that is aggregated into irregular 5-20 μm clumps. Some material disaggregated during analysis with electron beams and lasers, suggesting hydrous and unstable components. Compressed powder forms smooth pavements on the surface in which grains are not visible (if present, they are <100 nm). Powder contains amorphous material and as yet unidentified crystalline and non-crystalline forms of silica (not quartz), while the worn chert surface underneath shows Raman spectra consistent with a mixture of quartz and amorphous material. If silica amorphization facilitates shear weakening in natural faults, similar wear materials should be formed, and we may be able to identify them through microstructural studies. However, the sub

  15. Frictional constraints on crustal faulting

    USGS Publications Warehouse

    Boatwright, J.; Cocco, M.

    1996-01-01

    We consider how variations in fault frictional properties affect the phenomenology of earthquake faulting. In particular, we propose that lateral variations in fault friction produce the marked heterogeneity of slip observed in large earthquakes. We model these variations using a rate- and state-dependent friction law, where we differentiate velocity-weakening behavior into two fields: the strong seismic field is very velocity weakening and the weak seismic field is slightly velocity weakening. Similarly, we differentiate velocity-strengthening behavior into two fields: the compliant field is slightly velocity strengthening and the viscous field is very velocity strengthening. The strong seismic field comprises the seismic slip concentrations, or asperities. The two "intermediate" fields, weak seismic and compliant, have frictional velocity dependences that are close to velocity neutral: these fields modulate both the tectonic loading and the dynamic rupture process. During the interseismic period, the weak seismic and compliant regions slip aseismically, while the strong seismic regions remain locked, evolving into stress concentrations that fail only in main shocks. The weak seismic areas exhibit most of the interseismic activity and aftershocks but can also creep aseismically. This "mixed" frictional behavior can be obtained from a sufficiently heterogeneous distribution of the critical slip distance. The model also provides a mechanism for rupture arrest: dynamic rupture fronts decelerate as they penetrate into unloaded compliant or weak seismic areas, producing broad areas of accelerated afterslip. Aftershocks occur on both the weak seismic and compliant areas around a fault, but most of the stress is diffused through aseismic slip. Rapid afterslip on these peripheral areas can also produce aftershocks within the main shock rupture area by reloading weak fault areas that slipped in the main shock and then healed. We test this frictional model by comparing the
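
The four frictional fields above can be illustrated with the steady-state form of a rate- and state-dependent friction law. This is a minimal sketch: the parameter values and the thresholds separating the fields are illustrative assumptions, not values taken from the paper.

```python
import math

def mu_ss(v, mu0=0.6, a=0.010, b=0.015, v0=1e-6):
    """Steady-state rate-and-state friction:
    mu_ss = mu0 + (a - b) * ln(v / v0). Parameter values are illustrative."""
    return mu0 + (a - b) * math.log(v / v0)

def frictional_field(a_minus_b):
    """Map the velocity dependence (a - b) onto the four fields of the
    model; the numeric thresholds here are arbitrary placeholders."""
    if a_minus_b <= -0.004:
        return "strong seismic"      # very velocity weakening
    if a_minus_b < 0.0:
        return "weak seismic"        # slightly velocity weakening
    if a_minus_b < 0.004:
        return "compliant"           # slightly velocity strengthening
    return "viscous"                 # very velocity strengthening

# With a - b < 0, a tenfold increase in slip rate lowers steady-state friction:
print(mu_ss(1e-5) - mu_ss(1e-6))   # (a - b) * ln(10) ≈ -0.0115
```

The sign of (a - b) alone separates weakening from strengthening; the "strong"/"weak" and "compliant"/"viscous" distinctions are a matter of magnitude.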

  16. A “mesh” of crossing faults: Fault networks of southern California

    NASA Astrophysics Data System (ADS)

    Janecke, S. U.

    2009-12-01

    Detailed geologic mapping of active fault systems in the western Salton Trough and northern Peninsular Ranges of southern California makes it possible to expand the inventory of mapped and known faults by compiling and updating existing geologic maps, and analyzing high-resolution imagery, LIDAR, InSAR, relocated hypocenters and other geophysical datasets. A fault map is being compiled on Google Earth and will ultimately discriminate between a range of different fault expressions: from well-mapped faults to subtle lineaments and geomorphic anomalies. The fault map shows deformation patterns in both crystalline and basinal deposits and reveals a complex fault mesh with many curious and unexpected relationships. Key findings are: 1) Many fault systems have mutually interpenetrating geometries, are grossly coeval, and allow faults to cross one another. A typical relationship reveals a dextral fault zone that appears to be continuous at the regional scale. In detail, however, there are no continuous NW-striking dextral fault traces and instead the master dextral fault is offset in a left-lateral sense by numerous crossing faults. Left-lateral faults also show small offsets where they interact with right-lateral faults. Both fault sets show evidence of Quaternary activity. Examples occur along the Clark, Coyote Creek, Earthquake Valley and Torres Martinez fault zones. 2) Fault zones cross in other ways. There are locations where active faults continue across or beneath significant structural barriers. Major fault zones like the Clark fault of the San Jacinto fault system appear to end at NE-striking sinistral fault zones (like the Extra and Pumpkin faults) that clearly cross from the SW to the NE side of the projection of the dextral traces. Despite these blocking structures, there is good evidence for continuation of the dextral faults on the opposite sides of the crossing fault array. In some instances there is clear evidence (in deep microseismic alignments of

  17. From Fault Seal to Fault Leak: Effect of Mechanical Stratigraphy on the Evolution of Transport Processes in Fault Zones (Invited)

    NASA Astrophysics Data System (ADS)

    Urai, J. L.; Schmatz, J.; van Gent, H. W.; Abe, S.; Holland, M.

    2009-12-01

    Predictions of the transport properties of faults in layered sequences are usually based on the geometry and lithology of the faulted sequence. Mechanical properties and fault resealing processes are used much less frequently. Based on laboratory, field and numerical studies, we present a model that takes these additional factors into account. When the ratio of rock strength to in-situ mean effective stress is high enough to allow hybrid failure, dilatant fracture networks will form in that part of the sequence which meets this condition, dramatically increasing permeability along the fault, with the possibility of along-fault fluid flow and vertical transport of fine-grained sediment to form clay gouge in dilatant jogs. A key parameter here is the 3D connectivity of the dilatant fracture network. In systems where fracturing is non-dilatant and the mechanical contrast between the layers is small, the fault zones are relatively simple in structure, with complexity concentrated in relay zones between segments at different scales. With increasing mechanical contrast between the layers (and the presence of preexisting fractures), patterns of localization and fault zone structure become increasingly complex. Mechanical mixing in the fault gouge is a major process, especially when one of the lithologies is highly permeable. Reworking of wall rocks composed of hard claystones produces a low-permeability clay gouge in critical state. Circulating supersaturated fluids in the fault zone produce vein networks, which reseal the fault zone, typically in a cyclic fashion.

  18. New insights on Southern Coyote Creek Fault and Superstition Hills Fault

    NASA Astrophysics Data System (ADS)

    van Zandt, A. J.; Mellors, R. J.; Rockwell, T. K.; Burgess, M. K.; O'Hare, M.

    2007-12-01

    Recent field work has confirmed an extension of the southern Coyote Creek (CCF) branch of the San Jacinto fault in the western Salton trough. The fault marks the western edge of an area of subsidence caused by groundwater extraction, and field measurements suggest that recent strike-slip motion has occurred on this fault as well. We attempt to determine whether this fault connects at depth with the Superstition Hills fault (SHF) to the southeast by modeling observed surface deformation between the two faults measured by InSAR. Stacked ERS (descending) InSAR data from 1992 to 2000 is initially modeled using a finite fault in an elastic half-space. Observed deformation along the SHF and Elmore Ranch fault is modeled assuming shallow (< 5 km) creep. We test various models to explain surface deformation between the two faults.
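
As a much simpler stand-in for the finite-fault elastic half-space modeling described above, the classic 2-D screw-dislocation (deep-slip) model gives the fault-parallel surface displacement in closed form. The slip and locking-depth values below are illustrative assumptions, not the study's parameters.

```python
import math

def surface_displacement(x_km, slip_m=0.02, depth_km=5.0):
    """Fault-parallel surface displacement (m) at distance x from the
    trace of a 2-D screw dislocation slipping below a locking depth
    (Savage & Burford 1973). A toy stand-in for 3-D finite-fault
    elastic half-space models; parameter values are illustrative."""
    return (slip_m / math.pi) * math.atan(x_km / depth_km)

# Antisymmetric profile across the fault trace, approaching ±slip/2 far away.
for x in (-50.0, -5.0, 0.0, 5.0, 50.0):
    print(f"x = {x:6.1f} km  u = {surface_displacement(x) * 1000:6.2f} mm")
```

Shallow creep, as modeled in the abstract, concentrates the displacement step close to the trace; the arctangent form above is the standard building block such profiles are assembled from.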

  19. The Susitna Glacier thrust fault: Characteristics of surface ruptures on the fault that initiated the 2002 Denali fault earthquake

    USGS Publications Warehouse

    Crone, A.J.; Personius, S.F.; Craw, P.A.; Haeussler, P.J.; Staft, L.A.

    2004-01-01

    The 3 November 2002 Mw 7.9 Denali fault earthquake sequence initiated on the newly discovered Susitna Glacier thrust fault and caused 48 km of surface rupture. Rupture of the Susitna Glacier fault generated scarps on ice of the Susitna and West Fork glaciers and on tundra and surficial deposits along the southern front of the central Alaska Range. Based on detailed mapping, 27 topographic profiles, and field observations, we document the characteristics and slip distribution of the 2002 ruptures and describe evidence of pre-2002 ruptures on the fault. The 2002 surface faulting produced structures that range from simple folds on a single trace to complex thrust-fault ruptures and pressure ridges on multiple, sinuous strands. The deformation zone is locally more than 1 km wide. We measured a maximum vertical displacement of 5.4 m on the south-directed main thrust. North-directed backthrusts have more than 4 m of surface offset. We measured a well-constrained near-surface fault dip of about 19° at one site, which is considerably less than seismologically determined values of 35°-48°. Surface-rupture data yield an estimated magnitude of Mw 7.3 for the fault, which is similar to the seismological value of Mw 7.2. Comparison of field and seismological data suggests that the Susitna Glacier fault is part of a large positive flower structure associated with northwest-directed transpressive deformation on the Denali fault. Prehistoric scarps are evidence of previous rupture of the Susitna Glacier fault, but additional work is needed to determine if past failures of the Susitna Glacier fault have consistently induced rupture of the Denali fault.
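
The consistency between surface-rupture and seismological magnitude estimates can be checked with the standard seismic-moment relation. The rigidity, down-dip width, and average slip used below are assumed round numbers for illustration, not values from the paper.

```python
import math

def moment_magnitude(length_m, width_m, avg_slip_m, rigidity_pa=3.0e10):
    """Moment magnitude from seismic moment M0 = mu * A * D,
    using Mw = (2/3) * (log10(M0) - 9.1) (Hanks & Kanamori 1979)."""
    m0 = rigidity_pa * length_m * width_m * avg_slip_m   # N·m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# 48 km rupture length from the abstract; width and average slip are guesses.
mw = moment_magnitude(length_m=48e3, width_m=15e3, avg_slip_m=4.0)
print(f"Mw ≈ {mw:.1f}")   # ≈ 7.2, in the range of the published estimates
```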

  20. Fault linkage: Three-dimensional mechanical interaction between echelon normal faults

    NASA Astrophysics Data System (ADS)

    Crider, Juliet G.; Pollard, David D.

    1998-10-01

    Field observations of two overlapping normal faults and associated deformation document features common to many normal-fault relay zones: a topographic ramp between the fault segments, tapering slip on the faults as they enter the overlap zone, and associated fracturing, especially at the top of the ramp. These observations motivate numerical modeling of the development of a relay zone. A three-dimensional boundary element method numerical model, using simple fault-plane geometries, material properties, and boundary conditions, reproduces the principal characteristics of the observed fault scarps. The model, with overlapping, semicircular fault segments under orthogonal extension, produces a region of high Coulomb shear stress in the relay zone that would favor fault linkage at the center to upper relay ramp. If the fault height is increased, the magnitude of the stresses in the relay zone increases, but the position of the anticipated linkage does not change. The amount of fault overlap changes the magnitude of the Coulomb stress in the relay zone: the greatest potential for fault linkage occurs with the closest underlapping fault tips. Ultimately, the mechanical interaction between segments of a developing normal-fault system promotes the development of connected, zigzagging fault scarps.
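
The Coulomb stress criterion used to anticipate linkage in the relay zone reduces to a one-line calculation once the boundary element model has resolved the stress changes onto a receiver fault. The stress values and effective friction coefficient below are illustrative assumptions.

```python
def coulomb_stress_change(d_shear_mpa, d_normal_mpa, friction=0.4):
    """Coulomb stress change on a receiver fault:
    dCFS = d_tau + mu' * d_sigma_n, with unclamping (d_sigma_n > 0)
    promoting failure. The effective friction value is illustrative."""
    return d_shear_mpa + friction * d_normal_mpa

# A point in a relay zone: shear loading plus slight unclamping both raise
# dCFS, favoring linkage of the stepping segments there.
print(coulomb_stress_change(d_shear_mpa=0.5, d_normal_mpa=0.2))  # 0.58 MPa
```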

  1. Fault trees and imperfect coverage

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne B.

    1989-01-01

    A new algorithm is presented for solving the fault tree. The algorithm includes the dynamic behavior of the fault/error handling model but obviates the need for the Markov chain solution. As the state space is expanded in a breadth-first search (the same is done in the conversion to a Markov chain), the state's contribution to each future state is calculated exactly. A dynamic state truncation technique is also presented; it produces bounds on the unreliability of the system by considering only part of the state space. Since the model is solved as the state space is generated, the process can be stopped as soon as the desired accuracy is reached.
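
For contrast with the dynamic algorithm described above, a static coherent fault tree with independent basic events can be evaluated bottom-up from closed-form gate formulas. The example system and its probabilities are hypothetical.

```python
def or_gate(*probs):
    """Failure probability of an OR gate over independent basic events:
    1 - prod(1 - p_i)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(*probs):
    """Failure probability of an AND gate over independent basic events:
    prod(p_i)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Toy system: fails if the power supply fails OR both redundant CPUs fail.
p_system = or_gate(0.01, and_gate(0.05, 0.05))
print(f"unreliability = {p_system:.6f}")   # 0.012475
```

Dynamic behavior such as imperfect coverage cannot be captured by these static gates, which is exactly the gap the abstract's breadth-first state-expansion algorithm addresses.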

  2. Perspective View, San Andreas Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The prominent linear feature straight down the center of this perspective view is California's famous San Andreas Fault. The image, created with data from NASA's Shuttle Radar Topography Mission (SRTM), will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This segment of the fault lies west of the city of Palmdale, Calif., about 100 kilometers (about 60 miles) northwest of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. Two large mountain ranges are visible, the San Gabriel Mountains on the left and the Tehachapi Mountains in the upper right. Another fault, the Garlock Fault lies at the base of the Tehachapis; the San Andreas and the Garlock Faults meet in the center distance near the town of Gorman. In the distance, over the Tehachapi Mountains is California's Central Valley. Along the foothills in the right hand part of the image is the Antelope Valley, including the Antelope Valley California Poppy Reserve. The data used to create this image were acquired by SRTM aboard the Space Shuttle Endeavour, launched on February 11, 2000.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observation Systems (EROS) Data Center, Sioux Falls, South Dakota.

    SRTM uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space

  3. Fault Injection Techniques and Tools

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen; Tsai, Timothy K.; Iyer, Ravishankar K.

    1997-01-01

    Dependability evaluation involves the study of failures and errors. The destructive nature of a crash and long error latency make it difficult to identify the causes of failures in the operational environment. It is particularly hard to recreate a failure scenario for a large, complex system. To identify and understand potential failures, we use an experiment-based approach for studying the dependability of a system. Such an approach is applied not only during the conception and design phases, but also during the prototype and operational phases. To take an experiment-based approach, we must first understand a system's architecture, structure, and behavior. Specifically, we need to know its tolerance for faults and failures, including its built-in detection and recovery mechanisms, and we need specific instruments and tools to inject faults, create failures or errors, and monitor their effects.
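
A minimal software-implemented fault injection sketch illustrates the experiment-based idea: wrap a target function, inject failures at a controlled rate, and monitor the effects. The function names and the simple fault model are invented for illustration; real injectors also flip bits and perturb memory, buses, and timing.

```python
import random

def with_fault_injection(fn, fault_prob, exc=OSError("injected fault"), rng=random):
    """Wrap fn so that each call fails with probability fault_prob.
    A toy software-implemented fault injection (SWIFI) sketch."""
    def wrapper(*args, **kwargs):
        if rng.random() < fault_prob:
            raise exc
        return fn(*args, **kwargs)
    return wrapper

def read_sensor():
    """Stand-in for the component under test."""
    return 42.0

faulty_read = with_fault_injection(read_sensor, fault_prob=0.3)

# Monitor the effect of injected faults over many trials.
random.seed(1)
failures = 0
for _ in range(1000):
    try:
        faulty_read()
    except OSError:
        failures += 1
print(f"observed failure rate: {failures / 1000:.2f}")
```

In a real campaign the monitoring loop would also record error latency and whether the system's built-in detection and recovery mechanisms caught each injected fault.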

  4. Folding above faults, Rocky Mountains

    SciTech Connect

    McConnell, D.A. (Dept. of Geology)

    1992-01-01

    Asymmetric folds formed above basement faults can be observed throughout the Rocky Mountains. Several previous interpretations of the folding process made the implicit assumption that one or both fold hinges migrated or "rolled" through the steep forelimb of the fold as the structure evolved (rolling hinge model). Results of mapping in the Bighorn and Seminoe Mountains, WY, and Sangre de Cristo Range, CO, do not support this hypothesis. An alternative interpretation is presented in which fold hinges remained fixed in position during folding (fixed hinge model). Mapped folds share common characteristics: (1) axial traces of the folds intersect faults at or near the basement/cover interface, and diverge from faults upsection; (2) fold hinges are narrow and interlimb angles cluster around 80°-100° regardless of fold location; (3) fold shape is typically angular, despite published cross sections that show concentric folds; and (4) beds within the folds show thickening and/or thinning, most commonly adjacent to fold hinges. The rolling hinge model requires that rocks in the fold forelimbs bend through narrow fold hinges as deformation progressed. Examination of massive, competent rock units such as the Ord. Bighorn Dolomite, Miss. Madison Limestone, and Penn. Tensleep Sandstone reveals no evidence of the extensive internal deformation that would be expected if hinges rolled through rocks of the forelimb. The hinges of some folds (e.g. Golf Creek anticline, Bighorn Mountains) are offset by secondary faults, effectively preventing the passage of rocks from backlimb to forelimb. The fixed hinge model proposes that the fold hinges were defined early in fold evolution, and beds were progressively rotated and steepened as the structure grew.

  5. Inverter Ground Fault Overvoltage Testing

    SciTech Connect

    Hoke, Andy; Nelson, Austin; Chakraborty, Sudipta; Chebahtah, Justin; Wang, Trudie; McCarty, Michael

    2015-08-12

    This report describes testing conducted at NREL to determine the duration and magnitude of transient overvoltages created by several commercial PV inverters during ground fault conditions. For this work, a test plan developed by the Forum on Inverter Grid Integration Issues (FIGII) has been implemented in a custom test setup at NREL. Load rejection overvoltage test results were reported previously in a separate technical report.

  6. Fault Tolerance of Neural Networks

    DTIC Science & Technology

    1994-07-01

    Systematic Approach, Proc. Government Microcircuit Application Conf. (GOMAC), San Diego, Nov. 1986. [10] D.E. Goldberg, Genetic Algorithms in Search... attempt to develop fault tolerant neural networks. Given a well-trained network, we first eliminate... both approaches, and this resulted in very slight improvements over the addition/deletion procedure. Fisher's Iris data in average case...

  7. Watching Faults Grow in Sand

    NASA Astrophysics Data System (ADS)

    Cooke, M. L.

    2015-12-01

    Accretionary sandbox experiments provide a rich environment for investigating the processes of fault development. These experiments engage students because 1) they enable direct observation of fault growth, which is impossible in the crust (type 1 physical model), 2) they are not only representational but can also be manipulated (type 2 physical model), 3) they can be used to test hypotheses (type 3 physical model) and 4) they resemble experiments performed by structural geology researchers around the world. The structural geology courses at UMass Amherst utilize a series of accretionary sandbox experiments in which students first watch a video of an experiment and then perform a group experiment. The experiments motivate discussions of what conditions they would change and what outcomes they would expect from these changes; that is, hypothesis development. These discussions inevitably lead to calculations of the scaling relationships between model and crustal fault growth and provide insight into the crustal processes represented within the dry sand. Sketching of the experiments has been shown to be a very effective assessment method, as the students reveal which features they are analyzing. Another approach used at UMass is to set up a forensic experiment. The experiment is set up with spatially varying basal friction before the meeting, and students must figure out what the basal conditions are through the experiment. This experiment leads to discussions of equilibrium and force balance within the accretionary wedge. Displacement fields can be captured throughout the experiment using inexpensive digital image correlation techniques to foster quantitative analysis of the experiments.
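
The inexpensive digital image correlation mentioned above can be sketched with FFT phase correlation, which recovers the integer-pixel shift between two frames of the deforming sand surface. The synthetic "sand texture" below is, of course, an assumption for demonstration.

```python
import numpy as np

def pixel_shift(frame_a, frame_b):
    """Estimate the integer-pixel displacement of frame_a relative to
    frame_b by phase correlation, the core of simple digital image
    correlation (DIC)."""
    f = np.fft.fft2(frame_a) * np.conj(np.fft.fft2(frame_b))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    n, m = corr.shape
    if dy > n // 2:
        dy -= n          # unwrap negative shifts
    if dx > m // 2:
        dx -= m
    return int(dy), int(dx)

# Synthetic test: shift a random "sand texture" by (3, 5) pixels.
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, shift=(3, 5), axis=(0, 1))
print(pixel_shift(b, a))   # → (3, 5)
```

Applying this patch-by-patch over an image pair yields the displacement field; subpixel refinements interpolate around the correlation peak.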

  8. CONTROL AND FAULT DETECTOR CIRCUIT

    DOEpatents

    Winningstad, C.N.

    1958-04-01

    A power control and fault detector circuit for a radio-frequency system is described. The operation of the circuit controls the power output of a radio-frequency power supply to automatically start the flow of energizing power to the radio-frequency power supply and to gradually increase the power to a predetermined level which is below the point where destruction occurs upon the happening of a fault. If the radio-frequency power supply output fails to increase during such period, the control does not further increase the power. On the other hand, if the output of the radio-frequency power supply properly increases, then the control continues to increase the power to a maximum value. After the maximum value of radio-frequency output has been achieved, the control is responsive to a "fault," such as a short circuit in the radio-frequency system being driven, so that the flow of power is interrupted for an interval before the cycle is repeated.

  9. Fault detection using genetic programming

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Jack, Lindsay B.; Nandi, Asoke K.

    2005-03-01

    Genetic programming (GP) is a stochastic process for automatically generating computer programs. GP has been applied to a variety of problems too numerous to reasonably enumerate. As far as the authors are aware, it has rarely been used in condition monitoring (CM). In this paper, GP is used to detect faults in rotating machinery. Feature sets from two different machines are used to examine the performance of two-class normal/fault recognition. The results are compared with a few other methods for fault detection: artificial neural networks (ANNs) have been used in this field for many years, while support vector machines (SVMs) also offer successful solutions. For ANNs and SVMs, genetic algorithms have been used to do feature selection, which is an inherent function of GP. In all cases, the GP demonstrates performance which equals or betters that of the previous best performing approaches on these data sets. The training times are also found to be considerably shorter than those of the other approaches, whilst the generated classification rules are easy to understand and independently validate.

  10. Influence of fault trend, fault bends, and fault convergence on shallow structure, geomorphology, and hazards, Hosgri strike-slip fault, offshore central California

    NASA Astrophysics Data System (ADS)

    Johnson, S. Y.; Watt, J. T.; Hartwell, S. R.

    2012-12-01

    We mapped a ~94-km-long portion of the right-lateral Hosgri Fault Zone from Point Sal to Piedras Blancas in offshore central California using high-resolution seismic reflection profiles, marine magnetic data, and multibeam bathymetry. The database includes 121 seismic profiles across the fault zone and is perhaps the most comprehensive reported survey of the shallow structure of an active strike-slip fault. These data document the location, length, and near-surface continuity of multiple fault strands, highlight fault-zone heterogeneity, and demonstrate the importance of fault trend, fault bends, and fault convergences in the development of shallow structure and tectonic geomorphology. The Hosgri Fault Zone is continuous through the study area passing through a broad arc in which fault trend changes from about 338° to 328° from south to north. The southern ~40 km of the fault zone in this area is more extensional, resulting in accommodation space that is filled by deltaic sediments of the Santa Maria River. The central ~24 km of the fault zone is characterized by oblique convergence of the Hosgri Fault Zone with the more northwest-trending Los Osos and Shoreline Faults. Convergence between these faults has resulted in the formation of local restraining and releasing fault bends, transpressive uplifts, and transtensional basins of varying size and morphology. We present a hypothesis that links development of a paired fault bend to indenting and bulging of the Hosgri Fault by a strong crustal block translated to the northwest along the Shoreline Fault. Two diverging Hosgri Fault strands bounding a central uplifted block characterize the northern ~30 km of the Hosgri Fault in this area. The eastern Hosgri strand passes through releasing and restraining bends; the releasing bend is the primary control on development of an elongate, asymmetric, "Lazy Z" sedimentary basin. The western strand of the Hosgri Fault Zone passes through a significant restraining bend and

  11. Fault tolerant operation of switched reluctance machine

    NASA Astrophysics Data System (ADS)

    Wang, Wei

    The energy crisis and environmental challenges have driven industry towards more energy efficient solutions. With nearly 60% of electricity consumed by various electric machines in the industry sector, advancement in the efficiency of the electric drive system is of vital importance. Adjustable speed drive systems (ASDS) provide excellent speed regulation and dynamic performance as well as dramatically improved system efficiency compared with conventional motors without electronic drives. Industry has witnessed tremendous growth in ASDS applications not only as a driving force but also as an electric auxiliary system for replacing bulky and low efficiency auxiliary hydraulic and mechanical systems. With the vast penetration of ASDS, its fault tolerant operation capability is more widely recognized as an important feature of drive performance, especially for aerospace and automotive applications and other industrial drives demanding high reliability. The Switched Reluctance Machine (SRM), a low cost, highly reliable electric machine with fault tolerant operation capability, has drawn substantial attention in the past three decades. Nevertheless, SRM is not free of faults. Certain faults such as converter faults, sensor faults, winding shorts, eccentricity and position sensor faults are commonly shared among all ASDS. In this dissertation, a thorough understanding of various faults and their influence on transient and steady state performance of SRM is developed via simulation and experimental study, providing necessary knowledge for fault detection and post fault management. Lumped parameter models are established for fast real time simulation and drive control. Based on the behavior of the faults, a fault detection scheme is developed for the purpose of fast and reliable fault diagnosis. In order to improve the SRM power and torque capacity under faults, maximum torque per ampere excitation is conceptualized and validated through theoretical analysis and

  12. A Quaternary Fault Database for Central Asia

    NASA Astrophysics Data System (ADS)

    Mohadjer, S.; Ehlers, T. A.; Bendick, R.; Stübner, K.; Strube, T.

    2015-09-01

    Earthquakes represent the highest risk in terms of potential loss of lives and economic damage for Central Asian countries. Knowledge of fault location and behavior is essential in calculating and mapping seismic hazard. Previous efforts in compiling fault information for Central Asia have generated a large amount of data that are published in limited-access journals with no digital maps publicly available, or are limited in their description of important fault parameters such as slip rates. This study builds on previous work by improving access to fault information through a web-based interactive map and an online database with search capabilities that allow users to organize data by different fields. The data presented in this compilation include fault location, its geographic, seismic and structural characteristics, short descriptions, narrative comments and references to peer-reviewed publications. The interactive map displays 1196 fault segments and 34 000 earthquake locations on a shaded-relief map. The online database contains attributes for 122 faults mentioned in the literature, with Quaternary and geodetic slip rates reported for 38 and 26 faults respectively, and earthquake history reported for 39 faults. This work has implications for seismic hazard studies in Central Asia as it summarizes important fault parameters, and can reduce earthquake risk by enhancing public access to information. It also allows scientists and hazard assessment teams to identify structures and regions where data gaps exist and future investigations are needed.
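
The kind of record the online database holds, and a simple search over it, can be sketched as follows. The field names and example entries are illustrative, not the actual schema or data of the compilation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FaultRecord:
    """One entry in a Quaternary fault database (illustrative schema)."""
    name: str
    slip_sense: str
    quaternary_slip_rate_mm_yr: Optional[float]  # None where unreported
    has_earthquake_history: bool

faults = [
    FaultRecord("Fault A", "right-lateral", 10.0, True),
    FaultRecord("Fault B", "thrust", None, True),
    FaultRecord("Fault C", "normal", 0.5, False),
]

def with_slip_rates(records: List[FaultRecord]) -> List[FaultRecord]:
    """Search helper: keep only faults with a reported Quaternary slip rate."""
    return [r for r in records if r.quaternary_slip_rate_mm_yr is not None]

print([r.name for r in with_slip_rates(faults)])   # → ['Fault A', 'Fault C']
```

Filters like this one are exactly how such a database exposes data gaps: the complement of the query lists the faults still lacking slip-rate measurements.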

  13. Determining Fault Orientation with Sagnac Interferometers

    NASA Astrophysics Data System (ADS)

    Gruenwald, Konstantin; Dunn, Robert

    2014-03-01

    Typically, earthquake fault ruptures emit seismic waves in directions dependent on the fault's orientation. Specifically, as the fault slips to release strain, compressional P-waves propagate parallel and perpendicular to the fault plane, and transverse S-waves propagate at 45 degree angles to the fault, a result of the double-couple model of fault slippage. Sagnac interferometers (ring lasers) have been used to study wave components of several natural phenomena. We used the initial responses of a ring laser to transverse S-waves to determine the orientation of the nearby Guy/Greenbrier fault, the source of an earthquake swarm in 2010-11 purportedly caused by hydraulic fracturing. This orientation was compared to the structure of the fault extracted from nearby seismogram responses. Our goal was to determine if ring lasers could reinforce or add to the models of fault orientation constructed from seismographs. The results indicate that the ring laser's responses can aid in constructing fault orientation in a manner similar to traditional seismographs. Funded by the Arkansas Space Grant Consortium and the National Science Foundation.
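
A 2-D sketch of the far-field double-couple radiation patterns can make the angular dependence concrete. In the usual textbook convention the P amplitude varies as sin 2θ and the S amplitude as cos 2θ, with θ measured from the fault plane; normalisation is omitted and conventions differ between treatments.

```python
import math

def double_couple_amplitudes(theta_deg):
    """Far-field 2-D radiation amplitudes of a double-couple source:
    P ∝ sin(2θ), S ∝ cos(2θ), with θ measured from the fault plane.
    Signs distinguish compressional and dilatational lobes."""
    th = math.radians(theta_deg)
    return math.sin(2 * th), math.cos(2 * th)

for theta in (0, 45, 90):
    p, s = double_couple_amplitudes(theta)
    print(f"theta = {theta:2d}°  P = {p:+.2f}  S = {s:+.2f}")
```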

  14. Learning and diagnosing faults using neural networks

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis

    1990-01-01

    Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor data to fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor data to fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Secondly, the neural network which performs sensor fusion is structured to detect new unknown faults for which training examples were not presented during training. These methods were tested on a task of fault diagnosis by employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.
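
The three-mapping decomposition can be sketched as a pipeline of small layers with a novelty check in the fusion stage. The layer sizes, the random (untrained) weights, and the threshold are all invented for illustration; the real system trained each mapping on simulator data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three-stage decomposition (weights untrained, shapes illustrative):
# sensor data -> compression -> hypothesis generation -> sensor fusion.
W_compress = rng.normal(size=(32, 8))    # 32 sensors -> 8 features
W_hypoth = rng.normal(size=(8, 5))       # 8 features -> 5 fault hypotheses
W_fuse = rng.normal(size=(5, 5))         # fuse hypotheses -> diagnosis scores

def diagnose(sensors, novelty_threshold=0.5):
    h1 = np.tanh(sensors @ W_compress)            # data compression
    h2 = np.tanh(h1 @ W_hypoth)                   # hypothesis generation
    scores = 1 / (1 + np.exp(-(h2 @ W_fuse)))     # sensor fusion
    if scores.max() < novelty_threshold:          # no trained fault matches:
        return "unknown fault"                    # flag as novel
    return f"fault {int(scores.argmax())}"

print(diagnose(rng.normal(size=32)))
```

Training each stage separately keeps the individual mappings small, and the thresholded fusion output is one simple way to flag faults absent from the training set.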

  15. Seismological Constraints on Fault Plane Curvature

    NASA Astrophysics Data System (ADS)

    Reynolds, K.

    2015-12-01

    The down-dip geometry of seismically active normal faults is not well known. Many examples of normal faults with down-dip curvature exist, such as listric faults revealed in cross-section or in seismic reflection data, or the exposed domes of core complexes. However, it is not understood: (1) whether curved faults fail in earthquakes, and (2) if those faults have generated earthquakes, is the curvature a primary feature of the rupture or due to later modification of the plane? Even if an event is surface-rupturing, because of the limited depth-extent over which observations can be made, it is difficult to reliably constrain the change in dip with depth (if any) and therefore the fault curvature. Despite the uncertainty in seismogenic normal fault geometries, published slip inversions most commonly use planar fault models. We investigate the seismological constraints on normal fault geometry using a forward-modelling approach and present a seismological technique for determining down-dip geometry. We demonstrate that complexity in the shape of teleseismic body waveforms may be used to investigate the presence of down-dip fault plane curvature. We have applied this method to a catalogue of continental and oceanic normal faulting events. Synthetic models demonstrate that the shapes of SH waveforms at along-strike stations are particularly sensitive to fault plane geometry. It is therefore important to consider the azimuthal station coverage before modelling an event. We find that none of the data require significant down-dip curvature, although the modelling results for some events remain ambiguous. In some cases we can constrain that the down-dip fault geometry is within 20° of planar.

  16. West Coast Tsunami: Cascadia's Fault?

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Bernard, E. N.; Titov, V.

    2013-12-01

    The tragedies of the 2004 Sumatra and 2011 Japan tsunamis exposed the limits of our knowledge in preparing for devastating tsunamis. The 1,100-km Pacific coastline of North America has tectonic and geological settings similar to Sumatra and Japan. The geological records unambiguously show that the Cascadia fault has caused devastating tsunamis in the past and that this geological process will cause tsunamis in the future. Hypotheses for the rupture process of the Cascadia fault include a long rupture (M9.1) along the entire fault line, short ruptures (M8.8 - M9.1) breaking only a segment of the coastline, or a series of lesser events of M8+. Recent studies also indicate an increasing probability of a small rupture occurring at the south end of the Cascadia fault. Some of these hypotheses were implemented in the development of tsunami evacuation maps in Washington and Oregon. However, the developed maps do not reflect the tsunami impact implied by the most recent updates to the Cascadia fault rupture process. The most recent study by Wang et al. (2013) suggests a rupture pattern of high-slip patches separated by low-slip areas, constrained by estimates of coseismic subsidence based on microfossil analyses. Since this study infers that a Tohoku-type earthquake could strike in the Cascadia subduction zone, how would such a tsunami affect tsunami hazard assessment and planning along the Pacific coast of North America? The rapid development of computing technology allows us to examine the tsunami impact of the above hypotheses using high-resolution models with broad coverage of the Pacific Northwest. With the slab model of McCrory et al. (2012) (part of the USGS Slab 1.0 model) for the Cascadia earthquake, we tested the above hypotheses to assess the tsunami hazards along the entire U.S. West Coast. The modeled results indicate these hypothetical scenarios may cause runup heights very similar to those observed along Japan's coastline during the 2011 tsunami.

  17. Perspective View, San Andreas Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The prominent linear feature straight down the center of this perspective view is the San Andreas Fault in an image created with data from NASA's shuttle Radar Topography Mission (SRTM), which will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This segment of the fault lies west of the city of Palmdale, California, about 100 kilometers (about 60 miles) northwest of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. This area is at the junction of two large mountain ranges, the San Gabriel Mountains on the left and the Tehachapi Mountains on the right. Quail Lake Reservoir sits in the topographic depression created by past movement along the fault. Interstate 5 is the prominent linear feature starting at the left edge of the image and continuing into the fault zone, passing eventually over Tejon Pass into the Central Valley, visible at the upper left.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994.

  18. Fault geometries in basement-induced wrench faulting under different initial stress states

    NASA Astrophysics Data System (ADS)

    Naylor, M. A.; Mandl, G.; Supesteijn, C. H. K.

    Scaled sandbox experiments were used to generate models for relative ages, dip, strike and three-dimensional shape of faults in basement-controlled wrench faulting. The basic fault sequence runs from early en échelon Riedel shears and splay faults through 'lower-angle' shears to P shears. The Riedel shears are concave upwards and define a tulip structure in cross-section. In three dimensions, each Riedel shear has a helicoidal form. The sequence of faults and three-dimensional geometry are rationalized in terms of the prevailing stress field and the Coulomb-Mohr theory of shear failure. The stress state in the sedimentary overburden before wrenching begins has a substantial influence on the fault geometries and on the final complexity of the fault zone. With the maximum compressive stress (σ1) initially parallel to the basement fault (transtension), Riedel shears are only slightly en échelon, sub-parallel to the basement fault, steeply dipping with a reduced helicoidal aspect. Conversely, with σ1 initially perpendicular to the basement fault (transpression), Riedel shears are strongly oblique to the basement fault strike, have lower dips and an exaggerated helicoidal form; the final fault zone is both wide and complex. We find good agreement between the models and both mechanical theory and natural examples of wrench faulting.
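    Coulomb-Mohr theory predicts the orientations of these secondary shears from the friction angle alone. The sketch below illustrates the standard angular relationships (R at φ/2 to the basement fault, the conjugate R' at 90° − φ/2, P symmetric to R); the friction coefficient is a typical value for sand chosen for illustration, not one reported by the experiments.

```python
import math

def shear_orientations(friction_coefficient):
    """Expected strikes of secondary shears relative to the basement wrench
    fault, from Coulomb-Mohr theory under basement-parallel simple shear.
    phi is the internal friction angle; angles are returned in degrees."""
    phi = math.degrees(math.atan(friction_coefficient))
    return {
        "R (Riedel)": phi / 2.0,          # synthetic, low angle to the fault
        "R' (conjugate)": 90.0 - phi / 2.0,
        "P shear": -phi / 2.0,            # mirror of R across the fault trend
    }

angles = shear_orientations(0.6)   # typical sand: phi ~ 31 deg, R ~ 15.5 deg
```

    For transtension or transpression the principal stress axes rotate relative to the basement fault, which shifts these strikes in the sense the abstract describes.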

  19. Primary and secondary faulting in the Najd fault system, Kingdom of Saudi Arabia

    USGS Publications Warehouse

    Moore, John McMahon

    1979-01-01

    The Najd fault system is a major transcurrent (strike-slip) fault system of Proterozoic age in the Arabian Shield. The system is a braided complex of parallel and curved en echelon faults. Complex arrays of secondary structures including strike-slip, oblique-slip, thrust, and normal faults, together with folds and dike swarms, are associated with some major faults, particularly near their terminations. The secondary structures indicate that compressional, extensional, and dilational conditions existed synchronously in different parts of the fault zone. The outcrop traces of faults and syntectonic dikes have been used to interpret the configuration of principal compressive stresses during formation of parts of the secondary fracture systems. Second-order deformation was a series of separate events in a complex episodic faulting history. Comparison with model studies indicates that master faults extended in length in stages and periodically developed arrays of secondary structures. Propagation of the major faults took place along splay trajectories, which interconnected to form a subparallel sheeted and braided zone. Interpretation of the aeromagnetic maps indicates that the Najd system is broader at depth than the outcropping fault complex, and that more continuous structures underlie the arrays of faults at the surface. The fault pattern is mechanically explicable in terms of simple shear between rigid blocks beneath the exposed structures.

  20. The Energetics of Gravity Driven Faulting

    NASA Astrophysics Data System (ADS)

    Barrows, L.

    2007-12-01

    Faulting can result from either of two different mechanisms. These involve fundamentally different energetics. In displacement-bounded faulting, locked-in elastic strain energy is transformed into seismic waves plus work done in the fault zone. Elastic rebound is an example of displacement-bounded faulting. In force-driven faulting, the forces that create the stress on the fault supply work or energy to the faulting process. Half of this energy is transformed into seismic waves plus work done in the fault zone and half goes into an increase in locked-in elastic strain. In displacement-bounded faulting the locked-in elastic strain drives slip on the fault. In force-driven faulting it stops slip on the fault. Tectonic stress is reasonably attributed to gravity acting on topography and the Earth's lateral density variations. This includes the thermal convection that ultimately drives plate tectonics. The gravity collapse seismic mechanism assumes the fault fails and slips in direct response to the gravitational tectonic stress. Gravity collapse is an example of force-driven faulting. In the simplest case, energy that is released from the gravitational potential of the topography and internal stress-causing density variations is equally split between the seismic waves plus work done in the fault zone and the increase in locked-in elastic strain. The release of gravitational potential energy requires a change in the Earth's density distribution. Gravitational body forces are solely dependent on density so a change in the density distribution requires a change in the body forces. This implies the existence of volumetric body-force displacements. The volumetric body-force displacements are in addition to displacements generated by slip on the fault. They must exist if gravity participates in the energetics of the faulting process. From the perspective of gravitational tectonics, the gravity collapse mechanism is direct and simple. The related mechanics are a little more complicated.

  1. Surface faulting along the Superstition Hills fault zone and nearby faults associated with the earthquakes of 24 November 1987

    USGS Publications Warehouse

    Sharp, R.V.

    1989-01-01

    The M6.2 Elmore Desert Ranch earthquake of 24 November 1987 was associated spatially and probably temporally with left-lateral surface rupture on many northeast-trending faults in and near the Superstition Hills in western Imperial Valley. Three curving discontinuous principal zones of rupture among these breaks extended northeastward from near the Superstition Hills fault zone as far as 9km; the maximum observed surface slip, 12.5cm, was on the northern of the three, the Elmore Ranch fault, at a point near the epicenter. Twelve hours after the Elmore Ranch earthquake, the M6.6 Superstition Hills earthquake occurred near the northwest end of the right-lateral Superstition Hills fault zone. We measured displacements over 339 days at as many as 296 sites along the Superstition Hills fault zone, and repeated measurements at 49 sites provided sufficient data to fit with a simple power law. The overall distributions of right-lateral displacement at 1 day and the estimated final slip are nearly symmetrical about the midpoint of the surface rupture. The average estimated final right-lateral slip for the Superstition Hills fault zone is ~54cm. The average left-lateral slip for the conjugate faults trending northeastward is ~23cm. The southernmost ruptured member of the Superstition Hills fault zone, newly named the Wienert fault, extends the known length of the zone by about 4km. -from Authors
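    The fitting of repeated slip measurements "with a simple power law" can be reproduced by ordinary least squares in log space. The displacement values below are invented for illustration; only the power-law functional form d = a·t^b and the 339-day observation window follow the abstract, and the study's exact functional form may differ.

```python
import numpy as np

# Hypothetical right-lateral displacement measurements (cm) at days after
# the mainshock, at one survey site along the Superstition Hills fault zone.
t = np.array([1.0, 7.0, 30.0, 90.0, 180.0, 339.0])
d = np.array([20.0, 28.0, 37.0, 44.0, 49.0, 53.0])

# Fit d = a * t**b by a degree-1 least-squares fit to log(d) vs log(t).
b, log_a = np.polyfit(np.log(t), np.log(d), 1)
a = np.exp(log_a)

def predicted_slip(days):
    return a * days ** b

final_slip = predicted_slip(339.0)   # estimated slip at the end of the survey
```

    With real site data, extrapolating such a fit is one way to estimate the "final" slip quoted per site in the abstract.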

  2. Fault Model Development for Fault Tolerant VLSI Design

    DTIC Science & Technology

    1988-05-01

    ...it minimizes the number of bridging faults but because of the ease with which the layout principles can be automated. This implies a ... diffusion over a significant portion. Thus, it turns out that the layout chosen on the basis of easy automation is also efficient in terms of ... Proc. 24th ACM/IEEE Design Automation Conference, June 1987, pp. 244-250. ... [Reddy, 1986] Sudhakar M. Reddy and Madhukar M. Reddy

  3. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1990-01-01

    The use of back-to-back, or comparison, testing for regression test or porting is examined. The efficiency and the cost of the strategy is compared with manual and table-driven single version testing. Some of the key parameters that influence the efficiency and the cost of the approach are the failure identification effort during single version program testing, the extent of implemented changes, the nature of the regression test data (e.g., random), and the nature of the inter-version failure correlation and fault-masking. The advantages and disadvantages of the technique are discussed, together with some suggestions concerning its practical use.
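    The back-to-back (comparison) testing strategy can be sketched as follows: two versions of a routine are run on the same generated inputs and their outputs compared, so regression failures are identified without a hand-built oracle. The versions and input generator here are hypothetical stand-ins, not the study's subject programs.

```python
import random

def sort_v1(xs):
    """Reference (old) version."""
    return sorted(xs)

def sort_v2(xs):
    """Ported (new) version under test; a regression would make outputs diverge."""
    return sorted(xs)

def back_to_back(old, new, n_cases=1000, seed=42):
    """Run both versions on the same random inputs; collect disagreements."""
    rng = random.Random(seed)
    mismatches = []
    for _ in range(n_cases):
        case = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        if old(case) != new(case):
            mismatches.append(case)   # keep failing inputs for failure identification
    return mismatches

failures = back_to_back(sort_v1, sort_v2)
```

    The key cost trade-off the abstract discusses shows up here as the failure identification effort: a mismatch flags *that* the versions disagree, not *which* one is wrong.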

  4. Holocene fault scarps near Tacoma, Washington, USA

    USGS Publications Warehouse

    Sherrod, B.L.; Brocher, T.M.; Weaver, C.S.; Bucknam, R.C.; Blakely, R.J.; Kelsey, H.M.; Nelson, A.R.; Haugerud, R.

    2004-01-01

    Airborne laser mapping confirms that Holocene active faults traverse the Puget Sound metropolitan area, northwestern continental United States. The mapping, which detects forest-floor relief of as little as 15 cm, reveals scarps along geophysical lineaments that separate areas of Holocene uplift and subsidence. Along one such line of scarps, we found that a fault warped the ground surface between A.D. 770 and 1160. This reverse fault, which projects through Tacoma, Washington, bounds the southern and western sides of the Seattle uplift. The northern flank of the Seattle uplift is bounded by a reverse fault beneath Seattle that broke in A.D. 900-930. Observations of tectonic scarps along the Tacoma fault demonstrate that active faulting with associated surface rupture and ground motions pose a significant hazard in the Puget Sound region.

  5. A new intelligent hierarchical fault diagnosis system

    SciTech Connect

    Huang, Y.C.; Huang, C.L.; Yang, H.T.

    1997-02-01

    As a part of a substation-level decision support system, a new intelligent Hierarchical Fault Diagnosis System for on-line fault diagnosis is presented in this paper. The proposed diagnosis system divides the fault diagnosis process into two phases. Using time-stamped information of relays and breakers, phase 1 identifies the possible fault sections through the Group Method of Data Handling (GMDH) networks, and phase 2 recognizes the types and detailed situations of the faults identified in phase 1 by using a fast bit-operation logical inference mechanism. The diagnosis system has been practically verified by testing on a typical Taiwan power secondary transmission system. Test results show that rapid and accurate diagnosis can be obtained with flexibility and portability for fault diagnosis purposes in diverse substations.

  6. Fault-tolerant dynamic task graph scheduling

    SciTech Connect

    Kurt, Mehmet C.; Krishnamoorthy, Sriram; Agrawal, Kunal; Agrawal, Gagan

    2014-11-16

    In this paper, we present an approach to fault tolerant execution of dynamic task graphs scheduled using work stealing. In particular, we focus on selective and localized recovery of tasks in the presence of soft faults. We elicit from the user the basic task graph structure in terms of successor and predecessor relationships. The work stealing-based algorithm to schedule such a task graph is augmented to enable recovery when the data and meta-data associated with a task get corrupted. We use this redundancy, and the knowledge of the task graph structure, to selectively recover from faults with low space and time overheads. We show that the fault tolerant design retains the essential properties of the underlying work stealing-based task scheduling algorithm, and that the fault tolerant execution is asymptotically optimal when task re-execution is taken into account. Experimental evaluation demonstrates the low cost of recovery under various fault scenarios.

  7. New results in fault latency modelling

    NASA Technical Reports Server (NTRS)

    Mcgough, J. G.; Swern, F. L.; Bavuso, S.

    1983-01-01

    Studies carried out by McGough and Swern (1981, 1983) are summarized. In these studies, an avionics processor was simulated and a series of fault injection experiments was carried out to determine the degree of fault latency in a redundant flight control system that employed comparison monitoring as the exclusive means of failure detection. A determination was also made of the fault coverage of a typical self-test program. The summary presented stresses that a self-test program should be designed to capitalize on the hardware mechanization of the processor. If this is not done, subtests tend to repeatedly exercise the same hardware components while neglecting to exercise a substantial proportion of the remainder. It is also pointed out that fault latency is relatively independent of both the length and instruction mix of a program. A significant difference is found in fault coverage assessed using pin-level and gate-level fault models.

  8. Performance Analysis on Fault Tolerant Control System

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine

    2005-01-01

    In a fault tolerant control (FTC) system, a parameter varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. In this paper, an FTC analysis framework is provided to calculate the upper bound of an induced-L(sub 2) norm of an FTC system in the presence of false identification and detection time delay. The upper bound is written as a function of a fault detection time and exponential decay rates and has been used to determine which FTC law produces less performance degradation (tracking error) due to false identification. The analysis framework is applied to an FTC system of a HiMAT (Highly Maneuverable Aircraft Technology) vehicle. Index terms: fault tolerant control system, linear parameter varying system, HiMAT vehicle.

  9. In-circuit fault injector user's guide

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1987-01-01

    A fault injector system, called an in-circuit injector, was designed and developed to facilitate fault injection experiments performed at NASA-Langley's Avionics Integration Research Lab (AIRLAB). The in-circuit fault injector (ICFI) allows fault injections to be performed on electronic systems without special test features, e.g., sockets. The system supports stuck-at-zero, stuck-at-one, and transient fault models. The ICFI system is interfaced to a VAX-11/750 minicomputer. An interface program has been developed in the VAX. The computer code required to access the interface program is presented. Also presented is the connection procedure to be followed to connect the ICFI system to a circuit under test and the ICFI front panel controls which allow manual control of fault injections.

  10. Identifiability of Additive Actuator and Sensor Faults by State Augmentation

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh; Gonzalez, Oscar R.; Upchurch, Jason M.

    2014-01-01

    A class of fault detection and identification (FDI) methods for bias-type actuator and sensor faults is explored in detail from the point of view of fault identifiability. The methods use state augmentation along with banks of Kalman-Bucy filters for fault detection, fault pattern determination, and fault value estimation. A complete characterization of conditions for identifiability of bias-type actuator faults, sensor faults, and simultaneous actuator and sensor faults is presented. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have unknown biases. The fault identifiability conditions are demonstrated via numerical examples. The analytical and numerical results indicate that caution must be exercised to ensure fault identifiability for different fault patterns when using such methods.
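    The state-augmentation idea can be illustrated on a scalar plant with a constant additive actuator bias: the bias is appended to the state vector and estimated alongside it. The discrete-time Kalman filter below is a simplified stand-in for the Kalman-Bucy filter banks in the paper, and all numerical values are invented.

```python
import numpy as np

# Plant: x[k+1] = a*x[k] + b*(u[k] + f), with unknown constant actuator bias f.
# Augmented state z = [x, f]; the bias is modeled as a constant state.
a, b, true_bias = 0.9, 1.0, 0.5
A = np.array([[a, b], [0.0, 1.0]])
B = np.array([[b], [0.0]])
H = np.array([[1.0, 0.0]])           # only x is measured

rng = np.random.default_rng(1)
z_hat = np.zeros((2, 1))             # initial estimate: no bias
P = np.eye(2)
Q = np.diag([1e-4, 1e-6])            # small process noise keeps the bias state alive
R = np.array([[1e-2]])               # measurement noise covariance (std 0.1)

x = 0.0
for k in range(200):
    u = np.sin(0.1 * k)                       # persistently exciting input
    x = a * x + b * (u + true_bias)           # true plant with actuator fault
    y = x + rng.normal(scale=0.1)             # noisy measurement
    # Kalman filter: predict...
    z_hat = A @ z_hat + B * u
    P = A @ P @ A.T + Q
    # ...then update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    z_hat = z_hat + K * (y - (H @ z_hat).item())
    P = (np.eye(2) - K @ H) @ P

estimated_bias = z_hat[1, 0]                  # converges toward true_bias
```

    The identifiability caveat in the abstract shows up in this framing: if the measurement itself also carried an unknown bias on every sensor, the augmented system would no longer be observable and the actuator and sensor biases could not be separated.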

  11. Seismically invisible fault zones: Laboratory insights into imaging faults in anisotropic rocks

    NASA Astrophysics Data System (ADS)

    Kelly, C. M.; Faulkner, D. R.; Rietbrock, A.

    2017-08-01

    Phyllosilicate-rich rocks which commonly occur within fault zones cause seismic velocity anisotropy. However, anisotropy is not always taken into account in seismic imaging and the extent of the anisotropy is often unknown. Laboratory measurements of the velocity anisotropy of fault zone rocks and gouge from the Carboneras fault zone in SE Spain indicate 10-15% velocity anisotropy in the gouge and 35-50% anisotropy in the mica-schist protolith. Greater differences in velocity are observed between the fast and slow directions in the mica-schist rock than between the gouge and the slow direction of the rock. This implies that the orientation of the anisotropy with respect to the fault is key in imaging the fault seismically. For example, for fault-parallel anisotropy, a significantly greater velocity contrast between fault gouge and rock will occur along the fault than across it, highlighting the importance of considering the foliation orientation in design of seismic experiments.
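    The quoted percentages follow the common laboratory convention of normalizing the fast/slow velocity difference by the mean velocity. A minimal sketch, with illustrative velocities rather than the measured ones:

```python
def velocity_anisotropy(v_fast, v_slow):
    """Percent velocity anisotropy: fast/slow difference over their mean,
    the usual convention for laboratory ultrasonic measurements."""
    return 100.0 * (v_fast - v_slow) / ((v_fast + v_slow) / 2.0)

# Illustrative P-wave velocities (km/s), roughly consistent with the
# reported ranges: ~10-15% for the gouge, ~35-50% for the mica-schist.
gouge_anis = velocity_anisotropy(3.3, 2.9)
schist_anis = velocity_anisotropy(6.0, 4.0)
```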

  12. On concentrated solute sources in faulted aquifers

    NASA Astrophysics Data System (ADS)

    Robinson, N. I.; Werner, A. D.

    2017-06-01

    Finite aperture faults and fractures within aquifers (collectively called 'faults' hereafter) theoretically enable flowing water to move through them but with refractive displacement, both on entry and exit. When a 2D or 3D point source of solute concentration is located upstream of the fault, the plume emanating from the source relative to one in a fault-free aquifer is affected by the fault, both before it and after it. Previous attempts to analyze this situation using numerical methods faced challenges in overcoming computational constraints that accompany requisite fine mesh resolutions. To address these, an analytical solution of this problem is developed and interrogated using statistical evaluation of solute distributions. The method of solution is based on novel spatial integral representations of the source with axes rotated from the direction of uniform water flow and aligning with fault faces and normals. Numerical exemplification is given to the case of a 2D steady state source, using various parameter combinations. Statistical attributes of solute plumes show the relative impact of parameters, the most important being, fault rotation, aperture and conductivity ratio. New general observations of fault-affected solution plumes are offered, including: (a) the plume's mode (i.e. peak concentration) on the downstream face of the fault is less displaced than the refracted groundwater flowline, but at some distance downstream of the fault, these realign; (b) porosities have no influence in steady state calculations; (c) previous numerical modeling results of barrier faults show significant boundary effects. The current solution adds to available benchmark problems involving fractures, faults and layered aquifers, in which grid resolution effects are often barriers to accurate simulation.
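    The refractive displacement of flowlines on entry and exit follows the tangent law for steady flow across a conductivity contrast. A minimal sketch of that relationship (not of the paper's integral solution), with hypothetical conductivities:

```python
import math

def refracted_angle(theta_in_deg, k_outside, k_inside):
    """Tangent-law refraction of a groundwater flowline crossing a material
    boundary: tan(theta1)/tan(theta2) = K1/K2, angles from the boundary normal."""
    t = math.tan(math.radians(theta_in_deg)) * k_inside / k_outside
    return math.degrees(math.atan(t))

# A flowline meeting a low-conductivity (barrier) fault at 45 degrees to its
# normal bends sharply toward the normal inside the fault.
theta_in_fault = refracted_angle(45.0, k_outside=10.0, k_inside=0.1)
```

    On exit the refraction reverses, which is why the plume's downstream mode is displaced relative to a fault-free aquifer before realigning with the refracted flowline, as observation (a) describes.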

  13. Hydrogen Embrittlement And Stacking-Fault Energies

    NASA Technical Reports Server (NTRS)

    Parr, R. A.; Johnson, M. H.; Davis, J. H.; Oh, T. K.

    1988-01-01

    Embrittlement in Ni/Cu alloys appears related to stacking-fault probabilities. Report describes attempt to show a correlation between the stacking-fault energy of different Ni/Cu alloys and susceptibility to hydrogen embrittlement. Correlation could lead to more fundamental understanding and a method of predicting the susceptibility of a given Ni/Cu alloy from stacking-fault energies calculated from X-ray diffraction measurements.

  14. The fault-tolerant multiprocessor computer

    NASA Technical Reports Server (NTRS)

    Smith, T. B., III (Editor); Lala, J. H. (Editor); Goldberg, J. (Editor); Kautz, W. H. (Editor); Melliar-Smith, P. M. (Editor); Green, M. W. (Editor); Levitt, K. N. (Editor); Schwartz, R. L. (Editor); Weinstock, C. B. (Editor); Palumbo, D. L. (Editor)

    1986-01-01

    The development and evaluation of fault-tolerant computer architectures and software-implemented fault tolerance (SIFT) for use in advanced NASA vehicles and potentially in flight-control systems are described in a collection of previously published reports prepared for NASA. Topics addressed include the principles of fault-tolerant multiprocessor (FTMP) operation; processor and slave regional designs; FTMP executive, facilities, acceptance-test/diagnostic, applications, and support software; FTMP reliability and availability models; SIFT hardware design; and SIFT validation and verification.

  15. Approximate active fault detection and control

    NASA Astrophysics Data System (ADS)

    Škach, Jan; Punčochář, Ivo; Šimandl, Miroslav

    2014-12-01

    This paper deals with approximate active fault detection and control for nonlinear discrete-time stochastic systems over an infinite time horizon. A multiple-model framework is used to represent the fault-free model and finitely many faulty models. The imperfect state information problem is reformulated using a hyper-state, and dynamic programming is applied to solve the problem numerically. The proposed active fault detector and controller is illustrated in a numerical example of an air handling unit.

  16. 31 CFR 29.522 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance: Treasury 1 2012-07-01 2012-07-01 false Fault. 29.522 Section 29.522 Money... Overpayments § 29.522 Fault. (a) General rule. A debtor is considered to be at fault if he or she, or any other... requirement. (3) The following factors may affect the decision as to whether the debtor is or is not at...

  17. 31 CFR 29.522 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 31 Money and Finance: Treasury 1 2011-07-01 2011-07-01 false Fault. 29.522 Section 29.522 Money... Overpayments § 29.522 Fault. (a) General rule. A debtor is considered to be at fault if he or she, or any other... requirement. (3) The following factors may affect the decision as to whether the debtor is or is not at...

  18. 31 CFR 29.522 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance: Treasury 1 2014-07-01 2014-07-01 false Fault. 29.522 Section 29.522 Money... Overpayments § 29.522 Fault. (a) General rule. A debtor is considered to be at fault if he or she, or any other... requirement. (3) The following factors may affect the decision as to whether the debtor is or is not at...

  19. 31 CFR 29.522 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance: Treasury 1 2013-07-01 2013-07-01 false Fault. 29.522 Section 29.522 Money... Overpayments § 29.522 Fault. (a) General rule. A debtor is considered to be at fault if he or she, or any other... requirement. (3) The following factors may affect the decision as to whether the debtor is or is not at...

  20. Fault system polarity: A matter of chance?

    NASA Astrophysics Data System (ADS)

    Schöpfer, Martin; Childs, Conrad; Manzocchi, Tom; Walsh, John; Nicol, Andy; Grasemann, Bernhard

    2015-04-01

    Many normal fault systems and, on a smaller scale, fracture boudinage exhibit asymmetry so that one fault dip direction dominates. The fraction of throw (or heave) accommodated by faults with the same dip direction in relation to the total fault system throw (or heave) is a quantitative measure of fault system asymmetry and termed 'polarity'. It is a common belief that the formation of domino and shear band boudinage with a monoclinic symmetry requires a component of layer parallel shearing, whereas torn boudins reflect coaxial flow. Moreover, domains of parallel faults are frequently used to infer the presence of a common décollement. Here we show, using Distinct Element Method (DEM) models in which rock is represented by an assemblage of bonded circular particles, that asymmetric fault systems can emerge under symmetric boundary conditions. The pre-requisite for the development of domains of parallel faults is however that the medium surrounding the brittle layer has a very low strength. We demonstrate that, if the 'competence' contrast between the brittle layer and the surrounding material ('jacket', or 'matrix') is high, the fault dip directions and hence fault system polarity can be explained using a random process. The results imply that domains of parallel faults are, for the conditions and properties used in our models, in fact a matter of chance. Our models suggest that domino and shear band boudinage can be an unreliable shear-sense indicator. Moreover, the presence of a décollement should not be inferred on the basis of a domain of parallel faults only.
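    The polarity measure defined above reduces to a throw-weighted fraction over the dominant dip direction. A minimal sketch with invented fault data:

```python
def fault_system_polarity(throws, dips_east):
    """Polarity = fraction of total throw accommodated by faults sharing the
    dominant dip direction. 0.5 is a symmetric system; 1.0 means all faults
    dip the same way (a domain of parallel faults)."""
    east = sum(t for t, e in zip(throws, dips_east) if e)
    frac = east / sum(throws)
    return max(frac, 1.0 - frac)

# Illustrative throws (m) and dip directions for a five-fault system.
polarity = fault_system_polarity([120, 80, 40, 95, 15],
                                 [True, True, False, True, False])
```

    Under the paper's random-process interpretation, repeating this calculation over many randomly assigned dip directions yields the distribution of polarities expected by chance.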

  1. Stability of fault during fluid injection

    NASA Astrophysics Data System (ADS)

    Passelegue, Francois; Brantut, Nicolas; Mitchell, Tom

    2017-04-01

    Elevated pore pressure can lead to slip reactivation on pre-existing fractures and faults when the Coulomb failure point is reached. From a static point of view, the reactivation of a fault subjected to a background stress (τ0) is a function of the peak strength of the fault, i.e. the quasi-static effective friction coefficient (µeff). In this study, we present new results on the influence of the injection rate on the stability of faults. Experiments were conducted on a saw-cut sample of Westerly granite. The experimental fault was 8 cm in length. Injections were conducted through a 2 mm diameter hole reaching the fault surface. Experiments were conducted at fluid pressure injection rates spanning four orders of magnitude (from 1 MPa/minute to 1 GPa/minute), in a fault system subjected to 50 and 100 MPa confining pressure. Our results show that the peak fluid pressure leading to slip depends on injection rate. The faster the injection rate, the larger the peak fluid pressure leading to instability. Our results suggest that the stability of the fault is not only a function of the fluid pressure required to reach the failure criterion, but is mainly a function of the ratio between the length of the fault affected by fluid pressure and the total fault length. In addition, we show that the slip rate increases with the background effective stress and with the intensity of the fluid pressure perturbation, i.e. with the excess shear stress acting on the part of the fault perturbed by fluid injection. Our results suggest that crustal faults can be reactivated by fluid pressures that are locally much higher than expected from a static Coulomb stress analysis. These results could explain the 'large' magnitude human-induced earthquakes recently observed in Basel (Mw 3.6, 2006) and in Oklahoma (Mw 5.6, 2016).
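    The static baseline the experiments test against is the Coulomb criterion τ0 = µeff(σn − p); solving for p gives the pore pressure at which a uniformly pressurized fault should reactivate. A minimal sketch of that baseline, with illustrative values rather than the experimental conditions:

```python
def critical_fluid_pressure(tau0, sigma_n, mu_eff):
    """Pore pressure at which a fault carrying shear stress tau0 under normal
    stress sigma_n reaches Coulomb failure: tau0 = mu_eff * (sigma_n - p)."""
    return sigma_n - tau0 / mu_eff

# Illustrative values (MPa): the static prediction, independent of injection rate.
p_crit = critical_fluid_pressure(tau0=30.0, sigma_n=100.0, mu_eff=0.6)
```

    The study's point is that this static estimate is rate-independent, whereas the observed peak pressures exceed it increasingly at faster injection rates, because only part of the fault is pressurized when slip nucleates.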

  2. Fault seal analysis: Methodology and case studies

    SciTech Connect

    Badley, M.E.; Freeman, B.; Needham, D.T.

    1996-01-01

    Fault seal can arise from reservoir/non-reservoir juxtaposition or by development of fault rock of high entry-pressure. The methodology for evaluating these possibilities uses detailed seismic mapping and well analysis. A 'first-order' seal analysis involves identifying reservoir juxtaposition areas over the fault surface, using the mapped horizons and a refined reservoir stratigraphy defined by isochores at the fault surface. The 'second-order' phase of the analysis assesses whether the sand-sand contacts are likely to support a pressure difference. We define two lithology-dependent attributes, 'Gouge Ratio' and 'Smear Factor'. Gouge Ratio is an estimate of the proportion of fine-grained material entrained into the fault gouge from the wall rocks. Smear Factor methods estimate the profile thickness of a ductile shale drawn along the fault zone during faulting. Both of these parameters vary over the fault surface, implying that faults cannot simply be designated 'sealing' or 'non-sealing'. An important step in using these parameters is to calibrate them in areas where across-fault pressure differences are explicitly known from wells on both sides of a fault. Our calibration for a number of datasets shows remarkably consistent results despite their diverse settings (e.g. Brent Province, Niger Delta, Columbus Basin). For example, a Shale Gouge Ratio of c. 20% (volume of shale in the slipped interval) is a typical threshold between minimal across-fault pressure difference and significant seal.
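    Shale Gouge Ratio as defined here (volume of shale in the slipped interval) can be computed directly from a layered Vshale profile at a point on the fault. A minimal sketch with a hypothetical interval; real workflows evaluate this over the whole mapped fault surface:

```python
def shale_gouge_ratio(vshale, thicknesses, throw):
    """Shale Gouge Ratio (%) at a point on a fault: the thickness-weighted
    shale fraction of the interval (total thickness = throw) that has
    slipped past that point."""
    assert abs(sum(thicknesses) - throw) < 1e-9
    return 100.0 * sum(v * t for v, t in zip(vshale, thicknesses)) / throw

# Hypothetical slipped interval: 100 m of throw over three beds.
sgr = shale_gouge_ratio(vshale=[0.1, 0.8, 0.3],
                        thicknesses=[40.0, 30.0, 30.0],
                        throw=100.0)

# Against the calibration quoted in the abstract, an SGR above ~20% would be
# taken as capable of supporting a significant across-fault pressure difference.
seals = sgr > 20.0
```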

  3. Seismicity and fault interaction, Southern San Jacinto Fault Zone and adjacent faults, southern California: Implications for seismic hazard

    NASA Astrophysics Data System (ADS)

    Petersen, Mark D.; Seeber, Leonardo; Sykes, Lynn R.; Nábělek, John L.; Armbruster, John G.; Pacheco, Javier; Hudnut, Kenneth W.

    1991-12-01

    The southern San Jacinto fault zone is characterized by high seismicity and a complex fault pattern that offers an excellent setting for investigating interactions between distinct faults. This fault zone is roughly outlined by two subparallel master fault strands, the Coyote Creek and Clark-San Felipe Hills faults, that are located 2 to 10 km apart and are intersected by a series of secondary cross faults. Seismicity is intense on both master faults and secondary cross faults in the southern San Jacinto fault zone. The seismicity on the two master strands occurs primarily below 10 km; the upper 10 km of the master faults are now mostly quiescent and appear to rupture mainly or solely in large earthquakes. Our results also indicate that a considerable portion of recent background activity near the April 9, 1968, Borrego Mountain rupture zone (ML=6.4) is located on secondary faults outside the fault zone. We name and describe the Palm Wash fault, a very active secondary structure located about 25 km northeast of Borrego Mountain that is oriented subparallel to the San Jacinto fault system, dips approximately 70° to the northeast, and accommodates right-lateral shear motion. The Vallecito Mountain cluster is another secondary feature delineated by the recent seismicity and is characterized by swarming activity prior to nearby large events on the master strand. The 1968 Borrego Mountain and the April 28, 1969, Coyote Mountain (ML=5.8) events are examples of earthquakes with aftershocks and subevents on these secondary and master faults. Mechanisms from those earthquakes and recent seismic data for the period 1981 to 1986 are not simply restricted to strike-slip motion; dip-slip motion is also indicated. Teleseismic body waves (long-period P and SH) of the 1968 and 1969 earthquakes were inverted simultaneously for source mechanism, seismic moment, rupture history, and centroid depth. The complicated waveforms of the 1968 event (Mo=1.2 × 10^19 N m) are interpreted in

  4. Estimating the distribution of fault latency in a digital processor

    NASA Technical Reports Server (NTRS)

    Ellis, Erik L.; Butler, Ricky W.

    1987-01-01

    Presented is a statistical approach to measuring fault latency in a digital processor. The method relies on the use of physical fault injection where the duration of the fault injection can be controlled. Although a specific fault's latency period is never directly measured, the method indirectly determines the distribution of fault latency.
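The indirect measurement can be sketched as follows: if a fault is injected for a controlled duration d, the fraction of injections that produce an observable error estimates the latency CDF F(d) = P(latency ≤ d). This is a simplified reading of the method; the synthetic "processor" and its uniform latency are hypothetical stand-ins.

```python
# Estimating a fault-latency distribution from controlled-duration
# injections: detection fraction at duration d estimates F(d).
import random

def estimate_latency_cdf(durations, trials, inject):
    """inject(d) -> True if the injected fault manifested within duration d.
    Returns {d: estimated F(d)} from `trials` injections per duration."""
    return {d: sum(inject(d) for _ in range(trials)) / trials
            for d in durations}

# Synthetic stand-in: true latency uniform on [0, 100) cycles.
random.seed(0)
def fake_injection(duration):
    return random.uniform(0.0, 100.0) <= duration

cdf = estimate_latency_cdf([25, 50, 75, 100], trials=2000, inject=fake_injection)
print(cdf[25], cdf[100])  # cdf[25] near 0.25; cdf[100] equals 1.0
```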

  5. Active faults in southeastern Harris County, Texas

    NASA Technical Reports Server (NTRS)

    Clanton, U. S.; Amsbury, D. L.

    1975-01-01

    Aerial color infrared photography was used to investigate active faults in a complex graben in southeastern Harris County, Tex. The graben extends east-west across an oil field and an interstate highway through Ellington Air Force Base (EAFB), into the Clear Lake oil field and on to LaPorte, Tex. It was shown that the fault pattern at EAFB indicates an appreciable horizontal component associated with the failure of buildings, streets, and runways. Another fault system appears to control the shoreline configuration of Clear Lake, with some of the faults associated with tectonic movements and the production of oil and gas, but many related to extensive ground water withdrawal.

  6. Sequential Test Strategies for Multiple Fault Isolation

    NASA Technical Reports Server (NTRS)

    Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.

    1997-01-01

    In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
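The information-theoretic heuristic behind such test sequencing can be illustrated with a greedy step: pick the next test that minimizes the expected posterior entropy over the fault hypotheses. This sketch simplifies the paper's setting to single-fault hypotheses with perfect pass/fail tests; the fault and test names are hypothetical.

```python
# Greedy, entropy-based next-test selection (single-fault simplification).
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_entropy(test, hypotheses):
    """Average posterior entropy over the test's pass/fail outcomes."""
    total = sum(hypotheses.values())
    h = 0.0
    for outcome in (True, False):
        group = [p for f, p in hypotheses.items() if test[f] == outcome]
        mass = sum(group)
        if mass > 0:
            h += (mass / total) * entropy([p / mass for p in group])
    return h

def best_test(tests, hypotheses):
    """Greedy step: the test with the lowest expected posterior entropy."""
    return min(tests, key=lambda name: expected_entropy(tests[name], hypotheses))

# Four equally likely faults; T1 isolates f1, T2 splits the set evenly.
priors = {"f1": 0.25, "f2": 0.25, "f3": 0.25, "f4": 0.25}
tests = {"T1": {"f1": True, "f2": False, "f3": False, "f4": False},
         "T2": {"f1": True, "f2": True, "f3": False, "f4": False}}
print(best_test(tests, priors))  # T2: the even split buys more information
```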

  7. Dating faults by quantifying shear heating

    NASA Astrophysics Data System (ADS)

    Maino, Matteo; Casini, Leonardo; Langone, Antonio; Oggiano, Giacomo; Seno, Silvio; Stuart, Finlay

    2017-04-01

    Dating brittle and brittle-ductile faults is crucial for developing seismic models and for understanding the geological evolution of a region. Improving the accuracy of geochronological approaches to absolute fault dating is, therefore, a key objective for the geological community. Direct dating of ancient faults may be attained by exploiting the thermal effects associated with deformation. Heat generated during faulting - i.e. shear heating - is perhaps the best signal that provides a link between time and activity of a fault. However, other mechanisms not instantaneously related to fault motion can generate heating (advection, upwelling of hot fluids), making it difficult to determine whether the thermal signal corresponds to the timing of fault movement. Recognizing the contribution of shear heating is a fundamental prerequisite for dating fault motion with thermochronometric techniques; therefore, a comprehensive thermal characterization of the fault zone is needed. Several methods have been proposed to assess radiometric ages of faulting from either newly grown crystals on fault gouges or surfaces (e.g. Ar/Ar dating), or thermochronometric resetting of existing minerals (e.g. zircon and apatite fission tracks). In this contribution we show two cases of brittle and brittle-ductile faulting: one shallow thrust from the SW Alps and one HT, pseudotachylyte-bearing fault zone in Sardinia. In both examples we applied a multidisciplinary approach that integrates field and micro-structural observations, petrographical characterization, geochemical and mineralogical analyses, fluid inclusion microthermometry and numerical modeling with thermochronometric dating of the two fault zones. We used zircon (U-Th)/He thermochronometry to estimate the temperatures experienced by the shallow Alpine thrust. The ZHe thermochronometer has a closure temperature (Tc) of 180°C. Consequently, it is ideally suited to dating large heat-producing faults that were

  8. Faults Discovery By Using Mined Data

    NASA Technical Reports Server (NTRS)

    Lee, Charles

    2005-01-01

    Fault discovery in complex systems draws on model-based reasoning, fault tree analysis, rule-based inference methods, and other approaches. Model-based reasoning builds models of the system either from mathematical formulations or from experimental models. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds its model from expert knowledge. These models and methods have one thing in common: they presume some prior conditions. Complex systems often use fault trees to analyze faults. Fault diagnosis, when an error occurs, is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on data fed back from the system, and decisions are made from threshold values by using fault trees. Since these decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time and to capture the contents of fault trees as the initial state of the trees.
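A decision tree over telemetry thresholds, of the kind contrasted above with hand-built fault trees, can be sketched minimally. The tree structure, telemetry channels, and thresholds below are hypothetical; in the paper's approach the tree would be mined from archived mission data rather than written by hand.

```python
# Minimal threshold-based decision tree for telemetry classification.
# Channels, thresholds, and labels are hypothetical examples.

def classify(tree, sample):
    """Walk a tree of (channel, threshold, low_branch, high_branch) tuples
    until a leaf (a string label) is reached."""
    while isinstance(tree, tuple):
        channel, threshold, low, high = tree
        tree = low if sample[channel] <= threshold else high
    return tree

# (channel, threshold, branch if <= threshold, branch if > threshold)
tree = ("bus_voltage", 24.0,
        ("battery_temp", 45.0, "undervoltage", "thermal-undervoltage"),
        "nominal")

print(classify(tree, {"bus_voltage": 28.1, "battery_temp": 30.0}))  # nominal
print(classify(tree, {"bus_voltage": 22.5, "battery_temp": 50.0}))
```

Because each internal node is just a channel/threshold pair, a mined tree of this shape can be evaluated on streaming telemetry in real time.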

  9. Block rotations, fault domains and crustal deformation

    NASA Technical Reports Server (NTRS)

    Nur, A.; Ron, H.

    1987-01-01

    Much of the earth's crust is broken by sets of parallel strike-slip faults which are organized in domains. A simple kinematic model suggests that when subject to tectonic strain, the faults, and the blocks bound by them, rotate. The rotation can be estimated from the structurally-determined fault slip and fault spacing, and independently from local deviations of paleomagnetic declinations from global values. A rigorous test of this model was carried out in northern Israel, where good agreement was found between the two rotations.
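The rotation estimate can be illustrated with a simplified rigid-block ("bookshelf") geometry, assuming the faults start perpendicular to the shear direction, so that cumulative slip s on faults spaced w apart implies a block rotation φ with s = w·tan φ. This is a hedged simplification of the kinematic model; the full model also tracks the evolving fault orientation, and the numbers below are hypothetical.

```python
# Simplified bookshelf-model rotation from fault slip and fault spacing.
# Assumes s = w * tan(phi); values are hypothetical.
import math

def rotation_from_slip(slip, spacing):
    """Block rotation (degrees) implied by cumulative fault slip and
    the spacing between parallel faults."""
    return math.degrees(math.atan2(slip, spacing))

# 0.5 km of cumulative slip on faults spaced 2 km apart:
phi = rotation_from_slip(0.5, 2.0)
print(round(phi, 1))  # ~14 degrees, the size of declination anomaly that
                      # paleomagnetic data could test independently
```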

  10. Diagnosing multiple faults in SSM/PMAD

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel

    1990-01-01

    Multiple fault diagnosis for SSM/PMAD (space station module/power management and distribution) using the knowledge management design system as applied to the SSM/PMAD domain (KNOMAD-SSM/PMAD) is discussed. KNOMAD-SSM/PMAD provides a powerful facility for knowledge representation and reasoning which has been used to build the second generation of FRAMES (fault recovery and management expert system). FRAMES now handles the diagnosis of multiple faults and provides support for a more powerful interface for user interaction during autonomous operation. There are two types of multiple fault diagnosis handled in FRAMES. The first diagnoses hard faults, soft faults, and incipient faults simultaneously. The second diagnoses multiple hard faults which occur in close proximity in time to one another. Multiple fault diagnosis in FRAMES is performed using a rule-based approach. This rule-based approach, enabled by the KNOMAD-SSM/PMAD system, has proven to be powerful. Levels of autonomy are discussed, focusing on the approach taken in FRAMES for providing at least three levels of autonomy: complete autonomy, partial autonomy, and complete manual mode.

  11. Chip level simulation of fault tolerant computers

    NASA Technical Reports Server (NTRS)

    Armstrong, J. R.

    1983-01-01

    Chip level modeling techniques, functional fault simulation, simulation software development, a more efficient, high level version of GSP, and a parallel architecture for functional simulation are discussed.

  12. Applications of Fault Detection in Vibrating Structures

    NASA Technical Reports Server (NTRS)

    Eure, Kenneth W.; Hogge, Edward; Quach, Cuong C.; Vazquez, Sixto L.; Russell, Andrew; Hill, Boyd L.

    2012-01-01

    Structural fault detection and identification remains an area of active research. Solutions to fault detection and identification may be based on subtle changes in the time-series history of vibration signals originating from various sensor locations throughout the structure. The purpose of this paper is to document the application of vibration-based fault detection methods to several structures. Overall, this paper demonstrates the utility of vibration-based methods for fault detection in a controlled laboratory setting and the limitations of applying the same methods to a similar structure during flight on an experimental subscale aircraft.

  13. Mantle fault zone beneath Kilauea Volcano, Hawaii.

    PubMed

    Wolfe, Cecily J; Okubo, Paul G; Shearer, Peter M

    2003-04-18

    Relocations and focal mechanism analyses of deep earthquakes (≥13 kilometers) at Kilauea volcano demonstrate that seismicity is focused on an active fault zone at 30-kilometer depth, with seaward slip on a low-angle plane, and other smaller, distinct fault zones. The earthquakes we have analyzed predominantly reflect tectonic faulting in the brittle lithosphere rather than magma movement associated with volcanic activity. The tectonic earthquakes may be induced on preexisting faults by stresses of magmatic origin, although background stresses from volcano loading and lithospheric flexure may also contribute.

  14. Mantle fault zone beneath Kilauea Volcano, Hawaii

    USGS Publications Warehouse

    Wolfe, C.J.; Okubo, P.G.; Shearer, P.M.

    2003-01-01

    Relocations and focal mechanism analyses of deep earthquakes (≥13 kilometers) at Kilauea volcano demonstrate that seismicity is focused on an active fault zone at 30-kilometer depth, with seaward slip on a low-angle plane, and other smaller, distinct fault zones. The earthquakes we have analyzed predominantly reflect tectonic faulting in the brittle lithosphere rather than magma movement associated with volcanic activity. The tectonic earthquakes may be induced on preexisting faults by stresses of magmatic origin, although background stresses from volcano loading and lithospheric flexure may also contribute.

  15. Late Quaternary faulting along the Death Valley-Furnace Creek fault system, California and Nevada

    USGS Publications Warehouse

    Brogan, George E.; Kellogg, Karl; Slemmons, D. Burton; Terhune, Christina L.

    1991-01-01

    The Death Valley-Furnace Creek fault system, in California and Nevada, has a variety of impressive late Quaternary neotectonic features that record a long history of recurrent earthquake-induced faulting. Although no neotectonic features of unequivocal historical age are known, paleoseismic features from multiple late Quaternary events of surface faulting are well developed throughout the length of the system. Comparison of scarp heights to amount of horizontal offset of stream channels and the relationships of both scarps and channels to the ages of different geomorphic surfaces demonstrate that Quaternary faulting along the northwest-trending Furnace Creek fault zone is predominantly right lateral, whereas that along the north-trending Death Valley fault zone is predominantly normal. These observations are compatible with tectonic models of Death Valley as a northwest-trending pull-apart basin. The largest late Quaternary scarps along the Furnace Creek fault zone, with vertical separation of late Pleistocene surfaces of as much as 64 m (meters), are in Fish Lake Valley. Despite the predominance of normal faulting along the Death Valley fault zone, vertical offset of late Pleistocene surfaces along the Death Valley fault zone apparently does not exceed about 15 m. Evidence for four to six separate late Holocene faulting events along the Furnace Creek fault zone and three or more late Holocene events along the Death Valley fault zone are indicated by rupturing of Q1B (about 200-2,000 years old) geomorphic surfaces. Probably the youngest neotectonic feature observed along the Death Valley-Furnace Creek fault system, possibly historic in age, is vegetation lineaments in southernmost Fish Lake Valley. Near-historic faulting in Death Valley, within several kilometers south of Furnace Creek Ranch, is represented by (1) a 2,000-year-old lake shoreline that is cut by sinuous scarps, and (2) a system of young scarps with free-faceted faces (representing several faulting

  16. Fault tolerant filtering and fault detection for quantum systems driven by fields in single photon states

    SciTech Connect

    Gao, Qing; Dong, Daoyi; Petersen, Ian R.; Rabitz, Herschel

    2016-06-15

    The purpose of this paper is to solve the fault tolerant filtering and fault detection problem for a class of open quantum systems driven by a continuous-mode bosonic input field in single photon states when the systems are subject to stochastic faults. Optimal estimates of both the system observables and the fault process are simultaneously calculated and characterized by a set of coupled recursive quantum stochastic differential equations.

  17. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1993-01-01

    Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.

  18. Fault rheology beyond frictional melting.

    PubMed

    Lavallée, Yan; Hirose, Takehiro; Kendrick, Jackie E; Hess, Kai-Uwe; Dingwell, Donald B

    2015-07-28

    During earthquakes, comminution and frictional heating both contribute to the dissipation of stored energy. With sufficient dissipative heating, melting processes can ensue, yielding the production of frictional melts or "pseudotachylytes." It is commonly assumed that the Newtonian viscosities of such melts control subsequent fault slip resistance. Rock melts, however, are viscoelastic bodies, and, at high strain rates, they exhibit evidence of a glass transition. Here, we present the results of high-velocity friction experiments on a well-characterized melt that demonstrate how slip in melt-bearing faults can be governed by brittle fragmentation phenomena encountered at the glass transition. Slip analysis using models that incorporate viscoelastic responses indicates that even in the presence of melt, slip persists in the solid state until sufficient heat is generated to reduce the viscosity and allow remobilization in the liquid state. Where a rock is present next to the melt, we note that wear of the crystalline wall rock by liquid fragmentation and agglutination also contributes to the brittle component of these experimentally generated pseudotachylytes. We conclude that in the case of pseudotachylyte generation during an earthquake, slip even beyond the onset of frictional melting is not controlled merely by viscosity but rather by an interplay of viscoelastic forces around the glass transition, which involves a response in the brittle/solid regime of these rock melts. We warn of the inadequacy of simple Newtonian viscous analyses and call for the application of more realistic rheological interpretation of pseudotachylyte-bearing fault systems in the evaluation and prediction of their slip dynamics.

  19. Acoustic fault injection tool (AFIT)

    NASA Astrophysics Data System (ADS)

    Schoess, Jeffrey N.

    1999-05-01

    On September 18, 1997, Honeywell Technology Center (HTC) successfully completed a three-week flight test of its rotor acoustic monitoring system (RAMS) at Patuxent River Flight Test Center. This flight test was the culmination of an ambitious 38-month proof-of-concept effort directed at demonstrating the feasibility of detecting crack propagation in helicopter rotor components. The program was funded as part of the U.S. Navy's Air Vehicle Diagnostic Systems (AVDS) program. Reductions in Navy maintenance budgets and available personnel have dictated the need to transition from time-based to 'condition-based' maintenance. Achieving this will require new enabling diagnostic technologies. The application of acoustic emission for the early detection of helicopter rotor head dynamic component faults has proven the feasibility of the technology. The flight-test results demonstrated that stress-wave acoustic emission technology can detect signals equivalent to small fatigue cracks in rotor head components and can do so across the rotating articulated rotor head joints and in the presence of other background acoustic noise generated during flight operation. During the RAMS flight test, 12 test flights were flown from which 25 Gbyte of digital acoustic data and about 15 hours of analog flight data recorder (FDR) data were collected from the eight on-rotor acoustic sensors. The focus of this paper is to describe the CH-46 flight-test configuration and present design details about a new innovative machinery diagnostic technology called acoustic fault injection. This technology involves the injection of acoustic sound into machinery to assess health and characterize operational status. The paper will also address the development of the Acoustic Fault Injection Tool (AFIT), which was successfully demonstrated during the CH-46 flight tests.

  20. Fault rheology beyond frictional melting

    PubMed Central

    Lavallée, Yan; Hirose, Takehiro; Kendrick, Jackie E.; Hess, Kai-Uwe; Dingwell, Donald B.

    2015-01-01

    During earthquakes, comminution and frictional heating both contribute to the dissipation of stored energy. With sufficient dissipative heating, melting processes can ensue, yielding the production of frictional melts or “pseudotachylytes.” It is commonly assumed that the Newtonian viscosities of such melts control subsequent fault slip resistance. Rock melts, however, are viscoelastic bodies, and, at high strain rates, they exhibit evidence of a glass transition. Here, we present the results of high-velocity friction experiments on a well-characterized melt that demonstrate how slip in melt-bearing faults can be governed by brittle fragmentation phenomena encountered at the glass transition. Slip analysis using models that incorporate viscoelastic responses indicates that even in the presence of melt, slip persists in the solid state until sufficient heat is generated to reduce the viscosity and allow remobilization in the liquid state. Where a rock is present next to the melt, we note that wear of the crystalline wall rock by liquid fragmentation and agglutination also contributes to the brittle component of these experimentally generated pseudotachylytes. We conclude that in the case of pseudotachylyte generation during an earthquake, slip even beyond the onset of frictional melting is not controlled merely by viscosity but rather by an interplay of viscoelastic forces around the glass transition, which involves a response in the brittle/solid regime of these rock melts. We warn of the inadequacy of simple Newtonian viscous analyses and call for the application of more realistic rheological interpretation of pseudotachylyte-bearing fault systems in the evaluation and prediction of their slip dynamics. PMID:26124123

  1. SUMC fault tolerant computer system

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The results of the trade studies are presented. These trades cover: establishing the basic configuration, establishing the CPU/memory configuration, establishing an approach to crosstrapping interfaces, defining the requirements of the redundancy management unit (RMU), establishing a spare-plane switching strategy for the fault-tolerant memory (FTM), and identifying the most cost-effective way of extending the memory addressing capability beyond the 64 K-bytes (K=1024) of SUMC-II B. The results of the design are compiled in the Contract End Item (CEI) Specification for the NASA Standard Spacecraft Computer II (NSSC-II), IBM 7934507. The implementation of the FTM and the memory address expansion are also described.

  2. On Identifiability of Bias-Type Actuator-Sensor Faults in Multiple-Model-Based Fault Detection and Identification

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh M.

    2012-01-01

    This paper explores a class of multiple-model-based fault detection and identification (FDI) methods for bias-type faults in actuators and sensors. These methods employ banks of Kalman-Bucy filters to detect the faults, determine the fault pattern, and estimate the fault values, wherein each Kalman-Bucy filter is tuned to a different failure pattern. Necessary and sufficient conditions are presented for identifiability of actuator faults, sensor faults, and simultaneous actuator and sensor faults. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have biases.
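The filter-bank idea can be sketched in discrete time: run one filter per bias hypothesis and select the hypothesis whose filter yields the smallest variance-normalized innovation energy. This is a scalar, discrete-time analogue of the Kalman-Bucy banks in the paper; the plant model, noise levels, and the +0.5 bias are hypothetical.

```python
# Bank of scalar Kalman filters, each matched to one sensor-bias hypothesis.
import random

def run_filter(measurements, sensor_bias, a=0.9, q=0.01, r=0.1):
    """Scalar Kalman filter for x[k+1] = a*x[k] + w, z[k] = x[k] + bias + v.
    Returns the summed squared innovations, normalized by their variance."""
    x, p, cost = 0.0, 1.0, 0.0
    for z in measurements:
        x, p = a * x, a * a * p + q       # predict
        s = p + r                         # innovation variance
        nu = z - (x + sensor_bias)        # innovation under this hypothesis
        cost += nu * nu / s
        k = p / s                         # Kalman gain and update
        x, p = x + k * nu, (1.0 - k) * p
    return cost

# Simulate a stable scalar plant whose sensor carries a +0.5 bias fault.
random.seed(1)
truth, z = 0.0, []
for _ in range(200):
    truth = 0.9 * truth + random.gauss(0.0, 0.1)
    z.append(truth + 0.5 + random.gauss(0.0, 0.3))

# One filter per hypothesis; the best-matched filter accrues the lowest cost.
costs = {b: run_filter(z, b) for b in (0.0, 0.5)}
print(min(costs, key=costs.get))  # the 0.5-bias hypothesis should win
```

The stable dynamics (a = 0.9) matter here: for a pure random walk a constant sensor bias is unobservable, which echoes the paper's point that some fault patterns are not identifiable from these filter banks.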

  3. Detection of CMOS bridging faults using minimal stuck-at fault test sets

    NASA Technical Reports Server (NTRS)

    Ijaz, Nabeel; Frenzel, James F.

    1993-01-01

    The performance of minimal stuck-at fault test sets at detecting bridging faults is evaluated. New functional models of circuit primitives are presented which allow accurate representation of bridging faults under switch-level simulation. The effectiveness of the patterns is evaluated using both voltage and current testing.

  4. Transform fault earthquakes in the North Atlantic: Source mechanisms and depth of faulting

    NASA Technical Reports Server (NTRS)

    Bergman, Eric A.; Solomon, Sean C.

    1987-01-01

    The centroid depths and source mechanisms of 12 large earthquakes on transform faults of the northern Mid-Atlantic Ridge were determined from an inversion of long-period body waveforms. The earthquakes occurred on the Gibbs, Oceanographer, Hayes, Kane, 15 deg 20 min, and Vema transforms. The depth extent of faulting during each earthquake was estimated from the centroid depth and the fault width. The source mechanisms for all events in this study display the strike slip motion expected for transform fault earthquakes; slip vector azimuths agree to 2 to 3 deg of the local strike of the zone of active faulting. The only anomalies in mechanism were for two earthquakes near the western end of the Vema transform which occurred on significantly nonvertical fault planes. Secondary faulting, occurring either precursory to or near the end of the main episode of strike-slip rupture, was observed for 5 of the 12 earthquakes. For three events the secondary faulting was characterized by reverse motion on fault planes striking oblique to the trend of the transform. In all three cases, the site of secondary reverse faulting is near a compression jog in the current trace of the active transform fault zone. No evidence was found to support the conclusions of Engeln, Wiens, and Stein that oceanic transform faults in general are either hotter than expected from current thermal models or weaker than normal oceanic lithosphere.

  5. A novel KFCM based fault diagnosis method for unknown faults in satellite reaction wheels.

    PubMed

    Hu, Di; Sarosh, Ali; Dong, Yun-Feng

    2012-03-01

    Reaction wheels are among the most critical components of the satellite attitude control system, so correct diagnosis of their faults is essential for efficient operation of these spacecraft. Known faults in any of the subsystems are often diagnosed by supervised learning algorithms; however, this approach fails when a new or unknown fault occurs. In such cases an unsupervised learning algorithm becomes essential for obtaining the correct diagnosis. Kernel Fuzzy C-Means (KFCM) is one such unsupervised algorithm, although it has its own limitations; in this paper a novel method is proposed for conditioning the KFCM method (C-KFCM) so that it can be used effectively for diagnosis of both known and unknown faults, as in satellite reaction wheels. The C-KFCM approach involves determining exact class centers from the data of known faults, so that a discrete number of fault classes is fixed at the start. Similarity parameters are then derived for each fault data point, and each data point is assigned a class label according to a similarity threshold. High-similarity points fall into one of the 'known-fault' classes, while low-similarity points are labeled as 'unknown-faults'. Simulation results show that, compared to a supervised algorithm such as a neural network, the C-KFCM method can effectively cluster historical fault data (as from reaction wheels) and diagnose the faults to an accuracy of more than 91%.
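The labeling step described above can be sketched with a kernel similarity to known-fault class centers and a threshold for the 'unknown-fault' label. The centers, RBF bandwidth, threshold, and class names are hypothetical; C-KFCM derives the centers from historical fault data rather than fixing them by hand.

```python
# Kernel-similarity labeling with an "unknown-fault" fallback.
import math

def rbf_similarity(x, center, sigma=1.0):
    """Gaussian (RBF) kernel similarity between a point and a class center."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, center))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def label(x, centers, threshold=0.5):
    """Assign the most similar known-fault class, else 'unknown-fault'."""
    best = max(centers, key=lambda c: rbf_similarity(x, centers[c]))
    return best if rbf_similarity(x, centers[best]) >= threshold else "unknown-fault"

# Two hypothetical known-fault classes in a 2-D feature space.
centers = {"friction-fault": (1.0, 0.0), "voltage-fault": (0.0, 1.0)}
print(label((0.9, 0.1), centers))   # near a known center -> friction-fault
print(label((5.0, 5.0), centers))   # far from all centers -> unknown-fault
```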

  6. Transform fault earthquakes in the North Atlantic - Source mechanisms and depth of faulting

    NASA Astrophysics Data System (ADS)

    Bergman, Eric A.; Solomon, Sean C.

    1988-08-01

    The centroid depths and source mechanisms of 12 large earthquakes on transform faults of the northern Mid-Atlantic Ridge were determined from an inversion of long-period body waveforms. The earthquakes occurred on the Gibbs, Oceanographer, Hayes, Kane, 15 deg 20 min, and Vema transforms. The depth extent of faulting during each earthquake was estimated from the centroid depth and the fault width. The source mechanisms for all events in this study display the strike slip motion expected for transform fault earthquakes; slip vector azimuths agree to 2 to 3 deg of the local strike of the zone of active faulting. The only anomalies in mechanism were for two earthquakes near the western end of the Vema transform which occurred on significantly nonvertical fault planes. Secondary faulting, occurring either precursory to or near the end of the main episode of strike-slip rupture, was observed for 5 of the 12 earthquakes. For three events the secondary faulting was characterized by reverse motion on fault planes striking oblique to the trend of the transform. In all three cases, the site of secondary reverse faulting is near a compression jog in the current trace of the active transform fault zone. No evidence was found to support the conclusions of Engeln, Wiens, and Stein that oceanic transform faults in general are either hotter than expected from current thermal models or weaker than normal oceanic lithosphere.

  7. Transform fault earthquakes in the North Atlantic: Source mechanisms and depth of faulting

    NASA Astrophysics Data System (ADS)

    Bergman, Eric A.; Solomon, Sean C.

    1987-11-01

    The centroid depths and source mechanisms of 12 large earthquakes on transform faults of the northern Mid-Atlantic Ridge were determined from an inversion of long-period body waveforms. The earthquakes occurred on the Gibbs, Oceanographer, Hayes, Kane, 15 deg 20 min, and Vema transforms. The depth extent of faulting during each earthquake was estimated from the centroid depth and the fault width. The source mechanisms for all events in this study display the strike slip motion expected for transform fault earthquakes; slip vector azimuths agree to 2 to 3 deg of the local strike of the zone of active faulting. The only anomalies in mechanism were for two earthquakes near the western end of the Vema transform which occurred on significantly nonvertical fault planes. Secondary faulting, occurring either precursory to or near the end of the main episode of strike-slip rupture, was observed for 5 of the 12 earthquakes. For three events the secondary faulting was characterized by reverse motion on fault planes striking oblique to the trend of the transform. In all three cases, the site of secondary reverse faulting is near a compression jog in the current trace of the active transform fault zone. No evidence was found to support the conclusions of Engeln, Wiens, and Stein that oceanic transform faults in general are either hotter than expected from current thermal models or weaker than normal oceanic lithosphere.

  8. Transform fault earthquakes in the North Atlantic - Source mechanisms and depth of faulting

    NASA Technical Reports Server (NTRS)

    Bergman, Eric A.; Solomon, Sean C.

    1988-01-01

    The centroid depths and source mechanisms of 12 large earthquakes on transform faults of the northern Mid-Atlantic Ridge were determined from an inversion of long-period body waveforms. The earthquakes occurred on the Gibbs, Oceanographer, Hayes, Kane, 15 deg 20 min, and Vema transforms. The depth extent of faulting during each earthquake was estimated from the centroid depth and the fault width. The source mechanisms for all events in this study display the strike slip motion expected for transform fault earthquakes; slip vector azimuths agree to 2 to 3 deg of the local strike of the zone of active faulting. The only anomalies in mechanism were for two earthquakes near the western end of the Vema transform which occurred on significantly nonvertical fault planes. Secondary faulting, occurring either precursory to or near the end of the main episode of strike-slip rupture, was observed for 5 of the 12 earthquakes. For three events the secondary faulting was characterized by reverse motion on fault planes striking oblique to the trend of the transform. In all three cases, the site of secondary reverse faulting is near a compression jog in the current trace of the active transform fault zone. No evidence was found to support the conclusions of Engeln, Wiens, and Stein that oceanic transform faults in general are either hotter than expected from current thermal models or weaker than normal oceanic lithosphere.

  10. Pseudo-fault signal assisted EMD for fault detection and isolation in rotating machines

    NASA Astrophysics Data System (ADS)

    Singh, Dheeraj Sharan; Zhao, Qing

    2016-12-01

    This paper presents a novel data-driven technique for the detection and isolation of faults that generate impacts in rotating equipment. The technique is built upon the principles of empirical mode decomposition (EMD), envelope analysis, and a pseudo-fault signal for fault separation. First, the most dominant intrinsic mode function (IMF) is identified using EMD of the raw signal; this IMF contains all the necessary information about the faults. The envelope of this IMF is often modulated by multiple vibration sources and noise. A second-level decomposition is performed by applying pseudo-fault-signal (PFS) assisted EMD to the envelope. A pseudo-fault signal is constructed from the known fault characteristic frequency of the particular machine. The objective of using this external (pseudo-fault) signal is to isolate the different fault frequencies present in the envelope. The pseudo-fault signal serves dual purposes: (i) it solves the mode-mixing problem inherent in EMD, and (ii) it isolates and quantifies a particular fault frequency component. The proposed technique is suitable for real-time implementation and has been validated on simulated fault data and on experimental data from a bearing and a gear-box set-up, respectively.
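
    The envelope-analysis step that the technique above builds on can be sketched in a few lines. This is a generic illustration, not the authors' PFS-assisted EMD: the sampling rate, fault frequency, and signal model below are invented for the demonstration.

```python
import numpy as np
from scipy.signal import hilbert

fs = 10_000                          # sampling rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)
fault_freq = 37.0                    # hypothetical fault characteristic frequency, Hz

# Simulate impacts at the fault frequency, each ringing a 2 kHz structural
# resonance: a classic amplitude-modulated bearing-fault signature.
impacts = (np.sin(2 * np.pi * fault_freq * t) > 0.999).astype(float)
ringing = np.convolve(impacts, np.exp(-np.arange(200) / 20.0), mode="same")
signal = ringing * np.sin(2 * np.pi * 2000.0 * t)
signal += 0.05 * np.random.default_rng(0).standard_normal(t.size)

# Envelope analysis: the magnitude of the analytic signal recovers the
# impact train; its spectrum peaks at the fault characteristic frequency.
envelope = np.abs(hilbert(signal))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

band = (freqs > 5.0) & (freqs < 200.0)
detected = freqs[band][np.argmax(spectrum[band])]
print(detected)                      # peak near the 37 Hz fault frequency
```

    The demodulation matters because the impacts themselves are buried under the high-frequency carrier; only the envelope spectrum exposes their repetition rate.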

  11. A case of casing deformation and fault slip in active-fault drilling

    NASA Astrophysics Data System (ADS)

    Ge, H.; Song, L.; Yuan, S.; Yang, W.

    2010-12-01

    An active fault is normally defined as a fault with displacement or seismic activity during the geologically recent past (the last 10,000 years, per the USGS). Here, we use the term for a fault that is undergoing post-seismic stress modification or recovery. Micro-seismicity and fault slip can occur during the recovery of such active faults. Drilling through an active fault, as in the Wenchuan Fault Scientific Drilling (WFSD) project, may therefore be accompanied by wellbore instability and casing deformation, which is noteworthy for fault scientific drilling. This presentation gives a field case from the Wenchuan earthquake. The great Wenchuan earthquake occurred on May 12, 2008. An oilfield lies 400 km from the epicenter and 260 km from the main fault. Many wells there had been drilled or were being drilled; some penetrate the active fault, and several tectonically active phenomena were observed. For instance, a drill pipe was cut off in a well that had just been drilled through the fault. We conclude that this was due to fault slip; otherwise, such a thick-walled pipe could not have been cut off. At the same time, a large number of well casings in the oilfield deformed during the great Wenchuan earthquake. Analysis of the casing deformation characteristics, formation structure, seismicity, and tectonic stress variation suggests that the casing deformation is closely related to the Wenchuan earthquake. It is the tectonic stress variation that induces seismic activity, fault slip, speedup of salt/gypsum creep, and differential deformation between strata. Additional earthquake dynamic loads were exerted on the casing and caused its deformation. Active-fault scientific drilling has become an important tool for understanding earthquake mechanisms and physics. The casing deformation and wellbore instability are not only a consequence of the earthquake but also an indicator of stress modification and fault activity. It is noteworthy that tectonic stress variation and fault

  12. Numerical simulation of spontaneous rupture processes on two non-coplanar faults: the effect of geometry on fault interaction

    NASA Astrophysics Data System (ADS)

    Kase, Yuko; Kuge, Keiko

    1998-12-01

    Analyses of earthquake sources have revealed that the earthquake rupture process is complex and that the rupture does not occur on a single plane. Earthquake faults are often composed of several subfaults, and rupture propagation tends to decelerate or terminate at places where the fault strike changes. These observations imply that fault geometry, including fault steps and changes of fault strike, plays an important role in earthquake rupture complexity. In this paper, we calculate the spontaneous rupture processes of two non-coplanar faults in 2-D in-plane problems, attempting to clarify the effect of fault geometry. We consider two simple models, in which the two faults are either parallel or perpendicular to each other. We calculate spontaneous rupture propagation on the faults by a finite difference method, and we then compare the results. In our simulations, rupture initially grows on the main fault, and stress perturbation from the main rupture then triggers rupture on the secondary fault. Propagation of the main-fault rupture controls a spatio-temporal pattern of stress difference in the uniform elastic medium, which determines the rupture process of the secondary fault. The rupture propagation and termination of the secondary fault are significantly different between the two models. The difference is obvious when rupture of the main fault is arrested and the secondary fault is located near the arrested end of the main fault. When the secondary fault is parallel to the main fault, rupture can propagate ahead on the secondary fault. However, when the secondary fault is perpendicular to the main fault, rupture is either not triggered on the secondary fault, or soon terminates if triggered. This variation of the rupture process implies that fault interaction, depending on geometry, can explain the termination and change of rupture at places where the fault strike varies. This shows the importance of the fault geometry in studying spontaneous dynamic rupture

  13. Seismic images and fault relations of the Santa Monica thrust fault, West Los Angeles, California

    USGS Publications Warehouse

    Catchings, R.D.; Gandhok, G.; Goldman, M.R.; Okaya, D.

    2001-01-01

    In May 1997, the US Geological Survey (USGS) and the University of Southern California (USC) acquired high-resolution seismic reflection and refraction images on the grounds of the Wadsworth Veterans Administration Hospital (WVAH) in the city of Los Angeles (Fig. 1a,b). The objective of the seismic survey was to better understand the near-surface geometry and faulting characteristics of the Santa Monica fault zone. In this report, we present seismic images, an interpretation of those images, and a comparison of our results with results from studies by Dolan and Pratt (1997), Pratt et al. (1998) and Gibbs et al. (2000). The Santa Monica fault is one of several northeast-southwest-trending, north-dipping, reverse faults that extend through the Los Angeles metropolitan area (Fig. 1a). Through much of the area, the Santa Monica fault trends subparallel to the Hollywood fault, but the two faults apparently join into a single fault zone to the southwest and to the northeast (Dolan et al., 1995). The Santa Monica and Hollywood faults may be part of a larger fault system that extends from the Pacific Ocean to the Transverse Ranges. Crook et al. (1983) refer to this fault system as the Malibu Coast-Santa Monica-Raymond-Cucamonga fault system. They suggest that these faults have not formed a contiguous zone since the Pleistocene and conclude that each of the faults should be treated as a separate fault with respect to seismic hazards. However, Dolan et al. (1995) suggest that the Hollywood and Santa Monica faults are capable of generating Mw 6.8 and Mw 7.0 earthquakes, respectively. Thus, regardless of whether the overall fault system is connected and capable of rupturing in one event, each fault individually presents a sizable earthquake hazard to the Los Angeles metropolitan area. If, however, these faults are connected and were to rupture along a continuous fault rupture, the resulting hazard would be even greater.
Although the Santa Monica fault represents

  14. Fault Management Techniques in Human Spaceflight Operations

    NASA Technical Reports Server (NTRS)

    O'Hagan, Brian; Crocker, Alan

    2006-01-01

    This paper discusses human spaceflight fault management operations. Fault detection and response capabilities available in the current US human spaceflight programs, Space Shuttle and International Space Station, are described, with emphasis on how system design impacts operational techniques and constraints. Preflight and inflight processes, along with the products used to anticipate, mitigate, and respond to failures, are introduced. Examples of operational products used to support failure responses are presented. Possible improvements in the state of the art, as well as prioritization and success criteria for their implementation, are proposed. This paper describes how the architecture of a command and control system impacts operations in areas such as required fault response times, automated vs. manual fault responses, and the use of workarounds. The architecture includes the use of redundancy at the system and software function level, software capabilities, the use of intelligent or autonomous systems, and the number and severity of software defects. This in turn drives which Caution and Warning (C&W) events should be annunciated, C&W event classification, operator display designs, crew training, flight control team training, and procedure development. Other factors impacting operations are the complexity of a system, the skills needed to understand and operate it, and the use of commonality vs. optimized solutions for software and responses. Fault detection, annunciation, safing responses, and recovery capabilities are explored using real examples to uncover underlying philosophies and constraints. These factors directly impact operations in that the crew and flight control team need to understand what happened, why it happened, what the system is doing, and what, if any, corrective actions they need to perform.
If a fault results in multiple C&W events, or if several faults occur simultaneously, the root cause(s) of the fault(s), as well as their vehicle-wide impacts, must be

  15. 1906 Meishan earthquake revisit: thrust faulting mechanism?

    NASA Astrophysics Data System (ADS)

    Liao, Y.; Hsieh, M. C.; Ma, K. F.

    2016-12-01

    The 1906 Meishan earthquake (M7.1) was one of the most damaging earthquakes in Taiwan in the early 20th century. Historical literature and recent studies show that the earthquake was related to the Meishan Fault and had a right-lateral faulting mechanism striking east-west. Using the historical Omori records at stations Taipei, Taichung, and Tainan, we carried out a waveform simulation of the 1906 Meishan earthquake to understand its source rupture properties and to improve ground-motion prediction in the region. A two-step waveform simulation based on SGTs (strain Green's tensors) was carried out for this purpose. In the first step, possible fault models of the 1906 Meishan earthquake from geological surveys and recent studies were compiled for simulation. As a preliminary result, an east-west-striking mechanism, as expected for the Meishan fault, did not explain the synthetic waveforms and intensity maps well. We therefore carried out a grid search over focal mechanisms, fitting the first-motion and shear-wave polarities of the historical records and synthetics, to evaluate possible focal mechanisms. By comparing the simulated intensity distribution maps with the historical records, we suggest that the 1906 Meishan earthquake was associated with a north-south-striking thrust faulting mechanism. This might indicate that the Meishan fault is a transfer fault between two thrust faulting systems in the western coastal plain of Taiwan. The fault systems in western Taiwan might be primarily dominated by north-south-striking thrust faults, even though an east-west-striking surface rupture with a strike-slip mechanism was found after the Meishan earthquake.

  16. Estimating Fault Slip From Radar Interferograms

    NASA Astrophysics Data System (ADS)

    Parker, J. W.; Donnellan, A.; Glasscoe, M. T.; Stough, T.

    2016-12-01

    Radar interferograms can measure near-surface fault slip with sub-cm accuracy. Common analysis considers a repeat-pass interferogram from a single viewing angle, which maps fault slip projected onto the line of sight of the radar. Nonetheless, fault motion estimates are signed: whatever the mechanism of slip, one side of the fault moves relatively toward the radar and the other away. Line-of-sight slip estimates are compared (in some cases) with field observations of surface fracture projected into the same radar line-of-sight direction. Views from two sufficiently distinct directions allow separate estimates of vertical and horizontal motion but by necessity leave one component of slip undetermined. Viewing from more than two angles is rare but resolves all three components of fault slip. In contrast with field measurements of surface fractures, radar interferograms allow estimating the motion of cross-fault patches of tens of meters extent. Many such faults have a discernible shear-zone width, allowing a modest inversion for slip at depths down to tens of meters. This allows characterization of the near-surface slip deficit. Also, when there are multiple fractures across a fault zone, the interferogram detects the overall mean fault motion, while field measurements may discover only one strand among many. Algorithmic estimates using a uniform set of control parameters are applied to California faults, including artifacts of the El Mayor-Cucapah M7.2 2010 event and aftershocks, the La Habra M5.1 2014 event, and the South Napa M6.0 2014 event.
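
    The core geometry above (projecting ground motion onto a radar line of sight, then combining two views) can be illustrated with a small sketch. The LOS vectors and motion values below are hypothetical, not taken from any of the events named in the abstract.

```python
import numpy as np

# Illustrative LOS unit vectors (east, north, up components) for ascending
# and descending passes; not any specific mission geometry.
los_asc = np.array([-0.61, -0.11, 0.78])
los_desc = np.array([0.61, -0.11, 0.78])

true_motion = np.array([0.030, 0.0, -0.005])   # m: east, north, up

# Each interferogram measures only the projection of motion onto its LOS;
# the sign tells which side moved toward or away from the radar.
d_asc = los_asc @ true_motion
d_desc = los_desc @ true_motion

# Two views resolve two components; the third (here north, which is poorly
# constrained by near-polar orbits) must be assumed, e.g. zero.
G = np.array([[los_asc[0], los_asc[2]],
              [los_desc[0], los_desc[2]]])
east, up = np.linalg.solve(G, np.array([d_asc, d_desc]))
print(east, up)                                # recovers 0.030 m east, -0.005 m up
```

    A third, sufficiently distinct viewing direction would turn `G` into a full 3 x 3 system and remove the need to assume the north component.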

  17. Effects of Fault Displacement on Emplacement Drifts

    SciTech Connect

    F. Duan

    2000-04-25

    The purpose of this analysis is to evaluate the potential effects of fault displacement on emplacement drifts, including the drip shields and waste packages emplaced in them. The output from this analysis not only provides data for the evaluation of long-term drift stability but also supports the Engineered Barrier System (EBS) process model report (PMR) and the Disruptive Events Report currently under development. The primary scope of this analysis includes (1) examining fault displacement effects in terms of the induced stresses and displacements in the rock mass surrounding an emplacement drift and (2) predicting fault displacement effects on the drip shield and waste package. The magnitude of the fault displacement analyzed here bounds the mean fault displacement corresponding to an annual frequency of exceedance of 10^-5 adopted for the preclosure period of the repository, and it also supports the postclosure performance assessment. This analysis is performed following the development plan prepared for analyzing the effects of fault displacement on emplacement drifts (CRWMS M&O 2000). The analysis begins with the identification and preparation of requirements, criteria, and inputs. A literature survey on accommodating fault displacements in underground structures, such as buried oil and gas pipelines, is conducted. For a given fault displacement, the least favorable scenario in terms of the spatial relation of a fault to an emplacement drift is chosen, and the analysis is then performed analytically. Based on the results, conclusions are drawn regarding the effects and consequences of fault displacement on emplacement drifts. Specifically, the analysis discusses the loads that fault displacement can induce on emplacement drifts, drip shields, and/or waste packages during the postclosure period.

  18. Paleoseismicity of two historically quiescent faults in Australia: Implications for fault behavior in stable continental regions

    USGS Publications Warehouse

    Crone, A.J.; De Martini, P. M.; Machette, M.M.; Okumura, K.; Prescott, J.R.

    2003-01-01

    Paleoseismic studies of two historically aseismic Quaternary faults in Australia confirm that cratonic faults in stable continental regions (SCR) typically have a long-term behavior characterized by episodes of activity separated by quiescent intervals of at least 10,000 and commonly 100,000 years or more. Studies of the approximately 30-km-long Roopena fault in South Australia and the approximately 30-km-long Hyden fault in Western Australia document multiple Quaternary surface-faulting events that are unevenly spaced in time. The episodic clustering of events on cratonic SCR faults may be related to temporal fluctuations of fault-zone fluid pore pressures in a volume of strained crust. The long-term slip rate on cratonic SCR faults is extremely low, so the geomorphic expression of many cratonic SCR faults is subtle, and scarps may be difficult to detect because they are poorly preserved. Both the Roopena and Hyden faults are in areas of limited or no significant seismicity; these and other faults that we have studied indicate that many potentially hazardous SCR faults cannot be recognized solely on the basis of instrumental data or historical earthquakes. Although cratonic SCR faults may appear to be nonhazardous because they have been historically aseismic, those that are favorably oriented for movement in the current stress field can and have produced unexpected damaging earthquakes. Paleoseismic studies of modern and prehistoric SCR faulting events provide the basis for understanding of the long-term behavior of these faults and ultimately contribute to better seismic-hazard assessments.

  19. Fault Tolerant Homopolar Magnetic Bearings

    NASA Technical Reports Server (NTRS)

    Li, Ming-Hsiu; Palazzolo, Alan; Kenny, Andrew; Provenza, Andrew; Beach, Raymond; Kascak, Albert

    2003-01-01

    Magnetic suspensions (MS) satisfy the long-life and low-loss conditions demanded by satellite- and ISS-based flywheels used for Energy Storage and Attitude Control (ACESE) service. This paper summarizes the development of a novel MS that improves reliability via fault-tolerant operation. Specifically, flux coupling between the poles of a homopolar magnetic bearing is shown to deliver the desired forces even after termination of coil currents to a subset of failed poles. Linear, coordinate-decoupled force-voltage relations are also maintained before and after failure by bias linearization. Current distribution matrices (CDMs), which adjust the currents and fluxes following a pole-set failure, are determined for many faulted pole combinations. The CDMs and the system responses are obtained utilizing 1D magnetic circuit models with fringe and leakage factors derived from detailed 3D finite element field models. Reliability results are presented versus detection/correction delay time and individual power amplifier reliability for 4-, 6-, and 7-pole configurations. Reliability is shown for two success criteria: (a) no catcher bearing contact following pole failures, and (b) re-levitation off of the catcher bearings following pole failures. An advantage of the method presented over other redundant-operation approaches is a significantly reduced requirement for backup hardware such as additional actuators or power amplifiers.
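
    The idea behind a current distribution matrix (redistributing coil currents so the surviving poles still produce the commanded force) can be sketched with a toy linear force model. The 2 x 6 map and failure scenario below are hypothetical; the paper derives its CDMs from 1D magnetic circuit models with bias linearization.

```python
import numpy as np

# Toy linearized bearing: force = B @ currents, each of six poles contributing
# force along its own axis in proportion to its control current.
n_poles = 6
angles = np.arange(n_poles) * 2 * np.pi / n_poles
B = np.vstack([np.cos(angles), np.sin(angles)])      # 2 x 6 force/current map

f_cmd = np.array([10.0, -4.0])                       # commanded x, y force

# Nominal operation: minimum-norm current distribution via the pseudoinverse.
i_nominal = np.linalg.pinv(B) @ f_cmd

# Pole 2 fails: its coil current is terminated. A new current distribution
# over the surviving poles still delivers the commanded force.
healthy = [0, 1, 3, 4, 5]
i_fault = np.zeros(n_poles)
i_fault[healthy] = np.linalg.pinv(B[:, healthy]) @ f_cmd

print(B @ i_nominal, B @ i_fault)                    # both equal f_cmd
```

    Redundancy here is geometric: as long as the surviving pole axes still span the force plane, some current distribution reproduces the command, which is why no backup actuators are needed.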

  20. Illuminating Northern California's Active Faults

    NASA Astrophysics Data System (ADS)

    Prentice, Carol S.; Crosby, Christopher J.; Whitehill, Caroline S.; Arrowsmith, J. Ramón; Furlong, Kevin P.; Phillips, David A.

    2009-02-01

    Newly acquired light detection and ranging (lidar) topographic data provide a powerful community resource for the study of landforms associated with the plate boundary faults of northern California (Figure 1). In the spring of 2007, GeoEarthScope, a component of the EarthScope Facility construction project funded by the U.S. National Science Foundation, acquired approximately 2000 square kilometers of airborne lidar topographic data along major active fault zones of northern California. These data are now freely available in point cloud (x, y, z coordinate data for every laser return), digital elevation model (DEM), and KMZ (zipped Keyhole Markup Language, for use in Google Earth™ and other similar software) formats through the GEON OpenTopography Portal (http://www.OpenTopography.org/data). Importantly, vegetation can be digitally removed from lidar data, producing high-resolution images (0.5- or 1.0-meter DEMs) of the ground surface beneath forested regions that reveal landforms typically obscured by vegetation canopy (Figure 2).

  1. Intermittent/transient fault phenomena in digital systems

    NASA Technical Reports Server (NTRS)

    Masson, G. M.

    1977-01-01

    An overview of the intermittent/transient (IT) fault study is presented. An interval survivability evaluation of digital systems for IT faults is discussed along with a method for detecting and diagnosing IT faults in digital systems.

  2. Reverse fault growth and fault interaction with frictional interfaces: insights from analogue models

    NASA Astrophysics Data System (ADS)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2017-04-01

    The association of faulting and folding is a common feature in mountain chains, fold-and-thrust belts, and accretionary wedges. Kinematic models have been developed and are widely used to explain a range of relationships between faulting and folding. However, these models may not be entirely appropriate for explaining shortening in mechanically heterogeneous rock bodies. Weak layers, bedding surfaces, or pre-existing faults ahead of a propagating fault tip may influence the fault propagation rate itself and the associated fold shape. In this work, we employed clay analogue models to investigate how mechanical discontinuities affect the propagation rate and the associated fold shape during the growth of reverse master faults. The simulated master faults dip at 30° and 45°, recalling the range of the most frequent dip angles for active reverse faults that occur in nature. The mechanical discontinuities are simulated by pre-cutting the clay pack. For both experimental setups (30° and 45° dipping faults) we analyzed three configurations: 1) isotropic, i.e. without precuts; 2) with one precut in the middle of the clay pack; and 3) with two evenly spaced precuts. To test the repeatability of the processes and to obtain a statistically valid dataset, we replicated each configuration three times. The experiments were monitored by collecting successive snapshots with a high-resolution camera pointing at the side of the model. The pictures were then processed using the Digital Image Correlation (DIC) method in order to extract the displacement and shear-rate fields. These two quantities effectively show both the on-fault and off-fault deformation, indicating the activity along the newly formed faults and whether, and at what stage, the discontinuities (precuts) are reactivated. To study the fault propagation and fold shape variability we marked the position of the fault tips and the fold profiles for every successive step of deformation. Then we compared

  3. Fault-tolerant software - Experiment with the sift operating system. [Software Implemented Fault Tolerance computer

    NASA Technical Reports Server (NTRS)

    Brunelle, J. E.; Eckhardt, D. E., Jr.

    1985-01-01

    Results are presented of an experiment conducted in the NASA Avionics Integrated Research Laboratory (AIRLAB) to investigate the implementation of fault-tolerant software techniques on fault-tolerant computer architectures, in particular the Software Implemented Fault Tolerance (SIFT) computer. The N-version programming and recovery block techniques were implemented on a portion of the SIFT operating system. The results indicate that, to effectively implement fault-tolerant software design techniques, system requirements will be impacted and suggest that retrofitting fault-tolerant software on existing designs will be inefficient and may require system modification.
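
    The N-version programming technique mentioned above can be illustrated independently of SIFT: run independently developed versions of a routine and majority-vote their results. The three `version_*` functions below are hypothetical examples invented for this sketch.

```python
from collections import Counter

def n_version_vote(versions, x):
    """Run independently developed implementations and majority-vote the result."""
    results = [v(x) for v in versions]
    winner, count = Counter(results).most_common(1)[0]
    if count <= len(versions) // 2:
        raise RuntimeError(f"no majority among results: {results!r}")
    return winner

# Three hypothetical versions of integer square root; version_c is faulty.
def version_a(n):
    return int(n ** 0.5)

def version_b(n):
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

def version_c(n):
    return n // 2          # a software fault: wrong algorithm entirely

print(n_version_vote([version_a, version_b, version_c], 49))   # -> 7, fault outvoted
```

    The voter masks any single faulty version, but only if the versions fail independently, which is exactly the system-level requirement the experiment above probes.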

  4. Implementation of a model based fault detection and diagnosis technique for actuation faults of the SSME

    NASA Technical Reports Server (NTRS)

    Duyar, A.; Guo, T.-H.; Merrill, W.; Musgrave, J.

    1991-01-01

    In a previous study, Guo, Merrill, and Duyar (1990) reported the conceptual development of a fault detection and diagnosis system for actuation faults of the Space Shuttle Main Engine. This study, a continuation of that work, implements the developed fault detection and diagnosis scheme for real-time actuation fault diagnosis of the Space Shuttle Main Engine. The scheme will be used as an integral part of an intelligent control system demonstration experiment at NASA Lewis. The diagnosis system utilizes a model-based method with real-time identification and hypothesis testing for actuation, sensor, and performance-degradation faults.
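
    A minimal sketch of model-based residual generation with a threshold test, in the spirit of (but far simpler than) the scheme above. The first-order plant, noise level, and fault size are all invented for illustration; this is not the SSME model.

```python
import numpy as np

# Toy first-order actuator/plant model (invented for illustration):
#   x[k+1] = a*x[k] + b*u[k]
# The fault is a loss of actuation authority starting at step 60.
a, b = 0.9, 0.5
rng = np.random.default_rng(1)

def simulate(n_steps, fault_at):
    x, xs = 0.0, []
    for k in range(n_steps):
        u_cmd = 1.0
        u_act = u_cmd if k < fault_at else u_cmd - 0.8   # actuation fault
        x = a * x + b * u_act + 0.01 * rng.standard_normal()
        xs.append(x)
    return np.array(xs)

measured = simulate(100, fault_at=60)

# Residual generation: one-step-ahead prediction from the nominal model,
# using the commanded (not actual) input. A healthy plant leaves only noise.
predicted = a * np.concatenate(([0.0], measured[:-1])) + b * 1.0
residual = measured - predicted

# Hypothesis test, reduced here to a fixed threshold above the noise floor.
alarm = np.abs(residual) > 0.1
first_alarm = int(np.argmax(alarm))
print(first_alarm)        # the fault is declared at step 60
```

    In a full scheme the thresholding would be a statistical hypothesis test, and separate residual generators would distinguish actuation, sensor, and degradation faults.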

  6. The width of fault zones in a brittle-viscous lithosphere: Strike-slip faults

    NASA Technical Reports Server (NTRS)

    Parmentier, E. M.

    1991-01-01

    A fault zone in an ideal brittle material overlying a very weak substrate could, in principle, consist of a single slip surface. Real fault zones have a finite width consisting of a number of nearly parallel slip surfaces on which deformation is distributed. The hypothesis that the finite width of fault zones reflects stresses due to quasistatic flow in the ductile substrate of a brittle surface layer is explored. Because of the simplicity of theory and observations, strike-slip faults are examined first, but the analysis can be extended to normal and thrust faulting.

  7. Runtime Speculative Software-Only Fault Tolerance

    DTIC Science & Technology

    2012-06-01


  8. Fault detection with principal component pursuit method

    NASA Astrophysics Data System (ADS)

    Pan, Yijun; Yang, Chunjie; Sun, Youxian; An, Ruqiao; Wang, Lin

    2015-11-01

    Data-driven approaches are widely applied for fault detection in industrial processes. Recently, a new method for fault detection called principal component pursuit (PCP) was introduced. PCP is not only robust to outliers but can also accomplish the objectives of model building, fault detection, fault isolation, and process reconstruction simultaneously. PCP divides the data matrix into two parts: a fault-free low-rank matrix and a sparse matrix containing sensor noise and process faults. The statistics presented in this paper fully utilize the information in the data matrix. Since the low-rank matrix in PCP is similar to the principal components matrix in PCA, a T2 statistic is proposed for fault detection in the low-rank matrix; this statistic shows that PCP is more sensitive to small variations in variables than PCA. In addition, for the sparse matrix, a new monitoring statistic for online fault detection with the PCP-based method is introduced. This statistic uses the mean and the correlation coefficients of the variables. A Monte Carlo simulation and the Tennessee Eastman (TE) benchmark process are provided to illustrate the effectiveness of the monitoring statistics.
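
    A compact sketch of PCP itself, using the standard augmented-Lagrangian iteration: singular-value thresholding for the low-rank part and elementwise soft-thresholding for the sparse part. The synthetic data and parameter defaults are illustrative; the paper's T2 and sparse-matrix monitoring statistics are not reproduced here.

```python
import numpy as np

def pcp(M, lam=None, mu=None, n_iter=500):
    """Split M into low-rank L plus sparse S by principal component pursuit,
    via the standard inexact augmented Lagrange multiplier iteration."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(n_iter):
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt          # singular value thresholding
        S = shrink(M - L + Y / mu, lam / mu)          # elementwise soft threshold
        Y += mu * (M - L - S)                         # dual update
    return L, S

# Synthetic process data: rank-1 "normal operation" plus one gross fault spike.
rng = np.random.default_rng(0)
normal = rng.standard_normal((50, 1)) @ rng.standard_normal((1, 8))
faulty = normal.copy()
faulty[30, 3] += 10.0                 # a localized sensor/process fault
L, S = pcp(faulty)
print(S[30, 3])                       # the fault lands in the sparse part
```

    Fault isolation then falls out of the decomposition: the location of large entries in `S` points directly at the faulty sample and variable, while `L` provides the reconstructed fault-free process.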

  9. The Curiosity Mars Rover's Fault Protection Engine

    NASA Technical Reports Server (NTRS)

    Benowitz, Ed

    2014-01-01

    The Curiosity Rover, currently operating on Mars, contains flight software onboard to autonomously handle aspects of system fault protection. Over 1000 monitors and 39 responses are present in the flight software. Orchestrating these behaviors is the flight software's fault protection engine. In this paper, we discuss the engine's design, responsibilities, and present some lessons learned for future missions.

  10. A Game Theoretic Fault Detection Filter

    NASA Technical Reports Server (NTRS)

    Chung, Walter H.; Speyer, Jason L.

    1995-01-01

    The fault detection process is modelled as a disturbance attenuation problem. The solution to this problem is found via differential game theory, leading to an H(sub infinity) filter which bounds the transmission of all exogenous signals save the fault to be detected. For a general class of linear systems which includes some time-varying systems, it is shown that this transmission bound can be taken to zero by simultaneously bringing the sensor noise weighting to zero. Thus, in the limit, a complete transmission block can be achieved, making the game filter into a fault detection filter. When we specialize this result to time-invariant systems, it is found that the detection filter attained in the limit is identical to the well-known Beard-Jones Fault Detection Filter. That is, all fault inputs other than the one to be detected (the "nuisance faults") are restricted to an invariant subspace which is unobservable to a projection on the output. For time-invariant systems, it is also shown that in the limit, the order of the state-space and the game filter can be reduced by factoring out the invariant subspace. The result is a lower-dimensional filter which can observe only the fault to be detected. A reduced-order filter can also be generated for time-varying systems, though the computational overhead may be intensive. An example given at the end of the paper demonstrates the effectiveness of the filter as a tool for fault detection and identification.
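
    The role of the residual is easiest to see in the simplest residual generator. The sketch below is a plain observer residual for a hypothetical discrete-time plant (the matrices and the fault signal are illustrative choices, not from the paper); the game-theoretic H-infinity filter refines exactly this kind of residual so that nuisance inputs are blocked from it.

```python
import numpy as np

# Hypothetical 2-state plant x+ = A x + fault, y = C x, with an
# observer gain L chosen so that A - L C is stable.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5],
              [0.1]])

def run(steps=100, fault_at=50):
    # Simulate the plant and observer; the residual r = y - C xhat is
    # zero in fault-free operation and settles at a nonzero value once
    # the (unmodelled) fault input is injected.
    x = np.zeros(2)
    xhat = np.zeros(2)
    residuals = []
    for k in range(steps):
        y = C @ x
        r = y - C @ xhat                    # innovation / residual
        residuals.append(abs(r[0]))
        fault = np.array([0.0, 0.5]) if k >= fault_at else np.zeros(2)
        x = A @ x + fault                   # plant step with additive fault
        xhat = A @ xhat + (L @ r).ravel()   # observer step
    return residuals
```

    A threshold test on the residual then flags the fault; the detection-filter machinery in the paper additionally shapes which fault directions can reach that residual.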

  11. Interactive Instruction in Solving Fault Finding Problems.

    ERIC Educational Resources Information Center

    Brooke, J. B.; And Others

    1978-01-01

    A training program is described which provides, during fault diagnosis, additional information about the relationship between the remaining faults and the available indicators. An interactive computer program developed for this purpose and the first results of experimental training are described. (Author)

  12. Training for Skill in Fault Diagnosis

    ERIC Educational Resources Information Center

    Turner, J. D.

    1974-01-01

    The Knitting, Lace and Net Industry Training Board has developed a training innovation called fault diagnosis training. The entire training process concentrates on teaching based on the experiences of troubleshooters or any other employees whose main tasks involve fault diagnosis and rectification. (Author/DS)

  13. Measurement selection for parametric IC fault diagnosis

    NASA Technical Reports Server (NTRS)

    Wu, A.; Meador, J.

    1991-01-01

    Experimental results obtained with the use of measurement reduction for statistical IC fault diagnosis are described. The reduction method used involves data pre-processing in a fashion consistent with a specific definition of parametric faults. The effects of this preprocessing are examined.

  14. Diagnostics Tools Identify Faults Prior to Failure

    NASA Technical Reports Server (NTRS)

    2013-01-01

    Through the SBIR program, Rochester, New York-based Impact Technologies LLC collaborated with Ames Research Center to commercialize the Center's Hybrid Diagnostic Engine, or HyDE, software. The fault-detecting program is now incorporated into a software suite that identifies potential faults early in the design phase of systems ranging from printers to vehicles and robots, saving time and money.

  15. Glossary of fault and other fracture networks

    NASA Astrophysics Data System (ADS)

    Peacock, D. C. P.; Nixon, C. W.; Rotevatn, A.; Sanderson, D. J.; Zuluaga, L. F.

    2016-11-01

    Increased interest in the two- and three-dimensional geometries and development of faults and other types of fractures in rock has led to an increasingly bewildering terminology. Here we give definitions for the geometric, topological, kinematic and mechanical relationships between geological faults and other types of fractures, focussing on how they relate to form networks.

  16. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Fault areas. 258.13 Section 258.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES CRITERIA FOR MUNICIPAL SOLID WASTE LANDFILLS Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral...

  17. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Fault areas. 258.13 Section 258.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES CRITERIA FOR MUNICIPAL SOLID WASTE LANDFILLS Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral...

  18. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Fault areas. 258.13 Section 258.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES CRITERIA FOR MUNICIPAL SOLID WASTE LANDFILLS Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral...

  19. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Fault areas. 258.13 Section 258.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES CRITERIA FOR MUNICIPAL SOLID WASTE LANDFILLS Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral...

  20. Late Cenozoic intraplate faulting in eastern Australia

    NASA Astrophysics Data System (ADS)

    Babaahmadi, Abbas; Rosenbaum, Gideon

    2014-12-01

    The intensity and tectonic origin of late Cenozoic intraplate deformation in eastern Australia are relatively poorly understood. Here we show that Cenozoic volcanic rocks in southeast Queensland have been deformed by numerous faults. Using gridded aeromagnetic data and field observations, structural investigations were conducted on these faults. Results show that the faults have mainly undergone strike-slip movement with a reverse component, displacing Cenozoic volcanic rocks ranging in age from ˜31 to ˜21 Ma. These ages imply that faulting must have occurred after the late Oligocene. Late Cenozoic deformation has mostly occurred due to the reactivation of major faults, which were active during episodes of basin formation in the Jurassic-Early Cretaceous and later during the opening of the Tasman and Coral Seas from the Late Cretaceous to the early Eocene. The wrench reactivation of major faults in the late Cenozoic also gave rise to the occurrence of brittle subsidiary reverse strike-slip faults that affected Cenozoic volcanic rocks. Intraplate transpressional deformation possibly resulted from far-field stresses transmitted from the collisional zones at the northeast and southeast boundaries of the Australian plate during the late Oligocene-early Miocene and from the late Miocene to the Pliocene. These events have resulted in the hitherto unrecognized reactivation of faults in eastern Australia.

  1. Intermittent/transient faults in digital systems

    NASA Technical Reports Server (NTRS)

    Masson, G. M.; Glazer, R. E.

    1982-01-01

    Containment set techniques are applied to 8085 microprocessor controllers so as to transform a typical control system into a slightly modified version, shown to be crashproof: after the departure of the intermittent/transient fault, return to one proper control algorithm is assured, assuming no permanent faults occur.

  2. Fault-tolerant parallel processing system

    SciTech Connect

    Harper, R.E.; Lala, J.H.

    1990-03-06

    This patent describes a fault tolerant processing system for providing processing operations while tolerating f failures in the execution thereof. It comprises: at least (3f + 1) fault containment regions. Each of the regions includes a plurality of processors; network means connected to the processors and to the network means of the others of the fault containment regions; groups of one or more processors being configured to form redundant processing sites, at least one of the groups having (2f + 1) processors, each of the processors of a group being included in a different one of the fault containment regions. Each network means of a fault containment region includes means for providing communication operations between the network means and the network means of the others of the fault containment regions, each of the network means being connected to each other network means by at least (2f + 1) disjoint communication paths, a minimum of (f + 1) rounds of communication being provided among the network means of the fault containment regions in the execution of the processing operation; and means for synchronizing the communication operations of the network means with the communication operations of the network means of the other fault containment regions.

  3. The Curiosity Mars Rover's Fault Protection Engine

    NASA Technical Reports Server (NTRS)

    Benowitz, Ed

    2014-01-01

    The Curiosity Rover, currently operating on Mars, contains flight software onboard to autonomously handle aspects of system fault protection. Over 1000 monitors and 39 responses are present in the flight software. Orchestrating these behaviors is the flight software's fault protection engine. In this paper, we discuss the engine's design, responsibilities, and present some lessons learned for future missions.

  4. Investigation of an Advanced Fault Tolerant Integrated Avionics System

    DTIC Science & Technology

    1986-03-01

    (Abstract not available; the retrieved text contains only table-of-contents fragments covering fault detection and isolation, cockpit fault monitoring and reconfiguration, and redundancy-management design considerations: in the event of a fault in an active channel, the fault must be detected, then isolated, and the channel deselected.)

  5. Geophysical characterization of buried active faults: the Concud Fault (Iberian Chain, NE Spain)

    NASA Astrophysics Data System (ADS)

    Pueyo Anchuela, Óscar; Lafuente, Paloma; Arlegui, Luis; Liesa, Carlos L.; Simón, José L.

    2016-11-01

    The Concud Fault is a 14-km-long active fault that extends close to Teruel, a city with about 35,000 inhabitants in the Iberian Range (NE Spain). It shows evidence of recurrent activity during Late Pleistocene time, posing a significant seismic hazard in an area of moderate-to-low tectonic rates. A geophysical survey was carried out along the mapped trace of the southern branch of the Concud Fault to evaluate the geophysical signature from the fault and the location of paleoseismic trenches. The survey identified a lineation of inverse magnetic dipoles at residual and vertical magnetic gradient, a local increase in apparent conductivity, and interruptions of the underground sediment structure along GPR profiles. The origin of these anomalies is due to lateral contrast between both fault blocks and the geophysical signature of Quaternary materials located above and directly south of the fault. The spatial distribution of anomalies was successfully used to locate suitable trench sites and to map non-exposed segments of the fault. The geophysical anomalies are related to the sedimentological characteristics and permeability differences of the deposits and to deformation related to fault activity. The results illustrate the usefulness of geophysics to detect and map non-exposed faults in areas of moderate-to-low tectonic activity where faults are often covered by recent pediments that obscure geological evidence of the most recent earthquakes. The results also highlight the importance of applying multiple geophysical techniques in defining the location of buried faults.

  6. Spatial analysis of hypocenter to fault relationships for determining fault process zone width in Japan.

    SciTech Connect

    Arnold, Bill Walter; Roberts, Barry L.; McKenna, Sean Andrew; Coburn, Timothy C. (Abilene Christian University, Abilene, TX)

    2004-09-01

    Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis.
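
    The core spatial operation in such an analysis, associating hypocenters with a fault trace and counting events within a trial process-zone width, can be sketched as follows. This is a generic map-view calculation with hypothetical coordinates, not the actual GIS workflow used in the study:

```python
import math

def dist_point_to_segment(p, a, b):
    # Distance from hypocenter p to the fault-trace segment a-b,
    # clamped to the segment ends (map-view approximation).
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def within_zone(hypocenters, trace, half_width):
    # Count events whose distance to any segment of the fault trace is
    # <= half_width: a crude proxy for seismicity inside a trial
    # process zone of that width.
    return sum(
        1 for p in hypocenters
        if any(dist_point_to_segment(p, trace[i], trace[i + 1]) <= half_width
               for i in range(len(trace) - 1))
    )
```

    Sweeping `half_width` and comparing event counts against a background rate is one way to estimate where the near-fault excess in seismicity dies off.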

  7. Fault structure, frictional properties and mixed-mode fault slip behavior

    NASA Astrophysics Data System (ADS)

    Collettini, Cristiano; Niemeijer, André; Viti, Cecilia; Smith, Steven A. F.; Marone, Chris

    2011-11-01

    Recent high-resolution GPS and seismological data reveal that tectonic faults exhibit complex, multi-mode slip behavior including earthquakes, creep events, slow and silent earthquakes, low-frequency events and earthquake afterslip. The physical processes responsible for this range of behavior and the mechanisms that dictate fault slip rate or rupture propagation velocity are poorly understood. One avenue for improving knowledge of these mechanisms involves coupling direct observations of ancient faults exhumed at the Earth's surface with laboratory experiments on the frictional properties of the fault rocks. Here, we show that fault zone structure has an important influence on mixed-mode fault slip behavior. Our field studies depict a complex fault zone structure where foliated horizons surround meter- to decameter-sized lenses of competent material. The foliated rocks are composed of weak mineral phases, possess low frictional strength, and exhibit inherently stable, velocity-strengthening frictional behavior. In contrast, the competent lenses are made of strong minerals, possess high frictional strength, and exhibit potentially unstable, velocity-weakening frictional behavior. Tectonic loading of this heterogeneous fault zone may initially result in fault creep along the weak and frictionally stable foliated horizons. With continued deformation, fault creep will concentrate stress within and around the strong and potentially unstable competent lenses, which may lead to earthquake nucleation. Our studies provide field and mechanical constraints for complex, mixed-mode fault slip behavior ranging from repeating earthquakes to transient slip, episodic slow-slip and creep events.

  8. Active faulting in the Walker Lane

    NASA Astrophysics Data System (ADS)

    Wesnousky, Steven G.

    2005-06-01

    Deformation across the San Andreas and Walker Lane fault systems accounts for most relative Pacific-North American transform plate motion. The Walker Lane is composed of discontinuous sets of right-slip faults that are located to the east and strike approximately parallel to the San Andreas fault system. Mapping of active faults in the central Walker Lane shows that right-lateral shear is locally accommodated by rotation of crustal blocks bounded by steep-dipping, east-striking left-slip faults. The left slip and clockwise rotation of crustal blocks bounded by the east-striking faults has produced major basins in the area, including Rattlesnake and Garfield flats; Teels, Columbus and Rhodes salt marshes; and Queen Valley. The Benton Springs and Petrified Springs faults are the major northwest-striking structures currently accommodating transform motion in the central Walker Lane. Right-lateral offsets of late Pleistocene surfaces along the two faults point to slip rates of at least 1 mm/yr. The northern limit of northwest-trending strike-slip faults in the central Walker Lane is abrupt and reflects transfer of strike-slip to dip-slip deformation in the western Basin and Range and transformation of right slip into rotation of crustal blocks to the north. The transfer of strike slip in the central Walker Lane to dip slip in the western Basin and Range correlates with a northward broadening of the modern strain field suggested by geodesy and appears to be a long-lived feature of the deformation field. The complexity of faulting and apparent rotation of crustal blocks within the Walker Lane is consistent with the concept of a partially detached and elastic-brittle crust that is being transported on a continuously deforming layer below. The regional pattern of faulting within the Walker Lane is more complex than observed along the San Andreas fault system to the west. The difference is attributed to the lesser cumulative slip that has occurred across the Walker Lane.

  9. Modeling fault among motorcyclists involved in crashes.

    PubMed

    Haque, Md Mazharul; Chin, Hoong Chor; Huang, Helai

    2009-03-01

    Singapore crash statistics from 2001 to 2006 show that the motorcyclist fatality and injury rates per registered vehicle are higher than those of other motor vehicles by 13 and 7 times, respectively. The crash involvement rate of motorcyclists as victims of other road users is also about 43%. The objective of this study is to identify the factors that contribute to the fault of motorcyclists involved in crashes. This is done by using the binary logit model to differentiate between at-fault and not-at-fault cases, and the analysis is further categorized by the location of the crashes, i.e., at intersections, on expressways and at non-intersections. A number of explanatory variables representing roadway characteristics, environmental factors, motorcycle descriptions, and rider demographics have been evaluated. The time trend effect shows that not-at-fault crash involvement of motorcyclists has increased with time. The likelihood of night-time crashes has also increased for not-at-fault crashes at intersections and on expressways. The presence of surveillance cameras is effective in reducing not-at-fault crashes at intersections. Wet road surfaces increase at-fault crash involvement at non-intersections. At intersections, not-at-fault crash involvement is more likely on single-lane roads or in the median lane of multi-lane roads, while on expressways at-fault crash involvement is more likely in the median lane. Roads with higher speed limits have higher at-fault crash involvement, and this is also true on expressways. Motorcycles with pillion passengers or with higher engine capacity have a higher likelihood of being at fault in crashes on expressways. Motorcyclists are more likely to be at fault in collisions involving pedestrians, and this effect is higher at night. In multi-vehicle crashes, motorcyclists are more likely to be victims than at fault. Young and older riders are more likely to be at fault in crashes than middle-aged riders. The findings of this study will help
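
    The binary logit model the study uses has a compact form: P(at-fault | x) = 1 / (1 + exp(-(b0 + w.x))). A minimal sketch of fitting it by gradient ascent on synthetic data follows; the features here are hypothetical stand-ins, not the paper's actual crash covariates.

```python
import numpy as np

def fit_logit(X, y, lr=0.1, iters=2000):
    # Binary logit: P(y=1 | x) = sigmoid(b0 + w . x), fitted by plain
    # gradient ascent on the log-likelihood.  An intercept column is
    # prepended so b0 is estimated alongside w.
    X1 = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))      # predicted probabilities
        w += lr * X1.T @ (y - p) / len(y)      # log-likelihood gradient
    return w

def predict(w, X):
    # Predicted probability of the "at-fault" class for each row of X.
    X1 = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-X1 @ w))
```

    In the study's setting, each coefficient in `w` would correspond to one explanatory variable (road type, speed limit, rider age group, and so on), and its sign indicates whether the factor raises or lowers the odds of being at fault.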

  10. Do faults stay cool under stress?

    NASA Astrophysics Data System (ADS)

    Savage, H. M.; Polissar, P. J.; Sheppard, R. E.; Brodsky, E. E.; Rowe, C. D.

    2011-12-01

    Determining the absolute stress on faults during slip is one of the major goals of earthquake physics as this information is necessary for full mechanical modeling of the rupture process. One indicator of absolute stress is the total energy dissipated as heat through frictional resistance. The heat results in a temperature rise on the fault that is potentially measurable and interpretable as an indicator of the absolute stress. We present a new paleothermometer for fault zones that utilizes the thermal maturity of extractable organic material to determine the maximum frictional heating experienced by the fault. Because there are no retrograde reactions in these organic systems, maximum heating is preserved. We investigate four different faults: 1) the Punchbowl Fault, a strike-slip fault that is part of the ancient San Andreas system in southern California, 2) the Muddy Mountain Thrust, a continental thrust sheet in Nevada, 3) large shear zones of Sitkanik Island, AK, part of the proto-megathrust of the Kodiak Accretionary Complex and 4) the Pasagshak Point Megathrust, Kodiak Accretionary Complex, AK. According to a variety of organic thermal maturity indices, the thermal maturity of the rocks falls within the range of heating expected from the bounds on burial depth and time, indicating that the method is robust and in some cases improving our knowledge of burial depth. Only the Pasagshak Point Thrust, which is also pseudotachylyte-bearing, shows differential heating between the fault and off-fault samples. This implies that most of the faults did not get hotter than the surrounding rock during slip. Simple temperature models coupled to the kinetic reactions for organic maturity let us constrain certain aspects of the fault during slip such as fault friction, maximum slip in a single earthquake, the thickness of the active slipping zone and the effective normal stress. Because of the significant length of these faults, we find it unlikely that they never sustained

  11. Shear heating by translational brittle reverse faulting along a single, sharp and straight fault plane

    NASA Astrophysics Data System (ADS)

    Mukherjee, Soumyajit

    2017-02-01

    Shear heating by reverse faulting on a sharp, straight fault plane is modelled. The increase in temperature (T_i) of the faulted hangingwall and footwall blocks by frictional/shear heating for planar rough reverse faults is proportional to the coefficient of friction (μ) and to the density (ρ) and thickness of the hangingwall block. T_i increases as movement progresses with time. The thermal conductivity (K_i) and thermal diffusivity (κ_i) of the faulted blocks govern T_i, but not through a simple relation. T_i is significant only near the fault plane. If the lithology is dry and faulting brings adjacent hangingwall and footwall blocks of the same lithology into contact, those blocks undergo the same rate of increase in shear heating per unit area per unit time.
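
    The stated proportionalities follow from the standard frictional-heating relation for a planar fault; the following is a generic sketch of that scaling, not the paper's exact equations. With friction coefficient μ, hangingwall density ρ and thickness h, slip rate v, thermal conductivity K and thermal diffusivity κ:

```latex
% Shear stress from the overburden, and the frictional heat flux
% generated on the fault plane:
\tau \approx \mu\,\rho g h, \qquad q = \tau v = \mu\,\rho g h\, v
% Temperature rise at the fault after slipping for time t
% (conduction into the adjacent half-spaces):
\Delta T(t) \approx \frac{q}{K}\sqrt{\frac{\kappa t}{\pi}}
```

    This reproduces the behavior described above: the temperature rise grows with μ, ρ, the hangingwall thickness and elapsed time, is controlled jointly (and nonlinearly) by K and κ, and decays away from the fault plane.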

  12. Physiochemical Evidence of Faulting Processes and Modeling of Fluid in Evolving Fault Systems in Southern California

    SciTech Connect

    Boles, James

    2013-05-24

    Our study targets recent (Plio-Pleistocene) faults and young (Tertiary) petroleum fields in southern California. Faults include the Refugio Fault in the Transverse Ranges, the Ellwood Fault in the Santa Barbara Channel, and most recently the Newport- Inglewood in the Los Angeles Basin. Subsurface core and tubing scale samples, outcrop samples, well logs, reservoir properties, pore pressures, fluid compositions, and published structural-seismic sections have been used to characterize the tectonic/diagenetic history of the faults. As part of the effort to understand the diagenetic processes within these fault zones, we have studied analogous processes of rapid carbonate precipitation (scaling) in petroleum reservoir tubing and manmade tunnels. From this, we have identified geochemical signatures in carbonate that characterize rapid CO2 degassing. These data provide constraints for finite element models that predict fluid pressures, multiphase flow patterns, rates and patterns of deformation, subsurface temperatures and heat flow, and geochemistry associated with large fault systems.

  13. Classification of Aircraft Maneuvers for Fault Detection

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.; Koga, Dennis (Technical Monitor)

    2002-01-01

    Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data provide a reasonable match to known examples of proper operation. In the domain of fault detection in aircraft, the first assumption is unreasonable and the second is difficult to determine. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. To develop such a system, we use flight data collected under a controlled test environment, subject to many sources of variability. We explain where our classifier fits into the envisioned fault detection system as well as experiments showing the promise of this classification subsystem.

  14. Maneuver Classification for Aircraft Fault Detection

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.

    2003-01-01

    Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data provide a reasonable match to known examples of proper operation. In the domain of fault detection in aircraft, identifying all possible faulty and proper operating modes is clearly impossible. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. To develop such a system, we use flight data collected under a controlled test environment, subject to many sources of variability. We explain where our classifier fits into the envisioned fault detection system as well as experiments showing the promise of this classification subsystem.

  15. Classification of Aircraft Maneuvers for Fault Detection

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data are a reasonable match to known examples of proper operation. In our domain of fault detection in aircraft, the first assumption is unreasonable and the second is difficult to determine. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. We explain where this subsystem fits into our envisioned fault detection system, as well as experiments showing the promise of this classification subsystem.

  16. Active Fault Topography and Fault Outcrops in the Central Part of the Nukumi fault, the 1891 Nobi Earthquake Fault System, Central Japan

    NASA Astrophysics Data System (ADS)

    Sasaki, T.; Ueta, K.; Inoue, D.; Aoyagi, Y.; Yanagida, M.; Ichikawa, K.; Goto, N.

    2010-12-01

    It is important to evaluate the magnitude of earthquakes caused by multiple active faults, taking simultaneous rupture into account. The simultaneity of adjacent active faults is often judged on the basis of geometric distance, except where paleoseismic records are known. We have been studying the step area between the Nukumi fault and the Neodani fault, which ruptured together in the 1891 Nobi earthquake, since 2009. The purpose of this study is to improve techniques for evaluating the simultaneity of adjacent active faults beyond paleoseismic records and geometric distance alone. Geomorphological, geological and reconnaissance microearthquake surveys were conducted. The present work is intended to clarify the distribution of tectonic geomorphology along the Nukumi fault and the Neodani fault by high-resolution interpretation of airborne LiDAR DEMs and aerial photographs, together with field surveys of outcrops and location surveys. The study area of this work is the southeastern Nukumi fault and the northwestern Neodani fault. We interpret the DEM using shaded-relief maps and stereoscopic bird's-eye views made from 2 m mesh DEM data obtained by the airborne laser scanner of Kokusai Kogyo Co., Ltd. The aerial photographic survey uses 1/16,000-scale photographs to confirm the DEM interpretation. As a result of the topographic survey, we found continuous tectonic topography, namely left-lateral displacement of ridge and valley lines and reverse scarplets, along the Nukumi fault and the Neodani fault. From Ogotani, 2 km southeast of Nukumi pass (the southeastern end of the surface rupture along the Nukumi fault according to a previous study), to Neooppa, 9 km southeast of Nukumi pass, detailed DEM investigation reveals left-lateral topographies and small uphill-facing fault scarps on the terrace surface. These features are unrecognizable in the aerial photographic survey because of heavy vegetation. We have found several new

  17. Quantifying fault recovery in multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw; Harary, Frank

    1990-01-01

    Various aspects of reliable computing are formalized and quantified with emphasis on efficient fault recovery. The mathematical model which proves to be most appropriate is provided by the theory of graphs. New measures for fault recovery are developed, and the values of the elements of the fault recovery vector are observed to depend not only on the computation graph H and the architecture graph G, but also on the specific location of a fault. In the examples, a hypercube is chosen as a representative parallel computer architecture, and a pipeline as a typical configuration for program execution. The dependability qualities of such a system are defined with or without a fault. These qualities are determined by the resiliency triple defined by three parameters: multiplicity, robustness, and configurability. Parameters for measuring recovery effectiveness are also introduced in terms of distance, time, and the number of new, used, and moved nodes and edges.
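
    The distance component of such recovery measures can be illustrated on the hypercube example: node labels are bit strings, edges flip one bit, and the distance from a failed processor to the nearest replacement node is found by breadth-first search. A minimal sketch (the "spare node" framing is an illustrative simplification, not the paper's full recovery vector):

```python
from collections import deque

def hypercube_neighbors(node, dim):
    # Neighbors of a node in a dim-dimensional hypercube: flip one bit.
    return [node ^ (1 << b) for b in range(dim)]

def recovery_distance(failed, spares, dim):
    # BFS from the failed processor to the nearest spare node: a simple
    # distance-based measure of how far work must migrate on recovery.
    seen = {failed}
    frontier = deque([(failed, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node in spares:
            return d
        for nb in hypercube_neighbors(node, dim):
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return None    # no spare reachable
```

    In a hypercube this BFS distance equals the Hamming distance between node labels, which is what makes the architecture convenient for this kind of analysis.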

  18. Self-triggering superconducting fault current limiter

    DOEpatents

    Yuan, Xing [Albany, NY; Tekletsadik, Kasegn [Rexford, NY

    2008-10-21

    A modular and scalable Matrix Fault Current Limiter (MFCL) that functions as a "variable impedance" device in an electric power network, using components made of superconducting and non-superconducting electrically conductive materials. The matrix fault current limiter comprises a fault current limiter module that includes a superconductor electrically coupled in parallel with a trigger coil, wherein the trigger coil is magnetically coupled to the superconductor. The current surge during a fault within the electrical power network causes the superconductor to transition to its resistive state, generates a uniform magnetic field in the trigger coil, and simultaneously limits the voltage developed across the superconductor. This results in fast and uniform quenching of the superconductors, significantly reducing the burnout risk associated with the non-uniformity often existing within the volume of superconductor materials. The fault current limiter modules may be electrically coupled together to form various "n" (rows) × "m" (columns) matrix configurations.

  19. Tuning of fault tolerant control design parameters.

    PubMed

    DeLima, Pedro G; Yen, Gary G

    2008-01-01

    This paper presents two major contributions in the field of fault tolerant control. First, it gathers points of concern typical of most fault tolerant control applications and translates the chosen performance metrics into a set of six practical design specifications. Second, it proposes initialization and tuning procedures through which a particular fault tolerant control architecture not only can be set to comply with the required specifications, but also can be tuned online to compensate for a total of twelve properties, such as the noise rejection levels for fault detection and diagnosis signals. The proposed design is realized over a powerful architecture that combines the flexibility of adaptive critic designs with the long-term memory and learning capabilities of a supervisor. This paper presents a practical design procedure to facilitate the application of a fundamentally sound fault tolerant control architecture to real-world problems.

  20. Holocene fault scarps in the Western Alps

    NASA Astrophysics Data System (ADS)

    Hippolyte, J. C.

    2003-04-01

    In the Tarentaise Valley, Goguel (1969) described recent fault scarps. The present work shows that they are normal faults indicating an SE-directed trend of extension, in agreement with recent microseismicity data (Sue et al., 1999). It is proposed that they reflect the Quaternary normal reactivation of the "Front du Houiller" thrust fault. In the Belledonne external crystalline massif, Bordet (1970) had observed from helicopter three main fault scarps that he interpreted as active SE-dipping reverse faults. Partly owing to the difficulty of access, this area had not been visited until now. Field observations reveal that these faults in fact dip 61-68° to the NW and are normal faults. The fault scarps are 1 to 13 m high. Together with at least 10 newly discovered conjugate SE-dipping normal fault scarps 0.5 to 18 m high, they form a roughly 2 km wide fault zone along the "Synclinal Median" (S.M.) fault. They attest to the activity of this 70 km-long, NNE-striking main fault running through the middle of the Belledonne Massif. Its activity is confirmed by major faceted spurs at the La Perche, La Perrière and Claran passes, and by ruptures cutting moraines. Other fault scarps were discovered throughout the Belledonne massif, showing in particular that the Font-de-France fault, a 60 km-long SE-dipping fault, is also active. All the observed active faults are normal. Their offsets of mountain slopes, screes and rock-glacier morphologies demonstrate their activity during the Holocene. They indicate a present SE-directed extension, in agreement with recent GPS data (Calais et al., 2002). This mapping shows that the present extensional deformation of the Alps is not limited to the west by the "Frontal Pennine thrust" (Sue et al., 1999) but also affects the external Alps. Taking into account focal plane mechanisms, extension affects at least 70% of the Western Alps. Some scarps have been sampled for beryllium cosmogenic dating. However

  1. Fault Detection for Automotive Shock Absorber

    NASA Astrophysics Data System (ADS)

    Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis

    2015-11-01

    Fault detection for automotive semi-active shock absorbers is a challenge due to the non-linear dynamics and the strong influence of disturbances such as the road profile. The first obstacle is modeling the fault, which has been shown to be multiplicative in nature, whereas many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using a commercial vehicle model that was experimentally validated. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.
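
    As a rough illustration of the parameter-identification idea (all names and numbers below are hypothetical, and the linear damper model is a drastic simplification of real semi-active dynamics), a multiplicative fault shows up as a drift of the estimated damping coefficient away from its nominal value:

```python
import numpy as np

def estimate_damping(v, f_meas, c_nominal, tol=0.3):
    # Least-squares estimate of damping coefficient c from measured
    # damper force f ≈ c * v (simplified linear model; real semi-active
    # dampers are nonlinear). Flags a fault if the estimate drifts
    # more than tol (relative) from the nominal value.
    c_hat = float(np.dot(v, f_meas) / np.dot(v, v))
    fault = abs(c_hat - c_nominal) / c_nominal > tol
    return c_hat, fault

rng = np.random.default_rng(0)
v = rng.uniform(-1, 1, 500)                       # relative velocity [m/s]
healthy = 1200 * v + rng.normal(0, 5, 500)        # nominal c = 1200 N·s/m
faulty = 0.5 * 1200 * v + rng.normal(0, 5, 500)   # multiplicative fault: 50% loss

print(estimate_damping(v, healthy, 1200)[1])  # False
print(estimate_damping(v, faulty, 1200)[1])   # True
```

    An observer-based scheme would instead compare measured and model-predicted outputs and threshold the residual, which is why it reacts differently to parametric uncertainty.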

  2. Fault analysis of multichannel spacecraft power systems

    NASA Technical Reports Server (NTRS)

    Dugal-Whitehead, Norma R.; Lollar, Louis F.

    1990-01-01

    The NASA Marshall Space Flight Center proposes to implement computer-controlled fault injection into an electrical power system breadboard to study the reactions of the various control elements of this breadboard. Elements under study include the remote power controllers, the algorithms in the control computers, and the artificially intelligent control programs resident in this breadboard. To this end, a study of electrical power system faults is being performed to yield a list of the most common power system faults. The results of this study will be applied to a multichannel high-voltage DC spacecraft power system called the large autonomous spacecraft electrical power system (LASEPS) breadboard. The results of the power system fault study and the planned implementation of these faults into the LASEPS breadboard are described.

  4. Stafford fault system: 120 million year fault movement history of northern Virginia

    USGS Publications Warehouse

    Powars, David S.; Catchings, Rufus D.; Horton, J. Wright; Schindler, J. Stephen; Pavich, Milan J.

    2015-01-01

    The Stafford fault system, located in the mid-Atlantic coastal plain of the eastern United States, provides the most complete record of fault movement during the past ~120 m.y. across the Virginia, Washington, District of Columbia (D.C.), and Maryland region, including displacement of Pleistocene terrace gravels. The Stafford fault system is close to and aligned with the Piedmont Spotsylvania and Long Branch fault zones. The dominant southwest-northeast trend of strong shaking from the 23 August 2011, moment magnitude Mw 5.8 Mineral, Virginia, earthquake is consistent with the connectivity of these faults, as seismic energy appears to have traveled along the documented and proposed extensions of the Stafford fault system into the Washington, D.C., area. Some other faults documented in the nearby coastal plain are clearly rooted in crystalline basement faults, especially along terrane boundaries. These coastal plain faults are commonly assumed to have undergone relatively uniform movement through time, with average slip rates from 0.3 to 1.5 m/m.y. However, there were higher rates during the Paleocene–early Eocene and the Pliocene (4.4–27.4 m/m.y.), suggesting that slip occurred primarily during large earthquakes. Further investigation of the Stafford fault system is needed to understand potential earthquake hazards for the Virginia, Maryland, and Washington, D.C., area. The combined Stafford fault system and aligned Piedmont faults are ~180 km long, so if the combined fault system ruptured in a single event, it would result in a significantly larger magnitude earthquake than the Mineral earthquake. Many structures most strongly affected during the Mineral earthquake are along or near the Stafford fault system and its proposed northeastward extension.

  5. Methodology for Designing Fault-Protection Software

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin; Levison, Jeffrey; Kan, Edwin

    2006-01-01

    A document describes a methodology for designing fault-protection (FP) software for autonomous spacecraft. The methodology embodies and extends established engineering practices in the technical discipline of Fault Detection, Diagnosis, Mitigation, and Recovery, and has been successfully implemented on the Deep Impact spacecraft, a NASA Discovery mission. Based on established concepts of Fault Monitors and Responses, this FP methodology extends the notions of Opinion, Symptom, Alarm (aka Fault), and Response with numerous new notions, sub-notions, software constructs, and logic and timing gates. For example, a Monitor generates a RawOpinion, which graduates into an Opinion categorized as no-opinion, acceptable, or unacceptable. RaiseSymptom, ForceSymptom, and ClearSymptom govern the establishment of a Symptom and its mapping to an Alarm (aka Fault). Local Response is distinguished from FP System Response. A 1-to-n and n-to-1 mapping is established among Monitors, Symptoms, and Responses. Responses are categorized by device versus by function. Responses operate in tiers, where the early tiers attempt to resolve the Fault in a localized, step-by-step fashion, relegating more system-level responses to later tiers. Recovery actions are gated by epoch recovery timing, enabling strategy, urgency, a MaxRetry gate, hardware availability, hazardous versus ordinary fault, and many other priority gates. This methodology is systematic and logical, and uses multiple linked tables, parameter files, and recovery command sequences. The credibility of the FP design is proven via a "top-down" fault-tree analysis and a "bottom-up" functional failure-modes-and-effects analysis. Via this process, the mitigation and recovery strategies per Fault Containment Region scope (in width versus depth) the FP architecture.
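
    A minimal sketch of the Monitor → Symptom → tiered-Response chain described above (the class, symptom, and response names are invented for illustration; the actual flight software is far richer, with timing gates, retry limits, and hardware-availability checks):

```python
from dataclasses import dataclass, field

@dataclass
class Monitor:
    name: str
    threshold: float
    def opinion(self, raw: float) -> str:
        # A RawOpinion graduates into an Opinion category.
        return "unacceptable" if raw > self.threshold else "acceptable"

@dataclass
class FaultProtection:
    # Map each Monitor to the Symptoms it can raise (1-to-n), and each
    # Symptom to tiered Responses: localized fixes first, system-level last.
    symptom_map: dict = field(default_factory=dict)
    responses: dict = field(default_factory=dict)

    def step(self, monitor: Monitor, raw: float):
        if monitor.opinion(raw) != "unacceptable":
            return []
        actions = []
        for symptom in self.symptom_map.get(monitor.name, []):
            for tier, action in enumerate(self.responses.get(symptom, [])):
                actions.append((symptom, tier, action))
        return actions

fp = FaultProtection(
    symptom_map={"battery_temp": ["overheat"]},
    responses={"overheat": ["throttle_heater", "safe_mode"]},
)
m = Monitor("battery_temp", threshold=45.0)
print(fp.step(m, 50.0))  # [('overheat', 0, 'throttle_heater'), ('overheat', 1, 'safe_mode')]
```

    In the real architecture, escalation to a later tier would be gated on the earlier tier failing to clear the Symptom, rather than being emitted all at once as here.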

  6. Fault reactivation: The Picuris-Pecos fault system of north-central New Mexico

    NASA Astrophysics Data System (ADS)

    McDonald, David Wilson

    The PPFS is a N-trending fault system extending over 80 km in the Sangre de Cristo Mountains of northern New Mexico. Precambrian basement rocks are offset 37 km in a right-lateral sense; however, this offset includes dextral strike-slip (Precambrian), mostly normal dip-slip (Pennsylvanian), mostly reverse dip-slip (early Laramide), limited strike-slip (late Laramide) and mostly normal dip-slip (Cenozoic) components. The PPFS is broken into at least 3 segments by the NE-trending Embudo fault and by several Laramide-age NW-trending tear faults. These segments are (from N to S): the Taos, the Picuris, and the Pecos segments. On the east side of the Picuris segment, in the Picuris Mountains, the Oligocene-Miocene Miranda graben developed, representing a complex extension zone south of the Embudo fault. Regional analysis of remotely sensed data and geologic maps indicates that lineaments subparallel to the trace of the PPFS are longer and less frequent than lineaments trending orthogonal to the PPFS. Significant cross-cutting faults and subtle changes in fault trends in each segment are clear in the lineament data. Detailed mapping in the eastern Picuris Mountains showed that the favorably oriented Picuris segment was not reactivated during the Tertiary development of the Rio Grande rift. Segmentation of the PPFS and post-Laramide annealing of the Picuris segment are interpreted to have resulted in the development of the subparallel La Serna fault. The Picuris segment of the PPFS is offset by several E-ESE-trending faults. These faults are Late Cenozoic in age and are interpreted to be related to the uplift of the Picuris Mountains and the continuing sinistral motion on the Embudo fault. Differential subsidence within the Miranda graben caused the development of several synthetic and orthogonal faults between the bounding La Serna and Miranda faults. Analysis of over 10,000 outcrop-scale brittle structures reveals a strong correlation between faults and fracture systems. The dominant

  7. Facies composition and scaling relationships of extensional faults in carbonates

    NASA Astrophysics Data System (ADS)

    Bastesen, Eivind; Braathen, Alvar

    2010-05-01

    Fault seal evaluations in carbonates are challenged by limited input data. Our analysis of 100 extensional faults in shallowly buried, layered carbonate rocks aims to improve forecasting of fault core characteristics in these rocks. We have analyzed the spatial distribution of fault core elements described using a Fault Facies classification scheme, a method specifically developed for 3D fault description and quantification with application in reservoir modelling. In modelling, the fault envelope is populated with fault facies originating from the host rock, the properties of which (e.g. dimensions, geometry, internal structure, petrophysical properties, and spatial distribution of structural elements) are defined by outcrop data. Empirical data sets were collected from outcrops of extensional faults in fine-grained, micro-porosity carbonates from western Sinai (Egypt), central Spitsbergen (Arctic Norway), and central Oman (Adam Foothills), all of which have experienced maximum burial of 2-3 km and exhibit displacements ranging from 4 cm to 400 m. Key observations include fault core thickness, intrinsic composition and geometry. The studied fault cores display several distinct fault facies and facies associations. Based on geometry, fault cores can be categorised as distributed or localized; each can be further subdivided according to the presence of shale smear, carbonate fault rocks and cement/secondary calcite layers. Fault core thickness in carbonate rocks may be controlled by several mechanisms: (1) mechanical breakdown, in which irregularities such as breached relays and asperities are broken down by progressive faulting and fracturing to eventually form a thicker fault rock layer; (2) layer shearing, i.e. accumulation of shale smear along the fault core; and (3) diagenesis, including pressure solution, karstification and precipitation of secondary calcite in the core. Observed fault core thicknesses scatter over three orders of magnitude, with a D/T range of 1:1 to 1

  8. An arc fault detection system

    SciTech Connect

    Jha, Kamal N.

    1997-12-01

    An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between the current flowing from secondary windings located in a current transformer coupled to the power supply side of a switchboard, and the total current induced in secondary windings coupled to the load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and the power supply to remove the supply of power to the switchboard.
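
    The differential principle can be sketched in a few lines (the pickup threshold and names are illustrative, not values from the patent): the relay trips when supply-side and load-side currents disagree, since the missing current must be leaking through a fault such as an arc.

```python
def differential_relay(i_supply_a, i_load_a, pickup_a=0.5):
    # The relay operates when the differential current exceeds its
    # pickup setting; operation signals the upstream breaker to open.
    i_diff = abs(i_supply_a - i_load_a)
    return i_diff > pickup_a  # True -> trip the upstream breaker

print(differential_relay(100.0, 100.0))  # healthy feeder: False (no trip)
print(differential_relay(100.0, 97.0))   # 3 A unaccounted for: True (trip)
```

    A real relay would compare secondary (transformer-scaled) currents and include a restraint characteristic to ride through CT errors at high load, which this sketch omits.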

  9. Fault tolerant data management system

    NASA Technical Reports Server (NTRS)

    Gustin, W. M.; Smither, M. A.

    1972-01-01

    Described in detail are: (1) results obtained in modifying the onboard data management system software to a multiprocessor fault tolerant system; (2) a functional description of the prototype buffer I/O units; (3) description of modification to the ACADC and stimuli generating unit of the DTS; and (4) summaries and conclusions on techniques implemented in the rack and prototype buffers. Also documented is the work done in investigating techniques of high speed (5 Mbps) digital data transmission in the data bus environment. The application considered is a multiport data bus operating with the following constraints: no preferred stations; random bus access by all stations; all stations equally likely to source or sink data; no limit to the number of stations along the bus; no branching of the bus; and no restriction on station placement along the bus.

  10. An observer based approach for achieving fault diagnosis and fault tolerant control of systems modeled as hybrid Petri nets.

    PubMed

    Renganathan, K; Bhaskar, VidhyaCharan

    2011-07-01

    In this paper, we propose an approach for achieving detection and identification of faults, and for providing fault tolerant control, for systems modeled using timed hybrid Petri nets. For this purpose, an observer-based technique is adopted which is useful in detecting faults such as sensor faults, actuator faults, signal conditioning faults, etc. The concepts of estimation, reachability and diagnosability are considered for analyzing faulty behaviors, and based on the detected faults, different schemes are proposed for achieving fault tolerant control using optimization techniques. These concepts are applied to a typical three-tank system and numerical results are obtained.

  11. How fault evolution changes strain partitioning and fault slip rates in Southern California: Results from geodynamic modeling

    NASA Astrophysics Data System (ADS)

    Ye, Jiyang; Liu, Mian

    2017-08-01

    In Southern California, the Pacific-North America relative plate motion is accommodated by the complex southern San Andreas Fault system that includes many young faults (<2 Ma). The initiation of these young faults and their impact on strain partitioning and fault slip rates are important for understanding the evolution of this plate boundary zone and assessing earthquake hazard in Southern California. Using a three-dimensional viscoelastoplastic finite element model, we have investigated how this plate boundary fault system has evolved to accommodate the relative plate motion in Southern California. Our results show that when the plate boundary faults are not optimally configured to accommodate the relative plate motion, strain is localized in places where new faults would initiate to improve the mechanical efficiency of the fault system. In particular, the Eastern California Shear Zone, the San Jacinto Fault, the Elsinore Fault, and the offshore dextral faults all developed in places of highly localized strain. These younger faults compensate for the reduced fault slip on the San Andreas Fault proper because of the Big Bend, a major restraining bend. The evolution of the fault system changes the apportionment of fault slip rates over time, which may explain some of the slip rate discrepancy between geological and geodetic measurements in Southern California. For the present fault configuration, our model predicts localized strain in western Transverse Ranges and along the dextral faults across the Mojave Desert, where numerous damaging earthquakes occurred in recent years.

  12. Fault failure with moderate earthquakes

    USGS Publications Warehouse

    Johnston, M.J.S.; Linde, A.T.; Gladwin, M.T.; Borcherdt, R.D.

    1987-01-01

    High resolution strain and tilt recordings were made in the near-field of, and prior to, the May 1983 Coalinga earthquake (ML = 6.7, Δ = 51 km), the August 4, 1985, Kettleman Hills earthquake (ML = 5.5, Δ = 34 km), the April 1984 Morgan Hill earthquake (ML = 6.1, Δ = 55 km), the November 1984 Round Valley earthquake (ML = 5.8, Δ = 54 km), the January 14, 1978, Izu, Japan earthquake (ML = 7.0, Δ = 28 km), and several other smaller magnitude earthquakes. These recordings were made with near-surface instruments (resolution 10^-8), with borehole dilatometers (resolution 10^-10) and a 3-component borehole strainmeter (resolution 10^-9). While observed coseismic offsets are generally in good agreement with expectations from elastic dislocation theory, and while post-seismic deformation continued, in some cases, with a moment comparable to that of the main shock, preseismic strain or tilt perturbations from hours to seconds (or less) before the main shock are not apparent above the present resolution. Precursory slip for these events, if any occurred, must have had a moment less than a few percent of that of the main event. To the extent that these records reflect general fault behavior, the strong constraint on the size and amount of slip triggering major rupture makes prediction of the onset times and final magnitudes of the rupture zones a difficult task unless the instruments are fortuitously installed near the rupture initiation point. These data are best explained by an inhomogeneous failure model for which various areas of the fault plane have either different stress-slip constitutive laws or spatially varying constitutive parameters. Other work on seismic waveform analysis and synthetic waveforms indicates that the rupturing process is inhomogeneous and controlled by points of higher strength. These models indicate that rupture initiation occurs at smaller regions of higher strength which, when broken, allow runaway catastrophic failure. © 1987.

  13. Reconfigurable fault tolerant avionics system

    NASA Astrophysics Data System (ADS)

    Ibrahim, M. M.; Asami, K.; Cho, Mengu

    This paper presents the design of a reconfigurable avionics system based on a modern Static Random Access Memory (SRAM)-based Field Programmable Gate Array (FPGA) to be used in future generations of nano-satellites. A major concern in satellite systems, and especially nano-satellites, is to build robust systems with low power consumption profiles. The system is designed to be flexible by providing the capability of reconfiguring itself based on its orbital position. Because Single Event Upsets (SEU) do not have the same severity and intensity at all orbital locations, with the maximum at the South Atlantic Anomaly (SAA) and the polar cusps, the system does not have to be fully protected all the time in its orbit. An acceptable level of protection against high-energy cosmic rays and charged particles roaming in space is provided within the majority of the orbit through software fault tolerance. Checkpointing and rollback, together with control-flow assertions, are used for that level of protection. In the minority of the orbit where severe SEUs are expected, a reconfiguration of the system FPGA is initiated in which the processor system is triplicated and protection through Triple Modular Redundancy (TMR) with feedback is provided. This technique of reconfiguring the system according to the level of threat expected from SEU-induced faults helps reduce the average dynamic power consumption of the system to one-third of its maximum, and can be viewed as smart protection through system reconfiguration. The system is built on the commercial version of the Xilinx Virtex5 (XC5VLX50) FPGA on bulk silicon with 324 I/O. Simulations of orbital SEU rates were carried out using the SPENVIS web-based software package.
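
    The TMR mode can be illustrated with a bitwise majority voter (a software sketch only; in the actual design the triplication and voting with feedback are implemented in the FPGA fabric, not in software):

```python
def tmr_vote(a, b, c):
    # Bitwise majority: each output bit agrees with at least two of the
    # three replica outputs, masking a single-replica upset.
    return (a & b) | (a & c) | (b & c)

# One replica corrupted by an SEU flipping bit 3 (0b1010 -> 0b0010):
print(tmr_vote(0b1010, 0b1010, 0b0010))  # prints 10 (== 0b1010, upset masked)
```

    The "feedback" in TMR with feedback means the voted result is also written back into the replicas' state, so a masked upset does not accumulate into a second, unmaskable one.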

  14. Robot Position Sensor Fault Tolerance

    NASA Technical Reports Server (NTRS)

    Aldridge, Hal A.

    1997-01-01

    Robot systems in critical applications, such as those in space and nuclear environments, must be able to operate during component failure to complete important tasks. One failure mode that has received little attention is the failure of joint position sensors. Current fault tolerant designs require the addition of directly redundant position sensors, which can affect joint design. A new method is proposed that uses analytical redundancy to allow continued operation during joint position sensor failure. Joint torque sensors are used with a virtual passive torque controller to make the robot joint stable without position feedback and to improve position tracking performance in the presence of unknown link dynamics and end-effector loading. Two Cartesian accelerometer-based methods are proposed to determine the position of the joint. The joint-specific position determination method uses two triaxial accelerometers attached to the link driven by the joint with the failed position sensor; it is not computationally complex and its position error is bounded. The system-wide position determination method uses accelerometers distributed on different robot links and the end-effector to determine the positions of sets of multiple joints; it requires fewer accelerometers than the joint-specific method to make all joint position sensors fault tolerant, but is more computationally complex and converges more slowly. Experiments were conducted on a laboratory manipulator. Both position determination methods were shown to track the actual position satisfactorily. A controller using the position determination methods and the virtual passive torque controller was able to servo the joints to a desired position during position sensor failure.

  15. Comparison of upwards splaying and upwards merging segmented normal faults

    NASA Astrophysics Data System (ADS)

    Freitag, U. A.; Sanderson, D. J.; Lonergan, L.; Bevan, T. G.

    2017-07-01

    A common model for normal fault growth involves a single fault at depth splaying upwards into a series of en-echelon segments. This model has been applied to faults as well as to a range of extension fractures, including veins, joints and igneous dykes. Examples of splaying growth fault systems in the Columbus Basin, offshore Trinidad, are presented. They include the commonly described upwards-splaying type, but also one fault zone with an upward change from disconnected, overlapping synthetic faults to a continuous fault. One fault zone whose high-displacement fault segments are separated by a relay ramp at depth becomes breached higher up, developing into a continuous fault in its upper part, where displacements are least. This example suggests that whilst kinematic linkage typically precedes geometric linkage in the evolution of relay ramps, low-displacement parts of a fault system may be geometrically linked whereas higher-displacement areas are only kinematically linked.

  16. The End of the Chi-Shan Fault: Tectonics of a Transtensional Fault

    NASA Astrophysics Data System (ADS)

    Chou, H.; Song, G.

    2011-12-01

    The Chishan fault is an active strike-slip fault located in southwestern Taiwan, extending into the offshore area of Soushan in Kaohsiung. The fault strikes N80°E and dips 50°N. It is believed that along the Chishan fault the Wushan Formation, composed of sandstone, is thrust over the Kutingkeng Formation, composed of mudstone, to the northwest, so that the fault acts as a reverse fault with sinistral motion (Tsan and Keng, 1968; Hsieh, 1970; Wen-Pu Geng, 1981). This left-lateral strike-slip fault extends to the shelf break and terminates there, with a transtensional basin at its termination. The basin has stopped extending toward the open sea and is instead spreading toward the inshore area. Therefore, a young extensional structure is developing on the seabed off Tsoying Naval Port, related to the transtension of the left-lateral fault (Gwo-Shyh Song, 2010). Transtensional basins deformed in onshore strike-slip settings have been described by many authors, but field outcrops can be degraded by weathering, leaving tectonic features incomplete. Hence, this research uses multibeam bathymetry and 3.5-kHz sub-bottom profiler data collected from the offshore extension of the Chishan fault off Kaohsiung to define its transtensional characteristics. First, we use the multibeam bathymetry data to make a geomorphological map of the study area, in which a triangular depressed area appears near the shelf break. We then use Fledermaus to produce 3D diagrams to understand the distribution of the major normal faults (fig. 1). Furthermore, we find numerous listric normal faults, and the area between them is curved. Finally, we use the 3.5-kHz sub-bottom profiler data to examine the subsurface structure of the normal faults and of the curved area between the listric normal faults, which appears to consist of en échelon folds. As the amount of displacement on the wrench

  17. High Resolution Seismic Imaging of Fault Zones: Methods and Examples From The San Andreas Fault

    NASA Astrophysics Data System (ADS)

    Catchings, R. D.; Rymer, M. J.; Goldman, M.; Prentice, C. S.; Sickler, R. R.; Criley, C.

    2011-12-01

    Seismic imaging of fault zones at shallow depths is challenging. Conventional seismic reflection methods do not work well in fault zones that consist of non-planar strata or that have large variations in velocity structure, two properties that occur in most fault zones. Understanding the structure and geometry of fault zones is important for elucidating the earthquake hazard associated with fault zones and the barrier effect that faults impose on subsurface fluid flow. In collaboration with the San Francisco Public Utilities Commission (SFPUC) at San Andreas Lake on the San Francisco peninsula, we acquired combined seismic P-wave and S-wave reflection, refraction, and guided-wave data to image the principal strand of the San Andreas Fault (SAF) that ruptured the surface during the 1906 San Francisco earthquake, as well as additional fault strands east of the rupture. The locations and geometries of these fault strands are important because the SFPUC is seismically retrofitting the Hetch Hetchy water delivery system, which provides much of the water for the San Francisco Bay area, and the delivery system is close to the SAF at San Andreas Lake. Seismic reflection images did not image the SAF zone well owing to the brecciated bedrock, a lack of layered stratigraphy, and widely varying velocities. Tomographic P-wave velocity images clearly delineate the fault zone as a low-velocity zone at about 10 m depth in more competent rock, but because of soil saturation above the rock, the P-waves do not clearly image the fault strands at shallower depths. S-wave velocity images, however, clearly show a diagnostic low-velocity zone at the mapped 1906 surface break. To image the fault zone at greater depths, we utilized guided waves, which exhibit high-amplitude seismic energy within fault zones. The guided waves appear to image the fault zone at varying depths depending on the frequency of the seismic waves. At higher frequencies (~30 to 40 Hz), the guided waves show strong amplification at the

  18. A rapid creeping reverse fault at the plate suture: the Chihshang fault in eastern Taiwan

    NASA Astrophysics Data System (ADS)

    Lee, J.; Angelier, J.; Chu, H.; Mu, C.; Hu, J.; Dong, J.

    2008-12-01

    The 35-km-long Chihshang fault is one of the most active segments of the Longitudinal Valley Fault, the plate suture between the converging Philippine Sea and Eurasian plates in eastern Taiwan. Within a span of about 50 years, two moderate-to-large earthquakes (M 6.2 in 1951 and M 6.5 in 2003) resulted from rupture of the Chihshang fault, both with observable surface ruptures. During the interseismic period, the Chihshang fault exhibits seasonal creep at a rather rapid rate of about 20-30 mm/yr near the surface. Based on in-situ measurements, including creepmeters (read once per day) and dense campaign geodetic networks (leveling and GPS, twice per year) across the Chihshang fault zone since 1998, together with earlier measurements of offset markers on man-made features since 1989, we characterize the interseismic fault motion at the surface. The movement of the Chihshang fault slowed significantly, however, a few years before the 2003 M 6.5 earthquake. The 2003 earthquake, which initiated at about 20 km depth, produced only a few centimeters of surface offset on the fault. By contrast, significant post-seismic slip occurred near the surface along the fault during the 6 months following the main shock. We interpret the large post-seismic near-fault deformation as a result of velocity-strengthening frictional behavior at shallow levels, due mainly to thick unconsolidated surface deposits and substantial fault gouge. Together with the strong relation between rainfall (groundwater) and fault movement inferred from the seasonal creep, we anticipate that hydro-mechanical coupling with fault friction plays an important role in triggering surface fault creep, and might affect the stress/strain in the deeper, seismogenic part of the fault. A drilling project, together with a variety of fault-movement monitoring and on-site measurements, is now underway. We aim at better

  19. Active faulting on the Wallula fault zone within the Olympic-Wallowa lineament, Washington State, USA

    USGS Publications Warehouse

    Sherrod, Brian; Blakely, Richard J.; Lasher, John P.; Lamb, Andrew P.; Mahan, Shannon; Foit, Franklin F.; Barnett, Elizabeth

    2016-01-01

    The Wallula fault zone is an integral feature of the Olympic-Wallowa lineament, an ∼500-km-long topographic lineament oblique to the Cascadia plate boundary, extending from Vancouver Island, British Columbia, to Walla Walla, Washington. The structure and past earthquake activity of the Wallula fault zone are important because of nearby infrastructure, and also because the fault zone defines part of the Olympic-Wallowa lineament in south-central Washington and suggests that the Olympic-Wallowa lineament may have a structural origin. We used aeromagnetic and ground magnetic data to locate the trace of the Wallula fault zone in the subsurface and map a quarry exposure of the Wallula fault zone near Finley, Washington, to investigate past earthquakes along the fault. We mapped three main packages of rocks and unconsolidated sediments in an ∼10-m-high quarry exposure. Our mapping suggests at least three late Pleistocene earthquakes with surface rupture, and an episode of liquefaction in the Holocene along the Wallula fault zone. Faint striae on the master fault surface are subhorizontal and suggest reverse dextral oblique motion for these earthquakes, consistent with dextral offset on the Wallula fault zone inferred from offset aeromagnetic anomalies associated with ca. 8.5 Ma basalt dikes. Magnetic surveys show that the Wallula fault actually lies 350 m to the southwest of the trace shown on published maps, passes directly through deformed late Pleistocene or younger deposits exposed at Finley quarry, and extends uninterrupted over 120 km.

  20. Fault diagnosis of sensor networked structures with multiple faults using a virtual beam based approach

    NASA Astrophysics Data System (ADS)

    Wang, H.; Jing, X. J.

    2017-07-01

    This paper presents a virtual beam based approach suitable for diagnosing multiple faults in complex structures with limited prior knowledge of the faults involved. The 'virtual beam', a recently proposed concept for fault detection in complex structures, is applied; it consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive thresholds are adopted for fault detection because prior knowledge of normal operational conditions and fault conditions is limited. To isolate multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus improve localization accuracy. The proposed method is easy and efficient to implement for multiple-fault localization with limited prior knowledge of normal conditions and faults. Extensive experimental results validate that the proposed method can localize both single and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.
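The adaptive-threshold idea can be sketched as follows. This is a generic illustration only, not the paper's statistical tests; the function name, the running mean/standard-deviation statistic, and the k = 3 cut-off are all assumptions:

```python
import statistics

def adaptive_threshold_alarm(baseline, stream, k=3.0):
    """Flag samples whose deviation from a running baseline mean
    exceeds k standard deviations; healthy samples update the
    baseline, so the threshold adapts as conditions drift."""
    mu = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    alarms = []
    for x in stream:
        if abs(x - mu) > k * sd:
            alarms.append(x)            # deviation too large: possible fault
        else:
            baseline.append(x)          # healthy sample: refresh the baseline
            mu = statistics.mean(baseline)
            sd = statistics.stdev(baseline)
    return alarms
```

Because the threshold is rebuilt from recent healthy data, no fixed model of "normal" behavior has to be specified in advance, which is the appeal of such schemes when fault conditions are poorly known.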

  1. Episodic activity of a dormant fault in tectonically stable Europe: The Rauw fault (NE Belgium)

    NASA Astrophysics Data System (ADS)

    Verbeeck, Koen; Wouters, Laurent; Vanneste, Kris; Camelbeeck, Thierry; Vandenberghe, Dimitri; Beerten, Koen; Rogiers, Bart; Schiltz, Marco; Burow, Christoph; Mees, Florias; De Grave, Johan; Vandenberghe, Noël

    2017-03-01

    Our knowledge about large earthquakes in stable continental regions comes from studies of faults that generated historical surface-rupturing earthquakes or were identified by their recent imprint on the morphology. Here, we evaluate the co-seismic character and movement history of the Rauw fault in Belgium, which lacks geomorphological expression and historical/present seismicity. This 55-km-long normal fault, with known Neogene and possibly Early Pleistocene activity, is the largest offset fault west of the active Roer Valley Graben. Its trace was identified in the shallow subsurface by high-resolution geophysics. All the layers within the Late Pliocene Mol Formation (3.6 to 2.59 Ma) are displaced 7 m vertically, without growth faulting, but deeper deposits show increasing offset. A paleoseismic trench study revealed cryoturbated, but unfaulted, late-glacial coversands overlying faulted layers of the Mol Formation. Between those deposits, the fault tip was eroded, along with the evidence for individual displacement events. Fragmented clay gouge observed in a micromorphology sample of the main fault provides evidence for co-seismic faulting, as opposed to fault creep. Based on optical and electron spin resonance dating and trench stratigraphy, the 7 m combined displacement is bracketed between 2.59 Ma and 45 ka. The regional presence of the Sterksel Formation alluvial terrace deposits, limited to the hanging wall of the Rauw fault, indicates a deflection of the Meuse/Rhine confluence (1.0 to 0.5 Ma) by the fault's activity, suggesting that most of the offset occurred before or during this time interval. In the trench, the Sterksel Formation is eroded, but reworked gravel testifies to its former presence. Hence, the Rauw fault appears typical of a plate-interior context, with episodic seismic activity concentrated between 1.0 and 0.5 Ma, or at least between 2.59 Ma and 45 ka, possibly related to activity variations in the adjacent, continuously active Roer Valley

  2. Characterization of slow slip rate faults in humid areas: Cimandiri fault zone, Indonesia

    NASA Astrophysics Data System (ADS)

    Marliyani, G. I.; Arrowsmith, J. R.; Whipple, K. X.

    2016-12-01

    In areas where regional tectonic strain is accommodated by broad zones of short, low slip rate faults, geomorphic and paleoseismic characterization of faults is difficult because of poor surface expression and long earthquake recurrence intervals. In humid areas, faults can be buried by thick sediments or soils; their geomorphic expression is subdued and sometimes undetectable until the next earthquake. In Java, active faults are diffuse, and their characterization is challenging. Among them is the ENE-striking Cimandiri fault zone. Cumulative displacement produces prominent ENE-oriented ranges, with the southeast side moving relatively upward and to the northeast. The fault zone is expressed in the bedrock by numerous NE-, west-, and NW-trending thrust and strike-slip faults and folds. However, it is unclear which of these structures are active. We performed a morphometric analysis of the fault zone using the 30 m resolution Shuttle Radar Topography Mission digital elevation model. We constructed longitudinal profiles of 601 bedrock rivers along the upthrown ranges of the fault zone, calculated the normalized channel steepness index, identified knickpoints, and used their distribution to infer relative magnitudes of rock uplift and locate boundaries that may indicate active fault traces. We compare the rock uplift distribution to the surface displacement predicted by an elastic dislocation model to determine the plausible fault kinematics. The active Cimandiri fault zone consists of six segments with a predominant sense of reverse motion. Our analysis reveals considerable geometric complexity, strongly suggesting segmentation of the fault, and thus smaller maximum earthquakes, consistent with the limited historical record of upper-plate earthquakes in Java.
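The normalized channel steepness index used above is conventionally obtained from the slope-area relation S = k_sn · A^(-θ_ref), so k_sn = S · A^(θ_ref). A minimal sketch; the reference concavity θ_ref = 0.45 is a commonly used value assumed here, not taken from the abstract:

```python
def normalized_steepness(slope, drainage_area_m2, theta_ref=0.45):
    """k_sn = S * A**theta_ref, from the slope-area relation
    S = k_sn * A**(-theta_ref); higher k_sn suggests faster rock uplift."""
    return slope * drainage_area_m2 ** theta_ref

# A channel reach with gradient 0.05 draining 1 km^2:
ksn = normalized_steepness(0.05, 1.0e6)   # ~25 (dimensions depend on theta_ref)
```

Comparing k_sn between reaches, rather than raw slope, removes the systematic downstream decline of gradient with drainage area, which is what makes it usable as a relative uplift proxy.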

  3. Experimental study on propagation of fault slip along a simulated rock fault

    NASA Astrophysics Data System (ADS)

    Mizoguchi, K.

    2015-12-01

    Around pre-existing geological faults in the crust, we often observe off-fault damage zones containing fractures at many scales, from ~mm to ~m, whose density typically increases with proximity to the fault. One process thought to form these fractures is dynamic shear rupture propagation on the faults, which leads to the occurrence of earthquakes. Here, I have conducted experiments on the propagation of fault slip along a pre-cut rock surface to investigate the damage behavior of rocks during slip propagation. For the experiments, I used a pair of metagabbro blocks from Tamil Nadu, India, whose contacting surfaces simulate a fault 35 cm in length and 1 cm in width. The experiments were done with a uniaxial loading configuration similar to that of Rosakis et al. (2007). The axial load σ is applied to the fault plane at an angle of 60° to the loading direction. When σ is 5 kN, the normal and shear stresses on the fault are 1.25 MPa and 0.72 MPa, respectively. The timing and direction of slip propagation on the fault during the experiments were monitored with several strain gauges arrayed at intervals along the fault. The gauge data were digitally recorded at a 1 MHz sampling rate with 16-bit resolution. When σ = 4.8 kN is applied, we observe fault slip events in which slip nucleates spontaneously in a subsection of the fault and propagates across the whole fault. However, the propagation speed is about 1.2 km/s, much lower than the S-wave velocity of the rock. This indicates that the slip events were not earthquake-like dynamic ruptures. More effort is needed to reproduce earthquake-like slip events in these experiments. This work is supported by JSPS KAKENHI (26870912).
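The quoted stresses follow from resolving the axial load onto the 35 cm × 1 cm fault plane; a quick numerical check, using only values given in the abstract:

```python
import math

F = 5.0e3                     # axial load, N (the 5 kN case)
area = 0.35 * 0.01            # fault plane area, m^2 (35 cm x 1 cm)
theta = math.radians(60.0)    # angle between fault plane and loading direction

# Force components normal to and along the fault, divided by fault area:
sigma_n = F * math.sin(theta) / area   # normal stress, Pa
tau     = F * math.cos(theta) / area   # shear stress, Pa

# sigma_n ~ 1.24 MPa and tau ~ 0.71 MPa, consistent with the quoted
# 1.25 MPa and 0.72 MPa to rounding.
```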

  4. Dissecting Oceanic Detachment Faults: Fault Zone Geometry, Deformation Mechanisms, and Nature of Fluid-Rock Interactions

    NASA Astrophysics Data System (ADS)

    Bonnemains, D.; Escartin, J.; Verlaguet, A.; Andreani, M.; Mevel, C.

    2015-12-01

    To understand the extreme strain localization at long-lived oceanic detachment faults rooting deeply below the axis, we present results of geological investigations at the 13°19'N detachment along the Mid-Atlantic Ridge, conducted during the ODEMAR cruise (Nov-Dec 2013, N/O Pourquoi Pas?) with the ROV Victor6000 (IFREMER). During this cruise we investigated and sampled the corrugated fault to understand its geometry, the nature of its deformation, and its links to fluid flow. We identified and explored seven fault outcrops on the flanks of microbathymetric striations subparallel to extension. These outcrops expose extensive fault planes, the most prominent extending 40-90 m laterally and up to 10 m vertically. The fault surfaces systematically show subhorizontal striations subparallel to extension, and define slabs of fault rock that are flat and striated at the sample scale as well. Visual observations show a complex detachment fault zone, with anastomosing fault planes at outcrop scale (1-10 m) and a highly heterogeneous distribution of deformation. We also observe heterogeneity in fault-rock nature at the outcrop scale. In-situ samples from striated faults are primarily basalt breccias with prior greenschist-facies alteration, plus a few ultramafic fault rocks that record a complex deformation history, with early schistose textures brittly reworked as clasts within the fault. The basalt breccias show variable silicification and associated sulfides, recording important fluid-rock interactions during exhumation. To understand the link between fluids and deformation during exhumation, we will present microstructural observations of deformation textures; the composition, distribution, and origin of quartz and sulfides; and constraints on the temperature of the silicifying fluids from fluid inclusions in quartz. These results allow us to characterize the detachment fault zone geometry in detail and to investigate the timing of silicification relative to deformation.

  5. Poro-Elasto-Plastic Off-Fault Response and Dynamics of Earthquake Faulting

    NASA Astrophysics Data System (ADS)

    Hirakawa, Evan Tyler

    Previous models of earthquake rupture dynamics have neglected interesting deformational properties of fault zone materials. While most current studies involving off-fault inelastic deformation employ simple brittle failure yield criteria such as the Drucker-Prager yield criterion, the material surrounding the fault plane itself, known as fault gouge, has the tendency to deform in a ductile manner accompanied by compaction. We incorporate this behavior into a new constitutive model of undrained fault gouge in a dynamic rupture model. Dynamic compaction of undrained fault gouge occurs ahead of the rupture front. This corresponds to an increase in pore pressure which preweakens the fault, reducing the static friction. Subsequent dilatancy and softening of the gouge causes a reduction in pore pressure, resulting in fault restrengthening and brief slip pulses. This leads to localization of inelastic failure to a narrow shear zone. We extend the undrained gouge model to a study of self-similar rough faults. Extreme compaction and dilatancy occur at restraining and releasing bends, respectively. The consequent elevated pore pressure at restraining bends weakens the fault and allows the rupture to easily pass, while the decrease in pore pressure at releasing bends dynamically strengthens the fault and slows rupture. In comparison to other recent models, we show that the effects of fault roughness on propagation distance, slip distribution, and rupture velocity are diminished or reversed. Next, we represent large subduction zone megathrust earthquakes with a dynamic rupture model of a shallow dipping fault underlying an accretionary wedge. In previous models by our group [Ma, 2012; Ma and Hirakawa, 2013], inelastic deformation of wedge material was shown to enhance vertical uplift and potential tsunamigenesis. Here, we include a shallow region of velocity strengthening friction with a rate-and-state framework. We find that coseismic increase of the basal friction drives
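The pre-weakening and restrengthening described above both follow from the effective-stress law: Coulomb strength scales with (σn − p), so a compaction-driven rise in pore pressure lowers strength ahead of the rupture, and dilatancy-driven pressure drop restores it. A minimal numerical illustration; the friction coefficient and stress values are arbitrary, not taken from the dissertation:

```python
def coulomb_strength(friction, sigma_n_mpa, pore_pressure_mpa):
    """Shear strength tau = f * (sigma_n - p), i.e. Coulomb friction
    acting on the Terzaghi effective normal stress."""
    return friction * (sigma_n_mpa - pore_pressure_mpa)

ambient   = coulomb_strength(0.6, 100.0, 30.0)  # 42 MPa
compacted = coulomb_strength(0.6, 100.0, 60.0)  # 24 MPa: gouge pre-weakened
dilated   = coulomb_strength(0.6, 100.0, 10.0)  # 54 MPa: fault restrengthened
```

The sign of the pore-pressure change, not the friction coefficient itself, is what flips the fault between weakened and strengthened states in this picture.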

  6. Effects of Hayward fault interactions with the Rodgers Creek and San Andreas faults

    NASA Astrophysics Data System (ADS)

    Parsons, T.; Geist, E.; Jachens, R.; Sliter, R.; Jaffe, B.

    2003-12-01

    Finite-element and crustal-structure models of the Hayward fault emphasize its position within a network of interacting faults, and indicate a number of expected influences from other faults. For example, a new structural cross section across San Pablo Bay in association with potential field maps allows us to map and model detailed interactions between the Hayward and Rodgers Creek faults. The two faults do not appear to connect at depth, and finite-element models indicate growing extensional stress in the stepover between the two faults. A model consequence of extensional stress in the stepover, combined with long-term interaction with the San Andreas fault, is normal-stress reduction (unclamping) of the north Hayward fault. If this occurs in the real Earth, then a substantial reduction in frictional resistance on the north Hayward fault is expected, which might in turn influence the distribution of creep. Interaction effects on a shorter time scale are also evident. The 1906 San Francisco and 1989 Loma Prieta earthquakes are calculated to have reduced stress on the Hayward fault at seismogenic depths. Models of the 1906 earthquake show complex interactions; coseismic static stress changes drop stress on the north Hayward fault while upper mantle viscoelastic relaxation slightly raises the stressing rate. Stress recovery is calculated to have occurred by ~1980, though earthquake probability is still affected by the delay induced by stress reduction. We conclude that the model Hayward fault is strongly influenced by its neighbors, and it is worth considering these effects when studying and attempting to understand the real fault.
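Static stress-change arguments like these are usually quantified with the Coulomb failure stress change, ΔCFS = Δτ + μ′Δσn, where Δσn > 0 denotes unclamping. A sketch under that common sign convention; the effective friction coefficient μ′ = 0.4 is a frequently assumed value, not one stated in the abstract:

```python
def delta_cfs(d_tau_mpa, d_sigma_n_mpa, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault.
    d_tau: shear stress change resolved in the slip direction (MPa);
    d_sigma_n: normal stress change, positive = unclamping (MPa).
    Positive result brings the receiver closer to failure."""
    return d_tau_mpa + mu_eff * d_sigma_n_mpa

# A source event that reduces shear load on the receiver but also unclamps it:
change = delta_cfs(-0.10, 0.05)   # -0.08 MPa: net effect still discouraging
```

Unclamping alone can therefore promote failure (or creep) even when the shear stress change is small, which is the mechanism invoked for the north Hayward fault above.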

  7. Data Fault Detection in Medical Sensor Networks

    PubMed Central

    Yang, Yang; Liu, Qian; Gao, Zhipeng; Qiu, Xuesong; Meng, Luoming

    2015-01-01

    Medical body sensors can be implanted or attached to the human body to monitor the physiological parameters of patients at all times. Inaccurate data due to sensor faults or incorrect placement on the body will seriously influence clinicians’ diagnoses; therefore, detecting sensor data faults has been widely researched in recent years. Most typical approaches to sensor fault detection in the medical area ignore the fact that the physiological indexes of patients do not change synchronously, and fault values mixed with abnormal physiological data due to illness make it difficult to determine true faults. Based on these facts, we propose a Data Fault Detection mechanism in Medical sensor networks (DFD-M). Its mechanism includes: (1) use of a dynamic-local outlier factor (D-LOF) algorithm to identify outlying sensed data vectors; (2) use of a linear regression model based on trapezoidal fuzzy numbers to predict which readings in the outlying data vector are suspected to be faulty; (3) a novel judgment criterion for the fault state according to the predicted values. The simulation results demonstrate the efficiency and superiority of DFD-M. PMID:25774708
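As a rough illustration of step (1), an outlying sensed data vector can be scored by its distance to its nearest neighbours in recent history. This toy score only gestures at the idea; the paper's D-LOF algorithm additionally normalizes by the neighbours' own local densities, and the example data and function name are invented:

```python
def knn_outlier_score(history, query, k=3):
    """Mean Euclidean distance from `query` to its k nearest vectors
    in `history`; large values flag candidate faulty data vectors."""
    dists = sorted(
        sum((a - b) ** 2 for a, b in zip(v, query)) ** 0.5 for v in history
    )
    return sum(dists[:k]) / k

# Hypothetical (temperature, pulse) vectors from a body sensor:
history = [(36.6, 72.0), (36.7, 75.0), (36.5, 71.0), (36.6, 74.0)]
normal  = knn_outlier_score(history, (36.6, 73.0))
faulty  = knn_outlier_score(history, (45.0, 73.0))  # implausible temperature
```

Because the score is relative to the patient's own recent readings, a genuinely ill patient whose indexes drift together scores lower than a single sensor reporting a physically implausible value, which is the distinction DFD-M is designed to make.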

  8. On-line diagnosis of unrestricted faults

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.; Sundstrom, R. J.

    1974-01-01

    A formal model for the study of on-line diagnosis is introduced and used to investigate the diagnosis of unrestricted faults. A fault of a system S is considered to be a transformation of S into another system S' at some time tau. The resulting faulty system is taken to be the system which looks like S up to time tau, and like S' thereafter. Notions of fault tolerance and error are defined in terms of the resulting system being able to mimic some desired behavior as specified by a system similar to S. A notion of on-line diagnosis is formulated which involves an external detector and a maximum time delay within which every error caused by a fault in a prescribed set must be detected. It is shown that if a system is on-line diagnosable for the unrestricted set of faults, then the detector is at least as complex, in terms of state set size, as the specification. The use of inverse systems for the diagnosis of unrestricted faults is considered. A partial characterization of those inverses which can be used for unrestricted fault diagnosis is obtained.

  9. Data fault detection in medical sensor networks.

    PubMed

    Yang, Yang; Liu, Qian; Gao, Zhipeng; Qiu, Xuesong; Meng, Luoming

    2015-03-12

    Medical body sensors can be implanted or attached to the human body to monitor the physiological parameters of patients at all times. Inaccurate data due to sensor faults or incorrect placement on the body will seriously influence clinicians' diagnoses; therefore, detecting sensor data faults has been widely researched in recent years. Most typical approaches to sensor fault detection in the medical area ignore the fact that the physiological indexes of patients do not change synchronously, and fault values mixed with abnormal physiological data due to illness make it difficult to determine true faults. Based on these facts, we propose a Data Fault Detection mechanism in Medical sensor networks (DFD-M). Its mechanism includes: (1) use of a dynamic-local outlier factor (D-LOF) algorithm to identify outlying sensed data vectors; (2) use of a linear regression model based on trapezoidal fuzzy numbers to predict which readings in the outlying data vector are suspected to be faulty; (3) a novel judgment criterion for the fault state according to the predicted values. The simulation results demonstrate the efficiency and superiority of DFD-M.

  10. Formal Validation of Fault Management Design Solutions

    NASA Technical Reports Server (NTRS)

    Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John

    2013-01-01

    The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.

  11. Extension and contraction of faulted marker planes

    NASA Astrophysics Data System (ADS)

    Jackson, Marie D.; Delaney, Paul T.

    1985-08-01

    We present graphical and analytical methods to determine the extensional or contractional separation of a faulted planar marker using commonly measured field data: fault attitude, slip direction, and bedding or other marker-plane attitude. This determination is easily accomplished for horizontal markers. Faults with normal components of slip extend the markers and indicate extensional tectonics; those with reverse components are contractional. Although the methods quantify this simple relation for horizontal markers, they are most useful in rocks with planar fabrics of steep dip where marker separation cannot be uniquely determined from map or outcrop patterns alone and where faults with normal components of dip slip can contract markers and those with reverse components can extend them. The methods rely on two parameters: (1) the angle between normals to the marker and fault planes and (2) the angle between the slip direction and intersection of the marker and fault. This second parameter measures the obliquity of slip relative to the directions of maximum extensional or contractional separation of the marker, and for a horizontal marker, it is equivalent to the rake of the slip direction. The graphical method requires stereographic projections routinely used for faulting data; the analytical method is programmable on a calculator.
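The two angle parameters the method relies on can be computed directly from field measurements with basic vector geometry. A sketch in an east-north-up frame; the strike/dip conversion conventions used here are standard (right-hand rule) but are assumptions of this illustration, not reproduced from the paper:

```python
import math

def plane_normal(strike_deg, dip_deg):
    """Upward unit normal of a plane from strike and dip,
    right-hand rule: dip direction = strike + 90 degrees."""
    dd, d = math.radians(strike_deg + 90.0), math.radians(dip_deg)
    return (math.sin(dd) * math.sin(d),   # east
            math.cos(dd) * math.sin(d),   # north
            math.cos(d))                  # up

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def angle_between_deg(u, v):
    """Acute angle between two lines (or plane normals)."""
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    c = abs(sum(a * b for a, b in zip(u, v))) / (nu * nv)
    return math.degrees(math.acos(min(1.0, c)))

# Parameter (1): angle between normals to the marker and fault planes.
n_fault  = plane_normal(0.0, 60.0)   # north-striking fault dipping 60 deg east
n_marker = plane_normal(0.0, 0.0)    # horizontal marker bed
param1 = angle_between_deg(n_fault, n_marker)        # 60 degrees

# Parameter (2): angle between slip direction and the plane intersection.
# Pure dip slip on this fault, as a down-dip unit vector:
slip = (math.cos(math.radians(60.0)), 0.0, -math.sin(math.radians(60.0)))
intersection = cross(n_fault, n_marker)
param2 = angle_between_deg(slip, intersection)       # 90 degrees
```

For the horizontal marker, param2 comes out equal to the rake of the slip direction (90° for pure dip slip), matching the equivalence stated in the abstract.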

  12. Paleoseismic evidence of surface faulting on a normal fault segment within the Aegean Extensional fault system; The Knidos fault, SW Turkey

    NASA Astrophysics Data System (ADS)

    Ersen Aksoy, M.; Yıldırım, Cengiz; Türe, Orkun; Yılmaz, Özlem; Şahin, Sefa; Akif Sarıkaya, M.; Doksanaltı, Ertekin M.

    2017-04-01

    The Knidos fault is a 2 km long fault segment within the southern Aegean Extensional Province. The normal fault is expressed as 5-6 m high limestone escarpments and strikes through the ancient city of Knidos, which dates back to the 7th century B.C. Historical documents and archaeo-seismic data record two destructive earthquakes at Knidos, in the 2nd-3rd century B.C. and in 459 A.D. Here, we opened four trenches to reveal the relationship between the fault and the earthquake-related damage in the city. In Trenches 2 and 3 we identified a 1-2 m wide fault zone. Trench 2 exposed six colluvial units, of which the lower four are truncated by faults; the upper two overlie the faults with a sharp erosional contact. Our structural analysis indicates at least three, and probably four, faulting events. The most recent and penultimate events are each overlain by a separate colluvium that buries the corresponding event horizon. Our trenches reached depths of 1-2 m and exposed pottery fragments dating from the 2nd century B.C. to the 2nd century A.D. In addition, ages obtained from bulk samples showed that the trenches expose a stratigraphy spanning 1000 B.C. to the present. Radiocarbon (14C) dating allowed us to constrain the ages of the two most recent events: the penultimate event most probably occurred between 1336 and 1628 A.D., and the latest after 1655 A.D. Both earthquakes fall in the period when the city was in decline and are therefore not attributed to Knidos by historical accounts. Our results reveal that the Knidos fault has ruptured twice within the last 700 years. However, further paleoseismic trenching is required to better constrain the ages of these earthquakes.

  13. a Study of Fault Zone Hydrology

    NASA Astrophysics Data System (ADS)

    Karasaki, K.; Onishi, C. T.; Goto, J.; Moriya, T.; Tsuchi, H.; Ueta, K.; Kiho, K.; Miyakawa, K.

    2010-12-01

    The Nuclear Waste Management Organization of Japan and Lawrence Berkeley National Laboratory are collaborating at a dedicated field site to further understand fault zone hydrology and to develop the technology for characterizing it. To this end, several deep trenches were cut and a number of geophysical surveys were conducted across the Wildcat Fault in the hills east of Berkeley, California. The Wildcat Fault is believed to be a strike-slip fault and a member of the Hayward Fault System, with over 10 km of displacement. So far, three boreholes of ~150 m have been core-drilled: one on the east side and two on the west side of the suspected fault trace. The lithology at the Wildcat Fault consists mainly of chert, shale and sandstone, extensively sheared and fractured, with gouges observed at several depths and a thick cataclasite zone. After hydraulic testing, the boreholes were instrumented with temperature and pressure sensors at multiple levels. Preliminary results from these holes indicated that the geology was not what was expected: while confirming some earlier published conclusions about the Wildcat Fault, they also led to some unexpected findings. The pressure and temperature distributions indicate a downward hydraulic gradient and a relatively large geothermal gradient. The Wildcat Fault near the field site appears to consist of multiple faults. The hydraulic test data suggest a dual character in the hydrologic structure of the fault zone. As of this writing, an inclined fourth borehole is being drilled to penetrate the main Wildcat Fault. Using the existing three boreholes as observation wells, we plan to conduct hydrologic cross-hole tests in this fourth borehole. The main philosophy behind our approach to the hydrologic characterization of such a complex fractured system is to let the system take its own average and to monitor long-term behavior, instead of collecting a multitude of data at small length and time scales, or at a discrete fracture scale, and

  14. Impacts of off-fault plasticity on fault slip and interaction at the base of the seismogenic zone

    NASA Astrophysics Data System (ADS)

    Nevitt, Johanna M.; Pollard, David D.

    2017-02-01

    Direct observations of faults exhumed from midcrustal depths indicate that distributed inelastic deformation enhances fault slip and interaction across steps. Constrained by field measurements, finite element models demonstrate that the slip distribution for a fault in a Mises elastoplastic continuum differs significantly from that of a linear elastic model fault. Lobes of plastic shear strain align with fault tips and effectively lengthen the fault, resulting in greater maximum slip and increased slip gradients near fault tips. Additionally, distributed plastic shear strain facilitates slip transfer between echelon fault segments. Fault arrays separated by contractional steps, which are subjected to greater mean normal stress and Mises equivalent stress, produce greater maximum slip than do those separated by extensional steps (with no fractures). These results provide insight into fault behavior at the base of the seismogenic zone, with implications for rupture dynamics of discontinuous faults.
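The linear-elastic baseline these elastoplastic results are compared against has a classical closed form: a 2-D plane-strain crack of half-length a under uniform stress drop Δτ slips in an elliptical profile, Δu(x) = (2(1−ν)Δτ/μ)·√(a² − x²), with maximum slip at the centre and the steepest gradients at the tips. A sketch with illustrative parameter values (not taken from the paper):

```python
import math

def elastic_slip(x, half_length, stress_drop, shear_modulus, poisson=0.25):
    """Slip at along-fault position x for a uniform stress-drop,
    plane-strain elastic crack; zero at and beyond the tips."""
    if abs(x) >= half_length:
        return 0.0
    return (2.0 * (1.0 - poisson) * stress_drop / shear_modulus
            * math.sqrt(half_length ** 2 - x ** 2))

# A 2-km-long fault, 3 MPa stress drop, 30 GPa shear modulus:
u_mid = elastic_slip(0.0, 1000.0, 3.0e6, 30.0e9)    # 0.15 m at the centre
u_tip = elastic_slip(999.0, 1000.0, 3.0e6, 30.0e9)  # small slip near the tip
```

The plastic lobes described above effectively lengthen the fault relative to this elastic profile, raising the maximum slip and steepening the near-tip slip gradients.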

  15. Mechanical Models of Fault-Related Folding

    SciTech Connect

    Johnson, A. M.

    2003-01-09

    The subject of the proposed research is fault-related folding and ground deformation. The results are relevant to oil-producing structures throughout the world, to understanding the damage that has been observed along and near earthquake ruptures, and to earthquake-producing structures in California and other tectonically active areas. The objectives of the proposed research were both to provide a unified mechanical infrastructure for studies of fault-related folding and to present the results in computer programs with graphical user interfaces (GUIs), so that structural geologists and geophysicists can model a wide variety of fault-related folds (FaRFs).

  16. Cooperative human-machine fault diagnosis

    NASA Technical Reports Server (NTRS)

    Remington, Roger; Palmer, Everett

    1987-01-01

    Current expert system technology does not permit complete automatic fault diagnosis; significant levels of human intervention are still required. This requirement dictates a need for a division of labor that recognizes the strengths and weaknesses of both human and machine diagnostic skills. Relevant findings from the literature on human cognition are combined with the results of reviews of aircrew performance with highly automated systems to suggest how the interface of a fault diagnostic expert system can be designed to assist human operators in verifying machine diagnoses and guiding interactive fault diagnosis. It is argued that the needs of the human operator should play an important role in the design of the knowledge base.

  17. Efficient fault diagnosis of helicopter gearboxes

    NASA Technical Reports Server (NTRS)

    Chin, H.; Danai, K.; Lewicki, D. G.

    1993-01-01

    Application of a diagnostic system to a helicopter gearbox is presented. The diagnostic system is a nonparametric pattern classifier that uses a multi-valued influence matrix (MVIM) as its diagnostic model and benefits from a fast learning algorithm that enables it to estimate its diagnostic model from a small number of measurement-fault data. To test this diagnostic system, vibration measurements were collected from a helicopter gearbox test stand during accelerated fatigue tests and at various fault instances. The diagnostic results indicate that the MVIM system can accurately detect and diagnose various gearbox faults so long as they are included in training.
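
    The influence-matrix idea behind such a classifier can be illustrated with a toy example. This is a simplified stand-in, not the multi-valued MVIM formulation itself; the fault names, features, and numbers are invented for illustration:

```python
import numpy as np

# Toy influence-matrix diagnosis: each column encodes how strongly a
# candidate fault is expected to affect each vibration feature;
# diagnosis picks the fault whose column best matches the observation.
faults = ["gear_pit", "bearing_spall", "shaft_misalign"]
influence = np.array([[0.9, 0.1, 0.2],   # rows: vibration features
                      [0.2, 0.8, 0.1],   # cols: candidate faults
                      [0.1, 0.3, 0.9]])

def diagnose(measurement):
    scores = influence.T @ measurement   # one match score per fault
    return faults[int(np.argmax(scores))]

print(diagnose(np.array([0.85, 0.25, 0.15])))  # -> gear_pit
```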

  18. Tunable architecture for aircraft fault detection

    NASA Technical Reports Server (NTRS)

    Ganguli, Subhabrata (Inventor); Papageorgiou, George (Inventor); Glavaski-Radovanovic, Sonja (Inventor)

    2012-01-01

    A method for detecting faults in an aircraft is disclosed. The method involves predicting at least one state of the aircraft and tuning at least one threshold value to tightly upper bound the size of a mismatch between the at least one predicted state and a corresponding actual state of the non-faulted aircraft. If the mismatch between the at least one predicted state and the corresponding actual state is greater than or equal to the at least one threshold value, the method indicates that at least one fault has been detected.
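
    The threshold logic described above can be sketched as follows. This is a minimal illustration with made-up residual values, not the patented architecture:

```python
import numpy as np

def detect_fault(predicted, actual, threshold):
    """Flag a fault when the prediction/measurement mismatch reaches
    the tuned threshold (illustrative interface)."""
    mismatch = np.abs(np.asarray(predicted) - np.asarray(actual))
    return bool(np.any(mismatch >= threshold))

# Tune the threshold to tightly upper-bound the mismatch observed
# during non-faulted operation (synthetic residuals).
nominal_mismatch = np.array([0.02, 0.05, 0.03, 0.04])
threshold = nominal_mismatch.max() * 1.1   # small safety margin

print(detect_fault([1.00], [1.04], threshold))  # within bound -> False
print(detect_fault([1.00], [1.20], threshold))  # exceeds bound -> True
```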

  19. On-line diagnosis of unrestricted faults

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.; Sundstrom, R. J.

    1975-01-01

    Attention is given to the formal development of the notion of a discrete-time system and the associated concepts of fault, result of a fault, and error. The considered concept of on-line diagnosis is formalized and a diagnosis using inverse machines is discussed. The case of an inverse which is lossless is investigated. It is found that in such a case the class of unrestricted faults can be diagnosed with a delay equal to the delay of losslessness of the inverse system.

  20. Geofluid Dynamics of Faulted Sedimentary Basins

    NASA Astrophysics Data System (ADS)

    Garven, G.; Jung, B.; Boles, J. R.

    2014-12-01

    Faults are known to affect basin-scale groundwater flow and exert a profound control on petroleum migration/accumulation, the PVT-history of hydrothermal fluids, and the natural (submarine) seepage from offshore reservoirs. For example, in the Santa Barbara basin, measured gas flow data from a natural submarine seep area in the Santa Barbara Channel help constrain fault permeability k ~ 30 millidarcys for the large-scale upward migration of methane-bearing formation fluids along one of the major fault zones. At another offshore site near Platform Holly, pressure-transducer time-series data from a 1.5 km deep exploration well in the South Ellwood Field demonstrate a strong ocean tidal component, due to vertical fault connectivity to the seafloor. Analytical solutions to the poroelastic flow equation can be used to extract both fault permeability and compressibility parameters, based on tidal-signal amplitude attenuation and phase shift at depth. These data have proven useful in constraining coupled hydrogeologic 2-D models for reactive flow and geomechanical deformation. In a similar vein, our studies of faults in the Los Angeles basin suggest an important role for the natural retention of fluids along the Newport-Inglewood fault zone. Based on the estimates of fault permeability derived above, we have also constructed new two-dimensional numerical simulations to characterize large-scale multiphase flow in complex heterogeneous and anisotropic geologic profiles, such as the Los Angeles basin. The numerical model was developed from scratch in our lab at Tufts and is based on an IMPES-type algorithm for a finite element/volume mesh. This numerical approach allowed us to model large differentials in fluid saturation and relative permeability, caused by complex geological heterogeneities associated with sedimentation and faulting.
Our two-phase flow models also replicated the formation-scale patterns of petroleum accumulation associated with the basin margin, where deep
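
    The tidal-attenuation idea can be sketched with a simplified one-dimensional pressure-diffusion model. All numbers below are illustrative; the study itself used poroelastic solutions and site-specific data:

```python
import numpy as np

M2_PERIOD_S = 12.42 * 3600.0   # semidiurnal lunar (M2) tidal period

def tidal_response(diffusivity, depth_m, period_s=M2_PERIOD_S):
    """Amplitude ratio and phase lag of a tidal pressure signal at
    depth, for 1-D diffusion with hydraulic diffusivity D [m^2/s]."""
    omega = 2.0 * np.pi / period_s
    skin_depth = np.sqrt(2.0 * diffusivity / omega)
    return np.exp(-depth_m / skin_depth), depth_m / skin_depth

def diffusivity_from_attenuation(ratio, depth_m, period_s=M2_PERIOD_S):
    """Invert an observed amplitude ratio for hydraulic diffusivity."""
    omega = 2.0 * np.pi / period_s
    skin_depth = depth_m / -np.log(ratio)
    return 0.5 * skin_depth**2 * omega

att, lag = tidal_response(diffusivity=1.0, depth_m=1500.0)
d_back = diffusivity_from_attenuation(att, depth_m=1500.0)  # recovers 1.0
```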

  1. The San Andreas Fault System, California

    USGS Publications Warehouse

    Wallace, Robert E.

    1990-01-01

    Maps of northern and southern California printed on flyleaf inside front cover and on adjacent pages show faults that have had displacement within the past 2 million years. Those that have had displacement within historical time are shown in red. Bands of red tint emphasize zones of historical displacement; bands of orange tint emphasize major faults that have had Quaternary displacement before historical time. Faults are dashed where uncertain, dotted where covered by sedimentary deposits, and queried where doubtful. Arrows indicate direction of relative movement; sawteeth are on the upper plate of thrust faults. These maps are reproductions, in major part, of selected plates from the "Fault Map of California," published in 1975 by the California Division of Mines and Geology at a scale of 1:750,000; the State map was compiled and data interpreted by Charles W. Jennings. New data about faults, not shown on the 1975 edition, required modest revisions, primarily additions; however, most of the map was left unchanged because the California Division of Mines and Geology is currently engaged in a major revision and update of the 1975 edition. Because of the reduced scale here, names of faults and places were redrafted or omitted. Faults added to the reduced map are not as precise as on the original State map, and the editor of this volume selected certain faults and omitted others. Principal regions for which new information was added are the region north of the San Francisco Bay area and the offshore regions. Many people have contributed to the present map, but the editor is solely responsible for any errors and omissions. Among those contributing informally, but extensively, and the regions to which each contributed were G.A. Carver, onland region north of lat 40°N.; S.H. Clarke, offshore region north of Cape Mendocino; R.J. McLaughlin, onland region between lat 40°00' and 40°30' N. and long 123°30' and 124°30' W.; D.S. McCulloch, offshore region between lat 35° and 40° N

  2. Fault tolerance and testing for WSI systems

    NASA Astrophysics Data System (ADS)

    Ptak, Alan W.; McLeod, R. D.

    Fault tolerance and testing for wafer scale integration (WSI) processor arrays using boundary scan and built-in self-test (BIST) technology are discussed. A test strategy for verification of all components within an integrated circuit wafer is presented, and a fault tolerance technique using semi-concurrent fault detection is described. The test strategy consists of four steps taken to verify test bus continuity, boundary scan register continuity, interconnection network connectivity, and processor element integrity. The component-level area overhead for boundary scan and BIST is modest for present-day fabrication processes, and will diminish to an insignificant level as integrated circuit fabrication technology continues to improve.

  3. Cooperative application/OS DRAM fault recovery.

    SciTech Connect

    Ferreira, Kurt Brian; Bridges, Patrick G.; Heroux, Michael Allen; Hoemmen, Mark; Brightwell, Ronald Brian

    2012-05-01

    Exascale systems will present considerable fault-tolerance challenges to applications and system software. These systems are expected to suffer several hard and soft errors per day. Unfortunately, many fault-tolerance methods in use, such as rollback recovery, are unsuitable for many expected errors, for example DRAM failures. As a result, applications will need to address these resilience challenges to more effectively utilize future systems. In this paper, we describe work on a cross-layer application/OS framework to handle uncorrected memory errors. We illustrate the use of this framework through its integration with a new fault-tolerant iterative solver within the Trilinos library, and present initial convergence results.

  4. The mechanics of gravity-driven faulting

    NASA Astrophysics Data System (ADS)

    Barrows, L.; Barrows, V.

    2010-04-01

    Faulting can result from either of two different mechanisms, which involve fundamentally different energetics. In elastic rebound, locked-in elastic strain energy is transformed into the earthquake (seismic waves plus work done in the fault zone). In force-driven faulting, the forces that create the stress on the fault supply work or energy to the faulting process. Half of this energy is transformed into the earthquake and half goes into an increase in locked-in elastic strain. In elastic rebound the locked-in elastic strain drives slip on the fault. In force-driven faulting it stops slip on the fault. Tectonic stress is reasonably attributed to gravity acting on topography and the Earth's lateral density variations. This includes the thermal convection that ultimately drives plate tectonics. Mechanical analysis has shown that the intensity of the gravitational tectonic stress associated with the regional topography and lateral density variations that actually exist is comparable to the stress drops commonly associated with tectonic earthquakes; both are in the range of tens of bars to several hundred bars. The gravity collapse seismic mechanism assumes the fault fails and slips in direct response to the gravitational tectonic stress. Gravity collapse is an example of force-driven faulting. In the simplest case, energy that is released from the gravitational potential of the stress-causing topography and lateral density variations is equally split between the earthquake and the increase in locked-in elastic strain. The release of gravitational potential energy requires a change in the Earth's density distribution. Gravitational body forces are solely dependent on density, so a change in the density distribution requires a change in the body forces. This implies the existence of volumetric body-force displacements. The volumetric body-force displacements are in addition to displacements generated by slip on the fault. They must exist if gravity
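
    The equal energy split described in this abstract can be seen in a one-degree-of-freedom analogue: a constant driving force F acting against an elastic element of stiffness k (symbols here are illustrative, not taken from the paper). With final slip d = F/k,

```latex
W_{\mathrm{supplied}} = F\,d = \frac{F^{2}}{k}, \qquad
E_{\mathrm{elastic}} = \tfrac{1}{2} k d^{2} = \frac{F^{2}}{2k}, \qquad
E_{\mathrm{earthquake}} = W_{\mathrm{supplied}} - E_{\mathrm{elastic}} = \frac{F^{2}}{2k},
```

    so exactly half of the work supplied by the force is stored as elastic strain and half is available to the earthquake.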

  5. Cooperative human-machine fault diagnosis

    NASA Technical Reports Server (NTRS)

    Remington, Roger; Palmer, Everett

    1987-01-01

    Current expert system technology does not permit complete automatic fault diagnosis; significant levels of human intervention are still required. This requirement dictates a need for a division of labor that recognizes the strengths and weaknesses of both human and machine diagnostic skills. Relevant findings from the literature on human cognition are combined with the results of reviews of aircrew performance with highly automated systems to suggest how the interface of a fault diagnostic expert system can be designed to assist human operators in verifying machine diagnoses and guiding interactive fault diagnosis. It is argued that the needs of the human operator should play an important role in the design of the knowledge base.

  6. Cooperative Human-Machine Fault Diagnosis

    NASA Astrophysics Data System (ADS)

    Remington, Roger; Palmer, Everett

    1987-02-01

    Current expert system technology does not permit complete automatic fault diagnosis; significant levels of human intervention are still required. This requirement dictates a need for a division of labor that recognizes the strengths and weaknesses of both human and machine diagnostic skills. Relevant findings from the literature on human cognition are combined with the results of reviews of aircrew performance with highly automated systems to suggest how the interface of a fault diagnostic expert system can be designed to assist human operators in verifying machine diagnoses and guiding interactive fault diagnosis. It is argued that the needs of the human operator should play an important role in the design of the knowledge base.

  7. Negative Selection Algorithm for Aircraft Fault Detection

    NASA Technical Reports Server (NTRS)

    Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.

    2004-01-01

    We investigated a real-valued Negative Selection Algorithm (NSA) for fault detection in man-in-the-loop aircraft operation. The detection algorithm uses body-axis angular-rate sensor data exhibiting normal flight behavior patterns to probabilistically generate a set of fault detectors that can detect abnormalities (including faults and damage) in the behavior pattern of the aircraft in flight. We performed experiments with datasets (collected under normal and various simulated failure conditions) using the NASA Ames man-in-the-loop high-fidelity C-17 flight simulator. The paper provides results of experiments with different datasets representing various failure conditions.
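
    A minimal real-valued negative-selection sketch follows, with synthetic "self" data standing in for normalized angular-rate measurements; the radii, counts, and dimensionality are arbitrary choices, not the paper's tuning:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Self" set: samples of normal behavior in a normalized feature space
# (synthetic 3-D angular-rate-like data).
self_samples = rng.normal(0.5, 0.05, size=(200, 3)).clip(0.0, 1.0)
SELF_RADIUS = 0.15

def generate_detectors(n_detectors=100):
    """Negative selection: keep random candidate detectors that do
    NOT match (fall near) any self sample."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = rng.random(3)
        if np.min(np.linalg.norm(self_samples - cand, axis=1)) > SELF_RADIUS:
            detectors.append(cand)
    return np.array(detectors)

def is_anomalous(sample, detectors, radius=0.15):
    """A sample is flagged when it falls inside any detector's radius."""
    return bool(np.min(np.linalg.norm(detectors - sample, axis=1)) < radius)

detectors = generate_detectors()
flag = is_anomalous(np.array([0.95, 0.05, 0.95]), detectors)
```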

  8. Fault roughness evolution with slip (Gole Larghe Fault Zone, Italian Alps)

    NASA Astrophysics Data System (ADS)

    Bistacchi, A.; Spagnuolo, E.; Di Toro, G.; Nielsen, S. B.; Griffith, W. A.

    2011-12-01

    Fault surface roughness is a principal factor influencing fault and earthquake mechanics. However, little is known about the roughness of fault surfaces at seismogenic depths, and particularly about how it evolves with accumulating slip. We have studied seismogenic fault surfaces of the Gole Larghe Fault Zone, which exploit precursor cooling joints of the Adamello tonalitic pluton (Italian Alps). These faults developed at depths of 9-11 km and temperatures of 250-300°C. Seismic slip along these surfaces, which individually accommodated from 1 to 20 m of net slip, resulted in the production of cm-thick cataclasites and pseudotachylytes (solidified melts produced during seismic slip). The roughness of fault surfaces was determined with a multi-resolution aerial and terrestrial LIDAR and photogrammetric dataset (Bistacchi et al., 2011, Pageoph, doi: 10.1007/s00024-011-0301-7). Fault surface roughness is self-affine, with Hurst exponent H < 1, indicating that faults are comparatively smoother at larger wavelengths. Fault surface roughness is inferred to have been inherited from the precursor cooling joints, which show H ≈ 0.8. Slip on faults progressively modified the roughness distribution, lowering the Hurst exponent in the along-slip direction to H ≈ 0.6. This behaviour has been observed for wavelengths up to the scale of the accumulated slip along each individual fault surface, whilst at larger wavelengths the original roughness seems not to be affected by slip. Processes that contribute to modifying fault roughness with slip include brittle failure of the interacting asperities (production of cataclasites) and frictional melting (production of pseudotachylytes). To quantify the "wear" due to these processes, we measured, together with the roughness of fault traces and their net slip, the thickness and distribution of cataclasites and pseudotachylytes. As proposed also in the tribological literature, we observe that wear is scale dependent, as smaller wavelength asperities have a shorter
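
    A Hurst exponent of the kind quoted above can be estimated from a profile with a simple structure-function fit. The profile below is a synthetic random walk, whose Hurst exponent is 0.5 by construction; the study itself used LIDAR-derived fault surfaces:

```python
import numpy as np

def hurst_exponent(profile, lags):
    """For a self-affine profile, std[h(x+L) - h(x)] ~ L**H, so the
    log-log slope of the structure function gives H."""
    sigma = [np.std(profile[lag:] - profile[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(sigma), 1)
    return slope

rng = np.random.default_rng(42)
profile = np.cumsum(rng.standard_normal(100_000))  # random walk, H = 0.5
H = hurst_exponent(profile, np.arange(1, 50))      # estimate close to 0.5
```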

  9. Geochemical characteristics of fault core and damage zones of the Hong-Che Fault Zone of the Junggar Basin (NW China) with implications for the fault sealing process

    NASA Astrophysics Data System (ADS)

    Liu, Yin; Wu, Kongyou; Wang, Xi; Pei, Yangwen; Liu, Bo; Guo, Jianxun

    2017-08-01

    Faults may have a complex internal structure, including fault core and damage zone, and can act as major conduits for fluid migration. The migration of fluids along faults is generally associated with strong fluid-rock interaction, forming large amounts of cement that fill in the fractures. The cementation of the fault fractures is considered to be one of the important parameters of fault sealing. The different components of faults have diverse geochemical features because of varying physical characteristics. The investigation of the geochemical characteristics of the fault and damage zones could provide important information about the fault sealing process, which is very important in oil and gas exploration. To understand the fault-cemented sealing process, detailed geochemical studies were conducted on the fault and damage zones of the Hong-Che Fault of the northwestern Junggar Basin in China. The major and trace element data of our study suggest that the fault core is characterized by higher loss on ignition (LOI), potassium loss, Chemical Index of Alteration (CIA), and Plagioclase Index of Alteration (PIA) values and lower high field strength element (HFSE), large-ion lithophile element (LILE), and rare earth element (REE) concentrations compared with the damage zone, implying more serious elemental loss and weathering of the fault core compared with the damage zone during faulting. The carbon and oxygen isotope data reveal that the cement of the Hong-Che Fault Zone formed due to multiple sources of fluids. The fault core was mainly affected by deep sources of hydrothermal fluids. In combination with previous studies, we suggest a potential fault-cemented sealing process during the period of fault movement. The fault core acts as the fluid conduit during faulting. After faulting, the fault core is cemented and the damage zone becomes the major conduit for fluid migration.
The cementation firstly occurs on two sides of the damage zone in the upper part of the

  10. Physical and Mechanical Properties of the Mozumi Fault, Japan: Petrophysics of a Fine-Grained Fault Zone

    NASA Astrophysics Data System (ADS)

    Isaacs, A. J.; Evans, J. P.; Kolesar, P. T.

    2005-12-01

    The Mozumi-Sokenobu fault, a right-lateral strike-slip fault in north-central Honshu, Japan, is intersected by the Active Fault Survey Tunnel. This tunnel allows for direct observation of the fault at a depth of 300-400 m below the ground surface. Within the tunnel, the Mozumi fault cuts Jurassic Tetori Group sandstone and shale. We have characterized microstructures, mineralogy, geochemistry, and elastic properties of fault rock samples from the Mozumi fault. These data can be combined to illustrate the in-situ macroscopic hydro-mechanical structure of the fault. Core samples from the main Mozumi fault zone intersected by the Active Fault Survey Tunnel borehole A were analyzed and compared to wireline logs for a petrophysical study of the fault zone rocks. Microstructures, mineralogy, and geochemistry of Mozumi fault rocks indicate syn-tectonic fluid flow and multiple deformation events. Resistivity and sonic log values are depressed through the main fault zone. Likewise, the seismic P- and S-wave velocity values are decreased across the main fault relative to the surrounding rock. Calculated values for Young's modulus and Poisson's ratio fall at the top of or above the experimentally derived range for elastic moduli of siltstone, shale, and sandstone. Smaller scale variations across the fault zone itself are also present. Samples of foliated fault rocks containing predominantly muscovite have intermediate values for elastic moduli and seismic velocity relative to other fault zone samples used in this study. Fault rocks significantly depleted in oxides relative to host rock samples and containing mixed clays have higher resistivity than surrounding fault rocks and intermediate permeability values. These variations in physical and mechanical properties throughout the fault zone coincide with the complex fault-parallel combined conduit/barrier permeability structure of the Mozumi fault zone.

  11. Data-based fault-tolerant control for affine nonlinear systems with actuator faults.

    PubMed

    Xie, Chun-Hua; Yang, Guang-Hong

    2016-09-01

    This paper investigates the fault-tolerant control (FTC) problem for unknown nonlinear systems with actuator faults including stuck, outage, bias, and loss of effectiveness. The upper bounds of stuck faults, bias faults, and loss-of-effectiveness faults are unknown. A new data-based FTC scheme is proposed. It consists of online estimations of the bounds and a state-dependent function. The estimations are adjusted online to automatically compensate for the actuator faults. The state-dependent function, solved by using real system data, helps to stabilize the system. Furthermore, all signals in the resulting closed-loop system are uniformly bounded and the states converge asymptotically to zero. Compared with the existing results, the proposed approach is data-based. Finally, two simulation examples are provided to show the effectiveness of the proposed approach.

  12. Probabilistic fault displacement hazards for the southern san andreas fault using scenarios and empirical slips

    USGS Publications Warehouse

    Chen, R.; Petersen, M.D.

    2011-01-01

    We apply a probabilistic method to develop fault displacement hazard maps and profiles for the southern San Andreas Fault. Two slip models are applied: (1) scenario slip, defined by the ShakeOut rupture model, and (2) empirical slip, calculated using regression equations relating global slip to earthquake magnitude and distance along the fault. The hazard is assessed using a range of magnitudes defined by the Uniform California Earthquake Rupture Forecast and the ShakeOut. For hazard mapping we develop a methodology to partition displacement among multiple fault branches based on geological observations. Estimated displacement hazard extends a few kilometers wide in areas of multiple mapped fault branches and poor mapping accuracy. Scenario and empirical displacement hazard differ by a factor of two or three, particularly along the southernmost section of the San Andreas Fault. We recommend the empirical slip model with site-specific geological data to constrain uncertainties for engineering applications. © 2011, Earthquake Engineering Research Institute.

  13. Bayesian network based on a fault tree and its application in diesel engine fault diagnosis

    NASA Astrophysics Data System (ADS)

    Qian, Gang; Zheng, Shengguo; Cao, Longhan

    2005-12-01

    This paper discusses fault diagnosis of diesel engine systems. This research aims at optimizing the diagnosis results. Inspired by the good performance of Bayesian Networks (BN) in solving uncertainty problems, a new method was proposed for establishing a BN of diesel engine faults quickly and diagnosing faults exactly. This method consists of two stages, namely the establishment of a BN model and fault diagnosis of the diesel engine system using that BN model. For the purpose of establishing the BN, a new algorithm, which can establish a BN quickly and easily, is presented. The Fault Tree (FT) diagnosis model of the diesel engine system was established first. It was then transformed into a BN using our algorithm. Finally, the BN was used to diagnose faults of a diesel engine system. Experimental results show that the diagnosis speed is increased and the accuracy is improved.
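
    The gate-to-node mapping at the heart of such an FT-to-BN transformation can be sketched with deterministic conditional tables; the engine fault names, prior probabilities, and tree structure below are invented for illustration, not taken from the paper:

```python
import random

def or_gate(*events):   # OR gate: top event occurs if any input occurs
    return any(events)

def and_gate(*events):  # AND gate: occurs only if all inputs occur
    return all(events)

# Basic-event priors (hypothetical diesel-engine faults)
priors = {"injector_clog": 0.05, "pump_wear": 0.02, "filter_block": 0.10}

def sample_top_event(rng):
    e = {name: rng.random() < p for name, p in priors.items()}
    # fault tree: top = OR(AND(injector_clog, pump_wear), filter_block)
    return or_gate(and_gate(e["injector_clog"], e["pump_wear"]),
                   e["filter_block"])

rng = random.Random(1)
p_top = sum(sample_top_event(rng) for _ in range(100_000)) / 100_000
# analytic value: 1 - (1 - 0.05*0.02) * (1 - 0.10) = 0.1009
```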

  14. Late quaternary faulting along the Death Valley-Furnace Creek fault system, California and Nevada

    SciTech Connect

    Brogan, G.E.; Kellogg, K.S.; Terhune, C.L.; Slemmons, D.B.

    1991-12-31

    The Death Valley-Furnace Creek fault system, in California and Nevada, has a variety of impressive late Quaternary neotectonic features that record a long history of recurrent earthquake-induced faulting. Although no neotectonic features of unequivocal historical age are known, paleoseismic features from multiple late Quaternary events of surface faulting are well developed throughout the length of the system. Comparison of scarp heights to amount of horizontal offset of stream channels and the relationships of both scarps and channels to the ages of different geomorphic surfaces demonstrate that Quaternary faulting along the northwest-trending Furnace Creek fault zone is predominantly right lateral, whereas that along the north-trending Death Valley fault zone is predominantly normal. These observations are compatible with tectonic models of Death Valley as a northwest-trending pull-apart basin.

  15. Fault Rock Variation as a Function of Host Rock Lithology

    NASA Astrophysics Data System (ADS)

    Fagereng, A.; Diener, J.

    2013-12-01

    Fault rocks contain an integrated record of the slip history of a fault, and thereby reflect the deformation processes associated with fault slip. Within the Aus Granulite Terrane, Namibia, a number of Jurassic to Cretaceous age strike-slip faults cross-cut Precambrian high grade metamorphic rocks. These strike-slip faults were active at subgreenschist conditions and occur in a variety of host rock lithologies. Where the host rock contains significant amounts of hydrous minerals, representing granulites that have undergone retrogressive metamorphism, the fault rock is dominated by hydrothermal breccias. In anhydrous, foliated rocks interlayered with minor layers containing hydrous phyllosilicates, the fault rock is a cataclasite partially cemented by jasper and quartz. Where the host rock is an isotropic granitic rock the fault rock is predominantly a fine grained black fault rock. Cataclasites and breccias show evidence for multiple deformation events, whereas the fine grained black fault rocks appear to only record a single slip increment. The strike-slip faults observed all formed in the same general orientation and at a similar time, and it is unlikely that regional stress, strain rate, pressure and temperature varied between the different faults. We therefore conclude that the type of fault rock here depended on the host rock lithology, and that lithology alone accounts for why some faults developed a hydrothermal breccia, some cataclasite, and some a fine grained black fault rock. Consequently, based on the assumption that fault rocks reflect specific slip styles, lithology was also the main control on different fault slip styles in this area at the time of strike-slip fault activity. Whereas fine grained black fault rock is inferred to represent high stress events, hydrothermal breccia is instead related to events involving fluid pressure in excess of the least stress.
Jasper-bearing cataclasites may represent faults that experienced dynamic weakening as seen

  16. Geometry and earthquake potential of the shoreline fault, central California

    USGS Publications Warehouse

    Hardebeck, Jeanne L.

    2013-01-01

    The Shoreline fault is a vertical strike‐slip fault running along the coastline near San Luis Obispo, California. Much is unknown about the Shoreline fault, including its slip rate and the details of its geometry. Here, I study the geometry of the Shoreline fault at seismogenic depth, as well as the adjacent section of the offshore Hosgri fault, using seismicity relocations and earthquake focal mechanisms. The Optimal Anisotropic Dynamic Clustering (OADC) algorithm (Ouillon et al., 2008) is used to objectively identify the simplest planar fault geometry that fits all of the earthquakes to within their location uncertainty. The OADC results show that the Shoreline fault is a single continuous structure that connects to the Hosgri fault. Discontinuities smaller than about 1 km may be undetected, but would be too small to be barriers to earthquake rupture. The Hosgri fault dips steeply to the east, while the Shoreline fault is essentially vertical, so the Hosgri fault dips towards and under the Shoreline fault as the two faults approach their intersection. The focal mechanisms generally agree with pure right‐lateral strike‐slip on the OADC planes, but suggest a non‐planar Hosgri fault or another structure underlying the northern Shoreline fault. The Shoreline fault most likely transfers strike‐slip motion between the Hosgri fault and other faults of the Pacific–North America plate boundary system to the east. A hypothetical earthquake rupturing the entire known length of the Shoreline fault would have a moment magnitude of 6.4–6.8. A hypothetical earthquake rupturing the Shoreline fault and the section of the Hosgri fault north of the Hosgri–Shoreline junction would have a moment magnitude of 7.2–7.5.
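
    Moment magnitudes of hypothetical ruptures like those above are conventionally computed from seismic moment via the Hanks-Kanamori relation. The rupture dimensions and mean slip in this sketch are illustrative assumptions, not values from the study:

```python
import math

def moment_magnitude(length_km, width_km, mean_slip_m, mu=3.0e10):
    """Mw = (2/3) * (log10(M0) - 9.05), with seismic moment
    M0 = mu * rupture area * mean slip (M0 in N*m; mu = shear modulus)."""
    area_m2 = (length_km * 1e3) * (width_km * 1e3)
    m0 = mu * area_m2 * mean_slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.05)

# e.g. a 45-km-long rupture with 12 km seismogenic width and 1 m mean slip
mw = moment_magnitude(45.0, 12.0, 1.0)   # about Mw 6.8
```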

  17. PV Systems Reliability Final Technical Report: Ground Fault Detection

    SciTech Connect

    Lavrova, Olga; Flicker, Jack David; Johnson, Jay

    2016-01-01

    We have examined ground faults in photovoltaic (PV) arrays and the efficacy of fuses, residual current detection (RCD), current sense monitoring/relays (CSM), isolation/insulation (Riso) monitoring, and Ground Fault Detection and Isolation (GFID), using simulations based on a SPICE (Simulation Program with Integrated Circuit Emphasis) ground-fault circuit model, experimental ground faults installed on real arrays, and theoretical equations.
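
    The residual-current idea among these methods reduces to comparing the current leaving the array with the current returning from it. The threshold below is an arbitrary illustration, not a code-mandated trip level:

```python
def ground_fault(i_out_amps, i_return_amps, threshold_amps=0.3):
    """Residual-current check: current that leaves the array but does
    not return is presumed to be leaking to ground."""
    residual = abs(i_out_amps - i_return_amps)
    return residual > threshold_amps

print(ground_fault(8.02, 8.01))  # -> False (normal leakage)
print(ground_fault(8.40, 8.01))  # -> True  (ground-fault current)
```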

  18. Active and inactive faults in southern California viewed from Skylab

    NASA Technical Reports Server (NTRS)

    Merifield, P. M.; Lamar, D. L.

    1975-01-01

    The application is discussed of Skylab imagery along with larger scale photography and field investigations in preparing fault maps of California for use in land use planning. The images were used to assist in distinguishing active from inactive faults (by recognizing indications of recent displacement), determining the length of potentially active faults, identifying previously unmapped faults, and gaining additional information on regional tectonic history.

  19. Influence of mechanical stratigraphy and kinematics on fault scaling relations

    NASA Astrophysics Data System (ADS)

    Gross, Michael R.; Gutiérrez-Alonso, Gabriel; Bai, Taixu; Wacker, Michael A.; Collinsworth, Kevin B.; Behl, Richard J.

    1997-02-01

    In order to document effects of mechanical anisotropy, fault geometry, and structural style on displacement-length (D-L) scaling relations, we investigated fault dimensions in the lithologically heterogeneous Monterey Formation exposed along Arroyo Burro Beach, California. The faults, which range in length from several centimeters to several meters, group into two populations: small faults confined to individual mudstone beds, and larger faults that displace multiple beds and often merge into bedding plane detachments. Whereas a linear correlation exists between displacement and length for small faults, displacement across large faults is independent of length. We attribute this deviation from scale-invariance to a combination of geologic factors that influence fault growth once faults extend beyond the confines of mudstone beds. Propagation of large faults across higher-modulus opal-CT porcellanite leads to a reduction in D/L, as does the development of drag folds. Further scatter in D/L occurs when fault tips splay as they approach detachments. Large faults eventually merge into bedding plane detachments, which originally formed due to flexural slip folding. Extremely high D/L ratios are recorded for these merged faults as they accommodate block rotation within a simple shear zone. Thus, both mechanical stratigraphy and the temporal evolution of fault systems can lead to a breakdown in fault scaling relations thought to characterize isolated fault growth in a homogeneous medium.
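
    A displacement-length power law of the kind discussed, D = c * L**n, is conventionally fit in log-log space. A synthetic population with linear scaling (n = 1) illustrates the procedure; the data are invented, not the Monterey measurements:

```python
import numpy as np

rng = np.random.default_rng(7)
lengths = 10 ** rng.uniform(-2, 1, 300)          # fault lengths, m
# displacements follow D = 0.01 * L with lognormal scatter
displacements = 0.01 * lengths * 10 ** rng.normal(0, 0.1, 300)

# fit log10(D) = n * log10(L) + log10(c)
n, log_c = np.polyfit(np.log10(lengths), np.log10(displacements), 1)
# n recovers ~1 for this scale-invariant population; systematic
# departures from n = 1 are the kind of breakdown the study documents
```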

  20. A Hybrid Approach for Fault Detection in Autonomous Physical Agents

    DTIC Science & Technology

    2014-05-01

    A Hybrid Approach for Fault Detection in Autonomous Physical Agents. Eliahu Khalastchi, Meir Kalech, Lior Rokach (Information Systems Engineering). Keywords: fault detection, model-based diagnosis, robotics, UAV. From the introduction: autonomous physical agents such as Unmanned Vehicles (UVs)...then a crash. To continue to operate autonomously, the agent must have an accurate fault detection mechanism. Upon fault detection, a diagnosis process

  1. Network Connectivity for Permanent, Transient, Independent, and Correlated Faults

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Sicher, Courtney; Henry, Courtney

    2012-01-01

    This paper develops a method for the quantitative analysis of network connectivity in the presence of both permanent and transient faults. Even though transient noise is considered a common occurrence in networks, a survey of the literature reveals an emphasis on permanent faults. Transient faults introduce a time element into the analysis of network reliability. With permanent faults it is sufficient to consider the faults that have accumulated by the end of the operating period. With transient faults the arrival and recovery times must be included: the number and location of faults in the system are dynamic variables. Transient faults also introduce system recovery into the analysis. The goal is the quantitative assessment of network connectivity in the presence of both permanent and transient faults. The approach is to construct a global model that includes all classes of faults: permanent, transient, independent, and correlated. A theorem is derived about this model that gives distributions for (1) the number of fault occurrences, (2) the type of fault occurrence, (3) the time of the fault occurrences, and (4) the location of the fault occurrence. These results are applied to compare and contrast the connectivity of different network architectures in the presence of permanent, transient, independent, and correlated faults. The examples below use a Monte Carlo simulation, but the theorem mentioned above could be used to guide fault injections in a laboratory.
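
    The Monte Carlo comparison of architectures described above can be sketched as follows. This is a hedged illustration, not the paper's model: the topologies, per-link Poisson fault rates, and recovery time are assumed for the example, and connectivity is checked at the instants where an outage begins (the worst-case instants, since the set of failed links only grows at outage starts).

```python
import random

def mc_connectivity(nodes, edges, t_end, perm_rate, trans_rate, recovery,
                    trials=2000, seed=0):
    """Monte Carlo estimate of the probability that a network stays
    connected throughout [0, t_end] despite link faults.

    perm_rate / trans_rate: per-link Poisson fault rates (permanent and
    transient); transient faults heal after `recovery` time units.
    """
    rng = random.Random(seed)

    def connected(up_edges):
        # Depth-first search from nodes[0] over the surviving links
        seen, stack = {nodes[0]}, [nodes[0]]
        while stack:
            u = stack.pop()
            for a, b in up_edges:
                v = b if a == u else a if b == u else None
                if v is not None and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return len(seen) == len(nodes)

    ok = 0
    for _ in range(trials):
        down = []  # (edge, outage_start, outage_end) intervals
        for e in edges:
            t = rng.expovariate(perm_rate) if perm_rate > 0 else float("inf")
            if t < t_end:
                down.append((e, t, float("inf")))   # permanent: never heals
            t = 0.0
            while trans_rate > 0:
                t += rng.expovariate(trans_rate)
                if t >= t_end:
                    break
                down.append((e, t, t + recovery))   # transient: heals later
        good = True
        for tc in sorted(s for _, s, _ in down):    # check each outage start
            up = [e for e in edges
                  if not any(e == d and s <= tc < f for d, s, f in down)]
            if not connected(up):
                good = False
                break
        ok += good
    return ok / trials

ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
full = ring + [(0, 2), (1, 3)]
p_ring = mc_connectivity([0, 1, 2, 3], ring, 10.0, 0.01, 0.05, 1.0)
p_full = mc_connectivity([0, 1, 2, 3], full, 10.0, 0.01, 0.05, 1.0)
print(p_full >= p_ring)  # the denser topology tolerates faults at least as well
```

The same harness can compare any two architectures; correlated faults could be added by drawing shared fault times for groups of links.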

  2. Fault zone Q values derived from Taiwan Chelungpu Fault borehole seismometers (TCDPBHS)

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Ju; Lin, Yen-Yu; Lee, Meng-Chieh; Ma, Kuo-Fong

    2012-11-01

    The attenuation factor, Q, of a fault zone is an important parameter for understanding its physical properties. In this study, we investigated the Q value of the Chelungpu Fault, the main rupture of the Mw 7.6 Chi-Chi earthquake, using the 7-level TCDP borehole seismometer array (TCDPBHS). The TCDPBHS was deployed at depths from 945 to 1270 m, spanning the 1999 ruptured slip zone at 1111 m. Three borehole seismometers (BHS1-BHS3) were placed in the hanging wall and three (BHS5-BHS7) in the footwall, with BHS4 near the slip zone. This configuration allowed us to estimate the Q structure of the recently ruptured fault zone. We estimated Q values between BHS1 and BHS4, denoted Qs1 (Qp1), for the fault zone, and between BHS4 and 2 km depth, denoted Qs4 (Qp4), beneath the fault zone. We utilized two independent methods, spectral ratio and spectral fitting analyses, to calculate Qs1 (Qp1) as a reliability check. After analyzing 26 micro-events for Qs and 17 micro-events for Qp, we obtained consistent Q values from the two independent methods. The values of Qs1 and Qp1 were 21-22 and 27-35, respectively, while Qs4 was close to 45 and Qp4 close to 85. These Qp and Qs values are quite consistent with observations for the San Andreas Fault at the corresponding depth. The low Qs1 value for the recently ruptured Chelungpu Fault zone suggests that this fault zone has been highly fractured. Qs values within the Chelungpu Fault, similar to those within the San Andreas Fault, suggest that the Q structure within the fault zone is independent of the sedimentary rock; instead, the possible existence of fluids, fractures, and cracks dominates the attenuation in the fault zone.
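
    The spectral ratio method used for the Qs1 (Qp1) estimate rests on the relation ln(A_out/A_in) = const − πfΔt/Q between two sensors separated by travel time Δt, where all frequency-independent effects (geometry, site response) fall into the constant. A minimal sketch with a synthetic spectrum; the Q value, frequency band, and travel time below are illustrative, not TCDPBHS data or the authors' notation:

```python
import numpy as np

def q_from_spectral_ratio(freqs, spec_in, spec_out, dt_travel):
    """Estimate Q between two borehole sensors by the spectral ratio method.

    spec_in:  amplitude spectrum at the sensor nearer the source.
    spec_out: spectrum after the wave has crossed the attenuating zone.
    The slope of ln(A_out/A_in) vs frequency equals -pi * dt_travel / Q.
    """
    ln_ratio = np.log(spec_out / spec_in)
    slope, _ = np.polyfit(freqs, ln_ratio, 1)
    return -np.pi * dt_travel / slope

# Synthetic check: build a spectrum pair with a known Q of 25
f = np.linspace(5.0, 50.0, 50)                  # Hz
dt = 0.05                                       # travel time between sensors, s
upper = np.ones_like(f)                         # reference spectrum
lower = 0.8 * np.exp(-np.pi * f * dt / 25.0)    # 0.8 = freq-independent factor
print(round(q_from_spectral_ratio(f, upper, lower, dt), 2))  # -> 25.0
```

In practice the ratio is formed from windowed micro-event spectra and the fit restricted to the band with good signal-to-noise, which is where the two-method reliability check above earns its keep.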

  3. Misbehaving Faults: The Expanding Role of Geodetic Imaging in Unraveling Unexpected Fault Slip Behavior

    NASA Astrophysics Data System (ADS)

    Barnhart, W. D.; Briggs, R.

    2015-12-01

    Geodetic imaging techniques enable researchers to "see" details of fault rupture that cannot be captured by complementary tools such as seismology and field studies, thus providing increasingly detailed information about surface strain, slip kinematics, and how an earthquake may be transcribed into the geological record. For example, the recent Haiti, Sierra El Mayor, and Nepal earthquakes illustrate the fundamental role of geodetic observations in recording blind ruptures where purely geological and seismological studies provided incomplete views of rupture kinematics. Traditional earthquake hazard analyses typically rely on sparse paleoseismic observations and incomplete mapping, simple assumptions of slip kinematics from Andersonian faulting, and earthquake analogs to characterize the probabilities of forthcoming ruptures and the severity of ground accelerations. Spatially dense geodetic observations in turn help to identify where these prevailing assumptions about fault behavior break down and highlight new and unexpected kinematic slip behavior. Here, we focus on three key contributions of space geodetic observations to the analysis of co-seismic deformation: identifying near-surface co-seismic slip where no easily recognized fault rupture exists; discerning non-Andersonian faulting styles; and quantifying distributed, off-fault deformation. The 2013 Balochistan strike-slip earthquake in Pakistan illuminates how space geodesy precisely images non-Andersonian behavior and off-fault deformation. Analysis of high-resolution optical imagery and DEMs provides evidence that a single fault may slip as both a strike-slip and a dip-slip fault across multiple seismic cycles. These observations likewise enable us to quantify on-fault deformation, which accounts for ~72% of the displacements in this earthquake.
Nonetheless, the spatial distribution of on- and off-fault deformation in this event is highly variable, a complicating factor for comparisons

  4. Quaternary Geology and Surface Faulting Hazard: Active and Capable Faults in Central Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Falcucci, E.; Gori, S.

    2015-12-01

    The 2009 L'Aquila earthquake (Mw 6.1), in central Italy, raised the issue of surface faulting hazard in Italy, since large urban areas were affected by surface displacement along the causative structure, the Paganica fault. Since then, guidelines for microzonation have been drawn up that take into consideration the problem of surface faulting in Italy, laying the basis for future regulations on the related hazard, as in other countries (e.g. the USA). More specific guidelines on the management of areas affected by active and capable faults (i.e. faults able to produce surface faulting) are going to be released by the National Department of Civil Protection; these would define the zonation of areas affected by active and capable faults, with prescriptions for land use planning. As such, the guidelines raise the problem of the time interval and the general operational criteria used to assess fault capability for the Italian territory. As for the chronology, a review of the international literature and regulations allowed Galadini et al. (2012) to propose different time intervals, depending on whether the ongoing tectonic regime is compressive or extensional, which encompass the Quaternary. As for the operational criteria, detailed analysis of the large number of works dealing with active faulting in Italy shows that investigations based exclusively on surface morphological features (e.g. exposed fault planes) or on indirect investigations (geophysical data) are insufficient or even unreliable for establishing the presence of an active and capable fault; instead, more accurate geological information on the Quaternary space-time evolution of the areas affected by such tectonic structures is needed. A test area in which active and capable faults can first be mapped based on such a classical but still effective methodological approach is the central Apennines. Reference: Galadini F., Falcucci E., Galli P., Giaccio B., Gori S., Messina P., Moro M., Saroli M., Scardia G., Sposato A. (2012). Time

  5. GIS coverages of the Castle Mountain Fault, south central Alaska

    USGS Publications Warehouse

    Labay, Keith A.; Haeussler, Peter J.

    2001-01-01

    The Castle Mountain fault is one of several major east-northeast-striking faults in southern Alaska, and it is the only one with historic seismicity and Holocene surface faulting. This report is a digital compilation of three maps along the Castle Mountain fault in south central Alaska. The compilation consists only of GIS coverages of the location of the fault, line attributes indicating the certainty of the fault location, and information about scarp height, where measured. The files are presented in ARC/INFO export file format and include metadata.

  6. Parameter Transient Behavior Analysis on Fault Tolerant Control System

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine (Technical Monitor); Shin, Jong-Yeob

    2003-01-01

    In a fault tolerant control (FTC) system, a parameter-varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. This paper illustrates analysis of an FTC system based on estimated fault parameter transient behavior, which may include false fault detections during a short time interval. Using Lyapunov function analysis, the upper bound of an induced-L2 norm of the FTC system performance is calculated as a function of the fault detection time and the exponential decay rate of the Lyapunov function.

  7. Study on the Evaluation Method for Fault Displacement: Probabilistic Approach Based on Japanese Earthquake Rupture Data - Principal fault displacements -

    NASA Astrophysics Data System (ADS)

    Kitada, N.; Inoue, N.; Tonagi, M.

    2016-12-01

    The purpose of Probabilistic Fault Displacement Hazard Analysis (PFDHA) is to estimate fault displacement values and the extent of their impact. Two types of fault displacement are associated with an earthquake fault: principal fault displacement and distributed fault displacement. Distributed fault displacement should be evaluated for important facilities, such as nuclear installations. PFDHA estimates both principal and distributed fault displacement. For estimation, PFDHA uses distance-displacement functions, which are constructed from field measurement data. We constructed slip-distance relations of principal fault displacement based on Japanese strike-slip and reverse-slip earthquakes in order to apply them to Japan, a subduction setting. However, observed displacement data are sparse, especially for reverse faults. Takao et al. (2013) estimated the relation using all fault types together (reverse and strike-slip). Since Takao et al. (2013), several inland earthquakes have occurred in Japan, so here we estimate distance-displacement functions separately for the strike-slip and reverse fault types, adding new fault displacement data sets. Several criteria have been proposed for normalizing slip function data. We normalized the principal fault displacement data by several methods and compared the resulting slip-distance functions. When normalized by total fault length, the Japanese reverse-fault data showed no particular trend in the slip-distance relation; for segmented data, the slip-distance relationship indicated a trend similar to that of strike-slip faults. We will also discuss the relation between principal fault displacement distributions and source fault character. According to the slip distribution function of Petersen et al. (2011), for strike-slip faults the normalized displacement decreases toward the edge of the fault. However, the Japanese strike-slip fault data do not decrease as markedly at the end of the fault
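
    Constructing a normalized slip-distance function of the kind described above amounts to scaling along-strike position by fault length and displacement by a reference value, then averaging in bins. A minimal sketch, assuming max-displacement normalization and simple equal-width binning; the synthetic profile and bin count are illustrative, and studies differ on the normalization choice:

```python
import numpy as np

def normalized_profile(positions, displacements, fault_length, n_bins=10):
    """Build a normalized slip-distance profile: position is scaled by the
    fault length (0..1 along strike) and displacement by the maximum
    displacement, then bin-averaged. Empty bins yield NaN.
    """
    x = np.asarray(positions) / fault_length
    d = np.asarray(displacements) / max(displacements)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(x, bins) - 1, 0, n_bins - 1)
    mean_d = np.array([d[idx == i].mean() if np.any(idx == i) else np.nan
                       for i in range(n_bins)])
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, mean_d

# Illustrative roughly elliptical profile measured at 9 points on a 20 km fault
pos = np.linspace(0.0, 20.0, 9)                                 # km along strike
slip = np.array([0.1, 0.8, 1.4, 1.8, 2.0, 1.8, 1.4, 0.8, 0.1])  # metres
x_n, d_n = normalized_profile(pos, slip, 20.0, n_bins=5)
print(d_n)  # central bin holds the maximum (value 1.0), tapering to the tips
```

Stacking such profiles from many earthquakes, separately per fault type, gives the empirical distance-displacement function that PFDHA integrates over.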

  8. Sea-Floor Spreading and Transform Faults

    ERIC Educational Resources Information Center

    Armstrong, Ronald E.; And Others

    1978-01-01

    Presents the Crustal Evolution Education Project (CEEP) instructional module on Sea-Floor Spreading and Transform Faults. The module includes activities and materials required, procedures, summary questions, and extension ideas for teaching Sea-Floor Spreading. (SL)

  9. Reset Tree-Based Optical Fault Detection

    PubMed Central

    Lee, Dong-Geon; Choi, Dooho; Seo, Jungtaek; Kim, Howon

    2013-01-01

    In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit's reset signal tree as a fault detection sensor. To evaluate our proposal, we model radiation-induced currents in circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool. PMID:23698267

  10. Seismomagnetic response of a fault zone

    NASA Astrophysics Data System (ADS)

    Adushkin, V. V.; Loktev, D. N.; Spivak, A. A.

    2017-01-01

    Based on the results of instrumental observations of geomagnetic variations caused by the propagation of seismic waves through a fault zone, the dependences between the amplitudes of the induced seismomagnetic effect and seismic signal as a function of distance r to the midline of the fault are obtained. For the first time, it is shown that the amplitude of the seismomagnetic effect is maximal in the fault damage zone. The phenomenological model describing the generation of magnetic signals by seismic waves propagating through the crushed rock in the tectonic fault zone is suggested. It is assumed that geomagnetic variations are generated by the changes in the electrical conductivity of the fragmented rocks as a result of the deformation of the rock pieces contacts. The amplitudes of the geomagnetic variations calculated from the model agree with the instrumental observations.

  11. Study of fault-tolerant software technology

    NASA Technical Reports Server (NTRS)

    Slivinski, T.; Broglio, C.; Wild, C.; Goldberg, J.; Levitt, K.; Hitt, E.; Webb, J.

    1984-01-01

    Presented is an overview of the current state of the art of fault-tolerant software and an analysis of quantitative techniques and models developed to assess its impact. It examines research efforts as well as experience gained from commercial application of these techniques. The paper also addresses the computer architecture and design implications on hardware, operating systems and programming languages (including Ada) of using fault-tolerant software in real-time aerospace applications. It concludes that fault-tolerant software has progressed beyond the pure research state. The paper also finds that, although not perfectly matched, newer architectural and language capabilities provide many of the notations and functions needed to effectively and efficiently implement software fault-tolerance.

  12. Transfer zones in listric normal fault systems

    NASA Astrophysics Data System (ADS)

    Bose, Shamik

    Listric normal faults are common in passive margin settings where sedimentary units are detached above weaker lithological units, such as evaporites, or are driven by basal structural and stratigraphic discontinuities. The geometries and styles of faulting vary with the types of detachment and form landward and basinward dipping fault systems. Complex transfer zones therefore develop along the terminations of adjacent faults where deformation is accommodated by secondary faults, often below seismic resolution. The rollover geometry and secondary faults within the hanging wall of the major faults also vary with the styles of faulting and contribute to the complexity of the transfer zones. This study tries to understand the controlling factors for the formation of the different styles of listric normal faults and the different transfer zones formed within them, by using analog clay experimental models. Detailed analyses with respect to fault orientation, density and connectivity have been performed on the experiments in order to gather insights on the structural controls and the resulting geometries. A new high resolution 3D laser scanning technology has been introduced to scan the surfaces of the clay experiments for accurate measurements and 3D visualizations. Numerous examples from the Gulf of Mexico have been included to demonstrate and geometrically compare the observations in experiments and real structures. A salt cored convergent transfer zone from the South Timbalier Block 54, offshore Louisiana, has been analyzed in detail to understand the evolutionary history of the region, which helps in deciphering the kinematic growth of similar structures in the Gulf of Mexico. The dissertation is divided into three chapters, written in a journal article format, that deal with three different aspects in understanding the listric normal fault systems and the transfer zones so formed. 
The first chapter involves clay experimental models to understand the fault patterns in

  13. Philippine fault: A key for Philippine kinematics

    NASA Astrophysics Data System (ADS)

    Barrier, E.; Huchon, P.; Aurelio, M.

    1991-01-01

    On the basis of new geologic data and a kinematic analysis, we establish a simple kinematic model in which the motion between the Philippine Sea plate and Eurasia is distributed on two boundaries: the Philippine Trench and the Philippine fault. This model predicts a velocity of 2 to 2.5 cm/yr along the fault. Geologic data from the Visayas provide an age of 2 to 4 Ma for the fault, an age in good agreement with the date of the beginning of subduction in the Philippine Trench. The origin of the Philippine fault would thus be the flip of subduction from west to east after the locking of convergence to the west by the collision of the Philippine mobile belt with the Eurasian margin.

  14. Inspection and rehabilitation of tunnels across faults

    SciTech Connect

    Abramson, L.W.; Schmidt, B.

    1995-12-31

    The inspection and rehabilitation of tunnels that cross faults is unique because such tunnels usually are in use and have a large variety of lining types, including bare rock, concrete, or steel, often coated with accumulations of dirt, grime, algae, and other minerals. Inspection methods are important, including what to look for, how to clean the inner tunnel lining surfaces, non-destructive testing, coring, soundings, air quality detection and protection, ventilation, lighting, etc. Rehabilitation of tunnels crossing faults requires a practiced knowledge of underground design and construction. The most common methods of rehabilitation include grouting and concreting. The variety of water, wastewater, transit, and highway tunnels in California provides ample examples of tunnels, new and old, that cross active faults. This paper addresses specific methods of tunnel inspection and maintenance at fault crossings and gives examples of relevant highway, transit, water, and wastewater projects and studies in California to illustrate the discussion.

  15. Current Sensor Fault Reconstruction for PMSM Drives

    PubMed Central

    Huang, Gang; Luo, Yi-Ping; Zhang, Chang-Fan; He, Jing; Huang, Yi-Shan

    2016-01-01

    This paper deals with a current sensor fault reconstruction algorithm for the torque closed-loop drive system of an interior PMSM. First, sensor faults are equated to actuator faults by a newly introduced state variable. Then, in αβ coordinates, based on the motor model with active flux linkage, a current observer is constructed with a specific sliding mode equivalent control methodology to eliminate the effects of unknown disturbances, and the phase current sensor faults are reconstructed by means of an adaptive method. Finally, an αβ axis current fault processing module is designed based on the reconstructed value. The feasibility and effectiveness of the proposed method are verified by simulation and experimental tests on the RT-LAB platform. PMID:26840317
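
    The paper's sliding-mode observer and adaptive law are specific to the PMSM model; the sketch below only illustrates the underlying idea of turning a sensor fault into an estimable extra state, using a first-order stand-in plant and a plain Luenberger observer on the augmented state. All gains and parameter values are assumptions made for this example:

```python
import math

# Plant x' = -a*x + b*u; the measurement y = x + fault carries a sensor bias.
# Augmenting the state as [x, fault] with fault' = 0 makes the pair observable
# through y, so an ordinary observer can reconstruct the bias online.
a, b = 2.0, 1.0
l1, l2 = 3.0, 2.0          # observer gains (chosen by hand, eigenvalues stable)
dt, T = 1e-3, 8.0

x, fault = 0.0, 0.0
xh, fh = 0.0, 0.0          # estimates of the state and the sensor fault
for k in range(int(T / dt)):
    t = k * dt
    u = math.sin(t)
    if t > 2.0:
        fault = 0.5        # sensor bias appears at t = 2 s
    y = x + fault          # faulty measurement
    x += dt * (-a * x + b * u)              # true plant (forward Euler)
    innov = y - (xh + fh)                   # innovation on y = x + fault
    xh += dt * (-a * xh + b * u + l1 * innov)
    fh += dt * (l2 * innov)                 # fault state: driven only by innov

print(abs(fh - 0.5) < 0.05)  # the bias estimate converges near 0.5
```

The reconstructed fault estimate can then feed a fault-processing module that subtracts the bias from the measurement, which is the role the αβ-axis module plays in the paper.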

  16. Fault-tolerant communication channel structures

    NASA Technical Reports Server (NTRS)

    Alkalai, Leon (Inventor); Chau, Savio N. (Inventor); Tai, Ann T. (Inventor)

    2006-01-01

    Systems and techniques for implementing fault-tolerant communication channels and features in communication systems. Selected commercial-off-the-shelf devices can be integrated in such systems to reduce the cost.

  17. Continuous reconfiguration: fault tolerance without a ripple

    SciTech Connect

    Bortner, R.A.

    1983-01-01

    The concepts of the continuously reconfiguring flight control system (CRM²FCS) and the impact of its architecture upon fault tolerance and reliability are covered. Some of the topics discussed are continuous reconfiguration, autonomous control, virtual common memory and the fault filter. Continuous reconfiguration is defined. An example is discussed with an explanation of transparent failure. Autonomous control is the scheme for controlling a continually reconfiguring system. The process of volunteering is also discussed. The virtual common memory is the common memory architecture used in the continuously reconfiguring system. Its physical implementation is explained. The fault filter is the method used to detect and deal with faulty processors. The different levels and the types of faults each handles are examined. 1 ref.

  18. Delineation of fault zones using imaging radar

    NASA Technical Reports Server (NTRS)

    Toksoz, M. N.; Gulen, L.; Prange, M.; Matarese, J.; Pettengill, G. H.; Ford, P. G.

    1986-01-01

    The assessment of earthquake hazards and mineral and oil potential of a given region requires a detailed knowledge of geological structure, including the configuration of faults. Delineation of faults is traditionally based on three types of data: (1) seismicity data, which shows the location and magnitude of earthquake activity; (2) field mapping, which in remote areas is typically incomplete and of insufficient accuracy; and (3) remote sensing, including LANDSAT images and high altitude photography. Recently, high resolution radar images of tectonically active regions have been obtained by SEASAT and Shuttle Imaging Radar (SIR-A and SIR-B) systems. These radar images are sensitive to terrain slope variations and emphasize the topographic signatures of fault zones. Techniques were developed for using the radar data in conjunction with the traditional types of data to delineate major faults in well-known test sites, and to extend interpretation techniques to remote areas.

  20. Not-so-inactive fault in Oklahoma

    USGS Publications Warehouse

    Spall, Henry

    1986-01-01

    In connection with a search for geologically quiet areas for siting large engineering ventures such as dams and nuclear power plants, geologists have recently started looking at the Meers fault in southwestern Oklahoma with intense interest.

  1. Fault diagnosis of power transformer based on fault-tree analysis (FTA)

    NASA Astrophysics Data System (ADS)

    Wang, Yongliang; Li, Xiaoqiang; Ma, Jianwei; Li, SuoYu

    2017-05-01

    Power transformers are important equipment in power plants and substations, and a key hub linking power transmission and distribution; their performance directly affects the quality, reliability, and stability of the power system. This paper first classifies power transformer faults into five categories by fault type and three stages along the time dimension, then applies routine dissolved gas analysis (DGA) and infrared diagnostic criteria to establish the transformer's running state, and finally, according to the needs of power transformer fault diagnosis, constructs a power transformer fault tree by stepwise refinement from the general to the specific
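
    Once a fault tree is built, the top-event probability can be evaluated by enumerating basic-event states, assuming independent basic events. A minimal sketch with a toy transformer-style tree; the gates and probabilities below are illustrative, not the paper's transformer fault tree:

```python
import itertools

def ft_probability(tree, probs):
    """Exact top-event probability of a small fault tree by enumeration.

    `tree` is nested tuples ('AND'|'OR', child, ...); leaves are basic-event
    names; `probs` maps each name to its failure probability. Enumeration is
    exponential in the number of basic events, so this suits small trees.
    """
    events = sorted(probs)

    def holds(node, state):
        if isinstance(node, str):
            return state[node]
        gate, *kids = node
        results = (holds(kid, state) for kid in kids)
        return all(results) if gate == 'AND' else any(results)

    total = 0.0
    for outcome in itertools.product([False, True], repeat=len(events)):
        state = dict(zip(events, outcome))
        p = 1.0
        for e in events:
            p *= probs[e] if state[e] else 1.0 - probs[e]
        if holds(tree, state):
            total += p
    return total

# Toy tree: transformer fault = winding fault OR (cooling fault AND overload)
tree = ('OR', 'winding', ('AND', 'cooling', 'overload'))
probs = {'winding': 0.01, 'cooling': 0.05, 'overload': 0.2}
print(round(ft_probability(tree, probs), 6))  # -> 0.0199
```

The stepwise refinement described in the abstract corresponds to replacing a leaf with a deeper sub-tree as more specific failure modes are identified.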

  2. Distributed Fault-Tolerant Control of Networked Uncertain Euler-Lagrange Systems Under Actuator Faults.

    PubMed

    Chen, Gang; Song, Yongduan; Lewis, Frank L

    2016-05-03

    This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology, which contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of simultaneously compensating for the actuator bias fault, the partial loss of effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed of a multiple-robot-arm cooperative control system is developed for real-time verification. Experiments on the networked robot arms are conducted, and the results confirm the benefits and effectiveness of the proposed distributed fault-tolerant control algorithms.
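
    The coordinated tracking idea can be illustrated in miniature. The sketch below is not the paper's algorithm: first-order agents stand in for Euler-Lagrange dynamics, the directed topology is a simple tree rooted at the leader, and an adaptive gain grows with the local error to compensate an unknown partial loss of actuator effectiveness, with no fault detection step. All values are assumed for the example:

```python
import math

dt, T = 1e-3, 20.0
neighbors = {1: [0], 2: [1], 3: [1]}   # in-neighbors; node 0 is the leader,
                                       # so the graph is a spanning tree
rho = {1: 1.0, 2: 0.4, 3: 0.7}         # unknown actuator effectiveness (fault)
x = {0: 0.0, 1: 1.0, 2: -1.0, 3: 2.0}  # initial positions
k = {i: 1.0 for i in rho}              # adaptive control gains
gamma = 5.0                            # adaptation rate

for step in range(int(T / dt)):
    x[0] = math.sin(0.2 * step * dt)   # leader/target trajectory
    for i in rho:
        e = sum(x[j] - x[i] for j in neighbors[i])  # local consensus error
        k[i] += dt * gamma * e * e      # grow the gain while error persists
        x[i] += dt * rho[i] * k[i] * e  # faulty actuator scales the input

errs = [abs(x[i] - x[0]) for i in rho]
print(max(errs) < 0.2)  # followers track the leader despite rho < 1
```

Because the adaptation raises the gain until the error shrinks, the controller never needs to know which agent's actuator degraded or by how much, mirroring the FDI-free philosophy of the paper.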

  3. Late Quaternary Faulting along the San Juan de los Planes Fault Zone, Baja California Sur, Mexico

    NASA Astrophysics Data System (ADS)

    Busch, M. M.; Coyan, J. A.; Arrowsmith, J.; Maloney, S. J.; Gutierrez, G.; Umhoefer, P. J.

    2007-12-01

    As a result of continued distributed deformation in the Gulf Extensional Province along an oblique-divergent plate margin, active normal faulting is well manifest in southeastern Baja California. By characterizing normal-fault related deformation along the San Juan de los Planes fault zone (SJPFZ) southwest of La Paz, Baja California Sur we contribute to understanding the patterns and rates of faulting along the southwest gulf-margin fault system. The geometry, history, and rate of faulting provide constraints on the relative significance of gulf-margin deformation as compared to axial system deformation. The SJPFZ is a major north-trending structure in the southern Baja margin along which we focused our field efforts. These investigations included: a detailed strip map of the active fault zone, including delineation of active scarp traces and geomorphic surfaces on the hanging wall and footwall; fault scarp profiles; analysis of bedrock structures to better understand how the pattern and rate of strain varied during the development of this fault zone; and a gravity survey across the San Juan de los Planes basin to determine basin geometry and fault behavior. The map covers a N-S swath from the Gulf of California in the north to San Antonio in the south, an area ~45km long and ~1-4km wide. Bedrock along the SJPFZ varies from Cretaceous Las Cruces Granite in the north to Cretaceous Buena Mujer Tonalite in the south and is scarred by shear zones and brittle faults. The active scarp-forming fault juxtaposes bedrock in the footwall against Late Quaternary sandstone-conglomerate. This ~20m wide zone is highly fractured bedrock infused with carbonate. The northern ~12km of the SJPFZ, trending 200°, preserves discontinuous scarps 1-2km long and 1-3m high in Quaternary units. The scarps are separated by stretches of bedrock embayed by hundreds of meters-wide tongues of Quaternary sandstone-conglomerate, implying low Quaternary slip rate. 
Further south, ~2 km north of the

  4. Porosity variations in and around normal fault zones: implications for fault seal and geomechanics

    NASA Astrophysics Data System (ADS)

    Healy, David; Neilson, Joyce; Farrell, Natalie; Timms, Nick; Wilson, Moyra

    2015-04-01

    Porosity forms the building blocks for permeability, exerts a significant influence on the acoustic response of rocks to elastic waves, and fundamentally influences rock strength. And yet, published studies of porosity around fault zones or in faulted rock are relatively rare, and are hugely dominated by those of fault zone permeability. We present new data from detailed studies of porosity variations around normal faults in sandstone and limestone. We have developed an integrated approach to porosity characterisation in faulted rock exploiting different techniques to understand variations in the data. From systematic samples taken across exposed normal faults in limestone (Malta) and sandstone (Scotland), we combine digital image analysis on thin sections (optical and electron microscopy), core plug analysis (He porosimetry) and mercury injection capillary pressures (MICP). Our sampling includes representative material from undeformed protoliths and fault rocks from the footwall and hanging wall. Fault-related porosity can produce anisotropic permeability with a 'fast' direction parallel to the slip vector in a sandstone-hosted normal fault. Undeformed sandstones in the same unit exhibit maximum permeability in a sub-horizontal direction parallel to lamination in dune-bedded sandstones. Fault-related deformation produces anisotropic pores and pore networks with long axes aligned sub-vertically and this controls the permeability anisotropy, even under confining pressures up to 100 MPa. Fault-related porosity also has interesting consequences for the elastic properties and velocity structure of normal fault zones. Relationships between texture, pore type and acoustic velocity have been well documented in undeformed limestone. We have extended this work to include the effects of faulting on carbonate textures, pore types and P- and S-wave velocities (Vp, Vs) using a suite of normal fault zones in Malta, with displacements ranging from 0.5 to 90 m. 
Our results show a

  5. Inferred depth of creep on the Hayward Fault, central California

    USGS Publications Warehouse

    Savage, J.C.; Lisowski, M.

    1993-01-01

    A relation between creep rate at the surface trace of a fault, the depth to the bottom of the creeping zone, and the rate of stress accumulation on the fault is derived from Weertman's 1964 friction model of slip on a fault. A 5±1 km depth for the creeping zone on the Hayward fault is estimated from the measured creep rate (5 mm/yr) at the fault trace and the rate of stress increase on the upper segment of the fault trace inferred from geodetic measurements across the San Francisco Bay area. Although fault creep partially accommodates the secular slip rate on the Hayward fault, a slip deficit is accumulating equivalent to a magnitude 6.6 earthquake on each 40 km segment of the fault each century. Thus, the current behavior of the fault is consistent with its seismic history, which includes two moderate earthquakes in the mid-1800s. -Authors
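
    The quoted slip-deficit equivalence can be sanity-checked with the standard seismic moment and Hanks-Kanamori magnitude formulas. The deficit rate, locked-zone geometry, and rigidity below are assumed round numbers for illustration, not the authors' values:

```python
import math

# Back-of-the-envelope check of "a magnitude 6.6 earthquake per 40 km
# segment per century". Assumed inputs: 5 mm/yr slip deficit, fault locked
# over a 7 km depth range, rigidity 3e10 Pa.
deficit_rate = 0.005          # m/yr
years = 100.0
length = 40e3                 # m, segment length
width = 7e3                   # m, down-dip extent of the locked zone
mu = 3e10                     # Pa, crustal rigidity

slip = deficit_rate * years                # accumulated slip deficit, m
M0 = mu * length * width * slip            # seismic moment, N*m
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)  # Hanks-Kanamori moment magnitude
print(round(Mw, 1))  # -> 6.3, in the mid-6 range quoted by the authors
```

A somewhat larger locked width or deficit rate pushes the result to 6.6; the point is that the quoted equivalence follows directly from the moment budget.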

  6. Strong ground motions generated by earthquakes on creeping faults

    USGS Publications Warehouse

    Harris, Ruth A.; Abrahamson, Norman A.

    2014-01-01

    A tenet of earthquake science is that faults are locked in position until they abruptly slip during the sudden strain-relieving events that are earthquakes. Whereas it is expected that locked faults when they finally do slip will produce noticeable ground shaking, what is uncertain is how the ground shakes during earthquakes on creeping faults. Creeping faults are rare throughout much of the Earth's continental crust, but there is a group of them in the San Andreas fault system. Here we evaluate the strongest ground motions from the largest well-recorded earthquakes on creeping faults. We find that the peak ground motions generated by the creeping fault earthquakes are similar to the peak ground motions generated by earthquakes on locked faults. Our findings imply that buildings near creeping faults need to be designed to withstand the same level of shaking as those constructed near locked faults.

  7. Hydrologic Characterization Study at Wildcat Fault Zone

    NASA Astrophysics Data System (ADS)

    Karasaki, K.; Onishi, C. T.; Goto, J.; Moriya, T.; Ueta, K.; Kiho, K.

    2011-12-01

    A dedicated field site has been developed to further the understanding of, and to develop the characterization technology for, fault zone hydrology in the hills east of Berkeley, California across the Wildcat Fault. The Wildcat is believed to be a strike-slip fault and a member of the Hayward Fault System, with over 10 km of displacement. So far, several ~2-4-m deep trenches were cut, a number of surface-based geophysical surveys were conducted, and four ~150-m deep fully cored boreholes were drilled at the site; one on the east side and two on the west side of the suspected fault trace. The inclined fourth hole was drilled to penetrate the Wildcat. Geologic analysis results from these trenches and boreholes indicated that the geology was not always what was expected: while confirming some earlier, published conclusions about Wildcat, they have also led to some unexpected findings. The lithology at the Wildcat Fault area mainly consists of chert, shale, silt and sandstone, extensively sheared and fractured with gouge and cataclasite zones observed at several depths. Wildcat near the field site appears to consist of multiple fault planes with the major fault planes filled with unconsolidated pulverized rock instead of clay gouge. The pressure and temperature distributions indicate a downward hydraulic gradient and a relatively large geothermal gradient. Various types of borehole logging were conducted but there were no obvious correlations between boreholes or to hydrologic properties. Using the three other boreholes as observation wells, hydrologic cross-hole pumping tests were conducted in the fourth borehole. The hydraulic test data suggest the dual properties of the hydrologic structure of the fault zone: high permeability along the plane and low permeability across it, and the fault planes may be compartmentalizing aquifers. No correlation was found between fracture frequency and flow. Long term pressure monitoring over multiple seasons was shown to be very

  8. The San Andreas Fault System, California, USA

    USGS Publications Warehouse

    Brown, R.D.; Wallace, R.E.; Hill, D.P.

    1992-01-01

    Geologists, seismologists, and geophysicists have intensively studied the San Andreas fault system for the past 20 to 30 years. Their goals were to learn more about damaging earthquakes, the behavior of major strike-slip faults, and methods of reducing earthquake hazards in populated areas. Field geologic investigations, seismic networks, post-earthquake studies, precision geodetic surveys, and reflection and refraction seismic surveys are among the methods used to decipher the history, geometry, and mechanics of the system. -from Authors

  9. Deformation Monitoring of AN Active Fault

    NASA Astrophysics Data System (ADS)

    Ostapchuk, A.

    2015-12-01

    The discovery of low-frequency earthquakes, slow slip events and other deformation phenomena, new to geophysics, changes our understanding of how the energy accumulated in the Earth's crust is released. The new geophysical data make one revise the underlying mechanisms of geomechanical processes taking place in fault zones. Conditions for generating different slip modes are still unclear. The most vital question is whether a certain slip mode is intrinsic to a fault or may be controlled by external factors. This work presents the results of two and a half years of deformation monitoring of a discontinuity in the zone of the Main Sayanskiy Fault, a right-lateral strike-slip fault. Observations were performed in the tunnel of the Talaya seismic station (TLY), Irkutsk region, Russia. Measurements were carried out 70 m from the entrance of the tunnel, where the thickness of the overlying rock is about 30 m. Inductive displacement sensors were mounted on both sides of the discontinuity, recording three components of relative fault-side displacement with an accuracy of 0.2 μm. Temperature variation inside the tunnel did not exceed 0.5 °C during the whole period of observation. Important information about the deformation properties of an active fault was obtained. A pronounced seasonality of the deformation characteristics of the discontinuity is observed in the investigated segment of rock. A great number of slow slip events with durations from several hours to several weeks were registered. In addition, alterations of the fault deformation characteristics before the M9.0 Tohoku-Oki megathrust earthquake of 11 March 2011, and a reaction to the event itself, were detected. The work was supported by the Russian Science Foundation (grant no. 14-17-00719).

  10. GN and C Fault Protection Fundamentals

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert D.

    2008-01-01

    This is a companion presentation for a paper by the same name for the same conference. The objective of this paper is to shed some light on the fundamentals of fault tolerant design for GN&C. The common heritage of ideas behind both faulted and normal operation is explored, as is the increasingly indistinct line between these realms in complex missions. Techniques in common practice are then evaluated in this light to suggest a better direction for future efforts.

  12. A survey of fault diagnosis technology

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel

    1989-01-01

    Existing techniques and methodologies for fault diagnosis are surveyed. The techniques run the gamut from theoretical artificial intelligence work to conventional software engineering applications. They are shown to define a spectrum of implementation alternatives where tradeoffs determine their position on the spectrum. Various tradeoffs include execution time limitations and memory requirements of the algorithms as well as their effectiveness in addressing the fault diagnosis problem.

  13. A connecting network with fault tolerance capabilities

    SciTech Connect

    Ciminiera, L.; Serra, A.

    1986-06-01

    A new multistage interconnection network is presented in this paper. It is able to handle the communications between the connected devices correctly, even in the presence of fault(s) in the network. This goal is achieved by using redundant paths with a fast procedure able to dynamically reroute the message. It is also shown that the rerouting properties are still valid when broadcasting transmission is used.

  14. Limiting Maximum Magnitude by Fault Dimensions (Invited)

    NASA Astrophysics Data System (ADS)

    Stirling, M. W.

    2010-12-01

    A standard practice of seismic hazard modeling is to combine fault and background seismicity sources to produce a multidisciplinary source model for a region. Background sources are typically modeled with a Gutenberg-Richter magnitude-frequency distribution developed from historical seismicity catalogs, and fault sources are typically modeled with earthquakes that are limited in size by the mapped fault rupture dimensions. The combined source model typically exhibits a Gutenberg-Richter-like distribution due to there being many short faults relative to the number of longer faults. The assumption that earthquakes are limited by the mapped fault dimensions therefore appears to be consistent with the Gutenberg-Richter relationship, one of the fundamental laws of seismology. Recent studies of magnitude-frequency distributions for California and New Zealand have highlighted an excess of fault-derived earthquakes relative to the log-linear extrapolation of the Gutenberg-Richter relationship from the smaller magnitudes (known as the “bulge”). Relaxing the requirement of maximum magnitude being limited by fault dimensions is a possible solution for removing the “bulge” to produce a perfectly log-linear Gutenberg-Richter distribution. An alternative perspective is that the “bulge” does not represent a significant departure from a Gutenberg-Richter distribution, and may simply be an artefact of a small earthquake dataset relative to the more plentiful data at the smaller magnitudes. In other words the uncertainty bounds of the magnitude-frequency distribution at the moderate-to-large magnitudes may be far greater than the size of the “bulge”.
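    The log-linear relation discussed above can be written as log10 N(≥M) = a − b·M. A minimal sketch of how a fault-source “bulge” would appear as an excess over the log-linear extrapolation; the a and b values are illustrative, not taken from the abstract:

    ```python
    # Gutenberg-Richter relation: expected count of events with magnitude >= m.
    def gr_count(a, b, m):
        """log10 N(>=m) = a - b*m, so N(>=m) = 10**(a - b*m)."""
        return 10.0 ** (a - b * m)

    a, b = 5.0, 1.0                      # illustrative values
    extrapolated = gr_count(a, b, 7.0)   # log-linear prediction at M >= 7
    observed = 3 * extrapolated          # a 3x excess at large M would be the "bulge"
    print(extrapolated, observed)
    ```

    Whether such an excess is a real departure or a small-sample artefact is exactly the question the abstract raises: with only a handful of large events expected, the uncertainty on the observed count can exceed the size of the “bulge”.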

  15. Fault-fracture strain in Wingate Sandstone

    NASA Astrophysics Data System (ADS)

    Jamison, William R.

    The Laramide deformation of the Triassic Wingate Sandstone along the northeast flank of the Uncompahgre uplift has occurred by faulting at various scales. Macroscopically smooth flexures of beds within the Wingate occur by small displacements across a myriad of intraformational, mesoscale faults. The deformation resultant from these small faults may be approximated by a strain tensor, provided the measurement domain satisfies certain size criteria. Equivalent strain (ε) measurements, obtained from 22 locations in the East Kodel's Canyon, range from 1% to 15.5% (the maximum contractional strains range from -0.9% to -13.4%). The faults producing this strain have displacements ranging from a fraction of a millimeter to 18.5 cm. The fault intensity increases with increasing ε, although in a distinctly non-linear fashion. At low strains, incremental increases in the deformation produce additional, small displacement faults. At larger strains, incremental increases in the deformation occur via progressive displacement along existing faults. The principal strain axes are consistently non-coaxial with the inferred principal stresses (the average angle between σ1 and ε1 is 18.5°). This non-coaxiality results from the non-uniform development of the conjugate fault systems. This same inequality of the conjugate systems produces a non-zero rotation tensor, ω, but ω is not related to the σ1-ε1 angle. The non-uniform development of conjugate shears (and the associated non-coaxiality of σ1 and ε1) may be an intrinsic characteristic of a Coulomb material.

  16. Geometric incompatibility in a fault system.

    PubMed Central

    Gabrielov, A; Keilis-Borok, V; Jackson, D D

    1996-01-01

    Interdependence between the geometry of a fault system, its kinematics, and seismicity is investigated. A quantitative measure is introduced for the inconsistency between a fixed configuration of faults and the slip rates on each fault. This measure, named geometric incompatibility (G), summarily depicts the instability near the fault junctions: their divergence or convergence ("unlocking" or "locking up") and the accumulation of stress and deformation. Accordingly, changes in G are connected with the dynamics of seismicity. Apart from geometric incompatibility, we consider the deviation K from the well-known Saint-Venant condition of kinematic compatibility. This deviation summarily depicts unaccounted stress and strain accumulation in the region and/or internal inconsistencies in a reconstruction of the block-and-fault system (its geometry and movements). The estimates of G and K provide a useful tool for bringing together the data on different types of movement in a fault system. An analog of the Stokes formula is found that allows determination of the total values of G and K in a region from the data on its boundary. The phenomenon of geometric incompatibility implies that the nucleation of strong earthquakes is to a large extent controlled by processes near fault junctions. Junctions that have been locked up may act as transient asperities, and unlocked junctions may act as transient weakest links. Tentative estimates of K and G are made for each end of the Big Bend of the San Andreas fault system in Southern California. The recent strong Landers (1992, M = 7.3) and Northridge (1994, M = 6.7) earthquakes both reduced K but had opposite impacts on G: Landers unlocked the area, whereas Northridge locked it up again. PMID:11607673

  17. The Southern California Fault Activity Database

    NASA Astrophysics Data System (ADS)

    Perry, S. C.; Silva, M. P.

    2001-12-01

    The Southern California Fault Activity Database (SCFAD) will supply WEB-accessible data about active faults throughout southern California, an essential resource for basic research and earthquake hazard mitigation. The SCFAD is funded by the Southern California Earthquake Center (SCEC) to compile and summarize published data pertaining to each fault's slip rate, recurrence interval, slip per event, and known damaging earthquakes, as well as fault location, orientation, and sense of movement. It is based predominantly, but not exclusively, on paleoseismic studies. In addition, the SCFAD archives publications and unpublished data, provides a forum for continuing discussion about fault activity, and highlights needed future research directions. A key goal is to develop a single, consistent representation of the region's faults. Thus, the SCFAD has contributed to, and is designed to coordinate with, databases of the California Division of Mines and Geology, the National Hazard Mapping Program, and 3-D fault geometry models of SCEC's Regional Earthquake Likelihood Models (RELM) project. The SCFAD builds on several existing databases, particularly a Web-based database of Los Angeles basin faults constructed by Ponti, Hecker, Kendrick, and Hamilton at the U. S. Geological Survey. The SCFAD is implemented using FileMaker Pro (v. 5) as a database management system (DBMS) which resides on a Windows 2000 server. The SCFAD will soon be available on-line, viewable through any W3C-compliant Internet browser. Please keep apprised of SCFAD progress at www.relm.org. Collaborations are fundamental to the SCFAD's mission, and we encourage you to participate in the SCFAD's continued growth through use, contributions, and comments.

  18. Using Relocatable Bitstreams For Fault Tolerance

    DTIC Science & Technology

    2007-03-01

    fault tolerant, increasing their dependability and availability, by allowing an FPGA to restore its functionality after a fault has been detected ...device to be programmed, thus providing direct support for dynamic reconfiguration [GLS99]. All action in JBits must be specified in the source code ... FPGA families, including the Virtex-II Pro, and provides a router based on JHDLBits, an open source project that connects JHDL and JBits. JBits 3.0

  19. A Primer on Architectural Level Fault Tolerance

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    2008-01-01

    This paper introduces the fundamental concepts of fault tolerant computing. Key topics covered are voting, fault detection, clock synchronization, Byzantine Agreement, diagnosis, and reliability analysis. Low level mechanisms such as Hamming codes or low level communications protocols are not covered. The paper is tutorial in nature and does not cover any topic in detail. The focus is on rationale and approach rather than detailed exposition.

  20. HVAC Fault Detection and Diagnosis Toolkit

    SciTech Connect

    Haves, Philip; Xu, Peng; Kim, Moosung

    2004-12-31

    This toolkit supports component-level model-based fault detection methods in commercial building HVAC systems. The toolbox consists of five basic modules: a parameter estimator for model calibration, a preprocessor, an AHU model simulator, a steady-state detector, and a comparator. Each of these modules and the fuzzy logic rules for fault diagnosis are described in detail. The toolbox is written in C++ and also invokes the SPARK simulation program.
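    The toolkit itself is written in C++; as a minimal illustration of one of the five modules, the steady-state detector, here is a Python sketch that flags a measurement window as steady when its variation falls below a threshold. The signal values and threshold are made up, and real detectors typically also check slope and window length:

    ```python
    # Minimal steady-state detector sketch: a window is "steady" when the
    # standard deviation of the sensor signal over the window is small.
    from statistics import pstdev

    def is_steady(samples, threshold):
        """True if the signal variation over the window is below the threshold."""
        return pstdev(samples) < threshold

    print(is_steady([20.1, 20.0, 20.2, 20.1], 0.5))  # True: near-constant reading
    print(is_steady([20.0, 24.0, 18.0, 26.0], 0.5))  # False: still in a transient
    ```

    Gating the comparator on a detector like this keeps model-based fault detection from raising alarms on transient data, where the steady-state AHU model is not valid.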

  1. Stacking fault energy in some single crystals

    NASA Astrophysics Data System (ADS)

    Vora, Aditya M.

    2012-06-01

    The stacking fault energy of single crystals has been reported using the peak-shift method. All of the single crystals studied here were grown in the laboratory using the direct vapour transport (DVT) technique. The structural characterization of these crystals was made by XRD. Considerable variations are seen in the deformation (α) and growth (β) fault probabilities in the single crystals due to off-stoichiometry, which gives rise to stacking faults in the single crystals.

  2. [Does a victim's fault exonerate medical responsibility?].

    PubMed

    Bernard, M G

    2005-06-01

    A victim's fault constitutes a paradox in which the person creates her own damage and runs the risk of suffering from it. That situation can occur in several settings, whether exploratory or therapeutic. French liability rules consist in the application of Civil Code articles no. 1382 and 1383, but mainly article 1147, concerning contractual liability since 1936. Indubitably, the victim's fault exonerates the practitioner from liability, at least in part. After reporting a few cases, the author proposes safety rules which can help avoid such problems.

  3. Fault-related rocks: Suggestions for terminology

    NASA Astrophysics Data System (ADS)

    Wise, D. U.; Dunn, D. E.; Engelder, J. T.; Geiser, P. A.; Hatcher, R. D.; Kish, S. A.; Odom, A. L.; Schamel, S.

    1984-07-01

    Many traditional terms for fault-related rocks have undergone recent dynamic metamorphism under high-pressure discussions by various groups of specialists. A generally acceptable simplified framework encompassing these and associated structural terms is now needed for many geologic, engineering, and legal purposes. Such a framework is proposed here, focusing on a rate-of-strain versus rate-of-recovery diagram and relating this framework to the products of brittle and ductile deformation along faults.

  4. Effect of distributed inelastic deformation on fault slip profiles and fault interaction under mid-crustal conditions

    NASA Astrophysics Data System (ADS)

    Nevitt, J. M.; Pollard, D. D.

    2015-12-01

    Under mid-crustal conditions, faults commonly are associated with distributed inelastic deformation (i.e., ductile fabrics). The effect of such inelastic deformation on fault slip profiles and fault interaction remains poorly understood, though it likely plays a significant role in the earthquake cycle. We have investigated meter-scale strike-slip faults exhumed from ~10 km depth in the Lake Edison granodiorite (Sierra Nevada, CA). These faults are characterized by slip-to-length ratios and slip gradients near fault tips that greatly exceed what is measured for faults in the brittle upper crust, or produced by linear elastic models. Using Abaqus, we construct elastoplastic finite element models to evaluate the impact of off-fault plasticity on the resulting slip profiles for both continuous and discontinuous faults. Elastoplastic models show that plastic strain near fault tips effectively lengthens faults, allowing for greater overall slip and increased slip gradients near fault tips. In the field, regions adjacent to fault tips contain mylonitized granodiorite and ductilely sheared dikes and schlieren, consistent with the model results. In addition, distributed plastic strain facilitates slip transfer between echelon fault segments, particularly for contractional step geometries. Relative to an isolated fault, fault segments adjacent to contractional steps are asymmetric, with the maximum slip shifted in the direction of the step. Immediately adjacent to the contractional step, fault slip is significantly reduced because shear offset is accommodated by distributed plastic shearing within the step, rather than by discrete slip on the faults. Although slip is locally reduced on each fault segment directly adjacent to a contractional step, overall slip transfer between discontinuous fault segments is most efficient for this step geometry. That is, faults segmented by contractional steps produce greater maximum slip than do those separated by extensional steps

  5. Fault-tolerant PACS server

    NASA Astrophysics Data System (ADS)

    Cao, Fei; Liu, Brent J.; Huang, H. K.; Zhou, Michael Z.; Zhang, Jianguo; Zhang, X. C.; Mogel, Greg T.

    2002-05-01

    Failure of a PACS archive server could cripple an entire PACS operation. Last year we demonstrated that it was possible to design a fault-tolerant (FT) server with 99.999% uptime. The FT design was based on triple modular redundancy with a simple majority vote to automatically detect and mask a faulty module. The purpose of this presentation is to report on its continued development, integration with external mass storage devices, and laboratory failover experiments. An FT PACS Simulator with generic PACS software has been used in the experiments. To simulate a PACS clinical operation, image examinations are transmitted continuously from the modality simulator to the DICOM gateway and then to the FT PACS server and workstations. Hardware failures in the network, FT server module, disk, RAID, and DLT are manually induced to observe the failover recovery of the FT PACS to resume its normal data flow. We then test and evaluate the FT PACS server in its reliability, functionality, and performance.
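    The triple modular redundancy scheme described above reduces, at its core, to a majority vote over three redundant outputs. A minimal sketch of the voting logic, not the authors' implementation:

    ```python
    # Triple modular redundancy (TMR): three modules compute the same result and
    # a majority vote masks a single faulty module.
    def tmr_vote(a, b, c):
        """Return the majority value of three redundant module outputs.

        Raises if all three disagree, i.e. more than one module is faulty and
        the fault cannot be masked.
        """
        if a == b or a == c:
            return a
        if b == c:
            return b
        raise RuntimeError("no majority: more than one module faulty")

    print(tmr_vote(42, 42, 7))  # 42: the single faulty module is outvoted
    ```

    A single faulty module is both detected (its output disagrees with the majority) and masked (the voted result is still correct), which is what allows the server to keep running while the faulty module is replaced.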

  6. Optical methods in fault dynamics

    NASA Astrophysics Data System (ADS)

    Uenishi, K.; Rossmanith, H. P.

    2003-10-01

    The Rayleigh pulse interaction with a pre-stressed, partially contacting interface between similar and dissimilar materials is investigated experimentally as well as numerically. This study is intended to obtain an improved understanding of the interface (fault) dynamics during the earthquake rupture process. Using dynamic photoelasticity in conjunction with high-speed cinematography, snapshots of time-dependent isochromatic fringe patterns associated with Rayleigh pulse-interface interaction are experimentally recorded. It is shown that interface slip (instability) can be triggered dynamically by a pulse which propagates along the interface at the Rayleigh wave speed. For the numerical investigation, the finite difference wave simulator SWIFD is used for solving the problem under different combinations of contacting materials. The effect of acoustic impedance ratio of the two contacting materials on the wave patterns is discussed. The results indicate that upon interface rupture, Mach (head) waves, which carry a relatively large amount of energy in a concentrated form, can be generated and propagated from the interface contact region (asperity) into the acoustically softer material. Such Mach waves can cause severe damage onto a particular region inside an adjacent acoustically softer area. This type of damage concentration might be a possible reason for the generation of the "damage belt" in Kobe, Japan, on the occasion of the 1995 Hyogo-ken Nanbu (Kobe) Earthquake.

  7. Evaluation of Cepstrum Algorithm with Impact Seeded Fault Data of Helicopter Oil Cooler Fan Bearings and Machine Fault Simulator Data

    DTIC Science & Technology

    2013-02-01

    Excerpts from the report's list of figures: ball seeded fault bearings with and without gearbox and magnet load at levels 1, 3, and 5 (Figures B-1 through B-3), and a seeded fault without gearbox and magnet load.

  8. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    SciTech Connect

    Cheung, Howard; Braun, James E.

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.
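    As a minimal illustration of the simplest fault class in the report, a sensor bias can be modeled by adding a constant offset to otherwise-correct readings, which an FDD algorithm must then distinguish from real operating changes. The function name and values below are made up for illustration and are not the report's OpenStudio models:

    ```python
    # Sensor-bias fault model sketch: the faulty sensor reports every true
    # reading shifted by a constant offset.
    def apply_sensor_bias(readings, bias):
        """Simulate a biased sensor by offsetting each true reading."""
        return [r + bias for r in readings]

    true_supply_temps = [13.0, 13.2, 12.9]  # degrees C, illustrative values
    faulty = apply_sensor_bias(true_supply_temps, 2.0)  # each reading +2.0 degC
    print(faulty)
    ```

    Injecting a model like this into a building simulation lets one quantify the fault's energy and comfort impact, and gives FDD algorithms labeled faulty data to train and test against.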

  9. The organization of seismicity on fault networks.

    PubMed Central

    Knopoff, L

    1996-01-01

    Although models of homogeneous faults develop seismicity that has a Gutenberg-Richter distribution, this is only a transient state that is followed by events that are strongly influenced by the nature of the boundaries. Models with geometrical inhomogeneities of fracture thresholds can limit the sizes of earthquakes but now favor the characteristic earthquake model for large earthquakes. The character of the seismicity is extremely sensitive to distributions of inhomogeneities, suggesting that statistical rules for large earthquakes in one region may not be applicable to large earthquakes in another region. Model simulations on simple networks of faults with inhomogeneities of threshold develop episodes of lacunarity on all members of the network. There is no validity to the popular assumption that the average rate of slip on individual faults is a constant. Intermediate term precursory activity such as local quiescence and increases in intermediate-magnitude activity at long range are simulated well by the assumption that strong weakening of faults by injection of fluids and weakening of asperities on inhomogeneous models of fault networks is the dominant process; the heat flow paradox, the orientation of the stress field, and the low average stress drop in some earthquakes are understood in terms of the asperity model of inhomogeneous faulting. PMID:11607672

  10. Sequential Testing Algorithms for Multiple Fault Diagnosis

    NASA Technical Reports Server (NTRS)

    Shakeri, Mojdeh; Raghavan, Vijaya; Pattipati, Krishna R.; Patterson-Hine, Ann

    1997-01-01

    In this paper, we consider the problem of constructing optimal and near-optimal test sequencing algorithms for multiple fault diagnosis. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and AND/OR graph search, we present several test sequencing algorithms for the multiple fault isolation problem. These algorithms provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a diagnostic directed graph (digraph), instead of a diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. The algorithms developed herein have been successfully applied to several real-world systems. Computational results indicate that the size of a multiple fault strategy is strictly related to the structure of the system.
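    The information-theoretic idea behind such test sequencing can be sketched with a one-step greedy heuristic: choose the next test whose pass/fail outcome over the remaining fault candidates is most informative, i.e. has the highest entropy. The fault and test sets below are hypothetical, and the paper's algorithms go well beyond this single-fault, one-step sketch:

    ```python
    # Greedy information-gain test selection sketch for fault isolation.
    import math

    def entropy(p):
        """Binary entropy in bits of an outcome with probability p."""
        return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    def best_test(candidates, tests):
        """Pick the test whose outcome best splits the candidate fault set.

        tests: mapping test-name -> set of faults that make the test fail.
        """
        def info(detects):
            p = len(detects & candidates) / len(candidates)
            return entropy(p)
        return max(tests, key=lambda t: info(tests[t]))

    faults = {"f1", "f2", "f3", "f4"}
    tests = {"t1": {"f1"}, "t2": {"f1", "f2"}}  # hypothetical detectability sets
    print(best_test(faults, tests))  # t2: splits the candidates 2/2, a full bit
    ```

    Applied recursively to each outcome's surviving candidate set, this greedy choice builds a diagnostic tree; merging identical candidate sets reached along different branches is what turns the tree into the more compact digraph the paper describes.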

  11. Protecting Against Faults in JPL Spacecraft

    NASA Technical Reports Server (NTRS)

    Morgan, Paula

    2007-01-01

    A paper discusses techniques for protecting against faults in spacecraft designed and operated by NASA's Jet Propulsion Laboratory (JPL). The paper addresses, more specifically, fault-protection requirements and techniques common to most JPL spacecraft (in contradistinction to unique, mission-specific techniques), standard practices in the implementation of these techniques, and fault-protection software architectures. Common requirements include those to protect onboard command, data-processing, and control computers; protect against loss of Earth/spacecraft radio communication; maintain safe temperatures; and recover from power overloads. The paper describes fault-protection techniques as part of a fault-management strategy that also includes functional redundancy, redundant hardware, and autonomous monitoring of (1) the operational and health statuses of spacecraft components, (2) temperatures inside and outside the spacecraft, and (3) allocation of power. The strategy also provides for preprogrammed automated responses to anomalous conditions. In addition, the software running in almost every JPL spacecraft incorporates a general-purpose "Safe Mode" response algorithm that configures the spacecraft in a lower-power state that is safe and predictable, thereby facilitating diagnosis of more complex faults by a team of human experts on Earth.

  12. Software fault tolerance in computer operating systems

    NASA Technical Reports Server (NTRS)

    Iyer, Ravishankar K.; Lee, Inhwan

    1994-01-01

    This chapter provides data and analysis of the dependability and fault tolerance for three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, which was originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors which results in the backup execution (the processor state and the sequence of events occurring) being different from the original execution is a major reason for the measured software fault tolerance. The IBM/MVS system fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.

  13. Diagnosing faults in autonomous robot plan execution

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Doshi, Rajkumar S.; Atkinson, David J.; Lawson, Denise M.

    1989-01-01

    A major requirement for an autonomous robot is the capability to diagnose faults during plan execution in an uncertain environment. Much diagnostic research concentrates only on hardware failures within an autonomous robot. Taking a different approach, this paper describes the implementation of a Telerobot Diagnostic System that addresses, in addition to hardware failures, failures caused by unexpected event changes in the environment and failures due to plan errors. One feature of the system is the utilization of task-plan knowledge and context information to deduce fault symptoms. This forward deduction provides valuable information on past activities and the current expectations of a robotic event, both of which can guide the plan-execution inference process. The inference process adopts a model-based technique to recreate the plan-execution process and to confirm fault-source hypotheses. This technique allows the system to diagnose multiple faults due to either unexpected plan failures or hardware errors. This research initiates a major effort to investigate relationships between hardware faults and plan errors, relationships which were not addressed in the past. The results of this research will provide a clear understanding of how to generate a better task planner for an autonomous robot and how to recover the robot from faults in a critical environment.

  14. Diagnosing faults in autonomous robot plan execution

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Doshi, Rajkumar S.; Atkinson, David J.; Lawson, Denise M.

    1988-01-01

    A major requirement for an autonomous robot is the capability to diagnose faults during plan execution in an uncertain environment. Much diagnostic research concentrates only on hardware failures within an autonomous robot. Taking a different approach, this paper describes the implementation of a Telerobot Diagnostic System that addresses, in addition to hardware failures, failures caused by unexpected event changes in the environment and failures due to plan errors. One feature of the system is the utilization of task-plan knowledge and context information to deduce fault symptoms. This forward deduction provides valuable information on past activities and the current expectations of a robotic event, both of which can guide the plan-execution inference process. The inference process adopts a model-based technique to recreate the plan-execution process and to confirm fault-source hypotheses. This technique allows the system to diagnose multiple faults due to either unexpected plan failures or hardware errors. This research initiates a major effort to investigate relationships between hardware faults and plan errors, relationships which were not addressed in the past. The results of this research will provide a clear understanding of how to generate a better task planner for an autonomous robot and how to recover the robot from faults in a critical environment.

  15. Photovoltaic system grounding and fault protection

    NASA Technical Reports Server (NTRS)

    Stolte, W. J.

    1983-01-01

    The grounding and fault protection aspects of large photovoltaic power systems are studied. Broadly, the overlapping functions of these two plant subsystems include providing for the safety of personnel and equipment. Grounding subsystem design is generally governed by considerations of personnel safety and the limiting of hazardous voltages to which they are exposed during the occurrence of a fault or other misoperation of equipment. A ground system is designed to provide a safe path for fault currents. Metal portions of the modules, array structures, and array foundations are used as a part of the ground system, provided that they and their interconnection are designed to be suitably reliable over the life of the plant. Several alternative types of fault protection and detection equipment are designed into the source circuits and dc buses feeding the input terminals of the subfield power conditioner. This design process requires evaluation of plausible faults, equipment, and the remedial actions planned to correct faults. The evaluation should also consider life cycle cost impacts.

  16. WFSD fault monitoring using active seismic source

    NASA Astrophysics Data System (ADS)

    Yang, W.; Ge, H.; Wang, B.; Yuan, S.; Song, L.

    2010-12-01

    The Wenchuan Fault Scientific Drilling (WFSD) is a rapid-response drilling project to the great Wenchuan earthquake. It focuses on the fault structure, earthquake physical mechanism, fluid and in-situ stress, energy budget, and related questions. Temporal variation of stress and physical properties in the fault zone is important information for understanding earthquake physics, especially when the fault is still undergoing post-seismic recovery or stress modification. Seismic velocity is a good indicator of the medium's mechanics and the stress state within the fault zone. After the great Wenchuan Ms 8.0 earthquake of May 12, 2008, we built a fault dynamic monitoring system using an active seismic source across the WFSD fault. It consists of a 10 ton accurately controlled eccentric mass source and eight receivers that continuously monitor the seismic velocity across the fault zone. Combining the aftershock data, we try to monitor the fault recovery and some aftershock physical processes. The observatory is located at the middle of the Longmenshan range-front fault, Mianzhu, Sichuan Province. The No. 3 hole of WFSD is on the survey line near the No. 4 receiver. The source and receiver sites were carefully prepared, and all instruments were well installed to ensure the system's repeatability. Seismic velocity across the fault zone was monitored with continuous observation. The recording system consists of a Guralp-40T short-period seismometer and a RefTek-130B recorder, continuously GPS-timed to within 20 µs. The active source has run since June 20, 2009. It is operated routinely at night, working continuously from 21:00 to 02:00 the next day. So far, we have obtained almost one year of recordings. The seismic velocity variation may be caused by changes in the fault zone medium's mechanical properties, fault stress, and fluids, as well as by earth tides, barometric pressure, and rainfall. Deconvolution, stacking, and cross-correlation analysis were used for the velocity analysis. Results show that the relationship between seismic

  17. The Lower Tagus Valley (LTV) Fault System

    NASA Astrophysics Data System (ADS)

    Besana-Ostman, G. M.; Fereira, H.; Pinheiro, A.; Falcao Flor, A. P.; Nemser, E.; Villanova, S. P.; Fonseca, J. D.

    2010-05-01

    The LTV fault and its associated historical seismic activity have been the focus of several scientific studies in Portugal. There are at least three historical earthquakes associated with the LTV fault, in 1344, 1531, and 1909. Magnitude estimates for these earthquakes range from 6.5 to 7.0. They caused widespread damage throughout the Lower Tagus Valley region, with intensities ranging from VIII to X from Lisbon to Entroncamento. The LTV fault has likewise been proposed to have ruptured coseismically during the great 1755 earthquake. The Azambuja fault and the Vila Franca de Xira fault have been suggested as origins of the 1909 earthquake. Trenching activities, together with borehole data analyses, geophysical investigations, and seismic hazard assessments, have been undertaken in the LTV in recent years. Complex trench features along the excavated sections were argued to be either fault- or erosion-related phenomena. Borehole data and seismic profiles indicate subsurface structures within the Lower Tagus Valley and adjacent areas. Furthermore, recent attempts to improve seismic hazard assessment indicate that the highest values in Portugal for 10% probability of exceedance in 50 years correspond to the greater Lisbon area, with the LTV fault as the most probable source. Considering the above, efforts are being made to acquire more information about the location of the LTV seismic source, taking into account the presence of extensive erosion and/or deposition processes within the valley, densely populated urban areas, heavily forested regions, and flooded sections such as the Tagus estuary. Results from recent mapping along the LTV reveal surface faulting that has left-laterally displaced numerous geomorphic landforms within the Lower Tagus River valley. The mapped trace shows clear evidence of left-lateral displacement and deformation within the valley, transecting the river, its tributaries, and innumerable young terraces. The trace has been mapped by analyzing topographic maps

  18. An update of Quaternary faults of central and eastern Oregon

    USGS Publications Warehouse

    Weldon, Ray J.; Fletcher, D.K.; Weldon, E.M.; Scharer, K.M.; McCrory, P.A.

    2002-01-01

    This is the online version of a CD-ROM publication. We have updated the eastern portion of our previous active fault map of Oregon (Pezzopane, Nakata, and Weldon, 1992) as a contribution to the larger USGS effort to produce digital maps of active faults in the Pacific Northwest region. The 1992 fault map has seen wide distribution and has been reproduced in essentially all subsequent compilations of active faults of Oregon. The new map provides a substantial update of known active or suspected active faults east of the Cascades. Improvements in the new map include (1) many newly recognized active faults, (2) a linked ArcInfo map and reference database, (3) more precise locations for previously recognized faults on shaded relief quadrangles generated from USGS 30-m digital elevation models (DEMs), (4) more uniform coverage resulting in more consistent grouping of the ages of active faults, and (5) a new category of 'possibly' active faults that share characteristics with known active faults, but have not been studied adequately to assess their activity. The distribution of active faults has not changed substantially from the original Pezzopane, Nakata and Weldon map. Most faults occur in the south-central Basin and Range tectonic province that is located in the backarc portion of the Cascadia subduction margin. These faults occur in zones consisting of numerous short faults with similar rates, ages, and styles of movement. Many active faults strongly correlate with the most active volcanic centers of Oregon, including Newberry Craters and Crater Lake.

  19. Intermediate Depth Earthquakes in Middle America: Fault Reactivation or Formation?

    NASA Astrophysics Data System (ADS)

    Langstaff, M. A.; Warren, L. M.; Silver, P. G.

    2006-12-01

    Intermediate-depth earthquakes are often attributed to dehydration embrittlement reactivating pre-existing weak zones. The orientations of pre-subduction faults are particularly well known offshore of Middle America, where seismic reflection profiles show outer-rise faults dipping towards the trench and extending >20 km into the lithosphere. If water is transported along these faults and incorporated into hydrous minerals, the faults may be reactivated later when the minerals dehydrate. In this case, the fault orientations should be the same in the outer rise and at depth, after accounting for the angle of subduction. To test this hypothesis, we analyze the directivity of 54 large (Mw > 5.7) earthquakes between 40 and 220 km depth in the Middle America Trench. For 15 of these earthquakes, the directivity vector allows us to confidently distinguish the fault plane of the earthquake. Between 40 and 85 km depth, we observe both subhorizontal and subvertical fault planes. The subvertical fault planes are consistent with the reactivation of outer rise faults, whereas the subhorizontal fault planes suggest the formation of new faults. Deeper than 85 km, we only observe subhorizontal faults, indicating that the outer rise faults are no longer reactivated. The occurrence of only subhorizontal faults may be due to unbending stresses preferentially creating horizontal faults, or an isobaric rupture process.

  20. Towards understanding earthquake nucleation on a severely misoriented plate boundary fault, Alpine Fault, New Zealand

    NASA Astrophysics Data System (ADS)

    Boulton, C. J.; Faulkner, D. R.; Allen, M. J.; Coussens, J.; Menzies, C. D.; Mariani, E.

    2016-12-01

    New Zealand's Alpine Fault has accommodated relative motion between the Australian and Pacific plates for over 23 million years: first as a strike-slip fault and then as an oblique transpressional fault. Despite being driven by principal stresses whose orientations have undoubtedly changed with time, the Alpine Fault continues to accommodate 70% of the relative plate boundary motion. Fault outcrop data and seismic reflection data indicate that the central Alpine Fault is consistently oriented 055/45°SE at depths up to 15 km (i.e., throughout the seismogenic zone); focal mechanisms indicate that the stress tensor is oriented σ1=σHmax=0/117°, σ2=σv, and σ3=0/207° (Boese et al. 2013, doi: 10.1016/j.epsl.2013.06.030). At depth, the central Alpine Fault lies at an angle of 51° to σ1. The Mohr-Coulomb failure criterion stipulates that, for incohesive rocks, reactivation of a fault requires sufficient driving stress to overcome frictional resistance to slip. Using a coefficient of friction (μ) of 0.6, as measured for representative Alpine Fault rocks under in situ conditions (Neimeijer et al. 2016, doi:10.1002/2015JB012593), and an estimated stress shape ratio (Φ=(σ2 - σ3)/(σ1 - σ3)=0.5), a 3-D reactivation analysis was performed (Leclère and Fabbri 2013, doi:10.1016/j.jsg.2012.11.004). Results show that the Alpine Fault is severely misoriented for failure, requiring pore fluid pressures greater than the least principal stress to initiate frictional sliding. However, microstructural evidence, including pseudotachylytes and fault gouge injection structures, suggests that earthquakes nucleate and propagate along this major plate boundary fault. By assuming an increase in differential stress of 15 MPa/km, our analysis shows that reactivation may occur with suprahydrostatic pore fluid pressures given a ≥10° counterclockwise rotation of σHmax. Using measured hydraulic data, we estimate the potential for pore fluid overpressure development within the Alpine
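The core arithmetic of a frictional-reactivation analysis can be sketched as follows: resolve the traction exerted by the principal stresses on the fault plane, then solve the cohesionless Coulomb criterion τ = μ(σn - Pf) for the pore fluid pressure required for slip. The stress magnitudes and orientation below are illustrative placeholders, not the Alpine Fault in situ values, and the abstract's full analysis uses the 3-D method of Leclère and Fabbri (2013) with the stress shape ratio Φ.

```python
# Sketch: pore fluid pressure required to reactivate a cohesionless plane in a
# principal-stress field. Stress values here are invented for illustration.
import numpy as np

def pore_pressure_for_slip(s1, s2, s3, normal, mu=0.6):
    """Pf needed for Coulomb slip on a plane with unit normal `normal`,
    expressed in the principal-stress coordinate frame (s1 >= s2 >= s3, MPa)."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    stress = np.diag([s1, s2, s3])
    t = stress @ n                            # traction vector on the plane
    sigma_n = float(n @ t)                    # normal stress on the plane
    tau = float(np.sqrt(t @ t - sigma_n**2))  # shear stress magnitude
    return sigma_n - tau / mu                 # from tau = mu * (sigma_n - Pf)

# a plane at 51 degrees to sigma1 (normal 39 degrees from sigma1), lying in
# the sigma1-sigma3 plane; magnitudes are arbitrary example values
alpha = np.radians(39.0)
normal = [np.cos(alpha), 0.0, np.sin(alpha)]
pf = pore_pressure_for_slip(s1=260.0, s2=200.0, s3=140.0, normal=normal)
```

The more misoriented the plane and the weaker the driving stress, the closer the required Pf climbs toward (and in severe 3-D misorientation, past) the least principal stress.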

  1. Fault structure and mechanics of the Hayward Fault, California from double-difference earthquake locations

    USGS Publications Warehouse

    Waldhauser, F.; Ellsworth, W.L.

    2002-01-01

    The relationship between small-magnitude seismicity and large-scale crustal faulting along the Hayward Fault, California, is investigated using a double-difference (DD) earthquake location algorithm. We used the DD method to determine high-resolution hypocenter locations of the seismicity that occurred between 1967 and 1998. The DD technique incorporates catalog travel time data and relative P and S wave arrival time measurements from waveform cross correlation to solve for the hypocentral separation between events. The relocated seismicity reveals a narrow, near-vertical fault zone at most locations. This zone follows the Hayward Fault along its northern half and then diverges from it to the east near San Leandro, forming the Mission trend. The relocated seismicity is consistent with the idea that slip from the Calaveras Fault is transferred over the Mission trend onto the northern Hayward Fault. The Mission trend is not clearly associated with any mapped active fault as it continues to the south and joins the Calaveras Fault at Calaveras Reservoir. In some locations, discrete structures adjacent to the main trace are seen, features that were previously hidden in the uncertainty of the network locations. The fine structure of the seismicity suggests that the fault surface on the northern Hayward Fault is curved or that the events occur on several substructures. Near San Leandro, where the more westerly striking trend of the Mission seismicity intersects with the surface trace of the (aseismic) southern Hayward Fault, the seismicity remains diffuse after relocation, with strong variation in focal mechanisms between adjacent events indicating a highly fractured zone of deformation. The seismicity is highly organized in space, especially on the northern Hayward Fault, where it forms horizontal, slip-parallel streaks of hypocenters of only a few tens of meters width, bounded by areas almost absent of seismic activity.
During the interval from 1984 to 1998, when digital
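The cross-correlation measurement feeding the DD method can be sketched on synthetic data: the relative arrival time between two nearby events recorded at one station is read off the lag that maximizes their waveform cross-correlation. The sample rate and wavelet below are arbitrary choices, not values from the study.

```python
# Sketch: measure a differential arrival time between two event waveforms by
# locating the peak of their cross-correlation. Synthetic Gabor wavelets.
import numpy as np

def differential_time(w1, w2, dt):
    """Lag (s) of w2 relative to w1 at the cross-correlation maximum."""
    cc = np.correlate(w2, w1, mode="full")
    lag_samples = int(np.argmax(cc)) - (len(w1) - 1)
    return lag_samples * dt

dt = 0.01                                   # 100 Hz sampling (assumed)
t = np.arange(0, 2, dt)
wavelet = lambda t0: np.exp(-((t - t0) / 0.05) ** 2) * np.sin(2 * np.pi * 10 * (t - t0))
event_a = wavelet(0.80)                     # arrival of event A at 0.80 s
event_b = wavelet(0.87)                     # event B arrives 0.07 s later
delay = differential_time(event_a, event_b, dt)
```

Collections of such differential times over many event pairs and stations are what the DD algorithm inverts for hypocentral separations.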

  2. Parametric analysis of inherited low-angle fault reactivation, application to the Aegean detachment faults

    NASA Astrophysics Data System (ADS)

    Lecomte, E.; Le Pourhiet, L.; Lacombe, O.; Jolivet, L.

    2009-04-01

    Widespread occurrences of low-angle normal faults have been described within the extending continental crust since their discovery in the Basin and Range province. Although a number of field observations suggest that sliding may occur at very shallow dip in the brittle field, the seismic activity related to such normal faults is nearly nonexistent, in agreement with the locking angle of 30° predicted from Andersonian fault mechanics associated with Byerlee's law. To understand this apparent contradiction, we have introduced a Mohr-Coulomb plastic flow rule within the inherited low-angle faults, where former studies were limited to a yield criterion. The fault is considered as a pre-existing compacting or dilating plane with a shallow dip (0-45°) embedded in a brittle medium. Following Anderson's theory, we assume that the maximum principal stress is vertical and equal to the lithostatic pressure. This approximation may not hold for small faults, but it does for large detachment faults, where associated joints are generally vertical. With this model, we can predict not only whether new brittle features form in the surroundings of the low-angle normal fault but also the complete stress-strain evolution both within the fault and in its surroundings. Moreover, the introduction of a flow rule within the fault allows brittle strain to occur on very badly oriented faults (dip < 30°) before yielding occurs in the surrounding medium. After performing a full parametric study, we find that the reactivation of low-angle normal faults depends primarily on the friction angle of the fault material and the ratio of cohesion between the shear band and its surroundings. Our model is therefore in good agreement with previous simpler models, and the locking angles obtained differ in most cases by only 2 or 3° from previous yield-criterion-based approaches, which explained most of the data, especially the distribution of focal mechanisms worldwide. However, we find that in some cases

  3. Aftershocks illuminate the 2011 Mineral, Virginia, earthquake causative fault zone and nearby active faults

    USGS Publications Warehouse

    Horton, Jr., J. Wright; Shah, Anjana K.; McNamara, Daniel E.; Snyder, Stephen L.; Carter, Aina M

    2015-01-01

    Deployment of temporary seismic stations after the 2011 Mineral, Virginia (USA), earthquake produced a well-recorded aftershock sequence. The majority of aftershocks are in a tabular cluster that delineates the previously unknown Quail fault zone. Quail fault zone aftershocks range from ~3 to 8 km in depth and are in a 1-km-thick zone striking ~036° and dipping ~50°SE, consistent with a 028°, 50°SE main-shock nodal plane having mostly reverse slip. This cluster extends ~10 km along strike. The Quail fault zone projects to the surface in gneiss of the Ordovician Chopawamsic Formation just southeast of the Ordovician–Silurian Ellisville Granodiorite pluton tail. The following three clusters of shallow (<3 km) aftershocks illuminate other faults. (1) An elongate cluster of early aftershocks, ~10 km east of the Quail fault zone, extends 8 km from Fredericks Hall, strikes ~035°–039°, and appears to be roughly vertical. The Fredericks Hall fault may be a strand or splay of the older Lakeside fault zone, which to the south spans a width of several kilometers. (2) A cluster of later aftershocks ~3 km northeast of Cuckoo delineates a fault near the eastern contact of the Ordovician Quantico Formation. (3) An elongate cluster of late aftershocks ~1 km northwest of the Quail fault zone aftershock cluster delineates the northwest fault (described herein), which is temporally distinct, dips more steeply, and has a more northeastward strike. Some aftershock-illuminated faults coincide with preexisting units or structures evident from radiometric anomalies, suggesting tectonic inheritance or reactivation.

  4. Aftershocks of the 2014 South Napa, California, Earthquake: Complex faulting on secondary faults

    USGS Publications Warehouse

    Hardebeck, Jeanne L.; Shelly, David R.

    2016-01-01

    We investigate the aftershock sequence of the 2014 Mw 6.0 South Napa, California, earthquake. Low-magnitude aftershocks missing from the network catalog are detected by applying a matched-filter approach to continuous seismic data, with the catalog earthquakes serving as the waveform templates. We measure precise differential arrival times between events, which we use for double-difference event relocation in a 3D seismic velocity model. Most aftershocks are deeper than the mainshock slip, and most occur west of the mapped surface rupture. While the mainshock coseismic and postseismic slip appears to have occurred on the near-vertical, strike-slip West Napa fault, many of the aftershocks occur in a complex zone of secondary faulting. Earthquake locations in the main aftershock zone, near the mainshock hypocenter, delineate multiple dipping secondary faults. Composite focal mechanisms indicate strike-slip and oblique-reverse faulting on the secondary features. The secondary faults were moved towards failure by Coulomb stress changes from the mainshock slip. Clusters of aftershocks north and south of the main aftershock zone exhibit vertical strike-slip faulting more consistent with the West Napa Fault. The northern aftershocks correspond to the area of largest mainshock coseismic slip, while the main aftershock zone is adjacent to the fault area that has primarily slipped postseismically. Unlike most creeping faults, the zone of postseismic slip does not appear to contain embedded stick-slip patches that would have produced on-fault aftershocks. The lack of stick-slip patches along this portion of the fault may contribute to the low productivity of the South Napa aftershock sequence.
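The matched-filter step described here can be sketched on synthetic data: a catalog event's waveform is slid along the continuous record, and windows whose normalized correlation with the template exceeds a threshold are declared detections. The template, noise level, and threshold below are arbitrary illustrative choices.

```python
# Sketch: matched-filter event detection by normalized cross-correlation of a
# template against continuous data. All parameters are invented examples.
import numpy as np

def matched_filter(data, template, threshold=0.8):
    """Return sample indices where the normalized correlation >= threshold."""
    nt = len(template)
    tpl = (template - template.mean()) / template.std()
    hits = []
    for i in range(len(data) - nt + 1):
        win = data[i:i + nt]
        if win.std() == 0:
            continue
        cc = float(np.dot(tpl, (win - win.mean()) / win.std())) / nt
        if cc >= threshold:
            hits.append(i)
    return hits

rng = np.random.default_rng(0)
template = np.sin(2 * np.pi * 5 * np.arange(0, 0.4, 0.01))   # 40-sample template
data = 0.05 * rng.standard_normal(2000)                      # background noise
data[600:640] += template              # a buried copy of the template event
detections = matched_filter(data, template)
```

Production matched-filter studies use many templates over multiple stations and components and stack the correlation traces, which is what pushes detection well below the catalog magnitude threshold.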

  5. Late Holocene earthquakes on the Toe Jam Hill fault, Seattle fault zone, Bainbridge Island, Washington

    USGS Publications Warehouse

    Nelson, A.R.; Johnson, S.Y.; Kelsey, H.M.; Wells, R.E.; Sherrod, B.L.; Pezzopane, S.K.; Bradley, L.-A.; Koehler, R. D.; Bucknam, R.C.

    2003-01-01

    Five trenches across a Holocene fault scarp yield the first radiocarbon-measured earthquake recurrence intervals for a crustal fault in western Washington. The scarp, the first to be revealed by laser imagery, marks the Toe Jam Hill fault, a north-dipping backthrust to the Seattle fault. Folded and faulted strata, liquefaction features, and forest soil A horizons buried by hanging-wall-collapse colluvium record three, or possibly four, earthquakes between 2500 and 1000 yr ago. The most recent earthquake is probably the 1050-1020 cal. (calibrated) yr B.P. (A.D. 900-930) earthquake that raised marine terraces and triggered a tsunami in Puget Sound. Vertical deformation estimated from stratigraphic and surface offsets at trench sites suggests late Holocene earthquake magnitudes near M7, corresponding to surface ruptures >36 km long. Deformation features recording poorly understood latest Pleistocene earthquakes suggest that they were smaller than late Holocene earthquakes. Postglacial earthquake recurrence intervals based on 97 radiocarbon ages, most on detrital charcoal, range from ~12,000 yr to as little as a century or less; corresponding fault-slip rates are 0.2 mm/yr for the past 16,000 yr and 2 mm/yr for the past 2500 yr. Because the Toe Jam Hill fault is a backthrust to the Seattle fault, it may not have ruptured during every earthquake on the Seattle fault. But the earthquake history of the Toe Jam Hill fault is at least a partial proxy for the history of the rest of the Seattle fault zone.

  6. Fault creep rates of the Chaman fault (Afghanistan and Pakistan) inferred from InSAR

    NASA Astrophysics Data System (ADS)

    Barnhart, William D.

    2017-01-01

    The Chaman fault is the major strike-slip structural boundary between the India and Eurasia plates. Despite sinistral slip rates similar to the North America-Pacific plate boundary, no major (>M7) earthquakes have been documented along the Chaman fault, indicating that the fault either creeps aseismically or is at a late stage in its seismic cycle. Recent work with remotely sensed interferometric synthetic aperture radar (InSAR) time series documented a heterogeneous distribution of fault creep and interseismic coupling along the entire length of the Chaman fault, including a 125 km long creeping segment and a 95 km long locked segment within the region documented in this study. Here I present additional InSAR time series results from the Envisat and ALOS radar missions spanning the southern and central Chaman fault in an effort to constrain the locking depth, dip, and slip direction of the Chaman fault. I find that the fault deviates little from a vertical geometry and accommodates little to no fault-normal displacements. Peak documented creep rates on the fault are 9-12 mm/yr, accounting for 25-33% of the total motion between India and Eurasia, and locking depths in creeping segments are commonly shallower than 500 m. The magnitude of the 1892 Chaman earthquake is well predicted by the total area of the 95 km long coupled segment. To a first order, the heterogeneous distribution of aseismic creep combined with consistently shallow locking depths suggests that the southern and central Chaman fault may only produce small to moderate earthquakes (
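Creep rates and locking depths in InSAR studies of strike-slip faults are commonly interpreted with the elastic screw-dislocation (Savage-Burford) profile, sketched below with illustrative numbers rather than the paper's inversion results. A very shallow locking depth makes the fault-parallel velocity profile nearly step-like at the trace, which is the signature of surface creep.

```python
# Sketch of the Savage-Burford interseismic model: fault-parallel surface
# velocity across a vertical strike-slip fault slipping at a constant rate
# below a locking depth D. Values are illustrative, not inversion results.
import numpy as np

def interseismic_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
    """Fault-parallel surface velocity (mm/yr) at distance x from the trace."""
    return (slip_rate_mm_yr / np.pi) * np.arctan(x_km / locking_depth_km)

x = np.linspace(-50, 50, 201)                   # profile distance (km)
deep = interseismic_velocity(x, 12.0, 10.0)     # locked to 10 km: broad ramp
shallow = interseismic_velocity(x, 12.0, 0.5)   # locked to 0.5 km: near-step "creep"
```

Fitting this profile to an InSAR velocity transect yields the slip (or creep) rate from the far-field asymptotes and the locking depth from the sharpness of the step.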

  7. 3D Dynamic Rupture Simulations along the Wasatch Fault, Utah, Incorporating Rough-fault Topography

    NASA Astrophysics Data System (ADS)

    Withers, Kyle; Moschetti, Morgan

    2017-04-01

    Studies have found that the Wasatch Fault has experienced successive large magnitude (>Mw 7.2) earthquakes, with an average recurrence interval near 350 years. To date, no large magnitude event has been recorded along the fault, with the last rupture along the Salt Lake City segment occurring 1300 years ago. Because of this, as well as the lack of strong ground motion records in basins and from normal-faulting earthquakes worldwide, seismic hazard in the region is not well constrained. Previous numerical simulations have modeled deterministic ground motion in the heavily populated regions of Utah, near Salt Lake City, but were primarily restricted to low frequencies (≤1 Hz). Our goal is to better assess broadband ground motions from the Wasatch Fault Zone. Here, we extend deterministic ground motion prediction to higher frequencies (~5 Hz) in this region by using physics-based spontaneous dynamic rupture simulations along a normal fault with characteristics derived from geologic observations. We use a summation by parts finite difference code (Waveqlab3D) with rough-fault topography following a self-similar fractal distribution (over length scales from 100 m to the size of the fault) and include off-fault plasticity to simulate ruptures > Mw 6.5. Geometric complexity along fault planes has previously been shown to generate broadband sources with spectral energy matching that of observations. We investigate the impact of varying the hypocenter location, as well as the influence that multiple realizations of rough-fault topography have on the rupture process and resulting ground motion. We utilize Waveqlab3D's computational efficiency to model wave propagation to a significant distance from the fault with media heterogeneity at both long and short spatial wavelengths. These simulations generate a synthetic dataset of ground motions to compare with GMPEs, in terms of both the median and the inter- and intraevent variability.

  8. Structural evolution of fault zones in sandstone by multiple deformation mechanisms: Moab fault, southeast Utah

    USGS Publications Warehouse

    Davatzes, N.C.; Eichhubl, P.; Aydin, A.

    2005-01-01

    Faults in sandstone are frequently composed of two classes of structures: (1) deformation bands and (2) joints and sheared joints. Whereas the former structures are associated with cataclastic deformation, the latter ones represent brittle fracturing, fragmentation, and brecciation. We investigated the distribution of these structures, their formation, and the underlying mechanical controls for their occurrence along the Moab normal fault in southeastern Utah through the use of structural mapping and numerical elastic boundary element modeling. We found that deformation bands occur everywhere along the fault, but with increased density in contractional relays. Joints and sheared joints only occur at intersections and extensional relays. In all locations, joints consistently overprint deformation bands. Localization of joints and sheared joints in extensional relays suggests that their distribution is controlled by local variations in stress state that are due to mechanical interaction between the fault segments. This interpretation is consistent with elastic boundary element models that predict a local reduction in mean stress and least compressive principal stress at intersections and extensional relays. The transition from deformation band to joint formation along these sections of the fault system likely resulted from the combined effects of changes in remote tectonic loading, burial depth, fluid pressure, and rock properties. In the case of the Moab fault, we conclude that the structural heterogeneity in the fault zone is systematically related to the geometric evolution of the fault, the local state of stress associated with fault slip, and the remote loading history. Because the type and distribution of structures affect fault permeability and strength, our results predict systematic variations in these parameters with fault evolution. © 2004 Geological Society of America.

  9. Fault tree analysis for maintenance needs

    NASA Astrophysics Data System (ADS)

    Halme, Jari; Aikala, Antti

    2012-05-01

    One of the key issues in maintenance is to allocate focus and resources to those components and subsystems which are the most unreliable and prone to failures. In industrial systems, the fault tree analysis technique can be used to study the reliability of complex systems and their substructures. This paper presents a fault tree application for online analysis of current reliability and failure probability for maintenance purposes. The analysis utilizes data connected to the fault tree root causes and events. An indication of an anomaly, a service action, cumulative loading, or simply elapsed time or a service-hour counter level can trigger a new calculation of the current probabilities of the fault tree events and subsystem interactions. In the proposed approach, real-time, dynamic information from several available data sources and different measurements is interconnected with each fault tree event and root cause. An active, constantly updated link is also formulated between the fault tree events and maintenance databases to support maintenance decisions and to keep the analysis up to date. Typically, the top event probability is evaluated based on updated root cause probabilities and lower level events. At the industrial plant level, identification of a failure in a component event defined within a constructed, operational fault tree means that the event's failure probability is one. By utilizing this indication, the most probable failure branches through the fault tree sub-events to root causes can be identified and printed as a valid checklist for maintenance purposes, to focus service actions first on those fault tree branches most probably causing the failure. Conversely, components detected as healthy during checks and service actions, especially those within the critical branches, can be updated as having zero failure probability. This information can be used to further update the fault tree and produce
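The evaluation the abstract describes can be sketched as a recursive walk of a fault tree: top-event probability from root-cause probabilities through AND/OR gates (assuming independent events), with an online update that pins a confirmed failure to probability one. The tree and numbers below are invented for illustration.

```python
# Sketch: fault tree top-event probability with AND/OR gates, plus the kind of
# online update described in the paper (a confirmed failure becomes p = 1).
# Events are assumed independent; the tree below is a made-up example.

def evaluate(node, p):
    """Probability of `node` given root-cause probabilities `p`."""
    if isinstance(node, str):          # leaf: a root cause
        return p[node]
    gate, children = node[0], node[1:]
    probs = [evaluate(c, p) for c in children]
    if gate == "AND":                  # all children must fail
        out = 1.0
        for q in probs:
            out *= q
        return out
    out = 1.0                          # OR gate: at least one child fails
    for q in probs:
        out *= (1.0 - q)
    return 1.0 - out

# top event: pump stops if the motor fails OR both power feeds fail
tree = ("OR", "motor", ("AND", "feed_a", "feed_b"))
p = {"motor": 0.01, "feed_a": 0.05, "feed_b": 0.05}
p_top = evaluate(tree, p)

# online update: an inspection confirms feed_a has failed
p_top_after = evaluate(tree, {**p, "feed_a": 1.0})
```

Ranking subtrees by their contribution to the updated top-event probability is what produces the prioritized maintenance checklist the paper proposes.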

  10. Weak Faults, Yet Strong Middle Crust

    NASA Astrophysics Data System (ADS)

    Platt, J. P.; Behr, W. M.

    2013-12-01

    A global compilation of stress magnitude from mylonites developed along major fault zones suggests that maximum differential stresses between 140 and 200 MPa are reached at temperatures between 300 and 350°C on normal, thrust, and strike-slip faults. These differential stresses are consistent with brittle rock strengths estimated based on Coulomb fracture (e.g., Byerlee's law), and with in-situ measurements of crustal stress measured in boreholes. This confirms previous suggestions that many parts of the continental crust are stressed close to failure down to the brittle-ductile transition. Many major active faults in all tectonic regimes are considered to be relatively weak, however, based on various lines of evidence, including their unfavorable orientation with respect to regional stresses, the absence of heat flow anomalies, the mechanical properties of fault gouge, and evidence for high fluid pressures along subduction zone megathrusts. Peak differential stresses estimated by a variety of techniques lie mostly in the range 1 - 20 MPa. The sharp contrast between differential stresses estimated on the seismogenic parts of major faults and those estimated from ductile rocks immediately below the brittle-ductile transition has the following implications: 1. The lower limit of seismicity in major fault zones is not controlled by the intersection of brittle fracture laws such as Byerlee's law with ductile creep laws. Rather, it represents an abrupt downward termination, probably controlled by temperature, of the weakening processes that govern fault behavior in the upper crust. 2. The seismogenic parts of major fault zones contribute little to lithospheric strength, and are unlikely to have much influence on either the slip rate or the location of the faults. Conversely, the high strength segments of ductile shear zones immediately below the brittle-ductile transition constitute a major load-bearing element within the lithosphere. Displacement rates are governed by

  11. Model-based fault detection and isolation for intermittently active faults with application to motion-based thruster fault detection and isolation for spacecraft

    NASA Technical Reports Server (NTRS)

    Wilson, Edward (Inventor)

    2008-01-01

    The present invention is a method for detecting and isolating fault modes in a system having a model describing its behavior and regularly sampled measurements. The models are used to calculate past and present deviations from measurements that would result with no faults present, as well as with one or more potential fault modes present. Algorithms that calculate and store these deviations, along with memory of when said faults, if present, would have an effect on the said actual measurements, are used to detect when a fault is present. Related algorithms are used to exonerate false fault modes and finally to isolate the true fault mode. This invention is presented with application to detection and isolation of thruster faults for a thruster-controlled spacecraft. As a supporting aspect of the invention, a novel, effective, and efficient filtering method for estimating the derivative of a noisy signal is presented.
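The residual-comparison idea in the patent abstract can be sketched as follows: measured behaviour is compared against model predictions under a no-fault hypothesis and under each candidate fault mode, and the hypothesis with the smallest deviation is isolated. All hypothesis names, measurement values, and predictions below are illustrative assumptions, not the invention's actual models.

```python
import numpy as np

def isolate_fault(measured, predictions):
    """Return the hypothesis whose predicted measurements best match
    the data (smallest RMS deviation), plus all residuals.

    predictions: dict mapping hypothesis name -> predicted array,
    including a no-fault entry.
    """
    rms = {name: float(np.sqrt(np.mean((measured - np.asarray(pred)) ** 2)))
           for name, pred in predictions.items()}
    best = min(rms, key=rms.get)
    return best, rms

# Synthetic angular-rate samples consistent with thruster 2 stuck off:
measured = np.array([0.00, 0.01, 0.49, 0.50])
predictions = {
    "no fault":            [0.0, 0.5, 0.5, 1.0],
    "thruster 1 stuck off": [0.0, 0.0, 0.0, 0.5],
    "thruster 2 stuck off": [0.0, 0.0, 0.5, 0.5],
}
best, residuals = isolate_fault(measured, predictions)
print(best)  # thruster 2 stuck off
```

In the invention this comparison runs recursively over past and present deviations, with bookkeeping of when each fault mode would have affected the measurements; the sketch keeps only the core hypothesis-selection step.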

  12. Hydrothermal circulation in fault slots with topography

    NASA Astrophysics Data System (ADS)

    Titarenko, Sofya; McCaig, Andrew

    2014-05-01

    There are numerous cases where the circulation of hydrothermal fluid is likely to be confined within a permeable fault slot. Examples are (1) the Lost City Hydrothermal Field (LCHF) at 30°N in the Atlantic, which is likely controlled by large E-W faults related to the Atlantis transform fault and by mass wasting on the southern wall of the Atlantis Massif, and (2) the large normal faults bounding the Hess Deep rift in the East Pacific, which contain intense hydrothermal metamorphic assemblages in lower crustal gabbros formed at 200-350 °C. This type of circulation could occur anywhere steep faults cut the oceanic crust, including large near-axis normal faults, transform faults and faults at subduction bend zones, and could be the major way in which the upper mantle and lower crust are hydrated. It is therefore important to constrain the controls on the temperature conditions of alteration and hence on mineral assemblages. Previous 2-D modelling of the LCHF shows that seafloor topography and permeability structure combine to localise the field near the highest point of the Atlantis Massif. Our new models are 3-D, based on a 10 km cube with seafloor topography of ~2 km affecting both the fault slot and the impermeable wall rocks. We have used COMSOL Multiphysics in this modelling, with a constant basal heat flow corresponding to the near-conductive thermal gradient measured in IODP Hole 1309D, 5 km north of the LCHF, and a constant-temperature seafloor boundary condition. The wall rocks of the slot have a permeability of 10^-17 m^2, while permeability in the slot is varied between 10^-14 and 10^-15 m^2. Initial conditions are a conductive thermal structure corresponding to the basal heat flow at steady state. Generic models not based on any particular known topography quickly stabilise a hydrothermal system in the fault slot, with a single upflow zone close to the model edge with the highest topography. In models with a depth of circulation in the fault slot of about 6 km

  13. Study on the Evaluation Method for Fault Displacement: Probabilistic Approach Based on Japanese Earthquake Rupture Data - Distributed fault displacements -

    NASA Astrophysics Data System (ADS)

    Inoue, N.; Kitada, N.; Tonagi, M.

    2016-12-01

    Distributed fault displacements in Probabilistic Fault Displacement Hazard Analysis (PFDHA) have an important role in the evaluation of important facilities such as nuclear installations. In Japan, nuclear installations should be constructed where there is no possibility of displacement occurring on active faults during an earthquake. Youngs et al. (2003) defined distributed faulting as displacement on other faults, shears, or fractures in the vicinity of the principal rupture, in response to the principal faulting. Other researchers have treated data on distributed faulting around principal faults and modeled it according to their own definitions (e.g. Petersen et al., 2011; Takao et al., 2013). We organized Japanese fault displacement data and constructed slip-distance relationships depending on fault type. In the case of reverse faults, the slip-distance relationship on the footwall indicated a different trend compared with that on the hanging wall. Process zones or damage zones have been studied as weak structures around principal faults; their density and number decrease rapidly away from the principal faults. We contrasted the trend of these zones with that of the distributed slip-distance distributions. Subsurface FEM simulations have been carried out to investigate the distribution of stress around principal faults. The results indicated a trend similar to the distribution of field observations. This research was part of the 2014-2015 research project `Development of evaluating method for fault displacement` by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.
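A slip-distance relationship of the kind this study constructs can be sketched as a simple regression of distributed displacement against distance from the principal fault. The exponential decay form, the fitting method, and all numbers below are assumptions for illustration; the paper's actual functional forms for Japanese data are not reproduced here.

```python
import numpy as np

def fit_slip_decay(dist_m, slip_m):
    """Fit slip = d0 * exp(-dist / L) by linear least squares on
    log(slip); returns (d0, L)."""
    b, log_d0 = np.polyfit(dist_m, np.log(slip_m), 1)
    return float(np.exp(log_d0)), float(-1.0 / b)

# Synthetic distributed-slip observations decaying away from the
# principal fault (hypothetical values):
dist = np.array([10.0, 50.0, 100.0, 200.0, 400.0])   # metres
slip = 2.0 * np.exp(-dist / 150.0)                    # metres
d0, L = fit_slip_decay(dist, slip)
print(round(d0, 2), round(L, 1))  # recovers 2.0 and 150.0
```

Separate fits per fault type and per side (footwall vs. hanging wall) would capture the asymmetry the abstract reports for reverse faults.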

  14. Advanced Ground Systems Maintenance Functional Fault Models For Fault Isolation Project

    NASA Technical Reports Server (NTRS)

    Perotti, Jose M. (Compiler)

    2014-01-01

    This project implements functional fault models (FFM) to automate the isolation of failures during ground systems operations. FFMs will also be used to recommend sensor placement to improve fault isolation capabilities. The project enables the delivery of system health advisories to ground system operators.

  15. Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation

    SciTech Connect

    Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.

    2011-05-15

    Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.
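An encounter probability of the kind the abstract describes can be illustrated with the simplest possible model: treat fault centres as a homogeneous Poisson process with a given areal density, so the chance of the plume footprint containing at least one fault is 1 - exp(-density x area). This is a deliberate simplification of the fault-population statistics used in the study, and the numbers below are invented, not the San Joaquin Basin values.

```python
import math

def encounter_probability(fault_density_per_km2, plume_area_km2):
    """P(at least one fault centre falls in the plume footprint),
    assuming fault centres follow a homogeneous Poisson process."""
    expected_faults = fault_density_per_km2 * plume_area_km2
    return 1.0 - math.exp(-expected_faults)

# Hypothetical inputs: 0.001 faults/km^2 over a 30 km^2 plume footprint
print(round(encounter_probability(0.001, 30.0), 3))  # 0.03
```

In practice the density would be restricted to faults large enough to fully offset the seal, since only those matter for the leakage risk considered here.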

  16. Fault diagnosis of nonlinear analog circuits. Volume 4: An isolation algorithm for the analog fault dictionary

    NASA Astrophysics Data System (ADS)

    Elcherif, Y. S.; Lin, P. M.

    1983-04-01

    A new approach for fault location in an analog fault dictionary has been adopted on the basis of quantizing circuit responses. The possibility of quantization is offered by faults having nearly the same effect on some test measurements. This produces multivalued logic responses which can be manipulated logically to obtain decision rules for fault location. A logical isolation algorithm has been introduced to form a dictionary for hard fault diagnosis using d.c. voltage measurements. The dictionary consists of tables containing voltage ranges of quantized responses and numerical codes identifying the different faults in the dictionary. The test measurements chosen by the algorithm are free from redundancy and provide the maximum fault isolation capability of all initially chosen measurements. The algorithm can be easily implemented with some extra software added to the circuit simulator used in fault simulation. Hardware implementation of a logical isolation ATE is simple and efficient compared to a least-squares-based dictionary. The algorithm has been extended to handle multiple test input conditions.
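The quantized-dictionary lookup can be sketched as follows: each measured d.c. node voltage is binned into a range, the resulting tuple of multivalued logic levels forms a code, and the code indexes a fault table. The thresholds, fault labels, and test voltages below are invented for illustration and are not the dictionary from the paper.

```python
import bisect

def quantize(voltage, thresholds):
    """Map a voltage to a multivalued logic level via sorted range
    boundaries (level = number of thresholds below the voltage)."""
    return bisect.bisect(thresholds, voltage)

def diagnose(measurements, thresholds_per_test, dictionary):
    """Quantize each test measurement and look the code up in the
    fault dictionary."""
    code = tuple(quantize(v, th)
                 for v, th in zip(measurements, thresholds_per_test))
    return dictionary.get(code, "ambiguous/unknown fault")

# Two d.c. test nodes with hypothetical voltage-range boundaries:
thresholds = [[1.0, 3.0], [2.5]]
table = {(0, 1): "R1 open", (2, 0): "Q2 short", (1, 1): "fault-free"}
print(diagnose([0.4, 4.0], thresholds, table))  # R1 open
```

Faults that map to the same code are the "nearly the same effect" ambiguity groups the abstract mentions; the paper's algorithm chooses measurements so these groups are as small as possible.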

  17. Normal fault inversion by orthogonal compression: Sandbox experiments with weak faults

    NASA Astrophysics Data System (ADS)

    Marques, F. O.; Nogueira, C. R.

    2008-06-01

    Linear frictional failure criteria predict that normal faults form dipping at around 60°, and reverse faults at around 30°, depending on dry rock properties. Therefore, it is unlikely that normal faults are reactivated as reverse faults, unless the stress conditions are favourable, or the intrinsic properties of the intact rock or of the precursor fault (PF) are modified. In the present study, we focused on the friction (strength) of the PF. We used sandbox experiments with an initially embedded weak PF dipping 60° or 70°, filled with a thin film of silicone putty that lubricated the (weak) fault, to investigate inversion of a high-angle PF by orthogonal compression. The results show that: (1) the PF can be inverted if it is weak during compression, even if the angle of dip is as great as 70°; (2) after inversion initiation, the reverse movement on the PF can last for as much as 30% of model shortening, leading to great amounts of reverse displacement along the PF before the formation of a thrust ahead; (3) in models with more than one PF, the weakness of the reactivated fault closer to the piston was not enough to prevent reactivation of the PF further ahead, after an amount of shortening that depended on the distance between PFs. The viscous material used to weaken the fault is scalable to salt in nature. However, the great decrease in friction due to lubrication with a viscous material can simulate many other weakening mechanisms observed in nature.

  18. A note on the effect of fault gouge thickness on fault stability

    USGS Publications Warehouse

    Byerlee, J.; Summers, R.

    1976-01-01

    At low confining pressure, sliding on saw cuts in granite is stable but at high pressure it is unstable. The pressure at which the transition takes place increases if the thickness of the crushed material between the sliding surfaces is increased. This experimental result suggests that on natural faults the stability of sliding may be affected by the width of the fault zone. © 1976.

  19. Characterising the Alpine Fault Damage Zone using Fault Zone Guided Waves, South Westland, New Zealand

    NASA Astrophysics Data System (ADS)

    Eccles, J. D.; Gulley, A.; Boese, C. M.; Malin, P. E.; Townend, J.; Thurber, C. H.; Guo, B.; Sutherland, R.

    2015-12-01

    Fault Zone Guided Waves (FZGWs) are observed within New Zealand's transpressional continental plate boundary, the Alpine Fault, which is late in its typical seismic cycle. Distinctive dispersive seismic coda waves (~7-35 Hz), trapped within the low-velocity fault damage zone, have been recorded on three-component 2 Hz borehole seismometers installed within 20 m of the principal slip zone in the shallow (< 150 m deep) DFDP-1 boreholes. Near the central Alpine Fault, known for low background seismicity, FZGW-generating microseismic events are located beyond the catchment-scale strike-slip and thrust segment partitioning of the fault, indicating lateral connectivity of the low-velocity zone immediately below the near-surface segmentation. Double-difference earthquake relocation of events using the dense SAMBA and WIZARD seismometer arrays allows spatio-temporal patterns of 2013 events to be analysed and the segmentation and low-velocity-zone depth extent to be further explored. Three-layer dispersion modeling of the low-velocity zone indicates a waveguide width of 60-200 m with a 10-40% reduction in S-wave velocity, similar to that inferred for the fault core of other mature plate boundary faults such as the San Andreas and North Anatolian Faults.

  20. Owens Valley fault kinematics: Right-lateral slip transfer via north-northeast trending normal faults at the northern end of the Owens Valley fault

    NASA Astrophysics Data System (ADS)

    Sheehan, T. P.; Dawers, N. H.

    2003-12-01

    The occurrence of several northeast trending normal faults along the eastern margin of the Sierra Nevada escarpment is evidence of right-lateral slip transfer across northern Owens Valley from the Owens Valley fault to the White Mountains fault zone. Interaction between the Sierran frontal normal fault and these two fault zones has created a transtensional tectonic environment, which allows for right-lateral slip transfer via a population of northwest dipping normal faults within the Late Quaternary-Holocene alluvial valley fill of northern Owens Valley. A component of normal movement within the valley floor has been documented along fifteen faults. These include the Tungsten Hills fault, two faults near Klondike Lake, and twelve or so small NNE trending scarps, some possibly linked, southeast of the town of Bishop. One fault segment, located just past the tip of the 1872 earthquake rupture, reveals a minimum of 3.2 meters of normal throw along much of its length. This fault shows evidence for at least three large ruptures, each exhibiting at least one meter of vertical slip. In addition, a large population of normal faults with similar orientations is mapped within the immediate vicinity of this scarp segment. These faults accommodate a substantial amount of normal movement, allowing for eastward right-lateral slip transfer. With the exception of the Tungsten Hills fault, they are primarily concentrated along a segment of the Sierran Escarpment known as the Coyote Warp. The pre-existing normal fault geometry along this segment acts to block the northward propagation of right-lateral movement, which is consequently forced across the valley floor to the White Mountains fault zone.