Sample records for laguna salada fault

  1. Late Quaternary activity of the Laguna Salada fault in northern Baja California, Mexico

    E-print Network

    Mueller, Karl

    of displacement (4 m of dextral slip and 3.5 m of normal slip) suggest that the earthquake had a magnitude (Mw detailed subsurface data, which include boreholes, trenches, and earthquake locations. Studies in structural relief, or uplifts and depressions, are present along the lengths of these faults (Fuis et al

  2. The Surface Rupture of the 2010 El Mayor-Cucapah Earthquake and its Interaction with the 1892 Laguna Salada Rupture - Complex Fault Interaction in an Oblique Rift System (Invited)

    NASA Astrophysics Data System (ADS)

    Rockwell, T. K.; Fletcher, J. M.; Teran, O.; Mueller, K. J.

    2010-12-01

    The 2010 El Mayor-Cucapah earthquake (Mw 7.2) demonstrates intimate mechanical interactions between two major fault systems that intersect within and along the western margin of the Sierra Cucapah in Baja California, Mexico. Rupture associated with the 2010 earthquake produced ~4 m of dextral oblique slip and propagated through an imbricate stack of east-dipping faults. Toward the north, rupture consistently steps left to structurally deeper faults located farther west and in this manner passes through the core of the Sierra Cucapah to its western margin. The western margin of the Sierra Cucapah is defined by the Laguna Salada fault (LSF), which forms part of an active west-directed oblique detachment system recognized as the source of the large (M7+) February 22, 1892 earthquake. In the central Sierra Cucapah, the fault systems are separated by a narrow horst block, and here the 2010 event produced triggered slip on the LSF. These surface breaks follow the exact trace of the 1892 rupture, but their sense of slip (10-30 cm of pure normal displacement) differs radically from the 5 m of oblique dextral-normal slip produced by the 1892 event. Farther north, the narrow horst block is buried beneath strata of the northern Laguna Salada rift basin, and at this location, west-directed scarps of the LSF accommodate a significant component of dextral slip associated with the primary 2010 surface rupture. Thus, the two fault systems combine to accommodate oblique extension in the northern part of the range and likely have linking structures at fairly shallow depth. Newly identified paleo-scarps extend the known 1892 rupture length from 20 km to as much as 42 km, from the Canon Rojo fault to the Yuha Basin, consistent with a Mw 7.2 event and historical reports of MMI VII damage in San Diego. Both fault systems generate large earthquakes (>M7.2), with the west-directed LSF accommodating rapid subsidence in the adjacent basin during M7+ events at ~2 ka recurrence. Initial mapping of Late Quaternary deposits within the range suggests a much longer recurrence interval for the faults that ruptured in 2010.

  3. The SCEC 3D Community Fault Model (CFM-v5): An updated and expanded fault set of oblique crustal deformation and complex fault interaction for southern California

    NASA Astrophysics Data System (ADS)

    Nicholson, C.; Plesch, A.; Sorlien, C. C.; Shaw, J. H.; Hauksson, E.

    2014-12-01

    Southern California represents an ideal natural laboratory to investigate oblique deformation in 3D owing to its comprehensive datasets, complex tectonic history, evolving components of oblique slip, and continued crustal rotations about horizontal and vertical axes. As the SCEC Community Fault Model (CFM) aims to accurately reflect this 3D deformation, we present the results of an extensive update to the model by using primarily detailed fault trace, seismic reflection, relocated hypocenter and focal mechanism nodal plane data to generate improved, more realistic digital 3D fault surfaces. The results document a wide variety of oblique strain accommodation, including various aspects of strain partitioning and fault-related folding, sets of both high-angle and low-angle faults that mutually interact, significant non-planar, multi-stranded faults with variable dip along strike and with depth, and active mid-crustal detachments. In places, closely-spaced fault strands or fault systems can remain surprisingly subparallel to seismogenic depths, while in other areas, major strike-slip to oblique-slip faults can merge, such as the S-dipping Arroyo Parida-Mission Ridge and Santa Ynez faults with the N-dipping North Channel-Pitas Point-Red Mountain fault system, or diverge with depth. Examples of the latter include the steep-to-west-dipping Laguna Salada-Indiviso faults with the steep-to-east-dipping Sierra Cucapah faults, and the steep southern San Andreas fault with the adjacent NE-dipping Mecca Hills-Hidden Springs fault system. In addition, overprinting by steep predominantly strike-slip faulting can segment which parts of intersecting inherited low-angle faults are reactivated, or result in mutual cross-cutting relationships. The updated CFM 3D fault surfaces thus help characterize a more complex pattern of fault interactions at depth between various fault sets and linked fault systems, and a more complex fault geometry than typically inferred or expected from projecting near-surface data down-dip, or modeled from surface strain and potential field data alone.

  4. Diversity of halophilic bacteria isolated from Rambla Salada, Murcia (Spain).

    PubMed

    Luque, Rocío; Béjar, Victoria; Quesada, Emilia; Llamas, Inmaculada

    2014-12-01

    In this study we analyzed the diversity of the halophilic bacterial community from Rambla Salada during the years 2006 and 2007. We collected a total of 364 strains, which were then identified by means of phenotypic tests and by sequencing the hypervariable V1-V3 region of the 16S rRNA gene (around 500 bp). The ribosomal data showed that the isolates belonged to the Proteobacteria (72.5%), Firmicutes (25.8%), Actinobacteria (1.4%), and Bacteroidetes (0.3%) phyla, with Gammaproteobacteria as the predominant class. Halomonas was the most abundant genus (41.2% of isolates), followed by Marinobacter (12.9%) and Bacillus (12.6%). In addition, 9 strains showed <97% sequence identity with validly described species and may well represent new taxa. The diversity of the bacterial community, analyzed with the DOTUR package, comprised 139 operational taxonomic units at the 3% genetic distance level. Rarefaction curves and diversity indexes demonstrated that our collection of isolates adequately represented the culturable bacterial community at Rambla Salada under the conditions used in this work. We found that the sampling season influenced the composition of the bacterial community, and bacterial diversity was higher in 2007; this could be related to the lower salinity at that sampling time. PMID:25403824
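
    The record above reports OTU counts, rarefaction curves, and diversity indexes computed with DOTUR. As a minimal illustration of the arithmetic behind such summaries (not the study's data and not the DOTUR implementation), the sketch below computes a Shannon index and a classical rarefaction expectation from a hypothetical OTU abundance vector.

      import math

      def shannon_index(otu_counts):
          # Shannon diversity H' = -sum(p_i * ln p_i) over OTU relative abundances.
          n = sum(otu_counts)
          return -sum((c / n) * math.log(c / n) for c in otu_counts if c > 0)

      def rarefaction_expected_otus(otu_counts, subsample_size):
          # Expected OTU count in a random subsample of m reads
          # (hypergeometric expectation): E[S] = sum_i (1 - C(n - n_i, m) / C(n, m)).
          n = sum(otu_counts)
          return sum(1 - math.comb(n - c, subsample_size) / math.comb(n, subsample_size)
                     for c in otu_counts)

      # Hypothetical abundance vector for 10 OTUs (illustration only, not the study's data).
      counts = [120, 60, 40, 30, 20, 10, 5, 3, 2, 1]
      print(f"Shannon H' = {shannon_index(counts):.2f}")
      print(f"Expected OTUs in a subsample of 50 reads = {rarefaction_expected_otus(counts, 50):.1f}")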

  5. The Pueblo of Laguna.

    ERIC Educational Resources Information Center

    Lockart, Barbetta L.

    Proximity to urban areas, a high employment rate, development of natural resources and high academic achievement are all serving to bring Laguna Pueblo to a period of rapid change on the reservation. While working to realize its potential in the areas of natural resources, commercialism and education, the Pueblo must also confront the problems of…

  6. Lateglacial and Late Holocene environmental and vegetational change in Salada Mediana, central Ebro Basin, Spain

    Microsoft Academic Search

    Blas L Valero-Garcés; Penélope González-Sampériz; A Delgado-Huertas; A Navas; J Machín; K Kelts

    2000-01-01

    The Salada Mediana lacustrine sequence, central Ebro Basin, Spain (41°30'10"N, 0°44'W, 350 m a.s.l.) provides an example of the potential and limitations of saline lake records as palaeoclimate proxies in the semi-arid Mediterranean region. Sedimentary facies analyses, chemical stratigraphy, stable isotopes (δ18O and δ13C) of authigenic carbonates, δ13C values of bulk organic matter and pollen analyses from sediment cores provide paleohydrological

  7. Chapter 8. Challenge Theme 6.

    E-print Network

    Fleskes, Joe

    the Imperial fault zone in 1940 bent rail lines, severely damaged irrigation canals, and killed nine people. Damage to shops and telephone lines in Imperial, California, caused by the 1940 earthquake. ...-striking fault zones including the Imperial, Laguna Salada, and Cerro Prieto fault zones (fig. 8-2), which are part of the San Andreas transform fault system. A magnitude 7.1 earthquake along

  8. Santa Fe Indian Camp, House 21, Richmond, California: Persistence of Identity among Laguna Pueblo Railroad Laborers, 1945-1982.

    ERIC Educational Resources Information Center

    Peters, Kurt

    1995-01-01

    In 1880 the Laguna people and the predecessor of the Atchison, Topeka, and Santa Fe Railroad reached an agreement giving the railroad unhindered right-of-way through Laguna lands in exchange for Laguna employment "forever." Discusses the Laguna-railroad relationship through 1982, Laguna labor camps in California, and the persistence of Laguna

  9. Fault management

    E-print Network

    Fault management. Andrea Bianco, Telecommunication Networks Group, Network Management and QoS, Torino. The impact of network failures: 1 cable x 200 fibers/cable x 160 wavelengths/fiber x 10 Gb/s per wavelength = 320 Tb/s; 5 billion telephone lines (@ 64 kb/s). Network Management

  10. Fault management

    E-print Network

    Fault management. Network Management and QoS Provisioning, Andrea Bianco (TNG). Lost traffic may translate to revenue losses: 5 billion telephone lines (@ 64 kb/s), or 60,000 full CDs per second. Fibers.

  11. Fault Motion

    NSDL National Science Digital Library

    This collection of animations provides elementary examples of fault motion intended for simple demonstrations. Examples include dip-slip faults (normal and reverse), strike-slip faults, and oblique-slip faults.

  12. Performance of the LAGUNA pulsed power system

    SciTech Connect

    Goforth, J.H.; Caird, R.S.; Fowler, C.M.; Greene, A.E.; Kruse, H.W.; Lindemuth, I.R.; Oona, H.; Reinovsky, R.E.

    1987-01-01

    The goal of the LAGUNA experimental series of the Los Alamos National Laboratory TRAILMASTER program is to accelerate an annular aluminum plasma z-pinch to greater than one hundred kilojoules of implosion kinetic energy. To accomplish this, an electrical pulse of >5.5 MA must be delivered to a 20 nH load in approximately 1 μs. The pulsed power system for these experiments consists of a capacitor bank for initial energy storage, a helical explosive-driven magnetic-flux compression generator for the prime power supply, and opening and closing switches for power conditioning. While we have not yet achieved our design goal of 15 MA delivered to the inductive store of the system, all major components have functioned successfully at the 10 MA level. Significant successes and some difficulties experienced in these experiments are described.
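
    For context on the numbers quoted above, a back-of-envelope check (mine, not the authors') of the magnetic energy stored in the 20 nH load at the quoted >5.5 MA, using E = 1/2 L I^2, shows the figures are mutually consistent with the >100 kJ implosion kinetic-energy goal.

      # Illustrative back-of-envelope check only; inductance and current are quoted from the abstract.
      L_load = 20e-9   # load inductance in henries (20 nH)
      I_load = 5.5e6   # delivered current in amperes (>5.5 MA)

      energy_joules = 0.5 * L_load * I_load**2   # E = 1/2 * L * I^2
      print(f"Magnetic energy in the load: {energy_joules / 1e3:.0f} kJ")   # ~300 kJ

    Roughly 300 kJ stored in the load leaves margin over the stated >100 kJ kinetic-energy target.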

  13. Limnology of Laguna Tortuguero, Puerto Rico

    USGS Publications Warehouse

    Quinones-Marquez, Ferdinand; Fuste, Luis A.

    1978-01-01

    The principal chemical, physical and biological characteristics, and the hydrology of Laguna Tortuguero, Puerto Rico, were studied from 1974-75. The lagoon, with an area of 2.24 square kilometers and a volume of about 2.68 million cubic meters, contains about 5 percent seawater. Drainage through a canal on the north side averages 0.64 cubic meters per second, flushing the lagoon about 7.5 times per year. Chloride and sodium are the principal ions in the water, ranging from 300 to 700 mg/liter and 150 to 400 mg/liter, respectively. Among the nutrients, nitrogen averages about 1.7 mg/liter, exceeding phosphorus in a weight ratio of 170:1. About 10 percent of the nitrogen and 40 percent of the phosphorus entering the lagoon is retained. The bottom sediments, with a volume of about 4.5 million cubic meters, average 0.8 and 0.014 percent nitrogen and phosphorus, respectively. (Woodard-USGS)
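
    The flushing figure quoted above follows directly from the reported volume and canal outflow; the sketch below (simple arithmetic only, assuming the 0.64 m3/s figure is a continuous annual average) reproduces it.

      # Flushing-rate check from the quoted figures (illustrative arithmetic only).
      volume_m3 = 2.68e6          # lagoon volume, cubic meters
      outflow_m3_per_s = 0.64     # average canal outflow, cubic meters per second

      seconds_per_year = 365 * 24 * 3600
      annual_outflow_m3 = outflow_m3_per_s * seconds_per_year   # ~2.0e7 m3 per year

      flushings_per_year = annual_outflow_m3 / volume_m3
      print(f"Flushings per year: {flushings_per_year:.1f}")     # ~7.5, as quoted above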

  14. The Characteristics of Boulby as a Site for the LAGUNA Experiment

    SciTech Connect

    Spooner, N. J. C.; Cripps, J.; Bennett, T. [University of Sheffield, Department of Physics and Astronomy, Hicks Building, Hounsfield Road, Sheffield, S3 7RH (United Kingdom)

    2010-11-24

    LAGUNA is a proposed project to build a megaton-scale detector in Europe for neutrino physics, neutrino astrophysics, and proton decay searches, which requires a deep underground site. We outline here the characteristics of Boulby Mine, UK as a potential place for LAGUNA. We find that this site, already the location of several particle astrophysics experiments at 1100 m depth, has several interesting advantages for LAGUNA.

  15. Hydrology of Laguna Joyuda, Puerto Rico

    USGS Publications Warehouse

    Santiago-Rivera, Luis; Quinones-Aponte, Vicente

    1995-01-01

    A study was conducted by the U.S. Geological Survey to define the hydraulic and hydrologic characteristics of the Laguna Joyuda system (in southwestern Puerto Rico) and to determine the water budget of the lagoon. This shallow-water lagoon is connected to the sea by a single canal. Rainfall and evaporation, surface-water, groundwater, and tidal-flow data were collected from December 1, 1985, to April 30, 1988. A conceptual hydrologic model of the lagoon was developed and discharge measurements and modeling were undertaken to quantify the different flow components. The water balance during the 29-month study period was determined by measuring and estimating the different hydrologic components: 4.14 million cubic meters rainfall; 5.38 million cubic meters evaporation; 1.18 million cubic meters surface water; and 0.34 million cubic meters ground water. A total of 18.9 million cubic meters ebb flow (tidal outflow) was discharged from the lagoon and 14.4 million cubic meters flood flow (tidal inflow) entered through the canal during the study. Seawater inflow accounted for 71 percent of the water into the lagoon. The storage volume of the lagoon was about 1.55 million cubic meters. The lagoon's hydrologic-budget residual was 4.22 million cubic meters, whereas the sum of the estimated errors for the different hydrologic components amounted to 4.51 million cubic meters. Average flushing rate for the lagoon was estimated at 72 days. During the study, the specific conductance of the lagoon water ranged from 32,000 to 52,000 microsiemens per centimeter at 25 degrees Celsius, whereas the specific conductance of local seawater is about 45,000 to 55,000 microsiemens.
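
    The reported budget residual can be checked from the quoted components; a minimal sketch (arithmetic only, values taken from the abstract, all in millions of cubic meters):

      # Water-budget residual check from the quoted components (millions of cubic meters).
      rainfall, evaporation = 4.14, 5.38
      surface_in, ground_in = 1.18, 0.34
      flood_in, ebb_out = 14.4, 18.9       # tidal inflow and outflow through the canal

      inflows = rainfall + surface_in + ground_in + flood_in    # 20.06
      outflows = evaporation + ebb_out                          # 24.28
      residual = outflows - inflows
      print(f"Budget residual: {residual:.2f} million m^3")     # ~4.22, as quoted above
      print(f"Seawater share of inflow: {100 * flood_in / inflows:.0f}%")   # ~72%, close to the 71 percent quoted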

  16. Molecular Epidemiology of Laguna Negra Virus, Mato Grosso State, Brazil

    PubMed Central

    Travassos da Rosa, Elizabeth S.; Medeiros, Daniele B.A.; Nunes, Márcio R.T.; Simith, Darlene B.; Pereira, Armando de S.; Elkhoury, Mauro R.; Santos, Elizabeth Davi; Lavocat, Marília; Marques, Aparecido A.; Via, Alba V.G.; Kohl, Vânia A.; Terças, Ana C.P.; D'Andrea, Paulo; Bonvícino, Cibele R.; Sampaio de Lemos, Elba R.

    2012-01-01

    We associated Laguna Negra virus with hantavirus pulmonary syndrome in Mato Grosso State, Brazil, and a previously unidentified potential host, the Calomys callidus rodent. Genetic testing revealed homologous sequences in specimens from 20 humans and 8 mice. Further epidemiologic studies may lead to control of HPS in Mato Grosso State. PMID:22607717

  17. Fault finder

    DOEpatents

    Bunch, Richard H. (1614 NW. 106th St., Vancouver, WA 98665)

    1986-01-01

    A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
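
    The abstract does not spell out the distance calculation, but a common approach for two time-synchronized units is a double-ended time-of-arrival computation; the sketch below is a hypothetical illustration of that idea, not the patented method.

      # Hypothetical double-ended time-of-arrival fault location (illustration only;
      # the patent abstract does not specify its exact algorithm).
      def fault_distance_from_master(line_length_km, wave_speed_km_per_us,
                                     t_master_us, t_remote_us):
          # For a fault at distance d from the master, the transient arrives at the
          # master at t0 + d/v and at the remote unit at t0 + (L - d)/v, so
          # d = (L + v * (t_master - t_remote)) / 2.
          return 0.5 * (line_length_km
                        + wave_speed_km_per_us * (t_master_us - t_remote_us))

      # Illustrative numbers: 300 km line, wave speed ~0.30 km/us (near light speed),
      # transient recorded 500 us earlier at the master than at the remote unit.
      print(fault_distance_from_master(300.0, 0.30, 250.0, 750.0))   # -> 75.0 km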

  18. A geophysical and geological study of Laguna de Ayarza, a Guatemalan caldera lake

    USGS Publications Warehouse

    Poppe, L.J.; Paull, C.K.; Newhall, C.G.; Bradbury, J.P.; Ziagos, J.

    1985-01-01

    Geologic and geophysical data from Laguna de Ayarza, a figure-8-shaped double-caldera lake in the Guatemalan highlands, show no evidence of postcaldera eruptive or tectonic activity. The bathymetry of the lake has evolved as a result of sedimentary infilling. The western caldera is steep-sided and contains a large flat-floored central basin 240 m deep. The smaller, older, eastern caldera is mostly filled by coalescing delta fans and is connected with the larger caldera by means of a deep channel. Seismic-reflection data indicate that at least 170 m of flat-lying unfaulted sediments partly fill the central basin and that the strata of the pre-eruption edifice have collapsed partly along inward-dipping ring faults and partly by more chaotic collapses. These sediments have accumulated in the last 23,000 years at a minimum average sedimentation rate of 7 m/10³ yr. The upper 9 m of these sediments is composed of >50% turbidites, interbedded with laminated clayey silts containing separate diatom and ash layers. The bottom sediments have >1% organic material, an average of 4% pyrite, and abundant biogenic gas, all of which demonstrate that the bottom sediments are anoxic. Although thin (<0.5 cm) ash horizons are common, only one thick (7-16 cm) primary ash horizon could be identified in piston cores. Alterations in the mineralogy and variations in the diatom assemblage suggest magnesium-rich hydrothermal activity. © 1985.

  19. Quaternary pollen record from Laguna de Tagua Tagua, Chile.

    PubMed

    Heusser, C J

    1983-03-25

    Pollen of southern beech and podocarp at Laguna de Tagua Tagua during the late Pleistocene indicates that cooler and more humid intervals were a feature of Ice Age climate at this subtropical latitude in Chile. The influence of the southern westerlies may have been greater at this time, and the effect of the Pacific anticyclone was apparently weakened. The climate today, wet in winter and dry in summer, supports broad sclerophyll vegetation that developed during the Holocene with the arrival of paleo-Indians and the extinction of mastodon and horse. PMID:17735194

  20. Possibilities For The LAGUNA Projects At The Frejus Site

    SciTech Connect

    Mosca, Luigi [LSM-Frejus - CNRS/IN2P3 and CEA/DSM/IRFU (France)

    2010-11-24

    The present laboratory (LSM) at the Frejus site and the project of a first extension of it, mainly aimed at the next generation of dark matter and double beta decay experiments, are briefly reviewed. Then the main characteristics of the LAGUNA cooperation and Design Study network are summarized. Seven underground sites in Europe are considered in LAGUNA and are under study as candidates for the installation of megaton-scale detectors using three different techniques: a liquid Argon TPC (GLACIER), a liquid scintillator detector (LENA), and a Water Cerenkov detector (MEMPHYS), all mainly aimed at investigation of proton decay and of the properties of neutrinos from supernovae and other astrophysical sources as well as from accelerators (Super-beams and/or Beta-beams from CERN). One of the seven sites is located at Frejus, near the present LSM laboratory, and the results of its feasibility study are presented and discussed. Then the physics potential of a MEMPHYS detector installed at this site is emphasized, both for non-accelerator and for neutrino-beam-based configurations. The MEMPHYNO prototype with its R and D programme is presented. Finally, a possible schedule is sketched.

  1. Fault diagnosis

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed based on those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to examine pilot mental models of the aircraft subsystems and their use in diagnosis tasks. Future research plans include piloted simulation evaluation of the diagnosis decision aiding concepts and crew interface issues. Information is given in viewgraph form.
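
    As a rough illustration of the kind of model-based propagation reasoning described above (a simplified, hypothetical sketch, not the actual Draphys implementation), the code below walks a small component-dependency model to find candidate fault sources whose downstream effects explain a set of abnormal readings.

      # Highly simplified, hypothetical fault-propagation reasoning sketch.
      from collections import deque

      # Hypothetical propagation model: an edge A -> B means a fault in A can
      # propagate to B as the system continues to operate.
      PROPAGATION = {
          "fuel_pump": ["engine_1"],
          "engine_1": ["hydraulic_pump_1", "generator_1"],
          "hydraulic_pump_1": ["hydraulic_system_A"],
          "generator_1": [],
          "hydraulic_system_A": [],
      }

      def reachable(source, graph):
          # All components a fault in `source` could eventually affect (including itself).
          seen, queue = {source}, deque([source])
          while queue:
              for nxt in graph.get(queue.popleft(), []):
                  if nxt not in seen:
                      seen.add(nxt)
                      queue.append(nxt)
          return seen

      def candidate_sources(abnormal, graph):
          # Components whose propagation closure explains every abnormal reading.
          return [c for c in graph if abnormal <= reachable(c, graph)]

      # Abnormal readings implicate the engine and downstream hydraulics.
      abnormal = {"engine_1", "hydraulic_pump_1", "hydraulic_system_A"}
      print(candidate_sources(abnormal, PROPAGATION))   # ['fuel_pump', 'engine_1']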

  2. Fault mechanics

    SciTech Connect

    Segall, P. (USAF, Geophysics Laboratory, Hanscom AFB, MA (United States))

    1991-01-01

    Recent observational, experimental, and theoretical modeling studies of fault mechanics are discussed in a critical review of U.S. research from the period 1987-1990. Topics examined include interseismic strain accumulation, coseismic deformation, postseismic deformation, and the earthquake cycle; long-term deformation; fault friction and the instability mechanism; pore pressure and normal stress effects; instability models; strain measurements prior to earthquakes; stochastic modeling of earthquakes; and deep-focus earthquakes. Maps, graphs, and a comprehensive bibliography are provided. 220 refs.

  3. Planning for Water Scarcity: The Vulnerability of the Laguna Region, Mexico 

    E-print Network

    Sanchez Flores, Maria Del Rosario

    2010-10-12

    This dissertation examined declining groundwater availability and management strategies for addressing water shortages in the Laguna region located in the states of Coahuila and Durango. Excessive pumping of groundwater ...

  4. Hatching success of Caspian terns nesting in the lower Laguna Madre, Texas, USA

    USGS Publications Warehouse

    Mitchell, C.A.; Custer, T.W.

    1986-01-01

    The average clutch size of Caspian Terns nesting in a colony in the Lower Laguna Madre near Laguna Vista, Texas, USA in 1984 was 1.9 eggs per nest. Using the Mayfield method for calculating success, one egg hatched in 84.1% of the nests and 69.8% of the eggs laid hatched. These hatching estimates are as high or higher than estimates from colonies in other areas.
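
    The Mayfield estimator mentioned above works from nest-exposure days: the daily survival rate is 1 minus losses per exposure day, and success over the whole period is that rate raised to the period length. The sketch below uses made-up numbers and an assumed incubation length, purely to illustrate the calculation (none of these values come from the abstract).

      # Minimal Mayfield-method sketch (illustrative numbers only; the incubation
      # length used here is an assumption, not a value from the abstract).
      def mayfield_success(losses, exposure_days, period_length_days):
          # Daily survival rate = 1 - losses / exposure-days;
          # success over the whole period = DSR ** period_length_days.
          dsr = 1.0 - losses / exposure_days
          return dsr ** period_length_days

      # e.g. 8 nests lost over 1200 nest-exposure-days, with a ~26-day incubation period.
      print(f"{mayfield_success(8, 1200, 26):.2f}")   # ~0.84 period success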

  5. Remote sensing analysis for fault-zones detection in the Central Andean Plateau (Catamarca, Argentina)

    NASA Astrophysics Data System (ADS)

    Traforti, Anna; Massironi, Matteo; Zampieri, Dario; Carli, Cristian

    2015-04-01

    Remote sensing techniques have been extensively used to detect the structural framework of investigated areas, which includes lineaments, fault zones and fracture patterns. The identification of these features is fundamental in exploration geology, as it allows the definition of suitable sites for the exploitation of different resources (e.g. ore mineral, hydrocarbon, geothermal energy and groundwater). Remote sensing techniques, typically adopted in fault identification, have been applied to assess the geological and structural framework of the Laguna Blanca area (26°35'S-66°49'W). This area represents a sector of the south-central Andes localized in the Argentina region of Catamarca, along the south-eastern margin of the Puna plateau. The study area is characterized by a Precambrian low-grade metamorphic basement intruded by Ordovician granitoids. These rocks are unconformably covered by a volcano-sedimentary sequence of Miocene age, followed by volcanic and volcaniclastic rocks of Upper Miocene to Plio-Pleistocene age. All these units are cut by two systems of major faults, locally characterized by 15-20 m wide damage zones. The detection of main tectonic lineaments in the study area was firstly carried out by classical procedures: image sharpening of Landsat 7 ETM+ images, directional filters applied to ASTER images, medium resolution Digital Elevation Models analysis (SRTM and ASTER GDEM) and hill shades interpretation. In addition, a new approach in fault zone identification, based on multispectral satellite image classification, has been tested in the Laguna Blanca area and in other sectors of the south-central Andes. In this perspective, several prominent fault zones affecting basement and granitoid rocks have been sampled. The collected fault gouge samples have been analyzed with a Field-Pro spectrophotometer mounted on a goniometer. We acquired bidirectional reflectance spectra, from 0.35 µm to 2.5 µm with 1 nm spectral sampling, of the sampled fault rocks. Subsequently, two different Spectral Angle Mapper (SAM) classifications were applied to ASTER images: the first one based on fault rock spectral signatures resampled at the ASTER sensor resolution; the second one based on spectral signatures retrieved from specific Regions of Interest (ROI), which were directly derived from the ASTER image on the analyzed fault zones. The SAM classification based on the spectral signatures of fault rocks gave outstanding results since it was able to classify the analyzed fault zone, both in terms of length and width. Moreover, in some specific cases, this SAM classification identified not only the sampled fault zone, but also other prominent neighboring faults cutting the same host rock. These results establish SAM supervised classification on ASTER images as a tool to identify prominent fault zones directly on the basis of fault-rock spectra.
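
    The Spectral Angle Mapper step described above compares each pixel spectrum to a reference spectrum by the angle between them, treating spectra as vectors; smaller angles mean closer matches. A minimal sketch of that comparison (hypothetical band values, not the authors' ASTER processing chain):

      import numpy as np

      def spectral_angle(pixel, reference):
          # Angle (radians) between a pixel spectrum and a reference spectrum.
          cos_theta = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
          return np.arccos(np.clip(cos_theta, -1.0, 1.0))

      def sam_classify(pixel, references, max_angle=0.10):
          # Return the best-matching class name, or None if no reference is
          # within the angular threshold.
          angles = {name: spectral_angle(pixel, ref) for name, ref in references.items()}
          best = min(angles, key=angles.get)
          return best if angles[best] <= max_angle else None

      # Hypothetical 6-band reflectance spectra (illustration only).
      references = {
          "fault_gouge": np.array([0.21, 0.24, 0.26, 0.30, 0.27, 0.22]),
          "host_granitoid": np.array([0.15, 0.16, 0.18, 0.25, 0.31, 0.33]),
      }
      pixel = np.array([0.20, 0.23, 0.25, 0.29, 0.26, 0.21])
      print(sam_classify(pixel, references))   # 'fault_gouge'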

  6. Lithologic controls on mineralization at the Lagunas Norte high-sulfidation epithermal gold deposit, northern Peru

    NASA Astrophysics Data System (ADS)

    Cerpa, Luis M.; Bissig, Thomas; Kyser, Kurt; McEwan, Craig; Macassi, Arturo; Rios, Hugo W.

    2013-06-01

    The 13.1-Moz high-sulfidation epithermal gold deposit of Lagunas Norte, Alto Chicama District, northern Peru, is hosted in weakly metamorphosed quartzites of the Upper Jurassic to Lower Cretaceous Chimú Formation and in overlying Miocene volcanic rocks of dacitic to rhyolitic composition. The Dafne and Josefa diatremes crosscut the quartzites and are interpreted to be sources of the pyroclastic volcanic rocks. Hydrothermal activity was centered on the diatremes and four hydrothermal stages have been defined, three of which introduced Au ± Ag mineralization. The first hydrothermal stage is restricted to the quartzites of the Chimú Formation and is characterized by silice parda, a tan-colored aggregate of quartz-auriferous pyrite-rutile ± digenite infilling fractures and faults, partially replacing silty beds and forming cement of small hydraulic breccia bodies. The δ34S values for pyrite (1.7-2.2 ‰) and digenite (2.1 ‰) indicate a magmatic source for the sulfur. The second hydrothermal stage resulted in the emplacement of diatremes and the related volcanic rocks. The Dafne diatreme features a relatively impermeable core dominated by milled slate from the Chicama Formation, whereas the Josefa diatreme only contains Chimú Formation quartzite clasts. The third hydrothermal stage introduced the bulk of the mineralization and affected the volcanic rocks, the diatremes, and the Chimú Formation. In the volcanic rocks, classic high-sulfidation epithermal alteration zonation exhibiting vuggy quartz surrounded by a quartz-alunite and a quartz-alunite-kaolinite zone is observed. Company data suggest that gold is present in solid solution or micro inclusions in pyrite. In the quartzite, the alteration is subtle and is manifested by the presence of pyrophyllite or kaolinite in the silty beds, the former resulting from relatively high silica activities in the fluid. In the quartzite, gold mineralization is hosted in a fracture network filled with coarse alunite, auriferous pyrite, and enargite. Alteration and mineralization in the breccias were controlled by permeability, which depends on the type and composition of the matrix, cement, and clast abundance. Coarse alunite from the main mineralization stage in textural equilibrium with pyrite and enargite has δ34S values of 24.8-29.4 ‰ and δ18O(SO4) values of 6.8-13.9 ‰, consistent with H2S as the dominant sulfur species in the mostly magmatic fluid and constraining the fluid composition to low pH (0-2) and log fO2 of -28 to -30. Alunite-pyrite sulfur isotope thermometry records temperatures of 190-260 °C; the highest temperatures corresponding to samples from near the diatremes. Alunite of the third hydrothermal stage has been dated by 40Ar/39Ar at 17.0 ± 0.22 Ma. The fourth hydrothermal stage introduced only modest amounts of gold and is characterized by the presence of massive alunite-pyrite in fractures, whereas barite, drusy quartz, and native sulfur were deposited in the volcanic rocks. The δ18O(SO4) values of stage IV alunite vary between 11.5 and 11.7 ‰ and indicate that the fluid was magmatic, an interpretation also supported by the isotopic composition of barite (δ34S = 27.1 to 33.8 ‰ and δ18O(SO4) = 8.1 to 12.7 ‰). The δ34S pyrite-alunite isotope thermometry records temperatures of 210 to 280 °C with the highest values concentrated around the Josefa diatreme. The Lagunas Norte deposit was oxidized to a depth of about 80 m below the current surface making exploitation by heap leach methods viable.

  7. High-Performance Wireless Internet Connection to Mount Laguna Observatory

    NASA Astrophysics Data System (ADS)

    Etzel, P. B.; Braun, H.-W.

    2000-12-01

    A 45 Mbit/sec full-duplex wireless Internet backbone is now under construction that will connect SDSU's Mount Laguna Observatory (MLO) to the San Diego Supercomputer Center (SDSC), which is located on the campus of UCSD. The SDSU campus is connected to the SDSC via Abilene/OC3 (Internet2) at 155 Mbit/sec. The MLO-SDSC backbone is part of the High-Performance Wireless Research and Education Network (HPWREN) project. Other scientific applications include earthquake monitoring from a remote array of automated seismic stations operated by researchers at the UCSD Institute for Geophysics and Planetary Physics, and environmental monitoring at Ecology field stations administered by SDSU. Educational initiatives include bringing the Internet to schools and educational centers at remote Indian reservations such as Pala and Rincon. HPWREN will allow SDSU astronomers and their collaborators to transmit CCD images to their home institutions while observations are being made. Archive retrieval of images from on-campus data bases, for comparison purposes, could easily be done. SDSU desires to build a modern, large telescope at MLO. HPWREN would support both robotic and remote observing capabilities for such a telescope. Astronomers could observe at their home institutions with multiple workstations to feed command and control instructions, data, and slow-scan video, which would give them the "feel" of being in a control room next to the telescope. HPWREN was funded by the NSF under grant ANI-0087344.

  8. Crab death assemblages from Laguna Madre and vicinity, Texas

    SciTech Connect

    Plotnick, R.E.; McCarroll, S. (Univ. of Illinois, Chicago (USA)); Powell, E. (Texas A&M Univ., College Station (USA))

    1990-02-01

    Crabs are a major component of modern marine ecosystems, but are only rarely described in fossil assemblages. Studies of brachyuran taphonomy have examined either the fossil end-products of the taphonomic process or the very earliest stages of decay and decomposition. The next logical step is the analysis of modern crab death assemblages; i.e., studies that examine taphonomic loss in areas where the composition of the living assemblage is known. The authors studied crab death assemblages in shallow water sediments at several localities in and near Laguna Madre, Texas. Nearly every sample examined contained some crab remains, most commonly in the form of isolated claws (dactyl and propodus). A crab fauna associated with a buried grass bed contained abundant remains of the xanthid crab Dyspanopeus texanus, including carapaces, chelipeds, and thoraxes, as well as fragments of the portunid Callinectes sapidus and the majiid Libinia dubia. Crab remains may be an overlooked portion of many preserved benthic assemblages, both in recent and modern sediments.

  9. The Marine Ecology of the Laguna San Rafael (Southern Chile): Ice Scour and Opportunism

    NASA Astrophysics Data System (ADS)

    Davenport, John

    1995-07-01

    Surveys of the intertidal fauna and flora, the plankton, fish, birds and marine mammals of the Laguna San Rafael were carried out by a Raleigh International Expedition in January-February 1993. The Laguna is dominated by the effects of scouring, low temperature and low salinity produced by the calving, tide-water San Rafael glacier that discharges into the Laguna. The fauna and flora are simple and largely limited to a small sector of the Laguna, relatively unaffected by ice. There is a predominance of herbivorous fish, ducks, geese and swans, feeding mainly on macroalgae. Penguins, cormorants, sea lions and porpoises make up the top predators. The strandline is influenced by very heavy rainfall and supports a fauna of freshwater and terrestrial molluscs and earthworms, fed upon by birds and frogs. Large numbers of mussels are present in the north-eastern sector of the Laguna, but many are found in poor condition, high on the shore. It is suggested that poor condition and mortality are caused by large calving waves that dislodge mussels. Such waves are caused by occasional loss of massive quantities of ice from the glacier.

  10. Normal Fault Visualization

    NSDL National Science Digital Library

    Jimm Myers

    This module demonstrates the motion on an active normal fault. Faulting offsets three horizontal strata. At the end of the faulting event, surface topography has been generated. The upper rock layer is eroded by clicking on the 'begin erosion' button. The operator can manipulate the faulting motion, stopping and reversing motion on the fault at any point along the transit of faulting. The action of erosion is also interactive. One possible activity is an investigation of the control of different faulting styles on regional landscape form. This visual lends itself to an investigation of fault motion, and a comparison of types of faults. The interactive normal faulting visual could be compared to other interactive visuals depicting thrust faults, reverse faults, and strike slip faults (interactive animations of these fault types can be found by clicking on 'Media Types' at top red bar, then 'Animations', then 'Faults'). By comparing the interactive images of different types of faulting with maps of terrains dominated by different faulting styles, students are aided in conceptualizing how certain faulting styles produce distinctive landforms on the earth's surface (e.g., ridge and valley topography [thrust faulting dominant] versus basin-and-range topography [normal faulting dominant]). Jimm Myers, geology professor at the University of Wyoming, originated the concept of The Magma Foundry, a website dedicated to improving Earth science education across the grade levels. The Magma Foundry designs and creates modular, stand-alone media components that can be utilized in a variety of pedagogical functions in courses and labs.

  11. Deciphering lake and maar geometries from seismic refraction and reflection surveys in Laguna Potrok Aike (southern Patagonia, Argentina)

    E-print Network

    Gilli, Adrian

    Deciphering lake and maar geometries from seismic refraction and reflection surveys in Laguna Potrok Aike. Laguna Potrok Aike is a bowl-shaped maar lake in southern Patagonia, Argentina, with a present mean diameter of ~3.5 km and a maximum water depth of ~100 m. Seismic surveys were carried out between 2003

  12. Optimal fault location

    E-print Network

    Knezev, Maja

    2008-10-10

    after the accurate fault condition and location are detected. This thesis focuses on an automated fault location procedure. Different fault location algorithms, classified according to the spatial placement of physical measurements on single ended...

  13. The radiological emergency plan to the Laguna Verde Nuclear Power Plant

    SciTech Connect

    Villard, M.M.; Magana, R.O. (Comision Nacional de Seguridad Nuclear Y Salvaguardias, Insurgentes Sur (Mexico))

    1992-01-01

    This paper describes the main characteristics of the area surrounding the Laguna Verde Nuclear Power Plant in terms of population, main economic activities, housing, and infrastructure. Based on these factors, the most important features of the Radiological Emergency Plan are described.

  14. Redhead duck behavior on lower Laguna Madre and adjacent ponds of southern Texas

    USGS Publications Warehouse

    Mitchell, C.A.; Custer, T.W.; Zwank, P.J.

    1992-01-01

    Behavior of redheads (Aythya americana) during winter was studied on the hypersaline lower Laguna Madre and adjacent freshwater to brackish water ponds of southern Texas. On Laguna Madre, feeding (46%) and sleeping (37%) were the most common behaviors. Redheads fed more during early morning (64%) than during the rest of the day (40%); feeding activity was negatively correlated with temperature. Redheads fed more often by dipping (58%) than by tipping (25%), diving (16%), or gleaning (0.1%). Water depth was least where they fed by dipping (16 cm), greatest where diving (75 cm), and intermediate where tipping (26 cm). Feeding sequences averaged 5.3 s for dipping, 8.1 s for tipping, and 19.2 s for diving. Redheads usually were present on freshwater to brackish water ponds adjacent to Laguna Madre only during daylight hours, and use of those areas declined as winter progressed. Sleeping (75%) was the most frequent behavior at ponds, followed by preening (10%), swimming (10%), and feeding (0.4%). Because redheads fed almost exclusively on shoalgrass while dipping and tipping in shallow water and shoalgrass meadows have declined in the lower Laguna Madre, proper management of the remaining shoalgrass habitat is necessary to ensure that this area remains the major wintering area for redheads.

  15. La captura comercial del coypo Myocastor coypus (Mammalia: Myocastoridae) en Laguna Adela, Argentina

    Microsoft Academic Search

    M. Gorostiague; H. A. Regidor

    1993-01-01

    We analyze the commercial harvest of coypus Myocastor coypus in Laguna Adela, Argentina, during 1988. A total of 217 animals was trapped from April to October (mean = 31 per month), with a capture effort of 1768 night-traps (mean = 255.57 per month). The capture increased up to a peak in August-September and was inversely correlated with capture effort.

  16. Vegetation and climate history from Laguna de Río Seco, Sierra Nevada, southern Spain

    Microsoft Academic Search

    R. S. Anderson; G. Jimenez-Moreno

    2010-01-01

    The largest mountain range in southern Spain - the Sierra Nevada - is an immense landscape with a rich biological and cultural heritage. Rising to 3,479 m at the summit of Mulhacén, the range was extensively glaciated during the late Pleistocene. Subsequent melting of cirque glaciers allowed formation of numerous small lakes and wetlands. One south-facing basin contains Laguna de

  17. Climatically induced depositional dynamics - Results of an areal sediment survey at Laguna Potrok Aike, Argentina

    Microsoft Academic Search

    S. Kastner; C. Ohlendorf; T. Haberzettl; A. Lücke; N. I. Maidana; C. Mayr; F. Schäbitz; B. Zolitschka

    2009-01-01

    The ca. 770 ka old maar lake Laguna Potrok Aike (51°S, 70°W) is an ICDP site within the ``Potrok Aike maar lake Sediment Archive Drilling prOject'' (PASADO) and was drilled in 2008. The lake - situated in the dry steppe environment of south-eastern Patagonia - is a palaeolimnological key site for the reconstruction of terrestrial climatic and environmental conditions in

  18. Late Holocene air temperature variability reconstructed from the sediments of Laguna Escondida, Patagonia, Chile (45°30'S)

    E-print Network

    Wehrli, Bernhard

    in Northern Chilean Patagonia, Lago Castor (45°36'S, 71°47'W) and Laguna Escondida (45°31'S, 71°49'W). Radiometric) and c. 1900 BC (Lago Castor). Both lakes show similarities and reproducibility in sedimentation rate signal and correlation with annual temperature reanalysis data (calibration 1900-2006 AD; Lago Castor r=0

  19. The significance of ammonium adsorption on lower Laguna Madre (Texas) sediments 

    E-print Network

    Morin, Jeffery Peter

    2008-10-10

    The work presented in this dissertation focuses on NH4+ in marine sediments and attempts to elucidate some of the specific pathways and processes affecting NH4+ in coastal marine regions. The majority of work was conducted in the Laguna Madre...

  20. The significance of ammonium adsorption on lower Laguna Madre (Texas) sediments 

    E-print Network

    Morin, Jeffery Peter

    2009-05-15

    The work presented in this dissertation focuses on NH4+ in marine sediments and attempts to elucidate some of the specific pathways and processes affecting NH4+ in coastal marine regions. The majority of work was conducted in the Laguna Madre...

  1. Transition Fault Simulation

    Microsoft Academic Search

    John Waicukauski; Eric Lindbloom; Barry Rosen; Vijay Iyengar

    1987-01-01

    Delay fault testing is becoming more important as VLSI chips become more complex. Components that are fragments of functions, such as those in gate-array designs, need a general model of a delay fault and a feasible method of generating test patterns and simulating the fault. The authors present such a model, called a transition fault, which when used with parallel-pattern,

  2. A dynamic fault tree

    Microsoft Academic Search

    Marko ?epin; Borut Mavko

    2002-01-01

    The fault tree analysis is a widely used method for evaluation of systems reliability and nuclear power plants safety. This paper presents a new method, which represents extension of the classic fault tree with the time requirements. The dynamic fault tree offers a range of risk informed applications. The results show that application of dynamic fault tree may reduce the

  3. Automated network fault management

    Microsoft Academic Search

    J. S. Baras; M. Ball; S. Gupta; P. Viswanathan; P. Shah

    1997-01-01

    Future military communication networks will have a mixture of backbone terrestrial, satellite and wireless terrestrial networks. The speeds of these networks vary and they are very heterogeneous. As networks become faster, it is not enough to do reactive fault management. Our approach combines proactive and reactive fault management. Proactive fault management is implemented by dynamic and adaptive routing. Reactive fault

  4. Statistical Fault Analysis

    Microsoft Academic Search

    Sunil K. Jain; Vishwani Agrawal

    1985-01-01

    Statistical Fault Analysis, or STAFAN, is proposed as an alternative to fault simulation of digital circuits. This method defines controllabilities and observabilities of circuit nodes as probabilities estimated from signal statistics of fault-free simulation. Special procedures deal with these quantities at fanout and feedback nodes. The computed probabilities are used to derive unbiased estimates of fault detection probabilities and overall
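
    A stripped-down illustration of the idea behind STAFAN (hedged: this uses a brute-force flip-the-input check as a stand-in for the paper's observability formulas, and a toy two-gate circuit): controllability and observability are estimated from fault-free simulation statistics and multiplied to approximate a stuck-at fault's detection probability.

      import random

      # A toy two-gate circuit: y = (a AND b) OR c.
      def circuit(a, b, c):
          return (a & b) | c

      random.seed(0)
      N = 10_000
      count_a1 = 0       # patterns with a = 1              -> one-controllability C1(a)
      count_a_sens = 0   # patterns where flipping a flips y -> empirical observability O(a)

      for _ in range(N):
          a, b, c = (random.randint(0, 1) for _ in range(3))
          count_a1 += a
          if circuit(a, b, c) != circuit(1 - a, b, c):
              count_a_sens += 1

      C1_a = count_a1 / N
      O_a = count_a_sens / N
      # A stuck-at-0 fault on line a is detected by a pattern that sets a = 1 AND makes
      # a observable; treating the two as independent gives the simple product estimate.
      print(f"C1(a) ~ {C1_a:.2f}, O(a) ~ {O_a:.2f}, "
            f"detection probability estimate ~ {C1_a * O_a:.2f}")   # ~0.5, ~0.25, ~0.12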

  5. Migration chronology and distribution of redheads on the lower Laguna Madre, Texas

    USGS Publications Warehouse

    Custer, Christine M.; Custer, T.W.; Zwank, P.J.

    1997-01-01

    An estimated 80% of redheads (Aythya americana) winter on the Laguna Madre of southern Texas and Mexico. Because there have been profound changes in the Laguna Madre over the past three decades and the area is facing increasing industrial and recreational development, we studied the winter distribution and habitat requirements of redheads during two winters (1987-1988 and 1988-1989) on the Lower Laguna Madre, Texas to provide information that could be used to understand, identify, and protect wintering redhead habitat. Redheads began arriving on the Lower Laguna Madre during early October in 1987 and 1988, and continued to arrive through November. Redhead migration was closely associated with passing weather fronts. Redheads arrived on the day a front arrived and during the following two days; no migrants were observed arriving the day before a weather front arrived. Flock size of arriving redheads was 26.4 ± 0.6 birds and did not differ among days or by time of day (morning midday, or afternoon). Number of flocks arriving per 0.5 h interval (arrival rate) was greater during afternoon (21.7 ± 0.6) than during morning (4.3 ± 1.2) or midday (1.5 ± 0.4) on the day of frontal passage and during the first day after frontal passage. Upon arrival, redhead flocks congregated in the central portion of the Lower Laguna Madre. They continued to use the central portion throughout the winter, but gradually spread to the northern and southern ends of the lagoon. Seventy-one percent of the area used by flocks was vegetated with shoalgrass (Halodule wrightii) although shoalgrass covered only 32% of the lagoon. Flock movements seemed to be related to tide level; redheads moved to remain in water 12-30 cm deep. These data can be used by the environmental community to identify and protect this unique and indispensable habitat for wintering redheads.

  6. Vesicularity variation to pyroclasts from silicic eruptions at Laguna del Maule volcanic complex, Chile

    NASA Astrophysics Data System (ADS)

    Wright, H. M. N.; Fierstein, J.; Amigo, A.; Miranda, J.

    2014-12-01

    Crystal-poor rhyodacitic to rhyolitic volcanic eruptions at Laguna del Maule volcanic complex, Chile have produced an astonishing range of textural variation to pyroclasts. Here, we focus on eruptive deposits from two Quaternary eruptions from vents on the northwestern side of the Laguna del Maule basin: the rhyolite of Loma de Los Espejos and the rhyodacite of Laguna Sin Puerto. Clasts in the pyroclastic fall and pyroclastic flow deposits from the rhyolite of Loma de Los Espejos range from dense, non-vesicular (obsidian) to highly vesicular, frothy (coarsely vesicular reticulite); where vesicularity varies from <1% to >90%. Bulk compositions range from 75.6-76.7 wt.% SiO2. The highest vesicularity clasts are found in early fall deposits and widely dispersed pyroclastic flow deposits; the frothy carapace to lava flows is similarly highly vesicular. Pyroclastic deposits also contain tube pumice, and macroscopically folded, finely vesicular, breadcrusted, and heterogeneously vesiculated textures. We speculate that preservation of the highest vesicularities requires relatively low decompression rates or open system degassing such that relaxation times were sufficient to allow extensive vesiculation. Such an inference is in apparent contradiction to documentation of Plinian dispersal to the eruption. Clasts in the pyroclastic fall deposit of the rhyodacite (68-72 wt.% SiO2) of Laguna Sin Puerto are finely vesicular, with vesicularity modes at ~50% and ~68% corresponding to gray and white pumice colors, respectively. Some clasts are banded in color (and vesicularity). All clasts were fragmented into highly angular particles, with subplanar to slightly concave exterior surfaces (average Wadell Roundness of clast margins between 0.32 and 0.39), indicating brittle fragmentation. In contrast to Loma de Los Espejos, high bubble number densities to Laguna Sin Puerto rhyodacite imply high decompression rates.

  7. Food choice of wintering redhead ducks Aythya americana and utilization of available resources in Lower Laguna Madre, Texas

    E-print Network

    Cornelius, Stephen Eugene

    1975-01-01

    fulfillment of the requirement for the degree of MASTER OF SCIENCE December 1975 Major Subject: Wildlife and Fisheries Sciences FOOD CHOICE OF WINTERING REDHEAD DUCKS (AYTHYA AMERICANA) AND UTILIZATION OF AVAILABLE RESOURCES IN LOWER LAGUNA MADRE...

  8. Factors affecting the distribution, food habits, and lead toxicosis of redhead ducks in the Laguna Madre, Texas 

    E-print Network

    Marsh, Steven Lyle

    1979-01-01

    FACTORS AFFECTING THE DISTRIBUTION, FOOD HABITS, AND LEAD TOXICOSIS OF REDHEAD DUCKS IN THE LAGUNA MADRE, TEXAS A Thesis by STEVEN LYLE MARSH Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirement for the degree of MASTER OF SCIENCE December 1979 Major Subject: Wildlife and Fisheries Sciences FACTORS AFFECTING THE DISTRIBUTION, FOOD HABITS, AND LEAD TOXICOSIS OF REDHEAD DUCKS IN THE LAGUNA MADRE, TEXAS A Thesis by STEVEN LYLE MARSH...

  9. Fault recovery characteristics of the fault tolerant multi-processor

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1990-01-01

    The fault handling performance of the fault tolerant multiprocessor (FTMP) was investigated. Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles byzantine or lying faults. It is pointed out that these weak areas in the FTMP's design increase the probability that, for any hardware fault, a good LRU (line replaceable unit) is mistakenly disabled by the fault management software. It is concluded that fault injection can help detect and analyze the behavior of a system in the ultra-reliable regime. Although fault injection testing cannot be exhaustive, it has been demonstrated that it provides a unique capability to unmask problems and to characterize the behavior of a fault-tolerant system.

  10. Crust-Busting Faults

    NSDL National Science Digital Library

    George Davis

    Students independently research a major ancient or active regional fault, and cogently describe map and cross sectional characteristics, kinematics, mechanics, and plate tectonic significance. They present results to classmates, teaching assistants, and instructors. FAULTING, REGIONAL TECTONICS, PLATE TECTONICS

  11. Fault Mapping in Haiti

    USGS Multimedia Gallery

    USGS geologist Carol Prentice surveying features that have been displaced by young movements on the Enriquillo fault in southwest Haiti.  The January 2010 Haiti earthquake was associated with the Enriquillo fault....

  12. The microbial community at Laguna Figueroa, Baja California Mexico - From miles to microns

    NASA Technical Reports Server (NTRS)

    Stolz, J. F.

    1985-01-01

    The changes in the composition of the stratified microbial community in the sediments at Laguna Figueroa following floods are studied. The lagoon, which is located on the Pacific coast of the Baja California peninsula 200 km south of the Mexican-U.S. border, is comprised of an evaporite flat and a salt marsh. Data collected from 1979-1983 using Landsat imagery, Skylab photographs, and light and transmission electron microscopy are presented. The flood conditions, which included 1-3 m of meteoric water covering the area and a remnant of 5-10 cm of siliciclastic and clay sediment, are described. The composition of the community prior to the flooding consisted of Microcoleus, Phormidium sp., a coccoid cyanobacterium, Phloroflexus, Ectothiorhodospira, Chloroflexus, Thiocapsa sp., and Chromatium. Following the floods, Thiocapsa, Chromatium, Oscillatoria sp., Spirulina sp., and Microcoleus are observed in the sediments.

  13. Environmental evidence of fossil fuel pollution in Laguna Chica de San Pedro lake sediments (Central Chile).

    PubMed

    Chirinos, L; Rose, N L; Urrutia, R; Muñoz, P; Torrejón, F; Torres, L; Cruces, F; Araneda, A; Zaror, C

    2006-05-01

    This paper describes lake sediment spheroidal carbonaceous particle (SCP) profiles from Laguna Chica San Pedro, located in the Biobío Region, Chile (36 degrees 51' S, 73 degrees 05' W). The earliest presence of SCPs was found at 16 cm depth, corresponding to the 1915-1937 period, at the very onset of industrial activities in the study area. No SCPs were found at lower depths. SCP concentrations in Laguna Chica San Pedro lake sediments were directly related to local industrial activities. Moreover, no SCPs were found in Galletué lake (38 degrees 41' S, 71 degrees 17.5' W), a pristine high mountain water body used here as a reference site, suggesting that contribution from long distance atmospheric transport could be neglected, unlike published data from remote Northern Hemisphere lakes. These results are the first SCP sediment profiles from Chile, showing a direct relationship with fossil fuel consumption in the region. Cores were dated using the 210Pb technique. PMID:16226361

  14. A preliminary study of the distribution of some copepods in upper Laguna Madre 

    E-print Network

    Henderson, John C

    1958-01-01

    ..., the author conducted a hydrographic survey of Lower Laguna Madre (Denison and Henderson) and recorded a net inflow of sea water from the Gulf of Mexico into Lower Laguna Madre. During the summer of 1956, on the occasion of several...

  15. Fault zone hydrogeology

    NASA Astrophysics Data System (ADS)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

    Deformation along faults in the shallow crust (< 1 km) introduces permeability heterogeneity and anisotropy, which has an important impact on processes such as regional groundwater flow, hydrocarbon migration, and hydrothermal fluid circulation. Fault zones have the capacity to be hydraulic conduits connecting shallow and deep geological environments, but simultaneously the fault cores of many faults often form effective barriers to flow. The direct evaluation of the impact of faults to fluid flow patterns remains a challenge and requires a multidisciplinary research effort of structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface- and subsurface observations from diverse rock types from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the discipline of structural geology and hydrogeology to co-evolve and address remaining challenges by co-locating study areas, sharing approaches and fusing data, developing conceptual models from hydrogeologic data, numerical modeling, and training interdisciplinary scientists.

  16. Engineering geology criteria for dredged material disposal in upper Laguna Madre, Texas

    E-print Network

    Stinson, James Edmellaire

    1977-01-01

    Committee: Dr. C. C. Mathewson. Channels in Murdock Basin, Upper Laguna Madre, fill with sediment dispersed from adjacent subaqueous dredged material disposal sites. Original dredging placed the material in a series of disposal mounds close... to and parallel to the channel. Three disposal sites in different water depths revealed varying conditions of sediment dispersion and island erosion. Water depth at the different sites varies from 0 to 1.5 feet in the wind tidal flats, 1 to 3 feet...

  17. Southern hemispheric westerlies control the spatial distribution of modern sediments in Laguna Potrok Aike, Argentina

    Microsoft Academic Search

    Stephanie Kastner; Christian Ohlendorf; Torsten Haberzettl; Andreas Lücke; Christoph Mayr; Nora I. Maidana; Frank Schäbitz; Bernd Zolitschka

    2010-01-01

    We studied the internal lake processes that control the spatial distribution and characteristics of modern sediments at the ICDP (International Continental Scientific Drilling Program) deep drilling site in Laguna Potrok Aike, southern Patagonia, Argentina. Sediment distribution patterns were investigated using a dense grid of 63 gravity cores taken throughout the lake basin and 40 additional shoreline samples. Analysis of the

  18. Coastal Pond Use by Redheads Wintering in the Laguna Madre, Texas

    Microsoft Academic Search

    Bart M. Ballard; J. Dale James; Ralph L. Bingham; Mark J. Petrie; Barry C. Wilson

    2010-01-01

    The distribution of North American redheads (Aythya americana) during winter is highly concentrated in the Laguna Madre of Texas and Tamaulipas, Mexico. Redheads forage almost exclusively in the lagoon and primarily on shoalgrass (Halodule wrightii) rhizomes; however, they make frequent flights to adjacent coastal ponds to dilute salt loads ingested while foraging. We conducted 63 weekly aerial surveys during October–March

  19. Fault Tree Analysis

    Microsoft Academic Search

    Liudong Xing; Suprasad V. Amari

    In this chapter, a state-of-the-art review of fault tree analysis is presented. Different forms of fault trees, including static, dynamic, and non-coherent fault trees, and their applications and analyses will be discussed. Some advanced topics such as importance analysis, dependent failures, disjoint events, and multistate systems will also be presented.
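
    As a minimal illustration of the static case reviewed here (assuming independent basic events, an assumption that the chapter's dependent-failure and disjoint-event topics relax), the top-event probability is obtained by multiplying probabilities at AND gates and combining complements at OR gates. The gate names and failure probabilities below are invented.

      # Python sketch: top-event probability of a small static fault tree,
      # assuming independent basic events (all names and numbers are made up).
      def p_and(probs):                # output fails only if every input fails
          out = 1.0
          for p in probs:
              out *= p
          return out

      def p_or(probs):                 # output fails if at least one input fails
          out = 1.0
          for p in probs:
              out *= 1.0 - p
          return 1.0 - out

      pump_a, pump_b, valve = 1e-3, 1e-3, 5e-4
      top = p_or([p_and([pump_a, pump_b]), valve])  # redundant pumps OR a stuck valve
      print(top)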

  20. Quantitative fault seal prediction

    SciTech Connect

    Yielding, G.; Freeman, B.; Needham, D.T. [Badley Earth Sciences Ltd., Lincolnshire (United Kingdom)

    1997-06-01

    Fault seal can arise from reservoir/nonreservoir juxtaposition or by development of fault rock having high entry pressure. The methodology for evaluating these possibilities uses detailed seismic mapping and well analysis. A first-order seal analysis involves identifying reservoir juxtaposition areas over the fault surface by using the mapped horizons and a refined reservoir stratigraphy defined by isochores at the fault surface. The second-order phase of the analysis assesses whether the sand/sand contacts are likely to support a pressure difference. We define two types of lithology-dependent attributes: gouge ratio and smear factor. Gouge ratio is an estimate of the proportion of fine-grained material entrained into the fault gouge from the wall rocks. Smear factor methods (including clay smear potential and shale smear factor) estimate the profile thickness of a shale drawn along the fault zone during faulting. All of these parameters vary over the fault surface, implying that faults cannot simply be designated sealing or nonsealing. An important step in using these parameters is to calibrate them in areas where across-fault pressure differences are explicitly known from wells on both sides of a fault. Our calibration for a number of data sets shows remarkably consistent results, despite their diverse settings (e.g., Brent province, Niger Delta, Columbus basin). For example, a shale gouge ratio of about 20% (volume of shale in the slipped interval) is a typical threshold between minimal across-fault pressure difference and significant seal.
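
    The shale gouge ratio mentioned in the abstract lends itself to a short worked example. The sketch below follows the stated definition (the proportion of shale in the interval that has slipped past a point on the fault); the bed thicknesses, clay fractions, and throw are invented for illustration.

      # Python sketch: shale gouge ratio (SGR) at one point on a fault surface.
      def shale_gouge_ratio(beds, throw_m):
          """beds: (thickness_m, clay_fraction) pairs for the slipped interval;
          throw_m: fault throw in metres. Returns SGR in percent."""
          shale_thickness = sum(t * vcl for t, vcl in beds)
          return 100.0 * shale_thickness / throw_m

      # hypothetical 50 m slipped interval
      beds = [(10.0, 0.15), (5.0, 0.60), (20.0, 0.05), (15.0, 0.35)]
      print(shale_gouge_ratio(beds, throw_m=50.0))  # ~21%, near the ~20% threshold cited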

  1. FTAPE: A fault injection tool to measure fault tolerance

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-01-01

    The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.

  2. FTAPE: A fault injection tool to measure fault tolerance

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1994-01-01

    The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.

  3. Faults of Southern California

    NSDL National Science Digital Library

    This interactive map displays faults for five regions in Southern California. Clicking on a region links to an enlarged relief map of the area, with local faults highlighted in colors. Users can click on individual faults to access pages with more detailed information, such as type, length, nearest communities, and a written description. In all of the maps, the segment of the San Andreas fault that is visible is highlighted in red, and scales for distances and elevations are provided. There is also a link to an alphabetical listing of faults by name.

  4. Trishear for curved faults

    NASA Astrophysics Data System (ADS)

    Brandenburg, J. P.

    2013-08-01

    Fault-propagation folds form an important trapping element in both onshore and offshore fold-thrust belts, and as such benefit from reliable interpretation. Building an accurate geologic interpretation of such structures requires palinspastic restorations, which are made more challenging by the interplay between folding and faulting. Trishear (Erslev, 1991; Allmendinger, 1998) is a useful tool to unravel this relationship kinematically, but is limited by a restriction to planar fault geometries, or at least planar fault segments. Here, new methods are presented for trishear along continuously curved reverse faults defining a flat-ramp transition. In these methods, rotation of the hanging wall above a curved fault is coupled to translation along a horizontal detachment. Including hanging wall rotation allows for investigation of structures with progressive backlimb rotation. Application of the new algorithms are shown for two fault-propagation fold structures: the Turner Valley Anticline in Southwestern Alberta, and the Alpha Structure in the Niger Delta.
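
    For readers unfamiliar with the kinematics being extended here, one commonly used planar-fault trishear velocity field (the symmetric, incompressible linear case; the curved-fault formulation of this paper is a generalization and is not reproduced here) can be written, with x measured along the fault from the tip, phi the trishear half-apex angle, and v_0 the hanging-wall slip rate:

      v_x = \frac{v_0}{2}\left(\frac{y}{x\tan\varphi} + 1\right), \qquad
      v_y = \frac{v_0\tan\varphi}{4}\left[\left(\frac{y}{x\tan\varphi}\right)^2 - 1\right],
      \qquad |y| \le x\tan\varphi,

    which satisfies \partial v_x/\partial x + \partial v_y/\partial y = 0 (area conservation), gives v_x = v_0 on the hanging-wall boundary y = x\tan\varphi, and gives v_x = v_y = 0 on the footwall boundary y = -x\tan\varphi.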

  5. Accelerated Fault Simulation and Fault Grading in Combinational Circuits

    Microsoft Academic Search

    Kurt Antreich; Michael H. Schulz

    1987-01-01

    The principles of fault simulation and fault grading are introduced by a general description of the problem. Based upon the well-known concept of restricting fault simulation to the fanout stems and of combining it with a backward traversal inside the fanout-free regions of the circuit, proposals are presented to further accelerate fault simulation and fault grading. These proposals aim at

  6. Fault Links: Exploring the Relationship Between Module and Fault Types

    E-print Network

    Hayes, Jane E.

    ...robust, reliable software. Fault-based analysis and fault-based testing are related technologies... of a set of pre-specified faults. Similarly, fault-based analysis identifies static techniques (such as traceability analysis) that should be performed to ensure that a set of pre-specified faults do not exist

  7. On-line test for fault-secure fault identification

    Microsoft Academic Search

    Samuel Norman Hamilton; Alex Orailoglu

    2000-01-01

    In an increasing number of applications, reliability is essential. On-line resistance to permanent faults is a difficult and important aspect of providing reliability. Particularly vexing is the problem of fault identification. Current methods are either domain specific or expensive. We have developed a fault-secure methodology for permanent fault identification through algorithmic duplication without necessitating complete functional unit replication. Fault identification

  8. Uranium and lanthanides in surficial sediments of Laguna Ojo de Liebre and evaporation ponds of Exportadora de Sal, Guerrero Negro, México

    Microsoft Academic Search

    M. M. Grajeda-Muñoz; E. Choumiline; D. Zaposhnikov

    2007-01-01

    To assess uranium and lanthanides behavior in hypersaline environments, surficial sediment samples were taken from Laguna Ojo de Liebre as well as from the evaporation ponds of Exportadora de Sal (the largest natural salt producing facility in the continent). A total of 63 surficial sediment samples from the laguna and 30 samples from the ponds were analyzed by inductive coupled

  9. How Faults Shape the Earth.

    ERIC Educational Resources Information Center

    Bykerk-Kauffman, Ann

    1992-01-01

    Presents fault activity with an emphasis on earthquakes and changes in continent shapes. Identifies three types of fault movement: normal, reverse, and strike faults. Discusses the seismic gap theory, plate tectonics, and the principle of superposition. Vignettes portray fault movement, and the locations of the San Andreas fault and epicenters of…

  10. Fault simulation and test generation for small delay faults 

    E-print Network

    Qiu, Wangqi

    2007-04-25

    Delay faults are an increasingly important test challenge. Traditional delay fault models are incomplete in that they model only a subset of delay defect behaviors. To solve this problem, a more realistic delay fault model has been developed which...

  11. Fault simulation and test generation for small delay faults

    E-print Network

    Qiu, Wangqi

    2007-04-25

    Delay faults are an increasingly important test challenge. Traditional delay fault models are incomplete in that they model only a subset of delay defect behaviors. To solve this problem, a more realistic delay fault model has been developed which...

  12. Its Not My Fault

    NSDL National Science Digital Library

    2012-08-03

    Students become familiar with strike-slip faults, normal faults, reverse faults and visualize these geological structures using cardboard or a plank of wood, a stack of books, protractor, and a spring scale. The resource is part of the teacher's guide accompanying the video, NASA SCI Files: The Case of the Shaky Quake. Lesson objectives supported by the video, additional resources, teaching tips and an answer sheet are included in the teacher's guide.

  13. The San Andreas Fault

    NSDL National Science Digital Library

    Sandra Schulz

    This United States Geological Survey (USGS) publication discusses the San Andreas Fault in California; specifically what has caused the fault, where it is located, surface features that characterize it, and movement that has occurred. General earthquake information includes an explanation of what earthquakes are, and earthquake magnitude versus intensity. Earthquakes that have occurred along the fault are covered, as well as where the next large one may occur and what can be done about large earthquakes in general.

  14. It's Not Your Fault

    NSDL National Science Digital Library

    In this lesson students will learn about tectonic plate movement. They will discover that we can measure the relative motions of the Pacific Plate and the North American Plate along the San Andreas Fault. Students will be able to compare and contrast movements on either side of the San Andreas Fault, calculate the amount of movement of a tectonic plate over a period of time, and describe the processes involved in the occurrence of earthquakes along the fault.

  15. Fault detection and fault tolerance in robotics

    NASA Technical Reports Server (NTRS)

    Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.

    1992-01-01

    Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.

  16. Denali Fault: Alaska Pipeline

    USGS Multimedia Gallery

    View south along the Trans Alaska Pipeline in the zone where it was engineered for the Denali fault. The fault trace passes beneath the pipeline between the 2nd and 3rd slider supports at the far end of the zone. A large arc can be seen in the pipe on the right, due to shortening of the ...

  17. Fault tree handbook

    Microsoft Academic Search

    D. F. Haasl; N. H. Roberts; W. E. Vesely; F. F. Goldberg

    1981-01-01

    This handbook describes a methodology for reliability analysis of complex systems such as those which comprise the engineered safety features of nuclear power generating stations. After an initial overview of the available system analysis approaches, the handbook focuses on a description of the deductive method known as fault tree analysis. The following aspects of fault tree analysis are covered: basic

  18. Practical Byzantine Fault Tolerance

    Microsoft Academic Search

    Miguel Castro; Barbara Liskov

    1999-01-01

    This paper describes a new replication algorithm that is able to tolerate Byzantine faults. We believe that Byzantine- fault-tolerant algorithms will be increasingly important in the future because malicious attacks and software errors are increasingly common and can cause faulty nodes to exhibit arbitrary behavior. Whereas previous algorithms assumed a synchronous system or were too slow to be used in
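
    As background to the fault-tolerance claim in this abstract (a hedged sketch of the standard resilience bound for Byzantine fault-tolerant replication, not of the protocol's message flow): tolerating f simultaneously faulty replicas requires n >= 3f + 1 replicas and quorums of 2f + 1, so that any two quorums intersect in at least one correct replica.

      # Python sketch: replica and quorum sizing for PBFT-style Byzantine
      # fault tolerance (standard bound; not code from the paper).
      def min_replicas(f):
          return 3 * f + 1             # smallest n that can tolerate f Byzantine faults

      def quorum_size(f):
          return 2 * f + 1             # any two quorums overlap in >= f + 1 replicas

      for f in range(4):
          n, q = min_replicas(f), quorum_size(f)
          overlap = 2 * q - n          # worst-case overlap of two quorums
          print(f"f={f}: n={n}, quorum={q}, overlap={overlap} (>= {f + 1}, so one is correct)")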

  19. SFT: scalable fault tolerance

    Microsoft Academic Search

    Fabrizio Petrini; Jarek Nieplocha; Vinod Tipparaju

    2006-01-01

    In this paper we will present a new technology that we are currently developing within the SFT: Scalable Fault Tolerance FastOS project, which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continuing execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency, requiring

  20. Puente Hills Fault Visualization

    NSDL National Science Digital Library

    The Puente Hills Fault poses a disaster threat to the Los Angeles region. Earthquake simulations on this fault estimate damages over $250 billion. Visualizations created by SDSC using the data computed from earthquake simulations help one to fathom the propagation of seismic waves and the areas affected.

  1. Denali Fault: Gillette Pass

    USGS Multimedia Gallery

    View northward of mountain near Gillette Pass showing sackung features. Here the mountaintop moved downward like a keystone, producing an uphill-facing scarp. The main Denali fault trace is on the far side of the mountain and a small splay fault is out of view below the photo....

  2. Denali Fault: Gillette Pass

    USGS Multimedia Gallery

    View north of the Denali fault trace at Gillette Pass. This view shows that the surface rupture reoccupies the previous fault scarp. Also, the right-lateral offset of these stream gullies has developed since deglaciation in the last 10,000 years or so....

  3. Denali Fault: Susitna Glacier

    USGS Multimedia Gallery

    Helicopters and satellite phones were integral to the geologic field response. Here, Peter Haeussler is calling a seismologist to pass along the discovery of the Susitna Glacier thrust fault. View is to the north up the Susitna Glacier. The Denali fault trace lies in the background where the two lan...

  4. Folds and Faults

    NSDL National Science Digital Library

    In this activity, students will learn how rock layers are folded and faulted and how to represent these structures in maps and cross sections. They will use playdough to represent layers of rock and make cuts in varying orientations to represent faults and other structures.

  5. Water-quality reconnaissance of Laguna Tortuguero, Vega Baja, Puerto Rico, March 1999-May 2000

    USGS Publications Warehouse

    Soler-Lopez, Luis; Guzman-Rios, Senen; Conde-Costas, Carlos

    2006-01-01

    The Laguna Tortuguero, a slightly saline to freshwater lagoon in north-central Puerto Rico, has a surface area of about 220 hectares and a mean depth of about 1.2 meters. As part of a water-quality reconnaissance, water samples were collected at about monthly and near bi-monthly intervals from March 1999 to May 2000 at four sites: three stations inside the lagoon and one station at the artificial outlet channel dredged in 1940, which connects the lagoon with the Atlantic Ocean. Physical characteristics that were determined from these water samples were pH, temperature, specific conductance, dissolved oxygen, dissolved oxygen saturation, and discharge at the outlet canal. Other water-quality constituents also were determined, including nitrogen and phosphorus species, organic carbon, chlorophyll a and b, plankton biomass, hardness, alkalinity as calcium carbonate, and major ions. Additionally, a diel study was conducted at three stations in the lagoon to obtain data on the diurnal variation of temperature, specific conductance, dissolved oxygen, and dissolved oxygen saturation. The data analysis indicates the water quality of Laguna Tortuguero complies with the Puerto Rico Environmental Quality Board standards and regulations.

  6. ZAMBEZI: a parallel pattern parallel fault sequential circuit fault simulator

    Microsoft Academic Search

    Minesh B. Amin; Bapiraju Vinnakota

    1996-01-01

    Sequential circuit fault simulators use the multiple bits in a computer data word to accelerate simulation. We introduce, and implement, a new sequential circuit fault simulator, a parallel pattern parallel fault simulator, ZAMBEZI, which simultaneously simulates multiple faults with multiple vectors in one data word. ZAMBEZI is developed by enhancing the control flow of existing parallel pattern algorithms. For a
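
    The word-level parallelism described here can be illustrated generically (this is not ZAMBEZI itself, just a toy word-parallel stuck-at simulation in which each bit position of an integer carries one input pattern and a fault is injected on an internal node):

      # Python sketch: pattern-parallel stuck-at fault simulation of the toy
      # combinational circuit y = (a AND b) OR c, four patterns per word.
      MASK = (1 << 4) - 1

      def good(a, b, c):
          return ((a & b) | c) & MASK

      def faulty(a, b, c, stuck_at):
          n1 = MASK if stuck_at else 0     # internal node (a AND b) forced to 0 or 1
          return (n1 | c) & MASK

      a, b, c = 0b1100, 0b1010, 0b0001     # bit i of each word is pattern i
      for stuck_at in (0, 1):
          diff = good(a, b, c) ^ faulty(a, b, c, stuck_at)
          detecting = [i for i in range(4) if (diff >> i) & 1]
          print(f"node (a AND b) stuck-at-{stuck_at}: detected by patterns {detecting}")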

  7. A CMOS fault extractor for inductive fault analysis

    Microsoft Academic Search

    F. Joel Ferguson; John Paul Shen

    1988-01-01

    The inductive fault analysis (IFA) method is presented and a description is given of the CMOS fault extraction program FXT. The IFA philosophy is to consider the causes of faults (manufacturing defects) and then simulate these causes to find the faults that are likely to occur in a circuit. FXT automates IFA for a CMOS technology by generating a list

  8. Differential Fault Analysis of AES: Toward Reducing Number of Faults

    E-print Network

    Differential Fault Analysis of AES: Toward Reducing Number of Faults. Chong Hee Kim, Louvain-la-Neuve, Belgium. Abstract: Differential Fault Analysis (DFA) finds the key of a block cipher using differential... Keywords: Side channel attacks, Differential fault analysis, Block ciphers, AES.

  9. How clays weaken faults.

    NASA Astrophysics Data System (ADS)

    van der Pluijm, Ben A.; Schleicher, Anja M.; Warr, Laurence N.

    2010-05-01

    The weakness of upper crustal faults has been variably attributed to (i) low values of normal stress, (ii) elevated pore-fluid pressure, and (iii) low frictional strength. Direct observations on natural fault rocks provide new evidence for the role of frictional properties on fault strength, as illustrated by our recent work on samples from the San Andreas Fault Observatory at Depth (SAFOD) drillhole at Parkfield, California. Mudrock samples from fault zones at ~3066 m and ~3296 m measured depth show variably spaced and interconnected networks of displacement surfaces that consist of host rock particles that are abundantly coated by polished films with occasional striations. Transmission electron microscopy and X-ray diffraction study of the surfaces reveal the occurrence of neocrystallized thin-film clay coatings containing illite-smectite (I-S) and chlorite-smectite (C-S) phases. X-ray texture goniometry shows that the crystallographic fabric of these fault rocks is characteristically low, in spite of an abundance of clay phases. 40Ar/39Ar dating of the illitic mixed-layered coatings demonstrates recent crystallization and reveals the initiation of an "older" fault strand (~8 Ma) at 3066 m measured depth, and a "younger" fault strand (~4 Ma) at 3296 m measured depth. Today, the younger strand is the site of active creep behavior, reflecting continued activation of these clay-weakened zones. We propose that the majority of slow fault creep is controlled by the high density of thin (<100 nm thick) nano-coatings on fracture surfaces, which become sufficiently smectite-rich and interconnected at low angles to allow slip with minimal breakage of stronger matrix clasts. Displacements are accommodated by localized frictional slip along coated particle surfaces and hydrated smectitic phases, in combination with intracrystalline deformation of the clay lattice, associated with extensive mineral dissolution, mass transfer and continued growth of expandable layers. The localized concentration of smectite in both I-S and C-S minerals, which probably extends to greater depths (<10 km), is responsible for fault weakening, with cataclasis and fluid infiltration creating nucleation sites for neomineralization on displacement surfaces during continued faulting. The role of newly grown, ultrathin, hydrous clay coatings on displacement surfaces in the San Andreas Fault contrasts with previously proposed scenarios of reworked talc/serpentine phases as an explanation for weak faults and creep behavior at these depths.

  10. Measuring fault tolerance with the FTAPE fault injection tool

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-01-01

    This paper describes FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The major parts of the tool include a system-wide fault-injector, a workload generator, and a workload activity measurement tool. The workload creates high stress conditions on the machine. Using stress-based injection, the fault injector is able to utilize knowledge of the workload activity to ensure a high level of fault propagation. The errors/fault ratio, performance degradation, and number of system crashes are presented as measures of fault tolerance.
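
    The three measures named here can be computed from campaign records in a straightforward way; the sketch below uses invented injection outcomes and a hypothetical fault-free baseline runtime, purely to show how the ratios are formed (it is not FTAPE code).

      # Python sketch: aggregating fault-injection outcomes into the three
      # measures named in the abstract (all records below are invented).
      injections = [                      # (errors observed, runtime in s, crashed?)
          (3, 12.1, False), (0, 10.2, False), (7, 15.8, True), (1, 11.0, False),
      ]
      baseline_runtime = 10.0             # fault-free runtime of the same workload

      n_faults = len(injections)
      errors_per_fault = sum(e for e, _, _ in injections) / n_faults
      degradation = sum((t - baseline_runtime) / baseline_runtime
                        for _, t, _ in injections) / n_faults
      crashes = sum(1 for _, _, crashed in injections if crashed)
      print(errors_per_fault, f"{degradation:.1%}", crashes)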

  11. Congener-specific polychlorinated biphenyl patterns in eggs of aquatic birds from the Lower Laguna Madre, Texas

    Microsoft Academic Search

    Miguel A. Mora

    1996-01-01

    Eggs from four aquatic bird species nesting in the Lower Laguna Madre, Texas, were collected to determine differences and similarities in the accumulation of congener-specific polychlorinated biphenyls (PCBs) and to evaluate PCB impacts on reproduction. Because of the different toxicities of PCB congeners, it is important to know which congeners contribute most to total PCBs. The predominant PCB congeners were

  12. Vegetation and climate dynamics in southern South America: The microfossil record of Laguna Potrok Aike, Santa Cruz, Argentina

    Microsoft Academic Search

    Michael Wille; Nora I. Maidana; Frank Schäbitz; Michael Fey; Torsten Haberzettl; Stephanie Janssen; Andreas Lücke; Christoph Mayr; Christian Ohlendorf; Gerhard H. Schleser; Bernd Zolitschka

    2007-01-01

    Pollen and diatom assemblages of the sediment record from Laguna Potrok Aike provide new data about the vegetation and climate history since 16,100 cal BP of the drylands in the Patagonian Steppe, some 80 km east of the Andes on the southernmost Argentinean mainland. In combination with formerly published geochemical sediment proxies it is shown that during the Late Glacial the steppe

  13. Spatial distribution of Late Holocene sediment infill controlled by lake internal depositional dynamics, Laguna Potrok Aike (southern Patagonia, Argentina)

    Microsoft Academic Search

    Stephanie Kastner; Christian Ohlendorf; Torsten Haberzettl; Andreas Lücke; Nora I. Maidana; Christoph Mayr; Frank Schäbitz; Bernd Zolitschka

    2010-01-01

    The maar Laguna Potrok Aike (51°S, 70°W) is situated in the dry steppe environment of southern Patagonia. This 100 m deep lake is a palaeolimnological key site among the emerging terrestrial climate archives of the southern hemisphere and therefore was chosen as an ICDP drilling site. Interdisciplinary multi-proxy sediment studies document the sensitivity of this lacustrine record to palaeoclimatic and

  14. The LAGUNA design study- towards giant liquid based underground detectors for neutrino physics and astrophysics and proton decay searches

    Microsoft Academic Search

    D. Angus; A. Ariga; D. Autiero; A. Apostu; A. Badertscher; T. Bennet; G. Bertola; P. F. Bertola; O. Besida; A. Bettini; C. Booth; J. L. Borne; I. Brancus; W. Bujakowsky; J. E. Campagne; G. Cata Danil; F. Chipesiu; M. Chorowski; J. Cripps; A. Curioni; S. Davidson; Y. Declais; U. Drost; O. Duliu; J. Dumarchez; T. Enqvist; A. Ereditato; F. von Feilitzsch; H. Fynbo; T. Gamble; G. Galvanin; A. Gendotti; W. Gizicki; M. Goger-Neff; U. Grasslin; D. Gurney; M. Hakala; S. Hannestad; M. Haworth; S. Horikawa; A. Jipa; F. Juget; T. Kalliokoski; S. Katsanevas; M. Keen; J. Kisiel; I. Kreslo; V. Kudryastev; P. Kuusiniemi; L. Labarga; T. Lachenmaier; J. C. Lanfranchi; I. Lazanu; T. Lewke; K. Loo; P. Lightfoot; M. Lindner; A. Longhin; J. Maalampi; M. Marafini; A. Marchionni; R. M. Margineanu; A. Markiewicz; T. Marrodan-Undagoita; J. E. Marteau; R. Matikainen; Q. Meindl; M. Messina; J. W. Mietelski; B. Mitrica; A. Mordasini; L. Mosca; U. Moser; G. Nuijten; L. Oberauer; A. Oprina; S. Paling; S. Pascoli; T. Patzak; M. Pectu; Z. Pilecki; F. Piquemal; W. Potzel; W. Pytel; M. Raczynski; G. Rafflet; G. Ristaino; M. Robinson; R. Rogers; J. Roinisto; M. Romana; E. Rondio; B. Rossi; A. Rubbia; Z. Sadecki; C. Saenz; A. Saftoiu; J. Salmelainen; O. Sima; J. Slizowski; K. Slizowski; J. Sobczyk; N. Spooner; S. Stoica; J. Suhonen; R. Sulej; M. Szarska; T. Szeglowski; M. Temussi; J. Thompson; L. Thompson; W. H. Trzaska; M. Tippmann; A. Tonazzo; K. Urbanczyk; G. Vasseur; A. Williams; J. Winter; K. Wojutszewska; M. Wurm; A. Zalewska; M. Zampaolo; M. Zito

    2009-01-01

    The feasibility of a next generation neutrino observatory in Europe is being considered within the LAGUNA design study. To accommodate giant neutrino detectors and shield them from cosmic rays, a new very large underground infrastructure is required. Seven potential candidate sites in different parts of Europe and at several distances from CERN are being studied: Boulby (UK), Canfranc (Spain), Fréjus

  15. System fault diagnostics using fault tree analysis

    Microsoft Academic Search

    E. E. Hurdle; L. M. Bartlett; J. D. Andrews

    2008-01-01

    Over the last 50 years advances in technology have led to an increase in the complexity and sophistication of systems. More complex systems can be harder to maintain and the root cause of a fault more difficult to isolate. Down-time resulting from a system failure can be dangerous or expensive depending on the type of system. In aircraft systems the

  16. The Kunlun Fault

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Kunlun fault is one of the gigantic strike-slip faults that bound the north side of Tibet. Left-lateral motion along the 1,500-kilometer (932-mile) length of the Kunlun has occurred uniformly for the last 40,000 years at a rate of 1.1 centimeter per year, creating a cumulative offset of more than 400 meters. In this image, two splays of the fault are clearly seen crossing from east to west. The northern fault juxtaposes sedimentary rocks of the mountains against alluvial fans. Its trace is also marked by lines of vegetation, which appear red in the image. The southern, younger fault cuts through the alluvium. A dark linear area in the center of the image is wet ground where groundwater has ponded against the fault. Measurements from the image of displacements of young streams that cross the fault show 15 to 75 meters (16 to 82 yards) of left-lateral offset. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) acquired the visible light and near infrared scene on July 20, 2000. Image courtesy NASA/GSFC/MITI/ERSDAC/JAROS, and the U.S./Japan ASTER Science Team

  17. Fault zone structure of the Wildcat fault in Berkeley, California - Field survey and fault model test -

    NASA Astrophysics Data System (ADS)

    Ueta, K.; Onishi, C. T.; Karasaki, K.; Tanaka, S.; Hamada, T.; Sasaki, T.; Ito, H.; Tsukuda, K.; Ichikawa, K.; Goto, J.; Moriya, T.

    2010-12-01

    In order to develop hydrologic characterization technology of fault zones, it is desirable to clarify the relationship between the geologic structure and hydrologic properties of fault zones. To this end, we are performing surface-based geologic and trench investigations, geophysical surveys and borehole-based hydrologic investigations along the Wildcat fault in Berkeley,California to investigate the effect of fault zone structure on regional hydrology. The present paper outlines the fault zone structure of the Wildcat fault in Berkeley on the basis of results from trench excavation surveys. The approximately 20 - 25 km long Wildcat fault is located within the Berkeley Hills and extends northwest-southeast from Richmond to Oakland, subparallel to the Hayward fault. The Wildcat fault, which is a predominantly right-lateral strike-slip fault, steps right in a releasing bend at the Berkeley Hills region. A total of five trenches have been excavated across the fault to investigate the deformation structure of the fault zone in the bedrock. Along the Wildcat fault, multiple fault surfaces are branched, bent, paralleled, forming a complicated shear zone. The shear zone is ~ 300 m in width, and the fault surfaces may be classified under the following two groups: 1) Fault surfaces offsetting middle Miocene Claremont Chert on the east against late Miocene Orinda formation and/or San Pablo Group on the west. These NNW-SSE trending fault surfaces dip 50 - 60° to the southwest. Along the fault surfaces, fault gouge of up to 1 cm wide and foliated cataclasite of up to 60 cm wide can be observed. S-C fabrics of the fault gouge and foliated cataclasite show normal right-slip shear sense. 2) Fault surfaces forming a positive flower structure in Claremont Chert. These NW-SE trending fault surfaces are sub-vertical or steeply dipping. Along the fault surfaces, fault gouge of up to 3 cm wide and foliated cataclasite of up to 200 cm wide can be observed. S-C fabrics of the fault gouge and foliated cataclasite show reverse right-slip shear sense. We are performing sandbox experiments to investigate the three-dimensional kinematic evolution of fault systems caused by oblique-slip motion. The geometry of the Wildcat fault in the Berkeley Hills region shows a strong resemblance to our sandbox experimental model. Based on these geological and experimental data, we inferred that the complicated fault systems were dominantly developed within the fault step and the tectonic regime switched from transpression to transtension during the middle to late Miocene along the Wildcat fault.

  18. Active faults in West Africa

    Microsoft Academic Search

    D. J. Blundell

    1976-01-01

    Interpretation of offshore seismic surveys south of Accra, Ghana, has shown that Accra is situated near the intersection of the northeast-trending Akwapim fault zone and an east-trending coastal boundary fault. Seismic recordings from Kukurantumi Observatory and historical evidence of earthquakes indicate that both faults are currently active. This is also supported by geological evidence. The Akwapim fault is traced southwest

  19. Fault diagnosis of analog circuits

    Microsoft Academic Search

    J. W. Bandler; A. E. Salama

    1985-01-01

    In this paper, various fault location techniques in analog networks are described and compared. The emphasis is on the more recent developments in the subject. Four main approaches for fault location are addressed, examined, and illustrated using simple network examples. In particular, we consider the fault dictionary approach, the parameter identification approach, the fault verification approach, and the approximation approach.
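
    Of the four approaches listed, the fault dictionary approach is the simplest to sketch: each candidate fault is pre-simulated, its measurement signature is stored, and the observed response is matched to the nearest stored signature. The component names and node voltages below are invented.

      # Python sketch of the fault dictionary approach to analog fault location:
      # pre-simulated node-voltage signatures, nearest-neighbour diagnosis.
      dictionary = {                      # fault -> (V_node1, V_node2, V_node3)
          "nominal":  (5.00, 2.50, 1.20),
          "R1 open":  (5.00, 4.80, 2.30),
          "R2 short": (5.00, 0.10, 0.05),
          "C1 leaky": (4.70, 2.10, 0.90),
      }

      def diagnose(measured):
          def dist2(signature):
              return sum((m - s) ** 2 for m, s in zip(measured, signature))
          return min(dictionary, key=lambda fault: dist2(dictionary[fault]))

      print(diagnose((4.95, 4.75, 2.40)))  # -> "R1 open"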

  20. A “mesh” of crossing faults: Fault networks of southern California

    NASA Astrophysics Data System (ADS)

    Janecke, S. U.

    2009-12-01

    Detailed geologic mapping of active fault systems in the western Salton Trough and northern Peninsular Ranges of southern California makes it possible to expand the inventory of mapped and known faults by compiling and updating existing geologic maps, and analyzing high resolution imagery, LIDAR, InSAR, relocated hypocenters and other geophysical datasets. A fault map is being compiled on Google Earth and will ultimately discriminate between a range of different fault expressions: from well-mapped faults to subtle lineaments and geomorphic anomalies. The fault map shows deformation patterns in both crystalline and basinal deposits and reveals a complex fault mesh with many curious and unexpected relationships. Key findings are: 1) Many fault systems have mutually interpenetrating geometries, are grossly coeval, and allow faults to cross one another. A typical relationship reveals a dextral fault zone that appears to be continuous at the regional scale. In detail, however, there are no continuous NW-striking dextral fault traces and instead the master dextral fault is offset in a left-lateral sense by numerous crossing faults. Left-lateral faults also show small offsets where they interact with right-lateral faults. Both fault sets show evidence of Quaternary activity. Examples occur along the Clark, Coyote Creek, Earthquake Valley and Torres Martinez fault zones. 2) Fault zones cross in other ways. There are locations where active faults continue across or beneath significant structural barriers. Major fault zones like the Clark fault of the San Jacinto fault system appear to end at NE-striking sinistral fault zones (like the Extra and Pumpkin faults) that clearly cross from the SW to the NE side of the projection of the dextral traces. Despite these blocking structures, there is good evidence for continuation of the dextral faults on the opposite sides of the crossing fault array. In some instances there is clear evidence (in deep microseismic alignments of hypocenters) that the master dextral fault zones pass beneath shallower crossing fault arrays above them and this mechanism may transfer strain through the blocking zones. 3) The curvature of strands of the Coyote Creek fault and the Elsinore fault is similar along their SE 60 km. The scale, locations and concavity of bends are so similar that their shape appears to be coordinated. The matching contractional and extensional bends suggest that originally straighter dextral fault zones may be deforming in response to coeval sinistral deformation between, beneath, and around them. 4) Deformation is strongly domainal with one style or geometry of structure dominating in one area then another in an adjacent area. Boundaries may be abrupt. 5) There are drastic lateral changes in the width of damage zones adjacent to master faults. Outlines of the deformation related to some dextral fault zones resemble a snake that has ingested a squirming cat or soccer ball. 6) A mesh of interconnected faults seems to transfer slip back and forth between structures. 7) Scarps are not necessarily more abundant on the long master faults than on connector or crossing faults. Much remains to be learned upon completion of the fault map.

  1. Faults and Folds Animation

    NSDL National Science Digital Library

    2002-01-01

    This animation explores the forces and processes that deform rocks by creating folds, faults, and mountain ranges. You will learn how landmasses move, see the resulting deformation, and learn how this deformation relates to plate tectonics.

  2. Hayward Fault, California Interferogram

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This image of California's Hayward fault is an interferogram created using a pair of images taken by Synthetic Aperture Radar (SAR) combined to measure changes in the surface that may have occurred between the time the two images were taken.

    The images were collected by the European Space Agency's Remote Sensing satellites ERS-1 and ERS-2 in June 1992 and September 1997 over the central San Francisco Bay in California.

    The radar image data are shown as a gray-scale image, with the interferometric measurements that show the changes rendered in color. Only the urbanized area could be mapped with these data. The color changes from orange tones to blue tones across the Hayward fault (marked by a thin red line) show about 2-3 centimeters (0.8-1.1 inches) of gradual displacement or movement of the southwest side of the fault. The block west of the fault moved horizontally toward the northwest during the 63 months between the acquisition of the two SAR images. This fault movement is called seismic creep because the fault moved slowly without generating an earthquake.

    Scientists are using the SAR interferometry along with other data collected on the ground to monitor this fault motion in an attempt to estimate the probability of earthquake on the Hayward fault, which last had a major earthquake of magnitude 7 in 1868. This analysis indicates that the northern part of the Hayward fault is creeping all the way from the surface to a depth of 12 kilometers (7.5 miles). This suggests that the potential for a large earthquake on the northern Hayward fault might be less than previously thought. The blue area to the west (lower left) of the fault near the center of the image seemed to move upward relative to the yellow and orange areas nearby by about 2 centimeters (0.8 inches). The cause of this apparent motion is not yet confirmed, but the rise of groundwater levels during the time between the images may have caused the reversal of a small portion of the subsidence that this area suffered in the past.

    This research is the result of collaboration between the University of California's Berkeley and Davis campuses, the Lawrence Berkeley National Laboratory, and NASA's Jet Propulsion Laboratory in Pasadena, Calif. and is reported in the August 18, 2000, issue of Science magazine.

  3. Fault reactivation control on normal fault growth: an experimental study

    NASA Astrophysics Data System (ADS)

    Bellahsen, Nicolas; Daniel, Jean Marc

    2005-04-01

    Field studies frequently emphasize how fault reactivation is involved in the deformation of the upper crust. However, this phenomenon is generally neglected (except in inversion models) in analogue and numerical models performed to study fault network growth. Using sand/silicon analogue models, we show how pre-existing discontinuities can control the geometry and evolution of a younger fault network. The models show that the reactivation of pre-existing discontinuities and their orientation control: (i) the evolution of the main fault orientation distribution through time, (ii) the geometry of relay fault zones, (iii) the geometry of small scale faulting, and (iv) the geometry and location of fault-controlled basins and depocenters. These results are in good agreement with natural fault networks observed in both the Gulf of Suez and Lake Tanganyika. They demonstrate that heterogeneities such as pre-existing faults should be included in models designed to understand the behavior and the tectonic evolution of sedimentary basins.

  4. Practical Byzantine Fault Tolerance

    Microsoft Academic Search

    Miguel Castro

    2001-01-01

    Our growing reliance on online services accessible on the Internet demands highly-available systems that provide correct service without interruptions. Byzantine faults such as software bugs, operator mistakes, and malicious attacks are the major cause of service interruptions. This thesis describes a new replication algorithm, BFT, that can be used to build highly-available systems that tolerate Byzantine faults. It shows, for the first time, how

  5. Cable-fault locator

    NASA Technical Reports Server (NTRS)

    Cason, R. L.; Mcstay, J. J.; Heymann, A. P., Sr.

    1979-01-01

    Inexpensive system automatically indicates location of short-circuited section of power cable. Monitor does not require that cable be disconnected from its power source or that test signals be applied. Instead, ground-current sensors are installed in manholes or at other selected locations along cable run. When fault occurs, sensors transmit information about fault location to control center. Repair crew can be sent to location and cable can be returned to service with minimum of downtime.

  6. Validated Fault Tolerant Architectures for Space Station

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.

    1990-01-01

    Viewgraphs on validated fault tolerant architectures for space station are presented. Topics covered include: fault tolerance approach; advanced information processing system (AIPS); and fault tolerant parallel processor (FTPP).

  7. Pen Branch Fault Program

    SciTech Connect

    Price, V.; Stieve, A.L.; Aadland, R.

    1990-09-28

    Evidence from subsurface mapping and seismic reflection surveys at Savannah River Site (SRS) suggests the presence of a fault which displaces Cretaceous through Tertiary (90-35 million years ago) sediments. This feature has been described and named the Pen Branch fault (PBF) in a recent Savannah River Laboratory (SRL) paper (DP-MS-88-219). Because the fault is located near operating nuclear facilities, public perception and federal regulations require a thorough investigation of the fault to determine whether any seismic hazard exists. A phased program with various elements has been established to investigate the PBF to address the Nuclear Regulatory Commission regulatory guidelines represented in 10 CFR 100 Appendix A. The objective of the PBF program is to fully characterize the nature of the PBF (ESS-SRL-89-395). This report briefly presents current understanding of the Pen Branch fault based on shallow drilling activities completed in the fall of 1989 (PBF well series) and subsequent core analyses (SRL-ESS-90-145). The results are preliminary and ongoing; however, investigations indicate that the fault is not capable. In conjunction with the shallow drilling, other activities are planned or in progress. 7 refs., 8 figs., 1 tab.

  8. Differential Fault Analysis of the Advanced Encryption Standard using a Single Fault

    E-print Network

    Differential Fault Analysis of the Advanced Encryption Standard using a Single Fault. Michael... ...faults, this can be reduced to two key hypotheses. Keywords: Differential Fault Analysis, Fault Attack. ...referred to as Differential Fault Analysis (DFA) [4]. With the reported work on inducing faults, such as optical fault

  9. Response of shoal grass, Halodule wrightii, to extreme winter conditions in the Lower Laguna Madre, Texas

    USGS Publications Warehouse

    Hicks, D.W.; Onuf, C.P.; Tunnell, J.W.

    1998-01-01

    Effects of a severe freeze on the shoal grass, Halodule wrightii, were documented through analysis of temporal and spatial trends in below-ground biomass. The coincidence of the second lowest temperature (-10.6°C) in 107 years of record, 56 consecutive hours below freezing, high winds and extremely low water levels exposed the Laguna Madre, TX, to the most severe cold stress in over a century. H. wrightii tolerated this extreme freeze event. Annual pre- and post-freeze surveys indicated that below-ground biomass estimated from volume was unaffected by the freeze event. Nor was there any post-freeze change in biomass among intertidal sites directly exposed to freezing air temperatures relative to subtidal sites which remained submerged during the freezing period.

  10. Late Pleistocene-early Holocene karst features, Laguna Madre, south Texas: A record of climate change

    SciTech Connect

    Prouty, J.S. [Texas A& M Univ., Corpus Christi, TX (United States)

    1996-09-01

    A Pleistocene coquina bordering Laguna Madre, south Texas, contains well-developed late Pleistocene-early Holocene karst features (solution pipes and caliche crusts) unknown elsewhere from coastal Texas. The coquina accumulated in a localized zone of converging longshore Gulf currents along a Gulf beach. The crusts yield 14C dates of 16,660 to 7630 B.P., with dates of individual crust horizons becoming younger upwards. The karst features provide evidence of regional late Pleistocene-early Holocene climate changes. Following the latest Wisconsinan lowstand 18,000 B.P., the regional climate was more humid and promoted karst weathering. Partial dissolution and reprecipitation of the coquina formed initial caliche crust horizons; the crust later thickened through accretion of additional carbonate laminae. With the commencement of the Holocene approximately 11,000 B.P., the regional climate became more arid. This inhibited karstification of the coquina, and caliche crust formation finally ceased about 7000 B.P.

  11. Water quality mapping of Laguna de Bay and its watershed, Philippines

    NASA Astrophysics Data System (ADS)

    Saito, S.; Nakano, T.; Shin, K.; Maruyama, S.; Miyakawa, C.; Yaota, K.; Kada, R.

    2011-12-01

    Laguna de Bay (or Laguna Lake) is the largest lake in the Philippines, with a surface area of 900 km2 and a watershed area of 2920 km2 (Santos-Borja, 2005). It is located on the southwest part of Luzon Island and its watershed contains 5 provinces, 49 municipalities and 12 cities, including parts of Metropolitan Manila. The water quality in Laguna de Bay has significantly deteriorated due to pollution from soil erosion, effluents from chemical industries, and household discharges. In this study, we performed multiple element analysis of water samples in the lake and its watersheds for chemical mapping, which allows us to evaluate the regional distribution of elements including toxic heavy metals such as Cd, Pb and As. We collected water samples from 24 locations in Laguna de Bay and 160 locations from rivers in the watersheds. The river sampling sites are mainly downstream reaches around the lake, covering urbanized to rural areas. We also collected well water samples from 17 locations, spring water samples from 10 locations, and tap water samples from 21 locations in order to compare their data with the river and lake samples and to assess the quality of household-use waters. The samples were collected in the dry season of the study area (March 13-17 and May 2-9, 2011). The analysis was performed at the Research Institute for Humanity and Nature (RIHN), Japan. The concentrations of the major components (Cl, NO3, SO4, Ca, Mg, Na, and K) dissolved in the samples were determined with an ion chromatograph (Dionex Corporation ICS-3000). We also analyzed major and trace elements (Li, B, Na, Mg, Al, Si, P, K, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Ga, Ge, As, Se, Rb, Sr, Y, Zr, Mo, Ag, Cd, Sn, Sb, Cs, Ba, La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu, W, Pb and U) with inductively coupled plasma-mass spectrometry (ICP-MS, Agilent Technologies 7500cx). The element concentrations of the rivers are characterized by remarkable regional variations. For example, heavy metals such as Ni, Cd and Pb are markedly high in the western region as compared to the eastern region, implying that the chemical variation reflects the urbanization of the western region. On the other hand, As content is relatively high in the south of the lake and in some inflowing rivers in the area. Higher concentrations of As are also observed in the spring water samples in the area. Therefore, the source of As in the area is probably natural rather than anthropogenic. Although river water samples in the western watersheds have high concentrations of heavy metals, the lake water samples in the western area of the lake are not remarkably high in heavy metals. This inconsistency implies that the heavy metals delivered to the western lake by metal-enriched rivers have precipitated on the bottom of the lake. The polluted sediments may contaminate benthos, increasing the risk of food contamination through bioaccumulation in the ecosystem.

  12. Fault Roughness Records Strength

    NASA Astrophysics Data System (ADS)

    Brodsky, E. E.; Candela, T.; Kirkpatrick, J. D.

    2014-12-01

    Fault roughness is commonly ~0.1-1% at the outcrop exposure scale. More mature faults are smoother than less mature ones, but the overall range of roughness is surprisingly limited which suggests dynamic control. In addition, the power spectra of many exposed fault surfaces follow a single power law over scales from millimeters to 10's of meters. This is another surprising observation as distinct structures such as slickenlines and mullions are clearly visible on the same surfaces at well-defined scales. We can reconcile both observations by suggesting that the roughness of fault surfaces is controlled by the maximum strain that can be supported elastically in the wallrock. If the fault surface topography requires more than 0.1-1% strain, it fails. Invoking wallrock strength explains two additional observations on the Corona Heights fault for which we have extensive roughness data. Firstly, the surface is isotropic below a scale of 30 microns and has grooves at larger scales. Samples from at least three other faults (Dixie Valley, Mount St. Helens and San Andreas) also are isotropic at scales below 10's of microns. If grooves can only persist when the walls of the grooves have a sufficiently low slope to maintain the shape, this scale of isotropy can be predicted based on the measured slip perpendicular roughness data. The observed 30 micron scale at Corona Heights is consistent with an elastic strain of 0.01 estimated from the observed slip perpendicular roughness with a Hurst exponent of 0.8. The second observation at Corona Heights is that slickenlines are not deflected around meter-scale mullions. Yielding of these mullions at centimeter to meter scale is predicted from the slip parallel roughness as measured here. The success of the strain criterion for Corona Heights supports it as the appropriate control on fault roughness. Micromechanically, the criterion implies that failure of the fault surface is a continual process during slip. Macroscopically, the fundamental nature of the control means that 0.1 to 1% roughness should be ubiquitous on faults and can generally be used for simulating ground motion. An important caveat is that the scale-dependence of strength may result in a difference in the yield criterion at large-scales. The commonly observed values of the Hurst exponent below 1 may capture this scale-dependence.
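
    One way to make the scaling argument in this abstract explicit (this is my hedged reading of it, not an equation from the authors): for a self-affine surface with Hurst exponent H < 1, roughness amplitude grows with wavelength as a(\lambda) \propto \lambda^{H}, so the strain the wall rock must accommodate scales as

      \varepsilon(\lambda) \sim \frac{a(\lambda)}{\lambda} \propto \lambda^{H-1},

    which increases toward small scales for H < 1. Setting \varepsilon(\lambda_c) = \varepsilon_y with a yield strain \varepsilon_y of roughly 10^{-3} to 10^{-2} defines a critical wavelength \lambda_c below which topography cannot be supported elastically; with H \approx 0.8 and \varepsilon_y \approx 0.01 this is consistent with the ~30 micron isotropy scale reported for Corona Heights.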

  13. Mega displacement waves in glacial lakes: evidence from Laguna Safuna Alta, Peru

    NASA Astrophysics Data System (ADS)

    Reynolds, J. M.; Heald, A. P.; Zapata, M.

    2003-04-01

    An anomalously large displacement wave and overtopping event have been investigated at Laguna Safuna Alta, Cordillera Blanca, Peru. On 22nd April 2002, 10 million m^3 or more of rock fell from the western valley slope into the southern end of the lake and onto the lower 100 m of the glacier. Evidence from the landslide scar indicates that the mechanism of failure was principally flexural toppling of quartzites, mudstones and sandstones with beds of anthracite. Bathymetric surveys taken before and after the landslide show that about 6.4 million m^3 of material entered the lake during the event. The resulting displacement wave was 80-100 m high, overtopping the end moraine, which is 80 m at its lowest point. Oscillating rebound waves had amplitudes up to around 80 m. The initial displacement wave and the largest rebound waves caused erosion of the inner and outer flanks of the moraine, damaged lake security structures, and killed cattle that had been grazing in the area; but the moraine dam remained substantially intact and the resulting flood was largely contained within a lower lake, Laguna Safuna Baja. Active backscarps and tension cracks in the slope adjacent to the rockfall indicate that a further 5 million m^3 of rock may fail. Modelling the steady state stability of the now weakened moraine dam provides factors of safety below unity against a large-scale failure of the inner slope of the moraine. The moraine dam cannot be expected to resist a second large displacement wave and mitigation strategies are therefore being developed. The height of the wave produced during this event was an order of magnitude greater than values commonly reported and designed for in glacial lake remediation works.

  14. Fault architecture, fault rocks and fault rock properties in carbonate rocks

    NASA Astrophysics Data System (ADS)

    Bauer, Helene; Decker, Kurt

    2010-05-01

    Fault architecture, fault rocks and fault rock properties in carbonate rocks The current study addresses a comparative analysis of fault zones in limestone and dolomite rocks comparing the architecture of fault core and damage zones, fault rocks, and the hydrodynamic properties of faults exposed in the Upper Triassic Wetterstein Fm. of the Hochschwab Massif (Austria). All analysed faults are sinistral strike-slip faults, which formed at shallow crustal depth during the process of eastward lateral extrusion of the Eastern Alps in the Oligocene and Lower Miocene Fault zones in limestone tend to be relatively narrow zones with distinct fault core and damage zones. Fault cores, which include the principle slip surface of the fault, are characterized by cataclastic fault rock associated with slickensides separating strands of catalasite from surrounding host rock or occurring between different types of cataclasite. Cataclasites differ in terms of fragment size, matrix content and the angularity of fragments,. Cataclasite fabrics indicate progressive cataclasis and substantial displacement across the fault rock. Fault core heterogeneity tends to decrease within more evolved (higher displacement) faults. In all fault cores cataclasites are localized within strands, which connect to geometrically complex anastomosing volumes of fault rock. The 3D geometry of such fault cores is difficult to resolve on the outcrop scale. Beside cataclastic flow pressure solution, overprinting cataclastic fabrics, could be documented within fault zones. Damage zones in limestone fault zones are characterized by intensively fractured (jointed) host rock and dilatation breccias, indicating dilatation processes and peripheral wall rock weakening accompanying the growth of the fault zone. Dilatation breccias with high volumes of carbonate cement indicate these processes are related to high fluid pressure and the percolation of large volumes of fluid. Different parts of the damage zones were differentiated on the base of variable fracture densities. Fracture densities (P32 in m² joint surfaces per m³ rock) generally vary along all investigated faults. They are especially high in more evolved (higher displacement) fault zones where they are associated with large-scale Riedel sehars and in parts of the damage zones, that are next to the fault cores. The assessment of the abundance of small-scale fractures uses fracture facies as an empirical classification providing semi-quantitative estimates of fracture density and abundance. Different units were assigned to fracture facies 1 to 4, with fracture facies 4 indicating highest fracture density. Fault zones in dolomite tend to have several fault cores localized within wider zones of fractured wall rock (damage zones), even at low strain. Compared to fault zones with similar displacement in limestone, damage zones in dolomite tend to be wider and have higher fracture densities. Dilatation breccias are more abundant. A clear separation of fault core and damage zone is more difficult. Damage zones observed at the lateral (mode III) tips of the analysed strike-slip faults show that hydraulic fracturing and fluid flow through the propagating fault are of major importance for its evolution. A typical transition from the wall rock ahead of the propagating fault to the core of the slipped fault includes: densely jointed wall rock, wall rock with abundant cement-filled tension gashes, dilatation breccia and cataclasite reworking both dilatation breccia and wall rock. 
The detailed documentation of the different fault zone units is supplemented by porosity measurements in order to assess the hydrogeological properties of the fault zones. High-permeability units are located first of all in the damage zones, which are characterized by high fracture densities. Porosity measurements on fault rocks showed the highest porosity (up to 6%) for fractured wall rocks (fracture facies 4) and dilatation breccias (porosity of undeformed wall rock: 1.5% average, 2% maximum). Thin sections prove that most of the porosity is carried by uncemented f

  15. Fault displacement hazard for strike-slip faults

    USGS Publications Warehouse

    Petersen, M.D.; Dawson, T.E.; Chen, R.; Cao, T.; Wills, C.J.; Schwartz, D.P.; Frankel, A.D.

    2011-01-01

    In this paper we present a methodology, data, and regression equations for calculating the fault rupture hazard at sites near steeply dipping, strike-slip faults. We collected and digitized on-fault and off-fault displacement data for 9 global strike-slip earthquakes ranging from moment magnitude M 6.5 to M 7.6 and supplemented these with displacements from 13 global earthquakes compiled by Wesnousky (2008), who considers events up to M 7.9. Displacements on the primary fault fall off at the rupture ends and are often measured in meters, while displacements on secondary (off-fault) or distributed faults may measure a few centimeters up to more than a meter and decay with distance from the rupture. Probability of earthquake rupture is less than 15% for 200 m × 200 m cells and is less than 2% for 25 m × 25 m cells at distances greater than 200 m from the primary-fault rupture. Therefore, the hazard for off-fault ruptures is much lower than the hazard near the fault. Our data indicate that rupture displacements up to 35 cm can be triggered on adjacent faults at distances out to 10 km or more from the primary-fault rupture. An example calculation shows that, for an active fault which has repeated large earthquakes every few hundred years, fault rupture hazard analysis should be an important consideration in the design of structures or lifelines that are located near the principal fault, within about 150 m of well-mapped active faults with a simple trace and within 300 m of faults with poorly defined or complex traces.
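    To make the calculation structure concrete, here is a minimal sketch of how a probabilistic fault-displacement hazard term might be assembled. The decay functions, coefficients, and the helper names p_rupture_in_cell and p_displacement_exceeds are illustrative assumptions, not the regression equations published by Petersen et al. (2011); only the 15%/2% cell-probability ceilings and the 35 cm triggered-slip figure are taken from the abstract above.

    import math

    def p_rupture_in_cell(distance_m, cell_size_m):
        """Probability that distributed (off-fault) rupture reaches a map cell,
        decaying with distance from the primary rupture (placeholder model)."""
        base = 0.15 if cell_size_m >= 200 else 0.02   # ceilings quoted in the abstract
        return base * math.exp(-distance_m / 1000.0)  # assumed 1 km e-folding decay

    def p_displacement_exceeds(d_cm, distance_m):
        """Probability that displacement exceeds d_cm given rupture in the cell
        (placeholder exponential decay with distance)."""
        median_cm = 35.0 * math.exp(-distance_m / 3000.0)  # 35 cm triggered slip cited above
        return math.exp(-d_cm / max(median_cm, 1e-3))

    def annual_hazard(rate_per_yr, distance_m, cell_size_m, d_cm):
        """Annual rate of exceeding displacement d_cm at a site near the fault."""
        return rate_per_yr * p_rupture_in_cell(distance_m, cell_size_m) * p_displacement_exceeds(d_cm, distance_m)

    # Example: a lifeline 150 m from a well-mapped fault, large earthquakes every ~300 yr
    print(annual_hazard(rate_per_yr=1 / 300, distance_m=150, cell_size_m=25, d_cm=10))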

  16. The detection of high impedance faults using random fault behavior 

    E-print Network

    Carswell, Patrick Wayne

    1988-01-01

    prevent it from detecting arcing faults under certain fault scenarios. Past research into the behavior of arcing high impedance faults has demonstrated them to be very random in nature. That is, the actual bursts occur in random intervals of time... and with random intensity. The new algorithm presented attempts to utilize this random behavior as well as time to discriminate the presence of high impedance arcing faults from normal system operations which may also generate a high frequency current signal...

  17. Fault tree models for fault tolerant hypercube multiprocessors

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Tuazon, Jezus O.

    1991-01-01

    Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.

  18. Stresses and Faulting

    NSDL National Science Digital Library

    Linda Reinen

    This module is designed for students in an introductory structural geology course. While key concepts are described here, it is assumed that the students will have access to a good textbook to augment the information presented here. Learning goals: (1) Understand the role of gravity and rock properties in producing stresses in the shallow Earth. (2) Graphically represent stress states using Mohr diagrams. (3) Determine failure criteria from the results of laboratory experiments. (4) Explore the interaction of gravity-induced and tectonic stresses on fault formation. (5) Apply models of fault formation to predict fault behavior in two natural settings: San Onofre Beach in southern California and Canyonland National Park in Utah. The module is implemented entirely using Microsoft Excel. This program was selected due to its widespread availability and relative ease-of-use. It is assumed that students are familiar with using equations and graphing tools in Excel.
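    As a companion to the Mohr-diagram and failure-criterion goals above, here is a minimal sketch in Python (the module itself uses Excel) of resolving a two-dimensional stress state onto a plane and testing a Coulomb failure criterion; the numerical values and function names are illustrative assumptions, not values from the module.

    import math

    def stresses_on_plane(sigma1, sigma3, theta_deg):
        """Normal and shear stress on a plane whose normal lies theta degrees from sigma1."""
        theta = math.radians(theta_deg)
        sn = 0.5 * (sigma1 + sigma3) + 0.5 * (sigma1 - sigma3) * math.cos(2 * theta)
        tau = 0.5 * (sigma1 - sigma3) * math.sin(2 * theta)
        return sn, tau

    def coulomb_fails(sigma_n, tau, cohesion, friction_coeff):
        """True if the shear stress exceeds the Coulomb failure envelope."""
        return abs(tau) > cohesion + friction_coeff * sigma_n

    # Example: assumed lithostatic sigma1 at ~2 km depth (MPa), tectonic sigma3, plane at 30 deg
    sn, tau = stresses_on_plane(sigma1=52.0, sigma3=20.0, theta_deg=30.0)
    print(sn, tau, coulomb_fails(sn, tau, cohesion=10.0, friction_coeff=0.6))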

  19. Stacking fault energy of cryogenic austenitic steels

    Microsoft Academic Search

    Dai Qi-Xun; Luo Xin-Min

    2002-01-01

    Stacking fault energy and stacking fault nucleation energy are defined in terms of the physical nature of stacking faults, together with the measuring basis for stacking fault energy. Large quantities of experimental results have been processed with the aid of a computer, and an expression for calculating the stacking fault energy has been obtained as γ300SF (mJ m⁻²) =

  20. Review of fault diagnosis in control systems

    Microsoft Academic Search

    Aishe Shui; Weimin Chen; Peng Zhang; Shunren Hu; Xiaowei Huang

    2009-01-01

    In this paper, we review the major achievements in the research on fault diagnosis in control systems (FDCS) from three aspects: fault detection, fault isolation, and hybrid intelligent fault diagnosis. Fault detection and isolation (FDI) are two important stages in the diagnosis process, while hybrid intelligent fault diagnosis is the hot issue in the current research field. The particular

  1. 15,000-yr pollen record of vegetation change in the high altitude tropical Andes at Laguna Verde Alta, Venezuela

    Microsoft Academic Search

    Valentí Rull; Mark B. Abbott; Pratigya J. Polissar; Alexander P. Wolfe; Maximiliano Bezada; Raymond S. Bradley

    2005-01-01

    Pollen analysis of sediments from a high-altitude (4215 m), Neotropical (9°N) Andean lake was conducted in order to reconstruct local and regional vegetation dynamics since deglaciation. Although deglaciation commenced ~15,500 cal yr B.P., the area around Laguna Verde Alta (LVA) remained a periglacial desert, practically unvegetated, until about 11,000 cal yr B.P. At this time, a lycopod assemblage bearing

  2. Human Impact Since Medieval Times and Recent Ecological Restorationin a Mediterranean Lake: The Laguna Zoñar, Southern Spain

    Microsoft Academic Search

    Blas L. Valero-Garcés; Penélope González-Sampériz; Ana Navas; Javier Machín; Pilar Mata; Antonio Delgado-Huertas; Roberto Bao; Ana Moreno; José S. Carrión; Antje Schwalb; Antonio González-Barrios

    2006-01-01

    The multidisciplinary study of sediment cores from Laguna Zoñar (37°29′00″ N, 4°41′22″ W, 300 m a.s.l., Andalucía, Spain) provides a detailed record of environmental, climatic and anthropogenic changes in a Mediterranean watershed since Medieval times, and an opportunity to evaluate the lake restoration policies during the last decades. The paleohydrological reconstructions show fluctuating lake levels since the end of the Medieval

  3. Patterns of fish and macro-invertebrate distribution in the upper Laguna Madre: bag seines 1985-2004 

    E-print Network

    Larimer, Amy Beth

    2009-05-15

    composition. Given the hypersaline conditions that generally exist in the Laguna Madre, I expected high-salinity-tolerant species (such as Cyprinodon variegatus) to be relatively more abundant during times of higher salinity or at sites further from... minnow, Cyprinodon variegatus, with 142,325 individuals (41% of the total), followed by Menidia sp. (most likely M. peninsulae) (14.41% of the total) and pinfish, Lagodon rhomboides (9.54% of the total). At the level of family, the drums...

  4. Palaeoenvironmental changes in southern Patagonia during the last millennium recorded in lake sediments from Laguna Azul (Argentina)

    Microsoft Academic Search

    Christoph Mayr; Michael Fey; Torsten Haberzettl; Stephanie Janssen; Andreas Lücke; Nora I. Maidana; Christian Ohlendorf; Frank Schäbitz; Gerhard H. Schleser; Ulrich Struck; Michael Wille; Bernd Zolitschka

    2005-01-01

    Marked environmental changes in the southern Patagonian steppe during the last 1100 years are detected by a multi-proxy study of radiocarbon-dated sediment cores from the crater lake Laguna Azul (52°05′S, 69°35′W). A prominent shift in carbon isotope records occurred between AD 1670 and AD 1890 induced by a change to cooler climate conditions with a concurrent lake level rise. A

  5. A neotectonic-geomorphologic investigation of the prehistoric rock avalanche damming Laguna de Metztitlán (Hidalgo State, east-central Mexico)

    Microsoft Academic Search

    Max Suter

    2004-01-01

    Laguna de Metztitlán (Hidalgo State, east-central Mexico) is a natural lake dammed by an unbreached, large-scale rock avalanche (sturzstrom) deposit (area: 2.5 km²; volume: ~0.6 km³; up to ~400 m thick in the valley axis; horizontal runout distance: 2,600 m; vertical fall height: 860 m) that impounds the Metztitlán River. The natural outflow of the lake is by seepage, the

  6. Examine animations of fault motion

    NSDL National Science Digital Library

    TERC. Center for Earth and Space Science Education

    2003-01-01

    Developed for high school students, this Earth science resource provides animations of each of four different fault types: normal, reverse, thrust, and strike-slip faults. Each animation has its own set of movie control buttons, and arrows in each animation indicate the direction of force that causes that particular kind of fault. The introductory paragraph defines the terms fault plane, hanging wall, and footwall--features that are labeled at the end of the appropriate animations. Copyright 2005 Eisenhower National Clearinghouse

  7. Fault-Tolerant Flight Computer

    NASA Technical Reports Server (NTRS)

    Chau, Savio

    1996-01-01

    In design concept for adaptive, fault-tolerant flight computer, upon detection of fault in either processor, surviving processor assumes responsibility for both equipment systems. Possible because of cross-strapping between processors, memories, and input/output units. Concept also applicable to other computing systems required to tolerate faults and in which partial loss of processing speed or functionality acceptable price to pay for continued operation in event of faults.

  8. Fault-Scarp Degradation

    NSDL National Science Digital Library

    Pinter, Nicholas

    In this exercise, students investigate the evolution of Earth's surface over time, as governed by the balance between constructional (tectonic) processes and destructional (erosional) processes. Introductory materials explain the processes of degradation, including the concepts of weathering-limited versus transport-limited slopes, and diffusion modeling. Using the process of diffusion modeling, students will determine how a slope changes through four 100-year time steps, calculate gradient angles for a fault scarp, and compare parameters calculated for two fault scarps, attempting to determine the age of the scarp created by the older, unknown earthquake. Example problems, study questions, and a bibliography are provided.
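    The diffusion-modeling step described above can be summarized in a short numerical sketch. The diffusivity, scarp height, grid, and the helper name diffuse are illustrative assumptions rather than the exercise's own worksheet values; the code simply steps a scarp profile through four 100-year increments and reports the maximum scarp gradient after each step, as students would tabulate.

    import numpy as np

    kappa = 1.0e-2        # assumed diffusivity, m^2/yr (order of magnitude typical of scarp degradation)
    dx, dt = 0.5, 1.0     # grid spacing (m) and time step (yr); kappa*dt/dx**2 < 0.5 for stability
    x = np.arange(0.0, 40.0, dx)
    z = np.where(x < 20.0, 0.0, 2.0)      # initial 2 m high scarp at x = 20 m

    def diffuse(z, years):
        """Explicit finite-difference solution of dz/dt = kappa * d2z/dx2."""
        z = z.copy()
        for _ in range(int(years / dt)):
            z[1:-1] = z[1:-1] + kappa * dt / dx**2 * (z[2:] - 2.0 * z[1:-1] + z[:-2])
        return z

    profiles = [z]
    for _ in range(4):                    # four 100-year time steps, as in the exercise
        profiles.append(diffuse(profiles[-1], 100.0))

    # Maximum scarp gradient (degrees) after each step
    for i, p in enumerate(profiles):
        print(i * 100, "yr:", np.degrees(np.arctan(np.max(np.abs(np.gradient(p, dx))))))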

  9. Computer hardware fault administration

    DOEpatents

    Archer, Charles J. (Rochester, MN); Megerian, Mark G. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
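    A minimal sketch of the routing decision described above follows; the function choose_network and its data structures are hypothetical illustrations of the idea (send traffic over the second network when the first network's route would traverse the known-defective link), not the patented implementation.

    def choose_network(route_net1, defective_links):
        """route_net1: ordered list of node ids the message would traverse in network 1."""
        hops = set(zip(route_net1, route_net1[1:]))
        # A defective link blocks traffic in either direction
        blocked = any((a, b) in defective_links or (b, a) in defective_links for a, b in hops)
        return "network-2" if blocked else "network-1"

    print(choose_network([0, 1, 2, 3], defective_links={(1, 2)}))   # -> network-2
    print(choose_network([0, 4, 5, 3], defective_links={(1, 2)}))   # -> network-1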

  10. A 6000-year record of ecological and hydrological changes from Laguna de la Leche, north coastal Cuba

    NASA Astrophysics Data System (ADS)

    Peros, Matthew C.; Reinhardt, Eduard G.; Davis, Anthony M.

    2007-01-01

    Laguna de la Leche, north coastal Cuba, is a shallow (≤ 3 m), oligohaline (~2.0-4.5‰) coastal lake surrounded by mangroves and cattail stands. A 227-cm core was studied using loss-on-ignition, pollen, calcareous microfossils, and plant macrofossils. From ~6200 to ~4800 cal yr BP, the area was an oligohaline lake. The period from ~4800 to ~4200 cal yr BP saw higher water levels and a freshened system; these changes are indicated by an increase in the regional pollen rain, as well as by the presence of charophyte oogonia and an increase in freshwater gastropods (Hydrobiidae). By ~4000 cal yr BP, an open mesohaline lagoon had formed; an increase in salt-tolerant foraminifers suggests that water level increase was driven by relative sea level rise. The initiation of Laguna de la Leche correlates with a shift to wetter conditions as indicated in pollen records from the southeastern United States (e.g., Lake Tulane). This synchronicity suggests that sea level rise caused middle Holocene environmental change region-wide. Two other cores sampled from mangrove swamps in the vicinity of Laguna de la Leche indicate that a major expansion of mangroves was underway by ~1700 cal yr BP.

  11. Fault Injection Experiments Using FIAT

    Microsoft Academic Search

    James H. Barton; Edward W. Czeck; Zary Segall; Daniel P. Siewiorek

    1990-01-01

    The results of several experiments conducted using the fault-injection-based automated testing (FIAT) system are presented. FIAT is capable of emulating a variety of distributed system architectures, and it provides the capabilities to monitor system behavior and inject faults for the purpose of experimental characterization and validation of a system's dependability. The experiments consist of exhaustively injecting three separate fault types

  12. Faults' Context Matters Jaymie Strecker

    E-print Network

    Memon, Atif M.

    Faults' Context Matters. Jaymie Strecker, University of Maryland, College Park, MD, USA. When evaluating a testing technique, practitioners want to know which one will detect the faults that matter most to them. More often than not, they report how many faults in a carefully chosen "representative" sample

  13. Generalized Method of Fault Analysis

    Microsoft Academic Search

    V. Brandwajn; W. F. Tinney

    1985-01-01

    A generalized method is given for solving short-circuit faults of any conceivable complexity. The method efficiently combines the application of sparsity-oriented compensation techniques to sequence networks with the simulation of fault conditions in phase coordinates. All recent advances in features and modeling aspects of fault studies are incorporated in the method. Sparse vector techniques are extensively used to enhance speed

  14. Fault Analysis of Stream Ciphers

    Microsoft Academic Search

    Jonathan J. Hoch; Adi Shamir

    2004-01-01

    A fault attack is a powerful cryptanalytic tool which can be applied to many types of cryptosystems which are not vulnerable to direct attacks. The research literature contains many examples of fault attacks on public key cryptosystems and block ciphers, but surprisingly we could not find any systematic study of the applicability of fault attacks to stream ciphers. Our goal

  15. Dynamic evolution of a fault system through interactions between fault segments

    Microsoft Academic Search

    Ryosuke Ando; Taku Tada; Teruo Yamashita

    2004-01-01

    We simulate the dynamic evolution process of fault system geometry considering interactions between fault segments. We calculate rupture propagation using an elastodynamic boundary integral equation method (BIEM) in which the trajectory of a fault tip is dynamically self-chosen. We consider a system of two noncoplanar fault segments: a preexisting main fault segment (fault 1) and a subsidiary one (fault 2)

  16. An algorithm for faulted phase and feeder selection under high impedance fault conditions 

    E-print Network

    Benner, Carl Lee

    1988-01-01

    Front-matter excerpt (list of tables): comparisons of fault-generated phase activity on faulted and unfaulted phases during arcing fault tests.

  17. An empirical comparison of software fault tolerance and fault elimination

    NASA Technical Reports Server (NTRS)

    Shimeall, Timothy J.; Leveson, Nancy G.

    1991-01-01

    Reliability is an important concern in the development of software for modern systems. Some researchers have hypothesized that particular fault-handling approaches or techniques are so effective that other approaches or techniques are superfluous. The authors have performed a study that compares two major approaches to the improvement of software, software fault elimination and software fault tolerance, by examination of the fault detection obtained by five techniques: run-time assertions, multi-version voting, functional testing augmented by structural testing, code reading by stepwise abstraction, and static data-flow analysis. This study has focused on characterizing the sets of faults detected by the techniques and on characterizing the relationships between these sets of faults. The results of the study show that none of the techniques studied is necessarily redundant to any combination of the others. Further results reveal strengths and weaknesses in the fault detection by the techniques studied and suggest directions for future research.

  18. Fault Scarp Offsets and Fault Population Analysis on Dione

    NASA Astrophysics Data System (ADS)

    Tarlow, S.; Collins, G. C.

    2010-12-01

    Cassini images of Dione show several fault zones cutting through the moon’s icy surface. We have measured the displacement and length of 271 faults, and estimated the strain occurring in 6 different fault zones. These measurements allow us to quantify the total amount of surface strain on Dione as well as constrain what processes might have caused these faults to form. Though we do not have detailed topography across fault scarps on Dione, we can use their projected size on the camera plane to estimate their heights, assuming a reasonable surface slope. Starting with high resolution images of Dione obtained by the Cassini ISS, we marked points at the top and bottom of each fault scarp to measure the fault’s projected displacement and its orientation along strike. Line and sample information for the measurements were then processed through ISIS to derive latitude/longitude information and pixel dimensions. We then calculate the three dimensional orientation of a vector running from the bottom to the top of the fault scarp, assuming a 45 degree angle with respect to the surface, and project this vector onto the spacecraft camera plane. This projected vector gives us a correction factor to estimate the actual vertical displacement of the fault scarp. This process was repeated many times for each fault, to show variations of displacement along the length of the fault. To compare each fault to its neighbors and see how strain was accommodated across a population of faults, we divided the faults into fault zones, and created new coordinate systems oriented along the central axis of each fault zone. We could then quantify the amount of fault overlap and add the displacement of overlapping faults to estimate the amount of strain accommodated in each zone. Faults in the southern portion of Padua have a strain of 0.031 ± 0.0097, central Padua exhibits a strain of 0.032 ± 0.012, and faults in northern Padua have a strain of 0.025 ± 0.0080. The western faults of Eurotas have a strain of 0.031 ± 0.011, while the eastern faults have a strain of 0.037 ± 0.025. Lastly, Clusium has a strain of 0.10 ± 0.029. We also calculated the ratio of maximum fault displacement vs. the length of the faults, and we found this ratio to be 0.019 when drawing a trend line through all the faults that were analyzed. D/L measurements performed on two faults on Europa using stereo topography showed a value of 0.021 (Nimmo and Schenk 2006), the only other icy satellite where this ratio has been measured. In contrast, faults on Earth have a D/L ratio of about 0.1 and faults on Mars have a D/L ratio of about 0.01 (Schultz et al. 2006).
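    The projection correction and strain sum described above can be sketched as follows; the vector values, pixel scale, and helper names are illustrative assumptions, not the authors' ISIS workflow.

    import numpy as np

    def vertical_displacement(proj_length_px, pixel_scale_m, scarp_unit_vec, camera_look_dir):
        """Estimate vertical throw from the projected scarp length in the camera plane.

        proj_length_px  : measured top-to-bottom scarp length in pixels
        pixel_scale_m   : metres per pixel at the surface
        scarp_unit_vec  : unit vector from scarp bottom to top (assumed 45 deg to the surface)
        camera_look_dir : unit vector along the camera look direction
        """
        # Fraction of the scarp vector that survives projection onto the camera plane
        proj_factor = np.linalg.norm(scarp_unit_vec - np.dot(scarp_unit_vec, camera_look_dir) * camera_look_dir)
        true_length_m = proj_length_px * pixel_scale_m / proj_factor
        return true_length_m * np.sin(np.radians(45.0))   # vertical component of a 45-deg scarp

    def zone_strain(displacements_m, zone_length_m):
        """Crude zone strain: summed displacements of (overlapping) faults over zone length."""
        return sum(displacements_m) / zone_length_m

    # Example with hypothetical numbers: 12 px projected scarp at 80 m/px
    v = np.array([0.5, 0.5, 0.7071])       # scarp vector ~45 deg to the surface (z up)
    n = np.array([0.0, 0.7071, 0.7071])    # camera look direction
    print(vertical_displacement(12, 80.0, v / np.linalg.norm(v), n / np.linalg.norm(n)))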

  19. Fault Tolerant Quantum Filtering and Fault Detection for Quantum Systems

    E-print Network

    Qing Gao; Daoyi Dong; Ian R. Petersen

    2015-04-26

    This paper aims to determine the fault tolerant quantum filter and fault detection equation for a class of open quantum systems coupled to laser fields and subject to stochastic faults. In order to analyze open quantum systems where the system dynamics involve both classical and quantum random variables, a quantum-classical probability space model is developed. Using a reference probability approach, a fault tolerant quantum filter and a fault detection equation are simultaneously derived for this class of open quantum systems. An example of two-level open quantum systems subject to Poisson-type faults is presented to illustrate the proposed method. These results have the potential to lead to a new fault tolerant control theory for quantum systems.

  20. Row fault detection system

    SciTech Connect

    Archer, Charles Jens (Rochester, MN); Pinnow, Kurt Walter (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian Edward (Rochester, MN)

    2008-10-14

    An apparatus, program product and method checks for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.
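    A hedged sketch of the row-check idea follows; the diagnose_row helper and its data structures are illustrative, not the patented implementation. Each node exchanges messages with its left and right neighbours at once, and the pattern of failed exchanges points either at a faulty node (two adjacent failed links) or at a faulty connection (a single failed link).

    def diagnose_row(n_nodes, exchange_ok):
        """exchange_ok[(i, j)] is True if the i<->j communication succeeded."""
        suspects = []
        for i in range(n_nodes - 1):
            if not exchange_ok.get((i, i + 1), False):
                suspects.append((i, i + 1))          # failed link between i and i+1
        # A node shared by two failed links is the likely faulty node;
        # otherwise the failed link itself is reported.
        faulty_nodes = {i for (a, b) in suspects for (c, d) in suspects
                        if (a, b) != (c, d) for i in (a, b) if i in (c, d)}
        return faulty_nodes or set(suspects)

    # Example: node 3 is dead, so both of its links fail
    ok = {(i, i + 1): True for i in range(7)}
    ok[(2, 3)] = ok[(3, 4)] = False
    print(diagnose_row(8, ok))   # -> {3}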

  1. Preventive arc fault protection

    Microsoft Academic Search

    D. Brechtken

    2001-01-01

    An arc fault in switchgear is a failure with enormous effects and a high hazard potential to persons in its vicinity. Therefore the switchgear manufacturers intensively look for possibilities to minimize this hazard potential. The approaches used in industrial practice can be separated into two basic directions, active and passive protection. The active protection tries to exclude

  2. Tacting "To a Fault."

    ERIC Educational Resources Information Center

    Baer, Donald M.

    1991-01-01

    This paper argues that behavior analysis is not technological to a fault, but rather has a faulty technology by being incomplete. The paper examines reinforcers and punishers that result from the outcomes of either (1) striving for better experimental control, or (2) inventing theories to explain why current control is imperfect. (JDD)

  3. Optical Fault Induction Attacks

    Microsoft Academic Search

    Sergei P. Skorobogatov; Ross J. Anderson

    2002-01-01

    We describe a new class of attacks on secure microcontrollers and smartcards. Illumination of a target transistor causes it to conduct, thereby inducing a transient fault. Such attacks are practical; they do not even require expensive laser equipment. We have carried them out using a flashgun bought second-hand from a camera store for $30 and with an $8 laser pointer.

  4. Row fault detection system

    DOEpatents

    Archer, Charles Jens (Rochester, MN); Pinnow, Kurt Walter (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian Edward (Rochester, MN)

    2012-02-07

    An apparatus, program product and method check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.

  5. Row fault detection system

    DOEpatents

    Archer, Charles Jens (Rochester, MN); Pinnow, Kurt Walter (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian Edward (Rochester, MN)

    2010-02-23

    An apparatus and program product check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.

  6. Fault tolerant anonymous channel

    Microsoft Academic Search

    Wakaha Ogata; Kaoru Kurosawa; Kazue Sako; Kazunori Takatani

    1997-01-01

    Previous anonymous channels, called MIX nets, do not work if one center stops. This paper shows new anonymous channels which allow less than half of the centers to be faulty. A fault tolerant multivalued election scheme is obtained automatically. A very efficient ZKIP for the centers is also presented.

  7. Diatom diversity and paleoenvironmental changes in Laguna Potrok Aike, Patagonia: the ~ 50 kyr PASADO sediment record

    NASA Astrophysics Data System (ADS)

    Recasens, C.; Ariztegui, D.; Maidana, N. I.

    2012-12-01

    Laguna Potrok Aike is a maar lake located in the southernmost Argentinean Patagonia, in the province of Santa Cruz. Being one of the few permanent lakes in the area, it provides an exceptional and continuous sedimentary record. The sediment cores from Laguna Potrok Aike, obtained in the framework of the ICDP-sponsored project PASADO (Potrok Aike Maar Lake Sediment Archive Drilling Program), were sampled for diatom analysis in order to reconstruct a continuous history of hydrological and climatic changes since the Late Pleistocene. Diatoms are widely used to characterize and often quantify the impact of past environmental changes in aquatic systems. We use variations in diatom concentration and in their dominant assemblages, combined with other proxies, to track these changes. Diatom assemblages were analyzed on the composite core 5022-2CP with a multi-centennial time resolution. The total composite profile length of 106.09 mcd (meters composite depth) was reduced to 45.80 m cd-ec (event-corrected composite profile) of pelagic deposits once gaps, reworked sections, and tephra deposits were removed. This continuous deposit spans the last ca. 51.2 cal. ka BP. Previous diatomological analysis from the core catcher samples of core 5022-1D allowed us to determine the dominant diatom assemblages in this lake and select the sections where higher temporal resolution was needed. Over 200 species, varieties and forms were identified in the sediment record, including numerous endemic species and others which may be new to science. Among these, a new species has been described: Cymbella gravida sp. nov. Recasens and Maidana. The quantitative analysis of the sediment record reveals diatom abundances reaching 460 million valves per gram of dry sediment, with substantial fluctuations through time. Variations in the abundance and species distribution point toward lake level variations, changes in nutrient input or even periods of ice-cover in the lake. The top meters of the record reveal a shift in the phytoplankton composition, corresponding to the previously documented salinization of the water and the lake level drop, indicators of warming temperatures and lower moisture availability during the early and middle Holocene. The new results presented here on diatom diversity and distribution in the Glacial to Late Glacial sections of the record bring much needed information on the previously poorly known paleolimnology of this lake for that time period.

  8. The Maars of the Tuxtla Volcanic Field: the Example of 'laguna Pizatal'

    NASA Astrophysics Data System (ADS)

    Espindola, J.; Zamora-Camacho, A.; Hernandez-Cardona, A.; Alvarez del Castillo, E.; Godinez, M.

    2013-12-01

    Los Tuxtlas Volcanic Field (TVF), also known as the Los Tuxtlas massif, is a structure of volcanic rocks rising conspicuously in the south-central part of the coastal plains of eastern Mexico. The TVF seems related to the Upper Cretaceous magmatism of the NW part of the Gulf's margin (e.g. the San Carlos and Sierra de Tamaulipas alkaline complexes) rather than to the nearby Mexican Volcanic Belt. The volcanism in this field began in the late Miocene and has continued into historical times. The TVF is composed of 4 large volcanoes (San Martin Tuxtla, San Martin Pajapan, Santa Marta, Cerro El Vigia), at least 365 volcanic cones and 43 maars. In this poster we present the distribution of the maars, their sizes and depths. These maars range from a few hundred meters to almost 1 km in average diameter, and from a few meters to several tens of meters in depth; most of them are filled with lakes. As an example of the nature of these structures we present our results from the ongoing study of 'Laguna Pizatal or Pisatal' (18° 33'N, 95° 16.4'W, 428 masl), located some 3 km from the village of Reforma, on the western side of San Martin Tuxtla volcano. Laguna Pisatal is a maar some 500 meters in radius with a depth of about 40 meters below the surrounding ground level. It is covered by a lake 200 m² in extent fed by a spring discharging on its western side. We examined a succession of 15 layers on the margins of the maar; these layers are blast deposits of different sizes interbedded with surge deposits. Most of the contacts between layers are irregular, which suggests scouring during deposition of the upper beds. This in turn suggests that the layers were deposited in a rapid series of explosions, which mixed juvenile material with fragments of the preexisting bedrock. We were unable to determine the extent of these deposits, since the surrounding areas are nowadays sugar cane plantations and the lake has overspilled on several occasions.

  9. A 5000 Year Record of Andean South American Summer Monsoon Variability from Laguna de Ubaque, Colombia

    NASA Astrophysics Data System (ADS)

    Rudloff, O. M.; Bird, B. W.; Escobar, J.

    2014-12-01

    Our understanding of Northern Hemisphere South American summer monsoon (SASM) dynamics during the Holocene has been limited by the small number of terrestrial paleoclimate records from this region. In order to increase our knowledge of SASM variability and to better inform our predictions of its response to ongoing rapid climate change, we require high-resolution paleoclimate records from the Northern Hemisphere Andes. To this end, we present sub-decadally resolved sedimentological and geochemical data from Laguna de Ubaque that spans the last 5000 years. Located in the Eastern Cordillera of the Colombian Andes, Laguna de Ubaque (2070 m asl) is a small, east facing moraine-dammed lake in the upper part of the Rio Meta watershed near Bogotá containing finely laminated clastic sediments. Dry bulk density, %organic matter, %carbonate and magnetic susceptibility (MS) results from Ubaque suggest a period of intense precipitation between 3500 and 2000 years BP interrupted by a 300 yr dry interval centered at 2700 years BP. Following this event, generally drier conditions characterize the last 2000 years. Although of considerably lower amplitude than the middle Holocene pluvial events, variability in the sedimentological data supports climatic responses during the Medieval Climate Anomaly (MCA; 900 to 1200 CE) and Little Ice Age (LIA; 1450 to 1900 CE) that are consistent with other records of local Andean conditions. In particular, reduced MS during the MCA suggests a reduction in terrestrial material being washed into the lake as a result of generally drier conditions. The LIA on the other hand shows a two phase structure with increased MS between 1450 and 1600 CE, suggesting wetter conditions during the onset of the LIA, and reduced MS between 1600 and 1900 CE, suggesting a return to drier conditions during the latter part of the LIA. These LIA trends are similar to the Quelccaya accumulation record, possibly supporting an in-phase relationship between the South American Hemispheres. By comparing our precipitation proxies with other terrestrial records, as well as Pacific sea surface temperatures (SST) and global climate reconstructions, we will examine the relationship between Northern and Southern Hemisphere Andean climate responses to assess the validity of existing theories on the modes of climate change in the region.

  10. Fault-Related Sanctuaries

    NASA Astrophysics Data System (ADS)

    Piccardi, L.

    2001-12-01

    Beyond the study of historical surface faulting events, this work investigates the possibility, in specific cases, of identifying pre-historical events whose memory survives in myths and legends. The myths of many famous sacred places of the ancient world contain relevant telluric references: "sacred" earthquakes, openings to the Underworld and/or chthonic dragons. Given the strong correspondence with local geological evidence, these myths may be considered as describing natural phenomena. It has been possible in this way to shed light on the geologic origin of famous myths (Piccardi, 1999, 2000 and 2001). Interdisciplinary research reveals that the origin of several ancient sanctuaries may be linked in particular to peculiar geological phenomena observed on local active faults (such as ground shaking and coseismic surface ruptures, gas and flame emissions, and strong underground rumblings). In many of these sanctuaries the sacred area lies directly above the active fault. In a few cases, faulting has also affected the archaeological relics, cutting right through the main temple (e.g. Delphi, Cnidus, Hierapolis of Phrygia). As such, the arrangement of the cult site and the content of the related myths suggest that specific points along the trace of active faults have been noticed in the past and worshiped as special `sacred' places, most likely interpreted as Hades' Doors. The mythological stratification of most of these sanctuaries dates back to prehistory, and points to a common derivation from the cult of the Mother Goddess (the Lady of the Doors), which was widespread since at least 25,000 BC. The cult itself was later reconverted into various different divinities, while the `sacred doors' of the Great Goddess and/or the dragons (offspring of Mother Earth and generally regarded as Keepers of the Doors) persisted in more recent mythologies. Piccardi L., 1999: The "Footprints" of the Archangel: Evidence of Early-Medieval Surface Faulting at Monte Sant'Angelo (Gargano, Italy). European Union of Geophysics Congress, Strasbourg, March 1999. Piccardi L., 2000: Active faulting at Delphi (Greece): seismotectonic remarks and a hypothesis for the geological environment of a myth. Geology, 28, 651-654. Piccardi L., 2001: Seismotectonic Origin of the Monster of Loch Ness. Earth System Processes, Joint Meeting of G.S.A. and G.S.L., Edinburgh, June 2001.

  11. Impact of Water Resorts Development along Laguna de Bay on Groundwater Resources

    NASA Astrophysics Data System (ADS)

    Jago-on, K. A. B.; Reyes, Y. K.; Siringan, F. P.; Lloren, R. B.; Balangue, M. I. R. D.; Pena, M. A. Z.; Taniguchi, M.

    2014-12-01

    Rapid urbanization and land use changes in areas along Laguna de Bay, one of the largest freshwater lakes in Southeast Asia, have resulted in increased economic activities and demand for groundwater resources from households, commerce and industries. One significant activity that can affect groundwater is the development of the water resorts industry, which includes hot spring spas. This study aims to determine the impact of the proliferation of these water resorts in Calamba and Los Banos, urban areas located on the southern coast of the lake, on groundwater as a resource. Calamba, being the "Hot Spring Capital of the Philippines", presently has more than 300 resorts, while Los Banos has at least 38 resorts. Results from an initial survey of resorts show that the swimming pools are drained/changed on average 2-3 times a week, or even daily during peak periods of tourist arrivals. This indicates a large demand on the groundwater. Monitoring of actual groundwater extraction is a challenge, however, as most of these resorts operate without water use permits. The unrestrained exploitation of groundwater has resulted in the drying up of older wells and a decrease in hot spring water temperature. It is necessary to strengthen the implementation of laws and policies, and enhance partnerships among government, private sector groups, civil society and communities to promote groundwater sustainability.

  12. Managing Fault Management Development

    NASA Technical Reports Server (NTRS)

    McDougal, John M.

    2010-01-01

    As the complexity of space missions grows, development of Fault Management (FM) capabilities is an increasingly common driver for significant cost overruns late in the development cycle. FM issues and the resulting cost overruns are rarely caused by a lack of technology, but rather by a lack of planning and emphasis by project management. A recent NASA FM Workshop brought together FM practitioners from a broad spectrum of institutions, mission types, and functional roles to identify the drivers underlying FM overruns and recommend solutions. They identified a number of areas in which increased program and project management focus can be used to control FM development cost growth. These include up-front planning for FM as a distinct engineering discipline; managing different, conflicting, and changing institutional goals and risk postures; ensuring the necessary resources for a disciplined, coordinated approach to end-to-end fault management engineering; and monitoring FM coordination across all mission systems.

  13. Randomness fault detection system

    NASA Technical Reports Server (NTRS)

    Russell, B. Don (Inventor); Aucoin, B. Michael (Inventor); Benner, Carl L. (Inventor)

    1996-01-01

    A method and apparatus are provided for detecting a fault on a power line carrying a line parameter such as a load current. The apparatus monitors and analyzes the load current to obtain an energy value. The energy value is compared to a threshold value stored in a buffer. If the energy value is greater than the threshold value, a counter is incremented. If the energy value is greater than a high value threshold or less than a low value threshold, then a second counter is incremented. If the difference between two subsequent energy values is greater than a constant, then a third counter is incremented. A fault signal is issued if the counter is greater than a counter limit value and either the second counter is greater than a second limit value or the third counter is greater than a third limit value.
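    The counter logic just described can be sketched compactly; the threshold, band, jump, and limit values below are illustrative placeholders rather than values from the patent.

    def detect_fault(energy_values, threshold, high, low, delta_const,
                     limit1, limit2, limit3):
        c1 = c2 = c3 = 0
        prev = None
        for e in energy_values:
            if e > threshold:
                c1 += 1                      # counter 1: energy above threshold
            if e > high or e < low:
                c2 += 1                      # counter 2: energy outside the high/low band
            if prev is not None and abs(e - prev) > delta_const:
                c3 += 1                      # counter 3: large jump between samples (randomness)
            prev = e
        # Fault signal: counter 1 over its limit AND (counter 2 or counter 3 over theirs)
        return c1 > limit1 and (c2 > limit2 or c3 > limit3)

    # Example: a noisy, bursty energy signature trips the detector
    print(detect_fault([1, 9, 1, 8, 2, 9], threshold=5, high=7, low=0, delta_const=5,
                       limit1=2, limit2=2, limit3=2))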

  14. Faults and Faulting Earth Structure (2nd Edition), 2004

    E-print Network

    Slide show by Ben van der Pluijm, from Earth Structure (2nd Edition), 2004, W.W. Norton & Co, New York (© W.W. Norton unless noted otherwise). Excerpt: "... on a footwall flat, and segment DE is a hanging-wall flat on a footwall flat."

  15. Determining the faulted phase

    Microsoft Academic Search

    David Costello; Karl Zimmerman

    2010-01-01

    In August 1999, a lightning strike caused a misoperation of a relay installed in the late 1980s. The relay misoperation caused a two-minute outage at a petrochemical plant and led to an exhaustive root-cause analysis. The misoperation can be attributed to incorrect fault type selection in a distance element-based, 1980s-era relay. Two separate events in different locations, one in December

  16. Fault tolerant control laws

    NASA Technical Reports Server (NTRS)

    Ly, U. L.; Ho, J. K.

    1986-01-01

    A systematic procedure for the synthesis of control laws tolerant to actuator failure has been presented. Two design methods were used to synthesize fault tolerant controllers: the conventional LQ design method and a direct feedback controller design method, SANDY. The latter method is used primarily to streamline the full-state LQ feedback design into a practical, implementable output feedback controller structure. To achieve robustness to control actuator failure, the redundant surfaces are properly balanced according to their control effectiveness. A simple gain schedule based on the landing gear up/down logic, involving only three gains, was developed to handle three design flight conditions: Mach .25 and Mach .60 at 5000 ft and Mach .90 at 20,000 ft. The fault tolerant control law developed in this study provides good stability augmentation and performance for the relaxed static stability aircraft. The augmented aircraft responses are found to be invariant to the presence of a failure. Furthermore, single-loop stability margins of +6 dB in gain and +30 deg in phase were achieved, along with -40 dB/decade rolloff at high frequency.

  17. Fluid involvement in normal faulting

    NASA Astrophysics Data System (ADS)

    Sibson, Richard H.

    2000-04-01

    Evidence of fluid interaction with normal faults comes from their varied role as flow barriers or conduits in hydrocarbon basins and as hosting structures for hydrothermal mineralisation, and from fault-rock assemblages in exhumed footwalls of steep active normal faults and metamorphic core complexes. These last suggest involvement of predominantly aqueous fluids over a broad depth range, with implications for fault shear resistance and the mechanics of normal fault reactivation. A general downwards progression in fault rock assemblages (high-level breccia-gouge (often clay-rich) → cataclasites → phyllonites → mylonite → mylonitic gneiss with the onset of greenschist phyllonites occurring near the base of the seismogenic crust) is inferred for normal fault zones developed in quartzo-feldspathic continental crust. Fluid inclusion studies in hydrothermal veining from some footwall assemblages suggest a transition from hydrostatic to suprahydrostatic fluid pressures over the depth range 3-5 km, with some evidence for near-lithostatic to hydrostatic pressure cycling towards the base of the seismogenic zone in the phyllonitic assemblages. Development of fault-fracture meshes through mixed-mode brittle failure in rock-masses with strong competence layering is promoted by low effective stress in the absence of thoroughgoing cohesionless faults that are favourably oriented for reactivation. Meshes may develop around normal faults in the near-surface under hydrostatic fluid pressures to depths determined by rock tensile strength, and at greater depths in overpressured portions of normal fault zones and at stress heterogeneities, especially dilational jogs. Overpressures localised within developing normal fault zones also determine the extent to which they may reutilise existing discontinuities (for example, low-angle thrust faults). Brittle failure mode plots demonstrate that reactivation of existing low-angle faults under vertical σ1 trajectories is only likely if fluid overpressures are localised within the fault zone and the surrounding rock retains significant tensile strength. Migrating pore fluids interact both statically and dynamically with normal faults. Static effects include consideration of the relative permeability of the faults with respect to the country rock, and juxtaposition effects which determine whether a fault is transmissive to flow or acts as an impermeable barrier. Strong directional permeability is expected in the subhorizontal σ2 direction parallel to intersections between minor faults, extension fractures, and stylolites. Three dynamic mechanisms tied to the seismic stress cycle may contribute to fluid redistribution: (i) cycling of mean stress coupled to shear stress, sometimes leading to postfailure expulsion of fluid from vertical fractures; (ii) suction pump action at dilational fault jogs; and, (iii) fault-valve action when a normal fault transects a seal capping either uniformly overpressured crust or overpressures localised to the immediate vicinity of the fault zone at depth. The combination of σ2 directional permeability with fluid redistribution from mean stress cycling may lead to hydraulic communication along strike, contributing to the protracted earthquake sequences that characterise normal fault systems.
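    A hedged aside on the brittle failure mode argument above: the standard reactivation condition for a cohesionless fault oriented at an angle \theta_r to \sigma_1 (a textbook Sibson-type relation, not an equation taken from this paper) can be written in LaTeX as

        \frac{\sigma'_1}{\sigma'_3} = \frac{1 + \mu_s \cot\theta_r}{1 - \mu_s \tan\theta_r}, \qquad \sigma'_i = \sigma_i - P_f ,

    where \sigma'_1 and \sigma'_3 are the effective principal stresses, \mu_s is the static friction coefficient and P_f the pore-fluid pressure. For a low-angle normal fault under a vertical \sigma_1 trajectory, \theta_r approaches the frictional lock-up angle (about 59° for \mu_s = 0.6), the denominator tends to zero, and reactivation requires P_f to approach \sigma_3; this is the quantitative sense in which overpressure must be localised within the fault zone for such faults to reactivate.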

  18. Fault management for data systems

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Iverson, David L.; Patterson-Hine, F. Ann

    1993-01-01

    Issues related to automating the process of fault management (fault diagnosis and response) for data management systems are considered. Substantial benefits are to be gained by successful automation of this process, particularly for large, complex systems. The use of graph-based models to develop a computer assisted fault management system is advocated. The general problem is described and the motivation behind choosing graph-based models over other approaches for developing fault diagnosis computer programs is outlined. Some existing work in the area of graph-based fault diagnosis is reviewed, and a new fault management method which was developed from existing methods is offered. Our method is applied to an automatic telescope system intended as a prototype for future lunar telescope programs. Finally, an application of our method to general data management systems is described.

  19. Quantification of fabrics in clay gouge from the Carboneras fault, Spain and implications for fault behavior

    E-print Network

    Quantification of fabrics in clay gouge from the Carboneras fault, Spain, and implications for fault behavior. Available online 15 July 2009. Keywords: fault mechanics; permeability; clay authigenesis; fault strength. Clays in fault rocks have the potential to control fault behavior. The formation of frictionally

  20. Fault-tolerant Sensor Network based on Fault Evaluation Matrix and Compensation for Intermittent Observation

    E-print Network

    Fault-tolerant Sensor Network based on Fault Evaluation Matrix and Compensation for Intermittent Observation. Kazuya Kosugi, Shinichiro Tokumoto and Toru Namerikawa. Abstract: This paper deals with a fault-tolerant sensor network based on a fault-evaluation matrix and compensation for intermittent observation, for constructing a fault tolerant system. Specifically, we propose a fault-evaluation matrix for the fault

  1. Fault Location Orion is the distribution company for the Canterbury region. In 2007, a Ground Fault

    E-print Network

    Hickman, Mark

    Fault Location Orion is the distribution company for the Canterbury region. In 2007, a Ground Fault faults. This system operates by reducing the fault currents present during a fault, extinguishing and preventing arcing from occurring. Although this is greatly beneficial to the system, the reduction in fault

  2. Improved Differential Fault Analysis on ARIA using Small Number of Faults

    E-print Network

    Improved Differential Fault Analysis on ARIA using Small Number of Faults. Yuseop Lee, Kitae... In [15], Li et al. first proposed a differential fault analysis on ARIA-128. This attack requires byte fault injection. Kim also proposed a differential fault analysis based on a multi-byte fault model

  3. Carbon and nitrogen isotope composition of core catcher samples from the ICDP deep drilling at Laguna Potrok Aike (Patagonia, Argentina)

    NASA Astrophysics Data System (ADS)

    Luecke, Andreas; Wissel, Holger; Mayr*, Christoph; Oehlerich, Markus; Ohlendorf, Christian; Zolitschka, Bernd; Pasado Science Team

    2010-05-01

    The ICDP project PASADO aims to develop a detailed paleoclimatic record for the southern part of the South American continent from sediments of Laguna Potrok Aike (51°58'S, 70°23'W), situated in the Patagonian steppe east of the Andean cordillera and north of the Strait of Magellan. The precursor project SALSA recovered the Holocene and Late Glacial sediment infill of Laguna Potrok Aike and developed the environmental history of the semi-arid Patagonian steppe by a consistent interdisciplinary multi-proxy approach (e.g. Haberzettl et al., 2007). From September to November 2008 the ICDP deep drilling took place and successfully recovered in total 510 m of sediments from two sites, resulting in a composite depth of 106 m for the selected main study Site 2. A preliminary age model places the record within the last 50,000 years. During the drilling campaign, the core catcher content of each drilled core run (3 m) was taken as a separate sample to be shared and distributed between the involved laboratories long before the main sampling party. A total of 70 core catcher samples describe the sediments of Site 2 and will form the basis for more detailed investigations on the palaeoclimatic history of Patagonia. We here report on the organic carbon and nitrogen isotope composition of bulk sediment and plant debris of the core catcher samples. Similar investigations were performed for Holocene and Late Glacial sediments of Laguna Potrok Aike, revealing insights into the organic matter dynamics of the lake and its catchment as well as into climatically induced hydrological variations with related lake level fluctuations (Mayr et al., 2009). The carbon and nitrogen content of the core catcher fine sediment fraction (<200 µm) is low to very low (around 1 % and 0.1 %, respectively) and requires particular attention in isotope analysis. The carbon isotope composition shows comparatively little variation around a value of -26.0 per mil. The more positive values of the Holocene and the Late Glacial (up to -22.0 per mil) are only sporadically reached down core. Compared to this, separated moss debris is remarkably 13C depleted, with a minimum at -31.5 per mil. The nitrogen isotope ratios of glacial Laguna Potrok Aike sediments are lower (2.5 per mil) than those of the younger part of the record. The core catcher samples indicate several oscillations between 0.5 and 3.5 per mil. Data suggest a correlation between nitrogen isotopes and C/N ratios, but no linear relation between carbon isotopes and carbon content and only a weak relationship between carbon and nitrogen isotopes. Increasing nitrogen isotope values from 8000 cm downwards could probably be related to changed environmental conditions of Marine Isotope Stage 3 (MIS 3) compared to Marine Isotope Stage 2 (MIS 2). This will be further evaluated with higher resolution from the composite profile, including a detailed study of discrete plant debris layers. References Haberzettl, T. et al. (2007). Lateglacial and Holocene wet-dry cycles in southern Patagonia: chronology, sedimentology and geochemistry of a lacustrine record from Laguna Potrok Aike, Argentina. The Holocene, 17: 297-310. Mayr, C. et al. (2009). Isotopic and geochemical fingerprints of environmental changes during the last 16,000 years on lacustrine organic matter from Laguna Potrok Aike (southern Patagonia, Argentina). Journal of Paleolimnology, 42: 81-102.

  4. Polynomially Complete Fault Detection Problems

    Microsoft Academic Search

    Oscar H. Ibarra; Sartaj Sahni

    1975-01-01

    We look at several variations of the single fault detection problem for combinational logic circuits and show that deciding whether single faults are detectable by input-output (I/O) experiments is polynomially complete, i.e., there is a polynomial time algorithm to decide if these single faults are detectable if and only if there is a polynomial time algorithm for problems such as

  5. Compositional Temporal Fault Tree Analysis

    Microsoft Academic Search

    Martin Walker; Leonardo Bottaci; Yiannis Papadopoulos

    2007-01-01

    HiP-HOPS (Hierarchically-Performed Hazard Origin and Propagation Studies) is a recent technique that partly automates Fault Tree Analysis (FTA) by constructing fault trees from system topologies annotated with component-level failure specifications. HiP-HOPS has hitherto created only classical combinatorial fault trees that fail to capture the often significant temporal ordering of failure events. In this paper, we propose temporal extensions to

  6. Fault-tolerant multiprocessor computer

    SciTech Connect

    Smith, T.B. III; Lala, J.H.; Goldberg, J.; Kautz, W.H.; Melliar-Smith, P.M.; Green, M.W.; Levitt, K.N.; Schwartz, R.L.; Weinstock, C.B.; Palumbo, D.L.

    1986-01-01

    The development and evaluation of fault-tolerant computer architectures and software-implemented fault tolerance (SIFT) for use in advanced NASA vehicles and potentially in flight-control systems are described in a collection of previously published reports prepared for NASA. Topics addressed include the principles of fault-tolerant multiprocessor (FTMP) operation; processor and slave regional designs; FTMP executive, facilities, acceptance-test/diagnostic, applications, and support software; FTMP reliability and availability models; SIFT hardware design; and SIFT validation and verification.

  7. Microbiological quality of chicken- and pork-based street-vended foods from Taichung, Taiwan, and Laguna, Philippines.

    PubMed

    Manguiat, Lydia S; Fang, Tony J

    2013-10-01

    The microbiological quality of chicken- and pork-based street-food samples from Taichung, Taiwan's night markets (50) and Laguna, Philippines' public places (69) was evaluated in comparison to a microbiological guideline for ready-to-eat foods. Different bacterial contamination patterns were observed between 'hot-grilled' and 'cold cooked/fried' food types from the two sampling locations, with 'hot grilled' foods generally showing better microbiological quality. Several samples were found to be unsatisfactory due to high levels of aerobic plate count, coliform, Escherichia coli, and Staphylococcus aureus. The highest counts obtained were 8.2 log cfu g⁻¹, 5.4 log cfu g⁻¹, 4.4 log cfu g⁻¹, and 3.9 log cfu g⁻¹, respectively, suggesting poor food hygiene practices and poor sanitation. Salmonella was found in 8% and 7% of Taichung and Laguna samples, respectively, which made the samples potentially hazardous. None of the samples was found to be positive for Listeria monocytogenes and E. coli O157, but Bacillus cereus was detected at the unsatisfactory level of 4 log cfu g⁻¹ in one Laguna sample. Antimicrobial resistance was observed for Salmonella, E. coli, and S. aureus isolates. Food preparation, cooking, and food handling practices were considered to be contributors to the unacceptable microbiological quality of the street foods. Hence, providing training on food hygiene for the street vendors should result in the improvement of the microbiological quality of street foods. The data obtained in this study can be used as input to microbial risk assessments and in identifying science-based interventions to control the hazards. PMID:23764220

  8. Physical fault tolerance of nanoelectronics.

    PubMed

    Szkopek, Thomas; Roychowdhury, Vwani P; Antoniadis, Dimitri A; Damoulakis, John N

    2011-04-29

    The error rate in complementary transistor circuits is suppressed exponentially in electron number, arising from an intrinsic physical implementation of fault-tolerant error correction. Contrariwise, explicit assembly of gates into the most efficient known fault-tolerant architecture is characterized by a subexponential suppression of error rate with electron number, and incurs significant overhead in wiring and complexity. We conclude that it is more efficient to prevent logical errors with physical fault tolerance than to correct logical errors with fault-tolerant architecture. PMID:21635055

  9. Fault welding by pseudotachylyte generation

    NASA Astrophysics Data System (ADS)

    Mitchell, T. M.; Toy, V. G.; Di Toro, G.; Renner, J.

    2014-12-01

    During earthquakes, frictional melts can localize on slip surfaces and dramatically weaken faults by melt lubrication. Once seismic slip is arrested, the melt cools and solidifies to form pseudotachylyte (PST), the presence of which is commonly used to infer earthquake slip on ancient exhumed faults. Little is known about the effect of solidified melt on the strength of faults directly preceding a subsequent earthquake. We performed triaxial deformation experiments on cores of tonalite (Gole Larghe fault zone, N. Italy) and mylonite (Alpine fault, New Zealand) in order to assess the strength of PST-bearing faults in the lab. Three types of sample were prepared for each rock type: intact, sawcut, and PST-bearing. The samples were cored so that the sawcut, PST and foliation planes were orientated at 35° to the length of the core and the direction of σ1, i.e., a favorable orientation for reactivation. This choice of samples allowed us to compare the strength of a 'pre-earthquake' fault (sawcut) to that of a 'post-earthquake' fault with solidified frictional melt, and assess their strength relative to intact samples. Our results show that PST veins effectively weld fault surfaces together, allowing previously faulted rocks to regain cohesive strengths comparable to that of an intact rock. Shearing of the PST is not favored, but subsequent failure and slip are accommodated on new faults nucleating at other zones of weakness. Thus, the mechanism of coseismic weakening by melt lubrication does not necessarily facilitate long-term interseismic deformation localization, at least at the scale of these experiments. In natural fault zones, PSTs are often found distributed over multiple adjacent fault planes or other zones of weakness such as foliation planes. We also modeled the temperature distribution in and around a PST using an approximation for cooling of a thin, infinite sheet by conduction perpendicular to its margins at ambient temperatures commensurate with the depth of PST formation. Results indicate that such PSTs would have cooled below their solidus in tens of seconds, leading to fault welding in under a minute. Cooled solidified melt patches can potentially act as asperities on faults, where faults can cease to be zones of weakness.
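    The thin-sheet conduction estimate mentioned above can be illustrated with a short numerical sketch based on the classical Carslaw-and-Jaeger-type solution for a sheet of melt cooling into country rock; the vein thickness, temperatures, solidus, and diffusivity below are illustrative assumptions, not the parameter values used by the authors.

    from math import erf, sqrt

    def centre_temperature(t_s, half_width_m, T_melt, T_ambient, kappa=1e-6):
        """Temperature at the centre of a sheet of initial temperature T_melt
        embedded in rock at T_ambient, after t_s seconds (kappa in m^2/s)."""
        return T_ambient + (T_melt - T_ambient) * erf(half_width_m / (2.0 * sqrt(kappa * t_s)))

    # Example: assumed 1 cm thick vein (half-width 5 mm), melt at 1200 C,
    # host rock at 250 C, solidus ~900 C
    t = 1.0
    while centre_temperature(t, 5e-3, 1200.0, 250.0) > 900.0:
        t += 1.0
    print("cools below solidus after ~", t, "s")   # of order tens of seconds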

  10. Continued Rapid Uplift at Laguna del Maule Volcanic Field (Chile) from 2007 through 2014

    NASA Astrophysics Data System (ADS)

    Le Mevel, H.; Feigl, K. L.; Cordova, L.; DeMets, C.; Lundgren, P.

    2014-12-01

    The current rate of uplift at Laguna del Maule (LdM) volcanic field in Chile is among the highest ever observed geodetically for a volcano that is not actively erupting. Using data from interferometric synthetic aperture radar (InSAR) and the Global Positioning System (GPS) recorded at five continuously operating stations, we measure the deformation field with dense sampling in time (1/day) and space (1/hectare). These data track the temporal evolution of the current unrest episode from its inception (sometime between 2004 and 2007) to vertical velocities faster than 200 mm/yr that continue through (at least) July 2014. Building on our previous work, we evaluate the temporal evolution by analyzing data from InSAR (ALOS, TerraSAR-X, TanDEM-X) and GPS [http://dx.doi.org/10.1093/gji/ggt438]. In addition, we consider InSAR data from ERS, ENVISAT, COSMO-SkyMed, and UAVSAR, as well as constraints from magnetotelluric (MT), seismic, and gravity surveys. The goal is to test the hypothesis that a recent magma intrusion is feeding a large, existing magma reservoir. What will happen next? To address this question, we analyze the temporal evolution of deformation at other large silicic systems, such as Yellowstone, Long Valley, and Three Sisters, during well-studied episodes of unrest. We consider several parameterizations, including piecewise linear, parabolic, and Gaussian functions of time. By choosing the best-fitting model, we expect to constrain the time scales of such episodes and elucidate the processes driving them.
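
    The model-comparison step described above amounts to fitting each candidate time function to an uplift time series and keeping the one with the smallest misfit. A minimal sketch follows, using synthetic GPS vertical displacements and scipy's curve_fit; the functional forms, starting values, and data are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def piecewise_linear(t, t0, v1, v2, u0):
        # two linear segments joined at the break time t0
        return u0 + np.where(t < t0, v1 * t, v1 * t0 + v2 * (t - t0))

    def parabolic(t, a, b, c):
        return a * t**2 + b * t + c

    def gaussian(t, amp, tc, sigma, u0):
        return u0 + amp * np.exp(-0.5 * ((t - tc) / sigma) ** 2)

    # Synthetic daily vertical time series (years vs. mm), for illustration only:
    # no uplift until 1.5 yr, then a constant ~200 mm/yr rate plus noise.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 7.0, 2500)
    u = 200.0 * np.maximum(t - 1.5, 0.0) + rng.normal(0.0, 5.0, t.size)

    candidates = {
        "piecewise linear": (piecewise_linear, [1.0, 10.0, 150.0, 0.0]),
        "parabolic":        (parabolic,        [10.0, 10.0, 0.0]),
        "gaussian":         (gaussian,         [1500.0, 8.0, 4.0, 0.0]),
    }
    for name, (f, p0) in candidates.items():
        try:
            p, _ = curve_fit(f, t, u, p0=p0, maxfev=20000)
        except RuntimeError:
            print(f"{name:17s} did not converge")
            continue
        rms = np.sqrt(np.mean((u - f(t, *p)) ** 2))
        print(f"{name:17s} RMS misfit = {rms:6.1f} mm")
    ```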

  11. Impact of solar radiation on bacterioplankton in Laguna Vilama, a hypersaline Andean lake (4650 m)

    NASA Astrophysics Data System (ADS)

    Farías, María Eugenia; Fernández-Zenoff, Verónica; Flores, Regina; Ordóñez, Omar; Estévez, Cristina

    2009-06-01

    Laguna Vilama is a hypersaline lake located at 4660 m altitude in the northwest of Argentina, high up in the Andean Puna. The impact of ultraviolet (UV) radiation on bacterioplankton was studied by collecting samples at different times of the day. Molecular analysis (DGGE) showed that the bacterioplankton community is characterized by Gamma-proteobacteria (Halomonas sp., Marinobacter sp.), Alpha-proteobacteria (Roseobacter sp.), HGC (Agrococcus jenensis and an uncultured bacterium), and CFB (uncultured Bacteroidetes). During the day, minor modifications in bacterial diversity, such as intensification of the Bacteroidetes signal and an emergence of Gamma-proteobacteria (Marinobacter flavimaris), were observed after solar exposure. DNA damage, measured as an accumulation of cyclobutane pyrimidine dimers (CPDs), in bacterioplankton and naked DNA increased from 100 CPDs MB⁻¹ at 1200 local time (LT) to 300 CPDs MB⁻¹ at 1600 LT, and from 80 CPDs MB⁻¹ at 1200 LT to 640 CPDs MB⁻¹ at 1600 LT, respectively. In addition, pure cultures of Pseudomonas sp. V1 and Brachybacterium sp. V5, two bacteria previously isolated from this environment, were exposed simultaneously with the community, and the viability of both strains diminished after solar exposure. No CPD accumulation was observed in either of the exposed cultures, but an increase in mutagenesis was detected in V5. Of the two strains, only Brachybacterium sp. V5 showed CPD accumulation in naked DNA. These results suggest that the bacterioplankton community is well adapted to this highly solar-irradiated environment, showing little accumulation of CPDs and few changes in community composition. They also demonstrate that these microorganisms possess efficient mechanisms against UV damage.

  12. Holocene History of the Chocó Rain Forest from Laguna Piusbi, Southern Pacific Lowlands of Colombia

    NASA Astrophysics Data System (ADS)

    Behling, Hermann; Hooghiemstra, Henry; Negret, Alvaro José

    1998-11-01

    A high-resolution pollen record from a 5-m-long sediment core from the closed-lake basin Laguna Piusbi in the southern Colombian Pacific lowlands of Chocó, dated by 11 AMS 14C dates that range from ca. 7670 to 220 14C yr B.P., represents the first Holocene record from the Chocó rain forest area. The interval between 7600 and 6100 14C yr B.P. (500-265 cm), composed of sandy clays that accumulated during the initial phase of lake formation, is almost barren of pollen. Fungal spores and the presence of herbs and disturbance taxa suggest the basin was at least temporarily inundated and the vegetation was open. The closed lake basin might have formed during an earthquake, probably about 4400 14C yr B.P. From the interval of about 6000 14C yr B.P. onwards, 200 different pollen and spore types were identified in the core, illustrating a diverse floristic composition of the local rain forest. Main taxa are Moraceae/Urticaceae, Cecropia, Melastomataceae/Combretaceae, Acalypha, Alchornea, Fabaceae, Mimosa, Piper, Protium, Sloanea, Euterpe/Geonoma, Socratea, and Wettinia. Little change took place during that time interval. Compared to the pollen records from the rain forests of the Colombian Amazon basin and adjacent savannas, the Chocó rain forest ecosystem has been very stable during the late Holocene. Paleoindians probably lived there at least since 3460 14C yr B.P. Evidence of agricultural activity, shown by cultivation of Zea mays surrounding the lake, spans the last 1710 yr. Past and present very moist climate and little human influence are important factors in maintaining the stable ecosystem and high biodiversity of the Chocó rain forest.

  13. Hydrocarbon concentrations in the American oyster, Crassostrea virginica, in Laguna de Terminos, Campeche, Mexico

    SciTech Connect

    Gold-Bouchot, G.; Norena-Barroso, E.; Zapata-Perez, O. [Unidad Merida, Yucatan (Mexico)]

    1995-02-01

    Laguna de Terminos is a 2,500 km² coastal lagoon in the southern Gulf of Mexico, located between 18°20' and 19°00' N, and 91°00' and 92°20' W (Figure 1). It is a shallow lagoon, with a mean depth of 3.5 m, connected to the Gulf of Mexico through two permanent inlets, Puerto Real to the east and Carmen to the west. Several rivers, most of them from the Grijalva-Usumacinta basin (the largest in Mexico and second largest in the Gulf of Mexico), drain into the lagoon with a mean annual discharge of 6 × 10⁹ m³/year. This lagoon has been studied systematically, and is probably one of the best known in Mexico. An excellent overview of this lagoon can be found in Yanez-Arancibia and Day. The continental shelf north of Terminos, the Campeche Bank, is the main oil-producing zone in Mexico, with a production of about 2 × 10⁶ barrels/day. It is also the main shrimp producer in the southern Gulf, with a mean annual catch of 18,000 tonnes/year, which represents 38 to 50% of the national catch in the Gulf of Mexico. The economic importance of this region, along with its extremely high biodiversity, both in terms of species and habitats, has prompted the Mexican government to study the creation of a wildlife refuge around Terminos. Thus, it is very important to know the current levels of pollutants in this area, as a contribution to the management plan of the proposed protected area. This paper looks at hydrocarbon concentrations in oyster tissue. 14 refs., 3 figs., 21 tabs.

  14. Fault Branching and Rupture Directivity

    NASA Astrophysics Data System (ADS)

    Dmowska, R.; Rice, J. R.; Kame, N.

    2002-12-01

    Can the rupture directivity of past earthquakes be inferred from fault geometry? Nakata et al. [J. Geogr., 1998] propose to relate the observed surface branching of fault systems with directivity. Their work assumes that all branches are through acute angles in the direction of rupture propagation. However, in some observed cases rupture paths seem to branch through highly obtuse angles, as if to propagate "backwards". Field examples are as follows: (1) Landers 1992. When crossing from the Johnson Valley to the Homestead Valley (HV) fault via the Kickapoo (Kp) fault, the rupture from Kp progressed not just forward onto the northern stretch of the HV fault, but also backwards, i.e., SSE along the HV [Sowers et al., 1994; Spotila and Sieh, 1995; Zachariasen and Sieh, 1995; Rockwell et al., 2000]. Measurements of surface slip along that backward branch, a prominent feature 4 km long, show right-lateral slip decreasing towards the SSE. (2) At a similar crossing from the HV to the Emerson (Em) fault, the rupture progressed backwards along different SSE splays of the Em fault [Zachariasen and Sieh, 1995]. (3) In crossing from the Em to the Camp Rock (CR) fault, again, rupture went SSE on the CR fault. (4) Hector Mine 1999. The rupture originated on a buried fault without surface trace [Li et al., 2002; Hauksson et al., 2002] and progressed bilaterally south and north. In the south it met the Lavic Lake (LL) fault and progressed south on it, but also progressed backward, i.e., NNW, along the northern stretch of the LL fault. The angle between the buried fault and the northern LL fault is around -160°, and that NNW stretch extends around 15 km. The field examples with highly obtuse branch angles suggest that there may be no simple correlation between fault geometry and rupture directivity. We propose that an important distinction is whether those obtuse branches actually involved a rupture path which directly turned through the obtuse angle (while continuing also on the main fault), or rather involved arrest by a barrier on the original fault and jumping [Harris and Day, JGR, 1993] to a neighboring fault on which rupture propagated bilaterally to form what appears as a backward-branched structure. Our studies [Poliakov et al., JGR in press, 2002; Kame et al., EOS, 2002] of stress fields around a dynamically moving mode II crack tip show a clear tendency to branch from the straight path at high rupture speeds, but the stress fields never allow the rupture path to directly turn through highly obtuse angles, and hence that mechanism is unlikely. In contrast, study of fault maps in the vicinity of the Kp to HV fault transition [Sowers et al., 1994], discussed as case (1) above, strongly suggests that the large-angle branching occurred as a jump, which we propose as the likely general mechanism. The implication for the Nakata et al. [1998] aim of inferring rupture directivity from branch geometry is that this will be possible only when rather detailed characterization (by surface geology, seismic relocation, trapped waves) of fault connectivity can be carried out in the vicinity of the branching junction, to ascertain whether direct turning of the rupture path through an angle, or jumping and then propagating bilaterally, was involved in prior events. The two scenarios have opposite implications for how we would associate past directivity with a (nominally) branched fault geometry.

  15. Age constraints on faulting and fault

    E-print Network

    Siebel, Wolfgang

    of the cataclastic deformation period. During this time, the "Kristallgranit" was already at or near the Earth the fault zone yield Cretaceous ages that clearly postdate their Late-Variscan mineralization age. We is supported by geological evidence, i.e. offsets of Jurassic and Cretaceous sediments along the fault

  16. Fault Models for Quantum Mechanical Switching Networks

    Microsoft Academic Search

    Jacob D. Biamonte; Jeff S. Allen; Marek A. Perkowski

    2010-01-01

    The difference between faults and errors is that, unlike faults, errors can be corrected using control codes. In classical test and verification one develops a test set separating a correct circuit from a circuit containing any considered fault. Classical faults are modelled at the logical level by fault models that act on classical states. The stuck fault model, thought of

  17. The Lawanopo Fault, central Sulawesi, East Indonesia

    NASA Astrophysics Data System (ADS)

    Natawidjaja, Danny Hilman; Daryono, Mudrik R.

    2015-04-01

    The dominant tectonic-force factor in the Sulawesi Island region is the westward Bangga-Sula microplate tectonic intrusion, driven by the 12 mm/year westward motion of the Pacific Plate relative to Eurasia. This tectonic intrusion is accommodated by a series of major left-lateral strike-slip fault zones, including the Sorong, Sula-Sorong, Matano, Palukoro, and Lawanopo fault zones. The Lawanopo Fault has been considered an active left-lateral strike-slip fault. The natural exposures of the Lawanopo Fault are clear, marked by breaks and lineaments in the topography along the fault line, and the fault also serves as a tectonic boundary between different rock assemblages. Inspection of the IFSAR 5-m-grid DEM and field checks show that the fault traces are visible as lineaments of topographic slope breaks, linear ridges and stream valleys, and ridge neckings, and they are also associated with hydrothermal deposits and hot springs. These are characteristics of a young fault, so its morphological expressions can still be seen. However, fault scarps and other morpho-tectonic features appear to have been subdued by erosion and young sediment deposition. No fresh fault scarps, stream deflections or offsets, or other influence of fault movement on the recent landscape is observed along the fault traces. Hence, the fault does not show any evidence of recent activity. This is consistent with the lack of seismicity on the fault.

  18. Optimized Fault Location Final Project Report

    E-print Network

    Optimized Fault Location: Final Project Report. Power Systems Engineering Research Center (a national engineering research center) and Concurrent Technologies Corporation.

  19. Perspective View, Garlock Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    California's Garlock Fault, marking the northwestern boundary of the Mojave Desert, lies at the foot of the mountains, running from the lower right to the top center of this image, which was created with data from NASA's Shuttle Radar Topography Mission (SRTM), flown in February 2000. The data will be used by geologists studying fault dynamics and landforms resulting from active tectonics. These mountains are the southern end of the Sierra Nevada and the prominent canyon emerging at the lower right is Lone Tree canyon. In the distance, the San Gabriel Mountains cut across from the left side of the image. At their base lies the San Andreas Fault, which meets the Garlock Fault near the left edge at Tejon Pass. The dark linear feature running from lower right to upper left is State Highway 14 leading from the town of Mojave in the distance to Inyokern and the Owens Valley in the north. The lighter parallel lines are dirt roads related to power lines and the Los Angeles Aqueduct which run along the base of the mountains.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.
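
    A rough sketch of the drape-and-exaggerate step using matplotlib's LightSource; the synthetic elevation grid and pseudo-imagery below stand in for the SRTM and Landsat data, which are not part of this record.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.colors import LightSource

    # Synthetic stand-ins for an SRTM elevation grid and a Landsat RGB image.
    y, x = np.mgrid[0:500, 0:500]
    elev = 800.0 + 600.0 * np.exp(-((x - 250) ** 2 + (y - 150) ** 2) / 2e4)   # metres
    rgb = plt.cm.terrain((elev - elev.min()) / (elev.max() - elev.min()))[:, :, :3]

    # Shade the imagery with the topography, exaggerating relief 1.5x as in the caption.
    ls = LightSource(azdeg=315, altdeg=45)
    draped = ls.shade_rgb(rgb, elev, vert_exag=1.5, blend_mode="soft")

    plt.imshow(draped)
    plt.title("Synthetic imagery draped over shaded relief (1.5x vertical exaggeration)")
    plt.axis("off")
    plt.show()
    ```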

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense (DoD), and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC.

    Size: Varies in a perspective view Location: 35.25 deg. North lat., 118.05 deg. West lon. Orientation: Looking southwest Original Data Resolution: SRTM and Landsat: 30 meters (99 feet) Date Acquired: February 16, 2000

  20. Observer-based fault detection for nuclear reactors

    E-print Network

    Li, Qing, 1972-

    2001-01-01

    This is a study of fault detection for nuclear reactor systems. Basic concepts are derived from fundamental theories on system observers. Different types of fault - actuator fault, sensor fault, and system dynamics fault ...
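
    The observer idea summarized above can be illustrated with a discrete-time Luenberger observer whose output residual flags a sensor fault. The plant matrices, observer gain, fault size, and threshold below are invented for illustration and are not taken from the thesis.

    ```python
    import numpy as np

    # Simple stable discrete-time plant: x_{k+1} = A x_k + B u_k, y_k = C x_k
    A = np.array([[0.95, 0.10], [0.00, 0.90]])
    B = np.array([[0.0], [0.1]])
    C = np.array([[1.0, 0.0]])
    L = np.array([[0.5], [0.2]])          # observer gain (assumed, not designed here)

    x = np.zeros((2, 1))                  # true plant state
    xh = np.zeros((2, 1))                 # observer estimate
    threshold = 0.05                      # residual alarm limit (assumed)

    for k in range(200):
        u = np.array([[1.0]])
        x = A @ x + B @ u
        y = C @ x
        if k >= 100:                      # inject a sensor bias fault at step 100
            y = y + 0.5
        r = y - C @ xh                    # output residual
        xh = A @ xh + B @ u + L @ r       # Luenberger observer update
        if abs(r[0, 0]) > threshold:
            print(f"fault flagged at step {k}, residual = {r[0, 0]:.3f}")
            break
    ```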

  1. HTDD based parallel fault simulator

    Microsoft Academic Search

    Joanna Sapiecha; Krzysztof Sapiecha; Stanislaw Deniziak

    1998-01-01

    In this paper a new efficient approach to bit-parallel fault simulation for sequential circuits is introduced and evaluated with the help of ISCAS89 benchmarks. Digital systems are modelled using Hierarchical Ternary Decision Diagrams (HTDDs). It leads to substantial reduction of both the number of simulated faults and calculations needed for simulation. Moreover, an approach presented in this paper is able
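
    For context, a generic illustration of the bit-parallel idea (not the HTDD method itself): each bit position of a machine word carries one copy of the circuit, so the fault-free machine and several stuck-at-faulty machines are evaluated in a single pass. The two-gate circuit and fault list are invented for the example.

    ```python
    # Bit-parallel stuck-at fault simulation for a toy circuit: z = (a AND b) OR c.
    # Bit 0 of each packed word holds the fault-free machine; bits 1..N hold one
    # faulty machine each, so all copies are evaluated together for every vector.
    faults = [("n1", 0), ("n1", 1), ("c", 0)]      # (net, stuck-at value), illustrative list
    N = len(faults)
    ALL = (1 << (N + 1)) - 1                       # mask covering every parallel copy

    def replicate(bit):
        """Copy a scalar 0/1 input into every bit position (every machine copy)."""
        return ALL if bit else 0

    def faulty(word, net):
        """Force the stuck-at value of `net` in the bit positions of its faulty copies."""
        for i, (f_net, sa) in enumerate(faults, start=1):
            if f_net == net:
                word = (word | (1 << i)) if sa == 1 else (word & ~(1 << i))
        return word

    def simulate(a, b, c):
        A = faulty(replicate(a), "a")
        Bv = faulty(replicate(b), "b")
        Cv = faulty(replicate(c), "c")
        n1 = faulty(A & Bv, "n1")                  # internal net n1 = a AND b
        return (n1 | Cv) & ALL                     # primary output z = n1 OR c

    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                z = simulate(a, b, c)
                good = z & 1
                detected = [faults[i - 1] for i in range(1, N + 1) if ((z >> i) & 1) != good]
                print(f"a={a} b={b} c={c}  detects {detected}")
    ```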

  2. Transition-fault test generation

    E-print Network

    Cobb, Bradley Douglas

    2013-02-22

    large to store in the memory of the tester. The proposed methods of test generation utilize stuck-at-fault tests to create transition-fault test sets of a smaller size. Greedy algorithms are used in the generation of both the stuck...

  3. Story of stacking fault tetrahedra

    Microsoft Academic Search

    M. Kiritani

    1997-01-01

    Stacking fault tetrahedra, although they have a peculiar structure, are the most general type of vacancy clustered defects in f.c.c. metals and alloys. Placing these stacking fault tetrahedra at the center, the story of point defect reaction is told. The structure of the defect and the energy relation are first described. Various experimental treatments which lead to the formation of

  4. Complementary-Logic Fault Detector

    NASA Technical Reports Server (NTRS)

    Wawrzynek, J. C.

    1985-01-01

    Circuit for checking two-line complementary-logic bits for single faults used as building block for self-checking memory interface for Hamming-coded data. Intended for such applications as fault-tolerant computing, data handling, and data transmission. Circuit performs exclusive-OR function. Many such circuits combined produce complete memory interface with both detection and correction abilities.

  5. Fault Tolerance in MPI Programs?

    Microsoft Academic Search

    William Gropp; Ewing Lusk

    2002-01-01

    This paper examines the topic of writing fault-tolerant MPI applications. We discuss the meaning of fault tolerance in general and what the MPI Standard has to say about it. We survey several approaches to this problem, namely checkpointing, restructuring a class of standard MPI programs, modifying MPI semantics, and extending the MPI spec- ification. We conclude that within certain constraints,

  6. Arcing faults in electrical equipment

    Microsoft Academic Search

    David Sweeting

    2009-01-01

    Electrical incidents that result in significant injuries to people are often the result of substantially unconstrained free-burning arcing fault currents within electrical equipment. It is necessary to understand the nature of these arcs and be able to quantify the parameters before it is possible to really comprehend what is actually happening inside arcing faults and how they cause injuries to

  7. Model extraction for fault isolation

    Microsoft Academic Search

    Rattikorn Hewett

    2004-01-01

    This paper presents a simulation-based approach for fault isolation in complex dynamic systems. A machine learning technique is used to extract, from simulated data, models representing regularities in system behavior. A heuristic based on the degree of coverage of the model on the data is then applied to isolate faults. To test tolerance to incomplete models, our simulation model only

  8. JPL Fault Protection Software Experiences

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin; Dvorak, Dan

    2008-01-01

    The objectives of this slide presentation are to: (1) Share JPL experiences by describing the evolution of fault protection during its history in deep space exploration, (2) Examine issues of fault protection scope and implementation that affect missions today, and (3) Discuss solutions for the problems of today and tomorrow.

  9. Fault management on communications satellites

    Microsoft Academic Search

    R. D. Coblin

    1999-01-01

    There is a military communications (MILCOM) satellite system which is designed to provide communications for military users. To meet this mission, a key function to be performed is autonomous fault management of the MILCOM system which includes a constellation of satellites and a collection of dedicated ground control stations. The impact of the MILCOM fault management system to the space,

  10. Estimated natural streamflow in the Rio San Jose upstream from the pueblos of Acoma and Laguna, New Mexico

    USGS Publications Warehouse

    Risser, D.W.

    1982-01-01

    The development of surface and ground water, which began about 1870 in the upper Rio San Jose drainage basin, has decreased the flow of the Rio San Jose on the Pueblo of Acoma and the Pueblo of Laguna. The purpose of this study was to estimate the natural streamflow in the Rio San Jose that would have entered the pueblos if no upstream water development had taken place. Estimates of natural flow were based upon streamflow and precipitation records, historical accounts of streamflow, records of irrigated acreage, and empirically derived estimates of the effects on streamflow of Bluewater Lake, groundwater withdrawals, and irrigation diversions. Natural streamflow in the Rio San Jose at the western boundary of the Pueblo of Acoma is estimated to be between 13,000 and 15,000 acre-feet per year, based on 55 years of recorded and reconstructed streamflow data from water years 1913 to 1972. Natural streamflow at the western boundary of the Pueblo of Laguna is estimated to be between 17,000 and 19,000 acre-feet per year for the same period. The error in these estimates of natural streamflow is difficult to assess accurately, but it probably is less than 25 percent. (USGS)

  11. Congener-specific polychlorinated biphenyl patterns in eggs of aquatic birds from the Lower Laguna Madre, Texas

    SciTech Connect

    Mora, M.A. [Texas A and M Univ., College Station, TX (United States)]

    1996-06-01

    Eggs from four aquatic bird species nesting in the Lower Laguna Madre, Texas, were collected to determine differences and similarities in the accumulation of congener-specific polychlorinated biphenyls (PCBs) and to evaluate PCB impacts on reproduction. Because of the different toxicities of PCB congeners, it is important to know which congeners contribute most to total PCBs. The predominant PCB congeners were 153, 138, 180, 110, 118, 187, and 92. Collectively, congeners 153, 138, and 180 accounted for 26 to 42% of total PCBs. Congener 153 was the most abundant in Caspian terns (Sterna caspia) and great blue herons (Ardea herodias) and congener 138 was the most abundant in snowy egrets (Egretta thula) and tricolored herons (Egretta tricolor). Principal component analysis indicated a predominance of higher chlorinated biphenyls in Caspian terns and great blue herons and lower chlorinated biphenyls in tricolored herons. Snowy egrets had a predominance of pentachlorobiphenyls. These results suggest that there are differences in PCB congener patterns in closely related species and that these differences are more likely associated with the species' diet rather than metabolism. Total PCBs were significantly greater (p < 0.05) in Caspian terns than in the other species. Overall, PCBs in eggs of birds from the Lower Laguna Madre were below concentrations known to affect bird reproduction.

  12. Vegetation history in southern Patagonia: first palynological results of the ICDP lake drilling project at Laguna Potrok Aike, Argentina

    NASA Astrophysics Data System (ADS)

    Schäbitz, Frank; Wille, Michael

    2010-05-01

    Laguna Potrok Aike, located in southern Argentina, is one of the very few locations suited to reconstructing the paleoenvironmental and climatic history of southern Patagonia. In the framework of the multinational ICDP deep drilling project PASADO, several long sediment cores to a composite depth of more than 100 m were obtained. Here we present first results of pollen analyses from sediment material of the core catcher. Absolute time control is not yet available. Pollen spectra with a spatial resolution of three meters show that Laguna Potrok Aike was always surrounded by Patagonian Steppe vegetation. However, the species composition underwent some marked proportional changes through time. The uppermost pollen spectra show a high contribution of Andean forest and charcoal particles, as can be expected for the Holocene and the end of the last glacial. The middle part shows no forest and relatively high amounts of pollen from steppe plants, indicating cold and dry full glacial conditions. The lowermost samples are characterized by a significantly different species composition, as steppe plants such as Asteraceae, Caryophyllaceae, Ericaceae, and Ephedra became more frequent. In combination with higher charcoal amounts and an algal species composition comparable to Holocene times, we suggest that conditions during the formation of sediments at the base of the record were more humid and/or warmer, causing a higher fuel availability for charcoal production compared with full glacial times.

  13. SFT: Scalable Fault Tolerance

    SciTech Connect

    Petrini, Fabrizio; Nieplocha, Jarek; Tipparaju, Vinod

    2006-04-15

    In this paper we will present a new technology that we are currently developing within the SFT: Scalable Fault Tolerance FastOS project, which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continuing execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency—requiring no changes to user applications. Our technology is based on a global coordination mechanism that enforces transparent recovery lines in the system, and TICK, a lightweight, incremental checkpointing software architecture implemented as a Linux kernel module. TICK is completely user-transparent and does not require any changes to user code or system libraries; it is highly responsive: an interrupt, such as a timer interrupt, can trigger a checkpoint in as little as 2.5 µs; and it supports incremental and full checkpoints with minimal overhead—less than 6% with full checkpointing to disk performed as frequently as once per minute.

  14. Colorado Regional Faults

    SciTech Connect

    Hussein, Khalid

    2012-02-01

    Citation Information: Originator: Earth Science & Observation Center (ESOC), CIRES, University of Colorado at Boulder Originator: Colorado Geological Survey (CGS) Publication Date: 2012 Title: Regional Faults Edition: First Publication Information: Publication Place: Earth Science & Observation Center, Cooperative Institute for Research in Environmental Science, University of Colorado, Boulder Publisher: Earth Science & Observation Center (ESOC), CIRES, University of Colorado at Boulder Description: This layer contains the regional faults of Colorado Spatial Domain: Extent: Top: 4543192.100000 m Left: 144385.020000 m Right: 754585.020000 m Bottom: 4094592.100000 m Contact Information: Contact Organization: Earth Science & Observation Center (ESOC), CIRES, University of Colorado at Boulder Contact Person: Khalid Hussein Address: CIRES, Ekeley Building Earth Science & Observation Center (ESOC) 216 UCB City: Boulder State: CO Postal Code: 80309-0216 Country: USA Contact Telephone: 303-492-6782 Spatial Reference Information: Coordinate System: Universal Transverse Mercator (UTM) WGS 1984 Zone 13N False Easting: 500000.00000000 False Northing: 0.00000000 Central Meridian: -105.00000000 Scale Factor: 0.99960000 Latitude of Origin: 0.00000000 Linear Unit: Meter Datum: World Geodetic System 1984 (WGS 1984) Prime Meridian: Greenwich Angular Unit: Degree Digital Form: Format Name: Shape file
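
    The projection parameters listed above correspond to the standard WGS 84 / UTM zone 13N definition (EPSG:32613); the short check below uses pyproj (an assumed dependency, not part of the dataset) to build that CRS and project a Boulder-area coordinate into the layer's units.

    ```python
    from pyproj import CRS, Transformer

    # WGS 84 / UTM zone 13N: central meridian -105, scale factor 0.9996,
    # false easting 500,000 m, matching the parameters listed above.
    crs = CRS.from_epsg(32613)
    print(crs.name)                       # 'WGS 84 / UTM zone 13N'

    # Sanity check: project the approximate location of Boulder, CO.
    to_utm = Transformer.from_crs("EPSG:4326", crs, always_xy=True)
    easting, northing = to_utm.transform(-105.27, 40.01)   # lon, lat
    print(f"easting {easting:,.0f} m, northing {northing:,.0f} m")
    ```

    The resulting easting and northing fall inside the extent quoted in the metadata, which is a quick way to confirm the units of the listed bounding box.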

  15. Improving Multiple Fault Diagnosability using Possible Conflicts

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2012-01-01

    Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.

  16. Synchronized sampling improves fault location

    SciTech Connect

    Kezunovic, M. [Texas A and M Univ., College Station, TX (United States)]; Perunicic, B. [Lamar Univ., Beaumont, TX (United States)]

    1995-04-01

    Transmission line faults must be located accurately to allow maintenance crews to arrive at the scene and repair the faulted section as soon as possible. Rugged terrain and geographical layout cause some sections of power transmission lines to be difficult to reach. In the past, a variety of fault location algorithms were introduced as either an add-on feature in protective relays or stand-alone implementation in fault locators. In both cases, the measurements of current and voltages were taken at one terminal of a transmission line only. Under such conditions, it may become difficult to determine the fault location accurately, since data from other transmission line ends are required for more precise computations. In the absence of data from the other end, existing algorithms have accuracy problems under several circumstances, such as varying switching and loading conditions, fault infeed from the other end, and random value of fault resistance. Most of the one-end algorithms were based on estimation of voltage and current phasors. The need to estimate phasors introduces additional difficulty in high-speed tripping situations where the algorithms may not be fast enough in determining fault location accurately before the current signals disappear due to the relay operation and breaker opening. This article introduces a unique concept of high-speed fault location that can be implemented either as a simple add-on to the digital fault recorders (DFRs) or as a stand-alone new relaying function. This advanced concept is based on the use of voltage and current samples that are synchronously taken at both ends of a transmission line. This sampling technique can be made readily available in some new DFR designs incorporating receivers for accurate sampling clock synchronization using the satellite Global Positioning System (GPS).
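
    For context, once the two-end measurements are synchronized and expressed as phasors, the classical two-terminal location follows from equating the fault-point voltage seen from each end: V_S - m·Z·I_S = V_R - (1 - m)·Z·I_R, so m = (V_S - V_R + Z·I_R) / (Z·(I_S + I_R)). The sketch below uses an invented line impedance and a test case constructed from an assumed fault position; the article's sample-based algorithm differs in detail.

    ```python
    import cmath

    Z = complex(8.0, 40.0)                 # total line series impedance, ohms (assumed)
    m_true = 0.30                          # "unknown" fault position used to build the test case

    # Build a consistent set of synchronized phasors from the fault-point voltage.
    V_S, I_S = cmath.rect(63500, 0.00), cmath.rect(1200, -0.60)
    I_R = cmath.rect(900, 2.40)
    V_F = V_S - m_true * Z * I_S           # voltage at the fault point
    V_R = V_F + (1.0 - m_true) * Z * I_R

    # Two-terminal estimate: equate the fault-point voltage seen from both ends.
    m = (V_S - V_R + Z * I_R) / (Z * (I_S + I_R))
    print(f"estimated fault location: {m.real:.2f} per unit from terminal S")  # ~0.30
    ```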

  17. Four-dimensional surface deformation analysis, snow volume calculation, and fault mapping with Ground Based Tripod LiDAR

    NASA Astrophysics Data System (ADS)

    Bawden, G. W.; Schmitz, S.; Howle, J. F.; Laczniak, R. J.; Bowers, J.; Osterhuber, R.; Irvine, P.

    2005-12-01

    Ground-based tripod or terrestrial LiDAR (T-LiDAR) has the potential to significantly advance science and hazard assessments in a broad number of research disciplines using remotely collected ultra-high resolution (centimeter to subcentimeter) and accurate (~4 mm) digital imagery of the scanned target. This can be accomplished at distances from 3 to >800 meters, depending on the instrument and target's infrared reflective properties. Scientific analysis of the ultra-high resolution T-LiDAR imagery is through the direct analysis of three-dimensional datasets to calculate target dimensions, volume, and area; alternatively, with repeated surveys of the target, differential four-dimensional time-series analysis can be used to evaluate volume change, target stability, surface displacements, or change detection. Three examples are given that show a range of scientific T-LiDAR application to earth science research: (1) fault mapping at Yucca Flat, Nev.; (2) snow volume calculations in the central Sierra Nevada, Calif.; and (3) change detection and hazard mitigation for the June 1, 2005, Laguna Beach Landslide, southern Calif. The fault-mapping example is a static analysis of many detailed (1- to 5-cm spot spacing) T-LiDAR scans collected on the Nevada Test Site at Yucca Flat at a crater created by an underground nuclear test. The analysis identifies and maps centimeter-level fractures and faults in and near the crater. The level of detail of the T-LiDAR-generated fault and fracture database enhances existing fracture maps. The snow-volume calculation example is a differential analysis of three T-LiDAR surveys at the U.C. Berkeley Central Sierra Snow Lab between March and June 2004. The surveys were aligned and differenced to calculate spatially varying snow volumes. These volumes were combined with water-density measurements to estimate the total water volume. The change detection and hazard-assessment example analyzes repeated T-LiDAR imagery following the June 1, 2005, Laguna Beach Landslide to assess hillslope and structure stability of the slide and the immediate surroundings, and to evaluate T-LiDAR as a hazards response tool. There were no land-surface changes within the landslide after the initial T-LiDAR survey (10 to 21 days after the event) other than minor small-scale readjustments, property recovery efforts, and monitoring within the landslide. T-LiDAR provided direct measurements for the full surrounding region and confirmed that many of the nearby homes were not moving during this time period and could be reinhabited.
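
    The survey-differencing step behind the snow-volume example reduces to subtracting two aligned gridded surfaces and summing over the cell area; the grids, cell size, and snow density below are placeholders rather than the Snow Lab data.

    ```python
    import numpy as np

    cell = 0.25                                    # grid spacing, m (assumed)
    rng = np.random.default_rng(0)

    # Aligned gridded surfaces from two T-LiDAR surveys (synthetic stand-ins).
    ground = rng.normal(0.0, 0.02, (400, 400))                   # snow-free reference survey
    snow_cover = ground + 1.2 + 0.3 * rng.random((400, 400))     # later survey with snowpack

    snow_depth = snow_cover - ground                             # per-cell difference, m
    snow_volume = np.nansum(snow_depth) * cell ** 2              # m^3

    # Convert to water equivalent with a measured bulk snow density (assumed here).
    rho_snow, rho_water = 450.0, 1000.0                          # kg/m^3
    water_volume = snow_volume * rho_snow / rho_water
    print(f"snow volume ~{snow_volume:,.0f} m^3, water equivalent ~{water_volume:,.0f} m^3")
    ```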

  18. Selected Hydrologic, Water-Quality, Biological, and Sedimentation Characteristics of Laguna Grande, Fajardo, Puerto Rico, March 2007-February 2009

    USGS Publications Warehouse

    Soler-López, Luis R.; Santos, Carlos R.

    2010-01-01

    Laguna Grande is a 50-hectare lagoon in the municipio of Fajardo, located in the northeasternmost part of Puerto Rico. Hydrologic, water-quality, and biological data were collected in the lagoon between March 2007 and February 2009 to establish baseline conditions and determine the health of Laguna Grande on the basis of preestablished standards. In addition, a core of bottom material was obtained at one site within the lagoon to establish sediment depositional rates. Water-quality properties measured onsite (temperature, pH, dissolved oxygen, specific conductance, and water transparency) varied temporally rather than areally. All physical properties were in compliance with current regulatory standards established for Puerto Rico. Nutrient concentrations were very low and in compliance with current regulatory standards (less than 5.0 and 1.0 milligrams per liter for total nitrogen and total phosphorus, respectively). The average total nitrogen concentration was 0.28 milligram per liter, and the average total phosphorus concentration was 0.02 milligram per liter. Chlorophyll a was the predominant form of photosynthetic pigment in the water. The average chlorophyll-a concentration was 6.2 micrograms per liter. Bottom sediment accumulation rates were determined in sediment cores by modeling the downcore activities of lead-210 and cesium-137. Results indicated a sediment depositional rate of about 0.44 centimeter per year. At this rate of sediment accretion, the lagoon may become a marshland in about 700 to 900 years. About 86 percent of the community primary productivity in Laguna Grande was generated by periphyton, primarily algal mats and seagrasses, and the remaining 14 percent was generated by phytoplankton in the water column. Based on the diel studies, the total average net community productivity equaled 5.7 grams of oxygen per cubic meter per day (2.1 grams of carbon per cubic meter per day). Most of this productivity was ascribed to periphyton and macrophytes, which produced 4.9 grams of oxygen per cubic meter per day (1.8 grams of carbon per cubic meter per day). Phytoplankton, the plant and algal component of plankton, produced about 0.8 gram of oxygen per cubic meter per day (0.3 gram of carbon per cubic meter per day). The total diel community respiration rate was 23.4 grams of oxygen per cubic meter per day. The respiration rate ascribed to plankton, which consists of all free-floating and swimming organisms in the water column, composed 10 percent of this rate (2.9 grams of oxygen per cubic meter per day); respiration by all other organisms composed the remaining 90 percent (20.5 grams of oxygen per cubic meter per day). Plankton gross productivity was 3.7 grams of oxygen per cubic meter per day, equivalent to about 13 percent of the average gross productivity for the entire community (29.1 grams of oxygen per cubic meter per day). The average phytoplankton biomass values in Laguna Grande ranged from 6.0 to 13.6 milligrams per liter. During the study, Laguna Grande contained a phytoplankton standing crop of approximately 5.8 metric tons. The phytoplankton community had a turnover (renewal) rate of about 153 times per year, or roughly once every 2.5 days. Fecal indicator bacteria concentrations ranged from 160 to 60,000 colonies per 100 milliliters. Concentrations generally were greatest in areas near residential and commercial establishments, and frequently exceeded current regulatory standards established for Puerto Rico.

  19. Fault Identification by Unsupervised Learning Algorithm

    NASA Astrophysics Data System (ADS)

    Nandan, S.; Mannu, U.

    2012-12-01

    Contemporary fault identification techniques predominantly rely on the surface expression of the fault. This biased observation is inadequate to yield detailed fault structures in areas with surface cover such as cities, deserts, and vegetation, and it cannot capture changes in fault patterns with depth. Furthermore, it is difficult to estimate the structure of faults that do not generate any surface rupture; many disastrous events have been attributed to these blind faults. Faults and earthquakes are closely related, as earthquakes occur on faults and faults grow by accumulation of coseismic rupture. For a better seismic risk evaluation it is imperative to recognize and map these faults. We implement a novel approach to identify seismically active fault planes from the three-dimensional hypocenter distribution by making use of unsupervised learning algorithms. We employ the K-means clustering algorithm and the Expectation Maximization (EM) algorithm, modified to identify planar structures in the spatial distribution of hypocenters after filtering out isolated events. We examine differences between the faults reconstructed by deterministic assignment in K-means and probabilistic assignment in the EM algorithm. The method is conceptually identical to methodologies developed by Ouillon et al. (2008, 2010) and has been extensively tested on synthetic data. We determined the sensitivity of the methodology to uncertainties in hypocenter location, density of clustering, and cross-cutting fault structures. The method has been applied to datasets from two contrasting regions: while the Kumaon Himalaya is a convergent plate boundary, Koyna-Warna lies in the middle of the Indian Plate but has a history of triggered seismicity. The reconstructed faults were validated against the orientations of mapped faults and the focal mechanisms of these events determined through waveform inversion. The reconstructed faults could be used to resolve the fault plane ambiguity in focal mechanism determination and to constrain fault orientations for finite source inversions. The faults produced by the method exhibited good correlation with the fault planes obtained from focal mechanism solutions and with previously mapped faults.
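
    A compact sketch of the clustering-plus-plane-fitting idea on synthetic hypocenters, with scikit-learn's stock KMeans standing in for the modified algorithms described above (the EM variant and the filtering of isolated events are omitted); each cluster's plane is recovered from the singular vector with the smallest singular value.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)

    def synthetic_fault(n, origin, strike_vec, dip_vec, noise=0.2):
        """Scatter n hypocenters about a plane spanned by two direction vectors."""
        u = rng.uniform(-5, 5, (n, 1))
        v = rng.uniform(-3, 0, (n, 1))
        return origin + u * strike_vec + v * dip_vec + rng.normal(0, noise, (n, 3))

    # Two intersecting planar clouds of "hypocenters" (x, y, depth), km.
    f1 = synthetic_fault(300, np.array([0, 0, -5.0]), np.array([1, 0, 0]), np.array([0, 0.3, -0.95]))
    f2 = synthetic_fault(300, np.array([2, 1, -5.0]), np.array([0, 1, 0]), np.array([0.5, 0, -0.87]))
    xyz = np.vstack([f1, f2])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(xyz)

    for k in range(2):
        pts = xyz[labels == k]
        # Best-fit plane: normal = singular vector with the smallest singular value.
        _, _, vt = np.linalg.svd(pts - pts.mean(axis=0), full_matrices=False)
        normal = vt[-1]
        dip = np.degrees(np.arccos(abs(normal[2])))
        print(f"cluster {k}: {len(pts)} events, plane normal {np.round(normal, 2)}, dip ~{dip:.0f} deg")
    ```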

  20. Why the 2002 Denali fault rupture propagated onto the Totschunda fault: implications for fault branching and seismic hazards

    USGS Publications Warehouse

    Schwartz, David P.; Haeussler, Peter J.; Seitz, Gordon G.; Dawson, Timothy E.

    2012-01-01

    The propagation of the rupture of the Mw7.9 Denali fault earthquake from the central Denali fault onto the Totschunda fault has provided a basis for dynamic models of fault branching in which the angle of the regional or local prestress relative to the orientation of the main fault and branch plays a principal role in determining which fault branch is taken. GeoEarthScope LiDAR and paleoseismic data allow us to map the structure of the Denali-Totschunda fault intersection and evaluate controls of fault branching from a geological perspective. LiDAR data reveal the Denali-Totschunda fault intersection is structurally simple with the two faults directly connected. At the branch point, 227.2 km east of the 2002 epicenter, the 2002 rupture diverges southeast to become the Totschunda fault. We use paleoseismic data to propose that differences in the accumulated strain on each fault segment, which express differences in the elapsed time since the most recent event, were one important control of the branching direction. We suggest that data on event history, slip rate, paleo offsets, fault geometry and structure, and connectivity, especially on high-slip-rate, short-recurrence-interval faults, can be used to assess the likelihood of branching and its direction. Analysis of the Denali-Totschunda fault intersection has implications for evaluating the potential for a rupture to propagate across other types of fault intersections and for characterizing sources of future large earthquakes.

  1. An Evaluation of Seagrass Community Structure and Its Role in Green Sea Turtle (Chelonia mydas) Foraging Dynamics in the Lower Laguna Madre

    E-print Network

    Weatherall, Tracy F.

    2010-07-14

    Satellite tracking data of juvenile and subadult green turtles captured and released by Texas A&M University at Galveston's Sea Turtle and Fisheries Ecology Research Lab (STFERL) from the lower Laguna Madre indicate green sea turtles (Chelonia mydas...

  2. High-resolution pinger, airgun and sparker seismic surveys in Laguna Potrok-Aike: Imaging the sediment infill prior to deep drilling

    Microsoft Academic Search

    Daniel Ariztegui; Flavio Anselmetti; Marc De Batist

    With the initial goals of determining the thickness, geometries, and distribution of sediments in Laguna Potrok-Aike, a complete assembly of reflection seismic investigations has been carried out in various campaigns. The results allow identification of potential sediment types, associated depositional processes, and past lake level fluctuations. All these features are critical for determining ideal coring sites for future deep drilling locations.

  3. Authigenic, detrital and diagenetic minerals in the Laguna Potrok Aike sediment

    E-print Network

    Long, Bernard

    Keywords: XRD, clays, U-Th measurements, vivianite, paleoclimate, Late Quaternary, Argentina, ICDP-PASADO. The ~100 m-long Laguna Potrok Aike sediment Potrok Aike (51°58' S, 70°23' W in south-eastern Patagonia, Argentina) is one of the older maars

  4. Climatically induced lake level changes during the last two millennia as reflected in sediments of Laguna Potrok Aike, southern Patagonia (Santa Cruz, Argentina)

    Microsoft Academic Search

    Torsten Haberzettl; Michael Fey; Andreas Lücke; Nora Maidana; Christoph Mayr; Christian Ohlendorf; Frank Schäbitz; Gerhard H. Schleser; Michael Wille; Bernd Zolitschka

    2005-01-01

    The volcanogenic lake Laguna Potrok Aike, Santa Cruz, Argentina, reveals an unprecedented continuous high resolution climatic record for the steppe regions of southern Patagonia. With the applied multi-proxy approach rapid climatic changes before the turn of the first millennium were detected followed by medieval droughts which are intersected by moist and/or cold periods of varying durations and intensities. The ‘total

  5. Cooling History for the Sierra Laguna Blanca (NW Argentina) on the Southern Puna Plateau, Central Andes

    NASA Astrophysics Data System (ADS)

    Zhou, R.; Schoenbohm, L. M.; Sobel, E. R.; Stockli, D. F.; Glodny, J.

    2014-12-01

    Various dynamic models have been proposed to explain deformation history and topographic evolution for the southern Altiplano-Puna Plateau, including inversion of the Cretaceous Salta rift structures, formation of an orogenic wedge, flat subduction, climate-tectonic coupling, and lithospheric foundering. Controversies persist in the southern Puna Plateau, where preexisting rift structures are unknown and Cenozoic shortening events are sparsely documented. The 6-km high Sierra Laguna Blanca (LB) (NW Argentina) is among the most outstanding topographic features in the interior of the southern Puna Plateau. We document cooling history for LB with apatite (U-Th)/He, apatite fission-track and zircon (U-Th)/He thermochronometers for a vertical profile from 3.6-5.6 km on its eastern flank. Preliminary results from apatite fission-track (AFT) analysis yield ages ranging from 45-65 Ma, with top samples being the oldest. Dpar values for all samples are low (1.54 to 1.74), suggesting a relatively low-temperature partial annealing zone. All samples have shortened mean track lengths ranging from 10.9 to 12.3 micrometers, suggesting partial resetting. Preliminary apatite U-Th/He (AHe) ages are compatible with AFT ages but are widely dispersed, perhaps due to U zoning and small U-rich inclusions which have been observed on AFT external detectors. Inverse modeling of AFT data and selected AHe data using the HeFTy program reveal two major cooling events for LB. All models start ~90-70 Ma and immediately decrease their temperatures to ~60°C before ~50 Ma. Samples may have stayed ~60°C without additional thermal events until ~15-10 Ma, when the most recent cooling event took place, bringing all samples to surface temperature. Our first finding is that the interior of the southern Puna Plateau may have been influenced by the Salta Rift during the Cretaceous, extending the known zone of influence further west. Second, the most recent cooling phase (mid-late Miocene) is consistent with out-of-sequence deformation in the southern Puna Plateau, which might be genetically linked to a proposed lithospheric dripping event.

  6. Widespread Gravity Changes and CO2 Degassing at Laguna Del Maule, Chile, Accompanying Rapid Uplift

    NASA Astrophysics Data System (ADS)

    Miller, C. A.; Williams-Jones, G.; Le Mevel, H.; Tikoff, B.

    2014-12-01

    Laguna Del Maule (LdM), located on the Andes range crest in central Chile, is one of the most active rhyolite volcanic fields on Earth, with 36 postglacial rhyolitic eruptions. Since 2007, LdM has accumulated over 1.8 m of uplift at rates of up to 300 mm per year. We hypothesize that this rapid uplift results from the injection of basaltic magma into the base of a rhyolite chamber. To test this hypothesis we undertook a dynamic gravity study, complemented with CO2 soil gas measurements. We established a 35-station dynamic gravity and differential GPS network around the lake in April 2013 and undertook initial CO2 measurements. We resurveyed the network in January 2014 and expanded the soil gas coverage. From these surveys we calculated 0.134 ± 0.030 mGal of residual gravity change (Δg) accompanied by 281 ± 13 mm of uplift over the 10-month period. Statistical tests show that the results of the 2013 and 2014 surveys are different at p < 0.01. The Δg anomaly occupies an area of 5 km × 10 km, oriented E/W and centred in the south-eastern part of the lake, and is coincident with the area of maximum uplift. Gaussian integration of Δg yields an excess mass of ~1.2 × 10¹¹ kg. Assuming a density of 2700 kg/m³, this corresponds to a volume of around 0.044 km³. In the 10-month interval between surveys the calculated volume change rate was 41 ± 1 m³/s. We examine gravity/height change (Δg/Δh) relationships to determine whether the observed changes relate solely to increased mass or whether density changes are involved. In addition to the Δg and Δh measurements, CO2 soil concentrations of up to 7% are recorded around the entire lake basin. We will discuss modelling of the Δh and Δg data to explore the geometry and physical parameters of the mass and pressure source, and discuss the relationship of the CO2 anomalies to these models.
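
    The mass and volume figures quoted above follow from Gauss's theorem, ΔM = (1 / (2πG)) ∫ Δg dA, and a density assumption. The snippet below is only an order-of-magnitude check that treats the anomaly as uniform over its footprint; the study integrates the actual spatially varying field.

    ```python
    import numpy as np

    G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2

    # Rough excess-mass check: uniform 0.134 mGal anomaly over the 5 km x 10 km footprint.
    dg = 0.134e-5                      # 0.134 mGal expressed in m/s^2
    area = 5e3 * 10e3                  # anomaly footprint, m^2
    mass_rough = dg * area / (2.0 * np.pi * G)
    print(f"rough excess mass ~{mass_rough:.1e} kg (abstract quotes ~1.2e11 kg)")

    # Volume implied by the quoted mass and the assumed density of 2700 kg/m^3.
    volume = 1.2e11 / 2700.0
    print(f"implied volume    ~{volume:.1e} m^3 = {volume / 1e9:.3f} km^3")
    ```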

  7. Climatically induced depositional dynamics - Results of an areal sediment survey at Laguna Potrok Aike, Argentina

    NASA Astrophysics Data System (ADS)

    Kastner, S.; Ohlendorf, C.; Haberzettl, T.; Lücke, A.; Maidana, N. I.; Mayr, C.; Schäbitz, F.; Zolitschka, B.

    2009-12-01

    The ca. 770 ka old maar lake Laguna Potrok Aike (51°S, 70°W) is an ICDP site within the “Potrok Aike maar lake Sediment Archive Drilling prOject” (PASADO) and was drilled in 2008. The lake - situated in the dry steppe environment of south-eastern Patagonia - is a palaeolimnological key site for the reconstruction of terrestrial climatic and environmental conditions in the Southern Hemisphere. Climate patterns are characterized by high sensitivity to variations in westerly wind and pressure systems. Depositional changes inferred from the lacustrine sediment sequence and lake level terraces provide detailed information about the water budget of the lake which is related to the variability of the Southern Hemispheric Westerlies. In addition to the existing high-resolution and long-term multi-proxy investigation this study focuses on the understanding of depositional dynamics, which control the characteristics and spatial distribution of the lake’s sediment infill. This areal information will improve the interpretations of the long sediment record recovered within the PASADO project. A dense grid of 63 gravity cores and 40 near-shore surface samples were recovered to survey the spatial sediment distribution. Using X-ray fluorescence and magnetic susceptibility scanning data the cores were correlated and linked to a previously established age model. Samples of the 2005 sediment surface were taken from all cores. The scanning profiles do not allow unequivocal correlation of basin and littoral cores across the steep slopes. Thus, sub-sampling of five selected older time intervals covering distinctive hydrological settings back to AD 1380 was restricted to well correlated deep central basin cores. Distribution maps of multi-proxy datasets were created for every time slice using kriging. Sediment deposition in the lake is not only sensitive to lake level changes but also controlled by the dominant westerly winds. The longest wind fetch occurs at the eastern lake side and results in strong wave action, internal currents and polymictic conditions. Furthermore, influence of episodic inflows, ground water springs and of the surrounding geology is observed. The surficial sediments point to frequent relocation of littoral sediment at the eastern shore followed by transport to profundal accumulation areas. The sub-recent spatial sediment distribution is interpreted in context of these modern processes. Changing wind patterns and varying lake levels are assumed to cause modifications of depositional dynamics during the selected late Holocene time intervals. Low lake levels prior to the Little Ice Age and strengthened winds result in sediment distribution patterns comparable to modern times. In contrast, high lake levels during the Little Ice Age and extenuated westerly winds result in a more homogeneous sediment distribution indicating strengthened influence of the tributaries.
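
    A minimal gridding sketch for a single time slice; scipy's griddata is used here as a simple stand-in for the kriging applied in the study, and the core coordinates and proxy values are synthetic.

    ```python
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(3)

    # Core locations (km, lake coordinates) and a proxy value for one time slice (synthetic).
    x = rng.uniform(0.0, 3.5, 63)
    y = rng.uniform(0.0, 3.0, 63)
    proxy = 2.0 + 0.8 * x - 0.5 * y + rng.normal(0.0, 0.1, 63)   # e.g. an XRF element ratio

    # Regular grid over the basin and interpolation onto it.
    gx, gy = np.meshgrid(np.linspace(0, 3.5, 100), np.linspace(0, 3.0, 100))
    grid = griddata((x, y), proxy, (gx, gy), method="cubic")
    print("gridded map:", grid.shape, "NaNs outside the data hull:", int(np.isnan(grid).sum()))
    ```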

  8. Fault-Tolerant Quantum Computation for Local Leakage Faults

    E-print Network

    Panos Aliferis; Barbara M. Terhal

    2006-05-26

    We provide a rigorous analysis of fault-tolerant quantum computation in the presence of local leakage faults. We show that one can systematically deal with leakage by using appropriate leakage-reduction units such as quantum teleportation. The leakage noise is described by a Hamiltonian and the noise is treated coherently, similar to general non-Markovian noise analyzed in Refs. quant-ph/0402104 and quant-ph/0504218. We describe ways to limit the use of leakage-reduction units while keeping the quantum circuits fault-tolerant and we also discuss how leakage reduction by teleportation is naturally achieved in measurement-based computation.

  9. Vibration-based fault detection of sharp bearing faults in helicopters

    E-print Network

    Paris-Sud XI, Université de

    Vibration-based fault detection of sharp bearing faults in helicopters Victor Girondin , Herve the characteristic symptoms of sharp bearing faults (like localized spalling) from vibratory analysis. However mainly in identifying fault frequencies. Local bearing faults induce temporal periodic and impulsive

  10. Fault detection and management system for fault tolerant switched reluctance motor drives

    Microsoft Academic Search

    C. M. Stephens

    1989-01-01

    Fault-tolerance characteristics of the switched reluctance motor are discussed, and winding fault detectors are presented which recognize shorted motor windings. Logic circuitry in the inverter blocks the power switch gating signals of the affected phase at the receipt of a fault-detection signal from one of the fault detectors. The fault detectors were implemented on a laboratory drive system to demonstrate

  11. A Framework for Optimal Fault-Tolerant Control Synthesis: Maximize Pre-Fault while

    E-print Network

    Kumar, Ratnesh

    1 A Framework for Optimal Fault-Tolerant Control Synthesis: Maximize Pre-Fault while Minimize Post-Fault State University, Ames, Iowa 50011 Abstract--In an earlier work, we introduced a framework for fault existence. In this paper, we introduce the synthesis of an optimal fault- tolerant supervisory controller

  12. An Approach to Fault Modeling and Fault Seeding Using the Program Dependence Graph1

    E-print Network

    Harrold, Mary Jean

    An Approach to Fault Modeling and Fault Seeding Using the Program Dependence Graph1 Mary Jean harrold@cis.ohio-state.edu ofut@isse.gmu.edu kanu@eng.sun.com Abstract We present a fault-classification scheme and a fault-seeding method that are based on the manifes- tation of faults in the program

  13. Quantifying Natural Fault Geometry: Statistics of Splay Fault Angles by Ryosuke Ando,*

    E-print Network

    Shaw, Bruce E.

    Short Note Quantifying Natural Fault Geometry: Statistics of Splay Fault Angles by Ryosuke Ando,* Bruce E. Shaw, and Christopher H. Scholz Abstract We propose a new approach to quantifying fault system geometry, using an objective fit of the fault geometry to a test function, specifically here a fault branch

  14. Fault-Trajectory Approach for Fault Diagnosis on Analog Circuits Carlos Eduardo Savioli,

    E-print Network

    Paris-Sud XI, Université de

    Fault-Trajectory Approach for Fault Diagnosis on Analog Circuits Carlos Eduardo Savioli, Claudio C Mesquita@coe.ufrj.br Abstract This issue discusses the fault-trajectory approach suitability for fault on this concept for ATPG for diagnosing faults on analog networks. Such method relies on evolutionary techniques

  15. Computation of Diagnosable Fault-Occurrence Indices for Systems with Repeatable Faults

    E-print Network

    Kumar, Ratnesh

    1 Computation of Diagnosable Fault-Occurrence Indices for Systems with Repeatable Faults Changyan Zhou, Member, IEEE, and Ratnesh Kumar, Fellow, IEEE, Abstract-- Certain faults, such as intermittent or non- persistent faults, may occur repeatedly. For discrete-event sys- tems prone to repeated faults

  16. Microfracture analysis of fault growth and wear processes, Punchbowl Fault, San Andreas system, California

    E-print Network

    Chester, Frederick M.

    Hypotheses for the origin of damage along large-displacement faults by the processes of fault growth and wear are tested. Oriented samples 0.075 m to 1 km from the Punchbowl fault surface (i.e. the ultracataclasite layer) document…

  17. Differential Fault Analysis on SMS4 Using a Single Fault

    E-print Network

    A Differential Fault Analysis (DFA) attack is a powerful cryptanalytic technique that can be used to retrieve secret key material. Fault attacks are attacks in which an adversary tries…

  18. Fast high-level fault simulator

    Microsoft Academic Search

    S. Deniziak; K. Sapiecha

    2004-01-01

    A new fast fault simulation technique is presented for calculating fault propagation through high level primitives (HLPs). Reduced ordered ternary decision diagrams are used to describe HLPs. The technique is implemented in an HTDD fault simulator. The simulator is evaluated with some ITC99 benchmarks. Besides high efficiency (in comparison with existing fault simulators), it shows flexibility for the adoption of

  19. Data parallel sequential circuit fault simulation

    Microsoft Academic Search

    Minesh B. Amin; Bapiraju Vinnakota

    1996-01-01

    Sequential circuit fault simulation is a compute-intensive problem. Parallel simulation is one method to reduce fault simulation time. In this paper, we discuss a novel technique to partition the fault set for the fault parallel simulation of sequential circuits on multiple processors. When applied statically, the technique can scale well for up to thirty two processors on an ethernet. The

  20. Expert System Detects Power-Distribution Faults

    NASA Technical Reports Server (NTRS)

    Walters, Jerry L.; Quinn, Todd M.

    1994-01-01

    Autonomous Power Expert (APEX) computer program is prototype expert-system program detecting faults in electrical-power-distribution system. Assists human operators in diagnosing faults and deciding what adjustments or repairs needed for immediate recovery from faults or for maintenance to correct initially nonthreatening conditions that could develop into faults. Written in Lisp.

  1. Arc fault detection based on wavelet packet

    Microsoft Academic Search

    Wen-Jun Li; Yuan-Chun Li

    2005-01-01

    Methods of arc fault detection are being developed to protect against conditions that may cause fire on aircraft. This paper provides a new method for the detection of arc faults based on the wavelet packet transform. Wavelet packets with automatically adjusted time windows are used to distinguish arc faults from non-arcing fault transient phenomena and conditions similar to an arc…
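
    As a rough illustration of the kind of feature such a detector can use, the sketch below compares wavelet-packet sub-band energies for a synthetic load current with and without arcing-like broadband bursts. It assumes the PyWavelets package and invented signal parameters, and it is not the algorithm of the paper.

        # Wavelet-packet band energies as a crude arcing indicator (illustrative only).
        import numpy as np
        import pywt

        fs = 10_000
        t = np.arange(0, 0.2, 1 / fs)
        load = np.sin(2 * np.pi * 60 * t)               # normal 60 Hz load current

        rng = np.random.default_rng(0)
        mask = (t > 0.08) & (t < 0.12)                  # arcing window
        bursts = np.zeros_like(t)
        bursts[mask] = rng.normal(0.0, 0.4, mask.sum()) # broadband bursts added by the arc
        arcing = load + bursts

        def band_energies(x, level=4):
            wp = pywt.WaveletPacket(data=x, wavelet="db4", mode="symmetric", maxlevel=level)
            return np.array([np.sum(node.data ** 2) for node in wp.get_level(level, order="freq")])

        normal_e = band_energies(load)
        arcing_e = band_energies(arcing)
        hf = slice(len(normal_e) // 2, None)            # upper half of the frequency bands
        print("high-frequency energy ratio (arcing / normal):",
              round(float(arcing_e[hf].sum() / (normal_e[hf].sum() + 1e-12)), 1))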

  2. Sensor Fault Diagnosis Using Principal Component Analysis

    E-print Network

    Sharifi, Mahmoudreza

    2010-07-14

    Contributions include a stochastic method for the decision process and a nonlinear approach to sensor fault diagnosis. In this study, first a geometrical approach to sensor fault detection is proposed; the sensor fault is isolated based on the direction…
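
    PCA-based sensor fault detection is commonly implemented by fitting a principal-component model of normal operation and flagging samples whose squared prediction error (SPE) leaves the model subspace; the residual direction then points at the suspect sensor. The sketch below is a generic NumPy version of that idea on synthetic data, not the specific method developed in the thesis.

        # Generic PCA / squared-prediction-error sketch for sensor fault detection (synthetic data).
        import numpy as np

        rng = np.random.default_rng(1)
        # Normal operation: 4 correlated "sensors" driven by 2 latent variables.
        latent = rng.normal(size=(500, 2))
        mixing = np.array([[1.0, 0.5], [0.8, -0.3], [0.2, 1.1], [0.6, 0.7]])
        X = latent @ mixing.T + 0.05 * rng.normal(size=(500, 4))

        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        P = Vt[:2].T                                   # retain two principal components

        def spe(x):
            """Squared prediction error and residual vector of a sample."""
            r = (x - mean) - P @ (P.T @ (x - mean))
            return float(r @ r), r

        threshold = np.percentile([spe(row)[0] for row in X], 99)

        x_new = latent[0] @ mixing.T
        x_new[2] += 1.5                                # inject a bias fault on sensor 2
        value, residual = spe(x_new)
        print("SPE =", round(value, 3), "| threshold =", round(float(threshold), 3),
              "| most suspicious sensor:", int(np.argmax(np.abs(residual))))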

  3. The growth and interaction of normal faults

    Microsoft Academic Search

    Anupma Gupta

    2001-01-01

    Field and satellite observations combined with simple theoretical and numerical analyses are used to construct and apply a model for the growth and interaction of normal faults. Starting with an analysis of the growth and interaction of very small faults in simple systems, results are applied toward a greater understanding of large faults in more complex fault systems. The first

  4. Sensitivity analysis of modular dynamic fault trees

    Microsoft Academic Search

    Yong Ou; Joanne Bechta Dugan

    2000-01-01

    Dynamic fault tree analysis, as currently supported by the Galileo software package, provides an effective means for assessing the reliability of embedded computer-based systems. Dynamic fault trees extend traditional fault trees by defining special gates to capture sequential and functional dependency characteristics. A modular approach to the solution of dynamic fault trees effectively applies Binary Decision Diagram (BDD) and Markov…

  5. Practical sensor fault tolerant control system

    Microsoft Academic Search

    S. S. Yang; Ernie Che Mid; H. A. F. Mohamed; M. Moghavvemi

    2008-01-01

    The fundamental purpose of an FTCS scheme is to ensure that faults do not result in system failure and that the best achievable performance is maintained, even if at a degraded level. In this paper we propose a fault tolerant control design consisting of two parts: a nominal performance controller and a fault detection element to provide fault compensating signals…

  6. Dynamics of Earthquake Faults

    E-print Network

    J. M. Carlson; J. S. Langer; B. E. Shaw

    1993-08-05

    We present an overview of our ongoing studies of the rich dynamical behavior of the uniform, deterministic Burridge--Knopoff model of an earthquake fault. We discuss the behavior of the model in the context of current questions in seismology. Some of the topics considered include: (1) basic properties of the model, such as the magnitude vs. frequency distribution and the distinction between small and large events; (2) dynamics of individual events, including dynamical selection of rupture propagation speeds; (3) generalizations of the model to more realistic, higher dimensional models; (4) studies of predictability, in which artificial catalogs generated by the model are used to test and determine the limitations of pattern recognition algorithms used in seismology.
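
    A quasi-static, cellular-automaton style caricature of the spring-block idea already reproduces one of the basic properties mentioned above, namely the coexistence of many small events with occasional large ones. The sketch below is such a caricature with arbitrarily chosen spring constants, loading rate, and friction threshold; it is not the authors' dynamical (velocity-weakening) formulation of the Burridge-Knopoff model.

        # Quasi-static spring-block caricature in the spirit of the Burridge-Knopoff model.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 200                                   # number of blocks
        kc, kp = 1.0, 0.1                         # coupling and driver-plate springs (arbitrary)
        threshold = 1.0                           # static friction threshold (arbitrary units)
        force = rng.uniform(0, threshold, n)      # random initial loading

        event_sizes = []
        for _ in range(20_000):
            force += kp * 0.001                   # slow, uniform loading by the driver plate
            size = 0
            # Relaxation: any block over threshold slips and sheds its force to its neighbours.
            while (over := np.flatnonzero(force >= threshold)).size:
                for i in over:
                    drop, force[i] = force[i], 0.0
                    share = drop * kc / (2 * kc + kp)    # transferred to each neighbour
                    if i > 0:
                        force[i - 1] += share
                    if i < n - 1:
                        force[i + 1] += share
                    size += 1
            if size:
                event_sizes.append(size)

        sizes = np.array(event_sizes)
        print("events:", sizes.size, "| largest avalanche:", int(sizes.max()),
              "| fraction of single-block events:", float(np.mean(sizes == 1).round(2)))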

  7. Fault Tolerant State Machines

    NASA Technical Reports Server (NTRS)

    Burke, Gary R.; Taft, Stephanie

    2004-01-01

    State machines are commonly used to control sequential logic in FPGAs and ASICs. An errant state machine can cause considerable damage to the device it is controlling. For example in space applications, the FPGA might be controlling Pyros, which when fired at the wrong time will cause a mission failure. Even a well designed state machine can be subject to random errors as a result of SEUs from the radiation environment in space. There are various ways to encode the states of a state machine, and the type of encoding makes a large difference in the susceptibility of the state machine to radiation. In this paper we compare 4 methods of state machine encoding and find which method gives the best fault tolerance, as well as determining the resources needed for each method.
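
    One simple way to see why the encoding matters is to compare the minimum Hamming distance between state codes: with dense binary encoding a single upset can silently turn one valid state into another, whereas one-hot or distance-enforcing codes make most single-bit flips land on an invalid word that the logic can detect. The sketch below is an illustrative comparison for an assumed four-state machine; the code words, and the distance-3 code in particular, are invented examples and are not the encodings evaluated in the paper.

        # Compare how vulnerable different state encodings are to a single-bit upset.
        from itertools import combinations

        def hamming(a: str, b: str) -> int:
            return sum(x != y for x, y in zip(a, b))

        encodings = {
            "binary":     ["00", "01", "10", "11"],
            "one-hot":    ["0001", "0010", "0100", "1000"],
            "distance-3": ["00000", "01011", "10101", "11110"],   # assumed example code
        }

        for name, codes in encodings.items():
            dmin = min(hamming(a, b) for a, b in combinations(codes, 2))
            # A single SEU cannot be mistaken for another valid state iff the minimum distance >= 2.
            print(f"{name:>10}: min Hamming distance = {dmin}, single-bit flip detectable: {dmin >= 2}")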

  8. Fault-Tolerant Quantum Walks

    E-print Network

    S. D. Freedman; Y. H. Tong; J. B. Wang

    2014-08-06

    Quantum walks are expected to serve important modelling and algorithmic applications in many areas of science and mathematics. Although quantum walks have been successfully implemented physically in recent times, no major efforts have been made to combat the error associated with these physical implementations in a fault-tolerant manner. In this paper, we propose a systematic method to implement fault-tolerant quantum walks in discrete time on arbitrarily complex graphs, using quantum states encoded with the Steane code and a set of universal fault tolerant matrix operations.
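
    For readers unfamiliar with the underlying object, a (non-fault-tolerant) discrete-time coined quantum walk on a line needs only a coin operation on an internal qubit and a coin-conditioned shift of the walker. The sketch below is a plain NumPy simulation of that textbook Hadamard walk, shown here only for orientation; it does not include the Steane-code encoding or the fault-tolerant gadgets proposed in the paper.

        # Plain discrete-time Hadamard (coined) quantum walk on a line, simulated with NumPy.
        import numpy as np

        steps = 50
        n_pos = 2 * steps + 1                        # positions -steps..steps
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2) # Hadamard coin

        # State indexed as psi[coin, position]; start at the origin with coin (|0> + i|1>)/sqrt(2).
        psi = np.zeros((2, n_pos), dtype=complex)
        psi[0, steps] = 1 / np.sqrt(2)
        psi[1, steps] = 1j / np.sqrt(2)

        for _ in range(steps):
            psi = H @ psi                            # coin flip on the internal qubit
            shifted = np.zeros_like(psi)
            shifted[0, 1:] = psi[0, :-1]             # coin |0>: walker moves right
            shifted[1, :-1] = psi[1, 1:]             # coin |1>: walker moves left
            psi = shifted

        prob = (np.abs(psi) ** 2).sum(axis=0)
        positions = np.arange(-steps, steps + 1)
        sigma = float(np.sqrt((positions ** 2 * prob).sum()))
        print("total probability:", round(float(prob.sum()), 6))
        print("position spread:", round(sigma, 2), "(grows ~linearly with steps, unlike the classical sqrt)")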

  9. Finding faults with the data

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    Rudolph Giuliani and Hillary Rodham Clinton are crisscrossing upstate New York looking for votes in the U.S. Senate race. Also cutting back and forth across upstate New York are hundreds of faults of a kind characterized by very sporadic seismic activity according to Robert Jacobi, professor of geology at the University of Buffalo (UB), who conducted research with fellow UB geology professor John Fountain."We have proof that upstate New York is crisscrossed by faults," Jacobi said. "In the past, the Appalachian Plateau—which stretches from Albany to Buffalo—was considered a pretty boring place structurally without many faults or folds of any significance."

  10. The Burträsk endglacial fault: Sweden's most seismically active fault system

    NASA Astrophysics Data System (ADS)

    Lund, Björn; Buhcheva, Darina; Tryggvason, Ari; Berglund, Karin; Juhlin, Christopher; Munier, Raymond

    2015-04-01

    Approximately 10,000 years ago, as the Weichselian ice sheet retreated from northern Fennoscandia, large earthquakes occurred in response to the combined tectonic and glacial isostatic adjustment stresses. These endglacial earthquakes reached magnitudes of 7 to 8 and left scarps up to 155 km long in northernmost Fennoscandia. Most of the endglacial faults (EGFs) still show considerable earthquake activity and the area around the Burträsk endglacial fault, south of the town of Skellefteå in northern Sweden, is not only the most active of the EGFs but also the currently most seismically active region in Sweden. Here we show the preliminary results of the first two years of a temporary deployment of seismic stations around the Burträsk fault, complementing the permanent stations of the Swedish National Seismic Network (SNSN) in the region. During the two year period December 2012 to December 2014, the local network recorded approximately 1,500 events and is complete to approximately magnitude -0.4. We determine a new velocity model for the region and perform double-difference relocation of the events along the fault. We also analyze depth phases to further constrain the depths of some of the larger events. We find that many of the events are aligned along and to the southeast of the fault scarp, in agreement with the previously determined reverse faulting mechanism of the main event. Earthquakes extend past the mapped surface scarp to the northeast in a similar strike direction into the Bay of Bothnia, suggesting that the fault may be longer than the surface scarp indicates. We also find a number of events north of the Burträsk fault, some seemingly related to the Röjnoret EGF but some in a more diffuse area of seismicity. The Burträsk events show a seismically active zone dipping approximately 40 degrees to the southeast, with earthquakes all the way down to 35 km depth. The Burträsk fault area thereby has some of the deepest seismicity observed in Sweden. We correlate our results with those of a seismic reflection survey carried out across the fault in 2008. Focal mechanisms are calculated for all events and the highest quality mechanisms are analyzed for faulting style variations in the region. We invert the mechanisms for the causative stress state and shed light on the long-standing issue of what causes earthquakes along the Swedish northeast coast, tectonics or current glacial isostatic adjustment.

  11. Nonlinear Network Dynamics on Earthquake Fault Systems

    SciTech Connect

    Rundle, Paul B.; Rundle, John B.; Tiampo, Kristy F.; Sa Martins, Jorge S.; McGinnis, Seth; Klein, W.

    2001-10-01

    Earthquake faults occur in interacting networks having emergent space-time modes of behavior not displayed by isolated faults. Using simulations of the major faults in southern California, we find that the physics depends on the elastic interactions among the faults defined by network topology, as well as on the nonlinear physics of stress dissipation arising from friction on the faults. Our results have broad applications to other leaky threshold systems such as integrate-and-fire neural networks.

  12. Tutorial: Advanced fault tree applications using HARP

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.

    1993-01-01

    Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.

  13. Physical Fault Tolerance of Nanoelectronics

    E-print Network

    Szkopek, Thomas

    The error rate in complementary transistor circuits is suppressed exponentially in electron number, arising from an intrinsic physical implementation of fault-tolerant error correction. Contrariwise, explicit assembly of ...

  14. Fault analysis using intelligent systems

    SciTech Connect

    Kezunovic, M. [Texas A and M Univ., College Station, TX (United States); Fernandes, M.F.; Sevcik, D.R.; Hertz, A.; Waight, J.; Fukui, S.; Liu, C.C.

    1996-06-01

    This paper examines how applications of expert systems, neural networks, fuzzy logic, and genetic algorithms are helping utilities reach their fault analysis goals and objectives. Four IEEE Power Engineering Society (PES) subcommittees or working groups are presently concerned with intelligent system applications and are coordinating their activities. A joint panel session on fault analysis was held during the 1996 IEEE PES Winter Meeting in Baltimore. The panel session dealt with various aspects of the specification, design, development, and application of different solutions for automated fault analysis using advanced intelligent system techniques and tools. In order to provide an overview of a variety of issues associated with the fault analysis automation, the panel was formed of experts from the utilities, vendors, and academia.

  15. Fault-tolerant TCP mechanisms 

    E-print Network

    Satapati, Suresh Kumar

    2000-01-01

    While fault-tolerance is supported by a variety of critical services that can be accessed over the Internet, they are not robust in that they are oblivious of the impact of their tolerant mechanisms on the service they ...

  16. Cell boundary fault detection system

    DOEpatents

    Archer, Charles Jens (Rochester, MN); Pinnow, Kurt Walter (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian Edward (Rochester, MN)

    2009-05-05

    A method determines a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.
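
    A toy version of neighbour-based nodal fault detection can be written as nodes probing their neighbours and a simple vote over the analysed communications. The sketch below is only a schematic illustration of that general idea, with an invented ring topology and voting rule; it does not reproduce the patented method.

        # Toy neighbour-vote fault detection on a ring of nodes (schematic only).
        N = 8
        faulty = {5}                                  # ground truth, unknown to the detector

        def probe(src: int, dst: int) -> bool:
            """A probe succeeds only if both endpoints are healthy."""
            return src not in faulty and dst not in faulty

        # Each node probes its two ring neighbours and blames the probed node on failure.
        blame = {n: 0 for n in range(N)}
        for src in range(N):
            for dst in ((src - 1) % N, (src + 1) % N):
                if not probe(src, dst):
                    blame[dst] += 1

        suspects = [n for n, count in blame.items() if count >= 2]   # flagged by both neighbours
        print("suspected faulty nodes:", suspects)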

  17. The fault-tree compiler

    NASA Technical Reports Server (NTRS)

    Martensen, Anna L.; Butler, Ricky W.

    1987-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer that is precise, within the limits of double-precision floating-point arithmetic, to five digits. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Corporation VAX with the VMS operating system.
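
    For a coherent tree over statistically independent basic events, the top-event probability for most of the gate types listed above follows from elementary probability: AND multiplies input probabilities, OR complements the product of complements, and M-of-N sums the probabilities of the qualifying subsets. The sketch below is a small generic evaluator in that spirit; it is not the Fault Tree Compiler itself, and the example tree and failure probabilities are made up.

        # Tiny fault-tree evaluator for independent basic events (illustrative, not the FTC tool).
        from itertools import combinations
        from math import prod

        def AND(*ps):  return prod(ps)
        def OR(*ps):   return 1 - prod(1 - p for p in ps)
        def INVERT(p): return 1 - p
        def M_OF_N(m, ps):
            """Probability that at least m of the independent inputs fail."""
            n = len(ps)
            return sum(prod(ps[i] if i in idx else 1 - ps[i] for i in range(n))
                       for k in range(m, n + 1) for idx in combinations(range(n), k))

        # Hypothetical tree: top fails if (A AND B) fail, or at least 2 of 3 pumps fail.
        A, B = 1e-3, 2e-3
        pumps = [5e-2, 5e-2, 5e-2]
        top = OR(AND(A, B), M_OF_N(2, pumps))
        print(f"top-event probability = {top:.6e}")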

  18. Fault-Scalable Byzantine Fault-Tolerant Services

    E-print Network

    Venkataramani, Arun

    Distributed services face unpredictable network delays, arbitrary failures induced by increasingly complex software, timing faults (such as those due to network delays and transient network partitions), and failures…

  19. Internal structure of the Kern Canyon Fault, California: a deeply exhumed strike-slip fault

    E-print Network

    Neal, Leslie Ann

    2002-01-01

    the majority of displacement along the fault. Abundant veins and fluid-assisted alteration in the rock surrounding the fault zone attest to the presence of fluids of evolving chemistry during both ductile and brittle faulting. Mass balance calculations...

  20. On fault tolerant matrix decomposition

    Microsoft Academic Search

    Patrick Fitzpatrick

    1994-01-01

    We present a fault tolerant algorithm for matrix factorization in the presence of multiple hardware faults which can be used for solving the linear system Ax = b without determining the correct ZU decomposition of A. Here Z is either L for ordinary Gaussian decomposition with partial pivoting, X for pairwise or neighbor pivoting (motivated by the Gentleman-Kung systolic array structure), or Q for the usual QR decomposition. Our algorithm…
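
    Although the abstract is truncated, the general family it belongs to, algorithm-based fault tolerance via checksum columns, is easy to demonstrate: append the row-sum column to A before elimination, and because every row operation is applied to the whole row, the identity "checksum column = row sums" is preserved, so a corrupted entry shows up as a checksum violation. The sketch below illustrates that generic check around Gaussian elimination with partial pivoting; it is not Fitzpatrick's algorithm, and the matrix, fault location, and tolerance are invented.

        # Checksum-protected Gaussian elimination with partial pivoting (generic ABFT sketch).
        import numpy as np

        def solve_with_checksum(A, b, inject_fault=False):
            n = len(b)
            # Working matrix [A | checksum | b]; the checksum column starts as the row sums of A.
            M = np.column_stack([A.astype(float), A.sum(axis=1), b.astype(float)])
            for k in range(n):
                p = k + int(np.argmax(np.abs(M[k:, k])))      # partial pivoting
                M[[k, p]] = M[[p, k]]
                for i in range(k + 1, n):
                    M[i] -= (M[i, k] / M[k, k]) * M[k]
                if inject_fault and k == 1:
                    M[1, 2] += 1e-3                           # simulate a transient hardware error
            # Invariant: the checksum column must still equal the row sums of the transformed A part.
            checksum_ok = bool(np.allclose(M[:, n], M[:, :n].sum(axis=1)))
            x = np.zeros(n)                                   # back substitution
            for i in reversed(range(n)):
                x[i] = (M[i, n + 1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
            return x, checksum_ok

        A = np.array([[4.0, 1, 2], [1, 3, 0], [2, 0, 5]])
        b = np.array([7.0, 4, 7])                             # exact solution is x = [1, 1, 1]
        for fault in (False, True):
            x, ok = solve_with_checksum(A, b, inject_fault=fault)
            print("fault injected:", fault, "| checksum OK:", ok,
                  "| residual:", float(np.linalg.norm(A @ x - b)))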

  1. Fault-tolerant rotary actuator

    DOEpatents

    Tesar, Delbert

    2006-10-17

    A fault-tolerant actuator module, in a single containment shell, containing two actuator subsystems that are either asymmetrically or symmetrically laid out is provided. Fault tolerance in the actuators of the present invention is achieved by the employment of dual sets of equal resources. Dual resources are integrated into single modules, with each having the external appearance and functionality of a single set of resources.

  2. Hardware Fault Simulator for Microprocessors

    NASA Technical Reports Server (NTRS)

    Hess, L. M.; Timoc, C. C.

    1983-01-01

    Breadboarded circuit is faster and more thorough than software simulator. Elementary fault simulator for AND gate uses three gates and shift register to simulate stuck-at-one or stuck-at-zero conditions at inputs and output. Experimental results showed hardware fault simulator for microprocessor gave faster results than software simulator, by two orders of magnitude, with one test being applied every 4 microseconds.
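
    In software, the same stuck-at idea is usually demonstrated by forcing one net of a small gate-level circuit to a constant and comparing outputs against the fault-free circuit over all input vectors. The sketch below does exactly that for a tiny invented two-input circuit; it is a generic serial fault simulator, not the hardware simulator described in the record.

        # Serial stuck-at fault simulation of a tiny circuit: out = (a AND b) OR (NOT b).
        from itertools import product

        NETS = ["a", "b", "n_and", "n_not", "out"]

        def simulate(a, b, stuck=None):
            """Evaluate the circuit; `stuck` is an optional (net, value) stuck-at fault."""
            def val(net, x):
                return stuck[1] if stuck and stuck[0] == net else x
            va, vb = val("a", a), val("b", b)
            n_and = val("n_and", va & vb)       # AND gate
            n_not = val("n_not", 1 - vb)        # NOT gate
            return val("out", n_and | n_not)    # OR gate

        # A fault is detected by an input vector if the faulty output differs from the good one.
        for net in NETS:
            for value in (0, 1):
                detecting = [ab for ab in product((0, 1), repeat=2)
                             if simulate(*ab) != simulate(*ab, stuck=(net, value))]
                print(f"stuck-at-{value} on {net:>5}: detected by {detecting or 'no vector'}")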

  3. Normal fault earthquakes or graviquakes

    PubMed Central

    Doglioni, C.; Carminati, E.; Petricca, P.; Riguzzi, F.

    2015-01-01

    Earthquakes dissipate energy through elastic waves. Canonically, this elastic energy is accumulated during the interseismic period. However, in crustal extensional settings, gravity is the main energy source for hangingwall fault collapse. The gravitational potential energy is about 100 times larger than the energy corresponding to the observed magnitude, far more than enough to explain the earthquake. Therefore, normal faults have a different mechanism of energy accumulation and dissipation (graviquakes) with respect to other tectonic settings (strike-slip and contractional), where elastic energy allows motion even against gravity. The bigger the involved volume, the larger the magnitude. The steeper the normal fault, the larger the vertical displacement and the larger the seismic energy released. Normal faults activate preferentially at about 60° but can be shallower in low-friction rocks. In rocks with low static friction, the fault may partly creep, dissipating gravitational energy without releasing a great amount of seismic energy. The maximum volume involved by graviquakes is smaller than in the other tectonic settings, the activated fault being at most about three times the hypocentre depth, which explains their higher b-value and the lower magnitude of the largest recorded events. Having a different phenomenology, graviquakes show peculiar precursors. PMID:26169163

  4. Normal fault earthquakes or graviquakes.

    PubMed

    Doglioni, C; Carminati, E; Petricca, P; Riguzzi, F

    2015-01-01

    Earthquakes dissipate energy through elastic waves. Canonically, this elastic energy is accumulated during the interseismic period. However, in crustal extensional settings, gravity is the main energy source for hangingwall fault collapse. The gravitational potential energy is about 100 times larger than the energy corresponding to the observed magnitude, far more than enough to explain the earthquake. Therefore, normal faults have a different mechanism of energy accumulation and dissipation (graviquakes) with respect to other tectonic settings (strike-slip and contractional), where elastic energy allows motion even against gravity. The bigger the involved volume, the larger the magnitude. The steeper the normal fault, the larger the vertical displacement and the larger the seismic energy released. Normal faults activate preferentially at about 60° but can be shallower in low-friction rocks. In rocks with low static friction, the fault may partly creep, dissipating gravitational energy without releasing a great amount of seismic energy. The maximum volume involved by graviquakes is smaller than in the other tectonic settings, the activated fault being at most about three times the hypocentre depth, which explains their higher b-value and the lower magnitude of the largest recorded events. Having a different phenomenology, graviquakes show peculiar precursors. PMID:26169163

  5. Software Fault Tolerance: A Tutorial

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2000-01-01

    Because of our present inability to produce error-free software, software fault tolerance is and will continue to be an important consideration in software systems. The root cause of software design errors is the complexity of the systems. Compounding the problems in building correct software is the difficulty in assessing the correctness of software for highly complex systems. After a brief overview of the software development processes, we note how hard-to-detect design faults are likely to be introduced during development and how software faults tend to be state-dependent and activated by particular input sequences. Although component reliability is an important quality measure for system level analysis, software reliability is hard to characterize and the use of post-verification reliability estimates remains a controversial issue. For some applications software safety is more important than reliability, and fault tolerance techniques used in those applications are aimed at preventing catastrophes. Single version software fault tolerance techniques discussed include system structuring and closure, atomic actions, inline fault detection, exception handling, and others. Multiversion techniques are based on the assumption that software built differently should fail differently and thus, if one of the redundant versions fails, it is expected that at least one of the other versions will provide an acceptable output. Recovery blocks, N-version programming, and other multiversion techniques are reviewed.
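
    Of the single- and multi-version techniques surveyed, the recovery block is the easiest to show in code: run the primary version, apply an acceptance test to its result, and fall back to an alternate version from a saved checkpoint if the test fails. The example below is a minimal, generic illustration of that pattern with an invented numeric task; it is not code from the tutorial.

        # Minimal recovery-block pattern: primary version, acceptance test, alternate version.
        import math

        def acceptance_test(x: float, result: float) -> bool:
            """Accept a square-root result only if squaring it recovers the input."""
            return math.isclose(result * result, x, rel_tol=1e-6)

        def primary(x: float) -> float:
            return x / 2 if x > 100 else math.sqrt(x)   # deliberately wrong for large inputs

        def alternate(x: float) -> float:
            return math.sqrt(x)                         # slower but trusted fallback version

        def recovery_block(x: float) -> float:
            checkpoint = x                              # save state before running the primary
            result = primary(x)
            if acceptance_test(x, result):
                return result
            return alternate(checkpoint)                # roll back and try the next version

        for x in (9.0, 400.0):
            print(x, "->", recovery_block(x))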

  6. Paleosecular variation and paleointensity records for the last millennium from southern South America (Laguna Potrok Aike, Santa Cruz, Argentina)

    NASA Astrophysics Data System (ADS)

    Gogorza, C. S. G.; Sinito, A. M.; Ohlendorf, C.; Kastner, S.; Zolitschka, B.

    2011-01-01

    High-resolution paleo- and rock magnetic studies were performed on a group of four sediment cores from Laguna Potrok Aike (Santa Cruz, Argentina) representing the time period AD 1300-2000. The rock magnetic analyses show that the main magnetic mineral is (titano)magnetite with a concentration between 0.01 and 0.08%, and a grain size of 4-15 μm. This study is helpful in order to complete the paleosecular variation (PSV) and paleointensity type curves for South America which do not have a detailed record for the last millennium. The comparison with the study carried out for Lake El Trébol shows a very good agreement, supporting that PSV records of south-western Argentina can be developed into a stratigraphic correlation tool on a regional scale.

  7. Evolution of Rhyolite at Laguna del Maule, a Rapidly Inflating Volcanic Field in the Southern Andes

    NASA Astrophysics Data System (ADS)

    Andersen, N. L.; Singer, B. S.; Jicha, B. R.; Hildreth, E. W.; Fierstein, J.; Rogers, N. W.

    2012-12-01

    The Laguna del Maule Volcanic Field (LdM) is host to both the foremost example of post-glacial rhyolitic volcanism in the southern Andes and rapid, ongoing crustal deformation. The flare-up of high-silica eruptions was coeval with deglaciation at 24 ka. Rhyolite and rhyodacite domes and coulees totaling 6.5 km3 form a 20 km ring around the central lake basin. This spatial and temporal concentration of rhyolite is unprecedented in the history of the volcanic field. Colinear major and trace element variation suggests these lavas share a common evolutionary history (Hildreth et al., 2010). Moreover, geodetic observations (InSAR & GPS) have identified rapid inflation centered in the western side of the rhyolite dome ring at a rate of 17 cm/year for five years, which has accelerated to 30 cm/yr since April 2012. The best fit to the geodetic data is an expanding magma body located at 5 km depth (Fournier et al., 2010; Le Mevel, 2012). The distribution of high-silica volcanism, most notably geochemically similar high-silica rhyolite lavas erupted 12 km apart on opposite sides of the lake within a few kyr of each other, raises the possibility that the shallow magma intrusion represents only a portion of a larger rhyolitic body, potentially of caldera-forming dimensions. We aim to combine petrologic models with a precise geochronology to formulate a model of the evolution of the LdM magma system to its current state. New 40Ar/39Ar age determinations show rhyolitic volcanism beginning at 23 ka with the eruption of the Espejos rhyolite, followed by the Cari Launa Rhyolite at 14.5 ka, two flows of the Barrancas complex at 6.4 and 3.9 ka, and the Divisoria rhyolite at 2.2 ka. In contrast, significant andesitic and dacitic volcanism is largely absent from the central basin of LdM since the early post-glacial period, suggesting a coincident basin-wide evolution from andesite to dacite to rhyolite, consistent with a shallow body of low-density rhyolite blocking the eruption of less evolved magma. Temporal trends in the major element compositions of the rhyolite domes show the most evolved was erupted early in the post-glacial period followed by slightly lower-silica rhyolites. Major element fractional crystallization modeling using the rhyolite calibration of the MELTS algorithm (Gualda et al., 2012) largely reproduces the high-silica compositions from a basaltic parental composition. The preferred model predicts 86% crystallization while cooling from 1290° to 800° C at a depth of 5-8 km resulting in a water content of 4-6 wt. % in the residual high-silica magma. Trace element assimilation fractional crystallization modeling predicts only moderate assimilation of anatectic melts and fractionation of zircon and apatite controlling trace element compositions at high silica contents. The suite of recent LdM lavas lies on a single evolutionary pathway supporting a cogenetic source; furthermore, the model parameters are consistent with a shallow magma chamber with the potential to fuel an explosive, caldera-forming eruption.

  8. Characterizing the eolian sediment component in the lacustrine record of Laguna Potrok Aike (southeastern Patagonia)

    NASA Astrophysics Data System (ADS)

    Ohlendorf, C.; Gebhardt, C.

    2013-12-01

    Southern South America with its extended dry areas was one of the major sources for dust in the higher latitudes of the southern hemisphere during the last Glacial, as was deduced from fingerprinting of dust particles found in Antarctic ice cores. The amount of dust that was mobilized is mostly related to strength and latitudinal position of the Southern Hemisphere Westerly Winds (SWW). How exactly SWW shifted between glacial and interglacial times and what consequences such shifts had for ocean and atmospheric circulation changes during the last deglaciation is currently under debate. Laguna Potrok Aike (PTA) as a lake situated in the middle of the source area of dust offers the opportunity to arrive at a better understanding of past SWW changes and their associated consequences for dust transport. For this task, a sediment record of the past ~51 ka is available from a deep drilling campaign (PASADO). From this 106 m long profile, 76 samples representing the different lithologies of the sediment sequence were selected to characterize an eolian sediment component. Prior to sampling of the respective core intervals, magnetic susceptibility was measured and the element composition was determined by XRF-scanning on fresh, undisturbed sediment. After sampling and freeze drying, physical, chemical and mineralogical sediment properties were determined before and after separation of each sample into six grain-size classes, for each fraction separately. SEM techniques were used to verify the eolian origin of grains. The aim of this approach is to isolate an exploitable fingerprint of the eolian sediment component in terms of grain size, physical properties, geochemistry and mineralogy. Thereby, the challenging aspect is that such a fingerprint should be based on high-resolution down-core scanning techniques, so time-consuming techniques such as grain-size measurements by laser detection can be avoided. A first evaluation of the dataset indicates that magnetic susceptibility, which is often used as a tracer for the eolian sediment component in marine sediments, probably does not yield a robust signal of eolian input in this continental setting because it is variably contained in the silt as well as in the fine sand fraction. XRF-scanning of powdered samples of the different grain-size fractions shows that some elements are characteristically enriched in the clay, silt or medium sand fractions which might allow a geochemical fingerprinting of these. For instance, an identification of higher amounts of clay in a sample may be possible based on its enrichment in heavy metals (Zn, Cu, Pb) and/or Fe. Higher amounts of silt may be recognized by Zr and/or Y enrichment. Hence, unmixing of the signal stored in the sedimentary record of PTA with tools of multivariate statistics is a necessary step to characterize the eolian fraction. The 51 ka BP sediment record of PTA might then be used for a reconstruction of dust availability in the high latitude source areas of the southern hemisphere.

  9. Sr Isotopes and Migration of Prairie Mammoths (Mammuthus columbi) from Laguna de las Cruces, San Luis Potosi, Mexico

    NASA Astrophysics Data System (ADS)

    Solis-Pichardo, G.; Perez-Crespo, V.; Schaaf, P. E.; Arroyo-Cabrales, J.

    2011-12-01

    Assessing the mobility of ancient humans is a major issue for anthropologists. For more than 25 years, Sr isotopes have been used as a resourceful tracer tool in this context. A comparison of the 87Sr/86Sr ratios found in tooth enamel and in bone is performed to determine if the human skeletal remains belonged to a local or a migrant. Sr in bone approximately reflects the isotopic composition of the geological region where the person lived before death; whereas the Sr isotopic system in tooth enamel is thought to remain as a closed system and thus conserves the isotope ratio acquired during childhood. Sr isotope ratios are obtained through the geologic substrate and its overlying soil, from where an individual got hold of food and water; these ratios are in turn incorporated into the dentition and skeleton during tissue formation. In previous studies from Teotihuacan, Mexico we have shown that a three-step leaching procedure on tooth enamel samples is important to assure that only the biogenic Sr isotope contribution is analyzed. The same Sr isotopic tools can function concerning ancient animal migration patterns. To determine or to discard the mobility of prairie mammoths (Mammuthus columbi) found at Laguna de las Cruces, San Luis Potosi, México, the leaching procedure was applied on six molar samples from several fossil remains. The initial hypothesis was to use 87Sr/86Sr values to verify if the mammoth population was a mixture of individuals from various herds and further, by comparing their Sr isotopic composition with that of plants and soils, to confirm their geographic origin. The dissimilar Sr results point to two distinct mammoth groups. The mammoth population from Laguna de las Cruces was then not a family unit because it was composed of individuals originating from different localities. Only one individual was identified as local. Others could have walked as much as 100 km to find food and water sources.

  10. High-Resolution Paleosalinity Reconstruction From Laguna de la Leche, North Coastal Cuba, Using Sr, O, and C Isotopes

    NASA Astrophysics Data System (ADS)

    Peros, M. C.; Reinhardt, E. G.; Schwarcz, H. P.; Davis, A. M.

    2008-12-01

    Isotopes of Sr, O, and C were studied from a 227-cm long sediment core to develop a high-resolution paleosalinity record to investigate the paleohydrology of Laguna de la Leche, north coastal Cuba, during the Middle to Late Holocene. Palynological, plant macrofossil, foraminiferal, ostracode, gastropod, and charophyte data from predominantly euryhaline taxa, coupled with a radiocarbon-based chronology, indicate that the wetland evolved through four phases: (1) an oligohaline lake existed from 6200 to 4800 cal yr B.P.; (2) water level in the lake increased and the system freshened from 4800 to 4200 cal yr B.P.; (3) a mesohaline lagoon replaced the lake 4200 cal yr B.P.; and (4) mangroves enclosed the lagoon beginning 1700 cal yr B.P., forming a mesohaline lake. Isotopic ratios were measured on specimens of the euryhaline foraminifer Ammonia beccarii, although several measurements were also made on other calcareous microfossils in order to identify potential taphonomic and/or vital effects. The 87Sr/86Sr results show that the average salinity of Laguna de la Leche was 1.7 ppt during the early lake phase and 8 ppt during the lagoon phase - a change driven by relative sea level rise. The delta18O results do not record the salinity increase seen in the 87Sr/86Sr data, but instead indicate high evaporation from the lake surface. Variability in delta13C was controlled by plant productivity, episodic marine incursions, and vegetation community change. There is some evidence for seasonal effect and the lateral transport of microfossils prior to burial. Our results show that Sr isotopes, while often cited as a powerful paleosalinity tool, should be used in conjunction with other indicators when investigating paleosalinity trends; relying solely on any single isotopic or ecological indicator can lead to inaccurate results, especially in semi-enclosed and closed hydrological systems.

  11. Anisotropy of permeability in faulted porous sandstones

    NASA Astrophysics Data System (ADS)

    Farrell, N. J. C.; Healy, D.; Taylor, C. W.

    2014-06-01

    Studies of fault rock permeabilities advance the understanding of fluid migration patterns around faults and contribute to predictions of fault stability. In this study a new model is proposed combining brittle deformation structures formed during faulting, with fluid flow through pores. It assesses the impact of faulting on the permeability anisotropy of porous sandstone, hypothesising that the formation of fault-related micro-scale deformation structures will alter the host rock porosity organisation and create new permeability pathways. Core plugs and thin sections were sampled around a normal fault and oriented with respect to the fault plane. Anisotropy of permeability was determined in three orientations to the fault plane at ambient and confining pressures. Results show that permeabilities measured parallel to fault dip were up to 10 times higher than along fault strike permeability. Analysis of corresponding thin sections shows elongate pores oriented at a low angle to the maximum principal palaeo-stress (σ1) and parallel to fault dip, indicating that permeability anisotropy is produced by grain scale deformation mechanisms associated with faulting. Using a soil mechanics 'void cell model' this study shows how elongate pores could be produced in faulted porous sandstone by compaction and reorganisation of grains through shearing and cataclasis.

  12. A novel approach for distribution fault analysis

    SciTech Connect

    Chow, Moyuen (North Carolina State Univ., Raleigh, NC (United States). Dept. of Electrical and Computer Engineering); Taylor, L.S. (Duke Power Co., Charlotte, NC (United States). Power Delivery Engineering Service)

    1993-10-01

    This paper proposes to use four different measures: actual values, normalized values, relative values, and likelihood values for power systems' distribution faults analysis. This paper also discusses the general and local properties of distribution faults. The likelihood measure, based on the local region properties, provides important information for distribution fault cause identification when the fault cause is not known. Tree faults on the Duke Power System are used in this paper for illustration purposes. The proposed measures, analysis and discussion in this paper can be easily generalized for different types of distribution faults in other utility companies.

  13. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1991-01-01

    Twenty independently developed but functionally equivalent software versions were used to investigate and compare empirically some properties of N-version programming, Recovery Block, and Consensus Recovery Block, using the majority and consensus voting algorithms. This was also compared with another hybrid fault-tolerant scheme called Acceptance Voting, using dynamic versions of consensus and majority voting. Consensus voting provides adaptation of the voting strategy to varying component reliability, failure correlation, and output space characteristics. Since failure correlation among versions effectively reduces the cardinality of the space in which the voter makes decisions, consensus voting is usually preferable to simple majority voting in any fault-tolerant system. When versions have considerably different reliabilities, the version with the best reliability will perform better than any of the fault-tolerant techniques.

  14. Comparison of Fault Classes in Specification-Based Testing

    E-print Network

    Black, Paul E.

    Our results extending Kuhn's fault class hierarchy provide a justification for the focus of fault-based testing strategies on detecting particular faults and ignoring others. We develop…

  15. Graphite as a fault lubricant

    NASA Astrophysics Data System (ADS)

    Oohashi, K.; Hirose, T.; Shimamoto, T.

    2011-12-01

    Graphite is a well-known solid lubricant and has been found at ~14 vol% in fault-rock fractions from fault zones in a variety of geological settings (e.g. the Atotsugawa fault system, Japan: Oohashi et al., 2011a, submitted; the KTB borehole, Germany: Zulauf et al., 1990; and the Err nappe detachment fault, Switzerland: Manatschal, 1999). However, it has received little attention even though the friction of graphite gouge is strikingly low (steady-state friction coefficient ~0.1) over seven orders of magnitude in slip rate (0.16 μm/s to 1.3 m/s; Oohashi et al., 2011b). Friction experiments were therefore performed on mixed graphite and quartz gouges with different compositions, using a rotary-shear low- to high-velocity friction apparatus, in order to determine the minimum amount of graphite needed to reduce the frictional strength of faults dramatically. The experimental results clearly indicate that the friction coefficient of the mixture gouge decreases with graphite content following a power-law relation irrespective of slip rate; it starts to decrease at a fraction of 5 vol% and reaches almost the same level as pure graphite gouge at fractions of more than 20 vol%. This result implies that the 14 vol% of graphite in natural fault rock is enough to reduce the shear strength to half of its initial value. According to the textural observations, the slight weakening of the 5-8 vol% graphite mixtures is associated with the development of a partially connected graphite matrix that forms a localized slip surface. On the other hand, the formation of through-going connections of diffuse graphite-matrix zones along shear planes is most likely to have caused the dramatic weakening of gouges with more than 20 vol% graphite. The non-linear power-law dependency of friction on graphite content leads to a more efficient reduction of fault strength than the previously reported, almost linear dependency on clay mineral content (e.g. Shimamoto & Logan, 1981). Hence the result demonstrates the potential importance of graphite as a weakening agent of mature faults, as graphite can reduce friction more efficiently than other weak clay minerals. Such mechanical properties of graphite may explain the lack of pronounced heat flow along major crustal faults and long-term fault weakening.

  16. Fault Management Guiding Principles

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; Friberg, Kenneth H.; Fesq, Lorraine; Barley, Bryan

    2011-01-01

    Regardless of the mission type: deep space or low Earth orbit, robotic or human spaceflight, Fault Management (FM) is a critical aspect of NASA space missions. As the complexity of space missions grows, the complexity of supporting FM systems increases in turn. Data on recent NASA missions show that development of FM capabilities is a common driver for significant cost overruns late in the project development cycle. Efforts to understand the drivers behind these cost overruns, spearheaded by NASA's Science Mission Directorate (SMD), indicate that they are primarily caused by the growing complexity of FM systems and the lack of maturity of FM as an engineering discipline. NASA can and does develop FM systems that effectively protect mission functionality and assets. The cost growth results from a lack of FM planning and emphasis by project management, as well as from the maturity of FM as an engineering discipline, which lags behind the maturity of other engineering disciplines. As a step towards controlling the cost growth associated with FM development, SMD has commissioned a multi-institution team to develop a practitioner's handbook representing best practices for the end-to-end processes involved in engineering FM systems. While currently concentrating primarily on FM for science missions, the expectation is that this handbook will grow into a NASA-wide handbook, serving as a companion to the NASA Systems Engineering Handbook. This paper presents a snapshot of the principles that have been identified to guide FM development from cradle to grave. The principles range from considerations for integrating FM into the project and SE organizational structure, to the relationship between FM designs and mission risk, to the use of the various tools of FM (e.g., redundancy) to meet the FM goal of protecting mission functionality and assets.

  17. Fault geometry and fault-zone development in mixed carbonate/clastic successions: Implications for reservoir management

    E-print Network

    Stell, John

    Faults are key controlling elements of fluid flow within reservoirs. When faults undergo displacement, they change their fluid transmissibility…

  18. Incidence of organochlorine pesticides and the health condition of nestling ospreys (Pandion haliaetus) at Laguna San Ignacio, a pristine area of Baja California Sur, Mexico

    Microsoft Academic Search

    Laura B. Rivera-Rodríguez

    2011-01-01

    We identified and quantified organochlorine (OC) pesticide residues in the plasma of 28 osprey (Pandion haliaetus) nestlings from a dense population in Laguna San Ignacio, a pristine area of Baja California Sur, Mexico, during the 2001 breeding season. Sixteen OC pesticides were identified and quantified: α-, β-, γ- and δ-hexachlorocyclohexane, heptachlor, heptachlor epoxide, endosulfan I and II, endosulfan sulfate, p,p′-DDE, p,p′-DDD,…

  19. Palaeoenvironmental changes during the last 1600 years inferred from the sediment record of a cirque lake in southern Patagonia (Laguna Las Vizcachas, Argentina)

    Microsoft Academic Search

    Michael Fey; Christian Korr; Nora I. Maidana; María L. Carrevedo; Hugo Corbella; Sara Dietrich; Torsten Haberzettl; Gerhard Kuhn; Andreas Lücke; Christoph Mayr; Christian Ohlendorf; Marta M. Paez; Flavia A. Quintana; Frank Schäbitz; Bernd Zolitschka

    2009-01-01

    Laguna Las Vizcachas is a cirque lake located at the margin of an extra-Andean volcanic plateau in southern Patagonia, Argentina, within the area of steppe and semi-desert east of the Andes. The number of paleoenvironmental records is still limited in this region. Sediments of this lake were studied in order to obtain multi-proxy information about the paleoenvironmental history of this

  20. Modern and subrecent spatial distribution and characteristics of sediment infill controlled by internal depositional dynamics, Laguna Potrok Aike (southern Patagonia, Argentina)

    Microsoft Academic Search

    S. Kastner; C. Ohlendorf; T. Haberzettl; A. Lücke; N. I. Maidana; C. Mayr; F. Schäbitz; B. Zolitschka

    2009-01-01

    Situated in the dry steppe environment of south-eastern Patagonia the 100 m deep and max. 770 ka old maar lake Laguna Potrok Aike (51°58'S, 70°23'W) has a high potential as a palaeolimnological key site for the reconstruction of terrestrial palaeoclimate conditions. As this area is sensitive to variations in southern hemispheric wind and pressure systems the lake holds a unique

  1. Chemistry of Hot Spring Pool Waters in Calamba and Los Banos and Potential Effect on the Water Quality of Laguna De Bay

    NASA Astrophysics Data System (ADS)

    Balangue, M. I. R. D.; Pena, M. A. Z.; Siringan, F. P.; Jago-on, K. A. B.; Lloren, R. B.; Taniguchi, M.

    2014-12-01

    Since the Spanish Period (1600s), natural hot spring waters have been harnessed for balneological purposes in the municipalities of Calamba and Los Banos, Laguna, south of Metro Manila. There are more than a hundred hot spring resorts in Brgy. Pansol, Calamba and Tadlac, Los Banos. These two areas are found at the northern flanks of Mt. Makiling facing Laguna de Bay. This study aims to provide some insights on the physical and chemical characteristics of hot spring resorts and the possible impact on the lake water quality resulting from the disposal of used water. Initial ocular survey of the resorts showed that temperature of the pool water ranges from ambient (>30°C) to as high as 50°C with an average pool size of 80 m3. Water samples were collected from a natural hot spring and pumped well in Los Banos and another pumped well in Pansol to determine the chemistry. The field pH ranges from 6.65 to 6.87 (Pansol springs). Cation analysis revealed that the thermal waters belonged to the Na-K-Cl-HCO3 type with some trace amount of heavy metals. Methods for waste water disposal are either by direct discharge down the drain of the pool or by discharge in the public road canal. Both methods will dump the waste water directly into Laguna de Bay. Taking into consideration the large volume of waste water used especially during the peak season, the effect on the lake water quality would be significant. It is therefore imperative for the environmental authorities in Laguna to regulate and monitor the chemistry of discharges from the pools to protect both the lake water as well as groundwater quality.

  2. Seismology: Diary of a wimpy fault

    NASA Astrophysics Data System (ADS)

    Bürgmann, Roland

    2015-05-01

    Subduction zone faults can slip slowly, generating tremor. The varying correlation between tidal stresses and tremor occurring deep in the Cascadia subduction zone suggests that the fault is inherently weak, and gets weaker as it slips.

  3. Sensor Fault Detection and Isolation System

    E-print Network

    Yang, Cheng-Ken

    2014-08-01

    The purpose of this research is to develop a Fault Detection and Isolation (FDI) system which is capable of diagnosing multiple sensor faults in nonlinear cases. In order to bring this study closer to real world applications in oil industries…

  4. Detection of arcing faults on distribution feeders

    NASA Astrophysics Data System (ADS)

    Russell, B. D.

    1982-12-01

    The problem of detecting high impedance faults is examined from the perspective of current utility protection practices and it is shown why conventional overcurrent protection systems may not detect such faults. A microcomputer based prototype of an arcing, high impedance fault detector was tested. The fault detection technique is based on an increase in the high frequency component of distribution feeder current caused by the arcing associated with many high impedance faults. This theory is supported by field data measurements and analysis of a large number of staged distribution primary faults and normal system conditions. The design and demonstration of the prototype is explained. The device successfully detected many faults of greater than 5 to 10 A on a typical distribution feeder without false trips. General application of this fault detection technique is considered.

  5. Parametric Modeling and Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Ju, Jianhong

    2000-01-01

    Fault tolerant control is considered for a nonlinear aircraft model expressed as a linear parameter-varying system. By proper parameterization of foreseeable faults, the linear parameter-varying system can include fault effects as additional varying parameters. A recently developed technique in fault effect parameter estimation allows us to assume that estimates of the fault effect parameters are available on-line. Reconfigurability is calculated for this model with respect to the loss of control effectiveness to assess the potentiality of the model to tolerate such losses prior to control design. The control design is carried out by applying a polytopic method to the aircraft model. An error bound on fault effect parameter estimation is provided, within which the Lyapunov stability of the closed-loop system is robust. Our simulation results show that as long as the fault parameter estimates are sufficiently accurate, the polytopic controller can provide satisfactory fault-tolerance.

  6. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1987-01-01

    Specific topics briefly addressed include: the consistent comparison problem in N-version system; analytic models of comparison testing; fault tolerance through data diversity; and the relationship between failures caused by automatically seeded faults.

  7. Underground distribution cable incipient fault diagnosis system 

    E-print Network

    Jaafari Mousavi, Mir Rasoul

    2007-04-25

    This dissertation presents a methodology for an efficient, non-destructive, and online incipient fault diagnosis system (IFDS) to detect underground cable incipient faults before they become catastrophic. The system provides ...

  8. Transient Faults in Computer Systems

    NASA Technical Reports Server (NTRS)

    Masson, Gerald M.

    1993-01-01

    A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.

  9. Fault testing quantum switching circuits

    E-print Network

    Jacob Biamonte; Marek Perkowski

    2010-01-19

    Test pattern generation is an electronic design automation tool that attempts to find an input (or test) sequence that, when applied to a digital circuit, enables one to distinguish between the correct circuit behavior and the faulty behavior caused by particular faults. The effectiveness of this classical method is measured by the fault coverage achieved for the fault model and the number of generated vectors, which should be directly proportional to test application time. This work addresses the quantum process validation problem by considering the quantum mechanical adaptation of test pattern generation methods used to test classical circuits. We found that quantum mechanics allows one to execute multiple test vectors concurrently, making each gate realized in the process act on a complete set of characteristic states in space/time complexity that breaks classical testability lower bounds.

  10. Fault detection and management system for fault-tolerant switched reluctance motor drives

    Microsoft Academic Search

    C. M. Stephens

    1991-01-01

    The superior fault tolerance characteristics of the switched reluctance motor (SRM) have been proved in a working laboratory drive system. The program started by defining the performance effects of various types of motor winding faults. Motor winding fault detection devices were developed, along with control circuitry, to isolate a faulted winding by blocking the gating signals to the semiconductor power

  11. New Algorithms for Address Decoder Delay Faults and Bit Line Imbalance Faults

    Microsoft Academic Search

    A. J. Van De Goor; Said Hamdioui; Georgi Nedeltchev Gaydadjiev; Zaid Al-ars

    2009-01-01

    Due to the rapid decrease of technology feature size, speed-related faults, such as Address Decoder Delay Faults (ADDFs), are becoming very important. In addition, increased leakage currents demand improved tests for Bit Line Imbalance Faults (BLIFs), which are caused by memory cell pass transistor leakage. This paper contributes new and improved algorithms for detecting these faults. First it provides an…

  12. Preearthquake and Postearthquake Creep on the Imperial Fault and the Brawley Fault Zone

    E-print Network

    Tai, Yu-Chong

    Analysis of 2 years of surveys from two nail files suggests that creep events occurred on the Imperial fault 2 to 5 months before the event. No discernible creep occurred on the fault in the hours and days before the earthquake. Records…

  13. Efficient Fault Tolerance: an Approach to Deal with Transient Faults in Multiprocessor Architectures

    E-print Network

    Firenze, Università degli Studi di

    Efficient Fault Tolerance: an Approach to Deal with Transient Faults in Multiprocessor be integrated with a fault treatment approach aiming at op- timising resource utilisation. In this paper we propose a diagnosis approach that, accounting for transient faults, tries to remove units very cautiously

  14. Working-Conditions-Aware Fault Injection Technique

    E-print Network

    Paris-Sud XI, Université de

    A technique to inject faults into a given cache architecture, focusing on transient errors that are due to cosmic rays (soft errors) or to voltage scaling and high temperatures. During the fault injection process…
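
    Transient-error injection of this kind is often prototyped in simulation by flipping random bits of a modelled memory array and checking whether the protection scheme notices. The sketch below is such a generic soft-error injection loop against a simple per-word parity check, with invented sizes and counts; it is not the authors' cache-level technique.

        # Generic soft-error (bit-flip) injection into a simulated memory with per-word parity.
        import random

        random.seed(0)
        WORDS, BITS = 256, 32

        def parity(word: int) -> int:
            return bin(word).count("1") & 1

        memory = [random.getrandbits(BITS) for _ in range(WORDS)]
        parities = [parity(w) for w in memory]              # parity stored at write time

        detected = 0
        campaign = 1000
        for _ in range(campaign):
            addr = random.randrange(WORDS)
            bit = random.randrange(BITS)
            memory[addr] ^= 1 << bit                        # transient single-bit upset
            if parity(memory[addr]) != parities[addr]:      # read-time parity check
                detected += 1
                memory[addr] ^= 1 << bit                    # "scrub" so faults do not accumulate

        print(f"injected {campaign} single-bit faults, parity detected {detected}")
        # Every single-bit upset flips the word parity, so all are detected; catching
        # a double flip in the same word would need a stronger (e.g. ECC) code.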

  15. Multi-Sensor Fault Recovery in the Presence of Known and Unknown Fault Types

    E-print Network

    Roberts, Stephen

    Addresses multi-sensor fault recovery in the presence of modelled and unmodelled faults. The algorithm comprises two stages. The first stage attempts to remove modelled faults from each individual sensor estimate. The second stage…

  16. Fault Behavior and Characteristic Earthquakes: Examples From the Wasatch and San Andreas Fault Zones

    Microsoft Academic Search

    David P. Schwartz; Kevin J. Coppersmith

    1984-01-01

    Paleoseismological data for the Wasatch and San Andreas fault zones have led to the formulation of the characteristic earthquake model, which postulates that individual faults and fault segments tend to generate essentially the same size, or characteristic, earthquakes having a relatively narrow range of magnitudes near the maximum. Analysis of scarp-derived colluvium in trench exposures across the Wasatch fault provides estimates

  17. Migrating Fault Trees To Decision Trees For Real Time Fault Detection On International Space Station

    Microsoft Academic Search

    Charles Lee; Richard L. Alena; Peter Robinson

    2005-01-01

    Fault Tree Analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Complex systems often use fault trees to analyze faults. Fault diagnosis, when an error occurs, is performed by engineers and analysts through extensive examination of all data gathered during the mission. International Space

  18. Fault diagnosis system based on Dynamic Fault Tree Analysis of power transformer

    Microsoft Academic Search

    Jiang Guo; Kefei Zhang; Lei Shi; Kaikai Gu; Weimin Bai; Bing Zeng; Yajin Liu

    2012-01-01

    Firstly, this paper introduces the process of transformer fault diagnosis and the theory of DFTA, and then we apply DFTA to the field of transformer fault diagnosis. By establishing the fault tree of the transformer, a practical, easily extended, interactive and self-learning-enabled fault diagnosis system based on DFTA for the transformer is designed and developed. With the implementation and

  19. Monitoring and Diagnosis of Multiple Incipient Faults Using Fault Tree Induction

    E-print Network

    Madden, Michael

    This paper presents DE/IFT, a new fault diagnosis engine which is based on the authors' IFT algorithm for induction of fault trees. It learns from an examples database comprising sensor recordings

  20. Contribution of Identified Active Faults to Near Fault Seismic Hazard in the Flinders Ranges

    E-print Network

    Sandiford, Mike

    ... analysis at a site near an active fault in the Flinders Ranges. Two categories of earthquake sources were used to represent the seismic hazard at the site. The first consists of active faults, and used

  1. Design of an adaptive faults tolerant control: case of sensor faults

    E-print Network

    Paris-Sud XI, Université de

    Atef Khedher (LARA/ENIT). This paper presents a method for the design of a sensor fault-tolerant control. The faults are initially estimated using a proportional integral observer. A mathematical transformation

  2. Which Faults are Security Faults? Michael Gegick, Tao Xie, Laurie Williams, Pete Rotella

    E-print Network

    Young, R. Michael

    The subtleties associated with security faults can sometimes be missed by developers and testers. When developers encounter a fault and are unaware

  3. Further Improved Differential Fault Analysis on Camellia by Exploring Fault Width and

    E-print Network

    Xin-jie Zhao, Tao Wang. In this paper, we present two further improved differential fault analysis .../256 key hypotheses to 2^22.2 and 2^31.8 respectively. Index Terms: differential fault analysis, Feistel

  4. Differential Fault Analysis of AES using a Single Multiple-Byte Fault

    E-print Network

    Subidh Ali, Debdeep ... presents an improvement on a recently published differential fault analysis of AES that requires one ... proposed the idea of Differential Fault Analysis (DFA), based on differential cryptanalysis, to attack DES

  5. Effects of low-velocity fault zones on dynamic ruptures with nonelastic off-fault response

    E-print Network

    Duan, Benchun

    Using a finite element method for elastoplastic dynamic analysis, we examine the effects of a low-velocity fault zone (LVFZ) surrounding a fault on a spontaneous dynamic earthquake rupture. A Mohr

  6. Differential Fault Analysis of AES using a Single Multiple-Byte Fault

    E-print Network

    Subidh Ali, Debdeep ... This paper presents an improvement on a recently published differential fault analysis of AES that requires ... proposed the idea of Differential Fault Analysis (DFA), based on differential cryptanalysis, to attack DES

  7. A Novel Busbar Protection Based on Fault Component Integrated Impedance

    Microsoft Academic Search

    Jiale Suonan; Xuyang Deng; Guobing Song

    2010-01-01

    This paper describes a novel principle for protecting busbars. The principle uses the ratio between the fault component voltage and the fault component differential current of the busbar to detect faults, which is defined as the fault component integrated impedance in this paper. The fault component integrated impedance of an external fault reflects the capacitance impedance of the busbar whereas
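
    A rough numerical sketch of the ratio described above may help; the phasor values, the two labelled cases, and the printout are all hypothetical and are not taken from the paper (whose abstract is truncated here):

      import cmath

      def integrated_impedance(delta_u, delta_i_diff):
          # Fault-component integrated impedance: fault-component voltage phasor
          # divided by the fault-component differential current phasor.
          return delta_u / delta_i_diff

      # Hypothetical fault-component phasors (volts, amperes), i.e. measured
      # quantities with the pre-fault load components subtracted.
      external_case = integrated_impedance(complex(5000.0, 0.0), complex(0.0, 1.0))
      internal_case = integrated_impedance(complex(200.0, 50.0), complex(40.0, 5.0))

      for name, z in [("external fault", external_case), ("internal fault", internal_case)]:
          print(f"{name}: |Z| = {abs(z):7.1f} ohm, angle = {cmath.phase(z):+.2f} rad")

    In this toy setup the external case yields a large, nearly purely capacitive impedance (consistent with the abstract's statement that an external fault reflects the capacitance impedance of the busbar), while the internal case yields a much smaller value.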

  8. Algebraic Fault Analysis of Katan November 21, 2014

    E-print Network

    Frank-M. Quedenfeld, Ruhr University Bochum. ... for fault attacks and statistical and algebraic techniques to improve fault analysis in general. Our ... solving over F2, fault analysis, algebraic fault attack, filter for improved guessing, differential fault

  9. On the Optimality of Differential Fault Analyses on CLEFIA

    E-print Network

    Differential Fault Analysis is a powerful cryptanalytic tool to reveal secret keys ... injections to the lowest possible number reached to date. Keywords: CLEFIA, Differential Fault Analysis ... or hardware faults is called a fault attack. A Differential Fault Analysis (DFA) is a specific form of a fault

  10. InSAR measurements around active faults: creeping Philippine Fault and un-creeping Alpine Fault

    NASA Astrophysics Data System (ADS)

    Fukushima, Y.

    2013-12-01

    Recently, interferometric synthetic aperture radar (InSAR) time-series analyses have been frequently applied to measure the time-series of small and quasi-steady displacements in wide areas. Large efforts in the methodological developments have been made to pursue higher temporal and spatial resolutions by using frequently acquired SAR images and detecting more pixels that exhibit phase stability. While such a high resolution is indispensable for tracking displacements of man-made and other small-scale structures, it is not necessarily needed and can be unnecessarily computer-intensive for measuring the crustal deformation associated with active faults and volcanic activities. I apply a simple and efficient method to measure the deformation around the Alpine Fault in the South Island of New Zealand, and the Philippine Fault in Leyte Island. I use a small-baseline subset (SBAS) analysis approach (Berardino et al., 2002). Generally, the more we average the pixel values, the more coherent the signals are. Considering that, for the deformation around active faults, the spatial resolution can be as coarse as a few hundred meters, we can severely 'multi-look' the interferograms. The two applied cases in this study benefited from this approach; I could obtain the mean velocity maps on practically the entire area without discarding decorrelated areas. The signals could have been only partially obtained by standard persistent scatterer or single-look small-baseline approaches that are much more computer-intensive. In order to further increase the signal detection capability, it is sometimes effective to introduce a processing algorithm adapted to the signal of interest. In an InSAR time-series processing, one usually needs to set the reference point because interferograms are all relative measurements. It is difficult, however, to fix the reference point when one aims to measure long-wavelength deformation signals that span the whole analysis area. This problem can be solved by adding the displacement offset in each interferogram as a model parameter and solving the system of equations with the minimum norm condition. This way, the unknown offsets can be automatically determined. By applying this method to the ALOS/PALSAR data acquired over the Alpine Fault, I obtained the mean velocity map showing the right-lateral relative motion of the blocks north and south of the fault and the strain concentration (large velocity gradient) around the fault. The velocity gradient around the fault has along-fault variation, probably reflecting the variation in the fault locking depth. When one aims to detect fault creeps, i.e., displacement discontinuity in space, one can introduce additional parameters to describe the phase ramps in the interferograms and solve the system of equations again with the minimum norm condition. Then, the displacement discontinuity appears more clearly in the result at the cost of suppressing long-wavelength displacements. By applying this method to the ALOS/PALSAR data acquired over the Philippine Fault in Leyte Island, I obtained the mean velocity map showing fault creep at least in the northern and central parts of Leyte at a rate of around 10 mm/year.
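
    The offset-as-parameter idea in the last part of this abstract can be sketched numerically. The following toy example (acquisition dates, interferogram pairs, and synthetic displacements are all invented) models each interferogram as the displacement difference between its two dates plus an unknown per-interferogram offset, and relies on numpy's least-squares routine, which returns the minimum-norm solution for this rank-deficient system:

      import numpy as np

      # Hypothetical SBAS-style setup: 4 acquisition dates, 4 small-baseline pairs.
      n_dates = 4
      pairs = [(0, 1), (1, 2), (0, 2), (2, 3)]      # (master, slave) date indices

      # Synthetic truth: cumulative displacement per date plus per-pair offsets.
      true_disp = np.array([0.0, 3.0, 5.0, 9.0])    # e.g. mm
      true_offsets = np.array([0.5, -0.2, 0.1, 0.3])
      obs = np.array([true_disp[s] - true_disp[m] for m, s in pairs]) + true_offsets

      # Design matrix: first n_dates columns are date displacements,
      # remaining columns are the per-interferogram offsets.
      G = np.zeros((len(pairs), n_dates + len(pairs)))
      for k, (m, s) in enumerate(pairs):
          G[k, m], G[k, s], G[k, n_dates + k] = -1.0, 1.0, 1.0

      # Minimum-norm least-squares solution of the rank-deficient system.
      model, *_ = np.linalg.lstsq(G, obs, rcond=None)
      est_disp, est_offsets = model[:n_dates], model[n_dates:]
      print("fit residual norm:", np.linalg.norm(G @ model - obs))
      print("relative displacements:", np.round(est_disp - est_disp[0], 2))
      print("estimated offsets:", np.round(est_offsets, 2))

    With only four pairs the split between displacements and offsets is of course ambiguous; in a real time-series analysis many interferograms per date constrain the deformation signal while the offset parameters absorb the unknown references.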

  11. Fault-Tolerant Meshes with Small Degree

    Microsoft Academic Search

    Jehoshua Bruck; Robert Cypher; Ching-tien Ho

    1997-01-01

    This paper presents constructions for fault-tolerant, two-dimensional mesh architectures. The constructions are designed to tolerate k faults while maintaining a healthy n by n mesh as a subgraph. They utilize several novel techniques for obtaining trade-offs between the number of spare nodes and the degree of the fault-tolerant network. We consider both worst-case and random fault distributions. In terms

  12. Stacking Fault Energies of Tetrahedrally Coordinated Crystals

    Microsoft Academic Search

    S. Takeuchi; K. Suzuki

    1999-01-01

    The energies of the intrinsic stacking fault in 20 tetrahedrally coordinated crystals, determined by electron microscopy from the widths of extended dislocations, range from a few mJ/m2 to 300 mJ/m2. The reduced stacking fault energy (RSFE: stacking fault energy per bond perpendicular to the fault plane) has been found to have correlations with the effective charge, the charge redistribution index

  13. The arc-fault circuit protection

    Microsoft Academic Search

    G. Parise; L. Martirano; U. Grasselli; L. Benetti

    2001-01-01

    In electrical power systems, bolted short-circuits are rare and the fault usually involves arcing and burning; mostly, the limit value of the minimum short-circuit depends on the arcing fault. In AC low-voltage systems, the paper examines the arcing-fault branch circuits as weak points. Different protection measures are available against arc-faults. A first measure that can guarantee a probabilistic protection is allowed

  14. DiConic addition of failsafe fault-tolerance

    Microsoft Academic Search

    Ali Ebnenasir

    2007-01-01

    We present a divide-and-conquer method, called DiConic, for automatic addition of failsafe fault-tolerance to distributed programs, where a failsafe program guarantees to meet its safety specification even when faults occur. Specifically, instead of adding fault-tolerance to a program as a whole, we separately revise program actions so that the entire program becomes failsafe fault-tolerant. Our DiConic algorithm

  15. Fault Models for Quantum Mechanical Switching Networks

    E-print Network

    Jacob Biamonte; Jeff S. Allen; Marek A. Perkowski

    2010-01-19

    The difference between faults and errors is that, unlike faults, errors can be corrected using control codes. In classical test and verification one develops a test set separating a correct circuit from a circuit containing any considered fault. Classical faults are modelled at the logical level by fault models that act on classical states. The stuck fault model, thought of as a lead connected to a power rail or to a ground, is most typically considered. A classical test set complete for the stuck fault model propagates both binary basis states, 0 and 1, through all nodes in a network and is known to detect many physical faults. A classical test set complete for the stuck fault model allows all circuit nodes to be completely tested and verifies the function of many gates. It is natural to ask if one may adapt any of the known classical methods to test quantum circuits. Of course, classical fault models do not capture all the logical failures found in quantum circuits. The first obstacle faced when using methods from classical test is developing a set of realistic quantum-logical fault models. Developing fault models to abstract the test problem away from the device level motivated our study. Several results are established. First, we describe typical modes of failure present in the physical design of quantum circuits. From this we develop fault models for quantum binary circuits that enable testing at the logical level. The application of these fault models is shown by adapting the classical test set generation technique known as constructing a fault table to generate quantum test sets. A test set developed using this method is shown to detect each of the considered faults.
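
    The fault-table construction mentioned above can be illustrated classically in a few lines of Python; the two-gate circuit and the greedy covering used to pick a compact test set are hypothetical illustrations, not the procedure of the paper:

      from itertools import product

      # Hypothetical circuit: n1 = a XOR b, out = n1 AND c.
      NODES = ["a", "b", "c", "n1", "out"]

      def evaluate(vec, fault=None):
          # Simulate the circuit; fault is an optional (node, stuck_value) pair.
          sig = dict(zip("abc", vec))
          if fault and fault[0] in sig:
              sig[fault[0]] = fault[1]
          sig["n1"] = sig["a"] ^ sig["b"]
          if fault and fault[0] == "n1":
              sig["n1"] = fault[1]
          sig["out"] = sig["n1"] & sig["c"]
          if fault and fault[0] == "out":
              sig["out"] = fault[1]
          return sig["out"]

      faults = [(n, v) for n in NODES for v in (0, 1)]     # single stuck-at faults
      vectors = list(product((0, 1), repeat=3))

      # Fault table: for each vector, the set of faults it detects.
      table = {v: {f for f in faults if evaluate(v) != evaluate(v, f)} for v in vectors}

      # Greedy covering: repeatedly pick the vector detecting most remaining faults.
      remaining, test_set = set(faults), []
      while remaining:
          best = max(vectors, key=lambda v: len(table[v] & remaining))
          if not table[best] & remaining:
              break                            # leftover faults are undetectable
          test_set.append(best)
          remaining -= table[best]

      print("complete test set:", test_set)
      print("undetected faults:", remaining)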

  16. Community Fault Model (CFM) for Southern California

    Microsoft Academic Search

    Andreas Plesch; J. H. Shaw; Christine Benson; W. A. Bryant; Sara Carena; M. Cooke; J. Dolan; G. Fuis; E. Gath; L. Grant; E. Hauksson; T. Jordan; M. Kamerling; M. Legg; S. Lindvall; H. Magistrale; C. Nicholson; N. Niemi; M. Oskin; S. Perry; G. Planansky; T. Rockwell; P. Shearer; C. Sorlien; M. P. Suss; J. Suppe; J. Treiman; R. Yeats

    2007-01-01

    We present a new three-dimensional model of the major fault systems in southern California. The model describes the San Andreas fault and associated strike- slip fault systems in the eastern California shear zone and Peninsular Ranges, as well as active blind-thrust and reverse faults in the Los Angeles basin and Transverse Ranges. The model consists of triangulated surface representations (t-surfs)

  17. Neotectonics of the Sumatran fault, Indonesia

    Microsoft Academic Search

    Kerry Sieh; Danny Natawidjaja

    2000-01-01

    The 1900-km-long, trench-parallel Sumatran fault accommodates a significant amount of the right-lateral component of oblique convergence between the Eurasian and Indian/Australian plates from 10°N to 7°S. Our detailed map of the fault, compiled from topographic maps and stereographic aerial photographs, shows that unlike many other great strike-slip faults, the Sumatran fault is highly segmented. Cross-strike width of step overs between

  18. TFTR poloidal coil fault analysis

    SciTech Connect

    Pelovitz, M.

    1986-11-01

    A program and procedure are described which were developed to analyze the effect of anticipated TFTR operating faults when the machine was to be used in an operating mode higher than the design current rating of the poloidal coil system. The design criteria for this program included the definition of the most pessimistic operating fault scenarios, the development of an analogue program to model and observe the time history of the event and several comparison programs used to tally the maximum force condition of each coil during the run and to compare and tally the maximum coil currents, forces and ΔI²t stresses from several runs.

  19. Fault-crossing P delays, epicentral biasing, and fault behavior in Central California

    USGS Publications Warehouse

    Marks, S.M.; Bufe, C.G.

    1979-01-01

    The P delays across the San Andreas fault zone in central California have been determined from travel-time differences at station pairs spanning the fault, using off-fault local earthquake or quarry blast sources. Systematic delays as large as 0.4 sec have been observed for paths crossing the fault at depths of 5-10 km. These delays can account for the apparent deviation of epicenters from the mapped fault trace. The largest delays occur along the San Andreas fault between San Juan Bautista and Bear Valley and between Bitterwater Valley and Parkfield. Spatial variations in fault behavior correlate with the magnitude of the fault-crossing P delay. The delay decreases to the northwest of San Juan Bautista across the "locked" section of the San Andreas fault and also decreases to the southeast approaching Parkfield. Where the delay is large, seismicity is relatively high and the fault is creeping. © 1979.

  20. Fault Detection and Isolation of Actuator Faults in Overactuated Systems

    Microsoft Academic Search

    Nader Meskin; K. Khorasani

    2007-01-01

    This paper investigates the development of fault detection and isolation (FDI) filters for overactuated systems. Due to input channel coupling effects and dependencies in overactuated systems, the necessary condition for applying geometric FDI approaches is not satisfied. In this work a geometric FDI approach for linear systems is extended to overactuated systems. The proposed method is applied to an F-18 HARV

  1. Faulted dislocation loops in quenched aluminium

    Microsoft Academic Search

    J. W. Edington; R. E. Smallman

    1965-01-01

    Dislocation loops containing stacking faults have been observed in quenched aluminium using the electron microscope. It is found that lowering the quenching rate relaxes the stringent purity conditions governing the retention of faulted loops, such that a high proportion of the loops in oil-quenched aluminium (99·97 % purity) contain faults. The results are discussed in terms of the influence of

  2. RT-level fast fault simulator

    Microsoft Academic Search

    Krzysztof Sapiecha

    In this paper a new fast fault simulation technique is presented for calculation of fault propagation through HLPs (High Level Primitives). ROTDDs (Reduced Ordered Ternary Decision Diagrams) are used to describe HLP modules. The technique is implemented in the HTDD RT- level fault simulator. The simulator is evaluated with some ITC99 benchmarks. A hypothesis is proved that a test set

  3. Fault system polarity: A matter of chance?

    NASA Astrophysics Data System (ADS)

    Schöpfer, Martin; Childs, Conrad; Manzocchi, Tom; Walsh, John; Nicol, Andy; Grasemann, Bernhard

    2015-04-01

    Many normal fault systems and, on a smaller scale, fracture boudinage exhibit asymmetry so that one fault dip direction dominates. The fraction of throw (or heave) accommodated by faults with the same dip direction in relation to the total fault system throw (or heave) is a quantitative measure of fault system asymmetry and termed 'polarity'. It is a common belief that the formation of domino and shear band boudinage with a monoclinic symmetry requires a component of layer parallel shearing, whereas torn boudins reflect coaxial flow. Moreover, domains of parallel faults are frequently used to infer the presence of a common décollement. Here we show, using Distinct Element Method (DEM) models in which rock is represented by an assemblage of bonded circular particles, that asymmetric fault systems can emerge under symmetric boundary conditions. The pre-requisite for the development of domains of parallel faults is however that the medium surrounding the brittle layer has a very low strength. We demonstrate that, if the 'competence' contrast between the brittle layer and the surrounding material ('jacket', or 'matrix') is high, the fault dip directions and hence fault system polarity can be explained using a random process. The results imply that domains of parallel faults are, for the conditions and properties used in our models, in fact a matter of chance. Our models suggest that domino and shear band boudinage can be an unreliable shear-sense indicator. Moreover, the presence of a décollement should not be inferred on the basis of a domain of parallel faults only.
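
    The chance argument can be made concrete with a toy simulation (the fault counts, the uniform throw distribution, and the 0.8 asymmetry threshold below are arbitrary choices for illustration, not values from the study): if each fault is independently assigned a dip direction by a coin flip, small fault populations frequently end up strongly polarized purely by chance.

      import random

      def fault_system_polarity(n_faults, rng):
          # Assign each fault a random dip direction (+1 or -1) and a random throw,
          # then return the fraction of total throw taken by the dominant direction.
          throws = {+1: 0.0, -1: 0.0}
          for _ in range(n_faults):
              throws[rng.choice((+1, -1))] += rng.random()
          return max(throws.values()) / (throws[+1] + throws[-1])

      rng = random.Random(42)
      for n in (5, 20, 100):                     # hypothetical fault-system sizes
          polarities = [fault_system_polarity(n, rng) for _ in range(2000)]
          frac_asym = sum(p > 0.8 for p in polarities) / len(polarities)
          print(f"{n:3d} faults: mean polarity {sum(polarities) / len(polarities):.2f}, "
                f"P(polarity > 0.8) = {frac_asym:.2f}")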

  4. Fault-related clay authigenesis along the Moab Fault: Implications for calculations of fault rock composition and mechanical and hydrologic fault zone properties

    USGS Publications Warehouse

    Solum, J.G.; Davatzes, N.C.; Lockner, D.A.

    2010-01-01

    The presence of clays in fault rocks influences both the mechanical and hydrologic properties of clay-bearing faults, and therefore understanding the origin and distribution of clays in fault rocks is of great importance for defining fundamental properties of faults in the shallow crust. Field mapping shows that layers of clay gouge and shale smear are common along the Moab Fault, from exposures with throws ranging from 10 to ~1000 m. Elemental analyses of four locations along the Moab Fault show that fault rocks are enriched in clays at R191 and Bartlett Wash, but that this clay enrichment occurred at different times and was associated with different fluids. Fault rocks at Corral and Courthouse Canyons show little difference in elemental composition from adjacent protolith, suggesting that formation of fault rocks at those locations is governed by mechanical processes. Friction tests show that these authigenic clays result in fault zone weakening, and potentially influence the style of failure along the fault (seismogenic vs. aseismic) and potentially the amount of fluid loss associated with coseismic dilation. Scanning electron microscopy shows that authigenesis promotes the continuity of slip surfaces, thereby enhancing seal capacity. The occurrence of the authigenesis, and its influence on the sealing properties of faults, highlights the importance of determining the processes that control this phenomenon. © 2010 Elsevier Ltd.

  5. Conditional Fault Diameter of Star Graph Networks

    Microsoft Academic Search

    Yordan Rouskov; Shahram Latifi; Pradip K. Srimani

    1996-01-01

    It is well known that star graphs are strongly resilient like the n-cubes in the sense that they are optimally fault tolerant and the fault diameter is increased only by one in the presence of maximum number of allowable faults. We investigate star graphs under the conditions of forbidden faulty sets, where all the neighbors of any node cannot be faulty simultaneously;

  6. Fault detection methods: A literature survey

    Microsoft Academic Search

    Dubravko Miljkovic

    2011-01-01

    Fault detection plays an important role in high-cost and safety-critical processes. Early detection of process faults can help avoid abnormal event progression. Fault detection can be accomplished through various means. This paper presents a literature survey of major methods and the current state of research in the field with a selection of important practical applications.

  7. Active faulting and tectonics in China

    Microsoft Academic Search

    Paul Tapponnier; Peter Molnar

    1977-01-01

    We present a study of the active tectonics of China based on an interpretation of Landsat (satellite) imagery and supplemented with seismic data. Several important fault systems can be identified, and most are located in regions of high historical seismicity. We deduce the type and sense of faulting from adjacent features seen on these photos, from fault plane solutions of

  8. Salton Sea Satellite Image Showing Fault Slip

    USGS Multimedia Gallery

    Landsat satellite image (LE70390372003084EDC00) showing location of surface slip triggered along faults in the greater Salton Trough area. Red bars show the generalized location of 2010 surface slip along faults in the central Salton Trough and many additional faults in the southwestern section of t...

  9. FAULT PREDICTIVE CONTROL OF COMPACT DISK PLAYERS

    E-print Network

    Wickerhauser, M. Victor

    Peter Fogh Odgaard, Mladen Victor Wickerhauser. ... playing certain discs with surface faults like scratches and fingerprints. The problem is to be found in other publications of the first author. This scheme is based on an assumption that the surface faults do

  10. The Fault Detection Problem Andreas Haeberlen

    E-print Network

    Pennsylvania, University of

    Andreas Haeberlen, Petr Kuznetsov (Max Planck Institute). One of the most important challenges in distributed computing is ensuring that services are correct and available despite faults. Recently it has been argued that fault detection can be factored out from computation, and that a generic

  11. The Fault Detection Problem Andreas Haeberlen

    E-print Network

    Pennsylvania, University of

    Andreas Haeberlen, Petr Kuznetsov. One of the most important challenges in distributed computing is ensuring that services are correct and available despite faults. Recently it has been argued that fault detection can be factored out from computation, and that a generic

  12. Field Trip to the Hayward Fault Zone

    NSDL National Science Digital Library

    This guide provides directions to locations in Hayward, California where visitors can see evidence of creep along the Hayward Fault. There is also information about the earthquake hazards associated with fault zones, earthquake prediction, and landforms associated with offset along a fault. The guide is available in downloadable, printable format (PDF) in two resolutions

  13. Ground Fault--A Health Hazard

    ERIC Educational Resources Information Center

    Jacobs, Clinton O.

    1977-01-01

    A ground fault is especially hazardous because the resistance through which the current is flowing to ground may be sufficient to cause electrocution. The Ground Fault Circuit Interrupter (G.F.C.I.) protects 15 and 25 ampere 120 volt circuits from ground fault condition. The design and examples of G.F.C.I. functions are described in this article.…

  14. A fault tolerance approach to computer viruses

    Microsoft Academic Search

    Mark K. Joseph; Algirdas Avižienis

    1988-01-01

    Extensions of program flow monitors and n-version programming can be combined to provide a solution to the detection and containment of computer viruses. The consequence is that a computer can tolerate both deliberate faults and random physical faults by one common mechanism. Specifically, the technique detects control flow errors due to physical faults as well as the presence of viruses

  15. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 2013-01-01 2013-01-01 false Fault. 845.302 Section 845.302 Administrative...Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

  16. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 2012-04-01 2012-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...SERVICE PENSION SYSTEM (FSPS) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

  17. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 2014-01-01 2014-01-01 false Fault. 845.302 Section 845.302 Administrative...Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

  18. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 2011-01-01 2011-01-01 false Fault. 831.1402 Section 831.1402 Administrative...for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of...

  19. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 2010-01-01 2010-01-01 false Fault. 845.302 Section 845.302 Administrative...Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

  20. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...2013-07-01 2013-07-01 false Fault areas. 258.13 Section 258.13... Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral...located within 200 feet (60 meters) of a fault that has had displacement in...

  1. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...2011-04-01 2011-04-01 false Fault. 255.11 Section 255.11 Employees...RECOVERY OF OVERPAYMENTS § 255.11 Fault. (a) Before recovery of an overpayment...that the overpaid individual was without fault in causing the overpayment. If...

  2. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 2014-04-01 2014-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...SERVICE PENSION SYSTEM (FSPS) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

  3. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 2013-04-01 2013-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...SERVICE PENSION SYSTEM (FSPS) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

  4. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...2014-04-01 2012-04-01 true Fault. 255.11 Section 255.11 Employees...RECOVERY OF OVERPAYMENTS § 255.11 Fault. (a) Before recovery of an overpayment...that the overpaid individual was without fault in causing the overpayment. If...

  5. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 2012-01-01 2012-01-01 false Fault. 831.1402 Section 831.1402 Administrative...for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of...

  6. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 2014-01-01 2014-01-01 false Fault. 831.1402 Section 831.1402 Administrative...for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of...

  7. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...2012-04-01 2012-04-01 false Fault. 255.11 Section 255.11 Employees...RECOVERY OF OVERPAYMENTS § 255.11 Fault. (a) Before recovery of an overpayment...that the overpaid individual was without fault in causing the overpayment. If...

  8. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 2013-01-01 2013-01-01 false Fault. 831.1402 Section 831.1402 Administrative...for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of...

  9. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 2011-04-01 2011-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...SERVICE PENSION SYSTEM (FSPS) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

  10. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...2011-07-01 2011-07-01 false Fault areas. 258.13 Section 258.13... Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral...located within 200 feet (60 meters) of a fault that has had displacement in...

  11. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...2013-04-01 2012-04-01 true Fault. 255.11 Section 255.11 Employees...RECOVERY OF OVERPAYMENTS § 255.11 Fault. (a) Before recovery of an overpayment...that the overpaid individual was without fault in causing the overpayment. If...

  12. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 2010-01-01 2010-01-01 false Fault. 831.1402 Section 831.1402 Administrative...for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of...

  13. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 2010-04-01 2010-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...SERVICE PENSION SYSTEM (FSPS) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

  14. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 2011-01-01 2011-01-01 false Fault. 845.302 Section 845.302 Administrative...Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

  15. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...2014-07-01 2014-07-01 false Fault areas. 258.13 Section 258.13... Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral...located within 200 feet (60 meters) of a fault that has had displacement in...

  16. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 2012-01-01 2012-01-01 false Fault. 845.302 Section 845.302 Administrative...Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

  17. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...2012-07-01 2011-07-01 true Fault areas. 258.13 Section 258.13... Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral...located within 200 feet (60 meters) of a fault that has had displacement in...

  18. Recurrent Faults in Objective Test Items.

    ERIC Educational Resources Information Center

    Stratton, N. J.

    1981-01-01

    A study of recurrent faults in multiple-choice items in Britain's Open University's computer-marked tests has led to a procedure for avoiding these faults. A description of the study covers the incidence and sources of faults (obviousness, memorization, unclear instruction, ambiguity, distractors, inter-item effects, and structure) and…

  19. DIPLOMARBEIT Fault Injection for Diagnosis and Maintenance

    E-print Network

    ... view over the system, and analysis in order to assess the health state of the system. A fault injection ...ficient for meaningful statistical analysis. An embedded application synchronizes the fault injection

  20. Failure and Fault Analysis for Software Debugging

    Microsoft Academic Search

    Richard A. DeMillo; Hsin Pan; Eugene H. Spafford

    1997-01-01

    Most studies of software failures and faults have done little more than classify failures and faults collected from long-term projects. The authors propose a model to analyze failures and faults for debugging purposes. In the model, they define “failure modes” and “failure types” to identify the existence of program failures and the nature of the program failures, respectively. The goal

  1. Formal Fault Tree Analysis: Practical Experiences

    E-print Network

    Paris-Sud XI, Université de

    AVoCS 2006. Frank Ortmeier, Gerhard Schellhorn. ... spread safety analysis methods: fault tree analysis (FTA). Formal FTA allows one to rigorously reason about FTA by using model checking. Keywords: fault tree analysis, dependability, safety analysis, formal

  2. High temperature superconducting fault current limiter

    DOEpatents

    Hull, J.R.

    1997-02-04

    A fault current limiter for an electrical circuit is disclosed. The fault current limiter includes a high temperature superconductor in the electrical circuit. The high temperature superconductor is cooled below its critical temperature to maintain the superconducting electrical properties during operation as the fault current limiter. 15 figs.

  3. A novel fault attack against ECDSA

    Microsoft Academic Search

    Alessandro Barenghi; Guido Bertoni; Andrea Palomba; Ruggero Susella

    2011-01-01

    A novel fault attack against ECDSA is proposed in this work. It allows one to retrieve the secret signing key by means of injecting faults during the computation of the signature primitive. The proposed method relies on faults injected during a multiplication employed to perform the signature recombination at the end of the ECDSA signing algorithm. Exploiting the faulty signatures, it

  4. Fault tree analysis with fuzzy gates

    Microsoft Academic Search

    HanSuk Pan; WonYoung Yun

    1997-01-01

    Fault tree analysis is an important tool for analyzing system reliability. Fault trees consist of gates and events. Gates represent relationships between events. In fault tree analysis, AND and OR gates have been used as the typical gates, but it is often difficult to model the system structure with these two gates because in many cases we do not have exact knowledge of the system
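
    For readers unfamiliar with the gate semantics referred to above, the following crisp-probability sketch evaluates a small fault tree with the classical AND/OR gates (the tree, the basic-event probabilities, and the independence assumption are all hypothetical; the paper's fuzzy-gate formulation is not reproduced here):

      from math import prod

      # A node is ("AND" | "OR", [children]) or ("BASIC", probability).
      # Hypothetical tree: top event = pump failure AND (power OR controller failure).
      tree = ("AND", [
          ("BASIC", 0.02),                         # pump failure
          ("OR", [("BASIC", 0.01),                 # power supply failure
                  ("BASIC", 0.05)]),               # controller failure
      ])

      def top_event_probability(node):
          # Evaluate a fault tree of independent basic events with AND/OR gates.
          kind, payload = node
          if kind == "BASIC":
              return payload
          child_p = [top_event_probability(c) for c in payload]
          if kind == "AND":
              return prod(child_p)                               # all inputs occur
          if kind == "OR":
              return 1.0 - prod(1.0 - p for p in child_p)        # at least one occurs
          raise ValueError(f"unknown gate {kind}")

      print(f"top event probability: {top_event_probability(tree):.5f}")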

  5. Fault collapsing is the process of reducing the number of faults by using redundance and equivalence/dominance relationships among faults

    E-print Network

    Al-Asaad, Hussain

    Fault collapsing is the process of reducing the number of faults by using redundance and equivalence/dominance relationships among faults. Exact global fault collapsing can be easily applied ... fault collapsing method for library modules that uses both binary decision diagrams and fault
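
    Equivalence collapsing, one of the two relationships named above, is easy to show on a single AND gate (this toy example is not the paper's BDD-based library-module method, and dominance collapsing is not shown): faults whose faulty output functions are identical over all input vectors can share a single representative.

      from itertools import product

      # Single AND gate, out = a AND b, with stuck-at faults on a, b and out.
      FAULTS = [(n, v) for n in ("a", "b", "out") for v in (0, 1)]

      def faulty_function(fault):
          # Output of the faulty gate over all input combinations, as a tuple.
          out = []
          for a, b in product((0, 1), repeat=2):
              sig = {"a": a, "b": b}
              if fault[0] in sig:
                  sig[fault[0]] = fault[1]
              o = sig["a"] & sig["b"]
              if fault[0] == "out":
                  o = fault[1]
              out.append(o)
          return tuple(out)

      # Group faults with identical faulty functions into equivalence classes.
      classes = {}
      for f in FAULTS:
          classes.setdefault(faulty_function(f), []).append(f)

      for func, members in classes.items():
          print(f"faulty output {func}: equivalent faults {members}")
      print(f"collapsed {len(FAULTS)} faults into {len(classes)} classes")

    For the AND gate this reproduces the textbook result that a stuck-at-0 on either input and on the output are all equivalent, so six single stuck-at faults collapse to four classes.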

  6. Velocity contrast along the Calaveras fault from analysis of fault zone head waves generated by repeating earthquakes

    E-print Network

    Black, Robert X.

    Velocity contrast along the Calaveras fault from analysis of fault zone head waves generated by repeating earthquakes, Geophys. Res. Lett., 35. ... contrast along the Calaveras fault that ruptured during the 1984 Morgan Hill earthquake using fault zone

  7. Dynamic fault-tree models for fault-tolerant computer systems

    Microsoft Academic Search

    Joanne Bechta Dugan; Salvatore J. Bavuso; Mark A. Boyd

    1992-01-01

    Reliability analysis of fault-tolerant computer systems for critical applications is complicated by several factors. Systems designed to achieve high levels of reliability frequently employ high levels of redundancy, dynamic redundancy management, and complex fault and error recovery techniques. This paper describes dynamic fault-tree modeling techniques for handling these difficulties. Three advanced fault-tolerant computer systems are described: a fault-tolerant parallel processor,

  8. Extraction and Simulation of Realistic CMOS Faults Using Inductive Fault Analysis

    Microsoft Academic Search

    John Paul Shen; F. Joel Ferguson

    1988-01-01

    FXT is a software tool which implements inductive fault analysis for CMOS circuits. It extracts a comprehensive list of circuit-level faults for any given CMOS circuit and ranks them according to their relative likelihood of occurrence. Five commercial CMOS circuits are analyzed using FXT. Of the extracted faults, approximately 50% can be modeled by single-line stuck-at 0/1 fault model. Faults

  9. Equivalence of robust delay-fault and single stuck-fault test generation

    Microsoft Academic Search

    Alexander Saldanha; Robert K. Brayton; Alberto L. Sangiovanni-Vincentelli

    1992-01-01

    A link between the problems of robust delay-fault and single stuck-fault test generation is established. In particular, it is proved that all the robust test vector pairs for any path delay-fault in a network are directly obtained by all the test vectors for a corresponding single stuck-fault in a modified network. Since single stuck-fault test generation is a well solved

  10. Novel neural networks-based fault tolerant control scheme with fault alarm.

    PubMed

    Shen, Qikun; Jiang, Bin; Shi, Peng; Lim, Cheng-Chew

    2014-11-01

    In this paper, the problem of adaptive active fault-tolerant control for a class of nonlinear systems with unknown actuator fault is investigated. The actuator fault is assumed to have no traditional affine appearance of the system state variables and control input. The useful property of the basis function of the radial basis function neural network (NN), which will be used in the design of the fault tolerant controller, is explored. Based on the analysis of the design of normal and passive fault tolerant controllers, by using the implicit function theorem, a novel NN-based active fault-tolerant control scheme with fault alarm is proposed. Compared with results in the literature, the fault-tolerant control scheme can minimize the time delay between fault occurrence and accommodation, which is called the time delay due to fault diagnosis, and reduce the adverse effect on system performance. In addition, the FTC scheme has the advantages of a passive fault-tolerant control scheme as well as the traditional active fault-tolerant control scheme's properties. Furthermore, the fault-tolerant control scheme requires no additional fault detection and isolation model, which is necessary in the traditional active fault-tolerant control scheme. Finally, simulation results are presented to demonstrate the efficiency of the developed techniques. PMID:25014982
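
    The role played by the radial basis function network in such schemes is to approximate the unknown fault term. The sketch below is a generic, offline RBF approximation of a made-up fault profile (the centers, width, data, and least-squares weight fit are all assumptions for illustration; the paper's adaptive update law and controller are not reproduced):

      import numpy as np

      def rbf_features(x, centers, width):
          # Gaussian radial basis functions evaluated at scalar inputs x.
          return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

      def unknown_fault(x):
          # Hypothetical actuator-fault profile to be approximated.
          return 0.5 * np.tanh(2.0 * x) + 0.1 * x

      rng = np.random.default_rng(0)
      x_train = rng.uniform(-2.0, 2.0, 200)
      y_train = unknown_fault(x_train) + 0.01 * rng.standard_normal(200)

      centers = np.linspace(-2.0, 2.0, 15)          # fixed RBF centers
      Phi = rbf_features(x_train, centers, width=0.4)

      # Offline least-squares fit stands in for an online adaptive weight law.
      weights, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

      x_test = np.linspace(-2.0, 2.0, 9)
      approx = rbf_features(x_test, centers, 0.4) @ weights
      print("max approximation error:", float(np.max(np.abs(approx - unknown_fault(x_test)))))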

  11. Predeployment validation of fault-tolerant systems through software-implemented fault insertion

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1989-01-01

    The fault injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey is presented of validation methodologies. The need for fault insertion based on validation methodologies is demonstrated. The origins and models of faults, and motivation for the FIAT concept are reviewed. FIAT employs a validation methodology which builds confidence in the system through first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults or the manifestation of faults to be inserted by either seeding faults into memory or triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving insertion of faults. There is a common system interface which allows ease of use to decrease experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are shown by two example experiments, each using a different fault-tolerance strategy.
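
    Seeding a fault into memory, as described above, can be illustrated with a very small stand-alone sketch (the payload, the CRC-based detector, and the random fault location are hypothetical and unrelated to the actual FIAT tooling):

      import random
      import zlib

      def inject_bit_flip(data, rng):
          # Emulate a memory fault by flipping one randomly chosen bit in place.
          byte = rng.randrange(len(data))
          bit = rng.randrange(8)
          data[byte] ^= 1 << bit
          return byte, bit

      # Hypothetical "memory image" protected by a CRC32 error-detection check.
      payload = bytearray(b"sensor frame 0042: 12.5 C, 101.3 kPa")
      reference_crc = zlib.crc32(payload)

      rng = random.Random(7)
      byte, bit = inject_bit_flip(payload, rng)

      detected = zlib.crc32(payload) != reference_crc
      print(f"flipped bit {bit} of byte {byte}; error detected by CRC: {detected}")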

  12. Analysis of the ecosystem structure of Laguna Alvarado, western Gulf of Mexico, by means of a mass balance model

    NASA Astrophysics Data System (ADS)

    Cruz-Escalona, V. H.; Arreguín-Sánchez, F.; Zetina-Rejón, M.

    2007-03-01

    Alvarado is one of the most productive estuary-lagoon systems in the Mexican Gulf of Mexico. It has great economic and ecological importance due to high fisheries productivity and because it serves as a nursery, feeding, and reproduction area for numerous populations of fishes and crustaceans. Because of this, extensive studies have focused on biology, ecology, fisheries (e.g. shrimp, oysters) and other biological components of the system during the last few decades. This study presents a mass-balanced trophic model for Laguna Alvarado to determine its structure and functional form, and to compare it with similar coastal systems of the Gulf of Mexico and Mexican Pacific coast. The model, based on the software Ecopath with Ecosim, consists of eighteen fish groups, seven invertebrate groups, and one group each of sharks and rays, marine mammals, phytoplankton, sea grasses and detritus. The acceptability of the model is indicated by the pedigree index (0.5), which ranges from 0 to 1 based on the quality of input data. The highest trophic level was 3.6 for marine mammals and snappers. Total system throughput reached 2680 t km^-2 year^-1, of which total consumption made up 47%, respiratory flows made up 37% and flows to detritus made up 16%. The total system production was higher than consumption, and net primary production higher than respiration. The mean transfer efficiency was 13.8%. The mean trophic level of the catch was 2.3 and the primary production required to sustain the catch was estimated at 31 t km^-2 yr^-1. Ecosystem overhead was 2.4 times the ascendancy. Results suggest a balance between primary production and consumption. In contrast with other Mexican coastal lagoons, Laguna Alvarado differs strongly in relation to the primary source of energy; here the primary producers (seagrasses) are more important than detritus pathways. This fact can be interpreted as a response to mangrove deforestation, overfishing, etc. Future work might include the compilation of fishing and biomass time trends to develop historical verification and fitting of temporal simulations.

  13. Spatial distribution of Late Holocene sediment infill controlled by lake internal depositional dynamics, Laguna Potrok Aike (southern Patagonia, Argentina)

    NASA Astrophysics Data System (ADS)

    Kastner, Stephanie; Ohlendorf, Christian; Haberzettl, Torsten; Lücke, Andreas; Maidana, Nora I.; Mayr, Christoph; Schäbitz, Frank; Zolitschka, Bernd

    2010-05-01

    The maar Laguna Potrok Aike (51°S, 70°W) is situated in the dry steppe environment of southern Patagonia. This 100 m deep lake is a palaeolimnological key site among the emerging terrestrial climate archives of the southern hemisphere and therefore was chosen as an ICDP drilling site. Interdisciplinary multi-proxy sediment studies document the sensitivity of this lacustrine record to palaeoclimatic and palaeoecological variability and inferred a close relation of hydrological variations of the lake to fluctuations of the Southern Hemispheric Westerlies. This study presents analyses of a dense grid of 63 gravity cores from the lake floor documenting processes of late Holocene sediment distribution in the lake. Using X-ray fluorescence and magnetic susceptibility scanning data, all cores were correlated and linked to a previously established age model (Haberzettl et al. 2005). Thereafter, multi-proxy investigations of selected Late Holocene time windows were conducted. Surface sediment samples were taken from all cores and from 40 additional shoreline samples. The scanning profiles do not allow unequivocal correlation of profundal and littoral cores across the steep slopes. Thus, sub-sampling of five selected time intervals covering distinct lake level stages back to AD 1380 was restricted to 43 well-correlated cores from the deep basin. Geochemical, sedimentological, palynological, diatomological and stable isotope data were used to interpolate distribution maps for all these parameters and for the selected time slices by an exact point kriging method. The dominance of westerly winds strongly influences the spatial sediment distribution patterns. Modern sediment analyses point to the influence of wave action for littoral areas and sediment relocation to the profundal by wind-driven internal currents. Furthermore, the surrounding geology and geomorphology distinctively influence sediment characteristics. The sub-recent spatial sediment distribution is interpreted in the context of these modern processes. Depositional dynamics are modified by varying lake levels and changing wind patterns during the selected late Holocene time sections. Distribution patterns in the deep basin reveal intensified sediment redistribution during lake level low stands and strengthened winds following the Little Ice Age (around AD 1960). In contrast, Little Ice Age (around AD 1800) conditions with a lake level high stand and less intense westerly winds result in a more homogeneous sediment distribution within the deep central basin. References Haberzettl, T. et al. (2005), Climatically induced lake level changes during the last two millennia as reflected in sediments of Laguna Potrok Aike, southern Patagonia (Santa Cruz, Argentina), JOPL, 33: 283-302.

  14. Limnology in El Dorado: some surprising aspects of the regulation of phytoplankton productive capacity in a high-altitude Andean lake (Laguna de Guatavita, Colombia).

    PubMed

    Donato, Jhon; Jimenez, Paola; Reynolds, Colin

    2012-09-01

    High-altitude mountain lakes remain understudied, mostly because of their relative inaccessibility. Laguna de Guatavita, a small, equatorial, high-altitude crater lake in the Eastern Range of the Colombian Andes, was once of high cultural importance to pre-Columbian inhabitants, the original location of the legendary El Dorado. We investigated the factors regulating the primary production in Laguna de Guatavita (4°58'50" N, 73°46'43" W; alt. 2935 m a.s.l.; area: 0.11 km2; maximum depth: 30 m), during a series of three intensive field campaigns, which were conducted over a year-long period in 2003-2004. In each, standard profiles of temperature, oxygen concentration and light intensity were determined on each of 16-18 consecutive days. Samples were collected and analysed for chlorophyll and for biologically-significant solutes in GF/F-filtered water (NH4+, NO3(-), NO2(-); soluble reactive phosphorus). Primary production was also determined, by oxygen generation, on each day of the campaign. Our results showed that the productive potential of the lake was typically modest (campaign averages of 45-90 mg C/m2.h) but that many of the regulating factors were not those anticipated intuitively. The lake is demonstrably meromictic, reminiscent of karstic dolines in higher latitudes, its stratification being maintained by solute-concentration gradients. Light penetration is poor, attributable to the turbidity owing to fine calcite and other particulates in suspension. Net primary production in the mixolimnion of Laguna de Guatavita is sensitive to day-to-day variations in solar irradiance at the surface. However, deficiencies in nutrient availability, especially nitrogen, also constrain the capacity of the lake to support a phytoplankton. We deduced that Laguna de Guatavita is something of a limnological enigma, atypical of the common anticipation of a "mountain lake". While doubtlessly not unique, comparable descriptions of similar sites elsewhere are sufficiently rare to justify the presentation of the data from Laguna de Guatavita that our studies have revealed so far. PMID:23025073

  15. Stacking fault energies in aluminium

    Microsoft Academic Search

    B. Hammer; K. W. Jacobsen; V. Milman; M. C. Payne

    1992-01-01

    The twin, intrinsic and extrinsic stacking fault energies together with the FCC-HCP structural energy difference are calculated for Al by means of the total energy pseudopotential method. The influence of supercell geometry is controlled by extrapolating the calculated data to infinite cell size. All calculations include full interplanar relaxations and the final inter-planar separations are presented and shown to vary

  16. APPROACHES TO SOFTWARE FAULT TOLERANCE

    E-print Network

    Newcastle upon Tyne, University of

    months ago I had the honour of speaking at the INRIA 25th Anniversary Conference. Now I have the equally I feel so at home here at LAAS, and partly because it seems appropriate for an anniversary celebration, I will instead discuss how ideas on software fault tolerance originated and how work on them has

  17. Fault Lines in the Atlantic

    Microsoft Academic Search

    Wm. S. Bruce

    1908-01-01

    IN Prof. J. Milne's discourse at the Royal Institution which appeared in NATURE of April 23 is given an interesting map on p. 593 showing the folds and probable direction of fault lines in the Atlantic. In that map is shown the mid-Atlantic "rise" extending to about 40° S. The map, however, would have been more interesting had Prof. Milne

  18. Geometric Analyses of Rotational Faults.

    ERIC Educational Resources Information Center

    Schwert, Donald Peters; Peck, Wesley David

    1986-01-01

    Describes the use of analysis of rotational faults in undergraduate structural geology laboratories to provide students with applications of both orthographic and stereographic techniques. A demonstration problem is described, and an orthographic/stereographic solution and a reproducible black model demonstration pattern are provided. (TW)

  19. Fault Diagnosis with Dynamic Observers

    E-print Network

    Cassez, Franck

    2010-01-01

    In this paper, we review some recent results about the use of dynamic observers for fault diagnosis of discrete event systems. Fault diagnosis consists in synthesizing a diagnoser that observes a given plant and identifies faults in the plant as soon as possible after their occurrence. Existing literature on this problem has considered the case of fixed static observers, where the set of observable events is fixed and does not change during execution of the system. In this paper, we consider dynamic observers: an observer can "switch" sensors on or off, thus dynamically changing the set of events it wishes to observe. It is known that checking diagnosability (i.e., whether a given observer is capable of identifying faults) can be solved in polynomial time for static observers, and we show that the same is true for dynamic ones. We also solve the problem of dynamic observers' synthesis and prove that a most permissive observer can be computed in doubly exponential time, using a game-theoretic approach. We furt...

  20. HARNESS and fault tolerant MPI

    Microsoft Academic Search

    Graham E. Fagg; Antonin Bukovsky; Jack J Dongarra

    2001-01-01

    Initial versions of MPI were designed to work efficiently on multi-processors which had very little job control and thus static process models. Subsequently forcing them to support a dynamic process model would have affected their performance. As current HPC systems increase in size with greater potential levels of individual node failure, the need arises for new fault tolerant systems to

  1. Deep pulverization along active faults ?

    NASA Astrophysics Data System (ADS)

    Doan, M.

    2013-12-01

    Pulverization is an intense form of damage observed along some active faults. Rarely found in the field, it has been associated with dynamic damage produced by large earthquakes. Pulverization has so far only been described at the ground surface, consistent with the high-frequency tensile loading expected for earthquakes occurring along bimaterial faults. However, we discuss here a series of hints suggesting that pulverization is also expected several hundreds of meters deep. In the deep well drilled within the Nojima fault after the 1995 Kobe earthquake, thin sections reveal non-localized damage, with microfractures pervading a sample, but with little shear disturbing the initial microstructure. In the SAFOD borehole drilled near Parkfield, Wiersberg and Erzinger (2008), who performed gas monitoring while drilling, found large amounts of H2 gas in the sandstone west of the fault. They attribute this high H2 concentration to a mechanochemical origin, in accordance with examples of diffuse microfracturing found in thin sections from cores of SAFOD phase 3 and with geophysical data from logs. High strain rate experiments in both dry (Yuan et al., 2011) and wet samples (Forquin et al., 2010) show that even under confining pressures of several tens of megapascals, diffuse damage similar to pulverization is possible. This could explain the occurrence of pulverization at depth.

  2. Fault-tolerant distributed reconnaissance

    Microsoft Academic Search

    Adrian P. Lauf; William H. Robinson

    2010-01-01

    This paper describes a method to efficiently canvass an area of interest using distributed sensing methods, assisted by fault-tolerant resource management. By implementing multiple aircraft in an assessment configuration, aerial monitoring and diverse sensing can be accomplished through the use of ad-hoc networking principles; aircraft act as nodes, each being a distributed agent in the network. Combined with a method

  3. Fault Tolerance for Wireless Networks

    Microsoft Academic Search

    Kevin M. Somervill

    Wireless communications and technology have established themselves as a ubiquitous facet of daily life. Mobile users have grown to depend on wireless technology in small portable computers, satellite communications, and wireless networks [1], establishing a need for wireless communications to operate in a robust and reliable manner. As part of a semester research effort, the topic of fault tolerance for

  4. Denali Fault: Black Rapids Glacier

    USGS Multimedia Gallery

    View eastward along Black Rapids Glacier. The Denali fault follows the trace of the glacier. These very large rockslides went a mile across the glacier on the right side. Investigations of the headwall of the middle landslide indicate a volume at least as large as that which fell, has dropped a mete...

  5. Deconstructing "Technological to a Fault."

    ERIC Educational Resources Information Center

    Morris, Edward K.

    1991-01-01

    This essay presents a deconstruction of the phrase "technological to a fault" as it relates to applied behavior analysis. The essay discusses the imbalance between analysis as demonstration and analysis as discovery, offers a consequence and a cause, and examines the relationship of discovery and demonstration to behavior-analytic epistemology.…

  6. Cell boundary fault detection system

    DOEpatents

    Archer, Charles Jens (Rochester, MN); Pinnow, Kurt Walter (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian Edward (Rochester, MN)

    2011-04-19

    An apparatus and program product determine a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.

  7. Geologic map + fault mechanics problem set

    NSDL National Science Digital Library

    John Singleton

    This exercise requires students to answer some questions about stress and fault mechanics that relate to geologic maps. In part A) students must draw a cross section and Mohr circles and make some calculations to explain the slip history and mechanics of two generations of normal faults. In part B) students interpret the faulting history and fault mechanics of the Yerington District, Nevada, based on a classic geologic map and cross section by John Proffett. keywords: geologic map, cross section, normal faults, Mohr circle, Coulomb failure, Andersonian theory, frictional sliding, Byerlee's law
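
    As a companion to this exercise, the sketch below works one instance of the Mohr-circle/Coulomb calculation the problem set asks for, with hypothetical stresses rather than values from the Yerington map; the angle convention and the Byerlee-style friction coefficient are assumptions of the sketch.

      import numpy as np

      def stress_on_plane(sigma1, sigma3, theta_deg):
          """Normal and shear stress on a plane whose normal makes an angle
          theta with the sigma1 direction (2D Mohr-circle relations)."""
          theta = np.radians(theta_deg)
          sigma_n = 0.5 * (sigma1 + sigma3) + 0.5 * (sigma1 - sigma3) * np.cos(2 * theta)
          tau = 0.5 * (sigma1 - sigma3) * np.sin(2 * theta)
          return sigma_n, tau

      def coulomb_slip(sigma_n, tau, mu=0.6, cohesion=0.0, pore_pressure=0.0):
          """Coulomb failure criterion for frictional sliding; mu = 0.6 is a
          Byerlee-type friction coefficient."""
          return abs(tau) >= cohesion + mu * (sigma_n - pore_pressure)

      # Hypothetical Andersonian normal-faulting case: vertical sigma1 = 100 MPa,
      # horizontal sigma3 = 40 MPa, fault dipping 60 degrees, so the fault-plane
      # normal makes 60 degrees with sigma1.
      sn, tau = stress_on_plane(100.0, 40.0, 60.0)
      print(coulomb_slip(sn, tau))                      # False: dry fault is stable
      print(coulomb_slip(sn, tau, pore_pressure=20.0))  # True: pore pressure triggers slip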

  8. Multiple Fault Isolation in Redundant Systems

    NASA Technical Reports Server (NTRS)

    Pattipati, Krishna R.; Patterson-Hine, Ann; Iverson, David

    1997-01-01

    Fault diagnosis in large-scale systems that are products of modern technology presents formidable challenges to manufacturers and users. This is due to the large number of failure sources in such systems and the need to quickly isolate and rectify failures with minimal down time. In addition, for fault-tolerant systems and systems with infrequent opportunity for maintenance (e.g., Hubble telescope, space station), the assumption of at most a single fault in the system is unrealistic. In this project, we have developed novel block and sequential diagnostic strategies to isolate multiple faults in the shortest possible time without making the unrealistic single fault assumption.
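
    A minimal sketch of why the single-fault assumption is limiting, using a hypothetical fault-test dependency (D-) matrix rather than the block or sequential strategies developed in the project: candidate fault sets of increasing size are checked for consistency with the observed test outcomes.

      from itertools import combinations

      # Hypothetical dependency matrix: fault -> set of tests it causes to fail.
      SIGNATURE = {
          'F1': {'t1', 't2'},
          'F2': {'t2', 't3'},
          'F3': {'t4'},
      }

      def candidate_fault_sets(failed, passed, max_size=2):
          """Return minimal-size fault sets consistent with the test outcomes:
          together they cover every failed test, and none of them would fail a
          test that actually passed."""
          faults = [f for f, sig in SIGNATURE.items() if not (sig & passed)]
          for size in range(1, max_size + 1):
              hits = [set(c) for c in combinations(faults, size)
                      if set().union(*(SIGNATURE[f] for f in c)) >= failed]
              if hits:
                  return hits
          return []

      # t1, t2 and t3 failed while t4 passed: no single fault explains this,
      # but the pair {F1, F2} does.
      print(candidate_fault_sets(failed={'t1', 't2', 't3'}, passed={'t4'}))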

  9. Bridging faults in BiCMOS circuits

    NASA Technical Reports Server (NTRS)

    Menon, Sankaran M.; Malaiya, Yashwant K.; Jayasumana, Anura P.

    1993-01-01

    Combining the advantages of CMOS and bipolar, BiCMOS is emerging as a major technology for many high performance digital and mixed signal applications. Recent investigations revealed that bridging faults can be a major failure mode in ICs. Effects of bridging faults in BiCMOS circuits are presented. Bridging faults between logical units without feedback and logical units with feedback are considered. Several bridging faults can be detected by monitoring the power supply current (IDDQ monitoring). Effects of bridging faults and bridging resistance on output logic levels were examined along with their effects on noise immunity.

  10. Deformation associated with continental normal faults

    NASA Astrophysics Data System (ADS)

    Resor, Phillip G.

    Deformation associated with normal fault earthquakes and geologic structures provides insights into the seismic cycle as it unfolds over time scales from seconds to millions of years. Improved understanding of normal faulting will lead to more accurate seismic hazard assessments and prediction of associated structures. High-precision aftershock locations for the 1995 Kozani-Grevena earthquake (Mw 6.5), Greece image a segmented master fault and antithetic faults. This three-dimensional fault geometry is typical of normal fault systems mapped from outcrop or interpreted from reflection seismic data and illustrates the importance of incorporating three-dimensional fault geometry in mechanical models. Subsurface fault slip associated with the Kozani-Grevena and 1999 Hector Mine (Mw 7.1) earthquakes is modeled using a new method for slip inversion on three-dimensional fault surfaces. Incorporation of three-dimensional fault geometry improves the fit to the geodetic data while honoring aftershock distributions and surface ruptures. GPS surveying of deformed bedding surfaces associated with normal faulting in the western Grand Canyon reveals patterns of deformation that are similar to those observed by satellite radar interferometry (InSAR) for the Kozani-Grevena earthquake, with a prominent down-warp in the hanging wall and a lesser up-warp in the footwall. However, deformation associated with the Kozani-Grevena earthquake extends ~20 km from the fault surface trace, while the folds in the western Grand Canyon only extend 500 m into the footwall and 1500 m into the hanging wall. A comparison of mechanical and kinematic models illustrates advantages of mechanical models in exploring normal faulting processes, including incorporation of both deformation and causative forces, and the opportunity to incorporate more complex fault geometry and constitutive properties. Elastic models with antithetic or synthetic faults or joints in association with a master normal fault illustrate how these secondary structures influence the deformation in ways that are similar to fault/fold geometry mapped in the western Grand Canyon. Specifically, synthetic faults amplify hanging wall bedding dips, antithetic faults reduce dips, and joints act to localize deformation. The distribution of aftershocks in the hanging wall of the Kozani-Grevena earthquake suggests that secondary structures may accommodate strains associated with slip on a master fault during postseismic deformation.
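
    For readers unfamiliar with the slip-inversion step mentioned above, a schematic sketch with synthetic Green's functions (not the dissertation's three-dimensional method or data): geodetic displacements d are linearly related to slip m on discretized fault patches by d = G m, and m is recovered by damped least squares.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical setup: 20 surface observations, 6 fault patches.
      n_obs, n_patch = 20, 6
      G = rng.normal(size=(n_obs, n_patch))       # synthetic elastic Green's functions
      true_slip = np.array([0.0, 0.2, 0.8, 1.2, 0.6, 0.1])    # metres
      d = G @ true_slip + rng.normal(scale=0.02, size=n_obs)  # noisy "GPS/InSAR" data

      # Damped (Tikhonov) least squares: minimize ||G m - d||^2 + lam^2 ||m||^2,
      # solved by augmenting the system with lam * I.
      lam = 0.1
      G_aug = np.vstack([G, lam * np.eye(n_patch)])
      d_aug = np.concatenate([d, np.zeros(n_patch)])
      slip_est, *_ = np.linalg.lstsq(G_aug, d_aug, rcond=None)

      print(np.round(slip_est, 2))   # close to true_slip for small noise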

  11. Multiple Fault Isolation in Redundant Systems

    NASA Technical Reports Server (NTRS)

    Pattipati, Krishna R.

    1997-01-01

    Fault diagnosis in large-scale systems that are products of modern technology presents formidable challenges to manufacturers and users. This is due to the large number of failure sources in such systems and the need to quickly isolate and rectify failures with minimal down time. In addition, for fault-tolerant systems and systems with infrequent opportunity for maintenance (e.g., Hubble telescope, space station), the assumption of at most a single fault in the system is unrealistic. In this project, we have developed novel block and sequential diagnostic strategies to isolate multiple faults in the shortest possible time without making the unrealistic single fault assumption.

  12. Hydrogen Release: New Indicator of Fault Activity

    NASA Astrophysics Data System (ADS)

    Wakita, Hiroshi; Nakamura, Yuji; Kita, Itsuro; Fujii, Naoyuki; Notsu, Kenji

    1980-10-01

    The hydrogen concentration in soil gas has been measured in the area around the Yamasaki Fault, one of the active faults in southwestern Japan. Degassing of a significant amount of hydrogen (up to more than 3 percent by volume) has been observed for sites along the fault zone. The hydrogen concentration in soil gas at sites away from the fault zone was about 0.5 part per million, almost the same as that found in the atmosphere. The spatial distribution of sites with high hydrogen concentrations is quite systematic. A hypothesis on the production of hydrogen by fault movements is postulated.

  13. Managing Space System Faults: Coalescing NASA's Views

    NASA Technical Reports Server (NTRS)

    Muirhead, Brian; Fesq, Lorraine

    2012-01-01

    Managing faults and their resultant failures is a fundamental and critical part of developing and operating aerospace systems. Yet, recent studies have shown that the engineering "discipline" required to manage faults is not widely recognized nor evenly practiced within the NASA community. Attempts to simply name this discipline in recent years have been fraught with controversy among members of the Integrated Systems Health Management (ISHM), Fault Management (FM), Fault Protection (FP), Hazard Analysis (HA), and Aborts communities. Approaches to managing space system faults typically are unique to each organization, with little commonality in the architectures, processes and practices across the industry.

  14. Rule-based fault diagnosis of hall sensors and fault-tolerant control of PMSM

    NASA Astrophysics Data System (ADS)

    Song, Ziyou; Li, Jianqiu; Ouyang, Minggao; Gu, Jing; Feng, Xuning; Lu, Dongbin

    2013-07-01

    Hall sensors are widely used for estimating the rotor phase of permanent magnet synchronous motors (PMSM), and rotor position is an essential parameter of the PMSM control algorithm, so Hall sensor faults are potentially dangerous. Yet there is scarcely any research focusing on fault diagnosis and fault-tolerant control of Hall sensors used in PMSM. From this standpoint, the Hall sensor faults which may occur during PMSM operation are theoretically analyzed. According to the analysis results, a fault diagnosis algorithm for the Hall sensors, based on three rules, is proposed to classify the fault phenomena accurately. Rotor phase estimation algorithms based on one or two Hall sensors are used to construct the fault-tolerant control algorithm. The fault diagnosis algorithm can detect 60 Hall fault phenomena in total, and all detections can be completed within 1/138 of a rotor rotation period. The fault-tolerant control algorithm achieves smooth torque production, i.e., the same control effect as the normal control mode (with three Hall sensors). Finally, a PMSM bench test verifies the accuracy and rapidity of the fault diagnosis and fault-tolerant control strategies. The fault diagnosis algorithm detects all Hall sensor faults promptly, and the fault-tolerant control algorithm allows the PMSM to ride through failure of one or two Hall sensors. In addition, the transitions between healthy control and fault-tolerant control are smooth, without any additional noise or harshness. The proposed algorithms can handle Hall sensor faults of PMSM in real applications, and can be used to realize fault diagnosis and fault-tolerant control of PMSM.
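
    The paper's three rules and 60 fault cases are not reproduced here; the sketch below only illustrates the general shape of rule-based Hall-sensor checking on the standard 3-bit code, where 000 and 111 are invalid and consecutive codes must follow the six-step commutation sequence (the sequence and the simplified rules are assumptions of the sketch).

      # Standard six-step Hall sequence for one rotation direction (Gray-code order).
      VALID_SEQUENCE = [0b001, 0b011, 0b010, 0b110, 0b100, 0b101]

      def diagnose_hall(samples):
          """Classify a stream of 3-bit Hall codes: 'stuck/invalid',
          'sequence fault', or 'healthy' (simplified rule set only)."""
          for code in samples:
              if code in (0b000, 0b111):          # rule 1: impossible code
                  return 'stuck/invalid'
          for prev, curr in zip(samples, samples[1:]):
              if prev == curr:                    # no commutation between samples
                  continue
              i = VALID_SEQUENCE.index(prev)
              expected = {VALID_SEQUENCE[(i + 1) % 6], VALID_SEQUENCE[(i - 1) % 6]}
              if curr not in expected:            # rule 2: skipped or illegal state
                  return 'sequence fault'
          return 'healthy'

      print(diagnose_hall([0b001, 0b011, 0b010, 0b110]))   # healthy
      print(diagnose_hall([0b001, 0b011, 0b110]))          # sequence fault (skipped 010)
      print(diagnose_hall([0b001, 0b111, 0b010]))          # stuck/invalid (111 impossible)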

  15. Model-Based Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Kumar, Aditya; Viassolo, Daniel

    2008-01-01

    The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in the presence of these faults. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
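
    A highly simplified sketch of the residual-thresholding and fusion ideas referred to above (not the MBFTC algorithms themselves): residuals, which in the task come from an Extended Kalman Filter, are normalized and compared to a bound, and per-algorithm fault labels are fused by a majority vote. All names and numbers are hypothetical.

      import numpy as np

      def detect_fault(residuals, sigma, threshold=3.0):
          """Flag samples whose normalized residual magnitude exceeds the threshold
          (residuals would come from an estimator such as an EKF)."""
          return np.abs(residuals) / sigma > threshold

      def fuse(decisions_by_algorithm):
          """Trivial fusion rule: return the fault label reported by a majority
          of the detection algorithms, or 'none' if there is no majority."""
          votes = {}
          for label in decisions_by_algorithm:
              votes[label] = votes.get(label, 0) + 1
          label, count = max(votes.items(), key=lambda kv: kv[1])
          return label if count > len(decisions_by_algorithm) / 2 else 'none'

      rng = np.random.default_rng(1)
      res = rng.normal(scale=0.1, size=50)
      res[30:] += 0.8                       # injected sensor bias after sample 30
      flags = detect_fault(res, sigma=0.1)
      print("first flagged sample:", int(np.argmax(flags)))
      print(fuse(['sensor bias', 'sensor bias', 'actuator']))   # -> 'sensor bias'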

  16. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Vouk, Mladen A.

    1989-01-01

    Twenty functionally equivalent programs were built and tested in a multiversion software experiment. Following unit testing, all programs were subjected to an extensive system test. In the process, sixty-one distinct faults were identified among the versions. Less than 12 percent of the faults exhibited varying degrees of positive correlation. The common-cause (or similar) faults spanned as many as 14 components. However, a majority of these faults were trivial, and easily detected by proper unit and/or system testing. Only two of the seven similar faults were difficult faults, and both were caused by specification ambiguities. One of these faults exhibited a variable identical-and-wrong response span, i.e. a response span which varied with the testing conditions and input data. Techniques that could have been used to avoid the faults are discussed. For example, it was determined that back-to-back testing of 2-tuples could have been used to eliminate about 90 percent of the faults. In addition, four of the seven similar faults could have been detected by using back-to-back testing of 5-tuples. It is believed that most, if not all, similar faults could have been avoided had the specifications been written using more formal notation, had the unit testing phase been subject to more stringent standards and controls, and had better tools for measuring the quality and adequacy of the test data (e.g. coverage) been used.
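
    Back-to-back testing of version tuples, as referred to above, can be pictured with the toy sketch below (hypothetical versions, one with a seeded fault): the same inputs are run through several independently written versions and any disagreement is flagged for inspection.

      # Three hypothetical independently written versions of the same routine
      # (version_c contains a seeded off-by-one fault).
      def version_a(x): return x * (x + 1) // 2        # sum of 1..x, closed form
      def version_b(x): return sum(range(1, x + 1))
      def version_c(x): return sum(range(1, x))        # faulty version

      def back_to_back(versions, test_inputs):
          """Run every version on every input; report inputs where outputs disagree."""
          disagreements = []
          for x in test_inputs:
              outputs = [v(x) for v in versions]
              if len(set(outputs)) > 1:
                  disagreements.append((x, outputs))
          return disagreements

      print(back_to_back([version_a, version_b, version_c], range(0, 5)))
      # Every input >= 1 exposes a disagreement, pointing at a fault in some version.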

  17. Fibre bundle framework for quantum fault tolerance

    NASA Astrophysics Data System (ADS)

    Zhang, Lucy Liuxuan; Gottesman, Daniel

    2014-03-01

    We introduce a differential geometric framework for describing families of quantum error-correcting codes and for understanding quantum fault tolerance. In particular, we use fibre bundles and a natural projectively flat connection thereon to study the transformation of codewords under unitary fault-tolerant evolutions. We'll explain how the fault-tolerant logical operations are given by the monodromy group for the bundles with projectively flat connection, which is always discrete. We will discuss the construction of the said bundles for two examples of fault-tolerant families of operations, the string operators in the toric code and the qudit transversal gates. This framework unifies topological fault tolerance and fault tolerance based on transversal gates, and is expected to apply for all unitary quantum fault-tolerant protocols.

  18. Unsynchronized two-terminal fault location estimation

    SciTech Connect

    Novosel, D.; Hart, D.G.; Udren, E.; Garitty, J.

    1996-01-01

    A technique for fault location estimation which uses data from both ends of the transmission line and which does not require the data to be synchronized is described. The technique fully utilizes the advantages of digital technology and numerical relaying which are available today and can easily be applied for off-line analysis. This technique allows for accurate estimation of the fault location irrespective of the fault type, fault resistance, load currents, and source impedances. Use of two-terminal data allows the algorithm to eliminate previous assumptions in fault location estimation, thus increasing the accuracy of the estimate. The described scheme does not require real time communications, only off-line post-fault analysis. The paper also presents fault analysis techniques utilizing the additional communicated information.
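
    For orientation only, a simplified two-terminal location formula assuming synchronized phasors and a lumped series line impedance; the paper's algorithm removes exactly these assumptions (unsynchronized data, a fuller line model), so the sketch below is illustrative, with hypothetical per-unit values.

      # For a fault at per-unit distance m from terminal S on a line of series
      # impedance Z, the fault-point voltage seen from each end must match:
      #   V_S - m*Z*I_S = V_R - (1 - m)*Z*I_R
      # which gives m = (V_S - V_R + Z*I_R) / (Z*(I_S + I_R)).

      def fault_location(V_S, I_S, V_R, I_R, Z):
          m = (V_S - V_R + Z * I_R) / (Z * (I_S + I_R))
          return m.real

      # Hypothetical synthetic case: fault at m = 0.3, Z = 0.02 + 0.25j per unit.
      Z = 0.02 + 0.25j
      m_true, V_fault = 0.3, 0.55 + 0.05j
      I_S, I_R = 2.0 - 1.5j, 1.2 - 0.9j
      V_S = V_fault + m_true * Z * I_S
      V_R = V_fault + (1 - m_true) * Z * I_R

      print(round(fault_location(V_S, I_S, V_R, I_R, Z), 3))   # -> 0.3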

  19. Parallel fault-tolerant robot control

    NASA Technical Reports Server (NTRS)

    Hamilton, D. L.; Bennett, J. K.; Walker, I. D.

    1992-01-01

    A shared memory multiprocessor architecture is used to develop a parallel fault-tolerant robot controller. Several versions of the robot controller are developed and compared. A robot simulation is also developed for control observation. Comparison of a serial version of the controller and a parallel version without fault tolerance showed the speedup possible with the coarse-grained parallelism currently employed. The performance degradation due to the addition of processor fault tolerance was demonstrated by comparison of these controllers with their fault-tolerant versions. Comparison of the more fault-tolerant controller with the lower-level fault-tolerant controller showed how varying the amount of redundant data affects performance. The results demonstrate the trade-off between speed performance and processor fault tolerance.

  20. Active faults in Africa: a review

    NASA Astrophysics Data System (ADS)

    Skobelev, S. F.; Hanon, M.; Klerkx, J.; Govorova, N. N.; Lukina, N. V.; Kazmin, V. G.

    2004-03-01

    The active fault database and Map of Active Faults in Africa, at a scale of 1:5,000,000, were compiled according to the ILP Project II-2 "World Map of Major Active Faults". The data were collected in the Royal Museum for Central Africa, Tervuren, Belgium, and in the Geological Institute, Moscow, where the final edition was carried out. Active faults of Africa form three groups. The first group is represented by thrusts and reverse faults associated with compressed folds in northwest Africa. They belong to the western part of the Alpine-Central Asian collision belt. The faults disturb only the Earth's crust, and some of them do not penetrate deeper than the sedimentary cover. The second group comprises the faults of the Great African rift system. The faults form the known Western and Eastern branches, which are rifts with anomalous mantle below. The deep-seated mantle "hot" anomaly probably relates to the eastern volcanic branch. In the north, it joins the Aden-Red Sea rift zone. Active faults in Egypt, Libya and Tunisia may represent a link between the East African rift system and the Pantellerian rift zone in the Mediterranean. The third group includes rare faults in western Equatorial Africa. The data were scarce, so most of the faults of this group were identified solely by interpretation of space imagery and seismicity. Some longer faults of the group may continue the transverse faults of the Atlantic and thus can penetrate into the mantle. This seems evident for the Cameroon fault line.

  1. Truncated hantavirus nucleocapsid proteins for serotyping Sin Nombre, Andes, and Laguna Negra hantavirus infections in humans and rodents.

    PubMed

    Koma, Takaaki; Yoshimatsu, Kumiko; Pini, Noemi; Safronetz, David; Taruishi, Midori; Levis, Silvana; Endo, Rika; Shimizu, Kenta; Yasuda, Shumpei P; Ebihara, Hideki; Feldmann, Heinz; Enria, Delia; Arikawa, Jiro

    2010-05-01

    Sin Nombre virus (SNV), Andes virus (ANDV), and Laguna Negra virus (LANV) have been known as the dominant causative agents of hantavirus pulmonary syndrome (HPS). ANDV and LANV, with different patterns of pathogenicity, exist in a sympatric relationship. Moreover, there is documented evidence of person-to-person transmission of ANDV. Therefore, it is important in clinical medicine and epidemiology to know the serotype of a hantavirus causing infection. Truncated SNV, ANDV, and LANV recombinant nucleocapsid proteins (trNs) missing 99 N-terminal amino acids (trN100) were expressed using a baculovirus system, and their applicability for serotyping SNV, ANDV, and LANV infection by the use of enzyme-linked immunosorbent assays (ELISA) was examined. HPS patient sera and natural-reservoir rodent sera infected with SNV, ANDV, and LANV showed the highest optical density (OD) values for homologous trN100 antigens. Since even patient sera with lower IgM and IgG antibody titers were serotyped, the trN100s are therefore considered useful for serotyping with early-acute-phase sera. In contrast, assays testing whole recombinant nucleocapsid protein antigens of SNV, ANDV, and LANV expressed in Escherichia coli detected homologous and heterologous antibodies equally. These results indicated that a screening ELISA using an E. coli-expressed antigen followed by a serotyping ELISA using trN100s is useful for epidemiological surveillance in regions where two or more hantavirus species cocirculate. PMID:20335425

  2. 77 FR 19690 - Notice of Inventory Completion: California Department of Parks and Recreation, Sacramento, CA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-02

    ...Bound to the east by the Sand Hills in Imperial County and includes the southern end of the Salton Basin and all of the Chocolate Mountains, the territory extends southward to Todos Santos Bay, Laguna Salada and along the New River in northern Baja...

  3. 77 FR 19702 - Notice of Intent to Repatriate Cultural Items: California Department of Parks and Recreation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-02

    ...Bound to the east by the Sand Hills in Imperial County and includes the southern end of the Salton Basin and all of the Chocolate Mountains, the territory extends southward to Todos Santos Bay, Laguna Salada and along the New River in northern Baja...

  4. Experimental fault analysis of 1 Mb SRAM chips

    Microsoft Academic Search

    Hiroyuki Goto; S. Nakamura; K. Iwasaki

    1997-01-01

    Analyzing 1,000 faulty 1 Mb SRAM chips that were randomly selected from a single manufacturer, we found 251 stuck-at cell faults, 5 stuck-at bit-line faults, 1 stuck-at word-line fault, 46 neighborhood-pattern-sensitive faults, and other kinds of faults. Under the conditions Vdd = 4.5 V, temperature = 70°C, and load capacitance CL = 30 pF, we detected margin faults in 460 chips. Because the actual fault

  5. From fissure to fault: A model of fault growth in the Krafla Fissure System, NE Iceland

    NASA Astrophysics Data System (ADS)

    Bramham, Emma; Paton, Douglas; Wright, Tim

    2015-04-01

    Current models of fault growth examine the relationship of fault length (L) to vertical displacement (D) where the faults exhibit the classic fault shape of gradually increasing vertical displacement from zero at the fault tips to a maximum displacement (Dmax) at the middle of the fault. These models cannot adequately explain displacement-length observations at the Krafla fissure swarm, in Iceland's northern volcanic zone, where we observe that many of the faults with significant vertical displacements still retain fissure-like features, with no vertical displacement, along portions of their lengths. We have created a high resolution digital elevation model (DEM) of the Krafla region using airborne LiDAR and measured the displacement/length profiles of 775 faults, with lengths ranging from 10s to 1000s of metres. We have categorised the faults based on the proportion of the profile that was still fissure-like. Fully-developed faults (no fissure-like regions) were further grouped into those with profiles that had a flat-top geometry (i.e. significant proportion of fault length with constant throw), those with a bell-shaped throw profile and those that show regions of fault linkage. We suggest that a fault can most easily accommodate stress by displacing regions that are still fissure-like, and that a fault would be more likely to accommodate stress by linkage once it has reached the maximum displacement for its fault length. Our results demonstrate that there is a pattern of growth from fissure to fault in the Dmax/L ratio of the categorised faults and propose a model for this growth. These data better constrain our understanding of how fissures develop into faults but also provide insights into the discrepancy in D/L profiles from a typical bell-shaped distribution.

  6. The susitna glacier thrust fault: Characteristics of surface ruptures on the fault that initiated the 2002 denali fault earthquake

    USGS Publications Warehouse

    Crone, A.J.; Personius, S.F.; Craw, P.A.; Haeussler, P.J.; Staft, L.A.

    2004-01-01

    The 3 November 2002 Mw 7.9 Denali fault earthquake sequence initiated on the newly discovered Susitna Glacier thrust fault and caused 48 km of surface rupture. Rupture of the Susitna Glacier fault generated scarps on ice of the Susitna and West Fork glaciers and on tundra and surficial deposits along the southern front of the central Alaska Range. Based on detailed mapping, 27 topographic profiles, and field observations, we document the characteristics and slip distribution of the 2002 ruptures and describe evidence of pre-2002 ruptures on the fault. The 2002 surface faulting produced structures that range from simple folds on a single trace to complex thrust-fault ruptures and pressure ridges on multiple, sinuous strands. The deformation zone is locally more than 1 km wide. We measured a maximum vertical displacement of 5.4 m on the south-directed main thrust. North-directed backthrusts have more than 4 m of surface offset. We measured a well-constrained near-surface fault dip of about 19° at one site, which is considerably less than seismologically determined values of 35°-48°. Surface-rupture data yield an estimated magnitude of Mw 7.3 for the fault, which is similar to the seismological value of Mw 7.2. Comparison of field and seismological data suggests that the Susitna Glacier fault is part of a large positive flower structure associated with northwest-directed transpressive deformation on the Denali fault. Prehistoric scarps are evidence of previous rupture of the Susitna Glacier fault, but additional work is needed to determine if past failures of the Susitna Glacier fault have consistently induced rupture of the Denali fault.

  7. Silica Lubrication in Faults (Invited)

    NASA Astrophysics Data System (ADS)

    Rowe, C. D.; Rempe, M.; Lamothe, K.; Kirkpatrick, J. D.; White, J. C.; Mitchell, T. M.; Andrews, M.; Di Toro, G.

    2013-12-01

    Silica-rich rocks are common in the crust, so silica lubrication may be important for causing fault weakening during earthquakes if the phenomenon occurs in nature. In laboratory friction experiments on chert, dramatic shear weakening has been attributed to amorphization and attraction of water from atmospheric humidity to form a 'silica gel'. Few observations of the slip surfaces have been reported, and the details of the weakening mechanism(s) remain enigmatic. Therefore, no criteria exist on which to make comparisons of experimental materials to natural faults. We performed a series of friction experiments, characterized the materials formed on the sliding surface, and compared these to a geological fault in the same rock type. Experiments were performed in the presence of room humidity at 2.5 MPa normal stress with 3 and 30 m total displacement for a variety of slip rates (10^-4 - 10^-1 m/s). The friction coefficient (μ) dropped from >0.6 to ~0.2 at 10^-1 m/s, but only fell to ~0.4 at 10^-2 - 10^-4 m/s. The slip surfaces and wear material were observed using laser confocal Raman microscopy, electron microprobe, X-ray diffraction, and transmission electron microscopy. Experiments at 10^-1 m/s formed wear material consisting of ≤1 μm powder that is aggregated into irregular 5-20 μm clumps. Some material disaggregated during analysis with electron beams and lasers, suggesting hydrous and unstable components. Compressed powder forms smooth pavements on the surface in which grains are not visible (if present, they are <100 nm). The powder contains amorphous material and as yet unidentified crystalline and non-crystalline forms of silica (not quartz), while the worn chert surface underneath shows Raman spectra consistent with a mixture of quartz and amorphous material. If silica amorphization facilitates shear weakening in natural faults, similar wear materials should be formed, and we may be able to identify them through microstructural studies. However, the sub-micron particles of unstable materials are unlikely to survive in the crust over geologic time, so a direct comparison of fresh experimental wear material and ancient fault rock needs to account for the alteration and crystallization of primary materials. The surface of the Corona fault is coated by a translucent shiny layer consisting of a ~100 nm interlocking groundmass of dislocation-free quartz, 10 nm ellipsoidal particles, and interstitial patches of amorphous silica. We interpret this layer as the equivalent of the experimentally produced amorphous material after crystallizing to more stable forms over geological time.

  8. A Log-Scaling Fault Tolerant Agreement Algorithm for a Fault Tolerant MPI

    SciTech Connect

    Hursey, Joshua J [ORNL; Naughton, III, Thomas J [ORNL; Vallee, Geoffroy R [ORNL; Graham, Richard L [ORNL

    2011-01-01

    The lack of fault tolerance is becoming a limiting factor for application scalability in HPC systems. The MPI does not provide standardized fault tolerance interfaces and semantics. The MPI Forum's Fault Tolerance Working Group is proposing a collective fault tolerant agreement algorithm for the next MPI standard. Such algorithms play a central role in many fault tolerant applications. This paper combines a log-scaling two-phase commit agreement algorithm with a reduction operation to provide the necessary functionality for the new collective without any additional messages. Error handling mechanisms are described that preserve the fault tolerance properties while maintaining overall scalability.
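
    The semantics that such a fault-tolerant agreement collective must provide can be pictured as an all-reduce of a local success flag; the mpi4py sketch below shows only that baseline semantics on a failure-free run, not the log-scaling two-phase-commit algorithm of the paper or its ability to survive process failure during the collective.

      # Run with, e.g.:  mpirun -n 4 python agree.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      local_ok = (rank % 3 != 1)              # pretend some ranks hit a local error
      send = np.array([1 if local_ok else 0], dtype='i')
      recv = np.empty(1, dtype='i')
      comm.Allreduce(send, recv, op=MPI.MIN)  # MIN over {0,1} acts as a logical AND
      all_ok = bool(recv[0])

      print(f"rank {rank}: local_ok={local_ok}, agreed outcome={all_ok}")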

  9. Influence of fault trend, fault bends, and fault convergence on shallow structure, geomorphology, and hazards, Hosgri strike-slip fault, offshore central California

    NASA Astrophysics Data System (ADS)

    Johnson, S. Y.; Watt, J. T.; Hartwell, S. R.

    2012-12-01

    We mapped a ~94-km-long portion of the right-lateral Hosgri Fault Zone from Point Sal to Piedras Blancas in offshore central California using high-resolution seismic reflection profiles, marine magnetic data, and multibeam bathymetry. The database includes 121 seismic profiles across the fault zone and is perhaps the most comprehensive reported survey of the shallow structure of an active strike-slip fault. These data document the location, length, and near-surface continuity of multiple fault strands, highlight fault-zone heterogeneity, and demonstrate the importance of fault trend, fault bends, and fault convergences in the development of shallow structure and tectonic geomorphology. The Hosgri Fault Zone is continuous through the study area passing through a broad arc in which fault trend changes from about 338° to 328° from south to north. The southern ~40 km of the fault zone in this area is more extensional, resulting in accommodation space that is filled by deltaic sediments of the Santa Maria River. The central ~24 km of the fault zone is characterized by oblique convergence of the Hosgri Fault Zone with the more northwest-trending Los Osos and Shoreline Faults. Convergence between these faults has resulted in the formation of local restraining and releasing fault bends, transpressive uplifts, and transtensional basins of varying size and morphology. We present a hypothesis that links development of a paired fault bend to indenting and bulging of the Hosgri Fault by a strong crustal block translated to the northwest along the Shoreline Fault. Two diverging Hosgri Fault strands bounding a central uplifted block characterize the northern ~30 km of the Hosgri Fault in this area. The eastern Hosgri strand passes through releasing and restraining bends; the releasing bend is the primary control on development of an elongate, asymmetric, "Lazy Z" sedimentary basin. The western strand of the Hosgri Fault Zone passes through a significant restraining bend and dies out northward where we propose that its slip transfers to active structures in the Piedras Blancas fold belt. Given the continuity of the Hosgri Fault Zone through our study area, earthquake hazard assessments should incorporate a minimum rupture length of 110 km. Our data do not constrain lateral slip rates on the Hosgri, which probably vary along the fault (both to the north and south) as different structures converge and diverge but are likely in the geodetically estimated range of 2 to 4 mm/yr. More focused mapping of lowstand geomorphic features (e.g., channels, paleoshorelines) has the potential to provide better constraints. The post-Last-Glacial Maximum unconformity is an important surface for constraining vertical deformation, yielding local fault offset rates that may be as high as 1.4 mm/yr and off-fault deformation rates as high as 0.5 mm/yr. These vertical rates are short-term and not sustainable over longer geologic time, emphasizing the complex evolution and dynamics of strike-slip zones.

  10. Building the GEM Faulted Earth database

    NASA Astrophysics Data System (ADS)

    Litchfield, N. J.; Berryman, K. R.; Christophersen, A.; Thomas, R. F.; Wyss, B.; Tarter, J.; Pagani, M.; Stein, R. S.; Costa, C. H.; Sieh, K. E.

    2011-12-01

    The GEM Faulted Earth project is aiming to build a global active fault and seismic source database with a common set of strategies, standards, and formats, to be placed in the public domain. Faulted Earth is one of five hazard global components of the Global Earthquake Model (GEM) project. A key early phase of the GEM Faulted Earth project is to build a database which is flexible enough to capture existing and variable (e.g., from slow interplate faults to fast subduction interfaces) global data, and yet is not too onerous to enter new data from areas where existing databases are not available. The purpose of this talk is to give an update on progress building the GEM Faulted Earth database. The database design conceptually has two layers, (1) active faults and folds, and (2) fault sources, and automated processes are being defined to generate fault sources. These include the calculation of moment magnitude using a user-selected magnitude-length or magnitude-area scaling relation, and the calculation of recurrence interval from displacement divided by slip rate, where displacement is calculated from moment and moment magnitude. The fault-based earthquake sources defined by the Faulted Earth project will then be rationalised with those defined by the other GEM global components. A web based tool is being developed for entering individual faults and folds, and fault sources, and includes capture of additional information collected at individual sites, as well as descriptions of the data sources. GIS shapefiles of individual faults and folds, and fault sources will also be able to be uploaded. A data dictionary explaining the database design rationale, definitions of the attributes and formats, and a tool user guide is also being developed. Existing national databases will be uploaded outside of the fault compilation tool, through a process of mapping common attributes between the databases. Regional workshops are planned for compilation in areas where existing databases are not available, or require further population, and will include training on using the fault compilation tool. The tool is also envisaged as an important legacy of the GEM Faulted Earth project, to be available for use beyond the end of the 2 year project.
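
    A sketch of the automated source-parameter chain described above, with a generic magnitude-area scaling relation and the standard moment-magnitude definition plugged in as placeholders; the actual tool lets users select the scaling relation, and the coefficients and fault dimensions below are illustrative assumptions.

      import numpy as np

      MU = 3.0e10                      # shear modulus, Pa (typical crustal value)

      def moment_magnitude_from_area(area_km2, a=4.07, b=0.98):
          """Generic magnitude-area scaling Mw = a + b*log10(A); coefficients are
          Wells & Coppersmith-style placeholders, user-selectable in the database."""
          return a + b * np.log10(area_km2)

      def seismic_moment(mw):
          """Hanks & Kanamori (1979) definition, M0 in N*m."""
          return 10.0 ** (1.5 * mw + 9.05)

      def recurrence_interval(area_km2, slip_rate_mm_yr):
          """Recurrence = average coseismic displacement / slip rate,
          with displacement D = M0 / (mu * A)."""
          mw = moment_magnitude_from_area(area_km2)
          m0 = seismic_moment(mw)
          area_m2 = area_km2 * 1.0e6
          displacement_m = m0 / (MU * area_m2)
          return displacement_m / (slip_rate_mm_yr * 1.0e-3)   # years

      # Hypothetical fault source: 60 km x 15 km rupture area, 2 mm/yr slip rate.
      print(f"Mw = {moment_magnitude_from_area(900):.2f}, "
            f"recurrence ~ {recurrence_interval(900, 2.0):,.0f} yr")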

  11. Stress Relaxation on Geometrically Complex Faults

    NASA Astrophysics Data System (ADS)

    Dieterich, J.; Smith, D. E.

    2006-12-01

    Slip of geometrically complex faults involves interactions and processes that do not occur in standard planar fault models. These include off-fault yielding and stress relaxation, which are required to prevent the development of pathological stress conditions on the fault (or in extreme cases fault lock-up). The necessity of incorporating yielding to allow slip past geometric complexities was recognized by Nielsen and Knopoff [1998], who employed a simplified form of viscoelastic stress relaxation consisting of a monotonic time-dependent exponential decay of fault stresses. However, the characteristics of stress relaxation in the brittle seismogenic crust, which are dominated by faulting processes, may be quite different from that predicted by viscoelastic models. For example, slip over a slight fault irregularity with a slope change of only one part in 100 will give rise to shear strains adjacent to the irregularity on the order of 0.01. This greatly exceeds the strain needed to fracture rock under conditions in the crust. The fractal-like character of fault systems and fault roughness ensures that slight movements of secondary faults, at all scales, will be necessary to accommodate slip of major through-going faults. We surmise that these adjustments occur as co-seismic slip on secondary faults during large earthquakes, as delayed stress relaxation in the form of aftershocks, and as spatially distributed background seismicity. To model the integrated effect of these processes on the stress conditions on major faults, we employ an earthquake rate formulation [Dieterich, 1994], which incorporates laboratory-derived rate- and state-dependent frictional properties. Models of the earthquake cycle with uniform inter-event times, using faults with simple bends and random fractal geometries, have the following characteristics. Slip produces spatially heterogeneous stress, which, in the absence of relaxation processes, continues to grow without limit. When relaxation processes are included, using the earthquake rate formulation from rate- and state-dependent friction, deviations from the spatial mean stress decay at a rate proportional to 1/t. The 1/t decay is consistent with our assumption that stress relaxation occurs largely by aftershock processes. Where slip is inhibited by geometric irregularities, the relaxation processes reload the fault to favor additional slip, thus preventing pathological stress conditions such as fault lock-up. We can also calculate stress rotations during the relaxation process, where the amplitude of the rotations depends on the constitutive properties, the amplitude of the fault trace heterogeneity, and the initial background stress.

  12. Neotectonics of Panama. I. Major fault systems

    SciTech Connect

    Corrigan, J.; Mann, P.

    1985-01-01

    The direction and rate of relative plate motion across the Caribbean-Nazca boundary in Panama is poorly known. This lack of understanding can be attributed to diffuse seismicity; lack of well constrained focal mechanisms from critical areas; and dense tropical vegetation. In order to better understand the relation of plate motions to major fault systems in Panama, the authors have integrated geologic, remote sensing, earthquake and UTIG marine seismic reflection data. Three areas of recent faulting can be distinguished in Panama and its shelf areas: ZONE 1 of eastern Panama consists of a 70 km wide zone of 3 discrete left-lateral strike-slip faults (Sanson Hills, Jaque River, Sambu) which strike N40W and can be traced as continuous features for distances of 100-150 km; ZONE 2 in central Panama consists of a diffuse zone of discontinuous normal(?) faults which range in strike from N40E to N70E; ZONE 3 in western Panama consists of a 60 km wide zone of 2 discrete, left-lateral(?) strike-slip faults which strike N60W and can be traced as continuous features for distances of 150 km; ZONE 3 faults appear to be continuous with faults bounding the forearc Teraba Trough of Costa Rica. The relation of faults of ZONE 3 to faults of ZONE 2 and a major fault bounding the southern Panama shelf is unclear.

  13. Inductive fault analysis of VLSI circuits

    SciTech Connect

    Ferguson, F.J.

    1987-01-01

    Inductive fault analysis (IFA) is a systematic method for determining the realistic faults likely to occur in a VLSI circuit. This method takes into account the circuit's fabrication technology, fabrication defect statistics, and physical layout. This inductive approach of characterizing faults, by drawing conclusions based on analyzing the particulars of low-level fault-inducing mechanisms, departs from the traditional scenario of simply assuming a convenient high-level fault model. For a given circuit, the IFA procedure extracts a comprehensive list of circuit-level faults and ranks them according to their relative likelihood of occurrence. These ranked fault lists can be used to validate the traditional stuck-at fault model, assess the true fault coverage of traditional test sets, facilitate more-effective test generation, and support yield-optimization techniques. A software program automating the IFA procedure, called FXT, was implemented. Several circuits from a commercial standard cell library were analyzed. Based on the extracted fault lists for these circuits, a number of interesting observations can be made. With the availability of the IFA method and the FXT tool, a number of very interesting and promising research tasks can be pursued.

  14. DEM simulation of growth normal fault slip

    NASA Astrophysics Data System (ADS)

    Chu, Sheng-Shin; Lin, Ming-Lang; Nien, Wie-Tung; Chan, Pei-Chen

    2014-05-01

    Slip on a fault can deform the shallower soil layers and damage infrastructure. The Shanchiao fault, on the west side of the Taipei basin, is categorized as an active fault. Its activity will deform the Quaternary sediments beneath the Taipei basin, causing damage to structures, traffic infrastructure, and utility lines in the area. Geological drilling and dating data show that the Shanchiao fault behaves as a growth fault. In the experiments, a sandbox model was built with non-cohesive sand to simulate the presence of a growth fault in the Shanchiao fault and to forecast the extent of shear band development and differential ground deformation. The results showed that when a normal fault contains a growth fault, the shear band at the basement offset develops upward along the weak side of the shear band in the original overlying soil layer, and this shear band reaches the surface much faster than in the case of a single top layer. The required offset ratio (basement slip / lower soil thickness) is only about one third of that for a single cover soil layer. In this research, a Discrete Element Method program, PFC2D, was used to numerically simulate the sandbox experiment, i.e. the pace and extent of shear band development in the overlying sand layer during growth normal fault slip. The simulation results closely match the outcome of the sandbox experiment and can be extended to the design of water pipeline projects around fault zones in the future. Keywords: Taipei Basin, Shanchiao fault, growth fault, PFC2D

  15. Fault Mechanics and Earthquake Physics: Insights From Laboratory Studies Of Fault Rocks Recovered In Scientific Drilling

    NASA Astrophysics Data System (ADS)

    Marone, C.; Carpenter, B. M.; Saffer, D. M.; Collettini, C.

    2012-12-01

    Laboratory experiments on fault rocks recovered in scientific drilling projects have emerged as a powerful tool for understanding tectonic faults and the spectrum of fault slip behaviors. Recent laboratory work has made significant strides in understanding frictional strength, slip stability, poromechanical properties, and their relationships in active fault zones. These studies are beginning to reconcile field measurements with theory and lab measurements, including heat flow, stress orientation data, and lab measurements on fault gouge. In particular, it is now clear that mature tectonic faults can be much weaker than previously thought due to the effects of in situ shear fabric, fault rock mineralogy, and clay nano-coatings on micro- and meso-scale shear localization surfaces within fault gouge. We summarize results from laboratory experiments on samples recovered in scientific drilling projects, outcrop fault samples, and synthetic fault gouge, focusing primarily on friction measurements conducted under slip velocities that correspond to nucleation of dynamic earthquake rupture. Our data show that fault zone friction is strongly influenced by shear fabric in some cases, which suggests that rotary and triaxial experimental geometries are not well suited for studies of in-situ fault zone friction, at least in some cases. At this stage, the role of fault zone fabric at dynamic slip velocities, such as accessed only in rotary shear experiments, is unclear and needs to be explored. A number of current studies are evaluating connections between fault strength and poromechanical properties, with particular focus on fault zone permeability and anisotropy. Results of these laboratory studies will have important implications for understanding emerging, in-situ measurements of fault zone shear heating and on explanations for fault strength.

  16. Fault Diagnosability of Arrangement Graphs

    E-print Network

    Zhou, Shuming

    2012-01-01

    The growing size of a multiprocessor system increases its vulnerability to component failures. It is crucial to locate and replace faulty processors to maintain a system's high reliability. Fault diagnosis is the process of identifying faulty processors in a system through testing. This paper shows that the largest connected component of the survival graph contains almost all remaining vertices in the $(n,k)$-arrangement graph $A_{n,k}$ when the number of removed faulty vertices is up to twice or three times the traditional connectivity. Based on this fault resiliency, we establish the conditional diagnosability of $A_{n,k}$ under the comparison model. We prove that for $k \geq 4$, $n \geq k+2$, the conditional diagnosability of $A_{n,k}$ is $(3k-2)(n-k)-3$; the conditional diagnosability of $A_{n,n-1}$ is $3n-7$ for $n \geq 5$.

  17. Fault trees and imperfect coverage

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne B.

    1989-01-01

    A new algorithm is presented for solving the fault tree. The algorithm includes the dynamic behavior of the fault/error handling model but obviates the need for the Markov chain solution. As the state space is expanded in a breadth-first search (the same is done in the conversion to a Markov chain), the state's contribution to each future state is calculated exactly. A dynamic state truncation technique is also presented; it produces bounds on the unreliability of the system by considering only part of the state space. Since the model is solved as the state space is generated, the process can be stopped as soon as the desired accuracy is reached.
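
    For context, the basic static fault-tree arithmetic that such algorithms build on (independent basic events, AND/OR gates) can be written in a few lines; the paper's contribution, folding in the dynamic fault/error-handling behavior and the truncation bounds, is not attempted here, and the small system below is hypothetical.

      # Basic fault-tree evaluation assuming independent basic events.
      # OR gate:  P = 1 - prod(1 - p_i)      AND gate:  P = prod(p_i)

      def failure_probability(node, basic):
          if isinstance(node, str):                 # leaf: basic event name
              return basic[node]
          gate, children = node
          probs = [failure_probability(c, basic) for c in children]
          if gate == 'AND':
              out = 1.0
              for p in probs:
                  out *= p
              return out
          if gate == 'OR':
              out = 1.0
              for p in probs:
                  out *= (1.0 - p)
              return 1.0 - out
          raise ValueError(gate)

      # Hypothetical system: the top event occurs if the power fails OR both
      # redundant controllers fail.
      tree = ('OR', ['power', ('AND', ['ctrl_a', 'ctrl_b'])])
      basic = {'power': 1e-4, 'ctrl_a': 1e-3, 'ctrl_b': 1e-3}
      print(f"top-event unreliability = {failure_probability(tree, basic):.2e}")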

  18. Fault geometries in basement-induced wrench faulting under different initial stress states

    NASA Astrophysics Data System (ADS)

    Naylor, M. A.; Mandl, G.; Supesteijn, C. H. K.

    Scaled sandbox experiments were used to generate models for relative ages, dip, strike and three-dimensional shape of faults in basement-controlled wrench faulting. The basic fault sequence runs from early en échelon Riedel shears and splay faults through 'lower-angle' shears to P shears. The Riedel shears are concave upwards and define a tulip structure in cross-section. In three dimensions, each Riedel shear has a helicoidal form. The sequence of faults and three-dimensional geometry are rationalized in terms of the prevailing stress field and Coulomb-Mohr theory of shear failure. The stress state in the sedimentary overburden before wrenching begins has a substantial influence on the fault geometries and on the final complexity of the fault zone. With the maximum compressive stress (σ1) initially parallel to the basement fault (transtension), Riedel shears are only slightly en échelon, sub-parallel to the basement fault, steeply dipping with a reduced helicoidal aspect. Conversely, with σ1 initially perpendicular to the basement fault (transpression), Riedel shears are strongly oblique to the basement fault strike, have lower dips and an exaggerated helicoidal form; the final fault zone is both wide and complex. We find good agreement between the models and both mechanical theory and natural examples of wrench faulting.

  19. Timing of Alpine fault gouges

    Microsoft Academic Search

    Horst Zwingmann; Neil Mancktelow

    2004-01-01

    K–Ar ages from clay-rich fault gouges in the European Alps are consistent internally, with established field constraints and with fission track ages, demonstrating the applicability of this method for direct dating of brittle deformation. Illite grown by retrograde hydration of granitic and/or high-grade metamorphic protoliths is the major K-bearing mineral in the fractions that have been separated (<0.1 to 6–10

  20. Fault-ignorant Quantum Search

    E-print Network

    Peter Vrana; David Reeb; Daniel Reitzner; Michael M. Wolf

    2014-07-25

    We investigate the problem of quantum searching on a noisy quantum computer. Taking a 'fault-ignorant' approach, we analyze quantum algorithms that solve the task for various different noise strengths, which are possibly unknown beforehand. We prove lower bounds on the runtime of such algorithms and thereby find that the quadratic speedup is necessarily lost (in our noise models). However, for low but constant noise levels the algorithms we provide (based on Grover's algorithm) still outperform the best noiseless classical search algorithm.

  1. Fault classification using genetic programming

    Microsoft Academic Search

    Liang Zhang; Asoke K. Nandi

    2007-01-01

    Genetic programming (GP) is a stochastic process for automatically generating computer programs. In this paper, three GP-based approaches for solving multi-class classification problems in roller bearing fault detection are proposed. Single-GP maps all the classes onto the one-dimensional GP output. Independent-GPs singles out each class separately by evolving a binary GP for each class independently. Bundled-GPs also has one binary

  2. Detection, diagnosis, and evaluation of faults in vapor compression equipment

    Microsoft Academic Search

    Todd Michael Rossi

    1995-01-01

    This thesis develops techniques for automated detection, diagnostics, and evaluation of faults in vapor compression equipment. Fault evaluation was added to the more common steps of fault detection and diagnostics to address the special characteristics of performance degradation faults as opposed to abrupt faults. A model for testing these techniques in a simulation environment was developed. The model is described and experimental

  3. Drilling Investigations on the Mechanics and Structure of Faults

    Microsoft Academic Search

    Kentaro Omura

    2007-01-01

    Drilling investigations are key to understanding the dynamics, physical properties, and structure of an active fault. We drilled into four major active faults in central and western Japan to directly access the mechanics, physical properties, and fault rock distributions in and

  4. Algorithms for Power System Fault Location and Line Parameter Estimation

    Microsoft Academic Search

    Yuan Liao

    2007-01-01

    This paper presents a novel power system transmission line fault location algorithm that is applicable for the scenarios where transmission line parameters are not available. A set of equations involving the unknown fault location are formulated based on pre-fault and fault data, solution to which leads to the fault location. A method is also described for determining the positive sequence

  5. A fault tolerant control system for hexagram inverter motor drive

    Microsoft Academic Search

    Liang Zhou; Keyue Smedley

    2010-01-01

    In this paper, a fault tolerant control method for a hexagram inverter motor drive is proposed. Due to its unique topology, the hexagram inverter is able to tolerate a certain degree of switch failure with a proper control method. The proposed method consists of fault detection, fault isolation and post-fault control. A simple fault isolation method is to use fuses in

  6. Design of arc fault detection system based on CAN bus

    Microsoft Academic Search

    Zong Ming; Yang Tian; Fengge Zhang

    2009-01-01

    An arc fault detection system (AFDS) is a device intended to protect the power system against arc faults that may cause fires. When an arc fault occurs, the magnitude of the fault current is lower than the pickup threshold of most of the protection devices installed in the circuit; hence an AFDS is an effective device to detect the arc fault successfully

  7. Fault facies and its application to sandstone reservoirs

    E-print Network

    Fossen, Haakon

    The concept of fault facies is a novel approach to fault description adapted to three-dimensional reservoir models. The fault envelope consists of a varying number of discrete fault facies originating from the host rock

  8. A model-based fault diagnosis of powered wheelchair

    Microsoft Academic Search

    Fumihiro Itaba; Masafumi Hashimoto; Kazuhiko Takahashi

    2007-01-01

    This paper describes a method of fault diagnosis of internal sensors and actuators for a powered wheelchair. We handle hard faults and scale faults of three sensors (two wheel resolvers and one gyro) as well as hard faults of two wheel motors. The hard fault of the gyro is diagnosed based on mode probability estimated with an interacting multiple-model estimator. The

  9. Syntactic Fault Patterns in OO Programs Roger T. Alexander

    E-print Network

    Offutt, Jeff

    Although faults are widely studied, there are many aspects of faults that we still do not understand. Since the purpose of testing is to cause failures and thereby detect faults, a full understanding of the characteristics of faults

  10. HOPE: an efficient parallel fault simulator for synchronous sequential circuits

    Microsoft Academic Search

    Hyung Ki Lee; Dong Sam Ha

    1992-01-01

    In this paper, we present an efficient sequential circuit parallel fault simulator, HOPE, which simulates 32 faults at a time. The key idea incorporated in HOPE is to screen out faults with short propagation paths through the single fault propagation. A systematic method of identifying faults with short propagation paths is presented. The proposed method substantially reduces the total number
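
    A toy illustration of the word-parallel idea behind simulators like HOPE (not HOPE's propagation-path heuristics or event-driven machinery): each bit position of a machine word carries one faulty copy of the circuit, so a 32-bit word evaluates the fault-free circuit and up to 31 single stuck-at faults in one pass. The two-gate circuit and fault list below are hypothetical.

      # Word-parallel stuck-at fault simulation of a toy 2-gate circuit:
      #   n1 = a AND b ;  out = n1 OR c
      # Bit 0 of every word is the fault-free machine; bits 1..N carry one fault each.
      FAULTS = [('a', 0), ('n1', 1), ('c', 1)]      # (net, stuck-at value)
      WIDTH = len(FAULTS) + 1
      ALL = (1 << WIDTH) - 1

      def spread(value):
          """Replicate a primary-input logic value across all fault copies."""
          return ALL if value else 0

      def inject(net_name, word):
          """Force the stuck-at value on the copies whose fault targets this net."""
          for slot, (net, sa) in enumerate(FAULTS, start=1):
              if net == net_name:
                  word = (word | (1 << slot)) if sa else (word & ~(1 << slot))
          return word

      def simulate(a, b, c):
          a = inject('a', spread(a))
          b = inject('b', spread(b))
          c = inject('c', spread(c))
          n1 = inject('n1', a & b)
          out = inject('out', n1 | c)
          return out

      def detected(out_word):
          """A fault is detected when its copy's output differs from bit 0."""
          good = out_word & 1
          return [FAULTS[slot - 1] for slot in range(1, WIDTH)
                  if ((out_word >> slot) & 1) != good]

      print(detected(simulate(1, 1, 0)))   # [('a', 0)]
      print(detected(simulate(0, 0, 0)))   # [('n1', 1), ('c', 1)]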

  11. Structural style of the Eureka fault system, eastern Missouri

    Microsoft Academic Search

    C. W. Clendenin; M. A. Middendorf; T. L. Thompson; J. W. Whitfield

    1993-01-01

    The Eureka fault system is one of a number of northwest-striking faults in eastern Missouri. The 60 km fault system (present known length) consists of three right-stepping en echelon fault segments. Each segment is 15 to 25 km in length and appears to have an independent rupture history. Faulting on each segment is confined to a relatively narrow linear zone

  12. Diversity against Accidental and Deliberate Faults Yves Deswarte1

    E-print Network

    Boyer, Edmond

    ... the three topics that gave rise to this book: security, fault tolerance, and software assurance. Those three topics can ... for addressing the classes of faults that underlie all these topics, i.e., design faults and intrusion faults.

  13. Carafe: an inductive fault analysis tool for CMOS VLSI circuits

    Microsoft Academic Search

    Alvin Jee; F. Joel Ferguson

    1993-01-01

    Traditional fault models for testing CMOS VLSI circuits do not take into account the actual mechanisms that precipitate faults in CMOS circuits. As a result, tests based on traditional fault models may not detect all the faults that occur in the circuit. This paper discusses the Carafe software package which determines which faults are likely to occur in a circuit

  14. Carafe: An Inductive Fault Analysis Tool for CMOS VLSI Circuits

    Microsoft Academic Search

    Alvin Jee; F. Joel Ferguson Boar

    1991-01-01

    Traditional fault models for testing CMOS VLSI circuits do not take into account the actual mechanisms that precipitate faults in CMOS circuits. As a result, tests based on traditional fault models may not detect the actual faults in the circuit. This paper discusses the Carafe software package which determines which faults are likely to occur in a circuit based on the circuit's physical design, defect parameters, and fabrication

  15. Actuator fault tolerant control in experimental networked embedded mini Drone

    Microsoft Academic Search

    Hossein Hashemi Nejad; Dominique Sauter; Samir Aberkane; Suzanne Lesecq

    2009-01-01

    This paper deals with freezing-fault reconfiguration in a small four-rotor helicopter (drone). This fault may be caused by network faults such as packet loss or a long delay in one actuator. In the case of a fault occurring in one actuator (motor), different strategies are proposed to compensate the fault effects on the drone. These approaches are based on the minimisation of

  16. Fault prophet : a fault injection tool for large scale computer systems

    E-print Network

    Tchwella, Tal

    2014-01-01

    In this thesis, I designed and implemented a fault injection tool, to study the impact of soft errors for large scale systems. Fault injection is used as a mechanism to simulate soft errors, measure the output variability ...

  17. Spacecraft fault tolerance: The Magellan experience

    NASA Technical Reports Server (NTRS)

    Kasuda, Rick; Packard, Donna Sexton

    1993-01-01

    Interplanetary and earth orbiting missions are now imposing unique fault tolerant requirements upon spacecraft design. Mission success is the prime motivator for building spacecraft with fault tolerant systems. The Magellan spacecraft had many such requirements imposed upon its design. Magellan met these requirements by building redundancy into all the major subsystem components and designing the onboard hardware and software with the capability to detect a fault, isolate it to a component, and issue commands to achieve a back-up configuration. This discussion is limited to fault protection, which is the autonomous capability to respond to a fault. The Magellan fault protection design is discussed, as well as the developmental and flight experiences and a summary of the lessons learned.

  18. Systems approach to software fault tolerance

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Eckhardt, D. E., Jr.

    1985-01-01

    Computing systems are employed for aerospace applications with high reliability requirements. In order to provide the needed reliability, it was necessary to make use of computing systems with fault-tolerance characteristics. Traditionally, fault tolerance is achieved through the use of hardware redundancy. However, fault-tolerant techniques based on suitable software design considerations have also been developed. The present paper is concerned with the major issues arising in the context of an application of fault-tolerant software techniques to dynamic systems. Attention is given to fault-tolerant flight software, software component stability, system stability with fault-tolerant software, the preservation of functional performance, N-version vs. recovery blocks in flight software, systems-based software, static and dynamic models, static and dynamic consistency tests, and recovery block initialization.

  19. In-circuit fault injector user's guide

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1987-01-01

    A fault injector system, called an in-circuit injector, was designed and developed to facilitate fault injection experiments performed at NASA-Langley's Avionics Integration Research Lab (AIRLAB). The in-circuit fault injector (ICFI) allows fault injections to be performed on electronic systems without special test features, e.g., sockets. The system supports stuck-at-zero, stuck-at-one, and transient fault models. The ICFI system is interfaced to a VAX-11/750 minicomputer. An interface program has been developed in the VAX. The computer code required to access the interface program is presented. Also presented is the connection procedure to be followed to connect the ICFI system to a circuit under test and the ICFI front panel controls which allow manual control of fault injections.
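
    The three fault models named above (stuck-at-zero, stuck-at-one, transient) can be illustrated with a short sketch applied to a simulated 0/1 signal trace. This Python function and the clock trace are hypothetical; this is not the ICFI implementation.

      # Illustrative fault-model sketch: apply stuck-at-0, stuck-at-1, or a
      # bounded transient upset to a recorded 0/1 signal trace.

      def inject_fault(trace, model, start, duration=1):
          faulty = list(trace)
          if model == "stuck-at-0":
              for i in range(start, len(faulty)):
                  faulty[i] = 0
          elif model == "stuck-at-1":
              for i in range(start, len(faulty)):
                  faulty[i] = 1
          elif model == "transient":
              # Flip the signal for a bounded window, then let it recover.
              for i in range(start, min(len(faulty), start + duration)):
                  faulty[i] ^= 1
          else:
              raise ValueError("unknown fault model: " + model)
          return faulty

      clock = [0, 1] * 8
      print(inject_fault(clock, "transient", start=4, duration=3))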

  20. Performance Analysis on Fault Tolerant Control System

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine

    2005-01-01

    In a fault tolerant control (FTC) system, a parameter varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. In this paper, an FTC analysis framework is provided to calculate the upper bound of an induced-L(sub 2) norm of an FTC system in the presence of false identification and detection time delay. The upper bound is written as a function of a fault detection time and exponential decay rates and has been used to determine which FTC law produces less performance degradation (tracking error) due to false identification. The analysis framework is applied to an FTC system of a HiMAT (Highly Maneuverable Aircraft Technology) vehicle. Index terms: fault tolerant control system, linear parameter varying system, HiMAT vehicle.

  1. Fault-tolerant dynamic task graph scheduling

    SciTech Connect

    Kurt, Mehmet C.; Krishnamoorthy, Sriram; Agrawal, Kunal; Agrawal, Gagan

    2014-11-16

    In this paper, we present an approach to fault tolerant execution of dynamic task graphs scheduled using work stealing. In particular, we focus on selective and localized recovery of tasks in the presence of soft faults. We elicit from the user the basic task graph structure in terms of successor and predecessor relationships. The work stealing-based algorithm to schedule such a task graph is augmented to enable recovery when the data and meta-data associated with a task get corrupted. We use this redundancy, and the knowledge of the task graph structure, to selectively recover from faults with low space and time overheads. We show that the fault tolerant design retains the essential properties of the underlying work stealing-based task scheduling algorithm, and that the fault tolerant execution is asymptotically optimal when task re-execution is taken into account. Experimental evaluation demonstrates the low cost of recovery under various fault scenarios.
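
    The selective-recovery idea described above can be pictured in a few lines (a Python sketch, illustrative only, not the cited scheduler): each task records its predecessors, so a corrupted result is rebuilt by re-executing only that task and the successors that consumed it. The three-task graph below is hypothetical.

      # Selective task recovery sketch: re-execute only the corrupted task and
      # its successors, using the stored predecessor/successor structure.

      tasks = {
          "a": {"deps": [],         "fn": lambda d: 1},
          "b": {"deps": ["a"],      "fn": lambda d: d["a"] + 1},
          "c": {"deps": ["a", "b"], "fn": lambda d: d["a"] * d["b"]},
      }
      results = {}

      def run(name):
          inputs = {dep: results[dep] for dep in tasks[name]["deps"]}
          results[name] = tasks[name]["fn"](inputs)

      for name in ("a", "b", "c"):      # topological order of this tiny graph
          run(name)

      # A soft fault corrupts the stored result of "b"; recover by re-running
      # only "b" and its successor "c" instead of the whole graph.
      results["b"] = None
      for name in ("b", "c"):
          run(name)
      print(results)                    # {'a': 1, 'b': 2, 'c': 2}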

  2. Holocene fault scarps near Tacoma, Washington, USA

    USGS Publications Warehouse

    Sherrod, B.L.; Brocher, T.M.; Weaver, C.S.; Bucknam, R.C.; Blakely, R.J.; Kelsey, H.M.; Nelson, A.R.; Haugerud, R.

    2004-01-01

    Airborne laser mapping confirms that Holocene active faults traverse the Puget Sound metropolitan area, northwestern continental United States. The mapping, which detects forest-floor relief of as little as 15 cm, reveals scarps along geophysical lineaments that separate areas of Holocene uplift and subsidence. Along one such line of scarps, we found that a fault warped the ground surface between A.D. 770 and 1160. This reverse fault, which projects through Tacoma, Washington, bounds the southern and western sides of the Seattle uplift. The northern flank of the Seattle uplift is bounded by a reverse fault beneath Seattle that broke in A.D. 900-930. Observations of tectonic scarps along the Tacoma fault demonstrate that active faulting with associated surface rupture and ground motions pose a significant hazard in the Puget Sound region.

  3. Parallel fault-tolerant robot control

    NASA Astrophysics Data System (ADS)

    Hamilton, Deirdre L.; Bennett, John K.; Walker, Ian D.

    1992-11-01

    Most robot controllers today employ a single processor architecture. As robot control requirements become more complex, these serial controllers have difficulty providing the desired response time. Additionally, with robots being used in environments that are hazardous or inaccessible to humans, fault-tolerant robotic systems are particularly desirable. A uniprocessor control architecture cannot offer tolerance of processor faults. Use of multiple processors for robot control offers two advantages over single processor systems. Parallel control provides a faster response, which in turn allows a finer granularity of control. Processor fault tolerance is also made possible by the existence of multiple processors. There is a trade-off between performance and the level of fault tolerance provided. This paper describes a shared memory multiprocessor robot controller that is capable of providing high performance and processor fault tolerance. We evaluate the performance of this controller, and demonstrate how performance and processor fault tolerance can be balanced in a cost- effective manner.

  4. West Coast Tsunami: Cascadia's Fault?

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Bernard, E. N.; Titov, V.

    2013-12-01

    The tragedies of the 2004 Sumatra and 2011 Japan tsunamis exposed the limits of our knowledge in preparing for devastating tsunamis. The 1,100-km coastline of the Pacific coast of North America has tectonic and geological settings similar to Sumatra and Japan. The geological records unambiguously show that the Cascadia fault has caused devastating tsunamis in the past and this geological process will cause tsunamis in the future. Hypotheses of the rupture process of the Cascadia fault include a long rupture (M9.1) along the entire fault line, short ruptures (M8.8-M9.1) nucleating along only a segment of the coastline, or a series of lesser events of M8+. Recent studies also indicate an increasing probability of a small rupture occurring at the south end of the Cascadia fault. Some of these hypotheses were implemented in the development of tsunami evacuation maps in Washington and Oregon. However, the developed maps do not reflect the tsunami impact implied by the most recent updates regarding the Cascadia fault rupture process. The most recent study by Wang et al. (2013) suggests a rupture pattern of high-slip patches separated by low-slip areas constrained by estimates of coseismic subsidence based on microfossil analyses. Since this study infers that a Tohoku-type earthquake could strike in the Cascadia subduction zone, how would such a tsunami affect the tsunami hazard assessment and planning along the Pacific Coast of North America? The rapid development of computing technology allowed us to look into the tsunami impact caused by the above hypotheses using high-resolution models with large coverage of the Pacific Northwest. With the slab model of MaCrory et al. (2012) (part of the USGS slab 1.0 model) for the Cascadia earthquake, we tested the above hypotheses to assess the tsunami hazards along the entire U.S. West Coast. The modeled results indicate these hypothetical scenarios may cause runup heights very similar to those observed along Japan's coastline during the 2011 Japan tsunami. Compared to a long rupture, the Tohoku-type rupture may cause more serious impact at the adjacent coastline, independent of where it would occur in the Cascadia subduction zone. These findings imply that the Cascadia tsunami hazard may be greater than originally thought.

  5. Fault collapsing is the process of reducing the number of faults by using redundance and equivalence/dominance

    E-print Network

    Al-Asaad, Hussain

    Fault collapsing is the process of reducing the number of faults by using redundancy and equivalence/dominance relationships among faults. Exact fault collapsing can be easily applied locally, but applying it globally to a whole circuit requires substantial resources such as execution time and/or memory. In this paper, we present EGFC, an exact global fault collapsing tool
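
    Local equivalence collapsing of the kind mentioned above can be shown on a single 2-input AND gate (a textbook Python sketch, not the EGFC tool): two stuck-at faults are merged when no input pattern distinguishes them at the gate output.

      # Equivalence-based fault collapsing on one AND gate (illustrative only).
      from itertools import product

      def output_under_fault(fault, a, b):
          node, stuck = fault                 # node is "a", "b", or "out"
          if node == "a":
              a = stuck
          if node == "b":
              b = stuck
          out = a & b
          return stuck if node == "out" else out

      faults = [(n, v) for n in ("a", "b", "out") for v in (0, 1)]

      def equivalent(f1, f2):
          return all(output_under_fault(f1, a, b) == output_under_fault(f2, a, b)
                     for a, b in product((0, 1), repeat=2))

      classes = []                            # one representative per class
      for f in faults:
          for cls in classes:
              if equivalent(f, cls[0]):
                  cls.append(f)
                  break
          else:
              classes.append([f])

      # a/0, b/0 and out/0 collapse into one class: 6 faults reduce to 4.
      print([cls[0] for cls in classes])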

  6. Use of Fault Dropping for Multiple Fault Analysis Youns KARKOURI, El Mostapha ABOULHAMID, Eduard CERNY and Alain VERREAULT

    E-print Network

    Aboulhamid, El Mostapha

    A new approach to fault analysis is presented. We consider multiple stuck-at-0/1 faults at the gate level. First, a fault

  7. Toward Reducing Fault Fix Time: Understanding Developer Behavior for the Design of Automated Fault Detection Tools, the Full Report

    E-print Network

    Young, R. Michael

    The longer a fault remains in the code from the time it was injected, the more time it will take to fix the fault. Increasingly, automated fault detection (AFD) tools are providing developers

  8. NFTAPE: Networked Fault Tolerance and Performance Evaluator

    Microsoft Academic Search

    David T. Stott; Phillip H. Jones III; M. Hamman; Zbigniew Kalbarczyk; Ravishankar K. Iyer

    2002-01-01

    NFTAPE is a software-implemented, highly flexible fault injection environment for conducting automated fault/error injection-based dependability characterization. NFTAPE: (1) enables a user (i) to specify a fault/error injection plan, (ii) to carry out injection experiments, and (iii) to collect the experimental results for analysis; (2) targets assessment of a broad set of dependability metrics, e.g., availability, reliability, coverage; (3)

  9. Tolerating Faults in Hypercubes Using Subcube Partitioning

    Microsoft Academic Search

    Jehoshua Bruck; Robert Cypher; Danny Soroker

    1992-01-01

    We examine the issue of running algorithms on a hypercube which has both node and edge faults, and we assume a worst-case distribution of the faults. We prove that for any constant c, an n-dimensional hypercube (n-cube) with n^c faulty components contains a fault-free subgraph that can implement a large class of hypercube algorithms with only a constant factor slowdown. In addition, our approach yields practical

  10. Transient fault detection via simultaneous multithreading

    Microsoft Academic Search

    Steven K. Reinhardt; Shubhendu S. Mukherjee

    2000-01-01

    Smaller feature sizes, reduced voltage levels, higher transistor counts, and reduced noise margins make future generations of microprocessors increasingly prone to transient hardware faults. Most commercial fault-tolerant computers use fully replicated hardware components to detect microprocessor faults. The components are lockstepped (cycle-by-cycle synchronized) to ensure that, in each cycle, they perform the same operation on the same inputs, producing the
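
    The detection principle above (run the same work redundantly and compare) can be mimicked in a toy sketch; real SMT-based schemes do this with redundant hardware threads, whereas the Python below only mirrors the compare-and-flag idea with a simulated bit flip.

      # Toy redundant-execution sketch for transient fault detection.
      import random

      def compute(xs):
          return sum(x * x for x in xs)

      def compute_with_possible_upset(xs, upset_probability=0.3):
          result = compute(xs)
          if random.random() < upset_probability:
              result ^= 1 << random.randrange(16)   # simulated transient bit flip
          return result

      data = list(range(100))
      leading = compute_with_possible_upset(data)   # "leading" execution
      trailing = compute(data)                      # redundant "trailing" execution
      print("transient fault detected" if leading != trailing else "results match")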

  11. Segregation of Solute Atoms to Stacking Faults

    Microsoft Academic Search

    Hideji Suzuki

    1962-01-01

    Experimental evidence was given for the segregation of solute atoms to stacking faults in alpha-brass. The stacking fault energy in an alpha-phase solid solution with face-centered cubic structure usually decreases continuously with increasing concentration of solute atoms. The solute atoms in that alloy tend to segregate to the stacking faults due to chemical interaction. A simple calculation indicates

  12. Stacking fault energies of random metallic alloys

    Microsoft Academic Search

    S. Crampin; D. D. Vvedensky; R. Monnier

    1993-01-01

    Stacking fault energies in dilute Cu(Al) alloys and across the composition range of PdAg alloys are calculated from first principles using the layer Korringa-Kohn-Rostoker method and treating the compositional disorder within the coherent potential approximation. In Cu(Al), rigid-band behaviour results in a sharp reduction in the fault energy with Al concentration. The non-uniform variation of the fault energy in PdAg

  13. Supporting Reconfigurable Fault Tolerance on Application Servers

    Microsoft Academic Search

    Junguo Li; Gang Huang; Xingrun Chen; Franck Chauvel; Hong Mei

    2009-01-01

    Dynamic reconfiguration support in application servers is a solution to meet the demands for flexible and adaptive component-based applications. However, when an application is reconfigured, its fault-tolerant mechanism should be reconfigured as well. This is one of the crucial problems we have to solve before a fault-tolerant application is dynamically reconfigured at runtime. This paper proposes a fault-tolerant sandbox to support

  14. The fault-tolerant multiprocessor computer

    NASA Technical Reports Server (NTRS)

    Smith, T. B., III (editor); Lala, J. H. (editor); Goldberg, J. (editor); Kautz, W. H. (editor); Melliar-Smith, P. M. (editor); Green, M. W. (editor); Levitt, K. N. (editor); Schwartz, R. L. (editor); Weinstock, C. B. (editor); Palumbo, D. L. (editor)

    1986-01-01

    The development and evaluation of fault-tolerant computer architectures and software-implemented fault tolerance (SIFT) for use in advanced NASA vehicles and potentially in flight-control systems are described in a collection of previously published reports prepared for NASA. Topics addressed include the principles of fault-tolerant multiprocessor (FTMP) operation; processor and slave regional designs; FTMP executive, facilities, acceptance-test/diagnostic, applications, and support software; FTM reliability and availability models; SIFT hardware design; and SIFT validation and verification.

  15. Fault seal analysis: Methodology and case studies

    SciTech Connect

    Badley, M.E.; Freeman, B.; Needham, D.T. [Earth Sciences Limited, Lincolnshire (United Kingdom)

    1996-12-31

    Fault seal can arise from reservoir/non-reservoir juxtaposition or by development of fault rock of high entry-pressure. The methodology for evaluating these possibilities uses detailed seismic mapping and well analysis. A 'first-order' seal analysis involves identifying reservoir juxtaposition areas over the fault surface, using the mapped horizons and a refined reservoir stratigraphy defined by isochores at the fault surface. The 'second-order' phase of the analysis assesses whether the sand-sand contacts are likely to support a pressure difference. We define two lithology-dependent attributes, 'Gouge Ratio' and 'Smear Factor'. Gouge Ratio is an estimate of the proportion of fine-grained material entrained into the fault gouge from the wall rocks. Smear Factor methods estimate the profile thickness of a ductile shale drawn along the fault zone during faulting. Both of these parameters vary over the fault surface, implying that faults cannot simply be designated 'sealing' or 'non-sealing'. An important step in using these parameters is to calibrate them in areas where across-fault pressure differences are explicitly known from wells on both sides of a fault. Our calibration for a number of datasets shows remarkably consistent results despite their diverse settings (e.g. Brent Province, Niger Delta, Columbus Basin). For example, a Shale Gouge Ratio of c. 20% (volume of shale in the slipped interval) is a typical threshold between minimal across-fault pressure difference and significant seal.

  16. Fault seal analysis: Methodology and case studies

    SciTech Connect

    Badley, M.E.; Freeman, B.; Needham, D.T. (Earth Sciences Limited, Lincolnshire (United Kingdom))

    1996-01-01

    Fault seal can arise from reservoir/non-reservoir juxtaposition or by development of fault rock of high entry-pressure. The methodology for evaluating these possibilities uses detailed seismic mapping and well analysis. A 'first-order' seal analysis involves identifying reservoir juxtaposition areas over the fault surface, using the mapped horizons and a refined reservoir stratigraphy defined by isochores at the fault surface. The 'second-order' phase of the analysis assesses whether the sand-sand contacts are likely to support a pressure difference. We define two lithology-dependent attributes, 'Gouge Ratio' and 'Smear Factor'. Gouge Ratio is an estimate of the proportion of fine-grained material entrained into the fault gouge from the wall rocks. Smear Factor methods estimate the profile thickness of a ductile shale drawn along the fault zone during faulting. Both of these parameters vary over the fault surface, implying that faults cannot simply be designated 'sealing' or 'non-sealing'. An important step in using these parameters is to calibrate them in areas where across-fault pressure differences are explicitly known from wells on both sides of a fault. Our calibration for a number of datasets shows remarkably consistent results despite their diverse settings (e.g. Brent Province, Niger Delta, Columbus Basin). For example, a Shale Gouge Ratio of c. 20% (volume of shale in the slipped interval) is a typical threshold between minimal across-fault pressure difference and significant seal.
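
    The Shale Gouge Ratio mentioned above (the shale proportion of the interval that has slipped past a point on the fault) lends itself to a simple worked example. The layer thicknesses and shale fractions in this Python sketch are hypothetical, and the calculation is the standard definition rather than the authors' workflow.

      # Shale Gouge Ratio (SGR) sketch: shale fraction of the slipped interval.

      def shale_gouge_ratio(layers, throw):
          # layers: (thickness_m, shale_fraction) pairs within the slipped interval
          slipped, shale = 0.0, 0.0
          for thickness, v_shale in layers:
              take = min(thickness, throw - slipped)   # clip to the fault throw
              if take <= 0:
                  break
              shale += take * v_shale
              slipped += take
          return 100.0 * shale / slipped

      # 20 m of throw across a clean sand (10% shale) overlain by a shaly unit (60%).
      layers = [(12.0, 0.10), (15.0, 0.60)]
      print("SGR = %.1f%%" % shale_gouge_ratio(layers, throw=20.0))   # ~30%, above
      # the c. 20% threshold for significant seal quoted in the abstract above.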

  17. The effect of faults on network expansion

    Microsoft Academic Search

    Amitabha Bagchi; Ankur Bhargava; Amitabh Chaudhary; David Eppstein; Christian Scheideler

    2004-01-01

    In this paper we study the problem of how resilient networks are to node faults. Specifically, we investigate the question of how many faults a network can sustain so that it still contains a large (i.e. linear-sized) connected component that still has approximately the same expansion as the original fault-free network. For this we apply a pruning technique which culls

  18. Probability Analysis for CMOS Floating Gate Faults

    Microsoft Academic Search

    Hua Xue; Chennian Di; Jochen A. G. Jess

    1994-01-01

    The electrical behavior of a floating gate MOS transistor is mask-topology-dependent, i.e. floating on different sites of interconnection may result in different fault behavior. In this paper, we present a net-oriented deterministic approach to compute the probability of different open faults on each net, by taking into account the process defect statistics and mask layout data. The open faults causing

  19. Frictional properties of natural fault gouge from a low-angle normal fault, Panamint Valley, California

    Microsoft Academic Search

    T. Numelin; C. Marone; E. Kirby

    2007-01-01

    We investigate the relationship between frictional strength and clay mineralogy of natural fault gouge from a low-angle normal fault in Panamint Valley, California. Gouge samples were collected from the fault zone at five locations along a north-south transect of the range-bounding fault system, spanning a variety of bedrock lithologies. Samples were powdered and sheared in the double-direct shear configuration at

  20. Frictional properties of natural fault gouge from a low-angle normal fault, Panamint Valley, California

    Microsoft Academic Search

    T. Numelin; C. Marone; E. Kirby

    2007-01-01

    We investigate the relationship between frictional strength and clay mineralogy of natural fault gouge from a low-angle normal fault in Panamint Valley, California. Gouge samples were collected from the fault zone at five locations along a north–south transect of the range-bounding fault system, spanning a variety of bedrock lithologies. Samples were powdered and sheared in the double-direct shear configuration at

  1. Drilling Active Faults in Northern Europe: Geological and Geophysical Data Sets on Postglacial Faults in Finland

    NASA Astrophysics Data System (ADS)

    Kukkonen, I. T.

    2011-12-01

    Postglacial faults represent a special type of intraplate earthquake-generating faults which were formed at the late stages of or immediately after the Weichselian glaciation in northern Europe at about 9000 - 15 000 years B.P. In northern Finland, Sweden and Norway 14 postglacial faults are known with fault scarps up to 30 m high and up to 160 km long. The faults are mostly interpreted as SW-NE oriented thrust faults dipping 30-50° SE. Many of the faults are still seismically active, indicating that the structures may have a significant role in releasing seismic energy in the otherwise seismically quiet continental area. In Finland, four postglacial faults are known, the Suasselkä (fault scarp length 50 km), Pasmajärvi (5 km), Venejärvi (10 km) and Ruostejärvi (4 km) faults. The scarp heights of these faults range from 0 to 12 m. The ICDP drilling project DAFNE (Drilling into Active Faults in Northern Europe) is currently under preparation. The project aims at drilling 1-3 km deep research boreholes into a postglacial fault which is seismically active. The project will study the structure, tectonics, deformation, seismicity, stress field, hydrogeology, and deep biosphere of postglacial faults. The presentation reviews the postglacial faults in Finland and the status of geological and geophysical data available on the structures. Existing data and maps on Precambrian rocks and Quaternary sediments, shallow drill cores, excavation pits, low-altitude airborne magnetic, EM and radiometric data, various surface geophysical data sets, reflection seismic surveys, and seismic monitoring of the faults are discussed with a particular view on surveying for potential drilling targets.

  2. A novel approach to the distance protection, fault location and arcing faults recognition

    Microsoft Academic Search

    Z. M. Radojevic; H.-J. Koglin; V. V. Terzija

    2004-01-01

    In this paper a novel two-stage numerical algorithm devoted to fault distance calculation and arcing fault recognition is presented. The first algorithm stage serves for the fault distance calculation. Fault distance is calculated from the fundamental-frequency phase voltage and current phasors, utilizing the positive- and zero-sequence impedance of the line as an input parameter. The second algorithm stage serves for

  3. Hydrologic, water-quality, and biological assessment of Laguna de las Salinas, Ponce, Puerto Rico, January 2003-September 2004

    USGS Publications Warehouse

    Soler-López, Luis R.; Gómez-Gómez, Fernando; Rodríguez-Martínez, Jesús

    2005-01-01

    The Laguna de Las Salinas is a shallow, 35-hectare, hypersaline lagoon (depth less than 1 meter) in the municipio of Ponce, located on the southern coastal plain of Puerto Rico. Hydrologic, water-quality, and biological data in the lagoon were collected between January 2003 and September 2004 to establish baseline conditions. During the study period, rainfall was about 1,130 millimeters, with much of the rain recorded during three distinct intense events. The lagoon is connected to the sea by a shallow, narrow channel. Subtle tidal changes, combined with low rainfall and high evaporation rates, kept the lagoon at salinities above that of the sea throughout most of the study. Water-quality properties measured on-site (temperature, pH, dissolved oxygen, specific conductance, and Secchi disk transparency) exhibited temporal rather than spatial variations and distribution. Although all physical parameters were in compliance with current regulatory standards for Puerto Rico, hyperthermic and hypoxic conditions were recorded during isolated occasions. Nutrient concentrations were relatively low and in compliance with current regulatory standards (less than 5.0 and 1.0 milligrams per liter for total nitrogen and total phosphorus, respectively). The average total nitrogen concentration was 1.9 milligrams per liter and the average total phosphorus concentration was 0.4 milligram per liter. Total organic carbon concentrations ranged from 12.0 to 19.0 milligrams per liter. Chlorophyll a was the predominant form of photosynthetic pigment in the water. The average chlorophyll a concentration was 13.4 micrograms per liter. Chlorophyll b was detected (detection limits 0.10 microgram per liter) only twice during the study. About 90 percent of the primary productivity in the Laguna de Las Salinas was generated by periphyton such as algal mats and macrophytes such as seagrasses. Of the average net productivity of 13.6 grams of oxygen per cubic meter per day derived from the diel study, the periphyton and macrophytes produced 12.3 grams per cubic meter per day; about 1.3 grams (about 10 percent) were produced by the phytoplankton (plant and algae component of plankton). The total respiration rate was 59.2 grams of oxygen per cubic meter per day. The respiration rate ascribed to the plankton (all organisms floating through the water column) averaged about 6.2 grams of oxygen per cubic meter per day (about 10 percent), whereas the respiration rate by all other organisms averaged 53.0 grams of oxygen per cubic meter per day (about 90 percent). Plankton gross productivity was 7.5 grams per cubic meter per day; the gross productivity of the entire community averaged 72.8 grams per cubic meter per day. Fecal coliform bacteria counts were generally less than 200 colonies per 100 milliliters; the highest concentration was 600 colonies per 100 milliliters.

  4. Method to identify wells that yield water that will be replaced by water from the Colorado River downstream from Laguna Dam in Arizona and California

    USGS Publications Warehouse

    Owen-Joyce, Sandra J.; Wilson, Richard P.; Carpenter, Michael C.; Fink, James B.

    2000-01-01

    Accounting for the use of Colorado River water is required by the U.S. Supreme Court decree, 1964, Arizona v. California. Water pumped from wells on the flood plain and from certain wells on alluvial slopes outside the flood plain is presumed to be river water and is accounted for as Colorado River water. The accounting-surface method developed for the area upstream from Laguna Dam was modified for use downstream from Laguna Dam to identify wells outside the flood plain of the lower Colorado River that yield water that will be replaced by water from the river. Use of the same method provides a uniform criterion of identification for all users pumping water from wells by determining if the static water-level elevation in the well is above or below the elevation of the accounting surface. Wells that have a static water-level elevation equal to or below the accounting surface are presumed to yield water that will be replaced by water from the Colorado River. Wells that have a static water-level elevation above the accounting surface are presumed to yield river water stored above river level. The method is based on the concept of a river aquifer and an accounting surface within the river aquifer. The river aquifer consists of permeable sediments and sedimentary rocks that are hydraulically connected to the Colorado River so that water can move between the river and the aquifer in response to withdrawal of water from the aquifer or differences in water-level elevations between the river and the aquifer. The subsurface limit of the river aquifer is the nearly impermeable bedrock of the bottom and sides of the basins that underlie the Yuma area and adjacent valleys. The accounting surface represents the elevation and slope of the unconfined static water table in the river aquifer outside the flood plain of the Colorado River that would exist if the river were the only source of water to the river aquifer. The accounting surface was generated by using water-surface profiles of the Colorado River from Laguna Dam to about the downstream limit of perennial flow at Morelos Dam. The accounting surface extends outward from the edges of the flood plain to the subsurface boundary of the river aquifer. Maps at a scale of 1:100,000 show the extent of the river aquifer and elevation of the accounting surface downstream from Laguna Dam in Arizona and California.
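
    The accounting-surface test described above reduces to a simple comparison per well, sketched below in Python; the well names and elevations are hypothetical, and the two outcome phrases follow the presumptions stated in the abstract.

      # Accounting-surface well test: compare static water level to the surface.

      def classify_well(static_water_level_ft, accounting_surface_ft):
          if static_water_level_ft <= accounting_surface_ft:
              return "presumed to yield water that will be replaced by Colorado River water"
          return "presumed to yield river water stored above river level"

      wells = {                      # (static water level, accounting surface), feet
          "well A": (118.2, 120.0),
          "well B": (123.5, 120.0),
      }
      for name, (level, surface) in wells.items():
          print(name, "->", classify_well(level, surface))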

  5. Faults Discovery By Using Mined Data

    NASA Technical Reports Server (NTRS)

    Lee, Charles

    2005-01-01

    Fault discovery in complex systems consists of model-based reasoning, fault tree analysis, rule-based inference methods, and other approaches. Model-based reasoning builds models for the systems either by mathematical formulation or by experimental modeling. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model based on expert knowledge. Those models and methods have one thing in common: they presume some prior conditions. Complex systems often use fault trees to analyze faults. Fault diagnosis, when an error occurs, is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on the data feedback from the system, and decisions are made based on threshold values by using fault trees. Since those decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time and captures the contents of fault trees as the initial state of the trees.
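
    The decision-tree idea described above can be sketched as a small classifier trained on telemetry samples so that new data are labeled in real time. scikit-learn and the synthetic voltage/current/temperature samples below are assumptions for illustration, not part of the cited work.

      # Decision-tree fault discovery sketch on synthetic telemetry.
      from sklearn.tree import DecisionTreeClassifier

      # Each sample: [bus_voltage, bus_current, temperature]
      X = [
          [120.1, 4.9, 35.0], [119.8, 5.1, 36.2],        # nominal
          [118.9, 9.8, 41.0], [119.2, 9.5, 43.5],        # overcurrent fault
          [101.3, 4.8, 35.5], [ 99.7, 5.0, 36.0],        # undervoltage fault
      ]
      y = ["nominal", "nominal", "overcurrent", "overcurrent",
           "undervoltage", "undervoltage"]

      clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
      print(clf.predict([[100.5, 5.0, 35.8]]))           # -> ['undervoltage']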

  6. Development of a bridge fault extractor tool 

    E-print Network

    Bhat, Nandan D.

    2005-02-17

    be determined with certainty by analyzing defects. This is referred to as defect diagnosis. This first requires locating them within the chip. This process can be simplified through the use of test structures [2][3]. An example is static RAM whose bad bits... is called fault diagnosis. Fault localization or fault isolation is the process of identifying a region within an integrated circuit that contains a circuit fault, such as a short or open circuit. This region must be small enough that the defect causing...

  7. Mantle fault zone beneath Kilauea Volcano, Hawaii

    USGS Publications Warehouse

    Wolfe, C.J.; Okubo, P.G.; Shearer, P.M.

    2003-01-01

    Relocations and focal mechanism analyses of deep earthquakes (???13 kilometers) at Kilauea volcano demonstrate that seismicity is focused on an active fault zone at 30-kilometer depth, with seaward slip on a low-angle plane, and other smaller, distinct fault zones. The earthquakes we have analyzed predominantly reflect tectonic faulting in the brittle lithosphere rather than magma movement associated with volcanic activity. The tectonic earthquakes may be induced on preexisting faults by stresses of magmatic origin, although background stresses from volcano loading and lithospheric flexure may also contribute.

  8. Applications of Fault Detection in Vibrating Structures

    NASA Technical Reports Server (NTRS)

    Eure, Kenneth W.; Hogge, Edward; Quach, Cuong C.; Vazquez, Sixto L.; Russell, Andrew; Hill, Boyd L.

    2012-01-01

    Structural fault detection and identification remains an area of active research. Solutions to fault detection and identification may be based on subtle changes in the time series history of vibration signals originating from various sensor locations throughout the structure. The purpose of this paper is to document the application of vibration based fault detection methods applied to several structures. Overall, this paper demonstrates the utility of vibration based methods for fault detection in a controlled laboratory setting and limitations of applying the same methods to a similar structure during flight on an experimental subscale aircraft.

  9. Block rotations, fault domains and crustal deformation

    NASA Technical Reports Server (NTRS)

    Nur, A.; Ron, H.

    1987-01-01

    Much of the earth's crust is broken by sets of parallel strike-slip faults which are organized in domains. A simple kinematic model suggests that when subject to tectonic strain, the faults, and the blocks bound by them, rotate. The rotation can be estimated from the structurally-determined fault slip and fault spacing, and independently from local deviations of paleomagnetic declinations from global values. A rigorous test of this model was carried out in northern Israel, where good agreement was found between the two rotations.

  10. Diagnosing multiple faults in SSM/PMAD

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel

    1990-01-01

    Multiple fault diagnosis for SSM/PMAD (space station module/power management and distribution) using the knowledge management design system as applied to the SSM/PMAD domain (KNOMAD-SSM/PMAD) is discussed. KNOMAD-SSM/PMAD provides a powerful facility for knowledge representation and reasoning which has been used to build the second generation of FRAMES (fault recovery and management expert system). FRAMES now handles the diagnosis of multiple faults and provides support for a more powerful interface for user interaction during autonomous operation. There are two types of multiple fault diagnosis handled in FRAMES. The first diagnoses hard faults, soft faults, and incipient faults simultaneously. The second diagnoses multiple hard faults which occur in close proximity in time to one another. Multiple fault diagnosis in FRAMES is performed using a rule-based approach. This rule-based approach, enabled by the KNOMAD-SSM/PMAD system, has proven to be powerful. Levels of autonomy are discussed, focusing on the approach taken in FRAMES for providing at least three levels of autonomy: complete autonomy, partial autonomy, and complete manual mode.

  11. Reconstructing paleoenvironmental conditions during the past 50 ka from the biogeochemical record of Laguna Potrok Aike, southern Patagonia

    NASA Astrophysics Data System (ADS)

    Hahn, A.; Rosén, P.; Kliem, P.; Ohlendorf, C.; Zolitschka, B.

    2011-12-01

    Total organic carbon (TOC), total inorganic carbon (TIC) and biogenic silica (BSi) assessed by Fourier transform infrared spectroscopy (FTIRS) are used to reconstruct the environmental history of the past 50 kyr in high resolution from Laguna Potrok Aike. During the Holocene, warmer conditions led to increased productivity, reflected in higher TOC and BSi contents. Calcite precipitation initiated around 9 ka cal. BP, probably due to supersaturation induced by lake level lowering. It is assumed that prior to this time period sediments are carbonate-free because high lake-level conditions prevailed. During the Glacial, increased runoff linked to permafrost, precipitation related to stronger cyclonic activity and reduced evaporation caused higher lake levels. Moreover, during cold glacial conditions lake productivity was low and organic matter was mainly of algal or cyanobacterial origin, as indicated by generally low TOC and C/N values. During interstadials, such as the Antarctic A-events and the Younger Dryas, TOC contents appear to rise. The glacial C/N ratios and their correlation with TOC concentrations indicate that aquatic moss blooms probably induce these increases in TOC. Aquatic mosses grow if surface water temperatures rise due to warmer climatic conditions and/or the development of lake water stratification. The latter may occur if wind speeds are low and melt water inflow causes higher density gradients. Prevailing permafrost thawing during warmer periods could lead to considerable rises of lake levels, which would contribute to the preservation of organic material. This may explain why higher C/N and TOC values occur at the end of Antarctic A-events. For the uppermost 25 m, the BSi profile shows a high correlation with the TOC profile. In deeper horizons, however, there are indications that the BSi/TOC ratio increased. This part of the record is dominated by mass movement events, which may have supplied nutrients and thus triggered diatom blooms.

  12. 15,000-yr pollen record of vegetation change in the high altitude tropical Andes at Laguna Verde Alta, Venezuela

    NASA Astrophysics Data System (ADS)

    Rull, Valentí; Abbott, Mark B.; Polissar, Pratigya J.; Wolfe, Alexander P.; Bezada, Maximiliano; Bradley, Raymond S.

    2005-11-01

    Pollen analysis of sediments from a high-altitude (4215 m), Neotropical (9°N) Andean lake was conducted in order to reconstruct local and regional vegetation dynamics since deglaciation. Although deglaciation commenced ˜15,500 cal yr B.P., the area around the Laguna Verde Alta (LVA) remained a periglacial desert, practically unvegetated, until about 11,000 cal yr B.P. At this time, a lycopod assemblage bearing no modern analog colonized the superpáramo. Although this community persisted until ˜6000 cal yr B.P., it began to decline somewhat earlier, in synchrony with cooling following the Holocene thermal maximum of the Northern Hemisphere. At this time, the pioneer assemblage was replaced by a low-diversity superpáramo community that became established ˜9000 cal yr B.P. This replacement coincides with regional declines in temperature and/or available moisture. Modern, more diverse superpáramo assemblages were not established until ˜4600 cal yr B.P., and were accompanied by a dramatic decline in Alnus, probably the result of factors associated with climate, humans, or both. Pollen influx from upper Andean forests is remarkably higher than expected during the Late Glacial and early to middle Holocene, especially between 14,000 and 12,600 cal yr B.P., when unparalleled high values are recorded. We propose that intensification of upslope orographic winds transported lower elevation forest pollen to the superpáramo, causing the apparent increase in tree pollen at high altitude. The association between increased forest pollen and summer insolation at this time suggests a causal link; however, further work is needed to clarify this relationship.

  13. Permeability of fault-related rocks, and implications for hydraulic structure of fault zones

    Microsoft Academic Search

    J. Goddard; C. Forster

    1997-01-01

    The permeability structure of a fault zone in granitic rocks has been investigated by laboratory testing of intact core samples from the unfaulted protolith and the two principal fault zone components: the fault core and the damaged zone. The results of two test series performed on rocks obtained from outcrop are reported. First, tests performed at low confining pressure on

  14. FAULT DETECTION AND IDENTIFICATION OF ACTUATOR FAULTS USING LINEAR PARAMETER VARYING MODELS

    Microsoft Academic Search

    R. Hallouzi; V. Verdult; R. Babuska; M. Verhaegen

    A method is proposed to detect and identify two common classes of actuator faults in nonlinear systems. The two fault classes are total and partial actuator faults. This is accomplished by representing the nonlinear system by a Linear Parameter Varying (LPV) model, which is derived from experimental input-output data. The LPV model is used in a Kalman filter to estimate

  15. DYNAMIC SLIP TRANSFER FROM THE DENALI TO TOTSCHUNDA FAULTS, ALASKA: TESTING THEORY FOR FAULT BRANCHING

    E-print Network

    Kame, Nobuki

    [Accepted for publication in the BSSA special issue on the 2002 Denali earthquake.] We analyze the observed dynamic slip transfer from the Denali to Totschunda faults during the Mw 7.9, November 3, 2002

  16. Dynamic Slip Transfer from the Denali to Totschunda Faults, Alaska: Testing Theory for Fault Branching

    Microsoft Academic Search

    Harsha S. Bhat; Renata Dmowska; James R. Rice; Nobuki Kame

    2004-01-01

    We analyze the observed dynamic slip transfer from the Denali to Totschunda faults during the Mw 7.9 3 November 2002 Denali fault earthquake, Alaska. This study adopts the theory and methodology of Poliakov et al. (2002) and Kame et al. (2003), in which it was shown that the propensity of the rupture path to follow a fault branch is determined

  17. Seismotectonics of the Central Denali Fault, Alaska and the 2002 Denali Fault Earthquake Sequence

    Microsoft Academic Search

    N. A. Ratchkovski; S. Wiemer; R. Hansen

    2004-01-01

    We analyzed the spatial and temporal variations in the seismicity and stress state within the central Denali fault system, Alaska, before and during the 2002 Denali fault earthquake sequence. Seismicity prior to the 2002 earthquake sequence along the Denali fault was very light with an average of four events with magnitude 3.0 and greater per year. We observe a significant

  18. Interaction of a Dynamic Rupture on a Fault Plane with Short Frictionless Fault Branches

    E-print Network

    Rosakis, Ares J.

    mechanics and fault zone structure. From a fault mechanics perspective, an earthquake is a propagating rupture on an existing fault surface, which is controlled by sliding friction. Fracture energy is the work done in reducing the coefficient of friction from its static value to a lower dynamic one

  19. Collateral damage: Evolution with displacement of fracture distribution and secondary fault strands in fault

    E-print Network

    Savage, Heather M.

    faults is governed by the same process. Based on our own field work combined with data from

  20. Supervision, fault-detection and fault-diagnosis methods — An introduction

    Microsoft Academic Search

    R. Isermann

    1997-01-01

    The operation of technical processes requires increasingly advanced supervision and fault diagnosis to improve reliability, safety and economy. This paper gives an introduction to the field of fault detection and diagnosis. It begins with a consideration of a knowledge-based procedure that is based on analytical and heuristic information. Then different methods of fault detection are considered, which extract features from

  1. Spectral domain arcing fault recognition and fault distance calculation in transmission systems

    Microsoft Academic Search

    Zoran M Radojevi?; Vladimir V. Terzija; Milenko B. Djuri?

    1996-01-01

    A new numerical algorithm for recognizing arcing faults for the purpose of automatic reclosing is presented. The fault distance can also be calculated using this algorithm. The solution for both symmetrical and unsymmetrical faults is given. A simple empirical arc voltage model is obtained through a table of numerical values. By means of computer simulation and laboratory testing it is

  2. A new two-terminal numerical algorithm for fault location, distance protection, and arcing fault recognition

    Microsoft Academic Search

    C. J. Lee; J. B. Park; J. R. Shin; Z. M. Radojevie

    2006-01-01

    This letter presents a new numerical algorithm for fault location calculation and arcing faults recognition. The proposed algorithm is based on the synchronized phasors measured from the phasor measurement units (PMUs) installed at both terminals of the transmission lines. From the calculated arc voltage amplitude, a decision can be made whether the fault is permanent or transient. The proposed algorithm
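
    A textbook two-terminal location formula gives the flavor of what synchronized phasors from both line ends make possible (lumped line model, shunt capacitance neglected). This Python sketch is not the algorithm of the letter above, and all phasor values are hypothetical.

      # Two-terminal fault-location sketch: both ends see the same fault voltage,
      #   Vs - d*z*Is = Vr - (L - d)*z*Ir  =>  d = (Vs - Vr + L*z*Ir) / (z*(Is + Ir))

      def fault_distance(Vs, Is, Vr, Ir, z, L):
          return ((Vs - Vr + L * z * Ir) / (z * (Is + Ir))).real

      z = complex(0.05, 0.5)         # ohm/km, positive-sequence line impedance
      L = 100.0                      # km, line length
      d_true = 37.0                  # km, used only to build consistent phasors
      Vf = complex(40e3, 5e3)        # fault-point voltage phasor, V
      Is, Ir = complex(800, -300), complex(650, -250)    # terminal currents, A
      Vs = Vf + d_true * z * Is
      Vr = Vf + (L - d_true) * z * Ir

      print("estimated fault distance: %.1f km" % fault_distance(Vs, Is, Vr, Ir, z, L))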

  3. Fault tolerant tracking controller design for TS fuzzy disturbed systems with uncertainties subject to actuator faults

    Microsoft Academic Search

    Sabrina Aouaouda; Khadir Mohamed Tarek; Dalil Ichalal; Tahar Bouarar; Mohamed Chadli

    2011-01-01

    This paper addresses the fault tolerant control (FTC) problem for uncertain nonlinear systems with external disturbances, subject to actuator faults. The aim is to synthesize a fault tolerant controller ensuring trajectory tracking for a class of nonlinear systems represented by Takagi-Sugeno (T-S) models with measurable premise variables. In order to design the FTC law, a proportional integral observer (PIO) is adopted

  4. Frictional heterogeneities on carbonate-bearing normal faults: Insights from the Monte Maggio Fault, Italy

    NASA Astrophysics Data System (ADS)

    Carpenter, B. M.; Scuderi, M. M.; Collettini, C.; Marone, C.

    2014-12-01

    Observations of heterogeneous and complex fault slip are often attributed to the complexity of fault structure and/or spatial heterogeneity of fault frictional behavior. Such complex slip patterns have been observed for earthquakes on normal faults throughout central Italy, where many of the Mw 6 to 7 earthquakes in the Apennines nucleate at depths where the lithology is dominated by carbonate rocks. To explore the relationship between fault structure and heterogeneous frictional properties, we studied the exhumed Monte Maggio Fault, located in the northern Apennines. We collected intact specimens of the fault zone, including the principal slip surface and hanging wall cataclasite, and performed experiments at a normal stress of 10 MPa under saturated conditions. Experiments designed to reactivate slip between the cemented principal slip surface and cataclasite show a 3 MPa stress drop as the fault surface fails, then velocity-neutral frictional behavior and significant frictional healing. Overall, our results suggest that (1) earthquakes may readily nucleate in areas of the fault where the slip surface separates massive limestone and are likely to propagate in areas where fault gouge is in contact with the slip surface; (2) postseismic slip is more likely to occur in areas of the fault where gouge is present; and (3) high rates of frictional healing and low creep relaxation observed between solid fault surfaces could lead to significant aftershocks in areas of low stress drop.
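
    For readers unfamiliar with the terminology, the velocity-neutral behavior and frictional healing reported above are usually discussed in terms of the standard rate-and-state friction law; the expressions below are the common textbook form (a background note, not equations taken from the cited study):

      \mu = \mu_0 + a \ln\!\left(\frac{V}{V_0}\right) + b \ln\!\left(\frac{V_0 \theta}{D_c}\right),
      \qquad
      \frac{d\mu_{ss}}{d\ln V} = a - b

    Here V is slip velocity, \theta the state variable, and D_c the critical slip distance; (a - b) near zero corresponds to the velocity-neutral behavior mentioned above, while frictional healing appears as the increase of static friction with hold time in slide-hold-slide tests.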

  5. Impossible Fault Analysis of RC4 and Differential Fault Analysis of RC4

    Microsoft Academic Search

    Eli Biham; Louis Granboulan; Phong Q. Nguyen

    2005-01-01

    In this paper we introduce the notion of impossible fault analysis, and present an impossible fault analysis of RC4, whose complexity 2^21 is smaller than that of the previously best known attack of Hoch and Shamir (2^26), along with an even faster fault analysis of RC4, based on different ideas, with complexity smaller than 2^16.

  6. Design of fault tolerant control for nonlinear systems subject to time varying faults

    E-print Network

    Boyer, Edmond

    In this paper, a Fault Tolerant Control (FTC) problem for discrete-time nonlinear systems represented by Takagi-Sugeno models is considered. Such models have been widely studied; nevertheless, the FTC problem based on this kind of model is not largely treated. Some

  7. Towards Fault-Tolerant Digital Microfluidic Lab-on-Chip: Defects, Fault Modeling, Testing, and Reconfiguration

    E-print Network

    Chakrabarty, Krishnendu

    Dependability is an important attribute for microfluidic lab-on-chip systems. Defects are related to logical fault models that can be viewed not only

  8. Low-cost motor drive embedded fault diagnosis systems 

    E-print Network

    Akin, Bilal

    2009-05-15

    low-cost incipient fault detection of inverter-fed motors. Basically, low-order inverter harmonics contributions to fault diagnosis, a motor drive embedded condition monitoring method, analysis of motor fault signatures in noisy line current, and a...

  9. Development and Application of Nonlinear PCA for Fault

    E-print Network

    Gorban, Alexander N.

    of fault conditions. Fault diagnosis can be further divided into fault isolation and identification; fault isolation relates to the determin

  10. Intermittent/transient fault phenomena in digital systems

    NASA Technical Reports Server (NTRS)

    Masson, G. M.

    1977-01-01

    An overview of the intermittent/transient (IT) fault study is presented. An interval survivability evaluation of digital systems for IT faults is discussed along with a method for detecting and diagnosing IT faults in digital systems.

  11. Earthquake behavior and structure of oceanic transform faults

    E-print Network

    Roland, Emily Carlson

    2012-01-01

    Oceanic transform faults that accommodate strain at mid-ocean ridge offsets represent a unique environment for studying fault mechanics. Here, I use seismic observations and models to explore how fault structure affects ...

  12. Microgrid Fault Protection Based on Symmetrical and Differential Current Components

    E-print Network

    This report discusses microgrid fault protection based on symmetrical and differential current components; contents include single line-to-ground (SLG) faults, line-to-line (L-to-L) faults, and protection based

  13. Symbolic Dynamic Analysis of Transient Time Series for Fault

    E-print Network

    Ray, Asok

    In performance monitoring of aircraft gas turbine engines, faults (e.g., bearing faults, controller mis-scheduling, and starter system faults) tend to magnify during

  14. DIFFERENTIAL FAULT ANALYSIS ATTACK RESISTANT ARCHITECTURES FOR THE

    E-print Network

    Karpovsky, Mark

    This work concerns the protection of AES against side-channel attacks known as Differential Fault Analysis attacks. The first architecture

  15. A Fault-tolerant RISC Microprocessor for Spacecraft Applications

    NASA Technical Reports Server (NTRS)

    Timoc, Constantin; Benz, Harry

    1990-01-01

    Viewgraphs on a fault-tolerant RISC microprocessor for spacecraft applications are presented. Topics covered include: reduced instruction set computer; fault tolerant registers; fault tolerant ALU; and double rail CMOS logic.

  16. Paleoseismicity of two historically quiescent faults in Australia: Implications for fault behavior in stable continental regions

    USGS Publications Warehouse

    Crone, A.J.; De Martini, P. M.; Machette, M.M.; Okumura, K.; Prescott, J.R.

    2003-01-01

    Paleoseismic studies of two historically aseismic Quaternary faults in Australia confirm that cratonic faults in stable continental regions (SCR) typically have a long-term behavior characterized by episodes of activity separated by quiescent intervals of at least 10,000 and commonly 100,000 years or more. Studies of the approximately 30-km-long Roopena fault in South Australia and the approximately 30-km-long Hyden fault in Western Australia document multiple Quaternary surface-faulting events that are unevenly spaced in time. The episodic clustering of events on cratonic SCR faults may be related to temporal fluctuations of fault-zone fluid pore pressures in a volume of strained crust. The long-term slip rate on cratonic SCR faults is extremely low, so the geomorphic expression of many cratonic SCR faults is subtle, and scarps may be difficult to detect because they are poorly preserved. Both the Roopena and Hyden faults are in areas of limited or no significant seismicity; these and other faults that we have studied indicate that many potentially hazardous SCR faults cannot be recognized solely on the basis of instrumental data or historical earthquakes. Although cratonic SCR faults may appear to be nonhazardous because they have been historically aseismic, those that are favorably oriented for movement in the current stress field can and have produced unexpected damaging earthquakes. Paleoseismic studies of modern and prehistoric SCR faulting events provide the basis for understanding of the long-term behavior of these faults and ultimately contribute to better seismic-hazard assessments.

  17. Discovery of active fault traces based on dip-slip distribution pattern of strike-slip faults; A case study on Kawakami and Okamura faults of the Median Tectonic Line active fault system in Shikoku

    Microsoft Academic Search

    Hideaki GOTO; Takashi NAKATA

    1998-01-01

    We discovered active fault traces on the extension of the Kawakami and Okamura faults of the Median Tectonic Line active fault system in Shikoku, based on a new criterion that strike-slip faults are characterized by the pattern of dip-slip distribution; the upthrown side along strike-slip faults is, in general, located on the fault blocks in the direction of relative strike-slip motion. The

  18. A real-time intelligent multiple fault diagnostic system

    Microsoft Academic Search

    Yong-Hwan Bae; Seok-Hee Lee; Ho-Chan Kim; Byung-Ryong Lee; Jaejin Jang; Jay Lee

    2006-01-01

    Modern manufacturing systems and their failure modes are very complex, and efficient fault diagnosis is essential for higher productivity. However, traditional fault diagnostic systems that perform sequential fault diagnosis can fail during diagnosis when fault propagation is very fast. This paper describes a real-time intelligent multiple fault diagnostic system (RIMFDS). This system deals with multiple fault diagnosis, and is based

  19. INTRODUCTION The importance of blind thrust faults as sources

    E-print Network

    Mueller, Karl

    detachments (Crouch and Suppe, 1993), and may pose significant hazards to coastal California ... and Thirtymile Bank detachments extend south from Laguna Beach and Catalina Island, respectively, to at least ... In migrated seismic reflection profiles, the thrust is imaged as a coherent set of strong reflections that dip to the northeast

  20. Effects of Fault Displacement on Emplacement Drifts

    SciTech Connect

    F. Duan

    2000-04-25

    The purpose of this analysis is to evaluate potential effects of fault displacement on emplacement drifts, including drip shields and waste packages emplaced in emplacement drifts. The output from this analysis not only provides data for the evaluation of long-term drift stability but also supports the Engineered Barrier System (EBS) process model report (PMR) and Disruptive Events Report currently under development. The primary scope of this analysis includes (1) examining fault displacement effects in terms of induced stresses and displacements in the rock mass surrounding an emplacement drift and (2) predicting fault displacement effects on the drip shield and waste package. The magnitude of the fault displacement analyzed in this analysis bounds the mean fault displacement corresponding to an annual frequency of exceedance of 10^-5 adopted for the preclosure period of the repository and also supports the postclosure performance assessment. This analysis is performed following the development plan prepared for analyzing effects of fault displacement on emplacement drifts (CRWMS M&O 2000). The analysis will begin with the identification and preparation of requirements, criteria, and inputs. A literature survey on accommodating fault displacements encountered in underground structures such as buried oil and gas pipelines will be conducted. For a given fault displacement, the least favorable scenario in terms of the spatial relation of a fault to an emplacement drift is chosen, and the analysis is then performed analytically. Based on the analysis results, conclusions are made regarding the effects and consequences of fault displacement on emplacement drifts. Specifically, the analysis will discuss loads which can be induced by fault displacement on emplacement drifts, drip shields, and/or waste packages during the time period of postclosure.
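
    The 10^-5 annual frequency of exceedance mentioned above is normally read off a probabilistic fault displacement hazard curve. The sketch below shows only that bookkeeping step, with made-up curve values and log-log interpolation in Python; it is not the report's analytical model of drift, drip shield, or waste package response.

    import numpy as np

    # Hypothetical hazard curve: annual frequency of exceedance vs displacement (cm)
    freq = np.array([1e-3, 1e-4, 1e-5, 1e-6])
    disp_cm = np.array([0.1, 2.0, 10.0, 30.0])

    def displacement_at(target_freq: float) -> float:
        """Log-log interpolation of displacement at a target exceedance frequency."""
        return float(np.exp(np.interp(np.log(target_freq),
                                      np.log(freq[::-1]), np.log(disp_cm[::-1]))))

    if __name__ == "__main__":
        print(f"design displacement at 1e-5/yr: {displacement_at(1e-5):.1f} cm")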

  1. Landsat TM processing in the investigation of active fault zones, South Lajas Valley Fault Zone and Cerro Goden Fault Zone as an example

    E-print Network

    Gilbes, Fernando

    to active fault zones in western PR. The term "lineaments" is the name given by geologists to lines or edges ... ANTONIO E. CAMERON-GONZÁLEZ, Department of Geology, University
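
    Lineament mapping of this kind typically starts by enhancing linear tonal edges in a TM band. The sketch below is a generic illustration of that first step using a Sobel gradient on a synthetic array (numpy and scipy assumed); the actual processing chain applied to the South Lajas Valley and Cerro Goden scenes is not described in the snippet and is not reproduced here.

    import numpy as np
    from scipy import ndimage

    def edge_magnitude(band: np.ndarray) -> np.ndarray:
        """Sobel gradient magnitude; strong, aligned responses suggest lineaments."""
        gx = ndimage.sobel(band, axis=1, mode="reflect")
        gy = ndimage.sobel(band, axis=0, mode="reflect")
        return np.hypot(gx, gy)

    if __name__ == "__main__":
        band = np.zeros((100, 100))
        band[:, 40:] += 50.0            # synthetic tonal break, like a fault-line scarp
        noisy = band + np.random.default_rng(0).normal(0, 2, band.shape)
        edges = edge_magnitude(noisy)
        print("strongest response at column:", int(edges.sum(axis=0).argmax()))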

  2. Your Mission: (1) Identify 20 active faults in California (2) Identify the direction of fault motion and the slip rate for each fault

    E-print Network

    Smith-Konter, Bridget

    Your Mission: (1) Identify 20 active faults in California; (2) Identify the direction of fault motion and the slip rate for each fault; (3) Investigate recent earthquakes near your hometown; (4) Use Microsoft Excel to plot a small set of earthquake data. Your Supplies: California Faults map handout
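
    For students working outside Excel, an equivalent magnitude-versus-time plot can be made with a few lines of Python (matplotlib assumed); the event values below are invented sample data, not a real catalog.

    import matplotlib.pyplot as plt

    years = [2019, 2020, 2020, 2021, 2022, 2023]        # hypothetical events
    magnitudes = [3.2, 4.1, 2.8, 5.0, 3.6, 4.4]

    plt.scatter(years, magnitudes)
    plt.xlabel("Year")
    plt.ylabel("Magnitude")
    plt.title("Recent earthquakes near your hometown (sample data)")
    plt.savefig("hometown_quakes.png", dpi=150)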

  3. Active faulting in apparently stable peninsular India: rift inversion and a Holocene-age great earthquake on the Tapti Fault

    E-print Network

    Copley, Alex; Mitra, Supriyo; Sloan, R Alastair; Gaonkar, Sharad; Reynolds, Kirsty

    2014-07-23

    We present observations of active faulting within peninsular India, far from the surrounding plate boundaries. Offset alluvial fan surfaces indicate one or more magnitude 7.6–8.4 thrust-faulting earthquakes on the Tapti Fault (Maharashtra, western...

  4. Fault-tolerant software - Experiment with the sift operating system. [Software Implemented Fault Tolerance computer

    NASA Technical Reports Server (NTRS)

    Brunelle, J. E.; Eckhardt, D. E., Jr.

    1985-01-01

    Results are presented of an experiment conducted in the NASA Avionics Integrated Research Laboratory (AIRLAB) to investigate the implementation of fault-tolerant software techniques on fault-tolerant computer architectures, in particular the Software Implemented Fault Tolerance (SIFT) computer. The N-version programming and recovery block techniques were implemented on a portion of the SIFT operating system. The results indicate that, to effectively implement fault-tolerant software design techniques, system requirements will be impacted and suggest that retrofitting fault-tolerant software on existing designs will be inefficient and may require system modification.
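
    For readers unfamiliar with the two techniques named above, the sketch below shows their bare logic in plain Python rather than on SIFT: N-version programming votes across independently written implementations, and a recovery block falls back to an alternate routine when an acceptance test rejects the primary result. The functions are toy examples, not the experiment's avionics code.

    from collections import Counter
    from typing import Callable, Sequence

    def n_version(versions: Sequence[Callable[[float], float]], x: float) -> float:
        """Run N independently written versions and return the majority result."""
        results = [round(v(x), 9) for v in versions]
        value, votes = Counter(results).most_common(1)[0]
        if votes <= len(versions) // 2:
            raise RuntimeError("no majority agreement among versions")
        return value

    def recovery_block(primary, backup, acceptance_test, x):
        """Try the primary; if its result fails the acceptance test, fall back."""
        result = primary(x)
        return result if acceptance_test(x, result) else backup(x)

    if __name__ == "__main__":
        versions = [lambda x: x * x, lambda x: x ** 2, lambda x: x * x + 1e-3]  # last one faulty
        print(n_version(versions, 3.0))                                          # -> 9.0
        print(recovery_block(lambda x: -abs(x), abs, lambda x, r: r >= 0, -4))   # -> 4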

  5. The width of fault zones in a brittle-viscous lithosphere: Strike-slip faults

    NASA Technical Reports Server (NTRS)

    Parmentier, E. M.

    1991-01-01

    A fault zone in an ideal brittle material overlying a very weak substrate could, in principle, consist of a single slip surface. Real fault zones have a finite width consisting of a number of nearly parallel slip surfaces on which deformation is distributed. The hypothesis that the finite width of fault zones reflects stresses due to quasistatic flow in the ductile substrate of a brittle surface layer is explored. Because of the simplicity of theory and observations, strike-slip faults are examined first, but the analysis can be extended to normal and thrust faulting.

  6. Lightning faults on distribution lines

    SciTech Connect

    Parrish, D.E.; Kvaltine, D.J. (CH2M Hill, Gainesville, FL (US))

    1989-10-01

    Until now, power engineers have been unable to quantify electrical system outages and damage caused by lightning. Determining the number of lightning strikes to overhead lines is a necessary first step in evaluating design options for lightning protection systems. Under contract to the Electric Power Research Institute (EPRI), the authors have developed low-cost instrumentation to distinguish faults caused by lightning from those caused by other phenomena. The theories used to develop this coincident lightning events detector (CLED), the experiment design used for testing the CLED, and the test results are discussed. Shielding from nearby structures was found to be a major consideration in assessing the lightning fault rate on distribution lines.

  7. Spatial analysis of hypocenter to fault relationships for determining fault process zone width in Japan.

    SciTech Connect

    Arnold, Bill Walter; Roberts, Barry L.; McKenna, Sean Andrew; Coburn, Timothy C. (Abilene Christian University, Abilene, TX)

    2004-09-01

    Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis.
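
    The core screening computation described above amounts to measuring hypocenter-to-fault distances and checking how event counts fall off with distance from the fault trace. The sketch below illustrates that step in two dimensions with synthetic coordinates (numpy assumed); it is not the NUMO dataset or the study's full spatial-statistical method.

    import numpy as np

    def point_to_segment_km(p, a, b):
        """Distance from point p to the segment a-b (all 2-D coordinates in km)."""
        p, a, b = map(np.asarray, (p, a, b))
        ab, ap = b - a, p - a
        t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
        return float(np.linalg.norm(p - (a + t * ab)))

    if __name__ == "__main__":
        fault = ((0.0, 0.0), (40.0, 0.0))                      # hypothetical fault trace
        rng = np.random.default_rng(1)
        quakes = np.column_stack([rng.uniform(0, 40, 500),
                                  rng.normal(0, 3, 500)])      # events clustered near y = 0
        d = [point_to_segment_km(q, *fault) for q in quakes]
        counts, edges = np.histogram(d, bins=[0, 2, 4, 6, 8, 10, 20])
        for c, lo, hi in zip(counts, edges[:-1], edges[1:]):
            print(f"{lo:>4.0f}-{hi:<4.0f} km: {c} events")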

  8. Physiochemical Evidence of Faulting Processes and Modeling of Fluid in Evolving Fault Systems in Southern California

    SciTech Connect

    Boles, James [Professor

    2013-05-24

    Our study targets recent (Plio-Pleistocene) faults and young (Tertiary) petroleum fields in southern California. Faults include the Refugio Fault in the Transverse Ranges, the Ellwood Fault in the Santa Barbara Channel, and most recently the Newport-Inglewood in the Los Angeles Basin. Subsurface core and tubing scale samples, outcrop samples, well logs, reservoir properties, pore pressures, fluid compositions, and published structural-seismic sections have been used to characterize the tectonic/diagenetic history of the faults. As part of the effort to understand the diagenetic processes within these fault zones, we have studied analogous processes of rapid carbonate precipitation (scaling) in petroleum reservoir tubing and manmade tunnels. From this, we have identified geochemical signatures in carbonate that characterize rapid CO2 degassing. These data provide constraints for finite element models that predict fluid pressures, multiphase flow patterns, rates and patterns of deformation, subsurface temperatures and heat flow, and geochemistry associated with large fault systems.

  9. Fault impacts on solar power unit reliability

    Microsoft Academic Search

    Ali M. Bazzi; Katherine A. Kim; Brian B. Johnson; Philip T. Krein; Alejandro Dominguez-Garcia

    2011-01-01

    This paper introduces a generalized reliability model of a solar power unit (SPU) based on physical characteristics including material, operating conditions, and electrical ratings. An SPU includes a photovoltaic panel, power converter, control and sensing. Possible faults in each component of the unit are surveyed and their failure rates based on physics-of-failure models are formulated. PV panel faults include
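
    Under the usual constant-failure-rate assumption, a series model of the kind the abstract describes combines the component rates into a single unit reliability, R(t) = exp(-sum(lambda_i) * t). The sketch below uses placeholder rates, not the paper's physics-of-failure values.

    import math

    failure_rates_per_hour = {       # hypothetical constant failure rates
        "pv_panel":   1e-7,
        "converter":  5e-6,
        "controller": 2e-6,
        "sensing":    1e-6,
    }

    def unit_reliability(t_hours: float) -> float:
        """R(t) = exp(-sum(lambda_i) * t) for components in series."""
        return math.exp(-sum(failure_rates_per_hour.values()) * t_hours)

    if __name__ == "__main__":
        for years in (1, 5, 10):
            t = years * 8760
            print(f"{years:>2} yr: R = {unit_reliability(t):.3f}")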

  10. Sensor Fault Diagnosis Using Principal Component Analysis 

    E-print Network

    Sharifi, Mahmoudreza

    2010-07-14

    The purpose of this research is to address the problem of fault diagnosis of sensors which measure a set of direct redundant variables. This study proposes: 1. A method for linear sensor fault diagnosis 2. An analysis of isolability and detectability...
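
    A common way to carry out PCA-based sensor fault detection on redundant variables is to fit principal components on healthy data and flag samples whose squared prediction error (Q statistic) is abnormally large. The sketch below illustrates that generic approach on synthetic data; the threshold is a simple percentile rather than the formal chi-square limit, and it is not claimed to be the thesis's exact method.

    import numpy as np

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(500, 2))
    train = latent @ rng.normal(size=(2, 6)) + 0.05 * rng.normal(size=(500, 6))

    mean = train.mean(axis=0)
    _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
    P = Vt[:2].T                                   # retain 2 principal components

    def spe(x: np.ndarray) -> float:
        """Squared prediction error: the residual not explained by the PCA model."""
        r = (x - mean) - P @ (P.T @ (x - mean))
        return float(r @ r)

    threshold = np.percentile([spe(x) for x in train], 99)

    if __name__ == "__main__":
        healthy = train[0]
        faulty = healthy.copy()
        faulty[3] += 5.0                           # bias fault on sensor 3
        print("healthy SPE:", spe(healthy), "fault?", spe(healthy) > threshold)
        print("faulty  SPE:", spe(faulty),  "fault?", spe(faulty)  > threshold)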

  11. Proactive Fault-Recovery in Distributed Systems

    E-print Network

    Narasimhan, Priya

    Proactive Fault-Recovery in Distributed Systems, by Soila M. Pertet. A dissertation submitted ... Abstract: Supporting both real-time and fault-tolerance properties in systems is challenging ... in order to meet task deadlines. However, system failures, which are typically unanticipated events, can

  12. Injection of transient faults using electromagnetic pulses

    E-print Network

    Injection of transient faults using electromagnetic pulses: practical results on a cryptographic ... of magnetic pulses to inject transient faults into the calculations of a RISC micro-controller running the AES ... to setup time violations resulting in the injection of errors. Transient deviations under nominal values

  13. IP Fault Localization Via Risk Modeling

    Microsoft Academic Search

    Ramana Rao Kompella; Jennifer Yates; Albert G. Greenberg; Alex C. Snoeren

    2005-01-01

    Automated, rapid, and effective fault management is a central goal of large operational IP networks. Today's networks suffer from a wide and volatile set of failure modes, where the underlying fault proves difficult to detect and localize, thereby delaying repair. One of the main challenges stems from operational reality: IP routing and the underlying optical fiber plant are
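
    Risk modeling of this kind associates each candidate root cause (for example, a shared fiber span) with the set of IP links it would take down, then looks for a small set of risk groups that explains the observed failures. The sketch below uses a greedy set-cover heuristic over hypothetical risk groups; it is one common formulation, not necessarily the authors' exact algorithm.

    def localize(observed_failures: set, risk_groups: dict) -> list:
        """Greedy cover: repeatedly choose the risk group that explains the most
        still-unexplained failed links."""
        unexplained, hypothesis = set(observed_failures), []
        while unexplained:
            best = max(risk_groups, key=lambda g: len(risk_groups[g] & unexplained))
            if not risk_groups[best] & unexplained:
                break                                  # nothing explains the rest
            hypothesis.append(best)
            unexplained -= risk_groups[best]
        return hypothesis

    if __name__ == "__main__":
        risk_groups = {                                # hypothetical shared-risk groups
            "fiber_A": {"r1-r2", "r1-r3"},
            "fiber_B": {"r2-r4"},
            "fiber_C": {"r3-r4", "r2-r4"},
        }
        print(localize({"r1-r2", "r1-r3", "r2-r4"}, risk_groups))  # ['fiber_A', ...]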

  14. Assumptions for fault tolerant quantum computing

    SciTech Connect

    Knill, E.; Laflamme, R.

    1996-06-01

    Assumptions useful for fault tolerant quantum computing are stated and briefly discussed. We focus on assumptions related to properties of the computational system. The strongest form of the assumptions seems to be sufficient for achieving highly fault tolerant quantum computation. We discuss weakenings which are also likely to suffice.

  15. Fault diagnosis with automata generated languages

    Microsoft Academic Search

    Chuei-Tin Chang; Chung Yang Chen

    2011-01-01

    An SDG-based simulation procedure is proposed in this study to qualitatively predict the effects of one or more faults propagating in a given process system. These predicted state-evolution behaviors are characterized with an automaton model. By selecting a set of on-line sensors, the corresponding diagnoser can be constructed and the diagnosability of every fault origin can be determined accordingly

  16. Dislocations and Stacking Faults in Aluminum Nitride

    Microsoft Academic Search

    P. Delavignette; H. B. Kirkpatrick; S. Amelinckx

    1961-01-01

    Dislocations in thin platelets of aluminum nitride grown from the vapor phase appear to be of two types. Some are dissociated into partials of the Shockley type; others are undissociated. A model is given for both types. The stacking fault associated with the dissociated dislocations consists of one lamella of the sphalerite structure. The stacking fault energy is deduced from

  17. Reliability evaluation based on fuzzy fault tree

    Microsoft Academic Search

    Guo-zhu MAOI; Jia-wei Tu; Hui-bin Du

    2010-01-01

    In conventional fault tree analysis (FTA), some complex and uncertain events such as human errors cannot be handled effectively. Fuzzy fault tree analysis (fuzzy FTA) integrating fuzzy set evaluation and probabilistic estimation is proposed to evaluate vague events. The reliability of water supply subsystem in fire protection systems is analyzed using the proposed approach and the results prove the validity
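
    A minimal sketch of fuzzy FTA with triangular fuzzy probabilities is shown below: each basic event carries a (low, mode, high) triple, AND gates multiply the triples component-wise, and OR gates combine them via 1 - prod(1 - p). The event values are illustrative placeholders, not the paper's water-supply case study.

    from typing import Tuple

    Fuzzy = Tuple[float, float, float]   # (low, mode, high) triangular fuzzy number

    def AND(*events: Fuzzy) -> Fuzzy:
        out = [1.0, 1.0, 1.0]
        for e in events:
            out = [o * p for o, p in zip(out, e)]
        return tuple(out)

    def OR(*events: Fuzzy) -> Fuzzy:
        out = [1.0, 1.0, 1.0]
        for e in events:
            out = [o * (1.0 - p) for o, p in zip(out, e)]
        return tuple(1.0 - o for o in out)

    if __name__ == "__main__":
        pump_fails  = (0.01, 0.02, 0.04)      # hypothetical basic events
        valve_stuck = (0.005, 0.01, 0.02)
        human_error = (0.02, 0.05, 0.10)      # vague event, wide spread
        top = OR(AND(pump_fails, valve_stuck), human_error)
        print("top event probability (low, mode, high):",
              tuple(round(p, 5) for p in top))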

  18. Arc Fault Detection and Discrimination Methods

    Microsoft Academic Search

    Carlos E. Restrepo

    2007-01-01

    Arc waveform characteristics can be evaluated with various methods to recognize the presence of hazardous arc fault conditions. Discussion covers the arc phenomenon and how it is generated in a low-voltage electrical distribution circuit, as well as how to distinguish hazardous conditions from conditions that could falsely mimic the presence of an arc fault. Many waveform

  19. Arcing fault detection using artificial neural networks

    Microsoft Academic Search

    Tarlochan S. Sidhu; Gurdeep Singh; Mohindar S. Sachdev

    1998-01-01

    A technique that detects the presence of arcing faults is presented in this paper. The proposed technique analyzes the radiation produced due to arcing faults. Acoustic, infra-red and radio waves are recorded using appropriate sensors and a DSP-based data acquisition system. The recorded signals are then classified using artificial neural networks. The sensors, data acquisition system and design of the
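
    The classification stage described above can be prototyped with a small feed-forward network once features have been extracted from the acoustic, infra-red and radio recordings. The sketch below uses synthetic feature vectors and scikit-learn's MLPClassifier purely as an illustration; the paper's sensors, features and network design are not reproduced.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    # 3 hypothetical features per record: acoustic energy, IR level, RF burst count
    normal = rng.normal([1.0, 1.0, 2.0], 0.3, size=(200, 3))
    arcing = rng.normal([3.0, 2.5, 8.0], 0.6, size=(200, 3))
    X = np.vstack([normal, arcing])
    y = np.array([0] * 200 + [1] * 200)           # 0 = normal, 1 = arcing fault

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X, y)

    new_record = [[2.8, 2.4, 7.5]]                # hypothetical incoming record
    print("arcing fault detected" if clf.predict(new_record)[0] else "normal")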

  20. The cost of software fault tolerance

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1982-01-01

    The proposed use of software fault tolerance techniques as a means of reducing software costs in avionics and as a means of addressing the issue of system unreliability due to faults in software is examined. A model is developed to provide a view of the relationships among cost, redundancy, and reliability which suggests strategies for software development and maintenance which are not conventional.