Science.gov

Sample records for laguna salada fault

  1. Subsidence History of the Laguna Salada Basin in Northeastern Baja California, Mexico

    NASA Astrophysics Data System (ADS)

    Contreras, J.; Martin-Barajas, A.; Herguera, J.

    2008-12-01

    The Salton Trough region in southern California and the Mexicali valley in northwestern Mexico are areas of (i) rapid subsidence due to transtension along the San Andreas-Imperial fault system, and (ii) high flux of sediments transported by the Colorado River, which together give this region a high potential to preserve a complete record of climatic and tectonic activity. Here we present the subsidence history of the Laguna Salada basin and the history of activity of the master bounding faults on its eastern side. The Laguna Salada is a lacustrine basin located west of the Mexicali valley and to the south of the Salton Trough. Sedimentological and time series analyses performed on two 42 m-long cores drilled in the center of the basin, estimated to span the past 50 and 70 ka BP, indicate a modulation of the late Quaternary stratigraphy by cyclic variations in lake level driven by Milankovitch forcing. Based on these results we derive the long-term history of the basin from a gamma-ray log recovered from a 2.8 km-deep geothermal borehole drilled by the Mexican Power Company adjacent to the Laguna Salada fault. The stratigraphy of the deep borehole reveals a history of activity pulses related to the initial breakage of the Laguna Salada fault and its interaction with neighboring faults. A first pulse started at 1.5 Ma and records the initiation of the Laguna Salada fault and rapid uplift of the crystalline block of the Sierra Cucapa. A second pulse started around 1 Ma and is very likely related to the hard linking of the Laguna Salada fault with the Cañada David detachment by the Cañón Rojo fault. The onset of the Laguna Salada fault at 1.5 Ma appears to be synchronous with an early Pleistocene regional fault reorganization among the San Jacinto, San Andreas and Elsinore fault systems in southern California, suggesting that this reorganization may have affected a large area from San Gorgonio Pass to the northern Gulf of California.

  2. Long-term slip rates of the Elsinore-Laguna Salada fault, southern California, by U-series Dating of Pedogenic Carbonate in Progressively Offset Alluvial fan Remnants.

    NASA Astrophysics Data System (ADS)

    Fletcher, K. E.; Rockwell, T. K.; Sharp, W. D.

    2007-12-01

    The Elsinore-Laguna Salada (ELS) fault is one of the principal strands of the San Andreas fault system in southern California; however, its seismic potential is often de-emphasized due to previous estimates of a low slip rate. Nevertheless, the fault zone has produced two historic earthquakes over M6, with the 1892 event estimated at >M7; thus further investigation of the long-term slip rate on the ELS fault is warranted. On the western slopes of the Coyote Mountains (CM), southwest Imperial Valley, a series of alluvial fans are progressively offset by the Elsinore fault. These fans can be correlated to their source drainages via distinctive clast assemblages, thereby defining measurable offsets on the fault. Dating of the CM fans (to compute slip rates), however, is challenging. Organic materials appropriate for C-14 dating are rare or absent in the arid, oxidizing environment. Cosmogenic surface exposure techniques are limited by the absence of suitable sample materials and are inapplicable to numerous buried fan remnants that are otherwise excellent strain markers. Pedogenic carbonate datable by U-series, however, occurs in CM soil profiles, ubiquitously developed in fan gravels, and is apparent in deposits as young as ~1 ka. In CM gravels tens of ka and older, carbonate forms continuous, dense, yellow coatings up to 3 mm thick on the undersides of clasts. Powdery white carbonate may completely engulf clasts, but is not datable. Carefully selected samples of dense, innermost carbonate laminae weighing tens of milligrams and analyzed by TIMS are geochemically favorable for precise U-series dating (e.g., U = 1-1.5 ppm, median 238U/232Th ~ 7), and yield reproducible ages for coatings from the same microstratigraphic horizon (e.g., 48.2 ± 2.7 and 49.9 ± 2.2 ka), indicating that U-Th systems have remained closed and that inherited coatings, though present, have been avoided. Accordingly, U-series dating of pedogenic carbonate provides reliable minimum ages for deposition of host landforms, thereby facilitating determination of maximum bounds on corresponding slip rates. Results to date show that pedogenic carbonate dating in the CM has a useful range of at least 140 ka; thus progressively offset geomorphic surfaces in the CM study area afford the opportunity to examine the pattern of slip on the Elsinore fault over time scales from circa 10 to >100 ka.
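
    The slip-rate bound implied by this approach is simple arithmetic: an offset divided by a U-series minimum age gives a maximum long-term rate. A minimal sketch in Python, assuming a hypothetical 55 m offset paired with one of the quoted coating ages (the pairing is illustrative, not a measurement from the study):

```python
# Maximum slip-rate bound from a fault offset and a U-series minimum age.
# The offset value below is a hypothetical placeholder, not data from the study.

def max_slip_rate(offset_m, offset_err_m, age_ka, age_err_ka):
    """Return slip rate and 1-sigma error (mm/yr) for an offset (m) and age (ka).

    Because the U-series age is a minimum age for the landform, the
    resulting rate is a maximum bound on the long-term slip rate.
    """
    rate = offset_m / age_ka              # m/ka is numerically equal to mm/yr
    # simple quadrature propagation of relative errors
    rel_err = ((offset_err_m / offset_m) ** 2 + (age_err_ka / age_ka) ** 2) ** 0.5
    return rate, rate * rel_err

# Example: a 55 +/- 5 m offset on a surface with a 48.2 +/- 2.7 ka carbonate age
rate, err = max_slip_rate(55.0, 5.0, 48.2, 2.7)
print(f"maximum slip rate ~ {rate:.2f} +/- {err:.2f} mm/yr")
```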

  3. Long period variability of a High Resolution Hydraulic Balance Proxy for Southern California and Northwestern Mexico: A 50 ka-long Sediment Record from Laguna Salada Basin, Baja California, Mexico

    NASA Astrophysics Data System (ADS)

    Aco-Palestina, A.; Contreras, J.; Martin-Barajas, A.; Herguera, J. C.; Rendon-Marquez, G.

    2005-12-01

    The Salton Trough region of southern California and the Mexicali valley in northwestern Mexico are areas of rapid subsidence due to extension along the San Andreas-Imperial fault system and are being filled by clastic sediments transported by the Colorado River. The relatively high sedimentation rates of these basins give them a high potential to preserve high-resolution climatic information. With this goal in mind, we drilled a 42 m-long core in the center of the Laguna Salada, a lacustrine basin located west of the Mexicali valley and to the south of the Salton Trough. Two 14C dates from plant remnants indicate sedimentation rates are on the order of 1 mm/yr; based on this we estimate the age of the bottom of the core at close to 50 ka. This high sedimentation rate could in principle allow us to reconstruct the climatic variability of this hyperarid region on timescales ranging from centennial and millennial periodicities up to Milankovitch forcing. Here we present data on the first-order changes introduced by the latter, longer periods. Sedimentary facies in the core were identified based on color, granulometry, mineralogical composition and primary structures such as laminations, desiccation cracks, and bioturbation. Additionally, we measured the reflectivity of the sediments every 5 mm to 1 cm, depending on the scale of primary structures. The recovered stratigraphy consists of three sedimentary successions. The lower part of the core is characterized by an alternation of mud and silt laminae varying in thickness between 5 mm and 1 cm, probably deposited during stage 3. This ancient paleolake was dominated by sub-aqueous conditions with a permanent water body year-round. Good preservation of laminae suggests seasonal bottom anoxia. During the last glacial maximum, moisture conditions changed drastically. Laminations are replaced by finely stratified sand and further upsection by repetitive 50 cm-thick packages composed of fine sand, brown mud, greenish silt and mud, and mud, capped with desiccation cracks and laminated gypsum. These sediments were deposited in a continental sabkha environment with intermittent freshwater input, as evidenced by the clear desiccation cycles. The transition to the Holocene climate is characterized by deposition of well-sorted sand with textural properties very similar to those of modern eolian dunes, indicative of extreme hyper-arid conditions. The Holocene, on the other hand, is marked by a return to periodic wet and dry conditions. Present high rates of local precipitation during El Niño years and periodic infillings by occasional discharges from the Colorado River are balanced by high evaporation rates that lead to intermittent desiccation of the lake and yield a characteristic alternation of laminated mud, fine sand, and evaporitic deposits.
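
    The age model behind the ~50 ka estimate is a constant-rate extrapolation from the two 14C tie points. A minimal sketch, assuming hypothetical tie-point depths and ages (the published dates are not given in the abstract):

```python
# Constant-rate age model: extrapolate the core-bottom age from two 14C tie points.
# Tie-point depths and ages below are illustrative, not the published dates.

def basal_age(depth1_m, age1_ka, depth2_m, age2_ka, core_length_m):
    """Linear sedimentation-rate extrapolation to the base of the core."""
    rate_m_per_ka = (depth2_m - depth1_m) / (age2_ka - age1_ka)   # 1 m/ka == 1 mm/yr
    return age2_ka + (core_length_m - depth2_m) / rate_m_per_ka

# e.g. 14C dates of ~8 ka at 8 m and ~20 ka at 20 m imply ~1 m/ka, giving a
# basal age near 42 ka for a 42 m core, consistent with the ~50 ka estimate.
print(f"estimated basal age ~ {basal_age(8, 8, 20, 20, 42):.0f} ka")
```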

  4. The SCEC 3D Community Fault Model (CFM-v5): An updated and expanded fault set of oblique crustal deformation and complex fault interaction for southern California

    NASA Astrophysics Data System (ADS)

    Nicholson, C.; Plesch, A.; Sorlien, C. C.; Shaw, J. H.; Hauksson, E.

    2014-12-01

    Southern California represents an ideal natural laboratory to investigate oblique deformation in 3D owing to its comprehensive datasets, complex tectonic history, evolving components of oblique slip, and continued crustal rotations about horizontal and vertical axes. As the SCEC Community Fault Model (CFM) aims to accurately reflect this 3D deformation, we present the results of an extensive update to the model by using primarily detailed fault trace, seismic reflection, relocated hypocenter and focal mechanism nodal plane data to generate improved, more realistic digital 3D fault surfaces. The results document a wide variety of oblique strain accommodation, including various aspects of strain partitioning and fault-related folding, sets of both high-angle and low-angle faults that mutually interact, significant non-planar, multi-stranded faults with variable dip along strike and with depth, and active mid-crustal detachments. In places, closely-spaced fault strands or fault systems can remain surprisingly subparallel to seismogenic depths, while in other areas, major strike-slip to oblique-slip faults can merge, such as the S-dipping Arroyo Parida-Mission Ridge and Santa Ynez faults with the N-dipping North Channel-Pitas Point-Red Mountain fault system, or diverge with depth. Examples of the latter include the steep-to-west-dipping Laguna Salada-Indiviso faults with the steep-to-east-dipping Sierra Cucapah faults, and the steep southern San Andreas fault with the adjacent NE-dipping Mecca Hills-Hidden Springs fault system. In addition, overprinting by steep predominantly strike-slip faulting can segment which parts of intersecting inherited low-angle faults are reactivated, or result in mutual cross-cutting relationships. The updated CFM 3D fault surfaces thus help characterize a more complex pattern of fault interactions at depth between various fault sets and linked fault systems, and a more complex fault geometry than typically inferred or expected from projecting near-surface data down-dip, or modeled from surface strain and potential field data alone.

  5. The Pueblo of Laguna.

    ERIC Educational Resources Information Center

    Lockart, Barbetta L.

    Proximity to urban areas, a high employment rate, development of natural resources and high academic achievement are all serving to bring Laguna Pueblo to a period of rapid change on the reservation. While working to realize its potential in the areas of natural resources, commercialism and education, the Pueblo must also confront the problems of…

  6. 'Laguna Hollow' Undisturbed

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This image shows the patch of soil at the bottom of the shallow depression dubbed 'Laguna Hollow' where the Mars Exploration Rover Spirit will soon begin trenching. Scientists are intrigued by the clustering of small pebbles and the crack-like fine lines, which indicate a coherent surface that expands and contracts. A number of processes can cause materials to expand and contract, including cycles of heating and cooling; freezing and thawing; and rising and falling of salty liquids within a substance. This false-color image was created using the blue, green and infrared filters of the rover's panoramic camera. Scientists chose this particular combination of filters to enhance the heterogeneity of the martian soil.

  7. The LAGUNA-LBNO Project

    NASA Astrophysics Data System (ADS)

    Avanzini, Margherita Buizza

    LAGUNA-LBNO is a Design Study funded by the European Commission to develop the design of a large and deep underground neutrino observatory; its physics program involves the study of neutrino oscillations at long baselines, the investigation of the Grand Unification of elementary forces and the detection of neutrinos from astrophysical sources. Building on the successful format and on the findings of the previous LAGUNA Design Study, LAGUNA-LBNO is more focused and is specifically considering Long Baseline Neutrino Oscillations (LBNO) with neutrino beams from CERN. Two sites, Fréjus (in France at 130 km) and Pyhäsalmi (in Finland at 2300 km), are being considered. Three different detector technologies are being studied: Water Cherenkov, Liquid Scintillator and Liquid Argon. Recently the LAGUNA-LBNO consortium has submitted an Expression of Interest for a very long baseline neutrino experiment, selecting as a first priority the option of a Liquid Argon detector at Pyhäsalmi. Detailed physics-potential studies have been carried out for the determination of the neutrino mass hierarchy and the discovery of CP violation, using a conventional neutrino beam from the CERN SPS with a power of 750 kW.

  8. Present-day loading rate of faults in southern California and northern Baja California, Mexico, and post-seismic deformation following the M7.2 April 4, 2010, El Mayor-Cucapah earthquake from GPS Geodesy

    NASA Astrophysics Data System (ADS)

    Spinler, J. C.; Bennett, R. A.

    2012-12-01

    We use 142 GPS velocity estimates from the SCEC Crustal Motion Map 4 and 59 GPS velocity estimates from additional sites to model the crustal velocity field of southern California, USA, and northern Baja California, Mexico, prior to the 2010 April 4 Mw 7.2 El Mayor-Cucapah (EMC) earthquake. The EMC earthquake is the largest event to occur along the southern San Andreas fault system in nearly two decades. In the year following the EMC earthquake, the EarthScope Plate Boundary Observatory (PBO) constructed eight new continuous GPS sites in northern Baja California, Mexico. We used our velocity model, which represents the period before the EMC earthquake, to assess postseismic velocity changes at the new PBO sites. Time series from the new PBO sites, which were constructed 4-18 months following the earthquake, do not exhibit obvious exponential or logarithmic decay, showing instead fairly secular trends through the period of our analysis (2010.8-2012.5). The weighted RMS misfit to secular rates, accounting for periodic site motions, is typically around 1.7 mm/yr, indicating high positioning precision and fairly linear site motion. Results of our research include new fault slip rate estimates for the greater San Andreas fault system, including model faults representing the Cerro Prieto (39.0±0.1 mm/yr), Imperial (35.7±0.1 mm/yr), and southernmost San Andreas (24.7±0.1 mm/yr), generally consistent with previous geodetic studies within the region. Velocity changes at the new PBO sites associated with the EMC earthquake are in the range 1.7±0.3 to 9.2±2.6 mm/yr. The maximum rate difference is found in Mexicali Valley, close to the rupture. Rate changes decay systematically with distance from the EMC epicenter and velocity orientations exhibit a butterfly pattern as expected from a strike-slip earthquake. Sites to the south and southwest of the Baja California shear zone are moving more rapidly to the northwest relative to their motions prior to the earthquake. Sites to the west of the Laguna Salada fault zone are moving more westerly. Sites to the east of the EMC rupture move more southerly than prior to the EMC earthquake. Continued monitoring of these velocity changes will allow us to differentiate between lower crustal and upper mantle relaxation processes.
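
    The quoted weighted RMS misfit corresponds to fitting each position time series with a secular trend plus seasonal terms and measuring the weighted scatter of the residuals. A sketch on synthetic daily positions (the model form and all numbers are assumptions, not the authors' processing):

```python
import numpy as np

# Weighted RMS misfit of a GPS coordinate time series to a secular-plus-seasonal
# model (linear trend + annual and semiannual sinusoids). Synthetic data only;
# this is just the usual definition of the statistic, not the CMM4 processing.

def wrms_misfit(t_yr, pos_mm, sigma_mm):
    """Fit trend + seasonal terms by weighted least squares; return residual WRMS (mm)."""
    w = 1.0 / sigma_mm**2
    tc = t_yr - t_yr.mean()                                   # center time for conditioning
    A = np.column_stack([
        np.ones_like(tc), tc,
        np.sin(2 * np.pi * t_yr), np.cos(2 * np.pi * t_yr),   # annual terms
        np.sin(4 * np.pi * t_yr), np.cos(4 * np.pi * t_yr),   # semiannual terms
    ])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], pos_mm * sw, rcond=None)
    resid = pos_mm - A @ coef
    return np.sqrt(np.sum(w * resid**2) / np.sum(w))

# synthetic daily series: 20 mm/yr secular rate, 2 mm annual term, 1.5 mm noise
t = np.arange(2010.8, 2012.5, 1.0 / 365.25)
rng = np.random.default_rng(0)
pos = 20.0 * (t - t[0]) + 2.0 * np.sin(2 * np.pi * t) + rng.normal(0.0, 1.5, t.size)
print(f"WRMS of residuals ~ {wrms_misfit(t, pos, np.full(t.size, 1.5)):.1f} mm")
```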

  9. Tectonic forcing of shelf-ramp depositional architecture, Laguna Madre-Tuxpan Shelf, western Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Wawrzyniec, Tim F.; Ambrose, W.; Aranda-Garcia, M.; Romano, U. H.

    2004-07-01

    Analysis of seismic reflection data reveals the existence of a major listric fault that accommodates most of the Neogene extension of the Laguna Madre-Tuxpan shelf of the western Gulf of Mexico. The variation of related growth strata, the profile of the modern shelf-slope transition, the linear gradient of shelf extension (as well as basin accommodation) along the trace of the fault support a hypothesis that sediment loading along the northern part of the fault drives fault motion and influences sediment distribution along the southern end of the fault. In particular, where kinematic accommodation appears to outpace sediment supply, sedimentation is maximized along a shelf-ramp system and not the shelf-slope transition.

  10. LAGUNA DESIGN STUDY, Underground infrastructures and engineering

    NASA Astrophysics Data System (ADS)

    Nuijten, Guido Alexander

    2011-07-01

    The European Commission awarded the LAGUNA project a grant of 1.7 million euro for a Design Study under the seventh framework programme of research and technology development (FP7-INFRASTRUCTURES-2007-1) in 2008. The purpose of this two-year work is to study the feasibility of the considered experiments and to prepare a conceptual design of the required underground infrastructure. It is due to deliver a report that allows the funding agencies to decide on the realization of the experiment and to select the site and the technology. The result of this work is the first step towards fulfilling the goals of LAGUNA, and the work will continue with EU funding to study the possibilities more thoroughly. The LAGUNA project is included in the future plans prepared by European funding organizations (Astroparticle Physics in Europe). It is recommended that a new large European infrastructure be put forward as a future international multi-purpose facility for improved studies of proton decay and of low-energy neutrinos of astrophysical origin. The three detection techniques being studied for such large detectors in Europe, water Cherenkov (like MEMPHYS), liquid scintillator (like LENA) and liquid argon (like GLACIER), are evaluated in the context of a common design study which should also address the underground infrastructure and the possibility of an eventual detection of future accelerator neutrino beams. The design study is also to take into account worldwide efforts and to converge, on a time scale of 2010, to a common proposal.

  11. Faulted Barn

    USGS Multimedia Gallery

    This barn is faulted through the middle; the moletrack is seen in the foreground with the viewer standing on the fault. From the air one can see metal roof panels of the barn that rotated as the barn was faulted....

  12. Santa Fe Indian Camp, House 21, Richmond, California: Persistence of Identity among Laguna Pueblo Railroad Laborers, 1945-1982.

    ERIC Educational Resources Information Center

    Peters, Kurt

    1995-01-01

    In 1880 the Laguna people and the predecessor of the Atchison, Topeka, and Santa Fe Railroad reached an agreement giving the railroad unhindered right-of-way through Laguna lands in exchange for Laguna employment "forever." Discusses the Laguna-railroad relationship through 1982, Laguna labor camps in California, and the persistence of Laguna…

  13. Field reconnaissance of the effects of the earthquake of April 13, 1973, near Laguna de Arenal, Costa Rica

    USGS Publications Warehouse

    Plafker, George

    1973-01-01

    At about 3:34 a.m. on April 13, 1973, a moderate-sized, but widely felt, earthquake caused extensive damage with loss of 23 lives in a rural area of about 150 km2 centered just south of Laguna de Arenal in northwestern Costa Rica (fig. 1). This report summarizes the results of the writer's reconnaissance investigation of the area that was affected by the earthquake of April 13, 1973. A 4-day field study of the meizoseismal area was carried out during the period from April 28 through May 1 under the auspices of the U.S. Geological Survey. The primary objective of this study was to evaluate geologic factors that contributed to the damage and loss of life. The earthquake was also of special interest because of the possibility that it was accompanied by surface faulting comparable to that which occurred at Managua, Nicaragua, during the disastrous earthquake of December 23, 1972 (Brown, Ward, and Plafker, 1973). Such earthquake-related surface faulting can provide scientifically valuable information on active tectonic processes at shallow depths within the Middle America arc. Also, identification of active faults in this area is of considerable practical importance because of the planned construction of a major hydroelectric facility within the meizoseismal area by the Instituto Costarricense de Electricidad (I.C.E.). The project would involve creation of a storage reservoir within the Laguna de Arenal basin and part of the Río Arenal valley with a 75 m-high earthfill dam across the Río Arenal at a point about 10 km east of the outlet of Laguna de Arenal.

  14. Limnology of Laguna Tortuguero, Puerto Rico

    USGS Publications Warehouse

    Quinones-Marquez, Ferdinand; Fuste, Luis A.

    1978-01-01

    The principal chemical, physical and biological characteristics, and the hydrology of Laguna Tortuguero, Puerto Rico, were studied from 1974-75. The lagoon, with an area of 2.24 square kilometers and a volume of about 2.68 million cubic meters, contains about 5 percent seawater. Drainage through a canal on the north side averages 0.64 cubic meters per second, flushing the lagoon about 7.5 times per year. Chloride and sodium are the principal ions in the water, ranging from 300 to 700 mg/liter and 150 to 400 mg/liter, respectively. Among the nutrients, nitrogen averages about 1.7 mg/liter, exceeding phosphorus in a weight ratio of 170:1. About 10 percent of the nitrogen and 40 percent of the phosphorus entering the lagoon is retained. The bottom sediments, with a volume of about 4.5 million cubic meters, average 0.8 and 0.014 percent nitrogen and phosphorus, respectively. (Woodard-USGS)
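
    The stated flushing rate follows directly from the volume and outflow figures, reading the drainage value as a mean discharge of 0.64 m3/s (an interpretation of the abstract's wording):

```python
# Quick consistency check of the stated flushing rate, interpreting the canal
# drainage figure as a mean discharge of 0.64 m^3/s (values from the abstract).

volume_m3 = 2.68e6            # lagoon volume
discharge_m3_per_s = 0.64     # mean outflow through the north canal

annual_outflow_m3 = discharge_m3_per_s * 86_400 * 365
flushings_per_year = annual_outflow_m3 / volume_m3
print(f"annual outflow ~ {annual_outflow_m3:.2e} m^3")
print(f"flushing rate  ~ {flushings_per_year:.1f} times per year")  # ~7.5
```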

  15. Fault finder

    DOEpatents

    Bunch, Richard H. (1614 NW. 106th St., Vancouver, WA 98665)

    1986-01-01

    A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
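
    The abstract does not spell out the distance calculation; a standard relation for two time-synchronized terminals is the double-ended traveling-wave formula sketched below (the surge speed and example timing are assumptions for illustration, not values from the patent):

```python
# Double-ended fault location from the arrival-time difference at two
# synchronized units, a standard traveling-wave relation. The patent's exact
# scheme is not detailed in the abstract, so this is only an illustrative sketch.

def fault_distance_km(line_length_km, t_master_s, t_remote_s,
                      wave_speed_km_per_s=2.9e5):
    """Distance from the master unit to the fault.

    A surge from the fault reaches the master at t_master and the remote at
    t_remote; with synchronized clocks, d = (L + v*(t_master - t_remote)) / 2.
    """
    return 0.5 * (line_length_km + wave_speed_km_per_s * (t_master_s - t_remote_s))

# Example: 100 km line, surge arrives 69 microseconds earlier at the master unit
print(f"fault ~ {fault_distance_km(100.0, 0.0, 69e-6):.1f} km from the master unit")
```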

  16. Hydrology of Laguna Joyuda, Puerto Rico

    USGS Publications Warehouse

    Santiago-Rivera, Luis; Quinones-Aponte, Vicente

    1995-01-01

    A study was conducted by the U.S. Geological Survey to define the hydraulic and hydrologic characteristics of the Laguna Joyuda system (in southwestern Puerto Rico) and to determine the water budget of the lagoon. This shallow-water lagoon is connected to the sea by a single canal. Rainfall and evaporation, surface-water, groundwater, and tidal-flow data were collected from December 1, 1985, to April 30, 1988. A conceptual hydrologic model of the lagoon was developed and discharge measurements and modeling were undertaken to quantify the different flow components. The water balance during the 29-month study period was determined by measuring and estimating the different hydrologic components: 4.14 million cubic meters rainfall; 5.38 million cubic meters evaporation; 1.18 million cubic meters surface water; and 0.34 million cubic meters ground water. A total of 18.9 million cubic meters ebb flow (tidal outflow) was discharged from the lagoon and 14.4 million cubic meters flood flow (tidal inflow) entered through the canal during the study. Seawater inflow accounted for 71 percent of the water into the lagoon. The storage volume of the lagoon was about 1.55 million cubic meters. The lagoon's hydrologic-budget residual was 4.22 million cubic meters, whereas the sum of the estimated errors for the different hydrologic components amounted to 4.51 million cubic meters. The average flushing rate for the lagoon was estimated at 72 days. During the study, the specific conductance of the lagoon water ranged from 32,000 to 52,000 microsiemens per centimeter at 25 degrees Celsius, whereas the specific conductance of local seawater is about 45,000 to 55,000 microsiemens per centimeter.
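
    The quoted residual is the imbalance between the measured inflows and outflows; the sketch below reproduces it from the component volumes listed in the abstract:

```python
# Reproducing the stated hydrologic-budget residual from the abstract's
# component volumes (all in millions of cubic meters over the 29-month study).

inflows  = {"rainfall": 4.14, "surface water": 1.18,
            "ground water": 0.34, "tidal flood flow": 14.4}
outflows = {"evaporation": 5.38, "tidal ebb flow": 18.9}

residual = sum(inflows.values()) - sum(outflows.values())
print(f"budget residual ~ {residual:+.2f} million m^3")        # about -4.22

seawater_fraction = inflows["tidal flood flow"] / sum(inflows.values())
print(f"seawater share of inflow ~ {seawater_fraction:.0%}")   # close to the quoted 71 percent
```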

  17. Molecular Epidemiology of Laguna Negra Virus, Mato Grosso State, Brazil

    PubMed Central

    Travassos da Rosa, Elizabeth S.; Medeiros, Daniele B.A.; Nunes, Márcio R.T.; Simith, Darlene B.; Pereira, Armando de S.; Elkhoury, Mauro R.; Santos, Elizabeth Davi; Lavocat, Marília; Marques, Aparecido A.; Via, Alba V.G.; Kohl, Vânia A.; Terças, Ana C.P.; D'Andrea, Paulo; Bonvícino, Cibele R.; Sampaio de Lemos, Elba R.

    2012-01-01

    We associated Laguna Negra virus with hantavirus pulmonary syndrome (HPS) in Mato Grosso State, Brazil, and a previously unidentified potential host, the Calomys callidus rodent. Genetic testing revealed homologous sequences in specimens from 20 humans and 8 mice. Further epidemiologic studies may lead to control of HPS in Mato Grosso State. PMID:22607717

  18. A survey for avian influenza from gulls on the coasts of the District of Pinamar and the Lagoon Salada Grande, General Madariaga, Argentina.

    PubMed

    Buscaglia, Celina

    2012-12-01

    In the present study, fecal samples obtained from kelp gulls (Larus dominicanus), brown-hooded gulls (Larus maculipennis), and Olrog's gulls (Larus atlanticus) on the coast of the District of Pinamar, and grey-hooded gulls (Larus cirrocephalus) on the coast of the Lagoon Salada Grande and surrounding wetlands, General Madariaga, Buenos Aires Province, Argentina, were tested for evidence of avian influenza virus over a period of 3 yr. This surveillance in free-living wild birds in the Buenos Aires Province started in October 2008. Additional samples, which included cloacal swabs, tracheal swabs, or pooled organs, were obtained from sick or dead gulls that arrived at the Fundación Ecológica Pinamar or were provided by the Dirección de Seguridad en Playas, Municipalidad de Pinamar. Samples were pooled according to date, species, and area. Pooled samples were inoculated in 9- to 11-day-old eggs, and after 5 days, allantoic fluids were tested for evidence of hemagglutination. None of the samples was positive for avian influenza viruses. PMID:23402129

  19. Shallow Landslide Assessment using SINMAP in Laguna, Philippines

    NASA Astrophysics Data System (ADS)

    Bonus, A. A. B.; Rabonza, M. L.; Alemania, M. K. B.; Alejandrino, I. K.; Ybanez, R. L.; Lagmay, A. M. A.

    2014-12-01

    Due to the tectonic environment and tropical climate of the Philippines, both rain-induced and seismically induced landslides are common in the country. Numerous hazard mapping activities are regularly conducted by both academic and government institutions using various tools and software. One such software package is Stability Index Mapping (SINMAP), a terrain stability mapping tool applied to shallow translational landslide phenomena controlled by shallow groundwater flow convergence. SINMAP modelling combines a slope stability model with a steady-state hydrology model to delineate areas prone to shallow landslides. DOST Project NOAH, one of the hazard-mapping initiatives of the government, aims to map all landslide hazards in the Philippines using both computer models and validating ground data. Laguna, located on the island of Luzon, is one such area where mapping and modelling is conducted. SINMAP modelling of the Laguna area was run with a 5-meter Interferometric Synthetic Aperture Radar (IFSAR)-derived digital terrain model (DTM). Topographic, soil-strength and physical hydrologic parameters, which include cohesion, angle of friction, bulk density and hydraulic conductivity, were assigned to each pixel of a given DTM grid to compute the corresponding factor of safety. The landslide hazard map generated using SINMAP shows that 2% of the total land area is highly susceptible, in Santa Maria, Famy, Siniloan, Pangil, Pakil and Los Baños, Laguna, and 10% is moderately susceptible, in the eastern parts of Laguna. The data derived from the model are consistent with both ground validation surveys and landslide inventories derived from high-resolution satellite imagery from 2003 to 2013. These combined model and field data are useful for identifying no-build zones and for supporting the monitoring activities of local government units and other concerned agencies. This provides a reasonable delineation of hazard zones for shallow-landslide-susceptible areas of Laguna, Philippines.
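
    SINMAP's stability index is built on the infinite-slope factor of safety coupled to a steady-state topographic wetness term. A minimal sketch of that relation, with illustrative parameter values rather than the calibrated ones used for the Laguna hazard map:

```python
import math

# Infinite-slope factor of safety of the kind SINMAP couples to a steady-state
# wetness index. Parameter values below are illustrative only.

def infinite_slope_fs(slope_deg, spec_catchment_area_m, r_over_t_per_m,
                      dimensionless_cohesion=0.1, phi_deg=30.0, rho_ratio=0.5):
    """Factor of safety for one DTM cell.

    dimensionless_cohesion = (root + soil cohesion) / (soil depth * rho_s * g)
    rho_ratio              = water density / wet soil density
    r_over_t_per_m         = steady recharge / soil transmissivity
    """
    theta = math.radians(slope_deg)
    wetness = min(r_over_t_per_m * spec_catchment_area_m / math.sin(theta), 1.0)
    numerator = (dimensionless_cohesion
                 + math.cos(theta) * (1.0 - wetness * rho_ratio)
                 * math.tan(math.radians(phi_deg)))
    return numerator / math.sin(theta)

# A steep, convergent cell (large contributing area) versus a gentle one
print(f"steep cell  FS ~ {infinite_slope_fs(35, 500, 0.002):.2f}")   # < 1, unstable
print(f"gentle cell FS ~ {infinite_slope_fs(10, 50, 0.002):.2f}")    # > 1, stable
```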

  20. Fault diagnosis

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed based on those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Second, Draphys reasons about behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to examine pilot mental models of the aircraft subsystems and their use in diagnosis tasks. Future research plans include piloted simulation evaluation of the diagnosis decision aiding concepts and crew interface issues. Information is given in viewgraph form.

  1. Fault mechanics

    SciTech Connect

    Segall, P.

    1991-01-01

    Recent observational, experimental, and theoretical modeling studies of fault mechanics are discussed in a critical review of U.S. research from the period 1987-1990. Topics examined include interseismic strain accumulation, coseismic deformation, postseismic deformation, and the earthquake cycle; long-term deformation; fault friction and the instability mechanism; pore pressure and normal stress effects; instability models; strain measurements prior to earthquakes; stochastic modeling of earthquakes; and deep-focus earthquakes. Maps, graphs, and a comprehensive bibliography are provided. 220 refs.

  2. A geophysical and geological study of Laguna de Ayarza, a Guatemalan caldera lake

    USGS Publications Warehouse

    Poppe, L.J.; Paull, C.K.; Newhall, C.G.; Bradbury, J.P.; Ziagos, J.

    1985-01-01

    Geologic and geophysical data from Laguna de Ayarza, a figure-8-shaped double-caldera lake in the Guatemalan highlands, show no evidence of postcaldera eruptive or tectonic activity. The bathymetry of the lake has evolved as a result of sedimentary infilling. The western caldera is steep-sided and contains a large flat-floored central basin 240 m deep. The smaller, older, eastern caldera is mostly filled by coalescing delta fans and is connected with the larger caldera by means of a deep channel. Seismic-reflection data indicate that at least 170 m of flat-lying unfaulted sediments partly fill the central basin and that the strata of the pre-eruption edifice have collapsed partly along inward-dipping ring faults and partly by more chaotic collapses. These sediments have accumulated in the last 23,000 years at a minimum average sedimentation rate of 7 m per 1,000 yr. The upper 9 m of these sediments is composed of >50% turbidites, interbedded with laminated clayey silts containing separate diatom and ash layers. The bottom sediments have >1% organic material, an average of 4% pyrite, and abundant biogenic gas, all of which demonstrate that the bottom sediments are anoxic. Although thin (<0.5 cm) ash horizons are common, only one thick (7-16 cm) primary ash horizon could be identified in piston cores. Alterations in the mineralogy and variations in the diatom assemblage suggest magnesium-rich hydrothermal activity. © 1985.
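
    The minimum rate quoted follows from the fill thickness and the time window:

```python
# Minimum average sedimentation rate implied by the seismic-reflection data:
# at least 170 m of basin fill accumulated within the last 23,000 years.
fill_m, window_yr = 170.0, 23_000.0
rate = fill_m / (window_yr / 1000.0)
print(f"minimum average rate ~ {rate:.1f} m per 1,000 yr")   # ~7.4, i.e. at least 7
```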

  3. 75 FR 74073 - Laguna Atascosa National Wildlife Refuge, Cameron and Willacy Counties, TX; Final Comprehensive...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-30

    ..., advancement, management, conservation and protection of fish and wildlife resources'' (Fish and Wildlife Act... ``Laguna Atascosa final CCP'' in the subject line of the message. Mail: Mark Sprick, AICP, Natural Resource... Register July 19, 2004 (69 FR 43010). Laguna Atascosa NWR is located in Cameron and Willacy Counties,...

  4. 78 FR 57545 - Proposed Establishment of Class D Airspace and Class E Airspace; Laguna AAF, AZ

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-19

    ... Proposed Establishment of Class D Airspace and Class E Airspace; Laguna AAF, AZ AGENCY: Federal Aviation... establish Class D airspace and Class E airspace at Laguna Army Air Field (AAF), (Yuma Proving Ground), Yuma... Regulations (14 CFR) part 71 by establishing Class D airspace from the surface to and including 1,700 feet...

  5. Possibilities For The LAGUNA Projects At The Frejus Site

    SciTech Connect

    Mosca, Luigi

    2010-11-24

    The present laboratory (LSM) at the Frejus site and the project of a first extension of it, mainly aimed at the next generation of dark matter and double beta decay experiments, are briefly reviewed. Then the main characteristics of the LAGUNA cooperation and Design Study network are summarized. Seven underground sites in Europe are considered in LAGUNA and are under study as candidates for the installation of Megaton-scale detectors using three different techniques: a liquid argon TPC (GLACIER), a liquid scintillator detector (LENA) and a water Cherenkov detector (MEMPHYS), all mainly aimed at the investigation of proton decay and of the properties of neutrinos from supernovae and other astrophysical sources as well as from accelerators (Super-beams and/or Beta-beams from CERN). One of the seven sites is located at Frejus, near the present LSM laboratory, and the results of its feasibility study are presented and discussed. Then the physics potential of a MEMPHYS detector installed at this site is emphasized both for non-accelerator and for neutrino-beam-based configurations. The MEMPHYNO prototype with its R and D programme is presented. Finally a possible schedule is sketched.

  6. About a Gadolinium-doped Water Cherenkov LAGUNA Detector

    NASA Astrophysics Data System (ADS)

    Labarga, Luis

    2010-11-01

    Water Cherenkov (wC) detectors are extremely powerful apparatuses for scientific research. Nevertheless they lack neutron tagging capabilities, which translates, mainly, into an inability to identify the anti-matter nature of reacting incoming anti-neutrinos. A solution was proposed by J. Beacom and M. Vagins back in 2004: by dissolving in the water a compound whose nuclei have a very large cross section for neutron capture, such as gadolinium, with a corresponding emission of photons of enough energy to be detected, thermal neutrons can be tagged with an efficiency larger than 80%. In this talk we detail the technique and its implications for the measurement capabilities, as well as the new backgrounds induced. We discuss the improvement of the physics program, also for the case of LAGUNA-type detectors. We comment briefly on the status of the pioneering R&D program of the Super-Kamiokande Collaboration towards dissolving a gadolinium compound in its water.

  7. About a Gadolinium-doped Water Cherenkov LAGUNA Detector

    SciTech Connect

    Labarga, Luis

    2010-11-24

    Water Cherenkov (wC) detectors are extremely powerful apparatuses for scientific research. Nevertheless they lack neutron tagging capabilities, which translates, mainly, into an inability to identify the anti-matter nature of reacting incoming anti-neutrinos. A solution was proposed by J. Beacom and M. Vagins back in 2004: by dissolving in the water a compound whose nuclei have a very large cross section for neutron capture, such as gadolinium, with a corresponding emission of photons of enough energy to be detected, thermal neutrons can be tagged with an efficiency larger than 80%. In this talk we detail the technique and its implications for the measurement capabilities, as well as the new backgrounds induced. We discuss the improvement of the physics program, also for the case of LAGUNA-type detectors. We comment briefly on the status of the pioneering R and D program of the Super-Kamiokande Collaboration towards dissolving a gadolinium compound in its water.

  8. Hatching success of Caspian terns nesting in the lower Laguna Madre, Texas, USA

    USGS Publications Warehouse

    Mitchell, C.A.; Custer, T.W.

    1986-01-01

    The average clutch size of Caspian Terns nesting in a colony in the Lower Laguna Madre near Laguna Vista, Texas, USA in 1984 was 1.9 eggs per nest. Using the Mayfield method for calculating success, at least one egg hatched in 84.1% of the nests and 69.8% of the eggs laid hatched. These hatching estimates are as high as or higher than estimates from colonies in other areas.
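
    For readers unfamiliar with it, the Mayfield method estimates a constant daily survival rate from exposure-days and raises it to the length of the nesting period. A sketch with hypothetical exposure-day counts and an assumed ~26-day incubation period (not the observed Caspian Tern data):

```python
# Mayfield estimator of nest success: a constant daily survival rate raised to
# the length of the exposure period. Exposure-day counts and incubation length
# below are hypothetical, not the observed data from this colony.

def mayfield_success(exposure_days, failures, period_days):
    """Probability a nest survives the whole period under constant daily survival."""
    daily_survival = 1.0 - failures / exposure_days
    return daily_survival ** period_days

# e.g. 1500 nest exposure-days, 10 failures, ~26-day incubation period
print(f"estimated nest success ~ {mayfield_success(1500, 10, 26):.1%}")
```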

  9. Flight elements: Fault detection and fault management

    NASA Technical Reports Server (NTRS)

    Lum, H.; Patterson-Hine, A.; Edge, J. T.; Lawler, D.

    1990-01-01

    Fault management for an intelligent computational system must be developed using a top down integrated engineering approach. An approach proposed includes integrating the overall environment involving sensors and their associated data; design knowledge capture; operations; fault detection, identification, and reconfiguration; testability; causal models including digraph matrix analysis; and overall performance impacts on the hardware and software architecture. Implementation of the concept to achieve a real time intelligent fault detection and management system will be accomplished via the implementation of several objectives, which are: Development of fault tolerant/FDIR requirement and specification from a systems level which will carry through from conceptual design through implementation and mission operations; Implementation of monitoring, diagnosis, and reconfiguration at all system levels providing fault isolation and system integration; Optimize system operations to manage degraded system performance through system integration; and Lower development and operations costs through the implementation of an intelligent real time fault detection and fault management system and an information management system.

  10. Fault zone fabric and fault weakness.

    PubMed

    Collettini, Cristiano; Niemeijer, André; Viti, Cecilia; Marone, Chris

    2009-12-17

    Geological and geophysical evidence suggests that some crustal faults are weak compared to laboratory measurements of frictional strength. Explanations for fault weakness include the presence of weak minerals, high fluid pressures within the fault core and dynamic processes such as normal stress reduction, acoustic fluidization or extreme weakening at high slip velocity. Dynamic weakening mechanisms can explain some observations; however, creep and aseismic slip are thought to occur on weak faults, and quasi-static weakening mechanisms are required to initiate frictional slip on mis-oriented faults, at high angles to the tectonic stress field. Moreover, the maintenance of high fluid pressures requires specialized conditions and weak mineral phases are not present in sufficient abundance to satisfy weak fault models, so weak faults remain largely unexplained. Here we provide laboratory evidence for a brittle, frictional weakening mechanism based on common fault zone fabrics. We report on the frictional strength of intact fault rocks sheared in their in situ geometry. Samples with well-developed foliation are extremely weak compared to their powdered equivalents. Micro- and nano-structural studies show that frictional sliding occurs along very fine-grained foliations composed of phyllosilicates (talc and smectite). When the same rocks are powdered, frictional strength is high, consistent with cataclastic processes. Our data show that fault weakness can occur in cases where weak mineral phases constitute only a small percentage of the total fault rock and that low friction results from slip on a network of weak phyllosilicate-rich surfaces that define the rock fabric. The widespread documentation of foliated fault rocks along mature faults in different tectonic settings and from many different protoliths suggests that this mechanism could be a viable explanation for fault weakening in the brittle crust. PMID:20016599

  11. Impact of Hot Spring Resort Development on the Groundwater Discharge in the Southeast Part of Laguna De Bay, Luzon, Philippines

    NASA Astrophysics Data System (ADS)

    Siringan, F. P.; Lloren, R. B.; Mancenido, D. L. O.; Jago-on, K. A. B.; Pena, M. A. Z.; Balangue-Tarriela, M. I. R.; Taniguchi, M.

    2014-12-01

    Direct groundwater seepage into a lake (DGSL) can be a major component of its water and nutrient budgets. Groundwater extraction around a lake may affect the DGSL, and thus can be expected to impact the lake as well. In the Philippines, Laguna de Bay, which is the second largest freshwater lake in Southeast Asia and is used primarily for fisheries, is under significant water development pressure. Along the southern coast of the lake, in the Calamba-Los Baños area, rapid urbanization and development of the water resort industry, including hot spring spas, are expected to have led to a rapid increase in groundwater extraction. This study aims to establish the effect of this development on the DGSL in this part of the lake. As a first step, we utilized towed electrical resistivity (ER) profiling to identify and map the potential and type of groundwater seepage off the southern coast of the lake. SRTM digital elevation models and synthetic aperture radar images were used to delineate lineaments, which are potential fractures that cut across the study area. ER profiles indicate widespread occurrence of DGSL across the shallower parts of the lake. In the deeper, more offshore parts of the lake, DGSL appears to be more limited, possibly due to muddier sediments there. However, in this area, narrow, vertical high-resistivity columns cut through the lake floor, suggesting more discrete DGSL possibly controlled by faults.

  12. Summary: beyond fault trees to fault graphs

    SciTech Connect

    Alesso, H.P.; Prassinos, P.; Smith, C.F.

    1984-09-01

    Fault Graphs are the natural evolutionary step over a traditional fault-tree model. A Fault Graph is a failure-oriented directed graph with logic connectives that allows cycles. We intentionally construct the Fault Graph to trace the piping and instrumentation drawing (P and ID) of the system, but with logical AND and OR conditions added. Then we evaluate the Fault Graph with computer codes based on graph-theoretic methods. Fault Graph computer codes are based on graph concepts, such as path set (a set of nodes traveled on a path from one node to another) and reachability (the complete set of all possible paths between any two nodes). These codes are used to find the cut-sets (any minimal set of component failures that will fail the system) and to evaluate the system reliability.
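
    The graph-theoretic primitives the abstract mentions (path sets and reachability on a directed graph that may contain cycles) are easy to illustrate; the miniature system below is invented, and the logical AND/OR connectives of a full Fault Graph are omitted:

```python
from collections import defaultdict

# Path sets and reachability on a small failure-oriented directed graph.
# The example system is made up for illustration; cycles are allowed.

edges = [("pump_fail", "low_flow"), ("valve_stuck", "low_flow"),
         ("low_flow", "overheat"), ("overheat", "trip"),
         ("trip", "low_flow")]                      # a feedback cycle

graph = defaultdict(list)
for src, dst in edges:
    graph[src].append(dst)

def reachable(node, seen=None):
    """All nodes reachable from `node` (its reachability set); cycles tolerated."""
    seen = set() if seen is None else seen
    for nxt in graph[node]:
        if nxt not in seen:
            seen.add(nxt)
            reachable(nxt, seen)
    return seen

def path_sets(src, dst, path=None):
    """Every simple path (set of nodes traveled) from src to dst."""
    path = [src] if path is None else path
    if src == dst:
        yield list(path)
        return
    for nxt in graph[src]:
        if nxt not in path:                         # keep paths simple
            yield from path_sets(nxt, dst, path + [nxt])

print("reachable from pump_fail:", reachable("pump_fail"))
print("path sets pump_fail -> trip:", list(path_sets("pump_fail", "trip")))
```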

  13. Crab death assemblages from Laguna Madre and vicinity, Texas

    SciTech Connect

    Plotnick, R.E.; McCarroll, S.; Powell, E.

    1990-02-01

    Crabs are a major component of modern marine ecosystems, but are only rarely described in fossil assemblages. Studies of brachyuran taphonomy have examined either the fossil end-products of the taphonomic process or the very earliest stages of decay and decomposition. The next logical step is the analysis of modern crab death assemblages; i.e., studies that examine taphonomic loss in areas where the composition of the living assemblage is known. The authors studied crab death assemblages in shallow-water sediments at several localities in and near Laguna Madre, Texas. Nearly every sample examined contained some crab remains, most commonly in the form of isolated claws (dactyl and propodus). A crab fauna associated with a buried grass bed contained abundant remains of the xanthid crab Dyspanopeus texanus, including carapaces, chelipeds, and thoraxes, as well as fragments of the portunid Callinectes sapidus and the majid Libinia dubia. Crab remains may be an overlooked portion of many preserved benthic assemblages, in both ancient and modern sediments.

  14. Factors controlling navigation-channel Shoaling in Laguna Madre, Texas

    USGS Publications Warehouse

    Morton, R.A.; Nava, R.C.; Arhelger, M.

    2001-01-01

    Shoaling in the Gulf Intracoastal Waterway of Laguna Madre, Tex., is caused primarily by recycling of dredged sediments. Sediment recycling, which is controlled by water depth and location with respect to the predominant wind-driven currents, is minimal where dredged material is placed on tidal flats that are either flooded infrequently or where the water is extremely shallow. In contrast, nearly all of the dredged material placed in open water >1.5 m deep is reworked and either transported back into the channel or dispersed into the surrounding lagoon. A sediment flux analysis incorporating geotechnical properties demonstrated that erosion and not postemplacement compaction caused most sediment losses from the placement areas. Comparing sediment properties in the placement areas and natural lagoon indicated that the remaining dredged material is mostly a residual of initial channel construction. Experimental containment designs (shallow subaqueous mound, submerged levee, and emergent levee) constructed in high-maintenance areas to reduce reworking did not retain large volumes of dredged material. The emergent levee provided the greatest retention potential approximately 2 years after construction.

  15. Silicic Magmas Erupted From the Laguna de Bay Caldera, Macolod Corridor, Luzon, Philippines: Geochemistry and Origin

    NASA Astrophysics Data System (ADS)

    Flood, T. P.; Vogel, T. A.; Arpa, M. B.; Patino, L. C.; Cantane, S. G.; Arcilla, C. A.

    2004-12-01

    The Laguna de Bay Caldera is a depression of about 200 km2 that occurs within the Macolod Corridor. The Macolod Corridor is a NE-SW zone of rifting through the central part of Luzon that was the site of extensive Pleistocene to Holocene volcanism, including major pyroclastic eruptions from the Laguna de Bay Caldera. This caldera has erupted large volumes of pyroclastic material, with poorly constrained published ages from <27 ka to >50 ka. The range in pumice sample composition in these flow units is from 53 to 69 wt. % SiO2. The abundant silicic compositions (>64 wt. % SiO2) are the focus of this investigation. Published chemical data from two nearby and relatively young, subduction-related stratovolcanoes, Taal and Makiling, show that both also contain silicic deposits. A comparison of these silicic deposits to the silicic samples from Laguna de Bay indicates that the Laguna de Bay pyroclastic deposits have much higher K2O/Na2O (<1 for both Taal and Makiling and >6 for Laguna de Bay). Sr concentrations in the silicic samples from Laguna de Bay are high (>250 ppm), which precludes large amounts of plagioclase fractionation. The small Eu anomaly is consistent with this interpretation. The REE patterns for Laguna de Bay are LREE enriched with flat HREE. No depletion occurs in the middle REE. The lack of depletion in the middle REE is in contrast to a significant concave-upward pattern for the Makiling samples, an indication of amphibole in the source (no REE data are available for Taal Volcano). Our preliminary conclusions are that the silicic samples from Laguna de Bay comprise distinct compositional groups, which may be interpreted as distinct magma batches. The very high K2O/Na2O values can be used to argue against the origin of these silicic magmas by fractional crystallization or partial melting of basaltic compositions. Melting or assimilation of a more evolved source must be involved. Evolved preexisting continental crust is absent in this area. Therefore we propose that the origin of the silicic magmas from the Laguna de Bay caldera is related to melting of preexisting subduction-related, evolved crust.

  16. Characterization of Appalachian faults

    SciTech Connect

    Hatcher, R.D. Jr.; Odom, A.L.; Engelder, T.; Dunn, D.E.; Wise, D.U.; Geiser, P.A.; Schamel, S.; Kish, S.A.

    1988-02-01

    This study presents a classification/characterization of Appalachian faults. Characterization factors include timing of movement relative to folding, metamorphism, and plutonism; tectonic position in the orogen; relations to existing anisotropies in the rock masses; involvement of particular rock units and their ages, as well as the standard Andersonian distinctions. Categories include faults with demonstrable Cenozoic activity, wildflysch-associated thrusts, foreland bedding-plane thrusts, premetamorphic to synmetamorphic thrusts in medium- to high-grade terranes, postmetamorphic thrusts in medium- to high-grade terranes, thrusts rooted in Precambrian basement, reverse faults, strike-slip faults, normal (block) faults, compound faults, structural lineaments, faults associated with local centers of disturbance, and geomorphic (nontectonic) faults.

  17. Fault Mapping in Haiti

    USGS Multimedia Gallery

    USGS geologist Carol Prentice surveying features that have been displaced by young movements on the Enriquillo fault in southwest Haiti.  The January 2010 Haiti earthquake was associated with the Enriquillo fault....

  18. Quantitative fault seal prediction

    SciTech Connect

    Yielding, G.; Freeman, B.; Needham, D.T.

    1997-06-01

    Fault seal can arise from reservoir/nonreservoir juxtaposition or by development of fault rock having high entry pressure. The methodology for evaluating these possibilities uses detailed seismic mapping and well analysis. A first-order seal analysis involves identifying reservoir juxtaposition areas over the fault surface by using the mapped horizons and a refined reservoir stratigraphy defined by isochores at the fault surface. The second-order phase of the analysis assesses whether the sand/sand contacts are likely to support a pressure difference. We define two types of lithology-dependent attributes: gouge ratio and smear factor. Gouge ratio is an estimate of the proportion of fine-grained material entrained into the fault gouge from the wall rocks. Smear factor methods (including clay smear potential and shale smear factor) estimate the profile thickness of a shale drawn along the fault zone during faulting. All of these parameters vary over the fault surface, implying that faults cannot simply be designated sealing or nonsealing. An important step in using these parameters is to calibrate them in areas where across-fault pressure differences are explicitly known from wells on both sides of a fault. Our calibration for a number of data sets shows remarkably consistent results, despite their diverse settings (e.g., Brent province, Niger Delta, Columbus basin). For example, a shale gouge ratio of about 20% (volume of shale in the slipped interval) is a typical threshold between minimal across-fault pressure difference and significant seal.
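
    The (shale) gouge ratio is the net clay fraction of the stratigraphic interval that has slipped past a point on the fault surface. A minimal sketch with invented bed thicknesses and clay fractions; the ~20% threshold in the comment echoes the calibration quoted above:

```python
# Shale gouge ratio (SGR) at a point on a fault: the net shale/clay fraction of
# the interval that has slipped past that point. Bed thicknesses and clay
# fractions below are made up for illustration.

def shale_gouge_ratio(beds, throw_m):
    """beds: list of (thickness_m, clay_fraction) for the slipped interval, top down."""
    slipped = 0.0
    shale = 0.0
    for thickness, vclay in beds:
        take = min(thickness, throw_m - slipped)   # only the part within the throw
        if take <= 0:
            break
        shale += take * vclay
        slipped += take
    return 100.0 * shale / slipped

# 50 m throw across an interbedded sand/shale sequence
beds = [(20.0, 0.05), (10.0, 0.60), (15.0, 0.10), (25.0, 0.45)]
sgr = shale_gouge_ratio(beds, throw_m=50.0)
print(f"SGR ~ {sgr:.0f}%  (values above ~20% tend to support a pressure difference)")
```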

  19. Fault recovery characteristics of the fault tolerant multi-processor

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1990-01-01

    The fault handling performance of the fault tolerant multiprocessor (FTMP) was investigated. Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles byzantine or lying faults. It is pointed out that these weak areas in the FTMP's design increase the probability that, for any hardware fault, a good LRU (line replaceable unit) is mistakenly disabled by the fault management software. It is concluded that fault injection can help detect and analyze the behavior of a system in the ultra-reliable regime. Although fault injection testing cannot be exhaustive, it has been demonstrated that it provides a unique capability to unmask problems and to characterize the behavior of a fault-tolerant system.

  20. Investigation of chikungunya fever outbreak in Laguna, Philippines, 2012

    PubMed Central

    Zapanta, Ma Justina; de los Reyes, Vikki Carr; Sucaldito, Ma Nemia; Tayag, Enrique

    2015-01-01

    Background In July 2012, the Philippines National Epidemiology Center received a report of a suspected chikungunya fever outbreak in San Pablo City, Laguna Province, the first chikungunya cases reported from the city since surveillance started in 2007. We conducted an outbreak investigation to identify risk factors associated with chikungunya. Methods A case was defined as any resident of Concepcion Village in San Pablo City who had fever of at least two days duration and either joint pains or rash between 23 June and 6 August 2012. Cases were ascertained by conducting house-to-house canvassing and medical records review. An unmatched case-control study was conducted and analysed using a multivariate logistic regression. An environmental investigation was conducted by observing water and sanitation practices, and 100 households were surveyed to determine House and Breteau Indices. Human serum samples were collected for confirmation for chikungunya IgM through enzyme-linked immunosorbent assay. Results There were 98 cases identified. Multivariate analysis revealed that having a chikungunya case in the household (adjusted odds ratio [aOR]: 6.2; 95% confidence interval [CI]: 3.0–12.9) and disposing of garbage haphazardly (aOR: 2.7; 95% CI: 1.4–5.4) were associated with illness. House and Breteau Indices were 27% and 28%, respectively. Fifty-eight of 84 (69%) serum samples were positive for chikungunya IgM. Conclusion It was not surprising that having a chikungunya case in a household was associated with illness in this outbreak. However, haphazard garbage disposal is not an established risk factor for the disease, although this could be linked to increased breeding sites for mosquitoes. PMID:26668759
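
    The adjusted odds ratios come from the multivariate logistic regression; their crude, unadjusted counterparts are just 2x2-table arithmetic, sketched below with hypothetical counts (not the investigation's data):

```python
import math

# Crude (unadjusted) odds ratio and 95% Wald confidence interval from a 2x2
# exposure table, the univariate counterpart of the adjusted ORs reported from
# the logistic regression. Counts below are hypothetical.

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    orr = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
    se_log = math.sqrt(1 / exposed_cases + 1 / unexposed_cases +
                       1 / exposed_controls + 1 / unexposed_controls)
    lo = math.exp(math.log(orr) - 1.96 * se_log)
    hi = math.exp(math.log(orr) + 1.96 * se_log)
    return orr, lo, hi

# e.g. exposure = "another chikungunya case in the household"
orr, lo, hi = odds_ratio(exposed_cases=40, unexposed_cases=58,
                         exposed_controls=15, unexposed_controls=120)
print(f"crude OR ~ {orr:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```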

  1. A case of paleo-creep? Comparison of fault displacements in a trench with the corresponding earthquake record in lake sediments along the Polochic fault, Guatemala

    NASA Astrophysics Data System (ADS)

    Brocard, Gilles; Anselmetti, Flavio

    2014-05-01

    The Polochic and Motagua strike-slip faults in Guatemala accommodate the displacement (~2 cm/yr) across the boundary between the Caribbean and North American plates. Both faults are expected to produce large destructive earthquakes such as the Mw 7.5 earthquake of 1976 on the Motagua fault. Former large earthquakes with magnitudes larger than Mw 7.0 are suggested by the areal extent of destruction of pre-Columbian Mayan cities and of churches, and both the Motagua and Polochic faults have been suspected as the sources of these earthquakes. The available record, however, is surprisingly poor in large earthquakes, suggesting either that the record is sketchy or that such earthquakes are effectively infrequent. We investigated the activity of the Polochic fault by opening trenches along its major strand in Uspantán, Quiché, and Agua Blanca, Alta Verapaz. Recent displacements are evidenced in Agua Blanca, where soils less than 350 years old are disrupted by the fault. We combined the study of the trenches with the study of sediment cores in Laguna Chichój, a lake located 4 km north of the Polochic fault. We had previously conducted an analysis of the sensitivity of the Chichój lake sediments to earthquakes in the 20th century, for which the earthquake record, as well as the locally felt intensity of each event, is well known. We found that turbidites and slumps are produced in the lake for MMI intensities of VI and higher. We used this calibration to study the earthquake record of the past 12 centuries and identified a cluster of earthquakes with MMI > VI between 830 and 1450 AD. The oldest seismite temporally matches widespread destruction in Mayan cities around 830 AD. Surprisingly, no earthquakes are recorded between 1450 and 1976 AD. Yet, the trench in Agua Blanca records substantial displacement of the Polochic fault over this period. It therefore seems that this most recent displacement did not produce any substantial earthquake and may correspond to a period of creep on the Polochic fault.

  2. Earthquake fault superhighways

    NASA Astrophysics Data System (ADS)

    Robinson, D. P.; Das, S.; Searle, M. P.

    2010-10-01

    Motivated by the observation that the rare earthquakes which propagated for significant distances at supershear speeds occurred on very long straight segments of faults, we examine every known major active strike-slip fault system on land worldwide and identify those with long (>100 km) straight portions capable not only of sustaining supershear rupture speeds but of reaching compressional wave speeds over significant distances; we call these "fault superhighways". The criteria used for identifying these are discussed. These superhighways include portions of the 1000 km long Red River fault in China and Vietnam passing through Hanoi, the 1050 km long San Andreas fault in California passing close to Los Angeles, Santa Barbara and San Francisco, the 1100 km long Chaman fault system in Pakistan north of Karachi, the 700 km long Sagaing fault connecting the first and second cities of Burma, Rangoon and Mandalay, the 1600 km Great Sumatra fault, and the 1000 km Dead Sea fault. Of the 11 faults so classified, nine are in Asia and two in North America, with seven located near areas of very dense populations. Based on the current population distribution within 50 km of each fault superhighway, we find that more than 60 million people today face an increased seismic hazard from these faults.

  3. Fault model development for fault tolerant VLSI design

    NASA Astrophysics Data System (ADS)

    Hartmann, C. R.; Lala, P. K.; Ali, A. M.; Visweswaran, G. S.; Ganguly, S.

    1988-05-01

    Fault models provide systematic and precise representations of physical defects in microcircuits in a form suitable for simulation and test generation. The current difficulty in testing VLSI circuits can be attributed to the tremendous increase in design complexity and the inappropriateness of traditional stuck-at fault models. This report develops fault models for three different types of common defects that are not accurately represented by the stuck-at fault model. The faults examined in this report are: bridging faults, transistor stuck-open faults, and transient faults caused by alpha particle radiation. A generalized fault model could not be developed for the three fault types. However, microcircuit behavior and fault detection strategies are described for the bridging, transistor stuck-open, and transient (alpha particle strike) faults. The results of this study can be applied to the simulation and analysis of faults in fault tolerant VLSI circuits.
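
    As a rough illustration of why a bridging defect is not covered by the stuck-at model, the sketch below evaluates a hypothetical two-gate netlist (n1 = a AND b; out = n1 OR c) under stuck-at faults on the internal net and under a wired-AND bridge between that net and input c; the netlist and fault choices are invented, not taken from the report.

        def evaluate(a, b, c, stuck=None, bridge=False):
            n1 = a & b
            if stuck == ("n1", 0):      # stuck-at-0 fault on internal net n1
                n1 = 0
            if stuck == ("n1", 1):      # stuck-at-1 fault on internal net n1
                n1 = 1
            if bridge:                  # wired-AND bridging fault between n1 and c
                n1 = c = n1 & c
            return n1 | c

        # The vector (a, b, c) = (0, 0, 1) exposes the bridge, but neither stuck-at fault on n1:
        print(evaluate(0, 0, 1),
              evaluate(0, 0, 1, stuck=("n1", 0)),
              evaluate(0, 0, 1, stuck=("n1", 1)),
              evaluate(0, 0, 1, bridge=True))   # -> 1 1 1 0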

  4. FTAPE: A fault injection tool to measure fault tolerance

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-01-01

    The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.

  5. Isolability of faults in sensor fault diagnosis

    NASA Astrophysics Data System (ADS)

    Sharifi, Reza; Langari, Reza

    2011-10-01

    A major concern with fault detection and isolation (FDI) methods is their robustness with respect to noise and modeling uncertainties. With this in mind, several approaches have been proposed to minimize the vulnerability of FDI methods to these uncertainties. But, apart from the algorithm used, there is a theoretical limit on the minimum effect of noise on detectability and isolability. This limit has been quantified in this paper for the problem of sensor fault diagnosis based on direct redundancies. In this study, first a geometric approach to sensor fault detection is proposed. The sensor fault is isolated based on the direction of residuals found from a residual generator. This residual generator can be constructed from an input-output or a Principal Component Analysis (PCA) based model. The simplicity of this technique, compared to the existing methods of sensor fault diagnosis, allows for more rational formulation of the isolability concepts in linear systems. Using this residual generator and the assumption of Gaussian noise, the effect of noise on isolability is studied, and the minimum magnitude of isolable fault in each sensor is found based on the distribution of noise in the measurement system. Finally, some numerical examples are presented to clarify this approach.
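
    A minimal sketch of the residual-direction idea under the stated assumptions (a known linear measurement model and additive Gaussian noise); the measurement matrix, noise level, and injected bias below are illustrative, not the paper's example.

        import numpy as np

        H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])  # 4 sensors, 2 states
        U, s, _ = np.linalg.svd(H)
        V = U[:, H.shape[1]:].T        # rows of V span the left null space of H (V @ H = 0)

        def isolate(y):
            r = V @ y                  # parity residual: insensitive to the true state
            scores = [abs(V[:, i] @ r) / (np.linalg.norm(V[:, i]) * np.linalg.norm(r))
                      for i in range(H.shape[0])]    # cosine with each sensor's fault signature
            return int(np.argmax(scores))

        x = np.array([1.0, 2.0])
        y = H @ x + 0.01 * np.random.randn(4)
        y[2] += 0.8                    # bias fault on sensor 2
        print(isolate(y))              # expected: 2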

  6. Redhead duck behavior on lower Laguna Madre and adjacent ponds of southern Texas

    USGS Publications Warehouse

    Mitchell, C.A.; Custer, T.W.; Zwank, P.J.

    1992-01-01

    Behavior of redheads (Aythya americana) during winter was studied on the hypersaline lower Laguna Madre and adjacent freshwater to brackish water ponds of southern Texas. On Laguna Madre, feeding (46%) and sleeping (37%) were the most common behaviors. Redheads fed more during early morning (64%) than during the rest of the day (40%); feeding activity was negatively correlated with temperature. Redheads fed more often by dipping (58%) than by tipping (25%), diving (16%), or gleaning (0.1%). Water depth was least where they fed by dipping (16 cm), greatest where diving (75 cm), and intermediate where tipping (26 cm). Feeding sequences averaged 5.3 s for dipping, 8.1 s for tipping, and 19.2 s for diving. Redheads usually were present on freshwater to brackish water ponds adjacent to Laguna Madre only during daylight hours, and use of those areas declined as winter progressed. Sleeping (75%) was the most frequent behavior at ponds, followed by preening (10%), swimming (10%), and feeding (0.4%). Because redheads fed almost exclusively on shoalgrass while dipping and tipping in shallow water and shoalgrass meadows have declined in the lower Laguna Madre, proper management of the remaining shoalgrass habitat is necessary to ensure that this area remains the major wintering area for redheads.

  7. 78 FR 72006 - Establishment of Class D Airspace and Class E Airspace; Laguna AAF, AZ

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-02

    ... Register a notice of proposed rulemaking (NPRM) to establish controlled airspace at Laguna AAF, AZ (78 FR... Procedures (44 FR 11034; February 26, 1979); and (3) does not warrant preparation of a regulatory evaluation... read as follows: Authority: 49 U.S.C. 106(g), 40103, 40113, 40120; E. O. 10854, 24 FR 9565, 3 CFR,...

  8. Migration chronology and distribution of redheads on the lower Laguna Madre, Texas

    USGS Publications Warehouse

    Custer, Christine M.; Custer, T.W.; Zwank, P.J.

    1997-01-01

    An estimated 80% of redheads (Aythya americana) winter on the Laguna Madre of southern Texas and Mexico. Because there have been profound changes in the Laguna Madre over the past three decades and the area is facing increasing industrial and recreational development, we studied the winter distribution and habitat requirements of redheads during two winters (1987-1988 and 1988-1989) on the Lower Laguna Madre, Texas, to provide information that could be used to understand, identify, and protect wintering redhead habitat. Redheads began arriving on the Lower Laguna Madre during early October in 1987 and 1988, and continued to arrive through November. Redhead migration was closely associated with passing weather fronts. Redheads arrived on the day a front arrived and during the following two days; no migrants were observed arriving the day before a weather front arrived. Flock size of arriving redheads was 26.4 ± 0.6 birds and did not differ among days or by time of day (morning, midday, or afternoon). Number of flocks arriving per 0.5 h interval (arrival rate) was greater during afternoon (21.7 ± 0.6) than during morning (4.3 ± 1.2) or midday (1.5 ± 0.4) on the day of frontal passage and during the first day after frontal passage. Upon arrival, redhead flocks congregated in the central portion of the Lower Laguna Madre. They continued to use the central portion throughout the winter, but gradually spread to the northern and southern ends of the lagoon. Seventy-one percent of the area used by flocks was vegetated with shoalgrass (Halodule wrightii), although shoalgrass covered only 32% of the lagoon. Flock movements seemed to be related to tide level; redheads moved to remain in water 12-30 cm deep. These data can be used by the environmental community to identify and protect this unique and indispensable habitat for wintering redheads.

  9. Vesicularity variation in pyroclasts from silicic eruptions at Laguna del Maule volcanic complex, Chile

    NASA Astrophysics Data System (ADS)

    Wright, H. M. N.; Fierstein, J.; Amigo, A.; Miranda, J.

    2014-12-01

    Crystal-poor rhyodacitic to rhyolitic volcanic eruptions at Laguna del Maule volcanic complex, Chile have produced an astonishing range of textural variation in pyroclasts. Here, we focus on eruptive deposits from two Quaternary eruptions from vents on the northwestern side of the Laguna del Maule basin: the rhyolite of Loma de Los Espejos and the rhyodacite of Laguna Sin Puerto. Clasts in the pyroclastic fall and pyroclastic flow deposits from the rhyolite of Loma de Los Espejos range from dense, non-vesicular (obsidian) to highly vesicular, frothy (coarsely vesicular reticulite), with vesicularity varying from <1% to >90%. Bulk compositions range from 75.6-76.7 wt.% SiO2. The highest vesicularity clasts are found in early fall deposits and widely dispersed pyroclastic flow deposits; the frothy carapace of lava flows is similarly highly vesicular. Pyroclastic deposits also contain tube pumice, and macroscopically folded, finely vesicular, breadcrusted, and heterogeneously vesiculated textures. We speculate that preservation of the highest vesicularities requires relatively low decompression rates or open-system degassing such that relaxation times were sufficient to allow extensive vesiculation. Such an inference is in apparent contradiction to documentation of Plinian dispersal for the eruption. Clasts in the pyroclastic fall deposit of the rhyodacite (68-72 wt.% SiO2) of Laguna Sin Puerto are finely vesicular, with vesicularity modes at ~50% and ~68% corresponding to gray and white pumice colors, respectively. Some clasts are banded in color (and vesicularity). All clasts were fragmented into highly angular particles, with subplanar to slightly concave exterior surfaces (average Wadell roundness of clast margins between 0.32 and 0.39), indicating brittle fragmentation. In contrast to Loma de Los Espejos, high bubble number densities in the Laguna Sin Puerto rhyodacite imply high decompression rates.

  10. How Faults Shape the Earth.

    ERIC Educational Resources Information Center

    Bykerk-Kauffman, Ann

    1992-01-01

    Presents fault activity with an emphasis on earthquakes and changes in continent shapes. Identifies three types of fault movement: normal, reverse, and strike faults. Discusses the seismic gap theory, plate tectonics, and the principle of superposition. Vignettes portray fault movement, and the locations of the San Andreas fault and epicenters of…

  12. Fault detection and fault tolerance in robotics

    NASA Technical Reports Server (NTRS)

    Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.

    1992-01-01

    Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.
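
    As a toy illustration of the fault-tree arithmetic, the sketch below combines assumed basic-event probabilities through OR and AND gates under an independence assumption; the events and numbers are invented and are not the robot model analyzed in the paper.

        def or_gate(*p):    # probability that at least one child event occurs
            q = 1.0
            for x in p:
                q *= (1.0 - x)
            return 1.0 - q

        def and_gate(*p):   # probability that all child events occur
            q = 1.0
            for x in p:
                q *= x
            return q

        p_encoder, p_motor = 0.01, 0.005           # assumed per-mission failure probabilities
        p_primary_ctrl, p_backup_ctrl = 0.02, 0.02

        p_joint = or_gate(p_encoder, p_motor)                # joint fails if either component fails
        p_control = and_gate(p_primary_ctrl, p_backup_ctrl)  # redundant controllers: both must fail
        print(or_gate(p_joint, p_control))                   # top event: loss of the joint, ~0.0153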

  13. Denali Fault: Gillette Pass

    USGS Multimedia Gallery

    View north of the Denali fault trace at Gillette Pass. This view shows that the surface rupture reoccupies the previous fault scarp. Also, the right-lateral offset of these stream gullies has developed since deglaciation in the last 10,000 years or so....

  14. Denali Fault: Gillette Pass

    USGS Multimedia Gallery

    View northward of mountain near Gillette Pass showing sackung features. Here the mountaintop moved downward like a keystone, producing an uphill-facing scarp. The main Denali fault trace is on the far side of the mountain and a small splay fault is out of view below the photo....

  15. Advanced cable fault locator

    SciTech Connect

    Steiner, J.P.; Weeks, W.L. )

    1990-03-01

    It has been demonstrated that it is possible to utilize the electromagnetic transients generated by the faulting process itself to locate the fault site in typical Underground Residential Distribution cable. Successful tests were carried out on a full scale model underground test facility and on two operating utility underground distribution circuits. The fault location system differs from existing ones not only in the way it handles the transients but also by the fact that it requires no operator interpretation of the waveforms. A personal computer is made a part of the system and, in response to simple, usually single key strokes, the computer does all of the interpretations and calculations. In practice, the fault location process is divided into three main parts: (1) "Global Location," which gives the fault location relative to the nearest transformer; (2) "Precision Location," which gives the fault location relative to the end of the isolated faulty cable; and (3) "Tracer Location," which gives the fault location relative to a convenient reference point on the ground in the vicinity of the fault site. 85 refs., 85 figs.

  16. Denali Fault: Susitna Glacier

    USGS Multimedia Gallery

    Helicopters and satellite phones were integral to the geologic field response. Here, Peter Haeussler is calling a seismologist to pass along the discovery of the Susitna Glacier thrust fault. View is to the north up the Susitna Glacier. The Denali fault trace lies in the background where the two lan...

  17. Denali Fault: Alaska Pipeline

    USGS Multimedia Gallery

    View south along the Trans Alaska Pipeline in the zone where it was engineered for the Denali fault. The fault trace passes beneath the pipeline between the 2nd and 3rd slider supports at the far end of the zone. A large arc in the pipe can be seen in the pipe on the right, due to shortening of the ...

  18. Solar system fault detection

    DOEpatents

    Farrington, R.B.; Pruett, J.C. Jr.

    1984-05-14

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.
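
    A minimal sketch of the underlying idea of combining sensor conditions into fault indications; the sensor names, thresholds, and rules below are hypothetical and are not taken from the patent.

        def solar_faults(pump_on, flow_lpm, collector_temp_c, tank_temp_c):
            faults = []
            if pump_on and flow_lpm < 1.0:
                faults.append("no flow while pump commanded on (pump or blockage fault)")
            if (not pump_on) and collector_temp_c - tank_temp_c > 15.0:
                faults.append("collector hot but pump off (controller fault)")
            if pump_on and collector_temp_c - tank_temp_c < -5.0:
                faults.append("pump circulating while collector is cold (heat-loss fault)")
            return faults

        print(solar_faults(pump_on=True, flow_lpm=0.2, collector_temp_c=70, tank_temp_c=55))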

  19. Solar system fault detection

    DOEpatents

    Farrington, Robert B. (Wheatridge, CO); Pruett, Jr., James C. (Lakewood, CO)

    1986-01-01

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.

  20. Remote sensing analysis for fault-zones detection in the Central Andean Plateau (Catamarca, Argentina)

    NASA Astrophysics Data System (ADS)

    Traforti, Anna; Massironi, Matteo; Zampieri, Dario; Carli, Cristian

    2015-04-01

    Remote sensing techniques have been extensively used to detect the structural framework of investigated areas, which includes lineaments, fault zones and fracture patterns. The identification of these features is fundamental in exploration geology, as it allows the definition of suitable sites for the exploitation of different resources (e.g. ore mineral, hydrocarbon, geothermal energy and groundwater). Remote sensing techniques, typically adopted in fault identification, have been applied to assess the geological and structural framework of the Laguna Blanca area (26°35'S-66°49'W). This area represents a sector of the south-central Andes located in the Argentine region of Catamarca, along the south-eastern margin of the Puna plateau. The study area is characterized by a Precambrian low-grade metamorphic basement intruded by Ordovician granitoids. These rocks are unconformably covered by a volcano-sedimentary sequence of Miocene age, followed by volcanic and volcaniclastic rocks of Upper Miocene to Plio-Pleistocene age. All these units are cut by two systems of major faults, locally characterized by 15-20 m wide damage zones. The detection of main tectonic lineaments in the study area was first carried out using classical procedures: image sharpening of Landsat 7 ETM+ images, directional filters applied to ASTER images, analysis of medium-resolution Digital Elevation Models (SRTM and ASTER GDEM), and hillshade interpretation. In addition, a new approach to fault zone identification, based on multispectral satellite image classification, has been tested in the Laguna Blanca area and in other sectors of the south-central Andes. To this end, several prominent fault zones affecting basement and granitoid rocks have been sampled. The collected fault gouge samples have been analyzed with a Field-Pro spectrophotometer mounted on a goniometer. We acquired bidirectional reflectance spectra, from 0.35 μm to 2.5 μm with 1 nm spectral sampling, of the sampled fault rocks. Subsequently, two different Spectral Angle Mapper (SAM) classifications were applied to ASTER images: the first one based on fault rock spectral signatures resampled at the ASTER sensor resolution; the second one based on spectral signatures retrieved from specific Regions of Interest (ROI), which were directly derived from the ASTER image on the analyzed fault zones. The SAM classification based on the spectral signatures of fault rocks gave outstanding results since it was able to classify the analyzed fault zone in terms of both length and width. Moreover, in some specific cases, this SAM classification identified not only the sampled fault zone, but also other prominent neighboring faults cutting the same host rock. These results define the SAM supervised classification on ASTER images as a tool to identify prominent fault zones directly on the basis of fault-rock spectra.
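
    A minimal sketch of the SAM step itself, which assigns a pixel to the reference whose spectrum makes the smallest angle with it; the reference and pixel spectra below are invented, and real use would first resample the field spectra to the ASTER band centres.

        import numpy as np

        def spectral_angle(s, r):
            cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
            return np.arccos(np.clip(cos, -1.0, 1.0))   # angle in radians, insensitive to illumination scaling

        def sam_classify(pixel, references, max_angle=0.1):
            angles = {name: spectral_angle(pixel, ref) for name, ref in references.items()}
            best = min(angles, key=angles.get)
            return best if angles[best] <= max_angle else "unclassified"

        refs = {"fault_gouge": np.array([0.22, 0.25, 0.30, 0.28, 0.26]),
                "host_granitoid": np.array([0.35, 0.36, 0.38, 0.40, 0.41])}
        pixel = np.array([0.21, 0.24, 0.31, 0.27, 0.25])
        print(sam_classify(pixel, refs))                # -> fault_gouge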

  1. Trace elements and organochlorines in the shoalgrass community of the lower Laguna Madre, Texas

    USGS Publications Warehouse

    Custer, T.W.; Mitchell, C.A.

    1993-01-01

    Our objectives were to measure concentrations of seven trace elements and 14 organochlorine compounds in sediment and biota of the shoalgrass (Halodule wrightii) community of the lower Laguna Madre of south Texas [USA] and to determine whether chemicals associated with agriculture (e.g. mercury, arsenic, selenium, organochlorine pesticides) were highest near agricultural drainage. Arsenic, mercury, selenium, lead, cadmium, and organochlorines were generally at background concentrations throughout the lower Laguna Madre. Nickel and chromium concentrations were exceptionally high in shrimp and pinfish (Lagodon rhomboides), which is difficult to explain because there are no known anthropogenic sources for these trace elements. For sediment and blue crabs (Callinectes sapidus), mercury was highest near agricultural drainages. Also, DDE was more frequently detected in blue crabs near agricultural drainages than farther away. In contrast, selenium concentrations did not differ among collecting sites and arsenic concentrations were lowest in shoalgrass, blue crabs, and brown shrimp (Penaeus aztecus) near agricultural drainages.

  2. Fault detection and isolation

    NASA Technical Reports Server (NTRS)

    Bernath, Greg

    1994-01-01

    In order for a current satellite-based navigation system (such as the Global Positioning System, GPS) to meet integrity requirements, there must be a way of detecting erroneous measurements, without help from outside the system. This process is called Fault Detection and Isolation (FDI). Fault detection requires at least one redundant measurement, and can be done with a parity space algorithm. The best way around the fault isolation problem is not necessarily isolating the bad measurement, but finding a new combination of measurements which excludes it.
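
    A minimal sketch of snapshot detection and exclusion with redundant measurements, assuming a linear measurement model; the geometry matrix, injected error, and threshold are illustrative, not a real GPS configuration.

        import numpy as np

        def residual_norm(H, y):
            x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)   # least-squares state estimate
            return np.linalg.norm(y - H @ x_hat)            # parity-style consistency statistic

        def detect_and_exclude(H, y, threshold):
            if residual_norm(H, y) <= threshold:
                return "consistent", None
            for i in range(len(y)):                         # drop one measurement at a time
                keep = [j for j in range(len(y)) if j != i]
                if residual_norm(H[keep], y[keep]) <= threshold:
                    return "excluded", i                    # the remaining combination is consistent again
            return "fault detected but not isolable", None

        H = np.array([[1, 0], [0, 1], [1, 1], [1, -1], [2, 1]], dtype=float)
        y = H @ np.array([3.0, -1.0])
        y[3] += 5.0                                         # inject a large error on measurement 3
        print(detect_and_exclude(H, y, threshold=0.5))      # -> ('excluded', 3)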

  3. Fault detection and isolation

    NASA Technical Reports Server (NTRS)

    Bernath, Greg

    1993-01-01

    Erroneous measurements in multisensor navigation systems must be detected and isolated. A recursive estimator can find fast growing errors; a least squares batch estimator can find slow growing errors. This process is called fault detection. A protection radius can be calculated as a function of time for a given location. This protection radius can be used to guarantee the integrity of the navigation data. Fault isolation can be accomplished using either a snapshot method or by examining the history of the fault detection statistics.

  4. Waterbirds (other than Laridae) nesting in the middle section of Laguna Cuyutlán, Colima, México.

    PubMed

    Mellink, Eric; Riojas-López, Mónica E

    2008-03-01

    Laguna de Cuyutlán, in the state of Colima, Mexico, is the only large coastal wetland in a span of roughly 1150 km. Despite this, the study of its birds has been largely neglected. Between 2003 and 2006 we assessed the waterbirds nesting in the middle portion of Laguna Cuyutlán, a large tropical coastal lagoon, through field visits. We documented the nesting of 15 species of non-Laridae waterbirds: Neotropic Cormorant (Phalacrocorax brasilianus), Tricolored Egret (Egretta tricolor), Snowy Egret (Egretta thula), Little Blue Heron (Egretta caerulea), Great Egret (Ardea alba), Cattle Egret (Bubulcus ibis), Black-crowned Night-heron (Nycticorax nycticorax), Yellow-crowned Night-heron (Nyctanassa violacea), Green Heron (Butorides virescens), Roseate Spoonbill (Platalea ajaja), White Ibis (Eudocimus albus), Black-bellied Whistling-duck (Dendrocygna autumnalis), Clapper Rail (Rallus longirostris), Snowy Plover (Charadrius alexandrinus), and Black-necked Stilt (Himantopus mexicanus). These add to six species of Laridae known to nest in that area: Laughing Gulls (Larus atricilla), Royal Terns (Thalasseus maximus), Gull-billed Terns (Gelochelidon nilotica), Forster's Terns (S. forsteri), Least Terns (Sternula antillarum), and Black Skimmer (Rynchops niger), and to at least 57 species using it during the non-breeding season. With such bird assemblages, Laguna Cuyutlán is an important site for waterbirds, which should be given conservation status. PMID:18624252

  5. Hydrocarbon concentrations in sediments and clams (Rangia cuneata) in Laguna de Pom, Mexico

    SciTech Connect

    Alvarez-Legorreta, T.; Gold-Bouchot, G.; Zapata-Perez, O.

    1994-01-01

    Laguna de Pom is a coastal lagoon within the Laguna de Términos system in the southern Gulf of Mexico. It belongs to the Grijalva-Usumacinta basin, and is located between 18°33′ and 18°38′ north latitude and 92°01′ and 92°14′ west longitude, in the Coastal Plain physiographic Province of the Gulf. It is ellipsoidal and approximately 10 km long, with a surface area of 5,200 ha and a mean depth of 1.5 m. Water salinity and temperature ranges are 0 to 13‰ and 25° to 31°C, respectively. Benthic macrofauna is dominated by bivalves such as the clams Rangia cuneata, R. flexuosa, and Polymesoda carolineana. These clams provide the basis of an artisanal fishery, which is the main economic activity in the region. The presence of several oil-processing facilities around the lagoon is very conspicuous, which together with decreasing yields has created social conflicts, with the fishermen blaming the Mexican state oil company (PEMEX) for the decrease in the clam population. This work aims to determine whether the concentrations of hydrocarbons in the clams (R. cuneata) and sediments of Laguna de Pom are responsible for the declining clam fishery. 11 refs., 4 figs., 2 tabs.

  6. Measuring fault tolerance with the FTAPE fault injection tool

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-01-01

    This paper describes FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The major parts of the tool include a system-wide fault-injector, a workload generator, and a workload activity measurement tool. The workload creates high stress conditions on the machine. Using stress-based injection, the fault injector is able to utilize knowledge of the workload activity to ensure a high level of fault propagation. The errors/fault ratio, performance degradation, and number of system crashes are presented as measures of fault tolerance.

  7. Fault zone structure of the Wildcat fault in Berkeley, California - Field survey and fault model test -

    NASA Astrophysics Data System (ADS)

    Ueta, K.; Onishi, C. T.; Karasaki, K.; Tanaka, S.; Hamada, T.; Sasaki, T.; Ito, H.; Tsukuda, K.; Ichikawa, K.; Goto, J.; Moriya, T.

    2010-12-01

    In order to develop hydrologic characterization technology of fault zones, it is desirable to clarify the relationship between the geologic structure and hydrologic properties of fault zones. To this end, we are performing surface-based geologic and trench investigations, geophysical surveys and borehole-based hydrologic investigations along the Wildcat fault in Berkeley, California, to investigate the effect of fault zone structure on regional hydrology. The present paper outlines the fault zone structure of the Wildcat fault in Berkeley on the basis of results from trench excavation surveys. The approximately 20-25 km long Wildcat fault is located within the Berkeley Hills and extends northwest-southeast from Richmond to Oakland, subparallel to the Hayward fault. The Wildcat fault, which is a predominantly right-lateral strike-slip fault, steps right in a releasing bend in the Berkeley Hills region. A total of five trenches have been excavated across the fault to investigate the deformation structure of the fault zone in the bedrock. Along the Wildcat fault, multiple fault surfaces branch, bend, and run parallel to one another, forming a complicated shear zone. The shear zone is ~300 m in width, and the fault surfaces may be classified under the following two groups: 1) Fault surfaces offsetting middle Miocene Claremont Chert on the east against late Miocene Orinda formation and/or San Pablo Group on the west. These NNW-SSE trending fault surfaces dip 50-60° to the southwest. Along the fault surfaces, fault gouge up to 1 cm wide and foliated cataclasite up to 60 cm wide can be observed. S-C fabrics of the fault gouge and foliated cataclasite show normal right-slip shear sense. 2) Fault surfaces forming a positive flower structure in Claremont Chert. These NW-SE trending fault surfaces are sub-vertical or steeply dipping. Along the fault surfaces, fault gouge up to 3 cm wide and foliated cataclasite up to 200 cm wide can be observed. S-C fabrics of the fault gouge and foliated cataclasite show reverse right-slip shear sense. We are performing sandbox experiments to investigate the three-dimensional kinematic evolution of fault systems caused by oblique-slip motion. The geometry of the Wildcat fault in the Berkeley Hills region shows a strong resemblance to our sandbox experimental model. Based on these geological and experimental data, we inferred that the complicated fault systems were dominantly developed within the fault step and the tectonic regime switched from transpression to transtension during the middle to late Miocene along the Wildcat fault.

  8. Cable-fault locator

    NASA Technical Reports Server (NTRS)

    Cason, R. L.; Mcstay, J. J.; Heymann, A. P., Sr.

    1979-01-01

    Inexpensive system automatically indicates location of short-circuited section of power cable. Monitor does not require that cable be disconnected from its power source or that test signals be applied. Instead, ground-current sensors are installed in manholes or at other selected locations along cable run. When fault occurs, sensors transmit information about fault location to control center. Repair crew can be sent to location and cable can be returned to service with minimum of downtime.

  9. Fault tolerant magnetic bearings

    SciTech Connect

    Maslen, E.H.; Sortore, C.K.; Gillies, G.T.; Williams, R.D.; Fedigan, S.J.; Aimone, R.J.

    1999-07-01

    A fault tolerant magnetic bearing system was developed and demonstrated on a large flexible-rotor test rig. The bearing system comprises a high speed, fault tolerant digital controller, three high capacity radial magnetic bearings, one thrust bearing, conventional variable reluctance position sensors, and an array of commercial switching amplifiers. Controller fault tolerance is achieved through a very high speed voting mechanism which implements triple modular redundancy with a powered spare CPU, thereby permitting failure of up to three CPU modules without system failure. Amplifier/cabling/coil fault tolerance is achieved by using a separate power amplifier for each bearing coil and permitting amplifier reconfiguration by the controller upon detection of faults. This allows hot replacement of failed amplifiers without any system degradation and without providing any excess amplifier kVA capacity over the nominal system requirement. Implemented on a large (2440 mm in length) flexible rotor, the system shows excellent rejection of faults including the failure of three CPUs as well as failure of two adjacent amplifiers (or cabling) controlling an entire stator quadrant.
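
    A minimal software sketch of the majority-voting idea behind triple modular redundancy (the real controller votes in hardware at very high speed); the agreement tolerance and channel values are invented.

        def vote(outputs, tol=0.05):
            """Return (voted value, suspect channels) for three redundant outputs."""
            for i in range(3):
                for j in range(i + 1, 3):
                    if abs(outputs[i] - outputs[j]) <= tol:         # two channels agree within tolerance
                        value = 0.5 * (outputs[i] + outputs[j])
                        failed = [k for k in range(3) if abs(outputs[k] - value) > tol]
                        return value, failed
            return None, [0, 1, 2]                                  # no majority this cycle

        channels = [4.97, 4.98, 12.5]        # channel 2 has failed high
        print(vote(channels))                # -> (4.975, [2]); a powered spare would replace channel 2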

  10. A mesh of crossing faults: Fault networks of southern California

    NASA Astrophysics Data System (ADS)

    Janecke, S. U.

    2009-12-01

    Detailed geologic mapping of active fault systems in the western Salton Trough and northern Peninsular Ranges of southern California makes it possible to expand the inventory of mapped and known faults by compiling and updating existing geologic maps, and analyzing high resolution imagery, LIDAR, InSAR, relocated hypocenters and other geophysical datasets. A fault map is being compiled on Google Earth and will ultimately discriminate between a range of different fault expressions: from well-mapped faults to subtle lineaments and geomorphic anomalies. The fault map shows deformation patterns in both crystalline and basinal deposits and reveals a complex fault mesh with many curious and unexpected relationships. Key findings are: 1) Many fault systems have mutually interpenetrating geometries, are grossly coeval, and allow faults to cross one another. A typical relationship reveals a dextral fault zone that appears to be continuous at the regional scale. In detail, however, there are no continuous NW-striking dextral fault traces and instead the master dextral fault is offset in a left-lateral sense by numerous crossing faults. Left-lateral faults also show small offsets where they interact with right lateral faults. Both fault sets show evidence of Quaternary activity. Examples occur along the Clark, Coyote Creek, Earthquake Valley and Torres Martinez fault zones. 2) Fault zones cross in other ways. There are locations where active faults continue across or beneath significant structural barriers. Major fault zones like the Clark fault of the San Jacinto fault system appear to end at NE-striking sinistral fault zones (like the Extra and Pumpkin faults) that clearly cross from the SW to the NE side of the projection of the dextral traces. Despite these blocking structures, there is good evidence for continuation of the dextral faults on the opposite sides of the crossing fault array. In some instances there is clear evidence (in deep microseismic alignments of hypocenters) that the master dextral fault zones pass beneath shallower crossing fault arrays above them and this mechanism may transfer strain through the blocking zones. 3) The curvature of strands of the Coyote Creek fault and the Elsinore fault is similar along their SE 60 km. The scale, locations and concavity of bends are so similar that their shape appears to be coordinated. The matching contractional and extensional bends suggest that originally straighter dextral fault zones may be deforming in response to coeval sinistral deformation between, beneath, and around them. 4) Deformation is strongly domainal with one style or geometry of structure dominating in one area then another in an adjacent area. Boundaries may be abrupt. 5) There are drastic lateral changes in the width of damage zones adjacent to master faults. Outlines of the deformation related to some dextral fault zones resemble a snake that has ingested a squirming cat or soccer ball. 6) A mesh of interconnected faults seems to transfer slip back and forth between structures. 7) Scarps are not necessarily more abundant on the long master faults than on connector or crossing faults. Much remains to be learned upon completion of the fault map.

  11. 77 FR 19700 - Notice of Intent to Repatriate Cultural Items: California Department of Parks and Recreation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-02

    ... the Chocolate Mountains, the territory extends southward to Todos Santos Bay, Laguna Salada and along... from just below Borrego Springs to the north end of the Salton Basin and the Chocolate Mountains....

  12. Fault Roughness Records Strength

    NASA Astrophysics Data System (ADS)

    Brodsky, E. E.; Candela, T.; Kirkpatrick, J. D.

    2014-12-01

    Fault roughness is commonly ~0.1-1% at the outcrop exposure scale. More mature faults are smoother than less mature ones, but the overall range of roughness is surprisingly limited which suggests dynamic control. In addition, the power spectra of many exposed fault surfaces follow a single power law over scales from millimeters to 10's of meters. This is another surprising observation as distinct structures such as slickenlines and mullions are clearly visible on the same surfaces at well-defined scales. We can reconcile both observations by suggesting that the roughness of fault surfaces is controlled by the maximum strain that can be supported elastically in the wallrock. If the fault surface topography requires more than 0.1-1% strain, it fails. Invoking wallrock strength explains two additional observations on the Corona Heights fault for which we have extensive roughness data. Firstly, the surface is isotropic below a scale of 30 microns and has grooves at larger scales. Samples from at least three other faults (Dixie Valley, Mount St. Helens and San Andreas) also are isotropic at scales below 10's of microns. If grooves can only persist when the walls of the grooves have a sufficiently low slope to maintain the shape, this scale of isotropy can be predicted based on the measured slip perpendicular roughness data. The observed 30 micron scale at Corona Heights is consistent with an elastic strain of 0.01 estimated from the observed slip perpendicular roughness with a Hurst exponent of 0.8. The second observation at Corona Heights is that slickenlines are not deflected around meter-scale mullions. Yielding of these mullions at centimeter to meter scale is predicted from the slip parallel roughness as measured here. The success of the strain criterion for Corona Heights supports it as the appropriate control on fault roughness. Micromechanically, the criterion implies that failure of the fault surface is a continual process during slip. Macroscopically, the fundamental nature of the control means that 0.1 to 1% roughness should be ubiquitous on faults and can generally be used for simulating ground motion. An important caveat is that the scale-dependence of strength may result in a difference in the yield criterion at large-scales. The commonly observed values of the Hurst exponent below 1 may capture this scale-dependence.
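
    A minimal sketch of how a Hurst exponent can be recovered from the power spectrum of a self-affine profile, using the 1-D relation P(k) ∝ k^-(1+2H); the profile below is synthetic, not one of the measured fault surfaces.

        import numpy as np

        rng = np.random.default_rng(0)
        n, H_true = 4096, 0.8
        k = np.fft.rfftfreq(n, d=1.0)[1:]
        amp = k ** (-(1 + 2 * H_true) / 2)              # amplitude spectrum of a self-affine profile
        phase = rng.uniform(0, 2 * np.pi, k.size)
        phase[-1] = 0.0                                 # keep the Nyquist term real
        spectrum = np.concatenate(([0.0], amp * np.exp(1j * phase)))
        profile = np.fft.irfft(spectrum, n)             # synthetic roughness profile

        psd = np.abs(np.fft.rfft(profile))[1:] ** 2     # periodogram of the profile
        slope, _ = np.polyfit(np.log(k), np.log(psd), 1)   # log-log fit: slope = -(1 + 2H)
        print(round((-slope - 1) / 2, 2))               # estimated Hurst exponent, ~0.8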

  13. Changes in fault length distributions due to fault linkage

    NASA Astrophysics Data System (ADS)

    Xu, Shunshan; Nieto-Samaniego, A. F.; Alaniz-Álvarez, S. A.; Velasquillo-Martínez, L. G.; Grajales-Nishimura, J. M.; García-Hernández, J.; Murillo-Muñetón, G.

    2010-01-01

    Fault linkage plays an important role in the growth of faults. In this paper we analyze a published synthetic model to simulate fault linkage. The results of the simulation indicate that fault linkage is the cause of the shallower local slopes on the length-frequency plots. The shallower local slopes lead to two effects. First, the curves of log cumulative number against log length exhibit fluctuating shapes as reported in the literature. Second, for a given fault population, the power-law exponents after linkage are negatively related to the linked length scales. Also, we present datasets of fault length measured from four structural maps at the Cantarell oilfield in the southern Gulf of Mexico (offshore Campeche). The results demonstrate that the fault length data, corrected by seismic resolution at the fault tip zone, also exhibit fluctuating curves of log cumulative frequency vs. log length. The steps (shallower slopes) on the curves imply the scale positions of fault linkage. We conclude that fault linkage is the main reason for the fluctuating shapes of log cumulative frequency vs. log length. On the other hand, our data show that the two-tip faults are better for linear analysis between maximum displacement (D) and length (L). Evidently, two-tip faults underwent fewer fault linkages and interactions.
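
    A minimal sketch of the exponent estimate for a cumulative length-frequency curve, N(>=L) ∝ L^-c, by log-log regression; the fault lengths below are invented, not the Cantarell data.

        import numpy as np

        lengths_km = np.array([0.4, 0.5, 0.6, 0.8, 1.1, 1.3, 1.8, 2.4, 3.5, 5.0, 7.5, 12.0])
        L_sorted = np.sort(lengths_km)[::-1]
        N_cum = np.arange(1, L_sorted.size + 1)         # number of faults with length >= L

        slope, intercept = np.polyfit(np.log10(L_sorted), np.log10(N_cum), 1)
        print(f"power-law exponent c = {-slope:.2f}")
        # Linking two faults removes intermediate lengths and adds a longer one, which locally
        # flattens this log-log curve, producing the steps (shallower slopes) described above.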

  14. Validated Fault Tolerant Architectures for Space Station

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.

    1990-01-01

    Viewgraphs on validated fault tolerant architectures for space station are presented. Topics covered include: fault tolerance approach; advanced information processing system (AIPS); and fault tolerant parallel processor (FTPP).

  15. Patagonian and Antarctic dust as recorded in the sediments of Laguna Potrok Aike (Patagonia, Argentina)

    NASA Astrophysics Data System (ADS)

    Haberzettl, Torsten; Stopp, Annemarie; Lisé-Pronovost, Agathe; Gebhardt, Catalina; Ohlendorf, Christian; Zolitschka, Bernd; von Eynatten, Hilmar; Kleinhanns, Ilka; Pasado Science Team

    2010-05-01

    Although an increasing number of terrestrial paleoclimatic records from southern South America has been published during the last decade, these archives mostly cover the Lateglacial and/or the Holocene. Little is known about the Patagonian climate before the Last Glacial Maximum. Here, we present a continuous, high-resolution magnetic susceptibility record for the past 48 ka from the maar lake Laguna Potrok Aike (51°58' S, 70°23' W, southern Patagonia, Argentina). Magnetic susceptibility serves as an excellent parameter for the parallelization of sediment cores all over Laguna Potrok Aike including sediment cores taken within the ICDP (International Continental Scientific Drilling Program) project PASADO (Potrok Aike maar lake Sediment Archive Drilling prOject). Additionally, magnetic susceptibility is assumed to be a proxy for dust deposition in this lake. Distinct similarities were found between the independently dated magnetic susceptibility record from Laguna Potrok Aike and the non-sea-salt calcium (nss-Ca) flux from the EPICA Dome C ice core record (75°06' S, 123°24' E), the latter being a proxy for mineral dust deposition in Antarctica [1]. Comparison of the two records and variations in grain size of the Laguna Potrok Aike sediment records indicate a relatively high aeolian activity in southern South America during the glacial period. During the Holocene, climatic conditions driving sediment deposition seem to have been more variable and less dominated by wind compared to glacial times. Although the source of the dust found in Antarctic ice cores often has been attributed to Patagonia [2], we present the first evidence for contemporaneity of aeolian deposition in both the target area (Antarctica) and the major source area (Patagonia). Considering the similarities of the two records, magnetic susceptibility might yield the potential for chronological information: transfer of the ice core age model to a lacustrine sediment record. This would be important as additional time control for the recently recovered sediment record within the ICDP deep drilling project PASADO. To support this idea, we performed Sr/Nd-isotopic analyses on the assumed aeolian, well-sorted fraction (63-200 µm) deposited in Laguna Potrok Aike during the last glaciation as well as on the <5 µm fraction, which is commonly found as dust in Antarctica - both on the same samples. These results are compared to the Sr/Nd-isotopic signatures measured directly on dust from Antarctic ice cores [2]: the isotopic data field of sediments from Laguna Potrok Aike superposes a large part of isotopic data from Antarctic dust, although the 87Sr/86Sr-data seems to show a slight offset to lower values. In conclusion our analyses confirm previous studies that suggested southern South America to be the main source area of east Antarctic dust during glacial periods. However, this is the first evidence for a contemporaneous dust deposition pattern in Patagonia and Antarctica. References [1] R. Röthlisberger, R. Mulvaney, E.W. Wolff, M.A. Hutterli, M. Bigler, S. Sommer, J. Jouzel, Dust and sea salt variability in central East Antarctica (Dome C) over the last 45 kyrs and its implications for southern high-latitude climate, Geophysical Research Letters 29 (2002) doi:10.1029/2002GL015186. [2] B. Delmonte, I. Basile-Doelsch, J.R. Petit, V. Maggi, M. Revel-Rolland, A. Michard, E. Jagoutz, F.
Grousset, Comparing the Epica and Vostok dust records during the last 220,000 years: stratigraphical correlation and provenance in glacial periods, Earth-Science Reviews 66 (2004) 63-87.

  16. Cable fault locator research

    NASA Astrophysics Data System (ADS)

    Cole, C. A.; Honey, S. K.; Petro, J. P.; Phillips, A. C.

    1982-07-01

    Cable fault location and the construction of four field test units are discussed. Swept frequency sounding of mine cables with RF signals was the technique most thoroughly investigated. The swept frequency technique is supplemented with a form of moving target indication to provide a method for locating the position of a technician along a cable and relative to a suspected fault. Separate, more limited investigations involved high voltage time domain reflectometry and acoustical probing of mine cables. Particular areas of research included microprocessor-based control of the swept frequency system, a microprocessor based fast Fourier transform for spectral analysis, and RF synthesizers.

  17. Computer hardware fault administration

    DOEpatents

    Archer, Charles J. (Rochester, MN); Megerian, Mark G. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.

  18. Fault tolerant linear actuator

    DOEpatents

    Tesar, Delbert

    2004-09-14

    In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.

  19. Fault Drag Along Normal Faults in Unconsolidated Sediments

    NASA Astrophysics Data System (ADS)

    Exner, U.; Grasemann, B.; Pretsch, H.

    2007-12-01

    A displacement gradient along the strike of a fault plane results in the formation of fault drag in layers of the adjacent host rock. We investigated normal faults in Lower Miocene (Sarmatian-Pannonian) clastic sediments in a quarry at St. Margarethen, Burgenland, Austria, situated at the Eastern margin of the Eisenstadt Basin, a subbasin of the Vienna Basin complex. The N-S trending faults crosscut a barely lithified sequence of conglomerates, fine-grained sands and silts. These marker horizons display normal offset along the conjugate fault set, which is often, but not exclusively, restricted to the conglomerate beds. A significant amount of rotation of the faults can be inferred, as largest offsets are accumulated at the more inclined fault planes, whereas steeper faults show least displacement. Associated with increasing amount of offset, pronounced reverse drag of the faulted sedimentary layers can be observed both in footwall and hanging wall, often accommodated by re-orientation of the conglomerate pebbles. Rotation and fault linkage resulted in the formation of longer faults with varying dip angles crosscutting several conglomerate beds and the intercalated sand and silt layers. In the vicinity of the fault tips, individual pebbles are intensively cracked, which we interpret as an indicator for stress concentration at the fault tips. Comparing the geometry of the observed fault drag with results from numerical models we try to estimate the initial shape and orientation of the fault planes, as well as the amount of rotation and background strain which led to their finite geometry. Extrapolating the results to basin-scale faults, we may deduce valuable parameters for the interpretation of reflection seismic images, where structural details may be blurred or below seismic resolution.

  20. Niebla ceruchis from Laguna Figueroa: dimorphic spore morphology and secondary compounds localized in pycnidia and apothecia

    NASA Technical Reports Server (NTRS)

    Enzien, M.; Margulis, L.

    1988-01-01

    During and after the floods of 1979-80 Niebla ceruchis growing epiphytically on Lycium brevipes was one of the dominant aspects of the vegetation in the coastal dunal complex bordering the microbial mats at Laguna Figueroa, Baja California Norte, Mexico. The lichen on denuded branches of Lycium was far more extensively distributed than Lycium lacking lichen. Unusual traits of this Niebla ceruchis strain, namely localization of lichen compounds in the mycobiont reproductive structures (pycnidia and apothecia) and simultaneous presence of bilocular and quadrilocular ascospores, are reported. The abundance of this coastal lichen cover at the microbial mat site has persisted through April 1988.

  1. Rock Magnetic Properties of Laguna Carmen (Tierra del Fuego, Argentina): Implications for Paleomagnetic Reconstruction

    NASA Astrophysics Data System (ADS)

    Gogorza, C. G.; Orgeira, M. J.; Ponce, F.; Fernández, M.; Laprida, C.; Coronato, A.

    2013-05-01

    We report preliminary results obtained from a multi-proxy analysis including paleomagnetic and rock-magnetic studies of two sediment cores of Laguna Carmen (53°40'60" S, 68°19'0" W, ~83 m asl) in the semiarid steppe in northern Tierra del Fuego island, Southernmost Patagonia, Argentina. Two short cores (115 cm) were sampled using a Livingstone piston corer during the 2011 southern fall. Sediments are massive green clays (115 to 70 cm depth) with irregularly spaced thin sandy strata and lenses. Massive yellow clay with thin sandy strata continues up to 30 cm depth; from here up to 10 cm yellow massive clays dominate. The topmost 10 cm are mixed yellow and green clays with fine sand. Measurements of intensity and directions of Natural Remanent Magnetization (NRM), magnetic susceptibility, isothermal remanent magnetization, saturation isothermal remanent magnetization (SIRM), back field and anhysteretic remanent magnetization at 100 mT (ARM100mT) were performed and several associated parameters calculated (ARM100mT/k and SIRM/ARM100mT). Also, as a first estimate of relative magnetic grain-size variations, the median destructive field of the NRM (MDFNRM) was determined. Additionally, we present results of magnetic parameters measured with a vibrating sample magnetometer (VSM). The stability of the NRM was analyzed by alternating field demagnetization. The magnetic properties have shown variable values, showing changes in both grain size and concentration of magnetic minerals. It was found that the main carrier of remanence is magnetite with the presence of hematite in very low percentages. This is the first paleomagnetic study performed in lakes located in the northern, semiarid Fuegian steppe, where humid-dry cycles have been interpreted throughout the Holocene from an aeolian paleosoil sequence (Orgeira et al., 2012). Comparison between paleomagnetic records of Laguna Carmen and results obtained in earlier studies carried out at Laguna Potrok Aike (Gogorza et al., 2012) was performed. References Gogorza, C.S.G., Irurzun, M.A., Sinito, A.M., Lisé-Pronovost, A., St-Onge, G., Haberzettl, T., Ohlendorf, C., Kastner, S., Zolitschka, B., 2012. High-resolution paleomagnetic records from Laguna Potrok Aike (Patagonia, Argentina) for the last 16,000 years. Geochemistry Geophysics Geosystems. 13, Q12Z37. Orgeira, M.J., Vásquez, C.A., Coronato, A., Ponce, F., Moreto, A., Osterrieth, M., Egli, R., Onorato, R., 2012. Magnetic properties of Holocene edaphized silty eolian sediments from Tierra del Fuego (Argentina). Revista de la Sociedad Geológica de España. 25 (1-2), 45-56.

  2. Fault tree models for fault tolerant hypercube multiprocessors

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Tuazon, Jezus O.

    1991-01-01

    Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.

  3. Fault-Mechanism Simulator

    ERIC Educational Resources Information Center

    Guyton, J. W.

    1972-01-01

    An inexpensive, simple mechanical model of a fault can be produced to simulate the effects leading to an earthquake. This model has been used successfully with students from elementary to college levels and can be demonstrated to classes as large as thirty students. (DF)

  4. Row fault detection system

    DOEpatents

    Archer, Charles Jens (Rochester, MN); Pinnow, Kurt Walter (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian Edward (Rochester, MN)

    2010-02-23

    An apparatus and program product check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.

  5. Row fault detection system

    DOEpatents

    Archer, Charles Jens (Rochester, MN); Pinnow, Kurt Walter (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian Edward (Rochester, MN)

    2012-02-07

    An apparatus, program product and method check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.

  6. Row fault detection system

    SciTech Connect

    Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward

    2008-10-14

    An apparatus, program product and method checks for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.
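
    A minimal sketch of the row-check logic: every node reports whether its exchange with each adjacent neighbour succeeded, and a node implicated from both sides (or from its only side, at a row end) is flagged; the node results below are invented.

        def find_row_faults(row_results):
            """row_results[i] = (left_ok, right_ok) as reported by node i."""
            n = len(row_results)
            suspects = []
            for i in range(n):
                left_bad = i > 0 and (not row_results[i][0] or not row_results[i - 1][1])
                right_bad = i < n - 1 and (not row_results[i][1] or not row_results[i + 1][0])
                if (i == 0 and right_bad) or (i == n - 1 and left_bad) or (left_bad and right_bad):
                    suspects.append(i)
            return suspects

        # Node 2 cannot talk to either neighbour, so nodes 1 and 3 each see one bad link.
        results = [(True, True), (True, False), (False, False), (False, True), (True, True)]
        print(find_row_faults(results))   # -> [2]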

  7. Dynamic Fault Detection Chassis

    SciTech Connect

    Mize, Jeffery J

    2007-01-01

    The high frequency switching megawatt-class High Voltage Converter Modulator (HVCM) developed by Los Alamos National Laboratory for the Oak Ridge National Laboratory's Spallation Neutron Source (SNS) is now in operation. One of the major problems with the modulator systems is shoot-through conditions that can occur in an IGBT H-bridge topology, resulting in large fault currents and device failure in a few microseconds. The Dynamic Fault Detection Chassis (DFDC) is a fault monitoring system; it monitors transformer flux saturation using a window comparator and dV/dt events on the cathode voltage caused by any abnormality such as capacitor breakdown, transformer primary turns shorts, or dielectric breakdown between the transformer primary and secondary. If faults are detected, the DFDC will inhibit the IGBT gate drives and shut the system down, significantly reducing the possibility of a shoot-through condition or other equipment-damaging events. In this paper, we will present system integration considerations, performance characteristics of the DFDC, and discuss its ability to significantly reduce costly downtime for the entire facility.

  8. Fault-Related Sanctuaries

    NASA Astrophysics Data System (ADS)

    Piccardi, L.

    2001-12-01

    Beyond the study of historical surface faulting events, this work investigates the possibility, in specific cases, of identifying pre-historical events whose memory survives in myths and legends. The myths of many famous sacred places of the ancient world contain relevant telluric references: "sacred" earthquakes, openings to the Underworld and/or chthonic dragons. Given the strong correspondence with local geological evidence, these myths may be considered as describing natural phenomena. It has been possible in this way to shed light on the geologic origin of famous myths (Piccardi, 1999, 2000 and 2001). Interdisciplinary researches reveal that the origin of several ancient sanctuaries may be linked in particular to peculiar geological phenomena observed on local active faults (like ground shaking and coseismic surface ruptures, gas and flames emissions, strong underground rumours). In many of these sanctuaries the sacred area is laid directly above the active fault. In a few cases, faulting has affected also the archaeological relics, right through the main temple (e.g. Delphi, Cnidus, Hierapolis of Phrygia). As such, the arrangement of the cult site and content of relative myths suggest that specific points along the trace of active faults have been noticed in the past and worshiped as special `sacred' places, most likely interpreted as Hades' Doors. The mythological stratification of most of these sanctuaries dates back to prehistory, and points to a common derivation from the cult of the Mother Goddess (the Lady of the Doors), which was largely widespread since at least 25000 BC. The cult itself was later reconverted into various different divinities, while the `sacred doors' of the Great Goddess and/or the dragons (offspring of Mother Earth and generally regarded as Keepers of the Doors) persisted in more recent mythologies. Piccardi L., 1999: The "Footprints" of the Archangel: Evidence of Early-Medieval Surface Faulting at Monte Sant'Angelo (Gargano, Italy). European Union of Geophysics Congress, Strasbourg, March 1999. Piccardi L., 2000: Active faulting at Delphi (Greece): seismotectonic remarks and a hypothesis for the geological environment of a myth. Geology, 28, 651-654. Piccardi L., 2001: Seismotectonic Origin of the Monster of Loch Ness. Earth System Processes, Joint Meeting of G.S.A. and G.S.L., Edinburgh, June 2001.

  9. Earthquakes and fault creep on the northern San Andreas fault

    USGS Publications Warehouse

    Nason, R.

    1979-01-01

    At present there is an absence of both fault creep and small earthquakes on the northern San Andreas fault, which produced a magnitude 8 earthquake with 5 m of slip in 1906. The fault has apparently been dormant since the 1906 earthquake. One possibility is that the fault is 'locked' in some way and only produces great earthquakes. An alternative possibility, presented here, is that the lack of current activity on the northern San Andreas fault reflects a lack of sufficient elastic strain after the 1906 earthquake. This is indicated by geodetic measurements at Fort Ross in 1874, 1906 (post-earthquake), and 1969, which show that the strain accumulation by 1969 (69 x 10^-6 engineering strain) was only about one-third of the strain release (rebound) in the 1906 earthquake (200 x 10^-6 engineering strain). The large difference in seismicity before and after 1906, with many strong local earthquakes from 1836 to 1906 but only a few strong earthquakes from 1906 to 1976, also indicates a difference in elastic strain. The geologic characteristics (serpentine, fault straightness) of most of the northern San Andreas fault are very similar to those of the fault south of Hollister, where fault creep is occurring. Thus, the current absence of fault creep on the northern fault segment is probably due to a lack of sufficient elastic strain at the present time.
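    The "about one-third" comparison follows directly from the two engineering-strain figures quoted in the abstract; a quick check:

        # Ratio of 1969 accumulated strain to 1906 released strain (values from the abstract)
        strain_1969 = 69e-6     # engineering strain at Fort Ross, 1969
        strain_1906 = 200e-6    # strain released in the 1906 earthquake
        print(strain_1969 / strain_1906)   # ~0.35, i.e. roughly one-third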

  10. An empirical comparison of software fault tolerance and fault elimination

    NASA Technical Reports Server (NTRS)

    Shimeall, Timothy J.; Leveson, Nancy G.

    1991-01-01

    Reliability is an important concern in the development of software for modern systems. Some researchers have hypothesized that particular fault-handling approaches or techniques are so effective that other approaches or techniques are superfluous. The authors have performed a study that compares two major approaches to the improvement of software, software fault elimination and software fault tolerance, by examination of the fault detection obtained by five techniques: run-time assertions, multi-version voting, functional testing augmented by structural testing, code reading by stepwise abstraction, and static data-flow analysis. This study has focused on characterizing the sets of faults detected by the techniques and on characterizing the relationships between these sets of faults. The results of the study show that none of the techniques studied is necessarily redundant to any combination of the others. Further results reveal strengths and weaknesses in the fault detection by the techniques studied and suggest directions for future research.

  11. Fault Scarp Offsets and Fault Population Analysis on Dione

    NASA Astrophysics Data System (ADS)

    Tarlow, S.; Collins, G. C.

    2010-12-01

    Cassini images of Dione show several fault zones cutting through the moon's icy surface. We have measured the displacement and length of 271 faults, and estimated the strain occurring in 6 different fault zones. These measurements allow us to quantify the total amount of surface strain on Dione as well as constrain what processes might have caused these faults to form. Though we do not have detailed topography across fault scarps on Dione, we can use their projected size on the camera plane to estimate their heights, assuming a reasonable surface slope. Starting with high-resolution images of Dione obtained by the Cassini ISS, we marked points from the top to the bottom of each fault scarp to measure the fault's projected displacement and its orientation along strike. Line and sample information for the measurements were then processed through ISIS to derive latitude/longitude information and pixel dimensions. We then calculate the three-dimensional orientation of a vector running from the bottom to the top of the fault scarp, assuming a 45 degree angle with respect to the surface, and project this vector onto the spacecraft camera plane. This projected vector gives us a correction factor to estimate the actual vertical displacement of the fault scarp. This process was repeated many times for each fault, to show variations of displacement along the length of the fault. To compare each fault to its neighbors and see how strain was accommodated across a population of faults, we divided the faults into fault zones, and created new coordinate systems oriented along the central axis of each fault zone. We could then quantify the amount of fault overlap and add the displacement of overlapping faults to estimate the amount of strain accommodated in each zone. Faults in the southern portion of Padua have a strain of 0.031 ± 0.0097, central Padua exhibits a strain of 0.032 ± 0.012, and faults in northern Padua have a strain of 0.025 ± 0.0080. The western faults of Eurotas have a strain of 0.031 ± 0.011, while the eastern faults have a strain of 0.037 ± 0.025. Lastly, Clusium has a strain of 0.10 ± 0.029. We also calculated the ratio of maximum fault displacement vs. the length of the faults, and we found this ratio to be 0.019 when drawing a trend line through all the faults that were analyzed. D/L measurements performed on two faults on Europa using stereo topography showed a value of 0.021 (Nimmo and Schenk 2006), the only other icy satellite where this ratio has been measured. In contrast, faults on Earth have a D/L ratio of about 0.1 and Mars has a D/L ratio of about 0.01 (Schultz et al. 2006).
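    The projection correction described above can be illustrated with a short sketch. The vector conventions, frame, and function name below are our own assumptions for illustration; the actual ISIS-based workflow differs in detail.

        import numpy as np

        def vertical_displacement(measured_px_len, px_scale_m,
                                  scarp_dir_unit, surface_normal, cam_boresight):
            """Estimate vertical fault throw from a scarp length measured in the image.
            scarp_dir_unit: unit vector of the measured direction, lying in the surface plane.
            surface_normal, cam_boresight: unit vectors in the same body-fixed frame."""
            # Vector from scarp bottom to top, assuming the scarp face rises at
            # 45 degrees from the local surface along the measured direction.
            scarp_vec = (scarp_dir_unit + surface_normal) / np.sqrt(2.0)

            # Project onto the camera plane (component perpendicular to the boresight);
            # its length is the correction factor for a unit scarp vector.
            proj = scarp_vec - np.dot(scarp_vec, cam_boresight) * cam_boresight
            correction = np.linalg.norm(proj)

            true_scarp_len = measured_px_len * px_scale_m / correction
            # Vertical component (along the surface normal) of the 45-degree scarp vector.
            return true_scarp_len * np.dot(scarp_vec, surface_normal)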

  12. Seagrasses, Dredging and Light in Laguna Madre, Texas, U.S.A.

    NASA Astrophysics Data System (ADS)

    Onuf, Christopher P.

    1994-07-01

    Light reduction resulting from maintenance dredging was the suspected cause of large-scale loss of seagrass cover in deep parts of Laguna Madre between surveys conducted in 1965 and 1974. Additional changes to 1988, together with an analysis of dredging frequency and intensity for different parts of the laguna, were consistent with this interpretation. Intensive monitoring of the underwater light regime and compilation of detailed environmental data for 3 months before and 15 months after a dredging project in 1988 revealed reduced light attributable to dredging in four of eight subdivisions of the study area, including the most extensive seagrass meadow in the study area. Dredging effects were strongest close to disposal areas used during this project but still were detectable on transects >12 km from the nearest dredge disposal area. In the subdivision of the study area where most of the dredge disposal occurred, light attenuation was increased throughout the 15 months of observation after dredging. In the seagrass meadow and the transition zone at the outer edge of the meadow, effects were evident up to 10 months after dredging. Resuspension and dispersion events caused by wind-generated waves are responsible for the propagation of dredge-related turbidity over space and time in this system.

  13. Water-quality reconnaissance of Laguna Tortuguero, Vega Baja, Puerto Rico, March 1999-May 2000

    USGS Publications Warehouse

    Soler-Lopez, Luis; Guzman-Rios, Senen; Conde-Costas, Carlos

    2006-01-01

    The Laguna Tortuguero, a slightly saline to freshwater lagoon in north-central Puerto Rico, has a surface area of about 220 hectares and a mean depth of about 1.2 meters. As part of a water-quality reconnaissance, water samples were collected at about monthly and near bi-monthly intervals from March 1999 to May 2000 at four sites: three stations inside the lagoon and one station at the artificial outlet channel dredged in 1940, which connects the lagoon with the Atlantic Ocean. Physical characteristics that were determined from these water samples were pH, temperature, specific conductance, dissolved oxygen, dissolved oxygen saturation, and discharge at the outlet canal. Other water-quality constituents also were determined, including nitrogen and phosphorus species, organic carbon, chlorophyll a and b, plankton biomass, hardness, alkalinity as calcium carbonate, and major ions. Additionally, a diel study was conducted at three stations in the lagoon to obtain data on the diurnal variation of temperature, specific conductance, dissolved oxygen, and dissolved oxygen saturation. The data analysis indicates the water quality of Laguna Tortuguero complies with the Puerto Rico Environmental Quality Board standards and regulations.

  14. Origin and evolution of the Laguna Potrok Aike maar (Patagonia, Argentina)

    NASA Astrophysics Data System (ADS)

    Gebhardt, A. C.; de Batist, M.; Niessen, F.; Anselmetti, F. S.; Ariztegui, D.; Ohlendorf, C.; Zolitschka, B.

    2009-04-01

    Laguna Potrok Aike, a maar lake in southernmost Patagonia, is located at about 110 m a.s.l. in the Pliocene to late Quaternary Pali Aike Volcanic Field (Santa Cruz, southern Patagonia, Argentina) at about 52°S and 70°W, some 20 km north of the Strait of Magellan and approximately 90 km west of the city of Rio Gallegos. The lake is almost circular and bowl-shaped with a 100 m deep, flat plain in its central part and an approximate diameter of 3.5 km. Steep slopes separate the central plain from the lake shoulder at about 35 m water depth. At present, strong winds permanently mix the entire water column. The closed lake basin contains a subsaline water body and has only episodic inflows, with the most important episodic tributary situated on the western shore. Discharge is restricted to major snowmelt events. Laguna Potrok Aike is presently located at the boundary between the Southern Hemispheric Westerlies and the Antarctic Polar Front. The sedimentary regime is thus influenced by climatic and hydrologic conditions related to the Antarctic Circumpolar Current, the Southern Hemispheric Westerlies and sporadic outbreaks of Antarctic polar air masses. Previous studies demonstrated that closed lakes in southern South America are sensitive to variations in the evaporation/precipitation ratio and have experienced drastic lake level changes in the past, causing, for example, the desiccation of the 75 m deep Lago Cardiel during the Late Glacial. Multiproxy environmental reconstruction of the last 16 ka documents that Laguna Potrok Aike is highly sensitive to climate change. Based on an Ar/Ar age determination, the phreatomagmatic tephra that is assumed to relate to the Potrok Aike maar eruption was formed around 770 ka. Thus, Laguna Potrok Aike sediments contain almost 0.8 million years of climate history spanning several past glacial-interglacial cycles, making it a unique archive for non-tropical and non-polar regions of the Southern Hemisphere. In particular, variations of the hydrological cycle, changes in eolian dust deposition, frequencies and consequences of volcanic activities and other natural forces controlling climatic and environmental responses can be tracked throughout time. Laguna Potrok Aike has thus become a major focus of the International Continental Scientific Drilling Program. Drilling operations were carried out within PASADO (Potrok Aike Maar Lake Sediment Archive Drilling Project) in late 2008 and penetrated ~100 m into the lacustrine sediment. Laguna Potrok Aike is surrounded by a series of subaerial paleo-shorelines of modern to Holocene age that reach up to 21 m above the 2003 AD lake level. An erosional unconformity, which can be observed basin-wide along the lake shoulder at about 33 m below the 2003 AD lake level, marks the lowest lake level reached during Late Glacial to Holocene times. A high-resolution seismic survey revealed a series of buried, subaquatic paleo-shorelines that hold a record of the complex transgressional history of the past approximately 6800 years, which was temporarily interrupted by two regressional phases from approximately 5800 to 5400 and 4700 to 4000 cal BP. Seismic reflection and refraction data provide insights into the sedimentary infill and the underlying volcanic structure of Laguna Potrok Aike. Reflection data show undisturbed, stratified lacustrine sediments at least in the upper ~100 m of the sedimentary infill.
Two stratigraphic boundaries were identified in the seismic profiles (separating subunits I-ab, I-c and I-d) that are likely related to changes in lake level. Subunits I-ab and I-d are quite similar, even though velocities are enhanced in subunit I-d; this may point to cementation in subunit I-d. Subunit I-c is restricted to the central parts of the lake and thins out laterally. A velocity-depth model calculated from seismic refraction data reveals a funnel-shaped structure embedded in the sandstone rocks of the surrounding Santa Cruz Formation. This funnel structure is filled by lacustrine sediments of up to 370 m in thickness. These can be separated into two distinct subunits with (i) low acoustic velocities of 1500-1800 m s-1 in the upper part, and (ii) enhanced velocities of 2000-2350 m s-1 in the lower part. Below these sediments, a unit of probable volcaniclastic origin is observed (>2400 m s-1). This sedimentary succession is directly comparable to other well-studied sequences (e.g. Messel and Baruth maars, Germany), confirming phreatomagmatic maar explosions as the origin of Laguna Potrok Aike.

  15. A 20,000-year record of environmental change from Laguna Kollpa Kkota, Bolivia

    SciTech Connect

    Seltzer, G.O.; Abbott, M.B.

    1992-01-01

    Most records of paleoclimate in the Bolivian Andes date from the last glacial-to-interglacial transition. However, Laguna Kollpa Kkota and other lakes like it formed more than 20,000 yr BP, when glaciers retreated and moraines dammed the drainage of the valleys in which they are located. These lakes were protected from subsequent periods of glaciation because the headwalls of these valleys are below the level of the late-Pleistocene glacial equilibrium-line altitude. The chemical, mineral, and microfossil stratigraphies of these glacial lakes provide continuous records of environmental change for the last 20,000 years that can be used to address several problems in paleoclimate specific to tropical-subtropical latitudes. Preliminary results from Laguna Kollpa Kkota indicate that glacial equilibrium-line altitudes were never depressed more than 600 m during the last 20,000 years, suggesting that temperatures were reduced by only a few degrees Celsius over this time period. Sedimentation rates and the organic carbon stratigraphy of cores reflect an increase in moisture in the late Pleistocene just prior to the transition to a warmer and drier Holocene. The pollen and diatom concentrations in the sediments are sufficient to permit the high-resolution analyses needed to address whether or not there were climatic reversals during the glacial-to-interglacial transition.

  16. Fault diagnosis of analog circuits

    SciTech Connect

    Bandler, J.W.; Salama, A.E.

    1985-08-01

    In this paper, various fault location techniques in analog networks are described and compared. The emphasis is on the more recent developments in the subject. Four main approaches for fault location are addressed, examined, and illustrated using simple network examples. In particular, we consider the fault dictionary approach, the parameter identification approach, the fault verification approach, and the approximation approach. Theory and algorithms associated with these approaches are reviewed, and problems of their practical application are identified. In connection with the fault dictionary approach, we consider fault dictionary construction techniques, methods of optimum measurement selection, different fault isolation criteria, and efficient fault simulation techniques. Parameter identification techniques that utilize either linear or nonlinear systems of equations to identify all network elements are examined thoroughly. Under fault verification techniques we discuss node-fault diagnosis, branch-fault diagnosis, and subnetwork testability conditions, as well as combinatorial techniques, the failure bound technique, and the network decomposition technique. For the approximation approach we consider probabilistic methods and optimization-based methods. The artificial intelligence technique and the different measures of testability are also considered. The main features of the techniques considered are summarized in a comparative table. An extensive, but not exhaustive, bibliography is provided.
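    Of the four approaches, the fault dictionary approach lends itself most readily to a small illustration: candidate faults are simulated before test, their measurement signatures are stored, and an observed measurement is matched to the nearest stored signature. The sketch below uses hypothetical signatures and simple nearest-neighbour matching; it is not a method from the paper itself.

        import math

        # Fault dictionary: fault label -> measurement signature
        # (e.g. node voltages for a fixed test stimulus).  Values are hypothetical.
        dictionary = {
            "nominal":  [5.00, 2.50, 1.20],
            "R1 open":  [5.00, 4.80, 0.10],
            "R2 short": [5.00, 0.05, 1.10],
            "C1 leaky": [4.60, 2.10, 0.90],
        }

        def diagnose(measurement):
            """Return the dictionary entry whose signature is closest (Euclidean) to the measurement."""
            def dist(sig):
                return math.sqrt(sum((m - s) ** 2 for m, s in zip(measurement, sig)))
            return min(dictionary, key=lambda fault: dist(dictionary[fault]))

        print(diagnose([4.95, 4.70, 0.15]))   # -> "R1 open"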

  17. Winter distributions of North American Plovers in the Laguna Madre regions of Tamaulipas, Mexico and Texas, USA

    USGS Publications Warehouse

    Mabee, Todd J.; Plissner, Jonathan H.; Haig, Susan M.; Goossen, J.P.

    2001-01-01

    To determine the distribution and abundance of wintering plovers in the Laguna Madre of Texas and Tamaulipas, surveys were conducted in December 1997 and February 1998 along a 160 km stretch of barrier islands in Mexico and about 40 km of shoreline on South Padre Island, Texas. Altogether, 5,673 individuals, representing six plover species, were recorded during the surveys. Black-bellied Plovers Pluvialis squatarola were the most numerous (3,013 individuals), representing 53% of the total number of plovers observed. Numbers of Piping Charadrius melodus, Snowy C. alexandrinus, Semipalmated C. semipalmatus and Wilson's Plovers C. wilsonia were 739, 1,345, 561, and 13 birds, respectively. Most individuals (97%) of all species except Wilson's Plovers were observed on bayside flats of the barrier islands. Similar numbers of Piping Plovers were recorded at South Padre Island, Texas, and in the Laguna Madre de Tamaulipas. Over 85% of the individuals of each of the other species were found in the more extensively surveyed Mexican portion of the Laguna Madre. In Tamaulipas, most plover species were observed more often on algal flats than on any other substrate. These results provide evidence of the value of these systems as wintering areas for plover species and indicate the need for more extensive survey efforts to determine temporal and spatial variation in the distribution of these species within the Laguna ecosystem.

  18. Abnormal fault-recovery characteristics of the fault-tolerant multiprocessor uncovered using a new fault-injection methodology

    NASA Astrophysics Data System (ADS)

    Padilla, Peter A.

    1991-03-01

    An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.

  19. Abnormal fault-recovery characteristics of the fault-tolerant multiprocessor uncovered using a new fault-injection methodology

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1991-01-01

    An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.

  20. Methods for quantitatively determining fault slip using fault separation

    NASA Astrophysics Data System (ADS)

    Xu, S.-S.; Velasquillo-Martínez, L. G.; Grajales-Nishimura, J. M.; Murillo-Muñetón, G.; Nieto-Samaniego, A. F.

    2007-10-01

    Fault slip and fault separation are generally not equal to each other; however, they are geometrically related. The fault slip (S) is a vector with a magnitude, a direction, and a sense of movement. In this paper, a series of approaches is introduced to estimate quantitatively the magnitude and direction of the fault slip using fault separations. For the calculations, the known factors are the pitch of the slip lineations, the pitch of a marker cutoff, and the dip separation (Smd) or the strike separation (Smh) for one marker. The two main purposes of this work are: (1) to analyze the relationship between fault slip and fault separation when the slickenside lineations of a fault are known; and (2) to estimate the slip direction when Smd or Smh and the cutoff pitch are known for two non-parallel markers at one place (e.g., a point). We tested the approaches using an example from a mainly strike-slip fault at East Quantoxhead, United Kingdom, and another example from the Jordan Field, Ector County, Texas. We also estimated the relative errors in the apparent heave of normal faults from the Sierra de San Miguelito, central Mexico.
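    One commonly used geometric relation behind this kind of calculation (a textbook construction, not necessarily the exact formulation of this paper) links the net slip S to the strike separation and the pitches, measured in the fault plane, of the slickenside lineation (gamma) and the marker cutoff line (beta): S = Smh * sin(beta) / sin(beta - gamma). A sketch under those assumptions:

        import math

        def slip_from_strike_separation(smh, lineation_pitch_deg, cutoff_pitch_deg):
            """Net slip magnitude from the strike separation (smh) of one marker.
            Pitches are measured in the fault plane from the strike direction.
            Standard relation S = Smh * sin(beta) / sin(beta - gamma); illustrative only."""
            gamma = math.radians(lineation_pitch_deg)   # pitch of slickenside lineation
            beta = math.radians(cutoff_pitch_deg)       # pitch of marker cutoff line
            if math.isclose(math.sin(beta - gamma), 0.0, abs_tol=1e-9):
                raise ValueError("cutoff parallel to slip direction: slip is indeterminate")
            return smh * math.sin(beta) / math.sin(beta - gamma)

        # Example: 120 m strike separation, lineation pitch 10 deg, cutoff pitch 60 deg
        print(slip_from_strike_separation(120.0, 10.0, 60.0))   # ~135.7 m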

  1. Holocene faulting on the Mission fault, northwest Montana

    SciTech Connect

    Ostenaa, D.A.; Klinger, R.E.; Levish, D.R.

    1993-04-01

    South of Flathead Lake, fault scarps on late Quaternary surfaces are nearly continuous for 45 km along the western flank of the Mission Range. On late Pleistocene alpine lateral moraines, scarp heights reach a maximum of 17 m. Scarp heights on post-glacial Lake Missoula surfaces range from 2.6-7.2 m, and maximum scarp angles range from 10°-24°. The stratigraphy exposed in seven trenches across the fault demonstrates that the post-glacial Lake Missoula scarps resulted from at least two surface-faulting events. Larger scarp heights on late Pleistocene moraines suggest a possible third event. This yields an estimated recurrence of 4-8 kyr. Analyses of scarp profiles show that the age of the most recent surface faulting is middle Holocene, consistent with stratigraphic evidence found in the trenches. Rupture length and displacement imply earthquake magnitudes of 7 to 7.5. Previous studies have not identified geologic evidence of late Quaternary surface faulting in the Rocky Mountain Trench or on faults north of the Lewis and Clark line despite abundant historic seismicity in the Flathead Lake area. In addition to the Mission fault, reconnaissance studies have located late Quaternary fault scarps along portions of faults bordering the Jocko and Thompson Valleys. These are the first documented late Pleistocene/Holocene faults north of the Lewis and Clark line in Montana and should greatly revise estimates of earthquake hazards in this region.

  2. Managing Fault Management Development

    NASA Technical Reports Server (NTRS)

    McDougal, John M.

    2010-01-01

    As the complexity of space missions grows, development of Fault Management (FM) capabilities is an increasingly common driver for significant cost overruns late in the development cycle. FM issues and the resulting cost overruns are rarely caused by a lack of technology, but rather by a lack of planning and emphasis by project management. A recent NASA FM Workshop brought together FM practitioners from a broad spectrum of institutions, mission types, and functional roles to identify the drivers underlying FM overruns and recommend solutions. They identified a number of areas in which increased program and project management focus can be used to control FM development cost growth. These include up-front planning for FM as a distinct engineering discipline; managing different, conflicting, and changing institutional goals and risk postures; ensuring the necessary resources for a disciplined, coordinated approach to end-to-end fault management engineering; and monitoring FM coordination across all mission systems.

  3. Randomness fault detection system

    NASA Technical Reports Server (NTRS)

    Russell, B. Don (Inventor); Aucoin, B. Michael (Inventor); Benner, Carl L. (Inventor)

    1996-01-01

    A method and apparatus are provided for detecting a fault on a power line carrying a line parameter such as a load current. The apparatus monitors and analyzes the load current to obtain an energy value. The energy value is compared to a threshold value stored in a buffer. If the energy value is greater than the threshold value, a counter is incremented. If the energy value is greater than a high-value threshold or less than a low-value threshold, then a second counter is incremented. If the difference between two subsequent energy values is greater than a constant, then a third counter is incremented. A fault signal is issued if the first counter is greater than a counter limit value and either the second counter is greater than a second limit value or the third counter is greater than a third limit value.
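    The counting logic described above translates almost directly into code. The sketch below follows the abstract's description; all numeric thresholds and limits are hypothetical illustration values, since the record does not specify them.

        def arcing_fault_detector(energy_values, threshold, high, low, delta_const,
                                  limit1, limit2, limit3):
            """Return True if the counter logic described in the abstract signals a fault.
            Threshold and limit values are hypothetical, for illustration only."""
            c1 = c2 = c3 = 0
            prev = None
            for e in energy_values:
                if e > threshold:
                    c1 += 1                     # first counter: energy above threshold
                if e > high or e < low:
                    c2 += 1                     # second counter: outside the high/low band
                if prev is not None and abs(e - prev) > delta_const:
                    c3 += 1                     # third counter: erratic change between samples
                prev = e
            return c1 > limit1 and (c2 > limit2 or c3 > limit3)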

  4. An experiment in software fault elimination and fault tolerance

    SciTech Connect

    Shimeall, T.J.

    1989-01-01

    Three primary approaches have been taken in developing methods to improve software reliability: fault avoidance, fault elimination and fault tolerance. This study investigates the error detection obtained by application of two of these approaches, fault tolerance and fault elimination, on a set of independently developed versions of a program. Different fault detection techniques following each approach are used to provide a broad exposure of each approach on the versions. The fault detection techniques chosen were multi-version voting, programmer-inserted run-time assertions, testing, code reading of uncommented code by stepwise abstraction, and static data flow analysis. Voting and run-time assertions are most commonly associated with fault tolerance. Testing, code reading and static data flow analysis are most commonly associated with fault elimination. After application of the techniques following each approach, the errors detected and the circumstances of detection were analyzed as a means of characterizing the differences between the approaches. The results of this study provide insight on a series of research questions. The results demonstrate weaknesses in the fault tolerance approach and specifically in the multi-version voting method. In particular, the results demonstrate that voting of untested software may produce an improvement in the probability of producing a correct result that is insufficient to justify such use in systems where reliability is important. Voting is not a substitute for testing. Examination of the faults detected in this experiment shows that the majority of faults were detected by only one technique. The results of this study suggest a series of questions for further research. For example, research is needed on how to broaden the classes of faults detected by each technique.

  5. Triggered surface slips in southern California associated with the 2010 El Mayor-Cucapah, Baja California, Mexico, earthquake

    USGS Publications Warehouse

    Rymer, Michael J.; Treiman, Jerome A.; Kendrick, Katherine J.; Lienkaemper, James J.; Weldon, Ray J.; Bilham, Roger; Wei, Meng; Fielding, Eric J.; Hernandez, Janis L.; Olson, Brian P.E.; Irvine, Pamela J.; Knepprath, Nichole; Sickler, Robert R.; Tong, Xiaopeng; Siem, Martin E.

    2011-01-01

    Triggered slip in the Yuha Desert area occurred along more than two dozen faults, only some of which were recognized before the April 4, 2010, El Mayor-Cucapah earthquake. From east to northwest, slip occurred in seven general areas: (1) in the Northern Centinela Fault Zone (newly named), (2) along unnamed faults south of Pinto Wash, (3) along the Yuha Fault (newly named), (4) along both east and west branches of the Laguna Salada Fault, (5) along the Yuha Well Fault Zone (newly revised name) and related faults between it and the Yuha Fault, (6) along the Ocotillo Fault (newly named) and related faults to the north and south, and (7) along the southeasternmost section of the Elsinore Fault. Faults that slipped in the Yuha Desert area include northwest-trending right-lateral faults, northeast-trending left-lateral faults, and north-south faults, some of which had dominantly vertical offset. Triggered slip along the Ocotillo and Elsinore Faults appears to have occurred only in association with the June 14, 2010 (Mw 5.7), aftershock. This aftershock also resulted in slip along other faults near the town of Ocotillo. Triggered offset on faults in the Yuha Desert area was mostly less than 20 mm, with three significant exceptions, including slip of about 50-60 mm on the Yuha Fault, 40 mm on a fault south of Pinto Wash, and about 85 mm on the Ocotillo Fault. All triggered slips in the Yuha Desert area occurred along preexisting faults, whether previously recognized or not.

  6. Fault tolerant control laws

    NASA Technical Reports Server (NTRS)

    Ly, U. L.; Ho, J. K.

    1986-01-01

    A systematic procedure for the synthesis of fault-tolerant control laws robust to actuator failure has been presented. Two design methods were used to synthesize fault-tolerant controllers: the conventional LQ design method and a direct feedback controller design method, SANDY. The latter method is used primarily to streamline the full-state LQ feedback design into a practical, implementable output-feedback controller structure. To achieve robustness to control actuator failure, the redundant surfaces are properly balanced according to their control effectiveness. A simple gain schedule based on the landing gear up/down logic, involving only three gains, was developed to handle three design flight conditions: Mach 0.25 and Mach 0.60 at 5,000 ft and Mach 0.90 at 20,000 ft. The fault-tolerant control law developed in this study provides good stability augmentation and performance for the relaxed static stability aircraft. The augmented aircraft responses are found to be invariant to the presence of a failure. Furthermore, single-loop stability margins of +6 dB in gain and +30 deg in phase were achieved along with -40 dB/decade rolloff at high frequency.

  7. Coherent structures on fault surfaces

    NASA Astrophysics Data System (ADS)

    Kirkpatrick, J. D.; Brodsky, E. E.

    2012-12-01

    Fault zones often contain structures such as corrugations, bumps or lenses that appear to have a regular spacing and/or preferred length or size. The existence of preferred scales is important for frictional processes and earthquake nucleation. However, the power spectral density of fault surface roughness is anisotropic self-affine over a wide range of length scales. No break in scaling is observed in the power spectrum, suggesting that the surfaces do not contain any preferred length scales. To reconcile these paradoxical observations, we calculate the power spectral density of the strike-slip Corona Heights fault surface (San Francisco) from ground-based LiDAR data, and examine the phases calculated from the Fourier transform. In comparison to synthetic faults, the phases defining the Corona Heights fault are non-random, consistent with the sense of curvature of large (5-10 m) bumps or mullions on the fault surface. Bumps are bounded by anastomosing fault surfaces, and are defined by branch lines. Furthermore, corrugations regularly spaced at ~0.1-0.3 m in the slip-perpendicular direction are also defined by the fault curvature. We suggest the phases provide information regarding structure in the fault surface topography that is not captured by the power spectrum alone. The non-random phase distribution reflects the coherence of these structures over the extent of the fault.

  8. Salt lake Laguna de Fuente de Piedra (S-Spain) as Late Quaternary palaeoenvironmental archive

    NASA Astrophysics Data System (ADS)

    Höbig, Nicole; Melles, Martin; Reicherter, Klaus

    2014-05-01

    This study deals with Late Quaternary palaeoenvironmental variability in Iberia reconstructed from terrestrial archives. In southern Iberia, endorheic basins of the Betic Cordilleras are relatively common and contain salt or fresh-water lakes due to subsurface dissolution of Triassic evaporites. Such precipitation- or groundwater-fed lakes (called lagunas in Spanish) are vulnerable to changes in hydrology, climate or anthropogenic modifications. The largest Spanish salt lake, Laguna de Fuente de Piedra (Antequera region, S-Spain), has been investigated and serves as a palaeoenvironmental archive for the Late Pleistocene to Holocene time interval. Several sediment cores taken during drilling campaigns in 2012 and 2013 have revealed sedimentary sequences (up to 14 m in length) along the shoreline. A multi-proxy study, including sedimentology, geochemistry and physical properties (magnetic susceptibility), has been performed on the cores. The sedimentary record is highly variable, comprising decimetre-thick silty variegated clay deposits, laminated evaporites, and even few-centimetre-thick massive gypsum crystals (i.e., selenites). XRF analysis was focussed on valuable palaeoclimatic proxies (e.g., S, Zr, Ti, and element ratios) to identify the composition and provenance of the sediments and to delineate palaeoenvironmental conditions. Initial age control has been obtained by AMS radiocarbon dating. The records start with approximately 2-3 m of Holocene deposits and reach back to the middle of MIS 3 (GS-3). The sequences contain changes in sedimentation rates as well as colour changes, which can be summarized as brownish-beige deposits at the top and more greenish-grey deposits below, as well as highly variegated lamination and selenites below ca. 6 m depth. The Younger Dryas, Bølling/Allerød, and the so-called Mystery Interval/Last Glacial Maximum have presumably been identified in the sediment cores and aligned to other climate records. In general, the cores of the Laguna de Fuente de Piedra show cyclic deposition including evaporitic sequences throughout the Holocene and Late Pleistocene, indicating higher fluxes and reworking of organic/inorganic carbon as well as other indicative proxy elements like Ti, Zr and the Ca/Sr ratio during Late Pleistocene times. In order to achieve a better understanding of the palaeoenvironmental history of the study area, further studies are planned which encompass biological/palaeontological indicators (e.g., pollen, diatoms) as well as other geochemical and isotopic techniques on the evaporitic deposits, such as fluid inclusion analysis.

  9. Handling Software Faults with Redundancy

    NASA Astrophysics Data System (ADS)

    Carzaniga, Antonio; Gorla, Alessandra; Pezzè, Mauro

    Software engineering methods can increase the dependability of software systems, and yet some faults escape even the most rigorous and methodical development process. Therefore, to guarantee high levels of reliability in the presence of faults, software systems must be designed to reduce the impact of the failures caused by such faults, for example by deploying techniques to detect and compensate for erroneous runtime conditions. In this chapter, we focus on software techniques to handle software faults, and we survey several such techniques developed in the area of fault tolerance and more recently in the area of autonomic computing. Since practically all techniques exploit some form of redundancy, we consider the impact of redundancy on the software architecture, and we propose a taxonomy centered on the nature and use of redundancy in software systems. The primary utility of this taxonomy is to classify and compare techniques to handle software faults.

  10. Fault management for data systems

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Iverson, David L.; Patterson-Hine, F. Ann

    1993-01-01

    Issues related to automating the process of fault management (fault diagnosis and response) for data management systems are considered. Substantial benefits are to be gained by successful automation of this process, particularly for large, complex systems. The use of graph-based models to develop a computer assisted fault management system is advocated. The general problem is described and the motivation behind choosing graph-based models over other approaches for developing fault diagnosis computer programs is outlined. Some existing work in the area of graph-based fault diagnosis is reviewed, and a new fault management method which was developed from existing methods is offered. Our method is applied to an automatic telescope system intended as a prototype for future lunar telescope programs. Finally, an application of our method to general data management systems is described.
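    As a toy illustration of the graph-based idea (not the authors' actual method), component dependencies can be represented as a directed graph and, given an observed failure, walked upstream to collect candidate root causes. The component names below are hypothetical, loosely echoing the telescope example in the abstract.

        from collections import deque

        # Directed dependency graph: component -> components it depends on (hypothetical).
        depends_on = {
            "telescope_pointing": ["mount_controller", "star_tracker"],
            "mount_controller":   ["power_bus"],
            "star_tracker":       ["power_bus", "ccd"],
            "power_bus":          [],
            "ccd":                [],
        }

        def candidate_causes(failed_component):
            """Breadth-first walk upstream from an observed failure to list every
            component whose fault could explain it."""
            causes, queue = set(), deque([failed_component])
            while queue:
                node = queue.popleft()
                for dep in depends_on.get(node, []):
                    if dep not in causes:
                        causes.add(dep)
                        queue.append(dep)
            return causes

        print(candidate_causes("telescope_pointing"))
        # -> {'mount_controller', 'star_tracker', 'power_bus', 'ccd'}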

  11. Response of shoal grass, Halodule wrightii, to extreme winter conditions in the Lower Laguna Madre, Texas

    USGS Publications Warehouse

    Hicks, D.W.; Onuf, C.P.; Tunnell, J.W.

    1998-01-01

    Effects of a severe freeze on the shoal grass, Halodule wrightii, were documented through analysis of temporal and spatial trends in below-ground biomass. The coincidence of the second lowest temperature (-10.6°C) in 107 years of record, 56 consecutive hours below freezing, high winds and extremely low water levels exposed the Laguna Madre, TX, to the most severe cold stress in over a century. H. wrightii tolerated this extreme freeze event. Annual pre- and post-freeze surveys indicated that below-ground biomass estimated from volume was unaffected by the freeze event, nor was there any post-freeze change in biomass among intertidal sites directly exposed to freezing air temperatures relative to subtidal sites which remained submerged during the freezing period.

  12. Late Pleistocene-early Holocene karst features, Laguna Madre, south Texas: A record of climate change

    SciTech Connect

    Prouty, J.S.

    1996-09-01

    A Pleistocene coquina bordering Laguna Madre, south Texas, contains well-developed late Pleistocene-early Holocene karst features (solution pipes and caliche crusts) unknown elsewhere from coastal Texas. The coquina accumulated in a localized zone of converging longshore Gulf currents along a Gulf beach. The crusts yield 14C dates of 16,660 to 7630 B.P., with dates of individual crust horizons becoming younger upwards. The karst features provide evidence of regional late Pleistocene-early Holocene climate changes. Following the latest Wisconsinan lowstand 18,000 B.P., the regional climate was more humid and promoted karst weathering. Partial dissolution and reprecipitation of the coquina formed initial caliche crust horizons; the crust later thickened through accretion of additional carbonate laminae. With the commencement of the Holocene approximately 11,000 B.P., the regional climate became more arid. This inhibited karstification of the coquina, and caliche crust formation finally ceased about 7000 B.P.

  13. Water quality mapping of Laguna de Bay and its watershed, Philippines

    NASA Astrophysics Data System (ADS)

    Saito, S.; Nakano, T.; Shin, K.; Maruyama, S.; Miyakawa, C.; Yaota, K.; Kada, R.

    2011-12-01

    Laguna de Bay (or Laguna Lake) is the largest lake in the Philippines, with a surface area of 900 km2 and a watershed area of 2920 km2 (Santos-Borja, 2005). It is located in the southwestern part of Luzon Island, and its watershed contains 5 provinces, 49 municipalities and 12 cities, including parts of Metropolitan Manila. The water quality in Laguna de Bay has significantly deteriorated due to pollution from soil erosion, effluents from chemical industries, and household discharges. In this study, we performed multiple-element analysis of water samples in the lake and its watersheds for chemical mapping, which allows us to evaluate the regional distribution of elements including toxic heavy metals such as Cd, Pb and As. We collected water samples from 24 locations in Laguna de Bay and 160 locations on rivers in the watersheds. The river sampling sites are mainly in the downstream reaches around the lake and cover urbanized to rural areas. We also collected well water samples from 17 locations, spring water samples from 10 locations, and tap water samples from 21 locations in order to compare their data with the river and lake samples and to assess the quality of household-use waters. The samples were collected in the dry season of the study area (March 13-17 and May 2-9, 2011). The analysis was performed at the Research Institute for Humanity and Nature (RIHN), Japan. The concentrations of the major components (Cl, NO3, SO4, Ca, Mg, Na, and K) dissolved in the samples were determined with an ion chromatograph (Dionex Corporation ICS-3000). We also analyzed major and trace elements (Li, B, Na, Mg, Al, Si, P, K, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Ga, Ge, As, Se, Rb, Sr, Y, Zr, Mo, Ag, Cd, Sn, Sb, Cs, Ba, La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu, W, Pb and U) with inductively coupled plasma-mass spectrometry (ICP-MS, Agilent Technologies 7500cx). The element concentrations of the rivers show remarkable regional variations. For example, heavy metals such as Ni, Cd and Pb are markedly higher in the western region than in the eastern region, implying that the chemical variation reflects the urbanization of the western region. On the other hand, As content is relatively high in the south of the lake and in some inflowing rivers in that area. Higher concentrations of As are also observed in the spring water samples in the area. Therefore, the As in the area is probably of natural rather than anthropogenic origin. Although river water samples in the western watersheds have high concentrations of heavy metals, the lake water samples in the western area of the lake are not remarkably high in heavy metals. This inconsistency implies that the heavy metals that flowed into the western part of the lake from heavy-metal-enriched rivers have precipitated to the bottom of the lake. The polluted sediments may in turn contaminate the benthos, increasing the risk of food contamination through bioaccumulation in the ecosystem.

  14. Mega displacement waves in glacial lakes: evidence from Laguna Safuna Alta, Peru

    NASA Astrophysics Data System (ADS)

    Reynolds, J. M.; Heald, A. P.; Zapata, M.

    2003-04-01

    An anomalously large displacement wave and overtopping event have been investigated at Laguna Safuna Alta, Cordillera Blanca, Peru. On 22nd April 2002, 10M m^3 or more of rock fell from the western valley slope into the southern end of the lake and onto the lower 100 m of the glacier. Evidence from the landslide scar indicates that the mechanism of failure was principally flexural toppling of quartzites, mudstones and sandstones with beds of anthracite. Bathymetric surveys taken before and after the landslide show that about 6.4M m^3 of material entered the lake during the event. The resulting displacement wave was 80--100 m high, overtopping the end moraine, which is 80 m at its lowest point. Oscillating rebound waves had amplitudes up to around 80 m. The initial displacement wave and the largest rebound waves caused erosion of the inner and outer flanks of the moraine, damaged lake security structures, and killed cattle that had been grazing in the area; but the moraine dam remained substantially intact and the resulting flood was largely contained within a lower lake, Laguna Safuna Baja. Active backscarps and tension cracks in the slope adjacent to the rockfall indicate that a further 5M m^3 of rock may fail. Modelling the steady state stability of the now weakened moraine dam provides factors of safety below unity against a large-scale failure of the inner slope of the moraine. The moraine dam cannot be expected to resist a second large displacement wave and mitigation strategies are therefore being developed. The height of the wave produced during this event was an order of magnitude greater than values commonly reported and designed for in glacial lake remediation works.

  15. Foraminifera Assemblages in Laguna Torrecilla- Puerto Rico: an Environmental Micropaleontology Approach.

    NASA Astrophysics Data System (ADS)

    Martinez-Colon, M.; Hallock, P.

    2006-12-01

    Foraminiferal assemblages (Ammonia beccarii cf. typica - A. beccarii cf. tepida - Triloculina spp.) from 30 cm cores taken at Laguna Torrecilla, a polluted estuary, contain a relatively high occurrence of deformed tests (up to 13%). Such deformities (i.e., double tests, aberrant tests) are mostly found within the miliolids (Triloculina spp.), while the rotaliids (Ammonia spp.) show fewer deformities (i.e., extended proloculi, stunted tests). Preliminary results of heavy metal analysis (ACTLABS Laboratories-Canada) from bulk sediment samples show concentrations below toxicity levels except for copper. Copper concentrations (50-138 ppm) fall between the ERL (Effect Range Low) and ERM (Effect Range Median) values, representing possible to occasional detrimental effects to the aquatic environment. Organic matter content (loss-on-ignition) ranging from 10-23%, coupled with pyritized tests and framboidal pyrite, indicates low oxygen conditions. Ammonia beccarii cf. typica and A. beccarii cf. tepida showed no significant variation in size with sample depth. However, forma tepida was not found in the intervals with the highest organic concentrations. The abundance of A. beccarii, a species highly resistant to environmental stresses, appears to be related to hypoxia events. Ammonia-Elphidium index values, a previously established indicator of hypoxia, are 80-100, reflecting the lack of Elphidium spp. Apparently, reduced oxygen conditions at Laguna Torrecilla exceeded the tolerance levels of Elphidium spp. In addition, diversity indices show that there has been temporal variability in the abundance and distribution of foraminifera. Foraminiferal assemblages, coupled with diversity indices and organic matter content, indicate that Torrecilla Lagoon has undergone several episodes of hypoxia. Such conditions could explain the relatively high percentage of test deformities, although elevated copper concentrations may be a compounding factor.

  16. Fault architecture, fault rocks and fault rock properties in carbonate rocks

    NASA Astrophysics Data System (ADS)

    Bauer, Helene; Decker, Kurt

    2010-05-01

    The current study addresses a comparative analysis of fault zones in limestone and dolomite rocks, comparing the architecture of fault core and damage zones, fault rocks, and the hydrodynamic properties of faults exposed in the Upper Triassic Wetterstein Fm. of the Hochschwab Massif (Austria). All analysed faults are sinistral strike-slip faults, which formed at shallow crustal depth during the process of eastward lateral extrusion of the Eastern Alps in the Oligocene and Lower Miocene. Fault zones in limestone tend to be relatively narrow zones with distinct fault core and damage zones. Fault cores, which include the principal slip surface of the fault, are characterized by cataclastic fault rock associated with slickensides separating strands of cataclasite from surrounding host rock or occurring between different types of cataclasite. Cataclasites differ in terms of fragment size, matrix content and the angularity of fragments. Cataclasite fabrics indicate progressive cataclasis and substantial displacement across the fault rock. Fault core heterogeneity tends to decrease within more evolved (higher displacement) faults. In all fault cores, cataclasites are localized within strands which connect to geometrically complex anastomosing volumes of fault rock. The 3D geometry of such fault cores is difficult to resolve on the outcrop scale. Besides cataclastic flow, pressure solution overprinting cataclastic fabrics could be documented within the fault zones. Damage zones in limestone fault zones are characterized by intensively fractured (jointed) host rock and dilatation breccias, indicating dilatation processes and peripheral wall rock weakening accompanying the growth of the fault zone. Dilatation breccias with high volumes of carbonate cement indicate that these processes are related to high fluid pressure and the percolation of large volumes of fluid. Different parts of the damage zones were differentiated on the basis of variable fracture densities. Fracture densities (P32, in m² of joint surface per m³ of rock) generally vary along all investigated faults. They are especially high in more evolved (higher displacement) fault zones, where they are associated with large-scale Riedel shears, and in the parts of the damage zones next to the fault cores. The assessment of the abundance of small-scale fractures uses fracture facies as an empirical classification providing semi-quantitative estimates of fracture density and abundance. Different units were assigned to fracture facies 1 to 4, with fracture facies 4 indicating the highest fracture density. Fault zones in dolomite tend to have several fault cores localized within wider zones of fractured wall rock (damage zones), even at low strain. Compared to fault zones with similar displacement in limestone, damage zones in dolomite tend to be wider and have higher fracture densities. Dilatation breccias are more abundant. A clear separation of fault core and damage zone is more difficult. Damage zones observed at the lateral (mode III) tips of the analysed strike-slip faults show that hydraulic fracturing and fluid flow through the propagating fault are of major importance for its evolution. A typical transition from the wall rock ahead of the propagating fault to the core of the slipped fault includes: densely jointed wall rock, wall rock with abundant cement-filled tension gashes, dilatation breccia, and cataclasite reworking both dilatation breccia and wall rock. 
The detailed documentation of the different fault zone units is supplemented by porosity measurements in order to assess the hydrogeological properties of the fault zones. High-permeability units are located primarily in the damage zones, which are characterized by high fracture densities. Porosity measurements on fault rocks showed the highest porosity (up to 6%) for fractured wall rocks (fracture facies 4) and dilatation breccias (porosity of undeformed wall rock: 1.5% average, 2% maximum). Thin sections show that most of the porosity is carried by uncemented fractures. Fracture porosity is therefore the controlling factor of fault zone permeability. The different types of cataclasite in fault cores show low intra-granular porosities (average 2.5%) and very low fracture density. They are therefore classified as low-permeability units.

  17. Unrest within a large rhyolitic magma system at Laguna del Maule volcanic field (Chile) from 2007 through 2013: geodetic measurements and numerical models

    NASA Astrophysics Data System (ADS)

    Le Mevel, H.; Cordova, L.; Ali, S. T.; Feigl, K. L.; DeMets, C.; Williams-Jones, G.; Tikoff, B.; Singer, B. S.

    2013-12-01

    The Laguna del Maule (LdM) volcanic field is remarkable for its unusual concentration of post-glacial rhyolitic lava coulées and domes that erupted between 25 and 2 thousand years ago. Covering more than 100 square kilometers, they erupted from 24 vents encircling a lake basin approximately 20 km in diameter on the range crest of the Andes. Geodetic measurements at the LdM volcanic field show rapid uplift since 2007 over an area more than 20 km in diameter that is centered on the western portion of the young rhyolite domes. By quantifying this active deformation and its evolution with time, we aim to investigate the storage conditions and dynamic processes in the underlying rhyolitic reservoir that drive the ongoing inflation. Analyzing interferometric synthetic aperture radar (InSAR) data, we track the rate of deformation. The rate of vertical uplift is negligible from 2003 to 2004, accelerates from at least 200 mm/yr in 2007 to more than 300 mm/yr in 2012, and then decreases to 200 mm/yr in early 2013. To describe the deformation, we use a simple model that approximates the source as an 8 km by 6 km sill at a depth of 5 km, assuming a rectangular dislocation in a half space with uniform elastic properties. Between 2007 and 2013, the modeled sill increased in volume by at least 190 million cubic meters. Four continuous GPS stations installed in April 2012 around the lake confirm this extraordinarily high rate of vertical uplift and a substantial rate of radial expansion. As of June 2013, the rapid deformation persists in the InSAR and GPS data. To describe the spatial distribution of material properties at depth, we are developing a model using the finite element method. This approach can account for geophysical observations, including magneto-telluric measurements, gravity surveys, and earthquake locations. It can also calculate changes in the local stress field. In particular, a large increase in stress in the magma chamber roof could lead to the initiation and/or reactivation of the ring faults. Potential evidence for fault reactivation is the detection of diffuse soil degassing of CO2 with concentrations reaching 5-7% near the center of deformation. We therefore consider several hypotheses for the processes driving the deformation, including: (1) an intrusion of basalt into the base of a melt-rich layer of rhyolite leading to heating, bubble growth and a subsequent increase in pressure in the reservoir, and/or (2) inflation of a hydrothermal system above the rhyolite melt layer.
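    As a rough consistency check on the numbers quoted above (values taken from the abstract; the uniform-opening assumption is ours, not the authors'), the stated volume increase implies an average sill opening of roughly 4 m over 2007-2013:

        # Average opening implied by the modeled sill, assuming uniform opening
        volume_change = 190e6          # m^3, 2007-2013 (from the abstract)
        sill_area = 8000.0 * 6000.0    # m^2, 8 km by 6 km rectangular dislocation
        print(volume_change / sill_area)          # ~4.0 m of total opening
        print(volume_change / sill_area / 6.0)    # ~0.66 m/yr averaged over six years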

  18. Fault welding by pseudotachylyte generation

    NASA Astrophysics Data System (ADS)

    Mitchell, T. M.; Toy, V. G.; Di Toro, G.; Renner, J.

    2014-12-01

    During earthquakes, frictional melts can localize on slip surfaces and dramatically weaken faults by melt lubrication. Once seismic slip is arrested, the melt cools and solidifies to form pseudotachylyte (PST), the presence of which is commonly used to infer earthquake slip on ancient exhumed faults. Little is known about the effect of solidified melt on the strength of faults directly preceding a subsequent earthquake. We performed triaxial deformation experiments on cores of tonalite (Gole Larghe fault zone, N. Italy) and mylonite (Alpine fault, New Zealand) in order to assess the strength of PST-bearing faults in the lab. Three types of sample were prepared for each rock type: intact, sawcut, and PST-bearing. Samples were cored so that the sawcut, PST and foliation planes were oriented at 35° to the length of the core and to the direction of σ1, i.e., a favorable orientation for reactivation. This choice of samples allowed us to compare the strength of a 'pre-earthquake' fault (sawcut) to that of a 'post-earthquake' fault with solidified frictional melt, and to assess their strength relative to intact samples. Our results show that PST veins effectively weld fault surfaces together, allowing previously faulted rocks to regain cohesive strengths comparable to that of intact rock. Shearing of the PST is not favored, and subsequent failure and slip are accommodated on new faults nucleating at other zones of weakness. Thus, the mechanism of coseismic weakening by melt lubrication does not necessarily facilitate long-term interseismic deformation localization, at least at the scale of these experiments. In natural fault zones, PSTs are often found distributed over multiple adjacent fault planes or other zones of weakness such as foliation planes. We also modeled the temperature distribution in and around a PST using an approximation for cooling of a thin, infinite sheet by conduction perpendicular to its margins at ambient temperatures commensurate with the depth of PST formation. Results indicate that such PSTs would have cooled below their solidus in tens of seconds, leading to fault welding in under a minute. Cooled, solidified melt patches can potentially act as asperities on faults, where faults can cease to be zones of weakness.
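    The "thin infinite sheet cooling by conduction" approximation mentioned above has a standard analytical form (e.g. the one-dimensional slab solution of Carslaw and Jaeger). The sketch below uses illustrative values for vein thickness, melt and host temperatures, and thermal diffusivity, which are our assumptions rather than the experimental parameters; it shows how quickly a millimetre-scale vein drops toward host-rock temperature.

        import math

        def sheet_temperature(x, t, half_width, t_melt, t_host, kappa=1e-6):
            """Temperature at distance x (m) from the centre of an initially molten
            sheet of half-width half_width (m) after time t (s), cooling by conduction
            into host rock at t_host (1-D slab solution; illustrative values)."""
            s = 2.0 * math.sqrt(kappa * t)
            return t_host + 0.5 * (t_melt - t_host) * (
                math.erf((half_width - x) / s) + math.erf((half_width + x) / s))

        # Centre of a 2 mm thick vein at 1450 C cooling into 250 C host rock:
        for t in (1.0, 10.0, 60.0):
            print(t, "s:", round(sheet_temperature(0.0, t, 1e-3, 1450.0, 250.0), 1), "C")
        # drops below any plausible solidus within seconds for this thin example;
        # thicker veins take correspondingly longer (tens of seconds).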

  19. Fault-tolerant processing system

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L. (Inventor)

    1996-01-01

    A fault-tolerant, fiber optic interconnect, or backplane, serves as a via for data transfer between modules. Fault tolerance algorithms are embedded in the backplane by dividing the backplane into a read bus and a write bus and placing a redundancy management unit (RMU) between the read bus and the write bus, so that all data transmitted by the write bus is subjected to the fault tolerance algorithms before the data is passed for distribution to the read bus. The RMU provides both backplane control and fault tolerance.
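
    The abstract does not spell out which fault tolerance algorithms the RMU applies, so the following is only a conceptual sketch of one common choice: a majority vote over redundant copies of each data word arriving on the write bus, before the agreed value is released to the read bus.

      from collections import Counter

      # Conceptual sketch only: the patent abstract does not detail the RMU's
      # algorithms, so this illustrates a generic choice -- majority voting over
      # redundant copies of a word from the write bus before release to the read bus.
      def rmu_vote(copies):
          """Return the majority value of redundant data words, plus a fault flag."""
          counts = Counter(copies)
          value, votes = counts.most_common(1)[0]
          fault_detected = votes < len(copies)   # any disagreement implies a fault
          return value, fault_detected

      # Example: three redundant transmissions of the same word, one corrupted.
      word, fault = rmu_vote([0xA5, 0xA5, 0x25])
      print(hex(word), "fault detected:", fault)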

  20. Perspective View, Garlock Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    California's Garlock Fault, marking the northwestern boundary of the Mojave Desert, lies at the foot of the mountains, running from the lower right to the top center of this image, which was created with data from NASA's Shuttle Radar Topography Mission (SRTM), flown in February 2000. The data will be used by geologists studying fault dynamics and landforms resulting from active tectonics. These mountains are the southern end of the Sierra Nevada, and the prominent canyon emerging at the lower right is Lone Tree Canyon. In the distance, the San Gabriel Mountains cut across from the left side of the image. At their base lies the San Andreas Fault, which meets the Garlock Fault near the left edge at Tejon Pass. The dark linear feature running from lower right to upper left is State Highway 14, leading from the town of Mojave in the distance to Inyokern and the Owens Valley in the north. The lighter parallel lines are dirt roads related to power lines and the Los Angeles Aqueduct, which run along the base of the mountains.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense (DoD), and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC.

    Size: Varies in a perspective view Location: 35.25 deg. North lat., 118.05 deg. West lon. Orientation: Looking southwest Original Data Resolution: SRTM and Landsat: 30 meters (99 feet) Date Acquired: February 16, 2000

  1. Fault current limiter

    DOEpatents

    Darmann, Francis Anthony

    2013-10-08

    A fault current limiter (FCL) includes a series of high permeability posts that collectively define a core for the FCL. A DC coil, for the purposes of saturating a portion of the high permeability posts, surrounds the complete structure outside of an enclosure in the form of a vessel. The vessel contains a dielectric insulation medium. AC coils, for transporting AC current, are wound on insulating formers and electrically interconnected to each other in a manner such that the senses of the magnetic field produced by each AC coil in the corresponding high permeability core are opposing. There are insulation barriers between phases to improve the dielectric withstand properties of the dielectric medium.

  2. Final Technical Report: PV Fault Detection Tool.

    SciTech Connect

    King, Bruce Hardison; Jones, Christian Birk

    2015-12-01

    The PV Fault Detection Tool project plans to demonstrate that the FDT can (a) detect catastrophic and degradation faults and (b) identify the type of fault. This will be accomplished by collecting fault signatures using different instruments and integrating this information to establish a logical controller for detecting, diagnosing and classifying each fault.
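
    The abstract does not describe the FDT's internal logic, so the sketch below is purely hypothetical: a minimal rule-based classifier that compares measured array output with an expected (modelled) value and labels the deviation as catastrophic, degradation, or normal. Thresholds and class names are invented for illustration.

      # Hypothetical sketch of the kind of logic a PV fault-detection tool might use:
      # compare measured output against an expected (modelled) value and classify the
      # deviation. Thresholds and fault classes are illustrative assumptions only.
      def classify_pv_fault(p_measured, p_expected):
          if p_expected <= 0:
              return "no irradiance / night"
          ratio = p_measured / p_expected
          if ratio < 0.05:
              return "catastrophic fault (string open, breaker trip, ...)"
          if ratio < 0.80:
              return "degradation fault (soiling, shading, module damage, ...)"
          return "normal operation"

      print(classify_pv_fault(p_measured=120.0, p_expected=4000.0))   # catastrophic
      print(classify_pv_fault(p_measured=2900.0, p_expected=4000.0))  # degradation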

  3. 20 CFR 404.507 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Officer 404.507 Fault. Fault as used in without fault (see 404.506 and 42 CFR 405.355) applies only to the individual. Although the Administration may have been at fault in making the overpayment, that... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Fault. 404.507 Section 404.507...

  4. 20 CFR 404.507 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Officer 404.507 Fault. Fault as used in without fault (see 404.506 and 42 CFR 405.355) applies only to the individual. Although the Administration may have been at fault in making the overpayment, that... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false Fault. 404.507 Section 404.507...

  5. 20 CFR 404.507 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Officer 404.507 Fault. Fault as used in without fault (see 404.506 and 42 CFR 405.355) applies only to the individual. Although the Administration may have been at fault in making the overpayment, that... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false Fault. 404.507 Section 404.507...

  6. Implementacion de modulos constructivistas que atiendan "misconceptions" y lagunas conceptuales en temas de la fisica en estudiantes universitarios

    NASA Astrophysics Data System (ADS)

    Santacruz Sarmiento, Neida M.

    This study focused on "misconceptions" and conceptual gaps in fundamental physics topics, namely thermodynamic equilibrium and fluid statics. First, misconceptions and conceptual gaps were identified, and the way students construct their own theories of phenomena related to these topics was analyzed in detail. Because of the complexity with which students assimilate physical concepts, a mixed-methods, explanatory sequential research design was used, with a quantitative stage followed by a qualitative one. The first stage comprised four phases: (1) administration of a diagnostic test to identify prior knowledge and conceptual gaps; (2) identification of misconceptions and conceptual gaps from that prior knowledge; (3) implementation of the intervention through modules on thermodynamic equilibrium and fluid statics; and (4) administration of a post-test to analyze the impact and effectiveness of the constructivist intervention. In the second stage, a qualitative method was used, based on a semi-structured interview that began with the construction of a concept map and concluded with a joint analysis of the data. The study identified misconceptions and conceptual gaps in the participating students' prior knowledge of the topics studied, which were then addressed through the various inquiry activities presented in the constructivist module. Marked differences between the pre- and post-tests were found for the two topics, attributable to the abstract reasoning required for fluid statics versus the more intuitive development of thermodynamic equilibrium, with better responses on the latter. Participants showed a marked evolution and/or change in their thinking structures; paired t-tests were significant for both modules, even though not all participants reached the correct answer on the post-test. The qualitative analysis of participants' responses confirmed the difficulty of removing misconceptions and conceptual gaps.

  7. Fault Branching and Rupture Directivity

    NASA Astrophysics Data System (ADS)

    Dmowska, R.; Rice, J. R.; Kame, N.

    2002-12-01

    Can the rupture directivity of past earthquakes be inferred from fault geometry? Nakata et al. [J. Geogr., 1998] propose to relate the observed surface branching of fault systems with directivity. Their work assumes that all branches are through acute angles in the direction of rupture propagation. However, in some observed cases rupture paths seem to branch through highly obtuse angles, as if to propagate "backwards". Field examples of that are as follows: (1) Landers 1992. When crossing from the Johnson Valley to the Homestead Valley (HV) fault via the Kickapoo (Kp) fault, the rupture from Kp progressed not just forward onto the northern stretch of the HV fault, but also backwards, i.e., SSE along the HV [Sowers et al., 1994; Spotila and Sieh, 1995; Zachariasen and Sieh, 1995; Rockwell et al., 2000]. Measurements of surface slip along that backward branch, a prominent feature of 4 km length, show right-lateral slip decreasing towards the SSE. (2) At a similar crossing from the HV to the Emerson (Em) fault, the rupture progressed backwards along different SSE splays of the Em fault [Zachariasen and Sieh, 1995]. (3) In crossing from the Em to the Camp Rock (CR) fault, again, rupture went SSE on the CR fault. (4) Hector Mine 1999. The rupture originated on a buried fault without surface trace [Li et al., 2002; Hauksson et al., 2002] and progressed bilaterally south and north. In the south it met the Lavic Lake (LL) fault and progressed south on it, but also progressed backward, i.e., NNW, along the northern stretch of the LL fault. The angle between the buried fault and the northern LL fault is around -160°, and that NNW stretch extends around 15 km. The field examples with highly obtuse branch angles suggest that there may be no simple correlation between fault geometry and rupture directivity. We propose that an important distinction is whether those obtuse branches actually involved a rupture path which directly turned through the obtuse angle (while continuing also on the main fault), or rather involved arrest by a barrier on the original fault and jumping [Harris and Day, JGR, 1993] to a neighboring fault on which rupture propagated bilaterally to form what appears as a backward-branched structure. Our studies [Poliakov et al., JGR in press, 2002; Kame et al., EOS, 2002] of stress fields around a dynamically moving mode II crack tip show a clear tendency to branch from the straight path at high rupture speeds, but the stress fields never allow the rupture path to directly turn through highly obtuse angles, and hence that mechanism is unlikely. In contrast, study of fault maps in the vicinity of the Kp to HV fault transition [Sowers et al., 1994], discussed as case (1) above, strongly suggests that the large-angle branching occurred as a jump, which we propose as the likely general mechanism. The implication for the Nakata et al. [1998] aim of inferring rupture directivity from branch geometry is that this will be possible only when rather detailed characterization (by surface geology, seismic relocation, trapped waves) of fault connectivity can be carried out in the vicinity of the branching junction, to ascertain whether direct turning of the rupture path through an angle, or jumping and then propagating bilaterally, was involved in prior events. They have opposite implications for how we would associate past directivity with a (nominally) branched fault geometry.

  8. Colorado Regional Faults

    SciTech Connect

    Hussein, Khalid

    2012-02-01

    Citation Information: Originator: Earth Science & Observation Center (ESOC), CIRES, University of Colorado at Boulder Originator: Colorado Geological Survey (CGS) Publication Date: 2012 Title: Regional Faults Edition: First Publication Information: Publication Place: Earth Science & Observation Center, Cooperative Institute for Research in Environmental Science, University of Colorado, Boulder Publisher: Earth Science & Observation Center (ESOC), CIRES, University of Colorado at Boulder Description: This layer contains the regional faults of Colorado Spatial Domain: Extent: Top: 4543192.100000 m Left: 144385.020000 m Right: 754585.020000 m Bottom: 4094592.100000 m Contact Information: Contact Organization: Earth Science & Observation Center (ESOC), CIRES, University of Colorado at Boulder Contact Person: Khalid Hussein Address: CIRES, Ekeley Building Earth Science & Observation Center (ESOC) 216 UCB City: Boulder State: CO Postal Code: 80309-0216 Country: USA Contact Telephone: 303-492-6782 Spatial Reference Information: Coordinate System: Universal Transverse Mercator (UTM) WGS 1984 Zone 13N False Easting: 500000.00000000 False Northing: 0.00000000 Central Meridian: -105.00000000 Scale Factor: 0.99960000 Latitude of Origin: 0.00000000 Linear Unit: Meter Datum: World Geodetic System 1984 (WGS 1984) Prime Meridian: Greenwich Angular Unit: Degree Digital Form: Format Name: Shape file

  9. SFT: Scalable Fault Tolerance

    SciTech Connect

    Petrini, Fabrizio; Nieplocha, Jarek; Tipparaju, Vinod

    2006-04-15

    In this paper we will present a new technology that we are currently developing within the SFT: Scalable Fault Tolerance FastOS project which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continuing execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency—requiring no changes to user applications. Our technology is based on a global coordination mechanism, that enforces transparent recovery lines in the system, and TICK, a lightweight, incremental checkpointing software architecture implemented as a Linux kernel module. TICK is completely user-transparent and does not require any changes to user code or system libraries; it is highly responsive: an interrupt, such as a timer interrupt, can trigger a checkpoint in as little as 2.5μs; and it supports incremental and full checkpoints with minimal overhead—less than 6% with full checkpointing to disk performed as frequently as once per minute.
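
    TICK itself is a Linux kernel module, so the following is only a user-space sketch of the incremental-checkpointing idea it embodies: divide the state into fixed-size blocks and store, at each checkpoint, only the blocks whose contents changed since the previous one. The block size and hashing scheme are illustrative choices.

      import hashlib

      # User-space illustration of incremental checkpointing (conceptual sketch only,
      # not TICK's kernel-level implementation). State is split into fixed-size
      # blocks and a checkpoint stores just the blocks that changed since the last one.
      BLOCK = 4096

      def split_blocks(data: bytes):
          return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

      def incremental_checkpoint(data: bytes, previous_hashes):
          """Return (saved_blocks, new_hashes); saved_blocks holds only dirty blocks."""
          saved, hashes = {}, []
          for idx, block in enumerate(split_blocks(data)):
              digest = hashlib.sha256(block).hexdigest()
              hashes.append(digest)
              if idx >= len(previous_hashes) or previous_hashes[idx] != digest:
                  saved[idx] = block      # block is "dirty" -> include in checkpoint
          return saved, hashes

      state = bytearray(b"\x00" * 3 * BLOCK)
      _, h0 = incremental_checkpoint(bytes(state), [])          # full checkpoint
      state[BLOCK + 10] = 0xFF                                  # touch one block
      dirty, _ = incremental_checkpoint(bytes(state), h0)
      print("blocks saved in incremental checkpoint:", sorted(dirty))   # [1]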

  10. Central Asia Active Fault Database

    NASA Astrophysics Data System (ADS)

    Mohadjer, Solmaz; Ehlers, Todd A.; Kakar, Najibullah

    2014-05-01

    The ongoing collision of the Indian subcontinent with Asia controls active tectonics and seismicity in Central Asia. This motion is accommodated by faults that have historically caused devastating earthquakes and continue to pose serious threats to the population at risk. Despite international and regional efforts to assess seismic hazards in Central Asia, little attention has been given to development of a comprehensive database for active faults in the region. To address this issue and to better understand the distribution and level of seismic hazard in Central Asia, we are developing a publicly available database for active faults of Central Asia (including but not limited to Afghanistan, Tajikistan, Kyrgyzstan, northern Pakistan and western China) using ArcGIS. The database is designed to allow users to store, map and query important fault parameters such as fault location, displacement history, rate of movement, and other data relevant to seismic hazard studies including fault trench locations, geochronology constraints, and seismic studies. Data sources integrated into the database include previously published maps and scientific investigations as well as strain rate measurements and historic and recent seismicity. In addition, high resolution Quickbird, Spot, and Aster imagery are used for selected features to locate and measure offset of landforms associated with Quaternary faulting. These features are individually digitized and linked to attribute tables that provide a description for each feature. Preliminary observations include inconsistent and sometimes inaccurate information for faults documented in different studies. For example, the Darvaz-Karakul fault, which roughly defines the western margin of the Pamir, has been mapped with differences in location of up to 12 kilometers. The sense of motion for this fault ranges from unknown to thrust and strike-slip in three different studies, despite documented left-lateral displacements of Holocene and late Pleistocene landforms observed near the fault trace.
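
    As an illustration of the kind of attributes listed above (location, displacement history, rate of movement, trench locations, geochronology), a single fault record might look like the hypothetical GeoJSON-style structure below. Field names, coordinates, and values are invented for the example and are not the project's actual schema.

      # Illustrative sketch of how one fault record might be structured in such a
      # database (field names and values are hypothetical, not the project's schema):
      # geometry plus the attributes listed in the abstract.
      fault_record = {
          "type": "Feature",
          "geometry": {
              "type": "LineString",
              "coordinates": [[71.30, 38.45], [71.55, 38.60]],   # lon, lat (illustrative)
          },
          "properties": {
              "name": "Darvaz-Karakul fault (example entry)",
              "sense_of_motion": "left-lateral strike-slip",     # reported values differ by study
              "slip_rate_mm_yr": None,                           # unknown / unconstrained
              "displacement_history": "Holocene and late Pleistocene offsets reported",
              "trench_locations": [],
              "geochronology": [],
              "sources": ["published map A", "field study B"],   # placeholders
          },
      }
      print(fault_record["properties"]["name"])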

  11. Fault deformation mechanisms and fault rocks in micritic limestones: Examples from Corinth rift normal faults

    NASA Astrophysics Data System (ADS)

    Bussolotto, M.; Benedicto, A.; Moen-Maurel, L.; Invernizzi, C.

    2015-08-01

    A multidisciplinary study investigates the influence of different parameters on fault rock architecture development along normal faults affecting non-porous carbonates of the Corinth rift southern margin. Here, some fault systems cut the same carbonate unit (Pindus), and the gradual and fast uplift since the initiation of the rift led to the exhumation of deep parts of the older faults. This exceptional context allows superficial active fault zones and old exhumed fault zones to be compared. Our approach includes field studies, micro-structural (optical microscope and cathodoluminescence) and geochemical analyses (δ13C, δ18O, trace elements), and fluid inclusion microthermometry of syn-kinematic calcite cements. Our main results, in a depth-window ranging from 0 m to about 2500 m, are: i) all cements precipitated from meteoric fluids in a closed or open circulation system depending on depth; ii) depth (in terms of P/T conditions) determines the development of some structures and their sealing; iii) lithology (marly levels) influences the type of structures and their cohesive/non-cohesive nature; iv) early distributed rather than final total displacement along the main fault plane is responsible for the fault zone architecture; v) petrophysical properties of each fault zone depend on the variable combination of these factors.

  12. 20 CFR 404.507 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Officer § 404.507 Fault. Fault as used in without fault (see § 404.506 and 42 CFR 405.355) applies only to..., educational, or linguistic limitations (including any lack of facility with the English language)...

  13. 20 CFR 404.507 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Officer § 404.507 Fault. Fault as used in without fault (see § 404.506 and 42 CFR 405.355) applies only to..., educational, or linguistic limitations (including any lack of facility with the English language)...

  14. Chip level simulation of fault tolerant computers

    NASA Technical Reports Server (NTRS)

    Armstrong, J. R.

    1982-01-01

    Chip-level modeling techniques in the evaluation of fault tolerant systems were researched. A fault tolerant computer was modeled. An efficient approach to functional fault simulation was developed. Simulation software was also developed.

  15. Studies of sibling Drosophila species from Laguna Verde, Veracruz, Mexico: I. Species frequencies, viability, desiccation resistance, and vagility.

    PubMed

    de la Rosa, M E; Guzmán, J; Levine, L; Olvera, O; Rockwell, R F

    1989-01-01

    The sibling species Drosophila melanogaster and D. simulans were collected at Laguna Verde, Veracruz, Mexico. D. melanogaster was found in significantly greater frequency than was D. simulans. Ten isofemale lines of each species were tested for egg to adult viability, desiccation resistance, and vagility. D. melanogaster surpassed D. simulans in all three characteristics. The findings are discussed with reference to the climatic conditions at Laguna Verde and the expected effect of such an environment on the relative frequencies of these species. The dichotomous results in regard to desiccation resistance and vagility that were observed between recently collected D. melanogaster and the Oregon-R laboratory stock of that species are also discussed. PMID:2493497

  16. Set-up of a decision support system to support sustainable development of the Laguna de Bay, Philippines.

    PubMed

    Nauta, Tjitte A; Bongco, Alicia E; Santos-Borja, Adelina C

    2003-01-01

    Over recent decades, population expansion, deforestation, land conversion, urbanisation, intense fisheries and industrialisation have produced massive changes in the Laguna de Bay catchment, Philippines. The resulting problems include rapid siltation of the lake, eutrophication, inputs of toxics, flooding problems and loss of biodiversity. Rational and systematic resolution of conflicting water use and water allocation interests is now urgently needed in order to ensure sustainable use of the water resources. With respect to the competing and conflicting pressures on the water resources, the Laguna Lake Development Authority (LLDA) needs to achieve comprehensive management and development of the area. In view of these problems and needs, the Government of the Netherlands funded a two-year project entitled 'Sustainable Development of the Laguna de Bay Environment'. A comprehensive tool has been developed to support decision-making at catchment level. This consists of an ArcView GIS database linked to a state-of-the-art modelling suite, including hydrological and waste load models for the catchment area and a three-dimensional hydrodynamic and water quality model (Delft3D) linked to a habitat evaluation module for the lake. In addition, MS Office-based tools to support a stakeholder analysis and financial and economic assessments have been developed. The project also focused on technical studies relating to dredging, drinking water supply and infrastructure works. These aimed to produce technically and economically feasible solutions to water quantity and quality problems. The paper also presents the findings of a study on the development of polder islands in the Laguna de Bay, addressing the water quantity and quality problems and focusing on the application of the decision support system. PMID:12787622

  17. Frictional Heterogeneities Along Carbonate Faults

    NASA Astrophysics Data System (ADS)

    Collettini, C.; Carpenter, B. M.; Scuderi, M.; Tesei, T.

    2014-12-01

    The understanding of fault-slip behaviour in carbonates has an important societal impact as a) a significant number of earthquakes nucleate within or propagate through these rocks, and b) half of the known petroleum reserves occur within carbonate reservoirs, which likely contain faults that experience fluid pressure fluctuations. Field studies on carbonate-bearing faults that are exhumed analogues of currently active structures of the seismogenic crust show that fault rock types are systematically controlled by the lithology of the faulted protolith: localization associated with cataclasis, thermal decomposition and plastic deformation commonly affects fault rocks in massive limestone, whereas distributed deformation, pressure-solution and frictional sliding along phyllosilicates are observed in marly rocks. In addition, hydraulic fractures, indicating cyclic fluid pressure build-ups during fault activity, are widespread. Standard double direct friction experiments on fault rocks from massive limestones show high friction, velocity neutral/weakening behaviour and significant re-strengthening during hold periods; in contrast, phyllosilicate-rich shear zones are characterized by low friction, significant velocity strengthening behavior and no healing. We are currently running friction experiments on large rock samples (20x20 cm) in order to reproduce and characterize the interaction of the fault rock frictional heterogeneities observed in the field. In addition, we have been performing experiments at near-lithostatic fluid pressure in the double direct shear configuration within a pressure vessel to test rate-and-state friction stability under these conditions. Our combination of structural observations and mechanical data has been revealing the processes and structures that are at the base of the broad spectrum of fault slip behaviors recently documented by high-resolution geodetic and seismological data.
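
    The rate-and-state stability test mentioned above rests on the standard Dieterich-Ruina formulation, in which steady-state friction varies with the logarithm of sliding velocity and the sign of (a - b) separates velocity-weakening (potentially unstable) from velocity-strengthening (stable) behaviour. The sketch below uses generic textbook-style parameter values, not measurements from these experiments.

      import math

      # Minimal sketch of the standard rate-and-state (Dieterich-Ruina) framework
      # referred to in the abstract. Parameter values are generic assumptions, not
      # data from these experiments.
      def steady_state_friction(v, mu0=0.6, a=0.010, b=0.012, v0=1.0e-6):
          """Steady-state friction coefficient at sliding velocity v (m/s)."""
          return mu0 + (a - b) * math.log(v / v0)

      a, b = 0.010, 0.012
      print("a - b =", a - b,
            "-> velocity weakening (potentially unstable)" if a < b
            else "-> velocity strengthening (stable creep)")
      for v in (1e-7, 1e-6, 1e-5):
          print(f"V = {v:.0e} m/s  mu_ss = {steady_state_friction(v, a=a, b=b):.4f}")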

  18. The Lawanopo Fault, central Sulawesi, East Indonesia

    NASA Astrophysics Data System (ADS)

    Natawidjaja, Danny Hilman; Daryono, Mudrik R.

    2015-04-01

    The dominant tectonic-force factor on Sulawesi Island is the westward Bangga-Sula microplate tectonic intrusion, driven by the 12 mm/year westward motion of the Pacific Plate relative to Eurasia. This tectonic intrusion is accommodated by a series of major left-lateral strike-slip fault zones including the Sorong, Sula-Sorong, Matano, Palukoro, and Lawanopo fault zones. The Lawanopo fault has been considered an active left-lateral strike-slip fault. The natural exposures of the Lawanopo Fault are clear, marked by breaks and lineaments of topography along the fault line; it also serves as a tectonic boundary between different rock assemblages. Inspection of IFSAR 5 m-grid DEM data and field checks shows that the fault traces are visible as lineaments of topographical slope breaks, linear ridges and stream valleys, and ridge neckings, and they are also associated with hydrothermal deposits and hot springs. These are characteristics of a young fault, so their morphological expressions are still visible. However, fault scarps and other morpho-tectonic features appear to have been diffused by erosion and young sediment deposition. No fresh fault scarps, stream deflections or offsets, or any influence of fault movements on recent landscapes are observed in association with the fault traces. Hence, the faults do not show any evidence of recent activity. This is consistent with the lack of seismicity on the fault.

  19. Arc fault detection system

    DOEpatents

    Jha, Kamal N.

    1999-01-01

    An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between current flowing from secondary windings located in a current transformer coupled to a power supply side of a switchboard, and a total current induced in secondary windings coupled to a load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and a power supply to remove the supply of power to the switchboard.

  20. Faulted Sedimentary Rocks

    NASA Technical Reports Server (NTRS)

    2004-01-01

    27 June 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows some of the layered, sedimentary rock outcrops that occur in a crater located at 8°N, 7°W, in western Arabia Terra. Dark layers and dark sand have enhanced the contrast of this scene. In the upper half of the image, one can see numerous lines that offset the layers. These lines are faults along which the rocks have broken and moved. The regularity of layer thickness and erosional expression are taken as evidence that the crater in which these rocks occur might once have been a lake. The image covers an area about 1.9 km (1.2 mi) wide. Sunlight illuminates the scene from the lower left.

  1. Arc fault detection system

    DOEpatents

    Jha, K.N.

    1999-05-18

    An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between current flowing from secondary windings located in a current transformer coupled to a power supply side of a switchboard, and a total current induced in secondary windings coupled to a load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and a power supply to remove the supply of power to the switchboard. 1 fig.

  2. Fault Tolerant State Machines

    NASA Technical Reports Server (NTRS)

    Burke, Gary R.; Taft, Stephanie

    2004-01-01

    State machines are commonly used to control sequential logic in FPGAs and ASICs. An errant state machine can cause considerable damage to the device it is controlling. For example, in space applications the FPGA might be controlling pyros, which when fired at the wrong time will cause a mission failure. Even a well designed state machine can be subject to random errors as a result of SEUs from the radiation environment in space. There are various ways to encode the states of a state machine, and the type of encoding makes a large difference in the susceptibility of the state machine to radiation. In this paper we compare 4 methods of state machine encoding and find which method gives the best fault tolerance, as well as determining the resources needed for each method.
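
    A simple way to see why the encoding matters is to count how many single-bit upsets leave the state register in another legal state, where no fault-detection logic can notice the error. The sketch below compares a dense binary encoding with a one-hot encoding for a hypothetical 8-state machine; it illustrates the idea only and does not reproduce the paper's four methods or resource figures.

      # Illustrative sketch: flip one bit at a time in each encoded state and check
      # whether the corrupted word is still a *legal* state (an undetectable error)
      # or an illegal one that fault-detection logic can catch.
      def single_bit_flips(codes, width):
          legal = set(codes)
          undetected, trials = 0, 0
          for code in codes:
              for bit in range(width):
                  corrupted = code ^ (1 << bit)
                  trials += 1
                  if corrupted in legal:
                      undetected += 1
          return undetected, trials

      n_states = 8
      binary  = list(range(n_states))              # dense binary encoding, 3 bits
      one_hot = [1 << i for i in range(n_states)]  # one-hot encoding, 8 bits

      for name, codes, width in [("binary", binary, 3), ("one-hot", one_hot, 8)]:
          bad, total = single_bit_flips(codes, width)
          print(f"{name:8s}: {bad}/{total} single-bit upsets land on another legal state")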

  3. Identification and dating of indigenous water storage reservoirs along the Rio San José at Laguna Pueblo, western New Mexico, USA

    USGS Publications Warehouse

    Huckleberry, Gary; Ferguson, T.J.; Rittenour, Tammy M.; Banet, Chris; Mahan, Shannon

    2016-01-01

    An investigation into indigenous water storage on the Rio San José in western New Mexico was conducted in support of efforts by the Pueblo of Laguna to adjudicate their water rights. Here we focus on the stratigraphy and geochronology of two Native American-constructed reservoirs. One reservoir, located near the community of Casa Blanca, was formed by a ~600 m (2000 feet) long stone masonry dam that impounded ~1.6 x 10^6 m^3 (~1300 acre-feet) of stored water. Four optically stimulated luminescence (OSL) ages obtained on reservoir deposits indicate that the dam was constructed prior to AD 1825. The other reservoir is located adjacent to Old Laguna Pueblo and contains only a small remnant of its former earthen dam. The depth and distribution of reservoir deposits and a photogrammetric analysis of relict shorelines indicate a storage capacity of ~6.5 x 10^6 m^3 (~5300 acre-feet). OSL ages from above and below the base of the reservoir indicate that the reservoir was constructed sometime after AD 1370 but before AD 1750. The results of our investigation are consistent with Laguna oral history and Spanish accounts demonstrating indigenous construction of significant water-storage reservoirs on the Rio San José prior to the late nineteenth century.

  4. Postglacial eruptive history of Laguna del Maule volcanic field in Chile, from fallout stratigraphy in Argentina

    NASA Astrophysics Data System (ADS)

    Fierstein, J.; Sruoga, P.; Amigo, A.; Elissondo, M.; Rosas, M.

    2012-12-01

    The Laguna del Maule (LdM) volcanic field, which surrounds the 54-km2 lake of that name, covers ~500 km2 of rugged glaciated terrain with Quaternary lavas and tuffs that extend for 40 km westward from the Argentine frontier and 30 km N-S from the Rio Campanario to Laguna Fea in the Southern Volcanic Zone of Chile. Geologic mapping (Hildreth et al., 2010) shows that at least 130 separate vents are part of the LdM field, from which >350 km3 of products have erupted since 1.5 Ma. These include a ring of 36 postglacial rhyolite and rhyodacite coulees and domes that erupted from 24 separate vents and encircle the lake, suggesting a continued large magma reservoir. Because the units are young, glassy, and do not overlap, only a few ages had been determined and the sequence of most of the postglacial eruptions had not previously been established. However, most of these postglacial silicic eruptions were accompanied by explosive eruptions of pumice and ash. Recent investigations downwind in Argentina are combining stratigraphy, grain-size analysis, chemistry, and radiocarbon dating to correlate the tephra with eruptive units mapped in Chile, assess fallout distribution, and establish a time-stratigraphic framework for the postglacial eruptions at Laguna del Maule. Two austral summer field seasons with a tri-country collaboration among the geological surveys of the U.S., Chile, and Argentina, have now established that a wide area east of the volcanic field was blanketed by at least 3 large explosive eruptions from LdM sources, and by at least 3 more modest, but still significant, eruptions. In addition, an ignimbrite from the LdM Barrancas vent complex on the border in the SE corner of the lake traveled at least 15 km from source and now makes up a pyroclastic mesa that is at least 40 m thick. This ignimbrite (72-75% SiO2) preceded a series of fall deposits that are correlated with eruption of several lava flows that built the Barrancas complex. Recent 14C dates suggest that most of the preserved LdM fallout eruptions were between 7 ka and 2 ka. However, the oldest and perhaps largest fall unit yet recognized is correlated with the Los Espejos rhyolite lava flow that dammed the lake and yields a 40Ar/39Ar age of 23 ka. Pumice clasts as large as 8.5 cm and lithics to 4 cm were measured 32 km ENE of source. It is the only high-silica rhyolite (75.5-76% SiO2) fall layer yet found, correlates chemically with the Los Espejos rhyolite lava flow, and includes distinctive olivine-bearing lithics that are correlated with mafic lavas which underlie the Espejos vent. Extremely frothy pumice found near the vent is also consistent with the bubble-wall shards and reticulite pumice distinctive of the correlative fall deposit. Another large rhyolite fall deposit (74.5% SiO2), 4 m thick 22 km E of source, has pumice clasts to 9.5 cm and includes ubiquitous coherent clasts of fine, dense soil that suggests it erupted through wet ground; 14C dates (uncalibrated) yield ages ~7 ka. Stratigraphic details suggest that pulses of fallout were accompanied by small pyroclastic flows. Ongoing field and lab work continues to build the LdM postglacial eruptive story. The numerous postglacial explosive eruptions from the LdM field are of significant concern because of ongoing 33 cm/year uplift along the western lakeshore, as measured by InSAR and verified by GPS.

  5. Comparison of Cenozoic Faulting at the Savannah River Site to Fault Characteristics of the Atlantic Coast Fault Province: Implications for Fault Capability

    SciTech Connect

    Cumbest, R.J.

    2000-11-14

    This study compares the faulting observed on the Savannah River Site and vicinity with the faults of the Atlantic Coastal Fault Province and concludes that both sets of faults exhibit the same general characteristics and are closely associated. Based on the strength of this association it is concluded that the faults observed on the Savannah River Site and vicinity are in fact part of the Atlantic Coastal Fault Province. Inclusion in this group means that the historical precedent established by decades of previous studies on the seismic hazard potential of the Atlantic Coastal Fault Province is relevant to faulting at the Savannah River Site. That is, since these faults are genetically related, the conclusion of 'not capable' reached in past evaluations applies. In addition, this study establishes a set of criteria by which individual faults may be evaluated in order to assess their inclusion in the Atlantic Coast Fault Province and the related association with the 'not capable' conclusion.

  6. Improving Multiple Fault Diagnosability using Possible Conflicts

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2012-01-01

    Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.
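
    The event-based isolation idea can be illustrated with a toy example: each fault hypothesis predicts an ordered sequence of qualitative residual deviations (its fault signature), and hypotheses whose predictions disagree with the observed prefix are discarded as events arrive. The signatures and fault names below are hypothetical, not those of the paper's multi-tank case study.

      # Conceptual sketch of event-based fault isolation: each fault hypothesis
      # predicts a sequence of qualitative residual deviations ('+', '-'); as
      # observations arrive, hypotheses inconsistent with the observed prefix are
      # dropped. Signatures below are hypothetical, not from the paper.
      signatures = {
          "tank1_leak":  ["r1-", "r2-"],
          "valve_stuck": ["r1-", "r3+"],
          "sensor_bias": ["r2+"],
      }

      def isolate(observed_events):
          candidates = set(signatures)
          for i, event in enumerate(observed_events):
              candidates = {f for f in candidates
                            if i < len(signatures[f]) and signatures[f][i] == event}
          return candidates

      print(isolate(["r1-"]))          # {'tank1_leak', 'valve_stuck'}  -- still ambiguous
      print(isolate(["r1-", "r2-"]))   # {'tank1_leak'}                 -- isolated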

  7. Constraining Fault-Zone Permeability

    NASA Astrophysics Data System (ADS)

    Ge, S.; Ball, L. B.; Caine, J. S.; Revil, A.

    2011-12-01

    Faults are known to behave as hydrologic barriers, conduits, or combined barrier-conduits to fluid flow. On the basis of hydrological and geophysical data, this study presents a possible fault-zone permeability model for a buried reverse fault that juxtaposes Precambrian crystalline rocks against Tertiary sedimentary strata, the Elkhorn fault zone in South Park, Colorado. Permeameter tests and thin section analyses were performed on several samples taken from a 300-foot interval of core from the sedimentary footwall and yielded permeability values on the order of 10E-16 m^2. In-situ slug tests and single-well pumping tests conducted in both the footwall and hanging wall yielded larger permeability values, varying between 10E-14 to 10E-11 m^2, indicating a substantial change in permeability in the vicinity of the fault. Geophysical interpretations from electrical resistivity tomography and self-potential measurements suggest that these permeability estimates may be representative of a lithologically and hydrologically distinct fault zone. Permeability estimates, resistivity structure, and interpreted changes in groundwater flow direction near the fault are consistent with combined conduit-barrier behavior at the meter to tens-of-meters scale.

  8. ANNs pinpoint underground distribution faults

    SciTech Connect

    Glinkowski, M.T.; Wang, N.C.

    1995-10-01

    Many offline fault location techniques in power distribution circuits involve patrolling along the lines or cables. In overhead distribution lines, most failures can be located quickly by visual inspection without the aid of special equipment. However, locating a fault in underground cable systems is more difficult. It involves additional equipment (e.g., thumpers, radars, etc.) to transform the invisibility of the cable into other forms of signals, such as acoustic sound and electromagnetic pulses. Trained operators must carry the equipment above the ground, follow the path of the signal, and draw lines on their maps in order to locate the fault. Sometimes even the smell of a burnt cable fault is used to detect the problem. These techniques are time consuming, not always reliable, and, as in the case of high-voltage dc thumpers, can cause additional damage to the healthy parts of the cable circuit. Online fault location in power networks that involve interconnected lines (cables) and multiterminal sources continues to receive great attention, with limited success in techniques that would provide simple and practical solutions. This article features a new online fault location technique that uses the pattern recognition capability of artificial neural networks (ANNs) and utilizes new capabilities of modern protective relaying hardware. The output of the neural network can be graphically displayed as a simple three-dimensional (3-D) chart that can provide an operator with an instantaneous indication of the location of the fault.
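
    As a toy illustration of the pattern-recognition idea (not the article's network, features, or relay data), the sketch below trains a tiny one-hidden-layer network on synthetic waveform-like features generated from known fault distances and then estimates the distance for a new sample.

      import numpy as np

      # Toy illustration of ANN-based fault location: learn a mapping from synthetic
      # "waveform features" to fault distance. Data and network are illustrative only.
      rng = np.random.default_rng(0)
      dist = rng.uniform(0.0, 10.0, size=(200, 1))                    # fault distance, km
      X = np.hstack([np.exp(-0.2 * dist), 1.0 / (1.0 + dist)])        # two made-up features
      X += 0.01 * rng.normal(size=X.shape)
      y = dist / 10.0                                                  # normalised target

      # One hidden layer, trained with plain gradient descent on mean-squared error.
      W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
      W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
      lr = 0.05
      for _ in range(3000):
          H = np.tanh(X @ W1 + b1)
          err = (H @ W2 + b2) - y
          gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
          dH = (err @ W2.T) * (1 - H**2)
          gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
          W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

      test = np.array([[np.exp(-0.2 * 4.0), 1.0 / 5.0]])              # features for a 4 km fault
      estimate = 10.0 * (np.tanh(test @ W1 + b1) @ W2 + b2).item()
      print(f"estimated distance: {estimate:.1f} km")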

  9. Diatom diversity and paleoenvironmental changes in Laguna Potrok Aike, Patagonia: the ~ 50 kyr PASADO sediment record

    NASA Astrophysics Data System (ADS)

    Recasens, C.; Ariztegui, D.; Maidana, N. I.

    2012-12-01

    Laguna Potrok Aike is a maar lake located in southernmost Argentinean Patagonia, in the province of Santa Cruz. Being one of the few permanent lakes in the area, it provides an exceptional and continuous sedimentary record. The sediment cores from Laguna Potrok Aike, obtained in the framework of the ICDP-sponsored project PASADO (Potrok Aike Maar Lake Sediment Archive Drilling Program), were sampled for diatom analysis in order to reconstruct a continuous history of hydrological and climatic changes since the Late Pleistocene. Diatoms are widely used to characterize and often quantify the impact of past environmental changes in aquatic systems. We use variations in diatom concentration and in their dominant assemblages, combined with other proxies, to track these changes. Diatom assemblages were analyzed on the composite core 5022-2CP with a multi-centennial time resolution. The total composite profile length of 106.09 mcd (meters composite depth) was reduced to 45.80 m cd-ec (event-corrected composite profile) of pelagic deposits once gaps, reworked sections, and tephra deposits were removed. This continuous deposit spans the last ca. 51.2 cal. ka BP. Previous diatomological analysis of the core catcher samples of core 5022-1D allowed us to determine the dominant diatom assemblages in this lake and select the sections where higher temporal resolution was needed. Over 200 species, varieties and forms were identified in the sediment record, including numerous endemic species and others that may be new to science. Among these, a new species has been described: Cymbella gravida sp. nov. Recasens and Maidana. The quantitative analysis of the sediment record reveals diatom abundances reaching 460 million valves per gram of dry sediment, with substantial fluctuations through time. Variations in the abundance and species distribution point toward lake level variations, changes in nutrient input or even periods of ice cover in the lake. The top meters of the record reveal a shift in the phytoplankton composition, corresponding to the previously documented salinization of the water and the lake level drop, indicators of warming temperatures and lower moisture availability during the early and middle Holocene. The new results presented here on diatom diversity and distribution in the Glacial to Late Glacial sections of the record bring much-needed information on the previously poorly known paleolimnology of this lake for that time period.

  10. A 5000 Year Record of Andean South American Summer Monsoon Variability from Laguna de Ubaque, Colombia

    NASA Astrophysics Data System (ADS)

    Rudloff, O. M.; Bird, B. W.; Escobar, J.

    2014-12-01

    Our understanding of Northern Hemisphere South American summer monsoon (SASM) dynamics during the Holocene has been limited by the small number of terrestrial paleoclimate records from this region. In order to increase our knowledge of SASM variability and to better inform our predictions of its response to ongoing rapid climate change, we require high-resolution paleoclimate records from the Northern Hemisphere Andes. To this end, we present sub-decadally resolved sedimentological and geochemical data from Laguna de Ubaque that spans the last 5000 years. Located in the Eastern Cordillera of the Colombian Andes, Laguna de Ubaque (2070 m asl) is a small, east facing moraine-dammed lake in the upper part of the Rio Meta watershed near Bogotá containing finely laminated clastic sediments. Dry bulk density, %organic matter, %carbonate and magnetic susceptibility (MS) results from Ubaque suggest a period of intense precipitation between 3500 and 2000 years BP interrupted by a 300 yr dry interval centered at 2700 years BP. Following this event, generally drier conditions characterize the last 2000 years. Although considerably lower amplitude than the middle Holocene pluvial events, variability in the sedimentological data support climatic responses during the Medieval Climate Anomaly (MCA; 900 to 1200 CE) and Little Ice Age (LIA; 1450 to 1900 CE) that are consistent with other records of local Andean conditions. In particular, reduced MS during the MCA suggests a reduction in terrestrial material being washed into the lake as a result of generally drier conditions. The LIA on the other hand shows a two phase structure with increased MS between 1450 and 1600 CE, suggesting wetter conditions during the onset of the LIA, and reduced MS between 1600 and 1900 CE, suggesting a return to drier conditions during the latter part of the LIA. These LIA trends are similar to the Quelccaya accumulation record, possibly supporting an in-phase relationship between the South American Hemispheres. By comparing our precipitation proxies with other terrestrial records, as well as Pacific sea surface temperatures (SST) and global climate reconstructions, we will examine the relationship between Northern and Southern Hemisphere Andean climate responses to assess the validity of existing theories on the modes of climate change in the region.

  11. The Maars of the Tuxtla Volcanic Field: the Example of 'Laguna Pizatal'

    NASA Astrophysics Data System (ADS)

    Espindola, J.; Zamora-Camacho, A.; Hernandez-Cardona, A.; Alvarez del Castillo, E.; Godinez, M.

    2013-12-01

    Los Tuxtlas Volcanic Field (TVF), also known as Los Tuxtlas massif, is a structure of volcanic rocks rising conspicuously in the south-central part of the coastal plains of eastern Mexico. The TVF seems related to the Upper Cretaceous magmatism of the NW part of the Gulf's margin (e.g., the San Carlos and Sierra de Tamaulipas alkaline complexes) rather than to the nearby Mexican Volcanic Belt. Volcanism in this field began in the late Miocene and has continued into historical times. The TVF is composed of 4 large volcanoes (San Martin Tuxtla, San Martin Pajapan, Santa Marta, Cerro El Vigia), at least 365 volcanic cones, and 43 maars. In this poster we present the distribution of the maars, their size and depths. These maars span from a few hundred meters to almost 1 km in average diameter, and a few meters to several tens of meters in depth; most of them are filled with lakes. As an example of the nature of these structures we present our results of the ongoing study of 'Laguna Pizatal or Pisatal' (18°33'N, 95°16.4'W, 428 masl), located some 3 km from the village of Reforma, on the western side of San Martin Tuxtla volcano. Laguna Pisatal is a maar some 500 meters in radius with a depth of about 40 meters below the surrounding ground level. It is covered by a lake 200 m2 in extent fed by a spring discharging on its western side. We examined a succession of 15 layers on the margins of the maar; these layers are blast deposits of different sizes interbedded with surge deposits. Most of the contacts between layers are irregular, which suggests scouring during deposition of the upper beds. This in turn suggests that the layers were deposited in a rapid series of explosions, which mixed juvenile material with fragments of the preexisting bedrock. We were unable to determine the extent of these deposits since the surrounding areas are nowadays sugar cane plantations and the lake has overflowed on several occasions.

  12. Fault Injection Campaign for a Fault Tolerant Duplex Framework

    NASA Technical Reports Server (NTRS)

    Sacco, Gian Franco; Ferraro, Robert D.; von Allmen, Paul; Rennels, Dave A.

    2007-01-01

    Fault tolerance is an efficient approach adopted to avoid or reduce the damage of a system failure. In this work we present the results of a fault injection campaign we conducted on the Duplex Framework (DF). The DF is software developed by the UCLA group [1, 2] that uses a fault-tolerant approach and allows two replicas of the same process to run on two different nodes of a commercial off-the-shelf (COTS) computer cluster. A third process, running on a different node, constantly monitors the results computed by the two replicas and restarts the two replica processes if an inconsistency in their computation is detected. This approach is very cost efficient and can be adopted to control processes on spacecraft, where the fault rate produced by cosmic rays is not very high.
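
    The duplex scheme described above can be illustrated with a toy monitor: two replica processes compute the same result, and a comparison step restarts the pair when they disagree. This is only a conceptual sketch, with a trivial stand-in computation and a deliberately injected error; it is not the Duplex Framework's actual implementation.

      import multiprocessing as mp

      # Conceptual sketch of the duplex idea: two replicas compute the same result;
      # a monitor compares them and re-runs the pair on disagreement. Toy example only.
      def replica(task, out_queue, corrupt=False):
          result = sum(task)                 # stand-in for the replicated computation
          if corrupt:                        # simulate an SEU-induced wrong answer
              result += 1
          out_queue.put(result)

      def run_duplex(task, inject_fault=False):
          q1, q2 = mp.Queue(), mp.Queue()
          p1 = mp.Process(target=replica, args=(task, q1))
          p2 = mp.Process(target=replica, args=(task, q2, inject_fault))
          p1.start(); p2.start(); p1.join(); p2.join()
          r1, r2 = q1.get(), q2.get()
          if r1 != r2:
              print("monitor: replicas disagree -> restart both replicas")
              return run_duplex(task)        # re-run the pair after the restart
          return r1

      if __name__ == "__main__":
          print("agreed result:", run_duplex(list(range(100)), inject_fault=True))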

  13. Method of locating ground faults

    NASA Technical Reports Server (NTRS)

    Patterson, Richard L. (Inventor); Rose, Allen H. (Inventor); Cull, Ronald C. (Inventor)

    1994-01-01

    The present invention discloses a method of detecting and locating current imbalances such as ground faults in multiwire systems using the Faraday effect. As an example, for 2-wire or 3-wire (1 ground wire) electrical systems, light is transmitted along an optical path which is exposed to magnetic fields produced by currents flowing in the hot and neutral wires. The rotations produced by these two magnetic fields cancel each other, so the light on the optical path is unaffected by either. However, when a ground fault occurs, the optical path is exposed to a net Faraday-effect rotation due to the current imbalance, thereby revealing the ground fault.
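
    A back-of-envelope version of the idea: because the sensing path encloses both the hot and the neutral conductors, Ampere's law makes the net Faraday rotation proportional to the enclosed current imbalance, i.e., the ground-fault current. The Verdet constant and turn count below are assumed, order-of-magnitude values for illustration only.

      from math import pi

      # Back-of-envelope sketch of the Faraday-effect detection idea. The sensing
      # fiber encloses both conductors, so the net rotation tracks only the current
      # imbalance. The Verdet constant is an assumed, order-of-magnitude value.
      MU0 = 4 * pi * 1e-7          # T*m/A
      VERDET = 4.0                 # rad/(T*m), assumed for the sensing fiber
      N_TURNS = 50                 # fiber turns around the conductor pair

      def rotation_rad(i_hot, i_neutral):
          i_imbalance = i_hot + i_neutral        # currents flow in opposite senses
          return VERDET * MU0 * N_TURNS * i_imbalance

      print(rotation_rad(10.0, -10.0))    # balanced load -> 0 rad, no fault indicated
      print(rotation_rad(10.0, -9.5))     # 0.5 A leaking to ground -> nonzero rotation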

  14. Finding faults with the data

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    Rudolph Giuliani and Hillary Rodham Clinton are crisscrossing upstate New York looking for votes in the U.S. Senate race. Also cutting back and forth across upstate New York are hundreds of faults of a kind characterized by very sporadic seismic activity, according to Robert Jacobi, professor of geology at the University of Buffalo (UB), who conducted research with fellow UB geology professor John Fountain. "We have proof that upstate New York is crisscrossed by faults," Jacobi said. "In the past, the Appalachian Plateau, which stretches from Albany to Buffalo, was considered a pretty boring place structurally, without many faults or folds of any significance."

  15. The Burträsk endglacial fault: Sweden's most seismically active fault system

    NASA Astrophysics Data System (ADS)

    Lund, Björn; Buhcheva, Darina; Tryggvason, Ari; Berglund, Karin; Juhlin, Christopher; Munier, Raymond

    2015-04-01

    Approximately 10,000 years ago, as the Weichselian ice sheet retreated from northern Fennoscandia, large earthquakes occurred in response to the combined tectonic and glacial isostatic adjustment stresses. These endglacial earthquakes reached magnitudes of 7 to 8 and left scarps up to 155 km long in northernmost Fennoscandia. Most of the endglacial faults (EGFs) still show considerable earthquake activity, and the area around the Burträsk endglacial fault, south of the town of Skellefteå in northern Sweden, is not only the most active of the EGFs but also the currently most seismically active region in Sweden. Here we show the preliminary results of the first two years of a temporary deployment of seismic stations around the Burträsk fault, complementing the permanent stations of the Swedish National Seismic Network (SNSN) in the region. During the two-year period December 2012 to December 2014, the local network recorded approximately 1,500 events and is complete to approximately magnitude -0.4. We determine a new velocity model for the region and perform double-difference relocation of the events along the fault. We also analyze depth phases to further constrain the depths of some of the larger events. We find that many of the events are aligned along and to the southeast of the fault scarp, in agreement with the previously determined reverse faulting mechanism of the main event. Earthquakes extend past the mapped surface scarp to the northeast in a similar strike direction into the Bay of Bothnia, suggesting that the fault may be longer than the surface scarp indicates. We also find a number of events north of the Burträsk fault, some seemingly related to the Röjnoret EGF but some in a more diffuse area of seismicity. The Burträsk events show a seismically active zone dipping approximately 40 degrees to the southeast, with earthquakes all the way down to 35 km depth. The Burträsk fault area thereby has some of the deepest seismicity observed in Sweden. We correlate our results with those of a seismic reflection survey carried out across the fault in 2008. Focal mechanisms are calculated for all events and the highest quality mechanisms are analyzed for faulting style variations in the region. We invert the mechanisms for the causative stress state and shed light on the long-standing issue of what causes earthquakes along the Swedish northeast coast, tectonics or current glacial isostatic adjustment.

  16. Fault-free performance validation of fault-tolerant multiprocessors

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Feather, Frank E.; Grizzaffi, Ann Marie; Segall, Zary Z.; Siewiorek, Daniel P.

    1987-01-01

    A validation methodology for testing the performance of fault-tolerant computer systems was developed and applied to the Fault-Tolerant Multiprocessor (FTMP) at NASA-Langley's AIRLAB facility. This methodology was claimed to be general enough to apply to any ultrareliable computer system. The goal of this research was to extend the validation methodology and to demonstrate the robustness of the validation methodology by its more extensive application to NASA's Fault-Tolerant Multiprocessor System (FTMP) and to the Software Implemented Fault-Tolerance (SIFT) Computer System. Furthermore, the performance of these two multiprocessors was compared by conducting similar experiments. An analysis of the results shows high level language instruction execution times for both SIFT and FTMP were consistent and predictable, with SIFT having greater throughput. At the operating system level, FTMP consumes 60% of the throughput for its real-time dispatcher and 5% on fault-handling tasks. In contrast, SIFT consumes 16% of its throughput for the dispatcher, but consumes 66% in fault-handling software overhead.

  17. Radiocarbon dating of the Peruvian Chachapoya/Inca site at the Laguna de los Condores

    NASA Astrophysics Data System (ADS)

    Wild, Eva Maria; Guillen, Sonia; Kutschera, Walter; Seidler, Horst; Steier, Peter

    2007-06-01

    In 1997 a new archaeological site was discovered in the Peruvian tropical rain forest. The site is located in an area which has been occupied by the Chachapoya, a pre-Incan people, from about 800 AD on. The site comprises a large funerary place with several mausoleums built in the cliffs next to the Laguna de los Condores. More than 200 human mummies and funerary bone-bundles together with numerous grave artefacts have been found there. Although the site has been ascribed to the Chachapoya, the mummification method used is very similar to the one applied by the Inca. As part of an ongoing multidisciplinary project to explore the history of this site and of the Chachapoya people, twenty-seven (27) 14C-AMS age determinations were performed. Bone and textile-wrapping samples, as well as samples from a funerary bone bundle and associated grave artefacts, were dated. The 14C data show that the site originates from the Chachapoya pre-Inca period and that, in addition, it was used as a funerary place during the subsequent Inca occupation era. The radiocarbon results indicate that the Chachapoya may have changed their burial tradition due to the colonization by the Inca.

  18. The ambient acoustic environment in Laguna San Ignacio, Baja California Sur, Mexico.

    PubMed

    Seger, Kerri D; Thode, Aaron M; Swartz, Steven L; Urbán, Jorge R

    2015-11-01

    Each winter gray whales (Eschrichtius robustus) breed and calve in Laguna San Ignacio, Mexico, where a robust, yet regulated, whale-watching industry exists. Baseline acoustic environments in LSI's three zones were monitored between 2008 and 2013, in anticipation of a new road being paved that will potentially increase tourist activity to this relatively isolated location. These zones differ in levels of both gray whale usage and tourist activity. Ambient sound level distributions were computed in terms of percentiles of power spectral densities. While these distributions are consistent across years within each zone, inter-zone differences are substantial. The acoustic environment in the upper zone is dominated by snapping shrimp that display a crepuscular cycle. Snapping shrimp also affect the middle zone, but tourist boat transits contribute to noise distributions during daylight hours. The lower zone has three source contributors to its acoustic environment: snapping shrimp, boats, and croaker fish. As suggested from earlier studies, a 300 Hz noise minimum exists in both the middle and lower zones of the lagoon, but not in the upper zone. PMID:26627811

  19. Estimating floodplain sedimentation in the Laguna de Santa Rosa, Sonoma County, CA

    USGS Publications Warehouse

    Curtis, Jennifer A.; Flint, Lorraine E.; Hupp, Cliff R.

    2013-01-01

    We present a conceptual and analytical framework for predicting the spatial distribution of floodplain sedimentation for the Laguna de Santa Rosa, Sonoma County, CA. We assess the role of the floodplain as a sink for fine-grained sediment and investigate concerns regarding the potential loss of flood storage capacity due to historic sedimentation. We characterized the spatial distribution of sedimentation during a post-flood survey and developed a spatially distributed sediment deposition potential map that highlights zones of floodplain sedimentation. The sediment deposition potential map, built using raster files that describe the spatial distribution of relevant hydrologic and landscape variables, was calibrated using 2 years of measured overbank sedimentation data and verified using longer-term rates determined using dendrochronology. The calibrated floodplain deposition potential relation was used to estimate an average annual floodplain sedimentation rate (3.6 mm/year) for the ~11 km2 floodplain. This study documents the development of a conceptual model of overbank sedimentation, describes a methodology to estimate the potential for various parts of a floodplain complex to accumulate sediment over time, and provides estimates of short and long-term overbank sedimentation rates that can be used for ecosystem management and prioritization of restoration activities.
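
    The deposition-potential mapping idea, combining gridded hydrologic and landscape variables into a single raster and scaling it to measured rates, can be sketched as a simple weighted overlay. The input variables, normalization, and weights below are illustrative assumptions, not the calibrated relation developed in the study.

      # Hypothetical sketch of a spatially distributed deposition-potential overlay.
      # The input rasters, normalization, and weights are illustrative assumptions;
      # they are not the calibrated relation developed in the study above.
      import numpy as np

      rng = np.random.default_rng(1)
      shape = (200, 300)                             # raster grid (rows, cols)

      # Stand-ins for gridded predictor variables (would normally be read from GeoTIFFs).
      flood_depth   = rng.gamma(2.0, 0.5, shape)     # m; deeper water -> more deposition
      flow_velocity = rng.gamma(2.0, 0.2, shape)     # m/s; faster flow -> less deposition
      dist_channel  = rng.uniform(0, 500, shape)     # m; closer to channel -> more supply

      def normalize(a, invert=False):
          """Rescale a raster to 0-1; optionally invert so high raw values score low."""
          scaled = (a - a.min()) / (a.max() - a.min())
          return 1.0 - scaled if invert else scaled

      # Simple weighted overlay (weights are assumed and sum to 1).
      potential = (0.4 * normalize(flood_depth)
                   + 0.3 * normalize(flow_velocity, invert=True)
                   + 0.3 * normalize(dist_channel, invert=True))

      # Convert relative potential to an annual rate by scaling to a measured mean (assumed 3.6 mm/yr).
      rate_mm_per_yr = potential / potential.mean() * 3.6
      print("Modeled basin-average rate (mm/yr):", round(rate_mm_per_yr.mean(), 2))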

  20. Impact of Water Resorts Development along Laguna de Bay on Groundwater Resources

    NASA Astrophysics Data System (ADS)

    Jago-on, K. A. B.; Reyes, Y. K.; Siringan, F. P.; Lloren, R. B.; Balangue, M. I. R. D.; Pena, M. A. Z.; Taniguchi, M.

    2014-12-01

    Rapid urbanization and land use changes in areas along Laguna de Bay, one of the largest freshwater lakes in Southeast Asia, have resulted in increased economic activities and demand for groundwater resources from households, commerce and industries. One significant activity that can affect groundwater is the development of the water resorts industry, which includes hot spring spas. This study aims to determine the impact of the proliferation of these water resorts in Calamba and Los Banos, urban areas located on the southern coast of the lake, on groundwater as a resource. Calamba, being the "Hot Spring Capital of the Philippines", presently has more than 300 resorts, while Los Banos has at least 38 resorts. Results from an initial survey of resorts show that the swimming pools are drained/changed on average 2-3 times a week, or even daily during peak periods of tourist arrivals. This indicates a large demand on the groundwater. Monitoring of actual groundwater extraction is a challenge, however, as most of these resorts operate without water use permits. The unrestrained exploitation of groundwater has resulted in the drying up of older wells and a decrease in hot spring water temperature. It is necessary to strengthen the implementation of laws and policies, and to enhance partnerships among government, private sector groups, civil society and communities to promote groundwater sustainability.

  1. Laguna Negra Virus Infection Causes Hantavirus Pulmonary Syndrome in Turkish Hamsters (Mesocricetus brandti).

    PubMed

    Hardcastle, K; Scott, D; Safronetz, D; Brining, D L; Ebihara, H; Feldmann, H; LaCasse, R A

    2016-01-01

    Laguna Negra virus (LNV) is a New World hantavirus associated with severe and often fatal cardiopulmonary disease in humans, known as hantavirus pulmonary syndrome (HPS). Five hamster species were evaluated for clinical and serologic responses following inoculation with 4 hantaviruses. Of the 5 hamster species, only Turkish hamsters infected with LNV demonstrated signs consistent with HPS and a fatality rate of 43%. Clinical manifestations in infected animals that succumbed to disease included severe and rapid onset of dyspnea, weight loss, leukopenia, and reduced thrombocyte numbers as compared to uninfected controls. Histopathologic examination revealed lung lesions that resemble the hallmarks of HPS in humans, including interstitial pneumonia and pulmonary edema, as well as generalized infection of endothelial cells and macrophages in major organ tissues. Histologic lesions corresponded to the presence of viral antigen in affected tissues. To date, there have been no small animal models available to study LNV infection and pathogenesis. The Turkish hamster model of LNV infection may be important in the study of LNV-induced HPS pathogenesis and development of disease treatment and prevention strategies. PMID:25722219

  2. Expert System Detects Power-Distribution Faults

    NASA Technical Reports Server (NTRS)

    Walters, Jerry L.; Quinn, Todd M.

    1994-01-01

    Autonomous Power Expert (APEX) computer program is prototype expert-system program detecting faults in electrical-power-distribution system. Assists human operators in diagnosing faults and deciding what adjustments or repairs needed for immediate recovery from faults or for maintenance to correct initially nonthreatening conditions that could develop into faults. Written in Lisp.

  3. 20 CFR 410.561b - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Fault. 410.561b Section 410.561b Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, TITLE IV-BLACK LUNG BENEFITS (1969- ) Payment of Benefits § 410.561b Fault. Fault as used in without fault (see §...

  4. 20 CFR 410.561b - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Fault. 410.561b Section 410.561b Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, TITLE IV-BLACK LUNG BENEFITS (1969- ) Payment of Benefits § 410.561b Fault. Fault as used in without fault (see §...

  5. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the...

  6. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the...

  7. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the...

  8. Spontaneous rupture on irregular faults

    NASA Astrophysics Data System (ADS)

    Liu, C.

    2014-12-01

    It is now known (e.g. Robinson et al., 2006) that when ruptures propagate around bends, the rupture velocity decreases. In the extreme case, a large bend in the fault can stop the rupture. We develop a 2-D finite difference method to simulate spontaneous dynamic rupture on irregular faults. This method is based on a second-order leap-frog finite difference scheme on a uniform mesh of triangles. A relaxation method is used to generate an irregular, fault-geometry-conforming mesh from the uniform mesh. Through this numerical coordinate mapping, the elastic wave equations are transformed and solved in a curvilinear coordinate system. Extensive numerical experiments using the linear slip-weakening law will be shown to demonstrate the effect of fault geometry on rupture properties. A long-term goal is to simulate the strong ground motion in the vicinity of bends, jogs, etc.
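
    A greatly simplified sketch of second-order explicit time stepping for the scalar wave equation conveys the flavor of the finite difference approach, though the actual method operates on a triangular mesh mapped into curvilinear coordinates and solves the elastic equations with a slip-weakening friction law, none of which is reproduced here. Grid spacing, wave speed, and the initial pulse are assumed values.

      # Greatly simplified illustration of second-order explicit (leapfrog-style) time stepping
      # for the 1-D scalar wave equation u_tt = c^2 u_xx. The method described above works on a
      # triangular mesh mapped to a curvilinear coordinate system; none of that is reproduced here.
      import numpy as np

      c, L, nx = 3000.0, 10000.0, 401           # wave speed (m/s), domain length (m), grid points
      dx = L / (nx - 1)
      dt = 0.5 * dx / c                         # satisfies the CFL stability condition
      nt = 800

      x = np.linspace(0.0, L, nx)
      u_prev = np.exp(-((x - L / 2) / 200.0) ** 2)   # initial Gaussian displacement pulse
      u_curr = u_prev.copy()                          # zero initial velocity

      r2 = (c * dt / dx) ** 2
      for _ in range(nt):
          u_next = np.zeros_like(u_curr)
          # central differences in space and time (interior points only; fixed boundaries)
          u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                          + r2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
          u_prev, u_curr = u_curr, u_next

      print("Peak displacement after stepping:", round(float(u_curr.max()), 4))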

  9. Fault-tolerant system optimization

    NASA Technical Reports Server (NTRS)

    Rose, J.

    1980-01-01

    The paper describes the decisions to be made in the design of fault-tolerant systems and provides details of a comprehensive model developed to cost-optimize such systems. Economical use of replication is making fault-tolerant systems possible, and more applications in safety-critical systems such as active flight controls can be expected. In turn, the growing use of massive redundancy, fault tolerance, and reconfigurable systems is stimulating the development of new analytical tools for establishing the safety and cost effectiveness of various levels of replication. Closed-form analytical solutions for the reliability and maintainability analysis of fault-tolerant systems are complex, and Monte Carlo simulation appears to be a more desirable method of establishing the reliability and maintainability of such systems.
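
    A minimal Monte Carlo reliability estimate for a redundant configuration illustrates the simulation approach the paper favors over closed-form solutions. The failure rate, mission time, and 2-out-of-3 voting arrangement are assumptions chosen for illustration.

      # Minimal Monte Carlo sketch of reliability estimation for a redundant system,
      # in the spirit of the simulation approach mentioned above. Failure-rate values
      # and the 2-out-of-3 voting assumption are illustrative, not from the paper.
      import random

      LAMBDA = 1e-4      # assumed failure rate per channel (failures/hour)
      MISSION = 10.0     # mission time (hours)
      TRIALS = 200_000

      def channel_survives():
          """Draw an exponential time-to-failure and test it against the mission time."""
          ttf = random.expovariate(LAMBDA)
          return ttf > MISSION

      survived = 0
      for _ in range(TRIALS):
          working = sum(channel_survives() for _ in range(3))
          if working >= 2:                  # triplex system needs 2 of 3 channels
              survived += 1

      print("Estimated mission reliability:", survived / TRIALS)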

  10. Cell boundary fault detection system

    DOEpatents

    Archer, Charles Jens (Rochester, MN); Pinnow, Kurt Walter (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian Edward (Rochester, MN)

    2009-05-05

    A method determines a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.

  11. Fault-tolerant rotary actuator

    DOEpatents

    Tesar, Delbert

    2006-10-17

    A fault-tolerant actuator module, in a single containment shell, containing two actuator subsystems that are either asymmetrically or symmetrically laid out is provided. Fault tolerance in the actuators of the present invention is achieved by the employment of dual sets of equal resources. Dual resources are integrated into single modules, with each having the external appearance and functionality of a single set of resources.

  12. Hardware Fault Simulator for Microprocessors

    NASA Technical Reports Server (NTRS)

    Hess, L. M.; Timoc, C. C.

    1983-01-01

    Breadboarded circuit is faster and more thorough than software simulator. Elementary fault simulator for AND gate uses three gates and a shift register to simulate stuck-at-one or stuck-at-zero conditions at inputs and output. Experimental results showed the hardware fault simulator for a microprocessor gave results faster than a software simulator, by two orders of magnitude, with one test being applied every 4 microseconds.
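
    The stuck-at fault model that the simulator implements in hardware can be sketched in a few lines of software for a single AND gate: inject each stuck-at condition, apply all input vectors, and record which vectors expose the fault. The sketch is purely illustrative and says nothing about the breadboarded circuit itself.

      # Sketch of stuck-at fault simulation for a single 2-input AND gate, mirroring the idea
      # of checking whether test vectors expose stuck-at-0/1 conditions. Purely illustrative;
      # the hardware simulator described above does this with gates and a shift register.
      from itertools import product

      def and_gate(a, b, fault=None):
          """fault is None or a (node, value) pair; nodes are 'a', 'b', 'out'."""
          if fault:
              node, val = fault
              if node == 'a': a = val
              if node == 'b': b = val
          out = a & b
          if fault and fault[0] == 'out':
              out = fault[1]
          return out

      faults = [(n, v) for n in ('a', 'b', 'out') for v in (0, 1)]
      for node, val in faults:
          detecting = [(a, b) for a, b in product((0, 1), repeat=2)
                       if and_gate(a, b) != and_gate(a, b, fault=(node, val))]
          print(f"stuck-at-{val} on {node}: detected by inputs {detecting}")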

  13. MER surface fault protection system

    NASA Technical Reports Server (NTRS)

    Neilson, Tracy

    2005-01-01

    The Mars Exploration Rovers surface fault protection design was influenced by the fact that the solar-powered rovers must recharge their batteries during the day to survive the night. The rovers needed to autonomously maintain thermal stability and initiate safe and reliable communication with orbiting assets or directly with Earth, while maintaining energy balance. This paper will describe the system fault protection design for the surface phase of the mission.

  14. Fault Tree Analysis: A Bibliography

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Fault tree analysis is a top-down approach to the identification of process hazards. It is one of the best methods for systematically identifying and graphically displaying the many ways something can go wrong. This bibliography references 266 documents in the NASA STI Database that contain the major concepts, fault tree analysis and risk and probability theory, in the basic index or major subject terms. An abstract is included with most citations, followed by the applicable subject terms.
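
    A tiny worked example shows the quantitative side of fault tree analysis referenced throughout the bibliography: basic-event probabilities are combined through AND and OR gates to obtain a top-event probability. The event names and probabilities below are invented for illustration.

      # Tiny worked example of the quantitative side of fault tree analysis: combining
      # basic-event probabilities through AND/OR gates. Event names and probabilities
      # are assumed purely for illustration; they do not come from the bibliography above.
      def p_and(*probs):
          """Top event requires all inputs to fail (independent events)."""
          result = 1.0
          for p in probs:
              result *= p
          return result

      def p_or(*probs):
          """Top event occurs if any input fails (independent events)."""
          result = 1.0
          for p in probs:
              result *= (1.0 - p)
          return 1.0 - result

      pump_fails   = 1e-3
      valve_sticks = 5e-4
      sensor_fails = 2e-3

      # Example tree: loss of cooling = (pump fails OR valve sticks) AND sensor fails to alarm.
      top = p_and(p_or(pump_fails, valve_sticks), sensor_fails)
      print(f"Top-event probability: {top:.2e}")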

  15. Normal fault earthquakes or graviquakes

    PubMed Central

    Doglioni, C.; Carminati, E.; Petricca, P.; Riguzzi, F.

    2015-01-01

    Earthquakes are a dissipation of energy through elastic waves. Canonically, this is the elastic energy accumulated during the interseismic period. However, in crustal extensional settings, gravity is the main energy source for hangingwall fault collapse. The gravitational potential is about 100 times larger than the observed magnitude, far more than enough to explain the earthquake. Therefore, normal faults have a different mechanism of energy accumulation and dissipation (graviquakes) with respect to other tectonic settings (strike-slip and contractional), where elastic energy allows motion even against gravity. The bigger the involved volume, the larger the magnitude. The steeper the normal fault, the larger the vertical displacement and the larger the seismic energy released. Normal faults activate preferentially at about 60° but they can be shallower in low-friction rocks. In low static friction rocks, the fault may partly creep, dissipating gravitational energy without releasing a great amount of seismic energy. The maximum volume involved by graviquakes is smaller than in the other tectonic settings, the activated fault being at most about three times the hypocentre depth, which explains their higher b-value and the lower magnitude of the largest recorded events. Having a different phenomenology, graviquakes show peculiar precursors. PMID:26169163

  16. Normal fault earthquakes or graviquakes

    NASA Astrophysics Data System (ADS)

    Doglioni, C.; Carminati, E.; Petricca, P.; Riguzzi, F.

    2015-07-01

    Earthquakes are a dissipation of energy through elastic waves. Canonically, this is the elastic energy accumulated during the interseismic period. However, in crustal extensional settings, gravity is the main energy source for hangingwall fault collapse. The gravitational potential is about 100 times larger than the observed magnitude, far more than enough to explain the earthquake. Therefore, normal faults have a different mechanism of energy accumulation and dissipation (graviquakes) with respect to other tectonic settings (strike-slip and contractional), where elastic energy allows motion even against gravity. The bigger the involved volume, the larger the magnitude. The steeper the normal fault, the larger the vertical displacement and the larger the seismic energy released. Normal faults activate preferentially at about 60° but they can be shallower in low-friction rocks. In low static friction rocks, the fault may partly creep, dissipating gravitational energy without releasing a great amount of seismic energy. The maximum volume involved by graviquakes is smaller than in the other tectonic settings, the activated fault being at most about three times the hypocentre depth, which explains their higher b-value and the lower magnitude of the largest recorded events. Having a different phenomenology, graviquakes show peculiar precursors.

  17. Software Fault Tolerance: A Tutorial

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2000-01-01

    Because of our present inability to produce error-free software, software fault tolerance is and will continue to be an important consideration in software systems. The root cause of software design errors is the complexity of the systems. Compounding the problems in building correct software is the difficulty in assessing the correctness of software for highly complex systems. After a brief overview of the software development processes, we note how hard-to-detect design faults are likely to be introduced during development and how software faults tend to be state-dependent and activated by particular input sequences. Although component reliability is an important quality measure for system level analysis, software reliability is hard to characterize and the use of post-verification reliability estimates remains a controversial issue. For some applications software safety is more important than reliability, and fault tolerance techniques used in those applications are aimed at preventing catastrophes. Single version software fault tolerance techniques discussed include system structuring and closure, atomic actions, inline fault detection, exception handling, and others. Multiversion techniques are based on the assumption that software built differently should fail differently and thus, if one of the redundant versions fails, it is expected that at least one of the other versions will provide an acceptable output. Recovery blocks, N-version programming, and other multiversion techniques are reviewed.
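
    The recovery block technique mentioned above can be sketched in a few lines: run a primary routine, apply an acceptance test, and fall back to an alternate if the test fails or the routine raises an error. The routines and acceptance test here are invented for illustration and are not from the tutorial.

      # Minimal sketch of the recovery-block idea: run a primary routine, check its result
      # with an acceptance test, and fall back to alternates on failure. The routines and
      # acceptance test are invented for illustration.
      import math

      def acceptance_test(x, value):
          """Accept a square-root candidate if squaring it reproduces the input closely."""
          return value >= 0 and abs(value * value - x) < 1e-6 * max(1.0, x)

      def primary(x):
          return math.exp(0.5 * math.log(x))     # fails for x == 0 (log of zero)

      def alternate(x):
          return math.sqrt(x)

      def recovery_block(x, routines, test):
          for routine in routines:
              try:
                  result = routine(x)
              except Exception:
                  continue                        # treat an exception as a failed attempt
              if test(x, result):
                  return result
          raise RuntimeError("all alternates failed the acceptance test")

      print(recovery_block(2.0, [primary, alternate], acceptance_test))
      print(recovery_block(0.0, [primary, alternate], acceptance_test))  # falls back to alternate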

  18. Passive fault current limiting device

    DOEpatents

    Evans, Daniel J. (Wheeling, IL); Cha, Yung S. (Darien, IL)

    1999-01-01

    A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit in a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel-connected coils of copper, a high-temperature superconductor or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux densities produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils, which results in an increase in the impedance in the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. In a preferred embodiment, the major voltage during a fault condition is across the coils wound on the common core.

  19. Passive fault current limiting device

    DOEpatents

    Evans, D.J.; Cha, Y.S.

    1999-04-06

    A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit in a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel-connected coils of copper, a high-temperature superconductor or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux densities produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils, which results in an increase in the impedance in the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. In a preferred embodiment, the major voltage during a fault condition is across the coils wound on the common core. 6 figs.

  20. Normal fault earthquakes or graviquakes.

    PubMed

    Doglioni, C; Carminati, E; Petricca, P; Riguzzi, F

    2015-01-01

    Earthquakes are a dissipation of energy through elastic waves. Canonically, this is the elastic energy accumulated during the interseismic period. However, in crustal extensional settings, gravity is the main energy source for hangingwall fault collapse. The gravitational potential is about 100 times larger than the observed magnitude, far more than enough to explain the earthquake. Therefore, normal faults have a different mechanism of energy accumulation and dissipation (graviquakes) with respect to other tectonic settings (strike-slip and contractional), where elastic energy allows motion even against gravity. The bigger the involved volume, the larger the magnitude. The steeper the normal fault, the larger the vertical displacement and the larger the seismic energy released. Normal faults activate preferentially at about 60° but they can be shallower in low-friction rocks. In low static friction rocks, the fault may partly creep, dissipating gravitational energy without releasing a great amount of seismic energy. The maximum volume involved by graviquakes is smaller than in the other tectonic settings, the activated fault being at most about three times the hypocentre depth, which explains their higher b-value and the lower magnitude of the largest recorded events. Having a different phenomenology, graviquakes show peculiar precursors. PMID:26169163

  1. Fault diagnosis of power systems

    SciTech Connect

    Sekine, Y.; Akimoto, Y.; Kunugi, M.

    1992-05-01

    Fault diagnosis of power systems plays a crucial role in power system monitoring and control, ensuring a stable supply of electrical power to consumers. In the case of multiple faults or incorrect operation of protective devices, fault diagnosis requires judgment of complex conditions at various levels. For this reason, research into the application of knowledge-based systems got an early start and reports of such systems have appeared in many papers. In this paper, these systems are classified by the method of inference utilized in the knowledge-based systems for fault diagnosis of power systems. The characteristics of each class and corresponding issues, as well as the state-of-the-art techniques for improving their performance, are presented. Additional topics covered are user interfaces, interfaces with energy management systems (EMSs), and expert system development tools for fault diagnosis. Results and evaluation of actual operation in the field are also discussed. Knowledge-based fault diagnosis of power systems is expected to continue to spread.

  2. Tutorial: Advanced fault tree applications using HARP

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.

    1993-01-01

    Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.

  3. Nonlinear Network Dynamics on Earthquake Fault Systems

    NASA Astrophysics Data System (ADS)

    Rundle, Paul B.; Rundle, John B.; Tiampo, Kristy F.; Sa Martins, Jorge S.; McGinnis, Seth; Klein, W.

    2001-10-01

    Earthquake faults occur in interacting networks having emergent space-time modes of behavior not displayed by isolated faults. Using simulations of the major faults in southern California, we find that the physics depends on the elastic interactions among the faults defined by network topology, as well as on the nonlinear physics of stress dissipation arising from friction on the faults. Our results have broad applications to other leaky threshold systems such as integrate-and-fire neural networks.

  4. Fault Management Guiding Principles

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; Friberg, Kenneth H.; Fesq, Lorraine; Barley, Bryan

    2011-01-01

    Regardless of mission type, whether deep space or low Earth orbit, robotic or human spaceflight, Fault Management (FM) is a critical aspect of NASA space missions. As the complexity of space missions grows, the complexity of supporting FM systems increases in turn. Data on recent NASA missions show that development of FM capabilities is a common driver for significant cost overruns late in the project development cycle. Efforts to understand the drivers behind these cost overruns, spearheaded by NASA's Science Mission Directorate (SMD), indicate that they are primarily caused by the growing complexity of FM systems and the lack of maturity of FM as an engineering discipline. NASA can and does develop FM systems that effectively protect mission functionality and assets. The cost growth results from a lack of FM planning and emphasis by project management, as well as from the maturity of FM as an engineering discipline, which lags behind that of other engineering disciplines. As a step towards controlling the cost growth associated with FM development, SMD has commissioned a multi-institution team to develop a practitioner's handbook representing best practices for the end-to-end processes involved in engineering FM systems. While currently concentrating primarily on FM for science missions, the expectation is that this handbook will grow into a NASA-wide handbook, serving as a companion to the NASA Systems Engineering Handbook. This paper presents a snapshot of the principles that have been identified to guide FM development from cradle to grave. The principles range from considerations for integrating FM into the project and SE organizational structure, to the relationship between FM designs and mission risk, to the use of the various tools of FM (e.g., redundancy) to meet the FM goal of protecting mission functionality and assets.

  5. Critical fault patterns determination in fault-tolerant computer systems

    NASA Technical Reports Server (NTRS)

    Mccluskey, E. J.; Losq, J.

    1978-01-01

    The method proposed tries to enumerate all the critical fault-patterns (successive occurrences of failures) without analyzing every single possible fault. The conditions for the system to be operating in a given mode can be expressed in terms of the static states. Thus, one can find all the system states that correspond to a given critical mode of operation. The next step consists in analyzing the fault-detection mechanisms, the diagnosis algorithm and the process of switch control. From them, one can find all the possible system configurations that can result from a failure occurrence. Thus, one can list all the characteristics, with respect to detection, diagnosis, and switch control, that failures must have to constitute critical fault-patterns. Such an enumeration of the critical fault-patterns can be directly used to evaluate the overall system tolerance to failures. Present research is focused on how to efficiently make use of these system-level characteristics to enumerate all the failures that verify these characteristics.

  6. Fault Analysis in Solar Photovoltaic Arrays

    NASA Astrophysics Data System (ADS)

    Zhao, Ye

    Fault analysis in solar photovoltaic (PV) arrays is a fundamental task for increasing reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown, at times, to prevent the fault-current protection devices from tripping. A small-scale experimental PV benchmark system has been developed at Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low irradiance conditions. The other is fault evolution in a PV array during a night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition" conditions. However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" or "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.
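
    The current-limiting behavior that makes PV faults hard to clear can be illustrated with an idealized single-diode module model (series and shunt resistance neglected). All module parameters below are assumed; the point is only that the short-circuit current exceeds the normal operating current by a small margin, so a series fuse sized above the operating current may never see enough current to trip.

      # Minimal single-diode-style sketch (series/shunt resistance neglected) of why PV faults
      # are hard to detect with fuses: the short-circuit current is only slightly above the
      # normal operating current. All module parameters are assumed for illustration.
      import numpy as np

      I_PH = 8.5          # photocurrent (A), roughly proportional to irradiance
      I_0 = 1e-9          # diode saturation current (A)
      N_S = 60            # cells in series
      N = 1.3             # diode ideality factor
      V_T = 0.02585       # thermal voltage near 25 C (V)

      def module_current(v):
          """Module output current at terminal voltage v (ideal single-diode model)."""
          return I_PH - I_0 * (np.exp(v / (N * N_S * V_T)) - 1.0)

      v_oc = N * N_S * V_T * np.log(I_PH / I_0 + 1.0)   # open-circuit voltage
      v = np.linspace(0.0, v_oc, 2000)
      i = module_current(v)
      p = v * i
      v_mp = v[np.argmax(p)]
      i_mp = module_current(v_mp)
      i_sc = module_current(0.0)

      print(f"Current at maximum power point: {i_mp:.2f} A")
      print(f"Short-circuit (fault-like) current: {i_sc:.2f} A")
      print(f"Fault current is only {i_sc / i_mp:.2f}x the normal operating current")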

  7. Faulting processes at high fluid pressures: An example of fault valve behavior from the Wattle Gully Fault, Victoria, Australia

    NASA Astrophysics Data System (ADS)

    Cox, Stephen F.

    1995-07-01

    The internal structures of the Wattle Gully Fault provide insights about the mechanics and dynamics of fault systems exhibiting fault valve behavior in high fluid pressure regimes. This small, high-angle reverse fault zone developed at temperatures near 300°C in the upper crust, late during mid-Devonian regional crustal shortening in central Victoria, Australia. The Wattle Gully Fault forms part of a network of faults that focused upward migration of fluids generated by metamorphism and devolatilisation at deeper crustal levels. The fault has a length of around 800 m and a maximum displacement of 50 m and was oriented at 60° to 80° to the maximum principal stress during faulting. The structure was therefore severely misoriented for frictional reactivation. This factor, together with the widespread development of steeply dipping fault fill quartz veins and associated subhorizontal extension veins within the fault zone, indicates that faulting occurred at low shear stresses and in a near-lithostatic fluid pressure regime. The internal structures of these veins, and overprinting relationships between veins and faults, indicate that vein development was intimately associated with faulting and involved numerous episodes of fault dilatation and hydrothermal sealing and slip, together with repeated hydraulic extension fracturing adjacent to slip surfaces. The geometries, distribution and internal structures of veins in the Wattle Gully Fault Zone are related to variations in shear stress, fluid pressure, and near-field principal stress orientations during faulting. Vein opening is interpreted to have been controlled by repeated fluid pressure fluctuations associated with cyclic, deformation-induced changes in fault permeability during fault valve behavior. Rates of recovery of shear stress and fluid pressure after rupture events are interpreted to be important factors controlling time dependence of fault shear strength and slip recurrence. Fluctuations in shear stress and transient rotations of near-field principal stresses, indicated by vein geometries, are interpreted to indicate at least local near-total relief of shear stress during some rupture events. Fault valve behavior has important effects on the dynamics of fluid migration around active faults that are sites of focused fluid migration. In particular, fault valve action is expected to lead to distinctly different fluid migration patterns adjacent to faults before, and immediately after, rupture. These fluid migration patterns differ in important ways from those predicted by models for dilatancy-diffusion effects and for poroelastic responses around reverse faults.

  8. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1991-01-01

    Twenty independently developed but functionally equivalent software versions were used to investigate and compare empirically some properties of N-version programming, Recovery Block, and Consensus Recovery Block, using the majority and consensus voting algorithms. This was also compared with another hybrid fault-tolerant scheme called Acceptance Voting, using dynamic versions of consensus and majority voting. Consensus voting provides adaptation of the voting strategy to varying component reliability, failure correlation, and output space characteristics. Since failure correlation among versions effectively reduces the cardinality of the space in which the voter makes decisions, consensus voting is usually preferable to simple majority voting in any fault-tolerant system. When versions have considerably different reliabilities, the version with the best reliability will perform better than any of the fault-tolerant techniques.
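
    The distinction between majority and consensus voting compared in the study can be sketched directly. The version outputs below are invented to show a case where no value reaches a majority but consensus (plurality) voting still returns the largest agreement group.

      # Sketch of the difference between majority and consensus (plurality) voting over
      # N redundant version outputs. The outputs below are invented to show a case where
      # consensus voting still returns an answer when majority voting cannot.
      from collections import Counter

      def majority_vote(outputs):
          """Return a value agreed on by more than half the versions, else None."""
          value, count = Counter(outputs).most_common(1)[0]
          return value if count > len(outputs) // 2 else None

      def consensus_vote(outputs):
          """Return the value backed by the largest agreement group (ties broken arbitrarily)."""
          value, _ = Counter(outputs).most_common(1)[0]
          return value

      # Five versions; correlated failures split the erroneous outputs into small groups.
      outputs = [4.2, 4.2, 3.9, 5.0, 1.1]
      print("majority :", majority_vote(outputs))    # None, since no value has 3 of 5 votes
      print("consensus:", consensus_vote(outputs))   # 4.2, the largest agreement group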

  9. Origin and evolution of the Laguna Potrok Aike maar (Southern Patagonia, Argentina) as revealed by seismic data

    NASA Astrophysics Data System (ADS)

    Gebhardt, C.; de Batist, M. A.; Niessen, F.; Anselmetti, F.; Ariztegui, D.; Haberzettl, T.; Ohlendorf, C.; Zolitschka, B.

    2009-12-01

    Seismic reflection and refraction data provide insights into the sedimentary infill and the underlying volcanic structure of Laguna Potrok Aike, a maar lake situated in the Pali Aike Volcanic Field, Southern Patagonia. The lake has a diameter of ~3.5 km, a maximum water depth of ~100 m and a presumed age of ~770 ka. Its sedimentary regime is influenced by climatic and hydrologic conditions related to the Antarctic Circumpolar Current, the Southern Hemispheric Westerlies and sporadic outbreaks of Antarctic polar air masses. Multiproxy environmental reconstructions of the last 16 ka document that this terminal lake is highly sensitive to climate change. Laguna Potrok Aike has recently become a major focus of the International Continental Scientific Drilling Program and was drilled down to 100 m below the lake floor in late 2008 within the PASADO project. The sediments are likely to contain a continental record spanning the last ca. 80 kyr that is unique in the South American realm. Seismic reflection data show relatively undisturbed, stratified lacustrine sediments at least in the upper ~100 m of the sedimentary infill, but these are possibly obscured by gas and/or coarser material over larger areas. A model calculated from seismic refraction data reveals a funnel-shaped structure embedded in the sandstone rocks of the surrounding Santa Cruz Formation. This funnel structure is filled by lacustrine sediments up to 370 m in thickness. These can be separated into two distinct subunits, with low acoustic velocities of 1500-1800 m/s in the upper subunit pointing to unconsolidated lacustrine muds, and enhanced velocities of 2000-2350 m/s in the lower subunit. Below these lacustrine sediments, a unit of probably volcanoclastic origin is observed (>2400 m/s). This sedimentary succession compares well with other well-studied sequences (e.g. the Messel and Baruth maars, Germany), confirming phreatomagmatic maar explosions as the origin of Laguna Potrok Aike.

  10. Transform Faults in the California Continental Borderland

    NASA Astrophysics Data System (ADS)

    Legg, M. R.; Kamerling, M. J.

    2006-12-01

    As an active part of the larger Pacific-North America transform, the California Continental Borderland contains numerous strike-slip faults. Although northwest-trending Borderland faults are right-slip, in general, some are transform faults and others are transcurrent faults. We consider the following as essential characteristics of transform faults: 1) strike-slip character; 2) links different plate boundary types; 3) lithospheric scale; 4) strike is parallel to relative plate motion vector. For example, we consider the dextral strike-slip San Clemente fault as a transform fault because it links underthrusting (subduction) of Borderland crust beneath the western Transverse Ranges at its northern end, with continental rifts and possibly nascent seafloor spreading centers within the Borderland at its southern end. Geologic slip indicators and earthquake focal mechanisms show that right-slip on the San Clemente fault is parallel to Pacific-North America transform motion, about N40W in this area. The 600-km long fault appears to cut through the entire crust based on deep seismic imaging and other geophysical data. In contrast, the adjacent San Diego Trough fault, another northwest-trending right-slip fault, is not considered a transform fault, but rather a transcurrent fault that accommodates right shear within the broad and complex plate boundary of southern California. The San Diego Trough fault has a N30W strike, about 10 degrees oblique to the relative plate motion. The San Diego Trough fault soled into a mid-crustal detachment fault, at least early in its evolution. The San Diego Trough fault links other strike-slip faults including the Catalina and Agua Blanca faults. Other small transform faults within South San Clemente Basin, a large rhombochasm inferred to be a nascent seafloor spreading center, show the classic left separation of ridge volcanoes on northwest-trending right-slip faults of the San Clemente fault zone. Other major northwest-trending Borderland right-slip faults may be relict transforms from older periods within the evolution of the Pacific-North America transform plate boundary. Changing relative plate motion vectors subsequently forced oblique motion on these older faults and promoted creation and growth of new transform faults to accommodate the northwest movement of the Pacific plate more readily.

  11. The Development of a Restless Rhyolite Magma Chamber at Laguna del Maule, Chile

    NASA Astrophysics Data System (ADS)

    Andersen, N.; Singer, B. S.; Jicha, B. R.; Fierstein, J.; Vazquez, J. A.

    2013-12-01

    The Laguna del Maule (LdM) volcanic field is a site of rapid crustal deformation at rates in excess of 200 mm/yr since 2007. The uplift is centered in the 16 km diameter LdM lake basin, which is ringed by 21 rhyolite domes and coulees erupted since the last glacial retreat. The lack of previously common andesite and dacite eruptions since 19 ka and coherent major and trace element variation throughout post-glacial time suggests the presence of a large silicic magma body beneath the LdM basin. Assimilation-fractional crystallization modeling predicts the rhyolites evolved at 5 km depth by 73% fractionation of a basaltic parent and modest assimilation of granodiorite accounting for up to 20% of the highest silica rhyolite. AFC processes dominate the evolution from basalt, however the differentiation of the silicic magma is complicated by liquid extraction from crystal mush, remelting of cumulate by intruding basalt, and trace element diffusion. Two-oxide thermometry indicates a relatively hot, oxidized system with eruptive temperatures ranging from 760 - 850° C and fO2 at QFM+2. Pilot ion microprobe 238U-230Th dating of zircon rims suggests the shallow LdM magma system was assembled over a period of 100-200 kyr. 40Ar/39Ar geochronology and field relationships reveal the post-glacial silicic volcanism occurred in two phases. Phase 1 began approximately coincident with deglaciation at 25 ka with the eruption of the rhyolite East of Presa Laguna del Maule. Over the next 6 ky, 6 small rhyodacite domes, a larger rhyodacite flow, and 4 andesite flows erupted in the NW basin and two silicic domes 12 km to the SE. Phase 1 culminates with the eruption of the Espejos rhyolite near the N shore of the lake at 19 ka. The locus of volcanism then migrates SE and phase 2 begins at ~10 ka with the eruption of the Cari Launa rhyolite and the early flows of the Barrancas complex. This period is more voluminous, erupting 4.8 km3 compared to 1.7 km3 during phase 1. Phase 2 produced lower silica rhyolite (72-74%) than the majority erupted during phase 1 (75-76%) but a smaller range of compositions overall as andesite and rhyodacite eruptions become rare and peripheral. The two phases are also distinguished by a small, but consistent, shift in REE contents. Phase 1 is marked by lower REE contents, but higher Ce/Sm ratios. The chemical trends are temporally, rather than spatially, correlated reflecting the evolution of an integrated magma body rather than local vagaries in magmatic process. Early eruptions in both phases 1 and 2 are characterized by elevated two-oxide temperatures, the presence of trace pyrrhotite, and Ta contents 2-3 times greater than subsequent eruptions, an enrichment of similar magnitude to that observed in the early Bishop Tuff. The intrusion of basalt to the base of the magma chamber could provide a source of heat and volatiles catalyzing the crystallization of Fe-sulfide and roofward diffusion of Ta. Such events have been followed by periods of heightened volcanic activity and produced an increasing rate of silicic magma generation. If the current unrest is indicative of basaltic intrusion, it could foreshadow continuing silicic volcanism at LdM, potentially leading to a catastrophic caldera forming eruption.

  12. Graphite as a fault lubricant

    NASA Astrophysics Data System (ADS)

    Oohashi, K.; Hirose, T.; Shimamoto, T.

    2011-12-01

    Graphite is a well-known solid lubricant and has been found at ~14 vol% in fault zones in a variety of geological settings (e.g. the Atotsugawa fault system, Japan: Oohashi et al., 2011a, submitted; the KTB borehole, Germany: Zulauf et al., 1990; and the Err nappe detachment fault, Switzerland: Manatschal, 1999). However, it has received little attention even though the friction of graphite gouge is strikingly low (steady-state friction coefficient ~0.1) over seven orders of magnitude in slip rate (0.16 μm/s to 1.3 m/s; Oohashi et al., 2011b). Friction experiments were therefore performed on mixed graphite and quartz gouges of different compositions, using a rotary-shear low- to high-velocity friction apparatus, in order to determine the minimum amount of graphite needed to reduce the frictional strength of faults dramatically. The experimental results clearly indicate that the friction coefficient of the mixture gouge decreases with graphite content following a power-law relation irrespective of slip rate; it starts to decrease at a fraction of 5 vol% and reaches nearly the level of pure graphite gouge at fractions of more than 20 vol%. This result implies that the 14 vol% of graphite in natural fault rock is a sufficient amount to reduce the shear strength to half of its initial value. According to the textural observations, the slight weakening of the 5-8 vol% graphite mixtures is associated with the development of a partially connected graphite matrix, forming a localized slip surface. On the other hand, the formation of through-going connections of diffused graphite-matrix zones along shear planes is most likely to have caused the dramatic weakening of gouges with more than 20 vol% graphite. The non-linear power-law dependence of friction on graphite content leads to a more efficient reduction of fault strength compared with the previously reported, nearly linear dependence on clay mineral content (e.g. Shimamoto & Logan, 1981). Hence the results demonstrate the potential importance of graphite as a weakening agent of mature faults, as graphite can reduce friction more efficiently than other weak clay minerals. Such mechanical properties of graphite may explain the lack of a pronounced heat flow anomaly along major crustal faults and long-term fault weakening.

  13. Impact of solar radiation on bacterioplankton in Laguna Vilama, a hypersaline Andean lake (4650 m)

    NASA Astrophysics Data System (ADS)

    Farías, María Eugenia; Fernández-Zenoff, Verónica; Flores, Regina; Ordóñez, Omar; Estévez, Cristina

    2009-06-01

    Laguna Vilama is a hypersaline lake located at 4660 m altitude in the northwest of Argentina, high up in the Andean Puna. The impact of ultraviolet (UV) radiation on bacterioplankton was studied by collecting samples at different times of the day. Molecular analysis (DGGE) showed that the bacterioplankton community is characterized by Gamma-proteobacteria (Halomonas sp., Marinobacter sp.), Alpha-proteobacteria (Roseobacter sp.), HGC (Agrococcus jenensis and an uncultured bacterium), and CFB (uncultured Bacteroidetes). During the day, minor modifications in bacterial diversity, such as intensification of the Bacteroidetes signal and an emergence of Gamma-proteobacteria (Marinobacter flavimaris), were observed after solar exposure. DNA damage, measured as an accumulation of cyclobutane pyrimidine dimers (CPDs), in bacterioplankton and naked DNA increased from 100 CPDs Mb⁻¹ at 1200 local time (LT) to 300 CPDs Mb⁻¹ at 1600 LT, and from 80 CPDs Mb⁻¹ at 1200 LT to 640 CPDs Mb⁻¹ at 1600 LT, respectively. In addition, pure cultures of Pseudomonas sp. V1 and Brachybacterium sp. V5, two bacteria previously isolated from this environment, were exposed simultaneously with the community, and the viability of both strains diminished after solar exposure. No CPD accumulation was observed in either of the exposed cultures, but an increase in mutagenesis was detected in V5. Of both strains, only Brachybacterium sp. V5 showed CPD accumulation in naked DNA. These results suggest that the bacterioplankton community is well adapted to this highly solar-irradiated environment, showing little accumulation of CPDs and few changes in community composition. They also demonstrate that these microorganisms contain efficient mechanisms against UV damage.

  14. Continued Rapid Uplift at Laguna del Maule Volcanic Field (Chile) from 2007 through 2014

    NASA Astrophysics Data System (ADS)

    Le Mevel, H.; Feigl, K. L.; Cordova, L.; DeMets, C.; Lundgren, P.

    2014-12-01

    The current rate of uplift at Laguna del Maule (LdM) volcanic field in Chile is among the highest ever observed geodetically for a volcano that is not actively erupting. Using data from interferometric synthetic aperture radar (InSAR) and the Global Positioning System (GPS) recorded at five continuously operating stations, we measure the deformation field with dense sampling in time (1/day) and space (1/hectare). These data track the temporal evolution of the current unrest episode from its inception (sometime between 2004 and 2007) to vertical velocities faster than 200 mm/yr that continue through (at least) July 2014. Building on our previous work, we evaluate the temporal evolution by analyzing data from InSAR (ALOS, TerraSAR-X, TanDEM-X) and GPS [http://dx.doi.org/10.1093/gji/ggt438]. In addition, we consider InSAR data from ERS, ENVISAT, COSMO-SkyMed, and UAVSAR, as well as constraints from magneto-telluric (MT), seismic, and gravity surveys. The goal is to test the hypothesis that a recent magma intrusion is feeding a large, existing magma reservoir. What will happen next? To address this question, we analyze the temporal evolution of deformation at other large silicic systems such as Yellowstone, Long Valley, and Three Sisters, during well-studied episodes of unrest. We consider several parameterizations, including piecewise linear, parabolic, and Gaussian functions of time. By choosing the best-fitting model, we expect to constrain the time scales of such episodes and elucidate the processes driving them.
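
    Comparing simple parameterizations of the uplift history, as the abstract describes, amounts to fitting candidate functions of time to the deformation series and comparing misfits. The sketch below uses a synthetic series and assumed starting values; it is not the authors' analysis, and real input would be the InSAR and GPS time series.

      # Sketch of comparing simple parameterizations of an uplift time series, in the spirit
      # of the piecewise-linear / parabolic / Gaussian functions mentioned above. The synthetic
      # "observations" and starting values are invented; real input would be InSAR/GPS series.
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(2)
      t = np.linspace(2007.0, 2014.5, 60)                                   # decimal years
      uplift = 0.025 * (t - 2007.0) ** 2 + rng.normal(0.0, 0.01, t.size)    # synthetic uplift (m)

      def linear(t, a, b):
          return a + b * (t - 2007.0)

      def parabolic(t, a, b, c):
          return a + b * (t - 2007.0) + c * (t - 2007.0) ** 2

      def gaussian(t, amp, t0, sigma, offset):
          return offset + amp * np.exp(-0.5 * ((t - t0) / sigma) ** 2)

      models = [("linear", linear, (0.0, 0.2)),
                ("parabolic", parabolic, (0.0, 0.0, 0.02)),
                ("gaussian", gaussian, (10.0, 2025.0, 8.0, 0.0))]
      for name, func, p0 in models:
          try:
              popt, _ = curve_fit(func, t, uplift, p0=p0, maxfev=20000)
              rms = float(np.sqrt(np.mean((uplift - func(t, *popt)) ** 2)))
              print(f"{name:9s} RMS misfit: {rms:.4f} m")
          except RuntimeError:
              print(f"{name:9s} did not converge from the assumed starting values")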

  15. Hydrocarbon concentrations in the American oyster, Crassostrea virginica, in Laguna de Terminos, Campeche, Mexico

    SciTech Connect

    Gold-Bouchot, G.; Norena-Barroso, E.; Zapata-Perez, O.

    1995-02-01

    Laguna de Terminos is a 2,500 km² coastal lagoon in the southern Gulf of Mexico, located between 18°20' and 19°00' N, and 91°00' and 92°20' W (Figure 1). It is a shallow lagoon, with a mean depth of 3.5 m, and connected to the Gulf of Mexico through two permanent inlets, Puerto Real to the east and Carmen to the west. Several rivers, most of them from the Grijalva-Usumacinta basin (the largest in Mexico and second largest in the Gulf of Mexico), drain into the lagoon with a mean annual discharge of 6 × 10⁹ m³/year. This lagoon has been studied systematically, and is probably one of the best known in Mexico. An excellent overview of this lagoon can be found in Yanez-Arancibia and Day. The continental shelf north of Terminos, the Campeche Bank, is the main oil-producing zone in Mexico with a production of about 2 × 10⁶ barrels/day. It is also the main shrimp producer in the southern Gulf, with a mean annual catch of 18,000 tonnes/year, which represents 38 to 50% of the national catch in the Gulf of Mexico. The economic importance of this region, along with its extremely high biodiversity, both in terms of species and habitats, has prompted the Mexican government to study the creation of a wildlife refuge around Terminos. Thus, it is very important to know the current levels of pollutants in this area, as a contribution to the management plan of the proposed protected area. This paper looks at hydrocarbon concentrations in oyster tissue. 14 refs., 3 figs., 21 tabs.

  16. Dynamics of a large, restless, rhyolitic magma system at Laguna del Maule, southern Andes, Chile

    USGS Publications Warehouse

    Singer, Brad S.; Andersen, Nathan L.; Le Mével, Hélène; Feigl, Kurt L.; DeMets, Charles; Tikoff, Basil; Thurber, Clifford H.; Jicha, Brian R.; Cardonna, Carlos; Córdova, Loreto; Gil, Fernando; Unsworth, Martyn J.; Williams-Jones, Glyn; Miller, Craig W.; Fierstein, Judith; Hildreth, Edward; Vazquez, Jorge A.

    2014-01-01

    Explosive eruptions of large-volume rhyolitic magma systems are common in the geologic record and pose a major potential threat to society. Unlike other natural hazards, such as earthquakes and tsunamis, a large rhyolitic volcano may provide warning signs long before a caldera-forming eruption occurs. Yet, these signs—and what they imply about magma-crust dynamics—are not well known. This is because we have learned how these systems form, grow, and erupt mainly from the study of ash flow tuffs deposited tens to hundreds of thousands of years ago or more, or from the geophysical imaging of the unerupted portions of the reservoirs beneath the associated calderas. The Laguna del Maule Volcanic Field, Chile, includes an unusually large and recent concentration of silicic eruptions. Since 2007, the crust there has been inflating at an astonishing rate of at least 25 cm/yr. This unique opportunity to investigate the dynamics of a large rhyolitic system while magma migration, reservoir growth, and crustal deformation are actively under way is stimulating a new international collaboration. Findings thus far lead to the hypothesis that the silicic vents have tapped an extensive layer of crystal-poor, rhyolitic melt that began to form atop a magmatic mush zone that was established by ca. 20 ka with a renewed phase of rhyolite eruptions during the Holocene. Modeling of surface deformation, magnetotelluric data, and gravity changes suggest that magma is currently intruding at a depth of ~5 km. The next phase of this investigation seeks to enlarge the sets of geophysical and geochemical data and to use these observations in numerical models of system dynamics.

  17. PC-based fault finder

    SciTech Connect

    Bengiamin, N.N.; Jensen, C.A. (Electrical Engineering Dept.; Otter Tail Power Co., Fergus Falls, MN, System Protection Group); McMahon, H.

    1993-07-01

    Electric utilities are continually pressed to stay competitive while meeting the increasing demands of today's sophisticated customer. Advances in electronic equipment and the improved array of electrically driven devices are setting new standards for improved reliability and quality of service. Besides the specifications on voltage and frequency regulation and the permitted harmonic content, to name a few, the number and duration of service interruptions have a dramatic, direct effect on the customer. Accurate fault locating reduces transmission line patrolling and is of particular significance in repairing long lines in rough terrain. Shortened outage times, reduced equipment degradation and stress on the system, quickly restored service, and improved revenue are immediate outcomes of fast fault locating, which ensures minimum loss of system security. This article focuses on a PC-based (DOS) computer program that has unique features for identifying the type of fault and its location on overhead transmission/distribution lines. Balanced and unbalanced faults are identified and located accurately while accounting for changes in conductor sizes and network configuration. The presented concepts and methodologies have been spurred by Otter Tail Power's need for an accurate fault locating scheme to accommodate multiple feeders with mixed line configurations. A case study based on a section of the Otter Tail network is presented to illustrate the features and capabilities of the developed software.
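
    For context, a textbook one-ended reactance method shows how voltage and current phasors measured at one terminal yield a fault-distance estimate for a single-line-to-ground fault. It is not necessarily the algorithm used by the program described above, and all quantities below are assumed example values.

      # Illustrative one-ended reactance method for locating a single-line-to-ground fault,
      # a textbook approach shown here for context; it is not necessarily the algorithm used
      # by the PC-based program described above. All quantities are assumed example values.
      import cmath

      # Positive- and zero-sequence line impedances per km (assumed).
      z1_per_km = complex(0.05, 0.40)     # ohm/km
      z0_per_km = complex(0.20, 1.20)     # ohm/km
      k0 = (z0_per_km - z1_per_km) / (3 * z1_per_km)   # zero-sequence compensation factor

      # Measured phasors at the relay for a phase-A-to-ground fault (assumed example values).
      line_length_km = 80.0
      true_distance_km = 30.0
      i_a = cmath.rect(600.0, cmath.pi * -70 / 180)    # faulted-phase current (A)
      i_residual = 3 * i_a * 0.9                       # residual (3*I0) current, assumed
      # Synthesize the relay voltage consistent with the assumed fault distance (bolted fault).
      v_a = true_distance_km * z1_per_km * (i_a + k0 * i_residual)

      # Fault location estimate: distance where the apparent reactance matches the line reactance.
      z_apparent = v_a / (i_a + k0 * i_residual)
      estimated_km = z_apparent.imag / z1_per_km.imag
      print(f"Estimated fault distance: {estimated_km:.1f} km of a {line_length_km:.0f} km line")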

  18. Faulted archaeological relics at Hierapolis (Pamukkale), Turkey

    NASA Astrophysics Data System (ADS)

    Hancock, P. L.; Altunel, E.

    1997-09-01

    The former Roman city of Hierapolis (modern Pamukkale), within the Büyük Menderes valley, contains an abundance of faulted architectural relics related to damaging earthquakes that have occurred since at least 60 A.D. Faulted relics include: (1) a Roman fresh-water channel; (2) a mid-Roman relief carved into a fault plane; (3) Roman and Byzantine walls offset across the Hierapolis normal fault zone; (4) the walls of a late Byzantine fort offset more than once across a fissure/fault; and (5) numerous displaced wall-like Roman and post-Roman petrified water channels. In addition to these faulted relics, numerous monuments display tilted and toppled walls; maximum damage generally being adjacent to the Hierapolis fault zone which passes through the centre of the city. Many relics are also partly covered by faulting-related travertine deposits. Analysis of the faulted relics indicates: (1) Hierapolis and its immediate surroundings are cut by two active normal fault zones; (2) the NNW-trending Hierapolis fault zone, formerly thought to be a sinistral strike-slip fault, is a small normal fault zone; (3) there has been about 1.5 m of normal slip on the Pamukkale range-front fault since mid-Roman times; (4) an opening direction across the weakly expressed Hierapolis fault zone can be inferred by matching formerly contiguous piercing points on the relic that are now on either side of the fault trace; (5) where a fault passes through a narrow rigid architectural relic, its trace is generally refracted so that it is oriented at roughly right angles to the long axis of the relic; and (6) some major dilated cracks cutting relics reflect the locations of underlying faults.

  19. Fault-tolerant architecture: Evaluation methodology

    SciTech Connect

    Battle, R.E.; Kisner, R.A.

    1992-08-01

    The design and reliability of four fault-tolerant architectures that may be used in nuclear power plant control systems were evaluated. Two architectures are variations of triple-modular-redundant (TMR) systems, and two are variations of dual redundant systems. The evaluation includes a review of methods of implementing fault-tolerant control, the importance of automatic recovery from failures, methods of self-testing diagnostics, block diagrams of typical fault-tolerant controllers, review of fault-tolerant controllers operating in nuclear power plants, and fault tree reliability analyses of fault-tolerant systems.
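
    A short worked calculation of the standard reliability expressions for the architectures named above (a single channel, a dual redundant pair, and 2-of-3 TMR) illustrates the kind of comparison the fault tree analyses quantify. The channel failure rate and mission time are assumed, and perfect voters and switchover are idealizations.

      # Worked example of standard reliability expressions for the architectures named above:
      # triple-modular-redundant (2-of-3 voting) versus a single channel and a dual system
      # that fails only if both channels fail. The channel failure rate and time are assumed.
      import math

      lam = 1e-4                 # assumed channel failure rate (per hour)
      t = 1000.0                 # mission time (hours)
      R = math.exp(-lam * t)     # reliability of one channel

      R_simplex = R
      R_dual = 1 - (1 - R) ** 2            # assumes perfect failure detection and switchover
      R_tmr = 3 * R**2 - 2 * R**3          # 2-of-3 majority voting, perfect voter assumed

      print(f"Single channel : {R_simplex:.6f}")
      print(f"Dual redundant : {R_dual:.6f}")
      print(f"TMR (2-of-3)   : {R_tmr:.6f}")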

  20. Transient Faults in Computer Systems

    NASA Technical Reports Server (NTRS)

    Masson, Gerald M.

    1993-01-01

    A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.

  1. Intelligent fault-tolerant controllers

    NASA Technical Reports Server (NTRS)

    Huang, Chien Y.

    1987-01-01

    A system with fault tolerant controls is one that can detect, isolate, and estimate failures and perform necessary control reconfiguration based on this new information. Artificial intelligence (AI) is concerned with semantic processing, and it has evolved to include the topics of expert systems and machine learning. This research represents an attempt to apply AI to fault tolerant controls, hence, the name intelligent fault tolerant control (IFTC). A generic solution to the problem is sought, providing a system based on logic in addition to analytical tools, and offering machine learning capabilities. The advantages are that redundant system specific algorithms are no longer needed, that reasonableness is used to quickly choose the correct control strategy, and that the system can adapt to new situations by learning about its effects on system dynamics.

  2. Faulting in porous carbonate grainstones

    NASA Astrophysics Data System (ADS)

    Tondi, Emanuele; Agosta, Fabrizio

    2010-05-01

    A faulting mechanism recently documented within porous carbonate grainstones involves strain localization into narrow tabular bands that accommodate both volumetric and shear strain; for this reason, these features are named compactive shear bands. In the field, compactive shear bands are easily recognizable because they are lighter coloured than the parent rock and/or stand in positive relief owing to their increased resistance to weathering. Both characteristics, light colour and positive relief, are consequences of the compaction processes that characterize these bands, which are the simplest structural elements that form within porous carbonate grainstones. With ongoing deformation, single compactive shear bands, which each resolve only a few millimetres of displacement, may evolve into zones of compactive shear bands and, finally, into well-developed faults characterized by slip surfaces and fault rocks. Field analysis conducted in key areas of Italy allows us to document different modes of interaction and linkage among the compactive shear bands: (i) simple divergence of two compactive shear bands from an original one, (ii) extensional and contractional jogs formed by two continuous, interacting compactive shear bands, and (iii) eye structures formed by collinear interacting compactive shear bands, features that have already been described for deformation bands in sandstones. The last two types of interaction may localize the formation of compaction bands, which are characterized by a pronounced component of compaction and negligible shearing, and/or of pressure solution seams. All of these types of interaction and linkage can occur at any deformation stage: single bands, zones of bands, or well-developed faults. The transition from one deformation process to another, which is likely controlled by changes in material properties, is recorded by different ratios and distributions of the fault dimensional attributes. The field results indicate that the length (L), displacement (D), and thickness (T) of single compactive shear bands cluster around values peculiar to the individual lithologies and do not show any scaling relationship among these parameters. In contrast, in zones of shear bands and in well-developed faults the D values are greatest in the central portions of individual elements. Unlike well-developed faults, in which slip increments are resolved along the main slip surfaces, zones of compactive shear bands accommodate displacement according to the number of individual bands, so that larger displacement corresponds to a higher number of bands. As a consequence, T-D plots for zones of compactive shear bands and for well-developed faults show two different populations, suggesting that well-developed faults are more efficient at resolving displacement than zones of shear bands because they include sharp slip surfaces. The petrographical and petrophysical properties of these structures, assessed by means of detailed laboratory analyses, indicate that single compactive shear bands and zones of shear bands act as seals for underground fluid flow relative to the host rock. These features, which are abundant within the damage zones of well-developed faults, may compartmentalize fluid flow in faulted carbonate reservoirs.

  3. InSAR measurements around active faults: creeping Philippine Fault and un-creeping Alpine Fault

    NASA Astrophysics Data System (ADS)

    Fukushima, Y.

    2013-12-01

    Recently, interferometric synthetic aperture radar (InSAR) time-series analyses have been frequently applied to measure the time-series of small and quasi-steady displacements in wide areas. Large efforts in the methodological developments have been made to pursue higher temporal and spatial resolutions by using frequently acquired SAR images and detecting more pixels that exhibit phase stability. While such a high resolution is indispensable for tracking displacements of man-made and other small-scale structures, it is not necessarily needed and can be unnecessarily computer-intensive for measuring the crustal deformation associated with active faults and volcanic activities. I apply a simple and efficient method to measure the deformation around the Alpine Fault in the South Island of New Zealand, and the Philippine Fault in the Leyte Island. I use a small-baseline subset (SBAS) analysis approach (Berardino, et al., 2002). Generally, the more we average the pixel values, the more coherent the signals are. Considering that, for the deformation around active faults, the spatial resolution can be as coarse as a few hundred meters, we can severely 'multi-look' the interferograms. The two applied cases in this study benefited from this approach; I could obtain the mean velocity maps on practically the entire area without discarding decorrelated areas. The signals could have been only partially obtained by standard persistent scatterer or single-look small-baseline approaches that are much more computer-intensive. In order to further increase the signal detection capability, it is sometimes effective to introduce a processing algorithm adapted to the signal of interest. In an InSAR time-series processing, one usually needs to set the reference point because interferograms are all relative measurements. It is difficult, however, to fix the reference point when one aims to measure long-wavelength deformation signals that span the whole analysis area. This problem can be solved by adding the displacement offset in each interferogram as a model parameter and solving the system of equations with the minimum norm condition. This way, the unknown offsets can be automatically determined. By applying this method to the ALOS/PALSAR data acquired over the Alpine Fault, I obtained the mean velocity map showing the right-lateral relative motion of the blocks north and south of the fault and the strain concentration (large velocity gradient) around the fault. The velocity gradient around the fault has along-fault variation, probably reflecting the variation in the fault locking depth. When one aims to detect fault creeps, i.e., displacement discontinuity in space, one can additionally introduce additional parameters to describe the phase ramps in the interferograms and solve the system of equations again with the minimum norm condition. Then, the displacement discontinuity appears more clearly in the result at the cost of suppressing long-wavelength displacements. By applying this method to the ALOS/PALSAR data acquired over the Philippine Fault in Leyte Island, I obtained the mean velocity map showing fault creep at least in the northern and central parts of Leyte at a rate of around 10 mm/year.
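
    A minimal numpy sketch of the offset-as-parameter idea described above, assuming a toy network of two interferograms spanning three epochs; the numbers are invented, and np.linalg.lstsq is used because it returns the minimum-norm solution for rank-deficient least-squares systems.

        import numpy as np

        # Unknowns: displacements d1, d2, d3 at three epochs plus one offset per interferogram.
        # Each interferogram measures (d_later - d_earlier) + offset.
        G = np.array([
            [-1.0,  1.0,  0.0,  1.0,  0.0],   # ifg 1: d2 - d1 + o1
            [ 0.0, -1.0,  1.0,  0.0,  1.0],   # ifg 2: d3 - d2 + o2
        ])
        obs = np.array([0.012, 0.018])        # unwrapped phase converted to metres (made-up values)

        # The system is rank-deficient because offsets trade off against displacements,
        # so the minimum-norm condition picks the smallest model consistent with the data.
        m, *_ = np.linalg.lstsq(G, obs, rcond=None)
        print(m)                              # [d1, d2, d3, o1, o2]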

  4. Earthquake source fault beneath Tokyo.

    PubMed

    Sato, Hiroshi; Hirata, Naoshi; Koketsu, Kazuki; Okaya, David; Abe, Susumu; Kobayashi, Reiji; Matsubara, Makoto; Iwasaki, Takaya; Ito, Tanio; Ikawa, Takeshi; Kawanaka, Taku; Kasahara, Keiji; Harder, Steven

    2005-07-15

    Devastating earthquakes occur on a megathrust fault that underlies the Tokyo metropolitan region. We identify this fault with use of deep seismic reflection profiling to be the upper surface of the Philippine Sea plate. The depth to the top of this plate, 4 to 26 kilometers, is much shallower than previous estimates based on the distribution of seismicity. This shallower plate geometry changes the location of maximum finite slip of the 1923 Kanto earthquake and will affect estimations of strong ground motion for seismic hazards analysis within the Tokyo region. PMID:16020734

  5. Vegetation history in southern Patagonia: first palynological results of the ICDP lake drilling project at Laguna Potrok Aike, Argentina

    NASA Astrophysics Data System (ADS)

    Schäbitz, Frank; Wille, Michael

    2010-05-01

    Laguna Potrok Aike located in southern Argentina is one of the very few locations that are suited to reconstruct the paleoenvironmental and climatic history of southern Patagonia. In the framework of the multinational ICDP deep drilling project PASADO several long sediment cores to a composite depth of more than 100 m were obtained. Here we present first results of pollen analyses from sediment material of the core catcher. Absolute time control is not yet available. Pollen spectra with a spatial resolution of three meters show that Laguna Potrok Aike was always surrounded by Patagonian Steppe vegetation. However, the species composition underwent some marked proportional changes through time. The uppermost pollen spectra show a high contribution of Andean forest and charcoal particles as it can be expected for Holocene times and the ending last glacial. The middle part shows no forest and relatively high amounts of pollen from steppe plants indicating cold and dry full glacial conditions. The lowermost samples are characterized by a significantly different species composition as steppe plants like Asteraceae, Caryophyllaceae, Ericaceae and Ephedra became more frequent. In combination with higher charcoal amounts and an algal species composition comparable to Holocene times we suggest that conditions during the formation of sediments at the base of the record were more humid and/or warmer causing a higher fuel availability for charcoal production compared to full glacial times.

  6. Distribution and community structure of ichthyoplankton in Laguna Madre seagrass meadows: Potential impact of seagrass species change

    USGS Publications Warehouse

    Tolan, J.M.; Holt, S.A.; Onuf, C.P.

    1997-01-01

    Seasonal ichthyoplankton surveys were made in the lower Laguna Madre, Texas, to compare the relative utilization of various nursery habitats (shoal grass, Halodule wrightii; manatee grass, Syringodium filiforme; and unvegetated sand bottom) for both estuarine and offshore-spawned larvae. The species composition and abundance of fish larvae were determined for each habitat type at six locations in the bay. Pushnet ichthyoplankton sampling resulted in 296 total collections, yielding 107,463 fishes representing 55 species in 24 families. A broad spectrum of both the biotic and physical habitat parameters was examined to link the dispersion and distribution of both pre-settlement and post-settlement larvae to the utilization of shallow seagrass habitats. Sample sites were grouped by cluster analysis (Ward's minimum variance method) according to the similarity of their fish assemblages and subsequently examined with a multiple discriminant function analysis to identify important environmental variables. Abiotic environmental factors were most influential in defining groups for samples dominated by early larvae, whereas measures of seagrass complexity defined groups dominated by older larvae and juveniles. Juvenile-stage individuals showed clear habitat preference, with the shallower Halodule wrightii being the habitat of choice, whereas early larvae of most species were widely distributed over all habitats. As a result of the recent shift of dominance from Halodule wrightii to Syringodium filiforme, overall reductions in the quality of nursery habitat for fishes in the lower Laguna Madre are projected.

  7. The basaltic to trachydacitic upper Diliman Tuff in Manila: Petrogenesis and comparison with deposits from Taal and Laguna Calderas

    NASA Astrophysics Data System (ADS)

    Arpa, Maria Carmencita B.; Patino, Lina C.; Vogel, Thomas A.

    2008-11-01

    The basaltic to trachydacitic (50-65 wt.% SiO2) upper Diliman Tuff is the youngest deposit of a sequence of tuffaceous deposits in Metro Manila. The deposit is located north of Taal Caldera and northwest of Laguna Caldera, which are both within the Southwest Luzon Volcanic Field. Chemical variations in the pumice fragments within the upper Diliman Tuff include medium-K basalt to basaltic andesite, high-K basaltic andesite to andesite and trachyandesite to trachydacite. Magma mixing/mingling is ubiquitous and is shown by banding textures in some pumice fragments, considerable range in groundmass glass composition (54 to 65 wt.% SiO2) in a single pumice fragment, and zoning in plagioclase phenocrysts. Simple binary mixing modeling and polytopic vector analysis were used to further evaluate magma mixing. Trace-element variations are inconsistent with the medium-K and high-K magmas being related by crystal fractionation. The medium-K basalts represent hotter intrusions, which induced small degrees of partial melting in older crystallized medium-K basaltic material within the crust to produce the high-K magmas. All melts likely differentiated in the crust but the emplaced and new basaltic intrusions originated from the mantle wedge and were generated by subduction zone processes. The volcanic source vent for the upper Diliman Tuff has not been identified. In comparisons with the deposits from adjacent Taal and Laguna Calderas it is chemically distinct with respect to both major- and trace-element concentrations.
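
    As a reminder of what the simple binary mixing calculation involves, a short sketch assuming linear two-component mixing; the end-member and mixture values below are invented for illustration and are not the paper's data.

        def mixing_fraction(c_mix, c_a, c_b):
            """Fraction of end member A in a two-component mixture, assuming linear mixing."""
            return (c_mix - c_b) / (c_a - c_b)

        sio2_high_k, sio2_medium_k = 65.0, 52.0     # illustrative end-member SiO2 (wt.%)
        sio2_banded_pumice = 58.0                   # illustrative mixed composition
        print(mixing_fraction(sio2_banded_pumice, sio2_high_k, sio2_medium_k))   # ~0.46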

  8. Congener-specific polychlorinated biphenyl patterns in eggs of aquatic birds from the Lower Laguna Madre, Texas

    SciTech Connect

    Mora, M.A.

    1996-06-01

    Eggs from four aquatic bird species nesting in the Lower Laguna Madre, Texas, were collected to determine differences and similarities in the accumulation of congener-specific polychlorinated biphenyls (PCBs) and to evaluate PCB impacts on reproduction. Because of the different toxicities of PCB congeners, it is important to know which congeners contribute most to total PCBs. The predominant PCB congeners were 153, 138, 180, 110, 118, 187, and 92. Collectively, congeners 153, 138, and 180 accounted for 26 to 42% of total PCBs. Congener 153 was the most abundant in Caspian terns (Sterna caspia) and great blue herons (Ardea herodias) and congener 138 was the most abundant in snowy egrets (Egretta thula) and tricolored herons (Egretta tricolor). Principal component analysis indicated a predominance of higher chlorinated biphenyls in Caspian terns and great blue herons and lower chlorinated biphenyls in tricolored herons. Snowy egrets had a predominance of pentachlorobiphenyls. These results suggest that there are differences in PCB congener patterns in closely related species and that these differences are more likely associated with the species' diet rather than metabolism. Total PCBs were significantly greater (p < 0.05) in Caspian terns than in the other species. Overall, PCBs in eggs of birds from the Lower Laguna Madre were below concentrations known to affect bird reproduction.

  9. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1987-01-01

    Specific topics briefly addressed include: the consistent comparison problem in N-version systems; analytic models of comparison testing; fault tolerance through data diversity; and the relationship between failures caused by automatically seeded faults.

  10. Seismology: Diary of a wimpy fault

    NASA Astrophysics Data System (ADS)

    Bürgmann, Roland

    2015-05-01

    Subduction zone faults can slip slowly, generating tremor. The varying correlation between tidal stresses and tremor occurring deep in the Cascadia subduction zone suggests that the fault is inherently weak, and gets weaker as it slips.

  11. Parametric Modeling and Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Ju, Jianhong

    2000-01-01

    Fault tolerant control is considered for a nonlinear aircraft model expressed as a linear parameter-varying system. By proper parameterization of foreseeable faults, the linear parameter-varying system can include fault effects as additional varying parameters. A recently developed technique in fault effect parameter estimation allows us to assume that estimates of the fault effect parameters are available on-line. Reconfigurability is calculated for this model with respect to the loss of control effectiveness to assess the potentiality of the model to tolerate such losses prior to control design. The control design is carried out by applying a polytopic method to the aircraft model. An error bound on fault effect parameter estimation is provided, within which the Lyapunov stability of the closed-loop system is robust. Our simulation results show that as long as the fault parameter estimates are sufficiently accurate, the polytopic controller can provide satisfactory fault-tolerance.

  12. Solar Dynamic Power System Fault Diagnosis

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Dias, Lakshman G.

    1996-01-01

    The objective of this research is to conduct various fault simulation studies for diagnosing the type and location of faults in the power distribution system. Different types of faults are simulated at different locations within the distribution system and the faulted waveforms are monitored at measurable nodes such as at the output of the DDCU's. These fault signatures are processed using feature extractors such as FFT and wavelet transforms. The extracted features are fed to a clustering-based neural network for training and subsequent testing using previously unseen data. Different load models consisting of constant impedance and constant power are used for the loads. Open circuit faults and short circuit faults are studied. It is concluded from present studies that using features extracted from wavelet transforms gives better success rates during ANN testing. The trained ANN's are capable of diagnosing fault types and approximate locations in the solar dynamic power distribution system.
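
    A hedged sketch of the feature-extraction step described above, with a trivial nearest-centroid rule standing in for the clustering-based neural network; the waveforms and fault signature are synthetic inventions, not simulator output.

        import numpy as np

        def fft_features(waveform):
            """Normalised FFT magnitude spectrum used as the feature vector."""
            spectrum = np.abs(np.fft.rfft(waveform))
            return spectrum / (np.linalg.norm(spectrum) + 1e-12)

        def nearest_centroid(feature, centroids):
            """centroids: dict mapping fault label -> reference feature vector."""
            return min(centroids, key=lambda label: np.linalg.norm(feature - centroids[label]))

        t = np.linspace(0.0, 1.0, 256, endpoint=False)
        healthy = np.sin(2 * np.pi * 5 * t)
        shorted = healthy + 0.8 * np.sin(2 * np.pi * 40 * t)    # invented fault signature

        centroids = {"normal": fft_features(healthy), "short circuit": fft_features(shorted)}
        test = shorted + 0.05 * np.random.randn(t.size)
        print(nearest_centroid(fft_features(test), centroids))  # expected: "short circuit"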

  13. A summary of the active fault investigation in the extension sea area of Kikugawa fault and the Nishiyama fault , N-S direction fault in south west Japan

    NASA Astrophysics Data System (ADS)

    Abe, S.

    2010-12-01

    In this study, we carried out two active fault investigations, at the request of the Ministry of Education, Culture, Sports, Science and Technology, in the offshore extensions of the Kikugawa fault and the Nishiyama fault. Based on those results we aim to clarify the following four matters for both active faults: (1) fault continuity between land and sea; (2) the length of each active fault; (3) segment division; and (4) activity characteristics. In this investigation we carried out a digital single-channel seismic reflection survey over the whole area of both active faults. In addition, a high-resolution multichannel seismic reflection survey was carried out to image the detailed structure of the shallow strata, and vibrocoring was carried out to obtain information on sedimentation ages. The reflection profiles of both active faults are extremely clear, and characteristics of strike-slip faulting, such as flower structure and the dispersion of fault strands, were recognized. Analysis of stratal ages also showed that the Holocene sediment cover on the continental shelf in this area is extremely thin. This investigation confirmed that the Kikugawa fault extends farther offshore than previously reported, and the active fault zone appears to widen and disperse seaward. At present we consider that the Kikugawa fault can be divided into several segments based on the distribution of its strands. For the Nishiyama fault, reflection profiles showing an active fault were acquired in the sea between Ooshima and Kyushu. From this result and existing topographic studies on Ooshima, the Nishiyama fault and the Ooshima offshore active fault are interpreted to form a continuous structure. Along the Ooshima offshore active fault, the uplifted side and the fault trend both change; therefore, we consider that the Nishiyama fault, like the Kikugawa fault, can be divided into several segments based on the distribution of its strands. For both active faults, the fault lengths, segment divisions, and activity characteristics of each segment are now being examined.

  14. Selected Hydrologic, Water-Quality, Biological, and Sedimentation Characteristics of Laguna Grande, Fajardo, Puerto Rico, March 2007-February 2009

    USGS Publications Warehouse

    Soler-López, Luis R.; Santos, Carlos R.

    2010-01-01

    Laguna Grande is a 50-hectare lagoon in the municipio of Fajardo, located in the northeasternmost part of Puerto Rico. Hydrologic, water-quality, and biological data were collected in the lagoon between March 2007 and February 2009 to establish baseline conditions and determine the health of Laguna Grande on the basis of preestablished standards. In addition, a core of bottom material was obtained at one site within the lagoon to establish sediment depositional rates. Water-quality properties measured onsite (temperature, pH, dissolved oxygen, specific conductance, and water transparency) varied temporally rather than areally. All physical properties were in compliance with current regulatory standards established for Puerto Rico. Nutrient concentrations were very low and in compliance with current regulatory standards (less than 5.0 and 1.0 milligrams per liter for total nitrogen and total phosphorus, respectively). The average total nitrogen concentration was 0.28 milligram per liter, and the average total phosphorus concentration was 0.02 milligram per liter. Chlorophyll a was the predominant form of photosynthetic pigment in the water. The average chlorophyll-a concentration was 6.2 micrograms per liter. Bottom sediment accumulation rates were determined in sediment cores by modeling the downcore activities of lead-210 and cesium-137. Results indicated a sediment depositional rate of about 0.44 centimeter per year. At this rate of sediment accretion, the lagoon may become a marshland in about 700 to 900 years. About 86 percent of the community primary productivity in Laguna Grande was generated by periphyton, primarily algal mats and seagrasses, and the remaining 14 percent was generated by phytoplankton in the water column. Based on the diel studies the total average net community productivity equaled 5.7 grams of oxygen per cubic meter per day (2.1 grams of carbon per cubic meter per day). Most of this productivity was ascribed to periphyton and macrophytes, which produced 4.9 grams of oxygen per cubic meter per day (1.8 grams of carbon per cubic meter per day). Phytoplankton, the plant and algal component of plankton, produced about 0.8 gram of oxygen per cubic meter per day (0.3 gram of carbon per cubic meter per day). The total diel community respiration rate was 23.4 grams of oxygen per cubic meter per day. The respiration rate ascribed to plankton, which consists of all free floating and swimming organisms in the water column, composed 10 percent of this rate (2.9 grams of oxygen per cubic meter per day); respiration by all other organisms composed the remaining 90 percent (20.5 grams of oxygen per cubic meter per day). Plankton gross productivity was 3.7 grams of oxygen per cubic meter per day, equivalent to about 13 percent of the average gross productivity for the entire community (29.1 grams of oxygen per cubic meter per day). The average phytoplankton biomass values in Laguna Grande ranged from 6.0 to 13.6 milligrams per liter. During the study, Laguna Grande contained a phytoplankton standing crop of approximately 5.8 metric tons. Phytoplankton community had a turnover (renewal) rate of about 153 times per year, or roughly about once every 2.5 days. Fecal indicator bacteria concentrations ranged from 160 to 60,000 colonies per 100 milliliters. Concentrations generally were greatest in areas near residential and commercial establishments, and frequently exceeded current regulatory standards established for Puerto Rico.

  15. Automatic fault diagnosis of a switching regulator

    NASA Astrophysics Data System (ADS)

    Nienhaus, H. A.; Palmer, D. E.

    This paper describes a microprocessor-based system for the automatic fault diagnosis of a switching regulator. It covers the system from a test philosophy to a working breadboard that correctly identifies single simulated faults in the switching regulator. In addition to open circuit, short circuit, and stuck at faults, the system is capable of diagnosing faults due to excessive leakage, drift in critical components, and system instability.

  16. Fault current limiters using superconductors

    NASA Astrophysics Data System (ADS)

    Norris, W. T.; Power, A.

    Fault current limiters on power systems are intended to reduce damage from heating and electromechanical forces, to alleviate the duty on switchgear used to clear the fault, and to mitigate disturbance to unfaulted parts of the system. A basic scheme involves a super-resistor: a superconductor that is driven to high resistance when fault current flows, either only while the current is high during each a.c. cycle or, if the temperature of the superconductive material rises, for the full cycle. Current may be commutated from the superconductor to an impedance in parallel, thus reducing the energy dissipated at low temperature and saving refrigeration. In a super-shorted transformer, the ambient-temperature primary carries the power system current; the superconductive secondary goes to a resistive condition when excessive currents flow in the primary. This arrangement has the advantage of not needing current leads from high temperature to low temperature; it behaves as a parallel super-resistor and inductor. The super-transductor, with a superconductive d.c. bias winding, is large and has little effect on the rate of fall of current at current zero; it does little to alleviate the duty on switchgear but does reduce heating and electromechanical forces, and it is fully active again immediately after a fault has been cleared. Other schemes depend on rapid recooling of the superconductor to achieve this.

  17. Denali Fault: Black Rapids Glacier

    USGS Multimedia Gallery

    View eastward along Black Rapids Glacier. The Denali fault follows the trace of the glacier. These very large rockslides went a mile across the glacier on the right side. Investigations of the headwall of the middle landslide indicate a volume at least as large as that which fell, has dropped a mete...

  18. MOS integrated circuit fault modeling

    NASA Technical Reports Server (NTRS)

    Sievers, M.

    1985-01-01

    Three digital simulation techniques for MOS integrated circuit faults were examined. These techniques embody a hierarchy of complexity bracketing the range of simulation levels. The digital approaches are: transistor-level, connector-switch-attenuator level, and gate level. The advantages and disadvantages are discussed. Failure characteristics are also described.

  19. Cell boundary fault detection system

    DOEpatents

    Archer, Charles Jens (Rochester, MN); Pinnow, Kurt Walter (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian Edward (Rochester, MN)

    2011-04-19

    An apparatus and program product determine a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.

  20. Ground Fault--A Health Hazard

    ERIC Educational Resources Information Center

    Jacobs, Clinton O.

    1977-01-01

    A ground fault is especially hazardous because the resistance through which the current is flowing to ground may be sufficient to cause electrocution. The Ground Fault Circuit Interrupter (G.F.C.I.) protects 15 and 25 ampere 120 volt circuits from ground fault condition. The design and examples of G.F.C.I. functions are described in this article.

  1. High temperature superconducting fault current limiter

    DOEpatents

    Hull, John R.

    1997-01-01

    A fault current limiter (10) for an electrical circuit (14). The fault current limiter (10) includes a high temperature superconductor (12) in the electrical circuit (14). The high temperature superconductor (12) is cooled below its critical temperature to maintain the superconducting electrical properties during operation as the fault current limiter (10).

  2. Fault system polarity: A matter of chance?

    NASA Astrophysics Data System (ADS)

    Schöpfer, Martin; Childs, Conrad; Manzocchi, Tom; Walsh, John; Nicol, Andy; Grasemann, Bernhard

    2015-04-01

    Many normal fault systems and, on a smaller scale, fracture boudinage exhibit asymmetry so that one fault dip direction dominates. The fraction of throw (or heave) accommodated by faults with the same dip direction in relation to the total fault system throw (or heave) is a quantitative measure of fault system asymmetry and termed 'polarity'. It is a common belief that the formation of domino and shear band boudinage with a monoclinic symmetry requires a component of layer parallel shearing, whereas torn boudins reflect coaxial flow. Moreover, domains of parallel faults are frequently used to infer the presence of a common décollement. Here we show, using Distinct Element Method (DEM) models in which rock is represented by an assemblage of bonded circular particles, that asymmetric fault systems can emerge under symmetric boundary conditions. The pre-requisite for the development of domains of parallel faults is however that the medium surrounding the brittle layer has a very low strength. We demonstrate that, if the 'competence' contrast between the brittle layer and the surrounding material ('jacket', or 'matrix') is high, the fault dip directions and hence fault system polarity can be explained using a random process. The results imply that domains of parallel faults are, for the conditions and properties used in our models, in fact a matter of chance. Our models suggest that domino and shear band boudinage can be an unreliable shear-sense indicator. Moreover, the presence of a décollement should not be inferred on the basis of a domain of parallel faults only.
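
    The polarity measure defined above reduces to a simple ratio; a minimal sketch with an invented fault list follows.

        # Polarity = throw carried by one dip direction / total fault system throw.
        faults = [
            {"dip_direction": "E", "throw": 12.0},
            {"dip_direction": "E", "throw": 7.5},
            {"dip_direction": "W", "throw": 4.0},
        ]
        total = sum(f["throw"] for f in faults)
        east = sum(f["throw"] for f in faults if f["dip_direction"] == "E")
        print(round(east / total, 2))   # 0.83: a strongly asymmetric fault system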

  3. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  4. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Fault. 255.11 Section 255.11 Employees... 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than...

  5. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Fault. 255.11 Section 255.11 Employees... 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than...

  6. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Fault areas. 258.13 Section 258.13... SOLID WASTE LANDFILLS Location Restrictions 258.13 Fault areas. (a) New MSWLF units and lateral expansions shall not be located within 200 feet (60 meters) of a fault that has had displacement in...

  7. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  8. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  9. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  10. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Fault areas. 258.13 Section 258.13... SOLID WASTE LANDFILLS Location Restrictions 258.13 Fault areas. (a) New MSWLF units and lateral expansions shall not be located within 200 feet (60 meters) of a fault that has had displacement in...

  11. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  12. High temperature superconducting fault current limiter

    DOEpatents

    Hull, J.R.

    1997-02-04

    A fault current limiter for an electrical circuit is disclosed. The fault current limiter includes a high temperature superconductor in the electrical circuit. The high temperature superconductor is cooled below its critical temperature to maintain the superconducting electrical properties during operation as the fault current limiter. 15 figs.

  13. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  14. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Fault areas. 258.13 Section 258.13... SOLID WASTE LANDFILLS Location Restrictions 258.13 Fault areas. (a) New MSWLF units and lateral expansions shall not be located within 200 feet (60 meters) of a fault that has had displacement in...

  15. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Fault areas. 258.13 Section 258.13... SOLID WASTE LANDFILLS Location Restrictions 258.13 Fault areas. (a) New MSWLF units and lateral expansions shall not be located within 200 feet (60 meters) of a fault that has had displacement in...

  16. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  17. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  18. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Fault. 255.11 Section 255.11 Employees... 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than...

  19. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  20. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Fault areas. 258.13 Section 258.13... SOLID WASTE LANDFILLS Location Restrictions 258.13 Fault areas. (a) New MSWLF units and lateral expansions shall not be located within 200 feet (60 meters) of a fault that has had displacement in...

  1. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Fault. 255.11 Section 255.11 Employees... 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than...

  2. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  3. Lake Tahoe Faults, Shaded Relief Map

    USGS Multimedia Gallery

    Shaded relief map of western part of the Lake Tahoe basin, California. Fault lines are dashed where approximately located, dotted where concealed, bar and ball on downthrown side. Heavier line weight shows principal range-front fault strands of the Tahoe-Sierra frontal fault zone (TSFFZ). Opaque wh...

  4. Fault-related clay authigenesis along the Moab Fault: Implications for calculations of fault rock composition and mechanical and hydrologic fault zone properties

    USGS Publications Warehouse

    Solum, J.G.; Davatzes, N.C.; Lockner, D.A.

    2010-01-01

    The presence of clays in fault rocks influences both the mechanical and hydrologic properties of clay-bearing faults; understanding the origin of clays in fault rocks and their distribution is therefore of great importance for defining fundamental properties of faults in the shallow crust. Field mapping shows that layers of clay gouge and shale smear are common along the Moab Fault, from exposures with throws ranging from 10 to ~1000 m. Elemental analyses of four locations along the Moab Fault show that fault rocks are enriched in clays at R191 and Bartlett Wash, but that this clay enrichment occurred at different times and was associated with different fluids. Fault rocks at Corral and Courthouse Canyons show little difference in elemental composition from the adjacent protolith, suggesting that formation of fault rocks at those locations was governed by mechanical processes. Friction tests show that these authigenic clays result in fault zone weakening, and they potentially influence both the style of failure along the fault (seismogenic vs. aseismic) and the amount of fluid loss associated with coseismic dilation. Scanning electron microscopy shows that authigenesis promotes the continuity of slip surfaces, thereby enhancing seal capacity. The occurrence of authigenesis, and its influence on the sealing properties of faults, highlights the importance of determining the processes that control this phenomenon. © 2010 Elsevier Ltd.

  5. Salton Sea Satellite Image Showing Fault Slip

    USGS Multimedia Gallery

    Landsat satellite image (LE70390372003084EDC00) showing location of surface slip triggered along faults in the greater Salton Trough area. Red bars show the generalized location of 2010 surface slip along faults in the central Salton Trough and many additional faults in the southwestern section of t...

  6. Morphometric or morpho-anatomal and genetic investigations highlight allopatric speciation in Western Mediterranean lagoons within the Atherina lagunae species (Teleostei, Atherinidae)

    NASA Astrophysics Data System (ADS)

    Trabelsi, M.; Maamouri, F.; Quignard, J.-P.; Boussad, M.; Faure, E.

    2004-12-01

    Current distribution of Atherina lagunae poses an interesting biogeographical problem as this species inhabits widely separate circum-Mediterranean lagoons. Statistical analyses of 87 biometric parameters and genetic variation in a portion of the cytochrome b gene were examined in four populations of A. lagunae from Tunisian and French lagoons. The results suggested a subdivision into two distinct Atherinid groups: one included the French lagoonal sand smelts and the second included the Tunisian ones. Tunisian lagoonal sand smelts were distinguished from the French ones by the lower number of lateral line scales, vertebrae, pectorals and first dorsal fin rays and the higher number of lower and total gillrakers. In addition, A. lagunae from Tunisian lagoons are characterised by short preorbital length, developed operculum, broad interorbital space, larger head, robust body and a relatively small first dorsal fin which is positioned backwards. In addition, intraspecific sequence variation in a portion of the cytochrome b gene was examined in 87 individuals from Tunisia and France. The high correlation between the results of the molecular phylogenetic tree and biometric statistical data analysis suggested that two different sibling species or at least sub-species or semi-species have colonised the lagoons. In addition, our analyses suggested that the evolution of A. lagunae probably occurred in two steps including marine sympatric speciation within the large Atherina boyeri complex and a post-Pleistocene colonisation of the lagoons.

  7. Evaluation Report of the Native American Consortium for Educational and Assistive Technologies for Indian Children Living on the Acoma and Laguna Pueblos.

    ERIC Educational Resources Information Center

    Zastrow, Leona M.

    The New Mexico State Department of Education received a federal grant to provide educational and assistive technology for American Indian children living in the Pueblos of Laguna and Acoma, New Mexico. During the 2-year project, more than 229 assistive technology items were purchased, and some form of assistive technology was provided to 121

  8. Geometry and kinematics of active normal faults, South Oquirrh Mountains, Utah: implication for fault growth

    NASA Astrophysics Data System (ADS)

    Wu, Daning; Bruhn, Ronald L.

    1994-08-01

    The NNW-striking South Oquirrh Mountains normal fault zone consists of four individual faults. The individual faults are arranged in a right-stepping pattern in the north and a left-stepping pattern in the south, forming a convex-shaped fault zone in map view with the apex towards the hanging wall. Late Quaternary fault activity is characterized by 2.0-4.5 m high discontinuous fault scarps developed in both late Quaternary alluvium and bedrock. The fault scarps in bedrock contain evidence of two large rupture events. The last large earthquake occurred prior to the highstand of the Bonneville lake cycle (15 ka), based on the cross-cutting relationship between the Bonneville shoreline and the fault scarp, and on comparisons of fault scarp morphology. Cumulative displacement patterns inferred from range crest elevations, Bouguer gravity data in the adjacent basin and rotation of the subsidiary faults adjacent to the West Mercur fault are similar to the pattern of displacements measured across late Quaternary fault scarps; that is, the maximum displacement is near the apex of the convex-shaped fault zone, where the West Mercur fault is located, and then tapers off towards both ends of the fault zone. Fault traces that range from a few meters to tens of kilometers long are characterized by two dominant orientations (strike N11W and N43W): the two orientations are separated from each other by a statistically constant angle of 33° ± 3°. Slip directions concentrate on a trend of S70W for both groups of faults, perpendicular to the average strike of the fault zone. These data indicate that the geometry of the fault surfaces is non-cylindrical and may be composed of self-similar structural ridges and troughs of variable wavelength and amplitude that are elongated parallel to the slip direction. The relationships between fault geometry, displacement and geomorphology in the South Oquirrh Mountains fault zone suggest a growth model of normal faults in which apex points on the convex-shaped fault sections mark the nucleation points of primary faults. Secondary faults developed at the ends of the primary fault step into the footwall block as the fault zone grows laterally. Intersecting regions between two laterally growing fault zones are concave-shaped in map view.

  9. Fault Diagnosis in HVAC Chillers

    NASA Technical Reports Server (NTRS)

    Choi, Kihoon; Namuru, Setu M.; Azam, Mohammad S.; Luo, Jianhui; Pattipati, Krishna R.; Patterson-Hine, Ann

    2005-01-01

    Modern buildings are being equipped with increasingly sophisticated power and control systems with substantial capabilities for monitoring and controlling the amenities. Operational problems associated with heating, ventilation, and air-conditioning (HVAC) systems plague many commercial buildings, often the result of degraded equipment, failed sensors, improper installation, poor maintenance, and improperly implemented controls. Most existing HVAC fault-diagnostic schemes are based on analytical models and knowledge bases. These schemes are adequate for generic systems. However, real-world systems significantly differ from the generic ones and necessitate modifications of the models and/or customization of the standard knowledge bases, which can be labor intensive. Data-driven techniques for fault detection and isolation (FDI) have a close relationship with pattern recognition, wherein one seeks to categorize the input-output data into normal or faulty classes. Owing to the simplicity and adaptability, customization of a data-driven FDI approach does not require in-depth knowledge of the HVAC system. It enables the building system operators to improve energy efficiency and maintain the desired comfort level at a reduced cost. In this article, we consider a data-driven approach for FDI of chillers in HVAC systems. To diagnose the faults of interest in the chiller, we employ multiway dynamic principal component analysis (MPCA), multiway partial least squares (MPLS), and support vector machines (SVMs). The simulation of a chiller under various fault conditions is conducted using a standard chiller simulator from the American Society of Heating, Refrigerating, and Air-conditioning Engineers (ASHRAE). We validated our FDI scheme using experimental data obtained from different types of chiller faults.
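
    A hedged numpy sketch of data-driven fault detection in the general spirit described above, with ordinary PCA standing in for multiway dynamic PCA and synthetic data standing in for the ASHRAE chiller simulations: fit a low-dimensional model of normal operation, then flag samples whose reconstruction error is anomalously large.

        import numpy as np

        rng = np.random.default_rng(0)
        normal = rng.normal(size=(200, 4)) @ np.diag([3.0, 1.0, 0.2, 0.1])   # synthetic training data

        mean = normal.mean(axis=0)
        _, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
        P = Vt[:2].T                                    # retain two principal components

        def spe(x):
            """Squared prediction error (Q statistic) of one sample against the PCA model."""
            r = (x - mean) - P @ (P.T @ (x - mean))
            return float(r @ r)

        threshold = np.quantile([spe(x) for x in normal], 0.99)
        faulty = mean + np.array([0.0, 0.0, 4.0, 0.0])  # excites a direction the model ignores
        print(spe(faulty) > threshold)                  # True -> declare a fault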

  10. Fault tolerant control of spacecraft

    NASA Astrophysics Data System (ADS)

    Godard

    Autonomous multiple spacecraft formation flying space missions demand the development of reliable control systems to ensure rapid, accurate, and effective response to various attitude and formation reconfiguration commands. Keeping in mind the complexities involved in the technology development to enable spacecraft formation flying, this thesis presents the development and validation of a fault tolerant control algorithm that augments the AOCS on-board a spacecraft to ensure that these challenging formation flying missions will fly successfully. Taking inspiration from the existing theory of nonlinear control, a fault-tolerant control system for the RyePicoSat missions is designed to cope with actuator faults whilst maintaining the desirable degree of overall stability and performance. Autonomous fault tolerant adaptive control scheme for spacecraft equipped with redundant actuators and robust control of spacecraft in underactuated configuration, represent the two central themes of this thesis. The developed algorithms are validated using a hardware-in-the-loop simulation. A reaction wheel testbed is used to validate the proposed fault tolerant attitude control scheme. A spacecraft formation flying experimental testbed is used to verify the performance of the proposed robust control scheme for underactuated spacecraft configurations. The proposed underactuated formation flying concept leads to more than 60% savings in fuel consumption when compared to a fully actuated spacecraft formation configuration. We also developed a novel attitude control methodology that requires only a single thruster to stabilize three axis attitude and angular velocity components of a spacecraft. Numerical simulations and hardware-in-the-loop experimental results along with rigorous analytical stability analysis shows that the proposed methodology will greatly enhance the reliability of the spacecraft, while allowing for potentially significant overall mission cost reduction.

  11. Fault-Tolerant Heat Exchanger

    NASA Technical Reports Server (NTRS)

    Izenson, Michael G.; Crowley, Christopher J.

    2005-01-01

    A compact, lightweight heat exchanger has been designed to be fault-tolerant in the sense that a single-point leak would not cause mixing of heat-transfer fluids. This particular heat exchanger is intended to be part of the temperature-regulation system for habitable modules of the International Space Station and to function with water and ammonia as the heat-transfer fluids. The basic fault-tolerant design is adaptable to other heat-transfer fluids and heat exchangers for applications in which mixing of heat-transfer fluids would pose toxic, explosive, or other hazards: Examples could include fuel/air heat exchangers for thermal management on aircraft, process heat exchangers in the cryogenic industry, and heat exchangers used in chemical processing. The reason this heat exchanger can tolerate a single-point leak is that the heat-transfer fluids are everywhere separated by a vented volume and at least two seals. The combination of fault tolerance, compactness, and light weight is implemented in a unique heat-exchanger core configuration: Each fluid passage is entirely surrounded by a vented region bridged by solid structures through which heat is conducted between the fluids. Precise, proprietary fabrication techniques make it possible to manufacture the vented regions and heat-conducting structures with very small dimensions to obtain a very large coefficient of heat transfer between the two fluids. A large heat-transfer coefficient favors compact design by making it possible to use a relatively small core for a given heat-transfer rate. Calculations and experiments have shown that in most respects, the fault-tolerant heat exchanger can be expected to equal or exceed the performance of the non-fault-tolerant heat exchanger that it is intended to supplant (see table). The only significant disadvantages are a slight weight penalty and a small decrease in the mass-specific heat transfer.

  12. Availability of ground water in parts of the Acoma and Laguna Indian Reservations, New Mexico

    USGS Publications Warehouse

    Dinwiddie, George A.; Motts, Ward Sundt

    1964-01-01

    The need for additional water has increased in recent years on the Acoma and Laguna Indian Reservations in west-central New Mexico because the population and per capita use of water have increased; the tribes also desire water for light industry, for more modern schools, and to increase their irrigation program. Many wells have been drilled in the area, but most have been disappointing because of small yields and poor chemical quality of the water. The topography in the Acoma and Laguna Indian Reservations is controlled primarily by the regional and local dip of alternating beds of sandstone and shale and by the igneous complex of Mount Taylor. The entrenched alluvial valley along the Rio San Jose, which traverses the area, ranges in width from about 0.4 mile to about 2 miles. The climate is characterized by scant rainfall, which occurs mainly in summer, low relative humidity, and large daily fluctuations of temperature. Most of the surface water enters the area through the Rio San Jose. The average annual streamflow past the gaging station Rio San Jose near Grants, N. Mex. is about 4,000 acre-feet. Tributaries to the Rio San Jose within the area probably contribute about 1,000 acre-feet per year. At the present time, most of the surface water is used for irrigation. Ground water is obtained from consolidated sedimentary rocks that range in age from Triassic to Cretaceous, and from unconsolidated alluvium of Quaternary age. The principal aquifers are the Dakota Sandstone, the Tres Hermanos Sandstone Member of the Mancos Shale, and the alluvium. The Dakota Sandstone yields 5 to 50 gpm (gallons per minute) of water to domestic and stock wells. The Tres Hermanos sandstone Member generally yields 5 to 20 gpm of water to domestic and stock wells. Locally, beds of sandstone in the Chinle and Morrison Formations, the Entrada Sandstone, and the Bluff Sandstone also yield small supplies of water to domestic and stock wells. The alluvium yields from 2 gpm to as much as 150 gpm of water to domestic and stock wells. Thirteen test wells were drilled in a search for usable supplies of ground water for pueblo and irrigation supply and to determine the geologic and hydrologic characteristics of the water-bearing material. The performance of six of the test wells suggests that the sites are favorable for pueblo or irrigation supply wells. The yield of the other seven wells was too small or the quality of the water was too poor for development of pueblo or irrigation supply to be feasible. However, the water from one of the seven wells was good in chemical quality, and the yield was large enough to supply a few homes with water. The tests suggest that the water in the alluvium of the Rio San Jose valley is closely related to the streamflow and that it might be possible to withdraw from the alluvium in summer and replenish it in winter. The surface flow in summer might be decreased by extensive pumpage of ground water, but on the other hand, more of the winter flow could be retained in the area by storage in the ground-water reservoir. Wells could be drilled along the axis of the valley, and the water could be pumped into systems for distribution to irrigated farms. The chemical quality of ground water in the area varies widely from one stratigraphic unit to another and laterally within each unit and commonly the water contains undesirably large amounts of sulfate. However, potable water has been obtained locally from all the aquifers. 
The water of best quality seemingly is in the Tres Hermanos Sandstone Member of the Mancos Shale and in the alluvium north of the Rio San Jose. The largest quantity of water that is suitable for irrigation is in the valley fill along the Rio San Jose. Intensive pumping of ground water from aquifers containing water of good quality may draw water of inferior chemical quality into the wells.

  13. Predeployment validation of fault-tolerant systems through software-implemented fault insertion

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1989-01-01

    The fault injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey of validation methodologies is presented, and the need for fault insertion within such methodologies is demonstrated. The origins and models of faults, and the motivation for the FIAT concept, are reviewed. FIAT employs a validation methodology that builds confidence in the system by first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software, and allows faults or the manifestations of faults to be inserted either by seeding faults into memory or by triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving the insertion of faults. A common system interface eases use and decreases experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses that parallel those observed in real systems under faulty conditions. These capabilities are demonstrated by two example experiments, each using a different fault-tolerance strategy.
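
    A hedged sketch in the spirit of software-implemented fault insertion, not FIAT's own code: seed a single bit-flip into stored data and record whether a simple error-detection mechanism (here a checksum) fires.

        import random

        def checksum(words):
            return sum(words) & 0xFFFFFFFF

        def inject_bit_flip(words, rng):
            """Flip one randomly chosen bit in one randomly chosen word (the seeded fault)."""
            i, bit = rng.randrange(len(words)), rng.randrange(32)
            corrupted = list(words)
            corrupted[i] ^= 1 << bit
            return corrupted

        memory = [0x00C0FFEE, 0x0BADF00D, 0x12345678]   # invented data words
        stored_sum = checksum(memory)
        memory = inject_bit_flip(memory, random.Random(42))
        print("fault detected:", checksum(memory) != stored_sum)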

  14. Novel neural networks-based fault tolerant control scheme with fault alarm.

    PubMed

    Shen, Qikun; Jiang, Bin; Shi, Peng; Lim, Cheng-Chew

    2014-11-01

    In this paper, the problem of adaptive active fault-tolerant control for a class of nonlinear systems with unknown actuator fault is investigated. The actuator fault is assumed not to have the traditional affine appearance in the system state variables and control input. The useful property of the basis function of the radial basis function neural network (NN), which is used in the design of the fault-tolerant controller, is explored. Based on the analysis of the design of normal and passive fault-tolerant controllers, and by using the implicit function theorem, a novel NN-based active fault-tolerant control scheme with fault alarm is proposed. Compared with results in the literature, the scheme can minimize the time delay between fault occurrence and accommodation (the delay due to fault diagnosis) and reduce its adverse effect on system performance. In addition, the scheme combines the advantages of a passive fault-tolerant control scheme with the properties of a traditional active fault-tolerant control scheme. Furthermore, it requires no additional fault detection and isolation model, which is necessary in the traditional active fault-tolerant control scheme. Finally, simulation results are presented to demonstrate the efficiency of the developed techniques. PMID:25014982
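
    The role of the radial basis function network in such schemes is to approximate an unknown nonlinearity (here standing in for the non-affine actuator-fault term) as a linear combination of Gaussian basis functions. The sketch below is only illustrative and uses batch least squares; in the adaptive scheme described above the weights would instead be updated online by an adaptation law.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian radial basis functions evaluated at state x."""
    return np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * width ** 2))

# Approximate an unknown nonlinearity f(x) with a linear combination of RBFs.
centers = np.linspace(-2.0, 2.0, 15).reshape(-1, 1)     # RBF centers on a grid
width = 0.4
xs = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)
f_true = np.tanh(2.0 * xs[:, 0]) + 0.3 * xs[:, 0] ** 2  # example "unknown" function

Phi = np.array([rbf_features(x, centers, width) for x in xs])  # (200, 15) feature matrix
weights, *_ = np.linalg.lstsq(Phi, f_true, rcond=None)         # batch fit (illustrative)

approx = Phi @ weights
print("max approximation error:", np.max(np.abs(approx - f_true)))
```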

  15. Computerized underground cable fault location expertise

    SciTech Connect

    Bascom, E.C. III; Dollen, D.W. Von; Ng, H.W.

    1994-12-31

    Power Technologies, Inc. (PTI) developed an expert system and on-line advisor for the Electric Power Research Institute (EPRI). The system, FAULT, provides guidance for field crews to diagnose a cable failure, recommends applicable fault location techniques, and helps troubleshoot difficulties that arise while locating underground cable faults on transmission and distribution cable systems. The fault location methods identified during development of the expert system are presented in this paper, along with utility statistics from a survey on underground cable fault location.
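
    A rule-based advisor of this kind maps observed symptoms to candidate fault-location techniques. The toy rules below are illustrative only; they are not drawn from the FAULT knowledge base.

```python
def recommend_techniques(symptoms):
    """Toy rule base mapping symptoms to candidate fault-location techniques
    (illustrative rules only, not the EPRI FAULT knowledge base)."""
    recs = []
    if symptoms.get("fault_resistance") == "low":
        recs.append("time-domain reflectometry (TDR) pre-location")
    if symptoms.get("fault_resistance") == "high":
        recs.append("reduce fault resistance (burn down), then re-test")
    if symptoms.get("intermittent"):
        recs.append("surge (thumper) method with acoustic pinpointing")
    if symptoms.get("cable_type") == "transmission":
        recs.append("limit surge voltage per owner practice")
    return recs or ["gather more measurements before selecting a technique"]

print(recommend_techniques({"fault_resistance": "high", "intermittent": True}))
```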

  16. Multiple Fault Isolation in Redundant Systems

    NASA Technical Reports Server (NTRS)

    Pattipati, Krishna R.

    1997-01-01

    Fault diagnosis in large-scale systems that are products of modern technology presents formidable challenges to manufacturers and users. This is due to the large number of failure sources in such systems and the need to quickly isolate and rectify failures with minimal down time. In addition, for fault-tolerant systems and systems with infrequent opportunity for maintenance (e.g., the Hubble telescope, the space station), the assumption of at most a single fault in the system is unrealistic. In this project, we have developed novel block and sequential diagnostic strategies to isolate multiple faults in the shortest possible time without making the unrealistic single-fault assumption.
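
    A sequential diagnostic strategy of the kind described above repeatedly chooses the next test that best discriminates among the remaining multiple-fault hypotheses. The sketch below is a toy illustration under simplifying assumptions (deterministic tests, a hypothetical component and test layout, and a greedy balanced-split rule standing in for information gain); it is not the project's algorithm.

```python
from itertools import combinations

components = ["A", "B", "C", "D"]
tests = {                       # test name -> components it exercises (hypothetical)
    "t1": {"A", "B"},
    "t2": {"B", "C"},
    "t3": {"C", "D"},
    "t4": {"A", "D"},
}

# Multiple-fault hypotheses: every subset of components up to size 2.
hypotheses = [frozenset(c) for k in range(3) for c in combinations(components, k)]

def outcome(test_cover, fault_set):
    """A test fails iff it exercises at least one faulty component."""
    return bool(test_cover & fault_set)

def diagnose(true_faults):
    candidates = list(hypotheses)
    unused = set(tests)
    while unused and len(candidates) > 1:
        # Greedy: pick the unused test whose pass/fail split of the remaining
        # hypotheses is most balanced (a crude stand-in for information gain).
        best = min(unused, key=lambda t: abs(
            sum(outcome(tests[t], h) for h in candidates) - len(candidates) / 2))
        unused.remove(best)
        result = outcome(tests[best], true_faults)
        candidates = [h for h in candidates if outcome(tests[best], h) == result]
    return candidates

print(diagnose(frozenset({"A", "B"})))  # isolates the two simultaneous faults
```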

  17. Mapping tasks into fault tolerant manipulators

    SciTech Connect

    Paredis, C.J.J.; Khosla, P.K.; Kanade, T.

    1994-12-31

    The application of robots to critical missions in hazardous environments requires the development of reliable or fault-tolerant manipulators. In this paper, we define fault tolerance as the ability to continue performing a task after a joint is immobilized by failure. Initially, no joint limits are considered; in this case we prove the existence of fault-tolerant manipulators and develop an analysis tool to determine the fault-tolerant workspace. We also derive design templates for spatial fault-tolerant manipulators. When joint limits are introduced, analytic solutions become infeasible, but a numerical design procedure can be used instead, as illustrated through an example.
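
    The definition of fault tolerance used here (continuing the task after a joint is immobilized) can be probed numerically: lock one joint of a redundant arm and check whether the reduced Jacobian still spans the task space at a given posture. The planar 3R arm below is a hypothetical example, not the paper's analysis tool, and the check is local to one posture rather than over the whole workspace.

```python
import numpy as np

def planar_jacobian(thetas, lengths):
    """2D end-effector Jacobian of a planar serial arm (one column per joint)."""
    n = len(thetas)
    J = np.zeros((2, n))
    cum = np.cumsum(thetas)
    for i in range(n):
        # Column i sums contributions of links i..n-1
        J[0, i] = -np.sum(lengths[i:] * np.sin(cum[i:]))
        J[1, i] = np.sum(lengths[i:] * np.cos(cum[i:]))
    return J

thetas = np.array([0.3, 0.7, -0.5])     # joint angles (rad), example posture
lengths = np.array([1.0, 0.8, 0.6])     # link lengths (m)

J = planar_jacobian(thetas, lengths)
for locked in range(3):
    J_reduced = np.delete(J, locked, axis=1)   # joint 'locked' is immobilized
    rank = np.linalg.matrix_rank(J_reduced)
    print(f"joint {locked} locked: reduced Jacobian rank = {rank} "
          f"({'still covers the plane' if rank == 2 else 'task space lost'})")
```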

  18. Managing Space System Faults: Coalescing NASA's Views

    NASA Technical Reports Server (NTRS)

    Muirhead, Brian; Fesq, Lorraine

    2012-01-01

    Managing faults and their resultant failures is a fundamental and critical part of developing and operating aerospace systems. Yet recent studies have shown that the engineering "discipline" required to manage faults is neither widely recognized nor evenly practiced within the NASA community. Attempts simply to name this discipline in recent years have been fraught with controversy among members of the Integrated Systems Health Management (ISHM), Fault Management (FM), Fault Protection (FP), Hazard Analysis (HA), and Aborts communities. Approaches to managing space system faults typically are unique to each organization, with little commonality in the architectures, processes, and practices across the industry.

  19. Multiple Fault Isolation in Redundant Systems

    NASA Technical Reports Server (NTRS)

    Pattipati, Krishna R.; Patterson-Hine, Ann; Iverson, David

    1997-01-01

    Fault diagnosis in large-scale systems that are products of modern technology presents formidable challenges to manufacturers and users. This is due to the large number of failure sources in such systems and the need to quickly isolate and rectify failures with minimal down time. In addition, for fault-tolerant systems and systems with infrequent opportunity for maintenance (e.g., the Hubble telescope, the space station), the assumption of at most a single fault in the system is unrealistic. In this project, we have developed novel block and sequential diagnostic strategies to isolate multiple faults in the shortest possible time without making the unrealistic single-fault assumption.

  20. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1986-01-01

    Multiversion or N-version programming was proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. Specific topics addressed are: failure probabilities in N-version systems, consistent comparison in N-version systems, descriptions of the faults found in the Knight and Leveson experiment, analytic models of comparison testing, characteristics of the input regions that trigger faults, fault tolerance through data diversity, and the relationship between failures caused by automatically seeded faults.
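
    The N-version approach ultimately relies on a voter that compares independently developed versions. A minimal sketch follows, with hypothetical versions and a fault seeded into one of them; note that majority voting cannot mask the correlated (similar) faults discussed above, in which several versions fail identically.

```python
from collections import Counter

def version_a(x):
    return x * x            # reference behavior

def version_b(x):
    return x ** 2           # independently written, same specification

def version_c(x):
    return x * x if x < 100 else x * x + 1   # seeded fault in one input region

def n_version_run(x, versions):
    """Run all versions, majority-vote the result, and flag disagreement."""
    results = [v(x) for v in versions]
    value, votes = Counter(results).most_common(1)[0]
    agreed = votes > len(versions) // 2
    return value, agreed, results

for x in (3, 150):
    value, agreed, results = n_version_run(x, [version_a, version_b, version_c])
    print(f"x={x}: voted={value}, majority={'yes' if agreed else 'no'}, raw={results}")
```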

  1. Frictional constraints on crustal faulting

    USGS Publications Warehouse

    Boatwright, J.; Cocco, M.

    1996-01-01

    We consider how variations in fault frictional properties affect the phenomenology of earthquake faulting. In particular, we propose that lateral variations in fault friction produce the marked heterogeneity of slip observed in large earthquakes. We model these variations using a rate- and state-dependent friction law, where we differentiate velocity-weakening behavior into two fields: the strong seismic field is very velocity weakening and the weak seismic field is slightly velocity weakening. Similarly, we differentiate velocity-strengthening behavior into two fields: the compliant field is slightly velocity strengthening and the viscous field is very velocity strengthening. The strong seismic field comprises the seismic slip concentrations, or asperities. The two "intermediate" fields, weak seismic and compliant, have frictional velocity dependences that are close to velocity neutral: these fields modulate both the tectonic loading and the dynamic rupture process. During the interseismic period, the weak seismic and compliant regions slip aseismically, while the strong seismic regions remain locked, evolving into stress concentrations that fail only in main shocks. The weak seismic areas exhibit most of the interseismic activity and aftershocks but can also creep seismically. This "mixed" frictional behavior can be obtained from a sufficiently heterogeneous distribution of the critical slip distance. The model also provides a mechanism for rupture arrest: dynamic rupture fronts decelerate as they penetrate into unloaded compliant or weak seismic areas, producing broad areas of accelerated afterslip. Aftershocks occur on both the weak seismic and compliant areas around a fault, but most of the stress is diffused through aseismic slip. Rapid afterslip on these peripheral areas can also produce aftershocks within the main shock rupture area by reloading weak fault areas that slipped in the main shock and then healed. We test this frictional model by comparing the seismicity and the coseismic slip for the 1966 Parkfield, 1979 Coyote Lake, and 1984 Morgan Hill earthquakes. The interevent seismicity and aftershocks appear to occur on fault areas outside the regions of significant slip: these regions are interpreted as either weak seismic or compliant, depending on whether or not they manifest interevent seismicity.
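
    The four frictional fields are distinguished by the sign and size of the steady-state velocity dependence in a rate- and state-dependent friction law, mu_ss = mu_0 + (a - b) ln(V/V_0). The sketch below evaluates that standard expression for illustrative parameter values; the values are not those used in the paper.

```python
import numpy as np

MU0, V0 = 0.6, 1e-6      # reference friction and reference slip rate (m/s), illustrative

def steady_state_friction(v, a, b):
    """Steady-state rate-and-state friction: mu_ss = mu0 + (a - b) * ln(v / v0).
    (a - b) < 0 -> velocity weakening (seismic fields);
    (a - b) > 0 -> velocity strengthening (compliant/viscous fields)."""
    return MU0 + (a - b) * np.log(v / V0)

fields = {
    "strong seismic (very weakening)":        (0.010, 0.020),
    "weak seismic (slightly weakening)":      (0.010, 0.012),
    "compliant (slightly strengthening)":     (0.012, 0.010),
    "viscous (very strengthening)":           (0.020, 0.010),
}

v = 1e-3  # slip rate during rapid slip, m/s
for name, (a, b) in fields.items():
    d_mu = steady_state_friction(v, a, b) - MU0
    print(f"{name}: a-b = {a - b:+.3f}, change in friction at v = 1 mm/s: {d_mu:+.4f}")
```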

  2. Silica Lubrication in Faults (Invited)

    NASA Astrophysics Data System (ADS)

    Rowe, C. D.; Rempe, M.; Lamothe, K.; Kirkpatrick, J. D.; White, J. C.; Mitchell, T. M.; Andrews, M.; Di Toro, G.

    2013-12-01

    Silica-rich rocks are common in the crust, so silica lubrication may be important for causing fault weakening during earthquakes if the phenomenon occurs in nature. In laboratory friction experiments on chert, dramatic shear weakening has been attributed to amorphization and attraction of water from atmospheric humidity to form a 'silica gel'. Few observations of the slip surfaces have been reported, and the details of the weakening mechanism(s) remain enigmatic. Therefore, no criteria exist on which to make comparisons of experimental materials to natural faults. We performed a series of friction experiments, characterized the materials formed on the sliding surface, and compared these to a geological fault in the same rock type. Experiments were performed in the presence of room humidity at 2.5 MPa normal stress with 3 and 30 m total displacement for a variety of slip rates (10⁻⁴-10⁻¹ m/s). The friction coefficient (μ) decreased from >0.6 to ~0.2 at 10⁻¹ m/s, but only fell to ~0.4 at 10⁻²-10⁻⁴ m/s. The slip surfaces and wear material were observed using laser confocal Raman microscopy, electron microprobe, X-ray diffraction, and transmission electron microscopy. Experiments at 10⁻¹ m/s formed wear material consisting of ≤1 μm powder that is aggregated into irregular 5-20 μm clumps. Some material disaggregated during analysis with electron beams and lasers, suggesting hydrous and unstable components. Compressed powder forms smooth pavements on the surface in which grains are not visible (if present, they are <100 nm). The powder contains amorphous material and as yet unidentified crystalline and non-crystalline forms of silica (not quartz), while the worn chert surface underneath shows Raman spectra consistent with a mixture of quartz and amorphous material. If silica amorphization facilitates shear weakening in natural faults, similar wear materials should be formed, and we may be able to identify them through microstructural studies. However, the sub-micron particles of unstable materials are unlikely to survive in the crust over geologic time, so a direct comparison of fresh experimental wear material and ancient fault rock needs to account for the alteration and crystallization of primary materials. The surface of the Corona fault is coated by a translucent shiny layer consisting of a ~100 nm interlocking groundmass of dislocation-free quartz, 10 nm ellipsoidal particles, and interstitial patches of amorphous silica. We interpret this layer as the equivalent of the experimentally produced amorphous material after crystallizing to more stable forms over geological time.

  3. Tool for Viewing Faults Under Terrain

    NASA Technical Reports Server (NTRS)

    Siegel, Herbert L.; Li, P. Peggy

    2005-01-01

    Multi Surface Light Table (MSLT) is an interactive software tool that was developed in support of the QuakeSim project, which has created an earthquake- fault database and a set of earthquake- simulation software tools. MSLT visualizes the three-dimensional geometries of faults embedded below the terrain and animates time-varying simulations of stress and slip. The fault segments, represented as rectangular surfaces at dip angles, are organized into collections, that is, faults. An interface built into MSLT queries and retrieves fault definitions from the QuakeSim fault database. MSLT also reads time-varying output from one of the QuakeSim simulation tools, called "Virtual California." Stress intensity is represented by variations in color. Slips are represented by directional indicators on the fault segments. The magnitudes of the slips are represented by the duration of the directional indicators in time. The interactive controls in MSLT provide a virtual track-ball, pan and zoom, translucency adjustment, simulation playback, and simulation movie capture. In addition, geographical information on the fault segments and faults is displayed on text windows. Because of the extensive viewing controls, faults can be seen in relation to one another, and to the terrain. These relations can be realized in simulations. Correlated slips in parallel faults are visible in the playback of Virtual California simulations.

  4. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Vouk, Mladen A.

    1989-01-01

    Twenty functionally equivalent programs were built and tested in a multiversion software experiment. Following unit testing, all programs were subjected to an extensive system test. In the process, sixty-one distinct faults were identified among the versions. Less than 12 percent of the faults exhibited varying degrees of positive correlation. The common-cause (or similar) faults spanned as many as 14 components. However, a majority of these faults were trivial, and easily detected by proper unit and/or system testing. Only two of the seven similar faults were difficult faults, and both were caused by specification ambiguities. One of these faults exhibited a variable identical-and-wrong response span, i.e. a response span which varied with the testing conditions and input data. Techniques that could have been used to avoid the faults are discussed. For example, it was determined that back-to-back testing of 2-tuples could have been used to eliminate about 90 percent of the faults. In addition, four of the seven similar faults could have been detected by using back-to-back testing of 5-tuples. It is believed that most, if not all, similar faults could have been avoided had the specifications been written using more formal notation, had the unit testing phase been subject to more stringent standards and controls, and had better tools for measuring the quality and adequacy of the test data (e.g. coverage) been used.

  5. Wrench faulting using seismic and Landsat

    SciTech Connect

    Bolden, G.P.

    1987-05-01

    Two high-multiplicity seismic profiles demonstrate the compressional nature of the faulting along the Double Mountain Lineament in northeast Garza County in the Permian basin. NASA high-altitude aircraft imagery using Landsat parameters delineates the traces of these faults on the surface. The drainage system also defines the fault traces by following the zones of fracture and weakness in the Permian and Triassic outcrops. A north-south seismic profile crosses the Double Mountain Lineament (P shear), defining two thrust faults, two high-angle reverse faults, and a pop-up block (flow structure). NASA high-altitude imagery and stream drainage indicate the traces of these faults. The pattern developed fits the definition of left-lateral wrench faulting. Overlying carbonate shelf margins are developed above the underlying structure, which further enhances the structural interpretation. An east-west seismic profile 3 mi southeast of the north-south profile again defines the Double Mountain Lineament or P shear and the associated faulting. A 1-mi-wide pop-up block with a high-angle reverse fault on both sides demonstrates the compressional nature of the faulting, and the high-altitude imagery delineates the surface traces of the faults. This structure has been drilled with several Strawn and Ellenburger producers, confirming the seismic and surface interpretations in the subsurface.

  6. Model-Based Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Kumar, Aditya; Viassolo, Daniel

    2008-01-01

    The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in the presence of these faults. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
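
    Both detection approaches operate on residuals supplied by an Extended Kalman Filter. The sketch below illustrates only a downstream residual-thresholding and persistence-confirmation stage on synthetic residuals; it does not reproduce the engine model, the multiple-hypothesis test, the neural network, or the MBFTC fusion algorithm.

```python
import numpy as np

def detect(residuals, sigma, threshold=3.0):
    """Flag samples whose normalized residual exceeds a threshold
    (residuals are assumed to come from an upstream Extended Kalman Filter)."""
    return np.abs(residuals) / sigma > threshold

rng = np.random.default_rng(1)
n = 200
res_sensor = rng.normal(0.0, 1.0, n)       # residual monitored for sensor faults
res_actuator = rng.normal(0.0, 1.0, n)     # residual monitored for actuator faults
res_sensor[120:] += 6.0                    # simulated sensor bias fault at sample 120

flags_sensor = detect(res_sensor, sigma=1.0)
flags_actuator = detect(res_actuator, sigma=1.0)

def confirm(flags, persistence=5):
    """Confirm a fault only after several consecutive flagged samples."""
    run = 0
    for i, f in enumerate(flags):
        run = run + 1 if f else 0
        if run >= persistence:
            return i  # first sample at which the fault is confirmed
    return None

print("sensor fault confirmed at sample:", confirm(flags_sensor))
print("actuator fault confirmed at sample:", confirm(flags_actuator))
```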

  7. A Quaternary fault database for central Asia

    NASA Astrophysics Data System (ADS)

    Mohadjer, Solmaz; Ehlers, Todd Alan; Bendick, Rebecca; Stübner, Konstanze; Strube, Timo

    2016-02-01

    Earthquakes represent the highest risk in terms of potential loss of lives and economic damage for central Asian countries. Knowledge of fault location and behavior is essential in calculating and mapping seismic hazard. Previous efforts in compiling fault information for central Asia have generated a large amount of data that are published in limited-access journals with no digital maps publicly available, or are limited in their description of important fault parameters such as slip rates. This study builds on previous work by improving access to fault information through a web-based interactive map and an online database with search capabilities that allow users to organize data by different fields. The data presented in this compilation include fault location, its geographic, seismic, and structural characteristics, short descriptions, narrative comments, and references to peer-reviewed publications. The interactive map displays 1196 fault traces and 34 000 earthquake locations on a shaded-relief map. The online database contains attributes for 123 faults mentioned in the literature, with Quaternary and geodetic slip rates reported for 38 and 26 faults respectively, and earthquake history reported for 39 faults. All data are accessible for viewing and download via http://www.geo.uni-tuebingen.de/faults/. This work has implications for seismic hazard studies in central Asia as it summarizes important fault parameters, and can reduce earthquake risk by enhancing public access to information. It also allows scientists and hazard assessment teams to identify structures and regions where data gaps exist and future investigations are needed.

  8. Parallel fault-tolerant robot control

    NASA Technical Reports Server (NTRS)

    Hamilton, D. L.; Bennett, J. K.; Walker, I. D.

    1992-01-01

    A shared memory multiprocessor architecture is used to develop a parallel fault-tolerant robot controller. Several versions of the robot controller are developed and compared. A robot simulation is also developed for control observation. Comparison of a serial version of the controller and a parallel version without fault tolerance showed the speedup possible with the coarse-grained parallelism currently employed. The performance degradation due to the addition of processor fault tolerance was demonstrated by comparison of these controllers with their fault-tolerant versions. Comparison of the more fault-tolerant controller with the lower-level fault-tolerant controller showed how varying the amount of redundant data affects performance. The results demonstrate the trade-off between speed performance and processor fault tolerance.

  9. Arc burst pattern analysis fault detection system

    NASA Technical Reports Server (NTRS)

    Russell, B. Don (Inventor); Aucoin, B. Michael (Inventor); Benner, Carl L. (Inventor)

    1997-01-01

    A method and apparatus are provided for detecting an arcing fault on a power line carrying a load current. Parameters indicative of power flow and possible fault events on the line, such as voltage and load current, are monitored and analyzed for the arc burst pattern exhibited by arcing faults in a power system. These arcing faults are detected by identifying bursts in each half-cycle of the fundamental current. Bursts occurring at or near a voltage peak indicate arcing on that phase. Once a faulted phase line is identified, a comparison of the current and voltage reveals whether the fault is located in a downstream direction of power flow toward customers, or upstream toward a generation station. If the fault is located downstream, the line is de-energized; if located upstream, the line may remain energized to prevent unnecessary power outages.
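
    The detection idea, identifying high-frequency bursts in each half-cycle and checking whether they cluster near the voltage peak, can be sketched on synthetic waveforms as below. The burst model, the high-frequency proxy, and the thresholds are illustrative assumptions, not the patented algorithm.

```python
import numpy as np

FS, F0 = 10000, 60                      # sample rate (Hz) and fundamental (Hz)
t = np.arange(0, 0.1, 1.0 / FS)         # 100 ms of data (6 cycles)
voltage = np.sin(2 * np.pi * F0 * t)
current = 10 * np.sin(2 * np.pi * F0 * t - 0.3)

# Synthetic arcing: high-frequency bursts injected near positive voltage peaks.
rng = np.random.default_rng(0)
near_peak = voltage > 0.95
current = current + near_peak * rng.normal(0.0, 2.0, t.size)

def half_cycle_bursts(v, i, fs, f0, burst_threshold=0.5):
    """For each half-cycle, measure high-frequency content of the current and
    check whether it clusters near the voltage peak (arcing signature)."""
    samples_per_half = int(fs / (2 * f0))
    hf = np.abs(np.diff(i, prepend=i[0]))      # crude high-frequency proxy
    findings = []
    for k in range(len(i) // samples_per_half):
        s = slice(k * samples_per_half, (k + 1) * samples_per_half)
        burst_energy = hf[s].mean()
        peak_idx = np.argmax(np.abs(v[s]))
        burst_near_peak = hf[s][max(0, peak_idx - 5): peak_idx + 5].mean()
        findings.append(bool(burst_energy > burst_threshold and
                             burst_near_peak > burst_energy))
    return findings

print(half_cycle_bursts(voltage, current, FS, F0))  # True on the arcing half-cycles
```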

  10. Alp Transit: Crossing Faults 44 and 49

    NASA Astrophysics Data System (ADS)

    El Tani, M.; Bremen, R.

    2014-05-01

    This paper describes the crossing of faults 44 and 49 when constructing the 57 km Gotthard base tunnel of the Alp Transit project. Fault 44 is a permeable fault that triggered significant surface deformations 1,400 m above the tunnel when it was reached by the advancing excavation. The fault runs parallel to the downstream face of the Nalps arch dam. Significant deformations were measured at the dam crown. Fault 49 is sub-vertical and permeable, and runs parallel at the upstream face of the dam. It was necessary to assess the risk when crossing fault 49, as a limit was put on the acceptable dam deformation for structural safety. The simulation model, forecasts and action decided when crossing over the faults are presented, with a brief description of the tunnel, the dam, and the monitoring system.

  11. Fibre bundle framework for quantum fault tolerance

    NASA Astrophysics Data System (ADS)

    Zhang, Lucy Liuxuan; Gottesman, Daniel

    2014-03-01

    We introduce a differential geometric framework for describing families of quantum error-correcting codes and for understanding quantum fault tolerance. In particular, we use fibre bundles and a natural projectively flat connection thereon to study the transformation of codewords under unitary fault-tolerant evolutions. We explain how the fault-tolerant logical operations are given by the monodromy group for the bundles with projectively flat connection, which is always discrete. We discuss the construction of the said bundles for two examples of fault-tolerant families of operations, the string operators in the toric code and the qudit transversal gates. This framework unifies topological fault tolerance and fault tolerance based on transversal gates, and is expected to apply to all unitary quantum fault-tolerant protocols.

  12. Rule-based fault diagnosis of hall sensors and fault-tolerant control of PMSM

    NASA Astrophysics Data System (ADS)

    Song, Ziyou; Li, Jianqiu; Ouyang, Minggao; Gu, Jing; Feng, Xuning; Lu, Dongbin

    2013-07-01

    The Hall sensor is widely used for estimating the rotor phase of a permanent magnet synchronous motor (PMSM). Because rotor position is an essential parameter of the PMSM control algorithm, Hall sensor faults can be dangerous, yet little research has focused on fault diagnosis and fault-tolerant control of the Hall sensors used in PMSMs. From this standpoint, the Hall sensor faults which may occur during PMSM operation are analyzed theoretically. According to the analysis results, a fault diagnosis algorithm for the Hall sensors, based on three rules, is proposed to classify the fault phenomena accurately. Rotor phase estimation algorithms based on one or two Hall sensors are used to construct the fault-tolerant control algorithm. The fault diagnosis algorithm can detect 60 Hall fault phenomena in total, and all detections can be completed within 1/138 of a rotor rotation period. The fault-tolerant control algorithm achieves smooth torque production, that is, the same control effect as the normal control mode (with three Hall sensors). Finally, a PMSM bench test verifies the accuracy and rapidity of the fault diagnosis and fault-tolerant control strategies. The fault diagnosis algorithm can detect all Hall sensor faults promptly, and the fault-tolerant control algorithm allows the PMSM to operate under failure of one or two Hall sensors. In addition, the transitions between healthy and fault-tolerant control conditions are smooth, without additional noise and harshness. The proposed algorithms can deal with the Hall sensor faults of PMSMs in real applications, and can be used to realize the fault diagnosis and fault-tolerant control of PMSMs.
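
    A common building block of Hall-sensor diagnosis is checking that the three-bit Hall code is a valid commutation state and that consecutive codes follow the expected sequence. The sketch below illustrates that idea with two simple rules; it is not the paper's three-rule algorithm.

```python
# Valid 3-bit Hall codes for 120-degree-spaced sensors: 0b000 and 0b111 never
# occur in a healthy machine, and consecutive codes must follow the commutation
# sequence (in either rotation direction).
VALID_STATES = {0b001, 0b011, 0b010, 0b110, 0b100, 0b101}
SEQ_CW = [0b001, 0b011, 0b010, 0b110, 0b100, 0b101]

def check_hall_stream(codes):
    """Return indices where the Hall signals look faulty (illustrative rules)."""
    faults = []
    for k, code in enumerate(codes):
        if code not in VALID_STATES:
            faults.append((k, "invalid state (possible stuck or open sensor)"))
        elif k > 0 and codes[k - 1] in VALID_STATES and code != codes[k - 1]:
            i_prev = SEQ_CW.index(codes[k - 1])
            if code not in (SEQ_CW[(i_prev + 1) % 6], SEQ_CW[(i_prev - 1) % 6]):
                faults.append((k, "illegal transition (possible signal loss)"))
    return faults

stream = [0b001, 0b011, 0b010, 0b000, 0b110, 0b101]   # 0b000 injected at index 3
print(check_hall_stream(stream))
```

    Note that a single invalid sample also disturbs the following transition check, which is why real diagnosis schemes combine state validity, transition order, and timing rules before declaring a specific sensor faulty.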

  13. Tracing the Geomorphic Signature of Lateral Faulting

    NASA Astrophysics Data System (ADS)

    Duvall, A. R.; Tucker, G. E.

    2012-12-01

    Active strike-slip faults are among the most dangerous geologic features on Earth. Unfortunately, it is challenging to estimate their slip rates, seismic hazard, and evolution over a range of timescales. An under-exploited tool in strike-slip fault characterization is quantitative analysis of the geomorphic response to lateral fault motion, which can extract tectonic information directly from the landscape. Past geomorphic work of this kind has focused almost exclusively on vertical motion, despite the ubiquity of horizontal motion in crustal deformation and mountain building. We seek to address this problem by investigating the landscape response to strike-slip faulting in two ways: 1) examining the geomorphology of the Marlborough Fault System (MFS), a suite of parallel strike-slip faults within the actively deforming South Island of New Zealand, and 2) conducting controlled experiments in strike-slip landscape evolution using the CHILD landscape evolution model. The MFS offers an excellent natural experiment site because fault initiation ages and cumulative displacements decrease from north to south, whereas slip rates increase more than fourfold across a region underlain by a single bedrock unit (Torlesse Greywacke). Comparison of planform and longitudinal profiles of rivers draining the MFS reveals strong disequilibrium within tributaries that drain to active fault strands, and suggests that river capture related to fault activity may be a regular process in strike-slip fault zones. Simple model experiments support this view. Model calculations that include horizontal motion as well as vertical uplift demonstrate river lengthening and shortening due to stream capture in response to shutter ridges sliding in front of stream outlets. These results suggest that systematic variability in fluvial knickpoint location, drainage area, and incision rates along different faults or fault segments may be expected in catchments upstream of strike-slip faults and could act as useful indicators of fault activity.

  14. Effects of fault structures on evaporite dissolution

    NASA Astrophysics Data System (ADS)

    Zechner, Eric; Zidane, Ali; Huggenberger, Peter; Younes, Anis

    2013-04-01

    Uncontrolled subsurface dissolution of evaporites can lead to hazards such as land subsidence. Observed subsidences in a study area of northwestern Switzerland were mainly due to subsurface dissolution of halite and gypsum. A set of density-driven flow simulations was conducted to study the effect of the different unknown subsurface parameters on the dissolution process. The study site is represented by an approximately 1000 m long and 200 m deep 2D field-scale model, which corresponds to a setup of two aquifers connected by subvertical normal fault zones. The mixed finite element method is used to solve the flow equation, coupled with the multipoint flux approximation and the discontinuous Galerkin method to solve the diffusion and the advection parts of the transport equation. Specific concern is given to the heterogeneity of normal fault zones and its role in the dissolution of evaporites. Different fault zones with increased hydraulic conductivity and fault widths ranging from 0.5 m to 40 m were evaluated. Results show that larger fault thicknesses induce smaller flow velocities, which, theoretically, lead to less salt dissolution. Larger fault zones, however, allow larger amounts of freshwater to access the salt top. The resulting increase of the concentration gradient between the saturated salt top and the subsaturated groundwater accelerates the dissolution process. Major faults causing significant displacement of sediments typically consist of sets of smaller faults, which can be grouped into one larger fault zone. In order to account for a more realistic approach to heterogeneity within the 40 m wide fault zone, the zone is divided into 2, 3 and 6 faults with different combinations of fault widths. Although the hydraulically active width of the fault is reduced when the number of faults is increased, a substantial increase of dissolved mass is observed when increasing the number of faults. This difference in mass is due to the fact that steady-state flow conditions require more time to establish in the case of six thin faults compared to the model with one single wide fault. The presence of conductive vertical zones in a variety of geological settings, combined with the typical uncertainty related to the hydraulic characteristics of fractured fault zones, suggests that faults play an important role in the dissolution process of evaporites and the resulting density-driven transport of solutes.
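
    The transport equation in such simulations is handled by treating the advection and diffusion parts separately. The sketch below is a much-simplified 1D finite-difference analogue of that splitting (explicit upwind advection plus explicit diffusion on a fixed grid); it is not the mixed finite element / MPFA / discontinuous Galerkin scheme used in the study, and all parameter values are illustrative.

```python
import numpy as np

# 1D advection-diffusion of a solute, solved by operator splitting:
# advection with an explicit upwind step, diffusion with an explicit step.
nx, L = 200, 100.0           # grid cells, domain length (m)
dx = L / nx
u, D = 1e-3, 1e-4            # velocity (m/s) and diffusivity (m^2/s), illustrative
dt = 0.5 * min(dx / u, dx * dx / (2 * D))   # stability-limited time step

c = np.zeros(nx)
c[:10] = 1.0                 # saturated brine at the inflow end (c = 1)

def step(c):
    # Advection (upwind, u > 0): information comes from the left neighbor.
    c_adv = c.copy()
    c_adv[1:] -= u * dt / dx * (c[1:] - c[:-1])
    # Diffusion (central differences), zero-gradient boundaries.
    lap = np.zeros_like(c_adv)
    lap[1:-1] = c_adv[2:] - 2 * c_adv[1:-1] + c_adv[:-2]
    c_new = c_adv + D * dt / dx ** 2 * lap
    c_new[0] = 1.0           # fixed concentration at the inflow boundary
    return c_new

for _ in range(200):
    c = step(c)
print("plume front (c > 0.5) has reached x =", dx * np.argmin(c > 0.5), "m")
```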

  15. Fault Injection Techniques and Tools

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen; Tsai, Timothy K.; Iyer, Ravishankar K.

    1997-01-01

    Dependability evaluation involves the study of failures and errors. The destructive nature of a crash and long error latency make it difficult to identify the causes of failures in the operational environment. It is particularly hard to recreate a failure scenario for a large, complex system. To identify and understand potential failures, we use an experiment-based approach for studying the dependability of a system. Such an approach is applied not only during the conception and design phases, but also during the prototype and operational phases. To take an experiment-based approach, we must first understand a system's architecture, structure, and behavior. Specifically, we need to know its tolerance for faults and failures, including its built-in detection and recovery mechanisms, and we need specific instruments and tools to inject faults, create failures or errors, and monitor their effects.

  16. Perspective View, San Andreas Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The prominent linear feature straight down the center of this perspective view is California's famous San Andreas Fault. The image, created with data from NASA's Shuttle Radar Topography Mission (SRTM), will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This segment of the fault lies west of the city of Palmdale, Calif., about 100 kilometers (about 60 miles) northwest of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. Two large mountain ranges are visible, the San Gabriel Mountains on the left and the Tehachapi Mountains in the upper right. Another fault, the Garlock Fault lies at the base of the Tehachapis; the San Andreas and the Garlock Faults meet in the center distance near the town of Gorman. In the distance, over the Tehachapi Mountains is California's Central Valley. Along the foothills in the right hand part of the image is the Antelope Valley, including the Antelope Valley California Poppy Reserve. The data used to create this image were acquired by SRTM aboard the Space Shuttle Endeavour, launched on February 11, 2000.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    SRTM uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense (DoD), and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC.

    Size: Varies in a perspective view Location: 34.70 deg. North lat., 118.57 deg. West lon. Orientation: Looking Northwest Original Data Resolution: SRTM and Landsat: 30 meters (99 feet) Date Acquired: February 16, 2000

  17. Stacking faults in Si nanocrystals

    SciTech Connect

    Wang, Y.Q.; Smirani, R.; Ross, G.G.

    2005-05-30

    Si nanocrystals (Si nc) were formed by the implantation of Si⁺ into a SiO₂ film on (100) Si, followed by high-temperature annealing. High-resolution transmission electron microscopy has been used to examine the microstructure of the Si nc produced by a high-dose (3×10¹⁷ cm⁻²) implantation. It is shown that there are only stacking-fault (SF) defects in some nanocrystals; while in others the stacking faults (SFs) coexist with twins. Two kinds of SFs, one being an intrinsic SF, the other being an extrinsic SF, have been observed inside the Si nc. More intrinsic SFs have been found in the Si nc, and the possible reasons are discussed. These microstructural defects are expected to play an important role in the light emission from the Si nc.

  18. The Susitna Glacier thrust fault: Characteristics of surface ruptures on the fault that initiated the 2002 Denali fault earthquake

    USGS Publications Warehouse

    Crone, A.J.; Personius, S.F.; Craw, P.A.; Haeussler, P.J.; Staft, L.A.

    2004-01-01

    The 3 November 2002 Mw 7.9 Denali fault earthquake sequence initiated on the newly discovered Susitna Glacier thrust fault and caused 48 km of surface rupture. Rupture of the Susitna Glacier fault generated scarps on ice of the Susitna and West Fork glaciers and on tundra and surficial deposits along the southern front of the central Alaska Range. Based on detailed mapping, 27 topographic profiles, and field observations, we document the characteristics and slip distribution of the 2002 ruptures and describe evidence of pre-2002 ruptures on the fault. The 2002 surface faulting produced structures that range from simple folds on a single trace to complex thrust-fault ruptures and pressure ridges on multiple, sinuous strands. The deformation zone is locally more than 1 km wide. We measured a maximum vertical displacement of 5.4 m on the south-directed main thrust. North-directed backthrusts have more than 4 m of surface offset. We measured a well-constrained near-surface fault dip of about 19° at one site, which is considerably less than seismologically determined values of 35°-48°. Surface-rupture data yield an estimated magnitude of Mw 7.3 for the fault, which is similar to the seismological value of Mw 7.2. Comparison of field and seismological data suggests that the Susitna Glacier fault is part of a large positive flower structure associated with northwest-directed transpressive deformation on the Denali fault. Prehistoric scarps are evidence of previous rupture of the Susitna Glacier fault, but additional work is needed to determine if past failures of the Susitna Glacier fault have consistently induced rupture of the Denali fault.

  19. Inverter Ground Fault Overvoltage Testing

    SciTech Connect

    Hoke, Andy; Nelson, Austin; Chakraborty, Sudipta; Chebahtah, Justin; Wang, Trudie; McCarty, Michael

    2015-08-12

    This report describes testing conducted at NREL to determine the duration and magnitude of transient overvoltages created by several commercial PV inverters during ground fault conditions. For this work, a test plan developed by the Forum on Inverter Grid Integration Issues (FIGII) has been implemented in a custom test setup at NREL. Load rejection overvoltage test results were reported previously in a separate technical report.

  20. Fault growth and interactions in a multiphase rift fault network: Horda Platform, Norwegian North Sea

    NASA Astrophysics Data System (ADS)

    Duffy, Oliver B.; Bell, Rebecca E.; Jackson, Christopher A.-L.; Gawthorpe, Rob L.; Whipp, Paul S.

    2015-11-01

    Physical models predict that multiphase rifts that experience a change in extension direction between stretching phases will typically develop non-colinear normal fault sets. Furthermore, multiphase rifts will display a greater frequency and range of styles of fault interactions than single-phase rifts. Although these physical models have yielded useful information on the evolution of fault networks in map view, the true 3D geometry of the faults and associated interactions are poorly understood. Here, we use an integrated 3D seismic reflection and borehole dataset to examine a range of fault interactions that occur in a natural multiphase fault network in the northern Horda Platform, northern North Sea. In particular we aim to: i) determine the range of styles of fault interaction that occur between non-colinear faults; ii) examine the typical geometries and throw patterns associated with each of these different styles; and iii) highlight the differences between single-phase and multiphase rift fault networks. Our study focuses on a ca. 350 km² region around the >60 km long, N-S-striking Tusse Fault, a normal fault system that was active in the Permian-Triassic and again in the Late Jurassic to Early Cretaceous. The Tusse Fault is one of a series of large (>1500 m throw) N-S-striking faults forming part of the northern Horda Platform fault network, which includes numerous smaller (2-10 km long), lower throw (<100 m), predominantly NW-SE-striking faults that were only active during the Late Jurassic to Early Cretaceous. We examine how the 2nd-stage NW-SE-striking faults grew, interacted and linked with the N-S-striking Tusse Fault, documenting a range of interaction styles including mechanical and kinematic isolation, abutment, retardation and reactivated relays. Our results demonstrate that: i) isolated and abutting interactions are the most common fault interaction styles in the northern Horda Platform; ii) pre-existing faults can act as sites of nucleation for 2nd-stage faults or may form mechanical barriers to propagation; iii) the throw distribution on reactivated 1st-stage faults will be modified in a predictable manner if they are intersected or influenced by 2nd-stage faults; iv) sites of fault linkage and relay-breaching associated with the first phase of extension can act as preferential nucleation sites for 2nd-stage faults; and v) the development of fault intersections is a dynamic process, involving the gradual transition from one style to another.

  1. CONTROL AND FAULT DETECTOR CIRCUIT

    DOEpatents

    Winningstad, C.N.

    1958-04-01

    A power control and fault detector circuit for a radio-frequency system is described. The circuit controls the power output of a radio-frequency power supply: it automatically starts the flow of energizing power to the supply and gradually increases the power to a predetermined level which is below the point where destruction would occur in the event of a fault. If the radio-frequency power supply output fails to increase during this period, the control does not further increase the power. On the other hand, if the output of the radio-frequency power supply properly increases, then the control continues to increase the power to a maximum value. After the maximum value of radio-frequency output has been achieved, the control is responsive to a "fault," such as a short circuit in the radio-frequency system being driven, so that the flow of power is interrupted for an interval before the cycle is repeated.

  2. Fault detection using genetic programming

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; B. Jack, Lindsay; Nandi, Asoke K.

    2005-03-01

    Genetic programming (GP) is a stochastic process for automatically generating computer programs. GP has been applied to a wide variety of problems, too many to enumerate here. As far as the authors are aware, it has rarely been used in condition monitoring (CM). In this paper, GP is used to detect faults in rotating machinery. Feature sets from two different machines are used to examine the performance of two-class normal/fault recognition. The results are compared with other methods for fault detection: artificial neural networks (ANNs) have been used in this field for many years, while support vector machines (SVMs) also offer successful solutions. For ANNs and SVMs, genetic algorithms have been used for feature selection, which is an inherent function of GP. In all cases, GP demonstrates performance which equals or betters that of the previous best-performing approaches on these data sets. The training times are also found to be considerably shorter than the other approaches, whilst the generated classification rules are easy to understand and independently validate.

  3. From fissure to fault: A model of fault growth in the Krafla Fissure System, NE Iceland

    NASA Astrophysics Data System (ADS)

    Bramham, Emma; Paton, Douglas; Wright, Tim

    2015-04-01

    Current models of fault growth examine the relationship of fault length (L) to vertical displacement (D), where faults exhibit the classic fault shape of gradually increasing vertical displacement from zero at the fault tips to a maximum displacement (Dmax) at the middle of the fault. These models cannot adequately explain displacement-length observations at the Krafla fissure swarm, in Iceland's northern volcanic zone, where we observe that many of the faults with significant vertical displacements still retain fissure-like features, with no vertical displacement, along portions of their lengths. We have created a high-resolution digital elevation model (DEM) of the Krafla region using airborne LiDAR and measured the displacement/length profiles of 775 faults, with lengths ranging from tens to thousands of metres. We have categorised the faults based on the proportion of the profile that is still fissure-like. Fully developed faults (no fissure-like regions) were further grouped into those with profiles that have a flat-top geometry (i.e. a significant proportion of fault length with constant throw), those with a bell-shaped throw profile, and those that show regions of fault linkage. We suggest that a fault can most easily accommodate stress by displacing regions that are still fissure-like, and that a fault is more likely to accommodate stress by linkage once it has reached the maximum displacement for its fault length. Our results demonstrate a pattern of growth from fissure to fault in the Dmax/L ratio of the categorised faults, and we propose a model for this growth. These data better constrain our understanding of how fissures develop into faults and also provide insights into the discrepancy of D/L profiles from a typical bell-shaped distribution.
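
    The categorisation described above can be summarised by two numbers per fault: the Dmax/L ratio and the fraction of the throw profile that is still fissure-like (near-zero throw). The sketch below computes both for synthetic profiles; the profiles and the fissure-throw cutoff are illustrative, not Krafla measurements.

```python
import numpy as np

def profile_metrics(distance_m, throw_m, fissure_throw=0.05):
    """Summarize a fault throw profile: length, Dmax, Dmax/L, and the fraction
    of the profile that is still fissure-like (throw below a small cutoff)."""
    length = distance_m[-1] - distance_m[0]
    dmax = float(np.max(throw_m))
    fissure_fraction = float(np.mean(throw_m < fissure_throw))
    return {"L_m": length, "Dmax_m": dmax,
            "Dmax_over_L": dmax / length,
            "fissure_fraction": fissure_fraction}

# Synthetic example profiles (not Krafla data):
x = np.linspace(0.0, 400.0, 81)
bell_shaped = 6.0 * np.sin(np.pi * x / 400.0) ** 2                       # fully developed
part_fissure = np.where(x < 150.0, 0.0, 4.0 * np.sin(np.pi * (x - 150.0) / 250.0))

for name, throw in (("bell-shaped", bell_shaped), ("partly fissure-like", part_fissure)):
    print(name, profile_metrics(x, throw))
```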

  4. Characterizing the eolian sediment component in the lacustrine record of Laguna Potrok Aike (southeastern Patagonia)

    NASA Astrophysics Data System (ADS)

    Ohlendorf, C.; Gebhardt, C.

    2013-12-01

    Southern South America with its extended dry areas was one of the major sources of dust in the higher latitudes of the southern hemisphere during the last Glacial, as deduced from fingerprinting of dust particles found in Antarctic ice cores. The amount of dust that was mobilized is mostly related to the strength and latitudinal position of the Southern Hemisphere Westerly Winds (SWW). How exactly the SWW shifted between glacial and interglacial times, and what consequences such shifts had for ocean and atmospheric circulation changes during the last deglaciation, is currently under debate. Laguna Potrok Aike (PTA), as a lake situated in the middle of the source area of dust, offers the opportunity to arrive at a better understanding of past SWW changes and their associated consequences for dust transport. For this task, a sediment record of the past ~51 ka is available from a deep drilling campaign (PASADO). From this 106 m long profile, 76 samples representing the different lithologies of the sediment sequence were selected to characterize an eolian sediment component. Prior to sampling of the respective core intervals, magnetic susceptibility was measured and the element composition was determined by XRF-scanning on fresh, undisturbed sediment. After sampling and freeze drying, physical, chemical and mineralogical sediment properties were determined before and after separation of each sample into six grain-size classes, with each fraction analyzed separately. SEM techniques were used to verify the eolian origin of grains. The aim of this approach is to isolate an exploitable fingerprint of the eolian sediment component in terms of grain size, physical properties, geochemistry and mineralogy. The challenging aspect is that such a fingerprint should be based on high-resolution down-core scanning techniques, so that time-consuming techniques such as grain-size measurements by laser detection can be avoided. A first evaluation of the dataset indicates that magnetic susceptibility, which is often used as a tracer for the eolian sediment component in marine sediments, probably does not yield a robust signal of eolian input in this continental setting because it is variably contained in the silt as well as in the fine sand fraction. XRF-scanning of powdered samples of the different grain-size fractions shows that some elements are characteristically enriched in the clay, silt or medium sand fractions, which might allow a geochemical fingerprinting of these. For instance, an identification of higher amounts of clay in a sample may be possible based on its enrichment in heavy metals (Zn, Cu, Pb) and/or Fe. Higher amounts of silt may be recognized by Zr and/or Y enrichment. Hence, unmixing of the signal stored in the sedimentary record of PTA with tools of multivariate statistics is a necessary step to characterize the eolian fraction. The 51 ka BP sediment record of PTA might then be used for a reconstruction of dust availability in the high-latitude source areas of the southern hemisphere.

  5. Effects of abnormal flooding events on microbial mat communities and aragonitic stromatolites, Laguna Mormona, Baja California, Mexico

    SciTech Connect

    Horodyski, R.J.

    1985-02-01

    Laguna Mormona (Baja California, Mexico) is a coastal sabkha that contains a variety of microbial (cyanophycean and bacterial) mat communities. Studies conducted during 1971-76 concentrated on the microstructure, macrostructure, and degradation of these microbial mats and aragonitic stromatolites and the information they provide that is relevant to the interpretation of Proterozoic stromatolites, silicified microbial mats, and their contained microfossils. Abnormally high rainfall in 1979-80 flooded the sabkha to depths exceeding 1 m and profoundly affected these microbial communities by lowering the salinity of the water and depositing 5-10 cm of very fine grained, organic-rich mud over most of the microbial mats. The water level has returned to normal, and diatoms, cyanophytes, and bacteria locally form millimeter-thick mats upon this mud in areas that previously contained well-developed mats; however, it is unclear whether these mats will eventually attain the thickness (up to 30 cm) of their predecessors.

  6. Overprinting faulting mechanisms during the development of multiple fault sets in sandstone, Chimney Rock fault array, Utah, USA

    NASA Astrophysics Data System (ADS)

    Davatzes, Nicholas C.; Aydin, Atilla; Eichhubl, Peter

    2003-02-01

    The deformation mechanisms producing the Chimney Rock normal fault array (San Rafael Swell, Utah, USA) are identified from detailed analyses of the structural components of the faults and their architecture. Faults in this area occur in four sets, with oppositely dipping fault pairs striking ENE and WNW. The ENE-striking faults initially developed by formation of deformation bands and associated slip surfaces (deformation mechanism 1). After deformation band formation ceased, three sets of regional joints developed. The oldest two sets of the regional joints, including the most prominent WNW-striking set, were sheared. Localized deformation due to shearing of the WNW-striking regional joints formed WNW-striking map-scale normal faults. The formation mechanism of these faults can be characterized by the shearing of joints that produces splay joints, breccia, and eventually a core of fault rock (deformation mechanism 2). During this second phase of faulting, the ENE-striking faults were reactivated by shear across the slip surfaces and shearing of ENE-striking joints, producing localized splay joints and breccia (similar to deformation mechanism 2) superimposed onto a dense zone of deformation bands from the first phase. We found that new structural components are added to a fault zone as a function of increasing offset for both deformation mechanisms. Conversely, we estimated the magnitude of slip partitioned by the two mechanisms using the fault architecture and the component structures. Our analyses demonstrate that faults in a single rock type and location, with similar length and offset, but forming at different times and under different loading conditions, can have fundamentally different fault architecture. The impact of each mechanism on petrophysical properties of the fault is different. Deformation mechanism 1 produces deformation bands that can act as fluid baffles, whereas deformation mechanism 2 results in networks of joints and breccia that can act as preferred fluid conduits. Consequently, a detailed analysis of fault architecture is essential for establishing an accurate tectonic history, deformation path, and hydraulic properties of a faulted terrain.

  7. Building the GEM Faulted Earth database

    NASA Astrophysics Data System (ADS)

    Litchfield, N. J.; Berryman, K. R.; Christophersen, A.; Thomas, R. F.; Wyss, B.; Tarter, J.; Pagani, M.; Stein, R. S.; Costa, C. H.; Sieh, K. E.

    2011-12-01

    The GEM Faulted Earth project aims to build a global active fault and seismic source database with a common set of strategies, standards, and formats, to be placed in the public domain. Faulted Earth is one of five global hazard components of the Global Earthquake Model (GEM) project. A key early phase of the GEM Faulted Earth project is to build a database which is flexible enough to capture existing and variable (e.g., from slow interplate faults to fast subduction interfaces) global data, and yet is not too onerous for entering new data from areas where existing databases are not available. The purpose of this talk is to give an update on progress in building the GEM Faulted Earth database. The database design conceptually has two layers, (1) active faults and folds, and (2) fault sources, and automated processes are being defined to generate fault sources. These include the calculation of moment magnitude using a user-selected magnitude-length or magnitude-area scaling relation, and the calculation of recurrence interval from displacement divided by slip rate, where displacement is calculated from moment and moment magnitude. The fault-based earthquake sources defined by the Faulted Earth project will then be rationalised with those defined by the other GEM global components. A web-based tool is being developed for entering individual faults and folds, and fault sources; it includes capture of additional information collected at individual sites, as well as descriptions of the data sources. GIS shapefiles of individual faults and folds, and fault sources, will also be able to be uploaded. A data dictionary explaining the database design rationale, definitions of the attributes and formats, and a tool user guide are also being developed. Existing national databases will be uploaded outside of the fault compilation tool, through a process of mapping common attributes between the databases. Regional workshops are planned for compilation in areas where existing databases are not available or require further population, and will include training on using the fault compilation tool. The tool is also envisaged as an important legacy of the GEM Faulted Earth project, to be available for use beyond the end of the 2-year project.
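
    The automated fault-source calculations described above (moment magnitude from a user-selected scaling relation, average displacement from seismic moment, and recurrence interval from displacement divided by slip rate) can be sketched as follows. The scaling coefficients and rigidity below are illustrative placeholders, not the relations chosen by the project.

```python
import math

MU = 3.0e10  # shear modulus (Pa), a commonly assumed crustal value

def moment_magnitude_from_length(length_km, a=5.0, b=1.22):
    """Magnitude-length scaling of the form Mw = a + b*log10(L); the
    coefficients are placeholders standing in for a user-selected relation."""
    return a + b * math.log10(length_km)

def recurrence_interval(length_km, width_km, slip_rate_mm_yr):
    mw = moment_magnitude_from_length(length_km)
    m0 = 10 ** (1.5 * mw + 9.05)                 # seismic moment (N*m), Hanks & Kanamori
    area_m2 = (length_km * 1e3) * (width_km * 1e3)
    displacement_m = m0 / (MU * area_m2)         # average coseismic displacement
    return mw, displacement_m, displacement_m / (slip_rate_mm_yr * 1e-3)  # years

mw, d, t = recurrence_interval(length_km=60.0, width_km=15.0, slip_rate_mm_yr=2.0)
print(f"Mw = {mw:.2f}, displacement = {d:.2f} m, recurrence ~ {t:.0f} yr")
```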

  8. Fault tolerant operation of switched reluctance machine

    NASA Astrophysics Data System (ADS)

    Wang, Wei

    The energy crisis and environmental challenges have driven industry towards more energy-efficient solutions. With nearly 60% of electricity consumed by various electric machines in the industry sector, advancement in the efficiency of the electric drive system is of vital importance. An adjustable speed drive system (ASDS) provides excellent speed regulation and dynamic performance as well as dramatically improved system efficiency compared with conventional motors without electronic drives. Industry has witnessed tremendous growth in ASDS applications, not only as a driving force but also as an electric auxiliary system replacing bulky and low-efficiency auxiliary hydraulic and mechanical systems. With the vast penetration of ASDS, its fault-tolerant operation capability is more widely recognized as an important feature of drive performance, especially for aerospace, automotive, and other industrial drive applications demanding high reliability. The switched reluctance machine (SRM), a low-cost, highly reliable electric machine with fault-tolerant operation capability, has drawn substantial attention in the past three decades. Nevertheless, SRM is not free of faults. Certain faults such as converter faults, sensor faults, winding shorts, eccentricity and position sensor faults are commonly shared among all ASDS. In this dissertation, a thorough understanding of various faults and their influence on the transient and steady-state performance of SRM is developed via simulation and experimental study, providing the necessary knowledge for fault detection and post-fault management. Lumped parameter models are established for fast real-time simulation and drive control. Based on the behavior of the faults, a fault detection scheme is developed for the purpose of fast and reliable fault diagnosis. In order to improve the SRM power and torque capacity under faults, maximum torque per ampere excitation is conceptualized and validated through theoretical analysis and experiments. With the proposed optimal waveform, torque production is greatly improved under the same root mean square (RMS) current constraint. Additionally, position sensorless operation methods under phase faults are investigated to account for the combination of physical position sensor and phase winding faults. A comprehensive solution for position sensorless operation under single and multiple phase faults is proposed and validated through experiments. Continuous position sensorless operation with seamless transition between various numbers of phase faults is achieved.

  9. LIDAR Measurements of Fault Roughness

    NASA Astrophysics Data System (ADS)

    Sagy, A.; Axen, G. J.; Brodsky, E. E.

    2005-12-01

    Fault zones contain several discrete slip surfaces that accommodate most of the displacement across the zone. The geometrical properties of a given slip surface and the geometrical relation between surfaces can control the friction and deformation properties. We present the first measurements of fault surfaces using ground-based LiDAR (Light Detection and Ranging). The laser-based system can measure precise distances over an area hundreds of square meters in extent with individual points spaced as close as 3 mm apart. We can then extract thousands of fault-surface profiles in any direction along the scanned surface. Our measurements of 8 large-scale, natural fault exposures in the Western US suggest the following preliminary results: (1) Not surprisingly, at any measurable wavelength, individual coherent striated slip surfaces are smooth relative to nearby erosional surfaces. For wavelengths of 1 m (which is the scale of slip in large earthquakes), we find that the average asperity height of slip surfaces is 1.8 cm, while that of erosional surfaces is 10 cm. (2) Like previous studies, we find that the wavelength λ is related to the asperity height h by the self-affine scaling h = Cλ^H over a range of wavelengths of 2 cm to a few meters. However, our more precise measurements show that the constants are different than in previous estimates. We find that C ranges between 0.0025 and 0.01, and the exponent H ranges between 0.4 and 0.7. Surfaces with low values of C also have low values of H. (3) An extrapolation of the self-affine relationship with the measured parameters implies that the heights and lengths of asperities have similar dimensions at scales of microns up to tens of microns. The geometry implies that below these scales sliding breaks asperities, while at larger scales the surfaces ride up on each other during sliding. This scale of 10s of microns is consistent with previous laboratory measurements of Dc. (4) In some cases the fault "surface" is an ensemble of striated surfaces at non-uniform orientation, and a single exposure may have non-stationary spectral properties with variations in both H and C (although not Dc) over the 1-10 meter scale. Other surfaces have consistent spectra for nearly 100 m.
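
    The self-affine fit described in point (2) amounts to a straight-line fit in log-log space. The sketch below illustrates the procedure on invented (wavelength, asperity-height) pairs; the numbers are not the study's data.

      import numpy as np

      # Fit h = C * lam**H by linear regression of log10(h) on log10(lam).
      # Invented sample values chosen only to illustrate the fitting step.
      wavelength_m      = np.array([0.02, 0.05, 0.1, 0.3, 1.0, 3.0])
      asperity_height_m = np.array([0.0008, 0.0013, 0.002, 0.0039, 0.008, 0.0155])

      H, logC = np.polyfit(np.log10(wavelength_m), np.log10(asperity_height_m), 1)
      C = 10 ** logC
      print(f"C = {C:.4f}, H = {H:.2f}")  # slope of the log-log fit is the exponent H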

  10. West Coast Tsunami: Cascadia's Fault?

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Bernard, E. N.; Titov, V.

    2013-12-01

    The tragedies of the 2004 Sumatra and 2011 Japan tsunamis exposed the limits of our knowledge in preparing for devastating tsunamis. The 1,100-km Pacific coastline of North America has tectonic and geological settings similar to Sumatra and Japan. The geological records unambiguously show that the Cascadia fault has caused devastating tsunamis in the past, and this geological process will cause tsunamis in the future. Hypotheses for the rupture process of the Cascadia fault include a long rupture (M9.1) along the entire fault, short ruptures (M8.8 - M9.1) affecting only a segment of the coastline, or a series of lesser events of M8+. Recent studies also indicate an increasing probability of a smaller rupture occurring at the southern end of the Cascadia fault. Some of these hypotheses were implemented in the development of tsunami evacuation maps in Washington and Oregon. However, the existing maps do not reflect the tsunami impact implied by the most recent updates regarding the Cascadia fault rupture process. The most recent study by Wang et al. (2013) suggests a rupture pattern of high-slip patches separated by low-slip areas, constrained by estimates of coseismic subsidence based on microfossil analyses. Since this study infers that a Tohoku-type earthquake could strike the Cascadia subduction zone, how would such a tsunami affect tsunami hazard assessment and planning along the Pacific coast of North America? The rapid development of computing technology allows us to examine the tsunami impact of the above hypotheses using high-resolution models covering the Pacific Northwest. Using the slab model of McCrory et al. (2012) (part of the USGS Slab 1.0 model) for the Cascadia earthquake, we tested the above hypotheses to assess the tsunami hazards along the entire U.S. West Coast. The modeled results indicate these hypothetical scenarios may cause runup heights very similar to those observed along Japan's coastline during the 2011 Japan tsunami. Compared to a long rupture, a Tohoku-type rupture may cause more serious impact along the adjacent coastline, independent of where it occurs in the Cascadia subduction zone. These findings imply that the Cascadia tsunami hazard may be greater than originally thought.

  11. Deciphering lake and maar geometries from seismic refraction and reflection surveys in Laguna Potrok Aike (southern Patagonia, Argentina)

    NASA Astrophysics Data System (ADS)

    Gebhardt, A. C.; De Batist, M.; Niessen, F.; Anselmetti, F. S.; Ariztegui, D.; Haberzettl, T.; Kopsch, C.; Ohlendorf, C.; Zolitschka, B.

    2011-04-01

    Laguna Potrok Aike is a bowl-shaped maar lake in southern Patagonia, Argentina, with a present mean diameter of ~ 3.5 km and a maximum water depth of ~ 100 m. Seismic surveys were carried out between 2003 and 2005 in order to gain deeper knowledge of the lake sediments and the deeper basin geometry. A raytracing model of the Laguna Potrok Aike basin was calculated based on refraction data, while sparker data were additionally used to identify the crater-wall discordance and thus the upper outer shape of the maar structure. The combined data sets show a rather steep funnel-shaped structure embedded in the surrounding Santa Cruz Formation that resembles other well-known maar structures. The infill consists of up to 370 m of lacustrine sediments, probably underlain by volcanoclastic sediments of unknown thickness. The lacustrine sediments are subdivided into two sub-units: (a) the upper, with seismic velocities between 1500 and 1800 m s^-1, interpreted as unconsolidated muds, and (b) the lower, with higher seismic velocities of up to 2350 m s^-1, interpreted as lacustrine sediments intercalated with mass transport deposits of different lithology and/or coarser-grained sediments. The postulated volcanoclastic layer has acoustic velocities of > 2400 m s^-1. The lake sediments were recently drilled within the PASADO project in the framework of the International Continental Scientific Drilling Program (ICDP). Cores penetrated lacustrine unconsolidated sediments down to a depth of ~ 100 m below the lake floor. This minimum thickness for the unconsolidated, low-velocity lithologies is in good agreement with our raytracing model.
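
    As a rough illustration of how refraction travel times constrain layer velocity and thickness, the two-layer flat-interface formulas below can be evaluated. The velocities are taken from the ranges quoted above, while the layer thickness is an assumed example value, not a result of the survey.

      import math

      # Two-layer refraction travel times (flat interface): direct wave t = x/v1,
      # head wave t = x/v2 + 2*h*cos(theta_c)/v1, with sin(theta_c) = v1/v2.
      # v1, v2 taken from the velocity ranges above; h is an assumed thickness.
      v1, v2, h = 1600.0, 2400.0, 200.0     # m/s, m/s, m

      theta_c = math.asin(v1 / v2)          # critical angle
      intercept = 2 * h * math.cos(theta_c) / v1

      def direct(x):    return x / v1
      def refracted(x): return x / v2 + intercept

      # Crossover distance where the head wave overtakes the direct wave:
      x_cross = 2 * h * math.sqrt((v2 + v1) / (v2 - v1))
      print(f"crossover at {x_cross:.0f} m")
      for x in (100.0, x_cross, 2000.0):
          print(f"x = {x:6.0f} m: direct {direct(x):.3f} s, refracted {refracted(x):.3f} s")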

  12. Lateglacial and Holocene climatic changes in south-eastern Patagonia inferred from carbonate isotope records of Laguna Potrok Aike (Argentina)

    NASA Astrophysics Data System (ADS)

    Oehlerich, M.; Mayr, C.; Gussone, N.; Hahn, A.; Hölzl, S.; Lücke, A.; Ohlendorf, C.; Rummel, S.; Teichert, B. M. A.; Zolitschka, B.

    2015-04-01

    First results of strontium, calcium, carbon and oxygen isotope analyses of bulk carbonates from a 106 m long sediment record of Laguna Potrok Aike, located in southern Patagonia, are presented. Morphological and isotopic investigations of μm-sized carbonate crystals in the sediment reveal an endogenic origin for the entire Holocene. For this time period the calcium carbonate record of Laguna Potrok Aike turned out to be most likely ikaite-derived. As ikaite precipitation in nature has only been observed in a narrow temperature window between 0 and 7 °C, the respective carbonate oxygen isotope ratios serve as a proxy of hydrological variations rather than of palaeotemperatures. We suggest that oxygen isotope ratios are sensitive to changes in the lake water balance induced by intensity variations of the Southern Hemisphere Westerlies (SHW) and discuss the role of this wind belt as a driver of climate change in southern South America. In combination with other proxy records the evolution of westerly wind intensities is reconstructed. Our data suggest that weak SHW prevailed during the Lateglacial and the early Holocene, interrupted by an interval of strengthened Westerlies between 13.4 and 11.3 ka cal BP. Wind strength increased at 9.2 ka cal BP and intensified significantly until 7.0 ka cal BP. Subsequently, the wind intensity diminished and stabilised to conditions similar to the present day after a period of reduced evaporation during the "Little Ice Age". Strontium isotopes (87Sr/86Sr ratios) were identified as a potential lake-level indicator and point to a lowering from overflow conditions during the Glacial (~17 ka cal BP) to lowest lake levels around 8 ka cal BP. Thereafter the strontium isotope curve resembles the lake-level curve, which rises stepwise until the "Little Ice Age". The variability of the Ca isotope composition of the sediment reflects changes in the Ca budget of the lake, indicating higher degrees of Ca utilisation during the period with the lowest lake level.

  13. Recovery of floral and faunal communities after placement of dredged material on seagrasses in Laguna Madre, Texas

    NASA Astrophysics Data System (ADS)

    Sheridan, P.

    2004-03-01

    The objectives of this project were to determine how long alterations in habitat characteristics and in use by fishery and forage organisms were detectable at dredged material placement sites in Laguna Madre, Texas. Water, sediment, seagrass, benthos, and nekton characteristics were measured and compared among newly deposited sediments and nearby and distant seagrasses each fall and spring over three years. Over this period, 75% of the estimated total surface area of the original deposits was either re-vegetated by seagrass or dispersed by winds and currents. Differences in water and sediment characteristics among habitat types were mostly detected early in the study. There were signs of steady seagrass re-colonization in the latter half of the study period, and mean seagrass coverage of deposits had reached 48% approximately three years after dredging. Clovergrass Halophila engelmannii was the initial colonist, but shoalgrass Halodule wrightii predominated after about one year. Densities of annelids and non-decapod crustaceans were generally significantly greater in nearby and distant seagrass habitats than in dredged material habitat, whereas densities of molluscs were not significantly related to habitat type. Nekton (fish and decapod) densities were almost always significantly greater in the two seagrass habitats than in dredged material deposits. Benthos and nekton communities in dredged material deposits were distinct from those in seagrass habitats. Recovery from dredged material placement was nearly complete for water column and sediment components after 1.5 to 3 years, but recovery of seagrasses, benthos, and nekton was predicted to take 4 to 8 years. The current 2- to 5-year dredging cycle virtually ensures that there is no time for ecosystem recovery before the area is disturbed again. The only way to ensure permanent protection of the high primary and secondary productivity of seagrass beds in Laguna Madre from the acute and chronic effects of maintenance dredging, while ensuring navigation capability, is to remove dredged materials from the shallow waters of the ecosystem.

  14. Sr Isotopes and Migration of Prairie Mammoths (Mammuthus columbi) from Laguna de las Cruces, San Luis Potosi, Mexico

    NASA Astrophysics Data System (ADS)

    Solis-Pichardo, G.; Perez-Crespo, V.; Schaaf, P. E.; Arroyo-Cabrales, J.

    2011-12-01

    Assessing the mobility of ancient humans is a major issue for anthropologists. For more than 25 years, Sr isotopes have been used as a resourceful tracer tool in this context. A comparison of the 87Sr/86Sr ratios found in tooth enamel and in bone is performed to determine whether the human skeletal remains belonged to a local individual or a migrant. Sr in bone approximately reflects the isotopic composition of the geological region where the person lived before death, whereas the Sr isotopic system in tooth enamel is thought to remain a closed system and thus conserves the isotope ratio acquired during childhood. Sr isotope ratios are obtained from the geologic substrate and its overlying soil, from which an individual got hold of food and water; these ratios are in turn incorporated into the dentition and skeleton during tissue formation. In previous studies from Teotihuacan, Mexico, we have shown that a three-step leaching procedure on tooth enamel samples is important to ensure that only the biogenic Sr isotope contribution is analyzed. The same Sr isotopic tools can be applied to ancient animal migration patterns. To determine or discard the mobility of prairie mammoths (Mammuthus columbi) found at Laguna de las Cruces, San Luis Potosí, México, the leaching procedure was applied to six molar samples from several fossil remains. The initial hypothesis was to use 87Sr/86Sr values to verify whether the mammoth population was a mixture of individuals from various herds and, further, by comparing their Sr isotopic composition with that of plants and soils, to confirm their geographic origin. The dissimilar Sr results point to two distinct mammoth groups. The mammoth population from Laguna de las Cruces was therefore not a family unit, because it was composed of individuals originating from different localities. Only one individual was identified as local. Others could have walked as much as 100 km to find food and water sources.

  15. Neotectonics of Panama. I. Major fault systems

    SciTech Connect

    Corrigan, J.; Mann, P.

    1985-01-01

    The direction and rate of relative plate motion across the Caribbean-Nazca boundary in Panama are poorly known. This lack of understanding can be attributed to diffuse seismicity, a lack of well-constrained focal mechanisms from critical areas, and dense tropical vegetation. In order to better understand the relation of plate motions to major fault systems in Panama, the authors have integrated geologic, remote sensing, earthquake and UTIG marine seismic reflection data. Three areas of recent faulting can be distinguished in Panama and its shelf areas: ZONE 1 in eastern Panama consists of a 70 km wide zone of 3 discrete left-lateral strike-slip faults (Sanson Hills, Jaque River, Sambu) which strike N40W and can be traced as continuous features for distances of 100-150 km; ZONE 2 in central Panama consists of a diffuse zone of discontinuous normal(?) faults which range in strike from N40E to N70E; ZONE 3 in western Panama consists of a 60 km wide zone of 2 discrete, left-lateral(?) strike-slip faults which strike N60W and can be traced as continuous features for distances of 150 km. ZONE 3 faults appear to be continuous with faults bounding the forearc Térraba Trough of Costa Rica. The relation of faults of ZONE 3 to faults of ZONE 2 and to a major fault bounding the southern Panama shelf is unclear.

  16. A Quaternary Fault Database for Central Asia

    NASA Astrophysics Data System (ADS)

    Mohadjer, S.; Ehlers, T. A.; Bendick, R.; Stübner, K.; Strube, T.

    2015-09-01

    Earthquakes represent the highest risk in terms of potential loss of lives and economic damage for Central Asian countries. Knowledge of fault location and behavior is essential in calculating and mapping seismic hazard. Previous efforts in compiling fault information for Central Asia have generated a large amount of data that are published in limited-access journals with no digital maps publicly available, or are limited in their description of important fault parameters such as slip rates. This study builds on previous work by improving access to fault information through a web-based interactive map and an online database with search capabilities that allow users to organize data by different fields. The data presented in this compilation include fault location, its geographic, seismic and structural characteristics, short descriptions, narrative comments and references to peer-reviewed publications. The interactive map displays 1196 fault segments and 34 000 earthquake locations on a shaded-relief map. The online database contains attributes for 122 faults mentioned in the literature, with Quaternary and geodetic slip rates reported for 38 and 26 faults respectively, and earthquake history reported for 39 faults. This work has implications for seismic hazard studies in Central Asia as it summarizes important fault parameters, and can reduce earthquake risk by enhancing public access to information. It also allows scientists and hazard assessment teams to identify structures and regions where data gaps exist and future investigations are needed.
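
    An attribute-record structure like the one sketched below illustrates how such searches can be organized; the field names and values are hypothetical, not the database's actual schema.

      # Illustrative only: field names and values are hypothetical, not the
      # Central Asia database's schema.
      faults = [
          {"name": "Fault A", "length_km": 120, "quaternary_slip_rate_mm_yr": 2.5},
          {"name": "Fault B", "length_km": 45,  "quaternary_slip_rate_mm_yr": None},
          {"name": "Fault C", "length_km": 210, "quaternary_slip_rate_mm_yr": 6.0},
      ]

      def search(records, min_slip_rate=None, min_length_km=None):
          """Return records matching the given attribute filters (None = no constraint)."""
          out = []
          for r in records:
              if min_slip_rate is not None and (r["quaternary_slip_rate_mm_yr"] or 0) < min_slip_rate:
                  continue
              if min_length_km is not None and r["length_km"] < min_length_km:
                  continue
              out.append(r)
          return out

      print([r["name"] for r in search(faults, min_slip_rate=2.0, min_length_km=100)])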

  17. Early weakening processes inside thrust fault

    NASA Astrophysics Data System (ADS)

    Lacroix, B.; Tesei, T.; Oliot, E.; Lahfid, A.; Collettini, C.

    2015-07-01

    Observations from deep boreholes at several locations worldwide, laboratory measurements of frictional strength on quartzo-feldspathic materials, and earthquake focal mechanisms indicate that crustal faults are strong (apparent friction μ ≥ 0.6). However, friction experiments on phyllosilicate-rich rocks and some geophysical data have demonstrated that some major faults are considerably weaker. This weakness is commonly considered to be characteristic of mature faults in which rocks are altered by prolonged deformation and fluid-rock interaction (e.g., the San Andreas, Zuccale, and Nankai Faults). In contrast, in this study we document fault weakening occurring along a marly shear zone in its infancy (<30 m displacement). Geochemical mass balance calculations and microstructural data show that a massive calcite departure (up to 50 vol %) from the fault rocks facilitated the concentration and reorganization of weak phyllosilicate minerals along the shear surfaces. Friction experiments carried out on intact foliated samples of host marls and fault rocks demonstrate that this structural reorganization led to significant fault weakening and that the incipient structure has strength and slip behavior comparable to those of the major weak faults previously documented. These results indicate that some faults, especially those nucleating in lithologies rich in both clays and high-solubility minerals (such as calcite), may experience rapid mineralogical and structural alteration and become weak even in the early stages of their activity.

  18. A Log-Scaling Fault Tolerant Agreement Algorithm for a Fault Tolerant MPI

    SciTech Connect

    Hursey, Joshua J; Naughton, III, Thomas J; Vallee, Geoffroy R; Graham, Richard L

    2011-01-01

    The lack of fault tolerance is becoming a limiting factor for application scalability in HPC systems. MPI does not provide standardized fault tolerance interfaces and semantics. The MPI Forum's Fault Tolerance Working Group is proposing a collective fault tolerant agreement algorithm for the next MPI standard. Such algorithms play a central role in many fault tolerant applications. This paper combines a log-scaling two-phase commit agreement algorithm with a reduction operation to provide the necessary functionality for the new collective without any additional messages. Error handling mechanisms are described that preserve the fault tolerance properties while maintaining overall scalability.
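
    The agreement semantics can be pictured as a reduction over per-process success flags, as in the mpi4py sketch below; this only illustrates the collective's intent, not the paper's log-scaling two-phase commit algorithm.

      from mpi4py import MPI

      # Sketch: each rank reports whether its local step succeeded; a logical-AND
      # reduction gives every rank the same agreed-upon outcome. (Illustration of
      # agreement semantics only, not the proposed fault tolerant algorithm.)
      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      local_ok = True        # stand-in for the result of local work / health check
      all_ok = comm.allreduce(local_ok, op=MPI.LAND)

      if all_ok:
          print(f"rank {rank}: group agreed - commit")
      else:
          print(f"rank {rank}: group agreed - roll back / recover")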

  19. Perspective View, San Andreas Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The prominent linear feature straight down the center of this perspective view is the San Andreas Fault in an image created with data from NASA's shuttle Radar Topography Mission (SRTM), which will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This segment of the fault lies west of the city of Palmdale, California, about 100 kilometers (about 60 miles) northwest of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. This area is at the junction of two large mountain ranges, the San Gabriel Mountains on the left and the Tehachapi Mountains on the right. Quail Lake Reservoir sits in the topographic depression created by past movement along the fault. Interstate 5 is the prominent linear feature starting at the left edge of the image and continuing into the fault zone, passing eventually over Tejon Pass into the Central Valley, visible at the upper left.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observation Systems (EROS) Data Center, Sioux Falls, South Dakota.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense (DoD), and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC.

    Size: Varies in a perspective view. Location: 34.78 deg. North lat., 118.75 deg. West lon. Orientation: Looking northwest. Original Data Resolution: SRTM and Landsat, 30 meters (99 feet). Date Acquired: February 16, 2000.

  20. Off-fault tip splay networks: A genetic and generic property of faults indicative of their long-term propagation

    NASA Astrophysics Data System (ADS)

    Perrin, Clément; Manighetti, Isabelle; Gaudemer, Yves

    2016-01-01

    We use fault maps and fault propagation evidence available in the literature to examine geometrical relations between parent faults and off-fault splays. The population includes 47 worldwide crustal faults with lengths from millimetres to thousands of kilometres and of different slip modes. We show that fault splays form adjacent to any propagating fault tip, whereas they are absent at non-propagating fault ends. Independent of fault length, slip mode, context, etc., tip splay networks have a similar fan shape widening in the direction of long-term propagation, a similar relative length and width (∼ 30 and ∼ 10% of parent fault length, respectively), and a similar range of mean angles to the parent fault (10-20°). We infer that tip splay networks are a genetic and generic property of faults indicative of their long-term propagation. Their generic geometrical properties suggest they result from a generic off-fault stress distribution at propagating fault ends.

  1. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1990-01-01

    The use of back-to-back, or comparison, testing for regression testing or porting is examined. The efficiency and the cost of the strategy are compared with those of manual and table-driven single-version testing. Some of the key parameters that influence the efficiency and the cost of the approach are the failure identification effort during single-version program testing, the extent of implemented changes, the nature of the regression test data (e.g., random), and the nature of the inter-version failure correlation and fault-masking. The advantages and disadvantages of the technique are discussed, together with some suggestions concerning its practical use.
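
    A minimal sketch of back-to-back (comparison) regression testing: two versions of the same routine are run on identical randomly generated inputs and any disagreement is logged as a potential failure. The two versions and the input generator below are placeholders, not the paper's experimental setup.

      import random

      # Back-to-back testing sketch: run the old and new version of a routine on
      # the same randomly generated inputs and flag any disagreement.
      def version_old(data):          # placeholder "reference" implementation
          return sorted(data)

      def version_new(data):          # placeholder "ported/changed" implementation
          return sorted(data, reverse=False)

      def back_to_back(n_cases=1000, seed=0):
          rng = random.Random(seed)
          failures = []
          for i in range(n_cases):
              data = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
              if version_old(data) != version_new(data):
                  failures.append((i, data))
          return failures

      print(f"{len(back_to_back())} disagreements found")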

  2. Fault segmentation and structural evolution of the frontal Longmen Shan fault zone

    NASA Astrophysics Data System (ADS)

    Chang, C.; Xu, X.; Yuan, R.; Li, K.; Sun, X.; Chen, W.

    2011-12-01

    Field investigations show that the Wenchuan earthquake of 12 May 2008 ruptured two NW-dipping imbricate reverse faults along the Longmen Shan fault zone at the eastern margin of the Tibetan Plateau. The length of the Beichuan-Yingxiu Fault reaches nearly 240 km. Southeast of this fault, smaller displacement occurred along the Guanxian-Jiangyou Fault, which has a length of about 70 km. A 7 km long NW-striking left-lateral reverse fault, the Xiaoyudong Fault, was clearly observed between these two main surface ruptures. This co-seismic surface rupture pattern, involving multiple structures, is one of the most complicated patterns among recent great earthquakes. The surface rupture length is the longest among the co-seismic surface rupture zones ever reported for reverse faulting events. Our detailed field investigations reveal that the surface rupture of the Wenchuan earthquake cascaded through several pre-existing fault segments. The displacement amounts, rupture patterns and stress orientations calculated from fault slickenside striations differ between the segments. Some secondary faults can also be observed between the segments. These faults are partially active and control the development of river terraces and the shape of streams. We suggest that a multi-segment rupture model is a better approximation than a single-segment model for estimating the maximum magnitude of the Longmen Shan fault zone.

  3. Fault geometries in basement-induced wrench faulting under different initial stress states

    NASA Astrophysics Data System (ADS)

    Naylor, M. A.; Mandl, G.; Supesteijn, C. H. K.

    Scaled sandbox experiments were used to generate models for relative ages, dip, strike and three-dimensional shape of faults in basement-controlled wrench faulting. The basic fault sequence runs from early en échelon Riedel shears and splay faults through 'lower-angle' shears to P shears. The Riedel shears are concave upwards and define a tulip structure in cross-section. In three dimensions, each Riedel shear has a helicoidal form. The sequence of faults and three-dimensional geometry are rationalized in terms of the prevailing stress field and Coulomb-Mohr theory of shear failure. The stress state in the sedimentary overburden before wrenching begins has a substantial influence on the fault geometries and on the final complexity of the fault zone. With the maximum compressive stress (σ1) initially parallel to the basement fault (transtension), Riedel shears are only slightly en échelon, sub-parallel to the basement fault, steeply dipping with a reduced helicoidal aspect. Conversely, with σ1 initially perpendicular to the basement fault (transpression), Riedel shears are strongly oblique to the basement fault strike, have lower dips and an exaggerated helicoidal form; the final fault zone is both wide and complex. We find good agreement between the models and both mechanical theory and natural examples of wrench faulting.
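
    As an illustration of the Coulomb-Mohr reasoning invoked above, the expected orientations of the early shears can be estimated from a friction angle; the 30-degree value below is a generic assumption for sand, not a value measured in these experiments.

      import math

      # Coulomb-Mohr prediction of shear orientations in a basement-induced wrench
      # zone, measured from the basement-fault strike. Assumed friction angle phi.
      phi = 30.0                     # degrees, generic value for sand (assumption)

      riedel_R  = phi / 2.0          # synthetic Riedel (R) shears
      riedel_R2 = 90.0 - phi / 2.0   # antithetic (conjugate) R' shears
      p_shear   = -phi / 2.0         # P shears, symmetric to R about the fault strike

      print(f"R ~ {riedel_R:.0f} deg, R' ~ {riedel_R2:.0f} deg, P ~ {p_shear:.0f} deg")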

  4. Networking of Near Fault Observatories in Europe

    NASA Astrophysics Data System (ADS)

    Vogfjörd, Kristín; Bernard, Pascal; Chiraluce, Lauro; Fäh, Donat; Festa, Gaetano; Zulficar, Can

    2014-05-01

    Networking of six European near-fault observatories (NFOs) was established in the FP7 infrastructure project NERA (Network of European Research Infrastructures for Earthquake Risk Assessment and Mitigation). This networking has included sharing of expertise and know-how among the observatories, distribution of analysis tools and access to data. The focus of the NFOs is on research into the active processes of their respective fault zones through acquisition and analysis of multidisciplinary data. These studies include the role of fluids in fault initiation, site effects, derived processes such as earthquake-generated tsunamis and landslides, mapping the internal structure of fault systems and development of automatic early warning systems. The six fault zones are in different tectonic regimes: the South Iceland Seismic Zone (SISZ) in Iceland, the Marmara Sea in Turkey and the Corinth Rift in Greece are at plate boundaries, with strike-slip faulting characterizing the SISZ and the Marmara Sea, while normal faulting dominates in the Corinth Rift. The Alto Tiberina and Irpinia faults, dominated by low- and medium-angle normal faulting, respectively, are in the Apennine mountain range in Italy, and the Valais Region, characterized by both strike-slip and normal faulting, is located in the Swiss Alps. The fault structures range from well-developed long faults, such as in the Marmara Sea, to more complex networks of smaller, bookshelf faults such as in the SISZ. Earthquake hazard in the fault zones ranges from significant to substantial. The Marmara Sea and Corinth Rift are submarine, adding tsunami hazard, and steep slopes and sediment-filled valleys in the Valais give rise to hazards from landslides and liquefaction. Induced seismicity has repeatedly occurred in connection with geothermal drilling and water injection in the SISZ, and active volcanoes flanking the SISZ also give rise to volcanic hazard due to volcano-tectonic interaction. Organization among the NERA NFOs has led to their gaining working-group status in EPOS as the WG on Near Fault Observatories, representing multidisciplinary research on faults and fault zones.

  5. Relationships Between Earthquakes and Mapped Faults

    NASA Astrophysics Data System (ADS)

    Weiser, D. A.

    2011-12-01

    A fundamental question that needs to be addressed by natural hazards studies asks, "Can one use an existing fault map to accurately determine the maximum magnitude of earthquakes in a region?" Currently, fault maps are used to assess local and regional hazards, making them an important tool in determining potential magnitude and shaking for an area. Regressions between magnitude and rupture dimensions are based on post-earthquake observations, but these regressions are employed to estimate magnitude using observations taken before an earthquake has occurred. Since future magnitude estimates are based on the rupture length of a fault, and not the previously mapped fault trace, it is important to understand the relationship between mapped faults and earthquakes. Mapped faults are the foundation of many seismic hazard studies. It is generally assumed that fault traces constrain the location, orientation, and size of future earthquakes. Yet many earthquakes, like the 1992 Landers, CA and 2002 Denali, AK earthquakes, rupture beyond previously mapped traces. Other events, like the 1994 Northridge, CA and the 2010 Christchurch, New Zealand earthquakes, serve as additional evidence that large earthquakes can and do occur off of mapped faults. My study examines the relationship between earthquakes and pre-existing faults, extends Wesnousky's recent fault compilation [2006], and expands Black's [2008] fault-jumping probability models. I update Wesnousky's [2006] data set to include additional earthquakes for which surface rupture maps have recently been published. I add additional parameters and use this new data set to estimate fault-jumping probability. I distinguish between three types of faults: those already mapped at a scale appropriate to hazard studies such as the Uniform California Earthquake Rupture Forecasts of 2007 and 2011; those that could be mapped from available surface imaging like LIDAR and high-resolution optical pictures; and those that have no surface evidence detectable with present technology. I also distinguish between several types of earthquake ruptures: those that stay inside the limits of mapped faults; those that push the limits; those that violate the limits; and those that occur off of mapped faults. I match the fault types with the earthquake types, and make quantitative models of the probability that earthquakes will extend beyond mapped fault traces.

  6. Fault-tolerant dynamic task graph scheduling

    SciTech Connect

    Kurt, Mehmet C.; Krishnamoorthy, Sriram; Agrawal, Kunal; Agrawal, Gagan

    2014-11-16

    In this paper, we present an approach to fault tolerant execution of dynamic task graphs scheduled using work stealing. In particular, we focus on selective and localized recovery of tasks in the presence of soft faults. We elicit from the user the basic task graph structure in terms of successor and predecessor relationships. The work stealing-based algorithm to schedule such a task graph is augmented to enable recovery when the data and meta-data associated with a task get corrupted. We use this redundancy, and the knowledge of the task graph structure, to selectively recover from faults with low space and time overheads. We show that the fault tolerant design retains the essential properties of the underlying work stealing-based task scheduling algorithm, and that the fault tolerant execution is asymptotically optimal when task re-execution is taken into account. Experimental evaluation demonstrates the low cost of recovery under various fault scenarios.
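
    The selective, localized recovery idea can be pictured with a toy task DAG: when one task's stored result is found corrupted, only that task and its affected successors are re-executed from predecessor results. The graph, task bodies and integrity check below are invented; the paper's work stealing-based scheduler is far more general.

      # Toy illustration of selective recovery in a task DAG, not the paper's
      # work-stealing scheduler. Re-execute only the corrupted task and the
      # successors that consumed its result.
      graph = {            # task -> list of predecessor tasks
          "a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"],
      }
      bodies = {           # task -> function of predecessor results
          "a": lambda: 1,
          "b": lambda a: a + 1,
          "c": lambda a: a * 10,
          "d": lambda b, c: b + c,
      }

      results = {}

      def run(task):
          preds = [results[p] for p in graph[task]]
          results[task] = bodies[task](*preds)

      def is_corrupted(task):
          return results.get(task) is None     # stand-in for a real integrity check

      for t in ("a", "b", "c", "d"):           # topological order of this tiny DAG
          run(t)

      results["c"] = None                      # simulate a soft fault corrupting one result
      if is_corrupted("c"):
          run("c")                             # selective, localized recovery
          run("d")                             # re-run only the affected successor

      print(results)                           # {'a': 1, 'b': 2, 'c': 10, 'd': 12}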

  7. Ultrareliable fault-tolerant control systems

    NASA Technical Reports Server (NTRS)

    Webster, L. D.; Slykhouse, R. A.; Booth, L. A., Jr.; Carson, T. M.; Davis, G. J.; Howard, J. C.

    1984-01-01

    It is demonstrated that fault-tolerant computer systems, such as those on the Shuttles, based on redundant, independent operation are a viable alternative in fault-tolerant system design. The ultrareliable fault-tolerant control system (UFTCS) was developed and tested in laboratory simulations of a UH-1H helicopter. UFTCS includes asymptotically stable independent control elements in a parallel, cross-linked system environment. Static redundancy provides the fault tolerance. Polling is performed among the computers, with the results allowing for time-delay channel variations within tight bounds. When compared with laboratory and actual flight data for the helicopter, the probability of a fault in the first 10 hr of flight, given quintuple computer redundancy, was found to be 1 in 290 billion. Two weeks of untended Space Station operations would experience a fault probability of 1 in 24 million. Techniques for avoiding channel divergence problems are identified.
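
    The static-redundancy idea can be pictured as voting across independent channels, as in the sketch below; the channel outputs and tolerance are invented for illustration and this is not the UFTCS polling scheme itself.

      from collections import Counter

      # Static redundancy sketch: take the same command from N independent channels
      # and vote; a single faulty channel is out-voted. All values are invented.
      def majority_vote(channel_outputs, tolerance=1e-3):
          """Cluster near-equal analog outputs and return the value of the largest cluster."""
          buckets = Counter(round(v / tolerance) for v in channel_outputs)
          winner_bucket, _ = buckets.most_common(1)[0]
          members = [v for v in channel_outputs if round(v / tolerance) == winner_bucket]
          return sum(members) / len(members)

      outputs = [0.5021, 0.5020, 0.5022, 0.5021, 9.9999]   # one channel has faulted
      print(majority_vote(outputs))                        # ~0.5021, fault masked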

  8. Probable origin of the Livingston Fault Zone

    NASA Astrophysics Data System (ADS)

    Monroe, Watson H.

    1991-09-01

    Most faulting in the Coastal Plain is high-angle and generally normal, but the faults in the Livingston Fault Zone are all medium-angle reverse faults, forming a series of parallel horsts and grabens. Parallel to the fault zone are a number of phenomena, all leading to the conclusion that the faults result from dissolution of a late Cretaceous salt anticline by fresh groundwater, which then migrated up into the Eutaw and perhaps Tuscaloosa aquifers, causing an anomalous elongated area of highly saline water. The origin of the Livingston Fault Zone and the associated salt water in underlying aquifers are of particular importance at this time in relation to environmental concerns associated with hazardous waste management in the area.

  9. Performance Analysis on Fault Tolerant Control System

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine

    2005-01-01

    In a fault tolerant control (FTC) system, a parameter varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. In this paper, an FTC analysis framework is provided to calculate the upper bound of an induced-L2 norm of an FTC system in the presence of false identification and detection time delay. The upper bound is written as a function of the fault detection time and exponential decay rates, and has been used to determine which FTC law produces less performance degradation (tracking error) due to false identification. The analysis framework is applied to an FTC system of a HiMAT (Highly Maneuverable Aircraft Technology) vehicle. Index terms: fault tolerant control system, linear parameter varying system, HiMAT vehicle.

  10. A new intelligent hierarchical fault diagnosis system

    SciTech Connect

    Huang, Y.C.; Huang, C.L.; Yang, H.T.

    1997-02-01

    As part of a substation-level decision support system, a new intelligent Hierarchical Fault Diagnosis System for on-line fault diagnosis is presented in this paper. The proposed diagnosis system divides the fault diagnosis process into two phases. Using time-stamped information from relays and breakers, phase 1 identifies the possible fault sections through Group Method of Data Handling (GMDH) networks, and phase 2 recognizes the types and detailed situations of the faults identified in phase 1 by using a fast bit-operation logical inference mechanism. The diagnosis system has been verified in practice by testing on a typical Taiwan power secondary transmission system. Test results show that rapid and accurate diagnosis can be obtained, with flexibility and portability for fault diagnosis across diverse substations.
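
    The phase-2 bit-operation inference can be pictured as matching an observed relay/breaker status word against per-section signatures with bitwise operations; the sections, signatures and observation below are invented for illustration only.

      # Toy bit-operation inference: each candidate fault section has a bitmask of
      # the relays/breakers expected to operate; the observed status word is matched
      # against the signatures with bitwise AND. All values here are invented.
      SIGNATURES = {
          "section_1": 0b0000_0110,   # expected relay/breaker pattern for section 1
          "section_2": 0b0001_1000,
          "section_3": 0b1100_0000,
      }

      def infer_sections(observed_status):
          """Return sections whose full expected pattern is present in the observed word."""
          return [s for s, mask in SIGNATURES.items()
                  if observed_status & mask == mask]

      observed = 0b0001_1110          # relays 1-4 operated (invented observation)
      print(infer_sections(observed)) # ['section_1', 'section_2']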

  11. Holocene fault scarps near Tacoma, Washington, USA

    USGS Publications Warehouse

    Sherrod, B.L.; Brocher, T.M.; Weaver, C.S.; Bucknam, R.C.; Blakely, R.J.; Kelsey, H.M.; Nelson, A.R.; Haugerud, R.

    2004-01-01

    Airborne laser mapping confirms that Holocene active faults traverse the Puget Sound metropolitan area, northwestern continental United States. The mapping, which detects forest-floor relief of as little as 15 cm, reveals scarps along geophysical lineaments that separate areas of Holocene uplift and subsidence. Along one such line of scarps, we found that a fault warped the ground surface between A.D. 770 and 1160. This reverse fault, which projects through Tacoma, Washington, bounds the southern and western sides of the Seattle uplift. The northern flank of the Seattle uplift is bounded by a reverse fault beneath Seattle that broke in A.D. 900-930. Observations of tectonic scarps along the Tacoma fault demonstrate that active faulting with associated surface rupture and ground motions pose a significant hazard in the Puget Sound region.

  12. The fault-tolerant multiprocessor computer

    NASA Technical Reports Server (NTRS)

    Smith, T. B., III (editor); Lala, J. H. (editor); Goldberg, J. (editor); Kautz, W. H. (editor); Melliar-Smith, P. M. (editor); Green, M. W. (editor); Levitt, K. N. (editor); Schwartz, R. L. (editor); Weinstock, C. B. (editor); Palumbo, D. L. (editor)

    1986-01-01

    The development and evaluation of fault-tolerant computer architectures and software-implemented fault tolerance (SIFT) for use in advanced NASA vehicles and potentially in flight-control systems are described in a collection of previously published reports prepared for NASA. Topics addressed include the principles of fault-tolerant multiprocessor (FTMP) operation; processor and slave regional designs; FTMP executive, facilities, acceptance-test/diagnostic, applications, and support software; FTMP reliability and availability models; SIFT hardware design; and SIFT validation and verification.

  13. Hydrogen Embrittlement And Stacking-Fault Energies

    NASA Technical Reports Server (NTRS)

    Parr, R. A.; Johnson, M. H.; Davis, J. H.; Oh, T. K.

    1988-01-01

    Embrittlement in Ni/Cu alloys appears related to stacking-fault probabilities. The report describes an attempt to show a correlation between the stacking-fault energy of different Ni/Cu alloys and their susceptibility to hydrogen embrittlement. Such a correlation could lead to a more fundamental understanding and a method of predicting the susceptibility of a given Ni/Cu alloy from stacking-fault energies calculated from X-ray diffraction measurements.

  14. Fault seal analysis: Methodology and case studies

    SciTech Connect

    Badley, M.E.; Freeman, B.; Needham, D.T.

    1996-12-31

    Fault seal can arise from reservoir/non-reservoir juxtaposition or by development of fault rock of high entry-pressure. The methodology for evaluating these possibilities uses detailed seismic mapping and well analysis. A 'first-order' seal analysis involves identifying reservoir juxtaposition areas over the fault surface, using the mapped horizons and a refined reservoir stratigraphy defined by isochores at the fault surface. The 'second-order' phase of the analysis assesses whether the sand-sand contacts are likely to support a pressure difference. We define two lithology-dependent attributes 'Gouge Ratio' and 'Smear Factor'. Gouge Ratio is an estimate of the proportion of fine-grained material entrained into the fault gouge from the wall rocks. Smear Factor methods estimate the profile thickness of a ductile shale drawn along the fault zone during faulting. Both of these parameters vary over the fault surface implying that faults cannot simply be designated 'sealing' or 'non-sealing'. An important step in using these parameters is to calibrate them in areas where across-fault pressure differences are explicitly known from wells on both sides of a fault. Our calibration for a number of datasets shows remarkably consistent results despite their diverse settings (e.g. Brent Province, Niger Delta, Columbus Basin). For example, a Shale Gouge Ratio of c. 20% (volume of shale in the slipped interval) is a typical threshold between minimal across-fault pressure difference and significant seal.
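
    The Shale Gouge Ratio quoted above is conventionally computed as the shale (clay) volume fraction of the beds that have slipped past a point on the fault, expressed as a percentage of the throw. The stratigraphy below is an invented example used only to show the arithmetic.

      # Shale Gouge Ratio sketch: SGR = sum(Vshale_i * dz_i) / throw * 100%,
      # summed over the interval that has slipped past a point on the fault.
      # Bed thicknesses and shale fractions below are invented.
      beds = [
          # (thickness_m, shale_fraction)
          (10.0, 0.15),
          (25.0, 0.25),
          (15.0, 0.15),
      ]
      throw = sum(t for t, _ in beds)        # assume the whole interval slipped past

      sgr = 100.0 * sum(t * v for t, v in beds) / throw
      print(f"SGR = {sgr:.1f}%")             # 20.0% -> at the c. 20% threshold quoted above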

  15. Fault seal analysis: Methodology and case studies

    SciTech Connect

    Badley, M.E.; Freeman, B.; Needham, D.T.

    1996-01-01

    Fault seal can arise from reservoir/non-reservoir juxtaposition or by development of fault rock of high entry-pressure. The methodology for evaluating these possibilities uses detailed seismic mapping and well analysis. A 'first-order' seal analysis involves identifying reservoir juxtaposition areas over the fault surface, using the mapped horizons and a refined reservoir stratigraphy defined by isochores at the fault surface. The 'second-order' phase of the analysis assesses whether the sand-sand contacts are likely to support a pressure difference. We define two lithology-dependent attributes 'Gouge Ratio' and 'Smear Factor'. Gouge Ratio is an estimate of the proportion of fine-grained material entrained into the fault gouge from the wall rocks. Smear Factor methods estimate the profile thickness of a ductile shale drawn along the fault zone during faulting. Both of these parameters vary over the fault surface implying that faults cannot simply be designated 'sealing' or 'non-sealing'. An important step in using these parameters is to calibrate them in areas where across-fault pressure differences are explicitly known from wells on both sides of a fault. Our calibration for a number of datasets shows remarkably consistent results despite their diverse settings (e.g. Brent Province, Niger Delta, Columbus Basin). For example, a Shale Gouge Ratio of c. 20% (volume of shale in the slipped interval) is a typical threshold between minimal across-fault pressure difference and significant seal.

  16. Air conditioner response to transmission faults

    SciTech Connect

    Shaffer, J.W.

    1997-05-01

    This paper describes two multi-phase fault events which occurred during periods of high air conditioning use. There was a significant loss of load in these events, which is attributed to air conditioner motor protection. The overall response of the transmission system is simulated using induction motor models based on the characteristics of a typical residential air conditioner compressor motor. The sensitivity to factors such as fault location, fault duration and excitation system performance is also investigated.

  17. 31 CFR 29.522 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance: Treasury 1 2012-07-01 2012-07-01 false Fault. 29.522 Section 29.522 Money... Overpayments 29.522 Fault. (a) General rule. A debtor is considered to be at fault if he or she, or any other... requirement. (3) The following factors may affect the decision as to whether the debtor is or is not at...

  18. 31 CFR 29.522 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance: Treasury 1 2014-07-01 2014-07-01 false Fault. 29.522 Section 29.522 Money... Overpayments 29.522 Fault. (a) General rule. A debtor is considered to be at fault if he or she, or any other... requirement. (3) The following factors may affect the decision as to whether the debtor is or is not at...

  19. 31 CFR 29.522 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance: Treasury 1 2013-07-01 2013-07-01 false Fault. 29.522 Section 29.522 Money... Overpayments 29.522 Fault. (a) General rule. A debtor is considered to be at fault if he or she, or any other... requirement. (3) The following factors may affect the decision as to whether the debtor is or is not at...

  20. Fault Zone Guided Wave generation on the locked, late interseismic Alpine Fault, New Zealand

    NASA Astrophysics Data System (ADS)

    Eccles, J. D.; Gulley, A. K.; Malin, P. E.; Boese, C. M.; Townend, J.; Sutherland, R.

    2015-07-01

    Fault Zone Guided Waves (FZGWs) have been observed for the first time within New Zealand's transpressional continental plate boundary, the Alpine Fault, which is late in its typical seismic cycle. Ongoing study of these phases provides the opportunity to monitor interseismic conditions in the fault zone. Distinctive dispersive seismic codas (~7-35 Hz) have been recorded on shallow borehole seismometers installed within 20 m of the principal slip zone. Near the central Alpine Fault, known for low background seismicity, FZGW-generating microseismic events are located beyond the catchment-scale partitioning of the fault indicating lateral connectivity of the low-velocity zone immediately below the near-surface segmentation. Initial modeling of the low-velocity zone indicates a waveguide width of 60-200 m with a 10-40% reduction in S wave velocity, similar to that inferred for the fault core of other mature plate boundary faults such as the San Andreas and North Anatolian Faults.

  1. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1993-01-01

    Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide an additional theoretical and empirical basis for estimating the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.

  2. Identifiability of Additive Actuator and Sensor Faults by State Augmentation

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh; Gonzalez, Oscar R.; Upchurch, Jason M.

    2014-01-01

    A class of fault detection and identification (FDI) methods for bias-type actuator and sensor faults is explored in detail from the point of view of fault identifiability. The methods use state augmentation along with banks of Kalman-Bucy filters for fault detection, fault pattern determination, and fault value estimation. A complete characterization of conditions for identifiability of bias-type actuator faults, sensor faults, and simultaneous actuator and sensor faults is presented. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have unknown biases. The fault identifiability conditions are demonstrated via numerical examples. The analytical and numerical results indicate that caution must be exercised to ensure fault identifiability for different fault patterns when using such methods.
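
    The state-augmentation idea can be sketched for a scalar plant with an additive actuator bias: the unknown bias is appended to the state and estimated alongside it by an ordinary discrete-time Kalman filter. The system matrices and noise levels below are invented for illustration; the paper itself uses banks of Kalman-Bucy filters and a full identifiability analysis.

      import numpy as np

      # State augmentation sketch: scalar plant x+ = a*x + b*(u + f) with constant
      # actuator bias f. Augmented state z = [x, f]; a linear Kalman filter then
      # estimates the bias alongside the state. All numbers are invented.
      rng = np.random.default_rng(0)
      a, b, f_true = 0.95, 0.5, 1.2

      A = np.array([[a, b], [0.0, 1.0]])     # bias modeled as a constant extra state
      B = np.array([[b], [0.0]])
      H = np.array([[1.0, 0.0]])             # only x is measured
      Q = np.diag([1e-4, 1e-6])              # process noise (tiny drift on the bias)
      R = np.array([[1e-2]])                 # measurement noise covariance

      z_hat = np.zeros((2, 1))
      P = np.eye(2)
      x = 0.0

      for k in range(200):
          u = np.sin(0.1 * k)
          x = a * x + b * (u + f_true) + 0.01 * rng.standard_normal()
          y = x + 0.1 * rng.standard_normal()

          # predict
          z_hat = A @ z_hat + B * u
          P = A @ P @ A.T + Q
          # update
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          z_hat = z_hat + K * (y - (H @ z_hat).item())
          P = (np.eye(2) - K @ H) @ P

      print(f"estimated actuator bias ~ {z_hat[1, 0]:.2f} (true value {f_true})")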

  3. Distributed bearing fault diagnosis based on vibration analysis

    NASA Astrophysics Data System (ADS)

    Dolenc, Boštjan; Boškoski, Pavle; Juričić, Đani

    2016-01-01

    Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The vibrational patterns generated are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. This is evaluated on several bearings with naturally born distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations in fault-free, localized and distributed fault conditions form clearly separable clusters, thus enabling diagnosis.
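
    One common way to form the envelope spectra compared above is via the Hilbert transform. The synthetic signal below is invented and only illustrates the processing chain (impacts at a fault frequency exciting a resonance), not the paper's measurements.

      import numpy as np
      from scipy.signal import hilbert

      # Envelope spectrum sketch: band-limited impacts at a fault frequency show up
      # as lines in the spectrum of the signal envelope. Synthetic, invented signal.
      fs = 20_000                      # sampling rate, Hz
      t = np.arange(0, 1.0, 1 / fs)
      fault_freq = 87.0                # hypothetical bearing fault frequency, Hz
      carrier = 3_000.0                # resonance excited by the impacts, Hz

      impacts = (np.sin(2 * np.pi * fault_freq * t) > 0.999).astype(float)
      ring = np.sin(2 * np.pi * carrier * t[:200]) * np.exp(-t[:200] * 800)
      signal = np.convolve(impacts, ring, mode="same")
      signal += 0.05 * np.random.default_rng(0).standard_normal(len(t))

      envelope = np.abs(hilbert(signal))               # demodulate
      spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
      freqs = np.fft.rfftfreq(len(envelope), 1 / fs)

      peak = freqs[np.argmax(spectrum[freqs < 500])]   # inspect the low-frequency lines
      print(f"dominant envelope line near {peak:.0f} Hz (expected ~{fault_freq} Hz)")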

  4. Fault rheology beyond frictional melting

    PubMed Central

    Lavallée, Yan; Hirose, Takehiro; Kendrick, Jackie E.; Hess, Kai-Uwe; Dingwell, Donald B.

    2015-01-01

    During earthquakes, comminution and frictional heating both contribute to the dissipation of stored energy. With sufficient dissipative heating, melting processes can ensue, yielding the production of frictional melts or “pseudotachylytes.” It is commonly assumed that the Newtonian viscosities of such melts control subsequent fault slip resistance. Rock melts, however, are viscoelastic bodies, and, at high strain rates, they exhibit evidence of a glass transition. Here, we present the results of high-velocity friction experiments on a well-characterized melt that demonstrate how slip in melt-bearing faults can be governed by brittle fragmentation phenomena encountered at the glass transition. Slip analysis using models that incorporate viscoelastic responses indicates that even in the presence of melt, slip persists in the solid state until sufficient heat is generated to reduce the viscosity and allow remobilization in the liquid state. Where a rock is present next to the melt, we note that wear of the crystalline wall rock by liquid fragmentation and agglutination also contributes to the brittle component of these experimentally generated pseudotachylytes. We conclude that in the case of pseudotachylyte generation during an earthquake, slip even beyond the onset of frictional melting is not controlled merely by viscosity but rather by an interplay of viscoelastic forces around the glass transition, which involves a response in the brittle/solid regime of these rock melts. We warn of the inadequacy of simple Newtonian viscous analyses and call for the application of more realistic rheological interpretation of pseudotachylyte-bearing fault systems in the evaluation and prediction of their slip dynamics. PMID:26124123

  5. Fault rheology beyond frictional melting.

    PubMed

    Lavallée, Yan; Hirose, Takehiro; Kendrick, Jackie E; Hess, Kai-Uwe; Dingwell, Donald B

    2015-07-28

    During earthquakes, comminution and frictional heating both contribute to the dissipation of stored energy. With sufficient dissipative heating, melting processes can ensue, yielding the production of frictional melts or "pseudotachylytes." It is commonly assumed that the Newtonian viscosities of such melts control subsequent fault slip resistance. Rock melts, however, are viscoelastic bodies, and, at high strain rates, they exhibit evidence of a glass transition. Here, we present the results of high-velocity friction experiments on a well-characterized melt that demonstrate how slip in melt-bearing faults can be governed by brittle fragmentation phenomena encountered at the glass transition. Slip analysis using models that incorporate viscoelastic responses indicates that even in the presence of melt, slip persists in the solid state until sufficient heat is generated to reduce the viscosity and allow remobilization in the liquid state. Where a rock is present next to the melt, we note that wear of the crystalline wall rock by liquid fragmentation and agglutination also contributes to the brittle component of these experimentally generated pseudotachylytes. We conclude that in the case of pseudotachylyte generation during an earthquake, slip even beyond the onset of frictional melting is not controlled merely by viscosity but rather by an interplay of viscoelastic forces around the glass transition, which involves a response in the brittle/solid regime of these rock melts. We warn of the inadequacy of simple Newtonian viscous analyses and call for the application of more realistic rheological interpretation of pseudotachylyte-bearing fault systems in the evaluation and prediction of their slip dynamics. PMID:26124123

  6. Acoustic fault injection tool (AFIT)

    NASA Astrophysics Data System (ADS)

    Schoess, Jeffrey N.

    1999-05-01

    On September 18, 1997, Honeywell Technology Center (HTC) successfully completed a three-week flight test of its rotor acoustic monitoring system (RAMS) at Patuxent River Flight Test Center. This flight test was the culmination of an ambitious 38-month proof-of-concept effort directed at demonstrating the feasibility of detecting crack propagation in helicopter rotor components. The program was funded as part of the U.S. Navy's Air Vehicle Diagnostic Systems (AVDS) program. Reductions in Navy maintenance budgets and available personnel have dictated the need to transition from time-based to 'condition-based' maintenance. Achieving this will require new enabling diagnostic technologies. The application of acoustic emission for the early detection of helicopter rotor head dynamic component faults has proven the feasibility of the technology. The flight-test results demonstrated that stress-wave acoustic emission technology can detect signals equivalent to small fatigue cracks in rotor head components and can do so across the rotating articulated rotor head joints and in the presence of other background acoustic noise generated during flight operation. During the RAMS flight test, 12 test flights were flown from which 25 Gbyte of digital acoustic data and about 15 hours of analog flight data recorder (FDR) data were collected from the eight on-rotor acoustic sensors. The focus of this paper is to describe the CH-46 flight-test configuration and present design details about a new innovative machinery diagnostic technology called acoustic fault injection. This technology involves the injection of acoustic sound into machinery to assess health and characterize operational status. The paper will also address the development of the Acoustic Fault Injection Tool (AFIT), which was successfully demonstrated during the CH-46 flight tests.

  7. Faults Discovery By Using Mined Data

    NASA Technical Reports Server (NTRS)

    Lee, Charles

    2005-01-01

    Fault discovery in complex systems consists of model-based reasoning, fault tree analysis, rule-based inference methods, and other approaches. Model-based reasoning builds models of the systems either from mathematical formulations or from experimental models. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model from expert knowledge. These models and methods have one thing in common: they presume some prior conditions. Complex systems often use fault trees to analyze faults. Fault diagnosis, when an error occurs, is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on data fed back from the system, and decisions are made based on threshold values by using fault trees. Since these decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time and to capture the contents of fault trees as the initial state of the trees.
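
    A minimal sketch of the decision-tree idea using scikit-learn is shown below; the telemetry features, values and fault labels are invented placeholders, not ISS data or the paper's actual feature set.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      # Sketch: learn a decision tree that maps telemetry samples to fault classes.
      # Features (pressure, temperature, valve current) and labels are invented.
      X = np.array([
          [14.7, 21.0, 0.50],   # nominal
          [14.6, 22.0, 0.52],   # nominal
          [10.1, 21.5, 0.51],   # pressure fault
          [ 9.8, 23.0, 0.49],   # pressure fault
          [14.5, 21.2, 0.05],   # valve fault
          [14.8, 20.8, 0.02],   # valve fault
      ])
      y = ["nominal", "nominal", "pressure_fault", "pressure_fault",
           "valve_fault", "valve_fault"]

      clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
      print(export_text(clf, feature_names=["pressure", "temperature", "valve_current"]))
      print(clf.predict([[9.9, 21.0, 0.50]]))   # -> ['pressure_fault']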

  8. The mechanics of clay smearing along faults

    NASA Astrophysics Data System (ADS)

    Egholm, D. L.; Clausen, O. R.; Sandiford, M.; Kristensen, M. B.; Korstgård, J. A.

    2008-10-01

    A clay- or shale-rich fault gouge can significantly reduce fault permeability. Therefore, predictions of the volume of clay or shale that may be smeared along a fault trace are important for estimating the fluid connectivity of groundwater and hydrocarbon reservoir systems. Here, we show how fault smears develop spontaneously in layered soil systems with varying friction coefficients, and we present a quantitative dynamic model for such behavior. The model is based on Mohr-Coulomb failure theory, and using discrete element computations, we demonstrate how the model framework can predict the fault smear potential from soil friction angles and layer thicknesses.
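
    As background for the failure theory the model builds on, the following sketch evaluates the Mohr-Coulomb criterion, tau = c + sigma_n * tan(phi), for two layers with contrasting friction angles; the cohesion, friction, and stress values are illustrative assumptions, not parameters from the paper.

      # Minimal Mohr-Coulomb failure check for two soil layers; values are hypothetical.
      import math

      def shear_strength(normal_stress_kpa, cohesion_kpa, friction_angle_deg):
          # Mohr-Coulomb shear strength: tau = c + sigma_n * tan(phi)
          return cohesion_kpa + normal_stress_kpa * math.tan(math.radians(friction_angle_deg))

      layers = {
          "clay": dict(cohesion_kpa=5.0, friction_angle_deg=18.0),
          "sand": dict(cohesion_kpa=0.0, friction_angle_deg=32.0),
      }
      sigma_n = 50.0   # assumed effective normal stress on the fault plane, kPa
      tau = 25.0       # assumed resolved shear stress, kPa

      for name, props in layers.items():
          strength = shear_strength(sigma_n, **props)
          state = "fails (candidate for smearing)" if tau >= strength else "holds"
          print(f"{name}: strength {strength:.1f} kPa -> {state}")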

  9. Paleostresses associated with faults of large offset

    NASA Astrophysics Data System (ADS)

    Wojtal, Steven; Pershing, Jonathan

    In order to test empirically the limitations of paleostress analysis, we used Etchecopar's computer program to compute the orientations and relative magnitudes of paleostress principal values in two southern Appalachian (U.S.A) thrust zones from minor fault and slickenside attitudes. In both thrust zones, faults are closely spaced, many faults have offsets whose magnitudes exceed the distance to adjacent faults of comparable size, and deformation was strongly non-coaxial. While even in the less-deformed thrust zone bulk strains due to fault movement have axial ratios as high as 10:1, nearly 75% of the fault-slickenside pairs in each thrust zone conform with paleostress tensors that indicate: (1) sequential transport-parallel compression, transport-parallel extension and extension oblique to transport; and (2) low resolved shear stresses on the thrusts. Finite strains measured in one thrust zone share a principal plane with the first two tensors, and the inclinations of the paleostress and finite strain principal directions in that plane are consistent with thrust-parallel shearing. Relative magnitudes of paleostress and strain principal values do not correlate well. Moreover, the locations and inferred origins of fault-slickenside pairs inconsistent with paleostress tensors suggest that stresses in rocks between faults were not, as is assumed in paleostress analyses, uniform; this complexity may also occur in faulted rocks where bulk strains are small. Paleostress tensors from rocks with large bulk strains may be viable, but they must be interpreted cautiously.

  10. Applications of Fault Detection in Vibrating Structures

    NASA Technical Reports Server (NTRS)

    Eure, Kenneth W.; Hogge, Edward; Quach, Cuong C.; Vazquez, Sixto L.; Russell, Andrew; Hill, Boyd L.

    2012-01-01

    Structural fault detection and identification remains an area of active research. Solutions to fault detection and identification may be based on subtle changes in the time series history of vibration signals originating from various sensor locations throughout the structure. The purpose of this paper is to document the application of vibration based fault detection methods applied to several structures. Overall, this paper demonstrates the utility of vibration based methods for fault detection in a controlled laboratory setting and limitations of applying the same methods to a similar structure during flight on an experimental subscale aircraft.

  11. Chip level simulation of fault tolerant computers

    NASA Technical Reports Server (NTRS)

    Armstrong, J. R.

    1983-01-01

    Chip level modeling techniques, functional fault simulation, simulation software development, a more efficient, high level version of GSP, and a parallel architecture for functional simulation are discussed.

  12. Fault-tolerant parallel processor

    SciTech Connect

    Harper, R.E.; Lala, J.H.

    1991-06-01

    This paper addresses issues central to the design and operation of an ultrareliable, Byzantine resilient parallel computer. Interprocessor connectivity requirements are met by treating connectivity as a resource that is shared among many processing elements, allowing flexibility in their configuration and reducing complexity. Redundant groups are synchronized solely by message transmissions and receptions, which also provide input data consistency and output voting. Reliability analysis results are presented that demonstrate the reduced failure probability of such a system. Performance analysis results are presented that quantify the temporal overhead involved in executing such fault-tolerance-specific operations. Empirical performance measurements of prototypes of the architecture are presented. 30 refs.
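
    To illustrate the output-voting ingredient mentioned above, the sketch below majority-votes the outputs of redundant channels; it is a generic illustration under simple assumptions, not the voting design of the parallel processor described in the paper.

      # Minimal sketch of output voting across redundant channels (illustrative only).
      from collections import Counter

      def majority_vote(outputs):
          # Return the value agreed by a strict majority of channels, or None if no majority.
          value, count = Counter(outputs).most_common(1)[0]
          return value if count > len(outputs) // 2 else None

      print(majority_vote([42, 42, 42, 7]))    # one faulty channel is out-voted -> 42
      print(majority_vote([42, 7, 13, 99]))    # no majority -> None (disagreement flagged)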

  13. Frictional heterogeneities on carbonate-bearing normal faults: Insights from the Monte Maggio Fault, Italy

    NASA Astrophysics Data System (ADS)

    Carpenter, B. M.; Scuderi, M. M.; Collettini, C.; Marone, C.

    2014-12-01

    Observations of heterogeneous and complex fault slip are often attributed to the complexity of fault structure and/or spatial heterogeneity of fault frictional behavior. Such complex slip patterns have been observed for earthquakes on normal faults throughout central Italy, where many of the Mw 6 to 7 earthquakes in the Apennines nucleate at depths where the lithology is dominated by carbonate rocks. To explore the relationship between fault structure and heterogeneous frictional properties, we studied the exhumed Monte Maggio Fault, located in the northern Apennines. We collected intact specimens of the fault zone, including the principal slip surface and hanging wall cataclasite, and performed experiments at a normal stress of 10 MPa under saturated conditions. Experiments designed to reactivate slip between the cemented principal slip surface and cataclasite show a 3 MPa stress drop as the fault surface fails, then velocity-neutral frictional behavior and significant frictional healing. Overall, our results suggest that (1) earthquakes may readily nucleate in areas of the fault where the slip surface separates massive limestone and are likely to propagate in areas where fault gouge is in contact with the slip surface; (2) postseismic slip is more likely to occur in areas of the fault where gouge is present; and (3) high rates of frictional healing and low creep relaxation observed between solid fault surfaces could lead to significant aftershocks in areas of low stress drop.

  14. Transform fault earthquakes in the North Atlantic: Source mechanisms and depth of faulting

    NASA Technical Reports Server (NTRS)

    Bergman, Eric A.; Solomon, Sean C.

    1987-01-01

    The centroid depths and source mechanisms of 12 large earthquakes on transform faults of the northern Mid-Atlantic Ridge were determined from an inversion of long-period body waveforms. The earthquakes occurred on the Gibbs, Oceanographer, Hayes, Kane, 15 deg 20 min, and Vema transforms. The depth extent of faulting during each earthquake was estimated from the centroid depth and the fault width. The source mechanisms for all events in this study display the strike-slip motion expected for transform fault earthquakes; slip vector azimuths agree to within 2 to 3 deg of the local strike of the zone of active faulting. The only anomalies in mechanism were for two earthquakes near the western end of the Vema transform which occurred on significantly nonvertical fault planes. Secondary faulting, occurring either precursory to or near the end of the main episode of strike-slip rupture, was observed for 5 of the 12 earthquakes. For three events the secondary faulting was characterized by reverse motion on fault planes striking oblique to the trend of the transform. In all three cases, the site of secondary reverse faulting is near a compression jog in the current trace of the active transform fault zone. No evidence was found to support the conclusions of Engeln, Wiens, and Stein that oceanic transform faults in general are either hotter than expected from current thermal models or weaker than normal oceanic lithosphere.

  15. On Identifiability of Bias-Type Actuator-Sensor Faults in Multiple-Model-Based Fault Detection and Identification

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh M.

    2012-01-01

    This paper explores a class of multiple-model-based fault detection and identification (FDI) methods for bias-type faults in actuators and sensors. These methods employ banks of Kalman-Bucy filters to detect the faults, determine the fault pattern, and estimate the fault values, wherein each Kalman-Bucy filter is tuned to a different failure pattern. Necessary and sufficient conditions are presented for identifiability of actuator faults, sensor faults, and simultaneous actuator and sensor faults. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have biases.
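
    As a rough, discrete-time illustration of the bank-of-filters idea (the paper works with continuous-time Kalman-Bucy filters; the scalar system, noise statistics, and bias magnitude below are assumptions), two filters tuned to different bias hypotheses can be compared through their normalized innovations:

      # Minimal discrete-time sketch of a two-model filter bank for sensor-bias FDI.
      # The scalar system, noise statistics, and bias magnitude are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(1)
      a, q, r = 0.5, 0.01, 0.04        # state transition, process noise var, measurement noise var
      true_bias = 1.0                  # sensor bias present throughout this run
      n_steps = 300

      # Simulate the true state and biased measurements.
      x, zs = 0.0, []
      for _ in range(n_steps):
          x = a * x + rng.normal(0.0, np.sqrt(q))
          zs.append(x + true_bias + rng.normal(0.0, np.sqrt(r)))

      def filter_score(assumed_bias):
          # Run a scalar Kalman filter that assumes the sensor carries `assumed_bias`
          # and return the sum of squared, variance-normalized innovations.
          xhat, p, score = 0.0, 1.0, 0.0
          for z in zs:
              xhat, p = a * xhat, a * a * p + q      # predict
              s = p + r                              # innovation variance
              innov = (z - assumed_bias) - xhat
              score += innov * innov / s
              gain = p / s
              xhat += gain * innov                   # update
              p *= (1.0 - gain)
          return score

      for hypo in (0.0, 1.0):
          print(f"hypothesized bias {hypo:.1f}: innovation score {filter_score(hypo):.1f}")
      # The hypothesis whose innovations stay consistent with their predicted variance
      # (the smaller score) indicates the fault pattern.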

  16. a case of casing deformation and fault slip for the active fault drilling

    NASA Astrophysics Data System (ADS)

    Ge, H.; Song, L.; Yuan, S.; Yang, W.

    2010-12-01

    An active fault is normally defined as a fault with displacement or seismic activity during the geologically recent period (the last 10,000 years, USGS). Here, we use the term active fault for a fault that is undergoing post-seismic stress modification or recovery. Microseismicity and fault slip can occur during the recovery of such active faults. Drilling through an active fault, as in the Wenchuan Fault Scientific Drilling (WFSD), may therefore be accompanied by wellbore instability and casing deformation, which is noteworthy for fault scientific drilling. This presentation gives a field case from the Wenchuan earthquake. The great Wenchuan earthquake occurred on May 12, 2008. An oilfield lies 400 km from the epicenter and 260 km from the main fault. Many wells have been drilled there or are under drilling. Some were drilled through the active fault, and several tectonically active phenomena were observed. For instance, a drill pipe was cut off in a well that had just been drilled through the fault. We concluded that this was due to fault slip; otherwise, such a thick-walled pipe could not have been cut off. At the same time, many well casings in the oilfield deformed during the great Wenchuan earthquake. Analysis of the casing deformation characteristics, formation structure, seismicity, and tectonic stress variation suggests that the casing deformation is closely related to the Wenchuan earthquake. It is the tectonic stress variation that induces seismic activity, fault slip, accelerated salt/gypsum creep, and differential deformation between strata. Additional earthquake dynamic loads were exerted on the casing and caused its deformation. Active fault scientific drilling has become an important tool for understanding earthquake mechanisms and physics. The casing deformation and wellbore instability are not only a consequence of the earthquake but also an indicator of stress modification and fault activity. Tectonic stress variation and fault slip can lead to casing deformation and wellbore instability when drilling through an active fault. The Wenchuan Fault Scientific Drilling (WFSD) is a new rapid-response survey effort targeting the earthquake's active fault. This issue should be taken into account in active fault drilling design.

  17. Groundwater flow in a closed basin with a saline shallow lake in a volcanic area: Laguna Tuyajto, northern Chilean Altiplano of the Andes.

    PubMed

    Herrera, Christian; Custodio, Emilio; Chong, Guillermo; Lambán, Luis Javier; Riquelme, Rodrigo; Wilke, Hans; Jódar, Jorge; Urrutia, Javier; Urqueta, Harry; Sarmiento, Alvaro; Gamboa, Carolina; Lictevout, Elisabeth

    2016-01-15

    Laguna Tuyajto is a small, shallow saline lake in the Andean Altiplano of northern Chile. On its eastern side it is fed by springs that discharge groundwater from the nearby volcanic aquifers. The area is arid: rainfall does not exceed 200 mm/year in the rainiest parts. The stable isotopic content of spring water shows that the recharge originates mainly from winter rain and snow melt, and to a lesser extent from short, intense sporadic rainfall events. Most of the spring water outflowing on the northern side of Laguna Tuyajto is recharged on the Tuyajto volcano. Most of the spring water on the eastern side and the groundwater are recharged at higher elevations, on the rims of the nearby endorheic basins of Pampa Colorada and Pampa Las Tecas to the east. The presence of tritium in some deep wells in Pampa Colorada and Pampa Las Tecas indicates recent recharge. Gas emissions from recent volcanoes increase the sulfate content of atmospheric deposition, and this is reflected in local groundwater. The chemical composition and concentration of spring waters are the result of meteoric water evapo-concentration, water-rock interaction, and mainly the dissolution of old and buried evaporitic deposits. Groundwater flow is mostly shallow due to a low-permeability ignimbrite layer of regional extent, which also hinders brine spreading below and around the lake. High temperatures at depth near the recent Tuyajto volcano explain the high dissolved silica contents and the δ(18)O shift to heavier values found in some of the spring waters. Laguna Tuyajto is a terminal lake where salts accumulate, mostly halite, but some brine transfer to the Salar de Aguas Calientes-3 cannot be excluded. The hydrogeological behavior of Laguna Tuyajto constitutes a model for understanding the functioning of many other similar basins elsewhere in the Andean Altiplano. PMID:26410705

  18. A Follow-up Study of Graduates of Laguna-Acoma High School Who Took ACT and/or Entered a Four-Year College Program.

    ERIC Educational Resources Information Center

    Munro, Fern H.

    The myth that only the high school student who is at or near the top of his class can succeed at a four-year college is not upheld for graduates of Laguna-Acoma High School (LAHS) in New Mexico. Many sources provide accurate grade point averages (GPA), American College Test (ACT) scores and Rank in Class (RIC) for the LAHS students who took the ACT…

  19. Chemistry of Hot Spring Pool Waters in Calamba and Los Banos and Potential Effect on the Water Quality of Laguna De Bay

    NASA Astrophysics Data System (ADS)

    Balangue, M. I. R. D.; Pena, M. A. Z.; Siringan, F. P.; Jago-on, K. A. B.; Lloren, R. B.; Taniguchi, M.

    2014-12-01

    Since the Spanish Period (1600s), natural hot spring waters have been harnessed for balneological purposes in the municipalities of Calamba and Los Banos, Laguna, south of Metro Manila. There are more than a hundred hot spring resorts in Brgy. Pansol, Calamba and Tadlac, Los Banos. These two areas are found on the northern flanks of Mt. Makiling facing Laguna de Bay. This study aims to provide some insights into the physical and chemical characteristics of the hot spring resorts and the possible impact on lake water quality resulting from the disposal of used water. An initial ocular survey of the resorts showed that pool water temperature ranges from ambient (>30 °C) to as high as 50 °C, with an average pool size of 80 m3. Water samples were collected from a natural hot spring and a pumped well in Los Banos and another pumped well in Pansol to determine the chemistry. The field pH ranges from 6.65 to 6.87 (Pansol springs). Cation analysis revealed that the thermal waters belong to the Na-K-Cl-HCO3 type with trace amounts of heavy metals. Waste water is disposed of either by direct discharge down the pool drain or by discharge into the public road canal. Both methods dump the waste water directly into Laguna de Bay. Taking into consideration the large volume of waste water used, especially during the peak season, the effect on lake water quality would be significant. It is therefore imperative for the environmental authorities in Laguna to regulate and monitor the chemistry of pool discharges to protect both lake water and groundwater quality.

  20. Seismic images and fault relations of the Santa Monica thrust fault, West Los Angeles, California

    USGS Publications Warehouse

    Catchings, R.D.; Gandhok, G.; Goldman, M.R.; Okaya, D.

    2001-01-01

    In May 1997, the US Geological Survey (USGS) and the University of Southern California (USC) acquired high-resolution seismic reflection and refraction images on the grounds of the Wadsworth Veterans Administration Hospital (WVAH) in the city of Los Angeles (Fig. 1a,b). The objective of the seismic survey was to better understand the near-surface geometry and faulting characteristics of the Santa Monica fault zone. In this report, we present seismic images, an interpretation of those images, and a comparison of our results with results from studies by Dolan and Pratt (1997), Pratt et al. (1998) and Gibbs et al. (2000). The Santa Monica fault is one of several northeast-southwest-trending, north-dipping, reverse faults that extend through the Los Angeles metropolitan area (Fig. 1a). Through much of the area, the Santa Monica fault trends subparallel to the Hollywood fault, but the two faults apparently join into a single fault zone to the southwest and to the northeast (Dolan et al., 1995). The Santa Monica and Hollywood faults may be part of a larger fault system that extends from the Pacific Ocean to the Transverse Ranges. Crook et al. (1983) refer to this fault system as the Malibu Coast-Santa Monica-Raymond-Cucamonga fault system. They suggest that these faults have not formed a contiguous zone since the Pleistocene and conclude that each of the faults should be treated as a separate fault with respect to seismic hazards. However, Dolan et al. (1995) suggest that the Hollywood and Santa Monica faults are capable of generating Mw 6.8 and Mw 7.0 earthquakes, respectively. Thus, regardless of whether the overall fault system is connected and capable of rupturing in one event, each of the faults individually presents a sizable earthquake hazard to the Los Angeles metropolitan area. If, however, these faults are connected and were to rupture in one continuous event, the resulting hazard would be even greater. Although the Santa Monica fault represents a hazard to millions of people, its lateral extent and rupture history are not well known, due largely to limited knowledge of the fault location, geometry, and relationship to other faults. The Santa Monica fault has been obscured at the surface by alluvium and urbanization. For example, Dolan et al. (1995) could find only one 200-m-long stretch of the Santa Monica fault that was not covered by either streets or buildings. Along the 19-km-long onshore section of the Santa Monica fault, the apparent fault location has been delineated largely on the basis of geomorphic features and oil-well drilling. Seismic imaging efforts, in combination with other investigative methods, may be the best approach to locating and understanding the Santa Monica fault in the Los Angeles region. This investigation and another recent seismic imaging investigation (Pratt et al., 1998) were undertaken to resolve the near-surface location, fault geometry, and faulting relations associated with the Santa Monica fault.

  1. Illuminating Northern California's Active Faults

    NASA Astrophysics Data System (ADS)

    Prentice, Carol S.; Crosby, Christopher J.; Whitehill, Caroline S.; Arrowsmith, J. Ramón; Furlong, Kevin P.; Phillips, David A.

    2009-02-01

    Newly acquired light detection and ranging (lidar) topographic data provide a powerful community resource for the study of landforms associated with the plate boundary faults of northern California (Figure 1). In the spring of 2007, GeoEarthScope, a component of the EarthScope Facility construction project funded by the U.S. National Science Foundation, acquired approximately 2000 square kilometers of airborne lidar topographic data along major active fault zones of northern California. These data are now freely available in point cloud (x, y, z coordinate data for every laser return), digital elevation model (DEM), and KMZ (zipped Keyhole Markup Language, for use in Google Earth and other similar software) formats through the GEON OpenTopography Portal (http://www.OpenTopography.org/data). Importantly, vegetation can be digitally removed from lidar data, producing high-resolution images (0.5- or 1.0-meter DEMs) of the ground surface beneath forested regions that reveal landforms typically obscured by vegetation canopy (Figure 2).

  2. Fault Tolerant Homopolar Magnetic Bearings

    NASA Technical Reports Server (NTRS)

    Li, Ming-Hsiu; Palazzolo, Alan; Kenny, Andrew; Provenza, Andrew; Beach, Raymond; Kascak, Albert

    2003-01-01

    Magnetic suspensions (MS) satisfy the long life and low loss conditions demanded by satellite and ISS based flywheels used for Energy Storage and Attitude Control (ACESE) service. This paper summarizes the development of a novel MS that improves reliability via fault tolerant operation. Specifically, flux coupling between poles of a homopolar magnetic bearing is shown to deliver desired forces even after termination of coil currents to a subset of failed poles. Linear, coordinate decoupled force-voltage relations are also maintained before and after failure by bias linearization. Current distribution matrices (CDM) which adjust the currents and fluxes following a pole set failure are determined for many faulted pole combinations. The CDMs and the system responses are obtained utilizing 1D magnetic circuit models with fringe and leakage factors derived from detailed, 3D, finite element field models. Reliability results are presented vs. detection/correction delay time and individual power amplifier reliability for 4, 6, and 7 pole configurations. Reliability is shown for two success criteria, i.e. (a) no catcher bearing contact following pole failures and (b) re-levitation off of the catcher bearings following pole failures. An advantage of the method presented over other redundant operation approaches is a significantly reduced requirement for backup hardware such as additional actuators or power amplifiers.

  3. Fault Management Techniques in Human Spaceflight Operations

    NASA Technical Reports Server (NTRS)

    O'Hagan, Brian; Crocker, Alan

    2006-01-01

    This paper discusses human spaceflight fault management operations. Fault detection and response capabilities available in current US human spaceflight programs Space Shuttle and International Space Station are described while emphasizing system design impacts on operational techniques and constraints. Preflight and inflight processes along with products used to anticipate, mitigate and respond to failures are introduced. Examples of operational products used to support failure responses are presented. Possible improvements in the state of the art, as well as prioritization and success criteria for their implementation are proposed. This paper describes how the architecture of a command and control system impacts operations in areas such as the required fault response times, automated vs. manual fault responses, use of workarounds, etc. The architecture includes the use of redundancy at the system and software function level, software capabilities, use of intelligent or autonomous systems, number and severity of software defects, etc. This in turn drives which Caution and Warning (C&W) events should be annunciated, C&W event classification, operator display designs, crew training, flight control team training, and procedure development. Other factors impacting operations are the complexity of a system, skills needed to understand and operate a system, and the use of commonality vs. optimized solutions for software and responses. Fault detection, annunciation, safing responses, and recovery capabilities are explored using real examples to uncover underlying philosophies and constraints. These factors directly impact operations in that the crew and flight control team need to understand what happened, why it happened, what the system is doing, and what, if any, corrective actions they need to perform. If a fault results in multiple C&W events, or if several faults occur simultaneously, the root cause(s) of the fault(s), as well as their vehicle-wide impacts, must be determined in order to maintain situational awareness. This allows both automated and manual recovery operations to focus on the real cause of the fault(s). An appropriate balance must be struck between correcting the root cause failure and addressing the impacts of that fault on other vehicle components. Lastly, this paper presents a strategy for using lessons learned to improve the software, displays, and procedures in addition to determining what is a candidate for automation. Enabling technologies and techniques are identified to promote system evolution from one that requires manual fault responses to one that uses automation and autonomy where they are most effective. These considerations include the value in correcting software defects in a timely manner, automation of repetitive tasks, making time critical responses autonomous, etc. The paper recommends the appropriate use of intelligent systems to determine the root causes of faults and correctly identify separate unrelated faults.

  4. Geometrical and Structural Asperities on Fault Surfaces

    NASA Astrophysics Data System (ADS)

    Sagy, A.; Brodsky, E. E.; van der Elst, N.; Agosta, F.; di Toro, G.; Collettini, C.

    2007-12-01

    Earthquake dynamics are strongly affected by fault zone structure and geometry. Fault surface irregularities and the nearby structure control the rupture nucleation and propagation, the fault strength, the near-field stress orientations and the hydraulic properties. New field observations demonstrate the existence of asperities in faults as displayed by topographical bumps on the fault surface and hardening of the internal structure near them. Ground-based LIDAR measurements on more than 30 normal and strike-slip faults in different lithologies demonstrate that faults are not planar surfaces and roughness is strongly dependent on fault displacement. In addition to the well-understood roughness exemplified by abrasive striations and fracture segmentation, we found semi-elliptical topographical bumps with wavelengths of a few meters. In many faults the bumps are not spread equally on the surface, and some zones can be bumpier than others. The bumps are most easily identified on faults with total displacement of dozens to hundreds of meters. Smaller scale roughness on these faults is smoothed by abrasive processes. A key site in southern Oregon shows that the topographic bumps are closely tied to the internal structure of the fault zone. At this location, we combine LiDAR data with detailed structural analysis of the fault zone embedded in volcanic rocks. Here the bumps correlate with an abrupt change in the width of the cohesive cataclasite layer that is exposed under a thin ultracataclasite zone. In most of the exposures the cohesive layer thickness is 10-20 cm. However, under protruding bumps the layer is always thickened and the width can locally exceed one meter. Field and microscopic analyses show that the layer contains grains with dimensions ranging from less than 10 μm up to a few centimeters. There is clear evidence of internal flow, rotation and fracturing of the grains in the layer. X-Ray diffraction measurements of samples from the layer show that the bulk mineralogy is identical to that of the host rock, although thin section analysis suggests that some alteration and secondary mineralization of the grains also occurs. We infer that the cohesiveness of the layer is a consequence of repacking and cementation similar to deformation bands in granular material. By comparing the thickness of the cohesive layer on several secondary faults in this fault area we found that the average thickness of the layer increases with total slip. The correlation is nonlinear and the thickening rate decreases with increasing slip. We conclude that granular flow decreases with increasing slip and thus that deformation is continually localized.

  5. Fault slip during a glacial cycle

    NASA Astrophysics Data System (ADS)

    Steffen, Rebekka; Wu, Patrick; Steffen, Holger; Eaton, Dave

    2013-04-01

    Areas affected by glacial isostatic adjustment (GIA) generally show uplift after deglaciation. These regions are also characterized by a moderate past and present-day seismicity, at seismic moment release rates that exceed those expected under stable tectonic conditions. Several faults have been found in North America and Europe, which have been activated during or after the last deglaciation. Large-magnitude earthquakes have generated fault offsets of up to 120 m. Due to the recent melting of Greenland and Antarctic ice sheets, an understanding of the occurrence of these earthquakes is important. With a new finite-element model, we are able to estimate, for the first time, fault slip during a glacial cycle for continental ice sheets. A two-dimensional earth model based on former GIA studies is developed, which is loaded with a hyperbolic ice sheet. The fault is able to move in a stress field consisting of rebound stress, tectonic background stress, and lithostatic stress. The sensitivity of this fault is tested regarding lithospheric and crustal thickness, viscosity structure of upper and lower mantle, ice-sheet thickness and width, and fault parameters including coefficient of friction, depth, angle and location. Fault throws of up to 30 m are obtained using a fault dipping at 45° below the ice sheet centre. The thickness of the crust is one of the major parameters affecting the total fault throw, e.g. with larger throws for a thinner crust. Most faults start to move close to the end of deglaciation, and movement stops after one thrusting/reverse earthquake. However, certain conditions may also lead to several fault movements after the end of glaciations.

  6. The width of fault zones in a brittle-viscous lithosphere: Strike-slip faults

    NASA Technical Reports Server (NTRS)

    Parmentier, E. M.

    1991-01-01

    A fault zone in an ideal brittle material overlying a very weak substrate could, in principle, consist of a single slip surface. Real fault zones have a finite width consisting of a number of nearly parallel slip surfaces on which deformation is distributed. The hypothesis that the finite width of fault zones reflects stresses due to quasistatic flow in the ductile substrate of a brittle surface layer is explored. Because of the simplicity of theory and observations, strike-slip faults are examined first, but the analysis can be extended to normal and thrust faulting.

  7. Implementation of a model based fault detection and diagnosis technique for actuation faults of the SSME

    NASA Technical Reports Server (NTRS)

    Duyar, A.; Guo, T.-H.; Merrill, W.; Musgrave, J.

    1991-01-01

    In a previous study, Guo, Merrill and Duyar, 1990, reported a conceptual development of a fault detection and diagnosis system for actuation faults of the Space Shuttle main engine. This study, which is a continuation of the previous work, implements the developed fault detection and diagnosis scheme for the real time actuation fault diagnosis of the Space Shuttle Main Engine. The scheme will be used as an integral part of an intelligent control system demonstration experiment at NASA Lewis. The diagnosis system utilizes a model based method with real time identification and hypothesis testing for actuation, sensor, and performance degradation faults.

  8. A Fault-tolerant RISC Microprocessor for Spacecraft Applications

    NASA Technical Reports Server (NTRS)

    Timoc, Constantin; Benz, Harry

    1990-01-01

    Viewgraphs on a fault-tolerant RISC microprocessor for spacecraft applications are presented. Topics covered include: reduced instruction set computer; fault tolerant registers; fault tolerant ALU; and double rail CMOS logic.

  9. Intermittent/transient fault phenomena in digital systems

    NASA Technical Reports Server (NTRS)

    Masson, G. M.

    1977-01-01

    An overview of the intermittent/transient (IT) fault study is presented. An interval survivability evaluation of digital systems for IT faults is discussed along with a method for detecting and diagnosing IT faults in digital systems.

  10. Intermittent/transient faults in digital systems

    NASA Technical Reports Server (NTRS)

    Masson, G. M.; Glazer, R. E.

    1982-01-01

    Containment set techniques are applied to 8085 microprocessor controllers so as to transform a typical control system into a slightly modified version, shown to be crashproof: after the departure of the intermittent/transient fault, return to one proper control algorithm is assured, assuming no permanent faults occur.

  11. Fault detection with principal component pursuit method

    NASA Astrophysics Data System (ADS)

    Pan, Yijun; Yang, Chunjie; Sun, Youxian; An, Ruqiao; Wang, Lin

    2015-11-01

    Data-driven approaches are widely applied for fault detection in industrial processes. Recently, a new method for fault detection called principal component pursuit (PCP) was introduced. PCP is not only robust to outliers but can also accomplish the objectives of model building, fault detection, fault isolation and process reconstruction simultaneously. PCP divides the data matrix into two parts: a fault-free low-rank matrix and a sparse matrix containing sensor noise and process faults. The statistics presented in this paper fully utilize the information in the data matrix. Since the low-rank matrix in PCP is similar to the principal component matrix in PCA, a T2 statistic is proposed for fault detection in the low-rank matrix; this statistic shows that PCP is more sensitive to small variations in variables than PCA. In addition, for the sparse matrix, a new monitoring statistic for online fault detection with the PCP-based method is introduced. This statistic uses the mean and the correlation coefficient of the variables. Monte Carlo simulations and the Tennessee Eastman (TE) benchmark process are provided to illustrate the effectiveness of the monitoring statistics.
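
    For readers unfamiliar with the low-rank/sparse split that PCP performs, the sketch below implements a standard ADMM-style principal component pursuit on a small synthetic matrix with injected faults; the synthetic data, parameter choices (lambda, mu, iteration count), and fault locations are assumptions for illustration, not the paper's algorithm or data.

      # Minimal sketch of principal component pursuit via ADMM (robust PCA):
      # M = L + S with L low-rank (fault-free part) and S sparse (noise + faults).
      import numpy as np

      def shrink(x, tau):
          # Soft-thresholding operator.
          return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

      def svt(x, tau):
          # Singular value thresholding.
          u, s, vt = np.linalg.svd(x, full_matrices=False)
          return u @ np.diag(shrink(s, tau)) @ vt

      def pcp(M, n_iter=300):
          m, n = M.shape
          lam = 1.0 / np.sqrt(max(m, n))
          mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)
          L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
          for _ in range(n_iter):
              L = svt(M - S + Y / mu, 1.0 / mu)          # low-rank update
              S = shrink(M - L + Y / mu, lam / mu)       # sparse update
              Y = Y + mu * (M - L - S)                   # dual update
          return L, S

      # Synthetic test: rank-2 process data with a few injected sensor faults.
      rng = np.random.default_rng(0)
      M = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 20))
      M[10, 3] += 8.0; M[55, 7] -= 6.0                   # injected faults
      L, S = pcp(M)
      print("largest sparse entries found at:", np.argwhere(np.abs(S) > 3.0))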

  12. Intraplate rotational deformation induced by faults

    NASA Astrophysics Data System (ADS)

    Dembo, Neta; Hamiel, Yariv; Granot, Roi

    2015-11-01

    Vertical axis rotations provide important constraints on the tectonic history of plate boundaries. Geodetic measurements can be used to calculate interseismic rotations, whereas paleomagnetic remanence directions provide constraints on the long-term rotations accumulated over geological timescales. Here we present a new mechanical modeling approach that links intraplate deformation patterns across these timescales. We construct mechanical models of active faults at their locked state to simulate the presumably elastic interseismic deformation rates observed by GPS measurements. We then apply a slip to the faults above the locking depth to simulate the long-term deformation of the crust from which we derive the accumulated rotations. We test this approach in northern Israel along the Dead Sea Fault and Carmel-Gilboa fault system. We use 12 years of interseismic GPS measurements to constrain a slip model of the major faults found in this region. Next, we compare the modeled rotations against long-term rotations determined based on new primary magnetic remanence directions from 29 sites with known age. The distributional pattern of site mean declinations is in general agreement with the vertical axis rotations predicted by the mechanical model, both showing anomalously high rotations near fault tips and bending points. Overall, the results from northern Israel validate the effectiveness of our approach and indicate that rotations induced by motion along faults may act in parallel (or alone) to rigid block rotations. Finally, the new suggested method unravels important insights on the evolution (timing, magnitude, and style) of deformation along major faults.

  13. Diagnostics Tools Identify Faults Prior to Failure

    NASA Technical Reports Server (NTRS)

    2013-01-01

    Through the SBIR program, Rochester, New York-based Impact Technologies LLC collaborated with Ames Research Center to commercialize the Center's Hybrid Diagnostic Engine, or HyDE, software. The fault-detecting program is now incorporated into a software suite that identifies potential faults early in the design phase of systems ranging from printers to vehicles and robots, saving time and money.

  14. A Game Theoretic Fault Detection Filter

    NASA Technical Reports Server (NTRS)

    Chung, Walter H.; Speyer, Jason L.

    1995-01-01

    The fault detection process is modelled as a disturbance attenuation problem. The solution to this problem is found via differential game theory, leading to an H(sub infinity) filter which bounds the transmission of all exogenous signals save the fault to be detected. For a general class of linear systems which includes some time-varying systems, it is shown that this transmission bound can be taken to zero by simultaneously bringing the sensor noise weighting to zero. Thus, in the limit, a complete transmission block can be achieved, making the game filter into a fault detection filter. When we specialize this result to time-invariant systems, it is found that the detection filter attained in the limit is identical to the well-known Beard-Jones Fault Detection Filter. That is, all fault inputs other than the one to be detected (the "nuisance faults") are restricted to an invariant subspace which is unobservable to a projection on the output. For time-invariant systems, it is also shown that in the limit, the order of the state-space and the game filter can be reduced by factoring out the invariant subspace. The result is a lower dimensional filter which can observe only the fault to be detected. A reduced-order filter can also be generated for time-varying systems, though the computational overhead may be intensive. An example given at the end of the paper demonstrates the effectiveness of the filter as a tool for fault detection and identification.

  15. Late Cenozoic intraplate faulting in eastern Australia

    NASA Astrophysics Data System (ADS)

    Babaahmadi, Abbas; Rosenbaum, Gideon

    2014-12-01

    The intensity and tectonic origin of late Cenozoic intraplate deformation in eastern Australia are relatively poorly understood. Here we show that Cenozoic volcanic rocks in southeast Queensland have been deformed by numerous faults. Using gridded aeromagnetic data and field observations, structural investigations were conducted on these faults. Results show that faults have mainly undergone strike-slip movement with a reverse component, displacing Cenozoic volcanic rocks ranging in age from 31 to 21 Ma. These ages imply that faulting must have occurred after the late Oligocene. Late Cenozoic deformation has mostly occurred due to the reactivation of major faults, which were active during episodes of basin formation in the Jurassic-Early Cretaceous and later during the opening of the Tasman and Coral Seas from the Late Cretaceous to the early Eocene. The wrench reactivation of major faults in the late Cenozoic also gave rise to brittle subsidiary reverse strike-slip faults that affected Cenozoic volcanic rocks. Intraplate transpressional deformation possibly resulted from far-field stresses transmitted from the collisional zones at the northeast and southeast boundaries of the Australian plate during the late Oligocene-early Miocene and from the late Miocene to the Pliocene. These events have resulted in the hitherto unrecognized reactivation of faults in eastern Australia.

  16. Is the Lishan fault of Taiwan active?

    NASA Astrophysics Data System (ADS)

    Kuo-Chen, Hao; Wu, Francis; Chang, Wu-Lung; Chang, Chih-Yu; Cheng, Ching-Yu; Hirata, Naoshi

    2015-10-01

    The Lishan fault has been characterized alternately as a major discontinuity in stratigraphy, structures and metamorphism, a ductile shear zone, a tectonic suture or non-existent. In addition to being a geological boundary, it also marks transitions in subsurface structures. Thus, the seismicity to the west of the fault permeates through the upper and mid-crust while beneath the Central Range it is noticeably less and largely concentrated in the upper 12 km. A prominent west-dipping conductive zone extends upward to meet the Lishan fault. Also, the eastward increase of crust thickness from ~ 30 km in the Taiwan Strait quickens under the Lishan fault to form a root of over 50 km under the Central Range. In the past, the small magnitude seismicity along the Lishan fault has been noticed but is too diffuse for definitive association with the fault. Recent processing of aftershock records of the 1999 Mw 7.6 Chi-Chi earthquake using Central Weather Bureau data and, especially, data from three post-Chi-Chi deployments of seismic stations across central Taiwan yielded hypocenters that appear to link directly to the Lishan structure. The presence of a near 4-km-long vertical seismic zone directly under the surface trace of the Lishan fault indicates that it is an active structure from the surface down to about 35 km, and the variety of focal mechanisms indicates that the fault motion can be complex and depth-dependent.

  17. Detecting Latent Faults In Digital Flight Controls

    NASA Technical Reports Server (NTRS)

    Mcgough, John; Mulcare, Dennis; Larsen, William E.

    1992-01-01

    Report discusses theory, conduct, and results of tests involving deliberate injection of low-level faults into digital flight-control system. Part of study of effectiveness of techniques for detection of and recovery from faults, based on statistical assessment of inputs and outputs of parts of control systems. Offers exceptional new capability to establish reliabilities of critical digital electronic systems in aircraft.

  18. Assumptions for fault tolerant quantum computing

    SciTech Connect

    Knill, E.; Laflamme, R.

    1996-06-01

    Assumptions useful for fault tolerant quantum computing are stated and briefly discussed. We focus on assumptions related to properties of the computational system. The strongest form of the assumptions seems to be sufficient for achieving highly fault tolerant quantum computation. We discuss weakenings which are also likely to suffice.

  19. Staged fault at Denver International Airport

    SciTech Connect

    Armenta, J.; Befus, C.

    1995-12-31

    Electric utilities occasionally conduct staged faults to test transmission and substation circuit breakers and relay protection schemes. This paper discusses a staged fault test conducted on a 25kV distribution system at Denver International Airport (DIA) to test sophisticated fault detection hardware and software in distribution automation field equipment. In 1993 and the first part of 1994, supervisory controlled 25kV switch cabinets were installed along key distribution feeder tie points at DIA. The supervisory switch cabinets monitor and report voltage, current, switch status, etc. and provide remote and local fault indication utilizing digital signal processing. The switch cabinets are monitored and controlled by Public Service Company of Colorado's (PSCo) energy management and SCADA system. On July 29, 1994, PSCo conducted two staged faults to test the fault indication software and hardware as part of the complete system. This paper will illustrate why and how these two faults were initiated. It will also reveal the preparation required to stage the faults, the expected results, the actual results, conclusions, and solutions to problems found.

  20. The cost of software fault tolerance

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1982-01-01

    The proposed use of software fault tolerance techniques as a means of reducing software costs in avionics and as a means of addressing the issue of system unreliability due to faults in software is examined. A model is developed to provide a view of the relationships among cost, redundancy, and reliability which suggests strategies for software development and maintenance which are not conventional.

  1. Training for Skill in Fault Diagnosis

    ERIC Educational Resources Information Center

    Turner, J. D.

    1974-01-01

    The Knitting, Lace and Net Industry Training Board has developed a training innovation called fault diagnosis training. The entire training process concentrates on teaching based on the experiences of troubleshooters or any other employees whose main tasks involve fault diagnosis and rectification. (Author/DS)

  2. Deltaic faulting and subsidence: Analog modeling

    SciTech Connect

    Larroque, J.M.

    1991-03-01

    Scaled experiments with sand layers overlying viscous silicone putty have been used to investigate the behavior of deltaic sediments prograding over salt or mobile shales. Differential loading caused by a sand wedge prograding over a viscous putty layer induces a forward expulsion of the viscous material. This causes the putty to thin beneath the sand wedge and to thicken at the wedge toe. It results in extension and subsidence in the sand wedge. The predominant dip of the extensional faults is in the progradation direction except in the toe bulge area where a major fault may occur with an opposite (counter-regional) dip. The experiments examined how changes in model parameters affect the resultant fault geometries: Increasing the putty thickness leads to an increase in the amount of extension and degree of block rotation, both of which decrease upwards into younger sediments; a sloping basement/putty interface leads to a significant increase in the extension of the sand wedge; fast progradation rates lead to widely spaced faulting whereas slow progradation rates lead to closely spaced faulting; basement fault steps, associated with changes in viscous layer thickness, are also demonstrated to locate and locally reorient faults in the overlying sand wedge. These concepts can assist the interpreter in defining the shape of faulted traps, particularly at depth or where seismic quality deteriorates, and in understanding the evolution and timing of trap formation.

  3. Measurement selection for parametric IC fault diagnosis

    NASA Technical Reports Server (NTRS)

    Wu, A.; Meador, J.

    1991-01-01

    Experimental results obtained with the use of measurement reduction for statistical IC fault diagnosis are described. The reduction method used involves data pre-processing in a fashion consistent with a specific definition of parametric faults. The effects of this preprocessing are examined.

  4. Interactive Instruction in Solving Fault Finding Problems.

    ERIC Educational Resources Information Center

    Brooke, J. B.; And Others

    1978-01-01

    A training program is described which provides, during fault diagnosis, additional information about the relationship between the remaining faults and the available indicators. An interactive computer program developed for this purpose and the first results of experimental training are described. (Author)

  5. System-level fault diagnosis and reconfiguration

    SciTech Connect

    Gupta, R.

    1987-01-01

    The classical fault-diagnosis model assumes that faults are permanent and each test, administered by a unit, is complete for the unit being tested. These two assumptions may restrict the applicability of the model. The author introduces a new deterministic fault model for system-level fault diagnosis. Unlike earlier attempts, his model handles intermittent faults, incomplete testing by units, and fault masking in a uniform manner. He obtains necessary and sufficient conditions for a system to be diagnosable using the new fault model. The complexity of the diagnosability problem in the model is shown to be co-NP-complete. He then examines the problem of system reconfiguration following identification of faulty components. In particular, reconfigurability of multipipelines is considered in detail. He alternates the pipeline stages with testing and reconfiguring circuitry. The pipelines are reconfigured by programming the switches in a distributed manner. The switch programming algorithm is optimal in the sense that it recovers the maximum number of pipelines under any fault pattern. A proof of its optimality is also presented.

  6. Fault classification in gearboxes using neural networks

    SciTech Connect

    Paya, B.; Esat, I.; Badi, M.N.M.

    1996-11-01

    The purpose of condition monitoring and fault diagnostics is to detect faults occurring in machinery in order to reduce operational and maintenance costs and provide a significant improvement in plant economy. The condition of a model drive-line was investigated. This model drive-line consists of various interconnected rotating parts, including a gearbox, two bearing blocks, and an electric motor, all connected via flexible couplings and loaded by a disc brake. The drive-line was run in its normal condition, and then single and multiple faults were intentionally introduced to the gearbox and bearing block. The faults investigated on the drive-line were typical bearing and gear faults, which may develop during normal and continuous operation of this kind of machinery. This paper presents the investigation carried out in order to study both bearing and gear faults introduced together to the drive-line. It is shown that, by using multilayer artificial neural networks on the condition monitoring data, single and multiple faults were successfully classified. The real time-domain signals obtained from the drive-line were pre-processed by wavelet transforms for the network to perform fault classification.
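
    To give a flavor of the wavelet-preprocessing plus neural-network pipeline described above, the sketch below extracts wavelet-band energies from synthetic vibration signals and trains a small multilayer perceptron; the signal model, wavelet choice (db4, 4 levels), and network size are assumptions for illustration, not the experimental setup or data of the paper.

      # Minimal sketch: wavelet-band features + MLP for fault vs. normal vibration signals.
      # Synthetic signals stand in for drive-line data; all parameter choices are assumptions.
      import numpy as np
      import pywt
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      fs, n = 2048, 1024
      t = np.arange(n) / fs

      def make_signal(faulty):
          # Baseline gear-mesh tone plus noise; "faulty" adds periodic impact bursts.
          sig = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.normal(size=n)
          if faulty:
              impacts = (np.sin(2 * np.pi * 8 * t) > 0.95).astype(float)
              sig += 1.5 * impacts * rng.normal(size=n)
          return sig

      def wavelet_features(sig):
          # RMS energy in each wavelet decomposition band.
          coeffs = pywt.wavedec(sig, "db4", level=4)
          return np.array([np.sqrt(np.mean(c ** 2)) for c in coeffs])

      X, y = [], []
      for label in (0, 1):
          for _ in range(200):
              X.append(wavelet_features(make_signal(bool(label))))
              y.append(label)
      X, y = np.array(X), np.array(y)

      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
      clf = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
      clf.fit(Xtr, ytr)
      print("held-out accuracy:", clf.score(Xte, yte))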

  7. Neotectonics of the Sumatran fault, Indonesia

    NASA Astrophysics Data System (ADS)

    Sieh, Kerry; Natawidjaja, Danny

    2000-12-01

    The 1900-km-long, trench-parallel Sumatran fault accommodates a significant amount of the right-lateral component of oblique convergence between the Eurasian and Indian/Australian plates from 10°N to 7°S. Our detailed map of the fault, compiled from topographic maps and stereographic aerial photographs, shows that unlike many other great strike-slip faults, the Sumatran fault is highly segmented. Cross-strike width of step overs between the 19 major subaerial segments is commonly many kilometers. The influence of these step overs on historical seismic source dimensions suggests that the dimensions of future events will also be influenced by fault geometry. Geomorphic offsets along the fault range as high as 20 km and may represent the total offset across the fault. If this is so, other structures must have accommodated much of the dextral component of oblique convergence during the past few million years. Our analysis of stretching of the forearc region, near the southern tip of Sumatra, constrains the combined dextral slip on the Sumatran and Mentawai faults to be no more than 100 km in the past few million years. The shape and location of the Sumatran fault and the active volcanic arc are highly correlated with the shape and character of the underlying subducting oceanic lithosphere. Nonetheless, active volcanic centers of the Sumatran volcanic arc have not influenced noticeably the geometry of the active Sumatran fault. On the basis of its geologic history and pattern of deformation, we divide the Sumatran plate margin into northern, central and southern domains. We support previous proposals that the geometry and character of the subducting Investigator fracture zone are affecting the shape and evolution of the Sumatran fault system within the central domain. The southern domain is the most regular. The Sumatran fault there comprises six right-stepping segments. This pattern indicates that the overall trend of the fault deviates 4° clockwise from the slip vector between the two blocks it separates. The regularity of this section and its association with the portion of the subduction zone that generated the giant (Mw9) earthquake of 1833 suggest that a geometrically simple subducting slab results in both simple strike-slip faulting and unusually large subduction earthquakes.

  8. Elastodynamic Simulation of Fault System Dynamics

    NASA Astrophysics Data System (ADS)

    Mora, P.; Weatherley, D.

    2002-12-01

    Previous simulations of granular systems subjected to shear with the lattice solid model have exhibited evolution of the stress correlation function in the lead-up to large events. While these results provide evidence for a Critical Point-like mechanism in elasto-dynamic systems and the possibility of earthquake forecasting, it remains unclear whether such a mechanism will occur in more realistic models of interacting fault systems or in the real earth. Furthermore, CA simulations suggest that both Self-Organised Critical and Critical Point behaviours are possible depending on the values of tuning parameters. This suggests that even if the crust does exhibit CP-like behaviour, a given fault system may not, depending on tuning parameters such as fault density, the statistics of fault friction, and dissipation. To progress towards resolving this issue, we develop a 2D fully elasto-dynamic model of parallel interacting faults. Either slip or velocity weakening friction can be defined along faults. Slip weakening friction and a power law distribution of static and dynamic friction coefficients is specified. Numerical shear experiments are conducted in a model with ten parallel interacting faults and fault friction power law exponents of 0.6 and 1.6. The results exhibit a complex evolution of the stress field and a number of interesting features including activity switching between faults and fault segments in the model. The event size distributions are essentially a power law with a slight overabundance of large events. Based upon comparisons with CA simulation results, this suggests the system is in the SOC part of phase space, although further analysis is required to confirm this hypothesis. Numerical experiments are now in progress using different fault densities, fault friction statistics and slip weakening distances to study whether or not the model exhibits both critical point and SOC behaviour like the CA models. The model provides a crucial link between CA maps of phase space (e.g. those that show regimes of CP or SOC behaviour) and the behaviour of more realistic elasto-dynamic interacting fault system models, and thus a means to improve understanding of the complex system behaviour of real fault systems and progress towards the goal of a scientific underpinning for earthquake forecasting.

  9. Active faulting in the Walker Lane

    NASA Astrophysics Data System (ADS)

    Wesnousky, Steven G.

    2005-06-01

    Deformation across the San Andreas and Walker Lane fault systems accounts for most relative Pacific-North American transform plate motion. The Walker Lane is composed of discontinuous sets of right-slip faults that are located to the east and strike approximately parallel to the San Andreas fault system. Mapping of active faults in the central Walker Lane shows that right-lateral shear is locally accommodated by rotation of crustal blocks bounded by steep-dipping east striking left-slip faults. The left slip and clockwise rotation of crustal blocks bounded by the east striking faults has produced major basins in the area, including Rattlesnake and Garfield flats; Teels, Columbus and Rhodes salt marshes; and Queen Valley. The Benton Springs and Petrified Springs faults are the major northwest striking structures currently accommodating transform motion in the central Walker Lane. Right-lateral offsets of late Pleistocene surfaces along the two faults point to slip rates of at least 1 mm/yr. The northern limit of northwest trending strike-slip faults in the central Walker Lane is abrupt and reflects transfer of strike-slip to dip-slip deformation in the western Basin and Range and transformation of right slip into rotation of crustal blocks to the north. The transfer of strike slip in the central Walker Lane to dip slip in the western Basin and Range correlates to a northward broadening of the modern strain field suggested by geodesy and appears to be a long-lived feature of the deformation field. The complexity of faulting and apparent rotation of crustal blocks within the Walker Lane is consistent with the concept of a partially detached and elastic-brittle crust that is being transported on a continuously deforming layer below. The regional pattern of faulting within the Walker Lane is more complex than observed along the San Andreas fault system to the west. The difference is attributed to the relatively less cumulative slip that has occurred across the Walker Lane and that oblique components of displacement are of opposite sense along the Walker Lane (extension) and San Andreas (contraction), respectively. Despite the gross differences in fault pattern, the Walker Lane and San Andreas also share similarities in deformation style, including clockwise rotations of crustal blocks leading to development of structural basins and the partitioning of oblique components of slip onto subparallel strike-slip and dip-slip faults.

  10. Do faults stay cool under stress?

    NASA Astrophysics Data System (ADS)

    Savage, H. M.; Polissar, P. J.; Sheppard, R. E.; Brodsky, E. E.; Rowe, C. D.

    2011-12-01

    Determining the absolute stress on faults during slip is one of the major goals of earthquake physics as this information is necessary for full mechanical modeling of the rupture process. One indicator of absolute stress is the total energy dissipated as heat through frictional resistance. The heat results in a temperature rise on the fault that is potentially measurable and interpretable as an indicator of the absolute stress. We present a new paleothermometer for fault zones that utilizes the thermal maturity of extractable organic material to determine the maximum frictional heating experienced by the fault. Because there are no retrograde reactions in these organic systems, maximum heating is preserved. We investigate four different faults: 1) the Punchbowl Fault, a strike-slip fault that is part of the ancient San Andreas system in southern California, 2) the Muddy Mountain Thrust, a continental thrust sheet in Nevada, 3) large shear zones of Sitkanik Island, AK, part of the proto-megathrust of the Kodiak Accretionary Complex and 4) the Pasagshak Point Megathrust, Kodiak Accretionary Complex, AK. According to a variety of organic thermal maturity indices, the thermal maturity of the rocks falls within the range of heating expected from the bounds on burial depth and time, indicating that the method is robust and in some cases improving our knowledge of burial depth. Only the Pasagshak Point Thrust, which is also pseudotachylyte-bearing, shows differential heating between the fault and off-fault samples. This implies that most of the faults did not get hotter than the surrounding rock during slip. Simple temperature models coupled to the kinetic reactions for organic maturity let us constrain certain aspects of the fault during slip such as fault friction, maximum slip in a single earthquake, the thickness of the active slipping zone and the effective normal stress. Because of the significant length of these faults, we find it unlikely that they never sustained large earthquakes. Therefore we focus on the implications that either 1) some faults undergo dynamic weakening, at least during large earthquakes, or 2) that slip is not confined to very thin localized zones during earthquakes.
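
    As a rough, hedged illustration of how organic thermal maturity can bound fault conditions, the adiabatic temperature rise from frictional heating on a thin slipping zone can be written dT = mu * sigma_n * D / (rho * c_p * w). The sketch below uses made-up parameter values; the point is that the naive estimate is far hotter than measured thermal maturities typically allow, which is the sense in which such data constrain friction, slip-zone thickness or effective normal stress.

      # Hypothetical order-of-magnitude estimate of coseismic temperature rise,
      # assuming all frictional work heats a slip zone of thickness w adiabatically.
      def temperature_rise(mu, sigma_n_pa, slip_m, width_m, rho=2700.0, c_p=1000.0):
          """Delta T = mu * sigma_n * D / (rho * c_p * w)."""
          return mu * sigma_n_pa * slip_m / (rho * c_p * width_m)

      # Illustrative values only: friction 0.6, 100 MPa effective normal stress,
      # 1 m of slip localized on a 1 cm thick slipping zone.
      dT = temperature_rise(mu=0.6, sigma_n_pa=100e6, slip_m=1.0, width_m=0.01)
      print(f"predicted temperature rise ~ {dT:.0f} K")   # ~2200 K, far above what maturity data permit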

  11. Geophysical characterization of buried active faults: the Concud Fault (Iberian Chain, NE Spain)

    NASA Astrophysics Data System (ADS)

    Pueyo Anchuela, Óscar; Lafuente, Paloma; Arlegui, Luis; Liesa, Carlos L.; Simón, José L.

    2015-12-01

    The Concud Fault is a ~14-km-long active fault that extends close to Teruel, a city with about 35,000 inhabitants in the Iberian Range (NE Spain). It shows evidence of recurrent activity during Late Pleistocene time, posing a significant seismic hazard in an area of moderate-to-low tectonic rates. A geophysical survey was carried out along the mapped trace of the southern branch of the Concud Fault to evaluate the geophysical signature from the fault and the location of paleoseismic trenches. The survey identified a lineation of inverse magnetic dipoles at residual and vertical magnetic gradient, a local increase in apparent conductivity, and interruptions of the underground sediment structure along GPR profiles. The origin of these anomalies is due to lateral contrast between both fault blocks and the geophysical signature of Quaternary materials located above and directly south of the fault. The spatial distribution of anomalies was successfully used to locate suitable trench sites and to map non-exposed segments of the fault. The geophysical anomalies are related to the sedimentological characteristics and permeability differences of the deposits and to deformation related to fault activity. The results illustrate the usefulness of geophysics to detect and map non-exposed faults in areas of moderate-to-low tectonic activity where faults are often covered by recent pediments that obscure geological evidence of the most recent earthquakes. The results also highlight the importance of applying multiple geophysical techniques in defining the location of buried faults.

  12. Partial fault dictionary: A new approach for computer-aided fault localization

    SciTech Connect

    Hunger, A.; Papathanasiou, A.

    1995-12-31

    The approach described in this paper has been developed to address the computation time and problem size of localization methodologies in VLSI circuits in order to speed up the overall time consumption for fault localization. The reduction of the problem to solve is combined with the idea of the fault dictionary. In a pre-processing phase, a possibly faulty area is derived using the netlist and the actual test results as input data. The result is a set of cones originating from each faulty primary output. In the next step, the best cone is extracted for the fault dictionary methodology according to a heuristic formula. The circuit nodes, which are included in the intersection of the cones, are combined to a fault list. This fault list together with the best cone can be used by the fault simulator to generate a small and manageable fault dictionary related to one faulty output. In connection with additional algorithms for the reduction of stimuli and netlist a partial fault dictionary can be set up. This dictionary is valid only for the given faulty device together with the given and reduced stimuli, but offers important benefits: Practical results show a reduction of simulation time and size of the fault dictionary by factors around 100 or even more, depending on the actual circuit and assumed fault. The list of fault candidates is significantly reduced, and the required number of steps during the process of localization is reduced, too.
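
    A minimal sketch of the cone-extraction and intersection idea described above, assuming a toy gate-level netlist represented as a mapping from each node to its fan-in nodes; the node names, the netlist and the smallest-cone stand-in for the heuristic choice of the "best" cone are illustrative assumptions, not the authors' implementation.

      # Hypothetical illustration: derive the fan-in cone of each faulty primary output
      # and intersect the cones to obtain a reduced fault-candidate list.
      def fanin_cone(netlist, output):
          """Return all nodes that can reach `output` through the netlist."""
          cone, stack = set(), [output]
          while stack:
              node = stack.pop()
              if node in cone:
                  continue
              cone.add(node)
              stack.extend(netlist.get(node, ()))   # predecessors (fan-in) of the node
          return cone

      # Toy netlist: node -> list of fan-in nodes (gates or primary inputs).
      netlist = {
          "out1": ["g3"], "out2": ["g4"],
          "g3": ["g1", "g2"], "g4": ["g2", "b"],
          "g1": ["a", "b"], "g2": ["b", "c"],
      }
      faulty_outputs = ["out1", "out2"]
      cones = [fanin_cone(netlist, o) for o in faulty_outputs]
      candidates = set.intersection(*cones)          # nodes common to all faulty cones
      best_cone = min(cones, key=len)                # stand-in for the heuristic "best cone" choice
      print("fault candidates:", sorted(candidates))
      print("best cone:", sorted(best_cone))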

  13. Spatial analysis of hypocenter to fault relationships for determining fault process zone width in Japan.

    SciTech Connect

    Arnold, Bill Walter; Roberts, Barry L.; McKenna, Sean Andrew; Coburn, Timothy C. (Abilene Christian University, Abilene, TX)

    2004-09-01

    Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis.
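
    A minimal sketch of the kind of distance analysis the study describes, assuming epicentres and a fault trace given as planar (x, y) coordinates in kilometres; the synthetic catalogue, the trace geometry and the 2 km bins are illustrative assumptions rather than the NUMO data.

      # Hypothetical sketch: count earthquakes as a function of distance from a fault
      # trace to see whether frequency is elevated near the fault (the "process zone").
      import numpy as np

      def dist_point_segment(p, a, b):
          """Distance from point p to segment a-b (2D)."""
          p, a, b = map(np.asarray, (p, a, b))
          ab = b - a
          t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
          return np.linalg.norm(p - (a + t * ab))

      def dist_to_trace(p, trace):
          return min(dist_point_segment(p, trace[i], trace[i + 1])
                     for i in range(len(trace) - 1))

      rng = np.random.default_rng(1)
      trace = [(0.0, 0.0), (20.0, 5.0), (40.0, 5.0)]            # fault trace vertices (km)
      hypos = rng.uniform([0, -20], [40, 25], size=(2000, 2))   # synthetic epicentres
      d = np.array([dist_to_trace(p, trace) for p in hypos])
      counts, edges = np.histogram(d, bins=np.arange(0, 22, 2))
      for lo, hi, n in zip(edges[:-1], edges[1:], counts):
          print(f"{lo:4.0f}-{hi:2.0f} km: {n} events")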

  14. Active Fault Topography and Fault Outcrops in the Central Part of the Nukumi fault, the 1891 Nobi Earthquake Fault System, Central Japan

    NASA Astrophysics Data System (ADS)

    Sasaki, T.; Ueta, K.; Inoue, D.; Aoyagi, Y.; Yanagida, M.; Ichikawa, K.; Goto, N.

    2010-12-01

    It is important to evaluate the magnitude of earthquakes caused by multiple active faults, taking simultaneous rupture into account. The simultaneity of adjacent active faults is often judged on the basis of geometric distance alone, except where paleoseismic records are known. We have been studying the step-over area between the Nukumi fault and the Neodani fault, which ruptured consecutively in the 1891 Nobi earthquake, since 2009. The purpose of this study is to establish an improved technique for evaluating the simultaneity of adjacent active faults that draws on more than the paleoseismic record and the geometric distance. Geomorphological, geological and reconnaissance microearthquake surveys were conducted. The present work is intended to clarify the distribution of tectonic geomorphology along the Nukumi fault and the Neodani fault by high-resolution interpretation of airborne LiDAR DEMs and aerial photographs, together with field surveys of outcrops and their locations. The study area of this work is the southeastern Nukumi fault and the northwestern Neodani fault. We interpret the DEM using shaded relief maps and stereoscopic bird's-eye views made from 2 m mesh DEM data obtained by the airborne laser scanner of Kokusai Kogyo Co., Ltd. Aerial photographic interpretation of 1/16,000-scale photos was used to confirm the DEM interpretation. As a result of the topographic survey, we found continuous tectonic landforms, namely left-laterally displaced ridge and valley lines and reverse scarplets, along the Nukumi fault and the Neodani fault. From Ogotani, 2 km southeast of Nukumi pass (which previous studies identified as the southeastern end of the surface rupture along the Nukumi fault), to Neooppa, 9 km southeast of Nukumi pass, detailed DEM investigation reveals left-lateral offsets and small uphill-facing fault scarps on the terrace surface. These landforms cannot be recognized in aerial photographs because of heavy vegetation. We have found several new outcrops in this area, where surface ruptures of the 1891 Nobi earthquake had not previously been known. Each of these outcrops exposes an active fault that cuts terrace and slope deposits up to the base of the present soil layer. At the Ogotani outcrop, a humic layer dated by 14C to the 14th-15th centuries is deformed by the active fault. The vertical displacement of the humic layer is 0.8-0.9 m, and that of the terrace deposit below the humic layer is ca. 1.3 m. For this reason, and because fine-grained deposits containing the AT tephra (28 ka) occur in the footwall of the fault, fault movement has occurred more than once since the last glacial age. We conclude that the surface rupture of the Nukumi fault in the 1891 Nobi earthquake is continuous to 9 km southeast of Nukumi pass. In other words, these findings indicate that there is a 10-km-long parallel overlap zone between the surface rupture at the southeastern end of the Nukumi fault and the northwestern end of the Neodani fault.

  15. Fault analysis of multichannel spacecraft power systems

    NASA Technical Reports Server (NTRS)

    Dugal-Whitehead, Norma R.; Lollar, Louis F.

    1990-01-01

    The NASA Marshall Space Flight Center proposes to implement computer-controlled fault injection into an electrical power system breadboard to study the reactions of the various control elements of this breadboard. Elements under study include the remote power controllers, the algorithms in the control computers, and the artificially intelligent control programs resident in this breadboard. To this end, a study of electrical power system faults is being performed to yield a list of the most common power system faults. The results of this study will be applied to a multichannel high-voltage DC spacecraft power system called the large autonomous spacecraft electrical power system (LASEPS) breadboard. The results of the power system fault study and the planned implementation of these faults into the LASEPS breadboard are described.

  16. Fault Detection for Automotive Shock Absorber

    NASA Astrophysics Data System (ADS)

    Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis

    2015-11-01

    Fault detection for automotive semi-active shock absorbers is a challenge due to the non-linear dynamics and the strong influence of disturbances such as the road profile. The first obstacle for this task is modeling the fault, which has been shown to be multiplicative in nature, whereas many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using a commercial vehicle model that was experimentally validated. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.
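
    A minimal sketch of a parameter-identification style detector under strong simplifying assumptions: the semi-active damper is reduced to a single linear damping coefficient c in F = c * v, a multiplicative fault is injected as a step loss of damping, and recursive least squares tracks the coefficient so that a sustained drop flags the fault. This is illustrative only and is not the validated scheme of the paper.

      # Hypothetical sketch: recursive least squares (RLS) tracking of a damping
      # coefficient c in F = c * v; a multiplicative fault halves c mid-experiment.
      import numpy as np

      def rls_track(v, F, lam=0.98):
          theta, P, history = 0.0, 1e3, []
          for vk, Fk in zip(v, F):
              K = P * vk / (lam + vk * P * vk)
              theta += K * (Fk - vk * theta)
              P = (P - K * vk * P) / lam
              history.append(theta)
          return np.array(history)

      rng = np.random.default_rng(2)
      t = np.arange(0, 10, 0.01)
      v = np.sin(2 * np.pi * 1.5 * t)                    # relative damper velocity (synthetic)
      c_true = np.where(t < 5.0, 1500.0, 750.0)          # 50% damping loss injected at t = 5 s
      F = c_true * v + 20.0 * rng.standard_normal(t.size)
      c_hat = rls_track(v, F)
      print(f"estimate before fault: {c_hat[480]:.0f}, after fault: {c_hat[-1]:.0f}")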

  17. Tuning of fault tolerant control design parameters.

    PubMed

    DeLima, Pedro G; Yen, Gary G

    2008-01-01

    This paper presents two major contributions in the field of fault tolerant control. First, it gathers points of concern typical to most fault tolerant control applications and translates the chosen performance metrics into a set of six practical design specifications. Second, it proposes initialization and tuning procedures through which a particular fault tolerant control architecture not only can be set to comply with the required specifications, but also can be tuned online to compensate for a total of twelve properties, such as the noise rejection levels for fault detection and diagnosis signals. The proposed design is realized over a powerful architecture that combines the flexibility of adaptive critic designs with the long term memory and learning capabilities of a supervisor. This paper presents a practical design procedure to facilitate the applications of a fundamentally sound fault tolerant control architecture in real-world problems. PMID:18028929

  18. Faults, fluids, and southeast Missouri MVT deposits

    SciTech Connect

    Clendenin, C.W.

    1993-03-01

    A number of interpretations have been proposed to explain regional Late Paleozoic flow paths responsible for the southeast Missouri Mississippi Valley-type (MVT) deposits. In each interpretation the driving force for regional flow is the Ouachita orogeny. Differences in interpretations stem directly from how faults are treated hydrologically and are possible depending on whether faults are ignored or treated as barriers to flow. Observations and geochemical data are used here to re-examine the paleohydrology of southeast Missouri. Fault style and facies patterns argue against assumptions of any idealized aquifer system. Specific observations show that faults are barriers to and pathways for fluid flow in a hydrologically compartmentalized region. Regional relations further suggest that fluid flow out of the Reelfoot rift was via faults in the Precambrian basement, and new isotope studies support such an interpretation.

  19. Self-triggering superconducting fault current limiter

    DOEpatents

    Yuan, Xing (Albany, NY); Tekletsadik, Kasegn (Rexford, NY)

    2008-10-21

    A modular and scalable Matrix Fault Current Limiter (MFCL) that functions as a "variable impedance" device in an electric power network, using components made of superconducting and non-superconducting electrically conductive materials. The matrix fault current limiter comprises a fault current limiter module that includes a superconductor which is electrically coupled in parallel with a trigger coil, wherein the trigger coil is magnetically coupled to the superconductor. The current surge during a fault within the electrical power network will cause the superconductor to transition to its resistive state, generate a uniform magnetic field in the trigger coil, and simultaneously limit the voltage developed across the superconductor. This results in fast and uniform quenching of the superconductors, which significantly reduces the burnout risk associated with the non-uniformity that often exists within the volume of superconductor materials. The fault current limiter modules may be electrically coupled together to form various "n" (rows) × "m" (columns) matrix configurations.

  20. Maneuver Classification for Aircraft Fault Detection

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.

    2003-01-01

    Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data provide a reasonable match to known examples of proper operation. In the domain of fault detection in aircraft, identifying all possible faulty and proper operating modes is clearly impossible. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. To develop such a system, we use flight data collected under a controlled test environment, subject to many sources of variability. We explain where our classifier fits into the envisioned fault detection system as well as experiments showing the promise of this classification subsystem.
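
    A minimal sketch of the classification subsystem idea under stated assumptions: synthetic feature vectors stand in for vibration-derived features, integer labels stand in for maneuvers, and a generic random-forest classifier (scikit-learn) plays the role of the predictor. None of this reproduces the flight data or the models actually used in the study.

      # Hypothetical sketch: predict the maneuver class from (synthetic) vibration features.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      n_per_class, n_features, n_maneuvers = 200, 8, 4
      X = np.vstack([rng.normal(loc=m, scale=1.0, size=(n_per_class, n_features))
                     for m in range(n_maneuvers)])        # one feature cluster per maneuver
      y = np.repeat(np.arange(n_maneuvers), n_per_class)   # maneuver labels

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
      print(f"held-out maneuver classification accuracy: {clf.score(X_te, y_te):.2f}")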

  1. Classification of Aircraft Maneuvers for Fault Detection

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.; Koga, Dennis (Technical Monitor)

    2002-01-01

    Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data provide a reasonable match to known examples of proper operation. In the domain of fault detection in aircraft, the first assumption is unreasonable and the second is difficult to determine. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. To develop such a system, we use flight data collected under a controlled test environment, subject to many sources of variability. We explain where our classifier fits into the envisioned fault detection system as well as experiments showing the promise of this classification subsystem.

  2. Classification of Aircraft Maneuvers for Fault Detection

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data is a reasonable match to known examples of proper operation. In our domain of fault detection in aircraft, the first assumption is unreasonable and the second is difficult to determine. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. We explain where this subsystem fits into our envisioned fault detection system, as well as experiments showing the promise of this classification subsystem.

  3. Practical application of fault tree analysis

    SciTech Connect

    Prugh, R.W.

    1980-01-01

    A detailed survey of standard and novel approaches to Fault Tree construction, based on recent developments at Du Pont, covers the effect-to-cause procedure for control systems as in process plants; the effect-to-cause procedure for processes; source-of-hazard analysis, as in pressure vessel rupture; use of the "fire triangle" in a Fault Tree; critical combinations of safeguard failures; action points for automatic or operator control of a process; situations involving hazardous reactant ratios; failure-initiating and failure-enabling events and intervention by the operator; "daisy-chain" hazards, e.g., in batch processes and ship accidents; combining batch and continuous operations in a Fault Tree; possible future structure-development procedures for fault-tree construction; and the use of quantitative results (calculated frequencies of Top-Event occurrence) to restructure the Fault Tree after improving the process to any acceptable risk level.
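
    A minimal worked illustration of the quantitative step mentioned above (calculating the frequency of Top-Event occurrence from AND/OR gates over basic events), assuming independent basic events with made-up probabilities; the tree below is hypothetical and is not one of the Du Pont examples.

      # Hypothetical two-level fault tree evaluated with AND/OR gates over
      # independent basic events (illustrative annual probabilities only).
      def p_and(*ps):
          """AND gate: all independent inputs must occur."""
          out = 1.0
          for p in ps:
              out *= p
          return out

      def p_or(*ps):
          """OR gate: at least one independent input occurs."""
          q = 1.0
          for p in ps:
              q *= 1.0 - p
          return 1.0 - q

      p_overpressure = p_or(0.02, 0.03)         # either of two failure-initiating events
      p_safeguards_fail = p_and(0.01, 0.1)      # relief valve AND operator intervention both fail
      p_top = p_and(p_overpressure, p_safeguards_fail)
      print(f"Top-event probability per year ~ {p_top:.2e}")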

  4. Physiochemical Evidence of Faulting Processes and Modeling of Fluid in Evolving Fault Systems in Southern California

    SciTech Connect

    Boles, James

    2013-05-24

    Our study targets recent (Plio-Pleistocene) faults and young (Tertiary) petroleum fields in southern California. Faults include the Refugio Fault in the Transverse Ranges, the Ellwood Fault in the Santa Barbara Channel, and most recently the Newport-Inglewood in the Los Angeles Basin. Subsurface core and tubing scale samples, outcrop samples, well logs, reservoir properties, pore pressures, fluid compositions, and published structural-seismic sections have been used to characterize the tectonic/diagenetic history of the faults. As part of the effort to understand the diagenetic processes within these fault zones, we have studied analogous processes of rapid carbonate precipitation (scaling) in petroleum reservoir tubing and manmade tunnels. From this, we have identified geochemical signatures in carbonate that characterize rapid CO2 degassing. These data provide constraints for finite element models that predict fluid pressures, multiphase flow patterns, rates and patterns of deformation, subsurface temperatures and heat flow, and geochemistry associated with large fault systems.

  5. Analysis of the ecosystem structure of Laguna Alvarado, western Gulf of Mexico, by means of a mass balance model

    NASA Astrophysics Data System (ADS)

    Cruz-Escalona, V. H.; Arreguín-Sánchez, F.; Zetina-Rejón, M.

    2007-03-01

    Alvarado is one of the most productive estuary-lagoon systems in the Mexican Gulf of Mexico. It has great economic and ecological importance due to high fisheries productivity and because it serves as a nursery, feeding, and reproduction area for numerous populations of fishes and crustaceans. Because of this, extensive studies have focused on biology, ecology, fisheries (e.g. shrimp, oysters) and other biological components of the system during the last few decades. This study presents a mass-balanced trophic model for Laguna Alvarado to determine its structure and functional form, and to compare it with similar coastal systems of the Gulf of Mexico and Mexican Pacific coast. The model, based on the software Ecopath with Ecosim, consists of eighteen fish groups, seven invertebrate groups, and one group each of sharks and rays, marine mammals, phytoplankton, sea grasses and detritus. The acceptability of the model is indicated by the pedigree index (0.5), which ranges from 0 to 1 based on the quality of input data. The highest trophic level was 3.6, for marine mammals and snappers. Total system throughput reached 2680 t km-2 yr-1, of which total consumption made up 47%, respiratory flows 37% and flows to detritus 16%. The total system production was higher than consumption, and net primary production higher than respiration. The mean transfer efficiency was 13.8%. The mean trophic level of the catch was 2.3, and the primary production required to sustain the catch was estimated at 31 t km-2 yr-1. Ecosystem overhead was 2.4 times the ascendancy. Results suggest a balance between primary production and consumption. In contrast with other Mexican coastal lagoons, Laguna Alvarado differs strongly in its primary source of energy: here the primary producers (seagrasses) are more important than detritus pathways. This can be interpreted as a response to mangrove deforestation, overfishing, etc. Future work might include the compilation of fishing and biomass time trends to develop historical verification and fitting of temporal simulations.

  6. Deep Drilling at Laguna Potrok Aike, Argentina: Recovery of a Paleoclimate Record for the Last Glacial from the Southern Hemisphere

    NASA Astrophysics Data System (ADS)

    Zolitschka, B.; Anselmetti, F.; Ariztegui, D.; Corbella, H.; Francus, P.; Gebhardt, C.; Hahn, A.; Kliem, P.; Lücke, A.; Ohlendorf, C.; Schäbitz, F.

    2009-12-01

    Laguna Potrok Aike, located in the South-Patagonian province of Santa Cruz (52°58'S, 70°23'W), was formed 770 ka ago by a volcanic (maar) eruption. Within the framework of the ICDP-funded project PASADO two sites were drilled from September to November 2008 using the GLAD800 drilling platform. A total of 513 m of lacustrine sediments were recovered from the central deep basin by an international team. The sediments hold a unique record of paleoclimatic and paleoecological variability from a region sensitive to variations in southern hemispheric wind and pressure systems and thus significant for the understanding of the global climate system. Moreover, Laguna Potrok Aike is close to many active volcanoes allowing a better understanding of the history of volcanism in the Pali Aike Volcanic Field and in the nearby Andean mountain chain. These challenging scientific themes need to be tackled in a global context as both are of increasing socio-economic relevance. On-site core logging based on magnetic susceptibility data documents an excellent correlation between the quadruplicate holes drilled at Site 1 and between the triplicate holes recovered from Site 2. Also, correlation between both sites located 700 m apart from each other is feasible. After splitting the cores in the lab, a reference profile was established down to a composite depth of 107 m for the replicate cores from Site 2. Sediments consist of laminated and sand-layered lacustrine silts with an increasing number of turbidites and homogenites with depth. Below 80 m composite depth two mass movement deposits (10 m and 5 m in thickness) are recorded. These deposits show tilted and distorted layers as well as nodules of fine grained sediments and randomly distributed gravel. Such features indicate an increased slump activity probably related to lake level fluctuations or seismicity. Also with depth coarse gravel layers are present and point to changes in hydrological conditions in the catchment area. Intercalated throughout the record are 24 macroscopic volcanic ash layers that document the regional volcanic history and open the possibility to establish an independent time control through tephrochronology. These isochrones potentially act as links to marine sediment records from the South Atlantic and to Antarctic ice cores. Preliminary interpretation of all available data and extrapolation of sedimentation rates determined for the upper 16 ka indicate that the record may go back in time to oxygen isotope stage 5a and covers approximately the last 80 ka.

  7. Linking microbial assemblages to paleoenvironmental conditions from the Holocene and Last Glacial Maximum times in Laguna Potrok Aike sediments, Argentina

    NASA Astrophysics Data System (ADS)

    Vuillemin, Aurele; Ariztegui, Daniel; Leavitt, Peter R.; Bunting, Lynda

    2014-05-01

    Laguna Potrok Aike is a closed basin located in the southern hemisphere's mid-latitudes (52°S) where paleoenvironmental conditions were recorded as temporal sedimentary sequences resulting from variations in the regional hydrological regime and geology of the catchment. The interpretation of the limnogeological multiproxy record developed during the ICDP-PASADO project allowed the identification of contrasting time windows associated with the fluctuations of Southern Westerly Winds. In the framework of this project, a 100-m-long core was also dedicated to a detailed geomicrobiological study which aimed at a thorough investigation of the lacustrine subsurface biosphere. Indeed, aquatic sediments do not only record past climatic conditions, but also provide a wide range of ecological niches for microbes. In this context, the influence of environmental features upon microbial development and survival remained still unexplored for the deep lacustrine realm. Therefore, we investigated living microbes throughout the sedimentary sequence using in situ ATP assays and DAPI cell count. These results, compiled with pore water analysis, SEM microscopy of authigenic concretions and methane and fatty acid biogeochemistry, provided evidence for a sustained microbial activity in deep sediments and pinpointed the substantial role of microbial processes in modifying initial organic and mineral fractions. Finally, because the genetic material associated with microorganisms can be preserved in sediments over millennia, we extracted environmental DNA from Laguna Potrok Aike sediments and established 16S rRNA bacterial and archaeal clone libraries to better define the use of DNA-based techniques in reconstructing past environments. We focused on two sedimentary horizons both displaying in situ microbial activity, respectively corresponding to the Holocene and Last Glacial Maximum periods. Sequences recovered from the productive Holocene record revealed a microbial community adapted to subsaline conditions producing methane with a high potential of organic matter degradation. In contrast, sediments rich in volcanic detritus from the Last Glacial Maximum showed a substantial presence of lithotrophic microorganisms and sulphate-reducing bacteria mediating authigenic minerals. Together, these features suggested that microbial communities developed in response to climatic control of lake and catchment productivity at the time of sediment deposition. Prevailing climatic conditions exerted a hierarchical control on the microbial composition of lake sediments by regulating the influx of organic and inorganic material to the lake basin, which in turn determined water column chemistry, production and sedimentation of particulate material, resulting in the different niches sheltering these microbial assemblages. Moreover, it demonstrated that environmental DNA can constitute sedimentary archives of phylogenetic diversity and diagenetic processes over tens of millennia.

  8. Methodology for Designing Fault-Protection Software

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin; Levison, Jeffrey; Kan, Edwin

    2006-01-01

    A document describes a methodology for designing fault-protection (FP) software for autonomous spacecraft. The methodology embodies and extends established engineering practices in the technical discipline of Fault Detection, Diagnosis, Mitigation, and Recovery; and has been successfully implemented in the Deep Impact Spacecraft, a NASA Discovery mission. Based on established concepts of Fault Monitors and Responses, this FP methodology extends the notion of Opinion, Symptom, Alarm (aka Fault), and Response with numerous new notions, sub-notions, software constructs, and logic and timing gates. For example, Monitor generates a RawOpinion, which graduates into Opinion, categorized into no-opinion, acceptable, or unacceptable opinion. RaiseSymptom, ForceSymptom, and ClearSymptom govern the establishment and then mapping to an Alarm (aka Fault). Local Response is distinguished from FP System Response. A 1-to-n and n-to-1 mapping is established among Monitors, Symptoms, and Responses. Responses are categorized by device versus by function. Responses operate in tiers, where the early tiers attempt to resolve the Fault in a localized, step-by-step fashion, relegating more system-level response to later tier(s). Recovery actions are gated by epoch recovery timing, enabling strategy, urgency, MaxRetry gate, hardware availability, hazardous versus ordinary fault, and many other priority gates. This methodology is systematic, logical, and uses multiple linked tables, parameter files, and recovery command sequences. The credibility of the FP design is proven via a fault-tree analysis ("top-down") approach and a functional fault-modes-and-effects analysis ("bottom-up") approach. Via this process, the mitigation and recovery strategy(s) per Fault Containment Region scope the FP architecture (width versus depth).
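
    A minimal data-structure sketch of the 1-to-n / n-to-1 Monitor-Symptom-Response mapping and tiered responses the methodology describes, with hypothetical monitor, symptom and action names; the real design uses linked tables, parameter files and recovery command sequences rather than this toy dictionary.

      # Hypothetical sketch of an n-to-1 / 1-to-n mapping between monitors, symptoms and
      # tiered responses, with early tiers local and later tiers system-level.
      FP_TABLE = {
          "symptoms": {
              # several monitors may raise the same symptom (n-to-1) ...
              "STAR_TRACKER_STALE": {"monitors": ["st_data_age", "st_heartbeat"]},
              "WHEEL_OVERSPEED":    {"monitors": ["rw1_speed"]},
          },
          "responses": {
              # ... and one symptom may map to several tiered responses (1-to-n).
              "STAR_TRACKER_STALE": [
                  {"tier": 1, "action": "reset_star_tracker", "max_retry": 2},
                  {"tier": 2, "action": "swap_to_backup_tracker", "max_retry": 1},
                  {"tier": 3, "action": "enter_safe_mode"},
              ],
              "WHEEL_OVERSPEED": [
                  {"tier": 1, "action": "unload_momentum"},
                  {"tier": 2, "action": "enter_safe_mode"},
              ],
          },
      }

      def next_response(symptom, tier_already_tried):
          """Return the lowest-tier response not yet attempted for a raised symptom."""
          for resp in FP_TABLE["responses"][symptom]:
              if resp["tier"] > tier_already_tried:
                  return resp
          return None

      print(next_response("STAR_TRACKER_STALE", tier_already_tried=1))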

  9. Structural evolution and fault interaction of the frontal Longmen Shan fault zone

    NASA Astrophysics Data System (ADS)

    Chang, C.; Xu, X.; Yuan, R.; Li, K.; Sun, X.

    2013-12-01

    Field investigations show that the Wenchuan earthquake (Mw 7.9) on the 12th of May 2008 ruptured two NW-dipping imbricate reverse faults along the Longmen Shan fault zone at the eastern margin of the Tibetan Plateau. The length of the Beichuan-Yingxiu Fault reaches nearly 240 km. Southeast of this fault, a smaller displacement occurred along the Guanxian-Jiangyou Fault, which has a length of about 70 km. A 7 km long NW-striking left-lateral reverse fault, the Xiaoyudong Fault, was clearly observed between these two main surface ruptures. This co-seismic surface rupture pattern, involving multiple structures, is one of the most complicated patterns of recent great earthquakes. The surface rupture length is the longest among the co-seismic surface rupture zones for reverse faulting events ever reported. The Lushan Ms 7.0 earthquake on the 20th of April 2013 is another destructive earthquake which struck the Longmen Shan area five years after the 2008 Wenchuan earthquake. The epicentre of the Lushan earthquake is located in a southern segment of the Longmen Shan fault zone and is about 85 km away from the initial nucleation point or epicentre of the Wenchuan earthquake. Our detailed field investigations reveal that the surface rupture of the Wenchuan earthquake cascaded through several pre-existing fault segments. However, no apparent surface rupture has been found along the faults near the Lushan epicentre or in their adjacent areas. Combining the rupture distribution, the rupture pattern, the displacement amount, the aftershock distribution, and the stress orientation calculated from the fault slickenside striations, we propose a multi-segment rupturing model to explain the structural evolution and fault interaction of the Longmen Shan fault zone.

  10. Fault heterogeneity and earthquake scaling

    NASA Astrophysics Data System (ADS)

    Hetherington, Alison; Steacy, Sandy

    2007-08-01

    There is an on-going debate in the seismological community as to whether stress drop is independent of earthquake size and this has important implications for earthquake physics. Here we investigate this question in a simple 2D cellular automaton that includes heterogeneity. We find that when the range of heterogeneity is low, the scaling approaches that of constant stress drop. However, clear deviations from the constant stress drop model are observed when the range of heterogeneity is large. Further, fractal distributions of strength show more significant departures from constant scaling than do random ones. Additionally, sub-sampling the data over limited magnitude ranges can give the appearance of constant stress drop even when the entire data set does not support this. Our results suggest that deviations from constant earthquake scaling are real and reflect the heterogeneity of natural fault zones, but may not provide much information about the physics of earthquakes.

  11. Reconfigurable fault tolerant avionics system

    NASA Astrophysics Data System (ADS)

    Ibrahim, M. M.; Asami, K.; Cho, Mengu

    This paper presents the design of a reconfigurable avionics system based on modern Static Random Access Memory (SRAM)-based Field Programmable Gate Array (FPGA) to be used in future generations of nano satellites. A major concern in satellite systems, and especially nano satellites, is to build robust systems with low-power consumption profiles. The system is designed to be flexible by providing the capability of reconfiguring itself based on its orbital position. As Single Event Upsets (SEU) do not have the same severity and intensity in all orbital locations, reaching their maximum at the South Atlantic Anomaly (SAA) and the polar cusps, the system does not have to be fully protected all the time in its orbit. An acceptable level of protection against high-energy cosmic rays and charged particles roaming in space is provided within the majority of the orbit through software fault tolerance. Checkpointing and rollback, together with control-flow assertions, are used for that level of protection. In the small part of the orbit where severe SEUs are expected, a reconfiguration of the system FPGA is initiated in which the processor system is triplicated and protection through Triple Modular Redundancy (TMR) with feedback is provided. This technique of reconfiguring the system according to the level of threat expected from SEU-induced faults helps reduce the average dynamic power consumption of the system to one-third of its maximum. This technique can be viewed as smart protection through system reconfiguration. The system is built on the commercial version of the Xilinx Virtex-5 (XC5VLX50) FPGA on bulk silicon with 324 I/O. Simulations of orbit SEU rates were carried out using the SPENVIS web-based software package.
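
    A minimal sketch of the majority-voting idea behind TMR, assuming three redundant channels producing integer words that are voted bit-wise so a single upset channel is masked; this is illustrative only and says nothing about the actual FPGA implementation.

      # Hypothetical bit-wise majority voter for triple modular redundancy (TMR):
      # each output bit is the majority of the three channel bits, so a single
      # upset channel is masked.
      def tmr_vote(a: int, b: int, c: int) -> int:
          return (a & b) | (a & c) | (b & c)

      golden = 0b1011_0110
      upset = golden ^ 0b0000_1000          # single-event upset flips one bit in channel c
      assert tmr_vote(golden, golden, upset) == golden
      print(f"voted output: {tmr_vote(golden, golden, upset):#010b}")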

  12. Robot Position Sensor Fault Tolerance

    NASA Technical Reports Server (NTRS)

    Aldridge, Hal A.

    1997-01-01

    Robot systems in critical applications, such as those in space and nuclear environments, must be able to operate during component failure to complete important tasks. One failure mode that has received little attention is the failure of joint position sensors. Current fault tolerant designs require the addition of directly redundant position sensors which can affect joint design. A new method is proposed that utilizes analytical redundancy to allow for continued operation during joint position sensor failure. Joint torque sensors are used with a virtual passive torque controller to make the robot joint stable without position feedback and improve position tracking performance in the presence of unknown link dynamics and end-effector loading. Two Cartesian accelerometer based methods are proposed to determine the position of the joint. The joint specific position determination method utilizes two triaxial accelerometers attached to the link driven by the joint with the failed position sensor. The joint specific method is not computationally complex and the position error is bounded. The system wide position determination method utilizes accelerometers distributed on different robot links and the end-effector to determine the position of sets of multiple joints. The system wide method requires fewer accelerometers than the joint specific method to make all joint position sensors fault tolerant but is more computationally complex and has lower convergence properties. Experiments were conducted on a laboratory manipulator. Both position determination methods were shown to track the actual position satisfactorily. A controller using the position determination methods and the virtual passive torque controller was able to servo the joints to a desired position during position sensor failure.

  13. Fault failure with moderate earthquakes

    USGS Publications Warehouse

    Johnston, M.J.S.; Linde, A.T.; Gladwin, M.T.; Borcherdt, R.D.

    1987-01-01

    High resolution strain and tilt recordings were made in the near-field of, and prior to, the May 1983 Coalinga earthquake (ML = 6.7, Δ = 51 km), the August 4, 1985, Kettleman Hills earthquake (ML = 5.5, Δ = 34 km), the April 1984 Morgan Hill earthquake (ML = 6.1, Δ = 55 km), the November 1984 Round Valley earthquake (ML = 5.8, Δ = 54 km), the January 14, 1978, Izu, Japan earthquake (ML = 7.0, Δ = 28 km), and several other smaller magnitude earthquakes. These recordings were made with near-surface instruments (resolution 10^-8), with borehole dilatometers (resolution 10^-10) and a 3-component borehole strainmeter (resolution 10^-9). While observed coseismic offsets are generally in good agreement with expectations from elastic dislocation theory, and while post-seismic deformation continued, in some cases, with a moment comparable to that of the main shock, preseismic strain or tilt perturbations from hours to seconds (or less) before the main shock are not apparent above the present resolution. Precursory slip for these events, if any occurred, must have had a moment less than a few percent of that of the main event. To the extent that these records reflect general fault behavior, the strong constraint on the size and amount of slip triggering major rupture makes prediction of the onset times and final magnitudes of the rupture zones a difficult task unless the instruments are fortuitously installed near the rupture initiation point. These data are best explained by an inhomogeneous failure model for which various areas of the fault plane have either different stress-slip constitutive laws or spatially varying constitutive parameters. Other work on seismic waveform analysis and synthetic waveforms indicates that the rupturing process is inhomogeneous and controlled by points of higher strength. These models indicate that rupture initiation occurs at smaller regions of higher strength which, when broken, allow runaway catastrophic failure. © 1987.

  14. Detachment Faults in Ocean Continent Transitions

    NASA Astrophysics Data System (ADS)

    Manatschal, G.; Peron-Pinvidic, G.

    2005-12-01

    Ancient models of continental break-up conventionally juxtapose normal continental and oceanic crusts. However, deep-sea drilling in the Iberia margin and observations in the Alpine Tethys margins exposed in the Alps provide compelling evidence that these two crusts are separated by continental mantle commonly interpreted to be exhumed at the seafloor by lithospheric-scale detachment faulting. In the Iberia margin, detachment faults were interpreted to coincide with strong seismic reflections (e.g. S and H reflections) and have been drilled at ODP Sites 900, 1067 and 1068. Based on kinematic inversion of seismic sections and drill-hole data, it was shown that the detachment faults formed as a sequence of high-angle faults during a late stage of rifting in a previously thinned, less than 10 km thick crust at rates of 1 to 2 cm/yr. With ongoing extension, the faults rotated and changed from upward to downward concave faults, enabling mantle rocks to be exhumed over tens of kilometres without producing major seafloor topography. In the Alps, remnants of detachment faults belonging to the former Ocean Continent Transition (OCT) of the Alpine Tethys are spectacularly exposed in several places in SE Switzerland. As in the Iberia example, these structures show break-aways towards the continent and cut oceanwards into serpentinized mantle peridotites. The detachment faults are covered either by extensional allochthons of continental origin or sediments, further oceanwards also by basalts. Detailed mapping combined with structural and petrological investigations shows that these detachment faults were active in the stability field of serpentine. The detachment faults show a complex relationship to high-temperature mantle mylonites (>700 °C) and infiltrated mantle peridotites. Further studies are necessary to unravel the complex relationship between shallow and deep lithospheric deformation processes as well as between magmatic and hydration processes interacting with mantle exhumation along detachment faults. The available data favour the hypothesis that the detachment faults did not root into an asthenospheric mantle, but were more likely interacting with a weak subhorizontal decollement in the mantle. Such a weak zone may be related either to a hydration or an infiltration front at temperatures >700 °C. The 3D geometry of detachment faults in the OCT is very complex and shows some similarities with those observed at oceanic core complexes. In the Err nappe in the Alps, preserved detachment structures can be mapped over an area of about 30 km2. The mapped fault planes are either corrugated parallel to the transport direction or form lateral ramps reactivating pre-existing structures. Mapping of the reflections interpreted as detachment faults in the Iberia margin shows that on the scale of the margin, these structures form domes and ridges. Moreover, extensional allochthons overlying exhumed mantle can be correlated along strike with a series of fault-bounded blocks overlying strong intra-basement reflections interpreted as detachment faults. These observations suggest that detachment faults in the OCT are poly-phase structures that form during a final stage of continental break-up and continue to deform after their exhumation at the seafloor. The scale, 3D geometry and the processes controlling the evolution of detachment faults in the OCT are not yet sufficiently constrained to draw further conclusions or to compare them with oceanic core complexes.

  15. Facies composition and scaling relationships of extensional faults in carbonates

    NASA Astrophysics Data System (ADS)

    Bastesen, Eivind; Braathen, Alvar

    2010-05-01

    Fault seal evaluations in carbonates are challenged by limited input data. Our analysis of 100 extensional faults in shallow-buried layered carbonate rocks aims to improve forecasting of fault core characteristics in these rocks. We have analyzed the spatial distribution of fault core elements described using a Fault Facies classification scheme; a method specifically developed for 3D fault description and quantification, with application in reservoir modelling. In modelling, the fault envelope is populated with fault facies originating from the host rock, the properties of which (e.g. dimensions, geometry, internal structure, petrophysical properties, and spatial distribution of structural elements) are defined by outcrop data. Empirical data sets were collected from outcrops of extensional faults in fine grained, micro-porosity carbonates from western Sinai (Egypt), Central Spitsbergen (Arctic Norway), and Central Oman (Adam Foothills) which all have experienced maximum burial of 2-3 kilometres and exhibit displacements ranging from 4 centimetres to 400 meters. Key observations include fault core thickness, intrinsic composition and geometry. The studied fault cores display several distinct fault facies and facies associations. Based on geometry, fault cores can be categorised as distributed or localized. Each can be further sub-divided according to the presence of shale smear, carbonate fault rocks and cement/secondary calcite layers. Fault core thickness in carbonate rocks may be controlled by several mechanisms: (1) Mechanical breakdown: Irregularities such as breached relays and asperities are broken down by progressive faulting and fracturing to eventually form a thicker fault rock layer. (2) Layer shearing: Accumulations of shale smear along the fault core. (3) Diagenesis; pressure solution, karstification and precipitation of secondary calcite in the core. Observed fault core thicknesses scatter over three orders of magnitude, with a D/T range of 1:1 to 1:1000. In general the complete dataset shows a positive correlation between thickness (T) of fault cores and the displacement (D) on faults. For increasing displacement relationships, the D/T relationship is not constant. The D/T relationship is generally higher for small faults than for larger faults, which implies that comparisons between small and large fault with respect to this parameter should be handled with care. Fault envelope composition, as reflected by the relative proportions of different fault facies in the core, varies with displacement. In small scale faults (0-1 m displacement), secondary calcite layers and fault gouge dominate, whereas shale dominated fault rocks (shale smear) and carbonate dominated fault rocks (breccias) constitute minor components. Shale dominated fault rocks are restricted to shale-rich protoliths, and fault breccias to break-down of lenses formed near fault jogs. In medium scale faults (1-10m), fault rocks form the dominating facies, whereas the amount of secondary calcite layers decreases due to transformation into breccias. Further, in shale rich carbonates the fault cores consist of composite facies associations. In major faults (10-300 m displacement) fault rock layers and lenses dominate the fault cores. A common observation in large scale faults is a distinct layering of different fault rocks, shale smearing of major shale layers and massive secondary calcite layers along slip surfaces. 
Fault core heterogeneity in carbonates is ascribed to the distribution of fault facies, such as fault rocks, secondary calcite layers and shale smear. In a broader sense, facies distribution and thickness are controlled by displacement, protolith and tectonic environment. The heterogeneous properties and the varied distribution observed in this study may be valuable in forecasting fault seal characteristics of carbonate reservoirs.

  16. Structural and geomorphic fault segmentations of the Doruneh Fault System, central Iran

    NASA Astrophysics Data System (ADS)

    Farbod, Yassaman; Bellier, Olivier; Shabanian, Esmaeil; Abbassi, Mohammad Reza

    2010-05-01

    The active tectonics of Iran results from the northward Arabia-Eurasia convergence at a rate of ~22±2 mm/yr at the longitude of Bahrain (e.g., Sella et al., 2002). At the southwestern and southern boundaries of the Arabia-Eurasia collision zone, the convergence is taken up by the continental collision in the Zagros Mountains and the active subduction of Makran, respectively. Further north, the northward motion not absorbed by the Makran subduction is expressed as N-trending right-lateral shear between central Iran and Eurasia at a rate of ~16 mm/yr (e.g., Regard et al., 2005; Vernant et al., 2004). This shear involves N-trending right-lateral fault systems, which extend along both sides of the Lut block up to the latitude of 34°N. North of this latitude, at about 35°N, the left-lateral Doruneh Fault separates the N-trending right-lateral fault systems from the northern deformation domains (i.e., the Alborz, Kopeh Dagh and Binalud mountain ranges). At the Iranian tectonic scale, the Doruneh Fault represents a curved, 600-km-long structure through central Iran, which runs westward from the Iran-Afghanistan boundary (i.e., the eastern boundary of the Arabia-Eurasia collision zone) to the Great Kavir desert. Nevertheless, east of the longitude of 56°45'E, the fault is expressed as an E-trending ~360-km-long fault (hereinafter the Doruneh Fault System - DFS) having a geological evolution history different from the western part (the Great Kavir Fault System). In this study, we seek to characterize the geomorphic and structural features of active faulting on the DFS. Detailed structural and geomorphic mapping based on satellite imagery (SPOT5 and Landsat ETM+) and SRTM digital topographic data, complemented with field surveys, allowed us to establish structural and geomorphic segmentations along the DFS. According to our observations, the DFS comprises three distinct fault zones: (1) the 100-km-long, N75°E-trending western fault zone, which is characterized by left-handed step-over geometry and its associated geomorphic features such as pull-apart basins, (2) the 100-km-long, E-trending central fault zone, characterized by pure left-lateral offsets recorded by incised alluvial fan and drainage systems, and (3) the 160-km-long, N115°E-trending eastern fault zone, along which the active faulting is distributed into a 24-km-wide (maximum) fault zone characterized by Quaternary reverse faulting and thrust-parallel folding. At the regional scale, the eastern fault zone resembles a horsetail fault termination of the DFS. Our results indicate that the central fault zone is a pure left-lateral strike-slip fault. Taking the northward convexity of the DFS into account, such pure strike-slip faulting on the central fault zone involved (1) the eastern fault zone in a compressional regime, and (2) the western fault zone in a transtensional tectonic regime. These structural relationships led us to propose a tectonic model in which the central fault zone controls the deformation pattern and faulting mechanism on both terminations of the DFS.

  17. An observer based approach for achieving fault diagnosis and fault tolerant control of systems modeled as hybrid Petri nets.

    PubMed

    Renganathan, K; Bhaskar, VidhyaCharan

    2011-07-01

    In this paper, we propose an approach for achieving detection and identification of faults, and provide fault tolerant control for systems that are modeled using timed hybrid Petri nets. For this purpose, an observer based technique is adopted which is useful in detection of faults, such as sensor faults, actuator faults, signal conditioning faults, etc. The concepts of estimation, reachability and diagnosability have been considered for analyzing faulty behaviors, and based on the detected faults, different schemes are proposed for achieving fault tolerant control using optimization techniques. These concepts are applied to a typical three tank system and numerical results are obtained. PMID:21507399

  18. Is There any Relationship Between Active Tabriz Fault Zone and Bozkush Fault Zones, NW Iran?

    NASA Astrophysics Data System (ADS)

    ISIK, V.; Saber, R.; Caglayan, A.

    2012-12-01

    Tectonic plate motions and consequent earthquakes can be actively observed across northwestern Iran. The Tabriz fault zone (TFZ), also called the North Tabriz fault, is an active right-lateral strike-slip fault zone with slip rates estimated at ~8 mm/yr that has been vigorously deforming much of northwestern Iran over the past several million years. Historical earthquakes on the TFZ were of large magnitude, ruptured complementary segments of the fault zone, and changed the landscape of the surrounding regions. Near the city of Bostanabad, the TFZ is more segmented, with several strands, and is joined by a series of WNW-ESE trending faults, called the Bozkush fault zones. The Bozkush fault zones (BFZs) (south and north), bounding the arch-shaped Bozkush mountains, generate not only hundreds of small earthquakes each year but have also produced significant, historically documented earthquakes. The rock units deformed within the BFZs include Eocene-Oligocene volcanic rocks with intercalated limestone, Oligo-Miocene clastic rocks with intercalated gypsiferous marl, and Plio-Quaternary volcano-sedimentary rocks, travertine and alluvium. The North and South Bozkush fault zones are characterized by the development of structures typically associated with transpression. These include right-lateral strike-slip faults, thrust faults and folds. Our field studies indicate that these zones include steep to sub-vertical fault surfaces trending NW and NE with slickenlines. Slickensides preserve brittle kinematic indicators (e.g., Riedel shear patterns, slickenside marks) suggesting both dextral displacements and top-to-the-NE/NW and -SE/SW senses of shearing. In addition, mesoscopic and microscopic ductile kinematic indicators (e.g., asymmetric porphyroclasts, C/S fabrics) within Miocene gypsum marl show dextral displacements. Fault rocks along most of these faults consist of incohesive fault breccia and gouge. Adjacent to the fault contact, evidence of bedding in Oligo-Miocene and Plio-Quaternary units is obliterated and the strata are folded into NW-trending folds. The geometry of the BFZs and the partitioning of deformation within them indicate positive flower structures, which are commonly noted in zones of transpression. Our preliminary results suggest that the Bozkush fault zones show evidence of late Quaternary activity, similar to the coeval right-lateral strike-slip faulting with a thrust component taking place along the Tabriz fault zone at the present time.

  19. The End Of Chi-Shan Fault:Tectonic of Transtensional Fault

    NASA Astrophysics Data System (ADS)

    Chou, H.; Song, G.

    2011-12-01

    The Chishan fault is an active strike-slip fault located in southwestern Taiwan that extends into the offshore area of SouShan in Kaohsiung. The strike and dip of the fault are N80°E, 50°N. It is believed that the Wushan Formation on the Chishan fault, which is composed of sandstone, is thrust upon the northwestern Kutingkeng Formation, which is composed of mudstone. The Chishan fault thus acts as a reverse fault with sinistral motion (Tsan and Keng, 1968; Hsieh, 1970; Wen-Pu Geng, 1981). This left-lateral strike-slip fault extends to the shelf break and stops there, with a transtensional basin at its termination. The transtensional basin has stopped extending toward the open sea, whereas it is spreading toward the inshore area. Therefore, young extensional activity is developing on the seabed offshore of Tsoying Naval Port, and this activity is related to transtension on the left-lateral fault (Gwo-Shyh Song, 2010). The tectonics of transtensional basins deformed in strike-slip settings on land have been described by many authors, but field outcrops can be destroyed by weathering, leaving the tectonic features incomplete. Hence, this research uses multibeam bathymetry and 3.5-kHz sub-bottom profiler data collected from the offshore extension of the Chishan fault in Kaohsiung to define its transtensional characteristics. First, we use the multibeam bathymetry data to make a geomorphological map of the research area, in which a triangular depressed area can be seen near the shelf break. Then, we use Fledermaus to produce a 3D diagram to understand the distribution of the major normal faults (fig. 1). Furthermore, we find a number of listric normal faults, and the area between the listric faults is curved. After that, we use the 3.5-kHz sub-bottom profiler data to understand the subsurface structure of the normal faults and of the curved area between the listric normal faults, which appears to consist of en échelon folds. As the amount of displacement on the wrench zone increases, the initial en échelon folds are broken first by fractures and then by faults (Wilcox, 1973). Therefore, we infer that the seabed is deformed by shear strain of the Chishan fault. Finally, we illustrate the incremental strain associated with simple-shear deformation and infer the mode of motion in the research area. This triangular depressed area appears to be a flower structure developing in a pull-apart basin created by the Chishan fault.

  20. High Resolution Seismic Imaging of Fault Zones: Methods and Examples From The San Andreas Fault

    NASA Astrophysics Data System (ADS)

    Catchings, R. D.; Rymer, M. J.; Goldman, M.; Prentice, C. S.; Sickler, R. R.; Criley, C.

    2011-12-01

    Seismic imaging of fault zones at shallow depths is challenging. Conventional seismic reflection methods do not work well in fault zones that consist of non-planar strata or that have large variations in velocity structure, two properties that occur in most fault zones. Understanding the structure and geometry of fault zones is important to elucidate the earthquake hazard associated with fault zones and the barrier effect that faults impose on subsurface fluid flow. In collaboration with the San Francisco Public Utilities Commission (SFPUC) at San Andreas Lake on the San Francisco peninsula, we acquired combined seismic P-wave and S-wave reflection, refraction, and guided-wave data to image the principal strand of the San Andreas Fault (SAF) that ruptured the surface during the 1906 San Francisco earthquake and additional fault strands east of the rupture. The locations and geometries of these fault strands are important because the SFPUC is seismically retrofitting the Hetch Hetchy water delivery system, which provides much of the water for the San Francisco Bay area, and the delivery system is close to the SAF at San Andreas Lake. Seismic reflection images did not image the SAF zone well due to the brecciated bedrock, a lack of layered stratigraphy, and widely varying velocities. Tomographic P-wave velocity images clearly delineate the fault zone as a low-velocity zone at about 10 m depth in more competent rock, but due to soil saturation above the rock, the P-waves do not clearly image the fault strands at shallower depths. S-wave velocity images, however, clearly show a diagnostic low-velocity zone at the mapped 1906 surface break. To image the fault zone at greater depths, we utilized guided waves, which exhibit high-amplitude seismic energy within fault zones. The guided waves appear to image the fault zone at varying depths depending on the frequency of the seismic waves. At higher frequencies (~30 to 40 Hz), the guided waves show strong amplification at the 1906 surface break and at about 20 m to the east, but at lower frequencies (2-5 Hz), the guided waves show strong amplification approximately 10 m east of the 1906 surface break. We attribute the difference in amplification of guided waves to an east-dipping fault strand that merges with other strands below about 10 m depth. Vp/Vs and Poisson's ratios clearly delineate multiple fault strands about 2 km north of the mapped 1906 surface break at the SFPUC intake structure. Combining these fault-imaging methods provides a powerful set of tools for mapping fault zones in the shallow subsurface in areas of complex geology.
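
    The delineation of fault strands from Vp/Vs above relies on the standard relation between the Vp/Vs ratio and Poisson's ratio for an isotropic elastic medium. The short sketch below evaluates that relation for a few illustrative Vp/Vs values; the values themselves are assumptions, not measurements from the survey.

```python
# Convert a Vp/Vs ratio to Poisson's ratio for an isotropic elastic medium:
# nu = (r^2 - 2) / (2 * (r^2 - 1)), with r = Vp/Vs. The sample ratios below
# are illustrative only, not values from the San Andreas Lake survey.
def poissons_ratio(vp_vs):
    r2 = vp_vs ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

for vp_vs in (1.6, 1.73, 2.0, 2.5):  # higher ratios are typical of saturated or damaged rock
    print(f"Vp/Vs = {vp_vs:.2f} -> Poisson's ratio = {poissons_ratio(vp_vs):.3f}")
```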

  1. Underground Cable Fault Location Reference Manual. Final report

    SciTech Connect

    Bascom, E.C.; Di Carlo, S.; Mauser, S.F.; McDermott, T.E.; Smith, D.R.; Gnerlich, H.; Medek, J.D.; Seman, G.W.

    1995-11-01

    Successful cable fault locations depend as much on the fault locator's knowledge and experience as on the quality or quantity of fault location equipment. This fault location reference manual provides practical technical material in the art and science of locating cable faults, including a description of common fault location instruments and principles of advanced fault location techniques used by utilities throughout the United States. A unique feature of this manual is the durable, removable fault location method summaries that field crews can insert into their personal field manuals. Also available in connection with this manual is expert system software, Fault Analysis and Underground Location Techniques (FAULT®), for use in selecting a fault location method.

  2. Secondary forest succession and tree planting at the Laguna Cartagena and Cabo Rojo wildlife refuges in southwestern Puerto Rico.

    PubMed

    Weaver, Peter L; Schwagerl, Joseph J

    2008-12-01

    Secondary forest succession and tree planting are contributing to the recovery of the Cabo Rojo refuge (Headquarters and Salinas tracts) and Laguna Cartagena refuge (Lagoon and Tinaja tracts) of the Fish and Wildlife Service in southwestern Puerto Rico. About 80 species, mainly natives, have been planted on 44 ha during the past 25 y in an effort to reduce the threat of grass fires and to restore wildlife habitat. A 2007 survey of 9-y-old tree plantings on the Lagoon tract showed satisfactory growth rates for 16 native species. Multiple stems from individual trees at ground level were common. A sampling of secondary forest on the entire 109 ha Tinaja tract disclosed 141 native tree species, or 25% of Puerto Rico's native tree flora, along with 20 exotics. Five tree species made up about 58% of the total basal area, and seven species were island endemics. Between 1998 and 2003, tree numbers and basal area, as well as tree heights and diameter at breast height values (diameter at 1.4 m above the ground), increased on the lower 30 ha of the Tinaja tract. In this area, much of it subject to fires and grazing through 1996, exotic trees made up 25% of the species. Dry forest throughout the tropics is an endangered habitat, and its recovery (i.e., in biomass, structure, and species composition) at Tinaja may exceed 500 y. Future forests, however, will likely contain some exotics. PMID:19205183

  3. Evolution of unrest at Laguna del Maule volcanic field (Chile) from InSAR and GPS measurements, 2003 to 2014

    NASA Astrophysics Data System (ADS)

    Mével, Hélène Le; Feigl, Kurt L.; Córdova, Loreto; DeMets, Charles; Lundgren, Paul

    2015-08-01

    The Laguna del Maule (LdM) volcanic field in the southern volcanic zone of the Chilean Andes exhibits a large volume of rhyolitic material erupted during postglacial times (20-2 ka). Since 2007, LdM has experienced an unrest episode characterized by high rates of deformation. Analysis of new GPS and Interferometric Synthetic Aperture Radar (InSAR) data reveals uplift rates greater than 190 mm/yr between January 2013 and November 2014. The geodetic data are modeled as an inflating sill at depth. The results are used to calculate the temporal evolution of the vertical displacement. The best time function for modeling the InSAR data set is a double exponential model with rates increasing from 2007 through 2010 and decreasing slowly since 2010. We hypothesize that magma intruding into an existing silicic magma reservoir is driving the surface deformation. Modeling historical uplift at Yellowstone, Long Valley, and Three Sisters volcanic fields suggests a common temporal evolution of vertical displacement rates.
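
    The abstract above describes fitting a double-exponential time function to the InSAR-derived displacement history. The sketch below fits such a function to synthetic uplift data; the parameterization, the synthetic data, and the starting values are illustrative assumptions, not the study's actual model or measurements.

```python
# Minimal sketch: fit a double-exponential vertical-displacement history,
# assuming u(t) = a1*(1 - exp(-t/tau1)) + a2*(1 - exp(-t/tau2)).
# All data here are synthetic; the study's model form may differ.
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, a1, tau1, a2, tau2):
    return a1 * (1 - np.exp(-t / tau1)) + a2 * (1 - np.exp(-t / tau2))

rng = np.random.default_rng(0)
t_obs = np.linspace(0, 8, 40)                       # years since 2007 (synthetic)
u_obs = double_exp(t_obs, 0.4, 1.5, 1.2, 6.0)       # metres of uplift (synthetic)
u_obs += rng.normal(0, 0.02, t_obs.size)            # measurement noise

popt, _ = curve_fit(double_exp, t_obs, u_obs, p0=[0.5, 1.0, 1.0, 5.0])
print("fitted a1, tau1, a2, tau2:", np.round(popt, 2))
```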

  4. Geothermal asymmetry across a continental transform fault inferred from thermochronology: the Motagua Fault Zone, Guatemala

    NASA Astrophysics Data System (ADS)

    Simon-Labric, Thibaud; Brocard, Gilles Y.; Teyssier, Christian; van der Beek, Peter A.; Giuditta Fellin, Maria; Reiners, Peter W.; Authemayou, Christine

    2013-04-01

    Strike-slip faults juxtapose crustal blocks with different geodynamic origins and potentially different thermal structures. Large-magnitude horizontal displacements along these faults may juxtapose terranes with contrasting thermal regimes. The effect of strike-slip faulting on the cooling histories that are derived from thermochronological dating remains poorly documented. We have used the zircon (U-Th)/He method in order to construct age-elevation profiles across the Motagua fault zone, a 500 km-long segment of the transform boundary between the North American and Caribbean plates. We combine our results with published thermochronological data to document a sharp cooling-age discontinuity across the Motagua fault. This discontinuity could be interpreted as a difference in denudation history on each side of the fault. However, a low-relief Miocene erosional surface extends across the fault; this surface has been uplifted and incised and provides a geomorphic argument against differential denudation across the fault. Using surface heat-flow data, thermochronological age-elevation profiles and three-dimensional thermo-kinematic modeling, we propose that strike-slip displacement has juxtaposed the cold Maya block (s.s.) to the north against the hot, arc-derived Chortís block (s.s.) to the south. Large-scale horizontal displacement along the Motagua fault maintained this geothermal asymmetry across the fault and explains both the age discontinuities and the age-elevation patterns. This study illustrates how thermochronology can be used to detect large strike-slip displacements.

  5. Data Fault Detection in Medical Sensor Networks

    PubMed Central

    Yang, Yang; Liu, Qian; Gao, Zhipeng; Qiu, Xuesong; Meng, Luoming

    2015-01-01

    Medical body sensors can be implanted or attached to the human body to monitor the physiological parameters of patients all the time. Inaccurate data due to sensor faults or incorrect placement on the body will seriously influence clinicians' diagnosis, therefore detecting sensor data faults has been widely researched in recent years. Most of the typical approaches to sensor fault detection in the medical area ignore the fact that the physiological indexes of patients aren't changing synchronously at the same time, and fault values mixed with abnormal physiological data due to illness make it difficult to determine true faults. Based on these facts, we propose a Data Fault Detection mechanism in Medical sensor networks (DFD-M). Its mechanism includes: (1) use of a dynamic-local outlier factor (D-LOF) algorithm to identify outlying sensed data vectors; (2) use of a linear regression model based on trapezoidal fuzzy numbers to predict which readings in the outlying data vector are suspected to be faulty; (3) the proposal of a novel judgment criterion of fault state according to the prediction values. The simulation results demonstrate the efficiency and superiority of DFD-M. PMID:25774708
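
    The D-LOF algorithm itself is not specified in the abstract, so the sketch below only illustrates the general first step of flagging outlying sensed-data vectors with a local-outlier-factor score, using scikit-learn's standard LOF implementation on hypothetical vital-sign vectors; the fuzzy-regression and judgment stages are not reproduced.

```python
# Illustrative only: standard LOF on synthetic vital-sign vectors
# (heart rate, SpO2, temperature); not the paper's D-LOF variant.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.normal(75, 5, 200),      # heart rate (bpm)
    rng.normal(98, 0.5, 200),    # SpO2 (%)
    rng.normal(36.8, 0.2, 200),  # temperature (deg C)
])
suspect = np.array([[75.0, 70.0, 36.8]])  # SpO2 stuck low: a possible sensor fault

lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(normal)
print("LOF label for suspect vector:", lof.predict(suspect))  # -1 indicates an outlier
```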

  6. Formal Validation of Fault Management Design Solutions

    NASA Technical Reports Server (NTRS)

    Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John

    2013-01-01

    The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.

  7. Data fault detection in medical sensor networks.

    PubMed

    Yang, Yang; Liu, Qian; Gao, Zhipeng; Qiu, Xuesong; Meng, Luoming

    2015-01-01

    Medical body sensors can be implanted or attached to the human body to monitor the physiological parameters of patients all the time. Inaccurate data due to sensor faults or incorrect placement on the body will seriously influence clinicians' diagnosis, therefore detecting sensor data faults has been widely researched in recent years. Most of the typical approaches to sensor fault detection in the medical area ignore the fact that the physiological indexes of patients aren't changing synchronously at the same time, and fault values mixed with abnormal physiological data due to illness make it difficult to determine true faults. Based on these facts, we propose a Data Fault Detection mechanism in Medical sensor networks (DFD-M). Its mechanism includes: (1) use of a dynamic-local outlier factor (D-LOF) algorithm to identify outlying sensed data vectors; (2) use of a linear regression model based on trapezoidal fuzzy numbers to predict which readings in the outlying data vector are suspected to be faulty; (3) the proposal of a novel judgment criterion of fault state according to the prediction values. The simulation results demonstrate the efficiency and superiority of DFD-M. PMID:25774708

  8. On-line diagnosis of unrestricted faults

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.; Sundstrom, R. J.

    1974-01-01

    A formal model for the study of on-line diagnosis is introduced and used to investigate the diagnosis of unrestricted faults. A fault of a system S is considered to be a transformation of S into another system S' at some time tau. The resulting faulty system is taken to be the system which looks like S up to time tau, and like S' thereafter. Notions of fault tolerance and error are defined in terms of the resulting system being able to mimic some desired behavior as specified by a system similar to S. A notion of on-line diagnosis is formulated which involves an external detector and a maximum time delay within which every error caused by a fault in a prescribed set must be detected. It is shown that if a system is on-line diagnosable for the unrestricted set of faults then the detector is at least as complex, in terms of state set size, as the specification. The use of inverse systems for the diagnosis of unrestricted faults is considered. A partial characterization of those inverses which can be used for unrestricted fault diagnosis is obtained.

  9. Characterizing Fault Zone Permeability Through Integrated Geophysical and Hydrological Data, Elkhorn Fault, Park County, Colorado

    NASA Astrophysics Data System (ADS)

    Ball, L. B.; Ge, S.; Caine, J. S.

    2007-12-01

    Fault zones are ubiquitous in ground-water aquifers and can play a significant role in fluid transport at local to regional scales. Fault-zone permeability structure controls whether the fault behaves as a barrier, conduit, or combined barrier-conduit for fluid flow. Much work has been done to measure local- to well-scale permeability of faults. However, hydrogeologic heterogeneity prevents local-scale measurements from accurately characterizing the regional hydrogeologic impact of fault zones. Near-surface geophysical techniques provide an estimate of the distribution and continuity of subsurface fault zone properties, allowing for the identification of structural heterogeneities that may cause heterogeneities in permeability. This research focuses on the integration of geophysical and hydrological techniques to characterize the regional hydrogeologic effects of the Elkhorn Fault in Park County, Colorado. The Elkhorn fault is a Laramide-aged thrust fault with a sedimentary footwall and a fractured Proterozoic crystalline hanging wall. Magnetic, gravity, and resistivity data are used to constrain the location and geometry of the fault as well as to aid in the interpretation of the structural and hydrogeologic complexity of the fault and its associated damage zone. The combined geophysical interpretation provides a framework for the development of a physical domain in which pumping test and time-series hydrologic data can be used to evaluate permeability. This interpretation also serves as a guideline for the placement of wells, facilitating direct hydrological testing of the in-situ permeability of different components of the fault zone. By utilizing geophysical techniques, hydrological data may be more effectively collected and interpreted, leading to the development of ground-water-flow models that more accurately depict the regional hydrogeologic effect of the Elkhorn fault.

  10. Frictional properties of natural fault gouge from a low-angle normal fault, Panamint Valley, California

    NASA Astrophysics Data System (ADS)

    Numelin, T.; Marone, C.; Kirby, E.

    2007-04-01

    We investigate the relationship between frictional strength and clay mineralogy of natural fault gouge from a low-angle normal fault in Panamint Valley, California. Gouge samples were collected from the fault zone at five locations along a north-south transect of the range-bounding fault system, spanning a variety of bedrock lithologies. Samples were powdered and sheared in the double-direct shear configuration at room temperature and humidity. The coefficient of friction, μ, was measured at a range of normal stresses from 5 to 150 MPa for all samples. Our results reinforce the intuitive understanding that natural fault gouge zones are inherently heterogeneous. Samples from a single location exhibit dramatic differences in behavior, yet all three were collected within a meter of the fault core. For most of the samples, friction varies from μ = 0.6 to μ = 0.7, consistent with Byerlee's law. However, samples with greater than 50 wt % total clay content were much weaker (μ = 0.2-0.4). Expandable clay content of the samples ranged from 10 to 40 wt %. Frictional weakness did not correlate with expandable clays. Our results indicate that friction decreases with increasing total clay content, rather than with the abundance of expandable clays. The combination of field relations, analytical results, and friction measurements indicates a positive correlation between clay content, fabric intensity, and localization of strain in the fault core. A mechanism based upon foliations enveloping angular elements to reduce friction is suggested for weakening of fault gouge composed of mixed clay and granular material. We provide broad constraints of 1-5 km on the depth of gouge generation and the depth at which fault weakness initiates. We show that slip on the Panamint Valley fault and similar low-angle normal faults is mechanically feasible in the mid-upper crust if the strength of the fault is limited by weak, clay-rich fault gouge.

  11. Software-implemented fault insertion: An FTMP example

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1987-01-01

    This report presents a model for fault insertion through software; describes its implementation on a fault-tolerant computer, FTMP; presents a summary of fault detection, identification, and reconfiguration data collected with software-implemented fault insertion; and compares the results to hardware fault insertion data. Experimental results show detection time to be a function of time of insertion and system workload. For the fault detection time, there is no correlation between software-inserted faults and hardware-inserted faults; this is because hardware-inserted faults must manifest as errors before detection, whereas software-inserted faults immediately exercise the error detection mechanisms. In summary, the software-implemented fault insertion is able to be used as an evaluation technique for the fault-handling capabilities of a system in fault detection, identification and recovery. Although the software-inserted faults do not map directly to hardware-inserted faults, experiments show software-implemented fault insertion is capable of emulating hardware fault insertion, with greater ease and automation.
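
    As a toy illustration of the idea of software-implemented fault insertion (not the FTMP tooling described above), the sketch below flips a bit in one replica of a triplicated value and lets a majority voter act as the error-detection mechanism whose detection latency is measured.

```python
# Toy software-implemented fault insertion: corrupt one replica of a
# triplicated value, detect the error by majority voting, and record the
# detection latency. A conceptual sketch only, not the FTMP experiment.
import random
import time

def vote(replicas):
    """Majority vote over three replicas; returns (value, fault_detected)."""
    a, b, c = replicas
    if a == b or a == c:
        return a, not (a == b == c)
    return b, True

replicas = [0x5A5A, 0x5A5A, 0x5A5A]
t_inject = time.perf_counter()
replicas[random.randrange(3)] ^= 1 << random.randrange(16)  # inject a single bit flip

value, detected = vote(replicas)                             # detection mechanism
latency_us = (time.perf_counter() - t_inject) * 1e6
print(f"voted value=0x{value:04X}, fault detected={detected}, latency={latency_us:.1f} us")
```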

  12. a Study of Fault Zone Hydrology

    NASA Astrophysics Data System (ADS)

    Karasaki, K.; Onishi, C. T.; Goto, J.; Moriya, T.; Tsuchi, H.; Ueta, K.; Kiho, K.; Miyakawa, K.

    2010-12-01

    The Nuclear Waste Management Organization of Japan and Lawrence Berkeley National Laboratory are presently collaborating at a dedicated field site to further understand, and to develop the characterization technology for, fault zone hydrology. To this end, several deep trenches were cut, and a number of geophysical surveys were conducted across the Wildcat Fault in the hills east of Berkeley, California. The Wildcat Fault is believed to be a strike-slip fault and a member of the Hayward Fault System, with over 10 km of displacement. So far, three boreholes of ~ 150 m have been core-drilled; one on the east side and two on the west side of the suspected fault trace. The lithology at Wildcat Fault mainly consists of chert, shale and sandstone, extensively sheared and fractured; with gouges observed at several depths and a thick cataclasite zone. After conducting hydraulic tests, the boreholes were instrumented with temperature and pressure sensors at multiple levels. Preliminary results from these holes indicated that the geology was not what was expected: while confirming some earlier, published conclusions about Wildcat, they have also led to some unexpected findings. The pressure and temperature distributions indicate a downward hydraulic gradient and a relatively large geothermal gradient. Wildcat near the field site appear to consist of multiple faults. The hydraulic test data suggest the dual properties of the hydrologic structure of the fault zone. At this writing an inclined fourth borehole is being drilled to penetrate the main Wildcat. Using the existing three boreholes as observation wells, we plan to conduct hydrologic cross-hole tests in this fourth borehole. The main philosophy behind our approach for the hydrologic characterization of such a complex fractured system is to let the system take its own average and monitor long term behavior, instead of collecting a multitude of data at small length and time scales, or at a discrete fracture scale, and then to up-scale, which is extremely tenuous.

  13. Fault-tolerant almost exact state transmission

    PubMed Central

    Wang, Zhao-Ming; Wu, Lian-Ao; Modugno, Michele; Yao, Wang; Shao, Bin

    2013-01-01

    We show that a category of one-dimensional XY-type models may enable high-fidelity quantum state transmissions, regardless of details of coupling configurations. This observation leads to a fault-tolerant design of a state transmission setup. The setup is fault-tolerant, with specified thresholds, against engineering failures of coupling configurations, fabrication imperfections or defects, and even time-dependent noises. We propose an experimental implementation of the fault-tolerant scheme using hard-core bosons in one-dimensional optical lattices. PMID:24185259

  14. Efficient fault diagnosis of helicopter gearboxes

    NASA Technical Reports Server (NTRS)

    Chin, H.; Danai, K.; Lewicki, D. G.

    1993-01-01

    Application of a diagnostic system to a helicopter gearbox is presented. The diagnostic system is a nonparametric pattern classifier that uses a multi-valued influence matrix (MVIM) as its diagnostic model and benefits from a fast learning algorithm that enables it to estimate its diagnostic model from a small number of measurement-fault data. To test this diagnostic system, vibration measurements were collected from a helicopter gearbox test stand during accelerated fatigue tests and at various fault instances. The diagnostic results indicate that the MVIM system can accurately detect and diagnose various gearbox faults so long as they are included in training.

  15. Concatenated codes for fault tolerant quantum computing

    SciTech Connect

    Knill, E.; Laflamme, R.; Zurek, W.

    1995-05-01

    The application of concatenated codes to fault tolerant quantum computing is discussed. We have previously shown that for quantum memories and quantum communication, a state can be transmitted with error ε provided each gate has error at most cε. We show how this can be used with Shor's fault tolerant operations to reduce the accuracy requirements when maintaining states not currently participating in the computation. Viewing Shor's fault tolerant operations as a method for reducing the error of operations, we give a concatenated implementation which promises to propagate the reduction hierarchically. This has the potential of reducing the accuracy requirements in long computations.
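
    The hierarchical error reduction promised by concatenation can be illustrated with the standard threshold-style recursion, in which one level of encoding maps a physical error rate p to roughly c·p² for some constant c. The constant and the starting error rate below are illustrative assumptions, not figures from the paper.

```python
# Illustrative threshold-style recursion for concatenated codes: one level of
# encoding maps an error rate p to roughly c * p**2, so k levels suppress the
# error doubly exponentially whenever p < 1/c. The values of c and p are
# assumptions chosen only to show the trend.
c = 100.0   # assumed per-level constant
p = 1e-3    # assumed physical error rate per gate (below the 1/c threshold)

for level in range(5):
    print(f"level {level}: logical error rate ~ {p:.1e}")
    p = c * p ** 2
```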

  16. Cooperative human-machine fault diagnosis

    NASA Technical Reports Server (NTRS)

    Remington, Roger; Palmer, Everett

    1987-01-01

    Current expert system technology does not permit complete automatic fault diagnosis; significant levels of human intervention are still required. This requirement dictates a need for a division of labor that recognizes the strengths and weaknesses of both human and machine diagnostic skills. Relevant findings from the literature on human cognition are combined with the results of reviews of aircrew performance with highly automated systems to suggest how the interface of a fault diagnostic expert system can be designed to assist human operators in verifying machine diagnoses and guiding interactive fault diagnosis. It is argued that the needs of the human operator should play an important role in the design of the knowledge base.

  17. Tunable architecture for aircraft fault detection

    NASA Technical Reports Server (NTRS)

    Ganguli, Subhabrata (Inventor); Papageorgiou, George (Inventor); Glavaski-Radovanovic, Sonja (Inventor)

    2012-01-01

    A method for detecting faults in an aircraft is disclosed. The method involves predicting at least one state of the aircraft and tuning at least one threshold value to tightly upper bound the size of a mismatch between the at least one predicted state and a corresponding actual state of the non-faulted aircraft. If the mismatch between the at least one predicted state and the corresponding actual state is greater than or equal to the at least one threshold value, the method indicates that at least one fault has been detected.
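
    The detection logic described above, comparing the mismatch between a predicted state and the measured state against a tuned threshold, can be sketched as follows; the one-step signals, the predictor, and the threshold value are hypothetical placeholders rather than the patented method's actual quantities.

```python
# Minimal sketch of threshold-based fault detection: flag a fault when the
# residual between predicted and measured state meets or exceeds a tuned
# upper bound on the fault-free mismatch. All values are illustrative.
def detect_fault(measured, predicted, threshold):
    residual = abs(measured - predicted)
    return residual >= threshold, residual

measured_trace  = [0.0, 0.1, 0.2, 0.3, 2.5, 2.6]   # hypothetical pitch rate (deg/s); jump simulates a fault
predicted_trace = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]   # hypothetical model prediction
THRESHOLD = 0.8                                    # assumed bound on fault-free mismatch

for k, (m, p) in enumerate(zip(measured_trace, predicted_trace)):
    is_fault, r = detect_fault(m, p, THRESHOLD)
    if is_fault:
        print(f"fault detected at sample {k}: residual = {r:.2f}")
```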

  18. Negative Selection Algorithm for Aircraft Fault Detection

    NASA Technical Reports Server (NTRS)

    Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.

    2004-01-01

    We investigated a real-valued Negative Selection Algorithm (NSA) for fault detection in man-in-the-loop aircraft operation. The detection algorithm uses body-axes angular rate sensory data exhibiting the normal flight behavior patterns, to generate probabilistically a set of fault detectors that can detect any abnormalities (including faults and damages) in the behavior pattern of the aircraft flight. We performed experiments with datasets (collected under normal and various simulated failure conditions) using the NASA Ames man-in-the-loop high-fidelity C-17 flight simulator. The paper provides results of experiments with different datasets representing various failure conditions.
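
    A minimal real-valued negative-selection sketch under simplifying assumptions (random spherical detectors, Euclidean distance, synthetic data) is given below: detectors are generated so that none covers the normal ("self") flight data, and any new sample falling inside a detector is flagged as anomalous. This is a generic NSA illustration, not the NASA implementation.

```python
# Generic real-valued negative selection: keep random detectors that do not
# match normalized "self" (normal-flight) samples, then flag new samples that
# fall inside any detector. Radii, counts, and data are illustrative.
import numpy as np

rng = np.random.default_rng(1)
self_data = rng.normal(0.5, 0.05, size=(500, 3))     # normalized body-axis rates (synthetic)
SELF_RADIUS = 0.15
DETECTOR_RADIUS = 0.15
N_DETECTORS = 400

detectors = []
while len(detectors) < N_DETECTORS:
    candidate = rng.uniform(0.0, 1.0, size=3)
    if np.min(np.linalg.norm(self_data - candidate, axis=1)) > SELF_RADIUS:
        detectors.append(candidate)                  # candidate does not cover self data
detectors = np.array(detectors)

def is_anomalous(sample):
    return bool(np.any(np.linalg.norm(detectors - sample, axis=1) < DETECTOR_RADIUS))

print(is_anomalous(np.array([0.5, 0.5, 0.5])))       # inside the self region -> expected False
print(is_anomalous(np.array([0.9, 0.1, 0.9])))       # far from self -> expected True
```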

  19. Mechanical Models of Fault-Related Folding

    SciTech Connect

    Johnson, A. M.

    2003-01-09

    The subject of the proposed research is fault-related folding and ground deformation. The results are relevant to oil-producing structures throughout the world, to understanding of damage that has been observed along and near earthquake ruptures, and to earthquake-producing structures in California and other tectonically-active areas. The objectives of the proposed research were to provide both a unified, mechanical infrastructure for studies of fault-related foldings and to present the results in computer programs that have graphical users interfaces (GUIs) so that structural geologists and geophysicists can model a wide variety of fault-related folds (FaRFs).

  20. Geofluid Dynamics of Faulted Sedimentary Basins

    NASA Astrophysics Data System (ADS)

    Garven, G.; Jung, B.; Boles, J. R.

    2014-12-01

    Faults are known to affect basin-scale groundwater flow, and exert a profound control on petroleum migration/accumulation, the PVT-history of hydrothermal fluids, and the natural (submarine) seepage from offshore reservoirs. For example, in the Santa Barbara basin, measured gas flow data from a natural submarine seep area in the Santa Barbara Channel helps constrain fault permeability k ~ 30 millidarcys for the large-scale upward migration of methane-bearing formation fluids along one of the major fault zones. At another offshore site near Platform Holly, pressure-transducer time-series data from a 1.5 km deep exploration well in the South Ellwood Field demonstrate a strong ocean tidal component, due to vertical fault connectivity to the seafloor. Analytical solutions to the poroelastic flow equation can be used to extract both fault permeability and compressibility parameters, based on tidal-signal amplitude attenuation and phase shift at depth. These data have proven useful in constraining coupled hydrogeologic 2-D models for reactive flow and geomechanical deformation. In a similar vein, our studies of faults in the Los Angeles basin suggest an important role for the natural retention of fluids along the Newport-Inglewood fault zone. Based on the estimates of fault permeability derived above, we have also constructed new two-dimensional numerical simulations to characterize large-scale multiphase flow in complex heterogeneous and anisotropic geologic profiles, such as the Los Angeles basin. The numerical model was developed in our lab at Tufts from scratch and is based on an IMPES-type algorithm for a finite element/volume mesh. This numerical approach allowed us to model large differentials in fluid saturation and relative permeability, caused by complex geological heterogeneities associated with sedimentation and faulting. Our two-phase flow models also replicated the formation-scale patterns of petroleum accumulation associated with the basin margin, where deep faults resulted in stacked petroleum reservoirs along the Newport-Inglewood Fault, as deep geofluids migrated out of the basin to the Palos Verdes Peninsula. Recent isotope data collected by our group also verify fault connectivity at the deep crustal scale.
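
    As a rough illustration of how tidal amplitude attenuation and phase shift constrain fault hydraulic properties, the sketch below inverts a simplified one-dimensional periodic-diffusion solution for hydraulic diffusivity. This is a stand-in for the poroelastic analytical solutions used in the study, and the depth, amplitudes, and phase lag are hypothetical numbers.

```python
# Simplified 1D periodic-diffusion estimate of hydraulic diffusivity D from the
# attenuation and phase lag of an ocean-tide pressure signal observed at depth:
# p(z,t) ~ A0 * exp(-z/delta) * cos(w*t - z/delta), with delta = sqrt(2*D/w).
# All numbers are hypothetical; the study used full poroelastic solutions.
import math

z = 1500.0                        # transducer depth along the fault (m), hypothetical
period = 12.42 * 3600.0           # M2 tidal period (s)
omega = 2.0 * math.pi / period

A_seafloor, A_depth = 1.0, 0.25   # relative tidal amplitudes, hypothetical
phase_lag = 1.2                   # observed phase lag (radians), hypothetical

D_from_amplitude = omega * z**2 / (2.0 * math.log(A_seafloor / A_depth) ** 2)
D_from_phase = omega * z**2 / (2.0 * phase_lag ** 2)
print(f"diffusivity from amplitude ratio: {D_from_amplitude:.1f} m^2/s")
print(f"diffusivity from phase lag:       {D_from_phase:.1f} m^2/s")
```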

  1. Cooperative application/OS DRAM fault recovery.

    SciTech Connect

    Ferreira, Kurt Brian; Bridges, Patrick G.; Heroux, Michael Allen; Hoemmen, Mark; Brightwell, Ronald Brian

    2012-05-01

    Exascale systems will present considerable fault-tolerance challenges to applications and system software. These systems are expected to suffer several hard and soft errors per day. Unfortunately, many fault-tolerance methods in use, such as rollback recovery, are unsuitable for many expected errors, for example DRAM failures. As a result, applications will need to address these resilience challenges to more effectively utilize future systems. In this paper, we describe work on a cross-layer application/OS framework to handle uncorrected memory errors. We illustrate the use of this framework through its integration with a new fault-tolerant iterative solver within the Trilinos library, and present initial convergence results.

  2. On-line diagnosis of unrestricted faults

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.; Sundstrom, R. J.

    1975-01-01

    Attention is given to the formal development of the notion of a discrete-time system and the associated concepts of fault, result of a fault, and error. The considered concept of on-line diagnosis is formalized and a diagnosis using inverse machines is discussed. The case of an inverse which is lossless is investigated. It is found that in such a case the class of unrestricted faults can be diagnosed with a delay equal to the delay of losslessness of the inverse system.

  3. Constraints on faulting mechanisms found by combining multiscale fault surface geometry measurements with fault internal deformation structure

    NASA Astrophysics Data System (ADS)

    Sagy, A.; Axen, G. J.; Brodsky, E. E.

    2006-12-01

    New ground-based LiDAR and laboratory profilometer measurements of fault surface geometry show that faults evolve with slip. The surfaces of small-slip faults are relatively rough at all measured scales, whereas those of moderate-slip faults are polished at small scales but contain elongated bumps and depressions at scales of a few to several meters. The mechanism that controls this evolution of fault geometry with slip is investigated by combining the 3D measurements of fault-surface geometry with structural analysis of the fault zone architecture. The Flower Pit fault is a young normal fault in an active seismic zone near Klamath Falls, Oregon. Its net slip is <100 m and probably <300 m. The fault includes three fresh surface exposures with a composite area of ~6000 m2. The main fault-surface features are smooth semi-ellipsoidal ridges and depressions that are ~1-5 m wide, ~10-30 m long, and ~0.5-2 m high. The long axes of these bumps, or asperities, are parallel to the slip direction. Despite the existence of these large-scale bumps, most of the shear localized on or between two ultracataclasite layers ~1-25 mm thick that exhibit evidence for a history of both accumulation and removal. Beneath the ultracataclasite is a very cohesive zone 5-40 cm thick that is underlain by less cohesive cataclasite, breccia, and/or highly fractured rock that extends 1-3 m from the surface. Thus, the thickness of this cataclasitic to highly fractured zone is similar to the amplitude of the surface bumps, but the main slip is focused in a zone only centimeters thick. Abrasional striations on the fault surface, the thin ultracataclasite zone, and the existence of the large-scale bumps on single fault surfaces suggest that the surface evolved by continual sliding of hard rock against a less cohesive medium (gouge or crushed zone). The bumps are evidently not fault branches or secondary slip surfaces, so we deduce that they are remnants of an initially "rough" surface formed during fracture nucleation. Abrasive polishing depends strongly on the grain size and sorting of the polishing compound. The semi-continuous ultracataclasite layers suggest that final polishing of the fault surface was by grains much smaller than the large-scale bumps. A multiscale surface roughness is expected to be formed by polishing with a poorly sorted abrasive medium, but this is not what is observed. We see scale-dependent bimodal surface roughness, which suggests that the grains were comminuted to a size much smaller than the bumps before the abrasion was able to grind down the bumps. Thus, grain size decreased at a rate that was significantly faster than the polishing rate. We suggest the following fault-surface evolution involving three different mechanisms: (1) fracture nucleation and propagation that generated an immature, rough, self-affine surface, (2) continued fracture, brecciation and cataclasis within an evolving layer of "polishing compound" with thickness similar to the initial large-scale roughness, evolving into (3) sliding and abrasive polishing of the pre-existing surface.

  4. Fault roughness evolution with slip (Gole Larghe Fault Zone, Italian Alps)

    NASA Astrophysics Data System (ADS)

    Bistacchi, A.; Spagnuolo, E.; Di Toro, G.; Nielsen, S. B.; Griffith, W. A.

    2011-12-01

    Fault surface roughness is a principal factor influencing fault and earthquake mechanics. However, little is known about the roughness of fault surfaces at seismogenic depths, and particularly about how it evolves with accumulating slip. We have studied seismogenic fault surfaces of the Gole Larghe Fault Zone, which exploit precursor cooling joints of the Adamello tonalitic pluton (Italian Alps). These faults developed at 9-11 km depth and 250-300 °C. Seismic slip along these surfaces, which individually accommodated from 1 to 20 m of net slip, resulted in the production of cm-thick cataclasites and pseudotachylytes (solidified melts produced during seismic slip). The roughness of fault surfaces was determined with a multi-resolution aerial and terrestrial LIDAR and photogrammetric dataset (Bistacchi et al., 2011, Pageoph, doi: 10.1007/s00024-011-0301-7). Fault surface roughness is self-affine, with Hurst exponent H < 1, indicating that faults are comparatively smoother at larger wavelengths. Fault surface roughness is inferred to have been inherited from the precursor cooling joints, which show H ≈ 0.8. Slip on faults progressively modified the roughness distribution, lowering the Hurst exponent in the along-slip direction to H ≈ 0.6. This behaviour has been observed for wavelengths up to the scale of the accumulated slip along each individual fault surface, whilst at larger wavelengths the original roughness seems not to be affected by slip. Processes that contribute to modifying fault roughness with slip include brittle failure of the interacting asperities (production of cataclasites) and frictional melting (production of pseudotachylytes). To quantify the "wear" due to these processes, we measured, together with the roughness of fault traces and their net slip, the thickness and distribution of cataclasites and pseudotachylytes. As proposed also in the tribological literature, we observe that wear is scale dependent, as smaller-wavelength asperities have a shorter interaction distance and are consumed faster with slip than larger ones. However, in faults, the production of cataclasites and pseudotachylytes changes the contact area of the sliding surfaces by interposing a layer of wear products. This layer may protect from wear those asperities that are smaller in amplitude than the layer thickness, thus providing a mechanism that is likely to preserve small amplitude/wavelength roughness. These processes have been considered in a new spectral model of wear, which makes it possible to model wear for self-affine surfaces and includes the accumulation of wear products within the fault zone. This model can be used to generalize our results and contribute to reconstructing a realistic model of a seismogenic fault zone (http://roma1.rm.ingv.it/laboratori/laboratorio-hp-ht/usems-project).
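
    A minimal sketch of how a Hurst exponent can be estimated from a measured surface profile using the scaling of root-mean-square height differences (a structure-function approach) is given below. The spectrally synthesized profile is only an illustration, not the LIDAR or photogrammetric data from the study.

```python
# Estimate the Hurst exponent H of a 1D profile from the scaling of RMS height
# differences, sigma(lag) ~ lag**H for a self-affine profile. The synthetic
# profile (spectral synthesis with PSD ~ f**-(1+2H)) is illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n, H_true = 4096, 0.7
freqs = np.fft.rfftfreq(n, d=1.0)
amps = np.zeros_like(freqs)
amps[1:] = freqs[1:] ** (-(0.5 + H_true))            # amplitude ~ f**-(0.5+H)
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
profile = np.fft.irfft(amps * np.exp(1j * phases), n=n)

lags = np.array([2, 4, 8, 16, 32, 64, 128, 256])
rms = np.array([np.sqrt(np.mean((profile[lag:] - profile[:-lag]) ** 2)) for lag in lags])
H_est = np.polyfit(np.log(lags), np.log(rms), 1)[0]  # slope of the log-log fit
print(f"estimated Hurst exponent: {H_est:.2f} (synthetic input H = {H_true})")
```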

  5. Continuity of the West Napa Fault Zone Inferred from Aftershock Recordings on Fault-Crossing Arrays

    NASA Astrophysics Data System (ADS)

    Catchings, R.; Goldman, M.; Slad, G. W.; Criley, C.; Chan, J. H.; Fay, R. P.; Fay, W.; Svitek, J. F.

    2014-12-01

    In an attempt to determine the continuity and lateral extent of the causative fault(s) of the 24 August 2014 Mw 6.0 Napa earthquake and possible interconnections with other mapped faults, we recorded aftershocks on three closely spaced (100 m) seismograph arrays that were positioned across the coseismic rupture zone and across mapped faults located north and south of the coseismic rupture. Array 1 was located in northwest Napa, between Highway 29 and the intersection of Redwood and Mt. Veeder roads, array 2 was located southwest of Napa, ~1 km north of Cuttings Wharf, and array 3 was located south of San Pablo Bay, within the town of Alhambra. Our intent was to record high-amplitude guided waves that only travel within the causative fault zone and its extensions (Li and Vidale, 1996). Preliminary analysis of seismic data from an M 3.2 aftershock shows high-amplitude (up to 1 cm/s) seismic waves occurred on seismographs within 100 m of mapped surface ruptures and fault zones. Northwest of Napa, the high amplitudes along array 1 coincide with zones of structural damage and widespread surface ground cracking, and along array 2 near Cuttings Wharf, the high amplitudes occur slightly east of surface ruptures seen along Los Amigas Road. We also observe relatively high-amplitude seismic waves across the Franklin Fault (array 3), approximately 32 km southeast of the mainshock epicenter; this observation suggests the West Napa and the Franklin faults may be continuous or connected. Existing fault maps show that the Franklin Fault extends at least 15 km southward to the Calaveras Fault zone and the West Napa Fault extends at least 25 km north of our array 1. Collectively, the mapped faults, surface ruptures, and guided waves suggest that the West Napa-Franklin Fault zone may extend more than 85 km before it merges with the Calaveras Fault. Assuming a continuous fault zone, the West Napa-Franklin Fault zone may be capable of generating a much larger magnitude earthquake than the Mw 6.0 that occurred on 24 August 2014.

  6. Fault detection in finite frequency domain for Takagi-Sugeno fuzzy systems with sensor faults.

    PubMed

    Li, Xiao-Jian; Yang, Guang-Hong

    2014-08-01

    This paper is concerned with the fault detection (FD) problem in finite frequency domain for continuous-time Takagi-Sugeno fuzzy systems with sensor faults. Some finite-frequency performance indices are initially introduced to measure the fault/reference input sensitivity and disturbance robustness. Based on these performance indices, an effective FD scheme is then presented such that the generated residual is designed to be sensitive to both fault and reference input for faulty cases, while robust against the reference input for fault-free case. As the additional reference input sensitivity for faulty cases is considered, it is shown that the proposed method improves the existing FD techniques and achieves a better FD performance. The theory is supported by simulation results related to the detection of sensor faults in a tunnel-diode circuit. PMID:24184791

  7. Fault detection for discrete-time switched systems with sensor stuck faults and servo inputs.

    PubMed

    Zhong, Guang-Xin; Yang, Guang-Hong

    2015-09-01

    This paper addresses the fault detection problem of switched systems with servo inputs and sensor stuck faults. The attention is focused on designing a switching law and its associated fault detection filters (FDFs). The proposed switching law uses only the current states of FDFs, which guarantees the residuals are sensitive to the servo inputs with known frequency ranges in faulty cases and robust against them in fault-free case. Thus, the arbitrarily small sensor stuck faults, including outage faults can be detected in finite-frequency domain. The levels of sensitivity and robustness are measured in terms of the finite-frequency H- index and l2-gain. Finally, the switching law and FDFs are obtained by the solution of a convex optimization problem. PMID:26055929

  8. Kinematic links between the Eastern Mosha Fault and the North Tehran Fault, Alborz range, northern Iran

    NASA Astrophysics Data System (ADS)

    Ghassemi, Mohammad R.; Fattahi, Morteza; Landgraf, Angela; Ahmadi, Mehdi; Ballato, Paolo; Tabatabaei, Saeid H.

    2014-05-01

    Kinematic interaction of faults is an important issue for detailed seismic hazard assessments in seismically active regions. The Eastern Mosha Fault (EMF) and the North Tehran Fault (NTF) are two major active faults of the southern central Alborz mountains, located in proximity of Tehran (population ~ 9 million). We used field, geomorphological and paleoseismological data to explore the kinematic transition between the faults, and compare their short-term and long-term history of activity. We introduce the Niknamdeh segment of the NTF along which the strike-slip kinematics of EMF is transferred onto the NTF, and which is also responsible for the development of a pull-apart basin between the eastern segments of the NTF. The Ira trench site at the linkage zone between the two faults reveals the history of interaction between rock avalanches, active faulting and sag-pond development. The kinematic continuity between the EMF and NTF requires updating of seismic hazard models for the NTF, the most active fault adjacent to the Tehran Metropolis. Study of offsets of large-scale morphological features along the EMF, and comparison with estimated slip rates along the fault indicates that the EMF has started its left-lateral kinematics between 3.2 and 4.7 Ma. According to our paleoseismological data and the morphology of the nearby EMF and NTF, we suggest minimum and maximum values of about 1.8 and 3.0 mm/year for the left-lateral kinematics on the two faults in their linkage zone, averaged over Holocene time scales. Our study provides a partial interpretation, based on available data, for the fault activity in northeastern Tehran region, which may be completed with studies of other active faults of the region to evaluate a more realistic seismic hazard analysis for this heavily populated major city.

  9. Fault Rock Variation as a Function of Host Rock Lithology

    NASA Astrophysics Data System (ADS)

    Fagereng, A.; Diener, J.

    2013-12-01

    Fault rocks contain an integrated record of the slip history of a fault, and thereby reflect the deformation processes associated with fault slip. Within the Aus Granulite Terrane, Namibia, a number of Jurassic to Cretaceous age strike-slip faults cross-cut Precambrian high grade metamorphic rocks. These strike-slip faults were active at subgreenschist conditions and occur in a variety of host rock lithologies. Where the host rock contains significant amounts of hydrous minerals, representing granulites that have undergone retrogressive metamorphism, the fault rock is dominated by hydrothermal breccias. In anhydrous, foliated rocks interlayered with minor layers containing hydrous phyllosilicates, the fault rock is a cataclasite partially cemented by jasper and quartz. Where the host rock is an isotropic granitic rock the fault rock is predominantly a fine grained black fault rock. Cataclasites and breccias show evidence for multiple deformation events, whereas the fine grained black fault rocks appear to only record a single slip increment. The strike-slip faults observed all formed in the same general orientation and at a similar time, and it is unlikely that regional stress, strain rate, pressure and temperature varied between the different faults. We therefore conclude that the type of fault rock here depended on the host rock lithology, and that lithology alone accounts for why some faults developed a hydrothermal breccia, some cataclasite, and some a fine grained black fault rock. Consequently, based on the assumption that fault rocks reflect specific slip styles, lithology was also the main control on different fault slip styles in this area at the time of strike-slip fault activity. Whereas fine grained black fault rock is inferred to represent high stress events, hydrothermal breccia is rather related to events involving fluid pressure in excess of the least stress. Jasper-bearing cataclasites may represent faults that experienced dynamic weakening as seen in experiments where silica gel was produced, in other words, strong faults that experienced significant slip weakening.

  10. Geometry and earthquake potential of the shoreline fault, central California

    USGS Publications Warehouse

    Hardebeck, Jeanne L.

    2013-01-01

    The Shoreline fault is a vertical strike-slip fault running along the coastline near San Luis Obispo, California. Much is unknown about the Shoreline fault, including its slip rate and the details of its geometry. Here, I study the geometry of the Shoreline fault at seismogenic depth, as well as the adjacent section of the offshore Hosgri fault, using seismicity relocations and earthquake focal mechanisms. The Optimal Anisotropic Dynamic Clustering (OADC) algorithm (Ouillon et al., 2008) is used to objectively identify the simplest planar fault geometry that fits all of the earthquakes to within their location uncertainty. The OADC results show that the Shoreline fault is a single continuous structure that connects to the Hosgri fault. Discontinuities smaller than about 1 km may be undetected, but would be too small to be barriers to earthquake rupture. The Hosgri fault dips steeply to the east, while the Shoreline fault is essentially vertical, so the Hosgri fault dips towards and under the Shoreline fault as the two faults approach their intersection. The focal mechanisms generally agree with pure right-lateral strike-slip on the OADC planes, but suggest a non-planar Hosgri fault or another structure underlying the northern Shoreline fault. The Shoreline fault most likely transfers strike-slip motion between the Hosgri fault and other faults of the Pacific-North America plate boundary system to the east. A hypothetical earthquake rupturing the entire known length of the Shoreline fault would have a moment magnitude of 6.4-6.8. A hypothetical earthquake rupturing the Shoreline fault and the section of the Hosgri fault north of the Hosgri-Shoreline junction would have a moment magnitude of 7.2-7.5.
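
    The hypothetical magnitudes above are tied to rupture-scenario lengths. One common way to make such estimates is an empirical magnitude-rupture-length regression such as Wells and Coppersmith (1994); the sketch below applies their strike-slip surface-rupture-length relation to two assumed lengths, which are not necessarily the lengths or the relation used in the paper.

```python
# Empirical magnitude estimate from surface rupture length using the
# Wells & Coppersmith (1994) strike-slip regression M = 5.16 + 1.12*log10(L_km).
# The rupture lengths below are assumed for illustration only.
import math

def moment_magnitude(rupture_length_km):
    return 5.16 + 1.12 * math.log10(rupture_length_km)

scenarios = [
    ("Shoreline fault alone (assumed 25 km)", 25.0),
    ("Shoreline plus northern Hosgri (assumed 110 km)", 110.0),
]
for name, length_km in scenarios:
    print(f"{name}: M ~ {moment_magnitude(length_km):.1f}")
```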

  11. PV Systems Reliability Final Technical Report: Ground Fault Detection

    SciTech Connect

    Lavrova, Olga; Flicker, Jack David; Johnson, Jay

    2016-01-01

    We have examined ground faults in photovoltaic (PV) arrays and the efficacy of fuses, current detection (RCD), current sense monitoring/relays (CSM), isolation/insulation (Riso) monitoring, and Ground Fault Detection and Isolation (GFID) using simulations based on a SPICE (Simulation Program with Integrated Circuit Emphasis) ground-fault circuit model, experimental ground faults installed on real arrays, and theoretical equations.

  12. Network Connectivity for Permanent, Transient, Independent, and Correlated Faults

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Sicher, Courtney; Henry, Courtney

    2012-01-01

    This paper develops a method for the quantitative analysis of network connectivity in the presence of both permanent and transient faults. Even though transient noise is considered a common occurrence in networks, a survey of the literature reveals an emphasis on permanent faults. Transient faults introduce a time element into the analysis of network reliability. With permanent faults it is sufficient to consider the faults that have accumulated by the end of the operating period. With transient faults the arrival and recovery time must be included. The number and location of faults in the system is a dynamic variable. Transient faults also introduce system recovery into the analysis. The goal is the quantitative assessment of network connectivity in the presence of both permanent and transient faults. The approach is to construct a global model that includes all classes of faults: permanent, transient, independent, and correlated. A theorem is derived about this model that gives distributions for (1) the number of fault occurrences, (2) the type of fault occurrence, (3) the time of the fault occurrences, and (4) the location of the fault occurrence. These results are applied to compare and contrast the connectivity of different network architectures in the presence of permanent, transient, independent, and correlated faults. The examples below use a Monte Carlo simulation, but the theorem mentioned above could be used to guide fault injections in a laboratory.
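
    A compact Monte Carlo sketch of the kind of connectivity assessment described above follows: links of a small network suffer permanent faults (which persist) and transient faults (which recover after a fixed delay), and loss of connectivity at any time during the operating period counts as failure. The topology, fault rates, recovery time, and trial count are illustrative assumptions, not parameters from the paper.

```python
# Monte Carlo estimate of the probability that a small network stays connected
# through an operating period with both permanent and transient link faults.
# Topology, fault probabilities, and recovery time are illustrative assumptions.
import random

LINKS = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # 4-node ring plus one chord
NODES = range(4)

def connected(up_links):
    adjacency = {n: set() for n in NODES}
    for a, b in up_links:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, stack = {0}, [0]
    while stack:
        for neighbor in adjacency[stack.pop()]:
            if neighbor not in seen:
                seen.add(neighbor)
                stack.append(neighbor)
    return len(seen) == len(NODES)

def run_trial(steps=100, p_perm=0.001, p_trans=0.01, recovery=5):
    perm_failed, trans_failed = set(), {}          # trans_failed: link -> steps until recovery
    for _ in range(steps):
        for link in LINKS:
            if link not in perm_failed and random.random() < p_perm:
                perm_failed.add(link)              # permanent fault: never recovers
            if random.random() < p_trans:
                trans_failed[link] = recovery      # transient fault: recovers later
        up = [l for l in LINKS if l not in perm_failed and l not in trans_failed]
        if not connected(up):
            return False                           # connectivity lost during the period
        trans_failed = {l: t - 1 for l, t in trans_failed.items() if t > 1}
    return True

trials = 5000
successes = sum(run_trial() for _ in range(trials))
print(f"estimated probability of remaining connected: {successes / trials:.3f}")
```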

  13. Active and inactive faults in southern California viewed from Skylab

    NASA Technical Reports Server (NTRS)

    Merifield, P. M.; Lamar, D. L.

    1975-01-01

    The application is discussed of Skylab imagery along with larger scale photography and field investigations in preparing fault maps of California for use in land use planning. The images were used to assist in distinguishing active from inactive faults (by recognizing indications of recent displacement), determining the length of potentially active faults, identifying previously unmapped faults, and gaining additional information on regional tectonic history.

  14. Sea-Floor Spreading and Transform Faults

    ERIC Educational Resources Information Center

    Armstrong, Ronald E.; And Others

    1978-01-01

    Presents the Crustal Evolution Education Project (CEEP) instructional module on Sea-Floor Spreading and Transform Faults. The module includes activities and materials required, procedures, summary questions, and extension ideas for teaching Sea-Floor Spreading. (SL)

  15. Transfer zones in listric normal fault systems

    NASA Astrophysics Data System (ADS)

    Bose, Shamik

    Listric normal faults are common in passive margin settings where sedimentary units are detached above weaker lithological units, such as evaporites or are driven by basal structural and stratigraphic discontinuities. The geometries and styles of faulting vary with the types of detachment and form landward and basinward dipping fault systems. Complex transfer zones therefore develop along the terminations of adjacent faults where deformation is accommodated by secondary faults, often below seismic resolution. The rollover geometry and secondary faults within the hanging wall of the major faults also vary with the styles of faulting and contribute to the complexity of the transfer zones. This study tries to understand the controlling factors for the formation of the different styles of listric normal faults and the different transfer zones formed within them, by using analog clay experimental models. Detailed analyses with respect to fault orientation, density and connectivity have been performed on the experiments in order to gather insights on the structural controls and the resulting geometries. A new high resolution 3D laser scanning technology has been introduced to scan the surfaces of the clay experiments for accurate measurements and 3D visualizations. Numerous examples from the Gulf of Mexico have been included to demonstrate and geometrically compare the observations in experiments and real structures. A salt cored convergent transfer zone from the South Timbalier Block 54, offshore Louisiana has been analyzed in detail to understand the evolutionary history of the region, which helps in deciphering the kinematic growth of similar structures in the Gulf of Mexico. The dissertation is divided into three chapters, written in a journal article format, that deal with three different aspects in understanding the listric normal fault systems and the transfer zones so formed. The first chapter involves clay experimental models to understand the fault patterns in divergent and convergent transfer zones. Flat base plate setups have been used to build different configurations that would lead to approaching, normal offset and overlapping faults geometries. The results have been analyzed with respect to fault orientation, density, connectivity and 3D geometry from photographs taken from the three free surfaces and laser scans of the top surface of the clay cake respectively. The second chapter looks into the 3D structural analysis of the South Timbalier Block 54, offshore Louisiana in the Gulf of Mexico with the help of a 3D seismic dataset and associated well tops and velocity data donated by ExxonMobil Corporation. This study involves seismic interpretation techniques, velocity modeling, cross section restoration of a series of seismic lines and 3D subsurface modeling using depth converted seismic horizons, well tops and balanced cross sections. The third chapter deals with the clay experiments of listric normal fault systems and tries to understand the controls on geometries of fault systems with and without a ductile substrate. Sloping flat base plate setups have been used and silicone fluid underlain below the clay cake has been considered as an analog for salt. The experimental configurations have been varied with respect to three factors viz. the direction of slope with respect to extension, the termination of silicone polymer with respect to the basal discontinuities and overlap of the base plates. 
    The analyses for the experiments have again been performed from photographs and 3D laser scans of the clay surface.

  16. Radon emanation on San Andreas Fault

    USGS Publications Warehouse

    King, C.-Y.

    1978-01-01

    Subsurface radon emanation monitored in shallow dry holes along an active segment of the San Andreas fault in central California shows spatially coherent large temporal variations that seem to be correlated with local seismicity. © 1978 Nature Publishing Group.

  17. Current Sensor Fault Reconstruction for PMSM Drives.

    PubMed

    Huang, Gang; Luo, Yi-Ping; Zhang, Chang-Fan; He, Jing; Huang, Yi-Shan

    2016-01-01

    This paper deals with a current sensor fault reconstruction algorithm for the torque closed-loop drive system of an interior PMSM. First, sensor faults are equated to actuator ones by a new introduced state variable. Then, in αβ coordinates, based on the motor model with active flux linkage, a current observer is constructed with a specific sliding mode equivalent control methodology to eliminate the effects of unknown disturbances, and the phase current sensor faults are reconstructed by means of an adaptive method. Finally, an αβ axis current fault processing module is designed based on the reconstructed value. The feasibility and effectiveness of the proposed method are verified by simulation and experimental tests on the RT-LAB platform. PMID:26840317

  18. Study of fault-tolerant software technology

    NASA Technical Reports Server (NTRS)

    Slivinski, T.; Broglio, C.; Wild, C.; Goldberg, J.; Levitt, K.; Hitt, E.; Webb, J.

    1984-01-01

    Presented is an overview of the current state of the art of fault-tolerant software and an analysis of quantitative techniques and models developed to assess its impact. It examines research efforts as well as experience gained from commercial application of these techniques. The paper also addresses the computer architecture and design implications on hardware, operating systems and programming languages (including Ada) of using fault-tolerant software in real-time aerospace applications. It concludes that fault-tolerant software has progressed beyond the pure research state. The paper also finds that, although not perfectly matched, newer architectural and language capabilities provide many of the notations and functions needed to effectively and efficiently implement software fault-tolerance.

  19. Fault-Tolerant Software For Flight Control

    NASA Technical Reports Server (NTRS)

    Deets, Dwain A.; Lock, Wilton P.; Megna, Vincent A.

    1988-01-01

    This report discusses the design and testing of the redundant control system for the F-8 digital fly-by-wire airplane. The outstanding feature of the system is the fault-tolerant software (REBUS) residing in the primary digital computers. The transition to operation on the backup software was smooth.

  20. Delineation of fault zones using imaging radar

    NASA Technical Reports Server (NTRS)

    Toksoz, M. N.; Gulen, L.; Prange, M.; Matarese, J.; Pettengill, G. H.; Ford, P. G.

    1986-01-01

    The assessment of earthquake hazards and mineral and oil potential of a given region requires a detailed knowledge of geological structure, including the configuration of faults. Delineation of faults is traditionally based on three types of data: (1) seismicity data, which shows the location and magnitude of earthquake activity; (2) field mapping, which in remote areas is typically incomplete and of insufficient accuracy; and (3) remote sensing, including LANDSAT images and high altitude photography. Recently, high resolution radar images of tectonically active regions have been obtained by SEASAT and Shuttle Imaging Radar (SIR-A and SIR-B) systems. These radar images are sensitive to terrain slope variations and emphasize the topographic signatures of fault zones. Techniques were developed for using the radar data in conjunction with the traditional types of data to delineate major faults in well-known test sites, and to extend interpretation techniques to remote areas.

  1. Fault detection in programmable logic arrays

    NASA Astrophysics Data System (ADS)

    Somenzi, F.; Gai, S.

    1986-05-01

    When designing fault-tolerant systems that include programmable logic arrays (PLAs), the aspects of these circuits that bear on fault diagnosis have to be taken into account. These aspects, ranging from fault models to test generation algorithms and self-checking structures, are peculiar to PLAs because of their regular structure. The generally accepted fault model for PLAs is the crosspoint defect; it is employed by dedicated test generation algorithms that exploit the fact that PLAs implement two-level combinational functions. The problem of accessing the inputs and outputs of the PLA can be alleviated by augmenting the PLA itself so as to simplify the test vectors to be applied, making them function-independent in the limit. A further step is the addition of the circuitry required to generate test vectors and evaluate the response, yielding a built-in self-test (BIST) architecture. Finally, high reliability can be achieved with PLAs featuring concurrent error detection.
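
    To make the crosspoint fault model concrete, the following minimal Python sketch (illustrative only; the PLA, its product terms, and the injected fault are hypothetical and not taken from the paper) represents the AND and OR planes as lists of connected crosspoints and shows how a missing AND-plane crosspoint changes the implemented function, which is exactly what a test vector must expose.

      from itertools import product

      def pla_eval(and_plane, or_plane, inputs):
          # and_plane: for each product term, the literal indices that must be 1
          # (index i = input x_i, index i + n = complement of x_i)
          # or_plane: for each output, the product-term indices ORed together
          n = len(inputs)
          literals = list(inputs) + [1 - v for v in inputs]
          terms = [all(literals[k] for k in term) for term in and_plane]
          return [int(any(terms[p] for p in out)) for out in or_plane]

      # Fault-free PLA realizing f = x0*x1 + (not x1)*x2 with one output
      and_plane = [[0, 1], [4, 2]]
      or_plane = [[0, 1]]

      # A "missing crosspoint" fault in the AND plane drops the x1 literal
      # from term 0, so the term degenerates to x0 alone and the function grows.
      faulty_and = [[0], [4, 2]]

      for bits in product([0, 1], repeat=3):
          good = pla_eval(and_plane, or_plane, list(bits))
          bad = pla_eval(faulty_and, or_plane, list(bits))
          if good != bad:
              print("input", bits, "detects the fault:", good, "->", bad)

    Any input on which the fault-free and faulty functions differ is a test vector for that crosspoint defect; dedicated PLA test generators enumerate such vectors systematically rather than exhaustively.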

  2. Current Sensor Fault Reconstruction for PMSM Drives

    PubMed Central

    Huang, Gang; Luo, Yi-Ping; Zhang, Chang-Fan; He, Jing; Huang, Yi-Shan

    2016-01-01

    This paper deals with a current sensor fault reconstruction algorithm for the torque closed-loop drive system of an interior PMSM. First, sensor faults are equated to actuator faults by a newly introduced state variable. Then, in αβ coordinates, based on the motor model with active flux linkage, a current observer is constructed with a specific sliding mode equivalent control methodology to eliminate the effects of unknown disturbances, and the phase current sensor faults are reconstructed by means of an adaptive method. Finally, an αβ-axis current fault processing module is designed based on the reconstructed value. The feasibility and effectiveness of the proposed method are verified by simulation and experimental tests on the RT-LAB platform. PMID:26840317

  3. Fault-tolerant communication channel structures

    NASA Technical Reports Server (NTRS)

    Alkalai, Leon (Inventor); Chau, Savio N. (Inventor); Tai, Ann T. (Inventor)

    2006-01-01

    Systems and techniques for implementing fault-tolerant communication channels and features in communication systems. Selected commercial-off-the-shelf devices can be integrated in such systems to reduce the cost.

  4. Reset Tree-Based Optical Fault Detection

    PubMed Central

    Lee, Dong-Geon; Choi, Dooho; Seo, Jungtaek; Kim, Howon

    2013-01-01

    In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit's reset signal tree as a fault detection sensor. To evaluate our proposal, we model radiation-induced currents into circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool. PMID:23698267
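
    For context on the radiation-induced current modeling mentioned above: a common way to represent the transient photocurrent injected by a light or particle strike in SPICE-level simulations is a double-exponential current pulse. The sketch below (illustrative only, with assumed time constants and collected charge, not values from the paper) evaluates such a pulse and checks numerically that it delivers the intended charge.

      import numpy as np

      def photocurrent(t, q, tau_rise=5e-12, tau_fall=200e-12):
          # Double-exponential pulse; the prefactor normalizes its time integral to q
          return (q / (tau_fall - tau_rise)) * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))

      t = np.linspace(0.0, 5e-9, 20001)              # 5 ns window with a fine time step
      i = photocurrent(t, q=100e-15)                 # assumed collected charge of 100 fC
      charge = 0.5 * np.sum((i[:-1] + i[1:]) * np.diff(t))   # trapezoidal integral
      print("peak current: %.3f mA" % (1e3 * i.max()))
      print("delivered charge: %.1f fC" % (1e15 * charge))

    In a transistor-level simulation, a current source of this shape would be attached to the struck node; the reset-tree detection idea then amounts to making sure such an injection also disturbs a buffer on the reset tree, so the event is observable.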

  5. Hydrologic Characterization Study at Wildcat Fault Zone

    NASA Astrophysics Data System (ADS)

    Karasaki, K.; Onishi, C. T.; Goto, J.; Moriya, T.; Ueta, K.; Kiho, K.

    2011-12-01

    A dedicated field site has been developed to further the understanding of, and to develop the characterization technology for, fault zone hydrology in the hills east of Berkeley, California across the Wildcat Fault. The Wildcat is believed to be a strike-slip fault and a member of the Hayward Fault System, with over 10 km of displacement. So far, several ~2-4-m deep trenches were cut, a number of surface-based geophysical surveys were conducted, and four ~150-m deep fully cored boreholes were drilled at the site; one on the east side and two on the west side of the suspected fault trace. The inclined fourth hole was drilled to penetrate the Wildcat. Geologic analysis results from these trenches and boreholes indicated that the geology was not always what was expected: while confirming some earlier, published conclusions about Wildcat, they have also led to some unexpected findings. The lithology at the Wildcat Fault area mainly consists of chert, shale, silt and sandstone, extensively sheared and fractured with gouge and cataclasite zones observed at several depths. Wildcat near the field site appears to consist of multiple fault planes with the major fault planes filled with unconsolidated pulverized rock instead of clay gouge. The pressure and temperature distributions indicate a downward hydraulic gradient and a relatively large geothermal gradient. Various types of borehole logging were conducted but there were no obvious correlations between boreholes or to hydrologic properties. Using the three other boreholes as observation wells, hydrologic cross-hole pumping tests were conducted in the fourth borehole. The hydraulic test data suggest the dual properties of the hydrologic structure of the fault zone: high permeability along the plane and low permeability across it, and the fault planes may be compartmentalizing aquifers. No correlation was found between fracture frequency and flow. Long term pressure monitoring over multiple seasons was shown to be very important. The main philosophy behind our approach for the hydrologic characterization of such a complex faulted, fractured system is to let the system take its own average and monitor long term behavior, instead of collecting a multitude of data at small length and time scales, or at a discrete fracture scale, and then to "up-scale," which is extremely tenuous at best.

  6. Actuator fault diagnosis and fault-tolerant control: Application to the quadruple-tank process

    NASA Astrophysics Data System (ADS)

    Buciakowski, Mariusz; de Rozprza-Faygel, Micha?; Ocha?ek, Joanna; Witczak, Marcin

    2014-12-01

    The paper focuses on an important problem in modern control systems: robust fault-tolerant control. In particular, the problem is oriented towards a practical application to the quadruple-tank process. The proposed approach starts with a general description of the system and of the fault-tolerant strategy, which is composed of a suitably integrated fault estimator and robust controller. The subsequent part of the paper is concerned with the design of the robust controller as well as the proposed fault-tolerant control scheme. To confirm the effectiveness of the proposed approach, the final part of the paper presents experimental results for the considered quadruple-tank process.
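
    As background for the plant referred to above, the quadruple-tank process is usually described by four coupled tank-level equations driven by two pumps, and an actuator fault can be represented as a loss of pump effectiveness. The Python sketch below is a minimal simulation under assumed (hypothetical) tank areas, outlet areas, valve splits, and pump gains; it is not the parameterization or the estimator used in the paper.

      import numpy as np

      g = 981.0                                   # cm/s^2
      A = np.array([28.0, 32.0, 28.0, 32.0])      # tank cross-sections (cm^2), assumed
      a = np.array([0.071, 0.057, 0.071, 0.057])  # outlet areas (cm^2), assumed
      gamma = np.array([0.7, 0.6])                # valve splits, assumed
      k = np.array([3.33, 3.35])                  # pump gains (cm^3/(V s)), assumed

      def tank_dot(h, u, fault):
          # fault[i] < 1 models a partial loss of effectiveness of pump i
          q = fault * k * u
          out = a * np.sqrt(2.0 * g * np.maximum(h, 0.0))
          dh = np.empty(4)
          dh[0] = (-out[0] + out[2] + gamma[0] * q[0]) / A[0]
          dh[1] = (-out[1] + out[3] + gamma[1] * q[1]) / A[1]
          dh[2] = (-out[2] + (1.0 - gamma[1]) * q[1]) / A[2]
          dh[3] = (-out[3] + (1.0 - gamma[0]) * q[0]) / A[3]
          return dh

      h, u, dt = np.array([12.0, 13.0, 5.0, 5.0]), np.array([3.0, 3.0]), 0.1
      for step in range(6000):                    # 600 s of simulation
          fault = np.array([0.6, 1.0]) if step * dt > 300.0 else np.ones(2)
          h = h + dt * tank_dot(h, u, fault)      # forward-Euler integration
      print("levels after a 40% loss on pump 1:", np.round(h, 2))

    A fault estimator of the kind the paper integrates with the controller would compare measured levels against a model like this one and attribute the mismatch to the effectiveness factors.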

  7. The Zuccale Fault, Elba Island, Italy: A new perspective from fault architecture

    NASA Astrophysics Data System (ADS)

    Musumeci, G.; Mazzarini, F.; Cruden, A. R.

    2015-06-01

    The Zuccale Fault, central-eastern Elba Island, has been regarded since the 1990s as a low-angle normal fault that records Neogene crustal extension in the inner (Tyrrhenian side) portion of the northern Apennines. The flat-lying attitude of the fault zone and the strong excision of thick nappes were the main reasons for this interpretation. Previous structural and petrographic studies have focused primarily on the fault rocks themselves without map-scale investigation of the structural setting and deformation structures in the hanging wall and footwall blocks. Furthermore, despite the complex history proposed for the Zuccale Fault, the timing of deformation has not yet been constrained by radiometric age data. We present the findings of recent geological studies on eastern Elba Island that provide significant new insight on the nature and tectonic significance of the Zuccale Fault. We document in detail the architecture of breccias and cataclasites that comprise the Zuccale Fault. Our new observations are consistent with a purely brittle deformation zone that crosscuts older early-middle and late Miocene regional and local tectonic structures. The activity on the fault postdates emplacement of the late Miocene Porto Azzurro pluton, and it displaces a previously formed nappe stack ~6 km eastward without any footwall exhumation or hanging wall block rotation. These new data raise questions about the development of misoriented faults in the upper crust.

  8. Dynamic characteristics of a 20 kHz resonant power system - Fault identification and fault recovery

    NASA Technical Reports Server (NTRS)

    Wasynczuk, O.

    1988-01-01

    A detailed simulation of a dc inductor resonant driver and receiver is used to demonstrate the transient characteristics of a 20 kHz resonant power system during fault and overload conditions. The simulated system consists of a dc inductor resonant inverter (driver), a 50-meter transmission cable, and a dc inductor resonant receiver load. Of particular interest are the driver and receiver performance during fault and overload conditions and the recovery characteristics following removal of the fault. The information gained from these studies sets the stage for further work in fault identification and autonomous power system control.

  9. Porosity variations in and around normal fault zones: implications for fault seal and geomechanics

    NASA Astrophysics Data System (ADS)

    Healy, David; Neilson, Joyce; Farrell, Natalie; Timms, Nick; Wilson, Moyra

    2015-04-01

    Porosity forms the building blocks for permeability, exerts a significant influence on the acoustic response of rocks to elastic waves, and fundamentally influences rock strength. And yet, published studies of porosity around fault zones or in faulted rock are relatively rare, and are hugely dominated by those of fault zone permeability. We present new data from detailed studies of porosity variations around normal faults in sandstone and limestone. We have developed an integrated approach to porosity characterisation in faulted rock exploiting different techniques to understand variations in the data. From systematic samples taken across exposed normal faults in limestone (Malta) and sandstone (Scotland), we combine digital image analysis on thin sections (optical and electron microscopy), core plug analysis (He porosimetry) and mercury injection capillary pressures (MICP). Our sampling includes representative material from undeformed protoliths and fault rocks from the footwall and hanging wall. Fault-related porosity can produce anisotropic permeability with a 'fast' direction parallel to the slip vector in a sandstone-hosted normal fault. Undeformed sandstones in the same unit exhibit maximum permeability in a sub-horizontal direction parallel to lamination in dune-bedded sandstones. Fault-related deformation produces anisotropic pores and pore networks with long axes aligned sub-vertically and this controls the permeability anisotropy, even under confining pressures up to 100 MPa. Fault-related porosity also has interesting consequences for the elastic properties and velocity structure of normal fault zones. Relationships between texture, pore type and acoustic velocity have been well documented in undeformed limestone. We have extended this work to include the effects of faulting on carbonate textures, pore types and P- and S-wave velocities (Vp, Vs) using a suite of normal fault zones in Malta, with displacements ranging from 0.5 to 90 m. Our results show a clear lithofacies control on the Vp-porosity and the Vs-Vp relationships for faulted limestones. Using porosity patterns quantified in naturally deformed rocks we have modelled their effect on the mechanical stability of fluid-saturated fault zones in the subsurface. Poroelasticity theory predicts that variations in fluid pressure could influence fault stability. Anisotropic patterns of porosity in and around fault zones can - depending on their orientation and intensity - lead to an increase in fault stability in response to a rise in fluid pressure, and a decrease in fault stability for a drop in fluid pressure. These predictions are the exact opposite of the accepted role of effective stress in fault stability. Our work has provided new data on the spatial and statistical variation of porosity in fault zones. Traditionally considered as an isotropic and scalar value, porosity and pore networks are better considered as anisotropic and as scale-dependent statistical distributions. The geological processes controlling the evolution of porosity are complex. Quantifying patterns of porosity variation is an essential first step in a wider quest to better understand deformation processes in and around normal fault zones. Understanding porosity patterns will help us to make more useful predictive tools for all agencies involved in the study and management of fluids in the subsurface.
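
    For reference, the "accepted role of effective stress in fault stability" that these poroelastic predictions run counter to is usually expressed through a Coulomb criterion with a Terzaghi-type effective normal stress; in standard textbook form (not an equation taken from this abstract),

      \tau \geq C + \mu \, (\sigma_n - P_f),

    where \tau is the shear stress resolved on the fault, C the cohesion, \mu the friction coefficient, \sigma_n the total normal stress, and P_f the pore-fluid pressure. Under this conventional view a rise in P_f lowers the frictional resistance and destabilizes the fault, which is why predictions of the opposite sense arising from anisotropic porosity are noteworthy.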

  10. Parameter Transient Behavior Analysis on Fault Tolerant Control System

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine (Technical Monitor); Shin, Jong-Yeob

    2003-01-01

    In a fault-tolerant control (FTC) system, a parameter-varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. This paper illustrates the analysis of an FTC system based on the transient behavior of the estimated fault parameters, which may include false fault detections during a short time interval. Using Lyapunov function analysis, an upper bound on the induced-L2 norm of the FTC system performance is calculated as a function of the fault detection time and the exponential decay rate of the Lyapunov function.
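
    The induced-L2 bound referred to here typically rests on a standard dissipation-inequality argument. Stated generically (this is the textbook form, not the paper's specific derivation): if a Lyapunov function V(x) >= 0 with V(0) = 0 satisfies, along the closed-loop trajectories with disturbance w and performance output z,

      \dot{V}(x) + z^{\mathsf{T}} z - \gamma^{2} w^{\mathsf{T}} w \leq 0,

    then integrating from zero initial conditions gives ||z||_2 <= \gamma ||w||_2, so \gamma bounds the induced-L2 gain from w to z. The contribution of the paper is to track how such a bound degrades when the Lyapunov function is allowed to grow, or to decay more slowly, over the interval between fault occurrence and detection, which is why the bound comes out as a function of the detection time and the exponential decay rate.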

  11. GIS coverages of the Castle Mountain Fault, south central Alaska

    USGS Publications Warehouse

    Labay, Keith A.; Haeussler, Peter J.

    2001-01-01

    The Castle Mountain fault is one of several major east-northeast-striking faults in southern Alaska, and it is the only one that has had historic seismicity and Holocene surface faulting. This report is a digital compilation of three maps along the Castle Mountain fault in south central Alaska. This compilation consists only of GIS coverages of the location of the fault, line attributes indicating the certainty of the fault location, and information about scarp height, where measured. The files are presented in ARC/INFO export file format and include metadata.

  12. Fault detection and diagnosis of photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Wu, Xing

    The rapid growth of the solar industry over the past several years has expanded the significance of photovoltaic (PV) systems. One of the primary aims of research in building-integrated PV systems is to improve the system's efficiency, availability, and reliability. Although much work has been done on technological design to increase a photovoltaic module's efficiency, there is little research so far on fault diagnosis for PV systems. Faults in a PV system, if not detected, may not only reduce power generation, but also threaten the availability and reliability, effectively the "security," of the whole system. In this paper, a circuit-based simulation baseline model of a PV system with maximum power point tracking (MPPT) is first developed using MATLAB software. MATLAB is one of the most popular tools for integrating computation, visualization, and programming in an easy-to-use modeling environment. Second, data are collected from a PV system under normal operation at varying surface temperatures and insolation levels. The simulation model of the PV system is then calibrated and improved by comparing the modeled I-V and P-V characteristics with the measured I-V and P-V characteristics, to ensure the simulated curves are close to the values measured in the experiments. Finally, based on the circuit-based simulation model, PV models of various types of faults are developed by changing conditions or inputs in the MATLAB model, and the I-V and P-V characteristic curves and the time-dependent voltage and current characteristics are characterized for each type of fault. These are developed as benchmark I-V or P-V curves, or prototype transient curves. If a fault occurs in a PV system, polling and comparing the actual measured I-V and P-V characteristic curves with both the normal operational curves and these baseline fault curves will aid in fault diagnosis.
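
    To illustrate the kind of curve-comparison diagnosis described above, the following Python sketch generates I-V curves from a simplified single-diode module model (series resistance neglected, all parameter values assumed for illustration; this is not the paper's MATLAB model) and flags a fault when a measured curve deviates from the normal-operation baseline by more than a threshold.

      import numpy as np

      def iv_curve(v, i_ph, i_0=2e-10, n=1.0, n_cells=60, r_sh=300.0, t_cell=298.15):
          # Simplified single-diode module model (series resistance neglected)
          v_t = 1.380649e-23 * t_cell / 1.602176634e-19   # thermal voltage (V)
          i = i_ph - i_0 * np.expm1(v / (n * n_cells * v_t)) - v / r_sh
          return np.clip(i, 0.0, None)

      v = np.linspace(0.0, 40.0, 200)
      baseline = iv_curve(v, i_ph=8.0)     # normal-operation curve, assumed 8 A photocurrent
      measured = iv_curve(v, i_ph=6.0)     # e.g. shading or soiling reduces photocurrent

      rms_dev = np.sqrt(np.mean((measured - baseline) ** 2))
      print("RMS deviation from baseline: %.2f A" % rms_dev)
      if rms_dev > 0.5:                    # illustrative threshold
          print("deviation exceeds threshold -> compare against stored fault signatures")

    In the diagnosis scheme described in the abstract, the measured curve would additionally be matched against the library of benchmark fault curves to identify which fault type it most closely resembles.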

  13. GN and C Fault Protection Fundamentals

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert D.

    2008-01-01

    This is a companion presentation for a paper by the same name for the same conference. The objective of this paper is to shed some light on the fundamentals of fault tolerant design for GN&C. The common heritage of ideas behind both faulted and normal operation is explored, as is the increasingly indistinct line between these realms in complex missions. Techniques in common practice are then evaluated in this light to suggest a better direction for future efforts.

  14. The San Andreas Fault System, California, USA

    USGS Publications Warehouse

    Brown, R.D.; Wallace, R.E.; Hill, D.P.

    1992-01-01

    Geologists, seismologists, and geophysicists have intensively studied the San Andreas fault system for the past 20 to 30 years. Their goals were to learn more about damaging earthquakes, the behavior of major strike-slip faults, and methods of reducing earthquake hazards in populated areas. Field geologic investigations, seismic networks, post-earthquake studies, precision geodetic surveys, and reflection and refraction seismic surveys are among the methods used to decipher the history, geometry, and mechanics of the system. -from Authors

  15. Interrelationship of nondestructive testing to fault determination.

    NASA Technical Reports Server (NTRS)

    Menichelli, V. J.; Rosenthal, L. A.

    1971-01-01

    Several nondestructive test techniques have been developed for electroexplosive devices. The bridgewire will respond, when pulsed with a safe level current, by generating a characteristic heating curve. The response is indicative of the electrothermal behavior of the bridgewire-explosive interface. Bridgewires which deviate from the characteristic heating curve have been dissected and examined to determine the cause for the abnormality. Deliberate faults have been fabricated into squibs. The relationship of the specific abnormality and the fault associated with it is discussed.

  16. Origin and models of oceanic transform faults

    NASA Astrophysics Data System (ADS)

    Gerya, Taras

    2012-02-01

    Mid-ocean ridges sectioned by transform faults represent prominent surface expressions of plate tectonics. A fundamental problem of plate tectonics is how this pattern formed and why it is maintained. The gross-scale geometry of mid-ocean ridges is often inherited from the respective rifted margins. Indeed, transform faults seem to nucleate after the beginning of oceanic spreading and can form spontaneously at a single straight ridge. Both analog and numerical models of transform faults have been investigated since the 1970s. Two main groups of analog models have been developed: thermomechanical (freezing-wax) models with accreting and cooling plates, and mechanical models with non-accreting lithosphere. Freezing-wax models have reproduced ridge-ridge transform faults, inactive fracture zones, rotating microplates, overlapping spreading centers, and other features of oceanic ridges. However, these models often produce open spreading centers that are dissimilar to nature. Mechanical models, on the other hand, do not accrete lithosphere, and their results are thus only applicable to relatively small amounts of spreading. Three main types of numerical models have been investigated: models of the stress and displacement distribution around transforms, models of their thermal structure and crustal growth, and models of the nucleation and evolution of ridge-transform fault patterns. It has been shown that a limited number of spreading modes can form: transform faults, microplates, overlapping spreading centers, zigzag ridges, and obliquely connecting spreading centers. However, controversy remains over whether these patterns always result from pre-existing ridge offsets or can also form spontaneously at a single straight ridge during millions of years of accretion. Two interpretations of transform faults therefore exist: plate fragmentation structures versus plate accretion structures. Models of transform faults are as yet relatively scarce and partly controversial. Consequently, a number of first-order questions remain open, and significant cross-disciplinary efforts combining natural observations, analog experiments, and numerical modeling will be needed in the future.

  17. Limiting Maximum Magnitude by Fault Dimensions (Invited)

    NASA Astrophysics Data System (ADS)

    Stirling, M. W.

    2010-12-01

    A standard practice of seismic hazard modeling is to combine fault and background seismicity sources to produce a multidisciplinary source model for a region. Background sources are typically modeled with a Gutenberg-Richter magnitude-frequency distribution developed from historical seismicity catalogs, and fault sources are typically modeled with earthquakes whose size is limited by the mapped fault rupture dimensions. The combined source model typically exhibits a Gutenberg-Richter-like distribution because there are many short faults relative to the number of longer faults. The assumption that earthquakes are limited by the mapped fault dimensions therefore appears to be consistent with the Gutenberg-Richter relationship, one of the fundamental laws of seismology. Recent studies of magnitude-frequency distributions for California and New Zealand have highlighted an excess of fault-derived earthquakes relative to the log-linear extrapolation of the Gutenberg-Richter relationship from the smaller magnitudes (known as the “bulge”). Relaxing the requirement that maximum magnitude be limited by fault dimensions is one possible way to remove the “bulge” and produce a perfectly log-linear Gutenberg-Richter distribution. An alternative perspective is that the “bulge” does not represent a significant departure from a Gutenberg-Richter distribution and may simply be an artifact of a small earthquake dataset relative to the more plentiful data at the smaller magnitudes. In other words, the uncertainty bounds of the magnitude-frequency distribution at moderate-to-large magnitudes may be far greater than the size of the “bulge”.
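
    Two relations underpin the argument above. The Gutenberg-Richter magnitude-frequency distribution is

      \log_{10} N(\geq M) = a - b M,

    with N the cumulative number of earthquakes of magnitude at least M and b typically close to 1. Limiting maximum magnitude by fault dimensions is usually implemented with an empirical rupture-scaling regression; relations of the Wells and Coppersmith (1994) type take roughly the form

      M \approx 5.08 + 1.16 \log_{10} L,

    with L the surface rupture length in kilometers (quoted here from memory as an illustration of the approach, not as the specific scaling used in this abstract), so a mapped 50-km-long fault would be assigned a maximum magnitude of about 7.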

  18. A Primer on Architectural Level Fault Tolerance

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    2008-01-01

    This paper introduces the fundamental concepts of fault tolerant computing. Key topics covered are voting, fault detection, clock synchronization, Byzantine Agreement, diagnosis, and reliability analysis. Low level mechanisms such as Hamming codes or low level communications protocols are not covered. The paper is tutorial in nature and does not cover any topic in detail. The focus is on rationale and approach rather than detailed exposition.

  19. A survey of fault diagnosis technology

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel

    1989-01-01

    Existing techniques and methodologies for fault diagnosis are surveyed. The techniques run the gamut from theoretical artificial intelligence work to conventional software engineering applications. They are shown to define a spectrum of implementation alternatives where tradeoffs determine their position on the spectrum. Various tradeoffs include execution time limitations and memory requirements of the algorithms as well as their effectiveness in addressing the fault diagnosis problem.

  20. Strong ground motions generated by earthquakes on creeping faults

    USGS Publications Warehouse

    Harris, Ruth A.; Abrahamson, Norman A.

    2014-01-01

    A tenet of earthquake science is that faults are locked in position until they abruptly slip during the sudden strain-relieving events that are earthquakes. Whereas it is expected that locked faults when they finally do slip will produce noticeable ground shaking, what is uncertain is how the ground shakes during earthquakes on creeping faults. Creeping faults are rare throughout much of the Earth's continental crust, but there is a group of them in the San Andreas fault system. Here we evaluate the strongest ground motions from the largest well-recorded earthquakes on creeping faults. We find that the peak ground motions generated by the creeping fault earthquakes are similar to the peak ground motions generated by earthquakes on locked faults. Our findings imply that buildings near creeping faults need to be designed to withstand the same level of shaking as those constructed near locked faults.

  1. Inferred depth of creep on the Hayward Fault, central California

    USGS Publications Warehouse

    Savage, J.C.; Lisowski, M.

    1993-01-01

    A relation between creep rate at the surface trace of a fault, the depth to the bottom of the creeping zone, and the rate of stress accumulation on the fault is derived from Weertman's 1964 friction model of slip on a fault. A 5±1 km depth for the creeping zone on the Hayward fault is estimated from the measured creep rate (5 mm/yr) at the fault trace and the rate of stress increase on the upper segment of the fault trace inferred from geodetic measurements across the San Francisco Bay area. Although fault creep partially accommodates the secular slip rate on the Hayward fault, a slip deficit is accumulating equivalent to a magnitude 6.6 earthquake on each 40 km segment of the fault each century. Thus, the current behavior of the fault is consistent with its seismic history, which includes two moderate earthquakes in the mid-1800s. -Authors
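
    The equivalence between a century of slip deficit and a magnitude ~6.6 event can be checked with standard seismic-moment arithmetic. The short Python sketch below uses illustrative assumed values (a rigidity of 3e10 Pa, a 40 km by 12 km locked patch, and a deficit rate of about 7 mm/yr); these are not the specific parameters of the paper, only a demonstration of the moment-magnitude calculation Mw = (2/3) log10(M0) - 6.07 with M0 in N·m.

      import math

      mu = 3.0e10            # rigidity (Pa), assumed
      length = 40e3          # segment length (m)
      width = 12e3           # locked width (m), assumed
      deficit_rate = 0.007   # slip deficit rate (m/yr), assumed
      years = 100.0

      slip = deficit_rate * years                  # accumulated slip deficit (m)
      m0 = mu * length * width * slip              # seismic moment (N m)
      mw = (2.0 / 3.0) * math.log10(m0) - 6.07     # moment magnitude
      print("deficit: %.2f m, M0 = %.2e N m, Mw = %.1f" % (slip, m0, mw))

    With these assumed numbers the accumulated moment corresponds to roughly Mw 6.6, consistent with the statement in the abstract.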

  2. Software fault tolerance in computer operating systems

    NASA Technical Reports Server (NTRS)

    Iyer, Ravishankar K.; Lee, Inhwan

    1994-01-01

    This chapter provides data and analysis of the dependability and fault tolerance of three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, which was originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors, which allows the backup execution (the processor state and the sequence of events) to differ from the original execution, is a major reason for the measured software fault tolerance. The IBM/MVS system fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.

  3. Diagnosing faults in autonomous robot plan execution

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Doshi, Rajkumar S.; Atkinson, David J.; Lawson, Denise M.

    1989-01-01

    A major requirement for an autonomous robot is the capability to diagnose faults during plan execution in an uncertain environment. Much diagnostic research concentrates only on hardware failures within an autonomous robot. Taking a different approach, this work describes the implementation of a Telerobot Diagnostic System that addresses, in addition to hardware failures, failures caused by unexpected event changes in the environment and failures due to plan errors. One feature of the system is the utilization of task-plan knowledge and context information to deduce fault symptoms. This forward deduction provides valuable information on past activities and the current expectations of a robotic event, both of which can guide the plan-execution inference process. The inference process adopts a model-based technique to recreate the plan-execution process and to confirm fault-source hypotheses. This technique allows the system to diagnose multiple faults due to either unexpected plan failures or hardware errors. This research initiates a major effort to investigate relationships between hardware faults and plan errors, relationships which were not addressed in the past. The results of this research will provide a clear understanding of how to generate a better task planner for an autonomous robot and how to recover the robot from faults in a critical environment.

  4. Diagnosing faults in autonomous robot plan execution

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Doshi, Rajkumar S.; Atkinson, David J.; Lawson, Denise M.

    1988-01-01

    A major requirement for an autonomous robot is the capability to diagnose faults during plan execution in an uncertain environment. Much diagnostic research concentrates only on hardware failures within an autonomous robot. Taking a different approach, this work describes the implementation of a Telerobot Diagnostic System that addresses, in addition to hardware failures, failures caused by unexpected event changes in the environment and failures due to plan errors. One feature of the system is the utilization of task-plan knowledge and context information to deduce fault symptoms. This forward deduction provides valuable information on past activities and the current expectations of a robotic event, both of which can guide the plan-execution inference process. The inference process adopts a model-based technique to recreate the plan-execution process and to confirm fault-source hypotheses. This technique allows the system to diagnose multiple faults due to either unexpected plan failures or hardware errors. This research initiates a major effort to investigate relationships between hardware faults and plan errors, relationships which were not addressed in the past. The results of this research will provide a clear understanding of how to generate a better task planner for an autonomous robot and how to recover the robot from faults in a critical environment.

  5. Protecting Against Faults in JPL Spacecraft

    NASA Technical Reports Server (NTRS)

    Morgan, Paula

    2007-01-01

    A paper discusses techniques for protecting against faults in spacecraft designed and operated by NASA s Jet Propulsion Laboratory (JPL). The paper addresses, more specifically, fault-protection requirements and techniques common to most JPL spacecraft (in contradistinction to unique, mission specific techniques), standard practices in the implementation of these techniques, and fault-protection software architectures. Common requirements include those to protect onboard command, data-processing, and control computers; protect against loss of Earth/spacecraft radio communication; maintain safe temperatures; and recover from power overloads. The paper describes fault-protection techniques as part of a fault-management strategy that also includes functional redundancy, redundant hardware, and autonomous monitoring of (1) the operational and health statuses of spacecraft components, (2) temperatures inside and outside the spacecraft, and (3) allocation of power. The strategy also provides for preprogrammed automated responses to anomalous conditions. In addition, the software running in almost every JPL spacecraft incorporates a general-purpose "Safe Mode" response algorithm that configures the spacecraft in a lower-power state that is safe and predictable, thereby facilitating diagnosis of more complex faults by a team of human experts on Earth.

  6. Photovoltaic system grounding and fault protection

    NASA Astrophysics Data System (ADS)

    Stolte, W. J.

    1983-11-01

    The grounding and fault protection aspects of large photovoltaic power systems are studied. Broadly, the overlapping functions of these two plant subsystems include providing for the safety of personnel and equipment. Grounding subsystem design is generally governed by considerations of personnel safety and the limiting of hazardous voltages to which personnel are exposed during a fault or other misoperation of equipment. A ground system is designed to provide a safe path for fault currents. Metal portions of the modules, array structures, and array foundations are used as part of the ground system, provided that they and their interconnections are designed to be suitably reliable over the life of the plant. Several alternative types of fault protection and detection equipment are designed into the source circuits and dc buses feeding the input terminals of the subfield power conditioner. This design process requires evaluation of plausible faults, of the protection equipment, and of the remedial actions planned to correct faults. The evaluation should also consider life-cycle cost impacts.

  7. Sequential Testing Algorithms for Multiple Fault Diagnosis

    NASA Technical Reports Server (NTRS)

    Shakeri, Mojdeh; Raghavan, Vijaya; Pattipati, Krishna R.; Patterson-Hine, Ann

    1997-01-01

    In this paper, we consider the problem of constructing optimal and near-optimal test sequencing algorithms for multiple fault diagnosis. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and AND/OR graph search, we present several test sequencing algorithms for the multiple fault isolation problem. These algorithms provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a diagnostic directed graph (digraph), instead of a diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. The algorithms developed herein have been successfully applied to several real-world systems. Computational results indicate that the size of a multiple fault strategy is strictly related to the structure of the system.
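
    The information-theoretic flavor of these test-sequencing algorithms can be illustrated with a tiny greedy sketch: given prior probabilities over candidate fault states and a table of deterministic test outcomes, pick at each step the test whose outcome is expected to reduce entropy the most. The Python below is a schematic illustration under assumed fault states, tests, and probabilities (not the algorithms of the paper, which handle multiple simultaneous faults and bounded suboptimality).

      import math

      def entropy(p):
          return -sum(x * math.log2(x) for x in p.values() if x > 0)

      def expected_entropy_after(test, p, outcomes):
          # Split the candidate set by the (deterministic) outcome of the test
          groups = {}
          for state, prob in p.items():
              groups.setdefault(outcomes[test][state], {})[state] = prob
          exp_h = 0.0
          for group in groups.values():
              mass = sum(group.values())
              exp_h += mass * entropy({s: q / mass for s, q in group.items()})
          return exp_h

      # Hypothetical example: three fault states, two tests with pass/fail outcomes
      prior = {"ok": 0.7, "sensor_fault": 0.2, "actuator_fault": 0.1}
      outcomes = {
          "t1": {"ok": "pass", "sensor_fault": "fail", "actuator_fault": "pass"},
          "t2": {"ok": "pass", "sensor_fault": "pass", "actuator_fault": "fail"},
      }

      best = min(outcomes, key=lambda t: expected_entropy_after(t, prior, outcomes))
      print("prior entropy: %.3f bits" % entropy(prior))
      print("apply %s first (lowest expected remaining entropy)" % best)

    Repeating this selection after each observed outcome yields a diagnostic tree; merging identical belief states instead of duplicating them is what produces the digraph representation with reduced storage that the abstract describes.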

  8. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    SciTech Connect

    Cheung, Howard; Braun, James E.

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  9. High and Low Temperature Oceanic Detachment Faults

    NASA Astrophysics Data System (ADS)

    Titarenko, Sofya; McCaig, Andrew

    2013-04-01

    One of the most important discoveries in Plate Tectonics in the last ten years is a "detachment mode" of seafloor spreading. Up to 50% of the Atlantic seafloor has formed by a combination of magmatism and slip on long-lived, convex-up detachment faults, forming oceanic core complexes (OCC). Two end-member types of OCC can be defined: The Atlantis Bank on the Southwest Indian Ridge is a high temperature OCC sampled by ODP Hole 735B. Deformation was dominated by crystal-plastic flow both above and below the solidus at 800-950 °C, over a period of around 200 ka. In contrast, the Atlantis Massif at 30°N in the Atlantic, sampled by IODP Hole 1309D, is a low temperature OCC in which crystal-plastic deformation of gabbro is very rare and greenschist-facies deformation was localised onto talc-tremolite-chlorite schists in serpentinite, and breccia zones in gabbro and diabase. The upper 100 m of Hole 1309D contains about 43% diabase intruded into hydrated fault breccias. This detachment fault zone can be interpreted as a dyke-gabbro transition, which was originally (before flexural unroofing) a lateral boundary between active hydrothermal circulation in the fault zone and hangingwall, and intrusion of gabbroic magma in the footwall. Thus a major difference between high and low temperature detachment faults may be cooling of the latter by active hydrothermal circulation. 2-D thermal modelling suggests that if a detachment fault is formed in a magmatically robust segment of a slow spreading ridge, high temperature mylonites can be formed for 1-2 ka provided there is no significant hydrothermal cooling of the fault zone. In contrast, if the fault zone is held at temperatures of 400 °C by fluid circulation, cooling of the upper 1 km of the fault footwall occurs far too rapidly for extensive mylonites to form. Our models are consistent with published cooling rate data from geospeedometry and isotopic closure temperatures. The control on this process is likely a combination of geometry and timing of deformation; if the fault zone forms within a large semi-molten gabbro body it will be isolated from hydrothermal fluid, whereas if a series of small melt bodies collect in the footwall of a permeable detachment fault, they will cool rapidly. A corollary of our model is that at slow spreading ridges the depth of melt lenses and hence the dyke-gabbro transition is determined not by spreading rate (as has been suggested at fast spreading ridges) but by the effective depth of high permeability and hence hydrothermal circulation. In actively faulting environments permeability can exist to greater depths, and magma can only easily rise above these depths as dykes or volcanics. The type of detachment fault formed may depend on whether detachment faults nucleate in a robust magmatic system where they can root into a melt zone, or if magma collects in the footwall of an active fault.

  10. WFSD fault monitoring using active seismic source

    NASA Astrophysics Data System (ADS)

    Yang, W.; Ge, H.; Wang, B.; Yuan, S.; Song, L.

    2010-12-01

    The Wenchuan Fault Scientific Drilling (WFSD) project is a rapid-response drilling project following the great Wenchuan earthquake. It focuses on the fault structure, the physical mechanism of the earthquake, fluids and in-situ stress, the energy budget, and related topics. Temporal variations of stress and of physical properties in the fault zone are important for understanding earthquake physics, especially while the fault is still undergoing post-seismic recovery or stress modification. Seismic velocity is a good indicator of the mechanical and stress state of the medium within the fault zone. After the great Wenchuan Ms 8.0 earthquake of May 12, 2008, we built a fault dynamic monitoring system using an active seismic source across the WFSD fault. It consists of a 10-ton accurately controlled eccentric mass source and eight receivers that continuously monitor the seismic velocity across the fault zone. Combining these data with aftershock data, we attempt to monitor the fault recovery and some aftershock physical processes. The observatory is located near the middle of the Longmenshan range-front fault, in Mianzhu, Sichuan Province. The No. 3 hole of WFSD is on the survey line near the No. 4 receiver. The source and receiver sites were carefully prepared, and all instruments were installed to ensure the system's repeatability. Seismic velocity across the fault zone was monitored with continuous observation. The recording system consists of Guralp-40T short-period seismometers and RefTek-130B recorders, continuously GPS-timed to within 20 µs. The active source has run since June 20, 2009. It is operated routinely at night, working continuously from 21:00 to 02:00 the next day. So far, we have obtained almost one year of recordings. The seismic velocity variations may be caused by changes in the mechanical properties of the fault-zone medium, fault stress, and fluids, as well as by earth tides, barometric pressure, and rainfall. Deconvolution, stacking, and cross-correlation analysis were used for the velocity analysis. The results show that the relationship between seismic velocity changes and aftershock events is very complicated. An earthquake of Ms 5.6 occurred at 02:03 on the morning of June 30, 2009, very close to the observatory site. A time delay of 5-9 ms, corresponding to a 0.3% relative decrease in direct S-wave velocity, was observed. These velocity variations are much larger than the possible variations caused by barometric pressure, the solid earth tide, and instrument factors. We speculate that the velocity variations are caused by the co-seismic effects of the aftershock. The experiment shows that the accurately controlled eccentric mass source is suitable for fault monitoring. Extensive stacking (a few days of recordings) was required to increase the S/N, so the time resolution is not yet high enough to analyze the fine details of aftershock physical processes. Some new measurements (continuous GPS, cross-hole ultrasonics) are planned. The observation data will be analyzed in detail and combined with WFSD core and downhole measurements to obtain the stress variation and fracture deformation information needed for WFSD fault dynamic analysis.
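
    The delay measurements described above come down to estimating the lag that best aligns a repeated source signal recorded at different times; a minimal way to do this (illustrative only, with a synthetic trace and an assumed sampling rate, not the WFSD processing chain) is to locate the peak of the cross-correlation between a reference stack and a later stack.

      import numpy as np

      fs = 1000.0                                   # sampling rate (Hz), assumed
      t = np.arange(0, 2.0, 1.0 / fs)

      # Synthetic "direct S" pulse and a copy delayed by 7 ms with added noise
      pulse = np.exp(-((t - 0.5) / 0.02) ** 2) * np.sin(2 * np.pi * 25 * (t - 0.5))
      rng = np.random.default_rng(0)
      reference = pulse + 0.02 * rng.standard_normal(t.size)
      delayed = np.roll(pulse, 7) + 0.02 * rng.standard_normal(t.size)

      # Cross-correlate and convert the peak lag to a time delay
      xc = np.correlate(delayed, reference, mode="full")
      lag = np.argmax(xc) - (t.size - 1)
      print("estimated delay: %.1f ms" % (1e3 * lag / fs))

    In practice the stacked traces of different days are correlated against a long-term reference in the same way, and the lag of the direct S arrival is converted to a relative velocity change over the known source-receiver path.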

  11. The Lower Tagus Valley (LTV) Fault System

    NASA Astrophysics Data System (ADS)

    Besana-Ostman, G. M.; Fereira, H.; Pinheiro, A.; Falcao Flor, A. P.; Nemser, E.; Villanova, S. P.; Fonseca, J. D.

    2010-05-01

    The LTV fault and its associated historical seismic activity have been the focus of several scientific studies in Portugal. There are at least three historical earthquakes associated with the LTV fault, in 1344, 1531, and 1909. Magnitude estimates for these earthquakes range from 6.5 to 7.0. They caused widespread damage throughout the Lower Tagus Valley region with intensities ranging from VIII to X from Lisbon to Entroncamento. During the great 1755 earthquake, the LTV fault was likewise proposed to have ruptured coseismically. Either the Azambuja fault or the Vila Franca de Xira fault has been suggested as the origin of the 1909 earthquake. Trenching activities, together with borehole data analyses, geophysical investigations, and seismic hazard assessments, were undertaken in the LTV in recent years. Complex trench features along the excavated sections were argued to be either fault- or erosion-related phenomena. Borehole data and seismic profiles indicate subsurface structures within the Lower Tagus Valley and adjacent areas. Furthermore, recent attempts to improve seismic hazard assessment indicate that the highest values in Portugal for 10% probability of exceedance in 50 years correspond to the greater Lisbon area, with the LTV fault as the most probable source. Considering the above, efforts are being made to acquire more information about the location of the LTV seismic source, taking into account the presence of extensive erosion and/or deposition processes within the valley, densely populated urban areas, heavily forested regions, and flooded sections such as the Tagus estuary. Results from recent mapping along the LTV reveal surface faulting that has left-laterally displaced numerous geomorphic landforms within the Lower Tagus River valley. The mapped trace shows clear evidence of left-lateral displacement and deformation within the valley, transecting the river, its tributaries, and innumerable young terraces. The trace has been mapped by analyzing topographic maps, aerial photographs, and river systems together with other remotely sensed data. Active fault-related features that were identified include fault scarps, pressure ridges, pull-apart basins, saddles, and linear valleys. Limited visual field investigation has also been undertaken to verify modifications that post-date the aerial photos, quantify elevation differences across the fault, and possibly evaluate the cumulative lateral displacements. Thus, the newly identified traces of an active fault in the LTV correspond to a left-lateral fault along the Lower Tagus floodplains striking parallel to the principal structural trend (NNE-SSW) in the region. This trace clearly indicates continued tectonic movement along the LTV fault during the Holocene. Taking into account the newly mapped location and length of the active trace, trenching work is being planned to determine recurrence intervals along the LTV fault, while further mapping of its possible extension and of other related active structures is underway. Moreover, new estimates of slip rate along this structure will result from this study and can be used for an improved seismic hazard assessment for the region.

  12. A complete hydro-climate model chain to investigate the influence of sea surface temperature on recent hydroclimatic variability in subtropical South America (Laguna Mar Chiquita, Argentina)

    NASA Astrophysics Data System (ADS)

    Troin, Magali; Vrac, Mathieu; Khodri, Myriam; Caya, Daniel; Vallet-Coulomb, Christine; Piovano, Eduardo; Sylvestre, Florence

    2015-06-01

    During the 1970s, Laguna Mar Chiquita (Argentina) experienced a dramatic hydroclimatic anomaly, with a substantial rise in its level. Precipitation is the dominant driving factor in lake level fluctuations. The present study investigates the potential role of remote forcing through global sea surface temperature (SST) fields in modulating recent hydroclimatic variability in Southeastern South America, and especially over the Laguna Mar Chiquita region. Daily precipitation and temperature are extracted from a multi-member LMDz atmospheric general circulation model (AGCM) ensemble of simulations forced by HadISST1 observed time-varying global SST and sea-ice boundary conditions from 1950 to 2005. The members of the ensemble differ only in their atmospheric initial conditions. Statistical downscaling (SD) is used to adjust precipitation and temperature from the LMDz ensemble mean to the station scale over the basin. A coupled basin-lake hydrological model (cpHM) then uses the LMDz-downscaled (LMDz-SD) climate variables as input to simulate the lake behavior. The results indicate that the long-term lake level trend is fairly well depicted by the LMDz-SD-cpHM simulations. The 1970s level rise and high-level conditions are generally well captured in timing and in magnitude when SST-forced AGCM-SD variables are used to drive the cpHM. As the LMDz simulations are forced solely with the observed sea surface conditions, the global SST appears to influence the lake level variations of Laguna Mar Chiquita. This study also shows that the AGCM-SD-cpHM model chain is a useful approach for evaluating long-term lake level fluctuations in response to projected climate changes.

  13. A complete hydro-climate model chain to investigate the influence of sea surface temperature on recent hydroclimatic variability in subtropical South America (Laguna Mar Chiquita, Argentina)

    NASA Astrophysics Data System (ADS)

    Troin, Magali; Vrac, Mathieu; Khodri, Myriam; Caya, Daniel; Vallet-Coulomb, Christine; Piovano, Eduardo; Sylvestre, Florence

    2016-03-01

    During the 1970s, Laguna Mar Chiquita (Argentina) experienced a dramatic hydroclimatic anomaly, with a substantial rise in its level. Precipitation is the dominant driving factor in lake level fluctuations. The present study investigates the potential role of remote forcing through global sea surface temperature (SST) fields in modulating recent hydroclimatic variability in Southeastern South America, and especially over the Laguna Mar Chiquita region. Daily precipitation and temperature are extracted from a multi-member LMDz atmospheric general circulation model (AGCM) ensemble of simulations forced by HadISST1 observed time-varying global SST and sea-ice boundary conditions from 1950 to 2005. The members of the ensemble differ only in their atmospheric initial conditions. Statistical downscaling (SD) is used to adjust precipitation and temperature from the LMDz ensemble mean to the station scale over the basin. A coupled basin-lake hydrological model (cpHM) then uses the LMDz-downscaled (LMDz-SD) climate variables as input to simulate the lake behavior. The results indicate that the long-term lake level trend is fairly well depicted by the LMDz-SD-cpHM simulations. The 1970s level rise and high-level conditions are generally well captured in timing and in magnitude when SST-forced AGCM-SD variables are used to drive the cpHM. As the LMDz simulations are forced solely with the observed sea surface conditions, the global SST appears to influence the lake level variations of Laguna Mar Chiquita. This study also shows that the AGCM-SD-cpHM model chain is a useful approach for evaluating long-term lake level fluctuations in response to projected climate changes.

  14. PASADO - ICDP Deep Drilling at Laguna Potrok Aike (Argentina): A 50 ka Record of Increasing Environmental Dynamics

    NASA Astrophysics Data System (ADS)

    Zolitschka, Bernd; Anselmetti, Flavio; Ariztegui, Daniel; Francus, Pierre; Gebhardt, Catalina; Hahn, Annette; Kliem, Pierre; Lücke, Andreas; Ohlendorf, Christian; Schäbitz, Frank; Wastegård, Stefan

    2010-05-01

    Laguna Potrok Aike, located in the South-Patagonian province of Santa Cruz (52°58'S, 70°23'W), was formed by a volcanic (maar) eruption in the late Quaternary Pali Aike Volcanic Field several hundred thousand years ago. This archive holds a unique record of paleoclimatic and paleoecological variability from a region sensitive to variations in southern hemispheric wind and pressure systems, which provide a significant cornerstone for the understanding of the entire global climate system. Moreover, Laguna Potrok Aike is close to many active volcanoes, allowing a better understanding of the history of volcanism in the Pali Aike Volcanic Field as well as in the Andean mountain chain, the latter located less than 150 km to the west. Finally, Patagonia is the source region of eolian dust blown from the South American continent into the South Atlantic and onto the Antarctic ice sheet. The currently ongoing global climate change and the threat of volcanic hazards as well as of regional dust storms are of increasing socio-economic relevance and thus challenging scientific themes that are tackled for southernmost South America with an interdisciplinary research approach in the framework of the ICDP-funded "Potrok Aike Maar Lake Sediment Archive Drilling Project" (PASADO). Using the GLAD800 drilling platform, seven holes were drilled in the southern spring of 2008. A total of 510 m of lacustrine sediments were recovered by an international scientific team from the central 100 m deep basin with an excellent core recovery rate of 94.4%. The reference profile with a composite depth of 106 m consists of undisturbed laminated and sand-layered lacustrine silts with an increasing number of coarse gravel layers, turbidites and homogenites with depth. Below 80 m composite depth two mass-movement deposits (10 m and 5 m in thickness) are recorded. These deposits show tilted and distorted layers as well as nodules of fine-grained sediments and randomly distributed gravel. Such features either indicate an increased seismicity that cannot be completely excluded for this late Quaternary backarc volcanic field, or they are the result of hydrologically induced lake level variations and hence relate to changes in hydrological conditions within the catchment area. Intercalated throughout the record are 24 macroscopically visible volcanic ash layers that document the regional volcanic history and open the possibility to establish an independent time control through tephrochronology supported by Ar/Ar dating. Moreover, these isochrones potentially act as links to marine sediment records from the South Atlantic and to Antarctic ice cores. Preliminary extrapolation of the mean sedimentation rate of 1.1 mm/a determined for the upper 16 ka indicates that a continuous and high quality record may go back in time to approximately 50 ka. A comparable time frame is supported by first radiocarbon dates obtained from aquatic mosses.

  15. Hydrologic, water-quality, and biological assessment of Laguna de las Salinas, Ponce, Puerto Rico, January 2003-September 2004

    USGS Publications Warehouse

    Soler-López, Luis R.; Gómez-Gómez, Fernando; Rodríguez-Martínez, Jesús

    2005-01-01

    The Laguna de Las Salinas is a shallow, 35-hectare, hypersaline lagoon (depth less than 1 meter) in the municipio of Ponce, located on the southern coastal plain of Puerto Rico. Hydrologic, water-quality, and biological data in the lagoon were collected between January 2003 and September 2004 to establish baseline conditions. During the study period, rainfall was about 1,130 millimeters, with much of the rain recorded during three distinct intense events. The lagoon is connected to the sea by a shallow, narrow channel. Subtle tidal changes, combined with low rainfall and high evaporation rates, kept lagoon salinity above that of the sea throughout most of the study. Water-quality properties measured on-site (temperature, pH, dissolved oxygen, specific conductance, and Secchi disk transparency) exhibited temporal rather than spatial variation. Although all physical parameters were in compliance with current regulatory standards for Puerto Rico, hyperthermic and hypoxic conditions were recorded on isolated occasions. Nutrient concentrations were relatively low and in compliance with current regulatory standards (less than 5.0 and 1.0 milligrams per liter for total nitrogen and total phosphorus, respectively). The average total nitrogen concentration was 1.9 milligrams per liter and the average total phosphorus concentration was 0.4 milligram per liter. Total organic carbon concentrations ranged from 12.0 to 19.0 milligrams per liter. Chlorophyll a was the predominant form of photosynthetic pigment in the water. The average chlorophyll a concentration was 13.4 micrograms per liter. Chlorophyll b was detected (detection limit 0.10 microgram per liter) only twice during the study. About 90 percent of the primary productivity in the Laguna de Las Salinas was generated by periphyton, such as algal mats, and macrophytes, such as seagrasses. Of the average net productivity of 13.6 grams of oxygen per cubic meter per day derived from the diel study, the periphyton and macrophytes produced 12.3 grams per cubic meter per day; about 1.3 grams (about 10 percent) were produced by the phytoplankton (the plant and algal component of plankton). The total respiration rate was 59.2 grams of oxygen per cubic meter per day. The respiration rate ascribed to the plankton (all organisms floating through the water column) averaged about 6.2 grams of oxygen per cubic meter per day (about 10 percent), whereas the respiration rate of all other organisms averaged 53.0 grams of oxygen per cubic meter per day (about 90 percent). Plankton gross productivity was 7.5 grams per cubic meter per day; the gross productivity of the entire community averaged 72.8 grams per cubic meter per day. Fecal coliform bacteria counts were generally less than 200 colonies per 100 milliliters; the highest concentration was 600 colonies per 100 milliliters.
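    The productivity figures reported above are internally consistent with the usual oxygen mass balance, gross productivity = net productivity + respiration; the following minimal check (values in grams of oxygen per cubic meter per day, taken directly from the abstract) illustrates this.

    # Consistency check of the reported diel-study metabolism figures.
    net_community, resp_community = 13.6, 59.2
    net_plankton, resp_plankton = 1.3, 6.2

    print(net_community + resp_community)   # 72.8, the reported community gross productivity
    print(net_plankton + resp_plankton)     # 7.5, the reported plankton gross productivity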

  16. Fault structure and mechanics of the Hayward Fault, California, from double-difference earthquake locations

    NASA Astrophysics Data System (ADS)

    Waldhauser, Felix; Ellsworth, William L.

    2002-03-01

    The relationship between small-magnitude seismicity and large-scale crustal faulting along the Hayward Fault, California, is investigated using a double-difference (DD) earthquake location algorithm. We used the DD method to determine high-resolution hypocenter locations of the seismicity that occurred between 1967 and 1998. The DD technique incorporates catalog travel time data and relative P and S wave arrival time measurements from waveform cross correlation to solve for the hypocentral separation between events. The relocated seismicity reveals a narrow, near-vertical fault zone at most locations. This zone follows the Hayward Fault along its northern half and then diverges from it to the east near San Leandro, forming the Mission trend. The relocated seismicity is consistent with the idea that slip from the Calaveras Fault is transferred over the Mission trend onto the northern Hayward Fault. The Mission trend is not clearly associated with any mapped active fault as it continues to the south and joins the Calaveras Fault at Calaveras Reservoir. In some locations, discrete structures adjacent to the main trace are seen, features that were previously hidden in the uncertainty of the network locations. The fine structure of the seismicity suggests that the fault surface on the northern Hayward Fault is curved or that the events occur on several substructures. Near San Leandro, where the more westerly striking trend of the Mission seismicity intersects with the surface trace of the (aseismic) southern Hayward Fault, the seismicity remains diffuse after relocation, with strong variation in focal mechanisms between adjacent events indicating a highly fractured zone of deformation. The seismicity is highly organized in space, especially on the northern Hayward Fault, where it forms horizontal, slip-parallel streaks of hypocenters of only a few tens of meters width, bounded by areas almost devoid of seismic activity. During the interval from 1984 to 1998, when digital waveforms are available, we find that fewer than 6.5% of the earthquakes can be classified as repeating earthquakes, events that rupture the same fault patch more than once. These are most commonly located in the shallow creeping part of the fault, or within the streaks at greater depth. The slow repeat rate of 2-3 times within the 15-year observation period for events with magnitudes around M = 1.5 is indicative of a low slip rate or a high stress drop. The absence of microearthquakes over large, contiguous areas of the northern Hayward Fault plane in the depth interval from ~5 to 10 km and the concentrations of seismicity at these depths suggest that the aseismic regions are either locked or retarded and are storing strain energy for release in future large-magnitude earthquakes.
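    For readers unfamiliar with the double-difference formulation, the quantity minimized for each event pair (i, j) observed at a common station k is the residual between observed and predicted differential travel times, so path and station terms common to both events largely cancel. The Python fragment below is only a schematic illustration of that residual; the numerical values are invented.

    def dd_residual(t_obs_i, t_obs_j, t_calc_i, t_calc_j):
        """dr_k^ij = (t_i - t_j)_observed - (t_i - t_j)_predicted at station k."""
        return (t_obs_i - t_obs_j) - (t_calc_i - t_calc_j)

    # Observed differential time (catalog picks or waveform cross-correlation)
    # versus the difference predicted from the current hypocenter estimates:
    print(dd_residual(12.431, 12.385, 12.440, 12.401))   # residual in seconds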

  17. Fault structure and mechanics of the Hayward Fault, California from double-difference earthquake locations

    USGS Publications Warehouse

    Waldhauser, F.; Ellsworth, W.L.

    2002-01-01

    The relationship between small-magnitude seismicity and large-scale crustal faulting along the Hayward Fault, California, is investigated using a double-difference (DD) earthquake location algorithm. We used the DD method to determine high-resolution hypocenter locations of the seismicity that occurred between 1967 and 1998. The DD technique incorporates catalog travel time data and relative P and S wave arrival time measurements from waveform cross correlation to solve for the hypocentral separation between events. The relocated seismicity reveals a narrow, near-vertical fault zone at most locations. This zone follows the Hayward Fault along its northern half and then diverges from it to the east near San Leandro, forming the Mission trend. The relocated seismicity is consistent with the idea that slip from the Calaveras Fault is transferred over the Mission trend onto the northern Hayward Fault. The Mission trend is not clearly associated with any mapped active fault as it continues to the south and joins the Calaveras Fault at Calaveras Reservoir. In some locations, discrete structures adjacent to the main trace are seen, features that were previously hidden in the uncertainty of the network locations. The fine structure of the seismicity suggests that the fault surface on the northern Hayward Fault is curved or that the events occur on several substructures. Near San Leandro, where the more westerly striking trend of the Mission seismicity intersects with the surface trace of the (aseismic) southern Hayward Fault, the seismicity remains diffuse after relocation, with strong variation in focal mechanisms between adjacent events indicating a highly fractured zone of deformation. The seismicity is highly organized in space, especially on the northern Hayward Fault, where it forms horizontal, slip-parallel streaks of hypocenters of only a few tens of meters width, bounded by areas almost devoid of seismic activity. During the interval from 1984 to 1998, when digital waveforms are available, we find that fewer than 6.5% of the earthquakes can be classified as repeating earthquakes, events that rupture the same fault patch more than once. These are most commonly located in the shallow creeping part of the fault, or within the streaks at greater depth. The slow repeat rate of 2-3 times within the 15-year observation period for events with magnitudes around M = 1.5 is indicative of a low slip rate or a high stress drop. The absence of microearthquakes over large, contiguous areas of the northern Hayward Fault plane in the depth interval from ~5 to 10 km and the concentrations of seismicity at these depths suggest that the aseismic regions are either locked or retarded and are storing strain energy for release in future large-magnitude earthquakes.

  18. Characterizing Complex Faulting and Near Fault Velocity and Attenuation Heterogeneity Along the New Madrid Seismic Zone

    NASA Astrophysics Data System (ADS)

    DeShon, H. R.; Bisrat, S. T.; Powell, C. A.

    2011-12-01

    In order to more accurately assess the seismic hazard posed by intraplate earthquakes, we need to understand the interaction of compositional, thermal, hydrological, and mechanical processes along these faults. We summarize ongoing research that aims to better characterize the seismotectonics and the near-fault velocity and attenuation structure of the New Madrid Seismic Zone (NMSZ). An outstanding question in intraplate settings, including the NMSZ, is the relationship between fluids and faulting. We present recently updated P- and S-wave velocity and attenuation models, and Vp/Vs solutions, derived using double-difference (DD) tomography methods. Vp/Vs models are derived in two fashions: by directly dividing Vp by Vs and by inverting for Vp/Vs directly using S-P times. Similar to previously published solutions, the outstanding features are 1) reduced Vp and Vs southeast of the NE-trending Axial (or Cottonwood Grove) fault and underlying the southern NW-trending Reelfoot fault, which is interpreted as heavily fractured upper crust, and 2) variable near-fault Vp/Vs along the major fault arms. The new models are used to relocate local earthquakes recorded by the PANDA experiment in the 1990s and by the Cooperative New Madrid Seismic Network from 1995-2011. The high-resolution relocations have relative (absolute) uncertainty of 200-400 m (400-1000 m). We show that the four NMSZ faults exhibit extensive internal complexity. Waveform cross-correlation additionally reveals localized swarm and repeating earthquake activity. This microseismicity appears to localize along suspected fault intersections, with the exception of one region along the southern Reelfoot Fault that has hosted repeated swarm activity. Swarm and repeating earthquake activity may be due to local stress perturbations or changes in pore fluid pressure. All of these features are consistent with a well-developed, active transpressional fault system that likely continues to pose significant hazard to the central United States.
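    The second of the two Vp/Vs approaches mentioned above (inverting directly using S-P times) can be illustrated with a Wadati-type relation: for a simple medium, ts - tp = (Vp/Vs - 1) * tp, so the slope of S-P time against P travel time gives Vp/Vs. The sketch below uses invented numbers and is not the authors' tomographic inversion, which solves for Vp/Vs on a 3-D grid.

    import numpy as np

    tp = np.array([2.1, 3.4, 4.8, 6.0, 7.5])                 # P travel times (s), synthetic
    ts_minus_tp = np.array([1.58, 2.55, 3.60, 4.50, 5.63])   # S-P times (s), synthetic

    slope, _ = np.polyfit(tp, ts_minus_tp, 1)                # Wadati-style line fit
    print(f"Vp/Vs ~ {slope + 1:.2f}")                        # ~1.75 for these values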

  19. Aftershocks illuminate the 2011 Mineral, Virginia, earthquake causative fault zone and nearby active faults

    USGS Publications Warehouse

    Horton, Jr., J. Wright; Shah, Anjana K.; McNamara, Daniel E.; Snyder, Stephen L.; Carter, Aina M.

    2015-01-01

    Deployment of temporary seismic stations after the 2011 Mineral, Virginia (USA), earthquake produced a well-recorded aftershock sequence. The majority of aftershocks are in a tabular cluster that delineates the previously unknown Quail fault zone. Quail fault zone aftershocks range from ~3 to 8 km in depth and are in a 1-km-thick zone striking ~036° and dipping ~50°SE, consistent with a 028°, 50°SE main-shock nodal plane having mostly reverse slip. This cluster extends ~10 km along strike. The Quail fault zone projects to the surface in gneiss of the Ordovician Chopawamsic Formation just southeast of the Ordovician-Silurian Ellisville Granodiorite pluton tail. The following three clusters of shallow (<3 km) aftershocks illuminate other faults. (1) An elongate cluster of early aftershocks, ~10 km east of the Quail fault zone, extends 8 km from Fredericks Hall, strikes ~035°-039°, and appears to be roughly vertical. The Fredericks Hall fault may be a strand or splay of the older Lakeside fault zone, which to the south spans a width of several kilometers. (2) A cluster of later aftershocks ~3 km northeast of Cuckoo delineates a fault near the eastern contact of the Ordovician Quantico Formation. (3) An elongate cluster of late aftershocks ~1 km northwest of the Quail fault zone aftershock cluster delineates the northwest fault (described herein), which is temporally distinct, dips more steeply, and has a more northeastward strike. Some aftershock-illuminated faults coincide with preexisting units or structures evident from radiometric anomalies, suggesting tectonic inheritance or reactivation.

  20. Late Holocene earthquakes on the Toe Jam Hill fault, Seattle fault zone, Bainbridge Island, Washington

    USGS Publications Warehouse

    Nelson, A.R.; Johnson, S.Y.; Kelsey, H.M.; Wells, R.E.; Sherrod, B.L.; Pezzopane, S.K.; Bradley, L.-A.; Koehler, R. D., III; Bucknam, R.C.

    2003-01-01

    Five trenches across a Holocene fault scarp yield the first radiocarbon-measured earthquake recurrence intervals for a crustal fault in western Washington. The scarp, the first to be revealed by laser imagery, marks the Toe Jam Hill fault, a north-dipping backthrust to the Seattle fault. Folded and faulted strata, liquefaction features, and forest soil A horizons buried by hanging-wall-collapse colluvium record three, or possibly four, earthquakes between 2500 and 1000 yr ago. The most recent earthquake is probably the 1050-1020 cal. (calibrated) yr B.P. (A.D. 900-930) earthquake that raised marine terraces and triggered a tsunami in Puget Sound. Vertical deformation estimated from stratigraphic and surface offsets at trench sites suggests late Holocene earthquake magnitudes near M7, corresponding to surface ruptures >36 km long. Deformation features recording poorly understood latest Pleistocene earthquakes suggest that they were smaller than late Holocene earthquakes. Postglacial earthquake recurrence intervals based on 97 radiocarbon ages, most on detrital charcoal, range from ~12,000 yr to as little as a century or less; corresponding fault-slip rates are 0.2 mm/yr for the past 16,000 yr and 2 mm/yr for the past 2500 yr. Because the Toe Jam Hill fault is a backthrust to the Seattle fault, it may not have ruptured during every earthquake on the Seattle fault. But the earthquake history of the Toe Jam Hill fault is at least a partial proxy for the history of the rest of the Seattle fault zone.

  1. Reconstructing paleoenvironmental conditions during the past 50 ka from the biogeochemical record of Laguna Potrok Aike, southern Patagonia

    NASA Astrophysics Data System (ADS)

    Hahn, A.; Rosén, P.; Kliem, P.; Ohlendorf, C.; Zolitschka, B.

    2011-12-01

    Total organic carbon (TOC), total inorganic carbon (TIC) and biogenic silica (BSi), assessed by Fourier transform infrared spectroscopy (FTIRS), are used to reconstruct the environmental history of Laguna Potrok Aike at high resolution over the past 50 kyr. During the Holocene, warmer conditions led to increased productivity, reflected in higher TOC and BSi contents. Calcite precipitation initiated around 9 ka cal. BP, probably due to supersaturation induced by lake level lowering; sediments deposited before this period are assumed to be carbonate-free because high lake-level conditions prevailed. During the Glacial, increased runoff linked to permafrost, precipitation related to stronger cyclonic activity, and reduced evaporation caused higher lake levels. Moreover, during cold glacial conditions lake productivity was low and organic matter was mainly of algal or cyanobacterial origin, as indicated by generally low TOC and C/N values. During interstadials, such as the Antarctic A-events and the Younger Dryas, TOC contents appear to rise. The glacial C/N ratios and their correlation with TOC concentrations indicate that aquatic moss blooms probably drive these increases in TOC. Aquatic mosses grow if surface water temperatures rise due to warmer climatic conditions and/or the development of lake water stratification; the latter may occur if wind speeds are low and meltwater inflow causes higher density gradients. Thawing of permafrost during warmer periods could lead to considerable rises in lake level, which would contribute to the preservation of organic material and may explain why higher C/N and TOC values occur at the end of Antarctic A-events. For the uppermost 25 m, the BSi profile correlates strongly with the TOC profile. In deeper horizons, however, there are indications that the BSi/TOC ratio increased. This part of the record is dominated by mass movement events, which may have supplied nutrients and thus triggered diatom blooms.

  2. [Species diversity and culicid nuisance in the villages of N'gatty and Allaba in the lagoon area of Ivory Coast].

    PubMed

    Fofana, D; Konan, K L; Djohan, V; Konan, Y L; Koné, A B; Doannio, J M C; N'goran, K E

    2010-12-01

    Entomological surveys were undertaken between June and December 2006 in N'gatty and Allaba, two villages located in southern Ivory Coast in a lagoon area of the Dabou department. These villages contain large swampy areas, which have favored the proliferation of anthropophilic Culicidae. Mosquitoes were collected at the preimaginal stage during larval prospecting and at the adult stage through human landing catches. Larval collections were made using the classic "dipping" method; larvae were identified to the genus level and then reared in the laboratory so that adults could be identified. Adult collections were made once a month, over three consecutive nights, by human landing catch inside houses, and adults were identified to the species level. Eight genera of mosquitoes were collected in the two villages: Aedes, Anopheles, Coquillettidia, Culex, Eretmapodites, Mansonia, Toxorhynchites and Uranotaenia. Twenty-four species were recorded during this study. The genus Mansonia was the most abundant, accounting for 86% (N = 15,811) and 80% (N = 1,385) of captures in N'gatty and Allaba, respectively. The average biting rate differed between the villages, estimated at 308 bites per human per night (b/h/n) in N'gatty and 72 b/h/n in Allaba. In both villages, mosquito nuisance was mainly due to Mansonia, with 264 b/h/n and 58 b/h/n in N'gatty and Allaba, respectively. Anopheles gambiae s.l. averaged 12 b/h/n in N'gatty and 2 b/h/n in Allaba. PMID:20632142

  3. Holocene History of the Chocó Rain Forest from Laguna Piusbi, Southern Pacific Lowlands of Colombia

    NASA Astrophysics Data System (ADS)

    Behling, Hermann; Hooghiemstra, Henry; Negret, Alvaro José

    1998-11-01

    A high-resolution pollen record from a 5-m-long sediment core from the closed-lake basin Laguna Piusbi in the southern Colombian Pacific lowlands of Chocó, dated by 11 AMS 14C dates that range from ca. 7670 to 220 14C yr B.P., represents the first Holocene record from the Chocó rain forest area. The interval between 7600 and 6100 14C yr B.P. (500-265 cm), composed of sandy clays that accumulated during the initial phase of lake formation, is almost barren of pollen. Fungal spores and the presence of herbs and disturbance taxa suggest the basin was at least temporarily inundated and the vegetation was open. The closed lake basin might have formed during an earthquake, probably about 4400 14C yr B.P. From the interval of about 6000 14C yr B.P. onwards, 200 different pollen and spore types were identified in the core, illustrating a diverse floristic composition of the local rain forest. Main taxa are Moraceae/Urticaceae, Cecropia, Melastomataceae/Combretaceae, Acalypha, Alchornea, Fabaceae, Mimosa, Piper, Protium, Sloanea, Euterpe/Geonoma, Socratea, and Wettinia. Little change took place during that time interval. Compared to the pollen records from the rain forests of the Colombian Amazon basin and adjacent savannas, the Chocó rain forest ecosystem has been very stable during the late Holocene. Paleoindians probably lived there at least since 3460 14C yr B.P. Evidence of agricultural activity, shown by cultivation of Zea mays surrounding the lake, spans the last 1710 yr. Past and present very moist climate and little human influence are important factors in maintaining the stable ecosystem and high biodiversity of the Chocó rain forest.

  4. Destructive and non-destructive density determination: method comparison and evaluation from the Laguna Potrok Aike sedimentary record

    NASA Astrophysics Data System (ADS)

    Fortin, David; Francus, Pierre; Gebhardt, Andrea Catalina; Hahn, Annette; Kliem, Pierre; Lisé-Pronovost, Agathe; Roychowdhury, Rajarshi; Labrie, Jacques; St-Onge, Guillaume; Pasado Science Team

    2013-07-01

    Density measurements play a central role in the characterization of sediment profiles. When working with long records (>100 m), such as those routinely obtained within the framework of the International Continental Scientific Drilling Program, several methods can be used, all of them varying in resolution, time-cost efficiency and sources of measurement error. This paper compares two relatively new non-destructive densitometric methods, CT-scanning and the coherent/incoherent scattering ratio from an Itrax XRF core scanner, with data acquired from a multi-sensor core logger gamma-ray attenuation porosity evaluator (MSCL GRAPE) and with discrete measurements of dry bulk density, wet bulk density and water content. Quality assessment of density measurements is performed at low and high resolution along the Laguna Potrok Aike (LPA) composite sequence. Given its resolution (0.4 mm in our study) and its high signal-to-noise ratio, we conclude that CT-scanning provides a precise, fast and cost-efficient way to determine density variations in long sedimentary records. Although noisier than the CT-scan measurements, the coherent/incoherent ratio from the XRF core scanner also provides a high-resolution, reliable and continuous measure of density variability along the sediment profile. The MSCL GRAPE measurements provide actual density data and have the significant advantage of being completely non-destructive, since acquisition is performed on full cores prior to opening. However, the quality of MSCL GRAPE density measurements can be reduced by the presence of voids within the sediment core tubes, and the discrete dry and wet bulk density measurements suffer from sampling challenges and are time-consuming.
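    As background to the MSCL GRAPE comparison, gamma-ray attenuation densitometry rests on a Beer-Lambert relation, I = I0 * exp(-mu * rho * d), so bulk density follows from the measured attenuation once the core diameter d and the mass attenuation coefficient mu are known. The sketch below only illustrates this principle; the numbers are placeholders, not calibration values from the Laguna Potrok Aike logging.

    import math

    def grape_density(I, I0, mu_cm2_per_g, d_cm):
        """Bulk density (g/cm^3) from attenuated vs. source gamma-ray intensity."""
        return math.log(I0 / I) / (mu_cm2_per_g * d_cm)

    # Illustrative values: ~0.077 cm^2/g is typical for 137Cs gamma energies.
    print(grape_density(I=2.4e4, I0=1.0e5, mu_cm2_per_g=0.077, d_cm=10.0))   # ~1.85 g/cm^3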

  5. Lobomycosis-like disease in wild bottlenose dolphins Tursiops truncatus of Laguna, southern Brazil: monitoring of a progressive case.

    PubMed

    Daura-Jorge, Fábio G; Simões-Lopes, Paulo C

    2011-01-21

    Lobomycosis is a chronic dermal infection affecting humans and small cetaceans. In 1993, a study identified the presence of the etiologic agent of lobomycosis in a resident population of Tursiops truncatus (bottlenose dolphin) in Laguna, Brazil. Until now, no additional information on the persistence or prevalence of this pathogen in this population has been available. The population numbers fewer than 60 animals, and its residency in an impacted lagoon system has raised concerns about its health and viability. Using photo-identification data collected between September 2007 and September 2009, this study evaluated the occurrence of lobomycosis-like disease (LLD) throughout the population. Of 47 adult dolphins and 10 calves identified, 7 (12%) presented some form of epidermal lesion and 5 (9%) had evidence of LLD. The lesions were stable in all but 2 cases, in which progressive development was recorded in a presumed adult female and her calf (referred to here as the LLD pair). During the first few months of observation, the lesion grew slowly and at a constant rate on the adult. In the fourteenth month, however, the growth rate increased rapidly and the first lesions appeared on the calf. Compared with the rest of the population, the LLD pair also presented a different spatial ranging pattern, suggesting a possible social or geographic factor. Current and previous records of LLD or lobomycosis indicate that the disease is endemic in this population. These findings highlight the importance of monitoring both the health of these cetaceans and the quality of their habitat. PMID:21381522

  6. 15,000-yr pollen record of vegetation change in the high altitude tropical Andes at Laguna Verde Alta, Venezuela

    NASA Astrophysics Data System (ADS)

    Rull, Valentí; Abbott, Mark B.; Polissar, Pratigya J.; Wolfe, Alexander P.; Bezada, Maximiliano; Bradley, Raymond S.

    2005-11-01

    Pollen analysis of sediments from a high-altitude (4215 m), Neotropical (9°N) Andean lake was conducted in order to reconstruct local and regional vegetation dynamics since deglaciation. Although deglaciation commenced ˜15,500 cal yr B.P., the area around the Laguna Verde Alta (LVA) remained a periglacial desert, practically unvegetated, until about 11,000 cal yr B.P. At this time, a lycopod assemblage bearing no modern analog colonized the superpáramo. Although this community persisted until ˜6000 cal yr B.P., it began to decline somewhat earlier, in synchrony with cooling following the Holocene thermal maximum of the Northern Hemisphere. At this time, the pioneer assemblage was replaced by a low-diversity superpáramo community that became established ˜9000 cal yr B.P. This replacement coincides with regional declines in temperature and/or available moisture. Modern, more diverse superpáramo assemblages were not established until ˜4600 cal yr B.P., and were accompanied by a dramatic decline in Alnus, probably the result of factors associated with climate, humans, or both. Pollen influx from upper Andean forests is remarkably higher than expected during the Late Glacial and early to middle Holocene, especially between 14,000 and 12,600 cal yr B.P., when unparalleled high values are recorded. We propose that intensification of upslope orographic winds transported lower elevation forest pollen to the superpáramo, causing the apparent increase in tree pollen at high altitude. The association between increased forest pollen and summer insolation at this time suggests a causal link; however, further work is needed to clarify this relationship.

  7. An update of Quaternary faults of central and eastern Oregon

    USGS Publications Warehouse

    Weldon, Ray J., II; Fletcher, D.K.; Weldon, E.M.; Scharer, K.M.; McCrory, P.A.

    2002-01-01

    This is the online version of a CD-ROM publication. We have updated the eastern portion of our previous active fault map of Oregon (Pezzopane, Nakata, and Weldon, 1992) as a contribution to the larger USGS effort to produce digital maps of active faults in the Pacific Northwest region. The 1992 fault map has seen wide distribution and has been reproduced in essentially all subsequent compilations of active faults of Oregon. The new map provides a substantial update of known active or suspected active faults east of the Cascades. Improvements in the new map include (1) many newly recognized active faults, (2) a linked ArcInfo map and reference database, (3) more precise locations for previously recognized faults on shaded relief quadrangles generated from USGS 30-m digital elevation models (DEMs), (4) more uniform coverage resulting in more consistent grouping of the ages of active faults, and (5) a new category of 'possibly' active faults that share characteristics with known active faults, but have not been studied adequately to assess their activity. The distribution of active faults has not changed substantially from the original Pezzopane, Nakata and Weldon map. Most faults occur in the south-central Basin and Range tectonic province that is located in the backarc portion of the Cascadia subduction margin. These faults occur in zones consisting of numerous short faults with similar rates, ages, and styles of movement. Many active faults strongly correlate with the most active volcanic centers of Oregon, including Newberry Craters and Crater Lake.