These are representative sample records from Science.gov related to your search topic.
For comprehensive and current results, perform a real-time search at Science.gov.
1

Revisiting the 23 February 1892 Laguna Salada earthquake  

USGS Publications Warehouse

According to some compilations, the Laguna Salada, Baja California, earthquake of 23 February 1892 ranks among the largest earthquakes in California and Baja California in historic times. Although surface rupture was not documented at the time of the earthquake, recent geologic investigations have identified and mapped a rupture on the Laguna Salada fault that can be associated with high probability with the 1892 event (Mueller and Rockwell, 1995). The only intensity-based magnitude estimate for the earthquake, M 7.8, was made by Strand (1980) based on an interpretation of macroseismic effects and a comparison of isoseismal areas with those from instrumentally recorded earthquakes. In this study we reinterpret original accounts of the Laguna Salada earthquake. We assign modified Mercalli intensity (MMI) values in keeping with current practice, focusing on objective descriptions of damage rather than subjective human response and not assigning MMI values to effects that are now known to be poor indicators of shaking level, such as liquefaction and rockfalls. The reinterpreted isoseismal contours and the estimated magnitude are both significantly smaller than those obtained earlier. Using the method of Bakun and Wentworth (1997), we obtain a magnitude estimate of M 7.2 and an optimal epicenter less than 15 km from the center of the mapped Laguna Salada rupture. The isoseismal contours are elongated toward the northwest, which is qualitatively consistent with a directivity effect, assuming that the fault ruptured from southeast to northwest. We suggest that the elongation may also reflect wave propagation effects, with more efficient propagation of crustal surface (Lg) waves in the direction of the overall regional tectonic fabric.
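For readers unfamiliar with intensity-based magnitude estimation, the sketch below illustrates the general idea behind a Bakun-and-Wentworth-style grid search: each MMI observation is converted to a magnitude through an attenuation relation, trial epicenters are scored by the scatter of the resulting site magnitudes, and the best-fitting point and mean magnitude are reported. The attenuation coefficients, site list, and helper names are illustrative placeholders, not the published 1997 calibration.

```python
import numpy as np

# Illustrative grid search in the spirit of an intensity-based magnitude method.
# The coefficients below are placeholders, NOT the Bakun & Wentworth (1997) values;
# a real application must use regionally calibrated coefficients.
C0, C1, C2, C3 = 1.0, 1.5, 0.0022, 1.7   # hypothetical MMI attenuation terms

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (*p, *q))
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def magnitude_from_mmi(mmi, dist_km):
    """Invert a generic relation MMI = c0 + c1*M - c2*dist - c3*log10(dist) for M."""
    return (mmi - C0 + C2 * dist_km + C3 * np.log10(dist_km)) / C1

def grid_search(sites, trial_epicenters):
    """Return (epicenter, mean magnitude, scatter) for the trial point whose
    site-by-site magnitudes agree best (smallest standard deviation)."""
    best = None
    for ep in trial_epicenters:
        d = [haversine_km(ep, (la, lo)) for la, lo, _ in sites]
        m = [magnitude_from_mmi(mmi, max(di, 1.0)) for (_, _, mmi), di in zip(sites, d)]
        scatter = float(np.std(m))
        if best is None or scatter < best[2]:
            best = (ep, float(np.mean(m)), scatter)
    return best

# Hypothetical intensity observations (lat, lon, MMI) and a coarse trial grid.
sites = [(32.8, -115.5, 7.0), (33.0, -116.2, 6.0), (32.3, -115.3, 8.0)]
trials = [(32.2 + 0.1 * i, -115.8 + 0.1 * j) for i in range(10) for j in range(10)]
print(grid_search(sites, trials))
```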

Hough, S.E.; Elliot, A.

2004-01-01

2

Late Neogene stratigraphy and tectonic control on facies evolution in the Laguna Salada Basin, northern Baja California, Mexico  

Microsoft Academic Search

The Laguna Salada Basin (LSB) in northeastern Baja California records late-Neogene marine incursions in the Salton Trough and progradation of the Colorado River delta. Early subsidence and subsequent tectonic erosion are related to evolution of the Sierra El Mayor detachment fault during late Miocene time (<12Ma). The stratigraphy of uplifted blocks on the east-central margin of the Laguna Salada Basin

A. Martín-Barajas; S. Vázquez-Hernández; A. L. Carreño; J. Helenes; F. Suárez-Vidal; J. Alvarez-Rosales

2001-01-01

3

Complex Fault Interaction in the Yuha Desert  

NASA Astrophysics Data System (ADS)

We determine precise hypocentral locations for over 3,600 aftershocks that occurred in the Yuha Desert (YD) region between the 4 April 2010 Mw 7.2 El Mayor-Cucapah (EMC) earthquake and 14 June 2010 and were originally located by the Southern California Seismic Network (SCSN). To calculate precise hypocenters we used manually identified phase arrivals and cross-correlation delay times in a series of absolute and relative relocation procedures with algorithms including hypoinverse, velest and hypoDD. We used velest to simultaneously invert for station corrections and the best-fitting velocity model for the event and station distribution. Location errors were reduced with this process to ~20 m horizontally and ~80 m vertically. The locations reveal a complex pattern of faulting with en echelon fault segments trending toward the northwest, approximately parallel to the North American-Pacific plate boundary, and en echelon, conjugate features trending to the northeast. The relocated seismicity is highly correlated with the mapped faults that show triggered surface slip in response to the EMC mainshock. Aftershocks are located between depths of 2 km and 11 km, consistent with previous studies of seismogenic thickness in the region. Three-dimensional analysis reveals individual and intersecting fault planes between 5 km and 10 km in the along-strike and along-dip directions. These fault planes remain distinct structures at depth, indicative of conjugate faulting, and do not appear to coalesce onto a through-going fault segment. We observe a complex spatiotemporal migration of aftershocks, with individual fault strands that are often active for relatively short time periods. In addition, events relocated by Hauksson et al. (2012) that occur in the two-year period following the 15 June 2010 M5.7 Ocotillo earthquake show that the majority of seismicity occurred along the Laguna Salada-West branch. At the same time, seismicity along the Laguna Salada-East and other faults in the Yuha Desert abruptly shuts off, suggesting that fault activity is highly sensitive to local stress conditions. To further our investigation, we locate over 15,000 previously unreported aftershocks in the YD during the same time period. For this analysis we detect arrivals using an STA/LTA filter from data continuously recorded on 8 seismometers installed in the YD from 6 April through 14 June 2010. Event association was performed with the Antelope software package. Absolute locations were first determined with hypoinverse using the automated phase picks and the velocity model used in the above relocation procedure. We refined the relative locations using the automated detections and cross-correlation delay times in hypoDD. We use these newly detected earthquakes to further the investigation of fault geometry at the surface and how it relates to fault structure at depth, rheology of the crust, and the spatiotemporal migration patterns within the aftershock distribution.
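For readers who want to see what an STA/LTA detection pass looks like in practice, the snippet below is a minimal sketch using ObsPy's classic STA/LTA trigger. The file name, window lengths, and thresholds are hypothetical illustrations; the study's actual detection parameters are not given in the abstract.

```python
from obspy import read
from obspy.signal.trigger import classic_sta_lta, trigger_onset

# Hypothetical miniSEED file; station and parameters are for illustration only.
st = read("yuha_station.mseed")
tr = st[0]
df = tr.stats.sampling_rate

# Characteristic function: ratio of a short-term average (0.5 s) to a
# long-term average (10 s) of the signal.
cft = classic_sta_lta(tr.data, int(0.5 * df), int(10.0 * df))

# Declare a trigger when the ratio rises above 4 and keep it until it drops below 1.5.
for on, off in trigger_onset(cft, 4.0, 1.5):
    print("candidate arrival at", tr.stats.starttime + on / df)
```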

Kroll, K.; Cochran, E. S.; Richards-Dinger, K. B.; Sumy, D. F.

2012-12-01

4

Laguna Foundation Education Program  

E-print Network

which says: "Keystone tree species in the Laguna." Students who know the answer to the clue may call out. In this game, the species shown on the playing card is not immediately called out. Rather, a clue is called out to excite their students about species and habitats of the Laguna. The images we are receiving

Ravikumar, B.

5

Diversity of halophilic bacteria isolated from Rambla Salada, Murcia (Spain).  

PubMed

In this study we analyzed the diversity of the halophilic bacterial community from Rambla Salada during the years 2006 and 2007. We collected a total of 364 strains, which were then identified by means of phenotypic tests and by sequencing the hypervariable V1-V3 region of the 16S rRNA gene (around 500 bp). The ribosomal data showed that the isolates belonged to the Proteobacteria (72.5%), Firmicutes (25.8%), Actinobacteria (1.4%), and Bacteroidetes (0.3%) phyla, with Gammaproteobacteria the predominant class. Halomonas was the most abundant genus (41.2% of isolates), followed by Marinobacter (12.9% of isolates) and Bacillus (12.6% of isolates). In addition, 9 strains showed <97% sequence identity with validly described species and may well represent new taxa. Analysis of bacterial diversity with the DOTUR package yielded 139 operational taxonomic units at the 3% genetic distance level. Rarefaction curves and diversity indexes demonstrated that our collection of isolates adequately represented the bacterial community at Rambla Salada that can be cultured under the conditions used in this work. We found that the sampling season influenced the composition of the bacterial community, and bacterial diversity was higher in 2007; this could be related to the lower salinity at that sampling time. PMID:25403824
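The rarefaction curves mentioned above follow a standard hypergeometric formula: the expected number of OTUs in a random subsample of n sequences is the sum, over OTUs, of the probability that each is drawn at least once. A minimal sketch, using hypothetical abundance counts rather than the Rambla Salada data:

```python
from math import comb

def rarefaction(counts, n):
    """Expected number of OTUs observed in a random subsample of n sequences.

    counts: sequences per OTU (e.g. per 3%-distance cluster).
    Uses the standard hypergeometric rarefaction formula."""
    N = sum(counts)
    if n > N:
        raise ValueError("subsample larger than collection")
    return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

# Illustrative only: hypothetical OTU abundances, not the study's data.
otu_counts = [120, 45, 30, 10, 5, 3, 2, 1, 1, 1]
curve = [(n, round(rarefaction(otu_counts, n), 2)) for n in range(10, 211, 50)]
print(curve)  # a flattening curve suggests the sampling depth captures most culturable OTUs
```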

Luque, Rocío; Béjar, Victoria; Quesada, Emilia; Llamas, Inmaculada

2014-12-01

6

The Pueblo of Laguna.  

ERIC Educational Resources Information Center

Proximity to urban areas, a high employment rate, development of natural resources and high academic achievement are all serving to bring Laguna Pueblo to a period of rapid change on the reservation. While working to realize its potential in the areas of natural resources, commercialism and education, the Pueblo must also confront the problems of…

Lockart, Barbetta L.

7

Lateglacial and Late Holocene environmental and vegetational change in Salada Mediana, central Ebro Basin, Spain  

Microsoft Academic Search

The Salada Mediana lacustrine sequence, central Ebro Basin, Spain (41°30′10″N, 0°44′W, 350 m a.s.l.) provides an example of the potential and limitations of saline lake records as palaeoclimate proxies in the semi-arid Mediterranean region. Sedimentary facies analyses, chemical stratigraphy, stable isotopes (δ18O and δ13C) of authigenic carbonates, δ13C values of bulk organic matter and pollen analyses from sediment cores provide paleohydrological

Blas L Valero-Garcés; Penélope González-Sampériz; A Delgado-Huertas; A Navas; J Machín; K Kelts

2000-01-01

8

Diversity and distribution of Halomonas in Rambla Salada, a hypersaline environment in the southeast of Spain.  

PubMed

We have studied the diversity and distribution of Halomonas populations in the hypersaline habitat Rambla Salada (Murcia, southeastern Spain) by using different molecular techniques. Denaturing gradient gel electrophoresis (DGGE) using specific primers for the 16S rRNA gene of Halomonas followed by a multivariate analysis of the results indicated that richness and evenness of the Halomonas populations were mainly influenced by the season. We found no significant differences between the types of samples studied, from either watery sediments or soil samples. The highest value of diversity was reached in June 2006, the season with the highest salinity. Furthermore, canonical correspondence analysis (CCA) demonstrated that both salinity and pH significantly affected the structure of the Halomonas community. Halomonas almeriensis and two denitrifiers, H. ilicicola and H. ventosae were the predominant species. CARD-FISH showed that the percentage of Halomonas cells with respect to the total number of microorganisms ranged from 4.4% to 5.7%. To study the functional role of denitrifying species, we designed new primer sets targeting denitrification nirS and nosZ genes. Using these primers, we analyzed sediments from the upwelling zone collected in June 2006, where we found the highest percentage of denitrifiers (74%). Halomonas ventosae was the predominant denitrifier in this site. PMID:24164442

Oueriaghli, Nahid; González-Domenech, Carmen M; Martínez-Checa, Fernando; Muyzer, Gerard; Ventosa, Antonio; Quesada, Emilia; Béjar, Victoria

2014-02-01

9

Present-day loading rate of faults in southern California and northern Baja California, Mexico, and post-seismic deformation following the M7.2 April 4, 2010, El Mayor-Cucapah earthquake from GPS Geodesy  

NASA Astrophysics Data System (ADS)

We use 142 GPS velocity estimates from the SCEC Crustal Motion Map 4 and 59 GPS velocity estimates from additional sites to model the crustal velocity field of southern California, USA, and northern Baja California, Mexico, prior to the 2010 April 4 Mw 7.2 El Mayor-Cucapah (EMC) earthquake. The EMC earthquake is the largest event to occur along the southern San Andreas fault system in nearly two decades. In the year following the EMC earthquake, the EarthScope Plate Boundary Observatory (PBO) constructed eight new continuous GPS sites in northern Baja California, Mexico. We used our velocity model, which represents the period before the EMC earthquake, to assess postseismic velocity changes at the new PBO sites. Time series from the new PBO sites, which were constructed 4-18 months following the earthquake, do not exhibit obvious exponential or logarithmic decay, showing instead fairly secular trends through the period of our analysis (2010.8-2012.5). The weighted RMS misfit to secular rates, accounting for periodic site motions, is typically around 1.7 mm/yr, indicating high positioning precision and fairly linear site motion. Results of our research include new fault slip rate estimates for the greater San Andreas fault system, including model faults representing the Cerro Prieto (39.0±0.1 mm/yr), Imperial (35.7±0.1 mm/yr), and southernmost San Andreas (24.7±0.1 mm/yr) faults, generally consistent with previous geodetic studies within the region. Velocity changes at the new PBO sites associated with the EMC earthquake are in the range 1.7±0.3 to 9.2±2.6 mm/yr. The maximum rate difference is found in Mexicali Valley, close to the rupture. Rate changes decay systematically with distance from the EMC epicenter, and velocity orientations exhibit a butterfly pattern as expected from a strike-slip earthquake. Sites to the south and southwest of the Baja California shear zone are moving more rapidly to the northwest relative to their motions prior to the earthquake. Sites to the west of the Laguna Salada fault zone are moving more westerly. Sites to the east of the EMC rupture move more southerly than prior to the EMC earthquake. Continued monitoring of these velocity changes will allow us to differentiate between lower crustal and upper mantle relaxation processes.
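The weighted RMS misfit quoted above follows the usual convention of weighting each residual by the inverse square of its formal uncertainty. A minimal sketch of that convention, with hypothetical residuals rather than the study's time series (the abstract's ~1.7 mm/yr figure refers to misfit of rates; the weighting formula is the same):

```python
import numpy as np

def wrms(residuals, sigmas):
    """Weighted RMS misfit with weights w_i = 1 / sigma_i**2."""
    r = np.asarray(residuals, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    return np.sqrt(np.sum(w * r ** 2) / np.sum(w))

# Hypothetical residuals (mm) about a secular-rate fit and their formal uncertainties.
print(round(float(wrms([1.2, -0.8, 2.1, -1.5, 0.3], [1.0, 1.2, 0.9, 1.1, 1.0])), 2))
```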

Spinler, J. C.; Bennett, R. A.

2012-12-01

10

Santa Fe Indian Camp, House 21, Richmond, California: Persistence of Identity among Laguna Pueblo Railroad Laborers, 1945-1982.  

ERIC Educational Resources Information Center

In 1880 the Laguna people and the predecessor of the Atchison, Topeka, and Santa Fe Railroad reached an agreement giving the railroad unhindered right-of-way through Laguna lands in exchange for Laguna employment "forever." Discusses the Laguna-railroad relationship through 1982, Laguna labor camps in California, and the persistence of Laguna

Peters, Kurt

1995-01-01

11

Spirit Does a 'Jig' at Laguna Hollow  

NASA Technical Reports Server (NTRS)

This front hazard-avoidance image taken by the Mars Exploration Rover Spirit on sol 45 shows Spirit in its new location after a drive totaling about 20 meters (65.6 feet). The circular depression that Spirit is in, dubbed 'Laguna Hollow,' was most likely formed by a small impact.

Scientists were interested in reaching Laguna Hollow because of the location's abundance of very fine, dust-like soil. The fine material could be atmospheric dust that has settled into the depression, or a salt-based material that causes crusts in the soils and coating on rocks. Either way, scientists hope to be able to characterize the material and broaden their understanding of this foreign world.

To help scientists get a better look at the variations in the fine-grained dust at different depths, controllers commanded Spirit to 'jiggle' its wheels in the soil before backing away to a distance that allows the area to be reached with the robotic arm. Spirit will likely spend part of sol 46 analyzing this area with the instruments on its robotic arm.

2004-01-01

12

Field reconnaissance of the effects of the earthquake of April 13, 1973, near Laguna de Arenal, Costa Rica  

USGS Publications Warehouse

At about 3:34 a.m. on April 13, 1973, a moderate-sized but widely felt earthquake caused extensive damage with loss of 23 lives in a rural area of about 150 km² centered just south of Laguna de Arenal in northwestern Costa Rica (fig. 1). This report summarizes the results of the writer's reconnaissance investigation of the area that was affected by the earthquake of April 13, 1973. A 4-day field study of the meizoseismal area was carried out during the period from April 28 through May 1 under the auspices of the U.S. Geological Survey. The primary objective of this study was to evaluate geologic factors that contributed to the damage and loss of life. The earthquake was also of special interest because of the possibility that it was accompanied by surface faulting comparable to that which occurred at Managua, Nicaragua, during the disastrous earthquake of December 23, 1972 (Brown, Ward, and Plafker, 1973). Such earthquake-related surface faulting can provide scientifically valuable information on active tectonic processes at shallow depths within the Middle America arc. Also, identification of active faults in this area is of considerable practical importance because of the planned construction of a major hydroelectric facility within the meizoseismal area by the Instituto Costarricense de Electricidad (I.C.E.). The project would involve creation of a storage reservoir within the Laguna de Arenal basin and part of the Río Arenal valley with a 75-m-high earthfill dam across the Río Arenal at a point about 10 km east of the outlet of Laguna de Arenal.

Plafker, George

1973-01-01

13

Performance of the LAGUNA pulsed power system  

SciTech Connect

The goal of the LAGUNA experimental series of the Los Alamos National Laboratory TRAILMASTER program is to accelerate an annular aluminum plasma z-pinch to greater than one hundred kilojoules of implosion kinetic energy. To accomplish this, an electrical pulse >5.5 MA must be delivered to a 20 nH load in approximately 1 μs. The pulsed power system for these experiments consists of a capacitor bank for initial energy storage, a helical explosive-driven magnetic-flux compression generator for the prime power supply and opening and closing switches for power conditioning. While we have not yet achieved our design goal of 15 MA delivered to the inductive store of the system, all major components have functioned successfully at the 10 MA level. Significant successes and some difficulties experienced in these experiments are described.

Goforth, J.H.; Caird, R.S.; Fowler, C.M.; Greene, A.E.; Kruse, H.W.; Lindemuth, I.R.; Oona, H.; Reinovsky, R.E.

1987-01-01

14

76 FR 41513 - Proclaiming Certain Lands, Bowlin North Property, as an Addition to the Pueblo of Laguna...  

Federal Register 2010, 2011, 2012, 2013, 2014

...the Pueblo of Laguna Reservation, New Mexico AGENCY: Bureau of Indian Affairs, Interior...Laguna Reservation, (Laguna), New Mexico. FOR FURTHER INFORMATION CONTACT: Ben...enrollment or tribal membership. New Mexico Principal Meridian Bernalillo...

2011-07-14

15

Fault Motion  

NSDL National Science Digital Library

This collection of animations provides elementary examples of fault motion intended for simple demonstrations. Examples include dip-slip faults (normal and reverse), strike-slip faults, and oblique-slip faults.

16

Limnology of Laguna Tortuguero, Puerto Rico  

USGS Publications Warehouse

The principal chemical, physical and biological characteristics, and the hydrology of Laguna Tortuguero, Puerto Rico, were studied during 1974-75. The lagoon, with an area of 2.24 square kilometers and a volume of about 2.68 million cubic meters, contains about 5 percent seawater. Drainage through a canal on the north side averages 0.64 cubic meters per second (daily mean), flushing the lagoon about 7.5 times per year. Chloride and sodium are the principal ions in the water, ranging from 300 to 700 mg/liter and 150 to 400 mg/liter, respectively. Among the nutrients, nitrogen averages about 1.7 mg/liter, exceeding phosphorus in a weight ratio of 170:1. About 10 percent of the nitrogen and 40 percent of the phosphorus entering the lagoon are retained. The bottom sediments, with a volume of about 4.5 million cubic meters, average 0.8 and 0.014 percent nitrogen and phosphorus, respectively. (Woodard-USGS)
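The flushing rate quoted above can be checked with simple arithmetic, assuming the 0.64 m³/s canal discharge is treated as a continuous annual mean:

```python
# Back-of-the-envelope check of the ~7.5 flushings per year quoted in the abstract.
outflow_m3_per_s = 0.64
lagoon_volume_m3 = 2.68e6
seconds_per_year = 365.25 * 24 * 3600

annual_outflow_m3 = outflow_m3_per_s * seconds_per_year      # ~2.0e7 m^3/yr
flushings_per_year = annual_outflow_m3 / lagoon_volume_m3    # ~7.5
print(round(flushings_per_year, 1))
```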

Quinones-Marquez, Ferdinand; Fuste, Luis A.

1978-01-01

17

Laguna de Santa RoSa docent tRaining www.lagunafoundation.org  

E-print Network

-9277 x102 Expand and share your love of nature and nurture the next generation of naturalists! Molly Eckler Laguna Docents are volunteers who are trained in the natural and cultural history of the Laguna de children learn about nature and connect to the Laguna. We invite you to join us for this rich and rewarding

Ravikumar, B.

18

Faulted Barn  

USGS Multimedia Gallery

This barn is faulted through the middle; the moletrack is seen in the foreground with the viewer standing on the fault. From the air one can see metal roof panels of the barn that rotated as the barn was faulted....

19

Fault Separation  

NSDL National Science Digital Library

Students use gestures to explore the relationship between fault slip direction and fault separation by varying the geometry of faulted layers, slip direction, and the perspective from which these are viewed.

Carol Ormand

20

Quaternary Pollen Record from Laguna De Tagua Tagua, Chile  

Microsoft Academic Search

Pollen of southern beech and podocarp at Laguna de Tagua Tagua during the late Pleistocene indicates that cooler and more humid intervals were a feature of Ice Age climate at this subtropical latitude in Chile. The influence of the southern westerlies may have been greater at this time, and the effect of the Pacific anticyclone was apparently weakened. The climate

Calvin J. Heusser

1983-01-01

21

Lagunas Norte Mine Achieves Start-up Ahead of Schedule  

Microsoft Academic Search

Barrick Gold Corporation announced today that the new Lagunas Norte mine in Peru achieved start-up ahead of the original third-quarter schedule and within its $340 million budget. The mine is the second of Barrick's new generation of mines and will be a significant contributor to the Company's gold production for the second half of 2005 and in the years to

James Mavor; Vincent Borg

22

RADIOCARBON ANALYSIS OF PINUS LAGUNAE TREE RINGS: IMPLICATIONS FOR TROPICAL DENDROCHRONOLOGY  

Microsoft Academic Search

A promising species for tropical dendrochronology is Pinus lagunae, a pine tree found in Baja California Sur (Mexico) around lat 23.5°N. In 1995, we sampled a total of 27 wood cores from 13 Pinus lagunae trees in Sierra La Victoria (23°36'N, 109°56'W), just north of Sierra La Laguna, at an elevation of 1500-1600 m. Selected trees were locally dominant, but

Franco Biondi; Julianna E Fessenden

23

Fault finder  

DOEpatents

A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
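The abstract does not spell out the distance calculation itself, but a generic two-ended scheme based on synchronized arrival times illustrates the idea; the function and values below are assumptions for illustration, not a reconstruction of the patented circuit.

```python
# Two-ended fault location from synchronized disturbance arrival times at the
# master (A) and remote (B) units on a line of length L, assuming a known
# propagation velocity v. Illustrative sketch only.
def fault_distance_from_A(L_km, v_km_per_s, t_A_s, t_B_s):
    """x/v - (L - x)/v = t_A - t_B  =>  x = (L + v*(t_A - t_B)) / 2."""
    return (L_km + v_km_per_s * (t_A_s - t_B_s)) / 2.0

# Example: 100 km line, disturbance travelling near 2.9e5 km/s, fault 30 km from A.
L, v = 100.0, 2.9e5
t_A, t_B = 30.0 / v, 70.0 / v
print(fault_distance_from_A(L, v, t_A, t_B))  # -> 30.0 km
```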

Bunch, Richard H. (1614 NW. 106th St., Vancouver, WA 98665)

1986-01-01

24

A geophysical and geological study of Laguna de Ayarza, a Guatemalan caldera lake  

USGS Publications Warehouse

Geologic and geophysical data from Laguna de Ayarza, a figure-8-shaped double-caldera lake in the Guatemalan highlands, show no evidence of postcaldera eruptive or tectonic activity. The bathymetry of the lake has evolved as a result of sedimentary infilling. The western caldera is steep-sided and contains a large flat-floored central basin 240 m deep. The smaller, older, eastern caldera is mostly filled by coalescing delta fans and is connected with the larger caldera by means of a deep channel. Seismic-reflection data indicate that at least 170 m of flat-lying unfaulted sediments partly fill the central basin and that the strata of the pre-eruption edifice have collapsed partly along inward-dipping ring faults and partly by more chaotic collapses. These sediments have accumulated in the last 23,000 years at a minimum average sedimentation rate of 7 m/10³ yr. The upper 9 m of these sediments is composed of >50% turbidites, interbedded with laminated clayey silts containing separate diatom and ash layers. The bottom sediments have >1% organic material, an average of 4% pyrite, and abundant biogenic gas, all of which demonstrate that the bottom sediments are anoxic. Although thin (<0.5 cm) ash horizons are common, only one thick (7-16 cm) primary ash horizon could be identified in piston cores. Alterations in the mineralogy and variations in the diatom assemblage suggest magnesium-rich hydrothermal activity. © 1985.

Poppe, L.J.; Paull, C.K.; Newhall, C.G.; Bradbury, J.P.; Ziagos, J.

1985-01-01

25

Análise de esteiras microbianas e cianobactérias da laguna Amarga, Parque Nacional de Torres del Paine, Chile  

Microsoft Academic Search

ANALYSIS OF MICROBIAL MATS AND CYANOBACTERIA FROM THE LAGUNA AMARGA, TORRES DEL PAINE NATIONAL PARK, CHILE. This study is based on cyanobacterial and sedimentary investigations of the microbial mats formed at Laguna Amarga, located at the southernmost part of South America, specifically in the Chilean Torres del Paine National Park. The Torres del Paine's natural and geological richness is composed

Loreine Hermida da Silva

26

Waterbirds (other than Laridae) nesting in the middle section of Laguna Cuyutlán, Colima, México  

Microsoft Academic Search

Laguna de Cuyutlán, in the state of Colima, Mexico, is the only large coastal wetland in a span of roughly 1150 km. Despite this, the study of its birds has been largely neglected. Between 2003 and 2006 we assessed the waterbirds nesting in the middle portion of Laguna Cuyutlán, a large tropical coastal lagoon, through field visits. We documented the

Eric Mellink; Mónica E. Riojas-López

27

75 FR 74073 - Laguna Atascosa National Wildlife Refuge, Cameron and Willacy Counties, TX; Final Comprehensive...  

Federal Register 2010, 2011, 2012, 2013, 2014

...20131-1265-2CCP-S3] Laguna Atascosa National Wildlife Refuge, Cameron and Willacy Counties...for the Laguna Atascosa National Wildlife Refuge (NWR). In this final...species, more than any other national wildlife refuge. A total of eight...

2010-11-30

28

JOB DESCRIPTION: Intern in Laguna San Ignacio, Baja, Mexico The Philanthropiece Foundation is seeking an intern to live and work in Laguna San Ignacio, Baja, Mexico.  

E-print Network

JOB DESCRIPTION: Intern in Laguna San Ignacio, Baja, Mexico The Philanthropiece Foundation information. Job Title: Internship Information about Philanthropiece What we do: Philanthropiece supports offers Baja's most intimate gray whale-watching experience. Job Description The overall goal

29

Possibilities For The LAGUNA Projects At The Frejus Site  

SciTech Connect

The present laboratory (LSM) at the Frejus site and the project of a first extension of it, mainly aimed at the next generation of dark matter and double beta decay experiments, are briefly reviewed. Then the main characteristics of the LAGUNA cooperation and Design Study network are summarized. Seven underground sites in Europe are considered in LAGUNA and are under study as candidates for the installation of Megaton-scale detectors using three different techniques: a liquid Argon TPC (GLACIER), a liquid scintillator detector (LENA) and a Water Cherenkov detector (MEMPHYS), all mainly aimed at the investigation of proton decay and the properties of neutrinos from SuperNovae and other astrophysical sources as well as from accelerators (Super-beams and/or Beta-beams from CERN). One of the seven sites is located at Frejus, near the present LSM laboratory, and the results of its feasibility study are presented and discussed. Then the physics potential of a MEMPHYS detector installed at this site is emphasized, both for non-accelerator and for neutrino-beam-based configurations. The MEMPHYNO prototype with its R and D programme is presented. Finally, a possible schedule is sketched.

Mosca, Luigi [LSM-Frejus - CNRS/IN2P3 and CEA/DSM/IRFU (France)]

2010-11-24

30

High-$\gamma$ Beta Beams within the LAGUNA design study

E-print Network

Within the LAGUNA design study, seven candidate sites are being assessed for their feasibility to host a next-generation, very large neutrino observatory. Such a detector will be expected to feature within a future European accelerator neutrino programme (Superbeam or Beta Beam), and hence the distance from CERN is of critical importance. In this article, the focus is a $^{18}$Ne and $^{6}$He Beta Beam sourced at CERN and directed towards a 50 kton Liquid Argon detector located at the LAGUNA sites: Slanic (L=1570 km) and Pyhäsalmi (L=2300 km). To improve sensitivity to the neutrino mass ordering, these baselines are then combined with a concurrent run with the same flux directed towards a large Water Čerenkov detector located at Canfranc (L=650 km). This degeneracy breaking combination is shown to provide comparable physics reach to the conservative Magic Baseline Beta Beam proposals. For $^{18}$Ne ions boosted to $\gamma=570$ and $^{6}$He ions boosted to $\gamma=350$, the correct mass ordering can be...

Orme, Christopher

2010-01-01

31

High-$\gamma$ Beta Beams within the LAGUNA design study

E-print Network

Within the LAGUNA design study, seven candidate sites are being assessed for their feasibility to host a next-generation, very large neutrino observatory. Such a detector will be expected to feature within a future European accelerator neutrino programme (Superbeam or Beta Beam), and hence the distance from CERN is of critical importance. In this article, the focus is a $^{18}$Ne and $^{6}$He Beta Beam sourced at CERN and directed towards a 50 kton Liquid Argon detector located at the LAGUNA sites: Slanic (L=1570 km) and Pyhäsalmi (L=2300 km). To improve sensitivity to the neutrino mass ordering, these baselines are then combined with a concurrent run with the same flux directed towards a large Water Čerenkov detector located at Canfranc (L=650 km). This degeneracy breaking combination is shown to provide comparable physics reach to the conservative Magic Baseline Beta Beam proposals. For $^{18}$Ne ions boosted to $\gamma=570$ and $^{6}$He ions boosted to $\gamma=350$, the correct mass ordering can be determined at Slanic for all $\delta$ when $\sin^{2}2\theta_{13} > 4\cdot 10^{-3}$ in this combination.

Christopher Orme

2010-04-06

32

Fault diagnosis  

NASA Technical Reports Server (NTRS)

The objective of the research in this area of fault management is to develop and implement a decision-aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision-aiding concept developed from those requirements takes as input abnormal sensor readings identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to examine pilot mental models of the aircraft subsystems and their use in diagnosis tasks. Future research plans include piloted simulation evaluation of the diagnosis decision-aiding concepts and crew interface issues. Information is given in viewgraph form.
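Draphys itself is not reproduced here, but the sketch below illustrates one idea from the abstract: propagating a candidate fault source through a directed graph of component dependencies and comparing the predicted footprint with the sensors flagged as abnormal. The subsystem graph, component names, and scoring rule are hypothetical.

```python
from collections import deque

# Hypothetical subsystem dependency graph: component -> components it feeds.
DEPENDS_ON = {
    "fuel_pump": ["engine_1"],
    "engine_1": ["hydraulic_pump_1", "generator_1"],
    "hydraulic_pump_1": ["hydraulic_system_A"],
    "generator_1": [],
    "hydraulic_system_A": [],
}

def predicted_footprint(source):
    """All components reachable from a faulted source (breadth-first propagation)."""
    seen, queue = {source}, deque([source])
    while queue:
        for nxt in DEPENDS_ON.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def rank_candidates(abnormal):
    """Score each candidate source by overlap between its footprint and the abnormal sensors."""
    scores = {}
    for source in DEPENDS_ON:
        fp = predicted_footprint(source)
        scores[source] = len(fp & abnormal) - len(fp - abnormal)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_candidates({"engine_1", "hydraulic_pump_1", "hydraulic_system_A"}))
```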

Abbott, Kathy

1990-01-01

33

About a Gadolinium-doped Water Cherenkov LAGUNA Detector  

SciTech Connect

Water Cherenkov (wC) detectors are extremely powerful apparatuses for scientific research. Nevertheless, they lack neutron tagging capabilities, which translates mainly into an inability to identify the anti-matter nature of the reacting incoming anti-neutrino particles. A solution was proposed by J. Beacom and M. Vagins in 2004: by dissolving in the water a compound containing a nucleus with a very large neutron-capture cross section, such as gadolinium, whose capture releases photons energetic enough to be detected, thermal neutrons can be tagged with an efficiency larger than 80%. In this talk we detail the technique, its implications for the measurement capabilities, and the new backgrounds it induces. We discuss the improvement to the physics program, including the case of LAGUNA-type detectors. We briefly comment on the status of the pioneering R and D program of the Super-Kamiokande Collaboration towards dissolving a Gadolinium compound in its water.

Labarga, Luis [Department of Theoretical Physics, University Autonoma Madrid, 28049 Madrid (Spain)]

2010-11-24

34

Fish for the City: Urban Political Ecologies of Laguna Lake Aquaculture  

E-print Network

The dissertation tells the story of the production of socionatures through the development of aquaculture in Laguna Lake. The state introduced lake aquaculture to supplement fisherfolk livelihoods and improve fish production in part to provide...

Saguin, Kristian Karlo Cordova

2013-10-31

35

Hatching success of Caspian terns nesting in the lower Laguna Madre, Texas, USA  

USGS Publications Warehouse

The average clutch size of Caspian Terns nesting in a colony in the Lower Laguna Madre near Laguna Vista, Texas, USA in 1984 was 1.9 eggs per nest. Using the Mayfield method for calculating success, one egg hatched in 84.1% of the nests and 69.8% of the eggs laid hatched. These hatching estimates are as high or higher than estimates from colonies in other areas.

Mitchell, C.A.; Custer, T.W.

1986-01-01

36

Fault terminations and barriers to fault growth  

NASA Astrophysics Data System (ADS)

Field observations of strike slip faults in jointed granitic rocks of the central Sierra Nevada, California, combined with a mechanical analysis of fault interaction, provide insight into how fault terminations vary with scale. We document here a strike-slip fault system 2-3 km long. Clustered about the west end of the fault system are several dozen faults that parallel the three main fault zones in the system. We interpret this cluster of small faults as a barrier that inhibited growth of fault zones in the fault system. A two-dimensional mechanical analysis shows that a cluster of small faults flanking the tip of a large fault zone will tend to diffuse the stress concentration near the fault zone tip—an analogous effect in engineering is known as crack-tip shielding. Near-tip stress concentrations promote fault growth, and processes that decrease these stress concentrations inhibit fault growth. As faults lengthen and grow, they interact with features at greater distances and over a broader area, so the potential for tip shielding effects will increase as fault length increases. This effect can account for why the mechanisms and character of fault terminations would tend to vary as a function of scale.

d'Alessio, Matthew A.; Martel, Stephen J.

2004-10-01

37

DNA barcoding of fishes of Laguna de Bay, Philippines.  

PubMed

Laguna de Bay, the largest lake in the Philippines, is an important part of the country's fisheries industry. It is also home to a number of endemic fishes including Gobiopterus lacustris (Herre 1927) of family Gobiidae, Leiopotherapon plumbeus (Kner 1864) of family Terapontidae, Zenarchopterus philippinus (Peters 1868) of family Hemiramphidae and Arius manillensis Valenciennes 1840 of family Ariidae. Over the years, a steady decline has been observed in the abundance and diversity of native fishes in the lake due to anthropogenic disturbances. In this study, a total of 71 specimens of 18 different species belonging to 18 genera, 16 families, and seven orders were DNA barcoded using the mitochondrial cytochrome c oxidase subunit I (COI) gene. All of the fish species were discriminated by their COI sequences and one endemic species G. lacustris, showing deep genetic divergence, was highlighted for further taxonomic investigation. Average Kimura 2-parameter genetic distances within species, family, and order were 1.33%, 18.91%, and 24.22%, respectively. These values show that COI divergence increases as taxa become less exclusive. All of the COI sequences obtained were grouped together according to their species designation in the Neighbor-joining tree that was constructed. This study demonstrated that DNA barcoding has great potential as a tool for fast and accurate species identification and also for highlighting species that warrant further taxonomic investigation. PMID:22040082
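The Kimura 2-parameter (K2P) distances reported above are computed from the proportions of transition (P) and transversion (Q) differences as d = -0.5 ln(1 - 2P - Q) - 0.25 ln(1 - 2Q). A minimal sketch on toy sequences (real barcoding uses ~650 bp COI alignments):

```python
from math import log

PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def k2p_distance(seq1, seq2):
    """Kimura 2-parameter distance between two aligned sequences.
    P and Q are the proportions of transition and transversion differences."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a in "ACGT" and b in "ACGT"]
    n = len(pairs)
    transitions = sum(1 for a, b in pairs
                      if a != b and ({a, b} <= PURINES or {a, b} <= PYRIMIDINES))
    transversions = sum(1 for a, b in pairs if a != b) - transitions
    P, Q = transitions / n, transversions / n
    return -0.5 * log(1 - 2 * P - Q) - 0.25 * log(1 - 2 * Q)

# Toy fragments for illustration only, not COI data from the study.
print(round(k2p_distance("ACGTTGCAACGT", "ACATTGCGACGT"), 4))
```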

Aquino, Luis Miguel G; Tango, Jazzlyn M; Canoy, Reynand Jay C; Fontanilla, Ian Kendrich C; Basiao, Zubaida U; Ong, Perry S; Quilang, Jonas P

2011-08-01

38

The LAGUNA/LBNO potential for Long Baseline neutrino physics  

NASA Astrophysics Data System (ADS)

The LAGUNA/LBNO collaboration proposes a new-generation neutrino experiment to address fundamental questions in particle and astroparticle physics. The experiment consists of a far detector, a double-phase Liquid Argon (LAr) Time Projection Chamber (TPC), whose fiducial mass is set to 20 kt in its first stage. The detector will be situated at 2300 km from CERN: this long baseline provides a unique opportunity to study the neutrino flavour oscillations over the first and second oscillation maxima and to explore the L/E (Length over energy) behaviour. The near detector is based on a high-pressure argon gas TPC situated at CERN. I will detail the physics potential of this experiment for determining without ambiguity the mass hierarchy (MH) in its first stage and discovering CP violation (CPV) using the CERN SPS beam with a power of 750 kW. The impact of the assumptions about the knowledge of the oscillation parameters and the systematic errors is very important and will be shown in detail to demonstrate the strength of the experiment under realistic and conservative parameter values.
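To illustrate the L/E behaviour referred to above, the sketch below evaluates the textbook two-flavour vacuum oscillation probability along a 2300 km baseline. This is a simplification with round illustrative parameter values; the collaboration's sensitivity studies use the full three-flavour treatment with matter effects.

```python
import math

def p_2flavor(L_km, E_GeV, sin2_2theta=0.085, dm2_eV2=2.5e-3):
    """Vacuum two-flavour oscillation probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm^2[eV^2] * L[km] / E[GeV]).
    Parameter values are round illustrative numbers, not a fit result."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Scan energies along a 2300 km baseline; with these parameters the first
# oscillation maximum sits near E ~ 4.6 GeV (where 1.27*dm2*L/E = pi/2).
for E in (1.0, 2.0, 3.0, 4.6, 7.0, 10.0):
    print(E, round(p_2flavor(2300.0, E), 4))
```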

Agostino, Luca; LAGUNA-LBNO Consortium

2014-12-01

39

Factors controlling navigation-channel Shoaling in Laguna Madre, Texas  

USGS Publications Warehouse

Shoaling in the Gulf Intracoastal Waterway of Laguna Madre, Tex., is caused primarily by recycling of dredged sediments. Sediment recycling, which is controlled by water depth and location with respect to the predominant wind-driven currents, is minimal where dredged material is placed on tidal flats that are either flooded infrequently or where the water is extremely shallow. In contrast, nearly all of the dredged material placed in open water >1.5 m deep is reworked and either transported back into the channel or dispersed into the surrounding lagoon. A sediment flux analysis incorporating geotechnical properties demonstrated that erosion and not postemplacement compaction caused most sediment losses from the placement areas. Comparing sediment properties in the placement areas and natural lagoon indicated that the remaining dredged material is mostly a residual of initial channel construction. Experimental containment designs (shallow subaqueous mound, submerged levee, and emergent levee) constructed in high-maintenance areas to reduce reworking did not retain large volumes of dredged material. The emergent levee provided the greatest retention potential approximately 2 years after construction.

Morton, R.A.; Nava, R.C.; Arhelger, M.

2001-01-01

40

High-Performance Wireless Internet Connection to Mount Laguna Observatory  

NASA Astrophysics Data System (ADS)

A 45 Mbit/sec full-duplex wireless Internet backbone is now under construction that will connect SDSU's Mount Laguna Observatory (MLO) to the San Diego Supercomputer Center (SDSC), which is located on the campus of UCSD. The SDSU campus is connected to the SDSC via Abilene/OC3 (Internet2) at 155 Mbit/sec. The MLO-SDSC backbone is part of the High-Performance Wireless Research and Education Network (HPWREN) project. Other scientific applications include earthquake monitoring from a remote array of automated seismic stations operated by researchers at the UCSD Institute for Geophysics and Planetary Physics, and environmental monitoring at Ecology field stations administered by SDSU. Educational initiatives include bringing the Internet to schools and educational centers at remote Indian reservations such as Pala and Rincon. HPWREN will allow SDSU astronomers and their collaborators to transmit CCD images to their home institutions while observations are being made. Archive retrieval of images from on-campus data bases, for comparison purposes, could easily be done. SDSU desires to build a modern, large telescope at MLO. HPWREN would support both robotic and remote observing capabilities for such a telescope. Astronomers could observe at their home institutions with multiple workstations to feed command and control instructions, data, and slow-scan video, which would give them the "feel" of being in a control room next to the telescope. HPWREN was funded by the NSF under grant ANI-0087344.

Etzel, P. B.; Braun, H.-W.

2000-12-01

41

2004-2005 Texas Water Resources Institute Mills Scholarship Application Water Management, Soil Salinity and Landscape Ecology in Laguna  

E-print Network

degrade water quality and availability in the area. For human and natural ecological communities to co Salinity and Landscape Ecology in Laguna Atascosa National Wildlife Refuge Heather R. Miller Department Management, Soil Salinity and Landscape Ecology in Laguna Atascosa National Wildlife Refuge Nature of Problem

Herbert, Bruce

42

Luminescence dating of the PASADO core 5022-1D from Laguna Potrok Aike (Argentina) using IRSL signals from feldspar  

E-print Network

a reliable chronology. Both radiocarbon and luminescence dating have been applied to Laguna Potrok Aike of luminescence dating to provide an absolute chronology beyond the radiocarbon dating limit (~40 ka) but 5 out Luminescence dating of the PASADO core 5022-1D from Laguna Potrok Aike (Argentina) using IRSL

43

CMOS Bridging Fault Detection  

Microsoft Academic Search

The authors compare the performance of two test generation techniques, stuck fault testing and current testing, when applied to CMOS bridging faults. Accurate simulation of such faults mandated the development of several new design automation tools, including an analog-digital fault simulator. The results of this simulation are analyzed. It is shown that stuck fault test generation, while inherently incapable of

Thomas M. Storey; Wojciech Maly

1990-01-01

44

Normal Fault Visualization  

NSDL National Science Digital Library

This module demonstrates the motion on an active normal fault. Faulting offsets three horizontal strata. At the end of the faulting event, surface topography has been generated. The upper rock layer is eroded by clicking on the 'begin erosion' button. The operator can manipulate the faulting motion, stopping and reversing motion on the fault at any point along the transit of faulting. The action of erosion is also interactive. One possible activity is an investigation of the control of different faulting styles on regional landscape form. This visual lends itself to an investigation of fault motion, and a comparison of types of faults. The interactive normal faulting visual could be compared to other interactive visuals depicting thrust faults, reverse faults, and strike slip faults (interactive animations of these fault types can be found by clicking on 'Media Types' at top red bar, then 'Animations', then 'Faults'). By comparing the interactive images of different types of faulting with maps of terrains dominated by different faulting styles, students are aided in conceptualizing how certain faulting styles produce distinctive landforms on the earth's surface (e.g., ridge and valley topography [thrust faulting dominant] versus basin-and-range topography [normal faulting dominant]). Jimm Myers, geology professor at the University of Wyoming, originated the concept of The Magma Foundry, a website dedicated to improving Earth science education across the grade levels. The Magma Foundry designs and creates modular, stand-alone media components that can be utilized in a variety of pedagogical functions in courses and labs.

Jimm Myers

45

Optimal fault location  

E-print Network

after the accurate fault condition and location are detected. This thesis has been focusing on automated fault location procedure. Different fault location algorithms, classified according to the spatial placement of physical measurements on single ended...

Knezev, Maja

2008-10-10

46

Optimal fault location  

E-print Network

after the accurate fault condition and location are detected. This thesis has been focusing on automated fault location procedure. Different fault location algorithms, classified according to the spatial placement of physical measurements on single ended...

Knezev, Maja

2009-05-15

47

Fault Separation Gestures  

NSDL National Science Digital Library

Students explore the relationship between fault slip direction and fault separation by varying the geometry of faulted layers, slip direction, and the perspective from which these are viewed. They work in teams to explore these complex geometric relationships via gestures.

Carol Ormand

48

Optimal stochastic fault detection filter  

Microsoft Academic Search

Properties of the optimal stochastic fault detection filter for fault detection and identification are determined. The objective of the filter is to monitor certain faults called target faults and block other faults which are called nuisance faults. This filter is derived by keeping the ratio of the transmission from nuisance fault to the transmission from target fault small. It is

Robert H. Chen; Jason L. Speyer

1999-01-01

49

Finding Fault with Faults: A Case Study  

NASA Technical Reports Server (NTRS)

We describe our effort in extending this work beyond the initial software construction. Our area of focus is determining the rate of fault injection over a sequence of successive builds, first observing that software faults may be seen to fall into two distinct classes: some faults are incorporated during the initial coding effort, while others are added in successive software builds.

Munson, John C.; Nikora, Allen P.

1997-01-01

50

Laguna Negra Virus Associated with HPS in Western Paraguay and Bolivia  

Microsoft Academic Search

A large outbreak of hantavirus pulmonary syndrome (HPS) recently occurred in the Chaco region of Paraguay. Using PCR approaches, partial virus genome sequences were obtained from 5 human sera and spleens from 5 Calomys laucha rodents from the outbreak area. Genetic analysis revealed a newly discovered hantavirus, Laguna Negra (LN) virus, to be associated with the HPS outbreak and established a direct

Angela M Johnson; Michael D Bowen; Thomas G Ksiazek; R. Joel Williams; Ralph T Bryan; James N Mills; C. J Peters; Stuart T Nichol

1997-01-01

51

Late Holocene air temperature variability reconstructed from the sediments of Laguna Escondida, Patagonia, Chile (45°30′S)

E-print Network

, Patagonia, Chile (45°30′S) Julie Elbert, Richard Wartenburger, Lucien von Gunten, Roberto Urrutia in Northern Chilean Patagonia, Lago Castor (45°36′S, 71°47′W) and Laguna Escondida (45°31′S, 71°49′W). Radiometric

Wehrli, Bernhard

52

Redhead duck behavior on lower Laguna Madre and adjacent ponds of southern Texas  

USGS Publications Warehouse

Behavior of redheads (Aythya americana) during winter was studied on the hypersaline lower Laguna Madre and adjacent freshwater to brackish water ponds of southern Texas. On Laguna Madre, feeding (46%) and sleeping (37%) were the most common behaviors. Redheads fed more during early morning (64%) than during the rest of the day (40%); feeding activity was negatively correlated with temperature. Redheads fed more often by dipping (58%) than by tipping (25%), diving (16%), or gleaning (0.1%). Water depth was least where they fed by dipping (16 cm), greatest where diving (75 cm), and intermediate where tipping (26 cm). Feeding sequences averaged 5.3 s for dipping, 8.1 s for tipping, and 19.2 s for diving. Redheads usually were present on freshwater to brackish water ponds adjacent to Laguna Madre only during daylight hours, and use of those areas declined as winter progressed. Sleeping (75%) was the most frequent behavior at ponds, followed by preening (10%), swimming (10%), and feeding (0.4%). Because redheads fed almost exclusively on shoalgrass while dipping and tipping in shallow water and shoalgrass meadows have declined in the lower Laguna Madre, proper management of the remaining shoalgrass habitat is necessary to ensure that this area remains the major wintering area for redheads.

Mitchell, C.A.; Custer, T.W.; Zwank, P.J.

1992-01-01

53

La captura comercial del coypo Myocastor coypus (Mammalia: Myocastoridae) en Laguna Adela, Argentina  

Microsoft Academic Search

We analyze the commercial harvest of coypus Myocastor coypus in Laguna Adela, Argentina, during 1988. A total of 217 animals was trapped from April to October (x̄ = 31 per month), with a capture effort of 1768 night-traps (x̄ = 255.57 per month). The capture increased up to a peak in August-September and was inversely correlated with capture effort.

M. Gorostiague; H. A. Regidor

1993-01-01

54

Quantitative fault seal prediction  

Microsoft Academic Search

Fault seal can arise from reservoir/nonreservoir juxtaposition or by development of fault rock having high entry pressure. The methodology for evaluating these possibilities uses detailed seismic mapping and well analysis. A first-order seal analysis involves identifying reservoir juxtaposition areas over the fault surface by using the mapped horizons and a refined reservoir stratigraphy defined by isochores at the fault surface.

G. Yielding; B. Freeman; D. T. Needham

1997-01-01

55

Transition Fault Simulation  

Microsoft Academic Search

Delay fault testing is becoming more important as VLSI chips become more complex. Components that are fragments of functions, such as those in gate-array designs, need a general model of a delay fault and a feasible method of generating test patterns and simulating the fault. The authors present such a model, called a transition fault, which when used with parallel-pattern,

John Waicukauski; Eric Lindbloom; Barry Rosen; Vijay Iyengar

1987-01-01

56

Migration chronology and distribution of redheads on the lower Laguna Madre, Texas  

USGS Publications Warehouse

An estimated 80% of redheads (Aythya americana) winter on the Laguna Madre of southern Texas and Mexico. Because there have been profound changes in the Laguna Madre over the past three decades and the area is facing increasing industrial and recreational development, we studied the winter distribution and habitat requirements of redheads during two winters (1987-1988 and 1988-1989) on the Lower Laguna Madre, Texas to provide information that could be used to understand, identify, and protect wintering redhead habitat. Redheads began arriving on the Lower Laguna Madre during early October in 1987 and 1988, and continued to arrive through November. Redhead migration was closely associated with passing weather fronts. Redheads arrived on the day a front arrived and during the following two days; no migrants were observed arriving the day before a weather front arrived. Flock size of arriving redheads was 26.4 ± 0.6 birds and did not differ among days or by time of day (morning midday, or afternoon). Number of flocks arriving per 0.5 h interval (arrival rate) was greater during afternoon (21.7 ± 0.6) than during morning (4.3 ± 1.2) or midday (1.5 ± 0.4) on the day of frontal passage and during the first day after frontal passage. Upon arrival, redhead flocks congregated in the central portion of the Lower Laguna Madre. They continued to use the central portion throughout the winter, but gradually spread to the northern and southern ends of the lagoon. Seventy-one percent of the area used by flocks was vegetated with shoalgrass (Halodule wrightii) although shoalgrass covered only 32% of the lagoon. Flock movements seemed to be related to tide level; redheads moved to remain in water 12-30 cm deep. These data can be used by the environmental community to identify and protect this unique and indispensable habitat for wintering redheads.

Custer, Christine M.; Custer, T.W.; Zwank, P.J.

1997-01-01

57

Flight elements: Fault detection and fault management  

NASA Technical Reports Server (NTRS)

Fault management for an intelligent computational system must be developed using a top down integrated engineering approach. An approach proposed includes integrating the overall environment involving sensors and their associated data; design knowledge capture; operations; fault detection, identification, and reconfiguration; testability; causal models including digraph matrix analysis; and overall performance impacts on the hardware and software architecture. Implementation of the concept to achieve a real time intelligent fault detection and management system will be accomplished via the implementation of several objectives, which are: Development of fault tolerant/FDIR requirement and specification from a systems level which will carry through from conceptual design through implementation and mission operations; Implementation of monitoring, diagnosis, and reconfiguration at all system levels providing fault isolation and system integration; Optimize system operations to manage degraded system performance through system integration; and Lower development and operations costs through the implementation of an intelligent real time fault detection and fault management system and an information management system.

Lum, H.; Patterson-Hine, A.; Edge, J. T.; Lawler, D.

1990-01-01

58

Fault zone fabric and fault weakness.  

PubMed

Geological and geophysical evidence suggests that some crustal faults are weak compared to laboratory measurements of frictional strength. Explanations for fault weakness include the presence of weak minerals, high fluid pressures within the fault core and dynamic processes such as normal stress reduction, acoustic fluidization or extreme weakening at high slip velocity. Dynamic weakening mechanisms can explain some observations; however, creep and aseismic slip are thought to occur on weak faults, and quasi-static weakening mechanisms are required to initiate frictional slip on mis-oriented faults, at high angles to the tectonic stress field. Moreover, the maintenance of high fluid pressures requires specialized conditions and weak mineral phases are not present in sufficient abundance to satisfy weak fault models, so weak faults remain largely unexplained. Here we provide laboratory evidence for a brittle, frictional weakening mechanism based on common fault zone fabrics. We report on the frictional strength of intact fault rocks sheared in their in situ geometry. Samples with well-developed foliation are extremely weak compared to their powdered equivalents. Micro- and nano-structural studies show that frictional sliding occurs along very fine-grained foliations composed of phyllosilicates (talc and smectite). When the same rocks are powdered, frictional strength is high, consistent with cataclastic processes. Our data show that fault weakness can occur in cases where weak mineral phases constitute only a small percentage of the total fault rock and that low friction results from slip on a network of weak phyllosilicate-rich surfaces that define the rock fabric. The widespread documentation of foliated fault rocks along mature faults in different tectonic settings and from many different protoliths suggests that this mechanism could be a viable explanation for fault weakening in the brittle crust. PMID:20016599

Collettini, Cristiano; Niemeijer, André; Viti, Cecilia; Marone, Chris

2009-12-17

59

Optimal stochastic fault detection filter  

Microsoft Academic Search

A fault detection and identification algorithm, called optimal stochastic fault detection filter, is determined. The objective of the filter is to detect a single fault, called the target fault, and block other faults, called the nuisance faults, in the presence of the process and sensor noises. The filter is derived by maximizing the transmission from the target fault to the

Robert H. Chen; D. Lewis Mingori; Jason L. Speyer

2003-01-01

60

A case of paleo-creep? Comparison of fault displacements in a trench with the corresponding earthquake record in lake sediments along the Polochic fault, Guatemala  

NASA Astrophysics Data System (ADS)

The Polochic and Motagua strike-slip faults in Guatemala accommodate the displacement (~2 cm/yr) across the boundary between the Caribbean and North American plates. Both faults are expected to produce large destructive earthquakes such as the Mw 7.5 earthquake of 1976 on the Motagua fault. Earlier large earthquakes with magnitudes greater than Mw 7.0 are suggested by the areal extent of destruction to pre-Columbian Mayan cities and to churches, and both the Motagua and Polochic faults have been suspected as the sources of these earthquakes. The available record, however, is surprisingly poor in large earthquakes, suggesting either that the record is sketchy or that such earthquakes are effectively infrequent. We investigated the activity of the Polochic fault by opening trenches along its major strand in Uspantán, Quiché, and Agua Blanca, Alta Verapaz. Recent displacements are evidenced in Agua Blanca, where soils less than 350 years old are disrupted by the fault. We combined the study of the trenches with the study of sediment cores from Laguna Chichój, a lake located 4 km north of the Polochic fault. We had previously analyzed the sensitivity of the Chichój lake sediments to earthquakes in the 20th century, for which the earthquake record, as well as the locally felt intensity of each earthquake, is well known. We found that turbidites and slumps are produced in the lake at MMI intensities of VI and higher. We used this calibration to study the earthquake record of the past 12 centuries and identified a cluster of earthquakes with MMI > VI between 830 and 1450 AD. The oldest seismite temporally matches widespread destruction in Mayan cities in 830 AD. Surprisingly, no earthquakes are recorded between 1450 and 1976 AD. Yet the trench in Agua Blanca records substantial displacement of the Polochic fault over that period. It therefore seems that this most recent displacement did not produce any substantial earthquake and may correspond to a period of creep on the Polochic fault.

Brocard, Gilles; Anselmetti, Flavio

2014-05-01

61

Generation of Fault Trees from Simulated Incipient Fault Case Data  

Microsoft Academic Search

Fault tree analysis is widely used in industry for fault diagnosis. The diagnosis of incipient or 'soft' faults is considerably more difficult than that of 'hard' faults, which is the situation normally considered. A detailed fault tree model reflecting signal variations over a wide range is required for diagnosing such soft faults. This paper describes the investigation of a machine learning

Michael G. Madden; Paul J. Nolan

1994-01-01

62

Fault Tolerant Strategies for BLDC Motor Drives under Switch Faults  

Microsoft Academic Search

In this paper, a fault-tolerant system for BLDC motors is proposed to maintain control performance under switching-device faults in the inverter. The proposed fault-tolerant system provides compensation for open-circuit faults and short-circuit faults in the power converter. Fault identification is achieved quickly by a simple algorithm that uses the characteristics of BLDC motor drives. The drive system after

Byoung-Gun Park; Tae-Sung Kim; Ji-Su Ryu; Dong-Seok Hyun

2006-01-01

63

The microbial community at Laguna Figueroa, Baja California Mexico - From miles to microns  

NASA Technical Reports Server (NTRS)

The changes in the composition of the stratified microbial community in the sediments at Laguna Figueroa following floods are studied. The laguna, located on the Pacific coast of the Baja California peninsula 200 km south of the Mexican-U.S. border, comprises an evaporite flat and a salt marsh. Data collected from 1979-1983 using Landsat imagery, Skylab photographs, and light and transmission electron microscopy are presented. The flood conditions, which included 1-3 m of meteoric water covering the area and a remnant of 5-10 cm of siliciclastic and clay sediment, are described. The composition of the community prior to the flooding consisted of Microcoleus, Phormidium sp., a coccoid cyanobacterium, Phloroflexus, Ectothiorhodospira, Chloroflexus, Thiocapsa sp., and Chromatium. Following the floods, Thiocapsa, Chromatium, Oscillatoria sp., Spirulina sp., and Microcoleus are observed in the sediments.

Stolz, J. F.

1985-01-01

64

Every Place Has Its Faults  

NSDL National Science Digital Library

This site covers the four main types of faults (not including growth faults): the normal fault, reverse fault, transcurrent (strike-slip) fault, and thrust fault. Animations show the type of movement for each different type of fault. There is a section on the initial stage of a landform, containing a diagram of a graben and horst system. Also included are photographs of fault scarps along Hebgen Lake, Montana.

65

Comparison of finite difference and finite element hydrodynamic models applied to the Laguna Madre Estuary, Texas  

E-print Network

Contents include: the Laguna Madre Estuary; general hydrodynamic modeling; previous studies in the Laguna Madre; TxBLEND and SWIFT2D; description of models (SWIFT2D, TxBLEND); data and bathymetry generation; grid cell size selection; simulation; and calibration. ... developed by the U.S. Army Corps of Engineers Waterways Experiment Station hydraulics group, has been used in a number of applications. The TABS system comprises the Geometry File Generation program (GFGEN), RMA2, RMA4, and SED2D. The GFGEN software...

McArthur, Karl Edward

1996-01-01

66

Planning for Water Scarcity: The Vulnerability of the Laguna Region, Mexico  

E-print Network

Contents include: estimates of water in the aquifer at drought probabilities of 0.1 and 0.2 in the Laguna Region; comparisons of economic productivity of four different initial estimates of water... new pumping permit but had to acquire a permit from an existing user. Implementation of this law is ongoing and not complete. This dissertation examines declining groundwater availability and management strategies for addressing water shortages...

Sanchez Flores, Maria Del Rosario

2010-10-12

67

Coastal Pond Use by Redheads Wintering in the Laguna Madre, Texas  

Microsoft Academic Search

The distribution of North American redheads (Aythya americana) during winter is highly concentrated in the Laguna Madre of Texas and Tamaulipas, Mexico. Redheads forage almost exclusively in the lagoon and primarily on shoalgrass (Halodule wrightii) rhizomes; however, they make frequent flights to adjacent coastal ponds to dilute salt loads ingested while foraging. We conducted 63 weekly aerial surveys during October–March

Bart M. Ballard; J. Dale James; Ralph L. Bingham; Mark J. Petrie; Barry C. Wilson

2010-01-01

68

Natural resource appropriation in cooperative artisanal fishing between fishermen and dolphins (Tursiops truncatus) in Laguna, Brazil  

E-print Network

(Tursiops truncatus) in Laguna, Brazil. Débora Peterson, Natalia Hanazaki, Paulo César Simões-Lopes. Universidade Federal de Santa Catarina, CCB/ECZ, Florianópolis, SC 88010-970, Brazil; Laboratório de Mamíferos Aquáticos, Universidade Federal de Santa Catarina, CCB/ECZ, Florianópolis, SC 88010-970, Brazil. Article history: Available

Simões-Lopes, Paulo César

69

Fault Mapping in Haiti  

USGS Multimedia Gallery

USGS geologist Carol Prentice surveying features that have been displaced by young movements on the Enriquillo fault in southwest Haiti.  The January 2010 Haiti earthquake was associated with the Enriquillo fault....

70

Waterbirds (other than Laridae) nesting in the middle section of Laguna Cuyutlán, Colima, México.  

PubMed

Laguna de Cuyutlán, in the state of Colima, Mexico, is the only large coastal wetland in a span of roughly 1150 km. Despite this, the study of its birds has been largely neglected. Between 2003 and 2006 we assessed the waterbirds nesting in the middle portion of Laguna Cuyutlán, a large tropical coastal lagoon, through field visits. We documented the nesting of 15 species of non-Laridae waterbirds: Neotropic Cormorant (Phalacrocorax brasilianus), Tricolored Egret (Egretta tricolor), Snowy Egret (Egretta thula), Little Blue Heron (Egretta caerulea), Great Egret (Ardea alba), Cattle Egret (Bubulcus ibis), Black-crowned Night-heron (Nycticorax nycticorax), Yellow-crowned Night-heron (Nyctanassa violacea), Green Heron (Butorides virescens), Roseate Spoonbill (Platalea ajaja), White Ibis (Eudocimus albus), Black-bellied Whistling-duck (Dendrocygna autumnalis), Clapper Rail (Rallus longirostris), Snowy Plover (Charadrius alexandrinus), and Black-necked Stilt (Himantopus mexicanus). These add to six species of Laridae known to nest in that area: Laughing Gulls (Larus atricilla), Royal Terns (Thalasseus maximus), Gull-billed Terns (Gelochelidon nilotica), Forster's Terns (S. forsteri), Least Terns (Sternula antillarum), and Black Skimmer (Rynchops niger), and to at least 57 species using it during the non-breeding season. With such bird assemblages, Laguna Cuyutlán is an important site for waterbirds, which should be given conservation status. PMID:18624252

Mellink, Eric; Riojas-López, Mónica E

2008-03-01

71

Hydrocarbon concentrations in sediments and clams (Rangia cuneata) in Laguna de Pom, Mexico  

SciTech Connect

Laguna de Pom is a coastal lagoon within the Laguna de Terminos system in the southern Gulf of Mexico. It belongs to the Grijalva-Usumacinta basin and is located between 18°33′ and 18°38′ north latitude and 92°01′ and 92°14′ west longitude, in the Coastal Plain physiographic province of the Gulf. It is ellipsoidal and approximately 10 km long, with a surface area of 5,200 ha and a mean depth of 1.5 m. Water salinity and temperature ranges are 0 to 13‰ and 25° to 31°C, respectively. The benthic macrofauna is dominated by bivalves such as the clams Rangia cuneata, R. flexuosa, and Polymesoda carolineana. These clams provide the basis of an artisanal fishery, which is the main economic activity in the region. The presence of several oil-processing facilities around the lagoon is very conspicuous, which, together with decreasing yields, has created social conflicts, with the fishermen blaming the Mexican state oil company (PEMEX) for the decrease in the clam population. This work aims to determine whether the concentrations of hydrocarbons in the clams (R. cuneata) and sediments of Laguna de Pom are responsible for the declining clam fishery. 11 refs., 4 figs., 2 tabs.

Alvarez-Legorreta, T.; Gold-Bouchot, G.; Zapata-Perez, O. [Unidad Merida (Mexico)

1994-01-01

72

Late Quaternary faulting in the Cabo San Lucas-La Paz Region, Baja California  

NASA Astrophysics Data System (ADS)

While Baja California drifts, active deformation on and just offshore of the peninsula indicates that spreading is not completely localized to the rift axis in the Gulf of California. Using onshore and offshore data, we characterize normal-faulting-related deformation in the Cabo San Lucas-La Paz area. We mapped sections of the north-trending faults in a 150 km long, left-stepping fault array. Starting in the south, the east-dipping San Jose del Cabo fault bounds the ~2 km high Sierra La Laguna. It is >70 km long, with well-defined 1-10 m fault scarps cutting the youngest late Quaternary geomorphic surfaces. Our preliminary mapping along its north-central section reveals extensive late Quaternary terraces with riser heights of tens of meters above Holocene terraces. The San Jose del Cabo fault trace becomes diffuse and terminates in the area of Los Barriles. Moving northward, the fault system steps to the west, apparently transferring slip to the faults of San Juan de Los Planes and Saltito, which then step left again across the La Paz basin to the NNW-trending Carrizal fault. The Carrizal fault has an onshore length of >60 km; we produced a 25 km detailed strip map along its northern segment. The trace is embayed by convex-east arcs several kilometers long and 100 m deep. In the south, scarps a few meters high cut a pediment of thin Quaternary cover over Tertiary volcanic rocks. The escarpment along the fault is hundreds of meters high, with scarps 1-10 m high where it goes offshore in the north. Near Bonfil, a quarry cut exposes the fault zone, which comprises a 5-10 m wide bedrock shear zone with sheared Tertiary volcanic units. On the footwall, the lower silty and sandy units have moderately well developed pedogenic carbonate, whereas the upper coarse gravel does not. These late Quaternary units appear to be faulted by one to three earthquakes. Finally, we mapped the Saltito fault zone NNE of La Paz. It is a NW-trending structure with well-developed 5-10 m high bedrock scarps defining its northwestern 5 km; the trace is slightly concave to the east, with a 500 m left step. Along all the fault zones studied, offset geomorphic surfaces indicate late Pleistocene to Holocene offset. These surfaces can be exploited to determine slip rates and to produce a regional chronosequence to test for synchroneity of climatically modulated variations in sediment supply and transport capacity. In addition, a shallow marine geophysical and coring survey extends our mapping and provides important age control and improved stratigraphic assessment of fault activity.

Busch, M.; Arrowsmith, J. R.; Umhoefer, P. J.; Gutiérrez, G. M.; Toke, N.; Brothers, D.; Dimaggio, E.; Maloney, S.; Zielke, O.; Buchanan, B.

2006-12-01

73

Active Faulting in Idaho  

NSDL National Science Digital Library

This lesson introduces students to faulting from the Quaternary Period and the Holocene Epoch in the State of Idaho. They will examine a map showing the distribution of these faults and answer questions concerning groundwater circulation and earthquake potential, and determine which geologic province has the most neotectonically active faults (15,000 years or younger).

74

Fault-tolerant estimation  

Microsoft Academic Search

A fault-tolerant estimator is obtained from fusing the concept of fault detection with estimation. Two possible architectures are evaluated. At the center of the fault-tolerant estimation procedure is a bank of filters computing local state estimates, a residual screening scheme to isolate corrupted estimates and a method of blending untainted ones into a global minimum variance estimate, free of the
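The architecture sketched in the abstract above (a bank of local filters, a residual screening step, and a blend of the untainted estimates) can be illustrated with a deliberately simplified scalar version. The screening threshold and the inverse-variance blend below are assumptions made for the example, not the paper's specific algorithm.

```python
import numpy as np

def blend_estimates(estimates, variances, residuals, threshold=3.0):
    """Keep local estimates whose residual passes the screening test, then fuse
    the survivors with inverse-variance weights (a minimum-variance blend)."""
    keep = [i for i, r in enumerate(residuals) if abs(r) <= threshold]
    if not keep:
        raise RuntimeError("all local estimates were screened out")
    w = np.array([1.0 / variances[i] for i in keep])
    x = np.array([estimates[i] for i in keep])
    return float(np.sum(w * x) / np.sum(w))

# Three local filters; the second is corrupted (large residual) and is screened out.
print(blend_estimates(estimates=[1.0, 5.0, 1.2],
                      variances=[0.1, 0.1, 0.2],
                      residuals=[0.5, 8.0, -0.4]))
```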

Laurence H. Mutuel; Jason L. Speyer

2000-01-01

75

Fault Tree Analysis  

Microsoft Academic Search

In this chapter, a state-of-the-art review of fault tree analysis is presented. Different forms of fault trees, including static, dynamic, and non-coherent fault trees, their applications and analyses will be discussed. Some advanced topics such as importance analysis, dependent failures, disjoint events, and multistate systems will also be presented.

Liudong Xing; Suprasad V. Amari

76

Mechanics of discontinuous faults  

Microsoft Academic Search

Fault traces consist of numerous discrete segments, commonly arranged as echelon arrays. In some cases, discontinuities influence the distribution of slip and seismicity along faults. To analyze fault segments, we derive a two-dimensional solution for any number of nonintersecting cracks arbitrarily located in a homogeneous elastic material. The solution includes the elastic interaction between cracks. Crack surfaces are assumed to

P. Segall; D. D. Pollard

1980-01-01

77

Faults of Southern California  

NSDL National Science Digital Library

This interactive map displays faults for five regions in Southern California. Clicking on a region links to an enlarged relief map of the area, with local faults highlighted in colors. Users can click on individual faults to access pages with more detailed information, such as type, length, nearest communities, and a written description. In all of the maps, the segment of the San Andreas fault that is visible is highlighted in red, and scales for distances and elevations are provided. There is also a link to an alphabetical listing of faults by name.

78

Diagnosing CMOS bridging faults with stuck-at fault dictionaries  

Microsoft Academic Search

It is shown that the traditional approach to diagnosing stuck-at faults with fault dictionaries generated for stuck-at faults is not appropriate for diagnosing CMOS bridging faults. A novel technique for using stuck-at-fault dictionaries to diagnose bridging faults is described. Teradyne's LASAR was used to simulate bridging and stuck-at faults in a number of combinational circuits, including parity trees, multiplexers, and
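For context on the abstract above, the traditional baseline it critiques is a dictionary lookup: each modeled stuck-at fault is stored with the set of test patterns it causes to fail, and observed failures are matched against those signatures. The toy dictionary, net names, and Jaccard ranking below are illustrative assumptions, not the paper's novel bridging-fault technique.

```python
# Toy stuck-at fault dictionary: each fault maps to the set of test-pattern
# indices it is predicted to fail. Candidates are ranked by how well their
# signature matches the observed failing patterns.

fault_dictionary = {
    "net7_stuck_at_0":  {1, 4, 9},
    "net7_stuck_at_1":  {2, 3},
    "net12_stuck_at_0": {1, 4},
}

def rank_candidates(observed_failures, dictionary):
    def score(signature):
        # Jaccard similarity between predicted and observed failing sets
        return len(signature & observed_failures) / len(signature | observed_failures)
    return sorted(dictionary, key=lambda f: score(dictionary[f]), reverse=True)

print(rank_candidates({1, 4}, fault_dictionary))  # 'net12_stuck_at_0' ranks first
```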

Steven D. Millman; Edward J. McCluskey; John M. Acken

1990-01-01

79

Earthquake fault superhighways  

NASA Astrophysics Data System (ADS)

Motivated by the observation that the rare earthquakes which propagated for significant distances at supershear speeds occurred on very long straight segments of faults, we examine every known major active strike-slip fault system on land worldwide and identify those with long (> 100 km) straight portions capable not only of sustained supershear rupture speeds but having the potential to reach compressional wave speeds over significant distances, and call them "fault superhighways". The criteria used for identifying these are discussed. These superhighways include portions of the 1000 km long Red River fault in China and Vietnam passing through Hanoi, the 1050 km long San Andreas fault in California passing close to Los Angeles, Santa Barbara and San Francisco, the 1100 km long Chaman fault system in Pakistan north of Karachi, the 700 km long Sagaing fault connecting the first and second cities of Burma, Rangoon and Mandalay, the 1600 km Great Sumatra fault, and the 1000 km Dead Sea fault. Of the 11 faults so classified, nine are in Asia and two in North America, with seven located near areas of very dense populations. Based on the current population distribution within 50 km of each fault superhighway, we find that more than 60 million people today have increased seismic hazards due to them.

Robinson, D. P.; Das, S.; Searle, M. P.

2010-10-01

80

Creeping Faults and Seismicity: Lessons From The Hayward Fault, California  

Microsoft Academic Search

While faults remain mostly locked between large strain releasing events, they can dissipate some of the accumulating elastic strain through creep. One such fault that releases a significant fraction of accumulating strain by creep is the Hayward fault in the San Francisco Bay region of California. The seismic risk associated with creeping faults such as the Hayward fault will depend

R. Malservisi; K. P. Furlong; C. Gans

2002-01-01

81

Morphometric or morpho-anatomal and genetic investigations highlight allopatric speciation in Western Mediterranean lagoons within the Atherina lagunae species (Teleostei, Atherinidae)  

Microsoft Academic Search

Current distribution of Atherina lagunae poses an interesting biogeographical problem as this species inhabits widely separate circum-Mediterranean lagoons. Statistical analyses of 87 biometric parameters and genetic variation in a portion of the cytochrome b gene were examined in four populations of A. lagunae from Tunisian and French lagoons. The results suggested a subdivision into two distinct Atherinid groups: one included

M. Trabelsi; F. Maamouri; J.-P. Quignard; M. Boussaid; E. Faure

2004-01-01

82

Isolability of faults in sensor fault diagnosis  

NASA Astrophysics Data System (ADS)

A major concern with fault detection and isolation (FDI) methods is their robustness with respect to noise and modeling uncertainties. With this in mind, several approaches have been proposed to minimize the vulnerability of FDI methods to these uncertainties. But, apart from the algorithm used, there is a theoretical limit on the minimum effect of noise on detectability and isolability. This limit has been quantified in this paper for the problem of sensor fault diagnosis based on direct redundancies. In this study, first a geometric approach to sensor fault detection is proposed. The sensor fault is isolated based on the direction of residuals found from a residual generator. This residual generator can be constructed from an input-output or a Principal Component Analysis (PCA) based model. The simplicity of this technique, compared to the existing methods of sensor fault diagnosis, allows for more rational formulation of the isolability concepts in linear systems. Using this residual generator and the assumption of Gaussian noise, the effect of noise on isolability is studied, and the minimum magnitude of isolable fault in each sensor is found based on the distribution of noise in the measurement system. Finally, some numerical examples are presented to clarify this approach.
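As a concrete illustration of the residual-direction idea summarized above, the sketch below builds a PCA model from three synthetic, correlated sensors, projects a measurement onto the residual subspace, and attributes a bias fault to the sensor whose fault direction best aligns with the residual. The synthetic data, function names, and scoring rule are assumptions for the example, not taken from the paper.

```python
import numpy as np

def train(X, n_components):
    """Fit a PCA model; return (mean, projector onto the residual subspace)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    P = Vt[:n_components].T                      # retained principal directions
    return mean, np.eye(X.shape[1]) - P @ P.T    # residual-space projector

def isolate(x, mean, resid_proj):
    """Return (index of most likely faulty sensor, residual norm)."""
    r = resid_proj @ (x - mean)
    # A bias on sensor i drives the residual along column i of the projector,
    # so attribute the fault to the column that best aligns with r.
    scores = [abs(r @ resid_proj[:, i]) /
              (np.linalg.norm(r) * np.linalg.norm(resid_proj[:, i]) + 1e-12)
              for i in range(len(x))]
    return int(np.argmax(scores)), float(np.linalg.norm(r))

# Three correlated sensors driven by one latent variable (synthetic data).
rng = np.random.default_rng(0)
t = rng.normal(size=(500, 1))
X = np.hstack([t, 2 * t, -t]) + 0.01 * rng.normal(size=(500, 3))
mean, W = train(X, n_components=1)

faulty_sample = X[0] + np.array([0.0, 1.0, 0.0])   # inject a bias on sensor 1
print(isolate(faulty_sample, mean, W))             # -> sensor index 1 isolated
```

In this setting the minimum isolable fault magnitude discussed in the abstract corresponds to the smallest bias that lifts the residual norm reliably above the noise-driven residuals of healthy samples.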

Sharifi, Reza; Langari, Reza

2011-10-01

83

How Faults Shape the Earth.  

ERIC Educational Resources Information Center

Presents fault activity with an emphasis on earthquakes and changes in continent shapes. Identifies three types of fault movement: normal, reverse, and strike faults. Discusses the seismic gap theory, plate tectonics, and the principle of superposition. Vignettes portray fault movement, and the locations of the San Andreas fault and epicenters of…

Bykerk-Kauffman, Ann

1992-01-01

84

It's Not Your Fault  

NSDL National Science Digital Library

In this lesson students will learn about tectonic plate movement. They will discover that we can measure the relative motions of the Pacific Plate and the North American Plate along the San Andreas Fault. Students will be able to compare and contrast movements on either side of the San Andreas Fault, calculate the amount of movement of a tectonic plate over a period of time, and describe the processes involved in the occurrence of earthquakes along the fault.

85

The San Andreas Fault  

NSDL National Science Digital Library

This United States Geological Survey (USGS) publication discusses the San Andreas Fault in California; specifically what has caused the fault, where it is located, surface features that characterize it, and movement that has occurred. General earthquake information includes an explanation of what earthquakes are, and earthquake magnitude versus intensity. Earthquakes that have occurred along the fault are covered, as well as where the next large one may occur and what can be done about large earthquakes in general.

Sandra Schulz

86

Its Not My Fault  

NSDL National Science Digital Library

Students become familiar with strike-slip faults, normal faults, reverse faults and visualize these geological structures using cardboard or a plank of wood, a stack of books, protractor, and a spring scale. The resource is part of the teacher's guide accompanying the video, NASA SCI Files: The Case of the Shaky Quake. Lesson objectives supported by the video, additional resources, teaching tips and an answer sheet are included in the teacher's guide.

2012-08-03

87

Fault simulation and test generation for small delay faults  

E-print Network

Delay faults are an increasingly important test challenge. Traditional delay fault models are incomplete in that they model only a subset of delay defect behaviors. To solve this problem, a more realistic delay fault model has been developed which...

Qiu, Wangqi

2007-04-25

88

Rock Magnetic Properties of Laguna Carmen (Tierra del Fuego, Argentina): Implications for Paleomagnetic Reconstruction  

NASA Astrophysics Data System (ADS)

We report preliminary results obtained from a multi-proxy analysis, including paleomagnetic and rock-magnetic studies, of two sediment cores from Laguna Carmen (53°40'60" S 68°19'0" W, ~83 m asl) in the semiarid steppe of northern Tierra del Fuego island, southernmost Patagonia, Argentina. Two short cores (115 cm) were sampled using a Livingstone piston corer during the 2011 southern fall. Sediments are massive green clays (115 to 70 cm depth) with irregularly spaced thin sandy strata and lenses. Massive yellow clay with thin sandy strata continues up to 30 cm depth; from there up to 10 cm depth, yellow massive clays dominate. The topmost 10 cm are mixed yellow and green clays with fine sand. Measurements of the intensity and directions of the Natural Remanent Magnetization (NRM), magnetic susceptibility, isothermal remanent magnetization, saturation isothermal remanent magnetization (SIRM), back field, and anhysteretic remanent magnetization at 100 mT (ARM100mT) were performed, and several associated parameters were calculated (ARM100mT/k and SIRM/ARM100mT). Also, as a first estimate of relative magnetic grain-size variations, the median destructive field of the NRM (MDFNRM) was determined. Additionally, we present results of magnetic parameters measured with a vibrating sample magnetometer (VSM). The stability of the NRM was analyzed by alternating field demagnetization. The magnetic properties show variable values, indicating changes in both the grain size and the concentration of magnetic minerals. The main carrier of remanence was found to be magnetite, with hematite present in very low percentages. This is the first paleomagnetic study performed in lakes located in the northern, semiarid Fuegian steppe, where humid-dry cycles throughout the Holocene have been interpreted from an aeolian paleosoil sequence (Orgeira et al., 2012). A comparison between the paleomagnetic records of Laguna Carmen and results obtained in earlier studies carried out at Laguna Potrok Aike (Gogorza et al., 2012) was performed. References Gogorza, C.S.G., Irurzun, M.A., Sinito, A.M., Lisé-Pronovost, A., St-Onge, G., Haberzettl, T., Ohlendorf, C., Kastner, S., Zolitschka, B., 2012. High-resolution paleomagnetic records from Laguna Potrok Aike (Patagonia, Argentina) for the last 16,000 years. Geochemistry Geophysics Geosystems 13, Q12Z37. Orgeira, M.J., Vásquez, C.A., Coronato, A., Ponce, F., Moreto, A., Osterrieth, M., Egli, R., Onorato, R., 2012. Magnetic properties of Holocene edaphized silty eolian sediments from Tierra del Fuego (Argentina). Revista de la Sociedad Geológica de España 25 (1-2), 45-56.

Gogorza, C. G.; Orgeira, M. J.; Ponce, F.; Fernández, M.; Laprida, C.; Coronato, A.

2013-05-01

89

Laguna Potrok Aike: palaeoenvironmental reconstruction in southern South America covering the last 50,000 years  

NASA Astrophysics Data System (ADS)

Laguna Potrok Aike, located in the Province of Santa Cruz, southern Argentina, is one of the very few locations suited to reconstructing the palaeoenvironmental and climatic history of southern Patagonia outside of the Andes. The lake was drilled in 2008 in the framework of the multinational ICDP (International Continental Scientific Drilling Program) project "Potrok Aike maar lake sediment archive drilling project" (PASADO), when several long sediment cores to a composite depth of more than 100 m were obtained, yielding a record that dates back about 50,000 years. Laguna Potrok Aike is located at about 52°S 70°W, just north of the Strait of Magellan and relatively close to the Antarctic continent. The 100 m deep lake originated in a maar explosion around 770,000 years ago. Today it has an episodic inflow in the west from a catchment area stretching in a SW direction and is surrounded by Patagonian steppe. The first forest patches are situated about 80 km further west at the foothills of the Andes. Laguna Potrok Aike is one of the few permanent lakes in the area and was not covered by glaciers during the last ice ages; it therefore offers a unique archive providing a continuous lacustrine record of the climatic and ecological history. The presentation will give a brief overview of the most important results gathered by different disciplines, covering aspects of Quaternary geology, hydrology, climate reconstruction, and different dating techniques, with a focus on palaeobiological proxies such as pollen. A continuous paleoprecipitation record for the last 50,000 years will be presented based on a pollen transfer function using the Weighted Average Partial Least Squares method. Results show higher precipitation values during the Holocene than during the Last Glacial, with a transition during Termination 1. The paper will synthesize the locally derived palaeoecological data from Laguna Potrok Aike, compare them on a regional scale for south-eastern Patagonia, and finally link them with the Southern Hemispheric Westerlies (SHW). This contributes to the scientific debates about past changes in SHW position and intensity and about dust transport to Antarctica. This study extends our knowledge of climate variability, trends, events, and their respective forcing factors in an area subject to shifts in polar and mid-latitude wind fields and related precipitation regimes beyond the Last Glacial Maximum.

Schaebitz, F. W.

2012-12-01

90

Niebla ceruchis from Laguna Figueroa: dimorphic spore morphology and secondary compounds localized in pycnidia and apothecia  

NASA Technical Reports Server (NTRS)

During and after the floods of 1979-80 Niebla ceruchis growing epiphytically on Lycium brevipes was one of the dominant aspects of the vegetation in the coastal dunal complex bordering the microbial mats at Laguna Figueroa, Baja California Norte, Mexico. The lichen on denuded branches of Lycium was far more extensively distributed than Lycium lacking lichen. Unusual traits of this Niebla ceruchis strain, namely localization of lichen compounds in the mycobiont reproductive structures (pycnidia and apothecia) and simultaneous presence of bilocular and quadrilocular ascospores, are reported. The abundance of this coastal lichen cover at the microbial mat site has persisted through April 1988.

Enzien, M.; Margulis, L.

1988-01-01

91

A comprehensive analysis of the performance characteristics of the Mount Laguna solar photovoltaic installation  

NASA Technical Reports Server (NTRS)

This paper represents the first comprehensive survey of the Mount Laguna Photovoltaic Installation. The novel techniques used for performing the field tests have been effective in locating and characterizing defective modules. A comparative analysis on the two types of modules used in the array indicates that they have significantly different failure rates, different distributions in degradational space and very different failure modes. A life cycle model is presented to explain a multimodal distribution observed for one module type. A statistical model is constructed and it is shown to be in good agreement with the field data.

Shumka, A.; Sollock, S. G.

1981-01-01

92

Fault Creep on the Hayward Fault, CA: Implications for Fault Properties and Patterns of Moment Release  

Microsoft Academic Search

The seismic risk associated with creeping faults such as the Hayward fault (San Francisco Bay Area, CA) will depend on the rate of moment accumulation (slip deficit) on the fault plane, on the specific geometry of locked and free portions of the fault, and on the interactions between the fault zone and the surrounding lithosphere. Using a visco-elastic finite-element model,

R. Malservisi; K. P. Furlong; C. Gans

2001-01-01

93

Fault detection and fault tolerance in robotics  

NASA Technical Reports Server (NTRS)

Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.
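The fault tree analysis mentioned in the abstract above can be illustrated with a minimal evaluator over AND/OR gates and basic events. The tiny robot-arm tree below, including its event names and structure, is a hypothetical example, not the tree developed in the paper.

```python
# Minimal fault-tree evaluator: gates are AND/OR nodes over basic events.
# A node is either ("event", name) or ("and"/"or", label, [children]).

def evaluate(node, failed_events):
    kind = node[0]
    if kind == "event":
        return node[1] in failed_events
    children = [evaluate(c, failed_events) for c in node[2]]
    return all(children) if kind == "and" else any(children)

# Hypothetical top event: loss of arm motion.
robot_tree = ("or", "arm_motion_lost", [
    ("event", "joint_motor_failure"),
    ("and", "no_position_feedback", [
        ("event", "encoder_failure"),
        ("event", "backup_resolver_failure"),
    ]),
])

print(evaluate(robot_tree, {"encoder_failure"}))                             # False
print(evaluate(robot_tree, {"encoder_failure", "backup_resolver_failure"}))  # True
```

Walking such a tree for simulated component failures is one simple way to check how resilient the design is to a specific fault, in the spirit of the failure flow chart the abstract describes.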

Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.

1992-01-01

94

Fault table generation using Graphics Processing Units  

Microsoft Academic Search

In this paper, we explore the implementation of fault table generation on a Graphics Processing Unit (GPU). A fault table is essential for fault diagnosis and fault detection in VLSI testing and debug. Generating a fault table requires extensive fault simulation, with no fault dropping, and is extremely expensive from a computational standpoint. Fault simulation is inherently parallelizable, and the
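To clarify what a fault table is, the sketch below builds one for a toy two-gate circuit on the CPU: every single stuck-at fault is simulated against every test vector with no fault dropping, and an entry is marked when the faulty response differs from the good one. The circuit, net names, and fault model are assumptions for illustration; the GPU parallelization that is the paper's subject is not shown.

```python
import numpy as np

# Toy circuit: c = AND(a, b); out = OR(c, d). Faults are single stuck-at-0/1
# on each named net. Rows of the table are faults, columns are test vectors.
NETS = ["a", "b", "c", "d", "out"]

def simulate(vec, fault=None):
    """vec = (a, b, d); fault = (net, stuck_value) or None. Returns 'out'."""
    def val(net, x):
        return fault[1] if fault and fault[0] == net else x
    v = {"a": val("a", vec[0]), "b": val("b", vec[1]), "d": val("d", vec[2])}
    v["c"] = val("c", v["a"] & v["b"])
    v["out"] = val("out", v["c"] | v["d"])
    return v["out"]

faults = [(n, s) for n in NETS for s in (0, 1)]
vectors = [(a, b, d) for a in (0, 1) for b in (0, 1) for d in (0, 1)]
table = np.array([[simulate(t, f) != simulate(t) for t in vectors] for f in faults])
print(table.astype(int))   # 10 faults x 8 vectors, no fault dropping
```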

Kanupriya Gulati; Sunil P Khatri

2009-01-01

95

Testing for Design Faults  

Microsoft Academic Search

Existing theories of testing focus on verification. Their strategy is to cover a specification or a program text to a certain degree in order to raise the confidence in the correctness of a system under test. We take a different approach in the sense that we present a theory of fault-based testing. Fault-based testing uses test data designed to demonstrate

Bernhard K. Aichernig; Jifeng He

2005-01-01

96

SFT: scalable fault tolerance  

Microsoft Academic Search

In this paper we will present a new technology that we are currently developing within the SFT: Scalable Fault Tolerance FastOS project which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continuing execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency---requiring

Fabrizio Petrini; Jarek Nieplocha; Vinod Tipparaju

2006-01-01

97

Denali Fault: Susitna Glacier  

USGS Multimedia Gallery

Helicopters and satellite phones were integral to the geologic field response. Here, Peter Haeussler is calling a seismologist to pass along the discovery of the Susitna Glacier thrust fault. View is to the north up the Susitna Glacier. The Denali fault trace lies in the background where the two lan...

98

Denali Fault: Gillette Pass  

USGS Multimedia Gallery

View northward of mountain near Gillette Pass showing sackung features. Here the mountaintop moved downward like a keystone, producing an uphill-facing scarp. The main Denali fault trace is on the far side of the mountain and a small splay fault is out of view below the photo....

99

Denali Fault: Gillette Pass  

USGS Multimedia Gallery

View north of the Denali fault trace at Gillette Pass. This view shows that the surface rupture reoccupies the previous fault scarp. Also, the right-lateral offset of these stream gullies has developed since deglaciation in the last 10,000 years or so....

100

Denali Fault: Alaska Pipeline  

USGS Multimedia Gallery

View south along the Trans Alaska Pipeline in the zone where it was engineered for the Denali fault. The fault trace passes beneath the pipeline between the 2nd and 3rd slider supports at the far end of the zone. A large arc can be seen in the pipe on the right, due to shortening of the ...

101

Puente Hills Fault Visualization  

NSDL National Science Digital Library

The Puente Hills Fault poses a disaster threat for the Los Angeles region. Earthquake simulations on this fault estimate damages of over $250 billion. Visualizations created by SDSC using data computed from earthquake simulations help one to grasp the propagation of seismic waves and the areas affected.

102

Practical Byzantine Fault Tolerance  

Microsoft Academic Search

This paper describes a new replication algorithm that is able to tolerate Byzantine faults. We believe that Byzantine- fault-tolerant algorithms will be increasingly important in the future because malicious attacks and software errors are increasingly common and can cause faulty nodes to exhibit arbitrary behavior. Whereas previous algorithms assumed a synchronous system or were too slow to be used in
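The replica-count arithmetic behind Byzantine-fault-tolerant replication is easy to show concretely: with n replicas the standard bound tolerates f = floor((n - 1) / 3) arbitrary (Byzantine) failures, and agreement uses quorums of 2f + 1 replicas. The sketch below illustrates only this sizing, not the protocol's message flow.

```python
# Back-of-the-envelope BFT sizing: n >= 3f + 1 replicas tolerate f Byzantine
# faults, with quorums of 2f + 1. Illustrative helper, not the full protocol.

def bft_sizing(n_replicas: int) -> dict:
    f = (n_replicas - 1) // 3
    return {"tolerated_faults": f, "quorum_size": 2 * f + 1}

for n in (4, 7, 10):
    print(n, bft_sizing(n))   # 4 -> f=1, 7 -> f=2, 10 -> f=3
```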

Miguel Castro; Barbara Liskov

1999-01-01

103

Fault rocks lab  

NSDL National Science Digital Library

This lab is intended to give students some hands on experience looking at fault rocks with a suite of cataclasites and mylonites I have collected. The focus is on identifying key textural features in both hand sample and thin section and understanding how deformation within a fault zone varies with depth.

John Singleton

104

Elastohydrodynamic lubrication of faults  

Microsoft Academic Search

The heat flow paradox provides evidence that a dynamic weakening mechanism may be important in understanding fault friction and rupture. We present here a specific model for dynamic velocity weakening that uses the mechanics of well-studied industrial bearings to explain fault zone processes. An elevated fluid pressure is generated in a thin film of viscous fluid that is sheared between

Emily E. Brodsky; Hiroo Kanamori

2001-01-01

105

Folds and Faults  

NSDL National Science Digital Library

In this activity, students will learn how rock layers are folded and faulted and how to represent these structures in maps and cross sections. They will use playdough to represent layers of rock and make cuts in varying orientations to represent faults and other structures.

106

A 20,000-year record of environmental change from Laguna Kollpa Kkota, Bolivia  

SciTech Connect

Most records of paleoclimate in the Bolivian Andes date from the last glacial-to-interglacial transition. However, Laguna Kollpa Kkota and other lakes like it formed more than 20,000 yr BP, when glaciers retreated and moraines dammed the drainage of the valleys in which they are located. These lakes were protected from subsequent periods of glaciation because the headwalls of these valleys are below the level of the late-Pleistocene glacial equilibrium-line altitude. The chemical, mineral, and microfossil stratigraphies of these glacial lakes provide continuous records of environmental change for the last 20,000 years that can be used to address several problems in paleoclimate specific to tropical-subtropical latitudes. Preliminary results from Laguna Kollpa Kkota indicate that glacial equilibrium-line altitudes were never depressed more than 600 m during the last 20,000 years, suggesting that temperatures were reduced only a few degrees Celsius over this time period. Sedimentation rates and the organic carbon stratigraphy of cores reflect an increase in moisture in the late Pleistocene just prior to the transition to a warmer and drier Holocene. The pollen and diatom concentrations in the sediments are sufficient to permit the high resolution analyses needed to address whether or not there were climatic reversals during the glacial-to-interglacial transition.

Seltzer, G.O. (Byrd Polar Research Center, Columbus, OH (United States). Mendenhall Lab.); Abbott, M.B. (Limnological Research Center, Minneapolis, MN (United States))

1992-01-01

107

Solar system fault detection  

DOEpatents

A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.
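The patent abstract above describes combining sensor outputs "in a pre-established manner" to drive fault indicators. The sketch below shows one hypothetical boolean combination of that kind; the specific conditions and fault names are invented for illustration and are not taken from the patent.

```python
# Illustrative combination of boolean sensor conditions into fault indicators
# for an active solar system. Conditions and fault definitions are assumptions.

def fault_indicators(pump_running: bool, flow_ok: bool,
                     collector_hot: bool, tank_warmer_than_collector: bool) -> dict:
    return {
        # Pump commanded on but no flow measured -> blockage or pump failure.
        "no_flow_fault": pump_running and not flow_ok,
        # Collector hot while the pump is off -> stagnation condition.
        "stagnation_fault": collector_hot and not pump_running,
        # Flow with the pump off and a warm tank -> reverse thermosiphoning.
        "reverse_thermosiphon": (not pump_running) and flow_ok and tank_warmer_than_collector,
    }

print(fault_indicators(pump_running=True, flow_ok=False,
                       collector_hot=True, tank_warmer_than_collector=False))
```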

Farrington, R.B.; Pruett, J.C. Jr.

1984-05-14

108

Solar system fault detection  

DOEpatents

A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.

Farrington, Robert B. (Wheatridge, CO); Pruett, Jr., James C. (Lakewood, CO)

1986-01-01

109

Characterization of leaky faults  

SciTech Connect

Leaky faults provide a flow path for fluids to move underground. It is very important to characterize such faults in various engineering projects. The purpose of this work is to develop mathematical solutions for this characterization. The flow of water in an aquifer system and the flow of air in the unsaturated fault-rock system were studied. If the leaky fault cuts through two aquifers, characterization of the fault can be achieved by pumping water from one of the aquifers, which are assumed to be horizontal and of uniform thickness. Analytical solutions have been developed for two cases of either a negligibly small or a significantly large drawdown in the unpumped aquifer. Some practical methods for using these solutions are presented. 45 refs., 72 figs., 11 tabs.

Shan, Chao

1990-05-01

110

Towards a late Quaternary tephrochronological framework for the southernmost part of South America e the Laguna Potrok Aike tephra  

E-print Network

Keywords: chronology; Mt. Burney; volcanic. Abstract: A total of 18 tephra samples have been analysed from the composite sediment sequence from Site 2 of the Laguna Potrok Aike ICDP expedition 5022 from southern Patagonia, Argentina, which extends back to ca 51 ka cal BP. Analyses of the volcanic glass show that all

111

The LAGUNA design study- towards giant liquid based underground detectors for neutrino physics and astrophysics and proton decay searches  

Microsoft Academic Search

The feasibility of a next generation neutrino observatory in Europe is being considered within the LAGUNA design study. To accommodate giant neutrino detectors and shield them from cosmic rays, a new very large underground infrastructure is required. Seven potential candidate sites in different parts of Europe and at several distances from CERN are being studied: Boulby (UK), Canfranc (Spain), Fréjus

D. Angus; A. Ariga; D. Autiero; A. Apostu; A. Badertscher; T. Bennet; G. Bertola; P. F. Bertola; O. Besida; A. Bettini; C. Booth; J. L. Borne; I. Brancus; W. Bujakowsky; J. E. Campagne; G. Cata Danil; F. Chipesiu; M. Chorowski; J. Cripps; A. Curioni; S. Davidson; Y. Declais; U. Drost; O. Duliu; J. Dumarchez; T. Enqvist; A. Ereditato; F. von Feilitzsch; H. Fynbo; T. Gamble; G. Galvanin; A. Gendotti; W. Gizicki; M. Goger-Neff; U. Grasslin; D. Gurney; M. Hakala; S. Hannestad; M. Haworth; S. Horikawa; A. Jipa; F. Juget; T. Kalliokoski; S. Katsanevas; M. Keen; J. Kisiel; I. Kreslo; V. Kudryastev; P. Kuusiniemi; L. Labarga; T. Lachenmaier; J. C. Lanfranchi; I. Lazanu; T. Lewke; K. Loo; P. Lightfoot; M. Lindner; A. Longhin; J. Maalampi; M. Marafini; A. Marchionni; R. M. Margineanu; A. Markiewicz; T. Marrodan-Undagoita; J. E. Marteau; R. Matikainen; Q. Meindl; M. Messina; J. W. Mietelski; B. Mitrica; A. Mordasini; L. Mosca; U. Moser; G. Nuijten; L. Oberauer; A. Oprina; S. Paling; S. Pascoli; T. Patzak; M. Pectu; Z. Pilecki; F. Piquemal; W. Potzel; W. Pytel; M. Raczynski; G. Rafflet; G. Ristaino; M. Robinson; R. Rogers; J. Roinisto; M. Romana; E. Rondio; B. Rossi; A. Rubbia; Z. Sadecki; C. Saenz; A. Saftoiu; J. Salmelainen; O. Sima; J. Slizowski; K. Slizowski; J. Sobczyk; N. Spooner; S. Stoica; J. Suhonen; R. Sulej; M. Szarska; T. Szeglowski; M. Temussi; J. Thompson; L. Thompson; W. H. Trzaska; M. Tippmann; A. Tonazzo; K. Urbanczyk; G. Vasseur; A. Williams; J. Winter; K. Wojutszewska; M. Wurm; A. Zalewska; M. Zampaolo; M. Zito

2009-01-01

112

Congener-specific polychlorinated biphenyl patterns in eggs of aquatic birds from the Lower Laguna Madre, Texas  

Microsoft Academic Search

Eggs from four aquatic bird species nesting in the Lower Laguna Madre, Texas, were collected to determine differences and similarities in the accumulation of congener-specific polychlorinated biphenyls (PCBs) and to evaluate PCB impacts on reproduction. Because of the different toxicities of PCB congeners, it is important to know which congeners contribute most to total PCBs. The predominant PCB congeners were

Miguel A. Mora

1996-01-01

113

Eocene Plant Diversity at Laguna del Hunco and ... (The American Naturalist, vol. 165, no. 6, June 2005)  

E-print Network

Most notably, Neotropical plant diversity exceeds other tropical regions by factors of two to three. ... Hortorium, Department of Plant Biology, Cornell University, Ithaca, New York 14853. Submitted October 1, 2004

Wilf, Peter

114

Teachers' knowledge, misconceptions, and gaps regarding attention deficit hyperactivity disorder  

Microsoft Academic Search

This study was designed to analyze the knowledge, misconceptions, and gaps of 193 teachers regarding Attention Deficit Hyperactivity Disorder (ADHD), as a replication of a study carried out by Sciutto, Terjesen, and Bender in 2000. The teachers completed the Spanish version of the Knowledge of Attention Deficit Hyperactivity Disorder (KADDS) scale, adapted by the

Sonia Jarque Fernández; Raúl Tárraga Mínguez; Ana Miranda Casas

2007-01-01

115

Magnitude, geomorphologic response and climate links of lake level oscillations at Laguna Potrok Aike, Patagonian steppe (Argentina)  

E-print Network

Aike, Patagonian steppe (Argentina). P. Kliem, J.P. Buylaert, A. Hahn, C. Mayr, et al. Keywords: Holocene; Glacial. Abstract: Laguna Potrok Aike is a large maar lake located in the semiarid steppe

116

A CMOS fault extractor for inductive fault analysis  

Microsoft Academic Search

The inductive fault analysis (IFA) method is presented and a description is given of the CMOS fault extraction program FXT. The IFA philosophy is to consider the causes of faults (manufacturing defects) and then simulate these causes to find the faults that are likely to occur in a circuit. FXT automates IFA for a CMOS technology by generating a list

F. Joel Ferguson; John Paul Shen

1988-01-01

117

ZAMBEZI: a parallel pattern parallel fault sequential circuit fault simulator  

Microsoft Academic Search

Sequential circuit fault simulators use the multiple bits in a computer data word to accelerate simulation. We introduce, and implement, a new sequential circuit fault simulator, a parallel pattern parallel fault simulator, ZAMBEZI, which simultaneously simulates multiple faults with multiple vectors in one data word. ZAMBEZI is developed by enhancing the control flow, of existing parallel pattern algorithms. For a
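The word-level parallelism the abstract refers to can be shown with a minimal example: several test patterns are packed into the bits of one integer so a single bitwise operation evaluates a gate for all of them at once. This only illustrates the packing idea; ZAMBEZI's combined parallel-pattern, parallel-fault handling of sequential circuits is more involved, and the circuit and fault below are invented for the example.

```python
# Pack 8 test patterns for inputs (a, b) into the bits of one machine word so
# one bitwise AND simulates the gate for all patterns simultaneously. The
# stuck-at-0 fault on the AND output is modeled by forcing that word to zero.

patterns = [(a, b) for a in (0, 1) for b in (0, 1)] * 2          # 8 patterns
A = sum(a << i for i, (a, _) in enumerate(patterns))             # packed input a
B = sum(b << i for i, (_, b) in enumerate(patterns))             # packed input b

good = A & B            # fault-free AND for all 8 patterns in one operation
faulty = 0              # the same net stuck-at-0
detected = good ^ faulty
print(f"patterns detecting the fault: {detected:08b}")
```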

Minesh B. Amin; Bapiraju Vinnakota

1996-01-01

118

Transient fault modeling and fault injection simulation  

E-print Network

An accurate transient fault model is presented in this thesis. A 7-term exponential current upset model is derived from the results of a device-level, 3-dimensional, single-event-upset simulation. A curve-fitting algorithm is used to extract...
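The multi-term exponential current model mentioned in the abstract can be sketched as a sum of decaying exponentials fitted to a simulated transient. The version below uses three terms instead of seven to stay short, synthetic data in place of device-level simulation output, and scipy's generic curve_fit as a stand-in for the thesis's curve-fitting algorithm; all parameter values are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_exp(t, *params):
    """I(t) = sum_k A_k * exp(-t / tau_k); params = (A1, tau1, A2, tau2, ...)."""
    amps, taus = params[0::2], params[1::2]
    return sum(a * np.exp(-t / tau) for a, tau in zip(amps, taus))

# Synthetic single-event-upset current transient (amperes vs. seconds).
t = np.linspace(0, 2e-9, 200)
true = multi_exp(t, 1.0e-3, 50e-12, 4.0e-4, 300e-12, 1.0e-4, 900e-12)
noisy = true + 1e-5 * np.random.default_rng(1).normal(size=t.size)

# Rough initial guess for (A1, tau1, A2, tau2, A3, tau3), then fit.
p0 = [1e-3, 60e-12, 5e-4, 250e-12, 1e-4, 1e-9]
popt, _ = curve_fit(multi_exp, t, noisy, p0=p0, maxfev=20000)
print(popt)
```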

Yuan, Xuejun

1996-01-01

119

System fault diagnostics using fault tree analysis  

Microsoft Academic Search

Over the last 50 years advances in technology have led to an increase in the complexity and sophistication of systems. More complex systems can be harder to maintain and the root cause of a fault more difficult to isolate. Down-time resulting from a system failure can be dangerous or expensive depending on the type of system. In aircraft systems the

E. E. Hurdle; L. M. Bartlett; J. D. Andrews

2008-01-01

120

Characterization of Fault Zones  

NASA Astrophysics Data System (ADS)

There are currently three major competing views on the essential geometrical, mechanical, and mathematical nature of faults. The standard view is that faults are (possibly segmented and heterogeneous) Euclidean zones in a continuum solid. The continuum-Euclidean view is supported by seismic, gravity, and electromagnetic imaging studies; by successful modeling of observed seismic radiation, geodetic data, and changes in seismicity patterns; by detailed field studies of earthquake rupture zones and exhumed faults; and by recent high resolution hypocenter distributions along several faults. The second view focuses on granular aspects of fault structures and deformation fields. The granular view is supported by observations of rock particles in fault zone gouge; by studies of block rotations and the mosaic structure of the lithosphere (which includes the overall geometry of plate tectonics); by concentration of deformation signals along block boundaries; by correlation of seismicity patterns on scales several times larger than those compatible with a continuum framework; and by strongly heterogeneous wave propagation effects on the earth's surface. The third view is that faults are fractal objects with rough surfaces and branching geometry. The fractal view is supported by some statistical analysis of regional hypocenter locations; by long-range correlation of various measurements in geophysical boreholes; by the fact that observed power-law statistics of earthquakes are compatible with an underlying scale-invariant geometrical structure; by geometrical analysis of fault traces at the earth's surface; and by measurements of joint and fault surface topography. There are several overlaps between expected phenomenology in continuum-Euclidean, granular, and fractal frameworks of crustal deformation. As examples, highly heterogeneous seismic wavefields can be generated by granular media, by fractal structures, and by ground motion amplification around and scattering from an ensemble of Euclidean fault zones. A hierarchical granular structure may have fractal geometry. Power-law statistics of earthquakes can be generated by slip on one or more heterogeneous planar faults, by a fractal collection of faults, and by deformation of granular material. Each of the three frameworks can produce complex spatio-temporal patterns of earthquakes and faults. At present the existing data cannot distinguish unequivocally between the three different views on the nature of fault zones or determine their scale of relevance. However, in each observational category, the highest resolution results associated with mature large-displacement faults are compatible with the standard continuum-Euclidean framework. This can be explained by a positive feedback mechanism associated with strain weakening rheology and localization, which attracts the long-term evolution of faults toward progressive regularization and Euclidean geometry. A negative feedback mechanism associated with strain hardening during initial deformation phases and around persisting geometrical irregularities and conjugate sets of faults generates new fractures and granularity at different scales. We conclude that long-term deformation in the crust, including many aspects of the observed spatio-temporal complexity of earthquakes and faults, may be explained to first order within the continuum-Euclidean framework.

Ben-Zion, Y.; Sammis, C. G.

121

Origin and evolution of the Laguna Potrok Aike maar (Patagonia, Argentina)  

NASA Astrophysics Data System (ADS)

Laguna Potrok Aike, a maar lake in southern-most Patagonia, is located at about 110 m a.s.l. in the Pliocene to late Quaternary Pali Aike Volcanic Field (Santa Cruz, southern Patagonia, Argentina) at about 52°S and 70°W, some 20 km north of the Strait of Magellan and approximately 90 km west of the city of Rio Gallegos. The lake is almost circular and bowl-shaped with a 100 m deep, flat plain in its central part and an approximate diameter of 3.5 km. Steep slopes separate the central plain from the lake shoulder at about 35 m water depth. At present, strong winds permanently mix the entire water column. The closed lake basin contains a sub saline water body and has only episodic inflows with the most important episodic tributary situated on the western shore. Discharge is restricted to major snowmelt events. Laguna Potrok Aike is presently located at the boundary between the Southern Hemispheric Westerlies and the Antarctic Polar Front. The sedimentary regime is thus influenced by climatic and hydrologic conditions related to the Antarctic Circumpolar Current, the Southern Hemispheric Westerlies and sporadic outbreaks of Antarctic polar air masses. Previous studies demonstrated that closed lakes in southern South America are sensitive to variations in the evaporation/precipitation ratio and have experienced drastic lake level changes in the past causing for example the desiccation of the 75 m deep Lago Cardiel during the Late Glacial. Multiproxy environmental reconstruction of the last 16 ka documents that Laguna Potrok Aike is highly sensitive to climate change. Based on an Ar/Ar age determination, the phreatomagmatic tephra that is assumed to relate to the Potrok Aike maar eruption was formed around 770 ka. Thus Laguna Potrok Aike sediments contain almost 0.8 million years of climate history spanning several past glacial-interglacial cycles making it a unique archive for non-tropical and non-polar regions of the Southern Hemisphere. In particular, variations of the hydrological cycle, changes in eolian dust deposition, frequencies and consequences of volcanic activities and other natural forces controlling climatic and environmental responses can be tracked throughout time. Laguna Potrok Aike has thus become a major focus of the International Continental Scientific Drilling Program. Drilling operations were carried out within PASADO (Potrok Aike Maar Lake Sediment Archive Drilling Project) in late 2008 and penetrated ~100 m into the lacustrine sediment. Laguna Potrok Aike is surrounded by a series of subaerial paleo-shorelines of modern to Holocene age that reach up to 21 m above the 2003 AD lake level. An erosional unconformity which can be observed basin-wide along the lake shoulder at about 33 m below the 2003 AD lake level marks the lowest lake level reached during Late Glacial to Holocene times. A high-resolution seismic survey revealed a series of buried, subaquatic paleo-shorelines that hold a record of the complex transgressional history of the past approximately 6800 years, which was temporarily interrupted by two regressional phases from approximately 5800 to 5400 and 4700 to 4000 cal BP. Seismic reflection and refraction data provide insights into the sedimentary infill and the underlying volcanic structure of Laguna Potrok Aike. Reflection data show undisturbed, stratified lacustrine sediments at least in the upper ~100 m of the sedimentary infill. 
Two stratigraphic boundaries were identified in the seismic profiles (separating subunits I-ab, I-c and I-d) that are likely related to changes in lake level. Subunits I-ab and I-d are quite similar even though velocities are enhanced in subunit I-d. This might point at cementation in subunit I-d. Subunit I-c is restricted to the central parts of the lake and thins out laterally. A velocity-depth model calculated from seismic refraction data reveals a funnel-shaped structure embedded in the sandstone rocks of the surrounding Santa Cruz Formation. This funnel structure is filled by lacustrine sediments of up to 370 m in thickness. These can be separated into two

Gebhardt, A. C.; de Batist, M.; Niessen, F.; Anselmetti, F. S.; Ariztegui, D.; Ohlendorf, C.; Zolitschka, B.

2009-04-01

122

Fault zone structure of the Wildcat fault in Berkeley, California - Field survey and fault model test -  

NASA Astrophysics Data System (ADS)

In order to develop hydrologic characterization technology for fault zones, it is desirable to clarify the relationship between the geologic structure and hydrologic properties of fault zones. To this end, we are performing surface-based geologic and trench investigations, geophysical surveys, and borehole-based hydrologic investigations along the Wildcat fault in Berkeley, California to investigate the effect of fault zone structure on regional hydrology. The present paper outlines the fault zone structure of the Wildcat fault in Berkeley on the basis of results from trench excavation surveys. The approximately 20 - 25 km long Wildcat fault is located within the Berkeley Hills and extends northwest-southeast from Richmond to Oakland, subparallel to the Hayward fault. The Wildcat fault, which is a predominantly right-lateral strike-slip fault, steps right in a releasing bend in the Berkeley Hills region. A total of five trenches have been excavated across the fault to investigate the deformation structure of the fault zone in the bedrock. Along the Wildcat fault, multiple fault surfaces branch, bend, and run parallel to one another, forming a complicated shear zone. The shear zone is ~300 m in width, and the fault surfaces may be classified into the following two groups: 1) Fault surfaces offsetting middle Miocene Claremont Chert on the east against late Miocene Orinda Formation and/or San Pablo Group on the west. These NNW-SSE trending fault surfaces dip 50 - 60° to the southwest. Along the fault surfaces, fault gouge up to 1 cm wide and foliated cataclasite up to 60 cm wide can be observed. S-C fabrics of the fault gouge and foliated cataclasite show normal right-slip shear sense. 2) Fault surfaces forming a positive flower structure in Claremont Chert. These NW-SE trending fault surfaces are sub-vertical or steeply dipping. Along the fault surfaces, fault gouge up to 3 cm wide and foliated cataclasite up to 200 cm wide can be observed. S-C fabrics of the fault gouge and foliated cataclasite show reverse right-slip shear sense. We are performing sandbox experiments to investigate the three-dimensional kinematic evolution of fault systems caused by oblique-slip motion. The geometry of the Wildcat fault in the Berkeley Hills region shows a strong resemblance to our sandbox experimental model. Based on these geological and experimental data, we infer that the complicated fault systems developed dominantly within the fault step and that the tectonic regime switched from transpression to transtension during the middle to late Miocene along the Wildcat fault.

Ueta, K.; Onishi, C. T.; Karasaki, K.; Tanaka, S.; Hamada, T.; Sasaki, T.; Ito, H.; Tsukuda, K.; Ichikawa, K.; Goto, J.; Moriya, T.

2010-12-01

123

Fault detection and isolation  

NASA Technical Reports Server (NTRS)

In order for a current satellite-based navigation system (such as the Global Positioning System, GPS) to meet integrity requirements, there must be a way of detecting erroneous measurements, without help from outside the system. This process is called Fault Detection and Isolation (FDI). Fault detection requires at least one redundant measurement, and can be done with a parity space algorithm. The best way around the fault isolation problem is not necessarily isolating the bad measurement, but finding a new combination of measurements which excludes it.
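For readers unfamiliar with the parity space idea mentioned above, the sketch below illustrates it under simplifying assumptions: a linear measurement model y = Hx + noise with more measurements than unknowns, a residual formed in the left null space of H, and a hand-picked detection threshold. The model, threshold, and the exclude_and_retry helper are illustrative choices, not the specific GPS algorithm of this report.

import numpy as np

def parity_matrix(H):
    """Rows span the left null space of H (P @ H = 0)."""
    # Columns of U beyond rank(H) are orthogonal to the range of H.
    U, _, _ = np.linalg.svd(H)
    rank = np.linalg.matrix_rank(H)
    return U[:, rank:].T

def detect_fault(H, y, threshold):
    """Return the norm of the parity residual and a detection flag."""
    residual = parity_matrix(H) @ y
    norm = np.linalg.norm(residual)
    return norm, norm > threshold

def exclude_and_retry(H, y, threshold):
    """Rather than isolating the bad measurement, look for a combination
    of measurements whose parity residual passes the test."""
    m = len(y)
    for drop in range(m):
        keep = [i for i in range(m) if i != drop]
        if len(keep) <= H.shape[1]:
            continue  # no redundancy left in this subset
        norm, faulty = detect_fault(H[keep], y[keep], threshold)
        if not faulty:
            return keep, norm
    return None, None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = rng.standard_normal((6, 4))      # 6 measurements, 4 unknowns
    x = rng.standard_normal(4)
    y = H @ x + 0.01 * rng.standard_normal(6)
    y[2] += 5.0                          # inject a fault in measurement 2
    print(detect_fault(H, y, threshold=0.5))
    print(exclude_and_retry(H, y, threshold=0.5))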

Bernath, Greg

1994-01-01

124

Fault diagnosis of analog circuits  

Microsoft Academic Search

In this paper, various fault location techniques in analog networks are described and compared. The emphasis is on the more recent developments in the subject. Four main approaches for fault location are addressed, examined, and illustrated using simple network examples. In particular, we consider the fault dictionary approach, the parameter identification approach, the fault verification approach, and the approximation approach.

J. W. Bandler; A. E. Salama

1985-01-01

125

Triggered surface slips in southern California associated with the 2010 El Mayor-Cucapah, Baja California, Mexico, earthquake  

USGS Publications Warehouse

Triggered slip in the Yuha Desert area occurred along more than two dozen faults, only some of which were recognized before the April 4, 2010, El Mayor-Cucapah earthquake. From east to northwest, slip occurred in seven general areas: (1) in the Northern Centinela Fault Zone (newly named), (2) along unnamed faults south of Pinto Wash, (3) along the Yuha Fault (newly named), (4) along both east and west branches of the Laguna Salada Fault, (5) along the Yuha Well Fault Zone (newly revised name) and related faults between it and the Yuha Fault, (6) along the Ocotillo Fault (newly named) and related faults to the north and south, and (7) along the southeasternmost section of the Elsinore Fault. Faults that slipped in the Yuha Desert area include northwest-trending right-lateral faults, northeast-trending left-lateral faults, and north-south faults, some of which had dominantly vertical offset. Triggered slip along the Ocotillo and Elsinore Faults appears to have occurred only in association with the June 14, 2010 (Mw5.7), aftershock. This aftershock also resulted in slip along other faults near the town of Ocotillo. Triggered offset on faults in the Yuha Desert area was mostly less than 20 mm, with three significant exceptions, including slip of about 50–60 mm on the Yuha Fault, 40 mm on a fault south of Pinto Wash, and about 85 mm on the Ocotillo Fault. All triggered slips in the Yuha Desert area occurred along preexisting faults, whether previously recognized or not.

Rymer, Michael J.; Treiman, Jerome A.; Kendrick, Katherine J.; Lienkaemper, James J.; Weldon, Ray J.; Bilham, Roger; Wei, Meng; Fielding, Eric J.; Hernandez, Janis L.; Olson, Brian P.E.; Irvine, Pamela J.; Knepprath, Nichole; Sickler, Robert R.; Tong, Xiaopeng; Siem, Martin E.

2011-01-01

126

Salt lake Laguna de Fuente de Piedra (S-Spain) as Late Quaternary palaeoenvironmental archive  

NASA Astrophysics Data System (ADS)

This study deals with Late Quaternary palaeoenvironmental variability in Iberia reconstructed from terrestrial archives. In southern Iberia, endorheic basins of the Betic Cordilleras are relatively common and contain salt or fresh-water lakes due to subsurface dissolution of Triassic evaporites. Such precipitation- or ground-water-fed lakes (called Lagunas in Spanish) are vulnerable to changes in hydrology, climate or anthropogenic modifications. The largest Spanish salt lake, Laguna de Fuente de Piedra (Antequera region, S-Spain), has been investigated and serves as a palaeoenvironmental archive for the Late Pleistocene to Holocene time interval. Several sediment cores taken during drilling campaigns in 2012 and 2013 have revealed sedimentary sequences (up to 14 m in length) along the shoreline. A multi-proxy study, including sedimentology, geochemistry and physical properties (magnetic susceptibility), has been performed on the cores. The sedimentary history is highly variable: several-decimetre-thick silty variegated clay deposits, laminated evaporites, and even few-centimetre-thick massive gypsum crystals (i.e., selenites). XRF analysis was focussed on valuable palaeoclimatic proxies (e.g., S, Zr, Ti, and element ratios) to identify the composition and provenance of the sediments and to delineate palaeoenvironmental conditions. Initial age control has been established by AMS radiocarbon dating. The records start with approximately 2-3 m of Holocene deposits and reach back to the middle of MIS 3 (GS-3). The sequences contain changes in sedimentation rates as well as colour changes, which can be summarized as brownish-beige deposits at the top and more greenish-grey deposits below, as well as highly variegated lamination and selenites below ca. 6 m depth. The Younger Dryas, Bølling/Allerød, and the so-called Mystery Interval/Last Glacial Maximum have presumably been identified in the sediment cores and aligned to other climate records. In general, the cores of the Laguna de Fuente de Piedra show cyclic deposition including evaporitic sequences throughout the Holocene and Late Pleistocene, indicating higher fluxes and reworking of organic/inorganic carbon as well as other indicative proxy elements like Ti, Zr and the Ca/Sr ratio during Late Pleistocene times. In order to achieve a better understanding of the palaeoenvironmental history of the study area, further studies are planned that encompass biological/palaeontological indicators (e.g., pollen, diatoms) as well as other geochemical and isotopic techniques on evaporitic deposits, such as fluid inclusion analysis.

Höbig, Nicole; Melles, Martin; Reicherter, Klaus

2014-05-01

127

A “mesh” of crossing faults: Fault networks of southern California  

NASA Astrophysics Data System (ADS)

Detailed geologic mapping of active fault systems in the western Salton Trough and northern Peninsular Ranges of southern California makes it possible to expand the inventory of mapped and known faults by compiling and updating existing geologic maps, and by analyzing high resolution imagery, LIDAR, InSAR, relocated hypocenters and other geophysical datasets. A fault map is being compiled on Google Earth and will ultimately discriminate between a range of different fault expressions: from well-mapped faults to subtle lineaments and geomorphic anomalies. The fault map shows deformation patterns in both crystalline and basinal deposits and reveals a complex fault mesh with many curious and unexpected relationships. Key findings are: 1) Many fault systems have mutually interpenetrating geometries, are grossly coeval, and allow faults to cross one another. A typical relationship reveals a dextral fault zone that appears to be continuous at the regional scale. In detail, however, there are no continuous NW-striking dextral fault traces and instead the master dextral fault is offset in a left-lateral sense by numerous crossing faults. Left-lateral faults also show small offsets where they interact with right-lateral faults. Both fault sets show evidence of Quaternary activity. Examples occur along the Clark, Coyote Creek, Earthquake Valley and Torres Martinez fault zones. 2) Fault zones cross in other ways. There are locations where active faults continue across or beneath significant structural barriers. Major fault zones like the Clark fault of the San Jacinto fault system appear to end at NE-striking sinistral fault zones (like the Extra and Pumpkin faults) that clearly cross from the SW to the NE side of the projection of the dextral traces. Despite these blocking structures, there is good evidence for continuation of the dextral faults on the opposite sides of the crossing fault array. In some instances there is clear evidence (in deep microseismic alignments of hypocenters) that the master dextral fault zones pass beneath shallower crossing fault arrays above them, and this mechanism may transfer strain through the blocking zones. 3) The curvatures of strands of the Coyote Creek fault and the Elsinore fault are similar along their southeastern 60 km. The scale, locations and concavity of bends are so similar that their shape appears to be coordinated. The matching contractional and extensional bends suggest that originally straighter dextral fault zones may be deforming in response to coeval sinistral deformation between, beneath, and around them. 4) Deformation is strongly domainal, with one style or geometry of structure dominating in one area and another in an adjacent area. Boundaries may be abrupt. 5) There are drastic lateral changes in the width of damage zones adjacent to master faults. Outlines of the deformation related to some dextral fault zones resemble a snake that has ingested a squirming cat or a soccer ball. 6) A mesh of interconnected faults seems to transfer slip back and forth between structures. 7) Scarps are not necessarily more abundant on the long master faults than on connector or crossing faults. Much remains to be learned upon completion of the fault map.

Janecke, S. U.

2009-12-01

128

Practical Byzantine Fault Tolerance  

Microsoft Academic Search

Our growing reliance on online services accessible on the Internet demands highly-available systems that provide correct service without interruptions. Byzantine faults such as software bugs, operator mistakes, and malicious attacks are the major cause of service interruptions. This thesis describes a new replication algorithm, BFT, that can be used to build highly-available systems that tolerate Byzantine faults. It shows, for the first time, how

Miguel Castro

2001-01-01

129

Fault reactivation control on normal fault growth: an experimental study  

NASA Astrophysics Data System (ADS)

Field studies frequently emphasize how fault reactivation is involved in the deformation of the upper crust. However, this phenomenon is generally neglected (except in inversion models) in analogue and numerical models performed to study fault network growth. Using sand/silicon analogue models, we show how pre-existing discontinuities can control the geometry and evolution of a younger fault network. The models show that the reactivation of pre-existing discontinuities and their orientation control: (i) the evolution of the main fault orientation distribution through time, (ii) the geometry of relay fault zones, (iii) the geometry of small scale faulting, and (iv) the geometry and location of fault-controlled basins and depocenters. These results are in good agreement with natural fault networks observed in both the Gulf of Suez and Lake Tanganyika. They demonstrate that heterogeneities such as pre-existing faults should be included in models designed to understand the behavior and the tectonic evolution of sedimentary basins.

Bellahsen, Nicolas; Daniel, Jean Marc

2005-04-01

130

Response of shoal grass, Halodule wrightii, to extreme winter conditions in the Lower Laguna Madre, Texas  

USGS Publications Warehouse

Effects of a severe freeze on the shoal grass, Halodule wrightii, were documented through analysis of temporal and spatial trends in below-ground biomass. The coincidence of the second lowest temperature (-10.6°C) in 107 years of record, 56 consecutive hours below freezing, high winds and extremely low water levels exposed the Laguna Madre, TX, to the most severe cold stress in over a century. H. wrightii tolerated this extreme freeze event. Annual pre- and post-freeze surveys indicated that below-ground biomass estimated from volume was unaffected by the freeze event. Nor was there any post-freeze change in biomass among intertidal sites directly exposed to freezing air temperatures relative to subtidal sites, which remained submerged during the freezing period.

Hicks, D.W.; Onuf, C.P.; Tunnell, J.W.

1998-01-01

131

Water quality mapping of Laguna de Bay and its watershed, Philippines  

NASA Astrophysics Data System (ADS)

Laguna de Bay (or Laguna Lake) is the largest lake in the Philippines, with a surface area of 900 km2 and a watershed area of 2920 km2 (Santos-Borja, 2005). It is located on the southwest part of Luzon Island and its watershed contains 5 provinces, 49 municipalities and 12 cities, including parts of Metropolitan Manila. The water quality in Laguna de Bay has significantly deteriorated due to pollution from soil erosion, effluents from chemical industries, and household discharges. In this study, we performed multiple element analysis of water samples in the lake and its watershed for chemical mapping, which allows us to evaluate the regional distribution of elements including toxic heavy metals such as Cd, Pb and As. We collected water samples from 24 locations in Laguna de Bay and 160 locations along rivers in the watershed. The river sampling sites are mainly located downstream around the lake and cover urbanized to rural areas. We also collected well water samples from 17 locations, spring water samples from 10 locations, and tap water samples from 21 locations in order to compare their data with the river and lake samples and to assess the quality of household-use waters. The samples were collected during the dry season of the study area (March 13 - 17 and May 2 - 9, 2011). The analysis was performed at the Research Institute for Humanity and Nature (RIHN), Japan. The concentrations of the major components (Cl, NO3, SO4, Ca, Mg, Na, and K) dissolved in the samples were determined with an ion chromatograph (Dionex Corporation ICS-3000). We also analyzed major and trace elements (Li, B, Na, Mg, Al, Si, P, K, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Ga, Ge, As, Se, Rb, Sr, Y, Zr, Mo, Ag, Cd, Sn, Sb, Cs, Ba, La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu, W, Pb and U) with inductively coupled plasma-mass spectrometry (ICP-MS, Agilent Technologies 7500cx). The element concentrations of the rivers are characterized by remarkable regional variations. For example, heavy metals such as Ni, Cd and Pb are markedly high in the western region as compared to the eastern region, implying that the chemical variation reflects the urbanization of the western region. On the other hand, As content is relatively high in the south of the lake and in some inflowing rivers in the area. Higher concentrations of As are also observed in the spring water samples in the area. Therefore, the As in the area is probably of natural rather than anthropogenic origin. Although river water samples in the western watersheds have high concentrations of heavy metals, the lake water samples in the western area of the lake are not remarkably high in heavy metals. This inconsistency implies that the heavy metals delivered to the western lake by metal-enriched rivers have precipitated on the bottom of the lake. The polluted sediments may contaminate the benthos, increasing the risk of food contamination through bioaccumulation in the ecosystem.

Saito, S.; Nakano, T.; Shin, K.; Maruyama, S.; Miyakawa, C.; Yaota, K.; Kada, R.

2011-12-01

132

Foraminifera Assemblages in Laguna Torrecilla- Puerto Rico: an Environmental Micropaleontology Approach.  

NASA Astrophysics Data System (ADS)

Foraminiferal assemblages (Ammonia becarii cf. typica - A. becarii cf. tepida - Triloculina spp.) from 30 cm cores taken at Laguna Torrecilla, a polluted estuary, contain a relatively high occurrence of deformed tests (up to 13%). Such deformities (i.e., double tests, aberrant tests) are mostly found within the miliolids (Triloculina spp.), while the rotaliids (Ammonia spp.) show fewer deformities (i.e., extended proloculi, stunted tests). Preliminary results of heavy metal analysis (ACTLABS Laboratories, Canada) from bulk sediment samples show concentrations below toxicity levels except for copper. Copper concentrations (50-138 ppm) fall between the ERL (Effect Range Low) and ERM (Effect Range Median) values, representing possible to occasional detrimental effects on the aquatic environment. Organic matter content (loss-on-ignition) ranging from 10 to 23%, coupled with pyritized tests and framboidal pyrite, indicates low oxygen conditions. Ammonia becarii cf. typica and A. becarii cf. tepida showed no significant variation in size with sample depth. However, forma tepida was not found in the intervals with the highest organic concentrations. The abundance of A. becarii, a species highly resistant to environmental stresses, appears to be related to hypoxia events. Ammonia-Elphidium index values, a previously established indicator of hypoxia, are 80-100, reflecting the lack of Elphidium spp. Apparently, reduced oxygen conditions at Laguna Torrecilla exceeded the tolerance levels of Elphidium spp. In addition, diversity indices show that there has been temporal variability in the abundance and distribution of foraminifera. Foraminiferal assemblages coupled with diversity indices and organic matter content indicate that Torrecilla Lagoon has undergone several episodes of hypoxia. Such conditions could explain the relatively high percentage of test deformities, although elevated copper concentrations may be a compounding factor.

Martinez-Colon, M.; Hallock, P.

2006-12-01

133

Fair game : learning from La Salada  

E-print Network

This thesis seeks to expand the potential role of urban design for informal places under the process of formalization. More specifically, it examines the spatial principles that comprise the successful cultural and economic ...

Hu, Allison (Allison May)

2012-01-01

134

Unrest within a large rhyolitic magma system at Laguna del Maule volcanic field (Chile) from 2007 through 2013: geodetic measurements and numerical models  

NASA Astrophysics Data System (ADS)

The Laguna del Maule (LdM) volcanic field is remarkable for its unusual concentration of post-glacial rhyolitic lava coulées and domes that erupted between 25 and 2 thousand years ago. Covering more than 100 square kilometers, they erupted from 24 vents encircling a lake basin approximately 20 km in diameter on the range crest of the Andes. Geodetic measurements at the LdM volcanic field show rapid uplift since 2007 over an area more than 20 km in diameter that is centered on the western portion of the young rhyolite domes. By quantifying this active deformation and its evolution with time, we aim to investigate the storage conditions and dynamic processes in the underlying rhyolitic reservoir that drive the ongoing inflation. Analyzing interferometric synthetic aperture radar (InSAR) data, we track the rate of deformation. The rate of vertical uplift is negligible from 2003 to 2004, accelerates from at least 200 mm/yr in 2007 to more than 300 mm/yr in 2012, and then decreases to 200 mm/yr in early 2013. To describe the deformation, we use a simple model that approximates the source as an 8 km-by-6 km sill at a depth of 5 km, assuming a rectangular dislocation in a half space with uniform elastic properties. Between 2007 and 2013, the modeled sill increased in volume by at least 190 million cubic meters. Four continuous GPS stations installed in April 2012 around the lake confirm this extraordinarily high rate of vertical uplift and a substantial rate of radial expansion. As of June 2013, the rapid deformation persists in the InSAR and GPS data. To describe the spatial distribution of material properties at depth, we are developing a model using the finite element method. This approach can account for geophysical observations, including magneto-telluric measurements, gravity surveys, and earthquake locations. It can also calculate changes in the local stress field. In particular, a large increase in stress in the magma chamber roof could lead to the initiation and/or reactivation of ring faults. Potential evidence for fault reactivation is the detection of diffuse soil degassing of CO2, with concentrations reaching 5-7% near the center of deformation. We therefore consider several hypotheses for the processes driving the deformation, including: (1) an intrusion of basalt into the base of a melt-rich layer of rhyolite, leading to heating, bubble growth and a subsequent pressure increase in the reservoir, and/or (2) inflation of a hydrothermal system above the rhyolite melt layer.
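As a rough cross-check of the figures quoted in this abstract, the snippet below spreads the modeled 190 million cubic meter volume increase uniformly over the 8 km-by-6 km sill for the 2007-2013 interval; the uniform-opening assumption is ours and purely illustrative.

# Back-of-the-envelope check of the sill numbers quoted above, assuming the
# 190 million m^3 volume increase is spread uniformly over the 8 km x 6 km
# sill between 2007 and 2013 (an illustrative simplification).
sill_area_m2 = 8_000 * 6_000             # 4.8e7 m^2
delta_volume_m3 = 190e6                  # modeled volume increase
years = 2013 - 2007

total_opening_m = delta_volume_m3 / sill_area_m2
print(f"average total opening : {total_opening_m:.1f} m")             # ~4.0 m
print(f"average opening rate  : {total_opening_m / years:.2f} m/yr")  # ~0.66 m/yr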

Le Mevel, H.; Cordova, L.; Ali, S. T.; Feigl, K. L.; DeMets, C.; Williams-Jones, G.; Tikoff, B.; Singer, B. S.

2013-12-01

135

Fault displacement hazard for strike-slip faults  

USGS Publications Warehouse

In this paper we present a methodology, data, and regression equations for calculating the fault rupture hazard at sites near steeply dipping, strike-slip faults. We collected and digitized on-fault and off-fault displacement data for 9 global strike-slip earthquakes ranging from moment magnitude M 6.5 to M 7.6 and supplemented these with displacements from 13 global earthquakes compiled by Wesnousky (2008), who considers events up to M 7.9. Displacements on the primary fault fall off at the rupture ends and are often measured in meters, while displacements on secondary (off-fault) or distributed faults may measure a few centimeters up to more than a meter and decay with distance from the rupture. Probability of earthquake rupture is less than 15% for 200 m × 200 m cells and is less than 2% for 25 m × 25 m cells at distances greater than 200 m from the primary-fault rupture. Therefore, the hazard for off-fault ruptures is much lower than the hazard near the fault. Our data indicate that rupture displacements up to 35 cm can be triggered on adjacent faults at distances out to 10 km or more from the primary-fault rupture. An example calculation shows that, for an active fault which has repeated large earthquakes every few hundred years, fault rupture hazard analysis should be an important consideration in the design of structures or lifelines that are located near the principal fault, within about 150 m of well-mapped active faults with a simple trace and within 300 m of faults with poorly defined or complex traces.

Petersen, M.D.; Dawson, T.E.; Chen, R.; Cao, T.; Wills, C.J.; Schwartz, D.P.; Frankel, A.D.

2011-01-01

136

Development of a bridge fault extractor tool  

E-print Network

complete list of realistic two-node bridging faults. The DEFAM (Defect to Fault Mapper) fault extractor tool [23] was one of the first hierarchical fault extractors. The DEFAM tool consists of three main parts: 1) Hierarchy Identification 11 2...

Bhat, Nandan D.

2005-02-17

137

15,000-yr pollen record of vegetation change in the high altitude tropical Andes at Laguna Verde Alta, Venezuela  

Microsoft Academic Search

Pollen analysis of sediments from a high-altitude (4215 m), Neotropical (9°N) Andean lake was conducted in order to reconstruct local and regional vegetation dynamics since deglaciation. Although deglaciation commenced ~15,500 cal yr B.P., the area around the Laguna Verde Alta (LVA) remained a periglacial desert, practically unvegetated, until about 11,000 cal yr B.P. At this time, a lycopod assemblage bearing

Valentí Rull; Mark B. Abbott; Pratigya J. Polissar; Alexander P. Wolfe; Maximiliano Bezada; Raymond S. Bradley

2005-01-01

138

Human Impact Since Medieval Times and Recent Ecological Restoration in a Mediterranean Lake: The Laguna Zoñar, Southern Spain

Microsoft Academic Search

The multidisciplinary study of sediment cores from Laguna Zoñar (37°29′00″ N, 4°41′22″ W, 300 m a.s.l., Andalucía, Spain) provides a detailed record of environmental, climatic and anthropogenic changes in a Mediterranean watershed since Medieval times, and an opportunity to evaluate the lake restoration policies during the last decades. The paleohydrological reconstructions show fluctuating lake levels since the end of the Medieval

Blas L. Valero-Garcés; Penélope González-Sampériz; Ana Navas; Javier Machín; Pilar Mata; Antonio Delgado-Huertas; Roberto Bao; Ana Moreno; José S. Carrión; Antje Schwalb; Antonio González-Barrios

2006-01-01

139

Stresses and Faulting  

NSDL National Science Digital Library

This module is designed for students in an introductory structural geology course. While key concepts are described here, it is assumed that the students will have access to a good textbook to augment the information presented here. Learning goals: (1) Understand the role of gravity and rock properties in producing stresses in the shallow Earth. (2) Graphically represent stress states using Mohr diagrams. (3) Determine failure criteria from the results of laboratory experiments. (4) Explore the interaction of gravity-induced and tectonic stresses on fault formation. (5) Apply models of fault formation to predict fault behavior in two natural settings: San Onofre Beach in southern California and Canyonlands National Park in Utah. The module is implemented entirely using Microsoft Excel. This program was selected due to its widespread availability and relative ease of use. It is assumed that students are familiar with using equations and graphing tools in Excel.

Linda Reinen

140

Transition-fault test generation  

E-print Network

One way to detect these timing defects is to apply test patterns to the integrated circuit that are generated using the transition-fault model. Unfortunately, industry's current transition-fault test generation schemes produce test sets that are too...

Cobb, Bradley Douglas

2013-02-22

141

A 6000-year record of ecological and hydrological changes from Laguna de la Leche, north coastal Cuba  

NASA Astrophysics Data System (ADS)

Laguna de la Leche, north coastal Cuba, is a shallow (~3 m), oligohaline (~2.0-4.5‰) coastal lake surrounded by mangroves and cattail stands. A 227-cm core was studied using loss-on-ignition, pollen, calcareous microfossils, and plant macrofossils. From ~6200 to ~4800 cal yr BP, the area was an oligohaline lake. The period from ~4800 to ~4200 cal yr BP saw higher water levels and a freshened system; these changes are indicated by an increase in the regional pollen rain, as well as by the presence of charophyte oogonia and an increase in freshwater gastropods (Hydrobiidae). By ~4000 cal yr BP, an open mesohaline lagoon had formed; an increase in salt-tolerant foraminifers suggests that the water level increase was driven by relative sea level rise. The initiation of Laguna de la Leche correlates with a shift to wetter conditions as indicated in pollen records from the southeastern United States (e.g., Lake Tulane). This synchronicity suggests that sea level rise caused middle Holocene environmental change region-wide. Two other cores sampled from mangrove swamps in the vicinity of Laguna de la Leche indicate that a major expansion of mangroves was underway by ~1700 cal yr BP.

Peros, Matthew C.; Reinhardt, Eduard G.; Davis, Anthony M.

2007-01-01

142

Fault-Scarp Degradation  

NSDL National Science Digital Library

In this exercise, students investigate the evolution of Earth's surface over time, as governed by the balance between constructional (tectonic) processes and destructional (erosional) processes. Introductory materials explain the processes of degradation, including the concepts of weathering-limited versus transport-limited slopes, and diffusion modeling. Using the process of diffusion modeling, students will determine how a slope changes through four 100-year time steps, calculate gradient angles for a fault scarp, and compare parameters calculated for two fault scarps, attempting to determine the age of the scarp created by the older, unknown earthquake. Example problems, study questions, and a bibliography are provided.
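The exercise described above rests on the linear hillslope diffusion equation, dh/dt = kappa * d2h/dx2. The sketch below (in Python rather than the module's Excel workbook) steps an initially vertical scarp through four 100-year increments; the diffusivity, grid spacing, and scarp height are illustrative assumptions, not values from the exercise.

import numpy as np

def diffuse_scarp(height, kappa=1.0e-3, dx=1.0, dt=100.0, steps=4):
    """Evolve a 1-D scarp profile under linear diffusion,
    dh/dt = kappa * d2h/dx2, using explicit finite differences.
    kappa in m^2/yr, dx in m, dt in yr (all values illustrative)."""
    h = height.astype(float).copy()
    n_sub = 100                          # sub-steps keep the explicit scheme stable
    for _ in range(steps * n_sub):
        d2h = np.zeros_like(h)
        d2h[1:-1] = (h[2:] - 2 * h[1:-1] + h[:-2]) / dx**2
        h += kappa * (dt / n_sub) * d2h  # interior nodes only; ends held fixed
    return h

if __name__ == "__main__":
    x = np.arange(0.0, 50.0, 1.0)
    initial = np.where(x < 25.0, 0.0, 2.0)   # 2 m high, initially vertical scarp
    final = diffuse_scarp(initial, steps=4)  # profile after 4 x 100 yr
    max_gradient = np.max(np.abs(np.diff(final)))  # steepest slope, m per m
    print(f"maximum scarp gradient after 400 yr: "
          f"{np.degrees(np.arctan(max_gradient)):.1f} deg")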

Nicholas Pinter

143

Computer hardware fault administration  

DOEpatents

Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
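A minimal sketch of the routing idea described in this abstract is given below: when a link on the primary network is flagged defective, traffic between nodes falls back to the independent second network. The toy topologies, node labels, and breadth-first routing are illustrative assumptions, not the patented implementation.

from collections import deque

def bfs_path(adjacency, src, dst):
    """Shortest hop path between src and dst, or None if unreachable."""
    prev, frontier = {src: None}, deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nbr in adjacency[node]:
            if nbr not in prev:
                prev[nbr] = node
                frontier.append(nbr)
    return None

def route(network_a, network_b, defective_links, src, dst):
    """Prefer network A, but avoid defective links by falling back to network B."""
    pruned_a = {n: [m for m in nbrs
                    if (n, m) not in defective_links and (m, n) not in defective_links]
                for n, nbrs in network_a.items()}
    return bfs_path(pruned_a, src, dst) or bfs_path(network_b, src, dst)

if __name__ == "__main__":
    net_a = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # primary network: a simple chain
    net_b = {0: [2], 1: [3], 2: [0, 3], 3: [1, 2]}   # independent second network
    print(route(net_a, net_b, defective_links={(1, 2)}, src=0, dst=3))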

Archer, Charles J. (Rochester, MN); Megerian, Mark G. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

2010-09-14

144

Fault tolerant linear actuator  

DOEpatents

In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.

Tesar, Delbert

2004-09-14

145

Fault Rate Acceleration and Low Angle Normal Faulting: The Hunter Mountain Fault, California.  

NASA Astrophysics Data System (ADS)

The Panamint Valley, Hunter Mountain, and Saline Range (PHS) faults form, together with the Death Valley and Owens Valley faults, one of the three major fault zones within the Eastern California Shear Zone (ECSZ). The ECSZ is the most active fault system bounding the Basin and Range to the southwest, with approximately 10 mm/yr of cumulative slip along strike-slip and trans-tensional segments. Interferometric Synthetic Aperture Radar (InSAR) is a geodetic technique that allows measurement of ground motion at mm/yr accuracy over large areas with dense measurement sampling. We processed a large volume of data to investigate ground motion in the PHS fault system and to shed light on interseismic strain accumulation and its relation to fault geometry. Results indicate a high strain rate over the Hunter Mountain fault, possibly reflecting acceleration of the fault's slip rate since its inception. The locking depth of the fault inferred from elastic modeling of interseismic strain accumulation is on the order of a few kilometers, significantly shallower than for neighboring faults. The shallow locking depth inferred for the Hunter Mountain fault corresponds to the extension at depth of two suggested bounding low-angle normal faults. This finding reinforces recent field observations of possible activity of the low-angle normal fault system.
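The "elastic modeling of interseismic strain accumulation" mentioned above is commonly done with the screw-dislocation (arctangent) profile of Savage and Burford (1973); the sketch below shows how a shallower locking depth concentrates the velocity gradient, and hence the strain rate, near the fault. The slip rate and locking depths used are illustrative assumptions, not results of this study.

import numpy as np

def interseismic_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
    """Surface fault-parallel velocity across a locked strike-slip fault,
    v(x) = (s / pi) * arctan(x / D)  (Savage & Burford, 1973)."""
    return (slip_rate_mm_yr / np.pi) * np.arctan(x_km / locking_depth_km)

if __name__ == "__main__":
    x = np.linspace(-50, 50, 11)                 # km from the fault trace
    # Illustrative values only: a few mm/yr of deep slip, compared for a
    # shallow (2 km) and a deeper (10 km) locking depth.
    for depth in (2.0, 10.0):
        v = interseismic_velocity(x, slip_rate_mm_yr=4.0, locking_depth_km=depth)
        print(f"D = {depth:4.1f} km :", np.round(v, 2))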

Gourmelen, N.; Amelung, F.; Dixon, T.; Manzo, M.; Casu, F.; Pepe, A.; Lanari, R.

2008-05-01

146

Analyzing Fault/Fracture Patterns  

NSDL National Science Digital Library

During a lab period, students go out in the field to an area that contains at least two fault/fracture sets. Students measure orientations of faults and make observations about the relationships between different fault sets. After the field trip, the students compile their field data, plot it on a stereonet, and write up a brief report. In this report students use their field observations and stereonet patterns to determine whether faults are related or unrelated to each other.

Jamie Levine

147

Examine animations of fault motion  

NSDL National Science Digital Library

Developed for high school students, this Earth science resource provides animations of each of four different fault types: normal, reverse, thrust, and strike-slip faults. Each animation has its own set of movie control buttons, and arrows in each animation indicate the direction of force that causes that particular kind of fault. The introductory paragraph defines the terms fault plane, hanging wall, and footwall--features that are labeled at the end of the appropriate animations. Copyright 2005 Eisenhower National Clearinghouse

TERC. Center for Earth and Space Science Education

2003-01-01

148

Fault reconstruction from sensor and actuator failures  

Microsoft Academic Search

Many fault detection filters have been developed to detect and identify sensor and actuator faults by using analytical redundancy. In this paper, an approach for reconstructing sensor and actuator faults from the residual generated by the fault detection filter is proposed. The transfer matrix from the faults to the residual is derived in terms of the eigenvalues of the fault

Robert H. Chen; Jason L. Speyer

2001-01-01

149

Optimal stochastic multiple-fault detection filter  

Microsoft Academic Search

A class of robust fault detection filters is generalized from detecting a single fault to detecting multiple faults. This generalization is called the optimal stochastic multiple-fault detection filter since, in the formulation, the unknown fault amplitudes are modeled as white noise. The residual space of the filter is divided into several subspaces and each subspace is sensitive to only one fault (target

Robert H. Chen; Jason L. Speyer

1999-01-01

150

Postglacial eruptive history of Laguna del Maule volcanic field in Chile, from fallout stratigraphy in Argentina  

NASA Astrophysics Data System (ADS)

The Laguna del Maule (LdM) volcanic field, which surrounds the 54-km2 lake of that name, covers ~500 km2 of rugged glaciated terrain with Quaternary lavas and tuffs that extend for 40 km westward from the Argentine frontier and 30 km N-S from the Rio Campanario to Laguna Fea in the Southern Volcanic Zone of Chile. Geologic mapping (Hildreth et al., 2010) shows that at least 130 separate vents are part of the LdM field, from which >350 km3 of products have erupted since 1.5 Ma. These include a ring of 36 postglacial rhyolite and rhyodacite coulees and domes that erupted from 24 separate vents and encircle the lake, suggesting a continued large magma reservoir. Because the units are young, glassy, and do not overlap, only a few ages had been determined and the sequence of most of the postglacial eruptions had not previously been established. However, most of these postglacial silicic eruptions were accompanied by explosive eruptions of pumice and ash. Recent investigations downwind in Argentina are combining stratigraphy, grain-size analysis, chemistry, and radiocarbon dating to correlate the tephra with eruptive units mapped in Chile, assess fallout distribution, and establish a time-stratigraphic framework for the postglacial eruptions at Laguna del Maule. Two austral summer field seasons with a tri-country collaboration among the geological surveys of the U.S., Chile, and Argentina, have now established that a wide area east of the volcanic field was blanketed by at least 3 large explosive eruptions from LdM sources, and by at least 3 more modest, but still significant, eruptions. In addition, an ignimbrite from the LdM Barrancas vent complex on the border in the SE corner of the lake traveled at least 15 km from source and now makes up a pyroclastic mesa that is at least 40 m thick. This ignimbrite (72-75% SiO2) preceded a series of fall deposits that are correlated with eruption of several lava flows that built the Barrancas complex. Recent 14C dates suggest that most of the preserved LdM fallout eruptions were between 7 ka and 2 ka. However, the oldest and perhaps largest fall unit yet recognized is correlated with the Los Espejos rhyolite lava flow that dammed the lake and yields a 40Ar/39Ar age of 23 ka. Pumice clasts as large as 8.5 cm and lithics to 4 cm were measured 32 km ENE of source. It is the only high-silica rhyolite (75.5-76% SiO2) fall layer yet found, correlates chemically with the Los Espejos rhyolite lava flow, and includes distinctive olivine-bearing lithics that are correlated with mafic lavas which underlie the Espejos vent. Extremely frothy pumice found near the vent is also consistent with the bubble-wall shards and reticulite pumice distinctive of the correlative fall deposit. Another large rhyolite fall deposit (74.5% SiO2), 4 m thick 22 km E of source, has pumice clasts to 9.5 cm and includes ubiquitous coherent clasts of fine, dense soil that suggests it erupted through wet ground; 14C dates (uncalibrated) yield ages ~7 ka. Stratigraphic details suggest that pulses of fallout were accompanied by small pyroclastic flows. Ongoing field and lab work continues to build the LdM postglacial eruptive story. The numerous postglacial explosive eruptions from the LdM field are of significant concern because of ongoing 33 cm/year uplift along the western lakeshore, as measured by InSAR and verified by GPS.

Fierstein, J.; Sruoga, P.; Amigo, A.; Elissondo, M.; Rosas, M.

2012-12-01

151

New high impedance fault detection  

Microsoft Academic Search

Detecting high impedance faults in power systems is a difficult task, and if such a fault is not cleared the power system may be damaged. In this paper a new method for high impedance fault detection is proposed. The proposed method is based on the chaotic Duffing function. The paper will propose the appropriate formula

A. Siadatan; H. Kazemi Karegar; V. Najmi

2010-01-01

152

A decentralized fault detection filter  

Microsoft Academic Search

We introduce the decentralized fault detection filter, which is the structure that results from merging decentralized estimation theory with a game-theoretic fault detection filter. A decentralized approach may be the ideal way to monitor the health of large-scale systems for faults, since it decomposes the problem into (potentially smaller) "local" problems and then blends the "local" results into a "global"

W. H. Chung; Jason L. Speyer

1998-01-01

153

Tolerating transient faults in MARS  

Microsoft Academic Search

The concepts of transient fault handling in the MARS architecture are discussed. After an overview of the MARS architecture, the mechanisms for the detection of transient faults are discussed in detail. In addition to extensive checks in the hardware and in the operating system, time-redundant execution of application tasks is proposed for the detection of transient faults. The time difference

H. Kopetz; H. Kantz; G. Grunsteidl; P. Puschner; J. Reisinger

1990-01-01

154

Diagnosable Systems for Intermittent Faults  

Microsoft Academic Search

Diagnosable systems composed of interconnected units which are capable of testing each other have been studied primarily from the point of view of permanent faults. Along such lines, designs have been proposed, and necessary and sufficient conditions for the diagnosis of such faults have been established. In this paper, we study the intermittent fault diagnosis capabilities of such systems. Necessary

Sivanarayana Mallela; Gerald M. Masson

1978-01-01

155

Diatom diversity and paleoenvironmental changes in Laguna Potrok Aike, Patagonia: the ~ 50 kyr PASADO sediment record  

NASA Astrophysics Data System (ADS)

Laguna Potrok Aike is a maar lake located in the southernmost Argentinean Patagonia, in the province of Santa Cruz. Being one of the few permanent lakes in the area, it provides an exceptional and continuous sedimentary record. The sediment cores from Laguna Potrok Aike, obtained in the framework of the ICDP-sponsored project PASADO (Potrok Aike Maar Lake Sediment Archive Drilling Program), were sampled for diatom analysis in order to reconstruct a continuous history of hydrological and climatic changes since the Late Pleistocene. Diatoms are widely used to characterize and often quantify the impact of past environmental changes in aquatic systems. We use variations in diatom concentration and in their dominant assemblages, combined with other proxies, to track these changes. Diatom assemblages were analyzed on the composite core 5022-2CP with a multi-centennial time resolution. The total composite profile length of 106.09 mcd (meters composite depth) was reduced to 45.80 m cd-ec (event-corrected composite profile) of pelagic deposits once gaps, reworked sections, and tephra deposits were removed. This continuous deposit spans the last ca. 51.2 cal. ka BP. Previous diatomological analysis of the core catcher samples of core 5022-1D allowed us to determine the dominant diatom assemblages in this lake and to select the sections where higher temporal resolution was needed. Over 200 species, varieties and forms were identified in the sediment record, including numerous endemic species and others that may be new to science. Among these, a new species has been described: Cymbella gravida sp. nov. Recasens and Maidana. The quantitative analysis of the sediment record reveals diatom abundances reaching 460 million valves per gram of dry sediment, with substantial fluctuations through time. Variations in the abundance and species distribution point toward lake level variations, changes in nutrient input or even periods of ice cover on the lake. The top meters of the record reveal a shift in the phytoplankton composition, corresponding to the previously documented salinization of the water and the lake level drop, indicators of warming temperatures and lower moisture availability during the early and middle Holocene. The new results presented here on diatom diversity and distribution in the Glacial to Late Glacial sections of the record bring much-needed information on the previously poorly known paleolimnology of this lake for that time period.

Recasens, C.; Ariztegui, D.; Maidana, N. I.

2012-12-01

156

Earthquakes and fault creep on the northern San Andreas fault  

USGS Publications Warehouse

At present there is an absence of both fault creep and small earthquakes on the northern San Andreas fault, which had a magnitude 8 earthquake with 5 m of slip in 1906. The fault has apparently been dormant since the 1906 earthquake. One possibility is that the fault is 'locked' in some way and only produces great earthquakes. An alternative possibility, presented here, is that the lack of current activity on the northern San Andreas fault is because of a lack of sufficient elastic strain after the 1906 earthquake. This is indicated by geodetic measurements at Fort Ross in 1874, 1906 (post-earthquake), and 1969, which show that the strain accumulation in 1969 (69 × 10⁻⁶ engineering strain) was only about one-third of the strain release (rebound) in the 1906 earthquake (200 × 10⁻⁶ engineering strain). The large difference in seismicity before and after 1906, with many strong local earthquakes from 1836 to 1906 but only a few strong earthquakes from 1906 to 1976, also indicates a difference in elastic strain. The geologic characteristics (serpentine, fault straightness) of most of the northern San Andreas fault are very similar to the characteristics of the fault south of Hollister, where fault creep is occurring. Thus, the current absence of fault creep on the northern fault segment is probably due to a lack of sufficient elastic strain at the present time. © 1979.

Nason, R.

1979-01-01

157

Row fault detection system  

DOEpatents

An apparatus and program product check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.

Archer, Charles Jens (Rochester, MN); Pinnow, Kurt Walter (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian Edward (Rochester, MN)

2010-02-23

158

Row fault detection system  

DOEpatents

An apparatus, program product and method check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.

Archer, Charles Jens (Rochester, MN); Pinnow, Kurt Walter (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian Edward (Rochester, MN)

2012-02-07

159

Row fault detection system  

DOEpatents

An apparatus, program product and method checks for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.

Archer, Charles Jens (Rochester, MN); Pinnow, Kurt Walter (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian Edward (Rochester, MN)

2008-10-14

160

Abnormal fault-recovery characteristics of the fault-tolerant multiprocessor uncovered using a new fault-injection methodology  

NASA Astrophysics Data System (ADS)

An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.

Padilla, Peter A.

1991-03-01

161

Fault diagnosis of analog circuits  

SciTech Connect

In this paper, various fault location techniques in analog networks are described and compared. The emphasis is on the more recent developments in the subject. Four main approaches for fault location are addressed, examined, and illustrated using simple network examples. In particular, we consider the fault dictionary approach, the parameter identification approach, the fault verification approach, and the approximation approach. Theory and algorithms that are associated with these approaches are reviewed and problems of their practical application are identified. Associated with the fault dictionary approach we consider fault dictionary construction techniques, methods of optimum measurement selection, different fault isolation criteria, and efficient fault simulation techniques. Parameter identification techniques that either utilize linear or nonlinear systems of equations to identify all network elements are examined very thoroughly. Under fault verification techniques we discuss node-fault diagnosis, branch-fault diagnosis, subnetwork testability conditions as well as combinatorial techniques, the failure bound technique, and the network decomposition technique. For the approximation approach we consider probabilistic methods and optimization-based methods. The artificial intelligence technique and the different measures of testability are also considered. The main features of the techniques considered are summarized in a comparative table. An extensive, but not exhaustive, bibliography is provided.
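To make the fault dictionary approach concrete, the sketch below pre-simulates a response signature for each candidate fault of a toy circuit and locates a fault by nearest-signature matching. The resistive-divider "circuit", fault list, and least-squares isolation criterion are illustrative assumptions, not the specific techniques surveyed in the paper.

import numpy as np

def build_fault_dictionary(simulate, fault_list, test_inputs):
    """Pre-compute a signature (vector of measured responses) for the
    nominal circuit and for each candidate fault."""
    dictionary = {"nominal": simulate(None, test_inputs)}
    for fault in fault_list:
        dictionary[fault] = simulate(fault, test_inputs)
    return dictionary

def locate_fault(dictionary, measured):
    """Return the dictionary entry whose signature is closest
    (least squares) to the measured response."""
    return min(dictionary, key=lambda k: np.sum((dictionary[k] - measured) ** 2))

# --- illustrative use with a toy "circuit": a resistive divider -------------
def divider_response(fault, v_inputs, r1=1e3, r2=2e3):
    """Output voltages of a divider; a fault replaces one resistor value."""
    if fault == "R1_open":
        r1 = 1e9
    elif fault == "R2_short":
        r2 = 1.0
    return np.array([v * r2 / (r1 + r2) for v in v_inputs])

if __name__ == "__main__":
    inputs = [1.0, 2.0, 5.0]
    dictionary = build_fault_dictionary(divider_response,
                                        ["R1_open", "R2_short"], inputs)
    measured = divider_response("R2_short", inputs) + 0.001 * np.random.randn(3)
    print(locate_fault(dictionary, measured))   # expected: 'R2_short'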

Bandler, J.W.; Salama, A.E.

1985-08-01

162

Estimating floodplain sedimentation in the Laguna de Santa Rosa, Sonoma County, CA  

USGS Publications Warehouse

We present a conceptual and analytical framework for predicting the spatial distribution of floodplain sedimentation for the Laguna de Santa Rosa, Sonoma County, CA. We assess the role of the floodplain as a sink for fine-grained sediment and investigate concerns regarding the potential loss of flood storage capacity due to historic sedimentation. We characterized the spatial distribution of sedimentation during a post-flood survey and developed a spatially distributed sediment deposition potential map that highlights zones of floodplain sedimentation. The sediment deposition potential map, built using raster files that describe the spatial distribution of relevant hydrologic and landscape variables, was calibrated using 2 years of measured overbank sedimentation data and verified using longer-term rates determined using dendrochronology. The calibrated floodplain deposition potential relation was used to estimate an average annual floodplain sedimentation rate (3.6 mm/year) for the ~11 km2 floodplain. This study documents the development of a conceptual model of overbank sedimentation, describes a methodology to estimate the potential for various parts of a floodplain complex to accumulate sediment over time, and provides estimates of short and long-term overbank sedimentation rates that can be used for ecosystem management and prioritization of restoration activities.

Curtis, Jennifer A.; Flint, Lorraine E.; Hupp, Cliff R.

2013-01-01

163

Vegetation and climate history from Laguna de Río Seco, Sierra Nevada, southern Spain  

NASA Astrophysics Data System (ADS)

The largest mountain range in southern Spain - the Sierra Nevada - is an immense landscape with a rich biological and cultural heritage. Rising to 3,479 m at the summit of Mulhacén, the range was extensively glaciated during the late Pleistocene. Subsequent melting of cirque glaciers allowed formation of numerous small lakes and wetlands. One south-facing basin contains Laguna de Río Seco, a small lake at ca. 3020 m elevation, presently above potential treeline. Pollen analysis of sediment cores documents over 11,000 calendar years of vegetation change there. The early record, to ca. 5,700 cal yr BP, is dominated by pine pollen, with birch, deciduous oak, and grass, and an understory of shrub types. Pine trees probably never grew at the elevation of the lake, but aquatic microfossils indicate lake levels were highest prior to ca. 7,800 cal yr BP, perhaps as a result of heavy winter precipitation and early Holocene expansion of the ITCZ. Drier conditions commenced by 5,700 cal yr BP, shown by declines in wetland pollen and increases in the high-elevation steppe shrubs more common today (juniper, sage, and others). The local and regional impact of humans increased substantially after ca. 2700 years ago, with the regional loss of pine forest or woodland, increases in pollen and spore types associated with pasturing, and olive cultivation at lower elevations.

Anderson, R. S.; Jimenez-Moreno, G.

2010-12-01

164

Earthquakes and Fault Lines  

NSDL National Science Digital Library

The purpose of this activity is for students to find the locations of the fault lines in Utah and understand that fault lines are often earthquake zones. They will learn how often earthquakes are expected to occur, when Utah is due for another one, and where the next one is expected to occur. This meets the Utah Core Standard for fifth grade science: Standard 2: Students will understand that volcanoes, earthquakes, uplift, weathering, and erosion reshape Earth's surface. Objective 1,c: Explain the relationship between time and specific geological changes. Objective 2: Explain how volcanoes, earthquakes, and uplift affect Earth's surface. Situation: You are from Montana, and your dad just got a new job in Northern Utah. Your family will have to move there. Your parents have heard that Utah has the potential for major earthquakes, and don't know where to build your new house. They ...

Miss Bennington

2010-04-26

165

Managing Fault Management Development  

NASA Technical Reports Server (NTRS)

As the complexity of space missions grows, development of Fault Management (FM) capabilities is an increasingly common driver for significant cost overruns late in the development cycle. FM issues and the resulting cost overruns are rarely caused by a lack of technology, but rather by a lack of planning and emphasis by project management. A recent NASA FM Workshop brought together FM practitioners from a broad spectrum of institutions, mission types, and functional roles to identify the drivers underlying FM overruns and recommend solutions. They identified a number of areas in which increased program and project management focus can be used to control FM development cost growth. These include up-front planning for FM as a distinct engineering discipline; managing different, conflicting, and changing institutional goals and risk postures; ensuring the necessary resources for a disciplined, coordinated approach to end-to-end fault management engineering; and monitoring FM coordination across all mission systems.

McDougal, John M.

2010-01-01

166

Randomness fault detection system  

NASA Technical Reports Server (NTRS)

A method and apparatus are provided for detecting a fault on a power line carrying a line parameter such as a load current. The apparatus monitors and analyzes the load current to obtain an energy value. The energy value is compared to a threshold value stored in a buffer. If the energy value is greater than the threshold value a counter is incremented. If the energy value is greater than a high value threshold or less than a low value threshold then a second counter is incremented. If the difference between two subsequent energy values is greater than a constant then a third counter is incremented. A fault signal is issued if the counter is greater than a counter limit value and either the second counter is greater than a second limit value or the third counter is greater than a third limit value.
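A minimal sketch of the counter-and-threshold logic paraphrased from this abstract is shown below; every numeric threshold, limit, and the example current traces are placeholders rather than values from the patent.

def check_for_fault(energies, threshold, high, low, delta_limit,
                    count_limit, count2_limit, count3_limit):
    """Counter logic paraphrased from the abstract above; all numeric
    limits are placeholders, not values from the patent."""
    c1 = c2 = c3 = 0
    previous = None
    for e in energies:
        if e > threshold:
            c1 += 1                        # energy above the buffered threshold
        if e > high or e < low:
            c2 += 1                        # energy outside the high/low band
        if previous is not None and abs(e - previous) > delta_limit:
            c3 += 1                        # large jump between successive values
        previous = e
    return c1 > count_limit and (c2 > count2_limit or c3 > count3_limit)

if __name__ == "__main__":
    # A quiet load current followed by an erratic, arcing-like episode.
    quiet = [1.0, 1.1, 0.9, 1.0, 1.05]
    arcing = [1.0, 3.5, 0.2, 4.0, 0.1, 3.8]
    for label, trace in (("quiet", quiet), ("arcing", arcing)):
        fault = check_for_fault(trace, threshold=1.5, high=3.0, low=0.3,
                                delta_limit=2.0, count_limit=2,
                                count2_limit=2, count3_limit=2)
        print(label, "->", "fault" if fault else "no fault")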

Russell, B. Don (Inventor); Aucoin, B. Michael (Inventor); Benner, Carl L. (Inventor)

1996-01-01

167

This paper compares two fault injection techniques: scan chain implemented fault injection (SCIFI), i.e. fault  

E-print Network

This paper compares two fault injection techniques: scan chain implemented fault injection (SCIFI), i.e. fault injection in a physical system using built-in test logic, and fault injection in a VHDL software simulation model of a system. The fault injections were used to evaluate the error

Karlsson, Johan

168

Fault tree analysis is widely used in industry for fault diagnosis. The diagnosis of incipient or `soft' faults is  

E-print Network

Fault tree analysis is widely used in industry for fault diagnosis. The diagnosis of incipient results based on a neural network approach. INTRODUCTION Fault tree analysis (FTA) and fault tree used in systems safety analysis for over 30 years. During this time the fault tree method has been used

Madden, Michael

169

Normal fault growth, displacement localisation and the evolution of normal fault populations: the Hammam Faraun fault block, Suez rift, Egypt  

NASA Astrophysics Data System (ADS)

Fault segment linkage, migration of the locus of fault activity, and displacement localisation were important processes controlling the late Oligocene-Recent evolution of the normal fault population of the Hammam Faraun fault block, Suez rift. Initial fault activity was distributed across the fault block on fault segments that had attained their final length within 1-2 My of rifting. These initial segments then either grew by increasing displacement and linked to form longer segmented fault zones or died, during a rift initiation phase that lasted 6-8 My. Following this rift initiation phase, displacement became localised onto >25-km-long border fault zones bounding the fault block and many of the early high-displacement intra-block fault zones died. Following displacement localisation onto the major faults bounding the fault block, the locus of maximum displacement continued to migrate, with post-Middle Miocene displacement focused on the western margin of the fault block. This migration of fault activity between major crustal-scale normal faults can be viewed in terms of strain localisation at the rift scale. The results from this study question conventional fault growth models based on final displacement distributions, and highlight the sequential nature of faulting on major normal faults bounding domino-style tilted fault blocks.

Gawthorpe, Rob L.; Jackson, Christopher A.-L.; Young, Mike J.; Sharp, Ian R.; Moustafa, Adel R.; Leppard, Christopher W.

2003-06-01

170

Fault tolerant control laws  

NASA Technical Reports Server (NTRS)

A systematic procedure for the synthesis of fault tolerant control laws to actuator failure has been presented. Two design methods were used to synthesize fault tolerant controllers: the conventional LQ design method and a direct feedback controller design method SANDY. The latter method is used primarily to streamline the full-state Q feedback design into a practical implementable output feedback controller structure. To achieve robustness to control actuator failure, the redundant surfaces are properly balanced according to their control effectiveness. A simple gain schedule based on the landing gear up/down logic involving only three gains was developed to handle three design flight conditions: Mach .25 and Mach .60 at 5000 ft and Mach .90 at 20,000 ft. The fault tolerant control law developed in this study provides good stability augmentation and performance for the relaxed static stability aircraft. The augmented aircraft responses are found to be invariant to the presence of a failure. Furthermore, single-loop stability margins of +6 dB in gain and +30 deg in phase were achieved along with -40 dB/decade rolloff at high frequency.

Ly, U. L.; Ho, J. K.

1986-01-01

171

Faults and Faulting Earth Structure (2nd Edition), 2004  

E-print Network

Lecture slide excerpts from Earth Structure (2nd ed.), 2010: faults, fault zones and shear zones; hanging-wall and footwall flats (e.g. a segment forming a hanging-wall flat on a footwall flat); fault rocks such as gouge and cataclasite; mylonites and shear zones.

172

Software Evolution and the Fault Process  

NASA Technical Reports Server (NTRS)

In developing a software system, we would like to estimate the way in which the fault content changes during its development, as well as determine the locations having the highest concentration of faults. In the phases prior to test, however, there may be very little direct information regarding the number and location of faults. This lack of direct information requires developing a fault surrogate from which the number of faults and their location can be estimated. We develop a fault surrogate based on changes in the fault index, a synthetic measure which has been successfully used as a fault surrogate in previous work. We show that changes in the fault index can be used to estimate the rates at which faults are inserted into a system between successive revisions. We can then continuously monitor the total number of faults inserted into a system, the residual fault content, and identify those portions of a system requiring the application of additional fault detection and removal resources.
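The core bookkeeping implied here (accumulate positive changes in the fault index between revisions and scale them into an estimate of inserted faults) can be shown in a few lines. In the sketch below the module names, index values, and the proportionality constant k are all invented; the paper's calibrated fault surrogate is not reproduced.

```python
# Sketch of estimating inserted faults from changes in a synthetic fault
# index across revisions (toy numbers; the constant k is illustrative,
# not the one calibrated in the paper).
fault_index = {            # module -> fault index per revision
    "nav":   [10.0, 12.5, 12.0, 15.0],
    "comms": [ 8.0,  8.0,  9.5,  9.0],
}
k = 0.02   # assumed faults per unit of positive index change

def inserted_faults(series, k):
    """Sum positive increments of the fault index, scaled by k."""
    return k * sum(max(b - a, 0.0) for a, b in zip(series, series[1:]))

for module, series in fault_index.items():
    print(f"{module}: ~{inserted_faults(series, k):.2f} faults inserted")
```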

Nikora, Allen P.; Munson, John C.

1999-01-01

173

Fault management for data systems  

NASA Technical Reports Server (NTRS)

Issues related to automating the process of fault management (fault diagnosis and response) for data management systems are considered. Substantial benefits are to be gained by successful automation of this process, particularly for large, complex systems. The use of graph-based models to develop a computer assisted fault management system is advocated. The general problem is described and the motivation behind choosing graph-based models over other approaches for developing fault diagnosis computer programs is outlined. Some existing work in the area of graph-based fault diagnosis is reviewed, and a new fault management method which was developed from existing methods is offered. Our method is applied to an automatic telescope system intended as a prototype for future lunar telescope programs. Finally, an application of our method to general data management systems is described.
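The graph-based idea reduces to representing fault propagation as a directed graph and walking it backwards from an observed symptom to the components that could explain it. The sketch below uses an invented toy graph (the component names are not from the paper) to show that reverse-reachability step.

```python
# Sketch of graph-based fault diagnosis: walk a fault-propagation graph
# backwards from an observed symptom to candidate root causes (toy graph;
# component names are invented, not from the paper).
from collections import defaultdict

# edge u -> v means "a fault in u can produce a symptom or failure in v"
edges = [("power_supply", "controller"),
         ("controller", "telescope_drive"),
         ("sensor_bus", "controller"),
         ("telescope_drive", "pointing_error")]

parents = defaultdict(set)
for u, v in edges:
    parents[v].add(u)

def candidate_causes(symptom):
    """All upstream nodes that could explain the observed symptom."""
    seen, stack = set(), [symptom]
    while stack:
        node = stack.pop()
        for p in parents[node]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

print(candidate_causes("pointing_error"))
# -> {'telescope_drive', 'controller', 'power_supply', 'sensor_bus'}
```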

Boyd, Mark A.; Iverson, David L.; Patterson-Hine, F. Ann

1993-01-01

174

Origin and evolution of the Laguna Potrok Aike maar (Southern Patagonia, Argentina) as revealed by seismic data  

NASA Astrophysics Data System (ADS)

Seismic reflection and refraction data provide insights into the sedimentary infill and the underlying volcanic structure of Laguna Potrok Aike, a maar lake situated in the Pali Aike Volcanic Field, Southern Patagonia. The lake has a diameter of ~3.5 km, a maximum water depth of ~100 m and a presumed age of ~770 ka. Its sedimentary regime is influenced by climatic and hydrologic conditions related to the Antarctic Circumpolar Current, the Southern Hemispheric Westerlies and sporadic outbreaks of Antarctic polar air masses. Multiproxy environmental reconstructions of the last 16 ka document that this terminal lake is highly sensitive to climate change. Laguna Potrok Aike has recently become a major focus of the International Continental Scientific Drilling Program and was drilled down to 100 m below lake floor in late 2008 within the PASADO project. The sediments are likely to contain a continental record spanning the last ca. 80 kyrs unique in the South American realm. Seismic reflection data show relatively undisturbed, stratified lacustrine sediments at least in the upper ~100 m of the sedimentary infill but are obscured possibly by gas and/or coarser material in larger areas. A model calculated from seismic refraction data reveals a funnel-shaped structure embedded in the sandstone rocks of the surrounding Santa Cruz Formation. This funnel structure is filled by lacustrine sediments of up to 370 m in thickness. These can be separated into two distinct subunits with low acoustic velocities of 1500-1800 m s-1 in the upper subunit pointing at unconsolidated lacustrine muds, and enhanced velocities of 2000-2350 m s-1 in the lower subunit. Below these lacustrine sediments, a unit of probably volcanoclastic origin is observed (>2400 m s-1). This sedimentary succession is well comparable to other well-studied sequences (e.g. Messel and Baruth maars, Germany), confirming phreatomagmatic maar explosions as the origin of Laguna Potrok Aike.

Gebhardt, C.; de Batist, M. A.; Niessen, F.; Anselmetti, F.; Ariztegui, D.; Haberzettl, T.; Ohlendorf, C.; Zolitschka, B.

2009-12-01

175

Polynomially Complete Fault Detection Problems  

Microsoft Academic Search

We look at several variations of the single fault detection problem for combinational logic circuits and show that deciding whether single faults are detectable by input-output (I/O) experiments is polynomially complete, i.e., there is a polynomial time algorithm to decide if these single faults are detectable if and only if there is a polynomial time algorithm for problems such as

Oscar H. Ibarra; Sartaj Sahni

1975-01-01

176

Handling Software Faults with Redundancy  

Microsoft Academic Search

Software engineering methods can increase the dependability of software systems, and yet some faults escape even the most rigorous and methodical development process. Therefore, to guarantee high levels of reliability in the presence of faults, software systems must be designed to reduce the impact of the failures caused by such faults, for example by deploying techniques to detect and compensate

Antonio Carzaniga; Alessandra Gorla; Mauro Pezzè

2008-01-01

177

Impact of solar radiation on bacterioplankton in Laguna Vilama, a hypersaline Andean lake (4650 m)  

NASA Astrophysics Data System (ADS)

Laguna Vilama is a hypersaline lake located at 4660 m altitude in northwestern Argentina, high in the Andean Puna. The impact of ultraviolet (UV) radiation on bacterioplankton was studied by collecting samples at different times of the day. Molecular analysis (DGGE) showed that the bacterioplankton community is characterized by Gamma-proteobacteria (Halomonas sp., Marinobacter sp.), Alpha-proteobacteria (Roseobacter sp.), HGC (Agrococcus jenensis and an uncultured bacterium), and CFB (uncultured Bacteroidetes). During the day, minor modifications in bacterial diversity such as intensification of Bacteroidetes' signal and an emergence of Gamma-proteobacteria (Marinobacter flavimaris) were observed after solar exposure. DNA damage, measured as an accumulation of Cyclobutane Pyrimidine Dimers (CPDs), in bacterioplankton and naked DNA increased from 100 CPDs MB-1 at 1200 local time (LT) to 300 CPDs MB-1 at 1600 LT, and from 80 CPDs MB-1 at 1200 LT to 640 CPDs MB-1 at 1600 LT, respectively. In addition, pure cultures of Pseudomonas sp. V1 and Brachybacterium sp. V5, two bacteria previously isolated from this environment, were exposed simultaneously with the community, and viability of both strains diminished after solar exposure. No CPD accumulation was observed in either of the exposed cultures, but an increase in mutagenesis was detected in V5. Of the two strains, only Brachybacterium sp. V5 showed CPD accumulation in naked DNA. These results suggest that the bacterioplankton community is well adapted to this highly solar irradiated environment, showing little accumulation of CPDs and few changes in the community composition. They also demonstrate that these microorganisms contain efficient mechanisms against UV damage.

Farías, María Eugenia; Fernández-Zenoff, Verónica; Flores, Regina; Ordóñez, Omar; Estévez, Cristina

2009-06-01

178

Holocene History of the Chocó Rain Forest from Laguna Piusbi, Southern Pacific Lowlands of Colombia  

NASA Astrophysics Data System (ADS)

A high-resolution pollen record from a 5-m-long sediment core from the closed-lake basin Laguna Piusbi in the southern Colombian Pacific lowlands of Chocó, dated by 11 AMS 14C dates that range from ca. 7670 to 220 14C yr B.P., represents the first Holocene record from the Chocó rain forest area. The interval between 7600 and 6100 14C yr B.P. (500-265 cm), composed of sandy clays that accumulated during the initial phase of lake formation, is almost barren of pollen. Fungal spores and the presence of herbs and disturbance taxa suggest the basin was at least temporarily inundated and the vegetation was open. The closed lake basin might have formed during an earthquake, probably about 4400 14C yr B.P. From the interval of about 6000 14C yr B.P. onwards, 200 different pollen and spore types were identified in the core, illustrating a diverse floristic composition of the local rain forest. Main taxa are Moraceae/Urticaceae, Cecropia, Melastomataceae/Combretaceae, Acalypha, Alchornea, Fabaceae, Mimosa, Piper, Protium, Sloanea, Euterpe/Geonoma, Socratea, and Wettinia. Little change took place during that time interval. Compared to the pollen records from the rain forests of the Colombian Amazon basin and adjacent savannas, the Chocó rain forest ecosystem has been very stable during the late Holocene. Paleoindians probably lived there at least since 3460 14C yr B.P. Evidence of agricultural activity, shown by cultivation of Zea mays surrounding the lake, spans the last 1710 yr. Past and present very moist climate and little human influence are important factors in maintaining the stable ecosystem and high biodiversity of the Chocó rain forest.

Behling, Hermann; Hooghiemstra, Henry; Negret, Alvaro José

1998-11-01

179

Rickettsia bellii, Rickettsia amblyommii, and Laguna Negra hantavirus in an Indian reserve in the Brazilian Amazon  

PubMed Central

Background The purpose of this study was to identify the presence of rickettsia and hantavirus in wild rodents and arthropods in response to an outbreak of acute unidentified febrile illness among Indians in the Halataikwa Indian Reserve, northwest of the Mato Grosso state, in the Brazilian Amazon, where previous surveillance data showed serologic evidence of rickettsia and hantavirus infection in humans. Methods The arthropods were collected from the healthy Indian population and by flagging vegetation in grassland or woodland along the peridomestic environment of the Indian reserve. Wild rodents were live-trapped in an area bordering the reserve limits, due to the impossibility of capturing wild animals in the Indian reserve. The wild rodents were identified based on external and cranial morphology and karyotype. DNA was extracted from spleen or liver samples of rodents and from invertebrate (tick and louse) pools, and the molecular characterization of the rickettsia was performed through PCR and DNA sequencing of fragments of two rickettsial genes (gltA and ompA). In relation to hantavirus, rodent serum samples were serologically screened by IgG ELISA using the Araraquara-N antigen and total RNA was extracted from lung samples of IgG-positive rodents. The amplification of the complete S segment was performed. Results A total of 153 wild rodents, 121 louse, and 36 tick specimens were collected in 2010. Laguna Negra hantavirus was identified in Calomys callidus rodents, and Rickettsia bellii and Rickettsia amblyommii were identified in Amblyomma cajennense ticks. Conclusions Zoonotic diseases such as HCPS and spotted fever rickettsiosis are a public health threat and should be considered in outbreaks and acute febrile illnesses among Indian populations. The presence of the genome of rickettsias and hantavirus in animals in this Indian reserve reinforces the need to include these infectious agents in outbreak investigations of febrile cases in Indian populations. PMID:24742108

2014-01-01

180

Postglacial history of alpine vegetation, fire, and climate from Laguna de Río Seco, Sierra Nevada, southern Spain  

NASA Astrophysics Data System (ADS)

The Sierra Nevada of southern Spain is a landscape with a rich biological and cultural heritage. The range was extensively glaciated during the late Pleistocene. However, the postglacial paleoecologic history of the highest range in southern Europe is nearly completely unknown. Here we use sediments from a small lake above present treeline - Laguna de Río Seco at 3020 m elevation - in a paleoecological study documenting over 11,500 calendar years of vegetation, fire and climate change, addressing ecological and paleoclimatic issues unique to this area through comparison with regional paleoecological sequences. The early record is dominated by Pinus pollen, with Betula, deciduous Quercus, and grasses, with an understory of shrubs. It is unlikely that pine trees grew around the lake, and fire was relatively unimportant at this site during this period. Aquatic microfossils indicate that the wettest conditions and highest lake levels at Laguna de Río Seco occurred before 7800 cal yr BP. This is in contrast to lower elevation sites, where wettest conditions occurred after ca 7800. Greater differences in early Holocene seasonal insolation may have translated to greater snowpack and subsequently higher lake levels at higher elevations, but not necessarily at lower elevations, where higher evaporation rates prevailed. With declining seasonality after ca 8000 cal yr BP, but continuing summer precipitation, lake levels at the highest elevation site remained high, but lake levels at lower elevation sites increased as evaporation rates declined. Drier conditions commenced regionally after ca 5700 cal yr BP, shown at Laguna de Río Seco by declines in wetland pollen, and increases in high elevation steppe shrubs common today (Juniperus, Artemisia, and others). The disappearance or decline of mesophytes, such as Betula, from ca 4000 cal yr BP is part of a regional depletion in Mediterranean Spain and elsewhere in Europe from the mid to late Holocene. On the other hand, Castanea sativa increased in the Laguna de Río Seco record after ca 4000 cal yr BP, and especially in post-Roman times, probably due to arboriculture. Though not as important at high elevations as at low elevations, fire occurrence was elevated, particularly after ca 3700 years ago, in response to regional human population expansion. The local and regional impact of humans increased substantially after ca 2700 years ago, with the loss of Pinus forest within the mountain range, increases in evidence of pasturing herbivores around the lake, and Olea cultivation at lower elevations. Though human impact was not as extensive at high elevation as at lower elevation sites in southern Iberia, this record confirms that even remote sites were not free of direct human influence during the Holocene.

Anderson, R. S.; Jiménez-Moreno, G.; Carrión, J. S.; Pérez-Martínez, C.

2011-06-01

181

The LAGUNA design study towards giant liquid based underground detectors for neutrino physics and astrophysics and proton decay searches  

E-print Network

The feasibility of a next generation neutrino observatory in Europe is being considered within the LAGUNA design study. To accommodate giant neutrino detectors and shield them from cosmic rays, a new very large underground infrastructure is required. Seven potential candidate sites in different parts of Europe and at several distances from CERN are being studied: Boulby (UK), Canfranc (Spain), Fréjus (France/Italy), Pyhäsalmi (Finland), Polkowice-Sieroszowice (Poland), Slanic (Romania) and Umbria (Italy). The design study aims at the comprehensive and coordinated technical assessment of each site, at a coherent cost estimation, and at a prioritization of the sites by the summer of 2010.

Angus, D; Autiero, D; Apostu, A; Badertscher, A; Bennet, T; Bertola, G; Bertola, P F; Besida, O; Bettini, A; Booth, C; Borne, J L; Brancus, I; Bujakowsky, W; Campagne, J E; Danil, G Cata; Chipesiu, F; Chorowski, M; Cripps, J; Curioni, A; Davidson, S; Declais, Y; Drost, U; Duliu, O; Dumarchez, J; Enqvist, T; Ereditato, A; von Feilitzsch, F; Fynbo, H; Gamble, T; Galvanin, G; Gendotti, A; Gizicki, W; Goger-Neff, M; Grasslin, U; Gurney, D; Hakala, M; Hannestad, S; Haworth, M; Horikawa, S; Jipa, A; Juget, F; Kalliokoski, T; Katsanevas, S; Keen, M; Kisiel, J; Kreslo, I; Kudryastev, V; Kuusiniemi, P; Labarga, L; Lachenmaier, T; Lanfranchi, J C; Lazanu, I; Lewke, T; Loo, K; Lightfoot, P; Lindner, M; Longhin, A; Maalampi, J; Marafini, M; Marchionni, A; Margineanu, R M; Markiewicz, A; Marrodan-Undagoita, T; Marteau, J E; Matikainen, R; Meindl, Q; Messina, M; Mietelski, J W; Mitrica, B; Mordasini, A; Mosca, L; Moser, U; Nuijten, G; Oberauer, L; Oprina, A; Paling, S; Pascoli, S; Patzak, T; Pectu, M; Pilecki, Z; Piquemal, F; Potzel, W; Pytel, W; Raczynski, M; Rafflet, G; Ristaino, G; Robinson, M; Rogers, R; Roinisto, J; Romana, M; Rondio, E; Rossi, B; Rubbia, A; Sadecki, Z; Saenz, C; Saftoiu, A; Salmelainen, J; Sima, O; Slizowski, J; Slizowski, K; Sobczyk, J; Spooner, N; Stoica, S; Suhonen, J; Sulej, R; Szarska, M; Szeglowski, T; Temussi, M; Thompson, J; Thompson, L; Trzaska, W H; Tippmann, M; Tonazzo, A; Urbanczyk, K; Vasseur, G; Williams, A; Winter, J; Wojutszewska, K; Wurm, M; Zalewska, A; Zampaolo, M; Zito, M

2010-01-01

182

The LAGUNA design study- towards giant liquid based underground detectors for neutrino physics and astrophysics and proton decay searches  

E-print Network

The feasibility of a next generation neutrino observatory in Europe is being considered within the LAGUNA design study. To accommodate giant neutrino detectors and shield them from cosmic rays, a new very large underground infrastructure is required. Seven potential candidate sites in different parts of Europe and at several distances from CERN are being studied: Boulby (UK), Canfranc (Spain), Fréjus (France/Italy), Pyhäsalmi (Finland), Polkowice-Sieroszowice (Poland), Slanic (Romania) and Umbria (Italy). The design study aims at the comprehensive and coordinated technical assessment of each site, at a coherent cost estimation, and at a prioritization of the sites by the summer of 2010.

LAGUNA Collaboration; D. Angus; A. Ariga; D. Autiero; A. Apostu; A. Badertscher; T. Bennet; G. Bertola; P. F. Bertola; O. Besida; A. Bettini; C. Booth; J. L. Borne; I. Brancus; W. Bujakowsky; J. E. Campagne; G. Cata Danil; F. Chipesiu; M. Chorowski; J. Cripps; A. Curioni; S. Davidson; Y. Declais; U. Drost; O. Duliu; J. Dumarchez; T. Enqvist; A. Ereditato; F. von Feilitzsch; H. Fynbo; T. Gamble; G. Galvanin; A. Gendotti; W. Gizicki; M. Goger-Neff; U. Grasslin; D. Gurney; M. Hakala; S. Hannestad; M. Haworth; S. Horikawa; A. Jipa; F. Juget; T. Kalliokoski; S. Katsanevas; M. Keen; J. Kisiel; I. Kreslo; V. Kudryastev; P. Kuusiniemi; L. Labarga; T. Lachenmaier; J. C. Lanfranchi; I. Lazanu; T. Lewke; K. Loo; P. Lightfoot; M. Lindner; A. Longhin; J. Maalampi; M. Marafini; A. Marchionni; R. M. Margineanu; A. Markiewicz; T. Marrodan-Undagoita; J. E. Marteau; R. Matikainen; Q. Meindl; M. Messina; J. W. Mietelski; B. Mitrica; A. Mordasini; L. Mosca; U. Moser; G. Nuijten; L. Oberauer; A. Oprina; S. Paling; S. Pascoli; T. Patzak; M. Pectu; Z. Pilecki; F. Piquemal; W. Potzel; W. Pytel; M. Raczynski; G. Rafflet; G. Ristaino; M. Robinson; R. Rogers; J. Roinisto; M. Romana; E. Rondio; B. Rossi; A. Rubbia; Z. Sadecki; C. Saenz; A. Saftoiu; J. Salmelainen; O. Sima; J. Slizowski; K. Slizowski; J. Sobczyk; N. Spooner; S. Stoica; J. Suhonen; R. Sulej; M. Szarska; T. Szeglowski; M. Temussi; J. Thompson; L. Thompson; W. H. Trzaska; M. Tippmann; A. Tonazzo; K. Urbanczyk; G. Vasseur; A. Williams; J. Winter; K. Wojutszewska; M. Wurm; A. Zalewska; M. Zampaolo; M. Zito

2009-12-31

183

Maximum Magnitude in Relation to Mapped Fault Length and Fault Rupture  

Microsoft Academic Search

Earthquake hazard zones are highlighted using known fault locations and an estimate of the fault's maximum magnitude earthquake. Magnitude limits are commonly determined from fault geometry, which is dependent on fault length. Over the past 30 years it has become apparent that fault length is often poorly constrained and that a single event can rupture across several individual fault segments.

N. Black; D. Jackson; T. Rockwell

2004-01-01

184

Fault-tolerant Sensor Network based on Fault Evaluation Matrix and Compensation for Intermittent Observation  

E-print Network

Kazuya Kosugi, Shinichiro Tokumoto and Toru Namerikawa. This paper deals with a fault-tolerant sensor network based on a fault evaluation matrix and compensation for intermittent observation, for constructing a fault-tolerant system. Specifically, we propose a fault-evaluation matrix for the fault

185

Fault Location Orion is the distribution company for the Canterbury region. In 2007, a Ground Fault  

E-print Network

Orion is the distribution company for the Canterbury region. In 2007, a ground-fault system was introduced to handle faults. This system operates by reducing the fault currents present during a fault, extinguishing and preventing arcing from occurring. Although this is greatly beneficial to the system, the reduction in fault

Hickman, Mark

186

DIAGNOSIS USING FAULT TREES INDUCED FROM SIMULATED INCIPIENT FAULT CASE DATA  

Microsoft Academic Search

Fault tree analysis is widely used in industry for fault diagnosis. The diagnosis of incipient or 'soft' faults is considerably more difficult than that of 'hard' faults, which is the case considered normally. A detailed fault tree model reflecting signal variations over a wide range is required in the case of soft faults. This paper presents comprehensive results describing the
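The mechanics of evaluating a fault tree, and why 'soft' faults complicate it, can be shown with a toy gate structure. In the sketch below the tree, the event names, and the min/max treatment of partial ('soft') fault degrees are invented conventions for illustration; the paper's neural-network-based diagnosis is not reproduced.

```python
# Minimal fault tree evaluation sketch (invented tree and event names; the
# fuzzy min/max handling of 'soft' fault degrees is a common convention,
# not the approach of the paper).
def AND(*xs): return min(xs)   # with 0/1 inputs this is logical AND
def OR(*xs):  return max(xs)   # with 0/1 inputs this is logical OR

def top_event(e):
    """TOP = (pump_degraded OR valve_stuck) AND sensor_drift"""
    return AND(OR(e["pump_degraded"], e["valve_stuck"]), e["sensor_drift"])

# 'hard' faults: crisp 0/1 states
print(top_event({"pump_degraded": 0, "valve_stuck": 1, "sensor_drift": 1}))      # 1

# 'soft' faults: partial degradation degrees in [0, 1]
print(top_event({"pump_degraded": 0.3, "valve_stuck": 0.1, "sensor_drift": 0.6}))  # 0.3
```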

P J Nolan; M G Madden; P Muldoon

1994-01-01

187

Normal fault corrugation: implications for growth and seismicity of active normal faults  

Microsoft Academic Search

Large normal faults are corrugated. Corrugations appear to form from overlapping or en échelon fault arrays by two breakthrough mechanisms: lateral propagation of curved fault-tips and linkage by connecting faults. Both mechanisms include localized fault-parallel extension and eventual abandonment of relay ramps. These breakthrough mechanisms produce distinctive hanging wall and footwall geometries indicative of fault system evolution. From such geometries,

David A Ferrill; John A Stamatakos; Darrell Sims

1999-01-01

188

Experimental Fault Reactivation on Favourably and Unfavourably Oriented Faults  

NASA Astrophysics Data System (ADS)

In this study, we introduce work which aims to assess the loading of faults to failure under different stress regimes in a triaxial deformation apparatus. We explore experimentally the reshear of an existing fault in various orientations for particular values of (σ1 - σ3) and σ3' for contrasting loading systems - load-strengthening (equivalent to a thrust fault) with σ1' increasing at constant σ3', versus load-weakening (equivalent to a normal fault) with reducing σ3' under constant σ1'. Experiments are conducted on sawcut granite samples with fault angles at a variety of orientations relative to σ1, ranging from an optimal orientation for reactivation to lockup angles where new faults are formed in preference to reactivating the existing sawcut orientation. Prefailure and postfailure behaviour is compared in terms of damage zone development via monitoring variations in ultrasonic velocity and acoustic emission behaviour. For example, damage surrounding unfavourably oriented faults is significantly higher than that seen around favourably orientated faults due to greater maximum stresses attained prior to unstable slip, which is reflected by the increased acoustic emission activity leading up to failure. In addition, we also experimentally explore the reshear of natural pseudotachylytes (PSTs) from two different fault zones: the Gole Larghe Fault, Adamello, Italy, in which the PSTs are in relatively isotropic tonalite (at lab sample scale), and the Alpine Fault, New Zealand, in which the PSTs are in highly anisotropic foliated schist. We test whether PSTs will reshear in both rock types under the right conditions, or whether new fractures in the wall rock will form in preference to reactivating the PST (PST shear strength is higher than that of the host rock). Are PSTs representative of one slip event?

Mitchell, T. M.; Sibson, R. H.; Renner, J.; Toy, V. G.; di Toro, G.; Smith, S. A.

2010-12-01

189

Stacking Faults in Cotton Fibers  

NASA Astrophysics Data System (ADS)

The stacking faults in different varieties of cotton fibers have been quantified using wide-angle X-ray scattering (WAXS) data. Exponential functions for the column length distribution have been used for the determination of microstructural parameters. The crystal imperfection parameters, such as crystal size, lattice strain (g in %), stacking faults and twin faults, have been determined by profile analysis using the Fourier method of Warren. We examined different varieties of raw cotton fibers using WAXS techniques. In all these cases we note that the stacking faults are quite significant in determining the properties of cotton fibers.

Divakara, S.; Niranjana, A. R.; Siddaraju, G. N.; Somashekar, R.

2011-07-01

190

Comparison of Observed Spatio-temporal Aftershock Patterns with Earthquake Simulator Results  

NASA Astrophysics Data System (ADS)

Due to the complex nature of faulting in southern California, knowledge of rupture behavior near fault step-overs is of critical importance to properly quantify and mitigate seismic hazards. Estimates of earthquake probability are complicated by the uncertainty that a rupture will stop at or jump a fault step-over, which affects both the magnitude and frequency of occurrence of earthquakes. In recent years, earthquake simulators and dynamic rupture models have begun to address the effects of complex fault geometries on earthquake ground motions and rupture propagation. Early models incorporated vertical faults with highly simplified geometries. Many current studies examine the effects of varied fault geometry, fault step-overs, and fault bends on rupture patterns; however, these works are limited by the small numbers of integrated fault segments and simplified orientations. The previous work of Kroll et al. (2013) on the northern extent of the 2010 El Mayor-Cucapah rupture in the Yuha Desert region uses precise aftershock relocations to show an area of complex conjugate faulting within the step-over region between the Elsinore and Laguna Salada faults. Here, we employ an innovative approach of incorporating this fine-scale fault structure defined through seismological, geologic and geodetic means in the physics-based earthquake simulator, RSQSim, to explore the effects of fine-scale structures on stress transfer and rupture propagation and examine the mechanisms that control aftershock activity and local triggering of other large events. We run simulations with primary fault structures in the state of California and northern Baja California and incorporate complex secondary faults in the Yuha Desert region. These models produce aftershock activity that enables comparison between the observed and predicted distribution and allow for examination of the mechanisms that control them. We investigate how the spatial and temporal distribution of aftershocks is affected by changes to model parameters such as shear and normal stress, rate-and-state frictional properties, fault geometry, and slip rate.

Kroll, K.; Richards-Dinger, K. B.; Dieterich, J. H.

2013-12-01

191

Dynamics of earthquake faults  

SciTech Connect

The authors present an overview of ongoing studies of the rich dynamical behavior of the uniform, deterministic Burridge-Knopoff model of an earthquake fault, discussing the model's behavior in the context of current seismology. The topics considered include: (1) basic properties of the model, such as the distinction between small and large events and the magnitude vs frequency distribution; (2) dynamics of individual events, including dynamical selection of rupture propagation speeds; (3) generalizations of the model to more realistic, higher-dimensional models; and (4) studies of predictability, in which artificial catalogs generated by the model are used to test and determine the limitations of pattern recognition algorithms used in seismology.
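A flavour of the model's stick-slip behaviour can be conveyed with a heavily simplified, quasi-static spring-block toy in which a block that reaches its friction threshold drops its force and loads its neighbours. The sketch below is only in the spirit of the Burridge-Knopoff model; the model studied by the authors is deterministic, with inertia and velocity-weakening friction, and is not reproduced here. All parameter values are invented.

```python
# Quasi-static spring-block sketch loosely inspired by the Burridge-Knopoff
# model: uniform loading, threshold failure, and redistribution of force to
# neighbours, producing avalanches of varying size.
import random

N = 200                     # number of blocks
threshold = 1.0             # static friction threshold
alpha = 0.25                # fraction of force passed to each neighbour
force = [random.uniform(0.0, threshold) for _ in range(N)]
sizes = []

for step in range(10000):
    # uniform plate loading: raise every block until one reaches failure
    gap = threshold - max(force)
    force = [f + gap for f in force]

    # relax: failed blocks drop to zero and load their neighbours
    unstable = [i for i, f in enumerate(force) if f >= threshold]
    size = 0
    while unstable:
        i = unstable.pop()
        if force[i] < threshold:
            continue
        size += 1
        f = force[i]
        force[i] = 0.0
        for j in (i - 1, i + 1):
            if 0 <= j < N:
                force[j] += alpha * f
                if force[j] >= threshold:
                    unstable.append(j)
    sizes.append(size)

large = sum(s > 10 for s in sizes)
print(f"events: {len(sizes)}, large events (>10 blocks): {large}")
```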

Carlson, J.M. (Department of Physics and Institute for Theoretical Physics, University of California, Santa Barbara, California 93106 (United States)); Langer, J.S. (Institute for Theoretical Physics, University of California, Santa Barbara, California 93106 (United States)); Shaw, B.E. (Institute for Theoretical Physics, University of California, Santa Barbara, California 93106 (United States) Lamont-Doherty Earth Observatory, Columbia University, Palisades, New York 10964 (United States))

1994-04-01

192

Fault current limiter  

DOEpatents

A fault current limiter (FCL) includes a series of high permeability posts that collectively define a core for the FCL. A DC coil, for the purpose of saturating a portion of the high permeability posts, surrounds the complete structure outside of an enclosure in the form of a vessel. The vessel contains a dielectric insulation medium. AC coils, for transporting AC current, are wound on insulating formers and electrically interconnected to each other in a manner such that the senses of the magnetic field produced by each AC coil in the corresponding high permeability core are opposing. There are insulation barriers between phases to improve the dielectric withstand properties of the dielectric medium.

Darmann, Francis Anthony

2013-10-08

193

Perspective View, Garlock Fault  

NASA Technical Reports Server (NTRS)

California's Garlock Fault, marking the northwestern boundary of the Mojave Desert, lies at the foot of the mountains, running from the lower right to the top center of this image, which was created with data from NASA's Shuttle Radar Topography Mission (SRTM), flown in February 2000. The data will be used by geologists studying fault dynamics and landforms resulting from active tectonics. These mountains are the southern end of the Sierra Nevada and the prominent canyon emerging at the lower right is Lone Tree canyon. In the distance, the San Gabriel Mountains cut across from the left side of the image. At their base lies the San Andreas Fault, which meets the Garlock Fault near the left edge at Tejon Pass. The dark linear feature running from lower right to upper left is State Highway 14 leading from the town of Mojave in the distance to Inyokern and the Owens Valley in the north. The lighter parallel lines are dirt roads related to power lines and the Los Angeles Aqueduct which run along the base of the mountains.

This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense (DoD), and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC.

Size: Varies in a perspective view Location: 35.25 deg. North lat., 118.05 deg. West lon. Orientation: Looking southwest Original Data Resolution: SRTM and Landsat: 30 meters (99 feet) Date Acquired: February 16, 2000

2000-01-01

194

Central Asia Active Fault Database  

NASA Astrophysics Data System (ADS)

The ongoing collision of the Indian subcontinent with Asia controls active tectonics and seismicity in Central Asia. This motion is accommodated by faults that have historically caused devastating earthquakes and continue to pose serious threats to the population at risk. Despite international and regional efforts to assess seismic hazards in Central Asia, little attention has been given to development of a comprehensive database for active faults in the region. To address this issue and to better understand the distribution and level of seismic hazard in Central Asia, we are developing a publicly available database for active faults of Central Asia (including but not limited to Afghanistan, Tajikistan, Kyrgyzstan, northern Pakistan and western China) using ArcGIS. The database is designed to allow users to store, map and query important fault parameters such as fault location, displacement history, rate of movement, and other data relevant to seismic hazard studies including fault trench locations, geochronology constraints, and seismic studies. Data sources integrated into the database include previously published maps and scientific investigations as well as strain rate measurements and historic and recent seismicity. In addition, high resolution Quickbird, Spot, and Aster imagery is used for selected features to locate and measure offset of landforms associated with Quaternary faulting. These features are individually digitized and linked to attribute tables that provide a description for each feature. Preliminary observations include inconsistent and sometimes inaccurate information for faults documented in different studies. For example, the Darvaz-Karakul fault, which roughly defines the western margin of the Pamir, has been mapped with differences in location of up to 12 kilometers. The sense of motion for this fault ranges from unknown to thrust and strike-slip in three different studies despite documented left-lateral displacements of Holocene and late Pleistocene landforms observed near the fault trace.

Mohadjer, Solmaz; Ehlers, Todd A.; Kakar, Najibullah

2014-05-01

195

Congener-specific polychlorinated biphenyl patterns in eggs of aquatic birds from the Lower Laguna Madre, Texas  

SciTech Connect

Eggs from four aquatic bird species nesting in the Lower Laguna Madre, Texas, were collected to determine differences and similarities in the accumulation of congener-specific polychlorinated biphenyls (PCBs) and to evaluate PCB impacts on reproduction. Because of the different toxicities of PCB congeners, it is important to know which congeners contribute most to total PCBs. The predominant PCB congeners were 153, 138, 180, 110, 118, 187, and 92. Collectively, congeners 153, 138, and 180 accounted for 26 to 42% of total PCBs. Congener 153 was the most abundant in Caspian terns (Sterna caspia) and great blue herons (Ardea herodias) and congener 138 was the most abundant in snowy egrets (Egretta thula) and tricolored herons (Egretta tricolor). Principal component analysis indicated a predominance of higher chlorinated biphenyls in Caspian terns and great blue herons and lower chlorinated biphenyls in tricolored herons. Snowy egrets had a predominance of pentachlorobiphenyls. These results suggest that there are differences in PCB congener patterns in closely related species and that these differences are more likely associated with the species' diet rather than metabolism. Total PCBs were significantly greater (p < 0.05) in Caspian terns than in the other species. Overall, PCBs in eggs of birds from the Lower Laguna Madre were below concentrations known to affect bird reproduction.

Mora, M.A. [Texas A and M Univ., College Station, TX (United States)

1996-06-01

196

Surface Creep on California Faults  

NSDL National Science Digital Library

This site provides data from a number of creepmeters in California. A creepmeter is an instrument that monitors the slow surface displacement of an active fault. Its function is not to measure fault slip during earthquakes, but to record the slow aseismic slip between earthquakes.

Roger Bilham

197

Soft Computing and Fault Management  

Microsoft Academic Search

Soft computing is a partnership between A.I. techniques that are tolerant of imprecision, uncertainty and partial truth, with the aim of obtaining a robust solution for complex systems. Telecommunication systems are built with extensive redundancy and complexity to ensure robustness and quality of service. Facilitating this requires complex fault identification and management systems. Fault identification and management is generally

R. Sterritt

2000-01-01

198

SFT: Scalable Fault Tolerance  

SciTech Connect

In this paper we will present a new technology that we are currently developing within the SFT: Scalable Fault Tolerance FastOS project which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continuing execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency—requiring no changes to user applications. Our technology is based on a global coordination mechanism that enforces transparent recovery lines in the system, and TICK, a lightweight, incremental checkpointing software architecture implemented as a Linux kernel module. TICK is completely user-transparent and does not require any changes to user code or system libraries; it is highly responsive: an interrupt, such as a timer interrupt, can trigger a checkpoint in as little as 2.5 µs; and it supports incremental and full checkpoints with minimal overhead—less than 6% with full checkpointing to disk performed as frequently as once per minute.
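The incremental idea behind TICK can be illustrated in user space: checkpoint only the state blocks whose contents changed since the previous checkpoint. The sketch below is an assumed, simplified analogue operating on a byte buffer; TICK itself works at the kernel level on dirty memory pages, which is not reproduced here.

```python
# Sketch of block-level incremental checkpointing (user-space toy with
# assumed semantics; not the TICK kernel module).
import hashlib

BLOCK = 4096

def split_blocks(data: bytes):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def incremental_checkpoint(data: bytes, previous_hashes):
    """Store only blocks whose hash changed since the last checkpoint."""
    changed, new_hashes = {}, []
    for idx, block in enumerate(split_blocks(data)):
        h = hashlib.sha256(block).hexdigest()
        new_hashes.append(h)
        if idx >= len(previous_hashes) or previous_hashes[idx] != h:
            changed[idx] = block
    return changed, new_hashes

state = bytearray(10 * BLOCK)
_, hashes = incremental_checkpoint(bytes(state), [])       # full checkpoint
state[5 * BLOCK] = 0xFF                                     # dirty one block
delta, hashes = incremental_checkpoint(bytes(state), hashes)
print("blocks saved in incremental checkpoint:", sorted(delta))  # [5]
```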

Petrini, Fabrizio; Nieplocha, Jarek; Tipparaju, Vinod

2006-04-15

199

Colorado Regional Faults  

SciTech Connect

Citation Information: Originator: Earth Science & Observation Center (ESOC), CIRES, University of Colorado at Boulder Originator: Colorado Geological Survey (CGS) Publication Date: 2012 Title: Regional Faults Edition: First Publication Information: Publication Place: Earth Science & Observation Center, Cooperative Institute for Research in Environmental Science, University of Colorado, Boulder Publisher: Earth Science & Observation Center (ESOC), CIRES, University of Colorado at Boulder Description: This layer contains the regional faults of Colorado Spatial Domain: Extent: Top: 4543192.100000 m Left: 144385.020000 m Right: 754585.020000 m Bottom: 4094592.100000 m Contact Information: Contact Organization: Earth Science & Observation Center (ESOC), CIRES, University of Colorado at Boulder Contact Person: Khalid Hussein Address: CIRES, Ekeley Building Earth Science & Observation Center (ESOC) 216 UCB City: Boulder State: CO Postal Code: 80309-0216 Country: USA Contact Telephone: 303-492-6782 Spatial Reference Information: Coordinate System: Universal Transverse Mercator (UTM) WGS 1984 Zone 13N False Easting: 500000.00000000 False Northing: 0.00000000 Central Meridian: -105.00000000 Scale Factor: 0.99960000 Latitude of Origin: 0.00000000 Linear Unit: Meter Datum: World Geodetic System 1984 (WGS 1984) Prime Meridian: Greenwich Angular Unit: Degree Digital Form: Format Name: Shape file

Hussein, Khalid

2012-02-01

200

Experimental Fault Reactivation on Favourably and Unfavourably Oriented Faults  

NASA Astrophysics Data System (ADS)

In this study, we assess the loading of faults to failure under different stress regimes in a triaxial deformation apparatus, both in dry and saturated conditions. We explore experimentally the reshear of an existing fault in various orientations for particular values of (σ1 - σ3) and σ3' for contrasting loading systems - load-strengthening (equivalent to a thrust fault) with σ1' increasing at constant σ3', versus load-weakening (equivalent to a normal fault) with reducing σ3' under constant σ1'. Experiments are conducted on sawcut granite samples with fault angles at a variety of orientations relative to σ1, ranging from an optimal orientation for reactivation to lockup angles where new faults are formed in preference to reactivating the existing sawcut orientation. Prefailure and postfailure behaviour is compared in terms of damage zone development via monitoring variations in ultrasonic velocity and acoustic emission behaviour. For example, damage surrounding unfavourably oriented faults is significantly higher than that seen around favourably orientated faults due to greater maximum stresses attained prior to unstable slip, which is reflected by the increased acoustic emission activity leading up to failure. In addition, we explore reshear conditions under an initial condition of (σ1' = σ3'), then inducing reshear on the existing fault first by increasing σ1' (load-strengthening), then by decreasing σ3' (load-weakening), again comparing relative damage zone development and acoustic emission levels. In saturated experiments, we explore the values of pore fluid pressure (Pf) needed for reshear to occur in preference to the formation of a new fault. Typically a limiting factor in conventional triaxial experiments performed in compression is that Pf cannot exceed the confining pressure (σ2 and σ3). By employing a sample assembly that allows deformation while the loading piston is in extension, it enables us to achieve pore pressures in excess of σ3 and consequently reach reduced effective stress conditions that allow the reactivation of highly unfavourably orientated faults. We demonstrate that the rate of Pf increase imposed on the fault plane has a significant effect on reshear conditions, which has potentially important implications for faulting in the seismogenic zone.
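For orientation, the dependence of reshear stress on fault attitude for a cohesionless fault is commonly summarised by the Amontons/Sibson reactivation criterion, σ1'/σ3' = (1 + μ cot θ)/(1 - μ tan θ), where θ is the angle between the fault and σ1 and lock-up occurs where the denominator vanishes. The sketch below evaluates this textbook relation for an assumed friction coefficient μ = 0.6; it is illustrative only and is not the authors' experimental analysis.

```python
# Standard frictional reactivation criterion for a cohesionless fault
# (textbook relation; mu = 0.6 is an assumed friction coefficient).
import math

def reactivation_stress_ratio(theta_deg, mu=0.6):
    """Effective stress ratio sigma1'/sigma3' needed to reshear a
    cohesionless fault whose plane makes angle theta with sigma1."""
    t = math.radians(theta_deg)
    denom = 1.0 - mu * math.tan(t)
    if denom <= 0.0:
        return math.inf          # beyond lock-up: reshear impossible
    return (1.0 + mu / math.tan(t)) / denom

mu = 0.6
theta_opt = 0.5 * math.degrees(math.atan(1.0 / mu))   # optimal orientation
theta_lock = math.degrees(math.atan(1.0 / mu))        # lock-up angle
print(f"optimal angle ~{theta_opt:.1f} deg, "
      f"ratio {reactivation_stress_ratio(theta_opt, mu):.2f}")
print(f"lock-up angle ~{theta_lock:.1f} deg")
for theta in (20, 30, 45, 55):
    print(f"theta={theta:2d} deg -> sigma1'/sigma3' = "
          f"{reactivation_stress_ratio(theta, mu):.2f}")
```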

Mitchell, T. M.; Renner, J.; Sibson, R. H.

2011-12-01

201

Segmentation of the Sumatran fault  

NASA Astrophysics Data System (ADS)

Segmentation of the Sumatran fault is discerned using an analytical approach in which a k-means algorithm partitions earthquakes into clusters of seismicity along the fault. Clusters are tessellated into segment zones from which segment lengths and maximum credible magnitude are estimated. Decreasing the depth of seismicity sampled from 70 to 60 to 50 km reduces interaction with deeper seismicity, and results from the k-means algorithm initially suggest that the fault has K = 14, 16, and 16 clusters, respectively. After inspection, it becomes clear that the optimum number of clusters is 16. The 16 cluster model developed into zones generates segment lengths ranging from 22 to 196 km and maximum earthquake potentials in the range of Mw 6.5-7.8. The Sumatran fault is dominated by eight great central segments distributed approximately symmetrically about Lake Maninjau. These central fault segments dominate the hazard, which is less in the far north because segments are shorter.
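The workflow sketched in this abstract (cluster epicentres, turn clusters into segment zones, convert segment length into a maximum credible magnitude) can be illustrated on synthetic data. In the sketch below the epicentres, the simple 1-D k-means, and the use of the Wells and Coppersmith (1994) strike-slip rupture-length regression are all assumptions for illustration, not the catalogue or the scaling relation used by the authors.

```python
# Sketch of fault segmentation from epicentre clusters: 1-D k-means on
# synthetic along-strike positions, segment length from cluster extent, and
# a maximum magnitude from the Wells & Coppersmith (1994) strike-slip
# rupture-length regression (assumed stand-in for the paper's scaling).
import numpy as np

rng = np.random.default_rng(1)
segment_centres = np.array([150.0, 500.0, 900.0, 1300.0, 1700.0])     # km
epi = np.concatenate([c + 15.0 * rng.standard_normal(200) for c in segment_centres])

def kmeans_1d(x, k, iters=50):
    centroids = np.quantile(x, np.linspace(0.05, 0.95, k))  # spread initial guesses
    labels = np.zeros(x.size, dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = x[labels == j].mean()
    return labels

labels = kmeans_1d(epi, k=5)
for j in range(5):
    seg = epi[labels == j]
    if seg.size == 0:
        continue
    length = seg.max() - seg.min()                 # crude segment length, km
    mw_max = 5.16 + 1.12 * np.log10(length)        # Wells & Coppersmith (1994)
    print(f"segment {j}: L ~ {length:5.0f} km, Mw_max ~ {mw_max:.1f}")
```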

Burton, Paul W.; Hall, Thomas R.

2014-06-01

202

Observer-based fault detection for nuclear reactors  

E-print Network

This is a study of fault detection for nuclear reactor systems. Basic concepts are derived from fundamental theories on system observers. Different types of fault - actuator fault, sensor fault, and system dynamics fault ...
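The observer-based idea can be condensed to a toy example: an estimator tracks the plant model, and the residual between measurement and estimate is thresholded to flag a fault. The sketch below uses an invented one-state discrete-time system, observer gain, and additive sensor fault; it is not the reactor model or the detection filters developed in the thesis.

```python
# Toy observer-based fault detection: a Luenberger-style estimator and a
# residual threshold flag an additive sensor fault (all values invented).
a, c = 0.95, 1.0          # plant: x[k+1] = a x[k],  measurement: y[k] = c x[k]
L = 0.5                   # observer gain
threshold = 0.2

x, x_hat = 1.0, 0.0
for k in range(60):
    y = c * x
    if k >= 30:
        y += 0.5          # additive sensor fault injected at k = 30
    residual = y - c * x_hat
    if abs(residual) > threshold and k > 5:   # ignore the initial transient
        print(f"fault flagged at step {k}, residual = {residual:+.2f}")
        break
    # observer and plant updates
    x_hat = a * x_hat + L * residual
    x = a * x
```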

Li, Qing, 1972-

2001-01-01

203

Climatically induced lake level changes during the last two millennia as reflected in sediments of Laguna Potrok Aike, southern Patagonia (Santa Cruz, Argentina)  

Microsoft Academic Search

The volcanogenic lake Laguna Potrok Aike, Santa Cruz, Argentina, reveals an unprecedented continuous high resolution climatic record for the steppe regions of southern Patagonia. With the applied multi-proxy approach, rapid climatic changes before the turn of the first millennium were detected, followed by medieval droughts which are intersected by moist and/or cold periods of varying durations and intensities. The 'total

Torsten Haberzettl; Michael Fey; Andreas Lücke; Nora Maidana; Christoph Mayr; Christian Ohlendorf; Frank Schäbitz; Gerhard H. Schleser; Michael Wille; Bernd Zolitschka

2005-01-01

204

An Evaluation of Seagrass Community Structure and Its Role in Green Sea Turtle (Chelonia mydas) Forgaging Dynamics in the Lower Laguna Madre  

E-print Network

Satellite tracking data of juvenile and subadult green turtles captured and released by Texas A&M University at Galveston's Sea Turtle and Fisheries Ecology Research Lab (STFERL) from the lower Laguna Madre indicate green sea turtles (Chelonia mydas...

Weatherall, Tracy F.

2010-07-14

205

Improving Multiple Fault Diagnosability using Possible Conflicts  

NASA Technical Reports Server (NTRS)

Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.
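A small way to see why multiple faults reduce diagnosability is to compare fault signatures over a set of residuals: when the combined signature of a fault pair equals that of another candidate, the candidates cannot be separated. The sketch below checks this on an invented boolean signature matrix; it is only a toy illustration and does not reproduce the paper's event-based framework or its Possible Conflicts decomposition.

```python
# Toy diagnosability check over boolean fault signatures (invented matrix;
# not the event-based, Possible Conflicts approach of the paper).
from itertools import combinations

# rows: faults, columns: which residuals each fault affects (1 = affected)
signatures = {
    "f1": (1, 0, 0),
    "f2": (0, 1, 0),
    "f3": (1, 1, 0),
}
n_residuals = 3

def combined(faults):
    """Naive multiple-fault signature: OR of the single-fault signatures."""
    return tuple(int(any(signatures[f][i] for f in faults))
                 for i in range(n_residuals))

candidates = [frozenset(c) for r in (1, 2)
              for c in combinations(signatures, r)]
by_signature = {}
for cand in candidates:
    by_signature.setdefault(combined(cand), []).append(sorted(cand))

for sig, cands in by_signature.items():
    tag = "ambiguous" if len(cands) > 1 else "unique"
    print(sig, tag, cands)
# (1, 1, 0) is shared by {f3}, {f1, f2} and other pairs: not diagnosable
```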

Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

2012-01-01

206

Fault-Tolerant Quantum Computation for Local Leakage Faults  

E-print Network

We provide a rigorous analysis of fault-tolerant quantum computation in the presence of local leakage faults. We show that one can systematically deal with leakage by using appropriate leakage-reduction units such as quantum teleportation. The leakage noise is described by a Hamiltonian and the noise is treated coherently, similar to general non-Markovian noise analyzed in Refs. quant-ph/0402104 and quant-ph/0504218. We describe ways to limit the use of leakage-reduction units while keeping the quantum circuits fault-tolerant and we also discuss how leakage reduction by teleportation is naturally achieved in measurement-based computation.

Panos Aliferis; Barbara M. Terhal

2006-05-26

207

Availability of ground water in parts of the Acoma and Laguna Indian Reservations, New Mexico  

USGS Publications Warehouse

The need for additional water has increased in recent years on the Acoma and Laguna Indian Reservations in west-central New Mexico because the population and per capita use of water have increased; the tribes also desire water for light industry, for more modern schools, and to increase their irrigation program. Many wells have been drilled in the area, but most have been disappointing because of small yields and poor chemical quality of the water. The topography in the Acoma and Laguna Indian Reservations is controlled primarily by the regional and local dip of alternating beds of sandstone and shale and by the igneous complex of Mount Taylor. The entrenched alluvial valley along the Rio San Jose, which traverses the area, ranges in width from about 0.4 mile to about 2 miles. The climate is characterized by scant rainfall, which occurs mainly in summer, low relative humidity, and large daily fluctuations of temperature. Most of the surface water enters the area through the Rio San Jose. The average annual streamflow past the gaging station Rio San Jose near Grants, N. Mex. is about 4,000 acre-feet. Tributaries to the Rio San Jose within the area probably contribute about 1,000 acre-feet per year. At the present time, most of the surface water is used for irrigation. Ground water is obtained from consolidated sedimentary rocks that range in age from Triassic to Cretaceous, and from unconsolidated alluvium of Quaternary age. The principal aquifers are the Dakota Sandstone, the Tres Hermanos Sandstone Member of the Mancos Shale, and the alluvium. The Dakota Sandstone yields 5 to 50 gpm (gallons per minute) of water to domestic and stock wells. The Tres Hermanos sandstone Member generally yields 5 to 20 gpm of water to domestic and stock wells. Locally, beds of sandstone in the Chinle and Morrison Formations, the Entrada Sandstone, and the Bluff Sandstone also yield small supplies of water to domestic and stock wells. The alluvium yields from 2 gpm to as much as 150 gpm of water to domestic and stock wells. Thirteen test wells were drilled in a search for usable supplies of ground water for pueblo and irrigation supply and to determine the geologic and hydrologic characteristics of the water-bearing material. The performance of six of the test wells suggests that the sites are favorable for pueblo or irrigation supply wells. The yield of the other seven wells was too small or the quality of the water was too poor for development of pueblo or irrigation supply to be feasible. However, the water from one of the seven wells was good in chemical quality, and the yield was large enough to supply a few homes with water. The tests suggest that the water in the alluvium of the Rio San Jose valley is closely related to the streamflow and that it might be possible to withdraw from the alluvium in summer and replenish it in winter. The surface flow in summer might be decreased by extensive pumpage of ground water, but on the other hand, more of the winter flow could be retained in the area by storage in the ground-water reservoir. Wells could be drilled along the axis of the valley, and the water could be pumped into systems for distribution to irrigated farms. The chemical quality of ground water in the area varies widely from one stratigraphic unit to another and laterally within each unit and commonly the water contains undesirably large amounts of sulfate. However, potable water has been obtained locally from all the aquifers. 
The water of best quality seemingly is in the Tres Hermanos Sandstone Member of the Mancos Shale and in the alluvium north of the Rio San Jose. The largest quantity of water that is suitable for irrigation is in the valley fill along the Rio San Jose. Intensive pumping of ground water from aquifers containing water of good quality may draw water of inferior chemical quality into the wells.

Dinwiddie, George A.; Motts, Ward Sundt

1964-01-01

208

Faulted Sedimentary Rocks  

NASA Technical Reports Server (NTRS)

27 June 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows some of the layered, sedimentary rock outcrops that occur in a crater located at 8°N, 7°W, in western Arabia Terra. Dark layers and dark sand have enhanced the contrast of this scene. In the upper half of the image, one can see numerous lines that offset the layers. These lines are faults along which the rocks have broken and moved. The regularity of layer thickness and erosional expression are taken as evidence that the crater in which these rocks occur might once have been a lake. The image covers an area about 1.9 km (1.2 mi) wide. Sunlight illuminates the scene from the lower left.

2004-01-01

209

Arc fault detection system  

DOEpatents

An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between current flowing from secondary windings located in a current transformer coupled to a power supply side of a switchboard, and a total current induced in secondary windings coupled to a load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and a power supply to remove the supply of power to the switchboard.
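The trip logic described here reduces to comparing the supply-side and load-side currents reported by the two sets of current-transformer secondary windings and tripping the upstream breaker when they differ by more than a pickup value. A minimal sketch, with invented current and pickup values, is below.

```python
# Toy differential-current trip check (invented numbers; the relay pickup
# value and currents are illustrative only).
def differential_trip(i_supply_amps, i_load_amps, pickup_amps=5.0):
    """Trip the upstream breaker when supply-side and load-side currents
    differ by more than the pickup, suggesting current lost to an arc."""
    return abs(i_supply_amps - i_load_amps) > pickup_amps

print(differential_trip(400.0, 399.0))   # False: normal load, no arc fault
print(differential_trip(400.0, 360.0))   # True: current diverted by an arc
```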

Jha, Kamal N. (Bethel Park, PA)

1999-01-01

210

Arc fault detection system  

DOEpatents

An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between current flowing from secondary windings located in a current transformer coupled to a power supply side of a switchboard, and a total current induced in secondary windings coupled to a load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and a power supply to remove the supply of power to the switchboard. 1 fig.

Jha, K.N.

1999-05-18

211

Fault Injection Campaign for a Fault Tolerant Duplex Framework  

NASA Technical Reports Server (NTRS)

Fault tolerance is an efficient approach adopted to avoid or reduce the damage of a system failure. In this work we present the results of a fault injection campaign we conducted on the Duplex Framework (DF). The DF is software developed by the UCLA group [1, 2] that uses a fault-tolerant approach and allows two replicas of the same process to run on two different nodes of a commercial off-the-shelf (COTS) computer cluster. A third process, running on a different node, constantly monitors the results computed by the two replicas, and restarts the two replica processes if an inconsistency in their computation is detected. This approach is very cost efficient and can be adopted to control processes on spacecraft, where the fault rate produced by cosmic rays is not very high.
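The replica-and-monitor idea can be sketched in ordinary user space: run the same task in two processes, compare the results in a third role, and restart on disagreement. The sketch below is an invented toy, not the UCLA Duplex Framework; the fault is simulated by letting one replica occasionally perturb its result.

```python
# Toy duplex-style replica comparison: two worker processes compute the same
# task and a monitor restarts them on mismatch (invented illustration).
import multiprocessing as mp
import random

def replica(task, out_q, flaky=False):
    result = sum(task)                     # the replicated computation
    if flaky and random.random() < 0.3:    # simulate an occasional soft error
        result += 1
    out_q.put(result)

def run_duplex(task, max_restarts=5):
    for attempt in range(max_restarts):
        q = mp.Queue()
        workers = [mp.Process(target=replica, args=(task, q, i == 1))
                   for i in range(2)]
        for w in workers: w.start()
        for w in workers: w.join()
        r1, r2 = q.get(), q.get()
        if r1 == r2:
            return r1                      # replicas agree: accept result
        print(f"mismatch on attempt {attempt}: {r1} != {r2}, restarting")
    raise RuntimeError("replicas never agreed")

if __name__ == "__main__":
    print("result:", run_duplex(list(range(100))))
```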

Sacco, Gian Franco; Ferraro, Robert D.; von llmen, Paul; Rennels, Dave A.

2007-01-01

212

The Effects of Fault Counting Methods on Fault Model Quality  

NASA Technical Reports Server (NTRS)

In this paper, we describe three other fault-counting techniques and compare the models resulting from the application of two of those methods with the models obtained from the application of our proposed fault-counting definition.

Nikora, Allen P.; Munson, John C.

2004-01-01

213

Fault-Tolerant Quantum Walks  

E-print Network

Quantum walks are expected to serve important modelling and algorithmic applications in many areas of science and mathematics. Although quantum walks have been successfully implemented physically in recent times, no major efforts have been made to combat the error associated with these physical implementations in a fault-tolerant manner. In this paper, we propose a systematic method to implement fault-tolerant quantum walks in discrete time on arbitrarily complex graphs, using quantum states encoded with the Steane code and a set of universal fault tolerant matrix operations.

S. D. Freedman; Y. H. Tong; J. B. Wang

2014-08-06

214

An Algebra of Fault Tolerance  

E-print Network

Every system of any significant size is created by composition from smaller sub-systems or components. It is thus fruitful to analyze the fault tolerance of a system as a function of its composition. In this paper, two basic types of system composition are described, and an algebra to describe the fault tolerance of composed systems is derived. The set of systems forms monoids under the two composition operators, and a semiring when both are considered together. A partial ordering relation between systems is used to compare their fault-tolerance behaviors.
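The value of reasoning about fault tolerance through composition can be illustrated numerically with the familiar serial and parallel composition of component reliabilities, each of which behaves like a monoid with its own neutral element. This is only a loose analogy; the paper's algebra is defined over abstract systems rather than reliability numbers, and the functions and values below are invented.

```python
# Loose numerical analogy: serial and parallel composition of component
# reliabilities (not the paper's algebra over abstract systems).
import math
from functools import reduce

def serial(*r):
    """All components must work (composition 'in series')."""
    return reduce(lambda acc, x: acc * x, r, 1.0)

def parallel(*r):
    """At least one component must work (redundant composition)."""
    return 1.0 - reduce(lambda acc, x: acc * (1.0 - x), r, 1.0)

r_pump, r_backup, r_controller = 0.95, 0.90, 0.99
system = serial(parallel(r_pump, r_backup), r_controller)
print(f"system reliability: {system:.4f}")

# Monoid-style identities: a perfect component is neutral for 'serial',
# a hopeless component is neutral for 'parallel'.
assert math.isclose(serial(0.8, 1.0), 0.8)
assert math.isclose(parallel(0.8, 0.0), 0.8)
```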

Shrisha Rao

2009-07-20

215

Granular packings and fault zones  

PubMed

The failure of a two-dimensional packing of elastic grains is analyzed using a numerical model. The packing fails through formation of shear bands or faults. During failure there is a separation of the system into two grain-packing states. In a shear band, local "rotating bearings" are spontaneously formed. The bearing state is favored in a shear band because it has a low stiffness against shearing. The "seismic activity" distribution in the packing has the same characteristics as that of the earthquake distribution in tectonic faults. The directions of the principal stresses in a bearing are reminiscent of those found at the San Andreas Fault. PMID:11017335

Astrom; Herrmann; Timonen

2000-01-24

216

Data parallel sequential circuit fault simulation  

Microsoft Academic Search

Sequential circuit fault simulation is a compute-intensive problem. Parallel simulation is one method for reducing fault simulation time. In this paper, we discuss a novel technique to partition the fault set for fault-parallel simulation of sequential circuits on multiple processors. When applied statically, the technique can scale well for up to thirty-two processors on an Ethernet.

Minesh B. Amin; Bapiraju Vinnakota

1996-01-01

217

Seismic Fault Rheology and Earthquake Dynamics  

E-print Network

Chapter 5, by James R. Rice and Massimo Cocco, reporting from the Workshop on The Dynamics of Fault Zones. Under the subtopic "Rheology of Fault Rocks and Their Surroundings," the authors address critical research issues for understanding the seismic response of fault zones.

218

Inductive Fault Analysis of MOS Integrated Circuits  

Microsoft Academic Search

Inductive Fault Analysis (IFA) is a systematic procedure for predicting all the faults that are likely to occur in an MOS integrated circuit or subcircuit. The three major steps of the IFA procedure are: (1) generation of physical defects using statistical data from the fabrication process; (2) extraction of circuit-level faults caused by these defects; and (3) classification of fault types ...

John Shen; W. Maly; F. J. Ferguson

1985-01-01

219

Normal faults geometry and morphometry on Mars  

NASA Astrophysics Data System (ADS)

In this report, we show how the geometry and degradation history of normal fault scarps can be assessed using high-resolution imagery and topography. We show how the initial geometry of the faults can be inferred from faulted craters, and we demonstrate how a comparative morphometric analysis of fault scarps can be used to study erosion rates through time on Mars.

Vaz, D. A.; Spagnuolo, M. G.; Silvestro, S.

2014-04-01

220

Fault-Trajectory Approach for Fault Diagnosis on Analog Circuits

E-print Network

Carlos Eduardo Savioli and co-authors discuss the suitability of the fault-trajectory approach for fault diagnosis on analog circuits, and a method built on this concept for ATPG-based diagnosis of faults on analog networks; the method relies on evolutionary techniques ...

Paris-Sud XI, Université de

221

Fault Tolerant Control with Additive Compensation for Faults in an Automotive Damper  

E-print Network

A novel fault-tolerant controller is proposed for an automotive damper, with an additive compensation mechanism used to accommodate actuator faults. The compensation mechanism is based on a robust fault ...

Paris-Sud XI, Université de

222

An Approach to Fault Modeling and Fault Seeding Using the Program Dependence Graph

E-print Network

We present a fault-classification scheme and a fault-seeding method that are based on the manifestation of faults in the program ...

Harrold, Mary Jean

223

A Fault Prediction Approach for Process Plants using Fault Tree Analysis in Sensor Malfunction  

Microsoft Academic Search

In this paper, a fault prediction approach for process plants using fault tree analysis is presented for situations in which a sensor provides no information or false information. The fault propagation model is constructed from causal relationships obtained by fault tree analysis (FTA). Knowledge about system failure, which is obtained from the fault propagation model, is represented as abnormality patterns in process ...

Zongxiao Yang; Xiaobo Yuan; Zhiqiang Feng; Kazuhiko Suzuki; Akira Inoue

2006-01-01

224

Tutorial: Advanced fault tree applications using HARP  

NASA Technical Reports Server (NTRS)

Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.

Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.

1993-01-01

225

Recognition of Active Faults and Stress Field  

NASA Astrophysics Data System (ADS)

Around plate-boundary regions, the directions of maximum and minimum stress related to plate motion are key to the recognition of active faults. For example, it is a typical view that Japan has many N-S trending reverse faults, NE-SW and NW-SE trending strike-slip faults, and few normal faults (only near volcanoes), because compressional stress in an E-W direction is dominant, caused by subduction of the Pacific Plate beneath the North American Plate. After the 2011 Tohoku earthquake (Mj 9.0), however, many earthquakes with normal-fault mechanisms occurred in the coastal region of northeastern Japan. On 11 April 2011, the Fukushima Hamadori earthquake (Mj 7.0) produced surface ruptures along two faults, the Idosawa fault and the Yunotake fault, both recognized as active faults by the Research Group for Active Faults of Japan (1980, 1991). It had a strong impact on active fault studies, not only because of the appearance of two significant surface fault traces with maximum displacement of up to 2.1 m, but also because of the reactivation of normal faults under an E-W compressional stress field. When we identify active faults, one key question is whether the direction of slip on the fault is consistent with the stress field in the area. There is also a technique for recognizing whether a fault is active by using data on the direction of stress in the field and the geometry of the fault plane. Although this is useful for faults in rock without overlying Quaternary deposits, we should keep in mind that active faults may be reactivated by temporary stress conditions following the generation of large earthquakes.

Azuma, T.

2012-12-01

226

Developing Fault Models for Space Mission Software  

NASA Technical Reports Server (NTRS)

A viewgraph presentation on the development of fault models for space mission software is shown. The topics include: 1) Goal: Improve Understanding of Technology Fault Generation Process; 2) Required Measurement; 3) Measuring Structural Evolution; 4) Module Attributes; 5) Principal Components of Raw Metrics; 6) The Measurement Process; 7) View of Structural Evolution at the System and Module Level; 8) Identifying and Counting Faults; 9) Fault Enumeration; 10) Modeling Fault Content; 11) Modeling Results; 12) Current and Future Work; and 13) Discussion and Conclusions.

Nikora, Allen P.; Munson, John C.

2003-01-01

227

The fault-tree compiler  

NASA Technical Reports Server (NTRS)

The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise to five digits (within the limits of double-precision floating-point arithmetic). The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
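As an illustration of the kind of computation such a tool performs, the following Python sketch evaluates a small fault tree built from the five gate types named above, assuming statistically independent basic events. It is not the Fault Tree Compiler itself; the gate formulas are the standard ones for independent inputs, and the example tree and probabilities are invented.

from itertools import combinations

def AND(*ps):
    # All independent inputs must fail.
    out = 1.0
    for p in ps:
        out *= p
    return out

def OR(*ps):
    # At least one independent input fails.
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

def XOR(p, q):
    # Exactly one of two independent inputs fails.
    return p * (1.0 - q) + q * (1.0 - p)

def INVERT(p):
    return 1.0 - p

def M_OF_N(m, ps):
    # At least m of the n independent inputs fail.
    n = len(ps)
    total = 0.0
    for k in range(m, n + 1):
        for idx in combinations(range(n), k):
            term = 1.0
            for i in range(n):
                term *= ps[i] if i in idx else (1.0 - ps[i])
            total += term
    return total

# Example (invented) tree: top event = (A AND B) OR (2-of-3 of C, D, E).
A, B, C, D, E = 1e-3, 2e-3, 5e-3, 5e-3, 5e-3
print(f"Top event probability: {OR(AND(A, B), M_OF_N(2, [C, D, E])):.3e}")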

Martensen, Anna L.; Butler, Ricky W.

1987-01-01

228

Slip Rates on young faults  

NSDL National Science Digital Library

Students use measured ages and offsets of Quaternary surfaces to determine the vertical slip rates of a young fault. Students must then determine whether vertical slip rates have varied significantly through time.
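The arithmetic behind the exercise is simply offset divided by age, with a unit conversion; a minimal sketch in Python, using invented offsets and ages, is shown below.

def vertical_slip_rate(offset_m: float, age_ka: float) -> float:
    # Convert metres of offset and thousands of years of age to mm/yr.
    return (offset_m * 1000.0) / (age_ka * 1000.0)

# Two surfaces of different age cut by the same fault (illustrative values):
print(vertical_slip_rate(offset_m=4.0, age_ka=10.0))   # 0.4 mm/yr
print(vertical_slip_rate(offset_m=12.0, age_ka=40.0))  # 0.3 mm/yr, so the rate has varied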

Audrey Huerta

229

Fault Trace: Marin County, California  

NSDL National Science Digital Library

This photograph shows the trace of a fault (in trench phase) as it passes beneath a barn. The trace developed during the April 18, 1906 San Francisco Earthquake. The location is the Skinner Ranch, near Olema, Marin County, California.

230

Characterizing the eolian sediment component in the lacustrine record of Laguna Potrok Aike (southeastern Patagonia)  

NASA Astrophysics Data System (ADS)

Southern South America with its extended dry areas was one of the major sources for dust in the higher latitudes of the southern hemisphere during the last Glacial, as was deduced from fingerprinting of dust particles found in Antarctic ice cores. The amount of dust that was mobilized is mostly related to the strength and latitudinal position of the Southern Hemisphere Westerly Winds (SWW). How exactly the SWW shifted between glacial and interglacial times and what consequences such shifts had for ocean and atmospheric circulation changes during the last deglaciation is currently under debate. Laguna Potrok Aike (PTA), a lake situated in the middle of the dust source area, offers the opportunity to arrive at a better understanding of past SWW changes and their associated consequences for dust transport. For this task, a sediment record of the past ~51 ka is available from a deep drilling campaign (PASADO). From this 106 m long profile, 76 samples representing the different lithologies of the sediment sequence were selected to characterize an eolian sediment component. Prior to sampling of the respective core intervals, magnetic susceptibility was measured and the element composition was determined by XRF scanning on fresh, undisturbed sediment. After sampling and freeze drying, physical, chemical and mineralogical sediment properties were determined before separation and, after separation of each sample into six grain-size classes, for each fraction separately. SEM techniques were used to verify the eolian origin of grains. The aim of this approach is to isolate an exploitable fingerprint of the eolian sediment component in terms of grain size, physical properties, geochemistry and mineralogy. The challenging aspect is that such a fingerprint should be based on high-resolution down-core scanning techniques, so that time-consuming techniques such as grain-size measurements by laser detection can be avoided. A first evaluation of the dataset indicates that magnetic susceptibility, which is often used as a tracer for the eolian sediment component in marine sediments, probably does not yield a robust signal of eolian input in this continental setting because it is variably contained in the silt as well as in the fine sand fraction. XRF scanning of powdered samples of the different grain-size fractions shows that some elements are characteristically enriched in the clay, silt or medium sand fractions, which might allow a geochemical fingerprinting of these fractions. For instance, identification of higher amounts of clay in a sample may be possible based on its enrichment in heavy metals (Zn, Cu, Pb) and/or Fe. Higher amounts of silt may be recognized by Zr and/or Y enrichment. Hence, unmixing of the signal stored in the sedimentary record of PTA with tools of multivariate statistics is a necessary step to characterize the eolian fraction. The 51 ka BP sediment record of PTA might then be used for a reconstruction of dust availability in the high-latitude source areas of the southern hemisphere.

Ohlendorf, C.; Gebhardt, C.

2013-12-01

231

Fault-tolerant TCP mechanisms  

E-print Network

Master of Science thesis in Computer Science by Suresh Kumar Satapati, Texas A&M University, August 2000, on fault-tolerant TCP mechanisms. Committee: Riccardo Bettati (chair), Nitin H. Vaidya, A. L. Narasimha Reddy; Wei Zhao (head of department).

Satapati, Suresh Kumar

2000-01-01

232

Types of Faults in California  

NSDL National Science Digital Library

This educational movie, made using SCEC-VDO, shows the differences between strike-slip faults and thrust faults in southern California. The Southern California Earthquake Center's Virtual Display of Objects (SCEC-VDO) is 3D visualization software that allows users to display, study, and make movies of earthquakes as they occur globally. SCEC-VDO was developed by interns of SCEC Undergraduate Studies in Earthquake Information Technology (UseIT) under the supervision of Sue Perry and Tom Jordan.

Interns of SCEC Undergraduate Studies in Earthquake Information Technology UseIT under the supervision of Sue Perry and Tom Jordan.

233

Fault Diagnosis utilizing Structural Analysis  

Microsoft Academic Search

When designing model-based fault-diagnosis systems, the use of consistency relations (also called, e.g., parity relations) is a common choice. Different subsets are sensitive to different subsets of faults, and thereby isolation can be achieved. This paper presents an algorithm for finding a small set of submodels that can be used to derive consistency relations with the highest possible diagnosis ...

Mattias Krysander; Mattias Nyberg

234

Fault Tree Analysis: A Bibliography  

NASA Technical Reports Server (NTRS)

Fault tree analysis is a top-down approach to the identification of process hazards. It is one of the best methods for systematically identifying and graphically displaying the many ways something can go wrong. This bibliography references 266 documents in the NASA STI Database that contain the major concepts, fault tree analysis, risk, and probability theory, in the basic index or major subject terms. An abstract is included with most citations, followed by the applicable subject terms.

2000-01-01

235

Fault-tolerant rotary actuator  

DOEpatents

A fault-tolerant actuator module is provided that contains, in a single containment shell, two actuator subsystems laid out either asymmetrically or symmetrically. Fault tolerance in the actuators of the present invention is achieved by the employment of dual sets of equal resources. Dual resources are integrated into single modules, with each having the external appearance and functionality of a single set of resources.

Tesar, Delbert

2006-10-17

236

Hardware Fault Simulator for Microprocessors  

NASA Technical Reports Server (NTRS)

A breadboarded circuit is faster and more thorough than a software simulator. An elementary fault simulator for an AND gate uses three gates and a shift register to simulate stuck-at-one or stuck-at-zero conditions at the inputs and output. Experimental results showed that the hardware fault simulator for a microprocessor gave results faster than a software simulator by two orders of magnitude, with one test being applied every 4 microseconds.
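A software analogue of stuck-at fault simulation for a single AND gate is easy to write down; the Python fragment below (not the breadboard design described above) simply enumerates which input vectors detect each assumed stuck-at fault.

def and_gate(a: int, b: int) -> int:
    return a & b

def and_gate_with_fault(a: int, b: int, fault: str) -> int:
    # Inject an assumed stuck-at fault on an input or on the output.
    if fault == "a_sa0": a = 0
    if fault == "a_sa1": a = 1
    if fault == "b_sa0": b = 0
    if fault == "b_sa1": b = 1
    out = a & b
    if fault == "out_sa0": out = 0
    if fault == "out_sa1": out = 1
    return out

# A test vector detects a fault when the faulty output differs from the good one.
for fault in ["a_sa0", "a_sa1", "out_sa0", "out_sa1"]:
    detecting = [(a, b) for a in (0, 1) for b in (0, 1)
                 if and_gate(a, b) != and_gate_with_fault(a, b, fault)]
    print(fault, "detected by", detecting)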

Hess, L. M.; Timoc, C. C.

1983-01-01

237

High-Resolution Paleosalinity Reconstruction From Laguna de la Leche, North Coastal Cuba, Using Sr, O, and C Isotopes  

NASA Astrophysics Data System (ADS)

Isotopes of Sr, O, and C were studied from a 227-cm long sediment core to develop a high-resolution paleosalinity record to investigate the paleohydrology of Laguna de la Leche, north coastal Cuba, during the Middle to Late Holocene. Palynological, plant macrofossil, foraminiferal, ostracode, gastropod, and charophyte data from predominantly euryhaline taxa, coupled with a radiocarbon-based chronology, indicate that the wetland evolved through four phases: (1) an oligohaline lake existed from 6200 to 4800 cal yr B.P.; (2) water level in the lake increased and the system freshened from 4800 to 4200 cal yr B.P.; (3) a mesohaline lagoon replaced the lake 4200 cal yr B.P.; and (4) mangroves enclosed the lagoon beginning 1700 cal yr B.P., forming a mesohaline lake. Isotopic ratios were measured on specimens of the euryhaline foraminifer Ammonia beccarii, although several measurements were also made on other calcareous microfossils in order to identify potential taphonomic and/or vital effects. The 87Sr/86Sr results show that the average salinity of Laguna de la Leche was 1.7 ppt during the early lake phase and 8 ppt during the lagoon phase - a change driven by relative sea level rise. The δ18O results do not record the salinity increase seen in the 87Sr/86Sr data, but instead indicate high evaporation from the lake surface. Variability in δ13C was controlled by plant productivity, episodic marine incursions, and vegetation community change. There is some evidence for a seasonal effect and for the lateral transport of microfossils prior to burial. Our results show that Sr isotopes, while often cited as a powerful paleosalinity tool, should be used in conjunction with other indicators when investigating paleosalinity trends; relying solely on any single isotopic or ecological indicator can lead to inaccurate results, especially in semi-enclosed and closed hydrological systems.
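The reasoning behind a Sr-isotope paleosalinity estimate is two-endmember conservative mixing between seawater and local freshwater. The Python sketch below illustrates that logic only; the endmember Sr concentrations, isotope ratios, and salinities are assumed round numbers, not values from this study.

# Assumed endmembers (illustrative only).
SEAWATER = {"sr_ppm": 8.0, "ratio": 0.70918, "salinity": 35.0}
FRESHWATER = {"sr_ppm": 0.1, "ratio": 0.70800, "salinity": 0.0}

def mixing_curve(f_sea: float):
    # Return (87Sr/86Sr, salinity) of a mixture with seawater mass fraction f_sea.
    sw, fw = SEAWATER, FRESHWATER
    sr_total = f_sea * sw["sr_ppm"] + (1 - f_sea) * fw["sr_ppm"]
    ratio = (f_sea * sw["sr_ppm"] * sw["ratio"]
             + (1 - f_sea) * fw["sr_ppm"] * fw["ratio"]) / sr_total
    salinity = f_sea * sw["salinity"] + (1 - f_sea) * fw["salinity"]
    return ratio, salinity

def salinity_from_ratio(measured_ratio: float, steps: int = 10000) -> float:
    # Invert the mixing curve numerically for a measured 87Sr/86Sr value.
    best_f = min((i / steps for i in range(steps + 1)),
                 key=lambda f: abs(mixing_curve(f)[0] - measured_ratio))
    return mixing_curve(best_f)[1]

print(round(salinity_from_ratio(0.70900), 1))  # approximate salinity in ppt

Because Sr concentrations in seawater are much higher than in most freshwater, the isotope ratio responds strongly to small marine contributions, which is part of why the method is sensitive at low salinities.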

Peros, M. C.; Reinhardt, E. G.; Schwarcz, H. P.; Davis, A. M.

2008-12-01

238

Sr Isotopes and Migration of Prairie Mammoths (Mammuthus columbi) from Laguna de las Cruces, San Luis Potosi, Mexico  

NASA Astrophysics Data System (ADS)

Assessing the mobility of ancient humans is a major issue for anthropologists. For more than 25 years, Sr isotopes have been used as a resourceful tracer tool in this context. A comparison of the 87Sr/86Sr ratios found in tooth enamel and in bone is performed to determine whether the human skeletal remains belonged to a local or a migrant. Sr in bone approximately reflects the isotopic composition of the geological region where the person lived before death, whereas the Sr isotopic system in tooth enamel is thought to remain closed and thus conserves the isotope ratio acquired during childhood. Sr isotope ratios are obtained through the geologic substrate and its overlying soil, from which an individual obtained food and water; these ratios are in turn incorporated into the dentition and skeleton during tissue formation. In previous studies from Teotihuacan, Mexico, we have shown that a three-step leaching procedure on tooth enamel samples is important to ensure that only the biogenic Sr isotope contribution is analyzed. The same Sr isotopic tools can be applied to ancient animal migration patterns. To determine or rule out the mobility of prairie mammoths (Mammuthus columbi) found at Laguna de las Cruces, San Luis Potosi, México, the leaching procedure was applied to six molar samples from several fossil remains. The initial hypothesis was to use 87Sr/86Sr values to verify whether the mammoth population was a mixture of individuals from various herds and, further, by comparing their Sr isotopic composition with that of plants and soils, to confirm their geographic origin. The dissimilar Sr results point to two distinct mammoth groups. The mammoth population from Laguna de las Cruces was therefore not a family unit, because it was composed of individuals originating from different localities. Only one individual was identified as local. Others could have walked as much as 100 km to find food and water sources.

Solis-Pichardo, G.; Perez-Crespo, V.; Schaaf, P. E.; Arroyo-Cabrales, J.

2011-12-01

239

Lateglacial and Holocene climatic changes in south-eastern Patagonia inferred from carbonate isotope records of Laguna Potrok Aike (Argentina)  

NASA Astrophysics Data System (ADS)

First results of strontium, calcium, carbon and oxygen isotope analyses of bulk carbonates from a 106 m long sediment record of Laguna Potrok Aike, located in southern Patagonia are presented. Morphological and isotopic investigations of µm-sized carbonate crystals in the sediment reveal an endogenic origin for the entire Holocene. During this time period the calcium carbonate record of Laguna Potrok Aike turned out to be most likely ikaite-derived. As ikaite precipitation in nature has only been observed in a narrow temperature window between 0 and 7 °C, the respective carbonate oxygen isotope ratios serve as a proxy of hydrological variations rather than of palaeotemperatures. We suggest that oxygen isotope ratios are sensitive to changes of the lake water balance induced by intensity variations of the Southern Hemisphere Westerlies and discuss the role of this wind belt as a driver for climate change in southern South America. In combination with other proxy records the evolution of westerly wind intensities is reconstructed. Our data suggest that weak SHW prevailed during the Lateglacial and the early Holocene, interrupted by an interval with strengthened Westerlies between 13.4 and 11.3 ka cal BP. Wind strength increased at 9.2 ka cal BP and significantly intensified until 7.0 ka cal BP. Subsequently, the wind intensity diminished and stabilised to conditions similar to present day after a period of reduced evaporation during the "Little Ice Age". Strontium isotopes (87Sr/86Sr ratio) were identified as a potential lake-level indicator and point to a lowering from overflow conditions during the Glacial (~17 ka cal BP) to lowest lake levels around 8 ka cal BP. Thereafter the strontium isotope curve resembles the lake-level curve which is stepwise rising until the "Little Ice Age". The variability of the Ca isotope composition of the sediment reflects changes in the Ca budget of the lake, indicating higher degrees of Ca utilisation during the period with lowest lake level.

Oehlerich, M.; Mayr, C.; Gussone, N.; Hahn, A.; Hölzl, S.; Lücke, A.; Ohlendorf, C.; Rummel, S.; Teichert, B. M. A.; Zolitschka, B.

2015-04-01

240

Passive fault current limiting device  

DOEpatents

A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit in a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel connected coils of copper, a high temperature superconductor or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux density produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils which results in an increase in the impedance in the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. The major voltage during a fault condition is in the coils wound on the common core in a preferred embodiment.

Evans, Daniel J. (Wheeling, IL); Cha, Yung S. (Darien, IL)

1999-01-01

241

Passive fault current limiting device  

DOEpatents

A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit in a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel connected coils of copper, a high temperature superconductor or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux density produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils which results in an increase in the impedance in the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. The major voltage during a fault condition is in the coils wound on the common core in a preferred embodiment. 6 figs.

Evans, D.J.; Cha, Y.S.

1999-04-06

242

Software Fault Tolerance: A Tutorial  

NASA Technical Reports Server (NTRS)

Because of our present inability to produce error-free software, software fault tolerance is and will continue to be an important consideration in software systems. The root cause of software design errors is the complexity of the systems. Compounding the problems in building correct software is the difficulty in assessing the correctness of software for highly complex systems. After a brief overview of the software development processes, we note how hard-to-detect design faults are likely to be introduced during development and how software faults tend to be state-dependent and activated by particular input sequences. Although component reliability is an important quality measure for system level analysis, software reliability is hard to characterize and the use of post-verification reliability estimates remains a controversial issue. For some applications software safety is more important than reliability, and fault tolerance techniques used in those applications are aimed at preventing catastrophes. Single version software fault tolerance techniques discussed include system structuring and closure, atomic actions, inline fault detection, exception handling, and others. Multiversion techniques are based on the assumption that software built differently should fail differently and thus, if one of the redundant versions fails, it is expected that at least one of the other versions will provide an acceptable output. Recovery blocks, N-version programming, and other multiversion techniques are reviewed.
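As a concrete illustration of one of the multiversion techniques reviewed above, the Python sketch below implements a generic recovery block: a primary routine and alternates are tried in turn, and an acceptance test decides whether a result can be released. The routines and acceptance test are invented examples, not taken from the tutorial.

import math

def recovery_block(x, primary, alternates, acceptance_test):
    # Try the primary, then each alternate, until a result passes the test.
    for routine in [primary, *alternates]:
        try:
            result = routine(x)
        except Exception:
            continue  # an exception counts as a failed version
        if acceptance_test(x, result):
            return result
    raise RuntimeError("all versions failed the acceptance test")

# Two independently written square-root routines and a simple acceptance test.
primary = lambda v: v ** 0.5
alternate = math.sqrt
accept = lambda v, r: abs(r * r - v) < 1e-9

print(recovery_block(2.0, primary, [alternate], accept))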

Torres-Pomales, Wilfredo

2000-01-01

243

Static Rupture Model of the 2010 M7.2 El Mayor-Cucapah Earthquake from ALOS, ENVISAT, SPOT and GPS Data  

NASA Astrophysics Data System (ADS)

The April 4, 2010 "Easter Sunday" earthquake on the US-Mexico border was the largest event to strike Southern California in the last 18 years. The earthquake occurred on a northwest trending fault close to, but not coincident with the identified 1892 Laguna Salada rupture. We investigate coseismic deformation due to the 2010 El Mayor-Cucapah earthquake using Synthetic Aperture Radar (SAR) imagery form ENVISAT and ALOS satellites, optical imagery from SPOT-5 satellite, and continuous and campaign GPS data. The earliest campaign postseismic GPS survey was conducted within days after the earthquake, and provided the near-field cosesmic offsets. Along-track SAR interferograms and amplitude cross-correlation of optical images reveal a relatively simple continuous fault trace with maximum offsets of the order of 3 meters. This is in contrast to the results of geological mapping that portrayed a complex broad zone of distributed faulting. Also, SAR data indicate that the rupture propagated bi-laterally from the epicenter near the town of Durango both to the North-West into the Cucapah mountains and to the South-East into the Mexically valley. The inferred South-East part of the rupture was subsequently field-checked and associated with several fresh scarps, although overall the earthquake fault does not have a conspicuous surface trace South-East of the hypocenter. It is worth noting that the 2010 earthquake propagated into stress shadows of prior events - the Laguna Salada earthquake that ruptured the North-West part of the fault in 1892, and several M6+ earthquakes that ruptured the South-East part of the fault over the last century. Analysis of the coseismic displacement field at the Earth's surface (in particular, the full 3-component displacement field retrieved from SAR and optical imagery) shows a pronounced asymmetry in horizontal displacements across both nodal planes. The maximum displacements are observed in the North-Eastern and South-Western quadrants. This pattern cannot be explained by oblique slip on a quasi-planar fault. Multi-parametric inversions of the space geodetic data suggest that the El Mayor-Cucapah earthquake occurred on a helix-shaped rupture, with Eastward dip in the Northern section and Westward dip in the Southern section. This interpretation is consistent with field observations of the surface rupture and aftershock data, and provides an explanation for a strong non-double-couple component suggested by the seismic moment tensor solution. The total geodetic moment of our best-fitting model is in a good agreement with the seismic moment. We will also discuss effects of the elastic structure on the inferred static rupture model, and observations of early postseismic deformation.

Fialko, Y.; Gonzalez, A.; Gonzalez-Garcia, J. J.; Barbot, S.; Leprince, S.; Sandwell, D. T.; Agnew, D. C.

2010-12-01

244

Critical fault patterns determination in fault-tolerant computer systems  

NASA Technical Reports Server (NTRS)

The method proposed tries to enumerate all the critical fault patterns (successive occurrences of failures) without analyzing every single possible fault. The conditions for the system to be operating in a given mode can be expressed in terms of the static states. Thus, one can find all the system states that correspond to a given critical mode of operation. The next step consists of analyzing the fault-detection mechanisms, the diagnosis algorithm, and the process of switch control. From these, one can find all the possible system configurations that can result from a failure occurrence. Thus, one can list all the characteristics, with respect to detection, diagnosis, and switch control, that failures must have to constitute critical fault patterns. Such an enumeration of the critical fault patterns can be directly used to evaluate the overall system tolerance to failures. Present research is focused on how to efficiently make use of these system-level characteristics to enumerate all the failures that exhibit them.

Mccluskey, E. J.; Losq, J.

1978-01-01

245

Anisotropy of permeability in faulted porous sandstones  

NASA Astrophysics Data System (ADS)

Studies of fault rock permeabilities advance the understanding of fluid migration patterns around faults and contribute to predictions of fault stability. In this study a new model is proposed combining brittle deformation structures formed during faulting with fluid flow through pores. It assesses the impact of faulting on the permeability anisotropy of porous sandstone, hypothesising that the formation of fault-related micro-scale deformation structures will alter the host rock porosity organisation and create new permeability pathways. Core plugs and thin sections were sampled around a normal fault and oriented with respect to the fault plane. Anisotropy of permeability was determined in three orientations to the fault plane at ambient and confining pressures. Results show that permeabilities measured parallel to fault dip were up to 10 times higher than permeabilities measured along fault strike. Analysis of corresponding thin sections shows elongate pores oriented at a low angle to the maximum principal palaeo-stress (σ1) and parallel to fault dip, indicating that permeability anisotropy is produced by grain-scale deformation mechanisms associated with faulting. Using a soil mechanics 'void cell model' this study shows how elongate pores could be produced in faulted porous sandstone by compaction and reorganisation of grains through shearing and cataclasis.

Farrell, N. J. C.; Healy, D.; Taylor, C. W.

2014-06-01

246

Fault Management Guiding Principles  

NASA Technical Reports Server (NTRS)

Regardless of the mission type, whether deep space or low Earth orbit, robotic or human spaceflight, Fault Management (FM) is a critical aspect of NASA space missions. As the complexity of space missions grows, the complexity of the supporting FM systems increases in turn. Data on recent NASA missions show that development of FM capabilities is a common driver for significant cost overruns late in the project development cycle. Efforts to understand the drivers behind these cost overruns, spearheaded by NASA's Science Mission Directorate (SMD), indicate that they are primarily caused by the growing complexity of FM systems and the lack of maturity of FM as an engineering discipline. NASA can and does develop FM systems that effectively protect mission functionality and assets. The cost growth results from a lack of FM planning and emphasis by project management, as well as from the maturity of FM as an engineering discipline, which lags behind that of other engineering disciplines. As a step towards controlling the cost growth associated with FM development, SMD has commissioned a multi-institution team to develop a practitioner's handbook representing best practices for the end-to-end processes involved in engineering FM systems. While the handbook currently concentrates primarily on FM for science missions, the expectation is that it will grow into a NASA-wide handbook, serving as a companion to the NASA Systems Engineering Handbook. This paper presents a snapshot of the principles that have been identified to guide FM development from cradle to grave. The principles range from considerations for integrating FM into the project and SE organizational structure, to the relationship between FM designs and mission risk, to the use of the various tools of FM (e.g., redundancy) to meet the FM goal of protecting mission functionality and assets.

Newhouse, Marilyn E.; Friberg, Kenneth H.; Fesq, Lorraine; Barley, Bryan

2011-01-01

247

Taxonomic and Functional Metagenomic Profiling of the Microbial Community in the Anoxic Sediment of a Subsaline Shallow Lake (Laguna de Carrizo, Central Spain)  

Microsoft Academic Search

The phylogenetic and functional structure of the microbial community residing in a Ca2+-rich anoxic sediment of a sub-saline shallow lake (Laguna de Carrizo, initially operated as a gypsum (CaSO4 · 2H2O) mine) was estimated by analyzing the diversity of 16S rRNA amplicons and 3.1 Mb of consensus metagenome sequence. The lake has about half the salinity of seawater and possesses an ...

Manuel Ferrer; María-Eugenia Guazzaroni; Michael Richter; Adela García-Salamanca; Pablo Yarza; Ana Suárez-Suárez; Jennifer Solano; María Alcaide; Pieter van Dillewijn; Maria Antonia Molina-Henares; Nieves López-Cortés; Yamal Al-Ramahi; Carmen Guerrero; Alejandro Acosta; Laura I. de Eugenio; Virginia Martínez; Silvia Marques; Fernando Rojo; Eduardo Santero; Olga Genilloud; Julian Pérez-Pérez; Ramón Rosselló-Móra; Juan Luis Ramos

248

An investigation of rainfall variability and distribution in Luzon and a mesoscale study of rainfall of the province of Laguna and adjacent areas, Philippines  

E-print Network

Master of Science thesis in Meteorology by Mauro Comendador Coligado, submitted to the Graduate College of Texas A&M University, January 1967. The thesis presents an investigation of rainfall variability and distribution in Luzon and a mesoscale study of rainfall in the province of Laguna and adjacent areas, Philippines.

Coligado, Mauro Comendador

1967-01-01

249

Growth of faults in crystalline rock  

NASA Astrophysics Data System (ADS)

The growth of faults depends on the coupled interplay of the distribution of slip, fault geometry, the stress field in the host rock, and deformation of the host rock, which commonly is manifest in secondary fracturing. The distribution of slip along a fault depends highly on its structure, the stress perturbation associated with its interaction with nearby faults, and its strength distribution; mechanical analyses indicate that the first two factors are more influential than the third. Slip distribution data typically are discrete, but commonly are described, either explicitly or implicitly, using continuous interpolation schemes. Where the third derivative of a continuous slip profile is discontinuous, the compatibility conditions of strain are violated, and fracturing and perturbations to fault geometry should occur. Discontinuous third derivatives accompany not only piecewise linear functions, but also functions as seemingly benign as cubic splines. The stress distribution and fracture distribution along a fault depends strongly on how the fault grows. Evidence to date indicates that a fault that nucleates along a pre-existing, nearly planar joint or a dike typically develops secondary fractures only near its tipline when the slip is small relative to the fault length. In contrast, stress concentrations and fractures are predicted where a discontinuous or non-planar fault exhibits steps and bends; field observations bear this prediction out. Secondary fracturing influences how faults grow by creating damage zones and by linking originally discontinuous elements into a single fault zone. Field observations of both strike-slip faults and dip-slip faults show that linked segments usually will not be coplanar; elastic stress analyses indicate that this is an inherent tendency of how three-dimensional faults grow. Advances in the data we collect and in the rigor and sophistication of our analyses seem essential to substantially advance our ability to successfully predict earthquakes, fluid flow and mineralization along faults, and fault sealing. Particularly promising avenues of research include: (a) collecting high-resolution slip distribution data over fault surfaces (rather than just the maximum slip); (b) refining the locations of microseismic events; (c) conducting large-scale controlled experiments on in-situ faults; (d) characterizing the spatial distribution of fractures along faults (e.g., by back-mining); (e) performing dynamic experiments to evaluate the formation and strength of fault gouge and pseudotachylyte; (f) characterizing the shape of fault surfaces at different scales using laser scanning and differential geometry; and (g) modeling faults mechanically as part of an interacting system rather than as isolated structures.

Martel, S. J.

2009-04-01

250

Fault testing quantum switching circuits  

E-print Network

Test pattern generation is an electronic design automation tool that attempts to find an input (or test) sequence that, when applied to a digital circuit, enables one to distinguish between the correct circuit behavior and the faulty behavior caused by particular faults. The effectiveness of this classical method is measured by the fault coverage achieved for the fault model and the number of generated vectors, which should be directly proportional to test application time. This work addresses the quantum process validation problem by considering the quantum mechanical adaptation of test pattern generation methods used to test classical circuits. We found that quantum mechanics allows one to execute multiple test vectors concurrently, making each gate realized in the process act on a complete set of characteristic states, with space/time complexity that breaks classical testability lower bounds.

Jacob Biamonte; Marek Perkowski

2010-01-19

251

Transient Faults in Computer Systems  

NASA Technical Reports Server (NTRS)

A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.

Masson, Gerald M.

1993-01-01

252

Faulting in porous carbonate grainstones  

NASA Astrophysics Data System (ADS)

In the recent past, a new faulting mechanism has been documented within porous carbonate grainstones. This mechanism is due to strain localization into narrow tabular bands characterized by both volumetric and shear strain; for this reason, these features are named compactive shear bands. In the field, compactive shear bands are easily recognizable because they are lightly coloured with respect to the parent rock and/or show a positive relief because of their increased resistance to weathering. Both characteristics, light colour and positive relief, are a consequence of the compaction processes that characterize these bands, which are the simplest structural elements that form within porous carbonate grainstones. With ongoing deformation, single compactive shear bands, which accommodate only a few mm of displacement, may evolve into zones of compactive shear bands and, finally, into well-developed faults characterized by slip surfaces and fault rocks. Field analysis conducted in key areas of Italy allowed us to document different modes of interaction and linkage among the compactive shear bands: (i) simple divergence of two different compactive shear bands from an original one; (ii) extensional and contractional jogs formed by two continuous, interacting compactive shear bands; and (iii) eye structures formed by collinear interacting compactive shear bands, which have already been described for deformation bands in sandstones. The last two types of interaction may localize the formation of compaction bands, which are characterized by a pronounced component of compaction and negligible components of shearing, and/or pressure solution seams. All the aforementioned types of interaction and linkage can occur at any deformation stage: single bands, zones of bands, or well-developed faults. The transition from one deformation process to another, which is likely controlled by changes in the material properties, is recorded by different ratios and distributions of the fault dimensional attributes. The results of field analysis are consistent with the length (L), displacement (D) and thickness (T) of single compactive shear bands clustering around given values, peculiar to the individual lithologies, and do not point to any scale relationship among these parameters. On the contrary, in zones of shear bands and well-developed faults the D values are greatest in the central portion of individual elements. Differently from well-developed faults, in which the slip increments are resolved along the main slip surfaces, within zones of compactive shear bands the displacement varies according to the number of individual bands, so that increased displacement is related to a higher number of bands. As a consequence, T-D plots for zones of compactive shear bands and for well-developed faults show two different populations, which suggests that well-developed faults are much more efficient at resolving displacement than zones of shear bands, because they include sharp slip surfaces. The petrographical and petrophysical properties of the tectonic features described above, which have been assessed by means of detailed laboratory analyses, are consistent with single compactive shear bands and zones of shear bands behaving as seals for underground fluid flow with respect to the host rock. These features, which are strongly present within the damage zones of well-developed faults, may compartmentalize fluid flow in faulted carbonate reservoirs.

Tondi, Emanuele; Agosta, Fabrizio

2010-05-01

253

InSAR measurements around active faults: creeping Philippine Fault and un-creeping Alpine Fault  

NASA Astrophysics Data System (ADS)

Recently, interferometric synthetic aperture radar (InSAR) time-series analyses have frequently been applied to measure time series of small and quasi-steady displacements over wide areas. Large efforts in methodological development have been made to pursue higher temporal and spatial resolutions by using frequently acquired SAR images and detecting more pixels that exhibit phase stability. While such a high resolution is indispensable for tracking displacements of man-made and other small-scale structures, it is not necessarily needed, and can be unnecessarily computer-intensive, for measuring the crustal deformation associated with active faults and volcanic activity. I apply a simple and efficient method to measure the deformation around the Alpine Fault in the South Island of New Zealand and the Philippine Fault in Leyte Island. I use a small-baseline subset (SBAS) analysis approach (Berardino et al., 2002). Generally, the more we average the pixel values, the more coherent the signals are. Considering that, for the deformation around active faults, the spatial resolution can be as coarse as a few hundred meters, we can severely 'multi-look' the interferograms. The two cases treated in this study benefited from this approach; I could obtain the mean velocity maps over practically the entire area without discarding decorrelated areas. The signals could have been only partially obtained by standard persistent scatterer or single-look small-baseline approaches, which are much more computer-intensive. In order to further increase the signal detection capability, it is sometimes effective to introduce a processing algorithm adapted to the signal of interest. In InSAR time-series processing, one usually needs to set a reference point because interferograms are all relative measurements. It is difficult, however, to fix the reference point when one aims to measure long-wavelength deformation signals that span the whole analysis area. This problem can be solved by adding the displacement offset of each interferogram as a model parameter and solving the system of equations with a minimum-norm condition. This way, the unknown offsets can be determined automatically. By applying this method to the ALOS/PALSAR data acquired over the Alpine Fault, I obtained a mean velocity map showing the right-lateral relative motion of the blocks north and south of the fault and the strain concentration (large velocity gradient) around the fault. The velocity gradient around the fault has along-fault variation, probably reflecting variation in the fault locking depth. When one aims to detect fault creep, i.e., displacement discontinuities in space, one can introduce additional parameters to describe the phase ramps in the interferograms and solve the system of equations again with the minimum-norm condition. Then, the displacement discontinuity appears more clearly in the result, at the cost of suppressing long-wavelength displacements. By applying this method to the ALOS/PALSAR data acquired over the Philippine Fault in Leyte Island, I obtained a mean velocity map showing fault creep, at least in the northern and central parts of Leyte, at a rate of around 10 mm/year.
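The offset-as-model-parameter idea can be illustrated with a small synthetic least-squares problem. The NumPy sketch below is not the author's processing chain: each interferogram carries an unknown constant offset, the resulting system is rank-deficient, and the least-squares routine returns the minimum-norm solution; the epochs, pixels, noise level, and values are all invented.

import numpy as np

n_epochs, n_pixels = 4, 3                 # epoch 0 is the temporal reference
ifgs = [(0, 1), (1, 2), (0, 2), (2, 3)]   # acquisition pairs forming interferograms

# Synthetic "true" displacements (mm) per pixel and epoch, plus per-ifg offsets.
rng = np.random.default_rng(0)
d_true = np.cumsum(rng.normal(0, 2, size=(n_pixels, n_epochs)), axis=1)
d_true -= d_true[:, :1]                   # reference each pixel to epoch 0
c_true = rng.normal(0, 5, size=len(ifgs)) # unknown offset of each interferogram

# Design matrix: unknowns are the displacements at epochs 1..3 for every pixel,
# followed by one offset per interferogram.
n_d = n_pixels * (n_epochs - 1)
G, obs = [], []
for i, (j, k) in enumerate(ifgs):
    for p in range(n_pixels):
        row = np.zeros(n_d + len(ifgs))
        if k > 0: row[p * (n_epochs - 1) + (k - 1)] += 1.0
        if j > 0: row[p * (n_epochs - 1) + (j - 1)] -= 1.0
        row[n_d + i] = 1.0                # the interferogram's common offset
        G.append(row)
        obs.append(d_true[p, k] - d_true[p, j] + c_true[i])

# Minimum-norm least-squares solution of the rank-deficient system.
m, *_ = np.linalg.lstsq(np.array(G), np.array(obs), rcond=None)
d_est = m[:n_d].reshape(n_pixels, n_epochs - 1)
print(np.round(d_est, 2))

Because the per-interferogram offsets trade off with a spatially uniform displacement, it is the minimum-norm condition that pins down a unique solution, at the cost of damping the longest-wavelength part of the signal, as the abstract notes for the ramp-estimation variant.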

Fukushima, Y.

2013-12-01

254

Faulting at Mormon Point, Death Valley, California: A low-angle normal fault cut by high-angle faults  

Microsoft Academic Search

New geophysical and fault kinematic studies indicate that late Cenozoic basin development in the Mormon Point area of Death Valley, California, was accommodated by fault rotations. Three of the six fault segments recognized at Mormon Point are now inactive and have been rotated to low dips during extension. The remaining three segments are now active and moderately to steeply dipping.

Charles Keener; Laura Serpa; Terry L. Pavlis

1993-01-01

255

Solar Dynamic Power System Fault Diagnosis  

NASA Technical Reports Server (NTRS)

The objective of this research is to conduct various fault simulation studies for diagnosing the type and location of faults in the power distribution system. Different types of faults are simulated at different locations within the distribution system, and the faulted waveforms are monitored at measurable nodes such as the outputs of the DDCUs. These fault signatures are processed using feature extractors such as the FFT and wavelet transforms. The extracted features are fed to a clustering-based neural network for training and subsequent testing using previously unseen data. Different load models consisting of constant impedance and constant power are used for the loads. Open-circuit faults and short-circuit faults are studied. It is concluded from the present studies that using features extracted from wavelet transforms gives better success rates during ANN testing. The trained ANNs are capable of diagnosing fault types and approximate locations in the solar dynamic power distribution system.
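The processing chain described above (simulate faulted waveforms, extract spectral features, classify) can be caricatured in a few lines of Python; the waveforms, fault classes, and nearest-centroid classifier below are illustrative stand-ins, not the study's simulations or its clustering-based ANN.

import numpy as np

fs = 2000
t = np.arange(0.0, 0.2, 1.0 / fs)                # 0.2 s of samples at 2 kHz

def waveform(kind, rng):
    base = np.sin(2 * np.pi * 60 * t)            # nominal 60 Hz signal
    if kind == "short_circuit":
        base = 0.2 * base + 0.5 * np.sin(2 * np.pi * 180 * t)  # depressed, distorted
    elif kind == "open_circuit":
        base = base * (t < 0.1)                                # supply lost mid-window
    return base + rng.normal(0, 0.05, t.size)    # measurement noise

def fft_features(x, n_bins=8):
    spec = np.abs(np.fft.rfft(x))
    return np.array([band.sum() for band in np.array_split(spec, n_bins)])

rng = np.random.default_rng(1)
labels = ["normal", "short_circuit", "open_circuit"]
centroids = {k: np.mean([fft_features(waveform(k, rng)) for _ in range(20)], axis=0)
             for k in labels}                    # one feature centroid per fault class

test = fft_features(waveform("open_circuit", rng))
diagnosis = min(labels, key=lambda k: np.linalg.norm(test - centroids[k]))
print("diagnosed fault type:", diagnosis)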

Momoh, James A.; Dias, Lakshman G.

1996-01-01

256

Underground distribution cable incipient fault diagnosis system  

E-print Network

This dissertation presents a methodology for an efficient, non-destructive, and online incipient fault diagnosis system (IFDS) to detect underground cable incipient faults before they become catastrophic. The system provides vital information...

Jaafari Mousavi, Mir Rasoul

2007-04-25

257

Sensor Fault Detection and Isolation System  

E-print Network

The purpose of this research is to develop a Fault Detection and Isolation (FDI) system that is capable of diagnosing multiple sensor faults in nonlinear cases. In order to bring this study closer to real-world applications in the oil industry...

Yang, Cheng-Ken

2014-08-01

258

Automated Fault Location In Smart Distribution Systems  

E-print Network

Fault location in distribution systems is a critical component of outage management and service restoration, which directly impacts feeder reliability and quality of the electricity supply. Improving fault location methods supports the Department...

Lotfifard, Saeed

2012-10-19

259

Quantum Error Correction and Fault-Tolerance  

E-print Network

I give an overview of the basic concepts behind quantum error correction and quantum fault tolerance. This includes the quantum error correction conditions, stabilizer codes, CSS codes, transversal gates, fault-tolerant error correction, and the threshold theorem.

Daniel Gottesman

2005-07-18

260

Update: San Andreas Fault experiment  

NASA Technical Reports Server (NTRS)

Satellite laser ranging techniques are used to monitor the broad motion of the tectonic plates comprising the San Andreas Fault System. The San Andreas Fault Experiment (SAFE) has progressed through upgrades made to the laser system hardware and improvements in the modeling capabilities of the spaceborne laser targets. Of special note is the 1976 launch of the Laser Geodynamic Satellite (LAGEOS), NASA's only completely dedicated laser satellite. The results of plate motion projected onto this 896 km measured line over the past eleven years are summarized and intercompared.

Christodoulidis, D. C.; Smith, D. E.

1984-01-01

261

Fault zone geology: lessons from drilling through the Nojima and ... (Boullier)

E-print Network

Boreholes drilled through active faults, with the aim of learning about the geology of the fault zone, have, even where projects did not meet all their objectives, still contributed to a better geological ...

Boyer, Edmond

262

Stress and fault parameters affecting fault slip magnitude and activation time during a glacial cycle  

NASA Astrophysics Data System (ADS)

The growth and melting of continental ice sheets during a glacial cycle are accompanied by stress changes and the reactivation of faults. To better understand the relationship between stress changes, fault activation time, fault parameters, and fault slip magnitude, a new physics-based two-dimensional numerical model is used. In this study, tectonic background stress magnitudes and fault parameters are tested, as well as the angle of the fault and the fault locations relative to the ice sheet. Our results show that the fault slip magnitude for all faults is mainly affected by the coefficient of friction within the crust and along the fault, and also by the depth of the fault tip and the angle of the fault. Within a compressional stress regime, we find that steeply dipping faults (~75°) can be activated after glacial unloading, and fault activity continues thereafter. Furthermore, our results indicate that low-angle faults (dipping at 30°) may slip by up to 63 m, equivalent to an earthquake with a minimum moment magnitude of 7.0. Finally, our results imply that the crust beneath formerly glaciated regions was close to a critically stressed state, in order to enable activation of faults by small changes in stress during a glacial cycle.
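The magnitude quoted above follows from standard seismic-moment arithmetic. In the sketch below only the 63 m of slip comes from the abstract; the rigidity and rupture area are assumptions chosen for illustration.

import math

def moment_magnitude(slip_m, area_m2, rigidity_pa=3.0e10):
    m0 = rigidity_pa * area_m2 * slip_m          # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)  # Hanks & Kanamori (1979)

slip = 63.0      # m, from the abstract
area = 25.0e6    # assumed 5 km x 5 km rupture patch
print(round(moment_magnitude(slip, area), 1))    # about 7.0 for these assumptions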

Steffen, Rebekka; Steffen, Holger; Wu, Patrick; Eaton, David W.

2014-07-01

263

Multi-Sensor Fault Recovery in the Presence of Known and Unknown Fault Types  

E-print Network

An algorithm for multi-sensor fault recovery in the presence of modelled and unmodelled faults is presented. The algorithm comprises two stages. The first stage attempts to remove modelled faults from each individual sensor estimate. The second stage ...

Roberts, Stephen

264

Efficient Fault Tolerance: an Approach to Deal with Transient Faults in Multiprocessor Architectures  

E-print Network

In this paper we propose a diagnosis approach that, accounting for transient faults, tries to remove units very cautiously; it can be integrated with a fault treatment approach aiming at optimising resource utilisation.

Firenze, Università degli Studi di

265

Fault diagnosis system based on Dynamic Fault Tree Analysis of power transformer  

Microsoft Academic Search

This paper first introduces the process of transformer fault diagnosis and the theory of Dynamic Fault Tree Analysis (DFTA), and then applies DFTA to the field of transformer fault diagnosis. By establishing the fault tree of the transformer, a practical, easily extended, interactive, and self-learning-enabled fault diagnosis system based on DFTA is designed and developed for transformers. With the implementation and ...

Jiang Guo; Kefei Zhang; Lei Shi; Kaikai Gu; Weimin Bai; Bing Zeng; Yajin Liu

2012-01-01

266

Curved Fault Dynamic Rupture Study: Wasatch Fault Salt Lake City Segment  

NASA Astrophysics Data System (ADS)

Faults are not planar; the curvature of a fault provides useful information on earthquake mechanics and faulting (Scholz, 1990). Fault geometry has a profound impact on both the static aspect (stress distribution in the fault zone) and the dynamic aspect (facilitation or impedance of the fault rupture process) of some fundamental earthquake problems. In most earthquake simulations, planar or piecewise-planar faults are used for numerical simplicity. For real earthquake scenarios, especially ground motion prediction, the eligibility of using simplified planar fault geometry needs to be validated; otherwise the simplification might bias the final conclusion. We analyze the rupture process and ground motion statistics in earthquake simulations for the Wasatch Fault (Salt Lake City segment) with different fault configurations. We use a finite element method (Ma & Liu, 2006) to simulate the dynamics of a propagating rupture. We consider various initial stress distribution schemes on the fault (uniform, depth-dependent, random). We want to understand (1) how the fault geometry itself influences the physical rupture process, and (2) what effect the curvature has on redistributing the initial stresses on the fault. We will monitor the Coulomb stress change near the fault (Liu et al., 2010). This may provide some indication of the interaction between discontinuous fault segments and dynamic triggering, as well as the distribution of aftershocks/foreshocks in relation to the fault geometry.
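For reference, the Coulomb failure stress change monitored in such simulations is commonly written as dCFS = d(shear stress) + mu' * d(normal stress), with unclamping (tension-positive) normal stress promoting failure. The Python sketch below applies that formula with an assumed effective friction coefficient and illustrative stress changes.

def coulomb_stress_change(d_shear_mpa, d_normal_mpa, effective_friction=0.4):
    # Positive values bring the receiver fault closer to failure.
    return d_shear_mpa + effective_friction * d_normal_mpa

# Example: 0.3 MPa of shear loading combined with 0.5 MPa of clamping.
print(coulomb_stress_change(0.3, -0.5))  # 0.1 MPa, mildly promoting failure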

Liu, Q.; Archuleta, R. J.; Smith, R. B.

2011-12-01

267

Fault Behavior and Characteristic Earthquakes: Examples From the Wasatch and San Andreas Fault Zones  

Microsoft Academic Search

Paleoseismological data for the Wasatch and San Andreas fault zones have led to the formulation of the characteristic earthquake model, which postulates that individual faults and fault segments tend to generate essentially same-size, or characteristic, earthquakes having a relatively narrow range of magnitudes near the maximum. Analysis of scarp-derived colluvium in trench exposures across the Wasatch fault provides estimates

David P. Schwartz; Kevin J. Coppersmith

1984-01-01

268

Neural network based fault diagnosis and fault tolerant control for BLDC motor  

Microsoft Academic Search

A fault diagnosis and fault-tolerant control system for the controller of a brushless direct current motor is designed. The neural network state observer is trained on the real nonlinear control system. From the residual difference between the outputs of the actual system and the neural network observer, faults of the control system are detected and identified. The simulation results and study on fault diagnostics are
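
The residual test described here is easy to sketch: predict the healthy output with an observer, subtract it from the measurement, and flag a fault when the residual exceeds a threshold. The observer below is a stand-in linear map and the threshold is arbitrary; neither comes from the paper, which trains a neural network observer on the real plant.

```python
# Minimal sketch of residual-based fault detection. The observer and threshold are
# illustrative assumptions, not the paper's trained neural network observer.

def observer_predict(u):
    """Stand-in for a trained observer / model of the healthy system."""
    return 0.8 * u  # hypothetical nominal input-output gain

def detect_fault(u, y_measured, threshold=0.1):
    residual = y_measured - observer_predict(u)
    return abs(residual) > threshold, residual

u = 1.0
print(detect_fault(u, 0.82))  # (False, ...) healthy: small residual
print(detect_fault(u, 1.30))  # (True, ...)  faulty sensor or actuator
```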

Zheng Li

2009-01-01

269

Microseismicity and creeping faults: Hints from modeling the Hayward fault, California (USA)  

Microsoft Academic Search

Creeping segments of strike-slip faults are often characterized by high rates of microseismicity on or near the fault. This microseismicity releases only a small fraction of the slip occurring on the fault and the majority of the accumulating elastic strain is released either through aseismic creep or in rare large events. Distinguishing between creeping or non-creeping patches on faults and

R. Malservisi; K. P. Furlong; C. R. Gans

2005-01-01

270

Microseismicity and creeping faults: Hints from modeling the Hayward fault, California (USA) [rapid communication]  

Microsoft Academic Search

Creeping segments of strike-slip faults are often characterized by high rates of microseismicity on or near the fault. This microseismicity releases only a small fraction of the slip occurring on the fault and the majority of the accumulating elastic strain is released either through aseismic creep or in rare large events. Distinguishing between creeping or non-creeping patches on faults and

R. Malservisi; K. P. Furlong; C. R. Gans

2005-01-01

271

Fault seal analysis: Methodology and case studies  

Microsoft Academic Search

Fault seal can arise from reservoir/non-reservoir juxtaposition or by development of fault rock of high entry-pressure. The methodology for evaluating these possibilities uses detailed seismic mapping and well analysis. A "first-order" seal analysis involves identifying reservoir juxtaposition areas over the fault surface, using the mapped horizons and a refined reservoir stratigraphy defined by isochores at the fault surface.

M. E. Badley; B. Freeman; D. T. Needham

1996-01-01

272

Low Angle Normal Fault, Fossil or Active?  

NASA Astrophysics Data System (ADS)

The Panamint Valley - Hunter Mountain - Saline Range (PHS) faults are, together with the Death Valley and Owens Valley faults, one of the three major fault zones within the Eastern California Shear Zone (ECSZ). The ECSZ is the most active fault system bounding the Basin and Range to the southwest, with approximately 10 mm/yr of cumulative slip along strike-slip and trans-tensional segments. Previous work has identified the Panamint Valley and Saline Range faults as low angle normal faults and the Hunter Mountain fault as a transfer fault (Wesnousky and Jones, 1994). A debate exists as to whether this system is active at the present time. Interferometric Synthetic Aperture Radar (InSAR) is a geodetic technique that allows measurement of ground motion at mm/yr accuracy over large areas with high spatial sampling. We processed a large volume of data to investigate ground motion in the PHS fault system and to shed light on the interseismic strain accumulation and its relation to the fault geometry. Preliminary results indicate a high strain rate over the Hunter Mountain fault. The locking depth of the fault inferred from elastic modeling of interseismic strain accumulation is on the order of 4 km, significantly shallower than for neighboring faults. In contrast, the long wavelength strain field across the Panamint and Saline faults indicates possibly deeper locking depths and/or shallower dip. The shallow locking depth of 4 km inferred for the Hunter Mountain fault corresponds with the extension at depth of the two bounding low angle normal faults below Hunter Mountain, suggesting a control by the low angle normal fault system.
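
The elastic modeling referred to here is often done, at least to first order, with the classic screw-dislocation (arctangent) profile for a strike-slip fault locked down to depth D. The sketch below uses that textbook relation with an assumed slip rate purely to illustrate how a shallow (~4 km) locking depth concentrates the velocity gradient near the fault; it is not the study's actual model or data.

```python
import numpy as np

# Screw-dislocation model (Savage & Burford style) for interseismic velocity across a
# vertical strike-slip fault locked to depth D. Slip rate and profile are assumptions.

def interseismic_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
    """Fault-parallel surface velocity at distance x from the fault trace."""
    return (slip_rate_mm_yr / np.pi) * np.arctan(np.asarray(x_km) / locking_depth_km)

x = np.linspace(-50, 50, 11)                      # profile distances (km)
v_shallow = interseismic_velocity(x, 5.0, 4.0)    # shallow locking depth (~4 km)
v_deep = interseismic_velocity(x, 5.0, 15.0)      # deeper locking depth for comparison

print(np.round(v_shallow, 2))  # sharper velocity step near the fault
print(np.round(v_deep, 2))     # broader, smoother gradient
```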

Gourmelen, N.; Falk, A.; Manzo, M.; Francesco, C.; Lanari, R.; Johnson, K.

2007-12-01

273

The arc-fault circuit protection  

Microsoft Academic Search

In electrical power systems, bolted short-circuits are rare and faults usually involve arcing and burning; the limiting value of the minimum short-circuit current largely depends on the arcing fault. In AC low-voltage systems, the paper examines arcing-fault branch circuits as weak points. Different protection measures are available against arc faults. A first measure that can guarantee a probabilistic protection is allowed

G. Parise; L. Martirano; U. Grasselli; L. Benetti

2001-01-01

274

Fault Models for Quantum Mechanical Switching Networks  

E-print Network

The difference between faults and errors is that, unlike faults, errors can be corrected using control codes. In classical test and verification one develops a test set separating a correct circuit from a circuit containing any considered fault. Classical faults are modelled at the logical level by fault models that act on classical states. The stuck fault model, thought of as a lead connected to a power rail or to a ground, is most typically considered. A classical test set complete for the stuck fault model propagates both binary basis states, 0 and 1, through all nodes in a network and is known to detect many physical faults. A classical test set complete for the stuck fault model allows all circuit nodes to be completely tested and verifies the function of many gates. It is natural to ask if one may adapt any of the known classical methods to test quantum circuits. Of course, classical fault models do not capture all the logical failures found in quantum circuits. The first obstacle faced when using methods from classical test is developing a set of realistic quantum-logical fault models. Developing fault models to abstract the test problem away from the device level motivated our study. Several results are established. First, we describe typical modes of failure present in the physical design of quantum circuits. From this we develop fault models for quantum binary circuits that enable testing at the logical level. The application of these fault models is shown by adapting the classical test set generation technique known as constructing a fault table to generate quantum test sets. A test set developed using this method is shown to detect each of the considered faults.
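
The fault-table construction mentioned at the end of the abstract is straightforward to illustrate for classical stuck-at faults. The sketch below builds a fault table for a tiny combinational circuit, z = (a AND b) OR c, which is an arbitrary example; the quantum fault models that are the subject of the record are not implemented here.

```python
from itertools import product

# Fault table for single stuck-at faults on a toy circuit z = (a AND b) OR c.
# table[test][fault] is True when applying that test detects that fault.

LINES = ["a", "b", "c", "ab", "z"]  # primary inputs, internal net, output

def simulate(a, b, c, fault=None):
    """Evaluate the circuit, optionally forcing one line to a stuck value."""
    vals = {"a": a, "b": b, "c": c}
    if fault and fault[0] in vals:
        vals[fault[0]] = fault[1]
    vals["ab"] = vals["a"] & vals["b"]
    if fault and fault[0] == "ab":
        vals["ab"] = fault[1]
    vals["z"] = vals["ab"] | vals["c"]
    if fault and fault[0] == "z":
        vals["z"] = fault[1]
    return vals["z"]

faults = [(line, v) for line in LINES for v in (0, 1)]
tests = list(product((0, 1), repeat=3))

table = {t: {f: simulate(*t) != simulate(*t, fault=f) for f in faults} for t in tests}

for t in tests:
    print(t, [f for f in faults if table[t][f]])
```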

Jacob Biamonte; Jeff S. Allen; Marek A. Perkowski

2010-01-19

275

Facies composition and scaling relationships of extensional faults in carbonates  

Microsoft Academic Search

Fault seal evaluations in carbonates are challenged by limited input data. Our analysis of 100 extensional faults in shallow-buried layered carbonate rocks aims to improve forecasting of fault core characteristics in these rocks. We have analyzed the spatial distribution of fault core elements described using a Fault Facies classification scheme; a method specifically developed for 3D fault description and quantification,

Eivind Bastesen; Alvar Braathen

2010-01-01

276

Analysis of the ecosystem structure of Laguna Alvarado, western Gulf of Mexico, by means of a mass balance model  

NASA Astrophysics Data System (ADS)

Alvarado is one of the most productive estuary-lagoon systems in the Mexican Gulf of Mexico. It has great economic and ecological importance due to high fisheries productivity and because it serves as a nursery, feeding, and reproduction area for numerous populations of fishes and crustaceans. Because of this, extensive studies have focused on biology, ecology, fisheries (e.g. shrimp, oysters) and other biological components of the system during the last few decades. This study presents a mass-balanced trophic model for Laguna Alvarado to determine its structure and functional form, and to compare it with similar coastal systems of the Gulf of Mexico and Mexican Pacific coast. The model, based on the software Ecopath with Ecosim, consists of eighteen fish groups, seven invertebrate groups, and one group each of sharks and rays, marine mammals, phytoplankton, sea grasses and detritus. The acceptability of the model is indicated by the pedigree index (0.5), which ranges from 0 to 1 based on the quality of input data. The highest trophic level was 3.6 for marine mammals and snappers. Total system throughput reached 2680 t km⁻² yr⁻¹, of which total consumption made up 47%, respiratory flows made up 37% and flows to detritus made up 16%. The total system production was higher than consumption, and net primary production higher than respiration. The mean transfer efficiency was 13.8%. The mean trophic level of the catch was 2.3 and the primary production required to sustain the catch was estimated at 31 t km⁻² yr⁻¹. Ecosystem overhead was 2.4 times the ascendancy. Results suggest a balance between primary production and consumption. In contrast with other Mexican coastal lagoons, Laguna Alvarado differs strongly in relation to the primary source of energy; here the primary producers (seagrasses) are more important than detritus pathways. This fact can be interpreted as a response to mangrove deforestation, overfishing, etc. Future work might include the compilation of fishing and biomass time trends to develop historical verification and fitting of temporal simulations.
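
For readers unfamiliar with Ecopath, the balance the model solves for each functional group is the standard Ecopath production equation shown below. This is the general form of the method with symbols in their usual meaning; the study's specific parameter values are not reproduced.

```latex
% Standard Ecopath mass-balance (production) equation for each group i:
%   B_i     biomass                (P/B)_i  production/biomass ratio
%   EE_i    ecotrophic efficiency  (Q/B)_j  consumption/biomass ratio
%   DC_{ji} fraction of prey i in the diet of predator j
%   Y_i     fishery catch          E_i  net migration     BA_i  biomass accumulation
B_i \left(\frac{P}{B}\right)_i EE_i
  = \sum_j B_j \left(\frac{Q}{B}\right)_j DC_{ji} + Y_i + E_i + BA_i
```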

Cruz-Escalona, V. H.; Arreguín-Sánchez, F.; Zetina-Rejón, M.

2007-03-01

277

The Supernova Spectropolarimetry Project: Photometric Followup in the Optical and Near-Infrared by the Mount Laguna Supernova Survey  

NASA Astrophysics Data System (ADS)

The SuperNova SpectroPOLarimetry project (SNSPOL) is a recently formed collaboration between observers and theorists that focuses on decoding the complex, time-dependent spectropolarimetric behavior of supernovae (SNe) of all types. Photometric followup of targeted SNe is provided by the MOunt LAguna SUpernova Survey (MOLASUS), which is carried out using Mount Laguna Observatory's 1-meter telescope. Here we present optical and near-infrared (NIR) photometric observations of three recent SNe that were observed as part of this coordinated effort: SN 2013ej, SN 2013dy, and SN 2014J. We discuss the multi-band light curves of these three SNe, with a particular focus on the use of NIRIM (Meixner et al. 1999), our NIR camera used to obtain the J, H, and K' data. SN 2013ej is a Type II supernova in M74, discovered by the Lick Observatory Supernova Search (LOSS) on 2013 July 25.45 (UT; UT dates are used throughout). Our monitoring of this object began 2013 August 07.88 and continued until 2013 December 13.74. The data provide evidence for a photospheric phase lasting roughly 70 days from our first observation, with SN 2013ej then declining by about 3 magnitudes in H-band over the following 50 days. SN 2013dy is a Type Ia supernova in NGC 7250 discovered by LOSS on 2013 July 10.45. We monitored SN 2013dy from July 19.89 until 2013 December 13.62. Our observations show a characteristic Type Ia light curve that declines in brightness by about 3 magnitudes in H through the course of our monitoring. Lastly, SN 2014J is a Type Ia-HV [High Velocity] (Takaki et al. (2014) - ATEL 5791) in M82, discovered on 2014 January 21.81, and the closest Type Ia supernova in over three decades. Our monitoring of SN 2014J began on 2014 January 30.67. We acknowledge support from NSF grants AST-1009571 and AST-1210311, under which part of this research was carried out.

Khandrika, Harish G.; Leonard, Douglas C.; Horst, Chuck; Rachubo, Alisa; Duong, Nhieu; Williams, G. Grant; Smith, Paul S.; Smith, Nathan; Milne, Peter; Hoffman, Jennifer L.; Huk, Leah N.; Dessart, Luc

2014-06-01

278

Fault Management Frameworks in Wireless Sensor Networks  

Microsoft Academic Search

As the Internet of Things rapidly gains ground, wireless sensor networks (WSNs) are bound to permeate more and more applications. However, owing to the nature of these applications, such as harsh environments, resource-limited wireless sensor networks are usually fault-prone. Therefore, it is essential to provide effective fault management techniques and robust fault management frameworks for WSNs.

Hu Huangshui; Qin Guihe

2011-01-01

279

High impedance fault detection on distribution systems  

Microsoft Academic Search

The detection of high impedance faults on electrical distribution systems has been one of the most persistent and difficult problems facing the electric utility industry. Recent advances in digital technology have enabled practical solutions for the detection of a high percentage of these previously undetectable faults. This paper reviews several mechanical and electrical methods of detecting high impedance faults. The

C. G. Wester

1998-01-01

280

Network Reliability and Fault Tolerance Muriel Medard  

E-print Network

; for transient faults, a combination of error-correcting codes and data retransmission usually provides adequate problems must be considered in the design of a fault-tolerant system. A system must be capable of detecting Network Reliability and Fault Tolerance Muriel Médard medard@mit.edu Laboratory for Information

Médard, Muriel

281

Relyzer: Application Resiliency Analyzer for Transient Faults  

E-print Network

that these undetected faults can result in silent data corruptions or SDCs. The SDC rates demonstrated by the state that Relyzer is capable of pruning about 99.9979% of hardware faults for the workloads that we studied. Some Terms--Silent Data Corruption, Transient Faults, Dynamic Program Analysis, Architecture I

Adve, Sarita

282

Fault tolerant software modules for SIFT  

NASA Technical Reports Server (NTRS)

The implementation of software fault tolerance is investigated for critical modules of the Software Implemented Fault Tolerance (SIFT) operating system to support the computational and reliability requirements of advanced fly by wire transport aircraft. Fault tolerant designs generated for the error reported and global executive are examined. A description of the alternate routines, implementation requirements, and software validation are included.

Hecht, M.; Hecht, H.

1982-01-01

283

FAULT PREDICTIVE CONTROL OF COMPACT DISK PLAYERS  

E-print Network

FAULT PREDICTIVE CONTROL OF COMPACT DISK PLAYERS Peter Fogh Odgaard Mladen Victor Wickerhauser playing certain discs with surface faults like scratches and fingerprints. The problem is to be found in other publications of the first author. This scheme is based on an assumption that the surface faults do

Wickerhauser, M. Victor

284

The Fault Detection Problem Andreas Haeberlen1  

E-print Network

The Fault Detection Problem Andreas Haeberlen1 and Petr Kuznetsov2 1 Max Planck Institute challenges in distributed computing is ensuring that services are correct and available despite faults. Recently it has been argued that fault detection can be factored out from computation, and that a generic

Pennsylvania, University of

285

The Fault Detection Problem Andreas Haeberlen  

E-print Network

The Fault Detection Problem Andreas Haeberlen Petr Kuznetsov Abstract One of the most important challenges in distributed computing is ensuring that services are correct and available despite faults. Recently it has been argued that fault detection can be factored out from computation, and that a generic

Pennsylvania, University of

286

High temperature superconducting fault current limiter  

DOEpatents

A fault current limiter (10) for an electrical circuit (14). The fault current limiter (10) includes a high temperature superconductor (12) in the electrical circuit (14). The high temperature superconductor (12) is cooled below its critical temperature to maintain the superconducting electrical properties during operation as the fault current limiter (10).

Hull, John R. (Hinsdale, IL)

1997-01-01

287

High temperature superconducting fault current limiter  

DOEpatents

A fault current limiter for an electrical circuit is disclosed. The fault current limiter includes a high temperature superconductor in the electrical circuit. The high temperature superconductor is cooled below its critical temperature to maintain the superconducting electrical properties during operation as the fault current limiter. 15 figs.

Hull, J.R.

1997-02-04

288

Fault detection via parameter robust estimation  

Microsoft Academic Search

We derive and analyze a fault detection filter which is robust to model uncertainty. To do this, we recast the fault detection problem as a disturbance attenuation problem and then incorporate parameter variations as an additional disturbance. The corresponding solution is a parameter robust game theoretic fault detection filter. A second look at our results, however, shows that the parameter

Walter H. Chung; Jason L. Speyer

1997-01-01

289

Lake Tahoe Faults, Shaded Relief Map  

USGS Multimedia Gallery

Shaded relief map of the western part of the Lake Tahoe basin, California. Fault lines are dashed where approximately located, dotted where concealed, bar and ball on downthrown side. Heavier line weight shows principal range-front fault strands of the Tahoe-Sierra frontal fault zone (TSFFZ). Opaque wh...

290

Salton Sea Satellite Image Showing Fault Slip  

USGS Multimedia Gallery

Landsat satellite image (LE70390372003084EDC00) showing location of surface slip triggered along faults in the greater Salton Trough area. Red bars show the generalized location of 2010 surface slip along faults in the central Salton Trough and many additional faults in the southwestern section of t...

291

5 CFR 845.302 - Fault.  

Code of Federal Regulations, 2010 CFR

... 2010-01-01 2010-01-01 false Fault. 845.302 Section 845.302 Administrative...Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

2010-01-01

292

20 CFR 255.11 - Fault.  

Code of Federal Regulations, 2011 CFR

...2011-04-01 2011-04-01 false Fault. 255.11 Section 255.11 Employees...RECOVERY OF OVERPAYMENTS § 255.11 Fault. (a) Before recovery of an overpayment...that the overpaid individual was without fault in causing the overpayment. If...

2011-04-01

293

5 CFR 845.302 - Fault.  

Code of Federal Regulations, 2011 CFR

... 2011-01-01 2011-01-01 false Fault. 845.302 Section 845.302 Administrative...Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

2011-01-01

294

40 CFR 258.13 - Fault areas.  

Code of Federal Regulations, 2012 CFR

...2012-07-01 2011-07-01 true Fault areas. 258.13 Section 258.13... Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral...located within 200 feet (60 meters) of a fault that has had displacement in...

2012-07-01

295

Active faulting and tectonics in China  

Microsoft Academic Search

We present a study of the active tectonics of China based on an interpretation of Landsat (satellite) imagery and supplemented with seismic data. Several important fault systems can be identified, and most are located in regions of high historical seismicity. We deduce the type and sense of faulting from adjacent features seen on these photos, from fault plane solutions of

Paul Tapponnier; Peter Molnar

1977-01-01

296

22 CFR 17.3 - Fault.  

Code of Federal Regulations, 2013 CFR

... 2013-04-01 2013-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...SERVICE PENSION SYSTEM (FSPS) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

2013-04-01

297

20 CFR 255.11 - Fault.  

Code of Federal Regulations, 2013 CFR

...2013-04-01 2012-04-01 true Fault. 255.11 Section 255.11 Employees...RECOVERY OF OVERPAYMENTS § 255.11 Fault. (a) Before recovery of an overpayment...that the overpaid individual was without fault in causing the overpayment. If...

2013-04-01

298

40 CFR 258.13 - Fault areas.  

Code of Federal Regulations, 2014 CFR

...2014-07-01 2014-07-01 false Fault areas. 258.13 Section 258.13... Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral...located within 200 feet (60 meters) of a fault that has had displacement in...

2014-07-01

299

40 CFR 258.13 - Fault areas.  

Code of Federal Regulations, 2013 CFR

...2013-07-01 2013-07-01 false Fault areas. 258.13 Section 258.13... Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral...located within 200 feet (60 meters) of a fault that has had displacement in...

2013-07-01

300

5 CFR 831.1402 - Fault.  

Code of Federal Regulations, 2014 CFR

... 2014-01-01 2014-01-01 false Fault. 831.1402 Section 831.1402 Administrative...for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of...

2014-01-01

301

22 CFR 17.3 - Fault.  

Code of Federal Regulations, 2011 CFR

... 2011-04-01 2011-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...SERVICE PENSION SYSTEM (FSPS) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

2011-04-01

302

5 CFR 845.302 - Fault.  

Code of Federal Regulations, 2012 CFR

... 2012-01-01 2012-01-01 false Fault. 845.302 Section 845.302 Administrative...Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

2012-01-01

303

22 CFR 17.3 - Fault.  

Code of Federal Regulations, 2014 CFR

... 2014-04-01 2014-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...SERVICE PENSION SYSTEM (FSPS) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

2014-04-01

304

40 CFR 258.13 - Fault areas.  

Code of Federal Regulations, 2010 CFR

...2010-07-01 2010-07-01 false Fault areas. 258.13 Section 258.13... Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral...located within 200 feet (60 meters) of a fault that has had displacement in...

2010-07-01

305

40 CFR 258.13 - Fault areas.  

Code of Federal Regulations, 2011 CFR

...2011-07-01 2011-07-01 false Fault areas. 258.13 Section 258.13... Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral...located within 200 feet (60 meters) of a fault that has had displacement in...

2011-07-01

306

20 CFR 255.11 - Fault.  

Code of Federal Regulations, 2014 CFR

...2014-04-01 2012-04-01 true Fault. 255.11 Section 255.11 Employees...RECOVERY OF OVERPAYMENTS § 255.11 Fault. (a) Before recovery of an overpayment...that the overpaid individual was without fault in causing the overpayment. If...

2014-04-01

307

Field Trip to the Hayward Fault Zone  

NSDL National Science Digital Library

This guide provides directions to locations in Hayward, California where visitors can see evidence of creep along the Hayward Fault. There is also information about the earthquake hazards associated with fault zones, earthquake prediction, and landforms associated with offset along a fault. The guide is available in downloadable, printable format (PDF) in two resolutions

308

5 CFR 831.1402 - Fault.  

Code of Federal Regulations, 2010 CFR

... 2010-01-01 2010-01-01 false Fault. 831.1402 Section 831.1402 Administrative...for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of...

2010-01-01

309

5 CFR 831.1402 - Fault.  

Code of Federal Regulations, 2012 CFR

... 2012-01-01 2012-01-01 false Fault. 831.1402 Section 831.1402 Administrative...for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of...

2012-01-01

310

5 CFR 831.1402 - Fault.  

Code of Federal Regulations, 2011 CFR

... 2011-01-01 2011-01-01 false Fault. 831.1402 Section 831.1402 Administrative...for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of...

2011-01-01

311

22 CFR 17.3 - Fault.  

Code of Federal Regulations, 2012 CFR

... 2012-04-01 2012-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...SERVICE PENSION SYSTEM (FSPS) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

2012-04-01

312

5 CFR 845.302 - Fault.  

Code of Federal Regulations, 2014 CFR

... 2014-01-01 2014-01-01 false Fault. 845.302 Section 845.302 Administrative...Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

2014-01-01

313

5 CFR 831.1402 - Fault.  

Code of Federal Regulations, 2013 CFR

... 2013-01-01 2013-01-01 false Fault. 831.1402 Section 831.1402 Administrative...for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of...

2013-01-01

314

20 CFR 255.11 - Fault.  

Code of Federal Regulations, 2010 CFR

...2010-04-01 2010-04-01 false Fault. 255.11 Section 255.11 Employees...RECOVERY OF OVERPAYMENTS § 255.11 Fault. (a) Before recovery of an overpayment...that the overpaid individual was without fault in causing the overpayment. If...

2010-04-01

315

22 CFR 17.3 - Fault.  

Code of Federal Regulations, 2010 CFR

... 2010-04-01 2010-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...SERVICE PENSION SYSTEM (FSPS) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

2010-04-01

316

5 CFR 845.302 - Fault.  

Code of Federal Regulations, 2013 CFR

... 2013-01-01 2013-01-01 false Fault. 845.302 Section 845.302 Administrative...Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of...

2013-01-01

317

20 CFR 255.11 - Fault.  

Code of Federal Regulations, 2012 CFR

...2012-04-01 2012-04-01 false Fault. 255.11 Section 255.11 Employees...RECOVERY OF OVERPAYMENTS § 255.11 Fault. (a) Before recovery of an overpayment...that the overpaid individual was without fault in causing the overpayment. If...

2012-04-01

318

A fault tolerance approach to computer viruses  

Microsoft Academic Search

Extensions of program flow monitors and n-version programming can be combined to provide a solution to the detection and containment of computer viruses. The consequence is that a computer can tolerate both deliberate faults and random physical faults by one common mechanism. Specifically, the technique detects control flow errors due to physical faults as well as the presence of viruses

Mark K. Joseph; Algirdas AviZienis

1988-01-01

319

Ground Fault--A Health Hazard  

ERIC Educational Resources Information Center

A ground fault is especially hazardous because the resistance through which the current is flowing to ground may be sufficient to cause electrocution. The Ground Fault Circuit Interrupter (G.F.C.I.) protects 15 and 25 ampere 120 volt circuits from ground fault condition. The design and examples of G.F.C.I. functions are described in this article.…
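
The protective principle is simple enough to sketch: a GFCI compares the current flowing out on the hot conductor with the current returning on the neutral and trips when the difference (the current leaking to ground, possibly through a person) exceeds a few milliamperes. The 5 mA threshold below is a typical Class A value used for illustration, not a figure from the article.

```python
# Minimal sketch of the ground fault circuit interrupter principle: trip when the
# hot and neutral currents differ by more than a small leakage threshold.
# The 5 mA value is a typical Class A threshold, used here for illustration.

TRIP_THRESHOLD_A = 0.005  # 5 mA

def gfci_should_trip(hot_current_a, neutral_current_a):
    leakage = abs(hot_current_a - neutral_current_a)
    return leakage > TRIP_THRESHOLD_A

print(gfci_should_trip(10.000, 9.999))  # False: 1 mA imbalance, no trip
print(gfci_should_trip(10.000, 9.990))  # True: 10 mA leaking to ground
```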

Jacobs, Clinton O.

1977-01-01

320

A Fault Prediction Model with Limited Fault Data to Improve Test Process  

Microsoft Academic Search

Software fault prediction models are used to identify the fault-prone software modules and produce reliable software. Performance of a software fault prediction model is correlated with available software metrics and fault data. On some occasions, there may be few software modules having fault data and therefore, prediction models using only labeled data cannot provide accurate results. Semi-supervised learning approaches

Cagatay Catal; Banu Diri

2008-01-01

321

Extraction and Simulation of Realistic CMOS Faults Using Inductive Fault Analysis  

Microsoft Academic Search

FXT is a software tool which implements inductive fault analysis for CMOS circuits. It extracts a comprehensive list of circuit-level faults for any given CMOS circuit and ranks them according to their relative likelihood of occurrence. Five commercial CMOS circuits are analyzed using FXT. Of the extracted faults, approximately 50% can be modeled by the single-line stuck-at 0/1 fault model. Faults

John Paul Shen; F. Joel Ferguson

1988-01-01

322

Abstract--Fault collapsing is the process of reducing the number of faults by using redundance and equiva-  

E-print Network

Abstract--Fault collapsing is the process of reducing the number of faults by using redundance and equivalence/dominance relationships among faults. Exact global fault collapsing can be easily applied fault collapsing method for library modules that uses both binary decision diagrams and fault
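
Equivalence collapsing is easy to see on a single gate. The sketch below computes, for every single stuck-at fault on a 2-input AND gate, the set of input vectors that detect it and groups faults with identical detection sets into one equivalence class. This is the classical textbook notion; the BDD-based library-module method described in the record is not reproduced.

```python
from itertools import product

# Equivalence-based fault collapsing for single stuck-at faults on z = a AND b.
# Faults with identical detecting test sets are equivalent and collapse together.

LINES = ["a", "b", "z"]

def evaluate(a, b, fault=None):
    vals = {"a": a, "b": b}
    if fault and fault[0] in vals:
        vals[fault[0]] = fault[1]
    vals["z"] = vals["a"] & vals["b"]
    if fault and fault[0] == "z":
        vals["z"] = fault[1]
    return vals["z"]

faults = [(line, v) for line in LINES for v in (0, 1)]
tests = list(product((0, 1), repeat=2))

detects = {f: frozenset(t for t in tests if evaluate(*t, fault=f) != evaluate(*t))
           for f in faults}

classes = {}
for f, d in detects.items():
    classes.setdefault(d, []).append(f)

for d, fs in classes.items():
    print(fs, "detected by", sorted(d))   # a/0, b/0, z/0 fall into one class
```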

Al-Asaad, Hussain

323

Tsunamis and splay fault dynamics  

USGS Publications Warehouse

The geometry of a fault system can have significant effects on tsunami generation, but most tsunami models to date have not investigated the dynamic processes that determine which path rupture will take in a complex fault system. To gain insight into this problem, we use the 3D finite element method to model the dynamics of a plate boundary/splay fault system. We use the resulting ground deformation as a time-dependent boundary condition for a 2D shallow-water hydrodynamic tsunami calculation. We find that if the stress distribution is homogeneous, rupture remains on the plate boundary thrust. When a barrier is introduced along the strike of the plate boundary thrust, rupture propagates to the splay faults, and produces a significantly larger tsunami than in the homogeneous case. The results have implications for the dynamics of megathrust earthquakes, and also suggest that dynamic earthquake modeling may be a useful tool in tsunami research. Copyright 2009 by the American Geophysical Union.

Wendt, J.; Oglesby, D.D.; Geist, E.L.

2009-01-01

324

Denali Fault: Black Rapids Glacier  

USGS Multimedia Gallery

View eastward along Black Rapids Glacier. The Denali fault follows the trace of the glacier. These very large rockslides went a mile across the glacier on the right side. Investigations of the headwall of the middle landslide indicate a volume at least as large as that which fell, has dropped a mete...

325

Cell boundary fault detection system  

DOEpatents

An apparatus and program product determine a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.

Archer, Charles Jens (Rochester, MN); Pinnow, Kurt Walter (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian Edward (Rochester, MN)

2011-04-19

326

Implementing fault-tolerant sensors  

NASA Technical Reports Server (NTRS)

One aspect of fault tolerance in process control programs is the ability to tolerate sensor failure. A methodology is presented for transforming a process control program that cannot tolerate sensor failures to one that can. Additionally, a hierarchy of failure models is identified.
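
One concrete way to tolerate sensor failures, in the spirit of the abstract-sensor work this report belongs to, is interval fusion: each sensor reports an interval guaranteed to contain the true value when the sensor is correct, and if at most f of the n sensors are faulty the true value must lie where at least n - f intervals overlap. The sketch below implements that idea with made-up readings; it illustrates the concept rather than the report's program-transformation methodology.

```python
# Interval-based fault-tolerant sensor fusion sketch: return the smallest interval
# containing every point covered by at least n - f sensor intervals. Readings are
# illustrative; the faulty sensor is the obviously inconsistent one.

def fuse(intervals, f):
    n = len(intervals)
    events = []
    for lo, hi in intervals:
        events.append((lo, +1))
        events.append((hi, -1))
    events.sort(key=lambda e: (e[0], -e[1]))  # process starts before ends at ties
    best_lo = best_hi = None
    count = 0
    for x, delta in events:
        count += delta
        if delta == +1 and count >= n - f and best_lo is None:
            best_lo = x                        # first point with enough coverage
        if delta == -1 and count == n - f - 1:
            best_hi = x                        # last point where coverage drops below n - f
    return (best_lo, best_hi)

readings = [(10.0, 12.0), (10.5, 11.5), (11.0, 13.0), (25.0, 26.0)]  # last one faulty
print(fuse(readings, f=1))  # -> (11.0, 11.5)
```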

Marzullo, Keith

1989-01-01

327

Predeployment validation of fault-tolerant systems through software-implemented fault insertion  

NASA Technical Reports Server (NTRS)

Fault injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed realtime systems under fault-free and faulted conditions is described. A survey is presented of validation methodologies. The need for fault insertion based on validation methodologies is demonstrated. The origins and models of faults, and motivation for the FIAT concept are reviewed. FIAT employs a validation methodology which builds confidence in the system through first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults or the manifestation of faults to be inserted by either seeding faults into memory or triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving insertion of faults. There is a common system interface which allows ease of use to decrease experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are shown by two example experiments each using a different fault-tolerance strategy.
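
The memory-seeding style of fault insertion described above can be illustrated in a few lines: flip one bit of a program's data image, rerun the workload, and compare against the fault-free ("golden") result. The toy workload and memory image below are stand-ins, not part of the FIAT environment.

```python
import random

# Minimal software-implemented fault insertion sketch: flip one bit in a data image
# and check whether the (toy) computation still matches the fault-free baseline.

def run_workload(memory):
    """Toy computation over the memory image: checksum of all bytes."""
    return sum(memory) % 256

def inject_bit_flip(memory, rng):
    byte_index = rng.randrange(len(memory))
    bit_index = rng.randrange(8)
    memory[byte_index] ^= (1 << bit_index)
    return byte_index, bit_index

rng = random.Random(42)
golden = bytearray(range(64))      # fault-free memory image
baseline = run_workload(golden)    # fault-free ("golden") result

faulty = bytearray(golden)
location = inject_bit_flip(faulty, rng)
outcome = run_workload(faulty)
print(f"flip at {location}: baseline={baseline}, faulty={outcome}, detected={outcome != baseline}")
```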

Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

1989-01-01

328

Fault Diagnosis in HVAC Chillers  

NASA Technical Reports Server (NTRS)

Modern buildings are being equipped with increasingly sophisticated power and control systems with substantial capabilities for monitoring and controlling the amenities. Operational problems associated with heating, ventilation, and air-conditioning (HVAC) systems plague many commercial buildings, often the result of degraded equipment, failed sensors, improper installation, poor maintenance, and improperly implemented controls. Most existing HVAC fault-diagnostic schemes are based on analytical models and knowledge bases. These schemes are adequate for generic systems. However, real-world systems significantly differ from the generic ones and necessitate modifications of the models and/or customization of the standard knowledge bases, which can be labor intensive. Data-driven techniques for fault detection and isolation (FDI) have a close relationship with pattern recognition, wherein one seeks to categorize the input-output data into normal or faulty classes. Owing to the simplicity and adaptability, customization of a data-driven FDI approach does not require in-depth knowledge of the HVAC system. It enables the building system operators to improve energy efficiency and maintain the desired comfort level at a reduced cost. In this article, we consider a data-driven approach for FDI of chillers in HVAC systems. To diagnose the faults of interest in the chiller, we employ multiway dynamic principal component analysis (MPCA), multiway partial least squares (MPLS), and support vector machines (SVMs). The simulation of a chiller under various fault conditions is conducted using a standard chiller simulator from the American Society of Heating, Refrigerating, and Air-conditioning Engineers (ASHRAE). We validated our FDI scheme using experimental data obtained from different types of chiller faults.
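
A minimal data-driven detector of the kind discussed here can be built from ordinary PCA: fit principal components on normal-operation data and flag samples whose residual (squared prediction error, the Q or SPE statistic) is unusually large. The synthetic data, number of retained components, and threshold below are illustrative assumptions; the article's MPCA/MPLS/SVM pipeline and the ASHRAE chiller simulator are not reproduced.

```python
import numpy as np

# PCA-based fault detection sketch: large SPE (Q statistic) in the residual subspace
# signals a departure from the correlation structure of normal operation.

rng = np.random.default_rng(0)

def make_samples(n):
    x = rng.normal(size=(n, 5))
    x[:, 1] = 0.9 * x[:, 0] + 0.1 * x[:, 1]   # sensors 0 and 1 correlated when healthy
    return x

normal = make_samples(200)
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
P = vt[:2].T                                  # retain two principal components

def spe(sample):
    x = sample - mean
    residual = x - P @ (P.T @ x)              # part not explained by the PCA model
    return float(residual @ residual)

threshold = np.quantile([spe(row) for row in normal], 0.99)

healthy = make_samples(1)[0]
faulty = healthy.copy()
faulty[1] += 8.0                              # simulated large bias fault on sensor 1
print("healthy flagged:", spe(healthy) > threshold)   # usually False
print("faulty flagged: ", spe(faulty) > threshold)    # True for a bias this large
```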

Choi, Kihoon; Namuru, Setu M.; Azam, Mohammad S.; Luo, Jianhui; Pattipati, Krishna R.; Patterson-Hine, Ann

2005-01-01

329

Fault-Tolerant Heat Exchanger  

NASA Technical Reports Server (NTRS)

A compact, lightweight heat exchanger has been designed to be fault-tolerant in the sense that a single-point leak would not cause mixing of heat-transfer fluids. This particular heat exchanger is intended to be part of the temperature-regulation system for habitable modules of the International Space Station and to function with water and ammonia as the heat-transfer fluids. The basic fault-tolerant design is adaptable to other heat-transfer fluids and heat exchangers for applications in which mixing of heat-transfer fluids would pose toxic, explosive, or other hazards: Examples could include fuel/air heat exchangers for thermal management on aircraft, process heat exchangers in the cryogenic industry, and heat exchangers used in chemical processing. The reason this heat exchanger can tolerate a single-point leak is that the heat-transfer fluids are everywhere separated by a vented volume and at least two seals. The combination of fault tolerance, compactness, and light weight is implemented in a unique heat-exchanger core configuration: Each fluid passage is entirely surrounded by a vented region bridged by solid structures through which heat is conducted between the fluids. Precise, proprietary fabrication techniques make it possible to manufacture the vented regions and heat-conducting structures with very small dimensions to obtain a very large coefficient of heat transfer between the two fluids. A large heat-transfer coefficient favors compact design by making it possible to use a relatively small core for a given heat-transfer rate. Calculations and experiments have shown that in most respects, the fault-tolerant heat exchanger can be expected to equal or exceed the performance of the non-fault-tolerant heat exchanger that it is intended to supplant (see table). The only significant disadvantages are a slight weight penalty and a small decrease in the mass-specific heat transfer.

Izenson, Michael G.; Crowley, Christopher J.

2005-01-01

330

Geologic map + fault mechanics problem set  

NSDL National Science Digital Library

This exercise requires students to answer some questions about stress and fault mechanics that relate to geologic maps. In part A) students must draw a cross section and Mohr circles and make some calculations to explain the slip history and mechanics of two generations of normal faults. In part B) students interpret the faulting history and fault mechanics of the Yerington District, Nevada, based on a classic geologic map and cross section by John Proffett. keywords: geologic map, cross section, normal faults, Mohr circle, Coulomb failure, Andersonian theory, frictional sliding, Byerlee's law
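
A worked version of the calculation the problem set asks for: resolve the normal and shear stress on a plane from assumed principal stresses and test the Coulomb criterion with Byerlee-style friction. The stress values and orientations below are arbitrary illustrations, not values from the Proffett map area.

```python
import math

# Mohr circle / Coulomb failure sketch. Compression is taken positive; theta is the
# angle between the plane's normal and the sigma_1 direction. Values are illustrative.

def stresses_on_plane(sigma1, sigma3, theta_deg):
    """Normal and shear stress on a plane whose normal makes angle theta with sigma1."""
    theta = math.radians(theta_deg)
    mean = 0.5 * (sigma1 + sigma3)
    dev = 0.5 * (sigma1 - sigma3)
    sigma_n = mean + dev * math.cos(2 * theta)
    tau = dev * math.sin(2 * theta)
    return sigma_n, tau

def coulomb_fails(sigma_n, tau, cohesion=0.0, mu=0.6):
    """Byerlee-style frictional failure check (mu ~ 0.6, cohesion neglected)."""
    return abs(tau) >= cohesion + mu * sigma_n

sigma1, sigma3 = 100.0, 30.0          # MPa, assumed principal stresses
for theta in (30.0, 60.0, 75.0):      # planes of different orientation
    sn, t = stresses_on_plane(sigma1, sigma3, theta)
    print(f"theta={theta:4.1f}  sigma_n={sn:6.1f}  tau={t:5.1f}  slips={coulomb_fails(sn, t)}")
```

With these numbers only the plane whose normal lies 60° from sigma_1 (i.e. a plane at about 30° to sigma_1) satisfies the criterion, which is the expected optimal orientation for a friction coefficient near 0.6.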

John Singleton

331

Multiple Fault Isolation in Redundant Systems  

NASA Technical Reports Server (NTRS)

Fault diagnosis in large-scale systems that are products of modern technology presents formidable challenges to manufacturers and users. This is due to the large number of failure sources in such systems and the need to quickly isolate and rectify failures with minimal down time. In addition, for fault-tolerant systems and systems with infrequent opportunity for maintenance (e.g., Hubble telescope, space station), the assumption of at most a single fault in the system is unrealistic. In this project, we have developed novel block and sequential diagnostic strategies to isolate multiple faults in the shortest possible time without making the unrealistic single fault assumption.

Pattipati, Krishna R.; Patterson-Hine, Ann; Iverson, David

1997-01-01

332

Multiple Fault Isolation in Redundant Systems  

NASA Technical Reports Server (NTRS)

Fault diagnosis in large-scale systems that are products of modern technology presents formidable challenges to manufacturers and users. This is due to the large number of failure sources in such systems and the need to quickly isolate and rectify failures with minimal down time. In addition, for fault-tolerant systems and systems with infrequent opportunity for maintenance (e.g., Hubble telescope, space station), the assumption of at most a single fault in the system is unrealistic. In this project, we have developed novel block and sequential diagnostic strategies to isolate multiple faults in the shortest possible time without making the unrealistic single fault assumption.

Pattipati, Krishna R.

1997-01-01

333

Mapping tasks into fault tolerant manipulators  

SciTech Connect

The application of robots in critical missions in hazardous environments requires the development of reliable or fault tolerant manipulators. In this paper, we define fault tolerance as the ability to continue the performance of a task after immobilization of a joint due to failure. Initially, no joint limits are considered, in which case we prove the existence of fault tolerant manipulators and develop an analysis tool to determine the fault tolerant work space. We also derive design templates for spatial fault tolerant manipulators. When joint limits are introduced, analytic solutions become infeasible but instead a numerical design procedure can be used, as is illustrated through an example.

Paredis, C.J.J.; Khosla, P.K.; Kanade, T. [Carnegie Mellon Univ., Pittsburgh, PA (United States)

1994-12-31

334

Towards a late Quaternary tephrochronological framework for the southernmost part of South America - the Laguna Potrok Aike tephra record  

NASA Astrophysics Data System (ADS)

A total of 18 tephra samples have been analysed from the composite sediment sequence from Site 2 of the Laguna Potrok Aike ICDP expedition 5022 from southern Patagonia, Argentina, which extends back to ca 51 ka cal BP. Analyses of the volcanic glass show that all layers but one are rhyolitic in composition, with SiO2 contents ranging between ca 74.5 and 78 wt% and suggest an origin in the Austral Andean Volcanic Zone (AVZ; 49-55°S). Nonetheless, two main data clusters occur, one group with K2O contents between ca 1.5 and 2.0 wt%, indicating an origin in the Mt. Burney volcanic area, and one group with K2O contents between ca 2.7 and 3.9 wt%, tentatively correlated with Viedma/Lautaro and the Aguilera volcanoes in the northern part of the AVZ. The early Holocene Tephra, MB1 and the late Pleistocene Reclus R1 tephra occur in the upper part of the sequence. Periods with significant tephra deposition occurred between ca 51-44 ka cal BP, and ca 31-25 ka cal BP, with a decrease in tephra layer frequency between these two periods.

Wastegård, S.; Veres, D.; Kliem, P.; Hahn, A.; Ohlendorf, C.; Zolitschka, B.; Pasado Science Team

2013-07-01

335

late Pleistocene and Holocene pollen record from Laguna de las Trancas, northern coastal Santa Cruz County, California  

USGS Publications Warehouse

A 2.1-m core from Laguna de las Trancas, a marsh atop a landslide in northern Santa Cruz County, California, has yielded a pollen record for the period between about 30,000 B. P. and roughly 5000 B. P. Three pollen zones are recognized. The earliest is characterized by high frequencies of pine pollen and is correlated with a mid-Wisconsinan interstade of the mid-continent. The middle zone contains high frequencies of both pine and fir (Abies, probably A. grandis) pollen and is correlated with the last full glacial interval (upper Wisconsinan). The upper zone is dominated by redwood (Sequoia) pollen and represents latest Pleistocene to middle Holocene. The past few thousand years are not represented in the core. The pollen evidence indicates that during the full glacial period the mean annual temperature at the site was about 2°C to 3°C lower than it is today. We attribute this small difference to the stabilizing effect of marine upwelling on the temperature regime in the immediate vicinity of the coast. Precipitation may have been about 20 percent higher as a result of longer winter wet seasons.

Adam, David P.; Byrne, Roger; Luther, Edgar

1981-01-01

336

Rule-based fault diagnosis of hall sensors and fault-tolerant control of PMSM  

NASA Astrophysics Data System (ADS)

Hall sensors are widely used for estimating the rotor phase of a permanent magnet synchronous motor (PMSM). Rotor position is an essential parameter of the PMSM control algorithm, hence it is very dangerous if Hall sensor faults occur. However, there is scarcely any research focusing on fault diagnosis and fault-tolerant control of the Hall sensors used in a PMSM. From this standpoint, the Hall sensor faults which may occur during PMSM operation are theoretically analyzed. According to the analysis results, a fault diagnosis algorithm for the Hall sensors, based on three rules, is proposed to classify the fault phenomena accurately. Rotor phase estimation algorithms, based on one or two Hall sensors, are developed to enable the fault-tolerant control algorithm. The fault diagnosis algorithm can detect 60 Hall fault phenomena in total, and all detections can be completed within 1/138 of a rotor rotation period. The fault-tolerant control algorithm can achieve smooth torque production, which means the same control effect as the normal control mode (with three Hall sensors). Finally, a PMSM bench test verifies the accuracy and rapidity of the fault diagnosis and fault-tolerant control strategies. The fault diagnosis algorithm can detect all Hall sensor faults promptly, and the fault-tolerant control algorithm allows the PMSM to operate through failure conditions of one or two Hall sensors. In addition, the transitions between healthy control and fault-tolerant control conditions are smooth, without any additional noise and harshness. The proposed algorithms can deal with Hall sensor faults of a PMSM in real applications, realizing fault diagnosis and fault-tolerant control of the PMSM.
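
Two of the rules such a diagnosis can rely on are easy to sketch: with three Hall sensors, the codes 000 and 111 never occur in healthy operation, and consecutive codes must follow the six-step commutation sequence. The sequence below is one common ordering chosen for illustration; a real drive's sequence depends on sensor placement, and this is not the paper's full 60-case classification.

```python
# Rule-based Hall-sensor fault check sketch for a three-sensor BLDC/PMSM drive.
# VALID_SEQUENCE is one common six-step ordering, used here only for illustration.

VALID_SEQUENCE = [0b001, 0b011, 0b010, 0b110, 0b100, 0b101]  # repeats cyclically

def classify(prev_code, code):
    if code in (0b000, 0b111):
        return "fault: invalid code (sensor stuck or disconnected)"
    if prev_code in VALID_SEQUENCE and code != prev_code:
        i = VALID_SEQUENCE.index(prev_code)
        expected = {VALID_SEQUENCE[(i + 1) % 6], VALID_SEQUENCE[(i - 1) % 6]}
        if code not in expected:
            return "fault: illegal transition (possible single-sensor failure)"
    return "ok"

print(classify(0b001, 0b011))  # ok: adjacent commutation step
print(classify(0b001, 0b110))  # fault: skipped steps
print(classify(0b011, 0b111))  # fault: invalid code
```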

Song, Ziyou; Li, Jianqiu; Ouyang, Minggao; Gu, Jing; Feng, Xuning; Lu, Dongbin

2013-07-01

337

Model-Based Fault Tolerant Control  

NASA Technical Reports Server (NTRS)

The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted take offs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms were developed and evaluated. Based on the performance and maturity of the developed algorithms two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability thereby enabling continued engine operation in the presence of these faults. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.

Kumar, Aditya; Viassolo, Daniel

2008-01-01

338

Fault diagnosis for magnetic bearing systems  

NASA Astrophysics Data System (ADS)

A full fault diagnosis scheme for active magnetic bearing (AMB) and rotor systems, to monitor the closed-loop operation and analyze fault patterns on-line in case any malfunction occurs, is proposed in this paper. Most traditional approaches for fault diagnosis are based on actuator or sensor diagnosis individually and can only detect a single fault at a time. This research combines two diagnosis methodologies, using both state estimators and parameter estimators, to detect, identify and analyze actuator and sensor faults in AMB/rotor systems. The proposed fault diagnosis algorithm not only enhances the diagnosis accuracy, but also demonstrates the capability to detect multiple sensor faults which occur concurrently. The efficacy of the presented algorithm has been verified by computer simulations and intensive experiments. The test rig for the experiments is equipped with an AMB, an interface module (dSPACE DS1104), a data acquisition unit, and a MATLAB/Simulink simulation environment. Finally, the fault patterns, such as bias, multiplicative loop gain variation and noise addition, can be identified by the algorithm presented in this work. In other words, the proposed diagnosis algorithm is able to detect faults at the first moment, find which sensors or actuators are under failure, and identify which fault pattern the found faults belong to.

Tsai, Nan-Chyuan; King, Yueh-Hsun; Lee, Rong-Mao

2009-05-01

339

Tool for Viewing Faults Under Terrain  

NASA Technical Reports Server (NTRS)

Multi Surface Light Table (MSLT) is an interactive software tool that was developed in support of the QuakeSim project, which has created an earthquake- fault database and a set of earthquake- simulation software tools. MSLT visualizes the three-dimensional geometries of faults embedded below the terrain and animates time-varying simulations of stress and slip. The fault segments, represented as rectangular surfaces at dip angles, are organized into collections, that is, faults. An interface built into MSLT queries and retrieves fault definitions from the QuakeSim fault database. MSLT also reads time-varying output from one of the QuakeSim simulation tools, called "Virtual California." Stress intensity is represented by variations in color. Slips are represented by directional indicators on the fault segments. The magnitudes of the slips are represented by the duration of the directional indicators in time. The interactive controls in MSLT provide a virtual track-ball, pan and zoom, translucency adjustment, simulation playback, and simulation movie capture. In addition, geographical information on the fault segments and faults is displayed on text windows. Because of the extensive viewing controls, faults can be seen in relation to one another, and to the terrain. These relations can be realized in simulations. Correlated slips in parallel faults are visible in the playback of Virtual California simulations.

Siegel, Herbert, L.; Li, P. Peggy

2005-01-01

340

Multiple sensor fault diagnosis for dynamic processes.  

PubMed

Modern industrial plants are usually large scaled and contain a great amount of sensors. Sensor fault diagnosis is crucial and necessary to process safety and optimal operation. This paper proposes a systematic approach to detect, isolate and identify multiple sensor faults for multivariate dynamic systems. The current work first defines deviation vectors for sensor observations, and further defines and derives the basic sensor fault matrix (BSFM), consisting of the normalized basic fault vectors, by several different methods. By projecting a process deviation vector to the space spanned by BSFM, this research uses a vector with the resulted weights on each direction for multiple sensor fault diagnosis. This study also proposes a novel monitoring index and derives corresponding sensor fault detectability. The study also utilizes that vector to isolate and identify multiple sensor faults, and discusses the isolatability and identifiability. Simulation examples and comparison with two conventional PCA-based contribution plots are presented to demonstrate the effectiveness of the proposed methodology. PMID:20542268
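
The projection step described here, expressing an observed deviation vector in terms of basic sensor-fault directions and reading off the weights, can be sketched as a least-squares fit. The fault-direction matrix below is a deliberately simple (identity) choice and the threshold is arbitrary; the paper's BSFM construction, monitoring index, and detectability analysis are not reproduced.

```python
import numpy as np

# Sketch of projecting a deviation vector onto a basis of sensor-fault directions.
# Columns of bsfm are the (here trivial) normalized fault directions, one per sensor.

n_sensors = 4
bsfm = np.eye(n_sensors)                       # hypothetical basic fault direction per sensor

deviation = np.array([0.1, 3.2, -0.05, -2.8])  # observed deviation from normal operation

weights, *_ = np.linalg.lstsq(bsfm, deviation, rcond=None)
faulty = [i for i, w in enumerate(weights) if abs(w) > 1.0]   # hypothetical threshold
print("estimated fault weights:", np.round(weights, 2))
print("suspected faulty sensors:", faulty)
```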

Li, Cheng-Chih; Jeng, Jyh-Cheng

2010-10-01

341

Alp Transit: Crossing Faults 44 and 49  

NASA Astrophysics Data System (ADS)

This paper describes the crossing of faults 44 and 49 when constructing the 57 km Gotthard base tunnel of the Alp Transit project. Fault 44 is a permeable fault that triggered significant surface deformations 1,400 m above the tunnel when it was reached by the advancing excavation. The fault runs parallel to the downstream face of the Nalps arch dam. Significant deformations were measured at the dam crown. Fault 49 is sub-vertical and permeable, and runs parallel at the upstream face of the dam. It was necessary to assess the risk when crossing fault 49, as a limit was put on the acceptable dam deformation for structural safety. The simulation model, forecasts and action decided when crossing over the faults are presented, with a brief description of the tunnel, the dam, and the monitoring system.

El Tani, M.; Bremen, R.

2014-05-01

342

Arc burst pattern analysis fault detection system  

NASA Technical Reports Server (NTRS)

A method and apparatus are provided for detecting an arcing fault on a power line carrying a load current. Parameters indicative of power flow and possible fault events on the line, such as voltage and load current, are monitored and analyzed for an arc burst pattern exhibited by arcing faults in a power system. These arcing faults are detected by identifying bursts of each half-cycle of the fundamental current. Bursts occurring at or near a voltage peak indicate arcing on that phase. Once a faulted phase line is identified, a comparison of the current and voltage reveals whether the fault is located in a downstream direction of power flow toward customers, or upstream toward a generation station. If the fault is located downstream, the line is de-energized, and if located upstream, the line may remain energized to prevent unnecessary power outages.

Russell, B. Don (Inventor); Aucoin, B. Michael (Inventor); Benner, Carl L. (Inventor)

1997-01-01

343

Longest fault-free paths in hypercubes with vertex faults  

Microsoft Academic Search

The hypercube is one of the most versatile and efficient interconnection networks (networks for short) so far discovered for parallel computation. Let f denote the number of faulty vertices in an n-cube. This study demonstrates that when f ≤ n-2, the n-cube contains a fault-free path with length at least 2^n - 2f - 1 (or 2^n - 2f - 2) between two arbitrary vertices of odd (or even) distance.
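
The bound is easy to sanity-check exhaustively for a small cube. The sketch below enumerates every single-vertex fault in a 3-cube and every pair of fault-free endpoints, finds the longest fault-free simple path by depth-first search, and compares it against the quoted bound; this is only a toy verification for n = 3, not the paper's proof.

```python
from itertools import combinations

# Brute-force check of the quoted bound for n = 3, f = 1: path length of at least
# 2^n - 2f - 1 (odd-distance endpoints) or 2^n - 2f - 2 (even distance).

def neighbors(v, n):
    return [v ^ (1 << i) for i in range(n)]

def longest_path(u, v, faulty, n):
    """Length (in edges) of the longest fault-free simple path from u to v."""
    best = -1
    def dfs(cur, visited, length):
        nonlocal best
        if cur == v:
            best = max(best, length)
        for w in neighbors(cur, n):
            if w not in visited and w not in faulty:
                dfs(w, visited | {w}, length + 1)
    dfs(u, {u}, 0)
    return best

n, f = 3, 1
ok = True
for faulty in combinations(range(2 ** n), f):
    healthy = [v for v in range(2 ** n) if v not in faulty]
    for u, v in combinations(healthy, 2):
        odd = bin(u ^ v).count("1") % 2 == 1
        bound = 2 ** n - 2 * f - (1 if odd else 2)
        if longest_path(u, v, set(faulty), n) < bound:
            ok = False
print("bound holds for n=3, f=1:", ok)
```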

Jung-sheng Fu

2006-01-01

344

Silica Lubrication in Faults (Invited)  

NASA Astrophysics Data System (ADS)

Silica-rich rocks are common in the crust, so silica lubrication may be important for causing fault weakening during earthquakes if the phenomenon occurs in nature. In laboratory friction experiments on chert, dramatic shear weakening has been attributed to amorphization and attraction of water from atmospheric humidity to form a 'silica gel'. Few observations of the slip surfaces have been reported, and the details of weakening mechanism(s) remain enigmatic. Therefore, no criteria exist on which to make comparisons of experimental materials to natural faults. We performed a series of friction experiments, characterized the materials formed on the sliding surface, and compared these to a geological fault in the same rock type. Experiments were performed in the presence of room humidity at 2.5 MPa normal stress with 3 and 30 m total displacement for a variety of slip rates (10⁻⁴ - 10⁻¹ m/s). The friction coefficient (μ) reduced from >0.6 to ~0.2 at 10⁻¹ m/s, but only fell to ~0.4 at 10⁻² - 10⁻⁴ m/s. The slip surfaces and wear material were observed using laser confocal Raman microscopy, electron microprobe, X-ray diffraction, and transmission electron microscopy. Experiments at 10⁻¹ m/s formed wear material consisting of ≤1 μm powder that is aggregated into irregular 5-20 μm clumps. Some material disaggregated during analysis with electron beams and lasers, suggesting hydrous and unstable components. Compressed powder forms smooth pavements on the surface in which grains are not visible (if present, they are <100 nm). Powder contains amorphous material and as yet unidentified crystalline and non-crystalline forms of silica (not quartz), while the worn chert surface underneath shows Raman spectra consistent with a mixture of quartz and amorphous material. If silica amorphization facilitates shear weakening in natural faults, similar wear materials should be formed, and we may be able to identify them through microstructural studies. However, the sub-micron particles of unstable materials are unlikely to survive in the crust over geologic time, so a direct comparison of fresh experimental wear material and ancient fault rock needs to account for the alteration and crystallization of primary materials. The surface of the Corona fault is coated by a translucent shiny layer consisting of ~100 nm interlocking groundmass of dislocation-free quartz, 10 nm ellipsoidal particles, and interstitial patches of amorphous silica. We interpret this layer as the equivalent of the experimentally produced amorphous material after crystallizing to more stable forms over geological time.

Rowe, C. D.; Rempe, M.; Lamothe, K.; Kirkpatrick, J. D.; White, J. C.; Mitchell, T. M.; Andrews, M.; Di Toro, G.

2013-12-01

345

Frictional constraints on crustal faulting  

USGS Publications Warehouse

We consider how variations in fault frictional properties affect the phenomenology of earthquake faulting. In particular, we propose that lateral variations in fault friction produce the marked heterogeneity of slip observed in large earthquakes. We model these variations using a rate- and state-dependent friction law, where we differentiate velocity-weakening behavior into two fields: the strong seismic field is very velocity weakening and the weak seismic field is slightly velocity weakening. Similarly, we differentiate velocity-strengthening behavior into two fields: the compliant field is slightly velocity strengthening and the viscous field is very velocity strengthening. The strong seismic field comprises the seismic slip concentrations, or asperities. The two "intermediate" fields, weak seismic and compliant, have frictional velocity dependences that are close to velocity neutral: these fields modulate both the tectonic loading and the dynamic rupture process. During the interseismic period, the weak seismic and compliant regions slip aseismically, while the strong seismic regions remain locked, evolving into stress concentrations that fail only in main shocks. The weak seismic areas exhibit most of the interseismic activity and aftershocks but can also creep seismically. This "mixed" frictional behavior can be obtained from a sufficiently heterogeneous distribution of the critical slip distance. The model also provides a mechanism for rupture arrest: dynamic rupture fronts decelerate as they penetrate into unloaded compliant or weak seismic areas, producing broad areas of accelerated afterslip. Aftershocks occur on both the weak seismic and compliant areas around a fault, but most of the stress is diffused through aseismic slip. Rapid afterslip on these peripheral areas can also produce aftershocks within the main shock rupture area by reloading weak fault areas that slipped in the main shock and then healed. We test this frictional model by comparing the seismicity and the coseismic slip for the 1966 Parkfield, 1979 Coyote Lake, and 1984 Morgan Hill earthquakes. The interevent seismicity and aftershocks appear to occur on fault areas outside the regions of significant slip: these regions are interpreted as either weak seismic or compliant, depending on whether or not they manifest interevent seismicity.
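
For reference, the rate- and state-dependent friction law invoked here is usually written in the Dieterich-Ruina form below, with the aging law for the state variable; the sign of (a - b) separates the velocity-weakening fields from the velocity-strengthening ones. This is the standard formulation, not an equation reproduced from the paper.

```latex
% Dieterich-Ruina rate-and-state friction with the aging law:
%   a - b < 0 : velocity weakening (strong/weak seismic fields above)
%   a - b > 0 : velocity strengthening (compliant/viscous fields above)
\mu = \mu_0 + a \ln\!\left(\frac{V}{V_0}\right) + b \ln\!\left(\frac{V_0\,\theta}{D_c}\right),
\qquad
\frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c}
```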

Boatwright, J.; Cocco, M.

1996-01-01

346

A Log-Scaling Fault Tolerant Agreement Algorithm for a Fault Tolerant MPI  

SciTech Connect

The lack of fault tolerance is becoming a limiting factor for application scalability in HPC systems. The MPI standard does not define fault tolerance interfaces and semantics. The MPI Forum's Fault Tolerance Working Group is proposing a collective fault tolerant agreement algorithm for the next MPI standard. Such algorithms play a central role in many fault tolerant applications. This paper combines a log-scaling two-phase commit agreement algorithm with a reduction operation to provide the necessary functionality for the new collective without any additional messages. Error handling mechanisms are described that preserve the fault tolerance properties while maintaining overall scalability.
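
A toy sketch of the underlying idea of riding an agreement on a reduction, so that a collective decision is reached in O(log2 P) rounds; this simulates the communication pattern in plain Python and is not the MPI Forum's proposed interface or the paper's algorithm.

    # Toy model of a tree-based agreement: every process holds a local
    # "no fault detected" flag, and the collective returns the logical AND of
    # all flags after ceil(log2(P)) combining rounds (simulated, not real MPI).
    def tree_agreement(flags):
        vals = list(flags)
        p = len(vals)
        step, rounds = 1, 0
        while step < p:
            for dst in range(0, p, 2 * step):
                src = dst + step
                if src < p:
                    vals[dst] = vals[dst] and vals[src]   # reduction op = AND
            step *= 2
            rounds += 1
        return vals[0], rounds

    decision, rounds = tree_agreement([True] * 15 + [False])  # one process reports a fault
    print(decision, rounds)   # -> False 4, i.e. ceil(log2(16)) combining rounds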

Hursey, Joshua J. [ORNL]; Naughton, Thomas J., III [ORNL]; Vallee, Geoffroy R. [ORNL]; Graham, Richard L. [ORNL]

2011-01-01

347

Fault tolerant operation of switched reluctance machine  

NASA Astrophysics Data System (ADS)

The energy crisis and environmental challenges have driven industry towards more energy efficient solutions. With nearly 60% of electricity consumed by various electric machines in the industry sector, advancement in the efficiency of the electric drive system is of vital importance. Adjustable speed drive systems (ASDS) provide excellent speed regulation and dynamic performance as well as dramatically improved system efficiency compared with conventional motors without electronic drives. Industry has witnessed tremendous growth in ASDS applications, not only as a driving force but also as an electric auxiliary system replacing bulky and low-efficiency auxiliary hydraulic and mechanical systems. With the vast penetration of ASDS, its fault tolerant operation capability is more widely recognized as an important feature of drive performance, especially for aerospace, automotive, and other industrial drive applications demanding high reliability. The Switched Reluctance Machine (SRM), a low cost, highly reliable electric machine with fault tolerant operation capability, has drawn substantial attention in the past three decades. Nevertheless, SRM is not free of faults. Certain faults such as converter faults, sensor faults, winding shorts, eccentricity and position sensor faults are commonly shared among all ASDS. In this dissertation, a thorough understanding of various faults and their influence on transient and steady state performance of SRM is developed via simulation and experimental study, providing the necessary knowledge for fault detection and post-fault management. Lumped parameter models are established for fast real-time simulation and drive control. Based on the behavior of the faults, a fault detection scheme is developed for fast and reliable fault diagnosis. In order to improve the SRM power and torque capacity under faults, maximum torque per ampere excitation is conceptualized and validated through theoretical analysis and experiments. With the proposed optimal waveform, torque production is greatly improved under the same Root Mean Square (RMS) current constraint. Additionally, position sensorless operation methods under phase faults are investigated to account for the combination of physical position sensor and phase winding faults. A comprehensive solution for position sensorless operation under single and multiple phase faults is proposed and validated through experiments. Continuous position sensorless operation with seamless transition between various numbers of faulted phases is achieved.

Wang, Wei

348

Perspective View, San Andreas Fault  

NASA Technical Reports Server (NTRS)

The prominent linear feature straight down the center of this perspective view is California's famous San Andreas Fault. The image, created with data from NASA's Shuttle Radar Topography Mission (SRTM), will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This segment of the fault lies west of the city of Palmdale, Calif., about 100 kilometers (about 60 miles) northwest of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. Two large mountain ranges are visible, the San Gabriel Mountains on the left and the Tehachapi Mountains in the upper right. Another fault, the Garlock Fault lies at the base of the Tehachapis; the San Andreas and the Garlock Faults meet in the center distance near the town of Gorman. In the distance, over the Tehachapi Mountains is California's Central Valley. Along the foothills in the right hand part of the image is the Antelope Valley, including the Antelope Valley California Poppy Reserve. The data used to create this image were acquired by SRTM aboard the Space Shuttle Endeavour, launched on February 11, 2000.

This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

SRTM uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense (DoD), and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC.

Size: Varies in a perspective view Location: 34.70 deg. North lat., 118.57 deg. West lon. Orientation: Looking Northwest Original Data Resolution: SRTM and Landsat: 30 meters (99 feet) Date Acquired: February 16, 2000

2000-01-01

349

Anisotropic Permeability of a Strike Slip Fault  

NASA Astrophysics Data System (ADS)

Pump tests were performed in isolated sections of two inclined ~200 m long boreholes that are ~130 meters apart from each other (WF-4 and WF-5 in Figure 1). The boreholes penetrate the Wildcat Fault, a semi-vertical strike slip fault, which is a member of the Hayward Fault system situated in the Berkeley Hills. The geology encountered in the boreholes was predominantly the Claremont Fm., extensively fractured and alternating sequences of chert, shale and sandstone. The drawdowns in four isolated sections of a vertical borehole (WF-1), drilled adjacent to the fault at distances of ~45 m and ~95 m from each of the inclined boreholes, were analyzed. The permeability of the fault plane was found to be two orders of magnitude higher than that of the protolith and anisotropic, with approximately 10-fold higher permeability in the near-horizontal direction, which is somewhat expected for a strike slip fault (Figure 2). Build-up analysis suggests that the fault is asymmetric, with higher permeability along the east side of the fault plane and lower along the west side. Figure 1. Pumping test configuration with two inclined boreholes (WF-4 and WF-5) intersecting the Wildcat Fault. The vertical borehole WF-1 is situated very close to the fault. Figure 2. Dimensionless directional drawdowns observed in four isolated sections in WF-1 in response to the pumping in WF-4 and WF-5 at a dimensionless time of 16. Also shown is the best fit permeability ellipse.
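
As background for the permeability ellipse mentioned above, the sketch below evaluates the standard directional permeability of a 2-D anisotropic medium, k(theta) = 1 / (cos^2(theta)/k_h + sin^2(theta)/k_v). The numerical values only echo the roughly tenfold horizontal-to-vertical contrast inferred for the fault plane; they are not the measured data.

    import math

    def directional_k(theta_deg, k_h=1e-14, k_v=1e-15):
        """Directional permeability (m^2) along a hydraulic gradient inclined at
        theta degrees from horizontal, for a 2-D anisotropic medium.
        k_h and k_v are illustrative, with ~10x higher horizontal permeability."""
        t = math.radians(theta_deg)
        return 1.0 / (math.cos(t) ** 2 / k_h + math.sin(t) ** 2 / k_v)

    for theta in (0, 30, 60, 90):
        print(f"theta = {theta:2d} deg: k = {directional_k(theta):.2e} m^2")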

Karasaki, K.; Goto, J.; Kiho, K.

2012-12-01

350

Illuminating Northern California's Active Faults  

Microsoft Academic Search

Newly acquired light detection and ranging (lidar) topographic data provide a powerful community resource for the study of landforms associated with the plate boundary faults of northern California (Figure 1). In the spring of 2007, GeoEarthScope, a component of the EarthScope Facility construction project funded by the U.S. National Science Foundation, acquired approximately 2000 square kilometers of airborne lidar topographic

Carol S. Prentice; Christopher J. Crosby; Caroline S. Whitehill; J. Ramón Arrowsmith; Kevin P. Furlong; David A. Phillips

2009-01-01

351

Fault-ignorant quantum search  

NASA Astrophysics Data System (ADS)

We investigate the problem of quantum searching on a noisy quantum computer. Taking a fault-ignorant approach, we analyze quantum algorithms that solve the task for various different noise strengths, which are possibly unknown beforehand. We prove lower bounds on the runtime of such algorithms and thereby find that the quadratic speedup is necessarily lost (in our noise models). However, for low but constant noise levels the algorithms we provide (based on Grover's algorithm) still outperform the best noiseless classical search algorithm.
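
As context for the quadratic speedup discussed above, the sketch below evaluates the textbook success probability of noiseless Grover search, sin^2((2k+1) theta) with theta = arcsin(1/sqrt(N)); it does not implement the paper's noise models or runtime bounds.

    import math

    def grover_success(n_items, k):
        """Success probability of noiseless Grover search after k iterations."""
        theta = math.asin(1.0 / math.sqrt(n_items))
        return math.sin((2 * k + 1) * theta) ** 2

    N = 1_000_000
    k_opt = math.floor(math.pi / (4 * math.asin(1 / math.sqrt(N))))
    print(k_opt, grover_success(N, k_opt))   # ~785 queries versus ~N/2 classically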

Vrana, Péter; Reeb, David; Reitzner, Daniel; Wolf, Michael M.

2014-07-01

352

Fault-ignorant Quantum Search  

E-print Network

We investigate the problem of quantum searching on a noisy quantum computer. Taking a 'fault-ignorant' approach, we analyze quantum algorithms that solve the task for various different noise strengths, which are possibly unknown beforehand. We prove lower bounds on the runtime of such algorithms and thereby find that the quadratic speedup is necessarily lost (in our noise models). However, for low but constant noise levels the algorithms we provide (based on Grover's algorithm) still outperform the best noiseless classical search algorithm.

Peter Vrana; David Reeb; Daniel Reitzner; Michael M. Wolf

2014-07-25

353

Fault-Tolerant FFT Networks  

Microsoft Academic Search

Two concurrent error detection (CED) schemes are proposed for N-point fast Fourier transform (FFT) networks that consist of log2(N) stages with N/2 two-point butterfly modules per stage. The method assumes that failures are confined to a single complex multiplier or adder or to one input or output set of lines. Such a fault model covers a broad class
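
The sketch below illustrates the flavor of concurrent error detection for an FFT using a simple output invariant (for an unnormalized DFT, the sum of the outputs equals N times the first input sample). This invariant check is an illustrative stand-in, not the butterfly-level scheme proposed in the paper.

    import numpy as np

    def fft_with_check(x, rel_tol=1e-9):
        """Compute an FFT and verify the invariant sum(X) == N * x[0], which holds
        for an unnormalized DFT and is violated by most single-output errors."""
        X = np.fft.fft(x)
        n = len(x)
        if abs(X.sum() - n * x[0]) > rel_tol * n * np.abs(x).sum():
            raise RuntimeError("FFT output failed the error-detection check")
        return X

    x = np.random.rand(1024)
    X = fft_with_check(x)                    # passes for a healthy computation
    corrupted = X.copy()
    corrupted[3] += 10.0                     # inject a single faulty output value
    print(abs(corrupted.sum() - len(x) * x[0]) > 1e-6)   # True: fault is detectable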

Jing-yang Jou; Jacob A. Abraham

1988-01-01

354

DEM simulation of growth normal fault slip  

NASA Astrophysics Data System (ADS)

Fault slip can deform shallow soil layers and damage overlying infrastructure. The Shanchiao fault, on the west side of the Taipei basin, is one such active structure: its activity deforms the Quaternary sediments beneath the Taipei basin and can damage structures, transportation works, and utility lines within the area. Geological drilling and dating data indicate that the Shanchiao fault behaves as a growth fault. In the experiments, a sandbox model built with non-cohesive sand was used to simulate growth faulting on the Shanchiao fault and to forecast its effect on the extent of shear band development and differential ground deformation. The results of the experiments showed that when a normal fault includes a growth fault, the shear band at the offset of the base rock develops upward along the weak side of the shear band formed in the original overlying soil layer, and this shear band reaches the surface much faster than in the case of a single cover layer. The offset ratio required (basement slip / lower cover soil thickness) is only about 1/3 of that for a single cover soil layer. In this research, a numerical simulation of the sandbox experiment was conducted with a Discrete Element Method program, PFC2D, to simulate the pace and extent of shear band development in the overlying sand layer during normal growth fault slip. The simulation results are very close to the outcome of the sandbox experiment, and the approach can be extended to the design of water pipeline projects around fault zones in the future. Keywords: Taipei Basin, Shanchiao fault, growth fault, PFC2D

Chu, Sheng-Shin; Lin, Ming-Lang; Nien, Wie-Tung; Chan, Pei-Chen

2014-05-01

355

Is the fault core-damage zone model representative of seismogenic faults? Pre-existing anisotropies and fault zone complexity  

NASA Astrophysics Data System (ADS)

Seismogenic fault zones are often described in terms of a "fault core" surrounded by an intensely fractured "damage zone". This useful framework has found broad application in many fault zone studies (hydraulic potential, etc.). However, we found it difficult to apply this model in the case of several seismogenic faults zones hosted in the continental crust of the Italian Southern Alps. As an example, we present quantitative field data (e.g. roughness analysis, fracture density profiles) derived from various digital mapping methods (LiDAR, RTK-GPS, high resolution photogrammetry) to illustrate two case studies of seismogenic strike-slip faults: 1) The Gole Larghe Fault Zone (GLFZ) hosted in granitoids and exhumed from 8-10 km depth, and, 2) The Borcola Pass Fault Zone (BPFZ) hosted in dolostones and exhumed from 1.5-2 km depth. Ancient seismicity is corroborated by the occurrence of pseudotachylytes (GLFZ) and fluidized cataclasites (BPFZ). Both of the studied fault zones accommodated < 2 km of displacement. Despite the large differences in exhumation depth and host rock lithology, both fault zones: 1) are up to several hundreds of meters thick; 2) consist of tens to hundreds of sub-parallel fault strands, connected by a network of minor faults and fractures; 3) most significantly, lack a well-defined fault core that accommodated a majority of fault displacement. Instead, displacement was distributed amongst the networks of minor faults and fractures. The above similarities can be explained by the fact that both fault zones developed in rock volumes containing strong pre-existing anisotropies: magmatic cooling joints sets spaced 2-5 m apart for the GLFZ, regional joint sets spaced < 1 m apart for the BPFZ. During initial development of both fault zones, the pre-existing anisotropies were diffusely reactivated over wide volumes. This was associated in both cases with extensive fluid flow, and sealing/hardening of the pre-existing anisotropies by syn-deformation mineral precipitation. Pre-existing anisotropies are a common occurrence in the continental crust (e.g. joints, bedding surfaces, old fault zones, cleavage surfaces): fault zones developing in such areas will be highly segmented and discontinuous, particularly during the early stages of fault evolution (first few kilometers of displacement?). We speculate that the absence of a leading fault may result in long duration earthquake sequences with several main shocks, especially if accompanied by fluid migration. This is the case for the L'Aquila 2008-2009 seismic sequence (mainshock Mw 6.3) occurring within a fault zone with ~1.5 km total displacement cutting limestones and dolostones (Chiaraluce et al. 2011). High-resolution aftershock locations suggest the re-activation of both optimally and non-optimally oriented small fault segments over a total fault zone width of ~1 km. The magnitude of aftershocks is consistent with activation of fault strands tens to hundreds of meters in length for a period of several months following the mainshocks.

Di Toro, G.; Smith, S. A.; Fondriest, M.; Bistacchi, A.; Nielsen, S. B.; Mitchell, T. M.; Mittempergher, S.; Griffith, W. A.

2012-12-01

356

Identifiability of Additive Actuator and Sensor Faults by State Augmentation  

NASA Technical Reports Server (NTRS)

A class of fault detection and identification (FDI) methods for bias-type actuator and sensor faults is explored in detail from the point of view of fault identifiability. The methods use state augmentation along with banks of Kalman-Bucy filters for fault detection, fault pattern determination, and fault value estimation. A complete characterization of conditions for identifiability of bias-type actuator faults, sensor faults, and simultaneous actuator and sensor faults is presented. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have unknown biases. The fault identifiability conditions are demonstrated via numerical examples. The analytical and numerical results indicate that caution must be exercised to ensure fault identifiability for different fault patterns when using such methods.
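
A minimal sketch of the state-augmentation idea for a scalar plant with a constant actuator bias estimated by a single Kalman filter; the plant model, noise levels, and bias value are invented for illustration, and the sketch omits the banks of filters and the identifiability analysis discussed above.

    import numpy as np

    # Augmented state z = [x, b]: plant x[k+1] = x[k] + u[k] + b, constant bias b.
    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    B = np.array([[1.0], [0.0]])
    C = np.array([[1.0, 0.0]])
    Q = np.diag([1e-4, 1e-6])     # process noise (bias modeled as nearly constant)
    R = np.array([[1e-2]])        # measurement noise variance

    rng = np.random.default_rng(0)
    true_bias, x = 0.5, 0.0
    z_hat, P = np.zeros((2, 1)), np.eye(2)
    for _ in range(200):
        u = -0.2 * x                                        # simple feedback input
        x = x + u + true_bias + rng.normal(0.0, 1e-2)       # true plant with bias
        y = x + rng.normal(0.0, 1e-1)                       # noisy measurement
        z_hat = A @ z_hat + B * u                           # Kalman predict
        P = A @ P @ A.T + Q
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)        # Kalman update
        z_hat = z_hat + K * (y - (C @ z_hat).item())
        P = (np.eye(2) - K @ C) @ P

    print(f"estimated actuator bias: {z_hat[1, 0]:.2f} (true value {true_bias})")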

Joshi, Suresh; Gonzalez, Oscar R.; Upchurch, Jason M.

2014-01-01

357

Networking of Near Fault Observatories in Europe  

NASA Astrophysics Data System (ADS)

Networking of six European near-fault observatories (NFOs) was established in the FP7 infrastructure project NERA (Network of European Research Infrastructures for Earthquake Risk Assessment and Mitigation). This networking has included sharing of expertise and know-how among the observatories, distribution of analysis tools, and access to data. The focus of the NFOs is on research into the active processes of their respective fault zones through acquisition and analysis of multidisciplinary data. These studies include the role of fluids in fault initiation, site effects, derived processes such as earthquake-generated tsunamis and landslides, mapping of the internal structure of fault systems, and development of automatic early warning systems. The six fault zones are in different tectonic regimes: the South Iceland Seismic Zone (SISZ) in Iceland, the Marmara Sea in Turkey, and the Corinth Rift in Greece are at plate boundaries, with strike-slip faulting characterizing the SISZ and the Marmara Sea, while normal faulting dominates in the Corinth Rift. The Alto Tiberina and Irpinia faults, dominated by low- and medium-angle normal faulting, respectively, lie in the Apennine mountain range in Italy, and the Valais Region, characterized by both strike-slip and normal faulting, is located in the Swiss Alps. The fault structures range from well-developed long faults, such as in the Marmara Sea, to more complex networks of smaller, book-shelf faults, such as in the SISZ. Earthquake hazard in the fault zones ranges from significant to substantial. The Marmara Sea and Corinth Rift lie beneath the sea, adding tsunami hazard, while steep slopes and sediment-filled valleys in the Valais give rise to hazards from landslides and liquefaction. Induced seismicity has repeatedly occurred in connection with geothermal drilling and water injection in the SISZ, and active volcanoes flanking the SISZ also give rise to volcanic hazard through volcano-tectonic interaction. Organization among the NERA NFOs has led to their gaining working-group status in EPOS as the WG on Near Fault Observatories, representing multidisciplinary research on faults and fault zones.

Vogfjörd, Kristín; Bernard, Pascal; Chiaraluce, Lauro; Fäh, Donat; Festa, Gaetano; Zulficar, Can

2014-05-01

358

Surface faulting along the Superstition Hills fault zone and nearby faults associated with the earthquakes of 24 November 1987  

USGS Publications Warehouse

The M6.2 Elmore Desert Ranch earthquake of 24 November 1987 was associated spatially and probably temporally with left-lateral surface rupture on many northeast-trending faults in and near the Superstition Hills in western Imperial Valley. Three curving discontinuous principal zones of rupture among these breaks extended northeastward from near the Superstition Hills fault zone as far as 9km; the maximum observed surface slip, 12.5cm, was on the northern of the three, the Elmore Ranch fault, at a point near the epicenter. Twelve hours after the Elmore Ranch earthquake, the M6.6 Superstition Hills earthquake occurred near the northwest end of the right-lateral Superstition Hills fault zone. We measured displacements over 339 days at as many as 296 sites along the Superstition Hills fault zone, and repeated measurements at 49 sites provided sufficient data to fit with a simple power law. The overall distributions of right-lateral displacement at 1 day and the estimated final slip are nearly symmetrical about the midpoint of the surface rupture. The average estimated final right-lateral slip for the Superstition Hills fault zone is ~54cm. The average left-lateral slip for the conjugate faults trending northeastward is ~23cm. The southernmost ruptured member of the Superstition Hills fault zone, newly named the Wienert fault, extends the known length of the zone by about 4km. -from Authors
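
The 'simple power law' fit mentioned above can be illustrated with synthetic afterslip data; the numbers below are invented and only echo the ~54 cm estimated final slip and the 339-day observation window quoted in the abstract.

    import numpy as np

    # Fit d(t) = a * t**b by linear regression in log-log space (synthetic data).
    t = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 339.0])       # days after the event
    d = 54.0 * (t / 339.0) ** 0.25                            # idealized afterslip, cm
    d += np.random.default_rng(1).normal(0.0, 0.5, t.size)   # measurement scatter

    b, log_a = np.polyfit(np.log(t), np.log(d), 1)
    a = np.exp(log_a)
    print(f"d(t) ~ {a:.1f} * t^{b:.2f} cm; slip at 339 days ~ {a * 339.0**b:.0f} cm")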

Sharp, R.V.

1989-01-01

359

Hydrologic, water-quality, and biological assessment of Laguna de las Salinas, Ponce, Puerto Rico, January 2003-September 2004  

USGS Publications Warehouse

The Laguna de Las Salinas is a shallow, 35-hectare, hypersaline lagoon (depth less than 1 meter) in the municipio of Ponce, located on the southern coastal plain of Puerto Rico. Hydrologic, water-quality, and biological data in the lagoon were collected between January 2003 and September 2004 to establish baseline conditions. During the study period, rainfall was about 1,130 millimeters, with much of the rain recorded during three distinct intense events. The lagoon is connected to the sea by a shallow, narrow channel. Subtle tidal changes, combined with low rainfall and high evaporation rates, kept the lagoon at salinities above that of the sea throughout most of the study. Water-quality properties measured on-site (temperature, pH, dissolved oxygen, specific conductance, and Secchi disk transparency) exhibited temporal rather than spatial variations and distribution. Although all physical parameters were in compliance with current regulatory standards for Puerto Rico, hyperthermic and hypoxic conditions were recorded during isolated occasions. Nutrient concentrations were relatively low and in compliance with current regulatory standards (less than 5.0 and 1.0 milligrams per liter for total nitrogen and total phosphorus, respectively). The average total nitrogen concentration was 1.9 milligrams per liter and the average total phosphorus concentration was 0.4 milligram per liter. Total organic carbon concentrations ranged from 12.0 to 19.0 milligrams per liter. Chlorophyll a was the predominant form of photosynthetic pigment in the water. The average chlorophyll a concentration was 13.4 micrograms per liter. Chlorophyll b was detected (detection limits 0.10 microgram per liter) only twice during the study. About 90 percent of the primary productivity in the Laguna de Las Salinas was generated by periphyton such as algal mats and macrophytes such as seagrasses. Of the average net productivity of 13.6 grams of oxygen per cubic meter per day derived from the diel study, the periphyton and macrophytes produced 12.3 grams per cubic meter per day; about 1.3 grams (about 10 percent) were produced by the phytoplankton (plant and algae component of plankton). The total respiration rate was 59.2 grams of oxygen per cubic meter per day. The respiration rate ascribed to the plankton (all organisms floating through the water column) averaged about 6.2 grams of oxygen per cubic meter per day (about 10 percent), whereas the respiration rate by all other organisms averaged 53.0 grams of oxygen per cubic meter per day (about 90 percent). Plankton gross productivity was 7.5 grams per cubic meter per day; the gross productivity of the entire community averaged 72.8 grams per cubic meter per day. Fecal coliform bacteria counts were generally less than 200 colonies per 100 milliliters; the highest concentration was 600 colonies per 100 milliliters.

Soler-López, Luis R.; Gómez-Gómez, Fernando; Rodríguez-Martínez, Jesús

2005-01-01

360

West Coast Tsunami: Cascadia's Fault?  

NASA Astrophysics Data System (ADS)

The tragedies of the 2004 Sumatra and 2011 Japan tsunamis exposed the limits of our knowledge in preparing for devastating tsunamis. The 1,100-km coastline of the Pacific coast of North America has tectonic and geological settings similar to Sumatra and Japan. The geological records unambiguously show that the Cascadia fault had caused devastating tsunamis in the past and this geological process will cause tsunamis in the future. Hypotheses of the rupture process of the Cascadia fault include a long rupture (M9.1) along the entire fault line, short ruptures (M8.8 - M9.1) nucleating only a segment of the coastline, or a series of lesser events of M8+. Recent studies also indicate an increasing probability of small rupture occurring at the south end of the Cascadia fault. Some of these hypotheses were implemented in the development of tsunami evacuation maps in Washington and Oregon. However, the developed maps do not reflect the tsunami impact caused by the most recent updates regarding the Cascadia fault rupture process. The most recent study by Wang et al. (2013) suggests a rupture pattern of high-slip patches separated by low-slip areas constrained by estimates of coseismic subsidence based on microfossil analyses. Since this study infers that a Tohoku-type earthquake could strike in the Cascadia subduction zone, how would such a tsunami affect the tsunami hazard assessment and planning along the Pacific Coast of North America? The rapid development of computing technology allowed us to look into the tsunami impact caused by the above hypotheses using high-resolution models with large coverage of the Pacific Northwest. With the slab model of McCrory et al. (2012) (as part of the USGS slab 1.0 model) for the Cascadia earthquake, we tested the above hypotheses to assess the tsunami hazards along the entire U.S. West Coast. The modeled results indicate these hypothetical scenarios may cause runup heights very similar to those observed along Japan's coastline during the 2011 Japan tsunami. Compared with a long rupture, the Tohoku-type rupture may cause more serious impact at the adjacent coastline, independent of where it would occur in the Cascadia subduction zone. These findings imply that the Cascadia tsunami hazard may be greater than originally thought.

Wei, Y.; Bernard, E. N.; Titov, V.

2013-12-01

361

Destructive and non-destructive density determination: method comparison and evaluation from the Laguna Potrok Aike sedimentary record  

NASA Astrophysics Data System (ADS)

Density measurements play a central role in the characterization of sediment profiles. When working with long records (>100 m), such as those routinely obtained within the frame of the International Continental Scientific Drilling Program, several methods can be used, all of them varying in resolution, time-cost efficiency and source of errors within the measurements. This paper compares two relatively new non-destructive densitometric methods, CT-Scanning and the coherent/incoherent ratio from an Itrax XRF core scanner, to data acquired from a Multi-sensor core logger Gamma Ray Attenuation Porosity Evaluator (MSCL Grape) and discrete measurements of dry bulk density, wet bulk density and water content. Quality assessment of density measurements is performed at low and high resolution along the Laguna Potrok Aike (LPA) composite sequence. Given its resolution (0.4 mm in our study) and its high signal-to-noise ratio, we conclude that CT-scanning provides a precise, fast and cost-efficient way to determine density variations in long sedimentary records. Although noisier than the CT-Scan measurements, the coherent/incoherent ratio from the XRF core scanner also provides a high-resolution, reliable continuous measure of density variability of the sediment profile. The MSCL Grape density measurements provide actual density data and have the significant advantage of being completely non-destructive, since the acquisition is performed on full cores prior to opening. However, the quality of MSCL Grape density measurements can potentially be reduced by the presence of voids within the sediment core tubes, and the discrete dry and wet bulk density measurements suffer from sampling challenges and are time-consuming.

Fortin, David; Francus, Pierre; Gebhardt, Andrea Catalina; Hahn, Annette; Kliem, Pierre; Lisé-Pronovost, Agathe; Roychowdhury, Rajarshi; Labrie, Jacques; St-Onge, Guillaume; PASADO Science Team

2013-07-01

362

Syntactic Fault Patterns in OO Programs Roger T. Alexander  

E-print Network

Syntactic Fault Patterns in OO Programs. Roger T. Alexander, Colorado State University, Dept. ... [fragmentary indexing excerpt] ... faults are widely studied, there are many aspects of faults that we still do not understand ... is to cause failures and thereby detect faults, a full understanding of the characteristics of faults ...

Offutt, Jeff

363

COMPLETE FAULT ANALYSIS FOR LONG TRANSMISSION LINE USING  

E-print Network

COMPLETE FAULT ANALYSIS FOR LONG TRANSMISSION LINE USING SYNCHRONIZED SAMPLING. Nan Zhang, Mladen ... 77843-3128, U.S.A. Abstract (fragment): A complete fault analysis scheme for long transmission line ... represented ... for normal situation and external faults, and is close to fault current during the internal faults ...

364

Fault tree analysis on handwashing for hygiene management  

Microsoft Academic Search

FTA (fault tree analysis) of the handwashing process was performed to investigate the causes for faults in hygiene management. The causes were deductively identified as the events causing every possible hazard by constructing a fault tree. The fault tree was constructed in a hierarchical structure with a single top event (occurrence of faults in hand washing), seven intermediate events, and
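
A minimal fault-tree calculation of the kind described above, with a top event fed by OR and AND gates over independent basic events; the events and probabilities are hypothetical and are not taken from the handwashing study.

    # Top event probability for a small fault tree with independent basic events.
    def and_gate(probs):
        p = 1.0
        for q in probs:
            p *= q
        return p

    def or_gate(probs):
        p_none = 1.0
        for q in probs:
            p_none *= (1.0 - q)
        return 1.0 - p_none

    intermediate = [
        and_gate([0.10, 0.30]),   # e.g. no soap available AND nobody restocks it
        and_gate([0.05, 0.50]),   # e.g. washing too short AND no supervision
        or_gate([0.02, 0.01]),    # e.g. contaminated towel OR faucet recontamination
    ]
    print(f"P(fault in hand washing) = {or_gate(intermediate):.3f}")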

Aeri Park; Seung Ju Lee

2009-01-01

365

Actuator fault tolerant control in experimental networked embedded mini Drone  

Microsoft Academic Search

This paper deals with reconfiguration after a freezing fault in a small four-rotor helicopter (drone). Such a fault may be caused by network faults such as packet loss or long delays affecting one actuator. In case of a fault occurring in one actuator (motor), different strategies are proposed to compensate for the fault effects on the drone. These approaches are based on the minimisation of

Hossein Hashemi Nejad; Dominique Sauter; Samir Aberkane; Suzanne Lesecq

2009-01-01

366

Performance Analysis on Fault Tolerant Control System  

NASA Technical Reports Server (NTRS)

In a fault tolerant control (FTC) system, a parameter varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. In this paper, an FTC analysis framework is provided to calculate the upper bound of the induced L2 norm of an FTC system in the presence of false identification and detection time delay. The upper bound is written as a function of the fault detection time and exponential decay rates and has been used to determine which FTC law produces less performance degradation (tracking error) due to false identification. The analysis framework is applied to an FTC system of a HiMAT (Highly Maneuverable Aircraft Technology) vehicle. Index Terms: fault tolerant control system, linear parameter varying system, HiMAT vehicle.

Shin, Jong-Yeob; Belcastro, Christine

2005-01-01

367

Norumbega Fault System of the Northern Appalachians  

NASA Astrophysics Data System (ADS)

Yes, Virginia, the eastern United States can finally claim to have a strike-slip fault of its own on a scale to rival the west coast's San Andreas fault: the Norumbega fault system, which stretches nearly 450 km from central New Brunswick, south to Casco Bay in southern Maine, and perhaps even farther into southern Connecticut. The Norumbega fault was active for 100 Ma, five times longer than the San Andreas. Its displacement is reckoned by one author to be as much as 1768 km (!), five times more than the San Andreas, and it has been exhumed locally to mid-crustal depths. This collection of 12 full-length, standalone papers and preface provides a single venue for a rapidly growing body of interdisciplinary research about a remarkable—though inactive—fault that may prove to be the longest, most long-lived fault with the greatest displacement of any in North America.

Sylvester, Arthur Gibbs

368

Rotating parallel faults: book shelf mechanism  

SciTech Connect

The mechanical analysis of book shelf operations induced by simple shearing shows that, under certain conditions, this operation requires less driving shear stress than an accommodation of the imposed shear by shear-parallel faulting. The operation of cross faults between neighboring Riedel faults in a wrench zone is a typical example. Large-scale rotation of parallel normal faults in domino style (tilted block tectonics) is primarily associated with the extension of ductile substrata. It may be inferred from mechanical arguments and sandbox experiments how the process, and in particular the dip direction of the faults, is controlled by the way the substratal extension progresses, by the direction of a substratal squeeze flow, by the presence of a surface slope, and by the configuration of the rock boundaries that confine the set of faults in the direction of extension.

Mandl, G.

1984-04-01

369

Perspective View, San Andreas Fault  

NASA Technical Reports Server (NTRS)

The prominent linear feature straight down the center of this perspective view is the San Andreas Fault in an image created with data from NASA's shuttle Radar Topography Mission (SRTM), which will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This segment of the fault lies west of the city of Palmdale, California, about 100 kilometers (about 60 miles) northwest of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. This area is at the junction of two large mountain ranges, the San Gabriel Mountains on the left and the Tehachapi Mountains on the right. Quail Lake Reservoir sits in the topographic depression created by past movement along the fault. Interstate 5 is the prominent linear feature starting at the left edge of the image and continuing into the fault zone, passing eventually over Tejon Pass into the Central Valley, visible at the upper left.

This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense (DoD), and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC.

Size: Varies in a perspective view Location: 34.78 deg. North lat., 118.75 deg. West lon. Orientation: Looking Northwest Original Data Resolution: SRTM and Landsat: 30 meters (99 feet) Date Acquired: February 16, 2000

2000-01-01

370

Fault prophet : a fault injection tool for large scale computer systems  

E-print Network

In this thesis, I designed and implemented a fault injection tool, to study the impact of soft errors for large scale systems. Fault injection is used as a mechanism to simulate soft errors, measure the output variability ...

Tchwella, Tal

2014-01-01

371

Realistic fault modeling and quality test generation of combined delay faults  

E-print Network

With increasing operating speed and shrinking technology, timing defects in integrated circuits are becoming increasingly important. The well established stuck-at-fault model is not sufficient because it is a static fault model and does not account...

Thadhlani, Ajaykumar A

2001-01-01

372

Vibration-based fault detection of sharp bearing faults in helicopters  

E-print Network

Vibration-based fault detection of sharp bearing faults in helicopters. Victor Girondin, Herve ... [fragmentary indexing excerpt] ... the context of helicopter imposes a limited sampling frequency regarding the observed phenomena ... many noisy ... their efficiency. Keywords: vibration, helicopter, health monitoring, frequency estimation, bearing, HUMS

Paris-Sud XI, Université de

373

Modeling Fault Coverage of Random Test Patterns  

Microsoft Academic Search

We present a new probabilistic fault coverage model that is accurate, simple, predictive, and easily integrated with the normal design flow of built-in self-test circuits. The parameters of the model are determined by fitting the fault simulation data obtained on an initial segment of the random test. A cost-based analysis finds the point at which to stop fault simulation,

Hailong Cui; Sharad C. Seth; Shashank K. Mehta

2003-01-01

374

Evidence for a strong San Andreas fault  

Microsoft Academic Search

Stress measurements in deep boreholes have universally shown that stresses in the Earth's crust are in equilibrium with favorably oriented faults with friction coefficients in the range 0.6-0.7 and with nearly hydrostatic pore-pressure gradients. Because of the lack of any fault-adjacent heat-flow anomaly as predicted by a conductive model of frictional heating, the San Andreas fault has long been thought

Christopher H. Scholz

2000-01-01

375

Diagnosing process faults using neural network models  

SciTech Connect

In order to be of use for realistic problems, a fault diagnosis method should have the following three features. First, it should apply to nonlinear processes. Second, it should not rely on extensive amounts of data regarding previous faults. Lastly, it should detect faults promptly. The authors present such a scheme for static (i.e., non-dynamic) systems. It involves using a neural network to create an associative memory whose fixed points represent the normal behavior of the system.

Buescher, K.L.; Jones, R.D.; Messina, M.J.

1993-11-01

376

Hydrogen Embrittlement And Stacking-Fault Energies  

NASA Technical Reports Server (NTRS)

Embrittlement in Ni/Cu alloys appears related to stacking-fault probabilities. Report describes attempt to show a correlation between stacking-fault energy of different Ni/Cu alloys and susceptibility to hydrogen embrittlement. Correlation could lead to more fundamental understanding and method of predicting susceptibility of a given Ni/Cu alloy from stacking-fault energies calculated from X-ray diffraction measurements.

Parr, R. A.; Johnson, M. H.; Davis, J. H.; Oh, T. K.

1988-01-01

377

Theory of fault-tolerant quantum computation  

Microsoft Academic Search

In order to use quantum error-correcting codes to improve the performance of a quantum computer, it is necessary to be able to perform operations fault-tolerantly on encoded states. I present a theory of fault-tolerant operations on stabilizer codes based on symmetries of the code stabilizer. This allows a straightforward determination of which operations can be performed fault-tolerantly on a given

Daniel Gottesman

1998-01-01

378

Approximate active fault detection and control  

NASA Astrophysics Data System (ADS)

This paper deals with approximate active fault detection and control for nonlinear discrete-time stochastic systems over an infinite time horizon. Multiple model framework is used to represent fault-free and finitely many faulty models. An imperfect state information problem is reformulated using a hyper-state and dynamic programming is applied to solve the problem numerically. The proposed active fault detector and controller is illustrated in a numerical example of an air handling unit.

Škach, Jan; Punčochář, Ivo; Šimandl, Miroslav

2014-12-01

379

Developing fault models for space mission software  

NASA Technical Reports Server (NTRS)

Over the past several years, we have focused on developing fault models for space mission software. In general, these models use measurable attributes of a software system and its development process to estimate the number of faults inserted into the system during its development; their outputs can be used to better estimate the resources to be allocated to fault identification and removal for all system components.

Nikora, A. P.; Munson, J. C.

2003-01-01

380

On Identifiability of Bias-Type Actuator-Sensor Faults in Multiple-Model-Based Fault Detection and Identification  

NASA Technical Reports Server (NTRS)

This paper explores a class of multiple-model-based fault detection and identification (FDI) methods for bias-type faults in actuators and sensors. These methods employ banks of Kalman-Bucy filters to detect the faults, determine the fault pattern, and estimate the fault values, wherein each Kalman-Bucy filter is tuned to a different failure pattern. Necessary and sufficient conditions are presented for identifiability of actuator faults, sensor faults, and simultaneous actuator and sensor faults. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have biases.

Joshi, Suresh M.

2012-01-01

381

An algorithm for faulted phase and feeder selection under high impedance fault conditions  

E-print Network

... substation. [List-of-figures excerpt:] One-line diagram of a distribution system with two feeders. Typical high impedance fault, showing little or no change in phase voltage due to the fault. Phase voltage and high frequency current component during a staged high impedance ... [Text excerpt:] ... be a valuable aid in the location of the high impedance fault by repair crews, since the area of search would be greatly reduced. Two primary methods have been used in the past to make directional determinations on faulted power systems, both...

Benner, Carl Lee

1988-01-01

382

Analysis of Unsymmetrical Faults in High Voltage Power Systems With Superconducting Fault Current Limiters  

Microsoft Academic Search

An analysis of unsymmetrical faults for a 110 kV sub-grid coupling with a superconducting fault current limiter is conducted in this contribution. For the design of the superconducting fault current limiters it is essential to identify the highest possible voltage during the limitation process. As a reference, the symmetric three phase fault, which generally leads to the highest short-circuit currents, is
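
Unsymmetrical fault studies of this kind rest on symmetrical components; as a worked example, the sketch below evaluates the textbook single-line-to-ground fault current I_f = 3E / (Z1 + Z2 + Z0 + 3Zf) with hypothetical sequence impedances, not the values of the 110 kV case study.

    # Textbook symmetrical-component calculation for a single-line-to-ground fault.
    E = 110e3 / 3 ** 0.5                     # pre-fault phase-to-neutral voltage, V
    Z1, Z2, Z0 = 2 + 8j, 2 + 8j, 1 + 20j     # sequence impedances, ohms (hypothetical)
    Zf = 0.0                                 # fault impedance (bolted fault)

    I1 = E / (Z1 + Z2 + Z0 + 3 * Zf)         # positive-sequence current
    I_fault = 3 * I1                         # faulted-phase current
    print(f"|I_fault| = {abs(I_fault) / 1e3:.1f} kA")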

Mark Stemmle; Claus Neumann; Frank Merschel; Ulrich Schwing; Karl-Heinz Weck; Mathias Noe; Frank Breuer; Steffen Elschner

2007-01-01

383

Internal structure of the Kern Canyon Fault, California: a deeply exhumed strike-slip fault  

E-print Network

Financial support for the study was provided by the USGS National Earthquake Hazards Reduction Program, under grant numbers 01HQGR0029 and 01HQGR0056. [Front-matter excerpt: the table of contents lists an introduction covering the structure and evolution of major fault zones; the list of figures includes photographs of cataclastic fault rocks, optical photomicrographs of hematite fault gouge, a backscattered scanning electron microscope (SEM) image of hematite fault gouge, and a grid map of the intersection of the phyllonite zone ...]

Neal, Leslie Ann

2002-01-01

384

The morphology of strike-slip faults - Examples from the San Andreas Fault, California  

NASA Technical Reports Server (NTRS)

The dilatational strains associated with vertical faults embedded in a horizontal plate are examined in the framework of fault kinematics and simple displacement boundary conditions. Using boundary element methods, a sequence of examples of dilatational strain fields associated with commonly occurring strike-slip fault zone features (bends, offsets, finite rupture lengths, and nonuniform slip distributions) is derived. The combinations of these strain fields are then used to examine the Parkfield region of the San Andreas fault system in central California.

Bilham, Roger; King, Geoffrey

1989-01-01

385

Software reliability through fault-avoidance and fault-tolerance  

NASA Technical Reports Server (NTRS)

The use of back-to-back, or comparison, testing for regression test or porting is examined. The efficiency and the cost of the strategy is compared with manual and table-driven single version testing. Some of the key parameters that influence the efficiency and the cost of the approach are the failure identification effort during single version program testing, the extent of implemented changes, the nature of the regression test data (e.g., random), and the nature of the inter-version failure correlation and fault-masking. The advantages and disadvantages of the technique are discussed, together with some suggestions concerning its practical use.

Vouk, Mladen A.; Mcallister, David F.

1990-01-01

386

Faults Discovery By Using Mined Data  

NASA Technical Reports Server (NTRS)

Fault discovery in complex systems relies on model-based reasoning, fault tree analysis, rule-based inference, and other approaches. Model-based reasoning builds models of the system either from mathematical formulations or from experimental models. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model from expert knowledge. These models and methods have one thing in common: they presume certain prior conditions. Complex systems often use fault trees to analyze faults. When an error occurs, fault diagnosis is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on data fed back from the system, and decisions are made from threshold values by using fault trees. Since those decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time and to capture the contents of fault trees as the initial state of the trees.
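
A minimal sketch of learning fault signatures from telemetry with a decision tree; the data and feature names are synthetic and hypothetical, this is not the ISS data set or the paper's pipeline, and it assumes scikit-learn is available.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    n = 500
    pressure = rng.normal(100.0, 5.0, n)       # hypothetical telemetry channels
    temperature = rng.normal(20.0, 2.0, n)
    # Label a "fault" when pressure drops low or temperature runs high.
    fault = ((pressure < 92.0) | (temperature > 24.0)).astype(int)

    X = np.column_stack([pressure, temperature])
    tree = DecisionTreeClassifier(max_depth=2).fit(X, fault)
    print(export_text(tree, feature_names=["pressure", "temperature"]))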

Lee, Charles

2005-01-01

387

Earthquake fault rock (1, 135-149, 2006)  

E-print Network

Abstract (fragment): A pseudotachylyte bounded by a carbonate-matrix implosion breccia was found ... promoted melting during fault movement. Coexistence of fluid implosion breccia and pseudotachylyte has ...

Paris-Sud XI, Université de

388

Sequential Test Strategies for Multiple Fault Isolation  

NASA Technical Reports Server (NTRS)

In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
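
A toy greedy heuristic in the same information-theoretic spirit, simplified to a single-fault assumption: at each step choose the test whose outcome is expected to shrink the entropy of the suspect set the most. The suspects, prior probabilities, and test signatures are invented for illustration, and this is far simpler than the multiple-fault strategies described above.

    import math

    def entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    suspects = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}   # prior fault probabilities
    tests = {                     # test -> suspects implicated if the test fails
        "t1": {"A", "B"},
        "t2": {"A"},
        "t3": {"B", "C"},
    }

    def expected_entropy(test):
        """Expected posterior entropy of the suspect set after running the test."""
        inside = [p for s, p in suspects.items() if s in tests[test]]
        outside = [p for s, p in suspects.items() if s not in tests[test]]
        h = 0.0
        for group in (inside, outside):
            p_group = sum(group)
            if p_group > 0:
                h += p_group * entropy([p / p_group for p in group])
        return h

    best = min(tests, key=expected_entropy)
    print("best first test:", best)   # t3 gives the largest expected entropy drop here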

Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.

1997-01-01

389

Applications of Fault Detection in Vibrating Structures  

NASA Technical Reports Server (NTRS)

Structural fault detection and identification remains an area of active research. Solutions to fault detection and identification may be based on subtle changes in the time series history of vibration signals originating from various sensor locations throughout the structure. The purpose of this paper is to document the application of vibration based fault detection methods applied to several structures. Overall, this paper demonstrates the utility of vibration based methods for fault detection in a controlled laboratory setting and limitations of applying the same methods to a similar structure during flight on an experimental subscale aircraft.

Eure, Kenneth W.; Hogge, Edward; Quach, Cuong C.; Vazquez, Sixto L.; Russell, Andrew; Hill, Boyd L.

2012-01-01

390

Proactive Fault-Recovery in Distributed Systems  

E-print Network

Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA. ... framework for proactive (rather than the classical reactive) fault-recovery that reduces the latencies ...

Narasimhan, Priya

391

Use of Fault Dropping for Multiple Fault Analysis Youns KARKOURI, El Mostapha ABOULHAMID, Eduard CERNY and Alain VERREAULT  

E-print Network

Use of Fault Dropping for Multiple Fault Analysis. Younès KARKOURI, El Mostapha ABOULHAMID ... Montréal, C.P. 6128, Succ. "A", Montréal (Québec), H3C-3J7, Canada. ABSTRACT: A new approach to fault analysis is presented. We consider multiple stuck-at-0/1 faults at the gate level. First, a fault ...

Aboulhamid, El Mostapha

392

Toward Reducing Fault Fix Time: Understanding Developer Behavior for the Design of Automated Fault Detection Tools, the Full Report  

E-print Network

Toward Reducing Fault Fix Time: Understanding Developer Behavior for the Design of Automated Fault Detection Tools. Abstract: The longer a fault remains in the code from the time it was injected, the more time it will take to fix the fault. Increasingly, automated fault detection (AFD) tools are providing developers ...

Young, R. Michael

393

Fault collapsing is the process of reducing the number of faults by using redundance and equivalence/dominance  

E-print Network

Abstract: Fault collapsing is the process of reducing the number of faults by using redundancy and equivalence/dominance relationships among faults. Exact fault collapsing can be easily applied locally ... such as execution time and/or memory. In this paper, we present EGFC, an exact global fault collapsing tool ...

Al-Asaad, Hussain

394

Transform fault earthquakes in the North Atlantic: Source mechanisms and depth of faulting  

NASA Technical Reports Server (NTRS)

The centroid depths and source mechanisms of 12 large earthquakes on transform faults of the northern Mid-Atlantic Ridge were determined from an inversion of long-period body waveforms. The earthquakes occurred on the Gibbs, Oceanographer, Hayes, Kane, 15 deg 20 min, and Vema transforms. The depth extent of faulting during each earthquake was estimated from the centroid depth and the fault width. The source mechanisms for all events in this study display the strike-slip motion expected for transform fault earthquakes; slip vector azimuths agree to within 2 to 3 deg of the local strike of the zone of active faulting. The only anomalies in mechanism were for two earthquakes near the western end of the Vema transform which occurred on significantly nonvertical fault planes. Secondary faulting, occurring either precursory to or near the end of the main episode of strike-slip rupture, was observed for 5 of the 12 earthquakes. For three events the secondary faulting was characterized by reverse motion on fault planes striking oblique to the trend of the transform. In all three cases, the site of secondary reverse faulting is near a compression jog in the current trace of the active transform fault zone. No evidence was found to support the conclusions of Engeln, Wiens, and Stein that oceanic transform faults in general are either hotter than expected from current thermal models or weaker than normal oceanic lithosphere.

Bergman, Eric A.; Solomon, Sean C.

1987-01-01

395

Fault Detection of Broken Rotor Bars in Induction Motor using a Global Fault Index  

E-print Network

Fault Detection of Broken Rotor Bars in Induction Motor using a Global Fault Index. G. Didier, E. Ternisien, O. Caspary, and H. Razik. Keywords: Modulation Index, Global Fault Index. Abstract (fragment): Induction motors play a very important part in the safe ... Introduction (fragment): Induction motors, especially the asynchronous motors ...

Boyer, Edmond

396

Collateral damage: Evolution with displacement of fracture distribution and secondary fault strands in fault  

E-print Network

Collateral damage: Evolution with displacement of fracture distribution and secondary fault strands in fault damage zones. Heather M. Savage and Emily E. Brodsky. Received 22 April 2010; revised ... [fragment] ... faults is governed by the same process. Based on our own field work combined with data from ...

Savage, Heather M.

397

Permeability of fault-related rocks, and implications for hydraulic structure of fault zones  

Microsoft Academic Search

The permeability structure of a fault zone in granitic rocks has been investigated by laboratory testing of intact core samples from the unfaulted protolith and the two principal fault zone components; the fault core and the damaged zone. The results of two test series performed on rocks obtained from outcrop are reported. First, tests performed at low confining pressure on

J. Goddard; C. Forster

1997-01-01

398

Fuzzy Set Theory and Fault Tree Analysis based Method Suitable for Fault Diagnosis of Power Transformer  

Microsoft Academic Search

Fault detection and analysis for power transformers are key measures for improving the security of power systems and the reliability of power supply. Due to the complexity of the power transformer structure and the variations in operating conditions, the occurrence of a fault inside a power transformer is uncertain and random. Until now, the fault statistics of power transformer

Tong Wu; Guangyu Tu; Z. Q. Bo; A. Klimek

2007-01-01

399

Supervision, fault-detection and fault-diagnosis methods — An introduction  

Microsoft Academic Search

The operation of technical processes requires increasingly advanced supervision and fault diagnosis to improve reliability, safety and economy. This paper gives an introduction to the field of fault detection and diagnosis. It begins with a consideration of a knowledge-based procedure that is based on analytical and heuristic information. Then different methods of fault detection are considered, which extract features from

R. Isermann

1997-01-01

400

Using the Work of Fault Generation in the Laboratory to Predict Fault Evolution  

NASA Astrophysics Data System (ADS)

Fault systems may evolve to minimize the total work on the system. The work budget of fault systems includes frictional heating, internal work, tectonic work (external to the system), work against gravity, and seismic work. More enigmatic than these is the work required to propagate faults (Wprop). While this term may be smaller than other terms in the work budget, observations of long-lived faults suggest that Wprop is not negligible. Measurements of external work from sandbox experiments show a drop in work associated with detachment growth, ramp (fore/backthrusts) growth, and the reactivation of older faults. The new forethrusts develop when the total work savings of having the fault exceeds the cost of the new fault. The drop in work with new fault formation can thus be used to estimate the cost of fault growth in the sandbox. We measure a Wprop of 0.15 J/m2. Numerical simulations of fault growth episodes within our experiments are calibrated to the measured Wprop. By knowing the required work to grow faults, we can implement the principle of work minimization to predict fault growth in the tabletop sandbox experiments and larger accretionary systems.
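
The growth criterion sketched above reduces to a simple comparison: a new fault is predicted once the external work saved by slipping on it exceeds Wprop times the new fault area. In the sketch below only Wprop = 0.15 J/m2 comes from the abstract; the fault area and candidate work savings are hypothetical.

    # Work-minimization growth criterion: grow the fault if saved work > Wprop * area.
    W_prop = 0.15                  # J/m^2, measured cost of fault growth in the sandbox
    fault_area = 0.30 * 0.10       # m^2, hypothetical new fault surface

    growth_cost = W_prop * fault_area
    for work_saved in (1e-3, 4e-3, 8e-3):      # J, hypothetical external-work savings
        grows = work_saved > growth_cost
        print(f"saved {work_saved:.0e} J vs cost {growth_cost:.1e} J -> new fault: {grows}")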

Herbert, J. W.; Cooke, M. L.

2012-12-01
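
As an illustrative aside, the decision rule described in the abstract above (a new fault forms when the external-work savings exceed the propagation cost) can be written as a one-line test. The Python sketch below uses the 0.15 J/m2 value quoted above; the work savings and fault area are hypothetical sandbox-scale numbers, not values from the experiments.

    # Energy-balance test for new fault formation (illustrative only).
    W_PROP = 0.15          # J/m^2, work of fault propagation per unit area (value quoted above)

    def new_fault_favored(delta_wext, fault_area):
        """True if the drop in external work exceeds the cost Wprop * area of the new fault."""
        return delta_wext > W_PROP * fault_area

    # Hypothetical numbers: 0.05 J of external work saved by a 0.2 m x 0.1 m new fault.
    print(new_fault_favored(delta_wext=0.05, fault_area=0.2 * 0.1))   # True: growth is favored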

401

Towards Fault-Tolerant Digital Microfluidic Lab-on-Chip: Defects, Fault Modeling, Testing, and Reconfiguration  

E-print Network

Towards Fault-Tolerant Digital Microfluidic Lab-on-Chip: Defects, Fault Modeling, Testing, and Reconfiguration. ... fault-tolerant digital microfluidic lab-on-chip systems. Defects are related to logical fault models that can be viewed not only ... of microfluidics, referred to as "digital microfluidics", relies on the principle of electrowetting-on-dielectric

Chakrabarty, Krishnendu

402

Dating deep? Luminescence studies of fault gouge from the San Andreas Fault

E-print Network

Dating deep? Luminescence studies of fault gouge from the San Andreas Fault zone. Keywords: resetting, IRSL, TL, dating. This study aims to assess whether luminescence emission from fault ... in lower energy trapping sites. In this work luminescence experiments are being conducted on minerals from

403

Qualitative Modeling of Saline Springs in Karst Aquifers (Modelación Cualitativa de Fuentes Saladas en Acuíferos Kársticos)

E-print Network

Presentation excerpt (translated from Spanish): geological setting after Gelabert (1997), with Dogger-Malm and Miocene units; estimated contributing area of the spring: 40 km2; depth of the lower aquifer at the spring: 300 m; irregular precipitation with a wet season; spring elevation about 6 m above sea level; very rapid response to precipitation; temperature, electrical conductivity (EC), and discharge data available.

Politècnica de Catalunya, Universitat

404

Seismic images and fault relations of the Santa Monica thrust fault, West Los Angeles, California  

USGS Publications Warehouse

In May 1997, the US Geological Survey (USGS) and the University of Southern California (USC) acquired high-resolution seismic reflection and refraction images on the grounds of the Wadsworth Veterans Administration Hospital (WVAH) in the city of Los Angeles (Fig. 1a,b). The objective of the seismic survey was to better understand the near-surface geometry and faulting characteristics of the Santa Monica fault zone. In this report, we present seismic images, an interpretation of those images, and a comparison of our results with results from studies by Dolan and Pratt (1997), Pratt et al. (1998) and Gibbs et al. (2000). The Santa Monica fault is one of several northeast-southwest-trending, north-dipping, reverse faults that extend through the Los Angeles metropolitan area (Fig. 1a). Through much of the area, the Santa Monica fault trends subparallel to the Hollywood fault, but the two faults apparently join into a single fault zone to the southwest and to the northeast (Dolan et al., 1995). The Santa Monica and Hollywood faults may be part of a larger fault system that extends from the Pacific Ocean to the Transverse Ranges. Crook et al. (1983) refer to this fault system as the Malibu Coast-Santa Monica-Raymond-Cucamonga fault system. They suggest that these faults have not formed a contiguous zone since the Pleistocene and conclude that each of the faults should be treated as a separate fault with respect to seismic hazards. However, Dolan et al. (1995) suggest that the Hollywood and Santa Monica faults are capable of generating Mw 6.8 and Mw 7.0 earthquakes, respectively. Thus, regardless of whether the overall fault system is connected and capable of rupturing in one event, each of the faults individually presents a sizable earthquake hazard to the Los Angeles metropolitan area. If, however, these faults are connected and were to rupture in a single continuous event, the resulting hazard would be even greater. Although the Santa Monica fault represents a hazard to millions of people, its lateral extent and rupture history are not well known, due largely to limited knowledge of the fault location, geometry, and relationship to other faults. The Santa Monica fault has been obscured at the surface by alluvium and urbanization. For example, Dolan et al. (1995) could find only one 200-m-long stretch of the Santa Monica fault that was not covered by either streets or buildings. Along its 19-km-long onshore section, the apparent location of the Santa Monica fault has been delineated largely on the basis of geomorphic features and oil-well drilling. Seismic imaging efforts, in combination with other investigative methods, may be the best approach for locating and understanding the Santa Monica fault in the Los Angeles region. This investigation and another recent seismic imaging investigation (Pratt et al., 1998) were undertaken to resolve the near-surface location, fault geometry, and faulting relations associated with the Santa Monica fault.

Catchings, R.D.; Gandhok, G.; Goldman, M.R.; Okaya, D.

2001-01-01

405

A Fault-tolerant RISC Microprocessor for Spacecraft Applications  

NASA Technical Reports Server (NTRS)

Viewgraphs on a fault-tolerant RISC microprocessor for spacecraft applications are presented. Topics covered include: reduced instruction set computer; fault tolerant registers; fault tolerant ALU; and double rail CMOS logic.

Timoc, Constantin; Benz, Harry

1990-01-01

406

Intermittent/transient fault phenomena in digital systems  

NASA Technical Reports Server (NTRS)

An overview of the intermittent/transient (IT) fault study is presented. An interval survivability evaluation of digital systems for IT faults is discussed along with a method for detecting and diagnosing IT faults in digital systems.

Masson, G. M.

1977-01-01

407

Earthquake behavior and structure of oceanic transform faults  

E-print Network

Oceanic transform faults that accommodate strain at mid-ocean ridge offsets represent a unique environment for studying fault mechanics. Here, I use seismic observations and models to explore how fault structure affects ...

Roland, Emily Carlson

2012-01-01

408

Acoustic fault injection tool (AFIT)  

NASA Astrophysics Data System (ADS)

On September 18, 1997, Honeywell Technology Center (HTC) successfully completed a three-week flight test of its rotor acoustic monitoring system (RAMS) at Patuxent River Flight Test Center. This flight test was the culmination of an ambitious 38-month proof-of-concept effort directed at demonstrating the feasibility of detecting crack propagation in helicopter rotor components. The program was funded as part of the U.S. Navy's Air Vehicle Diagnostic Systems (AVDS) program. Reductions in Navy maintenance budgets and available personnel have dictated the need to transition from time-based to 'condition-based' maintenance. Achieving this will require new enabling diagnostic technologies. The application of acoustic emission for the early detection of helicopter rotor head dynamic component faults has proven the feasibility of the technology. The flight-test results demonstrated that stress-wave acoustic emission technology can detect signals equivalent to small fatigue cracks in rotor head components and can do so across the rotating articulated rotor head joints and in the presence of other background acoustic noise generated during flight operation. During the RAMS flight test, 12 test flights were flown from which 25 Gbyte of digital acoustic data and about 15 hours of analog flight data recorder (FDR) data were collected from the eight on-rotor acoustic sensors. The focus of this paper is to describe the CH-46 flight-test configuration and present design details about a new innovative machinery diagnostic technology called acoustic fault injection. This technology involves the injection of acoustic sound into machinery to assess health and characterize operational status. The paper will also address the development of the Acoustic Fault Injection Tool (AFIT), which was successfully demonstrated during the CH-46 flight tests.

Schoess, Jeffrey N.

1999-05-01

409

Lightning faults on distribution lines  

SciTech Connect

Until now, power engineers have been unable to quantify electrical system outages and damage caused by lightning. Determining the number of lightning strikes to overhead lines is a necessary first step in evaluating design options for lightning protection systems. Under contract to the Electric Power Research Institute (EPRI), the authors have developed low-cost instrumentation to distinguish outages caused by lightning from those caused by other phenomena. The theories used to develop this coincident lightning events detector (CLED), the experiment design used for testing the CLED, and the test results are discussed. Shielding from nearby structures was found to be a major consideration in assessing the lightning fault rate on distribution lines.

Parrish, D.E.; Kvaltine, D.J. (CH2M Hill, Gainesville, FL (US))

1989-10-01

410

SUMC fault tolerant computer system  

NASA Technical Reports Server (NTRS)

The results of the trade studies are presented. These trades cover: establishing the basic configuration, establishing the CPU/memory configuration, establishing an approach to crosstrapping interfaces, defining the requirements of the redundancy management unit (RMU), establishing a spare plane switching strategy for the fault-tolerant memory (FTM), and identifying the most cost-effective way of extending the memory addressing capability beyond the 64 K-bytes (K=1024) of SUMC-II B. The results of the design are compiled in the Contract End Item (CEI) Specification for the NASA Standard Spacecraft Computer II (NSSC-II), IBM 7934507. The implementation of the FTM and the memory address expansion is also covered.

1980-01-01

411

Paleoseismicity of two historically quiescent faults in Australia: Implications for fault behavior in stable continental regions  

USGS Publications Warehouse

Paleoseismic studies of two historically aseismic Quaternary faults in Australia confirm that cratonic faults in stable continental regions (SCR) typically have a long-term behavior characterized by episodes of activity separated by quiescent intervals of at least 10,000 and commonly 100,000 years or more. Studies of the approximately 30-km-long Roopena fault in South Australia and the approximately 30-km-long Hyden fault in Western Australia document multiple Quaternary surface-faulting events that are unevenly spaced in time. The episodic clustering of events on cratonic SCR faults may be related to temporal fluctuations of fault-zone fluid pore pressures in a volume of strained crust. The long-term slip rate on cratonic SCR faults is extremely low, so the geomorphic expression of many cratonic SCR faults is subtle, and scarps may be difficult to detect because they are poorly preserved. Both the Roopena and Hyden faults are in areas of limited or no significant seismicity; these and other faults that we have studied indicate that many potentially hazardous SCR faults cannot be recognized solely on the basis of instrumental data or historical earthquakes. Although cratonic SCR faults may appear to be nonhazardous because they have been historically aseismic, those that are favorably oriented for movement in the current stress field can and have produced unexpected damaging earthquakes. Paleoseismic studies of modern and prehistoric SCR faulting events provide the basis for understanding of the long-term behavior of these faults and ultimately contribute to better seismic-hazard assessments.

Crone, A.J.; De Martini, P. M.; Machette, M.M.; Okumura, K.; Prescott, J.R.

2003-01-01

412

Growth of Fault-cored Anticlines by combined Mechanisms of Fault Slip and Buckling  

NASA Astrophysics Data System (ADS)

A primary goal of studies of blind faults underlying actively growing anticlines is assessment of the earthquake hazard associated with slip on the faults. It is generally assumed that the amount of slip on the fault is directly related to the amplitude of the fold. Under this assumption, the potential for earthquakes on blind faults can be determined directly from fold geometry. However, anticlines growing over slipping reverse faults can be amplified by a factor of two or more by buckling of mechanical layering under horizontal shortening. Studies that attempt to estimate fault slip from fold geometry may therefore overestimate fault slip by a factor of two or more if the contribution to fold growth from buckling is ignored. We construct boundary element models to demonstrate that fault-cored anticlines in mechanically layered media subjected to layer-parallel shortening are not built solely by slip on the underlying fault. The amplitudes of folds produced in a medium containing a fault and elastic layers with free slip, subjected to layer-parallel shortening, are 2-5 times larger than the amplitudes of folds produced in homogeneous media without mechanical layering. We compare the model results with data from fault-cored anticlines in the western United States. Pitchfork Anticline on the western flank of the Big Horn Basin in Wyoming likely formed by the combined mechanisms of fault slip and buckling. Geometric features of Pitchfork Anticline, such as a localized anticlinal dome shape with tight hinges and an amplitude that increases away from the fault tip, are characteristic of buckle folds produced in our numerical simulations. The coseismic uplift pattern produced during the 1985 earthquake on a fault under the Kettleman Hills Anticline and the subsurface fold geometry of the anticline inferred from seismic reflection images are consistent with folding produced by the combined mechanisms of fault slip and buckling.

Huang, W.; Johnson, K. M.

2007-12-01

413

Fault Management Techniques in Human Spaceflight Operations  

NASA Technical Reports Server (NTRS)

This paper discusses human spaceflight fault management operations. Fault detection and response capabilities available in current US human spaceflight programs Space Shuttle and International Space Station are described while emphasizing system design impacts on operational techniques and constraints. Preflight and inflight processes along with products used to anticipate, mitigate and respond to failures are introduced. Examples of operational products used to support failure responses are presented. Possible improvements in the state of the art, as well as prioritization and success criteria for their implementation are proposed. This paper describes how the architecture of a command and control system impacts operations in areas such as the required fault response times, automated vs. manual fault responses, use of workarounds, etc. The architecture includes the use of redundancy at the system and software function level, software capabilities, use of intelligent or autonomous systems, number and severity of software defects, etc. This in turn drives which Caution and Warning (C&W) events should be annunciated, C&W event classification, operator display designs, crew training, flight control team training, and procedure development. Other factors impacting operations are the complexity of a system, skills needed to understand and operate a system, and the use of commonality vs. optimized solutions for software and responses. Fault detection, annunciation, safing responses, and recovery capabilities are explored using real examples to uncover underlying philosophies and constraints. These factors directly impact operations in that the crew and flight control team need to understand what happened, why it happened, what the system is doing, and what, if any, corrective actions they need to perform. If a fault results in multiple C&W events, or if several faults occur simultaneously, the root cause(s) of the fault(s), as well as their vehicle-wide impacts, must be determined in order to maintain situational awareness. This allows both automated and manual recovery operations to focus on the real cause of the fault(s). An appropriate balance must be struck between correcting the root cause failure and addressing the impacts of that fault on other vehicle components. Lastly, this paper presents a strategy for using lessons learned to improve the software, displays, and procedures in addition to determining what is a candidate for automation. Enabling technologies and techniques are identified to promote system evolution from one that requires manual fault responses to one that uses automation and autonomy where they are most effective. These considerations include the value in correcting software defects in a timely manner, automation of repetitive tasks, making time critical responses autonomous, etc. The paper recommends the appropriate use of intelligent systems to determine the root causes of faults and correctly identify separate unrelated faults.

O'Hagan, Brian; Crocker, Alan

2006-01-01

414

The work budget of rough faults  

NASA Astrophysics Data System (ADS)

Faults in nature have measurable roughness at many scales and are not planar as generally idealized. We utilize the boundary element method to model the geomechanical response of synthetic rough faults in an isotropic, linear elastic continuum to external tectonic loading in terms of the work budget. Faults are generated with known fractal roughness parameters, including the root mean square (RMS) slope, a measure of roughness amplitude, and the Hurst exponent (H), a measure of geometric self-similarity. Energy within the fault models is partitioned into external work (Wext), internal elastic strain energy (Wint), gravitational work (Wgrav), frictional work (Wfric), and seismic energy (Wseis). Results confirm that Wext, or work done on the external model boundaries, is smallest for a perfectly planar fault and steadily increases with increasing RMS slope. This pattern is also observed in Wint, the energy expended in deforming the host rock. The opposite is true for gravitational work, or work done against gravity in uplifting host rock, as well as for frictional work, or energy dissipated by frictional slip on the fault, and Wseis, or seismic energy released during slip events. Effects of variation in H are not as large as for the RMS slope, but Wgrav, Wfric, and Wseis increase with increasing H, with Wint and Wext decreasing across the same range. Remarkably, however, for a narrow range of roughness amplitudes that is commonly observed along natural faults, the total work of the system remains approximately constant, while slightly larger than the total work of a planar fault. Faults evolve toward the most mechanically efficient configuration; therefore we argue that this range of roughness amplitudes may represent an energy barrier, preventing faults from removing asperities and evolving into smooth, planar discontinuities. A similar conclusion is drawn from simulations at relatively shallow depths, with results showing that shallower faults have larger energy barriers and can be mechanically efficient at higher roughness amplitudes.

Newman, Patrick J.; Ashley Griffith, W.

2014-12-01
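
As an illustrative aside, the fractal roughness parameterization described above (an RMS slope setting the roughness amplitude and a Hurst exponent setting the self-similarity) can be reproduced with a simple spectral synthesis. The Python sketch below is a minimal, hypothetical example and is not the boundary element workflow of the study; the band limits, Hurst exponent, and target RMS slope are assumed values.

    import numpy as np

    def self_affine_profile(n=4096, dx=1.0, hurst=0.6, rms_slope=0.01,
                            lam_min=8.0, lam_max=1024.0, seed=0):
        """Band-limited self-affine fault profile y(x) with a given Hurst exponent,
        rescaled to a target RMS slope (roughness amplitude)."""
        rng = np.random.default_rng(seed)
        k = np.fft.rfftfreq(n, d=dx)                 # spatial frequencies (1/length)
        amp = np.zeros_like(k)
        band = (k >= 1.0 / lam_max) & (k <= 1.0 / lam_min)
        # 1-D self-affine spectrum: power ~ k^-(1+2H), so amplitude ~ k^-(0.5+H)
        amp[band] = k[band] ** -(0.5 + hurst)
        phase = rng.uniform(0.0, 2.0 * np.pi, k.size)
        y = np.fft.irfft(amp * np.exp(1j * phase), n=n)
        slope = np.gradient(y, dx)
        y *= rms_slope / slope.std()                 # rescale to the target roughness amplitude
        return y

    y = self_affine_profile()
    print("RMS slope:", np.gradient(y, 1.0).std())   # ~0.01 by construction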

415

Effects of Fault Displacement on Emplacement Drifts  

SciTech Connect

The purpose of this analysis is to evaluate potential effects of fault displacement on emplacement drifts, including drip shields and waste packages emplaced in emplacement drifts. The output from this analysis not only provides data for the evaluation of long-term drift stability but also supports the Engineered Barrier System (EBS) process model report (PMR) and Disruptive Events Report currently under development. The primary scope of this analysis includes (1) examining fault displacement effects in terms of induced stresses and displacements in the rock mass surrounding an emplacement drift and (2) predicting fault displacement effects on the drip shield and waste package. The magnitude of the fault displacement analyzed here bounds the mean fault displacement corresponding to an annual frequency of exceedance of 10^-5 adopted for the preclosure period of the repository and also supports the postclosure performance assessment. This analysis is performed following the development plan prepared for analyzing effects of fault displacement on emplacement drifts (CRWMS M&O 2000). The analysis will begin with the identification and preparation of requirements, criteria, and inputs. A literature survey on accommodating fault displacements encountered in underground structures such as buried oil and gas pipelines will be conducted. For a given fault displacement, the least favorable scenario in terms of the spatial relation of a fault to an emplacement drift is chosen, and the analysis is then performed analytically. Based on the analysis results, conclusions are made regarding the effects and consequences of fault displacement on emplacement drifts. Specifically, the analysis will discuss loads that can be induced by fault displacement on emplacement drifts, drip shields, and/or waste packages during the postclosure period.

F. Duan

2000-04-25

416

Lake records of Northern Hemisphere South American summer monsoon variability from the Cordillera Oriental, Colombia: Initial results from Lago de Tota and Laguna de Ubaque  

NASA Astrophysics Data System (ADS)

The lack of terrestrial paleoclimate records from the Northern Hemisphere Andes with decadal resolution has meant that our understanding of abrupt South American summer monsoon (SASM) variability during the Holocene is almost exclusively based on data from Southern Hemisphere sites. In order to develop a more integrated and complete picture of the SASM as a system and its response during rapid climate changes, high-resolution paleoclimate records are needed from the Northern Hemisphere Andes. We present initial results from analysis of lake sediment cores that were collected from Lago de Tota (N 5.554, W 72.916) and Laguna de Ubaque (N 4.500, W 73.935) in the Eastern Cordillera of the Colombian Andes. These sediment cores were collected in July 2013 as part of an ongoing paleoclimate research initiative in Colombia. Located in the Boyacá Province, Lago de Tota is the largest high-altitude lake (3010 masl) in the Northern Hemisphere Andes and the second largest Andean lake in South America. As such, hydrologic changes recorded in the lake's sediment record reflect regional climate responses. Laguna de Ubaque (2070 masl) is a small, east-facing, moraine-dammed lake near the capital of Bogotá that contains finely laminated clastic sediments. The initial sedimentological and chronological results demonstrate that Lago de Tota and Laguna de Ubaque have excellent potential for resolving Northern Hemisphere SASM variability at decadal time scales or better. Such records will provide important counterparts to high-resolution paleoclimate records from the Southern Hemisphere Andes.

Escobar, J.; Rudloff, O.; Bird, B. W.

2013-12-01

417

Additional shear resistance from fault roughness and stress levels on geometrically complex faults  

NASA Astrophysics Data System (ADS)

The majority of crustal faults host earthquakes when the ratio of average background shear stress τb to effective normal stress σeff is τb/σeff ≈ 0.6. In contrast, mature plate-boundary faults like the San Andreas Fault (SAF) operate at τb/σeff ≈ 0.2. Dynamic weakening, the dramatic reduction in frictional resistance at coseismic slip velocities that is commonly observed in laboratory experiments, provides a leading explanation for low stress levels on mature faults. Strongly velocity-weakening friction laws permit rupture propagation on flat faults above a critical stress level τpulse/σeff ≈ 0.25. Provided that dynamic weakening is not restricted to mature faults, the higher stress levels on most faults are puzzling. In this work, we present a self-consistent explanation for the relatively high stress levels on immature faults that is compatible with low coseismic frictional resistance, from dynamic weakening, for all faults. We appeal to differences in structural complexity with the premise that geometric irregularities introduce resistance to slip in addition to frictional resistance. This general idea is quantified for the special case of self-similar fractal roughness of the fault surface. Natural faults have roughness characterized by amplitude-to-wavelength ratios α between 10^-3 and 10^-2. Through a second-order boundary perturbation analysis of quasi-static frictionless sliding across a band-limited self-similar interface in an ideally elastic solid, we demonstrate that roughness induces an additional shear resistance to slip, or roughness drag, given by τdrag = 8π³α²G′δ/λmin, for G′ = G/(1-ν) with shear modulus G and Poisson's ratio ν, slip δ, and minimum roughness wavelength λmin. The influence of roughness drag on fault mechanics is verified through an extensive set of dynamic rupture simulations of earthquakes on strongly rate-weakening fractal faults with elastic-plastic off-fault response. The simulations suggest that fault rupture, in the form of self-healing slip pulses, becomes probable above a background stress level τb ≈ τpulse + τdrag. For the smoothest faults (α ~ 10^-3), τdrag is negligible compared to frictional resistance, so that τb ≈ τpulse ≈ 0.25σeff. However, on rougher faults (α ~ 10^-2), roughness drag can exceed frictional resistance. We expect that τdrag ultimately departs from the predicted scaling when roughness-induced stress perturbations activate pervasive off-fault inelastic deformation, such that background stress saturates at a limit (τb ≈ 0.6σeff) determined by the finite strength of the off-fault material. We speculate that this strength, and not the much smaller dynamically weakened frictional strength, determines the stress levels at which the majority of faults operate.

Fang, Zijun; Dunham, Eric M.

2013-07-01
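
As an illustrative aside, the roughness-drag scaling quoted above, τdrag = 8π³α²G′δ/λmin with G′ = G/(1-ν), is easy to evaluate directly. The Python sketch below uses hypothetical values for shear modulus, Poisson's ratio, slip, and minimum roughness wavelength; because τdrag scales with α², the drag grows by two orders of magnitude as α increases from 10^-3 to 10^-2.

    import math

    def roughness_drag(alpha, G, nu, slip, lam_min):
        """Roughness drag tau_drag = 8*pi^3*alpha^2*G'*delta/lambda_min, with G' = G/(1-nu),
        following the scaling quoted in the abstract above."""
        G_prime = G / (1.0 - nu)
        return 8.0 * math.pi**3 * alpha**2 * G_prime * slip / lam_min

    # Hypothetical numbers: G = 30 GPa, nu = 0.25, 1 m of slip, 100 m minimum wavelength.
    for alpha in (1e-3, 1e-2):
        tau_drag = roughness_drag(alpha, G=30e9, nu=0.25, slip=1.0, lam_min=100.0)
        print(f"alpha = {alpha:g}: tau_drag ~ {tau_drag / 1e6:.2f} MPa")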

418

Off-fault Damage Associated with a Localized Bend in the North Branch San Gabriel Fault, California  

E-print Network

Structures within very large displacement, mature fault zones, such as the North Branch San Gabriel Fault (NBSGF), are the product of a complex combination of processes. Off-fault damage within a damage zone and first-order geometric asperities...

Becker, Andrew 1987-

2012-08-15

419

Evaluation of faulting characteristics and ground acceleration associated with recent movement along the Meers Fault, Southwestern Oklahoma  

E-print Network

being cited as responsible for the Meers Fault scarp. Earthquakes of magnitude 7 to 8 occurring in conjunction with recent reactivation of the fault have been calculated. However, evidence found within the Wichita Mountains just south of the fault...

Burrell, Richard Dennis

1997-01-01

420

Partial fault dictionary: A new approach for computer-aided fault localization  

SciTech Connect

The approach described in this paper has been developed to address the computation time and problem size of localization methodologies in VLSI circuits in order to reduce the overall time required for fault localization. This reduction of the problem to be solved is combined with the idea of the fault dictionary. In a pre-processing phase, a possibly faulty area is derived using the netlist and the actual test results as input data. The result is a set of cones originating from each faulty primary output. In the next step, the best cone is extracted for the fault dictionary methodology according to a heuristic formula. The circuit nodes, which are included in the intersection of the cones, are combined into a fault list. This fault list together with the best cone can be used by the fault simulator to generate a small and manageable fault dictionary related to one faulty output. In connection with additional algorithms for the reduction of stimuli and netlist, a partial fault dictionary can be set up. This dictionary is valid only for the given faulty device together with the given and reduced stimuli, but offers important benefits: practical results show a reduction of simulation time and size of the fault dictionary by factors around 100 or even more, depending on the actual circuit and assumed fault. The list of fault candidates is significantly reduced, and the required number of steps during the process of localization is reduced, too.

Hunger, A.; Papathanasiou, A. [Gerhard Mercator Univ., Duisburg (Germany)

1995-12-31

421

Spatial analysis of hypocenter to fault relationships for determining fault process zone width in Japan.  

SciTech Connect

Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis.

Arnold, Bill Walter; Roberts, Barry L.; McKenna, Sean Andrew; Coburn, Timothy C. (Abilene Christian University, Abilene, TX)

2004-09-01
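
As an illustrative aside, the core spatial measurement described above (earthquake frequency as a function of distance from a mapped fault) reduces to computing point-to-polyline distances and binning them. The Python sketch below uses a hypothetical straight fault trace and random epicenters, not the NUMO data set.

    import numpy as np

    def point_segment_distance(p, a, b):
        """Distance from point p to segment a-b (2-D map coordinates, e.g. km)."""
        p, a, b = map(np.asarray, (p, a, b))
        ab, ap = b - a, p - a
        t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))

    def distances_to_trace(epicenters, trace):
        """Minimum distance from each epicenter to a polyline fault trace."""
        segs = list(zip(trace[:-1], trace[1:]))
        return np.array([min(point_segment_distance(p, a, b) for a, b in segs)
                         for p in epicenters])

    # Hypothetical example: a straight 30-km trace and 500 random epicenters.
    rng = np.random.default_rng(1)
    trace = np.array([[0.0, 0.0], [30.0, 0.0]])
    epis = rng.uniform([-5, -10], [35, 10], size=(500, 2))
    d = distances_to_trace(epis, trace)
    counts, edges = np.histogram(d, bins=np.arange(0, 11, 1.0))
    print(dict(zip(edges[:-1], counts)))   # earthquake count versus distance from the fault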

422

Fault-Tolerance Challenges and Solutions

E-print Network

Fault-Tolerance Challenges and Solutions. Presented by Al Geist, Computer Science Research Group (Geist_FT_SC10). Rapid growth in scale drives fault-tolerance need. Challenges: fundamental assumptions as scale increases ... Hardware

423

Intermittent/transient faults in digital systems  

NASA Technical Reports Server (NTRS)

Containment set techniques are applied to 8085 microprocessor controllers so as to transform a typical control system into a slightly modified version, shown to be crashproof: after the departure of the intermittent/transient fault, return to one proper control algorithm is assured, assuming no permanent faults occur.

Masson, G. M.; Glazer, R. E.

1982-01-01

424

Detecting Latent Faults In Digital Flight Controls  

NASA Technical Reports Server (NTRS)

Report discusses theory, conduct, and results of tests involving deliberate injection of low-level faults into digital flight-control system. Part of study of effectiveness of techniques for detection of and recovery from faults, based on statistical assessment of inputs and outputs of parts of control systems. Offers exceptional new capability to establish reliabilities of critical digital electronic systems in aircraft.

Mcgough, John; Mulcare, Dennis; Larsen, William E.

1992-01-01

425

Training for Skill in Fault Diagnosis  

ERIC Educational Resources Information Center

The Knitting, Lace and Net Industry Training Board has developed a training innovation called fault diagnosis training. The entire training process concentrates on teaching based on the experiences of troubleshooters or any other employees whose main tasks involve fault diagnosis and rectification. (Author/DS)

Turner, J. D.

1974-01-01

426

Fault detection method with antennas [communication wires]

Microsoft Academic Search

A fault detection method with antennas has been developed and compared with conventional methods such as the Murray loop bridge and the pulse method. The method was applied to communication wire (surge impedance 50 ohms) with an artificial fault using metal electrode systems, and an artificial void as breakdown and/or degradation. The results showed that the detecting distance precision was

M. Kando

1997-01-01

427

Late Cenozoic intraplate faulting in eastern Australia  

NASA Astrophysics Data System (ADS)

The intensity and tectonic origin of late Cenozoic intraplate deformation in eastern Australia are relatively poorly understood. Here we show that Cenozoic volcanic rocks in southeast Queensland have been deformed by numerous faults. Using gridded aeromagnetic data and field observations, structural investigations were conducted on these faults. Results show that faults have mainly undergone strike-slip movement with a reverse component, displacing Cenozoic volcanic rocks ranging in age from ~31 to ~21 Ma. These ages imply that faulting must have occurred after the late Oligocene. Late Cenozoic deformation has mostly occurred due to the reactivation of major faults, which were active during episodes of basin formation in the Jurassic-Early Cretaceous and later during the opening of the Tasman and Coral Seas from the Late Cretaceous to the early Eocene. The wrench reactivation of major faults in the late Cenozoic also gave rise to the occurrence of brittle subsidiary reverse strike-slip faults that affected Cenozoic volcanic rocks. Intraplate transpressional deformation possibly resulted from far-field stresses transmitted from the collisional zones at the northeast and southeast boundaries of the Australian plate during the late Oligocene-early Miocene and from the late Miocene to the Pliocene. These events have resulted in the hitherto unrecognized reactivation of faults in eastern Australia.

Babaahmadi, Abbas; Rosenbaum, Gideon

2014-12-01

428

Fault current limiter using a superconducting coil  

Microsoft Academic Search

A novel circuit, consisting of solid-state diodes and a biased superconducting coil, for limiting the fault currents in three phase ac systems is presented. A modification of the basic circuit results in a solid-state ac breaker with current limiting features. The operating characteristics of the fault current limiter and the ac breaker are analyzed. An optimization procedure for sizing the

H. Boenig; D. A. Paice

1983-01-01

429

Preseismic fault slip and earthquake prediction  

Microsoft Academic Search

It is proposed that preseismic fault creep may be the underlying process that is responsible for observations of earthquake precursors. The assertion that fault creep precedes earthquakes is supported by evidence from at least some earthquakes and by analogy with detailed laboratory observations. Laboratory observations of stick slip reveal that at least two stages of preseismic slip are an intrinsic

J. H. Dieterich

1978-01-01

430

Low overhead fault-tolerant FPGA systems  

Microsoft Academic Search

Fault-tolerance is an important system metric for many operating environments, from automotive to space exploration. The conventional technique for improving system reliability is through component replication, which usually comes at significant cost: increased design time, testing, power consumption, volume, and weight. We have developed a new fault-tolerance approach that capitalizes on the unique reconfiguration capabilities of

John Lach; William H. Mangione-smith; Miodrag Potkonjak

1998-01-01

431

Measurement selection for parametric IC fault diagnosis  

NASA Technical Reports Server (NTRS)

Experimental results obtained with the use of measurement reduction for statistical IC fault diagnosis are described. The reduction method used involves data pre-processing in a fashion consistent with a specific definition of parametric faults. The effects of this preprocessing are examined.

Wu, A.; Meador, J.

1991-01-01

432

Fault tolerance of data transmission systems  

Microsoft Academic Search

This work defines quantitative indices of fault tolerance, i.e., properties of a data transmission system (DTS) that ensure effective data transmission in the presence of system-component failures. Fault tolerance is evaluated according to the mean time of the delivery or the probability of the nondelivery of a message to a DTS based on a data transmission network of general use.

G. I. Shakun; P. I. Trofimov; V. P. Altarev

1984-01-01

433

A game theoretic fault detection filter  

Microsoft Academic Search

The fault detection process is approximated with a disturbance attenuation problem. The solution to this problem, for both linear time-varying and time-invariant systems, leads to a game theoretic filter which bounds the transmission of all exogenous signals except the fault to be detected. In the limit, when the disturbance attenuation bound is brought to zero, a complete transmission block is

Walter H. Chung; Jason L. Speyer

1998-01-01

434

Scalable fault localization in enterprise networks  

Microsoft Academic Search

Modern enterprise networks encompass tens of thousands of network entities and present a very challenging task of monitoring the network. Because of the sheer size of these networks, fast, accurate, automated and scalable fault localization becomes one of the primary objectives of any enterprise network fault management system. Finding the root cause for hard failures or performance related failures in

Dipu John John

2009-01-01

435

State variable method of fault tree analysis  

Microsoft Academic Search

The current technique of Fault Tree Analysis (FTA) generally employs computer codes that calculate the minimal cut sets of the Boolean function, where each cut set comprises basic initiator events (roots) whose intersection implies the occurrence of a TOP (system failure) event. Because the number of calculations can be very large for typical fault trees, the importance of any given

R. J. Bartholomew; H. K. Knudsen; G. A. Whan

1984-01-01
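
As an illustrative aside, the minimal cut sets mentioned above (sets of basic initiator events whose joint occurrence implies the TOP event) can be computed for a toy fault tree by expanding OR gates as unions and AND gates as cross-products, then discarding non-minimal sets. The Python sketch below uses a hypothetical three-component tree; it is not the state variable method of the paper.

    from itertools import product

    def cut_sets(node):
        """Cut sets of a small fault tree given as nested tuples:
        ('OR', child, ...), ('AND', child, ...) or a basic-event name (string)."""
        if isinstance(node, str):
            return [frozenset([node])]
        op, *children = node
        child_sets = [cut_sets(c) for c in children]
        if op == 'OR':                                  # union of the children's cut sets
            return [cs for sets in child_sets for cs in sets]
        if op == 'AND':                                 # cross-product of the children's cut sets
            return [frozenset().union(*combo) for combo in product(*child_sets)]
        raise ValueError(op)

    def minimal(sets):
        """Drop any cut set that strictly contains another (keep only minimal ones)."""
        return [s for s in sets if not any(t < s for t in sets)]

    # Hypothetical TOP event: (pumpA AND pumpB) OR power OR (power AND valve).
    tree = ('OR', ('AND', 'pumpA', 'pumpB'), 'power', ('AND', 'power', 'valve'))
    print(sorted(map(sorted, set(minimal(cut_sets(tree))))))
    # -> [['power'], ['pumpA', 'pumpB']]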

436

Fault tree analysis for vital area identification  

Microsoft Academic Search

This paper discusses the use of fault tree analysis to identify those areas of nuclear fuel cycle facilities which must be protected to prevent acts of sabotage that could lead to significant release of radioactive material. By proper manipulation of the fault trees for a plant, an analyst can identify vital areas in a manner consistent with regulatory definitions. This

G. B. Varnado; N. R. Ortiz

1978-01-01

437

Fault tree analysis of computer systems  

Microsoft Academic Search

Fault Tree Analysis (FTA) is a well developed technique for the reliability and safety analysis of complex systems such as nuclear power plants and weapon systems. In this paper, we apply FTA to analyze the reliability and the performance of computer systems. An approach to detect the sequence dependent faults in computer systems is proposed and exemplified. Based on the

C. V. Ramamoorthy; Gary S. Ho; Yih-wu Han

1977-01-01

438

FAULT TREE ANALYSIS FOR SYSTEM RELIABILITY  

Microsoft Academic Search

Fault Tree Analysis (FTA) can be used to predict and prevent accidents or as an investigative tool after an event. FTA is an analytical methodology that uses a graphical model to display the analysis process. Visually a fault tree is built by special symbols, some derived from Boolean algebra. Consequently, the resulting model resembles a logic diagram or a flow

Ercüment N. Dizdar

439

Sensor Fault Diagnosis Using Principal Component Analysis  

E-print Network

The purpose of this research is to address the problem of fault diagnosis of sensors which measure a set of direct redundant variables. This study proposes: (1) a method for linear sensor fault diagnosis, and (2) an analysis of isolability and detectability...

Sharifi, Mahmoudreza

2010-07-14
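
As an illustrative aside, one common way to exploit direct redundancy among sensors, in the spirit of the PCA approach named above, is to project data onto the principal subspace learned from healthy operation and monitor the squared prediction error (Q statistic) of the residual. The Python sketch below is a minimal, hypothetical example with three redundant sensors and an empirical control limit; it is not the isolability/detectability analysis of the thesis.

    import numpy as np

    def fit_pca(X, n_comp):
        """Fit PCA on healthy sensor data X (samples x sensors); return mean and loadings."""
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        return mu, Vt[:n_comp].T                      # loadings P (sensors x n_comp)

    def spe(X, mu, P):
        """Squared prediction error (Q statistic): squared norm of the residual off the PC subspace."""
        R = (X - mu) - (X - mu) @ P @ P.T
        return (R ** 2).sum(axis=1)

    rng = np.random.default_rng(0)
    t = rng.normal(size=(500, 1))
    healthy = np.hstack([t, 2 * t, -t]) + 0.05 * rng.normal(size=(500, 3))  # 3 redundant sensors
    mu, P = fit_pca(healthy, n_comp=1)
    limit = np.quantile(spe(healthy, mu, P), 0.99)    # simple empirical control limit

    test = healthy[:5].copy()
    test[2, 1] += 1.0                                 # bias fault on sensor 2 (index 1), sample 3 (index 2)
    print(spe(test, mu, P) > limit)                   # only the third sample is flagged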

440

Globally controlled fault tolerant quantum computation  

E-print Network

We describe a method to execute globally controlled quantum information processing which admits a fault tolerant quantum error correction scheme. Our scheme nominally uses three species of addressable two-level systems which are arranged in a one dimensional array in a specific periodic arrangement. We show that the scheme possesses a fault tolerant error threshold.

J. Fitzsimons; J. Twamley

2007-07-08

441

Diagnostics Tools Identify Faults Prior to Failure  

NASA Technical Reports Server (NTRS)

Through the SBIR program, Rochester, New York-based Impact Technologies LLC collaborated with Ames Research Center to commercialize the Center's Hybrid Diagnostic Engine, or HyDE, software. The fault-detection program is now incorporated into a software suite that identifies potential faults early in the design phase of systems ranging from printers to vehicles and robots, saving time and money.

2013-01-01

442

Philippine fault: A key for Philippine kinematics  

Microsoft Academic Search

On the basis of new geologic data and a kinematic analysis, we establish a simple kinematic model in which the motion between the Philippine Sea plate and Eurasia is distributed on two boundaries: the Philippine Trench and the Philippine fault. This model predicts a velocity of 2 to 2.5 cm/yr along the fault. Geologic data from the Visayas provide an

E. Barrier; P. Huchon; M. Aurelio

1991-01-01

443

A Game Theoretic Fault Detection Filter  

NASA Technical Reports Server (NTRS)

The fault detection process is modelled as a disturbance attenuation problem. The solution to this problem is found via differential game theory, leading to an H-infinity filter which bounds the transmission of all exogenous signals save the fault to be detected. For a general class of linear systems which includes some time-varying systems, it is shown that this transmission bound can be taken to zero by simultaneously bringing the sensor noise weighting to zero. Thus, in the limit, a complete transmission block can be achieved, making the game filter into a fault detection filter. When we specialize this result to time-invariant systems, it is found that the detection filter attained in the limit is identical to the well-known Beard-Jones Fault Detection Filter. That is, all fault inputs other than the one to be detected (the "nuisance faults") are restricted to an invariant subspace which is unobservable to a projection on the output. For time-invariant systems, it is also shown that in the limit, the order of the state-space and the game filter can be reduced by factoring out the invariant subspace. The result is a lower-dimensional filter which can observe only the fault to be detected. A reduced-order filter can also be generated for time-varying systems, though the computational overhead may be intensive. An example given at the end of the paper demonstrates the effectiveness of the filter as a tool for fault detection and identification.

Chung, Walter H.; Speyer, Jason L.

1995-01-01
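
As an illustrative aside, the end goal of the filters discussed above is a residual that stays near zero during healthy operation and responds when a fault appears. The Python sketch below implements only a plain observer-based residual generator for a small hypothetical discrete-time system with a hand-picked gain; it is a much simpler relative of the game-theoretic and Beard-Jones filters and does not block nuisance faults or disturbances.

    import numpy as np

    # Discrete-time plant: x[k+1] = A x + B u + F f (f = additive actuator fault), y = C x.
    A = np.array([[0.9, 0.1],
                  [0.0, 0.8]])
    B = np.array([[0.0],
                  [1.0]])
    C = np.array([[1.0, 0.0]])
    F = B                                    # the fault enters through the actuator channel
    L = np.array([[0.5],
                  [0.2]])                    # hand-picked observer gain; eig(A - L C) ~ {0.74, 0.46}

    x = np.zeros((2, 1)); xhat = np.zeros((2, 1))
    for k in range(60):
        u = np.array([[1.0]])
        f = np.array([[0.5]]) if k >= 30 else np.array([[0.0]])   # fault appears at k = 30
        y = C @ x
        r = (y - C @ xhat).item()                                  # residual: ~0 until the fault acts
        if k % 10 == 0:
            print(k, round(r, 4))
        xhat = A @ xhat + B @ u + L * r                            # observer update
        x = A @ x + B @ u + F @ f                                  # plant update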

444

The cost of software fault tolerance  

NASA Technical Reports Server (NTRS)

The proposed use of software fault tolerance techniques as a means of reducing software costs in avionics and as a means of addressing the issue of system unreliability due to faults in software is examined. A model is developed to provide a view of the relationships among cost, redundancy, and reliability which suggests strategies for software development and maintenance which are not conventional.

Migneault, G. E.

1982-01-01

445

Tectonics and Landforms: Types of Faults  

NSDL National Science Digital Library

This resource offers animations depicting the defining motions of five different types of faults. Also included is a data visualization showing the geographic patterns of faults and earthquakes, and a documentary video which discusses a series of great earthquakes that occurred between 1811 and 1812 and includes an animation of how the Mississippi River Valley was created.

Michael Ritter

446

Fault-Tolerant Broadcasts in CAN  

Microsoft Academic Search

Fault-tolerant distributed systems based on field-buses may take advantage of reliable and atomic broadcast. There is a current belief that CAN native mechanisms provide atomic broadcast. In this paper, we dismiss this misconception, explaining how network errors may lead to: inconsistent message delivery; generation of message duplicates. These errors may occur when faults hit the last two bits of the end-of-frame delimiter. Although rare,

José Rufino; Paulo Veríssimo; Guilherme Arroz; Carlos Almeida; Luís Rodrigues

1998-01-01

447

BASE: Using abstraction to improve fault tolerance  

Microsoft Academic Search

Software errors are a major cause of outages and they are increasingly exploited in malicious attacks. Byzantine fault tolerance allows replicated systems to mask some software errors but it is expensive to deploy. This paper describes a replication technique, BASE, which uses abstraction to reduce the cost of Byzantine fault tolerance and to improve its ability to mask software errors.

Miguel Castro; Rodrigo Rodrigues; Barbara Liskov

2003-01-01

448

The Galileo Fault Tree Analysis Tool  

Microsoft Academic Search

We present Galileo, a dynamic fault tree modeling and analysis tool that combines the innovative DIFTree analysis methodology with a rich user interface built using package-oriented programming. DIFTree integrates binary decision diagram and Markov methods under the common notation of dynamic fault trees, allowing the user to exploit the benefits of both techniques while avoiding the need

Kevin J. Sullivan; Joanne Bechta Dugan; David Coppit

1999-01-01

449

Fault tolerant reversible AC motor drive system  

Microsoft Academic Search

This paper investigates a fault-tolerant three-phase AC/AC converter topology. With such a topology, when either one leg of the input rectifier, or one leg of the output inverter, is lost, the power flow between the grid and the load can still be controlled effectively. The topology proposed and its operating principles are presented. The system model under fault conditions

C. B. Jacobina; R. L. A. Ribeiro; A. M. N. Lima; E. R. C. Da Silva

2001-01-01

450

Performance Implications of Tolerating Cache Faults  

Microsoft Academic Search

Microprocessors are increasingly incorporating one or more on-chip caches. These caches are occupying a greater share of chip area, and thus may be the locus of manufacturing defects. Some of these defects will cause faults in cache tag or data memory. These faults can be tolerated by disabling the cache blocks that contain them. This approach lets chips with defects

Andreas Farid Pour; Mark D. Hill

1993-01-01

451

Physiochemical Evidence of Faulting Processes and Modeling of Fluid in Evolving Fault Systems in Southern California  

SciTech Connect

Our study targets recent (Plio-Pleistocene) faults and young (Tertiary) petroleum fields in southern California. Faults include the Refugio Fault in the Transverse Ranges, the Ellwood Fault in the Santa Barbara Channel, and most recently the Newport-Inglewood Fault in the Los Angeles Basin. Subsurface core and tubing scale samples, outcrop samples, well logs, reservoir properties, pore pressures, fluid compositions, and published structural-seismic sections have been used to characterize the tectonic/diagenetic history of the faults. As part of the effort to understand the diagenetic processes within these fault zones, we have studied analogous processes of rapid carbonate precipitation (scaling) in petroleum reservoir tubing and manmade tunnels. From this, we have identified geochemical signatures in carbonate that characterize rapid CO2 degassing. These data provide constraints for finite element models that predict fluid pressures, multiphase flow patterns, rates and patterns of deformation, subsurface temperatures and heat flow, and geochemistry associated with large fault systems.

Boles, James [Professor

2013-05-24

452

Illuminating Northern California's Active Faults  

NASA Astrophysics Data System (ADS)

Newly acquired light detection and ranging (lidar) topographic data provide a powerful community resource for the study of landforms associated with the plate boundary faults of northern California (Figure 1). In the spring of 2007, GeoEarthScope, a component of the EarthScope Facility construction project funded by the U.S. National Science Foundation, acquired approximately 2000 square kilometers of airborne lidar topographic data along major active fault zones of northern California. These data are now freely available in point cloud (x, y, z coordinate data for every laser return), digital elevation model (DEM), and KMZ (zipped Keyhole Markup Language, for use in Google Earth™ and other similar software) formats through the GEON OpenTopography Portal (http://www.OpenTopography.org/data). Importantly, vegetation can be digitally removed from lidar data, producing high-resolution images (0.5- or 1.0-meter DEMs) of the ground surface beneath forested regions that reveal landforms typically obscured by vegetation canopy (Figure 2).

Prentice, Carol S.; Crosby, Christopher J.; Whitehill, Caroline S.; Arrowsmith, J. Ramón; Furlong, Kevin P.; Phillips, David A.

2009-02-01

453

Fault Tolerant Homopolar Magnetic Bearings  

NASA Technical Reports Server (NTRS)

Magnetic suspensions (MS) satisfy the long-life and low-loss conditions demanded by satellite and ISS-based flywheels used for Energy Storage and Attitude Control (ACESE) service. This paper summarizes the development of a novel MS that improves reliability via fault-tolerant operation. Specifically, flux coupling between poles of a homopolar magnetic bearing is shown to deliver desired forces even after termination of coil currents to a subset of failed poles. Linear, coordinate-decoupled force-voltage relations are also maintained before and after failure by bias linearization. Current distribution matrices (CDM), which adjust the currents and fluxes following a pole set failure, are determined for many faulted pole combinations. The CDMs and the system responses are obtained utilizing 1D magnetic circuit models with fringe and leakage factors derived from detailed, 3D, finite element field models. Reliability results are presented vs. detection/correction delay time and individual power amplifier reliability for 4-, 6-, and 7-pole configurations. Reliability is shown for two success criteria, i.e., (a) no catcher bearing contact following pole failures and (b) re-levitation off of the catcher bearings following pole failures. An advantage of the method presented over other redundant operation approaches is a significantly reduced requirement for backup hardware such as additional actuators or power amplifiers.

Li, Ming-Hsiu; Palazzolo, Alan; Kenny, Andrew; Provenza, Andrew; Beach, Raymond; Kascak, Albert

2003-01-01

454

Active faulting in the Walker Lane  

NASA Astrophysics Data System (ADS)

Deformation across the San Andreas and Walker Lane fault systems accounts for most relative Pacific-North American transform plate motion. The Walker Lane is composed of discontinuous sets of right-slip faults that are located to the east and strike approximately parallel to the San Andreas fault system. Mapping of active faults in the central Walker Lane shows that right-lateral shear is locally accommodated by rotation of crustal blocks bounded by steep-dipping east striking left-slip faults. The left slip and clockwise rotation of crustal blocks bounded by the east striking faults has produced major basins in the area, including Rattlesnake and Garfield flats; Teels, Columbus and Rhodes salt marshes; and Queen Valley. The Benton Springs and Petrified Springs faults are the major northwest striking structures currently accommodating transform motion in the central Walker Lane. Right-lateral offsets of late Pleistocene surfaces along the two faults point to slip rates of at least 1 mm/yr. The northern limit of northwest trending strike-slip faults in the central Walker Lane is abrupt and reflects transfer of strike-slip to dip-slip deformation in the western Basin and Range and transformation of right slip into rotation of crustal blocks to the north. The transfer of strike slip in the central Walker Lane to dip slip in the western Basin and Range correlates to a northward broadening of the modern strain field suggested by geodesy and appears to be a long-lived feature of the deformation field. The complexity of faulting and apparent rotation of crustal blocks within the Walker Lane is consistent with the concept of a partially detached and elastic-brittle crust that is being transported on a continuously deforming layer below. The regional pattern of faulting within the Walker Lane is more complex than observed along the San Andreas fault system to the west. The difference is attributed to the relatively less cumulative slip that has occurred across the Walker Lane and that oblique components of displacement are of opposite sense along the Walker Lane (extension) and San Andreas (contraction), respectively. Despite the gross differences in fault pattern, the Walker Lane and San Andreas also share similarities in deformation style, including clockwise rotations of crustal blocks leading to development of structural basins and the partitioning of oblique components of slip onto subparallel strike-slip and dip-slip faults.

Wesnousky, Steven G.

2005-06-01

455

Classification of Aircraft Maneuvers for Fault Detection  

NASA Technical Reports Server (NTRS)

Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data provide a reasonable match to known examples of proper operation. In the domain of fault detection in aircraft, the first assumption is unreasonable and the second is difficult to determine. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. To develop such a system, we use flight data collected under a controlled test environment, subject to many sources of variability. We explain where our classifier fits into the envisioned fault detection system as well as experiments showing the promise of this classification subsystem.

Oza, Nikunj; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.; Koga, Dennis (Technical Monitor)

2002-01-01

456

Fault characterization of a multilayered perceptron network  

NASA Technical Reports Server (NTRS)

The results of a set of simulation experiments conducted to quantify the effects of faults in a classification network implemented as a three-layered perceptron model are reported. The percentage of vectors misclassified by the classification network, the time taken for the network to stabilize, and the output values are measured. The results show that both transient and permanent faults have a significant impact on the performance of the network. Transient faults are also found to cause the network to be increasingly unstable as the duration of a transient is increased. The average percentage of the vectors misclassified is about 25 percent; after relearning, this is reduced to 10 percent. The impact of link faults is relatively insignificant in comparison with node faults (1 percent versus 19 percent misclassified after relearning). A study of the impact of hardware redundancy shows a linear increase in misclassifications with increasing hardware size.

Tan, Chang H.; Iyer, Ravishankar K.

1990-01-01
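
As an illustrative aside, the kind of fault-injection experiment described above can be sketched by zeroing the weights attached to a hidden node of a small network and counting how many input vectors change class. The Python sketch below uses a hypothetical, randomly weighted three-layer perceptron; it is not the network, training procedure, or fault model of the study.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 3-layer perceptron (4 inputs, 8 hidden units, 3 classes) with random weights;
    # the point is the fault-injection procedure, not the classifier itself.
    W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)
    X = rng.normal(size=(500, 4))

    def classify(W1, W2):
        h = np.tanh(X @ W1 + b1)
        return np.argmax(h @ W2 + b2, axis=1)

    baseline = classify(W1, W2)

    def inject_node_fault(W1, W2, node):
        """Permanent fault on one hidden node: zero its incoming and outgoing weights
        (a simple disconnected/stuck-at-zero node model)."""
        W1f, W2f = W1.copy(), W2.copy()
        W1f[:, node] = 0.0
        W2f[node, :] = 0.0
        return W1f, W2f

    for node in range(3):
        mis = np.mean(classify(*inject_node_fault(W1, W2, node)) != baseline)
        print(f"hidden node {node} faulted: {100 * mis:.1f}% of vectors reclassified")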

457

Self-triggering superconducting fault current limiter  

DOEpatents

A modular and scalable Matrix Fault Current Limiter (MFCL) that functions as a "variable impedance" device in an electric power network, using components made of superconducting and non-superconducting electrically conductive materials. The matrix fault current limiter comprises a fault current limiter module that includes a superconductor which is electrically coupled in parallel with a trigger coil, wherein the trigger coil is magnetically coupled to the superconductor. The current surge during a fault within the electrical power network will cause the superconductor to transition to its resistive state and also generate a uniform magnetic field in the trigger coil and simultaneously limit the voltage developed across the superconductor. This results in fast and uniform quenching of the superconductors and significantly reduces the burnout risk associated with the non-uniformity often existing within the volume of superconductor materials. The fault current limiter modules may be electrically coupled together to form various "n" (rows) × "m" (columns) matrix configurations.

Yuan, Xing (Albany, NY); Tekletsadik, Kasegn (Rexford, NY)

2008-10-21

458

Quantifying fault recovery in multiprocessor systems  

NASA Technical Reports Server (NTRS)

Various aspects of reliable computing are formalized and quantified with emphasis on efficient fault recovery. The mathematical model that proves most appropriate is provided by the theory of graphs. New measures for fault recovery are developed, and the values of the elements of the fault recovery vector are observed to depend not only on the computation graph H and the architecture graph G, but also on the specific location of a fault. In the examples, a hypercube is chosen as a representative parallel computer architecture, and a pipeline as a typical configuration for program execution. Dependability qualities of such a system are defined with and without a fault. These qualities are determined by the resiliency triple defined by three parameters: multiplicity, robustness, and configurability. Parameters for measuring recovery effectiveness are also introduced in terms of distance, time, and the number of new, used, and moved nodes and edges.
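
A minimal sketch of the graph-based setup described above: a pipeline (computation graph H) mapped onto a hypercube (architecture graph G), a fault injected at one mapped node, and recovery summarized by hop distance to the replacement node and by how many nodes are new or moved. The particular mapping and the metric names below are illustrative, not the paper's exact definitions.

```python
# Illustrative recovery measures for a pipeline embedded in a 3-cube.
import networkx as nx

G = nx.hypercube_graph(3)                       # architecture graph G: 8 nodes, bit-tuple labels
nodes = list(G.nodes)

# 4-stage pipeline (computation graph H) mapped onto adjacent nodes (Gray-code order).
pipeline = [(0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 0)]
faulty = pipeline[2]                            # the fault hits the third stage

# Spare nodes: architecture nodes not used by the current mapping.
spares = [v for v in nodes if v not in pipeline]

# Remap the faulty stage to the closest spare; the hop count is the recovery distance.
dist = {s: nx.shortest_path_length(G, faulty, s) for s in spares}
replacement = min(dist, key=dist.get)
new_mapping = [replacement if v == faulty else v for v in pipeline]

print("recovery distance (hops):", dist[replacement])
print("new nodes used:", len(set(new_mapping) - set(pipeline)))
print("moved stages:", sum(a != b for a, b in zip(pipeline, new_mapping)))
```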

Malek, Miroslaw; Harary, Frank

1990-01-01

459

Maneuver Classification for Aircraft Fault Detection  

NASA Technical Reports Server (NTRS)

Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data provide a reasonable match to known examples of proper operation. In the domain of fault detection in aircraft, identifying all possible faulty and proper operating modes is clearly impossible. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. To develop such a system, we use flight data collected in a controlled test environment, subject to many sources of variability. We explain where our classifier fits into the envisioned fault detection system and present experiments showing the promise of this classification subsystem.
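
A minimal sketch of the classification subsystem described above: predict the maneuver label from features derived from vibration and flight-state data. The abstract does not specify the learning algorithm or feature set, so the random forest, the feature dimensions, and the maneuver labels below are assumptions for illustration, trained here on synthetic stand-in data.

```python
# Illustrative maneuver classifier over per-window features (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for per-window features (e.g., vibration RMS, spectral peaks, airspeed).
n_windows, n_features = 600, 8
X = rng.normal(size=(n_windows, n_features))
maneuvers = np.array(["hover", "forward_flight", "climb", "turn"])  # assumed labels
y = rng.integers(0, len(maneuvers), size=n_windows)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# A downstream fault detector could compare each window's vibration signature
# against what is normal for the predicted maneuver.
print("held-out accuracy:", clf.score(X_test, y_test))
print("example predictions:", maneuvers[clf.predict(X_test[:5])])
```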

Oza, Nikunj C.; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.

2003-01-01

460

Rainfall-runoff modeling of recent hydroclimatic change in a subtropical lake catchment: Laguna Mar Chiquita, Argentina  

NASA Astrophysics Data System (ADS)

The 1970s abrupt lake level rise of Laguna Mar Chiquita in central Argentina was shown to be driven by an increase in the Rio Sali-Dulce discharge outflowing from the northern part of the lake catchment. This regional hydrological change was consistent with the 20th century hydroclimatic trends observed in southeastern South America. However, little is known about the impacts of climate or land cover changes on this regional hydrological change causing the sharp lake level rise. To address this question, the present study aims to provide an integrated basin-lake model. We used the physically-based SWAT model in order to simulate streamflow in the Sali-Dulce Basin. The ability of SWAT to simulate non-stationary hydrological conditions was evaluated by a cross-calibration exercise. Based on observed daily meteorological data over 1973-2004, two successive 9-year periods referred to as wet (P1976-1985 = 1205 mm/yr) and dry (P1986-1995 = 796 mm/yr) periods were selected. The calibration yielded similar Nash-Sutcliffe efficiencies (NSE) at the monthly time scale for both periods (NSEwet = 0.86; NSEdry = 0.90), supporting the model's ability to adapt its structure to changing climatic situations. The simulation was extended in scarce data conditions over 1931-1972, and the simulation of monthly discharge values was acceptable (NSE = 0.71). When precipitation in the model was increased until it reached the change observed in the 1970s (ΔP/P¯ = 22%), the resulting increase in streamflow was found to closely match the 1970s hydrological change (ΔQ/Q¯ = 45%). Sensitivity analyses revealed that land cover changes had a minor impact on the 1970s hydrological changes in the Sali-Dulce Basin. Integrating the SWAT simulations within the lake model over 1973-2004 provided lake level variations similar to those obtained using observed discharge values. Over the longer period, going back to 1931, the main features of lake levels were still adequately reproduced, which suggests that this basin-lake model is a promising approach for simulating long-term lake level fluctuations in response to climate.
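
A minimal sketch of the Nash-Sutcliffe efficiency (NSE) used above to judge the calibration of simulated monthly discharge against observations. The discharge series below are synthetic placeholders; only the NSE formula itself is standard.

```python
# Nash-Sutcliffe efficiency: NSE = 1 - sum((Qobs - Qsim)^2) / sum((Qobs - mean(Qobs))^2)
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE of simulated discharge against observed discharge (1.0 is a perfect fit)."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Synthetic monthly discharge (m^3/s): the simulation tracks observations with noise.
rng = np.random.default_rng(0)
q_obs = 50 + 30 * np.sin(np.linspace(0, 6 * np.pi, 108)) + rng.normal(0, 5, 108)
q_sim = q_obs + rng.normal(0, 8, 108)

print(f"NSE = {nash_sutcliffe(q_obs, q_sim):.2f}")
```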

Troin, Magali; Vallet-Coulomb, Christine; Piovano, Eduardo; Sylvestre, Florence

2012-12-01

461

Magnitude, geomorphologic response and climate links of lake level oscillations at Laguna Potrok Aike, Patagonian steppe (Argentina)  

NASA Astrophysics Data System (ADS)

Laguna Potrok Aike is a large maar lake located in the semiarid steppe of southern Patagonia known for its Lateglacial and Holocene lake level fluctuations. Based on sedimentary, seismic and geomorphological evidence, the lake level curve is updated and extended into the Last Glacial period, and the geomorphological development of the lake basin and its catchment area is interpreted. Abrasion and lake level oscillations since at least ˜50 ka caused concentric erosion of the surrounding soft rocks of the Miocene Santa Cruz Formation and expanded the basin diameter by approximately 1 km. A high lake level and overflow conditions of the lake were dated by luminescence methods and tephra correlation to the early Lateglacial as well as to ˜45 ka. The lowest lake level of record occurred during the mid-Holocene. A further lake level drop was probably prevented by groundwater supply. This low lake level eroded a distinct terrace into lacustrine sediments. Collapse of these terraces probably caused mass movement deposits in the profundal zone of the lake. After the mid-Holocene lake level low stand a general and successive transgression occurred until the Little Ice Age maximum; i.e. ca 40 m above the local groundwater table. Frequent lake level oscillations caused deflation of emerged terraces only along the eastern shoreline due to prevailing westerly winds. Preservation of eolian deposits might be linked to relatively moist climate conditions during the past 2.5 ka. Precisely dated lake level reconstructions in the rain-shadow of the Andes document high Last Glacial and low Holocene lake levels that could suggest increased precipitation during the Last Glacial period. As permafrost in semiarid Patagonia is documented and dated to the Last Glacial period, we argue that the frozen ground might have increased surficial runoff from the catchment and thus influenced the water balance of the lake. This is important for investigating the glacial to Holocene latitudinal shift and/or strengthening of the Southern Hemispheric Westerlies by using lake level reconstructions as a means to assess the regional water balance. Our interpretation explains the contradiction with investigations based on pollen data indicating drier climatic conditions for the Last Glacial period.

Kliem, P.; Buylaert, J. P.; Hahn, A.; Mayr, C.; Murray, A. S.; Ohlendorf, C.; Veres, D.; Wastegård, S.; Zolitschka, B.; PASADO Science Team

2013-07-01


463

Geophysical anomalies and segmentation of the Hayward Fault, San Andreas Fault System, Northern California, USA  

NASA Astrophysics Data System (ADS)

The Hayward Fault, part of the San Andreas Fault System, extends for about 90 km and is regarded as one of the most hazardous faults in northern California. The Hayward Fault is predominantly a right-lateral strike-slip fault that forms the western boundary of the East Bay Hills, with about 100 km of total offset along the fault zone. The Hayward Fault juxtaposes very different basement terranes, with subduction-related Franciscan rocks on the southwest and oceanic basement and forearc sedimentary rocks (Great Valley sequence) on the northeast. The Hayward Fault also appears to have preferentially followed a pre-existing structure. Historically, the Hayward Fault has been partitioned into two fault segments on the basis of the 1868 and 1836 earthquakes. However, because the 1868 earthquake ruptured beyond one of these segments and the 1836 earthquake is no longer associated with the Hayward Fault, the two-segment model of the Hayward Fault requires re-evaluation. The fault is characterized by distinct linear gravity and magnetic anomalies that correlate with edges of mafic and ultramafic bodies, juxtaposition of Franciscan and Great Valley sequence rocks, structural trends, creep rates, and clusters of seismicity. These inter-relationships suggest that the Hayward Fault consists of numerous fault-zone discontinuities that may define several fault segments. Fault segment boundaries can be defined by various fault-zone discontinuities based on geometric, structural, geophysical, and geologic information. Segment boundaries or asperities are locations where seismic ruptures may tend to nucleate or terminate. We suggest that the Hayward Fault is partitioned into several segments based on geological and geophysical evidence. While some of these may be rupture-bounding segments, the ability of an earthquake to propagate across segment boundaries may, in part, be related to the location, magnitude, and rupture propagation velocity of the earthquake. The approximate location and rupture length of the great M6.7 1868 earthquake, combined with geophysical evidence, suggest that it may have nucleated at or near a segment boundary and propagated bilaterally to both the northwest and southeast. In addition, geological and geophysical data suggest that the rupture associated with the 1868 earthquake may have terminated at a segment boundary near Berkeley.

Ponce, D. A.; Hildenbrand, T. G.; Jachens, R. C.

2003-04-01