Test Generation Algorithm for Fault Detection of Analog Circuits Based on Extreme Learning Machine
Zhou, Jingyu; Tian, Shulin; Yang, Chenglin; Ren, Xuelong
2014-01-01
This paper proposes a novel test generation algorithm based on extreme learning machine (ELM), and such algorithm is cost-effective and low-risk for analog device under test (DUT). This method uses test patterns derived from the test generation algorithm to stimulate DUT, and then samples output responses of the DUT for fault classification and detection. The novel ELM-based test generation algorithm proposed in this paper contains mainly three aspects of innovation. Firstly, this algorithm saves time efficiently by classifying response space with ELM. Secondly, this algorithm can avoid reduced test precision efficiently in case of reduction of the number of impulse-response samples. Thirdly, a new process of test signal generator and a test structure in test generation algorithm are presented, and both of them are very simple. Finally, the abovementioned improvement and functioning are confirmed in experiments. PMID:25610458
Experimental results in evolutionary fault-recovery for field programmable analog devices
NASA Technical Reports Server (NTRS)
Zebulum, Ricardo S.; Keymeulen, Didier; Duong, Vu; Guo, Xin; Ferguson, M. I.; Stoica, Adrian
2003-01-01
This paper presents experimental results of fast intrinsic evolutionary design and evolutionary fault recovery of a 4-bit Digital to Analog Converter (DAC) using the JPL stand-alone board-level evolvable system (SABLES).
Fault diagnosis for analog circuits utilizing time-frequency features and improved VVRKFA
NASA Astrophysics Data System (ADS)
He, Wei; He, Yigang; Luo, Qiwu; Zhang, Chaolong
2018-04-01
This paper proposes a novel scheme for analog circuit fault diagnosis utilizing features extracted from the time-frequency representations of signals and an improved vector-valued regularized kernel function approximation (VVRKFA). First, the cross-wavelet transform is employed to yield the energy-phase distribution of the fault signals over the time and frequency domain. Since the distribution is high-dimensional, a supervised dimensionality reduction technique—the bilateral 2D linear discriminant analysis—is applied to build a concise feature set from the distributions. Finally, VVRKFA is utilized to locate the fault. In order to improve the classification performance, the quantum-behaved particle swarm optimization technique is employed to gradually tune the learning parameter of the VVRKFA classifier. The experimental results for the analog circuit faults classification have demonstrated that the proposed diagnosis scheme has an advantage over other approaches.
NASA Technical Reports Server (NTRS)
Wilhite, Larry D.; Lee, S. C.; Lollar, Louis F.
1989-01-01
The design and implementation of the real-time data acquisition and processing system employed in the AMPERES project is described, including effective data structures for efficient storage and flexible manipulation of the data by the knowledge-based system (KBS), the interprocess communication mechanism required between the data acquisition system and the KBS, and the appropriate data acquisition protocols for collecting data from the sensors. Sensor data are categorized as critical or noncritical data on the basis of the inherent frequencies of the signals and the diagnostic requirements reflected in their values. The critical data set contains 30 analog values and 42 digital values and is collected every 10 ms. The noncritical data set contains 240 analog values and is collected every second. The collected critical and noncritical data are stored in separate circular buffers. Buffers are created in shared memory to enable other processes, i.e., the fault monitoring and diagnosis process and the user interface process, to freely access the data sets.
Okubo, Chris H.
2012-01-01
Volcanic ash is thought to comprise a large fraction of the Martian equatorial layered deposits and much new insight into the process of faulting and related fluid flow in these deposits can be gained through the study of analogous terrestrial tuffs. This study identifies a set of fault-related processes that are pertinent to understanding the evolution of fault systems in fine-grained, poorly indurated volcanic ash by investigating exposures of faults in the Miocene-aged Joe Lott Tuff Member of the Mount Belknap Volcanics, Utah. The porosity and granularity of the host rock are found to control the style of localized strain that occurs prior to and contemporaneous with faulting. Deformation bands occur in tuff that was porous and granular at the time of deformation, while fractures formed where the tuff lost its porous and granular nature due to silicic alteration. Non-localized deformation of the host rock is also prominent and occurs through compaction of void space, including crushing of pumice clasts. Significant off-fault damage of the host rock, resembling fault pulverization, is recognized adjacent to one analog fault and may reflect the strain rate dependence of the resulting fault zone architecture. These findings provide important new guidelines for future structural analyses and numerical modeling of faulting and subsurface fluid flow through volcanic ash deposits on Mars.
Finding faults: analogical comparison supports spatial concept learning in geoscience.
Jee, Benjamin D; Uttal, David H; Gentner, Dedre; Manduca, Cathy; Shipley, Thomas F; Sageman, Bradley
2013-05-01
A central issue in education is how to support the spatial thinking involved in learning science, technology, engineering, and mathematics (STEM). We investigated whether and how the cognitive process of analogical comparison supports learning of a basic spatial concept in geoscience, fault. Because of the high variability in the appearance of faults, it may be difficult for students to learn the category-relevant spatial structure. There is abundant evidence that comparing analogous examples can help students gain insight into important category-defining features (Gentner in Cogn Sci 34(5):752-775, 2010). Further, comparing high-similarity pairs can be especially effective at revealing key differences (Sagi et al. 2012). Across three experiments, we tested whether comparison of visually similar contrasting examples would help students learn the fault concept. Our main findings were that participants performed better at identifying faults when they (1) compared contrasting (fault/no fault) cases versus viewing each case separately (Experiment 1), (2) compared similar as opposed to dissimilar contrasting cases early in learning (Experiment 2), and (3) viewed a contrasting pair of schematic block diagrams as opposed to a single block diagram of a fault as part of an instructional text (Experiment 3). These results suggest that comparison of visually similar contrasting cases helped distinguish category-relevant from category-irrelevant features for participants. When such comparisons occurred early in learning, participants were more likely to form an accurate conceptual representation. Thus, analogical comparison of images may provide one powerful way to enhance spatial learning in geoscience and other STEM disciplines.
NASA Astrophysics Data System (ADS)
Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.
2014-12-01
Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake.The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws.The corresponding initial boundary value problems are both numerically and computationally challenging.In addition, seismic waves generated by earthquake ruptures must be propagated, far away from fault zones, to seismic stations and remote areas.Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods.We present a numerical method for:a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration;b) dynamic propagation of earthquake ruptures along rough faults; c) accurate propagation of seismic waves in heterogeneous media with free surface topography.We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th order accurate in the interior and 3rd order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.
Distribution of active faulting along orogenic wedges: Minimum-work models and natural analogue
NASA Astrophysics Data System (ADS)
Yagupsky, Daniel L.; Brooks, Benjamin A.; Whipple, Kelin X.; Duncan, Christopher C.; Bevis, Michael
2014-09-01
Numerical 2-D models based on the principle of minimum work were used to examine the space-time distribution of active faulting during the evolution of orogenic wedges. A series of models focused on thin-skinned thrusting illustrates the effects of arid conditions (no erosion), unsteady state conditions (accretionary influx greater than erosional efflux) and steady state conditions (accretionary influx balances erosional efflux), on the distribution of fault activity. For arid settings, a general forward accretion sequence prevails, although a significant amount of internal deformation is registered: the resulting fault pattern is a rather uniform spread along the profile. Under fixed erosional efficiency settings, the frontal advance of the wedge-front is inhibited, reaching a steady state after a given forward propagation. Then, the applied shortening is consumed by surface ruptures over a narrow frontal zone. Under a temporal increase in erosional efficiency (i.e., transient non-steady state mass balance conditions), a narrowing of the synthetic wedge results; a rather diffuse fault activity distribution is observed during the deformation front retreat. Once steady balanced conditions are reached, a single long-lived deformation front prevails. Fault activity distribution produced during the deformation front retreat of the latter scenario, compares well with the structural evolution and hinterlandward deformation migration identified in southern Bolivian Subandes (SSA) from late Miocene to present. This analogy supports the notion that the SSA is not in steady state, but is rather responding to an erosional efficiency increase since late Miocene. The results shed light on the impact of different mass balance conditions on the vastly different kinematics found in mountain ranges, suggesting that those affected by growing erosion under a transient unbalanced mass flux condition tend to distribute deformation along both frontal and internal faults, while others under balanced conditions would display focused deformation on a limited number of steady structures.
Origins of oblique-slip faulting during caldera subsidence
NASA Astrophysics Data System (ADS)
Holohan, Eoghan P.; Walter, Thomas R.; Schöpfer, Martin P. J.; Walsh, John J.; van Wyk de Vries, Benjamin; Troll, Valentin R.
2013-04-01
Although conventionally described as purely dip-slip, faults at caldera volcanoes may have a strike-slip displacement component. Examples occur in the calderas of Olympus Mons (Mars), Miyakejima (Japan), and Dolomieu (La Reunion). To investigate this phenomenon, we use numerical and analog simulations of caldera subsidence caused by magma reservoir deflation. The numerical models constrain mechanical causes of oblique-slip faulting from the three-dimensional stress field in the initial elastic phase of subsidence. The analog experiments directly characterize the development of oblique-slip faulting, especially in the later, non-elastic phases of subsidence. The combined results of both approaches can account for the orientation, mode, and location of oblique-slip faulting at natural calderas. Kinematically, oblique-slip faulting originates to resolve the following: (1) horizontal components of displacement that are directed radially toward the caldera center and (2) horizontal translation arising from off-centered or "asymmetric" subsidence. We informally call these two origins the "camera iris" and "sliding trapdoor" effects, respectively. Our findings emphasize the fundamentally three-dimensional nature of deformation during caldera subsidence. They hence provide an improved basis for analyzing structural, geodetic, and geophysical data from calderas, as well as analogous systems, such as mines and producing hydrocarbon reservoirs.
NASA Astrophysics Data System (ADS)
Hsieh, S. Y.; Neubauer, F.; Genser, J.
2012-04-01
The aim of this project is to study the surface expression of strike-slip faults with main aim to find rules how these structures can be extrapolated to depth. In the first step, several basic properties of the fault architecture are in focus: (1) Is it possible to define the fault architecture by studying surface structures of the damage zone vs. the fault core, particularly the width of the damage zone? (2) Which second order structures define the damage zone of strike-slip faults, and how relate these to such reported in basement fault strike-slip analog experiments? (3) Beside classical fault bend structures, is there a systematic along-strike variation of the damage zone width and to which properties relates the variation of the damage zone width. We study the above mentioned properties on the dextral Altyn fault, which is one of the largest strike-slip on Earth with the advantage to have developed in a fully arid climate. The Altyn fault includes a ca. 250 to 600 m wide fault valley, usually with the trace of actual fault in its center. The fault valley is confined by basement highs, from which alluvial fans develop towards the center of the fault valley. The active fault trace is marked by small scale pressure ridges and offset of alluvial fans. The fault valley confining basement highs are several kilometer long and ca. 0.5 to 1 km wide and confined by rotated dextral anti-Riedel faults and internally structured by a regular fracture pattern. Dextral anti-Riedel faults are often cut by Riedel faults. Consequently, the Altyn fault comprises a several km wide damage zone. The fault core zone is a barrier to fluid flow, and the few springs of the region are located on the margin of the fault valley implying the fractured basement highs as the reservoir. Consequently, the southern Silk Road was using the Altyn fault valley. The preliminary data show that two or more orders of structures exist. Small-scale develop during a single earthquake. These finally accumulate to a several 100 m wide fault core, which is in part exposed at surface to arid climate and a km wide damage zone. The basic structures of analog experiments can be well transferred to nature, although along strike changes are common due to fault bending and fracture failure of country rocks.
High-Threshold Fault-Tolerant Quantum Computation with Analog Quantum Error Correction
NASA Astrophysics Data System (ADS)
Fukui, Kosuke; Tomita, Akihisa; Okamoto, Atsushi; Fujii, Keisuke
2018-04-01
To implement fault-tolerant quantum computation with continuous variables, the Gottesman-Kitaev-Preskill (GKP) qubit has been recognized as an important technological element. However, it is still challenging to experimentally generate the GKP qubit with the required squeezing level, 14.8 dB, of the existing fault-tolerant quantum computation. To reduce this requirement, we propose a high-threshold fault-tolerant quantum computation with GKP qubits using topologically protected measurement-based quantum computation with the surface code. By harnessing analog information contained in the GKP qubits, we apply analog quantum error correction to the surface code. Furthermore, we develop a method to prevent the squeezing level from decreasing during the construction of the large-scale cluster states for the topologically protected, measurement-based, quantum computation. We numerically show that the required squeezing level can be relaxed to less than 10 dB, which is within the reach of the current experimental technology. Hence, this work can considerably alleviate this experimental requirement and take a step closer to the realization of large-scale quantum computation.
NASA Astrophysics Data System (ADS)
Howard, K. A.
2009-12-01
The 1968 collapse structure of Fernandina caldera (1.5 km3 collapsed) and also the smaller Darwin Bay caldera in Galápagos each closely resembles morphologically the structural zoning of features found in depressions collapsed into nuclear-explosion cavities (“sinks” of Houser, 1969) and in coherent sandbox-collapse models. Coherent collapses characterized by faulting, folding, and organized structure contrast with spalled pit craters (and lab experiments with collapsed powder) where disorganized piles of floor rubble result from tensile failure of the roof. Subsidence in coherent mode, whether in weak sand in the lab, stronger desert alluvium for nuclear-test sinks, or in hard rock for calderas, exhibits consistent morphologic zones. Characteristically in the sandbox and the nuclear-test analogs these include a first-formed central plug that drops along annular reverse faults. This plug and a surrounding inward-tilted or monoclinal ring (hanging wall of the reverse fault) contract as the structure expands outward by normal faulting, wherein peripheral rings of distending material widen the upper part of the structure along inward-dipping normal faults and compress inner zones and help keep them intact. In Fernandina, a region between the monocline and the outer zone of normal faulting is interpreted, by comparison to the analogs, to overlie the deflation margin of an underlying magma chamber. The same zoning pattern is recognized in structures ranging from sandbox subsidence features centimeters across, to Alae lave lake and nuclear-test sinks tens to hundreds of meters across, to Fenandina’s 2x4 km-wide collapse, to Martian calderas tens of kilometers across. Simple dimensional analysis using the height of cliffs as a proxie for material strength implies that the geometric analogs are good dynamic analogs, and validates that the pattern of both reverse and normal faulting that has been reported consistently from sandbox modeling applies widely to calderas.
Fault detection in digital and analog circuits using an i(DD) temporal analysis technique
NASA Technical Reports Server (NTRS)
Beasley, J.; Magallanes, D.; Vridhagiri, A.; Ramamurthy, Hema; Deyong, Mark
1993-01-01
An i(sub DD) temporal analysis technique which is used to detect defects (faults) and fabrication variations in both digital and analog IC's by pulsing the power supply rails and analyzing the temporal data obtained from the resulting transient rail currents is presented. A simple bias voltage is required for all the inputs, to excite the defects. Data from hardware tests supporting this technique are presented.
Casale, Gabriele; Pratt, Thomas L.
2015-01-01
The Yakima fold and thrust belt (YFTB) deforms the Columbia River Basalt Group flows of Washington State. The YFTB fault geometries and slip rates are crucial parameters for seismic‐hazard assessments of nearby dams and nuclear facilities, yet there are competing models for the subsurface fault geometry involving shallowly rooted versus deeply rooted fault systems. The YFTB is also thought to be analogous to the evenly spaced wrinkle ridges found on other terrestrial planets. Using seismic reflection data, borehole logs, and surface geologic data, we tested two proposed kinematic end‐member thick‐ and thin‐skinned fault models beneath the Saddle Mountains anticline of the YFTB. Observed subsurface geometry can be produced by 600–800 m of heave along a single listric‐reverse fault or ∼3.5 km of slip along two superposed low‐angle thrust faults. Both models require decollement slip between 7 and 9 km depth, resulting in greater fault areas than sometimes assumed in hazard assessments. Both models require initial slip much earlier than previously thought and may provide insight into the subsurface geometry of analogous comparisons to wrinkle ridges observed on other planets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duru, Kenneth, E-mail: kduru@stanford.edu; Dunham, Eric M.; Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA
Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a)more » enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge–Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.« less
NASA Astrophysics Data System (ADS)
Duru, Kenneth; Dunham, Eric M.
2016-01-01
Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.
NASA Astrophysics Data System (ADS)
Lu, Siliang; Zhou, Peng; Wang, Xiaoxian; Liu, Yongbin; Liu, Fang; Zhao, Jiwen
2018-02-01
Wireless sensor networks (WSNs) which consist of miscellaneous sensors are used frequently in monitoring vital equipment. Benefiting from the development of data mining technologies, the massive data generated by sensors facilitate condition monitoring and fault diagnosis. However, too much data increase storage space, energy consumption, and computing resource, which can be considered fatal weaknesses for a WSN with limited resources. This study investigates a new method for motor bearings condition monitoring and fault diagnosis using the undersampled vibration signals acquired from a WSN. The proposed method, which is a fusion of the kurtogram, analog domain bandpass filtering, bandpass sampling, and demodulated resonance technique, can reduce the sampled data length while retaining the monitoring and diagnosis performance. A WSN prototype was designed, and simulations and experiments were conducted to evaluate the effectiveness and efficiency of the proposed method. Experimental results indicated that the sampled data length and transmission time of the proposed method result in a decrease of over 80% in comparison with that of the traditional method. Therefore, the proposed method indicates potential applications on condition monitoring and fault diagnosis of motor bearings installed in remote areas, such as wind farms and offshore platforms.
Reliability modeling of fault-tolerant computer based systems
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1987-01-01
Digital fault-tolerant computer-based systems have become commonplace in military and commercial avionics. These systems hold the promise of increased availability, reliability, and maintainability over conventional analog-based systems through the application of replicated digital computers arranged in fault-tolerant configurations. Three tightly coupled factors of paramount importance, ultimately determining the viability of these systems, are reliability, safety, and profitability. Reliability, the major driver affects virtually every aspect of design, packaging, and field operations, and eventually produces profit for commercial applications or increased national security. However, the utilization of digital computer systems makes the task of producing credible reliability assessment a formidable one for the reliability engineer. The root of the problem lies in the digital computer's unique adaptability to changing requirements, computational power, and ability to test itself efficiently. Addressed here are the nuances of modeling the reliability of systems with large state sizes, in the Markov sense, which result from systems based on replicated redundant hardware and to discuss the modeling of factors which can reduce reliability without concomitant depletion of hardware. Advanced fault-handling models are described and methods of acquiring and measuring parameters for these models are delineated.
NASA Technical Reports Server (NTRS)
Gwaltney, David A.; Ferguson, Michael I.
2003-01-01
Evolvable hardware provides the capability to evolve analog circuits to produce amplifier and filter functions. Conventional analog controller designs employ these same functions. Analog controllers for the control of the shaft speed of a DC motor are evolved on an evolvable hardware platform utilizing a second generation Field Programmable Transistor Array (FPTA2). The performance of an evolved controller is compared to that of a conventional proportional-integral (PI) controller. It is shown that hardware evolution is able to create a compact design that provides good performance, while using considerably less functional electronic components than the conventional design. Additionally, the use of hardware evolution to provide fault tolerance by reconfiguring the design is explored. Experimental results are presented showing that significant recovery of capability can be made in the face of damaging induced faults.
Analysis of fault-tolerant neurocontrol architectures
NASA Technical Reports Server (NTRS)
Troudet, T.; Merrill, W.
1992-01-01
The fault-tolerance of analog parallel distributed implementations of a multivariable aircraft neurocontroller is analyzed by simulating weight and neuron failures in a simplified scheme of analog processing based on the functional architecture of the ETANN chip (Electrically Trainable Artificial Neural Network). The neural information processing is found to be only partially distributed throughout the set of weights of the neurocontroller synthesized with the backpropagation algorithm. Although the degree of distribution of the neural processing, and consequently the fault-tolerance of the neurocontroller, could be enhanced using Locally Distributed Weight and Neuron Approaches, a satisfactory level of fault-tolerance could only be obtained by retraining the degrated VLSI neurocontroller. The possibility of maintaining neurocontrol performance and stability in the presence of single weight of neuron failures was demonstrated through an automated retraining procedure of the neurocontroller based on a pre-programmed choice and sequence of the training parameters.
NASA Astrophysics Data System (ADS)
Lacroix, P.; Perfettini, H.; Taipe, E.; Guillier, B.
2014-10-01
We document the first time series of a landslide reactivation by an earthquake using continuous GPS measurements over the Maca landslide (Peru). Our survey shows a coseismic response of the landslide of about 2 cm, followed by a relaxation period of 5 weeks during which postseismic slip is 3 times greater than the coseismic displacement itself. Our results confirm the coseismic activation of landslides and provide the first observation of a postseismic displacement. These observations are consistent with a mechanical model where slip on the landslide basal interface is governed by rate and state friction, analogous to the mechanics of creeping tectonic faults, opening new perspectives to study the mechanics of landslides and active faults.
Tectonic history of the northern Nabitah fault zone, Arabian Shield, Kingdom of Saudi Arabia
Quick, J.E.; Bosch, Paul S.
1990-01-01
Based on the presence of similar lithologies, similar structure, and analogous tectonic setting, the Mother Lode District in California is reviewed as a model for gold occurrences near the Nabitah fault zone in this report.
Bayarsayhan, C.; Bayasgalan, A.; Enhtuvshin, B.; Hudnut, K.W.; Kurushin, R.A.; Molnar, P.; Olziybat, M.
1996-01-01
The 1957 Gobi-Altay earthquake was associated with both strike-slip and thrust faulting, processes similar to those along the San Andreas fault and the faults bounding the San Gabriel Mountains just north of Los Angeles, California. Clearly, a major rupture either on the San Andreas fault north of Los Angeles or on the thrust faults bounding the Los Angeles basin poses a serious hazard to inhabitants of that area. By analogy with the Gobi-Altay earthquake, we suggest that simultaneous rupturing of both the San Andreas fault and the thrust faults nearer Los Angeles is a real possibility that amplifies the hazard posed by ruptures on either fault system separately.
Analog modeling of the deformation and kinematics of the Calabrian accretionary wedge
NASA Astrophysics Data System (ADS)
Dellong, David; Gutscher, Marc-Andre; Klingelhoefer, Frauke; Graindorge, David; Kopp, Heidrun; Mercier de Lepinay, Bernard; Dominguez, Stephane; Malavieille, Jacques
2017-04-01
The Calabrian accretionary wedge in the Ionian Sea, is the site of slow, deformation related to the overall convergence between Africa and Eurasia and the subduction zone beneath Calabria. High-resolution swath bathymetric data and seismic profiling image a complex network of compressional and strike-slip structures. Major Mesozoic rift structures (Malta Escarpment) are also present and appear to be reactivated in places by normal faulting. Ongoing normal faulting also occurs in the straits of Messina area (1908 M7.2 earthquake). We applied analog modeling using granular materials as well as ductile (silicone) in some experiments. The objective was to test the predictions of certain kinematic models regarding the location and kinematics of a major lateral slab edge tear fault. One experiment, using two independently moving backstops, demonstrates that the relative kinematics of the Calabrian and Peloritan blocks can produce a zone of dextral transtension and subsidence which corresponds well to the asymmetric rift observed in seismic data in the southward prolongation of the straits of Messina faults. However, the expected dextral offset in the deformation front of the accretionary wedge is not observed in bathymetry. In fact sinistral motion is observed along the boundary between two lobes of the accretionary wedge suggesting the dextral motion is absorbed along a network of transcurrent faults within the eastern lobe. Bathymetric and seismic observations indicate that the major dextral boundary along the western boundary of the accretionary wedge is the Alfeo fault system, whose southern termination is the focal point of a striking set of radial slip-lines. Further analog modeling experiments attempted to reproduce these structures, with mixed results.
Zhou, Jingyu; Tian, Shulin; Yang, Chenglin
2014-01-01
Few researches pay attention to prediction about analog circuits. The few methods lack the correlation with circuit analysis during extracting and calculating features so that FI (fault indicator) calculation often lack rationality, thus affecting prognostic performance. To solve the above problem, this paper proposes a novel prediction method about single components of analog circuits based on complex field modeling. Aiming at the feature that faults of single components hold the largest number in analog circuits, the method starts with circuit structure, analyzes transfer function of circuits, and implements complex field modeling. Then, by an established parameter scanning model related to complex field, it analyzes the relationship between parameter variation and degeneration of single components in the model in order to obtain a more reasonable FI feature set via calculation. According to the obtained FI feature set, it establishes a novel model about degeneration trend of analog circuits' single components. At last, it uses particle filter (PF) to update parameters for the model and predicts remaining useful performance (RUP) of analog circuits' single components. Since calculation about the FI feature set is more reasonable, accuracy of prediction is improved to some extent. Finally, the foregoing conclusions are verified by experiments.
Study of a phase-to-ground fault on a 400 kV overhead transmission line
NASA Astrophysics Data System (ADS)
Iagăr, A.; Popa, G. N.; Diniş, C. M.
2018-01-01
Power utilities need to supply their consumers at high power quality level. Because the faults that occur on High-Voltage and Extra-High-Voltage transmission lines can cause serious damages in underlying transmission and distribution systems, it is important to examine each fault in detail. In this work we studied a phase-to-ground fault (on phase 1) of 400 kV overhead transmission line Mintia-Arad. Indactic® 650 fault analyzing system was used to record the history of the fault. Signals (analog and digital) recorded by Indactic® 650 were visualized and analyzed by Focus program. Summary of fault report allowed evaluation of behavior of control and protection equipment and determination of cause and location of the fault.
NASA Astrophysics Data System (ADS)
Okubo, C. H.
2011-12-01
The equatorial layered deposits on Mars exhibit abundant evidence for the sustained presence of groundwater, and therefore insight into past water-related processes may be gained through the study of these deposits. Pyroclastic and evaporitic sediments are two broad lithologies that are known or inferred to comprise these deposits. Investigations into the effects of faulting on fluid flow potential through such Mars analog lithologies have been limited. Thus a study into the effects of faulting on fluid flow pathways through fine-grained pyroclastic sediments has been undertaken, and the results of this study are presented here. Faults and their damage zones can influence the trapping and migration of fluids by acting as either conduits or barriers to fluid flow. In clastic sedimentary rocks, the conductivity of fault damage zones is primarily a function of the microstructure of the host rock, stress history, phyllosilicate content, and cementation. The chemical composition of the host rock influences the mechanical strength of the grains, the susceptibility of the grains to alteration, and the availability of authigenic cements. The spatial distribution of fault-related damage is investigated within the Joe Lott Tuff Member of the Mount Belknap Volcanics, Utah. Damage is characterized by measuring fracture densities along the fault, and by mapping the gas permeability of the surrounding rock. The Joe Lott Tuff is a partially welded, crystal-poor, rhyolite ash-flow tuff of Miocene age. While the rhyolitic chemical composition of the Joe Lott Tuff is not analogous to the basaltic compositions expected for Mars, the mechanical behavior of a poorly indurated mixture of fine-grained glass and pumice is pertinent to understanding the fundamental mechanics of faulting in Martian pyroclastic sediments. Results of mapping around two faults are presented here. The first fault is entirely exposed in cross-section and has a down-dip height of ~10 m. The second fault is partially exposed, with ~21 m visible in cross-section. Both faults have a predominantly normal sense of offset and a minor dextral strike-slip component. The 10 m fault has a single well-defined surface, while the 21 m fault takes the form of a 5-10 cm wide fault core. Fracture density at the 10 m fault is highest near its upper and lower tips, forming distinct near-tip fracture damage zones. At the 21 m fault, fracture density is broadly consistent along the exposed height of the fault, with the highest fracture densities nearest to the fault core. Fracture density is higher in the hanging walls than in the footwalls of both faults, and the footwall of the 21 m fault exhibits m-scale areas of significant distributed cataclasis. Gas permeability has a marked decrease, several orders of magnitude relative to the non-deformed host rock, at 1.5 m on either side of the 10 m fault. Permeability is lowest outboard of the fault's near-tip fracture damage zones. A similar permeability drop occurs at 1-5 m from the center of the 21 m fault's core, with the permeability drop extending furthest from the fault core in the footwall. These findings will be used to improve existing numerical methods for predicting subsurface fluid flow patterns from observed fault geometries on Mars.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Fangzhen; Wang, Huanhuan; Raghothamachar, Balaji
A new method has been developed to determine the fault vectors associated with stacking faults in 4H-SiC from their stacking sequences observed on high resolution TEM images. This method, analogous to the Burgers circuit technique for determination of dislocation Burgers vector, involves determination of the vectors required in the projection of the perfect lattice to correct the deviated path constructed in the faulted material. Results for several different stacking faults were compared with fault vectors determined from X-ray topographic contrast analysis and were found to be consistent. This technique is expected to applicable to all structures comprising corner shared tetrahedra.
Radiation efficiency during slow crack propagation: an experimental study.
NASA Astrophysics Data System (ADS)
Jestin, Camille; Lengliné, Olivier; Schmittbuhl, Jean
2017-04-01
Creeping faults are known to host a significant aseismic deformation. However, the observations of micro-earthquake activity related to creeping faults (e.g. San Andreas Faults, North Anatolian Fault) suggest the presence of strong lateral variabilities of the energy partitioning between radiated and fracture energies. The seismic over aseismic slip ratio is rather difficult to image over time and at depth because of observational limitations (spatial resolution, sufficiently broad band instruments, etc.). In this study, we aim to capture in great details the energy partitioning during the slow propagation of mode I fracture along a heterogeneous interface, where the toughness is strongly varying in space.We lead experiments at laboratory scale on a rock analog model (PMMA) enabling a precise monitoring of fracture pinning and depinning on local asperities in the brittle-creep regime. Indeed, optical imaging through the transparent material allows the high resolution description of the fracture front position and velocity during its propagation. At the same time, acoustic emissions are also measured by accelerometers positioned around the rupture. Combining acoustic records, measurements of the crack front position and the loading curve, we compute the total radiated energy and the fracture energy. We deduce from them the radiation efficiency, ηR, characterizing the proportion of the available energy that is radiated in form of seismic wave. We show an increase of ηR with the crack rupture speed computed for each of our experiments in the sub-critical crack propagation domain. Our experimental estimates of ηR are larger than the theoretical model proposed by Freund, stating that the radiation efficiency of crack propagation in homogeneous media is proportional to the crack velocity. Our results are demonstrated to be in agreement with existing studies which showed that the distribution of crack front velocity in a heterogeneous medium can be well described by a power-law decay function above the average fracture front speed, ⟨v⟩, and then establishing a relation of the type ηR ∝⟨v ⟩0.55. These observations suggest that the radiation efficiency in heterogeneous media is defined by a power law involving a lower exponent value than the one predicted for a homogeneous media, but is sensitive to the shape of the velocity distribution of the heterogeneous interface. Finally, when studying the case of similar events observed in natural conditions, such as seismic swarms associated to slow slip along a fault, we notice a good agreement between our results and the radiation efficiency computed for these field data.
A Novel Prediction Method about Single Components of Analog Circuits Based on Complex Field Modeling
Tian, Shulin; Yang, Chenglin
2014-01-01
Few researches pay attention to prediction about analog circuits. The few methods lack the correlation with circuit analysis during extracting and calculating features so that FI (fault indicator) calculation often lack rationality, thus affecting prognostic performance. To solve the above problem, this paper proposes a novel prediction method about single components of analog circuits based on complex field modeling. Aiming at the feature that faults of single components hold the largest number in analog circuits, the method starts with circuit structure, analyzes transfer function of circuits, and implements complex field modeling. Then, by an established parameter scanning model related to complex field, it analyzes the relationship between parameter variation and degeneration of single components in the model in order to obtain a more reasonable FI feature set via calculation. According to the obtained FI feature set, it establishes a novel model about degeneration trend of analog circuits' single components. At last, it uses particle filter (PF) to update parameters for the model and predicts remaining useful performance (RUP) of analog circuits' single components. Since calculation about the FI feature set is more reasonable, accuracy of prediction is improved to some extent. Finally, the foregoing conclusions are verified by experiments. PMID:25147853
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate the software reliability measures, most of which have the same following agreements: 1) it is a common phenomenon that during the testing phase, the fault detection rate always changes; 2) as a result of imperfect debugging, fault removal has been related to a fault re-introduction rate. But there are few SRGMs in the literature that differentiate between fault detection and fault removal, i.e. they seldom consider the imperfect fault removal efficiency. But in practical software developing process, fault removal efficiency cannot always be perfect, i.e. the failures detected might not be removed completely and the original faults might still exist and new faults might be introduced meanwhile, which is referred to as imperfect debugging phenomenon. In this study, a model aiming to incorporate fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to consider the fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results exhibit that the model can give a better fitting and predictive performance.
Fault Tolerant Characteristics of Artificial Neural Network Electronic Hardware
NASA Technical Reports Server (NTRS)
Zee, Frank
1995-01-01
The fault tolerant characteristics of analog-VLSI artificial neural network (with 32 neurons and 532 synapses) chips are studied by exposing them to high energy electrons, high energy protons, and gamma ionizing radiations under biased and unbiased conditions. The biased chips became nonfunctional after receiving a cumulative dose of less than 20 krads, while the unbiased chips only started to show degradation with a cumulative dose of over 100 krads. As the total radiation dose increased, all the components demonstrated graceful degradation. The analog sigmoidal function of the neuron became steeper (increase in gain), current leakage from the synapses progressively shifted the sigmoidal curve, and the digital memory of the synapses and the memory addressing circuits began to gradually fail. From these radiation experiments, we can learn how to modify certain designs of the neural network electronic hardware without using radiation-hardening techniques to increase its reliability and fault tolerance.
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate the software reliability measures, most of which have the same following agreements: 1) it is a common phenomenon that during the testing phase, the fault detection rate always changes; 2) as a result of imperfect debugging, fault removal has been related to a fault re-introduction rate. But there are few SRGMs in the literature that differentiate between fault detection and fault removal, i.e. they seldom consider the imperfect fault removal efficiency. But in practical software developing process, fault removal efficiency cannot always be perfect, i.e. the failures detected might not be removed completely and the original faults might still exist and new faults might be introduced meanwhile, which is referred to as imperfect debugging phenomenon. In this study, a model aiming to incorporate fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to consider the fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results exhibit that the model can give a better fitting and predictive performance. PMID:28750091
Brune, J.N.; Anooshehpoor, A.; Shi, B.; Zheng, Yen
2004-01-01
Precariously balanced rocks and overturned transformers in the vicinity of the White Wolf fault provide constraints on ground motion during the 1952 Ms 7.7 Kern County earthquake, a possible analog for an anticipated large earthquake in the Los Angeles basin (Shaw et al., 2002; Dolan et al., 2003). On the northeast part of the fault preliminary estimates of ground motion on the footwall give peak accelerations considerably lower than predicted by standard regression curves. On the other hand, on the hanging-wall, there is evidence of intense ground shattering and lack of precarious rocks, consistent with the intense hanging-wall accelerations suggested by foam-rubber modeling, numerical modeling, and observations from previous thrust fault earthquakes. There is clear evidence of the effects of rupture directivity in ground motions on the hanging-wall side of the fault (from both precarious rocks and numerical simulations). On the southwest part of the fault, which is covered by sediments, the thrust fault did not reach the surface ("blind" thrust). Overturned and damaged transformers indicate significant transfer of energy from the hanging wall to the footwall, an effect that may not be as effective when the rupture reaches the surface (is not "blind"). Transformers near the up-dip projection of the fault tip have been damaged or overturned on both the hanging-wall and footwall sides of the fault. The transfer of energy is confirmed in a numerical lattice model and could play an important role in a similar situation in Los Angeles. We suggest that the results of this study can provide important information for estimating the effects of a large thrust fault rupture in the Los Angeles basin, specially given the fact that there is so little instrumental data from large thrust fault earthquakes.
Application of lifting wavelet and random forest in compound fault diagnosis of gearbox
NASA Astrophysics Data System (ADS)
Chen, Tang; Cui, Yulian; Feng, Fuzhou; Wu, Chunzhi
2018-03-01
Aiming at the weakness of compound fault characteristic signals of a gearbox of an armored vehicle and difficult to identify fault types, a fault diagnosis method based on lifting wavelet and random forest is proposed. First of all, this method uses the lifting wavelet transform to decompose the original vibration signal in multi-layers, reconstructs the multi-layer low-frequency and high-frequency components obtained by the decomposition to get multiple component signals. Then the time-domain feature parameters are obtained for each component signal to form multiple feature vectors, which is input into the random forest pattern recognition classifier to determine the compound fault type. Finally, a variety of compound fault data of the gearbox fault analog test platform are verified, the results show that the recognition accuracy of the fault diagnosis method combined with the lifting wavelet and the random forest is up to 99.99%.
CNN universal machine as classificaton platform: an art-like clustering algorithm.
Bálya, David
2003-12-01
Fast and robust classification of feature vectors is a crucial task in a number of real-time systems. A cellular neural/nonlinear network universal machine (CNN-UM) can be very efficient as a feature detector. The next step is to post-process the results for object recognition. This paper shows how a robust classification scheme based on adaptive resonance theory (ART) can be mapped to the CNN-UM. Moreover, this mapping is general enough to include different types of feed-forward neural networks. The designed analogic CNN algorithm is capable of classifying the extracted feature vectors keeping the advantages of the ART networks, such as robust, plastic and fault-tolerant behaviors. An analogic algorithm is presented for unsupervised classification with tunable sensitivity and automatic new class creation. The algorithm is extended for supervised classification. The presented binary feature vector classification is implemented on the existing standard CNN-UM chips for fast classification. The experimental evaluation shows promising performance after 100% accuracy on the training set.
Long term fault system reorganization of convergent and strike-slip systems
NASA Astrophysics Data System (ADS)
Cooke, M. L.; McBeck, J.; Hatem, A. E.; Toeneboehn, K.; Beyer, J. L.
2017-12-01
Laboratory and numerical experiments representing deformation over many earthquake cycles demonstrate that fault evolution includes episodes of fault reorganization that optimize work on the fault system. Consequently, the mechanical and kinematic efficiencies of fault systems do not increase monotonically through their evolution. New fault configurations can optimize the external work required to accommodate deformation, suggesting that changes in system efficiency can drive fault reorganization. Laboratory evidence and numerical results show that fault reorganization within accretion, strike-slip and oblique convergent systems is associated with increasing efficiency due to increased fault slip (frictional work and seismic energy) and commensurate decreased off-fault deformation (internal work and work against gravity). Between episodes of fault reorganization, fault systems may become less efficient as they produce increasing off fault deformation. For example, laboratory and numerical experiments show that the interference and interaction between different fault segments may increase local internal work or that increasing convergence can increase work against gravity produced by a fault system. This accumulation of work triggers fault reorganization as stored work provides the energy required to grow new faults that reorganize the system to a more efficient configuration. The results of laboratory and numerical experiments reveal that we should expect crustal fault systems to reorganize following periods of increasing inefficiency, even in the absence of changes to the tectonic regime. In other words, fault reorganization doesn't require a change in tectonic loading. The time frame of fault reorganization depends on fault system configuration, strain rate and processes that relax stresses within the crust. For example, stress relaxation may keep pace with stress accumulation, which would limit the increase in the internal work and gravitational work so that irregularities can persist along active fault systems without reorganization of the fault system. Consequently, steady state behavior, for example with constant fault slip rates, may arise either in systems with high degree of stress-relaxation or occur only within the intervals between episodes of fault reorganization.
Modeling of a latent fault detector in a digital system
NASA Technical Reports Server (NTRS)
Nagel, P. M.
1978-01-01
Methods of modeling the detection time or latency period of a hardware fault in a digital system are proposed that explain how a computer detects faults in a computational mode. The objectives were to study how software reacts to a fault, to account for as many variables as possible affecting detection and to forecast a given program's detecting ability prior to computation. A series of experiments were conducted on a small emulated microprocessor with fault injection capability. Results indicate that the detecting capability of a program largely depends on the instruction subset used during computation and the frequency of its use and has little direct dependence on such variables as fault mode, number set, degree of branching and program length. A model is discussed which employs an analog with balls in an urn to explain the rate of which subsequent repetitions of an instruction or instruction set detect a given fault.
Validation techniques for fault emulation of SRAM-based FPGAs
Quinn, Heather; Wirthlin, Michael
2015-08-07
A variety of fault emulation systems have been created to study the effect of single-event effects (SEEs) in static random access memory (SRAM) based field-programmable gate arrays (FPGAs). These systems are useful for augmenting radiation-hardness assurance (RHA) methodologies for verifying the effectiveness for mitigation techniques; understanding error signatures and failure modes in FPGAs; and failure rate estimation. For radiation effects researchers, it is important that these systems properly emulate how SEEs manifest in FPGAs. If the fault emulation systems does not mimic the radiation environment, the system will generate erroneous data and incorrect predictions of behavior of the FPGA inmore » a radiation environment. Validation determines whether the emulated faults are reasonable analogs to the radiation-induced faults. In this study we present methods for validating fault emulation systems and provide several examples of validated FPGA fault emulation systems.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Heather; Wirthlin, Michael
A variety of fault emulation systems have been created to study the effect of single-event effects (SEEs) in static random access memory (SRAM) based field-programmable gate arrays (FPGAs). These systems are useful for augmenting radiation-hardness assurance (RHA) methodologies for verifying the effectiveness for mitigation techniques; understanding error signatures and failure modes in FPGAs; and failure rate estimation. For radiation effects researchers, it is important that these systems properly emulate how SEEs manifest in FPGAs. If the fault emulation systems does not mimic the radiation environment, the system will generate erroneous data and incorrect predictions of behavior of the FPGA inmore » a radiation environment. Validation determines whether the emulated faults are reasonable analogs to the radiation-induced faults. In this study we present methods for validating fault emulation systems and provide several examples of validated FPGA fault emulation systems.« less
NASA Astrophysics Data System (ADS)
Okubo, C. H.
2014-12-01
In order to establish a foundation for studies of faulting in Martian rocks and soils in volcanic terrain, the distribution of brittle strain around faults within the North Menan Butte Tuff in the eastern Snake River Plain, Idaho and the Joe Lott Tuff Member of the Mount Belknap Volcanics, Utah, has been recently described. These studies employed a combination of macroscopic and microscopic observations, including measurements of in situ permeability as a proxy for non-localized brittle deformation of the host rock. In areas where the tuff retained its primary granular nature at the time of deformation, initial plastic yielding in both tuffs occurred along deformation bands. Both compactional and dilational types of deformation bands were observed, and faulting occurred along clusters of deformation bands. Where secondary alteration processes imparted a massive texture to the tuff, brittle deformation was accommodated along fractures. Host-rock permeability exhibits little variation from non-deformed values in the North Menan Butte Tuff, whereas host rock permeability is reduced by roughly an order of magnitude through compaction alone (no alteration) in the Joe Lott Tuff. To create a bridge between these observations in tuff and the more substantial body of work centered on deformation band formation and faulting in quartz-rich sandstones, the same techniques employed in the North Menan Butte Tuff and the Joe Lott Tuff have also been applied to a kilometer-scale fault in the Jurassic Navajo Sandstone in the Waterpocket Fold, Utah. These observations demonstrate that the manifestation of strain and evolution of faulting in the Mars-analog tuffs are comparable to that in quartz-rich sandstones. Therefore, current understanding of brittle deformation in quartz-rich sandstones can be used to inform investigations into fault growth within porous tuffs on Mars. A discussion of these observations, practical limitations, and directions for future work are presented here.
Blakely, R.J.; John, D.A.; Box, S.E.; Berger, B.R.; Fleck, R.J.; Ashley, R.P.; Newport, G.R.; Heinemeyer, G.R.
2007-01-01
The White River altered area, Washington, and the Goldfield mining district, Nevada, are nearly contemporaneous Tertiary (ca.20 Ma) calc-alkaline igneous centers with large exposures of shallow (<1 km depth) magmatic-hydrothermal, acid-sulfate alteration. Goldfield is the largest known high-sulfidation gold deposit in North America. At White River, silica is the only commodity exploited to date, but, based on its similarities with Goldfield, White River may have potential for concealed precious and/or base metal deposits at shallow depth. Both areas are products of the ancestral Cascade arc Goldfield lies within the Great Basin physiographic province in an area of middle Miocene and younger Basin and Range and Walker Lane faulting, whereas White River is largely unaffected by young faults. However, west-northwest-striking magnetic anomalies at White River do correspond with mapped faults synchronous with magmatism, and other linear anomalies may reflect contemporaneous concealed faults. The White River altered area lies immediately south of the west-northwest-striking White River fault zone and north of a postulated fault with similar orientation. Structural data from the White River altered area indicate that alteration developed synchronously with an anomalous stress field conducive to left-lateral, strike-slip displacement on west-north-west-striking faults. Thus, the White River alteration may have developed in a transient transtensional region between the two strike-slip faults, analogous to models proposed for Goldfield and other mineral deposits in transverse deformational zones. Gravity and magnetic anomalies provide evidence for a pluton beneath the White River altered area that may have provided heat and fluids to overlying volcanic rocks. East- to east- northeast-striking extensional faults and/or fracture zones in the step-over region, also expressed in magnetic anomalies, may have tapped this intrusion and provided vertical and lateral transport of fluids to now silicified areas. By analogy to Goldfield, geophysical anomalies at the White River altered area may serve as proxies for geologic mapping in identifying faults, fractures, and intrusions relevant to hydrothermal alteration and ore formation in areas of poor exposure. ?? 2006 Geological Society of America.
Cunningham, Kevin J.; Walker, Cameron; Westcott, Richard L.
2012-01-01
Approximately 210 km of near-surface, high-frequency, marine seismic-reflection data were acquired on the southeastern part of the Florida Platform between 2007 and 2011. Many high-resolution, seismic-reflection profiles, interpretable to a depth of about 730 m, were collected on the shallow-marine shelf of southeastern Florida in water as shallow as 1 m. Landward of the present-day shelf-margin slope, these data image middle Eocene to Pleistocene strata and Paleocene to Pleistocene strata on the Miami Terrace. This high-resolution data set provides an opportunity to evaluate geologic structures that cut across confining units of the Paleocene to Oligocene-age carbonate rocks that form the Floridan aquifer system.Seismic profiles image two structural systems, tectonic faults and karst collapse structures, which breach confining beds in the Floridan aquifer system. Both structural systems may serve as pathways for vertical groundwater flow across relatively low-permeability carbonate strata that separate zones of regionally extensive high-permeability rocks in the Floridan aquifer system. The tectonic faults occur as normal and reverse faults, and collapse-related faults have normal throw. The most common fault occurrence delineated on the reflection profiles is associated with karst collapse structures. These high-frequency seismic data are providing high quality structural analogs to unprecedented depths on the southeastern Florida Platform. The analogs can be used for assessment of confinement of other carbonate aquifers and the sealing potential of deeper carbonate rocks associated with reservoirs around the world.
NASA Astrophysics Data System (ADS)
Strak, V.; Dominguez, S.; Petit, C.; Meyer, B.; Loget, N.
2013-12-01
Relief evolution in active tectonic areas is controlled by the interactions between tectonics and surface processes (erosion, transport and sedimentation). These interactions lead to the formation of geomorphologic markers that remain stable during the equilibrium reached in the long-term between tectonics and erosion. In regions experiencing active extension, drainage basins and faceted spurs (triangular facets) are such long-lived morphologic markers and they can help in quantifying the competing effects between tectonics, erosion and sedimentation. We performed analog and numerical models simulating the morphologic evolution of a mountain range bounded by a normal fault. In each approach we imposed identical initial conditions. We carried out several models by varying the fault slip rate (V) and keeping a constant rainfall rate allowing us to study the effect of V on morphology. Both approaches highlight the main control of V on the topographic evolution of the footwall. The experimental approach shows that V controls erosion rates (incision rate, erosion rate of slopes and regressive erosion rate) and possibly the height of triangular facets. This approach indicates likewise that the parameter K of the stream power law depends on V even for non-equilibrium topography. The numerical approach corroborates the control of V on erosion rates and facet height. It also shows a correlation between the shape of drainage basins and V (slope-area relationship) and it suggests the same for the parameters of the stream power law. Therefore both approaches suggest the possibility of using the height of triangular facets and the slope-area relationship to infer the fault slip rate of normal faults situated in a given climatic context.
NASA Astrophysics Data System (ADS)
Madden, E. H.; McBeck, J.; Cooke, M. L.
2013-12-01
Over multiple earthquake cycles, strike-slip faults link to form through-going structures, as demonstrated by the continuous nature of the mature San Andreas fault system in California relative to the younger and more segmented San Jacinto fault system nearby. Despite its immaturity, the San Jacinto system accommodates between one third and one half of the slip along the boundary between the North American and Pacific plates. It therefore poses a significant seismic threat to southern California. Better understanding of how the San Jacinto system has evolved over geologic time and of current interactions between faults within the system is critical to assessing this seismic hazard accurately. Numerical models are well suited to simulating kilometer-scale processes, but models of fault system development are challenged by the multiple physical mechanisms involved. For example, laboratory experiments on brittle materials show that faults propagate and eventually join (hard-linkage) by both opening-mode and shear failure. In addition, faults interact prior to linkage through stress transfer (soft-linkage). The new algorithm GROW (GRowth by Optimization of Work) accounts for this complex array of behaviors by taking a global approach to fault propagation while adhering to the principals of linear elastic fracture mechanics. This makes GROW a powerful tool for studying fault interactions and fault system development over geologic time. In GROW, faults evolve to minimize the work (or energy) expended during deformation, thereby maximizing the mechanical efficiency of the entire system. Furthermore, the incorporation of both static and dynamic friction allows GROW models to capture fault slip and fault propagation in single earthquakes as well as over consecutive earthquake cycles. GROW models with idealized faults reveal that the initial fault spacing and the applied stress orientation control fault linkage propensity and linkage patterns. These models allow the gains in efficiency provided by both hard-linkage and soft-linkage to be quantified and compared. Specialized models of interactions over the past 1 Ma between the Clark and Coyote Creek faults within the San Jacinto system reveal increasing mechanical efficiency as these fault structures change over time. Alongside this increasing efficiency is an increasing likelihood for single, larger earthquakes that rupture multiple fault segments. These models reinforce the sensitivity of mechanical efficiency to both fault structure and the regional tectonic stress orientation controlled by plate motions and provide insight into how slip may have been partitioned between the San Andreas and San Jacinto systems over the past 1 Ma.
Physical fault tolerance of nanoelectronics.
Szkopek, Thomas; Roychowdhury, Vwani P; Antoniadis, Dimitri A; Damoulakis, John N
2011-04-29
The error rate in complementary transistor circuits is suppressed exponentially in electron number, arising from an intrinsic physical implementation of fault-tolerant error correction. Contrariwise, explicit assembly of gates into the most efficient known fault-tolerant architecture is characterized by a subexponential suppression of error rate with electron number, and incurs significant overhead in wiring and complexity. We conclude that it is more efficient to prevent logical errors with physical fault tolerance than to correct logical errors with fault-tolerant architecture.
Technical Basis for Evaluating Software-Related Common-Cause Failures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muhlheim, Michael David; Wood, Richard
2016-04-01
The instrumentation and control (I&C) system architecture at a nuclear power plant (NPP) incorporates protections against common-cause failures (CCFs) through the use of diversity and defense-in-depth. Even for well-established analog-based I&C system designs, the potential for CCFs of multiple systems (or redundancies within a system) constitutes a credible threat to defeating the defense-in-depth provisions within the I&C system architectures. The integration of digital technologies into the I&C systems provides many advantages compared to the aging analog systems with respect to reliability, maintenance, operability, and cost effectiveness. However, maintaining the diversity and defense-in-depth for both the hardware and software within themore » digital system is challenging. In fact, the introduction of digital technologies may actually increase the potential for CCF vulnerabilities because of the introduction of undetected systematic faults. These systematic faults are defined as a “design fault located in a software component” and at a high level, are predominately the result of (1) errors in the requirement specification, (2) inadequate provisions to account for design limits (e.g., environmental stress), or (3) technical faults incorporated in the internal system (or architectural) design or implementation. Other technology-neutral CCF concerns include hardware design errors, equipment qualification deficiencies, installation or maintenance errors, instrument loop scaling and setpoint mistakes.« less
Characteristics of Fault Zones in Volcanic Rocks Near Yucca Flat, Nevada Test Site, Nevada
Sweetkind, Donald S.; Drake II, Ronald M.
2007-01-01
During 2005 and 2006, the USGS conducted geological studies of fault zones at surface outcrops at the Nevada Test Site. The objectives of these studies were to characterize fault geometry, identify the presence of fault splays, and understand the width and internal architecture of fault zones. Geologic investigations were conducted at surface exposures in upland areas adjacent to Yucca Flat, a basin in the northeastern part of the Nevada Test Site; these data serve as control points for the interpretation of the subsurface data collected at Yucca Flat by other USGS scientists. Fault zones in volcanic rocks near Yucca Flat differ in character and width as a result of differences in the degree of welding and alteration of the protolith, and amount of fault offset. Fault-related damage zones tend to scale with fault offset; damage zones associated with large-offset faults (>100 m) are many tens of meters wide, whereas damage zones associated with smaller-offset faults are generally a only a meter or two wide. Zeolitically-altered tuff develops moderate-sized damage zones whereas vitric nonwelded, bedded and airfall tuff have very minor damage zones, often consisting of the fault zone itself as a deformation band, with minor fault effect to the surrounding rock mass. These differences in fault geometry and fault zone architecture in surface analog sites can serve as a guide toward interpretation of high-resolution subsurface geophysical results from Yucca Flat.
Kinematics of the New Madrid seismic zone, central United States, based on stepover models
Pratt, Thomas L.
2012-01-01
Seismicity in the New Madrid seismic zone (NMSZ) of the central United States is generally attributed to a stepover structure in which the Reelfoot thrust fault transfers slip between parallel strike-slip faults. However, some arms of the seismic zone do not fit this simple model. Comparison of the NMSZ with an analog sandbox model of a restraining stepover structure explains all of the arms of seismicity as only part of the extensive pattern of faults that characterizes stepover structures. Computer models show that the stepover structure may form because differences in the trends of lower crustal shearing and inherited upper crustal faults make a step between en echelon fault segments the easiest path for slip in the upper crust. The models predict that the modern seismicity occurs only on a subset of the faults in the New Madrid stepover structure, that only the southern part of the stepover structure ruptured in the A.D. 1811–1812 earthquakes, and that the stepover formed because the trends of older faults are not the same as the current direction of shearing.
Fault tolerant operation of switched reluctance machine
NASA Astrophysics Data System (ADS)
Wang, Wei
The energy crisis and environmental challenges have driven industry towards more energy efficient solutions. With nearly 60% of electricity consumed by various electric machines in industry sector, advancement in the efficiency of the electric drive system is of vital importance. Adjustable speed drive system (ASDS) provides excellent speed regulation and dynamic performance as well as dramatically improved system efficiency compared with conventional motors without electronics drives. Industry has witnessed tremendous grow in ASDS applications not only as a driving force but also as an electric auxiliary system for replacing bulky and low efficiency auxiliary hydraulic and mechanical systems. With the vast penetration of ASDS, its fault tolerant operation capability is more widely recognized as an important feature of drive performance especially for aerospace, automotive applications and other industrial drive applications demanding high reliability. The Switched Reluctance Machine (SRM), a low cost, highly reliable electric machine with fault tolerant operation capability, has drawn substantial attention in the past three decades. Nevertheless, SRM is not free of fault. Certain faults such as converter faults, sensor faults, winding shorts, eccentricity and position sensor faults are commonly shared among all ASDS. In this dissertation, a thorough understanding of various faults and their influence on transient and steady state performance of SRM is developed via simulation and experimental study, providing necessary knowledge for fault detection and post fault management. Lumped parameter models are established for fast real time simulation and drive control. Based on the behavior of the faults, a fault detection scheme is developed for the purpose of fast and reliable fault diagnosis. In order to improve the SRM power and torque capacity under faults, the maximum torque per ampere excitation are conceptualized and validated through theoretical analysis and experiments. With the proposed optimal waveform, torque production is greatly improved under the same Root Mean Square (RMS) current constraint. Additionally, position sensorless operation methods under phase faults are investigated to account for the combination of physical position sensor and phase winding faults. A comprehensive solution for position sensorless operation under single and multiple phases fault are proposed and validated through experiments. Continuous position sensorless operation with seamless transition between various numbers of phase fault is achieved.
Integration of magnetic bearings in the design of advanced gas turbine engines
NASA Technical Reports Server (NTRS)
Storace, Albert F.; Sood, Devendra K.; Lyons, James P.; Preston, Mark A.
1994-01-01
Active magnetic bearings provide revolutionary advantages for gas turbine engine rotor support. These advantages include tremendously improved vibration and stability characteristics, reduced power loss, improved reliability, fault-tolerance, and greatly extended bearing service life. The marriage of these advantages with innovative structural network design and advanced materials utilization will permit major increases in thrust to weight performance and structural efficiency for future gas turbine engines. However, obtaining the maximum payoff requires two key ingredients. The first key ingredient is the use of modern magnetic bearing technologies such as innovative digital control techniques, high-density power electronics, high-density magnetic actuators, fault-tolerant system architecture, and electronic (sensorless) position estimation. This paper describes these technologies. The second key ingredient is to go beyond the simple replacement of rolling element bearings with magnetic bearings by incorporating magnetic bearings as an integral part of the overall engine design. This is analogous to the proper approach to designing with composites, whereby the designer tailors the geometry and load carrying function of the structural system or component for the composite instead of simply substituting composites in a design originally intended for metal material. This paper describes methodologies for the design integration of magnetic bearings in gas turbine engines.
Glass Microbeads in Analog Models of Thrust Wedges.
D'Angelo, Taynara; Gomes, Caroline J S
2017-01-01
Glass microbeads are frequently used in analog physical modeling to simulate weak detachment zones but have been neglected in models of thrust wedges. Microbeads differ from quartz sand in grain shape and in low angle of internal friction. In this study, we compared the structural characteristics of microbeads and sand wedges. To obtain a better picture of their mechanical behavior, we determined the physical and frictional properties of microbeads using polarizing and scanning electron microscopy and ring-shear tests, respectively. We built shortening experiments with different basal frictions and measured the thickness, slope and length of the wedges and also the fault spacings. All the microbeads experiments revealed wedge geometries that were consistent with previous studies that have been performed with sand. However, the deformation features in the microbeads shortened over low to intermediate basal frictions were slightly different. Microbeads produced different fault geometries than sand as well as a different grain flow. In addition, they produced slip on minor faults, which was associated with distributed deformation and gave the microbeads wedges the appearance of disharmonic folds. We concluded that the glass microbeads may be used to simulate relatively competent rocks, like carbonates, which may be characterized by small-scale deformation features.
Extensional faulting in the southern Klamath Mountains, California
Schweickert, R.A.; Irwin, W.P.
1989-01-01
Large northeast striking normal faults in the southern Klamath Mountains may indicate that substantial crustal extension occurred during Tertiary time. Some of these faults form grabens in the Jurassic and older bedrock of the province. The grabens contain continental Oligocene or Miocene deposits (Weaverville Formation), and in two of them the Oligocene or Miocene is underlain by Lower Cretaceous marine formations (Great Valley sequence). At the La Grange gold placer mine the Oligocene or Miocene strata dip northwest into the gently southeast dipping mylonitic footwall surface of the La Grange fault. The large normal displacement required by the relations at the La Grange mine is also suggested by omission of several kilometers of structural thickness of bedrock units across the northeast continuation of the La Grange fault, as well as by significant changes in bedrock across some northeast striking faults elsewhere in the Central Metamorphic and Eastern Klamath belts. The Trinity ultramafic sheet crops out in the Eastern Klamath terrane as part of a broad northeast trending arch that may be structurally analogous to the domed lower plate of metamorphic core complexes found in eastern parts of the Cordillera. The northeast continuation of the La Grange fault bounds the southeastern side of the Trinity arch in the Eastern Klamath terrane and locally cuts out substantial lower parts of adjacent Paleozoic strata of the Redding section. Faults bounding the northwestem side of the Trinity arch generally trend northeast and juxtapose stacked thrust sheets of lower Paleozoic strata of the Yreka terrane against the Trinity ultramafic sheet. Geometric relations suggest that the Tertiary extension of the southern Klamath Mountains was in NW-SE directions and that the Redding section and the southern part of the Central Metamorphic terrane may be a large Tertiary allochthon detached from the Trinity ultramafic sheet. Paleomagnetic data indicate a lack of rotation about a vertical axis during the extension. We propose that the Trinity ultramafic sheet is structurally analogous to a metamorphic core complex; if so, it is the first core complex to be described that involves ultramafic rocks. We infer that Mesozoic terrane accretion produced a large gravitational instability in the crust that spread laterally during Tertiary extension
Delivery and application of precise timing for a traveling wave powerline fault locator system
NASA Technical Reports Server (NTRS)
Street, Michael A.
1990-01-01
The Bonneville Power Administration (BPA) has successfully operated an in-house developed powerline fault locator system since 1986. The BPA fault locator system consists of remotes installed at cardinal power transmission line system nodes and a central master which polls the remotes for traveling wave time-of-arrival data. A power line fault produces a fast rise-time traveling wave which emanates from the fault point and propagates throughout the power grid. The remotes time-tag the traveling wave leading edge as it passes through the power system cardinal substation nodes. A synchronizing pulse transmitted via the BPA analog microwave system on a wideband channel sychronizes the time-tagging counters in the remote units to a different accuracy of better than one microsecond. The remote units correct the raw time tags for synchronizing pulse propagation delay and return these corrected values to the fault locator master. The master then calculates the power system disturbance source using the collected time tags. The system design objective is a fault location accuracy of 300 meters. BPA's fault locator system operation, error producing phenomena, and method of distributing precise timing are described.
Interface For Fault-Tolerant Control System
NASA Technical Reports Server (NTRS)
Shaver, Charles; Williamson, Michael
1989-01-01
Interface unit and controller emulator developed for research on electronic helicopter-flight-control systems equipped with artificial intelligence. Interface unit interrupt-driven system designed to link microprocessor-based, quadruply-redundant, asynchronous, ultra-reliable, fault-tolerant control system (controller) with electronic servocontrol unit that controls set of hydraulic actuators. Receives digital feedforward messages from, and transmits digital feedback messages to, controller through differential signal lines or fiber-optic cables (thus far only differential signal lines have been used). Analog signals transmitted to and from servocontrol unit via coaxial cables.
Effects of Bounded Fault on Seismic Radiation and Rupture Propagation
NASA Astrophysics Data System (ADS)
Weng, H.; Yang, H.
2016-12-01
It has been suggested that narrow rectangle fault may emit stopping phases that can largely affect seismic radiation and thus rupture propagation, e.g., generation of short-duration pulse-like ruptures. Here we investigate the effects of narrow along-dip rectangle fault (analogously to 2015 Nepal earthquake with 200 km * 40 km) on seismic radiation and rupture propagation through numerical modeling in the framework of the linear slip-weakening friction law. First, we found the critical slip-weakening distance Dc may largely affect the seismic radiation and other source parameters, such as rupture speed, final slip and stress drop. Fixing all other uniform parameters, decreasing Dc could decrease the duration time of slip rate and increase the peak slip rate, thus increase the seismic radiation energy spectrum of slip acceleration. In addition, smaller Dc could lead to larger rupture speed (close to S wave velocity), but smaller stress drop and final slip. The results show that Dc may control the efficiency of far-field radiation. Furthermore, the duration time of slip rate at locations close to boundaries is 1.5 - 4 s less than that in the center of the fault. Such boundary effect is especially remarkable for smaller Dc due to the smaller average duration time of slip rate, which could increase the high-frequency radiation energy and impede low-frequency component near the boundaries from the analysis of energy spectrum of slip acceleration. These results show high frequency energy tends to be radiated near the fault boundaries as long as Dc is small enough. In addition, ruptures are fragile and easy to self-arrest if the width of the seismogenic zone is very narrow. In other words, the sizes of nucleation zone need to be larger to initiate runaway ruptures. Our results show the critical sizes of nucleation zones increase as the widths of seismogenic zones decrease.
How do horizontal, frictional discontinuities affect reverse fault-propagation folding?
NASA Astrophysics Data System (ADS)
Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio
2017-09-01
The development of new reverse faults and related folds is strongly controlled by the mechanical characteristics of the host rocks. In this study we analyze the impact of a specific kind of anisotropy, i.e. thin mechanical and frictional discontinuities, in affecting the development of reverse faults and of the associated folds using physical scaled models. We perform analog modeling introducing one or two initially horizontal, thin discontinuities above an initially blind fault dipping at 30° in one case, and 45° in another, and then compare the results with those obtained from a fully isotropic model. The experimental results show that the occurrence of thin discontinuities affects both the development and the propagation of new faults and the shape of the associated folds. New faults 1) accelerate or decelerate their propagation depending on the location of the tips with respect to the discontinuities, 2) cross the discontinuities at a characteristic angle (∼90°), and 3) produce folds with different shapes, resulting not only from the dip of the new faults but also from their non-linear propagation history. Our results may have direct impact on future kinematic models, especially those aimed to reconstruct the tectonic history of faults that developed in layered rocks or in regions affected by pre-existing faults.
Volcanism/tectonics working group summary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovach, L.A.
1995-09-01
This article is a summary of the proceedings of a group discussion which took place at the Workshop on the Role of Natural Analogs in Geologic Disposal of High-Level Nuclear Waste in San Antonio, Texas on July 22-25, 1991. The working group concentrated on the subject of the impacts of earthquakes, fault rupture, and volcanic eruption on the underground repository disposal of high-level radioactive wastes. The tectonics and seismic history of the Yucca Mountain site in Nevada is discussed and geologic analogs to that site are described.
NASA Astrophysics Data System (ADS)
Aydin, Orhun; Caers, Jef Karel
2017-08-01
Faults are one of the building-blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and interpreter's intuition pertaining to fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods that address specific sources of fault network uncertainty and complexities of fault modeling exists, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov Chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from Nankai Trough & Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone similar tectonics compared to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show the proposed methodology generates realistic fault network models conditioned to data and a conceptual model of the underlying tectonics.
NASA Astrophysics Data System (ADS)
Roma, Maria; Vidal-Royo, Oskar; McClay, Ken; Ferrer, Oriol; Muñoz, Josep Anton
2017-04-01
The formation of hagingwall syncline basins is basically constrained by the geometry of the basement-involved fault, but also by salt distribution . The formation of such basins is common around the Iberian Peninsula (e.g. Lusitanian, Parentis, Basque-Cantabian, Cameros and Organyà basins) where Upper Triassic (Keuper) salt governed their polyphasic Mesozoic extension and their subsequent Alpine inversion. In this scenario, a precise interpretation of the sub-salt faults geometry and a reconstruction of the initial salt thickness are key to understand the kinematic evolution of such basins. Using an experimental approach (sandbox models) and these Mesozoic basins as natural analogues, the aim of this work is to: 1) investigate the main parameters that controlled the formation and evolution of hagingwall syncline basins analyzing the role of syn-kinematic salt during extension and subsequent inversion; and 2) quantify the deformation and salt mobilization based on restoration of analog model cross sections. The experimental results demonstrate that premature welds are developed by salt deflation with consequent upward propagation of the basal fault in salt-bearing rift systems with a large amount of extension,. In contrast, thicker salt inhibits the upward fault propagation, which results into a further salt migration and development of a hagingwall syncline basins flanked by salt walls. The inherited extensional architecture as well as salt continuity dramatically controlled subsequent inversion. Shortening initially produced the folding and the uplift of the synclinal basins. Minor reverse faults form as a consequence of overtightening of welded diapir stems. However, no trace of reverse faulting is found around diapirs stems, as ductile unit is still available for extrusion, squeezing and accommodation of shortening. Restoration of the sandbox models has demonstrated that this is a powerful tool to unravel the complex structures in the models and this may similarly be applied to the seismic interpretation of the natural complex salt structures.
MHDL CAD tool with fault circuit handling
NASA Astrophysics Data System (ADS)
Espinosa Flores-Verdad, Guillermo; Altamirano Robles, Leopoldo; Osorio Roque, Leticia
2003-04-01
Behavioral modeling and simulation, with Analog Hardware and Mixed Signal Description High Level Languages (MHDLs), have generated the development of diverse simulation tools that allow handling the requirements of the modern designs. These systems have million of transistors embedded and they are radically diverse between them. This tendency of simulation tools is exemplified by the development of languages for modeling and simulation, whose applications are the re-use of complete systems, construction of virtual prototypes, realization of test and synthesis. This paper presents the general architecture of a Mixed Hardware Description Language, based on the standard 1076.1-1999 IEEE VHDL Analog and Mixed-Signal Extensions known as VHDL-AMS. This architecture is novel by consider the modeling and simulation of faults. The main modules of the CAD tool are briefly described in order to establish the information flow and its transformations, starting from the description of a circuit model, going throw the lexical analysis, mathematical models generation and the simulation core, ending at the collection of the circuit behavior as simulation"s data. In addition, the incorporated mechanisms to the simulation core are explained in order to realize the handling of faults into the circuit models. Currently, the CAD tool works with algebraic and differential descriptions for the circuit models, nevertheless the language design is open to be able to handle different model types: Fuzzy Models, Differentials Equations, Transfer Functions and Tables. This applies for fault models too, in this sense the CAD tool considers the inclusion of mutants and saboteurs. To exemplified the results obtained until now, the simulated behavior of a circuit is shown when it is fault free and when it has been modified by the inclusion of a fault as a mutant or a saboteur. The obtained results allow the realization of a virtual diagnosis for mixed circuits. This language works in a UNIX system; it was developed with an object-oriented methodology and programmed in C++.
Current Seismicity in the Vicinity of Yucca Mountain, Nevada
NASA Astrophysics Data System (ADS)
Smith, K.; von Seggern, D.; dePolo, D.
2001-12-01
The 1992 to 2000 earthquakes in the Southern Great Basin have been relocated in order to better recognize the active tectonic processes in the vicinity of Yucca Mountain. During this time period seismic monitoring in the Southern Great Basin transitioned from a primarily single-component analog network to a 3-component digital network. Through the transition analog and digital networks were run in tandem. The station density over this period is as great as any prior recording period. The analog and digital networks were administered separately during the transition, and we have merged the phase data from the two operations. We performed relocations starting in October 1992, thus creating a hypocentral list for FY1993-FY2000. Aftershocks of the June 1992 M 5.6 Little Skull Mountain earthquake, located approximately 20 km southeast of Yucca Mountain, dominate the seismicity in the Southern Great Basin from 1992-2000. After the Little Skull Mountain earthquake, there was a general increase in earthquake activity in southern NTS, principally associated with the Rock Valley fault zone. There was no corresponding increase in seismicity west of Little Skull Mountain near the potential repository site. The distribution of high-quality earthquake locations generally reflects trends in Miocene tectonism. In particular, a general north-south trending gravity low, interpreted by Carr (1984) as the Kawich-Greenwater Rift, is highlighted by the microseismicity in many areas. Locally small magnitude earthquakes tend to outline the 8-10 Ma Timber Mountain caldera in northern and central NTS. Although these structures do not generally correlate with Quaternary faults, the micro-earthquake activity may reflect zones of weakness within these older structures. A 100 km long, conspicuous, north-south trending seismic zone, which shows no correlation with know Quaternary features, aligns along the steep gravity gradient bordering the western side of the Kawich-Greenwater gravity structure. This apparently is an indication that at least some of the seismicity near Yucca Mountain is driven by density contrasts in the lower crust or upper mantle as well as by low regional tectonic strain rates. Overall, the seismicity near Yucca Mountain is low compared to other areas of the southern Great Basin and to the west in the Eastern California Shear Zone. We have calculated the Coulomb stress changes on Yucca Mountain area faults due to large (M > 7) faulting events on the Furnace Creek Fault Zone and interpreted this result in terms of the implications for understanding the distribution of the current seismicity. Because of the significant difference in the Quaternary geologic slip rates between the Furnace Creek and Yucca Mountain area faults (a factor of 250-500) and the stress modeling results, we investigate the hypothesis that the Furnace Creek and Death Valley faults act to decrease the long-term recurrence rate for normal faulting events in the Yucca Mountain block.
Ji, C.; Helmberger, D.V.; Wald, D.J.
2004-01-01
Slip histories for the 2002 M7.9 Denali fault, Alaska, earthquake are derived rapidly from global teleseismic waveform data. In phases, three models improve matching waveform data and recovery of rupture details. In the first model (Phase I), analogous to an automated solution, a simple fault plane is fixed based on the preliminary Harvard Centroid Moment Tensor mechanism and the epicenter provided by the Preliminary Determination of Epicenters. This model is then updated (Phase II) by implementing a more realistic fault geometry inferred from Digital Elevation Model topography and further (Phase III) by using the calibrated P-wave and SH-wave arrival times derived from modeling of the nearby 2002 M6.7 Nenana Mountain earthquake. These models are used to predict the peak ground velocity and the shaking intensity field in the fault vicinity. The procedure to estimate local strong motion could be automated and used for global real-time earthquake shaking and damage assessment. ?? 2004, Earthquake Engineering Research Institute.
NASA Astrophysics Data System (ADS)
Karson, J. A.
2017-11-01
Unlike most of the Mid-Atlantic Ridge, the North America/Eurasia plate boundary in Iceland lies above sea level where magmatic and tectonic processes can be directly investigated in subaerial exposures. Accordingly, geologic processes in Iceland have long been recognized as possible analogs for seafloor spreading in the submerged parts of the mid-ocean ridge system. Combining existing and new data from across Iceland provides an integrated view of this active, mostly subaerial plate boundary. The broad Iceland plate boundary zone includes segmented rift zones linked by transform fault zones. Rift propagation and transform fault migration away from the Iceland hotspot rearrange the plate boundary configuration resulting in widespread deformation of older crust and reactivation of spreading-related structures. Rift propagation results in block rotations that are accommodated by widespread, rift-parallel, strike-slip faulting. The geometry and kinematics of faulting in Iceland may have implications for spreading processes elsewhere on the mid-ocean ridge system where rift propagation and transform migration occur.
NASA Astrophysics Data System (ADS)
Yang, Yong-sheng; Ming, An-bo; Zhang, You-yun; Zhu, Yong-sheng
2017-10-01
Diesel engines, widely used in engineering, are very important for the running of equipments and their fault diagnosis have attracted much attention. In the past several decades, the image based fault diagnosis methods have provided efficient ways for the diesel engine fault diagnosis. By introducing the class information into the traditional non-negative matrix factorization (NMF), an improved NMF algorithm named as discriminative NMF (DNMF) was developed and a novel imaged based fault diagnosis method was proposed by the combination of the DNMF and the KNN classifier. Experiments performed on the fault diagnosis of diesel engine were used to validate the efficacy of the proposed method. It is shown that the fault conditions of diesel engine can be efficiently classified by the proposed method using the coefficient matrix obtained by DNMF. Compared with the original NMF (ONMF) and principle component analysis (PCA), the DNMF can represent the class information more efficiently because the class characters of basis matrices obtained by the DNMF are more visible than those in the basis matrices obtained by the ONMF and PCA.
Premonitory acoustic emissions and stick-slip in natural and smooth-faulted Westerly granite
Thompson, B.D.; Young, R.P.; Lockner, David A.
2009-01-01
A stick-slip event was induced in a cylindrical sample of Westerly granite containing a preexisting natural fault by loading at constant confining pressure of 150 MPa. Continuously recorded acoustic emission (AE) data and computer tomography (CT)-generated images of the fault plane were combined to provide a detailed examination of microscale processes operating on the fault. The dynamic stick-slip event, considered to be a laboratory analog of an earthquake, generated an ultrasonic signal that was recorded as a large-amplitude AE event. First arrivals of this event were inverted to determine the nucleation site of slip, which is associated with a geometric asperity on the fault surface. CT images and AE locations suggest that a variety of asperities existed in the sample because of the intersection of branch or splay faults with the main fault. This experiment is compared with a stick-slip experiment on a sample prepared with a smooth, artificial saw-cut fault surface. Nearly a thousand times more AE were observed for the natural fault, which has a higher friction coefficient (0.78 compared to 0.53) and larger shear stress drop (140 compared to 68 MPa). However at the measured resolution, the ultrasonic signal emitted during slip initiation does not vary significantly between the two experiments, suggesting a similar dynamic rupture process. We propose that the natural faulted sample under triaxial compression provides a good laboratory analogue for a field-scale fault system in terms of the presence of asperities, fault surface heterogeneity, and interaction of branching faults. ?? 2009.
Fault zone processes in mechanically layered mudrock and chalk
NASA Astrophysics Data System (ADS)
Ferrill, David A.; Evans, Mark A.; McGinnis, Ronald N.; Morris, Alan P.; Smart, Kevin J.; Wigginton, Sarah S.; Gulliver, Kirk D. H.; Lehrmann, Daniel; de Zoeten, Erich; Sickmann, Zach
2017-04-01
A 1.5 km long natural cliff outcrop of nearly horizontal Eagle Ford Formation in south Texas exposes northwest and southeast dipping normal faults with displacements of 0.01-7 m cutting mudrock, chalk, limestone, and volcanic ash. These faults provide analogs for both natural and hydraulically-induced deformation in the productive Eagle Ford Formation - a major unconventional oil and gas reservoir in south Texas, U.S.A. - and other mechanically layered hydrocarbon reservoirs. Fault dips are steep to vertical through chalk and limestone beds, and moderate through mudrock and clay-rich ash, resulting in refracted fault profiles. Steeply dipping fault segments contain rhombohedral calcite veins that cross the fault zone obliquely, parallel to shear segments in mudrock. The vertical dimensions of the calcite veins correspond to the thickness of offset competent beds with which they are contiguous, and the slip parallel dimension is proportional to fault displacement. Failure surface characteristics, including mixed tensile and shear segments, indicate hybrid failure in chalk and limestone, whereas shear failure predominates in mudrock and ash beds - these changes in failure mode contribute to variation in fault dip. Slip on the shear segments caused dilation of the steeper hybrid segments. Tabular sheets of calcite grew by repeated fault slip, dilation, and cementation. Fluid inclusion and stable isotope geochemistry analyses of fault zone cements indicate episodic reactivation at 1.4-4.2 km depths. The results of these analyses document a dramatic bed-scale lithologic control on fault zone architecture that is directly relevant to the development of porosity and permeability anisotropy along faults.
Ruleman, Chester A.; Larsen, Mort; Stickney, Michael C.
2014-01-01
The catastrophic Hebgen Lake earthquake of 18 August 1959 (MW 7.3) led many geoscientists to develop new methods to better understand active tectonics in extensional tectonic regimes that address seismic hazards. The Madison Range fault system and adjacent Hebgen Lake–Red Canyon fault system provide an intermountain active tectonic analog for regional analyses of extensional crustal deformation. The Madison Range fault system comprises fault zones (~100 km in length) that have multiple salients and embayments marked by preexisting structures exposed in the footwall. Quaternary tectonic activity rates differ along the length of the fault system, with less displacement to the north. Within the Hebgen Lake basin, the 1959 earthquake is the latest slip event in the Hebgen Lake–Red Canyon fault system and southern Madison Range fault system. Geomorphic and paleoseismic investigations indicate previous faulting events on both fault systems. Surficial geologic mapping and historic seismicity support a coseismic structural linkage between the Madison Range and Hebgen Lake–Red Canyon fault systems. On this trip, we will look at Quaternary surface ruptures that characterize prehistoric earthquake magnitudes. The one-day field trip begins and ends in Bozeman, and includes an overview of the active tectonics within the Madison Valley and Hebgen Lake basin, southwestern Montana. We will also review geologic evidence, which includes new geologic maps and geomorphic analyses that demonstrate preexisting structural controls on surface rupture patterns along the Madison Range and Hebgen Lake–Red Canyon fault systems.
A dynamic integrated fault diagnosis method for power transformers.
Gao, Wensheng; Bai, Cuifen; Liu, Tong
2015-01-01
In order to diagnose transformer fault efficiently and accurately, a dynamic integrated fault diagnosis method based on Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationship among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most possible failure mode. And then considering the evidence input into the diagnosis model is gradually acquired and the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified.
A Dynamic Integrated Fault Diagnosis Method for Power Transformers
Gao, Wensheng; Liu, Tong
2015-01-01
In order to diagnose transformer fault efficiently and accurately, a dynamic integrated fault diagnosis method based on Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationship among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most possible failure mode. And then considering the evidence input into the diagnosis model is gradually acquired and the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified. PMID:25685841
Method and apparatus for transfer function simulator for testing complex systems
NASA Technical Reports Server (NTRS)
Kavaya, M. J. (Inventor)
1985-01-01
A method and apparatus for testing the operation of a complex stabilization circuit in a closed loop system is presented. The method is comprised of a programmed analog or digital computing system for implementing the transfer function of a load thereby providing a predictable load. The digital computing system employs a table stored in a microprocessor in which precomputed values of the load transfer function are stored for values of input signal from the stabilization circuit over the range of interest. This technique may be used not only for isolating faults in the stabilization circuit, but also for analyzing a fault in a faulty load by so varying parameters of the computing system as to simulate operation of the actual load with the fault.
Some comparisons between mining-induced and laboratory earthquakes
McGarr, A.
1994-01-01
Although laboratory stick-slip friction experiments have long been regarded as analogs to natural crustal earthquakes, the potential use of laboratory results for understanding the earthquake source mechanism has not been fully exploited because of essential difficulties in relating seismographic data to measurements made in the controlled laboratory environment. Mining-induced earthquakes, however, provide a means of calibrating the seismic data in terms of laboratory results because, in contrast to natural earthquakes, the causative forces as well as the hypocentral conditions are known. A comparison of stick-slip friction events in a large granite sample with mining-induced earthquakes in South Africa and Canada indicates both similarities and differences between the two phenomena. The physics of unstable fault slip appears to be largely the same for both types of events. For example, both laboratory and mining-induced earthquakes have very low seismic efficiencies {Mathematical expression} where ??a is the apparent stress and {Mathematical expression} is the average stress acting on the fault plane to cause slip; nearly all of the energy released by faulting is consumed in overcoming friction. In more detail, the mining-induced earthquakes differ from the laboratory events in the behavior of ?? as a function of seismic moment M0. Whereas for the laboratory events ?????0.06 independent of M0, ?? depends quite strongly on M0 for each set of induced earthquakes, with 0.06 serving, apparently, as an upper bound. It seems most likely that this observed scaling difference is due to variations in slip distribution over the fault plane. In the laboratory, a stick-slip event entails homogeneous slip over a fault of fixed area. For each set of induced earthquakes, the fault area appears to be approximately fixed but the slip is inhomogeneous due presumably to barriers (zones of no slip) distributed over the fault plane; at constant {Mathematical expression}, larger events correspond to larger??a as a consequence of fewer barriers to slip. If the inequality ??a/ {Mathematical expression} ??? 0.06 has general validity, then measurements of ??a=??Ea/M0, where ?? is the modulus of rigidity and Ea is the seismically-radiated energy, can be used to infer the absolute level of deviatoric stress at the hypocenter. ?? 1994 Birkha??user Verlag.
Initiation process of a thrust fault revealed by analog experiments
NASA Astrophysics Data System (ADS)
Yamada, Yasuhiro; Dotare, Tatsuya; Adam, Juergen; Hori, Takane; Sakaguchi, Hide
2016-04-01
We conducted 2D (cross-sectional) analog experiments with dry sand using a high resolution digital image correlation (DIC) technique to reveal initiation process of a thrust fault in detail, and identified a number of "weak shear bands" and minor uplift prior to the thrust initiation. The observations suggest that the process can be divided into three stages. Stage 1: characterized by a series of abrupt and short-lived weak shear bands at the location where the thrust will be generated later. Before initiation of the fault, the area to be the hanging wall starts to uplift. Stage 2: defined by the generation of the new thrust and its active displacement. The location of the new thrust seems to be constrained by its associated back-thrust, produced at the foot of the surface slope (by the previous thrust). The activity of the previous thrust turns to zero once the new thrust is generated, but the timing of these two events is not the same. Stage 3: characterized by a constant displacement along the (new) thrust. Similar minor shear bands can be seen in the toe area of the Nankai accretionary prism, SW Japan and we can correlate the along-strike variations in seismic profiles to the model results that show the characteristic features in each thrust development stage.
Power System Transient Diagnostics Based on Novel Traveling Wave Detection
NASA Astrophysics Data System (ADS)
Hamidi, Reza Jalilzadeh
Modern electrical power systems demand novel diagnostic approaches to enhancing the system resiliency by improving the state-of-the-art algorithms. The proliferation of high-voltage optical transducers and high time-resolution measurements provide opportunities to develop novel diagnostic methods of very fast transients in power systems. At the same time, emerging complex configuration, such as multi-terminal hybrid transmission systems, limits the applications of the traditional diagnostic methods, especially in fault location and health monitoring. The impedance-based fault-location methods are inefficient for cross-bounded cables, which are widely used for connection of offshore wind farms to the main grid. Thus, this dissertation first presents a novel traveling wave-based fault-location method for hybrid multi-terminal transmission systems. The proposed method utilizes time-synchronized high-sampling voltage measurements. The traveling wave arrival times (ATs) are detected by observation of the squares of wavelet transformation coefficients. Using the ATs, an over-determined set of linear equations are developed for noise reduction, and consequently, the faulty segment is determined based on the characteristics of the provided equation set. Then, the fault location is estimated. The accuracy and capabilities of the proposed fault location method are evaluated and also compared to the existing traveling-wave-based method for a wide range of fault parameters. In order to improve power systems stability, auto-reclosing (AR), single-phase auto-reclosing (SPAR), and adaptive single-phase auto-reclosing (ASPAR) methods have been developed with the final objectives of distinguishing between the transient and permanent faults to clear the transient faults without de-energization of the solid phases. However, the features of the electrical arcs (transient faults) are severely influenced by a number of random parameters, including the convection of the air and plasma, wind speed, air pressure, and humidity. Therefore, the dead-time (the de-energization duration of the faulty phase) is unpredictable. Accordingly, conservatively long dead-times are usually considered by protection engineers. However, if the exact arc distinction time is determined, the power system stability and quality will enhance. Therefore, a new method for detection of arc extinction times leading to a new ASPAR method utilizing power line carrier (PLC) signals is presented. The efficiency of the proposed ASPAR method is verified through simulations and compared with the existing ASPAR methods. High-sampling measurements are prone to be skewed by the environmental noises and analog-to-digital (A/D) converters quantization errors. Therefore noise-contaminated measurements are the major source of uncertainties and errors in the outcomes of traveling wave-based diagnostic applications. The existing AT-detection methods do not provide enough sensitivity and selectivity at the same time. Therefore, a new AT-detection method based on short-time matrix pencil (STMPM) is developed to accurately detect ATs of the traveling waves with low signal-to-noise (SNR) ratios. As STMPM is based on matrix algebra, it is a challenging to implement this new technique in microprocessor-based fault locators. Hence, a fully recursive and computationally efficient method based on adaptive discrete Kalman filter (ADKF) is introduced for AT-detection, which is proper for microprocessors and able to accomplish accurate AT-detection for online applications such as ultra-high-speed protection. Both proposed AT-detection methods are evaluated based on extensive simulation studies, and the superior outcomes are compared to the existing methods.
Secure and Efficient Network Fault Localization
2012-02-27
ORGANIZATION NAME(S) AND ADDRESS (ES) Carnegie Mellon University,School of Computer Science,Computer Science Department,Pittsburgh,PA,15213 8. PERFORMING...ORGANIZATION REPORT NUMBER 9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS (ES) 10. SPONSOR/MONITOR’S ACRONYM(S) 11. SPONSOR/MONITOR’S REPORT...efficiency than previously known protocols for fault localization. Our proposed fault localization protocols also address the security threats that
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Zhong, Zhengqiang; Xu, Lei
2015-10-01
In this paper, an integrated system health management-oriented adaptive fault diagnostics and model for avionics is proposed. With avionics becoming increasingly complicated, precise and comprehensive avionics fault diagnostics has become an extremely complicated task. For the proposed fault diagnostic system, specific approaches, such as the artificial immune system, the intelligent agents system and the Dempster-Shafer evidence theory, are used to conduct deep fault avionics diagnostics. Through this proposed fault diagnostic system, efficient and accurate diagnostics can be achieved. A numerical example is conducted to apply the proposed hybrid diagnostics to a set of radar transmitters on an avionics system and to illustrate that the proposed system and model have the ability to achieve efficient and accurate fault diagnostics. By analyzing the diagnostic system's feasibility and pragmatics, the advantages of this system are demonstrated.
Network-Physics(NP) Bec DIGITAL(#)-VULNERABILITY Versus Fault-Tolerant Analog
NASA Astrophysics Data System (ADS)
Alexander, G. K.; Hathaway, M.; Schmidt, H. E.; Siegel, E.
2011-03-01
Siegel[AMS Joint Mtg.(2002)-Abs.973-60-124] digits logarithmic-(Newcomb(1881)-Weyl(1914; 1916)-Benford(1938)-"NeWBe"/"OLDbe")-law algebraic-inversion to ONLY BEQS BEC:Quanta/Bosons= digits: Synthesis reveals EMP-like SEVERE VULNERABILITY of ONLY DIGITAL-networks(VS. FAULT-TOLERANT ANALOG INvulnerability) via Barabasi "Network-Physics" relative-``statics''(VS.dynamics-[Willinger-Alderson-Doyle(Not.AMS(5/09)]-]critique); (so called)"Quantum-computing is simple-arithmetic(sans division/ factorization); algorithmic-complexities: INtractibility/ UNdecidability/ INefficiency/NONcomputability / HARDNESS(so MIScalled) "noise"-induced-phase-transitions(NITS) ACCELERATION: Cook-Levin theorem Reducibility is Renormalization-(Semi)-Group fixed-points; number-Randomness DEFINITION via WHAT? Query(VS. Goldreich[Not.AMS(02)] How? mea culpa)can ONLY be MBCS "hot-plasma" versus digit-clumping NON-random BEC; Modular-arithmetic Congruences= Signal X Noise PRODUCTS = clock-model; NON-Shor[Physica A,341,586(04)] BEC logarithmic-law inversion factorization:Watkins number-thy. U stat.-phys.); P=/=NP TRIVIAL Proof: Euclid!!! [(So Miscalled) computational-complexity J-O obviation via geometry.
Detection and Modeling of High-Dimensional Thresholds for Fault Detection and Diagnosis
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
Many Fault Detection and Diagnosis (FDD) systems use discrete models for detection and reasoning. To obtain categorical values like oil pressure too high, analog sensor values need to be discretized using a suitablethreshold. Time series of analog and discrete sensor readings are processed and discretized as they come in. This task isusually performed by the wrapper code'' of the FDD system, together with signal preprocessing and filtering. In practice,selecting the right threshold is very difficult, because it heavily influences the quality of diagnosis. If a threshold causesthe alarm trigger even in nominal situations, false alarms will be the consequence. On the other hand, if threshold settingdoes not trigger in case of an off-nominal condition, important alarms might be missed, potentially causing hazardoussituations. In this paper, we will in detail describe the underlying statistical modeling techniques and algorithm as well as the Bayesian method for selecting the most likely shape and its parameters. Our approach will be illustrated by several examples from the Aerospace domain.
Analog FM/FM versus digital color TV transmission aboard space station
NASA Technical Reports Server (NTRS)
Hart, M. M.
1985-01-01
Langley Research Center is developing an integrated fault tolerant network to support data, voice, and video communications aboard Space Station. The question of transmitting the video data via dedicated analog channels or converting it to the digital domain for consistancy with the test of the data is addressed. The recommendations in this paper are based on a comparison in the signal-to-noise ratio (SNR), the type of video processing required aboard Space Station, the applicability to Space Station, and how they integrate into the network.
Energy-efficient fault tolerance in multiprocessor real-time systems
NASA Astrophysics Data System (ADS)
Guo, Yifeng
The recent progress in the multiprocessor/multicore systems has important implications for real-time system design and operation. From vehicle navigation to space applications as well as industrial control systems, the trend is to deploy multiple processors in real-time systems: systems with 4 -- 8 processors are common, and it is expected that many-core systems with dozens of processing cores will be available in near future. For such systems, in addition to general temporal requirement common for all real-time systems, two additional operational objectives are seen as critical: energy efficiency and fault tolerance. An intriguing dimension of the problem is that energy efficiency and fault tolerance are typically conflicting objectives, due to the fact that tolerating faults (e.g., permanent/transient) often requires extra resources with high energy consumption potential. In this dissertation, various techniques for energy-efficient fault tolerance in multiprocessor real-time systems have been investigated. First, the Reliability-Aware Power Management (RAPM) framework, which can preserve the system reliability with respect to transient faults when Dynamic Voltage Scaling (DVS) is applied for energy savings, is extended to support parallel real-time applications with precedence constraints. Next, the traditional Standby-Sparing (SS) technique for dual processor systems, which takes both transient and permanent faults into consideration while saving energy, is generalized to support multiprocessor systems with arbitrary number of identical processors. Observing the inefficient usage of slack time in the SS technique, a Preference-Oriented Scheduling Framework is designed to address the problem where tasks are given preferences for being executed as soon as possible (ASAP) or as late as possible (ALAP). A preference-oriented earliest deadline (POED) scheduler is proposed and its application in multiprocessor systems for energy-efficient fault tolerance is investigated, where tasks' main copies are executed ASAP while backup copies ALAP to reduce the overlapped execution of main and backup copies of the same task and thus reduce energy consumption. All proposed techniques are evaluated through extensive simulations and compared with other state-of-the-art approaches. The simulation results confirm that the proposed schemes can preserve the system reliability while still achieving substantial energy savings. Finally, for both SS and POED based Energy-Efficient Fault-Tolerant (EEFT) schemes, a series of recovery strategies are designed when more than one (transient and permanent) faults need to be tolerated.
Learning and diagnosing faults using neural networks
NASA Technical Reports Server (NTRS)
Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis
1990-01-01
Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor data to fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor data to fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Secondly, the neural network which performs sensor fusion is structured to detect new unknown faults for which training examples were not presented during training. These methods were tested on a task of fault diagnosis by employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.
Corrugated megathrust revealed offshore from Costa Rica
NASA Astrophysics Data System (ADS)
Edwards, Joel H.; Kluesner, Jared W.; Silver, Eli A.; Brodsky, Emily E.; Brothers, Daniel S.; Bangs, Nathan L.; Kirkpatrick, James D.; Wood, Ruby; Okamoto, Kristina
2018-03-01
Exhumed faults are rough, often exhibiting topographic corrugations oriented in the direction of slip; such features are fundamental to mechanical processes that drive earthquakes and fault evolution. However, our understanding of corrugation genesis remains limited due to a lack of in situ observations at depth, especially at subducting plate boundaries. Here we present three-dimensional seismic reflection data of the Costa Rica subduction zone that image a shallow megathrust fault characterized by corrugated, and chaotic and weakly corrugated topographies. The corrugated surfaces extend from near the trench to several kilometres down-dip, exhibit high reflection amplitudes (consistent with high fluid content/pressure) and trend 11-18° oblique to subduction, suggesting 15 to 25 mm yr-1 of trench-parallel slip partitioning across the plate boundary. The corrugations form along portions of the megathrust with greater cumulative slip and may act as fluid conduits. In contrast, weakly corrugated areas occur adjacent to active plate bending faults where the megathrust has migrated up-section, forming a nascent fault surface. The variations in megathrust roughness imaged here suggest that abandonment and then reestablishment of the megathrust up-section transiently increases fault roughness. Analogous corrugations may exist along significant portions of subduction megathrusts globally.
Corrugated megathrust revealed offshore from Costa Rica
Edwards, Joel H.; Kluesner, Jared; Silver, Eli A.; Brodsky, Emily E.; Brothers, Daniel; Bangs, Nathan L.; Kirkpatrick, James D.; Wood, Ruby; Okamato, Kristina
2018-01-01
Exhumed faults are rough, often exhibiting topographic corrugations oriented in the direction of slip; such features are fundamental to mechanical processes that drive earthquakes and fault evolution. However, our understanding of corrugation genesis remains limited due to a lack of in situ observations at depth, especially at subducting plate boundaries. Here we present three-dimensional seismic reflection data of the Costa Rica subduction zone that image a shallow megathrust fault characterized by corrugated, and chaotic and weakly corrugated topographies. The corrugated surfaces extend from near the trench to several kilometres down-dip, exhibit high reflection amplitudes (consistent with high fluid content/pressure) and trend 11–18° oblique to subduction, suggesting 15 to 25 mm yr−1 of trench-parallel slip partitioning across the plate boundary. The corrugations form along portions of the megathrust with greater cumulative slip and may act as fluid conduits. In contrast, weakly corrugated areas occur adjacent to active plate bending faults where the megathrust has migrated up-section, forming a nascent fault surface. The variations in megathrust roughness imaged here suggest that abandonment and then reestablishment of the megathrust up-section transiently increases fault roughness. Analogous corrugations may exist along significant portions of subduction megathrusts globally.
Transfer zones in listric normal fault systems
NASA Astrophysics Data System (ADS)
Bose, Shamik
Listric normal faults are common in passive margin settings where sedimentary units are detached above weaker lithological units, such as evaporites or are driven by basal structural and stratigraphic discontinuities. The geometries and styles of faulting vary with the types of detachment and form landward and basinward dipping fault systems. Complex transfer zones therefore develop along the terminations of adjacent faults where deformation is accommodated by secondary faults, often below seismic resolution. The rollover geometry and secondary faults within the hanging wall of the major faults also vary with the styles of faulting and contribute to the complexity of the transfer zones. This study tries to understand the controlling factors for the formation of the different styles of listric normal faults and the different transfer zones formed within them, by using analog clay experimental models. Detailed analyses with respect to fault orientation, density and connectivity have been performed on the experiments in order to gather insights on the structural controls and the resulting geometries. A new high resolution 3D laser scanning technology has been introduced to scan the surfaces of the clay experiments for accurate measurements and 3D visualizations. Numerous examples from the Gulf of Mexico have been included to demonstrate and geometrically compare the observations in experiments and real structures. A salt cored convergent transfer zone from the South Timbalier Block 54, offshore Louisiana has been analyzed in detail to understand the evolutionary history of the region, which helps in deciphering the kinematic growth of similar structures in the Gulf of Mexico. The dissertation is divided into three chapters, written in a journal article format, that deal with three different aspects in understanding the listric normal fault systems and the transfer zones so formed. The first chapter involves clay experimental models to understand the fault patterns in divergent and convergent transfer zones. Flat base plate setups have been used to build different configurations that would lead to approaching, normal offset and overlapping faults geometries. The results have been analyzed with respect to fault orientation, density, connectivity and 3D geometry from photographs taken from the three free surfaces and laser scans of the top surface of the clay cake respectively. The second chapter looks into the 3D structural analysis of the South Timbalier Block 54, offshore Louisiana in the Gulf of Mexico with the help of a 3D seismic dataset and associated well tops and velocity data donated by ExxonMobil Corporation. This study involves seismic interpretation techniques, velocity modeling, cross section restoration of a series of seismic lines and 3D subsurface modeling using depth converted seismic horizons, well tops and balanced cross sections. The third chapter deals with the clay experiments of listric normal fault systems and tries to understand the controls on geometries of fault systems with and without a ductile substrate. Sloping flat base plate setups have been used and silicone fluid underlain below the clay cake has been considered as an analog for salt. The experimental configurations have been varied with respect to three factors viz. the direction of slope with respect to extension, the termination of silicone polymer with respect to the basal discontinuities and overlap of the base plates. The analyses for the experiments have again been performed from photographs and 3D laser scans of the clay surface.
NASA Astrophysics Data System (ADS)
Bertrand, Lionel; Jusseaume, Jessie; Géraud, Yves; Diraison, Marc; Damy, Pierre-Clément; Navelot, Vivien; Haffen, Sébastien
2018-03-01
In fractured reservoirs in the basement of extensional basins, fault and fracture parameters like density, spacing and length distribution are key properties for modelling and prediction of reservoir properties and fluids flow. As only large faults are detectable using basin-scale geophysical investigations, these fine-scale parameters need to be inferred from faults and fractures in analogous rocks at the outcrop. In this study, we use the western shoulder of the Upper Rhine Graben as an outcropping analogue of several deep borehole projects in the basement of the graben. Geological regional data, DTM (Digital Terrain Model) mapping and outcrop studies with scanlines are used to determine the spatial arrangement of the faults from the regional to the reservoir scale. The data shows that: 1) The fault network can be hierarchized in three different orders of scale and structural blocks with a characteristic structuration. This is consistent with other basement rocks studies in other rifting system allowing the extrapolation of the important parameters for modelling. 2) In the structural blocks, the fracture network linked to the faults is linked to the interplay between rock facies variation linked to the rock emplacement and the rifting event.
NASA Astrophysics Data System (ADS)
Fernández-Remolar, D. C.; Prieto-Ballesteros, O.; Rodríguez, N.; Dávila, F.; Stevens, T.; Amils, R.; Gómez-Elvira, J.; Stoker, C. R.
2005-03-01
Reconstruction of the probable habitats hosting the detected microbial communities through the integration of the geobiological data obtained from the MARTE drilling campaigns, TEM sounding and field surface geological survey
Monitoring the performance of the Southern African Large Telescope
NASA Astrophysics Data System (ADS)
Hettlage, Christian; Coetzee, Chris; Väisänen, Petri; Romero Colmenero, Encarni; Crawford, Steven M.; Kotze, Paul; Rabe, Paul; Hulme, Stephen; Brink, Janus; Maartens, Deneys; Browne, Keith; Strydom, Ockert; De Bruyn, David
2016-07-01
The efficient operation of a telescope requires awareness of its performance on a daily and long-term basis. This paper outlines the Fault Tracker, WebSAMMI and the Dashboard used by the Southern African Large Telescope (SALT) to achieve this aim. Faults are mostly logged automatically, but the Fault Tracker allows users to add and edit faults. The SALT Astronomer and SALT Operator record weather conditions and telescope usage with WebSAMMI. Various efficiency metrics are shown for different time periods on the Dashboard. A kiosk mode for displaying on a public screen is included. Possible applications for other telescopes are discussed.
NASA Astrophysics Data System (ADS)
McBeck, Jessica A.; Cooke, Michele L.; Herbert, Justin W.; Maillot, Bertrand; Souloumiac, Pauline
2017-09-01
We employ work optimization to predict the geometry of frontal thrusts at two stages of an evolving physical accretion experiment. Faults that produce the largest gains in efficiency, or change in external work per new fault area, ΔWext/ΔA, are considered most likely to develop. The predicted thrust geometry matches within 1 mm of the observed position and within a few degrees of the observed fault dip, for both the first forethrust and backthrust when the observed forethrust is active. The positions of the second backthrust and forethrust that produce >90% of the maximum ΔWext/ΔA also overlap the observed thrusts. The work optimal fault dips are within a few degrees of the fault dips that maximize the average Coulomb stress. Slip gradients along the detachment produce local elevated shear stresses and high strain energy density regions that promote thrust initiation near the detachment. The mechanical efficiency (Wext) of the system decreases at each of the two simulated stages of faulting and resembles the evolution of experimental force. The higher ΔWext/ΔA due to the development of the first pair relative to the second pair indicates that the development of new thrusts may lead to diminishing efficiency gains as the wedge evolves. The numerical estimates of work consumed by fault propagation overlap the range calculated from experimental force data and crustal faults. The integration of numerical and physical experiments provides a powerful approach that demonstrates the utility of work optimization to predict the development of faults.
Fault Injection Campaign for a Fault Tolerant Duplex Framework
NASA Technical Reports Server (NTRS)
Sacco, Gian Franco; Ferraro, Robert D.; von llmen, Paul; Rennels, Dave A.
2007-01-01
Fault tolerance is an efficient approach adopted to avoid or reduce the damage of a system failure. In this work we present the results of a fault injection campaign we conducted on the Duplex Framework (DF). The DF is a software developed by the UCLA group [1, 2] that uses a fault tolerant approach and allows to run two replicas of the same process on two different nodes of a commercial off-the-shelf (COTS) computer cluster. A third process running on a different node, constantly monitors the results computed by the two replicas, and eventually restarts the two replica processes if an inconsistency in their computation is detected. This approach is very cost efficient and can be adopted to control processes on spacecrafts where the fault rate produced by cosmic rays is not very high.
Optimal fault-tolerant control strategy of a solid oxide fuel cell system
NASA Astrophysics Data System (ADS)
Wu, Xiaojuan; Gao, Danhui
2017-10-01
For solid oxide fuel cell (SOFC) development, load tracking, heat management, air excess ratio constraint, high efficiency, low cost and fault diagnosis are six key issues. However, no literature studies the control techniques combining optimization and fault diagnosis for the SOFC system. An optimal fault-tolerant control strategy is presented in this paper, which involves four parts: a fault diagnosis module, a switching module, two backup optimizers and a controller loop. The fault diagnosis part is presented to identify the SOFC current fault type, and the switching module is used to select the appropriate backup optimizer based on the diagnosis result. NSGA-II and TOPSIS are employed to design the two backup optimizers under normal and air compressor fault states. PID algorithm is proposed to design the control loop, which includes a power tracking controller, an anode inlet temperature controller, a cathode inlet temperature controller and an air excess ratio controller. The simulation results show the proposed optimal fault-tolerant control method can track the power, temperature and air excess ratio at the desired values, simultaneously achieving the maximum efficiency and the minimum unit cost in the case of SOFC normal and even in the air compressor fault.
Fault Analysis in Solar Photovoltaic Arrays
NASA Astrophysics Data System (ADS)
Zhao, Ye
Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown to, at times, prevent the fault current protection devices to trip. A small-scale experimental PV benchmark system has been developed in Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low irradiance condition. The other is a fault evolution in a PV array during night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition". However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" and "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.
Effects of Strike-Slip Fault Segmentation on Earthquake Energy and Seismic Hazard
NASA Astrophysics Data System (ADS)
Madden, E. H.; Cooke, M. L.; Savage, H. M.; McBeck, J.
2014-12-01
Many major strike-slip faults are segmented along strike, including those along plate boundaries in California and Turkey. Failure of distinct fault segments at depth may be the source of multiple pulses of seismic radiation observed for single earthquakes. However, how and when segmentation affects fault behavior and energy release is the basis of many outstanding questions related to the physics of faulting and seismic hazard. These include the probability for a single earthquake to rupture multiple fault segments and the effects of segmentation on earthquake magnitude, radiated seismic energy, and ground motions. Using numerical models, we quantify components of the earthquake energy budget, including the tectonic work acting externally on the system, the energy of internal rock strain, the energy required to overcome fault strength and initiate slip, the energy required to overcome frictional resistance during slip, and the radiated seismic energy. We compare the energy budgets of systems of two en echelon fault segments with various spacing that include both releasing and restraining steps. First, we allow the fault segments to fail simultaneously and capture the effects of segmentation geometry on the earthquake energy budget and on the efficiency with which applied displacement is accommodated. Assuming that higher efficiency correlates with higher probability for a single, larger earthquake, this approach has utility for assessing the seismic hazard of segmented faults. Second, we nucleate slip along a weak portion of one fault segment and let the quasi-static rupture propagate across the system. Allowing fractures to form near faults in these models shows that damage develops within releasing steps and promotes slip along the second fault, while damage develops outside of restraining steps and can prohibit slip along the second fault. Work is consumed in both the propagation of and frictional slip along these new fractures, impacting the energy available for further slip and for subsequent earthquakes. This suite of models reveals that efficiency may be a useful tool for determining the relative seismic hazard of different segmented fault systems, while accounting for coseismic damage zone production is critical in assessing fault interactions and the associated energy budgets of specific systems.
NASA Astrophysics Data System (ADS)
Yin, An; Kelty, Thomas K.; Davis, Gregory A.
1989-09-01
Geologic mapping in southern Glacier National Park, Montana, reveals the presence of two duplexes sharing the same floor thrust fault, the Lewis thrust. The westernmost duplex (Brave Dog Mountain) includes the low-angle Brave Dog roof fault and Elk Mountain imbricate system, and the easternmost (Rising Wolf Mountain) duplex includes the low-angle Rockwell roof fault and Mt. Henry imbricate system. The geometry of these duplexes suggests that they differ from previously described geometric-kinematic models for duplex development. Their low-angle roof faults were preexisting structures that were locally utilized as roof faults during the formation of the imbricate systems. Crosscutting of the Brave Dog fault by the Mt. Henry imbricate system indicates that the two duplexes formed at different times. The younger Rockwell-Mt. Henry duplex developed 20 km east of the older Brave Dog-Elk Mountain duplex; the roof fault of the former is at a higher structural level. Field relations confirm that the low-angle Rockwell fault existed across the southern Glacier Park area prior to localized formation of the Mt. Henry imbricate thrusts beneath it. These thrusts kinematically link the Rockwell and Lewis faults and may be analogous to P shears that form between two synchronously active faults bounding a simple shear system. The abandonment of one duplex and its replacement by another with a new and higher roof fault may have been caused by (1) warping of the older and lower Brave Dog roof fault during the formation of the imbricate system (Elk Mountain) beneath it, (2) an upward shifting of the highest level of a simple shear system in the Lewis plate to a new decollement level in subhorizontal belt strata (= the Rockwell fault) that lay above inclined strata within the first duplex, and (3) a reinitiation of P-shear development (= Mt. Henry imbricate faults) between the Lewis thrust and the subparallel, synkinematic Rockwell fault.
NASA Astrophysics Data System (ADS)
Rosas, F. M.; Tomas, R.; Duarte, J. C.; Schellart, W. P.; Terrinha, P.
2014-12-01
The intersection between the Gloria Fault (GF) and the Tore-Madeira rise (TMR) in NE Atlantic marks a transition from a discrete to a diffuse nature along a critical segment of the Eurasia/Africa plate boundary. To the West of such intersection, approximately since the Azores triple junction, this plate boundary is mostly characterized by a set of closely aligned and continuous strike-slip faults that make up the narrow active dextral transcurrent system of the GF (with high magnitude M>7 historical earthquakes). While intersecting the TMR the closely E-W trending trace of the GF system is slightly deflected (changing to WNW-ESE), and splays into several fault branches that often coincide with aligned (TMR related?) active volcanic plugs. The segment of the plate boundary between the TMR and the Gorringe Bank (further to the East) corresponds to a more complex (less discrete) tectonic configuration, within which the tectonic connection between the Gloria Fault and another major dextral transcurrent system (the so called SWIM system) occurs. This SWIM fault system has been described to extend even further to the East (almost until the Straits of Gibraltar) across the Gulf of Cadiz domain. In this domain the relative movement between the Eurasian and the African plates is thought to be accommodated through a diffuse manner, involving large scale strain partition between a dextral transcurrent fault-system (the SWIM system), and a set of active west-directed én-échelon major thrusts extending to the North along the SW Iberian margin. We present new analog modeling results, in which we employed different experimental settings to address (namely) the following main questions (as a first step to gain new insight on the tectonic evolution of the TRM-GF critical intersection area): Could the observed morphotectonic configuration of such intersection be simply caused by a bathymetric anomaly determined by a postulated thickened oceanic crust, or is it more compatible with a crustal rheological (viscous) anomaly, possibly related with the active volcanism in the intersection zone? What could cause the observed deflection and splaying of the GF in the intersection with the TMR? Is the GF cutting across the TMR, or is it ending against a morpho-rheological anomaly through waning lateral propagation?
NASA Astrophysics Data System (ADS)
Fattaruso, Laura A.; Cooke, Michele L.; Dorsey, Rebecca J.; Housen, Bernard A.
2016-12-01
Between 1.5 and 1.1 Ma, the southern San Andreas fault system underwent a major reorganization that included initiation of the San Jacinto fault zone and termination of slip on the extensional West Salton detachment fault. The southern San Andreas fault itself has also evolved since this time, with several shifts in activity among fault strands within San Gorgonio Pass. We use three-dimensional mechanical Boundary Element Method models to investigate the impact of these changes to the fault network on deformation patterns. A series of snapshot models of the succession of active fault geometries explore the role of fault interaction and tectonic loading in abandonment of the West Salton detachment fault, initiation of the San Jacinto fault zone, and shifts in activity of the San Andreas fault. Interpreted changes to uplift patterns are well matched by model results. These results support the idea that initiation and growth of the San Jacinto fault zone led to increased uplift rates in the San Gabriel Mountains and decreased uplift rates in the San Bernardino Mountains. Comparison of model results for vertical-axis rotation to data from paleomagnetic studies reveals a good match to local rotation patterns in the Mecca Hills and Borrego Badlands. We explore the mechanical efficiency at each step in the modeled fault evolution, and find an overall trend toward increased efficiency through time. Strain energy density patterns are used to identify regions of incipient faulting, and support the notion of north-to-south propagation of the San Jacinto fault during its initiation.
NASA Astrophysics Data System (ADS)
Cooke, M. L.; Fattaruso, L.; Dorsey, R. J.; Housen, B. A.
2015-12-01
Between ~1.5 and 1.1 Ma, the southern San Andreas fault system underwent a major reorganization that included initiation of the San Jacinto fault and termination of slip on the extensional West Salton detachment fault. The southern San Andreas fault itself has also evolved since this time, with several shifts in activity among fault strands within San Gorgonio Pass. We use three-dimensional mechanical Boundary Element Method models to investigate the impact of these changes to the fault network on deformation patterns. A series of snapshot models of the succession of active fault geometries explore the role of fault interaction and tectonic loading in abandonment of the West Salton detachment fault, initiation of the San Jacinto fault, and shifts in activity of the San Andreas fault. Interpreted changes to uplift patterns are well matched by model results. These results support the idea that growth of the San Jacinto fault led to increased uplift rates in the San Gabriel Mountains and decreased uplift rates in the San Bernardino Mountains. Comparison of model results for vertical axis rotation to data from paleomagnetic studies reveals a good match to local rotation patterns in the Mecca Hills and Borrego Badlands. We explore the mechanical efficiency at each step in the evolution, and find an overall trend toward increased efficiency through time. Strain energy density patterns are used to identify regions of off-fault deformation and potential incipient faulting. These patterns support the notion of north-to-south propagation of the San Jacinto fault during its initiation. The results of the present-day model are compared with microseismicity focal mechanisms to provide additional insight into the patterns of off-fault deformation within the southern San Andreas fault system.
All-Digital Baseband 65nm PLL/FPLL Clock Multiplier using 10-cell Library
NASA Technical Reports Server (NTRS)
Shuler, Robert L., Jr.; Wu, Qiong; Liu, Rui; Chen, Li
2014-01-01
PLLs for clock generation are essential for modern circuits, to generate specialized frequencies for many interfaces and high frequencies for chip internal operation. These circuits depend on analog circuits and careful tailoring for each new process, and making them fault tolerant is an incompletely solved problem. Until now, all digital PLLs have been restricted to sampled data DSP techniques and not available for the highest frequency baseband applications. This paper presents the design and preliminary evaluation of an all-digital baseband technique built entirely with an easily portable 10-cell digital library. The library is also described, as it aids in research and low volume design porting to new processes. The advantages of the digital approach are the wide variety of techniques available to give varying degrees of fault tolerance, and the simplicity of porting the design to new processes, even to exotic processes that may not have analog capability. The only tuning parameter is digital gate delay. An all-digital approach presents unique problems and standard analog loop stability design criteria cannot be directly used. Because of the quantization of frequency, there is effectively infinite gain for very small loop error feedback. The numerically controlled oscillator (NCO) based on a tapped delay line cannot be reliably updated while a pulse is active in the delay line, and ordinarily does not have enough frequency resolution for a low-jitter output.
ALL-Digital Baseband 65nm PLL/FPLL Clock Multiplier Using 10-Cell Library
NASA Technical Reports Server (NTRS)
Schuler, Robert L., Jr.; Wu, Qiong; Liu, Rui; Chen, Li; Madala, Shridhar
2014-01-01
PLLs for clock generation are essential for modern circuits, to generate specialized frequencies for many interfaces and high frequencies for chip internal operation. These circuits depend on analog circuits and careful tailoring for each new process, and making them fault tolerant is an incompletely solved problem. Until now, all digital PLLs have been restricted to sampled data DSP techniques and not available for the highest frequency baseband applications. This paper presents the design and preliminary evaluation of an all-digital baseband technique built entirely with an easily portable 10-cell digital library. The library is also described, as it aids in research and low volume design porting to new processes. The advantages of the digital approach are the wide variety of techniques available to give varying degrees of fault tolerance, and the simplicity of porting the design to new processes, even to exotic processes that may not have analog capability. The only tuning parameter is digital gate delay. An all-digital approach presents unique problems and standard analog loop stability design criteria cannot be directly used. Because of the quantization of frequency, there is effectively infinite gain for very small loop error feedback. The numerically controlled oscillator (NCO) based on a tapped delay line cannot be reliably updated while a pulse is active in the delay line, and ordinarily does not have enough frequency resolution for a low-jitter output.
Sarpeshkar, R
2014-03-28
We analyse the pros and cons of analog versus digital computation in living cells. Our analysis is based on fundamental laws of noise in gene and protein expression, which set limits on the energy, time, space, molecular count and part-count resources needed to compute at a given level of precision. We conclude that analog computation is significantly more efficient in its use of resources than deterministic digital computation even at relatively high levels of precision in the cell. Based on this analysis, we conclude that synthetic biology must use analog, collective analog, probabilistic and hybrid analog-digital computational approaches; otherwise, even relatively simple synthetic computations in cells such as addition will exceed energy and molecular-count budgets. We present schematics for efficiently representing analog DNA-protein computation in cells. Analog electronic flow in subthreshold transistors and analog molecular flux in chemical reactions obey Boltzmann exponential laws of thermodynamics and are described by astoundingly similar logarithmic electrochemical potentials. Therefore, cytomorphic circuits can help to map circuit designs between electronic and biochemical domains. We review recent work that uses positive-feedback linearization circuits to architect wide-dynamic-range logarithmic analog computation in Escherichia coli using three transcription factors, nearly two orders of magnitude more efficient in parts than prior digital implementations.
NASA Astrophysics Data System (ADS)
Barnhart, W. D.; Briggs, R.
2015-12-01
Geodetic imaging techniques enable researchers to "see" details of fault rupture that cannot be captured by complementary tools such as seismology and field studies, thus providing increasingly detailed information about surface strain, slip kinematics, and how an earthquake may be transcribed into the geological record. For example, the recent Haiti, Sierra El Mayor, and Nepal earthquakes illustrate the fundamental role of geodetic observations in recording blind ruptures where purely geological and seismological studies provided incomplete views of rupture kinematics. Traditional earthquake hazard analyses typically rely on sparse paleoseismic observations and incomplete mapping, simple assumptions of slip kinematics from Andersonian faulting, and earthquake analogs to characterize the probabilities of forthcoming ruptures and the severity of ground accelerations. Spatially dense geodetic observations in turn help to identify where these prevailing assumptions regarding fault behavior break down and highlight new and unexpected kinematic slip behavior. Here, we focus on three key contributions of space geodetic observations to the analysis of co-seismic deformation: identifying near-surface co-seismic slip where no easily recognized fault rupture exists; discerning non-Andersonian faulting styles; and quantifying distributed, off-fault deformation. The 2013 Balochistan strike slip earthquake in Pakistan illuminates how space geodesy precisely images non-Andersonian behavior and off-fault deformation. Through analysis of high-resolution optical imagery and DEMs, evidence emerges that a single fault map slip as both a strike slip and dip slip fault across multiple seismic cycles. These observations likewise enable us to quantify on-fault deformation, which account for ~72% of the displacements in this earthquake. Nonetheless, the spatial distribution of on- and off-fault deformation in this event is highly spatially variable- a complicating factor for comparisons of geologic and geodetic slip rates. As such, detailed studies such as this will play a continuing vital role in the accurate assessment of short- and long-term fault slip kinematics.
NASA Astrophysics Data System (ADS)
Urnes, James M., Sr.; Cushing, John; Bond, William E.; Nunes, Steve
1996-10-01
Fly-by-Light control systems offer higher performance for fighter and transport aircraft, with efficient fiber optic data transmission, electric control surface actuation, and multi-channel high capacity centralized processing combining to provide maximum aircraft flight control system handling qualities and safety. The key to efficient support for these vehicles is timely and accurate fault diagnostics of all control system components. These diagnostic tests are best conducted during flight when all facts relating to the failure are present. The resulting data can be used by the ground crew for efficient repair and turnaround of the aircraft, saving time and money in support costs. These difficult to diagnose (Cannot Duplicate) fault indications average 40 - 50% of maintenance activities on today's fighter and transport aircraft, adding significantly to fleet support cost. Fiber optic data transmission can support a wealth of data for fault monitoring; the most efficient method of fault diagnostics is accurate modeling of the component response under normal and failed conditions for use in comparison with the actual component flight data. Neural Network hardware processors offer an efficient and cost-effective method to install fault diagnostics in flight systems, permitting on-board diagnostic modeling of very complex subsystems. Task 2C of the ARPA FLASH program is a design demonstration of this diagnostics approach, using the very high speed computation of the Adaptive Solutions Neural Network processor to monitor an advanced Electrohydrostatic control surface actuator linked through a AS-1773A fiber optic bus. This paper describes the design approach and projected performance of this on-line diagnostics system.
Agent Based Fault Tolerance for the Mobile Environment
NASA Astrophysics Data System (ADS)
Park, Taesoon
This paper presents a fault-tolerance scheme based on mobile agents for the reliable mobile computing systems. Mobility of the agent is suitable to trace the mobile hosts and the intelligence of the agent makes it efficient to support the fault tolerance services. This paper presents two approaches to implement the mobile agent based fault tolerant service and their performances are evaluated and compared with other fault-tolerant schemes.
Drilling Automation Demonstrations in Subsurface Exploration for Astrobiology
NASA Technical Reports Server (NTRS)
Glass, Brian; Cannon, H.; Lee, P.; Hanagud, S.; Davis, K.
2006-01-01
This project proposes to study subsurface permafrost microbial habitats at a relevant Arctic Mars-analog site (Haughton Crater, Devon Island, Canada) while developing and maturing the subsurface drilling and drilling automation technologies that will be required by post-2010 missions. It builds on earlier drilling technology projects to add permafrost and ice-drilling capabilities to 5m with a lightweight drill that will be automatically monitored and controlled in-situ. Frozen cores obtained with this drill under sterilized protocols will be used in testing three hypotheses pertaining to near-surface physical geology and ground H2O ice distribution, viewed as a habitat for microbial life in subsurface ice and ice-consolidated sediments. Automation technologies employed will demonstrate hands-off diagnostics and drill control, using novel vibrational dynamical analysis methods and model-based reasoning to monitor and identify drilling fault states before and during faults. Three field deployments, to a Mars-analog site with frozen impact crater fallback breccia, will support science goals, provide a rigorous test of drilling automation and lightweight permafrost drilling, and leverage past experience with the field site s particular logistics.
Geometric incompatibility in a fault system.
Gabrielov, A; Keilis-Borok, V; Jackson, D D
1996-01-01
Interdependence between geometry of a fault system, its kinematics, and seismicity is investigated. Quantitative measure is introduced for inconsistency between a fixed configuration of faults and the slip rates on each fault. This measure, named geometric incompatibility (G), depicts summarily the instability near the fault junctions: their divergence or convergence ("unlocking" or "locking up") and accumulation of stress and deformations. Accordingly, the changes in G are connected with dynamics of seismicity. Apart from geometric incompatibility, we consider deviation K from well-known Saint Venant condition of kinematic compatibility. This deviation depicts summarily unaccounted stress and strain accumulation in the region and/or internal inconsistencies in a reconstruction of block- and fault system (its geometry and movements). The estimates of G and K provide a useful tool for bringing together the data on different types of movement in a fault system. An analog of Stokes formula is found that allows determination of the total values of G and K in a region from the data on its boundary. The phenomenon of geometric incompatibility implies that nucleation of strong earthquakes is to large extent controlled by processes near fault junctions. The junctions that have been locked up may act as transient asperities, and unlocked junctions may act as transient weakest links. Tentative estimates of K and G are made for each end of the Big Bend of the San Andreas fault system in Southern California. Recent strong earthquakes Landers (1992, M = 7.3) and Northridge (1994, M = 6.7) both reduced K but had opposite impact on G: Landers unlocked the area, whereas Northridge locked it up again. Images Fig. 1 Fig. 2 PMID:11607673
Bound states for magic state distillation in fault-tolerant quantum computation.
Campbell, Earl T; Browne, Dan E
2010-01-22
Magic state distillation is an important primitive in fault-tolerant quantum computation. The magic states are pure nonstabilizer states which can be distilled from certain mixed nonstabilizer states via Clifford group operations alone. Because of the Gottesman-Knill theorem, mixtures of Pauli eigenstates are not expected to be magic state distillable, but it has been an open question whether all mixed states outside this set may be distilled. In this Letter we show that, when resources are finitely limited, nondistillable states exist outside the stabilizer octahedron. In analogy with the bound entangled states, which arise in entanglement theory, we call such states bound states for magic state distillation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, E.W.
1996-09-01
The San Antonio relay ramp, a gentle southwest-dipping monocline, formed between the tips of two en echelon master faults having maximum throws of >240 in. Structural analysis of this relay ramp is important to studies of Edwards aquifer recharge and ground-water flow because the ramp is an area of relatively good stratal continuity linking the outcrop belt recharge zone and unconfined aquifer with the downdip confined aquifer. Part of the relay ramp lies within the aquifer recharge zone and is crossed by several southeast-draining creeks, including Salado, Cibolo, and Comal Creeks, that supply water to the ramp recharge area. Thismore » feature is an analog for similar structures within the aquifer and for potential targets for hydrocarbons in other Gulf Coast areas. Defining the ramp is an {approximately}13-km-wide right step of the Edwards Group outcrop belt and the en echelon master faults that bound the ramp. The master faults strike N55-75{degrees}E, and maximum displacement exceeds the {approximately}165-m thickness of the Edwards Group strata. The faults therefore probably serve as barriers to Edwards ground-water flow. Within the ramp, tilted strata gently dip southwestward at {approximately}5 m/km, and the total structural relief along the ramp`s southwest-trending axis is <240 in. The ramp`s internal framework is defined by three fault blocks that are {approximately}4 to {approximately}6 km wide and are bound by northeast-striking faults having maximum throws between 30 and 150 m. Within the fault blocks, local areas of high fracture permeability may exist where smaller faults and joints are well connected.« less
Sarpeshkar, R.
2014-01-01
We analyse the pros and cons of analog versus digital computation in living cells. Our analysis is based on fundamental laws of noise in gene and protein expression, which set limits on the energy, time, space, molecular count and part-count resources needed to compute at a given level of precision. We conclude that analog computation is significantly more efficient in its use of resources than deterministic digital computation even at relatively high levels of precision in the cell. Based on this analysis, we conclude that synthetic biology must use analog, collective analog, probabilistic and hybrid analog–digital computational approaches; otherwise, even relatively simple synthetic computations in cells such as addition will exceed energy and molecular-count budgets. We present schematics for efficiently representing analog DNA–protein computation in cells. Analog electronic flow in subthreshold transistors and analog molecular flux in chemical reactions obey Boltzmann exponential laws of thermodynamics and are described by astoundingly similar logarithmic electrochemical potentials. Therefore, cytomorphic circuits can help to map circuit designs between electronic and biochemical domains. We review recent work that uses positive-feedback linearization circuits to architect wide-dynamic-range logarithmic analog computation in Escherichia coli using three transcription factors, nearly two orders of magnitude more efficient in parts than prior digital implementations. PMID:24567476
Enhanced Control for Local Helicity Injection on the Pegasus ST
NASA Astrophysics Data System (ADS)
Pierren, C.; Bongard, M. W.; Fonck, R. J.; Lewicki, B. T.; Perry, J. M.
2017-10-01
Local helicity injection (LHI) experiments on Pegasus rely upon programmable control of a 250 MVA modular power supply system that drives the electromagnets and helicity injection systems. Precise control of the central solenoid is critical to experimental campaigns that test the LHI Taylor relaxation limit and the coupling efficiency of LHI-produced plasmas to Ohmic current drive. Enhancement and expansion of the present control system is underway using field programmable gate array (FPGA) technology for digital logic and control, coupled to new 10 MHz optical-to-digital transceivers for semiconductor level device communication. The system accepts optical command signals from existing analog feedback controllers, transmits them to multiple devices in parallel H-bridges, and aggregates their status signals for fault detection. Present device-level multiplexing/de-multiplexing and protection logic is extended to include bridge-level protections with the FPGA. An input command filter protects against erroneous and/or spurious noise generated commands that could otherwise cause device failures. Fault registration and response times with the FPGA system are 25 ns. Initial system testing indicates an increased immunity to power supply induced noise, enabling plasma operations at higher working capacitor bank voltage. This can increase the applied helicity injection drive voltage, enable longer pulse lengths and improve Ohmic loop voltage control. Work supported by US DOE Grant DE-FG02-96ER54375.
Won, Jongho; Ma, Chris Y. T.; Yau, David K. Y.; ...
2016-06-01
Smart meters are integral to demand response in emerging smart grids, by reporting the electricity consumption of users to serve application needs. But reporting real-time usage information for individual households raises privacy concerns. Existing techniques to guarantee differential privacy (DP) of smart meter users either are not fault tolerant or achieve (possibly partial) fault tolerance at high communication overheads. In this paper, we propose a fault-tolerant protocol for smart metering that can handle general communication failures while ensuring DP with significantly improved efficiency and lower errors compared with the state of the art. Our protocol handles fail-stop faults proactively bymore » using a novel design of future ciphertexts, and distributes trust among the smart meters by sharing secret keys among them. We prove the DP properties of our protocol and analyze its advantages in fault tolerance, accuracy, and communication efficiency relative to competing techniques. We illustrate our analysis by simulations driven by real-world traces of electricity consumption.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Won, Jongho; Ma, Chris Y. T.; Yau, David K. Y.
Smart meters are integral to demand response in emerging smart grids, by reporting the electricity consumption of users to serve application needs. But reporting real-time usage information for individual households raises privacy concerns. Existing techniques to guarantee differential privacy (DP) of smart meter users either are not fault tolerant or achieve (possibly partial) fault tolerance at high communication overheads. In this paper, we propose a fault-tolerant protocol for smart metering that can handle general communication failures while ensuring DP with significantly improved efficiency and lower errors compared with the state of the art. Our protocol handles fail-stop faults proactively bymore » using a novel design of future ciphertexts, and distributes trust among the smart meters by sharing secret keys among them. We prove the DP properties of our protocol and analyze its advantages in fault tolerance, accuracy, and communication efficiency relative to competing techniques. We illustrate our analysis by simulations driven by real-world traces of electricity consumption.« less
Room temperature high-fidelity holonomic single-qubit gate on a solid-state spin.
Arroyo-Camejo, Silvia; Lazariev, Andrii; Hell, Stefan W; Balasubramanian, Gopalakrishnan
2014-09-12
At its most fundamental level, circuit-based quantum computation relies on the application of controlled phase shift operations on quantum registers. While these operations are generally compromised by noise and imperfections, quantum gates based on geometric phase shifts can provide intrinsically fault-tolerant quantum computing. Here we demonstrate the high-fidelity realization of a recently proposed fast (non-adiabatic) and universal (non-Abelian) holonomic single-qubit gate, using an individual solid-state spin qubit under ambient conditions. This fault-tolerant quantum gate provides an elegant means for achieving the fidelity threshold indispensable for implementing quantum error correction protocols. Since we employ a spin qubit associated with a nitrogen-vacancy colour centre in diamond, this system is based on integrable and scalable hardware exhibiting strong analogy to current silicon technology. This quantum gate realization is a promising step towards viable, fault-tolerant quantum computing under ambient conditions.
Structural Evolution of Transform Fault Zones in Thick Oceanic Crust of Iceland
NASA Astrophysics Data System (ADS)
Karson, J. A.; Brandsdottir, B.; Horst, A. J.; Farrell, J.
2017-12-01
Spreading centers in Iceland are offset from the regional trend of the Mid-Atlantic Ridge by the Tjörnes Fracture Zone (TFZ) in the north and the South Iceland Seismic Zone (SISZ) in the south. Rift propagation away from the center of the Iceland hotspot, has resulted in migration of these transform faults to the N and S, respectively. As they migrate, new transform faults develop in older crust between offset spreading centers. Active transform faults, and abandoned transform structures left in their wakes, show features that reflect different amounts (and durations) of slip that can be viewed as a series of snapshots of different stages of transform fault evolution in thick, oceanic crust. This crust has a highly anisotropic, spreading fabric with pervasive zones of weakness created by spreading-related normal faults, fissures and dike margins oriented parallel to the spreading centers where they formed. These structures have a strong influence on the mechanical properties of the crust. By integrating available data, we suggest a series of stages of transform development: 1) Formation of an oblique rift (or leaky transform) with magmatic centers, linked by bookshelf fault zones (antithetic strike-slip faults at a high angle to the spreading direction) (Grimsey Fault Zone, youngest part of the TFZ); 2) broad zone of conjugate faulting (tens of km) (Hreppar Block N of the SISZ); 3) narrower ( 20 km) zone of bookshelf faulting aligned with the spreading direction (SISZ); 4) mature, narrow ( 1 km) through-going transform fault zone bounded by deformation (bookshelf faulting and block rotations) distributed over 10 km to either side (Húsavík-Flatey Fault Zone in the TFZ). With progressive slip, the transform zone becomes progressively narrower and more closely aligned with the spreading direction. The transform and non-transform (beyond spreading centers) domains may be truncated by renewed propagation and separated by subsequent spreading. This perspective provides an analog for the evolution of migrating transforms along mid-ocean ridge spreading centers or other places where plate boundary rearrangements result in the formation of a new transform fault in highly anisotropic oceanic crust.
Fisher, M.A.; Langenheim, V.E.; Nicholson, C.; Ryan, H.F.; Sliter, R.W.
2009-01-01
During late Mesozoic and Cenozoic time, three main tectonic episodes affected the Southern California offshore area. Each episode imposed its unique structural imprint such that early-formed structures controlled or at least influenced the location and development of later ones. This cascaded structural inheritance greatly complicates analysis of the extent, orientation, and activity of modern faults. These fault attributes play key roles in estimates of earthquake magnitude and recurrence interval. Hence, understanding the earthquake hazard posed by offshore and coastal faults requires an understanding of the history of structural inheritance and modifi-cation. In this report we review recent (mainly since 1987) findings about the tectonic development of the Southern California offshore area and use analog models of fault deformation as guides to comprehend the bewildering variety of offshore structures that developed over time. This report also provides a background in regional tectonics for other chapters in this section that deal with the threat from offshore geologic hazards in Southern California. ?? 2009 The Geological Society of America.
Oak Ridge fault, Ventura fold belt, and the Sisar decollement, Ventura basin, California
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeats, R.S.; Huftile, G.J.; Grigsby, F.B.
1988-12-01
The rootless Ventura Avenue, San Miguelito, and Rincon anticlines (Ventura fold belt) in Pliocene -Pleistocene turbidites are fault-propagation folds related to south-dipping reverse faults rising from a decollement in Miocene shale. To the east, the Sulfur Mountain anticlinorium overlies and is cut by the Sisar, Big Canyon, and Lion south-dipping thrusts that merge downward into the Sisar decollement in lower Miocene shale. Shortening of the Miocene and younger sequence is {approximately} 3 km greater than that of underlying competent Paleogens strata in the Ventura fold belt and {approximately} 7 km greater farther east at Sulfur Mountain. Cross-section balancing requires thatmore » this difference be taken up by the Paleogene sequence at the Oak Ridge fault to the south. Convergence is northeast to north-northeast on the base of earthquake focal mechanisms, borehole breakouts, and piercing-point offest of the South Mountain seaknoll by the Oak Ridge fault. A northeast-trending line connecting the west end of Oak Ridge and the east end of Sisar fault separates an eastern domain where late Quaternary displacement is taken up entirely on the Oak Ridge fault and a western domain where displacement is transferred to the Sisar decollement and its overlying rootless folds. This implies that (1) the Oak Ridge fault near the coast presents as much seismic risk as it does farther east, despite negligible near-surface late Quaternary movement; (2) ground-rupture hazard is high for the Sisar fault set in the upper Ojai Valley; and (3) the decollement itself could produce an earthquake analogous to the 1987 Whittier Narrows event in Low Angeles.« less
Latest Progress of Fault Detection and Localization in Complex Electrical Engineering
NASA Astrophysics Data System (ADS)
Zhao, Zheng; Wang, Can; Zhang, Yagang; Sun, Yi
2014-01-01
In the researches of complex electrical engineering, efficient fault detection and localization schemes are essential to quickly detect and locate faults so that appropriate and timely corrective mitigating and maintenance actions can be taken. In this paper, under the current measurement precision of PMU, we will put forward a new type of fault detection and localization technology based on fault factor feature extraction. Lots of simulating experiments indicate that, although there are disturbances of white Gaussian stochastic noise, based on fault factor feature extraction principal, the fault detection and localization results are still accurate and reliable, which also identifies that the fault detection and localization technology has strong anti-interference ability and great redundancy.
Redundancy management for efficient fault recovery in NASA's distributed computing system
NASA Technical Reports Server (NTRS)
Malek, Miroslaw; Pandya, Mihir; Yau, Kitty
1991-01-01
The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding of computational graphs of tasks in the system architecture and reconfiguration of these tasks after a failure has occurred. The computational structure represented by a path and the complete binary tree was considered and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
Marlow, M. S.; Hart, P.E.; Carlson, P.R.; Childs, J. R.; Mann, D. M.; Anima, R.J.; Kayen, R.E.
1996-01-01
We collected high-resolution seismic reflection profiles in the southern part of San Francisco Bay in 1992 and 1993 to investigate possible Holocene faulting along postulated transbay bedrock fault zones. The initial analog records show apparent offsets of reflection packages along sharp vertical boundaries. These records were originally interpreted as showing a complex series of faults along closely spaced, sharp vertical boundaries in the upper 10 m (0.013 s two-way travel time) of Holocene bay mud. A subsequent survey in 1994 was run with a different seismic reflection system, which utilized a higher power source. This second system generated records with deeper penetration (max. 20 m, 0.026 s two-way travel time) and demonstrated that the reflections originally interpreted as fault offsets by faulting were actually laterally continuous reflection horizons. The pitfall in the original interpretations was caused by lateral variations in the amplitude brightness of reflection events, coupled with a long (greater than 15 ms) source signature of the low-power system. These effects combined to show apparent offsets of reflection packages along sharp vertical boundaries. These boundaries, as shown by the second system, in fact occur where the reflection amplitude diminishes abruptly on laterally continuous reflection events. This striking lateral variation in reflection amplitude is attributable to the localized presence of biogenic(?) gas.
Slip re-orientation in the oblique Abiquiu embayment, northern Rio Grande rift
NASA Astrophysics Data System (ADS)
Liu, Y.; Murphy, M. A.; Andrea, R. A.
2015-12-01
Traditional models of oblique rifting predict that an oblique fault accommodates both dip-slip and strike-slip kinematics. However, recent analog experiments suggest that slip can be re-oriented to almost pure dip-slip on oblique faults if a preexisting weak zone is present at the onset of oblique extension. In this study, we use fault slip data from the Abiquiu embayment in northern Rio Grande rift to test the new model. The Rio Grande rift is a Cenozoic oblique rift extending from southern Colorado to New Mexico. From north to south, it comprises three major half grabens (San Luis, Española, and Albuquerque). The Abiquiu embayment is a sub-basin of the San Luis basin in northern New Mexico. Rift-border faults are generally older and oblique to the trend of the rift, whereas internal faults are younger and approximately N-S striking, i.e. orthogonal to the regional extension direction. Rift-border faults are deep-seated in the basement rocks while the internal faults only cut shallow stratigraphic sections. It has been suggested by many that inherited structures may influence the Rio Grande rifting. Particularly, Laramide structures (and possibly the Ancestral Rockies as well) that bound the Abiquiu embayment strike N- to NW. Our data show that internal faults in the Abiquiu embayment exhibit almost pure dip-slip (rake of slickenlines = 90º ± 15º), independent of their orientations with respect to the regional extension direction. On the contrary, border faults show two sets of rakes: almost pure dip-slip (rake = 90º ± 15º) where the fault is sub-parallel to the foliation, and moderately-oblique (rake = 30º ± 15º) where the fault is high angle to the foliation. We conclude that slip re-orientation occurs on most internal faults and some oblique border faults under the influence of inherited structures. Regarding those border faults on which slip is not re-oriented, we hypothesize that it may be caused by the Jemez volcanism or small-scale mantle convection.
Distribution, morphology, and origins of Martian pit crater chains
NASA Astrophysics Data System (ADS)
Wyrick, Danielle; Ferrill, David A.; Morris, Alan P.; Colton, Shannon L.; Sims, Darrell W.
2004-06-01
Pit craters are circular to elliptical depressions found in alignments (chains), which in many cases coalesce into linear troughs. They are common on the surface of Mars and similar to features observed on Earth and other terrestrial bodies. Pit craters lack an elevated rim, ejecta deposits, or lava flows that are associated with impact craters or calderas. It is generally agreed that the pits are formed by collapse into a subsurface cavity or explosive eruption. Hypotheses regarding the formation of pit crater chains require development of a substantial subsurface void to accommodate collapse of the overlying material. Suggested mechanisms of formation include: collapsed lava tubes, dike swarms, collapsed magma chamber, substrate dissolution (analogous to terrestrial karst), fissuring beneath loose material, and dilational faulting. The research described here is intended to constrain current interpretations of pit crater chain formation by analyzing their distribution and morphology. The western hemisphere of Mars was systematically mapped using Mars Orbiter Camera (MOC) images to generate ArcView™ Geographic Information System (GIS) coverages. All visible pit crater chains were mapped, including their orientations and associations with other structures. We found that pit chains commonly occur in areas that show regional extension or local fissuring. There is a strong correlation between pit chains and fault-bounded grabens. Frequently, there are transitions along strike from (1) visible faulting to (2) faults and pits to (3) pits alone. We performed a detailed quantitative analysis of pit crater morphology using MOC narrow angle images, Thermal Emission Imaging System (THEMIS) visual images, and Mars Orbiter Laser Altimeter (MOLA) data. This allowed us to determine a pattern of pit chain evolution and calculate pit depth, slope, and volume. Volumes of approximately 150 pits from five areas were calculated to determine volume size distribution and regional trends. The information collected in the study was then compared with non-Martian examples of pit chains and physical analog models. We evaluated the various mechanisms for pit chain development based on the data collected and conclude that dilational normal faulting and sub-vertical fissuring provide the simplest and most comprehensive mechanisms to explain the regional associations, detailed geometry, and progression of pit chain development.
An Efficient Algorithm for Server Thermal Fault Diagnosis Based on Infrared Image
NASA Astrophysics Data System (ADS)
Liu, Hang; Xie, Ting; Ran, Jian; Gao, Shan
2017-10-01
It is essential for a data center to maintain server security and stability. Long-time overload operation or high room temperature may cause service disruption even a server crash, which would result in great economic loss for business. Currently, the methods to avoid server outages are monitoring and forecasting. Thermal camera can provide fine texture information for monitoring and intelligent thermal management in large data center. This paper presents an efficient method for server thermal fault monitoring and diagnosis based on infrared image. Initially thermal distribution of server is standardized and the interest regions of the image are segmented manually. Then the texture feature, Hu moments feature as well as modified entropy feature are extracted from the segmented regions. These characteristics are applied to analyze and classify thermal faults, and then make efficient energy-saving thermal management decisions such as job migration. For the larger feature space, the principal component analysis is employed to reduce the feature dimensions, and guarantee high processing speed without losing the fault feature information. Finally, different feature vectors are taken as input for SVM training, and do the thermal fault diagnosis after getting the optimized SVM classifier. This method supports suggestions for optimizing data center management, it can improve air conditioning efficiency and reduce the energy consumption of the data center. The experimental results show that the maximum detection accuracy is 81.5%.
Diagnosing a Strong-Fault Model by Conflict and Consistency
Zhou, Gan; Feng, Wenquan
2018-01-01
The diagnosis method for a weak-fault model with only normal behaviors of each component has evolved over decades. However, many systems now demand a strong-fault models, the fault modes of which have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible fault and the process can be expedited when some observed output is consistent with the model’s prediction where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. At the beginning, the original a strong-fault model is encoded by Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over CNF and find multiple minimal conflicts and maximal consistencies when there exists fault. The search approaches offer the best candidate efficiency based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strength and weakness. Finally, the proposed approaches are demonstrated by applying them to a real-world domain—the heat control unit of a spacecraft—where the proposed methods are significantly better than best first and conflict directly with A* search methods. PMID:29596302
NASA Astrophysics Data System (ADS)
Ming, A. B.; Qin, Z. Y.; Zhang, W.; Chu, F. L.
2013-12-01
Bearing failure is one of the most common reasons of machine breakdowns and accidents. Therefore, the fault diagnosis of rolling element bearings is of great significance to the safe and efficient operation of machines owing to its fault indication and accident prevention capability in engineering applications. Based on the orthogonal projection theory, a novel method is proposed to extract the fault characteristic frequency for the incipient fault diagnosis of rolling element bearings in this paper. With the capability of exposing the oscillation frequency of the signal energy, the proposed method is a generalized form of the squared envelope analysis and named as spectral auto-correlation analysis (SACA). Meanwhile, the SACA is a simplified form of the cyclostationary analysis as well and can be iteratively carried out in applications. Simulations and experiments are used to evaluate the efficiency of the proposed method. Comparing the results of SACA, the traditional envelope analysis and the squared envelope analysis, it is found that the result of SACA is more legible due to the more prominent harmonic amplitudes of the fault characteristic frequency and that the SACA with the proper iteration will further enhance the fault features.
The Fault Block Model: A novel approach for faulted gas reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ursin, J.R.; Moerkeseth, P.O.
1994-12-31
The Fault Block Model was designed for the development of gas production from Sleipner Vest. The reservoir consists of marginal marine sandstone of Hugine Formation. Modeling of highly faulted and compartmentalized reservoirs is severely impeded by the nature and extent of known and undetected faults and, in particular, their effectiveness as flow barrier. The model presented is efficient and superior to other models, for highly faulted reservoir, i.e. grid based simulators, because it minimizes the effect of major undetected faults and geological uncertainties. In this article the authors present the Fault Block Model as a new tool to better understandmore » the implications of geological uncertainty in faulted gas reservoirs with good productivity, with respect to uncertainty in well coverage and optimum gas recovery.« less
NASA Astrophysics Data System (ADS)
Blank, D. G.; Morgan, J.
2017-12-01
Large earthquakes that occur on convergent plate margin interfaces have the potential to cause widespread damage and loss of life. Recent observations reveal that a wide range of different slip behaviors take place along these megathrust faults, which demonstrate both their complexity, and our limited understanding of fault processes and their controls. Numerical modeling provides us with a useful tool that we can use to simulate earthquakes and related slip events, and to make direct observations and correlations among properties and parameters that might control them. Further analysis of these phenomena can lead to a more complete understanding of the underlying mechanisms that accompany the nucleation of large earthquakes, and what might trigger them. In this study, we use the discrete element method (DEM) to create numerical analogs to subduction megathrusts with heterogeneous fault friction. Displacement boundary conditions are applied in order to simulate tectonic loading, which in turn, induces slip along the fault. A wide range of slip behaviors are observed, ranging from creep to stick slip. We are able to characterize slip events by duration, stress drop, rupture area, and slip magnitude, and to correlate the relationships among these quantities. These characterizations allow us to develop a catalog of rupture events both spatially and temporally, for comparison with slip processes on natural faults.
The Classroom Sandbox: A Physical Model for Scientific Inquiry
ERIC Educational Resources Information Center
Feldman, Allan; Cooke, Michele L.; Ellsworth, Mary S.
2010-01-01
For scientists, the sandbox serves as an analog for faulting in Earth's crust. Here, the large, slow processes within the crust can be scaled to the size of a table, and time scales are directly observable. This makes it a useful tool for demonstrating the role of inquiry in science. For this reason, the sandbox is also helpful for learning…
Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory
Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong
2016-01-01
Sensor data fusion plays an important role in fault diagnosis. Dempster–Shafer (D-R) evidence theory is widely used in fault diagnosis, since it is efficient to combine evidence from different sensors. However, under the situation where the evidence highly conflicts, it may obtain a counterintuitive result. To address the issue, a new method is proposed in this paper. Not only the statistic sensor reliability, but also the dynamic sensor reliability are taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflict evidence by assigning different weights to evidence according to sensor reliability. The proposed method has better performance in conflict management and fault diagnosis due to the fact that the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods. PMID:26797611
Three dimensional modelling of earthquake rupture cycles on frictional faults
NASA Astrophysics Data System (ADS)
Simpson, Guy; May, Dave
2017-04-01
We are developing an efficient MPI-parallel numerical method to simulate earthquake sequences on preexisting faults embedding within a three dimensional viscoelastic half-space. We solve the velocity form of the elasto(visco)dynamic equations using a continuous Galerkin Finite Element Method on an unstructured pentahedral mesh, which thus permits local spatial refinement in the vicinity of the fault. Friction sliding is coupled to the viscoelastic solid via rate- and state-dependent friction laws using the split-node technique. Our coupled formulation employs a picard-type non-linear solver with a fully implicit, first order accurate time integrator that utilises an adaptive time step that efficiently evolves the system through multiple seismic cycles. The implementation leverages advanced parallel solvers, preconditioners and linear algebra from the Portable Extensible Toolkit for Scientific Computing (PETSc) library. The model can treat heterogeneous frictional properties and stress states on the fault and surrounding solid as well as non-planar fault geometries. Preliminary tests show that the model successfully reproduces dynamic rupture on a vertical strike-slip fault in a half-space governed by rate-state friction with the ageing law.
Radiation efficiency of earthquake sources at different hierarchical levels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kocharyan, G. G., E-mail: gevorgkidg@mail.ru; Moscow Institute of Physics and Technology
Such factors as earthquake size and its mechanism define common trends in alteration of radiation efficiency. The macroscopic parameter that controls the efficiency of a seismic source is stiffness of fault or fracture. The regularities of this parameter alteration with scale define several hierarchical levels, within which earthquake characteristics obey different laws. Small variations of physical and mechanical properties of the fault principal slip zone can lead to dramatic differences both in the amplitude of released stress and in the amount of radiated energy.
NASA Technical Reports Server (NTRS)
Belcastro, Celeste M.
1989-01-01
Digital control systems for applications such as aircraft avionics and multibody systems must maintain adequate control integrity in adverse as well as nominal operating conditions. For example, control systems for advanced aircraft, and especially those with relaxed static stability, will be critical to flight and will, therefore, have very high reliability specifications which must be met regardless of operating conditions. In addition, multibody systems such as robotic manipulators performing critical functions must have control systems capable of robust performance in any operating environment in order to complete the assigned task reliably. Severe operating conditions for electronic control systems can result from electromagnetic disturbances caused by lightning, high energy radio frequency (HERF) transmitters, and nuclear electromagnetic pulses (NEMP). For this reason, techniques must be developed to evaluate the integrity of the control system in adverse operating environments. The most difficult and illusive perturbations to computer-based control systems that can be caused by an electromagnetic environment (EME) are functional error modes that involve no component damage. These error modes are collectively known as upset, can occur simultaneously in all of the channels of a redundant control system, and are software dependent. Upset studies performed to date have not addressed the assessment of fault tolerant systems and do not involve the evaluation of a control system operating in a closed-loop with the plant. A methodology for performing a real-time simulation of the closed-loop dynamics of a fault tolerant control system with a simulated plant operating in an electromagnetically harsh environment is presented. In particular, considerations for performing upset tests on the controller are discussed. Some of these considerations are the generation and coupling of analog signals representative of electromagnetic disturbances to a control system under test, analog data acquisition, and digital data acquisition from fault tolerant systems. In addition, a case study of an upset test methodology for a fault tolerant electromagnetic aircraft engine control system is presented.
NASA Astrophysics Data System (ADS)
Olive, J. A. L.; Escartin, J.; Leclerc, F.; Garcia, R.; Gracias, N.; Odemar Science Party, T.
2016-12-01
While >70% of Earth's seismicity is submarine, almost all observations of earthquake-related ruptures and surface deformation are restricted to subaerial environments. Such observations are critical for understanding fault behavior and associated hazards (including tsunamis), but are not routinely conducted at the seafloor due to obvious constraints. During the 2013 ODEMAR cruise we used autonomous and remotely operated vehicles to map the Roseau normal Fault (Lesser Antilles), source of the 2004 Mw6.3 earthquake and associated tsunami (<3.5m run-up). These vehicles acquired acoustic (multibeam bathymetry) and optical data (video and electronic images) spanning from regional (>1 km) to outcrop (<1 m) scales. These high-resolution submarine observations, analogous to those routinely conducted subaerially, rely on advanced image and video processing techniques, such as mosaicking and structure-from-motion (SFM). We identify sub-vertical fault slip planes along the Roseau scarp, displaying coseismic deformation structures undoubtedly due to the 2004 event. First, video mosaicking allows us to identify the freshly exposed fault plane at the base of one of these scarps. A maximum vertical coseismic displacement of 0.9 m can be measured from the video-derived terrain models and the texture-mapped imagery, which have better resolution than any available acoustic systems (<10 cm). Second, seafloor photomosaics allow us to identify and map both additional sub-vertical fault scarps, and cracks and fissures at their base, recording hangingwall damage from the same event. These observations provide critical parameters to understand the seismic cycle and long-term seismic behavior of this submarine fault. Our work demonstrates the feasibility of extensive, high-resolution underwater surveys using underwater vehicles and novel imaging techniques, thereby opening new possibilities to study recent seafloor changes associated with tectonic, volcanic, or hydrothermal activity.
Eastern Sahara Geology from Orbital Radar: Potential Analog to Mars
NASA Technical Reports Server (NTRS)
Farr, T. G.; Paillou, P.; Heggy, E.
2004-01-01
Much of the surface of Mars has been intensely reworked by aeolian processes and key evidence about the history of the Martian environment seems to be hidden beneath a widespread layer of debris (paleo lakes and rivers, faults, impact craters). In the same way, the recent geological and hydrological history of the eastern Sahara is still mainly hidden under large regions of wind-blown sand which represent a possible terrestrial analog to Mars. The subsurface geology there is generally invisible to optical remote sensing techniques, but radar images obtained from the Shuttle Imaging Radar (SIR) missions were able to penetrate the superficial sand layer to reveal parts of paleohydrological networks in southern Egypt.
DAME: planetary-prototype drilling automation.
Glass, B; Cannon, H; Branson, M; Hanagud, S; Paulsen, G
2008-06-01
We describe results from the Drilling Automation for Mars Exploration (DAME) project, including those of the summer 2006 tests from an Arctic analog site. The drill hardware is a hardened, evolved version of the Advanced Deep Drill by Honeybee Robotics. DAME has developed diagnostic and executive software for hands-off surface operations of the evolved version of this drill. The DAME drill automation tested from 2004 through 2006 included adaptively controlled drilling operations and the downhole diagnosis of drilling faults. It also included dynamic recovery capabilities when unexpected failures or drilling conditions were discovered. DAME has developed and tested drill automation software and hardware under stressful operating conditions during its Arctic field testing campaigns at a Mars analog site.
DAME: Planetary-Prototype Drilling Automation
NASA Astrophysics Data System (ADS)
Glass, B.; Cannon, H.; Branson, M.; Hanagud, S.; Paulsen, G.
2008-06-01
We describe results from the Drilling Automation for Mars Exploration (DAME) project, including those of the summer 2006 tests from an Arctic analog site. The drill hardware is a hardened, evolved version of the Advanced Deep Drill by Honeybee Robotics. DAME has developed diagnostic and executive software for hands-off surface operations of the evolved version of this drill. The DAME drill automation tested from 2004 through 2006 included adaptively controlled drilling operations and the downhole diagnosis of drilling faults. It also included dynamic recovery capabilities when unexpected failures or drilling conditions were discovered. DAME has developed and tested drill automation software and hardware under stressful operating conditions during its Arctic field testing campaigns at a Mars analog site.
Anderson, M.; Matti, J.; Jachens, R.
2004-01-01
The San Bernardino basin is an area of Quaternary extension between the San Jacinto and San Andreas Fault zones in southern California. New gravity data are combined with aeromagnetic data to produce two- and three-dimensional models of the basin floor. These models are used to identify specific faults that have normal displacements. In addition, aeromagnetic maps of the basin constrain strike-slip offset on many faults. Relocated seismicity, focal mechanisms, and a seismic reflection profile for the basin area support interpretations of the gravity and magnetic anomalies. The shape of the basin revealed by our interpretations is different from past interpretations, broadening its areal extent while confining the deepest parts to an area along the modern San Jacinto fault, west of the city of San Bernardino. Through these geophysical observations and related geologic information, we propose a model for the development of the basin. The San Jacinto fault-related strike-slip displacements started on fault strands in the basin having a stepping geometry thus forming a pull-apart graben, and finally cut through the graben in a simpler, bending geometry. In this model, the San Bernardino strand of the San Andreas Fault has little influence on the formation of the basin. The deep, central part of the basin resembles classic pull-apart structures and our model describes a high level of detail for this structure that can be compared to other pull-apart structures as well as analog and numerical models in order to better understand timing and kinematics of pull-apart basin formation. Copyright 2004 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Anderson, Megan; Matti, Jonathan; Jachens, Robert
2004-04-01
The San Bernardino basin is an area of Quaternary extension between the San Jacinto and San Andreas Fault zones in southern California. New gravity data are combined with aeromagnetic data to produce two- and three-dimensional models of the basin floor. These models are used to identify specific faults that have normal displacements. In addition, aeromagnetic maps of the basin constrain strike-slip offset on many faults. Relocated seismicity, focal mechanisms, and a seismic reflection profile for the basin area support interpretations of the gravity and magnetic anomalies. The shape of the basin revealed by our interpretations is different from past interpretations, broadening its areal extent while confining the deepest parts to an area along the modern San Jacinto fault, west of the city of San Bernardino. Through these geophysical observations and related geologic information, we propose a model for the development of the basin. The San Jacinto fault-related strike-slip displacements started on fault strands in the basin having a stepping geometry thus forming a pull-apart graben, and finally cut through the graben in a simpler, bending geometry. In this model, the San Bernardino strand of the San Andreas Fault has little influence on the formation of the basin. The deep, central part of the basin resembles classic pull-apart structures and our model describes a high level of detail for this structure that can be compared to other pull-apart structures as well as analog and numerical models in order to better understand timing and kinematics of pull-apart basin formation.
NASA Astrophysics Data System (ADS)
Li, De Z.; Wang, Wilson; Ismail, Fathy
2017-11-01
Induction motors (IMs) are commonly used in various industrial applications. To improve energy consumption efficiency, a reliable IM health condition monitoring system is very useful to detect IM fault at its earliest stage to prevent operation degradation, and malfunction of IMs. An intelligent harmonic synthesis technique is proposed in this work to conduct incipient air-gap eccentricity fault detection in IMs. The fault harmonic series are synthesized to enhance fault features. Fault related local spectra are processed to derive fault indicators for IM air-gap eccentricity diagnosis. The effectiveness of the proposed harmonic synthesis technique is examined experimentally by IMs with static air-gap eccentricity and dynamic air-gap eccentricity states under different load conditions. Test results show that the developed harmonic synthesis technique can extract fault features effectively for initial IM air-gap eccentricity fault detection.
NO-FAULT COMPENSATION FOR MEDICAL INJURIES: TRENDS AND CHALLENGES.
Kassim, Puteri Nemie
2014-12-01
As an alternative to the tort or fault-based system, a no-fault compensation system has been viewed as having the potential to overcome problems inherent in the tort system by providing fair, speedy and adequate compensation for medically injured victims. Proponents of the suggested no-fault compensation system have argued that this system is more efficient in terms of time and money, as well as in making the circumstances in which compensation is paid, much clearer. However, the arguments against no-fault compensation systems are mainly on issues of funding difficulties, accountability and deterrence, particularly, once fault is taken out of the equation. Nonetheless, the no-fault compensation system has been successfully implemented in various countries but, at the same time, rejected in some others, as not being implementable. In the present trend, the no-fault system seems to fit the needs of society by offering greater access to justice for medically injured victims and providing a clearer "road map" towards obtaining suitable redress. This paper aims at providing the readers with an overview of the characteristics of the no fault compensation system and some examples of countries that have implemented it. Qualitative Research-Content Analysis. Given the many problems and hurdles posed by the tort or fault-based system, it is questionable that it can efficiently play its role as a mechanism that affords fair and adequate compensation for victims of medical injuries. However, while a comprehensive no-fault compensation system offers a tempting alternative to the tort or fault-based system, to import such a change into our local scenario requires a great deal of consideration. There are major differences, mainly in terms of social standing, size of population, political ideology and financial commitment, between Malaysia and countries that have successfully implemented no-fault systems. Nevertheless, implementing a no-fault compensation system in Malaysia is not entirely impossible. A custom-made no-fault model tailored to suit our local scenario can be promising, provided that a thorough research is made, assessing the viability of a no-fault system in Malaysia, addressing the inherent problems and, consequently, designing a workable no-fault system in Malaysia.
Fuzzy-Wavelet Based Double Line Transmission System Protection Scheme in the Presence of SVC
NASA Astrophysics Data System (ADS)
Goli, Ravikumar; Shaik, Abdul Gafoor; Tulasi Ram, Sankara S.
2015-06-01
Increasing the power transfer capability and efficient utilization of available transmission lines, improving the power system controllability and stability, power oscillation damping and voltage compensation have made strides and created Flexible AC Transmission (FACTS) devices in recent decades. Shunt FACTS devices can have adverse effects on distance protection both in steady state and transient periods. Severe under reaching is the most important problem of relay which is caused by current injection at the point of connection to the system. Current absorption of compensator leads to overreach of relay. This work presents an efficient method based on wavelet transforms, fault detection, classification and location using Fuzzy logic technique which is almost independent of fault impedance, fault distance and fault inception angle. The proposed protection scheme is found to be fast, reliable and accurate for various types of faults on transmission lines with and without Static Var compensator at different locations and with various incidence angles.
Thermodynamic method for generating random stress distributions on an earthquake fault
Barall, Michael; Harris, Ruth A.
2012-01-01
This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
Adaptive robust fault-tolerant control for linear MIMO systems with unmatched uncertainties
NASA Astrophysics Data System (ADS)
Zhang, Kangkang; Jiang, Bin; Yan, Xing-Gang; Mao, Zehui
2017-10-01
In this paper, two novel fault-tolerant control design approaches are proposed for linear MIMO systems with actuator additive faults, multiplicative faults and unmatched uncertainties. For time-varying multiplicative and additive faults, new adaptive laws and additive compensation functions are proposed. A set of conditions is developed such that the unmatched uncertainties are compensated by actuators in control. On the other hand, for unmatched uncertainties with their projection in unmatched space being not zero, based on a (vector) relative degree condition, additive functions are designed to compensate for the uncertainties from output channels in the presence of actuator faults. The developed fault-tolerant control schemes are applied to two aircraft systems to demonstrate the efficiency of the proposed approaches.
Normal Faulting in the 1923 Berdún Earthquake and Postorogenic Extension in the Pyrenees
NASA Astrophysics Data System (ADS)
Stich, Daniel; Martín, Rosa; Batlló, Josep; Macià, Ramón; Mancilla, Flor de Lis; Morales, Jose
2018-04-01
The 10 July 1923 earthquake near Berdún (Spain) is the largest instrumentally recorded event in the Pyrenees. We recover old analog seismograms and use 20 hand-digitized waveforms for regional moment tensor inversion. We estimate moment magnitude Mw 5.4, centroid depth of 8 km, and a pure normal faulting source with strike parallel to the mountain chain (N292°E), dip of 66° and rake of -88°. The new mechanism fits into the general predominance of normal faulting in the Pyrenees and extension inferred from Global Positioning System data. The unique location of the 1923 earthquake, near the south Pyrenean thrust front, shows that the extensional regime is not confined to the axial zone where high topography and the crustal root are located. Together with seismicity near the northern mountain front, this indicates that gravitational potential energy in the western Pyrenees is not extracted locally but induces a wide distribution of postorogenic deformation.
NASA Astrophysics Data System (ADS)
Mège, Daniel; Reidel, Stephen P.
The Yakima folds on the central Columbia Plateau are a succession of thrusted anticlines thought to be analogs of planetary wrinkle ridges. They provide a unique opportunity to understand wrinkle ridge structure. Field data and length-displacement scaling are used to demonstrate a method for estimating two-dimensional horizontal contractional strain at wrinkle ridges. Strain is given as a function of ridge length, and depends on other parameters that can be inferred from the Yakima folds and fault population displacement studies. Because ridge length can be readily obtained from orbital imagery, the method can be applied to any wrinkle ridge population, and helps constrain quantitative tectonic models on other planets.
Coherent Oscillations inside a Quantum Manifold Stabilized by Dissipation
NASA Astrophysics Data System (ADS)
Touzard, S.; Grimm, A.; Leghtas, Z.; Mundhada, S. O.; Reinhold, P.; Axline, C.; Reagor, M.; Chou, K.; Blumoff, J.; Sliwa, K. M.; Shankar, S.; Frunzio, L.; Schoelkopf, R. J.; Mirrahimi, M.; Devoret, M. H.
2018-04-01
Manipulating the state of a logical quantum bit (qubit) usually comes at the expense of exposing it to decoherence. Fault-tolerant quantum computing tackles this problem by manipulating quantum information within a stable manifold of a larger Hilbert space, whose symmetries restrict the number of independent errors. The remaining errors do not affect the quantum computation and are correctable after the fact. Here we implement the autonomous stabilization of an encoding manifold spanned by Schrödinger cat states in a superconducting cavity. We show Zeno-driven coherent oscillations between these states analogous to the Rabi rotation of a qubit protected against phase flips. Such gates are compatible with quantum error correction and hence are crucial for fault-tolerant logical qubits.
NASA Technical Reports Server (NTRS)
Wilson, Edward (Inventor)
2008-01-01
The present invention is a method for detecting and isolating fault modes in a system having a model describing its behavior and regularly sampled measurements. The models are used to calculate past and present deviations from measurements that would result with no faults present, as well as with one or more potential fault modes present. Algorithms that calculate and store these deviations, along with memory of when said faults, if present, would have an effect on the said actual measurements, are used to detect when a fault is present. Related algorithms are used to exonerate false fault modes and finally to isolate the true fault mode. This invention is presented with application to detection and isolation of thruster faults for a thruster-controlled spacecraft. As a supporting aspect of the invention, a novel, effective, and efficient filtering method for estimating the derivative of a noisy signal is presented.
A Structural Model Decomposition Framework for Hybrid Systems Diagnosis
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil
2015-01-01
Nowadays, a large number of practical systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete modes of behavior, each defined by a set of continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task very challenging. In this work, we present a new modeling and diagnosis framework for hybrid systems. Models are composed from sets of user-defined components using a compositional modeling approach. Submodels for residual generation are then generated for a given mode, and reconfigured efficiently when the mode changes. Efficient reconfiguration is established by exploiting causality information within the hybrid system models. The submodels can then be used for fault diagnosis based on residual generation and analysis. We demonstrate the efficient causality reassignment, submodel reconfiguration, and residual generation for fault diagnosis using an electrical circuit case study.
NASA Astrophysics Data System (ADS)
Nolan, S.; Jones, C. E.; Munro, R.; Norman, P.; Galloway, S.; Venturumilli, S.; Sheng, J.; Yuan, W.
2017-12-01
Hybrid electric propulsion aircraft are proposed to improve overall aircraft efficiency, enabling future rising demands for air travel to be met. The development of appropriate electrical power systems to provide thrust for the aircraft is a significant challenge due to the much higher required power generation capacity levels and complexity of the aero-electrical power systems (AEPS). The efficiency and weight of the AEPS is critical to ensure that the benefits of hybrid propulsion are not mitigated by the electrical power train. Hence it is proposed that for larger aircraft (~200 passengers) superconducting power systems are used to meet target power densities. Central to the design of the hybrid propulsion AEPS is a robust and reliable electrical protection and fault management system. It is known from previous studies that the choice of protection system may have a significant impact on the overall efficiency of the AEPS. Hence an informed design process which considers the key trades between choice of cable and protection requirements is needed. To date the fault response of a voltage source converter interfaced DC link rail to rail fault in a superconducting power system has only been investigated using simulation models validated by theoretical values from the literature. This paper will present the experimentally obtained fault response for a variety of different types of superconducting tape for a rail to rail DC fault. The paper will then use these as a platform to identify key trades between protection requirements and cable design, providing guidelines to enable future informed decisions to optimise hybrid propulsion electrical power system and protection design.
NASA Astrophysics Data System (ADS)
Hoprich, M.; Decker, K.; Grasemann, B.; Sokoutis, D.; Willingshofer, E.
2009-04-01
Former analog modeling on pull-apart basins dealt with different sidestep geometries, the symmetry and ratio between velocities of moving blocks, the ratio between ductile base and model thickness, the ratio between fault stepover and model thickness and their influence on basin evolution. In all these models the pull-apart basin is deformed over an even detachment. The Vienna basin, however, is considered a classical thin-skinned pull-apart with a rather peculiar basement structure. Deformation and basin evolution are believed to be limited to the brittle upper crust above the Alpine-Carpathian floor thrust. The latter is not a planar detachment surface, but has a ramp-shaped topography draping the underlying former passive continental margin. In order to estimate the effects of this special geometry, nine experiments were accomplished and the resulting structures were compared with the Vienna basin. The key parameters for the models (fault and basin geometry, detachment depth and topography) were inferred from a 3D GoCad model of the natural Vienna basin, which was compiled from seismic, wells and geological cross sections. The experiments were scaled 1:100.000 ("Ramberg-scaling" for brittle rheology) and built of quartz sand (300 µm grain size). An average depth of 6 km (6 cm) was calculated for the basal detachment, distances between the bounding strike-slip faults of 40 km (40 cm) and a finite length of the natural basin of 200 km were estimated (initial model length: 100 cm). The following parameters were changed through the experimental process: (1) syntectonic sedimentation; (2) the stepover angle between bounding strike slip faults and basal velocity discontinuity; (3) moving of one or both fault blocks (producing an asymmetrical or symmetrical basin); (4) inclination of the basal detachment surface by 5°; (6) installation of 2 and 3 ramp systems at the detachment; (7) simulation of a ductile detachment through a 0.4 cm thick PDMS layer at the basin floor. The surface of the model was photographed after each deformation increment through the experiment. Pictures of serial cross sections cut through the models in their final state every 4 cm were also taken and interpreted. The formation of en-echelon normal faults with relay ramps is observed in all models. These faults are arranged in an acute angle to the basin borders, according to a Riedel-geometry. In the case of an asymmetric basin they emerge within the non-moving fault block. Substantial differences between the models are the number, the distance and the angle of these Riedel faults, the length of the bounding strike-slip faults and the cross basin symmetry. A flat detachment produces straight fault traces, whereas inclined detachments (or inclined ramps) lead to "bending" of the normal faults, rollover and growth strata thickening towards the faults. Positions and the sizes of depocenters also vary, with depocenters preferably developing above ramp-flat-transitions. Depocenter thicknesses increase with ramp heights. A similar relation apparently exists in the natural Vienna basin, which shows ramp-like structures in the detachment just underneath large faults like the Steinberg normal fault and the associated depocenters. The 3-ramp-model also reveals segmentation of the basin above the lowermost ramp. The evolving structure is comparable to the Wiener Neustadt sub-basin in the southern part of the Vienna basin, which is underlain by a topographical high of the detachment. Cross sections through the ductile model show a strong disintergration into a horst-and-graben basin. The thin silicon putty base influences the overlying strata in a way that the basin - unlike the "dry" sand models - becomes very flat and shallow. The top view shows an irregular basin shape and no rhombohedral geometry, which characterises the Vienna basin. The ductile base also leads to a symmetrical distribution of deformation on both fault blocks, even though only one fault block is moved. The stepover angle, the influence of gravitation in a ramp or inclined system and the strain accomodation by a viscous silicone layer can be summarized as factors controlling the characteristics of the models.
NASA Technical Reports Server (NTRS)
Momoh, James A.; Wang, Yanchun; Dolce, James L.
1997-01-01
This paper describes the application of neural network adaptive wavelets for fault diagnosis of space station power system. The method combines wavelet transform with neural network by incorporating daughter wavelets into weights. Therefore, the wavelet transform and neural network training procedure become one stage, which avoids the complex computation of wavelet parameters and makes the procedure more straightforward. The simulation results show that the proposed method is very efficient for the identification of fault locations.
NASA Astrophysics Data System (ADS)
Carlson, C. W.; Faulds, J. E.
2014-12-01
Positioned between the Sierra Nevada microplate and Basin and Range in western North America, the Walker Lane (WL) accommodates ~20% of the dextral motion between the North American and Pacific plates on predominately NW-striking dextral and ENE to E-W-striking sinistral fault systems. The Terrill Mountains (TM) lie at the northern terminus of a domain of dextral faults accommodating translation of crustal-blocks in the central WL and at the southeast edge of sinistral faults accommodating oroclinal flexure and CW rotation of blocks in the northern WL. As the mechanisms of strain transfer between these disparate fault systems are poorly understood, the thick Oligocene to Pliocene volcanic strata of the TM area make it an ideal site for studying the transfer of strain between regions undergoing differing styles of deformation and yet both accommodating dextral shear. Detailed geologic mapping and paleomagnetic study of ash-flow tuffs in the TM region has been conducted to elucidate Neogene strain accommodation for this transitional region of the WL. Strain at the northernmost TM appears to be transferred from a system of NW-striking dextral faults to a system of ~E-W striking sinistral faults with associated CW flexure. A distinct ~23 Ma paleosol is locally preserved below the tuff of Toiyabe and provides an important marker bed. This paleosol is offset with ~6 km of dextral separation across the fault bounding the NE flank of the TM. This fault is inferred as the northernmost strand of the NW-striking, dextral Benton Spring fault system, with offset consistent with minimums constrained to the south (6.4-9.6 km, Gabbs Valley Range). Paleomagnetic results suggest counter-intuitive CCW vertical-axis rotation of crustal blocks south of the domain boundary in the system of NW-striking dextral faults, similar to some other domains of NW-striking dextral faults in the northern WL. This may result from coeval dextral shear and WNW-directed extension within the left-stepping system of dextral fault. The left steps are analogous to Riedel shears developing above a more through-going shear zone at depth. However, a site directly adjacent to the Benton Springs fault is rotated ~30° CW, likely due to fault drag. These results show the complex and important contribution of vertical-axis rotations in accommodation of dextral shear.
A 3D modeling approach to complex faults with multi-source data
NASA Astrophysics Data System (ADS)
Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan
2015-04-01
Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to be able to construct complex fault models, however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a workflow of fault modeling, which can integrate multi-source data to construct fault models. For the faults that are not modeled with these data, especially small-scale or approximately parallel with the sections, we propose the fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, using the fault cutting algorithm can supplement the available fault points on the location where faults cut each other. Increasing fault points in poor sample areas can not only efficiently construct fault models, but also reduce manual intervention. By using a fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures no matter whether the available geological data are sufficient or not. A concrete example of using the method in Tangshan, China, shows that the method can be applied to broad and complex geological areas.
Continental Extensional Tectonics in the Basins and Ranges and Aegean Regions: A Review
NASA Astrophysics Data System (ADS)
Cemen, I.
2017-12-01
The Basins and Ranges of North America and the Aegean Region of Eastern Europe and Asia Minor have been long considered as the two best developed examples of continental extension. The two regions contain well-developed normal faults which were considered almost vertical in the 1950s and 1960s. By the mid 1980s, however, overwhelming field evidence emerged to conclude that the dip angle normal faults in the two regions may range from almost vertical to almost horizontal. This led to the discovery that high-grade metamorphic rocks could be brought to surface by the exhumation of mid-crustal rocks along major low-angle normal faults (detachment faults) which were previously either mapped as thrust faults or unconformity. Within the last three decades, our understanding of continental extensional tectonics in the Basins and Ranges and the Aegean Region have improved substantially based on fieldwork, geochemical analysis, analog and computer modeling, detailed radiometric age determinations and thermokinematic modelling. It is now widely accepted that a) Basin and Range extension is controlled by the movement along the San Andreas fault zone as the North American plate moved southeastward with respect to the northwestward movement of the Pacific plate; b) Aegean extension is controlled by subduction roll-back associated with the Hellenic subduction zone; and c) the two regions contain best examples of detachment faulting, extensional folding, and extensional basins. However, there are still many important questions of continental extensional tectonics in the two regions that remain poorly understood. These include determining a) precise amount and percentage of cumulative extension; b) role of strike-slip faulting in the extensional processes; c) exhumation history along detachment surfaces using multimethod geochronology; d) geometry and nature of extensional features in the middle and lower crust; e) the nature of upper mantle and asthenospheric flow; f) evolutions of sedimentary basins associated with dip-slip and strike-slip faults; g) seismic hazards; and i) economic significance of extensional basins.
NASA Astrophysics Data System (ADS)
Kumar, S.; Biswal, S.; Parija, M. P.
2016-12-01
The Himalaya overrides the Indian plate along a decollement fault, referred as the Main Himalayan Thrust (MHT). The 2400 km long Himalayan mountain arc in the northern boundary of the Indian sub-continent is one of the most seismically active regions of the world. The Himalayan Frontal Thrust (HFT) is characterized by an abrupt physiographic and tectonic break between the Himalayan front and the Indo-Gangetic plain. The HFT represents the southern surface expression of the MHT on the Himalayan front. The tectonic zone between the Main Boundary Thrust (MBT) and the HFT encompasses the Himalayan Frontal Fault System (HFFS). The zone indicates late Quaternary-Holocene active deformation. Late Quaternary intramontane basin of Dehradun flanked to the south by the Mohand anticline lies between the MBT and the HFT in Garhwal Sub Himalaya. Slip rate 13-15 mm/yr has been estimated on the HFT based on uplifted strath terrace on the Himalyan front (Wesnousky et al. 2006). An out of sequence active fault, Bhauwala Thrust (BT), is observed between the HFT and the MBT. The Himalayan Frontal Fault System includes MBT, BT, HFT and PF active fault structures (Thakur, 2013). The HFFS structures were developed analogous to proto-thrusts in subduction zone, suggesting that the plate boundary is not a single structure, but series of structures across strike. Seismicity recorded by WIHG shows a concentrated belt of seismic events located in the Main Central Thrust Zone and the physiographic transition zone between the Higher and Lesser Himalaya. However, there is quiescence in the Himalayan frontal zone where surface rupture and active faults are reported. GPS measurements indicate the segment between the southern extent of microseismicity zone and the HFT is locked. The great earthquake originating in the locked segment rupture the plate boundary fault and propagate to the Himalaya front and are registered as surface rupture reactivating the fault in the HFFS.
NASA Astrophysics Data System (ADS)
Goto, J.; Miwa, T.; Tsuchi, H.; Karasaki, K.
2009-12-01
The Nuclear Waste Management Organization of Japan (NUMO), after volunteering municipalities arise, will start a three-staged program for selecting a HLW and TRU waste repository site. It is recognized from experiences from various site characterization programs in the world that the hydrologic property of faults is one of the most important parameters in the early stage of the program. It is expected that numerous faults of interest exist in an investigation area of several tens of square kilometers. It is, however, impossible to characterize all these faults in a limited time and budget. This raises problems in the repository designing and safety assessment that we may have to accept unrealistic or over conservative results by using a single model or parameters for all the faults in the area. We, therefore, seek to develop an efficient and practical methodology to characterize hydrologic property of faults. This project is a five year program started in 2007, and comprises the basic methodology development through literature study and its verification through field investigations. The literature study tries to classify faults by correlating their geological features with hydraulic property, to search for the most efficient technology for fault characterization, and to develop a work flow diagram. The field investigation starts from selection of a site and fault(s), followed by existing site data analyses, surface geophysics, geological mapping, trenching, water sampling, a series of borehole investigations and modeling/analyses. Based on the results of the field investigations, we plan to develop a systematic hydrologic characterization methodology of faults. A classification method that correlates combinations of geological features (rock type, fault displacement, fault type, position in a fault zone, fracture zone width, damage zone width) with widths of high permeability zones around a fault zone was proposed through a survey on available documents of the site characterization programs. The field investigation started in 2008, by selecting the Wildcat Fault that cut across the Laurence Berkeley National Laboratory (LBNL) site as the target. Analyses on site-specific data, surface geophysics, geological mapping and trenching have confirmed the approximate location and characteristics of the fault (see Session H48, Onishi, et al). The plan for the remaining years includes borehole investigations at LBNL, and another series of investigations in the northern part of the Wildcat Fault.
NASA Astrophysics Data System (ADS)
Dooley, T. P.; Monastero, F. C.; McClay, K. R.
2007-12-01
Results of scaled physical models of a releasing bend in the transtensional, dextral strike-slip Coso geothermal system located in the southwest Basin and Range, U.S.A., are instructive for understanding crustal thinning and heat flow in such settings. The basic geometry of the Coso system has been approximated to a 30? dextral releasing stepover. Twenty-four model runs were made representing successive structural iterations that attempted to replicate geologic structures found in the field. The presence of a shallow brittle-ductile transition in the field known from a well-documented seismic-aseismic boundary, was accommodated by inclusion of layers of silicone polymer in the models. A single polymer layer models a conservative brittle-ductile transition in the Coso area at a depth of 6 km. Dual polymer layers impose a local elevation of the brittle-ductile transition to a depth of 4 km. The best match to known geologic structures was achieved with a double layer of silicone polymers with an overlying layer of 100 µm silica sand, a 5° oblique divergent motion across the master strike-slip faults, and a thin-sheet basal rubber décollement. Variation in the relative displacement of the two base plates resulted in some switching in basin symmetry, but the primary structural features remained essentially the same. Although classic, basin-bounding sidewall fault structures found in all pull-apart basin analog models formed in our models, there were also atypical complex intra-basin horst structures that formed where the cross-basin fault zone is situated. These horsts are flanked by deep sedimentary basins that were the locus of maximum crustal thinning accomplished via high-angle extensional and oblique-extensional faults that become progressively more listric with depth as the brittle-ductile transition was approached. Crustal thinning was as much as 50% of the original model depth in dual polymer models. The weak layer at the base of the upper crust appears to focus brittle deformation and facilitate formation of listric normal faults. The implications of these modeling efforts are that: 1) Releasing stepovers that have associated weak upper crust will undergo a more rapid rate of crustal thinning due to the strain focusing effect of this ductile layer; 2) The origin of listric normal faults in these analog models is related to the presence of the weak, ductile layer; and, 3) Due to high dilatency related to major intra-basin extension these stepover structures can be the loci for high heat flow.
NASA Astrophysics Data System (ADS)
Wang, H.; Jing, X. J.
2017-07-01
This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently-proposed concept for fault detection in complex structures, is applied, which consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive threshold are particularly adopted for fault detection due to limited prior knowledge of normal operational conditions and fault conditions. To isolate the multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus to improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. With extensive experimental results, it is validated that the proposed method can localize both single fault and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.
Property-Based Monitoring of Analog and Mixed-Signal Systems
NASA Astrophysics Data System (ADS)
Havlicek, John; Little, Scott; Maler, Oded; Nickovic, Dejan
In the recent past, there has been a steady growth of the market for consumer embedded devices such as cell phones, GPS and portable multimedia systems. In embedded systems, digital, analog and software components are combined on a single chip, resulting in increasingly complex designs that introduce richer functionality on smaller devices. As a consequence, the potential insertion of errors into a design becomes higher, yielding an increasing need for automated analog and mixed-signal validation tools. In the purely digital setting, formal verification based on properties expressed in industrial specification languages such as PSL and SVA is nowadays successfully integrated in the design flow. On the other hand, the validation of analog and mixed-signal systems still largely depends on simulation-based, ad-hoc methods. In this tutorial, we consider some ingredients of the standard verification methodology that can be successfully exported from digital to analog and mixed-signal setting, in particular property-based monitoring techniques. Property-based monitoring is a lighter approach to the formal verification, where the system is seen as a "black-box" that generates sets of traces, whose correctness is checked against a property, that is its high-level specification. Although incomplete, monitoring is effectively used to catch faults in systems, without guaranteeing their full correctness.
Reactive Transport Analysis of Fault 'Self-sealing' Associated with CO2 Storage
NASA Astrophysics Data System (ADS)
Patil, V.; McPherson, B. J. O. L.; Priewisch, A.; Franz, R. J.
2014-12-01
We present an extensive hydrologic and reactive transport analysis of the Little Grand Wash fault zone (LGWF), a natural analog of fault-associated leakage from an engineered CO2 repository. Injecting anthropogenic CO2 into the subsurface is suggested for climate change mitigation. However, leakage of CO2 from its target storage formation into unintended areas is considered as a major risk involved in CO2 sequestration. In the event of leakage, permeability in leakage pathways like faults may get sealed (reduced) due to precipitation or enhanced (increased) due to dissolution reactions induced by CO2-enriched water, thus influencing migration and fate of the CO2. We hypothesize that faults which act as leakage pathways can seal over time in presence of CO2-enriched waters. An example of such a fault 'self-sealing' is found in the LGWF near Green River, Utah in the Paradox basin, where fault outcrop shows surface and sub-surface fractures filled with calcium carbonate (CaCO3). The LGWF cuts through multiple reservoirs and seal layers piercing a reservoir of naturally occurring CO2, allowing it to leak into overlying aquifers. As the CO2-charged water from shallower aquifers migrates towards atmosphere, a decrease in pCO2 leads to supersaturation of water with respect to CaCO3, which precipitates in the fractures of the fault damage zone. In order to test the nature, extent and time-frame of the fault sealing, we developed reactive flow simulations of the LGWF. Model parameters were chosen based on hydrologic measurements from literature. Model geochemistry was constrained by water analysis of the adjacent Crystal Geyser and observations from a scientific drilling test conducted at the site. Precipitation of calcite in the top portion of the fault model led to a decrease in the porosity value of the damage zone, while clay precipitation led to a decrease in the porosity value of the fault core. We found that the results were sensitive to the fault architecture, relative permeability functions, kinetic parameters for mineral reactions and treatment of molecular diffusion. Major conclusions from this analysis are that a failed (leaking) engineered sequestration site may behave very similar to the LGWF and that under similar conditions some faults are likely to seal over time.
NASA Astrophysics Data System (ADS)
Li, Hongguang; Li, Ming; Li, Cheng; Li, Fucai; Meng, Guang
2017-09-01
This paper dedicates on the multi-faults decoupling of turbo-expander rotor system using Differential-based Ensemble Empirical Mode Decomposition (DEEMD). DEEMD is an improved version of DEMD to resolve the imperfection of mode mixing. The nonlinear behaviors of the turbo-expander considering temperature gradient with crack, rub-impact and pedestal looseness faults are investigated respectively, so that the baseline for the multi-faults decoupling can be established. DEEMD is then utilized on the vibration signals of the rotor system with coupling faults acquired by numerical simulation, and the results indicate that DEEMD can successfully decouple the coupling faults, which is more efficient than EEMD. DEEMD is also applied on the vibration signal of the misalignment coupling with rub-impact fault obtained during the adjustment of the experimental system. The conclusion shows that DEEMD can decompose the practical multi-faults signal and the industrial prospect of DEEMD is verified as well.
NASA Astrophysics Data System (ADS)
Lu, Siliang; Wang, Xiaoxian; He, Qingbo; Liu, Fang; Liu, Yongbin
2016-12-01
Transient signal analysis (TSA) has been proven an effective tool for motor bearing fault diagnosis, but has yet to be applied in processing bearing fault signals with variable rotating speed. In this study, a new TSA-based angular resampling (TSAAR) method is proposed for fault diagnosis under speed fluctuation condition via sound signal analysis. By applying the TSAAR method, the frequency smearing phenomenon is eliminated and the fault characteristic frequency is exposed in the envelope spectrum for bearing fault recognition. The TSAAR method can accurately estimate the phase information of the fault-induced impulses using neither complicated time-frequency analysis techniques nor external speed sensors, and hence it provides a simple, flexible, and data-driven approach that realizes variable-speed motor bearing fault diagnosis. The effectiveness and efficiency of the proposed TSAAR method are verified through a series of simulated and experimental case studies.
Telecommunications Systems Career Ladder, AFSC 307XO.
1981-01-01
standard test tone levels perform impulse noise tests make in-service or out-of- service quality check.s on composite signal transmission levels Even...service or out-of- service quality control (QC) reports maintain trouble and restoration record forms (DD Form 1443) direct circuit or system checks...include: perform fault isolation on analog circuits make in-service or out-of- service quality checks on voice frequency carrier telegraph (VFCT) terminals
Natural analogs in the petroleum industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, J.R.
1995-09-01
This article describes the use of natural analogues in petroleum exploration and includes numerous geologic model descriptions which have historically been used in the prediction of geometries and location of oil and gas accumulations. These geologic models have been passed down to and used by succeeding generations of petroleum geologists. Some examples of these geologic models include the Allan fault-plane model, porosity prediction, basin modelling, prediction of basin compartmentalization, and diagenesis.
Evolvable Hardware for Space Applications
NASA Technical Reports Server (NTRS)
Lohn, Jason; Globus, Al; Hornby, Gregory; Larchev, Gregory; Kraus, William
2004-01-01
This article surveys the research of the Evolvable Systems Group at NASA Ames Research Center. Over the past few years, our group has developed the ability to use evolutionary algorithms in a variety of NASA applications ranging from spacecraft antenna design, fault tolerance for programmable logic chips, atomic force field parameter fitting, analog circuit design, and earth observing satellite scheduling. In some of these applications, evolutionary algorithms match or improve on human performance.
Step-by-step magic state encoding for efficient fault-tolerant quantum computation
Goto, Hayato
2014-01-01
Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation. PMID:25511387
Step-by-step magic state encoding for efficient fault-tolerant quantum computation.
Goto, Hayato
2014-12-16
Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation.
NASA Astrophysics Data System (ADS)
Li, L.
2015-12-01
Both the South China Sea and Canada Basin preserve oceanic spreading centres and adjacent passive continental margins characterized by broad COT zones with hyper-extended continental crust. We have investigated the nature of strain accommodation in the regions immediately adjacent to the oceanic spreading centres in these two basins using 2-D backstripping subsidence reconstructions, coupled with forward modelling constrained by estimates of upper crustal extensional faulting. Modelling is better constrained in the South China Sea but our results for the Beaufort Sea are analogous. Depth-dependent extension is required to explain the great depth of both basins because only modest upper crustal faulting is observed. A weak lower crust in the presence of high heat flow is suggested for both basins. Extension in the COT may continue even after sea-floor spreading has ceased. The analogous results for the two basins considered are discussed in terms of (1) constraining the timing and distribution of crustal thinning along the respective continental margins, (2) defining the processes leading to hyper-extension of continental crust in the respective tectonic settings and (3) illuminating the processes that control hyper-extension in these basins and more generally.
NASA Astrophysics Data System (ADS)
Derode, B.; Cappa, F.; Guglielmi, Y.
2012-04-01
The recent observations of non-volcanic tremors (NVT), slow-slip events (SSE), low- (LFE) and very-low (VLF) frequency earthquakes on seismogenic faults reveal that unexpected, large, non-linear transient deformations occur during the interseismic loading of the earthquake cycle. Such phenomena modify stress to the adjacent locked zones bringing them closer to failure. Recent studies indicated various driving factors such as high-fluid pressures and frictional processes. Here we focus on the role of fluids in the different seismic signatures observed in in-situ fractures slip experiments. Experiments were conducted in critically stressed fractures zone at 250 m-depth within the LSBB underground laboratory (south of France). This experiment seeks to explore the field measurements of temporal variations in fluid and stress through continuous monitoring of seismic waves, fluid pressures and mechanical deformations between boreholes and the ground surface. The fluid pressure was increased step-by-step in a fracture isolated between packers until a maximum value of 35 bars; a pressure analog to ones known to trigger earthquakes at crustal depths. We observed in the seismic signals: (1) Tremor-like signatures, (2) Low Frequency signatures, and (3) sudden and short ruptures like micro-earthquakes. By analogy, we suggest that fluid pressures may trigger these different seismic signatures in active faults.
Fault Isolation Filter for Networked Control System with Event-Triggered Sampling Scheme
Li, Shanbin; Sauter, Dominique; Xu, Bugong
2011-01-01
In this paper, the sensor data is transmitted only when the absolute value of difference between the current sensor value and the previously transmitted one is greater than the given threshold value. Based on this send-on-delta scheme which is one of the event-triggered sampling strategies, a modified fault isolation filter for a discrete-time networked control system with multiple faults is then implemented by a particular form of the Kalman filter. The proposed fault isolation filter improves the resource utilization with graceful fault estimation performance degradation. An illustrative example is given to show the efficiency of the proposed method. PMID:22346590
Research of test fault diagnosis method for micro-satellite PSS
NASA Astrophysics Data System (ADS)
Wu, Haichao; Wang, Jinqi; Yang, Zhi; Yan, Meizhi
2017-11-01
Along with the increase in the number of micro-satellite and the shortening of the product's lifecycle, negative effects of satellite ground test failure become more and more serious. Real-time and efficient fault diagnosis becomes more and more necessary. PSS plays an important role in the satellite ground test's safety and reliability as one of the most important subsystems that guarantees the safety of micro-satellite energy. Take test fault diagnosis method of micro-satellite PSS as research object. On the basis of system features of PSS and classic fault diagnosis methods, propose a kind of fault diagnosis method based on the layered and loose coupling way. This article can provide certain reference for fault diagnosis methods research of other subsystems of micro-satellite.
Software For Fault-Tree Diagnosis Of A System
NASA Technical Reports Server (NTRS)
Iverson, Dave; Patterson-Hine, Ann; Liao, Jack
1993-01-01
Fault Tree Diagnosis System (FTDS) computer program is automated-diagnostic-system program identifying likely causes of specified failure on basis of information represented in system-reliability mathematical models known as fault trees. Is modified implementation of failure-cause-identification phase of Narayanan's and Viswanadham's methodology for acquisition of knowledge and reasoning in analyzing failures of systems. Knowledge base of if/then rules replaced with object-oriented fault-tree representation. Enhancement yields more-efficient identification of causes of failures and enables dynamic updating of knowledge base. Written in C language, C++, and Common LISP.
New Support for Hypotheses of an Ancient Ocean on Mars
NASA Technical Reports Server (NTRS)
Oehler, Dorothy Z.; Allen, Carlton C.
2013-01-01
A new analog for the giant polygons in the Chryse-Acidalia area suggests that those features may have formed in a major body of water - likely a Late Hesperian to Early Amazonian ocean. This analog -terrestrial polygons in subsea, passive margin basins derives from 3D seismic data that show similar-scale, polygonal fault systems in the subsurface of more than 50 terrestrial offshore basins. The terrestrial and martian polygons share similar sizes, basin-wide distributions, tectonic settings, and association with expected fine-grained sediments. Late Hesperian deposition from outflow floods may have triggered formation of these polygons, by providing thick, rapidly-deposited, fine-grained sediments necessary for polygonal fracturing. The restriction of densely occurring polygons to elevations below approx. -4000 m to -4100 m supports inferences that a body of water controlled their formation. Those same elevations appear to restrict occurrence of polygons in Utopia Planitia, suggesting that this analog may apply also to Utopia and that similar processes may have occurred across the martian lowlands.
Connecting slow earthquakes to huge earthquakes.
Obara, Kazushige; Kato, Aitaro
2016-07-15
Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.
Multi-thresholds for fault isolation in the presence of uncertainties.
Touati, Youcef; Mellal, Mohamed Arezki; Benazzouz, Djamel
2016-05-01
Monitoring of the faults is an important task in mechatronics. It involves the detection and isolation of faults which are performed by using the residuals. These residuals represent numerical values that define certain intervals called thresholds. In fact, the fault is detected if the residuals exceed the thresholds. In addition, each considered fault must activate a unique set of residuals to be isolated. However, in the presence of uncertainties, false decisions can occur due to the low sensitivity of certain residuals towards faults. In this paper, an efficient approach to make decision on fault isolation in the presence of uncertainties is proposed. Based on the bond graph tool, the approach is developed in order to generate systematically the relations between residuals and faults. The generated relations allow the estimation of the minimum detectable and isolable fault values. The latter is used to calculate the thresholds of isolation for each residual. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Application of composite dictionary multi-atom matching in gear fault diagnosis.
Cui, Lingli; Kang, Chenhui; Wang, Huaqing; Chen, Peng
2011-01-01
The sparse decomposition based on matching pursuit is an adaptive sparse expression method for signals. This paper proposes an idea concerning a composite dictionary multi-atom matching decomposition and reconstruction algorithm, and the introduction of threshold de-noising in the reconstruction algorithm. Based on the structural characteristics of gear fault signals, a composite dictionary combining the impulse time-frequency dictionary and the Fourier dictionary was constituted, and a genetic algorithm was applied to search for the best matching atom. The analysis results of gear fault simulation signals indicated the effectiveness of the hard threshold, and the impulse or harmonic characteristic components could be separately extracted. Meanwhile, the robustness of the composite dictionary multi-atom matching algorithm at different noise levels was investigated. Aiming at the effects of data lengths on the calculation efficiency of the algorithm, an improved segmented decomposition and reconstruction algorithm was proposed, and the calculation efficiency of the decomposition algorithm was significantly enhanced. In addition it is shown that the multi-atom matching algorithm was superior to the single-atom matching algorithm in both calculation efficiency and algorithm robustness. Finally, the above algorithm was applied to gear fault engineering signals, and achieved good results.
NASA Astrophysics Data System (ADS)
Ye, Jiyang; Liu, Mian
2017-08-01
In Southern California, the Pacific-North America relative plate motion is accommodated by the complex southern San Andreas Fault system that includes many young faults (<2 Ma). The initiation of these young faults and their impact on strain partitioning and fault slip rates are important for understanding the evolution of this plate boundary zone and assessing earthquake hazard in Southern California. Using a three-dimensional viscoelastoplastic finite element model, we have investigated how this plate boundary fault system has evolved to accommodate the relative plate motion in Southern California. Our results show that when the plate boundary faults are not optimally configured to accommodate the relative plate motion, strain is localized in places where new faults would initiate to improve the mechanical efficiency of the fault system. In particular, the Eastern California Shear Zone, the San Jacinto Fault, the Elsinore Fault, and the offshore dextral faults all developed in places of highly localized strain. These younger faults compensate for the reduced fault slip on the San Andreas Fault proper because of the Big Bend, a major restraining bend. The evolution of the fault system changes the apportionment of fault slip rates over time, which may explain some of the slip rate discrepancy between geological and geodetic measurements in Southern California. For the present fault configuration, our model predicts localized strain in western Transverse Ranges and along the dextral faults across the Mojave Desert, where numerous damaging earthquakes occurred in recent years.
Energy savings opportunities in the global digital television transition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Won Young; Gopal, Anand; Phadke, Amol
Globally, terrestrial television (TV) broadcasting is in the midst of a complete transition to digital signals. The last analog terrestrial broadcast is expected to be switched off in the early 2020s. This transition presents huge energy savings opportunities that have thus far been ignored. Digital TV switchovers have likely increased energy consumption as countries have completed transitions by providing digital TV converters to analog TV users, which increase energy consumption and extend the life of energy-inefficient analog TVs. We find that if analog TVs were retired at the time of a digital switchover and replaced with super-efficient flat-panel TVs, suchmore » as light-emitting diode (LED) backlit liquid crystal display (LCD) TVs, there is a combined electricity savings potential of 32 terawatt hours [TWh] per year in countries that have not yet completed their digital TV transition. In view of these findings as well as the dramatic drops of super-efficient TV prices and the unique early-retirement opportunity resulting from cessation of terrestrial analog broadcasts, TV-exchange programs would easily and substantially advance energy efficiency.« less
Energy savings opportunities in the global digital television transition
Park, Won Young; Gopal, Anand; Phadke, Amol
2016-12-20
Globally, terrestrial television (TV) broadcasting is in the midst of a complete transition to digital signals. The last analog terrestrial broadcast is expected to be switched off in the early 2020s. This transition presents huge energy savings opportunities that have thus far been ignored. Digital TV switchovers have likely increased energy consumption as countries have completed transitions by providing digital TV converters to analog TV users, which increase energy consumption and extend the life of energy-inefficient analog TVs. We find that if analog TVs were retired at the time of a digital switchover and replaced with super-efficient flat-panel TVs, suchmore » as light-emitting diode (LED) backlit liquid crystal display (LCD) TVs, there is a combined electricity savings potential of 32 terawatt hours [TWh] per year in countries that have not yet completed their digital TV transition. In view of these findings as well as the dramatic drops of super-efficient TV prices and the unique early-retirement opportunity resulting from cessation of terrestrial analog broadcasts, TV-exchange programs would easily and substantially advance energy efficiency.« less
Creeping Guanxian-Anxian Fault ruptured in the 2008 Mw 7.9 Wenchuan earthquake
NASA Astrophysics Data System (ADS)
He, X.; Li, H.; Wang, H.; Zhang, L.; Si, J.
2017-12-01
Crustal active faults can slide either steadily by aseismic creep, or abruptly by earthquake rupture. Creep can relax continuously the stress and reduce the occurrence of large earthquakes. Identifying the behaviors of active faults plays a crucial role in predicting and preventing earthquake disasters. Based on multi-scale structural analyses for fault rocks from the GAF surface rupture zone and the Wenchuan Earthquake Fault Zone Science Drilling borehole 3P, we detect the analogous "mylonite structures" develop pervasively in GAF fault rocks. Such specious "ductile deformations", showing intensive foliation, spindly clasts, tailing structure, "boudin structure", "augen structure" and S-C fabrics, are actually formed in brittle faulting, which indicates the creeping behavior of the GAF. Furthermore, some special structures hint the creeping mechanism. The cracks and veins developed in fractured clasts imply pressure and fluid control in the faulting. Under the effect of fluid, clasts are dissolved in pressing direction, and solutions are transferred to stress vacancy area at both ends of clasts and deposit to regenerate clay minerals. The clasts thus present spindly shape and are surrounded by orientational clay minerals constituting continuous foliation structure. The clay minerals are dominated by phyllosilicates that can weaken faults and promote pressure solution. Therefore, pressure solution creep and phyllosilicates weakening reasonably interpret the creeping of GAF. Additionally, GPS velocity data show slip rates of the GAF are respectively 1.5 and 12 mm/yr during 1998-2008 and 2009-2011, which also indicate the GAF is in creeping during interseismic period. According to analysis on aftershocks distribution and P-wave velocity with depth and geological section in the Longmenshan thrust belt, we suggest the GAF is creeping in shallow (<10 km) and locked in deep (10-20 km). Comprehensive research shows stress propagated from the west was concentrated near the Yingxiu-Beichuan Fault (YBF) and GAF zones. As stress accumulation reached the limit, the YBF and GAF zones were simultaneously ruptured in 2008 Mw 7.9 Wenchuan earthquake, but the rupture area of the GAF was relatively small due to the presence of shallow creep that relaxed the partial stress.
Finite element models of earthquake cycles in mature strike-slip fault zones
NASA Astrophysics Data System (ADS)
Lynch, John Charles
The research presented in this dissertation is on the subject of strike-slip earthquakes and the stresses that build and release in the Earth's crust during earthquake cycles. Numerical models of these cycles in a layered elastic/viscoelastic crust are produced using the finite element method. A fault that alternately sticks and slips poses a particularly challenging problem for numerical implementation, and a new contact element dubbed the "Velcro" element was developed to address this problem (Appendix A). Additionally, the finite element code used in this study was bench-marked against analytical solutions for some simplified problems (Chapter 2), and the resolving power was tested for the fault region of the models (Appendix B). With the modeling method thus developed, there are two main questions posed. First, in Chapter 3, the effect of a finite-width shear zone is considered. By defining a viscoelastic shear zone beneath a periodically slipping fault, it is found that shear stress concentrates at the edges of the shear zone and thus causes the stress tensor to rotate into non-Andersonian orientations. Several methods are used to examine the stress patterns, including the plunge angles of the principal stresses and a new method that plots the stress tensor in a manner analogous to seismic focal mechanism diagrams. In Chapter 4, a simple San Andreas-like model is constructed, consisting of two great earthquake producing faults separated by a freely-slipping shorter fault. The model inputs of lower crustal viscosity, fault separation distance, and relative breaking strengths are examined for their effect on fault communication. It is found that with a lower crustal viscosity of 1018 Pa s (in the lower range of estimates for California), the two faults tend to synchronize their earthquake cycles, even in the cases where the faults have asymmetric breaking strengths. These models imply that postseismic stress transfer over hundreds of kilometers may play a significant roll in the variability of earthquake repeat times. Specifically, small perturbations in the model parameters can lead to results similar to such observed phenomena as earthquake clustering and disruptions to so-called "characteristic" earthquake cycles.
Complex Plate Tectonic Features on Planetary Bodies: Analogs from Earth
NASA Astrophysics Data System (ADS)
Stock, J. M.; Smrekar, S. E.
2016-12-01
We review the types and scales of observations needed on other rocky planetary bodies (e.g., Mars, Venus, exoplanets) to evaluate evidence of present or past plate motions. Earth's plate boundaries were initially simplified into three basic types (ridges, trenches, and transform faults). Previous studies examined the Moon, Mars, Venus, Mercury and icy moons such as Europa, for evidence of features, including linear rifts, arcuate convergent zones, strike-slip faults, and distributed deformation (rifting or folding). Yet, several aspects merit further consideration. 1) Is the feature active or fossil? Earth's active mid ocean ridges are bathymetric highs, and seafloor depth increases on either side; whereas, fossil mid ocean ridges may be as deep as the surrounding abyssal plain with no major rift valley, although with a minor gravity low (e.g., Osbourn Trough, W. Pacific Ocean). Fossil trenches have less topographic relief than active trenches (e.g., the fossil trench along the Patton Escarpment, west of California). 2) On Earth, fault patterns of spreading centers depend on volcanism. Excess volcanism reduced faulting. Fault visibility increases as spreading rates slow, or as magmatism decreases, producing high-angle normal faults parallel to the spreading center. At magma-poor spreading centers, high resolution bathymetry shows low angle detachment faults with large scale mullions and striations parallel to plate motion (e.g., Mid Atlantic Ridge, Southwest Indian Ridge). 3) Sedimentation on Earth masks features that might be visible on a non-erosional planet. Subduction zones on Earth in areas of low sedimentation have clear trench -parallel faults causing flexural deformation of the downgoing plate; in highly sedimented subduction zones, no such faults can be seen, and there may be no bathymetric trench at all. 4) Areas of Earth with broad upwelling, such as the North Fiji Basin, have complex plate tectonic patterns with many individual but poorly linked ridge segments and transform faults. These details and scales of features should be considered in planning future surveys of altimetry, reflectance, magnetics, compositional, and gravity data from other planetary bodies aimed at understanding the link between a planet's surface and interior, whether via plate tectonics or other processes.
NASA Astrophysics Data System (ADS)
Dellong, David; Gutscher, Marc-Andre; Klingelhoefer, Frauke; Graindorge, David; Kopp, Heidrun; Moretti, Milena; Marsset, Bruno; Mercier de Lepinay, Bernard; Dominguez, Stephane; Malavieille, Jacques
2016-04-01
Recently acquired swath bathymetric data in the Ionian Sea document in unprecedented detail the morphostructure and dynamics of the Calabrian accretionary wedge. A boundary zone between the eastern and western lobes of the accretionary wedge is examined here. Relative displacement between the Calabrian and Peloritan backstops is expected to cause dextral strike-slip deformation between the lobes. A wide-angle seismic profile was acquired in Oct. 2014 with the R/V Meteor (DIONYSUS survey) recorded by 25 Ocean-bottom seismometers (Geomar and Ifremer instruments) and 3 land-stations (INGV stations). Inversion and forward modeling of these seismic data reveal a 5-10 km deep asymmetric rift zone between the Malta Escarpment and the SW tip of Calabria. Analog modeling was performed to test if the origin of this rift could be related to the relative kinematics of the Calabrian and Peloritan backstops. Modeling, using two independently moving backstops, produces a zone of dextral transtension and subsidence in the accretionary wedge between two lobes. This corresponds well to the asymmetric rift observed in the southward prolongation of the straits of Messina faults. Paradoxically however, this dextral displacement does not appear to traverse the external Calabrian accretionary wedge, where prominent curved lineaments observed indicate a sinistral sense of motion. One possible explanation is that the dextral kinematic motion is transferred into a region of crisscrossing faults in the internal portion of the Eastern lobe. The bathymetry and high-resolution reflection seismic images indicate ongoing compression at the deformation front of both the western and eastern lobes. Together with the analog modeling results, these observations unambiguously demonstrate that the western lobe remains tectonically active.
NASA Astrophysics Data System (ADS)
Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei
2018-07-01
Condition monitoring and fault diagnosis of rolling element bearings are significant to guarantee the reliability and functionality of a mechanical system, production efficiency, and plant safety. However, this is almost invariably a formidable challenge because the fault features are often buried by strong background noises and other unstable interference components. To satisfactorily extract the bearing fault features, a whale optimization algorithm (WOA)-optimized orthogonal matching pursuit (OMP) with a combined time-frequency atom dictionary is proposed in this paper. Firstly, a combined time-frequency atom dictionary whose atom is a combination of Fourier dictionary atom and impact time-frequency dictionary atom is designed according to the properties of bearing fault vibration signal. Furthermore, to improve the efficiency and accuracy of signal sparse representation, the WOA is introduced into the OMP algorithm to optimize the atom parameters for best approximating the original signal with the dictionary atoms. The proposed method is validated through analyzing the bearing fault simulation signal and the real vibration signals collected from an experimental bearing and a wheelset bearing of high-speed trains. The comparisons with the respect to the state of the art in the field are illustrated in detail, which highlight the advantages of the proposed method.
NASA Astrophysics Data System (ADS)
Ulrich, T.; Gabriel, A. A.
2016-12-01
The geometry of faults is subject to a large degree of uncertainty. As buried structures being not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods, such as reflection seismology. As a consequence, most studies aiming at assessing the potential hazard of faults rely on idealized fault models, based on observable large-scale features. Yet, real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough faults problems are challenging problems that require optimized codes able to run efficiently on high-performance computing infrastructure and simultaneously handle complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer to source models inverted from observation in terms of complexity. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate more accurately seismic hazard. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol allows solving the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time efficiently on large-scale machines. In this study, the influence of fault roughness on dynamic rupture style (e.g. onset of supershear transition, rupture front coherence, propagation of self-healing pulses, etc) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale in terms of rupture inherent length scales below which the rupture ceases to be sensible. Finally, the effect of fault geometry on ground-motions, in the near-field, is considered. Our simulations feature a classical linear slip weakening on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.
Automatic Single Event Effects Sensitivity Analysis of a 13-Bit Successive Approximation ADC
NASA Astrophysics Data System (ADS)
Márquez, F.; Muñoz, F.; Palomo, F. R.; Sanz, L.; López-Morillo, E.; Aguirre, M. A.; Jiménez, A.
2015-08-01
This paper presents Analog Fault Tolerant University of Seville Debugging System (AFTU), a tool to evaluate the Single-Event Effect (SEE) sensitivity of analog/mixed signal microelectronic circuits at transistor level. As analog cells can behave in an unpredictable way when critical areas interact with the particle hitting, there is a need for designers to have a software tool that allows an automatic and exhaustive analysis of Single-Event Effects influence. AFTU takes the test-bench SPECTRE design, emulates radiation conditions and automatically evaluates vulnerabilities using user-defined heuristics. To illustrate the utility of the tool, the SEE sensitivity of a 13-bits Successive Approximation Analog-to-Digital Converter (ADC) has been analysed. This circuit was selected not only because it was designed for space applications, but also due to the fact that a manual SEE sensitivity analysis would be too time-consuming. After a user-defined test campaign, it was detected that some voltage transients were propagated to a node where a parasitic diode was activated, affecting the offset cancelation, and therefore the whole resolution of the ADC. A simple modification of the scheme solved the problem, as it was verified with another automatic SEE sensitivity analysis.
A signal-based fault detection and classification method for heavy haul wagons
NASA Astrophysics Data System (ADS)
Li, Chunsheng; Luo, Shihui; Cole, Colin; Spiryagin, Maksym; Sun, Yanquan
2017-12-01
This paper proposes a signal-based fault detection and isolation (FDI) system for heavy haul wagons considering the special requirements of low cost and robustness. The sensor network of the proposed system consists of just two accelerometers mounted on the front left and rear right of the carbody. Seven fault indicators (FIs) are proposed based on the cross-correlation analyses of the sensor-collected acceleration signals. Bolster spring fault conditions are focused on in this paper, including two different levels (small faults and moderate faults) and two locations (faults in the left and right bolster springs of the first bogie). A fully detailed dynamic model of a typical 40t axle load heavy haul wagon is developed to evaluate the deterioration of dynamic behaviour under proposed fault conditions and demonstrate the detectability of the proposed FDI method. Even though the fault conditions considered in this paper did not deteriorate the wagon dynamic behaviour dramatically, the proposed FIs show great sensitivity to the bolster spring faults. The most effective and efficient FIs are chosen for fault detection and classification. Analysis results indicate that it is possible to detect changes in bolster stiffness of ±25% and identify the fault location.
NASA Astrophysics Data System (ADS)
Morgan, J. K.
2014-12-01
Particle-based numerical simulations allow detailed investigations of small-scale processes and mechanisms associated with fault initiation and slip, which emerge naturally in such models. This study investigates the evolving mechanical conditions and associated micro-mechanisms during transient slip on a weak decollement propagating beneath a growing contractional wedge (e.g., accretionary prism, fold and thrust belt). The models serve as analogs of the seismic cycle, although lacking full earthquake dynamics. Nonetheless, the mechanical evolution of both decollement and upper plate can be monitored, and correlated with the particle-scale physical and contact properties, providing insights into changes that accompany such stick-slip behavior. In this study, particle assemblages consolidated under gravity and bonded to impart cohesion, are pushed at a constant velocity above a weak, unbonded decollement surface. Forward propagation of decollement slip occurs in discrete pulses, modulated by heterogeneous stress conditions (e.g., roughness, contact bridging) along the fault. Passage of decollement slip resets the stress along this horizon, producing distinct patterns: shear stress is enhanced in front of the slipped decollement due to local contact bridging and fault locking; shear stress minima occur immediately above the tip, denoting local stress release and contact reorganization following slip; more mature portions of the fault exhibit intermediate shear stress, reflecting more stable contact force distributions and magnitudes. This pattern of shear stress pre-conditions the decollement for future slip events, which must overcome the high stresses at the fault tip. Long-term slip along the basal decollement induces upper plate contraction. When upper plate stresses reach critical strength conditions, new thrust faults break through the upper plate, relieving stresses and accommodating horizontal shortening. Decollement activity retreats back to the newly formed thrust fault. The cessation of upper plate fault slip causes gradual increases in upper plate stresses, rebuilding shear stresses along the decollement and enabling renewed pulses of decollement slip. Thus, upper plate deformation occurs out of phase with decollement propagation.
NASA Astrophysics Data System (ADS)
Liu, B.; Shi, B.
2010-12-01
An earthquake with ML4.1 occurred at Shacheng, Hebei, China, on July 20, 1995, followed by 28 aftershocks with 0.9≤ML≤4.0 (Chen et al, 2005). According to ZÚÑIGA (1993), for the 1995 ML4.1 Shacheng earthquake sequence, the main shock is corresponding to undershoot, while aftershocks should match overshoot. With the suggestion that the dynamic rupture processes of the overshoot aftershocks could be related to the crack (sub-fault) extension inside the main fault. After main shock, the local stresses concentration inside the fault may play a dominant role in sustain the crack extending. Therefore, the main energy dissipation mechanism should be the aftershocks fracturing process associated with the crack extending. We derived minimum radiation energy criterion (MREC) following variational principle (Kanamori and Rivera, 2004)(ES/M0')min≧[3M0/(ɛπμR3)](v/β)3, where ES and M0' are radiated energy and seismic moment gained from observation, μ is the modulus of fault rigidity, ɛ is the parameter of ɛ=M0'/M0,M0 is seismic moment and R is rupture size on the fault, v and β are rupture speed and S-wave speed. From II and III crack extending model, we attempt to reconcile a uniform expression for calculate seismic radiation efficiency ηG, which can be used to restrict the upper limit efficiency and avoid the non-physics phenomenon that radiation efficiency is larger than 1. In ML 4.1 Shacheng earthquake sequence, the rupture speed of the main shock was about 0.86 of S-wave speed β according to MREC, closing to the Rayleigh wave speed, while the rupture speeds of the remained 28 aftershocks ranged from 0.05β to 0.55β. The rupture speed was 0.9β, and most of the aftershocks are no more than 0.35β using II and III crack extending model. In addition, the seismic radiation efficiencies for this earthquake sequence were: for the most aftershocks, the radiation efficiencies were less than 10%, inferring a low seismic efficiency, whereas the radiation efficiency was 78% for the main shock. The essential difference in the earthquake energy partition for the aftershock source dynamics indicated that the fracture energy dissipation could not be ignored in the source parameter estimation for the earthquake faulting, especially for small earthquakes. Otherwise, the radiated seismic energy could be overestimated or underestimated.
Tectonic Evolution of the Çayirhan Neogene Basin (Ankara), Central Turkey
NASA Astrophysics Data System (ADS)
Behzad, Bezhan; Koral, Hayrettin; İşb&idot; l, Duygu; Karaaǧa; ç, Serdal
2016-04-01
Çayırhan (Ankara) is located at crossroads of the Western Anatolian extensional region, analogous to the Basin and Range Province, and suture zone of the Neotethys-Ocean, which is locus of the North Anatolian Transform since the Late Miocene. To the north of Çayırhan (Ankara), a Neogene sedimentary basin comprises Lower-Middle Miocene and Upper Miocene age formations, characterized by swamp, fluvial and lacustrine settings respectively. This sequence is folded and transected by neotectonic faults. The Sekli thrust fault is older than the Lower-Middle Miocene age formations. The Davutoǧlan fault is younger than the Lower-Middle Miocene formations and is contemporaneous to the Upper Miocene formation. The Çatalkaya fault is younger than the Upper Miocene formation. The sedimentary and tectonic features provide information on mode, timing and evolution of this Neogene age sedimentary basin in Central Turkey. It is concluded that the region underwent a period of uplift and erosion under the influence of contractional tectonics prior to the Early-Middle Miocene, before becoming a semi-closed basin under influence of transtensional tectonics during the Early-Middle Miocene and under influence of predominantly extensional tectonics during the post-Late Miocene times. Keywords: Tectonics, Extension, Transtension, Stratigraphy, Neotectonic features.
Patterns of brittle deformation under extension on Venus
NASA Technical Reports Server (NTRS)
Neumann, G. A.; Zuber, M. T.
1994-01-01
The development of fractures at regular length scales is a widespread feature of Venusian tectonics. Models of lithospheric deformation under extension based on non-Newtonian viscous flow and brittle-plastic flow develop localized failure at preferred wavelengths that depend on lithospheric thickness and stratification. The characteristic wavelengths seen in rift zones and tessera can therefore provide constraints on crustal and thermal structure. Analytic solutions were obtained for growth rates in infinitesimal perturbations imposed on a one-dimensional, layered rheology. Brittle layers were approximated by perfectly-plastic, uniform strength, overlying ductile layers exhibiting thermally-activated power-law creep. This study investigates the formation of faults under finite amounts of extension, employing a finite-element approach. Our model incorporates non-linear viscous rheology and a Coulomb failure envelope. An initial perturbation in crustal thickness gives rise to necking instabilities. A small amount of velocity weakening serves to localize deformation into planar regions of high strain rate. Such planes are analogous to normal faults seen in terrestrial rift zones. These 'faults' evolve to low angle under finite extension. Fault spacing, orientation and location, and the depth to the brittle-ductile transition, depend in a complex way on lateral variations in crustal thickness. In general, we find that multiple wavelengths of deformation can arise from the interaction of crustal and mantle lithosphere.
On the next generation of reliability analysis tools
NASA Technical Reports Server (NTRS)
Babcock, Philip S., IV; Leong, Frank; Gai, Eli
1987-01-01
The current generation of reliability analysis tools concentrates on improving the efficiency of the description and solution of the fault-handling processes and providing a solution algorithm for the full system model. The tools have improved user efficiency in these areas to the extent that the problem of constructing the fault-occurrence model is now the major analysis bottleneck. For the next generation of reliability tools, it is proposed that techniques be developed to improve the efficiency of the fault-occurrence model generation and input. Further, the goal is to provide an environment permitting a user to provide a top-down design description of the system from which a Markov reliability model is automatically constructed. Thus, the user is relieved of the tedious and error-prone process of model construction, permitting an efficient exploration of the design space, and an independent validation of the system's operation is obtained. An additional benefit of automating the model construction process is the opportunity to reduce the specialized knowledge required. Hence, the user need only be an expert in the system he is analyzing; the expertise in reliability analysis techniques is supplied.
Estimation of Faults in DC Electrical Power System
NASA Technical Reports Server (NTRS)
Gorinevsky, Dimitry; Boyd, Stephen; Poll, Scott
2009-01-01
This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. Potential faults changing the circuit topology are included along with faulty measurements. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed by using 11 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults. We demonstrate a real-time implementation of the approach for an instrumented electrical power system testbed, the ADAPT testbed at NASA ARC. The estimates are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.
Survivable algorithms and redundancy management in NASA's distributed computing systems
NASA Technical Reports Server (NTRS)
Malek, Miroslaw
1992-01-01
The design of survivable algorithms requires a solid foundation for executing them. While hardware techniques for fault-tolerant computing are relatively well understood, fault-tolerant operating systems, as well as fault-tolerant applications (survivable algorithms), are, by contrast, little understood, and much more work in this field is required. We outline some of our work that contributes to the foundation of ultrareliable operating systems and fault-tolerant algorithm design. We introduce our consensus-based framework for fault-tolerant system design. This is followed by a description of a hierarchical partitioning method for efficient consensus. A scheduler for redundancy management is introduced, and application-specific fault tolerance is described. We give an overview of our hybrid algorithm technique, which is an alternative to the formal approach given.
Farrington, R.B.; Pruett, J.C. Jr.
1984-05-14
A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.
Farrington, Robert B.; Pruett, Jr., James C.
1986-01-01
A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.
Study of fault-tolerant software technology
NASA Technical Reports Server (NTRS)
Slivinski, T.; Broglio, C.; Wild, C.; Goldberg, J.; Levitt, K.; Hitt, E.; Webb, J.
1984-01-01
Presented is an overview of the current state of the art of fault-tolerant software and an analysis of quantitative techniques and models developed to assess its impact. It examines research efforts as well as experience gained from commercial application of these techniques. The paper also addresses the computer architecture and design implications on hardware, operating systems and programming languages (including Ada) of using fault-tolerant software in real-time aerospace applications. It concludes that fault-tolerant software has progressed beyond the pure research state. The paper also finds that, although not perfectly matched, newer architectural and language capabilities provide many of the notations and functions needed to effectively and efficiently implement software fault-tolerance.
NASA Technical Reports Server (NTRS)
Smith, T. B., III; Lala, J. H.
1984-01-01
The FTMP architecture is a high reliability computer concept modeled after a homogeneous multiprocessor architecture. Elements of the FTMP are operated in tight synchronism with one another and hardware fault-detection and fault-masking is provided which is transparent to the software. Operating system design and user software design is thus greatly simplified. Performance of the FTMP is also comparable to that of a simplex equivalent due to the efficiency of fault handling hardware. The FTMP project constructed an engineering module of the FTMP, programmed the machine and extensively tested the architecture through fault injection and other stress testing. This testing confirmed the soundness of the FTMP concepts.
A Segmented Ion-Propulsion Engine
NASA Technical Reports Server (NTRS)
Brophy, John R.
1992-01-01
New design approach for high-power (100-kW class or greater) ion engines conceptually divides single engine into combination of smaller discharge chambers integrated to operate as single large engine. Analogous to multicylinder automobile engine, benefits include reduction in required accelerator system span-to-gap ratio for large-area engines, reduction in required hollow-cathode emission current, mitigation of plasma-uniformity problem, increased tolerance to accelerator system faults, and reduction in vacuum-system pumping speed.
Analog Fault Diagnosis of Large-Scale Electronic Circuits.
1983-08-01
is invertible. Note that Eq. G - Government Expenditure on Goods (26) is in general nonlinear while Equation (27) is and Services linear. The latter...is achieved at the expense of T - Taxes on Income more test points. 4 R = Government Regulator Navid and Willson, Jr., (71 considered the diagnosis...Theoretically, both approaches are still under development and all seem feasible. It is the purpose of this report to compare these two approaches numerically by
2013-05-01
representation of a centralized control system on a turbine engine. All actuators and sensors are point-to-point cabled to the controller ( FADEC ) which...electronics themselves. Figure 1: Centralized Control System Each function resides within the FADEC and uses Unique Point-to-Point Analog...distributed control system on the same turbine engine. The actuators and sensors interface to Smart Nodes which, in turn communicate to the FADEC via
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, Howard; Braun, James E.
This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment inmore » the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, Howard; Braun, James E.
2015-12-31
This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment inmore » the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.« less
STRUCTURAL HETEROGENEITIES AND PALEO FLUID FLOW IN AN ANALOG SANDSTONE RESERVOIR 2001-2004
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pollard, David; Aydin, Atilla
2005-02-22
Fractures and faults are brittle structural heterogeneities that can act both as conduits and barriers with respect to fluid flow in rock. This range in the hydraulic effects of fractures and faults greatly complicates the challenges faced by geoscientists working on important problems: from groundwater aquifer and hydrocarbon reservoir management, to subsurface contaminant fate and transport, to underground nuclear waste isolation, to the subsurface sequestration of CO2 produced during fossil-fuel combustion. The research performed under DOE grant DE-FG03-94ER14462 aimed to address these challenges by laying a solid foundation, based on detailed geological mapping, laboratory experiments, and physical process modeling, onmore » which to build our interpretive and predictive capabilities regarding the structure, patterns, and fluid flow properties of fractures and faults in sandstone reservoirs. The material in this final technical report focuses on the period of the investigation from July 1, 2001 to October 31, 2004. The Aztec Sandstone at the Valley of Fire, Nevada, provides an unusually rich natural laboratory in which exposures of joints, shear deformation bands, compaction bands and faults at scales ranging from centimeters to kilometers can be studied in an analog for sandstone aquifers and reservoirs. The suite of structures there has been documented and studied in detail using a combination of low-altitude aerial photography, outcrop-scale mapping and advanced computational analysis. In addition, chemical alteration patterns indicative of multiple paleo fluid flow events have been mapped at outcrop, local and regional scales. The Valley of Fire region has experienced multiple episodes of fluid flow and this is readily evident in the vibrant patterns of chemical alteration from which the Valley of Fire derives its name. We have successfully integrated detailed field and petrographic observation and analysis, process-based mechanical modeling, and numerical simulation of fluid flow to study a typical sandstone aquifer/reservoir at a variety of scales. We have produced many tools and insights which can be applied to active subsurface flow systems and practical problems of pressing global importance.« less
3D seismic attribute expressions of deep offshore Niger Delta
NASA Astrophysics Data System (ADS)
Anyiam, Uzonna Okenna
Structural and stratigraphic interpretation of 3D seismic data for reservoir characterization in an area affected by dense faulting, such as the Niger Delta, is typically difficult and strongly model driven because of problems with imaging. In the Freeman field, located about 120km offshore southwestern Niger Delta at about 1300m water depth, 3D seismic attribute-based analogs, and structural and stratigraphic based geometric models are combined to help enhance and constrain the interpretation. The objectives being to show how 3D seismic attribute analysis enhances seismic interpretation, develop structural style and stratigraphic architecture models and identify trap mechanisms in the study area; with the main purpose of producing structural and stratigraphic framework analogs to aid exploration and production companies, as well as researchers in better understanding the structural style, stratigraphic framework and trap mechanism of the Miocene to Pliocene Agbada Formation reservoirs in the deep Offshore Niger Delta Basin. A multidisciplinary approach which involved analyses of calculated variance-based coherence cube, spectral decomposition box probe and root-mean-square amplitude attributes, sequence stratigraphy based well correlation, and structural modeling; were undertaken to achieve these objectives. Studies reveal a massive northwest-southeast trending shale cored detachment fold anticline, with associated normal faults; interpreted to have been folded and faulted by localized compression resulting from a combination of differential loading on the deep-seated overpressured-ductile-undercompacted-marine Akata shale, and gravitational collapse of the Niger delta continental slope due to influx of sediments. Crestal extension resulting from this localized compression, is believed to have given rise to the synthetic, antithetic and newly observed crossing conjugate normal faults in the study area. This structure is unique to the existing types of principal oil field structures in the Niger Delta. Stratigraphic results show that the Mid-Miocene to Pliocene Agbada Formation reservoirs of the Freeman field occur as part of a channelized fan system; mostly deposited as turbidites in an unconfined distributary environment; except one that occurs as channel sand within a submarine canyon that came across and eroded previously deposited distributary fan complex, at the time. Hence, prospective area for hydrocarbon exploration is suggested southwest of the Freeman field.
Graph-based real-time fault diagnostics
NASA Technical Reports Server (NTRS)
Padalkar, S.; Karsai, G.; Sztipanovits, J.
1988-01-01
A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some of the existing AI-based fault diagnostic techniques like expert systems and qualitative modelling are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate and suffer from knowledge acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique which is well suited for real-time fault diagnosis, structured knowledge representation and acquisition and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of hierarchy, there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for on-line speedy identification of failure source components.
Large transient fault current test of an electrical roll ring
NASA Technical Reports Server (NTRS)
Yenni, Edward J.; Birchenough, Arthur G.
1992-01-01
The space station uses precision rotary gimbals to provide for sun tracking of its photoelectric arrays. Electrical power, command signals and data are transferred across the gimbals by roll rings. Roll rings have been shown to be capable of highly efficient electrical transmission and long life, through tests conducted at the NASA Lewis Research Center and Honeywell's Satellite and Space Systems Division in Phoenix, AZ. Large potential fault currents inherent to the power system's DC distribution architecture, have brought about the need to evaluate the effects of large transient fault currents on roll rings. A test recently conducted at Lewis subjected a roll ring to a simulated worst case space station electrical fault. The system model used to obtain the fault profile is described, along with details of the reduced order circuit that was used to simulate the fault. Test results comparing roll ring performance before and after the fault are also presented.
Methods to enhance seismic faults and construct fault surfaces
NASA Astrophysics Data System (ADS)
Wu, Xinming; Zhu, Zhihui
2017-10-01
Faults are often apparent as reflector discontinuities in a seismic volume. Numerous types of fault attributes have been proposed to highlight fault positions from a seismic volume by measuring reflection discontinuities. These attribute volumes, however, can be sensitive to noise and stratigraphic features that are also apparent as discontinuities in a seismic volume. We propose a matched filtering method to enhance a precomputed fault attribute volume, and simultaneously estimate fault strikes and dips. In this method, a set of efficient 2D exponential filters, oriented by all possible combinations of strike and dip angles, are applied to the input attribute volume to find the maximum filtering responses at all samples in the volume. These maximum filtering responses are recorded to obtain the enhanced fault attribute volume while the corresponding strike and dip angles, that yield the maximum filtering responses, are recoded to obtain volumes of fault strikes and dips. By doing this, we assume that a fault surface is locally planar, and a 2D smoothing filter will yield a maximum response if the smoothing plane coincides with a local fault plane. With the enhanced fault attribute volume and the estimated fault strike and dip volumes, we then compute oriented fault samples on the ridges of the enhanced fault attribute volume, and each sample is oriented by the estimated fault strike and dip. Fault surfaces can be constructed by directly linking the oriented fault samples with consistent fault strikes and dips. For complicated cases with missing fault samples and noisy samples, we further propose to use a perceptual grouping method to infer fault surfaces that reasonably fit the positions and orientations of the fault samples. We apply these methods to 3D synthetic and real examples and successfully extract multiple intersecting fault surfaces and complete fault surfaces without holes.
A SiGe Quadrature Pulse Modulator for Superconducting Qubit State Manipulation
NASA Astrophysics Data System (ADS)
Kwende, Randy; Bardin, Joseph
Manipulation of the quantum states of microwave superconducting qubits typically requires the generation of coherent modulated microwave pulses. While many off-the-shelf instruments are capable of generating such pulses, a more integrated approach is likely required if fault-tolerant quantum computing architectures are to be implemented. In this work, we present progress towards a pulse generator specifically designed to drive superconducing qubits. The device is implemented in a commercial silicon process and has been designed with energy-efficiency and scalability in mind. Pulse generation is carried out using a unique approach in which modulation is applied directly to the in-phase and quadrature components of a carrier signal in the 1-10 GHz frequency range through a unique digital-analog conversion process designed specifically for this application. The prototype pulse generator can be digitally programmed and supports sequencing of pulses with independent amplitude and phase waveforms. These amplitude and phase waveforms can be digitally programmed through a serial programming interface. Detailed performance of the pulse generator at room temperature and 4 K will be presented.
Mineral target areas in Nevada from geological analysis of LANDSAT-1 imagery
NASA Technical Reports Server (NTRS)
Abdel-Gawad, M.; Tubbesing, L.
1975-01-01
Geological analysis of LANDSAT-1 Scene MSS 1053-17540 suggests that certain known mineral districts in east-central Nevada frequently occur near faults or at faults or lineament intersections and areas of complex deformation and flexures. Seventeen (17) areas of analogous characteristics were identified as favorable targets for mineral exploration. During reconnaissance field trips eleven areas were visited. In three areas evidence was found of mining and/or prospecting not known before the field trips. In four areas favorable structural and alteration features were observed which call for more detailed field studies. In one of the four areas limonitic iron oxide samples were found in the regolith of a brecciated dolomite ridge. This area contains quartz veins, granitic and volcanic rocks and lies near the intersection of two linear fault structures identified in the LANDSAT-1 imagery. Semiquantitative spectroscopic analysis of selected portions of the samples showed abnormal contents of arsenic, molybdenum, copper, lead, zinc, and silver. These limonitic samples found were not in situ and further field studies are required to assess their source and significance.
Fault Detection, Isolation and Recovery (FDIR) Portable Liquid Oxygen Hardware Demonstrator
NASA Technical Reports Server (NTRS)
Oostdyk, Rebecca L.; Perotti, Jose M.
2011-01-01
The Fault Detection, Isolation and Recovery (FDIR) hardware demonstration will highlight the effort being conducted by Constellation's Ground Operations (GO) to provide the Launch Control System (LCS) with system-level health management during vehicle processing and countdown activities. A proof-of-concept demonstration of the FDIR prototype established the capability of the software to provide real-time fault detection and isolation using generated Liquid Hydrogen data. The FDIR portable testbed unit (presented here) aims to enhance FDIR by providing a dynamic simulation of Constellation subsystems that feed the FDIR software live data based on Liquid Oxygen system properties. The LO2 cryogenic ground system has key properties that are analogous to the properties of an electronic circuit. The LO2 system is modeled using electrical components and an equivalent circuit is designed on a printed circuit board to simulate the live data. The portable testbed is also be equipped with data acquisition and communication hardware to relay the measurements to the FDIR application running on a PC. This portable testbed is an ideal capability to perform FDIR software testing, troubleshooting, training among others.
An approach to secure weather and climate models against hardware faults
NASA Astrophysics Data System (ADS)
Düben, Peter D.; Dawson, Andrew
2017-03-01
Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelization to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. In this paper, we present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform model simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13 % for the shallow water model.
An approach to secure weather and climate models against hardware faults
NASA Astrophysics Data System (ADS)
Düben, Peter; Dawson, Andrew
2017-04-01
Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelisation to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. We present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13% for the shallow water model.
A Sparsity-Promoted Method Based on Majorization-Minimization for Weak Fault Feature Enhancement
Hao, Yansong; Song, Liuyang; Tang, Gang; Yuan, Hongfang
2018-01-01
Fault transient impulses induced by faulty components in rotating machinery usually contain substantial interference. Fault features are comparatively weak in the initial fault stage, which renders fault diagnosis more difficult. In this case, a sparse representation method based on the Majorzation-Minimization (MM) algorithm is proposed to enhance weak fault features and extract the features from strong background noise. However, the traditional MM algorithm suffers from two issues, which are the choice of sparse basis and complicated calculations. To address these challenges, a modified MM algorithm is proposed in which a sparse optimization objective function is designed firstly. Inspired by the Basis Pursuit (BP) model, the optimization function integrates an impulsive feature-preserving factor and a penalty function factor. Second, a modified Majorization iterative method is applied to address the convex optimization problem of the designed function. A series of sparse coefficients can be achieved through iterating, which only contain transient components. It is noteworthy that there is no need to select the sparse basis in the proposed iterative method because it is fixed as a unit matrix. Then the reconstruction step is omitted, which can significantly increase detection efficiency. Eventually, envelope analysis of the sparse coefficients is performed to extract weak fault features. Simulated and experimental signals including bearings and gearboxes are employed to validate the effectiveness of the proposed method. In addition, comparisons are made to prove that the proposed method outperforms the traditional MM algorithm in terms of detection results and efficiency. PMID:29597280
A Sparsity-Promoted Method Based on Majorization-Minimization for Weak Fault Feature Enhancement.
Ren, Bangyue; Hao, Yansong; Wang, Huaqing; Song, Liuyang; Tang, Gang; Yuan, Hongfang
2018-03-28
Fault transient impulses induced by faulty components in rotating machinery usually contain substantial interference. Fault features are comparatively weak in the initial fault stage, which renders fault diagnosis more difficult. In this case, a sparse representation method based on the Majorzation-Minimization (MM) algorithm is proposed to enhance weak fault features and extract the features from strong background noise. However, the traditional MM algorithm suffers from two issues, which are the choice of sparse basis and complicated calculations. To address these challenges, a modified MM algorithm is proposed in which a sparse optimization objective function is designed firstly. Inspired by the Basis Pursuit (BP) model, the optimization function integrates an impulsive feature-preserving factor and a penalty function factor. Second, a modified Majorization iterative method is applied to address the convex optimization problem of the designed function. A series of sparse coefficients can be achieved through iterating, which only contain transient components. It is noteworthy that there is no need to select the sparse basis in the proposed iterative method because it is fixed as a unit matrix. Then the reconstruction step is omitted, which can significantly increase detection efficiency. Eventually, envelope analysis of the sparse coefficients is performed to extract weak fault features. Simulated and experimental signals including bearings and gearboxes are employed to validate the effectiveness of the proposed method. In addition, comparisons are made to prove that the proposed method outperforms the traditional MM algorithm in terms of detection results and efficiency.
NASA Astrophysics Data System (ADS)
Ai, Yan-Ting; Guan, Jiao-Yue; Fei, Cheng-Wei; Tian, Jing; Zhang, Feng-Ling
2017-05-01
To monitor rolling bearing operating status with casings in real time efficiently and accurately, a fusion method based on n-dimensional characteristic parameters distance (n-DCPD) was proposed for rolling bearing fault diagnosis with two types of signals including vibration signal and acoustic emission signals. The n-DCPD was investigated based on four information entropies (singular spectrum entropy in time domain, power spectrum entropy in frequency domain, wavelet space characteristic spectrum entropy and wavelet energy spectrum entropy in time-frequency domain) and the basic thought of fusion information entropy fault diagnosis method with n-DCPD was given. Through rotor simulation test rig, the vibration and acoustic emission signals of six rolling bearing faults (ball fault, inner race fault, outer race fault, inner-ball faults, inner-outer faults and normal) are collected under different operation conditions with the emphasis on the rotation speed from 800 rpm to 2000 rpm. In the light of the proposed fusion information entropy method with n-DCPD, the diagnosis of rolling bearing faults was completed. The fault diagnosis results show that the fusion entropy method holds high precision in the recognition of rolling bearing faults. The efforts of this study provide a novel and useful methodology for the fault diagnosis of an aeroengine rolling bearing.
Distributed Fault Detection Based on Credibility and Cooperation for WSNs in Smart Grids.
Shao, Sujie; Guo, Shaoyong; Qiu, Xuesong
2017-04-28
Due to the increasingly important role in monitoring and data collection that sensors play, accurate and timely fault detection is a key issue for wireless sensor networks (WSNs) in smart grids. This paper presents a novel distributed fault detection mechanism for WSNs based on credibility and cooperation. Firstly, a reasonable credibility model of a sensor is established to identify any suspicious status of the sensor according to its own temporal data correlation. Based on the credibility model, the suspicious sensor is then chosen to launch fault diagnosis requests. Secondly, the sending time of fault diagnosis request is discussed to avoid the transmission overhead brought about by unnecessary diagnosis requests and improve the efficiency of fault detection based on neighbor cooperation. The diagnosis reply of a neighbor sensor is analyzed according to its own status. Finally, to further improve the accuracy of fault detection, the diagnosis results of neighbors are divided into several classifications to judge the fault status of the sensors which launch the fault diagnosis requests. Simulation results show that this novel mechanism can achieve high fault detection ratio with a small number of fault diagnoses and low data congestion probability.
Distributed Fault Detection Based on Credibility and Cooperation for WSNs in Smart Grids
Shao, Sujie; Guo, Shaoyong; Qiu, Xuesong
2017-01-01
Due to the increasingly important role in monitoring and data collection that sensors play, accurate and timely fault detection is a key issue for wireless sensor networks (WSNs) in smart grids. This paper presents a novel distributed fault detection mechanism for WSNs based on credibility and cooperation. Firstly, a reasonable credibility model of a sensor is established to identify any suspicious status of the sensor according to its own temporal data correlation. Based on the credibility model, the suspicious sensor is then chosen to launch fault diagnosis requests. Secondly, the sending time of fault diagnosis request is discussed to avoid the transmission overhead brought about by unnecessary diagnosis requests and improve the efficiency of fault detection based on neighbor cooperation. The diagnosis reply of a neighbor sensor is analyzed according to its own status. Finally, to further improve the accuracy of fault detection, the diagnosis results of neighbors are divided into several classifications to judge the fault status of the sensors which launch the fault diagnosis requests. Simulation results show that this novel mechanism can achieve high fault detection ratio with a small number of fault diagnoses and low data congestion probability. PMID:28452925
Algorithm-Based Fault Tolerance Integrated with Replication
NASA Technical Reports Server (NTRS)
Some, Raphael; Rennels, David
2008-01-01
In a proposed approach to programming and utilization of commercial off-the-shelf computing equipment, a combination of algorithm-based fault tolerance (ABFT) and replication would be utilized to obtain high degrees of fault tolerance without incurring excessive costs. The basic idea of the proposed approach is to integrate ABFT with replication such that the algorithmic portions of computations would be protected by ABFT, and the logical portions by replication. ABFT is an extremely efficient, inexpensive, high-coverage technique for detecting and mitigating faults in computer systems used for algorithmic computations, but does not protect against errors in logical operations surrounding algorithms.
Imaging of earthquake faults using small UAVs as a pathfinder for air and space observations
Donnellan, Andrea; Green, Joseph; Ansar, Adnan; Aletky, Joseph; Glasscoe, Margaret; Ben-Zion, Yehuda; Arrowsmith, J. Ramón; DeLong, Stephen B.
2017-01-01
Large earthquakes cause billions of dollars in damage and extensive loss of life and property. Geodetic and topographic imaging provide measurements of transient and long-term crustal deformation needed to monitor fault zones and understand earthquakes. Earthquake-induced strain and rupture characteristics are expressed in topographic features imprinted on the landscapes of fault zones. Small UAVs provide an efficient and flexible means to collect multi-angle imagery to reconstruct fine scale fault zone topography and provide surrogate data to determine requirements for and to simulate future platforms for air- and space-based multi-angle imaging.
Efficient fault diagnosis of helicopter gearboxes
NASA Technical Reports Server (NTRS)
Chin, H.; Danai, K.; Lewicki, D. G.
1993-01-01
Application of a diagnostic system to a helicopter gearbox is presented. The diagnostic system is a nonparametric pattern classifier that uses a multi-valued influence matrix (MVIM) as its diagnostic model and benefits from a fast learning algorithm that enables it to estimate its diagnostic model from a small number of measurement-fault data. To test this diagnostic system, vibration measurements were collected from a helicopter gearbox test stand during accelerated fatigue tests and at various fault instances. The diagnostic results indicate that the MVIM system can accurately detect and diagnose various gearbox faults so long as they are included in training.
Fuzzy model-based fault detection and diagnosis for a pilot heat exchanger
NASA Astrophysics Data System (ADS)
Habbi, Hacene; Kidouche, Madjid; Kinnaert, Michel; Zelmat, Mimoun
2011-04-01
This article addresses the design and real-time implementation of a fuzzy model-based fault detection and diagnosis (FDD) system for a pilot co-current heat exchanger. The design method is based on a three-step procedure which involves the identification of data-driven fuzzy rule-based models, the design of a fuzzy residual generator and the evaluation of the residuals for fault diagnosis using statistical tests. The fuzzy FDD mechanism has been implemented and validated on the real co-current heat exchanger, and has been proven to be efficient in detecting and isolating process, sensor and actuator faults.
NASA Astrophysics Data System (ADS)
Guo, Jun; Lu, Siliang; Zhai, Chao; He, Qingbo
2018-02-01
An automatic bearing fault diagnosis method is proposed for permanent magnet synchronous generators (PMSGs), which are widely installed in wind turbines subjected to low rotating speeds, speed fluctuations, and electrical device noise interferences. The mechanical rotating angle curve is first extracted from the phase current of a PMSG by sequentially applying a series of algorithms. The synchronous sampled vibration signal of the fault bearing is then resampled in the angular domain according to the obtained rotating phase information. Considering that the resampled vibration signal is still overwhelmed by heavy background noise, an adaptive stochastic resonance filter is applied to the resampled signal to enhance the fault indicator and facilitate bearing fault identification. Two types of fault bearings with different fault sizes in a PMSG test rig are subjected to experiments to test the effectiveness of the proposed method. The proposed method is fully automated and thus shows potential for convenient, highly efficient and in situ bearing fault diagnosis for wind turbines subjected to harsh environments.
Automatic Channel Fault Detection on a Small Animal APD-Based Digital PET Scanner
NASA Astrophysics Data System (ADS)
Charest, Jonathan; Beaudoin, Jean-François; Cadorette, Jules; Lecomte, Roger; Brunet, Charles-Antoine; Fontaine, Réjean
2014-10-01
Avalanche photodiode (APD) based positron emission tomography (PET) scanners show enhanced imaging capabilities in terms of spatial resolution and contrast due to the one to one coupling and size of individual crystal-APD detectors. However, to ensure the maximal performance, these PET scanners require proper calibration by qualified scanner operators, which can become a cumbersome task because of the huge number of channels they are made of. An intelligent system (IS) intends to alleviate this workload by enabling a diagnosis of the observational errors of the scanner. The IS can be broken down into four hierarchical blocks: parameter extraction, channel fault detection, prioritization and diagnosis. One of the main activities of the IS consists in analyzing available channel data such as: normalization coincidence counts and single count rates, crystal identification classification data, energy histograms, APD bias and noise thresholds to establish the channel health status that will be used to detect channel faults. This paper focuses on the first two blocks of the IS: parameter extraction and channel fault detection. The purpose of the parameter extraction block is to process available data on individual channels into parameters that are subsequently used by the fault detection block to generate the channel health status. To ensure extensibility, the channel fault detection block is divided into indicators representing different aspects of PET scanner performance: sensitivity, timing, crystal identification and energy. Some experiments on a 8 cm axial length LabPET scanner located at the Sherbrooke Molecular Imaging Center demonstrated an erroneous channel fault detection rate of 10% (with a 95% confidence interval (CI) of [9, 11]) which is considered tolerable. Globally, the IS achieves a channel fault detection efficiency of 96% (CI: [95, 97]), which proves that many faults can be detected automatically. Increased fault detection efficiency would be advantageous but, the achieved results would already benefit scanner operators in their maintenance task.
Low-Power Analog Processing for Sensing Applications: Low-Frequency Harmonic Signal Classification
White, Daniel J.; William, Peter E.; Hoffman, Michael W.; Balkir, Sina
2013-01-01
A low-power analog sensor front-end is described that reduces the energy required to extract environmental sensing spectral features without using Fast Fouriér Transform (FFT) or wavelet transforms. An Analog Harmonic Transform (AHT) allows selection of only the features needed by the back-end, in contrast to the FFT, where all coefficients must be calculated simultaneously. We also show that the FFT coefficients can be easily calculated from the AHT results by a simple back-substitution. The scheme is tailored for low-power, parallel analog implementation in an integrated circuit (IC). Two different applications are tested with an ideal front-end model and compared to existing studies with the same data sets. Results from the military vehicle classification and identification of machine-bearing fault applications shows that the front-end suits a wide range of harmonic signal sources. Analog-related errors are modeled to evaluate the feasibility of and to set design parameters for an IC implementation to maintain good system-level performance. Design of a preliminary transistor-level integrator circuit in a 0.13 μm complementary metal-oxide-silicon (CMOS) integrated circuit process showed the ability to use online self-calibration to reduce fabrication errors to a sufficiently low level. Estimated power dissipation is about three orders of magnitude less than similar vehicle classification systems that use commercially available FFT spectral extraction. PMID:23892765
The aircraft energy efficiency active controls technology program
NASA Technical Reports Server (NTRS)
Hood, R. V., Jr.
1977-01-01
Broad outlines of the NASA Aircraft Energy Efficiency Program for expediting the application of active controls technology to civil transport aircraft are presented. Advances in propulsion and airframe technology to cut down on fuel consumption and fuel costs, a program for an energy-efficient transport, and integrated analysis and design technology in aerodynamics, structures, and active controls are envisaged. Fault-tolerant computer systems and fault-tolerant flight control system architectures are under study. Contracts with leading manufacturers for research and development work on wing-tip extensions and winglets for the B-747, a wing load alleviation system, elastic mode suppression, maneuver-load control, and gust alleviation are mentioned.
Robust optimization with transiently chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Sumi, R.; Molnár, B.; Ercsey-Ravasz, M.
2014-05-01
Efficiently solving hard optimization problems has been a strong motivation for progress in analog computing. In a recent study we presented a continuous-time dynamical system for solving the NP-complete Boolean satisfiability (SAT) problem, with a one-to-one correspondence between its stable attractors and the SAT solutions. While physical implementations could offer great efficiency, the transiently chaotic dynamics raises the question of operability in the presence of noise, unavoidable on analog devices. Here we show that the probability of finding solutions is robust to noise intensities well above those present on real hardware. We also developed a cellular neural network model realizable with analog circuits, which tolerates even larger noise intensities. These methods represent an opportunity for robust and efficient physical implementations.
Multi-version software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1989-01-01
A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between coverage provided by some higher metrics and the elimination of faults in the code. Back-to-back testing was continued as an efficient mechanism for removal of un-correlated faults, and common-cause faults of variable span. Software reliability estimation methods was also continued based on non-random sampling, and the relationship between software reliability and code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were finished and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the Acceptance testing scheme.
Geosphere - Cryosphere Interactions in the Saint Elias orogen, Alaska and Yukon (Invited)
NASA Astrophysics Data System (ADS)
Bruhn, R. L.; Sauber, J. M.; Forster, R. R.; Cotton, M. M.
2009-12-01
North America's largest alpine and piedmont glaciers occur in the Saint Elias orogen, where microplate collision together with the transition from transform faulting to subduction along the North American plate boundary, create extreme topographic relief, unusually high annual precipitation by orographic lift, and crustal displacements induced by both tectonic and glacio-isostatic deformation. Lithosphere-scale structure dominates the spatial pattern of glaciation; the piedmont Bering and Agassiz-Malaspina glaciers lay along deeply eroded troughs where reverse faults rise from the underlying Aleutian megathrust. The alpine Seward and Bagley Ice Valley glaciers flow along an early Tertiary plate boundary that has been reactivated by reverse faulting, and also by dextral shearing at the NW end of the Fairweather transform fault. Folding above a crustal-scale fault ramp near Icy Bay localizes orographic uplift of air masses, creating alpine glaciers that spill off the highlands into large ice falls, and rapidly dissect evolving structure by erosion. The rate and orientation of ice surface velocities, and the location of crevassing and folding partly reflect changes in basal topography of the glaciers caused by differential erosion of strata, and juxtaposition of variably oriented structures across faults. The effects of basal topography on ice flow are investigated using remote sensing measurements and analog models of glacier flow over uneven topography. Deformation of the ice in turn affects englacial hydrology and sub-ice fluvial systems, potentially impacting ice mass balance, on-set of surging, and loci of glacier quakes. The glaciers impact tectonics by localizing uplift and exhumation within the orogen, and modulating tectonic stress fields as ice masses wax and wane. This is particularly evident in crustal seismicity rates at annual to decadal time scales, while stratigraphy of coastal terraces record both earthquake deformation and glacial isostasy over millennia.
Automatic detection of electric power troubles (AI application)
NASA Technical Reports Server (NTRS)
Wang, Caroline; Zeanah, Hugh; Anderson, Audie; Patrick, Clint
1987-01-01
The design goals for the Automatic Detection of Electric Power Troubles (ADEPT) were to enhance Fault Diagnosis Techniques in a very efficient way. ADEPT system was designed in two modes of operation: (1) Real time fault isolation, and (2) a local simulator which simulates the models theoretically.
Fault Diagnostics for Turbo-Shaft Engine Sensors Based on a Simplified On-Board Model
Lu, Feng; Huang, Jinquan; Xing, Yaodong
2012-01-01
Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for turbo-shaft engine key sensors is mainly based on a double redundancies technique, and this can't be satisfied in some occasions as lack of judgment. The simplified on-board model provides the analytical third channel against which the dual channel measurements are compared, while the hardware redundancy will increase the structure complexity and weight. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, this is built up via dynamic parameters method. Sensor fault detection, diagnosis (FDD) logic is designed, and two types of sensor failures, such as the step faults and the drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient. PMID:23112645
Fault diagnostics for turbo-shaft engine sensors based on a simplified on-board model.
Lu, Feng; Huang, Jinquan; Xing, Yaodong
2012-01-01
Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for turbo-shaft engine key sensors is mainly based on a double redundancies technique, and this can't be satisfied in some occasions as lack of judgment. The simplified on-board model provides the analytical third channel against which the dual channel measurements are compared, while the hardware redundancy will increase the structure complexity and weight. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, this is built up via dynamic parameters method. Sensor fault detection, diagnosis (FDD) logic is designed, and two types of sensor failures, such as the step faults and the drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient.
Dreaming of Graben in the Labyrinth of the Night
2016-06-29
Noctis Labyrinthus is a highly tectonized region immediately to the west of Valles Marineris. It formed when Mars' crust stretched itself apart. In this region, the crust first stretched in a north-south direction (as evidenced by the east-west trending scarp) and then in an east-west direction (as evidenced by the north-south trending smaller scarps). This sort of tectonic stretching creates faults in the crust (cracks along with masses of rock slide. This process is totally unrelated to Earth's plate tectonics.). The lower portions between faults are called "grabens" and the interspersed higher portions are called "horsts." The Basin and Range tectonic province of the western United States is a close Earth analog to Noctis Labyrinthus, which is Latin for "labyrinth of the night." http://photojournal.jpl.nasa.gov/catalog/PIA20740
Model-Based Diagnosis and Prognosis of a Water Recycling System
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Hafiychuk, Vasyl; Goebel, Kai Frank
2013-01-01
A water recycling system (WRS) deployed at NASA Ames Research Center s Sustainability Base (an energy efficient office building that integrates some novel technologies developed for space applications) will serve as a testbed for long duration testing of next generation spacecraft water recycling systems for future human spaceflight missions. This system cleans graywater (waste water collected from sinks and showers) and recycles it into clean water. Like all engineered systems, the WRS is prone to standard degradation due to regular use, as well as other faults. Diagnostic and prognostic applications will be deployed on the WRS to ensure its safe, efficient, and correct operation. The diagnostic and prognostic results can be used to enable condition-based maintenance to avoid unplanned outages, and perhaps extend the useful life of the WRS. Diagnosis involves detecting when a fault occurs, isolating the root cause of the fault, and identifying the extent of damage. Prognosis involves predicting when the system will reach its end of life irrespective of whether an abnormal condition is present or not. In this paper, first, we develop a physics model of both nominal and faulty system behavior of the WRS. Then, we apply an integrated model-based diagnosis and prognosis framework to the simulation model of the WRS for several different fault scenarios to detect, isolate, and identify faults, and predict the end of life in each fault scenario, and present the experimental results.
Conjecturing via Reconceived Classical Analogy
ERIC Educational Resources Information Center
Lee, Kyeong-Hwa; Sriraman, Bharath
2011-01-01
Analogical reasoning is believed to be an efficient means of problem solving and construction of knowledge during the search for and the analysis of new mathematical objects. However, there is growing concern that despite everyday usage, learners are unable to transfer analogical reasoning to learning situations. This study aims at facilitating…
NASA Astrophysics Data System (ADS)
Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo
2017-08-01
The extraction of periodic impulses, which are the important indicators of rolling bearing faults, from vibration signals is considerably significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD) developed from minimum entropy deconvolution (MED) has been proven as an efficient tool for enhancing the periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to the bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requires for the multi-input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relies on the provided prior period. Moreover, the iterative period will gradually approach to the true fault period through updating the iterative period after every iterative step. Since IMCKD is unaffected by the impulse signals with the high kurtosis value, the new method selects the maximum kurtosis filtered signal as the final choice from all candidates in the assigned iterative counts. Compared with MCKD, IMCKD has three advantages. First, without considering prior period and the choice of the order of shift, IMCKD is more efficient and has higher robustness. Second, the resampling process is not necessary for IMCKD, which is greatly convenient for the subsequent frequency spectrum analysis and envelope spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing the bearing compound-fault which expands the application range. Finally, the effectiveness and superiority of IMCKD are validated by a number of simulated bearing fault signals and applying to compound faults and single fault diagnosis of a locomotive bearing.
Distributed adaptive diagnosis of sensor faults using structural response data
NASA Astrophysics Data System (ADS)
Dragos, Kosmas; Smarsly, Kay
2016-10-01
The reliability and consistency of wireless structural health monitoring (SHM) systems can be compromised by sensor faults, leading to miscalibrations, corrupted data, or even data loss. Several research approaches towards fault diagnosis, referred to as ‘analytical redundancy’, have been proposed that analyze the correlations between different sensor outputs. In wireless SHM, most analytical redundancy approaches require centralized data storage on a server for data analysis, while other approaches exploit the on-board computing capabilities of wireless sensor nodes, analyzing the raw sensor data directly on board. However, using raw sensor data poses an operational constraint due to the limited power resources of wireless sensor nodes. In this paper, a new distributed autonomous approach towards sensor fault diagnosis based on processed structural response data is presented. The inherent correlations among Fourier amplitudes of acceleration response data, at peaks corresponding to the eigenfrequencies of the structure, are used for diagnosis of abnormal sensor outputs at a given structural condition. Representing an entirely data-driven analytical redundancy approach that does not require any a priori knowledge of the monitored structure or of the SHM system, artificial neural networks (ANN) are embedded into the sensor nodes enabling cooperative fault diagnosis in a fully decentralized manner. The distributed analytical redundancy approach is implemented into a wireless SHM system and validated in laboratory experiments, demonstrating the ability of wireless sensor nodes to self-diagnose sensor faults accurately and efficiently with minimal data traffic. Besides enabling distributed autonomous fault diagnosis, the embedded ANNs are able to adapt to the actual condition of the structure, thus ensuring accurate and efficient fault diagnosis even in case of structural changes.
Software-Controlled Caches in the VMP Multiprocessor
1986-03-01
programming system level that Processors is tuned for the VMP design. In this vein, we are interested in exploring how far the software support can go to ...handled in software, analogously to the handling agement of the shared program state is familiar and of virtual memory page faults. Hardware support for...ensure good behavior, as opposed to how Each cache miss results in bus traffic. Table 2 pro- vides the bus cost for the "average" cache miss. Fig
Broadband optical equalizer using fault tolerant digital micromirrors.
Riza, Nabeel; Mughal, M Junaid
2003-06-30
For the first time, the design and demonstration of a near continuous spectral processing mode broadband equalizer is described using the earlier proposed macro-pixel spatial approach for multiwavelength fiber-optic attenuation in combination with a high spectral resolution broadband transmissive volume Bragg grating. The demonstrated design features low loss and low polarization dependent loss with broadband operation. Such an analog mode spectral processor can impact optical applications ranging from test and instrumentation to dynamic alloptical networks.
NASA Technical Reports Server (NTRS)
Fernandez-Remolar, D. C.; Prieto-Ballesteros, O.; Rodriquez, N.; Davila, F.; Stevens, T.; Amils, R.; Gomez-Elvira, J.; Stoker, C.
2005-01-01
Geochemistry and mineralogy on Mars surface characterized by the MER Opportunity Rover suggest that early Mars hosted acidic environments in the Meridiani Planum region [1, 2]. Such extreme paleoenvironments have been suggested to be a regional expression of the global Mars geological cycle that induced acidic conditions by sulfur complexation and iron buffering of aqueous solutions [3]. Under these assumptions, underground reservoirs of acidic brines and, thereby, putative acidic cryptobiospheres, may be expected. The MARTE project [4, 5] has performed a drilling campaign to search for acidic and anaerobic biospheres in R o Tinto basement [6] that may be analogs of these hypothetical communities occurring in cryptic habitats of Mars. This Rio Tinto geological region is characterized by the occurrence of huge metallic deposits of iron sulfides [7]. Late intensive diagenesis of rocks driven by a compressive regimen [8] largely reduced the porosity of rocks and induced a cortical thickening through thrusting and inverse faulting and folding. Such structures play an essential role in transporting and storing water underground as any other aquifers do in the Earth. Once the underground water reservoirs of the Ro Tinto basement contact the hydrothermal pyrite deposits, acidic brines are produced by the release of sulfates and iron through the oxidation of sulfides [9].
NASA Astrophysics Data System (ADS)
Li, Lu; Stephenson, Randell; Clift, Peter D.
2016-11-01
Both the Canada Basin (a sub-basin within the Amerasia Basin) and southwest (SW) South China Sea preserve oceanic spreading centres and adjacent passive continental margins characterized by broad COT zones with hyper-extended continental crust. We have investigated strain accommodation in the regions immediately adjacent to the oceanic spreading centres in these two basins using 2-D backstripping subsidence reconstructions, coupled with forward modelling constrained by estimates of upper crustal extensional faulting. Modelling is better constrained in the SW South China Sea but our results for the Canada Basin are analogous. Depth-dependent extension is required to explain the great depth of both basins because only modest upper crustal faulting is observed. A weak lower crust in the presence of high heat flow and, accordingly, a lower crust that extends far more the upper crust are suggested for both basins. Extension in the COT may have continued even after seafloor spreading has ceased. The analogous results for the two basins considered are discussed in terms of (1) constraining the timing and distribution of crustal thinning along the respective continental margins, (2) defining the processes leading to hyper-extension of continental crust in the respective tectonic settings and (3) illuminating the processes that control hyper-extension in these basins and more generally.
NASA Astrophysics Data System (ADS)
Khawaja, Taimoor Saleem
A high-belief low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear non-Gaussian systems. The methodology assumes the availability of real-time process measurements, definition of a set of fault indicators and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm, set within a Bayesian Inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVM machines are founded on the principle of Structural Risk Minimization (SRM) which tends to find a good trade-off between low empirical risk and small capacity. The key features in SVM are the use of non-linear kernels, the absence of local minima, the sparseness of the solution and the capacity control obtained by optimizing the margin. The Bayesian Inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel Anomaly Detector is suggested based on the LS-SVM machines. The proposed scheme uses only baseline data to construct a 1-class LS-SVM machine which, when presented with online data is able to distinguish between normal behavior and any abnormal or novel data during real-time operation. The results of the scheme are interpreted as a posterior probability of health (1 - probability of fault). As shown through two case studies in Chapter 3, the scheme is well suited for diagnosing imminent faults in dynamical non-linear systems. Finally, the failure prognosis scheme is based on an incremental weighted Bayesian LS-SVR machine. It is particularly suited for online deployment given the incremental nature of the algorithm and the quick optimization problem solved in the LS-SVR algorithm. By way of kernelization and a Gaussian Mixture Modeling (GMM) scheme, the algorithm can estimate "possibly" non-Gaussian posterior distributions for complex non-linear systems. An efficient regression scheme associated with the more rigorous core algorithm allows for long-term predictions, fault growth estimation with confidence bounds and remaining useful life (RUL) estimation after a fault is detected. The leading contributions of this thesis are (a) the development of a novel Bayesian Anomaly Detector for efficient and reliable Fault Detection and Identification (FDI) based on Least Squares Support Vector Machines, (b) the development of a data-driven real-time architecture for long-term Failure Prognosis using Least Squares Support Vector Machines, (c) Uncertainty representation and management using Bayesian Inference for posterior distribution estimation and hyper-parameter tuning, and finally (d) the statistical characterization of the performance of diagnosis and prognosis algorithms in order to relate the efficiency and reliability of the proposed schemes.
The Architecture and Frictional Properties of Faults in Shale
NASA Astrophysics Data System (ADS)
De Paola, N.; Imber, J.; Murray, R.; Holdsworth, R.
2015-12-01
The geometry of brittle fault zones in shale rocks, as well as their frictional properties at reservoir conditions, are still poorly understood. Nevertheless, these factors may control the very low recovery factors (25% for gas and 5% for oil) obtained during fracking operations. Extensional brittle fault zones (maximum displacement < 3 m) cut exhumed oil mature black shales in the Cleveland Basin (UK). Fault cores up to 50 cm wide accommodated most of the displacement, and are defined by a stair-step geometry. Their internal architecture is characterised by four distinct fault rock domains: foliated gouges; breccias; hydraulic breccias; and a slip zone up to 20 mm thick, composed of a fine-grained black gouge. Hydraulic breccias are located within dilational jogs with aperture of up to 20 cm. Brittle fracturing and cataclastic flow are the dominant deformation mechanisms in the fault core of shale faults. Velocity-step and slide-hold-slide experiments at sub-seismic slip rates (microns/s) were performed in a rotary shear apparatus under dry, water and brine-saturated conditions, for displacements of up to 46 cm. Both the protolith shale and the slip zone black gouge display shear localization, velocity strengthening behaviour and negative healing rates, suggesting that slow, stable sliding faulting should occur within the protolith rocks and slip zone gouges. Experiments at seismic speed (1.3 m/s), performed on the same materials under dry conditions, show that after initial friction values of 0.5-0.55, friction decreases to steady-state values of 0.1-0.15 within the first 10 mm of slip. Contrastingly, water/brine saturated gouge mixtures, exhibit almost instantaneous attainment of very low steady-state sliding friction (0.1), suggesting that seismic ruptures may efficiently propagate in the slip zone of fluid-saturated shale faults. Stable sliding in faults in shale can cause slow fault/fracture propagation, affecting the rate at which new fracture areas are created and, hence, limiting oil and gas production during reservoir stimulation. However, fluid saturated conditions can favour seismic slip propagation, with fast and efficient creation of new fracture areas. These processes are very effective at dilational jogs, where fluid circulation may be enhanced, facilitating oil and gas production.
Structures associated with strike-slip faults that bound landslide elements
Fleming, R.W.; Johnson, A.M.
1989-01-01
Large landslides are bounded on their flanks and on elements within the landslides by structures analogous to strike-slip faults. We observed the formation of thwse strike-slip faults and associated structures at two large landslides in central Utah during 1983-1985. The strike-slip faults in landslides are nearly vertical but locally may dip a few degrees toward or away from the moving ground. Fault surfaces are slickensided, and striations are subparallel to the ground surface. Displacement along strike-slip faults commonly produces scarps; scarps occur where local relief of the failure surface or ground surface is displaced and becomes adjacent to higher or lower ground, or where the landslide is thickening or thinning as a result of internal deformation. Several types of structures are formed at the ground surface as a strike-slip fault, which is fully developed at some depth below the ground surface, propagates upward in response to displacement. The simplest structure is a tension crack oriented at 45?? clockwise or counterclockwise from the trend of an underlying right- or left-lateral strike-slip fault, respectively. The tension cracks are typically arranged en echelon with the row of cracks parallel to the trace of the underlying strike-slip fault. Another common structure that forms above a developing strike-slip fault is a fault segment. Fault segments are discontinuous strike-slip faults that contain the same sense of slip but are turned clockwise or counterclockwise from a few to perhaps 20?? from the underlying strike-slip fault. The fault segments are slickensided and striated a few centimeters below the ground surface; continued displacement of the landslide causes the fault segments to open and a short tension crack propagates out of one or both ends of the fault segments. These structures, open fault segments containing a short tension crack, are termed compound cracks; and the short tension crack that propagates from the tip of the fault segment is typically oriented 45?? to the trend of the underlying fault. Fault segments are also typically arranged en echelon above the upward-propagating strike-slip fault. Continued displacement of the landslide causes the ground to buckle between the tension crack portions of the compound cracks. Still more displacement produces a thrust fault on one or both limbs of the buckle fold. These compressional structures form at right angles to the short tension cracks at the tips of the fault segments. Thus, the compressional structures are bounded on their ends by one face of a tension crack and detached from underlying material by thrusting or buckling. The tension cracks, fault segments, compound cracks, folds, and thrusts are ephemeral; they are created and destroyed with continuing displacement of the landslide. Ultimately, the structures are replaced by a throughgoing strike-slip fault. At one landslide, we observed the creation and destruction of the ephemeral structures as the landslide enlarged. Displacement of a few centimeters to about a decimeter was sufficient to produce scattered tension cracks and fault segments. Sets of compound cracks with associated folds and thrusts were produced by displacements of up to 1 m, and 1 to 2 m of displacement was required to produce a throughgoing strike-slip fault. The type of first-formed structure above an upward-propagating strike-slip fault is apparently controlled by the rheology of the material. Brittle material such as dry topsoil or the compact surface of a gravel road produces echelon tension cracks and sets of tension cracks and compressional structures, wherein the cracks and compressional structures are normal to each other and 45?? to the strike-slip fault at depth. First-formed structures in more ductile material such as moist cohesive soil are fault segments. In very ductile material such as soft clay and very wet soil in swampy areas, the first-formed structure is a throughgoing strike-slip fault. There are othe
Deformation driven by subduction and microplate collision: Geodynamics of Cook Inlet basin, Alaska
Bruhn, R.L.; Haeussler, Peter J.
2006-01-01
Late Neogene and younger deformation in Cook Inlet basin is caused by dextral transpression in the plate margin of south-central Alaska. Collision and subduction of the Yakutat microplate at the northeastern end of the Aleutian subduction zone is driving the accretionary complex of the Chugach and Kenai Mountains toward the Alaska Range on the opposite side of the basin. This deformation creates belts of fault-cored anticlines that are prolific traps of hydrocarbons and are also potential sources for damaging earthquakes. The faults dip steeply, extend into the Mesozoic basement beneath the Tertiary basin fill, and form conjugate flower structures at some localities. Comparing the geometry of the natural faults and folds with analog models created in a sandbox deformation apparatus suggests that some of the faults accommodate significant dextral as well as reverse-slip motion. We develop a tectonic model in which dextral shearing and horizontal shortening of the basin is driven by microplate collision with an additional component of thrust-type strain caused by plate subduction. This model predicts temporally fluctuating stress fields that are coupled to the recurrence intervals of large-magnitude subduction zone earthquakes. The maximum principal compressive stress is oriented east-southeast to east-northeast with nearly vertical least compressive stress when the basin's lithosphere is mostly decoupled from the underlying subduction megathrust. This stress tensor is compatible with principal stresses inferred from focal mechanisms of earthquakes that occur within the crust beneath Cook Inlet basin. Locking of the megathrust between great magnitude earthquakes may cause the maximum principal compressive stress to rotate toward the northwest. Moderate dipping faults that strike north to northeast may be optimally oriented for rupture in the ambient stress field, but steeply dipping faults within the cores of some anticlines are unfavorably oriented with respect to both modeled and observed stress fields, suggesting that elevated fluid pressure may be required to trigger fault rupture. ?? 2006 Geological Society of America.
Growth and gravitational collapse of a mountain front of the Eastern Cordillera of Colombia
NASA Astrophysics Data System (ADS)
Kammer, Andreas; Montana, Jorge; Piraquive, Alejandro
2016-04-01
The Eastern Cordillera of Colombia is bracketed between the moderately east-dipping flank of the Central Cordillera on its western and the gently bent Guayana shield on its eastern side. It evolved as a response to a considerable displacement transfer from the Nazca to the Southamerican plate since the Oligocene break-up of the Farallon plate. One of its distinctive traits refers to its significant shortening by penetrative strain at lower and folding at higher structural levels, approximating a wholesale pure-shear in analogy to a vice model or a crustal welt sandwiched between rigid buttresses. This contrasting behavior may be explained by the spatial coincidence between Neogene mountain belt and a forebulge that shaped the foreland trough during a Cretaceous subduction cycle and was very effective in localizing a weakening of the backarc region comprised between two basin margin faults. In this paper we examine a two-phase evolution of the Eastern mountain front. Up to the late Miocene deformation was restrained by the inherited eastern basin margin fault and as the cordilleran crust extruded, a deformation front with an amplitude similar the present structural relief of up to 10.000 m may have built up. In the Pliocene convergence changed from a roughly strike-perpendicular to an oblique E-W direction and caused N-S trending faults to branch off from the deformation front. This shortening was partly driven by a gravitational collapse of the Miocene deformation front, that became fragmented by normal faults and extruded E on newly formed Pliocene thrust faults. Normal faults display displacements of up to 3000 m and channelized hydrothermal fluids, leading to the formation of widely distributed fault breccias and giving rise to a prolific Emerald mineralization. In terms of wedge dynamics, the Pliocene breaching of the early formed deformation front helped to establish a critical taper.
NASA Astrophysics Data System (ADS)
Harlow, J.
2016-12-01
Arabia Terra's (AT) pock-marked topography in the expansive upland region of Mars Northern Hemisphere has been assumed to be the result of impact crater bombardment. However, examination of several craters by researchers revealed morphologies inconsistent with neighboring craters of similar size and age. These 'craters' share features with terrestrial super-eruption calderas, and are considered a new volcanic construct on Mars called `plains-style' caldera complexes. Eden Patera (EP), located on the northern boundary of AT is a reference type for these calderas. EP lacks well-preserved impact crater morphologies, including a decreasing depth to diameter ratio. Conversely, Eden shares geomorphological attributes with terrestrial caldera complexes such as Valles Caldera (New Mexico): arcuate caldera walls, concentric fracturing/faulting, flat-topped benches, irregular geometric circumferences, etc. This study focuses on peripheral fractures surrounding EP to provide further evidence of calderas within the AT region. Scaled balloon experiments mimicking terrestrial caldera analogs have showcased fracturing/faulting patterns and relationships of caldera systems. These experiments show: 1) radial fracturing (perpendicular to caldera rim) upon inflation, 2) concentric faulting (parallel to sub-parallel to caldera rim) during evacuation, and 3) intersecting radial and concentric peripheral faulting from resurgence. Utilizing Mars Reconnaissance Orbiter Context Camera (CTX) imagery, peripheral fracturing is analyzed using GIS to study variations in peripheral fracture geometries relative to the caldera rim. Visually, concentric fractures dominate within 20 km, radial fractures prevail between 20 and 50 km, followed by gradation into randomly oriented and highly angular intersections in the fretted terrain region. Rose diagrams of orientation relative to north expose uniformly oriented mean regional stresses, but do not illuminate localized caldera stresses. Further examination of orientation relative to caldera rim reveals expected orientations of ±30° on rose diagrams, taking into account the geometric nature of concentric faulting. These results establish a quantitative geometric system to differentiate localized from regional faulting surrounding Eden Patera.
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability applicable to solving very large highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code which employs behavioral decomposition and complex fault-error handling models. This new capability is called MC-HARP which efficiently solves reliability models with non-constant failures rates (Weibull). Common mode failure modeling is also a specialty.
Digital-analog quantum simulation of generalized Dicke models with superconducting circuits
NASA Astrophysics Data System (ADS)
Lamata, Lucas
2017-03-01
We propose a digital-analog quantum simulation of generalized Dicke models with superconducting circuits, including Fermi- Bose condensates, biased and pulsed Dicke models, for all regimes of light-matter coupling. We encode these classes of problems in a set of superconducting qubits coupled with a bosonic mode implemented by a transmission line resonator. Via digital-analog techniques, an efficient quantum simulation can be performed in state-of-the-art circuit quantum electrodynamics platforms, by suitable decomposition into analog qubit-bosonic blocks and collective single-qubit pulses through digital steps. Moreover, just a single global analog block would be needed during the whole protocol in most of the cases, superimposed with fast periodic pulses to rotate and detune the qubits. Therefore, a large number of digital steps may be attained with this approach, providing a reduced digital error. Additionally, the number of gates per digital step does not grow with the number of qubits, rendering the simulation efficient. This strategy paves the way for the scalable digital-analog quantum simulation of many-body dynamics involving bosonic modes and spin degrees of freedom with superconducting circuits.
Digital-analog quantum simulation of generalized Dicke models with superconducting circuits
Lamata, Lucas
2017-01-01
We propose a digital-analog quantum simulation of generalized Dicke models with superconducting circuits, including Fermi- Bose condensates, biased and pulsed Dicke models, for all regimes of light-matter coupling. We encode these classes of problems in a set of superconducting qubits coupled with a bosonic mode implemented by a transmission line resonator. Via digital-analog techniques, an efficient quantum simulation can be performed in state-of-the-art circuit quantum electrodynamics platforms, by suitable decomposition into analog qubit-bosonic blocks and collective single-qubit pulses through digital steps. Moreover, just a single global analog block would be needed during the whole protocol in most of the cases, superimposed with fast periodic pulses to rotate and detune the qubits. Therefore, a large number of digital steps may be attained with this approach, providing a reduced digital error. Additionally, the number of gates per digital step does not grow with the number of qubits, rendering the simulation efficient. This strategy paves the way for the scalable digital-analog quantum simulation of many-body dynamics involving bosonic modes and spin degrees of freedom with superconducting circuits. PMID:28256559
Wang, Tianyang; Chu, Fulei; Han, Qinkai
2017-03-01
Identifying the differences between the spectra or envelope spectra of a faulty signal and a healthy baseline signal is an efficient planetary gearbox local fault detection strategy. However, causes other than local faults can also generate the characteristic frequency of a ring gear fault; this may further affect the detection of a local fault. To address this issue, a new filtering algorithm based on the meshing resonance phenomenon is proposed. In detail, the raw signal is first decomposed into different frequency bands and levels. Then, a new meshing index and an MRgram are constructed to determine which bands belong to the meshing resonance frequency band. Furthermore, an optimal filter band is selected from this MRgram. Finally, the ring gear fault can be detected according to the envelope spectrum of the band-pass filtering result. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Fault detection and diagnosis of photovoltaic systems
NASA Astrophysics Data System (ADS)
Wu, Xing
The rapid growth of the solar industry over the past several years has expanded the significance of photovoltaic (PV) systems. One of the primary aims of research in building-integrated PV systems is to improve the performance of the system's efficiency, availability, and reliability. Although much work has been done on technological design to increase a photovoltaic module's efficiency, there is little research so far on fault diagnosis for PV systems. Faults in a PV system, if not detected, may not only reduce power generation, but also threaten the availability and reliability, effectively the "security" of the whole system. In this paper, first a circuit-based simulation baseline model of a PV system with maximum power point tracking (MPPT) is developed using MATLAB software. MATLAB is one of the most popular tools for integrating computation, visualization and programming in an easy-to-use modeling environment. Second, data collection of a PV system at variable surface temperatures and insolation levels under normal operation is acquired. The developed simulation model of PV system is then calibrated and improved by comparing modeled I-V and P-V characteristics with measured I--V and P--V characteristics to make sure the simulated curves are close to those measured values from the experiments. Finally, based on the circuit-based simulation model, a PV model of various types of faults will be developed by changing conditions or inputs in the MATLAB model, and the I--V and P--V characteristic curves, and the time-dependent voltage and current characteristics of the fault modalities will be characterized for each type of fault. These will be developed as benchmark I-V or P-V, or prototype transient curves. If a fault occurs in a PV system, polling and comparing actual measured I--V and P--V characteristic curves with both normal operational curves and these baseline fault curves will aid in fault diagnosis.
Critical fault patterns determination in fault-tolerant computer systems
NASA Technical Reports Server (NTRS)
Mccluskey, E. J.; Losq, J.
1978-01-01
The method proposed tries to enumerate all the critical fault-patterns (successive occurrences of failures) without analyzing every single possible fault. The conditions for the system to be operating in a given mode can be expressed in terms of the static states. Thus, one can find all the system states that correspond to a given critical mode of operation. The next step consists in analyzing the fault-detection mechanisms, the diagnosis algorithm and the process of switch control. From them, one can find all the possible system configurations that can result from a failure occurrence. Thus, one can list all the characteristics, with respect to detection, diagnosis, and switch control, that failures must have to constitute critical fault-patterns. Such an enumeration of the critical fault-patterns can be directly used to evaluate the overall system tolerance to failures. Present research is focused on how to efficiently make use of these system-level characteristics to enumerate all the failures that verify these characteristics.
Fault Diagnosis for Micro-Gas Turbine Engine Sensors via Wavelet Entropy
Yu, Bing; Liu, Dongdong; Zhang, Tianhong
2011-01-01
Sensor fault diagnosis is necessary to ensure the normal operation of a gas turbine system. However, the existing methods require too many resources and this need can’t be satisfied in some occasions. Since the sensor readings are directly affected by sensor state, sensor fault diagnosis can be performed by extracting features of the measured signals. This paper proposes a novel fault diagnosis method for sensors based on wavelet entropy. Based on the wavelet theory, wavelet decomposition is utilized to decompose the signal in different scales. Then the instantaneous wavelet energy entropy (IWEE) and instantaneous wavelet singular entropy (IWSE) are defined based on the previous wavelet entropy theory. Subsequently, a fault diagnosis method for gas turbine sensors is proposed based on the results of a numerically simulated example. Then, experiments on this method are carried out on a real micro gas turbine engine. In the experiment, four types of faults with different magnitudes are presented. The experimental results show that the proposed method for sensor fault diagnosis is efficient. PMID:22163734
Fault diagnosis for micro-gas turbine engine sensors via wavelet entropy.
Yu, Bing; Liu, Dongdong; Zhang, Tianhong
2011-01-01
Sensor fault diagnosis is necessary to ensure the normal operation of a gas turbine system. However, the existing methods require too many resources and this need can't be satisfied in some occasions. Since the sensor readings are directly affected by sensor state, sensor fault diagnosis can be performed by extracting features of the measured signals. This paper proposes a novel fault diagnosis method for sensors based on wavelet entropy. Based on the wavelet theory, wavelet decomposition is utilized to decompose the signal in different scales. Then the instantaneous wavelet energy entropy (IWEE) and instantaneous wavelet singular entropy (IWSE) are defined based on the previous wavelet entropy theory. Subsequently, a fault diagnosis method for gas turbine sensors is proposed based on the results of a numerically simulated example. Then, experiments on this method are carried out on a real micro gas turbine engine. In the experiment, four types of faults with different magnitudes are presented. The experimental results show that the proposed method for sensor fault diagnosis is efficient.
NASA Astrophysics Data System (ADS)
Osmundsen, P. T.; Péron-Pinvidic, G.
2018-03-01
The large-magnitude faults that control crustal thinning and excision at rifted margins combine into laterally persistent structural boundaries that separate margin domains of contrasting morphology and structure. We term them breakaway complexes. At the Mid-Norwegian margin, we identify five principal breakaway complexes that separate the proximal, necking, distal, and outer margin domains. Downdip and lateral interactions between the faults that constitute breakaway complexes became fundamental to the evolution of the 3-D margin architecture. Different types of fault interaction are observed along and between these faults, but simple models for fault growth will not fully describe their evolution. These structures operate on the crustal scale, cut large thicknesses of heterogeneously layered lithosphere, and facilitate fundamental margin processes such as deformation coupling and exhumation. Variations in large-magnitude fault geometry, erosional footwall incision, and subsequent differential subsidence along the main breakaway complexes likely record the variable efficiency of these processes.
NASA Astrophysics Data System (ADS)
Ostapchuk, Alexey; Saltykov, Nikolay
2017-04-01
Excessive tectonic stresses accumulated in the area of rock discontinuity are released while a process of slip along preexisting faults. Spectrum of slip modes includes not only creeps and regular earthquakes but also some transitional regimes - slow-slip events, low-frequency and very low-frequency earthquakes. However, there is still no agreement in Geophysics community if such fast and slow events have mutual nature [Peng, Gomberg, 2010] or they present different physical phenomena [Ide et al., 2007]. Models of nucleation and evolution of fault slip events could be evolved by laboratory experiments in which regularities of shear deformation of gouge-filled fault are investigated. In the course of the work we studied deformation regularities of experimental fault by slider frictional experiments for development of unified law of evolution of fault and revelation of its parameters responsible for deformation mode realization. The experiments were conducted as a classic slider-model experiment, in which block under normal and shear stresses moves along interface. The volume between two rough surfaces was filled by thin layer of granular matter. Shear force was applied by a spring which deformed with a constant rate. In such experiments elastic energy was accumulated in the spring, and regularities of its releases were determined by regularities of frictional behaviour of experimental fault. A full spectrum of slip modes was simulated in laboratory experiments. Slight change of gouge characteristics (granule shape, content of clay), viscosity of interstitial fluid and level of normal stress make it possible to obtained gradual transformation of the slip modes from steady sliding and slow slip to regular stick-slip, with various amplitude of 'coseismic' displacement. Using method of asymptotic analogies we have shown that different slip modes can be specified in term of single formalism and preparation of different slip modes have uniform evolution law. It is shown that shear stiffness of experimental fault is the parameter, which control realization of certain slip modes. It is worth to be mentioned that different serious of transformation is characterized by functional dependences, which have general view and differ only in normalization factors. Findings authenticate that slow and fast slip events have mutual nature. Determination of fault stiffness and testing of fault gouge allow to estimate intensity of seismic events. The reported study was funded by RFBR according to the research project № 16-05-00694.
NASA Astrophysics Data System (ADS)
Hassanabadi, Amir Hossein; Shafiee, Masoud; Puig, Vicenc
2018-01-01
In this paper, sensor fault diagnosis of a singular delayed linear parameter varying (LPV) system is considered. In the considered system, the model matrices are dependent on some parameters which are real-time measurable. The case of inexact parameter measurements is considered which is close to real situations. Fault diagnosis in this system is achieved via fault estimation. For this purpose, an augmented system is created by including sensor faults as additional system states. Then, an unknown input observer (UIO) is designed which estimates both the system states and the faults in the presence of measurement noise, disturbances and uncertainty induced by inexact measured parameters. Error dynamics and the original system constitute an uncertain system due to inconsistencies between real and measured values of the parameters. Then, the robust estimation of the system states and the faults are achieved with H∞ performance and formulated with a set of linear matrix inequalities (LMIs). The designed UIO is also applicable for fault diagnosis of singular delayed LPV systems with unmeasurable scheduling variables. The efficiency of the proposed approach is illustrated with an example.
The Bootheel lineament, the 1811-1812 New Madrid earthquake sequence, and modern seismicity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schweig, E.S.; Ellis, M.A.
1992-01-01
Pedologic, geomorphic, and geochronologic data suggest that liquefaction occurred along the Bootheel lineament of Missouri and Arkansas during the 1811-1812 New Madrid earthquake sequence. The authors propose that the lineament may be the surface trace of a relatively young fault zone consisting of multiple strike-slip flower structures. These structures have been interpreted over a zone at least 5 km wide exhibiting deformed strata at least as young as a regional Eocene/Quaternary unconformity. In physical models, flower structures form in less rigid material in response to low finite displacement across a discrete strike-slip shear zone in a rigid basement. By analogy,more » the Bootheel lineament may represent the most recent attempt of a strike-slip fault zone of relatively low displacement to propagate through a weak cover. In addition, the Bootheel lineament extends between two well-established, seismically active strike-slip fault zones that current form a restraining step. Restraining steps along strike-slip fault zones are inherently unstable, and thus the Bootheel lineament may be acting to smooth the trace of the New Madrid seismic zone as displacement increases. The current seismic inactivity along the Bootheel lineament may be explained by sequential accommodation of complex strain in which the stress field is highly variable within the source volume. In other words, the current stress field may not represent that which operated during the 1811-1812 sequence. Alternatively, an earthquake on a fault associated with the bootheel lineament may have released sufficient strain energy to temporarily shut down activity.« less
Practical Issues in Implementing Software Reliability Measurement
NASA Technical Reports Server (NTRS)
Nikora, Allen P.; Schneidewind, Norman F.; Everett, William W.; Munson, John C.; Vouk, Mladen A.; Musa, John D.
1999-01-01
Many ways of estimating software systems' reliability, or reliability-related quantities, have been developed over the past several years. Of particular interest are methods that can be used to estimate a software system's fault content prior to test, or to discriminate between components that are fault-prone and those that are not. The results of these methods can be used to: 1) More accurately focus scarce fault identification resources on those portions of a software system most in need of it. 2) Estimate and forecast the risk of exposure to residual faults in a software system during operation, and develop risk and safety criteria to guide the release of a software system to fielded use. 3) Estimate the efficiency of test suites in detecting residual faults. 4) Estimate the stability of the software maintenance process.
Gligorijevic, Jovan; Gajic, Dragoljub; Brkovic, Aleksandar; Savic-Gajic, Ivana; Georgieva, Olga; Di Gennaro, Stefano
2016-03-01
The packaging materials industry has already recognized the importance of Total Productive Maintenance as a system of proactive techniques for improving equipment reliability. Bearing faults, which often occur gradually, represent one of the foremost causes of failures in the industry. Therefore, detection of their faults in an early stage is quite important to assure reliable and efficient operation. We present a new automated technique for early fault detection and diagnosis in rolling-element bearings based on vibration signal analysis. Following the wavelet decomposition of vibration signals into a few sub-bands of interest, the standard deviation of obtained wavelet coefficients is extracted as a representative feature. Then, the feature space dimension is optimally reduced to two using scatter matrices. In the reduced two-dimensional feature space the fault detection and diagnosis is carried out by quadratic classifiers. Accuracy of the technique has been tested on four classes of the recorded vibrations signals, i.e., normal, with the fault of inner race, outer race, and ball operation. The overall accuracy of 98.9% has been achieved. The new technique can be used to support maintenance decision-making processes and, thus, to increase reliability and efficiency in the industry by preventing unexpected faulty operation of bearings.
Gligorijevic, Jovan; Gajic, Dragoljub; Brkovic, Aleksandar; Savic-Gajic, Ivana; Georgieva, Olga; Di Gennaro, Stefano
2016-01-01
The packaging materials industry has already recognized the importance of Total Productive Maintenance as a system of proactive techniques for improving equipment reliability. Bearing faults, which often occur gradually, represent one of the foremost causes of failures in the industry. Therefore, detection of their faults in an early stage is quite important to assure reliable and efficient operation. We present a new automated technique for early fault detection and diagnosis in rolling-element bearings based on vibration signal analysis. Following the wavelet decomposition of vibration signals into a few sub-bands of interest, the standard deviation of obtained wavelet coefficients is extracted as a representative feature. Then, the feature space dimension is optimally reduced to two using scatter matrices. In the reduced two-dimensional feature space the fault detection and diagnosis is carried out by quadratic classifiers. Accuracy of the technique has been tested on four classes of the recorded vibrations signals, i.e., normal, with the fault of inner race, outer race, and ball operation. The overall accuracy of 98.9% has been achieved. The new technique can be used to support maintenance decision-making processes and, thus, to increase reliability and efficiency in the industry by preventing unexpected faulty operation of bearings. PMID:26938541
A structural model decomposition framework for systems health management
NASA Astrophysics Data System (ADS)
Roychoudhury, I.; Daigle, M.; Bregon, A.; Pulido, B.
Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.
A Structural Model Decomposition Framework for Systems Health Management
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belamino
2013-01-01
Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.
NASA Astrophysics Data System (ADS)
Wang, Lei; Bai, Bing; Li, Xiaochun; Liu, Mingze; Wu, Haiqing; Hu, Shaobin
2016-07-01
Induced seismicity and fault reactivation associated with fluid injection and depletion were reported in hydrocarbon, geothermal, and waste fluid injection fields worldwide. Here, we establish an analytical model to assess fault reactivation surrounding a reservoir during fluid injection and extraction that considers the stress concentrations at the fault tips and the effects of fault length. In this model, induced stress analysis in a full-space under the plane strain condition is implemented based on Eshelby's theory of inclusions in terms of a homogeneous, isotropic, and poroelastic medium. The stress intensity factor concept in linear elastic fracture mechanics is adopted as an instability criterion for pre-existing faults in surrounding rocks. To characterize the fault reactivation caused by fluid injection and extraction, we define a new index, the "fault reactivation factor" η, which can be interpreted as an index of fault stability in response to fluid pressure changes per unit within a reservoir resulting from injection or extraction. The critical fluid pressure change within a reservoir is also determined by the superposition principle using the in situ stress surrounding a fault. Our parameter sensitivity analyses show that the fault reactivation tendency is strongly sensitive to fault location, fault length, fault dip angle, and Poisson's ratio of the surrounding rock. Our case study demonstrates that the proposed model focuses on the mechanical behavior of the whole fault, unlike the conventional methodologies. The proposed method can be applied to engineering cases related to injection and depletion within a reservoir owing to its efficient computational codes implementation.
On the impact of approximate computation in an analog DeSTIN architecture.
Young, Steven; Lu, Junjie; Holleman, Jeremy; Arel, Itamar
2014-05-01
Deep machine learning (DML) holds the potential to revolutionize machine learning by automating rich feature extraction, which has become the primary bottleneck of human engineering in pattern recognition systems. However, the heavy computational burden renders DML systems implemented on conventional digital processors impractical for large-scale problems. The highly parallel computations required to implement large-scale deep learning systems are well suited to custom hardware. Analog computation has demonstrated power efficiency advantages of multiple orders of magnitude relative to digital systems while performing nonideal computations. In this paper, we investigate typical error sources introduced by analog computational elements and their impact on system-level performance in DeSTIN--a compositional deep learning architecture. These inaccuracies are evaluated on a pattern classification benchmark, clearly demonstrating the robustness of the underlying algorithm to the errors introduced by analog computational elements. A clear understanding of the impacts of nonideal computations is necessary to fully exploit the efficiency of analog circuits.
Configurable analog-digital conversion using the neural engineering framework
Mayr, Christian G.; Partzsch, Johannes; Noack, Marko; Schüffny, Rene
2014-01-01
Efficient Analog-Digital Converters (ADC) are one of the mainstays of mixed-signal integrated circuit design. Besides the conventional ADCs used in mainstream ICs, there have been various attempts in the past to utilize neuromorphic networks to accomplish an efficient crossing between analog and digital domains, i.e., to build neurally inspired ADCs. Generally, these have suffered from the same problems as conventional ADCs, that is they require high-precision, handcrafted analog circuits and are thus not technology portable. In this paper, we present an ADC based on the Neural Engineering Framework (NEF). It carries out a large fraction of the overall ADC process in the digital domain, i.e., it is easily portable across technologies. The analog-digital conversion takes full advantage of the high degree of parallelism inherent in neuromorphic networks, making for a very scalable ADC. In addition, it has a number of features not commonly found in conventional ADCs, such as a runtime reconfigurability of the ADC sampling rate, resolution and transfer characteristic. PMID:25100933
NASA Astrophysics Data System (ADS)
Agarwal, Smriti; Singh, Dharmendra
2016-04-01
Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we have dealt with a novel MMW imaging application, i.e., non-invasive packaged goods quality estimation for industrial quality monitoring applications. An active MMW imaging radar operating at 60 GHz has been ingeniously designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. A comparison of computer vision-based state-of-the-art feature extraction techniques, viz, discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray level co-occurrence texture (GLCM), and histogram of oriented gradient (HOG) has been done with respect to their efficient and differentiable feature vector generation capability for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, diagonal crack along with the non-faulty tiles. Further, an independent algorithm validation was done demonstrating classification accuracy: 80, 86.67, 73.33, and 93.33 % for DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. Classification results show good capability for HOG feature extraction technique towards non-destructive quality inspection with appreciably low false alarm as compared to other techniques. Thereby, a robust and optimal image feature-based neural network classification model has been proposed for non-invasive, automatic fault monitoring for a financially and commercially competent industrial growth.
NASA Technical Reports Server (NTRS)
Carreno, Victor A.; Choi, G.; Iyer, R. K.
1990-01-01
A simulation study is described which predicts the susceptibility of an advanced control system to electrical transients resulting in logic errors, latched errors, error propagation, and digital upset. The system is based on a custom-designed microprocessor and it incorporates fault-tolerant techniques. The system under test and the method to perform the transient injection experiment are described. Results for 2100 transient injections are analyzed and classified according to charge level, type of error, and location of injection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renaut, R.W.; Owen, R.B.
An unusual group of cherts found at saline, alkaline Lake Bogoria in the Kenya Rift differs from the Magadi-type cherts commonly associated with saline, alkaline lakes. The cherts are opaline, rich in diatoms, and formed from a siliceous, probably gelatinous, precursor that precipitated around submerged alkaline hot springs during a Holocene phase of high lake level. Silica precipitation resulted from rapid drop in the temperature of the spring waters and, possibly, pH. Lithification began before subaerial exposure. Ancient analogous cherts are likely to be localized deposits along fault lines.
Crustal strength anisotropy influences landscape form and longevity
NASA Astrophysics Data System (ADS)
Roy, S. G.; Koons, P. O.; Upton, P.; Tucker, G. E.
2013-12-01
Lithospheric deformation is increasingly recognized as integral to landscape evolution. Here we employ a coupled orogenic and landscape model to test the hypothesis that strain-induced crustal failure exerts the dominant control on rates and patterns of orogenic landscape evolution. We assume that erodibility is inversely proportional to cohesion for bedrock rivers host to bedload abrasion. Crustal failure can potentially reduce cohesion by several orders of magnitude along meter scale planar fault zones. The strain-induced cohesion field is generated by use of a strain softening upper crustal rheology in our orogenic model. Based on the results of our coupled model, we predict that topographic anisotropy found in natural orogens is largely a consequence of strain-induced anisotropy in the near surface strength field. The lifespan and geometry of mountain ranges are strongly sensitive to 1) the acute division in erodibility values between the damaged fault zones and the surrounding intact rock and 2) the fault zone orientations for a given tectonic regime. The large division in erodibility between damaged and intact rock combined with the dependence on fault zone orientation provides a spectrum of rates at which a landscape will respond to tectonic or climatic perturbations. Knickpoint migration is about an order of magnitude faster along the exposed cores of fault zones when compared to rates in intact rock, and migration rate increases with fault dip. The contrast in relative erosion rate confines much of the early stage fluvial erosion and establishes a major drainage network that reflects the orientations of exposed fault zones. Slower erosion into the surrounding intact rock typically creates small tributaries that link orthogonally to the structurally confined channels. The large divide in fluvial erosion rate permits the long term persistence of the tectonic signal in the landscape and partly contributes to orogen longevity. Landscape morphology and channel tortuosity together provide critical information on the orientation and spatial distribution of fault damage and the relevant tectonic regime. Our landscape evolution models express similar mechanisms and produce drainage network patterns analogous to those seen in the Southern Alps of New Zealand and the Himalayan Eastern Syntaxis, both centers of active lithospheric deformation.
NASA Astrophysics Data System (ADS)
Martín-Martín, Manuel; Estévez, Antonio; Martín-Rojas, Ivan; Guerrera, Francesco; Alcalá, Francisco J.; Serrano, Francisco; Tramontana, Mario
2018-03-01
The Agost Basin is characterized by a Miocene-Quaternary shallow marine and continental infilling controlled by the evolution of several curvilinear faults involving salt tectonics derived from Triassic rocks. From the Serravallian on, the area experienced a horizontal maximum compression with a rotation of the maximum stress axis from E-W to N-S. The resulting deformation gave rise to a strike-slip fault whose evolution is characterized progressively by three stages: (1) stepover/releasing bend with a dextral motion of blocks; (2) very close to pure horizontal compression; and (3) restraining bend with a sinistral movement of blocks. In particular, after an incipient fracturing stage, faults generated a pull-apart basin with terraced sidewall fault and graben subzones developed in the context of a dextral stepover during the lower part of late Miocene p.p. The occurrence of Triassic shales and evaporites played a fundamental role in the tectonic evolution of the study area. The salty material flowed along faults during this stage generating salt walls in root zones and salt push-up structures at the surface. During the purely compressive stage (middle part of late Miocene p.p.) the salt walls were squeezed to form extrusive mushroom-like structures. The large amount of clayish and salty material that surfaced was rapidly eroded and deposited into the basin, generating prograding fan clinoforms. The occurrence of shales and evaporites (both in the margins of the basin and in the proper infilling) favored folding of basin deposits, faulting, and the formation of rising blocks. Later, in the last stage (upper part of late Miocene p.p.), the area was affected by sinistral restraining conditions and faults must have bent to their current shape. The progressive folding of the basin and deformation of margins changed the supply points and finally caused the end of deposition and the beginning of the current erosive systems. On the basis of the interdisciplinary results, the Agost Basin can be considered a key case of the interference between salt tectonics and the evolution of strike-slip fault zones. The reconstructed model has been compared with several scaled sandbox analogical models and with some natural pull-apart basins.
Regional patterns of earthquake-triggered landslides and their relation to ground motion
NASA Astrophysics Data System (ADS)
Meunier, Patrick; Hovius, Niels; Haines, A. John
2007-10-01
We have documented patterns of landsliding associated with large earthquakes on three thrust faults: the Northridge earthquake in California, Chi-Chi earthquake in Taiwan, and two earthquakes on the Ramu-Markham fault bounding the Finisterre Mountains of Papua New Guinea. In each case, landslide densities are shown to be greatest in the area of strongest ground acceleration and to decay with distance from the epicenter. In California and Taiwan, the density of co-seismic landslides is linearly and highly correlated with both the vertical and horizontal components of measured peak ground acceleration. Based on this observation, we derive an expression for the spatial variation of landslide density analogous with regional seismic attenuation laws. In its general form, this expression applies to our three examples, and we determine best fit values for individual cases. Our findings open a window on the construction of shake maps from geomorphic observations for earthquakes in non-instrumented regions.
NASA Astrophysics Data System (ADS)
Webb, A. G.; He, D.; Yu, H.
2015-12-01
This presentation and another presentation led by Dawn Kellett will preface a ten-minute open discussion on how the Himalayan middle crust was developed and emplaced. Current hypotheses are transitioning from a set including wedge extrusion, channel flow with focused denudation, and tectonic wedging to a revised dichotomy: models with intense upper plate out-of-sequence activity (i.e., tunneling of channel flow, and critical taper wedge behavior) versus models in which the upper plate mainly records basal accretion of horses (i.e., duplexing). Critical taper and duplexing offer a simple contrast that can be illustrated via food analogies. If a wedge is critical, it churns internally like a pile of CheeriosTM cereal pushed up an inclined plane. Stacking of a duplex acts like a deli meat-slicing machine: slice after slice is cut from the intact block to a stack of slices, but neither the block (~down-going plate) nor the stack (~upper plate) features much internal deformation. Thus critical taper and channel tunneling models predict much processing via out-of-sequence deformation, whereas duplexing predicts in-sequence thrusting. The two concepts may be considered end-members. Recent work shows that the Himalayan middle crust has been assembled along a series of mainly southwards-younging thrust faults. The thrust faults separate 1-5 km thick panels that experienced similar metamorphic cycles during different time periods. Out-of-sequence deformation is rare, with its apparent significance enhanced because of the high throw-to-heave ratio of out-of-sequence thrusting. Flattening fabrics developed prior to thrusting have been interpreted to record either (1) southwards channel tunneling across the upper plate, or (2) fabric development during metamorphism of the down-going plate. We will argue that the thrust faults dominantly represent in-sequence duplexing, and therefore conclude that the Himalaya and analogous hot orogens behave like other accretionary orogens.
Initiation of a thrust fault revealed by analog experiments
NASA Astrophysics Data System (ADS)
Dotare, Tatsuya; Yamada, Yasuhiro; Adam, Juergen; Hori, Takane; Sakaguchi, Hide
2016-08-01
To reveal in detail the process of initiation of a thrust fault, we conducted analog experiments with dry quartz sand using a high-resolution digital image correlation technique to identify minor shear-strain patterns for every 27 μm of shortening (with an absolute displacement accuracy of 0.5 μm). The experimental results identified a number of "weak shear bands" and minor uplift prior to the initiation of a thrust in cross-section view. The observations suggest that the process is closely linked to the activity of an adjacent existing thrust, and can be divided into three stages. Stage 1 is characterized by a series of abrupt and short-lived weak shear bands at the location where the thrust will subsequently be generated. The area that will eventually be the hanging wall starts to uplift before the fault forms. The shear strain along the existing thrust decreases linearly during this stage. Stage 2 is defined by the generation of the new thrust and active displacements along it, identified by the shear strain along the thrust. The location of the new thrust may be constrained by its back-thrust, generally produced at the foot of the surface slope. The activity of the existing thrust falls to zero once the new thrust is generated, although these two events are not synchronous. Stage 3 of the thrust is characterized by a constant displacement that corresponds to the shortening applied to the model. Similar minor shear bands have been reported in the toe area of the Nankai accretionary prism, SW Japan. By comparing several transects across this subduction margin, we can classify the lateral variations in the structural geometry into the same stages of deformation identified in our experiments. Our findings may also be applied to the evaluation of fracture distributions in thrust belts during unconventional hydrocarbon exploration and production.
Spatial and Temporal Variations in Slip Partitioning During Oblique Convergence Experiments
NASA Astrophysics Data System (ADS)
Beyer, J. L.; Cooke, M. L.; Toeneboehn, K.
2017-12-01
Physical experiments of oblique convergence in wet kaolin demonstrate the development of slip partitioning, where two faults accommodate strain via different slip vectors. In these experiments, the second fault forms after the development of the first fault. As one strain component is relieved by one fault, the local stress field then favors the development of a second fault with different slip sense. A suite of physical experiments reveals three styles of slip partitioning development controlled by the convergence angle and presence of a pre-existing fault. In experiments with low convergence angles, strike-slip faults grow prior to reverse faults (Type 1) regardless of whether the fault is precut or not. In experiments with moderate convergence angles, slip partitioning is dominantly controlled by the presence of a pre-existing fault. In all experiments, the primarily reverse fault forms first. Slip partitioning then develops with the initiation of strike-slip along the precut fault (Type 2) or growth of a secondary reverse fault where the first fault is steepest. Subsequently, the slip on the first fault transitions to primarily strike-slip (Type 3). Slip rates and rakes along the slip partitioned faults for both precut and uncut experiments vary temporally, suggesting that faults in these slip-partitioned systems are constantly adapting to the conditions produced by slip along nearby faults in the system. While physical experiments show the evolution of slip partitioning, numerical simulations of the experiments provide information about both the stress and strain fields, which can be used to compute the full work budget, providing insight into the mechanisms that drive slip partitioning. Preliminary simulations of precut experiments show that strain energy density (internal work) can be used to predict fault growth, highlighting where fault growth can reduce off-fault deformation in the physical experiments. In numerical simulations of uncut experiments with a first non-planar oblique slip fault, strain energy density is greatest where the first fault is steepest, as less convergence is accommodated along this portion of the fault. The addition of a second slip-partitioning fault to the system decreases external work indicating that these faults increase the mechanical efficiency of the system.
Poka Yoke system based on image analysis and object recognition
NASA Astrophysics Data System (ADS)
Belu, N.; Ionescu, L. M.; Misztal, A.; Mazăre, A.
2015-11-01
Poka Yoke is a method of quality management which is related to prevent faults from arising during production processes. It deals with “fail-sating” or “mistake-proofing”. The Poka-yoke concept was generated and developed by Shigeo Shingo for the Toyota Production System. Poka Yoke is used in many fields, especially in monitoring production processes. In many cases, identifying faults in a production process involves a higher cost than necessary cost of disposal. Usually, poke yoke solutions are based on multiple sensors that identify some nonconformities. This means the presence of different equipment (mechanical, electronic) on production line. As a consequence, coupled with the fact that the method itself is an invasive, affecting the production process, would increase its price diagnostics. The bulky machines are the means by which a Poka Yoke system can be implemented become more sophisticated. In this paper we propose a solution for the Poka Yoke system based on image analysis and identification of faults. The solution consists of a module for image acquisition, mid-level processing and an object recognition module using associative memory (Hopfield network type). All are integrated into an embedded system with AD (Analog to Digital) converter and Zync 7000 (22 nm technology).
NASA Astrophysics Data System (ADS)
Chen, Jian; Randall, Robert Bond; Peeters, Bart
2016-06-01
Artificial Neural Networks (ANNs) have the potential to solve the problem of automated diagnostics of piston slap faults, but the critical issue for the successful application of ANN is the training of the network by a large amount of data in various engine conditions (different speed/load conditions in normal condition, and with different locations/levels of faults). On the other hand, the latest simulation technology provides a useful alternative in that the effect of clearance changes may readily be explored without recourse to cutting metal, in order to create enough training data for the ANNs. In this paper, based on some existing simplified models of piston slap, an advanced multi-body dynamic simulation software was used to simulate piston slap faults with different speeds/loads and clearance conditions. Meanwhile, the simulation models were validated and updated by a series of experiments. Three-stage network systems are proposed to diagnose piston faults: fault detection, fault localisation and fault severity identification. Multi Layer Perceptron (MLP) networks were used in the detection stage and severity/prognosis stage and a Probabilistic Neural Network (PNN) was used to identify which cylinder has faults. Finally, it was demonstrated that the networks trained purely on simulated data can efficiently detect piston slap faults in real tests and identify the location and severity of the faults as well.
NASA Astrophysics Data System (ADS)
Bradbury, Kelly K.; Davis, Colter R.; Shervais, John W.; Janecke, Susanne U.; Evans, James P.
2015-05-01
We examine the fine-scale variations in mineralogical composition, geochemical alteration, and texture of the fault-related rocks from the Phase 3 whole-rock core sampled between 3,187.4 and 3,301.4 m measured depth within the San Andreas Fault Observatory at Depth (SAFOD) borehole near Parkfield, California. This work provides insight into the physical and chemical properties, structural architecture, and fluid-rock interactions associated with the actively deforming traces of the San Andreas Fault zone at depth. Exhumed outcrops within the SAF system comprised of serpentinite-bearing protolith are examined for comparison at San Simeon, Goat Rock State Park, and Nelson Creek, California. In the Phase 3 SAFOD drillcore samples, the fault-related rocks consist of multiple juxtaposed lenses of sheared, foliated siltstone and shale with block-in-matrix fabric, black cataclasite to ultracataclasite, and sheared serpentinite-bearing, finely foliated fault gouge. Meters-wide zones of sheared rock and fault gouge correlate to the sites of active borehole casing deformation and are characterized by scaly clay fabric with multiple discrete slip surfaces or anastomosing shear zones that surround conglobulated or rounded clasts of compacted clay and/or serpentinite. The fine gouge matrix is composed of Mg-rich clays and serpentine minerals (saponite ± palygorskite, and lizardite ± chrysotile). Whole-rock geochemistry data show increases in Fe-, Mg-, Ni-, and Cr-oxides and hydroxides, Fe-sulfides, and C-rich material, with a total organic content of >1 % locally in the fault-related rocks. The faults sampled in the field are composed of meters-thick zones of cohesive to non-cohesive, serpentinite-bearing foliated clay gouge and black fine-grained fault rock derived from sheared Franciscan Formation or serpentinized Coast Range Ophiolite. X-ray diffraction of outcrop samples shows that the foliated clay gouge is composed primarily of saponite and serpentinite, with localized increases in Ni- and Cr-oxides and C-rich material over several meters. Mesoscopic and microscopic textures and deformation mechanisms interpreted from the outcrop sites are remarkably similar to those observed in the SAFOD core. Micro-scale to meso-scale fabrics observed in the SAFOD core exhibit textural characteristics that are common in deformed serpentinites and are often attributed to aseismic deformation with episodic seismic slip. The mineralogy and whole-rock geochemistry results indicate that the fault zone experienced transient fluid-rock interactions with fluids of varying chemical composition, including evidence for highly reducing, hydrocarbon-bearing fluids.
A novel KFCM based fault diagnosis method for unknown faults in satellite reaction wheels.
Hu, Di; Sarosh, Ali; Dong, Yun-Feng
2012-03-01
Reaction wheels are one of the most critical components of the satellite attitude control system, therefore correct diagnosis of their faults is quintessential for efficient operation of these spacecraft. The known faults in any of the subsystems are often diagnosed by supervised learning algorithms, however, this method fails to work correctly when a new or unknown fault occurs. In such cases an unsupervised learning algorithm becomes essential for obtaining the correct diagnosis. Kernel Fuzzy C-Means (KFCM) is one of the unsupervised algorithms, although it has its own limitations; however in this paper a novel method has been proposed for conditioning of KFCM method (C-KFCM) so that it can be effectively used for fault diagnosis of both known and unknown faults as in satellite reaction wheels. The C-KFCM approach involves determination of exact class centers from the data of known faults, in this way discrete number of fault classes are determined at the start. Similarity parameters are derived and determined for each of the fault data point. Thereafter depending on the similarity threshold each data point is issued with a class label. The high similarity points fall into one of the 'known-fault' classes while the low similarity points are labeled as 'unknown-faults'. Simulation results show that as compared to the supervised algorithm such as neural network, the C-KFCM method can effectively cluster historical fault data (as in reaction wheels) and diagnose the faults to an accuracy of more than 91%. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hsieh, Shang Yu; Neubauer, Franz; Cloetingh, Sierd; Willingshofer, Ernst; Sokoutis, Dimitrios
2014-05-01
The internal structure of major strike-slip faults is still poorly understood, particularly how the deep structure could be inferred from its surface expression (Molnar and Dayem, 2011 and references therein). Previous analogue experiments suggest that the convergence angle is the most influential factor (Leever et al., 2011). Further analogue modeling may allow a better understanding how to extrapolate surface structures to the subsurface geometry of strike-slip faults. Various scenarios of analogue experiments were designed to represent strike-slip faults in nature from different geological settings. As such key parameters, which are investigated in this study include: (a) the angle of convergence, (b) the thickness of brittle layer, (c) the influence of a rheological weak layer within the crust, and (d) influence of a thick and rheologically weak layer at the base of the crust. The latter aimed to simulate the effect of a hot metamorphic core complex or an alignment of uprising plutons bordered by a transtensional/transpressional strike-slip fault. The experiments are aimed to explain first order structures along major transcurrent strike-slip faults such as the Altyn, Kunlun, San Andrea and Greendale (Darfield earthquake 2010) faults. The preliminary results show that convergence angle significantly influences the overall geometry of the transpressive system with greater convergence angles resulting in wider fault zones and higher elevation. Different positions, densities and viscosities of weak rheological layers have not only different surface expressions but also affect the fault geometry in the subsurface. For instance, rheological weak material in the bottom layer results in stretching when experiment reaches a certain displacement and a buildup of a less segmented, wide positive flower structure. At the surface, a wide fault valley in the middle of the fault zone is the reflection of stretching along the velocity discontinuity at depth. In models with a thin and rheologically weaker layer in the middle of the brittle layer, deformation is distributed over more faults and the geometry of the fault zone below and above the weak zone shows significant differences, suggesting that the correlation of structures across a weak layer has to be supported by geophysical data, which help constraining the geometry of the deep part. This latter experiment has significantly similar phenomena in reality, such as few pressure ridges along Altyn fault. The experimental results underline the need to understand the role of the convergence angle and the influence of rheology on fault evolution, in order to connect between surface deformation and subsurface geometry. References Leever, K. A., Gabrielsen, R. H., Sokoutis, D., Willingshofer, E., 2011. The effect of convergence angle on the kinematic evolution of strain partitioning in transpressional brittle wedges: Insight from analog modeling and high-resolution digital image analysis. Tectonics, 30(2), TC2013. Molnar, P., Dayem, K.E., 2010. Major intracontinental strike-slip faults and contrasts in lithospheric strength. Geosphere, 6, 444-467.
Differential Fault Analysis on CLEFIA
NASA Astrophysics Data System (ADS)
Chen, Hua; Wu, Wenling; Feng, Dengguo
CLEFIA is a new 128-bit block cipher proposed by SONY corporation recently. The fundamental structure of CLEFIA is a generalized Feistel structure consisting of 4 data lines. In this paper, the strength of CLEFIA against the differential fault attack is explored. Our attack adopts the byte-oriented model of random faults. Through inducing randomly one byte fault in one round, four bytes of faults can be simultaneously obtained in the next round, which can efficiently reduce the total induce times in the attack. After attacking the last several rounds' encryptions, the original secret key can be recovered based on some analysis of the key schedule. The data complexity analysis and experiments show that only about 18 faulty ciphertexts are needed to recover the entire 128-bit secret key and about 54 faulty ciphertexts for 192/256-bit keys.
Test pattern generation for ILA sequential circuits
NASA Technical Reports Server (NTRS)
Feng, YU; Frenzel, James F.; Maki, Gary K.
1993-01-01
An efficient method of generating test patterns for sequential machines implemented using one-dimensional, unilateral, iterative logic arrays (ILA's) of BTS pass transistor networks is presented. Based on a transistor level fault model, the method affords a unique opportunity for real-time fault detection with improved fault coverage. The resulting test sets are shown to be equivalent to those obtained using conventional gate level models, thus eliminating the need for additional test patterns. The proposed method advances the simplicity and ease of the test pattern generation for a special class of sequential circuitry.
The detection error of thermal test low-frequency cable based on M sequence correlation algorithm
NASA Astrophysics Data System (ADS)
Wu, Dongliang; Ge, Zheyang; Tong, Xin; Du, Chunlin
2018-04-01
The problem of low accuracy and low efficiency of off-line detecting on thermal test low-frequency cable faults could be solved by designing a cable fault detection system, based on FPGA export M sequence code(Linear feedback shift register sequence) as pulse signal source. The design principle of SSTDR (Spread spectrum time-domain reflectometry) reflection method and hardware on-line monitoring setup figure is discussed in this paper. Testing data show that, this detection error increases with fault location of thermal test low-frequency cable.
Rolling bearing fault diagnosis based on information fusion using Dempster-Shafer evidence theory
NASA Astrophysics Data System (ADS)
Pei, Di; Yue, Jianhai; Jiao, Jing
2017-10-01
This paper presents a fault diagnosis method for rolling bearing based on information fusion. Acceleration sensors are arranged at different position to get bearing vibration data as diagnostic evidence. The Dempster-Shafer (D-S) evidence theory is used to fuse multi-sensor data to improve diagnostic accuracy. The efficiency of the proposed method is demonstrated by the high speed train transmission test bench. The results of experiment show that the proposed method in this paper improves the rolling bearing fault diagnosis accuracy compared with traditional signal analysis methods.
NASA Astrophysics Data System (ADS)
Shao, Haidong; Jiang, Hongkai; Zhang, Haizhou; Duan, Wenjing; Liang, Tianchen; Wu, Shuaipeng
2018-02-01
The vibration signals collected from rolling bearing are usually complex and non-stationary with heavy background noise. Therefore, it is a great challenge to efficiently learn the representative fault features of the collected vibration signals. In this paper, a novel method called improved convolutional deep belief network (CDBN) with compressed sensing (CS) is developed for feature learning and fault diagnosis of rolling bearing. Firstly, CS is adopted for reducing the vibration data amount to improve analysis efficiency. Secondly, a new CDBN model is constructed with Gaussian visible units to enhance the feature learning ability for the compressed data. Finally, exponential moving average (EMA) technique is employed to improve the generalization performance of the constructed deep model. The developed method is applied to analyze the experimental rolling bearing vibration signals. The results confirm that the developed method is more effective than the traditional methods.
Semithiobambus[6]uril is a transmembrane anion transporter.
Lang, Chao; Mohite, Amar; Deng, Xiaoli; Yang, Feihu; Dong, Zeyuan; Xu, Jiayun; Liu, Junqiu; Keinan, Ehud; Reany, Ofer
2017-07-04
Semithiobambus[6]uril is shown to be an efficient transmembrane anion transporter. Although all bambusuril analogs (having either O, S or N atoms in their portals) are excellent anion binders, only the sulfur analog is also an effective anion transporter capable of polarizing lipid membranes through selective anion uniport. This notable divergence reflects significant differences in the lipophilic character of the bambusuril analogs.
NASA Astrophysics Data System (ADS)
Bian, D.; Lin, A.
2016-12-01
Distinguishing the seismic ruptures during the earthquake from a lot of fractures in borehole core is very important to understand rupture processes and seismic efficiency. In particular, a great earthquake like the 1995 Mw 7.2 Kobe earthquake, but again, evidence has been limited to the grain size analysis and the color of fault gouge. In the past two decades, increasing geological evidence has emerged that seismic faults and shear zones within the middle to upper crust play a crucial role in controlling the architectures of crustal fluid migration. Rock-fluid interactions along seismogenic faults give us a chance to find the seismic ruptures from the same event. Recently, a new project of "Drilling into Fault Damage Zone" has being conducted by Kyoto University on the Nojima Fault again after 20 years of the 1995 Kobe earthquake for an integrated multidisciplinary study on the assessment of activity of active faults involving active tectonics, geochemistry and geochronology of active fault zones. In this work, we report on the signature of slip plane inside the Nojima Fault associated with individual earthquakes on the basis of trace element and isotope analyses. Trace element concentrations and 87Sr/86Sr ratios of fault gouge and host rocks were determined by an inductively coupled plasma mass spectrometer (ICP-MS) and thermal ionization mass spectrometry (TIMS). Samples were collected from two trenches and an outcrop of Nojima Fault which. Based on the geochemical result, we interpret these geochemical results in terms of fluid-rock interactions recorded in fault friction during earthquake. The trace-element enrichment pattern of the slip plane can be explained by fluid-rock interactions at high temperature. It also can help us find the main coseismic fault slipping plane inside the thick fault gouge zone.
NASA Astrophysics Data System (ADS)
Wang, K.; Fialko, Y. A.
2017-12-01
The Mw 7.7 Balochistan earthquake occurred on September 24th, 2013 in southwestern Pakistan. The earthquake rupture was characterized by mostly left-lateral strike slip, with a limited thrust component, on a system of curved, non-vertical (dip angle of 45-75 deg.) faults, including the Hoshab fault, and the Chaman fault at the North-East end of the rupture. We used Interferometric Synthetic Aperture Radar (InSAR) data from Sentinel-1 mission to derive the timeseries of postseismic displacements due to the 2013 Balochistan earthquake. Data from one ascending and two descending satellite tracks reveal robust post-seismic deformation during the observation period (October 2014 to April 2017). The postseismic InSAR observations are characterized by the line of sight (LOS) displacements primarily on the hanging wall side of the fault. The LOS displacements have different signs in data from the ascending and descending tracks (decreases and increases in the radar range, respectively), indicating that the postseismic deformation following the 2013 Balochistan earthquake was dominated by horizontal motion with the same sense as the coseismic motion. Kinematic inversions show that the observed InSAR LOS displacements are well explained by the left-lateral afterslip downdip of the high coseismic slip area. Contributions from the viscoelastic relaxation and poroelastic rebound seem to be negligible during the observation period. We also observe a sharp discontinuity in the postseismic displacement field on the North-East continuation of the 2013 rupture, along the Chaman fault. We verify that this discontinuity is not due to aftershocks, as the relative LOS velocities across this discontinuity show a gradually decelerating motion throughout the observation period. These observations are indicative of a creeping fault segment at the North-East end of the 2013 earthquake rupture that likely acted as a barrier to the rupture propagation. Analysis of Envisat data acquired prior to the 2013 event (2004-2010) confirms creep on the respective fault segment at a rate of 5-6 mm/yr. The creep rate has increased by more than an order of magnitude after the 2013 event. The inferred along-strike variations in the degree of fault locking may be analogous to those on the central section of the San Andreas fault in California.
Fault diagnosis method based on FFT-RPCA-SVM for Cascaded-Multilevel Inverter.
Wang, Tianzhen; Qi, Jie; Xu, Hao; Wang, Yide; Liu, Lei; Gao, Diju
2016-01-01
Thanks to reduced switch stress, high quality of load wave, easy packaging and good extensibility, the cascaded H-bridge multilevel inverter is widely used in wind power system. To guarantee stable operation of system, a new fault diagnosis method, based on Fast Fourier Transform (FFT), Relative Principle Component Analysis (RPCA) and Support Vector Machine (SVM), is proposed for H-bridge multilevel inverter. To avoid the influence of load variation on fault diagnosis, the output voltages of the inverter is chosen as the fault characteristic signals. To shorten the time of diagnosis and improve the diagnostic accuracy, the main features of the fault characteristic signals are extracted by FFT. To further reduce the training time of SVM, the feature vector is reduced based on RPCA that can get a lower dimensional feature space. The fault classifier is constructed via SVM. An experimental prototype of the inverter is built to test the proposed method. Compared to other fault diagnosis methods, the experimental results demonstrate the high accuracy and efficiency of the proposed method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Designing Fault-Injection Experiments for the Reliability of Embedded Systems
NASA Technical Reports Server (NTRS)
White, Allan L.
2012-01-01
This paper considers the long-standing problem of conducting fault-injections experiments to establish the ultra-reliability of embedded systems. There have been extensive efforts in fault injection, and this paper offers a partial summary of the efforts, but these previous efforts have focused on realism and efficiency. Fault injections have been used to examine diagnostics and to test algorithms, but the literature does not contain any framework that says how to conduct fault-injection experiments to establish ultra-reliability. A solution to this problem integrates field-data, arguments-from-design, and fault-injection into a seamless whole. The solution in this paper is to derive a model reduction theorem for a class of semi-Markov models suitable for describing ultra-reliable embedded systems. The derivation shows that a tight upper bound on the probability of system failure can be obtained using only the means of system-recovery times, thus reducing the experimental effort to estimating a reasonable number of easily-observed parameters. The paper includes an example of a system subject to both permanent and transient faults. There is a discussion of integrating fault-injection with field-data and arguments-from-design.
Yi, Cai; Lin, Jianhui; Zhang, Weihua; Ding, Jianming
2015-01-01
As train loads and travel speeds have increased over time, railway axle bearings have become critical elements which require more efficient non-destructive inspection and fault diagnostics methods. This paper presents a novel and adaptive procedure based on ensemble empirical mode decomposition (EEMD) and Hilbert marginal spectrum for multi-fault diagnostics of axle bearings. EEMD overcomes the limitations that often hypothesize about data and computational efforts that restrict the application of signal processing techniques. The outputs of this adaptive approach are the intrinsic mode functions that are treated with the Hilbert transform in order to obtain the Hilbert instantaneous frequency spectrum and marginal spectrum. Anyhow, not all the IMFs obtained by the decomposition should be considered into Hilbert marginal spectrum. The IMFs’ confidence index arithmetic proposed in this paper is fully autonomous, overcoming the major limit of selection by user with experience, and allows the development of on-line tools. The effectiveness of the improvement is proven by the successful diagnosis of an axle bearing with a single fault or multiple composite faults, e.g., outer ring fault, cage fault and pin roller fault. PMID:25970256
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Jie; Hou, Zhangshuan; Fang, Yilin
2015-06-01
A series of numerical test cases reflecting broad and realistic ranges of geological formation and preexisting fault properties was developed to systematically evaluate the impacts of preexisting faults on pressure buildup and ground surface uplift during CO₂ injection. Numerical test cases were conducted using a coupled hydro-geomechanical simulator, eSTOMP (extreme-scale Subsurface Transport over Multiple Phases). For efficient sensitivity analysis and reliable construction of a reduced-order model, a quasi-Monte Carlo sampling method was applied to effectively sample a high-dimensional input parameter space to explore uncertainties associated with hydrologic, geologic, and geomechanical properties. The uncertainty quantification results show that the impacts onmore » geomechanical response from the pre-existing faults mainly depend on reservoir and fault permeability. When the fault permeability is two to three orders of magnitude smaller than the reservoir permeability, the fault can be considered as an impermeable block that resists fluid transport in the reservoir, which causes pressure increase near the fault. When the fault permeability is close to the reservoir permeability, or higher than 10⁻¹⁵ m² in this study, the fault can be considered as a conduit that penetrates the caprock, connecting the fluid flow between the reservoir and the upper rock.« less
Intelligent classifier for dynamic fault patterns based on hidden Markov model
NASA Astrophysics Data System (ADS)
Xu, Bo; Feng, Yuguang; Yu, Jinsong
2006-11-01
It's difficult to build precise mathematical models for complex engineering systems because of the complexity of the structure and dynamics characteristics. Intelligent fault diagnosis introduces artificial intelligence and works in a different way without building the analytical mathematical model of a diagnostic object, so it's a practical approach to solve diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method, an integrated fault-pattern classifier based on Hidden Markov Model (HMM). This classifier consists of dynamic time warping (DTW) algorithm, self-organizing feature mapping (SOFM) network and Hidden Markov Model. First, after dynamic observation vector in measuring space is processed by DTW, the error vector including the fault feature of being tested system is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by fault patterns classifying with the Hidden Markov Model classifier. The importing of dynamic time warping solves the problem of feature extracting from dynamic process vectors of complex system such as aeroengine, and makes it come true to diagnose complex system by utilizing dynamic process information. Simulating experiments show that the diagnosis model is easy to extend, and the fault pattern classifier is efficient and is convenient to the detecting and diagnosing of new faults.
A review of fault tolerant control strategies applied to proton exchange membrane fuel cell systems
NASA Astrophysics Data System (ADS)
Dijoux, Etienne; Steiner, Nadia Yousfi; Benne, Michel; Péra, Marie-Cécile; Pérez, Brigitte Grondin
2017-08-01
Fuel cells are powerful systems for power generation. They have a good efficiency and do not generate greenhouse gases. This technology involves a lot of scientific fields, which leads to the appearance of strongly inter-dependent parameters. This makes the system particularly hard to control and increases fault's occurrence frequency. These two issues call for the necessity to maintain the system performance at the expected level, even in faulty operating conditions. It is called "fault tolerant control" (FTC). The present paper aims to give the state of the art of FTC applied to the proton exchange membrane fuel cell (PEMFC). The FTC approach is composed of two parts. First, a diagnosis part allows the identification and the isolation of a fault; it requires a good a priori knowledge of all the possible faults. Then, a control part allows an optimal control strategy to find the best operating point to recover/mitigate the fault; it requires the knowledge of the degradation phenomena and their mitigation strategies.
Adaptively Adjusted Event-Triggering Mechanism on Fault Detection for Networked Control Systems.
Wang, Yu-Long; Lim, Cheng-Chew; Shi, Peng
2016-12-08
This paper studies the problem of adaptively adjusted event-triggering mechanism-based fault detection for a class of discrete-time networked control system (NCS) with applications to aircraft dynamics. By taking into account the fault occurrence detection progress and the fault occurrence probability, and introducing an adaptively adjusted event-triggering parameter, a novel event-triggering mechanism is proposed to achieve the efficient utilization of the communication network bandwidth. Both the sensor-to-control station and the control station-to-actuator network-induced delays are taken into account. The event-triggered sensor and the event-triggered control station are utilized simultaneously to establish new network-based closed-loop models for the NCS subject to faults. Based on the established models, the event-triggered simultaneous design of fault detection filter (FDF) and controller is presented. A new algorithm for handling the adaptively adjusted event-triggering parameter is proposed. Performance analysis verifies the effectiveness of the adaptively adjusted event-triggering mechanism, and the simultaneous design of FDF and controller.
Multi-sensor information fusion method for vibration fault diagnosis of rolling bearing
NASA Astrophysics Data System (ADS)
Jiao, Jing; Yue, Jianhai; Pei, Di
2017-10-01
Bearing is a key element in high-speed electric multiple unit (EMU) and any defect of it can cause huge malfunctioning of EMU under high operation speed. This paper presents a new method for bearing fault diagnosis based on least square support vector machine (LS-SVM) in feature-level fusion and Dempster-Shafer (D-S) evidence theory in decision-level fusion which were used to solve the problems about low detection accuracy, difficulty in extracting sensitive characteristics and unstable diagnosis system of single-sensor in rolling bearing fault diagnosis. Wavelet de-nosing technique was used for removing the signal noises. LS-SVM was used to make pattern recognition of the bearing vibration signal, and then fusion process was made according to the D-S evidence theory, so as to realize recognition of bearing fault. The results indicated that the data fusion method improved the performance of the intelligent approach in rolling bearing fault detection significantly. Moreover, the results showed that this method can efficiently improve the accuracy of fault diagnosis.
NASA Astrophysics Data System (ADS)
Wang, I.-Ting; Chang, Chih-Cheng; Chiu, Li-Wen; Chou, Teyuh; Hou, Tuo-Hung
2016-09-01
The implementation of highly anticipated hardware neural networks (HNNs) hinges largely on the successful development of a low-power, high-density, and reliable analog electronic synaptic array. In this study, we demonstrate a two-layer Ta/TaO x /TiO2/Ti cross-point synaptic array that emulates the high-density three-dimensional network architecture of human brains. Excellent uniformity and reproducibility among intralayer and interlayer cells were realized. Moreover, at least 50 analog synaptic weight states could be precisely controlled with minimal drifting during a cycling endurance test of 5000 training pulses at an operating voltage of 3 V. We also propose a new state-independent bipolar-pulse-training scheme to improve the linearity of weight updates. The improved linearity considerably enhances the fault tolerance of HNNs, thus improving the training accuracy.
Strain-dependent partial slip on rock fractures under seismic-frequency torsion
NASA Astrophysics Data System (ADS)
Saltiel, Seth; Bonner, Brian P.; Ajo-Franklin, Jonathan B.
2017-05-01
Measurements of nonlinear modulus and attenuation of fractures provide the opportunity to probe their mechanical state. We have adapted a low-frequency torsional apparatus to explore the seismic signature of fractures under low normal stress, simulating low effective stress environments such as shallow or high pore pressure reservoirs. We report strain-dependent modulus and attenuation for fractured samples of Duperow dolomite (a carbon sequestration target reservoir in Montana), Blue Canyon Dome rhyolite (a geothermal analog reservoir in New Mexico), and Montello granite (a deep basement disposal analog from Wisconsin). We use a simple single effective asperity partial slip model to fit our measured stress-strain curves and solve for the friction coefficient, contact radius, and full slip condition. These observations have the potential to develop into new field techniques for measuring differences in frictional properties during reservoir engineering manipulations and estimate the stress conditions where reservoir fractures and faults begin to fully slip.
Model-based reconfiguration: Diagnosis and recovery
NASA Technical Reports Server (NTRS)
Crow, Judy; Rushby, John
1994-01-01
We extend Reiter's general theory of model-based diagnosis to a theory of fault detection, identification, and reconfiguration (FDIR). The generality of Reiter's theory readily supports an extension in which the problem of reconfiguration is viewed as a close analog of the problem of diagnosis. Using a reconfiguration predicate 'rcfg' analogous to the abnormality predicate 'ab,' we derive a strategy for reconfiguration by transforming the corresponding strategy for diagnosis. There are two obvious benefits of this approach: algorithms for diagnosis can be exploited as algorithms for reconfiguration and we have a theoretical framework for an integrated approach to FDIR. As a first step toward realizing these benefits we show that a class of diagnosis engines can be used for reconfiguration and we discuss algorithms for integrated FDIR. We argue that integrating recovery and diagnosis is an essential next step if this technology is to be useful for practical applications.
NASA Technical Reports Server (NTRS)
Belcastro, C. M.
1984-01-01
Advanced composite aircraft designs include fault-tolerant computer-based digital control systems with thigh reliability requirements for adverse as well as optimum operating environments. Since aircraft penetrate intense electromagnetic fields during thunderstorms, onboard computer systems maya be subjected to field-induced transient voltages and currents resulting in functional error modes which are collectively referred to as digital system upset. A methodology was developed for assessing the upset susceptibility of a computer system onboard an aircraft flying through a lightning environment. Upset error modes in a general-purpose microprocessor were studied via tests which involved the random input of analog transients which model lightning-induced signals onto interface lines of an 8080-based microcomputer from which upset error data were recorded. The application of Markov modeling to upset susceptibility estimation is discussed and a stochastic model development.
E-learning platform for automated testing of electronic circuits using signature analysis method
NASA Astrophysics Data System (ADS)
Gherghina, Cǎtǎlina; Bacivarov, Angelica; Bacivarov, Ioan C.; Petricǎ, Gabriel
2016-12-01
Dependability of electronic circuits can be ensured only through testing of circuit modules. This is done by generating test vectors and their application to the circuit. Testability should be viewed as a concerted effort to ensure maximum efficiency throughout the product life cycle, from conception and design stage, through production to repairs during products operating. In this paper, is presented the platform developed by authors for training for testability in electronics, in general and in using signature analysis method, in particular. The platform allows highlighting the two approaches in the field namely analog and digital signature of circuits. As a part of this e-learning platform, it has been developed a database for signatures of different electronic components meant to put into the spotlight different techniques implying fault detection, and from this there were also self-repairing techniques of the systems with this kind of components. An approach for realizing self-testing circuits based on MATLAB environment and using signature analysis method is proposed. This paper analyses the benefits of signature analysis method and simulates signature analyzer performance based on the use of pseudo-random sequences, too.
NASA Astrophysics Data System (ADS)
Andersen, C.; Theissen-Krah, S.; Hannington, M.; Rüpke, L.; Petersen, S.
2017-06-01
The potential of mining seafloor massive sulfide deposits for metals such as Cu, Zn, and Au is currently debated. One key challenge is to predict where the largest deposits worth mining might form, which in turn requires understanding the pattern of subseafloor hydrothermal mass and energy transport. Numerical models of heat and fluid flow are applied to illustrate the important role of fault zone properties (permeability and width) in controlling mass accumulation at hydrothermal vents at slow spreading ridges. We combine modeled mass-flow rates, vent temperatures, and vent field dimensions with the known fluid chemistry at the fault-controlled Logatchev 1 hydrothermal field of the Mid-Atlantic Ridge. We predict that the 135 kilotons of SMS at this site (estimated by other studies) can have accumulated with a minimum depositional efficiency of 5% in the known duration of hydrothermal venting (58,200 year age of the deposit). In general, the most productive faults must provide an efficient fluid pathway while at the same time limit cooling due to mixing with entrained cold seawater. This balance is best met by faults that are just wide and permeable enough to control a hydrothermal plume rising through the oceanic crust. Model runs with increased basal heat input, mimicking a heat flow contribution from along-axis, lead to higher mass fluxes and vent temperatures, capable of significantly higher SMS accumulation rates. Nonsteady state conditions, such as the influence of a cooling magmatic intrusion beneath the fault zone, also can temporarily increase the mass flux while sustaining high vent temperatures.
NASA Astrophysics Data System (ADS)
Adams, M.; Kempka, T.; Chabab, E.; Ziegler, M.
2018-02-01
Estimating the efficiency and sustainability of geological subsurface utilization, i.e., Carbon Capture and Storage (CCS) requires an integrated risk assessment approach, considering the occurring coupled processes, beside others, the potential reactivation of existing faults. In this context, hydraulic and mechanical parameter uncertainties as well as different injection rates have to be considered and quantified to elaborate reliable environmental impact assessments. Consequently, the required sensitivity analyses consume significant computational time due to the high number of realizations that have to be carried out. Due to the high computational costs of two-way coupled simulations in large-scale 3D multiphase fluid flow systems, these are not applicable for the purpose of uncertainty and risk assessments. Hence, an innovative semi-analytical hydromechanical coupling approach for hydraulic fault reactivation will be introduced. This approach determines the void ratio evolution in representative fault elements using one preliminary base simulation, considering one model geometry and one set of hydromechanical parameters. The void ratio development is then approximated and related to one reference pressure at the base of the fault. The parametrization of the resulting functions is then directly implemented into a multiphase fluid flow simulator to carry out the semi-analytical coupling for the simulation of hydromechanical processes. Hereby, the iterative parameter exchange between the multiphase and mechanical simulators is omitted, since the update of porosity and permeability is controlled by one reference pore pressure at the fault base. The suggested procedure is capable to reduce the computational time required by coupled hydromechanical simulations of a multitude of injection rates by a factor of up to 15.
NASA Astrophysics Data System (ADS)
Jordan, T. A.; Ferraccioli, F.; Ross, N.; Siegert, M. J.; Corr, H.; Leat, P. T.; Bingham, R. G.; Rippin, D. M.; le Brocq, A.
2012-04-01
The >500 km wide Weddell Sea Rift was a major focus for Jurassic extension and magmatism during the early stages of Gondwana break-up, and underlies the Weddell Sea Embayment, which separates East Antarctica from a collage of crustal blocks in West Antarctica. Here we present new aeromagnetic data combined with airborne radar and gravity data collected during the 2010-11 field season over the Institute and Moeller ice stream in West Antarctica. Our interpretations identify the major tectonic boundaries between the Weddell Sea Rift, the Ellsworth-Whitmore Mountains block and East Antarctica. Digitally enhanced aeromagnetic data and gravity anomalies indicate the extent of Proterozoic basement, Middle Cambrian rift-related volcanic rocks, Jurassic granites, and post Jurassic sedimentary infill. Two new joint magnetic and gravity models were constructed, constrained by 2D and 3D magnetic depth-to-source estimates to assess the extent of Proterozoic basement and the thickness of major Jurassic intrusions and post-Jurassic sedimentary infill. The Jurassic granites are modelled as 5-8 km thick and emplaced at the transition between the thicker crust of the Ellsworth-Whitmore Mountains block and the thinner crust of the Weddell Sea Rift, and within the Pagano Fault Zone, a newly identified ~75 km wide left-lateral strike-slip fault system that we interpret as a major tectonic boundary between East and West Antarctica. We also suggest a possible analogy between the Pagano Fault Zone and the Dead Sea transform. In this scenario the Jurassic Pagano Fault Zone is the kinematic link between extension in the Weddell Sea Rift and convergence across the Pacific margin of West Antarctica, as the Dead Sea transform links Red Sea extension to compression within the Zagros Mountains.
The Potential For A Large Earthquake In Intraplate Europe: The Contribution Of Remote Sensing
NASA Astrophysics Data System (ADS)
Kervyn, F.; Ferry, M.; Peters, G.; Alasset, P.-J.; Jacques, E.; Meghraoui, M.
The use of SAR interferometry for the computation of high resolution Digital Eleva- tion Models for various applications in neotectonics and geomorphology is increasing dramatically. The approach merges map-DEM, interferometric-DEM, satellite radar and optical images (ERS, SPOT, ASTER), aerial photographs, geophysical data and field observations into a single representation. This representation enables greater constraint on the identification of active faults and therefore gives an improved un- derstanding of complex active zones. Recent studies of the Lower and Upper Rhine graben display evidence of active deformation. Despite the low slip rate~0.1 mm/yr, vegetation cover and anthropic activity, we demonstrate that the surface deformation, although extremely sublte is preserved. In comparison, the Rukwa rift (East Africa) is a region with negligable anthropic activity, has a semi-arid climate and a higher deformation rate (1 - 4 mm/yr). Both rifts exhibit similar characteristics, such as: (1) half graben structures, (2) fault lengths ranging from 20 to 40 km, (3) graben width~ 40 km, (4) seismic activity with M 6 - 6.5 (1910 Rukwa, M~7.3). The Basel-Reinach fault, southern Upper Rhine graben, has been identified and characterised as responsi- ble for the 1356 earthquake (M 6.2 - 6.5). Three paleoearthquakes were demonstrated to have occurred within the last 8500 years, yielding a mean uplift rate of 0.21 mm/yr. Assuming that the physical parameters, geometry, and fault behavior are comparable, rifting processes with high deformation rates may serve as analogs to active regions with slower deformation. An intraplate European event rupturing the whole of the fault may possibly reach M 7.
NASA Astrophysics Data System (ADS)
Asphaug, Erik
2008-09-01
Asteroid 433 Eros is regarded as "fractured monolith" or "shatter pile". But models of fragmentation and disruption (e.g. Benz and Asphaug 1999) predict that any large rocky asteroid should be transformed into a jumble of dust, gravel, talus and boulders, simply because it is much easier to comminute an asteroid than to catastrophically disrupt it. Sometimes the relatively high density of Eros is taken as evidence for a fractured monolithic structure, although the inferred bulk porosity of Eros ( 20-30%) is what one expects for a rubble pile, and is about the porosity of sand and talus. The focus here is that a rubble pile structure is contraindicated by the pronounced network of linear fault-like structures (Buczkowski et al. 2008), some of which radiate from recent large impacts such as Psyche, and which form rectangular boundaries around some of the medium-sized craters. This needs an explanation. Here it is proposed, and quantitatively addressed, that the majority of these faults occur just in the upper tens of meters, where cohesion exceeds gravitational stress even for loose piles of lunar-like regolith. Assuming Eros regolith has the cohesion ( 1 kPa) measured for lunar regolith, then faulting is expected to a depth of 10 m, directly analogous to how faults occur in the upper layers of beach sand. The fact that Eros has few steep slopes anywhere, except for the angles of repose within its craters, at a baseline of 100 m (Zuber et al. 2002), is satisfied by the hypothesis that Eros is a rubble pile rather than a shattered monolith. The low fault stress implied by the above, supports the findings of dense networks of linear structures, ubiquitous features which are otherwise difficult to explain as fractures in a rocky target which has not been disrupted or jumbled against its very low gravity.
Building exploration with leeches Hirudo verbana.
Adamatzky, Andrew; Sirakoulis, Georgios Ch
2015-08-01
Safe evacuation of people from building and outdoor environments, and search and rescue operations, always will remain actual in course of all socio-technological developments. Modern facilities offer a range of automated systems to guide residents towards emergency exists. The systems are assumed to be infallible. But what if they fail? How occupants not familiar with a building layout will be looking for exits in case of very limited visibility where tactile sensing is the only way to assess the environment? Analogous models of human behaviour, and socio-dynamics in general, are provided to be fruitful ways to explore alternative, or would-be scenarios. Crowd, or a single person, dynamics could be imitated using particle systems, reaction-diffusion chemical medium, electro-magnetic fields, or social insects. Each type of analogous model offer unique insights on behavioural patterns of natural systems in constrained geometries. In this particular paper we have chosen leeches to analyse patterns of exploration. Reasons are two-fold. First, when deprived from other stimuli leeches change their behavioural modes in an automated regime in response to mechanical stimulation. Therefore leeches can give us invaluable information on how human beings might behave under stress and limited visibility. Second, leeches are ideal blueprints of future soft-bodied rescue robots. Leeches have modular nervous circuitry with a rich behavioral spectrum. Leeches are multi-functional, fault-tolerant with autonomous inter-segment coordination and adaptive decision-making. We aim to answer the question: how efficiently a real building can be explored and whether there any dependencies on the pathways of exploration and geometrical complexity of the building. In our case studies we use templates made on the floor plan of real building. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Brown, Jessica A.; Pack, Lindsey R.; Fowler, Jason D.; Suo, Zucai
2011-01-01
Antiviral nucleoside analogs have been developed to inhibit the enzymatic activities of the hepatitis B virus (HBV) polymerase, thereby preventing the replication and production of HBV. However, the usage of these analogs can be limited by drug toxicity because the 5′-triphosphates of these nucleoside analogs (nucleotide analogs) are potential substrates for human DNA polymerases to incorporate into host DNA. Although they are poor substrates for human replicative DNA polymerases, it remains to be established whether these nucleotide analogs are substrates for the recently discovered human X- and Y-family DNA polymerases. Using pre-steady state kinetic techniques, we have measured the substrate specificity values for human DNA polymerases β, λ, η, ι, κ, and Rev1 incorporating the active forms of the following anti-HBV nucleoside analogs approved for clinical use: adefovir, tenofovir, lamivudine, telbivudine, and entecavir. Compared to the incorporation of a natural nucleotide, most of the nucleotide analogs were incorporated less efficiently (2 to >122,000) by the six human DNA polymerases. In addition, the potential for entecavir and telbivudine, two drugs which possess a 3′-hydroxyl, to become embedded into human DNA was examined by primer extension and DNA ligation assays. These results suggested that telbivudine functions as a chain terminator while entecavir was efficiently extended by the six enzymes and was a substrate for human DNA ligase I. Our findings suggested that incorporation of anti-HBV nucleotide analogs catalyzed by human X- and Y-family polymerases may contribute to clinical toxicity. PMID:22132702
NASA Astrophysics Data System (ADS)
Mannino, Irene; Cianfarra, Paola; Salvini, Francesco
2010-05-01
Permeability in carbonates is strongly influenced by the presence of brittle deformation patterns, i.e pressure-solution surfaces, extensional fractures, and faults. Carbonate rocks achieve fracturing both during diagenesis and tectonic processes. Attitude, spatial distribution and connectivity of brittle deformation features rule the secondary permeability of carbonatic rocks and therefore the accumulation and the pathway of deep fluids (ground-water, hydrocarbon). This is particularly true in fault zones, where the damage zone and the fault core show different hydraulic properties from the pristine rock as well as between them. To improve the knowledge of fault architecture and faults hydraulic properties we study the brittle deformation patterns related to fault kinematics in carbonate successions. In particular we focussed on the damage-zone fracturing evolution. Fieldwork was performed in Meso-Cenozoic carbonate units of the Latium-Abruzzi Platform, Central Apennines, Italy. These units represent field analogues of rock reservoir in the Southern Apennines. We combine the study of rock physical characteristics of 22 faults and quantitative analyses of brittle deformation for the same faults, including bedding attitudes, fracturing type, attitudes, and spatial intensity distribution by using the dimension/spacing ratio, namely H/S ratio where H is the dimension of the fracture and S is the spacing between two analogous fractures of the same set. Statistical analyses of structural data (stereonets, contouring and H/S transect) were performed to infer a focussed, general algorithm that describes the expected intensity of fracturing process. The analytical model was fit to field measurements by a Montecarlo-convergent approach. This method proved a useful tool to quantify complex relations with a high number of variables. It creates a large sequence of possible solution parameters and results are compared with field data. For each item an error mean value is computed (RMS), representing the effectiveness of the fit and so the validity of this analysis. Eventually, the method selects the set of parameters that produced the least values. The tested algorithm describes the expected H/S values as a function of the distance from the fault core (D), the clay content (S), and the fault throw (T). The preliminary results of the Montecarlo inversion show that the distance (D) has the most effective influence in the H/S spatial distribution and the H/S value decreases with the distance from the fault-core. The rheological parameter shows a value similar to the diagenetic H/S values (1-1.5). The resulting equation has a reasonable RMS value of 0.116. The results of the Montecarlo models were finally implemented in FRAP, a fault environment modelling software. It is a true 4D tool that can predict stress conditions and permeability architecture associated to a given faults during single or multiple tectonic events. We present some models of fault-related fracturing among the studied faults performed by FRAP and we compare them with the field measurements, to test the validity of our methodology.
An Integrated Framework for Model-Based Distributed Diagnosis and Prognosis
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Daigle, Matthew J.; Roychoudhury, Indranil
2012-01-01
Diagnosis and prognosis are necessary tasks for system reconfiguration and fault-adaptive control in complex systems. Diagnosis consists of detection, isolation and identification of faults, while prognosis consists of prediction of the remaining useful life of systems. This paper presents a novel integrated framework for model-based distributed diagnosis and prognosis, where system decomposition is used to enable the diagnosis and prognosis tasks to be performed in a distributed way. We show how different submodels can be automatically constructed to solve the local diagnosis and prognosis problems. We illustrate our approach using a simulated four-wheeled rover for different fault scenarios. Our experiments show that our approach correctly performs distributed fault diagnosis and prognosis in an efficient and robust manner.
Ma, Jian; Lu, Chen; Liu, Hongmei
2015-01-01
The aircraft environmental control system (ECS) is a critical aircraft system, which provides the appropriate environmental conditions to ensure the safe transport of air passengers and equipment. The functionality and reliability of ECS have received increasing attention in recent years. The heat exchanger is a particularly significant component of the ECS, because its failure decreases the system’s efficiency, which can lead to catastrophic consequences. Fault diagnosis of the heat exchanger is necessary to prevent risks. However, two problems hinder the implementation of the heat exchanger fault diagnosis in practice. First, the actual measured parameter of the heat exchanger cannot effectively reflect the fault occurrence, whereas the heat exchanger faults are usually depicted by utilizing the corresponding fault-related state parameters that cannot be measured directly. Second, both the traditional Extended Kalman Filter (EKF) and the EKF-based Double Model Filter have certain disadvantages, such as sensitivity to modeling errors and difficulties in selection of initialization values. To solve the aforementioned problems, this paper presents a fault-related parameter adaptive estimation method based on strong tracking filter (STF) and Modified Bayes classification algorithm for fault detection and failure mode classification of the heat exchanger, respectively. Heat exchanger fault simulation is conducted to generate fault data, through which the proposed methods are validated. The results demonstrate that the proposed methods are capable of providing accurate, stable, and rapid fault diagnosis of the heat exchanger. PMID:25823010
Ma, Jian; Lu, Chen; Liu, Hongmei
2015-01-01
The aircraft environmental control system (ECS) is a critical aircraft system, which provides the appropriate environmental conditions to ensure the safe transport of air passengers and equipment. The functionality and reliability of ECS have received increasing attention in recent years. The heat exchanger is a particularly significant component of the ECS, because its failure decreases the system's efficiency, which can lead to catastrophic consequences. Fault diagnosis of the heat exchanger is necessary to prevent risks. However, two problems hinder the implementation of the heat exchanger fault diagnosis in practice. First, the actual measured parameter of the heat exchanger cannot effectively reflect the fault occurrence, whereas the heat exchanger faults are usually depicted by utilizing the corresponding fault-related state parameters that cannot be measured directly. Second, both the traditional Extended Kalman Filter (EKF) and the EKF-based Double Model Filter have certain disadvantages, such as sensitivity to modeling errors and difficulties in selection of initialization values. To solve the aforementioned problems, this paper presents a fault-related parameter adaptive estimation method based on strong tracking filter (STF) and Modified Bayes classification algorithm for fault detection and failure mode classification of the heat exchanger, respectively. Heat exchanger fault simulation is conducted to generate fault data, through which the proposed methods are validated. The results demonstrate that the proposed methods are capable of providing accurate, stable, and rapid fault diagnosis of the heat exchanger.
Three-dimensional curved grid finite-difference modelling for non-planar rupture dynamics
NASA Astrophysics Data System (ADS)
Zhang, Zhenguo; Zhang, Wei; Chen, Xiaofei
2014-11-01
In this study, we present a new method for simulating the 3-D dynamic rupture process occurring on a non-planar fault. The method is based on the curved-grid finite-difference method (CG-FDM) proposed by Zhang & Chen and Zhang et al. to simulate the propagation of seismic waves in media with arbitrary irregular surface topography. While keeping the advantages of conventional FDM, that is computational efficiency and easy implementation, the CG-FDM also is flexible in modelling the complex fault model by using general curvilinear grids, and thus is able to model the rupture dynamics of a fault with complex geometry, such as oblique dipping fault, non-planar fault, fault with step-over, fault branching, even if irregular topography exists. The accuracy and robustness of this new method have been validated by comparing with the previous results of Day et al., and benchmarks for rupture dynamics simulations. Finally, two simulations of rupture dynamics with complex fault geometry, that is a non-planar fault and a fault rupturing a free surface with topography, are presented. A very interesting phenomenon was observed that topography can weaken the tendency for supershear transition to occur when rupture breaks out at a free surface. Undoubtedly, this new method provides an effective, at least an alternative, tool to simulate the rupture dynamics of a complex non-planar fault, and can be applied to model the rupture dynamics of a real earthquake with complex geometry.
Focused seismicity triggered by flank instability on Kīlauea's Southwest Rift Zone
NASA Astrophysics Data System (ADS)
Judson, Josiah; Thelen, Weston A.; Greenfield, Tim; White, Robert S.
2018-03-01
Swarms of earthquakes at the head of the Southwest Rift Zone on Kīlauea Volcano, Hawai´i, reveal an interaction of normal and strike-slip faulting associated with movement of Kīlauea's south flank. A relocated subset of earthquakes between January 2012 and August 2014 are highly focused in space and time at depths that are coincident with the south caldera magma reservoir beneath the southern margin of Kīlauea Caldera. Newly calculated focal mechanisms are dominantly dextral shear with a north-south preferred fault orientation. Two earthquakes within this focused area of seismicity have normal faulting mechanisms, indicating two mechanisms of failure in very close proximity (10's of meters to 100 m). We suggest a model where opening along the Southwest Rift Zone caused by seaward motion of the south flank permits injection of magma and subsequent freezing of a plug, which then fails in a right-lateral strike-slip sense, consistent with the direction of movement of the south flank. The seismicity is concentrated in an area where a constriction occurs between a normal fault and the deeper magma transport system into the Southwest Rift Zone. Although in many ways the Southwest Rift Zone appears analogous to the more active East Rift Zone, the localization of the largest seismicity (>M2.5) within the swarms to a small volume necessitates a different model than has been proposed to explain the lineament outlined by earthquakes along the East Rift Zone.
NASA Astrophysics Data System (ADS)
Li, Yongbo; Yang, Yuantao; Li, Guoyan; Xu, Minqiang; Huang, Wenhu
2017-07-01
Health condition identification of planetary gearboxes is crucial to reduce the downtime and maximize productivity. This paper aims to develop a novel fault diagnosis method based on modified multi-scale symbolic dynamic entropy (MMSDE) and minimum redundancy maximum relevance (mRMR) to identify the different health conditions of planetary gearbox. MMSDE is proposed to quantify the regularity of time series, which can assess the dynamical characteristics over a range of scales. MMSDE has obvious advantages in the detection of dynamical changes and computation efficiency. Then, the mRMR approach is introduced to refine the fault features. Lastly, the obtained new features are fed into the least square support vector machine (LSSVM) to complete the fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault types of planetary gearboxes.
Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram.
Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi
2016-09-13
Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features.
Scattering transform and LSPTSVM based fault diagnosis of rotating machinery
NASA Astrophysics Data System (ADS)
Ma, Shangjun; Cheng, Bo; Shang, Zhaowei; Liu, Geng
2018-05-01
This paper proposes an algorithm for fault diagnosis of rotating machinery to overcome the shortcomings of classical techniques which are noise sensitive in feature extraction and time consuming for training. Based on the scattering transform and the least squares recursive projection twin support vector machine (LSPTSVM), the method has the advantages of high efficiency and insensitivity for noise signal. Using the energy of the scattering coefficients in each sub-band, the features of the vibration signals are obtained. Then, an LSPTSVM classifier is used for fault diagnosis. The new method is compared with other common methods including the proximal support vector machine, the standard support vector machine and multi-scale theory by using fault data for two systems, a motor bearing and a gear box. The results show that the new method proposed in this study is more effective for fault diagnosis of rotating machinery.
Wavelet subspace decomposition of thermal infrared images for defect detection in artworks
NASA Astrophysics Data System (ADS)
Ahmad, M. Z.; Khan, A. A.; Mezghani, S.; Perrin, E.; Mouhoubi, K.; Bodnar, J. L.; Vrabie, V.
2016-07-01
Health of ancient artworks must be routinely monitored for their adequate preservation. Faults in these artworks may develop over time and must be identified as precisely as possible. The classical acoustic testing techniques, being invasive, risk causing permanent damage during periodic inspections. Infrared thermometry offers a promising solution to map faults in artworks. It involves heating the artwork and recording its thermal response using infrared camera. A novel strategy based on pseudo-random binary excitation principle is used in this work to suppress the risks associated with prolonged heating. The objective of this work is to develop an automatic scheme for detecting faults in the captured images. An efficient scheme based on wavelet based subspace decomposition is developed which favors identification of, the otherwise invisible, weaker faults. Two major problems addressed in this work are the selection of the optimal wavelet basis and the subspace level selection. A novel criterion based on regional mutual information is proposed for the latter. The approach is successfully tested on a laboratory based sample as well as real artworks. A new contrast enhancement metric is developed to demonstrate the quantitative efficiency of the algorithm. The algorithm is successfully deployed for both laboratory based and real artworks.
Fault-Tolerant and Elastic Streaming MapReduce with Decentralized Coordination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumbhare, Alok; Frincu, Marc; Simmhan, Yogesh
2015-06-29
The MapReduce programming model, due to its simplicity and scalability, has become an essential tool for processing large data volumes in distributed environments. Recent Stream Processing Systems (SPS) extend this model to provide low-latency analysis of high-velocity continuous data streams. However, integrating MapReduce with streaming poses challenges: first, the runtime variations in data characteristics such as data-rates and key-distribution cause resource overload, that inturn leads to fluctuations in the Quality of the Service (QoS); and second, the stateful reducers, whose state depends on the complete tuple history, necessitates efficient fault-recovery mechanisms to maintain the desired QoS in the presence ofmore » resource failures. We propose an integrated streaming MapReduce architecture leveraging the concept of consistent hashing to support runtime elasticity along with locality-aware data and state replication to provide efficient load-balancing with low-overhead fault-tolerance and parallel fault-recovery from multiple simultaneous failures. Our evaluation on a private cloud shows up to 2:8 improvement in peak throughput compared to Apache Storm SPS, and a low recovery latency of 700 -1500 ms from multiple failures.« less
Electric machine differential for vehicle traction control and stability control
NASA Astrophysics Data System (ADS)
Kuruppu, Sandun Shivantha
Evolving requirements in energy efficiency and tightening regulations for reliable electric drivetrains drive the advancement of the hybrid electric (HEV) and full electric vehicle (EV) technology. Different configurations of EV and HEV architectures are evaluated for their performance. The future technology is trending towards utilizing distinctive properties in electric machines to not only to improve efficiency but also to realize advanced road adhesion controls and vehicle stability controls. Electric machine differential (EMD) is such a concept under current investigation for applications in the near future. Reliability of a power train is critical. Therefore, sophisticated fault detection schemes are essential in guaranteeing reliable operation of a complex system such as an EMD. The research presented here emphasize on implementation of a 4kW electric machine differential, a novel single open phase fault diagnostic scheme, an implementation of a real time slip optimization algorithm and an electric machine differential based yaw stability improvement study. The proposed d-q current signature based SPO fault diagnostic algorithm detects the fault within one electrical cycle. The EMD based extremum seeking slip optimization algorithm reduces stopping distance by 30% compared to hydraulic braking based ABS.
NASA Astrophysics Data System (ADS)
Karson, J.; Horst, A. J.; Nanfito, A.
2011-12-01
Iceland has long been used as an analog for studies of seafloor spreading. Despite its thick (~25 km) oceanic crust and subaerial lavas, many features associated with accretion along mid-ocean ridge spreading centers, and the processes that generate them, are well represented in the actively spreading Neovolcanic Zone and deeply glaciated Tertiary crust that flanks it. Integrated results of structural and geodetic studies show that the plate boundary zone on Iceland is a complex array of linked structures bounding major crustal blocks or microplates, similar to oceanic microplates. Major rift zones propagate N and S from the hotspot centered beneath the Vatnajökull icecap in SE central Iceland. The southern propagator has extended southward beyond the South Iceland Seismic Zone transform fault to the Westman Islands, resulting in abandonment of the Eastern Rift Zone. Continued propagation may cause abandonment of the Reykjanes Ridge. The northern propagator is linked to the southern end of the receding Kolbeinsey Ridge to the north. The NNW-trending Kerlingar Pseudo-fault bounds the propagator system to the E. The Tjörnes Transform Fault links the propagator tip to the Kolbeinsey Ridge and appears to be migrating northward in incremental steps, leaving a swath of deformed crustal blocks in its wake. Block rotations, concentrated mainly to the west of the propagators, are clockwise to the N of the hotspot and counter-clockwise to the S, possibly resulting in a component of NS divergence across EW-oriented rift zones. These rotations may help accommodate adjustments of the plate boundary zone to the relative movements of the N American and Eurasian plates. The rotated crustal blocks are composed of highly anisotropic crust with rift-parallel internal fabric generated by spreading processes. Block rotations result in reactivation of spreading-related faults as major rift-parallel, strike-slip faults. Structural details found in Iceland can help provide information that is difficult or impossible to obtain in propagating systems of the deep seafloor.
NASA Astrophysics Data System (ADS)
Vennemann, Alan
My research investigates the structure of the Indio Mountains in southwest Texas, 34 kilometers southwest of Van Horn, at the UTEP (University of Texas at El Paso) Field Station using newly acquired active-source seismic data. The area is underlain by deformed Cretaceous sedimentary rocks that represent a transgressive sequence nearly 2 km in total stratigraphic thickness. The rocks were deposited in mid Cretaceous extensional basins and later contracted into fold-thrust structures during Laramide orogenesis. The stratigraphic sequence is an analog for similar areas that are ideal for pre-salt petroleum reservoirs, such as reservoirs off the coasts of Brazil and Angola (Li, 2014; Fox, 2016; Kattah, 2017). The 1-km-long 2-D shallow seismic reflection survey that I planned and led during May 2016 was the first at the UTEP Field Station, providing critical subsurface information that was previously lacking. The data were processed with Landmark ProMAX seismic processing software to create a seismic reflection image of the Bennett Thrust Fault and additional imbricate faulting not expressed at the surface. Along the 1-km line, reflection data were recorded with 200 4.5 Hz geophones, using 100 150-gram explosive charges and 490 sledge-hammer blows for sources. A seismic reflection profile was produced using the lower frequency explosive dataset, which was used in the identification of the Bennett Thrust Fault and additional faulting and folding in the subsurface. This dataset provides three possible interpretations for the subsurface geometries of the faulting and folding present. However, producing a seismic reflection image with the higher frequency sledge-hammer sourced dataset for interpretation proved more challenging. While there are no petroleum plays in the Indio Mountains region, imaging and understanding subsurface structural and lithological geometries and how that geometry directs potential fluid flow has implications for other regions with petroleum plays.
Local response of a glacier to annual filling and drainage of an ice-marginal lake
Walder, J.S.; Trabant, D.C.; Cunico, M.; Fountain, A.G.; Anderson, S.P.; Anderson, R. Scott; Malm, A.
2006-01-01
Ice-marginal Hidden Creek Lake, Alaska, USA, outbursts annually over the course of 2-3 days. As the lake fills, survey targets on the surface of the 'ice dam' (the glacier adjacent to the lake) move obliquely to the ice margin and rise substantially. As the lake drains, ice motion speeds up, becomes nearly perpendicular to the face of the ice dam, and the ice surface drops. Vertical movement of the ice dam probably reflects growth and decay of a wedge of water beneath the ice dam, in line with established ideas about jo??kulhlaup mechanics. However, the distribution of vertical ice movement, with a narrow (50-100 m wide) zone where the uplift rate decreases by 90%, cannot be explained by invoking flexure of the ice dam in a fashion analogous to tidal flexure of a floating glacier tongue or ice shelf. Rather, the zone of large uplift-rate gradient is a fault zone: ice-dam deformation is dominated by movement along high-angle faults that cut the ice dam through its entire thickness, with the sense of fault slip reversing as the lake drains. Survey targets spanning the zone of steep uplift gradient move relative to one another in a nearly reversible fashion as the lake fills and drains. The horizontal strain rate also undergoes a reversal across this zone, being compressional as the lake fills, but extensional as the lake drains. Frictional resistance to fault-block motion probably accounts for the fact that lake level falls measurably before the onset of accelerated horizontal motion and vertical downdrop. As the overall fault pattern is the same from year to year, even though ice is lost by calving, the faults must be regularly regenerated, probably by linkage of surface and bottom crevasses as ice is advected toward the lake basin.
Ruleman, C.A.; Thompson, R.A.; Shroba, R.R.; Anderson, M.; Drenth, B.J.; Rotzien, J.; Lyon, J.
2013-01-01
The Sunshine Valley-Costilla Plain, a structural subbasin of the greater San Luis Basin of the northern Rio Grande rift, is bounded to the north and south by the San Luis Hills and the Red River fault zone, respectively. Surficial mapping, neotectonic investigations, geochronology, and geophysics demonstrate that the structural, volcanic, and geomorphic evolution of the basin involves the intermingling of climatic cycles and spatially and temporally varying tectonic activity of the Rio Grande rift system. Tectonic activity has transferred between range-bounding and intrabasin faults creating relict landforms of higher tectonic-activity rates along the mountain-piedmont junction. Pliocene–Pleistocene average long-term slip rates along the southern Sangre de Cristo fault zone range between 0.1 and 0.2 mm/year with late Pleistocene slip rates approximately half (0.06 mm/year) of the longer Quaternary slip rate. During the late Pleistocene, climatic influences have been dominant over tectonic influences on mountain-front geomorphic processes. Geomorphic evidence suggests that this once-closed subbasin was integrated into the Rio Grande prior to the integration of the once-closed northern San Luis Basin, north of the San Luis Hills, Colorado; however, deep canyon incision, north of the Red River and south of the San Luis Hills, initiated relatively coeval to the integration of the northern San Luis Basin.Long-term projections of slip rates applied to a 1.6 km basin depth defined from geophysical modeling suggests that rifting initiated within this subbasin between 20 and 10 Ma. Geologic mapping and geophysical interpretations reveal a complex network of northwest-, northeast-, and north-south–trending faults. Northwest- and northeast-trending faults show dual polarity and are crosscut by north-south– trending faults. This structural model possibly provides an analog for how some intracontinental rift structures evolve through time.
A unique Austin Chalk reservoir, Van field, Van Zandt County, Texas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowe, J.T.
1990-09-01
Significant shallow oil production from the Austin Chalk was established in the Van field, Van Zandt County, in East Texas in the late 1980s. The Van field structure is a complexly faulted domal anticline created by salt intrusion. The Woodbine sands, which underlie the Austin Chalk, have been and continue to be the predominant reservoir rocks in the field. Evidence indicates that faults provided vertical conduits for migration of Woodbine oil into the Austin Chalk where it was trapped along the structural crest. The most prolific Austin Chalk production is on the upthrown side of the main field fault, asmore » is the Woodbine. The Austin Chalk is a soft, white to light gray limestone composed mostly of coccoliths with some pelecypods. Unlike the Austin Chalk in the Giddings and Pearsall fields, the chalk at Van was not as deeply buried and therefore did not become brittle and susceptible to tensional or cryptic fracturing. The shallow burial in the Van field was also important in that it allowed the chalk to retain primary microporosity. The production comes entirely from this primary porosity. In addition to the structural position and underlying oil source from the Woodbine, the depositional environment and associated lithofacies are also keys to the reservoir quality in the Van field as demonstrated by cores from the upthrown and downthrown (less productive) sides of the main field fault. It appears that at the time of Austin Chalk deposition, the main field fault was active and caused the upthrown side to be a structural high and a more agreeable environment for benthonic organisms such as pelecypods and worms. The resulting bioturbation enhanced the reservoir's permeability enough to allow migration and entrapment of the oil. Future success in exploration for analogous Austin Chalk reservoirs will require the combination of a favorable environment of deposition, a nearby Woodbine oil source, and a faulted trap that will provide the conduit for migration.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borgia, A.; Burr, J.; Montero, W.
1990-08-30
Long sublinear ridges and related scarps located at the base of large volcanic structures are frequently interpreted as normal faults associated with extensional regional stress. In contrast, the ridges bordering the Central Costa Rica volcanic range (CCRVR) are the topographic expression of hanging wall asymmetric angular anticlines overlying low-angle thrust faults at the base of the range. These faults formed by gravitational failure and slumping of the flanks of the range due to the weight of the volcanic edifices and were perhaps triggered by the intrusion of magma over the past 20,000 years. These anticlines are hypothesized to occur alongmore » the base of the volcano, where the thrust faults ramp up toward the sea bottom. Ridges and scarps between 2,000 and 5,000 m below sea level are interpreted as the topographic expression of these folds. The authors further suggest that the scarps of the CCRVR and valid scaled terrestrial analogs of the perimeter scarp of the Martian volcano Olympus Mons. They suggest that the crust below Olympus Mons has failed under the load of the volcano, triggering the radial slumping of the flanks of the volcano on basal thrusts. The thrusting would have, in turn, formed the anticlinal ridges and scarps that surround the edifice. The thrust faults may extend all the way to the base of the Martian crust (about 40 km), and they may have been active until almost the end of the volcanic activity. They suggest that gravitational failure and slumping of the flanks of volcanoes is a process common to most large volcanic edifices. In the CCRVR this slumping of the flanks is a slow intermittent process, but it could evolve to rapid massive avalanching leading to catastrophic eruptions. Thus monitoring of uplift and displacement of the folds related to the slump tectonics could become an additional effective method for mitigating volcanic hazards.« less
Deformation during terrane accretion in the Saint Elias orogen, Alaska
Bruhn, R.L.; Pavlis, T.L.; Plafker, G.; Serpa, L.
2004-01-01
The Saint Elias orogen of southern Alaska and adjacent Canada is a complex belt of mountains formed by collision and accretion of the Yakutat terrane into the transition zone from transform faulting to subduction in the northeast Pacific. The orogen is an active analog for tectonic processes that formed much of the North American Cordillera, and is also an important site to study (1) the relationships between climate and tectonics, and (2) structures that generate large- to great-magnitude earthquakes. The Yakutat terrane is a fragment of the North American plate margin that is partly subducted beneath and partly accreted to the continental margin of southern Alaska. Interaction between the Yakutat terrane and the North American and Pacific plates causes significant differences in the style of deformation within the terrane. Deformation in the eastern part of the terrane is caused by strike-slip faulting along the Fairweather transform fault and by reverse faulting beneath the coastal mountains, but there is little deformation immediately offshore. The central part of the orogen is marked by thrusting of the Yakutat terrane beneath the North American plate along the Chugach-Saint Elias fault and development of a wide, thin-skinned fold-and-thrust belt. Strike-slip faulting in this segment may he localized in the hanging wall of the Chugach-Saint Elias fault, or dissipated by thrust faulting beneath a north-northeast-trending belt of active deformation that cuts obliquely across the eastern end of the fold-and-thrust belt. Superimposed folds with complex shapes and plunging hinge lines accommodate horizontal shortening and extension in the western part of the orogen, where the sedimentary cover of the Yakutat terrane is accreted into the upper plate of the Aleutian subduction zone. These three structural segments are separated by transverse tectonic boundaries that cut across the Yakutat terrane and also coincide with the courses of piedmont glaciers that flow from the topographic backbone of the Saint Elias Mountains onto the coastal plain. The Malaspina fault-Pamplona structural zone separates the eastern and central parts of the orogen and is marked by reverse faulting and folding. Onshore, most of this boundary is buried beneath the western or "Agassiz" lobe of the Malaspina piedmont glacier. The boundary between the central fold-and-thrust belt and western zone of superimposed folding lies beneath the middle and lower course of the Bering piedmont glacier. ?? 2004 Geological Society of America.
NASA Astrophysics Data System (ADS)
Gomila, R.; Arancibia, G.; Nehler, M.; Bracke, R.; Morata, D.
2017-12-01
Fault zones and their related structural permeability are a key aspect in the migration of fluids through the continental crust. Therefore, the estimation of the hydraulic properties (palaeopermeability conditions; k) and the spatial distribution of the fracture mesh within the damage zone (DZ) are critical in the assessment of fault zones behavior for fluids. The study of the real spatial distribution of the veinlets of the fracture mesh (3D), feasible with the use of µCT analyses, is a first order factor to unravel both, the real structural permeability conditions of a fault-zone, and the validation of previous (and classical) estimations made in 2D analyses in thin-sections. This work shows the results of a fault-related fracture mesh and its 3D spatial distribution in the damage-zone of the Jorgillo Fault (JF), an ancient subvertical left-lateral strike-slip fault exposed in the Atacama Fault System in northern Chile. The JF is a ca. 20 km long NNW-striking strike-slip fault with sinistral displacement of ca. 4 km. The methodology consisted of drilling 5 mm vertically oriented plugs at several locations within the JF damage zone. Each specimen was scanned with an X-Ray µCT scanner, to assess the fracture mesh, with a voxel resolution of ca. 4.5 µm in the 3D reconstructed data. Tensor permeability modeling, using Lattice-Boltzmann Method, through the segmented microfracture mesh show GMkmin (geometric mean values) of 2.1x10-12 and 9.8x10-13 m2, and GMkmax of 6.4x10-12 and 2.1x10-12 m2. A high degree of anisotropy of the DZ permeability tensor both sides of the JF (eastern and western side, respectively) is observed, where the k values in the kmax plane are 2.4 and 1.9 times higher than the kmin direction at the time of fracture sealing. This style of anisotropy is consistent with the obtained for bedded sandstones supporting the idea that damage zones have an analogous effect - but vertically orientated - on bulk permeability (in low porosity rocks) as stratigraphic layering, where across-strike khorizontal of a fault is lower when compared with the kvertical and kfault parallel. Acknowledgements: This work is a contribution to FONDAP-CONICYT Project 15090013 and CONICYT- BMBF International Scientific Collaborative Research Program Project PCCI130025/FKZ01DN14033. R.G. Ph.D. is funded by CONICYT Scholarship 21140021.
Methyl phenlactonoates are efficient strigolactone analogs with simple structure
Jamil, Muhammad; Kountche, Boubacar A; Haider, Imran; Guo, Xiujie; Ntui, Valentine O; Jia, Kun-Peng; Hameed, Umar S; Nakamura, Hidemitsu; Lyu, Ying; Jiang, Kai; Hirabayashi, Kei; Tanokura, Masaru; Arold, Stefan T; Asami, Tadao
2018-01-01
abstract Strigolactones (SLs) are a new class of phytohormones that also act as germination stimulants for root parasitic plants, such as Striga spp., and as branching factors for symbiotic arbuscular mycorrhizal fungi. Sources for natural SLs are very limited. Hence, efficient and simple SL analogs are needed for elucidating SL-related biological processes as well as for agricultural applications. Based on the structure of the non-canonical SL methyl carlactonoate, we developed a new, easy to synthesize series of analogs, termed methyl phenlactonoates (MPs), evaluated their efficacy in exerting different SL functions, and determined their affinity for SL receptors from rice and Striga hermonthica. Most of the MPs showed considerable activity in regulating plant architecture, triggering leaf senescence, and inducing parasitic seed germination. Moreover, some MPs outperformed GR24, a widely used SL analog with a complex structure, in exerting particular SL functions, such as modulating Arabidopsis roots architecture and inhibiting rice tillering. Thus, MPs will help in elucidating the functions of SLs and are promising candidates for agricultural applications. Moreover, MPs demonstrate that slight structural modifications clearly impact the efficiency in exerting particular SL functions, indicating that structural diversity of natural SLs may mirror a functional specificity. PMID:29300919
An area and power-efficient analog li-ion battery charger circuit.
Do Valle, Bruno; Wentz, Christian T; Sarpeshkar, Rahul
2011-04-01
The demand for greater battery life in low-power consumer electronics and implantable medical devices presents a need for improved energy efficiency in the management of small rechargeable cells. This paper describes an ultra-compact analog lithium-ion (Li-ion) battery charger with high energy efficiency. The charger presented here utilizes the tanh basis function of a subthreshold operational transconductance amplifier to smoothly transition between constant-current and constant-voltage charging regimes without the need for additional area- and power-consuming control circuitry. Current-domain circuitry for end-of-charge detection negates the need for precision-sense resistors in either the charging path or control loop. We show theoretically and experimentally that the low-frequency pole-zero nature of most battery impedances leads to inherent stability of the analog control loop. The circuit was fabricated in an AMI 0.5-μm complementary metal-oxide semiconductor process, and achieves 89.7% average power efficiency and an end voltage accuracy of 99.9% relative to the desired target 4.2 V, while consuming 0.16 mm(2) of chip area. To date and to the best of our knowledge, this design represents the most area-efficient and most energy-efficient battery charger circuit reported in the literature.
A Fault Diagnosis Methodology for Gear Pump Based on EEMD and Bayesian Network
Liu, Zengkai; Liu, Yonghong; Shan, Hongkai; Cai, Baoping; Huang, Qing
2015-01-01
This paper proposes a fault diagnosis methodology for a gear pump based on the ensemble empirical mode decomposition (EEMD) method and the Bayesian network. Essentially, the presented scheme is a multi-source information fusion based methodology. Compared with the conventional fault diagnosis with only EEMD, the proposed method is able to take advantage of all useful information besides sensor signals. The presented diagnostic Bayesian network consists of a fault layer, a fault feature layer and a multi-source information layer. Vibration signals from sensor measurement are decomposed by the EEMD method and the energy of intrinsic mode functions (IMFs) are calculated as fault features. These features are added into the fault feature layer in the Bayesian network. The other sources of useful information are added to the information layer. The generalized three-layer Bayesian network can be developed by fully incorporating faults and fault symptoms as well as other useful information such as naked eye inspection and maintenance records. Therefore, diagnostic accuracy and capacity can be improved. The proposed methodology is applied to the fault diagnosis of a gear pump and the structure and parameters of the Bayesian network is established. Compared with artificial neural network and support vector machine classification algorithms, the proposed model has the best diagnostic performance when sensor data is used only. A case study has demonstrated that some information from human observation or system repair records is very helpful to the fault diagnosis. It is effective and efficient in diagnosing faults based on uncertain, incomplete information. PMID:25938760
A Fault Diagnosis Methodology for Gear Pump Based on EEMD and Bayesian Network.
Liu, Zengkai; Liu, Yonghong; Shan, Hongkai; Cai, Baoping; Huang, Qing
2015-01-01
This paper proposes a fault diagnosis methodology for a gear pump based on the ensemble empirical mode decomposition (EEMD) method and the Bayesian network. Essentially, the presented scheme is a multi-source information fusion based methodology. Compared with the conventional fault diagnosis with only EEMD, the proposed method is able to take advantage of all useful information besides sensor signals. The presented diagnostic Bayesian network consists of a fault layer, a fault feature layer and a multi-source information layer. Vibration signals from sensor measurement are decomposed by the EEMD method and the energy of intrinsic mode functions (IMFs) are calculated as fault features. These features are added into the fault feature layer in the Bayesian network. The other sources of useful information are added to the information layer. The generalized three-layer Bayesian network can be developed by fully incorporating faults and fault symptoms as well as other useful information such as naked eye inspection and maintenance records. Therefore, diagnostic accuracy and capacity can be improved. The proposed methodology is applied to the fault diagnosis of a gear pump and the structure and parameters of the Bayesian network is established. Compared with artificial neural network and support vector machine classification algorithms, the proposed model has the best diagnostic performance when sensor data is used only. A case study has demonstrated that some information from human observation or system repair records is very helpful to the fault diagnosis. It is effective and efficient in diagnosing faults based on uncertain, incomplete information.
Lightweight Battery Charge Regulator Used to Track Solar Array Peak Power
NASA Technical Reports Server (NTRS)
Soeder, James F.; Button, Robert M.
1999-01-01
A battery charge regulator based on the series-connected boost regulator (SCBR) technology has been developed for high-voltage spacecraft applications. The SCBR regulates the solar array power during insolation to prevent battery overcharge or undercharge conditions. It can also be used to provide regulated battery output voltage to spacecraft loads if necessary. This technology uses industry-standard dc-dc converters and a unique interconnection to provide size, weight, efficiency, fault tolerance, and modularity benefits over existing systems. The high-voltage SCBR shown in the photograph has demonstrated power densities of over 1000 watts per kilogram (W/kg). Using four 150-W dc-dc converter modules, it can process 2500 W of power at 120 Vdc with a minimum input voltage of 90 Vdc. Efficiency of the SCBR was 94 to 98 percent over the entire operational range. Internally, the unit is made of two separate SCBR s, each with its own analog control circuitry, to demonstrate the modularity of the technology. The analog controllers regulate the output current and incorporate the output voltage limit with active current sharing between the two units. They also include voltage and current telemetry, on/off control, and baseplate temperature sensors. For peak power tracking, the SCBR was connected to a LabView-based data acquisition system for telemetry and control. A digital control algorithm for tracking the peak power point of a solar array was developed using the principle of matching the source impedance with the load impedance for maximum energy transfer. The algorithm was successfully demonstrated in a simulated spacecraft electrical system at the Boeing PhantomWorks High Voltage Test Facility in Seattle, Washington. The system consists of a 42-string, high-voltage solar array simulator, a 77-cell, 80-ampere-hour (A-hr) nickel-hydrogen battery, and a constant power-load module. The SCBR and the LabView control algorithm successfully tracked the solar array peak power point through various load transients, including sunlight discharge transients when the total load exceeded the maximum solar array output power.
NASA Astrophysics Data System (ADS)
Rowlett, Hugh; Forsyth, Donald W.
1984-07-01
New air gun reflection profiles, 3.5-kHz reflection profiles, and microearthquake data recorded by an array of ocean bottom seismographs form the basis for this study of the transition from a spreading center to a major transform fault. Disturbances of the thick, normally flat-lying, turbidite deposits provide indications of recent vertical motions. At the western intersection of the fracture zone with the median valley there is a depression in the sediments that represents the southerly extension of the median valley into the fracture zone valley. The depression is terminated abruptly on the south by the active transform fault, which acts as a locus for vertical as well as horizontal displacement. Flat-lying, undisturbed sediments terminate abruptly at the fault. The western boundary of the depression is much broader and is characterized by a series of slumplike steps. To the west, there is little or no evidence for uplift or tilting of sediments which might indicate vertical recovery of the crust as it spreads away from the depression. This suggests that uplift and recovery out of the depression is episodic in nature and has been inactive over the last million years along the western boundary. To the east there is clear evidence of uplift and tilting of sedimentary layers. A basement ridge emerging from the sediments is currently being uplifted and rotated in a manner analogous to processes responsible for the creation and cancellation of median valley relief. The transition between the spreading center and the transform fault appears to take place within 1-2 km. The width of the transform fault just east of the depression is less than a kilometer. Microearthquakes were located and displayed by new methods that directly account for nonlinearities associated with small arrays. Microearthquakes located by three or more ocean bottom seismometers show that the greatest seismic activity occurs along the eastern walls of the median valley, at the basement ridge, in the eastern portion of the depression and in the crestal mountains. Very little activity is associated with the western edge of the transform depression and the trace of the transform fault.
Layered clustering multi-fault diagnosis for hydraulic piston pump
NASA Astrophysics Data System (ADS)
Du, Jun; Wang, Shaoping; Zhang, Haiyan
2013-04-01
Efficient diagnosis is very important for improving reliability and performance of aircraft hydraulic piston pump, and it is one of the key technologies in prognostic and health management system. In practice, due to harsh working environment and heavy working loads, multiple faults of an aircraft hydraulic pump may occur simultaneously after long time operations. However, most existing diagnosis methods can only distinguish pump faults that occur individually. Therefore, new method needs to be developed to realize effective diagnosis of simultaneous multiple faults on aircraft hydraulic pump. In this paper, a new method based on the layered clustering algorithm is proposed to diagnose multiple faults of an aircraft hydraulic pump that occur simultaneously. The intensive failure mechanism analyses of the five main types of faults are carried out, and based on these analyses the optimal combination and layout of diagnostic sensors is attained. The three layered diagnosis reasoning engine is designed according to the faults' risk priority number and the characteristics of different fault feature extraction methods. The most serious failures are first distinguished with the individual signal processing. To the desultory faults, i.e., swash plate eccentricity and incremental clearance increases between piston and slipper, the clustering diagnosis algorithm based on the statistical average relative power difference (ARPD) is proposed. By effectively enhancing the fault features of these two faults, the ARPDs calculated from vibration signals are employed to complete the hypothesis testing. The ARPDs of the different faults follow different probability distributions. Compared with the classical fast Fourier transform-based spectrum diagnosis method, the experimental results demonstrate that the proposed algorithm can diagnose the multiple faults, which occur synchronously, with higher precision and reliability.
Complexity, Testability, and Fault Analysis of Digital, Analog, and Hybrid Systems.
1984-09-30
E. Moret Table 2. Decision Table. Example 3 ond and third rules are inconsistent, since Raining? Yes No No both could apparently apply when it is...misclassification; a similar to be approach based on game theory was de- p.. - 0.134. scribed in SLAG71 and a third in KULK76. When the class assignments are...to a change from (X, Y)=(I,1) to (X,Y)=(O,O); the second corresponds to a change in the value of the function g=X+Y; and the third corresponds to a
A-Priori Rupture Models for Northern California Type-A Faults
Wills, Chris J.; Weldon, Ray J.; Field, Edward H.
2008-01-01
This appendix describes how a-priori rupture models were developed for the northern California Type-A faults. As described in the main body of this report, and in Appendix G, ?a-priori? models represent an initial estimate of the rate of single and multi-segment surface ruptures on each fault. Whether or not a given model is moment balanced (i.e., satisfies section slip-rate data) depends on assumptions made regarding the average slip on each segment in each rupture (which in turn depends on the chosen magnitude-area relationship). Therefore, for a given set of assumptions, or branch on the logic tree, the methodology of the present Working Group (WGCEP-2007) is to find a final model that is as close as possible to the a-priori model, in the least squares sense, but that also satisfies slip rate and perhaps other data. This is analogous the WGCEP- 2002 approach of effectively voting on the relative rate of each possible rupture, and then finding the closest moment-balance model (under a more limiting set of assumptions than adopted by the present WGCEP, as described in detail in Appendix G). The 2002 Working Group Report (WCCEP, 2003, referred to here as WGCEP-2002), created segmented earthquake rupture forecast models for all faults in the region, including some that had been designated as Type B faults in the NSHMP, 1996, and one that had not previously been considered. The 2002 National Seismic Hazard Maps used the values from WGCEP-2002 for all the faults in the region, essentially treating all the listed faults as Type A faults. As discussed in Appendix A, the current WGCEP found that there are a number of faults with little or no data on slip-per-event, or dates of previous earthquakes. As a result, the WGCEP recommends that faults with minimal available earthquake recurrence data: the Greenville, Mount Diablo, San Gregorio, Monte Vista-Shannon and Concord-Green Valley be modeled as Type B faults to be consistent with similarly poorly-known faults statewide. As a result, the modified segmented models discussed here only concern the San Andreas, Hayward-Rodgers Creek, and Calaveras faults. Given the extensive level of effort given by the recent Bay-Area WGCEP-2002, our approach has been to adopt their final average models as our preferred a-prior models. We have modified the WGCEP-2002 models where necessary to match data that were not available or not used by that WGCEP and where the models needed by WGCEP-2007 for a uniform statewide model require different assumptions and/or logic-tree branch weights. In these cases we have made what are usually slight modifications to the WGCEP-2002 model. This Appendix presents the minor changes needed to accomodate updated information and model construction. We do not attempt to reproduce here the extensive documentation of data, model parameters and earthquake probablilities in the WG-2002 report.
Sharma, Arun K; Sk, Ugir Hossain; He, Pengfei; Peters, Jeffrey M; Amin, Shantu
2010-07-15
Peroxisome proliferator-activated receptors (PPARs) are ligand-activated transcription factors and members of the nuclear hormone receptor superfamily. Herein, we describe an efficient synthesis of a novel isosteric selenium analog of the highly specific PPARbeta/delta ligand 2-methyl-4-((4-methyl-2-(4-trifluoromethylphenyl)-1,3-thiazol-5-yl)-methylsulfanyl)phenoxy-acetic acid (GW501516; 1). The study examined the efficiency of the novel selenium analog 2-methyl-4-((4-methyl-2-(4-trifluoromethylphenyl)-1,3-selenazol-5-yl)-methylsulfanyl)phenoxy-acetic acid (2) to activate PPARbeta/delta and the effect of ligand activation of PPARbeta/delta on cell proliferation and target gene expression in human HaCaT keratinocytes. The results showed that similar to GW501516, the Se-analog 2 increased expression of the known PPARbeta/delta target gene angiopoietin-like protein 4 (ANGPTL4); the compound 2 was comparable in efficacy as compared to GW501516. Consistent with a large body of evidence, the Se-analog inhibited cell proliferation in HaCaT keratinocytes similar to that observed with GW501516. In summary, the novel Se-analog 2 has been developed as a potent PPARbeta/delta ligand that may possess additional anti-cancer properties of selenium. 2010 Elsevier Ltd. All rights reserved.
Singh-Blom, Amrita; Hughes, Randall A; Ellington, Andrew D
2014-05-20
Residue-specific incorporation of non-canonical amino acids into proteins is usually performed in vivo using amino acid auxotrophic strains and replacing the natural amino acid with an unnatural amino acid analog. Herein, we present an efficient amino acid depleted cell-free protein synthesis system that can be used to study residue-specific replacement of a natural amino acid by an unnatural amino acid analog. This system combines a simple methodology and high protein expression titers with a high-efficiency analog substitution into a target protein. To demonstrate the productivity and efficacy of a cell-free synthesis system for residue-specific incorporation of unnatural amino acids in vitro, we use this system to show that 5-fluorotryptophan and 6-fluorotryptophan substituted streptavidin retain the ability to bind biotin despite protein-wide replacement of a natural amino acid for the amino acid analog. We envisage this amino acid depleted cell-free synthesis system being an economical and convenient format for the high-throughput screening of a myriad of amino acid analogs with a variety of protein targets for the study and functional characterization of proteins substituted with unnatural amino acids when compared to the currently employed in vivo methodologies. Copyright © 2014 Elsevier B.V. All rights reserved.
Automatic Fault Characterization via Abnormality-Enhanced Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G; Laguna, I; de Supinski, B R
Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help tomore » identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.« less
A fault-tolerant intelligent robotic control system
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Tso, Kam Sing
1993-01-01
This paper describes the concept, design, and features of a fault-tolerant intelligent robotic control system being developed for space and commercial applications that require high dependability. The comprehensive strategy integrates system level hardware/software fault tolerance with task level handling of uncertainties and unexpected events for robotic control. The underlying architecture for system level fault tolerance is the distributed recovery block which protects against application software, system software, hardware, and network failures. Task level fault tolerance provisions are implemented in a knowledge-based system which utilizes advanced automation techniques such as rule-based and model-based reasoning to monitor, diagnose, and recover from unexpected events. The two level design provides tolerance of two or more faults occurring serially at any level of command, control, sensing, or actuation. The potential benefits of such a fault tolerant robotic control system include: (1) a minimized potential for damage to humans, the work site, and the robot itself; (2) continuous operation with a minimum of uncommanded motion in the presence of failures; and (3) more reliable autonomous operation providing increased efficiency in the execution of robotic tasks and decreased demand on human operators for controlling and monitoring the robotic servicing routines.
NASA Astrophysics Data System (ADS)
Hsieh, Cheng-En; Huang, Wen-Jeng; Chang, Ping-Yu; Lo, Wei
2016-04-01
An unmanned aerial vehicle (UAV) with a digital camera is an efficient tool for geologists to investigate structure patterns in the field. By setting ground control points (GCPs), UAV-based photogrammetry provides high-quality and quantitative results such as a digital surface model (DSM) and orthomosaic and elevational images. We combine the elevational outcrop 3D model and a digital surface model together to analyze the structural characteristics of Sanyi active fault in Houli-Fengyuan area, western Taiwan. Furthermore, we collect resistivity survey profiles and drilling core data in the Fengyuan District in order to build the subsurface fault geometry. The ground sample distance (GSD) of an elevational outcrop 3D model is 3.64 cm/pixel in this study. Our preliminary result shows that 5 fault branches are distributed 500 meters wide on the elevational outcrop and the width of Sanyi fault zone is likely much great than this value. Together with our field observations, we propose a structural evolution model to demonstrate how the 5 fault branches developed. The resistivity survey profiles show that Holocene gravel was disturbed by the Sanyi fault in Fengyuan area.
Robust In-Flight Sensor Fault Diagnostics for Aircraft Engine Based on Sliding Mode Observers
Chang, Xiaodong; Huang, Jinquan; Lu, Feng
2017-01-01
For a sensor fault diagnostic system of aircraft engines, the health performance degradation is an inevitable interference that cannot be neglected. To address this issue, this paper investigates an integrated on-line sensor fault diagnostic scheme for a commercial aircraft engine based on a sliding mode observer (SMO). In this approach, one sliding mode observer is designed for engine health performance tracking, and another for sensor fault reconstruction. Both observers are employed in in-flight applications. The results of the former SMO are analyzed for post-flight updating the baseline model of the latter. This idea is practical and feasible since the updating process does not require the algorithm to be regulated or redesigned, so that ground-based intervention is avoided, and the update process is implemented in an economical and efficient way. With this setup, the robustness of the proposed scheme to the health degradation is much enhanced and the latter SMO is able to fulfill sensor fault reconstruction over the course of the engine life. The proposed sensor fault diagnostic system is applied to a nonlinear simulation of a commercial aircraft engine, and its effectiveness is evaluated in several fault scenarios. PMID:28398255
Robust In-Flight Sensor Fault Diagnostics for Aircraft Engine Based on Sliding Mode Observers.
Chang, Xiaodong; Huang, Jinquan; Lu, Feng
2017-04-11
For a sensor fault diagnostic system of aircraft engines, the health performance degradation is an inevitable interference that cannot be neglected. To address this issue, this paper investigates an integrated on-line sensor fault diagnostic scheme for a commercial aircraft engine based on a sliding mode observer (SMO). In this approach, one sliding mode observer is designed for engine health performance tracking, and another for sensor fault reconstruction. Both observers are employed in in-flight applications. The results of the former SMO are analyzed for post-flight updating the baseline model of the latter. This idea is practical and feasible since the updating process does not require the algorithm to be regulated or redesigned, so that ground-based intervention is avoided, and the update process is implemented in an economical and efficient way. With this setup, the robustness of the proposed scheme to the health degradation is much enhanced and the latter SMO is able to fulfill sensor fault reconstruction over the course of the engine life. The proposed sensor fault diagnostic system is applied to a nonlinear simulation of a commercial aircraft engine, and its effectiveness is evaluated in several fault scenarios.
An Efficient, Optimized Synthesis of Fentanyl and Related Analogs
Valdez, Carlos A.; Leif, Roald N.; Mayer, Brian P.; ...
2014-09-18
The alternate and optimized syntheses of the parent opioid fentanyl and its analogs are described. The routes presented exhibit high-yielding transformations leading to these powerful analgesics after optimization studies were carried out for each synthetic step. The general three-step strategy produced a panel of four fentanyls in excellent yields (73–78%) along with their more commonly encountered hydrochloride and citric acid salts. In conclusion, the following strategy offers the opportunity for the gram-scale, efficient production of this interesting class of opioid alkaloids.
An Efficient, Optimized Synthesis of Fentanyl and Related Analogs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valdez, Carlos A.; Leif, Roald N.; Mayer, Brian P.
The alternate and optimized syntheses of the parent opioid fentanyl and its analogs are described. The routes presented exhibit high-yielding transformations leading to these powerful analgesics after optimization studies were carried out for each synthetic step. The general three-step strategy produced a panel of four fentanyls in excellent yields (73–78%) along with their more commonly encountered hydrochloride and citric acid salts. In conclusion, the following strategy offers the opportunity for the gram-scale, efficient production of this interesting class of opioid alkaloids.
Fault diagnosis of helical gearbox using acoustic signal and wavelets
NASA Astrophysics Data System (ADS)
Pranesh, SK; Abraham, Siju; Sugumaran, V.; Amarnath, M.
2017-05-01
The efficient transmission of power in machines is needed and gears are an appropriate choice. Faults in gears result in loss of energy and money. The monitoring and fault diagnosis are done by analysis of the acoustic and vibrational signals which are generally considered to be unwanted by products. This study proposes the usage of machine learning algorithm for condition monitoring of a helical gearbox by using the sound signals produced by the gearbox. Artificial faults were created and subsequently signals were captured by a microphone. An extensive study using different wavelet transformations for feature extraction from the acoustic signals was done, followed by waveletselection and feature selection using J48 decision tree and feature classification was performed using K star algorithm. Classification accuracy of 100% was obtained in the study
Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture
NASA Astrophysics Data System (ADS)
Meng, Chunfang
2017-03-01
We present Defmod, an open source (linear) finite element code that enables us to efficiently model the crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow and frictional fault slip. Ali (2015) provides the original code introducing an implicit solver for (quasi-)static problem, and an explicit solver for dynamic problem. The fault constraint is implemented via Lagrange Multiplier. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static state and dynamic state. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdraw or injection. Here, we focus on benchmarking the Defmod results against some establish results.
Multicomponent seismic loss estimation on the North Anatolian Fault Zone (Turkey)
NASA Astrophysics Data System (ADS)
karimzadeh Naghshineh, S.; Askan, A.; Erberik, M. A.; Yakut, A.
2015-12-01
Seismic loss estimation is essential to incorporate seismic risk of structures into an efficient decision-making framework. Evaluation of seismic damage of structures requires a multidisciplinary approach including earthquake source characterization, seismological prediction of earthquake-induced ground motions, prediction of structural responses exposed to ground shaking, and finally estimation of induced damage to structures. As the study region, Erzincan, a city on the eastern part of Turkey is selected which is located in the conjunction of three active strike-slip faults as North Anatolian Fault, North East Anatolian Fault and Ovacik fault. Erzincan city center is in a pull-apart basin underlain by soft sediments that has experienced devastating earthquakes such as the 27 December 1939 (Ms=8.0) and the 13 March 1992 (Mw=6.6) events, resulting in extensive amount of physical as well as economical losses. These losses are attributed to not only the high seismicity of the area but also as a result of the seismic vulnerability of the constructed environment. This study focuses on the seismic damage estimation of Erzincan using both regional seismicity and local building information. For this purpose, first, ground motion records are selected from a set of scenario events simulated with the stochastic finite fault methodology using regional seismicity parameters. Then, existing building stock are classified into specified groups represented with equivalent single-degree-of-freedom systems. Through these models, the inelastic dynamic structural responses are investigated with non-linear time history analysis. To assess the potential seismic damage in the study area, fragility curves for the classified structural types are derived. Finally, the estimated damage is compared with the observed damage during the 1992 Erzincan earthquake. The results are observed to have a reasonable match indicating the efficiency of the ground motion simulations and building analyses.
CO2/Brine transport into shallow aquifers along fault zones.
Keating, Elizabeth H; Newell, Dennis L; Viswanathan, Hari; Carey, J W; Zyvoloski, G; Pawar, Rajesh
2013-01-02
Unintended release of CO(2) from carbon sequestration reservoirs poses a well-recognized risk to groundwater quality. Research has largely focused on in situ CO(2)-induced pH depression and subsequent trace metal mobilization. In this paper we focus on a second mechanism: upward intrusion of displaced brine or brackish-water into a shallow aquifer as a result of CO(2) injection. Studies of two natural analog sites provide insights into physical and chemical mechanisms controlling both brackish water and CO(2) intrusion into shallow aquifers along fault zones. At the Chimayó, New Mexico site, shallow groundwater near the fault is enriched in CO(2) and, in some places, salinity is significantly elevated. In contrast, at the Springerville, Arizona site CO(2) is leaking upward through brine aquifers but does not appear to be increasing salinity in the shallow aquifer. Using multiphase transport simulations we show conditions under which significant CO(2) can be transported through deep brine aquifers into shallow layers. Only a subset of these conditions favor entrainment of salinity into the shallow aquifer: high aspect-ratio leakage pathways and viscous coupling between the fluid phases. Recognition of the conditions under which salinity is favored to be cotransported with CO(2) into shallow aquifers will be important in environmental risk assessments.
NASA Technical Reports Server (NTRS)
1985-01-01
The primary objective of the Test Active Control Technology (ACT) System laboratory tests was to verify and validate the system concept, hardware, and software. The initial lab tests were open loop hardware tests of the Test ACT System as designed and built. During the course of the testing, minor problems were uncovered and corrected. Major software tests were run. The initial software testing was also open loop. These tests examined pitch control laws, wing load alleviation, signal selection/fault detection (SSFD), and output management. The Test ACT System was modified to interface with the direct drive valve (DDV) modules. The initial testing identified problem areas with DDV nonlinearities, valve friction induced limit cycling, DDV control loop instability, and channel command mismatch. The other DDV issue investigated was the ability to detect and isolate failures. Some simple schemes for failure detection were tested but were not completely satisfactory. The Test ACT System architecture continues to appear promising for ACT/FBW applications in systems that must be immune to worst case generic digital faults, and be able to tolerate two sequential nongeneric faults with no reduction in performance. The challenge in such an implementation would be to keep the analog element sufficiently simple to achieve the necessary reliability.
Efficient design of CMOS TSC checkers
NASA Technical Reports Server (NTRS)
Biddappa, Anita; Shamanna, Manjunath K.; Maki, Gary; Whitaker, Sterling
1990-01-01
This paper considers the design of an efficient, robustly testable, CMOS Totally Self-Checking (TSC) Checker for k-out-of-2k codes. Most existing implementations use primitive gates and assume the single stuck-at fault model. The self-testing property has been found to fail for CMOS TSC checkers under the stuck-open fault model due to timing skews and arbitrary delays in the circuit. A new four level design using CMOS primitive gates (NAND, NOR, INVERTERS) is presented. This design retains its properties under the stuck-open fault model. Additionally, this method offers an impressive reduction (greater than 70 percent) in gate count, gate inputs, and test set size when compared to the existing method. This implementation is easily realizable and is based on Anderson's technique. A thorough comparative study has been made on the proposed implementation and Kundu's implementation and the results indicate that the proposed one is better than Kundu's in all respects for k-out-of-2k codes.
Transparent Ada rendezvous in a fault tolerant distributed system
NASA Technical Reports Server (NTRS)
Racine, Roger
1986-01-01
There are many problems associated with distributing an Ada program over a loosely coupled communication network. Some of these problems involve the various aspects of the distributed rendezvous. The problems addressed involve supporting the delay statement in a selective call and supporting the else clause in a selective call. Most of these difficulties are compounded by the need for an efficient communication system. The difficulties are compounded even more by considering the possibility of hardware faults occurring while the program is running. With a hardware fault tolerant computer system, it is possible to design a distribution scheme and communication software which is efficient and allows Ada semantics to be preserved. An Ada design for the communications software of one such system will be presented, including a description of the services provided in the seven layers of an International Standards Organization (ISO) Open System Interconnect (OSI) model communications system. The system capabilities (hardware and software) that allow this communication system will also be described.
Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest
Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan
2018-01-01
Mechanical faults of high-voltage circuit breakers (HVCBs) always happen over long-term operation, so extracting the fault features and identifying the fault type have become a key issue for ensuring the security and reliability of power supply. Based on wavelet packet decomposition technology and random forest algorithm, an effective identification system was developed in this paper. First, compared with the incomplete description of Shannon entropy, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variable and optimize the feature space. Finally, the approach was verified based on actual HVCB vibration signals by considering six typical fault classes. The comparative experiment results show that the classification accuracy of the proposed method with the origin feature space reached 93.33% and reached up to 95.56% with optimized input feature vector of classifier. This indicates that feature optimization procedure is successful, and the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548
NASA Astrophysics Data System (ADS)
Jiang, Li; Shi, Tielin; Xuan, Jianping
2012-05-01
Generally, the vibration signals of fault bearings are non-stationary and highly nonlinear under complicated operating conditions. Thus, it's a big challenge to extract optimal features for improving classification and simultaneously decreasing feature dimension. Kernel Marginal Fisher analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small sample size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. So as to directly excavate nonlinear features from the original high-dimensional vibration signals, RKMFA constructs two graphs describing the intra-class compactness and the inter-class separability, by combining traditional manifold learning algorithm with fisher criteria. Therefore, the optimal low-dimensional features are obtained for better classification and finally fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves the fault classification performance and outperforms the other conventional approaches.
Induction motor inter turn fault detection using infrared thermographic analysis
NASA Astrophysics Data System (ADS)
Singh, Gurmeet; Anil Kumar, T. Ch.; Naikan, V. N. A.
2016-07-01
Induction motors are the most commonly used prime movers in industries. These are subjected to various environmental, thermal and load stresses that ultimately reduces the motor efficiency and later leads to failure. Inter turn fault is the second most commonly observed faults in the motors and is considered the most severe. It can lead to the failure of complete phase and can even cause accidents, if left undetected or untreated. This paper proposes an online and non invasive technique that uses infrared thermography, in order to detect the presence of inter turn fault in induction motor drive. Two methods have been proposed that detect the fault and estimate its severity. One method uses transient thermal monitoring during the start of motor and other applies pseudo coloring technique on infrared image of the motor, after it reaches a thermal steady state. The designed template for pseudo-coloring is in acquiescence with the InterNational Electrical Testing Association (NETA) thermographic standard. An index is proposed to assess the severity of the fault present in the motor.
Fault detection and diagnosis in asymmetric multilevel inverter using artificial neural network
NASA Astrophysics Data System (ADS)
Raj, Nithin; Jagadanand, G.; George, Saly
2018-04-01
The increased component requirement to realise multilevel inverter (MLI) fallout in a higher fault prospect due to power semiconductors. In this scenario, efficient fault detection and diagnosis (FDD) strategies to detect and locate the power semiconductor faults have to be incorporated in addition to the conventional protection systems. Even though a number of FDD methods have been introduced in the symmetrical cascaded H-bridge (CHB) MLIs, very few methods address the FDD in asymmetric CHB-MLIs. In this paper, the gate-open circuit FDD strategy in asymmetric CHB-MLI is presented. Here, a single artificial neural network (ANN) is used to detect and diagnose the fault in both binary and trinary configurations of the asymmetric CHB-MLIs. In this method, features of the output voltage of the MLIs are used as to train the ANN for FDD method. The results prove the validity of the proposed method in detecting and locating the fault in both asymmetric MLI configurations. Finally, the ANN response to the input parameter variation is also analysed to access the performance of the proposed ANN-based FDD strategy.
Yi, Qu; Zhan-ming, Li; Er-chao, Li
2012-11-01
A new fault detection and diagnosis (FDD) problem via the output probability density functions (PDFs) for non-gausian stochastic distribution systems (SDSs) is investigated. The PDFs can be approximated by radial basis functions (RBFs) neural networks. Different from conventional FDD problems, the measured information for FDD is the output stochastic distributions and the stochastic variables involved are not confined to Gaussian ones. A (RBFs) neural network technique is proposed so that the output PDFs can be formulated in terms of the dynamic weighings of the RBFs neural network. In this work, a nonlinear adaptive observer-based fault detection and diagnosis algorithm is presented by introducing the tuning parameter so that the residual is as sensitive as possible to the fault. Stability and Convergency analysis is performed in fault detection and fault diagnosis analysis for the error dynamic system. At last, an illustrated example is given to demonstrate the efficiency of the proposed algorithm, and satisfactory results have been obtained. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
ASCS online fault detection and isolation based on an improved MPCA
NASA Astrophysics Data System (ADS)
Peng, Jianxin; Liu, Haiou; Hu, Yuhui; Xi, Junqiang; Chen, Huiyan
2014-09-01
Multi-way principal component analysis (MPCA) has received considerable attention and been widely used in process monitoring. A traditional MPCA algorithm unfolds multiple batches of historical data into a two-dimensional matrix and cut the matrix along the time axis to form subspaces. However, low efficiency of subspaces and difficult fault isolation are the common disadvantages for the principal component model. This paper presents a new subspace construction method based on kernel density estimation function that can effectively reduce the storage amount of the subspace information. The MPCA model and the knowledge base are built based on the new subspace. Then, fault detection and isolation with the squared prediction error (SPE) statistic and the Hotelling ( T 2) statistic are also realized in process monitoring. When a fault occurs, fault isolation based on the SPE statistic is achieved by residual contribution analysis of different variables. For fault isolation of subspace based on the T 2 statistic, the relationship between the statistic indicator and state variables is constructed, and the constraint conditions are presented to check the validity of fault isolation. Then, to improve the robustness of fault isolation to unexpected disturbances, the statistic method is adopted to set the relation between single subspace and multiple subspaces to increase the corrective rate of fault isolation. Finally fault detection and isolation based on the improved MPCA is used to monitor the automatic shift control system (ASCS) to prove the correctness and effectiveness of the algorithm. The research proposes a new subspace construction method to reduce the required storage capacity and to prove the robustness of the principal component model, and sets the relationship between the state variables and fault detection indicators for fault isolation.
Erosion controls transpressional wedge kinematics
NASA Astrophysics Data System (ADS)
Leever, K. A.; Oncken, O.
2012-04-01
High resolution digital image analysis of analogue tectonic models reveals that erosion strongly influences the kinematics of brittle transpressional wedges. In the basally-driven experimental setup with low-angle transpression (convergence angle of 20 degrees) and a homogeneous brittle rheology, a doubly vergent wedge develops above the linear basal velocity discontinuity. In the erosive case, the experiment is interrupted and the wedge topography fully removed at displacement increments of ~3/4 the model thickness. The experiments are observed by a stereo pair of high resolution CCD cameras and the incremental displacement field calculated by Digital Particle Image Velocimetry (DPIV). From this dataset, fault slip on individual fault segments - magnitude and angle on the horizontal plane relative to the fault trace - is extracted using the method of Leever et al. (2011). In the non-erosive case, after an initial stage of strain localization, the wedge experiences two transient stages of (1) oblique slip and (2) localized strain partitioning. In the second stage, the fault slip angle on the pro-shear(s) rotates by some 30 degrees from oblique to near-orthogonal. Kinematic steady state is attained in the third stage when a through-going central strike-slip zone develops above the basal velocity discontinuity. In this stage, strain is localized on two main faults (or fault zones) and fully partitioned between plate boundary-parallel displacement on the central strike-slip zone and near-orthogonal reverse faulting at the front (pro-side) of the wedge. The fault slip angle on newly formed pro-shears in this stage is stable at 60-65 degrees (see also Leever et al., 2011). In contrast, in the erosive case, slip remains more oblique on the pro-shears throughout the experiment and a separate central strike-slip zone does not form, i.e. strain partitioning does not fully develop. In addition, more faults are active simultaneously. Definition of stages is based on slip on the retro-side of the wedge. In the first stage, the slip angle on the retro-shear is 27 +/- 12 degrees. In a subsequent stage, slip on the retro-side is partitioned between strike-slip and oblique (~35 degrees) faulting. In the third stage, the slip angle on the retro side stabilizes at ~10 degrees. The pro-shears are characterized by very different kinematics. Two pro-shears tend to be active simultaneously, the extinction of the older fault shortly followed by the initiation of a new one in a forelandward breaking sequence. Throughout the experiment, the fault slip on the pro-shears is 40-60 degrees at their initiation, gradually decreasing to nearly strike-slip at the moment of fault extinction. This is a rotation of similar magnitude but in the reverse direction compared to the non-erosive case. The fault planes themselves do not rotate. Leever, K. A., R. H. Gabrielsen, D. Sokoutis, and E. Willingshofer (2011), The effect of convergence angle on the kinematic evolution of strain partitioning in transpressional brittle wedges: Insight from analog modeling and high-resolution digital image analysis, Tectonics, 30(2), TC2013.
Using Small UAS for Mission Simulation, Science Validation, and Definition
NASA Astrophysics Data System (ADS)
Abakians, H.; Donnellan, A.; Chapman, B. D.; Williford, K. H.; Francis, R.; Ehlmann, B. L.; Smith, A. T.
2017-12-01
Small Unmanned Aerial Systems (sUAS) are increasingly being used across JPL and NASA for science data collection, mission simulation, and mission validation. They can also be used as proof of concept for development of autonomous capabilities for Earth and planetary exploration. sUAS are useful for reconstruction of topography and imagery for a variety of applications ranging from fault zone morphology, Mars analog studies, geologic mapping, photometry, and estimation of vegetation structure. Imagery, particularly multispectral imagery can be used for identifying materials such as fault lithology or vegetation type. Reflectance maps can be produced for wetland or other studies. Topography and imagery observations are useful in radar studies such as from UAVSAR or the future NISAR mission to validate 3D motions and to provide imagery in areas of disruption where the radar measurements decorrelate. Small UAS are inexpensive to operate, reconfigurable, and agile, making them a powerful platform for validating mission science measurements, and also for providing surrogate data for existing or future missions.
Quantum computing with Majorana fermion codes
NASA Astrophysics Data System (ADS)
Litinski, Daniel; von Oppen, Felix
2018-05-01
We establish a unified framework for Majorana-based fault-tolerant quantum computation with Majorana surface codes and Majorana color codes. All logical Clifford gates are implemented with zero-time overhead. This is done by introducing a protocol for Pauli product measurements with tetrons and hexons which only requires local 4-Majorana parity measurements. An analogous protocol is used in the fault-tolerant setting, where tetrons and hexons are replaced by Majorana surface code patches, and parity measurements are replaced by lattice surgery, still only requiring local few-Majorana parity measurements. To this end, we discuss twist defects in Majorana fermion surface codes and adapt the technique of twist-based lattice surgery to fermionic codes. Moreover, we propose a family of codes that we refer to as Majorana color codes, which are obtained by concatenating Majorana surface codes with small Majorana fermion codes. Majorana surface and color codes can be used to decrease the space overhead and stabilizer weight compared to their bosonic counterparts.
Co-seismic thermal dissociation of carbonate fault rocks: Naukluft Thrust, central Namibia
NASA Astrophysics Data System (ADS)
Rowe, C. D.; Miller, J. A.; Sylvester, F.; Backeberg, N.; Faber, C.; Mapani, B.
2009-12-01
Frictional heating has been shown to dissociate carbonate minerals in fault rocks and rock slides at high velocities, producing in-situ fluid pressure spikes and resulting in very low effective friction. We describe the textural and geochemical effects of repeated events of frictional-thermal dissociation and fluidization along a low-angle continental thrust fault. The Naukluft Thrust in central Namibia is a regional décollement along which the Naukluft Nappe Complex was emplaced over the Nama Basin in the southern foreland of the ~ 550Ma Damara Orogen. Fault rocks in the thrust show a coupled geochemical and structural evolution driven by dolomitization reactions during fault activity and facilitated by fluid flow along the fault surface. The earliest developed fault rocks are calcite-rich calcmylonites which were progressively dolomitized along foliation. Above a critical dolomite/calcite ratio, the rocks show only brittle deformation fabrics dominated by breccias, cataclasites, and locally, a thin (1-3cm) microcrystalline, smooth white ultracataclasite. The fault is characterized by the prevalence of an unusual “gritty dolomite” yellow cataclasite containing very well rounded clasts in massive to flow-banded fine dolomitic matrix. This cataclasite, locally known as the “gritty dolomite”, may reach thicknesses of up to ~ 10m without evidence of internal cross-cutting relations with randomly distributed clasts (an “unsorted” texture). The gritty dolomite also forms clastic injections into the hanging wall of the fault, frequently where the fault surface changes orientation. Color-cathodoluminescence images show that individual carbonate grains within the “gritty dolomite” have multiple layers of thin (~10-100 micron) dolomite coatings and that the grains were smoothed and rounded between each episode of coating precipitation. Coated grains are in contact with one another but grain cores are never seen in contact. CL-bright red dolomite which forms the coatings is never observed as pore-fill between grains or other geometries typical of cement precipitates. Smoothness and radial symmetry of the coatings suggest that the grains were coated in suspension by very fine material, potentially analogous to the frictionally-generated CaO developed on the base of some landslides in carbonate rocks (Hewitt, 1988). The very thick layers of cataclasite without internal crosscutting suggest free particle paths associated with fluidization at high fluid pressure and low effective normal stress. We suggest that co-seismic frictional heating along the Naukluft Thrust caused dissociation of dolomite fault rock, producing in-situ spikes in fluid pressure (CO2) and very fine caustic CaO which chemically attacked the carbonate grains in suspension causing the smoothing and rounding. These residues then coated individual grains prior to loss of fluid pressure and settling in the fault zone. Such an event would have been associated with near total strength drop along the Naukluft Thrust. Hewitt, K., 1988 Science, v. 242, no. 4875, p. 64-67.
NASA Astrophysics Data System (ADS)
Major, J. R.; Eichhubl, P.; Urquhart, A.; Dewers, T. A.
2012-12-01
An understanding of the coupled chemical and mechanical properties of reservoir and seal units undergoing CO2 injection is critical for modeling reservoir behavior in response to the introduction of CO2. The implementation of CO2 sequestration as a mitigation strategy for climate change requires extensive risk assessment that relies heavily on computer models of subsurface reservoirs. Numerical models are fundamentally limited by the quality and validity of their input parameters. Existing models generally lack constraints on diagenesis, failing to account for the coupled geochemical or geomechanical processes that affect reservoir and seal unit properties during and after CO2 injection. For example, carbonate dissolution or precipitation after injection of CO2 into subsurface brines may significantly alter the geomechanical properties of reservoir and seal units and thus lead to solution-enhancement or self-sealing of fractures. Acidified brines may erode and breach sealing units. In addition, subcritical fracture growth enhanced by the presence of CO2 could ultimately compromise the integrity of sealing units, or enhance permeability and porosity of the reservoir itself. Such unknown responses to the introduction of CO2 can be addressed by laboratory and field-based observations and measurements. Studies of natural analogs like Crystal Geyser, Utah are thus a critical part of CO2 sequestration research. The Little Grand Wash and Salt Wash fault systems near Green River, Utah, host many fossil and active CO2 seeps, including Crystal Geyser, serving as a faulted anticline CO2 reservoir analog. The site has been extensively studied for sequestration and reservoir applications, but less attention has been paid to the diagenetic and geomechanical aspects of the fault zone. XRD analysis of reservoir and sealing rocks collected along transects across the Little Grand Wash Fault reveal mineralogical trends in the Summerville Fm (a siltstone seal unit) with calcite and smectite increasing toward to the fault, whereas illite decreases. These trends are likely the result of CO2-related diagenesis, and similar trends are also observed in sandstone units at the site. Fracture mechanics testing of unaltered and CO2-altered sandstone and siltstone samples shows that CO2-related diagenesis, which is indicated by bleaching of the Entrada Fm, has significantly decreased the fracture resistance. The subcritical fracture index is similarly affected by alteration. These compositional and mechanical changes are expected to affect the extent, geometry, and flow properties of fracture networks in CO2 sequestration systems, and thus may significantly affect reservoir and seal performance in CO2 reservoirs. This work was funded in part by the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Investigation of fault modes in permanent magnet synchronous machines for traction applications
NASA Astrophysics Data System (ADS)
Choi, Gilsu
Over the past few decades, electric motor drives have been more widely adopted to power the transportation sector to reduce our dependence on foreign oil and carbon emissions. Permanent magnet synchronous machines (PMSMs) are popular in many applications in the aerospace and automotive industries that require high power density and high efficiency. However, the presence of magnets that cannot be turned off in the event of a fault has always been an issue that hinders adoption of PMSMs in these demanding applications. This work investigates the design and analysis of PMSMs for automotive traction applications with particular emphasis on fault-mode operation caused by faults appearing at the terminals of the machine. New models and analytical techniques are introduced for evaluating the steady-state and dynamic response of PMSM drives to various fault conditions. Attention is focused on modeling the PMSM drive including nonlinear magnetic behavior under several different fault conditions, evaluating the risks of irreversible demagnetization caused by the large fault currents, as well as developing fault mitigation techniques in terms of both the fault currents and demagnetization risks. Of the major classes of machine terminal faults that can occur in PMSMs, short-circuit (SC) faults produce much more dangerous fault currents than open-circuit faults. The impact of different PMSM topologies and parameters on their responses to symmetrical and asymmetrical short-circuit (SSC & ASC) faults has been investigated. A detailed investigation on both the SSC and ASC faults is presented including both closed-form and numerical analysis. The demagnetization characteristics caused by high fault-mode stator currents (i.e., armature reaction) for different types of PMSMs are investigated. A thorough analysis and comparison of the relative demagnetization vulnerability for different types of PMSMs is presented. This analysis includes design guidelines and recommendations for minimizing the demagnetization risks while examining corresponding trade-offs. Two PM machines have been tested to validate the predicted fault currents and braking torque as well as demagnetization risks in PMSM drives. The generality and scalability of key results have also been demonstrated by analyzing several PM machines with a variety of stator, rotor, and winding configurations for various power ratings.
A General theory of Signal Integration for Fault-Tolerant Dynamic Distributed Sensor Networks
1993-10-01
related to a) the architecture and fault- tolerance of the distributed sensor network, b) the proper synchronisation of sensor signals, c) the...Computational complexities of the problem of distributed detection. 5) Issues related to recording of events and synchronization in distributed sensor...Intervals for Synchronization in Real Time Distributed Systems", Submitted to Electronic Encyclopedia. 3. V. G. Hegde and S. S. Iyengar "Efficient
NASA Astrophysics Data System (ADS)
Rama Krishna, K.; Ramachandran, K. I.
2018-02-01
Crack propagation is a major cause of failure in rotating machines. It adversely affects the productivity, safety, and the machining quality. Hence, detecting the crack’s severity accurately is imperative for the predictive maintenance of such machines. Fault diagnosis is an established concept in identifying the faults, for observing the non-linear behaviour of the vibration signals at various operating conditions. In this work, we find the classification efficiencies for both original and the reconstructed vibrational signals. The reconstructed signals are obtained using Variational Mode Decomposition (VMD), by splitting the original signal into three intrinsic mode functional components and framing them accordingly. Feature extraction, feature selection and feature classification are the three phases in obtaining the classification efficiencies. All the statistical features from the original signals and reconstructed signals are found out in feature extraction process individually. A few statistical parameters are selected in feature selection process and are classified using the SVM classifier. The obtained results show the best parameters and appropriate kernel in SVM classifier for detecting the faults in bearings. Hence, we conclude that better results were obtained by VMD and SVM process over normal process using SVM. This is owing to denoising and filtering the raw vibrational signals.
A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings
Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun
2017-01-01
The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation for aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted features’ information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experiment results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components. PMID:28524088
Towards New Metrics for High-Performance Computing Resilience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hukerikar, Saurabh; Ashraf, Rizwan A; Engelmann, Christian
Ensuring the reliability of applications is becoming an increasingly important challenge as high-performance computing (HPC) systems experience an ever-growing number of faults, errors and failures. While the HPC community has made substantial progress in developing various resilience solutions, it continues to rely on platform-based metrics to quantify application resiliency improvements. The resilience of an HPC application is concerned with the reliability of the application outcome as well as the fault handling efficiency. To understand the scope of impact, effective coverage and performance efficiency of existing and emerging resilience solutions, there is a need for new metrics. In this paper, wemore » develop new ways to quantify resilience that consider both the reliability and the performance characteristics of the solutions from the perspective of HPC applications. As HPC systems continue to evolve in terms of scale and complexity, it is expected that applications will experience various types of faults, errors and failures, which will require applications to apply multiple resilience solutions across the system stack. The proposed metrics are intended to be useful for understanding the combined impact of these solutions on an application's ability to produce correct results and to evaluate their overall impact on an application's performance in the presence of various modes of faults.« less
Fault tolerant vector control of induction motor drive
NASA Astrophysics Data System (ADS)
Odnokopylov, G.; Bragin, A.
2014-10-01
For electric composed of technical objects hazardous industries, such as nuclear, military, chemical, etc. an urgent task is to increase their resiliency and survivability. The construction principle of vector control system fault-tolerant asynchronous electric. Displaying recovery efficiency three-phase induction motor drive in emergency mode using two-phase vector control system. The process of formation of a simulation model of the asynchronous electric unbalance in emergency mode. When modeling used coordinate transformation, providing emergency operation electric unbalance work. The results of modeling transient phase loss motor stator. During a power failure phase induction motor cannot save circular rotating field in the air gap of the motor and ensure the restoration of its efficiency at rated torque and speed.
Study on fault diagnosis and load feedback control system of combine harvester
NASA Astrophysics Data System (ADS)
Li, Ying; Wang, Kun
2017-01-01
In order to timely gain working status parameters of operating parts in combine harvester and improve its operating efficiency, fault diagnosis and load feedback control system is designed. In the system, rotation speed sensors were used to gather these signals of forward speed and rotation speeds of intermediate shaft, conveying trough, tangential and longitudinal flow threshing rotors, grain conveying auger. Using C8051 single chip microcomputer (SCM) as processor for main control unit, faults diagnosis and forward speed control were carried through by rotation speed ratio analysis of each channel rotation speed and intermediate shaft rotation speed by use of multi-sensor fused fuzzy control algorithm, and these processing results would be sent to touch screen and display work status of combine harvester. Field trials manifest that fault monitoring and load feedback control system has good man-machine interaction and the fault diagnosis method based on rotation speed ratios has low false alarm rate, and the system can realize automation control of forward speed for combine harvester.
Method of gear fault diagnosis based on EEMD and improved Elman neural network
NASA Astrophysics Data System (ADS)
Zhang, Qi; Zhao, Wei; Xiao, Shungen; Song, Mengmeng
2017-05-01
Aiming at crack and wear and so on of gears Fault information is difficult to diagnose usually due to its weak, a gear fault diagnosis method that is based on EEMD and improved Elman neural network fusion is proposed. A number of IMF components are obtained by decomposing denoised all kinds of fault signals with EEMD, and the pseudo IMF components is eliminated by using the correlation coefficient method to obtain the effective IMF component. The energy characteristic value of each effective component is calculated as the input feature quantity of Elman neural network, and the improved Elman neural network is based on standard network by adding a feedback factor. The fault data of normal gear, broken teeth, cracked gear and attrited gear were collected by field collecting. The results were analyzed by the diagnostic method proposed in this paper. The results show that compared with the standard Elman neural network, Improved Elman neural network has the advantages of high diagnostic efficiency.
Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram
Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi
2016-01-01
Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features. PMID:27649171
Jamil, Majid; Sharma, Sanjeev Kumar; Singh, Rajveer
2015-01-01
This paper focuses on the detection and classification of the faults on electrical power transmission line using artificial neural networks. The three phase currents and voltages of one end are taken as inputs in the proposed scheme. The feed forward neural network along with back propagation algorithm has been employed for detection and classification of the fault for analysis of each of the three phases involved in the process. A detailed analysis with varying number of hidden layers has been performed to validate the choice of the neural network. The simulation results concluded that the present method based on the neural network is efficient in detecting and classifying the faults on transmission lines with satisfactory performances. The different faults are simulated with different parameters to check the versatility of the method. The proposed method can be extended to the Distribution network of the Power System. The various simulations and analysis of signals is done in the MATLAB(®) environment.
Shallow near-fault material self organizes so it is just nonlinear in typical strong shaking
NASA Astrophysics Data System (ADS)
Sleep, N. H.
2011-12-01
Cracking within shallow compliant fault zones self-organizes so that strong dynamic stresses marginally exceed the elastic limit. To the first order, the compliant material experiences strain boundary conditions imposed by underlying stiffer rock. A major strike-slip fault yields simple dimensional relations. The near-field velocity pulse is essentially a Love wave. The dynamic strain is the ratio of the measured particle velocity over the deep S-wave velocity. The shallow dynamic stress is this quantity times the local shear modulus. I obtain the equilibrium shear modulus by starting a sequence of earthquakes with intact stiff rock surrounding the shallow fault zone. The imposed dynamic strain in stiff rock causes Coulomb failure and leaves cracks in it wake. Cracked rock is more compliant than the original intact rock. Each subsequent event causes more cracking until the rock becomes compliant enough that it just reaches its elastic limit. Further events maintain the material at the shear modulus where it just fails. Analogously, shallow damaged regolith forms with its shear modulus and S-wave velocity increasing with depth so it just reaches failure during typical strong shaking. The general conclusion is that shallow rocks in seismically active areas just become nonlinear during typical shaking. This process causes transient changes in S-wave velocity, but not strong nonlinear attenuation of seismic waves. Wave amplitudes significantly larger than typical ones would strongly attenuate and strongly damage the rock. The equilibrium shear modulus and S-wave velocity depend only modestly on the effective coefficient of internal friction.
Depth varying rupture properties during the 2015 Mw 7.8 Gorkha (Nepal) earthquake
NASA Astrophysics Data System (ADS)
Yue, Han; Simons, Mark; Duputel, Zacharie; Jiang, Junle; Fielding, Eric; Liang, Cunren; Owen, Susan; Moore, Angelyn; Riel, Bryan; Ampuero, Jean Paul; Samsonov, Sergey V.
2017-09-01
On April 25th 2015, the Mw 7.8 Gorkha (Nepal) earthquake ruptured a portion of the Main Himalayan Thrust underlying Kathmandu and surrounding regions. We develop kinematic slip models of the Gorkha earthquake using both a regularized multi-time-window (MTW) approach and an unsmoothed Bayesian formulation, constrained by static and high rate GPS observations, synthetic aperture radar (SAR) offset images, interferometric SAR (InSAR), and teleseismic body wave records. These models indicate that Kathmandu is located near the updip limit of fault slip and approximately 20 km south of the centroid of fault slip. Fault slip propagated unilaterally along-strike in an ESE direction for approximately 140 km with a 60 km cross-strike extent. The deeper portions of the fault are characterized by a larger ratio of high frequency (0.03-0.2 Hz) to low frequency slip than the shallower portions. From both the MTW and Bayesian results, we can resolve depth variations in slip characteristics, with higher slip roughness, higher rupture velocity, longer rise time and higher complexity of subfault source time functions in the deeper extents of the rupture. The depth varying nature of rupture characteristics suggests that the up-dip portions are characterized by relatively continuous rupture, while the down-dip portions may be better characterized by a cascaded rupture. The rupture behavior and the tectonic setting indicate that the earthquake may have ruptured both fully seismically locked and a deeper transitional portions of the collision interface, analogous to what has been seen in major subduction zone earthquakes.
Fault tolerant features and experiments of ANTS distributed real-time system
NASA Astrophysics Data System (ADS)
Dominic-Savio, Patrick; Lo, Jien-Chung; Tufts, Donald W.
1995-01-01
The ANTS project at the University of Rhode Island introduces the concept of Active Nodal Task Seeking (ANTS) as a way to efficiently design and implement dependable, high-performance, distributed computing. This paper presents the fault tolerant design features that have been incorporated in the ANTS experimental system implementation. The results of performance evaluations and fault injection experiments are reported. The fault-tolerant version of ANTS categorizes all computing nodes into three groups. They are: the up-and-running green group, the self-diagnosing yellow group and the failed red group. Each available computing node will be placed in the yellow group periodically for a routine diagnosis. In addition, for long-life missions, ANTS uses a monitoring scheme to identify faulty computing nodes. In this monitoring scheme, the communication pattern of each computing node is monitored by two other nodes.
Dynamic test input generation for multiple-fault isolation
NASA Technical Reports Server (NTRS)
Schaefer, Phil
1990-01-01
Recent work is Causal Reasoning has provided practical techniques for multiple fault diagnosis. These techniques provide a hypothesis/measurement diagnosis cycle. Using probabilistic methods, they choose the best measurements to make, then update fault hypotheses in response. For many applications such as computers and spacecraft, few measurement points may be accessible, or values may change quickly as the system under diagnosis operates. In these cases, a hypothesis/measurement cycle is insufficient. A technique is presented for a hypothesis/test-input/measurement diagnosis cycle. In contrast to generating tests a priori for determining device functionality, it dynamically generates tests in response to current knowledge about fault probabilities. It is shown how the mathematics previously used for measurement specification can be applied to the test input generation process. An example from an efficient implementation called Multi-Purpose Causal (MPC) is presented.
A hybrid robust fault tolerant control based on adaptive joint unscented Kalman filter.
Shabbouei Hagh, Yashar; Mohammadi Asl, Reza; Cocquempot, Vincent
2017-01-01
In this paper, a new hybrid robust fault tolerant control scheme is proposed. A robust H ∞ control law is used in non-faulty situation, while a Non-Singular Terminal Sliding Mode (NTSM) controller is activated as soon as an actuator fault is detected. Since a linear robust controller is designed, the system is first linearized through the feedback linearization method. To switch from one controller to the other, a fuzzy based switching system is used. An Adaptive Joint Unscented Kalman Filter (AJUKF) is used for fault detection and diagnosis. The proposed method is based on the simultaneous estimation of the system states and parameters. In order to show the efficiency of the proposed scheme, a simulated 3-DOF robotic manipulator is used. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Earthquake rupture process recreated from a natural fault surface
Parsons, Thomas E.; Minasian, Diane L.
2015-01-01
What exactly happens on the rupture surface as an earthquake nucleates, spreads, and stops? We cannot observe this directly, and models depend on assumptions about physical conditions and geometry at depth. We thus measure a natural fault surface and use its 3D coordinates to construct a replica at 0.1 m resolution to obviate geometry uncertainty. We can recreate stick-slip behavior on the resulting finite element model that depends solely on observed fault geometry. We clamp the fault together and apply steady state tectonic stress until seismic slip initiates and terminates. Our recreated M~1 earthquake initiates at contact points where there are steep surface gradients because infinitesimal lateral displacements reduce clamping stress most efficiently there. Unclamping enables accelerating slip to spread across the surface, but the fault soon jams up because its uneven, anisotropic shape begins to juxtapose new high-relief sticking points. These contacts would ultimately need to be sheared off or strongly deformed before another similar earthquake could occur. Our model shows that an important role is played by fault-wall geometry, though we do not include effects of varying fluid pressure or exotic rheologies on the fault surfaces. We extrapolate our results to large fault systems using observed self-similarity properties, and suggest that larger ruptures might begin and end in a similar way, though the scale of geometrical variation in fault shape that can arrest a rupture necessarily scales with magnitude. In other words, fault segmentation may be a magnitude dependent phenomenon and could vary with each subsequent rupture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helmey, Sherif Samir; Rice, Ambrose Eugene; Hatch, Duane Michael
Unnatural heavy metal-containing amino acid analogs have shown to be very important in the analysis of protein structure, using methods such as X-ray crystallography, mass spectroscopy, and NMR spectroscopy. Synthesis and incorporation of selenium-containing methionine analogs has already been shown in the literature however with some drawbacks due to toxicity to host organisms. Thus synthesis of heavy metal tryptophan analogs should prove to be more effective since the amino acid tryptophan is naturally less abundant in many proteins. For example, bioincorporation of β-seleno[3,2-b]pyrrolyl-L-alanine ([4,5]SeTrp) and β-selenolo[2,3-b]pyrrolyl-L-alanine ([6,7]SeTrp) has been shown in the following proteins without structural or catalytic perturbations: humanmore » annexin V, barstar, and dihydrofolate reductase. The reported synthesis of these Se-containing analogs is currently not efficient for commercial purposes. Thus a more efficient, concise, high-yield synthesis of selenotryptophan, as well as the corresponding, tellurotryptophan, will be necessary for wide spread use of these unnatural amino acid analogs. This research will highlight our progress towards a synthetic route of both [6,7]SeTrp and [6,7]TeTrp, which ultimately will be used to study the effect on the catalytic activity of Lignin Peroxidase (LiP).« less
Digital-analog quantum simulation of generalized Dicke models with superconducting circuits
NASA Astrophysics Data System (ADS)
Lamata, Lucas
We propose a digital-analog quantum simulation of generalized Dicke models with superconducting circuits, including Fermi-Bose condensates, biased and pulsed Dicke models, for all regimes of light-matter coupling. We encode these classes of problems in a set of superconducting qubits coupled with a bosonic mode implemented by a transmission line resonator. Via digital-analog techniques, an efficient quantum simulation can be performed in state-of-the-art circuit quantum electrodynamics platforms, by suitable decomposition into analog qubit-bosonic blocks and collective single-qubit pulses through digital steps. Moreover, just a single global analog block would be needed during the whole protocol in most of the cases, superimposed with fast periodic pulses to rotate and detune the qubits. Therefore, a large number of digital steps may be attained with this approach, providing a reduced digital error. Additionally, the number of gates per digital step does not grow with the number of qubits, rendering the simulation efficient. This strategy paves the way for the scalable digital-analog quantum simulation of many-body dynamics involving bosonic modes and spin degrees of freedom with superconducting circuits. The author wishes to acknowledge discussions with I. Arrazola, A. Mezzacapo, J. S. Pedernales, and E. Solano, and support from Ramon y Cajal Grant RYC-2012-11391, Spanish MINECO/FEDER FIS2015-69983-P, UPV/EHU UFI 11/55 and Project EHUA14/04.
Rupture Dynamics Simulation for Non-Planar fault by a Curved Grid Finite Difference Method
NASA Astrophysics Data System (ADS)
Zhang, Z.; Zhu, G.; Chen, X.
2011-12-01
We first implement the non-staggered finite difference method to solve the dynamic rupture problem, with split-node, for non-planar fault. Split-node method for dynamic simulation has been used widely, because of that it's more precise to represent the fault plane than other methods, for example, thick fault, stress glut and so on. The finite difference method is also a popular numeric method to solve kinematic and dynamic problem in seismology. However, previous works focus most of theirs eyes on the staggered-grid method, because of its simplicity and computational efficiency. However this method has its own disadvantage comparing to non-staggered finite difference method at some fact for example describing the boundary condition, especially the irregular boundary, or non-planar fault. Zhang and Chen (2006) proposed the MacCormack high order non-staggered finite difference method based on curved grids to precisely solve irregular boundary problem. Based upon on this non-staggered grid method, we make success of simulating the spontaneous rupture problem. The fault plane is a kind of boundary condition, which could be irregular of course. So it's convinced that we could simulate rupture process in the case of any kind of bending fault plane. We will prove this method is valid in the case of Cartesian coordinate first. In the case of bending fault, the curvilinear grids will be used.
A Dynamic Finite Element Method for Simulating the Physics of Faults Systems
NASA Astrophysics Data System (ADS)
Saez, E.; Mora, P.; Gross, L.; Weatherley, D.
2004-12-01
We introduce a dynamic Finite Element method using a novel high level scripting language to describe the physical equations, boundary conditions and time integration scheme. The library we use is the parallel Finley library: a finite element kernel library, designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208 processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault), with elastic behavior. The stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time, using the Saint Verlat scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models. We adapt the 2D model for simulating the dynamics of parallel fault systems described to the Finite-Element method. The approach uses a frictional relation along faults that is slip and slip-rate dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. In order to illustrate the new Finite Element model, single and multi-fault simulation examples are presented.
Yoon, Hongkyu; Major, Jonathan; Dewers, Thomas; ...
2017-01-05
Dissolved CO 2 in the subsurface resulting from geological CO 2 storage may react with minerals in fractured rocks, confined aquifers, or faults, resulting in mineral precipitation and dissolution. The overall rate of reaction can be affected by coupled processes including hydrodynamics, transport, and reactions at the (sub) pore-scale. In this work pore-scale modeling of coupled fluid flow, reactive transport, and heterogeneous reactions at the mineral surface is applied to account for permeability alterations caused by precipitation-induced pore-blocking. This paper is motivated by observations of CO 2 seeps from a natural CO 2 sequestration analog, Crystal Geyser, Utah. Observations alongmore » the surface exposure of the Little Grand Wash fault indicate the lateral migration of CO 2 seep sites (i.e., alteration zones) of 10–50 m width with spacing on the order of ~100 m over time. Sandstone permeability in alteration zones is reduced by 3–4 orders of magnitude by carbonate cementation compared to unaltered zones. One granular porous medium and one fracture network systems are used to conceptually represent permeable porous media and locations of conduits controlled by fault-segment intersections and/or topography, respectively. Simulation cases accounted for a range of reaction regimes characterized by the Damköhler (Da) and Peclet (Pe) numbers. Pore-scale simulation results demonstrate that combinations of transport (Pe), geochemical conditions (Da), solution chemistry, and pore and fracture configurations contributed to match key patterns observed in the field of how calcite precipitation alters flow paths by pore plugging. This comparison of simulation results with field observations reveals mechanistic explanations of the lateral migration and enhances our understanding of subsurface processes associated with the CO 2 injection. In addition, permeability and porosity relations are constructed from pore-scale simulations which account for a range of reaction regimes characterized by the Da and Pe numbers. Finally, the functional relationships obtained from pore-scale simulations can be used in a continuum scale model that may account for large-scale phenomena mimicking lateral migration of surface CO 2 seeps.« less
NASA Astrophysics Data System (ADS)
Jeppson, T.; Tobin, H. J.
2013-12-01
In the summer of 2005, Phase 2 of the San Andreas Fault Observatory at Depth (SAFOD) borehole was completed and logged with wireline tools including a dipole sonic tool to measure P- and S-wave velocities. A zone of anomalously low velocity was detected from 3150 to 3414 m measured depth (MD), corresponding with the subsurface location of the San Andreas Fault Zone (SAFZ). This low velocity zone is 5-30% slower than the surrounding host rock. Within this broad low-velocity zone, several slip surfaces were identified as well as two actively deforming shear zones: the southwest deformation zone (SDZ) and the central deformation zone (CDZ), located at 3192 and 3302 m MD, respectively. The SAFZ had also previously been identified as a low velocity zone in seismic velocity inversion models. The anomalously low velocity was hypothesized to result from either (a) brittle deformation in the damage zone of the fault, (b) high fluid pressures with in the fault zone, or (c) lithological variation, or a combination of the above. We measured P- and S-wave velocities at ultrasonic frequencies on saturated 2.5 cm diameter core plug samples taken from SAFOD core obtained in 2007 from within the low velocity zone. The resulting values fall into two distinct groups: foliated fault gouge and non-gouge. Samples of the foliated fault gouge have P-wave velocities between 2.3-3.5 km/s while non-gouge samples lie between 4.1-5.4 km/s over a range of effective pressures from 5-70 MPa. There is a good correlation between the log measurements and laboratory values of P-and S wave velocity at in situ pressure conditions especially for the foliated fault gouge. For non-gouge samples the laboratory values are approximately 0.08-0.73 km/s faster than the log values. This difference places the non-gouge velocities within the Great Valley siltstone velocity range, as measured by logs and ultrasonic measurements performed on outcrop samples. As a high fluid pressure zone was not encountered during SAFOD drilling, we use the ultrasonic velocities of SAFOD core and analogous outcrop samples to determine if the velocity reduction is due to lithologic variations or the presence of deformational fabrics and alteration in the fault zone. Preliminary analysis indicates that while the decrease in velocity across the broad fault zone is heavily influenced by fractures, the extremely low velocities associated with the actively deforming zones are more likely caused by the development of scaly fabric with clay coatings on the fracture surfaces. Analysis of thin sections and well logs are used to support this interpretation.
Automated Generation of Fault Management Artifacts from a Simple System Model
NASA Technical Reports Server (NTRS)
Kennedy, Andrew K.; Day, John C.
2013-01-01
Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA) through querying a representation of the system in a SysML model. This work builds off the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and it was restructured in an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior, and depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
Fault Diagnosis Method for a Mine Hoist in the Internet of Things Environment.
Li, Juanli; Xie, Jiacheng; Yang, Zhaojian; Li, Junjie
2018-06-13
To reduce the difficulty of acquiring and transmitting data in mining hoist fault diagnosis systems and to mitigate the low efficiency and unreasonable reasoning process problems, a fault diagnosis method for mine hoisting equipment based on the Internet of Things (IoT) is proposed in this study. The IoT requires three basic architectural layers: a perception layer, network layer, and application layer. In the perception layer, we designed a collaborative acquisition system based on the ZigBee short distance wireless communication technology for key components of the mine hoisting equipment. Real-time data acquisition was achieved, and a network layer was created by using long-distance wireless General Packet Radio Service (GPRS) transmission. The transmission and reception platforms for remote data transmission were able to transmit data in real time. A fault diagnosis reasoning method is proposed based on the improved Dezert-Smarandache Theory (DSmT) evidence theory, and fault diagnosis reasoning is performed. Based on interactive technology, a humanized and visualized fault diagnosis platform is created in the application layer. The method is then verified. A fault diagnosis test of the mine hoisting mechanism shows that the proposed diagnosis method obtains complete diagnostic data, and the diagnosis results have high accuracy and reliability.
3D Dynamic Rupture Simulations along the Wasatch Fault, Utah, Incorporating Rough-fault Topography
NASA Astrophysics Data System (ADS)
Withers, Kyle; Moschetti, Morgan
2017-04-01
Studies have found that the Wasatch Fault has experienced successive large magnitude (>Mw 7.2) earthquakes, with an average recurrence interval near 350 years. To date, no large magnitude event has been recorded along the fault, with the last rupture along the Salt Lake City segment occurring 1300 years ago. Because of this, as well as the lack of strong ground motion records in basins and from normal-faulting earthquakes worldwide, seismic hazard in the region is not well constrained. Previous numerical simulations have modeled deterministic ground motion in the heavily populated regions of Utah, near Salt Lake City, but were primarily restricted to low frequencies ( 1 Hz). Our goal is to better assess broadband ground motions from the Wasatch Fault Zone. Here, we extend deterministic ground motion prediction to higher frequencies ( 5 Hz) in this region by using physics-based spontaneous dynamic rupture simulations along a normal fault with characteristics derived from geologic observations. We use a summation by parts finite difference code (Waveqlab3D) with rough-fault topography following a self-similar fractal distribution (over length scales from 100 m to the size of the fault) and include off-fault plasticity to simulate ruptures > Mw 6.5. Geometric complexity along fault planes has previously been shown to generate broadband sources with spectral energy matching that of observations. We investigate the impact of varying the hypocenter location, as well as the influence that multiple realizations of rough-fault topography have on the rupture process and resulting ground motion. We utilize Waveqlab3's computational efficiency to model wave-propagation to a significant distance from the fault with media heterogeneity at both long and short spatial wavelengths. These simulations generate a synthetic dataset of ground motions to compare with GMPEs, in terms of both the median and inter and intraevent variability.
NASA Astrophysics Data System (ADS)
Aziz, Nur Liyana Afiqah Abdul; Siah Yap, Keem; Afif Bunyamin, Muhammad
2013-06-01
This paper presents a new approach of the fault detection for improving efficiency of circulating water system (CWS) in a power generation plant using a hybrid Fuzzy Logic System (FLS) and Extreme Learning Machine (ELM) neural network. The FLS is a mathematical tool for calculating the uncertainties where precision and significance are applied in the real world. It is based on natural language which has the ability of "computing the word". The ELM is an extremely fast learning algorithm for neural network that can completed the training cycle in a very short time. By combining the FLS and ELM, new hybrid model, i.e., FLS-ELM is developed. The applicability of this proposed hybrid model is validated in fault detection in CWS which may help to improve overall efficiency of power generation plant, hence, consuming less natural recourses and producing less pollutions.
Stress drop with constant, scale independent seismic efficiency and overshoot
Beeler, N.M.
2001-01-01
To model dissipated and radiated energy during earthquake stress drop, I calculate dynamic fault slip using a single degree of freedom spring-slider block and a laboratory-based static/kinetic fault strength relation with a dynamic stress drop proportional to effective normal stress. The model is scaled to earthquake size assuming a circular rupture; stiffness varies inversely with rupture radius, and rupture duration is proportional to radius. Calculated seismic efficiency, the ratio of radiated to total energy expended during stress drop, is in good agreement with laboratory and field observations. Predicted overshoot, a measure of how much the static stress drop exceeds the dynamic stress drop, is higher than previously published laboratory and seismic observations and fully elasto-dynamic calculations. Seismic efficiency and overshoot are constant, independent of normal stress and scale. Calculated variation of apparent stress with seismic moment resembles the observational constraints of McGarr [1999].
Saltiel, Seth; Bonner, Brian P.; Ajo-Franklin, Jonathan B.
2017-05-05
Measurements of nonlinear modulus and attenuation of fractures provide the opportunity to probe their mechanical state. We have adapted a low-frequency torsional apparatus to explore the seismic signature of fractures under low normal stress, simulating low effective stress environments such as shallow or high pore pressure reservoirs. We report strain-dependent modulus and attenuation for fractured samples of Duperow dolomite (a carbon sequestration target reservoir in Montana), Blue Canyon Dome rhyolite (a geothermal analog reservoir in New Mexico), and Montello granite (a deep basement disposal analog from Wisconsin). We use a simple single effective asperity partial slip model to fit ourmore » measured stress-strain curves, and solve for the friction coefficient, contact radius, and full slip condition. These observations have the potential to develop into new field techniques for measuring differences in frictional properties during reservoir engineering manipulations and estimate the stress conditions where reservoir fractures and faults begin to fully slip.« less
A Model-Based Expert System for Space Power Distribution Diagnostics
NASA Technical Reports Server (NTRS)
Quinn, Todd M.; Schlegelmilch, Richard F.
1994-01-01
When engineers diagnose system failures, they often use models to confirm system operation. This concept has produced a class of advanced expert systems that perform model-based diagnosis. A model-based diagnostic expert system for the Space Station Freedom electrical power distribution test bed is currently being developed at the NASA Lewis Research Center. The objective of this expert system is to autonomously detect and isolate electrical fault conditions. Marple, a software package developed at TRW, provides a model-based environment utilizing constraint suspension. Originally, constraint suspension techniques were developed for digital systems. However, Marple provides the mechanisms for applying this approach to analog systems such as the test bed, as well. The expert system was developed using Marple and Lucid Common Lisp running on a Sun Sparc-2 workstation. The Marple modeling environment has proved to be a useful tool for investigating the various aspects of model-based diagnostics. This report describes work completed to date and lessons learned while employing model-based diagnostics using constraint suspension within an analog system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saltiel, Seth; Bonner, Brian P.; Ajo-Franklin, Jonathan B.
Measurements of nonlinear modulus and attenuation of fractures provide the opportunity to probe their mechanical state. We have adapted a low-frequency torsional apparatus to explore the seismic signature of fractures under low normal stress, simulating low effective stress environments such as shallow or high pore pressure reservoirs. We report strain-dependent modulus and attenuation for fractured samples of Duperow dolomite (a carbon sequestration target reservoir in Montana), Blue Canyon Dome rhyolite (a geothermal analog reservoir in New Mexico), and Montello granite (a deep basement disposal analog from Wisconsin). We use a simple single effective asperity partial slip model to fit ourmore » measured stress-strain curves, and solve for the friction coefficient, contact radius, and full slip condition. These observations have the potential to develop into new field techniques for measuring differences in frictional properties during reservoir engineering manipulations and estimate the stress conditions where reservoir fractures and faults begin to fully slip.« less
Return to Golf Following Left Total Hip Arthroplasty in a Golfer Who is Right Handed
Betlach, Michael; Senkarik, Ryan; Smith, Robyn; Voight, Michael
2007-01-01
Background Research indicates return to golf is a safe activity following total hip arthroplasty (THA). Frequently, individuals have shown both physical faults and swing faults after THA, which can persist even following rehabilitation. Physical limitations and pain often lead to faults in the golfers swing, most notably “hanging back.” These problems may not be improved after surgery unless the proper re-training takes place. Objectives Using pre-surgical as well as post-surgical information, physical faults and swing faults were identified. A corrective training protocol was developed to normalize physical and swing limitations. Case description The patient is a 52-year old male golfer who underwent left total hip arthroplasty secondary to left hip osteoarthritis. Video analysis both pre and post surgery indicated the patient was “hanging back.” This “hanging back” can lead to an inefficient golf swing and potential injury. Following a physical evaluation, a training protocol was designed to correct abnormal physical findings to assist the patient in creating an efficient golf swing. Outcomes The patient was able to swing the golf club with proper weighting of the lead lower extremity, significant improvement of swing efficiency, and return to play at a zero handicap following a corrective training protocol. Discussion A return to full weight bearing, functional strength, range of motion, stability, and balance are critical to regaining the physical skills necessary to properly swing the golf club. Further, mastery of these objective components lend themselves to the trust needed to load the lead leg with confidence during the golf swing. PMID:21509144
Milton, Daniel J.
1964-01-01
The vesicular glass from Köfels, Tyrol, contains grains of quartz that have been partially melted but not dissolved in the matrix glass. This phenomenon has been observed in similar glasses formed by friction along a thrust fault and by meteorite impact, but not in volcanic glasses. The explosion of a small nuclear device buried behind a steep slope produced a geologic structure that is a good small-scale model of that at Köfels. Impact of a large meteorite would have an effect analogous to that of a subsurface nuclear explosion and is the probable cause of the Köfels feature.
High speed, long distance, data transmission multiplexing circuit
Mariotti, Razvan
1991-01-01
A high speed serial data transmission multiplexing circuit, which is operable to accurately transmit data over long distances (up to 3 Km), and to multiplex, select and continuously display real time analog signals in a bandwidth from DC to 100 Khz. The circuit is made fault tolerant by use of a programmable flywheel algorithm, which enables the circuit to tolerate one transmission error before losing synchronization of the transmitted frames of data. A method of encoding and framing captured and transmitted data is used which has a low overhead and prevents some particular transmitted data patterns from locking an included detector/decoder circuit.
Substance geology of the western desert in Egypt and Sudan revealed by Shuttle Imaging Radar (SIR-A)
NASA Technical Reports Server (NTRS)
Breed, C. S.; Schaber, G. G.; Mccauley, J. F.; Grolier, M. J.; Haynes, C. V.; Elachi, C.; Blom, R.; Issawi, B.; Mchugh, W. P.
1983-01-01
A correlation of known archaeologic sites with the mapped locations of the streamcourses is expected and may lead to new interpretations of early human history in the Sahara. The valley networks, faults, and other subjacent bedrock features mapped on the SIR-A images are promising areas for ground water and mineral exploration. Additionally, the analogies between the interplay of wind and running water in the geologic history of the Sahara and of Mars are strengthened by the SIR-A discoveries of relict drainage systems beneath the eolian veneer of Egypt and Sudan.
Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health
NASA Technical Reports Server (NTRS)
2004-01-01
Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault-isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information; including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information; including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. Values of these parameters define engine simulations that generate expected sensor values for targeted fault scenarios. Taken together, this information provides an efficient condensation of the engineering experience and engine flow physics needed for sensor selection. The systematic sensor selection strategy is composed of three primary algorithms. The core of the selection process is a genetic algorithm that iteratively improves a defined quality measure of selected sensor suites. A merit algorithm is employed to compute the quality measure for each test sensor suite presented by the selection process. The quality measure is based on the fidelity of fault detection and the level of fault source discrimination provided by the test sensor suite. An inverse engine model, whose function is to derive hardware performance parameters from sensor data, is an integral part of the merit algorithm. The final component is a statistical evaluation algorithm that characterizes the impact of interference effects, such as control-induced sensor variation and sensor noise, on the probability of fault detection and isolation for optimal and near-optimal sensor suites.
Geology of epithermal silver-gold bulk-mining targets, bodie district, Mono County, California
Hollister, V.F.; Silberman, M.L.
1995-01-01
The Bodie mining district in Mono County, California, is zoned with a core polymetallic-quartz vein system and silver- and gold-bearing quartz-adularia veins north and south of the core. The veins formed as a result of repeated normal faulting during doming shortly after extrusion of felsic flows and tuffs, and the magmatic-hydrothermal event seems to span at least 2 Ma. Epithermal mineralization accompanied repeated movement of the normal faults, resulting in vein development in the planes of the faults. The veins occur in a very large area of argillic alteration. Individual mineralized structures commonly formed new fracture planes during separate fault movements, with resulting broad zones of veinlets growing in the walls of the major vein-faults. The veinlet swarms have been found to constitute a target estimated at 75,000,000 tons, averaging 0.037 ounce gold per ton. The target is amenable to bulkmining exploitation. The epithermal mineralogy is simple, with electrum being the most important precious metal mineral. The host veins are typical low-sulfide banded epithermal quartz and adularia structures that filled voids created by the faulting. Historical data show that beneficiation of the simple vein mineralogy is very efficient. ?? 1995 Oxford University Press.
Object-oriented fault tree models applied to system diagnosis
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
When a diagnosis system is used in a dynamic environment, such as the distributed computer system planned for use on Space Station Freedom, it must execute quickly and its knowledge base must be easily updated. Representing system knowledge as object-oriented augmented fault trees provides both features. The diagnosis system described here is based on the failure cause identification process of the diagnostic system described by Narayanan and Viswanadham. Their system has been enhanced in this implementation by replacing the knowledge base of if-then rules with an object-oriented fault tree representation. This allows the system to perform its task much faster and facilitates dynamic updating of the knowledge base in a changing diagnosis environment. Accessing the information contained in the objects is more efficient than performing a lookup operation on an indexed rule base. Additionally, the object-oriented fault trees can be easily updated to represent current system status. This paper describes the fault tree representation, the diagnosis algorithm extensions, and an example application of this system. Comparisons are made between the object-oriented fault tree knowledge structure solution and one implementation of a rule-based solution. Plans for future work on this system are also discussed.
Spectral Regression Based Fault Feature Extraction for Bearing Accelerometer Sensor Signals
Xia, Zhanguo; Xia, Shixiong; Wan, Ling; Cai, Shiyu
2012-01-01
Bearings are not only the most important element but also a common source of failures in rotary machinery. Bearing fault prognosis technology has been receiving more and more attention recently, in particular because it plays an increasingly important role in avoiding the occurrence of accidents. Therein, fault feature extraction (FFE) of bearing accelerometer sensor signals is essential to highlight representative features of bearing conditions for machinery fault diagnosis and prognosis. This paper proposes a spectral regression (SR)-based approach for fault feature extraction from original features including time, frequency and time-frequency domain features of bearing accelerometer sensor signals. SR is a novel regression framework for efficient regularized subspace learning and feature extraction technology, and it uses the least squares method to obtain the best projection direction, rather than computing the density matrix of features, so it also has the advantage in dimensionality reduction. The effectiveness of the SR-based method is validated experimentally by applying the acquired vibration signals data to bearings. The experimental results indicate that SR can reduce the computation cost and preserve more structure information about different bearing faults and severities, and it is demonstrated that the proposed feature extraction scheme has an advantage over other similar approaches. PMID:23202017
Robust dead reckoning system for mobile robots based on particle filter and raw range scan.
Duan, Zhuohua; Cai, Zixing; Min, Huaqing
2014-09-04
Robust dead reckoning is a complicated problem for wheeled mobile robots (WMRs), where the robots are faulty, such as the sticking of sensors or the slippage of wheels, for the discrete fault models and the continuous states have to be estimated simultaneously to reach a reliable fault diagnosis and accurate dead reckoning. Particle filters are one of the most promising approaches to handle hybrid system estimation problems, and they have also been widely used in many WMRs applications, such as pose tracking, SLAM, video tracking, fault identification, etc. In this paper, the readings of a laser range finder, which may be also interfered with by noises, are used to reach accurate dead reckoning. The main contribution is that a systematic method to implement fault diagnosis and dead reckoning in a particle filter framework concurrently is proposed. Firstly, the perception model of a laser range finder is given, where the raw scan may be faulty. Secondly, the kinematics of the normal model and different fault models for WMRs are given. Thirdly, the particle filter for fault diagnosis and dead reckoning is discussed. At last, experiments and analyses are reported to show the accuracy and efficiency of the presented method.
Robust Dead Reckoning System for Mobile Robots Based on Particle Filter and Raw Range Scan
Duan, Zhuohua; Cai, Zixing; Min, Huaqing
2014-01-01
Robust dead reckoning is a complicated problem for wheeled mobile robots (WMRs), where the robots are faulty, such as the sticking of sensors or the slippage of wheels, for the discrete fault models and the continuous states have to be estimated simultaneously to reach a reliable fault diagnosis and accurate dead reckoning. Particle filters are one of the most promising approaches to handle hybrid system estimation problems, and they have also been widely used in many WMRs applications, such as pose tracking, SLAM, video tracking, fault identification, etc. In this paper, the readings of a laser range finder, which may be also interfered with by noises, are used to reach accurate dead reckoning. The main contribution is that a systematic method to implement fault diagnosis and dead reckoning in a particle filter framework concurrently is proposed. Firstly, the perception model of a laser range finder is given, where the raw scan may be faulty. Secondly, the kinematics of the normal model and different fault models for WMRs are given. Thirdly, the particle filter for fault diagnosis and dead reckoning is discussed. At last, experiments and analyses are reported to show the accuracy and efficiency of the presented method. PMID:25192318
NASA Astrophysics Data System (ADS)
Mercuri, Marco; Scuderi, Marco Maria; Tesei, Telemaco; Carminati, Eugenio; Collettini, Cristiano
2018-04-01
A great number of earthquakes occur within thick carbonate sequences in the shallow crust. At the same time, carbonate fault rocks exhumed from a depth < 6 km (i.e., from seismogenic depths) exhibit the coexistence of structures related to brittle (i.e., cataclasis) and ductile deformation processes (i.e., pressure-solution and granular plasticity). We performed friction experiments on water-saturated simulated carbonate-bearing faults for a wide range of normal stresses (from 5 to 120 MPa) and slip velocities (from 0.3 to 100 μm/s). At high normal stresses (σn > 20 MPa) fault gouges undergo strain-weakening, that is more pronounced at slow slip velocities, and causes a significant reduction of frictional strength, from μ = 0.7 to μ = 0.47. Microstructural analysis show that fault gouge weakening is driven by deformation accommodated by cataclasis and pressure-insensitive deformation processes (pressure solution and granular plasticity) that become more efficient at slow slip velocity. The reduction in frictional strength caused by strain weakening behaviour promoted by the activation of pressure-insensitive deformation might play a significant role in carbonate-bearing faults mechanics.
Use of Terrestrial Laser Scanner for Rigid Airport Pavement Management.
Barbarella, Maurizio; D'Amico, Fabrizio; De Blasiis, Maria Rosaria; Di Benedetto, Alessandro; Fiani, Margherita
2017-12-26
The evaluation of the structural efficiency of airport infrastructures is a complex task. Faulting is one of the most important indicators of rigid pavement performance. The aim of our study is to provide a new method for faulting detection and computation on jointed concrete pavements. Nowadays, the assessment of faulting is performed with the use of laborious and time-consuming measurements that strongly hinder aircraft traffic. We proposed a field procedure for Terrestrial Laser Scanner data acquisition and a computation flow chart in order to identify and quantify the fault size at each joint of apron slabs. The total point cloud has been used to compute the least square plane fitting those points. The best-fit plane for each slab has been computed too. The attitude of each slab plane with respect to both the adjacent ones and the apron reference plane has been determined by the normal vectors to the surfaces. Faulting has been evaluated as the difference in elevation between the slab planes along chosen sections. For a more accurate evaluation of the faulting value, we have then considered a few strips of data covering rectangular areas of different sizes across the joints. The accuracy of the estimated quantities has been computed too.
NASA Astrophysics Data System (ADS)
Alder, S.; Smith, S. A. F.; Scott, J. M.
2016-10-01
The >200 km long Moonlight Fault Zone (MFZ) in southern New Zealand was an Oligocene basin-bounding normal fault zone that reactivated in the Miocene as a high-angle reverse fault (present dip angle 65°-75°). Regional exhumation in the last c. 5 Ma has resulted in deep exposures of the MFZ that present an opportunity to study the structure and deformation processes that were active in a basin-scale reverse fault at basement depths. Syn-rift sediments are preserved only as thin fault-bound slivers. The hanging wall and footwall of the MFZ are mainly greenschist facies quartzofeldspathic schists that have a steeply-dipping (55°-75°) foliation subparallel to the main fault trace. In more fissile lithologies (e.g. greyschists), hanging-wall deformation occurred by the development of foliation-parallel breccia layers up to a few centimetres thick. Greyschists in the footwall deformed mainly by folding and formation of tabular, foliation-parallel breccias up to 1 m wide. Where the hanging-wall contains more competent lithologies (e.g. greenschist facies metabasite) it is laced with networks of pseudotachylyte that formed parallel to the host rock foliation in a damage zone extending up to 500 m from the main fault trace. The fault core contains an up to 20 m thick sequence of breccias, cataclasites and foliated cataclasites preserving evidence for the progressive development of interconnected networks of (partly authigenic) chlorite and muscovite. Deformation in the fault core occurred by cataclasis of quartz and albite, frictional sliding of chlorite and muscovite grains, and dissolution-precipitation. Combined with published friction and permeability data, our observations suggest that: 1) host rock lithology and anisotropy were the primary controls on the structure of the MFZ at basement depths and 2) high-angle reverse slip was facilitated by the low frictional strength of fault core materials. Restriction of pseudotachylyte networks to the hanging-wall of the MFZ further suggests that the wide, phyllosilicate-rich fault core acted as an efficient hydrological barrier, resulting in a relatively hydrous footwall and fault core but a relatively dry hanging-wall.
FDI and Accommodation Using NN Based Techniques
NASA Astrophysics Data System (ADS)
Garcia, Ramon Ferreiro; de Miguel Catoira, Alberto; Sanz, Beatriz Ferreiro
Massive application of dynamic backpropagation neural networks is used on closed loop control FDI (fault detection and isolation) tasks. The process dynamics is mapped by means of a trained backpropagation NN to be applied on residual generation. Process supervision is then applied to discriminate faults on process sensors, and process plant parameters. A rule based expert system is used to implement the decision making task and the corresponding solution in terms of faults accommodation and/or reconfiguration. Results show an efficient and robust FDI system which could be used as the core of an SCADA or alternatively as a complement supervision tool operating in parallel with the SCADA when applied on a heat exchanger.
Park, Sung-Yun; Cho, Jihyun; Lee, Kyuseok; Yoon, Euisik
2015-12-01
We report a pulse width modulation (PWM) buck converter that is able to achieve a power conversion efficiency (PCE) of > 80% in light loads 100 μA) for implantable biomedical systems. In order to achieve a high PCE for the given light loads, the buck converter adaptively reconfigures the size of power PMOS and NMOS transistors and their gate drivers in accordance with load currents, while operating at a fixed frequency of 1 MHz. The buck converter employs the analog-digital hybrid control scheme for coarse/fine adjustment of power transistors. The coarse digital control generates an approximate duty cycle necessary for driving a given load and selects an appropriate width of power transistors to minimize redundant power dissipation. The fine analog control provides the final tuning of the duty cycle to compensate for the error from the coarse digital control. The mode switching between the analog and digital controls is accomplished by a mode arbiter which estimates the average of duty cycles for the given load condition from limit cycle oscillations (LCO) induced by coarse adjustment. The fabricated buck converter achieved a peak efficiency of 86.3% at 1.4 mA and > 80% efficiency for a wide range of load conditions from 45 μA to 4.1 mA, while generating 1 V output from 2.5-3.3 V supply. The converter occupies 0.375 mm(2) in 0.18 μm CMOS processes and requires two external components: 1.2 μF capacitor and 6.8 μH inductor.
Determining preventability of pediatric readmissions using fault tree analysis.
Jonas, Jennifer A; Devon, Erin Pete; Ronan, Jeanine C; Ng, Sonia C; Owusu-McKenzie, Jacqueline Y; Strausbaugh, Janet T; Fieldston, Evan S; Hart, Jessica K
2016-05-01
Previous studies attempting to distinguish preventable from nonpreventable readmissions reported challenges in completing reviews efficiently and consistently. (1) Examine the efficiency and reliability of a Web-based fault tree tool designed to guide physicians through chart reviews to a determination about preventability. (2) Investigate root causes of general pediatrics readmissions and identify the percent that are preventable. General pediatricians from The Children's Hospital of Philadelphia used a Web-based fault tree tool to classify root causes of all general pediatrics 15-day readmissions in 2014. The tool guided reviewers through a logical progression of questions, which resulted in 1 of 18 root causes of readmission, 8 of which were considered potentially preventable. Twenty percent of cases were cross-checked to measure inter-rater reliability. Of the 7252 discharges, 248 were readmitted, for an all-cause general pediatrics 15-day readmission rate of 3.4%. Of those readmissions, 15 (6.0%) were deemed potentially preventable, corresponding to 0.2% of total discharges. The most common cause of potentially preventable readmissions was premature discharge. For the 50 cross-checked cases, both reviews resulted in the same root cause for 44 (86%) of files (κ = 0.79; 95% confidence interval: 0.60-0.98). Completing 1 review using the tool took approximately 20 minutes. The Web-based fault tree tool helped physicians to identify root causes of hospital readmissions and classify them as either preventable or not preventable in an efficient and consistent way. It also confirmed that only a small percentage of general pediatrics 15-day readmissions are potentially preventable. Journal of Hospital Medicine 2016;11:329-335. © 2016 Society of Hospital Medicine. © 2016 Society of Hospital Medicine.
Thermo-mechanical pressurization of experimental faults in cohesive rocks during seismic slip
NASA Astrophysics Data System (ADS)
Violay, M.; Di Toro, G.; Nielsen, S.; Spagnuolo, E.; Burg, J. P.
2015-11-01
Earthquakes occur because fault friction weakens with increasing slip and slip rates. Since the slipping zones of faults are often fluid-saturated, thermo-mechanical pressurization of pore fluids has been invoked as a mechanism responsible for frictional dynamic weakening, but experimental evidence is lacking. We performed friction experiments (normal stress 25 MPa, maximal slip-rate ∼3 ms-1) on cohesive basalt and marble under (1) room-humidity and (2) immersed in liquid water (drained and undrained) conditions. In both rock types and independently of the presence of fluids, up to 80% of frictional weakening was measured in the first 5 cm of slip. Modest pressurization-related weakening appears only at later stages of slip. Thermo-mechanical pressurization weakening of cohesive rocks can be negligible during earthquakes due to the triggering of more efficient fault lubrication mechanisms (flash heating, frictional melting, etc.).
Lashkari, Negin; Poshtan, Javad; Azgomi, Hamid Fekri
2015-11-01
The three-phase shift between line current and phase voltage of induction motors can be used as an efficient fault indicator to detect and locate inter-turn stator short-circuit (ITSC) fault. However, unbalanced supply voltage is one of the contributing factors that inevitably affect stator currents and therefore the three-phase shift. Thus, it is necessary to propose a method that is able to identify whether the unbalance of three currents is caused by ITSC or supply voltage fault. This paper presents a feedforward multilayer-perceptron Neural Network (NN) trained by back propagation, based on monitoring negative sequence voltage and the three-phase shift. The data which are required for training and test NN are generated using simulated model of stator. The experimental results are presented to verify the superior accuracy of the proposed method. Copyright © 2015. Published by Elsevier Ltd.
Cho, Ming-Yuan; Hoang, Thi Thom
2017-01-01
Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.
Quantitative method of medication system interface evaluation.
Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F
2007-01-01
The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of estimated failure rates provided quantitative data for fault analysis. Authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
Emsbo, P.; Groves, D.I.; Hofstra, A.H.; Bierlein, F.P.
2006-01-01
Northern Nevada hosts the only province that contains multiple world-class Carlin-type gold deposits. The first-order control on the uniqueness of this province is its anomalous far back-arc tectonic setting over the rifted North American paleocontinental margin that separates Precambrian from Phanerozoic subcontinental lithospheric mantle. Globally, most other significant gold provinces form in volcanic arcs and accreted terranes proximal to convergent margins. In northern Nevada, periodic reactivation of basement faults along this margin focused and amplified subsequent geological events. Early basement faults localized Devonian synsedimentary extension and normal faulting. These controlled the geometry of the Devonian sedimentary basin architecture and focused the discharge of basinal brines that deposited syngenetic gold along the basin margins. Inversion of these basins and faults during subsequent contraction produced the complex elongate structural culminations that characterize the anomalous mineral deposit "trends." Subsequently, these features localized repeated episodes of shallow magmatic and hydrothermal activity that also deposited some gold. During a pulse of Eocene extension, these faults focused advection of Carlin-type fluids, which had the opportunity to leach gold from gold-enriched sequences and deposit it in reactive miogeoclinal host rocks below the hydrologic seal at the Roberts Mountain thrust contact. Hence, the vast endowment of the Carlin province resulted from the conjunction of spatially superposed events localized by long-lived basement structures in a highly anomalous tectonic setting, rather than by the sole operation of special magmatic or fluid-related processes. An important indicator of the longevity of this basement control is the superposition of different gold deposit types (e.g., Sedex, porphyry, Carlin-type, epithermal, and hot spring deposits) that formed repeatedly between the Devonian and Miocene time along the trends. Interestingly, the large Cretaceous Alaska-Yukon intrusion-related gold deposits (e.g., Fort Knox) are associated with the northern extension of the same lithospheric margin in the Selwyn basin, which experienced an analogous series of geologic events. ?? Springer-Verlag 2006.
Instability model for recurring large and great earthquakes in southern California
Stuart, W.D.
1985-01-01
The locked section of the San Andreas fault in southern California has experienced a number of large and great earthquakes in the past, and thus is expected to have more in the future. To estimate the location, time, and slip of the next few earthquakes, an earthquake instability model is formulated. The model is similar to one recently developed for moderate earthquakes on the San Andreas fault near Parkfield, California. In both models, unstable faulting (the earthquake analog) is caused by failure of all or part of a patch of brittle, strain-softening fault zone. In the present model the patch extends downward from the ground surface to about 12 km depth, and extends 500 km along strike from Parkfield to the Salton Sea. The variation of patch strength along strike is adjusted by trial until the computed sequence of instabilities matches the sequence of large and great earthquakes since a.d. 1080 reported by Sieh and others. The last earthquake was the M=8.3 Ft. Tejon event in 1857. The resulting strength variation has five contiguous sections of alternately low and high strength. From north to south, the approximate locations of the sections are: (1) Parkfield to Bitterwater Valley, (2) Bitterwater Valley to Lake Hughes, (3) Lake Hughes to San Bernardino, (4) San Bernardino to Palm Springs, and (5) Palm Springs to the Salton Sea. Sections 1, 3, and 5 have strengths between 53 and 88 bars; sections 2 and 4 have strengths between 164 and 193 bars. Patch section ends and unstable rupture ends usually coincide, although one or more adjacent patch sections may fail unstably at once. The model predicts that the next sections of the fault to slip unstably will be 1, 3, and 5; the order and dates depend on the assumed length of an earthquake rupture in about 1700. ?? 1985 Birkha??user Verlag.
Kinematics of fault-related folding derived from a sandbox experiment
NASA Astrophysics Data System (ADS)
Bernard, Sylvain; Avouac, Jean-Philippe; Dominguez, StéPhane; Simoes, Martine
2007-03-01
We analyze the kinematics of fault tip folding at the front of a fold-and-thrust wedge using a sandbox experiment. The analog model consists of sand layers intercalated with low-friction glass bead layers, deposited in a glass-sided experimental device and with a total thickness h = 4.8 cm. A computerized mobile backstop induces progressive horizontal shortening of the sand layers and therefore thrust fault propagation. Active deformation at the tip of the forward propagating basal décollement is monitored along the cross section with a high-resolution CCD camera, and the displacement field between pairs of images is measured from the optical flow technique. In the early stage, when cumulative shortening is less than about h/10, slip along the décollement tapers gradually to zero and the displacement gradient is absorbed by distributed deformation of the overlying medium. In this stage of detachment tip folding, horizontal displacements decrease linearly with distance toward the foreland. Vertical displacements reflect a nearly symmetrical mode of folding, with displacements varying linearly between relatively well defined axial surfaces. When the cumulative slip on the décollement exceeds about h/10, deformation tends to localize on a few discrete shear bands at the front of the system, until shortening exceeds h/8 and deformation gets fully localized on a single emergent frontal ramp. The fault geometry subsequently evolves to a sigmoid shape and the hanging wall deforms by simple shear as it overthrusts the flat ramp system. As long as strain localization is not fully established, the sand layers experience a combination of thickening and horizontal shortening, which induces gradual limb rotation. The observed kinematics can be reduced to simple analytical expressions that can be used to restore fault tip folds, relate finite deformation to incremental folding, and derive shortening rates from deformed geomorphic markers or growth strata.
A hybrid fault diagnosis approach based on mixed-domain state features for rotating machinery.
Xue, Xiaoming; Zhou, Jianzhong
2017-01-01
To make further improvement in the diagnosis accuracy and efficiency, a mixed-domain state features data based hybrid fault diagnosis approach, which systematically blends both the statistical analysis approach and the artificial intelligence technology, is proposed in this work for rolling element bearings. For simplifying the fault diagnosis problems, the execution of the proposed method is divided into three steps, i.e., fault preliminary detection, fault type recognition and fault degree identification. In the first step, a preliminary judgment about the health status of the equipment can be evaluated by the statistical analysis method based on the permutation entropy theory. If fault exists, the following two processes based on the artificial intelligence approach are performed to further recognize the fault type and then identify the fault degree. For the two subsequent steps, mixed-domain state features containing time-domain, frequency-domain and multi-scale features are extracted to represent the fault peculiarity under different working conditions. As a powerful time-frequency analysis method, the fast EEMD method was employed to obtain multi-scale features. Furthermore, due to the information redundancy and the submergence of original feature space, a novel manifold learning method (modified LGPCA) is introduced to realize the low-dimensional representations for high-dimensional feature space. Finally, two cases with 12 working conditions respectively have been employed to evaluate the performance of the proposed method, where vibration signals were measured from an experimental bench of rolling element bearing. The analysis results showed the effectiveness and the superiority of the proposed method of which the diagnosis thought is more suitable for practical application. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Klein, E.; Masson, F.; Duputel, Z.; Yavasoglu, H.; Agram, P. S.
2016-12-01
Over the last two decades, the densification of GPS networks and the development of new radar satellites offered an unprecedented opportunity to study crustal deformation due to faulting. Yet, submarine strike slip fault segments remain a major issue, especially when the landscape appears unfavorable to the use of SAR measurements. It is the case of the North Anatolian fault segments located in the Main Marmara Sea, that remain unbroken ever since the Mw7.4 earthquake of Izmit in 1999, which ended a eastward migrating seismic sequence of Mw > 7 earthquakes. Located directly offshore Istanbul, evaluation of seismic hazard appears capital. But a strong controversy remains over whether these segments are accumulating strain and are likely to experience a major earthquake, or are creeping, resulting both from the simplicity of current geodetic models and the scarcity of geodetic data. We indeed show that 2D infinite fault models cannot account for the complexity of the Marmara fault segments. But current geodetic data in the western region of Istanbul are also insufficient to invert for the coupling using a 3D geometry of the fault. Therefore, we implement a global optimization procedure aiming at identifying the most favorable distribution of GPS stations to explore the strain accumulation. We present here the results of this procedure that allows to determine both the optimal number and location of the new stations. We show that a denser terrestrial survey network can indeed locally improve the resolution on the shallower part of the fault, even more efficiently with permanent stations. But data closer from the fault, only possible by submarine measurements, remain necessary to properly constrain the fault behavior and its potential along strike coupling variations.
Effect of compressibility on the hypervelocity penetration
NASA Astrophysics Data System (ADS)
Song, W. J.; Chen, X. W.; Chen, P.
2018-02-01
We further consider the effect of rod strength by employing the compressible penetration model to study the effect of compressibility on hypervelocity penetration. Meanwhile, we define different instances of penetration efficiency in various modified models and compare these penetration efficiencies to identify the effects of different factors in the compressible model. To systematically discuss the effect of compressibility in different metallic rod-target combinations, we construct three cases, i.e., the penetrations by the more compressible rod into the less compressible target, rod into the analogously compressible target, and the less compressible rod into the more compressible target. The effects of volumetric strain, internal energy, and strength on the penetration efficiency are analyzed simultaneously. It indicates that the compressibility of the rod and target increases the pressure at the rod/target interface. The more compressible rod/target has larger volumetric strain and higher internal energy. Both the larger volumetric strain and higher strength enhance the penetration or anti-penetration ability. On the other hand, the higher internal energy weakens the penetration or anti-penetration ability. The two trends conflict, but the volumetric strain dominates in the variation of the penetration efficiency, which would not approach the hydrodynamic limit if the rod and target are not analogously compressible. However, if the compressibility of the rod and target is analogous, it has little effect on the penetration efficiency.
Near band gap luminescence in hybrid organic-inorganic structures based on sputtered GaN nanorods.
Forsberg, Mathias; Serban, Elena Alexandra; Hsiao, Ching-Lien; Junaid, Muhammad; Birch, Jens; Pozina, Galia
2017-04-26
Novel hybrid organic-inorganic nanostructures fabricated to utilize non-radiative resonant energy transfer mechanism are considered to be extremely attractive for a variety of light emitters for down converting of ultaviolet light and for photovoltaic applications since they can be much more efficient compared to devices grown with common design. Organic-inorganic hybrid structures based on green polyfluorene (F8BT) and GaN (0001) nanorods grown by magnetron sputtering on Si (111) substrates are studied. In such nanorods, stacking faults can form periodic polymorphic quantum wells characterized by bright luminescence. In difference to GaN exciton emission, the recombination rate for the stacking fault related emission increases in the presence of polyfluorene film, which can be understood in terms of Förster interaction mechanism. From comparison of dynamic properties of the stacking fault related luminescence in the hybrid structures and in the bare GaN nanorods, the pumping efficiency of non-radiative resonant energy transfer in hybrids was estimated to be as high as 35% at low temperatures.
Verifiable fault tolerance in measurement-based quantum computation
NASA Astrophysics Data System (ADS)
Fujii, Keisuke; Hayashi, Masahito
2017-09-01
Quantum systems, in general, cannot be simulated efficiently by a classical computer, and hence are useful for solving certain mathematical problems and simulating quantum many-body systems. This also implies, unfortunately, that verification of the output of the quantum systems is not so trivial, since predicting the output is exponentially hard. As another problem, the quantum system is very delicate for noise and thus needs an error correction. Here, we propose a framework for verification of the output of fault-tolerant quantum computation in a measurement-based model. In contrast to existing analyses on fault tolerance, we do not assume any noise model on the resource state, but an arbitrary resource state is tested by using only single-qubit measurements to verify whether or not the output of measurement-based quantum computation on it is correct. Verifiability is equipped by a constant time repetition of the original measurement-based quantum computation in appropriate measurement bases. Since full characterization of quantum noise is exponentially hard for large-scale quantum computing systems, our framework provides an efficient way to practically verify the experimental quantum error correction.
An Efficient Model-based Diagnosis Engine for Hybrid Systems Using Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Narasimhan, Sriram; Roychoudhury, Indranil; Daigle, Matthew; Pulido, Belarmino
2013-01-01
Complex hybrid systems are present in a large range of engineering applications, like mechanical systems, electrical circuits, or embedded computation systems. The behavior of these systems is made up of continuous and discrete event dynamics that increase the difficulties for accurate and timely online fault diagnosis. The Hybrid Diagnosis Engine (HyDE) offers flexibility to the diagnosis application designer to choose the modeling paradigm and the reasoning algorithms. The HyDE architecture supports the use of multiple modeling paradigms at the component and system level. However, HyDE faces some problems regarding performance in terms of complexity and time. Our focus in this paper is on developing efficient model-based methodologies for online fault diagnosis in complex hybrid systems. To do this, we propose a diagnosis framework where structural model decomposition is integrated within the HyDE diagnosis framework to reduce the computational complexity associated with the fault diagnosis of hybrid systems. As a case study, we apply our approach to a diagnostic testbed, the Advanced Diagnostics and Prognostics Testbed (ADAPT), using real data.
Melt fracturing and healing: A mechanism for degassing and origin of silicic obsidian
Cabrera, A.; Weinberg, R.F.; Wright, H.M.N.; Zlotnik, S.; Cas, Ray A.F.
2011-01-01
We present water content transects across a healed fault in pyroclastic obsidian from Lami pumice cone, Lipari, Italy, using synchrotron Fourier transform infrared spectroscopy. Results indicate that rhyolite melt degassed through the fault surface. Transects define a trough of low water content coincident with the fault trace, surrounded on either side by high-water-content plateaus. Plateaus indicate that obsidian on either side of the fault equilibrated at different pressure-temperature (P-T) conditions before being juxtaposed. The curves into the troughs indicate disequilibrium and water loss through diffusion. If we assume constant T, melt equilibrated at pressures differing by 0.74 MPa before juxtaposition, and the fault acted as a low-P permeable path for H2O that diffused from the glass within time scales of 10 and 30 min. Assuming constant P instead, melt on either side could have equilibrated at temperatures differing by as much as 100 ??C, before being brought together. Water content on the fault trace is particularly sensitive to post-healing diffusion. Its preserved value indicates either higher temperature or lower pressure than the surroundings, indicative of shear heating and dynamic decompression. Our results reveal that water contents of obsidian on either side of the faults equilibrated under different P-T conditions and were out of equilibrium with each other when they were juxtaposed due to faulting immediately before the system was quenched. Degassing due to faulting could be linked to cyclical seismic activity and general degassing during silicic volcanic activity, and could be an efficient mechanism of producing low-water-content obsidian. ?? 2011 Geological Society of America.
Soft-Fault Detection Technologies Developed for Electrical Power Systems
NASA Technical Reports Server (NTRS)
Button, Robert M.
2004-01-01
The NASA Glenn Research Center, partner universities, and defense contractors are working to develop intelligent power management and distribution (PMAD) technologies for future spacecraft and launch vehicles. The goals are to provide higher performance (efficiency, transient response, and stability), higher fault tolerance, and higher reliability through the application of digital control and communication technologies. It is also expected that these technologies will eventually reduce the design, development, manufacturing, and integration costs for large, electrical power systems for space vehicles. The main focus of this research has been to incorporate digital control, communications, and intelligent algorithms into power electronic devices such as direct-current to direct-current (dc-dc) converters and protective switchgear. These technologies, in turn, will enable revolutionary changes in the way electrical power systems are designed, developed, configured, and integrated in aerospace vehicles and satellites. Initial successes in integrating modern, digital controllers have proven that transient response performance can be improved using advanced nonlinear control algorithms. One technology being developed includes the detection of "soft faults," those not typically covered by current systems in use today. Soft faults include arcing faults, corona discharge faults, and undetected leakage currents. Using digital control and advanced signal analysis algorithms, we have shown that it is possible to reliably detect arcing faults in high-voltage dc power distribution systems (see the preceding photograph). Another research effort has shown that low-level leakage faults and cable degradation can be detected by analyzing power system parameters over time. This additional fault detection capability will result in higher reliability for long-lived power systems such as reusable launch vehicles and space exploration missions.
Protection of Renewable-dominated Microgrids: Challenges and Potential Solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elkhatib, Mohamed; Ellis, Abraham; Milan Biswal
keywords : Microgrid Protection, Impedance Relay, Signal Processing-based Fault Detec- tion, Networked Microgrids, Communication-Assisted Protection In this report we address the challenge of designing efficient protection system for inverter- dominated microgrids. These microgrids are characterised with limited fault current capacity as a result of current-limiting protection functions of inverters. Typically, inverters limit their fault contribution in sub-cycle time frame to as low as 1.1 per unit. As a result, overcurrent protection could fail completely to detect faults in inverter-dominated microgrids. As part of this project a detailed literature survey of existing and proposed microgrid protection schemes were conducted. The surveymore » concluded that there is a gap in the available microgrid protection methods. The only credible protection solution available in literature for low- fault inverter-dominated microgrids is the differential protection scheme which represents a robust transmission-grade protection solution but at a very high cost. Two non-overcurrent protection schemes were investigated as part of this project; impedance-based protection and transient-based protection. Impedance-based protection depends on monitoring impedance trajectories at feeder relays to detect faults. Two communication-based impedance-based protection schemes were developed. the first scheme utilizes directional elements and pilot signals to locate the fault. The second scheme depends on a Central Protection Unit that communicates with all feeder relays to locate the fault based on directional flags received from feeder relays. The later approach could potentially be adapted to protect networked microgrids and dynamic topology microgrids. Transient-based protection relies on analyzing high frequency transients to detect and locate faults. This approach is very promising but its implementation in the filed faces several challenges. For example, high frequency transients due to faults can be confused with transients due to other events such as capacitor switching. Additionally, while detecting faults by analyzing transients could be doable, locating faults based on analyzing transients is still an open question.« less
NASA Astrophysics Data System (ADS)
Xu, Shiqing; Fukuyama, Eiichi; Yamashita, Futoshi; Mizoguchi, Kazuo; Takizawa, Shigeru; Kawakata, Hironori
2018-05-01
We conduct meter-scale rock friction experiments to study strain rate effect on fault slip and rupture evolution. Two rock samples made of Indian metagabbro, with a nominal contact dimension of 1.5 m long and 0.1 m wide, are juxtaposed and loaded in a direct shear configuration to simulate the fault motion. A series of experimental tests, under constant loading rates ranging from 0.01 mm/s to 1 mm/s and under a fixed normal stress of 6.7 MPa, are performed to simulate conditions with changing strain rates. Load cells and displacement transducers are utilized to examine the macroscopic fault behavior, while high-density arrays of strain gauges close to the fault are used to investigate the local fault behavior. The observations show that the macroscopic peak strength, strength drop, and the rate of strength drop can increase with increasing loading rate. At the local scale, the observations reveal that slow loading rates favor generation of characteristic ruptures that always nucleate in the form of slow slip at about the same location. In contrast, fast loading rates can promote very abrupt rupture nucleation and along-strike scatter of hypocenter locations. At a given propagation distance, rupture speed tends to increase with increasing loading rate. We propose that a strain-rate-dependent fault fragmentation process can enhance the efficiency of fault healing during the stick period, which together with healing time controls the recovery of fault strength. In addition, a strain-rate-dependent weakening mechanism can be activated during the slip period, which together with strain energy selects the modes of fault slip and rupture propagation. The results help to understand the spectrum of fault slip and rock deformation modes in nature, and emphasize the role of heterogeneity in tuning fault behavior under different strain rates.
NASA Astrophysics Data System (ADS)
Wollherr, Stephanie; Gabriel, Alice-Agnes; Uphoff, Carsten
2018-05-01
The dynamics and potential size of earthquakes depend crucially on rupture transfers between adjacent fault segments. To accurately describe earthquake source dynamics, numerical models can account for realistic fault geometries and rheologies such as nonlinear inelastic processes off the slip interface. We present implementation, verification, and application of off-fault Drucker-Prager plasticity in the open source software SeisSol (www.seissol.org). SeisSol is based on an arbitrary high-order derivative modal Discontinuous Galerkin (ADER-DG) method using unstructured, tetrahedral meshes specifically suited for complex geometries. Two implementation approaches are detailed, modelling plastic failure either employing sub-elemental quadrature points or switching to nodal basis coefficients. At fine fault discretizations the nodal basis approach is up to 6 times more efficient in terms of computational costs while yielding comparable accuracy. Both methods are verified in community benchmark problems and by three dimensional numerical h- and p-refinement studies with heterogeneous initial stresses. We observe no spectral convergence for on-fault quantities with respect to a given reference solution, but rather discuss a limitation to low-order convergence for heterogeneous 3D dynamic rupture problems. For simulations including plasticity, a high fault resolution may be less crucial than commonly assumed, due to the regularization of peak slip rate and an increase of the minimum cohesive zone width. In large-scale dynamic rupture simulations based on the 1992 Landers earthquake, we observe high rupture complexity including reverse slip, direct branching, and dynamic triggering. The spatio-temporal distribution of rupture transfers are altered distinctively by plastic energy absorption, correlated with locations of geometrical fault complexity. Computational cost increases by 7% when accounting for off-fault plasticity in the demonstrating application. Our results imply that the combination of fully 3D dynamic modelling, complex fault geometries, and off-fault plastic yielding is important to realistically capture dynamic rupture transfers in natural fault systems.
NASA Astrophysics Data System (ADS)
Hou, Z.; Nguyen, B. N.; Bacon, D. H.; White, M. D.; Murray, C. J.
2016-12-01
A multiphase flow and reactive transport simulator named STOMP-CO2-R has been developed and coupled to the ABAQUS® finite element package for geomechanical analysis enabling comprehensive thermo-hydro-geochemical-mechanical (THMC) analyses. The coupled THMC simulator has been applied to analyze faulted CO2 reservoir responses (e.g., stress and strain distributions, pressure buildup, slip tendency factor, pressure margin to fracture) with various complexities in fault and reservoir structures and mineralogy. Depending on the geological and reaction network settings, long-term injection of CO2 can have a significant effect on the elastic stiffness and permeability of formation rocks. In parallel, an uncertainty quantification framework (UQ-CO2), which consists of entropy-based prior uncertainty representation, efficient sampling, geostatistical reservoir modeling, and effective response surface analysis, has been developed for quantifying risks and uncertainties associated with CO2 sequestration. It has been demonstrated for evaluating risks in CO2 leakage through natural pathways and wellbores, and for developing predictive reduced order models. Recently, a parallel STOMP-CO2-R has been developed and the updated STOMP/ABAQUS model has been proven to have a great scalability, which makes it possible to integrate the model with the UQ framework to effectively and efficiently explore multidimensional parameter space (e.g., permeability, elastic modulus, crack orientation, fault friction coefficient) for a more systematic analysis of induced seismicity risks.
Small-scale seismogenic soft sediment deformation (Hirlatzhöhle, Upper Austria)
NASA Astrophysics Data System (ADS)
Salomon, Martina Lan; Grasemann, Bernhard; Plan, Lukas; Gier, Susanne
2014-05-01
The Hirlatz Cave lies in the Dachstein Massif about 2 km SW of Hallstatt, in the Upper Austrian Salzkammergut. With a length of 101 km, this karst cave, located in the Dachstein nappe (Northern Calcareous Alps), is the second largest known cave system in Austria. Within the cave, in the so-called Lehmklamm, located 2.8 km southeast of the cave entrance, laminated (mm-scale) Quaternary clay-sized sediments with interbedded fine-grained sandy layers are preserved. In these layers, numerous soft sediment deformation structures are preserved in many layers. The unconsolidated sediments show rhythmic layering of brighter, carbonate and quartz rich, and darker, more clay mineral rich horizontal varve-like layers, that are assumed to be fluvio-lacustrine deposits. The present study focuses on a very detailed documentation of an approximately 6.8 x 3 m vertical outcrop that was cut by a small brook. Centimeter to millimeter sized water escape structures (intruded cusps and flame structures), folds (detachment folds, fault bend folds) and faults (normal faults, fault propagation folds, bookshelf faults) are described. Because of the geometric analogy to seismogenic structures which have been described at two orders of magnitude larger scales from areas close to the Dead Sea Fault, we suggest that the formation of the investigated soft-sediment structures was also triggered by seismic events. The structures were mainly formed by three different mechanism: (i) North directed gravitational gliding near the sediment surface; (ii) Liquefaction resulting in a density discontinuity and decreasing in shear strength within in the stratified layers; (iii) Extensional faulting that cut through the stratified layers. Observations of coarsening upwards into sandy layers on the top of the outcrop and current ripple indicate a north-directed flow under phreatic conditions, which is opposite to the present flow direction of the vadose water in the cave. The fact that deformation and erosion mostly occur in the uppermost meter of the outcrop wall suggests a higher seismic activity and at least periodically higher flow rates during sedimentation of the younger deposits. Since several extremely deformed layers occur between undeformed ones, we suggest that deformation of the layers occurred only in the uppermost highly water saturated sediments and that several seismic events lead to the formation of the observed structures. A possible source responsible for the seismic event is the Salzach-Ennstal-Mariazeller-Puchberger (SEMP) strike-slip fault, which accommodates the active extrusion of the Eastern Alps towards the Pannonian Basin.
Finding Faults: Tohoku and other Active Megathrusts/Megasplays
NASA Astrophysics Data System (ADS)
Moore, J. C.; Conin, M.; Cook, B. J.; Kirkpatrick, J. D.; Remitti, F.; Chester, F.; Nakamura, Y.; Lin, W.; Saito, S.; Scientific Team, E.
2012-12-01
Current subduction-fault drilling procedure is to drill a logging hole, identify target faults, then core and instrument them. Seismic data may constrain faults but the additional resolution of borehole logs is necessary for efficient coring and instrumentation under difficult conditions and tight schedules. Thus, refining the methodology of identifying faults in logging data has become important, and thus comparison of log signatures of faults in different locations is worthwhile. At the C0019 (JFAST) drill site, the Tohoku megathrust was principally identified as a decollement where steep cylindrically-folded bedding abruptly flattens below the basal detachment. A similar structural contrast occurs across a megasplay fault in the NanTroSEIZE transect (Site C0004). At the Tohoku decollement, a high gamma-ray value from a pelagic clay layer, predicted as a likely decollement sediment type, strengthens the megathrust interpretation. The original identification of the pelagic clay as a decollement candidate was based on results of previous coring of an oceanic reference site. Negative density anomalies, often seen as low resistivity zones, identified a subsidiary fault in the deformed prism overlying the Tohoku megathrust. Elsewhere, at Barbados, Nankai (Moroto), and Costa Rica, negative density anomalies are associated with the decollement and other faults in hanging walls. Log-based density anomalies in fault zones provide a basis for recognizing in-situ fault zone dilation. At the Tohoku Site C0019, breakouts are present above but not below the megathrust. Changes in breakout orientation and width (stress magnitude) occur across megasplay faults at Sites C0004 and C0010 in the NantroSEIZE transect. Annular pressure anomalies are not apparent at the Tohoku megathrust, but are variably associated with faults and fracture zones drilled along the NanTroSEIZE transect. Overall, images of changes in structural features, negative density anomalies, and changes in breakout occurrence and orientation provide the most common log criteria for recognizing major thrust zones in ocean drilling holes at convergent margins. In the case of JFAST, identification of faults by logging was confirmed during subsequent coring activities, and logging data was critical for successful placement of the observatory down hole.
NASA Astrophysics Data System (ADS)
Dalguer, L. A.; Day, S. M.
2006-12-01
Accuracy in finite difference (FD) solutions to spontaneous rupture problems is controlled principally by the scheme used to represent the fault discontinuity, and not by the grid geometry used to represent the continuum. We have numerically tested three fault representation methods, the Thick Fault (TF) proposed by Madariaga et al (1998), the Stress Glut (SG) described by Andrews (1999), and the Staggered-Grid Split-Node (SGSN) methods proposed by Dalguer and Day (2006), each implemented in a the fourth-order velocity-stress staggered-grid (VSSG) FD scheme. The TF and the SG methods approximate the discontinuity through inelastic increments to stress components ("inelastic-zone" schemes) at a set of stress grid points taken to lie on the fault plane. With this type of scheme, the fault surface is indistinguishable from an inelastic zone with a thickness given by a spatial step dx for the SG, and 2dx for the TF model. The SGSN method uses the traction-at-split-node (TSN) approach adapted to the VSSG FD. This method represents the fault discontinuity by explicitly incorporating discontinuity terms at velocity nodes in the grid, with interactions between the "split nodes" occurring exclusively through the tractions (frictional resistance) acting between them. These tractions in turn are controlled by the jump conditions and a friction law. Our 3D tests problem solutions show that the inelastic-zone TF and SG methods show much poorer performance than does the SGSN formulation. The SG inelastic-zone method achieved solutions that are qualitatively meaningful and quantitatively reliable to within a few percent. The TF inelastic-zone method did not achieve qualitatively agreement with the reference solutions to the 3D test problem, and proved to be sufficiently computationally inefficient that it was not feasible to explore convergence quantitatively. The SGSN method gives very accurate solutions, and is also very efficient. Reliable solution of the rupture time is reached with a median resolution of the cohesive zone of only ~2 grid points, and efficiency is competitive with the Boundary Integral (BI) method. The results presented here demonstrate that appropriate fault representation in a numerical scheme is crucial to reduce uncertainties in numerical simulations of earthquake source dynamics and ground motion, and therefore important to improving our understanding of earthquake physics in general.
NASA Technical Reports Server (NTRS)
Bechtold, I. C.; Liggett, M. A.; Childs, J. F.
1973-01-01
Research based on ERTS-1 MSS imagery and field work in the southern Basin-Range Province of California, Nevada and Arizona has shown regional tectonic control of volcanism, plutonism, mineralization and faulting. This paper covers an area centered on the Colorado River between 34 15' N and 36 45' N. During the mid-Tertiary, the area was the site of plutonism and genetically related volcanism fed by fissure systems now exposed as dike swarms. Dikes, elongate plutons, and coeval normal faults trend generally northward and are believed to have resulted from east-west crustal extension. In the extensional province, gold silver mineralization is closely related to Tertiary igneous activity. Similarities in ore, structural setting, and rock types define a metallogenic district of high potential for exploration. The ERTS imagery also provides a basis for regional inventory of small faults which cut alluvium. This capability for efficient regional surveys of Recent faulting should be considered in land use planning, geologic hazards study, civil engineering and hydrology.
State Tracking and Fault Diagnosis for Dynamic Systems Using Labeled Uncertainty Graph.
Zhou, Gan; Feng, Wenquan; Zhao, Qi; Zhao, Hongbo
2015-11-05
Cyber-physical systems such as autonomous spacecraft, power plants and automotive systems become more vulnerable to unanticipated failures as their complexity increases. Accurate tracking of system dynamics and fault diagnosis are essential. This paper presents an efficient state estimation method for dynamic systems modeled as concurrent probabilistic automata. First, the Labeled Uncertainty Graph (LUG) method in the planning domain is introduced to describe the state tracking and fault diagnosis processes. Because the system model is probabilistic, the Monte Carlo technique is employed to sample the probability distribution of belief states. In addition, to address the sample impoverishment problem, an innovative look-ahead technique is proposed to recursively generate most likely belief states without exhaustively checking all possible successor modes. The overall algorithms incorporate two major steps: a roll-forward process that estimates system state and identifies faults, and a roll-backward process that analyzes possible system trajectories once the faults have been detected. We demonstrate the effectiveness of this approach by applying it to a real world domain: the power supply control unit of a spacecraft.
Zhang, Wei; Peng, Gaoliang; Li, Chuanhao; Chen, Yuanhang; Zhang, Zhujun
2017-01-01
Intelligent fault diagnosis techniques have replaced time-consuming and unreliable human analysis, increasing the efficiency of fault diagnosis. Deep learning models can improve the accuracy of intelligent fault diagnosis with the help of their multilayer nonlinear mapping ability. This paper proposes a novel method named Deep Convolutional Neural Networks with Wide First-layer Kernels (WDCNN). The proposed method uses raw vibration signals as input (data augmentation is used to generate more inputs), and uses the wide kernels in the first convolutional layer for extracting features and suppressing high frequency noise. Small convolutional kernels in the preceding layers are used for multilayer nonlinear mapping. AdaBN is implemented to improve the domain adaptation ability of the model. The proposed model addresses the problem that currently, the accuracy of CNN applied to fault diagnosis is not very high. WDCNN can not only achieve 100% classification accuracy on normal signals, but also outperform the state-of-the-art DNN model which is based on frequency features under different working load and noisy environment conditions. PMID:28241451
Sanchez, Richard D.; Hudnut, Kenneth W.
2004-01-01
Aerial mapping of the San Andreas Fault System can be realized more efficiently and rapidly without ground control and conventional aerotriangulation. This is achieved by the direct geopositioning of the exterior orientation of a digital imaging sensor by use of an integrated Global Positioning System (GPS) receiver and an Inertial Navigation System (INS). A crucial issue to this particular type of aerial mapping is the accuracy, scale, consistency, and speed achievable by such a system. To address these questions, an Applanix Digital Sensor System (DSS) was used to examine its potential for near real-time mapping. Large segments of vegetation along the San Andreas and Cucamonga faults near the foothills of the San Bernardino and San Gabriel Mountains were burned to the ground in the California wildfires of October-November 2003. A 175 km corridor through what once was a thickly vegetated and hidden fault surface was chosen for this study. Both faults pose a major hazard to the greater Los Angeles metropolitan area and a near real-time mapping system could provide information vital to a post-disaster response.
NASA Astrophysics Data System (ADS)
Tranos, Markos D.
2018-02-01
Synthetic heterogeneous fault-slip data as driven by Andersonian compressional stress tensors were used to examine the efficiency of best-fit stress inversion methods in separating them. Heterogeneous fault-slip data are separated only if (a) they have been driven by stress tensors defining 'hybrid' compression (R < 0.375), and their σ1 axes differ in trend more than 30° (R = 0) or 50° (R = 0.25). Separation is not feasible if they have been driven by (b) 'real' (R ≥ 0.375) and 'hybrid' compressional tensors having their σ1 axes in similar trend, or (c) 'real' compressional tensors. In case (a), the Stress Tensor Discriminator Faults (STDF) exist in more than 50% of the activated fault slip data while in cases (b) and (c), they exist in percentages of much less than 50% or not at all. They constitute a necessary discriminatory tool for the establishment and comparison of two compressional stress tensors determined by a best-fit stress inversion method. The best-fit stress inversion methods are not able to determine more than one 'real' compressional stress tensor, as far as the thrust stacking in an orogeny is concerned. They can only possibly discern stress differences in the late-orogenic faulting processes, but not between the main- and late-orogenic stages.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trevena, A.S.; Varga, R.J.; Collins, I.D.
Salin basin of central Myanmar is a tertiary fore-arc basin that extends over 10,000 mi{sup 2} and contains 30,000+ ft of siliciclastic rocks. In the western Salin basin, Tertiary deltaic and fluvial formations contain thousands of feet of lithic sandstones that alternate with transgressive shallow marine shales. Facies and paleocurrent studies indicate deposition by north-to-south prograding tidal deltas and associated fluvial systems in a semi-restricted basin. Presence of serpentinite and volcanic clasts in Tertiary sandstones may imply that the basin was bounded to the east by the volcanic arc and to the west by a fore-arc accretionary ridge throughout muchmore » of the Cenozoic. Salin basin is currently defined by a regional north/south-trending syncline with uplifts along the eastern and western margins. Elongate folds along the eastern basin margin verge to the east and lie above the reverse faults that dip west; much of Myanmar's present hydrocarbon production is from these structures. Analogous structures occur along the western margin, but verge to the west and are associated with numerous hydrocarbon seeps and hand-dug wells. These basin-bounding structures are the result of fault-propagation folding. In the western Salin basin, major detachments occur within the shaly Tabyin and Laungshe formations. Fault ramps propagated through steep forelimbs on the western sides of the folds, resulting in highly asymmetric footwall synclines. Stratigraphic and apatite fission track data are consistent with dominantly Plio-Pleistocene uplift, with limited uplift beginning approximately 10 Ma. Paleostress analysis of fault/slickenside data indicates that fold and thrust structures formed during regional east/west compression and are not related in any simple way to regional transpression as suggested by plate kinematics.« less
Initiation and Along-Axis Segmentation of Seaward-Dipping Volcanic Sequences Captured in Afar
NASA Astrophysics Data System (ADS)
Ebinger, C.; Wolfenden, E.; Yirgu, G.; Keir, D.
2003-12-01
The Afar triple junction zone provides a unique opportunity to examine the early development of magmatic margins, as respective limbs of the triple junction capture different stages of the breakup process. Initial rifting in the southernmost Red Sea occurred concurrent with, or soon after flood basaltic magmatism at ~31 Ma in the Ethiopia-Yemen plume province, whereas the northern part of the Main Ethiopian rift initiated after 12 Ma. Both rift systems initiated with the development of high-angle border fault systems bounding broad basins, but 8-10 My after rifting we see riftward migration of strain from the western border fault to narrow zones of increasingly more basaltic magmatism. These localised zones of faulting and volcanism (magmatic segments) show a segmentation independent of the border fault segmentation. The much older, more evolved magmatic segments in the southern Red Sea, where not onlapped by Pliocene-Recent sedimentary strata, dip steeply riftward and define a regional eastward flexure into transitional oceanic crust, as indicated by gravity models constrained by seismic refraction and receiver function data. The southern Red Sea magmatic segments have been abandoned in Pliocene-Recent triple junction reorganisations, whereas the process of seaward-dipping volcanic sequence emplacement is ongoing in the seismically and volcanically active Main Ethiopian rift. Field, remote sensing, gravity, and seismicity data from the Main Ethiopian and southern Red Sea rifts indicate that seaward-dipping volcanic sequences initiate in moderately stretched continental crust above a narrow zone of dike-intrusion. Our comparison of active and ancient magmatic segments show that they are the precursors to seaward-dipping volcanic sequences analogous to those seen on passive continental margins, and provides insights into the initiation of along-axis segmentation of seafloor-spreading centers.
Beeler, Nicholas M.; Kilgore, Brian D.; McGarr, Arthur F.; Fletcher, Jon Peter B.; Evans, John R.; Steven R. Baker,
2012-01-01
We have conducted dynamic rupture propagation experiments to establish the relations between in-source stress drop, fracture energy and the resulting particle velocity during slip of an unconfined 2 m long laboratory fault at normal stresses between 4 and 8 MPa. To produce high fracture energy in the source we use a rough fault that has a large slip weakening distance. An artifact of the high fracture energy is that the nucleation zone is large such that precursory slip reduces fault strength over a large fraction of the total fault length prior to dynamic rupture, making the initial stress non-uniform. Shear stress, particle velocity, fault slip and acceleration were recorded coseismically at multiple locations along strike and at small fault-normal distances. Stress drop increases weakly with normal stress. Average slip rate depends linearly on the fault strength loss and on static stress drop, both with a nonzero intercept. A minimum fracture energy of 1.8 J/m2 and a linear slip weakening distance of 33 μm are inferred from the intercept. The large slip weakening distance also affects the average slip rate which is reduced by in-source energy dissipation from on-fault fracture energy.Because of the low normal stress and small per event slip (∼86 μm), no thermal weakening such as melting or pore fluid pressurization occurs in these experiments. Despite the relatively high fracture energy, and the very low heat production, energy partitioning during these laboratory earthquakes is very similar to typical earthquake source properties. The product of fracture energy and fault area is larger than the radiated energy. Seismic efficiency is low at ∼2%. The ratio of apparent stress to static stress drop is ∼27%, consistent with measured overshoot. The fracture efficiency is ∼33%. The static and dynamic stress drops when extrapolated to crustal stresses are 2–7.3 MPa and in the range of typical earthquake stress drops. As the relatively high fracture energy reduces the slip velocities in these experiments, the extrapolated average particle velocities for crustal stresses are 0.18–0.6 m/s. That these experiments are consistent with typical earthquake source properties suggests, albeit indirectly, that thermal weakening mechanisms such as thermal pressurization and melting which lead to near complete stress drops, dominate earthquake source properties only for exceptional events unless crustal stresses are low.
Electronics for Deep Space Cryogenic Applications
NASA Technical Reports Server (NTRS)
Patterson, R. L.; Hammond, A.; Dickman, J. E.; Gerber, S. S.; Elbuluk, M. E.; Overton, E.
2002-01-01
Deep space probes and planetary exploration missions require electrical power management and control systems that are capable of efficient and reliable operation in very cold temperature environments. Typically, in deep space probes, heating elements are used to keep the spacecraft electronics near room temperature. The utilization of power electronics designed for and operated at low temperature will contribute to increasing efficiency and improving reliability of space power systems. At NASA Glenn Research Center, commercial-off-the-shelf devices as well as developed components are being investigated for potential use at low temperatures. These devices include semiconductor switching devices, magnetics, and capacitors. Integrated circuits such as digital-to-analog and analog-to-digital converters, DC/DC converters, operational amplifiers, and oscillators are also being evaluated. In this paper, results will be presented for selected analog-to-digital converters, oscillators, DC/DC converters, and pulse width modulation (PWM) controllers.
Evaluation of the Next-Gen Exercise Software Interface in the NEEMO Analog
NASA Technical Reports Server (NTRS)
Hanson, Andrea; Kalogera, Kent; Sandor, Aniko; Hardy, Marc; Frank, Andrew; English, Kirk; Williams, Thomas; Perera, Jeevan; Amonette, William
2017-01-01
NSBRI (National Space Biomedical Research Institute) funded research grant to develop the 'NextGen' exercise software for the NEEMO (NASA Extreme Environment Mission Operations) analog. Develop a software architecture to integrate instructional, motivational and socialization techniques into a common portal to enhance exercise countermeasures in remote environments. Increase user efficiency and satisfaction, and institute commonality across multiple exercise systems. Utilized GUI (Graphical User Interface) design principals focused on intuitive ease of use to minimize training time and realize early user efficiency. Project requirement to test the software in an analog environment. Top Level Project Aims: 1) Improve the usability of crew interface software to exercise CMS (Crew Management System) through common app-like interfaces. 2) Introduce virtual instructional motion training. 3) Use virtual environment to provide remote socialization with family and friends, improve exercise technique, adherence, motivation and ultimately performance outcomes.
NASA Astrophysics Data System (ADS)
Potirakis, Stelios M.; Zitis, Pavlos I.; Eftaxias, Konstantinos
2013-07-01
The field of study of complex systems considers that the dynamics of complex systems are founded on universal principles that may be used to describe a great variety of scientific and technological approaches of different types of natural, artificial, and social systems. Several authors have suggested that earthquake dynamics and the dynamics of economic (financial) systems can be analyzed within similar mathematical frameworks. We apply concepts of the nonextensive statistical physics, on time-series data of observable manifestations of the underlying complex processes ending up with these different extreme events, in order to support the suggestion that a dynamical analogy exists between a financial crisis (in the form of share or index price collapse) and a single earthquake. We also investigate the existence of such an analogy by means of scale-free statistics (the Gutenberg-Richter distribution of event sizes). We show that the populations of: (i) fracto-electromagnetic events rooted in the activation of a single fault, emerging prior to a significant earthquake, (ii) the trade volume events of different shares/economic indices, prior to a collapse, and (iii) the price fluctuation (considered as the difference of maximum minus minimum price within a day) events of different shares/economic indices, prior to a collapse, follow both the traditional Gutenberg-Richter law as well as a nonextensive model for earthquake dynamics, with similar parameter values. The obtained results imply the existence of a dynamic analogy between earthquakes and economic crises, which moreover follow the dynamics of seizures, magnetic storms and solar flares.
NASA Astrophysics Data System (ADS)
Newman, W. I.; Turcotte, D. L.
2002-12-01
We have studied a hybrid model combining the forest-fire model with the site-percolation model in order to better understand the earthquake cycle. We consider a square array of sites. At each time step, a "tree" is dropped on a randomly chosen site and is planted if the site is unoccupied. When a cluster of "trees" spans the site (a percolating cluster), all the trees in the cluster are removed ("burned") in a "fire." The removal of the cluster is analogous to a characteristic earthquake and planting "trees" is analogous to increasing the regional stress. The clusters are analogous to the metastable regions of a fault over which an earthquake rupture can propagate once triggered. We find that the frequency-area statistics of the metastable regions are power-law with a negative exponent of two (as in the forest-fire model). This is analogous to the Gutenberg-Richter distribution of seismicity. This "self-organized critical behavior" can be explained in terms of an inverse cascade of clusters. Individual trees move from small to larger clusters until they are destroyed. This inverse cascade of clusters is self-similar and the power-law distribution of cluster sizes has been shown to have an exponent of two. We have quantified the forecasting of the spanning fires using error diagrams. The assumption that "fires" (earthquakes) are quasi-periodic has moderate predictability. The density of trees gives an improved degree of predictability, while the size of the largest cluster of trees provides a substantial improvement in forecasting a "fire."
Earthquake rupture dynamics in poorly lithified sediments
NASA Astrophysics Data System (ADS)
De Paola, N.; Bullock, R. J.; Holdsworth, R.; Marco, S.; Nielsen, S. B.
2017-12-01
Several recent large earthquakes have generated anomalously large slip patches when propagating through fluid-saturated, clay-rich sediments near the surface. Friction experiments at seismic slip rates show that such sediments are extremely weak and deform with very little energy dissipation, which facilitates rupture propagation. Although dynamic weakening may explain the ease of rupture propagation through such sediments, it cannot account for the peculiar slow rupture velocity and low radiation efficiency exhibited by some large, shallow ruptures. Here, we integrate field and experimental datasets to describe on- and off-fault deformation in natural syn-depositional seismogenic faults (< 35 ka) in shallow, clay-rich, poorly lithified sediments from the Dead Sea Fault system, Israel. The data are then used to estimate the energy dissipated by on- and off-fault damage during earthquake rupture through shallow, clay-rich sediments. Our mechanical and field data show localised principal slip zones (PSZs) that deform by particulate flow, with little energy dissipated by brittle fracturing with cataclasis. Conversely, we show that coseismic brittle and ductile deformation in the damage zones outwith the PSZ, which cannot be replicated in small-scale laboratory experiments, is a significant energy sink, contributing to an energy dissipation that is one order of magnitude greater than that estimated from laboratory experiments alone. In particular, a greater proportion of dissipated energy would result in lower radiation efficiency, due to a reduced proportion of radiated energy, plus slower rupture velocity and more energy radiation in the low frequency range than might be anticipated from laboratory experiments alone. This result is in better agreement with seismological estimates of fracture energy, implying that off-fault damage can account for the geophysical characteristics of earthquake ruptures as they pass through clay-rich sediments in the shallow crust.
A new hybrid numerical scheme for simulating fault ruptures with near-fault bulk inhomogeneities
NASA Astrophysics Data System (ADS)
Hajarolasvadi, S.; Elbanna, A. E.
2017-12-01
The Finite Difference (FD) and Boundary Integral (BI) Method have been extensively used to model spontaneously propagating shear cracks, which can serve as a useful idealization of natural earthquakes. While FD suffers from artificial dispersion and numerical dissipation and has a large computational cost as it requires the discretization of the whole volume of interest, it can be applied to a wider range of problems including ones with bulk nonlinearities and heterogeneities. On the other hand, in the BI method, the numerical consideration is confined to the crack path only, with the elastodynamic response of the bulk expressed in terms of integral relations between displacement discontinuities and tractions along the crack. Therefore, this method - its spectral boundary integral (SBI) formulation in particular - is much faster and more computationally efficient than other bulk methods such as FD. However, its application is restricted to linear elastic bulk and planar faults. This work proposes a novel hybrid numerical scheme that combines FD and the SBI to enable treating fault zone nonlinearities and heterogeneities with unprecedented resolution and in a more computationally efficient way. The main idea of the method is to enclose the inhomgeneities in a virtual strip that is introduced for computational purposes only. This strip is then discretized using a volume-based numerical method, chosen here to be the finite difference method while the virtual boundaries of the strip are handled using the SBI formulation that represents the two elastic half spaces outside the strip. Modeling the elastodynamic response in these two halfspaces needs to be carried out by an Independent Spectral Formulation before joining them to the strip with the appropriate boundary conditions. Dirichlet and Neumann boundary conditions are imposed on the strip and the two half-spaces, respectively, at each time step to propagate the solution forward. We demonstrate the validity of the approach using two examples for dynamic rupture propagation: one in the presence of a low velocity layer and the other in which off-fault plasticity is permitted. This approach is more computationally efficient than pure FD and expands the range of applications of SBI beyond the current state of the art.
Protection of Renewable-dominated Microgrids: Challenges and Potential Solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elkhatib, Mohamed; Ellis, Abraham; Biswal, Milan
In this report we address the challenge of designing efficient protection system for inverter- dominated microgrids. These microgrids are characterised with limited fault current capacity as a result of current-limiting protection functions of inverters. Typically, inverters limit their fault contribution in sub-cycle time frame to as low as 1.1 per unit. As a result, overcurrent protection could fail completely to detect faults in inverter-dominated microgrids. As part of this project a detailed literature survey of existing and proposed microgrid protection schemes were conducted. The survey concluded that there is a gap in the available microgrid protection methods. The only crediblemore » protection solution available in literature for low- fault inverter-dominated microgrids is the differential protection scheme which represents a robust transmission-grade protection solution but at a very high cost. Two non-overcurrent protection schemes were investigated as part of this project; impedance-based protection and transient-based protection. Impedance-based protection depends on monitoring impedance trajectories at feeder relays to detect faults. Two communication-based impedance-based protection schemes were developed. the first scheme utilizes directional elements and pilot signals to locate the fault. The second scheme depends on a Central Protection Unit that communicates with all feeder relays to locate the fault based on directional flags received from feeder relays. The later approach could potentially be adapted to protect networked microgrids and dynamic topology microgrids. Transient-based protection relies on analyzing high frequency transients to detect and locate faults. This approach is very promising but its implementation in the filed faces several challenges. For example, high frequency transients due to faults can be confused with transients due to other events such as capacitor switching. Additionally, while detecting faults by analyzing transients could be doable, locating faults based on analyzing transients is still an open question.« less
Verifying Digital Components of Physical Systems: Experimental Evaluation of Test Quality
NASA Astrophysics Data System (ADS)
Laputenko, A. V.; López, J. E.; Yevtushenko, N. V.
2018-03-01
This paper continues the study of high quality test derivation for verifying digital components which are used in various physical systems; those are sensors, data transfer components, etc. We have used logic circuits b01-b010 of the package of ITC'99 benchmarks (Second Release) for experimental evaluation which as stated before, describe digital components of physical systems designed for various applications. Test sequences are derived for detecting the most known faults of the reference logic circuit using three different approaches to test derivation. Three widely used fault types such as stuck-at-faults, bridges, and faults which slightly modify the behavior of one gate are considered as possible faults of the reference behavior. The most interesting test sequences are short test sequences that can provide appropriate guarantees after testing, and thus, we experimentally study various approaches to the derivation of the so-called complete test suites which detect all fault types. In the first series of experiments, we compare two approaches for deriving complete test suites. In the first approach, a shortest test sequence is derived for testing each fault. In the second approach, a test sequence is pseudo-randomly generated by the use of an appropriate software for logic synthesis and verification (ABC system in our study) and thus, can be longer. However, after deleting sequences detecting the same set of faults, a test suite returned by the second approach is shorter. The latter underlines the fact that in many cases it is useless to spend `time and efforts' for deriving a shortest distinguishing sequence; it is better to use the test minimization afterwards. The performed experiments also show that the use of only randomly generated test sequences is not very efficient since such sequences do not detect all the faults of any type. After reaching the fault coverage around 70%, saturation is observed, and the fault coverage cannot be increased anymore. For deriving high quality short test suites, the approach that is the combination of randomly generated sequences together with sequences which are aimed to detect faults not detected by random tests, allows to reach the good fault coverage using shortest test sequences.
NASA Astrophysics Data System (ADS)
Bhattacharya, P.; Viesca, R. C.
2017-12-01
In the absence of in situ field-scale observations of quantities such as fault slip, shear stress and pore pressure, observational constraints on models of fault slip have mostly been limited to laboratory and/or remote observations. Recent controlled fluid-injection experiments on well-instrumented faults fill this gap by simultaneously monitoring fault slip and pore pressure evolution in situ [Gugleilmi et al., 2015]. Such experiments can reveal interesting fault behavior, e.g., Gugleilmi et al. report fluid-activated aseismic slip followed only subsequently by the onset of micro-seismicity. We show that the Gugleilmi et al. dataset can be used to constrain the hydro-mechanical model parameters of a fluid-activated expanding shear rupture within a Bayesian framework. We assume that (1) pore-pressure diffuses radially outward (from the injection well) within a permeable pathway along the fault bounded by a narrow damage zone about the principal slip surface; (2) pore-pressure increase ativates slip on a pre-stressed planar fault due to reduction in frictional strength (expressed as a constant friction coefficient times the effective normal stress). Owing to efficient, parallel, numerical solutions to the axisymmetric fluid-diffusion and crack problems (under the imposed history of injection), we are able to jointly fit the observed history of pore-pressure and slip using an adaptive Monte Carlo technique. Our hydrological model provides an excellent fit to the pore-pressure data without requiring any statistically significant permeability enhancement due to the onset of slip. Further, for realistic elastic properties of the fault, the crack model fits both the onset of slip and its early time evolution reasonably well. However, our model requires unrealistic fault properties to fit the marked acceleration of slip observed later in the experiment (coinciding with the triggering of microseismicity). Therefore, besides producing meaningful and internally consistent bounds on in-situ fault properties like permeability, storage coefficient, resolved stresses, friction and the shear modulus, our results also show that fitting the complete observed time history of slip requires alternative model considerations, such as variations in fault mechanical properties or friction coefficient with slip.
Seismic variability and structural controls on fluid migration in Northern Oklahoma
NASA Astrophysics Data System (ADS)
Lambert, C.; Keranen, K. M.; Stevens, N. T.
2016-12-01
The broad region of seismicity in northern Oklahoma encompasses distinct structural settings; notably, the area contains both high-length, high-offset faults bounding a major structural uplift (the Nemaha uplift), and also encompasses regions of distributed, low-length, low-offset faults on either side of the uplift. Seismicity differs between these structural settings in mode of migration, rate, magnitude, and mechanism. Here we use our catalog from 2015-2016, acquired using a dense network of 55 temporary broadband seismometers, complemented by data from 40+ regional stations, including the IRIS Wavefields stations. We compare seismicity between these structural settings using precise earthquake locations, focal mechanism solutions, and body-wave tomography. Within and along the dominant Nemaha uplift, earthquakes rarely occur on one of the primary uplift-bounding faults. Earthquakes instead occur within the uplift on isolated, discrete faults, and migrate gradually along these faults at 20-30 m/day. The regions peripheral to the uplift hosted the majority of earthquakes within the year, on multiple series of frequently unmapped, densely-spaced, subparallel faults. We did not detect a similar slow migration along these faults. Earthquakes instead occurred via progressive failure of individual segments along a fault, or jumped abruptly from one fault to another nearby. Mechanisms in both regions are dominantly strike-slip, with the interpreted dominant fault plane orientation rotating from N100E in the Wavefields area (west of the uplift) to N50E (within the uplift). We interpret that the distinct variation in seismicity may result from the variation in fault density and length between the uplift and the surrounding regions. Seismic velocity within the upper basement of the uplift is lower than the velocity on either side, possibly indicative of enhanced fracturing within the uplift, as seen in the Nemaha uplift to the north. The fracturing, along with the large faults, may create fluid pathways that facilitate pressure diffusion. Conversely, outside of the uplift, the numerous small-offset faults that are reactivated appear to be less efficient fluid pathways, inhibiting pressure diffusion and resulting in a higher seismicity rate.
NHPP-Based Software Reliability Models Using Equilibrium Distribution
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi
Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
Fault tolerant and lifetime control architecture for autonomous vehicles
NASA Astrophysics Data System (ADS)
Bogdanov, Alexander; Chen, Yi-Liang; Sundareswaran, Venkataraman; Altshuler, Thomas
2008-04-01
Increased vehicle autonomy, survivability and utility can provide an unprecedented impact on mission success and are one of the most desirable improvements for modern autonomous vehicles. We propose a general architecture of intelligent resource allocation, reconfigurable control and system restructuring for autonomous vehicles. The architecture is based on fault-tolerant control and lifetime prediction principles, and it provides improved vehicle survivability, extended service intervals, greater operational autonomy through lower rate of time-critical mission failures and lesser dependence on supplies and maintenance. The architecture enables mission distribution, adaptation and execution constrained on vehicle and payload faults and desirable lifetime. The proposed architecture will allow managing missions more efficiently by weighing vehicle capabilities versus mission objectives and replacing the vehicle only when it is necessary.
Losiak, Anna; Gołębiowska, Izabela; Orgel, Csilla; Moser, Linda; MacArthur, Jane; Boyd, Andrea; Hettrich, Sebastian; Jones, Natalie; Groemer, Gernot
2014-05-01
MARS2013 was an integrated Mars analog field simulation in eastern Morocco performed by the Austrian Space Forum between February 1 and 28, 2013. The purpose of this paper is to discuss the system of data processing and utilization adopted by the Remote Science Support (RSS) team during this mission. The RSS team procedures were designed to optimize operational efficiency of the Flightplan, field crew, and RSS teams during a long-term analog mission with an introduced 10 min time delay in communication between "Mars" and Earth. The RSS workflow was centered on a single-file, easy-to-use, spatially referenced database that included all the basic information about the conditions at the site of study, as well as all previous and planned activities. This database was prepared in Google Earth software. The lessons learned from MARS2013 RSS team operations are as follows: (1) using a spatially referenced database is an efficient way of data processing and data utilization in a long-term analog mission with a large amount of data to be handled, (2) mission planning based on iterations can be efficiently supported by preparing suitability maps, (3) the process of designing cartographical products should start early in the planning stages of a mission and involve representatives of all teams, (4) all team members should be trained in usage of cartographical products, (5) technical problems (e.g., usage of a geological map while wearing a space suit) should be taken into account when planning a work flow for geological exploration, (6) a system that helps the astronauts to efficiently orient themselves in the field should be designed as part of future analog studies.
NASA Astrophysics Data System (ADS)
Arriola, David; Thielecke, Frank
2017-09-01
Electromechanical actuators have become a key technology for the onset of power-by-wire flight control systems in the next generation of commercial aircraft. The design of robust control and monitoring functions for these devices capable to mitigate the effects of safety-critical faults is essential in order to achieve the required level of fault tolerance. A primary flight control system comprising two electromechanical actuators nominally operating in active-active mode is considered. A set of five signal-based monitoring functions are designed using a detailed model of the system under consideration which includes non-linear parasitic effects, measurement and data acquisition effects, and actuator faults. Robust detection thresholds are determined based on the analysis of parametric and input uncertainties. The designed monitoring functions are verified experimentally and by simulation through the injection of faults in the validated model and in a test-rig suited to the actuation system under consideration, respectively. They guarantee a robust and efficient fault detection and isolation with a low risk of false alarms, additionally enabling the correct reconfiguration of the system for an enhanced operational availability. In 98% of the performed experiments and simulations, the correct faults were detected and confirmed within the time objectives set.
Use of Terrestrial Laser Scanner for Rigid Airport Pavement Management
Di Benedetto, Alessandro; Fiani, Margherita
2017-01-01
The evaluation of the structural efficiency of airport infrastructures is a complex task. Faulting is one of the most important indicators of rigid pavement performance. The aim of our study is to provide a new method for faulting detection and computation on jointed concrete pavements. Nowadays, the assessment of faulting is performed with the use of laborious and time-consuming measurements that strongly hinder aircraft traffic. We proposed a field procedure for Terrestrial Laser Scanner data acquisition and a computation flow chart in order to identify and quantify the fault size at each joint of apron slabs. The total point cloud has been used to compute the least square plane fitting those points. The best-fit plane for each slab has been computed too. The attitude of each slab plane with respect to both the adjacent ones and the apron reference plane has been determined by the normal vectors to the surfaces. Faulting has been evaluated as the difference in elevation between the slab planes along chosen sections. For a more accurate evaluation of the faulting value, we have then considered a few strips of data covering rectangular areas of different sizes across the joints. The accuracy of the estimated quantities has been computed too. PMID:29278386
Early Mesozoic rift basin architecture and sediment routing system in the Moroccan High Atlas
NASA Astrophysics Data System (ADS)
Perez, N.; Teixell, A.; Gomez, D.
2016-12-01
Late Permian to Triassic extensional systems associated with Pangea breakup governed the structural framework and rift basin architecture that was inherited by Cenozoic High Atlas Mountains in Morocco. U-Pb detrital zircon geochronologic and mapping results from Permo-Triassic deposits now incorporated into the High Atlas Mountains provide new constraints on the geometry and interconnectivity among synextensional depocenters. U-Pb detrital zircon data provide provenance constraints of Permo-Triassic deposits, highlighting temporal changes in sediment sources and revealing the spatial pattern of sediment routing along the rift. We also characterize the U-Pb detrital zircon geochronologic signature of distinctive interfingering fluvial, tidal, and aeolian facies that are preferentially preserved near the controlling normal faults. These results highlight complex local sediment mixing patterns potentially linked to the interplay between fault motion, eustatic, and erosion/transport processes. We compare our U-Pb geochronologic results with existing studies of Gondwanan and Laurentian cratonic blocks to investigate continent scale sediment routing pathways, and with analogous early Mesozoic extensional systems situated in South America (Mitu basin, Peru) and North America (Newark Basin) to assess sediment mixing patterns in rift basins.
What is the earthquake fracture energy?
NASA Astrophysics Data System (ADS)
Di Toro, G.; Nielsen, S. B.; Passelegue, F. X.; Spagnuolo, E.; Bistacchi, A.; Fondriest, M.; Murphy, S.; Aretusini, S.; Demurtas, M.
2016-12-01
The energy budget of an earthquake is one of the main open questions in earthquake physics. During seismic rupture propagation, the elastic strain energy stored in the rock volume that bounds the fault is converted into (1) gravitational work (relative movement of the wall rocks bounding the fault), (2) in- and off-fault damage of the fault zone rocks (due to rupture propagation and frictional sliding), (3) frictional heating and, of course, (4) seismic radiated energy. The difficulty in the budget determination arises from the measurement of some parameters (e.g., the temperature increase in the slipping zone which constraints the frictional heat), from the not well constrained size of the energy sinks (e.g., how large is the rock volume involved in off-fault damage?) and from the continuous exchange of energy from different sinks (for instance, fragmentation and grain size reduction may result from both the passage of the rupture front and frictional heating). Field geology studies, microstructural investigations, experiments and modelling may yield some hints. Here we discuss (1) the discrepancies arising from the comparison of the fracture energy measured in experiments reproducing seismic slip with the one estimated from seismic inversion for natural earthquakes and (2) the off-fault damage induced by the diffusion of frictional heat during simulated seismic slip in the laboratory. Our analysis suggests, for instance, that the so called earthquake fracture energy (1) is mainly frictional heat for small slips and (2), with increasing slip, is controlled by the geometrical complexity and other plastic processes occurring in the damage zone. As a consequence, because faults are rapidly and efficiently lubricated upon fast slip initiation, the dominant dissipation mechanism in large earthquakes may not be friction but be the off-fault damage due to fault segmentation and stress concentrations in a growing region around the fracture tip.
Improving Distributed Diagnosis Through Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino
2011-01-01
Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.
Surface faults in the gulf coastal plain between Victoria and Beaumont, Texas
Verbeek, Earl R.
1979-01-01
Displacement of the land surface by faulting is widespread in the Houston-Galveston region, an area which has undergone moderate to severe land subsidence associated with fluid withdrawal (principally water, and to a lesser extent, oil and gas). A causative link between subsidence and fluid extraction has been convincingly reported in the published literature. However, the degree to which fluid withdrawal affects fault movement in the Texas Gulf Coast, and the mechanism(s) by which this occurs are as yet unclear. Faults that offset the ground surface are not confined to the large (>6000-km2) subsidence “bowl” centered on Houston, but rather are common and characteristic features of Gulf Coast geology. Current observations and conclusions concerning surface faults mapped in a 35,000-km2 area between Victoria and Beaumont, Texas (which area includes the Houston subsidence bowl) may be summarized as follows: (1) Hundreds of faults cutting the Pleistocene and Holocene sediments exposed in the coastal plain have been mapped. Many faults lie well outside the Houston-Galveston region; of these, more than 10% are active, as shown by such features as displaced, fractured, and patched road surfaces, structural failure of buildings astride faults, and deformed railroad tracks. (2) Complex patterns of surface faults are common above salt domes. Both radial patterns (for example, in High Island, Blue Ridge, Clam Lake, and Clinton domes) and crestal grabens (for example, in the South Houston and Friendswood-Webster domes) have been recognized. Elongate grabens connecting several known and suspected salt domes, such as the fault zone connecting Mykawa, Friendswood-Webster, and Clear Lake domes, suggest fault development above rising salt ridges. (3) Surface faults associated with salt domes tend to be short (<5 km in length), numerous, curved in map view, and of diverse trend. Intersecting faults are common. In contrast, surface faults in areas unaffected by salt diapirism are frequently mappable for appreciable distances (>10 km), occur singly or in simple grabens, have gently sinuous traces, and tend to lie roughly parallel to the ENE-NE “coastwise” trend common to regional growth faults identified in subsurface Tertiary sediments. (4) Evidence to support the thesis that surface scarps are the shallow expression of faults extending downward into the Tertiary section is mostly indirect, but nonetheless reasonably convincing. Certainly the patterns of crestal grabens and radiating faults mapped on the surface above salt domes are more than happenstance; analogous fault patterns have been documented around these structures at depth. Similarly, some of the long surface faults not associated with salt domes seem to have subsurface counterparts among known regional growth faults documented through well logs and seismic data. Correlations between surface scarps and faults offsetting subsurface data are not conclusive because of the large vertical distances (1900- 3800 m) involved in making the most of the inferred connections. Nevertheless, the large number of successful correlations - in trend, movement sense, and position - suggests that many surface scarps represent merely the most recent displacements on faults formed during the Tertiary. (5) Upstream-facing fault scarps in this region of low relief can be significant impediments to streams. Locally, both abandoned, mud-filled Pleistocene distributary channels and, more commonly, Holocene drainage lines still occupied by perennial streams reflect the influence of faulting on their development. Some bend sharply near faults and have tended to flow along or pond against the base of scarps; others meander within topographically expressed grabens. Such evidence for Quaternary displacement of the ground surface is widespread in the Texas Gulf coast. In the general, however, streams in areas now offset by faulting show no disruption of their courses where they cross fault scarps. Such scarps are probably very young, and where they can be demonstrated to partly or wholly predate fluid withdrawal, very recent natural fault activity is indicated. (6) Early aerial photographs (1930) of the entire region and topographic maps (1915-16 surveys) of Harris County (Houston and vicinity) show that many faults had already displaced the land surface at a time when appreciable pressure declines in subjacent strata were localized to relatively few areas of large-scale pumping. Prehistoric faulting of the land surface, as noted above, appears to have affected much of the Texas Gulf Coast. (7) A relation between groundwater extraction and current motion on active faults is suspected because of the increased incidence of ground failure in the Houston-Galveston subsidence bowl. This argument is weakened somewhat by recognition of numerous surface faults, some of them active today, far beyond the periphery of the strongly subsiding area. Moreover, tilt beam records from two monitored faults in northwest Houston and accounts of fault damage from local residents demonstrate a complex, episodic nature of fault creep which can only partially be correlated with groundwater production. Nevertheless, although specific mechanisms are in doubt, the extraction of groundwater from shallow (<800-m) sands is probably a major factor in contributing to current displacement of the ground surface in the Houston-Galveston region. Within this large area, the number of faults recognizable from aerial photographs has increased at least tenfold between 1930 and 1970. Elsewhere in the Texas Gulf Coast only a moderate increase has been noted, some of which is possibly attributable to oil and gas production. Surface fault density in the Houston-Galveston region is far greater than in any other area of the Texas Gulf Coast investigated to date. A plausible explanation for these differences is that large overdrafts of groundwater over an extended period of time in the Houston-Galveston region have stimulated fault activity there. Throughout the Texas Gulf Coast, however, a natural contribution to fault motion remains a distinct possibility.
Flexible, reconfigurable, power efficient transmitter and method
NASA Technical Reports Server (NTRS)
Bishop, James W. (Inventor); Zaki, Nazrul H. Mohd (Inventor); Newman, David Childress (Inventor); Bundick, Steven N. (Inventor)
2011-01-01
A flexible, reconfigurable, power efficient transmitter device and method is provided. In one embodiment, the method includes receiving outbound data and determining a mode of operation. When operating in a first mode the method may include modulation mapping the outbound data according a modulation scheme to provide first modulation mapped digital data, converting the first modulation mapped digital data to an analog signal that comprises an intermediate frequency (IF) analog signal, upconverting the IF analog signal to produce a first modulated radio frequency (RF) signal based on a local oscillator signal, amplifying the first RF modulated signal to produce a first RF output signal, and outputting the first RF output signal via an isolator. In a second mode of operation method may include modulation mapping the outbound data according a modulation scheme to provide second modulation mapped digital data, converting the second modulation mapped digital data to a first digital baseband signal, conditioning the first digital baseband signal to provide a first analog baseband signal, modulating one or more carriers with the first analog baseband signal to produce a second modulated RF signal based on a local oscillator signal, amplifying the second RF modulated signal to produce a second RF output signal, and outputting the second RF output signal via the isolator. The digital baseband signal may comprise an in-phase (I) digital baseband signal and a quadrature (Q) baseband signal.
NASA Astrophysics Data System (ADS)
Horst, A. J.; Varga, R. J.; Gee, J. S.; Karson, J.
2011-12-01
Oceanic propagating rifts create migrating transform fault zones on the seafloor that leave a wake of deformed and rotated crustal blocks between abandoned transform fault stands. Faulting and rotation kinematics in these areas are inferred from bathymetric lineaments and earthquake focal mechanisms, but the details of crustal deformation associated with migrating oceanic transforms is inhibited by limited seafloor exposures and access. A similar propagating rift and migrating transform system occurs in thick oceanic-like crust of Northern Iceland, providing an additional perspective on kinematics of these systems. The Tjörnes Fracture Zone (TFZ) in Northern Iceland is a broad region of deformation thought to have formed ~7 Ma. Right-lateral motion is accommodated mostly on two WNW-trending seismically active fault zones, the Grímsey Seismic Zone and the Húsavík-Flatey Fault (HFF), spaced ~40 km apart. Both are primarily offshore; however, deformation south of the HFF is partly exposed on land over an area of >10 km (N/S) and >25 km (E/W) on the peninsula of Flateyjarskagi. Previous work has shown that average lava flow orientations progressively change from 160°/12° SW (~20 km south from HFF), to 183°/25° NW (~12 km S of HFF), and 212°/33° NW (~6 km S of HFF). Dike orientations also progressively change from 010°/85° SE (parallel to the Northern Rift Zone), clockwise to 110°/75° SW (nearly parallel to the HFF) near the HFF. Pervasive strike-slip faulting is evident along the HFF as well as on isolated faults to the south. Between these, NNE-striking left-lateral, oblique-slip faults occur near the HFF but appear to decrease in occurrence to the south. These relationships have been interpreted as either the result of transform shear deformation (secondary features) or construction in a stress field that varies as the transform is approached (primary features). Paleomagnetic data from across the area can test these hypotheses. Mean paleomagnetic remanence directions of normal polarity lavas from two areas ~6 and ~12 km south of the HFF both have easterly declinations and moderate positive inclinations, with nearly antipodal reverse directions. Dikes sampled in the area ~6 km south of HFF reveal remanence directions indistinguishable from those of the lavas at the 95% confidence level. After tilt correction, the mean remanence directions for the area ~6km south of the HFF are statistically distinct from the expected Geocentric Axial Dipole (GAD) direction suggesting an additional ~40° or more of vertical-axis rotation. Tilt-corrected remanence directions of lavas ~12 km south of the HFF are nearly coincident with the GAD suggesting little additional rotation. Geological field relations and fault-slip data imply a two-stage reconstruction involving tilting followed by approximately vertical-axis rotations. The deformation within the TFZ may be analogous to that of migrating oceanic transform faults, transform faults associated with propagating rifts, and microplates.
Strenkowska, Malwina; Grzela, Renata; Majewski, Maciej; Wnek, Katarzyna; Kowalska, Joanna; Lukaszewicz, Maciej; Zuberek, Joanna; Darzynkiewicz, Edward; Kuhn, Andreas N.; Sahin, Ugur; Jemielity, Jacek
2016-01-01
Along with a growing interest in mRNA-based gene therapies, efforts are increasingly focused on reaching the full translational potential of mRNA, as a major obstacle for in vivo applications is sufficient expression of exogenously delivered mRNA. One method to overcome this limitation is chemically modifying the 7-methylguanosine cap at the 5′ end of mRNA (m7Gppp-RNA). We report a novel class of cap analogs designed as reagents for mRNA modification. The analogs carry a 1,2-dithiodiphosphate moiety at various positions along a tri- or tetraphosphate bridge, and thus are termed 2S analogs. These 2S analogs have high affinities for translation initiation factor 4E, and some exhibit remarkable resistance against the SpDcp1/2 decapping complex when introduced into RNA. mRNAs capped with 2S analogs combining these two features exhibit high translation efficiency in cultured human immature dendritic cells. These properties demonstrate that 2S analogs are potentially beneficial for mRNA-based therapies such as anti-cancer immunization. PMID:27903882
Synthetic analog computation in living cells.
Daniel, Ramiz; Rubens, Jacob R; Sarpeshkar, Rahul; Lu, Timothy K
2013-05-30
A central goal of synthetic biology is to achieve multi-signal integration and processing in living cells for diagnostic, therapeutic and biotechnology applications. Digital logic has been used to build small-scale circuits, but other frameworks may be needed for efficient computation in the resource-limited environments of cells. Here we demonstrate that synthetic analog gene circuits can be engineered to execute sophisticated computational functions in living cells using just three transcription factors. Such synthetic analog gene circuits exploit feedback to implement logarithmically linear sensing, addition, ratiometric and power-law computations. The circuits exhibit Weber's law behaviour as in natural biological systems, operate over a wide dynamic range of up to four orders of magnitude and can be designed to have tunable transfer functions. Our circuits can be composed to implement higher-order functions that are well described by both intricate biochemical models and simple mathematical functions. By exploiting analog building-block functions that are already naturally present in cells, this approach efficiently implements arithmetic operations and complex functions in the logarithmic domain. Such circuits may lead to new applications for synthetic biology and biotechnology that require complex computations with limited parts, need wide-dynamic-range biosensing or would benefit from the fine control of gene expression.
IADE: a system for intelligent automatic design of bioisosteric analogs
NASA Astrophysics Data System (ADS)
Ertl, Peter; Lewis, Richard
2012-11-01
IADE, a software system supporting molecular modellers through the automatic design of non-classical bioisosteric analogs, scaffold hopping and fragment growing, is presented. The program combines sophisticated cheminformatics functionalities for constructing novel analogs and filtering them based on their drug-likeness and synthetic accessibility using automatic structure-based design capabilities: the best candidates are selected according to their similarity to the template ligand and to their interactions with the protein binding site. IADE works in an iterative manner, improving the fitness of designed molecules in every generation until structures with optimal properties are identified. The program frees molecular modellers from routine, repetitive tasks, allowing them to focus on analysis and evaluation of the automatically designed analogs, considerably enhancing their work efficiency as well as the area of chemical space that can be covered. The performance of IADE is illustrated through a case study of the design of a nonclassical bioisosteric analog of a farnesyltransferase inhibitor—an analog that has won a recent "Design a Molecule" competition.
IADE: a system for intelligent automatic design of bioisosteric analogs.
Ertl, Peter; Lewis, Richard
2012-11-01
IADE, a software system supporting molecular modellers through the automatic design of non-classical bioisosteric analogs, scaffold hopping and fragment growing, is presented. The program combines sophisticated cheminformatics functionalities for constructing novel analogs and filtering them based on their drug-likeness and synthetic accessibility using automatic structure-based design capabilities: the best candidates are selected according to their similarity to the template ligand and to their interactions with the protein binding site. IADE works in an iterative manner, improving the fitness of designed molecules in every generation until structures with optimal properties are identified. The program frees molecular modellers from routine, repetitive tasks, allowing them to focus on analysis and evaluation of the automatically designed analogs, considerably enhancing their work efficiency as well as the area of chemical space that can be covered. The performance of IADE is illustrated through a case study of the design of a nonclassical bioisosteric analog of a farnesyltransferase inhibitor--an analog that has won a recent "Design a Molecule" competition.
A probabilistic dynamic energy model for ad-hoc wireless sensors network with varying topology
NASA Astrophysics Data System (ADS)
Al-Husseini, Amal
In this dissertation we investigate the behavior of Wireless Sensor Networks (WSNs) from the degree distribution and evolution perspective. In specific, we focus on implementation of a scale-free degree distribution topology for energy efficient WSNs. WSNs is an emerging technology that finds its applications in different areas such as environment monitoring, agricultural crop monitoring, forest fire monitoring, and hazardous chemical monitoring in war zones. This technology allows us to collect data without human presence or intervention. Energy conservation/efficiency is one of the major issues in prolonging the active life WSNs. Recently, many energy aware and fault tolerant topology control algorithms have been presented, but there is dearth of research focused on energy conservation/efficiency of WSNs. Therefore, we study energy efficiency and fault-tolerance in WSNs from the degree distribution and evolution perspective. Self-organization observed in natural and biological systems has been directly linked to their degree distribution. It is widely known that scale-free distribution bestows robustness, fault-tolerance, and access efficiency to system. Fascinated by these properties, we propose two complex network theoretic self-organizing models for adaptive WSNs. In particular, we focus on adopting the Barabasi and Albert scale-free model to fit into the constraints and limitations of WSNs. We developed simulation models to conduct numerical experiments and network analysis. The main objective of studying these models is to find ways to reducing energy usage of each node and balancing the overall network energy disrupted by faulty communication among nodes. The first model constructs the wireless sensor network relative to the degree (connectivity) and remaining energy of every individual node. We observed that it results in a scale-free network structure which has good fault tolerance properties in face of random node failures. The second model considers additional constraints on the maximum degree of each node as well as the energy consumption relative to degree changes. This gives more realistic results from a dynamical network perspective. It results in balanced network-wide energy consumption. The results show that networks constructed using the proposed approach have good properties for different centrality measures. The outcomes of the presented research are beneficial to building WSN control models with greater self-organization properties which leads to optimal energy consumption.
Pattern formation during healing of fluid-filled cracks: an analog experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
F. Renard; D. K. Dysthe; J. G. Feder
2009-11-01
The formation and subsequent healing of cracks and crack networks may control such diverse phenomena as the strengthening of fault zones between earthquakes, fluid migrations in the Earth's crust, or the transport of radioactive materials in nuclear waste disposal. An intriguing pattern-forming process can develop during healing of fluid-filled cracks, where pockets of fluid remain permanently trapped in the solid as the crack tip is displaced driven by surface energy. Here, we present the results of analog experiments in which a liquid was injected into a colloidal inorganic gel to obtain penny-shaped cracks that were subsequently allowed to close andmore » heal under the driving effect of interfacial tension. Depending on the properties of the gel and the injected liquid, two modes of healing were obtained. In the first mode, the crack healed completely through a continuous process. The second mode of healing was discontinuous and was characterized by a 'zipper-like' closure of a front that moved along the crack perimeter, trapping fluid that may eventually form inclusions trapped in the solid. This instability occurred only when the velocity of the crack tip decreased to zero. Our experiments provide a cheap and simple analog to reveal how aligned arrays of fluid inclusions may be captured along preexisting fracture planes and how small amounts of fluids can be permanently trapped in solids, modifying irreversibly their material properties.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, L.; Wilson, T.H.; Shumaker, R.C.
1993-08-01
Seismic interpretations of the Granny Creek oil field in West Virginia suggest the presence of numerous small-scale fracture zones and faults. Seismic disruptions interpreted as faults and/or fracture zones are represented by abrupt reflection offsets, local amplitude reductions, and waveform changes. These features are enhanced through reprocessing, and the majority of the improvements to the data result from the surface consistent application of zero-phase deconvolution. Reprocessing yields a 20% improvement of resolution. Seismic interpretations of these features as small faults and fracture zones are supported by nearby offset vertical seismic profiles and by their proximity to wells between which directmore » communication occurs during waterflooding. Four sets of faults are interpreted based on subsurface and seismic data. Direct interwell communication is interpreted to be associated only with a northeast-trending set of faults, which are believed to have detached structural origins. Subsequent reactivation of deeper basement faults may have opened fractures along this trend. These faults have a limited effect on primary production, but cause many well-communication problems and reduce secondary production. Seismic detection of these zones is important to the economic and effective design of secondary recovery operations, because direct well communication often results in significant reduction of sweep efficiency during waterflooding. Prior information about the location of these zones would allow secondary recovery operations to avoid potential problem areas and increase oil recovery.« less
NASA Astrophysics Data System (ADS)
Wang, Lei; Liu, Zhiwen; Miao, Qiang; Zhang, Xin
2018-03-01
A time-frequency analysis method based on ensemble local mean decomposition (ELMD) and fast kurtogram (FK) is proposed for rotating machinery fault diagnosis. Local mean decomposition (LMD), as an adaptive non-stationary and nonlinear signal processing method, provides the capability to decompose multicomponent modulation signal into a series of demodulated mono-components. However, the occurring mode mixing is a serious drawback. To alleviate this, ELMD based on noise-assisted method was developed. Still, the existing environmental noise in the raw signal remains in corresponding PF with the component of interest. FK has good performance in impulse detection while strong environmental noise exists. But it is susceptible to non-Gaussian noise. The proposed method combines the merits of ELMD and FK to detect the fault for rotating machinery. Primarily, by applying ELMD the raw signal is decomposed into a set of product functions (PFs). Then, the PF which mostly characterizes fault information is selected according to kurtosis index. Finally, the selected PF signal is further filtered by an optimal band-pass filter based on FK to extract impulse signal. Fault identification can be deduced by the appearance of fault characteristic frequencies in the squared envelope spectrum of the filtered signal. The advantages of ELMD over LMD and EEMD are illustrated in the simulation analyses. Furthermore, the efficiency of the proposed method in fault diagnosis for rotating machinery is demonstrated on gearbox case and rolling bearing case analyses.
Study of a unified hardware and software fault-tolerant architecture
NASA Technical Reports Server (NTRS)
Lala, Jaynarayan; Alger, Linda; Friend, Steven; Greeley, Gregory; Sacco, Stephen; Adams, Stuart
1989-01-01
A unified architectural concept, called the Fault Tolerant Processor Attached Processor (FTP-AP), that can tolerate hardware as well as software faults is proposed for applications requiring ultrareliable computation capability. An emulation of the FTP-AP architecture, consisting of a breadboard Motorola 68010-based quadruply redundant Fault Tolerant Processor, four VAX 750s as attached processors, and four versions of a transport aircraft yaw damper control law, is used as a testbed in the AIRLAB to examine a number of critical issues. Solutions of several basic problems associated with N-Version software are proposed and implemented on the testbed. This includes a confidence voter to resolve coincident errors in N-Version software. A reliability model of N-Version software that is based upon the recent understanding of software failure mechanisms is also developed. The basic FTP-AP architectural concept appears suitable for hosting N-Version application software while at the same time tolerating hardware failures. Architectural enhancements for greater efficiency, software reliability modeling, and N-Version issues that merit further research are identified.
Nonuniform code concatenation for universal fault-tolerant quantum computing
NASA Astrophysics Data System (ADS)
Nikahd, Eesa; Sedighi, Mehdi; Saheb Zamani, Morteza
2017-09-01
Using transversal gates is a straightforward and efficient technique for fault-tolerant quantum computing. Since transversal gates alone cannot be computationally universal, they must be combined with other approaches such as magic state distillation, code switching, or code concatenation to achieve universality. In this paper we propose an alternative approach for universal fault-tolerant quantum computing, mainly based on the code concatenation approach proposed in [T. Jochym-O'Connor and R. Laflamme, Phys. Rev. Lett. 112, 010505 (2014), 10.1103/PhysRevLett.112.010505], but in a nonuniform fashion. The proposed approach is described based on nonuniform concatenation of the 7-qubit Steane code with the 15-qubit Reed-Muller code, as well as the 5-qubit code with the 15-qubit Reed-Muller code, which lead to two 49-qubit and 47-qubit codes, respectively. These codes can correct any arbitrary single physical error with the ability to perform a universal set of fault-tolerant gates, without using magic state distillation.
NASA Astrophysics Data System (ADS)
Rainaud, Jean-François; Clochard, Vincent; Delépine, Nicolas; Crabié, Thomas; Poudret, Mathieu; Perrin, Michel; Klein, Emmanuel
2018-07-01
Accurate reservoir characterization is needed all along the development of an oil and gas field study. It helps building 3D numerical reservoir simulation models for estimating the original oil and gas volumes in place and for simulating fluid flow behaviors. At a later stage of the field development, reservoir characterization can also help deciding which recovery techniques need to be used for fluids extraction. In complex media, such as faulted reservoirs, flow behavior predictions within volumes close to faults can be a very challenging issue. During the development plan, it is necessary to determine which types of communication exist between faults or which potential barriers exist for fluid flows. The solving of these issues rests on accurate fault characterization. In most cases, faults are not preserved along reservoir characterization workflows. The memory of the interpreted faults from seismic is not kept during seismic inversion and further interpretation of the result. The goal of our study is at first to integrate a 3D fault network as a priori information into a model-based stratigraphic inversion procedure. Secondly, we apply our methodology on a well-known oil and gas case study over a typical North Sea field (UK Northern North Sea) in order to demonstrate its added value for determining reservoir properties. More precisely, the a priori model is composed of several geological units populated by physical attributes, they are extrapolated from well log data following the deposition mode, but usually a priori model building methods respect neither the 3D fault geometry nor the stratification dips on the fault sides. We address this difficulty by applying an efficient flattening method for each stratigraphic unit in our workflow. Even before seismic inversion, the obtained stratigraphic model has been directly used to model synthetic seismic on our case study. Comparisons between synthetic seismic obtained from our 3D fault network model give much lower residuals than with a "basic" stratigraphic model. Finally, we apply our model-based inversion considering both faulted and non-faulted a priori models. By comparing the rock impedances results obtain in the two cases, we can see a better delineation of the Brent-reservoir compartments by using the 3D faulted a priori model built with our method.
High-fidelity spin measurement on the nitrogen-vacancy center
NASA Astrophysics Data System (ADS)
Hanks, Michael; Trupke, Michael; Schmiedmayer, Jörg; Munro, William J.; Nemoto, Kae
2017-10-01
Nitrogen-vacancy (NV) centers in diamond are versatile candidates for many quantum information processing tasks, ranging from quantum imaging and sensing through to quantum communication and fault-tolerant quantum computers. Critical to almost every potential application is an efficient mechanism for the high fidelity readout of the state of the electronic and nuclear spins. Typically such readout has been achieved through an optically resonant fluorescence measurement, but the presence of decay through a meta-stable state will limit its efficiency to the order of 99%. While this is good enough for many applications, it is insufficient for large scale quantum networks and fault-tolerant computational tasks. Here we explore an alternative approach based on dipole induced transparency (state-dependent reflection) in an NV center cavity QED system, using the most recent knowledge of the NV center’s parameters to determine its feasibility, including the decay channels through the meta-stable subspace and photon ionization. We find that single-shot measurements above fault-tolerant thresholds should be available in the strong coupling regime for a wide range of cavity-center cooperativities, using a majority voting approach utilizing single photon detection. Furthermore, extremely high fidelity measurements are possible using weak optical pulses.
NASA Astrophysics Data System (ADS)
Ren, Lei; Zhang, Lin; Tao, Fei; (Luke) Zhang, Xiaolong; Luo, Yongliang; Zhang, Yabin
2012-08-01
Multidisciplinary design of complex products leads to an increasing demand for high performance simulation (HPS) platforms. One great challenge is how to achieve high efficient utilisation of large-scale simulation resources in distributed and heterogeneous environments. This article reports a virtualisation-based methodology to realise a HPS platform. This research is driven by the issues concerning large-scale simulation resources deployment and complex simulation environment construction, efficient and transparent utilisation of fine-grained simulation resources and high reliable simulation with fault tolerance. A framework of virtualisation-based simulation platform (VSIM) is first proposed. Then the article investigates and discusses key approaches in VSIM, including simulation resources modelling, a method to automatically deploying simulation resources for dynamic construction of system environment, and a live migration mechanism in case of faults in run-time simulation. Furthermore, the proposed methodology is applied to a multidisciplinary design system for aircraft virtual prototyping and some experiments are conducted. The experimental results show that the proposed methodology can (1) significantly improve the utilisation of fine-grained simulation resources, (2) result in a great reduction in deployment time and an increased flexibility for simulation environment construction and (3)achieve fault tolerant simulation.
An Autonomous Self-Aware and Adaptive Fault Tolerant Routing Technique for Wireless Sensor Networks
Abba, Sani; Lee, Jeong-A
2015-01-01
We propose an autonomous self-aware and adaptive fault-tolerant routing technique (ASAART) for wireless sensor networks. We address the limitations of self-healing routing (SHR) and self-selective routing (SSR) techniques for routing sensor data. We also examine the integration of autonomic self-aware and adaptive fault detection and resiliency techniques for route formation and route repair to provide resilience to errors and failures. We achieved this by using a combined continuous and slotted prioritized transmission back-off delay to obtain local and global network state information, as well as multiple random functions for attaining faster routing convergence and reliable route repair despite transient and permanent node failure rates and efficient adaptation to instantaneous network topology changes. The results of simulations based on a comparison of the ASAART with the SHR and SSR protocols for five different simulated scenarios in the presence of transient and permanent node failure rates exhibit a greater resiliency to errors and failure and better routing performance in terms of the number of successfully delivered network packets, end-to-end delay, delivered MAC layer packets, packet error rate, as well as efficient energy conservation in a highly congested, faulty, and scalable sensor network. PMID:26295236
An Autonomous Self-Aware and Adaptive Fault Tolerant Routing Technique for Wireless Sensor Networks.
Abba, Sani; Lee, Jeong-A
2015-08-18
We propose an autonomous self-aware and adaptive fault-tolerant routing technique (ASAART) for wireless sensor networks. We address the limitations of self-healing routing (SHR) and self-selective routing (SSR) techniques for routing sensor data. We also examine the integration of autonomic self-aware and adaptive fault detection and resiliency techniques for route formation and route repair to provide resilience to errors and failures. We achieved this by using a combined continuous and slotted prioritized transmission back-off delay to obtain local and global network state information, as well as multiple random functions for attaining faster routing convergence and reliable route repair despite transient and permanent node failure rates and efficient adaptation to instantaneous network topology changes. The results of simulations based on a comparison of the ASAART with the SHR and SSR protocols for five different simulated scenarios in the presence of transient and permanent node failure rates exhibit a greater resiliency to errors and failure and better routing performance in terms of the number of successfully delivered network packets, end-to-end delay, delivered MAC layer packets, packet error rate, as well as efficient energy conservation in a highly congested, faulty, and scalable sensor network.
NASA Astrophysics Data System (ADS)
Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang
2016-02-01
Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and the interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique to detect bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequency associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is elaborated after wavelet filtering. Instead of resorting to a reference signal as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting the fault-related impulses hidden in vibration signals and performs well for bearing fault diagnosis.
NASA Astrophysics Data System (ADS)
Kong, Changduk; Lim, Semyeong
2011-12-01
Recently, the health monitoring system of major gas path components of gas turbine uses mostly the model based method like the Gas Path Analysis (GPA). This method is to find quantity changes of component performance characteristic parameters such as isentropic efficiency and mass flow parameter by comparing between measured engine performance parameters such as temperatures, pressures, rotational speeds, fuel consumption, etc. and clean engine performance parameters without any engine faults which are calculated by the base engine performance model. Currently, the expert engine diagnostic systems using the artificial intelligent methods such as Neural Networks (NNs), Fuzzy Logic and Genetic Algorithms (GAs) have been studied to improve the model based method. Among them the NNs are mostly used to the engine fault diagnostic system due to its good learning performance, but it has a drawback due to low accuracy and long learning time to build learning data base if there are large amount of learning data. In addition, it has a very complex structure for finding effectively single type faults or multiple type faults of gas path components. This work builds inversely a base performance model of a turboprop engine to be used for a high altitude operation UAV using measured performance data, and proposes a fault diagnostic system using the base engine performance model and the artificial intelligent methods such as Fuzzy logic and Neural Network. The proposed diagnostic system isolates firstly the faulted components using Fuzzy Logic, then quantifies faults of the identified components using the NN leaned by fault learning data base, which are obtained from the developed base performance model. In leaning the NN, the Feed Forward Back Propagation (FFBP) method is used. Finally, it is verified through several test examples that the component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.
On boundary-element models of elastic fault interaction
NASA Astrophysics Data System (ADS)
Becker, T. W.; Schott, B.
2002-12-01
We present the freely available, modular, and UNIX command-line based boundary-element program interact. It is yet another implementation of Crouch and Starfield's (1983) 2-D and Okada's (1992) half-space solutions for constant slip on planar fault segments in an elastic medium. Using unconstrained or non-negative, standard-package matrix routines, the code can solve for slip distributions on faults given stress boundary conditions, or vice versa, both in a local or global reference frame. Based on examples of complex fault geometries from structural geology, we discuss the effects of different stress boundary conditions on the predicted slip distributions of interacting fault systems. Such one-step calculations can be useful to estimate the moment-release efficiency of alternative fault geometries, and so to evaluate the likelihood which system may be realized in nature. A further application of the program is the simulation of cyclic fault rupture based on simple static-kinetic friction laws. We comment on two issues: First, that of the appropriate rupture algorithm. Cellular models of seismicity often employ an exhaustive rupture scheme: fault cells fail if some critical stress is reached, then cells slip once-only by a given amount, and subsequently the redistributed stress is used to check for triggered activations on other cells. We show that this procedure can lead to artificial complexity in seismicity if time-to-failure is not calculated carefully because of numerical noise. Second, we address the question if foreshocks can be viewed as direct expressions of a simple statistical distribution of frictional strength on individual faults. Repetitive failure models based on a random distribution of frictional coefficients initially show irregular seismicity. By repeatedly selecting weaker patches, the fault then evolves into a quasi-periodic cycle. Each time, the pre-mainshock events build up the cumulative moment release in a non-linear fashion. These temporal seismicity patterns roughly resemble the accelerated moment-release features which are sometimes observed in nature.
Matrix-vector multiplication using digital partitioning for more accurate optical computing
NASA Technical Reports Server (NTRS)
Gary, C. K.
1992-01-01
Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and the speed required for an equivalent throughput as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient alogrithms if coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
NASA Astrophysics Data System (ADS)
Goto, J.; Moriya, T.; Yoshimura, K.; Tsuchi, H.; Karasaki, K.; Onishi, T.; Ueta, K.; Tanaka, S.; Kiho, K.
2010-12-01
The Nuclear Waste Management Organization of Japan (NUMO), in collaboration with Lawrence Berkeley National Laboratory (LBNL), has carried out a project to develop an efficient and practical methodology to characterize hydrologic property of faults since 2007, exclusively for the early stage of siting a deep underground repository. A preliminary flowchart of the characterization program and a classification scheme of fault hydrology based on the geological feature have been proposed. These have been tested through the field characterization program on the Wildcat Fault in Berkeley, California. The Wildcat Fault is a relatively large non-active strike-slip fault which is believed to be a subsidiary of the active Hayward Fault. Our classification scheme assumes the contrasting hydrologic features between the linear northern part and the split/spread southern part of the Wildcat Fault. The field characterization program to date has been concentrated in and around the LBNL site on the southern part of the fault. Several lines of electrical and reflection seismic surveys, and subsequent trench investigations, have revealed the approximate distribution and near-surface features of the Wildcat Fault (see also Onishi, et al. and Ueta, et al.). Three 150m deep boreholes, WF-1 to WF-3, have been drilled on a line normal to the trace of the fault in the LBNL site. Two vertical holes were placed to characterize the undisturbed Miocene sedimentary formations at the eastern and western sides of the fault (WF-1 and WF-2 respectively). WF-2 on the western side intersected the rock formation, which was expected only in WF-1, and several of various intensities. Therefore, WF-3, originally planned as inclined to penetrate the fault, was replaced by the vertical hole further to the west. It again encountered unexpected rocks and faults. Preliminary results of in-situ hydraulic tests suggested that the transmissivity of WF-1 is ten to one hundred times higher than WF-2. The monitoring of hydraulic pressure displayed different head distribution patterns between WF-1 and WF-2 (see also Karasaki, et al.). Based on these results, three hypotheses on the distribution of the Wildcat Fault were proposed: (a) a vertical fault in between WF-1 and WF-2, (b) a more gently dipping fault intersected in WF-2 and WF-3, and (c) a wide zone of faults extending between WF-1 and WF-3. At present, WF-4, an inclined hole to penetrate the possible (eastern?) master fault, is ongoing to test these hypotheses. After the WF-4 investigation, hydrologic and geochemical analyses and modeling of the southern part of the fault will be carried out. A simpler field characterization program will also be carried out in the northern part of the fault. Finally, all the results will be synthesized to improve the comprehensive methodology.
Fault Tolerant Frequent Pattern Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shohdy, Sameh; Vishnu, Abhinav; Agrawal, Gagan
FP-Growth algorithm is a Frequent Pattern Mining (FPM) algorithm that has been extensively used to study correlations and patterns in large scale datasets. While several researchers have designed distributed memory FP-Growth algorithms, it is pivotal to consider fault tolerant FP-Growth, which can address the increasing fault rates in large scale systems. In this work, we propose a novel parallel, algorithm-level fault-tolerant FP-Growth algorithm. We leverage algorithmic properties and MPI advanced features to guarantee an O(1) space complexity, achieved by using the dataset memory space itself for checkpointing. We also propose a recovery algorithm that can use in-memory and disk-based checkpointing,more » though in many cases the recovery can be completed without any disk access, and incurring no memory overhead for checkpointing. We evaluate our FT algorithm on a large scale InfiniBand cluster with several large datasets using up to 2K cores. Our evaluation demonstrates excellent efficiency for checkpointing and recovery in comparison to the disk-based approach. We have also observed 20x average speed-up in comparison to Spark, establishing that a well designed algorithm can easily outperform a solution based on a general fault-tolerant programming model.« less
An improved wrapper-based feature selection method for machinery fault diagnosis
2017-01-01
A major issue of machinery fault diagnosis using vibration signals is that it is over-reliant on personnel knowledge and experience in interpreting the signal. Thus, machine learning has been adapted for machinery fault diagnosis. The quantity and quality of the input features, however, influence the fault classification performance. Feature selection plays a vital role in selecting the most representative feature subset for the machine learning algorithm. In contrast, the trade-off relationship between capability when selecting the best feature subset and computational effort is inevitable in the wrapper-based feature selection (WFS) method. This paper proposes an improved WFS technique before integration with a support vector machine (SVM) model classifier as a complete fault diagnosis system for a rolling element bearing case study. The bearing vibration dataset made available by the Case Western Reserve University Bearing Data Centre was executed using the proposed WFS and its performance has been analysed and discussed. The results reveal that the proposed WFS secures the best feature subset with a lower computational effort by eliminating the redundancy of re-evaluation. The proposed WFS has therefore been found to be capable and efficient to carry out feature selection tasks. PMID:29261689
Progress in Computational Simulation of Earthquakes
NASA Technical Reports Server (NTRS)
Donnellan, Andrea; Parker, Jay; Lyzenga, Gregory; Judd, Michele; Li, P. Peggy; Norton, Charles; Tisdale, Edwin; Granat, Robert
2006-01-01
GeoFEST(P) is a computer program written for use in the QuakeSim project, which is devoted to development and improvement of means of computational simulation of earthquakes. GeoFEST(P) models interacting earthquake fault systems from the fault-nucleation to the tectonic scale. The development of GeoFEST( P) has involved coupling of two programs: GeoFEST and the Pyramid Adaptive Mesh Refinement Library. GeoFEST is a message-passing-interface-parallel code that utilizes a finite-element technique to simulate evolution of stress, fault slip, and plastic/elastic deformation in realistic materials like those of faulted regions of the crust of the Earth. The products of such simulations are synthetic observable time-dependent surface deformations on time scales from days to decades. Pyramid Adaptive Mesh Refinement Library is a software library that facilitates the generation of computational meshes for solving physical problems. In an application of GeoFEST(P), a computational grid can be dynamically adapted as stress grows on a fault. Simulations on workstations using a few tens of thousands of stress and displacement finite elements can now be expanded to multiple millions of elements with greater than 98-percent scaled efficiency on over many hundreds of parallel processors (see figure).
Linear complementarity formulation for 3D frictional sliding problems
Kaven, Joern; Hickman, Stephen H.; Davatzes, Nicholas C.; Mutlu, Ovunc
2012-01-01
Frictional sliding on quasi-statically deforming faults and fractures can be modeled efficiently using a linear complementarity formulation. We review the formulation in two dimensions and expand the formulation to three-dimensional problems including problems of orthotropic friction. This formulation accurately reproduces analytical solutions to static Coulomb friction sliding problems. The formulation accounts for opening displacements that can occur near regions of non-planarity even under large confining pressures. Such problems are difficult to solve owing to the coupling of relative displacements and tractions; thus, many geomechanical problems tend to neglect these effects. Simple test cases highlight the importance of including friction and allowing for opening when solving quasi-static fault mechanics models. These results also underscore the importance of considering the effects of non-planarity in modeling processes associated with crustal faulting.
Early Neogene unroofing of the Sierra Nevada de Santa Marta along the Bucaramanga -Santa Marta Fault
NASA Astrophysics Data System (ADS)
Piraquive Bermúdez, Alejandro; Pinzón, Edna; Bernet, Matthias; Kammer, Andreas; Von Quadt, Albrecht; Sarmiento, Gustavo
2016-04-01
Plate interaction between Caribbean and Nazca plates with Southamerica gave rise to an intricate pattern of tectonic blocks in the Northandean realm. Among these microblocks the Sierra Nevada de Santa Marta (SNSM) represents a fault-bounded triangular massif composed of a representative crustal section of the Northandean margin, in which a Precambrian to Late Paleozoic metamorphic belt is overlain by a Triassic to Jurassic magmatic arc and collateral volcanic suites. Its western border fault belongs to the composite Bucaramanga - Santa Marta fault with a combined left lateral-normal displacement. SE of Santa Marta it exposes remnants of an Oligocene marginal basin, which attests to a first Cenoizoic activation of this crustal-scale lineament. The basin fill consists of a sequence of coarse-grained cobble-pebble conglomerates > 1000 m thick that unconformably overlay the Triassic-Jurassic magmatic arc. Its lower sequence is composed of interbedded siltstones; topwards the sequence becomes dominated by coarser fractions. These sedimentary sequences yields valuable information about exhumation and coeval sedimentation processes that affected the massif's western border since the Upper Eocene. In order to analyse uplifting processes associated with tectonics during early Neogene we performed detrital zircon U-Pb geochronology, detrital thermochronology of zircon and apatites coupled with the description of a stratigraphic section and its facies composition. We compared samples from the Aracataca basin with analog sequences found at an equivalent basin at the Oca Fault at the northern margin of the SNSM. Our results show that sediments of both basins were sourced from Precambrian gneisses, along with Mesozoic acid to intermediate plutons; sedimentation started in the Upper Eocene-Oligocene according to palynomorphs, subsequently in the Upper Oligocene a completion of Jurassic to Cretaceous sources was followed by an increase of Precambrian input that became the dominant source for sediments, this shift in provenance is related to an increase in exhumation and erosion rates. The instauration of such a highly erosive regime since the Upper Oligocene attests how the Santa Marta massif was subject to uplifting and erosion, our data shows how in the Upper Oligocene an exhaustion of Cretaceous to Permian sources was followed by an increase in Neo-Proterozoic to Meso-Proterozoic input that is related to the unroofing of the basement rocks, this accelerated exhumation is directly related to the reactivation of the Orihueca Fault as a NW verging thrust at the interior of the massif coeval with Bucaramanga-Santa Marta Fault trans-tensional tectonics in response to the fragmentation of the Farallon plate into the Nazca an Cocos Plates.
Synthetic Biology: A Unifying View and Review Using Analog Circuits.
Teo, Jonathan J Y; Woo, Sung Sik; Sarpeshkar, Rahul
2015-08-01
We review the field of synthetic biology from an analog circuits and analog computation perspective, focusing on circuits that have been built in living cells. This perspective is well suited to pictorially, symbolically, and quantitatively representing the nonlinear, dynamic, and stochastic (noisy) ordinary and partial differential equations that rigorously describe the molecular circuits of synthetic biology. This perspective enables us to construct a canonical analog circuit schematic that helps unify and review the operation of many fundamental circuits that have been built in synthetic biology at the DNA, RNA, protein, and small-molecule levels over nearly two decades. We review 17 circuits in the literature as particular examples of feedforward and feedback analog circuits that arise from special topological cases of the canonical analog circuit schematic. Digital circuit operation of these circuits represents a special case of saturated analog circuit behavior and is automatically incorporated as well. Many issues that have prevented synthetic biology from scaling are naturally represented in analog circuit schematics. Furthermore, the deep similarity between the Boltzmann thermodynamic equations that describe noisy electronic current flow in subthreshold transistors and noisy molecular flux in biochemical reactions has helped map analog circuit motifs in electronics to analog circuit motifs in cells and vice versa via a `cytomorphic' approach. Thus, a body of knowledge in analog electronic circuit design, analysis, simulation, and implementation may also be useful in the robust and efficient design of molecular circuits in synthetic biology, helping it to scale to more complex circuits in the future.
NASA Astrophysics Data System (ADS)
Mittelstaedt, E. L.; Olive, J. A. L.; Barreyre, T.
2016-12-01
Hydrothermal circulation at the axis of mid-ocean ridges has a profound effect on chemical and biological processes in the deep ocean, and influences the thermo-mechanical state of young oceanic lithosphere. Yet, the geometry of fluid pathways beneath the seafloor and its relation to spatial gradients in crustal permeability remain enigmatic. Here we present new laboratory models of hydrothermal circulation aimed at constraining the self-organization of porous convection cells in homogeneous as well as highly heterogeneous crust analogs. Oceanic crust analogs of known permeability are constructed using uniform glass spheres and 3-D printed plastics with a network of mutually perpendicular tubes. These materials are saturated with corn syrup-water mixtures and heated at their base by a resistive silicone strip heater to initiate thermal convection. A layer of pure fluid (i.e., an analog ocean) overlies the porous medium and allows an "open-top" boundary condition. Areas of fluid discharge from the crust into the ocean are identified by illuminating microscopic glass particles carried by the fluid, using laser sheets. Using particle image velocimetry, we estimate fluid discharge rates as well as the location and extent of fluid recharge. Thermo-couples distributed throughout the crust provide insights into the geometry of convection cells at depth, and enable estimates of convective heat flux, which can be compared to the heat supplied at the base of the system. Preliminary results indicate that in homogeneous crust, convection is largely confined to the narrow slot overlying the heat source. Regularly spaced discharge zones appear focused while recharge areas appear diffuse, and qualitatively resemble the along-axis distribution of hydrothermal fields at oceanic spreading centers. By varying the permeability of the crustal analogs, the viscosity of the convecting fluid, and the imposed basal temperature, our experiments span Rayleigh numbers between 10 and 10,000. This allows us to precisely map the conditions of convection initiation, and test scaling relations between the Nusselt and Rayleigh numbers. Finally, we investigate how these scalings and convection geometry change when a slot of high-permeability material (i.e., an analog fault) is introduced in the middle of the porous domain.
Verification of Functional Fault Models and the Use of Resource Efficient Verification Tools
NASA Technical Reports Server (NTRS)
Bis, Rachael; Maul, William A.
2015-01-01
Functional fault models (FFMs) are a directed graph representation of the failure effect propagation paths within a system's physical architecture and are used to support development and real-time diagnostics of complex systems. Verification of these models is required to confirm that the FFMs are correctly built and accurately represent the underlying physical system. However, a manual, comprehensive verification process applied to the FFMs was found to be error prone due to the intensive and customized process necessary to verify each individual component model and to require a burdensome level of resources. To address this problem, automated verification tools have been developed and utilized to mitigate these key pitfalls. This paper discusses the verification of the FFMs and presents the tools that were developed to make the verification process more efficient and effective.
Simulation of demand-response power management in smart city
NASA Astrophysics Data System (ADS)
Kadam, Kshitija
Smart Grids manage energy efficiently through intelligent monitoring and control of all the components connected to the electrical grid. Advanced digital technology, combined with sensors and power electronics, can greatly improve transmission line efficiency. This thesis proposed a model of a deregulated grid which supplied power to diverse set of consumers and allowed them to participate in decision making process through two-way communication. The deregulated market encourages competition at the generation and distribution levels through communication with the central system operator. A software platform was developed and executed to manage the communication, as well for energy management of the overall system. It also demonstrated self-healing property of the system in case a fault occurs, resulting in an outage. The system not only recovered from the fault but managed to do so in a short time with no/minimum human involvement.
Comparison Between 2D and 3D Simulations of Rate Dependent Friction Using DEM
NASA Astrophysics Data System (ADS)
Wang, C.; Elsworth, D.
2017-12-01
Rate-state dependent constitutive laws of frictional evolution have been successful in representing many of the first- and second- order components of earthquake rupture. Although this constitutive law has been successfully applied in numerical models, difficulty remains in efficient implementation of this constitutive law in computationally-expensive granular mechanics simulations using discrete element methods (DEM). This study introduces a novel approach in implementing a rate-dependent constitutive relation of contact friction into DEM. This is essentially an implementation of a slip-weakening constitutive law onto local particle contacts without sacrificing computational efficiency. This implementation allows the analysis of slip stability of simulated fault gouge materials. Velocity-stepping experiments are reported on both uniform and textured distributions of quartz and talc as 3D analogs of gouge mixtures. Distinct local slip stability parameters (a-b) are assigned to the quartz and talc, respectively. We separately vary talc content from 0 to 100% in the uniform mixtures and talc layer thickness from 1 to 20 particles in the textured mixtures. Applied shear displacements are cycled through velocities of 1μm/s and 10μm/s. Frictional evolution data are collected and compared to 2D simulation results. We show that dimensionality significantly impacts the evolution of friction. 3D simulation results are more representative of laboratory observed behavior and numerical noise is shown at a magnitude of 0.01 in terms of friction coefficient. Stability parameters (a-b) can be straightforwardly obtained from analyzing velocity steps, and are different from locally assigned (a-b) values. Sensitivity studies on normal stress, shear velocity, particle size, local (a-b) values, and characteristic slip distance (Dc) show that the implementation is sensitive to local (a-b) values and relations between (Dc) and particle size.
Development of a space-systems network testbed
NASA Technical Reports Server (NTRS)
Lala, Jaynarayan; Alger, Linda; Adams, Stuart; Burkhardt, Laura; Nagle, Gail; Murray, Nicholas
1988-01-01
This paper describes a communications network testbed which has been designed to allow the development of architectures and algorithms that meet the functional requirements of future NASA communication systems. The central hardware components of the Network Testbed are programmable circuit switching communication nodes which can be adapted by software or firmware changes to customize the testbed to particular architectures and algorithms. Fault detection, isolation, and reconfiguration has been implemented in the Network with a hybrid approach which utilizes features of both centralized and distributed techniques to provide efficient handling of faults within the Network.
Detailed Balance Limit of Efficiency of Broadband-Pumped Lasers.
Nechayev, Sergey; Rotschild, Carmel
2017-09-13
Broadband light sources are a wide class of pumping schemes for lasers including LEDs, sunlight and flash lamps. Recently, efficient coupling of broadband light to high-quality micro-cavities has been demonstrated for on-chip applications and low-threshold solar-pumped lasers via cascade energy transfer. However, the conversion of incoherent to coherent light comes with an inherent price of reduced efficiency, which has yet to be assessed. In this paper, we derive the detailed balance limit of efficiency of broadband-pumped lasers and discuss how it is affected by the need to maintain a threshold population inversion and thermodynamically dictated minimal Stokes' shift. We show that lasers' slope efficiency is analogous to the nominal efficiency of solar cells, limited by thermalisation losses and additional unavoidable Stokes' shift. The lasers' power efficiency is analogous to the detailed balance limit of efficiency of solar cells, affected by the cavity mirrors and impedance matching factor, respectively. As an example we analyze the specific case of solar-pumped sensitized Nd 3+ :YAG-like lasers and define the conditions to reach their thermodynamic limit of efficiency. Our work establishes an upper theoretical limit for the efficiency of broadband-pumped lasers. Our general, yet flexible model also provides a way to incorporate other optical and thermodynamic losses and, hence, to estimate the efficiency of non-ideal broadband-pumped lasers.
Dix, Annika; Wartenburger, Isabell; van der Meer, Elke
2016-10-01
This study on analogical reasoning evaluates the impact of fluid intelligence on adaptive changes in neural efficiency over the course of an experiment and specifies the underlying cognitive processes. Grade 10 students (N=80) solved unfamiliar geometric analogy tasks of varying difficulty. Neural efficiency was measured by the event-related desynchronization (ERD) in the alpha band, an indicator of cortical activity. Neural efficiency was defined as a low amount of cortical activity accompanying high performance during problem-solving. Students solved the tasks faster and more accurately the higher their FI was. Moreover, while high FI led to greater cortical activity in the first half of the experiment, high FI was associated with a neurally more efficient processing (i.e., better performance but same amount of cortical activity) in the second half of the experiment. Performance in difficult tasks improved over the course of the experiment for all students while neural efficiency increased for students with higher but decreased for students with lower fluid intelligence. Based on analyses of the alpha sub-bands, we argue that high fluid intelligence was associated with a stronger investment of attentional resource in the integration of information and the encoding of relations in this unfamiliar task in the first half of the experiment (lower-2 alpha band). Students with lower fluid intelligence seem to adapt their applied strategies over the course of the experiment (i.e., focusing on task-relevant information; lower-1 alpha band). Thus, the initially lower cortical activity and its increase in students with lower fluid intelligence might reflect the overcoming of mental overload that was present in the first half of the experiment. Copyright © 2016 Elsevier Inc. All rights reserved.
FDI based on Artificial Neural Network for Low-Voltage-Ride-Through in DFIG-based Wind Turbine.
Adouni, Amel; Chariag, Dhia; Diallo, Demba; Ben Hamed, Mouna; Sbita, Lassaâd
2016-09-01
As per modern electrical grid rules, Wind Turbine needs to operate continually even in presence severe grid faults as Low Voltage Ride Through (LVRT). Hence, a new LVRT Fault Detection and Identification (FDI) procedure has been developed to take the appropriate decision in order to develop the convenient control strategy. To obtain much better decision and enhanced FDI during grid fault, the proposed procedure is based on voltage indicators analysis using a new Artificial Neural Network architecture (ANN). In fact, two features are extracted (the amplitude and the angle phase). It is divided into two steps. The first is fault indicators generation and the second is indicators analysis for fault diagnosis. The first step is composed of six ANNs which are dedicated to describe the three phases of the grid (three amplitudes and three angle phases). Regarding to the second step, it is composed of a single ANN which analysis the indicators and generates a decision signal that describes the function mode (healthy or faulty). On other hand, the decision signal identifies the fault type. It allows distinguishing between the four faulty types. The diagnosis procedure is tested in simulation and experimental prototype. The obtained results confirm and approve its efficiency, rapidity, robustness and immunity to the noise and unknown inputs. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jourdain, A.; Singh, S. C.; Klinger, Y.
2013-12-01
Transform faults are the major discontinuities and define the main segment boundaries along spreading centres but their anatomy is poorly understood because of their complex seafloor morphology, even though they are observed at all types of spreading centres. Here, we present high-resolution seismic reflection images across the sedimented Andaman Sea Transform Fault where the sediments record the faulting and allow studying the evolution of the transform fault both in space and time. Furthermore, sediments allow the imaging of the faults down to the Moho depth that provides insight on the interplay between tectonic and magmatic processes. On the other hand, overlapping spreading centres (OSC) are small-scale discontinuities, possibly transient, and are observed only along fast or intermediate spreading centres. Exceptionally, an overlapping spreading centre is present at the slow spreading Andaman Sea Spreading Centre, which, we suggest, is due to the presence of thick sediments that hamper the efficient hydrothermal circulation allowing magma to stay much longer in the crust at different depths, and up to close to the segment ends, leading to the development of an overlapping spreading. The seismic reflection images across the OSC indicate the presence of large magma bodies in the crust. Seismic images also provide images of active faults allowing to study the link between faulting and magmatism. Interestingly, an earthquake swarm occurred at propagating limb of the OSC in 2006, after the great 2004 Andaman-Sumatra earthquake of Mw=9.3, highlighting the migration of the OSC westward. In this paper, we will show seismic reflection images and interpret these images in the light of bathymetry and earthquake data, and provide the anatomy of the ridge discontinuities along the slow spreading sedimented Andaman Sea Spreading Centre.
NASA Technical Reports Server (NTRS)
Dyar, M. D.
1985-01-01
Compositions analogous to lunar green, organge, and brown glasses were synthesized under consistent conditions, then quenched into a variety of different media when the samples were removed from the furnace. Iron valence and coordination are a direct function of quench media used, spanning the range from brine/ice (most effective quench), water, butyl phthalate, silicone oil, liquid nitrogen, highly reducing CO-CO2 gas, to air (least efficient quench). In the green and brown glasses, Fe(3+) in four-fold and six-fold coordination is observed in the slowest-quenched samples; Fe(2+) coordination varies directly with quench efficiency. Less pronounced changes were observed in the Ti-rich orange glass. Therefore the remote-sensed spectrum of a glass-bearing regolith on the Moon may be influenced by the process by which the glass cooled, and extreme caution must be used when comparing spectra of synthetic glass analogs with real lunar glasses.
NASA Technical Reports Server (NTRS)
Dyar, M. D.
1984-01-01
Compositions analogous to lunar green, orange, and brown glasses were synthesized under consistent conditions, then quenched into a variety of different media when the samples were removed from the furnace. Iron valence and coordination are a direct function of quench media used, spanning the range from brine/ice (most effective quench), water, butyl phthalate, silicone oil, liquid nitrogen, highly reducing CO-CO2 gas, to air (least efficient quench). In the green and brown glasses, Fe(3+) in four-fold and six-fold coordination is observed in the slowest-quenched samples; Fe(2+) coordination varies directly with quench efficiency. Less pronounced changes were observed in the Ti-rich orange glass. Therefore the remote-sensed spectrum of a glass-bearing regolith on the moon may be influenced by the process by which the glass cooled, and extreme caution must be used when comparing spectra of synthetic glass analogs with real lunar glasses.
NASA Astrophysics Data System (ADS)
Gao, Hong-Yue; Liu, Pan; Zeng, Chao; Yao, Qiu-Xiang; Zheng, Zhiqiang; Liu, Jicheng; Zheng, Huadong; Yu, Ying-Jie; Zeng, Zhen-Xiang; Sun, Tao
2016-09-01
We present holographic storage of three-dimensional (3D) images and data in a photopolymer film without any applied electric field. Its absorption and diffraction efficiency are measured, and reflective analog hologram of real object and image of digital information are recorded in the films. The photopolymer is compared with polymer dispersed liquid crystals as holographic materials. Besides holographic diffraction efficiency of the former is little lower than that of the latter, this work demonstrates that the photopolymer is more suitable for analog hologram and big data permanent storage because of its high definition and no need of high voltage electric field. Therefore, our study proposes a potential holographic storage material to apply in large size static 3D holographic displays, including analog hologram displays, digital hologram prints, and holographic disks. Project supported by the National Natural Science Foundation of China (Grant Nos. 11474194, 11004037, and 61101176) and the Natural Science Foundation of Shanghai, China (Grant No. 14ZR1415500).
Lin, J.; Stein, R.S.
2004-01-01
We argue that key features of thrust earthquake triggering, inhibition, and clustering can be explained by Coulomb stress changes, which we illustrate by a suite of representative models and by detailed examples. Whereas slip on surface-cutting thrust faults drops the stress in most of the adjacent crust, slip on blind thrust faults increases the stress on some nearby zones, particularly above the source fault. Blind thrusts can thus trigger slip on secondary faults at shallow depth and typically produce broadly distributed aftershocks. Short thrust ruptures are particularly efficient at triggering earthquakes of similar size on adjacent thrust faults. We calculate that during a progressive thrust sequence in central California the 1983 Mw = 6.7 Coalinga earthquake brought the subsequent 1983 Mw = 6.0 Nunez and 1985 Mw = 6.0 Kettleman Hills ruptures 10 bars and 1 bar closer to Coulomb failure. The idealized stress change calculations also reconcile the distribution of seismicity accompanying large subduction events, in agreement with findings of prior investigations. Subduction zone ruptures are calculated to promote normal faulting events in the outer rise and to promote thrust-faulting events on the periphery of the seismic rupture and its downdip extension. These features are evident in aftershocks of the 1957 Mw = 9.1 Aleutian and other large subduction earthquakes. We further examine stress changes on the rupture surface imparted by the 1960 Mw = 9.5 and 1995 Mw = 8.1 Chile earthquakes, for which detailed slip models are available. Calculated Coulomb stress increases of 2-20 bars correspond closely to sites of aftershocks and postseismic slip, whereas aftershocks are absent where the stress drops by more than 10 bars. We also argue that slip on major strike-slip systems modulates the stress acting on nearby thrust and strike-slip faults. We calculate that the 1857 Mw = 7.9 Fort Tejon earthquake on the San Andreas fault and subsequent interseismic slip brought the Coalinga fault ???1 bar closer to failure but inhibited failure elsewhere on the Coast Ranges thrust faults. The 1857 earthquake also promoted failure on the White Wolf reverse fault by 8 bars, which ruptured in the 1952 Mw = 7.3 Kern County shock but inhibited slip on the left-lateral Garlock fault, which has not ruptured since 1857. We thus contend that stress transfer exerts a control on the seismicity of thrust faults across a broad spectrum of spatial and temporal scales. Copyright 2004 by the American Geophysical Union.
Mixed H2/H∞-Based Fusion Estimation for Energy-Limited Multi-Sensors in Wearable Body Networks
Li, Chao; Zhang, Zhenjiang; Chao, Han-Chieh
2017-01-01
In wireless sensor networks, sensor nodes collect plenty of data for each time period. If all of data are transmitted to a Fusion Center (FC), the power of sensor node would run out rapidly. On the other hand, the data also needs a filter to remove the noise. Therefore, an efficient fusion estimation model, which can save the energy of the sensor nodes while maintaining higher accuracy, is needed. This paper proposes a novel mixed H2/H∞-based energy-efficient fusion estimation model (MHEEFE) for energy-limited Wearable Body Networks. In the proposed model, the communication cost is firstly reduced efficiently while keeping the estimation accuracy. Then, the parameters in quantization method are discussed, and we confirm them by an optimization method with some prior knowledge. Besides, some calculation methods of important parameters are researched which make the final estimates more stable. Finally, an iteration-based weight calculation algorithm is presented, which can improve the fault tolerance of the final estimate. In the simulation, the impacts of some pivotal parameters are discussed. Meanwhile, compared with the other related models, the MHEEFE shows a better performance in accuracy, energy-efficiency and fault tolerance. PMID:29280950
Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Shi, Hongling; Gholami, Khalid El
2014-01-01
Operating system (OS) technology is significant for the proliferation of the wireless sensor network (WSN). With an outstanding OS; the constrained WSN resources (processor; memory and energy) can be utilized efficiently. Moreover; the user application development can be served soundly. In this article; a new hybrid; real-time; memory-efficient; energy-efficient; user-friendly and fault-tolerant WSN OS MIROS is designed and implemented. MIROS implements the hybrid scheduler and the dynamic memory allocator. Real-time scheduling can thus be achieved with low memory consumption. In addition; it implements a mid-layer software EMIDE (Efficient Mid-layer Software for User-Friendly Application Development Environment) to decouple the WSN application from the low-level system. The application programming process can consequently be simplified and the application reprogramming performance improved. Moreover; it combines both the software and the multi-core hardware techniques to conserve the energy resources; improve the node reliability; as well as achieve a new debugging method. To evaluate the performance of MIROS; it is compared with the other WSN OSes (TinyOS; Contiki; SOS; openWSN and mantisOS) from different OS concerns. The final evaluation results prove that MIROS is suitable to be used even on the tight resource-constrained WSN nodes. It can support the real-time WSN applications. Furthermore; it is energy efficient; user friendly and fault tolerant. PMID:25248069