Sample records for deductive fault simulation

  1. A 3D modeling approach to complex faults with multi-source data

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan

    2015-04-01

    Fault modeling is a very important step in building an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to construct complex fault models; however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a fault modeling workflow that can integrate multi-source data to construct fault models. For faults that cannot be modeled with these data, especially small-scale faults or faults approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, a fault cutting algorithm can supplement the available fault points at locations where faults cut each other. Adding fault points in poorly sampled areas not only makes fault model construction more efficient but also reduces manual intervention. By using fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures whether or not the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that the method can be applied to broad and complex geological areas.

  2. Adjustable Autonomy Testbed

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Schreckenghost, Debra K.

    2001-01-01

    The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.

  3. 20 CFR 404.510 - When an individual is “without fault” in a deduction overpayment.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false When an individual is “without fault” in a deduction overpayment. 404.510 Section 404.510 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL... or Recovery of Overpayments, and Liability of a Certifying Officer § 404.510 When an individual is...

  4. Performance investigation on DCSFCL considering different magnetic materials

    NASA Astrophysics Data System (ADS)

    Yuan, Jiaxin; Zhou, Hang; Zhong, Yongheng; Gan, Pengcheng; Gao, Yanhui; Muramatsu, Kazuhiro; Du, Zhiye; Chen, Baichao

    2018-05-01

    In order to protect high voltage direct current (HVDC) systems from the destructive consequences caused by fault currents, a novel concept of HVDC system fault current limiter (DCSFCL) was proposed previously. Since the DCSFCL is based on saturable-core reactor theory, the iron core is key to its final performance. Therefore, three typical kinds of soft magnetic materials were chosen to determine their impact on the performance of the DCSFCL. The different characteristics of the materials were compared and theoretical deductions were carried out for each. Meanwhile, 3D models applying the three materials were built separately and finite element analysis simulations were performed to compare the results and further verify the assumptions. It turns out that a material combining a large saturation flux density Bs, like silicon steel, with a short demagnetization time, like ferrite, might be the best choice for the DCSFCL, which suggests a future research direction for magnetic materials.

  5. Reliability analysis of a wastewater treatment plant using fault tree analysis and Monte Carlo simulation.

    PubMed

    Taheriyoun, Masoud; Moradinejad, Saber

    2015-01-01

    The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are the variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, reliability was studied for the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator mistakes, physical damage, and design problems. The analytical methods used are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tiers. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
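
    The top-event quantification described above combines minimal cut sets with Monte Carlo simulation. A minimal Python sketch of both calculations is shown below; the basic events, cut sets, and probabilities are illustrative assumptions, not values from the study.

      import math
      import random

      # Hypothetical basic-event probabilities (illustrative only, not from the study).
      p = {"operator_error": 0.05, "aeration_failure": 0.02,
           "clarifier_damage": 0.01, "design_deficiency": 0.005}

      # Minimal cut sets: each is a combination of basic events sufficient for the top event
      # (here, "allowable effluent BOD violated").
      cut_sets = [{"operator_error", "aeration_failure"},
                  {"clarifier_damage"},
                  {"operator_error", "design_deficiency"}]

      # Rare-event (upper-bound) approximation from the minimal cut sets.
      p_top_mcs = sum(math.prod(p[e] for e in cs) for cs in cut_sets)

      # Monte Carlo estimate: sample basic events independently and test the cut sets.
      def top_event(sample):
          return any(all(sample[e] for e in cs) for cs in cut_sets)

      random.seed(1)
      trials = 100_000
      hits = sum(top_event({e: random.random() < pe for e, pe in p.items()})
                 for _ in range(trials))
      print(f"cut-set bound: {p_top_mcs:.5f}, Monte Carlo: {hits / trials:.5f}")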

  6. Deductibles in health insurance

    NASA Astrophysics Data System (ADS)

    Dimitriyadis, I.; Öney, Ü. N.

    2009-11-01

    This study is an extension of a simulation study that was developed to determine ruin probabilities in health insurance. The study concentrates on inpatient and outpatient benefits for customers of varying age bands. Loss distributions are modelled through the Allianz tool pack for different classes of insureds. Premiums at different levels of deductibles are derived in the simulation, and ruin probabilities are computed assuming a linear loading on the premium. The increase in the probability of ruin at high levels of the deductible clearly shows the insufficiency of proportional loading in deductible premiums. The PH-transform pricing rule developed by Wang is analyzed as an alternative pricing rule. A simple case, where the insured is assumed to be an exponential utility decision maker while the insurer's pricing rule is a PH-transform, is also treated.
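
    As a rough numerical companion to the pricing comparison above, the sketch below computes, for an exponential loss under a deductible, the expected payout, a proportionally loaded premium, and Wang's PH-transform premium. The loss distribution and parameter values are placeholders, not the Allianz loss models used in the study.

      import math
      import random

      mu, d, loading, rho = 1000.0, 500.0, 0.2, 1.5   # assumed mean loss, deductible, loading, PH index

      random.seed(0)
      losses = [random.expovariate(1.0 / mu) for _ in range(200_000)]
      payouts = [max(x - d, 0.0) for x in losses]

      net_premium = sum(payouts) / len(payouts)        # E[(X - d)+]
      prop_premium = (1.0 + loading) * net_premium     # proportional loading

      # PH-transform premium: integral of S(y)**(1/rho), S being the payout survival function
      # (known in closed form for the exponential loss assumed here).
      def survival(y):
          return math.exp(-(d + y) / mu)

      step, upper = 1.0, 20 * mu
      grid = [i * step for i in range(int(upper / step))]
      ph_premium = sum(survival(y) ** (1.0 / rho) * step for y in grid)

      print(f"net={net_premium:.1f}  proportional={prop_premium:.1f}  PH-transform={ph_premium:.1f}")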

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortoleva, Peter J.

    Illustrative embodiments of systems and methods for the deductive multiscale simulation of macromolecules are disclosed. In one illustrative embodiment, a deductive multiscale simulation method may include (i) constructing a set of order parameters that model one or more structural characteristics of a macromolecule, (ii) simulating an ensemble of atomistic configurations for the macromolecule using instantaneous values of the set of order parameters, (iii) simulating thermal-average forces and diffusivities for the ensemble of atomistic configurations, and (iv) evolving the set of order parameters via Langevin dynamics using the thermal-average forces and diffusivities.
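
    A toy Python sketch of step (iv), the Langevin evolution of order parameters driven by thermal-average forces and diffusivities, follows. The quadratic free-energy surface standing in for the atomistically averaged force, and all parameter values, are assumptions made only for illustration.

      import math
      import random

      random.seed(42)
      kT = 1.0
      phi = [1.5, -0.8]        # order parameters (e.g., overall extension and twist)
      D = [0.10, 0.05]         # diffusivities associated with each order parameter
      dt, steps = 1e-3, 5000

      def thermal_average_force(phi):
          # Placeholder for the ensemble-averaged force: gradient of F = 0.5 * sum(k_i * phi_i**2).
          k = [2.0, 4.0]
          return [-k[i] * phi[i] for i in range(len(phi))]

      for _ in range(steps):
          f = thermal_average_force(phi)
          for i in range(len(phi)):
              drift = D[i] / kT * f[i] * dt
              noise = math.sqrt(2.0 * D[i] * dt) * random.gauss(0.0, 1.0)
              phi[i] += drift + noise

      print("order parameters after relaxation:", [round(x, 3) for x in phi])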

  8. 48 CFR 52.232-7 - Payments under Time-and-Materials and Labor-Hour Contracts.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the Contractor to withhold amounts from its billings until a reserve is set aside in an amount that... Disputes clause of this contract. If the Schedule provides rates for overtime, the premium portion of those... Contractor shall not deduct from gross costs the benefits lost without fault or neglect on the part of the...

  9. Fault Tree Handbook

    DTIC Science & Technology

    1981-01-01

    are applied to determine what system states (usually failed states) are possible; deductive methods are applied to determine how a given system state...Similar considerations apply to the single failures of CVA, BVB and CVB and this important additional information has been displayed in the principal...way. The point "maximum tolerable failure" corresponds to the survival point of the company building the aircraft. Above that point, only intolerable

  10. Causation mechanism analysis for haze pollution related to vehicle emission in Guangzhou, China by employing the fault tree approach.

    PubMed

    Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Xu, Pingru; Qian, Yu

    2016-05-01

    Recently, China has frequently experienced large-scale, severe and persistent haze pollution due to surging urbanization and industrialization and rapid growth in the number of motor vehicles and in energy consumption. Vehicle emissions due to the consumption of large amounts of fossil fuel are no doubt a critical factor in the haze pollution. This work focuses on the causation mechanism of haze pollution related to vehicle emissions for Guangzhou city by employing the Fault Tree Analysis (FTA) method for the first time. With the establishment of the fault tree system of "Haze weather-Vehicle exhausts explosive emission", all of the important risk factors are discussed and identified by using this deductive FTA method. The qualitative and quantitative assessments of the fault tree system are carried out based on the structure, probability and critical importance degree analysis of the risk factors. The study may provide a new, simple and effective tool/strategy for the causation mechanism analysis and risk management of haze pollution in China. Copyright © 2016 Elsevier Ltd. All rights reserved.
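
    The critical-importance-degree analysis mentioned above can be illustrated with a small sketch: given basic-event probabilities and minimal cut sets, compute each event's Birnbaum importance and its critical importance degree. The events, cut sets, and probabilities are invented for illustration and are not the Guangzhou data.

      import math

      p = {"high_vehicle_density": 0.30, "aged_vehicle_fleet": 0.20,
           "stagnant_weather": 0.15, "weak_emission_control": 0.10}

      cut_sets = [{"high_vehicle_density", "stagnant_weather"},
                  {"aged_vehicle_fleet", "weak_emission_control"}]

      def p_top(probs):
          # Exact inclusion-exclusion over the two cut sets (events assumed independent).
          p1 = math.prod(probs[e] for e in cut_sets[0])
          p2 = math.prod(probs[e] for e in cut_sets[1])
          p12 = math.prod(probs[e] for e in cut_sets[0] | cut_sets[1])
          return p1 + p2 - p12

      base = p_top(p)
      for event, pe in p.items():
          birnbaum = p_top({**p, event: 1.0}) - p_top({**p, event: 0.0})
          criticality = birnbaum * pe / base      # critical importance degree
          print(f"{event:22s} Birnbaum={birnbaum:.4f}  critical importance={criticality:.3f}")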

  11. Introduction to Concurrent Engineering: Electronic Circuit Design and Production Applications

    DTIC Science & Technology

    1992-09-01

    STD-1629. Failure mode distribution data for many different types of parts may be found in RAC publication FMD-91. FMEA utilizes inductive logic in a...contrasts with a Fault Tree Analysis (FTA) which utilizes deductive logic in a "top down" approach. In FTA, a system failure is assumed and traced down...Analysis (FTA) is a graphical method of risk analysis used to identify critical failure modes within a system or equipment. Utilizing a pictorial approach

  12. Hierarchical Simulation to Assess Hardware and Software Dependability

    NASA Technical Reports Server (NTRS)

    Ries, Gregory Lawrence

    1997-01-01

    This thesis presents a method for conducting hierarchical simulations to assess system hardware and software dependability. The method is intended to model embedded microprocessor systems. A key contribution of the thesis is the idea of using fault dictionaries to propagate fault effects upward from the level of abstraction where a fault model is assumed to the system level where the ultimate impact of the fault is observed. A second important contribution is the analysis of the software behavior under faults as well as the hardware behavior. The simulation method is demonstrated and validated in four case studies analyzing Myrinet, a commercial, high-speed networking system. One key result from the case studies shows that the simulation method predicts the same fault impact 87.5% of the time as is obtained by similar fault injections into a real Myrinet system. Reasons for the remaining discrepancy are examined in the thesis. A second key result shows the reduction in the number of simulations needed due to the fault dictionary method. In one case study, 500 faults were injected at the chip level, but only 255 propagated to the system level. Of these 255 faults, 110 shared identical fault dictionary entries at the system level and so did not need to be resimulated. The necessary number of system-level simulations was therefore reduced from 500 to 145. Finally, the case studies show how the simulation method can be used to improve the dependability of the target system. The simulation analysis was used to add recovery to the target software for the most common fault propagation mechanisms that would cause the software to hang. After the modification, the number of hangs was reduced by 60% for fault injections into the real system.
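
    The bookkeeping behind the fault-dictionary reduction described above (chip-level injections mapped to propagated error patterns, with identical entries sharing one system-level simulation) can be sketched as follows; the fault list, the masking rate, and the abstract error patterns are invented stand-ins for the Myrinet chip-level simulations.

      import random

      random.seed(7)
      chip_faults = [f"fault_{i}" for i in range(500)]

      def chip_level_effect(fault):
          """Stand-in for a chip-level simulation: return the error pattern seen at the
          module boundary, or None if the fault is masked and never propagates."""
          if random.random() < 0.49:                 # roughly half the faults are masked
              return None
          return (random.randrange(64), random.choice(["bit_flip", "stuck_at", "parity", "timeout"]))

      dictionary = {}                                # error pattern -> chip-level faults producing it
      for fault in chip_faults:
          effect = chip_level_effect(fault)
          if effect is not None:
              dictionary.setdefault(effect, []).append(fault)

      propagated = sum(len(v) for v in dictionary.values())
      print(f"{len(chip_faults)} injected, {propagated} propagate, "
            f"{len(dictionary)} system-level simulations actually needed")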

  13. Hardware Fault Simulator for Microprocessors

    NASA Technical Reports Server (NTRS)

    Hess, L. M.; Timoc, C. C.

    1983-01-01

    Breadboarded circuit is faster and more thorough than software simulator. Elementary fault simulator for AND gate uses three gates and shift register to simulate stuck-at-one or stuck-at-zero conditions at inputs and output. Experimental results showed hardware fault simulator for microprocessor gave faster results than software simulator, by two orders of magnitude, with one test being applied every 4 microseconds.
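
    A software counterpart of the elementary AND-gate fault simulator is sketched below: it enumerates stuck-at-0 and stuck-at-1 conditions on each input and on the output and lists the input patterns that detect each fault. This is only an illustration of the stuck-at model, not a model of the breadboarded hardware itself.

      from itertools import product

      def and_gate(a, b, fault=None):
          # fault is (node, value) with node in {"a", "b", "out"} and value in {0, 1}
          if fault:
              node, value = fault
              if node == "a": a = value
              if node == "b": b = value
          out = a & b
          if fault and fault[0] == "out":
              out = fault[1]
          return out

      faults = [(node, v) for node in ("a", "b", "out") for v in (0, 1)]
      for fault in faults:
          detecting = [(a, b) for a, b in product((0, 1), repeat=2)
                       if and_gate(a, b) != and_gate(a, b, fault)]
          print(f"stuck-at-{fault[1]} on {fault[0]!r}: detected by inputs {detecting}")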

  14. Method and system for fault accommodation of machines

    NASA Technical Reports Server (NTRS)

    Goebel, Kai Frank (Inventor); Subbu, Rajesh Venkat (Inventor); Rausch, Randal Thomas (Inventor); Frederick, Dean Kimball (Inventor)

    2011-01-01

    A method for multi-objective fault accommodation using predictive modeling is disclosed. The method includes using a simulated machine that simulates a faulted actual machine, and using a simulated controller that simulates an actual controller. A multi-objective optimization process is performed, based on specified control settings for the simulated controller and specified operational scenarios for the simulated machine controlled by the simulated controller, to generate a Pareto frontier-based solution space relating performance of the simulated machine to settings of the simulated controller, including adjustment to the operational scenarios to represent a fault condition of the simulated machine. Control settings of the actual controller are adjusted, represented by the simulated controller, for controlling the actual machine, represented by the simulated machine, in response to a fault condition of the actual machine, based on the Pareto frontier-based solution space, to maximize desirable operational conditions and minimize undesirable operational conditions while operating the actual machine in a region of the solution space defined by the Pareto frontier.
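
    The Pareto-frontier step of the method can be illustrated with a short sketch that keeps only non-dominated controller settings when scoring two competing objectives. The candidate settings and the two objectives (a performance measure to maximize and a degradation risk to minimize) are invented for illustration.

      import random

      random.seed(3)
      candidates = [{"setting": i,
                     "performance": random.uniform(0.0, 1.0),   # to maximize
                     "risk": random.uniform(0.0, 1.0)}          # to minimize
                    for i in range(50)]

      def dominates(a, b):
          return (a["performance"] >= b["performance"] and a["risk"] <= b["risk"]
                  and (a["performance"] > b["performance"] or a["risk"] < b["risk"]))

      pareto = [c for c in candidates
                if not any(dominates(other, c) for other in candidates)]
      for c in sorted(pareto, key=lambda c: c["risk"]):
          print(f"setting {c['setting']:2d}: performance={c['performance']:.2f}, risk={c['risk']:.2f}")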

  15. A circuit-based photovoltaic module simulator with shadow and fault settings

    NASA Astrophysics Data System (ADS)

    Chao, Kuei-Hsiang; Chao, Yuan-Wei; Chen, Jyun-Ping

    2016-03-01

    The main purpose of this study was to develop a photovoltaic (PV) module simulator. The proposed simulator, using electrical parameters from solar cells, could simulate output characteristics not only under normal operating conditions, but also under partial shadow and fault conditions. Such a simulator should possess the advantages of low cost, small size and easy realization. Experiments have shown that results from a proposed PV simulator of this kind are very close to those from simulation software during partial shadow conditions, with negligible differences during fault occurrence. Meanwhile, the PV module simulator, as developed, could be used on various types of series-parallel connections to form PV arrays, to conduct experiments on partial shadow and fault events occurring in some of the modules. Such experiments are designed to explore the impact of shadow and fault conditions on the output characteristics of the system as a whole.

  16. Advanced diagnostic system for piston slap faults in IC engines, based on the non-stationary characteristics of the vibration signals

    NASA Astrophysics Data System (ADS)

    Chen, Jian; Randall, Robert Bond; Peeters, Bart

    2016-06-01

    Artificial Neural Networks (ANNs) have the potential to solve the problem of automated diagnostics of piston slap faults, but the critical issue for the successful application of ANN is the training of the network by a large amount of data in various engine conditions (different speed/load conditions in normal condition, and with different locations/levels of faults). On the other hand, the latest simulation technology provides a useful alternative in that the effect of clearance changes may readily be explored without recourse to cutting metal, in order to create enough training data for the ANNs. In this paper, based on some existing simplified models of piston slap, an advanced multi-body dynamic simulation software was used to simulate piston slap faults with different speeds/loads and clearance conditions. Meanwhile, the simulation models were validated and updated by a series of experiments. Three-stage network systems are proposed to diagnose piston faults: fault detection, fault localisation and fault severity identification. Multi Layer Perceptron (MLP) networks were used in the detection stage and severity/prognosis stage and a Probabilistic Neural Network (PNN) was used to identify which cylinder has faults. Finally, it was demonstrated that the networks trained purely on simulated data can efficiently detect piston slap faults in real tests and identify the location and severity of the faults as well.

  17. Fault Analysis in Solar Photovoltaic Arrays

    NASA Astrophysics Data System (ADS)

    Zhao, Ye

    Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown to, at times, prevent the fault current protection devices to trip. A small-scale experimental PV benchmark system has been developed in Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low irradiance condition. The other is a fault evolution in a PV array during night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition". However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" and "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.

  18. Earthquake cycle simulations with rate-and-state friction and power-law viscoelasticity

    NASA Astrophysics Data System (ADS)

    Allison, Kali L.; Dunham, Eric M.

    2018-05-01

    We simulate earthquake cycles with rate-and-state fault friction and off-fault power-law viscoelasticity for the classic 2D antiplane shear problem of a vertical, strike-slip plate boundary fault. We investigate the interaction between fault slip and bulk viscous flow with experimentally-based flow laws for quartz-diorite and olivine for the crust and mantle, respectively. Simulations using three linear geotherms (dT/dz = 20, 25, and 30 K/km) produce different deformation styles at depth, ranging from significant interseismic fault creep to purely bulk viscous flow. However, they have almost identical earthquake recurrence interval, nucleation depth, and down-dip coseismic slip limit. Despite these similarities, variations in the predicted surface deformation might permit discrimination of the deformation mechanism using geodetic observations. Additionally, in the 25 and 30 K/km simulations, the crust drags the mantle; the 20 K/km simulation also predicts this, except within 10 km of the fault where the reverse occurs. However, basal tractions play a minor role in the overall force balance of the lithosphere, at least for the flow laws used in our study. Therefore, the depth-integrated stress on the fault is balanced primarily by shear stress on vertical, fault-parallel planes. Because strain rates are higher directly below the fault than far from it, stresses are also higher. Thus, the upper crust far from the fault bears a substantial part of the tectonic load, resulting in unrealistically high stresses. In the real Earth, this might lead to distributed plastic deformation or formation of subparallel faults. Alternatively, fault pore pressures in excess of hydrostatic and/or weakening mechanisms such as grain size reduction and thermo-mechanical coupling could lower the strength of the ductile fault root in the lower crust and, concomitantly, off-fault upper crustal stresses.
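
    The rate-and-state friction used on the fault can be illustrated in isolation with a velocity-step sketch using the aging law: the direct effect raises friction at the step and the state evolution then relaxes it to a lower steady-state value because b > a (velocity weakening). Parameter values are generic laboratory-style numbers, not those of the study.

      import math

      mu0, a, b = 0.6, 0.010, 0.015     # reference friction, direct-effect and evolution-effect parameters
      V0, Dc = 1e-6, 1e-5               # reference slip rate (m/s) and state evolution distance (m)

      def friction(V, theta):
          return mu0 + a * math.log(V / V0) + b * math.log(V0 * theta / Dc)

      theta = Dc / V0                   # start at steady state for V = V0
      dt, t, history = 1e-3, 0.0, []
      for step in range(200_000):
          V = V0 if t < 100.0 else 10.0 * V0       # tenfold velocity step at t = 100 s
          theta += (1.0 - V * theta / Dc) * dt     # aging-law state evolution
          t += dt
          if step % 20_000 == 0:
              history.append((round(t, 1), round(friction(V, theta), 4)))

      print(history)   # friction jumps up at the step, then decays below its pre-step value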

  19. Using Fault Trees to Advance Understanding of Diagnostic Errors.

    PubMed

    Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep

    2017-11-01

    Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of diagnostic processes and clinical work flows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been successfully used in other high-complexity, high-risk contexts. How factors contributing to diagnostic errors can be systematically modeled by FTA to inform error understanding and error prevention is demonstrated. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, as well as rapid identification of common pathways and interactions in a unified fashion. In addition, it enables calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.

  20. Tectonic aspects of the guatemala earthquake of 4 february 1976.

    PubMed

    Plafker, G

    1976-09-24

    The locations of surface ruptures and the main shock epicenter indicate that the disastrous Guatemala earthquake of 4 February 1976 was tectonic in origin and generated mainly by slip on the Motagua fault, which has an arcuate roughly east-west trend across central Guatemala. Fault breakage was observed for 230 km. Displacement is predominantly horizontal and sinistral with a maximum measured offset of 340 cm and an average of about 100 cm. Secondary fault breaks trending roughly north-northeast to south-southwest have been found in a zone about 20 km long and 8 km wide extending from the western suburbs of Guatemala City to near Mixco, and similar faults with more subtle surface expression probably occur elsewhere in the Guatemalan Highlands. Displacements on the secondary faults are predominantly extensional and dip-slip, with as much as 15 cm vertical offset on a single fracture. The primary fault that broke during the earthquake involved roughly 10 percent of the length of the great transform fault system that defines the boundary between the Caribbean and North American plates. The observed sinistral displacement is striking confirmation of deductions regarding the late Cenozoic relative motion between these two crustal plates that were based largely on indirect geologic and geophysical evidence. The earthquake-related secondary faulting, together with the complex pattern of geologically young normal faults that occur in the Guatemalan Highlands and elsewhere in western Central America, suggest that the eastern wedge-shaped part of the Caribbean plate, roughly between the Motagua fault system and the volcanic arc, is being pulled apart in tension and left behind as the main mass of the plate moves relatively eastward. Because of their proximity to areas of high population density, shallow-focus earthquakes that originate on the Motagua fault system, on the system of predominantly extensional faults within the western part of the Caribbean plate, and in association with volcanism may pose a more serious seismic hazard than the more numerous (but generally more distant) earthquakes that are generated in the eastward-dipping subduction zone beneath Middle America.

  1. Simulated fault injection - A methodology to evaluate fault tolerant microprocessor architectures

    NASA Technical Reports Server (NTRS)

    Choi, Gwan S.; Iyer, Ravishankar K.; Carreno, Victor A.

    1990-01-01

    A simulation-based fault-injection method for validating fault-tolerant microprocessor architectures is described. The approach uses mixed-mode simulation (electrical/logic analysis), and injects transient errors in run-time to assess the resulting fault impact. As an example, a fault-tolerant architecture which models the digital aspects of a dual-channel real-time jet-engine controller is used. The level of effectiveness of the dual configuration with respect to single and multiple transients is measured. The results indicate 100 percent coverage of single transients. Approximately 12 percent of the multiple transients affect both channels; none result in controller failure since two additional levels of redundancy exist.

  2. PV Systems Reliability Final Technical Report: Ground Fault Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lavrova, Olga; Flicker, Jack David; Johnson, Jay

    We have examined ground faults in PhotoVoltaic (PV) arrays and the efficacy of fuses, current detection (RCD), current sense monitoring/relays (CSM), isolation/insulation (Riso) monitoring, and Ground Fault Detection and Isolation (GFID), using simulations based on a SPICE (Simulation Program with Integrated Circuit Emphasis) ground fault circuit model, experimental ground faults installed on real arrays, and theoretical equations.

  3. Three-dimensional curved grid finite-difference modelling for non-planar rupture dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenguo; Zhang, Wei; Chen, Xiaofei

    2014-11-01

    In this study, we present a new method for simulating the 3-D dynamic rupture process occurring on a non-planar fault. The method is based on the curved-grid finite-difference method (CG-FDM) proposed by Zhang & Chen and Zhang et al. to simulate the propagation of seismic waves in media with arbitrary irregular surface topography. While keeping the advantages of conventional FDM, that is, computational efficiency and easy implementation, the CG-FDM is also flexible in modelling complex fault geometry by using general curvilinear grids, and thus is able to model the rupture dynamics of faults with complex geometry, such as obliquely dipping faults, non-planar faults, faults with step-overs, and fault branching, even when irregular topography exists. The accuracy and robustness of this new method have been validated by comparison with previous results of Day et al. and with benchmarks for rupture dynamics simulations. Finally, two simulations of rupture dynamics with complex fault geometry, that is, a non-planar fault and a fault rupturing a free surface with topography, are presented. A very interesting phenomenon was observed: topography can weaken the tendency for a supershear transition to occur when the rupture breaks out at a free surface. Undoubtedly, this new method provides an effective, or at least an alternative, tool to simulate the rupture dynamics of a complex non-planar fault, and can be applied to model the rupture dynamics of a real earthquake with complex geometry.

  4. Modeling, Detection, and Disambiguation of Sensor Faults for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai F.; Curran, Simon

    2009-01-01

    Sensor faults continue to be a major hurdle for systems health management to reach its full potential. At the same time, few recorded instances of sensor faults exist. It is equally difficult to seed particular sensor faults. Therefore, research is underway to better understand the different fault modes seen in sensors and to model the faults. The fault models can then be used in simulated sensor fault scenarios to ensure that algorithms can distinguish between sensor faults and system faults. The paper illustrates the work with data collected from an electro-mechanical actuator in an aerospace setting, equipped with temperature, vibration, current, and position sensors. The most common sensor faults, such as bias, drift, scaling, and dropout were simulated and injected into the experimental data, with the goal of making these simulations as realistic as feasible. A neural network based classifier was then created and tested on both experimental data and the more challenging randomized data sequences. Additional studies were also conducted to determine sensitivity of detection and disambiguation efficacy to severity of fault conditions.
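
    The fault-injection step described above (superimposing common sensor fault modes on nominal data) can be sketched briefly; the sine-wave "position sensor" signal, the fault onset, and the fault magnitudes are illustrative assumptions rather than the actuator test data.

      import math
      import random

      random.seed(0)
      t = [i * 0.01 for i in range(1000)]
      nominal = [math.sin(2 * math.pi * 0.5 * ti) + random.gauss(0.0, 0.01) for ti in t]

      def inject(signal, mode, start=400):
          faulty = list(signal)
          for i in range(start, len(signal)):
              if mode == "bias":
                  faulty[i] += 0.5
              elif mode == "drift":
                  faulty[i] += 0.002 * (i - start)
              elif mode == "scaling":
                  faulty[i] *= 1.5
              elif mode == "dropout":
                  faulty[i] = 0.0
          return faulty

      datasets = {mode: inject(nominal, mode) for mode in ("bias", "drift", "scaling", "dropout")}
      print({mode: round(sig[-1], 3) for mode, sig in datasets.items()})   # last sample under each fault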

  5. Design for dependability: A simulation-based approach. Ph.D. Thesis, 1993

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar K.

    1994-01-01

    This research addresses issues in simulation-based system level dependability analysis of fault-tolerant computer systems. The issues and difficulties of providing a general simulation-based approach for system level analysis are discussed, and a methodology that addresses these issues is presented. The proposed methodology is designed to permit the study of a wide variety of architectures under various fault conditions. It permits detailed functional modeling of architectural features such as sparing policies, repair schemes and routing algorithms, as well as other fault-tolerant mechanisms, and it allows the execution of actual application software. One key benefit of this approach is that the behavior of a system under faults does not have to be pre-defined, as is normally done. Instead, a system can be simulated in detail and injected with faults to determine its failure modes. The thesis describes how object-oriented design is used to incorporate this methodology into a general purpose design and fault injection package called DEPEND. A software model is presented that uses abstractions of application programs to study the behavior and effect of software on hardware faults in the early design stage, when actual code is not available. Finally, an acceleration technique that combines hierarchical simulation, time acceleration algorithms and hybrid simulation to reduce simulation time is introduced.

  6. 3D Dynamic Rupture Simulations along Dipping Faults, with a focus on the Wasatch Fault Zone, Utah

    NASA Astrophysics Data System (ADS)

    Withers, K.; Moschetti, M. P.

    2017-12-01

    We study dynamic rupture and ground motion from dip-slip faults in regions that have high seismic hazard, such as the Wasatch fault zone, Utah. Previous numerical simulations have modeled deterministic ground motion along segments of this fault in the heavily populated regions near Salt Lake City but were restricted to low frequencies (up to 1 Hz). We seek to better understand the rupture process and assess broadband ground motions and variability from the Wasatch Fault Zone by extending deterministic ground motion prediction to higher frequencies (up to 5 Hz). We perform simulations along a dipping normal fault (40 x 20 km along strike and width, respectively) with characteristics derived from geologic observations to generate a suite of ruptures > Mw 6.5. This approach utilizes dynamic simulations (fully physics-based models, where the initial stress drop and friction law are imposed) using a summation by parts (SBP) method. The simulations include rough-fault topography following a self-similar fractal distribution (over length scales from 100 m to the size of the fault) in addition to off-fault plasticity. Energy losses from heat and other mechanisms, modeled as anelastic attenuation, are also included, as well as free-surface topography, which can significantly affect ground motion patterns. We compare the effects that material structure and both rate-and-state and slip-weakening friction laws have on rupture propagation. The simulations show reduced slip and moment release in the near surface with the inclusion of plasticity, better agreeing with observations of shallow slip deficit. Long-wavelength fault geometry imparts a non-uniform stress distribution along both dip and strike, influencing the preferred rupture direction and hypocenter location, potentially important for seismic hazard estimation.

  7. The Design and Semi-Physical Simulation Test of Fault-Tolerant Controller for Aero Engine

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Zhang, Xin; Zhang, Tianhong

    2017-11-01

    A new fault-tolerant control method for aero engines is proposed, which can accurately diagnose sensor faults by Kalman filter banks and reconstruct the signal by a real-time on-board adaptive model combining a simplified real-time model and an improved Kalman filter. In order to verify the feasibility of the proposed method, a semi-physical simulation experiment has been carried out. Besides the real I/O interfaces, controller hardware and the virtual plant model, the semi-physical simulation system also contains a real fuel system. Compared with hardware-in-the-loop (HIL) simulation, the semi-physical simulation system has a higher degree of confidence. In order to meet the needs of semi-physical simulation, a rapid prototyping controller with fault-tolerant control ability based on the NI CompactRIO platform is designed and verified on the semi-physical simulation test platform. The result shows that the controller can control the aero engine safely and reliably, with little influence on controller performance in the event of a sensor fault.
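
    The Kalman-filter-bank idea for sensor fault isolation can be sketched with a scalar example: each filter in the bank excludes one sensor, and the faulty sensor is isolated as the one whose excluding filter accumulates the smallest innovations. The random-walk "engine speed" model, the triplex sensor set, and the injected bias are assumptions for illustration only.

      import random

      random.seed(1)

      class ScalarKF:
          def __init__(self, q=1e-4, r=0.04):
              self.x, self.p, self.q, self.r = 0.0, 1.0, q, r
              self.innovation_sq = 0.0
          def update(self, z):
              self.p += self.q                    # predict (random-walk state model)
              k = self.p / (self.p + self.r)      # Kalman gain
              innov = z - self.x
              self.x += k * innov
              self.p *= (1.0 - k)
              self.innovation_sq += innov ** 2

      true_speed = [100.0 + 0.01 * i for i in range(500)]
      readings = [[s + random.gauss(0.0, 0.2) for s in true_speed] for _ in range(3)]
      for i in range(250, 500):
          readings[1][i] += 5.0                   # bias fault injected on sensor 1

      # Filter j excludes sensor j and fuses the remaining two by averaging them.
      bank = [ScalarKF() for _ in range(3)]
      for i in range(500):
          for j, kf in enumerate(bank):
              others = [readings[s][i] for s in range(3) if s != j]
              kf.update(sum(others) / len(others))

      scores = [kf.innovation_sq for kf in bank]
      print("isolated faulty sensor:", scores.index(min(scores)))   # expected: 1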

  8. Earthquake cycle modeling of multi-segmented faults: dynamic rupture and ground motion simulation of the 1992 Mw 7.3 Landers earthquake.

    NASA Astrophysics Data System (ADS)

    Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.

    2017-12-01

    We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN, Luo et al., 2016) is used to nucleate events and the fully dynamic solver (SPECFEM3D, Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We followed the 2-D spatially correlated Dc distributions based on Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. The fault maturity is related to the variability of Dc on a microscopic scale. Large variations of Dc represent immature faults and lower variations of Dc represent mature faults. Moreover, we impose a taper (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation in the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip and combined area of asperities versus moment magnitude. Finally, the simulated ground motions will be validated by comparison of simulated response spectra with recorded response spectra and with response spectra from ground motion prediction models. This research is sponsored by the Japan Nuclear Regulation Authority.

  9. Characterization of the faulted behavior of digital computers and fault tolerant systems

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Miner, Paul S.

    1989-01-01

    A development status evaluation is presented for efforts conducted at NASA-Langley since 1977, toward the characterization of the latent fault in digital fault-tolerant systems. Attention is given to the practical, high speed, generalized gate-level logic system simulator developed, as well as to the validation methodology used for the simulator, on the basis of faultable software and hardware simulations employing a prototype MIL-STD-1750A processor. After validation, latency tests will be performed.

  10. Simulating spontaneous aseismic and seismic slip events on evolving faults

    NASA Astrophysics Data System (ADS)

    Herrendörfer, Robert; van Dinther, Ylona; Pranger, Casper; Gerya, Taras

    2017-04-01

    Plate motion along tectonic boundaries is accommodated by different slip modes: steady creep, seismic slip and slow slip transients. Due to mainly indirect observations and difficulties to scale results from laboratory experiments to nature, it remains enigmatic which fault conditions favour certain slip modes. Therefore, we are developing a numerical modelling approach that is capable of simulating different slip modes together with the long-term fault evolution in a large-scale tectonic setting. We extend the 2D, continuum mechanics-based, visco-elasto-plastic thermo-mechanical model that was designed to simulate slip transients in large-scale geodynamic simulations (van Dinther et al., JGR, 2013). We improve the numerical approach to accurately treat the non-linear problem of plasticity (see also EGU 2017 abstract by Pranger et al.). To resolve a wide slip rate spectrum on evolving faults, we develop an invariant reformulation of the conventional rate-and-state dependent friction (RSF) and adapt the time step (Lapusta et al., JGR, 2000). A crucial part of this development is a conceptual ductile fault zone model that relates slip rates along discrete planes to the effective macroscopic plastic strain rates in the continuum. We test our implementation first in a simple 2D setup with a single fault zone that has a predefined initial thickness. Results show that deformation localizes in case of steady creep and for very slow slip transients to a bell-shaped strain rate profile across the fault zone, which suggests that a length scale across the fault zone may exist. This continuum length scale would overcome the common mesh-dependency in plasticity simulations and question the conventional treatment of aseismic slip on infinitely thin fault zones. We test the introduction of a diffusion term (similar to the damage description in Lyakhovsky et al., JMPS, 2011) into the state evolution equation and its effect on (de-)localization during faster slip events. We compare the slip spectrum in our simulations to conventional RSF simulations (Liu and Rice, JGR, 2007). We further demonstrate the capability of simulating the evolution of a fault zone and simultaneous occurrence of slip transients. From small random initial distributions of the state variable in an otherwise homogeneous medium, deformation localizes and forms curved zones of reduced states. These spontaneously formed fault zones host slip transients, which in turn contribute to the growth of the fault zone.

  11. Knowledge representation requirements for model sharing between model-based reasoning and simulation in process flow domains

    NASA Technical Reports Server (NTRS)

    Throop, David R.

    1992-01-01

    The paper examines the requirements for the reuse of computational models employed in model-based reasoning (MBR) to support automated inference about mechanisms. Areas in which the theory of MBR is not yet completely adequate for using the information that simulations can yield are identified, and recent work in these areas is reviewed. It is argued that using MBR along with simulations forces the use of specific fault models. Fault models are used so that a particular fault can be instantiated into the model and run. This in turn implies that the component specification language needs to be capable of encoding any fault that might need to be sensed or diagnosed. It also means that the simulation code must anticipate all these faults at the component level.

  12. Earthquakes and aseismic creep associated with growing fault-related folds

    NASA Astrophysics Data System (ADS)

    Burke, C. C.; Johnson, K. M.

    2017-12-01

    Blind thrust faults overlain by growing anticlinal folds pose a seismic risk to many urban centers in the world. A large body of research has focused on using fold and growth strata geometry to infer the rate of slip on the causative fault and the distribution of off-fault deformation. However, because we have had few recorded large earthquakes on blind faults underlying folds, it remains unclear how much of the folding occurs during large earthquakes or during the interseismic period accommodated by aseismic creep. Numerous kinematic and mechanical models as well as field observations demonstrate that flexural slip between sedimentary layering is an important mechanism of fault-related folding. In this study, we run boundary element models of flexural-slip fault-related folding to examine the extent to which energy is released seismically or aseismically throughout the evolution of the fold and fault. We assume a fault imbedded in viscoelastic mechanical layering under frictional contact. We assign depth-dependent frictional properties and adopt a rate-state friction formulation to simulate slip over time. We find that in many cases, a large percentage (greater than 50%) of fold growth is accomplished by aseismic creep at bedding and fault contacts. The largest earthquakes tend to occur on the fault, but a significant portion of the seismicity is distributed across bedding contacts through the fold. We are currently working to quantify these results using a large number of simulations with various fold and fault geometries. Result outputs include location, duration, and magnitude of events. As more simulations are completed, these results from different fold and fault geometries will provide insight into how much folding occurs from these slip events. Generalizations from these simulations can be compared with observations of active fault-related folds and used in the future to inform seismic hazard studies.

  13. Interaction Behavior between Thrust Faulting and the National Highway No. 3 - Tianliao III bridge as Determined using Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Li, C. H.; Wu, L. C.; Chan, P. C.; Lin, M. L.

    2016-12-01

    The National Highway No. 3 - Tianliao III Bridge is located in the southwestern Taiwan mudstone area and crosses the Chekualin fault. Since the bridge was opened to traffic, it has been repaired 11 times. To understand the interaction behavior between thrust faulting and the bridge, a discrete element method-based software program, PFC, was applied to conduct a numerical analysis. A 3D model for simulating the thrust faulting and bridge was established, as shown in Fig. 1. In this conceptual model, the length and width were 50 and 10 m, respectively. Part of the box bottom was moveable, simulating the displacement of the thrust fault. The overburden stratum had a height of 5 m with fault dip angles of 20° (Fig. 2). From bottom to top, the strata were mudstone, clay, and sand. The uplift was 1 m, which was 20% of the stratum thickness. In accordance with the investigation, the position of the fault tip was set, depending on the fault zone, and the bridge deformation was observed (Fig. 3). By setting "Monitoring Balls" in the numerical model to analyze bridge displacement, we determined that the bridge deck deflection increased as the uplift distance increased. Furthermore, the force caused by the loading of the bridge deck and fault dislocation was determined to cause a downward deflection of the P1 and P2 bridge piers. Finally, the fault deflection trajectory of the P4 pier displayed the maximum displacement (Fig. 4). Similar behavior has been observed through numerical simulation as well as field monitoring data. Usage of the discrete element model (PFC3D) to simulate the deformation behavior between thrust faulting and the bridge provided feedback for the design and improved planning of the bridge.

  14. A Dynamic Finite Element Method for Simulating the Physics of Faults Systems

    NASA Astrophysics Data System (ADS)

    Saez, E.; Mora, P.; Gross, L.; Weatherley, D.

    2004-12-01

    We introduce a dynamic Finite Element method using a novel high level scripting language to describe the physical equations, boundary conditions and time integration scheme. The library we use is the parallel Finley library: a finite element kernel library, designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208 processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach, we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault), with elastic behavior. The stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time, using the Saint Verlat scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models. We adapt the 2D model for simulating the dynamics of parallel fault systems described in that work to the Finite-Element method. The approach uses a frictional relation along faults that is slip and slip-rate dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. In order to illustrate the new Finite Element model, single and multi-fault simulation examples are presented.

  15. 3D Dynamic Rupture Simulations along the Wasatch Fault, Utah, Incorporating Rough-fault Topography

    NASA Astrophysics Data System (ADS)

    Withers, Kyle; Moschetti, Morgan

    2017-04-01

    Studies have found that the Wasatch Fault has experienced successive large magnitude (>Mw 7.2) earthquakes, with an average recurrence interval near 350 years. To date, no large magnitude event has been recorded along the fault, with the last rupture along the Salt Lake City segment occurring 1300 years ago. Because of this, as well as the lack of strong ground motion records in basins and from normal-faulting earthquakes worldwide, seismic hazard in the region is not well constrained. Previous numerical simulations have modeled deterministic ground motion in the heavily populated regions of Utah, near Salt Lake City, but were primarily restricted to low frequencies (up to 1 Hz). Our goal is to better assess broadband ground motions from the Wasatch Fault Zone. Here, we extend deterministic ground motion prediction to higher frequencies (up to 5 Hz) in this region by using physics-based spontaneous dynamic rupture simulations along a normal fault with characteristics derived from geologic observations. We use a summation by parts finite difference code (Waveqlab3D) with rough-fault topography following a self-similar fractal distribution (over length scales from 100 m to the size of the fault) and include off-fault plasticity to simulate ruptures > Mw 6.5. Geometric complexity along fault planes has previously been shown to generate broadband sources with spectral energy matching that of observations. We investigate the impact of varying the hypocenter location, as well as the influence that multiple realizations of rough-fault topography have on the rupture process and resulting ground motion. We utilize Waveqlab3D's computational efficiency to model wave propagation to a significant distance from the fault with media heterogeneity at both long and short spatial wavelengths. These simulations generate a synthetic dataset of ground motions to compare with GMPEs, in terms of both the median and inter- and intra-event variability.
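
    One ingredient of these simulations, a self-similar rough-fault profile band-limited between 100 m and the fault length, can be sketched by spectral synthesis; the grid spacing, roughness band, and amplitude-to-length ratio below are illustrative choices, not the values used in the study.

      import numpy as np

      rng = np.random.default_rng(0)
      fault_length = 40_000.0        # along-strike length (m), assumed for illustration
      dx = 50.0                      # grid spacing (m)
      n = int(fault_length / dx)

      k = np.fft.rfftfreq(n, d=dx)                      # wavenumbers (1/m)
      band = (k > 1.0 / fault_length) & (k < 1.0 / 100.0)

      # Self-similar profile: 1-D power spectral density ~ k**-3 inside the band.
      amplitude = np.zeros_like(k)
      amplitude[band] = k[band] ** -1.5                 # sqrt of the k**-3 spectrum
      phase = rng.uniform(0.0, 2.0 * np.pi, size=k.shape)
      profile = np.fft.irfft(amplitude * np.exp(1j * phase), n=n)

      profile *= 0.005 * fault_length / np.ptp(profile) # ~0.5% amplitude-to-length ratio
      print(f"rms roughness: {profile.std():.1f} m over a {fault_length / 1000:.0f} km fault")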

  16. Modeling and Fault Simulation of Propellant Filling System

    NASA Astrophysics Data System (ADS)

    Jiang, Yunchun; Liu, Weidong; Hou, Xiaobo

    2012-05-01

    The propellant filling system is one of the key ground facilities at the launch site of rockets that use liquid propellant. There is an urgent demand for ensuring and improving its reliability and safety, and Failure Mode and Effects Analysis (FMEA) is a good approach to meet it. Driven by the need for more fault information for FMEA, and because of the high expense of propellant filling, the working process of the propellant filling system under fault conditions was studied in this paper by simulation based on AMESim. First, based on an analysis of its structure and function, the filling system was decomposed into modules and the mathematical models of every module were given, based on which the whole filling system was modeled in AMESim. Second, a general method of injecting faults into a dynamic system was proposed, and as an example, two typical faults - leakage and blockage - were injected into the model of the filling system, yielding two fault models in AMESim. After that, fault simulations were run and the dynamic characteristics of several key parameters were analyzed under fault conditions. The results show that the model can effectively simulate the two faults and can be used to provide guidance for the maintenance and improvement of the filling system.
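
    Outside AMESim, the leakage/blockage fault-injection idea can be sketched with a lumped tank-filling model in which a fault is switched on at a chosen time; all parameters below are invented for illustration and do not describe the real filling system.

      def simulate(fault=None, fault_time=300.0, dt=1.0, t_end=900.0):
          level, area = 0.0, 2.0              # tank level (m) and cross-section (m^2)
          q_nominal = 0.01                    # nominal filling flow (m^3/s)
          t, history = 0.0, []
          while t < t_end:
              q_in, q_leak = q_nominal, 0.0
              if fault and t >= fault_time:
                  if fault == "blockage":
                      q_in *= 0.4             # restricted flow passage
                  elif fault == "leakage":
                      q_leak = 0.004          # leak to the environment
              level += (q_in - q_leak) / area * dt
              t += dt
              history.append(level)
          return history

      for case in (None, "blockage", "leakage"):
          print(f"{case or 'nominal':9s} final level: {simulate(case)[-1]:.3f} m")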

  17. Experimental analysis of computer system dependability

    NASA Technical Reports Server (NTRS)

    Iyer, Ravishankar K.; Tang, Dong

    1993-01-01

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
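
    The importance-sampling technique mentioned above can be illustrated with a small rare-event example: estimating the probability that a component fails before a short mission time by sampling from a biased distribution and reweighting with the likelihood ratio. The failure-time distribution and parameter values are illustrative assumptions.

      import math
      import random

      random.seed(2)
      mean_ttf, mission = 10_000.0, 10.0       # exact failure probability ~ 1e-3
      n = 20_000

      # Plain Monte Carlo.
      plain = sum(random.expovariate(1.0 / mean_ttf) < mission for _ in range(n)) / n

      # Importance sampling: draw from an exponential whose mean equals the mission time.
      biased_mean = mission
      est = 0.0
      for _ in range(n):
          x = random.expovariate(1.0 / biased_mean)
          if x < mission:
              # likelihood ratio f_true(x) / f_biased(x)
              w = (math.exp(-x / mean_ttf) / mean_ttf) / (math.exp(-x / biased_mean) / biased_mean)
              est += w
      importance = est / n

      exact = 1.0 - math.exp(-mission / mean_ttf)
      print(f"exact={exact:.2e}  plain MC={plain:.2e}  importance sampling={importance:.2e}")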

  18. Modelling Fault Zone Evolution: Implications for fluid flow.

    NASA Astrophysics Data System (ADS)

    Moir, H.; Lunn, R. J.; Shipton, Z. K.

    2009-04-01

    Flow simulation models are of major interest to many industries including hydrocarbon, nuclear waste, sequestering of carbon dioxide and mining. One of the major uncertainties in these models is in predicting the permeability of faults, principally in the detailed structure of the fault zone. Studying the detailed structure of a fault zone is difficult because of the inaccessible nature of sub-surface faults and also because of their highly complex nature; fault zones show a high degree of spatial and temporal heterogeneity, i.e. the properties of the fault change as you move along the fault, and they also change with time. It is well understood that faults influence fluid flow characteristics. They may act as a conduit or a barrier or even as both by blocking flow across the fault while promoting flow along it. Controls on fault hydraulic properties include cementation, stress field orientation, fault zone components and fault zone geometry. Within brittle rocks, such as granite, fracture networks are limited but provide the dominant pathway for flow within this rock type. Research at the EU's Soultz-sous-Forêts Hot Dry Rock test site [Evans et al., 2005] showed that 95% of flow into the borehole was associated with a single fault zone at 3490 m depth, and that 10 open fractures account for the majority of flow within the zone. These data underline the critical role of faults in deep flow systems and the importance of achieving a predictive understanding of fault hydraulic properties. To improve estimates of fault zone permeability, it is important to understand the underlying hydro-mechanical processes of fault zone formation. In this research, we explore the spatial and temporal evolution of fault zones in brittle rock through development and application of a 2D hydro-mechanical finite element model, MOPEDZ. The authors have previously presented numerical simulations of the development of fault linkage structures from two or three pre-existing joints, the results of which compare well to features observed in mapped exposures. For these simple simulations from a small number of pre-existing joints, the fault zone evolves in a predictable way: fault linkage is governed by three key factors: the stress ratio of σ1 (maximum compressive stress) to σ3 (minimum compressive stress), the original geometry of the pre-existing structures (contractional vs. dilational geometries) and the orientation of the principal stress direction (σ1) relative to the pre-existing structures. In this paper we present numerical simulations of the temporal and spatial evolution of fault linkage structures from many pre-existing joints. The initial location, size and orientations of these joints are based on field observations of cooling joints in granite from the Sierra Nevada. We show that the constantly evolving geometry and local stress field perturbations contribute significantly to fault zone evolution. The location and orientations of linkage structures previously predicted by the simple simulations are consistent with the predicted geometries in the more complex fault zones; however, the exact location at which individual structures form is not easily predicted. Markedly different fault zone geometries are predicted when the pre-existing joints are rotated with respect to the maximum compressive stress. In particular, fault surfaces range from evolving smooth linear structures to producing complex 'stepped' fault zone geometries. These geometries have a significant effect on simulations of along- and across-fault flow.

  19. Insurance Applications of Active Fault Maps Showing Epistemic Uncertainty

    NASA Astrophysics Data System (ADS)

    Woo, G.

    2005-12-01

    Insurance loss modeling for earthquakes utilizes available maps of active faulting produced by geoscientists. All such maps are subject to uncertainty, arising from lack of knowledge of fault geometry and rupture history. Field work to undertake geological fault investigations drains human and monetary resources, and this inevitably limits the resolution of fault parameters. Some areas are more accessible than others; some may be of greater social or economic importance than others; some areas may be investigated more rapidly or diligently than others; or funding restrictions may have curtailed the extent of the fault mapping program. In contrast with the aleatory uncertainty associated with the inherent variability in the dynamics of earthquake fault rupture, uncertainty associated with lack of knowledge of fault geometry and rupture history is epistemic. The extent of this epistemic uncertainty may vary substantially from one regional or national fault map to another. However aware the local cartographer may be, this uncertainty is generally not conveyed in detail to the international map user. For example, an area may be left blank for a variety of reasons, ranging from lack of sufficient investigation of a fault to lack of convincing evidence of activity. Epistemic uncertainty in fault parameters is of concern in any probabilistic assessment of seismic hazard, not least in insurance earthquake risk applications. A logic-tree framework is appropriate for incorporating epistemic uncertainty. Some insurance contracts cover specific high-value properties or transport infrastructure, and therefore are extremely sensitive to the geometry of active faulting. Alternative Risk Transfer (ART) to the capital markets may also be considered. In order for such insurance or ART contracts to be properly priced, uncertainty should be taken into account. Accordingly, an estimate is needed for the likelihood of surface rupture capable of causing severe damage. Especially where a high deductible is in force, this requires estimation of the epistemic uncertainty on fault geometry and activity. Transport infrastructure insurance is of practical interest in seismic countries. On the North Anatolian Fault in Turkey, there is uncertainty over an unbroken segment between the eastern end of the Düzce Fault and Bolu. This may have ruptured during the 1944 earthquake. Existing hazard maps may simply use a question mark to flag uncertainty. However, a far more informative type of hazard map might express spatial variations in the confidence level associated with a fault map. Through such visual guidance, an insurance risk analyst would be better placed to price earthquake cover, allowing for epistemic uncertainty.
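
    A minimal sketch of how a logic tree can carry this epistemic uncertainty into pricing (the branch weights and loss figures below are hypothetical, not from the paper):

      # Each branch pairs a weight (degree of belief in one fault-map interpretation)
      # with the expected annual loss computed under that interpretation.
      branches = [
          (0.5, 1.2e6),   # capable fault mapped close to the insured asset
          (0.3, 0.4e6),   # fault present but with a lower activity rate
          (0.2, 0.1e6),   # no capable fault near the asset
      ]

      assert abs(sum(w for w, _ in branches) - 1.0) < 1e-9
      expected_loss = sum(w * loss for w, loss in branches)
      print("Weighted expected annual loss: %.0f" % expected_loss)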

  20. Rupture Dynamics and Seismic Radiation on Rough Faults for Simulation-Based PSHA

    NASA Astrophysics Data System (ADS)

    Mai, P. M.; Galis, M.; Thingbaijam, K. K. S.; Vyas, J. C.; Dunham, E. M.

    2017-12-01

    Simulation-based ground-motion predictions may augment PSHA studies in data-poor regions or provide additional shaking estimations, including seismic waveforms, for critical facilities. Validation and calibration of such simulation approaches, based on observations and GMPEs, is important for engineering applications, while seismologists push to include the precise physics of the earthquake rupture process and seismic wave propagation in a 3D heterogeneous Earth. Geological faults comprise both large-scale segmentation and small-scale roughness that determine the dynamics of the earthquake rupture process and its radiated seismic wavefield. We investigate how different parameterizations of fractal fault roughness affect the rupture evolution and resulting near-fault ground motions. Rupture incoherence induced by fault roughness generates realistic ω⁻² decay for high-frequency displacement amplitude spectra. Waveform characteristics and GMPE-based comparisons corroborate that these rough-fault rupture simulations generate realistic synthetic seismograms for subsequent engineering application. Since dynamic rupture simulations are computationally expensive, we develop kinematic approximations that emulate the observed dynamics. Simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. The dynamic rake angle variations are anti-correlated with the local dip angles. Based on a dynamically consistent Yoffe source-time function, we show that the seismic wavefield of the approximated kinematic rupture closely reproduces the seismic radiation of the full dynamic source process. Our findings provide an innovative pseudo-dynamic source characterization that captures fault roughness effects on rupture dynamics. Including the correlations between kinematic source parameters, we present a new pseudo-dynamic rupture modeling approach for computing broadband ground-motion time-histories for simulation-based PSHA.
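
    A minimal sketch of the classical (non-regularized) Yoffe slip-rate function mentioned above; the paper uses a regularized, dynamically consistent variant, and the slip and rise-time values here are arbitrary:

      import numpy as np

      def yoffe_slip_rate(t, slip, rise_time):
          """Classical Yoffe slip-rate function: sharp onset, integrates to 'slip'
          over the rise time."""
          t = np.asarray(t, dtype=float)
          rate = np.zeros_like(t)
          inside = (t > 0) & (t < rise_time)
          rate[inside] = (2.0 * slip / (np.pi * rise_time)) * np.sqrt(
              (rise_time - t[inside]) / t[inside])
          return rate

      t = np.linspace(0.0, 1.0, 2001)
      sr = yoffe_slip_rate(t, slip=1.5, rise_time=0.8)              # 1.5 m of slip over 0.8 s
      print("recovered slip: %.2f m" % (np.sum(sr) * (t[1] - t[0])))  # close to 1.5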

  1. The SCEC/USGS dynamic earthquake rupture code verification exercise

    USGS Publications Warehouse

    Harris, R.A.; Barall, M.; Archuleta, R.; Dunham, E.; Aagaard, Brad T.; Ampuero, J.-P.; Bhat, H.; Cruz-Atienza, Victor M.; Dalguer, L.; Dawson, P.; Day, S.; Duan, B.; Ely, G.; Kaneko, Y.; Kase, Y.; Lapusta, N.; Liu, Yajing; Ma, S.; Oglesby, D.; Olsen, K.; Pitarka, A.; Song, S.; Templeton, E.

    2009-01-01

    Numerical simulations of earthquake rupture dynamics are now common, yet it has been difficult to test the validity of these simulations because there have been few field observations and no analytic solutions with which to compare the results. This paper describes the Southern California Earthquake Center/U.S. Geological Survey (SCEC/USGS) Dynamic Earthquake Rupture Code Verification Exercise, where codes that simulate spontaneous rupture dynamics in three dimensions are evaluated and the results produced by these codes are compared using Web-based tools. This is the first time that a broad and rigorous examination of numerous spontaneous rupture codes has been performed, a significant advance in this science. The automated process developed to attain this achievement provides for a future where testing of codes is easily accomplished. Scientists who use computer simulations to understand earthquakes utilize a range of techniques. Most of these assume that earthquakes are caused by slip at depth on faults in the Earth, but hereafter the strategies vary. Among the methods used in earthquake mechanics studies are kinematic approaches and dynamic approaches. The kinematic approach uses a computer code that prescribes the spatial and temporal evolution of slip on the causative fault (or faults). These types of simulations are very helpful, especially since they can be used in seismic data inversions to relate the ground motions recorded in the field to slip on the fault(s) at depth. However, these kinematic solutions generally provide no insight into the physics driving the fault slip or information about why the involved fault(s) slipped that much (or that little). In other words, these kinematic solutions may lack information about the physical dynamics of earthquake rupture that will be most helpful in forecasting future events. To help address this issue, some researchers use computer codes to numerically simulate earthquakes and construct dynamic, spontaneous rupture (hereafter called “spontaneous rupture”) solutions. For these types of numerical simulations, rather than prescribing the slip function at each location on the fault(s), just the friction constitutive properties and initial stress conditions are prescribed. The subsequent stresses and fault slip spontaneously evolve over time as part of the elasto-dynamic solution. Therefore, spontaneous rupture computer simulations of earthquakes allow us to include everything that we know, or think that we know, about earthquake dynamics and to test these ideas against earthquake observations.

  2. Machine learning of fault characteristics from rocket engine simulation data

    NASA Technical Reports Server (NTRS)

    Ke, Min; Ali, Moonis

    1990-01-01

    Transformation of data into knowledge through conceptual induction has been the focus of the research described in this paper. We have developed a Machine Learning System (MLS) to analyze rocket engine simulation data. MLS provides its users with fault analyses, fault characteristics, conceptual descriptions of faults, and the relationships between attributes and sensors. All of these results are critically important for identifying faults.

  3. Testability analysis on a hydraulic system in a certain equipment based on simulation model

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Cong, Hua; Liu, Yuanhong; Feng, Fuzhou

    2018-03-01

    To address the complicated structure of hydraulic systems and the shortage of fault statistics for them, a multi-valued testability analysis method based on a simulation model is proposed. Based on an AMESim simulation model, the method injects simulated faults and records the variation of test parameters, such as pressure and flow rate, at each test point relative to normal conditions. A multi-valued fault-test dependency matrix is thus established. The fault detection rate (FDR) and fault isolation rate (FIR) are then calculated from the dependency matrix. Finally, the testability and fault diagnosis capability of the system are analyzed and evaluated; they reach only 54% (FDR) and 23% (FIR). To improve the testability of the system, the number and positions of the test points are optimized. Results show that the proposed test placement scheme addresses the difficulty, inefficiency and high cost of maintaining the system.
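
    A minimal sketch of deriving FDR and FIR from a binary fault-test dependency matrix (the matrix and the isolation convention below are hypothetical illustrations, not the AMESim-derived matrix):

      import numpy as np

      # Rows = faults, columns = tests; 1 means the test responds to the fault.
      D = np.array([
          [1, 0, 1],
          [1, 0, 1],   # same signature as fault 0, so not isolable from it
          [0, 1, 0],
          [0, 0, 0],   # undetectable fault
      ])

      detected = D.any(axis=1)
      fdr = detected.mean()

      # A detected fault is isolable if no other fault shares its test signature
      # (the denominator convention for FIR varies; here it is the detected faults).
      signatures = [tuple(row) for row in D]
      isolable = [detected[i] and signatures.count(signatures[i]) == 1
                  for i in range(len(signatures))]
      fir = sum(isolable) / detected.sum()

      print("FDR = %.0f%%, FIR = %.0f%%" % (100 * fdr, 100 * fir))   # 75% and 33%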

  4. 3-D simulation of hanging wall effect at dam site

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Xu, Y.

    2017-12-01

    The hanging wall effect is one of the near-fault effects. This paper focuses on the difference between the ground motions on the hanging wall side and those on the footwall side of the fault at a dam site, considering key factors such as the actual topography and the rupture process. For this purpose, 3-D ground motions are numerically simulated by the spectral element method (SEM), which takes into account the physical mechanism of generation and propagation of seismic waves. With an SEM model of 548 million DOFs, the excitation and propagation of seismic waves are simulated to compare the ground motion on the hanging wall side with that on the footwall side. Taking the Dagangshan region of China as an example, several seismogenic finite faults with different dip angles are simulated to investigate the hanging wall effect. Furthermore, by comparing the ground motions at the receiver points, the influence of several factors on the hanging wall effect is investigated, such as the dip of the fault and the fault type (strike-slip or dip-slip). The peak acceleration on the hanging wall side is clearly larger than that on the footwall side, which numerically evidences the hanging wall effect. In addition, the simulations show that the hanging wall effect deserves attention only when the dip is less than 70°.

  5. Tsunami simulation using submarine displacement calculated from simulation of ground motion due to seismic source model

    NASA Astrophysics Data System (ADS)

    Akiyama, S.; Kawaji, K.; Fujihara, S.

    2013-12-01

    Since fault fracturing due to an earthquake can simultaneously cause ground motion and a tsunami, it is appropriate to evaluate the ground motion and the tsunami with a single fault model. However, separate source models are typically used for ground motion simulation and for tsunami simulation, because of the difficulty of evaluating both phenomena simultaneously. Many source models for the 2011 off the Pacific coast of Tohoku Earthquake have been proposed from inversion analyses of seismic observations or of tsunami observations. Most of these models show similar features, in which a large amount of slip is located on the shallower part of the fault near the Japan Trench. This indicates that the ground motion and the tsunami can be evaluated with a single source model. Therefore, we examine the possibility of tsunami prediction using a fault model estimated from seismic observation records. In this study, we carry out tsunami simulations using the displacement field of oceanic crustal movements calculated from a ground motion simulation of the 2011 off the Pacific coast of Tohoku Earthquake. We use two fault models by Yoshida et al. (2011), based on the teleseismic body wave and on the strong ground motion records, respectively. Although the two fault models share this common feature, the amount of slip near the Japan Trench is larger in the model derived from the strong ground motion records than in the one derived from the teleseismic body wave. First, large-scale ground motion simulations applying these fault models are performed for the whole of eastern Japan, using a voxel-type finite element method. The synthetic waveforms computed from the simulations are generally consistent with the observation records of the K-NET (Kinoshita, 1998) and KiK-net (Aoi et al., 2000) stations deployed by the National Research Institute for Earth Science and Disaster Prevention (NIED). Next, the tsunami simulations are performed by finite difference calculation based on shallow water theory. The initial wave height for tsunami generation is estimated from the vertical displacement of the ocean bottom due to the crustal movements obtained from the ground motion simulation mentioned above. The results of the tsunami simulations are compared with observations from GPS wave gauges to evaluate the validity of tsunami prediction using a fault model based on seismic observation records.
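
    A minimal 1-D sketch of the final step described above, propagating an initial sea-surface displacement with a linear shallow-water finite difference scheme (the uplift profile, depth and grid are hypothetical, not the study's Pacific-wide setup):

      import numpy as np

      g, depth = 9.81, 4000.0                 # gravity (m/s^2), uniform depth (m)
      nx, dx = 400, 2000.0                    # grid points and spacing (m)
      dt = 0.5 * dx / np.sqrt(g * depth)      # CFL-limited time step

      x = np.arange(nx) * dx
      eta = 2.0 * np.exp(-((x - x.mean()) / 3.0e4) ** 2)   # initial wave = seafloor uplift
      u = np.zeros(nx + 1)                                  # velocities at cell faces

      for _ in range(300):                    # staggered-grid update
          u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])     # momentum equation
          eta -= depth * dt / dx * (u[1:] - u[:-1])         # continuity equation

      print("max wave height after propagation: %.2f m" % eta.max())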

  6. Self-checking self-repairing computer nodes using the mirror processor

    NASA Technical Reports Server (NTRS)

    Tamir, Yuval

    1992-01-01

    Circuitry added to fault-tolerant systems for concurrent error detection usually reduces performance. Using a technique called micro rollback, it is possible to eliminate most of the performance penalty of concurrent error detection. Error detection is performed in parallel with intermodule communication, and erroneous state changes are later undone. The author reports on the design and implementation of a VLSI RISC microprocessor, called the Mirror Processor (MP), which is capable of micro rollback. In order to achieve concurrent error detection, two MP chips operate in lockstep, comparing external signals and a signature of internal signals every clock cycle. If a mismatch is detected, both processors roll back to the beginning of the cycle in which the error occurred. In some cases the erroneous state is corrected by copying a value from the fault-free processor to the faulty processor. The architecture, microarchitecture, and VLSI implementation of the MP are described, emphasizing its error-detection, error-recovery, and self-diagnosis capabilities.
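
    A toy software analogue of the lockstep-compare-and-roll-back idea described above (purely illustrative; the MP is a hardware design and none of these names or numbers come from it):

      import random

      def run_lockstep(program, fault_cycle=5):
          """Run two redundant copies of a tiny accumulate-only 'program',
          comparing results every cycle and rolling back one cycle on mismatch."""
          state_a = state_b = 0
          i = 0
          while i < len(program):
              checkpoint = (state_a, i)           # state saved for micro rollback
              a = state_a + program[i]
              b = state_b + program[i]
              if i == fault_cycle:                # inject a transient fault in copy B
                  b += random.randint(1, 9)
                  fault_cycle = -1                # the fault occurs only once
              if a != b:                          # mismatch: roll back and retry the cycle
                  state_a, i = checkpoint
                  state_b = state_a               # resynchronize the faulty copy
                  continue
              state_a, state_b = a, b
              i += 1
          return state_a

      print(run_lockstep(list(range(10))))        # 45, despite the injected fault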

  7. Dynamic rupture simulations of the 2016 Mw7.8 Kaikōura earthquake: a cascading multi-fault event

    NASA Astrophysics Data System (ADS)

    Ulrich, T.; Gabriel, A. A.; Ampuero, J. P.; Xu, W.; Feng, G.

    2017-12-01

    The Mw7.8 Kaikōura earthquake struck the northern part of New Zealand's South Island roughly one year ago. It ruptured multiple segments of the contractional North Canterbury fault zone and of the Marlborough fault system. Field observations combined with satellite data suggest a rupture path involving partly unmapped faults separated by stepover distances larger than 5 km, the maximum distance usually considered by the latest seismic hazard assessment methods. This might imply distant rupture transfer mechanisms generally not considered in seismic hazard assessment. We present high-resolution 3D dynamic rupture simulations of the Kaikōura earthquake under physically self-consistent initial stress and strength conditions. Our simulations are based on recent finite-fault slip inversions that constrain fault system geometry and final slip distribution from remote sensing, surface rupture and geodetic data (Xu et al., 2017). We assume a uniform background stress field, without lateral fault stress or strength heterogeneity. We use the open-source software SeisSol (www.seissol.org), which is based on an arbitrary high-order accurate DERivative Discontinuous Galerkin method (ADER-DG). Our method can account for complex fault geometries, high-resolution topography and bathymetry, 3D subsurface structure, off-fault plasticity and modern friction laws. It enables the simulation of seismic wave propagation with high-order accuracy in space and time in complex media. We show that a cascading rupture driven by dynamic triggering can break all fault segments that were involved in this earthquake without mechanically requiring an underlying thrust fault. Our preferred fault geometry connects most fault segments: it does not feature stepovers larger than 2 km. The best scenario matches the main macroscopic characteristics of the earthquake, including its apparently slow rupture propagation caused by zigzag cascading, the moment magnitude and the overall inferred slip distribution. We observe a high sensitivity of the cascading dynamics to fault stepover distance and off-fault energy dissipation.

  8. Comparison of Observed Spatio-temporal Aftershock Patterns with Earthquake Simulator Results

    NASA Astrophysics Data System (ADS)

    Kroll, K.; Richards-Dinger, K. B.; Dieterich, J. H.

    2013-12-01

    Due to the complex nature of faulting in southern California, knowledge of rupture behavior near fault step-overs is of critical importance to properly quantify and mitigate seismic hazards. Estimates of earthquake probability are complicated by the uncertainty of whether a rupture will stop at or jump a fault step-over, which affects both the magnitude and the frequency of occurrence of earthquakes. In recent years, earthquake simulators and dynamic rupture models have begun to address the effects of complex fault geometries on earthquake ground motions and rupture propagation. Early models incorporated vertical faults with highly simplified geometries. Many current studies examine the effects of varied fault geometry, fault step-overs, and fault bends on rupture patterns; however, these works are limited by the small numbers of integrated fault segments and simplified orientations. The previous work of Kroll et al. (2013) on the northern extent of the 2010 El Mayor-Cucapah rupture in the Yuha Desert region uses precise aftershock relocations to show an area of complex conjugate faulting within the step-over region between the Elsinore and Laguna Salada faults. Here, we employ an innovative approach of incorporating this fine-scale fault structure, defined through seismological, geologic and geodetic means, in the physics-based earthquake simulator RSQSim to explore the effects of fine-scale structures on stress transfer and rupture propagation, and to examine the mechanisms that control aftershock activity and local triggering of other large events. We run simulations with primary fault structures in the state of California and northern Baja California and incorporate complex secondary faults in the Yuha Desert region. These models produce aftershock activity that enables comparison between the observed and predicted distributions and allows examination of the mechanisms that control them. We investigate how the spatial and temporal distributions of aftershocks are affected by changes to model parameters such as shear and normal stress, rate-and-state frictional properties, fault geometry, and slip rate.

  9. Dynamic rupture simulation of the 2016 Mw 7.8 Kaikoura (New Zealand) earthquake: Is spontaneous multi-fault rupture expected?

    NASA Astrophysics Data System (ADS)

    Ando, R.; Kaneko, Y.

    2017-12-01

    The coseismic rupture of the 2016 Kaikoura earthquake propagated over a distance of 150 km along the NE-SW striking fault system in the northern South Island of New Zealand. The analysis of InSAR, GPS and field observations (Hamling et al., 2017) revealed that most of the rupture occurred along previously mapped active faults, involving more than seven major fault segments. These fault segments, mostly dipping to the northwest, are distributed in a quite complex manner, manifested by fault branching and step-over structures. Back-projection rupture imaging shows that the rupture appears to jump between three sub-parallel fault segments in sequence from south to north (Kaiser et al., 2017). The rupture appears to have terminated on the Needles fault in Cook Strait. One of the main questions is whether this multi-fault rupture can be naturally explained on a physical basis. In order to understand the conditions responsible for the complex rupture process, we conduct fully dynamic rupture simulations that account for 3-D non-planar fault geometry embedded in an elastic half-space. The fault geometry is constrained by previous InSAR observations and geological inferences. The regional stress field is constrained by the result of stress tensor inversion based on focal mechanisms (Balfour et al., 2005). The fault is governed by a relatively simple, slip-weakening friction law. For simplicity, the frictional parameters are uniformly distributed, as there is no direct estimate of them except for a shallow portion of the Kekerengu fault (Kaneko et al., 2017). Our simulations show that the rupture can indeed propagate through the complex fault system once it is nucleated at the southernmost segment. The simulated slip distribution is quite heterogeneous, reflecting the nature of the non-planar fault geometry, fault branching and step-over structures. We find that optimally oriented faults exhibit larger slip, which is consistent with the slip model of Hamling et al. (2017). We conclude that the first-order characteristics of this event may be explained by the effect of irregularities in the fault geometry.

  10. Simulations of tremor-related creep reveal a weak crustal root of the San Andreas Fault

    USGS Publications Warehouse

    Shelly, David R.; Bradley, Andrew M.; Johnson, Kaj M.

    2013-01-01

    Deep aseismic roots of faults play a critical role in transferring tectonic loads to shallower, brittle crustal faults that rupture in large earthquakes. Yet, until the recent discovery of deep tremor and creep, direct inference of the physical properties of lower-crustal fault roots has remained elusive. Observations of tremor near Parkfield, CA provide the first evidence for present-day localized slip on the deep extension of the San Andreas Fault and triggered transient creep events. We develop numerical simulations of fault slip to show that the spatiotemporal evolution of triggered tremor near Parkfield is consistent with triggered fault creep governed by laboratory-derived friction laws between depths of 20–35 km on the fault. Simulated creep and observed tremor northwest of Parkfield nearly ceased for 20–30 days in response to small coseismic stress changes of order 10⁴ Pa from the 2003 M6.5 San Simeon Earthquake. Simulated afterslip and observed tremor following the 2004 M6.0 Parkfield earthquake show a coseismically induced pulse of rapid creep and tremor lasting for 1 day followed by a longer 30 day period of sustained accelerated rates due to propagation of shallow afterslip into the lower crust. These creep responses require very low effective normal stress of ~1 MPa on the deep San Andreas Fault and near-neutral-stability frictional properties expected for gabbroic lower-crustal rock.

  11. Locating Anomalies in Complex Data Sets Using Visualization and Simulation

    NASA Technical Reports Server (NTRS)

    Panetta, Karen

    2001-01-01

    The research goals are to create a simulation framework that can accept any combination of models written at the gate or behavioral level. The framework provides the ability to fault simulate and to create scenarios of experiments using concurrent simulation. In order to meet these goals we have had to fulfill the following requirements: the ability to accept models written in VHDL, Verilog or C; the ability to propagate faults through any model type; the ability to create experiment scenarios efficiently without generating every possible combination of variables; and the ability to accept a diversity of fault models beyond the single stuck-at model. Major development effort has gone into a parser that can accept models written in various languages. This work has generated considerable attention from other universities and industry for its flexibility and usefulness. The parser uses Lex and Yacc to parse Verilog and C. We have also utilized our industrial partnership with Alternative Systems Inc. to import VHDL into our simulator. For multilevel simulation, we needed to modify the simulator architecture to accept models that contain multiple outputs. This enabled us to accept behavioral components. The next major accomplishment was the addition of "functional fault models". Functional fault models change the behavior of a gate or model; for example, a bridging fault can make an OR gate behave like an AND gate. This has applications beyond fault simulation. This modeling flexibility will make the simulator more useful for verification and model comparison. For instance, two or more versions of an ALU can be comparatively simulated in a single execution. The results will show where and how the models differ so that the performance and correctness of the models may be evaluated. A considerable amount of time has been dedicated to validating the simulator performance on larger models provided by industry and other universities.
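
    A minimal sketch of the functional-fault-model idea described above, where a fault replaces a gate's behavior rather than sticking a line at 0 or 1 (a toy single-gate example, not the framework's code):

      # A "functional fault" substitutes one gate function for another; the bridging
      # fault below makes an OR gate behave like an AND gate, as in the example above.
      GATES = {"AND": lambda a, b: a & b, "OR": lambda a, b: a | b}

      def evaluate(gate, a, b, functional_fault=None):
          behavior = GATES[functional_fault] if functional_fault else GATES[gate]
          return behavior(a, b)

      for a in (0, 1):
          for b in (0, 1):
              good = evaluate("OR", a, b)
              faulty = evaluate("OR", a, b, functional_fault="AND")
              note = "  <-- difference exposes the fault" if good != faulty else ""
              print("inputs %d%d: good=%d faulty=%d%s" % (a, b, good, faulty, note))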

  12. 3D fault curvature and fractal roughness: Insights for rupture dynamics and ground motions using a Discontinous Galerkin method

    NASA Astrophysics Data System (ADS)

    Ulrich, Thomas; Gabriel, Alice-Agnes

    2017-04-01

    Natural fault geometries are subject to a large degree of uncertainty. Their geometrical structure is not directly observable and may only be inferred from surface traces or geophysical measurements. Most studies aiming at assessing the potential seismic hazard of natural faults rely on idealised fault shapes based on observable large-scale features. Yet real faults are wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. Dynamic rupture simulations aim to capture the observed complexity of earthquake sources and ground motions. From a numerical point of view, incorporating rough faults in such simulations is challenging: it requires optimised codes able to run efficiently on high-performance computers and simultaneously handle complex geometries. Physics-based rupture dynamics hosted by rough faults appear to be much closer, in terms of complexity, to source models inverted from observations. Moreover, the simulated ground motions present many similarities with observed ground-motion records. Thus, such simulations may foster our understanding of earthquake source processes and help derive more accurate seismic hazard estimates. In this presentation, the software package SeisSol (www.seissol.org), based on an ADER-Discontinuous Galerkin scheme, is used to solve the spontaneous dynamic earthquake rupture problem. The use of tetrahedral unstructured meshes naturally allows for complicated fault geometries. However, SeisSol's high-order discretisation in time and space is not particularly suited to small-scale fault roughness. We will demonstrate modelling conditions under which SeisSol resolves rupture dynamics on rough faults accurately. The strong impact of the geometric gradient of the fault surface on the rupture process is then shown in 3D simulations. Next, the benefits of explicitly modelling fault curvature and roughness, as opposed to prescribing heterogeneous initial stress conditions on a planar fault, are demonstrated. Furthermore, we show that rupture extent, rupture-front coherency and rupture speed are highly dependent on the initial amplitude of stress acting on the fault, defined by the normalized prestress factor R, the ratio of the potential stress drop to the breakdown stress drop. The effects of fault complexity are particularly pronounced for lower R. By low-pass filtering a rough fault at several cut-off wavelengths, we then try to capture rupture complexity using a simplified fault geometry. We find that equivalent source dynamics can only be obtained using a lightly filtered fault combined with a reduced stress level. To investigate the wavelength-dependent roughness effect, the fault geometry is bandpass-filtered over several spectral ranges. We show that geometric fluctuations cause rupture velocity fluctuations of similar length scale. The impact of fault geometry is especially pronounced when the rupture front velocity is near supershear. Roughness fluctuations significantly smaller than the characteristic dimension of the rupture front (the cohesive zone size) barely affect macroscopic rupture properties, thus posing a minimum length scale that limits the required resolution of 3D fault complexity. Lastly, the effect of fault curvature and roughness on the simulated ground motions is assessed. Despite employing a simple linear slip-weakening friction law, the simulated ground motions compare well with estimates from ground-motion prediction equations, even at relatively high frequencies.
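
    A minimal sketch of generating and band-pass filtering a self-affine (fractal) roughness profile of the kind discussed above (the Hurst exponent, amplitude scaling and cut-off wavelengths are hypothetical):

      import numpy as np

      def rough_profile(n=1024, dx=10.0, hurst=0.8, alpha=1e-2, seed=0):
          """Self-affine 1-D roughness by spectral synthesis: power-law amplitude
          spectrum with random phases, scaled so peak amplitude = alpha * length."""
          rng = np.random.default_rng(seed)
          k = np.fft.rfftfreq(n, d=dx)
          amp = np.zeros_like(k)
          amp[1:] = k[1:] ** (-(hurst + 0.5))            # 1-D amplitude fall-off
          spec = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, k.size))
          z = np.fft.irfft(spec, n)
          return z / np.abs(z).max() * alpha * n * dx

      def bandpass(z, dx, k_min, k_max):
          """Keep only roughness wavelengths between 1/k_max and 1/k_min."""
          k = np.fft.rfftfreq(len(z), d=dx)
          spec = np.fft.rfft(z)
          spec[(k < k_min) | (k > k_max)] = 0.0
          return np.fft.irfft(spec, len(z))

      z = rough_profile()
      z_long = bandpass(z, dx=10.0, k_min=1.0 / 5000.0, k_max=1.0 / 500.0)
      print("rms roughness, full vs band-passed: %.2f vs %.2f" % (z.std(), z_long.std()))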

  13. Studies of Fault Interactions and Regional Seismicity Using Numerical Simulations

    NASA Astrophysics Data System (ADS)

    Yikilmaz, Mehmet Burak

    Numerical simulations are routinely used for weather and climate forecasting. It is desirable to simulate regional seismicity for seismic hazard analysis. One such simulation tool is the Virtual California earthquake simulator. We have used Virtual California (VC) to study various aspects of fault interaction and analyzed the statistics of synthetically generated earthquake recurrence times and magnitudes. The first chapter of this dissertation investigates the behavior of seismicity simulations using three relatively simple models involving a straight strike-slip fault. We show that a series of historical earthquakes observed along the Nankai Trough in Japan exhibits patterns similar to those obtained in our model II. In the second chapter we utilize Virtual California to study regional seismicity in northern California. We generate synthetic catalogs of seismicity using a composite simulation. We use these catalogs to analyze frequency-magnitude and recurrence interval statistics at both a regional and a fault-specific level and compare our modeled rates of seismicity and spatial variability with observations. The final chapter explores the jump distance for a propagating rupture over a stepping strike-slip fault. Our study indicates that, for separation distances between 2.5 and 5.5 km, the percentage of events that jump from one fault to the next decreases significantly. We find that these step-over distance values are in good agreement with geologically observed values.

  14. Eigenvector of gravity gradient tensor for estimating fault dips considering fault type

    NASA Astrophysics Data System (ADS)

    Kusumoto, Shigekazu

    2017-12-01

    The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation as the fault dip affects estimations of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvector of the observed or calculated gravity gradient tensor on a profile and investigating its properties through numerical simulations. From numerical simulations, it was found that the maximum eigenvector of the tensor points to the high-density causative body, and the dip of the maximum eigenvector closely follows the dip of the normal fault. It was also found that the minimum eigenvector of the tensor points to the low-density causative body and that the dip of the minimum eigenvector closely follows the dip of the reverse fault. It was shown that the eigenvector of the gravity gradient tensor for estimating fault dips is determined by fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result that corresponded to conventional fault dip estimations by geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
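
    A minimal numerical illustration of reading dips from the eigenvectors of a 2-D gravity gradient tensor on a profile (the tensor values are hypothetical; the association of the maximum and minimum eigenvectors with normal- and reverse-fault dips follows the findings summarized above):

      import numpy as np

      # Hypothetical 2-D gravity gradient tensor at one profile point
      # (x horizontal, z vertical), components in Eotvos.
      G = np.array([[ 30.0, -12.0],
                    [-12.0, -30.0]])

      eigvals, eigvecs = np.linalg.eigh(G)       # eigenvalues in ascending order
      v_min, v_max = eigvecs[:, 0], eigvecs[:, 1]

      def dip_deg(v):
          """Angle of the eigenvector from the horizontal (x) axis, in degrees."""
          return np.degrees(np.arctan2(abs(v[1]), abs(v[0])))

      # Maximum eigenvector -> points to the high-density body (normal-fault dip);
      # minimum eigenvector -> points to the low-density body (reverse-fault dip).
      print("dip of maximum eigenvector: %.1f deg" % dip_deg(v_max))
      print("dip of minimum eigenvector: %.1f deg" % dip_deg(v_min))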

  15. Pseudo-dynamic source characterization accounting for rough-fault effects

    NASA Astrophysics Data System (ADS)

    Galis, Martin; Thingbaijam, Kiran K. S.; Mai, P. Martin

    2016-04-01

    Broadband ground-motion simulations, ideally for frequencies up to ~10 Hz or higher, are important for earthquake engineering, for example in seismic hazard analysis for critical facilities. An issue with such simulations is the realistic generation of the radiated wavefield in the desired frequency range. Numerical simulations of dynamic ruptures propagating on rough faults suggest that fault roughness is necessary for realistic high-frequency radiation. However, simulations of dynamic ruptures are too expensive for routine applications. Therefore, simplified synthetic kinematic models are often used. They are usually based on rigorous statistical analysis of rupture models inferred by inversions of seismic and/or geodetic data. However, due to the limited resolution of the inversions, these models are valid only for the low-frequency range. In addition to the slip, parameters such as rupture-onset time, rise time and source time functions are needed for a complete spatiotemporal characterization of the earthquake rupture, but these parameters are poorly resolved in source inversions. To obtain a physically consistent quantification of these parameters, we simulate and analyze spontaneous dynamic ruptures on rough faults. First, by analyzing the impact of fault roughness on the rupture and seismic radiation, we develop equivalent planar-fault kinematic analogues of the dynamic ruptures. Next, we investigate the spatial interdependencies between the source parameters to allow consistent modeling that emulates the observed behavior of dynamic ruptures capturing the rough-fault effects. Based on these analyses, we formulate a framework for a pseudo-dynamic source model that is physically consistent with dynamic ruptures on rough faults.

  16. An Analysis of Failure Handling in Chameleon, A Framework for Supporting Cost-Effective Fault Tolerant Services

    NASA Technical Reports Server (NTRS)

    Haakensen, Erik Edward

    1998-01-01

    The desire for low-cost reliable computing is increasing. Most current fault-tolerant computing solutions are not very flexible, i.e., they cannot adapt to the reliability requirements of newly emerging applications in business, commerce, and manufacturing. It is important that users have a flexible, reliable platform to support both critical and noncritical applications. Chameleon, under development at the Center for Reliable and High-Performance Computing at the University of Illinois, is a software framework for supporting cost-effective, adaptable, networked fault-tolerant services. This thesis details a simulation of fault injection, detection, and recovery in Chameleon. The simulation was written in C++ using the DEPEND simulation library. The results obtained from the simulation included the amount of overhead incurred by the fault detection and recovery mechanisms supported by Chameleon. In addition, information was gained about fault scenarios from which Chameleon cannot recover. The results of the simulation showed that both critical and noncritical applications can be executed in the Chameleon environment with a fairly small amount of overhead. No single point of failure from which Chameleon could not recover was found. Chameleon was also found to be capable of recovering from several multiple-failure scenarios.

  17. The View of Scientific Inquiry Conveyed by Simulation-Based Virtual Laboratories

    ERIC Educational Resources Information Center

    Chen, Sufen

    2010-01-01

    With an increasing number of studies evincing the effectiveness of simulation-based virtual laboratories (VLs), researchers have discussed replacing traditional laboratories. However, the approach of doing science endorsed by VLs has not been carefully examined. A survey of 233 online VLs revealed that hypothetico-deductive (HD) logic prevails in…

  18. What Can We Learn from a Simple Physics-Based Earthquake Simulator?

    NASA Astrophysics Data System (ADS)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2018-03-01

    Physics-based earthquake simulators are becoming a popular tool for investigating the earthquake occurrence process. So far, the development of earthquake simulators has commonly been led by the approach "the more physics, the better". However, this approach may hamper comprehension of the outcomes of the simulator; in fact, within complex models, it may be difficult to understand which physical parameters are the most relevant to the features of the seismic catalog in which we are interested. For this reason, here we take the opposite approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple simulator may be more informative than a complex one for some specific scientific objectives, because it is more understandable. Our earthquake simulator has three main components: the first is a realistic tectonic setting, i.e., a fault data set of California; the second is the application of quantitative laws for earthquake generation on each individual fault; and the last is fault interaction modeled through the Coulomb Failure Function. The analysis of this simple simulator shows that: (1) short-term clustering can be reproduced by a set of faults with an almost periodic behavior, which interact according to a Coulomb failure function model; (2) a long-term behavior showing supercycles of seismic activity exists only in a markedly deterministic framework, and quickly disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault; (3) faults that are strongly coupled in terms of the Coulomb failure function model are synchronized in time only in a markedly deterministic framework, and, as before, such synchronization disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault. Overall, the results show that even in a simple and perfectly known earthquake occurrence world, introducing a small degree of stochasticity may blur most of the deterministic time features, such as long-term trends and synchronization among nearby coupled faults.
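
    A minimal sketch of the Coulomb failure function used for the fault-interaction component described above (the stress changes and friction coefficient are hypothetical):

      # Coulomb failure stress change on a receiver fault:
      #   dCFF = d_tau + mu * d_sigma_n
      # with d_tau the shear stress change in the slip direction and d_sigma_n the
      # effective normal stress change (unclamping positive).
      def coulomb_failure_change(d_tau, d_sigma_n, mu=0.4):
          return d_tau + mu * d_sigma_n

      d_cff = coulomb_failure_change(d_tau=0.05e6, d_sigma_n=0.02e6)   # stresses in Pa
      verdict = "loaded toward failure" if d_cff > 0 else "relaxed"
      print("dCFF = %.3f MPa -> receiver fault %s" % (d_cff / 1e6, verdict))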

  19. Dynamic characteristics of a 20 kHz resonant power system - Fault identification and fault recovery

    NASA Technical Reports Server (NTRS)

    Wasynczuk, O.

    1988-01-01

    A detailed simulation of a dc inductor resonant driver and receiver is used to demonstrate the transient characteristics of a 20 kHz resonant power system during fault and overload conditions. The simulated system consists of a dc inductor resonant inverter (driver), a 50-meter transmission cable, and a dc inductor resonant receiver load. Of particular interest are the driver and receiver performance during fault and overload conditions and the recovery characteristics following removal of the fault. The information gained from these studies sets the stage for further work in fault identification and autonomous power system control.

  20. FAULT PROPAGATION AND EFFECTS ANALYSIS FOR DESIGNING AN ONLINE MONITORING SYSTEM FOR THE SECONDARY LOOP OF A NUCLEAR POWER PLANT PART OF A HYBRID ENERGY SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Huijuan; Diao, Xiaoxu; Li, Boyuan

    This paper studies the propagation and effects of faults of critical components that pertain to the secondary loop of a nuclear power plant found in Nuclear Hybrid Energy Systems (NHES). This information is used to design an on-line monitoring (OLM) system which is capable of detecting and forecasting faults that are likely to occur during NHES operation. In this research, the causes, features, and effects of possible faults are investigated by simulating the propagation of faults in the secondary loop. The simulation is accomplished by using the Integrated System Failure Analysis (ISFA). ISFA is used for analyzing hardware and software faults during the conceptual design phase. In this paper, the models of system components required by ISFA are initially constructed. Then, the fault propagation analysis is implemented, which is conducted under the bounds set by acceptance criteria derived from the design of an OLM system. The result of the fault simulation is utilized to build a database for fault detection and diagnosis, provide preventive measures, and propose an optimization plan for the OLM system.

  1. Modeling fluid flow and heat transfer at Basin and Range faults: preliminary results for Leach hot springs, Nevada

    USGS Publications Warehouse

    López, Dina L.; Smith, Leslie; Storey, Michael L.; Nielson, Dennis L.

    1994-01-01

    The hydrothermal systems of the Basin and Range Province are often located at or near major range-bounding normal faults. The flow of fluid and energy at these faults is affected by the advective transfer of heat and fluid from and to the adjacent mountain ranges and valleys. This paper addresses the effect of the exchange of fluid and energy between the country rock, the valley-fill sediments, and the fault zone on the fluid and heat flow regimes at the fault plane. For comparative purposes, the conditions simulated are patterned on Leach Hot Springs in southern Grass Valley, Nevada. Our simulations indicated that convection can exist at the fault plane even when the fault is exchanging significant heat and fluid with the surrounding country rock and valley-fill sediments. The temperature at the base of the fault decreased with increasing permeability of the country rock. Higher groundwater discharge from the fault and lower temperatures at the base of the fault are favored by high country rock permeabilities and fault transmissivities. Preliminary results suggest that basal temperatures and flow rates for Leach Hot Springs cannot be simulated with a fault 3 km deep and an average regional heat flow of 150 mW/m², because the computed basal temperature and mass discharge rates are too low. A fault permeable to greater depths or a higher regional heat flow may be indicated for these springs.

  2. Spatial and Temporal Variations in Slip Partitioning During Oblique Convergence Experiments

    NASA Astrophysics Data System (ADS)

    Beyer, J. L.; Cooke, M. L.; Toeneboehn, K.

    2017-12-01

    Physical experiments of oblique convergence in wet kaolin demonstrate the development of slip partitioning, where two faults accommodate strain via different slip vectors. In these experiments, the second fault forms after the development of the first fault. As one strain component is relieved by one fault, the local stress field then favors the development of a second fault with different slip sense. A suite of physical experiments reveals three styles of slip partitioning development controlled by the convergence angle and presence of a pre-existing fault. In experiments with low convergence angles, strike-slip faults grow prior to reverse faults (Type 1) regardless of whether the fault is precut or not. In experiments with moderate convergence angles, slip partitioning is dominantly controlled by the presence of a pre-existing fault. In all experiments, the primarily reverse fault forms first. Slip partitioning then develops with the initiation of strike-slip along the precut fault (Type 2) or growth of a secondary reverse fault where the first fault is steepest. Subsequently, the slip on the first fault transitions to primarily strike-slip (Type 3). Slip rates and rakes along the slip partitioned faults for both precut and uncut experiments vary temporally, suggesting that faults in these slip-partitioned systems are constantly adapting to the conditions produced by slip along nearby faults in the system. While physical experiments show the evolution of slip partitioning, numerical simulations of the experiments provide information about both the stress and strain fields, which can be used to compute the full work budget, providing insight into the mechanisms that drive slip partitioning. Preliminary simulations of precut experiments show that strain energy density (internal work) can be used to predict fault growth, highlighting where fault growth can reduce off-fault deformation in the physical experiments. In numerical simulations of uncut experiments with a first non-planar oblique slip fault, strain energy density is greatest where the first fault is steepest, as less convergence is accommodated along this portion of the fault. The addition of a second slip-partitioning fault to the system decreases external work indicating that these faults increase the mechanical efficiency of the system.

  3. Onboard Nonlinear Engine Sensor and Component Fault Diagnosis and Isolation Scheme

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong

    2011-01-01

    A method detects and isolates in-flight sensor, actuator, and component faults for advanced propulsion systems. In sharp contrast to many conventional methods, which deal with either sensor faults or component faults, but not both, this method considers sensor, actuator, and component faults under one systematic and unified framework. The proposed solution consists of two main components: a bank of real-time, nonlinear adaptive fault diagnostic estimators for residual generation, and a residual evaluation module that includes adaptive thresholds and a Transferable Belief Model (TBM)-based residual evaluation scheme. By employing a nonlinear adaptive learning architecture, the developed approach is capable of directly dealing with nonlinear engine models and nonlinear faults without the need for linearization. Software modules have been developed and evaluated with the NASA C-MAPSS engine model. Several typical engine-fault modes, including a subset of sensor, actuator, and component faults, were tested with a mild transient operation scenario. The simulation results demonstrated that the algorithm was able to successfully detect and isolate all simulated faults as long as the fault magnitudes were larger than the minimum detectable/isolable sizes, and no misdiagnosis occurred.
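
    A minimal sketch of residual generation and adaptive thresholding in the spirit of the scheme described above (the signal model, window and threshold rule are hypothetical, not the C-MAPSS implementation):

      import numpy as np

      rng = np.random.default_rng(1)
      n = 600
      measured = 1.0 + 0.02 * rng.standard_normal(n)   # hypothetical sensor reading
      measured[400:] += 0.15                            # injected sensor bias fault
      estimated = np.ones(n)                            # output of a nominal model

      residual = measured - estimated

      # Adaptive threshold: a margin that scales with the recent residual scatter.
      window, alarms = 50, []
      for k in range(window, n):
          sigma = residual[k - window:k].std()
          if abs(residual[k]) > 4.0 * sigma:
              alarms.append(k)

      print("first alarm at sample:", alarms[0] if alarms else None)   # expected near 400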

  4. Dynamic rupture scenarios from Sumatra to Iceland - High-resolution earthquake source physics on natural fault systems

    NASA Astrophysics Data System (ADS)

    Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Wollherr, Stephanie

    2017-04-01

    Capturing the observed complexity of earthquake sources in dynamic rupture simulations may require: non-linear fault friction, thermal and fluid effects, heterogeneous fault stress and fault strength initial conditions, fault curvature and roughness, and on- and off-fault non-elastic failure. All of these factors have been independently shown to alter dynamic rupture behavior and thus possibly influence the degree of realism attainable via simulated ground motions. In this presentation we will show examples of high-resolution earthquake scenarios, e.g. based on the 2004 Sumatra-Andaman Earthquake, the 1994 Northridge earthquake and a potential rupture of the Husavik-Flatey fault system in Northern Iceland. The simulations combine a multitude of representations of source complexity at the necessary spatio-temporal resolution, enabled by excellent scalability on modern HPC systems. Such simulations allow an analysis of the dominant factors impacting earthquake source physics and ground motions given distinct tectonic settings or distinct focuses of seismic hazard assessment. Across all simulations, we find that fault geometry, concurrently with the regional background stress state, provides a first-order influence on source dynamics and the emanated seismic wave field. The dynamic rupture models are performed with SeisSol, a software package based on an ADER-Discontinuous Galerkin scheme for solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. Use of unstructured tetrahedral meshes allows for a realistic representation of the non-planar fault geometry, subsurface structure and bathymetry. The results presented highlight the fact that modern numerical methods are essential to further our understanding of earthquake source physics and complement both physics-based ground motion research and empirical approaches in seismic hazard analysis.

  5. Self-adaptive Fault-Tolerance of HLA-Based Simulations in the Grid Environment

    NASA Astrophysics Data System (ADS)

    Huang, Jijie; Chai, Xudong; Zhang, Lin; Li, Bo Hu

    The objects of an HLA-based simulation can access model services to update their attributes. However, the grid server may become overloaded and refuse to let a model service handle object accesses. Because these objects accessed the model service during the previous simulation loop and their intermediate state is stored on that server, this refusal may terminate the simulation. A fault-tolerance mechanism must therefore be introduced into simulations. Traditional fault-tolerance methods, however, cannot meet this need, because the transmission latency between a federate and the RTI in a grid environment varies from several hundred milliseconds to several seconds. By adding model service URLs to the OMT and expanding the HLA services and model services with some interfaces, this paper proposes a self-adaptive fault-tolerance mechanism for simulations based on the characteristics of federates' access to model services. Benchmark experiments indicate that the expanded HLA/RTI allows simulations to run self-adaptively in the grid environment.

  6. Simulation-based reasoning about the physical propagation of fault effects

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan; Li, Dalu

    1990-01-01

    The research described deals with the effects of faults on complex physical systems, with particular emphasis on aircraft and spacecraft systems. Given that a malfunction has occurred and been diagnosed, the goal is to determine how that fault will propagate to other subsystems, and what the effects will be on vehicle functionality. In particular, the use of qualitative spatial simulation to determine the physical propagation of fault effects in 3-D space is described.

  7. Rupture Dynamics Simulation for Non-Planar fault by a Curved Grid Finite Difference Method

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Zhu, G.; Chen, X.

    2011-12-01

    We first implement a non-staggered finite difference method with split nodes to solve the dynamic rupture problem for non-planar faults. The split-node method has been used widely for dynamic simulation because it represents the fault plane more precisely than other methods, such as the thick fault or stress glut approaches. The finite difference method is also a popular numerical method for solving kinematic and dynamic problems in seismology. However, previous work has focused mostly on the staggered-grid method because of its simplicity and computational efficiency. That method, however, has disadvantages compared with the non-staggered finite difference method in some respects, for example in describing boundary conditions, especially irregular boundaries or non-planar faults. Zhang and Chen (2006) proposed a high-order MacCormack non-staggered finite difference method based on curved grids to solve irregular boundary problems precisely. Building on this non-staggered grid method, we successfully simulate the spontaneous rupture problem. The fault plane is a kind of boundary condition, which may of course be irregular, so we are confident that the rupture process can be simulated for any kind of bending fault plane. We first verify that the method is valid in Cartesian coordinates; for bending faults, curvilinear grids are used.

  8. A comparison between rate-and-state friction and microphysical models, based on numerical simulations of fault slip

    NASA Astrophysics Data System (ADS)

    van den Ende, M. P. A.; Chen, J.; Ampuero, J.-P.; Niemeijer, A. R.

    2018-05-01

    Rate-and-state friction (RSF) is commonly used for the characterisation of laboratory friction experiments, such as velocity-step tests. However, the RSF framework provides little physical basis for the extrapolation of these results to the scales and conditions of natural fault systems, and so open questions remain regarding the applicability of the experimentally obtained RSF parameters for predicting seismic cycle transients. As an alternative to classical RSF, microphysics-based models offer means for interpreting laboratory and field observations, but are generally over-simplified with respect to heterogeneous natural systems. In order to bridge the temporal and spatial gap between the laboratory and nature, we have implemented existing microphysical model formulations into an earthquake cycle simulator. Through this numerical framework, we make a direct comparison between simulations exhibiting RSF-controlled fault rheology, and simulations in which the fault rheology is dictated by the microphysical model. Even though the input parameters for the RSF simulation are directly derived from the microphysical model, the microphysics-based simulations produce significantly smaller seismic event sizes than the RSF-based simulation, and suggest a more stable fault slip behaviour. Our results reveal fundamental limitations in using classical rate-and-state friction for the extrapolation of laboratory results. The microphysics-based approach offers a more complete framework in this respect, and may be used for a more detailed study of the seismic cycle in relation to material properties and fault zone pressure-temperature conditions.
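
    A minimal sketch of the classical rate-and-state friction response to a velocity step, with aging-law state evolution (the parameter values are generic laboratory-scale numbers for illustration, not those of the study):

      import numpy as np

      # Rate-and-state friction with the aging law:
      #   mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc),   dtheta/dt = 1 - V*theta/Dc
      a, b, Dc, mu0, V0 = 0.010, 0.015, 1e-5, 0.6, 1e-6   # Dc in m, V0 in m/s

      dt, n = 1e-3, 40000
      V = np.full(n, 1e-6)
      V[n // 2:] = 1e-5                        # impose a tenfold velocity step

      theta = Dc / V[0]                        # start at steady state
      mu = np.empty(n)
      for i in range(n):
          theta += dt * (1.0 - V[i] * theta / Dc)
          mu[i] = mu0 + a * np.log(V[i] / V0) + b * np.log(V0 * theta / Dc)

      # Velocity weakening (b > a): friction jumps up, then relaxes below its old value.
      print("direct effect then evolution: mu jumps to %.4f, relaxes to %.4f"
            % (mu[n // 2], mu[-1]))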

  9. Numerical simulations of earthquakes and the dynamics of fault systems using the Finite Element method.

    NASA Astrophysics Data System (ADS)

    Kettle, L. M.; Mora, P.; Weatherley, D.; Gross, L.; Xing, H.

    2006-12-01

    Simulations using the Finite Element method are widely applied in many engineering problems and in the solution of partial differential equations (PDEs). Computational models based on the solution of PDEs play a key role in earth systems simulations. We present numerical modelling of crustal fault systems where the dynamic elastic wave equation is solved using the Finite Element method. This is achieved using a high-level computational modelling language, escript, available as open source software from ACcESS (Australian Computational Earth Systems Simulator) at the University of Queensland. Escript is an advanced geophysical simulation software package developed at ACcESS which includes parallel equation solvers, data visualisation and data analysis software. The escript library was used to develop a flexible Finite Element model which reliably simulates the mechanism of faulting and the physics of earthquakes. Both 2D and 3D elastodynamic models are being developed to study the dynamics of crustal fault systems. Our final goal is to build a flexible model which can be applied to any fault system with user-defined geometry and input parameters. To study the physics of earthquake processes, two different time scales must be modelled: first, the quasi-static loading phase, which gradually increases stress in the system (~100 years), and second, the dynamic rupture process, which rapidly redistributes stress in the system (~100 s). We will discuss the solution of the time-dependent elastic wave equation for an arbitrary fault system using escript. This involves prescribing the correct initial stress distribution in the system to simulate the quasi-static loading of faults to failure; determining a suitable frictional constitutive law which accurately reproduces the dynamics of the stick/slip instability at the faults; and using a robust time integration scheme. These dynamic models generate data and information that can be used for earthquake forecasting.

  10. Spatial Evaluation and Verification of Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against current observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we address the challenges of applying spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a decaying rate with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
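
    A toy version of the power-law smoothing idea described above, assuming a simple (1 + r/d0)^(-q) kernel; the kernel form, parameters and synthetic fault geometry are illustrative assumptions, not the paper's implementation.

      import numpy as np

      # Spread the rate contribution of each simulated, on-fault event over the whole test region
      # with a kernel that decays with epicentral distance (distances in km).
      def smoothed_rate_map(event_xy, grid_x, grid_y, d0=2.0, q=1.5):
          gx, gy = np.meshgrid(grid_x, grid_y)
          rate = np.zeros_like(gx)
          for ex, ey in event_xy:
              r = np.hypot(gx - ex, gy - ey)
              rate += (1.0 + r / d0) ** (-q)
          return rate / rate.sum()      # normalise so the map reads as a forecast probability

      # Example: simulated epicenters confined to a straight "fault" along y = 50 km.
      events = [(x, 50.0) for x in np.linspace(10, 90, 40)]
      rmap = smoothed_rate_map(events, np.linspace(0, 100, 101), np.linspace(0, 100, 101))
      # rmap can then be compared against observed off-fault epicenters, e.g. with an ROC curve.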

  11. Modeling of fault reactivation and induced seismicity during hydraulic fracturing of shale-gas reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rutqvist, Jonny; Rinaldi, Antonio P.; Cappa, Frédéric

    2013-07-01

    We have conducted numerical simulation studies to assess the potential for injection-induced fault reactivation and notable seismic events associated with shale-gas hydraulic fracturing operations. The modeling is generally tuned towards conditions usually encountered in the Marcellus shale play in the Northeastern US at an approximate depth of 1500 m (~4,500 feet). Our modeling simulations indicate that when faults are present, micro-seismic events are possible, the magnitude of which is somewhat larger than that associated with micro-seismic events originating from regular hydraulic fracturing, because of the larger surface area available for rupture. The results of our simulations indicated fault rupture lengths of about 10 to 20 m, which, in rare cases, can extend to over 100 m, depending on the fault permeability, the in situ stress field, and the fault strength properties. In addition to a single event rupture length of 10 to 20 m, repeated events and aseismic slip amounted to a total rupture length of 50 m, along with a shear offset displacement of less than 0.01 m. This indicates that the possibility of hydraulically induced fractures at great depth (thousands of meters) causing activation of faults and creation of a new flow path that can reach shallow groundwater resources (or even the surface) is remote. The expected low permeability of faults in producible shale is clearly a limiting factor for the possible rupture length and seismic magnitude. In fact, for a fault that is initially nearly impermeable, the only possibility of a larger fault-slip event would be opening of the fault by hydraulic fracturing; this would allow pressure to penetrate the matrix along the fault and to reduce the frictional strength over a sufficiently large fault surface patch. However, our simulation results show that if the fault is initially impermeable, hydraulic fracturing along the fault results in numerous small micro-seismic events along with the fracture propagation, effectively preventing larger events from occurring. Nevertheless, care should be taken with continuous monitoring of induced seismicity during the entire injection process to detect any runaway fracturing along faults.
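
    As a back-of-the-envelope companion to the reactivation discussion above, the sketch below evaluates the static Coulomb condition for slip on a fault under increasing pore pressure. It is not the coupled hydro-mechanical simulator used in the study, and all numbers are illustrative.

      # Static Coulomb check: slip becomes possible once shear stress exceeds frictional
      # resistance on the fault plane, tau > cohesion + mu * (sigma_n - p).
      def reactivation_overpressure(sigma_n_eff, tau, mu=0.6, cohesion=0.0):
          """Pore-pressure increase (same units as stress) needed to bring the fault to failure."""
          # Solve tau = cohesion + mu * (sigma_n_eff - dp_crit) for the critical pressure change.
          return sigma_n_eff - (tau - cohesion) / mu

      # Roughly 1500 m depth: total normal stress ~35 MPa, ambient pore pressure ~15 MPa,
      # resolved shear stress ~10 MPa (all illustrative).
      dp_crit = reactivation_overpressure(sigma_n_eff=35.0 - 15.0, tau=10.0)
      print(f"additional overpressure to reactivate the fault: ~{dp_crit:.1f} MPa")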

  12. Simulation of fault performance of a diesel engine driven brushless alternator through PSPICE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayanan, S.S.Y.; Ananthakrishnan, P.; Hangari, V.U.

    1995-12-31

    Analysis of the fault performance of a brushless alternator with damper windings in the main alternator has been handled ab initio as a total modeling and simulation problem through proper application of Park's equivalent circuit approach individually to the main and exciter alternator units of the brushless alternator, implemented through PSPICE. The accuracy of the parameters used in the modeling and the results obtained through the PSPICE implementation are then evaluated for a specific 125 kVA brushless alternator in two stages as follows: first, by comparison of the predicted fault performance obtained from simulation of the 125 kVA main alternator alone, treated as a conventional alternator, with results obtained through closed-form analytical expressions available in the literature for fault currents and torques in such conventional alternators; and second, by comparison of some of the simulation results with those obtained experimentally on the brushless alternator itself. To enable proper calculation of the derating factors used in the design of such brushless alternators, the simulation results also include harmonic analysis of the steady-state fault currents and torques. Throughout these studies, the brushless alternator is treated as being on no load at the instant of fault occurrence.
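
    The harmonic analysis step mentioned above can be illustrated with a simple single-sided FFT of a synthetic steady-state fault-current waveform. The waveform and harmonic amplitudes below are invented for illustration; the real inputs would come from the PSPICE simulation of the alternator.

      import numpy as np

      fs, f0, T = 10000.0, 50.0, 1.0                 # sample rate (Hz), fundamental (Hz), duration (s)
      t = np.arange(0.0, T, 1.0 / fs)
      i_fault = (100.0 * np.sin(2 * np.pi * f0 * t)
                 + 20.0 * np.sin(2 * np.pi * 3 * f0 * t)
                 + 8.0 * np.sin(2 * np.pi * 5 * f0 * t))

      spectrum = np.abs(np.fft.rfft(i_fault)) * 2.0 / len(i_fault)   # single-sided amplitude spectrum
      freqs = np.fft.rfftfreq(len(i_fault), 1.0 / fs)
      for n in (1, 3, 5, 7):                          # report amplitude at the first odd harmonics
          k = np.argmin(np.abs(freqs - n * f0))
          print(f"harmonic {n} ({n * f0:.0f} Hz): {spectrum[k]:.1f} A")

      fund = spectrum[np.argmin(np.abs(freqs - f0))]
      harmonics = [spectrum[np.argmin(np.abs(freqs - n * f0))] for n in (3, 5, 7)]
      print("THD of the synthetic current:", np.sqrt(sum(h * h for h in harmonics)) / fund)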

  13. Staged-Fault Testing of Distance Protection Relay Settings

    NASA Astrophysics Data System (ADS)

    Havelka, J.; Malarić, R.; Frlan, K.

    2012-01-01

    In order to analyze the operation of the protection system during induced fault testing in the Croatian power system, a simulation using the CAPE software has been performed. The CAPE software (Computer-Aided Protection Engineering) is expert software intended primarily for relay protection engineers, which calculates current and voltage values during faults in the power system, so that relay protection devices can be properly set up. Once the accuracy of the simulation model had been confirmed, a series of simulations were performed in order to obtain the optimal fault location to test the protection system. The simulation results were used to specify the test sequence definitions for the end-to-end relay testing using advanced testing equipment with GPS synchronization for secondary injection in protection schemes based on communication. The objective of the end-to-end testing was to perform field validation of the protection settings, including verification of the circuit breaker operation, telecommunication channel time and the effectiveness of the relay algorithms. Once the end-to-end secondary injection testing had been completed, the induced fault testing was performed with three-end lines loaded and in service. This paper describes and analyses the test procedure, consisting of CAPE simulations, end-to-end test with advanced secondary equipment and staged-fault test of a three-end power line in the Croatian transmission system.

  14. In-flight Fault Detection and Isolation in Aircraft Flight Control Systems

    NASA Technical Reports Server (NTRS)

    Azam, Mohammad; Pattipati, Krishna; Allanach, Jeffrey; Poll, Scott; Patterson-Hine, Ann

    2005-01-01

    In this paper we consider the problem of test design for real-time fault detection and isolation (FDI) in the flight control system of fixed-wing aircraft. We focus on the faults that are manifested in the control surface elements (e.g., aileron, elevator, rudder and stabilizer) of an aircraft. For demonstration purposes, we restrict our focus to faults belonging to nine basic fault classes. The diagnostic tests are performed on the features extracted from fifty monitored system parameters. The proposed tests are able to uniquely isolate each of the faults at almost all severity levels. A neural network-based flight control simulator, FLTZ®, is used for the simulation of various faults in fixed-wing aircraft flight control systems for the purpose of FDI.

  15. Ground-motion signature of dynamic ruptures on rough faults

    NASA Astrophysics Data System (ADS)

    Mai, P. Martin; Galis, Martin; Thingbaijam, Kiran K. S.; Vyas, Jagdish C.

    2016-04-01

    Natural earthquakes occur on faults characterized by large-scale segmentation and small-scale roughness. This multi-scale geometrical complexity controls the dynamic rupture process, and hence strongly affects the radiated seismic waves and near-field shaking. For a fault system with given segmentation, the question arises as to the conditions for producing large-magnitude multi-segment ruptures, as opposed to smaller single-segment events. Similarly, for variable degrees of roughness, ruptures may be arrested prematurely or may break the entire fault. In addition, fault roughness induces rupture incoherence that determines the level of high-frequency radiation. Using HPC-enabled dynamic-rupture simulations, we generate physically self-consistent rough-fault earthquake scenarios (M~6.8) and their associated near-source seismic radiation. Because these computations are too expensive to be conducted routinely for simulation-based seismic hazard assessment, we strive to develop an effective pseudo-dynamic source characterization that produces (almost) the same ground-motion characteristics. Therefore, we examine how variable degrees of fault roughness affect rupture properties and the seismic wavefield, and develop a planar-fault kinematic source representation that emulates the observed dynamic behaviour. We propose an effective workflow for improved pseudo-dynamic source modelling that incorporates rough-fault effects and its associated high-frequency radiation in broadband ground-motion computation for simulation-based seismic hazard assessment.

  16. Effect of fault roughness on aftershock distribution and post co-seismic strain accumulation.

    NASA Astrophysics Data System (ADS)

    Aslam, K.; Daub, E. G.

    2017-12-01

    We perform physics-based simulations of earthquake rupture propagation on geometrically complex strike-slip faults. We consider many different realizations of the fault roughness and obtain heterogeneous stress fields by performing dynamic rupture simulations of large earthquakes. We calculate the Coulomb failure function (CFF) for all these realizations so that we can quantify zones of stress increase/shadows surrounding the main fault and compare our results to seismic catalogs. To do this comparison, we use relocated earthquake catalogs from Northern and Southern California. We specify the range of fault roughness parameters based on past observational studies. The Hurst exponent (H) varies in the range 0.5 to 1, and the RMS height-to-wavelength ratio (the RMS deviation of a fault profile from planarity) takes values between 10^-3 and 10^-2. For any realization of fault roughness, the probability density function (PDF) of values relative to the mean CFF change shows a wider spread near the fault, and this spread narrows into a tight band as we move away from the fault. For the lower RMS ratio (~10^-3), we see larger zones of stress change near the hypocenter, and for the higher RMS ratio (~10^-2), we see alternating zones of stress increase/decrease of comparable length surrounding the fault. We also couple short-term dynamic rupture simulation with long-term tectonic modelling. We do this by passing the stress output from one of the dynamic rupture simulations (of a single realization of fault roughness) to a long-term tectonic model (LTM) as an initial condition and then running the LTM over the duration of a seismic cycle. This short-term/long-term coupling enables us to understand how heterogeneous stresses due to fault geometry influence the dynamics of strain accumulation in the post-seismic and inter-seismic phases of the seismic cycle.
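
    The roughness parameters quoted above (Hurst exponent and RMS height-to-wavelength ratio) can be illustrated by generating a self-affine profile with a standard spectral-synthesis approach. The sketch below is an illustration only, not the authors' roughness code, and its parameters are arbitrary.

      import numpy as np

      # Generate a self-affine rough fault profile with a prescribed Hurst exponent H and
      # RMS height-to-wavelength ratio, using random-phase spectral synthesis.
      def rough_profile(n=4096, length=40e3, hurst=0.7, rms_ratio=1e-2, seed=0):
          rng = np.random.default_rng(seed)
          k = np.fft.rfftfreq(n, d=length / n)             # spatial wavenumbers (1/m)
          amp = np.zeros_like(k)
          amp[1:] = k[1:] ** (-(0.5 + hurst))              # power-law spectrum: P(k) ~ k^-(1+2H)
          phase = rng.uniform(0.0, 2 * np.pi, k.size)
          z = np.fft.irfft(amp * np.exp(1j * phase), n=n)
          z *= rms_ratio * length / np.std(z)              # rescale to the target RMS/length ratio
          return np.linspace(0.0, length, n), z - z.mean()

      x, z = rough_profile(hurst=0.7, rms_ratio=1e-2)
      print("RMS deviation from planarity:", np.std(z), "m over a", x[-1] / 1e3, "km profile")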

  17. The role of elasticity in simulating long-term tectonic extension

    NASA Astrophysics Data System (ADS)

    Olive, Jean-Arthur; Behn, Mark D.; Mittelstaedt, Eric; Ito, Garrett; Klein, Benjamin Z.

    2016-05-01

    While elasticity is a defining characteristic of the Earth's lithosphere, it is often ignored in numerical models of long-term tectonic processes in favour of a simpler viscoplastic description. Here we assess the consequences of this assumption on a well-studied geodynamic problem: the growth of normal faults at an extensional plate boundary. We conduct 2-D numerical simulations of extension in elastoplastic and viscoplastic layers using a finite difference, particle-in-cell numerical approach. Our models simulate a range of faulted layer thicknesses and extension rates, allowing us to quantify the role of elasticity on three key observables: fault-induced topography, fault rotation, and fault life span. In agreement with earlier studies, simulations carried out in elastoplastic layers produce rate-independent lithospheric flexure accompanied by rapid fault rotation and an inverse relationship between fault life span and faulted layer thickness. By contrast, models carried out with a viscoplastic lithosphere produce results that may qualitatively resemble the elastoplastic case, but depend strongly on the product of extension rate and layer viscosity U × ηL. When this product is high, fault growth initially generates little deformation of the footwall and hanging wall blocks, resulting in unrealistic, rigid block-offset in topography across the fault. This configuration progressively transitions into a regime where topographic decay associated with flexure is fully accommodated within the numerical domain. In addition, high U × ηL favours the sequential growth of multiple short-offset faults as opposed to a large-offset detachment. We interpret these results by comparing them to an analytical model for the fault-induced flexure of a thin viscous plate. The key to understanding the viscoplastic model results lies in the rate-dependence of the flexural wavelength of a viscous plate, and the strain rate dependence of the force increase associated with footwall and hanging wall bending. This behaviour produces unrealistic deformation patterns that can hinder the geological relevance of long-term rifting models that assume a viscoplastic rheology.

  18. Simulation Based Earthquake Forecasting with RSQSim

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Jordan, T. H.; Dieterich, J. H.; Richards-Dinger, K. B.

    2016-12-01

    We are developing a physics-based forecasting model for earthquake ruptures in California. We employ the 3D boundary element code RSQSim to generate synthetic catalogs with millions of events that span up to a million years. The simulations incorporate rate-state fault constitutive properties in complex, fully interacting fault systems. The Unified California Earthquake Rupture Forecast Version 3 (UCERF3) model and data sets are used for calibration of the catalogs and specification of fault geometry. Fault slip rates match the UCERF3 geologic slip rates and catalogs are tuned such that earthquake recurrence matches the UCERF3 model. Utilizing the Blue Waters Supercomputer, we produce a suite of million-year catalogs to investigate the epistemic uncertainty in the physical parameters used in the simulations. In particular, values of the rate- and state-friction parameters a and b, the initial shear and normal stress, as well as the earthquake slip speed, are varied over several simulations. In addition to testing multiple models with homogeneous values of the physical parameters, the parameters a, b, and the normal stress are varied with depth as well as in heterogeneous patterns across the faults. Cross validation of UCERF3 and RSQSim is performed within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM) to determine the effect of the uncertainties in physical parameters observed in the field and measured in the lab on the uncertainties in probabilistic forecasting. We are particularly interested in the short-term hazards of multi-event sequences due to complex faulting and multi-fault ruptures.

  19. DEPEND: A simulation-based environment for system level dependability analysis

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar; Iyer, Ravishankar K.

    1992-01-01

    The design and evaluation of highly reliable computer systems is a complex issue. Designers mostly develop such systems based on prior knowledge and experience and occasionally from analytical evaluations of simplified designs. A simulation-based environment called DEPEND which is especially geared for the design and evaluation of fault-tolerant architectures is presented. DEPEND is unique in that it exploits the properties of object-oriented programming to provide a flexible framework with which a user can rapidly model and evaluate various fault-tolerant systems. The key features of the DEPEND environment are described, and its capabilities are illustrated with a detailed analysis of a real design. In particular, DEPEND is used to simulate the Unix-based Tandem Integrity fault-tolerant system and evaluate how well it handles near-coincident errors caused by correlated and latent faults. Issues such as memory scrubbing, re-integration policies, and workload dependent repair times which affect how the system handles near-coincident errors are also evaluated. Issues such as the method used by DEPEND to simulate error latency and the time acceleration technique that provides enormous simulation speed up are also discussed. Unlike other simulation-based dependability studies, the use of these approaches and the accuracy of the simulation model are validated by comparing the results of the simulations with measurements obtained from fault injection experiments conducted on a production Tandem Integrity machine.

  20. Modeling earthquake magnitudes from injection-induced seismicity on rough faults

    NASA Astrophysics Data System (ADS)

    Maurer, J.; Dunham, E. M.; Segall, P.

    2017-12-01

    It is an open question whether perturbations to the in-situ stress field due to fluid injection affect the magnitudes of induced earthquakes. It has been suggested that characteristics such as the total injected fluid volume control the size of induced events (e.g., Baisch et al., 2010; Shapiro et al., 2011). On the other hand, Van der Elst et al. (2016) argue that the size distribution of induced earthquakes follows Gutenberg-Richter, the same as tectonic events. Numerical simulations support the idea that ruptures nucleating inside regions with high shear-to-effective normal stress ratio may not propagate into regions with lower stress (Dieterich et al., 2015; Schmitt et al., 2015), however, these calculations are done on geometrically smooth faults. Fang & Dunham (2013) show that rupture length on geometrically rough faults is variable, but strongly dependent on background shear/effective normal stress. In this study, we use a 2-D elasto-dynamic rupture simulator that includes rough fault geometry and off-fault plasticity (Dunham et al., 2011) to simulate earthquake ruptures under realistic conditions. We consider aggregate results for faults with and without stress perturbations due to fluid injection. We model a uniform far-field background stress (with local perturbations around the fault due to geometry), superimpose a poroelastic stress field in the medium due to injection, and compute the effective stress on the fault as inputs to the rupture simulator. Preliminary results indicate that even minor stress perturbations on the fault due to injection can have a significant impact on the resulting distribution of rupture lengths, but individual results are highly dependent on the details of the local stress perturbations on the fault due to geometric roughness.

  1. SOM neural network fault diagnosis method of polymerization kettle equipment optimized by improved PSO algorithm.

    PubMed

    Wang, Jie-sheng; Li, Shu-xia; Gao, Jie

    2014-01-01

    To meet the real-time fault diagnosis and optimized monitoring requirements of the polymerization kettle in the polyvinyl chloride (PVC) resin production process, a fault diagnosis strategy based on the self-organizing map (SOM) neural network is proposed. Firstly, a mapping between the polymerization process data and the fault patterns is established by analyzing the production technology of the polymerization kettle equipment. The particle swarm optimization (PSO) algorithm with a new dynamic adjustment method for the inertial weights is adopted to optimize the structural parameters of the SOM neural network. Fault pattern classification of the polymerization kettle equipment then realizes the nonlinear mapping from a given symptom set to the fault set. Finally, fault-diagnosis simulation experiments are conducted using industrial on-site historical data from the polymerization kettle, and the simulation results show that the proposed PSO-SOM fault diagnosis strategy is effective.
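
    To make the SOM part of the strategy concrete, the following is a bare-bones SOM training loop written directly in NumPy on synthetic two-class "symptom" data. The PSO optimization of the map's structural parameters and the real polymerization-kettle data are not reproduced here; all values are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)
      n_features, grid = 4, (6, 6)
      weights = rng.random((grid[0], grid[1], n_features))
      # Synthetic symptom vectors: two fault classes clustered around different operating points.
      data = np.vstack([rng.normal(0.2, 0.05, (100, n_features)),
                        rng.normal(0.8, 0.05, (100, n_features))])

      ii, jj = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij")
      for epoch in range(200):
          lr = 0.5 * (1 - epoch / 200)                 # decaying learning rate
          sigma = 3.0 * (1 - epoch / 200) + 0.5        # decaying neighbourhood radius
          for x in rng.permutation(data):
              d = np.linalg.norm(weights - x, axis=2)  # distance of every map node to the sample
              bi, bj = np.unravel_index(np.argmin(d), grid)        # best-matching unit
              h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
              weights += lr * h[..., None] * (x - weights)         # pull neighbourhood towards sample

      # After training, a new symptom vector is classified by the label attached to its best-matching unit.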

  2. Hardware fault insertion and instrumentation system: Mechanization and validation

    NASA Technical Reports Server (NTRS)

    Benson, J. W.

    1987-01-01

    Automated test capability for extensive low-level hardware fault insertion testing is developed. The test capability is used to calibrate fault detection coverage and associated latency times as relevant to projecting overall system reliability. Described are modifications made to the NASA Ames Reconfigurable Digital Flight Control System (RDFCS) Facility to fully automate the total test loop involving the Draper Laboratories' Fault Injector Unit. The automated capability provided included the application of sequences of simulated low-level hardware faults, the precise measurement of fault latency times, the identification of fault symptoms, and bulk storage of test case results. A PDP-11/60 served as a test coordinator, and a PDP-11/04 as an instrumentation device. The fault injector was controlled by applications test software in the PDP-11/60, rather than by manual commands from a terminal keyboard. The time base was especially developed for this application to use a variety of signal sources in the system simulator.

  3. A Design of Finite Memory Residual Generation Filter for Sensor Fault Detection

    NASA Astrophysics Data System (ADS)

    Kim, Pyung Soo

    2017-04-01

    In the current paper, a residual generation filter with finite memory structure is proposed for sensor fault detection. The proposed finite memory residual generation filter provides the residual by real-time filtering of the fault vector using only the most recent finite measurements and inputs on the window. It is shown that the residual given by the proposed residual generation filter provides the exact fault for noise-free systems. The proposed residual generation filter is specified in a digital filter structure for amenability to hardware implementation. Finally, to illustrate the capability of the proposed residual generation filter, extensive simulations are performed for the discretized DC motor system with two types of sensor faults, an incipient soft bias-type fault and an abrupt bias-type fault. In particular, according to diverse noise levels and window lengths, meaningful simulation results are given for the abrupt bias-type fault.
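
    A much-simplified illustration of the finite-memory idea above: a residual computed from only the most recent N samples flags an abrupt sensor bias quickly, because older data cannot dilute it. This plain moving-window residual is not the exact filter derived in the paper; the signal, noise level and threshold are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      N, thresh = 20, 0.5
      true = np.zeros(400)
      meas = true + rng.normal(0.0, 0.1, true.size)
      meas[250:] += 1.0                              # abrupt bias-type sensor fault at sample 250

      for k in range(N, meas.size):
          window = meas[k - N:k]                     # only the most recent N measurements are used
          residual = meas[k] - window.mean()
          if abs(residual) > thresh:
              print("fault flagged at sample", k)
              break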

  4. A note on adding viscoelasticity to earthquake simulators

    USGS Publications Warehouse

    Pollitz, Fred

    2017-01-01

    Here, I describe how time‐dependent quasi‐static stress transfer can be implemented in an earthquake simulator code that is used to generate long synthetic seismicity catalogs. Most existing seismicity simulators use precomputed static stress interaction coefficients to rapidly implement static stress transfer in fault networks with typically tens of thousands of fault patches. The extension to quasi‐static deformation, which accounts for viscoelasticity of Earth’s ductile lower crust and mantle, involves the precomputation of additional interaction coefficients that represent time‐dependent stress transfer among the model fault patches, combined with defining and evolving additional state variables that track this stress transfer. The new approach is illustrated with application to a California‐wide synthetic fault network.
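
    A toy two-patch illustration of the extension described above: alongside the instantaneous elastic interaction coefficient, a second precomputed coefficient delivers stress gradually as the ductile lower crust and mantle relax, tracked here with a single exponential kernel. The coefficients and relaxation time are invented numbers, not values from the paper.

      import numpy as np

      K_elastic = 0.30     # MPa of stress on patch B per metre of slip on patch A, instantaneous
      K_visco   = 0.15     # additional MPa per metre transferred as relaxation completes
      tau_relax = 30.0     # viscoelastic relaxation time (years)

      def stress_on_B(slip_A, years_since_event):
          # Static step plus a delayed, exponentially saturating viscoelastic contribution.
          delayed = K_visco * (1.0 - np.exp(-years_since_event / tau_relax))
          return slip_A * (K_elastic + delayed)

      for t in (0.0, 10.0, 100.0):
          print(f"{t:5.0f} yr after a 2 m event: {stress_on_B(2.0, t):.2f} MPa on the neighbouring patch")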

  5. Detection of CMOS bridging faults using minimal stuck-at fault test sets

    NASA Technical Reports Server (NTRS)

    Ijaz, Nabeel; Frenzel, James F.

    1993-01-01

    The performance of minimal stuck-at fault test sets at detecting bridging faults are evaluated. New functional models of circuit primitives are presented which allow accurate representation of bridging faults under switch-level simulation. The effectiveness of the patterns is evaluated using both voltage and current testing.

  6. CO2 Push-Pull Single Fault Injection Simulations

    DOE Data Explorer

    Borgia, Andrea; Oldenburg, Curtis (ORCID:0000000201326016); Zhang, Rui; Pan, Lehua; Daley, Thomas M.; Finsterle, Stefan; Ramakrishnan, T.S.; Doughty, Christine; Jung, Yoojin; Lee, Kyung Jae; Altundas, Bilgin; Chugunov, Nikita

    2017-09-21

    ASCII text files containing grid-block name, X-Y-Z location, and multiple parameters from TOUGH2 simulation output of CO2 injection into an idealized single fault representing a dipping normal fault at the Desert Peak geothermal field (readable by GMS). The fault is composed of a damage zone, a fault gouge and a slip plane. The runs are described in detail in the following: Borgia A., Oldenburg C.M., Zhang R., Jung Y., Lee K.J., Doughty C., Daley T.M., Chugunov N., Altundas B., Ramakrishnan T.S., 2017. Carbon Dioxide Injection for Enhanced Characterization of Faults and Fractures in Geothermal Systems. Proceedings of the 42nd Workshop on Geothermal Reservoir Engineering, Stanford University, Stanford, California, February 13-17.

  7. Dynamic rupture scenarios from Sumatra to Iceland - High-resolution earthquake source physics on natural fault systems

    NASA Astrophysics Data System (ADS)

    Gabriel, A. A.; Madden, E. H.; Ulrich, T.; Wollherr, S.

    2016-12-01

    Capturing the observed complexity of earthquake sources in dynamic rupture simulations may require: non-linear fault friction, thermal and fluid effects, heterogeneous fault stress and strength initial conditions, fault curvature and roughness, on- and off-fault non-elastic failure. All of these factors have been independently shown to alter dynamic rupture behavior and thus possibly influence the degree of realism attainable via simulated ground motions. In this presentation we will show examples of high-resolution earthquake scenarios, e.g. based on the 2004 Sumatra-Andaman Earthquake and a potential rupture of the Husavik-Flatey fault system in Northern Iceland. The simulations combine a multitude of representations of source complexity at the necessary spatio-temporal resolution enabled by excellent scalability on modern HPC systems. Such simulations allow an analysis of the dominant factors impacting earthquake source physics and ground motions given distinct tectonic settings or distinct focuses of seismic hazard assessment. Across all simulations, we find that fault geometry concurrently with the regional background stress state provide a first order influence on source dynamics and the emanated seismic wave field. The dynamic rupture models are performed with SeisSol, a software package based on an ADER-Discontinuous Galerkin scheme for solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. Use of unstructured tetrahedral meshes allows for a realistic representation of the non-planar fault geometry, subsurface structure and bathymetry. The results presented highlight the fact that modern numerical methods are essential to further our understanding of earthquake source physics and complement both physics-based ground-motion research and empirical approaches in seismic hazard analysis.

  8. Simulated tsunami inundation for a range of Cascadia megathrust earthquake scenarios at Bandon, Oregon, USA

    USGS Publications Warehouse

    Witter, Robert C.; Zhang, Yinglong J.; Wang, Kelin; Priest, George R.; Goldfinger, Chris; Stimely, Laura; English, John T.; Ferro, Paul A.

    2013-01-01

    Characterizations of tsunami hazards along the Cascadia subduction zone hinge on uncertainties in megathrust rupture models used for simulating tsunami inundation. To explore these uncertainties, we constructed 15 megathrust earthquake scenarios using rupture models that supply the initial conditions for tsunami simulations at Bandon, Oregon. Tsunami inundation varies with the amount and distribution of fault slip assigned to rupture models, including models where slip is partitioned to a splay fault in the accretionary wedge and models that vary the updip limit of slip on a buried fault. Constraints on fault slip come from onshore and offshore paleoseismological evidence. We rank each rupture model using a logic tree that evaluates a model’s consistency with geological and geophysical data. The scenarios provide inputs to a hydrodynamic model, SELFE, used to simulate tsunami generation, propagation, and inundation on unstructured grids with <5–15 m resolution in coastal areas. Tsunami simulations delineate the likelihood that Cascadia tsunamis will exceed mapped inundation lines. Maximum wave elevations at the shoreline varied from ∼4 m to 25 m for earthquakes with 9–44 m slip and Mw 8.7–9.2. Simulated tsunami inundation agrees with sparse deposits left by the A.D. 1700 and older tsunamis. Tsunami simulations for large (22–30 m slip) and medium (14–19 m slip) splay fault scenarios encompass 80%–95% of all inundation scenarios and provide reasonable guidelines for land-use planning and coastal development. The maximum tsunami inundation simulated for the greatest splay fault scenario (36–44 m slip) can help to guide development of local tsunami evacuation zones.

  9. Frictional behavior of large displacement experimental faults

    USGS Publications Warehouse

    Beeler, N.M.; Tullis, T.E.; Blanpied, M.L.; Weeks, J.D.

    1996-01-01

    The coefficient of friction and velocity dependence of friction of initially bare surfaces and 1-mm-thick simulated fault gouges were measured to displacements of up to 400 mm at 25 °C and 25 MPa normal stress. Steady state negative friction velocity dependence and a steady state fault zone microstructure are achieved after ~18 mm displacement, and an approximately constant strength is reached after a few tens of millimeters of sliding on initially bare surfaces. Simulated fault gouges show a large but systematic variation of friction, velocity dependence of friction, dilatancy, and degree of localization with displacement. At short displacement (<10 mm), simulated gouge is strong and velocity strengthening, and changes in sliding velocity are accompanied by relatively large changes in dilatancy rate. With continued displacement, simulated gouges become progressively weaker and less velocity strengthening, the velocity dependence of dilatancy rate decreases, and deformation becomes localized into a narrow basal shear which at its most localized is observed to be velocity weakening. With subsequent displacement, the fault restrengthens, returns to velocity strengthening, or to velocity neutral, the velocity dependence of dilatancy rate becomes larger, and deformation becomes distributed. Correlation of friction, velocity dependence of friction and of dilatancy rate, and degree of localization at all displacements in simulated gouge suggests that all quantities are interrelated. The observations do not distinguish the independent variables but suggest that the degree of localization is controlled by the fault strength, not by the friction velocity dependence. The friction velocity dependence and velocity dependence of dilatancy rate can be used as qualitative measures of the degree of localization in simulated gouge, in agreement with previous studies. Theory equating the friction velocity dependence of simulated gouge to the sum of the friction velocity dependence of bare surfaces and the velocity dependence of dilatancy rate of simulated gouge fails to quantitatively account for the experimental observations.

  10. How to build and teach with QuakeCaster: an earthquake demonstration and exploration tool

    USGS Publications Warehouse

    Linton, Kelsey; Stein, Ross S.

    2015-01-01

    QuakeCaster is an interactive, hands-on teaching model that simulates earthquakes and their interactions along a plate-boundary fault. QuakeCaster contains the minimum number of physical processes needed to demonstrate most observable earthquake features. A winch to steadily reel in a line simulates the steady plate tectonic motions far from the plate boundaries. A granite slider in frictional contact with a nonskid rock-like surface simulates a fault at a plate boundary. A rubber band connecting the line to the slider simulates the elastic character of the Earth’s crust. By stacking and unstacking sliders and cranking in the winch, one can see the results of changing the shear stress and the clamping stress on a fault. By placing sliders in series with rubber bands between them, one can simulate the interaction of earthquakes along a fault, such as cascading or toggling shocks. By inserting a load scale into the line, one can measure the stress acting on the fault throughout the earthquake cycle. As observed for real earthquakes, QuakeCaster events are not periodic, time-predictable, or slip-predictable. QuakeCaster produces rare but unreliable “foreshocks.” When fault gouge builds up, the friction goes to zero and fault creep is seen without large quakes. QuakeCaster events produce very small amounts of fault gouge that strongly alter its behavior, resulting in smaller, more frequent shocks as the gouge accumulates. QuakeCaster is designed so that students or audience members can operate it and record its output. With a stopwatch and ruler one can measure and plot the timing, slip distance, and force results of simulated earthquakes. People of all ages can use the QuakeCaster model to explore hypotheses about earthquake occurrence. QuakeCaster takes several days and about $500.00 in materials to build.

  11. Integrated Fault Diagnosis Algorithm for Motor Sensors of In-Wheel Independent Drive Electric Vehicles.

    PubMed

    Jeon, Namju; Lee, Hyeongcheol

    2016-12-12

    An integrated fault-diagnosis algorithm for a motor sensor of in-wheel independent drive electric vehicles is presented. This paper proposes a method that integrates the high- and low-level fault diagnoses to improve the robustness and performance of the system. For the high-level fault diagnosis of vehicle dynamics, a planar two-track non-linear model is first selected, and the longitudinal and lateral forces are calculated. To ensure redundancy of the system, correlation between the sensor and residual in the vehicle dynamics is analyzed to detect and separate the fault of the drive motor system of each wheel. To diagnose the motor system for low-level faults, the state equation of an interior permanent magnet synchronous motor is developed, and a parity equation is used to diagnose the fault of the electric current and position sensors. The validity of the high-level fault-diagnosis algorithm is verified using Carsim and Matlab/Simulink co-simulation. The low-level fault diagnosis is verified through Matlab/Simulink simulation and experiments. Finally, according to the residuals of the high- and low-level fault diagnoses, fault-detection flags are defined. On the basis of this information, an integrated fault-diagnosis strategy is proposed.

  12. A Viscoelastic earthquake simulator with application to the San Francisco Bay region

    USGS Publications Warehouse

    Pollitz, Fred F.

    2009-01-01

    Earthquake simulation on synthetic fault networks carries great potential for characterizing the statistical patterns of earthquake occurrence. I present an earthquake simulator based on elastic dislocation theory. It accounts for the effects of interseismic tectonic loading, static stress steps at the time of earthquakes, and postearthquake stress readjustment through viscoelastic relaxation of the lower crust and mantle. Earthquake rupture initiation and termination are determined with a Coulomb failure stress criterion and the static cascade model. The simulator is applied to interacting multifault systems: one, a synthetic two-fault network, and the other, a fault network representative of the San Francisco Bay region. The faults are discretized both along strike and along dip and can accommodate both strike slip and dip slip. Stress and seismicity functions are evaluated over 30,000 yr trial time periods, resulting in a detailed statistical characterization of the fault systems. Seismicity functions such as the coefficient of variation and a- and b-values exhibit systematic patterns with respect to simple model parameters. This suggests that reliable estimation of the controlling parameters of an earthquake simulator is a prerequisite to the interpretation of its output in terms of seismic hazard.

  13. Progress in Computational Simulation of Earthquakes

    NASA Technical Reports Server (NTRS)

    Donnellan, Andrea; Parker, Jay; Lyzenga, Gregory; Judd, Michele; Li, P. Peggy; Norton, Charles; Tisdale, Edwin; Granat, Robert

    2006-01-01

    GeoFEST(P) is a computer program written for use in the QuakeSim project, which is devoted to development and improvement of means of computational simulation of earthquakes. GeoFEST(P) models interacting earthquake fault systems from the fault-nucleation to the tectonic scale. The development of GeoFEST(P) has involved coupling of two programs: GeoFEST and the Pyramid Adaptive Mesh Refinement Library. GeoFEST is a message-passing-interface-parallel code that utilizes a finite-element technique to simulate evolution of stress, fault slip, and plastic/elastic deformation in realistic materials like those of faulted regions of the crust of the Earth. The products of such simulations are synthetic observable time-dependent surface deformations on time scales from days to decades. Pyramid Adaptive Mesh Refinement Library is a software library that facilitates the generation of computational meshes for solving physical problems. In an application of GeoFEST(P), a computational grid can be dynamically adapted as stress grows on a fault. Simulations on workstations using a few tens of thousands of stress and displacement finite elements can now be expanded to multiple millions of elements with greater than 98-percent scaled efficiency on many hundreds of parallel processors.

  14. Fault Gouge Numerical Simulation: Dynamic Rupture Propagation and Local Energy Partitioning

    NASA Astrophysics Data System (ADS)

    Mollon, G.

    2017-12-01

    In this communication, we present dynamic simulations of the local (centimetric) behaviour of a fault filled with a granular gouge subjected to dynamic rupture. The numerical tool (Fig. 1) combines classical Discrete Element Modelling (albeit with the ability to deal with arbitrary grain shapes) for the simulation of the gouge, and continuous modelling for the simulation of acoustic wave emission and propagation. In a first part, the model is applied to the simulation of steady-state shearing of the fault under remote displacement boundary conditions, in order to observe the shear accommodation at the interface (R1 cracks, localization, wear, etc.). It also makes it possible to fit the rate-and-state friction properties of the granular gouge to desired values by adapting the contact laws between grains. Such simulations provide quantitative insight into the steady-state energy partitioning between fracture, friction and acoustic emissions as a function of the shear rate. In a second part, the model is subjected to dynamic rupture. For that purpose, the fault is elastically preloaded just below rupture, and a displacement pulse is applied at one end of the sample (and on only one side of the fault). This allows us to observe the propagation of the instability along the fault and the interplay between this propagation and the local granular phenomena. Energy partitioning is then observed both in space and time.

  15. Dynamic ruptures on faults of complex geometry: insights from numerical simulations, from large-scale curvature to small-scale fractal roughness

    NASA Astrophysics Data System (ADS)

    Ulrich, T.; Gabriel, A. A.

    2016-12-01

    The geometry of faults is subject to a large degree of uncertainty. As buried structures that are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods such as reflection seismology. As a consequence, most studies aiming at assessing the potential hazard of faults rely on idealized fault models based on observable large-scale features. Yet real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging and require optimized codes able to run efficiently on high-performance computing infrastructure while simultaneously handling complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer to source models inverted from observation in terms of complexity. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol allows solving the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently on large-scale machines. In this study, the influence of fault roughness on dynamic rupture style (e.g. onset of supershear transition, rupture front coherence, propagation of self-healing pulses, etc.) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale, in terms of the rupture's inherent length scales, below which the rupture ceases to be sensitive to roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature classical linear slip-weakening friction on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.

  16. Analytic Confusion Matrix Bounds for Fault Detection and Isolation Using a Sum-of-Squared-Residuals Approach

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2009-01-01

    Given a system which can fail in 1 of n different ways, a fault detection and isolation (FDI) algorithm uses sensor data in order to determine which fault is the most likely to have occurred. The effectiveness of an FDI algorithm can be quantified by a confusion matrix, which indicates the probability that each fault is isolated given that each fault has occurred. Confusion matrices are often generated with simulation data, particularly for complex systems. In this paper we perform FDI using sums of squares of sensor residuals (SSRs). We assume that the sensor residuals are Gaussian, which gives the SSRs a chi-squared distribution. We then generate analytic lower and upper bounds on the confusion matrix elements. This allows for the generation of optimal sensor sets without numerical simulations. The confusion matrix bounds are verified with simulated aircraft engine data.
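
    The analytic idea can be sketched directly with the chi-squared machinery: if the sensor residuals are independent standard Gaussians under the no-fault hypothesis, the SSR is chi-squared distributed, and a residual bias due to a fault makes it noncentral chi-squared, so threshold-test probabilities follow without Monte Carlo. The numbers below are illustrative, not taken from the paper.

      from scipy.stats import chi2, ncx2

      m = 10                                     # number of sensor residuals
      threshold = chi2.ppf(0.99, df=m)           # set for a 1% false-alarm rate under no fault
      fault_bias = 4.0                           # noncentrality: sum of squared residual means under the fault
      p_false_alarm = chi2.sf(threshold, df=m)
      p_detect = ncx2.sf(threshold, df=m, nc=fault_bias)
      print(f"threshold = {threshold:.2f}, P(false alarm) = {p_false_alarm:.3f}, P(detect) = {p_detect:.3f}")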

  17. Fault displacement hazard assessment for nuclear installations based on IAEA safety standards

    NASA Astrophysics Data System (ADS)

    Fukushima, Y.

    2016-12-01

    In the IAEA safety standard NS-R-3, surface fault displacement hazard assessment (FDHA) is required for the siting of nuclear installations. If any capable faults exist at a candidate site, IAEA recommends the consideration of alternative sites. However, due to progress in palaeoseismological investigations, capable faults may be found at an existing site. In such a case, IAEA recommends evaluating the safety using probabilistic FDHA (PFDHA), an empirical approach based on a still quite limited database. Therefore a basic and crucial improvement is to increase the database. In 2015, IAEA produced TECDOC-1767 on palaeoseismology as a reference for the identification of capable faults. Another document, IAEA Safety Report 85 on ground motion simulation based on fault rupture modelling, provides an annex introducing recent PFDHAs and fault displacement simulation methodologies. The IAEA expanded the FDHA project to cover both the probabilistic approach and physics-based fault rupture modelling. The first approach needs a refinement of the empirical methods by building a worldwide database, and the second needs to shift from a kinematic to a dynamic scheme. Both approaches can complement each other, since simulated displacements can fill the gaps in a sparse database and geological observations can be used to calibrate the simulations. The IAEA already supported a workshop in October 2015 to discuss the existing databases with the aim of creating a common worldwide database. A consensus on a unified database was reached. The next milestone is to fill the database with as many fault rupture data sets as possible. Another IAEA working group held a workshop in November 2015 to discuss state-of-the-art PFDHA as well as simulation methodologies. The two groups joined a consultancy meeting in February 2016, shared information, identified issues, discussed goals and outputs, and scheduled future meetings. We now aim at coordinating activities across the whole set of FDHA tasks.

  18. Dynamic Simulations for the Seismic Behavior on the Shallow Part of the Fault Plane in the Subduction Zone during Mega-Thrust Earthquakes

    NASA Astrophysics Data System (ADS)

    Tsuda, K.; Dorjapalam, S.; Dan, K.; Ogawa, S.; Watanabe, T.; Uratani, H.; Iwase, S.

    2012-12-01

    The 2011 Tohoku-Oki earthquake (M9.0) produced some distinct features, such as huge slips on the order of several tens of meters around the shallow part of the fault and different areas radiating seismic waves at different periods (e.g., Lay et al., 2012). These features, also reported for past mega-thrust earthquakes in subduction zones such as the 2004 Sumatra earthquake (M9.2) and the 2010 Chile earthquake (M8.8), attract attention as distinct features when the rupture of a mega-thrust earthquake reaches the shallow part of the fault plane. Although various kinds of observations of the seismic behavior (rupture process, ground motion characteristics, etc.) on the shallow part of the fault plane during mega-thrust earthquakes have been reported, the number of analytical or numerical studies based on dynamic simulation is still limited. Wendt et al. (2009), for example, revealed that different distributions of initial stress produce huge differences in the seismic behavior and vertical displacements at the surface. In this study, we carried out dynamic simulations in order to better understand the seismic behavior on the shallow part of the fault plane during mega-thrust earthquakes. We used the spectral element method (Ampuero, 2009), which is able to incorporate complex fault geometry into the simulation as well as to save computational resources. The simulation utilizes the slip-weakening law (Ida, 1972). To better understand the seismic behavior on the shallow part of the fault plane, some parameters controlling the seismic behavior of dynamic faulting, such as the critical slip distance (Dc), initial stress conditions and friction coefficients, were varied, and we also placed an asperity on the fault plane. These insights are useful for ground motion prediction for future mega-thrust earthquakes such as earthquakes along the Nankai Trough.
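
    The slip-weakening law (Ida, 1972) referenced above can be written down in a few lines: friction drops linearly from a static to a dynamic value over the critical slip distance Dc and stays at the dynamic level thereafter. The parameter values below are illustrative, not those used in the simulations.

      import numpy as np

      def slip_weakening_friction(slip, mu_s=0.6, mu_d=0.3, Dc=1.0):
          """Friction coefficient as a function of accumulated slip (m)."""
          return np.where(slip < Dc, mu_s - (mu_s - mu_d) * slip / Dc, mu_d)

      slip = np.linspace(0.0, 3.0, 7)
      print(dict(zip(slip.round(2), slip_weakening_friction(slip).round(3))))
      # A larger Dc delays the stress drop, which is one of the knobs varied in the parameter study above.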

  19. Automatic Detection of Electric Power Troubles (ADEPT)

    NASA Technical Reports Server (NTRS)

    Wang, Caroline; Zeanah, Hugh; Anderson, Audie; Patrick, Clint; Brady, Mike; Ford, Donnie

    1988-01-01

    ADEPT is an expert system that integrates knowledge from three different suppliers to offer an advanced fault-detection system, and is designed for two modes of operation: real-time fault isolation and simulated modeling. Real time fault isolation of components is accomplished on a power system breadboard through the Fault Isolation Expert System (FIES II) interface with a rule system developed in-house. Faults are quickly detected and displayed and the rules and chain of reasoning optionally provided on a Laser printer. This system consists of a simulated Space Station power module using direct-current power supplies for Solar arrays on three power busses. For tests of the system's ability to locate faults inserted via switches, loads are configured by an INTEL microcomputer and the Symbolics artificial intelligence development system. As these loads are resistive in nature, Ohm's Law is used as the basis for rules by which faults are located. The three-bus system can correct faults automatically where there is a surplus of power available on any of the three busses. Techniques developed and used can be applied readily to other control systems requiring rapid intelligent decisions. Simulated modelling, used for theoretical studies, is implemented using a modified version of Kennedy Space Center's KATE (Knowledge-Based Automatic Test Equipment), FIES II windowing, and an ADEPT knowledge base. A load scheduler and a fault recovery system are currently under development to support both modes of operation.

  20. Pressure Monitoring to Detect Fault Rupture Due to CO2 Injection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keating, Elizabeth; Dempsey, David; Pawar, Rajesh

    The capacity for fault systems to be reactivated by fluid injection is well-known. In the context of CO2 sequestration, however, the consequence of reactivated faults with respect to leakage and monitoring is poorly understood. Using multi-phase fluid flow simulations, this study addresses key questions concerning the likelihood of ruptures, the timing of consequent upward leakage of CO2, and the effectiveness of pressure monitoring in the reservoir and overlying zones for rupture detection. A range of injection scenarios was simulated using random sampling of uncertain parameters. These include the assumed distance between the injector and the vulnerable fault zone, the critical overpressure required for the fault to rupture, reservoir permeability, and the CO2 injection rate. We assumed a conservative scenario, in which if at any time during the five-year simulations the critical fault overpressure is exceeded, the fault permeability is assumed to instantaneously increase. For the purposes of conservatism we assume that CO2 injection continues ‘blindly’ after fault rupture. We show that, despite this assumption, in most cases the CO2 plume does not reach the base of the ruptured fault after 5 years. One possible implication of this result is that leak mitigation strategies such as pressure management have a reasonable chance of preventing a CO2 leak.

  1. Pressure Monitoring to Detect Fault Rupture Due to CO2 Injection

    DOE PAGES

    Keating, Elizabeth; Dempsey, David; Pawar, Rajesh

    2017-08-18

    The capacity for fault systems to be reactivated by fluid injection is well-known. In the context of CO2 sequestration, however, the consequence of reactivated faults with respect to leakage and monitoring is poorly understood. Using multi-phase fluid flow simulations, this study addresses key questions concerning the likelihood of ruptures, the timing of consequent upward leakage of CO2, and the effectiveness of pressure monitoring in the reservoir and overlying zones for rupture detection. A range of injection scenarios was simulated using random sampling of uncertain parameters. These include the assumed distance between the injector and the vulnerable fault zone, the critical overpressure required for the fault to rupture, reservoir permeability, and the CO2 injection rate. We assumed a conservative scenario, in which if at any time during the five-year simulations the critical fault overpressure is exceeded, the fault permeability is assumed to instantaneously increase. For the purposes of conservatism we assume that CO2 injection continues ‘blindly’ after fault rupture. We show that, despite this assumption, in most cases the CO2 plume does not reach the base of the ruptured fault after 5 years. One possible implication of this result is that leak mitigation strategies such as pressure management have a reasonable chance of preventing a CO2 leak.

  2. MHDL CAD tool with fault circuit handling

    NASA Astrophysics Data System (ADS)

    Espinosa Flores-Verdad, Guillermo; Altamirano Robles, Leopoldo; Osorio Roque, Leticia

    2003-04-01

    Behavioral modeling and simulation with analog hardware and mixed-signal high-level description languages (MHDLs) have driven the development of diverse simulation tools that can handle the requirements of modern designs. These systems embed millions of transistors and are radically diverse from one another. This trend in simulation tools is exemplified by the development of languages for modeling and simulation, whose applications include the re-use of complete systems, the construction of virtual prototypes, and test and synthesis. This paper presents the general architecture of a mixed hardware description language based on the IEEE 1076.1-1999 standard, the VHDL Analog and Mixed-Signal Extensions known as VHDL-AMS. The architecture is novel in that it considers the modeling and simulation of faults. The main modules of the CAD tool are briefly described in order to establish the information flow and its transformations, starting from the description of a circuit model, going through lexical analysis, mathematical model generation and the simulation core, and ending with the collection of the circuit behavior as simulation data. In addition, the mechanisms incorporated into the simulation core to handle faults in the circuit models are explained. Currently, the CAD tool works with algebraic and differential descriptions of the circuit models; nevertheless, the language design is open to handling different model types: fuzzy models, differential equations, transfer functions and tables. This applies to fault models too; in this sense, the CAD tool supports the inclusion of mutants and saboteurs. To illustrate the results obtained so far, the simulated behavior of a circuit is shown when it is fault-free and when it has been modified by the inclusion of a fault as a mutant or a saboteur. The results obtained enable a virtual diagnosis of mixed circuits. The tool runs on UNIX systems; it was developed with an object-oriented methodology and programmed in C++.

  3. Combination of inquiry learning model and computer simulation to improve mastery concept and the correlation with critical thinking skills (CTS)

    NASA Astrophysics Data System (ADS)

    Nugraha, Muhamad Gina; Kaniawati, Ida; Rusdiana, Dadi; Kirana, Kartika Hajar

    2016-02-01

    The purposes of physics learning at high school include mastering physics concepts, cultivating a scientific attitude (including a critical attitude), and developing inductive and deductive reasoning skills. According to Ennis et al., inductive and deductive reasoning skills are part of critical thinking. Preliminary studies show that both competences are poorly achieved, as seen from low student learning outcomes and learning processes that are not conducive to cultivating critical thinking (teacher-centered learning). One learning model predicted to increase concept mastery and train CTS is the inquiry learning model aided by computer simulations. In this model, students are given the opportunity to be actively involved in the experiment and also receive good explanations through the computer simulations. From research with a randomized control group pretest-posttest design, we found that the inquiry learning model aided by computer simulations can improve students' concept mastery significantly more than the conventional (teacher-centered) method. With the inquiry learning model aided by computer simulations, 20% of students had high CTS, 63.3% medium and 16.7% low. CTS contributes strongly to students' concept mastery, with a correlation coefficient of 0.697, and contributes fairly to the enhancement of concept mastery, with a correlation coefficient of 0.603.

  4. Model-based development of a fault signature matrix to improve solid oxide fuel cell systems on-site diagnosis

    NASA Astrophysics Data System (ADS)

    Polverino, Pierpaolo; Pianese, Cesare; Sorrentino, Marco; Marra, Dario

    2015-04-01

    The paper focuses on the design of a procedure for the development of an on-field diagnostic algorithm for solid oxide fuel cell (SOFC) systems. The diagnosis design phase relies on an in-depth analysis of the mutual interactions among all system components, exploiting the physical knowledge of the SOFC system as a whole. This phase consists of the Fault Tree Analysis (FTA), which identifies the correlations among possible faults and their corresponding symptoms at system component level. The main outcome of the FTA is an inferential isolation tool (Fault Signature Matrix - FSM), which univocally links the faults to the symptoms detected during system monitoring. In this work the FTA is considered as a starting point to develop an improved FSM. Making use of a model-based investigation, a fault-to-symptoms dependency study is performed. To this purpose a dynamic model, previously developed by the authors, is exploited to simulate the system under faulty conditions. Five faults are simulated, one for the stack and four occurring at balance-of-plant (BOP) level. Moreover, the robustness of the FSM design is increased by exploiting symptom thresholds defined for the investigation of the quantitative effects of the simulated faults on the affected variables.
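    In practice a fault signature matrix is a binary table with one row per fault and one column per symptom; once symptoms have been detected against their thresholds, isolation reduces to matching the observed symptom pattern against the rows. The sketch below illustrates that lookup in Python; the fault and symptom names and the matrix entries are placeholders, not the FSM developed in the paper.

      import numpy as np

      FAULTS = ["stack degradation", "air blower fault", "fuel leakage", "reformer fault"]
      SYMPTOMS = ["stack voltage low", "air flow low", "fuel flow high", "outlet temperature high"]
      FSM = np.array([
          [1, 0, 0, 1],
          [0, 1, 0, 1],
          [0, 0, 1, 0],
          [1, 0, 1, 1],
      ])

      def isolate(observed_symptoms):
          """Return the faults whose signature matches the observed symptom vector."""
          observed = np.asarray(observed_symptoms)
          return [fault for fault, row in zip(FAULTS, FSM) if np.array_equal(row, observed)]

      print(isolate([1, 0, 0, 1]))   # -> ['stack degradation']

    Uniqueness of the rows is what makes the isolation univocal; the model-based symptom thresholds discussed in the paper determine when each column is set to 1.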

  5. Mechatronics technology in predictive maintenance method

    NASA Astrophysics Data System (ADS)

    Majid, Nurul Afiqah A.; Muthalif, Asan G. A.

    2017-11-01

    This paper presents recent mechatronics technology that can help to implement predictive maintenance by combining intelligent and predictive maintenance instruments. The Vibration Fault Simulation System (VFSS) is an example of such a mechatronics system. The focus of this study is on predicting the condition of critical machines by detecting vibration, since vibration measurement is often used as the key indicator of the state of a machine. The paper shows how to choose an appropriate strategy for the vibration-based diagnostic process of mechanical systems, especially rotating machines, in order to recognize failures during operation. Vibration signature analysis is implemented to detect faults in rotating machinery, including imbalance, mechanical looseness, bent shaft, misalignment, missing blade, bearing fault, balancing mass and critical speed. In order to perform vibration signature analysis for rotating machinery faults, studies have been made on how mechatronics technology is used as a predictive maintenance method. A Vibration Faults Simulation Rig (VFSR) is designed to simulate and understand fault signatures. These techniques are based on the processing of vibration data in the frequency domain. A LabVIEW-based spectrum analyzer software is developed to acquire and extract the frequency contents of fault signals. The system is successfully tested against the unique vibration fault signatures that typically occur in rotating machinery.
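    Vibration signature analysis of the kind described usually amounts to transforming the time-domain signal to the frequency domain and inspecting the energy at characteristic fault frequencies (for example, an elevated component at 1x the running speed indicates imbalance). A minimal NumPy sketch of that step, with an assumed sampling rate, shaft speed and detection threshold (not taken from the paper):

      import numpy as np

      fs = 5000.0                                   # sampling rate [Hz] (assumed)
      rpm = 1800.0                                  # shaft speed (assumed)
      f_rot = rpm / 60.0                            # 1x running speed = 30 Hz
      t = np.arange(0.0, 2.0, 1.0 / fs)
      # Synthetic signal standing in for an accelerometer measurement with imbalance:
      signal = 0.02 * np.sin(2 * np.pi * f_rot * t) + 0.005 * np.random.randn(t.size)

      spectrum = np.abs(np.fft.rfft(signal)) / t.size
      freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

      def band_peak(f0, half_width=1.0):
          """Peak spectral amplitude in a narrow band around frequency f0."""
          band = (freqs > f0 - half_width) & (freqs < f0 + half_width)
          return spectrum[band].max()

      if band_peak(f_rot) > 5.0 * np.median(spectrum):
          print(f"possible imbalance: elevated 1x component at {f_rot:.1f} Hz")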

  6. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Howard; Braun, James E.

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  7. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Howard; Braun, James E.

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.
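    The sensor-bias faults listed above are conceptually the simplest: the controller sees the true quantity plus a constant offset and then acts on the corrupted value. A toy Python sketch of that mechanism (illustrative only; it is not one of the OpenStudio measures described in the report, and the control rule and numbers are invented):

      def biased_reading(true_value, bias):
          """Sensor-bias fault: the controller sees the true value plus an offset."""
          return true_value + bias

      def economizer_enabled(sensed_outdoor_rh, rh_limit=60.0):
          """Toy control rule: enable the economizer when sensed outdoor RH is below a limit."""
          return sensed_outdoor_rh < rh_limit

      true_rh = 58.0                         # actual outdoor relative humidity [%]
      for bias in (0.0, 5.0):                # 0 = fault free, +5 %RH = biased sensor
          sensed = biased_reading(true_rh, bias)
          print(f"bias {bias:+.1f} %RH -> sensed {sensed:.1f} %RH -> economizer on: "
                f"{economizer_enabled(sensed)}")

    Running the same building model with and without the bias is what allows the report's fault models to quantify the energy and comfort impact of each fault.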

  8. Fault-Sensitivity and Wear-Out Analysis of VLSI Systems.

    DTIC Science & Technology

    1995-06-01

    [Only an OCR fragment of this record's abstract is available. It refers to a mixed-mode, hierarchical fault-description and fault-simulation capability covering transient and stuck-at fault types with automatic fault injection by location and time, and ends with a reference-list fragment citing J. Sosnowski, "Evaluation of transient hazards in microprocessor controllers," Digest, FTCS-16.]

  9. The impact of splay faults on fluid flow, solute transport, and pore pressure distribution in subduction zones: A case study offshore the Nicoya Peninsula, Costa Rica

    NASA Astrophysics Data System (ADS)

    Lauer, Rachel M.; Saffer, Demian M.

    2015-04-01

    Observations of seafloor seeps on the continental slope of many subduction zones illustrate that splay faults represent a primary hydraulic connection to the plate boundary at depth, carry deeply sourced fluids to the seafloor, and are in some cases associated with mud volcanoes. However, the role of these structures in forearc hydrogeology remains poorly quantified. We use a 2-D numerical model that simulates coupled fluid flow and solute transport driven by fluid sources from tectonically driven compaction and smectite transformation to investigate the effects of permeable splay faults on solute transport and pore pressure distribution. We focus on the Nicoya margin of Costa Rica as a case study, where previous modeling and field studies constrain flow rates, thermal structure, and margin geology. In our simulations, splay faults accommodate up to 33% of the total dewatering flux, primarily along faults that outcrop within 25 km of the trench. The distribution and fate of dehydration-derived fluids is strongly dependent on thermal structure, which determines the locus of smectite transformation. In simulations of a cold end-member margin, smectite transformation initiates 30 km from the trench, and 64% of the dehydration-derived fluids are intercepted by splay faults and carried to the middle and upper slope, rather than exiting at the trench. For a warm end-member, smectite transformation initiates 7 km from the trench, and the associated fluids are primarily transmitted to the trench via the décollement (50%), and faults intercept only 21% of these fluids. For a wide range of splay fault permeabilities, simulated fluid pressures are near lithostatic where the faults intersect overlying slope sediments, providing a viable mechanism for the formation of mud volcanoes.

  10. Learning from physics-based earthquake simulators: a minimal approach

    NASA Astrophysics Data System (ADS)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2017-04-01

    Physics-based earthquake simulators aim to generate synthetic seismic catalogs of arbitrary length, accounting for fault interaction, elastic rebound, realistic fault networks, and some simple earthquake nucleation process such as rate-and-state friction. By comparing synthetic and real catalogs, seismologists can gain insight into the earthquake occurrence process. Moreover, earthquake simulators can be used to infer some aspects of the statistical behavior of earthquakes within the simulated region, by analyzing timescales not accessible through observations. The development of earthquake simulators is commonly guided by the approach "the more physics, the better", pushing seismologists toward ever more Earth-like simulators. However, despite its immediate attractiveness, we argue that this kind of approach makes it more and more difficult to understand which physical parameters are really relevant for describing the features of the seismic catalog in which we are interested. For this reason, here we take the opposite, minimal approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple model may be more informative than a complex one for some specific scientific objectives, because it is more understandable. The model has three main components: the first is a realistic tectonic setting, i.e., a fault dataset of California; the other two are quantitative laws for earthquake generation on each single fault, and the Coulomb Failure Function for modeling fault interaction. The final goal of this work is twofold. On one hand, we aim to identify the minimum set of physical ingredients that can satisfactorily reproduce the features of the real seismic catalog, such as short-term seismic clusters, and to investigate the hypothetical long-term behavior and fault synchronization. On the other hand, we want to investigate the limits of predictability of the model itself.
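    For reference, the Coulomb Failure Function used to model fault interaction is conventionally written as the change in Coulomb stress on a receiver fault (a standard definition, not quoted from this abstract):

      \Delta \mathrm{CFF} = \Delta\tau - \mu\,(\Delta\sigma_n - \Delta p)

    where \Delta\tau is the change in shear stress resolved in the slip direction, \Delta\sigma_n the change in fault-normal stress (compression positive), \Delta p the pore-pressure change, and \mu the friction coefficient; a positive \Delta\mathrm{CFF} brings the receiver fault closer to failure.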

  11. A distributed fault-detection and diagnosis system using on-line parameter estimation

    NASA Technical Reports Server (NTRS)

    Guo, T.-H.; Merrill, W.; Duyar, A.

    1991-01-01

    The development of a model-based fault-detection and diagnosis system (FDD) is reviewed. The system can be used as an integral part of an intelligent control system. It determines the faults of a system by comparing the measurements of the system with a priori information represented by the model of the system. The method of modeling a complex system is described and a description of diagnosis models which include process faults is presented. There are three distinct classes of fault modes covered by the system performance model equation: actuator faults, sensor faults, and performance degradation. A system equation for a complete model that describes all three classes of faults is given. The strategy for detecting the fault and estimating the fault parameters using a distributed on-line parameter identification scheme is presented. A two-step approach is proposed. The first step is composed of a group of hypothesis testing modules (HTMs) operating in parallel to test each class of faults. The second step is the fault diagnosis module which checks all the information obtained from the HTM level, isolates the fault, and determines its magnitude. The proposed FDD system was demonstrated by applying it to detect actuator and sensor faults added to a simulation of the Space Shuttle Main Engine. The simulation results show that the proposed FDD system can adequately detect the faults and estimate their magnitudes.
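    The two-step structure described (parallel hypothesis-testing modules followed by a diagnosis module) can be sketched compactly; the residual definition, thresholds and class names below are illustrative assumptions, not the Space Shuttle Main Engine implementation:

      import numpy as np

      CLASSES = ["actuator fault", "sensor fault", "performance degradation"]
      THRESHOLDS = np.array([0.10, 0.05, 0.08])      # assumed detection thresholds

      def hypothesis_residuals(measured_output, model_outputs):
          """First step: one normalized residual per fault-class model, computed in parallel."""
          return np.abs(measured_output - np.asarray(model_outputs))

      def diagnose(residuals):
          """Second step: isolate the fault class whose residual most exceeds its threshold."""
          exceeded = residuals > THRESHOLDS
          if not exceeded.any():
              return "no fault detected"
          k = int(np.argmax(residuals / THRESHOLDS))
          return f"{CLASSES[k]} isolated, estimated magnitude {residuals[k]:.3f}"

      print(diagnose(hypothesis_residuals(1.00, [0.98, 1.00, 0.85])))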

  12. Large transient fault current test of an electrical roll ring

    NASA Technical Reports Server (NTRS)

    Yenni, Edward J.; Birchenough, Arthur G.

    1992-01-01

    The space station uses precision rotary gimbals to provide for sun tracking of its photoelectric arrays. Electrical power, command signals and data are transferred across the gimbals by roll rings. Roll rings have been shown to be capable of highly efficient electrical transmission and long life, through tests conducted at the NASA Lewis Research Center and Honeywell's Satellite and Space Systems Division in Phoenix, AZ. Large potential fault currents inherent to the power system's DC distribution architecture have brought about the need to evaluate the effects of large transient fault currents on roll rings. A test recently conducted at Lewis subjected a roll ring to a simulated worst-case space station electrical fault. The system model used to obtain the fault profile is described, along with details of the reduced-order circuit that was used to simulate the fault. Test results comparing roll ring performance before and after the fault are also presented.

  13. Learning and diagnosing faults using neural networks

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis

    1990-01-01

    Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor data to fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor data to fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Secondly, the neural network which performs sensor fusion is structured to detect new unknown faults for which training examples were not presented during training. These methods were tested on a task of fault diagnosis by employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.

  14. Spontaneous Aseismic and Seismic Slip Transients on Evolving Faults Simulated in a Continuum-Mechanics Framework

    NASA Astrophysics Data System (ADS)

    Herrendoerfer, R.; Gerya, T.; van Dinther, Y.

    2016-12-01

    The convergent plate motion in subduction zones is accommodated by different slip modes: potentially dangerous seismic slip and imperceptible, but instrumentally detectable, slow slip transients or steady slip. Despite an increasing number of observations and insights from laboratory experiments, it remains enigmatic which local on- and off-fault conditions favour slip modes of different source characteristics (i.e., slip velocity, duration, seismic moment). Therefore, we are working towards a numerical model that is able to simulate different slip modes in a way that is consistent with the long-term evolution of the fault system. We extended our 2D, continuum mechanics-based, visco-elasto-plastic seismo-thermo-mechanical (STM) model, which simulated cycles of earthquake-like ruptures, albeit only at plate tectonic slip rates (van Dinther et al., JGR, 2013). To model a wider slip spectrum including seismic slip rates, we, besides improving the general numerical approach, implemented an invariant reformulation of the conventional rate- and state-dependent friction (RSF) and an adaptive time-stepping scheme (Lapusta and Rice, JGR, 2001). In a simple setup with predominantly elastic plates that are juxtaposed along a predefined fault of a certain width, we vary the characteristic slip distance, the mean normal stress and the size of the rate-weakening zone. We show that the resulting stability transitions from decaying oscillations, periodic slow slip and complex periodicity to seismic slip agree with those of conventional RSF seismic cycle simulations (e.g. Liu and Rice, JGR, 2007). Additionally, we will present results of the investigation concerning the effect of the fault width and geometry on the generation of different slip modes. Ultimately, instead of predefining a fault, we simulate the spatio-temporal evolution of a complex fault system that is consistent with the plate motions and rheology. For simplicity, we parametrize the fault development through linear slip-weakening of cohesion and apply RSF friction only in cohesionless material. We report preliminary results of the interaction between slip modes and fault growth during different fault evolution stages.
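    For context, the conventional rate- and state-dependent friction that the authors reformulate in invariant form is commonly written with the aging-law state evolution (a standard formulation, not the paper's invariant version):

      \mu = \mu_0 + a \ln\!\left(\frac{V}{V_0}\right) + b \ln\!\left(\frac{V_0\,\theta}{D_c}\right), \qquad \frac{d\theta}{dt} = 1 - \frac{V\,\theta}{D_c}

    where V is the slip velocity, \theta the state variable, D_c the characteristic slip distance varied in the study, and a, b control the velocity-strengthening or velocity-weakening behavior that separates stable creep from seismic slip.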

  15. Simulation of broad-band strong ground motion for a hypothetical Mw 7.1 earthquake on the Enriquillo Fault in Haiti

    NASA Astrophysics Data System (ADS)

    Douilly, Roby; Mavroeidis, George P.; Calais, Eric

    2017-10-01

    The devastating 2010 Mw 7.0 Haiti earthquake demonstrated the need to improve mitigation and preparedness for future seismic events in the region. Previous studies have shown that the earthquake did not occur on the Enriquillo Fault, the main plate boundary fault running through the heavily populated Port-au-Prince region, but on the nearby and previously unknown transpressional Léogâne Fault. Slip on that fault has increased stresses on the segment of Enriquillo Fault to the east of Léogâne, which terminates in the ˜3-million-inhabitant capital city of Port-au-Prince. In this study, we investigate ground shaking in the vicinity of Port-au-Prince, if a hypothetical rupture similar to the 2010 Haiti earthquake occurred on that segment of the Enriquillo Fault. We use a finite element method and assumptions on regional tectonic stress to simulate the low-frequency ground motion components using dynamic rupture propagation for a 52-km-long segment. We consider eight scenarios by varying parameters such as hypocentre location, initial shear stress and fault dip. The high-frequency ground motion components are simulated using the specific barrier model in the context of the stochastic modeling approach. The broad-band ground motion synthetics are subsequently obtained by combining the low-frequency components from the dynamic rupture simulation with the high-frequency components from the stochastic simulation using matched filtering at a crossover frequency of 1 Hz. Results show that rupture on a vertical Enriquillo Fault generates larger horizontal permanent displacements in Léogâne and Port-au-Prince than rupture on a south-dipping Enriquillo Fault. The mean horizontal peak ground acceleration (PGA), computed at several sites of interest throughout Port-au-Prince, has a value of ˜0.45 g, whereas the maximum horizontal PGA in Port-au-Prince is ˜0.60 g. Even though we only consider a limited number of rupture scenarios, our results suggest more intense ground shaking for the city of Port-au-Prince than during the already very damaging 2010 Haiti earthquake.
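    The broadband combination step described above (low-frequency deterministic synthetics merged with high-frequency stochastic synthetics at a 1 Hz crossover) can be sketched with standard filtering tools; the filter order, zero-phase filtering and placeholder traces below are assumptions, not the authors' exact matched-filter implementation:

      import numpy as np
      from scipy.signal import butter, filtfilt

      fs = 100.0                                    # samples per second (assumed)
      fc = 1.0                                      # crossover frequency [Hz]
      t = np.arange(0.0, 40.0, 1.0 / fs)
      lf_synthetic = np.sin(2 * np.pi * 0.3 * t)    # placeholder dynamic-rupture trace
      hf_synthetic = 0.2 * np.random.randn(t.size)  # placeholder stochastic trace

      b_lo, a_lo = butter(4, fc / (fs / 2.0), btype="low")
      b_hi, a_hi = butter(4, fc / (fs / 2.0), btype="high")
      broadband = filtfilt(b_lo, a_lo, lf_synthetic) + filtfilt(b_hi, a_hi, hf_synthetic)

    The low-pass branch retains the long-period and permanent-displacement character of the dynamic rupture, while the high-pass branch supplies the incoherent energy above 1 Hz.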

  16. Are Physics-Based Simulators Ready for Prime Time? Comparisons of RSQSim with UCERF3 and Observations.

    NASA Astrophysics Data System (ADS)

    Milner, K. R.; Shaw, B. E.; Gilchrist, J. J.; Jordan, T. H.

    2017-12-01

    Probabilistic seismic hazard analysis (PSHA) is typically performed by combining an earthquake rupture forecast (ERF) with a set of empirical ground motion prediction equations (GMPEs). ERFs have typically relied on observed fault slip rates and scaling relationships to estimate the rate of large earthquakes on pre-defined fault segments, either ignoring or relying on expert opinion to set the rates of multi-fault or multi-segment ruptures. Version 3 of the Uniform California Earthquake Rupture Forecast (UCERF3) is a significant step forward, replacing expert opinion and fault segmentation with an inversion approach that matches observations better than prior models while incorporating multi-fault ruptures. UCERF3 is a statistical model, however, and doesn't incorporate the physics of earthquake nucleation, rupture propagation, and stress transfer. We examine the feasibility of replacing UCERF3, or components therein, with physics-based rupture simulators such as the Rate-State Earthquake Simulator (RSQSim), developed by Dieterich & Richards-Dinger (2010). RSQSim simulations on the UCERF3 fault system produce catalogs of seismicity that match long term rates on major faults, and produce remarkable agreement with UCERF3 when carried through to PSHA calculations. Averaged over a representative set of sites, the RSQSim-UCERF3 hazard-curve differences are comparable to the small differences between UCERF3 and its predecessor, UCERF2. The hazard-curve agreement between the empirical and physics-based models provides substantial support for the PSHA methodology. RSQSim catalogs include many complex multi-fault ruptures, which we compare with the UCERF3 rupture-plausibility metrics as well as recent observations. Complications in generating physically plausible kinematic descriptions of multi-fault ruptures have thus far prevented us from using UCERF3 in the CyberShake physics-based PSHA platform, which replaces GMPEs with deterministic ground motion simulations. RSQSim produces full slip/time histories that can be directly implemented as sources in CyberShake, without relying on the conditional hypocenter and slip distributions needed for the UCERF models. We also compare RSQSim with time-dependent PSHA calculations based on multi-fault renewal models.

  17. On the implementation of faults in finite-element glacial isostatic adjustment models

    NASA Astrophysics Data System (ADS)

    Steffen, Rebekka; Wu, Patrick; Steffen, Holger; Eaton, David W.

    2014-01-01

    Stresses induced in the crust and mantle by continental-scale ice sheets during glaciation have triggered earthquakes along pre-existing faults, commencing near the end of the deglaciation. In order to get a better understanding of the relationship between glacial loading/unloading and fault movement due to the spatio-temporal evolution of stresses, a commonly used model for glacial isostatic adjustment (GIA) is extended by including a fault structure. Solving this problem is enabled by development of a workflow involving three cascaded finite-element simulations. Each step has identical lithospheric and mantle structure and properties, but evolving stress conditions along the fault. The purpose of the first simulation is to compute the spatio-temporal evolution of rebound stress when the fault is tied together. An ice load with a parabolic profile and simple ice history is applied to represent glacial loading of the Laurentide Ice Sheet. The results of the first step describe the evolution of the stress and displacement induced by the rebound process. The second step in the procedure augments the results of the first, by computing the spatio-temporal evolution of total stress (i.e. rebound stress plus tectonic background stress and overburden pressure) and displacement with reaction forces that can hold the model in equilibrium. The background stress is estimated by assuming that the fault is in frictional equilibrium before glaciation. The third step simulates fault movement induced by the spatio-temporal evolution of total stress by evaluating fault stability in a subroutine. If the fault remains stable, no movement occurs; in case of fault instability, the fault displacement is computed. We show an example of fault motion along a 45°-dipping fault at the ice-sheet centre for a two-dimensional model. Stable conditions along the fault are found during glaciation and the initial part of deglaciation. Before deglaciation ends, the fault starts to move, and fault offsets of up to 22 m are obtained. A fault scarp at the surface of 19.74 m is determined. The fault is stable in the following time steps with a high stress accumulation at the fault tip. Along the upper part of the fault, GIA stresses are released in one earthquake.

  18. An innovative computationally efficient hydromechanical coupling approach for fault reactivation in geological subsurface utilization

    NASA Astrophysics Data System (ADS)

    Adams, M.; Kempka, T.; Chabab, E.; Ziegler, M.

    2018-02-01

    Estimating the efficiency and sustainability of geological subsurface utilization, i.e., Carbon Capture and Storage (CCS), requires an integrated risk assessment approach that considers the occurring coupled processes, among others the potential reactivation of existing faults. In this context, hydraulic and mechanical parameter uncertainties as well as different injection rates have to be considered and quantified to elaborate reliable environmental impact assessments. Consequently, the required sensitivity analyses consume significant computational time due to the high number of realizations that have to be carried out. Due to the high computational costs of two-way coupled simulations in large-scale 3D multiphase fluid flow systems, these are not applicable for the purpose of uncertainty and risk assessments. Hence, an innovative semi-analytical hydromechanical coupling approach for hydraulic fault reactivation will be introduced. This approach determines the void ratio evolution in representative fault elements using one preliminary base simulation, considering one model geometry and one set of hydromechanical parameters. The void ratio development is then approximated and related to one reference pressure at the base of the fault. The parametrization of the resulting functions is then directly implemented into a multiphase fluid flow simulator to carry out the semi-analytical coupling for the simulation of hydromechanical processes. Hereby, the iterative parameter exchange between the multiphase and mechanical simulators is omitted, since the update of porosity and permeability is controlled by one reference pore pressure at the fault base. The suggested procedure is capable of reducing the computational time required by coupled hydromechanical simulations of a multitude of injection rates by a factor of up to 15.
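    The core of the approach is that porosity and permeability of the fault elements are updated from pre-fitted functions of a single reference pressure rather than from a full mechanical solve at every step. A schematic Python sketch of that update (the functional forms and coefficients are hypothetical, chosen only to illustrate the coupling logic):

      def void_ratio(delta_p_ref, e0=0.30, c=2.0e-8):
          """Void ratio fitted from the preliminary base simulation, evaluated from the
          pore-pressure increase at the fault base [Pa] (hypothetical linear fit)."""
          return e0 + c * delta_p_ref

      def porosity(e):
          return e / (1.0 + e)

      def permeability(phi, k0=1.0e-16, phi0=0.23, exponent=3.0):
          """Simple porosity-permeability power law standing in for the tool's relation."""
          return k0 * (phi / phi0) ** exponent

      for dp in (0.0, 1.0e6, 5.0e6):            # pressure build-up at the fault base [Pa]
          phi = porosity(void_ratio(dp))
          print(f"dp = {dp:.1e} Pa  porosity = {phi:.4f}  permeability = {permeability(phi):.3e} m^2")

    Because only this lookup is evaluated inside the multiphase flow simulator, the iterative exchange with a mechanical simulator is avoided, which is where the reported factor-of-15 speed-up comes from.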

  19. Fault-Mechanism Simulator

    ERIC Educational Resources Information Center

    Guyton, J. W.

    1972-01-01

    An inexpensive, simple mechanical model of a fault can be produced to simulate the effects leading to an earthquake. This model has been used successfully with students from elementary to college levels and can be demonstrated to classes as large as thirty students. (DF)

  20. Model uncertainties of the 2002 update of California seismic hazard maps

    USGS Publications Warehouse

    Cao, T.; Petersen, M.D.; Frankel, A.D.

    2005-01-01

    In this article we present and explore the source and ground-motion model uncertainty and parametric sensitivity for the 2002 update of the California probabilistic seismic hazard maps. Our approach is to implement a Monte Carlo simulation that allows for independent sampling from fault to fault in each simulation. The source-distance dependent characteristics of the uncertainty maps of seismic hazard are explained by the fundamental uncertainty patterns from four basic test cases, in which the uncertainties from one-fault and two-fault systems are studied in detail. The California coefficient of variation (COV, ratio of the standard deviation to the mean) map for peak ground acceleration (10% of exceedance in 50 years) shows lower values (0.1-0.15) along the San Andreas fault system and other class A faults than along class B faults (0.2-0.3). High COV values (0.4-0.6) are found around the Garlock, Anacapa-Dume, and Palos Verdes faults in southern California and around the Maacama fault and Cascadia subduction zone in northern California.
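    A stripped-down version of the Monte Carlo procedure described (independent sampling from fault to fault in each realization, then mapping the coefficient of variation of the resulting hazard) can be written in a few lines; the lognormal sampling and all parameter values below are illustrative assumptions, not the 2002 California model:

      import numpy as np

      rng = np.random.default_rng(0)
      n_sims, n_faults = 1000, 4
      mean_pga = np.array([0.20, 0.15, 0.10, 0.05])   # mean PGA contribution per fault [g]
      sigma_ln = np.array([0.2, 0.3, 0.3, 0.5])       # per-fault parameter uncertainty (log std)

      # Each realization samples every fault independently, as in the article's approach.
      samples = mean_pga * rng.lognormal(mean=0.0, sigma=sigma_ln, size=(n_sims, n_faults))
      site_hazard = samples.sum(axis=1)
      cov = site_hazard.std() / site_hazard.mean()
      print(f"COV of the simulated hazard at this site: {cov:.2f}")

    Repeating this per map cell yields COV maps whose spatial patterns the article explains with the one-fault and two-fault test cases.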

  1. Integrated Fault Diagnosis Algorithm for Motor Sensors of In-Wheel Independent Drive Electric Vehicles

    PubMed Central

    Jeon, Namju; Lee, Hyeongcheol

    2016-01-01

    An integrated fault-diagnosis algorithm for a motor sensor of in-wheel independent drive electric vehicles is presented. This paper proposes a method that integrates the high- and low-level fault diagnoses to improve the robustness and performance of the system. For the high-level fault diagnosis of vehicle dynamics, a planar two-track non-linear model is first selected, and the longitudinal and lateral forces are calculated. To ensure redundancy of the system, correlation between the sensor and residual in the vehicle dynamics is analyzed to detect and separate the fault of the drive motor system of each wheel. To diagnose the motor system for low-level faults, the state equation of an interior permanent magnet synchronous motor is developed, and a parity equation is used to diagnose the fault of the electric current and position sensors. The validity of the high-level fault-diagnosis algorithm is verified using Carsim and Matlab/Simulink co-simulation. The low-level fault diagnosis is verified through Matlab/Simulink simulation and experiments. Finally, according to the residuals of the high- and low-level fault diagnoses, fault-detection flags are defined. On the basis of this information, an integrated fault-diagnosis strategy is proposed. PMID:27973431
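    The integration step at the end of the abstract (turning high- and low-level residuals into flags and combining them) can be illustrated with a small decision routine; the thresholds and decision table here are invented for illustration and are not the paper's calibrated values:

      def flag(residual, threshold):
          return residual > threshold

      def integrated_diagnosis(r_dynamics, r_current, r_position):
          """Combine the vehicle-dynamics residual with the motor parity residuals."""
          high_level = flag(r_dynamics, 0.15)     # wheel-level force/slip inconsistency
          current = flag(r_current, 0.05)         # parity residual of the current sensor
          position = flag(r_position, 0.02)       # parity residual of the position sensor
          if high_level and current:
              return "current sensor fault"
          if high_level and position:
              return "position sensor fault"
          if high_level:
              return "drive motor system fault at this wheel"
          return "no fault"

      print(integrated_diagnosis(0.20, 0.08, 0.00))   # -> current sensor fault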

  2. The Deformation of Overburden Soil and Interaction with Pile Foundations of Bridges Induced by Normal Faulting

    NASA Astrophysics Data System (ADS)

    Wu, Liang-Chun; Li, Chien-Hung; Chan, Pei-Chen; Lin, Ming-Lang

    2017-04-01

    Investigations of well-known disastrous earthquakes in recent years show that ground deformation induced by faulting, in addition to strong ground motion, is one of the causes of damage to engineering structures. Most structures located within the faulting zone were destroyed by fault offset. Taking the Norcia earthquake in Italy (2016, Mw = 6.2) as an example, the highway bridge at Arquata crossing the rupture area of the active normal fault experienced substantial displacement, causing abutment settlement, fractured piers and other damage. However, the Seismic Design Provisions and Commentary for Highway Bridges in Taiwan state, in the general rules of the first chapter, regarding the design of bridges crossing active faults: "This specification is not applicable to the design of bridges crossing or near an active fault; such designs require other particular considerations." This indicates that the safety of bridges crossing active faults depends not only on seismic performance but, above all, on ground deformation. In this research, to understand the failure mechanism and deformation characteristics, we compile cases of bridges subjected to faulting in Taiwan and abroad. The work proceeds through physical sandbox experiments and numerical simulation with discrete element models (PFC3D). The normal fault case in Taiwan is the Shanchiao Fault. On this basis, the research explores the deformation of the overburden soil and its influence on bridge foundations under normal faulting. Once the behavior of the foundations is understood, we consider two types of bridge superstructure, simple beams and continuous beams, and further investigate the main controlling variables for bridges subjected to faulting. From the above, we can then give appropriate suggestions on planning considerations and design approaches. This research presents results from sandbox experiments and 3-D numerical analyses that simulate overburden soil and embedded pile foundations subjected to normal faulting. The numerical model is validated by comparison with the sandbox experiments. Since the 3-D numerical analysis agrees with the sandbox experiments, the response of the pile foundations and the ground deformation induced by normal faulting are discussed. To understand the 3-D behavior of the ground deformation and pile foundations, observations such as the triangular shear zone, the width of the primary deformation zone, and the inclination and displacements of the pile foundations are discussed for both experiments and simulations. Furthermore, to assess the safety of bridges crossing the faulting zone, the different bridge superstructures, simple beam and continuous beam, are subsequently discussed in the simulations.

  3. Variability of recurrence interval for New Zealand surface-rupturing paleoearthquakes

    NASA Astrophysics Data System (ADS)

    Nicol, A., , Prof; Robinson, R., Jr.; Van Dissen, R. J.; Harvison, A.

    2015-12-01

    Recurrence interval (RI) for successive earthquakes on individual faults is recorded by paleoseismic datasets for surface-rupturing earthquakes which, in New Zealand, have magnitudes of >Mw ~6 to 7.2 depending on the thickness of the brittle crust. New Zealand faults examined have mean RI of ~130 to 8500 yrs, with an upper bound censored by the sample duration (<30 kyr) and an inverse relationship to fault slip rate. Frequency histograms, probability density functions (PDFs) and coefficient of variation (CoV = standard deviation/arithmetic mean) values have been used to quantify RI variability for geological and simulated earthquakes on >100 New Zealand active faults. RI for individual faults can vary by more than an order of magnitude. CoV of RI for paleoearthquake data comprising 4-10 events ranges from ~0.2 to 1 with a mean of 0.6±0.2. These values are generally comparable to simulated earthquakes (>100 events per fault) and suggest that RI ranges from quasi-periodic (e.g., ~0.2-0.5) to random (e.g., ~1.0). Comparison of earthquake simulation and paleoearthquake data indicates that the mean and CoV of RI can be strongly influenced by sampling artefacts including the magnitude of completeness, the dimensionality of spatial sampling and the duration of the sample period. Despite these sampling issues, RI for the best of the geological data (i.e. >6 events) and earthquake simulations are described by log-normal or Weibull distributions with long recurrence tails (~3 times the mean) and provide a basis for quantifying real RI variability (rather than sampling artefacts). Our analysis indicates that CoV of RI is negatively related to fault slip rate. These data are consistent with the notion that fault interaction and associated stress perturbations arising from slip on larger faults are more likely to advance or retard future slip on smaller faults than vice versa.

  4. Negative Selection Algorithm for Aircraft Fault Detection

    NASA Technical Reports Server (NTRS)

    Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.

    2004-01-01

    We investigated a real-valued Negative Selection Algorithm (NSA) for fault detection in man-in-the-loop aircraft operation. The detection algorithm uses body-axes angular rate sensory data exhibiting the normal flight behavior patterns, to generate probabilistically a set of fault detectors that can detect any abnormalities (including faults and damages) in the behavior pattern of the aircraft flight. We performed experiments with datasets (collected under normal and various simulated failure conditions) using the NASA Ames man-in-the-loop high-fidelity C-17 flight simulator. The paper provides results of experiments with different datasets representing various failure conditions.
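    A minimal real-valued negative-selection sketch in the spirit of the abstract: detectors are random points retained only if they do not cover any of the normal ("self") sensory patterns, and a new sample is declared abnormal when it falls within a detector's radius. The data, dimensionality, radii and detector count below are toy assumptions, not the C-17 simulator experiments:

      import numpy as np

      rng = np.random.default_rng(1)
      self_data = rng.normal(0.5, 0.05, size=(200, 3))      # normal angular-rate patterns (toy)
      self_radius, detector_radius, n_detectors = 0.15, 0.15, 300

      detectors = []
      while len(detectors) < n_detectors:
          candidate = rng.uniform(0.0, 1.0, size=3)
          if np.linalg.norm(self_data - candidate, axis=1).min() > self_radius:
              detectors.append(candidate)                    # keep only non-self detectors
      detectors = np.array(detectors)

      def is_abnormal(sample):
          """Flag a sample if any detector covers it."""
          return bool((np.linalg.norm(detectors - sample, axis=1) < detector_radius).any())

      print(is_abnormal(np.array([0.50, 0.50, 0.50])))       # normal-like -> expected False
      print(is_abnormal(np.array([0.90, 0.10, 0.90])))       # abnormal    -> expected True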

  5. Uemachi flexure zone investigated by borehole database and numerical simulation

    NASA Astrophysics Data System (ADS)

    Inoue, N.; Kitada, N.; Takemura, K.

    2014-12-01

    The Uemachi fault zone, extending north-south, is located in the center of Osaka City, Japan. The Uemachi fault is a blind reverse fault and forms a flexure zone. The effects of the Uemachi flexure zone are considered in the construction of lifelines and buildings. In this region, geomorphological surveying is difficult because of the history of marine transgression and regression. Many organizations have carried out investigations of the fault structures, and various surveys, such as seismic reflection surveys, have been conducted in and around Osaka. Many borehole data acquired for construction purposes have been collected and compiled into a geotechnical borehole database. Investigations based on several geological boreholes add subsurface geological information to this geotechnical borehole database. Various numerical simulations have been carried out to investigate the growth of a blind reverse fault in unconsolidated sediments. The displacement of the basement was prescribed in two ways: one based on fault movement, such as a dislocation model, and the other as a movement of the hanging-wall basement block. The Drucker-Prager and elastic models were used for the sediments and the basement, respectively. The simulations with low- and high-angle fault movements show good agreement with the actual distribution of the marine clay inferred from borehole data in the northern and southern parts of the Uemachi flexure zone, respectively. This research is partly funded by the Comprehensive Research on the Uemachi Fault Zone (FY2010 to FY2012) by the Ministry of Education, Culture, Sports, Science and Technology (MEXT).

  6. A technique for evaluating the application of the pin-level stuck-at fault model to VLSI circuits

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Finelli, George B.

    1987-01-01

    Accurate fault models are required to conduct the experiments defined in validation methodologies for highly reliable fault-tolerant computers (e.g., computers with a probability of failure of 10^-9 for a 10-hour mission). Described is a technique by which a researcher can evaluate the capability of the pin-level stuck-at fault model to simulate true error behavior symptoms in very large scale integrated (VLSI) digital circuits. The technique is based on a statistical comparison of the error behavior resulting from faults applied at the pin-level of and internal to a VLSI circuit. As an example of an application of the technique, the error behavior of a microprocessor simulation subjected to internal stuck-at faults is compared with the error behavior which results from pin-level stuck-at faults. The error behavior is characterized by the time between errors and the duration of errors. Based on this example data, the pin-level stuck-at fault model is found to deliver less than ideal performance. However, with respect to the class of faults which cause a system crash, the pin-level, stuck-at fault model is found to provide a good modeling capability.

  7. Automatic Detection of Electric Power Troubles (ADEPT)

    NASA Technical Reports Server (NTRS)

    Wang, Caroline; Zeanah, Hugh; Anderson, Audie; Patrick, Clint; Brady, Mike; Ford, Donnie

    1988-01-01

    Automatic Detection of Electric Power Troubles (ADEPT) is an expert system that integrates knowledge from three different suppliers to offer an advanced fault-detection system. It is designed for two modes of operation: real time fault isolation and simulated modeling. Real time fault isolation of components is accomplished on a power system breadboard through the Fault Isolation Expert System (FIES II) interface with a rule system developed in-house. Faults are quickly detected and displayed and the rules and chain of reasoning optionally provided on a laser printer. This system consists of a simulated space station power module using direct-current power supplies for solar arrays on three power buses. For tests of the system's ability to locate faults inserted via switches, loads are configured by an INTEL microcomputer and the Symbolics artificial intelligence development system. As these loads are resistive in nature, Ohm's Law is used as the basis for rules by which faults are located. The three-bus system can correct faults automatically where there is a surplus of power available on any of the three buses. Techniques developed and used can be applied readily to other control systems requiring rapid intelligent decisions. Simulated modeling, used for theoretical studies, is implemented using a modified version of Kennedy Space Center's KATE (Knowledge-Based Automatic Test Equipment), FIES II windowing, and an ADEPT knowledge base.
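    Because the breadboard loads are purely resistive, the rule base can compare each branch current against the Ohm's-law expectation I = V/R and blame the branch whose measurement deviates. A toy Python sketch of such a rule (the bus voltage, load resistances and tolerance are invented, not the breadboard's actual values):

      BUS_VOLTAGE = 120.0                                          # volts (assumed)
      LOADS = {"load_A": 60.0, "load_B": 120.0, "load_C": 240.0}   # ohms (assumed)

      def locate_faults(measured_currents, tolerance=0.10):
          """Flag loads whose measured current deviates from V/R by more than the tolerance."""
          faulted = []
          for name, resistance in LOADS.items():
              expected = BUS_VOLTAGE / resistance
              if abs(measured_currents[name] - expected) > tolerance * expected:
                  faulted.append(name)
          return faulted

      measurements = {"load_A": 2.0, "load_B": 0.0, "load_C": 0.5}   # load_B looks open
      print(locate_faults(measurements))                             # -> ['load_B']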

  8. Accounting for Fault Roughness in Pseudo-Dynamic Ground-Motion Simulations

    NASA Astrophysics Data System (ADS)

    Mai, P. Martin; Galis, Martin; Thingbaijam, Kiran K. S.; Vyas, Jagdish C.; Dunham, Eric M.

    2017-09-01

    Geological faults comprise large-scale segmentation and small-scale roughness. These multi-scale geometrical complexities determine the dynamics of the earthquake rupture process, and therefore affect the radiated seismic wavefield. In this study, we examine how different parameterizations of fault roughness lead to variability in the rupture evolution and the resulting near-fault ground motions. Rupture incoherence naturally induced by fault roughness generates high-frequency radiation that follows an ω-2 decay in displacement amplitude spectra. Because dynamic rupture simulations are computationally expensive, we test several kinematic source approximations designed to emulate the observed dynamic behavior. When simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. We observe that dynamic rake angle variations are anti-correlated with the local dip angles. Testing two parameterizations of dynamically consistent Yoffe-type source-time function, we show that the seismic wavefield of the approximated kinematic ruptures well reproduces the radiated seismic waves of the complete dynamic source process. This finding opens a new avenue for an improved pseudo-dynamic source characterization that captures the effects of fault roughness on earthquake rupture evolution. By including also the correlations between kinematic source parameters, we outline a new pseudo-dynamic rupture modeling approach for broadband ground-motion simulation.

  9. Automatic Detection of Electric Power Troubles (ADEPT)

    NASA Astrophysics Data System (ADS)

    Wang, Caroline; Zeanah, Hugh; Anderson, Audie; Patrick, Clint; Brady, Mike; Ford, Donnie

    1988-11-01

    Automatic Detection of Electric Power Troubles (ADEPT) is an expert system that integrates knowledge from three different suppliers to offer an advanced fault-detection system. It is designed for two modes of operation: real time fault isolation and simulated modeling. Real time fault isolation of components is accomplished on a power system breadboard through the Fault Isolation Expert System (FIES II) interface with a rule system developed in-house. Faults are quickly detected and displayed and the rules and chain of reasoning optionally provided on a laser printer. This system consists of a simulated space station power module using direct-current power supplies for solar arrays on three power buses. For tests of the system's ability to locate faults inserted via switches, loads are configured by an INTEL microcomputer and the Symbolics artificial intelligence development system. As these loads are resistive in nature, Ohm's Law is used as the basis for rules by which faults are located. The three-bus system can correct faults automatically where there is a surplus of power available on any of the three buses. Techniques developed and used can be applied readily to other control systems requiring rapid intelligent decisions. Simulated modeling, used for theoretical studies, is implemented using a modified version of Kennedy Space Center's KATE (Knowledge-Based Automatic Test Equipment), FIES II windowing, and an ADEPT knowledge base.

  10. Fault detection and classification in electrical power transmission system using artificial neural network.

    PubMed

    Jamil, Majid; Sharma, Sanjeev Kumar; Singh, Rajveer

    2015-01-01

    This paper focuses on the detection and classification of faults on an electrical power transmission line using artificial neural networks. The three-phase currents and voltages at one end are taken as inputs in the proposed scheme. A feed-forward neural network with the back-propagation algorithm has been employed for detection and classification of the fault, analyzing each of the three phases involved in the process. A detailed analysis with a varying number of hidden layers has been performed to validate the choice of the neural network. The simulation results show that the present neural-network-based method is efficient in detecting and classifying faults on transmission lines with satisfactory performance. Different faults are simulated with different parameters to check the versatility of the method. The proposed method can be extended to the distribution network of the power system. The various simulations and signal analyses are performed in the MATLAB® environment.
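    A compact sketch of the classifier described (a feed-forward network taking one end's three-phase voltages and currents and emitting a fault class). The synthetic random data below merely stands in for the MATLAB-simulated fault cases, so the reported score is meaningless, and the layer sizes are assumptions:

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      n_samples = 400
      X = rng.normal(size=(n_samples, 6))      # features: [Va, Vb, Vc, Ia, Ib, Ic] (toy)
      y = rng.integers(0, 4, size=n_samples)   # toy labels: 0 no fault, 1 L-G, 2 L-L, 3 three-phase

      clf = MLPClassifier(hidden_layer_sizes=(20, 10), max_iter=2000, random_state=0)
      clf.fit(X, y)
      print("training accuracy on placeholder data:", clf.score(X, y))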

  11. 3D numerical simulations of multiphase continental rifting

    NASA Astrophysics Data System (ADS)

    Naliboff, J.; Glerum, A.; Brune, S.

    2017-12-01

    Observations of rifted margin architecture suggest continental breakup occurs through multiple phases of extension with distinct styles of deformation. The initial rifting stages are often characterized by slow extension rates and distributed normal faulting in the upper crust decoupled from deformation in the lower crust and mantle lithosphere. Further rifting marks a transition to higher extension rates and coupling between the crust and mantle lithosphere, with deformation typically focused along large-scale detachment faults. Significantly, recent detailed reconstructions and high-resolution 2D numerical simulations suggest that rather than remaining focused on a single long-lived detachment fault, deformation in this phase may progress toward lithospheric breakup through a complex process of fault interaction and development. The numerical simulations also suggest that an initial phase of distributed normal faulting can play a key role in the development of these complex fault networks and the resulting finite deformation patterns. Motivated by these findings, we will present 3D numerical simulations of continental rifting that examine the role of temporal increases in extension velocity on rifted margin structure. The numerical simulations are developed with the massively parallel finite-element code ASPECT. While originally designed to model mantle convection using advanced solvers and adaptive mesh refinement techniques, ASPECT has been extended to model visco-plastic deformation that combines a Drucker Prager yield criterion with non-linear dislocation and diffusion creep. To promote deformation localization, the internal friction angle and cohesion weaken as a function of accumulated plastic strain. Rather than prescribing a single zone of weakness to initiate deformation, an initial random perturbation of the plastic strain field combined with rapid strain weakening produces distributed normal faulting at relatively slow rates of extension in both 2D and 3D simulations. Our presentation will focus on both the numerical assumptions required to produce these results and variations in 3D rifted margin architecture arising from a transition from slow to rapid rates of extension.
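    The localization mechanism mentioned above, strain weakening of the internal friction angle and cohesion with accumulated plastic strain, is typically implemented as a simple linear ramp between two strain bounds. A schematic Python version (the bounds and weakened values are assumptions for illustration, not the simulations' actual inputs):

      def weakened(initial_value, final_value, plastic_strain, eps_start=0.0, eps_end=1.0):
          """Linear weakening of a plastic parameter with accumulated plastic strain."""
          if plastic_strain <= eps_start:
              return initial_value
          if plastic_strain >= eps_end:
              return final_value
          fraction = (plastic_strain - eps_start) / (eps_end - eps_start)
          return initial_value + fraction * (final_value - initial_value)

      for eps in (0.0, 0.5, 1.0, 2.0):
          friction = weakened(30.0, 15.0, eps)        # friction angle [degrees]
          cohesion = weakened(20.0e6, 4.0e6, eps)     # cohesion [Pa]
          print(f"plastic strain {eps:.1f}: friction {friction:.1f} deg, "
                f"cohesion {cohesion / 1.0e6:.1f} MPa")

    Combined with an initial random plastic-strain perturbation, this rapid weakening is what lets distributed normal faults emerge without prescribing a single weak zone.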

  12. Simulation-Based Probabilistic Seismic Hazard Assessment Using System-Level, Physics-Based Models: Assembling Virtual California

    NASA Astrophysics Data System (ADS)

    Rundle, P. B.; Rundle, J. B.; Morein, G.; Donnellan, A.; Turcotte, D.; Klein, W.

    2004-12-01

    The research community is rapidly moving towards the development of an earthquake forecast technology based on the use of complex, system-level earthquake fault system simulations. Using these topologically and dynamically realistic simulations, it is possible to develop ensemble forecasting methods similar to those used in weather and climate research. To effectively carry out such a program, one needs 1) a topologically realistic model to simulate the fault system; 2) data sets to constrain the model parameters through a systematic program of data assimilation; 3) a computational technology making use of modern paradigms of high performance and parallel computing systems; and 4) software to visualize and analyze the results. In particular, we focus attention on a new version of our code Virtual California (version 2001) in which we model all of the major strike slip faults in California, from the Mexico-California border to the Mendocino Triple Junction. Virtual California is a "backslip model", meaning that the long term rate of slip on each fault segment in the model is matched to the observed rate. We use the historic data set of earthquakes larger than magnitude M > 6 to define the frictional properties of 650 fault segments (degrees of freedom) in the model. To compute the dynamics and the associated surface deformation, we use message passing as implemented in the MPICH standard distribution on a Beowulf cluster consisting of >10 CPUs. We will also report results from implementing the code on significantly larger machines so that we can begin to examine much finer spatial scales of resolution, and to assess scaling properties of the code. We present results of simulations both as static images and as mpeg movies, so that the dynamical aspects of the computation can be assessed by the viewer. We compute a variety of statistics from the simulations, including magnitude-frequency relations, and compare these with data from real fault systems. We report recent results on the use of Virtual California for probabilistic earthquake forecasting for several sub-groups of major faults in California. These methods have the advantage that system-level fault interactions are explicitly included, as well as laboratory-based friction laws.

  13. Thermomechanical earthquake cycle simulations with rate-and-state friction and nonlinear viscoelasticity

    NASA Astrophysics Data System (ADS)

    Allison, K. L.; Dunham, E. M.

    2017-12-01

    We simulate earthquake cycles on a 2D strike-slip fault, modeling both rate-and-state fault friction and an off-fault nonlinear power-law rheology. The power-law rheology involves an effective viscosity that is a function of temperature and stress, and therefore varies both spatially and temporally. All phases of the earthquake cycle are simulated, allowing the model to spontaneously generate earthquakes, and to capture frictional afterslip and postseismic and interseismic viscous flow. We investigate the interaction between fault slip and bulk viscous flow, using experimentally-based flow laws for quartz-diorite in the crust and olivine in the mantle, representative of the Mojave Desert region in Southern California. We first consider a suite of three linear geotherms which are constant in time, with dT/dz = 20, 25, and 30 K/km. Though the simulations produce very different deformation styles in the lower crust, ranging from significant interseismic fault creep to purely bulk viscous flow, they have almost identical earthquake recurrence interval, nucleation depth, and down-dip coseismic slip limit. This indicates that bulk viscous flow and interseismic fault creep load the brittle crust similarly. The simulations also predict unrealistically high stresses in the upper crust, resulting from the fact that the lower crust and upper mantle are relatively weak far from the fault, and from the relatively small role that basal tractions on the base of the crust play in the force balance of the lithosphere. We also find that for the warmest model, the effective viscosity varies by an order of magnitude in the interseismic period, whereas for the cooler models it remains roughly constant. Because the rheology is highly sensitive to changes in temperature, in addition to the simulations with constant temperature we also consider the effect of heat generation. We capture both frictional heat generation and off-fault viscous shear heating, allowing these in turn to alter the effective viscosity. The resulting temperature changes may reduce the width of the shear zone in the lower crust and upper mantle, and reduce the effective viscosity.
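    For reference, the effective viscosity of the power-law (dislocation-creep) rheology referred to above can be written in a standard form (with A the pre-exponential factor, n the stress exponent, Q the activation energy, R the gas constant, T the absolute temperature and \dot{\varepsilon}_{II} the second invariant of the strain rate; this is the textbook expression, not the paper's exact implementation):

      \eta_{\mathrm{eff}} = \frac{1}{2}\, A^{-1/n}\, \dot{\varepsilon}_{II}^{\,(1-n)/n}\, \exp\!\left(\frac{Q}{n R T}\right)

    which is why the effective viscosity in the simulations varies both spatially and in time as temperature and strain rate evolve through the earthquake cycle.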

  14. Design and testing of a learning environment integrating computer-assisted experimentation and computer-assisted simulation

    NASA Astrophysics Data System (ADS)

    Riopel, Martin

    To make science laboratory sessions more instructive, we have developed a learning environment that will allow students enrolled in a mechanics course at college or university level to engage in a scientific modelization process by combining computer-simulated experimentation and microcomputer-based laboratories. The main goal is to assist and facilitate both inductive and deductive reasoning. Within this computer application, each action can also be automatically recorded and identified while the student is using the software. The most original part of the environment is to let the student compare the simulated animation with the real video by superposing the images. We used the software with students and observed that they effectively engaged in a modelization process that included both inductive and deductive reasoning. We also observed that the students were able to use the software to produce adequate answers to questions concerning both previously taught and new theoretical concepts in physics. The students completed the experiment about twice as fast as usual and considered that using the software resulted in a better understanding of the phenomenon. We conclude that this use of the computer in science education can broaden the range of possibilities for learning and for teaching and can provide new avenues for researchers who can use it to record and study students' path of reasoning. We also believe that it would be interesting to investigate more some of the benefits associated with this environment, particularly the acceleration effect, the improvement of students' reasoning and the equilibrium between induction and deduction that we observed within this research.

  15. A Simplified Model for Multiphase Leakage through Faults with Applications for CO2 Storage

    NASA Astrophysics Data System (ADS)

    Watson, F. E.; Doster, F.

    2017-12-01

    In the context of geological CO2 storage, faults in the subsurface could affect storage security by acting as high permeability pathways which allow CO2 to flow upwards and away from the storage formation. To assess the likelihood of leakage through faults and the impacts faults might have on storage security numerical models are required. However, faults are complex geological features, usually consisting of a fault core surrounded by a highly fractured damage zone. A direct representation of these in a numerical model would require very fine grid resolution and would be computationally expensive. Here, we present the development of a reduced complexity model for fault flow using the vertically integrated formulation. This model captures the main features of the flow but does not require us to resolve the vertical dimension, nor the fault in the horizontal dimension, explicitly. It is thus less computationally expensive than full resolution models. Consequently, we can quickly model many realisations for parameter uncertainty studies of CO2 injection into faulted reservoirs. We develop the model based on explicitly simulating local 3D representations of faults for characteristic scenarios using the Matlab Reservoir Simulation Toolbox (MRST). We have assessed the impact of variables such as fault geometry, porosity and permeability on multiphase leakage rates.

  16. Computing and Visualizing the Complex Dynamics of Earthquake Fault Systems: Towards Ensemble Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Rundle, J.; Rundle, P.; Donnellan, A.; Li, P.

    2003-12-01

    We consider the problem of the complex dynamics of earthquake fault systems, and whether numerical simulations can be used to define an ensemble forecasting technology similar to that used in weather and climate research. To effectively carry out such a program, we need 1) a topologically realistic model to simulate the fault system; 2) data sets to constrain the model parameters through a systematic program of data assimilation; 3) a computational technology making use of modern paradigms of high performance and parallel computing systems; and 4) software to visualize and analyze the results. In particular, we focus attention on a new version of our code Virtual California (version 2001) in which we model all of the major strike-slip faults extending throughout California, from the Mexico-California border to the Mendocino Triple Junction. We use the historic data set of earthquakes larger than magnitude M > 6 to define the frictional properties of all 654 fault segments (degrees of freedom) in the model. Previous versions of Virtual California had used only 215 fault segments to model the strike-slip faults in southern California. To compute the dynamics and the associated surface deformation, we use message passing as implemented in the MPICH standard distribution on a small Beowulf cluster consisting of 10 CPUs. We are also planning to run the code on significantly larger machines so that we can begin to examine much finer spatial scales of resolution, and to assess scaling properties of the code. We present results of simulations both as static images and as mpeg movies, so that the dynamical aspects of the computation can be assessed by the viewer. We also compute a variety of statistics from the simulations, including magnitude-frequency relations, and compare these with data from real fault systems.
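
    The magnitude-frequency statistics mentioned above can be computed directly from a simulated catalog; the sketch below uses synthetic random magnitudes (not Virtual California output) together with the standard maximum-likelihood b-value estimate.

      import numpy as np

      # Synthetic stand-in for a simulator catalog: exponentially distributed
      # magnitudes above M 6 with b close to 1.
      rng = np.random.default_rng(0)
      mags = 6.0 + rng.exponential(scale=1.0 / np.log(10.0), size=5000)

      # Cumulative magnitude-frequency counts N(>= M)
      for m in (6.0, 6.5, 7.0, 7.5):
          print(f"N(>= {m}) = {(mags >= m).sum()}")

      # Maximum-likelihood b-value (Aki, 1965): b = log10(e) / (mean(M) - Mmin)
      b_value = np.log10(np.e) / (mags.mean() - 6.0)
      print("estimated b-value:", round(b_value, 2))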

  17. The Seismicity of the Central Apennines Region Studied by Means of a Physics-Based Earthquake Simulator

    NASA Astrophysics Data System (ADS)

    Console, R.; Vannoli, P.; Carluccio, R.

    2016-12-01

    The application of a physics-based earthquake simulation algorithm to the central Apennines region, where the 24 August 2016 Amatrice earthquake occurred, allowed the compilation of a synthetic seismic catalog lasting 100 ky, and containing more than 500,000 M ≥ 4.0 events, without the limitations that real catalogs suffer in terms of completeness, homogeneity and time duration. The algorithm on which this simulator is based is constrained by several physical elements such as: (a) an average slip rate for every single fault in the investigated fault systems, (b) the process of rupture growth and termination, leading to a self-organized earthquake magnitude distribution, and (c) interaction between earthquake sources, including small magnitude events. Events nucleated in one fault are allowed to expand into neighboring faults, even those belonging to a different fault system, if they are separated by less than a given maximum distance. The seismogenic model upon which we applied the simulator code was derived from the DISS 3.2.0 database (http://diss.rm.ingv.it/diss/), selecting all the fault systems that are recognized in the central Apennines region, for a total of 24 fault systems. The application of our simulation algorithm provides typical features in time, space and magnitude behavior of the seismicity, which are comparable with those of real observations. These features include long-term periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the linear Gutenberg-Richter distribution in the moderate and higher magnitude range. The statistical distribution of earthquakes with M ≥ 6.0 on single faults exhibits a fairly clear pseudo-periodic behavior, with a coefficient of variation Cv of the order of 0.3-0.6. We found in our synthetic catalog a clear trend of long-term acceleration of seismic activity preceding M ≥ 6.0 earthquakes and quiescence following those earthquakes. Lastly, as an example of a possible use of synthetic catalogs, an attenuation law was applied to all the events reported in the synthetic catalog for the production of maps showing the exceedance probability of given values of peak acceleration (PGA) on the territory under investigation.
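
    The coefficient of variation quoted above is simply the standard deviation of the inter-event times divided by their mean; the sketch below computes it for a short, hypothetical sequence of event times.

      import numpy as np

      def coefficient_of_variation(event_times):
          """Cv of inter-event times: Cv << 1 is pseudo-periodic, ~1 Poissonian, >> 1 clustered."""
          intervals = np.diff(np.sort(np.asarray(event_times, dtype=float)))
          return intervals.std() / intervals.mean()

      # Hypothetical occurrence times (years) of M >= 6.0 events on a single fault
      times = [120.0, 480.0, 910.0, 1290.0, 1750.0, 2080.0]
      print("Cv =", round(coefficient_of_variation(times), 2))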

  18. Faults simulations for three-dimensional reservoir-geomechanical models with the extended finite element method

    NASA Astrophysics Data System (ADS)

    Prévost, Jean H.; Sukumar, N.

    2016-01-01

    Faults are geological entities with thicknesses several orders of magnitude smaller than the grid blocks typically used to discretize reservoir and/or over- and under-burden geological formations. Introducing faults in a complex reservoir and/or geomechanical mesh therefore poses significant meshing difficulties. In this paper, we consider the strong coupling of solid displacement and fluid pressure in a three-dimensional poro-mechanical (reservoir-geomechanical) model. We introduce faults in the mesh without meshing them explicitly, by using the extended finite element method (X-FEM) in which the nodes whose basis function support intersects the fault are enriched within the framework of partition of unity. For the geomechanics, the fault is treated as an internal displacement discontinuity that allows slipping to occur using a Mohr-Coulomb type criterion. For the reservoir, the fault is either an internal fluid-flow conduit, which allows fluid to flow within the fault as well as to enter and leave it, or a barrier to flow (sealing fault). For internal fluid-flow conduits, the continuous fluid pressure approximation admits a discontinuity in its normal derivative across the fault, whereas for an impermeable fault, the pressure approximation is discontinuous across the fault. Equal-order displacement and pressure approximations are used. Two- and three-dimensional benchmark computations are presented to verify the accuracy of the approach, and simulations are presented that reveal the influence of the rate of loading on the activation of faults.

  19. Fault classification method for the driving safety of electrified vehicles

    NASA Astrophysics Data System (ADS)

    Wanner, Daniel; Drugge, Lars; Stensson Trigell, Annika

    2014-05-01

    A fault classification method is proposed which has been applied to an electric vehicle. Potential faults in the different subsystems that can affect the vehicle directional stability were collected in a failure mode and effect analysis. Similar driveline faults were grouped together if they resembled each other with respect to their influence on the vehicle dynamic behaviour. The faults were physically modelled in a simulation environment before they were induced in a detailed vehicle model under normal driving conditions. A special focus was placed on faults in the driveline of electric vehicles employing in-wheel motors of the permanent magnet type. Several failures caused by mechanical and other faults were analysed as well. The fault classification method consists of a controllability ranking developed according to the functional safety standard ISO 26262. The controllability of a fault was determined with three parameters covering the influence of the longitudinal, lateral and yaw motion of the vehicle. The simulation results were analysed and the faults were classified according to their controllability using the proposed method. It was shown that the controllability decreased specifically with increasing lateral acceleration and increasing speed. The results for the electric driveline faults show that this trend cannot be generalised for all the faults, as the controllability deteriorated for some faults during manoeuvres with low lateral acceleration and low speed. The proposed method is generic and can be applied to various other types of road vehicles and faults.

  20. Chip level modeling of LSI devices

    NASA Technical Reports Server (NTRS)

    Armstrong, J. R.

    1984-01-01

    The advent of Very Large Scale Integration (VLSI) technology has rendered the gate level model impractical for many simulation activities critical to the design automation process. As an alternative, an approach to the modeling of VLSI devices at the chip level is described, including the specification of modeling language constructs important to the modeling process. A model structure is presented in which models of the LSI devices are constructed as single entities. The modeling structure is two layered. The functional layer in this structure is used to model the input/output response of the LSI chip. A second layer, the fault mapping layer, is added, if fault simulations are required, in order to map the effects of hardware faults onto the functional layer. Modeling examples for each layer are presented. Fault modeling at the chip level is described. Approaches to realistic functional fault selection and defining fault coverage for functional faults are given. Application of the modeling techniques to single chip and bit slice microprocessors is discussed.
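
    The two-layer structure described above can be illustrated with a toy model: a functional layer giving the chip's input/output response, and a fault-mapping layer that projects a hardware fault onto that response. The device, fault, and Python representation are hypothetical stand-ins, not the original modeling-language constructs.

      # Functional layer: input/output response of a tiny 4-bit "ALU"
      def alu_functional(a, b, op):
          result = (a + b) if op == "ADD" else (a & b)
          return result & 0xF                      # 4-bit output

      # Fault-mapping layer: map a hardware fault (stuck-at output bit) onto the functional response
      def fault_mapping(output, stuck_bit=None, stuck_value=0):
          if stuck_bit is None:
              return output
          mask = 1 << stuck_bit
          return (output | mask) if stuck_value else (output & ~mask)

      def chip_model(a, b, op, fault=None):
          out = alu_functional(a, b, op)
          return fault_mapping(out, **fault) if fault else out

      # Fault simulation: compare fault-free and faulty responses for one test vector
      print(chip_model(0b0101, 0b0011, "ADD"))                                            # 8
      print(chip_model(0b0101, 0b0011, "ADD", fault={"stuck_bit": 3, "stuck_value": 0}))  # 0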

  1. The Virtual Quake earthquake simulator: a simulation-based forecast of the El Mayor-Cucapah region and evidence of predictability in simulated earthquake sequences

    NASA Astrophysics Data System (ADS)

    Yoder, Mark R.; Schultz, Kasey W.; Heien, Eric M.; Rundle, John B.; Turcotte, Donald L.; Parker, Jay W.; Donnellan, Andrea

    2015-12-01

    In this manuscript, we introduce a framework for developing earthquake forecasts using Virtual Quake (VQ), the generalized successor to the perhaps better known Virtual California (VC) earthquake simulator. We discuss the basic merits and mechanics of the simulator, and we present several statistics of interest for earthquake forecasting. We also show that, though the system as a whole (in aggregate) behaves quite randomly, (simulated) earthquake sequences limited to specific fault sections exhibit measurable predictability in the form of increasing seismicity precursory to large m > 7 earthquakes. In order to quantify this, we develop an alert-based forecasting metric, and show that it exhibits significant information gain compared to random forecasts. We also discuss the long-standing question of activation versus quiescent type earthquake triggering. We show that VQ exhibits both behaviours separately for independent fault sections; some fault sections exhibit activation type triggering, while others are better characterized by quiescent type triggering. We discuss these aspects of VQ specifically with respect to faults in the Salton Basin and near the El Mayor-Cucapah region in southern California, USA and northern Baja California Norte, Mexico.

  2. Development and Testing of Protection Scheme for Renewable-Rich Distribution System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brahma, Sukumar; Ranade, Satish; Elkhatib, Mohamed E.

    As the penetration of renewables increases in distribution systems, and microgrids are conceived with high penetration of such generation that connects through inverters, fault location and protection of microgrids need consideration. This report proposes averaged models that help simulate fault scenarios in renewable-rich microgrids, models for locating faults in such microgrids, and comments on the protection models that may be considered for microgrids. Simulation studies are reported to justify the models.

  3. Numerical analysis of the effects induced by normal faults and dip angles on rock bursts

    NASA Astrophysics Data System (ADS)

    Jiang, Lishuai; Wang, Pu; Zhang, Peipeng; Zheng, Pengqiang; Xu, Bin

    2017-10-01

    The study of mining effects under the influences of a normal fault and its dip angle is significant for the prediction and prevention of rock bursts. Based on the geological conditions of panel 2301N in a coalmine, the evolution laws of the strata behaviors of the working face affected by a fault and the instability of the fault induced by mining operations with the working face of the footwall and hanging wall advancing towards a normal fault are studied using UDEC numerical simulation. The mechanism that induces rock burst is revealed, and the influence characteristics of the fault dip angle are analyzed. The results of the numerical simulation are verified by conducting a case study regarding the microseismic events. The results of this study serve as a reference for the prediction of rock bursts and their classification into hazardous areas under similar conditions.

  4. ARGES: an Expert System for Fault Diagnosis Within Space-Based ECLS Systems

    NASA Technical Reports Server (NTRS)

    Pachura, David W.; Suleiman, Salem A.; Mendler, Andrew P.

    1988-01-01

    ARGES (Atmospheric Revitalization Group Expert System) is a demonstration prototype expert system for fault management for the Solid Amine, Water Desorbed (SAWD) CO2 removal assembly, associated with the Environmental Control and Life Support (ECLS) System. ARGES monitors and reduces data in real time from either the SAWD controller or a simulation of the SAWD assembly. It can detect gradual degradations or predict failures. This allows graceful shutdown and scheduled maintenance, which reduces crew maintenance overhead. Status and fault information is presented in a user interface that simulates what would be seen by a crewperson. The user interface employs animated color graphics and an object-oriented approach to provide detailed status information, fault identification, and explanation of reasoning in a rapidly assimilated manner. In addition, ARGES recommends possible courses of action for predicted and actual faults. ARGES is seen as a forerunner of AI-based fault management systems for manned space systems.

  5. Self-induced seismicity due to fluid circulation along faults

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo; Poisson, Blanche; Toussaint, Renaud; Rachez, Xavier; Schmittbuhl, Jean

    2014-03-01

    In this paper, we develop a system of equations describing fluid migration, fault rheology, fault thickness evolution and shear rupture during a seismic cycle, triggered either by tectonic loading or by fluid injection. Assuming that the phenomena predominantly take place on a single fault described as a finite permeable zone of variable width, we are able to project the equations within the volumetric fault core onto the 2-D fault interface. On the basis of this `fault lubrication approximation', we simulate the evolution of seismicity when fluid is injected at one point along the fault, to model induced seismicity during an injection test in a borehole that intercepts the fault. We perform several parametric studies to understand the basic behaviour of the system. Fluid transmissivity and fault rheology are key elements. The simulated seismicity generally tends to rapidly evolve after triggering, independently of the injection history, and to end when the stationary path of fluid flow is established at the outer boundary of the model. This self-induced seismicity takes place in the case where shear rupturing on a planar fault becomes dominant over the fluid migration process. On the contrary, if healing processes take place, so that the fluid mass is trapped along the fault, rupturing occurs continuously during the injection period. Seismicity and fluid migration are strongly influenced by the injection rate and the heterogeneity.

  6. A suite of exercises for verifying dynamic earthquake rupture codes

    USGS Publications Warehouse

    Harris, Ruth A.; Barall, Michael; Aagaard, Brad T.; Ma, Shuo; Roten, Daniel; Olsen, Kim B.; Duan, Benchun; Liu, Dunyu; Luo, Bin; Bai, Kangchen; Ampuero, Jean-Paul; Kaneko, Yoshihiro; Gabriel, Alice-Agnes; Duru, Kenneth; Ulrich, Thomas; Wollherr, Stephanie; Shi, Zheqiang; Dunham, Eric; Bydlon, Sam; Zhang, Zhenguo; Chen, Xiaofei; Somala, Surendra N.; Pelties, Christian; Tago, Josue; Cruz-Atienza, Victor Manuel; Kozdon, Jeremy; Daub, Eric; Aslam, Khurram; Kase, Yuko; Withers, Kyle; Dalguer, Luis

    2018-01-01

    We describe a set of benchmark exercises that are designed to test if computer codes that simulate dynamic earthquake rupture are working as intended. These types of computer codes are often used to understand how earthquakes operate, and they produce simulation results that include earthquake size, amounts of fault slip, and the patterns of ground shaking and crustal deformation. The benchmark exercises examine a range of features that scientists incorporate in their dynamic earthquake rupture simulations. These include implementations of simple or complex fault geometry, off‐fault rock response to an earthquake, stress conditions, and a variety of formulations for fault friction. Many of the benchmarks were designed to investigate scientific problems at the forefronts of earthquake physics and strong ground motions research. The exercises are freely available on our website for use by the scientific community.

  7. Pseudo Random Stimulus Response of Combustion Systems.

    DTIC Science & Technology

    1980-01-01

    ... is also applicable to the coalescence/dispersion (C/D) micromixing model. In the C/D model, micromixing is simulated by considering the reacting ... the turbulent fluctuations on the local heat release rate. Thus the micromixing 'noise' measurements will not be valid; however, deductions ...

  8. Robust Fault Detection for Switched Fuzzy Systems With Unknown Input.

    PubMed

    Han, Jian; Zhang, Huaguang; Wang, Yingchun; Sun, Xun

    2017-10-03

    This paper investigates the fault detection problem for a class of switched nonlinear systems in the T-S fuzzy framework. The unknown input is considered in the systems. A novel fault detection unknown input observer design method is proposed. Based on the proposed observer, the unknown input can be removed from the fault detection residual. The weighted H∞ performance level is considered to ensure robustness. In addition, the weighted H₋ performance level is introduced, which can increase the sensitivity of the proposed detection method. To verify the proposed scheme, a numerical simulation example and an electromechanical system simulation example are provided at the end of this paper.
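
    The basic idea of observer-based residual generation can be sketched for a plain discrete-time linear system: the residual stays near zero while model and measurement agree, and departs from zero when a sensor fault appears. The matrices and observer gain below are assumed for illustration; the design above additionally decouples the residual from the unknown input, which this sketch does not do.

      import numpy as np

      A = np.array([[0.9, 0.1], [0.0, 0.8]])
      B = np.array([[0.0], [0.1]])
      C = np.array([[1.0, 0.0]])
      L = np.array([[0.5], [0.3]])          # observer gain (A - L C is stable)

      x = np.zeros((2, 1))                  # plant state
      xh = np.array([[0.2], [-0.1]])        # observer state, deliberately wrong at start

      for k in range(60):
          u = np.array([[1.0]])
          fault = 0.5 if k >= 40 else 0.0   # additive sensor fault switched on at step 40
          y = C @ x + fault
          r = y - C @ xh                    # residual: ~0 when healthy, nonzero after the fault
          x = A @ x + B @ u
          xh = A @ xh + B @ u + L @ r
          if k in (10, 39, 41, 45):
              print(k, round(float(r[0, 0]), 4))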

  9. Current Sensor Fault Diagnosis Based on a Sliding Mode Observer for PMSM Driven Systems

    PubMed Central

    Huang, Gang; Luo, Yi-Ping; Zhang, Chang-Fan; Huang, Yi-Shan; Zhao, Kai-Hui

    2015-01-01

    This paper proposes a current sensor fault detection method based on a sliding mode observer for the torque closed-loop control system of interior permanent magnet synchronous motors. First, a sliding mode observer based on the extended flux linkage is built to simplify the motor model, which effectively eliminates the phenomenon of salient poles and the dependence on the direct axis inductance parameter, and can also be used for real-time calculation of feedback torque. Then a sliding mode current observer is constructed in αβ coordinates to generate the fault residuals of the phase current sensors. The method can accurately identify abrupt gain faults and slow-variation offset faults in faulty sensors in real time, the generated residuals of the designed fault detection system are not affected by the unknown input, and the structure of the observer, the theoretical derivation and the stability proof are concise and simple. RT-LAB real-time simulation is used to build a hardware-in-the-loop simulation model. The simulation and experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:25970258

  10. A Doppler Transient Model Based on the Laplace Wavelet and Spectrum Correlation Assessment for Locomotive Bearing Fault Diagnosis

    PubMed Central

    Shen, Changqing; Liu, Fang; Wang, Dong; Zhang, Ao; Kong, Fanrang; Tse, Peter W.

    2013-01-01

    The condition of locomotive bearings, which are essential components in trains, is crucial to train safety. The Doppler effect significantly distorts acoustic signals during high movement speeds, substantially increasing the difficulty of monitoring locomotive bearings online. In this study, a new Doppler transient model based on the acoustic theory and the Laplace wavelet is presented for the identification of fault-related impact intervals embedded in acoustic signals. An envelope spectrum correlation assessment is conducted between the transient model and the real fault signal in the frequency domain to optimize the model parameters. The proposed method can identify the parameters used for simulated transients (periods in simulated transients) from acoustic signals. Thus, localized bearing faults can be detected successfully based on identified parameters, particularly period intervals. The performance of the proposed method is tested on a simulated signal suffering from the Doppler effect. Besides, the proposed method is used to analyze real acoustic signals of locomotive bearings with inner race and outer race faults, respectively. The results confirm that the periods between the transients, which represent locomotive bearing fault characteristics, can be detected successfully. PMID:24253191
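
    The transient-matching step can be illustrated by correlating a damped-sinusoid (Laplace-type) wavelet against a measured transient and keeping the parameters that maximise the correlation. The sampling rate, resonance frequency, damping values, and the synthetic signal below are all illustrative stand-ins for real acoustic data.

      import numpy as np

      fs = 20000.0
      t = np.arange(0, 0.01, 1 / fs)

      def laplace_wavelet(t, f, zeta):
          return np.exp(-zeta / np.sqrt(1 - zeta**2) * 2 * np.pi * f * t) * np.sin(2 * np.pi * f * t)

      # "Measured" transient (synthetic): a 3 kHz resonance with damping ratio 0.05 plus noise
      rng = np.random.default_rng(1)
      signal = laplace_wavelet(t, 3000.0, 0.05) + 0.1 * rng.standard_normal(t.size)

      # Grid search for the best-matching wavelet by correlation coefficient
      best = max(
          ((f, z, np.corrcoef(signal, laplace_wavelet(t, f, z))[0, 1])
           for f in np.arange(1000.0, 5001.0, 250.0)
           for z in (0.02, 0.05, 0.1)),
          key=lambda item: item[2],
      )
      print("identified frequency, damping, correlation:", best)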

  11. A Doppler transient model based on the laplace wavelet and spectrum correlation assessment for locomotive bearing fault diagnosis.

    PubMed

    Shen, Changqing; Liu, Fang; Wang, Dong; Zhang, Ao; Kong, Fanrang; Tse, Peter W

    2013-11-18

    The condition of locomotive bearings, which are essential components in trains, is crucial to train safety. The Doppler effect significantly distorts acoustic signals during high movement speeds, substantially increasing the difficulty of monitoring locomotive bearings online. In this study, a new Doppler transient model based on the acoustic theory and the Laplace wavelet is presented for the identification of fault-related impact intervals embedded in acoustic signals. An envelope spectrum correlation assessment is conducted between the transient model and the real fault signal in the frequency domain to optimize the model parameters. The proposed method can identify the parameters used for simulated transients (periods in simulated transients) from acoustic signals. Thus, localized bearing faults can be detected successfully based on identified parameters, particularly period intervals. The performance of the proposed method is tested on a simulated signal suffering from the Doppler effect. Besides, the proposed method is used to analyze real acoustic signals of locomotive bearings with inner race and outer race faults, respectively. The results confirm that the periods between the transients, which represent locomotive bearing fault characteristics, can be detected successfully.

  12. Fault detection and diagnosis of photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Wu, Xing

    The rapid growth of the solar industry over the past several years has expanded the significance of photovoltaic (PV) systems. One of the primary aims of research in building-integrated PV systems is to improve the performance of the system's efficiency, availability, and reliability. Although much work has been done on technological design to increase a photovoltaic module's efficiency, there is little research so far on fault diagnosis for PV systems. Faults in a PV system, if not detected, may not only reduce power generation, but also threaten the availability and reliability, effectively the "security" of the whole system. In this paper, first a circuit-based simulation baseline model of a PV system with maximum power point tracking (MPPT) is developed using MATLAB software. MATLAB is one of the most popular tools for integrating computation, visualization and programming in an easy-to-use modeling environment. Second, data collection of a PV system at variable surface temperatures and insolation levels under normal operation is acquired. The developed simulation model of the PV system is then calibrated and improved by comparing modeled I-V and P-V characteristics with measured I-V and P-V characteristics to make sure the simulated curves are close to those measured values from the experiments. Finally, based on the circuit-based simulation model, a PV model of various types of faults will be developed by changing conditions or inputs in the MATLAB model, and the I-V and P-V characteristic curves, and the time-dependent voltage and current characteristics of the fault modalities will be characterized for each type of fault. These will be developed as benchmark I-V or P-V, or prototype transient curves. If a fault occurs in a PV system, polling and comparing actual measured I-V and P-V characteristic curves with both normal operational curves and these baseline fault curves will aid in fault diagnosis.
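
    The comparison of measured and baseline curves can be sketched with a strongly simplified single-diode model (series and shunt resistances neglected); the parameter values are placeholders rather than the calibrated MATLAB model described above.

      import numpy as np

      k_B, q_e = 1.380649e-23, 1.602176634e-19

      def iv_curve(v, g=1000.0, t_cell=298.15, i_sc=8.0, i_0=1e-7, n=1.3, cells=60):
          vt = n * cells * k_B * t_cell / q_e       # lumped thermal voltage of the module
          i_ph = i_sc * g / 1000.0                  # photocurrent scales with irradiance
          return np.clip(i_ph - i_0 * (np.exp(v / vt) - 1.0), 0.0, None)

      v = np.linspace(0.0, 40.0, 400)
      baseline = iv_curve(v)                        # normal operation at 1000 W/m^2
      measured = iv_curve(v, g=700.0)               # stand-in for a shading/soiling fault

      p_loss = 1.0 - np.max(v * measured) / np.max(v * baseline)
      print("relative maximum-power loss vs baseline:", round(p_loss, 2))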

  13. Modeling of fault activation and seismicity by injection directly into a fault zone associated with hydraulic fracturing of shale-gas reservoirs

    DOE PAGES

    Rutqvist, Jonny; Rinaldi, Antonio P.; Cappa, Frédéric; ...

    2015-03-01

    We conducted three-dimensional coupled fluid-flow and geomechanical modeling of fault activation and seismicity associated with hydraulic fracturing stimulation of a shale-gas reservoir. We simulated a case in which a horizontal injection well intersects a steeply dipping fault, with hydraulic fracturing channeled within the fault, during a 3-hour hydraulic fracturing stage. Consistent with field observations, the simulation results show that shale-gas hydraulic fracturing along faults does not likely induce seismic events that could be felt on the ground surface, but rather results in numerous small microseismic events, as well as aseismic deformations along with the fracture propagation. The calculated seismic moment magnitudes ranged from about -2.0 to 0.5, except for one case assuming a very brittle fault with low residual shear strength, for which the magnitude was 2.3, an event that would likely go unnoticed or might be barely felt by humans at its epicenter. The calculated moment magnitudes showed a dependency on injection depth and fault dip. We attribute such dependency to variation in shear stress on the fault plane and associated variation in stress drop upon reactivation. Our simulations showed that at the end of the 3-hour injection, the rupture zone associated with tensile and shear failure extended to a maximum radius of about 200 m from the injection well. The results of this modeling study for steeply dipping faults at 1000 to 2500 m depth are in agreement with earlier studies and field observations showing that it is very unlikely that activation of a fault by shale-gas hydraulic fracturing at great depth (thousands of meters) could cause felt seismicity or create a new flow path (through fault rupture) that could reach shallow groundwater resources.
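
    The quoted magnitudes follow from the standard definitions M0 = mu * A * d and Mw = (2/3) (log10 M0 - 9.1); the numbers below are assumed rupture dimensions, not values taken from the study.

      import math

      def moment_magnitude(rupture_area_m2, mean_slip_m, shear_modulus_pa=30e9):
          """Seismic moment M0 = mu * A * d (N m) and moment magnitude Mw."""
          m0 = shear_modulus_pa * rupture_area_m2 * mean_slip_m
          return (2.0 / 3.0) * (math.log10(m0) - 9.1)

      # A ~10 m radius patch with half a millimetre of slip gives an event near the
      # upper end of the microseismic magnitude range quoted above.
      area = math.pi * 10.0**2
      print(round(moment_magnitude(area, 0.0005), 1))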

  14. Modeling of fault activation and seismicity by injection directly into a fault zone associated with hydraulic fracturing of shale-gas reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rutqvist, Jonny; Rinaldi, Antonio P.; Cappa, Frédéric

    We conducted three-dimensional coupled fluid-flow and geomechanical modeling of fault activation and seismicity associated with hydraulic fracturing stimulation of a shale-gas reservoir. We simulated a case in which a horizontal injection well intersects a steeply dipping fault, with hydraulic fracturing channeled within the fault, during a 3-hour hydraulic fracturing stage. Consistent with field observations, the simulation results show that shale-gas hydraulic fracturing along faults does not likely induce seismic events that could be felt on the ground surface, but rather results in numerous small microseismic events, as well as aseismic deformations along with the fracture propagation. The calculated seismic moment magnitudes ranged from about -2.0 to 0.5, except for one case assuming a very brittle fault with low residual shear strength, for which the magnitude was 2.3, an event that would likely go unnoticed or might be barely felt by humans at its epicenter. The calculated moment magnitudes showed a dependency on injection depth and fault dip. We attribute such dependency to variation in shear stress on the fault plane and associated variation in stress drop upon reactivation. Our simulations showed that at the end of the 3-hour injection, the rupture zone associated with tensile and shear failure extended to a maximum radius of about 200 m from the injection well. The results of this modeling study for steeply dipping faults at 1000 to 2500 m depth are in agreement with earlier studies and field observations showing that it is very unlikely that activation of a fault by shale-gas hydraulic fracturing at great depth (thousands of meters) could cause felt seismicity or create a new flow path (through fault rupture) that could reach shallow groundwater resources.

  15. Off-fault plasticity in three-dimensional dynamic rupture simulations using a modal Discontinuous Galerkin method on unstructured meshes: Implementation, verification, and application

    NASA Astrophysics Data System (ADS)

    Wollherr, Stephanie; Gabriel, Alice-Agnes; Uphoff, Carsten

    2018-05-01

    The dynamics and potential size of earthquakes depend crucially on rupture transfers between adjacent fault segments. To accurately describe earthquake source dynamics, numerical models can account for realistic fault geometries and rheologies such as nonlinear inelastic processes off the slip interface. We present implementation, verification, and application of off-fault Drucker-Prager plasticity in the open source software SeisSol (www.seissol.org). SeisSol is based on an arbitrary high-order derivative modal Discontinuous Galerkin (ADER-DG) method using unstructured, tetrahedral meshes specifically suited for complex geometries. Two implementation approaches are detailed, modelling plastic failure either by employing sub-elemental quadrature points or by switching to nodal basis coefficients. At fine fault discretizations the nodal basis approach is up to 6 times more efficient in terms of computational costs while yielding comparable accuracy. Both methods are verified in community benchmark problems and by three-dimensional numerical h- and p-refinement studies with heterogeneous initial stresses. We observe no spectral convergence for on-fault quantities with respect to a given reference solution, but rather discuss a limitation to low-order convergence for heterogeneous 3D dynamic rupture problems. For simulations including plasticity, a high fault resolution may be less crucial than commonly assumed, due to the regularization of peak slip rate and an increase of the minimum cohesive zone width. In large-scale dynamic rupture simulations based on the 1992 Landers earthquake, we observe high rupture complexity including reverse slip, direct branching, and dynamic triggering. The spatio-temporal distribution of rupture transfers is distinctly altered by plastic energy absorption, correlated with locations of geometrical fault complexity. Computational cost increases by 7% when accounting for off-fault plasticity in this demonstration application. Our results imply that the combination of fully 3D dynamic modelling, complex fault geometries, and off-fault plastic yielding is important to realistically capture dynamic rupture transfers in natural fault systems.
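
    The off-fault yielding discussed above follows a Drucker-Prager criterion, which in one common form reads F = sqrt(J2) + alpha * I1 - k with yielding when F > 0; the check below uses assumed alpha and k values rather than SeisSol's cohesion- and friction-angle-based parameters.

      import numpy as np

      def drucker_prager(stress, alpha=0.1, k=5e6):
          """Yield function for a 3x3 Cauchy stress tensor in Pa (compression negative)."""
          i1 = np.trace(stress)
          dev = stress - i1 / 3.0 * np.eye(3)
          j2 = 0.5 * np.sum(dev * dev)
          return np.sqrt(j2) + alpha * i1 - k

      # A deviatoric stress concentration (e.g. near a rupture front) drives F towards yield
      background = np.diag([-60e6, -50e6, -40e6])
      concentrated = background + np.diag([-30e6, 0.0, 30e6])
      for label, s in (("background", background), ("stress concentration", concentrated)):
          F = drucker_prager(s)
          print(label, "F =", F, "-> yields" if F > 0 else "-> elastic")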

  16. Epistemic uncertainty in California-wide synthetic seismicity simulations

    USGS Publications Warehouse

    Pollitz, Fred F.

    2011-01-01

    The generation of seismicity catalogs on synthetic fault networks holds the promise of providing key inputs into probabilistic seismic-hazard analysis, for example, the coefficient of variation, mean recurrence time as a function of magnitude, the probability of fault-to-fault ruptures, and conditional probabilities for foreshock–mainshock triggering. I employ a seismicity simulator that includes the following ingredients: static stress transfer, viscoelastic relaxation of the lower crust and mantle, and vertical stratification of elastic and viscoelastic material properties. A cascade mechanism combined with a simple Coulomb failure criterion is used to determine the initiation, propagation, and termination of synthetic ruptures. It is employed on a 3D fault network provided by Steve Ward (unpublished data, 2009) for the Southern California Earthquake Center (SCEC) Earthquake Simulators Group. This all-California fault network, initially consisting of 8000 patches, each of ∼12 square kilometers in size, has been rediscretized into ∼100,000 patches, each of ∼1 square kilometer in size, in order to simulate the evolution of California seismicity and crustal stress at magnitude M∼5–8. Resulting synthetic seismicity catalogs spanning 30,000 yr and about one-half million events are evaluated with magnitude-frequency and magnitude-area statistics. For a priori choices of fault-slip rates and mean stress drops, I explore the sensitivity of various constructs on input parameters, particularly mantle viscosity. Slip maps obtained for the southern San Andreas fault show that the ability of segment boundaries to inhibit slip across the boundaries (e.g., to prevent multisegment ruptures) is systematically affected by mantle viscosity.

  17. Epistemic uncertainty in California-wide synthetic seismicity simulations

    USGS Publications Warehouse

    Pollitz, F.F.

    2011-01-01

    The generation of seismicity catalogs on synthetic fault networks holds the promise of providing key inputs into probabilistic seismic-hazard analysis, for example, the coefficient of variation, mean recurrence time as a function of magnitude, the probability of fault-to-fault ruptures, and conditional probabilities for foreshock-mainshock triggering. I employ a seismicity simulator that includes the following ingredients: static stress transfer, viscoelastic relaxation of the lower crust and mantle, and vertical stratification of elastic and viscoelastic material properties. A cascade mechanism combined with a simple Coulomb failure criterion is used to determine the initiation, propagation, and termination of synthetic ruptures. It is employed on a 3D fault network provided by Steve Ward (unpublished data, 2009) for the Southern California Earthquake Center (SCEC) Earthquake Simulators Group. This all-California fault network, initially consisting of 8000 patches, each of ~12 square kilometers in size, has been rediscretized into ~100,000 patches, each of ~1 square kilometer in size, in order to simulate the evolution of California seismicity and crustal stress at magnitude M ~ 5-8. Resulting synthetic seismicity catalogs spanning 30,000 yr and about one-half million events are evaluated with magnitude-frequency and magnitude-area statistics. For a priori choices of fault-slip rates and mean stress drops, I explore the sensitivity of various constructs on input parameters, particularly mantle viscosity. Slip maps obtained for the southern San Andreas fault show that the ability of segment boundaries to inhibit slip across the boundaries (e.g., to prevent multisegment ruptures) is systematically affected by mantle viscosity.

  18. Analysis of Tax-deductible Interest Payments for Re-advanceable Canadian Mortgages

    NASA Astrophysics Data System (ADS)

    Naseem, Almas; Reesor, Mark

    2011-11-01

    According to Canadian tax law the interest on loans used for investment purposes is tax deductible while interest on personal mortgage loans is not. One way of transforming from non-tax deductible to tax deductible interest expenses is to borrow against home equity to make investments. A re-advanceable mortgage is a product specifically designed to take advantage of this tax discrepancy. Using simulation we study the risk associated with the re-advanceable mortgage strategy to provide a better description of the mortgagor's position. We assume that the mortgagor invests the borrowings secured by home equity into a single risky asset (e.g., stock or mutual fund) whose evolution is described by geometric Brownian motion (GBM). With a re-advanceable mortgage we find that the average mortgage payoff time is less than the original mortgage term. However, there is considerable variation in the payoff times with a significant probability of a payoff time exceeding the original mortgage term. Higher income homeowners enjoy a payoff time distribution with both a lower average and a lower standard deviation than low-income homeowners. Thus this strategy is most beneficial to those with the highest income. We also find this strategy protects the homeowner in the event of job loss. This work is important to lenders, financial planners and homeowners to more fully understand the benefits and risk associated with this strategy.
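
    A Monte Carlo sketch of the strategy is given below under strongly simplifying assumptions: each month the principal portion of the payment is re-borrowed and invested in a GBM asset, and the mortgage is treated as paid off once the investment account can retire the remaining balance. The rates, GBM parameters, and the payoff rule are illustrative choices, not those of the paper.

      import numpy as np

      rng = np.random.default_rng(42)

      def payoff_months(principal=300000.0, rate=0.05 / 12, years=25, mu=0.07, sigma=0.15, n_paths=2000):
          n = years * 12
          payment = principal * rate / (1.0 - (1.0 + rate) ** (-n))
          dt = 1.0 / 12.0
          months = np.full(n_paths, n)
          for p in range(n_paths):
              balance, invest = principal, 0.0
              for m in range(1, n + 1):
                  interest = balance * rate
                  balance -= payment - interest     # amortise the mortgage
                  invest += payment - interest      # re-borrow the principal portion and invest it
                  invest *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal())
                  if invest >= balance:             # liquidate and retire the mortgage early
                      months[p] = m
                      break
          return months

      m = payoff_months()
      print("mean payoff time (years):", round(m.mean() / 12.0, 1), " std:", round(m.std() / 12.0, 1))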

  19. Avionic Air Data Sensors Fault Detection and Isolation by means of Singular Perturbation and Geometric Approach

    PubMed Central

    2017-01-01

    Singular Perturbations represent an advantageous theory to deal with systems characterized by a two-time scale separation, such as the longitudinal dynamics of aircraft which are called phugoid and short period. In this work, the combination of the NonLinear Geometric Approach and the Singular Perturbations leads to an innovative Fault Detection and Isolation system dedicated to the isolation of faults affecting the air data system of a general aviation aircraft. The isolation capabilities, obtained by means of the approach proposed in this work, allow for the solution of a fault isolation problem otherwise not solvable by means of standard geometric techniques. Extensive Monte-Carlo simulations, exploiting a high fidelity aircraft simulator, show the effectiveness of the proposed Fault Detection and Isolation system. PMID:28946673

  20. A methodology towards virtualisation-based high performance simulation platform supporting multidisciplinary design of complex products

    NASA Astrophysics Data System (ADS)

    Ren, Lei; Zhang, Lin; Tao, Fei; (Luke) Zhang, Xiaolong; Luo, Yongliang; Zhang, Yabin

    2012-08-01

    Multidisciplinary design of complex products leads to an increasing demand for high performance simulation (HPS) platforms. One great challenge is how to achieve highly efficient utilisation of large-scale simulation resources in distributed and heterogeneous environments. This article reports a virtualisation-based methodology to realise a HPS platform. This research is driven by the issues concerning large-scale simulation resources deployment and complex simulation environment construction, efficient and transparent utilisation of fine-grained simulation resources and highly reliable simulation with fault tolerance. A framework of virtualisation-based simulation platform (VSIM) is first proposed. Then the article investigates and discusses key approaches in VSIM, including simulation resources modelling, a method for automatically deploying simulation resources for dynamic construction of the system environment, and a live migration mechanism in case of faults in run-time simulation. Furthermore, the proposed methodology is applied to a multidisciplinary design system for aircraft virtual prototyping and some experiments are conducted. The experimental results show that the proposed methodology can (1) significantly improve the utilisation of fine-grained simulation resources, (2) result in a great reduction in deployment time and an increased flexibility for simulation environment construction and (3) achieve fault-tolerant simulation.

  1. Automatic detection of electric power troubles (AI application)

    NASA Technical Reports Server (NTRS)

    Wang, Caroline; Zeanah, Hugh; Anderson, Audie; Patrick, Clint

    1987-01-01

    The design goal of the Automatic Detection of Electric Power Troubles (ADEPT) system was to enhance fault diagnosis techniques in a very efficient way. The ADEPT system was designed with two modes of operation: (1) real-time fault isolation, and (2) a local simulator which simulates the models theoretically.

  2. A Generalised Fault Protection Structure Proposed for Uni-grounded Low-Voltage AC Microgrids

    NASA Astrophysics Data System (ADS)

    Bui, Duong Minh; Chen, Shi-Lin; Lien, Keng-Yu; Jiang, Jheng-Lun

    2016-04-01

    This paper presents three main configurations of uni-grounded low-voltage AC microgrids. Transient situations of a uni-grounded low-voltage (LV) AC microgrid (MG) are simulated through various fault tests and operation transition tests between grid-connected and islanded modes. Based on transient simulation results, available fault protection methods are proposed for main and back-up protection of a uni-grounded AC microgrid. In addition, the concept of a generalised fault protection structure for uni-grounded LVAC MGs is presented in the paper. As a result, the main contributions of the paper are: (i) defining different uni-grounded LVAC MG configurations; (ii) analysing transient responses of a uni-grounded LVAC microgrid through line-to-line faults, line-to-ground faults, three-phase faults and a microgrid operation transition test; (iii) proposing available fault protection methods for uni-grounded microgrids, such as: non-directional or directional overcurrent protection, under/over voltage protection, differential current protection, voltage-restrained overcurrent protection, and other fault protection principles not based on phase currents and voltages (e.g. total harmonic distortion detection of currents and voltages, using sequence components of current and voltage, 3I0 or 3V0 components); and (iv) developing a generalised fault protection structure with six individual protection zones to be suitable for different uni-grounded AC MG configurations.
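
    One of the principles listed above, detection based on the zero-sequence current 3I0, can be sketched with the standard symmetrical-component transformation; the phasor values below are synthetic examples.

      import numpy as np

      a = np.exp(2j * np.pi / 3)                    # 120-degree rotation operator

      def sequence_components(ia, ib, ic):
          i0 = (ia + ib + ic) / 3.0                 # zero sequence
          i1 = (ia + a * ib + a**2 * ic) / 3.0      # positive sequence
          i2 = (ia + a**2 * ib + a * ic) / 3.0      # negative sequence
          return i0, i1, i2

      balanced = (100.0 + 0j, 100.0 * a**2, 100.0 * a)        # healthy feeder
      ground_fault = (400.0 + 0j, 100.0 * a**2, 100.0 * a)    # illustrative phase-A-to-ground fault

      for label, currents in (("healthy", balanced), ("A-g fault", ground_fault)):
          i0, _, _ = sequence_components(*currents)
          print(label, "3I0 magnitude =", round(abs(3 * i0), 1), "A")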

  3. Bond graph modeling and experimental verification of a novel scheme for fault diagnosis of rolling element bearings in special operating conditions

    NASA Astrophysics Data System (ADS)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2016-09-01

    Vibration analysis for diagnosis of faults in rolling element bearings is complicated when the rotor speed is variable or slow. In the former case, the time interval between the fault-induced impact responses in the vibration signal are non-uniform and the signal strength is variable. In the latter case, the fault-induced impact response strength is weak and generally gets buried in the noise, i.e. noise dominates the signal. This article proposes a diagnosis scheme based on a combination of a few signal processing techniques. The proposed scheme initially represents the vibration signal in terms of uniformly resampled angular position of the rotor shaft by using the interpolated instantaneous angular position measurements. Thereafter, intrinsic mode functions (IMFs) are generated through empirical mode decomposition (EMD) of resampled vibration signal which is followed by thresholding of IMFs and signal reconstruction to de-noise the signal and envelope order tracking to diagnose the faults. Data for validating the proposed diagnosis scheme are initially generated from a multi-body simulation model of rolling element bearing which is developed using bond graph approach. This bond graph model includes the ball and cage dynamics, localized fault geometry, contact mechanics, rotor unbalance, and friction and slip effects. The diagnosis scheme is finally validated with experiments performed with the help of a machine fault simulator (MFS) system. Some fault scenarios which could not be experimentally recreated are then generated through simulations and analyzed through the developed diagnosis scheme.
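
    The envelope-based part of such schemes can be sketched for the simpler constant-speed case: impacts at a bearing fault frequency excite a structural resonance, and the envelope spectrum of the vibration signal recovers that fault frequency. The frequencies, damping, and noise level below are assumed.

      import numpy as np
      from scipy.signal import hilbert

      fs, duration = 20000, 2.0
      t = np.arange(0, duration, 1 / fs)
      fault_freq, resonance = 37.0, 3000.0

      # Synthetic vibration: decaying resonant bursts repeating at the fault frequency, plus noise
      signal = np.zeros_like(t)
      for t_imp in np.arange(0, duration, 1 / fault_freq):
          idx = t >= t_imp
          signal[idx] += np.exp(-800 * (t[idx] - t_imp)) * np.sin(2 * np.pi * resonance * (t[idx] - t_imp))
      signal += 0.2 * np.random.default_rng(0).standard_normal(t.size)

      # Hilbert envelope and its spectrum; the dominant low-frequency line is the fault frequency
      envelope = np.abs(hilbert(signal))
      spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
      freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
      print("dominant envelope frequency:", round(freqs[np.argmax(spectrum[freqs < 200.0])], 1), "Hz")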

  4. Perspectives from deductible plan enrollees: plan knowledge and anticipated care-seeking changes.

    PubMed

    Reed, Mary; Benedetti, Nancy; Brand, Richard; Newhouse, Joseph P; Hsu, John

    2009-12-29

    Consumer directed health care proposes that patients will engage as informed consumers of health care services by sharing in more of their medical costs, often through deductibles. We examined knowledge of deductible plan details among new enrollees, as well as anticipated care-seeking changes in response to the deductible. In a large integrated delivery system with a range of deductible-based health plans which varied in services included or exempted from deductible, we conducted a mixed-method, cross-sectional telephone interview study. Among 458 adults newly enrolled in a deductible plan (71% response rate), 51% knew they had a deductible, 26% knew the deductible amount, and 6% knew which medical services were included or exempted from their deductible. After adjusting for respondent characteristics, those with more deductible-applicable services and those with lower self-reported health status were significantly more likely to know they had a deductible. Among those who knew of their deductible, half anticipated that it would cause them to delay or avoid medical care, including avoiding doctor's office visits and medical tests, even services that they believed were medically necessary. Many expressed concern about their costs, anticipating the inability to afford care and expressing the desire to change plans. Early in their experience with a deductible, patients had limited awareness of the deductible and little knowledge of the details. Many who knew of the deductible reported that it would cause them to delay or avoid seeking care and were concerned about their healthcare costs.

  5. Analysis on IGBT and Diode Failures in Distribution Electronic Power Transformers

    NASA Astrophysics Data System (ADS)

    Wang, Si-cong; Sang, Zi-xia; Yan, Jiong; Du, Zhi; Huang, Jia-qi; Chen, Zhu

    2018-02-01

    Fault characteristics of power electronic components are of great importance for a power electronic device, and of extraordinary importance for devices applied in power systems. The topology structures and control method of the Distribution Electronic Power Transformer (D-EPT) are introduced, and an exploration of fault types and fault characteristics for IGBT and diode failures is presented. The analysis and simulation of the fault characteristics of different fault types lead to a D-EPT fault location scheme.

  6. CO2 Push-Pull Dual (Conjugate) Faults Injection Simulations

    DOE Data Explorer

    Oldenburg, Curtis (ORCID:0000000201326016); Lee, Kyung Jae; Doughty, Christine; Jung, Yoojin; Borgia, Andrea; Pan, Lehua; Zhang, Rui; Daley, Thomas M.; Altundas, Bilgin; Chugunov, Nikita

    2017-07-20

    This submission contains datasets and a final manuscript associated with a project simulating carbon dioxide push-pull into a conjugate fault system modeled after Dixie Valley: sensitivity analysis of significant parameters and uncertainty prediction by data-worth analysis. Datasets include: (1) Forward simulation runs of standard cases (push & pull phases), (2) Local sensitivity analyses (push & pull phases), and (3) Data-worth analysis (push & pull phases).

  7. Re-Evaluation of Event Correlations in Virtual California Using Statistical Analysis

    NASA Astrophysics Data System (ADS)

    Glasscoe, M. T.; Heflin, M. B.; Granat, R. A.; Yikilmaz, M. B.; Heien, E.; Rundle, J.; Donnellan, A.

    2010-12-01

    Fusing the results of simulation tools with statistical analysis methods has contributed to our better understanding of the earthquake process. In a previous study, we used a statistical method to investigate emergent phenomena in data produced by the Virtual California earthquake simulator. The analysis indicated that there were some interesting fault interactions and possible triggering and quiescence relationships between events. We have converted the original code from Matlab to python/C++ and are now evaluating data from the most recent version of Virtual California in order to analyze and compare any new behavior exhibited by the model. The Virtual California earthquake simulator can be used to study fault and stress interaction scenarios for realistic California earthquakes. The simulation generates a synthetic earthquake catalog of events with a minimum size of ~M 5.8 that can be evaluated using statistical analysis methods. Virtual California utilizes realistic fault geometries and a simple Amontons - Coulomb stick and slip friction law in order to drive the earthquake process by means of a back-slip model where loading of each segment occurs due to the accumulation of a slip deficit at the prescribed slip rate of the segment. Like any complex system, Virtual California may generate emergent phenomena unexpected even by its designers. In order to investigate this, we have developed a statistical method that analyzes the interaction between Virtual California fault elements and thereby determine whether events on any given fault elements show correlated behavior. Our method examines events on one fault element and then determines whether there is an associated event within a specified time window on a second fault element. Note that an event in our analysis is defined as any time an element slips, rather than any particular “earthquake” along the entire fault length. Results are then tabulated and then differenced with an expected correlation, calculated by assuming a uniform distribution of events in time. We generate a correlation score matrix, which indicates how weakly or strongly correlated each fault element is to every other in the course of the VC simulation. We calculate correlation scores by summing the difference between the actual and expected correlations over all time window lengths and normalizing by the time window size. The correlation score matrix can focus attention on the most interesting areas for more in-depth analysis of event correlation vs. time. The previous study included 59 faults (639 elements) in the model, which included all the faults save the creeping section of the San Andreas. The analysis spanned 40,000 yrs of Virtual California-generated earthquake data. The newly revised VC model includes 70 faults, 8720 fault elements, and spans 110,000 years. Due to computational considerations, we will evaluate the elements comprising the southern California region, which our previous study indicated showed interesting fault interaction and event triggering/quiescence relationships.
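
    A minimal version of the scoring described above might look like the sketch below: count how often an event on one element is followed within a time window by an event on another, subtract the count expected for uniformly distributed events, and sum over the window lengths with a normalisation by window size. The event times are synthetic and boundary effects are ignored.

      import numpy as np

      def correlation_score(times_i, times_j, total_time, windows=(1.0, 2.0, 5.0, 10.0)):
          times_j = np.sort(np.asarray(times_j, dtype=float))
          score = 0.0
          for w in windows:
              actual = sum(np.any((times_j > t) & (times_j <= t + w)) for t in times_i)
              # expected number of "followed" events if times_j were uniform in time
              expected = len(times_i) * (1.0 - (1.0 - w / total_time) ** len(times_j))
              score += (actual - expected) / w
          return score

      elem_a = [100.0, 250.0, 400.0, 560.0]
      elem_b = [101.5, 252.0, 405.0, 700.0]       # mostly follows element A closely
      elem_c = [30.0, 180.0, 330.0, 480.0]        # unrelated timing
      print("score A->B:", round(correlation_score(elem_a, elem_b, 1000.0), 2))
      print("score A->C:", round(correlation_score(elem_a, elem_c, 1000.0), 2))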

  8. Reliability computation using fault tree analysis

    NASA Technical Reports Server (NTRS)

    Chelson, P. O.

    1971-01-01

    A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
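
    For the simplest case of independent basic events, probabilities propagate through AND and OR gates as shown below; the cited method goes further by deriving the system equation analytically and by handling standby redundancy and repeated basic events through conditional probabilities. The tree and the probabilities are hypothetical.

      def gate_and(*p):
          prob = 1.0
          for x in p:
              prob *= x
          return prob

      def gate_or(*p):
          prob = 1.0
          for x in p:
              prob *= 1.0 - x
          return 1.0 - prob

      # Hypothetical tree: TOP = (pump_A fails AND pump_B fails) OR controller fails
      pump_a, pump_b, controller = 0.02, 0.02, 0.001
      top = gate_or(gate_and(pump_a, pump_b), controller)
      print("top event probability:", round(top, 6))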

  9. Machine learning techniques for fault isolation and sensor placement

    NASA Technical Reports Server (NTRS)

    Carnes, James R.; Fisher, Douglas H.

    1993-01-01

    Fault isolation and sensor placement are vital for monitoring and diagnosis. A sensor conveys information about a system's state that guides troubleshooting if problems arise. We are using machine learning methods to uncover behavioral patterns over snapshots of system simulations that will aid fault isolation and sensor placement, with an eye towards minimality, fault coverage, and noise tolerance.

  10. The Microcomputer and Instruction in Geometry.

    ERIC Educational Resources Information Center

    Kantowski, Mary Grace

    1981-01-01

    The microcomputer has great potential for making high school geometry more stimulating and more easily understood by the students. The microcomputer can facilitate instruction in both the logico-deductive and spatial-visual aspects of geometry through graphics representations, simulation of motion, and its capability of interacting with the…

  11. Stacking fault energies and slip in nanocrystalline metals.

    PubMed

    Van Swygenhoven, H; Derlet, P M; Frøseth, A G

    2004-06-01

    The search for deformation mechanisms in nanocrystalline metals has profited from the use of molecular dynamics calculations. These simulations have revealed two possible mechanisms: grain boundary accommodation, and intragranular slip involving dislocation emission and absorption at grain boundaries. But the precise nature of the slip mechanism is the subject of considerable debate, and the limitations of the simulation technique need to be taken into consideration. Here we show, using molecular dynamics simulations, that the nature of slip in nanocrystalline metals cannot be described in terms of the absolute value of the stacking fault energy; a correct interpretation requires the generalized stacking fault energy curve, involving both stable and unstable stacking fault energies. The molecular dynamics technique does not at present allow for the determination of rate-limiting processes, so the use of our calculations in the interpretation of experiments has to be undertaken with care.

  12. 26 CFR 15.1-1 - Elections to deduct.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    .... (2) Election to deduct under section 615—(i) General rule. The election to deduct exploration... 26 Internal Revenue 14 2010-04-01 2010-04-01 false Elections to deduct. 15.1-1 Section 15.1-1... Elections to deduct. (a) Manner of making election—(1) Election to deduct under section 617(a). The election...

  13. 26 CFR 20.2053-10 - Deduction for certain foreign death taxes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 14 2010-04-01 2010-04-01 false Deduction for certain foreign death taxes. 20... § 20.2053-10 Deduction for certain foreign death taxes. (a) General rule. A deduction is allowed the... for foreign death taxes. (b) Condition for allowance of deduction. (1) The deduction is not allowed...

  14. 26 CFR 20.2053-10 - Deduction for certain foreign death taxes.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 14 2012-04-01 2012-04-01 false Deduction for certain foreign death taxes. 20... § 20.2053-10 Deduction for certain foreign death taxes. (a) General rule. A deduction is allowed the... for foreign death taxes. (b) Condition for allowance of deduction. (1) The deduction is not allowed...

  15. 26 CFR 20.2053-10 - Deduction for certain foreign death taxes.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 14 2011-04-01 2010-04-01 true Deduction for certain foreign death taxes. 20... § 20.2053-10 Deduction for certain foreign death taxes. (a) General rule. A deduction is allowed the... for foreign death taxes. (b) Condition for allowance of deduction. (1) The deduction is not allowed...

  16. Simulations of Brady's-Type Fault Undergoing CO2 Push-Pull: Pressure-Transient and Sensitivity Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, Yoojin; Doughty, Christine

    Input and output files used for fault characterization through numerical simulation using iTOUGH2. The synthetic data for the push period are generated by running a forward simulation (input parameters are provided in iTOUGH2 Brady GF6 Input Parameters.txt [InvExt6i.txt]). In general, the permeability of the fault gouge, damage zone, and matrix are assumed to be unknown. The input and output files are for the inversion scenario where only pressure transients are available at the monitoring well located 200 m above the injection well and only the fault gouge permeability is estimated. The input files are named InvExt6i, INPUT.tpl, FOFT.ins, and CO2TAB, and the output files are InvExt6i.out, pest.fof, and pest.sav (names below are display names). The table graphic in the data files below summarizes the inversion results, and indicates the fault gouge permeability can be estimated even if imperfect guesses are used for matrix and damage zone permeabilities, and permeability anisotropy is not taken into account.

  17. On the simultaneous inversion of micro-perforated panels' parameters: Application to single and double air-cavity backed systems.

    PubMed

    Tayong, Rostand B; Manyo Manyo, Jacques A; Siryabe, Emmanuel; Ntamack, Guy E

    2018-04-01

    This study deals with the deduction of parameters of Micro-Perforated Panel (MPP) systems from impedance tube data. It is shown that an ambiguity exists between the MPP thickness and its open area ratio. This ambiguity makes it difficult to invert the reflection coefficient data by fitting and therefore to deduce the MPP parameters. A technique is proposed to reduce the ambiguity by using an equation that links the hole diameter to the open area ratio. Reflection coefficient data obtained for two specimens with different characteristics are employed to search for the MPP parameters using a simulated annealing algorithm. The results obtained demonstrate the effectiveness of this technique.
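
    The search strategy described above can be illustrated with a toy inversion (none of the authors' model or data): a stand-in forward function replaces the MPP impedance model, synthetic "observations" are generated from known parameters, and simulated annealing is used to recover them. All function names, bounds, and values are hypothetical.

      import math
      import random

      random.seed(0)

      # Stand-in forward model: reflection-coefficient proxy from panel thickness t (m)
      # and open-area ratio sigma. NOT the real MPP impedance model; note that it
      # depends only on sigma/t, mimicking the thickness/open-area ambiguity
      # discussed in the abstract.
      def forward(t, sigma, freqs):
          return [1.0 / (1.0 + sigma * f / (t * 1.0e5)) for f in freqs]

      freqs = [200.0, 400.0, 800.0, 1600.0]
      observed = forward(1.0e-3, 0.05, freqs)          # synthetic data

      def misfit(params):
          t, sigma = params
          return sum((m - o) ** 2 for m, o in zip(forward(t, sigma, freqs), observed))

      def anneal(start, bounds, n_iter=20000, t0=1.0):
          current, best = list(start), list(start)
          f_cur = f_best = misfit(current)
          for k in range(n_iter):
              temp = max(t0 * (1.0 - k / n_iter), 1e-9)          # linear cooling
              cand = [min(max(v + random.gauss(0.0, 0.05) * (hi - lo), lo), hi)
                      for v, (lo, hi) in zip(current, bounds)]
              f_cand = misfit(cand)
              if f_cand < f_cur or random.random() < math.exp(-(f_cand - f_cur) / temp):
                  current, f_cur = cand, f_cand
                  if f_cur < f_best:
                      best, f_best = list(current), f_cur
          return best, f_best

      best, err = anneal([0.5e-3, 0.10], bounds=[(0.2e-3, 3.0e-3), (0.005, 0.30)])
      print("recovered (thickness, open-area ratio):", best, "misfit:", err)

    Because this toy forward model constrains only the ratio of the two parameters, many (thickness, open-area) pairs fit equally well; adding a constraint that links hole diameter to open-area ratio, as the paper proposes, is what removes that ambiguity.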

  18. A Computational Model of Coupled Multiphase Flow and Geomechanics to Study Fault Slip and Induced Seismicity

    NASA Astrophysics Data System (ADS)

    Juanes, R.; Jha, B.

    2014-12-01

    The coupling between subsurface flow and geomechanical deformation is critical in the assessment of the environmental impacts of groundwater use, underground liquid waste disposal, geologic storage of carbon dioxide, and exploitation of shale gas reserves. In particular, seismicity induced by fluid injection and withdrawal has emerged as a central element of the scientific discussion around subsurface technologies that tap into water and energy resources. Here we present a new computational approach to model coupled multiphase flow and geomechanics of faulted reservoirs. We represent faults as surfaces embedded in a three-dimensional medium by using zero-thickness interface elements to accurately model fault slip under dynamically evolving fluid pressure and fault strength. We incorporate the effect of fluid pressures from multiphase flow in the mechanical stability of faults and employ a rigorous formulation of nonlinear multiphase geomechanics that is capable of handling strong capillary effects. We develop a numerical simulation tool by coupling a multiphase flow simulator with a mechanics simulator, using the unconditionally stable fixed-stress scheme for the sequential solution of two-way coupling between flow and geomechanics. We validate our modeling approach using several synthetic, but realistic, test cases that illustrate the onset and evolution of earthquakes from fluid injection and withdrawal. We also present the application of the coupled flow-geomechanics simulation technology to the post mortem analysis of the Mw=5.1, May 2011 Lorca earthquake in south-east Spain, and assess the potential that the earthquake was induced by groundwater extraction.

  19. Computing elastic‐rebound‐motivated earthquake probabilities in unsegmented fault models: a new methodology supported by physics‐based simulators

    USGS Publications Warehouse

    Field, Edward H.

    2015-01-01

    A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt that recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
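
    A bare-bones sketch of the renewal-probability calculation referred to above (not the USGS methodology itself): given an assumed mean recurrence interval and aperiodicity, it estimates by Monte Carlo the probability that the next rupture falls within a forecast window, conditioned on the open interval since the last event. The lognormal choice and all parameter values are illustrative only.

      import math
      import random

      random.seed(1)

      def conditional_probability(mean, aperiodicity, elapsed, window, n=200000):
          # P(next rupture within `window` yr | `elapsed` yr since the last one),
          # using a lognormal renewal model as an illustrative choice.
          sigma = math.sqrt(math.log(1.0 + aperiodicity ** 2))
          mu = math.log(mean) - 0.5 * sigma ** 2
          hits = survivors = 0
          for _ in range(n):
              t = random.lognormvariate(mu, sigma)     # one simulated interval
              if t > elapsed:                          # survived to the present
                  survivors += 1
                  if t <= elapsed + window:
                      hits += 1
          return hits / survivors if survivors else float("nan")

      # Hypothetical fault section: 250-yr mean recurrence, aperiodicity 0.5,
      # 150 yr elapsed since the last rupture, 30-yr forecast window.
      print(conditional_probability(250.0, 0.5, 150.0, 30.0))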

  20. Dynamic 3D simulations of earthquakes on en echelon faults

    USGS Publications Warehouse

    Harris, R.A.; Day, S.M.

    1999-01-01

    One of the mysteries of earthquake mechanics is why earthquakes stop. This process determines the difference between small and devastating ruptures. One possibility is that fault geometry controls earthquake size. We test this hypothesis using a numerical algorithm that simulates spontaneous rupture propagation in a three-dimensional medium and apply our knowledge to two California fault zones. We find that the size difference between the 1934 and 1966 Parkfield, California, earthquakes may be the product of a stepover at the southern end of the 1934 earthquake and show how the 1992 Landers, California, earthquake followed physically reasonable expectations when it jumped across en echelon faults to become a large event. If there are no linking structures, such as transfer faults, then strike-slip earthquakes are unlikely to propagate through stepovers >5 km wide. Copyright 1999 by the American Geophysical Union.

  1. The development of an interim generalized gate logic software simulator

    NASA Technical Reports Server (NTRS)

    Mcgough, J. G.; Nemeroff, S.

    1985-01-01

    A proof-of-concept computer program called IGGLOSS (Interim Generalized Gate Logic Software Simulator) was developed and is discussed. The simulator engine was designed to perform stochastic estimation of self-test coverage (fault-detection latency times) of digital computers or systems. A major attribute of the IGGLOSS is its high-speed simulation: 9.5 x 10^6 gates/CPU second for nonfaulted circuits and 4.4 x 10^6 gates/CPU second for faulted circuits on a VAX 11/780 host computer.
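
    The record gives only throughput figures, but the underlying task can be sketched (this is not the IGGLOSS code): inject single stuck-at faults into a small combinational circuit and count how many random test vectors are applied before each fault first produces an output that differs from the fault-free circuit, i.e. a crude fault-detection latency estimate. The circuit, fault list, and pattern source are hypothetical.

      import random

      random.seed(2)

      # Tiny hypothetical circuit: out = (a AND b) OR (NOT c)
      GATES = [("n1", "AND", ("a", "b")),
               ("n2", "NOT", ("c",)),
               ("out", "OR", ("n1", "n2"))]

      def simulate(inputs, fault=None):
          # Evaluate the circuit; `fault` = (net, stuck_value) forces one net to a
          # constant whenever it is read (classic single stuck-at model).
          net = dict(inputs)
          def read(name):
              return fault[1] if fault and fault[0] == name else net[name]
          for out, op, ins in GATES:
              vals = [read(i) for i in ins]
              net[out] = {"AND": all, "OR": any}.get(op, lambda v: not v[0])(vals)
          return bool(read("out"))

      faults = [(n, v) for n in ("a", "b", "c", "n1", "n2", "out") for v in (0, 1)]
      latency = {}
      for f in faults:
          for k in range(1, 1001):                     # random test vectors
              vec = {x: random.randint(0, 1) for x in ("a", "b", "c")}
              if simulate(vec) != simulate(vec, fault=f):
                  latency[f] = k                       # first detecting vector
                  break
      print(latency)

    A full-scale simulator would do this concurrently for many faults and far larger circuits; averaging the first-detection counts over many trials gives a stochastic estimate of detection latency in the spirit of the abstract.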

  2. Design of penicillin fermentation process simulation system

    NASA Astrophysics Data System (ADS)

    Qi, Xiaoyu; Yuan, Zhonghu; Qi, Xiaoxuan; Zhang, Wenqi

    2011-10-01

    Real-time monitoring for batch process attracts increasing attention. It can ensure safety and provide products with consistent quality. The design of simulation system of batch process fault diagnosis is of great significance. In this paper, penicillin fermentation, a typical non-linear, dynamic, multi-stage batch production process, is taken as the research object. A visual human-machine interactive simulation software system based on Windows operation system is developed. The simulation system can provide an effective platform for the research of batch process fault diagnosis.

  3. The Virtual Quake Earthquake Simulator: Earthquake Probability Statistics for the El Mayor-Cucapah Region and Evidence of Predictability in Simulated Earthquake Sequences

    NASA Astrophysics Data System (ADS)

    Schultz, K.; Yoder, M. R.; Heien, E. M.; Rundle, J. B.; Turcotte, D. L.; Parker, J. W.; Donnellan, A.

    2015-12-01

    We introduce a framework for developing earthquake forecasts using Virtual Quake (VQ), the generalized successor to the perhaps better known Virtual California (VC) earthquake simulator. We discuss the basic merits and mechanics of the simulator, and we present several statistics of interest for earthquake forecasting. We also show that, though the system as a whole (in aggregate) behaves quite randomly, (simulated) earthquake sequences limited to specific fault sections exhibit measurable predictability in the form of increasing seismicity precursory to large m > 7 earthquakes. In order to quantify this, we develop an alert based forecasting metric similar to those presented in Keilis-Borok (2002); Molchan (1997), and show that it exhibits significant information gain compared to random forecasts. We also discuss the long standing question of activation vs quiescent type earthquake triggering. We show that VQ exhibits both behaviors separately for independent fault sections; some fault sections exhibit activation type triggering, while others are better characterized by quiescent type triggering. We discuss these aspects of VQ specifically with respect to faults in the Salton Basin and near the El Mayor-Cucapah region in southern California USA and northern Baja California Norte, Mexico.

  4. Study on conditional probability of surface rupture: effect of fault dip and width of seismogenic layer

    NASA Astrophysics Data System (ADS)

    Inoue, N.

    2017-12-01

    The conditional probability of surface rupture is affected by various factors, such as shallow material properties, the earthquake process, ground motions, and so on. Toda (2013) pointed out differences in the conditional probability between strike-slip and reverse faults when the fault dip and the width of the seismogenic layer are considered. This study evaluated the conditional probability of surface rupture using the following procedure. Fault geometry was determined from a randomly generated magnitude following the method of The Headquarters for Earthquake Research Promotion (2017). If the defined fault plane did not saturate the assumed width of the seismogenic layer, the fault plane depth was assigned randomly within the seismogenic layer. A logistic analysis was applied to two data sets: the surface displacement calculated by dislocation methods (Wang et al., 2003) from the defined source fault, and the depth of the top of the defined source fault. The conditional probability estimated from surface displacement is higher for reverse faults than for strike-slip faults, which coincides with previous similar studies (e.g., Kagawa et al., 2004; Kataoka and Kusakabe, 2005). On the contrary, the probability estimated from the depth of the source fault is higher for thrust faults than for strike-slip and reverse faults, a trend similar to the conditional probability obtained from PFDHA results (Youngs et al., 2003; Moss and Ross, 2011). The combined simulated results for thrust and reverse faults also show low probability. The worldwide compiled reverse-fault data include low-dip-angle earthquakes. On the other hand, for Japanese reverse faults, which include fewer low-dip-angle earthquakes, the conditional probability may be lower and similar to that of strike-slip faults (e.g., Takao et al., 2013). In the future, numerical simulations that consider the failure condition of the surface above the source fault should be performed in order to examine the amount of displacement and the conditional probability quantitatively.
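
    The logistic analysis mentioned above can be illustrated with a toy fit (none of the study's data or code): the probability of surface rupture is modeled as a logistic function of the depth to the top of the source fault and estimated by plain gradient descent on made-up labels.

      import math
      import random

      random.seed(4)

      # Made-up training data: (depth to fault top in km, 1 = surface rupture).
      data = []
      for _ in range(400):
          depth = random.uniform(0.0, 10.0)
          p_true = 1.0 / (1.0 + math.exp(-(2.0 - 0.8 * depth)))   # hidden relation
          data.append((depth, 1 if random.random() < p_true else 0))

      def sigmoid(z):
          return 1.0 / (1.0 + math.exp(-z))

      # Fit P(rupture | depth) = sigmoid(w0 + w1 * depth) by gradient descent.
      w0 = w1 = 0.0
      lr = 0.1
      for _ in range(5000):
          g0 = g1 = 0.0
          for depth, y in data:
              err = sigmoid(w0 + w1 * depth) - y
              g0 += err
              g1 += err * depth
          w0 -= lr * g0 / len(data)
          w1 -= lr * g1 / len(data)

      for depth in (1.0, 3.0, 6.0):
          print(f"fault top at {depth:.0f} km -> P(surface rupture) = "
                f"{sigmoid(w0 + w1 * depth):.2f}")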

  5. Simultaneous Sensor and Process Fault Diagnostics for Propellant Feed System

    NASA Technical Reports Server (NTRS)

    Cao, J.; Kwan, C.; Figueroa, F.; Xu, R.

    2006-01-01

    The main objective of this research is to extract fault features from sensor faults and process faults by using advanced fault detection and isolation (FDI) algorithms. A tank system that shares some characteristics with a NASA testbed at Stennis Space Center was used to verify the proposed algorithms. First, a generic tank system was modeled. Second, a mathematical model suitable for FDI was derived for the tank system. Third, a new and general FDI procedure was designed to distinguish process faults from sensor faults. Extensive simulations clearly demonstrated the advantages of the new design.

  6. Forecast model for great earthquakes at the Nankai Trough subduction zone

    USGS Publications Warehouse

    Stuart, W.D.

    1988-01-01

    An earthquake instability model is formulated for recurring great earthquakes at the Nankai Trough subduction zone in southwest Japan. The model is quasistatic, two-dimensional, and has a displacement and velocity dependent constitutive law applied at the fault plane. A constant rate of fault slip at depth represents forcing due to relative motion of the Philippine Sea and Eurasian plates. The model simulates fault slip and stress for all parts of repeated earthquake cycles, including post-, inter-, pre- and coseismic stages. Calculated ground uplift is in agreement with most of the main features of elevation changes observed before and after the M=8.1 1946 Nankaido earthquake. In model simulations, accelerating fault slip has two time-scales. The first time-scale is several years long and is interpreted as an intermediate-term precursor. The second time-scale is a few days long and is interpreted as a short-term precursor. Accelerating fault slip on both time-scales causes anomalous elevation changes of the ground surface over the fault plane of 100 mm or less within 50 km of the fault trace. ?? 1988 Birkha??user Verlag.

  7. Sensor Fault Detection and Diagnosis Simulation of a Helicopter Engine in an Intelligent Control Framework

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan; Kurtkaya, Mehmet; Duyar, Ahmet

    1994-01-01

    This paper presents an application of a fault detection and diagnosis scheme for the sensor faults of a helicopter engine. The scheme utilizes a model-based approach with real-time identification and hypothesis testing, which can provide early detection, isolation, and diagnosis of failures. It is an integral part of a proposed intelligent control system with health monitoring capabilities. The intelligent control system will allow for accommodation of faults, reduce maintenance cost, and increase system availability. The scheme compares the measured outputs of the engine with the expected outputs of an engine whose sensor suite is functioning normally. If the differences between the real and expected outputs exceed threshold values, a fault is detected. The isolation of sensor failures is accomplished through a fault parameter isolation technique in which parameters that model the faulty process are calculated on-line with a real-time multivariable parameter estimation algorithm. The fault parameters and their patterns can then be analyzed for diagnostic and accommodation purposes. The scheme is applied to the detection and diagnosis of sensor faults of a T700 turboshaft engine. Sensor failures are induced in a T700 nonlinear performance simulation, and the data obtained are used with the scheme to detect, isolate, and estimate the magnitude of the faults.
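
    A bare-bones version of the compare-and-threshold logic described above (not the T700 scheme itself) might look like the following; the healthy-engine model, the threshold, and the injected bias fault are all hypothetical.

      # Residual-based sensor fault detection sketch (hypothetical model/values).

      def expected_output(fuel_flow):
          # Stand-in for the healthy-engine model (e.g. a gas temperature map).
          return 300.0 + 45.0 * fuel_flow

      THRESHOLD = 25.0   # residual magnitude that triggers an alarm (assumed)

      def detect(measurements):
          # measurements: iterable of (time, fuel_flow, sensed_value)
          alarms = []
          for t, u, y in measurements:
              residual = y - expected_output(u)
              if abs(residual) > THRESHOLD:
                  alarms.append((t, round(residual, 1)))
          return alarms

      # Simulated data: a +60-unit sensor bias appears at t >= 5.
      data = [(t, 2.0, expected_output(2.0) + (60.0 if t >= 5 else 0.0))
              for t in range(10)]
      print(detect(data))   # alarms from t = 5 onward

    In the scheme summarized above, the residual pattern would then be passed to the on-line parameter-estimation stage to decide which sensor failed and by how much; this sketch stops at detection.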

  8. Nitsche Extended Finite Element Methods for Earthquake Simulation

    NASA Astrophysics Data System (ADS)

    Coon, Ethan T.

    Modeling earthquakes and geologically short-time-scale events on fault networks is a difficult problem with important implications for human safety and design. These problems demonstrate a rich physical behavior, in which distributed loading localizes both spatially and temporally into earthquakes on fault systems. This localization is governed by two aspects: friction and fault geometry. Computationally, these problems provide a stern challenge for modelers --- static and dynamic equations must be solved on domains with discontinuities on complex fault systems, and frictional boundary conditions must be applied on these discontinuities. The most difficult aspect of modeling physics on complicated domains is the mesh. Most numerical methods involve meshing the geometry; nodes are placed on the discontinuities, and edges are chosen to coincide with faults. The resulting mesh is highly unstructured, making the derivation of finite difference discretizations difficult. Therefore, most models use the finite element method. Standard finite element methods place requirements on the mesh for the sake of stability, accuracy, and efficiency. The formation of a mesh which both conforms to fault geometry and satisfies these requirements is an open problem, especially for three dimensional, physically realistic fault geometries. In addition, if the fault system evolves over the course of a dynamic simulation (i.e. in the case of growing cracks or breaking new faults), the geometry must be re-meshed at each time step. This can be expensive computationally. The fault-conforming approach is undesirable when complicated meshes are required, and impossible to implement when the geometry is evolving. Therefore, meshless and hybrid finite element methods that handle discontinuities without placing them on element boundaries are a desirable and natural way to discretize these problems. Several such methods are being actively developed for use in engineering mechanics involving crack propagation and material failure. While some theory and application of these methods exist, implementations for the simulation of networks of many cracks have not yet been considered. For my thesis, I implement and extend one such method, the eXtended Finite Element Method (XFEM), for use in static and dynamic models of fault networks. Once this machinery is developed, it is applied to open questions regarding the behavior of networks of faults, including questions of distributed deformation in fault systems and ensembles of magnitude, location, and frequency in repeat ruptures. The theory of XFEM is augmented to allow for solution of problems with alternating regimes of static solves for elastic stress conditions and short, dynamic earthquakes on networks of faults. This is accomplished using Nitsche's approach for implementing boundary conditions. Finally, an optimization problem is developed to determine tractions along the fault, enabling the calculation of frictional constraints and the rupture front. This method is verified via a series of static, quasistatic, and dynamic problems. Armed with this technique, we look at several problems regarding geometry within the earthquake cycle in which geometry is crucial. We first look at quasistatic simulations on a community fault model of Southern California, and model slip distribution across that system. We find the distribution of deformation across faults compares reasonably well with slip rates across the region, as constrained by geologic data.
We find geometry can provide constraints for friction, and consider the minimization of shear strain across the zone as a function of friction and plate loading direction, and infer bounds on fault strength in the region. Then we consider the repeated rupture problem, modeling the full earthquake cycle over the course of many events on several fault geometries. In this work, we look at distributions of events, studying the effect of geometry on statistical metrics of event ensembles. Finally, this thesis is a proof of concept for the XFEM on earthquake cycle models on fault systems. We identify strengths and weaknesses of the method, and identify places for future improvement. We discuss the feasibility of the method's use in three dimensions, and find the method to be a strong candidate for future crustal deformation simulations.

  9. Numerical Methods for the Analysis of Power Transformer Tank Deformation and Rupture Due to Internal Arcing Faults

    PubMed Central

    Yan, Chenguang; Hao, Zhiguo; Zhang, Song; Zhang, Baohui; Zheng, Tao

    2015-01-01

    Power transformer rupture and fire resulting from an arcing fault inside the tank usually leads to significant security risks and serious economic loss. In order to reveal the essence of tank deformation or explosion, this paper presents a 3-D numerical computational tool to simulate the structural dynamic behavior due to overpressure inside transformer tank. To illustrate the effectiveness of the proposed method, a 17.3MJ and a 6.3MJ arcing fault were simulated on a real full-scale 360MVA/220kV oil-immersed transformer model, respectively. By employing the finite element method, the transformer internal overpressure distribution, wave propagation and von-Mises stress were solved. The numerical results indicate that the increase of pressure and mechanical stress distribution are non-uniform and the stress tends to concentrate on connecting parts of the tank as the fault time evolves. Given this feature, it becomes possible to reduce the risk of transformer tank rupture through limiting the fault energy and enhancing the mechanical strength of the local stress concentrative areas. The theoretical model and numerical simulation method proposed in this paper can be used as a substitute for risky and costly field tests in fault overpressure analysis and tank mitigation design of transformers. PMID:26230392

  10. Numerical Methods for the Analysis of Power Transformer Tank Deformation and Rupture Due to Internal Arcing Faults.

    PubMed

    Yan, Chenguang; Hao, Zhiguo; Zhang, Song; Zhang, Baohui; Zheng, Tao

    2015-01-01

    Power transformer rupture and fire resulting from an arcing fault inside the tank usually leads to significant security risks and serious economic loss. In order to reveal the essence of tank deformation or explosion, this paper presents a 3-D numerical computational tool to simulate the structural dynamic behavior due to overpressure inside transformer tank. To illustrate the effectiveness of the proposed method, a 17.3 MJ and a 6.3 MJ arcing fault were simulated on a real full-scale 360MVA/220kV oil-immersed transformer model, respectively. By employing the finite element method, the transformer internal overpressure distribution, wave propagation and von-Mises stress were solved. The numerical results indicate that the increase of pressure and mechanical stress distribution are non-uniform and the stress tends to concentrate on connecting parts of the tank as the fault time evolves. Given this feature, it becomes possible to reduce the risk of transformer tank rupture through limiting the fault energy and enhancing the mechanical strength of the local stress concentrative areas. The theoretical model and numerical simulation method proposed in this paper can be used as a substitute for risky and costly field tests in fault overpressure analysis and tank mitigation design of transformers.

  11. Centrifugal compressor fault diagnosis based on qualitative simulation and thermal parameters

    NASA Astrophysics Data System (ADS)

    Lu, Yunsong; Wang, Fuli; Jia, Mingxing; Qi, Yuanchen

    2016-12-01

    This paper concerns fault diagnosis of a centrifugal compressor based on thermal parameters. An improved qualitative simulation (QSIM) based fault diagnosis method is proposed to diagnose the faults of a centrifugal compressor in a gas-steam combined-cycle power plant (CCPP). The qualitative models under normal and two faulty conditions have been built through analysis of the operating principles of the centrifugal compressor. To address the problem of qualitatively describing the observations of system variables, a qualitative trend extraction algorithm is applied to extract the trends of the observations. For qualitative state matching, a sliding-window based matching strategy is proposed that combines constraints on the variables' operating ranges with qualitative constraints. The matching results are used to determine which QSIM model is more consistent with the running state of the system. The correct diagnosis of two typical faults, seal leakage and a stuck valve, in the centrifugal compressor has validated the targeted performance of the proposed method, showing the advantage of the fault root causes being contained in the thermal parameters.
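
    The trend-extraction and matching steps can be caricatured in a few lines (this is not the paper's QSIM implementation): observations are mapped to qualitative trends (+, 0, -) and the dominant trend of each variable is matched against hypothetical qualitative signatures for normal operation, seal leakage, and a stuck valve.

      def qualitative_trend(signal, eps=0.01):
          # Map successive differences to qualitative values: +, 0, -.
          out = []
          for a, b in zip(signal, signal[1:]):
              d = b - a
              out.append("+" if d > eps else "-" if d < -eps else "0")
          return out

      def dominant(trends):
          return max(set(trends), key=trends.count) if trends else "0"

      # Hypothetical qualitative signatures: (discharge-pressure trend, flow trend)
      SIGNATURES = {
          "normal":       ("0", "0"),
          "seal leakage": ("-", "+"),
          "valve stuck":  ("-", "-"),
      }

      def match(pressure_window, flow_window):
          # Sliding-window style match of observed trends against the models.
          obs = (dominant(qualitative_trend(pressure_window)),
                 dominant(qualitative_trend(flow_window)))
          scores = {state: sum(o == s for o, s in zip(obs, sig))
                    for state, sig in SIGNATURES.items()}
          return max(scores, key=scores.get), scores

      pressure = [5.0, 4.9, 4.7, 4.4, 4.1]     # made-up falling discharge pressure
      flow     = [1.0, 1.05, 1.12, 1.2, 1.31]  # made-up rising leak-off flow
      print(match(pressure, flow))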

  12. A simulation of the San Andreas fault experiment

    NASA Technical Reports Server (NTRS)

    Agreen, R. W.; Smith, D. E.

    1973-01-01

    The San Andreas Fault Experiment, which employs two laser tracking systems for measuring the relative motion of two points on opposite sides of the fault, was simulated for an eight year observation period. The two tracking stations are located near San Diego on the western side of the fault and near Quincy on the eastern side; they are roughly 900 kilometers apart. Both will simultaneously track laser reflector equipped satellites as they pass near the stations. Tracking of the Beacon Explorer C Spacecraft was simulated for these two stations during August and September for eight consecutive years. An error analysis of the recovery of the relative location of Quincy from the data was made, allowing for model errors in the mass of the earth, the gravity field, solar radiation pressure, atmospheric drag, errors in the position of the San Diego site, and laser systems range biases and noise. The results of this simulation indicate that the distance of Quincy from San Diego will be determined each year with a precision of about 10 centimeters. This figure is based on the accuracy of earth models and other parameters available in 1972.

  13. Tsunami simulations of the 1867 Virgin Island earthquake: Constraints on epicenter location and fault parameters

    USGS Publications Warehouse

    Barkan, Roy; ten Brink, Uri S.

    2010-01-01

    The 18 November 1867 Virgin Island earthquake and the tsunami that closely followed caused considerable loss of life and damage in several places in the northeast Caribbean region. The earthquake was likely a manifestation of the complex tectonic deformation of the Anegada Passage, which cuts across the Antilles island arc between the Virgin Islands and the Lesser Antilles. In this article, we attempt to characterize the 1867 earthquake with respect to fault orientation, rake, dip, fault dimensions, and first tsunami wave propagating phase, using tsunami simulations that employ high-resolution multibeam bathymetry. In addition, we present new geophysical and geological observations from the region of the suggested earthquake source. Results of our tsunami simulations based on relative amplitude comparison limit the earthquake source to be along the northern wall of the Virgin Islands basin, as suggested by Reid and Taber (1920), or on the carbonate platform north of the basin, and not in the Virgin Islands basin, as commonly assumed. The numerical simulations suggest the 1867 fault was striking 120°–135° and had a mixed normal and left-lateral motion. First propagating wave phase analysis suggests a fault striking 300°–315° is also possible. The best-fitting rupture length was found to be relatively small (50 km), probably indicating the earthquake had a moment magnitude of ∼7.2. Detailed multibeam echo sounder surveys of the Anegada Passage bathymetry between St. Croix and St. Thomas reveal a scarp, which cuts the northern wall of the Virgin Islands basin. High-resolution seismic profiles further indicate it to be a reasonable fault candidate. However, the fault orientation and the orientation of other subparallel faults in the area are more compatible with right-lateral motion. For the other possible source region, no clear disruption in the bathymetry or seismic profiles was found on the carbonate platform north of the basin.

  14. Source parameters of the 2013 Lushan, Sichuan, Ms7.0 earthquake and estimation of the near-fault strong ground motion

    NASA Astrophysics Data System (ADS)

    Meng, L.; Zhou, L.; Liu, J.

    2013-12-01

    The April 20, 2013 Ms 7.0 earthquake in Lushan city, Sichuan province of China occurred as the result of east-west oriented reverse-type motion on a north-south striking fault. The source location suggests the event occurred on the southern part of the Longmenshan fault at a depth of 13 km. The Lushan earthquake caused great loss of property and 196 deaths. The maximum intensity reached VIII to IX at Boxing and Lushan city, which are located in the meizoseismal area. In this study, we first analyzed the dynamic source process, calculated the source spectral parameters, and estimated the near-fault strong ground motion based on Brune's circle model. A dynamical composite source model (DCSM) was then developed further to simulate the near-fault strong ground motion with associated fault rupture properties at Boxing and Lushan city, respectively. The results indicate frictional undershoot behavior in the dynamic source process of the Lushan earthquake, in contrast to the overshoot behavior of the Wenchuan earthquake. Based on the simulated near-fault strong ground motion, we described the intensity distribution of the Lushan earthquake field. The simulated intensity shows a maximum value of IX and a region of intensity VII and above covering almost 16,000 km2, consistent with the observed intensity published online by the China Earthquake Administration (CEA) on April 25. The estimation methods based on empirical relationships and the numerical modeling developed in this study have broad application in strong ground motion prediction and intensity estimation, both for earthquake rescue purposes and for understanding the earthquake source process. Keywords: Lushan Ms 7.0 earthquake; near-fault strong ground motion; DCSM; simulated intensity

  15. Application of an Integrated Assessment Model to the Kevin Dome site, Montana

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Minh; Zhang, Ye; Carey, James William

    The objectives of the Integrated Assessment Model are to enable the Fault Swarm algorithm in the National Risk Assessment Partnership (NRAP), ensure that faults are working in the NRAP-IAM tool, calculate hypothetical fault leakage in NRAP-IAM, and compare leakage rates to Eclipse simulations.

  16. 3D features of delayed thermal convection in fault zones: consequences for deep fluid processes in the Tiberias Basin, Jordan Rift Valley

    NASA Astrophysics Data System (ADS)

    Magri, Fabien; Möller, Sebastian; Inbar, Nimrod; Siebert, Christian; Möller, Peter; Rosenthal, Eliyahu; Kühn, Michael

    2015-04-01

    It has been shown that thermal convection in faults can also occur under subcritical Rayleigh conditions. This type of convection develops after a certain period and is referred to as "delayed convection" (Murphy, 1979). The delay in the onset is due to the heat exchange between the damage zone and the surrounding units, which adds a thermal buffer along the fault walls. Few numerical studies have investigated delayed thermal convection in fractured zones, even though it has the potential to transport energy and minerals over large spatial scales (Tournier, 2000). Here, 3D numerical simulations of thermally driven flow in faults are presented in order to investigate the impact of delayed convection on deep fluid processes at basin scale. The Tiberias Basin (TB), in the Jordan Rift Valley, serves as the study area. The TB is characterized by an upsurge of deep-seated hot waters along the faulted shores of Lake Tiberias and a high temperature gradient that can locally reach 46 °C/km, as in the Lower Yarmouk Gorge (LYG). The 3D simulations show that buoyant flow ascends in permeable faults whose hydraulic conductivity is estimated to vary between 30 m/yr and 140 m/yr. Delayed convection starts at 46 and 200 kyr, respectively, and generates temperature anomalies in agreement with observations. It turns out that the delayed convective cells are transient. Cellular patterns that initially develop in permeable units surrounding the faults can also trigger convection within the fault plane. The combination of these two convective modes leads to helicoidal-like flow patterns. This complex flow can explain the location of springs along different fault traces of the TB. Besides being important for understanding the hydrogeological processes of the TB (Magri et al., 2015), the presented simulations provide a scenario illustrating fault-induced 3D cells that could develop in any geothermal system. References: Magri, F., Inbar, N., Siebert, C., Rosenthal, E., Guttman, J., Möller, P., 2015. Transient simulations of large-scale hydrogeological processes causing temperature and salinity anomalies in the Tiberias Basin. Journal of Hydrology, 520, 342-355. Murphy, H.D., 1979. Convective instabilities in vertical fractures and faults. Journal of Geophysical Research: Solid Earth, 84(B11), 6121-6130. Tournier, C., Genthon, P., Rabinowicz, M., 2000. The onset of natural convection in vertical fault planes: consequences for the thermal regime in crystalline basements and for heat recovery experiments. Geophysical Journal International, 140(3), 500-508.

  17. An approach to secure weather and climate models against hardware faults

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; Dawson, Andrew

    2017-03-01

    Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelization to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. In this paper, we present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform model simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13 % for the shallow water model.
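
    The check-and-restore idea can be sketched for a one-dimensional prognostic field (this is not the authors' shallow-water code); the coarsening factor, the tolerance, and the way a corrupted value is flagged are assumptions.

      import numpy as np

      COARSEN = 4        # backup grid is 4x coarser (assumed)
      TOLERANCE = 1.0    # allowed |field - interpolated backup| (assumed)

      def make_backup(field):
          # Store a coarse copy of a prognostic field (simple subsampling).
          return field[::COARSEN].copy()

      def check_and_restore(field, backup):
          # Flag points that drifted implausibly far from the coarse copy and
          # overwrite them with values interpolated from the backup grid.
          x_fine = np.arange(field.size)
          reference = np.interp(x_fine, x_fine[::COARSEN], backup)
          corrupted = np.abs(field - reference) > TOLERANCE
          field[corrupted] = reference[corrupted]
          return int(corrupted.sum())

      # Toy prognostic variable on a 1-D grid, with one bit-flip-like corruption.
      h = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
      backup = make_backup(h)
      h[17] = 1.0e6                          # simulated hardware fault
      print(check_and_restore(h, backup))    # -> 1 grid point restored

    In a real dynamical core the backup copies would be refreshed every few time steps and the restore step would only be triggered when the fine and coarse fields disagree by far more than the truncation error of the coarsening, which is what keeps the overhead small.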

  18. An approach to secure weather and climate models against hardware faults

    NASA Astrophysics Data System (ADS)

    Düben, Peter; Dawson, Andrew

    2017-04-01

    Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelisation to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. We present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13% for the shallow water model.

  19. Latest Progress of Fault Detection and Localization in Complex Electrical Engineering

    NASA Astrophysics Data System (ADS)

    Zhao, Zheng; Wang, Can; Zhang, Yagang; Sun, Yi

    2014-01-01

    In research on complex electrical engineering, efficient fault detection and localization schemes are essential to quickly detect and locate faults so that appropriate and timely corrective, mitigating, and maintenance actions can be taken. In this paper, under the current measurement precision of PMUs, we put forward a new type of fault detection and localization technology based on fault factor feature extraction. Extensive simulation experiments indicate that, even with disturbances from white Gaussian stochastic noise, fault detection and localization based on the fault factor feature extraction principle remain accurate and reliable, which also shows that the technology has strong anti-interference ability and great redundancy.

  20. Geomechanical Modeling for Improved CO2 Storage Security

    NASA Astrophysics Data System (ADS)

    Rutqvist, J.; Rinaldi, A. P.; Cappa, F.; Jeanne, P.; Mazzoldi, A.; Urpi, L.; Vilarrasa, V.; Guglielmi, Y.

    2017-12-01

    This presentation summarizes recent modeling studies on geomechanical aspects of Geologic Carbon Sequestration (GCS), including modeling of potential fault reactivation, seismicity, and CO2 leakage. The model simulations demonstrate that the potential for fault reactivation and the resulting seismic magnitude, as well as the potential for creating a leakage path through overburden sealing layers (caprock), depend on a number of parameters such as fault orientation, stress field, and rock properties. The simulations further demonstrate that seismic events large enough to be felt by humans require brittle fault properties as well as continuous fault permeability that allows the pressure to be distributed over a large fault patch rupturing at once. Heterogeneous fault properties, which are commonly encountered in faults intersecting multilayered shale/sandstone sequences, effectively reduce the likelihood of inducing felt seismicity and also effectively impede upward CO2 leakage. Site-specific model simulations of the In Salah CO2 storage site showed that deep fractured-zone responses and associated seismicity occurred in the brittle fractured sandstone reservoir, but at a very substantial reservoir overpressure close to the magnitude of the least principal stress. It is suggested that coupled geomechanical modeling be used to guide site selection, to assist in identifying locations most prone to unwanted and damaging geomechanical changes, and to evaluate the potential consequences of such changes. Geomechanical modeling can be used to better estimate the maximum sustainable injection rate or reservoir pressure and thereby provide for improved CO2 storage security. Whether damaging geomechanical changes could actually occur depends very much on the local stress field and local reservoir properties, such as the presence of ductile rock and faults (which can aseismically accommodate the stress and strain induced by the injection) or, on the contrary, the presence of more brittle faults that, if critically stressed for shear, might be more prone to produce felt seismicity.
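
    As a small illustration of the kind of first-order screening such studies build on (not the coupled simulators used in the work above), the snippet below evaluates an effective-stress Coulomb criterion on a fault plane as reservoir pressure rises; the stress state, friction coefficient, and pressure scenarios are hypothetical.

      import math

      MU = 0.6          # assumed friction coefficient
      COHESION = 0.0    # MPa, assumed

      def reactivation_margin(sigma1, sigma3, dip_deg, pore_pressure):
          # Coulomb check on a fault plane (2-D principal stresses in MPa).
          # sigma1 is vertical (normal-faulting regime); dip measured from horizontal.
          beta = math.radians(90.0 - dip_deg)   # angle between plane normal and sigma1
          sn = 0.5 * (sigma1 + sigma3) + 0.5 * (sigma1 - sigma3) * math.cos(2.0 * beta)
          tau = 0.5 * (sigma1 - sigma3) * math.sin(2.0 * beta)
          strength = COHESION + MU * (sn - pore_pressure)   # effective-stress friction
          return strength - tau     # <= 0: plane is critically stressed for shear

      # Hypothetical fault: sigma1 = 35 MPa, sigma3 = 22 MPa, dip 60 degrees.
      for p in (15.0, 20.0, 25.0):   # rising reservoir pressure scenarios (MPa)
          margin = reactivation_margin(35.0, 22.0, 60.0, p)
          print(f"pore pressure {p:4.1f} MPa -> margin {margin:6.2f} MPa")

    The margin shrinks and eventually changes sign as pressure rises, which is the simple mechanism behind the statement that reactivation potential depends on fault orientation, stress field, and the injection-induced overpressure.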

  1. Relationship between displacement and gravity change of Uemachi faults and surrounding faults of Osaka basin, Southwest Japan

    NASA Astrophysics Data System (ADS)

    Inoue, N.; Kitada, N.; Kusumoto, S.; Itoh, Y.; Takemura, K.

    2011-12-01

    The Osaka basin, surrounded by the Rokko and Ikoma Ranges, is one of the typical Quaternary sedimentary basins in Japan. It has been filled by the Pleistocene Osaka group and later sediments. Several large cities and metropolitan areas, such as Osaka and Kobe, are located in the Osaka basin. The basin is bounded by E-W trending strike-slip faults and N-S trending reverse faults. The N-S trending, 42-km-long Uemachi faults traverse the central part of Osaka city. The Uemachi faults have been investigated for countermeasures against earthquake disaster. It is important to reveal detailed fault parameters, such as length, dip, and recurrence interval, for strong ground motion simulation and disaster prevention. For strong ground motion simulation, the fault model of the Uemachi faults consists of two parts, a northern and a southern part, because there is no basement displacement in the central part of the faults. The Ministry of Education, Culture, Sports, Science and Technology started a project to survey the Uemachi faults, and the Disaster Prevention Institute of Kyoto University carried out various surveys from 2009 to 2012, over three years. The result of the last year revealed higher fault activity on the branch fault than on the main faults in the central part (see poster "Subsurface Flexure of Uemachi Fault, Japan" by Kitada et al., in this meeting). Kusumoto et al. (2001) reported that the surrounding faults can form a similar basement relief without the Uemachi faults, based on a dislocation model. We performed various parameter studies of dislocation and gravity changes based on a simplified fault model designed from the distribution of the real faults. The model consisted of 7 faults including the Uemachi faults. The dislocation and gravity change were calculated following Okada et al. (1985) and Okubo et al. (1993), respectively. The results show a basement displacement pattern similar to that of Kusumoto et al. (2001) and no characteristic gravity change pattern. Quantitative estimation remains a subject for further work.

  2. Robust Fault Detection and Isolation for Stochastic Systems

    NASA Technical Reports Server (NTRS)

    George, Jemin; Gregory, Irene M.

    2010-01-01

    This paper outlines the formulation of a robust fault detection and isolation scheme that can precisely detect and isolate simultaneous actuator and sensor faults for uncertain linear stochastic systems. The given robust fault detection scheme based on the discontinuous robust observer approach would be able to distinguish between model uncertainties and actuator failures and therefore eliminate the problem of false alarms. Since the proposed approach involves precise reconstruction of sensor faults, it can also be used for sensor fault identification and the reconstruction of true outputs from faulty sensor outputs. Simulation results presented here validate the effectiveness of the robust fault detection and isolation system.

  3. Synthetic earthquake catalogs simulating seismic activity in the Corinth Gulf, Greece, fault system

    NASA Astrophysics Data System (ADS)

    Console, Rodolfo; Carluccio, Roberto; Papadimitriou, Eleftheria; Karakostas, Vassilis

    2015-01-01

    The characteristic earthquake hypothesis is the basis of time-dependent modeling of earthquake recurrence on major faults. However, this hypothesis is not strongly supported by observational data. Few fault segments have long historical or paleoseismic records of individually dated ruptures, and when data and parameter uncertainties are allowed for, the form of the recurrence distribution is difficult to establish. This is the case, for instance, for the Corinth Gulf Fault System (CGFS), for which documents about strong earthquakes exist for at least 2000 years, although they can be considered complete for M ≥ 6.0 only for the latest 300 years, during which only a few characteristic earthquakes are reported for individual fault segments. The use of a physics-based earthquake simulator has allowed the production of catalogs lasting 100,000 years and containing more than 500,000 events of magnitudes ≥ 4.0. The main features of our simulation algorithm are (1) an average slip rate released by earthquakes for every single segment in the investigated fault system, (2) heuristic procedures for rupture growth and stop, leading to a self-organized earthquake magnitude distribution, (3) the interaction between earthquake sources, and (4) the effect of minor earthquakes in redistributing stress. The application of our simulation algorithm to the CGFS has shown realistic features in the time, space, and magnitude behavior of the seismicity. These features include long-term periodicity of strong earthquakes, short-term clustering of both strong and smaller events, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the higher-magnitude range.

  4. Effect of water phase transition on dynamic ruptures with thermal pressurization: Numerical simulations with changes in physical properties of water

    NASA Astrophysics Data System (ADS)

    Urata, Yumi; Kuge, Keiko; Kase, Yuko

    2015-02-01

    Phase transitions of pore water have never been considered in dynamic rupture simulations with thermal pressurization (TP), although they may control TP. From numerical simulations of dynamic rupture propagation including TP, in the absence of any water phase transition process, we predict that frictional heating and TP are likely to change liquid pore water into supercritical water for a strike-slip fault under depth-dependent stress. This phase transition causes changes of a few orders of magnitude in viscosity, compressibility, and thermal expansion among physical properties of water, thus affecting the diffusion of pore pressure. Accordingly, we perform numerical simulations of dynamic ruptures with TP, considering physical properties that vary with the pressure and temperature of pore water on a fault. To observe the effects of the phase transition, we assume uniform initial stress and no fault-normal variations in fluid density and viscosity. The results suggest that the varying physical properties decrease the total slip in cases with high stress at depth and small shear zone thickness. When fault-normal variations in fluid density and viscosity are included in the diffusion equation, they activate TP much earlier than the phase transition. As a consequence, the total slip becomes greater than that in the case with constant physical properties, eradicating the phase transition effect. Varying physical properties do not affect the rupture velocity, irrespective of the fault-normal variations. Thus, the phase transition of pore water has little effect on dynamic ruptures. Fault-normal variations in fluid density and viscosity may play a more significant role.

  5. Elastic-wave propagation and site amplification in the Salt Lake Valley, Utah, from simulated normal faulting earthquakes

    USGS Publications Warehouse

    Benz, H.M.; Smith, R.B.

    1988-01-01

    The two-dimensional seismic response of the Salt Lake valley to near- and far-field earthquakes has been investigated from simulations of vertically incident plane waves and from normal-faulting earthquakes generated on the basin-bounding Wasatch fault. The plane-wave simulations were compared with observed site amplifications in the Salt Lake valley, based on seismic recordings from nuclear explosions in southern Nevada, which show 10 times greater amplification within the basin than measured values on hard-rock sites. Synthetic seismograms suggest that in the frequency band 0.3 to 1.5 Hz at least one-half of the site amplification can be attributed to the impedance contrast between the basin sediments and the higher velocity basement rocks. -from Authors

  6. 26 CFR 1.832-5 - Deductions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... TAXES (CONTINUED) Other Insurance Companies § 1.832-5 Deductions. (a) The deductions allowable are..., insurance companies are allowed a deduction for losses from capital assets sold or exchanged in order to... provisions of section 1212. The deduction is the same as that allowed mutual insurance companies subject to...

  7. 26 CFR 1.832-2 - Deductions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... TAXES (CONTINUED) Other Insurance Companies § 1.832-2 Deductions. (a) The deductions allowable are..., insurance companies are allowed a deduction for losses from capital assets sold or exchanged in order to... provisions of section 1212. The deduction is the same as that allowed mutual insurance companies subject to...

  8. 26 CFR 1.832-2 - Deductions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... TAXES (CONTINUED) Other Insurance Companies § 1.832-2 Deductions. (a) The deductions allowable are..., insurance companies are allowed a deduction for losses from capital assets sold or exchanged in order to... provisions of section 1212. The deduction is the same as that allowed mutual insurance companies subject to...

  9. 26 CFR 1.832-2 - Deductions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... TAXES (CONTINUED) Other Insurance Companies § 1.832-2 Deductions. (a) The deductions allowable are..., insurance companies are allowed a deduction for losses from capital assets sold or exchanged in order to... provisions of section 1212. The deduction is the same as that allowed mutual insurance companies subject to...

  10. 26 CFR 1.832-2 - Deductions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... TAXES (CONTINUED) Other Insurance Companies § 1.832-2 Deductions. (a) The deductions allowable are..., insurance companies are allowed a deduction for losses from capital assets sold or exchanged in order to... provisions of section 1212. The deduction is the same as that allowed mutual insurance companies subject to...

  11. Effects of fault dip and slip rake angles on near-source ground motions: Why rupture directivity was minimal in the 1999 Chi-Chi, Taiwan, earthquake

    USGS Publications Warehouse

    Aagaard, Brad T.; Hall, J.F.; Heaton, T.H.

    2004-01-01

    We study how the fault dip and slip rake angles affect near-source ground velocities and displacements as faulting transitions from strike-slip motion on a vertical fault to thrust motion on a shallow-dipping fault. Ground motions are computed for five fault geometries with different combinations of fault dip and rake angles and common values for the fault area and the average slip. The nature of the shear-wave directivity is the key factor in determining the size and distribution of the peak velocities and displacements. Strong shear-wave directivity requires that (1) the observer is located in the direction of rupture propagation and (2) the rupture propagates parallel to the direction of the fault slip vector. We show that predominantly along-strike rupture of a thrust fault (a geometry similar to that of the Chi-Chi earthquake) minimizes the area subjected to large-amplitude velocity pulses associated with rupture directivity, because the rupture propagates perpendicular to the slip vector; that is, the rupture propagates in the direction of a node in the shear-wave radiation pattern. In our simulations with a shallow hypocenter, the maximum peak-to-peak horizontal velocities exceed 1.5 m/sec over an area of only 200 km2 for the 30°-dipping fault (geometry similar to the Chi-Chi earthquake), whereas for the 60°- and 75°-dipping faults this velocity is exceeded over an area of 2700 km2. These simulations indicate that the area subjected to large-amplitude long-period ground motions would be larger for events of the same size as Chi-Chi that have different styles of faulting or a deeper hypocenter.

  12. "The Big One" in Taipei: Numerical Simulation Study of the Sanchiao Fault Earthquake Scenarios

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Lee, S.; Ng, S.

    2012-12-01

    The Sanchiao fault is a western boundary fault of the Taipei basin in northern Taiwan, close to the densely populated Taipei metropolitan area. According to the report of the Central Geological Survey, the terrestrial portion of the Sanchiao fault can be divided into north and south segments; the south segment is about 13 km long and the north segment about 21 km. A recent study demonstrated that about 40 km of the fault trace extends into the marine area offshore of northern Taiwan. Combining the marine and terrestrial parts, the total fault length of the Sanchiao fault could be nearly 70 kilometers. Based on the recipe proposed by Irikura and Miyake (2010), we estimate that the Sanchiao fault has the potential to produce an earthquake with moment magnitude larger than Mw 7.2. The total area of fault rupture is about 1323 km2, the asperity covers 22% of the total fault plane, and the slips of the asperity and background are 2.8 m and 1.6 m, respectively. Using a characteristic source model based on these assumptions, 3D spectral-element simulations indicate that peak ground acceleration (PGA) is significantly stronger along the surface fault rupture. Basin effects play an important role when waves propagate into the Taipei basin, amplifying the seismic waves and prolonging the shaking for a very long time. It is worth noting that when the rupture starts from the southern tip of the fault, i.e., the hypocenter is located in the basin, the impact of a Sanchiao fault earthquake on the Taipei metropolitan area will be most serious. The strong shaking can cover the entire Taipei city and even extend across the basin to the easternmost part of northern Taiwan.

  13. Data Files for Ground-Motion Simulations of the 1906 San Francisco Earthquake and Scenario Earthquakes on the Northern San Andreas Fault

    USGS Publications Warehouse

    Aagaard, Brad T.; Barall, Michael; Brocher, Thomas M.; Dolenc, David; Dreger, Douglas; Graves, Robert W.; Harmsen, Stephen; Hartzell, Stephen; Larsen, Shawn; McCandless, Kathleen; Nilsson, Stefan; Petersson, N. Anders; Rodgers, Arthur; Sjogreen, Bjorn; Zoback, Mary Lou

    2009-01-01

    This data set contains results from ground-motion simulations of the 1906 San Francisco earthquake, seven hypothetical earthquakes on the northern San Andreas Fault, and the 1989 Loma Prieta earthquake. The bulk of the data consists of synthetic velocity time-histories. Peak ground velocity on a 1/60th degree grid and geodetic displacements from the simulations are also included. Details of the ground-motion simulations and analysis of the results are discussed in Aagaard and others (2008a,b).

  14. 42 CFR 408.42 - Deduction from railroad retirement benefits.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Deduction from railroad retirement benefits. 408.42... § 408.42 Deduction from railroad retirement benefits. (a) Responsibility for deductions. If an enrollee is entitled to railroad retirement benefits, his or her SMI premiums are deducted from those benefits...

  15. 42 CFR 417.158 - Payroll deductions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Payroll deductions. 417.158 Section 417.158 Public....158 Payroll deductions. Each employing entity that provides payroll deductions as a means of paying... employee's contribution, if any, to be paid through payroll deductions. [59 FR 49841, Sept. 30, 1994] ...

  16. 26 CFR 1.832-2 - Deductions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... TAXES Other Insurance Companies § 1.832-2 Deductions. (a) The deductions allowable are specified in... provisions of section 1212. The deduction is the same as that allowed mutual insurance companies subject to... companies, other than mutual fire insurance companies described in § 1.831-1, are also allowed a deduction...

  17. Methodologies for Adaptive Flight Envelope Estimation and Protection

    NASA Technical Reports Server (NTRS)

    Tang, Liang; Roemer, Michael; Ge, Jianhua; Crassidis, Agamemnon; Prasad, J. V. R.; Belcastro, Christine

    2009-01-01

    This paper reports the latest development of several techniques for an adaptive flight envelope estimation and protection system for aircraft under damage and upset conditions. Through the integration of advanced fault detection algorithms, real-time system identification of the damaged/faulted aircraft, and flight envelope estimation, real-time decision support can be executed autonomously to improve damage tolerance and flight recoverability. In particular, a bank of adaptive nonlinear fault detection and isolation estimators was developed for flight control actuator faults; a real-time system identification method was developed for assessing the dynamics and performance limitations of the impaired aircraft; and online learning neural networks were used to approximate selected aircraft dynamics, which were then inverted to estimate command margins. As offline training of network weights is not required, the method has the advantage of adapting to varying flight conditions and different vehicle configurations. The key benefit of the envelope estimation and protection system is that it allows the aircraft to fly close to its limit boundary by constantly updating the controller command limits during flight. The developed techniques were demonstrated in NASA's Generic Transport Model (GTM) simulation environment with simulated actuator faults. Simulation results and remarks on future work are presented.
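    The record does not give the estimator equations; as a generic illustration of the residual-based idea behind a bank of actuator fault detection estimators, the sketch below propagates a nominal first-order actuator model alongside the measurement and declares a fault when the residual stays above a threshold. All model parameters, thresholds, and signals are illustrative assumptions, not values from the paper.

```python
# Illustrative residual-based actuator fault detector (not the paper's algorithm).
# A nominal first-order actuator model is propagated alongside the measurement;
# a persistent residual above a threshold is declared a fault.
import numpy as np

def detect_actuator_fault(cmd, meas, dt=0.02, tau=0.1, threshold=0.05, persist=10):
    """cmd, meas: arrays of commanded and measured actuator deflection (rad)."""
    est = meas[0]
    count = 0
    for k in range(1, len(cmd)):
        est += dt / tau * (cmd[k - 1] - est)     # assumed nominal actuator response
        residual = abs(meas[k] - est)
        count = count + 1 if residual > threshold else 0
        if count >= persist:                      # persistence check rejects noise spikes
            return k                              # sample index at which the fault is declared
    return None

# Toy usage: the actuator sticks at 0.1 rad after sample 200
t = np.arange(0, 10, 0.02)
cmd = 0.2 * np.sin(0.5 * t)
meas = cmd.copy()
meas[200:] = 0.1
print("fault declared at sample:", detect_actuator_fault(cmd, meas))
```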

  18. Fault geometric complexity and how it may cause temporal slip-rate variation within an interacting fault system

    NASA Astrophysics Data System (ADS)

    Zielke, Olaf; Arrowsmith, Ramon

    2010-05-01

    Slip-rates along individual faults may differ as a function of measurement time scale. Short-term slip-rates may be higher than the long-term rate and vice versa. For example, vertical slip-rates along the Wasatch Fault, Utah, are 1.7+/-0.5 mm/yr since 6 ka, <0.6 mm/yr since 130 ka, and 0.5-0.7 mm/yr since 10 Ma (Friedrich et al., 2003). Following conventional earthquake recurrence models such as the characteristic earthquake model, this observation implies that the driving strain accumulation rates may have changed over the respective time scales as well. While potential explanations for such slip-rate variations may be found, for example, in the reorganization of plate tectonic motion or mantle flow dynamics, causing changes in the crustal velocity field over long spatial wavelengths, no single geophysical explanation exists. Temporal changes in earthquake rate (i.e., event clustering) due to elastic interactions within a complex fault system may present an alternative explanation that requires neither variations in strain accumulation rate nor changes in fault constitutive behavior for frictional sliding. In this study, we explore this scenario and investigate how fault geometric complexity, fault segmentation, and fault (segment) interaction affect the seismic behavior and slip-rate along individual faults while keeping tectonic stressing rate and frictional behavior constant in time. For that, we used FIMozFric--a physics-based numerical earthquake simulator based on Okada's (1992) formulations for internal displacements and strains due to shear and tensile faults in a half-space. Faults are divided into a large number of equal-sized fault patches which communicate via elastic interaction, allowing implementation of geometrically complex, non-planar faults. Each patch is assigned a static and a dynamic friction coefficient. The difference between these values is a function of depth, corresponding to the temperature dependence of velocity weakening observed in laboratory friction experiments and expressed in the [a-b] term of rate-and-state friction (RSF) theory. Patches in the seismic zone are incrementally loaded during the interseismic phase. An earthquake initiates if the shear stress on at least one (seismic) patch exceeds its static frictional strength, and it may grow in size due to elastic interaction with other fault patches (static stress transfer). Aside from investigating slip-rate variations due to elastic interactions within a fault system with this tool, we want to show how such modeling results can be useful in exploring the physics underlying the patterns that paleoseismology sees, and how simulation and observation can be merged, with both making important contributions. Using FIMozFric, we generated synthetic seismic records for a large number of fault geometries and structural scenarios to investigate along-fault slip accumulation patterns and the variability of slip at a point. Our simulations show that fault geometric complexity and the accompanying fault interactions and multi-fault ruptures may cause temporal deviations from the average fault slip-rate, in other words phases of earthquake clustering or relative quiescence. Slip-rates along faults within an interacting fault system may change even when the loading function (stressing rate) remains constant, and the magnitude of slip-rate change is suggested to be proportional to the magnitude of fault interaction. Thus, spatially isolated and structurally mature faults are expected to experience smaller slip-rate changes than strongly interacting and less mature faults. The magnitude of slip-rate change may serve as a proxy for the magnitude of fault interaction and vice versa.
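    FIMozFric itself is not reproduced here, but the loading and failure logic described above (equal-sized patches, static and dynamic friction, cascading failure through static stress transfer) can be illustrated with a deliberately simplified toy model. The kernel below uses only nearest-neighbor stress transfer on a 1D patch array; all parameter values and the interaction rule are illustrative assumptions, not those of the simulator.

```python
# Minimal toy patch-based earthquake simulator (NOT FIMozFric): uniform tectonic loading,
# per-patch static/dynamic friction, and nearest-neighbor static stress transfer.
import numpy as np

rng = np.random.default_rng(0)
n = 200
tau_s = 1.0 + 0.1 * rng.random(n)                 # static strength per patch (heterogeneous)
tau_d = 0.6 * tau_s                               # dynamic strength after failure
stress = tau_d + (tau_s - tau_d) * rng.random(n)  # initial stress between the two levels
load_rate = 1e-3                                  # tectonic stressing rate per step
transfer = 0.4                                    # fraction of each stress drop sent to a neighbor

events = []
for step in range(50000):
    stress += load_rate                           # interseismic loading
    failing = np.where(stress >= tau_s)[0]
    if failing.size == 0:
        continue
    moment = 0.0
    while failing.size > 0:                       # cascade until no patch exceeds its strength
        for i in failing:
            drop = stress[i] - tau_d[i]
            moment += drop
            stress[i] = tau_d[i]
            if i > 0:
                stress[i - 1] += transfer * drop  # static stress transfer to neighbors
            if i < n - 1:
                stress[i + 1] += transfer * drop
        failing = np.where(stress >= tau_s)[0]
    events.append((step, moment))

moments = np.array([m for _, m in events])
print(f"{len(events)} events, largest relative moment {moments.max():.2f}")
```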

  19. Evolution of stacking fault tetrahedral and work hardening effect in copper single crystals

    NASA Astrophysics Data System (ADS)

    Liu, Hai Tao; Zhu, Xiu Fu; Sun, Ya Zhou; Xie, Wen Kun

    2017-11-01

    Stacking fault tetrahedra (SFT), generated during machining of copper single crystals as one type of subsurface defect, have a significant influence on the performance of the workpiece. In this study, molecular dynamics (MD) simulation is used to investigate the evolution of stacking fault tetrahedra in nano-cutting of copper single crystal. The results show that an SFT nucleates at the intersection of differently oriented stacking fault (SF) planes and evolves from a preform containing only incomplete surfaces into a solid defect. The evolution of the SFT involves several stress fluctuations until its formation is complete. A nano-indentation simulation is then performed on the workpiece machined by nano-cutting, through which the interaction between the SFT and later-formed dislocations in the subsurface is studied. Meanwhile, force-depth curves obtained from nano-indentation on pristine and machined workpieces are compared to analyze the mechanical properties. Through the simulations of nano-cutting and nano-indentation, it is verified that SFT is one cause of the work hardening effect.

  20. Runtime Speculative Software-Only Fault Tolerance

    DTIC Science & Technology

    2012-06-01

    reliability of RSFT, an in-depth analysis of its window of vulnerability is also discussed and measured via simulated fault injection. The performance...propagation of faults through the entire program. For optimal performance, these techniques have to use heroic alias analysis to find the minimum set of...affect program output. No program source code or alias analysis is needed to analyze the fault propagation ahead of time. 2.3 Limitations of Existing

  1. The 1999 Izmit, Turkey, earthquake: A 3D dynamic stress transfer model of intraearthquake triggering

    USGS Publications Warehouse

    Harris, R.A.; Dolan, J.F.; Hartleb, R.; Day, S.M.

    2002-01-01

    Before the August 1999 Izmit (Kocaeli), Turkey, earthquake, theoretical studies of earthquake ruptures and geological observations had provided estimates of how far an earthquake might jump to get to a neighboring fault. Both numerical simulations and geological observations suggested that 5 km might be the upper limit if there were no transfer faults. The Izmit earthquake appears to have followed these expectations. It did not jump across any step-over wider than 5 km and was instead stopped by a narrower step-over at its eastern end and possibly by a stress shadow caused by a historic large earthquake at its western end. Our 3D spontaneous rupture simulations of the 1999 Izmit earthquake provide two new insights: (1) the west- to east-striking fault segments of this part of the North Anatolian fault are oriented so as to be low-stress faults and (2) the easternmost segment involved in the August 1999 rupture may be dipping. An interesting feature of the Izmit earthquake is that a 5-km-long gap in surface rupture and an adjacent 25° restraining bend in the fault zone did not stop the earthquake. The latter observation is a warning that significant fault bends in strike-slip faults may not arrest future earthquakes.

  2. Model-based diagnosis through Structural Analysis and Causal Computation for automotive Polymer Electrolyte Membrane Fuel Cell systems

    NASA Astrophysics Data System (ADS)

    Polverino, Pierpaolo; Frisk, Erik; Jung, Daniel; Krysander, Mattias; Pianese, Cesare

    2017-07-01

    The present paper proposes an advanced approach for Polymer Electrolyte Membrane Fuel Cell (PEMFC) system fault detection and isolation through a model-based diagnostic algorithm. The considered algorithm is developed upon a lumped parameter model simulating a whole PEMFC system oriented towards automotive applications. This model is inspired by other models available in the literature, with further attention to stack thermal dynamics and water management. The developed model is analysed by means of Structural Analysis to identify the correlations among the involved physical variables, the defined equations, and a set of faults which may occur in the system (related to both auxiliary component malfunctions and stack degradation phenomena). Residual generators are designed by means of Causal Computation analysis, and the maximum theoretical fault isolability achievable with a minimal number of installed sensors is investigated. The results prove the capability of the algorithm to theoretically detect and isolate almost all faults using only stack voltage and temperature sensors, with significant advantages from an industrial point of view. The effective fault isolability is proved through fault simulations at a specific fault magnitude with an advanced residual evaluation technique, which considers quantitative residual deviations from normal conditions and achieves unambiguous fault isolation.
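    The paper's residual generators are derived from its PEMFC model and are not reproduced here; the sketch below only illustrates the general residual-evaluation and fault-signature matching idea used in model-based isolation. The sensor values, thresholds, and fault signatures are hypothetical.

```python
# Illustrative residual evaluation and fault-signature matching for model-based isolation
# (generic sketch; the paper's residual generators and thresholds are not reproduced here).
import numpy as np

def evaluate_residuals(measured, predicted, thresholds):
    """Return a boolean vector: True where a residual deviates abnormally from the model."""
    residuals = np.abs(np.asarray(measured) - np.asarray(predicted))
    return residuals > np.asarray(thresholds)

# Hypothetical fault signature matrix: rows = faults, columns = residuals.
# A '1' means that residual is expected to react to that fault.
signatures = {
    "compressor_fault": (1, 0, 1),
    "cooling_fault":    (0, 1, 1),
    "membrane_drying":  (1, 1, 0),
}

def isolate(fired):
    """Return the faults whose signature matches the fired residual pattern."""
    return [f for f, s in signatures.items() if tuple(int(b) for b in fired) == s]

# Toy usage with made-up stack voltage (V), stack temperature (K), and air flow (g/s) residuals
fired = evaluate_residuals(measured=[0.68, 342.0, 1.9],
                           predicted=[0.72, 341.5, 1.6],
                           thresholds=[0.02, 2.0, 0.2])
print("fired residuals:", fired, "-> candidate faults:", isolate(fired))
```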

  3. DEM Simulated Results And Seismic Interpretation of the Red River Fault Displacements in Vietnam

    NASA Astrophysics Data System (ADS)

    Bui, H. T.; Yamada, Y.; Matsuoka, T.

    2005-12-01

    The Song Hong basin is the largest Tertiary sedimentary basin in Vietnam. Its onset was approximately 32 Ma ago, when left-lateral displacement on the Red River Fault commenced. Much research on the structure, formation, and tectonic evolution of the Song Hong basin has been carried out, but several problems remain under discussion, such as the magnitude of the displacements, the magnitude of movement along the faults, and the timing of tectonic inversion and right-lateral displacement. In particular, the mechanism of Song Hong basin formation is still controversial, with many different hypotheses tied to the activity of the Red River fault. In this paper, PFC2D, based on the Distinct Element Method (DEM), was used to simulate the development of the Red River fault system that controlled the development of the Song Hong basin from onshore to its elongated offshore portion. The numerical results show the different parts of the stress field, such as the compressive field, the stress-free field, and the pull-apart field, of the dynamic mechanism along the Red River fault in the onshore area. The propagation into the offshore area is partitioned into two main branch faults, corresponding to the Song Chay and Song Lo fault systems, which are considered to bound the east and west flanks of the Song Hong basin. The simulation of the Red River motion also reproduced the left-lateral displacement since its onset. Although this is the first time the DEM method has been applied to study the deformation and geodynamic evolution of the Song Hong basin, the results can be reliably applied to evaluating the structural configuration of the basin.

  4. Dynamic earthquake rupture simulation on nonplanar faults embedded in 3D geometrically complex, heterogeneous Earth models

    NASA Astrophysics Data System (ADS)

    Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.

    2014-12-01

    Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated, far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a numerical method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along rough faults; c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th order accurate in the interior and 3rd order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.
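    The abstract mentions a slip-weakening friction law; the common linear form reduces the friction coefficient from a static to a dynamic value over a critical slip distance D_c. The sketch below uses illustrative parameter values (similar in spirit to SCEC benchmark settings, but not taken from this work).

```python
# Linear slip-weakening friction law (illustrative parameter values, not from this study).
# Strength drops linearly from the static to the dynamic level over a critical slip distance Dc.
def slip_weakening_strength(slip, sigma_n, mu_s=0.677, mu_d=0.525, d_c=0.40):
    """Fault shear strength (Pa) for a given slip (m) and effective normal stress sigma_n (Pa)."""
    mu = mu_d if slip >= d_c else mu_s - (mu_s - mu_d) * slip / d_c
    return mu * sigma_n

sigma_n = 120e6   # effective normal stress, Pa (illustrative)
for s in (0.0, 0.1, 0.2, 0.4, 1.0):
    print(f"slip = {s:.1f} m -> strength = {slip_weakening_strength(s, sigma_n) / 1e6:.1f} MPa")
```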

  5. Fault Analysis in a Grid Integrated DFIG Based Wind Energy System with NA CB_P Circuit for Ridethrough Capability and Power Quality Improvement

    NASA Astrophysics Data System (ADS)

    Swain, Snehaprava; Ray, Pravat Kumar

    2016-12-01

    In this paper a three-phase fault analysis is performed on a DFIG-based grid-integrated wind energy system. A Novel Active Crowbar Protection (NACB_P) system is proposed to enhance the fault ride-through (FRT) capability of the DFIG for both symmetrical and unsymmetrical grid faults, and hence to improve the power quality of the system. The protection scheme proposed here is designed with a capacitor in series with the resistor, unlike the conventional crowbar (CB) that has only resistors. The major function of the capacitor in the protection circuit is to eliminate the ripples generated in the rotor current and to protect the converter as well as the DC-link capacitor. It also compensates the reactive power required by the DFIG during the fault. Due to these advantages the proposed scheme enhances the FRT capability of the DFIG and also improves the power quality of the whole system. Experimentally, the fault analysis is carried out on a 3 hp slip-ring induction generator, and simulation results are obtained for a 1.7 MVA DFIG-based WECS under different types of grid faults in a MATLAB simulation environment, where the functionality of the proposed scheme is verified.

  6. Quasi-dynamic earthquake fault systems with rheological heterogeneity

    NASA Astrophysics Data System (ADS)

    Brietzke, G. B.; Hainzl, S.; Zoeller, G.; Holschneider, M.

    2009-12-01

    Seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they do not allow for physical statements about the described seismicity. In contrast to such empirical stochastic models, physics-based earthquake fault system models allow for physical reasoning about, and interpretation of, the produced seismicity and system dynamics. Recently, different fault system earthquake simulators based on frictional stick-slip behavior have been used to study the effects of stress heterogeneity, rheological heterogeneity, or geometrical complexity on earthquake occurrence, spatial and temporal clustering of earthquakes, and system dynamics. Here we present a comparison of the characteristics of synthetic earthquake catalogs produced by two different formulations of quasi-dynamic fault system earthquake simulators. Both models are based on discretized frictional faults embedded in an elastic half-space. While one (1) is governed by rate- and state-dependent friction allowing three evolutionary stages of independent fault patches, the other (2) is governed by instantaneous frictional weakening with scheduled (and therefore causal) stress transfer. We analyze spatial and temporal clustering of events and characteristics of system dynamics by means of the physical parameters of the two approaches.

  7. Fault tolerant operation of switched reluctance machine

    NASA Astrophysics Data System (ADS)

    Wang, Wei

    The energy crisis and environmental challenges have driven industry towards more energy-efficient solutions. With nearly 60% of electricity consumed by various electric machines in the industrial sector, advancement in the efficiency of the electric drive system is of vital importance. The adjustable speed drive system (ASDS) provides excellent speed regulation and dynamic performance as well as dramatically improved system efficiency compared with conventional motors without electronic drives. Industry has witnessed tremendous growth in ASDS applications, not only as a driving force but also as an electric auxiliary system replacing bulky and low-efficiency hydraulic and mechanical auxiliary systems. With the vast penetration of ASDS, fault tolerant operation capability is increasingly recognized as an important feature of drive performance, especially for aerospace, automotive, and other industrial drive applications demanding high reliability. The Switched Reluctance Machine (SRM), a low-cost, highly reliable electric machine with fault tolerant operation capability, has drawn substantial attention in the past three decades. Nevertheless, the SRM is not free of faults. Certain faults, such as converter faults, winding shorts, eccentricity, and position sensor faults, are common to all ASDS. In this dissertation, a thorough understanding of various faults and their influence on transient and steady state performance of the SRM is developed via simulation and experimental study, providing the knowledge necessary for fault detection and post-fault management. Lumped parameter models are established for fast real-time simulation and drive control. Based on the behavior of the faults, a fault detection scheme is developed for fast and reliable fault diagnosis. In order to improve the SRM power and torque capacity under faults, maximum torque per ampere excitation is conceptualized and validated through theoretical analysis and experiments. With the proposed optimal waveform, torque production is greatly improved under the same root mean square (RMS) current constraint. Additionally, position sensorless operation methods under phase faults are investigated to account for the combination of physical position sensor and phase winding faults. A comprehensive solution for position sensorless operation under single and multiple phase faults is proposed and validated through experiments. Continuous position sensorless operation with seamless transition between various numbers of faulted phases is achieved.

  8. The Fault Block Model: A novel approach for faulted gas reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ursin, J.R.; Moerkeseth, P.O.

    1994-12-31

    The Fault Block Model was designed for the development of gas production from Sleipner Vest. The reservoir consists of marginal marine sandstone of the Hugin Formation. Modeling of highly faulted and compartmentalized reservoirs is severely impeded by the nature and extent of known and undetected faults and, in particular, by their effectiveness as flow barriers. The model presented is efficient and, for highly faulted reservoirs, superior to other models such as grid-based simulators, because it minimizes the effect of major undetected faults and geological uncertainties. In this article the authors present the Fault Block Model as a new tool to better understand the implications of geological uncertainty in faulted gas reservoirs with good productivity, with respect to uncertainty in well coverage and optimum gas recovery.

  9. Rock mechanics. Superplastic nanofibrous slip zones control seismogenic fault friction.

    PubMed

    Verberne, Berend A; Plümper, Oliver; de Winter, D A Matthijs; Spiers, Christopher J

    2014-12-12

    Understanding the internal mechanisms controlling fault friction is crucial for understanding seismogenic slip on active faults. Displacement in such fault zones is frequently localized on highly reflective (mirrorlike) slip surfaces, coated with thin films of nanogranular fault rock. We show that mirror-slip surfaces developed in experimentally simulated calcite faults consist of aligned nanogranular chains or fibers that are ductile at room conditions. These microstructures and associated frictional data suggest a fault-slip mechanism resembling classical Ashby-Verrall superplasticity, capable of producing unstable fault slip. Diffusive mass transfer in nanocrystalline calcite gouge is shown to be fast enough for this mechanism to control seismogenesis in limestone terrains. With nanogranular fault surfaces becoming increasingly recognized in crustal faults, the proposed mechanism may be generally relevant to crustal seismogenesis. Copyright © 2014, American Association for the Advancement of Science.

  10. Computer simulation of earthquakes

    NASA Technical Reports Server (NTRS)

    Cohen, S. C.

    1976-01-01

    Two computer simulation models of earthquakes were studied for the dependence of the pattern of events on the model assumptions and input parameters. Both models represent the seismically active region by mechanical blocks which are connected to one another and to a driving plate. The blocks slide on a friction surface. The first model employed elastic forces and time-independent friction to simulate main shock events. The size, length, and time and place of event occurrence were influenced strongly by the magnitude and degree of homogeneity in the elastic and friction parameters of the fault region. Periodically recurring similar events were frequently observed in simulations with nearly homogeneous parameters along the fault, whereas seismic gaps were a common feature of simulations employing large variations in the fault parameters. The second model incorporated viscoelastic forces and time-dependent friction to account for aftershock sequences. The periods between aftershock events increased with time, and the aftershock region was confined to that which moved in the main event.
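    The block-and-plate construction described above is essentially a spring-block (Burridge-Knopoff style) model. The quasi-static toy below couples blocks to their neighbors and to a slowly moving driver plate with static/dynamic friction; all parameter values are illustrative and not taken from the report.

```python
# Toy spring-block (Burridge-Knopoff style) fault model: blocks coupled to their neighbors
# and to a steadily moving driver plate, with static/dynamic friction. Quasi-static version;
# all parameter values are illustrative, not from the report.
import numpy as np

rng = np.random.default_rng(1)
n = 100
kc = 1.0                                  # coupling spring between neighboring blocks
kp = 0.2                                  # leaf spring to the driver plate
f_static = 1.0 + 0.1 * rng.random(n)      # static friction threshold (slightly heterogeneous)
f_dynamic = 0.6                           # force level a slipping block relaxes to

x = np.zeros(n)                           # block positions
plate = 0.0
events = []

def forces(x, plate):
    left = np.roll(x, 1); right = np.roll(x, -1)
    left[0] = x[0]; right[-1] = x[-1]     # free ends
    return kc * (left - 2 * x + right) + kp * (plate - x)

for step in range(20000):
    plate += 1e-3                         # slow plate loading
    f = forces(x, plate)
    slipping = np.abs(f) > f_static
    size = 0
    while slipping.any():                 # cascade of block slips = one event
        idx = np.where(slipping)[0]
        # move each overloaded block so its net force relaxes to the dynamic level
        x[idx] += (f[idx] - np.sign(f[idx]) * f_dynamic) / (2 * kc + kp)
        size += len(idx)
        f = forces(x, plate)
        slipping = np.abs(f) > f_static
    if size:
        events.append(size)

print(f"{len(events)} events; largest event involved {max(events)} block slips")
```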

  11. A physics-based earthquake simulator and its application to seismic hazard assessment in Calabria (Southern Italy) region

    USGS Publications Warehouse

    Console, Rodolfo; Nardi, Anna; Carluccio, Roberto; Murru, Maura; Falcone, Giuseppe; Parsons, Thomas E.

    2017-01-01

    The use of a newly developed earthquake simulator has allowed the production of catalogs lasting 100 kyr and containing more than 100,000 events of magnitude ≥4.5. The model of the fault system to which we applied the simulator code was obtained from the DISS 3.2.0 database, selecting all the faults that are recognized in the Calabria region, for a total of 22 fault segments. The application of our simulation algorithm provides typical features in the time, space, and magnitude behavior of the seismicity, which can be compared with those of real observations. The results of the physics-based simulator algorithm were compared with those obtained by an alternative method using a slip-rate balanced technique. Finally, as an example of a possible use of synthetic catalogs, an attenuation law has been applied to all the events reported in the synthetic catalog to produce maps showing the exceedance probability of given values of PGA over the territory under investigation.
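    The record does not specify which attenuation law was applied; the sketch below only illustrates how an exceedance-probability estimate can be extracted from a long synthetic catalog, using a made-up attenuation relation and a toy catalog. Coefficients, magnitudes, and distances are all illustrative assumptions.

```python
# Illustrative hazard estimate from a synthetic catalog: probability that PGA at a site
# exceeds a threshold, using a generic attenuation relation (coefficients are made up;
# the record does not specify which attenuation law was used).
import numpy as np

def pga_attenuation(mag, dist_km):
    """Generic PGA (g) attenuation: log10(PGA) = a*M - b*log10(R + c) - d (illustrative)."""
    a, b, c, d = 0.5, 1.0, 10.0, 1.8
    return 10 ** (a * mag - b * np.log10(dist_km + c) - d)

# Toy synthetic catalog: (magnitude, epicentral distance to the site in km) over 100 kyr
rng = np.random.default_rng(2)
duration_yr = 100_000
mags = 4.5 + rng.exponential(0.5, size=5000)        # crude Gutenberg-Richter-like tail
dists = rng.uniform(5, 150, size=5000)

pga = pga_attenuation(mags, dists)
threshold = 0.2                                      # g
rate = np.sum(pga >= threshold) / duration_yr        # annual exceedance rate
p_50yr = 1 - np.exp(-rate * 50)                      # 50-year exceedance, Poisson assumption
print(f"annual rate = {rate:.2e}, 50-year exceedance probability = {p_50yr:.3f}")
```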

  12. 26 CFR 1.243-1 - Deduction for dividends received by corporations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 3 2010-04-01 2010-04-01 false Deduction for dividends received by corporations... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Special Deductions for Corporations § 1.243-1 Deduction for dividends received by corporations. (a)(1) A corporation is allowed a deduction under section 243 for...

  13. 26 CFR 1.172-1 - Net operating loss deduction.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 3 2010-04-01 2010-04-01 false Net operating loss deduction. 1.172-1 Section 1... operating loss deduction. (a) Allowance of deduction. Section 172(a) allows as a deduction in computing taxable income for any taxable year subject to the Code the aggregate of the net operating loss carryovers...

  14. 26 CFR 1.108-3 - Intercompany losses and deductions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 2 2010-04-01 2010-04-01 false Intercompany losses and deductions. 1.108-3... Intercompany losses and deductions. (a) General rule. This section applies to certain losses and deductions... attributes to which section 108(b) applies, a loss or deduction not yet taken into account under section 267...

  15. 26 CFR 1.812-2 - Operations loss deduction.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... (CONTINUED) INCOME TAXES Gain and Loss from Operations § 1.812-2 Operations loss deduction. (a) Allowance of deduction. Section 812 provides that a life insurance company shall be allowed a deduction in computing gain... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Operations loss deduction. 1.812-2 Section 1.812...

  16. Modeling of Stick-Slip Behavior in Sheared Granular Fault Gouge Using the Combined Finite-Discrete Element Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Ke; Euser, Bryan J.; Rougier, Esteban

    Sheared granular layers undergoing stick-slip behavior are broadly employed to study the physics and dynamics of earthquakes. In this paper, a two-dimensional implementation of the combined finite-discrete element method (FDEM), which merges the finite element method (FEM) and the discrete element method (DEM), is used to explicitly simulate a sheared granular fault system including both gouge and plate, and to investigate the influence of different normal loads on seismic moment, macroscopic friction coefficient, kinetic energy, gouge layer thickness, and recurrence time between slips. In the FDEM model, the deformation of plates and particles is simulated using the FEM formulation while particle-particle and particle-plate interactions are modeled using DEM-derived techniques. The simulated seismic moment distributions are generally consistent with those obtained from the laboratory experiments. In addition, the simulation results demonstrate that with increasing normal load, (i) the kinetic energy of the granular fault system increases; (ii) the gouge layer thickness shows a decreasing trend; and (iii) the macroscopic friction coefficient does not experience much change. Analyses of the slip events reveal that, as the normal load increases, more slip events with large kinetic energy release and longer recurrence time occur, and the magnitude of gouge layer thickness decrease also tends to be larger; while the macroscopic friction coefficient drop decreases. Finally, the simulations not only reveal the influence of normal loads on the dynamics of sheared granular fault gouge, but also demonstrate the capabilities of FDEM for studying stick-slip dynamic behavior of granular fault systems.

  17. Modeling of Stick-Slip Behavior in Sheared Granular Fault Gouge Using the Combined Finite-Discrete Element Method

    DOE PAGES

    Gao, Ke; Euser, Bryan J.; Rougier, Esteban; ...

    2018-06-20

    Sheared granular layers undergoing stick-slip behavior are broadly employed to study the physics and dynamics of earthquakes. In this paper, a two-dimensional implementation of the combined finite-discrete element method (FDEM), which merges the finite element method (FEM) and the discrete element method (DEM), is used to explicitly simulate a sheared granular fault system including both gouge and plate, and to investigate the influence of different normal loads on seismic moment, macroscopic friction coefficient, kinetic energy, gouge layer thickness, and recurrence time between slips. In the FDEM model, the deformation of plates and particles is simulated using the FEM formulation while particle-particle and particle-plate interactions are modeled using DEM-derived techniques. The simulated seismic moment distributions are generally consistent with those obtained from the laboratory experiments. In addition, the simulation results demonstrate that with increasing normal load, (i) the kinetic energy of the granular fault system increases; (ii) the gouge layer thickness shows a decreasing trend; and (iii) the macroscopic friction coefficient does not experience much change. Analyses of the slip events reveal that, as the normal load increases, more slip events with large kinetic energy release and longer recurrence time occur, and the magnitude of gouge layer thickness decrease also tends to be larger; while the macroscopic friction coefficient drop decreases. Finally, the simulations not only reveal the influence of normal loads on the dynamics of sheared granular fault gouge, but also demonstrate the capabilities of FDEM for studying stick-slip dynamic behavior of granular fault systems.

  18. Comparative Simulations of 2D and 3D Mixed Convection Flow in a Faulted Basin: an Example from the Yarmouk Gorge, Israel and Jordan

    NASA Astrophysics Data System (ADS)

    Magri, F.; Inbar, N.; Raggad, M.; Möller, S.; Siebert, C.; Möller, P.; Kuehn, M.

    2014-12-01

    Lake Kinneret (Lake Tiberias or the Sea of Galilee) is the most important freshwater reservoir in the northern Jordan Valley. Simulations that couple fluid flow, heat, and mass transport are built to understand the mechanisms responsible for the salinization of this important resource. Here the effects of the permeability distribution on 2D and 3D convective patterns are compared. 2D simulations indicate that the thermal brine in Haon and some springs in the Yarmouk Gorge (YG) are the result of mixed convection, i.e., the interaction between the regional flow from the bordering heights and thermally driven flow (Magri et al., 2014). Calibration of the calculated temperature profiles suggests that the faults in Haon and the YG provide paths for ascending hot waters, whereas the fault in the Golan recirculates water between 1 and 2 km depth. At greater depths, faults induce 2D layered convection in the surrounding units. The 2D assumption for a faulted basin can oversimplify the system, and the conclusions might not be fully correct. The 3D results also point to mixed convection as the main mechanism for the thermal anomalies. However, in 3D the convective structures are more complex, allowing for longer flow paths and residence times. In the fault planes, hydrothermal convection develops in a finger regime, enhancing inflow and outflow of heat in the system. Hot springs can form locally at the surface along the fault trace. By contrast, the layered cells extending from the faults into the surrounding sediments are preserved and are similar to those simulated in 2D. The results are consistent with the theory of Zhao et al. (2003), which predicts that 2D and 3D patterns have the same probability to develop given the permeability and temperature ranges encountered in geothermal fields. The 3D approach is to be preferred over the 2D one in order to capture all patterns of convective flow, particularly in the case of planar high-permeability regions such as faults. Magri, F., et al., 2014. Potential salinization mechanisms of drinking water due to large-scale flow of brines across faults in the Tiberias Basin. Geophysical Research Abstracts, Vol. 16, Abstract No. EGU2014-8236-1, Wien, Austria. Zhao, C., et al., 2003. Convective instability of 3-D fluid-saturated geological fault zones heated from below. Geophysical Journal International, 155, 213-220.

  19. Deductibles in health insurance: can the actuarially fair premium reduction exceed the deductible?

    PubMed

    Bakker, F M; van Vliet, R C; van de Ven, W P

    2000-09-01

    The actuarially fair premium reduction in case of a deductible relative to full insurance is affected by: (1) out-of-pocket payments, (2) moral hazard, (3) administrative costs, and, in case of a voluntary deductible, (4) adverse selection. Both the partial effects and the total effect of these factors are analyzed. Moral hazard and adverse selection appear to have a substantial effect on the expected health care costs above a deductible but a small effect on the expected out-of-pocket expenditure. A premium model indicates that for a broad range of deductible amounts the actuarially fair premium reduction exceeds the deductible.

  20. Experimental study on propagation of fault slip along a simulated rock fault

    NASA Astrophysics Data System (ADS)

    Mizoguchi, K.

    2015-12-01

    Around pre-existing geological faults in the crust, we often observe an off-fault damage zone in which there are many fractures at various scales, from ~mm to ~m, whose density typically increases with proximity to the fault. One of the fracture formation processes is considered to be dynamic shear rupture propagation on the faults, which leads to the occurrence of earthquakes. Here, I have conducted experiments on the propagation of fault slip along a pre-cut rock surface to investigate the damaging behavior of rocks during slip propagation. For the experiments, I used a pair of metagabbro blocks from Tamil Nadu, India, whose contacting surfaces simulate a fault 35 cm in length and 1 cm in width. The experiments were done with a uniaxial loading configuration similar to that of Rosakis et al. (2007). The axial load σ is applied so that the fault plane makes an angle of 60° with the loading direction. When σ is 5 kN, the normal and shear stresses on the fault are 1.25 MPa and 0.72 MPa, respectively. The timing and direction of slip propagation on the fault during the experiments were monitored with several strain gauges arrayed at intervals along the fault. The gauge data were digitally recorded with a 1 MHz sampling rate and 16-bit resolution. When σ = 4.8 kN is applied, we observed several fault slip events in which slip nucleates spontaneously in a subsection of the fault and propagates over the whole fault. However, the propagation speed is about 1.2 km/s, much lower than the S-wave velocity of the rock. This indicates that the slip events were not earthquake-like dynamic rupture events. More effort is needed to reproduce earthquake-like slip events in these experiments. This work is supported by JSPS KAKENHI (26870912).
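    The quoted on-fault stresses follow from resolving the axial load onto the inclined fault plane. The check below assumes the 35 cm x 1 cm pre-cut surface is the load-bearing contact area (an assumption, since the specimen cross-section is not given in the record); it reproduces the 1.25 MPa and 0.72 MPa values to within rounding.

```python
# Check of the quoted on-fault stresses by resolving the axial load onto the fault plane.
# Assumption: the 35 cm x 1 cm pre-cut surface is the load-bearing contact area
# (the specimen cross-section is not given in the record).
import math

F = 5000.0                      # axial load, N
theta = math.radians(60.0)      # angle between the fault plane and the loading direction
area = 0.35 * 0.01              # fault contact area, m^2 (35 cm x 1 cm)

# The loading axis makes an angle of 90 deg - theta = 30 deg with the fault-plane normal.
normal_force = F * math.sin(theta)   # component perpendicular to the fault plane
shear_force = F * math.cos(theta)    # component parallel to the fault plane

print(f"normal stress ~ {normal_force / area / 1e6:.2f} MPa")   # ~1.24 MPa (quoted: 1.25)
print(f"shear stress  ~ {shear_force / area / 1e6:.2f} MPa")    # ~0.71 MPa (quoted: 0.72)
```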

  1. An Integrated Crustal Dynamics Simulator

    NASA Astrophysics Data System (ADS)

    Xing, H. L.; Mora, P.

    2007-12-01

    Numerical modelling offers an outstanding opportunity to gain an understanding of crustal dynamics and complex crustal system behaviour. This presentation describes our long-term and ongoing effort in finite element based computational modelling and software development to simulate interacting fault systems for earthquake forecasting. An R-minimum strategy based finite-element computational model and software tool, PANDAS, for modelling 3-dimensional nonlinear frictional contact behaviour between multiple deformable bodies with an arbitrarily-shaped contact element strategy has been developed by the authors. It builds up a virtual laboratory to simulate interacting fault systems including crustal boundary conditions and various nonlinearities (e.g. from frictional contact, materials, geometry and thermal coupling). It has been successfully applied to large scale computing of complex nonlinear phenomena in non-continuum media involving nonlinear frictional instability, multiple material properties and complex geometries on supercomputers, such as the South Australia (SA) interacting fault system, the Southern California fault model and the Sumatra subduction model. It has also been extended to simulate the hot fractured rock (HFR) geothermal reservoir system, in collaboration with Geodynamics Ltd, which is constructing the first geothermal reservoir system in Australia, and to model tsunami generation induced by earthquakes. Both efforts are supported by the Australian Research Council.

  2. Simulation of flow in the Edwards Aquifer, San Antonio region, Texas, and refinement of storage and flow concepts

    USGS Publications Warehouse

    Maclay, Robert W.; Land, Larry F.

    1988-01-01

    The Edwards aquifer is a complexly faulted, carbonate aquifer lying within the Balcones fault zone of south-central Texas. The aquifer consists of thin- to massive-bedded limestone and dolomite, most of which is in the form of mudstones and wackestones. Well-developed secondary porosity has formed in association with former erosional surfaces within the carbonate rocks, within dolomitized-burrowed tidal and evaporitic deposits, and along inclined fractures to produce an aquifer with transmissivities greater than 100 ft2/s. The aquifer is recharged mainly by streamflow losses in the outcrop area of the Edwards aquifer and is discharged by major springs located at considerable distances, as much as 150 mi, from the areas of recharge and by wells. Ground-water flow within the Edwards aquifer of the San Antonio region was simulated to investigate concepts relating to the storage and flow characteristics. The concepts of major interest were the effects of barrier faults on flow direction, water levels, springflow, and storage within the aquifer. A general-purpose, finite-difference model, modified to provide the capability of representing barrier faults, was used to simulate ground-water flow and storage in the aquifer. The approach in model development was to conduct a series of simulations beginning with a simple representation of the aquifer framework and then proceeding to subsequent representations of increasing complexity. The simulations investigated the effects of complex geologic structures and of significant changes in transmissivity, anisotropy, and storage coefficient. Initial values of transmissivity, anisotropy, and storage coefficient were estimated based on concepts developed in previous studies. Results of the simulations confirmed the original estimates of transmissivity values (greater than 100 square feet/s) in the confined zone of the aquifer between San Antonio and Comal Springs. A storage coefficient of 0.05 in the unconfined zone of the aquifer produced the best simulation of water levels and springflow. A major interpretation resulting from the simulations is that two essentially independent areas of regional flow were identified in the west and central part of the study area. Flows from the two areas converge at Comal Springs. The directions of computed flux vectors reflected the presence of major barrier faults, which locally deflect patterns of ground-water movement. The most noticeable deflection is the convergence of flow through a geologic structural opening, the Knippa gap, in eastern Uvalde County. A second significant interpretation is that ground-water flow in northeastern Bexar, Comal, and Hays Counties is diverted by barrier faults toward San Marcos Springs, a regional discharge point. Simulations showed that several barrier faults in the northwestern part of the San Antonio area had a significant effect on storage, water levels, and springflow within the Edwards aquifer.

  3. The evolving energy budget of accretionary wedges

    NASA Astrophysics Data System (ADS)

    McBeck, Jessica; Cooke, Michele; Maillot, Bertrand; Souloumiac, Pauline

    2017-04-01

    The energy budget of evolving accretionary systems reveals how deformational processes partition energy as faults slip, topography uplifts, and layer-parallel shortening produces distributed off-fault deformation. The energy budget provides a quantitative framework for evaluating the energetic contribution or consumption of diverse deformation mechanisms. We investigate energy partitioning in evolving accretionary prisms by synthesizing data from physical sand accretion experiments and numerical accretion simulations. We incorporate incremental strain fields and cumulative force measurements from two suites of experiments to design numerical simulations that represent accretionary wedges with stronger and weaker detachment faults. One suite of the physical experiments includes a basal glass bead layer and the other does not. Two physical experiments within each suite implement different boundary conditions (stable base versus moving base configuration). Synthesizing observations from the differing base configurations reduces the influence of sidewall friction because the force vector produced by sidewall friction points in opposite directions depending on whether the base is fixed or moving. With the numerical simulations, we calculate the energy budget at two stages of accretion: at the maximum force preceding the development of the first thrust pair, and at the minimum force following the development of the pair. To identify the appropriate combination of material and fault properties to apply in the simulations, we systematically vary the Young's modulus and the fault static and dynamic friction coefficients in numerical accretion simulations, and identify the set of parameters that minimizes the misfit between the normal force measured on the physical backwall and the numerically simulated force. Following this derivation of the appropriate material and fault properties, we calculate the components of the work budget in the numerical simulations and in the simulated increments of the physical experiments. The work budget components of the physical experiments are determined from backwall force measurements and incremental velocity fields calculated via digital image correlation. Comparison of the energy budget preceding and following the development of the first thrust pair quantifies the tradeoff between work done in distributed deformation and work expended in frictional slip due to the development of the first backthrust and forethrust. In both the numerical and physical experiments, after the pair develops, internal work decreases at the expense of frictional work, which increases. Despite the increase in frictional work, the total external work of the system decreases, revealing that accretion faulting leads to gains in efficiency. Comparison of the energy budgets of the accretion experiments and simulations with the strong and weak detachments indicates that when the detachment is strong, the total energy consumed in frictional sliding and internal deformation is larger than when the detachment is relatively weak.
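    The calibration step described above (systematically varying the Young's modulus and the static and dynamic fault friction coefficients to minimize the misfit with the measured backwall force) can be expressed as a simple grid search. In the sketch below the wedge simulator is stood in for by a hypothetical placeholder function, and all parameter values are illustrative.

```python
# Illustrative grid search for the calibration step described above: vary Young's modulus
# and fault friction coefficients, and keep the combination that minimizes the misfit
# between a simulated backwall force history and the measured one. The numerical wedge
# model is stood in for by a hypothetical placeholder `run_wedge_model`.
import itertools
import numpy as np

def run_wedge_model(youngs_modulus, mu_static, mu_dynamic):
    """Placeholder for the numerical accretion simulation; returns a force history (N)."""
    shortening = np.linspace(0.0, 0.05, 50)
    return youngs_modulus * 1e-6 * shortening * (1 + mu_static - 0.5 * mu_dynamic)

measured_force = run_wedge_model(5e9, 0.6, 0.45)   # stand-in for the experimental record

best = None
for E, mu_s, mu_d in itertools.product([1e9, 5e9, 10e9], [0.4, 0.6, 0.8], [0.3, 0.45, 0.6]):
    if mu_d >= mu_s:                 # dynamic friction must be below static friction
        continue
    misfit = np.sum((run_wedge_model(E, mu_s, mu_d) - measured_force) ** 2)
    if best is None or misfit < best[0]:
        best = (misfit, E, mu_s, mu_d)

print("best-fit parameters (misfit, E, mu_s, mu_d):", best)
```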

  4. Fault activation and induced seismicity in geological carbon storage – Lessons learned from recent modeling studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rutqvist, Jonny; Rinaldi, Antonio P.; Cappa, Frederic

    In the light of current concerns related to induced seismicity associated with geological carbon sequestration (GCS), this paper summarizes lessons learned from recent modeling studies on fault activation, induced seismicity, and potential for leakage associated with deep underground carbon dioxide (CO2) injection. Model simulations demonstrate that seismic events large enough to be felt by humans require brittle fault properties and continuous fault permeability allowing pressure to be distributed over a large fault patch to be ruptured at once. Heterogeneous fault properties, which are commonly encountered in faults intersecting multilayered shale/sandstone sequences, effectively reduce the likelihood of inducing felt seismicity and also effectively impede upward CO2 leakage. A number of simulations show that even a sizable seismic event that could be felt may not be capable of opening a new flow path across the entire thickness of an overlying caprock and it is very unlikely to cross a system of multiple overlying caprock units. Site-specific model simulations of the In Salah CO2 storage demonstration site showed that deep fractured zone responses and associated microseismicity occurred in the brittle fractured sandstone reservoir, but at a very substantial reservoir overpressure close to the magnitude of the least principal stress. We conclude by emphasizing the importance of site investigation to characterize rock properties and if at all possible to avoid brittle rock such as proximity of crystalline basement or sites in hard and brittle sedimentary sequences that are more prone to injection-induced seismicity and permanent damage.

  5. Fault activation and induced seismicity in geological carbon storage – Lessons learned from recent modeling studies

    DOE PAGES

    Rutqvist, Jonny; Rinaldi, Antonio P.; Cappa, Frederic; ...

    2016-09-20

    In the light of current concerns related to induced seismicity associated with geological carbon sequestration (GCS), this paper summarizes lessons learned from recent modeling studies on fault activation, induced seismicity, and potential for leakage associated with deep underground carbon dioxide (CO2) injection. Model simulations demonstrate that seismic events large enough to be felt by humans require brittle fault properties and continuous fault permeability allowing pressure to be distributed over a large fault patch to be ruptured at once. Heterogeneous fault properties, which are commonly encountered in faults intersecting multilayered shale/sandstone sequences, effectively reduce the likelihood of inducing felt seismicity and also effectively impede upward CO2 leakage. A number of simulations show that even a sizable seismic event that could be felt may not be capable of opening a new flow path across the entire thickness of an overlying caprock and it is very unlikely to cross a system of multiple overlying caprock units. Site-specific model simulations of the In Salah CO2 storage demonstration site showed that deep fractured zone responses and associated microseismicity occurred in the brittle fractured sandstone reservoir, but at a very substantial reservoir overpressure close to the magnitude of the least principal stress. We conclude by emphasizing the importance of site investigation to characterize rock properties and if at all possible to avoid brittle rock such as proximity of crystalline basement or sites in hard and brittle sedimentary sequences that are more prone to injection-induced seismicity and permanent damage.

  6. 42 CFR 408.45 - Deduction from age 72 special payments.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Deduction from age 72 special payments. 408.45... § 408.45 Deduction from age 72 special payments. (a) Deduction of premiums. SMI premiums are deducted from age 72 special payments made under section 228 of the Act or the payments are withheld under...

  7. 26 CFR 20.2053-9 - Deduction for certain State death taxes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 14 2010-04-01 2010-04-01 false Deduction for certain State death taxes. 20... § 20.2053-9 Deduction for certain State death taxes. (a) General rule. A deduction is allowed a... death taxes. However, see section 2058 to determine the deductibility of state death taxes by estates to...

  8. 26 CFR 1.642(g)-2 - Deductions included.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 8 2014-04-01 2014-04-01 false Deductions included. 1.642(g)-2 Section 1.642(g... (CONTINUED) INCOME TAXES (CONTINUED) Estates, Trusts, and Beneficiaries § 1.642(g)-2 Deductions included. It...(g) is applicable be treated in the same way. One deduction or portion of a deduction may be allowed...

  9. 26 CFR 1.642(g)-2 - Deductions included.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 8 2013-04-01 2013-04-01 false Deductions included. 1.642(g)-2 Section 1.642(g... (CONTINUED) INCOME TAXES (CONTINUED) Estates, Trusts, and Beneficiaries § 1.642(g)-2 Deductions included. It...(g) is applicable be treated in the same way. One deduction or portion of a deduction may be allowed...

  10. 26 CFR 1.642(g)-2 - Deductions included.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 8 2011-04-01 2011-04-01 false Deductions included. 1.642(g)-2 Section 1.642(g... (CONTINUED) INCOME TAXES (CONTINUED) Estates, Trusts, and Beneficiaries § 1.642(g)-2 Deductions included. It...(g) is applicable be treated in the same way. One deduction or portion of a deduction may be allowed...

  11. 26 CFR 1.642(g)-2 - Deductions included.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 8 2012-04-01 2012-04-01 false Deductions included. 1.642(g)-2 Section 1.642(g... (CONTINUED) INCOME TAXES (CONTINUED) Estates, Trusts, and Beneficiaries § 1.642(g)-2 Deductions included. It...(g) is applicable be treated in the same way. One deduction or portion of a deduction may be allowed...

  12. 26 CFR 1.642(g)-2 - Deductions included.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Deductions included. 1.642(g)-2 Section 1.642(g... (CONTINUED) INCOME TAXES Estates, Trusts, and Beneficiaries § 1.642(g)-2 Deductions included. It is not required that the total deductions, or the total amount of any deduction, to which section 642(g) is...

  13. 42 CFR 408.45 - Deduction from age 72 special payments.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 2 2013-10-01 2013-10-01 false Deduction from age 72 special payments. 408.45... § 408.45 Deduction from age 72 special payments. (a) Deduction of premiums. SMI premiums are deducted from age 72 special payments made under section 228 of the Act or the payments are withheld under...

  14. 26 CFR 20.2053-9 - Deduction for certain State death taxes.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 14 2012-04-01 2012-04-01 false Deduction for certain State death taxes. 20... § 20.2053-9 Deduction for certain State death taxes. (a) General rule. A deduction is allowed a... death taxes. However, see section 2058 to determine the deductibility of state death taxes by estates to...

  15. 26 CFR 20.2053-9 - Deduction for certain State death taxes.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 14 2011-04-01 2010-04-01 true Deduction for certain State death taxes. 20.2053....2053-9 Deduction for certain State death taxes. (a) General rule. A deduction is allowed a decedent's....2011-2 for the effect which the allowance of this deduction has upon the credit for State death taxes...

  16. Tools for Evaluating Fault Detection and Diagnostic Methods for HVAC Secondary Systems

    NASA Astrophysics Data System (ADS)

    Pourarian, Shokouh

    Although modern buildings use increasingly sophisticated energy management and control systems with tremendous control and monitoring capabilities, building systems routinely fail to perform as designed. More advanced building control, operation, and automated fault detection and diagnosis (AFDD) technologies are needed to achieve the goal of net-zero energy commercial buildings. Much effort has been devoted to developing such technologies for primary heating, ventilating, and air conditioning (HVAC) systems and some secondary systems. However, secondary systems such as fan coil units and dual duct systems, although widely used in commercial, industrial, and multifamily residential buildings, have received very little attention. This research study aims at developing tools that provide simulation capabilities to develop and evaluate advanced control, operation, and AFDD technologies for these less studied secondary systems. In this study, HVACSIM+ is selected as the simulation environment. Besides developing dynamic models for the above-mentioned secondary systems, two other issues related to the HVACSIM+ environment are also investigated. One issue is the nonlinear equation solver used in HVACSIM+ (Powell's Hybrid method in subroutine SNSQ). It has been found from several previous research projects (ASHRAE RP-825 and RP-1312) that SNSQ is especially unstable at the beginning of a simulation and is sometimes unable to converge to a solution. Another issue is related to the zone model in the HVACSIM+ library of components. Dynamic simulation of secondary HVAC systems unavoidably requires a zone model that interacts dynamically with the building's surroundings. Therefore, the accuracy and reliability of the building zone model affect the operational data generated by the developed dynamic tool for predicting the behavior of HVAC secondary systems. The available model does not simulate the impact of direct solar radiation entering a zone through glazing, so the zone model study is conducted in this direction to modify the existing model. In this research project, the following tasks are completed and summarized in this report: 1. Develop dynamic simulation models in the HVACSIM+ environment for common fan coil unit and dual duct system configurations; the developed simulation models are able to produce both fault-free and faulty operational data under a wide variety of faults and severity levels for advanced control, operation, and AFDD technology development and evaluation purposes; 2. Develop a model structure, which includes the grouping of blocks and superblocks, treatment of state variables, initial and boundary conditions, and selection of equation solver, that can simulate a dual duct system efficiently with satisfactory stability; 3. Design and conduct a comprehensive and systematic validation procedure using collected experimental data to validate the developed simulation models under both fault-free and faulty operational conditions; 4. Conduct a numerical study to compare two solution techniques, Powell's Hybrid (PH) and Levenberg-Marquardt (LM), in terms of their robustness and accuracy; 5. Modify the thermal state calculation of the existing building zone model in the HVACSIM+ library of components. This component is revised to treat heat transmitted through glazing as a heat source for transient building zone load prediction. In this report, the literature, including existing HVAC dynamic modeling environments and models, HVAC model validation methodologies, and fault modeling and validation methodologies, is reviewed. The overall methodologies used for fault-free and fault model development and validation are introduced. Detailed model development and validation results for the two secondary systems, i.e., the fan coil unit and the dual duct system, are summarized. Experimental data, mostly from the Iowa Energy Center Energy Resource Station, are used to validate the models developed in this project. Satisfactory model performance in both fault-free and fault simulation studies is observed for all studied systems.
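    Task 4 above compares Powell's Hybrid and Levenberg-Marquardt solvers. As a small, generic illustration of that kind of comparison (not the HVACSIM+ or SNSQ code), the sketch below solves the same toy nonlinear system with both methods through SciPy and reports function-evaluation counts and residual norms.

```python
# Small generic comparison of Powell's Hybrid ('hybr') and Levenberg-Marquardt ('lm')
# on the same nonlinear system using SciPy (illustrative; not the HVACSIM+ solver code).
import numpy as np
from scipy.optimize import root

def residuals(x):
    """A toy coupled nonlinear system standing in for a component network."""
    return np.array([
        x[0] ** 2 + x[1] - 3.0,
        x[0] + np.exp(-x[1]) - 1.5,
    ])

x0 = np.array([0.1, 0.1])   # deliberately poor initial guess
for method in ("hybr", "lm"):
    sol = root(residuals, x0, method=method)
    print(f"{method:>4}: success={sol.success}, nfev={sol.nfev}, "
          f"|r|={np.linalg.norm(residuals(sol.x)):.2e}, x={np.round(sol.x, 4)}")
```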

  17. The Development of Design Tools for Fault Tolerant Quantum Dot Cellular Automata Based Logic

    NASA Technical Reports Server (NTRS)

    Armstrong, Curtis D.; Humphreys, William M.

    2003-01-01

    We are developing software to explore the fault tolerance of quantum dot cellular automata gate architectures in the presence of manufacturing variations and device defects. The Topology Optimization Methodology using Applied Statistics (TOMAS) framework extends the capabilities of AQUINAS (A Quantum Interconnected Network Array Simulator) by adding front-end and back-end software and creating an environment that integrates all of these components. The front-end tools establish all simulation parameters, configure the simulation system, automate the Monte Carlo generation of simulation files, and execute the simulation of these files. The back-end tools perform automated data parsing, statistical analysis and report generation.
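
    The Monte Carlo front end can be pictured as sampling manufacturing variations around nominal cell parameters and writing one input file per realization. The sketch below is a hypothetical Python illustration of that idea only; the parameter names and JSON file format are assumptions, not the actual TOMAS/AQUINAS interface:

      import json
      import numpy as np

      rng = np.random.default_rng(seed=1)
      nominal = {"dot_diameter_nm": 5.0, "cell_spacing_nm": 20.0}   # hypothetical parameters
      rel_sigma = 0.05            # assumed 5% manufacturing variation
      n_realizations = 100

      for i in range(n_realizations):
          params = {k: float(v * rng.normal(1.0, rel_sigma)) for k, v in nominal.items()}
          with open(f"run_{i:03d}.json", "w") as f:
              json.dump(params, f)   # one simulation input file per Monte Carlo realization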

  18. [Development of fixed-base full task space flight training simulator].

    PubMed

    Xue, Liang; Chen, Shan-quang; Chang, Tian-chun; Yang, Hong; Chao, Jian-gang; Li, Zhi-peng

    2003-01-01

    The fixed-base full-task flight training simulator is a critical and important integrated training facility. It is mostly used for training integrated skills and tasks, such as running the flight program of a manned space flight, dealing with faults, operating and controlling the spacecraft flight, and communicating information between the spacecraft and the ground. The simulator is made up of several subsystems, including spacecraft simulation, the simulation cabin, visual imagery, acoustics, the main control computer, the instructor station, and assistant support. It implements many simulation functions, such as the spacecraft environment, spacecraft motion, spacecraft-ground communication, typical faults, manual control and operation training, training control, training monitoring, training database management, training data recording, and system detection.

  19. 3-D Spontaneous Rupture Simulations of the 2016 Kumamoto, Japan, Earthquake

    NASA Astrophysics Data System (ADS)

    Urata, Yumi; Yoshida, Keisuke; Fukuyama, Eiichi

    2017-04-01

    We investigated the M7.3 Kumamoto, Japan, earthquake to illuminate why and how the rupture of the main shock propagated successfully, using 3-D dynamic rupture simulations and assuming a complicated fault geometry estimated from the distributions of aftershocks. The M7.3 main shock occurred along the Futagawa and Hinagu faults. A few days before, three M6-class foreshocks occurred. Their hypocenters were located along the Hinagu and Futagawa faults and their focal mechanisms were similar to those of the main shock; therefore, an extensive stress shadow could have been generated on the fault plane of the main shock. First, we estimated the geometry of the fault planes of the three foreshocks as well as that of the main shock based on the temporal evolution of relocated aftershock hypocenters. Then, we evaluated static stress changes on the main shock fault plane due to the occurrence of the three foreshocks, assuming elliptical cracks with constant stress drops on the estimated fault planes. The obtained static stress change distribution indicated that the hypocenter of the main shock is located in a region of positive Coulomb failure stress change (ΔCFS), while ΔCFS in the shallow region above the hypocenter was negative. Therefore, these foreshocks could encourage the initiation of the main shock rupture and could hinder the rupture from propagating toward the shallow region. Finally, we conducted 3-D dynamic rupture simulations of the main shock using the initial stress distribution, which was the sum of the static stress changes caused by these foreshocks and the regional stress field. Assuming a slip-weakening law with uniform friction parameters, we conducted 3-D dynamic rupture simulations by varying the friction parameters and the values of the principal stresses. We obtained feasible parameter ranges that reproduce the rupture propagation of the main shock consistent with that revealed by seismic waveform analyses. We also demonstrated that the free surface encouraged the slip evolution of the main shock.
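
    For reference, the Coulomb failure stress change evaluated on a receiver fault is conventionally written as (a textbook form, not a formula quoted from this record):

        \Delta\mathrm{CFS} = \Delta\tau + \mu'\,\Delta\sigma_n ,

    where \Delta\tau is the shear stress change in the slip direction, \Delta\sigma_n is the normal stress change (positive for unclamping), and \mu' is the effective friction coefficient; positive \Delta\mathrm{CFS} brings the receiver fault closer to failure.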

  20. Nucleation and arrest of slow slip earthquakes: mechanisms and nonlinear simulations using realistic fault geometries and heterogeneous medium properties

    NASA Astrophysics Data System (ADS)

    Alves da Silva Junior, J.; Frank, W.; Campillo, M.; Juanes, R.

    2017-12-01

    Current models for slow slip earthquakes (SSE) assume a simplified fault embedded in a homogeneous half-space. In these models SSE events nucleate at the transition from velocity strengthening (VS) to velocity weakening (VW) down dip from the trench and propagate towards the base of the seismogenic zone, where high effective normal stress is assumed to arrest slip. Here, we investigate SSE nucleation and arrest using quasi-static finite element simulations, with rate and state friction, on a domain with heterogeneous properties and realistic fault geometry. We use the fault geometry of the Guerrero Gap in the Cocos subduction zone, where SSE events occur every 4 years, as a proxy for a subduction zone. Our model is calibrated using surface displacements from GPS observations. We apply boundary conditions according to the plate convergence rate and impose a depth-dependent pore pressure on the fault. Our simulations indicate that the fault geometry and elastic properties of the medium play a key role in the arrest of SSE events at the base of the seismogenic zone. SSE arrest occurs due to aseismic deformation of the domain that results in areas of elevated effective stress. SSE nucleation occurs in the transition from VS to VW and propagates as a crack-like expansion with increasing nucleation length prior to dynamic instability. Our simulations encompassing multiple seismic cycles indicate SSE interval times between 1 and 10 years and, importantly, a systematic increase of rupture area prior to dynamic instability, followed by a hiatus in SSE occurrence. We hypothesize that these SSE characteristics, if confirmed by GPS observations in different subduction zones, can add to the understanding of the nucleation of large earthquakes in the seismogenic zone.

  1. Kinematic ground motion simulations on rough faults including effects of 3D stochastic velocity perturbations

    USGS Publications Warehouse

    Graves, Robert; Pitarka, Arben

    2016-01-01

    We describe a methodology for generating kinematic earthquake ruptures for use in 3D ground‐motion simulations over the 0–5 Hz frequency band. Our approach begins by specifying a spatially random slip distribution that has a roughly wavenumber‐squared fall‐off. Given a hypocenter, the rupture speed is specified to average about 75%–80% of the local shear wavespeed and the prescribed slip‐rate function has a Kostrov‐like shape with a fault‐averaged rise time that scales self‐similarly with the seismic moment. Both the rupture time and rise time include significant local perturbations across the fault surface specified by spatially random fields that are partially correlated with the underlying slip distribution. We represent velocity‐strengthening fault zones in the shallow (<5  km) and deep (>15  km) crust by decreasing rupture speed and increasing rise time in these regions. Additional refinements to this approach include the incorporation of geometric perturbations to the fault surface, 3D stochastic correlated perturbations to the P‐ and S‐wave velocity structure, and a damage zone surrounding the shallow fault surface characterized by a 30% reduction in seismic velocity. We demonstrate the approach using a suite of simulations for a hypothetical Mw 6.45 strike‐slip earthquake embedded in a generalized hard‐rock velocity structure. The simulation results are compared with the median predictions from the 2014 Next Generation Attenuation‐West2 Project ground‐motion prediction equations and show very good agreement over the frequency band 0.1–5 Hz for distances out to 25 km from the fault. Additionally, the newly added features act to reduce the coherency of the radiated higher frequency (f>1  Hz) ground motions, and homogenize radiation‐pattern effects in this same bandwidth, which move the simulations closer to the statistical characteristics of observed motions as illustrated by comparison with recordings from the 1979 Imperial Valley earthquake.
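
    A toy illustration of the wavenumber-squared slip spectrum described above (a generic spectral-filtering sketch in Python; the grid size, spacing, and 3 m target mean slip are arbitrary assumptions, not the authors' rupture generator):

      import numpy as np

      nx, nz, dx = 128, 64, 0.5                      # along-strike/down-dip points, spacing (km)
      kx = np.fft.fftfreq(nx, dx)
      kz = np.fft.fftfreq(nz, dx)
      k = np.sqrt(kx[None, :] ** 2 + kz[:, None] ** 2)

      amp = np.zeros_like(k)
      amp[k > 0] = k[k > 0] ** -2.0                  # roughly wavenumber-squared fall-off

      rng = np.random.default_rng(0)
      phase = np.exp(2j * np.pi * rng.random((nz, nx)))
      slip = np.real(np.fft.ifft2(amp * phase))      # spatially random, correlated slip field
      slip -= slip.min()                             # shift to non-negative values
      slip *= 3.0 / slip.mean()                      # scale to an arbitrary 3 m mean slip
      print(slip.shape, slip.mean(), slip.max())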

  2. Kinematic Ground-Motion Simulations on Rough Faults Including Effects of 3D Stochastic Velocity Perturbations

    DOE PAGES

    Graves, Robert; Pitarka, Arben

    2016-08-23

    Here, we describe a methodology for generating kinematic earthquake ruptures for use in 3D ground-motion simulations over the 0-5 Hz frequency band. Our approach begins by specifying a spatially random slip distribution that has a roughly wavenumber-squared fall-off. Given a hypocenter, the rupture speed is specified to average about 75%-80% of the local shear wavespeed and the prescribed slip-rate function has a Kostrov-like shape with a fault-averaged rise time that scales self-similarly with the seismic moment. Both the rupture time and rise time include significant local perturbations across the fault surface specified by spatially random fields that are partially correlated with the underlying slip distribution. We represent velocity-strengthening fault zones in the shallow (<5 km) and deep (>15 km) crust by decreasing rupture speed and increasing rise time in these regions. Additional refinements to this approach include the incorporation of geometric perturbations to the fault surface, 3D stochastic correlated perturbations to the P- and S-wave velocity structure, and a damage zone surrounding the shallow fault surface characterized by a 30% reduction in seismic velocity. We demonstrate the approach using a suite of simulations for a hypothetical Mw 6.45 strike-slip earthquake embedded in a generalized hard-rock velocity structure. The simulation results are compared with the median predictions from the 2014 Next Generation Attenuation-West2 Project ground-motion prediction equations and show very good agreement over the frequency band 0.1-5 Hz for distances out to 25 km from the fault. Additionally, the newly added features act to reduce the coherency of the radiated higher frequency (f>1 Hz) ground motions, and homogenize radiation-pattern effects in this same bandwidth, which move the simulations closer to the statistical characteristics of observed motions as illustrated by comparison with recordings from the 1979 Imperial Valley earthquake.

  3. Development of a dynamic coupled hydro-geomechanical code and its application to induced seismicity

    NASA Astrophysics Data System (ADS)

    Miah, Md Mamun

    This research describes the importance of hydro-geomechanical coupling in the geologic sub-surface environment for fluid injection at geothermal plants, large-scale geological CO2 sequestration for climate mitigation, enhanced oil recovery, and hydraulic fracturing during well construction in the oil and gas industries. A sequential computational code is developed to capture the multiphysics interaction behavior by linking the flow simulation code TOUGH2 and the geomechanics modeling code PyLith. The numerical formulation of each code is discussed to demonstrate its modeling capabilities. The computational framework involves sequential coupling and the solution of two sub-problems: fluid flow through fractured and porous media, and reservoir geomechanics. For each time step of the flow calculation, the pressure field is passed to the geomechanics code to compute the effective stress field and fault slip. A simplified permeability model is implemented in the code that accounts for the permeability of porous and saturated rocks subject to confining stresses. The accuracy of the TOUGH-PyLith coupled simulator is tested by simulating Terzaghi's 1D consolidation problem. The modeling capability of coupled poroelasticity is validated by benchmarking it against Mandel's problem. The code is used to simulate both quasi-static and dynamic earthquake nucleation and slip distribution on a fault from the combined effect of far-field tectonic loading and fluid injection, using an appropriate fault constitutive friction model. Results from the quasi-static induced earthquake simulations show a delayed response in earthquake nucleation. This is attributed to the increased total stress in the domain and to not accounting for pressure on the fault. However, this issue is resolved in the final chapter by simulating a single-event earthquake dynamic rupture. Simulation results show that fluid pressure has a positive effect on slip nucleation and subsequent crack propagation. This is confirmed by a sensitivity analysis showing that an increase in injection well distance results in delayed slip nucleation and rupture propagation on the fault.
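
    The effective-stress update in this kind of sequential flow-geomechanics coupling is conventionally the Biot relation (stated here in its standard textbook form, not necessarily the exact convention used in TOUGH-PyLith):

        \sigma'_{ij} = \sigma_{ij} - \alpha\, p\, \delta_{ij},

    where \sigma_{ij} is the total stress, p the pore pressure passed from the flow step, \alpha the Biot coefficient, and \delta_{ij} the Kronecker delta.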

  4. Dynamics of delayed triggering in multi-segmented foreshock sequence: Evidence from the 2016 Kumamoto, Japan, earthquake

    NASA Astrophysics Data System (ADS)

    Arai, H.; Ando, R.; Aoki, Y.

    2017-12-01

    The 2016 Kumamoto earthquake sequence struck SW Japan from April 14th to 16th and includes two M6-class foreshocks and the main shock (Mw 7.0). Importantly, the detailed surface displacement caused solely by the two foreshocks could be captured by a SAR observation isolated from the mainshock deformation. The foreshocks ruptured the previously mapped Hinagu fault, and their hypocentral locations and the aftershock distribution indicate the involvement of two different subparallel faults. We therefore assumed that the 1st and 2nd foreshocks respectively ruptured each of the subparallel faults (faults A and B). One of the interesting points of this earthquake is that the two major foreshocks had a temporal gap of 2.5 hours even though faults A and B are quite close to each other. This suggests that the stress perturbation due to the 1st foreshock was not large enough to trigger the 2nd one right away, but was large enough to bring about the following earthquake after a delay. We aim to reproduce the foreshock sequence, such as the rupture jumping between the subparallel faults, using dynamic rupture simulations. We employed a spatiotemporal boundary integral equation method accelerated by the Fast Domain Partitioning Method (Ando, 2016, GJI), since this method allows us to construct a complex fault geometry in 3D media. Our model has two faults and a free ground surface. We conducted rupture simulations with various sets of parameters to identify the optimal condition describing the observation. Our simulation results are roughly categorized into three cases with regard to the criticality of the rupture jumping. In case 1 (supercritical), faults A and B ruptured consecutively without any temporal gap. In case 2 (nearly critical), the rupture on fault B started with a temporal gap after fault A finished rupturing, which is what we expected as a reproduction. In case 3 (subcritical), only fault A ruptured and its rupture did not transfer to fault B. We succeeded in reproducing rupture jumping between two faults with a temporal gap due to nucleation by taking into account a velocity-strengthening (direct) effect. With a detailed analysis of case 2, we can strictly constrain the parameter ranges, which gives us deeper insight into the physics underlying the delayed foreshock activity.

  5. Using Remote Sensing Data to Constrain Models of Fault Interactions and Plate Boundary Deformation

    NASA Astrophysics Data System (ADS)

    Glasscoe, M. T.; Donnellan, A.; Lyzenga, G. A.; Parker, J. W.; Milliner, C. W. D.

    2016-12-01

    Determining the distribution of slip and behavior of fault interactions at plate boundaries is a complex problem. Field and remotely sensed data often lack the necessary coverage to fully resolve fault behavior. However, realistic physical models may be used to more accurately characterize the complex behavior of faults constrained with observed data, such as GPS, InSAR, and SfM. These results will improve the utility of using combined models and data to estimate earthquake potential and characterize plate boundary behavior. Plate boundary faults exhibit complex behavior, with partitioned slip and distributed deformation. To investigate what fraction of slip becomes distributed deformation off major faults, we examine a model fault embedded within a damage zone of reduced elastic rigidity that narrows with depth and forward model the slip and resulting surface deformation. The fault segments and slip distributions are modeled using the JPL GeoFEST software. GeoFEST (Geophysical Finite Element Simulation Tool) is a two- and three-dimensional finite element software package for modeling solid stress and strain in geophysical and other continuum domain applications [Lyzenga, et al., 2000; Glasscoe, et al., 2004; Parker, et al., 2008, 2010]. New methods to advance geohazards research using computer simulations and remotely sensed observations for model validation are required to understand fault slip, the complex nature of fault interaction and plate boundary deformation. These models help enhance our understanding of the underlying processes, such as transient deformation and fault creep, and can aid in developing observation strategies for sUAV, airborne, and upcoming satellite missions seeking to determine how faults behave and interact and assess their associated hazard. Models will also help to characterize this behavior, which will enable improvements in hazard estimation. Validating the model results against remotely sensed observations will allow us to better constrain fault zone rheology and physical properties, having implications for the overall understanding of earthquake physics, fault interactions, plate boundary deformation and earthquake hazard, preparedness and risk reduction.

  6. Dynamics of folding: Impact of fault bend folds on earthquake cycles

    NASA Astrophysics Data System (ADS)

    Sathiakumar, S.; Barbot, S.; Hubbard, J.

    2017-12-01

    Earthquakes in subduction zones and subaerial convergent margins are some of the largest in the world. So far, forecasts of future earthquakes have primarily relied on assessing past earthquakes to look for seismic gaps and slip deficits. However, the roles of fault geometry and off-fault plasticity are typically overlooked. We use structural geology (fault-bend folding theory) to inform fault modeling in order to better understand how deformation is accommodated on the geological time scale and through the earthquake cycle. Fault bends in megathrusts, like those proposed for the Nepal Himalaya, will induce folding of the upper plate. This introduces changes in the slip rate on different fault segments, and therefore in the loading rate at the plate interface, profoundly affecting the pattern of earthquake cycles. We develop numerical simulations of slip evolution under rate-and-state friction and show that this effect introduces segmentation of the earthquake cycle. In crustal dynamics, it is challenging to describe the dynamics of fault-bend folds, because the deformation is accommodated by small amounts of slip parallel to bedding planes ("flexural slip"), localized on axial surfaces, i.e., folding axes pinned to fault bends. We use dislocation theory to describe the dynamics of folding along these axial surfaces, using analytic solutions that provide displacement and stress kernels to simulate the temporal evolution of folding and assess the effects of folding on earthquake cycles. Studies of the 2015 Gorkha earthquake, Nepal, have shown that fault geometry can affect earthquake segmentation. Here, we show that in addition to the fault geometry, the actual geology of the rocks in the hanging wall of the fault also affects critical parameters, including the loading rate on parts of the fault, based on fault-bend folding theory. Because loading velocity controls the recurrence time of earthquakes, these two effects together are likely to have a strong impact on the earthquake cycle.
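
    The rate-and-state friction governing the slip-evolution simulations is commonly written in the Dieterich-Ruina form with the aging law for the state variable (a standard statement of the law, not parameters taken from this study):

        \mu = \mu_0 + a \ln\!\frac{V}{V_0} + b \ln\!\frac{V_0\,\theta}{D_c}, \qquad \frac{d\theta}{dt} = 1 - \frac{V\,\theta}{D_c},

    where V is the slip rate, \theta the state variable, D_c the characteristic slip distance, and a, b the direct- and evolution-effect parameters.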

  7. Metrics for comparing dynamic earthquake rupture simulations

    USGS Publications Warehouse

    Barall, Michael; Harris, Ruth A.

    2014-01-01

    Earthquakes are complex events that involve a myriad of interactions among multiple geologic features and processes. One of the tools available to assist with their study is computer simulation, particularly dynamic rupture simulation. A dynamic rupture simulation is a numerical model of the physical processes that occur during an earthquake. Starting with the fault geometry, friction constitutive law, initial stress conditions, and assumptions about the condition and response of the near-fault rocks, a dynamic earthquake rupture simulation calculates the evolution of fault slip and stress over time as part of the elastodynamic numerical solution (Ⓔ see the simulation description in the electronic supplement to this article). The complexity of the computations in a dynamic rupture simulation makes it challenging to verify that the computer code is operating as intended, because there are no exact analytic solutions against which these codes' results can be directly compared. One approach for checking if dynamic rupture computer codes are working satisfactorily is to compare each code's results with the results of other dynamic rupture codes running the same earthquake simulation benchmark. To perform such a comparison consistently, it is necessary to have quantitative metrics. In this paper, we present a new method for quantitatively comparing the results of dynamic earthquake rupture computer simulation codes.
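
    As a toy illustration of what a quantitative comparison metric can look like (not the metric defined in this paper), one can compute a normalized RMS difference between, say, the rupture-time fields produced by two codes on the same benchmark grid:

      import numpy as np

      def normalized_rms(field_a, field_b):
          """Normalized RMS difference between two codes' results on a common grid."""
          a, b = np.asarray(field_a), np.asarray(field_b)
          return np.sqrt(np.mean((a - b) ** 2)) / np.sqrt(np.mean(a ** 2))

      # hypothetical rupture-time fields (seconds) from two codes on a 3 x 3 fault patch
      code_a = [[0.0, 0.4, 0.9], [0.3, 0.7, 1.2], [0.8, 1.1, 1.6]]
      code_b = [[0.0, 0.5, 1.0], [0.3, 0.8, 1.3], [0.9, 1.2, 1.7]]
      print(f"normalized RMS difference: {normalized_rms(code_a, code_b):.3f}")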

  8. Fault detection and accommodation testing on an F100 engine in an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Myers, L. P.; Baer-Riedhart, J. L.; Maxwell, M. D.

    1985-01-01

    The fault detection and accommodation (FDA) methodology for digital engine-control systems may range from simple comparisons of redundant parameters to the more complex and sophisticated observer models of the entire engine system. Evaluations of the various FDA schemes are done using analytical methods, simulation, and limited-altitude-facility testing. Flight testing of the FDA logic has been minimal because of the difficulty of inducing realistic faults in flight. A flight program was conducted to evaluate the fault detection and accommodation capability of a digital electronic engine control in an F-15 aircraft. The objective of the flight program was to induce selected faults and evaluate the resulting actions of the digital engine controller. Comparisons were made between the flight results and predictions. Several anomalies were found in flight and during the ground test. Simulation results showed that the inducement of dual pressure failures was not feasible since the FDA logic was not designed to accommodate these types of failures.

  9. Analytical simulation and PROFAT II: a new methodology and a computer automated tool for fault tree analysis in chemical process industries.

    PubMed

    Khan, F I; Abbasi, S A

    2000-07-10

    Fault tree analysis (FTA) is based on constructing a hypothetical tree of base events (initiating events) branching into numerous other sub-events, propagating the fault and eventually leading to the top event (accident). It has traditionally been a powerful technique for identifying hazards in nuclear installations and the power industry. As the systematic articulation of the fault tree involves assigning probabilities to each fault, the exercise is also sometimes called probabilistic risk assessment. But powerful as this technique is, it is also very cumbersome and costly, limiting its area of application. We have developed a new algorithm based on analytical simulation (named AS-II), which makes the application of FTA simpler, quicker, and cheaper, thus opening up the possibility of its wider use in risk assessment in chemical process industries. Based on this methodology we have developed a computer-automated tool. The details are presented in this paper.
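
    To make the fault-tree idea concrete, the sketch below evaluates a tiny hypothetical tree (independent basic events combined through AND/OR gates); it illustrates how basic-event probabilities propagate to the top event and is not the AS-II algorithm itself:

      from math import prod

      def p_and(*p):            # AND gate: all inputs must occur (independence assumed)
          return prod(p)

      def p_or(*p):             # OR gate: at least one input occurs
          return 1.0 - prod(1.0 - x for x in p)

      # hypothetical basic-event probabilities
      pump_fails, valve_sticks, sensor_drifts, operator_misses = 1e-3, 5e-4, 2e-3, 1e-2

      # top event: overpressure = (pump fails OR valve sticks) AND (sensor drifts OR operator misses)
      p_top = p_and(p_or(pump_fails, valve_sticks), p_or(sensor_drifts, operator_misses))
      print(f"top-event probability is approximately {p_top:.2e}")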

  10. Failure Diagnosis for the Holdup Tank System via ISFA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Huijuan; Bragg-Sitton, Shannon; Smidts, Carol

    This paper discusses the use of the integrated system failure analysis (ISFA) technique for fault diagnosis for the holdup tank system. ISFA is a simulation-based, qualitative and integrated approach used to study fault propagation in systems containing both hardware and software subsystems. The holdup tank system consists of a tank containing a fluid whose level is controlled by an inlet valve and an outlet valve. We introduce the component and functional models of the system, quantify the main parameters and simulate possible failure-propagation paths based on the fault propagation approach, ISFA. The results show that most component failures in the holdup tank system can be identified clearly and that ISFA is viable as a technique for fault diagnosis. Since ISFA is a qualitative technique that can be used in the very early stages of system design, this case study provides indications that it can be used early to study design aspects that relate to robustness and fault tolerance.

  11. 2D Simulations of Earthquake Cycles at a Subduction Zone Based on a Rate and State Friction Law -Effects of Pore Fluid Pressure Changes-

    NASA Astrophysics Data System (ADS)

    Mitsui, Y.; Hirahara, K.

    2006-12-01

    There have been many studies simulating large earthquakes that occur quasi-periodically at subduction zones, based on the laboratory-derived rate-and-state friction law [e.g., Kato and Hirasawa (1997), Hirose and Hirahara (2002)]. All of them assume that pore fluid pressure in the fault zone is constant. However, in the fault zone, pore fluid pressure changes suddenly due to coseismic pore dilatation [Marone (1990)] and thermal pressurization [Mase and Smith (1987)]. If pore fluid pressure drops and effective normal stress rises, fault slip is decelerated. Conversely, if pore fluid pressure rises and effective normal stress drops, fault slip is accelerated. The effect of pore fluid may cause slow slip events and low-frequency tremor [Kodaira et al. (2004), Shelly et al. (2006)]. For a simple spring model, how pore dilatation affects slip instability has been investigated [Segall and Rice (1995), Sleep (1995)]: when the slip rate becomes high, pore dilatation occurs and pore pressure drops, restraining the slip rate; the inflow of pore fluid then recovers the pore pressure. We execute 2D earthquake cycle simulations at a subduction zone, taking into account such changes of pore fluid pressure following Segall and Rice (1995), in addition to the numerical scheme of Kato and Hirasawa (1997). We do not adopt hydrostatic pore pressure but excess pore pressure as the initial condition, because upflow of dehydrated water seems to exist at a subduction zone. In our model, pore fluid is confined to the fault damage zone and flows along the plate interface. The smaller the flow rate is, the later the pore pressure recovers. Since the effective normal stress remains larger, fault slip is decelerated and the stress drop becomes smaller. Therefore a smaller flow rate along the fault zone leads to a shorter earthquake recurrence time. Thus, not only the frictional parameters and the subduction rate but also the fault zone permeability affect the recurrence time of the earthquake cycle. Further, heterogeneity in the permeability along the plate interface can bring about other slip behaviors, such as slow slip events. Our simulations indicate that, in addition to the frictional parameters, the permeability within the fault damage zone is one of the essential parameters controlling the whole earthquake cycle.

  12. Fault Slip Distribution and Optimum Sea Surface Displacement of the 2017 Tehuantepec Earthquake in Mexico (Mw 8.2) Estimated from Tsunami Waveforms

    NASA Astrophysics Data System (ADS)

    Gusman, A. R.; Satake, K.; Mulia, I. E.

    2017-12-01

    An intraplate normal fault earthquake (Mw 8.2) occurred on 8 September 2017 in the Tehuantepec seismic gap of the Middle America Trench. The submarine earthquake generated a tsunami which was recorded by coastal tide gauges and offshore DART buoys. We used the tsunami waveforms recorded at 16 stations to estimate the fault slip distribution and an optimum sea surface displacement of the earthquake. A steep fault dipping to the northeast with strike of 315°, dip of 73° and rake of -96°, based on the USGS W-phase moment tensor solution, was assumed for the slip inversion. To independently estimate the sea surface displacement without assuming earthquake fault parameters, we used the B-spline function for the unit sources. The distribution of the unit sources was optimized by a Genetic Algorithm - Pattern Search (GA-PS) method. Tsunami waveform inversion resolves a spatially compact region of large slip (4-10 m) with a dimension of 100 km along strike and 80 km along dip in the depth range between 40 km and 110 km. The seismic moment calculated from the fault slip distribution with an assumed rigidity of 6 × 10^10 N m^-2 is 2.46 × 10^21 N m (Mw 8.2). The optimum displacement model suggests that the sea surface was uplifted by up to 0.5 m and subsided by down to -0.8 m. The deep location of large fault slip may be the cause of such small sea surface displacements. The simulated tsunami waveforms from the optimum sea surface displacement reproduce the observations better than those from the fault slip distribution; the normalized root mean square misfit for the sea surface displacement is 0.89, while that for the fault slip distribution is 1.04. We simulated the tsunami propagation using the optimum sea surface displacement model. Large tsunami amplitudes up to 2.5 m were predicted to occur inside and around a lagoon located between Salina Cruz and Puerto Chiapas. Figure 1. a) Sea surface displacement for the 2017 Tehuantepec earthquake estimated by tsunami waveforms. b) Map of simulated maximum tsunami amplitude and comparison between observed (blue circles) and simulated (red circles) tsunami maximum amplitude along the coast.
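
    The quoted magnitude follows from the standard seismic-moment relations (textbook definitions, applied here to the rigidity and moment values given above):

        M_0 = \mu \sum_i A_i\, D_i, \qquad M_w = \frac{2}{3}\left(\log_{10} M_0 - 9.1\right),

    so M_0 = 2.46 \times 10^{21}\ \mathrm{N\,m} gives M_w \approx \frac{2}{3}(21.39 - 9.1) \approx 8.2.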

  13. Three-dimensional long-period ground-motion simulations in the upper Mississippi embayment

    USGS Publications Warehouse

    Macpherson, K.A.; Woolery, E.W.; Wang, Z.; Liu, P.

    2010-01-01

    We employed a 3D velocity model and 3D wave propagation code to simulate long-period ground motions in the upper Mississippi embayment. This region is at risk from large earthquakes in the New Madrid seismic zone (NMSZ) and observational data are sparse, making simulation a valuable tool for predicting the effects of large events. We undertook these simulations to estimate the magnitude of shaking likely to occur and to investigate the influence of the 3D embayment structure and finite-fault mechanics on ground motions. There exist three primary fault zones in the NMSZ, each of which was likely associated with one of the main shocks of the 1811-12 earthquake triplet. For this study, three simulations have been conducted on each major segment, exploring the impact of different epicentral locations and rupture directions on ground motions. The full wave field up to a frequency of 0.5 Hz is computed on a 200 × 200 × 50-km³ volume using a staggered-grid finite-difference code. Peak horizontal velocity and bracketed durations were calculated at the free surface. The NMSZ simulations indicate that for the considered bandwidth, finite-fault mechanics such as fault proximity, directivity effect, and slip distribution exert the most control on ground motions. The 3D geologic structure of the upper Mississippi embayment also influences ground motion with indications that amplification is induced by the sharp velocity contrast at the basin edge.
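
    Bracketed duration, one of the ground-motion measures reported here, is conventionally the time between the first and last exceedance of a fixed acceleration threshold (commonly 0.05 g). The snippet below is a generic illustration of that definition applied to a synthetic record, not the authors' code:

      import numpy as np

      def bracketed_duration(acc, dt, threshold=0.05 * 9.81):
          """Time between first and last exceedance of |acc| above threshold (m/s^2)."""
          idx = np.flatnonzero(np.abs(acc) >= threshold)
          return 0.0 if idx.size == 0 else (idx[-1] - idx[0]) * dt

      # synthetic example: a decaying 1.5 Hz oscillation sampled at 100 Hz
      t = np.arange(0.0, 20.0, 0.01)
      acc = 2.0 * np.exp(-0.2 * t) * np.sin(2 * np.pi * 1.5 * t)
      print(f"bracketed duration: {bracketed_duration(acc, 0.01):.2f} s")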

  14. Fuzzy Inference System Approach for Locating Series, Shunt, and Simultaneous Series-Shunt Faults in Double Circuit Transmission Lines

    PubMed Central

    Swetapadma, Aleena; Yadav, Anamika

    2015-01-01

    Many schemes have been reported for shunt fault location estimation, but location estimation of series (open conductor) faults has not been dealt with so far. Existing numerical relays only detect an open conductor (series) fault and indicate the faulty phase(s), but they are unable to locate the series fault, so the repair crew must patrol the complete line to find its location. In this paper, fuzzy-based fault detection/classification and location schemes in the time domain are proposed for series faults, shunt faults, and simultaneous series-shunt faults. The fault simulation studies and fault location algorithm have been developed using Matlab/Simulink. Synchronized phasors of voltage and current signals from both ends of the line are used as input to the proposed fuzzy-based fault location scheme. The percentage error in locating series faults is within 1% and in locating shunt faults within 5% for all tested fault cases. The location-error estimates are validated using a chi-square test at both the 1% and 5% significance levels. PMID:26413088

  15. Fault-tolerant cooperative output regulation for multi-vehicle systems with sensor faults

    NASA Astrophysics Data System (ADS)

    Qin, Liguo; He, Xiao; Zhou, D. H.

    2017-10-01

    This paper presents a unified framework of fault diagnosis and fault-tolerant cooperative output regulation (FTCOR) for a linear discrete-time multi-vehicle system with sensor faults. The FTCOR control law is designed in three steps. A cooperative output regulation (COR) controller is designed based on the internal model principle when there are no sensor faults. A sufficient condition for the existence of the COR controller is given based on the discrete-time algebraic Riccati equation (DARE). Then, a decentralised fault diagnosis scheme is designed to cope with sensor faults occurring in followers. A residual generator is developed to detect sensor faults of each follower, and a bank of fault-matching estimators is proposed to isolate and estimate sensor faults of each follower. Unlike current distributed fault diagnosis for multi-vehicle systems, the presented decentralised fault diagnosis scheme reduces the communication and computation load by using only the information of the vehicle itself. By combining the sensor fault estimation and the COR control law, an FTCOR controller is proposed. Finally, the simulation results demonstrate the effectiveness of the FTCOR controller.
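
    The existence condition quoted for the COR controller rests on the discrete-time algebraic Riccati equation. For a single vehicle's nominal regulator this can be sketched with SciPy (a generic LQR-style computation under assumed system matrices, not the paper's distributed design):

      import numpy as np
      from scipy.linalg import solve_discrete_are

      # assumed discrete-time vehicle model x[k+1] = A x[k] + B u[k]
      A = np.array([[1.0, 0.1], [0.0, 1.0]])
      B = np.array([[0.005], [0.1]])
      Q = np.eye(2)            # state weight
      R = np.array([[1.0]])    # input weight

      P = solve_discrete_are(A, B, Q, R)                    # DARE solution
      K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)     # stabilizing feedback gain
      print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))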

  16. Laboratory observations of fault strength in response to changes in normal stress

    USGS Publications Warehouse

    Kilgore, Brian D.; Lozos, Julian; Beeler, Nicholas M.; Oglesby, David

    2012-01-01

    Changes in fault normal stress can either inhibit or promote rupture propagation, depending on the fault geometry and on how fault shear strength varies in response to the normal stress change. A better understanding of this dependence will lead to improved earthquake simulation techniques, and ultimately, improved earthquake hazard mitigation efforts. We present the results of new laboratory experiments investigating the effects of step changes in fault normal stress on the fault shear strength during sliding, using bare Westerly granite samples, with roughened sliding surfaces, in a double direct shear apparatus. Previous experimental studies examining the shear strength following a step change in the normal stress produce contradictory results: a set of double direct shear experiments indicates that the shear strength of a fault responds immediately, and then is followed by a prolonged slip-dependent response, while a set of shock loading experiments indicates that there is no immediate component, and the response is purely gradual and slip-dependent. In our new, high-resolution experiments, we observe that the acoustic transmissivity and dilatancy of simulated faults in our tests respond immediately to changes in the normal stress, consistent with the interpretations of previous investigations, and verify an immediate increase in the area of contact between the roughened sliding surfaces as normal stress increases. However, the shear strength of the fault does not immediately increase, indicating that the new area of contact between the rough fault surfaces does not appear preloaded with any shear resistance or strength. Additional slip is required for the fault to achieve a new shear strength appropriate for its new loading conditions, consistent with previous observations made during shock loading.

  17. Simulating subduction zone earthquakes using discrete element method: a window into elusive source processes

    NASA Astrophysics Data System (ADS)

    Blank, D. G.; Morgan, J.

    2017-12-01

    Large earthquakes that occur on convergent plate margin interfaces have the potential to cause widespread damage and loss of life. Recent observations reveal that a wide range of different slip behaviors take place along these megathrust faults, which demonstrate both their complexity, and our limited understanding of fault processes and their controls. Numerical modeling provides us with a useful tool that we can use to simulate earthquakes and related slip events, and to make direct observations and correlations among properties and parameters that might control them. Further analysis of these phenomena can lead to a more complete understanding of the underlying mechanisms that accompany the nucleation of large earthquakes, and what might trigger them. In this study, we use the discrete element method (DEM) to create numerical analogs to subduction megathrusts with heterogeneous fault friction. Displacement boundary conditions are applied in order to simulate tectonic loading, which in turn, induces slip along the fault. A wide range of slip behaviors are observed, ranging from creep to stick slip. We are able to characterize slip events by duration, stress drop, rupture area, and slip magnitude, and to correlate the relationships among these quantities. These characterizations allow us to develop a catalog of rupture events both spatially and temporally, for comparison with slip processes on natural faults.

  18. Modeling and Simulation Reliable Spacecraft On-Board Computing

    NASA Technical Reports Server (NTRS)

    Park, Nohpill

    1999-01-01

    The proposed project will investigate modeling and simulation-driven testing and fault tolerance schemes for spacecraft on-board computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration/reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the above-mentioned capabilities, both the size and cost of the ground-station terminals have to be reduced by using a reliable, high-throughput, fast and cost-effective on-board computing system, which has been known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay) and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before incorporating fault tolerance into the system. Testing and fault tolerance strategies should be driven by accurate performance models (i.e. throughput, delay, reliability and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module and a module for fault tolerance, all of which interact through a central graphical user interface.

  19. Study on the characteristics of multi-infeed HVDC

    NASA Astrophysics Data System (ADS)

    Li, Ming; Song, Xinli; Liu, Wenzhuo; Xiang, Yinxing; Zhao, Shutao; Su, Zhida; Meng, Hang

    2017-09-01

    China has built more than ten HVDC transmission projects in recent years [1], and east China has now formed a multi-infeed HVDC grid. It is therefore urgent to study the interactions among the HVDC links and the characteristics of such a grid. In this paper, an electromechanical-electromagnetic hybrid model is built using electromechanical data of a certain power network: electromagnetic models simulate the HVDC section and electromechanical models simulate the AC power network [2]. To study the grid characteristics, faults are applied to the lines and the resulting fault characteristics are analysed.

  20. Simulating and analyzing engineering parameters of Kyushu Earthquake, Japan, 1997, by empirical Green function method

    NASA Astrophysics Data System (ADS)

    Li, Zongchao; Chen, Xueliang; Gao, Mengtan; Jiang, Han; Li, Tiefei

    2017-03-01

    Earthquake engineering parameters are very important in engineering practice, especially for anti-seismic design and earthquake disaster prevention. In this study, we focus on simulating earthquake engineering parameters by the empirical Green's function method. The simulated earthquake (MJMA 6.5) occurred in Kyushu, Japan, in 1997. Horizontal ground motion is separated into fault-parallel and fault-normal components in order to assess the characteristics of these two components. The broadband frequency range of the ground motion simulation is 0.1 to 20 Hz. By comparing observed and synthetic parameters, we analyzed the distribution characteristics of the earthquake engineering parameters. The simulated waveforms show high similarity with the observed waveforms. We found the following. (1) Near-field PGA attenuates rapidly in all directions, with a strip-like radiation pattern in the fault-parallel component, while the fault-normal radiation pattern is circular; PGV shows good agreement between observed and synthetic records, but its distribution differs between components. (2) Rupture direction and terrain have a large influence on the 90% significant duration. (3) Arias intensity attenuates with increasing epicentral distance, and observed values agree well with synthetic values. (4) The predominant period differs considerably across parts of Kyushu in the fault-normal component; it is affected greatly by site conditions. (5) Most parameters have good reference values where the hypocentral distance is less than 35 km. (6) The GOF values of all these parameters are generally higher than 45, which indicates a good result according to Olsen's classification criterion, although not all parameters fit well. Given these synthetic ground motion parameters, seismic hazard analysis can be performed and earthquake disaster analysis can be conducted in future urban planning.
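
    Arias intensity, one of the compared parameters, has the standard definition (a textbook formula, not one specific to this study):

        I_A = \frac{\pi}{2g} \int_0^{T_d} a(t)^2 \, dt,

    where a(t) is the ground acceleration, g the gravitational acceleration, and T_d the duration of the record.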

  1. Finite Element Modeling of Non-linear Coupled Interacting Fault System

    NASA Astrophysics Data System (ADS)

    Xing, H. L.; Zhang, J.; Wyborn, D.

    2009-04-01

    PANDAS (Parallel Adaptive static/dynamic Nonlinear Deformation Analysis System) is a novel supercomputer simulation tool developed for simulating highly non-linear coupled geomechanical-fluid flow-thermal systems involving heterogeneously fractured geomaterials. PANDAS includes the following key components: Pandas/Pre, ESyS_Crustal, Pandas/Thermo, Pandas/Fluid and Pandas/Post, as detailed below.
    • Pandas/Pre is developed to visualise the microseismicity events recorded during the hydraulic stimulation process, to further evaluate the fracture location and evolution and the geological setting of a given reservoir, and then to generate the mesh with it and/or other commercial graphics software (such as Patran) for the subsequent finite element analysis of various cases; the Delaunay algorithm is applied as a suitable method for mesh generation from such a point set.
    • ESyS_Crustal is a finite element code developed for interacting fault system simulation. It employs an adaptive static/dynamic algorithm to simulate the dynamics and evolution of interacting fault systems and processes relevant on short to intermediate time scales, in which several dynamic phenomena related to stick-slip instability along the faults need to be taken into account, i.e., (a) slow quasi-static stress accumulation, (b) rapid dynamic rupture, (c) wave propagation and (d) the corresponding stress redistribution due to energy release along the multiple fault boundaries. These are needed to better describe rupture/microseismicity/earthquake-related phenomena with applications in earthquake forecasting, hazard quantification, exploration, and environmental problems. It has been verified with various available experimental results [1-3].
    • Pandas/Thermo is a finite element module for the thermal analysis of fractured porous media; the temperature distribution is calculated from the heat transfer induced by the thermal boundary conditions, without/with the coupled fluid effects and the geomechanical energy conversion, for pure/coupled thermal analysis.
    • Pandas/Fluid is a finite element module for simulating fluid flow in fractured porous media; the fluid flow velocity and pressure are calculated from energy equilibrium equations, without/together with the coupling effects of thermal and solid rock deformation, for independent/coupled fluid flow analysis.
    • Pandas/Post visualises the simulation results through the integration of VTK and/or Patran.
    All of the above modules can be used independently or together to simulate individual or coupled phenomena (such as interacting fault system dynamics, heat flow and fluid flow) with or without coupling effects. PANDAS has been applied to the following issues:
    • visualisation of microseismic events to monitor and determine where and how the underground rupture proceeds during a hydraulic stimulation, to generate the mesh using the recorded data for determining the domain of the ruptured zone, and to evaluate the material parameters (i.e., the permeability) for further numerical analysis;
    • interacting fault system simulation to determine the relevant complicated dynamic rupture process;
    • geomechanical-fluid flow coupling analysis to investigate the interactions between fluid flow and deformation in fractured porous media under different loading conditions;
    • thermo-fluid flow coupling analysis of a fractured geothermal reservoir system.
    PANDAS will be further developed for multiscale simulation of multiphase dynamic behaviour of a fractured geothermal reservoir. More details and additional application examples will be given during the presentation. References: [1] Xing, H. L., Makinouchi, A. and Mora, P. (2007). Finite element modeling of interacting fault system, Physics of the Earth and Planetary Interiors, 163, 106-121. doi:10.1016/j.pepi.2007.05.006. [2] Xing, H. L., Mora, P., Makinouchi, A. (2006). A unified friction description and its application to simulation of frictional instability using the finite element method. Philosophical Magazine, 86, 3453-3475. [3] Xing, H. L., Mora, P. (2006). Construction of an intraplate fault system model of South Australia, and simulation tool for the iSERVO institute seed project. Pure and Applied Geophysics, 163, 2297-2316. doi:10.1007/s00024-006-0127-x.

  2. Material contrast does not predict earthquake rupture propagation direction

    USGS Publications Warehouse

    Harris, R.A.; Day, S.M.

    2005-01-01

    Earthquakes often occur on faults that juxtapose different rocks. The result is rupture behavior that differs from that of an earthquake occurring on a fault in a homogeneous material. Previous 2D numerical simulations have studied simple cases of earthquake rupture propagation where there is a material contrast across a fault and have come to two different conclusions: 1) earthquake rupture propagation direction can be predicted from the material contrast, and 2) earthquake rupture propagation direction cannot be predicted from the material contrast. In this paper we provide observational evidence from 70 years of earthquakes at Parkfield, CA, and new 3D numerical simulations. Both the observations and the numerical simulations demonstrate that earthquake rupture propagation direction is unlikely to be predictable on the basis of a material contrast. Copyright 2005 by the American Geophysical Union.

  3. Gyro-based Maximum-Likelihood Thruster Fault Detection and Identification

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Lages, Chris; Mah, Robert; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When building smaller, less expensive spacecraft, there is a need for intelligent fault tolerance rather than increased hardware redundancy. If fault tolerance can be achieved using existing navigation sensors, cost and vehicle complexity can be reduced. A maximum likelihood-based approach to thruster fault detection and identification (FDI) for spacecraft is developed here and applied in simulation to the X-38 space vehicle. The system uses only gyro signals to detect and identify hard, abrupt, single and multiple jet on- and off-failures. Faults are detected within one second and identified within one to five seconds.

  4. Conditions of Fissuring in a Pumped-Faulted Aquifer System

    NASA Astrophysics Data System (ADS)

    Hernandez-Marin, M.; Burbey, T. J.

    2007-12-01

    Earth fissuring associated with subsidence from groundwater pumping is problematic in many heavily pumped arid-zone basins such as Las Vegas Valley. Long-term pumping at rates considerably greater than the natural recharge rate has stressed the heterogeneous aquifer system, resulting in a complex stress-strain regime. A rigorous artificial recharge program coupled with increased surface-water importation has allowed water levels to recover appreciably, which has led to surface rebound in some localities. Nonetheless, new fissures continue to appear, particularly near basin-fill faults that behave as barriers to subsidence bowls. The purpose of this research is to develop a series of computational models to better understand the influence that structure (faults), pumping, and hydrostratigraphy have on the generation and propagation of fissures. The hydrostratigraphy of Las Vegas Valley consists of aquifers, aquitards and a relatively dry vadose zone that may be as thick as 100 m in much of the valley. Quaternary faults are typically depicted as scarps resulting from pre-pumping extensional tectonic events and are probably not responsible for the observed strain. The models developed to simulate the stress-strain and deformation processes in the faulted, pumped aquifer-aquitard system of Las Vegas use the ABAQUS CAE (Complete ABAQUS Environment) software system. ABAQUS is a sophisticated engineering finite-element modeling package capable of simulating the complex fault-fissure system described here. A brittle failure criterion based on the tensile strength of the materials and the acting stresses (from previous models) is being used to understand how and where fissures are likely to form. Hypothetical simulations include the role that faults and the vadose zone may play in fissure formation.

  5. Fault-Tolerant Control of ANPC Three-Level Inverter Based on Order-Reduction Optimal Control Strategy under Multi-Device Open-Circuit Fault.

    PubMed

    Xu, Shi-Zhou; Wang, Chun-Jie; Lin, Fang-Li; Li, Shi-Xiang

    2017-10-31

    The multi-device open-circuit fault is a common fault of ANPC (Active Neutral-Point Clamped) three-level inverters and affects the operational stability of the whole system. To improve operational stability, this paper first summarizes the existing solutions and analyzes all possible states of the multi-device open-circuit fault. Second, an order-reduction optimal control strategy is proposed for multi-device open-circuit faults to realize fault-tolerant control, based on the topology and control requirements of the ANPC three-level inverter and its operational stability. This control strategy can handle faults in different operating states and can work in an order-reduction state under specific open-circuit faults of specific device combinations, sacrificing control quality to prioritize stability. Finally, simulations and experiments demonstrate the effectiveness of the proposed strategy.

  6. Fault Analysis of Space Station DC Power Systems-Using Neural Network Adaptive Wavelets to Detect Faults

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Wang, Yanchun; Dolce, James L.

    1997-01-01

    This paper describes the application of neural network adaptive wavelets to fault diagnosis of the space station power system. The method combines the wavelet transform with a neural network by incorporating daughter wavelets into the weights. Therefore, the wavelet transform and neural network training procedure become one stage, which avoids the complex computation of wavelet parameters and makes the procedure more straightforward. The simulation results show that the proposed method is very efficient for the identification of fault locations.
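
    The idea of incorporating daughter wavelets into weights can be pictured as a wavelon-style unit whose output is a weighted sum of dilated and translated copies of a mother wavelet. The sketch below is a generic wavelet-network neuron for illustration (Morlet-like mother wavelet, hypothetical parameters), not the architecture used in this paper:

      import numpy as np

      def mother_wavelet(t):
          """Morlet-like mother wavelet."""
          return np.cos(1.75 * t) * np.exp(-t ** 2 / 2.0)

      def wavelet_neuron(x, weights, translations, dilations):
          """Weighted sum of daughter wavelets psi((x - b_j) / a_j)."""
          return sum(w * mother_wavelet((x - b) / a)
                     for w, b, a in zip(weights, translations, dilations))

      t = np.linspace(0.0, 1.0, 200)     # e.g. a normalized signal window from the power system
      y = wavelet_neuron(t, weights=[0.8, -0.3], translations=[0.2, 0.6], dilations=[0.05, 0.1])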

  7. Fly-By-Light/Power-By-Wire Fault-Tolerant Fiber-Optic Backplane

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2002-01-01

    The design and development of a fault-tolerant fiber-optic backplane to demonstrate feasibility of such architecture is presented. The simulation results of test cases on the backplane in the advent of induced faults are presented, and the fault recovery capability of the architecture is demonstrated. The architecture was designed, developed, and implemented using the Very High Speed Integrated Circuits (VHSIC) Hardware Description Language (VHDL). The architecture was synthesized and implemented in hardware using Field Programmable Gate Arrays (FPGA) on multiple prototype boards.

  8. Paleoclimatic signature in terrestrial flood deposits.

    PubMed

    Koltermann, C E; Gorelick, S M

    1992-06-26

    Large-scale process simulation was used to reconstruct the geologic evolution during the past 600,000 years of an alluvial fan in northern California. In order to reproduce the sedimentary record, the simulation accounted for the dynamics of river flooding, sedimentation, subsidence, land movement that resulted from faulting, and sea level changes. Paleoclimatic trends induced fluctuations in stream flows and dominated the development of the sedimentary deposits. The process simulation approach serves as a quantitative means to explore the genesis of sedimentary architecture and its link to past climatic conditions and fault motion.

  9. Dislocation Dissociation Strongly Influences on Frank-Read Source Nucleation and Microplasticity of Materials with Low Stacking Fault Energy

    NASA Astrophysics Data System (ADS)

    Huang, Min-Sheng; Zhu, Ya-Xin; Li, Zhen-Huan

    2014-04-01

    The influence of dislocation dissociation on the evolution of Frank-Read (F-R) sources is studied using three-dimensional discrete dislocation dynamics simulations (3D-DDD). The classical Orowan nucleation stress model and the recently proposed Benzerga nucleation time model for F-R sources are improved. This work shows that it is necessary to introduce the dislocation dissociation scheme into 3D-DDD simulations, especially for simulations of the micro-plasticity of small-sized materials with low stacking fault energy.
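
    The classical Orowan estimate for the stress required to activate a Frank-Read source of pinned segment length L, the baseline that the improved models build on, is usually quoted in the order-of-magnitude textbook form:

        \tau_{\mathrm{FR}} \approx \beta\,\frac{\mu b}{L},

    where \mu is the shear modulus, b the magnitude of the Burgers vector, and \beta a geometric factor of order unity.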

  10. Differential involvement of left prefrontal cortex in inductive and deductive reasoning.

    PubMed

    Goel, Vinod; Dolan, Raymond J

    2004-10-01

    While inductive and deductive reasoning are considered distinct logical and psychological processes, little is known about their respective neural basis. To address this issue we scanned 16 subjects with fMRI, using an event-related design, while they engaged in inductive and deductive reasoning tasks. Both types of reasoning were characterized by activation of left lateral prefrontal and bilateral dorsal frontal, parietal, and occipital cortices. Neural responses unique to each type of reasoning determined from the Reasoning Type (deduction and induction) by Task (reasoning and baseline) interaction indicated greater involvement of left inferior frontal gyrus (BA 44) in deduction than induction, while left dorsolateral (BA 8/9) prefrontal gyrus showed greater activity during induction than deduction. This pattern suggests a dissociation within prefrontal cortex for deductive and inductive reasoning.

  11. Probabilistic approach for earthquake scenarios in the Marmara region from dynamic rupture simulations

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo

    2014-05-01

    The Marmara region (Turkey), along the North Anatolian fault, is known to have a high potential for large earthquakes in the coming decades. For the purpose of seismic hazard/risk evaluation, kinematic and dynamic source models have been proposed (e.g. Oglesby and Mai, GJI, 2012). In general, the simulated earthquake scenarios depend on the underlying hypotheses and cannot be verified before the expected earthquake occurs. We therefore introduce a probabilistic treatment of the initial/boundary conditions so that the simulated scenarios can be analyzed statistically. We prepare different fault geometry models, tectonic loading conditions, and hypocenter locations. We keep the same simulation framework as for the dynamic rupture process of the adjacent 1999 Izmit earthquake (Aochi and Madariaga, BSSA, 2003), since the previous models were able to reproduce the seismological/geodetic aspects of that event. Irregularities in fault geometry play a significant role in controlling the rupture progress, and a relatively large change in geometry may act as a barrier. The variety of the simulated earthquake scenarios should be useful for estimating the variability of the expected ground motion.

  12. Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment

    NASA Astrophysics Data System (ADS)

    Brietzke, G. B.; Hainzl, S.; Zöller, G.

    2012-04-01

    As of today, seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art, fully dynamic description of all relevant physical processes in earthquake fault systems is likely not useful either, since it comes with a large number of degrees of freedom, poorly constrained model parameters, and a huge computational effort. Here, quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability, and aim at providing a link between basic physical concepts and the statistics of seismicity. Within this framework we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of the generated synthetic earthquake catalogs with respect to simplification (e.g. simple two-fault cases) as well as complication (e.g. hidden faults, geometric complexity, heterogeneities of constitutive parameters).

  13. Strength evolution of simulated carbonate-bearing faults: The role of normal stress and slip velocity

    NASA Astrophysics Data System (ADS)

    Mercuri, Marco; Scuderi, Marco Maria; Tesei, Telemaco; Carminati, Eugenio; Collettini, Cristiano

    2018-04-01

    A great number of earthquakes occur within thick carbonate sequences in the shallow crust. At the same time, carbonate fault rocks exhumed from depths < 6 km (i.e., from seismogenic depths) exhibit the coexistence of structures related to brittle (i.e., cataclasis) and ductile deformation processes (i.e., pressure solution and granular plasticity). We performed friction experiments on water-saturated simulated carbonate-bearing faults over a wide range of normal stresses (from 5 to 120 MPa) and slip velocities (from 0.3 to 100 μm/s). At high normal stresses (σn > 20 MPa) the fault gouges undergo strain weakening, which is more pronounced at slow slip velocities and causes a significant reduction of frictional strength, from μ = 0.7 to μ = 0.47. Microstructural analysis shows that fault gouge weakening is driven by deformation accommodated by cataclasis and pressure-insensitive deformation processes (pressure solution and granular plasticity) that become more efficient at slow slip velocities. The reduction in frictional strength caused by this strain-weakening behaviour, promoted by the activation of pressure-insensitive deformation, might play a significant role in the mechanics of carbonate-bearing faults.

  14. Fault Analysis and Detection in Microgrids with High PV Penetration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El Khatib, Mohamed; Hernandez Alvidrez, Javier; Ellis, Abraham

    In this report we focus on analyzing the behaviour of current-controlled PV inverters under faults in order to develop fault detection schemes for microgrids with high PV penetration. An inverter model suitable for steady-state fault studies is presented, and the impact of PV inverters on two protection elements is analyzed. The studied protection elements are the superimposed-quantities-based directional element and the negative sequence directional element. Additionally, several non-overcurrent fault detection schemes are discussed in this report for microgrids with high PV penetration. A detailed time-domain simulation study is presented to assess the performance of the presented fault detection schemes under different microgrid modes of operation.
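
    For readers unfamiliar with negative sequence elements, the quantity they monitor comes from the standard symmetrical-component decomposition; the sketch below shows that textbook calculation on assumed phasor values and is not taken from the report.

        # Symmetrical-component decomposition used by negative sequence
        # protection elements (generic textbook formula).
        import numpy as np

        a = np.exp(2j * np.pi / 3)          # 120-degree rotation operator

        def sequence_components(ph_a, ph_b, ph_c):
            """Return (zero, positive, negative) sequence phasors."""
            zero = (ph_a + ph_b + ph_c) / 3
            pos  = (ph_a + a * ph_b + a**2 * ph_c) / 3
            neg  = (ph_a + a**2 * ph_b + a * ph_c) / 3
            return zero, pos, neg

        # Example: a slightly unbalanced set of current phasors (assumed values, amps)
        Ia = 100 * np.exp(1j * np.deg2rad(0))
        Ib =  80 * np.exp(1j * np.deg2rad(-118))
        Ic = 100 * np.exp(1j * np.deg2rad(121))
        I0, I1, I2 = sequence_components(Ia, Ib, Ic)
        print("negative-sequence magnitude:", abs(I2))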

  15. Reverse fault growth and fault interaction with frictional interfaces: insights from analogue models

    NASA Astrophysics Data System (ADS)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2017-04-01

    The association of faulting and folding is a common feature in mountain chains, fold-and-thrust belts, and accretionary wedges. Kinematic models are developed and widely used to explain a range of relationships between faulting and folding. However, these models may result not to be completely appropriate to explain shortening in mechanically heterogeneous rock bodies. Weak layers, bedding surfaces, or pre-existing faults placed ahead of a propagating fault tip may influence the fault propagation rate itself and the associated fold shape. In this work, we employed clay analogue models to investigate how mechanical discontinuities affect the propagation rate and the associated fold shape during the growth of reverse master faults. The simulated master faults dip at 30° and 45°, recalling the range of the most frequent dip angles for active reverse faults that occurs in nature. The mechanical discontinuities are simulated by pre-cutting the clay pack. For both experimental setups (30° and 45° dipping faults) we analyzed three different configurations: 1) isotropic, i.e. without precuts; 2) with one precut in the middle of the clay pack; and 3) with two evenly-spaced precuts. To test the repeatability of the processes and to have a statistically valid dataset we replicate each configuration three times. The experiments were monitored by collecting successive snapshots with a high-resolution camera pointing at the side of the model. The pictures were then processed using the Digital Image Correlation method (D.I.C.), in order to extract the displacement and shear-rate fields. These two quantities effectively show both the on-fault and off-fault deformation, indicating the activity along the newly-formed faults and whether and at what stage the discontinuities (precuts) are reactivated. To study the fault propagation and fold shape variability we marked the position of the fault tips and the fold profiles for every successive step of deformation. Then we compared precut models with isotropic models to evaluate the trends of variability. Our results indicate that the discontinuities are reactivated especially when the tip of the newly-formed fault is either below or connected to them. During the stage of maximum activity along the precut, the faults slow down or even stop their propagation. The fault propagation systematically resumes when the angle between the fault and the precut is about 90° (critical angle); only during this stage the fault crosses the precut. The reactivation of the discontinuities induces an increase of the apical angle of the fault-related fold and produces wider limbs compared to the isotropic reference experiments.

  16. A Unified Nonlinear Adaptive Approach for Detection and Isolation of Engine Faults

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong; Farfan-Ramos, Luis; Simon, Donald L.

    2010-01-01

    A challenging problem in aircraft engine health management (EHM) system development is to detect and isolate faults in system components (i.e., compressor, turbine), actuators, and sensors. Existing nonlinear EHM methods often deal with component faults, actuator faults, and sensor faults separately, which may potentially lead to incorrect diagnostic decisions and unnecessary maintenance. Therefore, it would be ideal to address sensor faults, actuator faults, and component faults under one unified framework. This paper presents a systematic and unified nonlinear adaptive framework for detecting and isolating sensor faults, actuator faults, and component faults for aircraft engines. The fault detection and isolation (FDI) architecture consists of a parallel bank of nonlinear adaptive estimators. Adaptive thresholds are appropriately designed such that, in the presence of a particular fault, all components of the residual generated by the adaptive estimator corresponding to the actual fault type remain below their thresholds. If the faults are sufficiently different, then at least one component of the residual generated by each remaining adaptive estimator should exceed its threshold. Therefore, based on the specific response of the residuals, sensor faults, actuator faults, and component faults can be isolated. The effectiveness of the approach was evaluated using the NASA C-MAPSS turbofan engine model, and simulation results are presented.
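
    As a highly simplified stand-in for the bank of nonlinear adaptive estimators described above, the sketch below runs a single nominal observer on an assumed first-order plant and declares a fault when the residual exceeds a data-derived threshold; every parameter is illustrative and not from the paper, and the full method additionally isolates the fault type by comparing residuals across the estimator bank.

        # Residual-based fault detection with a threshold (one-estimator sketch).
        import numpy as np

        rng = np.random.default_rng(0)
        dt, T = 0.01, 20.0
        t = np.arange(0.0, T, dt)
        a_pole, L_gain = 1.0, 1.0            # plant pole and observer gain (assumed)
        u = 1.0                              # constant input

        x, x_hat = 0.0, 0.0
        residual = np.zeros_like(t)
        for k, tk in enumerate(t):
            f_act = 1.0 if tk >= 10.0 else 0.0                # actuator bias appears at t = 10 s
            x += dt * (-a_pole * x + u + f_act)               # true plant
            y = x + rng.normal(0.0, 0.01)                     # noisy measurement
            x_hat += dt * (-a_pole * x_hat + u + L_gain * (y - x_hat))   # fault-free observer
            residual[k] = abs(y - x_hat)

        threshold = 5.0 * residual[t < 10.0].max()            # crude data-driven threshold
        alarms = t[residual > threshold]
        if alarms.size:
            print("fault declared at t = %.2f s" % alarms[0])
        else:
            print("no fault declared")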

  17. The Role of Coseismic Coulomb Stress Changes in Shaping the Hard Link Between Normal Fault Segments

    NASA Astrophysics Data System (ADS)

    Hodge, M.; Fagereng, Å.; Biggs, J.

    2018-01-01

    The mechanism and evolution of fault linkage is important in the growth and development of large faults. Here we investigate the role of coseismic stress changes in shaping the hard links between parallel normal fault segments (or faults), by comparing numerical models of the Coulomb stress change from simulated earthquakes on two en echelon fault segments to natural observations of hard-linked fault geometry. We consider three simplified linking fault geometries: (1) fault bend, (2) breached relay ramp, and (3) strike-slip transform fault. We consider scenarios where either one or both segments rupture and vary the distance between segment tips. Fault bends and breached relay ramps are favored where segments underlap or when the strike-perpendicular distance between overlapping segments is less than 20% of their total length, matching all 14 documented examples. Transform fault linkage geometries are preferred when overlapping segments are laterally offset at larger distances. Few transform faults exist in continental extensional settings, and our model suggests that propagating faults or fault segments may first link through fault bends or breached ramps before reaching sufficient overlap for a transform fault to develop. Our results suggest that Coulomb stresses arising from multisegment ruptures or repeated earthquakes are consistent with natural observations of the geometry of hard links between parallel normal fault segments.
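
    The Coulomb failure stress change underlying such models is commonly written as ΔCFS = Δτ + μ′Δσn; the snippet below simply evaluates this definition for assumed stress changes (tension-positive convention, so unclamping is positive) and is a reading aid rather than the authors' code.

        # Coulomb failure stress change on a receiver fault,
        # delta_CFS = delta_tau + mu_eff * delta_sigma_n (illustrative values).
        def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
            """d_tau: shear stress change in the slip direction [MPa];
            d_sigma_n: normal stress change, positive = unclamping [MPa]."""
            return d_tau + mu_eff * d_sigma_n

        # Example: 0.3 MPa of shear loading combined with 0.2 MPa of clamping.
        print(coulomb_stress_change(0.3, -0.2))   # -> 0.22 MPa, still moved toward failure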

  18. 20 CFR 416.724 - Amounts of penalty deductions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Amounts of penalty deductions. 416.724 Section 416.724 Employees' Benefits SOCIAL SECURITY ADMINISTRATION SUPPLEMENTAL SECURITY INCOME FOR THE AGED, BLIND, AND DISABLED Reports Required Penalty Deductions § 416.724 Amounts of penalty deductions...

  19. 20 CFR 416.724 - Amounts of penalty deductions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Amounts of penalty deductions. 416.724 Section 416.724 Employees' Benefits SOCIAL SECURITY ADMINISTRATION SUPPLEMENTAL SECURITY INCOME FOR THE AGED, BLIND, AND DISABLED Reports Required Penalty Deductions § 416.724 Amounts of penalty deductions...

  20. CO2 storage and potential fault instability in the St. Lawrence Lowlands sedimentary basin (Quebec, Canada): Insights from coupled reservoir-geomechanical modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konstantinovskaya, E.; Rutqvist, J.; Malo, M.

    2014-01-21

    In this paper, coupled reservoir-geomechanical (TOUGH-FLAC) modeling is applied for the first time to the St. Lawrence Lowlands region to evaluate the potential for shear failure along pre-existing high-angle normal faults, as well as the potential for tensile failure in the caprock units (Utica Shale and Lorraine Group). This activity is part of a general assessment of the potential for safe CO2 injection into a sandstone reservoir (the Covey Hill Formation) within an Early Paleozoic sedimentary basin. Field and subsurface data are used to estimate the sealing properties of two reservoir-bounding faults (Yamaska and Champlain faults). The spatial variations in fluid pressure, effective minimum horizontal stress, and shear strain are calculated for different injection rates, using a simplified 2D geological model of the Becancour area, located ~110 km southwest of Quebec City. The simulation results show that initial fault permeability affects the timing, localization, rate, and length of fault shear slip. Contrary to the conventional view, our results suggest that shear failure may start earlier for a permeable fault than for a sealing fault, depending on the site-specific geologic setting. In simulations of a permeable fault, shear slip is nucleated along a 60 m long fault segment in a thin and brittle caprock unit (Utica Shale) trapped below a thicker and more ductile caprock unit (Lorraine Group), and then subsequently progresses up to the surface. In the case of a sealing fault, shear failure occurs later in time and is localized along a fault segment (300 m) below the caprock units. The presence of the inclined low-permeability Yamaska Fault close to the injection well causes asymmetric fluid-pressure buildup and lateral migration of the CO2 plume away from the fault, reducing the overall risk of CO2 leakage along faults. Finally, fluid-pressure-induced tensile fracturing occurs only under extremely high injection rates and is localized below the caprock units, which remain intact, preventing upward CO2 migration.

  1. Petascale computation of multi-physics seismic simulations

    NASA Astrophysics Data System (ADS)

    Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Wollherr, Stephanie; Duru, Kenneth C.

    2017-04-01

    Capturing the observed complexity of earthquake sources in concurrence with seismic wave propagation simulations is an inherently multi-scale, multi-physics problem. In this presentation, we present simulations of earthquake scenarios resolving high-detail dynamic rupture evolution and high-frequency ground motion. The simulations combine a multitude of representations of model complexity, such as non-linear fault friction, thermal and fluid effects, heterogeneous initial conditions for fault stress and fault strength, fault curvature and roughness, and on- and off-fault non-elastic failure to capture dynamic rupture behavior at the source, together with seismic wave attenuation, 3D subsurface structure, and bathymetry affecting seismic wave propagation. Performing such scenarios at the necessary spatio-temporal resolution requires highly optimized and massively parallel simulation tools which can efficiently exploit HPC facilities. Our up to multi-PetaFLOP simulations are performed with SeisSol (www.seissol.org), an open-source software package based on an ADER-Discontinuous Galerkin (DG) scheme solving the seismic wave equations in velocity-stress formulation in elastic, viscoelastic, and viscoplastic media with high-order accuracy in time and space. Our flux-based implementation of frictional failure remains free of spurious oscillations. Tetrahedral unstructured meshes allow for complicated model geometry. SeisSol has been optimized on all software levels, including assembler-level DG kernels which obtain 50% peak performance on some of the largest supercomputers worldwide; an overlapping MPI-OpenMP parallelization shadowing the multiphysics computations; usage of local time stepping; parallel input and output schemes; and direct interfaces to community standard data formats. All of these factors help to minimise the time-to-solution. The results presented highlight the fact that modern numerical methods and hardware-aware optimization for modern supercomputers are essential to further our understanding of earthquake source physics and to complement both physics-based ground motion research and empirical approaches in seismic hazard analysis. Lastly, we will conclude with an outlook on future exascale ADER-DG solvers for seismological applications.

  2. Estimation of spectral kurtosis

    NASA Astrophysics Data System (ADS)

    Sutawanir

    2017-03-01

    Rolling bearings are the most important elements in rotating machinery. Bearings frequently fall out of service for various reasons: heavy loads, unsuitable lubrication, ineffective sealing. Bearing faults may cause a decrease in performance. Analysis of bearing vibration signals has attracted attention in the field of condition monitoring and fault diagnosis, since these signals carry rich information for the early detection of bearing failures. Spectral kurtosis (SK) is a frequency-domain parameter indicating how the impulsiveness of a signal varies with frequency. Faults in rolling bearings give rise to a series of short impulse responses as the rolling elements strike the faults, which makes SK potentially useful for determining the frequency bands dominated by bearing fault signals. SK can provide a measure of the distance of the analyzed bearing from a healthy one, and it supplies information beyond that given by the power spectral density (psd). This paper explores the estimation of spectral kurtosis using the short-time Fourier transform, i.e., the spectrogram. The estimation of SK is similar to the estimation of the psd: it is a model-free, plug-in estimator. Some numerical simulation studies are discussed to support the methodology. The spectral kurtosis of some stationary signals is obtained analytically and used in the simulation study. Kurtosis in the time domain has long been a popular tool for detecting non-normality; spectral kurtosis extends kurtosis to the frequency domain. The relationship between time-domain and frequency-domain analysis is established through the power spectrum-autocovariance Fourier transform pair, and the power spectral density is estimated through the periodogram. In this paper, the short-time Fourier transform estimate of the spectral kurtosis is reviewed, and bearing faults (inner ring and outer ring) are simulated. The bearing response, power spectrum, and spectral kurtosis are plotted to visualize the pattern of each fault. Keywords: frequency domain, Fourier transform, spectral kurtosis, bearing fault
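
    A minimal sketch of a spectrogram-based SK estimate of the kind described above, applied to a synthetic impulse train; the sampling rate, burst parameters, and STFT settings are assumptions, and the estimator SK(f) = E|X(t,f)|^4 / (E|X(t,f)|^2)^2 - 2 is the common normalization for which a stationary Gaussian signal gives SK near zero.

        # Spectrogram-based spectral kurtosis on a synthetic bearing-like signal.
        import numpy as np
        from scipy.signal import stft

        fs = 20_000                                   # sampling rate [Hz] (assumed)
        t = np.arange(0, 1.0, 1 / fs)
        rng = np.random.default_rng(1)

        # Synthetic "outer-race" fault: repetitive decaying bursts of a 4 kHz resonance.
        fault_rate, resonance = 90.0, 4000.0          # Hz (assumed)
        signal = rng.normal(0, 1.0, t.size)           # broadband background noise
        for t0 in np.arange(0, 1.0, 1 / fault_rate):
            idx = t >= t0
            signal[idx] += 5 * np.exp(-800 * (t[idx] - t0)) * np.sin(2 * np.pi * resonance * (t[idx] - t0))

        f, tt, X = stft(signal, fs=fs, nperseg=256, noverlap=192)
        P2 = np.mean(np.abs(X) ** 2, axis=1)
        P4 = np.mean(np.abs(X) ** 4, axis=1)
        sk = P4 / P2 ** 2 - 2.0                       # spectral kurtosis estimate per frequency bin
        print("band with highest SK: %.0f Hz" % f[np.argmax(sk)])

    The highest-SK band should coincide with the excited resonance, which is how SK is used to pick the demodulation band for envelope analysis.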

  3. 26 CFR 1.873-1 - Deductions allowed nonresident alien individuals.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 9 2010-04-01 2010-04-01 false Deductions allowed nonresident alien individuals... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Nonresident Aliens and Foreign Corporations § 1.873-1 Deductions allowed nonresident alien individuals. (a) General provisions—(1) Allocation of deductions. In...

  4. Teleseismic body waves from dynamically rupturing shallow thrust faults: Are they opaque for surface-reflected phases?

    USGS Publications Warehouse

    Smith, D.E.; Aagaard, Brad T.; Heaton, T.H.

    2005-01-01

    We investigate whether a shallow-dipping thrust fault is prone to wave-slip interactions via surface-reflected waves affecting the dynamic slip. If so, can these interactions create faults that are opaque to radiated energy? Furthermore, in this case of a shallow-dipping thrust fault, can incorrectly assuming a transparent fault while using dislocation theory lead to underestimates of the seismic moment? Slip time histories are generated in three-dimensional dynamic rupture simulations while allowing for varying degrees of wave-slip interaction, controlled by the fault-friction models. Based on the slip time histories, P and SH seismograms are calculated for stations at teleseismic distances. The overburden pressure caused by gravity eliminates mode I opening except at the tip of the fault near the surface; hence, mode I opening has no effect on the teleseismic signal. Normalizing by a Haskell-like traditional kinematic rupture, we find teleseismic peak-to-peak displacement amplitudes are approximately 1.0 for both P and SH waves, except for the unrealistic case of zero sliding friction. Zero sliding friction has peak-to-peak amplitudes of 1.6 for P and 2.0 for SH waves; the fault slip oscillates about its equilibrium value, resulting in a large nonzero (0.08 Hz) spectral peak not seen in other ruptures. These results indicate that wave-slip interactions associated with surface-reflected phases in real earthquakes should have little to no effect on teleseismic motions. Thus, Haskell-like kinematic dislocation theory (transparent fault conditions) can be safely used to simulate teleseismic waveforms in the Earth.

  5. On-board fault management for autonomous spacecraft

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Stephan, Amy; Doyle, Susan C.; Martin, Eric; Sellers, Suzanne

    1991-01-01

    The dynamic nature of the Cargo Transfer Vehicle's (CTV) mission and the high level of autonomy required mandate a complete fault management system capable of operating under uncertain conditions. Such a fault management system must take into account the current mission phase and the environment (including the target vehicle), as well as the CTV's state of health. This level of capability is beyond the scope of current on-board fault management systems. This presentation will discuss work in progress at TRW to apply artificial intelligence to the problem of on-board fault management. The goal of this work is to develop fault management systems that can meet the needs of spacecraft that have long-range autonomy requirements. We have implemented a model-based approach to fault detection and isolation that does not require explicit characterization of failures prior to launch. It is thus able to detect failures that were not considered in the failure modes and effects analysis. We have applied this technique to several different subsystems and tested our approach against both simulations and an electrical power system hardware testbed. We present findings from simulation and hardware tests which demonstrate the ability of our model-based system to detect and isolate failures, and describe our work in porting the Ada version of this system to a flight-qualified processor. We also discuss current research aimed at expanding our system to monitor the entire spacecraft.

  6. Multiple incipient sensor faults diagnosis with application to high-speed railway traction devices.

    PubMed

    Wu, Yunkai; Jiang, Bin; Lu, Ningyun; Yang, Hao; Zhou, Yang

    2017-03-01

    This paper deals with the problem of incipient fault diagnosis for a class of Lipschitz nonlinear systems with sensor biases and explores further results on the total measurable fault information residual (ToMFIR). Firstly, state and output transformations are introduced to transform the original system into two subsystems. The first subsystem is subject to system disturbances and free from sensor faults, while the second subsystem contains the sensor faults but no system disturbances. Sensor faults in the second subsystem are then recast as actuator faults using a pseudo-actuator-based approach. Since the effects of system disturbances on the residual are completely decoupled, multiple incipient sensor faults can be detected by constructing the ToMFIR, and a fault detectability condition is derived for discriminating the detectable incipient sensor faults. Further, a sliding-mode observer (SMO) based fault isolation scheme is designed to guarantee accurate isolation of multiple sensor faults. Finally, simulation results conducted on a CRH2 high-speed railway traction device are given to demonstrate the effectiveness of the proposed approach.

  7. In situ nanoindentation study on plasticity and work hardening in aluminium with incoherent twin boundaries.

    PubMed

    Bufford, D; Liu, Y; Wang, J; Wang, H; Zhang, X

    2014-09-10

    Nanotwinned metals have been the focus of intense research recently, as twin boundaries may greatly enhance mechanical strength, while maintaining good ductility, electrical conductivity and thermal stability. Most prior studies have focused on low stacking-fault energy nanotwinned metals with coherent twin boundaries. In contrast, the plasticity of twinned high stacking-fault energy metals, such as aluminium with incoherent twin boundaries, has not been investigated. Here we report high work hardening capacity and plasticity in highly twinned aluminium containing abundant Σ3{112} incoherent twin boundaries based on in situ nanoindentation studies in a transmission electron microscope and corresponding molecular dynamics simulations. The simulations also reveal drastic differences in deformation mechanisms between nanotwinned copper and twinned aluminium ascribed to stacking-fault energy controlled dislocation-incoherent twin boundary interactions. This study provides new insight into incoherent twin boundary-dominated plasticity in high stacking-fault energy twinned metals.

  8. Application of an improved minimum entropy deconvolution method for railway rolling element bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Cheng, Yao; Zhou, Ning; Zhang, Weihua; Wang, Zhiwei

    2018-07-01

    Minimum entropy deconvolution is a widely used tool in machinery fault diagnosis because it enhances the impulsive component of a signal. The filter coefficients that largely determine the performance of minimum entropy deconvolution are calculated by an iterative procedure. This paper proposes an improved deconvolution method for the fault detection of rolling element bearings. The proposed method solves for the filter coefficients using the standard particle swarm optimization algorithm, assisted by a generalized spherical coordinate transformation. When optimizing the filter's performance for enhancing the impulses relevant to fault diagnosis (namely, those of faulty rolling element bearings), the proposed method outperformed the classical minimum entropy deconvolution method. The proposed method was validated on simulated and experimental signals from railway bearings. In both the simulation and the experimental studies, the proposed method delivered better deconvolution performance than the classical minimum entropy deconvolution method, especially in the case of low signal-to-noise ratio.
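
    The core idea, designing an FIR filter whose output kurtosis is maximal, can be sketched as below; scipy's differential evolution is used purely as a stand-in for the paper's particle swarm optimizer with spherical coordinate transformation, and all signal parameters are made up.

        # Deconvolution by kurtosis maximization: find a short FIR filter that
        # maximizes the (non-excess) kurtosis of the filtered signal.
        import numpy as np
        from scipy.optimize import differential_evolution
        from scipy.signal import lfilter

        rng = np.random.default_rng(2)
        n = 4000
        impulses = np.zeros(n)
        impulses[::200] = 1.0                               # periodic fault impulses
        x = lfilter([1.0], [1.0, -1.6, 0.8], impulses)      # transmission-path "blur"
        x += 0.2 * rng.normal(size=n)                       # measurement noise

        def neg_kurtosis(h):
            y = lfilter(h, [1.0], x)
            y = y - y.mean()
            return -np.mean(y ** 4) / np.mean(y ** 2) ** 2  # negative kurtosis (to minimize)

        flen = 12                                           # filter length (assumed)
        res = differential_evolution(neg_kurtosis, bounds=[(-1, 1)] * flen,
                                     seed=0, maxiter=50, tol=1e-6)
        identity = np.r_[1.0, np.zeros(flen - 1)]
        print("kurtosis before: %.2f   after: %.2f"
              % (-neg_kurtosis(identity), -neg_kurtosis(res.x)))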

  9. Simulative and experimental investigation on stator winding turn and unbalanced supply voltage fault diagnosis in induction motors using Artificial Neural Networks.

    PubMed

    Lashkari, Negin; Poshtan, Javad; Azgomi, Hamid Fekri

    2015-11-01

    The phase shifts between the line currents and phase voltages of induction motors can be used as an efficient fault indicator to detect and locate inter-turn stator short-circuit (ITSC) faults. However, unbalanced supply voltage is one of the contributing factors that inevitably affect the stator currents and therefore the three phase shifts. Thus, it is necessary to propose a method that is able to identify whether the unbalance of the three currents is caused by an ITSC fault or by a supply voltage fault. This paper presents a feedforward multilayer-perceptron Neural Network (NN), trained by back propagation, based on monitoring the negative sequence voltage and the three phase shifts. The data required for training and testing the NN are generated using a simulated model of the stator. Experimental results are presented to verify the accuracy of the proposed method.
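
    A hedged sketch of the classification step, using scikit-learn's MLPClassifier on synthetic features (three phase-shift angles plus negative-sequence voltage); the feature ranges and class structure below are invented for illustration and do not reproduce the paper's simulated stator model.

        # Feedforward NN separating ITSC faults from supply-voltage unbalance
        # on synthetic, assumed features.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        n = 600

        # Class 0: supply unbalance -> larger negative-sequence voltage, symmetric shifts.
        # Class 1: ITSC fault       -> small negative-sequence voltage, one shifted phase.
        v_neg_0 = rng.uniform(0.03, 0.10, n)                   # p.u. (assumed)
        shifts_0 = 30 + rng.normal(0, 1.0, (n, 3))             # degrees (assumed)
        v_neg_1 = rng.uniform(0.00, 0.02, n)
        shifts_1 = 30 + rng.normal(0, 1.0, (n, 3))
        shifts_1[:, 0] -= rng.uniform(3, 8, n)                 # faulty phase shifts more

        X = np.vstack([np.column_stack([shifts_0, v_neg_0]),
                       np.column_stack([shifts_1, v_neg_1])])
        y = np.r_[np.zeros(n), np.ones(n)]

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
        print("test accuracy:", clf.score(X_te, y_te))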

  10. Strain rate effect on fault slip and rupture evolution: Insight from meter-scale rock friction experiments

    NASA Astrophysics Data System (ADS)

    Xu, Shiqing; Fukuyama, Eiichi; Yamashita, Futoshi; Mizoguchi, Kazuo; Takizawa, Shigeru; Kawakata, Hironori

    2018-05-01

    We conduct meter-scale rock friction experiments to study strain rate effect on fault slip and rupture evolution. Two rock samples made of Indian metagabbro, with a nominal contact dimension of 1.5 m long and 0.1 m wide, are juxtaposed and loaded in a direct shear configuration to simulate the fault motion. A series of experimental tests, under constant loading rates ranging from 0.01 mm/s to 1 mm/s and under a fixed normal stress of 6.7 MPa, are performed to simulate conditions with changing strain rates. Load cells and displacement transducers are utilized to examine the macroscopic fault behavior, while high-density arrays of strain gauges close to the fault are used to investigate the local fault behavior. The observations show that the macroscopic peak strength, strength drop, and the rate of strength drop can increase with increasing loading rate. At the local scale, the observations reveal that slow loading rates favor generation of characteristic ruptures that always nucleate in the form of slow slip at about the same location. In contrast, fast loading rates can promote very abrupt rupture nucleation and along-strike scatter of hypocenter locations. At a given propagation distance, rupture speed tends to increase with increasing loading rate. We propose that a strain-rate-dependent fault fragmentation process can enhance the efficiency of fault healing during the stick period, which together with healing time controls the recovery of fault strength. In addition, a strain-rate-dependent weakening mechanism can be activated during the slip period, which together with strain energy selects the modes of fault slip and rupture propagation. The results help to understand the spectrum of fault slip and rock deformation modes in nature, and emphasize the role of heterogeneity in tuning fault behavior under different strain rates.

  11. Pseudo-fault signal assisted EMD for fault detection and isolation in rotating machines

    NASA Astrophysics Data System (ADS)

    Singh, Dheeraj Sharan; Zhao, Qing

    2016-12-01

    This paper presents a novel data-driven technique for the detection and isolation of faults that generate impacts in rotating equipment. The technique is built upon the principles of empirical mode decomposition (EMD), envelope analysis, and a pseudo-fault signal for fault separation. Firstly, the most dominant intrinsic mode function (IMF), which contains all the necessary information about the faults, is identified from the EMD of the raw signal. The envelope of this IMF is often modulated by multiple vibration sources and noise. A second-level decomposition is then performed by applying pseudo-fault signal (PFS) assisted EMD to the envelope. The pseudo-fault signal is constructed from the known fault characteristic frequency of the particular machine. The objective of using this external (pseudo-fault) signal is to isolate the different fault frequencies present in the envelope. The pseudo-fault signal serves two purposes: (i) it mitigates the mode-mixing problem inherent in EMD, and (ii) it isolates and quantifies a particular fault frequency component. The proposed technique is suitable for real-time implementation and has been validated on simulated faults and on experimental data from a bearing and a gear-box set-up, respectively.
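
    A rough sketch of the processing chain as we read it from the abstract (first-level EMD, dominant IMF, Hilbert envelope, then PFS-assisted second EMD), assuming the third-party PyEMD package (PyPI name EMD-signal); the amplitudes, frequencies, and the correlation-based selection at the end are illustrative choices, not the authors'.

        # Pseudo-fault-signal assisted EMD, illustrative chain only.
        import numpy as np
        from scipy.signal import hilbert
        from PyEMD import EMD

        fs = 10_000
        t = np.arange(0, 1.0, 1 / fs)
        rng = np.random.default_rng(4)

        # Synthetic vibration: bursts repeating at the fault characteristic frequency (80 Hz).
        fault_freq = 80.0
        x = 0.2 * rng.normal(size=t.size)
        for t0 in np.arange(0, 1.0, 1 / fault_freq):
            m = t >= t0
            x[m] += np.exp(-600 * (t[m] - t0)) * np.sin(2 * np.pi * 3000 * (t[m] - t0))

        imfs = EMD()(x)                                       # first-level EMD
        dominant = imfs[np.argmax([np.sum(c ** 2) for c in imfs])]
        envelope = np.abs(hilbert(dominant))                  # demodulated envelope

        pseudo = 0.5 * np.sin(2 * np.pi * fault_freq * t)     # pseudo-fault signal
        env_imfs = EMD()(envelope - envelope.mean() + pseudo) # PFS-assisted second EMD

        # The IMF best correlated with the pseudo-fault signal carries the fault component.
        corr = [abs(np.corrcoef(c, pseudo)[0, 1]) for c in env_imfs]
        print("fault-frequency IMF index:", int(np.argmax(corr)))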

  12. Feasibility analysis of a novel hybrid-type superconducting circuit breaker in multi-terminal HVDC networks

    NASA Astrophysics Data System (ADS)

    Khan, Umer Amir; Lee, Jong-Geon; Seo, In-Jin; Amir, Faisal; Lee, Bang-Wook

    2015-11-01

    Voltage source converter-based HVDC systems (VSC-HVDC) are a better alternative than conventional thyristor-based HVDC systems, especially for developing multi-terminal HVDC systems (MTDC). However, one of the key obstacles in developing MTDC is the absence of an adequate protection system that can quickly detect faults, locate the faulty line and trip the HVDC circuit breakers (DCCBs) to interrupt the DC fault current. In this paper, a novel hybrid-type superconducting circuit breaker (SDCCB) is proposed and feasibility analyses of its application in MTDC are presented. The SDCCB has a superconducting fault current limiter (SFCL) located in the main current path to limit fault currents until the final trip signal is received. After the trip signal the IGBT located in the main line commutates the current into a parallel line where DC current is forced to zero by the combination of IGBTs and surge arresters. Fault simulations for three-, four- and five-terminal MTDC were performed and SDCCB performance was evaluated in these MTDC. Passive current limitation by SFCL caused a significant reduction of fault current interruption stress in the SDCCB. It was observed that the DC current could change direction in MTDC after a fault and the SDCCB was modified to break the DC current in both the forward and reverse directions. The simulation results suggest that the proposed SDCCB could successfully suppress the DC fault current, cause a timely interruption, and isolate the faulty HVDC line in MTDC.

  13. 26 CFR 1.832-5 - Deductions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... TAXES Other Insurance Companies § 1.832-5 Deductions. (a) The deductions allowable are specified in... provisions of section 1212. The deduction is the same as that allowed mutual insurance companies subject to... companies, other than mutual fire insurance companies described in section 831(a)(3)(A) and the regulations...

  14. 42 CFR 408.43 - Deduction from social security benefits.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 2 2011-10-01 2011-10-01 false Deduction from social security benefits. 408.43... § 408.43 Deduction from social security benefits. SSA, acting as CMS's agent, deducts the premiums from the monthly social security benefits if the enrollee is not entitled to railroad retirement benefits...

  15. Children's and Adults' Evaluation of Their Own Inductive Inferences, Deductive Inferences, and Guesses

    ERIC Educational Resources Information Center

    Pillow, Bradford H.; Pearson, RaeAnne M.

    2009-01-01

    Adults' and kindergarten through fourth-grade children's evaluations and explanations of inductive inferences, deductive inferences, and guesses were assessed. Beginning in kindergarten, participants rated deductions as more certain than weak inductions or guesses. Beginning in third grade, deductions were rated as more certain than strong…

  16. 48 CFR 1852.236-71 - Additive or deductive items.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false Additive or deductive... and Clauses 1852.236-71 Additive or deductive items. As prescribed in 1836.570(a), insert the following provision: Additive or Deductive Items (MAR 1989) (a) The low bidder for purposes of award shall...

  17. 48 CFR 452.236-70 - Additive or Deductive Items.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 436.205, insert the following provision: Additive or... listed in the schedule) those additive or deductive bid items providing the most features of the work...

  18. 48 CFR 1452.236-71 - Additive or Deductive Items.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 1436.571, insert the following provision: Additive or... the bidder having the lowest total of the base bid and a combination of additive and deductive items...

  19. 48 CFR 1436.571 - Additive and deductive items.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Additive and deductive... Additive and deductive items. If it appears that funds available for a construction project may be... the work as specified and for one or more additive or deductive bid items which add or omit specified...

  20. 48 CFR 452.236-70 - Additive or Deductive Items.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 436.205, insert the following provision: Additive or... listed in the schedule) those additive or deductive bid items providing the most features of the work...

  1. 48 CFR 1836.213-370 - Additive and deductive items.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Additive and deductive... Special Aspects of Contracting for Construction 1836.213-370 Additive and deductive items. When it appears... the work generally as specified and one or more additive or deductive bid items progressively adding...

  2. 48 CFR 452.236-70 - Additive or Deductive Items.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 436.205, insert the following provision: Additive or... listed in the schedule) those additive or deductive bid items providing the most features of the work...

  3. 48 CFR 1836.213-370 - Additive and deductive items.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false Additive and deductive... Special Aspects of Contracting for Construction 1836.213-370 Additive and deductive items. When it appears... the work generally as specified and one or more additive or deductive bid items progressively adding...

  4. 48 CFR 1436.571 - Additive and deductive items.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Additive and deductive... Additive and deductive items. If it appears that funds available for a construction project may be... the work as specified and for one or more additive or deductive bid items which add or omit specified...

  5. 48 CFR 1836.213-370 - Additive and deductive items.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Additive and deductive... Special Aspects of Contracting for Construction 1836.213-370 Additive and deductive items. When it appears... the work generally as specified and one or more additive or deductive bid items progressively adding...

  6. 48 CFR 1836.213-370 - Additive and deductive items.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Additive and deductive... Special Aspects of Contracting for Construction 1836.213-370 Additive and deductive items. When it appears... the work generally as specified and one or more additive or deductive bid items progressively adding...

  7. 48 CFR 1436.571 - Additive and deductive items.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Additive and deductive... Additive and deductive items. If it appears that funds available for a construction project may be... the work as specified and for one or more additive or deductive bid items which add or omit specified...

  8. 48 CFR 452.236-70 - Additive or Deductive Items.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 436.205, insert the following provision: Additive or... listed in the schedule) those additive or deductive bid items providing the most features of the work...

  9. 48 CFR 1836.213-370 - Additive and deductive items.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Additive and deductive... Special Aspects of Contracting for Construction 1836.213-370 Additive and deductive items. When it appears... the work generally as specified and one or more additive or deductive bid items progressively adding...

  10. 48 CFR 1436.571 - Additive and deductive items.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Additive and deductive... Additive and deductive items. If it appears that funds available for a construction project may be... the work as specified and for one or more additive or deductive bid items which add or omit specified...

  11. 48 CFR 1452.236-71 - Additive or Deductive Items.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 1436.571, insert the following provision: Additive or... the bidder having the lowest total of the base bid and a combination of additive and deductive items...

  12. 48 CFR 1852.236-71 - Additive or deductive items.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Additive or deductive... and Clauses 1852.236-71 Additive or deductive items. As prescribed in 1836.570(a), insert the following provision: Additive or Deductive Items (MAR 1989) (a) The low bidder for purposes of award shall...

  13. 48 CFR 1852.236-71 - Additive or deductive items.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Additive or deductive... and Clauses 1852.236-71 Additive or deductive items. As prescribed in 1836.570(a), insert the following provision: Additive or Deductive Items (MAR 1989) (a) The low bidder for purposes of award shall...

  14. 48 CFR 452.236-70 - Additive or Deductive Items.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 436.205, insert the following provision: Additive or... listed in the schedule) those additive or deductive bid items providing the most features of the work...

  15. 48 CFR 1852.236-71 - Additive or deductive items.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Additive or deductive... and Clauses 1852.236-71 Additive or deductive items. As prescribed in 1836.570(a), insert the following provision: Additive or Deductive Items (MAR 1989) (a) The low bidder for purposes of award shall...

  16. 48 CFR 1452.236-71 - Additive or Deductive Items.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 1436.571, insert the following provision: Additive or... the bidder having the lowest total of the base bid and a combination of additive and deductive items...

  17. 48 CFR 1436.571 - Additive and deductive items.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Additive and deductive... Additive and deductive items. If it appears that funds available for a construction project may be... the work as specified and for one or more additive or deductive bid items which add or omit specified...

  18. 48 CFR 1452.236-71 - Additive or Deductive Items.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 1436.571, insert the following provision: Additive or... the bidder having the lowest total of the base bid and a combination of additive and deductive items...

  19. 48 CFR 1452.236-71 - Additive or Deductive Items.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 1436.571, insert the following provision: Additive or... the bidder having the lowest total of the base bid and a combination of additive and deductive items...

  20. 12 CFR 347.208 - Assessment base deductions by insured branch.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 5 2012-01-01 2012-01-01 false Assessment base deductions by insured branch... STATEMENTS OF GENERAL POLICY INTERNATIONAL BANKING Foreign Banks § 347.208 Assessment base deductions by..., branches, agencies, or wholly owned subsidiaries may be deducted from the assessment base of the insured...

  1. 12 CFR 347.208 - Assessment base deductions by insured branch.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 4 2011-01-01 2011-01-01 false Assessment base deductions by insured branch... STATEMENTS OF GENERAL POLICY INTERNATIONAL BANKING Foreign Banks § 347.208 Assessment base deductions by..., branches, agencies, or wholly owned subsidiaries may be deducted from the assessment base of the insured...

  2. 22 CFR 512.22 - Deduction from pay.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 2 2014-04-01 2014-04-01 false Deduction from pay. 512.22 Section 512.22 Foreign Relations BROADCASTING BOARD OF GOVERNORS COLLECTION OF DEBTS UNDER THE DEBT COLLECTION ACT OF 1982 Salary Offset § 512.22 Deduction from pay. (a) Deduction by salary offset, from an employee's...

  3. 22 CFR 512.22 - Deduction from pay.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 2 2012-04-01 2009-04-01 true Deduction from pay. 512.22 Section 512.22 Foreign Relations BROADCASTING BOARD OF GOVERNORS COLLECTION OF DEBTS UNDER THE DEBT COLLECTION ACT OF 1982 Salary Offset § 512.22 Deduction from pay. (a) Deduction by salary offset, from an employee's...

  4. 22 CFR 512.22 - Deduction from pay.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 2 2011-04-01 2009-04-01 true Deduction from pay. 512.22 Section 512.22 Foreign Relations BROADCASTING BOARD OF GOVERNORS COLLECTION OF DEBTS UNDER THE DEBT COLLECTION ACT OF 1982 Salary Offset § 512.22 Deduction from pay. (a) Deduction by salary offset, from an employee's...

  5. 22 CFR 512.22 - Deduction from pay.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 2 2013-04-01 2009-04-01 true Deduction from pay. 512.22 Section 512.22 Foreign Relations BROADCASTING BOARD OF GOVERNORS COLLECTION OF DEBTS UNDER THE DEBT COLLECTION ACT OF 1982 Salary Offset § 512.22 Deduction from pay. (a) Deduction by salary offset, from an employee's...

  6. 42 CFR 408.43 - Deduction from social security benefits.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 2 2014-10-01 2014-10-01 false Deduction from social security benefits. 408.43... § 408.43 Deduction from social security benefits. SSA, acting as CMS's agent, deducts the premiums from the monthly social security benefits if the enrollee is not entitled to railroad retirement benefits...

  7. 42 CFR 408.43 - Deduction from social security benefits.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 2 2012-10-01 2012-10-01 false Deduction from social security benefits. 408.43... § 408.43 Deduction from social security benefits. SSA, acting as CMS's agent, deducts the premiums from the monthly social security benefits if the enrollee is not entitled to railroad retirement benefits...

  8. 42 CFR 408.43 - Deduction from social security benefits.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 2 2013-10-01 2013-10-01 false Deduction from social security benefits. 408.43... § 408.43 Deduction from social security benefits. SSA, acting as CMS's agent, deducts the premiums from the monthly social security benefits if the enrollee is not entitled to railroad retirement benefits...

  9. 42 CFR 408.43 - Deduction from social security benefits.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Deduction from social security benefits. 408.43... § 408.43 Deduction from social security benefits. SSA, acting as CMS's agent, deducts the premiums from the monthly social security benefits if the enrollee is not entitled to railroad retirement benefits...

  10. 12 CFR 347.208 - Assessment base deductions by insured branch.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Assessment base deductions by insured branch... STATEMENTS OF GENERAL POLICY INTERNATIONAL BANKING Foreign Banks § 347.208 Assessment base deductions by..., branches, agencies, or wholly owned subsidiaries may be deducted from the assessment base of the insured...

  11. Deductive Error Diagnosis and Inductive Error Generalization for Intelligent Tutoring Systems.

    ERIC Educational Resources Information Center

    Hoppe, H. Ulrich

    1994-01-01

    Examines the deductive approach to error diagnosis for intelligent tutoring systems. Topics covered include the principles of the deductive approach to diagnosis; domain-specific heuristics to solve the problem of generalizing error patterns; and deductive diagnosis and the hypertext-based learning environment. (Contains 26 references.) (JLB)

  12. 29 CFR 1450.23 - Deduction from pay.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 4 2010-07-01 2010-07-01 false Deduction from pay. 1450.23 Section 1450.23 Labor... OWED THE UNITED STATES Salary Offset § 1450.23 Deduction from pay. (a) Deduction by salary offset, from an employee's current disposable pay, shall be subject to the following conditions: (1) Ordinarily...

  13. 22 CFR 512.22 - Deduction from pay.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Deduction from pay. 512.22 Section 512.22... 1982 Salary Offset § 512.22 Deduction from pay. (a) Deduction by salary offset, from an employee's disposable current pay, shall be subject to the following circumstances: (1) When funds are available, the...

  14. Numerical Investigation of Earthquake Nucleation on a Laboratory-Scale Heterogeneous Fault with Rate-and-State Friction

    NASA Astrophysics Data System (ADS)

    Higgins, N.; Lapusta, N.

    2014-12-01

    Many large earthquakes on natural faults are preceded by smaller events, often termed foreshocks, that occur close in time and space to the larger event that follows. Understanding the origin of such events is important for understanding earthquake physics. Unique laboratory experiments of earthquake nucleation in a meter-scale slab of granite (McLaskey and Kilgore, 2013; McLaskey et al., 2014) demonstrate that sample-scale nucleation processes are also accompanied by much smaller seismic events. One potential explanation for these foreshocks is that they occur on small asperities - or bumps - on the fault interface, which may also be the locations of smaller critical nucleation size. We explore this possibility through 3D numerical simulations of a heterogeneous 2D fault embedded in a homogeneous elastic half-space, in an attempt to qualitatively reproduce the laboratory observations of foreshocks. In our model, the simulated fault interface is governed by rate-and-state friction with laboratory-relevant frictional properties, fault loading, and fault size. To create favorable locations for foreshocks, the fault surface heterogeneity is represented as patches of increased normal stress, decreased characteristic slip distance L, or both. Our simulation results indicate that one can create a rate-and-state model of the experimental observations. Models with a combination of higher normal stress and lower L at the patches are closest to matching the laboratory observations of foreshocks in moment magnitude, source size, and stress drop. In particular, we find that, when the local compression is increased, foreshocks can occur on patches that are smaller than theoretical critical nucleation size estimates. The additional inclusion of lower L for these patches helps to keep stress drops within the range observed in experiments, and is compatible with the asperity model of foreshock sources, since one would expect more compressed spots to be smoother (and hence have lower L). In this heterogeneous rate-and-state fault model, the foreshocks interact with each other and with the overall nucleation process through their postseismic slip. The interplay amongst foreshocks, and between foreshocks and the larger-scale nucleation process, is a topic of our future work.
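
    For context, the sketch below integrates the standard single-degree-of-freedom (spring-slider) quasi-dynamic rate-and-state model with the aging law; the parameter values are generic laboratory-scale assumptions, and this is far simpler than the heterogeneous 3D fault model used in the study.

        # Quasi-dynamic spring-slider with rate-and-state (aging-law) friction.
        import numpy as np
        from scipy.integrate import solve_ivp

        sigma = 5e6            # normal stress [Pa] (assumed)
        a, b  = 0.010, 0.015   # rate-and-state parameters (velocity-weakening since b > a)
        L     = 5e-6           # characteristic slip distance [m]
        k     = 2e9            # loading stiffness per unit area [Pa/m]
        eta   = 5e6            # radiation-damping coefficient [Pa s/m]
        v_l   = 1e-6           # load-point velocity [m/s]

        def rhs(t, y):
            v, theta = y
            dtheta = 1.0 - v * theta / L                       # aging law
            # stress balance: sigma*(a/v)*dv + sigma*(b/theta)*dtheta = k*(v_l - v) - eta*dv
            dv = (k * (v_l - v) - sigma * b * dtheta / theta) / (sigma * a / v + eta)
            return [dv, dtheta]

        kc = sigma * (b - a) / L                               # critical stiffness
        print(f"k/k_c = {k/kc:.2f}  (k < k_c, so stick-slip instability is expected)")

        sol = solve_ivp(rhs, (0.0, 600.0), [0.5 * v_l, L / v_l], method="LSODA",
                        rtol=1e-8, atol=[1e-14, 1e-10], max_step=1.0)
        print("peak slip rate reached: %.2e m/s" % sol.y[0].max())

    Nucleation in this toy model appears as slow acceleration of slip well above the load-point rate before the fast instability, the 0-D analogue of the slow-slip nucleation phase discussed above.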

  15. Predicted performance of an integrated modular engine system

    NASA Technical Reports Server (NTRS)

    Binder, Michael; Felder, James L.

    1993-01-01

    Space vehicle propulsion systems are traditionally comprised of a cluster of discrete engines, each with its own set of turbopumps, valves, and a thrust chamber. The Integrated Modular Engine (IME) concept proposes a vehicle propulsion system comprised of multiple turbopumps, valves, and thrust chambers which are all interconnected. The IME concept has potential advantages in fault tolerance, weight, and operational efficiency compared with the traditional clustered engine configuration. The purpose of this study is to examine the steady-state performance of an IME system with various components removed to simulate fault conditions. An IME configuration for a hydrogen/oxygen expander cycle propulsion system with four sets of turbopumps and eight thrust chambers has been modeled using the Rocket Engine Transient Simulator (ROCETS) program. The nominal steady-state performance is simulated, as well as turbopump, thrust chamber, and duct failures. The impact of component failures on system performance is discussed in the context of the system's fault-tolerant capabilities.

  16. Assessment on the influence of resistive superconducting fault current limiter in VSC-HVDC system

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Geon; Khan, Umer Amir; Hwang, Jae-Sang; Seong, Jae-Kyu; Shin, Woo-Ju; Park, Byung-Bae; Lee, Bang-Wook

    2014-09-01

    Due to the lower risk of commutation failures, harmonics, and reactive power consumption, Voltage Source Converter based HVDC systems (VSC-HVDC) are regarded as the optimum HVDC solution for the future power grid. However, the absence of suitable fault protection devices for HVDC systems hinders efficient VSC-HVDC power grid design. In order to enhance the reliability of the VSC-HVDC power grid against fault current problems, the application of resistive Superconducting Fault Current Limiters (SFCLs) could be considered. SFCLs could also be applied to VSC-HVDC systems with integrated AC power systems in order to enhance the transient response and the robustness of the system. In this paper, in order to evaluate the role of SFCLs in VSC-HVDC systems and to determine their suitable position in VSC-HVDC power systems integrated with an AC power system, a simulation model based on the Korea Jeju-Haenam HVDC power system was designed in Matlab Simulink/SimPowerSystems. This model consisted of a VSC-HVDC system connected to an AC microgrid. Using the designed VSC-HVDC system, feasible locations for resistive SFCLs were evaluated when DC line-to-line, DC line-to-ground, and three-phase AC faults occurred. Consequently, it was found that the simulation model was effective in evaluating the positive effects of resistive SFCLs for the suppression of fault currents in VSC-HVDC systems as well as in the integrated AC systems. Finally, the optimum locations of SFCLs in VSC-HVDC transmission systems were suggested based on the simulation results.

  17. Analogue modelling for localization of deformation in the extensional pull-apart basins: comparison with the west part of NAF, Turkey

    NASA Astrophysics Data System (ADS)

    Bulkan, Sibel; Storti, Fabrizio; Cavozzi, Cristian; Vannucchi, Paola

    2017-04-01

    Analogue modelling remains one of the best methods for investigating the progressive deformation of pull-apart systems along strike-slip faults that are poorly known. Analogue model experiments for the North Anatolian Fault (NAF) system around the Sea of Marmara are extremely rare in the geological literature. Our purpose in this work is to monitor the relation between the horizontal propagation and branching of the strike-slip fault and the structural and topographic expression resulting from this process. These experiments may provide insights into the geometric evolution and kinematics of the western part of the NAF system. For this purpose, we ran several appropriately scaled 3D sandbox experiments. Plexiglass sheets were purposely cut to simulate the geometry of the NAF. Silicone was placed on top of these to simulate the viscous lower crust, while the brittle upper crust was simulated with pure dry sand. Dextral relative fault motion was imposed at different velocities to reproduce different strain rates and pull-apart formation at the releasing bend. Our experiments demonstrate the variation of the shear zone shapes and how the master fault propagates during the deformation, helping to cover the gaps between geodetic and geologic slip information. Lower crustal flow may explain how the deformation is transferred to the upper crust and how stress is partitioned among the strike-slip faults and pull-apart basin systems. Stress field evolution also appears to play a role in promoting strain localization. We compare the results of these experiments with natural examples around the western part of the NAF and with seismic observations.

  18. Fault trees for decision making in systems analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, Howard E.

    1975-10-09

    The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure.
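
    The IMPORTANCE code itself is not reproduced in the record; as a minimal illustration of the kind of ranking it performs, the following Python sketch computes the Birnbaum importance of basic events from minimal cut sets under the rare-event (min-cut upper bound) approximation. The example fault tree and failure probabilities are hypothetical.

        def top_event_probability(cut_sets, p):
            # Min-cut-set upper bound with independent basic events:
            # P(top) ~= 1 - prod over cut sets of (1 - prod of event probabilities).
            prob_no_cut = 1.0
            for cut in cut_sets:
                p_cut = 1.0
                for event in cut:
                    p_cut *= p[event]
                prob_no_cut *= (1.0 - p_cut)
            return 1.0 - prob_no_cut

        def birnbaum_importance(cut_sets, p, event):
            # Birnbaum importance: P(top | event failed) - P(top | event working).
            return (top_event_probability(cut_sets, {**p, event: 1.0})
                    - top_event_probability(cut_sets, {**p, event: 0.0}))

        # Hypothetical fault tree: TOP = (A AND B) OR C, with basic event probabilities.
        cut_sets = [("A", "B"), ("C",)]
        p = {"A": 0.01, "B": 0.02, "C": 0.001}
        for event in sorted(p, key=lambda e: birnbaum_importance(cut_sets, p, e), reverse=True):
            print(event, round(birnbaum_importance(cut_sets, p, event), 6))

    Events with relatively high Birnbaum importance are the ones where a design change or an added sensor buys the most, which is the sense in which such rankings guide system upgrade and sensor placement.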

  19. Power flow analysis and optimal locations of resistive type superconducting fault current limiters.

    PubMed

    Zhang, Xiuchang; Ruiz, Harold S; Geng, Jianzhao; Shen, Boyang; Fu, Lin; Zhang, Heng; Coombs, Tim A

    2016-01-01

    Based on conventional approaches for the integration of resistive-type superconducting fault current limiters (SFCLs) on electric distribution networks, SFCL models largely rely on the insertion of a step or exponential resistance that is determined by a predefined quenching time. In this paper, we expand the scope of the aforementioned models by considering the actual behaviour of an SFCL in terms of the temperature-dependent power-law relation between the electric field and the current density that is characteristic of high temperature superconductors. Our results are compared to the step-resistance models for the sake of discussion and clarity of the conclusions. Both SFCL models were integrated into a power system model built based on the UK power standard, to study the impact of these protection strategies on the performance of the overall electricity network. As a representative renewable energy source, a 90 MVA wind farm was considered for the simulations. Three fault conditions were simulated, and the figures for the fault current reduction predicted by both fault current limiting models have been compared in terms of multiple current measuring points and allocation strategies. Consequently, we have shown that the incorporation of the E - J characteristics and thermal properties of the superconductor at the simulation level of electric power systems is crucial for reliability estimations and for determining the optimal locations of resistive type SFCLs in distributed power networks. Our results may help decision making by distribution network operators regarding investment and promotion of SFCL technologies, as it is possible to determine the maximum number of SFCLs necessary to protect against different fault conditions at multiple locations.
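
    The E-J characteristic mentioned in the abstract is usually the power-law relation used for high temperature superconductors; the linear temperature dependence of the critical current density shown below is one common parameterization, given here only for illustration:

        E(J,T) = E_c \left( \frac{J}{J_c(T)} \right)^{n},
        \qquad J_c(T) \approx J_{c0}\,\frac{T_c - T}{T_c - T_0} \quad (T_0 \le T < T_c)

    The limiting resistance then emerges from solving this relation together with a thermal balance for the conductor temperature, rather than from a prescribed step resistance and quench time.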

  20. A novel Lagrangian approach for the stable numerical simulation of fault and fracture mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franceschini, Andrea; Ferronato, Massimiliano, E-mail: massimiliano.ferronato@unipd.it; Janna, Carlo

    The simulation of the mechanics of geological faults and fractures is of paramount importance in several applications, such as ensuring the safety of the underground storage of wastes and hydrocarbons or predicting the possible seismicity triggered by the production and injection of subsurface fluids. However, the stable numerical modeling of ground ruptures is still an open issue. The present work introduces a novel formulation based on the use of the Lagrange multipliers to prescribe the constraints on the contact surfaces. The variational formulation is modified in order to take into account the frictional work along the activated fault portion according to the principle of maximum plastic dissipation. The numerical model, developed in the framework of the Finite Element method, provides stable solutions with a fast convergence of the non-linear problem. The stabilizing properties of the proposed model are emphasized with the aid of a realistic numerical example dealing with the generation of ground fractures due to groundwater withdrawal in arid regions. - Highlights: • A numerical model is developed for the simulation of fault and fracture mechanics. • The model is implemented in the framework of the Finite Element method and with the aid of Lagrange multipliers. • The proposed formulation introduces a new contribution due to the frictional work on the portion of activated fault. • The resulting algorithm is highly non-linear as the portion of activated fault is itself unknown. • The numerical solution is validated against analytical results and proves to be stable also in realistic applications.

  1. Effect of Fault Parameter Uncertainties on PSHA explored by Monte Carlo Simulations: A case study for southern Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Akinci, A.; Pace, B.

    2017-12-01

    In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at a 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of several fault parameters on ground motion predictions for 10% exceedance in 50-year hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. The variability of each selected fault parameter is represented by a truncated normal distribution defined by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of the logic tree, is used to capture the uncertainty in the seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of every logic tree branch. The branches of the logic tree analyzed through the Monte Carlo approach are maximum magnitude, fault length, fault width, fault dip, and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying that fault parameter while fixing the others. However, in this study we do not investigate the sensitivity of the mean hazard results to the choice of different GMPEs. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
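
    A minimal sketch of the sampling step described above is given below, assuming truncated normal distributions for fault length, width, and slip rate and a generic magnitude-area scaling; the fault dimensions, the +/-2-sigma truncation, and the use of the Wells and Coppersmith (1994) all-slip-type regression are illustrative assumptions, not the parameters of the study.

        import numpy as np

        def truncated_normal(mean, std, n, nsig, rng):
            # Rejection-sample a normal truncated at +/- nsig standard deviations.
            out = np.empty(0)
            while out.size < n:
                x = rng.normal(mean, std, n)
                out = np.concatenate([out, x[np.abs(x - mean) <= nsig * std]])
            return out[:n]

        rng = np.random.default_rng(42)
        n = 200                                               # simulations per parameter, as in the abstract

        # Hypothetical fault: mean length 30 km, width 12 km, slip rate 1.0 mm/yr
        length = truncated_normal(30.0, 3.0, n, 2.0, rng)     # km
        width = truncated_normal(12.0, 1.5, n, 2.0, rng)      # km
        slip_rate = truncated_normal(1.0, 0.2, n, 2.0, rng)   # mm/yr

        area = length * width                                 # km^2
        mmax = 4.07 + 0.98 * np.log10(area)                   # magnitude-area scaling (illustrative)
        m0 = 10.0 ** (1.5 * mmax + 9.05)                      # seismic moment in N*m (Hanks & Kanamori)
        mu = 3.0e10                                           # shear modulus, Pa
        moment_rate = mu * (area * 1.0e6) * (slip_rate * 1.0e-3)   # N*m per year
        recurrence = m0 / moment_rate                         # years between characteristic events

        print("Mmax 2.5-97.5 percentile:", np.round(np.percentile(mmax, [2.5, 97.5]), 2))
        print("median recurrence interval (yr):", round(float(np.median(recurrence)), 1))

    Repeating such draws for one parameter at a time, with the others fixed, gives the per-branch sensitivity maps; drawing all parameters simultaneously gives the overall uncertainty maps described in the abstract.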

  2. Multi-faults decoupling on turbo-expander using differential-based ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Li, Hongguang; Li, Ming; Li, Cheng; Li, Fucai; Meng, Guang

    2017-09-01

    This paper addresses the decoupling of multiple faults in a turbo-expander rotor system using Differential-based Ensemble Empirical Mode Decomposition (DEEMD). DEEMD is an improved version of DEMD intended to resolve the problem of mode mixing. The nonlinear behaviors of the turbo-expander considering a temperature gradient with crack, rub-impact, and pedestal looseness faults are investigated respectively, so that a baseline for multi-fault decoupling can be established. DEEMD is then applied to the vibration signals of the rotor system with coupled faults acquired by numerical simulation, and the results indicate that DEEMD can successfully decouple the coupled faults and is more efficient than EEMD. DEEMD is also applied to the vibration signal of the misalignment fault coupled with a rub-impact fault obtained during the adjustment of the experimental system. The results show that DEEMD can decompose practical multi-fault signals, and the industrial prospects of DEEMD are verified as well.

  3. Robust fault detection of turbofan engines subject to adaptive controllers via a Total Measurable Fault Information Residual (ToMFIR) technique.

    PubMed

    Chen, Wen; Chowdhury, Fahmida N; Djuric, Ana; Yeh, Chih-Ping

    2014-09-01

    This paper provides a new design of robust fault detection for turbofan engines with adaptive controllers. The critical issue is that adaptive controllers can suppress the effects of faults so that the actual system outputs remain at their pre-specified values, making it difficult to detect faults/failures. To solve this problem, a Total Measurable Fault Information Residual (ToMFIR) technique with the aid of system transformation is adopted to detect faults in turbofan engines with adaptive controllers. This design is a ToMFIR-redundancy-based robust fault detection. The ToMFIR is first introduced and existing results are summarized. The detailed design process of the ToMFIRs is presented, and a turbofan engine model is simulated to verify the effectiveness of the proposed ToMFIR-based fault-detection strategy. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Application of a Bank of Kalman Filters for Aircraft Engine Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2003-01-01

    In this paper, a bank of Kalman filters is applied to aircraft gas turbine engine sensor and actuator fault detection and isolation (FDI) in conjunction with the detection of component faults. This approach uses multiple Kalman filters, each of which is designed for detecting a specific sensor or actuator fault. In the event that a fault does occur, all filters except the one using the correct hypothesis will produce large estimation errors, thereby isolating the specific fault. In the meantime, a set of parameters that indicate engine component performance is estimated for the detection of abrupt degradation. The proposed FDI approach is applied to a nonlinear engine simulation at nominal and aged conditions, and the evaluation results for various engine faults at cruise operating conditions are given. The ability of the proposed approach to reliably detect and isolate sensor and actuator faults is demonstrated.
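
    A stripped-down illustration of the bank-of-filters idea is sketched below: several linear Kalman filters run in parallel, each assuming a different sensor-bias hypothesis, and the hypothesis whose filter keeps the smallest innovations is selected. The plant model, redundant sensor pair, noise levels, and bias values are hypothetical and far simpler than the nonlinear engine simulation used in the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        A = np.array([[1.0, 0.1], [0.0, 0.95]])                 # hypothetical plant dynamics
        H = np.array([[1.0, 0.0], [1.0, 0.0]])                  # two redundant sensors on the same state
        Q = 1e-4 * np.eye(2)                                    # process noise covariance
        R = 1e-2 * np.eye(2)                                    # measurement noise covariance

        # Each filter in the bank assumes a different sensor-bias hypothesis.
        hypotheses = {"no fault": np.zeros(2),
                      "bias on sensor 1": np.array([0.5, 0.0]),
                      "bias on sensor 2": np.array([0.0, 0.5])}

        def run_bank(measurements):
            # Mean squared innovation per hypothesis; the smallest isolates the fault.
            scores = {}
            for name, bias in hypotheses.items():
                x, P, sq = np.zeros(2), np.eye(2), 0.0
                for y in measurements:
                    x = A @ x                                   # predict
                    P = A @ P @ A.T + Q
                    innov = (y - bias) - H @ x                  # innovation under this hypothesis
                    S = H @ P @ H.T + R
                    K = P @ H.T @ np.linalg.inv(S)
                    x = x + K @ innov                           # update
                    P = (np.eye(2) - K @ H) @ P
                    sq += float(innov @ innov)
                scores[name] = sq / len(measurements)
            return scores

        # Simulate the true plant with a +0.5 bias fault injected on sensor 1.
        x_true, ys = np.array([1.0, 0.0]), []
        for _ in range(200):
            x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
            ys.append(H @ x_true + np.array([0.5, 0.0]) + rng.normal(0.0, 0.1, size=2))

        print(min(run_bank(ys), key=run_bank(ys).get))          # hypothesis with the smallest score

    Only the filter that removes the correct bias stays consistent with both sensors; the others carry a persistent disagreement that keeps their innovations large, which is the isolation mechanism the abstract describes.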

  5. Data-based fault-tolerant control for affine nonlinear systems with actuator faults.

    PubMed

    Xie, Chun-Hua; Yang, Guang-Hong

    2016-09-01

    This paper investigates the fault-tolerant control (FTC) problem for unknown nonlinear systems with actuator faults including stuck, outage, bias and loss of effectiveness. The upper bounds of stuck faults, bias faults and loss of effectiveness faults are unknown. A new data-based FTC scheme is proposed. It consists of the online estimation of the bounds and of a state-dependent function. The estimates are adjusted online to automatically compensate for the actuator faults. The state-dependent function, solved by using real system data, helps to stabilize the system. Furthermore, all signals in the resulting closed-loop system are uniformly bounded and the states converge asymptotically to zero. Compared with the existing results, the proposed approach is data-based. Finally, two simulation examples are provided to show the effectiveness of the proposed approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Fault diagnosis of motor bearing with speed fluctuation via angular resampling of transient sound signals

    NASA Astrophysics Data System (ADS)

    Lu, Siliang; Wang, Xiaoxian; He, Qingbo; Liu, Fang; Liu, Yongbin

    2016-12-01

    Transient signal analysis (TSA) has been proven an effective tool for motor bearing fault diagnosis, but has yet to be applied in processing bearing fault signals with variable rotating speed. In this study, a new TSA-based angular resampling (TSAAR) method is proposed for fault diagnosis under speed fluctuation condition via sound signal analysis. By applying the TSAAR method, the frequency smearing phenomenon is eliminated and the fault characteristic frequency is exposed in the envelope spectrum for bearing fault recognition. The TSAAR method can accurately estimate the phase information of the fault-induced impulses using neither complicated time-frequency analysis techniques nor external speed sensors, and hence it provides a simple, flexible, and data-driven approach that realizes variable-speed motor bearing fault diagnosis. The effectiveness and efficiency of the proposed TSAAR method are verified through a series of simulated and experimental case studies.
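
    The last stage of such an analysis, envelope demodulation, is easy to sketch for the constant-speed case; the angular-resampling step that is the subject of the paper is deliberately omitted, and every signal parameter below is a made-up placeholder.

        import numpy as np
        from scipy.signal import hilbert

        fs = 20_000                               # sampling rate, Hz (hypothetical)
        t = np.arange(0, 1.0, 1 / fs)
        f_fault = 87.0                            # hypothetical fault characteristic frequency, Hz
        f_res = 3_000.0                           # hypothetical resonance excited by the impacts

        # Train of impacts at the fault frequency, each one ringing the resonance.
        impacts = (np.sin(2 * np.pi * f_fault * t) > 0.999).astype(float)
        ringing = np.exp(-400 * t[:200]) * np.sin(2 * np.pi * f_res * t[:200])
        signal = np.convolve(impacts, ringing, mode="same")
        signal += 0.1 * np.random.default_rng(0).normal(size=t.size)

        # Envelope via the analytic signal, then the envelope spectrum.
        envelope = np.abs(hilbert(signal))
        spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
        freqs = np.fft.rfftfreq(envelope.size, 1 / fs)

        band = (freqs > 10.0) & (freqs < 500.0)
        peak = freqs[band][np.argmax(spectrum[band])]
        print(f"strongest envelope line at {peak:.1f} Hz; compare with {f_fault} Hz and its harmonics")

    With speed fluctuation, the impacts are no longer evenly spaced in time, which smears these spectral lines; resampling the signal to the angular domain before demodulation, as proposed in the paper, restores them.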

  7. A fault isolation method based on the incidence matrix of an augmented system

    NASA Astrophysics Data System (ADS)

    Chen, Changxiong; Chen, Liping; Ding, Jianwan; Wu, Yizhong

    2018-03-01

    In this paper, a new approach is proposed for isolating faults and quickly identifying the redundant sensors of a system. By introducing fault signals as additional state variables, an augmented system model is constructed from the original system model, the fault signals, and the sensor measurement equations. The structural properties of the augmented system model are provided in this paper. From the viewpoint of evaluating fault variables, the calculating correlations of the fault variables in the system can be found, which imply the fault isolation properties of the system. Compared with previous isolation approaches, the highlights of the new approach are that it can quickly find the faults which can be isolated using exclusive residuals and, at the same time, can identify the redundant sensors in the system, which are useful for the design of the diagnosis system. The simulation of a four-tank system is reported to validate the proposed method.
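
    The incidence-matrix reasoning in the abstract is related to the common structural notion of a fault signature matrix; the toy example below (not the authors' algorithm or their four-tank model) shows how distinct residual signatures imply isolability:

        import numpy as np

        # Rows = residuals, columns = fault variables; a 1 means the residual is
        # structurally sensitive to that fault. All entries here are hypothetical.
        S = np.array([[1, 0, 1],
                      [0, 1, 1],
                      [1, 1, 0]])
        faults = ["f1", "f2", "f3"]

        signatures = {f: tuple(S[:, j]) for j, f in enumerate(faults)}
        isolable = len(set(signatures.values())) == len(faults)
        print(signatures)
        print("every fault has a distinct signature:", isolable)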

  8. Selection of test paths for solder joint intermittent connection faults under DC stimulus

    NASA Astrophysics Data System (ADS)

    Huakang, Li; Kehong, Lv; Jing, Qiu; Guanjun, Liu; Bailiang, Chen

    2018-06-01

    The test path of solder joint intermittent connection faults under direct-current stimulus is examined in this paper. According to the physical structure of the circuit, a network model is established first. A network node is utilised to represent the test node. The path edge refers to the number of intermittent connection faults in the path. Then, the selection criteria of the test path based on the node degree index are proposed and the solder joint intermittent connection faults are covered using fewer test paths. Finally, three circuits are selected to verify the method. To test if the intermittent fault is covered by the test paths, the intermittent fault is simulated by a switch. The results show that the proposed method can detect the solder joint intermittent connection fault using fewer test paths. Additionally, the number of detection steps is greatly reduced without compromising fault coverage.

  9. Multi-Physics Modelling of Fault Mechanics Using REDBACK: A Parallel Open-Source Simulator for Tightly Coupled Problems

    NASA Astrophysics Data System (ADS)

    Poulet, Thomas; Paesold, Martin; Veveakis, Manolis

    2017-03-01

    Faults play a major role in many economically and environmentally important geological systems, ranging from impermeable seals in petroleum reservoirs to fluid pathways in ore-forming hydrothermal systems. Their behavior is therefore widely studied and fault mechanics is particularly focused on the mechanisms explaining their transient evolution. Single faults can change in time from seals to open channels as they become seismically active and various models have recently been presented to explain the driving forces responsible for such transitions. A model of particular interest is the multi-physics oscillator of Alevizos et al. (J Geophys Res Solid Earth 119(6), 4558-4582, 2014) which extends the traditional rate and state friction approach to rate and temperature-dependent ductile rocks, and has been successfully applied to explain spatial features of exposed thrusts as well as temporal evolutions of current subduction zones. In this contribution we implement that model in REDBACK, a parallel open-source multi-physics simulator developed to solve such geological instabilities in three dimensions. The resolution of the underlying system of equations in a tightly coupled manner allows REDBACK to capture appropriately the various theoretical regimes of the system, including the periodic and non-periodic instabilities. REDBACK can then be used to simulate the drastic permeability evolution in time of such systems, where nominally impermeable faults can sporadically become fluid pathways, with permeability increases of several orders of magnitude.

  10. Suppression of slip and rupture velocity increased by thermal pressurization: Effect of dilatancy

    NASA Astrophysics Data System (ADS)

    Urata, Yumi; Kuge, Keiko; Kase, Yuko

    2013-11-01

    We investigated the effect of dilatancy on dynamic rupture propagation on a fault where thermal pressurization (TP) is in effect, taking into account permeability varying with porosity; the study is based on three-dimensional (3-D) numerical simulations of spontaneous ruptures obeying a slip-weakening friction law and a Coulomb failure criterion. The effects of dilatancy on dynamic ruptures interacting with TP have often been investigated in one- or two-dimensional numerical simulations. The sole 3-D numerical simulation gave attention only to the behavior at a single point on a fault. Moreover, with the sole exception of a study based on a single-degree-of-freedom spring-slider model, the previous simulations including dilatancy and TP have not considered changes in hydraulic diffusivity. However, the hydraulic diffusivity, which strongly affects TP, can vary as a power of porosity. In this study, we apply a power-law relationship between permeability and porosity. We consider both reversible and irreversible changes in porosity, assuming that the irreversible change is proportional to the slip rate and the dilatancy coefficient ɛ. Our numerical simulations suggest that the effects of dilatancy can suppress the slip and rupture velocity increased by TP. The results reveal that the amount of slip on the fault decreases with increasing ɛ or increasing exponent of the power law, and the rupture velocity is predominantly suppressed by ɛ. This was observed regardless of whether the applied stresses were high or low. The deficit of the final slip in relation to ɛ can be smaller as the fault size is larger.
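
    In the notation of the abstract, the permeability model takes the power-law form below; the exact reversible porosity term and the proportionality constant in the inelastic part are specified in the paper and are not reproduced here:

        k(\phi) = k_0 \left( \frac{\phi}{\phi_0} \right)^{n},
        \qquad \dot{\phi}_{\mathrm{inelastic}} \propto \varepsilon\, V

    where V is the slip rate and ɛ the dilatancy coefficient, so both a larger ɛ and a larger exponent n strengthen the dilatant suppression of thermal pressurization.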

  11. Simulation-driven machine learning: Bearing fault classification

    NASA Astrophysics Data System (ADS)

    Sobie, Cameron; Freitas, Carina; Nicolai, Mike

    2018-01-01

    Increasing the accuracy of mechanical fault detection has the potential to improve system safety and economic performance by minimizing scheduled maintenance and the probability of unexpected system failure. Advances in computational performance have enabled the application of machine learning algorithms across numerous applications including condition monitoring and failure detection. Past applications of machine learning to physical failure have relied explicitly on historical data, which limits the feasibility of this approach to in-service components with extended service histories. Furthermore, recorded failure data are often only valid for the specific circumstances and components for which they were collected. This work directly addresses these challenges for roller bearings with race faults by generating training data from high-resolution simulations of roller bearing dynamics; these data are used to train machine learning algorithms that are then validated against four experimental datasets. Several different machine learning methodologies are compared, starting from well-established statistical feature-based methods up to convolutional neural networks, and a novel application of dynamic time warping (DTW) to bearing fault classification is proposed as a robust, parameter-free method for race fault detection.
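
    As a minimal illustration of the DTW idea (nearest-template classification with the basic dynamic-programming recurrence), the sketch below uses synthetic waveforms in place of the simulated bearing signatures used in the paper:

        import numpy as np

        def dtw_distance(a, b):
            # Classic O(len(a)*len(b)) dynamic time warping distance between two
            # 1-D sequences, with absolute difference as the local cost.
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # Hypothetical use for fault classification: compare a measured signature
        # against simulated templates and pick the nearest class.
        templates = {"healthy": np.sin(np.linspace(0, 4 * np.pi, 80)),
                     "outer race fault": np.abs(np.sin(np.linspace(0, 12 * np.pi, 80)))}
        measurement = np.abs(np.sin(np.linspace(0, 12 * np.pi, 95))) + 0.05
        label = min(templates, key=lambda k: dtw_distance(measurement, templates[k]))
        print(label)   # expected: "outer race fault"

    Because the warping path absorbs differences in length and local timing, such a classifier tolerates speed variations between the simulated templates and the measured data without any tuned parameters, which is the robustness argument made in the abstract.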

  12. Latent component-based gear tooth fault detection filter using advanced parametric modeling

    NASA Astrophysics Data System (ADS)

    Ettefagh, M. M.; Sadeghi, M. H.; Rezaee, M.; Chitsaz, S.

    2009-10-01

    In this paper, a new parametric model-based filter is proposed for gear tooth fault detection. The design of the filter consists of identifying the most proper latent component (LC) of the undamaged gearbox signal by analyzing the instant modules (IMs) and instant frequencies (IFs), and then using the component with the lowest IM as the proposed filter output for detecting faults of the gearbox. The filter parameters are estimated by using the LC theory, in which an advanced parametric modeling method has been implemented. The proposed method is applied to signals extracted from a simulated gearbox for detection of the simulated gear faults. In addition, the method is used for quality inspection of the produced Nissan-Junior vehicle gearbox by gear profile error detection in an industrial test bed. For evaluation purposes, the proposed method is compared with previous parametric TAR/AR-based filters, in which the parametric model residual is considered as the filter output and Yule-Walker and Kalman filters are implemented for estimating the parameters. The results confirm the high performance of the newly proposed fault detection method.

  13. Models of recurrent strike-slip earthquake cycles and the state of crustal stress

    NASA Technical Reports Server (NTRS)

    Lyzenga, Gregory A.; Raefsky, Arthur; Mulligan, Stephanie G.

    1991-01-01

    Numerical models of the strike-slip earthquake cycle, assuming a viscoelastic asthenosphere coupling model, are examined. The time-dependent simulations incorporate a stress-driven fault, which leads to tectonic stress fields and earthquake recurrence histories that are mutually consistent. Single-fault simulations with constant far-field plate motion lead to a nearly periodic earthquake cycle and a distinctive spatial distribution of crustal shear stress. The predicted stress distribution includes a local minimum in stress at depths less than typical seismogenic depths. The width of this stress 'trough' depends on the magnitude of crustal stress relative to asthenospheric drag stresses. The models further predict a local near-fault stress maximum at greater depths, sustained by the cyclic transfer of strain from the elastic crust to the ductile asthenosphere. Models incorporating both low-stress and high-stress fault strength assumptions are examined, under Newtonian and non-Newtonian rheology assumptions. Model results suggest a preference for low-stress (a shear stress level of about 10 MPa) fault models, in agreement with previous estimates based on heat flow measurements and other stress indicators.

  14. The influence of geologic structures on deformation due to ground water withdrawal.

    PubMed

    Burbey, Thomas J

    2008-01-01

    A 62 day controlled aquifer test was conducted in thick alluvial deposits at Mesquite, Nevada, for the purpose of monitoring horizontal and vertical surface deformations using a high-precision global positioning system (GPS) network. Initial analysis of the data indicated an anisotropic aquifer system on the basis of the observed radial and tangential deformations. However, new InSAR data seem to indicate that the site may be bounded by an oblique normal fault as the subsidence bowl is both truncated to the northwest and offset from the pumping well to the south. A finite-element numerical model was developed using ABAQUS to evaluate the potential location and hydromechanical properties of the fault based on the observed horizontal deformations. Simulation results indicate that for the magnitude and direction of motion at the pumping well and at other GPS stations, which is toward the southeast (away from the inferred fault), the fault zone (5 m wide) must possess a very high permeability and storage coefficient and cross the study area in a northeast-southwest direction. Simulated horizontal and vertical displacements that include the fault zone closely match observed displacements and indicate the likelihood of the presence of the inferred fault. This analysis shows how monitoring horizontal displacements can provide valuable information about faults, and boundary conditions in general, in evaluating aquifer systems during an aquifer test.

  15. Enhanced characterization of faults and fractures at EGS sites by CO2 injection coupled with active seismic monitoring, pressure-transient testing, and well logging

    NASA Astrophysics Data System (ADS)

    Oldenburg, C. M.; Daley, T. M.; Borgia, A.; Zhang, R.; Doughty, C.; Jung, Y.; Altundas, B.; Chugunov, N.; Ramakrishnan, T. S.

    2016-12-01

    Faults and fractures in geothermal systems are difficult to image and characterize because they are nearly indistinguishable from host rock using traditional seismic and well-logging tools. We are investigating the use of CO2 injection and production (push-pull) in faults and fractures for contrast enhancement for better characterization by active seismic, well logging, and push-pull pressure transient analysis. Our approach consists of numerical simulation and feasibility assessment using conceptual models of potential enhanced geothermal system (EGS) sites such as Brady's Hot Spring and others. Faults in the deep subsurface typically have associated damage and gouge zones that provide a larger volume for uptake of CO2 than the slip plane alone. CO2 injected for push-pull well testing has a preference for flowing in the fault and fractures because CO2 is non-wetting relative to water and the permeability of open fractures and fault gouge is much higher than matrix. We are carrying out numerical simulations of injection and withdrawal of CO2 using TOUGH2/ECO2N. Simulations show that CO2 flows into the slip plane and gouge and damage zones and is driven upward by buoyancy during the push cycle over day-long time scales. Recovery of CO2 during the pull cycle is limited because of buoyancy effects. We then use the CO2 saturation field simulated by TOUGH2 in our anisotropic finite difference code from SPICE, with modifications for fracture compliance, that we use to model elastic wave propagation. Results show time-lapse differences in seismic response using a surface source. Results suggest that CO2 can be best imaged using time-lapse differencing of the P-wave and P-to-S-wave scattering in a vertical seismic profile (VSP) configuration. Wireline well-logging tools that measure electrical conductivity show promise as another means to detect and image the CO2-filled fracture near the injection well and potential monitoring well(s), especially if a saline-water pre-flush is carried out to enhance conductivity contrast. Pressure-transient analysis is also carried out to further constrain fault zone characteristics. These multiple complementary characterization approaches are being used to develop working models of fault and fracture zone characteristics relevant to EGS energy recovery.

  16. Coupled multiphase flow and geomechanics analysis of the 2011 Lorca earthquake

    NASA Astrophysics Data System (ADS)

    Jha, B.; Hager, B. H.; Juanes, R.; Bechor, N.

    2013-12-01

    We present a new approach for modeling coupled multiphase flow and geomechanics of faulted reservoirs. We couple a flow simulator with a mechanics simulator using the unconditionally stable fixed-stress sequential solution scheme [Kim et al, 2011]. We model faults as surfaces of discontinuity using interface elements [Aagaard et al, 2008]. This allows us to model stick-slip behavior on the fault surface for dynamically evolving fault strength. We employ a rigorous formulation of nonlinear multiphase geomechanics [Coussy, 1995], which is based on the increment in mass of fluid phases instead of the traditional, and less accurate, scheme based on the change in porosity. Our nonlinear formulation is capable of handling strong capillarity and large changes in saturation in the reservoir. To account for the effect of surface stresses along fluid-fluid interfaces, we use the equivalent pore pressure in the definition of the multiphase effective stress [Coussy et al, 1998; Kim et al, 2013]. We use our simulation tool to study the 2011 Lorca earthquake [Gonzalez et al, 2012], which has received much attention because of its potential anthropogenic triggering (long-term groundwater withdrawal leading to slip along the regional Alhama de Murcia fault). Our coupled fluid flow and geomechanics approach to model fault slip allowed us to take a fresh look at this seismic event, which to date has only been analyzed using simple elastic dislocation models and point source solutions. Using a three-dimensional model of the Lorca region, we simulate the groundwater withdrawal and subsequent unloading of the basin over the period of interest (1960-2010). We find that groundwater withdrawal leads to unloading of the crust and changes in the stress across the impermeable fault plane. Our analysis suggests that the combination of these two factors played a critical role in inducing the fault slip that ultimately led to the Lorca earthquake. Aagaard, B. T., M. G. Knepley, and C. A. Williams (2013), Journal of Geophysical Research, Solid Earth, 118, 3059-3079 Coussy, O. (1995), Mechanics of Porous Continua, John Wiley and Sons, England. Coussy, O., R. Eymard, and T. Lassabatere (1998), J. Eng. Mech., 124(6), 658-557. Kim, J., H. A. Tchelepi, and R. Juanes (2011), Comput. Methods Appl. Mech. Eng., 200, 1591-1606. Gonzalez, P. J., K. F. Tiampo, M. Palano, F. Cannavo, and J. Fernandez (2012), Nature Geoscience.

  17. Near-Source Shaking and Dynamic Rupture in Plastic Media

    NASA Astrophysics Data System (ADS)

    Gabriel, A.; Mai, P. M.; Dalguer, L. A.; Ampuero, J. P.

    2012-12-01

    Recent well-recorded earthquakes show a high degree of complexity at the source level that severely affects the resulting ground motion in near- and far-field seismic data. In our study, we focus on investigating source-dominated near-field ground motion features from numerical dynamic rupture simulations in an elasto-visco-plastic bulk. Our aim is to contribute to a more direct connection from theoretical and computational results to field and seismological observations. Previous work showed that a diversity of rupture styles emerges from simulations on faults governed by velocity-and-state-dependent friction with rapid velocity-weakening at high slip rate. For instance, growing pulses lead to re-activation of slip due to gradual stress build-up near the hypocenter, as inferred in some source studies of the 2011 Tohoku-Oki earthquake. Moreover, off-fault energy dissipation implied physical limits on extreme ground motion by limiting peak slip rate and rupture velocity. We investigate characteristic features in near-field strong ground motion generated by dynamic in-plane rupture simulations. We present effects of plasticity on source process signatures, off-fault damage patterns and ground shaking. Independent of rupture style, asymmetric damage patterns across the fault are produced that contribute to the total seismic moment, and even dominantly at high angles between the fault and the maximum principal background stress. The off-fault plastic strain fields induced by transitions between rupture styles reveal characteristic signatures of the mechanical source processes during the transition. Comparing different rupture styles in elastic and elasto-visco-plastic media to identify signatures of off-fault plasticity, we find varying degrees of alteration of near-field radiation due to plastic energy dissipation. Subshear pulses suffer more peak particle velocity reduction due to plasticity than cracks. Supershear ruptures are affected even more. The occurrence of multiple rupture fronts affects seismic potency release rate, amplitude spectra, peak particle velocity distributions and near-field seismograms. Our simulations enable us to trace features of source processes in synthetic seismograms, for example exhibiting a re-activation of slip. Such physical models may provide starting points for future investigations of field properties of earthquake source mechanisms and natural fault conditions. In the long term, our findings may be helpful for seismic hazard analysis and the improvement of seismic source models.

  18. Study of Stand-Alone Microgrid under Condition of Faults on Distribution Line

    NASA Astrophysics Data System (ADS)

    Malla, S. G.; Bhende, C. N.

    2014-10-01

    The behavior of a stand-alone microgrid is analyzed under the condition of faults on distribution feeders. Since the battery is not able to maintain the dc-link voltage within limits during a fault, a resistive dump load control is presented to do so. An inverter control is proposed to maintain balanced voltages at the PCC under unbalanced load conditions and to reduce the voltage unbalance factor (VUF) at load points. The proposed inverter control also has the facility to protect itself from high fault currents. The existing maximum power point tracker (MPPT) algorithm is modified to limit the speed of the generator during a fault. Extensive simulation results using MATLAB/SIMULINK establish that the performance of the controllers is quite satisfactory under different fault conditions as well as unbalanced load conditions.

  19. 20 CFR 404.457 - Deductions where taxes neither deducted from wages of certain maritime employees nor paid.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... wages of certain maritime employees nor paid. 404.457 Section 404.457 Employees' Benefits SOCIAL... maritime employees nor paid. (a) When deduction is required. A deduction is required where: (1) An... Administration or, for services performed before February 11, 1942, through the United States Maritime Commission...

  20. 26 CFR 1.941-1 - Special deduction for China Trade Act corporations.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 10 2014-04-01 2013-04-01 true Special deduction for China Trade Act... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) China Trade Act Corporations § 1.941-1 Special deduction for China Trade Act corporations. In addition to the deductions from taxable income otherwise...

  1. 26 CFR 1.941-1 - Special deduction for China Trade Act corporations.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 10 2013-04-01 2013-04-01 false Special deduction for China Trade Act... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) China Trade Act Corporations § 1.941-1 Special deduction for China Trade Act corporations. In addition to the deductions from taxable income otherwise...

  2. Children's and Adults' Judgments of the Certainty of Deductive Inferences, Inductive Inferences, and Guesses

    ERIC Educational Resources Information Center

    Pillow, Bradford H.; Pearson, RaeAnne M.; Hecht, Mary; Bremer, Amanda

    2010-01-01

    Children and adults rated their own certainty following inductive inferences, deductive inferences, and guesses. Beginning in kindergarten, participants rated deductions as more certain than weak inductions or guesses. Deductions were rated as more certain than strong inductions beginning in Grade 3, and fourth-grade children and adults…

  3. 26 CFR 1.941-1 - Special deduction for China Trade Act corporations.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 10 2011-04-01 2011-04-01 false Special deduction for China Trade Act... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) China Trade Act Corporations § 1.941-1 Special deduction for China Trade Act corporations. In addition to the deductions from taxable income otherwise...

  4. 26 CFR 1.941-1 - Special deduction for China Trade Act corporations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 10 2010-04-01 2010-04-01 false Special deduction for China Trade Act... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES China Trade Act Corporations § 1.941-1 Special deduction for China Trade Act corporations. In addition to the deductions from taxable income otherwise allowed such a...

  5. 42 CFR 409.89 - Exemption of kidney donors from deductible and coinsurance requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 2 2011-10-01 2011-10-01 false Exemption of kidney donors from deductible and... Deductibles and Coinsurance § 409.89 Exemption of kidney donors from deductible and coinsurance requirements... furnished to an individual in connection with the donation of a kidney for transplant surgery. ...

  6. 42 CFR 409.89 - Exemption of kidney donors from deductible and coinsurance requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Exemption of kidney donors from deductible and... Deductibles and Coinsurance § 409.89 Exemption of kidney donors from deductible and coinsurance requirements... furnished to an individual in connection with the donation of a kidney for transplant surgery. ...

  7. 25 CFR 163.25 - Forest management deductions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Forest management deductions. 163.25 Section 163.25... Forest Management and Operations § 163.25 Forest management deductions. (a) Pursuant to the provisions of 25 U.S.C. 413 and 25 U.S.C. 3105, a forest management deduction shall be withheld from the gross...

  8. 26 CFR 20.2053-10 - Deduction for certain foreign death taxes.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 2055, but only if (1) the conditions stated in paragraph (b) of this section are met, and (2) an... for foreign death taxes. (b) Condition for allowance of deduction. (1) The deduction is not allowed.... For allowance of the deduction, it is sufficient if either of these conditions is satisfied. Thus, in...

  9. 26 CFR 20.2053-10 - Deduction for certain foreign death taxes.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 2055, but only if (1) the conditions stated in paragraph (b) of this section are met, and (2) an... for foreign death taxes. (b) Condition for allowance of deduction. (1) The deduction is not allowed.... For allowance of the deduction, it is sufficient if either of these conditions is satisfied. Thus, in...

  10. 48 CFR 252.236-7007 - Additive or deductive items.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Additive or deductive... of Provisions And Clauses 252.236-7007 Additive or deductive items. As prescribed in 236.570(b)(5), use the following provision: Additive or Deductive Items (DEC 1991) (a) The low offeror and the items...

  11. 48 CFR 252.236-7007 - Additive or deductive items.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Additive or deductive... of Provisions And Clauses 252.236-7007 Additive or deductive items. As prescribed in 236.570(b)(5), use the following provision: Additive or Deductive Items (DEC 1991) (a) The low offeror and the items...

  12. 48 CFR 1852.236-71 - Additive or deductive items.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Additive or deductive items... 1852.236-71 Additive or deductive items. As prescribed in 1836.570(a), insert the following provision: Additive or Deductive Items (MAR 1989) (a) The low bidder for purposes of award shall be the conforming...

  13. 48 CFR 252.236-7007 - Additive or deductive items.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Additive or deductive... of Provisions And Clauses 252.236-7007 Additive or deductive items. As prescribed in 236.570(b)(5), use the following provision: Additive or Deductive Items (DEC 1991) (a) The low offeror and the items...

  14. 48 CFR 252.236-7007 - Additive or deductive items.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Additive or deductive... of Provisions And Clauses 252.236-7007 Additive or deductive items. As prescribed in 236.570(b)(5), use the following provision: Additive or Deductive Items (DEC 1991) (a) The low offeror and the items...

  15. 48 CFR 252.236-7007 - Additive or deductive items.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Additive or deductive... of Provisions And Clauses 252.236-7007 Additive or deductive items. As prescribed in 236.570(b)(5), use the following provision: Additive or Deductive Items (DEC 1991) (a) The low offeror and the items...

  16. 26 CFR 1.642(d)-1 - Net operating loss deduction.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Net operating loss deduction. 1.642(d)-1 Section... TAX (CONTINUED) INCOME TAXES Estates, Trusts, and Beneficiaries § 1.642(d)-1 Net operating loss deduction. The net operating loss deduction allowed by section 172 is available to estates and trusts...

  17. 26 CFR 1.1402(a)-7 - Net operating loss deduction.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 12 2010-04-01 2010-04-01 false Net operating loss deduction. 1.1402(a)-7...) INCOME TAX (CONTINUED) INCOME TAXES Tax on Self-Employment Income § 1.1402(a)-7 Net operating loss deduction. The deduction provided by section 172, relating to net operating losses sustained in years other...

  18. 25 CFR 163.25 - Forest management deductions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false Forest management deductions. 163.25 Section 163.25... Forest Management and Operations § 163.25 Forest management deductions. (a) Pursuant to the provisions of 25 U.S.C. 413 and 25 U.S.C. 3105, a forest management deduction shall be withheld from the gross...

  19. 20 CFR 361.11 - Procedures for salary offset: When deductions may begin.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Procedures for salary offset: When deductions... § 361.11 Procedures for salary offset: When deductions may begin. (a) Deductions to liquidate an... a debt is completed, offset shall be made from subsequent payments of any nature (e.g., final salary...

  20. 38 CFR 8.5 - Authorization for deduction of premiums from compensation, retirement pay, or pension.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Authorization for deduction of premiums from compensation, retirement pay, or pension. 8.5 Section 8.5 Pensions, Bonuses, and... Authorization for deduction of premiums from compensation, retirement pay, or pension. Deductions from benefits...

  1. Large earthquakes and creeping faults

    USGS Publications Warehouse

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce fault-surface rupture areas and peak ground shaking similar to those of their locked-fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  2. Ground Fault Overvoltage With Inverter-Interfaced Distributed Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ropp, Michael; Hoke, Anderson; Chakraborty, Sudipta

    Ground Fault Overvoltage can occur in situations in which a four-wire distribution circuit is energized by an ungrounded voltage source during a single phase to ground fault. The phenomenon is well-documented with ungrounded synchronous machines, but there is considerable discussion about whether inverters cause this phenomenon, and consequently whether inverters require effective grounding. This paper examines the overvoltages that can be supported by inverters during single phase to ground faults via theory, simulation and experiment, identifies the relevant physical mechanisms, quantifies expected levels of overvoltage, and makes recommendations for optimal mitigation.
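
    For reference, the classical bound discussed in this context is full neutral displacement: with an effectively ungrounded source feeding the circuit, the phase-to-ground voltages of the two unfaulted phases can rise toward the line-to-line value,

        V_{\mathrm{unfaulted}} \rightarrow \sqrt{3} \times V_{\mathrm{nominal}} \approx 1.73\ \mathrm{pu},

    and the question examined here is how closely inverter-interfaced sources approach this bound and what mitigation, such as effective grounding, is warranted.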

  3. Process fault detection and nonlinear time series analysis for anomaly detection in safeguards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, T.L.; Mullen, M.F.; Wangen, L.E.

    In this paper we discuss two advanced techniques, process fault detection and nonlinear time series analysis, and apply them to the analysis of vector-valued and single-valued time-series data. We investigate model-based process fault detection methods for analyzing simulated, multivariate, time-series data from a three-tank system. The model predictions are compared with simulated measurements of the same variables to form residual vectors that are tested for the presence of faults (possible diversions in safeguards terminology). We evaluate two methods, testing all individual residuals with a univariate z-score and testing all variables simultaneously with the Mahalanobis distance, for their ability to detect loss of material from two different leak scenarios from the three-tank system: a leak without and with replacement of the lost volume. Nonlinear time-series analysis tools were compared with the linear methods popularized by Box and Jenkins. We compare prediction results using three nonlinear and two linear modeling methods on each of six simulated time series: two nonlinear and four linear. The nonlinear methods performed better at predicting the nonlinear time series and did as well as the linear methods at predicting the linear values.
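
    A minimal sketch of the two residual tests compared above is given below; the residual covariance and the example residual vector are hypothetical stand-ins for statistics that would be estimated from the simulated three-tank data.

        import numpy as np

        mean = np.zeros(3)                          # expected residual under no-fault conditions
        cov = np.array([[1.0, 0.3, 0.0],
                        [0.3, 1.0, 0.2],
                        [0.0, 0.2, 1.0]])           # residual covariance (hypothetical)
        cov_inv = np.linalg.inv(cov)

        def z_scores(r):
            # Univariate test: standardize each residual component separately.
            return (r - mean) / np.sqrt(np.diag(cov))

        def mahalanobis_sq(r):
            # Multivariate test: squared Mahalanobis distance of the whole residual vector.
            d = r - mean
            return float(d @ cov_inv @ d)

        r = np.array([1.2, 1.4, 1.1])               # residual vector from one balance period
        print("per-variable |z| :", np.round(np.abs(z_scores(r)), 2))   # compare with a 3-sigma limit
        print("Mahalanobis d^2  :", round(mahalanobis_sq(r), 2))        # compare with a chi-square(3) threshold

    The univariate test flags each variable on its own, while the Mahalanobis test pools all variables and their correlations into a single statistic, which is the distinction the study evaluates on the two leak scenarios.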

  4. Frictional and hydraulic behaviour of carbonate fault gouge during fault reactivation - An experimental study

    NASA Astrophysics Data System (ADS)

    Delle Piane, Claudio; Giwelli, Ausama; Clennell, M. Ben; Esteban, Lionel; Nogueira Kiewiet, Melissa Cristina D.; Kiewiet, Leigh; Kager, Shane; Raimon, John

    2016-10-01

    We present a novel experimental approach devised to test the hydro-mechanical behaviour of different structural elements of carbonate fault rocks during experimental re-activation. Experimentally faulted core plugs were subjected to triaxial tests under water-saturated conditions simulating depletion processes in reservoirs. Different fault zone structural elements were created by shearing initially intact travertine blocks (nominal size: 240 × 110 × 150 mm) to a maximum displacement of 20 and 120 mm under different normal stresses. Meso- and microstructural features of these samples and the thickness-to-displacement ratio characteristics of their deformation zones allowed us to classify them as experimentally created damage zones (displacement of 20 mm) and fault cores (displacement of 120 mm). Following direct shear testing, cylindrical plugs with a diameter of 38 mm were drilled across the slip surface to be re-activated in a conventional triaxial configuration, monitoring the permeability and frictional behaviour of the samples as a function of applied stress. All re-activation experiments on faulted plugs showed a consistent frictional response consisting of an initial fast hardening followed by apparent yield up to a friction coefficient of approximately 0.6 attained at around 2 mm of displacement. Permeability in the re-activation experiments shows exponential decay with increasing mean effective stress. The rate of permeability decline with mean effective stress is higher in the fault core plugs than in the simulated damage zone ones. It can be concluded that the presence of gouge in un-cemented carbonate faults results in their sealing character and that leakage cannot be achieved by renewed movement on the fault plane alone, at least not within the range of slip measurable with our apparatus (i.e. approximately 7 mm of cumulative displacement). Additionally, it is shown that at sub-seismic slip rates re-activated carbonate faults remain strong and no frictional weakening was observed during re-activation.

  5. The effect of segmented fault zones on earthquake rupture propagation and termination

    NASA Astrophysics Data System (ADS)

    Huang, Y.

    2017-12-01

    A fundamental question in earthquake source physics is what can control the nucleation and termination of an earthquake rupture. Besides stress heterogeneities and variations in frictional properties, damaged fault zones (DFZs) that surround major strike-slip faults can contribute significantly to earthquake rupture propagation. Previous earthquake rupture simulations usually characterize DFZs as several-hundred-meter-wide layers with lower seismic velocities than host rocks, and find earthquake ruptures in DFZs can exhibit slip pulses and oscillating rupture speeds that ultimately enhance high-frequency ground motions. However, real DFZs are more complex than these uniform low-velocity structures, and show along-strike variations of damage that may be correlated with historical earthquake ruptures. These segmented structures can either prohibit or assist rupture propagation and significantly affect the final sizes of earthquakes. For example, recent dense array data recorded at the San Jacinto fault zone suggest the existence of three prominent DFZs across the Anza seismic gap and the south section of the Clark branch, while no prominent DFZs were identified near the ends of the Anza seismic gap. To better understand earthquake rupture in segmented fault zones, we will present dynamic rupture simulations that calculate the time-varying rupture process physically by considering the interactions between fault stresses, fault frictional properties, and material heterogeneities. We will show that whether an earthquake rupture can break through the intact rock outside the DFZ depends on the nucleation size of the earthquake and the rupture propagation distance in the DFZ. Moreover, material properties of the DFZ, stress conditions along the fault, and friction properties of the fault also have a critical impact on rupture propagation and termination. We will also present scenarios of San Jacinto earthquake ruptures and show the parameter space that is favorable for rupture propagation through the Anza seismic gap. Our results suggest that a priori knowledge of the properties of segmented fault zones is of great importance for predicting the sizes of future large earthquakes on major faults.

  6. 26 CFR 1.167(a)-10 - When depreciation deduction is allowable.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 2 2010-04-01 2010-04-01 false When depreciation deduction is allowable. 1.167... Corporations § 1.167(a)-10 When depreciation deduction is allowable. (a) A taxpayer should deduct the proper depreciation allowance each year and may not increase his depreciation allowances in later years by reason of...

  7. 26 CFR 20.2053-6 - Deduction for taxes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 14 2010-04-01 2010-04-01 false Deduction for taxes. 20.2053-6 Section 20.2053... TAXES ESTATE TAX; ESTATES OF DECEDENTS DYING AFTER AUGUST 16, 1954 Taxable Estate § 20.2053-6 Deduction for taxes. (a) In general—(1) Taxes are deductible in computing a decedent's gross estate— (i) Only as...

  8. 7 CFR 3.81 - Procedures for salary offset: when deductions may begin.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 1 2010-01-01 2010-01-01 false Procedures for salary offset: when deductions may... Salary Offset § 3.81 Procedures for salary offset: when deductions may begin. (a) Deductions to liquidate... Offset Salary to collect from the employee's current pay. (b) If the employee filed a petition for a...

  9. 78 FR 41961 - Submission for Review: 3206-0170, Application for Refund of Retirement Deductions/FERS (SF 3106...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-12

    ... Retirement Deductions/FERS (SF 3106) and Current/Former Spouse(s) Notification of Application for Refund of Retirement Deductions Under FERS (SF 3106A) AGENCY: U.S. Office of Personnel Management. ACTION: 30-Day... Current/Former Spouse(s) Notification of Application for Refund of Retirement Deductions Under FERS (SF...

  10. 26 CFR 1.249-1 - Limitation on deduction of bond premium on repurchase.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... A issues a callable 20-year convertible bond at face for $1,000 bearing interest at 10 percent per... 26 Internal Revenue 3 2010-04-01 2010-04-01 false Limitation on deduction of bond premium on... deduction of bond premium on repurchase. (a) Limitation—(1) General rule. No deduction is allowed to the...

  11. 26 CFR 1.163-12 - Deduction of original issue discount on instrument held by related foreign person.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... on which the amount is includible in income is determined with reference to the method of accounting... 26 Internal Revenue 2 2012-04-01 2012-04-01 false Deduction of original issue discount on... Deductions for Individuals and Corporations § 1.163-12 Deduction of original issue discount on instrument...

  12. 26 CFR 1.163-12 - Deduction of original issue discount on instrument held by related foreign person.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... on which the amount is includible in income is determined with reference to the method of accounting... 26 Internal Revenue 2 2014-04-01 2014-04-01 false Deduction of original issue discount on... Deductions for Individuals and Corporations § 1.163-12 Deduction of original issue discount on instrument...

  13. 26 CFR 1.163-12 - Deduction of original issue discount on instrument held by related foreign person.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... on which the amount is includible in income is determined with reference to the method of accounting... 26 Internal Revenue 2 2013-04-01 2013-04-01 false Deduction of original issue discount on... Deductions for Individuals and Corporations § 1.163-12 Deduction of original issue discount on instrument...

  14. Network Connectivity for Permanent, Transient, Independent, and Correlated Faults

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Sicher, Courtney; Henry, Courtney

    2012-01-01

    This paper develops a method for the quantitative analysis of network connectivity in the presence of both permanent and transient faults. Even though transient noise is considered a common occurrence in networks, a survey of the literature reveals an emphasis on permanent faults. Transient faults introduce a time element into the analysis of network reliability. With permanent faults it is sufficient to consider the faults that have accumulated by the end of the operating period. With transient faults the arrival and recovery time must be included. The number and location of faults in the system is a dynamic variable. Transient faults also introduce system recovery into the analysis. The goal is the quantitative assessment of network connectivity in the presence of both permanent and transient faults. The approach is to construct a global model that includes all classes of faults: permanent, transient, independent, and correlated. A theorem is derived about this model that gives distributions for (1) the number of fault occurrences, (2) the type of fault occurrence, (3) the time of the fault occurrences, and (4) the location of the fault occurrence. These results are applied to compare and contrast the connectivity of different network architectures in the presence of permanent, transient, independent, and correlated faults. The examples below use a Monte Carlo simulation, but the theorem mentioned above could be used to guide fault-injections in a laboratory.
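
    Below is a minimal, hypothetical sketch of the kind of Monte Carlo connectivity experiment described above, with permanent faults that persist and transient faults that recover after a fixed time. The 3x3 mesh, fault rates, and recovery time are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: Monte Carlo estimate of the probability that a small mesh
# network stays connected over a mission, under both permanent and transient
# link faults.  Fault rates and recovery time are illustrative assumptions.
import random
import networkx as nx

def mission_survives(graph, t_mission=10.0, lam_perm=0.01, lam_trans=0.1, t_recover=0.05):
    """Return True if the network is connected at every fault/recovery event."""
    events = []  # (time, edge, kind)
    for edge in graph.edges():
        # Permanent faults: first Poisson arrival within the mission kills the link.
        t = random.expovariate(lam_perm)
        if t < t_mission:
            events.append((t, edge, "perm"))
        # Transient faults: each arrival disables the link for t_recover.
        t = random.expovariate(lam_trans)
        while t < t_mission:
            events.append((t, edge, "down"))
            events.append((min(t + t_recover, t_mission), edge, "up"))
            t += random.expovariate(lam_trans)
    dead = set()            # permanently failed links
    transient_down = set()  # links currently disabled by a transient fault
    for _, edge, kind in sorted(events):
        if kind == "perm":
            dead.add(edge)
        elif kind == "down":
            transient_down.add(edge)
        else:
            transient_down.discard(edge)
        g_now = graph.copy()
        g_now.remove_edges_from(dead | transient_down)
        if not nx.is_connected(g_now):
            return False
    return True

if __name__ == "__main__":
    net = nx.grid_2d_graph(3, 3)  # toy 3x3 mesh architecture
    trials = 2000
    ok = sum(mission_survives(net) for _ in range(trials))
    print(f"estimated connectivity probability: {ok / trials:.3f}")
```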

  15. Model-based design and experimental verification of a monitoring concept for an active-active electromechanical aileron actuation system

    NASA Astrophysics Data System (ADS)

    Arriola, David; Thielecke, Frank

    2017-09-01

    Electromechanical actuators have become a key technology for the onset of power-by-wire flight control systems in the next generation of commercial aircraft. The design of robust control and monitoring functions for these devices capable to mitigate the effects of safety-critical faults is essential in order to achieve the required level of fault tolerance. A primary flight control system comprising two electromechanical actuators nominally operating in active-active mode is considered. A set of five signal-based monitoring functions are designed using a detailed model of the system under consideration which includes non-linear parasitic effects, measurement and data acquisition effects, and actuator faults. Robust detection thresholds are determined based on the analysis of parametric and input uncertainties. The designed monitoring functions are verified experimentally and by simulation through the injection of faults in the validated model and in a test-rig suited to the actuation system under consideration, respectively. They guarantee a robust and efficient fault detection and isolation with a low risk of false alarms, additionally enabling the correct reconfiguration of the system for an enhanced operational availability. In 98% of the performed experiments and simulations, the correct faults were detected and confirmed within the time objectives set.

  16. Spectral element modelling of fault-plane reflections arising from fluid pressure distributions

    USGS Publications Warehouse

    Haney, M.; Snieder, R.; Ampuero, J.-P.; Hofmann, R.

    2007-01-01

    The presence of fault-plane reflections in seismic images, besides indicating the locations of faults, offers a possible source of information on the properties of these poorly understood zones. To better understand the physical mechanism giving rise to fault-plane reflections in compacting sedimentary basins, we numerically model the full elastic wavefield via the spectral element method (SEM) for several different fault models. Using well log data from the South Eugene Island field, offshore Louisiana, we derive empirical relationships between the elastic parameters (e.g. P-wave velocity and density) and the effective stress along both normal compaction and unloading paths. These empirical relationships guide the numerical modelling and allow the investigation of how differences in fluid pressure modify the elastic wavefield. We choose to simulate the elastic wave equation via SEM since irregular model geometries can be accommodated and slip boundary conditions at an interface, such as a fault or fracture, are implemented naturally. The method we employ for including a slip interface retains the desirable qualities of SEM in that it is explicit in time and, therefore, does not require the inversion of a large matrix. We perform a complete numerical study by forward modelling seismic shot gathers over a faulted earth model using SEM followed by seismic processing of the simulated data. With this procedure, we construct post-stack time-migrated images of the kind that are routinely interpreted in the seismic exploration industry. We dip filter the seismic images to highlight the fault-plane reflections prior to making amplitude maps along the fault plane. With these amplitude maps, we compare the reflectivity from the different fault models to diagnose which physical mechanism contributes most to observed fault reflectivity. To lend physical meaning to the properties of a locally weak fault zone characterized as a slip interface, we propose an equivalent-layer model under the assumption of weak scattering. This allows us to use the empirical relationships between density, velocity and effective stress from the South Eugene Island field to relate a slip interface to an amount of excess pore-pressure in a fault zone. © 2007 The Authors; Journal compilation © 2007 RAS.

  17. Effects of Fault Segmentation, Mechanical Interaction, and Structural Complexity on Earthquake-Generated Deformation

    NASA Astrophysics Data System (ADS)

    Haddad, David Elias

    Earth's topographic surface forms an interface across which the geodynamic and geomorphic engines interact. This interaction is best observed along crustal margins where topography is created by active faulting and sculpted by geomorphic processes. Crustal deformation manifests as earthquakes at centennial to millennial timescales. Given that nearly half of Earth's human population lives along active fault zones, a quantitative understanding of the mechanics of earthquakes and faulting is necessary to build accurate earthquake forecasts. My research relies on the quantitative documentation of the geomorphic expression of large earthquakes and the physical processes that control their spatiotemporal distributions. The first part of my research uses high-resolution topographic lidar data to quantitatively document the geomorphic expression of historic and prehistoric large earthquakes. Lidar data allow for enhanced visualization and reconstruction of structures and stratigraphy exposed by paleoseismic trenches. Lidar surveys of fault scarps formed by the 1992 Landers earthquake document the centimeter-scale erosional landforms developed by repeated winter storm-driven erosion. The second part of my research employs a quasi-static numerical earthquake simulator to explore the effects of fault roughness, friction, and structural complexities on earthquake-generated deformation. My experiments show that fault roughness plays a critical role in determining fault-to-fault rupture jumping probabilities. These results corroborate the accepted 3-5 km rupture jumping distance for smooth faults. However, my simulations show that the rupture jumping threshold distance is highly variable for rough faults due to heterogeneous elastic strain energies. Furthermore, fault roughness controls spatiotemporal variations in slip rates such that rough faults exhibit lower slip rates relative to their smooth counterparts. The central implication of these results lies in guiding the interpretation of paleoseismically derived slip rates that are used to form earthquake forecasts. The final part of my research evaluates a set of Earth science-themed lesson plans that I designed for elementary-level learning-disabled students. My findings show that a combination of concept delivery techniques is most effective for learning-disabled students and should incorporate interactive slide presentations, tactile manipulatives, teacher-assisted concept sketches, and student-led teaching to help learning-disabled students grasp Earth science concepts.

  18. Model-Based Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Kumar, Aditya; Viassolo, Daniel

    2008-01-01

    The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted take offs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms were developed and evaluated. Based on the performance and maturity of the developed algorithms two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability thereby enabling continued engine operation in the presence of these faults. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
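
    The following sketch illustrates the residual-thresholding idea behind such detection schemes, using a scalar linear Kalman filter as a simplified stand-in for the Extended Kalman Filter mentioned above. The drifting signal, noise levels, bias magnitude, and persistence threshold are illustrative assumptions, not the engine model or tuning used in the MBFTC task.

```python
# Minimal sketch of residual-based sensor fault detection: a scalar Kalman
# filter tracks a slowly varying quantity; a persistent bias injected into the
# sensor shows up in the normalized innovation, which is thresholded.
import numpy as np

rng = np.random.default_rng(0)
n = 400
truth = 100.0 + 0.01 * np.arange(n)           # slowly drifting "gas-path" quantity
meas = truth + rng.normal(0.0, 0.5, n)
meas[250:] += 3.0                              # sensor bias fault at sample 250

# Scalar Kalman filter: x_k = x_{k-1} + w,  z_k = x_k + v
q, r = 1e-3, 0.25
x, p = meas[0], 1.0
alarm, count = None, 0
for k in range(1, n):
    p = p + q                   # predict
    nu = meas[k] - x            # innovation (residual)
    s = p + r                   # innovation variance
    # flag a fault when the normalized innovation stays large
    if abs(nu) / np.sqrt(s) > 3.0:
        count += 1
        if count >= 5 and alarm is None:
            alarm = k
    else:
        count = 0
    gain = p / s                # update
    x = x + gain * nu
    p = (1.0 - gain) * p

print("fault injected at sample 250, detected at sample", alarm)
```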

  19. Method and apparatus for transfer function simulator for testing complex systems

    NASA Technical Reports Server (NTRS)

    Kavaya, M. J. (Inventor)

    1985-01-01

    A method and apparatus for testing the operation of a complex stabilization circuit in a closed loop system is presented. The method is comprised of a programmed analog or digital computing system for implementing the transfer function of a load thereby providing a predictable load. The digital computing system employs a table stored in a microprocessor in which precomputed values of the load transfer function are stored for values of input signal from the stabilization circuit over the range of interest. This technique may be used not only for isolating faults in the stabilization circuit, but also for analyzing a fault in a faulty load by so varying parameters of the computing system as to simulate operation of the actual load with the fault.
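
    A minimal sketch of the table-lookup idea follows: precompute the load's transfer characteristic over the input range of interest and interpolate at run time. The quadratic "load" is a placeholder, not the transfer function of any actual load from the invention.

```python
# Minimal sketch of the table-lookup load simulator: precompute the load's
# response over the expected range of stabilization-circuit outputs, then
# answer run-time queries by interpolation.
import numpy as np

def true_load_response(u):
    """Placeholder load transfer characteristic (illustrative only)."""
    return 0.8 * u + 0.05 * u**2

# Precompute the table once (the analogue of the values stored in the
# microprocessor's memory).
u_table = np.linspace(-10.0, 10.0, 201)
y_table = true_load_response(u_table)

def simulated_load(u):
    """Run-time lookup with linear interpolation between table entries."""
    return np.interp(u, u_table, y_table)

if __name__ == "__main__":
    for u in (-7.3, 0.0, 4.21):
        print(f"u = {u:+.2f}  table = {simulated_load(u):+.4f}  "
              f"exact = {true_load_response(u):+.4f}")
```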

  20. Drive and protection circuit for converter module of cascaded H-bridge STATCOM

    NASA Astrophysics Data System (ADS)

    Wang, Xuan; Yuan, Hongliang; Wang, Xiaoxing; Wang, Shuai; Fu, Yongsheng

    2018-04-01

    The drive and protection circuit is an important part of power electronics and is directly related to safe and stable operation. Here, a drive and protection circuit is designed for the cascaded H-bridge STATCOM. The circuit provides flexible dead-time setting, operation-status self-detection, fault priority protection, and detailed fault status uploading, which helps improve the reliability of STATCOM operation. The proposed circuit is tested and analyzed with the power electronics simulation software PSPICE (Simulation Program with IC Emphasis) and a series of experiments. The results show that the circuit can drive and control the H-bridge, process faults quickly, and offers high reliability.

  1. Safety and reliability analysis in a polyvinyl chloride batch process using dynamic simulator-case study: Loss of containment incident.

    PubMed

    Rizal, Datu; Tani, Shinichi; Nishiyama, Kimitoshi; Suzuki, Kazuhiko

    2006-10-11

    In this paper, a novel methodology in batch plant safety and reliability analysis is proposed using a dynamic simulator. A batch process involving several safety objects (e.g. sensors, controller, valves, etc.) is activated during the operational stage. The performance of the safety objects is evaluated by the dynamic simulation and a fault propagation model is generated. By using the fault propagation model, an improved fault tree analysis (FTA) method using switching signal mode (SSM) is developed for estimating the probability of failures. Time-dependent failures can be considered as unavailability of safety objects that can cause accidents in a plant. Finally, the rank of each safety object is formulated as a performance index (PI) and can be estimated using importance measures. The PI shows the prioritization of safety objects that should be investigated in a plant's safety improvement program. The output of this method can be used to define an optimal policy for safety object improvement and maintenance. The dynamic simulator was constructed using Visual Modeler (VM, the plant simulator developed by Omega Simulation Corp., Japan). A case study focuses on a loss of containment (LOC) incident in a polyvinyl chloride (PVC) batch process, which consumes the hazardous material vinyl chloride monomer (VCM).
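
    The sketch below shows, in simplified form, how per-object unavailabilities propagate through AND/OR gates to a top-event probability, with a Birnbaum-style importance measure standing in for the performance index (PI) described above. The gate structure and probabilities are illustrative assumptions, not the PVC-process fault tree.

```python
# Minimal sketch: top-event probability of a small fault tree from the
# unavailability of individual safety objects, plus a Birnbaum-style
# importance measure as a stand-in for the performance index (PI).
def or_gate(*p):   # at least one input fails (independent events)
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):  # all inputs must fail
    q = 1.0
    for pi in p:
        q *= pi
    return q

def top_event(p):
    # Illustrative tree: LOC occurs if (sensor OR controller fails) AND (valve fails)
    return and_gate(or_gate(p["sensor"], p["controller"]), p["valve"])

base = {"sensor": 0.02, "controller": 0.01, "valve": 0.005}
print("P(top event) =", top_event(base))

# Birnbaum importance: sensitivity of the top event to each basic event,
# computed as P(top | p_i = 1) - P(top | p_i = 0).
for name in base:
    hi = dict(base, **{name: 1.0})
    lo = dict(base, **{name: 0.0})
    print(f"importance({name}) = {top_event(hi) - top_event(lo):.4f}")
```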

  2. Near-surface versus fault zone damage following the 1999 Chi-Chi earthquake: Observation and simulation of repeating earthquakes

    USGS Publications Warehouse

    Chen, Kate Huihsuan; Furumura, Takashi; Rubinstein, Justin L.

    2015-01-01

    We observe crustal damage and its subsequent recovery caused by the 1999 M7.6 Chi-Chi earthquake in central Taiwan. Analysis of repeating earthquakes in Hualien region, ~70 km east of the Chi-Chi earthquake, shows a remarkable change in wave propagation beginning in the year 2000, revealing damage within the fault zone and distributed across the near surface. We use moving window cross correlation to identify a dramatic decrease in the waveform similarity and delays in the S wave coda. The maximum delay is up to 59 ms, corresponding to a 7.6% velocity decrease averaged over the wave propagation path. The waveform changes on either side of the fault are distinct. They occur in different parts of the waveforms, affect different frequencies, and the size of the velocity reductions is different. Using a finite difference method, we simulate the effect of postseismic changes in the wavefield by introducing S wave velocity anomaly in the fault zone and near the surface. The models that best fit the observations point to pervasive damage in the near surface and deep, along-fault damage at the time of the Chi-Chi earthquake. The footwall stations show the combined effect of near-surface and the fault zone damage, where the velocity reduction (2–7%) is twofold to threefold greater than the fault zone damage observed in the hanging wall stations. The physical models obtained here allow us to monitor the temporal evolution and recovering process of the Chi-Chi fault zone damage.
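
    A minimal sketch of the moving-window cross-correlation measurement follows: slide a window along a pre-event and a post-event waveform, find the lag that maximizes the correlation in each window, and convert the growth of delay with lapse time into a relative velocity change (dv/v ≈ -dt/t). The synthetic waveform and the imposed 2% delay are illustrative, not the Hualien data.

```python
# Minimal sketch of moving-window cross-correlation between a pre-event and a
# post-event repeating-earthquake waveform: each window's best lag gives a
# delay curve, and a linear delay growth dt/t maps to dv/v = -dt/t.
import numpy as np

fs = 100.0                                   # samples per second
t = np.arange(0, 20.0, 1.0 / fs)
rng = np.random.default_rng(1)
ref = np.convolve(rng.normal(size=t.size), np.hanning(25), mode="same")

eps = 0.02                                   # assumed relative travel-time delay dt/t
post = np.interp(t / (1.0 + eps), t, ref)    # post-event trace: arrivals delayed by eps*t

win, step, max_lag = int(2.0 * fs), int(0.5 * fs), int(0.6 * fs)
centers, delays = [], []
for start in range(max_lag, ref.size - win - max_lag, step):
    a = ref[start:start + win]
    lags = np.arange(-max_lag, max_lag + 1)
    cc = [np.corrcoef(a, post[start + L:start + L + win])[0, 1] for L in lags]
    best = int(lags[int(np.argmax(cc))])
    centers.append(t[start + win // 2])
    delays.append(best / fs)                 # positive: post-event window lags the reference

slope = np.polyfit(centers, delays, 1)[0]    # d(delay)/d(lapse time) = dt/t
print(f"measured dt/t = {slope:.4f}, so dv/v ~ {-slope * 100:.2f}% "
      f"(input delay was {eps * 100:.2f}%)")
```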

  3. Sliding Mode Fault Tolerant Control with Adaptive Diagnosis for Aircraft Engines

    NASA Astrophysics Data System (ADS)

    Xiao, Lingfei; Du, Yanbin; Hu, Jixiang; Jiang, Bin

    2018-03-01

    In this paper, a novel sliding mode fault tolerant control method is presented for aircraft engine systems with uncertainties and disturbances, on the basis of an adaptive diagnostic observer. Taking both sensor faults and actuator faults into account, a general model of aircraft engine control systems subject to uncertainties and disturbances is considered. Then, the corresponding augmented dynamic model is established in order to facilitate the fault diagnosis and fault tolerant controller design. Next, a suitable detection observer is designed to detect the faults effectively. By creating an adaptive diagnostic observer and applying a sliding mode strategy, the sliding mode fault tolerant controller is constructed. Robust stabilization is discussed and the closed-loop system can be stabilized robustly. It is also proven that the adaptive diagnostic observer output errors and the fault estimates converge exponentially to a set, with a convergence rate greater than a value that can be adjusted by properly choosing the design parameters. Simulation on a twin-shaft aircraft engine verifies the applicability of the proposed fault tolerant control method.

  4. 3D dynamic rupture simulation and local tomography studies following the 2010 Haiti earthquake

    NASA Astrophysics Data System (ADS)

    Douilly, Roby

    The 2010 M7.0 Haiti earthquake was the first major earthquake in southern Haiti in 250 years. As this event could represent the beginning of a new period of active seismicity in the region, and in consideration of how vulnerable the population is to earthquake damage, it is important to understand the nature of this event and how it has influenced seismic hazards in the region. Most significantly, the 2010 earthquake occurred on the secondary Leogâne thrust fault (two fault segments), not the Enriquillo Fault, the major strike-slip fault in the region, despite it being only a few kilometers away. We first use a finite element model to simulate rupture along the Leogâne fault. We varied friction and background stress to investigate the conditions that best explain observed surface deformations and why the rupture did not jump to the nearby Enriquillo fault. Our model successfully replicated rupture propagation along the two segments of the Leogâne fault, and indicated that a significant stress increase occurred on the top and to the west of the Enriquillo fault. We also investigated the potential ground shaking level in this region if a rupture similar to the Mw 7.0 2010 Haiti earthquake were to occur on the Enriquillo fault. We used a finite element method and assumptions on regional stress to simulate low frequency dynamic rupture propagation for the segment of the Enriquillo fault closer to the capital. The high-frequency ground motion components were calculated using the specific barrier model, and the hybrid synthetics were obtained by combining the low frequencies (<1 Hz) from the deterministic simulation with the high frequencies (>1 Hz) from the stochastic simulation using matched filtering at a crossover frequency of 1 Hz. The average horizontal peak ground acceleration, computed at several sites of interest through Port-au-Prince (the capital), has a value of 0.35 g. Finally, we investigated the 3D local tomography of this region. We considered 897 high-quality records from the earthquake catalog as recorded by temporary station deployments. We only considered events that had at least 6 P and 6 S arrivals, and an azimuthal gap less than 180 degrees, to simultaneously invert for hypocenters and 3D velocity structure in southern Haiti. We used the program VELEST to define a minimum 1D velocity model, which was then used as a starting model in the computer algorithm SIMULPS14 to produce the 3D tomography. Our results show a pronounced low velocity zone across the Leogâne fault, which is consistent with the sedimentary basin location from the geologic map. We also observe a southeast low velocity zone, which is consistent with a predefined structure in the morphology. Low velocity structure usually correlates with broad zones of deformation, such as the presence of cracks or faults, or with the presence of fluid in the crust. This work provides information that can be used in future studies focusing on how changes in material properties can affect rupture propagation, which is useful to assess the seismic hazard that Haiti and other regions are facing.

  5. Understanding Loss Deductions For Yard Trees

    Treesearch

    John Greene

    1998-01-01

    The sudden destruction of trees or other yard plants due to a fire, storm, or massive insect attack qualifies for a casualty loss deduction. Unfortunately, the casualty loss rules for personal use property allow deductions only for large losses. To calculate your deduction, start with the lesser of the decrease in fair market value of your property caused by the loss of...

  6. 26 CFR 1.1312-5 - Correlative deductions and inclusions for trusts or estates and legatees, beneficiaries, or heirs.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 11 2010-04-01 2010-04-01 true Correlative deductions and inclusions for trusts... of Tax Between Years and Special Limitations § 1.1312-5 Correlative deductions and inclusions for... the amount of the deduction allowed by sections 651 and 661 or the inclusion in taxable income of the...

  7. 26 CFR 1.221-1 - Deduction for interest paid on qualified education loans after December 31, 2001.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... § 1.163-7, original issue discount on a qualified education loan is not deductible until paid. See... education loans after December 31, 2001. 1.221-1 Section 1.221-1 Internal Revenue INTERNAL REVENUE SERVICE... Deductions for Individuals § 1.221-1 Deduction for interest paid on qualified education loans after December...

  8. 26 CFR 1.221-1 - Deduction for interest paid on qualified education loans after December 31, 2001.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... § 1.163-7, original issue discount on a qualified education loan is not deductible until paid. See... education loans after December 31, 2001. 1.221-1 Section 1.221-1 Internal Revenue INTERNAL REVENUE SERVICE... Deductions for Individuals § 1.221-1 Deduction for interest paid on qualified education loans after December...

  9. 26 CFR 1.221-1 - Deduction for interest paid on qualified education loans after December 31, 2001.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... § 1.163-7, original issue discount on a qualified education loan is not deductible until paid. See... education loans after December 31, 2001. 1.221-1 Section 1.221-1 Internal Revenue INTERNAL REVENUE SERVICE... Deductions for Individuals § 1.221-1 Deduction for interest paid on qualified education loans after December...

  10. A comparative study of ground motion hybrid simulations and the modified NGA ground motion predictive equations for directivity and its application to the Marmara Sea region (Turkey)

    NASA Astrophysics Data System (ADS)

    Pischiutta, M.; Akinci, A.; Spagnuolo, E.; Taroni, M.; Herrero, A.; Aochi, H.

    2016-12-01

    We have simulated strong ground motions for two Mw>7.0 rupture scenarios on the North Anatolian Fault, in the Marmara Sea within 10-20 km from Istanbul. This city is characterized by one of the highest levels of seismic risk in Europe and the Mediterranean region. The increased risk in Istanbul is due to eight destructive earthquakes that ruptured the fault system and left a seismic gap at the western portion of the 1000km-long North Anatolian Fault Zone. To estimate the ground motion characteristics and its variability in the region we have simulated physics-based rupture scenarios, producing hybrid broadband time histories. We have merged two simulation techniques: a full 3D wave propagation method to generate low-frequency seismograms (Aochi and Ulrich, 2015) and the stochastic finite-fault model approach based on a dynamic corner frequency (Motazedian and Atkinson, 2005) to simulate high-frequency seismograms (Akinci et al., 2016, submitted to BSSA, 2016). They are merged to compute realistic broadband hybrid time histories. The comparison of ground motion intensity measures (PGA, PGV, SA) resulting from our simulations with those predicted by the recent Ground Motion Prediction Equations (GMPEs) in the region (Boore & Atkinson, 2008; Chiou & Young, 2008; Akkar & Bommer, 2010; Akkar & Cagnan, 2010) seems to indicate that rupture directivity and super-shear rupture effects affect the ground motion in the Marmara Sea region. In order to account for the rupture directivity we improve the comparison using the directivity predictor proposed by Spudich & Chiu (2008). This study highlights the importance of the rupture directivity for the hazard estimation in the Marmara Sea region, especially for the city of Istanbul.
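
    The sketch below illustrates the hybrid broadband idea in its simplest form: low-pass the long-period (deterministic) synthetic and high-pass the short-period (stochastic) synthetic at the 1 Hz crossover, then sum. The zero-phase Butterworth filters and the two synthetic input traces are illustrative stand-ins for the matched filtering and simulation outputs used in the study.

```python
# Minimal sketch of building a hybrid broadband seismogram: keep the
# deterministic synthetic below the crossover frequency and the stochastic
# synthetic above it, then sum.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 50.0                 # sampling rate (Hz)
fc = 1.0                  # crossover frequency (Hz)
t = np.arange(0, 60.0, 1.0 / fs)
rng = np.random.default_rng(7)

# Stand-ins: a long-period pulse for the 3D deterministic run and band-limited
# noise for the stochastic finite-fault run (illustrative only).
low_freq_synth = np.exp(-((t - 20.0) / 4.0) ** 2) * np.sin(2 * np.pi * 0.3 * t)
high_freq_synth = rng.normal(0.0, 0.1, t.size) * np.exp(-0.1 * np.abs(t - 20.0))

b_lo, a_lo = butter(4, fc / (fs / 2.0), btype="low")
b_hi, a_hi = butter(4, fc / (fs / 2.0), btype="high")

hybrid = filtfilt(b_lo, a_lo, low_freq_synth) + filtfilt(b_hi, a_hi, high_freq_synth)
print("hybrid peak absolute amplitude:", np.max(np.abs(hybrid)))
```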

  11. Rapid acceleration leads to rapid weakening in earthquake-like laboratory experiments

    USGS Publications Warehouse

    Chang, Jefferson C.; Lockner, David A.; Reches, Z.

    2012-01-01

    After nucleation, a large earthquake propagates as an expanding rupture front along a fault. This front activates countless fault patches that slip by consuming energy stored in Earth’s crust. We simulated the slip of a fault patch by rapidly loading an experimental fault with energy stored in a spinning flywheel. The spontaneous evolution of strength, acceleration, and velocity indicates that our experiments are proxies of fault-patch behavior during earthquakes of moment magnitude (Mw) = 4 to 8. We show that seismically determined earthquake parameters (e.g., displacement, velocity, magnitude, or fracture energy) can be used to estimate the intensity of the energy release during an earthquake. Our experiments further indicate that high acceleration imposed by the earthquake’s rupture front quickens dynamic weakening by intense wear of the fault zone.

  12. 3D ground‐motion simulations of Mw 7 earthquakes on the Salt Lake City segment of the Wasatch fault zone: Variability of long‐period (T≥1  s) ground motions and sensitivity to kinematic rupture parameters

    USGS Publications Warehouse

    Moschetti, Morgan P.; Hartzell, Stephen; Ramirez-Guzman, Leonardo; Frankel, Arthur; Angster, Stephen J.; Stephenson, William J.

    2017-01-01

    We examine the variability of long-period (T ≥ 1 s) earthquake ground motions from 3D simulations of Mw 7 earthquakes on the Salt Lake City segment of the Wasatch fault zone, Utah, from a set of 96 rupture models with varying slip distributions, rupture speeds, slip velocities, and hypocenter locations. Earthquake ruptures were prescribed on a 3D fault representation that satisfies geologic constraints and maintained distinct strands for the Warm Springs and for the East Bench and Cottonwood faults. Response spectral accelerations (SA; 1.5–10 s; 5% damping) were measured, and average distance scaling was well fit by a simple functional form that depends on the near-source intensity level SA0(T) and a corner distance Rc: SA(R, T) = SA0(T) [1 + (R/Rc)]^(-1). Period-dependent hanging-wall effects manifested and increased the ground motions by factors of about 2–3, though the effects appeared partially attributable to differences in shallow site response for sites on the hanging wall and footwall of the fault. Comparisons with modern ground-motion prediction equations (GMPEs) found that the simulated ground motions were generally consistent, except within deep sedimentary basins, where simulated ground motions were greatly underpredicted. Ground-motion variability exhibited strong lateral variations and, at some sites, exceeded the ground-motion variability indicated by GMPEs. The effects on the ground motions of changing the values of the five kinematic rupture parameters can largely be explained by three predominant factors: distance to high-slip subevents, dynamic stress drop, and changes in the contributions from directivity. These results emphasize the need for further characterization of the underlying distributions and covariances of the kinematic rupture parameters used in 3D ground-motion simulations employed in probabilistic seismic-hazard analyses.
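
    The distance-scaling form quoted above can be written out directly; only the functional form comes from the abstract, and the parameter values below are illustrative.

```python
# The distance-scaling form quoted in the abstract:
#   SA(R, T) = SA0(T) * (1 + R / Rc)**(-1)
# Parameter values here are illustrative, not the fitted values from the study.
def sa_scaling(r_km, sa0, rc_km):
    return sa0 * (1.0 + r_km / rc_km) ** (-1.0)

for r in (1.0, 5.0, 10.0, 30.0):
    print(f"R = {r:5.1f} km  SA/SA0 = {sa_scaling(r, 1.0, 10.0):.3f}")
```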

  13. Neural adaptive observer-based sensor and actuator fault detection in nonlinear systems: Application in UAV.

    PubMed

    Abbaspour, Alireza; Aboutalebi, Payam; Yen, Kang K; Sargolzaei, Arman

    2017-03-01

    A new online detection strategy is developed to detect faults in sensors and actuators of unmanned aerial vehicle (UAV) systems. In this design, the weighting parameters of the Neural Network (NN) are updated by using the Extended Kalman Filter (EKF). Online adaptation of these weighting parameters helps to detect abrupt, intermittent, and incipient faults accurately. We apply the proposed fault detection system to a nonlinear dynamic model of the WVU YF-22 unmanned aircraft for its evaluation. The simulation results show that the new method has better performance in comparison with conventional recurrent neural network-based fault detection strategies. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  14. An Intelligent Actuator Fault Reconstruction Scheme for Robotic Manipulators.

    PubMed

    Xiao, Bing; Yin, Shen

    2018-02-01

    This paper investigates the difficult problem of reconstructing actuator faults for robotic manipulators. An intelligent approach with a fast reconstruction property is developed, achieved by using an observer technique. The scheme is capable of precisely reconstructing the actual actuator fault, and it is shown by Lyapunov stability analysis that the reconstruction error converges to zero in finite time. Precise and fast reconstruction performance is therefore provided for the actuator fault. The most important feature of the scheme is that it does not depend on the control law, the dynamic model of the actuator, the fault type, or the fault time profile. This reconstruction performance and the capability of the proposed approach are further validated by simulation and experimental results.

  15. Research of influence of open-winding faults on properties of brushless permanent magnets motor

    NASA Astrophysics Data System (ADS)

    Bogusz, Piotr; Korkosz, Mariusz; Powrózek, Adam; Prokop, Jan; Wygonik, Piotr

    2017-12-01

    The paper presents an analysis of the influence of selected fault states on the properties of a brushless DC motor with permanent magnets. The subject of the study was a BLDC motor designed by the authors for an unmanned aerial vehicle hybrid drive. Four parallel branches per phase were provided in the discussed 3-phase motor. After an open-winding fault in one or a few parallel branches, the motor can continue to operate. Waveforms of currents, voltages, and electromagnetic torque were determined in the discussed fault states based on the developed mathematical and simulation models. Laboratory test results concerning the influence of open-winding faults in parallel branches on the properties of the BLDC motor are presented.

  16. Quantification of a maximum injection volume of CO2 to avert geomechanical perturbations using a compositional fluid flow reservoir simulator

    NASA Astrophysics Data System (ADS)

    Jung, Hojung; Singh, Gurpreet; Espinoza, D. Nicolas; Wheeler, Mary F.

    2018-02-01

    Subsurface CO2 injection and storage alters formation pressure. Changes of pore pressure may result in fault reactivation and hydraulic fracturing if the pressure exceeds the corresponding thresholds. Most simulation models predict such thresholds utilizing relatively homogeneous reservoir rock models and do not account for CO2 dissolution in the brine phase to calculate pore pressure evolution. This study presents an estimation of reservoir capacity in terms of allowable injection volume and rate utilizing the Frio CO2 injection site in the coast of the Gulf of Mexico as a case study. The work includes laboratory core testing, well-logging data analyses, and reservoir numerical simulation. We built a fine-scale reservoir model of the Frio pilot test in our in-house reservoir simulator IPARS (Integrated Parallel Accurate Reservoir Simulator). We first performed history matching of the pressure transient data of the Frio pilot test, and then used this history-matched reservoir model to investigate the effect of the CO2 dissolution into brine and predict the implications of larger CO2 injection volumes. Our simulation results -including CO2 dissolution- exhibited 33% lower pressure build-up relative to the simulation excluding dissolution. Capillary heterogeneity helps spread the CO2 plume and facilitate early breakthrough. Formation expansivity helps alleviate pore pressure build-up. Simulation results suggest that the injection schedule adopted during the actual pilot test very likely did not affect the mechanical integrity of the storage complex. Fault reactivation requires injection volumes of at least about sixty times larger than the actual injected volume at the same injection rate. Hydraulic fracturing necessitates much larger injection rates than the ones used in the Frio pilot test. Tested rock samples exhibit ductile deformation at in-situ effective stresses. Hence, we do not expect an increase of fault permeability in the Frio sand even in the presence of fault reactivation.
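
    A minimal sketch of the Mohr-Coulomb reactivation criterion that underlies such pressure thresholds follows: injection raises pore pressure, reducing the effective normal stress on the fault until the resolved shear stress exceeds the frictional resistance. The stress state, fault dip, and friction coefficient are illustrative assumptions, not Frio site values.

```python
# Minimal sketch of a Mohr-Coulomb fault-reactivation check: injection raises
# pore pressure, lowering the effective normal stress on the fault until the
# resolved shear stress exceeds frictional resistance.
import numpy as np

def resolved_stresses(sv, sh, dip_deg):
    """Normal and shear stress (MPa) on a plane of given dip for a simple 2D
    stress state with vertical stress sv and horizontal stress sh."""
    theta = np.radians(dip_deg)
    sn = sv * np.cos(theta) ** 2 + sh * np.sin(theta) ** 2
    tau = abs((sv - sh) * np.sin(theta) * np.cos(theta))
    return sn, tau

def critical_pressure(sv, sh, dip_deg, mu=0.6, cohesion=0.0):
    """Pore pressure at which the slip criterion tau = c + mu*(sn - p) is met."""
    sn, tau = resolved_stresses(sv, sh, dip_deg)
    return sn - (tau - cohesion) / mu

sv, sh, p0 = 35.0, 25.0, 15.0       # MPa at reservoir depth (illustrative)
p_crit = critical_pressure(sv, sh, dip_deg=60.0)
print(f"initial pore pressure      : {p0:.1f} MPa")
print(f"critical pore pressure     : {p_crit:.1f} MPa")
print(f"allowable pressure build-up: {p_crit - p0:.1f} MPa")
```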

  17. Contact force structure and force chains in 3D sheared granular systems

    NASA Astrophysics Data System (ADS)

    Mair, Karen; Jettestuen, Espen; Abe, Steffen

    2010-05-01

    Faults often exhibit accumulations of granular debris, ground up to create a layer of rock flour or fault gouge separating the rigid fault walls. Numerical simulations and laboratory experiments of sheared granular materials, suggest that applied loads are preferentially transmitted across such systems by transient force networks that carry enhanced forces. The characterisation of such features is important since their nature and persistence almost certainly influence the macroscopic mechanical stability of these systems and potentially that of natural faults. 3D numerical simulations of granular shear are a valuable investigation tool since they allow us to track individual particle motions, contact forces and their evolution during applied shear, that are difficult to view directly in laboratory experiments or natural fault zones. In characterising contact force distributions, it is important to use global structure measures that allow meaningful comparisons of granular systems having e.g. different grain size distributions, as may be expected at different stages of a fault's evolution. We therefore use a series of simple measures to characterise the structure, such as distributions and correlations of contact forces that can be mapped onto a force network percolation problem as recently proposed by Ostojic and coworkers for 2D granular systems. This allows the use of measures from percolation theory to both define and characterise the force networks. We demonstrate the application of this method to 3D simulations of a sheared granular material. Importantly, we then compare our measure of the contact force structure with macroscopic frictional behaviour measured at the boundaries of our model to determine the influence of the force networks on macroscopic mechanical stability.

  18. Numerical simulations of stick-slip in fluid saturated granular fault gouge

    NASA Astrophysics Data System (ADS)

    Dorostkar, O.; Johnson, P. A.; Guyer, R. A.; Marone, C.; Carmeliet, J.

    2016-12-01

    Fluids play a key role in determining the frictional strength and stability of faults. For example, fluid flow and fluid-solid interaction in fault gouge can trigger seismicity, alter earthquake nucleation properties and cause fault zone weakening. We present results of 3D numerical simulations of stick-slip behavior in dry and saturated granular fault gouge. In the saturated case, the gouge is fully saturated and drainage is possible through the boundaries. We model the solid phase (particles) with the discrete element method (DEM) while the fluid is described by the Navier-Stokes equations and solved by computational fluid dynamics (CFD). In our model, granular gouge is sheared between two rough plates under boundary conditions of constant normal stress and constant shearing velocity at the layer boundaries. A phase-space study including shearing velocity and normal stress is taken to identify the conditions for stick-slip regime. We analyzed slip events for dry and saturated cases to determine shear stress drop, released kinetic energy and compaction. The presence of fluid tends to cause larger slip events. We observe a close correlation between the kinetic energy of the particles and of the fluid. In short, during slip, fluid flow induced by the failure and compaction of the granular system, mobilizes the particles, which increases their kinetic energy, leading to greater slip. We further observe that the solid-fluid interaction forces are equal or larger than the solid-solid interaction forces during the slip event, indicating the important influence of the fluid on the granular system. Our simulations can explain the behaviors observed in experimental studies and we are working to apply our results to tectonic faults.

  19. Ultrareliable fault-tolerant control systems

    NASA Technical Reports Server (NTRS)

    Webster, L. D.; Slykhouse, R. A.; Booth, L. A., Jr.; Carson, T. M.; Davis, G. J.; Howard, J. C.

    1984-01-01

    It is demonstrated that fault-tolerant computer systems based on redundant, independent operation, such as those on the Shuttles, are a viable alternative in fault-tolerant system designs. The ultrareliable fault-tolerant control system (UFTCS) was developed and tested in laboratory simulations of a UH-1H helicopter. UFTCS includes asymptotically stable independent control elements in a parallel, cross-linked system environment, with static redundancy providing the fault tolerance. Polling is performed among the computers, and the results allow for time-delay channel variations within tight bounds. Based on laboratory and actual flight data for the helicopter, the probability of a fault during the first 10 hr of flight with quintuple computer redundancy was found to be 1 in 290 billion, and two weeks of untended Space Station operations would experience a fault probability of 1 in 24 million. Techniques for avoiding channel divergence problems are identified.

  20. Fault tolerance of artificial neural networks with applications in critical systems

    NASA Technical Reports Server (NTRS)

    Protzel, Peter W.; Palumbo, Daniel L.; Arras, Michael K.

    1992-01-01

    This paper investigates the fault tolerance characteristics of time-continuous recurrent artificial neural networks (ANN) that can be used to solve optimization problems. The principle of operation and the performance of these networks are first illustrated by using well-known model problems like the traveling salesman problem and the assignment problem. The ANNs are then subjected to 13 simultaneous 'stuck at 1' or 'stuck at 0' faults for network sizes of up to 900 'neurons'. The effects of these faults are demonstrated and the cause of the observed fault tolerance is discussed. An application is presented in which a network performs a critical task for a real-time distributed processing system by generating new task allocations during the reconfiguration of the system. The performance degradation of the ANN in the presence of faults is investigated by large-scale simulations, and the potential benefits of delegating a critical task to a fault tolerant network are discussed.

  1. Indirect adaptive fuzzy fault-tolerant tracking control for MIMO nonlinear systems with actuator and sensor failures.

    PubMed

    Bounemeur, Abdelhamid; Chemachema, Mohamed; Essounbouli, Najib

    2018-05-10

    In this paper, an active fuzzy fault tolerant tracking control (AFFTTC) scheme is developed for a class of multi-input multi-output (MIMO) unknown nonlinear systems in the presence of unknown actuator faults, sensor failures and external disturbance. The developed control scheme deals with four kinds of faults for both sensors and actuators. The bias, drift, and loss of accuracy additive faults are considered along with the loss of effectiveness multiplicative fault. A fuzzy adaptive controller based on back-stepping design is developed to deal with actuator failures and unknown system dynamics. However, an additional robust control term is added to deal with sensor faults, approximation errors, and external disturbances. Lyapunov theory is used to prove the stability of the closed loop system. Numerical simulations on a quadrotor are presented to show the effectiveness of the proposed approach. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Heterogeneous rupture on homogenous faults: Three-dimensional spontaneous rupture simulations with thermal pressurization

    NASA Astrophysics Data System (ADS)

    Urata, Yumi; Kuge, Keiko; Kase, Yuko

    2008-11-01

    To understand the role of fluids in earthquake rupture processes, we investigated the effects of thermal pressurization on the spatial variation of dynamic rupture by computing spontaneous rupture propagation on a rectangular fault. We found that thermal pressurization can cause heterogeneity of rupture even on a fault with uniform properties. On drained faults, tractions drop linearly with increasing slip in the same way everywhere. However, by changing the drained condition to an undrained one, the slip-weakening curves become non-linear and depend on location on faults with small shear zone thickness w, and the dynamic frictional stresses vary spatially and temporally. Consequently, the super-shear transition fault length decreases for small w, and the final slip distribution can have some peaks regardless of w, especially on undrained faults. These effects should be taken into account when determining dynamic rupture parameters and modeling earthquake cycles where the presence of fluid is suggested in the source regions.

  3. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duru, Kenneth, E-mail: kduru@stanford.edu; Dunham, Eric M.; Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA

    Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge–Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.

  4. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    NASA Astrophysics Data System (ADS)

    Duru, Kenneth; Dunham, Eric M.

    2016-01-01

    Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.
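
    A minimal one-dimensional illustration of the summation-by-parts property invoked above: the classical second-order SBP first-derivative operator D = H^{-1} Q satisfies Q + Q^T = diag(-1, 0, ..., 0, 1), the discrete analogue of integration by parts that makes the energy estimates possible. This toy check is not the sixth-order operators or the 3D curvilinear solver described in the abstract.

```python
# Minimal 1D illustration of a summation-by-parts (SBP) first-derivative
# operator: D = H^{-1} Q with Q + Q^T = diag(-1, 0, ..., 0, 1).
import numpy as np

def sbp_second_order(n, h):
    """Nodes 0..n-1 with spacing h: diagonal norm H and derivative D = H^{-1} Q."""
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = 0.5 * h
    Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))   # interior central differences
    Q[0, 0], Q[0, 1] = -0.5, 0.5                   # one-sided boundary closures
    Q[-1, -1], Q[-1, -2] = 0.5, -0.5
    return H, np.linalg.solve(H, Q)

n, h = 21, 1.0 / 20
H, D = sbp_second_order(n, h)
Q = H @ D

# SBP property: Q + Q^T equals the boundary matrix B = diag(-1, 0, ..., 0, 1),
# which is what mimics integration by parts in the discrete energy estimate.
B = np.zeros((n, n))
B[0, 0], B[-1, -1] = -1.0, 1.0
print("SBP property holds:", np.allclose(Q + Q.T, B))

# Sanity check of the derivative on a smooth function.
x = np.linspace(0.0, 1.0, n)
u = np.sin(2 * np.pi * x)
err = np.max(np.abs(D @ u - 2 * np.pi * np.cos(2 * np.pi * x)))
print("max derivative error (dominated by the boundary closure):", err)
```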

  5. Children's and adults' judgments of the certainty of deductive inferences, inductive inferences, and guesses.

    PubMed

    Pillow, Bradford H; Pearson, Raeanne M; Hecht, Mary; Bremer, Amanda

    2010-01-01

    Children and adults rated their own certainty following inductive inferences, deductive inferences, and guesses. Beginning in kindergarten, participants rated deductions as more certain than weak inductions or guesses. Deductions were rated as more certain than strong inductions beginning in Grade 3, and fourth-grade children and adults differentiated strong inductions, weak inductions, and informed guesses from pure guesses. By Grade 3, participants also gave different types of explanations for their deductions and inductions. These results are discussed in relation to children's concepts of cognitive processes, logical reasoning, and epistemological development.

  6. Conditional Probabilities of Large Earthquake Sequences in California from the Physics-based Rupture Simulator RSQSim

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.

    2017-12-01

    Within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM), we are developing physics-based forecasting models for earthquake ruptures in California. We employ the 3D boundary element code RSQSim (Rate-State Earthquake Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code models rupture nucleation by rate- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault and deformation models are used to specify the fault geometry and long-term slip rates. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found a remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., rate- and state-frictional parameters), as well as the effects of model-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence rates). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.

  7. Evaluation of Cepstrum Algorithm with Impact Seeded Fault Data of Helicopter Oil Cooler Fan Bearings and Machine Fault Simulator Data

    DTIC Science & Technology

    2013-02-01

    of a bearing must be put into practice. There are many potential methods, the most traditional being the use of statistical time-domain features...accelerate degradation to test multiple bearings to gain statistical relevance and extrapolate results to scale for field conditions. Temperature...as time statistics, frequency estimation to improve the fault frequency detection. For future investigations, one can further explore the
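
    Since the excerpt above is fragmentary, the sketch below shows only the core quantity the report evaluates: the real cepstrum (inverse FFT of the log magnitude spectrum), which collapses a family of bearing fault harmonics into a single peak at the corresponding quefrency. The synthetic vibration signal and the 97 Hz fault frequency are illustrative assumptions.

```python
# Minimal sketch of the real cepstrum: the inverse FFT of the log magnitude
# spectrum turns harmonics of a bearing fault frequency into a single peak at
# the corresponding quefrency.
import numpy as np

fs = 10000.0
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(3)

# Bearing-like signal: harmonics of a 97 Hz fault frequency buried in noise.
fault_freq = 97.0
signal = sum(np.cos(2 * np.pi * k * fault_freq * t) / k for k in range(1, 6))
signal += rng.normal(0.0, 1.0, t.size)

spectrum = np.fft.rfft(signal * np.hanning(t.size))
cepstrum = np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))

quefrency = np.arange(cepstrum.size) / fs
search = (quefrency > 0.002) & (quefrency < 0.05)      # look between 20 and 500 Hz
peak_q = quefrency[search][np.argmax(cepstrum[search])]
print(f"cepstral peak at {peak_q * 1000:.2f} ms -> {1.0 / peak_q:.1f} Hz")
```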

  8. Statistical inference in comparing DInSAR and GPS data in fault areas

    NASA Astrophysics Data System (ADS)

    Barzaghi, R.; Borghi, A.; Kunzle, A.

    2012-04-01

    DInSAR and GPS data are now routinely used in geophysical investigations, e.g. for estimating the slip rate over the fault plane in seismogenic areas. This analysis is usually done by mapping the surface deformation rates as estimated by GPS and DInSAR onto the fault plane using suitable geophysical models (e.g. the Okada model). Usually, DInSAR vertical velocities and GPS horizontal velocities are used to obtain an integrated slip estimate. However, merging the two kinds of information can be critical since they may reflect a common underlying geophysical signal plus different disturbing signals that are not related to the fault dynamics. In GPS and DInSAR data analysis, these artifacts are mainly connected to signal propagation in the atmosphere and to hydrological phenomena (e.g. variation in the water table). Thus, a coherence test between the two data sets must be carried out in order to properly merge the GPS and DInSAR velocities in the inversion procedure. To this aim, statistical tests have been studied to check the compatibility of the two deformation rate estimates coming from GPS and DInSAR data analysis. This has been done according to both standard and Bayesian testing methodologies. The effectiveness of the proposed inference methods has been checked with numerical simulations in the case of a normal fault. The fault structure is defined following the Pollino fault model, and both GPS and DInSAR data are simulated according to real data acquired in this area.
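
    A minimal sketch of the kind of compatibility check described above: a two-sided z-test on the difference between the GPS and DInSAR velocity estimates at a common point, given their formal uncertainties. The velocity values are illustrative, not Pollino-area data.

```python
# Minimal sketch of a compatibility test between a GPS and a DInSAR velocity
# estimate at a common point: is the difference consistent with zero given the
# formal uncertainties?
from math import erf, sqrt

def compatibility_test(v1, s1, v2, s2, alpha=0.05):
    """Two-sided z-test of H0: the two velocity estimates share the same mean."""
    z = (v1 - v2) / sqrt(s1 ** 2 + s2 ** 2)
    # p-value from the standard normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return z, p_value, p_value > alpha

# Vertical velocities in mm/yr with 1-sigma uncertainties (illustrative values).
z, p, compatible = compatibility_test(v1=-2.1, s1=0.6, v2=-3.4, s2=0.8)
print(f"z = {z:.2f}, p = {p:.3f}, compatible at 5%: {compatible}")
```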

  9. Modeling earthquake sequences along the Manila subduction zone: Effects of three-dimensional fault geometry

    NASA Astrophysics Data System (ADS)

    Yu, Hongyu; Liu, Yajing; Yang, Hongfeng; Ning, Jieyuan

    2018-05-01

    To assess the potential of catastrophic megathrust earthquakes (MW > 8) along the Manila Trench, the eastern boundary of the South China Sea, we incorporate a 3D non-planar fault geometry in the framework of rate-state friction to simulate earthquake rupture sequences along the fault segment between 15°N-19°N of northern Luzon. Our simulation results demonstrate that the first-order fault geometry heterogeneity, the transitional-segment (possibly related to the subducting Scarborough seamount chain) connecting the steeper south segment and the flatter north segment, controls earthquake rupture behaviors. The strong along-strike curvature at the transitional-segment typically leads to partial ruptures of MW 8.3 and MW 7.8 along the southern and northern segments respectively. The entire fault occasionally ruptures in MW 8.8 events when the cumulative stress in the transitional-segment is sufficiently high to overcome the geometrical inhibition. Fault shear stress evolution, represented by the S-ratio, is clearly modulated by the width of seismogenic zone (W). At a constant plate convergence rate, a larger W indicates on average lower interseismic stress loading rate and longer rupture recurrence period, and could slow down or sometimes stop ruptures that initiated from a narrower portion. Moreover, the modeled interseismic slip rate before whole-fault rupture events is comparable with the coupling state that was inferred from the interplate seismicity distribution, suggesting the Manila trench could potentially rupture in a M8+ earthquake.

  10. Fusion information entropy method of rolling bearing fault diagnosis based on n-dimensional characteristic parameter distance

    NASA Astrophysics Data System (ADS)

    Ai, Yan-Ting; Guan, Jiao-Yue; Fei, Cheng-Wei; Tian, Jing; Zhang, Feng-Ling

    2017-05-01

    To efficiently and accurately monitor the operating status of rolling bearings with casings in real time, a fusion method based on the n-dimensional characteristic parameter distance (n-DCPD) was proposed for rolling bearing fault diagnosis using two types of signals, vibration and acoustic emission. The n-DCPD was built on four information entropies (singular spectrum entropy in the time domain, power spectrum entropy in the frequency domain, and wavelet space characteristic spectrum entropy and wavelet energy spectrum entropy in the time-frequency domain), and the basic idea of the fusion information entropy fault diagnosis method with n-DCPD was given. Using a rotor simulation test rig, the vibration and acoustic emission signals of six rolling bearing states (ball fault, inner race fault, outer race fault, inner-ball faults, inner-outer faults, and normal) were collected under different operating conditions, with emphasis on rotation speeds from 800 rpm to 2000 rpm. Using the proposed fusion information entropy method with n-DCPD, the diagnosis of rolling bearing faults was completed. The fault diagnosis results show that the fusion entropy method achieves high precision in the recognition of rolling bearing faults. The efforts of this study provide a novel and useful methodology for the fault diagnosis of aeroengine rolling bearings.
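
    The sketch below illustrates the n-DCPD idea in reduced form: describe each signal by a small vector of information entropies (two spectral-type entropies here, rather than the four entropies used in the paper) and assign a test signal to the fault class whose template vector is closest in that feature space. The synthetic signals and classes are illustrative.

```python
# Minimal sketch of the n-dimensional characteristic parameter distance idea:
# describe each vibration signal by a small entropy feature vector, then
# classify a test signal by its Euclidean distance to per-class templates.
import numpy as np

rng = np.random.default_rng(5)
fs = 5000.0
t = np.arange(0, 1.0, 1.0 / fs)

def make_signal(kind):
    base = rng.normal(0.0, 0.2, t.size)
    if kind == "outer_race":
        base += np.sign(np.sin(2 * np.pi * 90.0 * t)) * 0.8   # impulsive, harmonic-rich
    elif kind == "normal":
        base += 0.8 * np.sin(2 * np.pi * 50.0 * t)            # smooth single tone
    return base

def spectral_entropy(x):
    p = np.abs(np.fft.rfft(x)) ** 2
    p = p / p.sum()
    return float(-(p * np.log(p + 1e-15)).sum())

def amplitude_entropy(x):
    env = np.abs(x) / np.abs(x).sum()
    return float(-(env * np.log(env + 1e-15)).sum())

def features(x):
    return np.array([spectral_entropy(x), amplitude_entropy(x)])

# Class templates: mean feature vector over a few training signals per class.
classes = ["normal", "outer_race"]
templates = {c: np.mean([features(make_signal(c)) for _ in range(5)], axis=0)
             for c in classes}

test = make_signal("outer_race")
dists = {c: float(np.linalg.norm(features(test) - templates[c])) for c in classes}
print(dists, "->", min(dists, key=dists.get))
```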

  11. 26 CFR 1.162-10T - Questions and answers relating to the deduction of employee benefits under the Tax Reform Act of...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... of employee benefits under the Tax Reform Act of 1984; certain limits on amounts deductible... and Corporations § 1.162-10T Questions and answers relating to the deduction of employee benefits... amendment of section 404(b) by the Tax Reform Act of 1984 affect the deduction of employee benefits under...

  12. Modeling of fault reactivation and induced seismicity during hydraulic fracturing of shale-gas reservoirs

    EPA Science Inventory

    We have conducted numerical simulation studies to assess the potential for injection-induced fault reactivation and notable seismic events associated with shale-gas hydraulic fracturing operations. The modeling is generally tuned toward conditions usually encountered in the Marce...

  13. Near-trench slip potential of megaquakes evaluated from fault properties and conditions

    PubMed Central

    Hirono, Tetsuro; Tsuda, Kenichi; Tanikawa, Wataru; Ampuero, Jean-Paul; Shibazaki, Bunichiro; Kinoshita, Masataka; Mori, James J.

    2016-01-01

    Near-trench slip during large megathrust earthquakes (megaquakes) is an important factor in the generation of destructive tsunamis. We proposed a new approach to assessing the near-trench slip potential quantitatively by integrating laboratory-derived properties of fault materials with simulations of fault weakening and rupture propagation. Although the permeability of the sandy Nankai Trough materials is higher than that of the clayey materials from the Japan Trench, dynamic weakening by thermally pressurized fluid is greater at the Nankai Trough owing to higher friction, though initially overpressured fluid at the Nankai Trough restrains the fault weakening. Dynamic rupture simulations reproduced the large slip near the trench observed in the 2011 Tohoku-oki earthquake and predicted the possibility of a large slip of over 30 m for the impending megaquake at the Nankai Trough. Our integrative approach is applicable to subduction zones globally as a novel tool for the prediction of extreme tsunami-producing near-trench slip. PMID:27321861

  14. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratama, Cecep, E-mail: great.pratama@gmail.com; Meilano, Irwan; Nugraha, Andri Dian

    Slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on hazard level. We examine the contribution of slip rate to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, i.e., a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi using PSHA (Probabilistic Seismic Hazard Analysis). We observe that the largest influence on the hazard estimate is the crustal fault. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimate is sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For a specific site, we found the seismic hazard estimate for Sukabumi to be between 0.4904 and 0.8465 g, with uncertainty between 0.0847 and 0.2389 g and COV between 17.7% and 29.8%.
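    The sketch below shows the shape of such a Monte Carlo sensitivity study: slip-rate samples are drawn from an assumed distribution, propagated through a hazard function, and summarized by the mean, uncertainty and coefficient of variation. The hazard function and all numbers are placeholders; a real PSHA would use recurrence models and ground-motion prediction equations rather than this toy mapping.

```python
import numpy as np

rng = np.random.default_rng(0)

def pga_hazard(slip_rate_mm_yr):
    """Placeholder mapping from fault slip rate to a 500-year PGA (g).
    Purely illustrative; not a real hazard model."""
    return 0.5 + 0.05 * np.log1p(slip_rate_mm_yr)

# Assumed slip-rate distribution for the crustal fault (values are made up)
slip_rates = rng.normal(loc=4.0, scale=1.0, size=10_000).clip(min=0.1)

pga = pga_hazard(slip_rates)
mean, std = pga.mean(), pga.std(ddof=1)
print(f"PGA mean = {mean:.3f} g, uncertainty = {std:.3f} g, COV = {100 * std / mean:.1f}%")
```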

  15. Common faults and their impacts for rooftop air conditioners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Breuker, M.S.; Braun, J.E.

    This paper identifies important faults and their performance impacts for rooftop air conditioners. The frequencies of occurrence and the relative costs of service for different faults were estimated through analysis of service records. Several of the important and difficult to diagnose refrigeration cycle faults were simulated in the laboratory. Also, the impacts on several performance indices were quantified through transient testing for a range of conditions and fault levels. The transient test results indicated that fault detection and diagnostics could be performed using methods that incorporate steady-state assumptions and models. Furthermore, the fault testing led to a set of generic rules for the impacts of faults on measurements that could be used for fault diagnoses. The average impacts of the faults on cooling capacity and coefficient of performance (COP) were also evaluated. Based upon the results, all of the faults are significant at the levels introduced, and should be detected and diagnosed by an FDD system. The data set obtained during this work was very comprehensive, and was used to design and evaluate the performance of an FDD method that will be reported in a future paper.

  16. Distributed Fault Detection Based on Credibility and Cooperation for WSNs in Smart Grids.

    PubMed

    Shao, Sujie; Guo, Shaoyong; Qiu, Xuesong

    2017-04-28

    Due to the increasingly important role in monitoring and data collection that sensors play, accurate and timely fault detection is a key issue for wireless sensor networks (WSNs) in smart grids. This paper presents a novel distributed fault detection mechanism for WSNs based on credibility and cooperation. Firstly, a reasonable credibility model of a sensor is established to identify any suspicious status of the sensor according to its own temporal data correlation. Based on the credibility model, the suspicious sensor is then chosen to launch fault diagnosis requests. Secondly, the sending time of fault diagnosis request is discussed to avoid the transmission overhead brought about by unnecessary diagnosis requests and improve the efficiency of fault detection based on neighbor cooperation. The diagnosis reply of a neighbor sensor is analyzed according to its own status. Finally, to further improve the accuracy of fault detection, the diagnosis results of neighbors are divided into several classifications to judge the fault status of the sensors which launch the fault diagnosis requests. Simulation results show that this novel mechanism can achieve high fault detection ratio with a small number of fault diagnoses and low data congestion probability.
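    A toy version of the two stages described above, a self-check of a sensor's credibility from its own temporal trend followed by a majority vote over neighbor replies, is sketched below. The thresholds and the robust-deviation test are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def is_suspicious(history, new_reading, k=3.0):
    """Flag a reading whose deviation from the sensor's own recent trend exceeds
    k robust standard deviations (median/MAD of the first differences)."""
    diffs = np.diff(np.asarray(history, dtype=float))
    mad = np.median(np.abs(diffs - np.median(diffs))) + 1e-9
    predicted = history[-1] + np.median(diffs)
    return abs(new_reading - predicted) > k * 1.4826 * mad

def neighbors_say_faulty(own_reading, neighbor_readings, tol):
    """Majority vote: the suspicious sensor is declared faulty when most neighbors
    disagree with it by more than the tolerance."""
    disagree = sum(abs(own_reading - r) > tol for r in neighbor_readings)
    return disagree > len(neighbor_readings) / 2

if is_suspicious([20.1, 20.2, 20.2, 20.3], new_reading=27.5):
    print(neighbors_say_faulty(27.5, [20.4, 20.2, 26.9, 20.5], tol=2.0))  # True
```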

  17. Distributed Fault Detection Based on Credibility and Cooperation for WSNs in Smart Grids

    PubMed Central

    Shao, Sujie; Guo, Shaoyong; Qiu, Xuesong

    2017-01-01

    Due to the increasingly important role in monitoring and data collection that sensors play, accurate and timely fault detection is a key issue for wireless sensor networks (WSNs) in smart grids. This paper presents a novel distributed fault detection mechanism for WSNs based on credibility and cooperation. Firstly, a reasonable credibility model of a sensor is established to identify any suspicious status of the sensor according to its own temporal data correlation. Based on the credibility model, the suspicious sensor is then chosen to launch fault diagnosis requests. Secondly, the sending time of fault diagnosis request is discussed to avoid the transmission overhead brought about by unnecessary diagnosis requests and improve the efficiency of fault detection based on neighbor cooperation. The diagnosis reply of a neighbor sensor is analyzed according to its own status. Finally, to further improve the accuracy of fault detection, the diagnosis results of neighbors are divided into several classifications to judge the fault status of the sensors which launch the fault diagnosis requests. Simulation results show that this novel mechanism can achieve high fault detection ratio with a small number of fault diagnoses and low data congestion probability. PMID:28452925

  18. Imaging of earthquake faults using small UAVs as a pathfinder for air and space observations

    USGS Publications Warehouse

    Donnellan, Andrea; Green, Joseph; Ansar, Adnan; Aletky, Joseph; Glasscoe, Margaret; Ben-Zion, Yehuda; Arrowsmith, J. Ramón; DeLong, Stephen B.

    2017-01-01

    Large earthquakes cause billions of dollars in damage and extensive loss of life and property. Geodetic and topographic imaging provide measurements of transient and long-term crustal deformation needed to monitor fault zones and understand earthquakes. Earthquake-induced strain and rupture characteristics are expressed in topographic features imprinted on the landscapes of fault zones. Small UAVs provide an efficient and flexible means to collect multi-angle imagery to reconstruct fine scale fault zone topography and provide surrogate data to determine requirements for and to simulate future platforms for air- and space-based multi-angle imaging.

  19. Multiple sensor fault diagnosis for dynamic processes.

    PubMed

    Li, Cheng-Chih; Jeng, Jyh-Cheng

    2010-10-01

    Modern industrial plants are usually large in scale and contain a great number of sensors. Sensor fault diagnosis is crucial and necessary for process safety and optimal operation. This paper proposes a systematic approach to detect, isolate and identify multiple sensor faults for multivariate dynamic systems. The current work first defines deviation vectors for sensor observations, and further defines and derives the basic sensor fault matrix (BSFM), consisting of the normalized basic fault vectors, by several different methods. By projecting a process deviation vector onto the space spanned by the BSFM, this research uses a vector with the resulting weights in each direction for multiple sensor fault diagnosis. This study also proposes a novel monitoring index and derives the corresponding sensor fault detectability. The study also utilizes that vector to isolate and identify multiple sensor faults, and discusses isolatability and identifiability. Simulation examples and comparison with two conventional PCA-based contribution plots are presented to demonstrate the effectiveness of the proposed methodology. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
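    A minimal sketch of the projection idea follows: the deviation vector is decomposed onto normalized fault directions (the columns of an assumed BSFM) by least squares, and sensors whose weights exceed a threshold are isolated. The identity-matrix BSFM, the threshold and the data are illustrative assumptions only.

```python
import numpy as np

def fault_direction_weights(deviation, bsfm):
    """Least-squares projection of a process deviation vector onto the space
    spanned by the normalized basic sensor-fault directions (columns of bsfm)."""
    weights, *_ = np.linalg.lstsq(bsfm, deviation, rcond=None)
    return weights

def isolate_faulty_sensors(weights, threshold=3.0):
    """Sensors whose fault-direction weight exceeds the threshold are isolated."""
    return [i for i, w in enumerate(weights) if abs(w) > threshold]

# Illustrative case: 4 sensors, unit fault directions, a +5 bias on sensor index 2
bsfm = np.eye(4)
deviation = np.array([0.1, -0.2, 5.0, 0.3])
print(isolate_faulty_sensors(fault_direction_weights(deviation, bsfm)))  # -> [2]
```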

  20. Formal Validation of Fault Management Design Solutions

    NASA Technical Reports Server (NTRS)

    Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John

    2013-01-01

    The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.

  1. A Kalman Filter Based Technique for Stator Turn-Fault Detection of the Induction Motors

    NASA Astrophysics Data System (ADS)

    Ghanbari, Teymoor; Samet, Haidar

    2017-11-01

    Monitoring of Induction Motors (IMs) through the stator current for the diagnosis of different faults has considerable economic and technical advantages in comparison with other techniques in this context. Among the different faults of an IM, stator and bearing faults are the most probable types, and they can be detected by analyzing signatures of the stator currents. One of the most reliable indicators for fault detection in IMs is the lower sidebands of the power frequency in the stator currents. This paper deals with a novel, simple technique for detecting stator turn-faults of IMs. The frequencies of the lower sidebands are determined using the motor specifications and their amplitudes are estimated by a Kalman Filter (KF). The Instantaneous Total Harmonic Distortion (ITHD) of these harmonics is calculated. Since the variation of the ITHD of the three-phase currents is considerable in the case of a stator turn-fault, the fault can be detected confidently using this criterion. Different simulation results verify the high performance of the proposed method. The performance of the method is also confirmed by experiments.
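    One simple way to realize the amplitude-estimation step is a two-state (in-phase/quadrature) Kalman filter tracking a known sideband frequency, as sketched below; an ITHD-like indicator then follows from the ratio of the estimated sideband amplitude to the fundamental amplitude. This is a generic sketch under assumed noise settings, not the authors' exact filter design.

```python
import numpy as np

def kalman_sideband_amplitude(x, fs, f_sb, q=1e-6, r=1e-2):
    """Track the amplitude of a known sideband frequency f_sb (Hz) in the sampled
    current x (sampling rate fs) with a random-walk in-phase/quadrature state."""
    state = np.zeros(2)                 # [a, b] such that s(t) = a*cos + b*sin
    P, Q, R = np.eye(2), q * np.eye(2), r
    t = np.arange(len(x)) / fs
    amp = np.zeros(len(x))
    for k in range(len(x)):
        P = P + Q                                        # predict (random walk)
        H = np.array([np.cos(2*np.pi*f_sb*t[k]), np.sin(2*np.pi*f_sb*t[k])])
        S = H @ P @ H + R
        K = P @ H / S                                    # Kalman gain
        state = state + K * (x[k] - H @ state)           # measurement update
        P = P - np.outer(K, H) @ P
        amp[k] = np.hypot(state[0], state[1])
    return amp

# Example indicator: ithd = kalman_sideband_amplitude(i_a, fs, f_lower) / i_fund_amplitude
```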

  2. Study on the Evaluation Method for Fault Displacement: Probabilistic Approach Based on Japanese Earthquake Rupture Data - Distributed fault displacements -

    NASA Astrophysics Data System (ADS)

    Inoue, N.; Kitada, N.; Tonagi, M.

    2016-12-01

    Distributed fault displacements in Probabilistic Fault Displacement Hazard Analysis (PFDHA) play an important role in the evaluation of important facilities such as nuclear installations. In Japan, nuclear installations must be constructed where there is no possibility of displacement occurring on active faults during an earthquake. Youngs et al. (2003) defined distributed faulting as displacement on other faults, shears, or fractures in the vicinity of the principal rupture, in response to the principal faulting. Other researchers have treated data on distributed faulting around principal faults and modeled it according to their own definitions (e.g. Petersen et al., 2011; Takao et al., 2013). We organized Japanese fault displacement data and constructed slip-distance relationships depending on fault type. In the case of reverse faults, the slip-distance relationship on the footwall indicated a different trend compared with that on the hanging wall. The process zone or damage zone has been studied as a weak structure around principal faults; its density or number decreases rapidly away from the principal fault. We contrasted the trend of these zones with that of the distributed slip-distance distributions. Subsurface FEM simulations were carried out to investigate the stress distribution around principal faults. The results indicated a similar trend compared with the field observations. This research was part of the 2014-2015 research project 'Development of evaluating method for fault displacement' by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.

  3. The myth of induction in qualitative nursing research.

    PubMed

    Bergdahl, Elisabeth; Berterö, Carina M

    2015-04-01

    In nursing today, it remains unclear what constitutes a good foundation for qualitative scientific inquiry. There is a tendency to define qualitative research as a form of inductive inquiry; deductive practice is seldom discussed, and when it is, this usually occurs in the context of data analysis. We will look at how the terms 'induction' and 'deduction' are used in qualitative nursing science and by qualitative research theorists, and relate these uses to the traditional definitions of these terms by Popper and other philosophers of science. We will also question the assertion that qualitative research is or should be inductive. The position we defend here is that qualitative research should use deductive methods. We also see a need to understand the difference between the creative process needed to create theory and the justification of a theory. Our position is that misunderstandings regarding the philosophy of science and the role of inductive and deductive logic and science are still harming the development of nursing theory and science. The purpose of this article is to discuss and reflect upon inductive and deductive views of science as well as inductive and deductive analyses in qualitative research. We start by describing inductive and deductive methods and logic from a philosophy of science perspective, and we examine how the concepts of induction and deduction are often described and used in qualitative methods and nursing research. Finally, we attempt to provide a theoretical perspective that reconciles the misunderstandings regarding induction and deduction. Our conclusion is that openness towards deductive thinking and testing hypotheses is needed in qualitative nursing research. We must also realize that strict induction will not create theory; to generate theory, a creative leap is needed. © 2014 John Wiley & Sons Ltd.

  4. Strong ground motion prediction applying dynamic rupture simulations for Beppu-Haneyama Active Fault Zone, southwestern Japan

    NASA Astrophysics Data System (ADS)

    Yoshimi, M.; Matsushima, S.; Ando, R.; Miyake, H.; Imanishi, K.; Hayashida, T.; Takenaka, H.; Suzuki, H.; Matsuyama, H.

    2017-12-01

    We conducted strong ground motion prediction for the active Beppu-Haneyama Fault zone (BHFZ), Kyushu island, southwestern Japan. Since the BHFZ runs through the cities of Oita and Beppu, strong ground motion as well as fault displacement may severely affect these cities. We constructed a 3-dimensional velocity structure model of the Beppu Bay sedimentary basin, through which the fault zone runs and where the cities of Oita and Beppu are located. The minimum shear wave velocity of the 3D model is 500 m/s; additional 1D structures are modeled for sites with softer Holocene plain sediments. We observed, collected, and compiled data from microtremor surveys, ground motion observations, boreholes, etc., including phase velocities and H/V ratios. A finer structure of the Oita Plain is modeled on a 250 m mesh using an empirical relation among N-value, lithology, depth and Vs derived from borehole data, and then validated against the phase velocity data obtained from dense microtremor array observations (Yoshimi et al., 2016). Synthetic ground motion has been calculated with a hybrid technique composed of a stochastic Green's function method (for the high-frequency wavefield), a 3D finite-difference method (low-frequency wavefield) and a 1D amplification calculation. The fault geometry has been determined based on reflection surveys and the active fault map. The rake angles are calculated with a dynamic rupture simulation considering three fault segments under a stress field estimated from the source mechanisms of earthquakes around the faults (Ando et al., JpGU-AGU2017). Fault parameters such as the average stress drop and asperity size are determined based on the empirical relations proposed by Irikura and Miyake (2001). As a result, ground motion exceeding 100 cm/s is predicted on the hanging wall side of the Oita Plain. This work is supported by the Comprehensive Research on the Beppu-Haneyama Fault Zone funded by the Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan.

  5. A Performance Prediction Model for a Fault-Tolerant Computer During Recovery and Restoration

    NASA Technical Reports Server (NTRS)

    Obando, Rodrigo A.; Stoughton, John W.

    1995-01-01

    The modeling and design of a fault-tolerant multiprocessor system is addressed. Of interest is the behavior of the system during recovery and restoration after a fault has occurred. The multiprocessor systems are based on the Algorithm to Architecture Mapping Model (ATAMM), and the fault considered is the death of a processor. The developed model is useful in the determination of performance bounds of the system during recovery and restoration. The performance bounds include the time to recover from the fault, the time to restore the system, and any permanent delay in the input-to-output latency after the system has regained steady state. An implementation of an ATAMM-based computer was developed for a four-processor generic VHSIC spaceborne computer (GVSC) as the target system. A simulation of the GVSC was also written based on the code used in the ATAMM Multicomputer Operating System (AMOS). The simulation is used to verify the new model for tracking the propagation of delay through the system and predicting the behavior of the transient state of recovery and restoration. The model is shown to accurately predict the transient behavior of an ATAMM-based multicomputer during recovery and restoration.

  6. Building a 3D faulted a priori model for stratigraphic inversion: Illustration of a new methodology applied on a North Sea field case study

    NASA Astrophysics Data System (ADS)

    Rainaud, Jean-François; Clochard, Vincent; Delépine, Nicolas; Crabié, Thomas; Poudret, Mathieu; Perrin, Michel; Klein, Emmanuel

    2018-07-01

    Accurate reservoir characterization is needed throughout the development of an oil and gas field study. It helps build 3D numerical reservoir simulation models for estimating the original oil and gas volumes in place and for simulating fluid flow behavior. At a later stage of field development, reservoir characterization can also help decide which recovery techniques should be used for fluid extraction. In complex media, such as faulted reservoirs, predicting flow behavior within volumes close to faults can be a very challenging issue. During the development plan, it is necessary to determine which types of communication exist between faults or which potential barriers exist for fluid flow. Solving these issues rests on accurate fault characterization. In most cases, faults are not preserved along reservoir characterization workflows: the memory of the faults interpreted from seismic data is not kept during seismic inversion and the further interpretation of its results. The goal of our study is, first, to integrate a 3D fault network as a priori information into a model-based stratigraphic inversion procedure. Secondly, we apply our methodology to a well-known oil and gas case study over a typical North Sea field (UK Northern North Sea) in order to demonstrate its added value for determining reservoir properties. More precisely, the a priori model is composed of several geological units populated with physical attributes extrapolated from well log data following the deposition mode; however, usual a priori model building methods respect neither the 3D fault geometry nor the stratification dips on the fault sides. We address this difficulty by applying an efficient flattening method to each stratigraphic unit in our workflow. Even before seismic inversion, the obtained stratigraphic model has been used directly to model synthetic seismic data in our case study. Comparisons of the synthetic seismic data obtained from our 3D fault network model give much lower residuals than with a "basic" stratigraphic model. Finally, we apply our model-based inversion considering both faulted and non-faulted a priori models. By comparing the rock impedance results obtained in the two cases, we see a better delineation of the Brent reservoir compartments when using the 3D faulted a priori model built with our method.

  7. Differential equations governing slip-induced pore-pressure fluctuations in a water-saturated granular medium

    USGS Publications Warehouse

    Iverson, R.M.

    1993-01-01

    Macroscopic frictional slip in water-saturated granular media occurs commonly during landsliding, surface faulting, and intense bedload transport. A mathematical model of dynamic pore-pressure fluctuations that accompany and influence such sliding is derived here by both inductive and deductive methods. The inductive derivation shows how the governing differential equations represent the physics of the steadily sliding array of cylindrical fiberglass rods investigated experimentally by Iverson and LaHusen (1989). The deductive derivation shows how the same equations result from a novel application of Biot's (1956) dynamic mixture theory to macroscopic deformation. The model consists of two linear differential equations and five initial and boundary conditions that govern solid displacements and pore-water pressures. Solid displacements and water pressures are strongly coupled, in part through a boundary condition that ensures mass conservation during irreversible pore deformation that occurs along the bumpy slip surface. Feedback between this deformation and the pore-pressure field may yield complex system responses. The dual derivations of the model help explicate key assumptions. For example, the model requires that the dimensionless parameter B, defined here through normalization of Biot's equations, is much larger than one. This indicates that solid-fluid coupling forces are dominated by viscous rather than inertial effects. A tabulation of physical and kinematic variables for the rod-array experiments of Iverson and LaHusen and for various geologic phenomena shows that the model assumptions commonly are satisfied. A subsequent paper will describe model tests against experimental data. © 1993 International Association for Mathematical Geology.

  8. 46 CFR 69.119 - Spaces deducted from gross tonnage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... gear, capstan, windlass, and chain locker is deductible. A fore peak used exclusively as chain locker... oils, blocks, hawsers, rigging, deck gear, or other boatswain's stores for daily use is deductible. The...

  9. 46 CFR 69.119 - Spaces deducted from gross tonnage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... gear, capstan, windlass, and chain locker is deductible. A fore peak used exclusively as chain locker... oils, blocks, hawsers, rigging, deck gear, or other boatswain's stores for daily use is deductible. The...

  10. 46 CFR 69.119 - Spaces deducted from gross tonnage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... gear, capstan, windlass, and chain locker is deductible. A fore peak used exclusively as chain locker... oils, blocks, hawsers, rigging, deck gear, or other boatswain's stores for daily use is deductible. The...

  11. 46 CFR 69.119 - Spaces deducted from gross tonnage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... gear, capstan, windlass, and chain locker is deductible. A fore peak used exclusively as chain locker... oils, blocks, hawsers, rigging, deck gear, or other boatswain's stores for daily use is deductible. The...

  12. Impact of pre- and/or syn-tectonic salt layers in the hangingwall geometry of a kinked-planar extensional fault: insights from analogue modelling and comparison with the Parentis basin (bay of Biscay)

    NASA Astrophysics Data System (ADS)

    Ferrer, O.; Vendeville, B. C.; Roca, E.

    2012-04-01

    Using sandbox analogue modelling we determine the role played by a pre-kinematic or a syn-kinematic viscous salt layer during rollover folding of the hangingwall of a normal fault with a variable kinked-planar geometry, as well as to understand the origin and the mechanisms that control the formation, kinematic evolution and geometry of salt structures developed in the hangingwall of this fault. The experiments we conducted consisted of nine models made of dry quartz-sand (35μm average grain size) simulating brittle rocks and a viscous silicone polymer (SMG 36 from Dow Corning) simulating salt in nature. The models were constructed between two end walls, one of which was fixed, whereas the other was moved by a motor-driven worm screw. The fixed wall was part of the rigid footwall of the model's master border fault. This fault was simulated using three different wood block configurations, each overlain by a flexible (but not stretchable) sheet that was attached to the mobile endwall of the model. We applied three different infill hangingwall configurations to each fault geometry: (1) without silicone (sand only), (2) sand overlain by a pre-kinematic silicone layer deposited above the entire hangingwall, and (3) sand partly overlain by a syn-kinematic silicone layer that covered only parts of the hangingwall. All models were subjected to 14 cm of basement extension in a direction orthogonal to that of the border fault. Results show that the presence of a viscous layer (silicone) clearly controls the deformation pattern of the hangingwall. Thus, regardless of the silicone layer's geometry (either pre- or syn-extensional) or the geometry of the extensional fault, the silicone layer acts as a very efficient detachment level separating two different structural styles in each unit. In particular, the silicone layer acts as an extensional ductile shear zone inhibiting upward propagation of normal faults and/or shear bands from the sub-silicone layers. Whereas the basement is affected by antithetic normal faults that are more or less complex depending on the geometry of the master fault, the lateral flow of the silicone produces salt-cored anticlines, walls and diapirs in the overburden of the hangingwall. The mechanical behavior of the silicone layer as an extensional shear zone, combined with the lateral changes in pressure gradients due to overburden thickness changes, triggered silicone migration from the half-graben depocenter towards the rollover shoulder. As a result, the accumulation of silicone produces gentle silicone-cored anticlines and local diapirs with minor extensional faults. Upward fault propagation from the sub-silicone "basement" to the supra-silicone unit only occurs either when the supra- and sub-silicone materials are welded, or when the amount of slip along the master fault is large enough that the tip of the silicone reaches the junction between the upper and lower panels of the master fault. Comparison of the results of these models with data from the western offshore Parentis Basin (Eastern Bay of Biscay) validates the structural interpretation of this region.

  13. Multi-version software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1989-01-01

    A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and the coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between the coverage provided by some higher metrics and the elimination of faults in the code. Back-to-back testing continued to serve as an efficient mechanism for the removal of uncorrelated faults and common-cause faults of variable span. Work also continued on software reliability estimation methods based on non-random sampling, and on the relationship between software reliability and the code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were finished, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the Acceptance testing scheme.

  14. Sliding Mode Observer-Based Current Sensor Fault Reconstruction and Unknown Load Disturbance Estimation for PMSM Driven System.

    PubMed

    Zhao, Kaihui; Li, Peng; Zhang, Changfan; Li, Xiangfei; He, Jing; Lin, Yuliang

    2017-12-06

    This paper proposes a new scheme for reconstructing current sensor faults and estimating the unknown load disturbance for a permanent magnet synchronous motor (PMSM)-driven system. First, the original PMSM system is transformed into two subsystems: the first subsystem has unknown system load disturbances, which are unrelated to sensor faults, and the second subsystem has sensor faults but is free from unknown load disturbances. By introducing a new state variable, the augmented subsystem that has sensor faults can be transformed into one with actuator faults. Second, two sliding mode observers (SMOs) are designed: the unknown load disturbance is estimated by the first SMO in the subsystem with unknown load disturbance, and the sensor faults are reconstructed using the second SMO in the augmented subsystem with sensor faults. The gains of the proposed SMOs and their stability analysis are developed via the solution of linear matrix inequalities (LMIs). Finally, the effectiveness of the proposed scheme was verified by simulations and experiments. The results demonstrate that the proposed scheme can reconstruct current sensor faults and estimate the unknown load disturbance for the PMSM-driven system.
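    For intuition, a scalar sliding mode observer is sketched below for a first-order system x' = -a*x + u + f(t) with measured output y = x: the low-pass filtered switching injection approximates the unknown signal f (a fault or disturbance), provided the gain rho bounds |f|. This is a didactic single-state analogue only; the paper's observers are multivariable and LMI-tuned.

```python
import numpy as np

def smo_reconstruct(y, u, a=2.0, rho=5.0, dt=1e-3, tau=0.02):
    """Sliding mode observer for x' = -a*x + u + f(t), y = x; returns an estimate
    of f(t) from the filtered equivalent output injection."""
    x_hat, f_hat = 0.0, 0.0
    f_est = np.zeros(len(y))
    for k in range(len(y)):
        nu = rho * np.sign(y[k] - x_hat)              # discontinuous injection
        x_hat += dt * (-a * x_hat + u[k] + nu)        # observer state update
        f_hat += dt * (nu - f_hat) / tau              # low-pass filter of the injection
        f_est[k] = f_hat
    return f_est
```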

  15. Vibration signal models for fault diagnosis of planet bearings

    NASA Astrophysics Data System (ADS)

    Feng, Zhipeng; Ma, Haoqun; Zuo, Ming J.

    2016-05-01

    Rolling element bearings are key components of planetary gearboxes. Among them, the motion of planet bearings is very complex, encompassing both spinning and revolution. Therefore, planet bearing vibrations are highly intricate and their fault characteristics are completely different from those of the fixed-axis case, making planet bearing fault diagnosis a difficult topic. In order to address this issue, we derive the explicit equations for calculating the fault characteristic frequencies of the outer race, rolling element and inner race, considering the complex motion of planet bearings. We also develop the planet bearing vibration signal model for each fault case, considering the modulation effects of load zone passing, the time-varying angle between the gear pair mesh and the fault-induced impact force, as well as the time-varying vibration transfer path. Based on the developed signal models, we derive the explicit equations of the Fourier spectrum in each fault case, and summarize the vibration spectral characteristics respectively. The theoretical derivations are illustrated by numerical simulation and further validated experimentally, and all three fault cases (i.e. outer race, rolling element and inner race localized faults) are diagnosed.
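    For comparison, the classical fixed-axis bearing defect frequencies, which the planet-bearing derivation generalizes by adding carrier revolution and the time-varying transfer path, can be computed as below. The formulas are the standard textbook ones; the geometry numbers in the example are arbitrary.

```python
import numpy as np

def bearing_defect_frequencies(fr, n_rollers, d_roller, d_pitch, contact_angle_deg=0.0):
    """Classical fixed-axis bearing defect frequencies (Hz) for shaft rate fr (Hz)."""
    ratio = d_roller / d_pitch * np.cos(np.deg2rad(contact_angle_deg))
    return {
        "BPFO": 0.5 * n_rollers * fr * (1 - ratio),             # outer-race defect
        "BPFI": 0.5 * n_rollers * fr * (1 + ratio),             # inner-race defect
        "BSF":  0.5 * d_pitch / d_roller * fr * (1 - ratio**2), # rolling-element defect
        "FTF":  0.5 * fr * (1 - ratio),                         # cage (fundamental train)
    }

print(bearing_defect_frequencies(fr=25.0, n_rollers=9, d_roller=7.9, d_pitch=34.5))
```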

  16. A phase angle based diagnostic scheme to planetary gear faults diagnostics under non-stationary operational conditions

    NASA Astrophysics Data System (ADS)

    Feng, Ke; Wang, Kesheng; Ni, Qing; Zuo, Ming J.; Wei, Dongdong

    2017-11-01

    Planetary gearboxes are critical components of rotating machinery, widely used in wind turbines, aerospace and transmission systems in heavy industry. Thus, it is important to monitor planetary gearboxes, especially for fault diagnostics, during their operation. In practice, however, the operational conditions of planetary gearboxes are often characterized by variations of rotational speeds and loads, which may bring difficulties to fault diagnosis based on the measured vibrations. In this paper, phase angle data extracted from measured planetary gearbox vibrations are used for fault detection under non-stationary operational conditions. Together with sample entropy, fault diagnosis of the planetary gearbox is implemented. The proposed scheme is explained and demonstrated in both simulation and experimental studies. The scheme proves to be effective and offers advantages for the fault diagnosis of planetary gearboxes under non-stationary operational conditions.
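    Since sample entropy is central to the scheme, a compact O(N^2) implementation (a common simplified variant) is sketched below; m and the tolerance factor are the usual defaults and the paper's settings may differ.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r, N) of a 1-D series with tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def match_count(dim):
        templates = np.lib.stride_tricks.sliding_window_view(x, dim)
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance to all later templates (self-matches excluded)
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    b, a = match_count(m), match_count(m + 1)
    return float(-np.log(a / b)) if a > 0 and b > 0 else float("inf")
```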

  17. Fault tolerant control of multivariable processes using auto-tuning PID controller.

    PubMed

    Yu, Ding-Li; Chang, T K; Yu, Ding-Wen

    2005-02-01

    Fault tolerant control of dynamic processes is investigated in this paper using an auto-tuning PID controller. A fault tolerant control scheme is proposed comprising an auto-tuning PID controller based on an adaptive neural network model. The model is trained online using the extended Kalman filter (EKF) algorithm to learn the system's post-fault dynamics. Based on this model, the PID controller adjusts its parameters to compensate for the effects of the faults, so that the control performance recovers from the degradation. The auto-tuning algorithm for the PID controller is derived with the Lyapunov method and, therefore, the model-predicted tracking error is guaranteed to converge asymptotically. The method is applied to a simulated two-input two-output continuous stirred tank reactor (CSTR) with various faults, which demonstrates the applicability of the developed scheme to industrial processes.

  18. An uncertainty-based distributed fault detection mechanism for wireless sensor networks.

    PubMed

    Yang, Yang; Gao, Zhipeng; Zhou, Hang; Qiu, Xuesong

    2014-04-25

    Exchanging too many messages for fault detection will cause not only a degradation of the network quality of service, but also represents a huge burden on the limited energy of sensors. Therefore, we propose an uncertainty-based distributed fault detection through aided judgment of neighbors for wireless sensor networks. The algorithm considers the serious influence of sensing measurement loss and therefore uses Markov decision processes for filling in missing data. Most important of all, fault misjudgments caused by uncertainty conditions are the main drawbacks of traditional distributed fault detection mechanisms. We draw on the experience of evidence fusion rules based on information entropy theory and the degree of disagreement function to increase the accuracy of fault detection. Simulation results demonstrate our algorithm can effectively reduce communication energy overhead due to message exchanges and provide a higher detection accuracy ratio.

  19. Adaptive sensor-fault tolerant control for a class of multivariable uncertain nonlinear systems.

    PubMed

    Khebbache, Hicham; Tadjine, Mohamed; Labiod, Salim; Boulkroune, Abdesselem

    2015-03-01

    This paper deals with the active fault tolerant control (AFTC) problem for a class of multiple-input multiple-output (MIMO) uncertain nonlinear systems subject to sensor faults and external disturbances. The proposed AFTC method can tolerate three additive (bias, drift and loss of accuracy) and one multiplicative (loss of effectiveness) sensor faults. By employing backstepping technique, a novel adaptive backstepping-based AFTC scheme is developed using the fact that sensor faults and system uncertainties (including external disturbances and unexpected nonlinear functions caused by sensor faults) can be on-line estimated and compensated via robust adaptive schemes. The stability analysis of the closed-loop system is rigorously proven using a Lyapunov approach. The effectiveness of the proposed controller is illustrated by two simulation examples. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Dependability analysis of parallel systems using a simulation-based approach. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sawyer, Darren Charles

    1994-01-01

    The analysis of dependability in large, complex, parallel systems executing real applications or workloads is examined in this thesis. To effectively demonstrate the wide range of dependability problems that can be analyzed through simulation, the analysis of three case studies is presented. For each case, the organization of the simulation model used is outlined, and the results from simulated fault injection experiments are explained, showing the usefulness of this method in the dependability modeling of large parallel systems. The simulation models are constructed using DEPEND and C++. Where possible, methods to increase dependability are derived from the experimental results. Another interesting facet of all three cases is the presence of some kind of workload or application executing in the simulation while faults are injected. This provides a completely new dimension to this type of study, one not possible to model accurately with analytical approaches.

  1. Fault Diagnostics for Turbo-Shaft Engine Sensors Based on a Simplified On-Board Model

    PubMed Central

    Lu, Feng; Huang, Jinquan; Xing, Yaodong

    2012-01-01

    Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for key turbo-shaft engine sensors is mainly based on a dual-redundancy technique, which in some situations cannot resolve which channel is at fault. The simplified on-board model provides an analytical third channel against which the dual-channel measurements are compared, whereas additional hardware redundancy would increase structural complexity and weight. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, and is built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient. PMID:23112645

  2. Fault diagnostics for turbo-shaft engine sensors based on a simplified on-board model.

    PubMed

    Lu, Feng; Huang, Jinquan; Xing, Yaodong

    2012-01-01

    Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for key turbo-shaft engine sensors is mainly based on a dual-redundancy technique, which in some situations cannot resolve which channel is at fault. The simplified on-board model provides an analytical third channel against which the dual-channel measurements are compared, whereas additional hardware redundancy would increase structural complexity and weight. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, and is built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient.
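    The core triplex logic, in which the on-board model supplies an analytical third channel to break ties between two hardware channels, can be expressed compactly as below; the tolerance and the tie-breaking rule are illustrative, not the engine-specific design.

```python
def triplex_sensor_fdd(ch_a, ch_b, model_value, tol):
    """Resolve a dual-channel sensor disagreement with a model-based third channel."""
    if abs(ch_a - ch_b) <= tol:
        return "healthy", 0.5 * (ch_a + ch_b)          # channels agree: average them
    # Channels disagree: keep whichever is closer to the on-board model estimate.
    if abs(ch_a - model_value) <= abs(ch_b - model_value):
        return "channel B faulty", ch_a
    return "channel A faulty", ch_b

print(triplex_sensor_fdd(ch_a=101.0, ch_b=96.0, model_value=100.4, tol=2.0))
```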

  3. Initial results on fault diagnosis of DSN antenna control assemblies using pattern recognition techniques

    NASA Technical Reports Server (NTRS)

    Smyth, P.; Mellstrom, J.

    1990-01-01

    Initial results obtained from an investigation using pattern recognition techniques for identifying fault modes in the Deep Space Network (DSN) 70 m antenna control loops are described. The overall background to the problem is described, and the motivation and potential benefits of this approach are outlined. In particular, an experiment is described in which fault modes were introduced into a state-space simulation of the antenna control loops. By training a multilayer feed-forward neural network on the simulated sensor output, classification rates of over 95 percent were achieved with a false alarm rate of zero on unseen test data. It is concluded that although the neural classifier has certain practical limitations at present, it also has considerable potential for problems of this nature.

  4. Nearly half of families in high-deductible health plans whose members have chronic conditions face substantial financial burden.

    PubMed

    Galbraith, Alison A; Ross-Degnan, Dennis; Soumerai, Stephen B; Rosenthal, Meredith B; Gay, Charlene; Lieu, Tracy A

    2011-02-01

    High-deductible health plans-typically with deductibles of at least $1,000 per individual and $2,000 per family-require greater enrollee cost sharing than traditional plans. But they also may provide more affordable premiums and may be the lowest-cost, or only, coverage option for many families with members who are chronically ill. We surveyed families with chronic conditions in high-deductible plans and families in traditional plans to compare health care-related financial burden-such as experiencing difficulty paying medical or basic bills or having to set up payment plans. Almost half (48 percent) of the families with chronic conditions in high-deductible plans reported health care-related financial burden, compared to 21 percent of families in traditional plans. Almost twice as many lower-income families in high-deductible plans spent more than 3 percent of income on health care expenses as lower-income families in traditional plans (53 percent versus 29 percent). As health reform efforts advance, policy makers must consider how to modify high-deductible plans to reduce the financial burden for families with chronic conditions.

  5. Dynamic rupture simulations on complex fault zone structures with off-fault plasticity using the ADER-DG method

    NASA Astrophysics Data System (ADS)

    Wollherr, Stephanie; Gabriel, Alice-Agnes; Igel, Heiner

    2015-04-01

    In dynamic rupture models, high stress concentrations at rupture fronts have to be accommodated by off-fault inelastic processes such as plastic deformation. As presented by Roten et al. (2014), incorporating plastic yielding can significantly reduce earlier predictions of ground motions in the Los Angeles Basin. Further, an inelastic response of the materials surrounding a fault potentially has a strong impact on surface displacement and is therefore a key aspect in understanding the triggering of tsunamis through seafloor uplift. We present an implementation of off-fault plasticity and its verification for the software package SeisSol, an arbitrary high-order derivative discontinuous Galerkin (ADER-DG) method. The software recently reached multi-petaflop/s performance on some of the largest supercomputers worldwide and was a Gordon Bell prize finalist application in 2014 (Heinecke et al., 2014). For the nonelastic calculations we impose a Drucker-Prager yield criterion in shear stress with a viscous regularization following Andrews (2005). It permits the smooth relaxation of high stress concentrations induced in the dynamic rupture process. We verify the implementation by comparison to the SCEC/USGS Spontaneous Rupture Code Verification Benchmarks. The results of test problem TPV13 with a 60-degree dipping normal fault show that SeisSol is in good accordance with other codes. Additionally, we aim to explore the numerical characteristics of the off-fault plasticity implementation by performing convergence tests for the 2D code. The ADER-DG method is especially suited for complex geometries by using unstructured tetrahedral meshes. Local adaptation of the mesh resolution enables a fine sampling of the cohesive zone on the fault while simultaneously satisfying the dispersion requirements of wave propagation away from the fault. In this context we will investigate the influence of off-fault plasticity on geometrically complex fault zone structures like subduction zones or branched faults. Studying the interplay of stress conditions and the angle dependence of neighbouring branches, including inelastic material behaviour and its effects on rupture jumps and seismic activation, helps to advance our understanding of earthquake source processes. An application is the simulation of a real large-scale subduction zone scenario including plasticity to validate the coupling of our dynamic rupture calculations to a tsunami model in the framework of the ASCETE project (http://www.ascete.de/). Andrews, D. J. (2005): Rupture dynamics with energy loss outside the slip zone, J. Geophys. Res., 110, B01307. Heinecke, A. (2014), A. Breuer, S. Rettenberger, M. Bader, A.-A. Gabriel, C. Pelties, A. Bode, W. Barth, K. Vaidyanathan, M. Smelyanskiy and P. Dubey: Petascale High Order Dynamic Rupture Earthquake Simulations on Heterogeneous Supercomputers. In Supercomputing 2014, The International Conference for High Performance Computing, Networking, Storage and Analysis. IEEE, New Orleans, LA, USA, November 2014. Roten, D. (2014), K. B. Olsen, S.M. Day, Y. Cui, and D. Fäh: Expected seismic shaking in Los Angeles reduced by San Andreas fault zone plasticity, Geophys. Res. Lett., 41, 2769-2777.

  6. Summary: Experimental validation of real-time fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Iyer, R. K.; Choi, G. S.

    1992-01-01

    Testing and validation of real-time systems is always difficult to perform, since neither the error generation process nor the fault propagation problem is easy to comprehend. There is no substitute for results based on actual measurements and experimentation. Such results are essential for developing a rational basis for the evaluation and validation of real-time systems. However, with physical experimentation, controllability and observability are limited to the external instrumentation that can be hooked up to the system under test, and this is quite a difficult, if not impossible, task for a complex system. Also, to set up such experiments for measurement, physical hardware must exist. On the other hand, a simulation approach allows flexibility that is unequaled by any other existing method for system evaluation. A simulation methodology for system evaluation was successfully developed and implemented, and the environment was demonstrated using existing real-time avionic systems. The research was oriented toward evaluating the impact of permanent and transient faults in aircraft control computers. Results were obtained for the Bendix BDX 930 system and the Hamilton Standard EEC131 jet engine controller. The studies showed that simulated fault injection is valuable, in the design stage, for evaluating the susceptibility of computing systems to different types of failures.

  7. Analytical and experimental vibration analysis of a faulty gear system

    NASA Astrophysics Data System (ADS)

    Choy, F. K.; Braun, M. J.; Polyshchuk, V.; Zakrajsek, J. J.; Townsend, D. P.; Handschuh, R. F.

    1994-10-01

    A comprehensive analytical procedure was developed for predicting faults in gear transmission systems under normal operating conditions. A gear tooth fault model is developed to simulate the effects of pitting and wear on the vibration signal under normal operating conditions. The model uses changes in the gear mesh stiffness to simulate the effects of gear tooth faults. The overall dynamics of the gear transmission system is evaluated by coupling the dynamics of each individual gear-rotor system through gear mesh forces generated between each gear-rotor system and the bearing forces generated between the rotor and the gearbox structures. The predicted results were compared with experimental results obtained from a spiral bevel gear fatigue test rig at NASA Lewis Research Center. The Wigner-Ville Distribution (WVD) was used to give a comprehensive comparison of the predicted and experimental results. The WVD method applied to the experimental results were also compared to other fault detection techniques to verify the WVD's ability to detect the pitting damage, and to determine its relative performance. Overall results show good correlation between the experimental vibration data of the damaged test gear and the predicted vibration from the model with simulated gear tooth pitting damage. Results also verified that the WVD method can successfully detect and locate gear tooth wear and pitting damage.

  8. Analytical and experimental vibration analysis of a faulty gear system

    NASA Astrophysics Data System (ADS)

    Choy, F. K.; Braun, M. J.; Polyshchuk, V.; Zakrajsek, J. J.; Townsend, D. P.; Handschuh, R. F.

    1994-10-01

    A comprehensive analytical procedure was developed for predicting faults in gear transmission systems under normal operating conditions. A gear tooth fault model is developed to simulate the effects of pitting and wear on the vibration signal under normal operating conditions. The model uses changes in the gear mesh stiffness to simulate the effects of gear tooth faults. The overall dynamics of the gear transmission system is evaluated by coupling the dynamics of each individual gear-rotor system through gear mesh forces generated between each gear-rotor system and the bearing forces generated between the rotor and the gearbox structure. The predicted results were compared with experimental results obtained from a spiral bevel gear fatigue test rig at NASA Lewis Research Center. The Wigner-Ville distribution (WVD) was used to give a comprehensive comparison of the predicted and experimental results. The WVD method applied to the experimental results were also compared to other fault detection techniques to verify the WVD's ability to detect the pitting damage, and to determine its relative performance. Overall results show good correlation between the experimental vibration data of the damaged test gear and the predicted vibration from the model with simulated gear tooth pitting damage. Results also verified that the WVD method can successfully detect and locate gear tooth wear and pitting damage.

  9. Analytical and Experimental Vibration Analysis of a Faulty Gear System

    NASA Technical Reports Server (NTRS)

    Choy, F. K.; Braun, M. J.; Polyshchuk, V.; Zakrajsek, J. J.; Townsend, D. P.; Handschuh, R. F.

    1994-01-01

    A comprehensive analytical procedure was developed for predicting faults in gear transmission systems under normal operating conditions. A gear tooth fault model is developed to simulate the effects of pitting and wear on the vibration signal under normal operating conditions. The model uses changes in the gear mesh stiffness to simulate the effects of gear tooth faults. The overall dynamics of the gear transmission system is evaluated by coupling the dynamics of each individual gear-rotor system through gear mesh forces generated between each gear-rotor system and the bearing forces generated between the rotor and the gearbox structure. The predicted results were compared with experimental results obtained from a spiral bevel gear fatigue test rig at NASA Lewis Research Center. The Wigner-Ville distribution (WVD) was used to give a comprehensive comparison of the predicted and experimental results. The WVD method applied to the experimental results were also compared to other fault detection techniques to verify the WVD's ability to detect the pitting damage, and to determine its relative performance. Overall results show good correlation between the experimental vibration data of the damaged test gear and the predicted vibration from the model with simulated gear tooth pitting damage. Results also verified that the WVD method can successfully detect and locate gear tooth wear and pitting damage.
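    The effect of a gear tooth fault on the vibration signal can be mimicked, in the spirit of the mesh-stiffness modeling described above, by briefly reducing the mesh stiffness once per revolution of the faulty gear; the fault then appears as shaft-rate sidebands around the gear mesh frequency. All values in this sketch are made up for illustration.

```python
import numpy as np

fs, duration = 20_000, 1.0                     # sampling rate (Hz), record length (s)
t = np.arange(0, duration, 1 / fs)
f_shaft, n_teeth = 30.0, 25                    # faulty-gear shaft speed (Hz), tooth count
f_mesh = f_shaft * n_teeth                     # gear mesh frequency (Hz)

stiffness = np.ones_like(t)
rev_phase = (f_shaft * t) % 1.0                # position within one shaft revolution
stiffness[rev_phase < 0.02] *= 0.7             # 30 % stiffness drop over the damaged tooth

vibration = stiffness * np.sin(2 * np.pi * f_mesh * t) + 0.05 * np.random.randn(len(t))
spectrum = np.abs(np.fft.rfft(vibration))      # sidebands at f_mesh +/- k*f_shaft appear here
freqs = np.fft.rfftfreq(len(t), 1 / fs)
```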

  10. Fault detection and identification in missile system guidance and control: a filtering approach

    NASA Astrophysics Data System (ADS)

    Padgett, Mary Lou; Evers, Johnny; Karplus, Walter J.

    1996-03-01

    Real-world applications of computational intelligence can enhance the fault detection and identification capabilities of a missile guidance and control system. A simulation of a bank-to-turn missile demonstrates that actuator failure may cause the missile to roll and miss the target. Failure of one fin actuator can be detected using a filter and depicting the filter output as fuzzy numbers. The properties and limitations of artificial neural networks fed by these fuzzy numbers are explored. A suite of networks is constructed to (1) detect a fault and (2) determine which fin (if any) failed. Both the zero order moment term and the fin rate term show changes during actuator failure. Simulations address the following questions: (1) How bad does the actuator failure have to be for detection to occur? (2) How bad does the actuator failure have to be for fault detection and isolation to occur? (3) Are both the zero order moment and fin rate terms needed? A suite of target trajectories is simulated, and the properties and limitations of the approach are reported. In some cases, detection of the failed actuator occurs within 0.1 second, and isolation of the failure occurs 0.1 second after that. Suggestions for further research are offered.

  11. Immunity-Based Aircraft Fault Detection System

    NASA Technical Reports Server (NTRS)

    Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.

    2004-01-01

    In the study reported in this paper, we have developed and applied an Artificial Immune System (AIS) algorithm for aircraft fault detection, as an extension to a previous work on intelligent flight control (IFC). Though the prior studies had established the benefits of IFC, one area of weakness that needed to be strengthened was the control dead band induced by commanding a failed surface. Since the IFC approach uses fault accommodation with no detection, the dead band, although it reduces over time due to learning, is present and causes degradation in handling qualities. If the failure can be identified, this dead band can be further minimized to ensure rapid fault accommodation and better handling qualities. The paper describes the application of an immunity-based approach that can detect a broad spectrum of known and unforeseen failures. The approach incorporates the knowledge of the normal operational behavior of the aircraft from sensory data, and probabilistically generates a set of pattern detectors that can detect any abnormalities (including faults) in the behavior pattern indicating unsafe in-flight operation. We developed a tool called MILD (Multi-level Immune Learning Detection) based on a real-valued negative selection algorithm that can generate a small number of specialized detectors (as signatures of known failure conditions) and a larger set of generalized detectors for unknown (or possible) fault conditions. Once the fault is detected and identified, an adaptive control system would use this detection information to stabilize the aircraft by utilizing available resources (control surfaces). We experimented with data sets collected under normal and various simulated failure conditions using a piloted motion-base simulation facility. The reported results are from a collection of test cases that reflect the performance of the proposed immunity-based fault detection algorithm.
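
    A real-valued negative selection algorithm, the core idea behind MILD as described above, can be sketched in a few lines: random candidate detectors are kept only if they do not cover any sample of normal ("self") behavior, and new data falling inside a retained detector is flagged as anomalous. The toy feature space, radii, and counts below are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_detectors(self_samples, n_detectors=200, self_radius=0.1,
                       detector_radius=0.1, dim=2, max_tries=10000):
    """Keep random detectors that do not cover any 'self' (normal-behavior) sample."""
    detectors = []
    tries = 0
    while len(detectors) < n_detectors and tries < max_tries:
        tries += 1
        candidate = rng.random(dim)
        dists = np.linalg.norm(self_samples - candidate, axis=1)
        if np.all(dists > self_radius + detector_radius):
            detectors.append(candidate)
    return np.array(detectors)

def is_anomalous(sample, detectors, detector_radius=0.1):
    """A sample is flagged if it falls inside any detector's hypersphere."""
    if len(detectors) == 0:
        return False
    return bool(np.any(np.linalg.norm(detectors - sample, axis=1) <= detector_radius))

# Normal behavior clustered near the centre of a normalized feature space
self_data = 0.5 + 0.05 * rng.standard_normal((500, 2))
detectors = generate_detectors(self_data)
print(is_anomalous(np.array([0.5, 0.5]), detectors))   # nominal point -> False
print(is_anomalous(np.array([0.9, 0.1]), detectors))   # abnormal point -> likely True
```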

  12. Source characterization and dynamic fault modeling of induced seismicity

    NASA Astrophysics Data System (ADS)

    Lui, S. K. Y.; Young, R. P.

    2017-12-01

    In recent years there have been increasing concerns worldwide that industrial activities in the subsurface can cause or trigger damaging earthquakes. In order to effectively mitigate the damaging effects of induced seismicity, the key is to better understand the source physics of induced earthquakes, which remains elusive at present. Furthermore, an improved understanding of induced earthquake physics is pivotal to assess large-magnitude earthquake triggering. A better quantification of the possible causes of induced earthquakes can be achieved through numerical simulations. The fault model used in this study is governed by the empirically derived rate-and-state friction laws, featuring a velocity-weakening (VW) patch embedded in a large velocity-strengthening (VS) region. Outside this region, the fault slips at the background loading rate. The model is fully dynamic, with all wave effects resolved, and is able to resolve spontaneous long-term slip history on a fault segment at all stages of seismic cycles. An earlier study using this model established that aseismic slip plays a major role in the triggering of small repeating earthquakes. This study presents a series of cases with earthquakes occurring on faults with different frictional properties and fluid-induced stress perturbations. The effects on both the overall seismicity rate and fault slip behavior are investigated, and the causal relationship between the pre-slip pattern prior to the event and the induced source characteristics is discussed. Based on the simulation results, the subsequent step is to select specific cases for laboratory experiments, which allow well-controlled variables and fault parameters. Ultimately, the aim is to provide better constraints on important parameters for induced earthquakes based on numerical modeling and laboratory data, and hence to contribute to a physics-based induced earthquake hazard assessment.
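
    As a minimal illustration of the empirically derived rate-and-state friction laws governing such fault models (using the Dieterich aging law and illustrative parameter values, not the study's), the sketch below integrates the state variable through an imposed slip-rate step; choosing a < b reproduces the velocity-weakening response of the VW patch.

```python
import numpy as np

# Rate-and-state friction (Dieterich aging law) response to a velocity step.
mu0, v0 = 0.6, 1e-6          # reference friction and slip rate (m/s), illustrative
a, b = 0.010, 0.015          # a < b  ->  velocity weakening, as on the VW patch
d_c = 1e-5                   # characteristic slip distance (m), illustrative

def simulate_velocity_step(v_before, v_after, t_step, t_end, dt=1e-3):
    """Integrate the state variable through an imposed slip-rate step; return t, mu."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    theta = d_c / v_before                     # steady-state value before the step
    mu = np.empty(n)
    for i in range(n):
        v = v_before if t[i] < t_step else v_after
        mu[i] = mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / d_c)
        theta += dt * (1.0 - v * theta / d_c)  # aging-law state evolution
    return t, mu

t, mu = simulate_velocity_step(v_before=1e-6, v_after=1e-5, t_step=5.0, t_end=60.0)
# Steady-state friction changes by (a - b) * ln(10) ~ -0.0115 after the step,
# the signature of velocity weakening.
print(mu[0], mu[-1])
```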

  13. Shear heating and solid state diffusion: Constraints from clumped isotope thermometry in carbonate faults

    NASA Astrophysics Data System (ADS)

    Siman-Tov, S.; Affek, H. P.; Matthews, A.; Aharonov, E.; Reches, Z.

    2015-12-01

    Natural faults are expected to heat rapidly during seismic slip and to cool quite quickly after the event. Here we examine clumped isotope thermometry for its ability to identify short-duration elevated-temperature events along frictionally heated carbonate faults. This method is based on measured Δ47 values that indicate the relative atomic order of oxygen and carbon stable isotopes in the calcite lattice, which is affected by heat and thus can serve as a thermometer. We examine three types of calcite rock samples: (1) samples that were rapidly heated and then cooled in static laboratory experiments, simulating the temperature cycle experienced by fault rock during earthquake slip; (2) limestone samples that were experimentally sheared to simulate earthquake slip events; and (3) samples taken from principal slip zones of natural carbonate faults that likely experienced earthquake slip. Experimental results show that Δ47 values decrease rapidly (in the course of seconds) and systematically both with increasing temperature and shear velocity. On the other hand, carbonate shear zones from natural faults do not show such a Δ47 reduction. We propose that the experimental Δ47 response is controlled by the presence of highly stressed nano-grains within the fault zone that can reduce the activation energy for diffusion by up to 60%, and thus lead to an increased rate of solid-state diffusion in the experiments. However, the lowering of activation energy is a double-edged sword in terms of clumped isotopes: in laboratory experiments, it allows for rapid disordering so that an isotopic signal appears after very short heating, but in natural faults it also leads to relatively fast isotopic re-ordering after the cessation of frictional heating, thus erasing the high-temperature signature in Δ47 values within relatively short geological times (<1 Ma).
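
    A crude way to see why a lower activation energy cuts both ways is to treat Δ47 reordering as a first-order process with an Arrhenius rate constant. The pre-exponential factor and activation energy below are placeholders, not fitted kinetic parameters, so the numbers only illustrate the scaling.

```python
import numpy as np

R = 8.314            # J / (mol K), gas constant

def reordering_timescale(ea, temp_k, pre_exp=1e10):
    """1/k for a first-order Arrhenius rate k = A * exp(-Ea / (R T)).

    A crude stand-in for clumped-isotope (Delta47) reordering kinetics;
    A and Ea here are placeholders, not fitted values.
    """
    k = pre_exp * np.exp(-ea / (R * temp_k))
    return 1.0 / k

ea_bulk = 200e3                      # J/mol, illustrative activation energy
ea_nano = 0.4 * ea_bulk              # ~60 % reduction attributed to stressed nano-grains
for temp_c in (100.0, 400.0):
    t_bulk = reordering_timescale(ea_bulk, temp_c + 273.15)
    t_nano = reordering_timescale(ea_nano, temp_c + 273.15)
    print(f"{temp_c:5.0f} C: bulk {t_bulk:.3e} s, nano-grain {t_nano:.3e} s")
# The reduced barrier shortens the (dis)ordering time by a factor of
# exp(0.6 * Ea / (R T)): rapid signal acquisition during frictional heating,
# but also comparatively rapid re-ordering after slip stops.
```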

  14. Modeling of time-lapse multi-scale seismic monitoring of CO2 injected into a fault zone to enhance the characterization of permeability in enhanced geothermal systems

    NASA Astrophysics Data System (ADS)

    Zhang, R.; Borgia, A.; Daley, T. M.; Oldenburg, C. M.; Jung, Y.; Lee, K. J.; Doughty, C.; Altundas, B.; Chugunov, N.; Ramakrishnan, T. S.

    2017-12-01

    Subsurface permeable faults and fracture networks play a critical role in enhanced geothermal systems (EGS) by providing conduits for fluid flow. Characterization of the permeable flow paths before and after stimulation is necessary to evaluate and optimize energy extraction. To provide insight into the feasibility of using CO2 as a contrast agent to enhance fault characterization by seismic methods, we model seismic monitoring of supercritical CO2 (scCO2) injected into a fault. During the CO2 injection, the original brine is replaced by scCO2, which leads to variations in the geophysical properties of the formation. To explore the technical feasibility of the approach, we present modeling results for different time-lapse seismic methods, including surface seismic, vertical seismic profiling (VSP), and a cross-well survey. We simulate the injection and production of CO2 in a normal fault in a system based on the Brady geothermal field and model pressure and saturation variations in the fault zone using TOUGH2-ECO2N. The simulation results provide the changing fluid properties during injection, such as saturation and salinity changes, which allow us to estimate corresponding changes in the seismic properties of the fault and the formation. We model the response of the system to active seismic monitoring in time-lapse mode using an anisotropic finite-difference method with modifications for fracture compliance. Results to date show that even narrow fault and fracture zones filled with CO2 can be better detected using the VSP and cross-well survey geometries, while it would be difficult to image the CO2 plume using surface seismic methods.
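
    The abstract does not name the rock-physics model used to turn saturation changes into seismic property changes; a common choice for that step is Gassmann fluid substitution, sketched below with purely illustrative rock and fluid properties (not values from the study).

```python
import numpy as np

def gassmann_vp(k_dry, mu_dry, k_mineral, phi, k_fluid, rho_grain, rho_fluid):
    """P-wave velocity after Gassmann fluid substitution (units: Pa, kg/m^3)."""
    k_sat = k_dry + (1.0 - k_dry / k_mineral) ** 2 / (
        phi / k_fluid + (1.0 - phi) / k_mineral - k_dry / k_mineral ** 2
    )
    rho = (1.0 - phi) * rho_grain + phi * rho_fluid
    return np.sqrt((k_sat + 4.0 / 3.0 * mu_dry) / rho)

# Illustrative fault-zone rock and fluid properties (not from the study).
k_dry, mu_dry, k_min = 8e9, 6e9, 37e9
phi, rho_grain = 0.15, 2650.0
k_brine, rho_brine = 2.4e9, 1030.0
k_co2, rho_co2 = 0.08e9, 600.0                 # supercritical CO2, roughly

for s_co2 in (0.0, 0.3, 0.7):
    # Reuss (uniform-mixing) average for the pore-fluid modulus
    k_fl = 1.0 / ((1.0 - s_co2) / k_brine + s_co2 / k_co2)
    rho_fl = (1.0 - s_co2) * rho_brine + s_co2 * rho_co2
    vp = gassmann_vp(k_dry, mu_dry, k_min, phi, k_fl, rho_grain, rho_fl)
    print(f"S_CO2 = {s_co2:.1f}:  Vp = {vp:7.1f} m/s")
```

    Replacing brine with even a modest scCO2 saturation sharply lowers the pore-fluid modulus and hence Vp, which is what makes the injected CO2 useful as a seismic contrast agent.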

  15. Current Sensor Fault Reconstruction for PMSM Drives

    PubMed Central

    Huang, Gang; Luo, Yi-Ping; Zhang, Chang-Fan; He, Jing; Huang, Yi-Shan

    2016-01-01

    This paper deals with a current sensor fault reconstruction algorithm for the torque closed-loop drive system of an interior PMSM. First, sensor faults are equated to actuator ones by a newly introduced state variable. Then, in αβ coordinates, based on the motor model with active flux linkage, a current observer is constructed with a specific sliding mode equivalent control methodology to eliminate the effects of unknown disturbances, and the phase current sensor faults are reconstructed by means of an adaptive method. Finally, an αβ-axis current fault processing module is designed based on the reconstructed value. The feasibility and effectiveness of the proposed method are verified by simulation and experimental tests on the RT-LAB platform. PMID:26840317
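
    The "newly introduced state variable" is a standard way of recasting a sensor fault as an actuator-like fault: the faulty measurement is filtered through an extra state so that the fault enters the augmented dynamics through an input channel. Whether the paper uses exactly this filter form is not stated in the abstract; the sketch below builds the augmented matrices for a toy linear system, with hypothetical dimensions, gains, and values rather than the PMSM model.

```python
import numpy as np

def augment_sensor_fault(a, b, c, a_f):
    """Append a filtered-measurement state z with z_dot = -a_f*z + a_f*(C x + f_s).

    In the augmented state [x; z], the sensor fault f_s enters through an
    input channel, i.e. it now looks like an actuator fault.
    """
    n, m, p = a.shape[0], b.shape[1], c.shape[0]
    a_aug = np.block([
        [a,       np.zeros((n, p))],
        [a_f @ c, -a_f            ],
    ])
    b_aug = np.vstack([b, np.zeros((p, m))])
    e_aug = np.vstack([np.zeros((n, p)), a_f])        # channel through which f_s acts
    c_aug = np.hstack([np.zeros((p, n)), np.eye(p)])  # new output is z itself
    return a_aug, b_aug, e_aug, c_aug

# Toy 2-state, 1-input, 1-output example (values are hypothetical)
a = np.array([[0.0, 1.0], [-4.0, -0.5]])
b = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])
a_f = np.array([[20.0]])                 # filter bandwidth (assumed)
a_aug, b_aug, e_aug, c_aug = augment_sensor_fault(a, b, c, a_f)
print(a_aug.shape, e_aug.T)              # (3, 3); fault acts like an extra input
```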

  16. Stacking Faults and Mechanical Behavior beyond the Elastic Limit of an Imidazole-Based Metal Organic Framework: ZIF-8.

    PubMed

    Hegde, Vinay I; Tan, Jin-Chong; Waghmare, Umesh V; Cheetham, Anthony K

    2013-10-17

    We determine the nonlinear mechanical behavior of a prototypical zeolitic imidazolate framework (ZIF-8) along two modes of mechanical failure, in response to tensile and shear forces, using first-principles simulations. Our generalized stacking fault energy surface reveals an intrinsic stacking fault of surprisingly low energy, comparable to that in copper, though the energy barrier associated with its formation is much higher. The lack of vibrational spectroscopic evidence for such faults in experiments can be explained by the structural instability of the barrier state, which in our analysis collapses to a denser, disordered state of ZIF-8; that is, large shear leads to amorphization rather than to the formation of faults.
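
    For reference, a point on a generalized stacking-fault (GSF) energy surface is simply an energy difference per unit fault-plane area, gamma(u) = (E_faulted(u) - E_perfect) / A. The conversion below uses placeholder total energies and cell area, not values from the paper, but the arithmetic is the standard one.

```python
# 1 eV per square Angstrom equals 16.0218 J/m^2
EV_PER_A2_TO_J_PER_M2 = 16.0218

def gsf_energy(e_faulted_ev, e_perfect_ev, area_a2):
    """GSF energy in mJ/m^2 for a slab displaced by some in-plane vector u."""
    return (e_faulted_ev - e_perfect_ev) / area_a2 * EV_PER_A2_TO_J_PER_M2 * 1e3

# Hypothetical DFT total energies (eV) and fault-plane area (Angstrom^2)
print(f"{gsf_energy(-412.731, -412.785, 21.5):.1f} mJ/m^2")   # ~40 mJ/m^2
```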

  17. Effect of stacking fault energy on mechanism of plastic deformation in nanotwinned FCC metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borovikov, Valery; Mendelev, Mikhail I.; King, Alexander H.

    Starting from a semi-empirical potential designed for Cu, we have developed a series of potentials that provide essentially constant values of all significant (calculated) materials properties except for the intrinsic stacking fault energy, which varies over a range that encompasses the lowest and highest values observed in nature. In addition, these potentials were employed in molecular dynamics (MD) simulations to investigate how stacking fault energy affects the mechanical behavior of nanotwinned face-centered cubic (FCC) materials. The results indicate that properties such as yield strength and microstructural stability do not vary systematically with stacking fault energy, but rather fall into two distinct regimes corresponding to 'low' and 'high' stacking fault energies.

  18. Effect of stacking fault energy on mechanism of plastic deformation in nanotwinned FCC metals

    DOE PAGES

    Borovikov, Valery; Mendelev, Mikhail I.; King, Alexander H.; ...

    2015-05-15

    Starting from a semi-empirical potential designed for Cu, we have developed a series of potentials that provide essentially constant values of all significant (calculated) materials properties except for the intrinsic stacking fault energy, which varies over a range that encompasses the lowest and highest values observed in nature. In addition, these potentials were employed in molecular dynamics (MD) simulations to investigate how stacking fault energy affects the mechanical behavior of nanotwinned face-centered cubic (FCC) materials. The results indicate that properties such as yield strength and microstructural stability do not vary systematically with stacking fault energy, but rather fall into two distinct regimes corresponding to 'low' and 'high' stacking fault energies.

  19. A benchmark for fault tolerant flight control evaluation

    NASA Astrophysics Data System (ADS)

    Smaili, H.; Breeman, J.; Lombaerts, T.; Stroosma, O.

    2013-12-01

    A large transport aircraft simulation benchmark (REconfigurable COntrol for Vehicle Emergency Return - RECOVER) has been developed within the GARTEUR (Group for Aeronautical Research and Technology in Europe) Flight Mechanics Action Group 16 (FM-AG(16)) on Fault Tolerant Control (2004-2008) for the integrated evaluation of fault detection and identification (FDI) and reconfigurable flight control strategies. The benchmark includes a suitable set of assessment criteria and failure cases, based on reconstructed accident scenarios, to assess the potential of new adaptive control strategies to improve aircraft survivability. The application of reconstruction and modeling techniques, based on accident flight data, has resulted in high-fidelity nonlinear aircraft and fault models to evaluate new Fault Tolerant Flight Control (FTFC) concepts and their real-time performance to accommodate in-flight failures.

  20. Fault-tolerance of a neural network solving the traveling salesman problem

    NASA Technical Reports Server (NTRS)

    Protzel, P.; Palumbo, D.; Arras, M.

    1989-01-01

    This study presents the results of a fault-injection experiment that simulates a neural network solving the Traveling Salesman Problem (TSP). The network is based on a modified version of Hopfield's and Tank's original method. We define a performance characteristic for the TSP that allows an overall assessment of the solution quality for different city distributions and problem sizes. Five different 10-, 20-, and 30-city cases are used for the injection of up to 13 simultaneous stuck-at-0 and stuck-at-1 faults. The results of more than 4000 simulation runs show the extreme fault tolerance of the network, especially with respect to stuck-at-0 faults. One possible explanation for the overall surprising result is the redundancy of the problem representation.
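
    The abstract does not give the network equations, so the sketch below only illustrates the fault-injection side of such an experiment: stuck-at-0/1 faults are forced onto a (here, synthetic) converged city-by-position activation matrix, and the decoded tour is checked for validity and length. The Hopfield-Tank dynamics themselves are omitted, and all values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def decode_tour(v):
    """Decode a city-by-position activation matrix into a tour (argmax per column)."""
    return np.argmax(v, axis=0)

def tour_length(tour, dist):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def inject_stuck_at(v, stuck0, stuck1):
    """Force faulty neurons to 0 or 1 regardless of the network's computed output."""
    v = v.copy()
    v[stuck0], v[stuck1] = 0.0, 1.0
    return v

# Synthetic converged state for a 10-city tour: a permutation matrix
# (rows = cities, columns = tour positions) plus small analogue noise.
n = 10
cities = rng.random((n, 2))
dist = np.linalg.norm(cities[:, None, :] - cities[None, :, :], axis=-1)
tour0 = rng.permutation(n)
v = 0.05 * rng.random((n, n))
v[tour0, np.arange(n)] = 0.95

# Inject three simultaneous stuck-at faults and compare decoded tours.
stuck0 = np.zeros((n, n), bool)
stuck1 = np.zeros((n, n), bool)
stuck0[tour0[4], 4] = True           # knock out the winning neuron at position 4
stuck1[2, 7] = True
stuck1[6, 1] = True
v_faulty = inject_stuck_at(v, stuck0, stuck1)
for label, mat in (("fault-free", v), ("faulty", v_faulty)):
    tour = decode_tour(mat)
    valid = sorted(tour) == list(range(n))
    print(label, "valid tour:", valid, "length:", round(tour_length(tour, dist), 3))
```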
