Sample records for high-performance deductive fault

  1. Performance investigation on DCSFCL considering different magnetic materials

    NASA Astrophysics Data System (ADS)

    Yuan, Jiaxin; Zhou, Hang; Zhong, Yongheng; Gan, Pengcheng; Gao, Yanhui; Muramatsu, Kazuhiro; Du, Zhiye; Chen, Baichao

    2018-05-01

    In order to protect high voltage direct current (HVDC) systems from the destructive consequences of fault currents, a novel HVDC fault current limiter (DCSFCL) concept was proposed previously. Since the DCSFCL is based on saturable-core reactor theory, the iron core is key to its final performance. Three typical kinds of soft magnetic materials were therefore chosen to determine their impact on DCSFCL performance. The different characteristics of the materials were compared and the corresponding theoretical deductions carried out. Meanwhile, 3D models applying the three materials were built separately, and finite element analysis simulations were performed to compare the results and further verify the assumptions. It turns out that a material combining a large saturation flux density Bs, as in silicon steel, with a short demagnetization time, as in ferrite, might be the best choice for the DCSFCL, which points to a future research direction for magnetic materials.

  2. 20 CFR 404.510 - When an individual is “without fault” in a deduction overpayment.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false When an individual is "without fault" in a deduction overpayment. 404.510 Section 404.510 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL... or Recovery of Overpayments, and Liability of a Certifying Officer § 404.510 When an individual is...

  3. Self-checking self-repairing computer nodes using the mirror processor

    NASA Technical Reports Server (NTRS)

    Tamir, Yuval

    1992-01-01

    Circuitry added to fault-tolerant systems for concurrent error detection usually reduces performance. Using a technique called micro rollback, it is possible to eliminate most of the performance penalty of concurrent error detection. Error detection is performed in parallel with intermodule communication, and erroneous state changes are later undone. The author reports on the design and implementation of a VLSI RISC microprocessor, called the Mirror Processor (MP), which is capable of micro rollback. In order to achieve concurrent error detection, two MP chips operate in lockstep, comparing external signals and a signature of internal signals every clock cycle. If a mismatch is detected, both processors roll back to the beginning of the cycle when the error occurred. In some cases the erroneous state is corrected by copying a value from the fault-free processor to the faulty processor. The architecture, microarchitecture, and VLSI implementation of the MP are described, emphasizing its error-detection, error-recovery, and self-diagnosis capabilities.

  4. A 3D modeling approach to complex faults with multi-source data

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan

    2015-04-01

    Fault modeling is a very important step in building an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to construct complex fault models; however, the available fault data are generally sparse and undersampled. In this paper, we propose a fault-modeling workflow that can integrate multi-source data to construct fault models. For faults that cannot be modeled from these data, especially small-scale faults or those approximately parallel to the sections, we propose a fault deduction method that infers the hanging wall and footwall lines after a displacement calculation. Moreover, a fault-cutting algorithm can supplement the available fault points at locations where faults cut each other. Adding fault points in poorly sampled areas not only makes fault-model construction more efficient but also reduces manual intervention. By using fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures whether or not the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that it can be applied to broad and complex geological areas.

  5. Using Fault Trees to Advance Understanding of Diagnostic Errors.

    PubMed

    Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep

    2017-11-01

    Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of diagnostic processes and clinical work flows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been successfully used in other high-complexity, high-risk contexts. How factors contributing to diagnostic errors can be systematically modeled by FTA to inform error understanding and error prevention is demonstrated. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, as well as rapid identification of common pathways and interactions in a unified fashion. In addition, it enables calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.
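    As a concrete illustration of how a fault tree turns a logical hierarchy into numbers, the sketch below evaluates a toy top-event probability with AND/OR gates. This is a minimal sketch assuming independent basic events; the contributing factors and probabilities are hypothetical, not values from the study.

    ```python
    # Minimal fault tree evaluation, assuming independent basic events.
    # Factor names and probabilities are hypothetical illustrations.

    def p_and(*probs):
        """AND gate: all child events must occur (independence assumed)."""
        out = 1.0
        for p in probs:
            out *= p
        return out

    def p_or(*probs):
        """OR gate: at least one child event occurs (independence assumed)."""
        out = 1.0
        for p in probs:
            out *= (1.0 - p)
        return 1.0 - out

    # Hypothetical contributing factors for a missed diagnosis:
    p_history_gap   = 0.10   # incomplete history taking
    p_test_error    = 0.05   # wrong test ordered or misread
    p_followup_miss = 0.08   # abnormal result not followed up

    # Diagnostic error if the clinical assessment fails AND the follow-up
    # safety net also fails; assessment fails if either upstream factor occurs.
    p_assessment_fail = p_or(p_history_gap, p_test_error)
    p_top = p_and(p_assessment_fail, p_followup_miss)
    print(f"Top-event probability: {p_top:.4f}")
    ```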

  6. 48 CFR 52.232-7 - Payments under Time-and-Materials and Labor-Hour Contracts.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the Contractor to withhold amounts from its billings until a reserve is set aside in an amount that... Disputes clause of this contract. If the Schedule provides rates for overtime, the premium portion of those... Contractor shall not deduct from gross costs the benefits lost without fault or neglect on the part of the...

  7. Insurance Applications of Active Fault Maps Showing Epistemic Uncertainty

    NASA Astrophysics Data System (ADS)

    Woo, G.

    2005-12-01

    Insurance loss modeling for earthquakes utilizes available maps of active faulting produced by geoscientists. All such maps are subject to uncertainty, arising from lack of knowledge of fault geometry and rupture history. Field work to undertake geological fault investigations drains human and monetary resources, and this inevitably limits the resolution of fault parameters. Some areas are more accessible than others; some may be of greater social or economic importance than others; some areas may be investigated more rapidly or diligently than others; or funding restrictions may have curtailed the extent of the fault mapping program. In contrast with the aleatory uncertainty associated with the inherent variability in the dynamics of earthquake fault rupture, uncertainty associated with lack of knowledge of fault geometry and rupture history is epistemic. The extent of this epistemic uncertainty may vary substantially from one regional or national fault map to another. However aware the local cartographer may be, this uncertainty is generally not conveyed in detail to the international map user. For example, an area may be left blank for a variety of reasons, ranging from lack of sufficient investigation of a fault to lack of convincing evidence of activity. Epistemic uncertainty in fault parameters is of concern in any probabilistic assessment of seismic hazard, not least in insurance earthquake risk applications. A logic-tree framework is appropriate for incorporating epistemic uncertainty. Some insurance contracts cover specific high-value properties or transport infrastructure, and therefore are extremely sensitive to the geometry of active faulting. Alternative Risk Transfer (ART) to the capital markets may also be considered. In order for such insurance or ART contracts to be properly priced, uncertainty should be taken into account. Accordingly, an estimate is needed for the likelihood of surface rupture capable of causing severe damage. Especially where a high deductible is in force, this requires estimation of the epistemic uncertainty on fault geometry and activity. Transport infrastructure insurance is of practical interest in seismic countries. On the North Anatolian Fault in Turkey, there is uncertainty over an unbroken segment between the eastern end of the Düzce Fault and Bolu. This may have ruptured during the 1944 earthquake. Existing hazard maps may simply use a question mark to flag uncertainty. However, a far more informative type of hazard map might express spatial variations in the confidence level associated with a fault map. Through such visual guidance, an insurance risk analyst would be better placed to price earthquake cover, allowing for epistemic uncertainty.
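    A minimal sketch of the logic-tree idea mentioned above: alternative interpretations of the fault map become weighted branches, the weighted sum gives a mean hazard for pricing, and the spread across branches expresses the epistemic uncertainty. Branch weights and rupture rates are illustrative assumptions, not data from the abstract.

    ```python
    # Each branch: (weight, annual probability of damaging surface rupture
    # at the insured site under that interpretation of the fault map).
    branches = [
        (0.5, 2e-4),   # segment mapped as active, rupture reaches the site
        (0.3, 5e-5),   # segment active, but rupture stops short of the site
        (0.2, 0.0),    # segment interpreted as inactive
    ]

    assert abs(sum(w for w, _ in branches) - 1.0) < 1e-9, "weights must sum to 1"

    # Weighted mean hazard used for pricing; the branch-to-branch spread is
    # the epistemic uncertainty carried by the fault map.
    mean_rate = sum(w * r for w, r in branches)
    print(f"Mean annual rupture probability at site: {mean_rate:.2e}")
    ```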

  8. FTAPE: A fault injection tool to measure fault tolerance

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-01-01

    The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.

  9. Fault Tree Handbook

    DTIC Science & Technology

    1981-01-01

    are applied to determine what system states (usually failed states) are possible; deductive methods are applied to determine how a given system state...Similar considerations apply to the single failures of CVA, BVB and CVB and this important additional information has been displayed in the principal...way. The point "maximum tolerable failure" corresponds to the survival point of the company building the aircraft. Above that point, only intolerable

  10. Tectonic aspects of the guatemala earthquake of 4 february 1976.

    PubMed

    Plafker, G

    1976-09-24

    The locations of surface ruptures and the main shock epicenter indicate that the disastrous Guatemala earthquake of 4 February 1976 was tectonic in origin and generated mainly by slip on the Motagua fault, which has an arcuate roughly east-west trend across central Guatemala. Fault breakage was observed for 230 km. Displacement is predominantly horizontal and sinistral with a maximum measured offset of 340 cm and an average of about 100 cm. Secondary fault breaks trending roughly north-northeast to south-southwest have been found in a zone about 20 km long and 8 km wide extending from the western suburbs of Guatemala City to near Mixco, and similar faults with more subtle surface expression probably occur elsewhere in the Guatemalan Highlands. Displacements on the secondary faults are predominantly extensional and dip-slip, with as much as 15 cm vertical offset on a single fracture. The primary fault that broke during the earthquake involved roughly 10 percent of the length of the great transform fault system that defines the boundary between the Caribbean and North American plates. The observed sinistral displacement is striking confirmation of deductions regarding the late Cenozoic relative motion between these two crustal plates that were based largely on indirect geologic and geophysical evidence. The earthquake-related secondary faulting, together with the complex pattern of geologically young normal faults that occur in the Guatemalan Highlands and elsewhere in western Central America, suggest that the eastern wedge-shaped part of the Caribbean plate, roughly between the Motagua fault system and the volcanic arc, is being pulled apart in tension and left behind as the main mass of the plate moves relatively eastward. Because of their proximity to areas of high population density, shallow-focus earthquakes that originate on the Motagua fault system, on the system of predominantly extensional faults within the western part of the Caribbean plate, and in association with volcanism may pose a more serious seismic hazard than the more numerous (but generally more distant) earthquakes that are generated in the eastward-dipping subduction zone beneath Middle America.

  11. Cultural Difference in Stereotype Perceptions and Performances in Nonverbal Deductive Reasoning and Creativity

    ERIC Educational Resources Information Center

    Wong, Regine; Niu, Weihua

    2013-01-01

    A total of 182 undergraduate students from China and the United States participated in a study examining the presence of stereotypical perceptions regarding creativity and deductive reasoning abilities, as well as the influence of stereotype on participants' performance on deductive reasoning and creativity in nonverbal form. The results showed…

  12. 20 CFR 404.457 - Deductions where taxes neither deducted from wages of certain maritime employees nor paid.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... wages of certain maritime employees nor paid. 404.457 Section 404.457 Employees' Benefits SOCIAL... maritime employees nor paid. (a) When deduction is required. A deduction is required where: (1) An... Administration or, for services performed before February 11, 1942, through the United States Maritime Commission...

  13. Nearly half of families in high-deductible health plans whose members have chronic conditions face substantial financial burden.

    PubMed

    Galbraith, Alison A; Ross-Degnan, Dennis; Soumerai, Stephen B; Rosenthal, Meredith B; Gay, Charlene; Lieu, Tracy A

    2011-02-01

    High-deductible health plans, which typically carry deductibles of at least $1,000 per individual and $2,000 per family, require greater enrollee cost sharing than traditional plans. But they also may provide more affordable premiums and may be the lowest-cost, or only, coverage option for many families with members who are chronically ill. We surveyed families with chronic conditions in high-deductible plans and families in traditional plans to compare health care-related financial burden, such as experiencing difficulty paying medical or basic bills or having to set up payment plans. Almost half (48 percent) of the families with chronic conditions in high-deductible plans reported health care-related financial burden, compared to 21 percent of families in traditional plans. Almost twice as many lower-income families in high-deductible plans spent more than 3 percent of income on health care expenses as lower-income families in traditional plans (53 percent versus 29 percent). As health reform efforts advance, policy makers must consider how to modify high-deductible plans to reduce the financial burden for families with chronic conditions.

  14. Causation mechanism analysis for haze pollution related to vehicle emission in Guangzhou, China by employing the fault tree approach.

    PubMed

    Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Xu, Pingru; Qian, Yu

    2016-05-01

    Recently, China has frequently experienced large-scale, severe and persistent haze pollution due to surging urbanization and industrialization and rapid growth in the number of motor vehicles and in energy consumption. Vehicle emission due to the consumption of large quantities of fossil fuels is undoubtedly a critical factor in haze pollution. This work focuses on the causation mechanism of haze pollution related to vehicle emission for the city of Guangzhou, employing the Fault Tree Analysis (FTA) method for the first time. With the establishment of the fault tree system "Haze weather-Vehicle exhausts explosive emission", all of the important risk factors are discussed and identified using this deductive FTA method. Qualitative and quantitative assessments of the fault tree system are carried out based on the structure, probability and critical importance degree analysis of the risk factors. The study may provide a new, simple and effective tool/strategy for the causation mechanism analysis and risk management of haze pollution in China. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Nearly Half of Families In High-Deductible Health Plans Whose Members Have Chronic Conditions Face Substantial Financial Burden

    PubMed Central

    Galbraith, Alison A.; Ross-Degnan, Dennis; Soumerai, Stephen B.; Rosenthal, Meredith B.; Gay, Charlene; Lieu, Tracy A.

    2015-01-01

    High-deductible health plans, which typically carry deductibles of at least $1,000 per individual and $2,000 per family, require greater enrollee cost sharing than traditional plans. But they also may provide more affordable premiums and may be the lowest-cost, or only, coverage option for many families with members who are chronically ill. We surveyed families with chronic conditions in high-deductible plans and families in traditional plans to compare health care-related financial burden, such as experiencing difficulty paying medical or basic bills or having to set up payment plans. Almost half (48 percent) of the families with chronic conditions in high-deductible plans reported health care-related financial burden, compared to a fifth of families (21 percent) in traditional plans. Almost twice as many lower-income families in high-deductible plans spent more than 3 percent of income on health care expenses as lower-income families in traditional plans (53 percent versus 29 percent). As health reform efforts advance, policy makers must consider how to modify high-deductible plans to reduce the financial burden for families with chronic conditions. PMID:21289354

  16. Conversion of Questionnaire Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powell, Danny H; Elwood Jr, Robert H

    During the survey, respondents are asked to provide qualitative answers (well, adequate, needs improvement) on how well material control and accountability (MC&A) functions are being performed. These responses can be used to develop failure probabilities for basic events performed during routine operation of the MC&A systems. The failure frequencies for individual events may be used to estimate total system effectiveness using a fault tree in a probabilistic risk analysis (PRA). Numeric risk values are required for the PRA fault tree calculations that are performed to evaluate system effectiveness. So, the performance ratings in the questionnaire must be converted to relative risk values for all of the basic MC&A tasks performed in the facility. If a specific material protection, control, and accountability (MPC&A) task is being performed at the 'perfect' level, the task is considered to have a near zero risk of failure. If the task is performed at a less than perfect level, the deficiency in performance represents some risk of failure for the event. As the degree of deficiency in performance increases, the risk of failure increases. If a task that should be performed is not being performed, that task is in a state of failure. The failure probabilities of all basic events contribute to the total system risk. Conversion of questionnaire MPC&A system performance data to numeric values is a separate function from the process of completing the questionnaire. When specific questions in the questionnaire are answered, the focus is on correctly assessing and reporting, in an adjectival manner, the actual performance of the related MC&A function. Prior to conversion, consideration should not be given to the numeric value that will be assigned during the conversion process. In the conversion process, adjectival responses to questions on system performance are quantified based on a log normal scale typically used in human error analysis (see A.D. Swain and H.E. Guttmann, 'Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications,' NUREG/CR-1278). This conversion produces the basic event risk of failure values required for the fault tree calculations. The fault tree is a deductive logic structure that corresponds to the operational nuclear MC&A system at a nuclear facility. The conventional Delphi process is a time-honored approach commonly used in the risk assessment field to extract numerical values for the failure rates of actions or activities when statistically significant data is absent.
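    To make the conversion step concrete, the sketch below maps adjectival performance ratings to basic-event failure probabilities on a log scale, loosely in the spirit of the NUREG/CR-1278 human-reliability conventions. The specific mapping values are assumptions for illustration, not the project's calibrated numbers.

    ```python
    # Illustrative adjectival-to-probability conversion for MC&A tasks.
    # The probability values below are assumed, spaced by factors of ten
    # on a log scale as is common in human reliability analysis.

    RATING_TO_FAILURE_PROB = {
        "perfect":           1e-5,  # near-zero risk of failure
        "well":              1e-3,
        "adequate":          1e-2,
        "needs improvement": 1e-1,
        "not performed":     1.0,   # task in a state of failure
    }

    def basic_event_probability(rating: str) -> float:
        """Convert one questionnaire rating to a basic-event probability."""
        return RATING_TO_FAILURE_PROB[rating.lower()]

    # Example responses (hypothetical tasks), ready for fault tree input:
    responses = {"inventory check": "well", "tamper seals": "needs improvement"}
    for task, rating in responses.items():
        print(task, "->", basic_event_probability(rating))
    ```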

  17. The Impact of Consumer-Directed Health Plans and Patient Socioeconomic Status on Physician Recommendations for Colorectal Cancer Screening

    PubMed Central

    Mallya, Giridhar; Polsky, Daniel

    2008-01-01

    Background Consumer-directed health plans are increasingly common, yet little is known about their impact on physician decision-making and preventive service use. Objective To determine how patients' deductible levels and socioeconomic status may affect primary care physicians' recommendations for colorectal cancer screening. Design, Setting, and Participants Screening recommendations were elicited using hypothetical vignettes from a national sample of 1,500 primary care physicians. Physicians were randomized to one of four vignettes describing a patient with either low or high socioeconomic status (SES) and either a low- or high-deductible plan. Bivariate and multivariate analyses were used to examine how recommendations varied as a function of SES and deductible. Outcome Measures Rates of recommendation for home fecal occult blood testing, sigmoidoscopy, colonoscopy, and inappropriate screening, defined as no screening or office-based fecal occult blood testing. Results A total of 528 (49%) eligible physicians responded. Overall, 7.2% of physicians recommended inappropriate screening; 3.2% of patients with high SES in low-deductible plans received inappropriate screening recommendations, compared with 11.4% of patients with low SES in high-deductible plans, for an adjusted odds ratio of 0.22 (0.05–0.89). The odds of a colonoscopy recommendation were over ten times higher (AOR 11.46, 5.26–24.94) for patients with high SES in low-deductible plans compared to patients with low SES in high-deductible plans. Funds in medical savings accounts eliminated differences in inappropriate screening recommendations. Conclusions Patient SES and deductible level affect physician recommendations for preventive care. Coverage of preventive services and funds in medical savings accounts may help to mitigate the impact of high deductibles and SES on inappropriate recommendations. PMID:18629590

  18. Introduction to Concurrent Engineering: Electronic Circuit Design and Production Applications

    DTIC Science & Technology

    1992-09-01

    STD-1629. Failure mode distribution data for many different types of parts may be found in RAC publication FMD-91. FMEA utilizes inductive logic in a...contrasts with a Fault Tree Analysis (FTA) which utilizes deductive logic in a "top down" approach. In FTA, a system failure is assumed and traced down...Analysis (FTA) is a graphical method of risk analysis used to identify critical failure modes within a system or equipment. Utilizing a pictorial approach

  19. Collaborative use of virtual patients after a lecture enhances learning with minimal investment of cognitive load.

    PubMed

    Marei, Hesham F; Donkers, Jeroen; Al-Eraky, Mohamed M; Van Merrienboer, Jeroen J G

    2018-05-25

    The use of virtual patients (VPs), due to their high complexity and/or inappropriate sequencing with other instructional methods, might cause a high cognitive load, which hampers learning. The aim was to investigate the efficiency of instructional methods that involved three different applications of VPs combined with lectures. From two consecutive batches, 171 out of 183 students participated in lecture and VP sessions. One group received a lecture session followed by a collaborative VP learning activity (collaborative deductive). The other two groups received a lecture session and an independent VP learning activity, which either followed the lecture session (independent deductive) or preceded it (independent inductive). All groups were administered written knowledge acquisition and retention tests as well as transfer tests using two new VPs. All participants completed a cognitive load questionnaire, which measured intrinsic, extraneous and germane load. Mixed-effect analysis of cognitive load and efficiency was performed using the R statistical program. The highest intrinsic and extraneous load was found in the independent inductive group, while the lowest intrinsic and extraneous load was seen in the collaborative deductive group. Furthermore, comparisons showed a significantly higher efficiency, that is, higher performance in combination with lower cognitive load, for the collaborative deductive group than for the other two groups. Collaborative use of VPs after a lecture is the most efficient instructional method of those tested, as it leads to better learning and transfer combined with lower cognitive load, when compared with independent use of VPs, either before or after the lecture.

  20. A highly reliable, high performance open avionics architecture for real time Nap-of-the-Earth operations

    NASA Technical Reports Server (NTRS)

    Harper, Richard E.; Elks, Carl

    1995-01-01

    An Army Fault Tolerant Architecture (AFTA) has been developed to meet real-time fault tolerant processing requirements of future Army applications. AFTA is the enabling technology that will allow the Army to configure existing processors and other hardware to provide high throughput and ultrahigh reliability necessary for TF/TA/NOE flight control and other advanced Army applications. A comprehensive conceptual study of AFTA has been completed that addresses a wide range of issues including requirements, architecture, hardware, software, testability, producibility, analytical models, validation and verification, common mode faults, VHDL, and a fault tolerant data bus. A Brassboard AFTA for demonstration and validation has been fabricated, and two operating systems and a flight-critical Army application have been ported to it. Detailed performance measurements have been made of fault tolerance and operating system overheads while AFTA was executing the flight application in the presence of faults.

  1. The Design of a Fault-Tolerant COTS-Based Bus Architecture for Space Applications

    NASA Technical Reports Server (NTRS)

    Chau, Savio N.; Alkalai, Leon; Tai, Ann T.

    2000-01-01

    The high-performance, scalability and miniaturization requirements together with the power, mass and cost constraints mandate the use of commercial-off-the-shelf (COTS) components and standards in the X2000 avionics system architecture for deep-space missions. In this paper, we report our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. While the COTS standard IEEE 1394 adequately supports power management, high performance and scalability, its topological criteria impose restrictions on fault tolerance realization. To circumvent the difficulties, we derive a "stack-tree" topology that not only complies with the IEEE 1394 standard but also facilitates fault tolerance realization in a spaceborne system with limited dedicated resource redundancies. Moreover, by exploiting pertinent standard features of the 1394 interface which are not purposely designed for fault tolerance, we devise a comprehensive set of fault detection mechanisms to support the fault-tolerant bus architecture.

  2. Fault Analysis and Detection in Microgrids with High PV Penetration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El Khatib, Mohamed; Hernandez Alvidrez, Javier; Ellis, Abraham

    In this report we focus on analyzing the behavior of current-controlled PV inverters under faults in order to develop fault detection schemes for microgrids with high PV penetration. An inverter model suitable for steady-state fault studies is presented, and the impact of PV inverters on two protection elements is analyzed. The studied protection elements are the superimposed-quantities-based directional element and the negative-sequence directional element. Additionally, several non-overcurrent fault detection schemes are discussed in this report for microgrids with high PV penetration. A detailed time-domain simulation study is presented to assess the performance of the presented fault detection schemes under different microgrid modes of operation.

  3. Deductive Evaluation: Formal Code Analysis With Low User Burden

    NASA Technical Reports Server (NTRS)

    Di Vito, Ben L.

    2016-01-01

    We describe a framework for symbolically evaluating iterative C code using a deductive approach that automatically discovers and proves program properties. Although verification is not performed, the method can infer detailed program behavior. Software engineering work flows could be enhanced by this type of analysis. Floyd-Hoare verification principles are applied to synthesize loop invariants, using a library of iteration-specific deductive knowledge. When needed, theorem proving is interleaved with evaluation and performed on the fly. Evaluation results take the form of inferred expressions and type constraints for values of program variables. An implementation using PVS (Prototype Verification System) is presented along with results for sample C functions.

  4. Study on Practical Application of Turboprop Engine Condition Monitoring and Fault Diagnostic System Using Fuzzy-Neuro Algorithms

    NASA Astrophysics Data System (ADS)

    Kong, Changduk; Lim, Semyeong; Kim, Keunwoo

    2013-03-01

    Neural Networks are widely used in engine fault diagnostic systems due to their good learning performance, but they have the drawbacks of low accuracy and long learning times for building the learning database. This work inversely builds a base performance model of a turboprop engine, to be used in a high-altitude-operation UAV, from measured performance data, and proposes a fault diagnostic system using the base performance model and artificial intelligence methods such as Fuzzy Logic and Neural Networks. Each real engine performance model, named the base performance model, which can simulate a new engine's performance, is built inversely from that engine's performance test data. Condition monitoring of each engine can therefore be carried out more precisely through comparison with measured performance data. The proposed diagnostic system first identifies the faulted components using Fuzzy Logic, and then quantifies the faults of the identified components using Neural Networks trained on a fault learning database obtained from the developed base performance model. The Feed Forward Back Propagation (FFBP) method is used to learn the measured performance data of the faulted components. For user friendliness, the proposed diagnostic program is implemented as a MATLAB GUI.
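    A rough sketch of the fuzzy isolation step described above: deviations between measured data and the base performance model are turned into membership degrees, and the component with the highest degree is flagged. The component names, residual values, and membership thresholds are hypothetical.

    ```python
    # Fuzzy isolation sketch under assumed trapezoidal membership functions.

    def membership_degraded(delta, onset=0.01, full=0.05):
        """Trapezoidal membership: 0 below `onset`, 1 above `full` deviation."""
        x = abs(delta)
        if x <= onset:
            return 0.0
        if x >= full:
            return 1.0
        return (x - onset) / (full - onset)

    # Relative deviations between measured data and the base performance model
    # (hypothetical component parameters and values).
    residuals = {"compressor_efficiency": -0.032, "turbine_flow": 0.004}

    suspects = {name: membership_degraded(d) for name, d in residuals.items()}
    faulted = max(suspects, key=suspects.get)
    print("membership degrees:", suspects)
    print("most likely faulted component:", faulted)
    ```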

  5. Integrated Fault Diagnosis Algorithm for Motor Sensors of In-Wheel Independent Drive Electric Vehicles.

    PubMed

    Jeon, Namju; Lee, Hyeongcheol

    2016-12-12

    An integrated fault-diagnosis algorithm for a motor sensor of in-wheel independent drive electric vehicles is presented. This paper proposes a method that integrates the high- and low-level fault diagnoses to improve the robustness and performance of the system. For the high-level fault diagnosis of vehicle dynamics, a planar two-track non-linear model is first selected, and the longitudinal and lateral forces are calculated. To ensure redundancy of the system, correlation between the sensor and residual in the vehicle dynamics is analyzed to detect and separate the fault of the drive motor system of each wheel. To diagnose the motor system for low-level faults, the state equation of an interior permanent magnet synchronous motor is developed, and a parity equation is used to diagnose the fault of the electric current and position sensors. The validity of the high-level fault-diagnosis algorithm is verified using Carsim and Matlab/Simulink co-simulation. The low-level fault diagnosis is verified through Matlab/Simulink simulation and experiments. Finally, according to the residuals of the high- and low-level fault diagnoses, fault-detection flags are defined. On the basis of this information, an integrated fault-diagnosis strategy is proposed.
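    The final integration step can be pictured as thresholding the two residual families into fault-detection flags and combining them, as in the minimal sketch below. The thresholds, residual values, and decision rules are illustrative assumptions, not the paper's calibrated design.

    ```python
    # Combining high-level (vehicle dynamics) and low-level (parity equation)
    # residuals into integrated fault flags. All numbers are assumed.

    HIGH_LEVEL_THRESHOLD = 0.2   # vehicle-dynamics residual bound
    LOW_LEVEL_THRESHOLD  = 0.1   # parity-equation residual bound

    def fault_flags(r_high: float, r_low: float) -> str:
        flag_high = abs(r_high) > HIGH_LEVEL_THRESHOLD
        flag_low = abs(r_low) > LOW_LEVEL_THRESHOLD
        if flag_high and flag_low:
            return "motor sensor fault (confirmed at both levels)"
        if flag_low:
            return "current/position sensor fault (low level only)"
        if flag_high:
            return "wheel drive fault (high level only)"
        return "no fault"

    print(fault_flags(r_high=0.35, r_low=0.18))
    ```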

  6. Reliability analysis of a wastewater treatment plant using fault tree analysis and Monte Carlo simulation.

    PubMed

    Taheriyoun, Masoud; Moradinejad, Saber

    2015-01-01

    The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the reliability problem was studied for the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator mistakes, physical damage, and design problems. The analytical methods used are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
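    The sketch below illustrates the two analytical routes named in the abstract on a toy three-event tree: a Monte Carlo estimate of the top-event probability, checked against the minimal-cut-set calculation. Event names and probabilities are illustrative, not the study's values.

    ```python
    # Toy fault tree: effluent BOD violation if (operator error OR mechanical
    # failure) AND design margin insufficient. Probabilities are assumed.
    import random

    P_OPERATOR = 0.05
    P_MECHANICAL = 0.02
    P_DESIGN = 0.10
    N = 100_000

    hits = 0
    for _ in range(N):
        operator = random.random() < P_OPERATOR
        mechanical = random.random() < P_MECHANICAL
        design = random.random() < P_DESIGN
        if (operator or mechanical) and design:
            hits += 1

    print(f"Monte Carlo top-event estimate: {hits / N:.4f}")

    # Analytical check via the OR of the two minimal cut sets
    # {operator, design} and {mechanical, design}:
    p_or = 1 - (1 - P_OPERATOR) * (1 - P_MECHANICAL)
    print(f"Analytical value: {p_or * P_DESIGN:.4f}")
    ```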

  7. Value-Based Insurance Design Benefit Offsets Reductions In Medication Adherence Associated With Switch To Deductible Plan.

    PubMed

    Reed, Mary E; Warton, E Margaret; Kim, Eileen; Solomon, Matthew D; Karter, Andrew J

    2017-03-01

    Enrollment in high-deductible health plans is increasing out-of-pocket spending. But innovative plans that pair deductibles with value-based insurance designs can help preserve low-cost access to high-value treatments for patients by aligning coverage with clinical value. Among adults in high-deductible health plans who were prescribed medications for chronic conditions, we examined what impact a value-based pharmacy benefit that offered free chronic disease medications had on medication adherence. Overall, we found that the value-based plan offset reductions in medication adherence associated with switching to a deductible plan. The value-based plan appeared particularly beneficial for patients who started with low levels of medication adherence. Patients with additional clinical complexity or vulnerable populations living in neighborhoods with lower socioeconomic status, however, did not show adherence improvements and might not be taking advantage of value-based insurance design provisions. Additional efforts may be needed to educate patients about their nuanced benefit plans to help overcome initial confusion about these complex plans. Project HOPE—The People-to-People Health Foundation, Inc.

  8. The seats of reason? An imaging study of deductive and inductive reasoning.

    PubMed

    Goel, V; Gold, B; Kapur, S; Houle, S

    1997-03-24

    We carried out a neuroimaging study to test the neurophysiological predictions made by different cognitive models of reasoning. Ten normal volunteers performed deductive and inductive reasoning tasks while their regional cerebral blood flow pattern was recorded using [15O]H2O PET imaging. In the control condition subjects semantically comprehended sets of three sentences. In the deductive reasoning condition subjects determined whether the third sentence was entailed by the first two sentences. In the inductive reasoning condition subjects reported whether the third sentence was plausible given the first two sentences. The deduction condition resulted in activation of the left inferior frontal gyrus (Brodmann areas 45, 47). The induction condition resulted in activation of a large area comprised of the left medial frontal gyrus, the left cingulate gyrus, and the left superior frontal gyrus (Brodmann areas 8, 9, 24, 32). Induction was distinguished from deduction by the involvement of the medial aspect of the left superior frontal gyrus (Brodmann areas 8, 9). These results are consistent with cognitive models of reasoning that postulate different mechanisms for inductive and deductive reasoning and view deduction as a formal rule-based process.

  9. Bearing Fault Diagnosis Based on Statistical Locally Linear Embedding

    PubMed Central

    Wang, Xiang; Zheng, Yuan; Zhao, Zhenzhou; Wang, Jinping

    2015-01-01

    Fault diagnosis is essentially a kind of pattern recognition. The measured signal samples usually lie on nonlinear low-dimensional manifolds embedded in the high-dimensional signal space, so how to implement feature extraction and dimensionality reduction while improving recognition performance is a crucial task. In this paper a novel machinery fault diagnosis approach is proposed, based on a statistical locally linear embedding (S-LLE) algorithm, an extension of LLE that exploits fault class label information. The approach first extracts intrinsic manifold features from the high-dimensional feature vectors, which are obtained from vibration signals through time-domain, frequency-domain and empirical mode decomposition (EMD) feature extraction, and then translates the complex mode space into a salient low-dimensional feature space using the manifold learning algorithm S-LLE, which outperforms other feature reduction methods such as PCA, LDA and LLE. Finally, pattern classification and fault diagnosis by a classifier are carried out easily and rapidly in the reduced feature space. Rolling bearing fault signals are used to validate the proposed fault diagnosis approach. The results indicate that the proposed approach clearly improves the classification performance of fault pattern recognition and outperforms the other traditional approaches. PMID:26153771
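    As a rough sketch of the pipeline (manifold reduction, then classification), the code below uses plain LLE from scikit-learn with a k-NN classifier on synthetic data. The paper's S-LLE additionally exploits class labels, which scikit-learn does not implement, so this unsupervised variant is a stand-in.

    ```python
    # LLE-based dimensionality reduction followed by k-NN classification.
    # Data are synthetic stand-ins for bearing feature vectors.
    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Synthetic high-dimensional feature vectors (proxy for time-domain,
    # frequency-domain and EMD features) from four bearing conditions.
    X = rng.normal(size=(200, 40)) + np.repeat(np.arange(4), 50)[:, None] * 0.5
    y = np.repeat(np.arange(4), 50)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    lle = LocallyLinearEmbedding(n_neighbors=10, n_components=3)
    Z_tr = lle.fit_transform(X_tr)   # learn the low-dimensional manifold
    Z_te = lle.transform(X_te)       # map unseen samples into it

    clf = KNeighborsClassifier(n_neighbors=5).fit(Z_tr, y_tr)
    print("accuracy in reduced space:", clf.score(Z_te, y_te))
    ```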

  10. VLSI Implementation of Fault Tolerance Multiplier based on Reversible Logic Gate

    NASA Astrophysics Data System (ADS)

    Ahmad, Nabihah; Hakimi Mokhtar, Ahmad; Othman, Nurmiza binti; Fhong Soon, Chin; Rahman, Ab Al Hadi Ab

    2017-08-01

    The multiplier is one of the essential components in the digital world, used in digital signal processing, microprocessors, quantum computing, and widely in arithmetic units. Due to the complexity of the multiplier, the tendency for errors is very high. This paper aimed to design a 2×2 bit fault tolerance multiplier based on reversible logic gates with low power consumption and high performance. The design has been implemented using 90nm Complementary Metal Oxide Semiconductor (CMOS) technology in Synopsys Electronic Design Automation (EDA) tools. The multiplier architecture is implemented using reversible logic gates. The fault tolerance multiplier uses a combination of three reversible logic gates, the Double Feynman gate (F2G), the New Fault Tolerance (NFT) gate and the Islam Gate (IG), with an area of 160 μm × 420.3 μm (approximately 0.067 mm²). The design achieved a low power consumption of 122.85 μW and a propagation delay of 16.99 ns. The proposed fault tolerance multiplier achieves low power consumption and high performance, making it suitable for modern computing applications given its fault tolerance capabilities.
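    The Double Feynman gate named above has the standard reversible mapping (A, B, C) -> (A, A XOR B, A XOR C). The sketch below models it as a truth-table function and checks reversibility; this is an illustrative logical model, not the paper's CMOS implementation.

    ```python
    # Logical model of the Double Feynman gate (F2G), a reversible gate.

    def double_feynman(a: int, b: int, c: int):
        """F2G: copies A onto two targets via XOR; self-inverse, hence reversible."""
        return a, a ^ b, a ^ c

    # Reversibility check: applying the gate twice restores the inputs,
    # so every output pattern corresponds to exactly one input pattern.
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                assert double_feynman(*double_feynman(a, b, c)) == (a, b, c)
    print("F2G is reversible on all 8 input patterns")
    ```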

  11. Integrated Fault Diagnosis Algorithm for Motor Sensors of In-Wheel Independent Drive Electric Vehicles

    PubMed Central

    Jeon, Namju; Lee, Hyeongcheol

    2016-01-01

    An integrated fault-diagnosis algorithm for a motor sensor of in-wheel independent drive electric vehicles is presented. This paper proposes a method that integrates the high- and low-level fault diagnoses to improve the robustness and performance of the system. For the high-level fault diagnosis of vehicle dynamics, a planar two-track non-linear model is first selected, and the longitudinal and lateral forces are calculated. To ensure redundancy of the system, correlation between the sensor and residual in the vehicle dynamics is analyzed to detect and separate the fault of the drive motor system of each wheel. To diagnose the motor system for low-level faults, the state equation of an interior permanent magnet synchronous motor is developed, and a parity equation is used to diagnose the fault of the electric current and position sensors. The validity of the high-level fault-diagnosis algorithm is verified using Carsim and Matlab/Simulink co-simulation. The low-level fault diagnosis is verified through Matlab/Simulink simulation and experiments. Finally, according to the residuals of the high- and low-level fault diagnoses, fault-detection flags are defined. On the basis of this information, an integrated fault-diagnosis strategy is proposed. PMID:27973431

  12. Fault tolerant operation of switched reluctance machine

    NASA Astrophysics Data System (ADS)

    Wang, Wei

    The energy crisis and environmental challenges have driven industry towards more energy efficient solutions. With nearly 60% of electricity consumed by various electric machines in the industry sector, advancement in the efficiency of the electric drive system is of vital importance. Adjustable speed drive systems (ASDS) provide excellent speed regulation and dynamic performance as well as dramatically improved system efficiency compared with conventional motors without electronic drives. Industry has witnessed tremendous growth in ASDS applications, not only as a driving force but also as an electric auxiliary system replacing bulky and low-efficiency auxiliary hydraulic and mechanical systems. With the vast penetration of ASDS, fault tolerant operation capability is more widely recognized as an important feature of drive performance, especially for aerospace, automotive and other industrial drive applications demanding high reliability. The Switched Reluctance Machine (SRM), a low cost, highly reliable electric machine with fault tolerant operation capability, has drawn substantial attention in the past three decades. Nevertheless, SRM is not free of faults. Certain faults such as converter faults, sensor faults, winding shorts, eccentricity and position sensor faults are commonly shared among all ASDS. In this dissertation, a thorough understanding of various faults and their influence on the transient and steady state performance of SRM is developed via simulation and experimental study, providing the necessary knowledge for fault detection and post-fault management. Lumped parameter models are established for fast real-time simulation and drive control. Based on the behavior of the faults, a fault detection scheme is developed for fast and reliable fault diagnosis. In order to improve the SRM power and torque capacity under faults, maximum torque per ampere excitation is conceptualized and validated through theoretical analysis and experiments. With the proposed optimal waveform, torque production is greatly improved under the same Root Mean Square (RMS) current constraint. Additionally, position sensorless operation methods under phase faults are investigated to account for the combination of physical position sensor and phase winding faults. A comprehensive solution for position sensorless operation under single and multiple phase faults is proposed and validated through experiments. Continuous position sensorless operation with seamless transition between various numbers of faulted phases is achieved.

  13. What is the role of induction and deduction in reasoning and scientific inquiry?

    NASA Astrophysics Data System (ADS)

    Lawson, Anton E.

    2005-08-01

    A long-standing and continuing controversy exists regarding the role of induction and deduction in reasoning and in scientific inquiry. Given the inherent difficulty in reconstructing reasoning patterns based on personal and historical accounts, evidence about the nature of human reasoning in scientific inquiry has been sought from a controlled experiment designed to identify the role played by enumerative induction and deduction in cognition as well as from the relatively new field of neural modeling. Both experimental results and the neurological models imply that induction across a limited set of observations plays no role in task performance and in reasoning. Therefore, support has been obtained for Popper's hypothesis that enumerative induction does not exist as a psychological process. Instead, people appear to process information in terms of increasingly abstract cycles of hypothetico-deductive reasoning. Consequently, science instruction should provide students with opportunities to generate and test increasingly complex and abstract hypotheses and theories in a hypothetico-deductive manner. In this way students can be expected to become increasingly conscious of their underlying hypothetico-deductive thought processes, increasingly skilled in their application, and hence increasingly scientifically literate.

  14. The measurement of the stacking fault energy in copper, nickel and copper-nickel alloys

    NASA Technical Reports Server (NTRS)

    Leighly, H. P., Jr.

    1982-01-01

    The relationship of hydrogen solubility and the hydrogen embrittlement of high strength, high performance face centered cubic alloys to the stacking fault energy of the alloys was investigated. The stacking fault energy is inversely related to the distance between the two partial dislocations which are formed by the dissociation of a perfect dislocation. The two partial dislocations define a stacking fault in the crystal which offers a region for hydrogen segregation. The distance between the partial dislocations is measured by weak beam, dark field transmission electron microscopy. The stacking fault energy is calculated. Pure copper, pure nickel and copper-nickel single crystals are used to determine the stacking fault energy.

  15. Double-layer rotor magnetic shield performance analysis in high temperature superconducting synchronous generators under short circuit fault conditions

    NASA Astrophysics Data System (ADS)

    Hekmati, Arsalan; Aliahmadi, Mehdi

    2016-12-01

    High temperature superconducting (HTS) synchronous machines benefit from a rotor magnetic shield that protects the superconducting coils against asynchronous magnetic fields. This magnetic shield, however, suffers from Lorentz forces generated by induced eddy currents during transient conditions, e.g. stator winding short-circuit faults. In addition to the exerted electromagnetic forces, eddy current losses and their effects on the cryogenic system are further consequences of shielding HTS coils. This study investigates Rotor Magnetic Shield (RMS) performance in HTS synchronous generators under stator winding short-circuit fault conditions. The induced eddy currents in different circumferential positions of the rotor magnetic shield, along with the associated Joule heating losses, are studied using 2-D time-stepping Finite Element Analysis (FEA). The Lorentz forces exerted on the magnetic shield during transient conditions are also investigated in this paper. The obtained results show that the double line-to-ground fault is the most important among the different types of short-circuit faults. It was revealed that, when it comes to the design of rotor magnetic shields, in addition to the eddy current distribution and the associated ohmic losses, the two-phase-to-ground fault should be taken into account, since the electromagnetic forces produced under fault conditions are most severe during a double line-to-ground fault.

  16. Effect of cost-sharing reductions on preventive service use among Medicare fee-for-service beneficiaries.

    PubMed

    Goodwin, Suzanne M; Anderson, Gerard F

    2012-01-01

    Section 4104 of the Patient Protection and Affordable Care Act (ACA) waives previous cost-sharing requirements for many Medicare-covered preventive services. In 1997, Congress passed similar legislation waiving the deductible only for mammograms and Pap smears. The purpose of this study is to examine the effect of the deductible waiver on mammogram and Pap smear utilization rates. Using 1995-2003 Medicare claims from a sample of female, elderly Medicare fee-for-service beneficiaries, two pre/post analyses were conducted comparing mammogram and Pap smear utilization rates before and after implementation of the deductible waiver. Receipt of screening mammograms and Pap smears served as the outcome measures, and two time measures, representing two post-test observation periods, were used to examine the short- and long-term impacts on utilization. There was a 20 percent short-term and a 25 percent longer term increase in the probability of having had a mammogram in the four years following the 1997 deductible waiver. Beneficiaries were no more likely to receive a Pap smear following the deductible waiver. Elimination of cost sharing may be an effective strategy for increasing preventive service use, but the impact could depend on the characteristics of the procedure, its cost, and the disease and populations it targets. These historical findings suggest that, with implementation of Section 4104, the greatest increases in utilization will be seen for preventive services that screen for diseases with high incidence or prevalence rates that increase with age, that are expensive, and that are performed on a frequent basis.

  17. High-Intensity Radiated Field Fault-Injection Experiment for a Fault-Tolerant Distributed Communication System

    NASA Technical Reports Server (NTRS)

    Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven

    2010-01-01

    Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.

  18. Fault-tolerant Control of a Cyber-physical System

    NASA Astrophysics Data System (ADS)

    Roxana, Rusu-Both; Eva-Henrietta, Dulf

    2017-10-01

    Cyber-physical systems represent a new and emerging field in automatic control. Fault-tolerant control is a key component, because modern, large-scale processes must meet high standards of performance, reliability and safety. Fault propagation in large-scale chemical processes can lead to loss of production, energy and raw materials, and even environmental hazard. The present paper develops a multi-agent fault-tolerant control architecture using robust fractional-order controllers for a (13C) cryogenic separation column cascade. The JADE (Java Agent DEvelopment Framework) platform was used to implement the multi-agent fault-tolerant control system, while the operational model of the process was implemented in the Matlab/SIMULINK environment. The MACSimJX (Multiagent Control Using Simulink with Jade Extension) toolbox was used to link the control system and the process model. In order to verify the performance and prove the feasibility of the proposed control architecture, several fault simulation scenarios were performed.

  19. Nickel-Hydrogen Battery Fault Clearing at Low State of Charge

    NASA Technical Reports Server (NTRS)

    Lurie, C.

    1997-01-01

    Fault clearing currents were achieved and maintained at discharge rates from C/2 to C/3 at high and low states of charge. The fault clearing plateau voltage is a strong function of discharge current and of the voltage prior to the fault clearing event, and a weak function of state of charge. Voltage performance, for the range of conditions reported, is summarized.

  20. Study on Fault Diagnostics of a Turboprop Engine Using Inverse Performance Model and Artificial Intelligent Methods

    NASA Astrophysics Data System (ADS)

    Kong, Changduk; Lim, Semyeong

    2011-12-01

    Recently, health monitoring systems for the major gas path components of gas turbines have mostly used model-based methods such as Gas Path Analysis (GPA). This method finds quantitative changes in component performance characteristic parameters, such as isentropic efficiency and mass flow parameter, by comparing measured engine performance parameters (temperatures, pressures, rotational speeds, fuel consumption, etc.) with the fault-free engine performance parameters calculated by a base engine performance model. Currently, expert engine diagnostic systems using artificial intelligence methods such as Neural Networks (NNs), Fuzzy Logic and Genetic Algorithms (GAs) have been studied to improve on the model-based method. Among them, NNs are most often used in engine fault diagnostic systems due to their good learning performance, but they suffer from low accuracy and the long learning time needed to build the learning database when large amounts of learning data are involved. In addition, they require a very complex structure for effectively finding single-type or multiple-type faults of gas path components. This work inversely builds a base performance model of a turboprop engine, to be used in a high-altitude-operation UAV, from measured performance data, and proposes a fault diagnostic system using the base engine performance model and artificial intelligence methods such as Fuzzy Logic and Neural Networks. The proposed diagnostic system first isolates the faulted components using Fuzzy Logic, then quantifies the faults of the identified components using an NN trained on a fault learning database obtained from the developed base performance model. The Feed Forward Back Propagation (FFBP) method is used to train the NN. Finally, it is verified through several test examples that component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.
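    The quantification step can be pictured as a small feed-forward regressor trained on fault data generated from a base model: implanted fault magnitudes produce measurement deltas, and the network learns the inverse map. The influence matrix, fault ranges, and noise level below are synthetic assumptions, not the paper's engine model.

    ```python
    # Sketch: train a feed-forward (backprop) regressor to quantify component
    # faults from measurement deviations, using a synthetic fault database.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    # Assumed linear influence of two fault magnitudes (compressor and turbine
    # efficiency deltas) on three measured deltas (e.g. EGT, fuel flow, speed).
    influence = np.array([[1.2, -0.4, 0.8],
                          [0.3,  0.9, -0.5]])
    faults = rng.uniform(-0.05, 0.0, size=(500, 2))        # implanted fault sizes
    measurements = faults @ influence + rng.normal(0, 1e-3, (500, 3))

    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    net.fit(measurements, faults)                          # learn the inverse map

    true_fault = np.array([[-0.02, -0.01]])
    pred = net.predict(true_fault @ influence)
    print("true:", true_fault.ravel(), "estimated:", pred.ravel())
    ```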

  1. Coordinated Fault Tolerance for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, Jack; Bosilca, George; et al.

    2013-04-08

    Our work to meet our goal of end-to-end fault tolerance has focused on two areas: (1) improving fault tolerance in various software currently available and widely used throughout the HEC domain and (2) using fault information exchange and coordination to achieve holistic, systemwide fault tolerance and understanding how to design and implement interfaces for integrating fault tolerance features for multiple layers of the software stack—from the application, math libraries, and programming language runtime to other common system software such as jobs schedulers, resource managers, and monitoring tools.

  2. Adjustable Autonomy Testbed

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Schreckenghost, Debra K.

    2001-01-01

    The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.
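
    The cascading behavior described above is the essence of discrete event simulation. CONFIG itself is a NASA tool not shown here; the fragment below is only a generic, minimal event-queue loop, with invented component names and delays, illustrating how one failure event schedules derived degradation events downstream.

```python
import heapq

# Minimal discrete-event loop: a failure on one component schedules derived
# degradation events on downstream components after a transport delay,
# producing cascading effects. Names and delays are invented.
downstream = {"pump": ["gas_processor"], "gas_processor": ["habitat_loop"]}

events = [(0.0, "fail", "pump")]
while events:
    t, kind, comp = heapq.heappop(events)
    print(f"t={t:4.1f}  {kind:>7}  {comp}")
    if kind in ("fail", "degrade"):
        for dep in downstream.get(comp, []):
            heapq.heappush(events, (t + 2.0, "degrade", dep))
```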

  3. A comparative study of sensor fault diagnosis methods based on observer for ECAS system

    NASA Astrophysics Data System (ADS)

    Xu, Xing; Wang, Wei; Zou, Nannan; Chen, Long; Cui, Xiaoli

    2017-03-01

    The performance and practicality of an electronically controlled air suspension (ECAS) system depend strongly on the state information supplied by various sensors, but sensor faults occur frequently. Based on a nonlinear 3-DOF quarter-vehicle model, different fault detection and isolation (FDI) methods are used to diagnose sensor faults in the ECAS system. The approaches considered are the extended Kalman filter (EKF), with its concise algorithm; the strong tracking filter (STF), with robust tracking ability; and the cubature Kalman filter (CKF), with high numerical precision. State observers based on these three filters are designed for the ECAS system under typical sensor faults and noise. Results show that all three approaches can successfully detect and isolate faults despite environmental noise, although their FDI time delays and fault sensitivities differ; compared with the EKF and STF, the CKF performs best at FDI of sensor faults for the ECAS system.
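
    As a concrete illustration of the observer-based FDI idea common to all three filters, the sketch below runs a scalar Kalman filter as the observer and flags a fault when the normalised innovation (residual) exceeds a threshold. The one-state random-walk model, noise levels, bias fault, and threshold are invented stand-ins for the paper's 3-DOF vehicle model.

```python
import numpy as np

# Residual-based sensor FDI sketch: the filter predicts the measurement and
# a fault is declared when the normalised innovation exceeds a threshold.
A, H, Q, R = 1.0, 1.0, 1e-4, 1e-2
x, P = 0.0, 1.0
rng = np.random.default_rng(0)
truth = 1.0                              # e.g. a constant ride-height signal

for k in range(100):
    z = truth + rng.normal(0.0, 0.1)
    if k >= 60:                          # injected sensor bias fault
        z += 0.8
    x, P = A * x, A * P * A + Q          # predict
    innov, S = z - H * x, H * P * H + R  # innovation and its variance
    K = P * H / S
    x, P = x + K * innov, (1 - K * H) * P  # update
    if innov**2 / S > 16.0:              # ~4-sigma test on the innovation
        print(f"k={k}: sensor fault flagged (normalised innovation^2 = {innov**2 / S:.1f})")
        break
```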

  4. A Kalman Filter Based Technique for Stator Turn-Fault Detection of the Induction Motors

    NASA Astrophysics Data System (ADS)

    Ghanbari, Teymoor; Samet, Haidar

    2017-11-01

    Monitoring the stator current of induction motors (IMs) for fault diagnosis has considerable economic and technical advantages over other techniques in this context. Among the various faults of an IM, stator and bearing faults are the most probable types, and they can be detected by analyzing signatures in the stator currents. One of the most reliable indicators for fault detection in IMs is the lower sidebands of the power frequency in the stator currents. This paper presents a novel, simple technique for detecting stator turn faults in IMs. The frequencies of the lower sidebands are determined from the motor specifications, and their amplitudes are estimated by a Kalman filter (KF). The instantaneous total harmonic distortion (ITHD) of these harmonics is then calculated. Since the ITHD of the three-phase currents varies considerably in the case of a stator turn fault, the fault can be detected confidently using this criterion. Simulation results verify the high performance of the proposed method, and its performance is also confirmed experimentally.
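
    A minimal sketch of the sideband-amplitude estimation step, assuming the sideband frequency is already known from the motor specifications and, for brevity, that the fundamental component is known exactly and can be subtracted (in practice it would also be estimated). The two-state Kalman filter tracks the in-phase and quadrature components of the sideband; all signal parameters are invented.

```python
import numpy as np

# Two-state KF (in-phase/quadrature) estimating the amplitude of a known
# lower-sideband frequency in a stator current.
fs, f0, fsb = 5000.0, 50.0, 44.0                 # Hz; all values invented
t = np.arange(0.0, 1.0, 1.0 / fs)
i_stator = 10.0 * np.cos(2 * np.pi * f0 * t) + 0.3 * np.cos(2 * np.pi * fsb * t)

x, P, R = np.zeros(2), np.eye(2), 0.1
for k, tk in enumerate(t):
    H = np.array([np.cos(2 * np.pi * fsb * tk), np.sin(2 * np.pi * fsb * tk)])
    z = i_stator[k] - 10.0 * np.cos(2 * np.pi * f0 * tk)  # remove fundamental
    S = H @ P @ H + R
    K = P @ H / S
    x = x + K * (z - H @ x)
    P = P - np.outer(K, H @ P)

amp_sb = float(np.hypot(x[0], x[1]))
print(f"sideband amplitude: {amp_sb:.3f} A (true 0.3); ITHD-like ratio: {amp_sb / 10:.4f}")
```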

  5. A study of the relationship between the performance and dependability of a fault-tolerant computer

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar K.

    1994-01-01

    This thesis studies the relationship by creating a tool (FTAPE) that integrates a high-stress workload generator with fault injection, and by using the tool to evaluate system performance under error conditions. The workloads are composed of processes formed from atomic components that represent CPU, memory, and I/O activity. The fault injector is software-implemented and can inject faults into any memory-addressable location, including special registers and caches. The tool has been used to study a Tandem Integrity S2 computer. Workloads with varying numbers of processes and varying compositions of CPU, memory, and I/O activity are first characterized in terms of performance, and faults are then injected into these workloads. The results show that as the number of concurrent processes increases, the mean fault latency initially increases due to increased contention for the CPU. However, for still higher numbers of processes (more than three), the mean latency decreases because long-latency faults are paged out before they can be activated.

  6. Machine remaining useful life prediction: An integrated adaptive neuro-fuzzy and high-order particle filtering approach

    NASA Astrophysics Data System (ADS)

    Chen, Chaochao; Vachtsevanos, George; Orchard, Marcos E.

    2012-04-01

    Machine prognosis can be considered as the generation of long-term predictions that describe the evolution in time of a fault indicator, with the purpose of estimating the remaining useful life (RUL) of a failing component/subsystem so that timely maintenance can be performed to avoid catastrophic failures. This paper proposes an integrated RUL prediction method using adaptive neuro-fuzzy inference systems (ANFIS) and high-order particle filtering, which forecasts the time evolution of the fault indicator and estimates the probability density function (pdf) of the RUL. The ANFIS is trained and integrated into a high-order particle filter as a model describing the fault progression. The high-order particle filter is used to estimate the current state and carry out p-step-ahead predictions via a set of particles. These predictions are used to estimate the RUL pdf. The performance of the proposed method is evaluated using real-world data from a seeded fault test for a UH-60 helicopter planetary gear plate. The results demonstrate that it outperforms both the conventional ANFIS predictor and a particle-filter-based predictor whose fault growth model is a first-order model trained via ANFIS.
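
    The p-step-ahead prediction that produces the RUL pdf can be sketched compactly: propagate each particle through a growth model until it crosses a failure threshold, and read the RUL distribution off the crossing times. Here a simple exponential-growth rule with process noise stands in for the trained ANFIS model; all numbers are invented, and uncrossed particles are capped at the prediction horizon.

```python
import numpy as np

# Each particle carries a fault-indicator value; it is propagated step by
# step through a (stand-in) growth model with process noise, and its RUL is
# the first step at which it crosses the failure threshold.
rng = np.random.default_rng(1)
n, horizon, threshold = 2000, 300, 1.0
x = rng.normal(0.40, 0.02, n)                 # current particle cloud

rul = np.full(n, horizon, dtype=float)        # capped at the horizon
alive = np.ones(n, dtype=bool)
for p in range(1, horizon + 1):
    x[alive] += 0.005 * x[alive] + rng.normal(0.0, 0.002, alive.sum())
    crossed = alive & (x >= threshold)
    rul[crossed] = p
    alive &= ~crossed

print(f"RUL mean {rul.mean():.0f} steps; 5-95% interval "
      f"[{np.percentile(rul, 5):.0f}, {np.percentile(rul, 95):.0f}] steps")
```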

  7. Health savings accounts--a revolutionary method of funding medical care expenses.

    PubMed

    Knox, George Con

    2004-10-01

    Health Savings Accounts coupled with a High-Deductible Health Care plan offer many eligible individuals the opportunity to reduce their insurance premiums and create a tax-deductible account to pay medical, dental, and vision care expenses.

  8. Evaluative Feedback can Improve Deductive Reasoning

    DTIC Science & Technology

    2012-08-01

    …theories of reasoning explicitly permit evaluative feedback to modulate the way individuals reason (Braine & O'Brien, 1998; Oaksford & Chater, 2007)… incorrect is to check their reasoning (Johnson-Laird, Girotto, & Legrenzi, 2004). If feedback influences the way people make deductions, theories of reasoning might account for improvements in performance due to evaluative feedback. Experiment 1: Sentential reasoning.

  9. Reliability of Fault Tolerant Control Systems. Part 2

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2000-01-01

    This paper reports Part II of a two-part effort intended to delineate the relationship between reliability and fault-tolerant control in a quantitative manner. Reliability properties peculiar to fault-tolerant control systems are emphasized, such as the presence of analytic redundancy in high proportion, the dependence of failures on control performance, and the high risks associated with redundancy management decisions due to multiple sources of uncertainty and sometimes large processing requirements. As a consequence, coverage of failures through redundancy management can be severely limited. The paper proposes to formulate the fault-tolerant control problem as an optimization problem that maximizes coverage of failures through redundancy management. Coverage modeling is attempted in a way that captures its dependence on the control performance and on the diagnostic resolution. Under the proposed redundancy management policy, it is shown that enhanced overall system reliability can be achieved with a control law of superior robustness, an estimator of higher resolution, and a less stringent control performance requirement.

  10. Tectono-seismic characteristics of faults in the shallow portion of an accretionary prism

    NASA Astrophysics Data System (ADS)

    Hirono, Tetsuro; Ishikawa, Tsuyoshi

    2018-01-01

    To understand the tectono-seismic evolution of faults in the shallow part of a subduction-accretion system, we examined major faults in a fossil accretionary prism, the Emi Group (Hota Group), Boso Peninsula, Japan, by performing multiple structural, geochemical, and mineralogical analyses. Because the strata are relatively shallow (burial depth 1-4 km), early-stage deformation related to subduction, accretion, and uplift is well preserved in three dominant fault zones. On the basis of both previous findings and our geochemical and mineralogical results, we inferred that early-stage faulting in a near-trench setting under high pore-fluid pressure, and second-stage faulting at relatively deep levels along the subduction interface, corresponded to aseismic deformation, as shown by velocity-strengthening characteristics; during late-stage faulting, probably associated with the accretion and uplift processes, a high-temperature fluid, revealed by a geochemical temperature proxy, triggered fault weakening by a thermal pressurization mechanism and potentially led to the generation of a tsunami.

  11. Using Performance Tools to Support Experiments in HPC Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naughton, III, Thomas J; Boehm, Swen; Engelmann, Christian

    2014-01-01

    The high performance computing (HPC) community is working to address fault tolerance and resilience concerns for current and future large scale computing platforms. This is driving enhancements in programming environments, specifically research on enhancing message passing libraries to support fault tolerant computing capabilities. The community has also recognized that tools for resilience experimentation are greatly lacking. However, we argue that there are several parallels between performance tools and resilience tools. As such, we believe the rich set of HPC performance-focused tools can be extended (repurposed) to benefit the resilience community. In this paper, we describe the initial motivation to leverage standard HPC performance analysis techniques to aid in developing diagnostic tools to assist fault tolerance experiments for HPC applications. These diagnosis procedures help to provide context for the system when errors (failures) occur. We describe our initial work in leveraging an MPI performance trace tool to assist in providing global context during fault injection experiments. Such tools will assist the HPC resilience community as they extend existing and new application codes to support fault tolerance.

  12. An approach to secure weather and climate models against hardware faults

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; Dawson, Andrew

    2017-03-01

    Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelization to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. In this paper, we present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform model simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13 % for the shallow water model.
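
    A toy version of the detect-and-restore cycle, assuming a 1-D field, a 4x coarsening factor, and a hand-chosen error tolerance; the corrupted value plays the role of a bit-flip. The real scheme operates on the model's prognostic variables and grid, so everything below is an invented miniature.

```python
import numpy as np

# 1-D miniature of the backup-grid scheme: keep a coarse copy of a prognostic
# field, flag fine-grid cells that stray implausibly far from the interpolated
# backup, and restore them from it.
nx, coarsen, tol = 64, 4, 0.5
xs = np.linspace(0.0, 2 * np.pi, nx)
h = np.sin(xs)                                   # prognostic variable
backup = h[::coarsen].copy()                     # coarse backup copy

h[37] = 1.0e30                                   # hardware fault corrupts a cell

ref = np.interp(np.arange(nx), np.arange(0, nx, coarsen), backup)
bad = np.abs(h - ref) > tol                      # detection against the backup
h[bad] = ref[bad]                                # restore and continue

print(f"restored {bad.sum()} cell(s); max error vs truth: {np.abs(h - np.sin(xs)).max():.3f}")
```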

  14. Potential fault region detection in TFDS images based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Sun, Junhua; Xiao, Zhongwen

    2016-10-01

    In recent years, more than 300 sets of the Trouble of Running Freight Train Detection System (TFDS) have been installed on Chinese railways to monitor the safety of running freight trains. However, TFDS is only responsible for capturing, transmitting, and storing images, and fails to recognize faults automatically owing to difficulties such as the diversity and complexity of faults and low-quality images. To improve automatic fault recognition, it is important to locate the potential fault regions. In this paper, we introduce a convolutional neural network (CNN) model to TFDS and propose a potential fault region detection system (PFRDS) for simultaneously detecting four typical types of potential fault regions (PFRs). The experimental results show that the system achieves high image detection performance for PFRs in TFDS, with an average detection recall of 98.95% and a precision of 100%, demonstrating high detection ability and robustness against various poor imaging conditions.

  15. Performance back-deduction from a loading to flow coefficient map: Application to radial turbine

    NASA Astrophysics Data System (ADS)

    Carbonneau, Xavier; Binder, Nicolas

    2012-12-01

    Radial turbine stages are often used for applications requiring off-design operation, such as turbocharging. The off-design capability of such stages is commonly analyzed through the traditional turbine map, which plots the reduced mass flow against the pressure ratio for reduced-speed lines. However, alternatives are possible, such as the loading-coefficient (Ψ) to flow-coefficient (φ) diagram, in which the pressure-ratio lines are actually straight lines, a very convenient property for prediction. A robust method for re-creating the traditional map from a predicted Ψ-φ diagram is therefore needed. Recent work has shown that the quality of this back-deduction, without the use of any loss model, depends on knowledge of an intermediate pressure ratio, so a model of this parameter is proposed. Comparison with both experimental and CFD results shows quite good agreement for mass flow rate, rotational speed, and the intermediate pressure ratio. The last part of the paper applies the intermediate pressure-ratio model to improve the deduction of the pressure-ratio lines in the Ψ-φ diagram. Alongside this improvement, the back-deduction method for the classical map is structured, applied, and evaluated.

  16. Basic research on machinery fault diagnostics: Past, present, and future trends

    NASA Astrophysics Data System (ADS)

    Chen, Xuefeng; Wang, Shibin; Qiao, Baijie; Chen, Qiang

    2018-06-01

    Machinery fault diagnosis has progressed over the past decades with the evolution of machinery in terms of complexity and scale. High-value machines require condition monitoring and fault diagnosis to guarantee their designed functions and performance throughout their lifetime, and research on machinery fault diagnostics has grown rapidly in recent years. This paper summarizes and reviews recent R&D trends in the basic research field of machinery fault diagnosis in terms of four main aspects: fault mechanisms, sensor techniques and signal acquisition, signal processing, and intelligent diagnostics. The review discusses the particular contributions of Chinese scholars to machinery fault diagnostics. On the basis of this review of the basic theory of machinery fault diagnosis and its practical engineering applications, the paper concludes with a brief discussion of future trends and challenges in machinery fault diagnosis.

  17. Characterization of the faulted behavior of digital computers and fault tolerant systems

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Miner, Paul S.

    1989-01-01

    A development status evaluation is presented for efforts conducted at NASA-Langley since 1977 toward the characterization of latent faults in digital fault-tolerant systems. Attention is given to the practical, high-speed, generalized gate-level logic system simulator that was developed, as well as to the validation methodology used for the simulator, based on faultable software and hardware simulations employing a prototype MIL-STD-1750A processor. After validation, latency tests will be performed.

  18. Dynamic rupture simulations on complex fault zone structures with off-fault plasticity using the ADER-DG method

    NASA Astrophysics Data System (ADS)

    Wollherr, Stephanie; Gabriel, Alice-Agnes; Igel, Heiner

    2015-04-01

    In dynamic rupture models, high stress concentrations at rupture fronts have to be accommodated by off-fault inelastic processes such as plastic deformation. As presented in (Roten et al., 2014), incorporating plastic yielding can significantly reduce earlier predictions of ground motions in the Los Angeles Basin. Further, an inelastic response of the material surrounding a fault potentially has a strong impact on surface displacement and is therefore a key aspect in understanding the triggering of tsunamis through seafloor uplift. We present an implementation of off-fault plasticity and its verification for the software package SeisSol, an arbitrary high-order derivative discontinuous Galerkin (ADER-DG) method. The software recently reached multi-petaflop/s performance on some of the largest supercomputers worldwide and was a Gordon Bell prize finalist application in 2014 (Heinecke et al., 2014). For the inelastic calculations we impose a Drucker-Prager yield criterion on the shear stress with a viscous regularization following (Andrews, 2005), which permits the smooth relaxation of high stress concentrations induced during the dynamic rupture process. We verify the implementation by comparison with the SCEC/USGS Spontaneous Rupture Code Verification Benchmarks. The results of test problem TPV13, with a 60-degree dipping normal fault, show that SeisSol is in good agreement with other codes. Additionally, we explore the numerical characteristics of the off-fault plasticity implementation by performing convergence tests with the 2D code. The ADER-DG method is especially suited for complex geometries because it uses unstructured tetrahedral meshes. Local adaptation of the mesh resolution enables fine sampling of the cohesive zone on the fault while simultaneously satisfying the dispersion requirements of wave propagation away from the fault. In this context we investigate the influence of off-fault plasticity on geometrically complex fault zone structures such as subduction zones or branched faults. Studying the interplay of stress conditions and the angle dependence of neighbouring branches, including inelastic material behaviour and its effects on rupture jumps and seismic activation, helps to advance our understanding of earthquake source processes. An application is the simulation of a realistic large-scale subduction zone scenario including plasticity, to validate the coupling of our dynamic rupture calculations to a tsunami model in the framework of the ASCETE project (http://www.ascete.de/). Andrews, D. J. (2005): Rupture dynamics with energy loss outside the slip zone, J. Geophys. Res., 110, B01307. Heinecke, A., A. Breuer, S. Rettenberger, M. Bader, A.-A. Gabriel, C. Pelties, A. Bode, W. Barth, K. Vaidyanathan, M. Smelyanskiy and P. Dubey (2014): Petascale High Order Dynamic Rupture Earthquake Simulations on Heterogeneous Supercomputers. In Supercomputing 2014, The International Conference for High Performance Computing, Networking, Storage and Analysis. IEEE, New Orleans, LA, USA, November 2014. Roten, D., K. B. Olsen, S. M. Day, Y. Cui, and D. Fäh (2014): Expected seismic shaking in Los Angeles reduced by San Andreas fault zone plasticity, Geophys. Res. Lett., 41, 2769-2777.

  19. Reliability of Fault Tolerant Control Systems. Part 1

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2001-01-01

    This paper reports Part I of a two-part effort intended to delineate the relationship between reliability and fault-tolerant control in a quantitative manner. Reliability analysis of fault-tolerant control systems is performed using Markov models, and reliability properties peculiar to fault-tolerant control systems are emphasized. As a consequence, coverage of failures through redundancy management can be severely limited. It is shown that in the early life of a system composed of highly reliable subsystems, the reliability of the overall system is affine with respect to coverage, and inadequate coverage induces dominant single-point failures. The utility of some existing software tools for assessing the reliability of fault-tolerant control systems is also discussed. Coverage modeling is attempted in Part II in a way that captures its dependence on the control performance and on the diagnostic resolution.
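
    The affine dependence of reliability on coverage is easy to see in a minimal Markov model. The sketch below assumes a duplex system with identical units failing at rate lam, where a first failure is successfully reconfigured ("covered") with probability c; the rate and mission time are invented, not values from the paper.

```python
import numpy as np

# Coverage-dependent Markov reliability of a duplex system: solving the
# chain gives R(t) = p_duplex(t) + p_simplex(t), which is affine in the
# coverage c, illustrating the point above.
lam, t = 1e-3, 100.0   # failure rate (1/h) and mission time (h), invented

def reliability(c):
    p_duplex = np.exp(-2 * lam * t)
    p_simplex = 2 * c * (np.exp(-lam * t) - np.exp(-2 * lam * t))
    return p_duplex + p_simplex

for c in (0.90, 0.95, 0.99, 1.0):
    print(f"coverage {c:4.2f} -> R(t) = {reliability(c):.5f}")
```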

  20. Fault zone structure and fluid-rock interaction of a high angle normal fault in Carrara marble (NW Tuscany, Italy)

    NASA Astrophysics Data System (ADS)

    Molli, G.; Cortecci, G.; Vaselli, L.; Ottria, G.; Cortopassi, A.; Dinelli, E.; Mussi, M.; Barbieri, M.

    2010-09-01

    We studied the geometry, intensity of deformation, and fluid-rock interaction of a high-angle normal fault within the Carrara marble of the Alpi Apuane, NW Tuscany, Italy. The fault comprises a core bounded by two major, non-parallel slip surfaces. The fault core, marked by crush breccia and cataclasites, grades asymmetrically into the host protolith through a damage zone, which is well developed only in the footwall block. In contrast, the transition from the fault core to the hangingwall protolith is sharply defined by the upper main slip surface. Faulting was associated with fluid-rock interaction, as evidenced by kinematically related veins observable in the damage zone and by fluid channelling within the fault core, where an orange-brownish cataclasite matrix can be observed. A chemical and isotopic study of veins and of the different structural elements of the fault zone (protolith, damage zone, and fault core), together with a mathematical model, was performed to document the type, role, and activity of fluid-rock interactions during deformation. Our results suggest that the deformation pattern was mainly controlled by processes associated with a linking damage zone at a fault tip, the development of a fault core, and the localization and channelling of fluids within the fault zone. Syn-kinematic microstructural modification of the calcite microfabric possibly played a role in confining fluid percolation.

  1. Deductibles in health insurance

    NASA Astrophysics Data System (ADS)

    Dimitriyadis, I.; Öney, Ü. N.

    2009-11-01

    This study is an extension of a simulation study developed to determine ruin probabilities in health insurance. The study concentrates on inpatient and outpatient benefits for customers of varying age bands. Loss distributions are modelled through the Allianz tool pack for different classes of insureds. Premiums at different levels of deductibles are derived in the simulation, and ruin probabilities are computed assuming a linear loading on the premium. The increase in the probability of ruin at high levels of the deductible clearly shows the insufficiency of proportional loading in deductible premiums. The PH-transform pricing rule developed by Wang is analyzed as an alternative pricing rule. A simple case, in which the insured is assumed to be an exponential-utility decision maker while the insurer's pricing rule is a PH-transform, is also treated.
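
    The simulation machinery described can be sketched in a few lines: draw claim counts and sizes, pay the part above the deductible, charge a proportionally loaded premium, and count the paths whose surplus goes negative. The exponential claim distribution, horizon, and every parameter below are invented, so the printed numbers only illustrate the mechanics, not the paper's results.

```python
import numpy as np

# Finite-horizon ruin probability with a per-claim deductible d and a
# proportional premium loading theta. For exponential claims, the insurer's
# expected payment per claim is mean * exp(-d / mean).
rng = np.random.default_rng(2)

def ruin_prob(d, theta=0.2, u0=50.0, years=20, n_paths=5000, lam=5.0, mean=10.0):
    premium = (1 + theta) * lam * mean * np.exp(-d / mean)   # annual premium
    ruined = 0
    for _ in range(n_paths):
        surplus = u0
        for _ in range(years):
            claims = rng.exponential(mean, rng.poisson(lam))
            surplus += premium - np.maximum(claims - d, 0.0).sum()
            if surplus < 0:
                ruined += 1
                break
    return ruined / n_paths

for d in (0.0, 5.0, 20.0):
    print(f"deductible {d:5.1f}: P(ruin within 20y) ~ {ruin_prob(d):.3f}")
```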

  2. Fault-scale controls on rift geometry: the Bilila-Mtakataka Fault, Malawi

    NASA Astrophysics Data System (ADS)

    Hodge, M.; Fagereng, A.; Biggs, J.; Mdala, H. S.

    2017-12-01

    Border faults that develop during initial stages of rifting determine the geometry of rifts and passive margins. At outcrop and regional scales, it has been suggested that border fault orientation may be controlled by reactivation of pre-existing weaknesses. Here, we perform a multi-scale investigation on the influence of anisotropic fabrics along a major developing border fault in the southern East African Rift, Malawi. The 130 km long Bilila-Mtakataka fault has been proposed to have slipped in a single MW 8 earthquake with 10 m of normal displacement. The fault is marked by an 11±7 m high scarp with an average trend that is oblique to the current plate motion. Variations in scarp height are greatest at lithological boundaries and where the scarp switches between following and cross-cutting high-grade metamorphic foliation. Based on the scarp's geometry and morphology, we define 6 geometrically distinct segments. We suggest that the segments link to at least one deeper structure that strikes parallel to the average scarp trend, an orientation consistent with the kinematics of an early phase of rift initiation. The slip required on a deep fault(s) to match the height of the current scarp suggests multiple earthquakes along the fault. We test this hypothesis by studying the scarp morphology using high-resolution satellite data. Our results suggest that during the earthquake(s) that formed the current scarp, the propagation of the fault toward the surface locally followed moderately-dipping foliation well oriented for reactivation. In conclusion, although well oriented pre-existing weaknesses locally influence shallow fault geometry, large-scale border fault geometry appears primarily controlled by the stress field at the time of fault initiation.

  3. Fault tree analysis: NiH2 aerospace cells for LEO mission

    NASA Technical Reports Server (NTRS)

    Klein, Glenn C.; Rash, Donald E., Jr.

    1992-01-01

    Fault Tree Analysis (FTA) is one of several reliability analyses or assessments applied to battery cells to be used in typical electric power subsystems for spacecraft on low Earth orbit missions. FTA is, in general, the process of reviewing and analytically examining a system or piece of equipment in such a way as to emphasize the lower-level fault occurrences that directly or indirectly contribute to the major fault or top-level event. This qualitative FTA addresses the potential occurrence of five specific top-level events: hydrogen leakage, through either discrete leakage paths or pressure vessel rupture; and four distinct modes of performance degradation - high charge voltage, suppressed discharge voltage, loss of capacity, and high pressure.
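
    For readers unfamiliar with the mechanics of FTA, the sketch below shows how basic-event probabilities combine through OR/AND gates into a top-event probability, assuming independent events. The events mirror the hydrogen-leakage top event above, but all numbers are illustrative inventions, not values from the NiH2 analysis.

```python
# Basic-event probabilities combined through OR/AND gates up to a top event,
# assuming independent events.
def p_or(*ps):       # at least one input event occurs
    q = 1.0
    for p in ps:
        q *= 1.0 - p
    return 1.0 - q

def p_and(*ps):      # all input events occur
    q = 1.0
    for p in ps:
        q *= p
    return q

p_seal_leak    = 1e-4    # discrete leakage path (invented)
p_weld_defect  = 1e-5
p_overpressure = 1e-3
p_rupture = p_and(p_weld_defect, p_overpressure)   # both needed
p_top     = p_or(p_seal_leak, p_rupture)           # either suffices

print(f"P(top event: hydrogen leakage) = {p_top:.3e}")
```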

  4. Incipient fault detection study for advanced spacecraft systems

    NASA Technical Reports Server (NTRS)

    Milner, G. Martin; Black, Michael C.; Hovenga, J. Mike; Mcclure, Paul F.

    1986-01-01

    A feasibility study to investigate the application of vibration monitoring to the rotating machinery of planned NASA advanced spacecraft components is described. Factors investigated include: (1) special problems associated with small, high RPM machines; (2) application across multiple component types; (3) microgravity; (4) multiple fault types; (5) eight different analysis techniques including signature analysis, high frequency demodulation, cepstrum, clustering, amplitude analysis, and pattern recognition are compared; and (6) small sample statistical analysis is used to compare performance by computation of probability of detection and false alarm for an ensemble of repeated baseline and faulted tests. Both detection and classification performance are quantified. Vibration monitoring is shown to be an effective means of detecting the most important problem types for small, high RPM fans and pumps typical of those planned for the advanced spacecraft. A preliminary monitoring system design and implementation plan is presented.

  5. Subsurface Resistivity Structures in and Around Strike-Slip Faults - Electromagnetic Surveys and Drillings Across Active Faults in Central Japan -

    NASA Astrophysics Data System (ADS)

    Omura, K.; Ikeda, R.; Iio, Y.; Matsuda, T.

    2005-12-01

    Electrical resistivity is an important property for investigating the structure of active faults. Because pore fluids strongly affect the electrical properties of rocks, subsurface electrical resistivity can be an indicator of the existence of fluid and the distribution of pores. The fracture zone of a fault is expected to have low resistivity owing to its high porosity and small grain size. A strike-slip fault, in particular, has a nearly vertical fracture zone that should be detectable by an electrical survey across the fault. We performed electromagnetic surveys across strike-slip active faults in central Japan; at the same faults, we also drilled boreholes into the fault zones and carried out downhole logging. We applied MT or CSAMT methods to 5 faults: the Nojima fault, which ruptured the surface in the 1995 Great Kobe earthquake (M=7.2); the western Nagano Ohtaki area (1984 Nagano-ken Seibu earthquake, M=6.8; this fault did not reach the surface); the Neodani fault, which ruptured in the 1891 Nobi earthquake (M=8.0); the Atera fault, which appears to have been dislocated by the 1586 Tensyo earthquake (M=7.9); and the Gofukuji fault, considered to have last been active about 1200 years ago. The sampling frequencies of the electric and magnetic fields were 2-1024 Hz (10 frequencies) for the CSAMT survey and 0.00055-384 Hz (40 frequencies) for the MT survey. The electromagnetic data were processed by standard methods and inverted to 2-D resistivity structures along transects of the faults. The survey results were compared with downhole electrical logging data and observational descriptions of drilled cores. The fault plane of each fault was recognized as a low-resistivity region, or as a boundary between relatively low- and high-resistivity regions, except for the Gofukuji fault, which lies in a relatively high-resistivity region; during the very long time elapsed since its last earthquake, the properties of the fracture zone of the Gofukuji fault may have changed from the low-resistivity character observed for the other faults. The downhole electrical logging data were consistent with the resistivity values estimated by the electromagnetic surveys: the relatively low- and high-resistivity regions of the 2-D structures were observed again by downhole logging at the corresponding depths, and cores recovered from depths where the logging showed low resistivity were highly fractured and altered relative to the high-resistivity host rock. The results of the electromagnetic surveys, downhole electrical logging, and core observations were thus consistent with each other. In the present case, electromagnetic surveying proved useful for exploring the properties of fault fracture zones; in further investigations, it will be important to explore the relationships between features of the resistivity structure and the geological and geophysical settings of the faults.

  6. Application of Fault Tree Analysis and Fuzzy Neural Networks to Fault Diagnosis in the Internet of Things (IoT) for Aquaculture.

    PubMed

    Chen, Yingyi; Zhen, Zhumi; Yu, Huihui; Xu, Jing

    2017-01-14

    In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds in remote areas. Faults occur frequently in these tough environments, where the staff generally lack professional knowledge and pay little attention to the equipment; once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logical structure of fault symptoms and faults. Second, the rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to learn the mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults; four fault relationships are obtained. Results show that one-symptom-to-one-fault, two-symptoms-to-two-faults, and two-symptoms-to-one-fault relationships can be diagnosed rapidly with high precision, while one-symptom-to-two-faults patterns perform less well and warrant further research. This model implements diagnosis for most kinds of faults in the aquaculture IoT.

  8. Sequoia: A fault-tolerant tightly coupled multiprocessor for transaction processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, P.A.

    1988-02-01

    The Sequoia computer is a tightly coupled multiprocessor, and thus attains the performance advantages of this style of architecture. It avoids most of the fault-tolerance disadvantages of tight coupling by using a new fault-tolerance design. The Sequoia architecture is similar to other multi-microprocessor architectures, such as those of Encore and Sequent, in that it gives dozens of microprocessors shared access to a large main memory. It resembles the Stratus architecture in its extensive use of hardware fault-detection techniques. It resembles Stratus and Auragen in its ability to quickly recover all processes after a single-point failure, transparently to the user. However, Sequoia is unique in its combination of a large-scale tightly coupled architecture with a hardware approach to fault tolerance. This article gives an overview of how the hardware architecture and operating system (OS) work together to provide a high degree of fault tolerance with good system performance.

  9. Towards New Metrics for High-Performance Computing Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Ashraf, Rizwan A; Engelmann, Christian

    Ensuring the reliability of applications is becoming an increasingly important challenge as high-performance computing (HPC) systems experience an ever-growing number of faults, errors and failures. While the HPC community has made substantial progress in developing various resilience solutions, it continues to rely on platform-based metrics to quantify application resiliency improvements. The resilience of an HPC application is concerned with the reliability of the application outcome as well as the fault handling efficiency. To understand the scope of impact, effective coverage and performance efficiency of existing and emerging resilience solutions, there is a need for new metrics. In this paper, we develop new ways to quantify resilience that consider both the reliability and the performance characteristics of the solutions from the perspective of HPC applications. As HPC systems continue to evolve in terms of scale and complexity, it is expected that applications will experience various types of faults, errors and failures, which will require applications to apply multiple resilience solutions across the system stack. The proposed metrics are intended to be useful for understanding the combined impact of these solutions on an application's ability to produce correct results and to evaluate their overall impact on an application's performance in the presence of various modes of faults.

  10. The 4 phase VSR motor: The ideal prime mover for electric vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holling, G.H.; Yeck, M.M.

    1994-12-31

    4-phase variable switched reluctance motors are gaining acceptance in many applications due to their fault-tolerant characteristics. A 4-phase variable switched reluctance (VSR) motor is modelled and its performance is predicted at several operating points for an electric vehicle application. The 4-phase VSR offers fault tolerance, high performance, and an excellent torque-to-weight ratio. The actual system performance was measured both on a test stand and on an actual vehicle. While the system described is used in a production electric motor scooter, the technology is equally applicable to high-efficiency electric cars and buses. 4 refs.

  11. Quantitative fault tolerant control design for a hydraulic actuator with a leaking piston seal

    NASA Astrophysics Data System (ADS)

    Karpenko, Mark

    Hydraulic actuators are complex fluid power devices whose performance can be degraded in the presence of system faults. In this thesis a linear, fixed-gain, fault tolerant controller is designed that can maintain the positioning performance of an electrohydraulic actuator operating under load with a leaking piston seal and in the presence of parametric uncertainties. Developing a control system tolerant to this class of internal leakage fault is important since a leaking piston seal can be difficult to detect, unless the actuator is disassembled. The designed fault tolerant control law is of low-order, uses only the actuator position as feedback, and can: (i) accommodate nonlinearities in the hydraulic functions, (ii) maintain robustness against typical uncertainties in the hydraulic system parameters, and (iii) keep the positioning performance of the actuator within prescribed tolerances despite an internal leakage fault that can bypass up to 40% of the rated servovalve flow across the actuator piston. Experimental tests verify the functionality of the fault tolerant control under normal and faulty operating conditions. The fault tolerant controller is synthesized based on linear time-invariant equivalent (LTIE) models of the hydraulic actuator using the quantitative feedback theory (QFT) design technique. A numerical approach for identifying LTIE frequency response functions of hydraulic actuators from acceptable input-output responses is developed so that linearizing the hydraulic functions can be avoided. The proposed approach can properly identify the features of the hydraulic actuator frequency response that are important for control system design and requires no prior knowledge about the asymptotic behavior or structure of the LTIE transfer functions. A distributed hardware-in-the-loop (HIL) simulation architecture is constructed that enables the performance of the proposed fault tolerant control law to be further substantiated, under realistic operating conditions. Using the HIL framework, the fault tolerant hydraulic actuator is operated as a flight control actuator against the real-time numerical simulation of a high-performance jet aircraft. A robust electrohydraulic loading system is also designed using QFT so that the in-flight aerodynamic load can be experimentally replicated. The results of the HIL experiments show that using the fault tolerant controller to compensate the internal leakage fault at the actuator level can benefit the flight performance of the airplane.

  12. Model-Based Diagnostics for Propellant Loading Systems

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Foygel, Michael; Smelyanskiy, Vadim N.

    2011-01-01

    The loading of spacecraft propellants is a complex, risky operation. Therefore, diagnostic solutions are necessary to quickly identify when a fault occurs, so that recovery actions can be taken or an abort procedure can be initiated. Model-based diagnosis solutions, established using an in-depth analysis and understanding of the underlying physical processes, offer the advanced capability to quickly detect and isolate faults, identify their severity, and predict their effects on system performance. We develop a physics-based model of a cryogenic propellant loading system, which describes the complex dynamics of liquid hydrogen filling from a storage tank to an external vehicle tank, as well as the influence of different faults on this process. The model takes into account the main physical processes such as highly nonequilibrium condensation and evaporation of the hydrogen vapor, pressurization, and also the dynamics of liquid hydrogen and vapor flows inside the system in the presence of helium gas. Since the model incorporates multiple faults in the system, it provides a suitable framework for model-based diagnostics and prognostics algorithms. Using this model, we analyze the effects of faults on the system, derive symbolic fault signatures for the purposes of fault isolation, and perform fault identification using a particle filter approach. We demonstrate the detection, isolation, and identification of a number of faults using simulation-based experiments.
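
    The symbolic fault signatures mentioned above can be thought of as a lookup from faults to expected qualitative residual deviations; isolation then reduces to matching the observed pattern against the table. The residual names and signature entries below are invented placeholders, not the actual propellant-loading model's signatures.

```python
# Fault isolation with symbolic signatures: each fault maps to the expected
# qualitative deviation ('+', '-', '0') of a few residuals; isolation matches
# the observed pattern against the table.
signatures = {
    "valve stuck closed": {"flow": "-", "tank_pressure": "+", "temperature": "0"},
    "transfer line leak": {"flow": "-", "tank_pressure": "-", "temperature": "0"},
    "vent stuck open":    {"flow": "0", "tank_pressure": "-", "temperature": "+"},
}

observed = {"flow": "-", "tank_pressure": "-", "temperature": "0"}

candidates = [f for f, sig in signatures.items() if sig == observed]
print("consistent fault candidates:", candidates or "none (unmodeled fault)")
```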

  13. Eigenstructure Assignment for Fault Tolerant Flight Control Design

    NASA Technical Reports Server (NTRS)

    Sobel, Kenneth; Joshi, Suresh (Technical Monitor)

    2002-01-01

    In recent years, fault tolerant flight control systems have gained an increased interest for high performance military aircraft as well as civil aircraft. Fault tolerant control systems can be described as either active or passive. An active fault tolerant control system has to either reconfigure or adapt the controller in response to a failure. One approach is to reconfigure the controller based upon detection and identification of the failure. Another approach is to use direct adaptive control to adjust the controller without explicitly identifying the failure. In contrast, a passive fault tolerant control system uses a fixed controller which achieves acceptable performance for a presumed set of failures. We have obtained a passive fault tolerant flight control law for the F/A-18 aircraft which achieves acceptable handling qualities for a class of control surface failures. The class of failures includes the symmetric failure of any one control surface being stuck at its trim value. A comparison was made of an eigenstructure assignment gain designed for the unfailed aircraft with a fault tolerant multiobjective optimization gain. We have shown that time responses for the unfailed aircraft using the eigenstructure assignment gain and the fault tolerant gain are identical. Furthermore, the fault tolerant gain achieves MIL-F-8785C specifications for all failure conditions.

  14. Measurement of fault latency in a digital avionic mini processor, part 2

    NASA Technical Reports Server (NTRS)

    Mcgough, J.; Swern, F.

    1983-01-01

    The results of fault injection experiments utilizing a gate-level emulation of the central processor unit of the Bendix BDX-930 digital computer are described. Several earlier programs were reprogrammed, expanding the instruction set to capitalize on the full power of the BDX-930 computer. As a final demonstration of fault coverage, an extensive, 3-axis, high-performance flight control computation was added. The stages in the development of a CPU self-test program are demonstrated, emphasizing the relationship between fault coverage, speed, and the quantity of instructions.

  15. From Fault-Diagnosis and Performance Recovery of a Controlled System to Chaotic Secure Communication

    NASA Astrophysics Data System (ADS)

    Hsu, Wen-Teng; Tsai, Jason Sheng-Hong; Guo, Fang-Cheng; Guo, Shu-Mei; Shieh, Leang-San

    Chaotic systems are often applied to encryption for secure communication, but they may not provide a high degree of security. To improve the security of communication, chaotic systems may need to add other secure signals, but this may cause the system to diverge. In this paper, we redesign a communication scheme that achieves secure communication with additional secure signals while keeping the system convergent. First, we introduce a universal state-space adaptive observer-based fault diagnoser/estimator and a high-performance tracker for sampled-data linear time-varying systems with unanticipated decay factors in the actuators/system states. Robustness, convergence in the mean, and tracking ability are also established. A residual generation scheme and a mechanism for auto-tuning the switched gain are presented, so that the introduced methodology is applicable to fault detection and diagnosis (FDD) of actuator and state faults and yields high tracking-performance recovery. The evolutionary-programming-based adaptive observer is then applied to the problem of secure communication. Whenever the tracker induces a large control input that might not conform to the input constraints of some physical systems, the proposed modified linear quadratic optimal tracker (LQT) can effectively restrict the control input to the specified constraint interval while maintaining acceptable tracking performance. The effectiveness of the proposed design methodology is illustrated through tracking control simulation examples.

  16. Slip-pulse rupture behavior on a 2 meter granite fault

    USGS Publications Warehouse

    McLaskey, Gregory C.; Kilgore, Brian D.; Beeler, Nicholas M.

    2015-01-01

    We describe observations of dynamic rupture events that arise spontaneously in meter-scale laboratory earthquake experiments. While low-frequency slip of the granite sample occurs in a relatively uniform and crack-like manner, instruments capable of detecting high-frequency motions show that some parts of the fault slip abruptly (velocity > 100 mm/s, acceleration > 20 km/s²) while the majority of the fault slips more slowly. The abruptly slipping regions propagate along the fault at nearly the shear wave speed. We propose that the dramatic reduction in frictional strength implied by this pulse-like rupture behavior shares a common mechanism with the weakening reported in high-velocity friction experiments performed on rotary machines. The slip pulses can also be identified as migrating sources of high-frequency seismic waves. As observations from large earthquakes show similar propagating high-frequency sources, the pulses described here may be relevant to the mechanics of larger earthquakes.

  17. Design and experimental validation for direct-drive fault-tolerant permanent-magnet vernier machines.

    PubMed

    Liu, Guohai; Yang, Junqin; Chen, Ming; Chen, Qian

    2014-01-01

    A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines have the ability of high torque density by introducing the flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristic of PMV machines and provides a design method, which is able to not only meet the fault-tolerant requirements but also keep the ability of high torque density. The operation principle of the proposed machine has been analyzed. The design process and optimization are presented specifically, such as the combination of slots and poles, the winding distribution, and the dimensions of PMs and teeth. By using the time-stepping finite element method (TS-FEM), the machine performances are evaluated. Finally, the FT-PMV machine is manufactured, and the experimental results are presented to validate the theoretical analysis.

  18. The Deductibility of Work-Related Higher Education Costs: The Saga Continues

    ERIC Educational Resources Information Center

    Segal, Mark A.; Bird, Bruce M.

    2011-01-01

    Whether a taxpayer's work-related higher education costs are deductible under IRC (Internal Revenue Code) Section 162 is an issue highly dependent upon facts and circumstances. The regulations pursuant to IRC Section 162 and the emergence of case law on this topic constitute important elements to consider in making this determination.

  19. Study of Stand-Alone Microgrid under Condition of Faults on Distribution Line

    NASA Astrophysics Data System (ADS)

    Malla, S. G.; Bhende, C. N.

    2014-10-01

    The behavior of a stand-alone microgrid is analyzed under the condition of faults on the distribution feeders. Since the battery cannot maintain the dc-link voltage within limits during a fault, a resistive dump-load control is presented to do so. An inverter control is proposed to maintain balanced voltages at the PCC under unbalanced load conditions and to reduce the voltage unbalance factor (VUF) at load points. The proposed inverter control also protects the inverter from high fault currents. The existing maximum power point tracking (MPPT) algorithm is modified to limit the speed of the generator during faults. Extensive simulation results using MATLAB/SIMULINK establish that the performance of the controllers is quite satisfactory under different fault conditions as well as unbalanced load conditions.

  20. Fault seal analysis of Okan and Meren fields, Nigeria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eisenberg, R.A.; Brenneman, R.J.; Adeogba, A.A.

    The sealing capacity and the dynamic seal behavior of faults between juxtaposed reservoirs were analyzed for the Okan and Meren fields, offshore Nigeria. In both fields, correlations were found between reservoir performance, juxtaposed fluid types, oil geochemistry, interpreted fluid-contact relationships, fault sealing/leaking condition, and calculated smear gouge ratios. Integration of these data has been invaluable in quantifying fault seal risk and may affect depletion strategies for fault-juxtaposed reservoirs within these fields. Fault plane sections defined reservoir juxtapositions and aided visualization of potential cross-fault spill points. Smear gouge ratios calculated from E-logs were used to estimate the composition of fault-gouge materials between the juxtaposed reservoirs. These tools augmented the interpretation of seal/nonseal character based on fluid-contact relationships in proved reservoirs and, in addition, were used to quantify the fault seal risk of untested fault-dependent closures in Okan. The results of these analyses were then used to interpret production-induced fault seal breakdown within the G-sands and to assess the risk to seal integrity of fault-dependent closures within the untested O-sands in an adjacent, upthrown fault block. Within this fault block, the presence of potential fault-intersection leak points and large areas of sand/sand juxtaposition with high smear gouge ratios (low sealing potential) limits potential reserves within the O-sand package. In the Meren field, the E- and G-sands are juxtaposed, are on different pressure declines, are geochemically distinct, and are characterized by low smear gouge ratios. In contrast, specific G- and H-sands, juxtaposed across the same fault, contain similar OOWCs and are characterized by high smear gouge ratios. The cross-sealing and/or cross-leaking nature of compartment boundaries at Meren is related to fault displacement variation and the composition of the displaced stratigraphy.
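
    A back-of-envelope version of the gouge-composition estimate, following the convention implied above in which a high (sand-rich) smear gouge ratio means low sealing potential. The layering, the throw window, and the decision threshold are all invented for illustration.

```python
# Sand/shale "smear gouge ratio" for the interval that has slipped past a
# point on the fault; in the convention above, high (sand-rich) values imply
# low sealing potential.
layers = [("sand", 20.0), ("shale", 10.0), ("sand", 15.0)]   # (lithology, m)

sand  = sum(dz for lith, dz in layers if lith == "sand")
shale = sum(dz for lith, dz in layers if lith == "shale")
ratio = sand / shale

verdict = "low sealing potential" if ratio > 2.0 else "higher sealing potential"
print(f"smear gouge ratio = {ratio:.1f} -> {verdict}")
```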

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Sisi; Li, Yun; Levitt, Karl N.

    Consensus is a fundamental approach to implementing fault-tolerant services through replication, where there exists a tradeoff between cost and resilience. For instance, Crash Fault Tolerant (CFT) protocols have a low cost but can only handle crash failures, while Byzantine Fault Tolerant (BFT) protocols handle arbitrary failures but have a higher cost. Hybrid protocols enjoy the benefits of both high performance without failures and high resiliency under failures by switching among different subprotocols. However, it is challenging to determine which subprotocols should be used. We propose a moving target approach to switch among protocols according to the existing system and network vulnerability. At the core of our approach is a formalized cost model that evaluates the vulnerability and performance of consensus protocols based on real-time Intrusion Detection System (IDS) signals. Based on the evaluation results, we demonstrate that a safe, cheap, and unpredictable protocol is always used and that a high IDS error rate can be tolerated.

  2. The effect of biological movement variability on the performance of the golf swing in high- and low-handicapped players.

    PubMed

    Bradshaw, Elizabeth J; Keogh, Justin W L; Hume, Patria A; Maulder, Peter S; Nortje, Jacques; Marnewick, Michel

    2009-06-01

    The purpose of this study was to examine the role of neuromotor noise in golf swing performance in high- and low-handicap players. Selected two-dimensional kinematic measures of 20 male golfers (n=10 per high- or low-handicap group) performing 10 golf swings with a 5-iron club were obtained through video analysis. Neuromotor noise was calculated by deducting the standard error of measurement from the coefficient of variation obtained from intra-individual analysis. Statistical methods included linear regression analysis and one-way analysis of variance using SPSS. Absolute invariance in the key technical positions (e.g., at the top of the backswing) of the golf swing appears to be a more favorable technique for skilled performance.
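
    A small numeric sketch of the noise estimate described above, assuming the coefficient of variation and the standard error of measurement are both expressed as percentages of the trial mean; the swing data are invented for illustration:

        import numpy as np

        def neuromotor_noise(trials, measurement_error_sd):
            # Intra-individual variability (CV, %) minus the variability
            # attributable to measurement error (SEM expressed as a %CV).
            trials = np.asarray(trials, dtype=float)
            mean = trials.mean()
            cv = 100.0 * trials.std(ddof=1) / mean
            sem_cv = 100.0 * measurement_error_sd / mean
            return cv - sem_cv

        # Ten hypothetical clubhead-angle measurements (degrees) at the top of the backswing
        swings = [268.1, 270.4, 269.2, 271.0, 268.8, 270.1, 269.5, 270.7, 269.0, 270.3]
        print(f"neuromotor noise ~ {neuromotor_noise(swings, measurement_error_sd=0.5):.2f} %")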

  3. Multi-sensor information fusion method for vibration fault diagnosis of rolling bearing

    NASA Astrophysics Data System (ADS)

    Jiao, Jing; Yue, Jianhai; Pei, Di

    2017-10-01

    Bearing is a key element in high-speed electric multiple unit (EMU) and any defect of it can cause huge malfunctioning of EMU under high operation speed. This paper presents a new method for bearing fault diagnosis based on least square support vector machine (LS-SVM) in feature-level fusion and Dempster-Shafer (D-S) evidence theory in decision-level fusion which were used to solve the problems about low detection accuracy, difficulty in extracting sensitive characteristics and unstable diagnosis system of single-sensor in rolling bearing fault diagnosis. Wavelet de-nosing technique was used for removing the signal noises. LS-SVM was used to make pattern recognition of the bearing vibration signal, and then fusion process was made according to the D-S evidence theory, so as to realize recognition of bearing fault. The results indicated that the data fusion method improved the performance of the intelligent approach in rolling bearing fault detection significantly. Moreover, the results showed that this method can efficiently improve the accuracy of fault diagnosis.
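
    The decision-level step rests on Dempster's rule of combination. The following sketch shows that rule for a discrete frame of bearing-fault hypotheses; the mass assignments are invented for illustration and do not come from the paper:

        from itertools import product

        def dempster_combine(m1, m2):
            # Combine two basic probability assignments (dicts mapping
            # frozenset -> mass) and normalize out the conflicting mass.
            combined, conflict = {}, 0.0
            for (a, ma), (b, mb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + ma * mb
                else:
                    conflict += ma * mb
            if conflict >= 1.0:
                raise ValueError("total conflict: sources are incompatible")
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        OUTER, INNER = frozenset({"outer race"}), frozenset({"inner race"})
        EITHER = OUTER | INNER
        sensor1 = {OUTER: 0.6, INNER: 0.1, EITHER: 0.3}
        sensor2 = {OUTER: 0.5, INNER: 0.2, EITHER: 0.3}
        print(dempster_combine(sensor1, sensor2))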

  4. Perspectives from deductible plan enrollees: plan knowledge and anticipated care-seeking changes.

    PubMed

    Reed, Mary; Benedetti, Nancy; Brand, Richard; Newhouse, Joseph P; Hsu, John

    2009-12-29

    Consumer directed health care proposes that patients will engage as informed consumers of health care services by sharing in more of their medical costs, often through deductibles. We examined knowledge of deductible plan details among new enrollees, as well as anticipated care-seeking changes in response to the deductible. In a large integrated delivery system with a range of deductible-based health plans which varied in services included or exempted from deductible, we conducted a mixed-method, cross-sectional telephone interview study. Among 458 adults newly enrolled in a deductible plan (71% response rate), 51% knew they had a deductible, 26% knew the deductible amount, and 6% knew which medical services were included or exempted from their deductible. After adjusting for respondent characteristics, those with more deductible-applicable services and those with lower self-reported health status were significantly more likely to know they had a deductible. Among those who knew of their deductible, half anticipated that it would cause them to delay or avoid medical care, including avoiding doctor's office visits and medical tests, even services that they believed were medically necessary. Many expressed concern about their costs, anticipating the inability to afford care and expressing the desire to change plans. Early in their experience with a deductible, patients had limited awareness of the deductible and little knowledge of the details. Many who knew of the deductible reported that it would cause them to delay or avoid seeking care and were concerned about their healthcare costs.

  5. Modeling and measurement of fault-tolerant multiprocessors

    NASA Technical Reports Server (NTRS)

    Shin, K. G.; Woodbury, M. H.; Lee, Y. H.

    1985-01-01

    The workload effects on computer performance are addressed first for a highly reliable unibus multiprocessor used in real-time control. As an approach to studying these effects, a modified Stochastic Petri Net (SPN) is used to describe the synchronous operation of the multiprocessor system. From this model the vital components affecting performance can be determined. However, because of the complexity in solving the modified SPN, a simpler model, i.e., a closed priority queuing network, is constructed that represents the same critical aspects. The use of this model for a specific application requires the partitioning of the workload into job classes. It is shown that the steady state solution of the queuing model directly produces useful results. The use of this model in evaluating an existing system, the Fault Tolerant Multiprocessor (FTMP) at the NASA AIRLAB, is outlined with some experimental results. Also addressed is the technique of measuring fault latency, an important microscopic system parameter. Most related works have assumed no or a negligible fault latency and then performed approximate analyses. To eliminate this deficiency, a new methodology for indirectly measuring fault latency is presented.

  6. A Novel Dual Separate Paths (DSP) Algorithm Providing Fault-Tolerant Communication for Wireless Sensor Networks.

    PubMed

    Tien, Nguyen Xuan; Kim, Semog; Rhee, Jong Myung; Park, Sang Yoon

    2017-07-25

    Fault tolerance has long been a major concern for sensor communications in fault-tolerant cyber physical systems (CPSs). Network failure problems often occur in wireless sensor networks (WSNs) due to various factors such as the insufficient power of sensor nodes, the dislocation of sensor nodes, the unstable state of wireless links, and unpredictable environmental interference. Fault tolerance is thus one of the key requirements for data communications in WSN applications. This paper proposes a novel path redundancy-based algorithm, called dual separate paths (DSP), that provides fault-tolerant communication with the improvement of the network traffic performance for WSN applications, such as fault-tolerant CPSs. The proposed DSP algorithm establishes two separate paths between a source and a destination in a network based on the network topology information. These paths are node-disjoint paths and have optimal path distances. Unicast frames are delivered from the source to the destination in the network through the dual paths, providing fault-tolerant communication and reducing redundant unicast traffic for the network. The DSP algorithm can be applied to wired and wireless networks, such as WSNs, to provide seamless fault-tolerant communication for mission-critical and life-critical applications such as fault-tolerant CPSs. The analyzed and simulated results show that the DSP-based approach not only provides fault-tolerant communication, but also improves network traffic performance. For the case study in this paper, when the DSP algorithm was applied to high-availability seamless redundancy (HSR) networks, the proposed DSP-based approach reduced the network traffic by 80% to 88% compared with the standard HSR protocol, thus improving network traffic performance.
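
    As a rough illustration of the node-disjoint idea (not the DSP algorithm itself), the sketch below greedily takes one shortest path, blocks its interior nodes, and searches again; a flow-based method, or DSP's own construction, can find disjoint pairs this greedy pass misses. The toy topology is invented:

        from collections import deque

        def bfs_shortest_path(adj, src, dst, blocked=frozenset()):
            # Breadth-first shortest path avoiding 'blocked' nodes; None if unreachable.
            prev, seen, q = {}, {src}, deque([src])
            while q:
                u = q.popleft()
                if u == dst:
                    path = [dst]
                    while path[-1] != src:
                        path.append(prev[path[-1]])
                    return path[::-1]
                for v in adj.get(u, ()):
                    if v not in seen and v not in blocked:
                        seen.add(v)
                        prev[v] = u
                        q.append(v)
            return None

        def dual_separate_paths(adj, src, dst):
            p1 = bfs_shortest_path(adj, src, dst)
            if p1 is None:
                return None, None
            p2 = bfs_shortest_path(adj, src, dst, blocked=frozenset(p1[1:-1]))
            return p1, p2

        adj = {"S": ["A", "C"], "A": ["S", "B"], "B": ["A", "D"],
               "C": ["S", "D"], "D": ["B", "C"]}
        print(dual_separate_paths(adj, "S", "D"))  # (['S', 'C', 'D'], ['S', 'A', 'B', 'D'])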

  7. Validation environment for AIPS/ALS: Implementation and results

    NASA Technical Reports Server (NTRS)

    Segall, Zary; Siewiorek, Daniel; Caplan, Eddie; Chung, Alan; Czeck, Edward; Vrsalovic, Dalibor

    1990-01-01

    The work performed in porting the Fault Injection-based Automated Testing (FIAT) and Programming and Instrumentation Environments (PIE) validation tools to the Advanced Information Processing System (AIPS), in the context of the Ada Language System (ALS) application, is presented, together with an initial fault-free validation of the available AIPS system. The PIE components implemented on AIPS provide the monitoring mechanisms required for validation. These mechanisms represent a substantial portion of the FIAT system and are required for the implementation of the FIAT environment on AIPS. Using these components, an initial fault-free validation of the AIPS system was performed. The implementation of the FIAT/PIE system, configured for fault-free validation of the AIPS fault-tolerant computer system, is described. The PIE components were modified to support the Ada language. A special-purpose AIPS/Ada runtime monitoring and data collection facility was implemented, and a number of initial Ada programs running on the PIE/AIPS system were written. The instrumentation of the Ada programs was accomplished automatically inside the PIE programming environment. PIE's on-line graphical views show vividly and accurately the performance characteristics of Ada programs, the AIPS kernel, and the application's interaction with the AIPS kernel. The data collection mechanisms were written in a high-level language, Ada, and provide a high degree of flexibility for implementation under various system conditions.

  8. Development and evaluation of a fault-tolerant multiprocessor (FTMP) computer. Volume 4: FTMP executive summary

    NASA Technical Reports Server (NTRS)

    Smith, T. B., III; Lala, J. H.

    1984-01-01

    The FTMP architecture is a high-reliability computer concept modeled after a homogeneous multiprocessor architecture. Elements of the FTMP are operated in tight synchronism with one another, and hardware fault detection and fault masking are provided in a way that is transparent to the software. Operating system design and user software design are thus greatly simplified. Performance of the FTMP is also comparable to that of a simplex equivalent due to the efficiency of the fault-handling hardware. The FTMP project constructed an engineering module of the FTMP, programmed the machine, and extensively tested the architecture through fault injection and other stress testing. This testing confirmed the soundness of the FTMP concepts.

  9. Overview and First Results of an In-situ Stimulation Experiment in Switzerland

    NASA Astrophysics Data System (ADS)

    Amann, F.; Gischig, V.; Doetsch, J.; Jalali, M.; Valley, B.; Evans, K. F.; Krietsch, H.; Dutler, N.; Villiger, L.

    2017-12-01

    A decameter-scale in-situ stimulation and circulation (ISC) experiment is currently being conducted at the Grimsel Test Site in Switzerland with the objective of improving our understanding of key seismo-hydro-mechanical coupled processes associated with high-pressure fluid injections in a moderately fractured crystalline rock mass. The ISC experiment activities aim to support the development of EGS technology by 1) advancing the understanding of fundamental processes that occur within the rock mass in response to relatively large-volume fluid injections at high pressures, 2) improving the ability to estimate and model induced seismic hazard and risk, 3) assessing the potential of different injection protocols to keep seismic event magnitudes below an acceptable threshold, 4) developing novel monitoring and imaging techniques for pressure, temperature, stress, strain and displacement as well as geophysical methods such as ground-penetrating radar and passive and active seismics, and 5) generating high-quality benchmark datasets that facilitate the development and validation of numerical modelling tools. The ISC experiment includes six fault-slip and five hydraulic fracturing experiments at an intermediate scale (i.e., 20*20*20 m) at 480 m depth, which allows high-resolution monitoring of the evolution of pore pressure in the stimulated fault zone and the surrounding rock matrix, of fault dislocations including shear and dilation, and of micro-seismicity in an exceptionally well characterized structural setting. In February 2017 we performed the fault-slip experiments on interconnected faults, followed by an intense phase of post-stimulation hydraulic characterization. In May 2017 we performed hydraulic fracturing tests within test intervals that were free of natural fractures. In this contribution we give an overview and show first results of the above-mentioned stimulation tests.

  10. 26 CFR 15.1-1 - Elections to deduct.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    § 15.1-1 Elections to deduct. (a) Manner of making election—(1) Election to deduct under section 617(a). The election... (2) Election to deduct under section 615—(i) General rule. The election to deduct exploration...

  11. 26 CFR 20.2053-10 - Deduction for certain foreign death taxes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    § 20.2053-10 Deduction for certain foreign death taxes. (a) General rule. A deduction is allowed the... for foreign death taxes. (b) Condition for allowance of deduction. (1) The deduction is not allowed...

  12. 26 CFR 20.2053-10 - Deduction for certain foreign death taxes.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    § 20.2053-10 Deduction for certain foreign death taxes. (a) General rule. A deduction is allowed the... for foreign death taxes. (b) Condition for allowance of deduction. (1) The deduction is not allowed...

  13. 26 CFR 20.2053-10 - Deduction for certain foreign death taxes.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    § 20.2053-10 Deduction for certain foreign death taxes. (a) General rule. A deduction is allowed the... for foreign death taxes. (b) Condition for allowance of deduction. (1) The deduction is not allowed...

  14. Premium growth and its effect on employer-sponsored insurance.

    PubMed

    Vistnes, Jessica; Selden, Thomas

    2011-03-01

    We use variation in premium inflation and general inflation across geographic areas to identify the effects of downward nominal wage rigidity on employers' health insurance decisions. Using employer level data from the 2000 to 2005 Medical Expenditure Panel Survey-Insurance Component, we examine the effect of premium growth on the likelihood that an employer offers insurance, eligibility rates among employees, continuous measures of employee premium contributions for both single and family coverage, and deductibles. We find that small, low-wage employers are less likely to offer health insurance in response to increased premium inflation, and if they do offer coverage they increase employee contributions and deductible levels. In contrast, larger, low-wage employers maintain their offers of coverage, but reduce eligibility for such coverage. They also increase employee contributions for single and family coverage, but not deductibles. Among high-wage employers, all but the largest increase deductibles in response to cost pressures.

  15. SABRE: a bio-inspired fault-tolerant electronic architecture.

    PubMed

    Bremner, P; Liu, Y; Samie, M; Dragffy, G; Pipe, A G; Tempesti, G; Timmis, J; Tyrrell, A M

    2013-03-01

    As electronic devices become increasingly complex, ensuring their reliable, fault-free operation is becoming correspondingly more challenging. It can be observed that, in spite of their complexity, biological systems are highly reliable and fault tolerant. Hence, we are motivated to take inspiration from biological systems in the design of electronic ones. In SABRE (self-healing cellular architectures for biologically inspired highly reliable electronic systems), we have designed a bio-inspired fault-tolerant hierarchical architecture for this purpose. As in biology, the foundation for the whole system is cellular in nature, with each cell able to detect faults in its operation and trigger intra-cellular or extra-cellular repair as required. At the next level in the hierarchy, arrays of cells are configured and controlled as function units in a transport-triggered architecture (TTA), which is able to perform partial dynamic reconfiguration to rectify problems that cannot be solved at the cellular level. Each TTA is, in turn, part of a larger multi-processor system which employs coarser-grain reconfiguration to tolerate faults that cause a processor to fail. In this paper, we describe the details of operation of each layer of the SABRE hierarchy, and how these layers interact to provide a high systemic level of fault tolerance.

  16. Negative Selection Algorithm for Aircraft Fault Detection

    NASA Technical Reports Server (NTRS)

    Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.

    2004-01-01

    We investigated a real-valued Negative Selection Algorithm (NSA) for fault detection in man-in-the-loop aircraft operation. The detection algorithm uses body-axes angular rate sensory data exhibiting normal flight behavior patterns to probabilistically generate a set of fault detectors that can detect abnormalities (including faults and damage) in the behavior of the aircraft in flight. We performed experiments with datasets (collected under normal and various simulated failure conditions) using the NASA Ames man-in-the-loop high-fidelity C-17 flight simulator. The paper provides results of experiments with different datasets representing various failure conditions.
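
    A minimal sketch of a real-valued negative selection scheme of the kind described above: detectors are sampled in a normalized feature space and kept only if they do not cover any 'self' (normal flight) sample. The radii, dimensionality, and data are illustrative, not the paper's settings:

        import numpy as np

        rng = np.random.default_rng(0)

        def generate_detectors(normal, n_detectors=200, self_radius=0.1, dim=2):
            # Keep random candidates farther than self_radius from all self samples.
            detectors = []
            while len(detectors) < n_detectors:
                cand = rng.random(dim)
                if np.min(np.linalg.norm(normal - cand, axis=1)) > self_radius:
                    detectors.append(cand)
            return np.array(detectors)

        def is_anomalous(x, detectors, detector_radius=0.1):
            # Flag x if any detector covers it.
            return bool(np.min(np.linalg.norm(detectors - x, axis=1)) < detector_radius)

        # Toy 'normal flight' cluster of normalized angular-rate features
        normal = 0.5 + 0.05 * rng.standard_normal((500, 2))
        dets = generate_detectors(normal)
        print(is_anomalous(np.array([0.50, 0.52]), dets))  # near self: expected False
        print(is_anomalous(np.array([0.90, 0.10]), dets))  # far from self: expected True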

  17. Optimizing the Reliability and Performance of Service Composition Applications with Fault Tolerance in Wireless Sensor Networks

    PubMed Central

    Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang

    2015-01-01

    Services composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help services composition technology promote the practical application of WSNs. The optimization methods for reliability and performance used for traditional software systems are mostly based on the instantiations of software components, which are inapplicable and inefficient in the ever-changing SCAs in WSNs. In this paper, we consider SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF), we propose a reliability and performance model of SCAs in WSNs, which generalizes a redundancy optimization problem to a multi-state system. Based on this model, an efficient optimization algorithm for the reliability and performance of SCAs in WSNs is developed based on a Genetic Algorithm (GA) to find the optimal structure of SCAs with fault tolerance in WSNs. In order to examine the feasibility of our algorithm, we evaluated its performance. Furthermore, the interrelationships between reliability, performance and cost are investigated. In addition, a distinct approach to determining the most suitable parameters in the suggested algorithm is proposed. PMID:26561818

  18. Magnetometric and gravimetric surveys in fault detection over Acambay System

    NASA Astrophysics Data System (ADS)

    García-Serrano, A.; Sanchez-Gonzalez, J.; Cifuentes-Nava, G.

    2013-05-01

    In commemoration of the centennial of the Acambay intraplate earthquake of November 19th, 1912, we carried out gravimetric and magnetometric surveys to define the structure of the faults associated with this event. The study area is located approximately 11 km south of Acambay, in the Acambay-Tixmadeje fault system, where we performed two magnetometric surveys, the first consisting of 17 lines with a spacing of 35 m between lines and 5 m between stations, and the second with a total of 12 lines with the same spacing, both oriented NW. In addition, we performed gravimetric profiles located along the central part of each magnetometric survey, with a spacing of 25 m between stations, in order to correlate the results of both techniques; the lengths of these profiles were 600 m and 550 m, respectively. This work describes the data processing, including directional derivatives, the analytic signal, and inversion, by means of which we obtain magnetic variations and anomaly traits highly correlated with these faults. Characterizing these faults is of great importance given the large population growth in the area and the houses settled on them, which poses a high risk to the population: these are active faults, earthquakes associated with them cannot be discarded, and the authorities and residents therefore need relevant information on this problem.
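
    For profile magnetic data, the analytic-signal step mentioned above can be sketched as follows: the vertical derivative of the field is the Hilbert transform of the horizontal derivative, so the analytic-signal amplitude is the envelope of dT/dx and peaks over abrupt magnetization contrasts such as fault edges. The anomaly below is synthetic, not survey data:

        import numpy as np
        from scipy.signal import hilbert

        def analytic_signal_amplitude(total_field, dx):
            # |A| = sqrt((dT/dx)^2 + (dT/dz)^2); on a profile, dT/dz is the
            # Hilbert transform of dT/dx, so |A| is the envelope of dT/dx.
            dTdx = np.gradient(total_field, dx)
            return np.abs(hilbert(dTdx))

        x = np.linspace(-300.0, 300.0, 601)      # station positions (m), synthetic
        T = 50.0 * np.tanh(x / 40.0)             # smoothed total-field step (nT)
        amp = analytic_signal_amplitude(T, dx=x[1] - x[0])
        print(f"|A| peaks at x = {x[np.argmax(amp)]:.0f} m")  # ~0, over the contact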

  19. Large transient fault current test of an electrical roll ring

    NASA Technical Reports Server (NTRS)

    Yenni, Edward J.; Birchenough, Arthur G.

    1992-01-01

    The space station uses precision rotary gimbals to provide for sun tracking of its photoelectric arrays. Electrical power, command signals and data are transferred across the gimbals by roll rings. Roll rings have been shown to be capable of highly efficient electrical transmission and long life through tests conducted at the NASA Lewis Research Center and Honeywell's Satellite and Space Systems Division in Phoenix, AZ. Large potential fault currents inherent to the power system's DC distribution architecture have brought about the need to evaluate the effects of large transient fault currents on roll rings. A test recently conducted at Lewis subjected a roll ring to a simulated worst-case space station electrical fault. The system model used to obtain the fault profile is described, along with details of the reduced-order circuit that was used to simulate the fault. Test results comparing roll ring performance before and after the fault are also presented.

  20. Periodic Application of Concurrent Error Detection in Processor Array Architectures. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, Paul Peichuan

    1993-01-01

    Processor arrays can provide an attractive architecture for some applications. Featuring modularity, regular interconnection and high parallelism, such arrays are well-suited for VLSI/WSI implementations, and applications with high computational requirements, such as real-time signal processing. Preserving the integrity of results can be of paramount importance for certain applications. In these cases, fault tolerance should be used to ensure reliable delivery of a system's service. One aspect of fault tolerance is the detection of errors caused by faults. Concurrent error detection (CED) techniques offer the advantage that transient and intermittent faults may be detected with greater probability than with off-line diagnostic tests. Applying time-redundant CED techniques can reduce hardware redundancy costs. However, most time-redundant CED techniques degrade a system's performance.

  1. Multiple resolution chirp reflectometry for fault localization and diagnosis in a high voltage cable in automotive electronics

    NASA Astrophysics Data System (ADS)

    Chang, Seung Jin; Lee, Chun Ku; Shin, Yong-June; Park, Jin Bae

    2016-12-01

    A multiple chirp reflectometry system with a fault estimation process is proposed to obtain multiple resolution and to measure the degree of fault in a target cable. The multiple resolution algorithm can localize faults regardless of fault location. The time delay information, which is derived from the normalized cross-correlation between the incident signal and bandpass-filtered reflected signals, is converted to a fault location and cable length. The in-phase and quadrature components are obtained by lowpass filtering the mixed signal of the incident signal and the reflected signal. Based on the in-phase and quadrature components, the reflection coefficient is estimated by the proposed fault estimation process, including the mixing and filtering procedure. Also, the measurement uncertainty for this experiment is analyzed according to the Guide to the Expression of Uncertainty in Measurement. To verify the performance of the proposed method, we conduct comparative experiments to detect and measure faults under different conditions. The target cable length and fault positions were chosen to reflect the installation environment of the high voltage cable used in an actual vehicle. To simulate the degree of fault, a variety of termination impedances (10 Ω, 30 Ω, 50 Ω, and 1 kΩ) are used and estimated by the proposed method. The proposed method demonstrates advantages in that it has multiple resolution to overcome the blind spot problem and can assess the state of the fault.
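
    The core localization step, converting the cross-correlation lag into a distance, can be sketched as follows; the sampling rate, propagation velocity, and simulated chirp are illustrative rather than the paper's experimental values:

        import numpy as np

        def fault_distance(incident, reflected, fs, v_prop):
            # Lag maximizing the cross-correlation gives the round-trip delay;
            # the wave travels to the fault and back, hence the division by two.
            xcorr = np.correlate(reflected, incident, mode="full")
            lag = np.argmax(xcorr) - (len(incident) - 1)   # samples
            return v_prop * (lag / fs) / 2.0               # metres

        fs = 1e9                          # 1 GS/s sampling (illustrative)
        v = 2e8                           # propagation velocity ~0.67c (illustrative)
        t = np.arange(0, 2e-6, 1 / fs)
        chirp = np.sin(2 * np.pi * (1e6 + 5e12 * t) * t) * (t < 5e-7)
        echo = 0.4 * np.roll(chirp, 150)  # simulated 150-sample round trip
        print(f"estimated fault at ~{fault_distance(chirp, echo, fs, v):.1f} m")  # ~15 m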

  2. Deductive Evaluation: Implicit Code Verification With Low User Burden

    NASA Technical Reports Server (NTRS)

    Di Vito, Ben L.

    2016-01-01

    We describe a framework for symbolically evaluating C code using a deductive approach that discovers and proves program properties. The framework applies Floyd-Hoare verification principles in its treatment of loops, with a library of iteration schemes serving to derive loop invariants. During evaluation, theorem proving is performed on-the-fly, obviating the generation of verification conditions normally needed to establish loop properties. A PVS-based prototype is presented along with results for sample C functions.
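
    The Floyd-Hoare treatment of loops can be made concrete with a small example. The sketch below merely checks an invariant at runtime with assertions, whereas the framework described above would discharge it by on-the-fly theorem proving; the function is illustrative, not taken from the paper:

        def sum_upto(n: int) -> int:
            # Invariant: at the top of each iteration, total == i * (i - 1) // 2,
            # i.e. total holds the sum of 0..i-1. The invariant plus the exit
            # condition (i == n + 1) yields the postcondition.
            assert n >= 0                             # precondition
            total, i = 0, 0
            while i <= n:
                assert total == i * (i - 1) // 2      # loop invariant
                total += i
                i += 1
            assert total == n * (n + 1) // 2          # postcondition
            return total

        print(sum_upto(10))  # 55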

  3. Deductive reasoning, brain maturation, and science concept acquisition: Are they linked?

    NASA Astrophysics Data System (ADS)

    Lawson, Anton E.

    The present study tested the alternative hypotheses that the poor performance of the intuitive and transitional students on the concept acquisition tasks employed in the Lawson et al. (1991) study was due either to their failure (a) to use deductive reasoning to test potentially relevant task features, as suggested by Lawson et al. (1991); (b) to identify potentially relevant features; or (c) to derive and test a successful problem-solving strategy. To test these hypotheses a training session, which consisted of a series of seven concept acquisition tasks, was designed to reveal to students key task features and the deductive reasoning pattern necessary to solve the tasks. The training was individually administered to students (ages 5-14 years). Results revealed that none of the five- and six-year-olds, approximately half of the seven-year-olds, and virtually all of the students eight years and older responded successfully to the training. These results are viewed as contradictory to the hypothesis that the intuitive and transitional students in the Lawson et al. (1991) study lacked the reasoning skills necessary to identify and test potentially relevant task features. Instead, the results support the hypothesis that their poor performance was due to their failure to use hypothetico-deductive reasoning to derive an effective strategy. Previous research is cited that indicates that the brain's frontal lobes undergo a pronounced growth spurt from about four years of age to about seven years of age. In fact, the performance of normal six-year-olds and adults with frontal lobe damage on tasks such as the Wisconsin Card Sorting Task (WCST), a task similar in many ways to the present concept acquisition tasks, has been found to be identical. Consequently, the hypothesis is advanced that maturation of the frontal lobes can explain the striking improvement in performance at age seven. A neural network model of the role of the frontal lobes in task performance, based upon the work of Levine and Prueitt (1989), is presented. The advance in reasoning that presumably results from effective operation of the frontal lobes is seen as a fundamental advance in intellectual development because it enables children to employ an inductive-deductive reasoning pattern to change their minds when confronted with contradictory evidence regarding features of perceptible objects, a skill necessary for descriptive concept acquisition. It is suggested that a further qualitative advance in intellectual development occurs when an analogous pattern of abductive-deductive reasoning is applied to hypothetical objects and/or processes to allow for alternative hypothesis testing and theoretical concept acquisition. Apparently this is the reasoning pattern needed to derive an effective problem-solving strategy to solve the concept acquisition tasks of Lawson et al. (1991) when direct instruction is not provided. Implications for the science classroom are suggested.

  4. Fault tolerant features and experiments of ANTS distributed real-time system

    NASA Astrophysics Data System (ADS)

    Dominic-Savio, Patrick; Lo, Jien-Chung; Tufts, Donald W.

    1995-01-01

    The ANTS project at the University of Rhode Island introduces the concept of Active Nodal Task Seeking (ANTS) as a way to efficiently design and implement dependable, high-performance, distributed computing. This paper presents the fault tolerant design features that have been incorporated in the ANTS experimental system implementation. The results of performance evaluations and fault injection experiments are reported. The fault-tolerant version of ANTS categorizes all computing nodes into three groups. They are: the up-and-running green group, the self-diagnosing yellow group and the failed red group. Each available computing node will be placed in the yellow group periodically for a routine diagnosis. In addition, for long-life missions, ANTS uses a monitoring scheme to identify faulty computing nodes. In this monitoring scheme, the communication pattern of each computing node is monitored by two other nodes.

  5. Uncertain deduction and conditional reasoning.

    PubMed

    Evans, Jonathan St B T; Thompson, Valerie A; Over, David E

    2015-01-01

    There has been a paradigm shift in the psychology of deductive reasoning. Many researchers no longer think it is appropriate to ask people to assume premises and decide what necessarily follows, with the results evaluated by binary extensional logic. Most everyday and scientific inference is made from more or less confidently held beliefs and not assumptions, and the relevant normative standard is Bayesian probability theory. We argue that the study of "uncertain deduction" should directly ask people to assign probabilities to both premises and conclusions, and we report an experiment using this method. We assess this reasoning by two Bayesian metrics: probabilistic validity and coherence according to probability theory. On both measures, participants perform above chance in conditional reasoning, but they do much better when statements are grouped as inferences rather than evaluated in separate tasks.
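
    The probabilistic-validity metric can be stated compactly: an inference is coherent only if the uncertainty of the conclusion (one minus its probability) does not exceed the summed uncertainties of the premises. A minimal check of one response set, with invented probabilities:

        def p_valid_check(premise_probs, conclusion_prob):
            # Uncertainty bound: u(conclusion) <= sum of u(premise_i),
            # where u(p) = 1 - p.
            bound = sum(1.0 - p for p in premise_probs)
            return (1.0 - conclusion_prob) <= bound

        # Modus ponens with confidently held premises:
        # P(if A then C) = 0.9, P(A) = 0.8  =>  conclusion uncertainty at most 0.3
        print(p_valid_check([0.9, 0.8], conclusion_prob=0.75))  # True: coherent
        print(p_valid_check([0.9, 0.8], conclusion_prob=0.50))  # False: violates the bound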

  6. Exercises for Bringing the Hypothetico-Deductive Method to Life

    ERIC Educational Resources Information Center

    Romesburg, H. Charles

    2014-01-01

    This article explains four kinds of inquiry exercises, different in purpose, for teaching advanced-level high school and college students the hypothetico-deductive (H-D) method. The first uses a picture of a river system to convey the H-D method's logic. The second has teams of students use the H-D method: their teacher poses a hypothesis…

  7. Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data

    USGS Publications Warehouse

    Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.

    2015-01-01

    We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
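
    The analytical Bayesian update at the heart of such a scheme is, for a fixed fault geometry, the closed-form posterior of a linear-Gaussian model. A sketch with invented matrices; in the real problem the design matrix would come from elastic Green's functions relating slip to surface offsets:

        import numpy as np

        def gaussian_posterior(G, d, Cd, m0, Cm):
            # Posterior mean/covariance for d = G m + noise, with prior
            # m ~ N(m0, Cm) and noise ~ N(0, Cd).
            Cd_inv = np.linalg.inv(Cd)
            Cm_inv = np.linalg.inv(Cm)
            post_cov = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
            post_mean = post_cov @ (G.T @ Cd_inv @ d + Cm_inv @ m0)
            return post_mean, post_cov

        rng = np.random.default_rng(1)
        G = rng.standard_normal((6, 3))          # 6 GPS offsets, 3 slip parameters (toy)
        m_true = np.array([1.0, 0.5, -0.2])
        d = G @ m_true + 0.01 * rng.standard_normal(6)
        m_hat, C_hat = gaussian_posterior(G, d, 1e-4 * np.eye(6), np.zeros(3), np.eye(3))
        print(np.round(m_hat, 3))                # close to m_true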

  8. Nuclear Power Plant Thermocouple Sensor-Fault Detection and Classification Using Deep Learning and Generalized Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Mandal, Shyamapada; Santhi, B.; Sridhar, S.; Vinolia, K.; Swaminathan, P.

    2017-06-01

    In this paper, an online fault detection and classification method is proposed for thermocouples used in nuclear power plants. In the proposed method, fault data are detected by the classification method, which separates the fault data from the normal data. A deep belief network (DBN), a deep learning technique, is applied to classify the fault data. The DBN has a multilayer feature extraction scheme, which is highly sensitive to small variations in the data. Since the classification method alone cannot identify which sensor is faulty, a technique is proposed to identify the faulty sensor from the fault data. Finally, a composite statistical hypothesis test, namely the generalized likelihood ratio test, is applied to compute the fault pattern of the faulty sensor signal based on the magnitude of the fault. The performance of the proposed method is validated with field data obtained from thermocouple sensors of the fast breeder test reactor.
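
    For a Gaussian signal of known variance, the generalized likelihood ratio test for a mean shift reduces to a closed form. A sketch under that simplifying assumption (the paper's fault patterns are more general); the readings are simulated, not reactor data:

        import numpy as np
        from scipy.stats import chi2

        def glrt_mean_shift(window, mu0, sigma, alpha=0.01):
            # 2*log-GLR = N * (xbar - mu0)^2 / sigma^2, chi-square(1)
            # distributed under the no-fault hypothesis.
            window = np.asarray(window, dtype=float)
            stat = len(window) * (window.mean() - mu0) ** 2 / sigma**2
            return stat, bool(stat > chi2.ppf(1.0 - alpha, df=1))

        rng = np.random.default_rng(2)
        healthy = 500.0 + 2.0 * rng.standard_normal(50)  # simulated readings
        drifted = healthy + 1.5                          # simulated bias fault
        print(glrt_mean_shift(healthy, mu0=500.0, sigma=2.0))  # small stat: typically no alarm
        print(glrt_mean_shift(drifted, mu0=500.0, sigma=2.0))  # large stat: alarm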

  9. A wideband magnetoresistive sensor for monitoring dynamic fault slip in laboratory fault friction experiments

    USGS Publications Warehouse

    Kilgore, Brian D.

    2017-01-01

    A non-contact, wideband method of sensing dynamic fault slip in laboratory geophysical experiments employs an inexpensive magnetoresistive sensor, a small neodymium rare earth magnet, and user-built, application-specific wideband signal conditioning. The magnetoresistive sensor generates a voltage proportional to the changing angles of magnetic flux lines, generated by differential motion or rotation of the nearby magnet, through the sensor. The performance of an array of these sensors compares favorably to other conventional position sensing methods employed at multiple locations along a 2 m long × 0.4 m deep laboratory strike-slip fault. For these magnetoresistive sensors, the lack of resonance signals commonly encountered with cantilever-type position sensor mounting, the wideband response (DC to ≈ 100 kHz) that exceeds the capabilities of many traditional position sensors, and the small space required on the sample make them attractive options for capturing high-speed fault slip measurements in these laboratory experiments. An unanticipated observation of this study is the apparent sensitivity of this sensor to high-frequency electromagnetic signals associated with fault rupture and (or) rupture propagation, which may offer new insights into the physics of earthquake faulting.

  10. Design and Experimental Validation for Direct-Drive Fault-Tolerant Permanent-Magnet Vernier Machines

    PubMed Central

    Liu, Guohai; Yang, Junqin; Chen, Ming; Chen, Qian

    2014-01-01

    A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines have the ability of high torque density by introducing the flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristic of PMV machines and provides a design method, which is able to not only meet the fault-tolerant requirements but also keep the ability of high torque density. The operation principle of the proposed machine has been analyzed. The design process and optimization are presented specifically, such as the combination of slots and poles, the winding distribution, and the dimensions of PMs and teeth. By using the time-stepping finite element method (TS-FEM), the machine performances are evaluated. Finally, the FT-PMV machine is manufactured, and the experimental results are presented to validate the theoretical analysis. PMID:25045729

  11. Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Hitt, E. F.; Bridgman, M. S.; Robinson, A. C.

    1981-01-01

    Performability analysis is a technique developed for evaluating the effectiveness of fault-tolerant computing systems in multiphase missions. Performability was evaluated for its accuracy, practical usefulness, and relative cost. The evaluation was performed by applying performability and the fault tree method to a set of sample problems ranging from simple to moderately complex. The problems involved as many as five outcomes, two to five mission phases, permanent faults, and some functional dependencies. Transient faults and software errors were not considered. A different analyst was responsible for each technique. Significantly more time and effort were required to learn performability analysis than the fault tree method. Performability is inherently as accurate as fault tree analysis. For the sample problems, fault trees were more practical and less time consuming to apply, while performability required less ingenuity and was more checkable. Performability offers some advantages for evaluating very complex problems.

  12. Fuzzy logic based on-line fault detection and classification in transmission line.

    PubMed

    Adhikari, Shuma; Sinha, Nidul; Dorendrajit, Thingam

    2016-01-01

    This study presents fuzzy logic based online fault detection and classification of transmission lines using Programmable Automation and Control technology based on National Instrument Compact Reconfigurable I/O (CRIO) devices. The LabVIEW software combined with CRIO can perform real-time data acquisition of the transmission line. When a fault occurs in the system, the current waveforms are distorted due to transients and their pattern changes according to the type of fault. The three-phase alternating current, zero-sequence and positive-sequence current data generated by LabVIEW through the CRIO-9067 are processed directly for relaying. The results show that the proposed technique is capable of correct tripping action and classification of the fault type at high speed, and can therefore be employed in practical applications.

  13. A benchmark for fault tolerant flight control evaluation

    NASA Astrophysics Data System (ADS)

    Smaili, H.; Breeman, J.; Lombaerts, T.; Stroosma, O.

    2013-12-01

    A large transport aircraft simulation benchmark (REconfigurable COntrol for Vehicle Emergency Return - RECOVER) has been developed within the GARTEUR (Group for Aeronautical Research and Technology in Europe) Flight Mechanics Action Group 16 (FM-AG(16)) on Fault Tolerant Control (2004-2008) for the integrated evaluation of fault detection and identification (FDI) and reconfigurable flight control strategies. The benchmark includes a suitable set of assessment criteria and failure cases, based on reconstructed accident scenarios, to assess the potential of new adaptive control strategies to improve aircraft survivability. The application of reconstruction and modeling techniques, based on accident flight data, has resulted in high-fidelity nonlinear aircraft and fault models to evaluate new Fault Tolerant Flight Control (FTFC) concepts and their real-time performance to accommodate in-flight failures.

  14. An intelligent fault diagnosis method of rolling bearings based on regularized kernel Marginal Fisher analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Li; Shi, Tielin; Xuan, Jianping

    2012-05-01

    Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. Thus, it is a significant challenge to extract optimal features for improving classification while simultaneously decreasing feature dimension. Kernel Marginal Fisher analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small sample size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. So as to directly extract nonlinear features from the original high-dimensional vibration signals, RKMFA constructs two graphs describing the intra-class compactness and the inter-class separability, by combining a traditional manifold learning algorithm with Fisher criteria. Therefore, the optimal low-dimensional features are obtained for better classification and finally fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves the fault classification performance and outperforms the other conventional approaches.

  15. Shallow lithological structure across the Dead Sea Transform derived from geophysical experiments

    USGS Publications Warehouse

    Stankiewicz, J.; Munoz, G.; Ritter, O.; Bedrosian, P.A.; Ryberg, T.; Weckmann, U.; Weber, M.

    2011-01-01

    In the framework of the DEad SEa Rift Transect (DESERT) project, a 150 km magnetotelluric profile consisting of 154 sites was carried out across the Dead Sea Transform. The resistivity model presented shows conductive structures in the western section of the study area terminating abruptly at the Arava Fault. For a more detailed analysis we performed a joint interpretation of the resistivity model with a P wave velocity model from a partially coincident seismic experiment. The technique used is a statistical correlation of resistivity and velocity values in parameter space. Regions of high probability of a coexisting pair of values for the two parameters are mapped back into the spatial domain, illustrating the geographical location of lithological classes. In this study, four regions of enhanced probability have been identified and remapped as four lithological classes. This technique confirms that the Arava Fault marks the boundary of a highly conductive lithological class down to a depth of ∼3 km. That the fault acts as an impermeable barrier to fluid flow is unusual for a large fault, which often exhibits a fault zone characterized by high conductivity and low seismic velocity. At greater depths it is possible to resolve the Precambrian basement into two classes characterized by vastly different resistivity values but similar seismic velocities. The boundary between these classes is approximately coincident with the Al Quweira Fault, with higher resistivities observed east of the fault. This is interpreted as evidence that deformation along the DST originally took place at the Al Quweira Fault before being shifted to the Arava Fault.

  16. Multi-Fault Diagnosis of Rolling Bearings via Adaptive Projection Intrinsically Transformed Multivariate Empirical Mode Decomposition and High Order Singular Value Decomposition

    PubMed Central

    Lv, Yong; Song, Gangbing

    2018-01-01

    Rolling bearings are important components in rotary machinery systems. In the field of multi-fault diagnosis of rolling bearings, the vibration signal collected from a single channel tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain a multivariate signal can remedy this problem. However, the adverse effect of a power imbalance between the various channels is inevitable and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy to alleviate power imbalances. The filter bank properties of APIT-MEMD are also exploited to obtain more accurate and stable intrinsic mode functions (IMFs) and to ease mode mixing problems in multi-fault frequency extraction. By aligning IMF sets into a third-order tensor, high order singular value decomposition (HOSVD) can be employed to estimate the fault number. Fault correlation factor (FCF) analysis is used to conduct correlation analysis in order to determine effective IMFs; the characteristic frequencies of multi-faults can then be extracted. Numerical simulations and application to a multi-fault situation demonstrate that the proposed method is promising for multi-fault diagnosis of multivariate rolling bearing signals. PMID:29659510

  17. Multi-Fault Diagnosis of Rolling Bearings via Adaptive Projection Intrinsically Transformed Multivariate Empirical Mode Decomposition and High Order Singular Value Decomposition.

    PubMed

    Yuan, Rui; Lv, Yong; Song, Gangbing

    2018-04-16

    Rolling bearings are important components in rotary machinery systems. In the field of multi-fault diagnosis of rolling bearings, the vibration signal collected from a single channel tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain a multivariate signal can remedy this problem. However, the adverse effect of a power imbalance between the various channels is inevitable and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy to alleviate power imbalances. The filter bank properties of APIT-MEMD are also exploited to obtain more accurate and stable intrinsic mode functions (IMFs) and to ease mode mixing problems in multi-fault frequency extraction. By aligning IMF sets into a third-order tensor, high order singular value decomposition (HOSVD) can be employed to estimate the fault number. Fault correlation factor (FCF) analysis is used to conduct correlation analysis in order to determine effective IMFs; the characteristic frequencies of multi-faults can then be extracted. Numerical simulations and application to a multi-fault situation demonstrate that the proposed method is promising for multi-fault diagnosis of multivariate rolling bearing signals.

  18. A Negative Selection Immune System Inspired Methodology for Fault Diagnosis of Wind Turbines.

    PubMed

    Alizadeh, Esmaeil; Meskin, Nader; Khorasani, Khashayar

    2017-11-01

    High operational and maintenance costs represent major economic constraints in the wind turbine (WT) industry. These concerns have made investigation into fault diagnosis of WT systems an extremely important and active area of research. In this paper, an immune system (IS) inspired methodology for performing fault detection and isolation (FDI) of a WT system is proposed and developed. The proposed scheme is based on the self/nonself discrimination paradigm of a biological IS. Specifically, the negative selection mechanism [negative selection algorithm (NSA)] of the human body is utilized. In this paper, a hierarchical bank of NSAs is designed to detect and isolate both individual as well as simultaneously occurring faults common to WTs. A smoothing moving window filter is then utilized to further improve the reliability and performance of the FDI scheme. Moreover, the performance of our proposed scheme is compared with another state-of-the-art data-driven technique, namely support vector machines (SVMs), to demonstrate and illustrate the superiority and advantages of our proposed NSA-based FDI scheme. Finally, a nonparametric statistical comparison test is implemented to evaluate our proposed methodology against the SVM under various fault severities.

  19. Heterogeneity in friction strength of an active fault by incorporation of fragments of the surrounding host rock

    NASA Astrophysics Data System (ADS)

    Kato, Naoki; Hirono, Tetsuro

    2016-07-01

    To understand the correlation between the mesoscale structure and the frictional strength of an active fault, we performed a field investigation of the Atera fault at Tase, central Japan, and made laboratory-based determinations of its mineral assemblages and friction coefficients. The fault zone contains a light gray fault gouge, a brown fault gouge, and a black fault breccia. Samples of the two gouges contained large amounts of clay minerals such as smectite and had low friction coefficients of approximately 0.2-0.4 under conditions of 0.01 m/s slip velocity and 0.5-2.5 MPa confining pressure, whereas the breccia contained large amounts of angular quartz and feldspar and had a friction coefficient of 0.7 under the same conditions. Because the fault breccia closely resembles the granitic rock of the hangingwall in composition, texture, and friction coefficient, we interpret the breccia as having originated from this protolith. If the mechanical incorporation of wall rocks with high friction coefficients into fault zones is widespread at the mesoscale, it causes heterogeneity in the frictional strength of fault zones and might contribute to the evolution of fault-zone architectures.

  20. 26 CFR 1.832-5 - Deductions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    § 1.832-5 Deductions. (a) The deductions allowable are..., insurance companies are allowed a deduction for losses from capital assets sold or exchanged in order to... provisions of section 1212. The deduction is the same as that allowed mutual insurance companies subject to...

  1. 26 CFR 1.832-2 - Deductions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    § 1.832-2 Deductions. (a) The deductions allowable are..., insurance companies are allowed a deduction for losses from capital assets sold or exchanged in order to... provisions of section 1212. The deduction is the same as that allowed mutual insurance companies subject to...

  2. 26 CFR 1.832-2 - Deductions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    § 1.832-2 Deductions. (a) The deductions allowable are..., insurance companies are allowed a deduction for losses from capital assets sold or exchanged in order to... provisions of section 1212. The deduction is the same as that allowed mutual insurance companies subject to...

  3. 26 CFR 1.832-2 - Deductions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    § 1.832-2 Deductions. (a) The deductions allowable are..., insurance companies are allowed a deduction for losses from capital assets sold or exchanged in order to... provisions of section 1212. The deduction is the same as that allowed mutual insurance companies subject to...

  4. 26 CFR 1.832-2 - Deductions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    § 1.832-2 Deductions. (a) The deductions allowable are..., insurance companies are allowed a deduction for losses from capital assets sold or exchanged in order to... provisions of section 1212. The deduction is the same as that allowed mutual insurance companies subject to...

  5. Fault tolerant, radiation hard, high performance digital signal processor

    NASA Technical Reports Server (NTRS)

    Holmann, Edgar; Linscott, Ivan R.; Maurer, Michael J.; Tyler, G. L.; Libby, Vibeke

    1990-01-01

    An architecture has been developed for a high-performance VLSI digital signal processor that is highly reliable, fault-tolerant, and radiation-hard. The signal processor, part of a spacecraft receiver designed to support uplink radio science experiments at the outer planets, organizes the connections between redundant arithmetic resources, register files, and memory through a shuffle exchange communication network. The configuration of the network and the state of the processor resources are all under microprogram control, which both maps the resources according to algorithmic needs and reconfigures the processing should a failure occur. In addition, the microprogram is reloadable through the uplink to accommodate changes in the science objectives throughout the course of the mission. The processor will be implemented with silicon compiler tools, and its design will be verified through silicon compilation simulation at all levels from the resources to full functionality. By blending reconfiguration with redundancy the processor implementation is fault-tolerant and reliable, and possesses the long expected lifetime needed for a spacecraft mission to the outer planets.

  6. A deep convolutional neural network with new training methods for bearing fault diagnosis under noisy environment and different working load

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Li, Chuanhao; Peng, Gaoliang; Chen, Yuanhang; Zhang, Zhujun

    2018-02-01

    In recent years, intelligent fault diagnosis algorithms using machine learning techniques have achieved much success. However, because in real-world industrial applications the working load changes all the time and noise from the working environment is inevitable, the performance of intelligent fault diagnosis methods can degrade seriously. In this paper, a new model based on deep learning is proposed to address this problem. Our contributions include the following. First, we propose an end-to-end method that takes raw temporal signals as inputs and thus does not need any time-consuming denoising preprocessing; the model can achieve high accuracy in noisy environments. Second, the model does not rely on any domain adaptation algorithm or require information about the target domain; it can achieve high accuracy when the working load is changed. To understand the proposed model, we visualize the learned features and analyze the reasons behind the model's high performance.
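
    A minimal end-to-end sketch in the spirit of the description above: raw vibration windows in, fault classes out, no hand-crafted features or denoising. The layer sizes are illustrative and are not the architecture from the paper:

        import torch
        import torch.nn as nn

        class RawSignalCNN(nn.Module):
            def __init__(self, n_classes=10):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(1, 16, kernel_size=64, stride=8),  # wide first kernel
                    nn.ReLU(),
                    nn.MaxPool1d(2),
                    nn.Conv1d(16, 32, kernel_size=3),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool1d(8),
                )
                self.classifier = nn.Linear(32 * 8, n_classes)

            def forward(self, x):                 # x: (batch, 1, samples)
                return self.classifier(self.features(x).flatten(1))

        model = RawSignalCNN()
        dummy = torch.randn(4, 1, 2048)           # four raw 2048-sample windows
        print(model(dummy).shape)                 # torch.Size([4, 10])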

  7. High Frequency Near-Field Ground Motion Excited by Strike-Slip Step Overs

    NASA Astrophysics Data System (ADS)

    Hu, Feng; Wen, Jian; Chen, Xiaofei

    2018-03-01

    We performed dynamic rupture simulations on step overs with 1-2 km step widths and present their corresponding horizontal peak ground velocity distributions in the near field within different frequency ranges. The rupture speeds on the fault segments are the dominant control on the near-field ground motion. A Mach wave impact area at the free surface, which can be inferred from the distribution of the ratio of the maximum fault-strike particle velocity to the maximum fault-normal particle velocity, is generated in the near field by sustained supershear ruptures on fault segments; the Mach wave impact area cannot be detected with unsustained supershear ruptures alone. Sub-Rayleigh ruptures produce stronger ground motions beyond the end of fault segments. The existence of a low-velocity layer close to the free surface generates large amounts of high-frequency seismic radiation at step over discontinuities. For near-vertical step overs, normal stress perturbations on the primary fault caused by dipping structures affect the rupture speed transition, which further determines the distribution of the near-field ground motion. The presence of an extensional linking fault enhances the near-field ground motion in the extensional regime. This work helps us understand the characteristics of high-frequency seismic radiation in the vicinity of step overs and provides useful insights for interpreting rupture speed distributions derived from the characteristics of near-field ground motion.

  8. A Deep Learning Approach for Fault Diagnosis of Induction Motors in Manufacturing

    NASA Astrophysics Data System (ADS)

    Shao, Si-Yu; Sun, Wen-Jun; Yan, Ru-Qiang; Wang, Peng; Gao, Robert X.

    2017-11-01

    Extracting features from original signals is a key procedure in traditional fault diagnosis of induction motors, as it directly influences the performance of fault recognition. However, high-quality features require expert knowledge and human intervention. In this paper, a deep learning approach based on deep belief networks (DBN) is developed to learn features from the frequency distribution of vibration signals, with the purpose of characterizing the working status of induction motors. It combines the feature extraction procedure and the classification task to achieve automated and intelligent fault diagnosis. The DBN model is built by stacking multiple restricted Boltzmann machine (RBM) units and is trained using a layer-by-layer pre-training algorithm. Compared with traditional diagnostic approaches where feature extraction is needed, the presented approach learns hierarchical representations, which are suitable for fault classification, directly from the frequency distribution of the measurement data. The structure of the DBN model is investigated because the scale and depth of the DBN architecture directly affect its classification performance. An experimental study conducted on a machine fault simulator verifies the effectiveness of the deep learning approach for fault diagnosis of induction motors. This research thus proposes an intelligent diagnosis method for induction motors that utilizes a deep learning model to automatically learn features from sensor data and realize working status recognition.
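
    A minimal numpy sketch of the layer-by-layer pre-training idea is shown below, using one-step contrastive divergence (an assumption about the training details); layer sizes, epochs, and learning rates are illustrative.

      # Minimal sketch of stacked-RBM (DBN) pre-training with CD-1;
      # hyperparameters are illustrative, not the paper's.
      import numpy as np

      rng = np.random.default_rng(0)
      sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

      def train_rbm(data, n_hidden, epochs=10, lr=0.05):
          n_visible = data.shape[1]
          W = 0.01 * rng.standard_normal((n_visible, n_hidden))
          b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
          for _ in range(epochs):
              v0 = data
              p_h0 = sigmoid(v0 @ W + b_h)
              h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
              v1 = sigmoid(h0 @ W.T + b_v)          # reconstruction
              p_h1 = sigmoid(v1 @ W + b_h)
              # Contrastive-divergence update: positive minus negative phase
              W += lr * (v0.T @ p_h0 - v1.T @ p_h1) / len(data)
              b_v += lr * (v0 - v1).mean(axis=0)
              b_h += lr * (p_h0 - p_h1).mean(axis=0)
          return W, b_h

      # Stack two RBMs: the hidden activations of one feed the next.
      X = rng.random((256, 64))                  # e.g. normalized spectra
      W1, b1 = train_rbm(X, 32)
      H1 = sigmoid(X @ W1 + b1)
      W2, b2 = train_rbm(H1, 16)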

  9. High-velocity frictional properties of Alpine Fault rocks: Mechanical data, microstructural analysis, and implications for rupture propagation

    NASA Astrophysics Data System (ADS)

    Boulton, Carolyn; Yao, Lu; Faulkner, Daniel R.; Townend, John; Toy, Virginia G.; Sutherland, Rupert; Ma, Shengli; Shimamoto, Toshihiko

    2017-04-01

    The Alpine Fault in New Zealand is a major plate-bounding structure that typically slips in ∼M8 earthquakes every c. 330 years. To investigate the near-surface, high-velocity frictional behavior of surface- and borehole-derived Alpine Fault gouges and cataclasites, twenty-one rotary shear experiments were conducted at 1 MPa normal stress and 1 m/s equivalent slip velocity under both room-dry and water-saturated (wet) conditions. In the room-dry experiments, the peak friction coefficient (μp = τp/σn) of Alpine Fault cataclasites and fault gouges was consistently high (mean μp = 0.67 ± 0.07). In the wet experiments, the fault gouge peak friction coefficients were lower (mean μp = 0.20 ± 0.12) than the cataclasite peak friction coefficients (mean μp = 0.64 ± 0.04). All fault rocks exhibited very low steady-state friction coefficients (μss) (room-dry experiments mean μss = 0.16 ± 0.05; wet experiments mean μss = 0.09 ± 0.04). Of all the experiments performed, six experiments conducted on wet smectite-bearing principal slip zone (PSZ) fault gouges yielded the lowest peak friction coefficients (μp = 0.10-0.20), the lowest steady-state friction coefficients (μss = 0.03-0.09), and, commonly, the lowest specific fracture energy values (EG = 0.01-0.69 MJ/m2). Microstructures produced during room-dry and wet experiments on a smectite-bearing PSZ fault gouge were compared with microstructures in the same material recovered from the Deep Fault Drilling Project (DFDP-1) drill cores. The near-absence of localized shear bands with a strong crystallographic preferred orientation in the natural samples most resembles microstructures formed during wet experiments. Mechanical data and microstructural observations suggest that Alpine Fault ruptures propagate preferentially through water-saturated smectite-bearing fault gouges that exhibit low peak and steady-state friction coefficients.
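
    The friction quantities reported above follow directly from their definitions. The short sketch below recomputes the dry peak and steady-state coefficients at the stated 1 MPa normal stress and illustrates a specific fracture energy integral over an assumed exponential slip-weakening curve; the weakening distance is hypothetical, not a value from the study.

      # Worked numbers for mu = tau / sigma_n, plus an illustrative
      # specific fracture energy E_G = integral of (tau - tau_ss) d(slip).
      import numpy as np

      sigma_n = 1.0e6                      # normal stress, Pa (from the record)
      tau_peak = 0.67 * sigma_n            # room-dry peak shear stress
      tau_ss = 0.16 * sigma_n              # room-dry steady-state shear stress
      print("mu_p =", tau_peak / sigma_n, " mu_ss =", tau_ss / sigma_n)

      # Assumed exponential slip-weakening from tau_peak to tau_ss over a
      # hypothetical weakening distance Dc.
      Dc = 5.0                             # m (assumed)
      slip = np.linspace(0.0, 20.0, 2001)  # m
      tau = tau_ss + (tau_peak - tau_ss) * np.exp(-slip / Dc)
      E_G = np.trapz(tau - tau_ss, slip)   # J/m^2
      print("E_G =", E_G / 1e6, "MJ/m^2")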

  10. 42 CFR 408.42 - Deduction from railroad retirement benefits.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Deduction from railroad retirement benefits. 408.42... § 408.42 Deduction from railroad retirement benefits. (a) Responsibility for deductions. If an enrollee is entitled to railroad retirement benefits, his or her SMI premiums are deducted from those benefits...

  11. 42 CFR 417.158 - Payroll deductions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Payroll deductions. 417.158 Section 417.158 Public....158 Payroll deductions. Each employing entity that provides payroll deductions as a means of paying... employee's contribution, if any, to be paid through payroll deductions. [59 FR 49841, Sept. 30, 1994] ...

  12. 26 CFR 1.832-2 - Deductions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... TAXES Other Insurance Companies § 1.832-2 Deductions. (a) The deductions allowable are specified in... provisions of section 1212. The deduction is the same as that allowed mutual insurance companies subject to... companies, other than mutual fire insurance companies described in § 1.831-1, are also allowed a deduction...

  13. Common faults and their impacts for rooftop air conditioners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Breuker, M.S.; Braun, J.E.

    This paper identifies important faults and their performance impacts for rooftop air conditioners. The frequencies of occurrence and the relative costs of service for different faults were estimated through analysis of service records. Several of the important and difficult-to-diagnose refrigeration cycle faults were simulated in the laboratory. Also, the impacts on several performance indices were quantified through transient testing for a range of conditions and fault levels. The transient test results indicated that fault detection and diagnostics could be performed using methods that incorporate steady-state assumptions and models. Furthermore, the fault testing led to a set of generic rules for the impacts of faults on measurements that could be used for fault diagnoses. The average impacts of the faults on cooling capacity and coefficient of performance (COP) were also evaluated. Based upon the results, all of the faults are significant at the levels introduced, and should be detected and diagnosed by an FDD system. The data set obtained during this work was very comprehensive, and was used to design and evaluate the performance of an FDD method that will be reported in a future paper.

  14. Analysis of field-oriented controlled induction motor drives under sensor faults and an overview of sensorless schemes.

    PubMed

    Arun Dominic, D; Chelliah, Thanga Raj

    2014-09-01

    To obtain high dynamic performance from induction motor drives (IMD), variable-voltage and variable-frequency operation has to be performed by measuring the speed of rotation and the stator currents through sensors and feeding them back to the controllers. When the sensors undergo a fault, the stability of the control system, which may be designed for an industrial process, is disturbed. This paper studies the negative effects on a 12.5 hp induction motor drive when the field-oriented control system is subjected to sensor faults. To illustrate the importance of this study, a mine hoist load diagram is considered as the shaft load of the tested machine. Methods to recover the system from sensor faults are discussed. In addition, the various speed-sensorless schemes are reviewed comprehensively. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Cold seeps and splay faults on Nankai margin

    NASA Astrophysics Data System (ADS)

    Henry, P.; Ashi, J.; Tsunogai, U.; Toki, T.; Kuramoto, S.; Kinoshita, M.; Lallemant, S. J.

    2003-04-01

    Cold seeps (bacterial mats, specific fauna, authigenic carbonates) are common on the Nankai margin and considered as evidence for seepage of methane-bearing fluids. Camera and submersible surveys performed over the years have shown that cold seeps are generally associated with active faults. One question is whether part of the expelled fluids originates from the seismogenic zone and migrates along splay faults to the seafloor. The localisation of most cold seeps on the hanging wall of major thrusts may, however, be interpreted in various ways: (a) footwall compaction and diffuse flow, (b) fluid channelling along the fault zone at depth and diffuse flow near the seafloor, or (c) erosion and channelling along permeable strata. In 2002, new observations and sampling were performed with submersible and ROV (1) on major thrusts along the boundary between the Kumano forearc basin domain and the accretionary wedge domain, (2) on a fault affecting the forearc (Kodaiba fault), and (3) on mud volcanoes in the Kumano basin. In area (1), tsunami and seismic inversions indicate that the targeted thrusts are in the slip zone of the 1944 To-Nankai earthquake. In this area, the largest seep zone, continuous over at least 2 km, coincides with the termination of a thrust trace, indicating local fluid channelling along the edge of the fault zone. The Kodaiba fault is part of another splay fault system, which has both thrusting and strike-slip components and terminates westward in an en-echelon fold system. Strong seepage activity with abundant carbonates was found on a fold at the fault termination. One mud volcano, rooted in one of the en-echelon folds, has exceptionally high seepage activity compared with the others and thick carbonate crusts. These observations suggest that fluid expulsion along fault zones is most active at fault terminations and may be enhanced during fault initiation. Preliminary geochemical results indicate that signatures differ between seep sites and suggest that the two fault systems tap different sources.

  16. Multiwavelet packet entropy and its application in transmission line fault recognition and classification.

    PubMed

    Liu, Zhigang; Han, Zhiwei; Zhang, Yang; Zhang, Qiaoge

    2014-11-01

    Multiwavelets possess better properties than traditional wavelets, and multiwavelet packet transformation retains more high-frequency information. Spectral entropy can be applied as an index of the complexity or uncertainty of a signal. This paper defines four multiwavelet packet entropies to extract the features of different transmission line faults, and uses a radial basis function (RBF) neural network to recognize and classify 10 fault types on power transmission lines. First, the preprocessing and postprocessing problems of multiwavelets are presented, and Shannon entropy and Tsallis entropy are introduced and their difference discussed. Second, multiwavelet packet energy entropy, time entropy, Shannon singular entropy, and Tsallis singular entropy are defined as feature extraction methods for transmission line fault signals. Third, a plan for transmission line fault recognition using multiwavelet packet entropies and an RBF neural network is proposed. Finally, the experimental results show that the plan with the four multiwavelet packet energy entropies defined in this paper achieves better performance in fault recognition. The performance with SA4 (symmetric antisymmetric) multiwavelet packet Tsallis singular entropy is the best among the combinations of different multiwavelet packets and the four multiwavelet packet entropies.
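
    As a rough illustration of the entropy features, the sketch below computes Shannon and Tsallis entropies of wavelet packet sub-band energies, using pywt's scalar wavelet packets as a stand-in for the paper's multiwavelet packets; the signal, decomposition level, and Tsallis index q are assumptions.

      # Wavelet-packet energy entropies (Shannon and Tsallis) on a
      # synthetic signal; scalar wavelets stand in for multiwavelets.
      import numpy as np
      import pywt

      fs = 10_000
      t = np.arange(0, 0.2, 1 / fs)
      signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)

      wp = pywt.WaveletPacket(data=signal, wavelet='db4', maxlevel=3)
      energies = np.array([np.sum(node.data ** 2)
                           for node in wp.get_level(3, order='freq')])
      p = energies / energies.sum()        # normalized sub-band energies

      shannon = -np.sum(p * np.log(p + 1e-12))
      q = 2.0                              # Tsallis entropic index (assumed)
      tsallis = (1.0 - np.sum(p ** q)) / (q - 1.0)
      print("Shannon:", shannon, "Tsallis:", tsallis)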

  17. 26 CFR 1.243-1 - Deduction for dividends received by corporations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 3 2010-04-01 2010-04-01 false Deduction for dividends received by corporations... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Special Deductions for Corporations § 1.243-1 Deduction for dividends received by corporations. (a)(1) A corporation is allowed a deduction under section 243 for...

  18. 26 CFR 1.172-1 - Net operating loss deduction.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 3 2010-04-01 2010-04-01 false Net operating loss deduction. 1.172-1 Section 1... operating loss deduction. (a) Allowance of deduction. Section 172(a) allows as a deduction in computing taxable income for any taxable year subject to the Code the aggregate of the net operating loss carryovers...

  19. 26 CFR 1.108-3 - Intercompany losses and deductions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 2 2010-04-01 2010-04-01 false Intercompany losses and deductions. 1.108-3... Intercompany losses and deductions. (a) General rule. This section applies to certain losses and deductions... attributes to which section 108(b) applies, a loss or deduction not yet taken into account under section 267...

  20. 26 CFR 1.812-2 - Operations loss deduction.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... (CONTINUED) INCOME TAXES Gain and Loss from Operations § 1.812-2 Operations loss deduction. (a) Allowance of deduction. Section 812 provides that a life insurance company shall be allowed a deduction in computing gain... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Operations loss deduction. 1.812-2 Section 1.812...

  1. Magnetic properties of cores from the Wenchuan Earthquake Fault Scientific Drilling Hole-2 (WFSD-2), China

    NASA Astrophysics Data System (ADS)

    Zhang, L., Jr.; Sun, Z.; Li, H.; Cao, Y.; Ye, X.; Wang, L.; Zhao, Y.; Han, S.

    2015-12-01

    During an earthquake, seismic slip and frictional heating may cause physical and chemical alterations of magnetic minerals within the fault zone, so rock magnetism provides a method for understanding earthquake dynamics. The Wenchuan earthquake Fault Scientific Drilling Project (WFSD) started right after the 2008 Mw 7.9 Wenchuan earthquake to investigate the earthquake faulting mechanism. Hole 2 (WFSD-2) is located in the Pengguan Complex in Bajiaomiao village (Dujiangyan, Sichuan) and reached the Yingxiu-Beichuan fault (YBF). We measured the surface magnetic susceptibility of the WFSD-2 cores from 500 m to 1530 m at an interval of 1 cm. Rocks at 500-599.31 m depth and 1211.49-1530 m depth are from the Neoproterozoic Pengguan Complex, while the section from 599.31 m to 1211.49 m is composed of Late Triassic sediments. The magnetic susceptibility values of the first part of the Pengguan Complex range from 1 to 25 × 10⁻⁶ SI, while the second part ranges from 10 to 200 × 10⁻⁶ SI, which indicates that the two parts are not from the same rock units. The Late Triassic sedimentary rocks have low magnetic susceptibility values, ranging from -5 to 20 × 10⁻⁶ SI. Most fault zones coincide with high magnetic susceptibility values in the WFSD-2 cores. Fault rocks within the WFSD-2 cores, mainly fault breccia, cataclasite, gouge and pseudotachylite, mostly display a significantly higher magnetic susceptibility than the host rocks (5:1 to 20:1). In particular, in the YBF zone of the WFSD-2 cores (from 600 to 960 m), dozens of intervals with high magnetic susceptibility values have been observed. These multi-layered fault rocks with high magnetic susceptibility values might indicate that the YBF is a long-term active fault. The magnetic susceptibility values change with the type of fault rock: gouge and pseudotachylite have higher values than the other fault rocks. Further rock magnetism analyses were then performed to investigate the mechanisms. We consider that the principal mechanism for the high magnetic susceptibility of these fault rocks is most likely the production of new magnetite from iron-bearing paramagnetic minerals (such as silicates or clays). This new magnetite might originate from frictional heating on a seismic fault slip plane or from seismic fluid during an earthquake.

  2. Frictional heterogeneities on carbonate-bearing normal faults: Insights from the Monte Maggio Fault, Italy

    NASA Astrophysics Data System (ADS)

    Carpenter, B. M.; Scuderi, M. M.; Collettini, C.; Marone, C.

    2014-12-01

    Observations of heterogeneous and complex fault slip are often attributed to the complexity of fault structure and/or spatial heterogeneity of fault frictional behavior. Such complex slip patterns have been observed for earthquakes on normal faults throughout central Italy, where many of the Mw 6 to 7 earthquakes in the Apennines nucleate at depths where the lithology is dominated by carbonate rocks. To explore the relationship between fault structure and heterogeneous frictional properties, we studied the exhumed Monte Maggio Fault, located in the northern Apennines. We collected intact specimens of the fault zone, including the principal slip surface and hanging wall cataclasite, and performed experiments at a normal stress of 10 MPa under saturated conditions. Experiments designed to reactivate slip between the cemented principal slip surface and cataclasite show a 3 MPa stress drop as the fault surface fails, then velocity-neutral frictional behavior and significant frictional healing. Overall, our results suggest that (1) earthquakes may readily nucleate in areas of the fault where the slip surface separates massive limestone and are likely to propagate in areas where fault gouge is in contact with the slip surface; (2) postseismic slip is more likely to occur in areas of the fault where gouge is present; and (3) high rates of frictional healing and low creep relaxation observed between solid fault surfaces could lead to significant aftershocks in areas of low stress drop.

  3. Deductibles in health insurance: can the actuarially fair premium reduction exceed the deductible?

    PubMed

    Bakker, F M; van Vliet, R C; van de Ven, W P

    2000-09-01

    The actuarially fair premium reduction in case of a deductible relative to full insurance is affected by: (1) out-of-pocket payments, (2) moral hazard, (3) administrative costs, and, in case of a voluntary deductible, (4) adverse selection. Both the partial effects and the total effect of these factors are analyzed. Moral hazard and adverse selection appear to have a substantial effect on the expected health care costs above a deductible but a small effect on the expected out-of-pocket expenditure. A premium model indicates that for a broad range of deductible amounts the actuarially fair premium reduction exceeds the deductible.
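
    The paper's central question can be illustrated with a toy Monte Carlo: when moral hazard inflates costs under full insurance, the fair premium reduction (full-insurance premium minus deductible-policy premium) can exceed the deductible itself. All distributional choices below are hypothetical, not the paper's premium model.

      # Monte-Carlo illustration: fair premium reduction vs. deductible.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 1_000_000
      d = 500.0                                    # deductible

      # Annual health costs: lognormal, inflated 20% under full insurance
      # to mimic moral hazard (assumed magnitude).
      base = rng.lognormal(mean=6.0, sigma=1.2, size=n)
      costs_full = 1.20 * base                     # full insurance
      costs_ded = base                             # with deductible

      premium_full = costs_full.mean()
      premium_ded = np.maximum(costs_ded - d, 0.0).mean()  # insurer pays above d
      reduction = premium_full - premium_ded
      print(f"fair premium reduction = {reduction:.0f} vs deductible = {d:.0f}")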

  4. A technique for evaluating the application of the pin-level stuck-at fault model to VLSI circuits

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Finelli, George B.

    1987-01-01

    Accurate fault models are required to conduct the experiments defined in validation methodologies for highly reliable fault-tolerant computers (e.g., computers with a probability of failure of 10⁻⁹ for a 10-hour mission). Described is a technique by which a researcher can evaluate the capability of the pin-level stuck-at fault model to simulate true error behavior symptoms in very large scale integrated (VLSI) digital circuits. The technique is based on a statistical comparison of the error behavior resulting from faults applied at the pins of, and internal to, a VLSI circuit. As an example of an application of the technique, the error behavior of a microprocessor simulation subjected to internal stuck-at faults is compared with the error behavior which results from pin-level stuck-at faults. The error behavior is characterized by the time between errors and the duration of errors. Based on this example data, the pin-level stuck-at fault model is found to deliver less than ideal performance. However, with respect to the class of faults which cause a system crash, the pin-level stuck-at fault model is found to provide a good modeling capability.
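
    The comparison underlying the technique is distributional. A hedged sketch using a two-sample Kolmogorov-Smirnov test on synthetic time-between-errors samples is shown below; the paper's actual statistical machinery may differ.

      # Compare times-between-errors under pin-level vs. internal faults;
      # the samples here are synthetic stand-ins.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      tbe_pin = rng.exponential(scale=120.0, size=400)       # pin-level faults
      tbe_internal = rng.exponential(scale=100.0, size=400)  # internal faults

      stat, p_value = stats.ks_2samp(tbe_pin, tbe_internal)
      print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
      # A small p-value suggests the pin-level model does not reproduce the
      # internal-fault error behavior for this symptom.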

  5. Gravity and Magnetic Surveys Over the Santa Rita Fault System, Southeastern Arizona

    USGS Publications Warehouse

    Hegmann, Mary

    2001-01-01

    Gravity and magnetic surveys were performed in the northeast portion of the Santa Rita Experimental Range, in southeastern Arizona, to identify faults and gain a better understanding of the subsurface geology. A total of 234 gravity stations were established, and numerous magnetic data were collected with portable and truck-mounted proton precession magnetometers. In addition, one line of very low frequency electromagnetic data was collected together with magnetic data. Gravity anomalies are used to identify two normal faults that project northward toward a previously identified fault. The gravity data also confirm the location of a second previously interpreted normal fault. Interpretation of magnetic anomaly data indicates the presence of a higher-susceptibility sedimentary unit located beneath lower-susceptibility surficial sediments. Magnetic anomaly data identify a 1-km-wide negative anomaly east of these faults caused by an unknown source and reveal the high variability of susceptibility in the Tertiary intrusive rocks in the area.

  6. 42 CFR 408.45 - Deduction from age 72 special payments.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Deduction from age 72 special payments. 408.45... § 408.45 Deduction from age 72 special payments. (a) Deduction of premiums. SMI premiums are deducted from age 72 special payments made under section 228 of the Act or the payments are withheld under...

  7. 26 CFR 20.2053-9 - Deduction for certain State death taxes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 14 2010-04-01 2010-04-01 false Deduction for certain State death taxes. 20... § 20.2053-9 Deduction for certain State death taxes. (a) General rule. A deduction is allowed a... death taxes. However, see section 2058 to determine the deductibility of state death taxes by estates to...

  8. 26 CFR 1.642(g)-2 - Deductions included.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 8 2014-04-01 2014-04-01 false Deductions included. 1.642(g)-2 Section 1.642(g... (CONTINUED) INCOME TAXES (CONTINUED) Estates, Trusts, and Beneficiaries § 1.642(g)-2 Deductions included. It...(g) is applicable be treated in the same way. One deduction or portion of a deduction may be allowed...

  9. 26 CFR 1.642(g)-2 - Deductions included.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 8 2013-04-01 2013-04-01 false Deductions included. 1.642(g)-2 Section 1.642(g... (CONTINUED) INCOME TAXES (CONTINUED) Estates, Trusts, and Beneficiaries § 1.642(g)-2 Deductions included. It...(g) is applicable be treated in the same way. One deduction or portion of a deduction may be allowed...

  10. 26 CFR 1.642(g)-2 - Deductions included.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 8 2011-04-01 2011-04-01 false Deductions included. 1.642(g)-2 Section 1.642(g... (CONTINUED) INCOME TAXES (CONTINUED) Estates, Trusts, and Beneficiaries § 1.642(g)-2 Deductions included. It...(g) is applicable be treated in the same way. One deduction or portion of a deduction may be allowed...

  11. 26 CFR 1.642(g)-2 - Deductions included.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 8 2012-04-01 2012-04-01 false Deductions included. 1.642(g)-2 Section 1.642(g... (CONTINUED) INCOME TAXES (CONTINUED) Estates, Trusts, and Beneficiaries § 1.642(g)-2 Deductions included. It...(g) is applicable be treated in the same way. One deduction or portion of a deduction may be allowed...

  12. 26 CFR 1.642(g)-2 - Deductions included.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Deductions included. 1.642(g)-2 Section 1.642(g... (CONTINUED) INCOME TAXES Estates, Trusts, and Beneficiaries § 1.642(g)-2 Deductions included. It is not required that the total deductions, or the total amount of any deduction, to which section 642(g) is...

  13. 42 CFR 408.45 - Deduction from age 72 special payments.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 2 2013-10-01 2013-10-01 false Deduction from age 72 special payments. 408.45... § 408.45 Deduction from age 72 special payments. (a) Deduction of premiums. SMI premiums are deducted from age 72 special payments made under section 228 of the Act or the payments are withheld under...

  14. 26 CFR 20.2053-9 - Deduction for certain State death taxes.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 14 2012-04-01 2012-04-01 false Deduction for certain State death taxes. 20... § 20.2053-9 Deduction for certain State death taxes. (a) General rule. A deduction is allowed a... death taxes. However, see section 2058 to determine the deductibility of state death taxes by estates to...

  15. 26 CFR 20.2053-9 - Deduction for certain State death taxes.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 14 2011-04-01 2010-04-01 true Deduction for certain State death taxes. 20.2053....2053-9 Deduction for certain State death taxes. (a) General rule. A deduction is allowed a decedent's....2011-2 for the effect which the allowance of this deduction has upon the credit for State death taxes...

  16. Modeling and Performance Considerations for Automated Fault Isolation in Complex Systems

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Oostdyk, Rebecca

    2010-01-01

    The purpose of this paper is to document the modeling considerations and performance metrics that were examined in the development of a large-scale Fault Detection, Isolation and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown, by using a suite of complementary software tools that alert operators to anomalies and failures in real time. The FDIR team members developed a set of operational requirements for the models that would be used for fault isolation and worked closely with the vendor of the software tools selected for fault isolation to ensure that the software was able to meet the requirements. Once the requirements were established, example models of sufficient complexity were used to test the performance of the software. The results of the performance testing demonstrated the need for enhancements to the software in order to meet the demands of the full-scale ground and vehicle FDIR system. The paper highlights the importance of the development of operational requirements and preliminary performance testing as a strategy for identifying deficiencies in highly scalable systems and rectifying those deficiencies before they imperil the success of the project.

  17. A Novel Wide-Area Backup Protection Based on Fault Component Current Distribution and Improved Evidence Theory

    PubMed Central

    Zhang, Zhe; Kong, Xiangping; Yin, Xianggen; Yang, Zengli; Wang, Lijun

    2014-01-01

    In order to solve the problems of existing wide-area backup protection (WABP) algorithms, this paper proposes a novel WABP algorithm based on the distribution characteristics of fault component current and improved Dempster/Shafer (D-S) evidence theory. When a fault occurs, slave substations transmit to the master substation the amplitudes of the fault component currents of the transmission lines closest to the fault element. The master substation then identifies suspicious faulty lines according to the distribution characteristics of fault component current. After that, the master substation identifies the actual faulty line with improved D-S evidence theory, based on the action states of traditional protections and the direction components of these suspicious faulty lines. Simulation examples based on the IEEE 10-generator, 39-bus system show that the proposed WABP algorithm has excellent performance: low requirements for sampling synchronization, small wide-area communication flow, and high fault tolerance. PMID:25050399
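
    The evidence combination step can be sketched with Dempster's rule of combination. The implementation below is generic, and the two mass functions over suspected lines are illustrative, not values from the paper's simulations.

      # Minimal Dempster combination of two bodies of evidence over a set
      # of suspected faulty lines; mass values are illustrative only.
      from itertools import product

      def dempster_combine(m1, m2):
          combined, conflict = {}, 0.0
          for (a, wa), (b, wb) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + wa * wb
              else:
                  conflict += wa * wb          # mass assigned to conflict
          return {k: v / (1.0 - conflict) for k, v in combined.items()}

      # Evidence from fault-component current distribution vs. from the
      # action states of traditional protections (illustrative masses).
      L12, L23 = frozenset({"L12"}), frozenset({"L23"})
      theta = L12 | L23                          # frame of discernment
      m_current = {L12: 0.6, L23: 0.3, theta: 0.1}
      m_protection = {L12: 0.7, L23: 0.2, theta: 0.1}
      print(dempster_combine(m_current, m_protection))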

  18. Dynamic rupture scenarios from Sumatra to Iceland - High-resolution earthquake source physics on natural fault systems

    NASA Astrophysics Data System (ADS)

    Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Wollherr, Stephanie

    2017-04-01

    Capturing the observed complexity of earthquake sources in dynamic rupture simulations may require: non-linear fault friction, thermal and fluid effects, heterogeneous fault stress and fault strength initial conditions, fault curvature and roughness, and on- and off-fault non-elastic failure. All of these factors have been independently shown to alter dynamic rupture behavior and thus possibly influence the degree of realism attainable via simulated ground motions. In this presentation we will show examples of high-resolution earthquake scenarios, e.g. based on the 2004 Sumatra-Andaman Earthquake, the 1994 Northridge earthquake and a potential rupture of the Husavik-Flatey fault system in Northern Iceland. The simulations combine a multitude of representations of source complexity at the necessary spatio-temporal resolution enabled by excellent scalability on modern HPC systems. Such simulations allow an analysis of the dominant factors impacting earthquake source physics and ground motions given distinct tectonic settings or distinct focuses of seismic hazard assessment. Across all simulations, we find that fault geometry, together with the regional background stress state, provides a first-order influence on source dynamics and the emanated seismic wave field. The dynamic rupture models are performed with SeisSol, a software package based on an ADER-Discontinuous Galerkin scheme for solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. Use of unstructured tetrahedral meshes allows for a realistic representation of the non-planar fault geometry, subsurface structure and bathymetry. The results presented highlight the fact that modern numerical methods are essential to further our understanding of earthquake source physics and complement both physics-based ground motion research and empirical approaches in seismic hazard analysis.

  19. Second-order sliding mode control for DFIG-based wind turbines fault ride-through capability enhancement.

    PubMed

    Benbouzid, Mohamed; Beltran, Brice; Amirat, Yassine; Yao, Gang; Han, Jingang; Mangel, Hervé

    2014-05-01

    This paper deals with the fault ride-through capability assessment of a doubly fed induction generator-based wind turbine using high-order sliding mode control. It has recently been suggested that sliding mode control is a solution of choice for the fault ride-through problem. In this context, this paper proposes a second-order sliding mode as an improved solution that handles the classical sliding mode chattering problem. The main and attractive features of high-order sliding modes are robustness against external disturbances, grid faults in particular, and chattering-free behavior (no extra mechanical stress on the wind turbine drive train). Simulations using the NREL FAST code on a 1.5-MW wind turbine are carried out to evaluate the ride-through performance of the proposed high-order sliding mode control strategy in the case of grid frequency variations and unbalanced voltage sags. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
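
    A common realization of second-order sliding mode control is the super-twisting algorithm; a toy simulation of it on a scalar system is sketched below. The gains, plant, and disturbance are assumptions for illustration, not the paper's DFIG model.

      # Super-twisting (second-order sliding mode) control law:
      #   u = -k1*|s|**0.5*sign(s) + v,   dv/dt = -k2*sign(s)
      # applied to toy dynamics ds/dt = u + d with a bounded disturbance.
      import numpy as np

      k1, k2, dt = 2.0, 1.5, 1e-3
      s, v = 1.0, 0.0                       # sliding variable, integral term
      for step in range(5000):
          u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
          v += -k2 * np.sign(s) * dt
          disturbance = 0.3 * np.sin(0.001 * step)   # bounded perturbation
          s += (u + disturbance) * dt
      print("final |s| =", abs(s))          # stays near zero, chattering-free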

  20. A novel end-to-end fault detection and localization protocol for wavelength-routed WDM networks

    NASA Astrophysics Data System (ADS)

    Zeng, Hongqing; Vukovic, Alex; Huang, Changcheng

    2005-09-01

    Wavelength division multiplexing (WDM) networks are becoming prevalent in telecommunications. However, because of the high data rates, increased wavelength counts, and density of such networks, even a very short disruption of service caused by network faults may lead to high data loss. Network survivability is therefore critical and has been intensively studied; fault detection and localization is a vital part of it but has received disproportionately little attention. In this paper we describe and analyze an end-to-end lightpath fault detection scheme in the data plane with fault notification in the control plane, with the aim of reducing the fault detection time. In this protocol, the source node of each lightpath keeps sending hello packets to the destination node, exactly following the path of the data traffic. The destination node generates an alarm once a certain number of consecutive hello packets are missed within a given time period. The network management unit then collects all alarms and locates the fault source based on the network topology, and sends fault notification messages via the control plane to either the source node or all upstream nodes along the lightpath. The performance evaluation shows that such a protocol achieves fast fault detection while the overhead brought to the user data by hello packets is negligible.
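
    The destination-side detection logic amounts to counting consecutive missed hello packets; a minimal sketch follows, with an assumed miss threshold rather than a value from the paper.

      # Destination-node logic: raise an alarm after K consecutive missed
      # hello packets; threshold and timing are illustrative.
      class HelloMonitor:
          def __init__(self, miss_threshold=3):
              self.miss_threshold = miss_threshold
              self.consecutive_misses = 0
              self.alarm = False

          def on_interval(self, hello_received: bool):
              if hello_received:
                  self.consecutive_misses = 0
              else:
                  self.consecutive_misses += 1
                  if self.consecutive_misses >= self.miss_threshold:
                      self.alarm = True   # notify management via control plane
              return self.alarm

      monitor = HelloMonitor()
      for received in [True, True, False, False, False]:
          print(monitor.on_interval(received))   # alarm turns True on 3rd miss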

  1. Coordinated Fault-Tolerance for High-Performance Computing Final Project Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panda, Dhabaleswar Kumar; Beckman, Pete

    2011-07-28

    With the Coordinated Infrastructure for Fault Tolerance Systems (CIFTS, as the original project came to be called) project, our aim has been to understand and tackle the following broad research questions, the answers to which will help the HEC community analyze and shape the direction of research in the field of fault tolerance and resiliency on future high-end leadership systems. Will availability of global fault information, obtained by fault information exchange between the different HEC software on a system, allow individual system software to better detect, diagnose, and adaptively respond to faults? If fault-awareness is raised throughout the system through fault information exchange, is it possible to get all system software working together to provide more comprehensive end-to-end fault management on the system? What are the missing fault-tolerance features that widely used HEC system software lacks today that would inhibit such software from taking advantage of systemwide global fault information? What are the practical limitations of a systemwide approach for end-to-end fault management based on fault awareness and coordination? What mechanisms, tools, and technologies are needed to bring about fault awareness and coordination of responses on a leadership-class system? What standards, outreach, and community interaction are needed for adoption of the concept of fault awareness and coordination for fault management on future systems? Keeping our overall objectives in mind, the CIFTS team has taken a parallel fourfold approach. Our central goal was to design and implement a lightweight, scalable infrastructure with a simple, standardized interface to allow communication of fault-related information through the system and facilitate coordinated responses. This work led to the development of the Fault Tolerance Backplane (FTB) publish-subscribe API specification, together with a reference implementation and several experimental implementations on top of existing publish-subscribe tools. We enhanced the intrinsic fault tolerance capabilities of representative implementations of a variety of key HPC software subsystems and integrated them with the FTB. Targeted software subsystems included MPI communication libraries, checkpoint/restart libraries, resource managers and job schedulers, and system monitoring tools. Leveraging the aforementioned infrastructure, as well as developing and utilizing additional tools, we examined issues associated with expanded, end-to-end fault response from both system and application viewpoints. From the standpoint of system operations, we investigated log and root-cause analysis, anomaly detection and fault prediction, and generalized notification mechanisms. Our applications work included libraries for fault-tolerant linear algebra, application frameworks for coupled multiphysics applications, and external frameworks to support monitoring and response for general applications. Our final goal was to engage the high-end computing community to increase awareness of tools and issues around coordinated end-to-end fault management.
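
    The FTB concept is a publish-subscribe backplane for fault events. The sketch below shows the coordination pattern with a hypothetical in-process implementation; it is not the real FTB API, whose specification is referenced above.

      # Hypothetical publish-subscribe fault backplane, illustrating how a
      # scheduler could react to events published by a monitoring tool.
      from collections import defaultdict

      class FaultBackplane:
          def __init__(self):
              self._subscribers = defaultdict(list)

          def subscribe(self, event_type, callback):
              self._subscribers[event_type].append(callback)

          def publish(self, event_type, payload):
              for callback in self._subscribers[event_type]:
                  callback(payload)

      ftb = FaultBackplane()
      # A job scheduler excludes nodes whose failure the monitor publishes.
      ftb.subscribe("node.failure",
                    lambda e: print("scheduler: excluding node", e["node"]))
      ftb.publish("node.failure", {"node": "n042", "cause": "ecc-error"})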

  2. Simulation-driven machine learning: Bearing fault classification

    NASA Astrophysics Data System (ADS)

    Sobie, Cameron; Freitas, Carina; Nicolai, Mike

    2018-01-01

    Increasing the accuracy of mechanical fault detection has the potential to improve system safety and economic performance by minimizing scheduled maintenance and the probability of unexpected system failure. Advances in computational performance have enabled the application of machine learning algorithms across numerous applications including condition monitoring and failure detection. Past applications of machine learning to physical failure have relied explicitly on historical data, which limits the feasibility of this approach to in-service components with extended service histories. Furthermore, recorded failure data is often only valid for the specific circumstances and components for which it was collected. This work directly addresses these challenges for roller bearings with race faults by generating training data using information gained from high resolution simulations of roller bearing dynamics, which is used to train machine learning algorithms that are then validated against four experimental datasets. Several different machine learning methodologies are compared starting from well-established statistical feature-based methods to convolutional neural networks, and a novel application of dynamic time warping (DTW) to bearing fault classification is proposed as a robust, parameter free method for race fault detection.
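
    Dynamic time warping is parameter-free in the sense that only a distance-matrix recursion is required. A sketch of DTW plus 1-nearest-neighbor classification on synthetic signals follows; the labels and waveforms are placeholders, not the paper's datasets.

      # DTW distance via dynamic programming, used for 1-NN classification.
      import numpy as np

      def dtw_distance(a, b):
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(a[i - 1] - b[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      def classify_1nn(query, templates):
          # templates: list of (label, reference_signal) pairs
          return min(templates, key=lambda t: dtw_distance(query, t[1]))[0]

      t = np.linspace(0, 1, 100)
      templates = [("healthy", np.sin(2 * np.pi * 5 * t)),
                   ("race_fault", np.sign(np.sin(2 * np.pi * 5 * t)))]
      query = np.sign(np.sin(2 * np.pi * 5 * t + 0.2)) + 0.1 * np.random.randn(100)
      print(classify_1nn(query, templates))      # expected: "race_fault"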

  3. TWT transmitter fault prediction based on ANFIS

    NASA Astrophysics Data System (ADS)

    Li, Mengyan; Li, Junshan; Li, Shuangshuang; Wang, Wenqing; Li, Fen

    2017-11-01

    Fault prediction is an important component of health management and plays an important role in guaranteeing the reliability of complex electronic equipment. The transmitter is a unit with a high failure rate, and degraded cathode performance of the TWT is a common transmitter fault. In this paper, a model based on a set of key TWT parameters is proposed. By choosing proper parameters and applying an adaptive neural network training model, this method, combined with the analytic hierarchy process (AHP), provides a useful reference for the overall health assessment of TWT transmitters.

  4. RedThreads: An Interface for Application-Level Fault Detection/Correction Through Adaptive Redundant Multithreading

    DOE PAGES

    Hukerikar, Saurabh; Teranishi, Keita; Diniz, Pedro C.; ...

    2017-02-11

    In the presence of accelerated fault rates, which are projected to be the norm on future exascale systems, it will become increasingly difficult for high-performance computing (HPC) applications to accomplish useful computation. Due to the fault-oblivious nature of current HPC programming paradigms and execution environments, HPC applications are insufficiently equipped to deal with errors. We believe that HPC applications should be enabled with capabilities to actively search for and correct errors in their computations. The redundant multithreading (RMT) approach offers lightweight replicated execution streams of program instructions within the context of a single application process. However, the use of complete redundancy incurs significant overhead to the application performance.

  6. Using Decision Procedures to Build Domain-Specific Deductive Synthesis Systems

    NASA Technical Reports Server (NTRS)

    VanBaalen, Jeffrey; Roach, Steven; Lau, Sonie (Technical Monitor)

    1998-01-01

    This paper describes a class of decision procedures that we have found useful for efficient, domain-specific deductive synthesis. These procedures are called closure-based ground literal satisfiability procedures. We argue that this is a large and interesting class of procedures and show how to interface these procedures to a theorem prover for efficient deductive synthesis. Finally, we describe some results we have observed from our implementation. Amphion/NAIF is a domain-specific, high-assurance software synthesis system. It takes an abstract specification of a problem in solar system mechanics, such as 'when will a signal sent from the Cassini spacecraft to Earth be blocked by the planet Saturn?', and automatically synthesizes a FORTRAN program to solve it.

  7. Petascale computation of multi-physics seismic simulations

    NASA Astrophysics Data System (ADS)

    Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Wollherr, Stephanie; Duru, Kenneth C.

    2017-04-01

    Capturing the observed complexity of earthquake sources in concurrence with seismic wave propagation simulations is an inherently multi-scale, multi-physics problem. In this presentation, we present simulations of earthquake scenarios resolving high-detail dynamic rupture evolution and high-frequency ground motion. The simulations combine a multitude of representations of model complexity: non-linear fault friction, thermal and fluid effects, heterogeneous fault stress and fault strength initial conditions, fault curvature and roughness, and on- and off-fault non-elastic failure to capture dynamic rupture behavior at the source; and seismic wave attenuation, 3D subsurface structure and bathymetry impacting seismic wave propagation. Performing such scenarios at the necessary spatio-temporal resolution requires highly optimized and massively parallel simulation tools which can efficiently exploit HPC facilities. Our up to multi-PetaFLOP simulations are performed with SeisSol (www.seissol.org), an open-source software package based on an ADER-Discontinuous Galerkin (DG) scheme solving the seismic wave equations in velocity-stress formulation in elastic, viscoelastic, and viscoplastic media with high-order accuracy in time and space. Our flux-based implementation of frictional failure remains free of spurious oscillations, and tetrahedral unstructured meshes allow for complicated model geometry. SeisSol has been optimized at all software levels, including assembler-level DG kernels which obtain 50% peak performance on some of the largest supercomputers worldwide, an overlapping MPI-OpenMP parallelization shadowing the multiphysics computations, usage of local time stepping, and parallel input and output schemes with direct interfaces to community-standard data formats. All these factors help minimise the time-to-solution. The results presented highlight the fact that modern numerical methods and hardware-aware optimization for modern supercomputers are essential to further our understanding of earthquake source physics and complement both physics-based ground motion research and empirical approaches in seismic hazard analysis. Lastly, we conclude with an outlook on future exascale ADER-DG solvers for seismological applications.

  8. FTA-MPI: Fault Tolerance Assistant MPI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Aiman; Laguna, Ignacio; Sato, Kento

    Future high-performance computing systems may face frequent failures with their rapid increase in scale and complexity. Resilience to faults has become a major challenge for large-scale applications running on supercomputers, which demands fault tolerance support for prevalent MPI applications. Among failure scenarios, process failures are one of the most severe issues as they usually lead to termination of applications. However, the widely used MPI implementations do not provide mechanisms for fault tolerance. We propose FTA-MPI (Fault Tolerance Assistant MPI), a programming model that provides support for failure detection, failure notification and recovery. Specifically, FTA-MPI exploits a try/catch model that enables failure localization and transparent recovery of process failures in MPI applications. We demonstrate FTA-MPI with synthetic applications and a molecular dynamics code CoMD, and show that FTA-MPI provides high programmability for users and enables convenient and flexible recovery of process failures.

  9. Segregation and Phase Transformations Along Superlattice Intrinsic Stacking Faults in Ni-Based Superalloys

    NASA Astrophysics Data System (ADS)

    Smith, T. M.; Esser, B. D.; Good, B.; Hooshmand, M. S.; Viswanathan, G. B.; Rae, C. M. F.; Ghazisaeidi, M.; McComb, D. W.; Mills, M. J.

    2018-06-01

    In this study, local chemical and structural changes along superlattice intrinsic stacking faults combine to represent an atomic-scale phase transformation. In order to elicit stacking fault shear, creep tests of two different single crystal Ni-based superalloys, ME501 and CMSX-4, were performed near 750 °C using stresses of 552 and 750 MPa, respectively. Through high-resolution scanning transmission electron microscopy (STEM) and state-of-the-art energy dispersive X-ray spectroscopy, ordered compositional changes were measured along SISFs in both alloys. For both instances, the elemental segregation and local crystal structure present along the SISFs are consistent with a nanoscale γ' to D019 phase transformation. Other notable observations are prominent γ-rich Cottrell atmospheres and new evidence of more complex reordering processes responsible for the formation of these faults. These findings are further supported using density functional theory calculations and high-angle annular dark-field (HAADF)-STEM image simulations.

  10. Soft-Fault Detection Technologies Developed for Electrical Power Systems

    NASA Technical Reports Server (NTRS)

    Button, Robert M.

    2004-01-01

    The NASA Glenn Research Center, partner universities, and defense contractors are working to develop intelligent power management and distribution (PMAD) technologies for future spacecraft and launch vehicles. The goals are to provide higher performance (efficiency, transient response, and stability), higher fault tolerance, and higher reliability through the application of digital control and communication technologies. It is also expected that these technologies will eventually reduce the design, development, manufacturing, and integration costs for large, electrical power systems for space vehicles. The main focus of this research has been to incorporate digital control, communications, and intelligent algorithms into power electronic devices such as direct-current to direct-current (dc-dc) converters and protective switchgear. These technologies, in turn, will enable revolutionary changes in the way electrical power systems are designed, developed, configured, and integrated in aerospace vehicles and satellites. Initial successes in integrating modern, digital controllers have proven that transient response performance can be improved using advanced nonlinear control algorithms. One technology being developed includes the detection of "soft faults," those not typically covered by current systems in use today. Soft faults include arcing faults, corona discharge faults, and undetected leakage currents. Using digital control and advanced signal analysis algorithms, we have shown that it is possible to reliably detect arcing faults in high-voltage dc power distribution systems. Another research effort has shown that low-level leakage faults and cable degradation can be detected by analyzing power system parameters over time. This additional fault detection capability will result in higher reliability for long-lived power systems such as reusable launch vehicles and space exploration missions.

  11. A hybrid fault diagnosis approach based on mixed-domain state features for rotating machinery.

    PubMed

    Xue, Xiaoming; Zhou, Jianzhong

    2017-01-01

    To further improve diagnosis accuracy and efficiency, a hybrid fault diagnosis approach based on mixed-domain state features, which systematically blends statistical analysis and artificial intelligence techniques, is proposed in this work for rolling element bearings. To simplify the fault diagnosis problem, the execution of the proposed method is divided into three steps: fault preliminary detection, fault type recognition, and fault degree identification. In the first step, a preliminary judgment about the health status of the equipment is made by a statistical analysis method based on permutation entropy theory. If a fault exists, the following two processes, based on the artificial intelligence approach, are performed to further recognize the fault type and then identify the fault degree. For these two subsequent steps, mixed-domain state features containing time-domain, frequency-domain, and multi-scale features are extracted to represent the fault peculiarity under different working conditions. As a powerful time-frequency analysis method, the fast EEMD method was employed to obtain the multi-scale features. Furthermore, due to information redundancy and the submergence of the original feature space, a novel manifold learning method (modified LGPCA) is introduced to realize low-dimensional representations of the high-dimensional feature space. Finally, two cases with 12 working conditions each were employed to evaluate the performance of the proposed method, with vibration signals measured from an experimental rolling element bearing test bench. The analysis results showed the effectiveness and superiority of the proposed method, whose diagnostic approach is well suited to practical application. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
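
    The preliminary detection step rests on permutation entropy. A sketch of the standard computation is given below, with a typical embedding dimension and delay that are not necessarily the paper's choices.

      # Normalized permutation entropy of a time series.
      import math
      from collections import Counter
      import numpy as np

      def permutation_entropy(x, order=3, delay=1):
          patterns = Counter()
          for i in range(len(x) - (order - 1) * delay):
              window = x[i:i + order * delay:delay]
              patterns[tuple(np.argsort(window))] += 1   # ordinal pattern
          total = sum(patterns.values())
          pe = -sum((c / total) * math.log(c / total)
                    for c in patterns.values())
          return pe / math.log(math.factorial(order))    # normalize to [0, 1]

      rng = np.random.default_rng(3)
      healthy = np.sin(np.linspace(0, 40 * np.pi, 2000))     # regular signal
      faulty = healthy + 0.8 * rng.standard_normal(2000)     # noisy signal
      print(permutation_entropy(healthy), permutation_entropy(faulty))
      # Higher entropy flags a more complex, possibly faulty, vibration.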

  12. Modeling and Simulation Reliable Spacecraft On-Board Computing

    NASA Technical Reports Server (NTRS)

    Park, Nohpill

    1999-01-01

    The proposed project will investigate modeling and simulation-driven testing and fault tolerance schemes for Spacecraft On-Board Computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration /reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the capabilities above mentioned, both the size and cost of the ground-station terminals have to be reduced by using reliable, high-throughput, fast and cost-effective on-board computing system which has been known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay) and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before employing a fault tolerance into the system. Testing and fault tolerance strategies should be driven by accurate performance models (i.e. throughput, delay, reliability and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module and a module for fault tolerance all of which interacting through a centered graphical user interface.

  13. Structural Controls of the Friction Constitutive Properties of Carbonate-bearing Faults

    NASA Astrophysics Data System (ADS)

    Carpenter, B. M.; Collettini, C.; Scuderi, M.; Marone, C.

    2012-12-01

    The identification of heterogeneous and complex post-seismic slip for the 2009, Mw = 6.3, L'Aquila earthquake highlights the importance of fault zone structure and frictional behavior. Many of the Mw 6 to 7 earthquakes that occur on normal faults in the active Apennines, such as L'Aquila, nucleate at depths where the lithology is dominated by carbonate rocks. Due to the complex structure observed in exhumed faults (i.e., the presence of highly polished principal slip surfaces, cemented cataclasites, and phyllosilicate-bearing, foliated fault gouge) as well as the large spectrum of fault slip behaviors identified worldwide, we designed a suite of experiments using intact and powdered samples to better constrain the possible slip behaviors of these carbonate-bearing faults. We collected samples from the exposed Rocchetta Fault, a ~10 km long normal fault with approximately 600 m of total offset. The exposed principal slip surface cuts through the Calcare Massiccio formation, which is present throughout central Italy at the depths of earthquake nucleation. We collected intact specimens of the natural slip surface and cemented cataclasite, as well as fragments of both which were later pulverized. Furthermore, we collected an intact sample of the hanging wall cataclasite and footwall limestone that contained the principal slip surface. We performed friction experiments in a variety of configurations (slip surface on slip surface, slip surface on powdered cataclasite, etc.) in order to investigate heterogeneity in frictional behavior as controlled by fault structure. We sheared saturated samples at a constant normal stress of 10 MPa at room temperature. Velocity-stepping tests were performed from 1 to 300 μm/s to identify the friction constitutive parameters of this fault material. Furthermore, a series of slide-hold-slide tests (holds of 3 to 1000 seconds) was performed to measure the amount of frictional healing and determine the frictional healing rate. Results from experiments designed to reactivate slip between the principal slip surface and cemented cataclasite show a peak friction value of ~0.95 followed by a ~3 MPa stress drop as the fault surface fails. Our other results suggest that earthquakes will easily nucleate in areas of the fault where two slip surfaces are in contact and are likely to propagate in areas where pulverized fault gouge is in contact with the slip surface. Our data show that samples collected from a single fault can exhibit a large range of slip behaviors. Heterogeneous frictional behavior documented in the lab must be combined with field observations of complex fault structure and seismological observations of the different modes of fault slip to further our understanding of fault slip. Future work will consist of thin section and XRD analysis of all experimental material.

  14. A robust detector for rolling element bearing condition monitoring based on the modulation signal bispectrum and its performance evaluation against the Kurtogram

    NASA Astrophysics Data System (ADS)

    Tian, Xiange; Xi Gu, James; Rehab, Ibrahim; Abdalla, Gaballa M.; Gu, Fengshou; Ball, A. D.

    2018-02-01

    Envelope analysis is a widely used method for rolling element bearing fault detection. To obtain high detection accuracy, it is critical to determine an optimal frequency narrowband for the envelope demodulation. However, many of the schemes which are used for the narrowband selection, such as the Kurtogram, can produce poor detection results because they are sensitive to random noise and aperiodic impulses which normally occur in practical applications. To achieve the purposes of denoising and frequency band optimisation, this paper proposes a novel modulation signal bispectrum (MSB) based robust detector for bearing fault detection. Because of its inherent noise suppression capability, the MSB allows effective suppression of both stationary random noise and discrete aperiodic noise. The high magnitude features that result from the use of the MSB also enhance the modulation effects of a bearing fault and can be used to provide optimal frequency bands for fault detection. The Kurtogram is generally accepted as a powerful means of selecting the most appropriate frequency band for envelope analysis, and as such it has been used as the benchmark comparator for performance evaluation in this paper. Both simulated and experimental data analysis results show that the proposed method produces more accurate and robust detection results than Kurtogram based approaches for common bearing faults under a range of representative scenarios.
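
    For reference, the envelope analysis pipeline that both the MSB detector and the Kurtogram feed into can be sketched in a few lines: band-pass around a resonance, take the Hilbert envelope, then inspect the envelope spectrum. The band limits, resonance, and fault frequency below are illustrative.

      # Envelope analysis baseline on a synthetic bearing-fault signal.
      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      fs = 20_000
      t = np.arange(0, 1.0, 1 / fs)
      f_fault, f_res = 87.0, 3_000.0       # illustrative frequencies
      impulses = (np.sin(2 * np.pi * f_fault * t) > 0.999).astype(float)
      ring = np.exp(-t[:200] * 2_000) * np.sin(2 * np.pi * f_res * t[:200])
      signal = np.convolve(impulses, ring, mode="same") \
               + 0.2 * np.random.randn(t.size)

      b, a = butter(4, [2_000, 4_000], btype="band", fs=fs)
      envelope = np.abs(hilbert(filtfilt(b, a, signal)))
      spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
      freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
      print("peak near", freqs[spectrum.argmax()], "Hz")   # approx. f_fault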

  15. Implanted component faults and their effects on gas turbine engine performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacLeod, J.D.; Taylor, V.; Laflamme, J.C.G.

    Under the sponsorship of the Canadian Department of National Defence, the Engine Laboratory of the National Research Council of Canada (NRCC) has established a program for the evaluation of component deterioration on gas turbine engine performance. The effort is aimed at investigating the effects of typical in-service faults on the performance characteristics of each individual engine component. The objective of the program is the development of a generalized fault library, which will be used with fault identification techniques in the field to reduce unscheduled maintenance. To evaluate the effects of implanted faults on the performance of a single spool engine, such as an Allison T56 turboprop engine, a series of faulted parts were installed. For this paper the following faults were analyzed: (a) first-stage turbine nozzle erosion damage; (b) first-stage turbine rotor blade untwist; (c) compressor seal wear; (d) first- and second-stage compressor blade tip clearance increase. This paper describes the project objectives, the experimental installation, and the results of the fault implantation on engine performance. Performance variations in both engine and component characteristics are discussed. As the performance changes were significant, a rigorous measurement uncertainty analysis is included.

  16. Fault Tolerance for VLSI Multicomputers

    DTIC Science & Technology

    1985-08-01

    that consists of hundreds or thousands of VLSI computation nodes interconnected by dedicated links. Some important applications of high-end computers...technology, and intended applications. A proposed fault tolerance scheme combines hardware that performs error detection and system-level protocols for...order to recover from the error and resume correct operation, a valid system state must be restored. A low-overhead, application-transparent error

  17. Award ER25750: Coordinated Infrastructure for Fault Tolerance Systems Indiana University Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumsdaine, Andrew

    2013-03-08

    The main purpose of the Coordinated Infrastructure for Fault Tolerance in Systems initiative has been to conduct research with a goal of providing end-to-end fault tolerance on a system-wide basis for applications and other system software. While fault tolerance has been an integral part of most high-performance computing (HPC) system software developed over the past decade, it has been treated mostly as a collection of isolated stovepipes. Visibility and response to faults has typically been limited to the particular hardware and software subsystems in which they are initially observed. Little fault information is shared across subsystems, allowing little flexibility or control on a system-wide basis, making it practically impossible to provide cohesive end-to-end fault tolerance in support of scientific applications. As an example, consider faults such as communication link failures that can be seen by a network library but are not directly visible to the job scheduler, or consider faults related to node failures that can be detected by system monitoring software but are not inherently visible to the resource manager. If information about such faults could be shared by the network libraries or monitoring software, then other system software, such as a resource manager or job scheduler, could ensure that failed nodes or failed network links were excluded from further job allocations and that further diagnosis could be performed. As a founding member and one of the lead developers of the Open MPI project, our efforts over the course of this project have been focused on making Open MPI more robust to failures by supporting various fault tolerance techniques, and using fault information exchange and coordination between MPI and the HPC system software stack, from the application, numeric libraries, and programming language runtime to other common system components such as job schedulers, resource managers, and monitoring tools.
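
    As one concrete illustration of making faults visible to the application layer, the sketch below switches MPI from its default abort-on-error behavior to returning errors, so a communication fault can be observed and handled rather than silently killing the job. This is a generic mpi4py example under assumed defaults, not the project's actual Open MPI fault tolerance machinery (e.g. run-through stabilization or ULFM).

```python
# Minimal sketch: make MPI return errors to the application instead of
# aborting, so faults become visible and can be logged or reported
# upward. Assumes an installed MPI and mpi4py; Open MPI's richer fault
# tolerance extensions are not shown.
from mpi4py import MPI

comm = MPI.COMM_WORLD
# The default handler is ERRORS_ARE_FATAL: any failure aborts the job.
comm.Set_errhandler(MPI.ERRORS_RETURN)

try:
    # An invalid destination rank provokes an error we can observe.
    comm.send(b"ping", dest=comm.Get_size())   # rank out of range
except MPI.Exception as e:
    # Instead of dying, the application can log the fault, exclude the
    # peer, or notify a resource manager -- the kind of cross-subsystem
    # coordination this project worked toward.
    print(f"rank {comm.Get_rank()} observed MPI error: {e.Get_error_string()}")
```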

  18. Research on bearing fault diagnosis of large machinery based on mathematical morphology

    NASA Astrophysics Data System (ADS)

    Wang, Yu

    2018-04-01

    To study automatic fault diagnosis for large machinery, a support vector machine (SVM) is used to classify and identify four common faults of such machinery. The extracted feature vectors are input to the classifier, and training and identification are performed with a multi-classification method. The optimal parameters of the support vector machine are found by trial and error combined with cross validation. The support vector machine is then compared with a BP neural network. The results show that the support vector machine requires less training time and achieves higher classification accuracy, making it more suitable for fault diagnosis research in large machinery.
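
    As a rough illustration of the workflow described, the sketch below trains a multi-class SVM on synthetic feature vectors and selects its parameters by cross-validated grid search, standing in for the paper's trial-and-error plus cross-validation step; the data, feature dimension, and parameter grid are assumptions.

```python
# Hedged sketch: multi-class SVM for machinery fault classification
# with parameters chosen by cross-validated grid search. The synthetic
# features stand in for the paper's extracted feature vectors.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Four fault classes, 8-dimensional feature vectors (illustrative).
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(50, 8)) for c in range(4)])
y = np.repeat(np.arange(4), 50)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Search C and gamma by 5-fold cross validation.
grid = GridSearchCV(SVC(kernel='rbf'),
                    {'C': [0.1, 1, 10, 100], 'gamma': [0.01, 0.1, 1]},
                    cv=5)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print("test accuracy: %.2f" % grid.score(X_te, y_te))
```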

  19. Fault diagnosis for analog circuits utilizing time-frequency features and improved VVRKFA

    NASA Astrophysics Data System (ADS)

    He, Wei; He, Yigang; Luo, Qiwu; Zhang, Chaolong

    2018-04-01

    This paper proposes a novel scheme for analog circuit fault diagnosis utilizing features extracted from the time-frequency representations of signals and an improved vector-valued regularized kernel function approximation (VVRKFA). First, the cross-wavelet transform is employed to yield the energy-phase distribution of the fault signals over the time and frequency domain. Since the distribution is high-dimensional, a supervised dimensionality reduction technique—the bilateral 2D linear discriminant analysis—is applied to build a concise feature set from the distributions. Finally, VVRKFA is utilized to locate the fault. In order to improve the classification performance, the quantum-behaved particle swarm optimization technique is employed to gradually tune the learning parameter of the VVRKFA classifier. The experimental results for the analog circuit faults classification have demonstrated that the proposed diagnosis scheme has an advantage over other approaches.
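
    The quantum-behaved particle swarm optimization step can be sketched generically. The code below runs a textbook QPSO update to tune a single scalar learning parameter against a stand-in validation-error objective; VVRKFA itself and the paper's actual objective are not implemented.

```python
# Illustrative sketch of quantum-behaved PSO (QPSO) tuning one learning
# parameter by minimising a validation-error objective. The objective
# here is a placeholder for "validation error of the classifier".
import numpy as np

rng = np.random.default_rng(1)

def val_error(theta):
    # Stand-in objective; minimum near theta = 10**0.5.
    return (np.log10(theta) - 0.5) ** 2 + 0.01

n, iters, lo, hi = 20, 50, 1e-3, 1e3
x = rng.uniform(np.log10(lo), np.log10(hi), n)      # search in log-space
pbest = x.copy()
pcost = np.array([val_error(10 ** v) for v in x])

for it in range(iters):
    g = pbest[np.argmin(pcost)]                     # global best
    mbest = pbest.mean()                            # mean of personal bests
    beta = 1.0 - 0.5 * it / iters                   # contraction-expansion coeff.
    phi, u = rng.random(n), rng.random(n)
    p = phi * pbest + (1 - phi) * g                 # local attractor
    sign = np.where(rng.random(n) < 0.5, 1.0, -1.0)
    x = np.clip(p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u),
                np.log10(lo), np.log10(hi))
    cost = np.array([val_error(10 ** v) for v in x])
    better = cost < pcost
    pbest[better], pcost[better] = x[better], cost[better]

print("tuned parameter: %.4g" % 10 ** pbest[np.argmin(pcost)])
```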

  20. Stirrers.

    ERIC Educational Resources Information Center

    Moody, P. J.

    1998-01-01

    An investigation was conducted of the relative drag experienced by different types of tea/coffee stirrers to make deductions about their stirring efficiencies. Experiments compared the stirring performances of convex and concave sides of a plastic teaspoon with the performance of a commercial stirrer. The performance of the spoon exceeded that of…

  1. The buried active faults in southeastern China as revealed by the relocated background seismicity and fault plane solutions

    NASA Astrophysics Data System (ADS)

    Zhu, A.; Wang, P.; Liu, F.

    2017-12-01

    The southeastern China in the mainland corresponds to the south China block, which is characterized by moderate historical seismicity and low stain rate. Most faults are buried under thick Quaternary deposits, so it is difficult to detect and locate them using the routine geological methods. Only a few have been identified to be active in late Quaternary, which leads to relatively high potentially seismic risk to this region due to the unexpected locations of the earthquakes. We performed both hypoDD and tomoDD for the background seismicity from 2000 to 2016 to investigate the buried faults. Some buried active faults are revealed by the relocated seismicity and the velocity structure, no geologically known faults corresponding to them and no surface active evidence ever observed. The geometries of the faults are obtained by analyzing the hypocentral distribution pattern and focal mechanism. The focal mechanism solutions indicate that all the revealed faults are dominated in strike-slip mechanisms, or with some thrust components. While the previous fault investigation and detection results show that most of the Quaternary faults in southeastern China are dominated by normal movement. It suggests that there may exist two fault systems in deep and shallow tectonic regimes. The revealed faults may construct the deep one that act as the seismogenic faults, and the normal faults at shallow cannot generate the destructive earthquakes. The variation in the Curie-point depths agrees well with the structure plane of the revealed active faults, suggesting that the faults may have changed the deep structure.

  2. On-board fault diagnostics for fly-by-light flight control systems using neural network flight processors

    NASA Astrophysics Data System (ADS)

    Urnes, James M., Sr.; Cushing, John; Bond, William E.; Nunes, Steve

    1996-10-01

    Fly-by-Light control systems offer higher performance for fighter and transport aircraft, with efficient fiber optic data transmission, electric control surface actuation, and multi-channel high capacity centralized processing combining to provide maximum aircraft flight control system handling qualities and safety. The key to efficient support for these vehicles is timely and accurate fault diagnostics of all control system components. These diagnostic tests are best conducted during flight when all facts relating to the failure are present. The resulting data can be used by the ground crew for efficient repair and turnaround of the aircraft, saving time and money in support costs. These difficult to diagnose (Cannot Duplicate) fault indications average 40 - 50% of maintenance activities on today's fighter and transport aircraft, adding significantly to fleet support cost. Fiber optic data transmission can support a wealth of data for fault monitoring; the most efficient method of fault diagnostics is accurate modeling of the component response under normal and failed conditions for use in comparison with the actual component flight data. Neural Network hardware processors offer an efficient and cost-effective method to install fault diagnostics in flight systems, permitting on-board diagnostic modeling of very complex subsystems. Task 2C of the ARPA FLASH program is a design demonstration of this diagnostics approach, using the very high speed computation of the Adaptive Solutions Neural Network processor to monitor an advanced Electrohydrostatic control surface actuator linked through a AS-1773A fiber optic bus. This paper describes the design approach and projected performance of this on-line diagnostics system.

  3. Automatic translation of digraph to fault-tree models

    NASA Technical Reports Server (NTRS)

    Iverson, David L.

    1992-01-01

    The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.
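
    Minimal cut set computation, which the abstract notes dominates the translator's run time, can be illustrated on a toy fault tree. The sketch below performs a MOCUS-style recursive expansion of AND/OR gates followed by minimization; the tree and event names are invented for illustration.

```python
# Hedged sketch of minimal cut set computation on a small fault tree.
# The tree below is a toy example, not one from the paper.
from itertools import product

# Gates: ('AND', children) or ('OR', children); strings are basic events.
tree = ('AND', ['power_loss',
                ('OR', ['pump_fail', ('AND', ['valve_stuck', 'sensor_fail'])])])

def cut_sets(node):
    """Return the cut sets of a node as a list of frozensets."""
    if isinstance(node, str):
        return [frozenset([node])]
    kind, kids = node
    kid_sets = [cut_sets(k) for k in kids]
    if kind == 'OR':                      # union of children's cut sets
        return [cs for sets in kid_sets for cs in sets]
    # AND: every combination of one cut set per child, merged together
    return [frozenset().union(*combo) for combo in product(*kid_sets)]

def minimise(sets):
    """Drop any cut set that is a proper superset of another."""
    return [s for s in sets if not any(o < s for o in sets)]

for cs in minimise(cut_sets(tree)):
    print(sorted(cs))
```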

  4. Towards a Fault-based SHA in the Southern Upper Rhine Graben

    NASA Astrophysics Data System (ADS)

    Baize, Stéphane; Reicherter, Klaus; Thomas, Jessica; Chartier, Thomas; Cushing, Edward Marc

    2016-04-01

    A brief overview of a seismic map of the Upper Rhine Graben area (say between Strasbourg and Basel) reveals that the region is seismically active. The area has been hit recently by shallow and moderate quakes but, historically, strong quakes damaged and devastated populated zones. Several authors previously suggested, through preliminary geomorphological and geophysical studies, that active faults could be traced along the eastern margin of the graben. Thus, fault-based PSHA (probabilistic seismic hazard assessment) studies should be developed. Nevertheless, most of the input data in fault-based PSHA models are highly uncertain, based upon sparse or hypothetical data. Geophysical and geological data document the presence of post-Tertiary westward-dipping faults in the area. However, our first investigations suggest that the available surface fault maps do not provide a reliable record of Quaternary fault traces. Slip rate values that can currently be used in fault-PSHA models are based on regional stratigraphic data, but these include neither detailed dating nor clear base-surface contours. Several hints of fault activity do exist, and we now have relevant tools and techniques to figure out the activity of the faults of concern. Our preliminary analyses suggest that LiDAR topography can adequately image the fault segments and, thanks to detailed geomorphological analysis, these data allow tracking of cumulative fault offsets. Because the fault models must therefore be considered highly uncertain, our project for the next 3 years is to acquire and analyze these accurate topographical data, to trace the active faults, and to determine slip rates through dating of relevant features. Eventually, we plan to find a key site to perform a paleoseismological trench, because this approach has proved worthwhile in the Graben, both to the north (Worms and Strasbourg) and to the south (Basel). This would be done in order to prove definitively whether the faults ruptured the ground surface during the Quaternary, and to determine key fault parameters such as the magnitude and age of large events.

  5. Orion GN&C Fault Management System Verification: Scope And Methodology

    NASA Technical Reports Server (NTRS)

    Brown, Denise; Weiler, David; Flanary, Ronald

    2016-01-01

    In order to ensure long-term ability to meet mission goals and to provide for the safety of the public, ground personnel, and any crew members, nearly all spacecraft include a fault management (FM) system. For a manned vehicle such as Orion, the safety of the crew is of paramount importance. The goal of the Orion Guidance, Navigation and Control (GN&C) fault management system is to detect, isolate, and respond to faults before they can result in harm to the human crew or loss of the spacecraft. Verification of fault management/fault protection capability is challenging due to the large number of possible faults in a complex spacecraft, the inherent unpredictability of faults, the complexity of interactions among the various spacecraft components, and the inability to easily quantify human reactions to failure scenarios. The Orion GN&C Fault Detection, Isolation, and Recovery (FDIR) team has developed a methodology for bounding the scope of FM system verification while ensuring sufficient coverage of the failure space and providing high confidence that the fault management system meets all safety requirements. The methodology utilizes a swarm search algorithm to identify failure cases that can result in catastrophic loss of the crew or the vehicle and rare event sequential Monte Carlo to verify safety and FDIR performance requirements.
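
    The Monte Carlo half of this methodology ultimately reduces to estimating a small failure probability with an uncertainty bound. The sketch below does this for an invented two-event fault model; the rates, trial count, and model are assumptions, and true rare-event methods would add importance sampling or splitting on top of plain sampling.

```python
# Illustrative sketch of Monte Carlo verification of a small
# loss-of-mission probability. The toy fault model and rates below are
# assumptions; Orion's actual simulation and requirements are not
# described in this abstract.
import numpy as np

rng = np.random.default_rng(42)
N = 200_000                              # Monte Carlo trials

def mission_fails(rng):
    # Toy model: catastrophic only if a fault occurs AND the detection
    # (FDIR) layer misses it.
    fault = rng.random() < 1e-3          # assumed fault rate per mission
    missed = rng.random() < 5e-2         # assumed FDIR miss rate
    return fault and missed

failures = sum(mission_fails(rng) for _ in range(N))
p_hat = failures / N
# 95% normal-approximation confidence half-width on the estimate
half = 1.96 * np.sqrt(p_hat * (1 - p_hat) / N)
print(f"estimated P(loss) = {p_hat:.2e} +/- {half:.2e}")
```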

  6. The influence of the fault zone width on land surface vibrations after the high-energy tremor in the "Rydułtowy-Anna" hard coal mine

    NASA Astrophysics Data System (ADS)

    Pilecka, Elżbieta; Szwarkowski, Dariusz

    2018-04-01

    In this article, a numerical analysis of the impact of the width of the fault zone on land surface vibrations in the area of the "Rydułtowy-Anna" hard coal mine is performed. The analysis covered the dynamic impact of the actual seismic wave after the high-energy tremor of 7 June 2013. Vibrations at the land surface are a measure of the mining damage risk. It is particularly the horizontal components of land vibrations that are dangerous to buildings, which is reflected in the Mining Scales of Intensity (GSI) of vibrations. The propagation of a seismic wave through the rock mass, from the hypocenter to the surface, depends on the lithology of the area and the presence of fault zones. The network of faults of various widths cutting the rock mass influences the amplitude of the tremor reaching the surface. The impact of the width of the fault zone was analyzed for three alternative widths.

  7. Dynamic Fault Detection Chassis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mize, Jeffery J

    2007-01-01

    The high frequency switching megawatt-class High Voltage Converter Modulator (HVCM) developed by Los Alamos National Laboratory for the Oak Ridge National Laboratory's Spallation Neutron Source (SNS) is now in operation. One of the major problems with the modulator systems is shoot-thru conditions that can occur in an IGBT H-bridge topology, resulting in large fault currents and device failure in a few microseconds. The Dynamic Fault Detection Chassis (DFDC) is a fault monitoring system; it monitors transformer flux saturation using a window comparator and dV/dt events on the cathode voltage caused by any abnormality such as capacitor breakdown, transformer primary turns shorts, or dielectric breakdown between the transformer primary and secondary. If faults are detected, the DFDC will inhibit the IGBT gate drives and shut the system down, significantly reducing the possibility of a shoot-thru condition or other equipment-damaging events. In this paper, we will present system integration considerations and performance characteristics of the DFDC, and discuss its ability to significantly reduce costly down time for the entire facility.

  8. 77 FR 18687 - Guidance Regarding Deduction and Capitalization of Expenditures Related to Tangible Property...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-28

    ... 20. * * * The plumbing fixtures in all the restrooms perform a discrete and critical function in the... fixtures in the hotel building perform a discrete and critical function in the operation of the plumbing...

  9. Differential involvement of left prefrontal cortex in inductive and deductive reasoning.

    PubMed

    Goel, Vinod; Dolan, Raymond J

    2004-10-01

    While inductive and deductive reasoning are considered distinct logical and psychological processes, little is known about their respective neural basis. To address this issue we scanned 16 subjects with fMRI, using an event-related design, while they engaged in inductive and deductive reasoning tasks. Both types of reasoning were characterized by activation of left lateral prefrontal and bilateral dorsal frontal, parietal, and occipital cortices. Neural responses unique to each type of reasoning determined from the Reasoning Type (deduction and induction) by Task (reasoning and baseline) interaction indicated greater involvement of left inferior frontal gyrus (BA 44) in deduction than induction, while left dorsolateral (BA 8/9) prefrontal gyrus showed greater activity during induction than deduction. This pattern suggests a dissociation within prefrontal cortex for deductive and inductive reasoning.

  10. A perspectivist review of supermetallicity studies. II

    NASA Astrophysics Data System (ADS)

    Taylor, B. J.

    A summary of indirect deductions is provided, taking into account a high-dispersion analysis of Delta Pav conducted by Rodgers (1969), a study of three K giant - F dwarf binaries performed by Deming and Butler (1979), and investigations involving the Hyades giants. Attention is given to an analysis of explanations, the analyses reported by Peterson (1976), the most recent results, future work on the VSL Giants, a summary of deficiencies in the methodology of supermetallicity, and the present state of the M67 problem.

  11. Ground Surface Deformation in Unconsolidated Sediments Caused by Bedrock Fault Movements: Dip-Slip and Strike-Slip Fault Model Test and Field Survey

    NASA Astrophysics Data System (ADS)

    Ueta, K.; Tani, K.

    2001-12-01

    Sandbox experiments were performed to investigate ground surface deformation in unconsolidated sediments caused by dip-slip and strike-slip motion on bedrock faults. A 332.5 cm long, 200 cm high, and 40 cm wide sandbox was used in the dip-slip fault model test. In the strike-slip fault test, a 600 cm long, 250 cm wide, and 60 cm high sandbox and a 170 cm long, 25 cm wide, 15 cm high sandbox were used. Computerized X-ray tomography applied to the sandbox experiments made it possible to analyze the kinematic evolution, as well as the three-dimensional geometry, of the faults. The fault type, fault dip, fault displacement, thickness and density of the sandpack, and grain size of the sand were varied between experiments. Field surveys of active faults in Japan and California were also made to investigate the deformation of unconsolidated sediments overlying bedrock faults. A comparison of the experimental results with natural cases of active faults reveals the following: (1) In the case of dip-slip faulting, the shear bands do not appear as a single linear plane but in an en echelon pattern. Thicker and finer unconsolidated sediments produce more shear bands and clearer en echelon shear band patterns. (2) In the case of left-lateral strike-slip faulting, the deformation of the sand pack with increasing basement displacement is observed as follows: a) In three dimensions, right-stepping shears with a "cirque" / "shell" / "ship body" shape develop on both sides of the basement fault. The shears on one side of the basement fault join those on the other side, resulting in helicoidal shear surfaces. Shears reach the surface of the sand near or above the basement fault, and en echelon Riedel shears are observed at the surface of the sand. b) Right-stepping pressure ridges develop within the zone defined by the Riedel shears. c) Lower-angle shears generally branch off from the first Riedel shears. d) Right-stepping helicoidal lower-angle shears offset the Riedel shears and pressure ridges, and both left-stepping and right-stepping pressure ridges are observed. e) With displacement concentrated on the central throughgoing fault zone, a "zone of shear bands" (ZSB) develops directly above the basement fault. The geometry of the ZSB shows a strong resemblance to the linear ridge and trough geomorphology associated with active strike-slip faulting. (3) In the case of normal faulting, the location of the surface fault rupture is just above the bedrock fault, regardless of the fault dip. On the other hand, the location of the surface rupture of a reverse fault is closely related to the fault dip. In the case of strike-slip faulting, the deformation zone in dense sand is wider than that in loose sand. (4) The horizontal distance of the surface rupture from the bedrock fault normalized by the height of the sand mass (W/H) does not depend on the height of the sand mass or the grain size of the sand. The values of W/H from the tests agree well with those of earthquake faults. (5) The normalized base displacement required to propagate the shear rupture zone to the ground surface (D/H) is lower for normal faulting than for reverse faulting and strike-slip faulting.

  12. 20 CFR 416.724 - Amounts of penalty deductions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Amounts of penalty deductions. 416.724 Section 416.724 Employees' Benefits SOCIAL SECURITY ADMINISTRATION SUPPLEMENTAL SECURITY INCOME FOR THE AGED, BLIND, AND DISABLED Reports Required Penalty Deductions § 416.724 Amounts of penalty deductions...

  13. 20 CFR 416.724 - Amounts of penalty deductions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Amounts of penalty deductions. 416.724 Section 416.724 Employees' Benefits SOCIAL SECURITY ADMINISTRATION SUPPLEMENTAL SECURITY INCOME FOR THE AGED, BLIND, AND DISABLED Reports Required Penalty Deductions § 416.724 Amounts of penalty deductions...

  14. Dynamic rupture scenarios from Sumatra to Iceland - High-resolution earthquake source physics on natural fault systems

    NASA Astrophysics Data System (ADS)

    Gabriel, A. A.; Madden, E. H.; Ulrich, T.; Wollherr, S.

    2016-12-01

    Capturing the observed complexity of earthquake sources in dynamic rupture simulations may require: non-linear fault friction, thermal and fluid effects, heterogeneous fault stress and strength initial conditions, fault curvature and roughness, and on- and off-fault non-elastic failure. All of these factors have been independently shown to alter dynamic rupture behavior and thus possibly influence the degree of realism attainable via simulated ground motions. In this presentation we will show examples of high-resolution earthquake scenarios, e.g. based on the 2004 Sumatra-Andaman Earthquake and a potential rupture of the Husavik-Flatey fault system in Northern Iceland. The simulations combine a multitude of representations of source complexity at the necessary spatio-temporal resolution, enabled by excellent scalability on modern HPC systems. Such simulations allow an analysis of the dominant factors impacting earthquake source physics and ground motions given distinct tectonic settings or distinct focuses of seismic hazard assessment. Across all simulations, we find that fault geometry, together with the regional background stress state, provides a first-order influence on source dynamics and the emanated seismic wave field. The dynamic rupture models are performed with SeisSol, a software package based on an ADER-Discontinuous Galerkin scheme for solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. Use of unstructured tetrahedral meshes allows for a realistic representation of the non-planar fault geometry, subsurface structure, and bathymetry. The results presented highlight the fact that modern numerical methods are essential to further our understanding of earthquake source physics and complement both physics-based ground motion research and empirical approaches in seismic hazard analysis.

  15. Breaking down barriers in cooperative fault management: Temporal and functional information displays

    NASA Technical Reports Server (NTRS)

    Potter, Scott S.; Woods, David D.

    1994-01-01

    At the highest level, the fundamental question addressed by this research is how to aid human operators engaged in dynamic fault management. In dynamic fault management there is some underlying dynamic process (an engineered or physiological process referred to as the monitored process - MP) whose state changes over time and whose behavior must be monitored and controlled. In these types of applications (dynamic, real-time systems), a vast array of sensor data is available to provide information on the state of the MP. Faults disturb the MP and diagnosis must be performed in parallel with responses to maintain process integrity and to correct the underlying problem. These situations frequently involve time pressure, multiple interacting goals, high consequences of failure, and multiple interleaved tasks.

  16. Automated fault-management in a simulated spaceflight micro-world

    NASA Technical Reports Server (NTRS)

    Lorenz, Bernd; Di Nocera, Francesco; Rottger, Stefan; Parasuraman, Raja

    2002-01-01

    BACKGROUND: As human spaceflight missions extend in duration and distance from Earth, a self-sufficient crew will bear far greater onboard responsibility and authority for mission success. This will increase the need for automated fault management (FM). Human factors issues in the use of such systems include maintenance of cognitive skill, situational awareness (SA), trust in automation, and workload. This study examines the human performance consequences of operator use of intelligent FM support in interaction with an autonomous, space-related, atmospheric control system. METHODS: An expert system representing a model-based reasoning agent supported operators at a low level of automation (LOA) with a computerized fault finding guide, at a medium LOA with an automated diagnosis and recovery advisory, and at a high LOA with automated diagnosis and recovery implementation, subject to operator approval or veto. Ten percent of the experimental trials involved complete failure of FM support. RESULTS: Benefits of automation were reflected in more accurate diagnoses, shorter fault identification time, and reduced subjective operator workload. Unexpectedly, fault identification times deteriorated more at the medium than at the high LOA during automation failure. Analyses of information sampling behavior showed that offloading operators from recovery implementation during reliable automation enabled operators at high LOA to engage in fault assessment activities. CONCLUSIONS: The potential threat to SA imposed by high-level automation, in which decision advisories are automatically generated, need not inevitably be counteracted by choosing a lower LOA. Instead, freeing operator cognitive resources by automatic implementation of recovery plans at a higher LOA can promote better fault comprehension, so long as the automation interface is designed to support efficient information sampling.

  17. 26 CFR 1.873-1 - Deductions allowed nonresident alien individuals.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 9 2010-04-01 2010-04-01 false Deductions allowed nonresident alien individuals... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Nonresident Aliens and Foreign Corporations § 1.873-1 Deductions allowed nonresident alien individuals. (a) General provisions—(1) Allocation of deductions. In...

  18. A methodology towards virtualisation-based high performance simulation platform supporting multidisciplinary design of complex products

    NASA Astrophysics Data System (ADS)

    Ren, Lei; Zhang, Lin; Tao, Fei; (Luke) Zhang, Xiaolong; Luo, Yongliang; Zhang, Yabin

    2012-08-01

    Multidisciplinary design of complex products leads to an increasing demand for high performance simulation (HPS) platforms. One great challenge is how to achieve highly efficient utilisation of large-scale simulation resources in distributed and heterogeneous environments. This article reports a virtualisation-based methodology to realise an HPS platform. The research is driven by issues concerning large-scale simulation resource deployment and complex simulation environment construction, efficient and transparent utilisation of fine-grained simulation resources, and highly reliable simulation with fault tolerance. A framework of a virtualisation-based simulation platform (VSIM) is first proposed. The article then investigates and discusses key approaches in VSIM, including simulation resource modelling, a method for automatically deploying simulation resources for dynamic construction of the system environment, and a live migration mechanism in case of faults in run-time simulation. Furthermore, the proposed methodology is applied to a multidisciplinary design system for aircraft virtual prototyping, and some experiments are conducted. The experimental results show that the proposed methodology can (1) significantly improve the utilisation of fine-grained simulation resources, (2) greatly reduce deployment time and increase flexibility for simulation environment construction, and (3) achieve fault-tolerant simulation.

  19. Feature extraction based on semi-supervised kernel Marginal Fisher analysis and its application in bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Jiang, Li; Xuan, Jianping; Shi, Tielin

    2013-12-01

    Generally, the vibration signals of faulty machinery are non-stationary and nonlinear under complicated operating conditions. Therefore, it is a big challenge for machinery fault diagnosis to extract optimal features for improving classification accuracy. This paper proposes semi-supervised kernel Marginal Fisher analysis (SSKMFA) for feature extraction, which can discover the intrinsic manifold structure of dataset, and simultaneously consider the intra-class compactness and the inter-class separability. Based on SSKMFA, a novel approach to fault diagnosis is put forward and applied to fault recognition of rolling bearings. SSKMFA directly extracts the low-dimensional characteristics from the raw high-dimensional vibration signals, by exploiting the inherent manifold structure of both labeled and unlabeled samples. Subsequently, the optimal low-dimensional features are fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories and severities of bearings. The experimental results demonstrate that the proposed approach improves the fault recognition performance and outperforms the other four feature extraction methods.
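
    A hedged sketch of this pipeline shape is shown below: a kernel dimensionality reduction step followed by the simplest KNN classifier. SSKMFA itself is not available in standard libraries, so kernel PCA stands in for the feature extraction step, and the data are synthetic placeholders.

```python
# Hedged sketch of the diagnosis pipeline: kernel feature extraction
# followed by a plain KNN classifier. Kernel PCA is a stand-in for
# SSKMFA; the data are synthetic placeholders for vibration features.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Three bearing conditions, 64-dimensional "raw" vibration features.
X = np.vstack([rng.normal(c, 1.0, size=(60, 64)) for c in (0, 1, 2)])
y = np.repeat([0, 1, 2], 60)

pipe = make_pipeline(KernelPCA(n_components=5, kernel='rbf', gamma=0.01),
                     KNeighborsClassifier(n_neighbors=3))
scores = cross_val_score(pipe, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```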

  20. Testability analysis on a hydraulic system in a certain equipment based on simulation model

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Cong, Hua; Liu, Yuanhong; Feng, Fuzhou

    2018-03-01

    To address the complicated structure of hydraulic systems and the shortage of fault statistics, a multi-value testability analysis method based on a simulation model is proposed. Based on an AMESim simulation model, the method injects simulated faults and records the variation of test parameters, such as pressure and flow rate, at each test point compared with those under normal conditions. A multi-value fault-test dependency matrix is thus established, and the fault detection rate (FDR) and fault isolation rate (FIR) are calculated from it. Finally, the testability and fault diagnosis capability of the system are analyzed and evaluated; they reach only 54% (FDR) and 23% (FIR). To improve the testability performance of the system, the number and position of the test points are then optimized. Results show that the proposed test placement scheme can be used to solve the problems of difficulty, inefficiency and high cost in system maintenance.
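
    The FDR/FIR computation from a dependency matrix can be made concrete. The sketch below uses a binary toy matrix (the paper's matrix is multi-valued, which only changes the signature comparison): FDR is the fraction of faults to which some test responds, and FIR is the fraction of detected faults with a unique test signature.

```python
# Hedged sketch: fault detection rate (FDR) and fault isolation rate
# (FIR) from a fault-test dependency matrix. The matrix is a toy
# binary example; the paper's matrix is multi-valued.
import numpy as np

# Rows = faults, columns = test points; 1 means the test responds.
D = np.array([[1, 0, 1],
              [1, 0, 1],    # same signature as fault 0: detectable, not isolable
              [0, 1, 0],
              [0, 0, 0]])   # never detected

detected = D.any(axis=1)
fdr = detected.mean()

# A detected fault is isolable if its test signature is unique.
sigs = [tuple(row) for row in D]
isolable = np.array([detected[i] and sigs.count(sigs[i]) == 1
                     for i in range(len(D))])
fir = isolable.sum() / max(detected.sum(), 1)

print(f"FDR = {fdr:.0%}, FIR = {fir:.0%}")
```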

  1. Reliable High Performance Peta- and Exa-Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bronevetsky, G

    2012-04-02

    As supercomputers become larger and more powerful, they are growing increasingly complex. This is reflected both in the exponentially increasing numbers of components in HPC systems (LLNL is currently installing the 1.6 million core Sequoia system) as well as in the wide variety of software and hardware components that a typical system includes. At this scale it becomes infeasible to make each component sufficiently reliable to prevent regular faults somewhere in the system or to account for all possible cross-component interactions. The resulting faults and instability cause HPC applications to crash, perform sub-optimally or even produce erroneous results. As supercomputers continue to approach Exascale performance and full system reliability becomes prohibitively expensive, we will require novel techniques to bridge the gap between the lower reliability provided by hardware systems and users' unchanging need for consistent performance and reliable results. Previous research on HPC system reliability has developed various techniques for tolerating and detecting various types of faults. However, these techniques have seen very limited real applicability because of our poor understanding of how real systems are affected by complex faults such as soft fault-induced bit flips or performance degradations. Prior work on such techniques has had very limited practical utility because it has generally focused on analyzing the behavior of entire software/hardware systems both during normal operation and in the face of faults. Because such behaviors are extremely complex, such studies have only produced coarse behavioral models of limited sets of software/hardware system stacks. Since this provides little insight into the many different system stacks and applications used in practice, this work has had little real-world impact. My project addresses this problem by developing a modular methodology to analyze the behavior of applications and systems during both normal and faulty operation. By synthesizing models of individual components into whole-system behavior models, my work is making it possible to automatically understand the behavior of arbitrary real-world systems and enable them to tolerate a wide range of system faults. My project follows a multi-pronged research strategy. Section II discusses my work on modeling the behavior of existing applications and systems. Section II.A discusses resilience in the face of soft faults and Section II.B looks at techniques to tolerate performance faults. Finally, Section III presents an alternative approach that studies how a system should be designed from the ground up to make resilience natural and easy.
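
    One standard technique from this space, application-level checkpoint/restart, is sketched below so a long computation survives a crash-and-restart fault; the file name, step count, and interval are illustrative, and tolerating soft errors such as bit flips requires additional detection beyond what is shown.

```python
# Illustrative sketch of application-level checkpoint/restart, one
# standard HPC fault tolerance technique. Names and intervals are
# illustrative assumptions.
import os
import pickle

CKPT = "state.ckpt"   # assumed checkpoint path

def load_state():
    # Resume from the last checkpoint if one exists, else start fresh.
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "acc": 0.0}

state = load_state()
for step in range(state["step"], 1_000):
    state["acc"] += step * 1e-6          # stand-in for real work
    state["step"] = step + 1
    if state["step"] % 100 == 0:         # periodic checkpoint
        with open(CKPT + ".tmp", "wb") as f:
            pickle.dump(state, f)
        os.replace(CKPT + ".tmp", CKPT)  # atomic rename on POSIX

print("finished at step", state["step"], "acc", state["acc"])
```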

  2. 26 CFR 1.832-5 - Deductions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... TAXES Other Insurance Companies § 1.832-5 Deductions. (a) The deductions allowable are specified in... provisions of section 1212. The deduction is the same as that allowed mutual insurance companies subject to... companies, other than mutual fire insurance companies described in section 831(a)(3)(A) and the regulations...

  3. 42 CFR 408.43 - Deduction from social security benefits.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 2 2011-10-01 2011-10-01 false Deduction from social security benefits. 408.43... § 408.43 Deduction from social security benefits. SSA, acting as CMS's agent, deducts the premiums from the monthly social security benefits if the enrollee is not entitled to railroad retirement benefits...

  4. Children's and Adults' Evaluation of Their Own Inductive Inferences, Deductive Inferences, and Guesses

    ERIC Educational Resources Information Center

    Pillow, Bradford H.; Pearson, RaeAnne M.

    2009-01-01

    Adults' and kindergarten through fourth-grade children's evaluations and explanations of inductive inferences, deductive inferences, and guesses were assessed. Beginning in kindergarten, participants rated deductions as more certain than weak inductions or guesses. Beginning in third grade, deductions were rated as more certain than strong…

  5. 48 CFR 1852.236-71 - Additive or deductive items.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false Additive or deductive... and Clauses 1852.236-71 Additive or deductive items. As prescribed in 1836.570(a), insert the following provision: Additive or Deductive Items (MAR 1989) (a) The low bidder for purposes of award shall...

  6. 48 CFR 452.236-70 - Additive or Deductive Items.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 436.205, insert the following provision: Additive or... listed in the schedule) those additive or deductive bid items providing the most features of the work...

  7. 48 CFR 1452.236-71 - Additive or Deductive Items.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 1436.571, insert the following provision: Additive or... the bidder having the lowest total of the base bid and a combination of additive and deductive items...

  8. 48 CFR 1436.571 - Additive and deductive items.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Additive and deductive... Additive and deductive items. If it appears that funds available for a construction project may be... the work as specified and for one or more additive or deductive bid items which add or omit specified...

  9. 48 CFR 452.236-70 - Additive or Deductive Items.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 436.205, insert the following provision: Additive or... listed in the schedule) those additive or deductive bid items providing the most features of the work...

  10. 48 CFR 1836.213-370 - Additive and deductive items.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Additive and deductive... Special Aspects of Contracting for Construction 1836.213-370 Additive and deductive items. When it appears... the work generally as specified and one or more additive or deductive bid items progressively adding...

  11. 48 CFR 452.236-70 - Additive or Deductive Items.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 436.205, insert the following provision: Additive or... listed in the schedule) those additive or deductive bid items providing the most features of the work...

  12. 48 CFR 1836.213-370 - Additive and deductive items.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false Additive and deductive... Special Aspects of Contracting for Construction 1836.213-370 Additive and deductive items. When it appears... the work generally as specified and one or more additive or deductive bid items progressively adding...

  13. 48 CFR 1436.571 - Additive and deductive items.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Additive and deductive... Additive and deductive items. If it appears that funds available for a construction project may be... the work as specified and for one or more additive or deductive bid items which add or omit specified...

  14. 48 CFR 1836.213-370 - Additive and deductive items.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Additive and deductive... Special Aspects of Contracting for Construction 1836.213-370 Additive and deductive items. When it appears... the work generally as specified and one or more additive or deductive bid items progressively adding...

  15. 48 CFR 1836.213-370 - Additive and deductive items.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Additive and deductive... Special Aspects of Contracting for Construction 1836.213-370 Additive and deductive items. When it appears... the work generally as specified and one or more additive or deductive bid items progressively adding...

  16. 48 CFR 1436.571 - Additive and deductive items.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Additive and deductive... Additive and deductive items. If it appears that funds available for a construction project may be... the work as specified and for one or more additive or deductive bid items which add or omit specified...

  17. 48 CFR 452.236-70 - Additive or Deductive Items.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 436.205, insert the following provision: Additive or... listed in the schedule) those additive or deductive bid items providing the most features of the work...

  18. 48 CFR 1836.213-370 - Additive and deductive items.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Additive and deductive... Special Aspects of Contracting for Construction 1836.213-370 Additive and deductive items. When it appears... the work generally as specified and one or more additive or deductive bid items progressively adding...

  19. 48 CFR 1436.571 - Additive and deductive items.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Additive and deductive... Additive and deductive items. If it appears that funds available for a construction project may be... the work as specified and for one or more additive or deductive bid items which add or omit specified...

  20. 48 CFR 1452.236-71 - Additive or Deductive Items.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 1436.571, insert the following provision: Additive or... the bidder having the lowest total of the base bid and a combination of additive and deductive items...

  1. 48 CFR 1852.236-71 - Additive or deductive items.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Additive or deductive... and Clauses 1852.236-71 Additive or deductive items. As prescribed in 1836.570(a), insert the following provision: Additive or Deductive Items (MAR 1989) (a) The low bidder for purposes of award shall...

  2. 48 CFR 1852.236-71 - Additive or deductive items.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Additive or deductive... and Clauses 1852.236-71 Additive or deductive items. As prescribed in 1836.570(a), insert the following provision: Additive or Deductive Items (MAR 1989) (a) The low bidder for purposes of award shall...

  3. 48 CFR 452.236-70 - Additive or Deductive Items.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 436.205, insert the following provision: Additive or... listed in the schedule) those additive or deductive bid items providing the most features of the work...

  4. 48 CFR 1852.236-71 - Additive or deductive items.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Additive or deductive... and Clauses 1852.236-71 Additive or deductive items. As prescribed in 1836.570(a), insert the following provision: Additive or Deductive Items (MAR 1989) (a) The low bidder for purposes of award shall...

  5. 48 CFR 1452.236-71 - Additive or Deductive Items.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 1436.571, insert the following provision: Additive or... the bidder having the lowest total of the base bid and a combination of additive and deductive items...

  6. 48 CFR 1436.571 - Additive and deductive items.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Additive and deductive... Additive and deductive items. If it appears that funds available for a construction project may be... the work as specified and for one or more additive or deductive bid items which add or omit specified...

  7. 48 CFR 1452.236-71 - Additive or Deductive Items.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 1436.571, insert the following provision: Additive or... the bidder having the lowest total of the base bid and a combination of additive and deductive items...

  8. 48 CFR 1452.236-71 - Additive or Deductive Items.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Additive or Deductive... Additive or Deductive Items. As prescribed in 1436.571, insert the following provision: Additive or... the bidder having the lowest total of the base bid and a combination of additive and deductive items...

  9. 12 CFR 347.208 - Assessment base deductions by insured branch.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 5 2012-01-01 2012-01-01 false Assessment base deductions by insured branch... STATEMENTS OF GENERAL POLICY INTERNATIONAL BANKING Foreign Banks § 347.208 Assessment base deductions by..., branches, agencies, or wholly owned subsidiaries may be deducted from the assessment base of the insured...

  10. 12 CFR 347.208 - Assessment base deductions by insured branch.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 4 2011-01-01 2011-01-01 false Assessment base deductions by insured branch... STATEMENTS OF GENERAL POLICY INTERNATIONAL BANKING Foreign Banks § 347.208 Assessment base deductions by..., branches, agencies, or wholly owned subsidiaries may be deducted from the assessment base of the insured...

  11. 22 CFR 512.22 - Deduction from pay.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 2 2014-04-01 2014-04-01 false Deduction from pay. 512.22 Section 512.22 Foreign Relations BROADCASTING BOARD OF GOVERNORS COLLECTION OF DEBTS UNDER THE DEBT COLLECTION ACT OF 1982 Salary Offset § 512.22 Deduction from pay. (a) Deduction by salary offset, from an employee's...

  12. 22 CFR 512.22 - Deduction from pay.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 2 2012-04-01 2009-04-01 true Deduction from pay. 512.22 Section 512.22 Foreign Relations BROADCASTING BOARD OF GOVERNORS COLLECTION OF DEBTS UNDER THE DEBT COLLECTION ACT OF 1982 Salary Offset § 512.22 Deduction from pay. (a) Deduction by salary offset, from an employee's...

  13. 22 CFR 512.22 - Deduction from pay.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 2 2011-04-01 2009-04-01 true Deduction from pay. 512.22 Section 512.22 Foreign Relations BROADCASTING BOARD OF GOVERNORS COLLECTION OF DEBTS UNDER THE DEBT COLLECTION ACT OF 1982 Salary Offset § 512.22 Deduction from pay. (a) Deduction by salary offset, from an employee's...

  14. 22 CFR 512.22 - Deduction from pay.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 2 2013-04-01 2009-04-01 true Deduction from pay. 512.22 Section 512.22 Foreign Relations BROADCASTING BOARD OF GOVERNORS COLLECTION OF DEBTS UNDER THE DEBT COLLECTION ACT OF 1982 Salary Offset § 512.22 Deduction from pay. (a) Deduction by salary offset, from an employee's...

  15. 42 CFR 408.43 - Deduction from social security benefits.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 2 2014-10-01 2014-10-01 false Deduction from social security benefits. 408.43... § 408.43 Deduction from social security benefits. SSA, acting as CMS's agent, deducts the premiums from the monthly social security benefits if the enrollee is not entitled to railroad retirement benefits...

  16. 42 CFR 408.43 - Deduction from social security benefits.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 2 2012-10-01 2012-10-01 false Deduction from social security benefits. 408.43... § 408.43 Deduction from social security benefits. SSA, acting as CMS's agent, deducts the premiums from the monthly social security benefits if the enrollee is not entitled to railroad retirement benefits...

  17. 42 CFR 408.43 - Deduction from social security benefits.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 2 2013-10-01 2013-10-01 false Deduction from social security benefits. 408.43... § 408.43 Deduction from social security benefits. SSA, acting as CMS's agent, deducts the premiums from the monthly social security benefits if the enrollee is not entitled to railroad retirement benefits...

  18. 42 CFR 408.43 - Deduction from social security benefits.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Deduction from social security benefits. 408.43... § 408.43 Deduction from social security benefits. SSA, acting as CMS's agent, deducts the premiums from the monthly social security benefits if the enrollee is not entitled to railroad retirement benefits...

  19. 12 CFR 347.208 - Assessment base deductions by insured branch.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Assessment base deductions by insured branch... STATEMENTS OF GENERAL POLICY INTERNATIONAL BANKING Foreign Banks § 347.208 Assessment base deductions by..., branches, agencies, or wholly owned subsidiaries may be deducted from the assessment base of the insured...

  20. Deductive Error Diagnosis and Inductive Error Generalization for Intelligent Tutoring Systems.

    ERIC Educational Resources Information Center

    Hoppe, H. Ulrich

    1994-01-01

    Examines the deductive approach to error diagnosis for intelligent tutoring systems. Topics covered include the principles of the deductive approach to diagnosis; domain-specific heuristics to solve the problem of generalizing error patterns; and deductive diagnosis and the hypertext-based learning environment. (Contains 26 references.) (JLB)

  1. 29 CFR 1450.23 - Deduction from pay.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 4 2010-07-01 2010-07-01 false Deduction from pay. 1450.23 Section 1450.23 Labor... OWED THE UNITED STATES Salary Offset § 1450.23 Deduction from pay. (a) Deduction by salary offset, from an employee's current disposable pay, shall be subject to the following conditions: (1) Ordinarily...

  2. 22 CFR 512.22 - Deduction from pay.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Deduction from pay. 512.22 Section 512.22... 1982 Salary Offset § 512.22 Deduction from pay. (a) Deduction by salary offset, from an employee's disposable current pay, shall be subject to the following circumstances: (1) When funds are available, the...

  3. 26 CFR 1.461-4 - Economic performance.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 6 2010-04-01 2010-04-01 false Economic performance. 1.461-4 Section 1.461-4...) INCOME TAXES Taxable Year for Which Deductions Taken § 1.461-4 Economic performance. (a) Introduction—(1... earlier than the taxable year in which economic performance occurs with respect to the liability. (2...

  4. 26 CFR 1.461-4 - Economic performance.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 6 2013-04-01 2013-04-01 false Economic performance. 1.461-4 Section 1.461-4...) INCOME TAXES (CONTINUED) Taxable Year for Which Deductions Taken § 1.461-4 Economic performance. (a... treated as met any earlier than the taxable year in which economic performance occurs with respect to the...

  5. 26 CFR 1.461-4 - Economic performance.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 6 2012-04-01 2012-04-01 false Economic performance. 1.461-4 Section 1.461-4...) INCOME TAXES (CONTINUED) Taxable Year for Which Deductions Taken § 1.461-4 Economic performance. (a... treated as met any earlier than the taxable year in which economic performance occurs with respect to the...

  6. 26 CFR 1.461-4 - Economic performance.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 6 2011-04-01 2011-04-01 false Economic performance. 1.461-4 Section 1.461-4...) INCOME TAXES (CONTINUED) Taxable Year for Which Deductions Taken § 1.461-4 Economic performance. (a... treated as met any earlier than the taxable year in which economic performance occurs with respect to the...

  7. 26 CFR 1.461-4 - Economic performance.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 6 2014-04-01 2014-04-01 false Economic performance. 1.461-4 Section 1.461-4...) INCOME TAXES (CONTINUED) Taxable Year for Which Deductions Taken § 1.461-4 Economic performance. (a... treated as met any earlier than the taxable year in which economic performance occurs with respect to the...

  8. Reset Tree-Based Optical Fault Detection

    PubMed Central

    Lee, Dong-Geon; Choi, Dooho; Seo, Jungtaek; Kim, Howon

    2013-01-01

    In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit's reset signal tree as a fault detection sensor. To evaluate our proposal, we model the radiation-induced currents injected into circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool. PMID:23698267
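
    As a side note for readers who want to reproduce this kind of study: radiation-induced transients of the sort injected in such SPICE simulations are commonly approximated by a double-exponential current pulse. The sketch below evaluates that textbook model in Python; the charge and time constants are illustrative assumptions, not the authors' parameters.

```python
# Double-exponential single-event current pulse, a common approximation
# for radiation-induced photocurrents. Parameter values are assumptions.
import numpy as np

def see_current(t, q=0.5e-12, tau_rise=5e-12, tau_fall=200e-12):
    """Current pulse carrying total charge q (integrates to q over time)."""
    return (q / (tau_fall - tau_rise)) * (np.exp(-t / tau_fall)
                                          - np.exp(-t / tau_rise))

t = np.linspace(0, 2e-9, 20001)
i = see_current(t)
print("peak current: %.3g A" % i.max())
print("collected charge: %.3g C" % np.trapz(i, t))  # recovers ~q
```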

  9. Using marine magnetic survey data to identify a gold ore-controlling fault: a case study in Sanshandao fault, eastern China

    NASA Astrophysics Data System (ADS)

    Yan, Jiayong; Wang, Zhihui; Wang, Jinhui; Song, Jianhua

    2018-06-01

    The Jiaodong Peninsula has the greatest concentration of gold ore in China and is characterized by altered tectonite-type gold deposits. This type of gold deposit is mainly formed in fracture zones and is strictly controlled by faults. Three major ore-controlling faults occur in the Jiaodong Peninsula—the Jiaojia, Zhaoping and Sanshandao faults; the former two are located on land and the latter is located near Sanshandao and its adjacent offshore area. The discovery of the world's largest marine gold deposit in northeastern Sanshandao indicates that the shallow offshore area has great potential for gold prospecting. However, because both ends of the Sanshandao fault extend into the Bohai Sea, conventional geological survey methods cannot determine the distribution of the fault, which constrains the discovery of new gold deposits. To explore the southwestward extension of the Sanshandao fault, we performed a 1:25 000 scale marine magnetic survey in this region and obtained high-quality magnetic survey data covering 170 km2. Multi-scale edge detection and three-dimensional inversion of the magnetic anomalies identify the characteristics of the southwestward extension of the Sanshandao fault and the three-dimensional distribution of the main lithologies, providing significant evidence for the deployment of marine gold prospecting in the southern segment of the Sanshandao fault. Moreover, three other faults were identified in the study area; faults F2 and F4 are inferred to be ore-controlling faults, and other altered tectonite-type gold deposits may exist along them.

  10. Consumer-directed health care for persons under 65 years of age with private health insurance: United States, 2007.

    PubMed

    Cohen, Robin A; Martinez, Michael E

    2009-03-01

    Data from the National Health Interview Survey. In 2007, 17.3% of persons under 65 years of age with private health insurance were enrolled in a high deductible health plan (HDHP), 4.5% were enrolled in a consumer-directed health plan (CDHP), and 14.8% were in a family with a flexible spending account for medical expenses (FSA). Persons with directly purchased private health insurance were more likely to be enrolled in a high deductible plan than those who obtained their private health insurance through an employer or union. Higher incomes and higher educational attainment were associated with greater uptake and enrollment in HDHPs, CDHPs, and FSAs. National attention to consumer-directed health care has increased following the enactment of the Medicare Prescription Drug Improvement and Modernization Act of 2003 (P.L. 108-173), which established tax-advantaged health savings accounts (1). Consumer-directed health care enables individuals to have more control over when and how they access care, what types of care they use, and how much they spend on health care services. This report includes estimates of three measures of consumer-directed private health care. Estimates for 2007 are provided for enrollment in high deductible health plans (HDHPs), plans with high deductibles coupled with health savings accounts also known as consumer-directed health plans (CDHPs), and the percentage of individuals with private coverage whose family has a flexible spending account (FSA) for medical expenses, by selected sociodemographic characteristics.

  11. Greek classicism in living structure? Some deductive pathways in animal morphology.

    PubMed

    Zweers, G A

    1985-01-01

    Classical temples in ancient Greece show two deterministic illusionistic principles of architecture, which govern their functional design: geometric proportionalism and a set of illusion-strengthening rules within the proportionalism's "stochastic margin". Animal morphology, in its mechanistic-deductive revival, applies just one architectural principle, which is not always satisfactory. Whether a "Greek Classical" situation occurs in the architecture of living structure can be investigated by extreme testing with deductive methods. Three deductive methods for explaining living structure in animal morphology are proposed: the parts, the compromise, and the transformation deduction. The methods are based upon the systems concept for an organism, the flow chart for a functionalistic picture, and the network chart for a structuralistic picture, with the "optimal design" serving as the architectural principle for living structure. These methods clearly show the high explanatory power of deductive methods in morphology, but they also make one open end explicit: neutral issues do exist. Full explanation of living structure requires three entries: functional design within architectural and transformational constraints. The transformational constraint necessarily brings in a stochastic component: an at-random variation that acts as a sort of "free management space". This variation must be a variation from the deterministic principle of the optimal design, since any transformation requires space for plasticity in structure and action, and flexibility in role fulfilment. Nevertheless, the question finally arises whether a situation similar to that of Greek Classical temples exists for animal structure; that is, whether the at-random variation found when the optimal design is used to explain structure comprises, apart from a stochastic part, real deviations constituting yet another deterministic part. This deterministic part could be a set of rules that governs actualization in the "free management space".

  12. Main propulsion functional path analysis for performance monitoring fault detection and annunciation

    NASA Technical Reports Server (NTRS)

    Keesler, E. L.

    1974-01-01

    A total of 48 operational flight instrumentation measurements were identified for use in performance monitoring and fault detection. The Operational Flight Instrumentation List contains all measurements identified for fault detection and annunciation. Some 16 controller data words were identified for use in fault detection and annunciation.

  13. Solar Photovoltaic (PV) Distributed Generation Systems - Control and Protection

    NASA Astrophysics Data System (ADS)

    Yi, Zhehan

    This dissertation proposes a comprehensive control, power management, and fault detection strategy for solar photovoltaic (PV) distributed generation systems. Battery storage is typically employed in PV systems to mitigate the power fluctuation caused by unstable solar irradiance. With AC and DC loads, a PV-battery system can be treated as a hybrid microgrid which contains both DC and AC power resources and buses. In this thesis, a control and power management system (CAPMS) for the PV-battery hybrid microgrid is proposed, which provides 1) a DC and AC bus voltage and AC frequency regulating scheme, with controllers designed to track set points; 2) a power flow management strategy in the hybrid microgrid to balance system generation and demand in both grid-connected and islanded modes; and 3) smooth transition control during grid reconnection through frequency and phase synchronization between the main grid and the microgrid. Owing to the increasing demand for PV power, PV systems are growing in scale and fault detection in PV arrays is becoming challenging. High-impedance faults, low-mismatch faults, and faults occurring in low-irradiance conditions tend to be hidden by their low fault currents, particularly when a PV maximum power point tracking (MPPT) algorithm is in service. If they remain undetected, these faults can considerably lower the output energy of solar systems, damage the panels, and potentially cause fire hazards. In this dissertation, fault detection challenges in PV arrays are analyzed in depth, considering the cross-relations among the characteristics of PV, interactions with MPPT algorithms, and the nature of solar irradiance. Two fault detection schemes are then designed to address these technical issues, detecting faults inside PV arrays accurately even under challenging circumstances, e.g., faults in low-irradiance conditions or high-impedance faults. Taking advantage of multi-resolution signal decomposition (MSD), a powerful signal processing technique based on the discrete wavelet transform (DWT), the first scheme extracts the features of both line-to-line (L-L) and line-to-ground (L-G) faults and employs a fuzzy inference system (FIS) for the decision-making stage of fault detection. This scheme is then improved in a second design by further studying the system's behavior during L-L faults, extracting more efficient fault features, and devising a more advanced decision-making stage: a two-stage support vector machine (SVM). For the first time, the two-stage SVM method is proposed in this dissertation to detect L-L faults in PV systems with satisfactory accuracy. Numerous simulation and experimental case studies are carried out to verify the proposed control and protection strategies. The simulation environment is set up using the PSCAD/EMTDC and Matlab/Simulink software packages. Experimental case studies are conducted in a PV-battery hybrid microgrid using the dSPACE real-time controller to demonstrate the ease of hardware implementation and the controller performance. Another small-scale grid-connected PV system is set up to verify both fault detection algorithms, which demonstrate promising performance and fault detection accuracy.
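
    As a rough illustration of the wavelet-plus-SVM pipeline described above (not the dissertation's implementation), the sketch below decomposes a PV current waveform with the PyWavelets package, uses sub-band energies as features, and trains an RBF-kernel SVM. The waveforms, labels, and wavelet choice are all assumptions.

```python
# Minimal DWT-feature + SVM fault classifier sketch; signal source and
# fault labels are random stand-ins (0 = normal, 1 = L-L, 2 = L-G).
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energy_features(signal, wavelet="db4", level=4):
    """Energy of each sub-band from a multi-resolution decomposition."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

rng = np.random.default_rng(0)
waveforms = rng.normal(size=(300, 512))   # stand-in string-current frames
labels = rng.integers(0, 3, size=300)     # stand-in fault labels

X = np.vstack([wavelet_energy_features(w) for w in waveforms])
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
print(clf.predict(X[:5]))                 # classify the first few frames
```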

  14. Contribution of variable-speed pump hydro storage for power system dynamic performance

    NASA Astrophysics Data System (ADS)

    Silva, B.; Moreira, C.

    2017-04-01

    This paper studies variable-speed pumped storage power plants (PSP) in the Portuguese power system. It evaluates their progressive integration at three major locations and compares the power system performance following a severe fault event with consequent disconnection of wind farms (WF) that are not fault ride-through (FRT) compliant. To achieve this objective, a frequency-responsive model was developed in PSS/E and used to substitute existing fixed-speed PSP. The results identify a clear enhancement of power system performance in the presence of frequency-responsive variable-speed PSP, especially for the scenario presented, with a high level of renewables integration.
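
    To make the value of a frequency-responsive PSP concrete, the sketch below integrates a single-machine swing equation after a generation loss, with and without an assumed PSP droop response. All constants are illustrative, not data from the Portuguese system or the PSS/E model.

```python
# Single-machine frequency sketch: 2H d(df)/dt = -dP - D*df + p_psp,
# where a variable-speed PSP injects power against the deviation.
import numpy as np

H, D = 4.0, 1.0        # inertia constant [s], load damping [pu/pu] (assumed)
dP = 0.1               # lost generation [pu] (assumed)
K_PSP = 5.0            # assumed PSP frequency-droop gain [pu/pu]

def simulate(k_psp, t_end=20.0, dt=1e-3):
    df, trace = 0.0, []
    for _ in np.arange(0.0, t_end, dt):
        p_psp = -k_psp * df                  # PSP counteracts the deviation
        df += (-dP - D * df + p_psp) / (2 * H) * dt
        trace.append(df)
    return np.array(trace)

print("nadir without PSP: %.4f pu" % simulate(0.0).min())
print("nadir with PSP:    %.4f pu" % simulate(K_PSP).min())
```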

  15. Evaluating the roles of the inferior frontal gyrus and superior parietal lobule in deductive reasoning: an rTMS study.

    PubMed

    Tsujii, Takeo; Sakatani, Kaoru; Masuda, Sayako; Akiyama, Takekazu; Watanabe, Shigeru

    2011-09-15

    This study used off-line repetitive transcranial magnetic stimulation (rTMS) to examine the roles of the superior parietal lobule (SPL) and inferior frontal gyrus (IFG) in a deductive reasoning task. Subjects performed a categorical syllogistic reasoning task involving congruent, incongruent, and abstract trials. Twenty-four subjects received magnetic stimulation to the SPL region prior to the task. In the other 24 subjects, TMS was administered to the IFG region before the task. Stimulation lasted for 10 min, with an inter-pulse frequency of 1 Hz. We found that bilateral SPL (Brodmann area (BA) 7) stimulation disrupted performance on abstract and incongruent reasoning. Left IFG (BA 45) stimulation impaired congruent reasoning performance while paradoxically facilitating incongruent reasoning performance, resulting in the elimination of the belief bias. In contrast, right IFG stimulation only impaired incongruent reasoning performance, thus enhancing the belief-bias effect. These findings are largely consistent with the dual-process theory of reasoning, which proposes the existence of two different human reasoning systems: a belief-based heuristic system and a logic-based analytic system. The present findings suggest that the left language-related IFG (BA 45) may correspond to the heuristic system, while bilateral SPL may underlie the analytic system. The right IFG may play a role in blocking the belief-based heuristic system when solving incongruent reasoning trials. By utilizing the rTMS approach, this study offers insight into the functional roles of distributed brain systems in human deductive reasoning.

  16. Evaluating the performance of a fault detection and diagnostic system for vapor compression equipment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Breuker, M.S.; Braun, J.E.

    This paper presents a detailed evaluation of the performance of a statistical, rule-based fault detection and diagnostic (FDD) technique presented by Rossi and Braun (1997). Steady-state and transient tests were performed on a simple rooftop air conditioner over a range of conditions and fault levels. The steady-state data without faults were used to train models that predict outputs for normal operation. The transient data with faults were used to evaluate FDD performance. The effect of a number of design variables on FDD sensitivity for different faults was evaluated and two prototype systems were specified for more complete evaluation. Good performance was achieved in detecting and diagnosing five faults using only six temperatures (two input and four output) and linear models. The performance improved by about a factor of two when ten measurements (three input and seven output) and higher order models were used. This approach for evaluating and optimizing the performance of the statistical, rule-based FDD technique could be used as a design and evaluation tool when applying this FDD method to other packaged air-conditioning systems. Furthermore, the approach could also be modified to evaluate the performance of other FDD methods.
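
    A minimal sketch of the residual-based, rule-style FDD idea evaluated above (not Rossi and Braun's actual technique): fit models of the normal output temperatures from the driving conditions, then flag a fault when a measured output drifts beyond a statistical threshold. Variable choices and thresholds are assumptions.

```python
# Train on fault-free steady-state data, flag faults from large residuals.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical training data: 2 driving conditions -> 4 output temperatures.
X_train = rng.uniform(20, 35, size=(500, 2))
Y_train = X_train @ rng.uniform(0.5, 1.5, (2, 4)) + rng.normal(0, 0.2, (500, 4))

model = LinearRegression().fit(X_train, Y_train)
sigma = (Y_train - model.predict(X_train)).std(axis=0)  # normal residual spread

def detect(x_now, y_now, k=3.0):
    """Flag a fault when any residual exceeds k standard deviations."""
    residual = y_now - model.predict(x_now.reshape(1, -1))[0]
    return np.abs(residual) > k * sigma   # boolean per monitored output

print(detect(np.array([25.0, 30.0]), np.array([40.0, 31.0, 28.0, 27.0])))
```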

  17. Is the Multigrid Method Fault Tolerant? The Two-Grid Case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ainsworth, Mark; Glusa, Christian

    2016-06-30

    The predicted reduced resiliency of next-generation high performance computers means that it will become necessary to take into account the effects of randomly occurring faults on numerical methods. Further, in the event of a hard fault occurring, a decision has to be made as to what remedial action should be taken in order to resume the execution of the algorithm. The action that is chosen can have a dramatic effect on the performance and characteristics of the scheme. Ideally, the resulting algorithm should be subjected to the same kind of mathematical analysis that was applied to the original, deterministic variant. The purpose of this work is to provide an analysis of the behaviour of the multigrid algorithm in the presence of faults. Multigrid is arguably the method of choice for the solution of large-scale linear algebra problems arising from discretization of partial differential equations and it is of considerable importance to anticipate its behaviour on an exascale machine. The analysis of resilience of algorithms is in its infancy and the current work is perhaps the first to provide a mathematical model for faults and analyse the behaviour of a state-of-the-art algorithm under the model. It is shown that the Two Grid Method fails to be resilient to faults. Attention is then turned to identifying the minimal necessary remedial action required to restore the rate of convergence to that enjoyed by the ideal fault-free method.
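
    For concreteness, the sketch below implements the standard, fault-free Two Grid Method for a 1-D Poisson problem, i.e., the deterministic algorithm whose resilience the paper analyzes; grid size, smoother, and iteration counts are arbitrary choices.

```python
# Two-grid correction scheme for -u'' = f on [0,1] with zero boundaries.
import numpy as np

def jacobi(u, f, h, sweeps, omega=2/3):
    """Weighted-Jacobi smoothing sweeps for the 1-D Poisson stencil."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(u, f, h):
    u = jacobi(u, f, h, sweeps=3)                     # pre-smoothing
    r = np.zeros_like(u)                              # residual r = f + u''
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
    rc = r[::2].copy()                                # restrict (injection)
    ec = jacobi(np.zeros_like(rc), rc, 2 * h, sweeps=50)  # coarse "solve"
    e = np.zeros_like(u)                              # prolong (interpolation)
    e[::2] = ec
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    u += e                                            # coarse-grid correction
    return jacobi(u, f, h, sweeps=3)                  # post-smoothing

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
f = np.pi**2 * np.sin(np.pi * x)      # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # error vs. exact solution
```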

  18. Overview of condition monitoring and operation control of electric power conversion systems in direct-drive wind turbines under faults

    NASA Astrophysics Data System (ADS)

    Huang, Shoudao; Wu, Xuan; Liu, Xiao; Gao, Jian; He, Yunze

    2017-09-01

    Electric power conversion system (EPCS), which consists of a generator and a power converter, is one of the most important subsystems in a direct-drive wind turbine (DD-WT). However, this component accounts for the most failures (approximately 60% of the total number) in the entire DD-WT system according to statistical data. To improve the reliability of EPCSs and reduce the operation and maintenance cost of DD-WTs, numerous researchers have studied condition monitoring (CM) and fault diagnostics (FD), and numerous CM and FD techniques, with respective advantages and disadvantages, have emerged. This paper provides an overview of the CM, FD, and operation control of EPCSs in DD-WTs under faults. After introducing the functional principle and structure of the EPCS, this survey discusses the common failures in wind generators and power converters; briefly reviews CM and FD methods and the operation control of these generators and power converters under faults; and discusses the grid voltage faults related to EPCSs in DD-WTs. These theories and their related technical concepts are systematically discussed. Finally, predicted development trends are presented. The paper provides a valuable reference for developing service quality evaluation methods and fault operation control systems to achieve high-performance and high-intelligence DD-WTs.

  19. Cost segregation of assets offers tax benefits.

    PubMed

    Grant, D A

    2001-04-01

    A cost-segregation study is an asset-reclassification strategy that accelerates tax-depreciation deductions. By using this strategy, healthcare facility owners can lower their current income-tax liability and increase current cash flow. Simply put, certain real estate is reclassified from long-lived real property to shorter-lived personal property for depreciation purposes. Depreciation deductions for the personal property then can be greatly accelerated, thereby producing greater present-value tax savings. The cost analysis can be conducted either from detailed construction records, when such records are available, or by having qualified appraisers, architects, or engineers perform the allocation analysis.

  20. Windows .NET Network Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST)

    PubMed Central

    Dowd, Scot E; Zaragoza, Joaquin; Rodriguez, Javier R; Oliver, Melvin J; Payton, Paxton R

    2005-01-01

    Background BLAST is one of the most common and useful tools for Genetic Research. This paper describes a software application we have termed Windows .NET Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST), which enhances the BLAST utility by improving usability, fault recovery, and scalability in a Windows desktop environment. Our goal was to develop an easy to use, fault tolerant, high-throughput BLAST solution that incorporates a comprehensive BLAST result viewer with curation and annotation functionality. Results W.ND-BLAST is a comprehensive Windows-based software toolkit that targets researchers, including those with minimal computer skills, and provides the ability to increase the performance of BLAST by distributing BLAST queries to any number of Windows-based machines across local area networks (LAN). W.ND-BLAST provides intuitive Graphic User Interfaces (GUI) for BLAST database creation, BLAST execution, BLAST output evaluation and BLAST result exportation. This software also provides several layers of fault tolerance and fault recovery to prevent loss of data if nodes or master machines fail. This paper lays out the functionality of W.ND-BLAST. W.ND-BLAST displays close to 100% performance efficiency when distributing tasks to 12 remote computers of the same performance class. A high throughput BLAST job which took 662.68 minutes (11 hours) on one average machine was completed in 44.97 minutes when distributed to 17 nodes, which included lower performance class machines. Finally, there are comprehensive high-throughput BLAST Output Viewer (BOV) and Annotation Engine components, which provide exportation of BLAST hits to text files, annotated fasta files, tables, or association files. Conclusion W.ND-BLAST provides an interactive tool that allows scientists to easily utilize their available computing resources for high throughput and comprehensive sequence analyses. The install package for W.ND-BLAST is freely downloadable. The software is free with registration; installation, networking, and usage instructions are provided, as well as a support forum. PMID:15819992
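
    The distribution idea is simple to mimic outside Windows/.NET: split the query set into chunks and run a BLAST executable on each chunk in parallel. The sketch below does this with local Python processes standing in for remote nodes; it assumes the standard NCBI BLAST+ blastn tool is installed and uses hypothetical file and database names.

```python
# Round-robin a FASTA query set across parallel blastn runs.
import subprocess
from concurrent.futures import ProcessPoolExecutor

def read_fasta_records(path):
    """Yield individual '>header\\nsequence...' records from a FASTA file."""
    with open(path) as fh:
        record = []
        for line in fh:
            if line.startswith(">") and record:
                yield "".join(record)
                record = []
            record.append(line)
        if record:
            yield "".join(record)

def run_chunk(chunk_id, records):
    query = f"chunk_{chunk_id}.fasta"        # hypothetical scratch file
    with open(query, "w") as fh:
        fh.writelines(records)
    # Assumes NCBI BLAST+ is installed; 'mydb' is a placeholder database.
    subprocess.run(["blastn", "-query", query, "-db", "mydb",
                    "-out", f"chunk_{chunk_id}.out"], check=True)
    return f"chunk_{chunk_id}.out"

if __name__ == "__main__":
    records = list(read_fasta_records("queries.fasta"))  # hypothetical input
    chunks = [records[i::4] for i in range(4)]           # 4 workers
    with ProcessPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(run_chunk, range(4), chunks)))
```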

  1. Physicochemical Processes and the Evolution of Strength in Calcite Fault Gouge at Room Temperature

    NASA Astrophysics Data System (ADS)

    Carpenter, B. M.; Viti, C.; Collettini, C.

    2015-12-01

    The presence of calcite in and near faults, as the dominant material, cement, or vein fill, indicates that the mechanical behavior of carbonate-dominated material likely plays an important role in shallow- and mid-crustal faulting. Furthermore, a variety of physical and chemical processes control the evolution of strength and style of slip along seismogenic faults and thus play a critical role in the seismic cycle. Determining the role and contributions of these types of mechanisms is essential to furthering our understanding of the processes and timescales that lead to the strengthening of faults during interseismic periods and their behavior during the earthquake nucleation process. To further our understanding of these processes, we performed laboratory shearing experiments on calcite gouge at normal stresses from 1 to 100 MPa, under saturated conditions and at room temperature. We performed velocity-stepping (0.1-1000 μm/s) and slide-hold-slide (1-3000 s) tests to measure the velocity dependence of friction and the amount of frictional strengthening, respectively, under saturated conditions with pore fluid in equilibrium with CaCO3. At 5 MPa normal stress, we also varied the environmental conditions by performing experiments at 5% and 50% RH, and under saturation with silicone oil, demineralized water, and the equilibrated solution combined with 0.5 M NaCl. Finally, we collected post-experiment samples for microscopic analysis. Our combined analyses of rate dependence, strengthening behavior, and microstructures show that calcite fault gouge transitions from brittle to semi-brittle behavior at high normal stress and low sliding velocities. Furthermore, our results also highlight how changes in pore water chemistry can have significant influence on the mechanical behavior of calcite gouge in both the laboratory and natural faults. Our observations have important implications for earthquake nucleation and propagation on faults in carbonate-dominated lithologies.
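
    For readers unfamiliar with these test types, the sketch below shows the basic reductions: the rate dependence (a-b) from a velocity step and the healing rate from slide-hold-slide peak strengths. The numbers are invented stand-ins, not measurements from these experiments.

```python
# Reduce velocity-step and slide-hold-slide observations to the standard
# friction parameters; all input values below are illustrative.
import numpy as np

def rate_dependence(mu_before, mu_after, v_before, v_after):
    """(a - b): positive = velocity strengthening, negative = weakening."""
    return (mu_after - mu_before) / np.log(v_after / v_before)

def healing_rate(hold_times, delta_mu):
    """Slope of peak-friction gain vs. log10 hold time (healing per decade)."""
    slope, _ = np.polyfit(np.log10(hold_times), delta_mu, 1)
    return slope

print(rate_dependence(0.600, 0.603, 1.0, 10.0))   # (a-b) ~ +0.0013
print(healing_rate([1, 10, 100, 1000, 3000],
                   [0.002, 0.006, 0.010, 0.014, 0.016]))
```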

  2. 26 CFR 1.941-1 - Special deduction for China Trade Act corporations.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 10 2014-04-01 2013-04-01 true Special deduction for China Trade Act... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) China Trade Act Corporations § 1.941-1 Special deduction for China Trade Act corporations. In addition to the deductions from taxable income otherwise...

  3. 26 CFR 1.941-1 - Special deduction for China Trade Act corporations.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 10 2013-04-01 2013-04-01 false Special deduction for China Trade Act... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) China Trade Act Corporations § 1.941-1 Special deduction for China Trade Act corporations. In addition to the deductions from taxable income otherwise...

  4. Children's and Adults' Judgments of the Certainty of Deductive Inferences, Inductive Inferences, and Guesses

    ERIC Educational Resources Information Center

    Pillow, Bradford H.; Pearson, RaeAnne M.; Hecht, Mary; Bremer, Amanda

    2010-01-01

    Children and adults rated their own certainty following inductive inferences, deductive inferences, and guesses. Beginning in kindergarten, participants rated deductions as more certain than weak inductions or guesses. Deductions were rated as more certain than strong inductions beginning in Grade 3, and fourth-grade children and adults…

  5. 26 CFR 1.941-1 - Special deduction for China Trade Act corporations.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 10 2011-04-01 2011-04-01 false Special deduction for China Trade Act... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) China Trade Act Corporations § 1.941-1 Special deduction for China Trade Act corporations. In addition to the deductions from taxable income otherwise...

  6. 26 CFR 1.941-1 - Special deduction for China Trade Act corporations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 10 2010-04-01 2010-04-01 false Special deduction for China Trade Act... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES China Trade Act Corporations § 1.941-1 Special deduction for China Trade Act corporations. In addition to the deductions from taxable income otherwise allowed such a...

  7. 42 CFR 409.89 - Exemption of kidney donors from deductible and coinsurance requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 2 2011-10-01 2011-10-01 false Exemption of kidney donors from deductible and... Deductibles and Coinsurance § 409.89 Exemption of kidney donors from deductible and coinsurance requirements... furnished to an individual in connection with the donation of a kidney for transplant surgery. ...

  8. 42 CFR 409.89 - Exemption of kidney donors from deductible and coinsurance requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Exemption of kidney donors from deductible and... Deductibles and Coinsurance § 409.89 Exemption of kidney donors from deductible and coinsurance requirements... furnished to an individual in connection with the donation of a kidney for transplant surgery. ...

  9. 25 CFR 163.25 - Forest management deductions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Forest management deductions. 163.25 Section 163.25... Forest Management and Operations § 163.25 Forest management deductions. (a) Pursuant to the provisions of 25 U.S.C. 413 and 25 U.S.C. 3105, a forest management deduction shall be withheld from the gross...

  10. 26 CFR 20.2053-10 - Deduction for certain foreign death taxes.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 2055, but only if (1) the conditions stated in paragraph (b) of this section are met, and (2) an... for foreign death taxes. (b) Condition for allowance of deduction. (1) The deduction is not allowed.... For allowance of the deduction, it is sufficient if either of these conditions is satisfied. Thus, in...

  11. 26 CFR 20.2053-10 - Deduction for certain foreign death taxes.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 2055, but only if (1) the conditions stated in paragraph (b) of this section are met, and (2) an... for foreign death taxes. (b) Condition for allowance of deduction. (1) The deduction is not allowed.... For allowance of the deduction, it is sufficient if either of these conditions is satisfied. Thus, in...

  12. 48 CFR 252.236-7007 - Additive or deductive items.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Additive or deductive... of Provisions And Clauses 252.236-7007 Additive or deductive items. As prescribed in 236.570(b)(5), use the following provision: Additive or Deductive Items (DEC 1991) (a) The low offeror and the items...

  13. 48 CFR 252.236-7007 - Additive or deductive items.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Additive or deductive... of Provisions And Clauses 252.236-7007 Additive or deductive items. As prescribed in 236.570(b)(5), use the following provision: Additive or Deductive Items (DEC 1991) (a) The low offeror and the items...

  14. 48 CFR 1852.236-71 - Additive or deductive items.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Additive or deductive items... 1852.236-71 Additive or deductive items. As prescribed in 1836.570(a), insert the following provision: Additive or Deductive Items (MAR 1989) (a) The low bidder for purposes of award shall be the conforming...

  15. 48 CFR 252.236-7007 - Additive or deductive items.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Additive or deductive... of Provisions And Clauses 252.236-7007 Additive or deductive items. As prescribed in 236.570(b)(5), use the following provision: Additive or Deductive Items (DEC 1991) (a) The low offeror and the items...

  16. 48 CFR 252.236-7007 - Additive or deductive items.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Additive or deductive... of Provisions And Clauses 252.236-7007 Additive or deductive items. As prescribed in 236.570(b)(5), use the following provision: Additive or Deductive Items (DEC 1991) (a) The low offeror and the items...

  17. 48 CFR 252.236-7007 - Additive or deductive items.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Additive or deductive... of Provisions And Clauses 252.236-7007 Additive or deductive items. As prescribed in 236.570(b)(5), use the following provision: Additive or Deductive Items (DEC 1991) (a) The low offeror and the items...

  18. 26 CFR 1.642(d)-1 - Net operating loss deduction.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Net operating loss deduction. 1.642(d)-1 Section... TAX (CONTINUED) INCOME TAXES Estates, Trusts, and Beneficiaries § 1.642(d)-1 Net operating loss deduction. The net operating loss deduction allowed by section 172 is available to estates and trusts...

  19. 26 CFR 1.1402(a)-7 - Net operating loss deduction.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 12 2010-04-01 2010-04-01 false Net operating loss deduction. 1.1402(a)-7...) INCOME TAX (CONTINUED) INCOME TAXES Tax on Self-Employment Income § 1.1402(a)-7 Net operating loss deduction. The deduction provided by section 172, relating to net operating losses sustained in years other...

  20. 25 CFR 163.25 - Forest management deductions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false Forest management deductions. 163.25 Section 163.25... Forest Management and Operations § 163.25 Forest management deductions. (a) Pursuant to the provisions of 25 U.S.C. 413 and 25 U.S.C. 3105, a forest management deduction shall be withheld from the gross...

  1. 20 CFR 361.11 - Procedures for salary offset: When deductions may begin.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Procedures for salary offset: When deductions... § 361.11 Procedures for salary offset: When deductions may begin. (a) Deductions to liquidate an... a debt is completed, offset shall be made from subsequent payments of any nature (e.g., final salary...

  2. 38 CFR 8.5 - Authorization for deduction of premiums from compensation, retirement pay, or pension.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Authorization for deduction of premiums from compensation, retirement pay, or pension. 8.5 Section 8.5 Pensions, Bonuses, and... Authorization for deduction of premiums from compensation, retirement pay, or pension. Deductions from benefits...

  3. Comparative investigation of vibration and current monitoring for prediction of mechanical and electrical faults in induction motor based on multiclass-support vector machine algorithms

    NASA Astrophysics Data System (ADS)

    Gangsar, Purushottam; Tiwari, Rajiv

    2017-09-01

    This paper presents an investigation of vibration and current monitoring for effective fault prediction in induction motors (IM) using multiclass support vector machine (MSVM) algorithms. Failures of an IM may occur due to the propagation of a mechanical or electrical fault. Hence, for timely detection of these faults, vibration as well as current signals were acquired in multiple experiments at varying speeds and external torques on an experimental test rig. In total, ten different fault conditions frequently encountered in IMs (four mechanical fault conditions, five electrical fault conditions, and one no-defect condition) were considered. For the stator winding fault, and the phase unbalance and single phasing fault, different levels of severity were also considered for the prediction. In this study, identification of the mechanical and electrical faults was performed individually and collectively. Fault predictions were performed using the vibration signal alone, the current signal alone, and the vibration and current signals concurrently. The one-versus-one MSVM was trained at various operating conditions of the IM using the radial basis function (RBF) kernel and tested at the same conditions, giving results in the form of percentage fault prediction. The prediction performance was investigated over a wide range of the RBF kernel parameter, i.e. gamma, and the best result was selected at one optimal value of gamma for each case. Fault predictions were performed and investigated over a wide range of operational speeds of the IM as well as external torques on the IM.
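
    A hedged sketch of the training-and-sweep procedure described above, using scikit-learn's one-versus-one SVC in place of the authors' code; the feature matrix and ten class labels are random stand-ins for the vibration/current features.

```python
# One-vs-one RBF-kernel multiclass SVM with a sweep over gamma.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 12))        # assumed vibration+current features
y = rng.integers(0, 10, size=1000)     # ten fault/no-defect classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

best = (None, -1.0)
for gamma in np.logspace(-3, 2, 12):   # wide range of RBF gamma
    clf = SVC(kernel="rbf", gamma=gamma,
              decision_function_shape="ovo").fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)        # fraction of correct fault predictions
    if acc > best[1]:
        best = (gamma, acc)
print("best gamma %.4g -> accuracy %.3f" % best)
```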

  4. The tracking performance of distributed recoverable flight control systems subject to high intensity radiated fields

    NASA Astrophysics Data System (ADS)

    Wang, Rui

    It is known that high intensity radiated fields (HIRF) can produce upsets in digital electronics, and thereby degrade the performance of digital flight control systems. Such upsets, either from natural or man-made sources, can change data values on digital buses and memory and affect CPU instruction execution. HIRF environments are also known to trigger common-mode faults, affecting nearly-simultaneously multiple fault containment regions, and hence reducing the benefits of n-modular redundancy and other fault-tolerant computing techniques. Thus, it is important to develop models which describe the integration of the embedded digital system, where the control law is implemented, as well as the dynamics of the closed-loop system. In this dissertation, theoretical tools are presented to analyze the relationship between the design choices for a class of distributed recoverable computing platforms and the tracking performance degradation of a digital flight control system implemented on such a platform while operating in a HIRF environment. Specifically, a tractable hybrid performance model is developed for a digital flight control system implemented on a computing platform inspired largely by the NASA family of fault-tolerant, reconfigurable computer architectures known as SPIDER (scalable processor-independent design for enhanced reliability). The focus will be on the SPIDER implementation, which uses the computer communication system known as ROBUS-2 (reliable optical bus). A physical HIRF experiment was conducted at the NASA Langley Research Center in order to validate the theoretical tracking performance degradation predictions for a distributed Boeing 747 flight control system subject to a HIRF environment. An extrapolation of these results for scenarios that could not be physically tested is also presented.

  5. Learning and diagnosing faults using neural networks

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis

    1990-01-01

    Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor-data-to-fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor-data-to-fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Second, the neural network which performs sensor fusion is structured to detect new unknown faults for which training examples were not presented during training. These methods were tested on a task of fault diagnosis by employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.
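
    The three-mapping decomposition can be sketched with generic components standing in for the paper's neural networks: a compression stage, a hypothesis-generation classifier, and a fusion step that reports an unknown fault when no hypothesis is confident. Dimensions, data, and the confidence threshold below are assumptions.

```python
# Compression -> hypothesis generation -> fusion with unknown-fault detection.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
sensors = rng.normal(size=(2000, 40))    # simulated engine sensor frames
faults = rng.integers(0, 5, size=2000)   # five known fault classes

compress = PCA(n_components=8).fit(sensors)               # mapping 1
z = compress.transform(sensors)
hypothesize = MLPClassifier(hidden_layer_sizes=(16,),
                            max_iter=500).fit(z, faults)  # mapping 2

def fuse(frame, threshold=0.5):
    """Mapping 3: accept the best hypothesis or report an unknown fault."""
    p = hypothesize.predict_proba(compress.transform(frame.reshape(1, -1)))[0]
    return int(np.argmax(p)) if p.max() >= threshold else "unknown fault"

print(fuse(sensors[0]))
```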

  6. The Kumamoto Mw7.1 mainshock: deep initiation triggered by the shallow foreshocks

    NASA Astrophysics Data System (ADS)

    Shi, Q.; Wei, S.

    2017-12-01

    The Kumamoto Mw7.1 earthquake and its Mw6.2 foreshock struck the central Kyushu region in mid-April 2016. The surface ruptures are characterized by multiple fault segments and a mix of strike-slip and normal motion, extending from the intersection of the Hinagu and Futagawa faults to the southwest of Mt. Aso. Despite the complex surface ruptures, most finite fault inversions use two fault segments to approximate the fault geometry. To study the rupture process and the complex fault geometry of this earthquake, we performed a multiple-point-source inversion for the mainshock using data from 93 K-net and Kik-net stations. With path calibration from the Mw6.0 foreshock, we selected the frequency ranges for the Pnl waves (0.02-0.26 Hz) and surface waves (0.02-0.12 Hz), as well as the components that can be well modeled with the 1D velocity model. Our four-point-source results reveal a unilateral rupture towards Mt. Aso and varying fault geometries. The first sub-event is a high-angle (~79°) right-lateral strike-slip event at a depth of 16 km at the north end of the Hinagu fault. Notably, the two M>6 foreshocks were located by our previous studies near the north end of the Hinagu fault at depths of 5-9 km, which may give rise to stress concentration at depth. The following three sub-events are distributed along the surface rupture of the Futagawa fault, with focal depths of 4-10 km. Their focal mechanisms present similar right-lateral fault slip with relatively small dip angles (62-67°) and an apparent normal-fault component. Thus, the mainshock rupture initiated in the relatively deep part of the Hinagu fault and propagated through the fault bend toward the NE along the relatively shallow part of the Futagawa fault until it was terminated near Mt. Aso. Based on the four-point-source solution, we conducted a finite-fault inversion and obtained a kinematic rupture model of the mainshock. We then performed Coulomb stress analyses of the two foreshocks and the mainshock. The results support the view that stress alteration after the foreshocks may have triggered the failure on the fault plane of the Mw7.1 earthquake. Therefore, the 2016 Kumamoto earthquake sequence is dominated by a series of large triggered events whose initiation is associated with the geometric barrier at the intersection of the Futagawa and Hinagu faults.

  7. Proactive Fault Tolerance Using Preemptive Migration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Vallee, Geoffroy R; Naughton, III, Thomas J

    2009-01-01

    Proactive fault tolerance (FT) in high-performance computing is a concept that prevents compute node failures from impacting running parallel applications by preemptively migrating application parts away from nodes that are about to fail. This paper provides a foundation for proactive FT by defining its architecture and classifying implementation options. This paper further relates prior work to the presented architecture and classification, and discusses the challenges ahead for needed supporting technologies.

  8. Heat Flow Budget of the Gulf of California Rift: Preliminary Results of a High Resolution Survey Across the Wagner Basin

    NASA Astrophysics Data System (ADS)

    Negrete-Aranda, R.; Neumann, F.; Harris, R. N.; Contreras, J.; Gonzalez-Fernandez, A.; Sclater, J. G.

    2016-12-01

    The thermal regime exerts a primary control on rift dynamics and the mode of extension of continental lithosphere. We present three heat-flow profiles across the southern terminus of the Cerro Prieto fault in the northern Gulf of California. The longest profile is 42 km long with a measurement spacing of 1 km and spans the hanging-wall block (Wagner basin) and the footwall block of that fault. Measurements were taken with a 6.5-m-long Fielax violin-bow probe. Most measurements are of good quality, i.e., the probe fully penetrated the sediments and measurements were stable enough to perform reliable inversion for heat flow and thermal properties. However, numerous corrections were necessary for environmental phenomena related to the copious sedimentation in the area and to seasonal changes in water temperature. Our measurements indicate that the total throughput across the central rift and its east shoulder is 15 kW per meter of rift length. More importantly, heat flow values cluster in three distinct spatial groups: (i) heat flow in the well-sedimented depocenter of the Wagner basin is approximately 200 mW/m2; (ii) footwall-block heat flow is approximately 400 mW/m2; and (iii) heat flow across the fault zone is very high, up to 5,000 mW/m2. Our interpretation is that the first value represents the background conductive heat flow in the rift, whereas heat flow across the fault represents advective heat transport by hydrothermal fluids. The high heat flow in the footwall block of the Cerro Prieto fault might result from both conductive and advective heat transfer by fluid seepage from the basin. These data provide evidence that fluids from deep magma bodies, transported along faults, assist rifting in the northern Gulf of California. We are exploring how fluids may play a role in weakening the lithosphere and help localize or delocalize strain along the major transforms and numerous normal faults observed in the area.
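
    Each conductive heat-flow value in such a survey ultimately comes from a reduction like the one sketched below: fit the equilibrium temperature-depth profile from the probe's thermistors and apply Fourier's law q = k dT/dz. The temperatures, depths, and conductivity are invented numbers of plausible magnitude, not survey data.

```python
# Heat flow from a temperature-depth profile via Fourier's law.
import numpy as np

depth = np.array([1.0, 2.0, 3.5, 5.0, 6.5])            # sensor depths [m]
temp = np.array([10.10, 10.21, 10.37, 10.52, 10.68])   # equilibrium T [deg C]
k = 0.9                                                # conductivity [W/m/K]

gradient, _ = np.polyfit(depth, temp, 1)               # dT/dz [K/m]
q = k * gradient
print("heat flow: %.0f mW/m^2" % (q * 1000))           # ~100 mW/m^2 here
```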

  9. The NMDAr antagonist ketamine interferes with manipulation of information for transitive inference reasoning in non-human primates.

    PubMed

    Brunamonti, Emiliano; Mione, Valentina; Di Bello, Fabio; De Luna, Paolo; Genovesio, Aldo; Ferraina, Stefano

    2014-09-01

    One of the most remarkable traits of highly encephalized animals is their ability to manipulate knowledge flexibly to infer logical relationships. Operationally, the corresponding cognitive process can be defined as reasoning. One hypothesis is that this process relies on the reverberating activity of glutamate neural circuits, sustained by NMDA receptor (NMDAr) mediated synaptic transmission, in both parietal and prefrontal areas. We trained two macaque monkeys to perform a form of deductive reasoning - the transitive inference task - in which they were required to learn the relationships between six adjacent items in a single session and then deduce the relationships between nonadjacent items that had not been paired in the learning phase. When the animals had learned the sequence, we administered systemically a subanaesthetic dose of ketamine (an NMDAr antagonist) and measured their performance on learned and novel problems. We observed impairments in determining the relationship between novel pairs of items. Our results are consistent with the hypothesis that transitive inference premises are integrated during learning into a unified representation and that reducing NMDAr activity interferes with the use of this mental model when decisions require comparing pairs of items that have not been learned.

  10. Experimental study on deformation field evolution in rock sample with en echelon faults using digital speckle correlation method

    NASA Astrophysics Data System (ADS)

    Ma, S.; Ma, J.; Liu, L.; Liu, P.

    2007-12-01

    Digital speckle correlation method (DSCM) is a photomechanical deformation-measurement technique. DSCM obtains continuous deformation fields without contact, simply by capturing speckle images from the specimen surface, which makes it well suited to observing high-spatial-resolution deformation fields in tectonophysical experiments. In a typical DSCM experiment, however, the inspected surface of the specimen must be painted to bear speckle grains in order to obtain high-quality speckle images, which also interferes with other measurement techniques. In this study, an improved DSCM system is developed and used to measure the deformation field of rock specimens without surface painting. Granodiorite, whose natural grains provide high contrast, is chosen for the specimens, and a specially designed DSCM algorithm is developed to analyze this kind of natural speckle image. Verification and calibration experiments show that the system can record a continuous (about 15 Hz) high-resolution displacement field (resolution of 5 μm) and strain field (resolution of 50 με) without any preparation of the rock specimen, so it can be conveniently used to study the failure of rock structures. Samples with compressive en echelon faults and extensional en echelon faults are studied on a two-direction servo-controlled test machine, and the failure process of the samples is discussed based on the DSCM results. The experiments show that: 1) The contours of the displacement field clearly indicate the activities of faults and new cracks; the displacement gradient adjacent to active faults and cracks is much greater than in other areas. 2) Before failure of the samples, the mean strain of the jog area is largest for the compressive en echelon fault and smallest for the extensional en echelon fault, consistent with the understanding that the jog area of a compressive fault is subjected to compression while that of an extensional fault is subjected to tension. 3) For the extensional en echelon sample, the dislocation across the fault on the load-driving end is greater than that across the fault on the fixed end. Within the same fault, the dislocation across the branch far from the jog area is greater than that across the branch near the jog area, indicating the restricting effect of the jog area on fault activity. Moreover, the average dislocation across the faults is much greater than that across the cracks. 4) For the compressive en echelon fault, wing cracks initiated first and propagated outward from the jog area. Subsequently, a wedge-shaped strain concentration area initiated and developed in the jog area because of the interaction of the two faults. Finally, the jog area failed when one crack propagated rapidly and connected the two ends of the faults. The DSCM system used in this study clearly reveals the deformation and failure process of the en echelon fault samples, and DSCM experiments can be performed without any specimen preparation and without affecting other instrumentation. DSCM is therefore expected to be a suitable tool for laboratory studies of fault samples.
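
    The core DSCM computation is subset matching by normalized cross-correlation. The sketch below recovers the pixel-level displacement of one subset between a reference and a deformed speckle image; image content, subset size, and search radius are illustrative assumptions (a real system adds sub-pixel interpolation).

```python
# Pixel-level subset matching by zero-normalized cross-correlation (ZNCC).
import numpy as np

def ncc(a, b):
    """Zero-normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_subset(ref, cur, y, x, size=15, search=10):
    """Displacement (dy, dx) of the subset centered at (y, x)."""
    h = size // 2
    template = ref[y-h:y+h+1, x-h:x+h+1]
    best = (0, 0, -2.0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            patch = cur[y-h+dy:y+h+dy+1, x-h+dx:x+h+dx+1]
            score = ncc(template, patch)
            if score > best[2]:
                best = (dy, dx, score)
    return best[:2]

rng = np.random.default_rng(4)
ref = rng.random((200, 200))                # stand-in speckle image
cur = np.roll(ref, (3, -2), axis=(0, 1))    # rigid shift: 3 px down, 2 px left
print(match_subset(ref, cur, 100, 100))     # expect (3, -2)
```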

  11. Adaptation of superconducting fault current limiter to high-speed reclosing

    NASA Astrophysics Data System (ADS)

    Koyama, T.; Yanabu, S.

    2009-10-01

    Using a high temperature superconductor, we constructed and tested a model superconducting fault current limiter (SFCL). Because the superconductor might break in some cases due to excessive heat generation, it is desirable to interrupt the current flowing through the superconductor as early as possible. We therefore proposed an SFCL using an electromagnetic repulsion switch, a simple structure composed of a superconductor, a vacuum interrupter, and a by-pass coil. With this equipment, the duration of current flow in the superconductor can easily be reduced to less than 0.5 cycle, while the fault current is limited by the large reactance of the parallel coil. The electric power system imposes a high-speed reclosing duty after a fault current is interrupted: the back-up breaker is re-closed within 350 ms, so the electromagnetic repulsion switch must return to its former state and the superconductor must recover to the superconducting state before high-speed reclosing. We therefore proposed an SFCL using an electromagnetic repulsion switch that provides this new reclosing function, and studied the recovery time of the superconductor, which must return to the superconducting state within 350 ms. In this paper, the recovery time characteristics of the superconducting wire were investigated. We also combined the superconductor with the electromagnetic repulsion switch and performed performance tests. As a result, high-speed reclosing within 350 ms was proven to be possible.

  12. Design of a fault-tolerant reversible control unit in molecular quantum-dot cellular automata

    NASA Astrophysics Data System (ADS)

    Bahadori, Golnaz; Houshmand, Monireh; Zomorodi-Moghadam, Mariam

    Quantum-dot cellular automata (QCA) is a promising emerging nanotechnology that has been attracting considerable attention due to its small feature size, ultra-low power consumption, and high clock frequency. Therefore, there have been many efforts to design computational units based on this technology. Despite these advantages of QCA-based nanotechnologies, their implementation is susceptible to a high error rate. On the other hand, using reversible computing leads to zero bit erasures and no energy dissipation, and because reversible computation does not lose information, fault detection happens with high probability. In this paper, we first propose a fault-tolerant control unit using reversible gates which improves on the previous design. The proposed design is then synthesized to the QCA technology and simulated with the QCADesigner tool. Evaluation results confirm the performance of the proposed approach.
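
    As a side illustration of why reversibility aids fault detection, consider the classic Toffoli (CCNOT) gate rather than the paper's QCA circuit: the gate is a bijection, so running it backwards recomputes the inputs, and any mismatch exposes an error in the forward pass.

```python
# Toffoli (CCNOT) gate: reversible, self-inverse, and universal for
# classical reversible logic.
def toffoli(a, b, c):
    """Flip c iff both controls are 1; a bijection on 3-bit triples."""
    return a, b, c ^ (a & b)

# Bijectivity: every output triple occurs exactly once.
outputs = {toffoli(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)}
assert len(outputs) == 8

inputs = (1, 1, 0)
forward = toffoli(*inputs)
assert toffoli(*forward) == inputs    # inverse check exposes corruption
print(forward)
```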

  13. Real-Time Diagnosis of Faults Using a Bank of Kalman Filters

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2006-01-01

    A new robust method of automated real-time diagnosis of faults in an aircraft engine or a similar complex system involves the use of a bank of Kalman filters. In order to be highly reliable, a diagnostic system must be designed to account for the numerous failure conditions that an aircraft engine may encounter in operation. The method achieves this objective through the utilization of multiple Kalman filters, each of which is uniquely designed based on a specific failure hypothesis. A fault-detection-and-isolation (FDI) system, developed based on this method, is able to isolate faults in sensors and actuators while detecting component faults (abrupt degradation in engine component performance). By affording a capability for real-time identification of minor faults before they grow into major ones, the method promises to enhance safety and reduce operating costs. The robustness of this method is further enhanced by incorporating information regarding the aging condition of an engine. In general, real-time fault diagnostic methods use the nominal performance of a "healthy" new engine as a reference condition in the diagnostic process. Such an approach does not account for gradual changes in performance associated with aging of an otherwise healthy engine. By incorporating information on gradual, aging-related changes, the new method makes it possible to retain at least some of the sensitivity and accuracy needed to detect incipient faults while preventing false alarms that could result from erroneous interpretation of symptoms of aging as symptoms of failures. The figure schematically depicts an FDI system according to the new method. The FDI system is integrated with an engine, from which it accepts two sets of input signals: sensor readings and actuator commands. Two main parts of the FDI system are a bank of Kalman filters and a subsystem that implements FDI decision rules. Each Kalman filter is designed to detect a specific sensor or actuator fault. When a sensor or actuator fault occurs, large estimation errors are generated by all filters except the one using the correct hypothesis. By monitoring the residual output of each filter, the specific fault that has occurred can be detected and isolated on the basis of the decision rules. A set of parameters that indicate the performance of the engine components is estimated by the "correct" Kalman filter for use in detecting component faults. To reduce the loss of diagnostic accuracy and sensitivity in the face of aging, the FDI system accepts information from a steady-state-condition-monitoring system. This information is used to update the Kalman filters and a data bank of trim values representative of the current aging condition.
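
    A toy version of the bank-of-filters idea, with invented numbers rather than engine models: three scalar Kalman filters share one plant model but assume different sensor-bias hypotheses, and the filter whose normalized residuals stay smallest identifies the fault.

```python
# Bank of Kalman filters, one per failure hypothesis; the whitest-residual
# filter wins. Plant, noise levels, and bias values are illustrative.
import numpy as np

A, H, Q, R = 0.95, 1.0, 0.01, 0.04     # scalar state/measurement model
hypotheses = {"no fault": 0.0, "bias +2": 2.0, "bias -2": -2.0}

rng = np.random.default_rng(5)
x_true, bias = 0.0, 2.0                # actual fault: sensor bias of +2
z_hist = []
for _ in range(200):
    x_true = A * x_true + rng.normal(0, np.sqrt(Q))
    z_hist.append(H * x_true + bias + rng.normal(0, np.sqrt(R)))

scores = {}
for name, b in hypotheses.items():
    x, P, sq_sum = 0.0, 1.0, 0.0
    for z in z_hist:
        x, P = A * x, A * P * A + Q                # predict
        r = (z - b) - H * x                        # hypothesis-corrected residual
        S = H * P * H + R
        K = P * H / S
        x, P = x + K * r, (1 - K * H) * P          # update
        sq_sum += r * r / S                        # normalized residual energy
    scores[name] = sq_sum / len(z_hist)

print(min(scores, key=scores.get), scores)         # the "bias +2" filter wins
```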

  14. Evaluating Application Resilience with XRay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Sui; Bronevetsky, Greg; Li, Bin

    2015-05-07

    The rising count and shrinking feature size of transistors within modern computers is making them increasingly vulnerable to various types of soft faults. This problem is especially acute in high-performance computing (HPC) systems used for scientific computing, because these systems include many thousands of compute cores and nodes, all of which may be utilized in a single large-scale run. The increasing vulnerability of HPC applications to errors induced by soft faults is motivating extensive work on techniques to make these applications more resilient to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and tolerance techniques. Effective use of such techniques requires a detailed understanding of how a given application is affected by soft faults to ensure that (i) efforts to improve application resilience are spent on the code regions most vulnerable to faults and (ii) the appropriate resilience technique is applied to each code region. This paper presents XRay, a tool to view application vulnerability to soft errors, and illustrates how XRay can be used in the context of a representative application. In addition to providing actionable insights into application behavior, XRay automatically selects the number of fault injection experiments required to provide an informative view of application behavior, ensuring that the information is statistically well-grounded without performing unnecessary experiments.
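
    A minimal bit-flip fault-injection experiment of the kind such tools automate (this is not XRay itself): flip one random bit of an IEEE-754 accumulator mid-computation and measure the output deviation. The kernel and trial count are arbitrary.

```python
# Single-bit-flip injection into a float64 accumulator.
import random
import struct

def flip_random_bit(value):
    """Return value with one uniformly chosen bit of its 64-bit word flipped."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", value))
    bits ^= 1 << random.randrange(64)
    (flipped,) = struct.unpack("<d", struct.pack("<Q", bits))
    return flipped

def kernel(xs, faulty_step=None):
    """Running sum; optionally corrupt the accumulator at one step."""
    acc = 0.0
    for i, x in enumerate(xs):
        acc += x
        if i == faulty_step:
            acc = flip_random_bit(acc)
    return acc

xs = [0.1] * 1000
clean = kernel(xs)
# Flips of high exponent bits can yield inf/nan; that is part of the point.
errors = [abs(kernel(xs, faulty_step=random.randrange(1000)) - clean)
          for _ in range(20)]
print(sorted(errors))   # many negligible, a few catastrophic deviations
```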

  15. Applying fault tree analysis to the prevention of wrong-site surgery.

    PubMed

    Abecassis, Zachary A; McElroy, Lisa M; Patel, Ronak M; Khorzad, Rebeca; Carroll, Charles; Mehrotra, Sanjay

    2015-01-01

    Wrong-site surgery (WSS) is a rare event that occurs to hundreds of patients each year. Despite national implementation of the Universal Protocol over the past decade, development of effective interventions remains a challenge. We performed a systematic review of the literature reporting root causes of WSS and used the results to perform a fault tree analysis to assess the reliability of the system in preventing WSS and identifying high-priority targets for interventions aimed at reducing WSS. Process components where a single error could result in WSS were labeled with OR gates; process aspects reinforced by verification were labeled with AND gates. The overall redundancy of the system was evaluated based on prevalence of AND gates and OR gates. In total, 37 studies described risk factors for WSS. The fault tree contains 35 faults, most of which fall into five main categories. Despite the Universal Protocol mandating patient verification, surgical site signing, and a brief time-out, a large proportion of the process relies on human transcription and verification. Fault tree analysis provides a standardized perspective of errors or faults within the system of surgical scheduling and site confirmation. It can be adapted by institutions or specialties to lead to more targeted interventions to increase redundancy and reliability within the preoperative process. Copyright © 2015 Elsevier Inc. All rights reserved.
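
    The gate logic translates directly into arithmetic under an independence assumption. A minimal sketch with invented probabilities, illustrating why OR-dominated (single-error) paths control the top-event likelihood while AND gates (verification-backed steps) suppress it:

    ```python
    # Minimal fault-tree evaluation sketch (hypothetical probabilities).
    # OR gates model single-error paths; AND gates model steps reinforced by
    # independent verification. Assumes independent basic events.
    def or_gate(*ps):   # top event occurs if at least one input fails
        q = 1.0
        for p in ps:
            q *= (1.0 - p)
        return 1.0 - q

    def and_gate(*ps):  # all redundant checks must fail together
        q = 1.0
        for p in ps:
            q *= p
        return q

    transcription_error = 1e-3                   # single-point human transcription
    site_marking_wrong = and_gate(1e-3, 1e-2)    # marking error AND missed time-out
    p_wss = or_gate(transcription_error, site_marking_wrong)
    print(f"P(wrong-site surgery) ~ {p_wss:.2e}")  # dominated by the OR branch
    ```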

  16. 26 CFR 1.167(a)-10 - When depreciation deduction is allowable.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 2 2010-04-01 2010-04-01 false When depreciation deduction is allowable. 1.167... Corporations § 1.167(a)-10 When depreciation deduction is allowable. (a) A taxpayer should deduct the proper depreciation allowance each year and may not increase his depreciation allowances in later years by reason of...

  17. 26 CFR 20.2053-6 - Deduction for taxes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 14 2010-04-01 2010-04-01 false Deduction for taxes. 20.2053-6 Section 20.2053... TAXES ESTATE TAX; ESTATES OF DECEDENTS DYING AFTER AUGUST 16, 1954 Taxable Estate § 20.2053-6 Deduction for taxes. (a) In general—(1) Taxes are deductible in computing a decedent's gross estate— (i) Only as...

  18. 7 CFR 3.81 - Procedures for salary offset: when deductions may begin.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 1 2010-01-01 2010-01-01 false Procedures for salary offset: when deductions may... Salary Offset § 3.81 Procedures for salary offset: when deductions may begin. (a) Deductions to liquidate... Offset Salary to collect from the employee's current pay. (b) If the employee filed a petition for a...

  19. 78 FR 41961 - Submission for Review: 3206-0170, Application for Refund of Retirement Deductions/FERS (SF 3106...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-12

    ... Retirement Deductions/FERS (SF 3106) and Current/Former Spouse(s) Notification of Application for Refund of Retirement Deductions Under FERS (SF 3106A) AGENCY: U.S. Office of Personnel Management. ACTION: 30-Day... Current/Former Spouse(s) Notification of Application for Refund of Retirement Deductions Under FERS (SF...

  20. 26 CFR 1.249-1 - Limitation on deduction of bond premium on repurchase.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... A issues a callable 20-year convertible bond at face for $1,000 bearing interest at 10 percent per... 26 Internal Revenue 3 2010-04-01 2010-04-01 false Limitation on deduction of bond premium on... deduction of bond premium on repurchase. (a) Limitation—(1) General rule. No deduction is allowed to the...

  1. 26 CFR 1.163-12 - Deduction of original issue discount on instrument held by related foreign person.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... on which the amount is includible in income is determined with reference to the method of accounting... 26 Internal Revenue 2 2012-04-01 2012-04-01 false Deduction of original issue discount on... Deductions for Individuals and Corporations § 1.163-12 Deduction of original issue discount on instrument...

  2. 26 CFR 1.163-12 - Deduction of original issue discount on instrument held by related foreign person.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... on which the amount is includible in income is determined with reference to the method of accounting... 26 Internal Revenue 2 2014-04-01 2014-04-01 false Deduction of original issue discount on... Deductions for Individuals and Corporations § 1.163-12 Deduction of original issue discount on instrument...

  3. 26 CFR 1.163-12 - Deduction of original issue discount on instrument held by related foreign person.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... on which the amount is includible in income is determined with reference to the method of accounting... 26 Internal Revenue 2 2013-04-01 2013-04-01 false Deduction of original issue discount on... Deductions for Individuals and Corporations § 1.163-12 Deduction of original issue discount on instrument...

  4. "HOT Faults", Fault Organization, and the Occurrence of the Largest Earthquakes

    NASA Astrophysics Data System (ADS)

    Carlson, J. M.; Hillers, G.; Archuleta, R. J.

    2006-12-01

    We apply the concept of "Highly Optimized Tolerance" (HOT) to the investigation of spatio-temporal seismicity evolution, in particular mechanisms associated with the largest earthquakes. HOT provides a framework for investigating both qualitative and quantitative features of complex feedback systems that are far from equilibrium and punctuated by rare, catastrophic events. In HOT, robustness trade-offs lead to complexity and power laws in systems that are coupled to evolving environments. HOT was originally inspired by biology and engineering, where systems are internally very highly structured, through biological evolution or deliberate design, and perform in an optimum manner despite fluctuations in their surroundings. Though faults and fault systems are not designed in ways comparable to biological and engineered structures, feedback processes are responsible in a conceptually comparable way for the development, evolution and maintenance of younger fault structures and primary slip surfaces of mature faults, respectively. Hence, in geophysical applications the "optimization" approach is perhaps more aptly replaced by "organization", reflecting the distinction between HOT and random, disorganized configurations, and highlighting the importance of structured interdependencies that evolve via feedback among and between different spatial and temporal scales. Expressed in the terminology of the HOT concept, mature faults represent a configuration optimally organized for the release of strain energy, whereas immature, more heterogeneous fault networks represent intermittent, suboptimal systems that are regularized towards structural simplicity and the ability to generate large earthquakes more easily. We discuss fault structure and the associated seismic response pattern within the HOT concept, and outline fundamental differences between this novel interpretation and more orthodox viewpoints such as the criticality concept. The discussion is flanked by numerical simulations of a 2D fault model, where we investigate different feedback mechanisms and their effect on seismicity evolution. We introduce an approach to estimate the state of a fault, and thus its capability of generating a large (system-wide) event, assuming plausibly heterogeneous distributions of hypocenters and stresses.

  5. 3D fault curvature and fractal roughness: Insights for rupture dynamics and ground motions using a Discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Ulrich, Thomas; Gabriel, Alice-Agnes

    2017-04-01

    Natural fault geometries are subject to a large degree of uncertainty. Their geometrical structure is not directly observable and may only be inferred from surface traces or geophysical measurements. Most studies aiming at assessing the potential seismic hazard of natural faults rely on idealised fault models, based on observable large-scale features. Yet, real faults are wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. Dynamic rupture simulations aim to capture the observed complexity of earthquake sources and ground-motions. From a numerical point of view, incorporating rough faults in such simulations is challenging: it requires optimised codes able to run efficiently on high-performance computers and simultaneously handle complex geometries. Physics-based rupture dynamics hosted by rough faults appear to be much closer to source models inverted from observation in terms of complexity. Moreover, the simulated ground-motions present many similarities with observed ground-motion records. Thus, such simulations may foster our understanding of earthquake source processes and help derive more accurate seismic hazard estimates. In this presentation, the software package SeisSol (www.seissol.org), based on an ADER-Discontinuous Galerkin scheme, is used to solve the spontaneous dynamic earthquake rupture problem. The usage of tetrahedral unstructured meshes naturally allows for complicated fault geometries. However, SeisSol's high-order discretisation in time and space is not particularly suited for small-scale fault roughness. We will demonstrate modelling conditions under which SeisSol resolves rupture dynamics on rough faults accurately. The strong impact of the geometric gradient of the fault surface on the rupture process is then shown in 3D simulations. Next, the benefits of explicitly modelling fault curvature and roughness, as distinct from prescribing heterogeneous initial stress conditions on a planar fault, are demonstrated. Furthermore, we show that rupture extent, rupture-front coherency and rupture speed are highly dependent on the initial amplitude of stress acting on the fault, defined by the normalised prestress factor R, the ratio of the potential stress drop over the breakdown stress drop. The effects of fault complexity are particularly pronounced for lower R. By low-pass filtering a rough fault at several cut-off wavelengths, we then try to capture rupture complexity using a simplified fault geometry. We find that equivalent source dynamics can only be obtained using a lightly filtered fault associated with a reduced stress level. To investigate the wavelength-dependent roughness effect, the fault geometry is bandpass-filtered over several spectral ranges. We show that geometric fluctuations cause rupture velocity fluctuations of similar length scale. The impact of fault geometry is especially pronounced when the rupture front velocity is near supershear. Roughness fluctuations significantly smaller than the rupture front characteristic dimension (cohesive zone size) affect only macroscopic rupture properties, thus posing a minimum length scale that limits the required resolution of 3D fault complexity. Lastly, the effect of fault curvature and roughness on the simulated ground-motions is assessed. Despite employing a simple linear slip-weakening friction law, the simulated ground-motions compare well with estimates from ground-motion prediction equations, even at relatively high frequencies.
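
    The roughness filtering described above can be illustrated with a short, self-contained sketch: generate a 1-D self-similar fault profile with a power-law amplitude spectrum, then low-pass filter it at a chosen cut-off wavelength. The Hurst exponent, sampling, and amplitude scaling below are illustrative assumptions, not the parameters used in the study.

    ```python
    # Sketch: generate a self-similar rough fault profile and low-pass filter
    # it at a cut-off wavelength, mimicking the filtering experiments above.
    import numpy as np

    def rough_profile(n, dx, hurst=0.8, rms_slope=1e-2, seed=1):
        rng = np.random.default_rng(seed)
        k = np.fft.rfftfreq(n, d=dx)                 # spatial wavenumbers
        amp = np.zeros_like(k)
        amp[1:] = k[1:] ** (-(0.5 + hurst))          # amplitude ~ k^-(0.5+H)
        phase = rng.uniform(0, 2 * np.pi, len(k))    # random phases
        h = np.fft.irfft(amp * np.exp(1j * phase), n=n)
        return h * (rms_slope * n * dx) / np.std(h)  # crude amplitude scaling

    def lowpass(h, dx, cutoff_wavelength):
        K = np.fft.rfft(h)
        k = np.fft.rfftfreq(len(h), d=dx)
        K[k > 1.0 / cutoff_wavelength] = 0.0         # drop short wavelengths
        return np.fft.irfft(K, n=len(h))

    h = rough_profile(n=4096, dx=25.0)               # 25 m sampling, ~100 km fault
    h_smooth = lowpass(h, dx=25.0, cutoff_wavelength=5000.0)  # keep > 5 km scales
    ```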

  6. Finding faults: analogical comparison supports spatial concept learning in geoscience.

    PubMed

    Jee, Benjamin D; Uttal, David H; Gentner, Dedre; Manduca, Cathy; Shipley, Thomas F; Sageman, Bradley

    2013-05-01

    A central issue in education is how to support the spatial thinking involved in learning science, technology, engineering, and mathematics (STEM). We investigated whether and how the cognitive process of analogical comparison supports learning of a basic spatial concept in geoscience, fault. Because of the high variability in the appearance of faults, it may be difficult for students to learn the category-relevant spatial structure. There is abundant evidence that comparing analogous examples can help students gain insight into important category-defining features (Gentner in Cogn Sci 34(5):752-775, 2010). Further, comparing high-similarity pairs can be especially effective at revealing key differences (Sagi et al. 2012). Across three experiments, we tested whether comparison of visually similar contrasting examples would help students learn the fault concept. Our main findings were that participants performed better at identifying faults when they (1) compared contrasting (fault/no fault) cases versus viewing each case separately (Experiment 1), (2) compared similar as opposed to dissimilar contrasting cases early in learning (Experiment 2), and (3) viewed a contrasting pair of schematic block diagrams as opposed to a single block diagram of a fault as part of an instructional text (Experiment 3). These results suggest that comparison of visually similar contrasting cases helped distinguish category-relevant from category-irrelevant features for participants. When such comparisons occurred early in learning, participants were more likely to form an accurate conceptual representation. Thus, analogical comparison of images may provide one powerful way to enhance spatial learning in geoscience and other STEM disciplines.

  7. Minimalist fault-tolerance techniques for mitigating single-event effects in non-radiation-hardened microcontrollers

    NASA Astrophysics Data System (ADS)

    Caldwell, Douglas Wyche

    Commercial microcontrollers--monolithic integrated circuits containing microprocessor, memory and various peripheral functions--such as are used in industrial, automotive and military applications, present spacecraft avionics system designers with an appealing mix of higher performance and lower power together with faster system-development time and lower unit costs. However, these parts are not radiation-hardened for application in the space environment, and Single-Event Effects (SEE) caused by high-energy, ionizing radiation present a significant challenge. Mitigating these effects with techniques which require minimal additional support logic, and thereby preserve the high functional density of these devices, can allow their benefits to be realized. This dissertation uses fault tolerance to mitigate the transient errors and occasional latchups that non-hardened microcontrollers can experience in the space radiation environment. Space systems requirements and the historical use of fault-tolerant computers in spacecraft provide context. Space radiation and its effects in semiconductors define the fault environment. A reference architecture is presented which uses two or three microcontrollers with a combination of hardware and software voting techniques to mitigate SEE. A prototypical spacecraft function (an inertial measurement unit) is used to illustrate the techniques and to explore how real application requirements impact the fault-tolerance approach. Low-cost approaches which leverage features of existing commercial microcontrollers are analyzed. A high-speed serial bus is used for voting among redundant devices and a novel wire-OR output voting scheme exploits the bidirectional controls of I/O pins. A hardware testbed and prototype software were constructed to evaluate two- and three-processor configurations. Simulated Single-Event Upsets (SEUs) were injected at high rates and the response of the system monitored. The resulting statistics were used to evaluate technical effectiveness. Fault-recovery probabilities (coverages) higher than 99.99% were experimentally demonstrated. The greater than thousand-fold reduction in observed effects provides performance comparable with the SEE tolerance of tested, rad-hard devices. Technical results were combined with cost data to assess the cost-effectiveness of the techniques. It was found that a three-processor system was only marginally more effective than a two-device system at detecting and recovering from faults, but consumed substantially more resources, suggesting that simpler configurations are generally more cost-effective.
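
    The voting schemes compared in this work can be illustrated abstractly. Below is a minimal sketch of bitwise majority (triplex) voting versus duplex comparison on a shared output word, with one simulated SEU; this is a conceptual stand-in, not the dissertation's wire-OR hardware mechanism.

    ```python
    # Sketch of two- vs three-replica voting on a shared output word.
    # A simulated SEU flips a bit in one replica; the majority (TMR) voter
    # masks it, while a duplex pair can only detect the mismatch.
    def tmr_vote(a, b, c):
        # Bitwise majority: a bit is 1 in the output iff it is 1 in >= 2 inputs.
        return (a & b) | (a & c) | (b & c)

    def duplex_check(a, b):
        return (a == b), a  # detection only; cannot tell which copy is wrong

    good = 0b1011_0010
    upset = good ^ (1 << 4)                   # single-event upset flips bit 4

    print(bin(tmr_vote(good, upset, good)))   # SEU masked: equals `good`
    ok, _ = duplex_check(good, upset)
    print("duplex agreement:", ok)            # mismatch detected, not corrected
    ```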

  8. Shallow Seismic Reflection Study of Recently Active Fault Scarps, Mina Deflection, Western Nevada

    NASA Astrophysics Data System (ADS)

    Black, R. A.; Christie, M.; Tsoflias, G. P.; Stockli, D. F.

    2006-12-01

    During the spring and summer of 2006 University of Kansas geophysics students and faculty acquired shallow, high-resolution seismic reflection data over actively deforming alluvial fans developing across the Emigrant Peak (in Fish Lake Valley) and Queen Valley Faults in western Nevada. These normal faults represent a portion of the transition from the right-lateral deformation associated with the Walker Lane/Eastern California Shear Zone to the normal and left-lateral faulting of the Mina Deflection. Data were gathered over areas of recent high-resolution geological mapping and limited trenching by KU students. An extensive GPR data grid was also acquired. The GPR results are reported in Christie et al., 2006. The seismic data gathered in the spring included both walkaway tests and a short CMP test line. These data indicated that a very near-surface P-wave to S-wave conversion was taking place and that very high quality S-wave reflections were probably dominating shot records to over one second in time. CMP lines acquired during the summer utilized a 144-channel networked Geode system, single 28 Hz geophones, and a 30.06 downhole rifle source. Receiver spacing was 0.5 m, source spacing was 1.0 m, and CMP bin spacing was 0.25 m for all lines. Surveying was performed using an RTK system, which was also used to develop a concurrent high-resolution DEM. A dip line of over 400 m and a strike line over 100 m in length were shot across the active fan scarp in Fish Lake Valley. Data processing is still underway. However, preliminary interpretation of common-offset gathers and brute stacks indicates very complex faulting and detailed stratigraphic information to depths of over 125 m. Depth of information was actually limited by the 1024 ms recording time. Several west-dipping normal faults downstep towards the basin. East-dipping antithetic normal faulting is extensive. Several distinctive stratigraphic packages are bound by the faults and apparent unconformities. A CMP dip line was also run across a large active scarp in Queen Valley near Boundary Peak. Due to slope steepness and extensive boulder armoring, shot and receiver locations had to be skipped within several meters of the actual scarp location. Initial structural and stratigraphic interpretations are similar to those in the Fish Lake Valley location. Overall the data prove that the actively deforming fans can be imaged in detail sufficient to perform structural and possibly seismic stratigraphic analysis within the upper one hundred meters of the fans, if not deeper.

  9. Learning Capability and Business Performance: A Non-Financial and Financial Assessment

    ERIC Educational Resources Information Center

    Ma Prieto, Isabel; Revilla, Elena

    2006-01-01

    Purpose: There has been little research that includes reliable deductions about the positive influence of learning capability on business performance. For this reason, the main objective of the present study is to empirically explore the link between learning capability in organizations and business performance evaluated in both financial and…

  10. Verifying Digital Components of Physical Systems: Experimental Evaluation of Test Quality

    NASA Astrophysics Data System (ADS)

    Laputenko, A. V.; López, J. E.; Yevtushenko, N. V.

    2018-03-01

    This paper continues the study of high-quality test derivation for verifying digital components used in various physical systems, such as sensors, data-transfer components, etc. We have used logic circuits b01-b10 of the ITC'99 benchmark package (Second Release) for experimental evaluation, which, as stated before, describe digital components of physical systems designed for various applications. Test sequences are derived for detecting the best-known faults of the reference logic circuit using three different approaches to test derivation. Three widely used fault types, namely stuck-at faults, bridges, and faults which slightly modify the behavior of one gate, are considered as possible faults of the reference behavior. The most interesting test sequences are short test sequences that can provide appropriate guarantees after testing, and thus we experimentally study various approaches to the derivation of so-called complete test suites which detect all fault types. In the first series of experiments, we compare two approaches for deriving complete test suites. In the first approach, a shortest test sequence is derived for testing each fault. In the second approach, a test sequence is pseudo-randomly generated using appropriate software for logic synthesis and verification (the ABC system in our study) and thus can be longer. However, after deleting sequences detecting the same set of faults, a test suite returned by the second approach is shorter. The latter underlines the fact that in many cases it is useless to spend time and effort deriving a shortest distinguishing sequence; it is better to apply test minimization afterwards. The performed experiments also show that the use of only randomly generated test sequences is not very efficient, since such sequences do not detect all the faults of any type. After reaching a fault coverage of around 70%, saturation is observed, and the fault coverage cannot be increased any further. For deriving high-quality short test suites, the approach that combines randomly generated sequences with sequences aimed at detecting faults not detected by random tests allows good fault coverage to be reached using short test sequences.
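
    The post-generation minimization step, dropping sequences whose detected faults are already covered by others, is essentially greedy set cover. A minimal sketch with invented fault sets:

    ```python
    # Sketch of test-suite minimization: greedily keep the sequence covering
    # the most not-yet-detected faults (classic greedy set cover). The test
    # names and fault sets are illustrative placeholders.
    def minimize_suite(detects):
        # detects: {test_name: set_of_detected_faults}
        remaining = set().union(*detects.values())
        kept = []
        while remaining:
            best = max(detects, key=lambda t: len(detects[t] & remaining))
            if not detects[best] & remaining:
                break  # no test covers anything new
            kept.append(best)
            remaining -= detects[best]
        return kept

    suite = {
        "rand_017": {"sa0_g3", "sa1_g7", "bridge_g2_g5"},
        "rand_042": {"sa1_g7"},                      # subsumed by rand_017
        "targeted_g9": {"delay_g9", "sa0_g9"},
    }
    print(minimize_suite(suite))  # ['rand_017', 'targeted_g9']
    ```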

  11. Dynamic ruptures on faults of complex geometry: insights from numerical simulations, from large-scale curvature to small-scale fractal roughness

    NASA Astrophysics Data System (ADS)

    Ulrich, T.; Gabriel, A. A.

    2016-12-01

    The geometry of faults is subject to a large degree of uncertainty. Being buried structures that are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods such as reflection seismology. As a consequence, most studies aiming at assessing the potential hazard of faults rely on idealized fault models, based on observable large-scale features. Yet, real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging: they require optimized codes able to run efficiently on high-performance computing infrastructure and simultaneously handle complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer to source models inverted from observation in terms of complexity. Incorporating fault geometry on all scales may thus be crucial to modeling realistic earthquake source processes and to estimating seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol solves the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g. onset of supershear transition, rupture front coherence, propagation of self-healing pulses, etc.) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale, in terms of rupture-inherent length scales, below which the rupture ceases to be sensitive to roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature a classical linear slip-weakening friction law on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.

  12. Dynamic earthquake rupture simulation on nonplanar faults embedded in 3D geometrically complex, heterogeneous Earth models

    NASA Astrophysics Data System (ADS)

    Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.

    2014-12-01

    Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated, far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require both provably stable and high-order accurate numerical methods. We present a numerical method for: (a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; (b) dynamic propagation of earthquake ruptures along rough faults; (c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first-order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th-order accurate in the interior and 3rd-order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. Time stepping is performed with a 4th-order accurate explicit low-storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.
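
    As a rough illustration of the summation-by-parts/penalty construction described above, the sketch below solves the 1-D advection equation with a second-order SBP first-derivative operator and a SAT penalty weakly enforcing the inflow boundary, stepped with classical 4th-order Runge-Kutta. It is a minimal analogue only: the method described above is 3-D, elastic, 6th-order in the interior, and uses a low-storage Runge-Kutta scheme.

    ```python
    # 1-D SBP-SAT analogue: advection u_t + a u_x = 0, a > 0 (inflow at left).
    import numpy as np

    n, L, a = 201, 1.0, 1.0
    dx = L / (n - 1)

    # Second-order SBP first-derivative operator: D = H^{-1} Q, with
    # Q + Q^T = diag(-1, 0, ..., 0, 1) (the summation-by-parts property).
    H = dx * np.eye(n); H[0, 0] = H[-1, -1] = dx / 2
    Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
    Q[0, 0], Q[-1, -1] = -0.5, 0.5
    D = np.linalg.inv(H) @ Q

    e0 = np.zeros(n); e0[0] = 1.0
    Hinv = np.linalg.inv(H)

    def rhs(u, g=0.0):
        # SAT penalty drives u[0] toward the inflow data g; the penalty
        # strength tau = -a is the standard choice yielding an energy estimate.
        return -a * (D @ u) - a * Hinv @ (e0 * (u[0] - g))

    u = np.exp(-200 * (np.linspace(0, L, n) - 0.3) ** 2)  # Gaussian pulse
    dt = 0.4 * dx / a
    for _ in range(500):
        k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
        u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)  # classical RK4 step
    ```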

  13. Understanding Loss Deductions For Yard Trees

    Treesearch

    John Greene

    1998-01-01

    The sudden destruction of trees or other yard plants due to a fire, storm, or massive insect attack qualifies for a casualty loss deduction. Unfortunately, the casualty loss rules for personal-use property allow deductions only for large losses. To calculate your deduction, start with the lesser of the decrease in fair market value of your property caused by the loss of...

  14. 26 CFR 1.1312-5 - Correlative deductions and inclusions for trusts or estates and legatees, beneficiaries, or heirs.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 11 2010-04-01 2010-04-01 true Correlative deductions and inclusions for trusts... of Tax Between Years and Special Limitations § 1.1312-5 Correlative deductions and inclusions for... the amount of the deduction allowed by sections 651 and 661 or the inclusion in taxable income of the...

  15. 26 CFR 1.221-1 - Deduction for interest paid on qualified education loans after December 31, 2001.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... § 1.163-7, original issue discount on a qualified education loan is not deductible until paid. See... education loans after December 31, 2001. 1.221-1 Section 1.221-1 Internal Revenue INTERNAL REVENUE SERVICE... Deductions for Individuals § 1.221-1 Deduction for interest paid on qualified education loans after December...

  16. 26 CFR 1.221-1 - Deduction for interest paid on qualified education loans after December 31, 2001.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... § 1.163-7, original issue discount on a qualified education loan is not deductible until paid. See... education loans after December 31, 2001. 1.221-1 Section 1.221-1 Internal Revenue INTERNAL REVENUE SERVICE... Deductions for Individuals § 1.221-1 Deduction for interest paid on qualified education loans after December...

  17. 26 CFR 1.221-1 - Deduction for interest paid on qualified education loans after December 31, 2001.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... § 1.163-7, original issue discount on a qualified education loan is not deductible until paid. See... education loans after December 31, 2001. 1.221-1 Section 1.221-1 Internal Revenue INTERNAL REVENUE SERVICE... Deductions for Individuals § 1.221-1 Deduction for interest paid on qualified education loans after December...

  18. Specifying Specification.

    PubMed

    Paulo, Norbert

    2016-03-01

    This paper tackles the accusation that applied ethics is no serious academic enterprise because it lacks theoretical bracing. It does so in two steps. In the first step I introduce and discuss a highly acclaimed method to guarantee stability in ethical theories: Henry Richardson's specification. The discussion shows how seriously ethicists take the stability of the connection between the foundational parts of their theories and their further development as well as their "application" to particular problems or cases. A detailed scrutiny of specification leads to the second step, where I use insights from legal theory to inform the debate around stability from that point of view. This view reveals some of specification's limitations. I suggest that, once specification is sufficiently specified, it appears astonishingly similar to deduction as used in legal theory. Legal theory also provides valuable insight into the functional range of deduction and its relation to other forms of reasoning. This leads to a richer understanding of stability in normative theories and to a smart division of labor between deduction and other forms of reasoning. The comparison to legal theory thereby provides a framework for how different methods such as specification, deduction, balancing, and analogy relate to one another.

  19. Numerical modeling of mountain formation on Io

    NASA Astrophysics Data System (ADS)

    Turtle, E. P.; Jaeger, W. L.; McEwen, A. S.; Keszthelyi, L.

    2000-10-01

    Io has ~100 mountains [1] that, although often associated with paterae [2], do not appear to be volcanic structures. The mountains are up to 16 km high [3] and are generally isolated from each other. We have performed finite-element simulations of the formation of these mountains, investigating several mountain-building scenarios: (1) a volcanic construct due to heterogeneous resurfacing on a coherent, homogeneous lithosphere; (2) a volcanic construct on a faulted, homogeneous lithosphere; (3) a volcanic construct on a faulted, homogeneous lithosphere under compression induced by subsidence due to Io's high resurfacing rate; (4) a faulted, homogeneous lithosphere under subsidence-induced compression; (5) a faulted, heterogeneous lithosphere under subsidence-induced compression; and (6) a mantle upwelling beneath a coherent, homogeneous lithosphere under subsidence-induced compression. The models of volcanic constructs do not produce mountains similar to those observed on Io. Neither do those of pervasively faulted lithospheres under compression; these predict a series of tilted lithospheric blocks or plateaus, as opposed to the isolated structures that are observed. Our models show that rising mantle material impinging on the base of the lithosphere can focus the compressional stresses to localize thrust faulting and mountain building. Such faults could also provide conduits along which magma could reach the surface, as is observed near several mountains. [1] Carr et al., Icarus 135, pp. 146-165, 1998. [2] McEwen et al., Science 288, pp. 1193-1198, 2000. [3] Schenk and Bulmer, Science 279, pp. 1514-1517, 1998.

  20. An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.

    2012-01-01

    A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.
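
    The core of the underdetermined-estimation idea can be sketched in a few lines: with fewer sensors than health parameters, estimate a reduced tuning vector and map it back to the full health-parameter space through a linear transformation. Using the leading right singular vectors of the sensitivity matrix, as below, is an illustrative stand-in for the optimal transformation design in the work described above; all matrices and values are invented.

    ```python
    # Sketch of underdetermined estimation: more health parameters (p) than
    # sensors (m). Estimate a reduced tuning vector q (size m), then map it
    # back to health-parameter space via h ~ V q.
    import numpy as np

    rng = np.random.default_rng(3)
    m, p = 4, 8                       # 4 sensors, 8 health parameters
    S = rng.normal(size=(m, p))       # sensor sensitivity to health parameters
    h_true = np.zeros(p); h_true[2] = -0.05   # one degraded component
    y = S @ h_true                    # steady-state measurement deltas

    # Reduced basis: top-m right singular vectors of S (illustrative choice).
    V = np.linalg.svd(S)[2][:m].T                 # p x m
    q_hat = np.linalg.lstsq(S @ V, y, rcond=None)[0]
    h_hat = V @ q_hat                 # reduced-order reconstruction of h_true
    print(np.round(h_hat, 3))
    ```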

  1. A Hybrid Procedural/Deductive Executive for Autonomous Spacecraft

    NASA Technical Reports Server (NTRS)

    Pell, Barney; Gamble, Edward B.; Gat, Erann; Keesing, Ron; Kurien, James; Millar, William; Nayak, P. Pandurang; Plaunt, Christian; Williams, Brian C.; Lau, Sonie (Technical Monitor)

    1998-01-01

    The New Millennium Remote Agent (NMRA) will be the first AI system to control an actual spacecraft. The spacecraft domain places a strong premium on autonomy and requires dynamic recoveries and robust concurrent execution, all in the presence of tight real-time deadlines, changing goals, scarce resource constraints, and a wide variety of possible failures. To achieve this level of execution robustness, we have integrated a procedural executive based on generic procedures with a deductive model-based executive. A procedural executive provides sophisticated control constructs such as loops, parallel activity, locks, and synchronization which are used for robust schedule execution, hierarchical task decomposition, and routine configuration management. A deductive executive provides algorithms for sophisticated state inference and optimal failure-recovery planning. The integrated executive enables designers to code knowledge via a combination of procedures and declarative models, yielding a rich modeling capability suitable to the challenges of real spacecraft control. The interface between the two executives ensures both that recovery sequences are smoothly merged into high-level schedule execution and that a high degree of reactivity is retained to effectively handle additional failures during recovery.

  2. Final Project Report: Imaging Fault Zones Using a Novel Elastic Reverse-Time Migration Imaging Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lianjie; Chen, Ting; Tan, Sirui

    Imaging fault zones and fractures is crucial for geothermal operators, providing important information for reservoir evaluation and management strategies. However, there are no existing techniques available for directly and clearly imaging fault zones, particularly for steeply dipping faults and fracture zones. In this project, we developed novel acoustic- and elastic-waveform inversion methods for high-resolution velocity model building. In addition, we developed acoustic and elastic reverse-time migration methods for high-resolution subsurface imaging of complex subsurface structures and steeply-dipping fault/fracture zones. We first evaluated and verified the improved capabilities of our newly developed seismic inversion and migration imaging methods using synthetic seismic data. Our numerical tests verified that our new methods directly image subsurface fracture/fault zones using surface seismic reflection data. We then applied our novel seismic inversion and migration imaging methods to a field 3D surface seismic dataset acquired at the Soda Lake geothermal field using Vibroseis sources. Our migration images of the Soda Lake geothermal field obtained using our seismic inversion and migration imaging algorithms revealed several possible fault/fracture zones. AltaRock Energy, Inc. is working with Cyrq Energy, Inc. to refine the geologic interpretation at the Soda Lake geothermal field. Trenton Cladouhos, Senior Vice President R&D of AltaRock, was very interested in our imaging results of 3D surface seismic data from the Soda Lake geothermal field. He planned to perform detailed interpretation of our images in collaboration with James Faulds and Holly McLachlan of the University of Nevada at Reno. Our high-resolution seismic inversion and migration imaging results can help determine the optimal locations to drill wells for geothermal energy production and reduce the risk of geothermal exploration.

  3. Using Magnetics and Topography to Model Fault Splays of the Hilton Creek Fault System within the Long Valley Caldera

    NASA Astrophysics Data System (ADS)

    De Cristofaro, J. L.; Polet, J.

    2017-12-01

    The Hilton Creek Fault (HCF) is a range-bounding extensional fault that forms the eastern escarpment of California's Sierra Nevada mountain range, near the town of Mammoth Lakes. The fault is well mapped along its main trace to the south of the Long Valley Caldera (LVC), but the location and nature of its northern terminus are poorly constrained. The fault terminates as a series of left-stepping splays within the LVC, an area of active volcanism that most notably erupted 760 ka and currently experiences continuous geothermal activity and sporadic earthquake swarms. The timing of the most recent motion on these fault splays is debated, as is the threat posed by this section of the Hilton Creek Fault. The Third Uniform California Earthquake Rupture Forecast (UCERF3) model depicts the HCF as a single strand projecting up to 12 km into the LVC. However, Bailey (1989) and Hill and Montgomery-Brown (2015) have argued against this model, suggesting that extensional faulting within the Caldera has been accommodated by the ongoing volcanic uplift and thus the intracaldera section of the HCF has not experienced motion since 760 ka. We intend to map the intracaldera fault splays and model their subsurface characteristics to better assess their rupture history and potential. This will be accomplished using high-resolution topography and subsurface geophysical methods, including ground-based magnetics. Preliminary work was performed using high-precision Nikon Nivo 5.C total stations to generate elevation profiles and a backpack-mounted GEM GS-19 proton precession magnetometer. The initial results reveal a correlation between magnetic anomalies and topography. East-west topographic profiles show terrace-like steps, sub-meter in height, which correlate to changes in the magnetic data. Continued study of the magnetic data using Oasis Montaj 3D modeling software is planned. Additionally, we intend to prepare a high-resolution terrain model using structure-from-motion techniques derived from imagery acquired by an unmanned aerial vehicle and ground control points measured with real-time kinematic GPS receivers. This terrain model will be combined with subsurface geophysical data to form a comprehensive model of the subsurface.

  4. Comparative Study of Fault Diagnostic Methods in Voltage Source Inverter Fed Three Phase Induction Motor Drive

    NASA Astrophysics Data System (ADS)

    Dhumale, R. B.; Lokhande, S. D.

    2017-05-01

    Three-phase Pulse Width Modulation inverters play a vital role in industrial applications. The performance of an inverter degrades as various types of faults occur in it. The widely used switching devices in power electronics are Insulated Gate Bipolar Transistors (IGBTs) and Metal Oxide Semiconductor Field Effect Transistors (MOSFETs). IGBT faults are broadly classified as base or collector open-circuit faults, misfiring faults, and short-circuit faults. To improve the reliability and performance of an inverter, knowledge of the fault mode is extremely important. This paper presents a comparative study of IGBT fault diagnosis. An experimental set-up is implemented for data acquisition under various faulty and healthy conditions. Recent methods are executed using MATLAB-Simulink and compared using key parameters such as average accuracy, fault detection time, implementation effort, threshold dependency, detection parameter, robustness against noise, and load dependency.

  5. High-deductible health plans.

    PubMed

    2014-05-01

    High-deductible health plans (HDHPs) are insurance policies with higher deductibles than conventional plans. The Medicare Prescription Drug Improvement and Modernization Act of 2003 linked many HDHPs with tax-advantaged spending accounts. The 2010 Patient Protection and Affordable Care Act continues to provide for HDHPs in its lower-level plans on the health insurance marketplace and provides for them in employer-offered plans. HDHPs decrease the premium cost of insurance policies for purchasers and shift the risk of further payments to the individual subscriber. HDHPs reduce utilization and total medical costs, at least in the short term. Because HDHPs require out-of-pocket payment in the initial stages of care, primary care and other outpatient services as well as elective procedures are the services most affected, whereas higher-cost services in the health care system, incurred after the deductible is met, are unaffected. HDHPs promote adverse selection because healthier and wealthier patients tend to opt out of conventional plans in favor of HDHPs. Because the ill pay more than the healthy under HDHPs, families with children with special health care needs bear an increased cost burden in this model. HDHPs discourage use of nonpreventive primary care and thus are at odds with most recommendations for improving the organization of health care, which focus on strengthening primary care. This policy statement provides background information on HDHPs, discusses the implications for families and pediatric care providers, and suggests courses of action. Copyright © 2014 by the American Academy of Pediatrics.

  6. Initiating Event Analysis of a Lithium Fluoride Thorium Reactor

    NASA Astrophysics Data System (ADS)

    Geraci, Nicholas Charles

    The primary purpose of this study is to perform an Initiating Event Analysis for a Lithium Fluoride Thorium Reactor (LFTR) as the first step of a Probabilistic Safety Assessment (PSA). The major objective of the research is to compile a list of key initiating events capable of resulting in failure of safety systems and release of radioactive material from the LFTR. Due to the complex interactions between engineering design, component reliability and human reliability, probabilistic safety assessments are most useful when the scope is limited to a single reactor plant. Thus, this thesis will study the LFTR design proposed by Flibe Energy. An October 2015 Electric Power Research Institute report on the Flibe Energy LFTR asked "what-if?" questions of subject matter experts and compiled a list of key hazards with the most significant consequences to the safety or integrity of the LFTR. The potential exists for unforeseen hazards to pose additional risk for the LFTR, but the scope of this thesis is limited to evaluation of those key hazards already identified by Flibe Energy. These key hazards are the starting point for the Initiating Event Analysis performed in this thesis. Engineering evaluation and technical study of the plant using a literature review and comparison to reference technology revealed four hazards with high potential to cause reactor core damage. To determine the initiating events resulting in realization of these four hazards, reference was made to previous PSAs and existing NRC and EPRI initiating event lists. Finally, fault tree and event tree analyses were conducted, completing the logical classification of initiating events. Results are qualitative as opposed to quantitative due to the early stages of system design descriptions and lack of operating experience or data for the LFTR. In summary, this thesis analyzes initiating events using previous research and inductive and deductive reasoning through traditional risk management techniques to arrive at a list of key initiating events that can be used to address vulnerabilities during the design phases of LFTR development.

  7. Adding Fault Tolerance to NPB Benchmarks Using ULFM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parchman, Zachary W; Vallee, Geoffroy R; Naughton III, Thomas J

    2016-01-01

    In the world of high-performance computing, fault tolerance and application resilience are becoming some of the primary concerns because of increasing hardware failures and memory corruptions. While the research community has been investigating various options, from system-level solutions to application-level solutions, standards such as the Message Passing Interface (MPI) are also starting to include such capabilities. The current proposal for MPI fault tolerance is centered around the User-Level Failure Mitigation (ULFM) concept, which provides means for fault detection and recovery of the MPI layer. This approach does not address application-level recovery, which is currently left to application developers. In this work, we present a modification of some of the benchmarks of the NAS Parallel Benchmarks (NPB) to include support for the ULFM capabilities as well as application-level strategies and mechanisms for failure recovery. As such, we present: (i) an application-level library to checkpoint and restore data, (ii) extensions of NPB benchmarks for fault tolerance based on different strategies, (iii) a fault injection tool, and (iv) some preliminary results that show the impact of such fault-tolerance strategies on application execution.
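
    Strategy (i), the application-level checkpoint/restore library, can be illustrated with a language-agnostic sketch. The Python stand-in below periodically serializes solver state and resumes from the last checkpoint after a restart; the real NPB extensions are written against MPI/ULFM, where recovery additionally repairs the communicator. The file name and checkpoint interval are arbitrary choices.

    ```python
    # Sketch of an application-level checkpoint/restore helper (pickle-based
    # stand-in for illustration only).
    import os, pickle, tempfile

    def checkpoint(state, path):
        # Write atomically so a crash mid-write cannot corrupt the checkpoint.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        with os.fdopen(fd, "wb") as f:
            pickle.dump(state, f)
        os.replace(tmp, path)

    def restore(path, default):
        if os.path.exists(path):
            with open(path, "rb") as f:
                return pickle.load(f)
        return default

    state = restore("solver.ckpt", {"iter": 0, "x": 0.0})
    for i in range(state["iter"], 1000):
        state["x"] += 0.001            # stand-in for one solver iteration
        state["iter"] = i + 1
        if state["iter"] % 100 == 0:   # checkpoint interval is a tuning knob
            checkpoint(state, "solver.ckpt")
    ```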

  8. Fault tolerance of artificial neural networks with applications in critical systems

    NASA Technical Reports Server (NTRS)

    Protzel, Peter W.; Palumbo, Daniel L.; Arras, Michael K.

    1992-01-01

    This paper investigates the fault tolerance characteristics of time-continuous recurrent artificial neural networks (ANNs) that can be used to solve optimization problems. The principles of operation and performance of these networks are first illustrated using well-known model problems like the traveling salesman problem and the assignment problem. The ANNs are then subjected to 13 simultaneous 'stuck at 1' or 'stuck at 0' faults for network sizes of up to 900 'neurons'. The effects of these faults are demonstrated and the cause of the observed fault tolerance is discussed. An application is presented in which a network performs a critical task for a real-time distributed processing system by generating new task allocations during the reconfiguration of the system. The performance degradation of the ANN in the presence of faults is investigated by large-scale simulations, and the potential benefits of delegating a critical task to a fault-tolerant network are discussed.
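
    The fault-injection methodology reads roughly as follows: clamp a subset of unit outputs to 0 or 1 and measure how far the network output drifts. The toy feedforward network below is only a stand-in for the recurrent optimization networks studied above, chosen to keep the sketch self-contained.

    ```python
    # Sketch: inject "stuck-at" faults into hidden-unit outputs and measure
    # output degradation. Network, weights, and task are illustrative.
    import numpy as np

    rng = np.random.default_rng(7)
    W1 = rng.normal(size=(64, 8)); W2 = rng.normal(size=(1, 64))

    def forward(x, stuck_idx=(), stuck_val=0.0):
        h = np.tanh(W1 @ x)
        if len(stuck_idx):
            h[np.asarray(stuck_idx)] = stuck_val   # inject stuck-at faults
        return float(W2 @ h)

    x = rng.normal(size=8)
    ref = forward(x)                               # fault-free reference output
    for n_faults in (1, 4, 16):
        idx = rng.choice(64, size=n_faults, replace=False)
        err = abs(forward(x, idx, stuck_val=1.0) - ref)
        print(f"{n_faults:2d} stuck-at-1 faults -> |output error| = {err:.3f}")
    ```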

  9. Children's and adults' judgments of the certainty of deductive inferences, inductive inferences, and guesses.

    PubMed

    Pillow, Bradford H; Pearson, Raeanne M; Hecht, Mary; Bremer, Amanda

    2010-01-01

    Children and adults rated their own certainty following inductive inferences, deductive inferences, and guesses. Beginning in kindergarten, participants rated deductions as more certain than weak inductions or guesses. Deductions were rated as more certain than strong inductions beginning in Grade 3, and fourth-grade children and adults differentiated strong inductions, weak inductions, and informed guesses from pure guesses. By Grade 3, participants also gave different types of explanations for their deductions and inductions. These results are discussed in relation to children's concepts of cognitive processes, logical reasoning, and epistemological development.

  10. High tsunami risk at northern tip of Sumatra as a result of the activity of the Sumatra Fault Zone (SFZ) combined with coastal landslides

    NASA Astrophysics Data System (ADS)

    Haridhi, H. A.; Huang, B. S.; Wen, K. L.; Mirza, A.; Rizal, S.; Purnawan, S.; Fajri, I.; Klingelhoefer, F.; Liu, C. S.; Lee, C. S.; Wilson, C. R.

    2017-12-01

    The lesson learned from the 12 January 2010, Mw 7.0 Haiti earthquake has shown that an earthquake with strike-slip faulting can produce a significant tsunami. Such an occurrence is rare, since motion on these faults is predominantly lateral and is rarely associated with significant uplift or tsunami generation. Another lesson from this event is that the earthquake was accompanied by a coastal landslide; again, there are only a few records of a submarine slide acting as the primary source of a tsunami. The Haiti Mw 7.0 tsunami was thus generated by these combined mechanisms, i.e. a strike-slip faulting earthquake and a coastal landslide. The Sumatra region exhibits an almost identical situation, hosting the right-lateral strike-slip faulting of the Sumatra Fault Zone (SFZ). In this study, we focus on the northern tip of the SFZ in Aceh Province. The reason for focusing on the northern tip is that, since the Sumatra-Andaman mega earthquake and tsunami of 26 December 2004, which occurred at the subduction zone, there have been no records of significant earthquakes along the SFZ, which at this location is divided into two faults, i.e. the Aceh and Seulimeum faults. This study is intended as a mitigation effort: if an earthquake happened on these faults, would we observe a result similar to that at Haiti? To answer this, we use high-resolution shallow bathymetry data acquired through a Community-Based Bathymetric Survey (CBBS), examine five scanned Single-Channel Seismic (SCS) reflection profiles, perform slope stability analysis, and simulate the tsunami using the Cornell Multi-grid Coupled Tsunami Model (COMCOT) with a combined source of fault activity and submarine landslide. The results show that, with these combined mechanisms, an earthquake of Mw 7 or larger could produce a tsunami as high as 6 meters along the coast. The detailed shallow bathymetry and slope stability results indicate that the slope is close to failure, and the SCS reflections show a turbidite-type unconformity indicating evidence of a past submarine landslide. We conclude that there is a high risk of an event similar to Haiti occurring in Aceh Province.

  11. Shallow subsurface imaging of the Piano di Pezza active normal fault (central Italy) by high-resolution refraction and electrical resistivity tomography coupled with time domain electromagnetic data

    NASA Astrophysics Data System (ADS)

    Villani, Fabio; Tulliani, Valerio; Fierro, Elisa; Sapia, Vincenzo; Civico, Riccardo

    2015-04-01

    The Piano di Pezza fault is the north-westernmost segment of the >20 km long Ovindoli-Pezza active normal fault-system (central Italy). Although existing paleoseismic data document high vertical Holocene slip rates (~1 mm/yr) and a remarkable seismogenic potential of this fault, its subsurface setting and Pleistocene cumulative displacement are still poorly known. We investigated for the first time by means of high-resolution seismic and electrical resistivity tomography coupled with time domain electromagnetic (TDEM) measurements the shallow subsurface of a key section of the Piano di Pezza fault. Our surveys cross a ~5 m-high fault scarp that was generated by repeated surface-rupturing earthquakes displacing some Late Holocene alluvial fans. We provide 2-D Vp and resistivity images which clearly show significant details of the fault structure and the geometry of the shallow basin infill material down to 50 m depth. We can estimate the dip (~50°) and the Holocene vertical displacement of the master fault (~10 m). We also recognize in the hangingwall some low-velocity/low-resistivity regions that we relate to packages of colluvial wedges derived from scarp degradation, which may represent the record of several paleo-earthquakes older than the Late Holocene events previously recognized by paleoseismic trenching. Conversely, due to the limited investigation depth of seismic and electrical tomography, the estimation of the cumulative amount of Pleistocene throw is hampered. Therefore, to increase the depth of investigation, we performed 7 TDEM measurements along the electrical profile using a 50 m loop size both in central and offset configuration. The recovered 1-D resistivity models show a good match with 2-D resistivity images in the near surface. Moreover, TDEM inversion results indicate that in the hangingwall, ~200 m away from the surface fault trace, the carbonate pre-Quaternary basement may be found at ~90-100 m depth. The combined approach of electrical and seismic data coupled with TDEM measurements provides a robust constraint to the Piano di Pezza fault cumulative offset. Our data are useful for better reconstructing the deep structural setting of the Piano di Pezza basin and assessing the role played by extensional tectonics in its Quaternary evolution.

  12. Permeability Evolution With Shearing of Simulated Faults in Unconventional Shale Reservoirs

    NASA Astrophysics Data System (ADS)

    Wu, W.; Gensterblum, Y.; Reece, J. S.; Zoback, M. D.

    2016-12-01

    Horizontal drilling and multi-stage hydraulic fracturing can lead to fault reactivation, a process thought to influence production from extremely low-permeability unconventional reservoirs. A fundamental understanding of permeability changes with shear could be helpful for optimizing reservoir stimulation strategies. We examined the effects of confining pressure and frictional sliding on fault permeability in Eagle Ford shale samples. We performed shear-flow experiments in a triaxial apparatus on four shale samples: (1) a clay-rich sample with a sawcut fault, (2) a calcite-rich sample with a sawcut fault, (3) a clay-rich sample with a natural fault, and (4) a calcite-rich sample with a natural fault. We used pressure pulse-decay and steady-state flow techniques to measure fault permeability. Initial pore and confining pressures were set to 2.5 MPa and 5.0 MPa, respectively. To investigate the influence of confining pressure on fault permeability, we incrementally raised and lowered the confining pressure and measured permeability at different effective stresses. To examine the effect of frictional sliding on fault permeability, we slid the samples four times at a constant shear displacement rate of 0.043 mm/min for 10 minutes each and measured fault permeability before and after frictional sliding. We used a 3D laser scanner to image fault surface topography before and after the experiment. Our results show that frictional sliding can enhance fault permeability at low confining pressures (e.g., 5.0 MPa) and reduce fault permeability at high confining pressures (e.g., 7.5 MPa). The permeability of sawcut faults almost fully recovers when confining pressure returns to the initial value, and increases with sliding due to asperity damage and subsequent dilation at low confining pressures. In contrast, the permeability of natural faults does not fully recover. It initially increases with sliding, but then decreases with further sliding, most likely due to fault gouge blocking fluid pathways.

  13. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory

    PubMed Central

    Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong

    2016-01-01

    Sensor data fusion plays an important role in fault diagnosis. Dempster–Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient at combining evidence from different sensors. However, when the evidence is highly conflicting, it may produce counterintuitive results. To address this issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability is taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method performs better in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is presented to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods. PMID:26797611
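
    The weighted-averaging step described above can be sketched as follows: each sensor report is a Dempster–Shafer mass function, a credibility weight is derived from the belief (Deng) entropy (the paper additionally uses an evidence distance, omitted here for brevity), the reports are averaged with those weights, and the average is combined with itself n-1 times by Dempster's rule. The fault hypotheses, sensor reports and entropy-based weighting below are illustrative assumptions, not the paper's exact algorithm.

    ```python
    from math import log2

    def dempster_combine(m1, m2):
        """Combine two mass functions (dicts: frozenset -> mass) with Dempster's rule."""
        combined, conflict = {}, 0.0
        for a, ma in m1.items():
            for b, mb in m2.items():
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + ma * mb
                else:
                    conflict += ma * mb
        if conflict >= 1.0:
            raise ValueError("total conflict; Dempster's rule undefined")
        return {a: v / (1.0 - conflict) for a, v in combined.items()}

    def deng_entropy(m):
        """Belief (Deng) entropy of a mass function: larger means less informative."""
        return -sum(v * log2(v / (2 ** len(a) - 1)) for a, v in m.items() if v > 0)

    def weighted_average(masses, weights):
        """Weighted average of mass functions."""
        total, avg = sum(weights), {}
        for m, w in zip(masses, weights):
            for a, v in m.items():
                avg[a] = avg.get(a, 0.0) + (w / total) * v
        return avg

    # Hypothetical reports from three sensors over two fault hypotheses F1, F2
    F1, F2 = frozenset({"F1"}), frozenset({"F2"})
    both = F1 | F2
    reports = [
        {F1: 0.8, F2: 0.1, both: 0.1},
        {F1: 0.7, F2: 0.2, both: 0.1},
        {F2: 0.9, F1: 0.05, both: 0.05},   # conflicting sensor
    ]
    # One simple credibility heuristic: down-weight less informative reports
    weights = [2 ** -deng_entropy(m) for m in reports]
    avg = weighted_average(reports, weights)
    fused = avg
    for _ in range(len(reports) - 1):       # combine the averaged evidence n-1 times
        fused = dempster_combine(fused, avg)
    print(fused)
    ```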

  14. Support vector machine in machine condition monitoring and fault diagnosis

    NASA Astrophysics Data System (ADS)

    Widodo, Achmad; Yang, Bo-Suk

    2007-08-01

    Recently, the issue of machine condition monitoring and fault diagnosis as a part of maintenance systems has attracted global attention, owing to the potential advantages to be gained from reduced maintenance costs, improved productivity and increased machine availability. This paper presents a survey of machine condition monitoring and fault diagnosis using support vector machines (SVMs). It attempts to summarize and review the recent research and developments of SVM in machine condition monitoring and diagnosis. Numerous methods have been developed based on intelligent systems such as artificial neural networks, fuzzy expert systems, condition-based reasoning, random forests, etc. However, the use of SVM for machine condition monitoring and fault diagnosis is still rare. SVM has excellent generalization performance, so it can achieve high classification accuracy for machine condition monitoring and diagnosis. Up to 2006, the use of SVM in machine condition monitoring and fault diagnosis has tended to develop towards expertise-oriented and problem-oriented domains. Finally, continually refining and generating novel ideas for SVM-based machine condition monitoring and fault diagnosis remains future work.
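
    As a concrete illustration of SVM-based fault classification, the sketch below trains an RBF-kernel SVM on synthetic two-feature data standing in for vibration-derived features; the features, classes and scikit-learn pipeline are assumptions for illustration, not taken from the surveyed papers.

    ```python
    # Minimal SVM fault-classification sketch on synthetic data (scikit-learn).
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Hypothetical features extracted from vibration signals (e.g., RMS, kurtosis)
    # for three machine conditions: 0 = healthy, 1 = bearing fault, 2 = imbalance.
    X = np.vstack([
        rng.normal(loc=[1.0, 3.0], scale=0.2, size=(100, 2)),
        rng.normal(loc=[2.0, 6.0], scale=0.3, size=(100, 2)),
        rng.normal(loc=[3.0, 3.5], scale=0.3, size=(100, 2)),
    ])
    y = np.repeat([0, 1, 2], 100)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # RBF-kernel SVM: good generalization from limited labelled fault data
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))
    ```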

  15. 26 CFR 1.162-10T - Questions and answers relating to the deduction of employee benefits under the Tax Reform Act of...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... of employee benefits under the Tax Reform Act of 1984; certain limits on amounts deductible... and Corporations § 1.162-10T Questions and answers relating to the deduction of employee benefits... amendment of section 404(b) by the Tax Reform Act of 1984 affect the deduction of employee benefits under...

  16. Automatic Fault Characterization via Abnormality-Enhanced Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bronevetsky, G; Laguna, I; de Supinski, B R

    Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.
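
    The abnormality-enhanced idea can be sketched as follows (an interpretation, not the paper's exact algorithm): raw metrics are converted to abnormality scores relative to a healthy baseline before a classifier is trained, so the classifier sees deviations rather than absolute values. All metric names, fault classes and data below are synthetic assumptions.

    ```python
    # Sketch of abnormality-enhanced fault classification: raw system metrics are
    # replaced by abnormality scores (deviation from per-metric healthy baselines)
    # before training a classifier. Illustrative only.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)

    # Hypothetical per-process metrics (CPU %, memory GB, I/O wait %) in healthy runs
    healthy = rng.normal(loc=[50.0, 2.0, 5.0], scale=[5.0, 0.2, 1.0], size=(500, 3))
    mu, sigma = healthy.mean(axis=0), healthy.std(axis=0)

    def abnormality(samples):
        """Per-metric z-scores against the healthy baseline."""
        return np.abs(samples - mu) / sigma

    # Labelled fault runs: 1 = memory leak, 2 = I/O degradation (synthetic)
    leak = rng.normal(loc=[50.0, 4.0, 5.0], scale=[5.0, 0.5, 1.0], size=(200, 3))
    io_deg = rng.normal(loc=[50.0, 2.0, 12.0], scale=[5.0, 0.2, 2.0], size=(200, 3))

    X = abnormality(np.vstack([healthy[:200], leak, io_deg]))
    y = np.repeat([0, 1, 2], 200)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.predict(abnormality(np.array([[51.0, 3.9, 5.2]]))))  # -> likely [1]
    ```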

  17. Rapid recovery from transient faults in the fault-tolerant processor with fault-tolerant shared memory

    NASA Technical Reports Server (NTRS)

    Harper, Richard E.; Butler, Bryan P.

    1990-01-01

    The Draper fault-tolerant processor with fault-tolerant shared memory (FTP/FTSM), which is designed to allow application tasks to continue execution during the memory alignment process, is described. Processor performance is not affected by memory alignment. In addition, the FTP/FTSM incorporates a hardware scrubber device to perform the memory alignment quickly during unused memory access cycles. The FTP/FTSM architecture is described, followed by an estimate of the time required for channel reintegration.

  18. Distributed Evaluation Functions for Fault Tolerant Multi-Rover Systems

    NASA Technical Reports Server (NTRS)

    Agogino, Adrian; Turner, Kagan

    2005-01-01

    The ability to evolve fault-tolerant control strategies for large collections of agents is critical to the successful application of evolutionary strategies to domains where failures are common. Furthermore, while evolutionary algorithms have been highly successful in discovering single-agent control strategies, extending such algorithms to multiagent domains has proven to be difficult. In this paper we present a method for shaping evaluation functions for agents that provides control strategies that are tolerant to different types of failures and that lead to coordinated behavior in a multi-agent setting. This method relies neither on a centralized strategy (susceptible to single points of failure) nor on a distributed strategy where each agent uses a system-wide evaluation function (severe credit assignment problem). In a multi-rover problem, we show that agents using our agent-specific evaluation perform up to 500% better than agents using the system evaluation. In addition, we show that agents are still able to maintain a high level of performance when up to 60% of the agents fail due to actuator, communication or controller faults.
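
    A minimal sketch of such an agent-specific evaluation is the difference evaluation D_i = G(z) - G(z_-i): the system evaluation with and without agent i, which isolates each rover's contribution while staying aligned with the global objective. The rover/point-of-interest scoring below is an assumed toy model, not the paper's exact domain.

    ```python
    # Difference evaluation D_i = G(z) - G(z_-i) for a toy multi-rover task.
    import numpy as np

    def system_evaluation(rover_pos, poi_pos):
        """G: each point of interest is scored by its closest rover (1/(1+d^2))."""
        score = 0.0
        for poi in poi_pos:
            d2 = min(np.sum((rover_pos - poi) ** 2, axis=1))
            score += 1.0 / (1.0 + d2)
        return score

    def difference_evaluation(i, rover_pos, poi_pos):
        """D_i: marginal contribution of rover i to the system evaluation."""
        without_i = np.delete(rover_pos, i, axis=0)
        return system_evaluation(rover_pos, poi_pos) - system_evaluation(without_i, poi_pos)

    rng = np.random.default_rng(2)
    rovers = rng.uniform(0, 10, size=(5, 2))   # 5 rover positions
    pois = rng.uniform(0, 10, size=(8, 2))     # 8 points of interest

    for i in range(len(rovers)):
        print(f"rover {i}: D = {difference_evaluation(i, rovers, pois):+.3f}")
    ```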

  19. An improved fault-tolerant control scheme for PWM inverter-fed induction motor-based EVs.

    PubMed

    Tabbache, Bekheïra; Benbouzid, Mohamed; Kheloui, Abdelaziz; Bourgeot, Jean-Matthieu; Mamoune, Abdeslam

    2013-11-01

    This paper proposes an improved fault-tolerant control scheme for PWM inverter-fed induction motor-based electric vehicles. The proposed strategy deals with the mitigation of power switch (IGBT) failures within a reconfigurable induction motor control. To increase the vehicle powertrain reliability with respect to IGBT open-circuit failures, 4-wire and 4-leg PWM inverter topologies are investigated and their performances discussed in a vehicle context. The proposed fault-tolerant topologies require only minimal hardware modifications to the conventional off-the-shelf six-switch three-phase drive, mitigating IGBT failures by specific inverter control. Indeed, the two topologies exploit the accessibility of the induction motor neutral for fault-tolerant purposes. The 4-wire topology then uses classical hysteresis controllers to account for the IGBT failures. The 4-leg topology, meanwhile, uses a specific 3D space vector PWM to handle vehicle requirements in terms of size (DC bus capacitors) and cost (IGBT count). Experiments on an induction motor drive and simulations on an electric vehicle are carried out using a European urban driving cycle to show that the proposed fault-tolerant control approach is effective and provides a simple configuration with high performance in terms of speed and torque responses. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Two kinds of reasoning.

    PubMed

    Rips, L J

    2001-03-01

    According to one view of reasoning, people can evaluate arguments in at least two qualitatively different ways: in terms of their deductive correctness and in terms of their inductive strength. According to a second view, assessments of both correctness and strength are a function of an argument's position on a single psychological continuum (e.g., subjective conditional probability). A deductively correct argument is one with the maximum value on this continuum; a strong argument is one with a high value. The present experiment tested these theories by asking participants to evaluate the same set of arguments for correctness and strength. The results produced an interaction between type of argument and instructions: In some conditions, participants judged one argument deductively correct more often than a second, but judged the second argument inductively strong more often than the first. This finding supports the view that people have distinct ways to evaluate arguments.

  1. Effect of Na presence during CuInSe{sub 2} growth on stacking fault annihilation and electronic properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stange, H., E-mail: helena.stange@helmholtz-berlin.de; Brunken, S.; Hempel, H.

    While presence of Na is essential for the performance of high-efficiency Cu(In,Ga)Se{sub 2} thin film solar cells, the reasons why addition of Na by post-deposition treatment is superior to pre-deposition Na supply—particularly at low growth temperatures—are not yet fully understood. Here, we show by X-ray diffraction and electron microscopy that Na impedes annihilation of stacking faults during the Cu-poor/Cu-rich transition of low temperature 3-stage co-evaporation and prevents Cu homogeneity on a microscopic level. Lower charge carrier mobilities are found by optical pump terahertz probe spectroscopy for samples with remaining high stacking fault density, indicating a detrimental effect on electronic properties if Na is present during growth.

  2. The myth of induction in qualitative nursing research.

    PubMed

    Bergdahl, Elisabeth; Berterö, Carina M

    2015-04-01

    In nursing today, it remains unclear what constitutes a good foundation for qualitative scientific inquiry. There is a tendency to define qualitative research as a form of inductive inquiry; deductive practice is seldom discussed, and when it is, this usually occurs in the context of data analysis. We will look at how the terms 'induction' and 'deduction' are used in qualitative nursing science and by qualitative research theorists, and relate these uses to the traditional definitions of these terms by Popper and other philosophers of science. We will also question the assertion that qualitative research is or should be inductive. The position we defend here is that qualitative research should use deductive methods. We also see a need to understand the difference between the creative process needed to create theory and the justification of a theory. Our position is that misunderstandings regarding the philosophy of science and the role of inductive and deductive logic and science are still harming the development of nursing theory and science. The purpose of this article is to discuss and reflect upon inductive and deductive views of science as well as inductive and deductive analyses in qualitative research. We start by describing inductive and deductive methods and logic from a philosophy of science perspective, and we examine how the concepts of induction and deduction are often described and used in qualitative methods and nursing research. Finally, we attempt to provide a theoretical perspective that reconciles the misunderstandings regarding induction and deduction. Our conclusion is that openness towards deductive thinking and testing hypotheses is needed in qualitative nursing research. We must also realize that strict induction will not create theory; to generate theory, a creative leap is needed. © 2014 John Wiley & Sons Ltd.

  3. Differential equations governing slip-induced pore-pressure fluctuations in a water-saturated granular medium

    USGS Publications Warehouse

    Iverson, R.M.

    1993-01-01

    Macroscopic frictional slip in water-saturated granular media occurs commonly during landsliding, surface faulting, and intense bedload transport. A mathematical model of dynamic pore-pressure fluctuations that accompany and influence such sliding is derived here by both inductive and deductive methods. The inductive derivation shows how the governing differential equations represent the physics of the steadily sliding array of cylindrical fiberglass rods investigated experimentally by Iverson and LaHusen (1989). The deductive derivation shows how the same equations result from a novel application of Biot's (1956) dynamic mixture theory to macroscopic deformation. The model consists of two linear differential equations and five initial and boundary conditions that govern solid displacements and pore-water pressures. Solid displacements and water pressures are strongly coupled, in part through a boundary condition that ensures mass conservation during irreversible pore deformation that occurs along the bumpy slip surface. Feedback between this deformation and the pore-pressure field may yield complex system responses. The dual derivations of the model help explicate key assumptions. For example, the model requires that the dimensionless parameter B, defined here through normalization of Biot's equations, is much larger than one. This indicates that solid-fluid coupling forces are dominated by viscous rather than inertial effects. A tabulation of physical and kinematic variables for the rod-array experiments of Iverson and LaHusen and for various geologic phenomena shows that the model assumptions commonly are satisfied. A subsequent paper will describe model tests against experimental data. © 1993 International Association for Mathematical Geology.

  4. 78 FR 40831 - Proposed Collection; Comment Request for Regulation Project

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-08

    ... taxpayers and sourcing of income, deductions, gains and losses from a global dealing operation. The... information is necessary for the proper performance of the functions of the agency, including whether the...

  5. 46 CFR 69.119 - Spaces deducted from gross tonnage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... gear, capstan, windlass, and chain locker is deductible. A fore peak used exclusively as chain locker... oils, blocks, hawsers, rigging, deck gear, or other boatswain's stores for daily use is deductible. The...

  6. 46 CFR 69.119 - Spaces deducted from gross tonnage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... gear, capstan, windlass, and chain locker is deductible. A fore peak used exclusively as chain locker... oils, blocks, hawsers, rigging, deck gear, or other boatswain's stores for daily use is deductible. The...

  7. 46 CFR 69.119 - Spaces deducted from gross tonnage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... gear, capstan, windlass, and chain locker is deductible. A fore peak used exclusively as chain locker... oils, blocks, hawsers, rigging, deck gear, or other boatswain's stores for daily use is deductible. The...

  8. 46 CFR 69.119 - Spaces deducted from gross tonnage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... gear, capstan, windlass, and chain locker is deductible. A fore peak used exclusively as chain locker... oils, blocks, hawsers, rigging, deck gear, or other boatswain's stores for daily use is deductible. The...

  9. Electrical resistivity imaging in transmission between surface and underground tunnel for fault characterization

    NASA Astrophysics Data System (ADS)

    Lesparre, N.; Boyle, A.; Grychtol, B.; Cabrera, J.; Marteau, J.; Adler, A.

    2016-05-01

    Electrical resistivity images supply information on sub-surface structures and are classically used to characterize fault geometry. Here we exploit the presence of a tunnel intersecting a regional fault to inject electrical currents between the surface and the tunnel, improving the image resolution at depth. We apply an original methodology for defining the inversion parametrization based on pilot points to better deal with the heterogeneous sounding of the medium. An enlarged region of high spatial resolution is demonstrated by analysis of point spread functions as well as by inversion of synthetics. These evaluations highlight the advantages of transmission measurements obtained by transferring a few electrodes from the main profile to increase the sounding depth. Based on the resulting image we propose a revised structure for the medium surrounding the Cernon fault, supported by geological observations and muon flux measurements.

  10. BFT replication resistant to MAC attacks

    NASA Astrophysics Data System (ADS)

    Zbierski, Maciej

    2016-09-01

    Over the last decade numerous Byzantine fault-tolerant (BFT) replication protocols have been proposed in the literature. However, the vast majority of these solutions reuse the same authentication scheme, which makes them susceptible to a so-called MAC attack. This vulnerability enables malicious clients to undetectably prevent the replicated service from processing incoming client requests, and consequently to make it permanently unavailable. While some BFT protocols have attempted to address this issue by using different authentication mechanisms, they at the same time significantly degraded the performance achieved in correct environments. This article presents a novel adaptive authentication mechanism which can be combined with practically any Byzantine fault-tolerant replication protocol. Unlike previous solutions, the proposed scheme dynamically switches between two operation modes to combine high performance in correct environments with liveness during MAC attacks. The experimental results presented in the article demonstrate that the proposed mechanism can effectively tolerate MAC attacks without introducing any observable overhead whenever no faults are present.
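
    A sketch of the adaptive idea, under assumed thresholds and with the expensive fallback verifier left as a stub: requests are authenticated with cheap MACs while behavior looks correct, and a client is switched to a non-repudiable signature mode after repeated MAC verification failures. This is an illustration of mode switching in general, not the article's specific protocol.

    ```python
    # Adaptive authentication sketch: cheap MACs by default, fall back to a
    # (stubbed) signature mode for clients whose MACs repeatedly fail to verify,
    # which may indicate a MAC attack. Threshold and policy are assumptions.
    import hmac, hashlib

    FAILURE_THRESHOLD = 3

    class AdaptiveAuthenticator:
        def __init__(self, shared_keys):
            self.keys = shared_keys            # client_id -> shared MAC key
            self.failures = {}                 # client_id -> consecutive failures
            self.signature_mode = set()        # clients forced to signature mode

        def verify(self, client_id, payload, tag):
            if client_id in self.signature_mode:
                return self._verify_signature(client_id, payload, tag)
            expected = hmac.new(self.keys[client_id], payload, hashlib.sha256).digest()
            if hmac.compare_digest(expected, tag):
                self.failures[client_id] = 0
                return True
            # A mismatch may indicate a client mounting a MAC attack.
            self.failures[client_id] = self.failures.get(client_id, 0) + 1
            if self.failures[client_id] >= FAILURE_THRESHOLD:
                self.signature_mode.add(client_id)   # degrade gracefully
            return False

        def _verify_signature(self, client_id, payload, sig):
            # Placeholder: a real deployment would verify a digital signature
            # (e.g., Ed25519), which is slower but cannot be equivocated.
            raise NotImplementedError

    auth = AdaptiveAuthenticator({"c1": b"secret-key"})
    msg = b"request-42"
    good = hmac.new(b"secret-key", msg, hashlib.sha256).digest()
    print(auth.verify("c1", msg, good))        # True
    for _ in range(3):
        auth.verify("c1", msg, b"\x00" * 32)   # forged tags
    print("c1" in auth.signature_mode)         # True
    ```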

  11. Study of the Nankai seismogenic fault using dynamic wave propagation modelling of digital rock from the Nobeoka Fault

    NASA Astrophysics Data System (ADS)

    Eng, Chandoeun; Ikeda, Tatsunori; Tsuji, Takeshi

    2018-10-01

    To understand the characteristics of the Nankai seismogenic fault in the plate convergent margin, we calculated the P- and S-wave velocities (VP and VS) of digital rock models constructed from core samples of an ancient plate boundary fault at Nobeoka, Kyushu Island, Japan. We first constructed 3D digital rock models from microcomputed tomography images and identified their heterogeneous textures such as cracks and veins. We replaced the cracks and veins with air, water, quartz, calcite and other materials with different bulk and shear moduli. Using the Rotated Staggered Grid Finite-Difference Method, we performed dynamic wave propagation simulations and quantified the effective VP, VS and the ratio of VP to VS (VP/VS) of the 3D digital rock models with different crack-filling minerals. Our results demonstrate that the water-saturated cracks considerably decreased the seismic velocity and increased VP/VS. The VP/VS of the quartz-filled rock model was lower than that in the water-saturated case and in the calcite-filled rock model. By comparing the elastic properties derived from the digital rock models with the seismic velocities (e.g. VP and VP/VS) around the seismogenic fault estimated from field seismic data, we characterised the evolution process of the deep seismogenic fault. The high VP/VS and low VP observed at the transition from aseismic to coseismic regimes in the Nankai Trough can be explained by open cracks (or fractures), while the low VP/VS and high VP observed at the deeper coseismic fault zone suggests quartz-filled cracks. The quartz-rich fault zone characterised as low VP/VS and high VP in this study could partially relate to the coseismic behaviour as suggested by previous studies, because quartz exhibits slip-weakening behaviour (i.e. unstable coseismic slip).
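
    The velocity ratios discussed above follow from the standard isotropic relations Vp = sqrt((K + 4G/3)/ρ) and Vs = sqrt(G/ρ); the sketch below evaluates them for two assumed crack-filling scenarios. The moduli and densities are illustrative, not the paper's measured values.

    ```python
    # Vp, Vs and Vp/Vs from bulk modulus K, shear modulus G and density rho.
    from math import sqrt

    def vp_vs(K, G, rho):
        """P- and S-wave velocities (m/s) from moduli (Pa) and density (kg/m^3)."""
        vp = sqrt((K + 4.0 * G / 3.0) / rho)
        vs = sqrt(G / rho)
        return vp, vs

    cases = {
        # crack filler: (K, G, rho) -- illustrative values
        "water-filled cracks (weak in shear, high Vp/Vs)": (30e9, 12e9, 2600.0),
        "quartz-filled cracks (stiff in shear, low Vp/Vs)": (33e9, 20e9, 2650.0),
    }
    for label, (K, G, rho) in cases.items():
        vp, vs = vp_vs(K, G, rho)
        print(f"{label}: Vp={vp:.0f} m/s, Vs={vs:.0f} m/s, Vp/Vs={vp/vs:.2f}")
    ```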

  12. Performance Evaluation of Cloud Service Considering Fault Recovery

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Tan, Feng; Dai, Yuan-Shun; Guo, Suchang

    In cloud computing, cloud service performance is an important issue. To improve cloud service reliability, fault recovery may be used. However, the use of fault recovery can have an impact on cloud service performance. In this paper, we conduct a preliminary study of this issue. Cloud service performance is quantified by service response time, whose probability density function, as well as its mean, is derived.
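
    Since the paper derives the response-time density analytically, a quick way to see the effect is a Monte Carlo sketch in which a fault arriving during service triggers a recovery delay and a retry; all rates below are assumed for illustration only.

    ```python
    # Monte Carlo sketch of service response time when a fault during execution
    # triggers recovery and a retry. Rates are illustrative assumptions.
    import random

    random.seed(0)
    SERVICE_RATE = 1.0      # mean service time = 1 / SERVICE_RATE
    FAULT_RATE = 0.2        # faults arrive as a Poisson process during service
    MEAN_RECOVERY = 0.5     # mean fault-recovery delay

    def response_time():
        total = 0.0
        while True:
            service = random.expovariate(SERVICE_RATE)
            fault_at = random.expovariate(FAULT_RATE)
            if fault_at >= service:          # no fault before completion
                return total + service
            # fault interrupts service: pay partial work + recovery, then retry
            total += fault_at + random.expovariate(1.0 / MEAN_RECOVERY)

    samples = [response_time() for _ in range(100_000)]
    print("mean response time:", sum(samples) / len(samples))
    ```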

  13. Flight elements: Fault detection and fault management

    NASA Technical Reports Server (NTRS)

    Lum, H.; Patterson-Hine, A.; Edge, J. T.; Lawler, D.

    1990-01-01

    Fault management for an intelligent computational system must be developed using a top-down integrated engineering approach. The proposed approach integrates the overall environment involving sensors and their associated data; design knowledge capture; operations; fault detection, identification, and reconfiguration; testability; causal models including digraph matrix analysis; and overall performance impacts on the hardware and software architecture. Implementation of the concept to achieve a real-time intelligent fault detection and management system will be accomplished through several objectives: development of fault-tolerant/FDIR requirements and specifications at the systems level that carry through from conceptual design to implementation and mission operations; implementation of monitoring, diagnosis, and reconfiguration at all system levels, providing fault isolation and system integration; optimization of system operations to manage degraded system performance through system integration; and lowering of development and operations costs through the implementation of an intelligent real-time fault detection and fault management system and an information management system.

  14. Impacts of Intelligent Automated Quality Control on a Small Animal APD-Based Digital PET Scanner

    NASA Astrophysics Data System (ADS)

    Charest, Jonathan; Beaudoin, Jean-François; Bergeron, Mélanie; Cadorette, Jules; Arpin, Louis; Lecomte, Roger; Brunet, Charles-Antoine; Fontaine, Réjean

    2016-10-01

    Stable system performance is mandatory to warrant the accuracy and reliability of biological results relying on small animal positron emission tomography (PET) imaging studies. This simple requirement sets the ground for imposing routine quality control (QC) procedures to keep PET scanners at a reliably optimal performance level. However, such procedures can become burdensome for scanner operators to implement, especially considering the increasing number of data acquisition channels in newer-generation PET scanners. In systems using pixel detectors to achieve enhanced spatial resolution and contrast-to-noise ratio (CNR), the QC workload rapidly increases to unmanageable levels due to the number of independent channels involved. An artificial-intelligence-based QC system, referred to as Scanner Intelligent Diagnosis for Optimal Performance (SIDOP), was proposed to help reduce the QC workload by performing automatic channel fault detection and diagnosis. SIDOP consists of four high-level modules that employ machine learning methods to perform their tasks: Parameter Extraction, Channel Fault Detection, Fault Prioritization, and Fault Diagnosis. Ultimately, SIDOP submits a prioritized faulty-channel list to the operator and proposes actions to correct the faults. To validate that SIDOP can perform QC procedures adequately, it was deployed on a LabPET™ scanner and multiple performance metrics were extracted. After multiple corrections of sub-optimal scanner settings, an 8.5% (95% confidence interval (CI): [7.6, 9.3]) improvement in the CNR, a 17.0% (CI: [15.3, 18.7]) decrease in the uniformity percentage standard deviation, and a 6.8% gain in global sensitivity were observed. These results confirm that SIDOP can indeed assist in performing QC procedures and restore performance to optimal figures.

  15. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.0)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Practical limits on power consumption in HPC systems will require future systems to embrace innovative architectures, increasing the levels of hardware and software complexity. The resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. These techniques must seek to improve resilience at reasonable overheads to power consumption and performance. While the HPC community has developed various solutions, application-level as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software ecosystems, which are expected to be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience based on the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. The catalog of resilience design patterns provides designers with reusable design elements. We define a design framework that enhances our understanding of the important constraints and opportunities for solutions deployed at various layers of the system stack. The framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also enables optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.

  16. 48 CFR 209.105-2-70 - Inclusion of determination of contractor fault in Federal Awardee Performance and Integrity...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Inclusion of determination of contractor fault in Federal Awardee Performance and Integrity Information System (FAPIIS). 209.105... Contractors 209.105-2-70 Inclusion of determination of contractor fault in Federal Awardee Performance and...

  17. 48 CFR 209.105-2-70 - Inclusion of determination of contractor fault in Federal Awardee Performance and Integrity...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Inclusion of determination of contractor fault in Federal Awardee Performance and Integrity Information System (FAPIIS). 209.105... Contractors 209.105-2-70 Inclusion of determination of contractor fault in Federal Awardee Performance and...

  18. 48 CFR 209.105-2-70 - Inclusion of determination of contractor fault in Federal Awardee Performance and Integrity...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Inclusion of determination of contractor fault in Federal Awardee Performance and Integrity Information System (FAPIIS). 209.105... Contractors 209.105-2-70 Inclusion of determination of contractor fault in Federal Awardee Performance and...

  19. 48 CFR 209.105-2-70 - Inclusion of determination of contractor fault in Federal Awardee Performance and Integrity...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Inclusion of determination of contractor fault in Federal Awardee Performance and Integrity Information System (FAPIIS). 209.105... Contractors 209.105-2-70 Inclusion of determination of contractor fault in Federal Awardee Performance and...

  20. Fault structure and kinematics of the Long Valley Caldera region, California, revealed by high-accuracy earthquake hypocenters and focal mechanism stress inversions

    NASA Astrophysics Data System (ADS)

    Prejean, Stephanie; Ellsworth, William; Zoback, Mark; Waldhauser, Felix

    2002-12-01

    We have determined high-resolution hypocenters for 45,000+ earthquakes that occurred between 1980 and 2000 in the Long Valley caldera area using a double-difference earthquake location algorithm and routinely determined arrival times. The locations reveal numerous discrete fault planes in the southern caldera and adjacent Sierra Nevada block (SNB). Intracaldera faults include a series of east/west-striking right-lateral strike-slip faults beneath the caldera's south moat and a series of more northerly striking strike-slip/normal faults beneath the caldera's resurgent dome. Seismicity in the SNB south of the caldera is confined to a crustal block bounded on the west by an east-dipping oblique normal fault and on the east by the Hilton Creek fault. Two NE-striking left-lateral strike-slip faults are responsible for most seismicity within this block. To understand better the stresses driving seismicity, we performed stress inversions using focal mechanisms with 50 or more first motions. This analysis reveals that the least principal stress direction systematically rotates across the studied region, from NE to SW in the caldera's south moat to WNW-ESE in Round Valley, 25 km to the SE. Because WNW-ESE extension is characteristic of the western boundary of the Basin and Range province, caldera area stresses appear to be locally perturbed. This stress perturbation does not seem to result from magma chamber inflation but may be related to the significant (˜20 km) left step in the locus of extension along the Sierra Nevada/Basin and Range province boundary. This implies that regional-scale tectonic processes are driving seismic deformation in the Long Valley caldera.

  1. Fault structure and kinematics of the Long Valley Caldera region, California, revealed by high-accuracy earthquake hypocenters and focal mechanism stress inversions

    USGS Publications Warehouse

    Prejean, Stephanie; Ellsworth, William L.; Zoback, Mark; Waldhauser, Felix

    2002-01-01

    We have determined high-resolution hypocenters for 45,000+ earthquakes that occurred between 1980 and 2000 in the Long Valley caldera area using a double-difference earthquake location algorithm and routinely determined arrival times. The locations reveal numerous discrete fault planes in the southern caldera and adjacent Sierra Nevada block (SNB). Intracaldera faults include a series of east/west-striking right-lateral strike-slip faults beneath the caldera's south moat and a series of more northerly striking strike-slip/normal faults beneath the caldera's resurgent dome. Seismicity in the SNB south of the caldera is confined to a crustal block bounded on the west by an east-dipping oblique normal fault and on the east by the Hilton Creek fault. Two NE-striking left-lateral strike-slip faults are responsible for most seismicity within this block. To understand better the stresses driving seismicity, we performed stress inversions using focal mechanisms with 50 or more first motions. This analysis reveals that the least principal stress direction systematically rotates across the studied region, from NE to SW in the caldera's south moat to WNW-ESE in Round Valley, 25 km to the SE. Because WNW-ESE extension is characteristic of the western boundary of the Basin and Range province, caldera area stresses appear to be locally perturbed. This stress perturbation does not seem to result from magma chamber inflation but may be related to the significant (~20 km) left step in the locus of extension along the Sierra Nevada/Basin and Range province boundary. This implies that regional-scale tectonic processes are driving seismic deformation in the Long Valley caldera.

  2. Field and experimental evidence for coseismic ruptures along shallow creeping faults in forearc sediments of the Crotone Basin, South Italy

    NASA Astrophysics Data System (ADS)

    Balsamo, Fabrizio; Aldega, Luca; De Paola, Nicola; Faoro, Igor; Storti, Fabrizio

    2014-05-01

    Large seismic slip occurring along shallow creeping faults in tectonically active areas represents an unsolved paradox, largely due to our poor understanding of the mechanics governing creeping faults and to the lack of documented geological evidence showing how coseismic rupturing overprints creep in near-surface conditions. In this contribution we integrate field, petrophysical, mineralogical and friction data to characterize the signature of coseismic ruptures propagating along shallow creeping faults affecting unconsolidated forearc sediments of the seismically active Crotone Basin, in South Italy. Field observations of fault zones show widespread foliated cataclasites in fault cores, locally overprinted by sharp slip surfaces decorated by thin (0.5-1.5 cm) black gouge layers. Compared to foliated cataclasites, black gouges have much lower grain size, porosity and permeability, which may have facilitated slip weakening by thermal fluid pressurization. Moreover, black gouges are characterized by distinct mineralogical assemblages compatible with high temperatures (180-200°C) due to frictional heating during seismic slip. Foliated cataclasites and black gouges were also produced by laboratory friction experiments performed on host sediments at sub-seismic (≤ 0.1 m/s) and seismic (1 m/s) slip rates, respectively. Black gouges display low friction coefficients (0.3) and velocity-weakening behaviour, as opposed to the high friction coefficients (0.65) and velocity-strengthening behaviour shown by the foliated cataclasites. Our results show that narrow black gouges developed within foliated cataclasites represent a potential diagnostic marker for episodic seismic activity in shallow creeping faults. These findings can help in understanding the time-space partitioning between aseismic and seismic slip on faults at shallow crustal levels, with implications for the seismic hazard evaluation of subduction zones and forearc regions affected by destructive earthquakes and tsunamis.

  3. High-Resolution Seismic Reflection Profiling Across the Black Hills Fault, Clark County, Nevada: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Zaragoza, S. A.; Snelson, C. M.; Jernsletten, J. A.; Saldana, S. C.; Hirsch, A.; McEwan, D.

    2005-12-01

    The Black Hills fault (BHF) is located in the central Basin and Range Province of western North America, a region that has undergone significant Cenozoic extension. The BHF is an east-dipping normal fault that forms the northwestern structural boundary of the Eldorado basin and lies ~20 km southeast of Las Vegas, Nevada. A recent trench study indicated that the fault offsets Holocene strata, and is capable of producing Mw 6.4-6.8 earthquakes. These estimates indicate a subsurface rupture length at least 10 km greater than the length of the scarp. This poses a significant hazard to structures such as the nearby Hoover Dam Bypass Bridge, which is being built to withstand a Mw 6.2-7.0 earthquake on local faults. If the BHF does continue in the subsurface, this structure, as well as nearby communities (Las Vegas, Boulder City, and Henderson), may not be as safe as previously expected. Previous attempts to image the fault with shallow seismics (hammer source) were inconclusive. However, gravity studies imply that the fault continues south of the scarp. Therefore, a new experiment utilizing high-resolution seismic reflection was performed to image subsurface geologic structures south of the scarp. At each shot point, a stack of four 30-160 Hz vibroseis sweeps of 15 s duration was recorded on a 60-channel system with 40 Hz geophones. This produced two 300 m reflection profiles, with a maximum depth of 500-600 m. A preliminary look at these data indicates the existence of two faults, potentially confirming that the BHF continues in the subsurface south of the scarp.

  4. GUI Type Fault Diagnostic Program for a Turboshaft Engine Using Fuzzy and Neural Networks

    NASA Astrophysics Data System (ADS)

    Kong, Changduk; Koo, Youngju

    2011-04-01

    A helicopter operating in severe flight environmental conditions must have a very reliable propulsion system. On-line condition monitoring and fault detection of the engine can improve the reliability and availability of the helicopter propulsion system. A hybrid health monitoring program using fuzzy logic and neural network algorithms is proposed. In this hybrid method, the fuzzy logic readily identifies the faulty components from changes in engine measurement parameters, and the neural networks accurately quantify the identified faults. For effective use of the fault diagnostic system, a GUI (Graphical User Interface) type program is newly proposed. This program is composed of a real-time monitoring part, an engine condition monitoring part and a fault diagnostic part. The real-time monitoring part can display measured parameters of the studied turboshaft engine such as power turbine inlet temperature, exhaust gas temperature, fuel flow, torque and gas generator speed. The engine condition monitoring part can evaluate the engine condition through comparison between the monitored performance parameters and the base performance parameters analyzed by the base performance analysis program using look-up tables. The fault diagnostic part can identify and quantify single faults and multiple faults from the monitored parameters using the hybrid method.

  5. Chaining for Flexible and High-Performance Key-Value Systems

    DTIC Science & Technology

    2012-09-01

    store that is fault tolerant achieves high performance and availability, and offers strong data consistency? We present a new replication protocol...effective high performance data access and analytics, many sites use simpler data model “NoSQL” systems. These systems store and retrieve data only by...DRAM, Flash, and disk-based storage; can act as an unreliable cache or a durable store; and can offer strong or weak data consistency. The value of

  6. A High Performance COTS Based Computer Architecture

    NASA Astrophysics Data System (ADS)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so important that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on COTS component behavior. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high-performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; we then briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  7. Frictional Properties of Main Fault Gouge of Mont Terri, Switzerland

    NASA Astrophysics Data System (ADS)

    Aoki, K.; Seshimo, K.; Guglielmi, Y.; Nussbaum, C.; Shimamoto, T.; Ma, S.; Yao, L.; Kametaka, M.; Sakai, T.

    2016-12-01

    JAEA participated in the Fault Slip Experiment of the Mont Terri Project, which aims at understanding (i) the conditions for slip activation and stability of clay faults, and (ii) the evolution of the coupling between fault slip, pore pressure and fluid migration. The experiment uses the SIMFIP probe to estimate (i) the hydraulic and elastic properties of fault zone elements, (ii) the state of stress across the fault zone and (iii) the apparent strength properties of the fault zone (friction coefficient and cohesion). To elaborate on the Fault Slip Experiment, JAEA performed friction experiments on borehole cores from depths of 47.2 m and 37.3 m using a rotary-shear low- to high-velocity friction apparatus at the Institute of Geology, China Earthquake Administration. Friction experiments were performed either dry at room humidity or wet with 30 wt% H2O, at a normal stress of 1.38 MPa and at low to intermediate slip rates ranging from 0.21 μm/s to 2.1 mm/s. The sample from a depth of 37.3 m is a fault rock with scaly fabric and calcite veins, whereas that from 47.2 m depth is a pelitic rock that disaggregates easily with water. The main experimental results are summarized as follows. (1) Gouge samples from both depths exhibit slight velocity strengthening at V below 0.021 mm/s and notable velocity strengthening at V above approximately 0.021 mm/s. Frictional regimes can thus be classified into low-velocity and intermediate-velocity regimes, characterized by slight and clear velocity-strengthening behaviors, respectively. (2) Wet gouge from a depth of 47.2 m has steady-state friction coefficients (μss) of 0.12-0.2 at low V and 0.11-0.24 at intermediate V, while dry gouge from the same depth has μss two to three times as high as that of the wet gouge. (3) In contrast, both dry and wet gouges from a depth of 37.3 m have μss of around 0.4 to 0.74 at low V and around 0.45 to 0.75 at intermediate V; there are almost no differences between the dry and wet gouges from this depth. (4) The wet gouge from 47.2 m depth has a clear slip zone at the gouge-moving piston interface, but clear slip zones are missing in the wet gouge from 37.3 m depth. (5) It is hoped that the frictional strengths from the present experiments will give some insight into the conditions for initiation of fault slip during fluid injection. Results from four other depths will be discussed at the session.

  8. Reliability Assessment for Low-cost Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Freeman, Paul Michael

    Existing low-cost unmanned aerospace systems are unreliable, and engineers must blend reliability analysis with fault-tolerant control in novel ways. This dissertation introduces the University of Minnesota unmanned aerial vehicle flight research platform, a comprehensive simulation and flight test facility for reliability and fault-tolerance research. An industry-standard reliability assessment technique, the failure modes and effects analysis, is performed for an unmanned aircraft. Particular attention is afforded to the control surface and servo-actuation subsystem. Maintaining effector health is essential for safe flight; failures may lead to loss of control incidents. Failure likelihood, severity, and risk are qualitatively assessed for several effector failure modes. Design changes are recommended to improve aircraft reliability based on this analysis. Most notably, the control surfaces are split, providing independent actuation and dual-redundancy. The simulation models for control surface aerodynamic effects are updated to reflect the split surfaces using a first-principles geometric analysis. The failure modes and effects analysis is extended by using a high-fidelity nonlinear aircraft simulation. A trim state discovery is performed to identify the achievable steady, wings-level flight envelope of the healthy and damaged vehicle. Tolerance of elevator actuator failures is studied using familiar tools from linear systems analysis. This analysis reveals significant inherent performance limitations for candidate adaptive/reconfigurable control algorithms used for the vehicle. Moreover, it demonstrates how these tools can be applied in a design feedback loop to make safety-critical unmanned systems more reliable. Control surface impairments that do occur must be quickly and accurately detected. This dissertation also considers fault detection and identification for an unmanned aerial vehicle using model-based and model-free approaches and applies those algorithms to experimental faulted and unfaulted flight test data. Flight tests are conducted with actuator faults that affect the plant input and sensor faults that affect the vehicle state measurements. A model-based detection strategy is designed and uses robust linear filtering methods to reject exogenous disturbances, e.g. wind, while providing robustness to model variation. A data-driven algorithm is developed to operate exclusively on raw flight test data without physical model knowledge. The fault detection and identification performance of these complementary but different methods is compared. Together, enhanced reliability assessment and multi-pronged fault detection and identification techniques can help to bring about the next generation of reliable low-cost unmanned aircraft.

  9. Gas Path On-line Fault Diagnostics Using a Nonlinear Integrated Model for Gas Turbine Engines

    NASA Astrophysics Data System (ADS)

    Lu, Feng; Huang, Jin-quan; Ji, Chun-sheng; Zhang, Dong-dong; Jiao, Hua-bin

    2014-08-01

    Gas path fault diagnosis for gas turbine engines is a key technology that assists operators in managing the engine units. However, gradual performance degradation is inevitable with usage, and it results in model mismatch and hence misdiagnosis by the popular model-based approaches. In this paper, an on-line integrated architecture based on a nonlinear model is developed for gas turbine engine anomaly detection and fault diagnosis over the course of the engine's life. The two engine models involved have different performance parameter update rates. One is a nonlinear real-time adaptive performance model with a spherical square-root unscented Kalman filter (SSR-UKF) producing performance estimates, and the other is a nonlinear baseline model for the measurement estimates. The fault detection and diagnosis logic is designed to discriminate between sensor faults and component faults. This integrated architecture is not only aware of long-term engine health degradation but is also effective at detecting gas path performance anomaly shifts while the engine continues to degrade. Compared to the existing architecture, the benefit of the proposed approach is investigated through experiment and analysis.

  10. Volume Based Curvature Attributes Illuminate Stress Effects in Contiguous Fault Blocks, Central Basin Platform, West Texas

    NASA Astrophysics Data System (ADS)

    Blumentritt, C. H.; Marfurt, K. J.

    2005-05-01

    We compute curvatures for 3-D seismic volumes covering 200+ mi² of the Central Basin Platform in West Texas and find that these attributes illuminate lineations not seen on other displays of the seismic data. We analyze the preferred orientations of these lineations defined by well-imaged faults and fault zones and find that the patterns vary according to the nature of the faults bounding the blocks, mostly strike-slip, high-angle reverse, or oblique slip. We perform the analysis in the pre-Mississippian section, which is decoupled from the overburden by a Permian-age unconformity. Our technique differs from that of previous workers in that we compute curvatures on each sample of a seismic volume using a moving subvolume rather than along surfaces interpreted from the data. In this way, we minimize high-frequency variations in the results that arise from picking errors in the interpretation or noise in the data. We are able to extract and display values of curvature along time or depth slices, along horizon slices, and along poorly imaged horizons.

  11. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Howard; Braun, James E.

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  12. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Howard; Braun, James E.

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  13. Runtime Speculative Software-Only Fault Tolerance

    DTIC Science & Technology

    2012-06-01

    reliability of RSFT, an in-depth analysis of its window of vulnerability is also discussed and measured via simulated fault injection. The performance...propagation of faults through the entire program. For optimal performance, these techniques have to use heroic alias analysis to find the minimum set of...affect program output. No program source code or alias analysis is needed to analyze the fault propagation ahead of time. 2.3 Limitations of Existing

  14. Predictive modelling of fault related fracturing in carbonate damage-zones: analytical and numerical models of field data (Central Apennines, Italy)

    NASA Astrophysics Data System (ADS)

    Mannino, Irene; Cianfarra, Paola; Salvini, Francesco

    2010-05-01

    Permeability in carbonates is strongly influenced by the presence of brittle deformation patterns, i.e. pressure-solution surfaces, extensional fractures, and faults. Carbonate rocks develop fractures during both diagenesis and tectonic processes. The attitude, spatial distribution and connectivity of brittle deformation features rule the secondary permeability of carbonate rocks and therefore the accumulation and pathways of deep fluids (groundwater, hydrocarbons). This is particularly true in fault zones, where the damage zone and the fault core show hydraulic properties different from the pristine rock as well as from each other. To improve the knowledge of fault architecture and fault hydraulic properties we study the brittle deformation patterns related to fault kinematics in carbonate successions. In particular we focussed on the evolution of damage-zone fracturing. Fieldwork was performed in Meso-Cenozoic carbonate units of the Latium-Abruzzi Platform, Central Apennines, Italy. These units represent field analogues of reservoir rocks in the Southern Apennines. We combine the study of rock physical characteristics of 22 faults and quantitative analyses of brittle deformation for the same faults, including bedding attitudes, fracturing type, attitudes, and spatial intensity distribution, using the dimension/spacing ratio, namely the H/S ratio, where H is the dimension of the fracture and S is the spacing between two analogous fractures of the same set. Statistical analyses of structural data (stereonets, contouring and H/S transects) were performed to infer a focussed, general algorithm that describes the expected intensity of the fracturing process. The analytical model was fit to field measurements by a Monte Carlo-convergent approach. This method proved a useful tool to quantify complex relations with a high number of variables. It creates a large sequence of possible solution parameters, and the results are compared with field data. For each item a mean error value (RMS) is computed, representing the effectiveness of the fit and thus the validity of the analysis. Eventually, the method selects the set of parameters that produced the lowest values. The tested algorithm describes the expected H/S values as a function of the distance from the fault core (D), the clay content (S), and the fault throw (T). The preliminary results of the Monte Carlo inversion show that the distance (D) has the strongest influence on the H/S spatial distribution, and the H/S value decreases with distance from the fault core. The rheological parameter shows a value similar to the diagenetic H/S values (1-1.5). The resulting equation has a reasonable RMS value of 0.116. The results of the Monte Carlo models were finally implemented in FRAP, a fault-environment modelling package. It is a true 4D tool that can predict the stress conditions and permeability architecture associated with a given fault during single or multiple tectonic events. We present some models of fault-related fracturing for the studied faults performed with FRAP and compare them with the field measurements to test the validity of our methodology.
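
    The Monte Carlo-convergent fit can be sketched as a random search: draw many candidate parameter sets, score each by the RMS misfit to the field H/S measurements, and keep the best. The functional form (exponential decay of H/S away from the fault core, scaled by throw) and the data below are assumptions for illustration, not the paper's fitted law.

    ```python
    # Random-search (Monte Carlo) fit of an assumed H/S(D, T) model to field data.
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical field data: distance from fault core D (m), throw T (m), H/S
    D = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
    T = np.full_like(D, 50.0)
    HS_obs = np.array([6.1, 4.8, 3.4, 2.2, 1.6, 1.3])

    def model(D, T, c0, c1, lam):
        # Assumed form: background H/S plus a throw-scaled exponential decay
        return c0 + c1 * T * np.exp(-D / lam)

    best_rms, best_params = np.inf, None
    for _ in range(50_000):
        c0 = rng.uniform(0.5, 2.0)      # background (diagenetic) H/S
        c1 = rng.uniform(0.0, 0.5)      # throw sensitivity
        lam = rng.uniform(1.0, 100.0)   # decay length (m)
        rms = np.sqrt(np.mean((model(D, T, c0, c1, lam) - HS_obs) ** 2))
        if rms < best_rms:
            best_rms, best_params = rms, (c0, c1, lam)

    print("best RMS:", round(best_rms, 3), "params:", best_params)
    ```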

  15. Detection of CMOS bridging faults using minimal stuck-at fault test sets

    NASA Technical Reports Server (NTRS)

    Ijaz, Nabeel; Frenzel, James F.

    1993-01-01

    The performance of minimal stuck-at fault test sets at detecting bridging faults is evaluated. New functional models of circuit primitives are presented which allow accurate representation of bridging faults under switch-level simulation. The effectiveness of the patterns is evaluated using both voltage and current testing.

  16. Health management and controls for Earth-to-orbit propulsion systems

    NASA Astrophysics Data System (ADS)

    Bickford, R. L.

    1995-03-01

    Avionics and health management technologies increase the safety and reliability while decreasing the overall cost for Earth-to-orbit (ETO) propulsion systems. New ETO propulsion systems will depend on highly reliable fault tolerant flight avionics, advanced sensing systems and artificial intelligence aided software to ensure critical control, safety and maintenance requirements are met in a cost effective manner. Propulsion avionics consist of the engine controller, actuators, sensors, software and ground support elements. In addition to control and safety functions, these elements perform system monitoring for health management. Health management is enhanced by advanced sensing systems and algorithms which provide automated fault detection and enable adaptive control and/or maintenance approaches. Aerojet is developing advanced fault tolerant rocket engine controllers which provide very high levels of reliability. Smart sensors and software systems which significantly enhance fault coverage and enable automated operations are also under development. Smart sensing systems, such as flight capable plume spectrometers, have reached maturity in ground-based applications and are suitable for bridging to flight. Software to detect failed sensors has reached similar maturity. This paper will discuss fault detection and isolation for advanced rocket engine controllers as well as examples of advanced sensing systems and software which significantly improve component failure detection for engine system safety and health management.

  17. Model-Based Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Kumar, Aditya; Viassolo, Daniel

    2008-01-01

    The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm into an overall estimate of the identified fault type and magnitude. Identifying the fault type and magnitude enabled an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in their presence. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
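
    One simple way to turn filter residuals into a fault-type decision is to match them against known fault signatures in a least-squares sense. The sketch below is a hedged illustration only, not the MBFTC implementation; the residual vector and signatures are invented.

      import numpy as np

      # Invented fault signatures over a 3-element residual vector.
      signatures = {
          "sensor_bias": np.array([1.0, 0.0, 0.0]),
          "actuator_fault": np.array([0.0, 1.0, 0.5]),
      }

      def classify(residual):
          # Pick the hypothesis whose signature best matches the residual.
          best, best_err = "nominal", np.dot(residual, residual)
          for name, sig in signatures.items():
              mag = np.dot(residual, sig) / np.dot(sig, sig)   # least-squares fault magnitude
              err = np.sum((residual - mag * sig) ** 2)        # residual after removing the fault
              if err < best_err:
                  best, best_err = name, err
          return best

      print(classify(np.array([0.02, 0.98, 0.51])))   # -> "actuator_fault"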

  18. SFT: Scalable Fault Tolerance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrini, Fabrizio; Nieplocha, Jarek; Tipparaju, Vinod

    2006-04-15

    In this paper we present a new technology that we are currently developing within the SFT: Scalable Fault Tolerance FastOS project, which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continued execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency, requiring no changes to user applications. Our technology is based on a global coordination mechanism that enforces transparent recovery lines in the system, and on TICK, a lightweight, incremental checkpointing software architecture implemented as a Linux kernel module. TICK is completely user-transparent and does not require any changes to user code or system libraries; it is highly responsive (an interrupt, such as a timer interrupt, can trigger a checkpoint in as little as 2.5 μs); and it supports incremental and full checkpoints with minimal overhead, less than 6% with full checkpointing to disk performed as frequently as once per minute.

  19. Using high-resolution multibeam bathymetry to identify seafloor surface rupture along the Palos Verdes fault complex in offshore Southern California

    USGS Publications Warehouse

    Marlow, M. S.; Gardner, J.V.; Normark, W.R.

    2000-01-01

    Recently acquired high-resolution multibeam bathymetric data reveal several linear traces that are the surficial expressions of seafloor rupture of Holocene faults on the upper continental slope southeast of the Palos Verdes Peninsula. High-resolution multichannel and boomer seismic-reflection profiles show that these linear ruptures are the surficial expressions of Holocene faults with vertical to steep dips. The most prominent fault on the multibeam bathymetry is about 10 km to the west of the mapped trace of the Palos Verdes fault and extends for at least 14 km between the shelf edge and the base of the continental slope. This fault is informally called the Avalon Knoll fault for the nearby geographic feature of that name. Seismic-reflection profiles show that the Avalon Knoll fault is part of a northwest-trending complex of faults and anticlinal uplifts that are evident as scarps and bathymetric highs on the multibeam bathymetry. This fault complex may extend onshore and contribute to the missing balance of Quaternary uplift determined for the Palos Verdes Hills and not accounted for by vertical uplift along the onshore Palos Verdes fault. We investigate the extent of the newly located offshore Avalon Knoll fault and use this mapped fault length to estimate likely minimum magnitudes for events along this fault.

  20. High temperature superconducting fault current limiter

    DOEpatents

    Hull, J.R.

    1997-02-04

    A fault current limiter for an electrical circuit is disclosed. The fault current limiter includes a high temperature superconductor in the electrical circuit. The high temperature superconductor is cooled below its critical temperature to maintain the superconducting electrical properties during operation as the fault current limiter. 15 figs.

  1. Cooperative fault-tolerant distributed computing U.S. Department of Energy Grant DE-FG02-02ER25537 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunderam, Vaidy S.

    2007-01-09

    The Harness project has developed novel software frameworks for the execution of high-end simulations in a fault-tolerant manner on distributed resources. The H2O subsystem comprises the kernel of the Harness framework, and controls the key functions of resource management across multiple administrative domains, especially issues of access and allocation. It is based on a “pluggable” architecture that enables the aggregated use of distributed heterogeneous resources for high performance computing. The major contributions of the Harness II project significantly enhance the overall computational productivity of high-end scientific applications by enabling robust, failure-resilient computations on cooperatively pooled resource collections.

  2. Analysis of a flux-coupling type superconductor fault current limiter with pancake coils

    NASA Astrophysics Data System (ADS)

    Liu, Shizhuo; Xia, Dong; Zhang, Zhifeng; Qiu, Qingquan; Zhang, Guomin

    2017-10-01

    The characteristics of a flux-coupling type superconductor fault current limiter (SFCL) with pancake coils are investigated in this paper. The conventional double-wound non-inductive pancake coil used in AC power systems has an inherent drawback in Voltage Sourced Converter based High Voltage DC (VSC-HVDC) power systems: due to its special structure, flashover can easily occur during a fault in a high-voltage environment. Considering these shortcomings of conventional resistive SFCLs with non-inductive coils, a novel flux-coupling type SFCL with pancake coils is proposed. Module connections of the pancake coils are designed, and electromagnetic field and force analyses of the module are compared under different parameters. To ensure proper operation of the module, the impedance of the module under representative operating conditions is calculated. Finally, the feasibility of the flux-coupling type SFCL in VSC-HVDC power systems is discussed.

  3. A distributed fault-detection and diagnosis system using on-line parameter estimation

    NASA Technical Reports Server (NTRS)

    Guo, T.-H.; Merrill, W.; Duyar, A.

    1991-01-01

    The development of a model-based fault-detection and diagnosis (FDD) system is reviewed. The system can be used as an integral part of an intelligent control system. It determines the faults of a system by comparing measurements of the system with a priori information represented by a model of the system. The method of modeling a complex system is described, and a description of diagnosis models that include process faults is presented. There are three distinct classes of fault modes covered by the system performance model equation: actuator faults, sensor faults, and performance degradation. A system equation for a complete model that describes all three classes of faults is given. The strategy for detecting a fault and estimating the fault parameters using a distributed on-line parameter identification scheme is presented. A two-step approach is proposed. The first step comprises a group of hypothesis testing modules (HTMs) running in parallel, each testing one class of faults. The second step is the fault diagnosis module, which checks all the information obtained from the HTM level, isolates the fault, and determines its magnitude. The proposed FDD system was demonstrated by applying it to detect actuator and sensor faults added to a simulation of the Space Shuttle Main Engine. The simulation results show that the proposed FDD system can adequately detect the faults and estimate their magnitudes.
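
    The on-line parameter identification underlying the HTM level can be illustrated with a scalar recursive least-squares estimator that tracks an actuator gain and reveals a fault as a parameter shift. A minimal sketch, with an invented system model, noise level and injected fault:

      import numpy as np

      rng = np.random.default_rng(1)
      theta_true = 1.0                 # nominal actuator gain (invented)
      theta_hat, P = 0.5, 100.0        # initial estimate and covariance
      lam = 0.98                       # forgetting factor to track slow parameter changes

      for k in range(200):
          if k == 100:
              theta_true = 0.7         # inject an actuator degradation fault
          u = rng.uniform(-1, 1)                              # excitation input
          y = theta_true * u + 0.01 * rng.standard_normal()   # noisy measurement
          # Scalar recursive least-squares update
          K = P * u / (lam + u * P * u)
          theta_hat += K * (y - theta_hat * u)
          P = (P - K * u * P) / lam

      print(round(theta_hat, 2))       # close to 0.7 -> degradation detected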

  4. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2015-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
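
    The sensor-selection metric for the estimation case can be sketched as an exhaustive search that scores each candidate suite by the trace of the posterior covariance of a linear MAP estimator. The influence matrix, noise variances and suite size below are invented placeholders, not the engine model of the paper.

      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(5)
      H = rng.normal(size=(6, 3))               # 6 candidate sensors, 3 health parameters
      R = np.diag(rng.uniform(0.01, 0.1, 6))    # sensor noise variances
      P0 = np.eye(3)                            # prior covariance of health parameters

      best = None
      for suite in combinations(range(6), 4):   # choose 4 of the 6 sensors
          idx = list(suite)
          Hs, Rs = H[idx], R[np.ix_(idx, idx)]
          # Posterior covariance of the MAP estimate for this suite
          P = np.linalg.inv(Hs.T @ np.linalg.inv(Rs) @ Hs + np.linalg.inv(P0))
          sse = np.trace(P)                     # theoretical sum of squared estimation errors
          if best is None or sse < best[0]:
              best = (sse, suite)

      print(best)                               # best suite and its error score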

  6. Fault Diagnosis Method for a Mine Hoist in the Internet of Things Environment.

    PubMed

    Li, Juanli; Xie, Jiacheng; Yang, Zhaojian; Li, Junjie

    2018-06-13

    To reduce the difficulty of acquiring and transmitting data in mine hoist fault diagnosis systems, and to address their low efficiency and poorly structured reasoning processes, a fault diagnosis method for mine hoisting equipment based on the Internet of Things (IoT) is proposed in this study. The IoT architecture comprises three basic layers: a perception layer, a network layer, and an application layer. In the perception layer, we designed a collaborative acquisition system for key components of the mine hoisting equipment based on the ZigBee short-distance wireless communication technology, achieving real-time data acquisition. The network layer was created using long-distance wireless General Packet Radio Service (GPRS) transmission; its transmission and reception platforms were able to transmit data in real time. A fault diagnosis reasoning method based on an improved Dezert-Smarandache Theory (DSmT) of evidence is proposed and used to perform the diagnostic reasoning. Based on interactive technology, a humanized and visualized fault diagnosis platform is created in the application layer. The method is then verified: a fault diagnosis test of the mine hoisting mechanism shows that the proposed method obtains complete diagnostic data, and the diagnosis results have high accuracy and reliability.
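
    The evidence-fusion step can be illustrated with the classic Dempster combination rule, which DSmT generalizes; the mass assignments from two hypothetical sensors below are invented.

      # Combine two basic belief assignments with Dempster's rule (singleton hypotheses).
      def dempster(m1, m2):
          combined, conflict = {}, 0.0
          for a, pa in m1.items():
              for b, pb in m2.items():
                  if a == b:
                      combined[a] = combined.get(a, 0.0) + pa * pb
                  else:
                      conflict += pa * pb                  # mass assigned to conflicting pairs
          return {h: v / (1.0 - conflict) for h, v in combined.items()}

      vibration = {"bearing_wear": 0.6, "imbalance": 0.3, "normal": 0.1}
      temperature = {"bearing_wear": 0.7, "imbalance": 0.1, "normal": 0.2}
      print(dempster(vibration, temperature))              # bearing_wear dominates (~0.89)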

  7. 78 FR 41961 - Submission for Review: 3206-0128, Application for Refund of Retirement Deductions (CSRS), SF 2802...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-12

    ... Retirement Deductions (CSRS), SF 2802 and Current/Former Spouse's Notification of Application for Refund of... Deductions Civil Service Retirement System and Current/Former Spouse's Notification of Application for Refund... Reduction

  8. Evaluating and extending user-level fault tolerance in MPI applications

    DOE PAGES

    Laguna, Ignacio; Richards, David F.; Gamblin, Todd; ...

    2016-01-11

    The user-level failure mitigation (ULFM) interface has been proposed to provide fault-tolerant semantics in the Message Passing Interface (MPI). Previous work presented performance evaluations of ULFM; yet questions related to its programmability and applicability, especially to non-trivial, bulk synchronous applications, remain unanswered. In this article, we present our experiences using ULFM in a case study with a large, highly scalable, bulk synchronous molecular dynamics application to shed light on the advantages and difficulties of this interface for programming fault-tolerant MPI applications. We found that, although ULFM is suitable for master-worker applications, it provides few benefits for more common bulk synchronous MPI applications. To address these limitations, we introduce a new, simpler fault-tolerant interface for complex, bulk synchronous MPI programs with better applicability and support than ULFM for application-level recovery mechanisms, such as global rollback.
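
    The global-rollback recovery pattern the authors advocate can be sketched at application level: checkpoint periodically, and on a detected failure restore every process to the last consistent checkpoint. A minimal single-process sketch in Python, with an invented compute step and failure model:

      import pickle, random

      random.seed(0)

      def run(n_steps, checkpoint_every=10):
          state, step = {"x": 0.0}, 0
          saved = pickle.dumps((state, step))            # initial checkpoint
          while step < n_steps:
              try:
                  if random.random() < 0.01:             # simulated process failure
                      raise RuntimeError("rank failed")
                  state["x"] += 1.0                      # stand-in for one compute step
                  step += 1
                  if step % checkpoint_every == 0:
                      saved = pickle.dumps((state, step))  # take a global checkpoint
              except RuntimeError:
                  state, step = pickle.loads(saved)      # all ranks roll back together
          return state

      print(run(100))                                    # {'x': 100.0}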

  9. Microearthquake streaks and seismicity triggered by slow earthquakes on the mobile south flank of Kilauea Volcano, Hawai'i

    USGS Publications Warehouse

    Wolfe, C.J.; Brooks, B.A.; Foster, J.H.; Okubo, P.G.

    2007-01-01

    We perform waveform cross correlation and high precision relocation of both background seismicity and seismicity triggered by periodic slow earthquakes at Kilauea Volcano's mobile south flank. We demonstrate that the triggered seismicity dominantly occurs on several preexisting fault zones in the Hilina region. Regardless of the velocity model employed, the relocated earthquake epicenters and triggered seismicity localize onto distinct fault zones that form streaks aligned with the slow earthquake surface displacements determined from GPS. Due to the unknown effects of velocity heterogeneity and nonideal station coverage, our relocation analyses cannot distinguish whether some of these fault zones occur within the volcanic crust at shallow depths or whether all occur on the decollement between the volcano and preexisting oceanic crust at depths of ~8 km. Nonetheless, these Hilina fault zones consistently respond to stress perturbations from nearby slow earthquakes. Copyright 2007 by the American Geophysical Union.

  10. High temperature superconducting fault current limiter

    DOEpatents

    Hull, John R.

    1997-01-01

    A fault current limiter (10) for an electrical circuit (14). The fault current limiter (10) includes a high temperature superconductor (12) in the electrical circuit (14). The high temperature superconductor (12) is cooled below its critical temperature to maintain the superconducting electrical properties during operation as the fault current limiter (10).

  11. Study of a phase-to-ground fault on a 400 kV overhead transmission line

    NASA Astrophysics Data System (ADS)

    Iagăr, A.; Popa, G. N.; Diniş, C. M.

    2018-01-01

    Power utilities need to supply their consumers at a high power quality level. Because faults that occur on High-Voltage and Extra-High-Voltage transmission lines can cause serious damage in underlying transmission and distribution systems, it is important to examine each fault in detail. In this work we studied a phase-to-ground fault (on phase 1) of the 400 kV overhead transmission line Mintia-Arad. The Indactic® 650 fault analyzing system was used to record the history of the fault, and the analog and digital signals it recorded were visualized and analyzed with the Focus program. The summary of the fault report allowed evaluation of the behavior of the control and protection equipment and determination of the cause and location of the fault.

  12. The emergence of reasoning by the disjunctive syllogism in early childhood.

    PubMed

    Mody, Shilpa; Carey, Susan

    2016-09-01

    Logical inference is often seen as an exclusively human and language-dependent ability, but several nonhuman animal species search in a manner that is consistent with a deductive inference, the disjunctive syllogism: when a reward is hidden in one of two cups, and one cup is shown to be empty, they will search for the reward in the other cup. In Experiment 1, we extended these results to toddlers, finding that 23-month-olds consistently approached the non-empty location. However, these results could reflect non-deductive approaches of simply avoiding the empty location, or of searching in any location that might contain the reward, rather than reasoning through the disjunctive syllogism to infer that the other location must contain the reward. Experiment 2 addressed these alternatives, finding evidence that 3- to 5-year-olds used the disjunctive syllogism, while 2.5-year-olds did not. This suggests that younger children may not easily deploy this logical inference, and that a non-deductive approach may be behind the successful performance of nonhuman animals and human infants. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Deductive and inductive reasoning in Parkinson's disease patients and normal controls: review and experimental evidence.

    PubMed

    Natsopoulos, D; Katsarou, Z; Alevriadou, A; Grouios, G; Bostantzopoulou, S; Mentenopoulos, G

    1997-09-01

    In the present study, fifty-four subjects were tested: twenty-seven with idiopathic Parkinson's disease and twenty-seven normal controls matched in age, education, verbal ability, level of depression, sex and socio-economic status. The subjects were tested on eight tasks. Five of the tasks were the classic deductive reasoning syllogisms, modus ponens, modus tollendo tollens, affirming the consequent, denying the antecedent and three-term series problems phrased in a factual context (brief scripts). Three of the tasks were inductive reasoning, including logical inferences, metaphors and similes. All tasks were presented to subjects in a multiple choice format. Overall, the results showed nonsignificant differences between the two groups in deductive and inductive reasoning, an ability traditionally associated with frontal lobe involvement. Of the comparisons performed between subgroups of the patients and normal controls concerning disease duration, disease onset and predominant involvement of the left and/or right hemisphere, significant differences were found between patients with earlier disease onset and normal controls and between bilaterally affected patients and normal controls, demonstrating an additive effect of lateralization on reasoning ability.

  15. Varieties of clinical reasoning.

    PubMed

    Bolton, Jonathan W

    2015-06-01

    Clinical reasoning comprises a variety of different modes of inference. The modes that are practiced will be influenced by the sociological characteristics of the clinical settings and the tasks to be performed by the clinician. This article presents C.S. Peirce's typology of modes of inference: deduction, induction and abduction. It describes their differences and their roles as stages in scientific argument, and then applies the typology to reasoning in typical clinical settings. Abduction is less commonly taught or discussed than induction and deduction. However, it is a common mode of inference in clinical settings, especially when the clinician must try to make sense of a surprising phenomenon. Whether abduction is followed up with deductive and inductive verification is strongly influenced by situational constraints and the cognitive and psychological stamina of the clinician. Recognizing the inevitability of abduction in clinical practice and its value to discovery is important to an accurate understanding of clinical reasoning. © 2015 John Wiley & Sons, Ltd.

  16. A cascaded Schwarz converter for high frequency power distribution

    NASA Technical Reports Server (NTRS)

    Ray, Biswajit; Stuart, Thomas A.

    1988-01-01

    It is shown that two Schwarz converters in cascade provide a very reliable 20-kHz source that features zero current commutation, constant frequency, and fault-tolerant operation, meeting requirements for spacecraft applications. A steady-state analysis of the converter is presented, and equations for the steady-state performance are derived. Fault-current limiting is discussed. Experimental results are presented for a 900-W version, which has been successfully tested under no-load, full-load, and short-circuit conditions.

  17. The development of an interim generalized gate logic software simulator

    NASA Technical Reports Server (NTRS)

    Mcgough, J. G.; Nemeroff, S.

    1985-01-01

    A proof-of-concept computer program called IGGLOSS (Interim Generalized Gate Logic Software Simulator) was developed and is discussed. The simulator engine was designed to perform stochastic estimation of self-test coverage (fault-detection latency times) of digital computers or systems. A major attribute of IGGLOSS is its high-speed simulation: 9.5 million gates/CPU-second for nonfaulted circuits and 4.4 million gates/CPU-second for faulted circuits on a VAX 11/780 host computer.

  18. Validation of Helicopter Gear Condition Indicators Using Seeded Fault Tests

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula; Brandon, E. Bruce

    2013-01-01

    A "seeded fault test" in support of a rotorcraft condition based maintenance program (CBM), is an experiment in which a component is tested with a known fault while health monitoring data is collected. These tests are performed at operating conditions comparable to operating conditions the component would be exposed to while installed on the aircraft. Performance of seeded fault tests is one method used to provide evidence that a Health Usage Monitoring System (HUMS) can replace current maintenance practices required for aircraft airworthiness. Actual in-service experience of the HUMS detecting a component fault is another validation method. This paper will discuss a hybrid validation approach that combines in service-data with seeded fault tests. For this approach, existing in-service HUMS flight data from a naturally occurring component fault will be used to define a component seeded fault test. An example, using spiral bevel gears as the targeted component, will be presented. Since the U.S. Army has begun to develop standards for using seeded fault tests for HUMS validation, the hybrid approach will be mapped to the steps defined within their Aeronautical Design Standard Handbook for CBM. This paper will step through their defined processes, and identify additional steps that may be required when using component test rig fault tests to demonstrate helicopter CI performance. The discussion within this paper will provide the reader with a better appreciation for the challenges faced when defining a seeded fault test for HUMS validation.

  19. Molecular implementation of simple logic programs.

    PubMed

    Ran, Tom; Kaplan, Shai; Shapiro, Ehud

    2009-10-01

    Autonomous programmable computing devices made of biomolecules could interact with a biological environment and be used in future biological and medical applications. Biomolecular implementations of finite automata and logic gates have already been developed. Here, we report an autonomous programmable molecular system based on the manipulation of DNA strands that is capable of performing simple logical deductions. Using molecular representations of facts such as Man(Socrates) and rules such as Mortal(X) <-- Man(X) (Every Man is Mortal), the system can answer molecular queries such as Mortal(Socrates)? (Is Socrates Mortal?) and Mortal(X)? (Who is Mortal?). This biomolecular computing system compares favourably with previous approaches in terms of expressive power, performance and precision. A compiler translates facts, rules and queries into their molecular representations and subsequently operates a robotic system that assembles the logical deductions and delivers the result. This prototype is the first simple programming language with a molecular-scale implementation.
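
    In software terms, the deduction the molecular system performs is forward chaining over simple Horn rules. A minimal sketch in Python mirroring the Socrates example; the encoding of facts and rules is invented for illustration, not the paper's molecular representation.

      # Facts are (predicate, argument) pairs; rules derive head(X) from body(X).
      facts = {("Man", "Socrates")}
      rules = [(("Mortal",), ("Man",))]   # Mortal(X) <- Man(X)

      changed = True
      while changed:                      # forward chaining to a fixed point
          changed = False
          for (head,), (body,) in rules:
              for pred, arg in list(facts):
                  if pred == body and (head, arg) not in facts:
                      facts.add((head, arg))
                      changed = True

      print(("Mortal", "Socrates") in facts)           # Mortal(Socrates)? -> True
      print([a for p, a in facts if p == "Mortal"])    # Mortal(X)? -> ['Socrates']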

  20. An Integrated Architecture for On-Board Aircraft Engine Performance Trend Monitoring and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.

    2010-01-01

    Aircraft engine performance trend monitoring and gas path fault diagnostics are closely related technologies that assist operators in managing the health of their gas turbine engine assets. Trend monitoring is the process of monitoring the gradual performance change that an aircraft engine will naturally incur over time due to turbomachinery deterioration, while gas path diagnostics is the process of detecting and isolating the occurrence of any faults impacting engine flow-path performance. Today, performance trend monitoring and gas path fault diagnostic functions are performed by a combination of on-board and off-board strategies. On-board engine control computers contain logic that monitors for anomalous engine operation in real-time. Off-board ground stations are used to conduct fleet-wide engine trend monitoring and fault diagnostics based on data collected from each engine each flight. Continuing advances in avionics are enabling the migration of portions of the ground-based functionality on-board, giving rise to more sophisticated on-board engine health management capabilities. This paper reviews the conventional engine performance trend monitoring and gas path fault diagnostic architecture commonly applied today, and presents a proposed enhanced on-board architecture for future applications. The enhanced architecture gains real-time access to an expanded quantity of engine parameters, and provides advanced on-board model-based estimation capabilities. The benefits of the enhanced architecture include the real-time continuous monitoring of engine health, the early diagnosis of fault conditions, and the estimation of unmeasured engine performance parameters. A future vision to advance the enhanced architecture is also presented and discussed.

  1. 5 CFR 846.102 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... EMPLOYEES RETIREMENT SYSTEM-ELECTIONS OF COVERAGE General Provisions § 846.102 Definitions. In this part... subject to both CSRS deductions (or deductions under another retirement system for Federal employees if such service is creditable under CSRS) and social security deductions as a result of the Social...

  2. Real-time closed-loop simulation and upset evaluation of control systems in harsh electromagnetic environments

    NASA Technical Reports Server (NTRS)

    Belcastro, Celeste M.

    1989-01-01

    Digital control systems for applications such as aircraft avionics and multibody systems must maintain adequate control integrity in adverse as well as nominal operating conditions. For example, control systems for advanced aircraft, and especially those with relaxed static stability, will be critical to flight and will, therefore, have very high reliability specifications which must be met regardless of operating conditions. In addition, multibody systems such as robotic manipulators performing critical functions must have control systems capable of robust performance in any operating environment in order to complete the assigned task reliably. Severe operating conditions for electronic control systems can result from electromagnetic disturbances caused by lightning, high energy radio frequency (HERF) transmitters, and nuclear electromagnetic pulses (NEMP). For this reason, techniques must be developed to evaluate the integrity of the control system in adverse operating environments. The most difficult and elusive perturbations to computer-based control systems that can be caused by an electromagnetic environment (EME) are functional error modes that involve no component damage. These error modes are collectively known as upset, can occur simultaneously in all of the channels of a redundant control system, and are software dependent. Upset studies performed to date have not addressed the assessment of fault tolerant systems and do not involve the evaluation of a control system operating in closed loop with the plant. A methodology for performing a real-time simulation of the closed-loop dynamics of a fault tolerant control system with a simulated plant operating in an electromagnetically harsh environment is presented. In particular, considerations for performing upset tests on the controller are discussed. Some of these considerations are the generation and coupling of analog signals representative of electromagnetic disturbances to a control system under test, analog data acquisition, and digital data acquisition from fault tolerant systems. In addition, a case study of an upset test methodology for a fault tolerant electromagnetic aircraft engine control system is presented.

  3. An Analysis of Failure Handling in Chameleon, A Framework for Supporting Cost-Effective Fault Tolerant Services

    NASA Technical Reports Server (NTRS)

    Haakensen, Erik Edward

    1998-01-01

    The desire for low-cost reliable computing is increasing. Most current fault tolerant computing solutions are not very flexible, i.e., they cannot adapt to the reliability requirements of newly emerging applications in business, commerce, and manufacturing. It is important that users have a flexible, reliable platform to support both critical and noncritical applications. Chameleon, under development at the Center for Reliable and High-Performance Computing at the University of Illinois, is a software framework for supporting cost-effective, adaptable, networked fault tolerant services. This thesis details a simulation of fault injection, detection, and recovery in Chameleon. The simulation was written in C++ using the DEPEND simulation library. The results obtained from the simulation included the amount of overhead incurred by the fault detection and recovery mechanisms supported by Chameleon. In addition, information was gained about fault scenarios from which Chameleon cannot recover. The results of the simulation showed that both critical and noncritical applications can be executed in the Chameleon environment with a fairly small amount of overhead. No single point of failure from which Chameleon could not recover was found. Chameleon was also found to be capable of recovering from several multiple-failure scenarios.

  4. A Doppler Transient Model Based on the Laplace Wavelet and Spectrum Correlation Assessment for Locomotive Bearing Fault Diagnosis

    PubMed Central

    Shen, Changqing; Liu, Fang; Wang, Dong; Zhang, Ao; Kong, Fanrang; Tse, Peter W.

    2013-01-01

    The condition of locomotive bearings, which are essential components in trains, is crucial to train safety. The Doppler effect significantly distorts acoustic signals during high movement speeds, substantially increasing the difficulty of monitoring locomotive bearings online. In this study, a new Doppler transient model based on the acoustic theory and the Laplace wavelet is presented for the identification of fault-related impact intervals embedded in acoustic signals. An envelope spectrum correlation assessment is conducted between the transient model and the real fault signal in the frequency domain to optimize the model parameters. The proposed method can identify the parameters used for simulated transients (periods in simulated transients) from acoustic signals. Thus, localized bearing faults can be detected successfully based on identified parameters, particularly period intervals. The performance of the proposed method is tested on a simulated signal suffering from the Doppler effect. In addition, the proposed method is used to analyze real acoustic signals of locomotive bearings with inner race and outer race faults, respectively. The results confirm that the periods between the transients, which represent locomotive bearing fault characteristics, can be detected successfully. PMID:24253191
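
    The template-matching core of the approach can be sketched by correlating a Laplace-wavelet transient against a signal to locate fault-related impacts. The sketch below uses invented frequency, damping and signal parameters, and omits the Doppler correction and envelope-spectrum optimization of the paper.

      import numpy as np

      fs = 10_000                                       # sample rate (Hz), invented
      t = np.arange(0, 0.01, 1 / fs)                    # 10 ms template support
      zeta, f = 0.05, 2_000.0                           # damping ratio and frequency, invented
      # Laplace wavelet: an exponentially damped sinusoid starting at t = 0.
      wavelet = np.exp(-zeta / np.sqrt(1 - zeta**2) * 2 * np.pi * f * t) * np.sin(2 * np.pi * f * t)

      sig = np.zeros(fs // 10)                          # 100 ms of synthetic signal
      for k in range(5):                                # impacts every 20 ms (a 50 Hz fault rate)
          start = k * (fs // 50)
          sig[start:start + wavelet.size] += wavelet
      sig += 0.1 * np.random.default_rng(2).standard_normal(sig.size)

      corr = np.correlate(sig, wavelet, mode="valid")   # template correlation
      print(np.flatnonzero(corr > 0.8 * corr.max()) / fs)   # match times near 0, 0.02, 0.04, ... s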

  6. Experience of automation failures in training: effects on trust, automation bias, complacency and performance.

    PubMed

    Sauer, Juergen; Chavaillaz, Alain; Wastell, David

    2016-06-01

    This work examined the effects of operators' exposure to various types of automation failures in training. Forty-five participants were trained for 3.5 h on a simulated process control environment. During training, participants either experienced a fully reliable, automatic fault repair facility (i.e. faults detected and correctly diagnosed), a misdiagnosis-prone one (i.e. faults detected but not correctly diagnosed) or a miss-prone one (i.e. faults not detected). One week after training, participants were tested for 3 h, experiencing two types of automation failures (misdiagnosis, miss). The results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system. Operator errors resulting from automation bias were much higher when automation misdiagnosed a fault than when it missed one. Differences in trust levels that were instilled by the different training experiences disappeared during the testing session. Practitioner Summary: The experience of automation failures during training has some consequences. A greater potential for operator errors may be expected when an automatic system failed to diagnose a fault than when it failed to detect one.

  7. Strength evolution of simulated carbonate-bearing faults: The role of normal stress and slip velocity

    NASA Astrophysics Data System (ADS)

    Mercuri, Marco; Scuderi, Marco Maria; Tesei, Telemaco; Carminati, Eugenio; Collettini, Cristiano

    2018-04-01

    A great number of earthquakes occur within thick carbonate sequences in the shallow crust. At the same time, carbonate fault rocks exhumed from a depth < 6 km (i.e., from seismogenic depths) exhibit the coexistence of structures related to brittle (i.e., cataclasis) and ductile deformation processes (i.e., pressure-solution and granular plasticity). We performed friction experiments on water-saturated simulated carbonate-bearing faults for a wide range of normal stresses (from 5 to 120 MPa) and slip velocities (from 0.3 to 100 μm/s). At high normal stresses (σn > 20 MPa) fault gouges undergo strain weakening, which is more pronounced at slow slip velocities and causes a significant reduction of frictional strength, from μ = 0.7 to μ = 0.47. Microstructural analyses show that fault gouge weakening is driven by deformation accommodated by cataclasis and pressure-insensitive deformation processes (pressure solution and granular plasticity) that become more efficient at slow slip velocity. The reduction in frictional strength caused by the strain-weakening behaviour promoted by the activation of pressure-insensitive deformation might play a significant role in the mechanics of carbonate-bearing faults.

  8. Fault tree models for fault tolerant hypercube multiprocessors

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Tuazon, Jezus O.

    1991-01-01

    Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.

  9. 3D Dynamic Rupture Simulations along Dipping Faults, with a focus on the Wasatch Fault Zone, Utah

    NASA Astrophysics Data System (ADS)

    Withers, K.; Moschetti, M. P.

    2017-12-01

    We study dynamic rupture and ground motion from dip-slip faults in regions that have high seismic hazard, such as the Wasatch fault zone, Utah. Previous numerical simulations have modeled deterministic ground motion along segments of this fault in the heavily populated regions near Salt Lake City but were restricted to low frequencies (up to 1 Hz). We seek to better understand the rupture process and assess broadband ground motions and variability from the Wasatch Fault Zone by extending deterministic ground motion prediction to higher frequencies (up to 5 Hz). We perform simulations along a dipping normal fault (40 x 20 km along strike and width, respectively) with characteristics derived from geologic observations to generate a suite of ruptures larger than Mw 6.5. This approach utilizes dynamic simulations (fully physics-based models, where the initial stress drop and friction law are imposed) using a summation-by-parts (SBP) method. The simulations include rough-fault topography following a self-similar fractal distribution (over length scales from 100 m to the size of the fault) in addition to off-fault plasticity. Energy losses from heat and other mechanisms, modeled as anelastic attenuation, are also included, as well as free-surface topography, which can significantly affect ground motion patterns. We compare the effects that material structure and both rate-and-state and slip-weakening friction laws have on rupture propagation. The simulations show reduced slip and moment release in the near surface with the inclusion of plasticity, agreeing better with observations of shallow slip deficit. Long-wavelength fault geometry imparts a non-uniform stress distribution along both dip and strike, influencing the preferred rupture direction and hypocenter location, which is potentially important for seismic hazard estimation.

  10. A Dynamic Finite Element Method for Simulating the Physics of Faults Systems

    NASA Astrophysics Data System (ADS)

    Saez, E.; Mora, P.; Gross, L.; Weatherley, D.

    2004-12-01

    We introduce a dynamic Finite Element method using a novel high-level scripting language to describe the physical equations, boundary conditions and time integration scheme. The library we use is the parallel Finley library: a finite element kernel library designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208-processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault), with elastic behavior. Stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time, using a Verlet scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models. We adapt the 2D model for simulating the dynamics of parallel fault systems to the Finite Element method. The approach uses a frictional relation along faults that is slip and slip-rate dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. To illustrate the new Finite Element model, single and multi-fault simulation examples are presented.
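
    The ingredients of the scheme, explicit time integration plus a slip-dependent friction law, can be illustrated on a single spring-loaded slider block, a drastically reduced analogue of the fault model. All parameter values and the crude stick/slip logic in this sketch are invented for illustration:

      import numpy as np

      m, k_spring, v_load = 1.0, 50.0, 1e-2     # mass, loading spring stiffness, driving rate
      mu_s, mu_d, Dc, N = 0.6, 0.4, 1e-3, 10.0  # static/dynamic friction, slip distance, normal force

      def mu(slip):                             # linear slip-weakening friction law
          return mu_d if slip >= Dc else mu_s - (mu_s - mu_d) * slip / Dc

      x, v, slip, dt = 0.0, 0.0, 0.0, 1e-4
      for step in range(200_000):
          load = k_spring * (v_load * step * dt - x)    # spring force from steady loading
          a = 0.0
          if abs(load) > mu(slip) * N or v != 0.0:      # block starts or continues slipping
              a = (load - np.sign(load) * mu(slip) * N) / m
          x_new = x + v * dt + 0.5 * a * dt * dt        # explicit Verlet-style position update
          slip += abs(x_new - x)
          v += a * dt
          if v < 0.0:                                   # crude arrest condition
              v = 0.0
          x = x_new

      print(x, slip)                                    # final position and accumulated slip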

  11. Contrasting frictional behaviour of fault gouges containing Mg-rich phyllosilicates

    NASA Astrophysics Data System (ADS)

    Sanchez Roa, C.; Faulkner, D.; Jimenez Millan, J.; Nieto, F.

    2015-12-01

    The clay mineralogy of fault gouges has important implications for the frictional properties and stability of fault planes. We studied the specific case of the Galera fault zone, where fault gouges containing Mg-rich phyllosilicates appear as hydrothermal deposits related to high-salinity fluids enriched in Mg2+. These deposits are dominated by sepiolite and palygorskite, both fibrous clay minerals with a composition similar to Mg-smectite. The frictional strengths of sepiolite and palygorskite have not yet been determined; however, as they are part of the clay mineral group, it has been assumed that their frictional behaviour would be in line with platy clay minerals. We performed frictional sliding experiments on powdered pure standards and fault rocks in order to establish the frictional behaviour of sepiolite and palygorskite, using a triaxial deformation apparatus with a servo-controlled axial loading system and a fluid pressure pump. Friction coefficients for palygorskite and sepiolite as monomineralic samples were found to be 0.65 to 0.7 in dry experiments and 0.45 to 0.5 in water-saturated experiments. Although these fibrous minerals are part of the phyllosilicate group, they show higher friction coefficients, and their mechanical behaviour is less stable than that of platy clay minerals. This difference is a consequence of their stronger structural framework and the discontinuity of water layers. Our results present a contrast in mechanical behaviour between Mg-rich fibrous and platy clay minerals in fault gouges, where smectite is known to considerably reduce friction coefficients and to increase the stability of the fault plane, leading to creeping processes. Transformations between saponite and sepiolite have been previously observed and could modify the deformation regime of a fault zone. Constraining the stability conditions and possible mineral reactions or transformations in fault gouges could help us understand the general role of clay minerals in fault stability.

  12. A testing-coverage software reliability model considering fault removal efficiency and error generation.

    PubMed

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency, combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate commonly changes over time; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e., they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might remain, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to model fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs on three sets of real failure data using five criteria. The results show that the model gives better fitting and predictive performance.
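
    The NHPP starting point is easy to make concrete: the classic Goel-Okumoto mean value function m(t) = a(1 - exp(-b t)) fit to cumulative fault counts. The sketch below fits that baseline model, not the proposed model (which adds coverage, removal efficiency and error generation), to invented failure data:

      import numpy as np
      from scipy.optimize import curve_fit

      # Goel-Okumoto mean value function: expected cumulative faults by time t.
      def m(t, a, b):
          return a * (1.0 - np.exp(-b * t))

      t = np.arange(1, 11, dtype=float)   # test weeks (invented)
      faults = np.array([12, 21, 28, 33, 37, 40, 42, 44, 45, 46], dtype=float)

      (a_hat, b_hat), _ = curve_fit(m, t, faults, p0=(50.0, 0.3))
      print(a_hat, b_hat)                 # total expected faults and detection rate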

  13. Fault-tolerant quantum computation with nondeterministic entangling gates

    NASA Astrophysics Data System (ADS)

    Auger, James M.; Anwar, Hussain; Gimeno-Segovia, Mercedes; Stace, Thomas M.; Browne, Dan E.

    2018-03-01

    Performing entangling gates between physical qubits is necessary for building a large-scale universal quantum computer, but in some physical implementations—for example, those that are based on linear optics or networks of ion traps—entangling gates can only be implemented probabilistically. In this work, we study the fault-tolerant performance of a topological cluster state scheme with local nondeterministic entanglement generation, where failed entangling gates (which correspond to bonds on the lattice representation of the cluster state) lead to a defective three-dimensional lattice with missing bonds. We present two approaches for dealing with missing bonds; the first is a nonadaptive scheme that requires no additional quantum processing, and the second is an adaptive scheme in which qubits can be measured in an alternative basis to effectively remove them from the lattice, hence eliminating their damaging effect and leading to better threshold performance. We find that a fault-tolerance threshold can still be observed with a bond-loss rate of 6.5% for the nonadaptive scheme, and a bond-loss rate as high as 14.5% for the adaptive scheme.

  14. Non-negative Matrix Factorization and Co-clustering: A Promising Tool for Multi-tasks Bearing Fault Diagnosis

    NASA Astrophysics Data System (ADS)

    Shen, Fei; Chen, Chao; Yan, Ruqiang

    2017-05-01

    Classical bearing fault diagnosis methods, being designed for one specific task, always pay attention to the effectiveness of the extracted features and the final diagnostic performance. However, most of these approaches suffer from inefficiency when multiple tasks exist, especially in a real-time diagnostic scenario. A fault diagnosis method based on Non-negative Matrix Factorization (NMF) and a Co-clustering strategy is proposed to overcome this limitation. Firstly, high-dimensional matrices are constructed using Short-Time Fourier Transform (STFT) features, where the dimension of each matrix equals the number of target tasks. Then, the NMF algorithm is carried out to obtain different components in each dimension direction through optimized matching criteria, such as Euclidean distance and divergence distance. Finally, a Co-clustering technique based on information entropy is utilized to classify each component. To verify the effectiveness of the proposed approach, a series of bearing data sets were analysed in this research. The tests indicated that although the single-task diagnostic performance is comparable to traditional clustering methods such as the K-means algorithm and the Gaussian Mixture Model, the accuracy and computational efficiency in multi-task fault diagnosis are improved.
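
    The front end of the method, factorizing a non-negative STFT feature matrix, can be sketched with standard tools; the co-clustering step and the multi-task matrix construction are omitted here, and the signal is synthetic.

      import numpy as np
      from scipy.signal import stft
      from sklearn.decomposition import NMF

      fs = 12_000                                        # sample rate (Hz), invented
      t = np.arange(fs) / fs                             # 1 s of synthetic vibration
      sig = np.sin(2 * np.pi * 157 * t) + 0.5 * np.sin(2 * np.pi * 987 * t)
      sig += 0.1 * np.random.default_rng(3).standard_normal(sig.size)

      _, _, Z = stft(sig, fs=fs, nperseg=256)            # time-frequency representation
      V = np.abs(Z)                                      # non-negative spectrogram matrix
      W = NMF(n_components=2, init="nndsvd", max_iter=400).fit_transform(V)
      print(W.shape)                                     # (freq bins, components)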

  15. A Three-Dimensional Receiver Operator Characteristic Surface Diagnostic Metric

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.

    2011-01-01

    Receiver Operator Characteristic (ROC) curves are commonly applied as metrics for quantifying the performance of binary fault detection systems. An ROC curve provides a visual representation of a detection system's True Positive Rate versus False Positive Rate sensitivity as the detection threshold is varied. The area under the curve provides a measure of fault detection performance independent of the applied detection threshold. While the standard ROC curve is well suited for quantifying binary fault detection performance, it is not suitable for quantifying the classification performance of multi-fault classification problems. Furthermore, it does not provide a measure of diagnostic latency. To address these shortcomings, a novel three-dimensional receiver operator characteristic (3D ROC) surface metric has been developed. This is done by generating and applying two separate curves: the standard ROC curve reflecting fault detection performance, and a second curve reflecting fault classification performance. A third dimension, diagnostic latency, is added, giving rise to 3D ROC surfaces. Applying numerical integration techniques, the volumes under and between the surfaces are calculated to produce metrics of the diagnostic system's detection and classification performance. This paper describes the 3D ROC surface metric in detail and presents an example of its application for quantifying the performance of aircraft engine gas path diagnostic methods. Metric limitations and potential enhancements are also discussed.
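
    The two-dimensional building block of the metric is easy to reproduce: sweep a detection threshold, trace the ROC curve, and numerically integrate the area under it; adding a classification curve and a latency axis yields the 3D surfaces and volumes. A sketch with synthetic detector outputs:

      import numpy as np

      rng = np.random.default_rng(4)
      healthy = rng.normal(0.0, 1.0, 1000)            # detector output, no fault
      faulty = rng.normal(2.0, 1.0, 1000)             # detector output, fault present

      thresholds = np.linspace(-4, 6, 200)
      tpr = np.array([(faulty > th).mean() for th in thresholds])
      fpr = np.array([(healthy > th).mean() for th in thresholds])

      # Trapezoid rule for area under the curve; fpr decreases with threshold,
      # so the differences are negated to integrate left to right.
      auc = np.sum(0.5 * (tpr[1:] + tpr[:-1]) * -np.diff(fpr))
      print(round(auc, 3))                            # ~0.92 for a 2-sigma separation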

  16. Strategies for Teaching Elementary and Junior High Students.

    ERIC Educational Resources Information Center

    Consuegra, Gerard F.

    1980-01-01

    Discusses the applications of Piaget's theory of cognitive development to elementary and junior high school science teaching. Topics include planning concrete experiences, inductive and hypothetical deductive reasoning, measurement concepts, combinatorial logic, scientific experimentation and reflexive thinking. (SA)

  17. Procedural errors in air traffic control: effects of traffic density, expertise, and automation.

    PubMed

    Di Nocera, Francesco; Fabrizi, Roberto; Terenzi, Michela; Ferlazzo, Fabio

    2006-06-01

    Air traffic management requires operators to frequently shift between multiple tasks and/or goals with different levels of accomplishment. Procedural errors can occur when a controller completes one of the tasks before the entire operation has been finished. The present study had two goals: first, to verify the occurrence of post-completion errors in air traffic control (ATC) tasks; and second, to assess the effects of medium term conflict detection (MTCD) tools on performance. Eighteen military controllers performed a simulated ATC task with and without automation support (MTCD vs. manual) in high and low air traffic density conditions. During the task, which consisted of managing several simulated flights in an en-route ATC scenario, a trace suddenly disappeared "after" the operator took the aircraft in charge, "during" the management of the trace, or "before" the pilot's first contact. In the manual condition, only the fault type "during" was found to be significantly different from the other two. In contrast, in the MTCD condition, the fault type "after" generated significantly fewer errors than the fault type "before." Additionally, automation was found to affect the performance of junior controllers, whereas seniors' performance was not affected. Procedural errors can happen in ATC, but automation can mitigate this effect. The lack of benefit for the "before" fault type may be due to the fact that operators extend their reliance to a part of the task that is unsupported by the automated system.

  18. 76 FR 33814 - Proposed Collection; Comment Request for Regulation Project

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-09

    ... collection requirements related to Additional First Year Depreciation Deduction. DATES: Written comments... . SUPPLEMENTARY INFORMATION: Title: Additional First Year Depreciation Deduction. OMB Number: 1545-2207... year depreciation deduction. Section 401(b) of the TRUIRJCA amends Sec. 168(k) by adding Sec. 168(k)(5...

  19. 26 CFR 1.873-1 - Deductions allowed nonresident alien individuals.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 9 2013-04-01 2013-04-01 false Deductions allowed nonresident alien individuals... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Nonresident Aliens and Foreign Corporations § 1.873-1 Deductions allowed nonresident alien individuals. (a) General provisions—(1) Allocation of...

  20. 26 CFR 1.873-1 - Deductions allowed nonresident alien individuals.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 9 2011-04-01 2011-04-01 false Deductions allowed nonresident alien individuals... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Nonresident Aliens and Foreign Corporations § 1.873-1 Deductions allowed nonresident alien individuals. (a) General provisions—(1) Allocation of...

  1. 26 CFR 1.873-1 - Deductions allowed nonresident alien individuals.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 9 2012-04-01 2012-04-01 false Deductions allowed nonresident alien individuals... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Nonresident Aliens and Foreign Corporations § 1.873-1 Deductions allowed nonresident alien individuals. (a) General provisions—(1) Allocation of...

  2. 26 CFR 1.873-1 - Deductions allowed nonresident alien individuals.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 9 2014-04-01 2014-04-01 false Deductions allowed nonresident alien individuals... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Nonresident Aliens and Foreign Corporations § 1.873-1 Deductions allowed nonresident alien individuals. (a) General provisions—(1) Allocation of...

  3. 5 CFR 831.1003 - Deductions from pay.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Deductions from pay. 831.1003 Section 831.1003 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) RETIREMENT CSRS Offset § 831.1003 Deductions from pay. (a) Except as otherwise provided in this...

  4. 42 CFR 409.82 - Inpatient hospital deductible.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Inpatient hospital deductible. 409.82 Section 409.82 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM HOSPITAL INSURANCE BENEFITS Hospital Insurance Deductibles and Coinsurance § 409.82...

  5. 26 CFR 1.6015-0 - Table of contents.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... all qualifying joint filers. (a) In general. (b) Understatement. (c) Knowledge or reason to know. (d...) Actual knowledge. (i) In general. (A) Omitted income. (B) Deduction or credit. (1) Erroneous deductions in general. (2) Fictitious or inflated deduction. (ii) Partial knowledge. (iii) Knowledge of the...

  6. 5 CFR 831.1003 - Deductions from pay.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Deductions from pay. 831.1003 Section 831.1003 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) RETIREMENT CSRS Offset § 831.1003 Deductions from pay. (a) Except as otherwise provided in this...

  7. Fault recovery characteristics of the fault tolerant multi-processor

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1990-01-01

    The fault handling performance of the fault tolerant multiprocessor (FTMP) was investigated. Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit, on average, once every 500 faults. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. It is pointed out that these weak areas in the FTMP's design increase the probability that, for any hardware fault, a good LRU (line replaceable unit) is mistakenly disabled by the fault management software. It is concluded that fault injection can help detect and analyze the behavior of a system in the ultra-reliable regime. Although fault injection testing cannot be exhaustive, it has been demonstrated that it provides a unique capability to unmask problems and to characterize the behavior of a fault-tolerant system.

  8. Modeling the effects of argument length and validity on inductive and deductive reasoning.

    PubMed

    Rotello, Caren M; Heit, Evan

    2009-09-01

    In an effort to assess models of inductive reasoning and deductive reasoning, the authors, in 3 experiments, examined the effects of argument length and logical validity on evaluation of arguments. In Experiments 1a and 1b, participants were given either induction or deduction instructions for a common set of stimuli. Two distinct effects were observed: Induction judgments were more affected by argument length, and deduction judgments were more affected by validity. In Experiment 2, fluency was manipulated by displaying the materials in a low-contrast font, leading to increased sensitivity to logical validity. Several variants of 1-process and 2-process models of reasoning were assessed against the results. A 1-process model that assumed the same scale of argument strength underlies induction and deduction was not successful. A 2-process model that assumed separate, continuous informational dimensions of apparent deductive validity and associative strength gave the more successful account. (c) 2009 APA, all rights reserved.

  9. Roads towards fault-tolerant universal quantum computation

    NASA Astrophysics Data System (ADS)

    Campbell, Earl T.; Terhal, Barbara M.; Vuillot, Christophe

    2017-09-01

    A practical quantum computer must not merely store information, but also process it. To prevent errors introduced by noise from multiplying and spreading, a fault-tolerant computational architecture is required. Current experiments are taking the first steps toward noise-resilient logical qubits. But to convert these quantum devices from memories to processors, it is necessary to specify how a universal set of gates is performed on them. The leading proposals for doing so, such as magic-state distillation and colour-code techniques, have high resource demands. Alternative schemes, such as those that use high-dimensional quantum codes in a modular architecture, have potential benefits, but need to be explored further.

  10. Roads towards fault-tolerant universal quantum computation.

    PubMed

    Campbell, Earl T; Terhal, Barbara M; Vuillot, Christophe

    2017-09-13

    A practical quantum computer must not merely store information, but also process it. To prevent errors introduced by noise from multiplying and spreading, a fault-tolerant computational architecture is required. Current experiments are taking the first steps toward noise-resilient logical qubits. But to convert these quantum devices from memories to processors, it is necessary to specify how a universal set of gates is performed on them. The leading proposals for doing so, such as magic-state distillation and colour-code techniques, have high resource demands. Alternative schemes, such as those that use high-dimensional quantum codes in a modular architecture, have potential benefits, but need to be explored further.

  11. Source characteristics of 2000 small earthquakes nucleating on the Alto Tiberina fault system (central Italy).

    NASA Astrophysics Data System (ADS)

    Munafo, I.; Malagnini, L.; Tinti, E.; Chiaraluce, L.; Di Stefano, R.; Valoroso, L.

    2014-12-01

    The Alto Tiberina Fault (ATF) is a 60 km long east-dipping low-angle normal fault, located in a sector of the Northern Apennines (Italy) undergoing active extension since the Quaternary. The ATF has been imaged by analyzing active source seismic reflection profiles and the instrumentally recorded persistent background seismicity. The present study is an attempt to separate the contributions of source, site, and crustal attenuation, in order to focus on the mechanics of the seismic sources on the ATF, as well as on the synthetic and antithetic structures within the ATF hanging wall (i.e., the Colfiorito fault, Gubbio fault and Umbria Valley fault). In order to compute source spectra, we perform a set of regressions over the seismograms of 2000 small earthquakes (-0.8 < ML < 4) recorded between 2010 and 2014 at 50 permanent seismic stations deployed in the framework of the Alto Tiberina Near Fault Observatory project (TABOO) and equipped with three-component seismometers, three of which are located in shallow boreholes. Because we deal with some very small earthquakes, we maximize the signal-to-noise ratio (SNR) with a technique based on the analysis of peak values of bandpass-filtered time histories, in addition to the same processing performed on Fourier amplitudes. We rely on a tool called Random Vibration Theory (RVT) to switch from peak values in the time domain to Fourier spectral amplitudes. The low-frequency spectral plateaus of the source terms are used to compute moment magnitudes (Mw) of all the events, whereas a source spectral ratio technique is used to estimate the corner frequencies (Brune spectral model) of a subset of events chosen based on an analysis of the noise affecting the spectral ratios. The described approach provides highly accurate spectral parameters for localized seismicity and may be used to gain insights into the underlying mechanics of faulting and the earthquake processes.
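
    The two spectral quantities at the core of this workflow can be written down compactly. As a minimal illustration (the constants follow the standard Hanks and Kanamori moment magnitude definition; the plateau, corner frequency, and moment values below are invented for the example, not taken from the study):

    ```python
    import numpy as np

    # Brune (omega-squared) source spectrum: flat plateau omega0 below the
    # corner frequency fc, rolling off as f**-2 above it.
    def brune_spectrum(f, omega0, fc):
        return omega0 / (1.0 + (f / fc) ** 2)

    # Moment magnitude from seismic moment M0 in N*m (Hanks & Kanamori, 1979).
    def moment_magnitude(M0):
        return (2.0 / 3.0) * (np.log10(M0) - 9.1)

    M0 = 1.1e13                                   # N*m, a small event (~Mw 2.6)
    print(f"Mw = {moment_magnitude(M0):.2f}")
    f = np.logspace(-1, 2, 7)                     # 0.1 Hz to 100 Hz
    print(brune_spectrum(f, omega0=1.0, fc=8.0))  # plateau at low f, f**-2 decay
    ```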

  12. Waiving Deductibles and Copays: No Good Deed Goes Unpunished.

    PubMed

    Sacopulos, Michael J

    2015-01-01

    In an age of high deductibles, many surgical candidates are requesting that practices reduce or waive the self-pay portion of the professional fee. These requests come with a risk to the practice. Third-party payer agreements often prohibit discounting of fees to patients. Claiming breach of contract and interference with actuarial calculations, some third-party payers have sued practices for waiving fees owed by their insureds. Only by having the proper policies in place may a practice safely engage in fee reductions for patients insured by an entity with whom the practice has a contractual relationship.

  13. Active tectonics of the Seattle fault and central Puget sound, Washington - Implications for earthquake hazards

    USGS Publications Warehouse

    Johnson, S.Y.; Dadisman, S.V.; Childs, J. R.; Stanley, W.D.

    1999-01-01

    We use an extensive network of marine high-resolution and conventional industry seismic-reflection data to constrain the location, shallow structure, and displacement rates of the Seattle fault zone and crosscutting high-angle faults in the Puget Lowland of western Washington. Analysis of seismic profiles extending 50 km across the Puget Lowland from Lake Washington to Hood Canal indicates that the west-trending Seattle fault comprises a broad (4-6 km) zone of three or more south-dipping reverse faults. Quaternary sediment has been folded and faulted along all faults in the zone, but deformation is clearly most pronounced along fault A, the northernmost fault, which forms the boundary between the Seattle uplift and Seattle basin. Analysis of growth strata deposited across fault A indicates minimum Quaternary slip rates of about 0.6 mm/yr. Slip rates across the entire zone are estimated to be 0.7-1.1 mm/yr. The Seattle fault is cut into two main segments by an active, north-trending, high-angle, strike-slip fault zone with cumulative dextral displacement of about 2.4 km. Faults in this zone truncate and warp reflections in Tertiary and Quaternary strata and locally coincide with bathymetric lineaments. Cumulative slip rates on these faults may exceed 0.2 mm/yr. Assuming no other crosscutting faults, this north-trending fault zone divides the Seattle fault into 30-40-km-long western and eastern segments. Although this geometry could limit the area ruptured in some Seattle fault earthquakes, a large event ca. A.D. 900 appears to have involved both segments. Regional seismic-hazard assessments must (1) incorporate new information on fault length, geometry, and displacement rates on the Seattle fault, and (2) consider the hazard presented by the previously unrecognized, north-trending fault zone.

  14. An Integrated Architecture for Aircraft Engine Performance Monitoring and Fault Diagnostics: Engine Test Results

    NASA Technical Reports Server (NTRS)

    Rinehart, Aidan W.; Simon, Donald L.

    2015-01-01

    This paper presents a model-based architecture for performance trend monitoring and gas path fault diagnostics designed for analyzing streaming transient aircraft engine measurement data. The technique analyzes residuals between sensed engine outputs and model-predicted outputs for fault detection and isolation purposes. Diagnostic results from the application of the approach to test data acquired from an aircraft turbofan engine are presented. The approach is found to avoid false alarms when presented with nominal fault-free data. Additionally, the approach is found to successfully detect and isolate gas path seeded faults under steady-state operating scenarios, although some fault misclassifications are noted during engine transients. Recommendations for follow-on maturation and evaluation of the technique are also presented.
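
    The residual-analysis step described above reduces to a compact computation. The sketch below is a minimal illustration under our own assumptions (the function name, the 3-sigma threshold, and the synthetic seeded bias fault are not from the paper): it normalizes residuals between sensed and model-predicted outputs by their nominal scatter and raises a flag when the residual norm exceeds a threshold.

    ```python
    import numpy as np

    def residual_fault_flags(sensed, predicted, sigma, threshold=3.0):
        """Flag samples whose normalized residual exceeds a threshold.
        sensed, predicted: (n_samples, n_sensors) arrays; sigma: per-sensor
        standard deviation of nominal residuals (from fault-free data)."""
        residuals = (sensed - predicted) / sigma       # normalize per sensor
        scores = np.linalg.norm(residuals, axis=1)     # one score per sample
        return scores > threshold, scores

    # Example with synthetic data: a bias fault on sensor 0 after sample 50.
    rng = np.random.default_rng(0)
    predicted = np.zeros((100, 3))
    sensed = predicted + rng.normal(0.0, 1.0, size=(100, 3))
    sensed[50:, 0] += 6.0                              # seeded bias fault
    flags, scores = residual_fault_flags(sensed, predicted, sigma=np.ones(3))
    print(flags[:50].mean(), flags[50:].mean())        # low vs. high alarm rate
    ```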

  15. An Integrated Architecture for Aircraft Engine Performance Monitoring and Fault Diagnostics: Engine Test Results

    NASA Technical Reports Server (NTRS)

    Rinehart, Aidan W.; Simon, Donald L.

    2014-01-01

    This paper presents a model-based architecture for performance trend monitoring and gas path fault diagnostics designed for analyzing streaming transient aircraft engine measurement data. The technique analyzes residuals between sensed engine outputs and model-predicted outputs for fault detection and isolation purposes. Diagnostic results from the application of the approach to test data acquired from an aircraft turbofan engine are presented. The approach is found to avoid false alarms when presented with nominal fault-free data. Additionally, the approach is found to successfully detect and isolate gas path seeded faults under steady-state operating scenarios, although some fault misclassifications are noted during engine transients. Recommendations for follow-on maturation and evaluation of the technique are also presented.

  16. 26 CFR 1.1446-0 - Table of contents.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    .... (iv) Interest deductions. (v) Limitation on capital losses. (vi) Other deductions. (vii) Limitations... distributions. (4) Coordination with section 1445(e)(1). § 1.1446-5Tiered partnership structures. (a) In general... use of deductions and losses certified to a partnership. (ii) De minimis certificate for nonresident...

  17. 78 FR 48606 - Guidance Regarding Deferred Discharge of Indebtedness Income of Corporations and Deferred...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-09

    ... Regarding Deferred Discharge of Indebtedness Income of Corporations and Deferred Original Issue Discount... deduction of deferred original issue discount (OID) (deferred OID deductions) under section 108(i)(5)(D... original issue discount deductions of C corporations. * * * * * (b) * * * (2) * * * (iii) * * * (D...

  18. 26 CFR 1.274-5A - Substantiation requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...: (1) Traveling away from home (including meals and lodging) deductible under section 162 or 212, (2... the evidence indicated a taxpayer incurred deductible travel or entertainment expense but the exact... no deduction shall be allowed for any expenditure for travel, entertainment, or a gift unless the...

  19. Proof Construction: Adolescent Development from Inductive to Deductive Problem-Solving Strategies.

    ERIC Educational Resources Information Center

    Foltz, Carol; And Others

    1995-01-01

    Studied 100 adolescents' approaches to problem-solving proofs and reasoning competence tasks. Found that a formal level of reasoning competence is associated with a deductive approach. Results support the notion of a cognitive development progression from an inductive approach to a deductive approach. (ETB)

  20. 76 FR 64879 - Deduction for Qualified Film and Television Production Costs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-19

    ... Deduction for Qualified Film and Television Production Costs AGENCY: Internal Revenue Service (IRS... regulations relating to deductions for the costs of producing film and television productions. Those temporary... 2008, and affect taxpayers that produce films and television productions within the United States. The...

  1. 20 CFR 71.3 - Deductions from benefits.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Deductions from benefits. 71.3 Section 71.3 Employees' Benefits OFFICE OF WORKERS' COMPENSATION PROGRAMS, DEPARTMENT OF LABOR COMPENSATION FOR INJURY... JAPANESE GOVERNMENT GENERAL PROVISIONS § 71.3 Deductions from benefits. If a civilian American citizen or...

  2. 20 CFR 71.3 - Deductions from benefits.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Deductions from benefits. 71.3 Section 71.3 Employees' Benefits OFFICE OF WORKERS' COMPENSATION PROGRAMS, DEPARTMENT OF LABOR COMPENSATION FOR INJURY... JAPANESE GOVERNMENT GENERAL PROVISIONS § 71.3 Deductions from benefits. If a civilian American citizen or...

  3. 20 CFR 71.3 - Deductions from benefits.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Deductions from benefits. 71.3 Section 71.3 Employees' Benefits OFFICE OF WORKERS' COMPENSATION PROGRAMS, DEPARTMENT OF LABOR COMPENSATION FOR INJURY... JAPANESE GOVERNMENT GENERAL PROVISIONS § 71.3 Deductions from benefits. If a civilian American citizen or...

  4. 17 CFR 256.426.5 - Other deductions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... deductible before determining total income before interest charges. (b) Records shall be so maintained by...) UNIFORM SYSTEM OF ACCOUNTS FOR MUTUAL SERVICE COMPANIES AND SUBSIDIARY SERVICE COMPANIES, PUBLIC UTILITY HOLDING COMPANY ACT OF 1935 Income and Expense Accounts § 256.426.5 Other deductions. (a) This account...

  5. 37 CFR 251.73 - Deduction of costs of distribution proceedings.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... OF CONGRESS COPYRIGHT ARBITRATION ROYALTY PANEL RULES AND PROCEDURES COPYRIGHT ARBITRATION ROYALTY PANEL RULES OF PROCEDURE Royalty Fee Distribution Proceedings § 251.73 Deduction of costs of... distributions of royalty fees are made, deduct the reasonable costs incurred by the Library of Congress and the...

  6. 42 CFR 409.80 - Inpatient deductible and coinsurance: General provisions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Inpatient deductible and coinsurance: General provisions. 409.80 Section 409.80 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM HOSPITAL INSURANCE BENEFITS Hospital Insurance Deductibles and...

  7. 77 FR 45480 - Deductions for Entertainment Use of Business Aircraft

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-01

    ... Contribution Deduction A commentator suggested that the final regulations should include rules on charitable contribution deductions for the fixed costs of using aircraft for charitable purposes. These rules are outside... business, and no comments were received. Drafting Information The principal authors of these regulations...

  8. 29 CFR 541.603 - Effect of improper deductions from salary.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 541.603 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR... deductions; the number and geographic location of employees whose salary was improperly reduced; the number... classification working for the same managers responsible for the actual improper deductions. Employees in...

  9. Fault tolerant onboard packet switch architecture for communication satellites: Shared memory per beam approach

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary Jo; Quintana, Jorge A.; Soni, Nitin J.

    1994-01-01

    The NASA Lewis Research Center is developing a multichannel communication signal processing satellite (MCSPS) system which will provide low data rate, direct to user, commercial communications services. The focus of current space segment developments is a flexible, high-throughput, fault tolerant onboard information switching processor. This information switching processor (ISP) is a destination-directed packet switch which performs both space and time switching to route user information among numerous user ground terminals. Through both industry study contracts and in-house investigations, several packet switching architectures were examined. A contention-free approach, the shared memory per beam architecture, was selected for implementation. The shared memory per beam architecture, fault tolerance insertion, implementation, and demonstration plans are described.

  10. Case study: Optimizing fault model input parameters using bio-inspired algorithms

    NASA Astrophysics Data System (ADS)

    Plucar, Jan; Grunt, Ondřej; Zelinka, Ivan

    2017-07-01

    We present a case study that demonstrates a bio-inspired approach to finding optimal parameters for a GSM fault model. The model is constructed using a Petri net approach and represents a dynamic model of the GSM network environment in the suburban areas of the city of Ostrava (Czech Republic). We were faced with the task of finding optimal parameters for an application that requires a high volume of data transfers between the application itself and secure servers located in a datacenter. In order to find the optimal set of parameters we employ bio-inspired algorithms such as Differential Evolution (DE) and the Self Organizing Migrating Algorithm (SOMA). In this paper we present the use of these algorithms, compare their results, and judge their performance in fault probability mitigation.
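
    As a sketch of how such a search is wired up (the objective function below is a hypothetical stand-in: the real one would run the Petri-net GSM fault model, and the parameter names and bounds are our own assumptions), SciPy's stock Differential Evolution can drive the optimization directly:

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    # Hypothetical stand-in for the fault model: given candidate parameters
    # (e.g., retry interval and packet size), return the simulated fault
    # probability to be minimized. A quadratic bowl replaces the real
    # Petri-net simulation here.
    def fault_probability(params):
        retry_interval, packet_size = params
        return (retry_interval - 1.5) ** 2 + 0.1 * (packet_size - 512) ** 2 / 512

    bounds = [(0.1, 10.0),    # retry interval in seconds (assumed range)
              (64, 2048)]     # packet size in bytes (assumed range)

    result = differential_evolution(fault_probability, bounds, seed=1, tol=1e-8)
    print(result.x, result.fun)  # parameters minimizing the fault probability
    ```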

  11. Probabilistic seismic hazard analyses for ground motions and fault displacement at Yucca Mountain, Nevada

    USGS Publications Warehouse

    Stepp, J.C.; Wong, I.; Whitney, J.; Quittmeyer, R.; Abrahamson, N.; Toro, G.; Young, S.R.; Coppersmith, K.; Savy, J.; Sullivan, T.

    2001-01-01

    Probabilistic seismic hazard analyses were conducted to estimate both ground motion and fault displacement hazards at the potential geologic repository for spent nuclear fuel and high-level radioactive waste at Yucca Mountain, Nevada. The study is believed to be the largest and most comprehensive analysis ever conducted for ground-shaking hazard and is a first-of-a-kind assessment of probabilistic fault displacement hazard. The major emphasis of the study was on the quantification of epistemic uncertainty. Six teams of three experts performed seismic source and fault displacement evaluations, and seven individual experts provided ground motion evaluations. State-of-the-practice expert elicitation processes involving structured workshops, consensus identification of parameters and issues to be evaluated, common sharing of data and information, and open exchanges about the basis for preliminary interpretations were implemented. Ground-shaking hazard was computed for a hypothetical rock outcrop at -300 m, the depth of the potential waste emplacement drifts, at the designated design annual exceedance probabilities of 10^-3 and 10^-4. The fault displacement hazard was calculated at the design annual exceedance probabilities of 10^-4 and 10^-5.
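
    Design values such as these are read off the computed hazard curves. Below is a minimal sketch (the curve values are invented for illustration; only the log-log interpolation step reflects common practice) of extracting the ground motion at the quoted annual exceedance probabilities:

    ```python
    import numpy as np

    # Illustrative hazard curve: annual exceedance frequency for a set of
    # peak ground accelerations (values are made up for the example).
    pga = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])          # g
    annual_freq = np.array([2e-2, 6e-3, 1.5e-3, 3e-4, 5e-5, 6e-6])

    def motion_at(prob):
        # interpolate log(pga) as a function of log(annual frequency);
        # np.interp needs the abscissa increasing, hence the reversal
        return np.exp(np.interp(np.log(prob),
                                np.log(annual_freq[::-1]),
                                np.log(pga[::-1])))

    for p in (1e-3, 1e-4):
        print(f"PGA at {p:.0e}/yr: {motion_at(p):.2f} g")
    ```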

  12. Latent component-based gear tooth fault detection filter using advanced parametric modeling

    NASA Astrophysics Data System (ADS)

    Ettefagh, M. M.; Sadeghi, M. H.; Rezaee, M.; Chitsaz, S.

    2009-10-01

    In this paper, a new parametric model-based filter is proposed for gear tooth fault detection. The design of the filter consists of identifying the most appropriate latent component (LC) of the undamaged gearbox signal by analyzing the instant modules (IMs) and instant frequencies (IFs), and then using the component with the lowest IM as the proposed filter output for detecting faults of the gearbox. The filter parameters are estimated by using LC theory, in which an advanced parametric modeling method has been implemented. The proposed method is applied to signals extracted from a simulated gearbox to detect simulated gear faults. In addition, the method is used for quality inspection of the production Nissan-Junior vehicle gearbox, by detecting gear profile errors on an industrial test bed. For evaluation purposes, the proposed method is compared with previous parametric TAR/AR-based filters, in which the parametric model residual is taken as the filter output and Yule-Walker and Kalman filters are implemented to estimate the parameters. The results confirm the high performance of the newly proposed fault detection method.
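
    The AR-residual baseline that the paper compares against is easy to sketch. The fragment below is our own minimal stand-in, not the paper's LC filter: it fits an AR(p) model to a healthy signal by least squares (in place of Yule-Walker) and uses the one-step prediction residual on new data as the filter output, where elevated residual energy suggests a fault.

    ```python
    import numpy as np

    def fit_ar(x, p):
        """Least-squares fit of AR(p) coefficients to signal x."""
        X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
        return np.linalg.lstsq(X, x[p:], rcond=None)[0]

    def ar_residual(x, coeffs):
        """One-step prediction residual of x under the fitted AR model."""
        p = len(coeffs)
        X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
        return x[p:] - X @ coeffs

    rng = np.random.default_rng(3)
    healthy = np.sin(2 * np.pi * 0.05 * np.arange(2000)) + 0.1 * rng.normal(size=2000)
    faulty = healthy.copy()
    faulty[::200] += 1.5                  # periodic impacts from a tooth fault

    coeffs = fit_ar(healthy, p=8)
    print(np.std(ar_residual(healthy, coeffs)),   # small: model fits well
          np.std(ar_residual(faulty, coeffs)))    # larger: impacts leak through
    ```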

  13. Microstructural characterization of high-manganese austenitic steels with different stacking fault energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sato, Shigeo, E-mail: s.sato@imr.tohoku.ac.jp; Kwon, Eui-Pyo; Imafuku, Muneyuki

    Microstructures of tensile-deformed high-manganese austenitic steels exhibiting twinning-induced plasticity were analyzed by electron backscatter diffraction pattern observation and X-ray diffraction measurement to examine the influence of differences in their stacking fault energies on twinning activity during deformation. The steel specimen with the low stacking fault energy of 15 mJ/m^2 had a microstructure with a higher population of mechanical twins than the steel specimen with the high stacking fault energy (25 mJ/m^2). The <111> and <100> fibers developed along the tensile axis, and mechanical twinning occurred preferentially in the <111> fiber. The Schmid factors for slip and twinning deformations can explain the origin of higher twinning activity in the <111> fiber. However, the high stacking fault energy suppresses the twinning activity even in the <111> fiber. A line profile analysis based on the X-ray diffraction data revealed the relationship between the characteristics of the deformed microstructures and the stacking fault energies of the steel specimens. Although the variation in dislocation density with the tensile deformation is not affected by the stacking fault energies, the effect of the stacking fault energies on the crystallite size refinement becomes significant with a decrease in the stacking fault energies. Moreover, the stacking fault probability, which was estimated from a peak-shift analysis of the 111 and 200 diffractions, was high for the specimen with low stacking fault energy. Regardless of the difference in the stacking fault energies of the steel specimens, the refined crystallite size has a certain correlation with the stacking fault probability, indicating that whether the deformation-induced crystallite-size refinement occurs depends directly on the stacking fault probability rather than on the stacking fault energies in the present steel specimens. Highlights: effects of stacking fault energies on deformed steel microstructures were studied; correlations between texture and the occurrence of mechanical twinning are discussed; evolutions of dislocations and crystallite size are analyzed by line profile analysis.

  14. Seismic measurements of the internal properties of fault zones

    USGS Publications Warehouse

    Mooney, W.D.; Ginzburg, A.

    1986-01-01

    The internal properties within and adjacent to fault zones are reviewed, principally on the basis of laboratory, borehole, and seismic refraction and reflection data. The deformation of rocks by faulting ranges from intragrain microcracking to severe alteration. Saturated microcracked and mildly fractured rocks do not exhibit a significant reduction in velocity but, from borehole measurements, densely fractured rocks do show significantly reduced velocities, the amount of reduction generally proportional to the fracture density. Highly fractured rock and thick fault gouge along the creeping portion of the San Andreas fault are evidenced by a pronounced seismic low-velocity zone (LVZ), which is either very thin or absent along locked portions of the fault. Thus there is a correlation between fault slip behavior and seismic velocity structure within the fault zone; high pore pressure within the pronounced LVZ may be conducive to fault creep. Deep seismic reflection data indicate that crustal faults sometimes extend through the entire crust. Models of these data and geologic evidence are consistent with deep faults being composed of highly foliated, seismically anisotropic mylonites. © 1986 Birkhäuser Verlag, Basel.

  15. The use of fault reporting of medical equipment to identify latent design flaws.

    PubMed

    Flewwelling, C J; Easty, A C; Vicente, K J; Cafazzo, J A

    2014-10-01

    Poor device design that fails to adequately account for user needs, cognition, and behavior is often responsible for use errors resulting in adverse events. This poor device design is also often latent, and could be responsible for "No Fault Found" (NFF) reporting, in which medical devices sent for repair by clinical users are found to be operating as intended. Unresolved NFF reports may contribute to incident underreporting, clinical user frustration, and biomedical engineering technologist inefficacy. This study uses human factors engineering methods to investigate the relationship between NFF reporting frequency and device usability. An analysis of medical equipment maintenance data was conducted to identify devices with a high NFF reporting frequency. Subsequently, semi-structured interviews and heuristic evaluations were performed in order to identify potential usability issues. Finally, usability testing was conducted in order to validate that latent usability-related design faults result in a higher frequency of NFF reporting. The analysis of medical equipment maintenance data identified six devices with a high NFF reporting frequency. Semi-structured interviews, heuristic evaluations and usability testing revealed that usability issues caused a significant portion of the NFF reports. Other factors suspected to contribute to increased NFF reporting include accessory issues, intermittent faults and environmental issues. Usability testing conducted on three of the devices revealed 23 latent usability-related design faults. These findings demonstrate that latent usability-related design faults manifest themselves as an increase in NFF reporting and that devices containing usability-related design faults can be identified through an analysis of medical equipment maintenance data. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  16. 78 FR 39984 - Guidance Regarding Deferred Discharge of Indebtedness Income of Corporations and Deferred...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-03

    ... Guidance Regarding Deferred Discharge of Indebtedness Income of Corporations and Deferred Original Issue... cancellation of debt (COD)) income (deferred COD income) and the accelerated deduction of deferred original... the deduction of deferred original issue discount that is otherwise includible or deductible under the...

  17. Preparing for Formal Proofs in Geometry

    ERIC Educational Resources Information Center

    Johnson, Art

    2009-01-01

    One way in which geometry teachers can help students develop their reasoning is by providing proof-readiness experiences. Blum and Kirsch (1991) suggest that "preformal proofs" can help students develop deductive reasoning. Preformal proofs, which follow the basic principles of deductive reasoning, can help prepare students for formal deduction in…

  18. 26 CFR 1.584-6 - Net operating loss deduction.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 7 2010-04-01 2010-04-01 true Net operating loss deduction. 1.584-6 Section 1.584-6 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Banking Institutions § 1.584-6 Net operating loss deduction. The net...

  19. 75 FR 68799 - Medicare Program; Inpatient Hospital Deductible and Hospital and Extended Care Services...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-09

    ... 0938-AP86 Medicare Program; Inpatient Hospital Deductible and Hospital and Extended Care Services.... SUMMARY: This notice announces the inpatient hospital deductible and the hospital and extended care... extended care services in a skilled nursing facility in a benefit period. DATES: Effective Date: This...

  20. 26 CFR 1.179-3 - Carryover of disallowed deduction.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... deduction are selected by the taxpayer in the year the properties are placed in service. This selection must... no selection is made, the total carryover of disallowed deduction is apportioned equally over the... restaurant business. During 1992, ABC purchases and places in service two items of section 179 property—a...

  1. 38 CFR 8.4 - Deduction of insurance premiums from compensation, retirement pay, or pension.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...' Relief DEPARTMENT OF VETERANS AFFAIRS NATIONAL SERVICE LIFE INSURANCE Premiums § 8.4 Deduction of insurance premiums from compensation, retirement pay, or pension. The insured under a National Service life insurance policy which is not lapsed may authorize the monthly deduction of premiums from disability...

  2. 38 CFR 8.4 - Deduction of insurance premiums from compensation, retirement pay, or pension.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...' Relief DEPARTMENT OF VETERANS AFFAIRS NATIONAL SERVICE LIFE INSURANCE Premiums § 8.4 Deduction of insurance premiums from compensation, retirement pay, or pension. The insured under a National Service life insurance policy which is not lapsed may authorize the monthly deduction of premiums from disability...

  3. 20 CFR 226.35 - Deductions from regular annuity rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Deductions from regular annuity rate. 226.35... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced...

  4. 34 CFR 32.10 - Deductions process.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Deductions process. 32.10 Section 32.10 Education Office of the Secretary, Department of Education SALARY OFFSET TO RECOVER OVERPAYMENTS OF PAY OR ALLOWANCES FROM DEPARTMENT OF EDUCATION EMPLOYEES § 32.10 Deductions process. (a) Debts must be collected in...

  5. Changes Affecting Faculty Deductions.

    ERIC Educational Resources Information Center

    Hoyt, Christopher R.

    1987-01-01

    The Tax Reform Act of 1986 has brought faculty lower tax rates, but they have lost many tax deductions to which they were accustomed. The impact on higher education, the 80% limitation for meals and entertainment, travel, and the 2% adjusted gross income (AGI) floor for miscellaneous itemized deductions are discussed. (MLW)

  6. On Inference Rules of Logic-Based Information Retrieval Systems.

    ERIC Educational Resources Information Center

    Chen, Patrick Shicheng

    1994-01-01

    Discussion of relevance and the needs of the users in information retrieval focuses on a deductive object-oriented approach and suggests eight inference rules for the deduction. Highlights include characteristics of a deductive object-oriented system, database and data modeling language, implementation, and user interface. (Contains 24…

  7. 29 CFR 783.45 - Deductions from wages.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Deductions from wages. 783.45 Section 783.45 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL... TO EMPLOYEES EMPLOYED AS SEAMEN Computation of Wages and Hours § 783.45 Deductions from wages. Where...

  8. 26 CFR 1.809-6 - Modifications.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... addition to reserves for bad debts under section 166(c). However, a deduction for specific bad debts shall...) Amortizable bond premium. No deduction shall be allowed under section 171 for the amortization of bond premiums since a special deduction for such premiums is specifically taken into account under section 818(b...

  9. 20 CFR 416.722 - Circumstances under which we make a penalty deduction.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Circumstances under which we make a penalty deduction. 416.722 Section 416.722 Employees' Benefits SOCIAL SECURITY ADMINISTRATION SUPPLEMENTAL SECURITY INCOME FOR THE AGED, BLIND, AND DISABLED Reports Required Penalty Deductions § 416.722 Circumstances...

  10. 20 CFR 416.722 - Circumstances under which we make a penalty deduction.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Circumstances under which we make a penalty deduction. 416.722 Section 416.722 Employees' Benefits SOCIAL SECURITY ADMINISTRATION SUPPLEMENTAL SECURITY INCOME FOR THE AGED, BLIND, AND DISABLED Reports Required Penalty Deductions § 416.722 Circumstances...

  11. 26 CFR 1.469-7 - Treatment of self-charged items of interest income and deduction.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    .... The rules— (i) Treat certain interest income resulting from these lending transactions as passive... interest income as passive activity deductions; and (iii) Allocate the passive activity gross income and passive activity deductions resulting from this treatment among the taxpayer's activities. (2) Priority of...

  12. Experimental verification of the model for formation of double Shockley stacking faults in highly doped regions of PVT-grown 4H–SiC wafers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yu; Guo, Jianqiu; Goue, Ouloide

    Recently, we reported on the formation of overlapping rhombus-shaped stacking faults from scratches left over from chemical mechanical polishing during high temperature annealing of a PVT-grown 4H–SiC wafer. These stacking faults are restricted to highly N-doped regions of the wafer. The type of these stacking faults was determined to be Shockley stacking faults by analyzing the behavior of their area contrast using synchrotron white beam X-ray topography studies. A model was proposed to explain the formation mechanism of the rhombus-shaped stacking faults based on double Shockley fault nucleation and propagation. In this paper, we have experimentally verified this model by characterizing the configuration of the bounding partials of the stacking faults on both surfaces using synchrotron topography in back-reflection geometry. As predicted by the model, on both the Si and C faces, the leading partials bounding the rhombus-shaped stacking faults are 30° Si-core and the trailing partials are 30° C-core. Finally, using high resolution transmission electron microscopy, we have verified that the enclosed stacking fault is a double Shockley type.

  13. A testing-coverage software reliability model considering fault removal efficiency and error generation

    PubMed Central

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate commonly changes over time; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e., they seldom consider imperfect fault removal efficiency. In a practical software development process, fault removal efficiency cannot always be perfect: the failures detected might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model aiming to incorporate the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to consider the fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results show that the model gives better fitting and predictive performance. PMID:28750091
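
    To make the NHPP machinery concrete, the sketch below fits the classic Goel-Okumoto mean value function, m(t) = a(1 - e^(-bt)), to cumulative fault counts. This is only the baseline that models like the one above extend with testing coverage and removal efficiency terms, and the failure counts here are synthetic:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Goel-Okumoto NHPP mean value function: expected cumulative number of
    # faults detected by time t, with a = total fault content and b =
    # per-fault detection rate. Coverage and removal-efficiency extensions
    # are omitted in this baseline.
    def mean_value(t, a, b):
        return a * (1.0 - np.exp(-b * t))

    # Synthetic cumulative failure counts over 10 test weeks (illustrative).
    t = np.arange(1, 11, dtype=float)
    cum_faults = np.array([12, 22, 30, 36, 41, 45, 48, 50, 52, 53], dtype=float)

    (a_hat, b_hat), _ = curve_fit(mean_value, t, cum_faults, p0=(60.0, 0.2))
    print(f"estimated fault content a = {a_hat:.1f}, detection rate b = {b_hat:.2f}")
    print("predicted faults by week 15:", mean_value(15.0, a_hat, b_hat))
    ```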

  14. Children's and adults' evaluation of the certainty of deductive inferences, inductive inferences, and guesses.

    PubMed

    Pillow, Bradford H

    2002-01-01

    Two experiments investigated kindergarten through fourth-grade children's and adults' (N = 128) ability to (1) evaluate the certainty of deductive inferences, inductive inferences, and guesses; and (2) explain the origins of inferential knowledge. When judging their own cognitive state, children in first grade and older rated deductive inferences as more certain than guesses; but when judging another person's knowledge, children did not distinguish valid inferences from invalid inferences and guesses until fourth grade. By third grade, children differentiated their own deductive inferences from inductive inferences and guesses, but only adults both differentiated deductive inferences from inductive inferences and differentiated inductive inferences from guesses. Children's recognition of their own inferences may contribute to the development of knowledge about cognitive processes, scientific reasoning, and a constructivist epistemology.

  15. The Brain Network for Deductive Reasoning: A Quantitative Meta-analysis of 28 Neuroimaging Studies

    PubMed Central

    Prado, Jérôme; Chadha, Angad; Booth, James R.

    2011-01-01

    Over the course of the past decade, contradictory claims have been made regarding the neural bases of deductive reasoning. Researchers have been puzzled by apparent inconsistencies in the literature. Some have even questioned the effectiveness of the methodology used to study the neural bases of deductive reasoning. However, the idea that neuroimaging findings are inconsistent is not based on any quantitative evidence. Here, we report the results of a quantitative meta-analysis of 28 neuroimaging studies of deductive reasoning published between 1997 and 2010, combining 382 participants. Consistent areas of activation across studies were identified using the multilevel kernel density analysis method. We found that results from neuroimaging studies are more consistent than has previously been assumed. Overall, studies consistently report activations in specific regions of a left fronto-parietal system, as well as in the left basal ganglia. This brain system can be decomposed into three subsystems that are specific to particular types of deductive arguments: relational, categorical, and propositional. These dissociations explain inconsistencies in the literature. However, they are incompatible with the notion that deductive reasoning is supported by a single cognitive system relying either on visuospatial or rule-based mechanisms. Our findings provide critical insight into the cognitive organization of deductive reasoning and need to be accounted for by cognitive theories. PMID:21568632

  16. Style and rate of quaternary deformation of the Hosgri Fault Zone, offshore south-central coastal California

    USGS Publications Warehouse

    Hanson, Kathryn L.; Lettis, William R.; McLaren, Marcia; Savage, William U.; Hall, N. Timothy; Keller, Margaret A.

    2004-01-01

    The Hosgri Fault Zone is the southernmost component of a complex system of right-slip faults in south-central coastal California that includes the San Gregorio, Sur, and San Simeon Faults. We have characterized the contemporary style of faulting along the zone on the basis of an integrated analysis of a broad spectrum of data, including shallow high-resolution and deep penetration seismic reflection data; geologic and geomorphic data along the Hosgri and San Simeon Fault Zones and the intervening San Simeon/Hosgri pull-apart basin; the distribution and nature of near-coast seismicity; regional tectonic kinematics; and comparison of the Hosgri Fault Zone with worldwide strike-slip, oblique-slip, and reverse-slip fault zones. These data show that the modern Hosgri Fault Zone is a convergent right-slip (transpressional) fault having a late Quaternary slip rate of 1 to 3 mm/yr. Evidence supporting predominantly strike-slip deformation includes (1) a long, narrow, linear zone of faulting and associated deformation; (2) the presence of asymmetric flower structures; (3) kinematically consistent localized extensional and compressional deformation at releasing and restraining bends or steps, respectively, in the fault zone; (4) changes in the sense and magnitude of vertical separation both along trend of the fault zone and vertically within the fault zone; (5) strike-slip focal mechanisms along the fault trace; (6) a distribution of seismicity that delineates a high-angle fault extending through the seismogenic crust; (7) high ratios of lateral to vertical slip along the fault zone; and (8) the separation by the fault of two tectonic domains (offshore Santa Maria Basin, onshore Los Osos domain) that are undergoing contrasting styles of deformation and orientations of crustal shortening. The convergent component of slip is evidenced by the deformation of the early-late Pliocene unconformity. In characterizing the style of faulting along the Hosgri Fault Zone, we assessed alternative tectonic models by evaluating (1) the cumulative effects of multiple deformational episodes that can produce complex, difficult-to-interpret fault geometries, patterns, and senses of displacement; (2) the difficulty of imaging high-angle fault planes and horizontal fault separations in seismic reflection data; and (3) the effects of strain partitioning that yield coeval strike-slip faults and associated fold and thrust belts.

  17. Infrastructure and mechanical properties of a fault zone in sandstone as an outcrop analogue of a potential geothermal reservoir

    NASA Astrophysics Data System (ADS)

    Bauer, J. F.; Meier, S.; Philipp, S. L.

    2013-12-01

    Due to the high drilling costs of geothermal projects, it is economically sensible to assess the potential suitability of a reservoir prior to drilling. Fault zones are of particular importance, because they may either enhance fluid flow or act as flow barriers, depending on their particular infrastructure. Outcrop analogue studies are useful to analyze the fault zone infrastructure and thereby increase the predictability of fluid flow behavior across fault zones in the corresponding deep reservoir. The main aims of the present study are to 1) analyze the infrastructure and the differences of fracture system parameters in fault zones and 2) determine the mechanical properties of the faulted rocks. We measure fracture frequencies as well as orientations, lengths and apertures, and take representative rock samples for each facies to obtain Young's modulus and compressive and tensile strengths in the laboratory. Since fractures reduce the stiffness of in situ rock masses, we use an inverse correlation with the number of discontinuities to calculate effective (in situ) Young's moduli and thus investigate the variation of mechanical properties in fault zones. In addition we determine the rebound hardness, which correlates with the compressive strength measured in the laboratory, with a Schmidt hammer in the field, because this allows detailed maps of mechanical property variations within fault zones. Here we present the first results for a fault zone in the Triassic Lower Bunter of the Upper Rhine Graben in France. The outcrop at Cleebourg exposes the damage zone of the footwall and a clearly developed fault core of a NNW-SSE-striking normal fault. The approximately 15 m wide fault core consists of fault gouge, slip zones, deformation bands and host rock lenses. Intensive deformation close to the core led to the formation of a distal fault core, a 5 m wide zone with disturbed layering and high fracture frequency. The damage zone also contains more fractures than the host rock. Fracture frequency and connectivity clearly increase near the fault core, where the reservoir permeability may thus be higher and the effective Young's modulus lower. Similarly, the Schmidt-hammer measurements show that the rebound hardness, and hence the compressive strength, decreases near the fault core. This project is part of the research and development project 'AuGE' (Outcrop Analogue Studies in Geothermal Exploration). Project partners are the company Geothermal Engineering GmbH as well as the Universities of Heidelberg and Erlangen. We thank the German Federal Ministry for the Environment, Nature Conservation and Nuclear Safety (BMU) for funding the project in the framework of the 5th Energy Research Program (FKZ: 0325302). We also thank the owner of the quarry for permission to perform our field studies.

  18. Human-centered design (HCD) of a fault-finding application for mobile devices and its impact on the reduction of time in fault diagnosis in the manufacturing industry.

    PubMed

    Kluge, Annette; Termer, Anatoli

    2017-03-01

    The present article describes the design process of a fault-finding application for mobile devices, which was built to support workers' performance by guiding them through a systematic strategy to stay focused during a fault-finding process. In collaboration with a project partner in the manufacturing industry, a fault diagnosis application was conceptualized based on a human-centered design approach (ISO 9241-210:2010). A field study with 42 maintenance workers was conducted for the purpose of evaluating the performance enhancement of fault finding in three different scenarios as well as for assessing the workers' acceptance of the technology. Workers using the mobile device application were twice as fast at fault finding as the control group without the application and perceived the application as very useful. The results indicate a vast potential of the mobile application for fault diagnosis in contemporary manufacturing systems. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Therefore the resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. Also, due to practical limits on power consumption in HPC systems future systems are likely to embrace innovative architectures, increasing the levels of hardware and software complexities. As a result the techniques that seek to improve resilience must navigate the complex trade-off space between resilience and the overheads to power consumption and performance. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance & power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience using the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. Each established solution is described in the form of a pattern that addresses concrete problems in the design of resilient systems. The complete catalog of resilience design patterns provides designers with reusable design elements. We also define a framework that enhances a designer's understanding of the important constraints and opportunities for the design patterns to be implemented and deployed at various layers of the system stack. This design framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also supports optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.

  20. GenSAA: A tool for advancing satellite monitoring with graphical expert systems

    NASA Technical Reports Server (NTRS)

    Hughes, Peter M.; Luczak, Edward C.

    1993-01-01

    During numerous contacts with a satellite each day, spacecraft analysts must closely monitor real time data for combinations of telemetry parameter values, trends, and other indications that may signify a problem or failure. As satellites become more complex and the number of data items increases, this task is becoming increasingly difficult for humans to perform at acceptable performance levels. At the NASA Goddard Space Flight Center, fault-isolation expert systems have been developed to support data monitoring and fault detection tasks in satellite control centers. Based on the lessons learned during these initial efforts in expert system automation, a new domain-specific expert system development tool named the Generic Spacecraft Analyst Assistant (GenSAA) is being developed to facilitate the rapid development and reuse of real-time expert systems to serve as fault-isolation assistants for spacecraft analysts. Although initially domain-specific in nature, this powerful tool will support the development of highly graphical expert systems for data monitoring purposes throughout the space and commercial industry.

  1. Dynamic rupture simulations of the 2016 Mw7.8 Kaikōura earthquake: a cascading multi-fault event

    NASA Astrophysics Data System (ADS)

    Ulrich, T.; Gabriel, A. A.; Ampuero, J. P.; Xu, W.; Feng, G.

    2017-12-01

    The Mw7.8 Kaikōura earthquake struck the northern part of New Zealand's South Island roughly one year ago. It ruptured multiple segments of the contractional North Canterbury fault zone and of the Marlborough fault system. Field observations combined with satellite data suggest a rupture path involving partly unmapped faults separated by stepover distances larger than 5 km, the maximum distance usually considered by the latest seismic hazard assessment methods. This might imply distant rupture transfer mechanisms generally not considered in seismic hazard assessment. We present high-resolution 3D dynamic rupture simulations of the Kaikōura earthquake under physically self-consistent initial stress and strength conditions. Our simulations are based on recent finite-fault slip inversions that constrain fault system geometry and final slip distribution from remote sensing, surface rupture and geodetic data (Xu et al., 2017). We assume a uniform background stress field, without lateral fault stress or strength heterogeneity. We use the open-source software SeisSol (www.seissol.org), which is based on an arbitrary high-order accurate DERivative Discontinuous Galerkin method (ADER-DG). Our method can account for complex fault geometries, high-resolution topography and bathymetry, 3D subsurface structure, off-fault plasticity and modern friction laws. It enables the simulation of seismic wave propagation with high-order accuracy in space and time in complex media. We show that a cascading rupture driven by dynamic triggering can break all fault segments that were involved in this earthquake without mechanically requiring an underlying thrust fault. Our preferred fault geometry connects most fault segments: it does not feature stepovers larger than 2 km. The best scenario matches the main macroscopic characteristics of the earthquake, including its apparently slow rupture propagation caused by zigzag cascading, the moment magnitude and the overall inferred slip distribution. We observe a high sensitivity of the cascading dynamics to fault stepover distance and off-fault energy dissipation.
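
    One standard ingredient of such a setup is resolving the assumed uniform background stress tensor into normal and shear tractions on each fault plane via Cauchy's formula t = σ·n. The sketch below shows that step with illustrative values; the tensor components and the fault normal are invented for the example, not the study's parameters.

```python
import numpy as np

def fault_tractions(sigma, normal):
    """Resolve a uniform stress tensor onto a fault plane (Cauchy: t = sigma @ n).
    Returns (normal traction, magnitude of shear traction)."""
    n = normal / np.linalg.norm(normal)
    t = sigma @ n                              # traction vector on the plane
    t_n = float(t @ n)                         # normal component (negative = compressive)
    t_s = float(np.linalg.norm(t - t_n * n))   # shear component magnitude
    return t_n, t_s

# Illustrative stress tensor in MPa and fault-plane normal (hypothetical values).
sigma = np.array([[-60.0,  10.0,   0.0],
                  [ 10.0, -40.0,   0.0],
                  [  0.0,   0.0, -50.0]])
normal = np.array([1.0, 1.0, 0.0])
print(fault_tractions(sigma, normal))
```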

  2. Sliding window denoising K-Singular Value Decomposition and its application on rolling bearing impact fault diagnosis

    NASA Astrophysics Data System (ADS)

    Yang, Honggang; Lin, Huibin; Ding, Kang

    2018-05-01

    The performance of sparse feature extraction by the commonly used K-Singular Value Decomposition (K-SVD) method depends largely on the signal segment selected in rolling bearing diagnosis; furthermore, the computation is relatively slow and the dictionary becomes highly redundant when the fault signal is long. A new sliding window denoising K-SVD (SWD-KSVD) method is proposed, which uses only one small segment of the time-domain signal containing impacts to perform sliding window dictionary learning and selects an optimal pattern carrying the oscillating information of the rolling bearing fault according to a maximum variance principle. An inner product operation between the optimal pattern and the whole fault signal is performed to enhance the signature of the impacts' occurrence moments. Lastly, the signal is reconstructed at the peak points of the inner product to realize the extraction of the rolling bearing fault features. Both simulations and experiments verify that the method extracts the fault features effectively.
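
    A rough sketch of the matching step described above, under simplifying assumptions: the maximum-variance sliding window stands in for the learned dictionary pattern (the K-SVD training itself is omitted), and the strongest correlation positions stand in for the reconstruction peaks.

```python
import numpy as np

def extract_impacts(signal, win, top_k=5):
    """Slide a window over the signal, pick the max-variance segment as the
    impact pattern, correlate it with the whole signal, and return the
    strongest correlation positions as candidate impact moments."""
    # 1. Select the pattern: the sliding window with the largest variance.
    variances = [signal[i:i + win].var() for i in range(len(signal) - win)]
    start = int(np.argmax(variances))
    pattern = signal[start:start + win]
    # 2. Inner product of the pattern with every signal position.
    score = np.correlate(signal, pattern, mode="valid")
    # 3. Take the top-k scores (a full implementation would also enforce
    #    a minimum separation between peaks before reconstruction).
    return np.argsort(score)[-top_k:][::-1]

# Synthetic test: periodic decaying impacts buried in noise.
sig = 0.1 * np.random.randn(4000)
for p in range(200, 4000, 800):
    sig[p:p + 100] += np.exp(-0.05 * np.arange(100)) * np.sin(0.9 * np.arange(100))
print(extract_impacts(sig, win=100))
```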

  3. Faults Discovery By Using Mined Data

    NASA Technical Reports Server (NTRS)

    Lee, Charles

    2005-01-01

    Fault discovery in complex systems consists of model-based reasoning, fault tree analysis, rule-based inference methods, and other approaches. Model-based reasoning builds models of the systems either from mathematical formulations or from experimental models. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model from expert knowledge. These models and methods have one thing in common: they presume certain prior conditions. Complex systems often use fault trees to analyze faults. Fault diagnosis, when an error occurs, is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on the data fed back from the system, and decisions are made against threshold values by using fault trees. Since those decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time and captures the contents of fault trees as the initial state of the trees.
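
    As a hedged sketch of the proposed direction, the snippet below trains a decision tree to reproduce threshold-style fault logic from labeled telemetry; the features, thresholds, and data are synthetic inventions, not mission data.

```python
# Sketch: learn a decision tree that reproduces threshold-style fault logic
# from labeled telemetry. Requires scikit-learn; all data here is synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(1000, 2))   # columns: [pressure, temperature]
y = (X[:, 0] > 80) | (X[:, 1] > 90)       # fault iff either threshold is exceeded

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict([[85.0, 50.0], [20.0, 30.0]]))  # [ True False]
```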

  4. A fault-tolerant strategy based on SMC for current-controlled converters

    NASA Astrophysics Data System (ADS)

    Azer, Peter M.; Marei, Mostafa I.; Sattar, Ahmed A.

    2018-05-01

    The sliding mode control (SMC) is used to control variable-structure systems such as power electronic converters. This paper presents a fault-tolerant strategy based on the SMC for current-controlled AC-DC converters. The proposed SMC is based on three sliding surfaces, one for each leg of the AC-DC converter. Two sliding surfaces are assigned to control the phase currents, since the input three-phase currents are balanced. Hence, the third sliding surface is considered an extra degree of freedom, which is utilised to control the neutral voltage. This action is utilised to enhance the performance of the converter during open-switch faults. The proposed fault-tolerant strategy is based on allocating the sliding surface of the faulty leg to control the neutral voltage. Consequently, the current waveform is improved. The behaviour of the current-controlled converter during different types of open-switch faults is analysed. Double-switch faults include three cases: two upper switches; an upper and a lower switch on different legs; and both switches of the same leg. The dynamic performance of the proposed system is evaluated during healthy and open-switch fault operation. Simulation results exhibit the various merits of the proposed SMC-based fault-tolerant strategy.
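
    The per-leg relay law implied by this strategy can be sketched roughly as follows, with sliding surfaces on the phase-current errors and a simple u = sign(s) switching rule; the signals and surface assignment below are illustrative guesses, not the paper's actual design.

```python
import numpy as np

def smc_leg_commands(i_ref, i_meas, v_n_ref, v_n_meas, faulty_leg=None):
    """Per-leg relay law u_k = sign(s_k). Two surfaces track phase currents;
    the third surface (or the faulty leg's, under fault) is reassigned to
    regulate the neutral voltage."""
    s = list(np.asarray(i_ref) - np.asarray(i_meas))   # current-error surfaces
    idx = faulty_leg if faulty_leg is not None else 2  # which leg gets the extra surface
    s[idx] = v_n_ref - v_n_meas                        # neutral-voltage surface
    return [1 if sk >= 0 else -1 for sk in s]          # switching command per leg

# Healthy operation: leg 2 regulates the neutral voltage.
print(smc_leg_commands([1.0, -0.5, -0.5], [0.8, -0.4, -0.4], 0.0, 0.1))
# Open-switch fault on leg 0: its surface is reassigned to the neutral voltage.
print(smc_leg_commands([1.0, -0.5, -0.5], [0.8, -0.4, -0.4], 0.0, 0.1, faulty_leg=0))
```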

  5. Structural localization and origin of compartmentalized fluid flow, Comstock lode, Virginia City, Nevada

    USGS Publications Warehouse

    Berger, B.R.; Tingley, J.V.; Drew, L.J.

    2003-01-01

    Bonanza-grade orebodies in epithermal-style mineral deposits characteristically occur as discrete zones within spatially more extensive fault and/or fracture systems. Empirically, the segregation of such systems into compartments of higher and lower permeability appears to be a key process necessary for high-grade ore formation and, most commonly, it is such concentrations of metals that make an epithermal vein district world class. In the world-class silver- and gold-producing Comstock mining district, Nevada, several lines of evidence lead to the conclusion that the Comstock lode is localized in an extensional stepover between right-lateral fault zones. This evidence includes fault geometries, kinematic indicators of slip, the hydraulic connectivity of faults as demonstrated by veins and dikes along faults, and the opening of a normal-fault-bounded, asymmetric basin between two parallel and overlapping northwest-striking, lateral- to lateral-oblique-slip fault zones. During basin opening, thick, generally subeconomic, banded quartz-adularia veins were deposited in the normal fault zone, the Comstock fault, and along one of the bounding lateral fault zones, the Silver City fault. As deformation continued, the intrusion of dikes and small plugs into the hanging wall of the Comstock fault zone may have impeded the ability of the stepover to accommodate displacement on the bounding strike-slip faults through extension within the stepover. A transient period of transpressional deformation of the Comstock fault zone ensued, and the early-stage veins were deformed through boudinaging and hydraulic fragmentation, fault-motion inversion, and high- and low-angle axial rotations of segments of the fault planes and some fault-bounded wedges. This deformation led to the formation of spatially restricted compartments of high vertical permeability and hydraulic connectivity and low lateral hydraulic connectivity. Bonanza orebodies were formed in the compartmentalized zones of high permeability and hydraulic connectivity. As heat flow and related hydrothermal activity waned along the Comstock fault zone, extension was reactivated in the stepover along the Occidental zone of normal faults east of the Comstock fault zone. Volcanic and related intrusive activity in this part of the stepover led to a new episode of hydrothermal activity and formation of the Occidental lodes.

  6. 17 CFR 270.2a19-2 - Investment company general partners not deemed interested persons.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... following: (1) Only general partners who are natural persons shall serve as, and perform the functions of..., gain, loss, deduction, or credit, and other contributions, required to be held or made by general...

  7. 17 CFR 270.2a19-2 - Investment company general partners not deemed interested persons.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... following: (1) Only general partners who are natural persons shall serve as, and perform the functions of..., gain, loss, deduction, or credit, and other contributions, required to be held or made by general...

  8. 17 CFR 270.2a19-2 - Investment company general partners not deemed interested persons.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... following: (1) Only general partners who are natural persons shall serve as, and perform the functions of..., gain, loss, deduction, or credit, and other contributions, required to be held or made by general...

  9. 17 CFR 270.2a19-2 - Investment company general partners not deemed interested persons.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... following: (1) Only general partners who are natural persons shall serve as, and perform the functions of..., gain, loss, deduction, or credit, and other contributions, required to be held or made by general...

  10. 17 CFR 270.2a19-2 - Investment company general partners not deemed interested persons.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... following: (1) Only general partners who are natural persons shall serve as, and perform the functions of..., gain, loss, deduction, or credit, and other contributions, required to be held or made by general...

  11. 26 CFR 20.2053-1 - Deductions for expenses, indebtedness, and taxes; in general.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... behalf. N is a CPA and provides similar accounting and bookkeeping services to unrelated clients. At the... services rendered arose in the ordinary course of business, as N is a CPA performing similar services for...

  12. 26 CFR 20.2053-1 - Deductions for expenses, indebtedness, and taxes; in general.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... behalf. N is a CPA and provides similar accounting and bookkeeping services to unrelated clients. At the... services rendered arose in the ordinary course of business, as N is a CPA performing similar services for...

  13. 26 CFR 20.2053-1 - Deductions for expenses, indebtedness, and taxes; in general.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... behalf. N is a CPA and provides similar accounting and bookkeeping services to unrelated clients. At the... services rendered arose in the ordinary course of business, as N is a CPA performing similar services for...

  14. 26 CFR 20.2053-1 - Deductions for expenses, indebtedness, and taxes; in general.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... behalf. N is a CPA and provides similar accounting and bookkeeping services to unrelated clients. At the... services rendered arose in the ordinary course of business, as N is a CPA performing similar services for...

  15. 26 CFR 20.2053-1 - Deductions for expenses, indebtedness, and taxes; in general.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... behalf. N is a CPA and provides similar accounting and bookkeeping services to unrelated clients. At the... services rendered arose in the ordinary course of business, as N is a CPA performing similar services for...

  16. Parameter Transient Behavior Analysis on Fault Tolerant Control System

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine (Technical Monitor); Shin, Jong-Yeob

    2003-01-01

    In a fault-tolerant control (FTC) system, a parameter-varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. This paper illustrates the analysis of an FTC system based on the transient behavior of the estimated fault parameters, which may include false fault detections during a short time interval. Using Lyapunov function analysis, the upper bound of an induced-L2 norm of the FTC system performance is calculated as a function of the fault detection time and the exponential decay rate of the Lyapunov function.
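
    A hedged sketch of the standard argument behind such a bound, with assumed notation rather than the paper's: if a quadratic Lyapunov function satisfies a dissipation inequality with decay rate α, dropping the nonnegative decay term and integrating yields the induced-L2 bound, while the decay rate governs how quickly transients from a false detection die out.

```latex
% Assumed notation (not the paper's): V(x) = x^T P x with P > 0.
% Suppose, along trajectories of the FTC system,
%   \dot{V} + 2\alpha V <= \gamma^2 w^T w - z^T z .
% Since 2\alpha V >= 0, integrating over [0,T] with V(x(0)) = 0 and
% V(x(T)) >= 0 gives the induced-L2 bound below; with w = 0 after a
% false detection at time t_d, V decays exponentially at rate 2\alpha.
\[
  \dot V + 2\alpha V \;\le\; \gamma^{2}\, w^{\mathsf{T}} w - z^{\mathsf{T}} z
  \quad\Longrightarrow\quad
  \|z\|_{2,[0,T]} \;\le\; \gamma\, \|w\|_{2,[0,T]},
  \qquad
  V(x(t)) \;\le\; e^{-2\alpha (t - t_d)}\, V(x(t_d)) \quad (w \equiv 0).
\]
```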

  17. The evolving contribution of border faults and intra-rift faults in early-stage East African rifts: insights from the Natron (Tanzania) and Magadi (Kenya) basins

    NASA Astrophysics Data System (ADS)

    Muirhead, J.; Kattenhorn, S. A.; Dindi, E.; Gama, R.

    2013-12-01

    In the early stages of continental rifting, East African Rift (EAR) basins are conventionally depicted as asymmetric basins bounded on one side by a ~100 km-long border fault. As rifting progresses, strain concentrates into the rift center, producing intra-rift faults. The timing and nature of the transition from border fault to intra-rift-dominated strain accommodation is unclear. Our study focuses on this transitional phase of continental rifting by exploring the spatial and temporal evolution of faulting in the Natron (border fault initiation at ~3 Ma) and Magadi (~7 Ma) basins of northern Tanzania and southern Kenya, respectively. We compare the morphologies and activity histories of faults in each basin using field observations and remote sensing in order to address the relative contributions of border faults and intra-rift faults to crustal strain accommodation as rifting progresses. The ~500 m-high border fault along the western margin of the Natron basin is steep compared to many border faults in the eastern branch of the EAR, indicating limited scarp degradation by mass wasting. Locally, the escarpment shows open fissures and young scarps 10s of meters high and a few kilometers long, implying ongoing border fault activity in this young rift. However, intra-rift faults within ~1 Ma lavas are greatly eroded and fresh scarps are typically absent, implying long recurrence intervals between slip events. Rift-normal topographic profiles across the Natron basin show the lowest elevations in the lake-filled basin adjacent to the border fault, where a number of hydrothermal springs along the border fault system expel water into the lake. In contrast to Natron, a ~1600 m high, densely vegetated, border fault escarpment along the western edge of the Magadi basin is highly degraded; we were unable to identify evidence of recent rupturing. Rift-normal elevation profiles indicate the focus of strain has migrated away from the border fault into the rift center, where faults pervasively dissect 1.2-0.8 Ma trachyte lavas. Unlike Natron, intra-rift faults in the Magadi basin exhibit primarily steep, little-degraded fault scarps, implying greater activity than Natron intra-rift faults. Numerous fault-associated springs feed water into perennial Lake Magadi, which has no surface drainage input, yet survives despite a high evaporation rate that has created economically viable evaporite deposits. Calcite vein-filled joints are common along fault zones around Lake Magadi, as well as several cm veins around columnar joints that imply isotropic expansion of the fracture network under high pressures of CO2-rich fluids. Our work indicates that the locus of strain in this portion of the EAR transfers from the border fault to the center of the rift basin some time between 3 and 7 million years after rift initiation. This transition likely reflects the evolving respective roles of crustal flexure and magma budget in focusing strain, as well as the hydrothermal fluid budget along evolving fault zones.

  18. Abnormal fault-recovery characteristics of the fault-tolerant multiprocessor uncovered using a new fault-injection methodology

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1991-01-01

    An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.

  19. Modification of multi-ring basins - The Imbrium model

    NASA Technical Reports Server (NTRS)

    Whitford-Stark, J. L.

    1981-01-01

    It is shown that the gross variations in wall height around Imbrium result largely from the intersection of the Imbrium basin with pre-existing basins and from faulting; angle of impact and slumping played a lesser modifying role. The gross irregularities in plan of the northern part of Imbrium are hypothesized to result from the collapse of large crustal blocks into the Imbrium and Serenitatis cavities. Lithosphere thickness is believed to play an important role in the mechanisms of formation and modification of large craters and basins. The deduction of slow sub-lithospheric flow of material toward the cavity centers does not lend support to the tsunami model, requires a minor modification of the nested-crater model, and provides a mechanism for the production of megaterraces. Spatial and temporal lithosphere variations satisfy constraints requiring the overlap of morphology/diameter characteristics, variable onset diameters between planets, and variable ring spacings from planet to planet, and provide a mechanism for producing local irregularities in ring structures.

  20. Strain rate effect on fault slip and rupture evolution: Insight from meter-scale rock friction experiments

    NASA Astrophysics Data System (ADS)

    Xu, Shiqing; Fukuyama, Eiichi; Yamashita, Futoshi; Mizoguchi, Kazuo; Takizawa, Shigeru; Kawakata, Hironori

    2018-05-01

    We conduct meter-scale rock friction experiments to study strain rate effect on fault slip and rupture evolution. Two rock samples made of Indian metagabbro, with a nominal contact dimension of 1.5 m long and 0.1 m wide, are juxtaposed and loaded in a direct shear configuration to simulate the fault motion. A series of experimental tests, under constant loading rates ranging from 0.01 mm/s to 1 mm/s and under a fixed normal stress of 6.7 MPa, are performed to simulate conditions with changing strain rates. Load cells and displacement transducers are utilized to examine the macroscopic fault behavior, while high-density arrays of strain gauges close to the fault are used to investigate the local fault behavior. The observations show that the macroscopic peak strength, strength drop, and the rate of strength drop can increase with increasing loading rate. At the local scale, the observations reveal that slow loading rates favor generation of characteristic ruptures that always nucleate in the form of slow slip at about the same location. In contrast, fast loading rates can promote very abrupt rupture nucleation and along-strike scatter of hypocenter locations. At a given propagation distance, rupture speed tends to increase with increasing loading rate. We propose that a strain-rate-dependent fault fragmentation process can enhance the efficiency of fault healing during the stick period, which together with healing time controls the recovery of fault strength. In addition, a strain-rate-dependent weakening mechanism can be activated during the slip period, which together with strain energy selects the modes of fault slip and rupture propagation. The results help to understand the spectrum of fault slip and rock deformation modes in nature, and emphasize the role of heterogeneity in tuning fault behavior under different strain rates.
