Sauer, Juergen; Chavaillaz, Alain; Wastell, David
2016-06-01
This work examined the effects of operators' exposure to various types of automation failure during training. Forty-five participants were trained for 3.5 h on a simulated process control environment. During training, participants experienced either a fully reliable automatic fault repair facility (i.e. faults detected and correctly diagnosed), a misdiagnosis-prone one (i.e. faults detected but not correctly diagnosed) or a miss-prone one (i.e. faults not detected). One week after training, participants were tested for 3 h, experiencing two types of automation failure (misdiagnosis, miss). The results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system. Operator errors resulting from automation bias were much more frequent when automation misdiagnosed a fault than when it missed one. Differences in trust levels instilled by the different training experiences disappeared during the testing session. Practitioner Summary: The experience of automation failures during training has lasting consequences for operator performance. A greater potential for operator error may be expected when an automatic system fails to diagnose a fault than when it fails to detect one.
Level of Automation and Failure Frequency Effects on Simulated Lunar Lander Performance
NASA Technical Reports Server (NTRS)
Marquez, Jessica J.; Ramirez, Margarita
2014-01-01
A human-in-the-loop experiment was conducted at the NASA Ames Research Center Vertical Motion Simulator, where instrument-rated pilots completed a simulated terminal descent phase of a lunar landing. Ten pilots participated in a 2 x 2 mixed design experiment, with level of automation as the within-subjects factor and failure frequency as the between-subjects factor. The two evaluated levels of automation were high (fully automated landing) and low (manually controlled landing). During test trials, participants were exposed to either a high number of failures (75% failure frequency) or a low number of failures (25% failure frequency). In order to investigate the pilots' sensitivity to changes in level of automation and failure frequency, the dependent measure selected for this experiment was accuracy of failure diagnosis, from which D Prime and Decision Criterion were derived. For each of the dependent measures, no significant difference was found for level of automation and no significant interaction was detected between level of automation and failure frequency. A significant effect was identified for failure frequency, suggesting that failure frequency affects pilots' sensitivity to failure detection and diagnosis. Participants were more likely to correctly identify and diagnose failures if they experienced the higher failure frequency, regardless of level of automation.
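The D Prime and Decision Criterion measures used in this experiment come from standard signal detection theory. A minimal sketch of how they are typically derived from failure-diagnosis counts follows; the log-linear correction constant (0.5 per cell) is our assumption for illustration, not stated in the abstract:

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and decision criterion (c) from raw counts.

    A log-linear correction (add 0.5 to each cell) guards against
    infinite z-scores when a hit or false-alarm rate is 0 or 1.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)           # separation of signal/noise
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion
```

With symmetric counts (e.g. 9 hits / 1 miss vs. 1 false alarm / 9 correct rejections) the criterion comes out neutral, as expected.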
Negative Selection Algorithm for Aircraft Fault Detection
NASA Technical Reports Server (NTRS)
Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.
2004-01-01
We investigated a real-valued Negative Selection Algorithm (NSA) for fault detection in man-in-the-loop aircraft operation. The detection algorithm uses body-axes angular rate sensor data exhibiting normal flight behavior patterns to probabilistically generate a set of fault detectors that can detect any abnormalities (including faults and damage) in the behavior of the aircraft in flight. We performed experiments with datasets (collected under normal and various simulated failure conditions) using the NASA Ames man-in-the-loop high-fidelity C-17 flight simulator. The paper provides results of experiments with different datasets representing various failure conditions.
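The abstract does not give the algorithm's parameters; the following is a minimal, illustrative sketch of real-valued negative selection in two dimensions. The detector radius, candidate count, and self-region used here are hypothetical:

```python
import math
import random

def train_detectors(self_samples, n_candidates=500, radius=0.15, seed=1):
    """Real-valued negative selection: keep random candidate detectors
    that do NOT match any normal ('self') sample within `radius`."""
    rng = random.Random(seed)
    detectors = []
    for _ in range(n_candidates):
        cand = (rng.random(), rng.random())
        if all(math.dist(cand, s) > radius for s in self_samples):
            detectors.append(cand)
    return detectors

def is_anomalous(point, detectors, radius=0.15):
    """A point is flagged as abnormal when any detector covers it."""
    return any(math.dist(point, d) <= radius for d in detectors)
```

Because every surviving detector lies farther than `radius` from each self sample, normal data is never flagged; points far from the self region are covered by some detector with high probability.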
Change Deafness and the Organizational Properties of Sounds
ERIC Educational Resources Information Center
Gregg, Melissa K.; Samuel, Arthur G.
2008-01-01
Change blindness, or the failure to detect (often large) changes to visual scenes, has been demonstrated in a variety of different situations. Failures to detect auditory changes are far less studied, and thus little is known about the nature of change deafness. Five experiments were conducted to explore the processes involved in change deafness…
Achieving fast and stable failure detection in WDM Networks
NASA Astrophysics Data System (ADS)
Gao, Donghui; Zhou, Zhiyu; Zhang, Hanyi
2005-02-01
In dynamic networks, failure detection takes a major part of the convergence time, an important index of network performance. To detect a node or link failure, traditional protocols such as the Hello protocol in OSPF or RSVP exchange keep-alive messages between neighboring nodes to keep track of the link/node state. With default settings, however, the minimum detection time is on the order of tens of seconds, which cannot meet the demands of fast network convergence and failure recovery. When the related parameters are configured to reduce the detection time, notable instability problems arise. In this paper, we analyze the problem and design a new failure detection algorithm that reduces the network overhead of detection signaling. Our experiments show that implicitly treating other signaling messages as keep-alives is effective in enhancing stability. We evaluated our proposal and the previous approaches on an ASON test-bed. The experimental results show that our algorithm outperforms previous schemes, with about an order-of-magnitude reduction in both false failure alarms and queuing delay of other messages, especially under light traffic load.
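The key idea of implicitly counting other signaling messages as keep-alives can be sketched as follows; the timeout value and API are illustrative, not from the paper:

```python
class ImplicitKeepAliveDetector:
    """Treats ANY message from a neighbor as evidence of liveness, so
    explicit Hello traffic is needed only when the link is otherwise idle."""

    def __init__(self, timeout, clock):
        self.timeout = timeout   # seconds of silence before suspicion
        self.clock = clock       # injectable time source, for testability
        self.last_seen = {}

    def on_message(self, neighbor):
        # Data/signaling messages double as keep-alives: refresh the timer.
        self.last_seen[neighbor] = self.clock()

    def suspected(self, neighbor):
        """A neighbor is suspected failed after `timeout` seconds of silence."""
        seen = self.last_seen.get(neighbor)
        return seen is None or self.clock() - seen > self.timeout
```

Injecting the clock makes the detector deterministic to test and keeps the liveness policy separate from message handling.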
The analysis of the pilot's cognitive and decision processes
NASA Technical Reports Server (NTRS)
Curry, R. E.
1975-01-01
Articles are presented on pilot performance in zero-visibility precision approach, failure detection by pilots during automatic landing, experiments in pilot decision-making during simulated low visibility approaches, a multinomial maximum likelihood program, and a random search algorithm for laboratory computers. Other topics discussed include detection of system failures in multi-axis tasks and changes in pilot workload during an instrument landing.
NASA Technical Reports Server (NTRS)
Vanschalkwyk, Christiaan Mauritz
1991-01-01
Many applications require that a control system must be tolerant to the failure of its components. This is especially true for large space-based systems that must work unattended and with long periods between maintenance. Fault tolerance can be obtained by detecting the failure of the control system component, determining which component has failed, and reconfiguring the system so that the failed component is isolated from the controller. Component failure detection experiments that were conducted on an experimental space structure, the NASA Langley Mini-Mast are presented. Two methodologies for failure detection and isolation (FDI) exist that do not require the specification of failure modes and are applicable to both actuators and sensors. These methods are known as the Failure Detection Filter and the method of Generalized Parity Relations. The latter method was applied to three different sensor types on the Mini-Mast. Failures were simulated in input-output data that were recorded during operation of the Mini-Mast. Both single and double sensor parity relations were tested and the effect of several design parameters on the performance of these relations is discussed. The detection of actuator failures is also treated. It is shown that in all the cases it is possible to identify the parity relations directly from input-output data. Frequency domain analysis is used to explain the behavior of the parity relations.
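A single-sensor parity relation of the kind applied here can be illustrated with a scalar ARX model: the residual is the difference between the measured output and the model's one-step prediction, and it departs from zero when a sensor failure corrupts the measurement. The model coefficients and injected bias below are hypothetical:

```python
def parity_residuals(y, u, a, b):
    """Parity-relation residuals for an identified ARX model
    y_k = sum_i a_i * y_{k-1-i} + sum_j b_j * u_{k-1-j}.
    Residuals stay near zero for a healthy sensor and grow on failure."""
    start = max(len(a), len(b))
    r = []
    for k in range(start, len(y)):
        pred = sum(a[i] * y[k - 1 - i] for i in range(len(a)))
        pred += sum(b[j] * u[k - 1 - j] for j in range(len(b)))
        r.append(y[k] - pred)
    return r
```

Injecting a bias into one sample makes the corresponding residual jump by that bias, which is what a Decision Function would threshold.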
A Diagnostic Approach for Electro-Mechanical Actuators in Aerospace Systems
NASA Technical Reports Server (NTRS)
Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai Frank; Stoelting, Paul; Curran, Simon
2009-01-01
Electro-mechanical actuators (EMA) are finding increasing use in aerospace applications, especially with the trend towards all-electric aircraft and spacecraft designs. However, electro-mechanical actuators still lack the knowledge base accumulated for other fielded actuator types, particularly with regard to fault detection and characterization. This paper presents a thorough analysis of some of the critical failure modes documented for EMAs and describes experiments conducted on detecting and isolating a subset of them. The list of failures has been prepared through an extensive Failure Modes, Effects and Criticality Analysis (FMECA) reference, literature review, and accessible industry experience. Methods for data acquisition and validation of algorithms on EMA test stands are described. A variety of condition indicators were developed that enabled detection, identification, and isolation among the various fault modes. A diagnostic algorithm based on an artificial neural network is shown to operate successfully using these condition indicators; furthermore, the robustness of these diagnostic routines to sensor faults is demonstrated by showing their ability to distinguish between sensor faults and component failures. The paper concludes with a roadmap leading from this effort towards developing successful prognostic algorithms for electro-mechanical actuators.
Object memory and change detection: dissociation as a function of visual and conceptual similarity.
Yeh, Yei-Yu; Yang, Cheng-Ta
2008-01-01
People often fail to detect a change between two visual scenes, a phenomenon referred to as change blindness. This study investigates how a post-change object's similarity to the pre-change object influences memory of the pre-change object and affects change detection. The results of Experiment 1 showed that similarity lowered detection sensitivity but did not affect the speed of identifying the pre-change object, suggesting that similarity between the pre- and post-change objects does not degrade the pre-change representation. Identification speed for the pre-change object was faster than naming the new object regardless of detection accuracy. Similarity also decreased detection sensitivity in Experiment 2 but improved the recognition of the pre-change object under both correct detection and detection failure. The similarity effect on recognition was greatly reduced when 20% of each pre-change stimulus was masked by random dots in Experiment 3. Together the results suggest that the level of pre-change representation under detection failure is equivalent to the level under correct detection and that the pre-change representation is almost complete. Similarity lowers detection sensitivity but improves explicit access in recognition. Dissociation arises between recognition and change detection as the two judgments rely on the match-to-mismatch signal and mismatch-to-match signal, respectively.
Crystal growth furnace safety system validation
NASA Technical Reports Server (NTRS)
Mackowski, D. W.; Hartfield, R.; Bhavnani, S. H.; Belcher, V. M.
1994-01-01
The findings are reported regarding the safe operation of the NASA crystal growth furnace (CGF) and potential methods for detecting containment failures of the furnace. The main conclusions concern ampoule leak detection, cartridge leak detection, and the detection of hazardous species in the experiment apparatus container (EAC).
An energy-efficient failure detector for vehicular cloud computing.
Liu, Jiaxi; Wu, Zhibo; Dong, Jian; Wu, Jin; Wen, Dongxin
2018-01-01
Failure detectors are one of the fundamental components for maintaining the high availability of vehicular cloud computing. In vehicular cloud computing, many roadside units (RSUs) are deployed along the road to improve connectivity. Many of them are equipped with solar batteries owing to the unavailability or excessive expense of wired electrical power, so it is important to reduce the battery consumption of RSUs. However, existing failure detection algorithms are not designed to reduce the battery consumption of RSUs. To solve this problem, a new energy-efficient failure detector, 2E-FD, is proposed specifically for vehicular cloud computing. 2E-FD not only provides acceptable failure detection service but also reduces the battery consumption of RSUs. Comparative experiments show that our failure detector performs better in terms of speed, accuracy and battery consumption.
Fault Injection Techniques and Tools
NASA Technical Reports Server (NTRS)
Hsueh, Mei-Chen; Tsai, Timothy K.; Iyer, Ravishankar K.
1997-01-01
Dependability evaluation involves the study of failures and errors. The destructive nature of a crash and long error latency make it difficult to identify the causes of failures in the operational environment. It is particularly hard to recreate a failure scenario for a large, complex system. To identify and understand potential failures, we use an experiment-based approach for studying the dependability of a system. Such an approach is applied not only during the conception and design phases, but also during the prototype and operational phases. To take an experiment-based approach, we must first understand a system's architecture, structure, and behavior. Specifically, we need to know its tolerance for faults and failures, including its built-in detection and recovery mechanisms, and we need specific instruments and tools to inject faults, create failures or errors, and monitor their effects.
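A toy illustration of the experiment-based approach via software-implemented fault injection (SWIFI): flip one bit of a program's input and compare a golden run against the faulty run to observe whether the fault propagates to a detectable error. The workload and bit position below are arbitrary examples:

```python
import random

def inject_bit_flip(value, bit=None, rng=None):
    """Emulate a single-event upset by flipping one bit of an integer,
    a common software-implemented fault injection technique."""
    rng = rng or random.Random()
    if bit is None:
        bit = rng.randrange(32)
    return value ^ (1 << bit)

def run_with_injection(workload, inputs, bit):
    """Run the workload twice: a golden (fault-free) run and a run with
    the injected fault, then compare outputs to see if the fault propagated."""
    golden = workload(inputs)
    faulty = workload(inject_bit_flip(inputs, bit))
    return golden, faulty, golden != faulty
```

Comparing against a golden run is the standard way to classify an injection as masked (outputs equal) or as a propagated error (outputs differ).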
Failure Mode Identification Through Clustering Analysis
NASA Technical Reports Server (NTRS)
Arunajadai, Srikesh G.; Stone, Robert B.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)
2002-01-01
Research has shown that nearly 80% of the costs and problems are created in product development and that cost and quality are essentially designed into products in the conceptual stage. Currently, failure identification procedures (such as FMEA (Failure Modes and Effects Analysis), FMECA (Failure Modes, Effects and Criticality Analysis) and FTA (Fault Tree Analysis)) and design of experiments are being used for quality control and for the detection of potential failure modes during the detail design stage or post-product launch. Though all of these methods have their own advantages, they do not give information as to what are the predominant failures that a designer should focus on while designing a product. This work uses a functional approach to identify failure modes, which hypothesizes that similarities exist between different failure modes based on the functionality of the product/component. In this paper, a statistical clustering procedure is proposed to retrieve information on the set of predominant failures that a function experiences. The various stages of the methodology are illustrated using a hypothetical design example.
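The functional grouping idea can be illustrated with a much simpler stand-in than the paper's statistical clustering procedure: tally historical failure records by product function and report the predominant modes per function. The records below are invented for illustration:

```python
from collections import Counter, defaultdict

def predominant_failure_modes(records, top=2):
    """Group (function, failure_mode) records by function and return the
    `top` most frequent failure modes for each function, the kind of
    summary a designer could consult during concept design."""
    by_function = defaultdict(Counter)
    for function, mode in records:
        by_function[function][mode] += 1
    return {f: [m for m, _ in c.most_common(top)]
            for f, c in by_function.items()}
```
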
Failure detection and fault management techniques for flush airdata sensing systems
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.
1992-01-01
A high-angle-of-attack flush airdata sensing system was installed and flight tested on the F-18 High Alpha Research Vehicle at NASA-Dryden. This system uses a matrix of pressure orifices arranged in concentric circles on the nose of the vehicle to determine angles of attack, angles of sideslip, dynamic pressure, and static pressure as well as other airdata parameters. Results presented use an arrangement of 11 symmetrically distributed ports on the aircraft nose. Experience with this sensing system indicates that the primary concern for real-time implementation is the detection and management of overall system and individual pressure sensor failures. The multiple-port sensing system is more tolerant of small disturbances in the measured pressure data than conventional probe-based intrusive airdata systems. However, under adverse circumstances, large undetected failures in individual pressure ports can result in algorithm divergence and catastrophic failure of the entire system. It is shown how system and individual port failures may be detected using chi-square analysis. Once identified, the effects of failures are eliminated using weighted least squares.
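The chi-square consistency check with leave-one-out isolation can be sketched for a scalar least-squares model. The measurement model, noise level, and threshold below are illustrative assumptions, not the flight system's actual airdata model:

```python
def fit_and_residuals(H, p):
    """Ordinary least squares for a scalar-parameter model p_i ~ H_i * theta,
    returning the estimate and the per-port residuals."""
    theta = sum(h * y for h, y in zip(H, p)) / sum(h * h for h in H)
    return theta, [y - h * theta for h, y in zip(H, p)]

def chi_square_isolate(H, p, sigma=1.0, threshold=9.0):
    """Flag a failure when the normalized residual sum of squares exceeds the
    threshold, then isolate the port whose removal best restores consistency.
    Returns None when the port set is consistent, else the failed port index."""
    _, r = fit_and_residuals(H, p)
    stat = sum((ri / sigma) ** 2 for ri in r)
    if stat <= threshold:
        return None
    drops = []
    for i in range(len(p)):
        _, ri = fit_and_residuals(H[:i] + H[i + 1:], p[:i] + p[i + 1:])
        drops.append((sum((x / sigma) ** 2 for x in ri), i))
    return min(drops)[1]  # port whose exclusion leaves the smallest misfit
```

After isolation, the remaining ports can be refit (the weighted-least-squares step), removing the failed port's influence on the airdata solution.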
Evidence-Based Early Reading Practices within a Response to Intervention System
ERIC Educational Resources Information Center
Bursuck, Bill; Blanks, Brooke
2010-01-01
Many students who experience reading failure are inappropriately placed in special education. A promising response to reducing reading failure and the overidentification of students for special education is Response to Intervention (RTI), a comprehensive early detection and prevention system that allows teachers to identify and support struggling…
Triplexer Monitor Design for Failure Detection in FTTH System
NASA Astrophysics Data System (ADS)
Fu, Minglei; Le, Zichun; Hu, Jinhua; Fei, Xia
2012-09-01
The triplexer is one of the key components in FTTH systems, which employ an analog overlay channel for video broadcasting in addition to bidirectional digital transmission. To enhance the survivability of the triplexer as well as the robustness of the FTTH system, a multi-port device named the triplexer monitor was designed and realized, by which failures at triplexer ports can be detected and localized. The triplexer monitor is composed of integrated circuits, and its four input ports are connected to beam splitters with a power division ratio of 95:5. By detecting the sampled optical signal from the beam splitters, the triplexer monitor tracks the status of the four triplexer ports (1310 nm, 1490 nm, 1550 nm and com). In this paper, the operation scenario of the triplexer monitor with external optical devices is addressed, and its integrated circuit structure is given. Furthermore, a failure localization algorithm based on a state transition diagram is proposed. To measure the failure detection and localization times for different failed ports, an experimental test-bed was built. Experimental results show that the detection time by the triplexer monitor was less than 8.20 ms for a failure at the 1310 nm port, less than 8.20 ms for a failure at the 1490 nm or 1550 nm port, and less than 7.20 ms for a failure at the com port.
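The port-state-to-diagnosis mapping at the heart of such a failure localization algorithm can be sketched as a lookup on sampled optical power; the power threshold and the masking rule for the com port are assumptions for illustration, not the paper's state transition diagram:

```python
def localize_failure(sampled_power, threshold=-30.0):
    """Map presence/absence of sampled optical power (dBm) on the four
    monitored ports to a failed-port diagnosis. A table-driven stand-in
    for state-transition failure localization."""
    state = tuple(sampled_power[p] > threshold
                  for p in ("1310", "1490", "1550", "com"))
    if not state[3]:
        return "failure at com port"  # assumed: com failure masks the others
    diagnosis = {
        (True, True, True, True): "all ports healthy",
        (False, True, True, True): "failure at 1310 nm port",
        (True, False, True, True): "failure at 1490 nm port",
        (True, True, False, True): "failure at 1550 nm port",
    }
    return diagnosis.get(state, "multiple failures")
```
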
FDIR Strategy Validation with the B Method
NASA Astrophysics Data System (ADS)
Sabatier, D.; Dellandrea, B.; Chemouil, D.
2008-08-01
In a formation flying satellite system, the FDIR strategy (Failure Detection, Isolation and Recovery) is paramount. When a failure occurs, satellites should be able to take appropriate reconfiguration actions to obtain the best possible outcome given the failure, ranging from avoiding satellite-to-satellite collision to continuing the mission without disturbance if possible. To achieve this goal, each satellite in the formation has an implemented FDIR strategy that governs how it detects failures (from tests or by deduction) and how it reacts (reconfiguration using redundant equipment, avoidance manoeuvres, etc.). The goal is to protect the satellites first and the mission as much as possible. In a project initiated by CNES, ClearSy experimented with the B Method to validate the FDIR strategies, developed by Thales Alenia Space, of the inter-satellite positioning and communication devices that will be used for the SIMBOL-X (2-satellite configuration) and PEGASE (3-satellite configuration) missions, and potentially for other missions afterward. These radio frequency metrology sensor devices provide satellite positioning and inter-satellite communication in formation flying. This article presents the results of this experiment.
NASA Technical Reports Server (NTRS)
Vanschalkwyk, Christiaan M.
1992-01-01
We discuss the application of Generalized Parity Relations to two experimental flexible space structures, the NASA Langley Mini-Mast and the Marshall Space Flight Center ACES mast. We concentrate on the generation of residuals and make no attempt to implement the Decision Function. It should be clear from the examples presented whether it would be possible to detect the failure of a specific component. We derive the equations for Generalized Parity Relations. Two special cases are treated: Single Sensor Parity Relations (SSPR) and Double Sensor Parity Relations (DSPR). Generalized Parity Relations for actuators are also derived. The NASA Langley Mini-Mast and the application of SSPR and DSPR to a set of displacement sensors located at the tip of the Mini-Mast are discussed. The performance of a reduced-order model that includes the first five modes of the mast is compared to that of a set of parity relations identified from input-output data. Both time domain and frequency domain comparisons are made. The effects of the sampling period and model order on the performance of the residual generators are also discussed, as is the detection of actuator failures. We use Generalized Parity Relations to monitor control system component failures on the ACES mast. An overview is given of the Failure Detection Filter and experimental results are discussed. Conclusions and directions for future research are given.
ERIC Educational Resources Information Center
Karmakar, Subrata
2017-01-01
Online monitoring of high-voltage (HV) equipment is a vital tool for early detection of insulation failure. Most insulation failures are caused by partial discharges (PDs) inside the HV equipment. Because of the very high cost of establishing HV equipment facility and the limitations of electromagnetic interference-screened laboratories, only a…
Generation Failure: Estimating Metacognition in Cued Recall
ERIC Educational Resources Information Center
Higham, P.A.; Tam, H.
2005-01-01
Three experiments examined generation, recognition, and response bias in the original encoding-specificity paradigm using the type 2 signal-detection analysis advocated by Higham (2002). Experiments 1 (pure-list design) and 2 (mixed-list design) indicated that some guidance regarding the strength of the associative relationship between the test…
Beeler, N.M.; Lockner, D.A.
2003-01-01
We provide an explanation why earthquake occurrence does not correlate well with the daily solid Earth tides. The explanation is derived from analysis of laboratory experiments in which faults are loaded to quasiperiodic failure by the combined action of a constant stressing rate, intended to simulate tectonic loading, and a small sinusoidal stress, analogous to the Earth tides. Event populations whose failure times correlate with the oscillating stress show two modes of response; the response mode depends on the stressing frequency. Correlation that is consistent with stress threshold failure models, e.g., Coulomb failure, results when the period of stress oscillation exceeds a characteristic time tn; the degree of correlation between failure time and the phase of the driving stress depends on the amplitude and frequency of the stress oscillation and on the stressing rate. When the period of the oscillating stress is less than tn, the correlation is not consistent with threshold failure models, and much higher stress amplitudes are required to induce detectable correlation with the oscillating stress. The physical interpretation of tn is the duration of failure nucleation. Behavior at the higher frequencies is consistent with a second-order dependence of the fault strength on sliding rate, which determines the duration of nucleation and damps the response to stress change at frequencies greater than 1/tn. Simple extrapolation of these results to the Earth suggests a very weak correlation of earthquakes with the daily Earth tides, one that would require >13,000 earthquakes to detect. On the basis of our experiments and analysis, the absence of definitive daily triggering of earthquakes by the Earth tides requires that for earthquakes, tn exceeds the daily tidal period. The experiments suggest that the minimum typical duration of earthquake nucleation on the San Andreas fault system is ~1 year.
Experimental study on the stability and failure of individual step-pool
NASA Astrophysics Data System (ADS)
Zhang, Chendi; Xu, Mengzhen; Hassan, Marwan A.; Chartrand, Shawn M.; Wang, Zhaoyin
2018-06-01
Step-pools are one of the most common bedforms in mountain streams, the stability and failure of which play a significant role for riverbed stability and fluvial processes. Given this importance, flume experiments were performed with a manually constructed step-pool model. The experiments were carried out with a constant flow rate to study features of step-pool stability as well as failure mechanisms. The results demonstrate that motion of the keystone grain (KS) caused 90% of the total failure events. The pool reached its maximum depth and either exhibited relative stability for a period before step failure, which was called the stable phase, or the pool collapsed before its full development. The critical scour depth for the pool increased linearly with discharge until the trend was interrupted by step failure. Variability of the stable phase duration ranged by one order of magnitude, whereas variability of pool scour depth was constrained within 50%. Step adjustment was detected in almost all of the runs with step-pool failure and was one or two orders smaller than the diameter of the step stones. Two discharge regimes for step-pool failure were revealed: one regime captures threshold conditions and frames possible step-pool failure, whereas the second regime captures step-pool failure conditions and is the discharge of an exceptional event. In the transitional stage between the two discharge regimes, pool and step adjustment magnitude displayed relatively large variabilities, which resulted in feedbacks that extended the duration of step-pool stability. Step adjustment, which was a type of structural deformation, increased significantly before step failure. As a result, we consider step deformation as the direct explanation to step-pool failure rather than pool scour, which displayed relative stability during step deformations in our experiments.
Evaluation of Fuzzy Rulemaking for Expert Systems for Failure Detection
NASA Technical Reports Server (NTRS)
Laritz, F.; Sheridan, T. B.
1984-01-01
Computer aids in expert systems were proposed to diagnose failures in complex systems. It is shown that the fuzzy set theory of Zadeh offers a new perspective for modeling human thinking and language use. It is assumed that real expert human operators of aircraft, power plants and other systems do not think of their control tasks or failure diagnosis tasks in terms of control laws in differential equation form, but rather keep in mind a set of rules of thumb in fuzzy form. Fuzzy set experiments are described.
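A rule of thumb in fuzzy form can be evaluated with triangular membership functions and min-style conjunction; the membership ranges and rules below are invented for illustration, not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fault_suspicion(temp, vibration):
    """Evaluate two fuzzy rules of thumb for failure diagnosis:
      IF temp is high AND vibration is high THEN fault is likely.
      IF temp is normal THEN fault is unlikely.
    Returns the firing strength of each rule (AND realized as min)."""
    high_temp = tri(temp, 60.0, 90.0, 120.0)
    normal_temp = tri(temp, 10.0, 40.0, 70.0)
    high_vib = tri(vibration, 0.5, 1.0, 1.5)
    likely = min(high_temp, high_vib)
    unlikely = normal_temp
    return likely, unlikely
```

A full fuzzy expert system would aggregate such firing strengths over many rules and defuzzify the result; this sketch shows only the rule-evaluation core.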
Flight experience with flight control redundancy management
NASA Technical Reports Server (NTRS)
Szalai, K. J.; Larson, R. R.; Glover, R. D.
1980-01-01
Flight experience with both current and advanced redundancy management schemes was gained in recent flight research programs using the F-8 digital fly-by-wire aircraft. The flight performance of fault detection, isolation, and reconfiguration (FDIR) methods for sensors, computers, and actuators is reviewed. Results of induced failures as well as of actual random failures are discussed. Deficiencies in modeling and implementation techniques are also discussed. The paper also presents a comparison of multisensor tracking in smooth air, in turbulence, during large maneuvers, and during maneuvers typical of large commercial transport aircraft. The results of flight tests of an advanced analytic redundancy management algorithm are compared with the performance of a contemporary algorithm in terms of time to detection, false alarms, and missed alarms. The performance of computer redundancy management in both iron-bird and flight tests is also presented.
Biased but in Doubt: Conflict and Decision Confidence
De Neys, Wim; Cromheeke, Sofie; Osman, Magda
2011-01-01
Human reasoning is often biased by intuitive heuristics. A central question is whether the bias results from a failure to detect that the intuitions conflict with traditional normative considerations or from a failure to discard the tempting intuitions. The present study addressed this unresolved debate by using people's decision confidence as a nonverbal index of conflict detection. Participants were asked to indicate how confident they were after solving classic base-rate (Experiment 1) and conjunction fallacy (Experiment 2) problems in which a cued intuitive response could be inconsistent or consistent with the traditional correct response. Results indicated that reasoners showed a clear confidence decrease when they gave an intuitive response that conflicted with the normative response. Contrary to popular belief, this establishes that people seem to acknowledge that their intuitive answers are not fully warranted. Experiment 3 established that younger reasoners did not yet show the confidence decrease, which points to the role of improved bias awareness in our reasoning development. Implications for the long-standing debate on human rationality are discussed. PMID:21283574
1998-12-01
…failure detection, monitoring, and decision making… one of the best known OCM implementations… constraints imposed by the tasks themselves, the information and equipment provided, the task environment, operator skills and experience, operator strategies… the problem-solving situation, including the knowledge necessary to generate the right problem-solving strategies, the attention that…
[Early detection, prevention and management of renal failure in liver transplantation].
Castells, Lluís; Baliellas, Carme; Bilbao, Itxarone; Cantarell, Carme; Cruzado, Josep Maria; Esforzado, Núria; García-Valdecasas, Juan Carlos; Lladó, Laura; Rimola, Antoni; Serón, Daniel; Oppenheimer, Federico
2014-10-01
Renal failure is a frequent complication in liver transplant recipients and is associated with increased morbidity and mortality. A variety of risk factors for the development of renal failure have been described in the pre- and post-transplantation periods, as well as at the time of surgery. To reduce the negative impact of renal failure in this population, an active approach is required for the identification of patients with risk factors, the implementation of preventive strategies, and the early detection of progressive deterioration of renal function. Based on published evidence and on clinical experience, this document presents a series of recommendations on monitoring renal function in liver transplant recipients, as well as on the prevention and management of acute and chronic renal failure after liver transplantation and the referral of these patients to the nephrologist. In addition, this document provides an update on the various immunosuppressive regimens tested in this population for the prevention and control of post-transplantation deterioration of renal function.
Immunity-Based Aircraft Fault Detection System
NASA Technical Reports Server (NTRS)
Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.
2004-01-01
In the study reported in this paper, we have developed and applied an Artificial Immune System (AIS) algorithm for aircraft fault detection, as an extension to a previous work on intelligent flight control (IFC). Though the prior studies had established the benefits of IFC, one area of weakness that needed to be strengthened was the control dead band induced by commanding a failed surface. Since the IFC approach uses fault accommodation with no detection, the dead band, although it reduces over time due to learning, is present and causes degradation in handling qualities. If the failure can be identified, this dead band can be further reduced to ensure rapid fault accommodation and better handling qualities. The paper describes the application of an immunity-based approach that can detect a broad spectrum of known and unforeseen failures. The approach incorporates the knowledge of the normal operational behavior of the aircraft from sensory data, and probabilistically generates a set of pattern detectors that can detect any abnormalities (including faults) in the behavior pattern indicating unsafe in-flight operation. We developed a tool called MILD (Multi-level Immune Learning Detection) based on a real-valued negative selection algorithm that can generate a small number of specialized detectors (as signatures of known failure conditions) and a larger set of generalized detectors for unknown (or possible) fault conditions. Once the fault is detected and identified, an adaptive control system would use this detection information to stabilize the aircraft by utilizing available resources (control surfaces). We experimented with data sets collected under normal and various simulated failure conditions using a piloted motion-base simulation facility. The reported results are from a collection of test cases that reflect the performance of the proposed immunity-based fault detection algorithm.
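The real-valued negative selection idea behind the MILD tool can be illustrated with a minimal sketch. This is not the MILD implementation; the 2-D feature space, radii, detector count, and the "self" cluster below are invented for illustration. Candidate detectors are generated at random and kept only if they do not match any normal ("self") sample, so the surviving detectors cover the abnormal region of the feature space.

```python
import random

def train_detectors(normal, n_detectors=200, self_radius=0.1, seed=7):
    """Negative selection: keep random candidate detectors that lie farther
    than self_radius from every normal ('self') sample."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = (rng.random(), rng.random())
        if all((cand[0] - s[0]) ** 2 + (cand[1] - s[1]) ** 2 > self_radius ** 2
               for s in normal):
            detectors.append(cand)
    return detectors

def is_abnormal(x, detectors, match_radius=0.1):
    """Flag a sample when any detector matches it (lies within match_radius)."""
    return any((x[0] - d[0]) ** 2 + (x[1] - d[1]) ** 2 <= match_radius ** 2
               for d in detectors)

# Hypothetical 'self' set: normalized sensor readings from normal flight,
# clustered in one corner of the unit square.
normal = [(0.2 + 0.01 * i, 0.2 + 0.01 * j) for i in range(5) for j in range(5)]
detectors = train_detectors(normal)
```

Because `match_radius` equals `self_radius` here, no surviving detector can match a training sample, so normal behavior is never flagged; readings far from the self region are likely (though not guaranteed) to fall within some detector.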
Repeated Induction of Inattentional Blindness in a Simulated Aviation Environment
NASA Technical Reports Server (NTRS)
Kennedy, Kellie D.; Stephens, Chad L.; Williams, Ralph A.; Schutte, Paul C.
2017-01-01
The study reported herein is a subset of a larger investigation on the role of automation in the context of the flight deck and used a fixed-base, human-in-the-loop simulator. This paper explored the relationship between automation and inattentional blindness (IB) occurrences in a repeated induction paradigm using two types of runway incursions. The critical stimuli for both runway incursions were directly relevant to primary task performance. Sixty non-pilot participants performed the final five minutes of a landing scenario twice in one of three automation conditions: full automation (FA), partial automation (PA), and no automation (NA). The first induction resulted in a 70 percent (42 of 60) detection failure rate, with those in the PA condition significantly more likely to detect the incursion than those in the FA or NA conditions. The second induction yielded a 50 percent detection failure rate. Although detection improved (detection failure rates declined) in all conditions, those in the FA condition demonstrated the greatest improvement, with doubled detection rates. Detection behavior in the first trial did not preclude a failed detection in the second induction. Comparing group membership (IB vs. detection), participants in the FA condition showed greater improvement than those in the NA condition and rated the Mental Demand and Effort subscales of the NASA-TLX (NASA Task Load Index) significantly higher at Time 2 than at Time 1. Participants in the FA condition used the experience of IB exposure to improve task performance, whereas those in the NA condition did not, indicating the availability and reallocation of attentional resources in the FA condition. These findings support the role of engagement in operational attention detriment and the consideration of attentional failure causation to determine appropriate mitigation strategies.
An experiment in software reliability
NASA Technical Reports Server (NTRS)
Dunham, J. R.; Pierce, J. L.
1986-01-01
The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements and uses n-version programming for error detection, and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.
Distributed optical fibre sensing for early detection of shallow landslides triggering.
Schenato, Luca; Palmieri, Luca; Camporese, Matteo; Bersan, Silvia; Cola, Simonetta; Pasuto, Alessandro; Galtarossa, Andrea; Salandin, Paolo; Simonini, Paolo
2017-10-31
A distributed optical fibre sensing system is used to measure landslide-induced strains on an optical fibre buried in a large scale physical model of a slope. The fibre sensing cable is deployed at the predefined failure surface and interrogated by means of optical frequency domain reflectometry. The strain evolution is measured with centimetre spatial resolution until the occurrence of the slope failure. Standard legacy sensors measuring soil moisture and pore water pressure are installed at different depths and positions along the slope for comparison and validation. The evolution of the strain field is related to landslide dynamics with unprecedented resolution and insight. In fact, the results of the experiment clearly identify several phases within the evolution of the landslide and show that optical fibres can detect precursory signs of failure well before the collapse, paving the way for the development of more effective early warning systems.
ERIC Educational Resources Information Center
Yeh, Su-Ling; Li, Jing-Ling
2004-01-01
Repetition blindness (RB) refers to the failure to detect the second occurrence of a repeated item in rapid serial visual presentation (RSVP). In two experiments using RSVP, the ability to report two critical characters was found to be impaired when these two characters were identical (Experiment 1) or similar by sharing one repeated component…
Health management system for rocket engines
NASA Technical Reports Server (NTRS)
Nemeth, Edward
1990-01-01
The functional framework of a failure detection algorithm for the Space Shuttle Main Engine (SSME) is developed. The basic algorithm is based only on existing SSME measurements. Supplemental measurements, expected to enhance failure detection effectiveness, are identified. To support the algorithm development, a figure of merit is defined to estimate the likelihood of SSME criticality 1 failure modes and the failure modes are ranked in order of likelihood of occurrence. Nine classes of failure detection strategies are evaluated and promising features are extracted as the basis for the failure detection algorithm. The failure detection algorithm provides early warning capabilities for a wide variety of SSME failure modes. Preliminary algorithm evaluation, using data from three SSME failures representing three different failure types, demonstrated indications of imminent catastrophic failure well in advance of redline cutoff in all three cases.
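The contrast drawn above, early statistical warning versus redline cutoff, can be sketched as a toy monitor. This is purely illustrative and not the SSME algorithm; the trace values, the hard limit, the baseline length, and the 3-sigma rule are all assumptions.

```python
def redline_alarm(signal, limit):
    """Index of the first sample at or above the hard redline limit, else None."""
    for t, x in enumerate(signal):
        if x >= limit:
            return t
    return None

def early_warning(signal, baseline_len=8, k=3.0, floor=0.05):
    """Index of the first sample departing from the baseline mean by more than
    k standard deviations (a small floor guards against a zero-variance baseline)."""
    base = signal[:baseline_len]
    mu = sum(base) / len(base)
    var = sum((x - mu) ** 2 for x in base) / len(base)
    sd = max(var ** 0.5, floor)
    for t, x in enumerate(signal[baseline_len:], start=baseline_len):
        if abs(x - mu) > k * sd:
            return t
    return None

# Hypothetical turbopump temperature trace: steady, then a slow drift
# toward a redline of 115 units.
trace = [100.0] * 8 + [100.3, 101.0, 102.5, 105.0, 110.0, 120.0]
```

On this trace the statistical monitor trips at the first sample of the drift, several samples before the redline is crossed, which mirrors the early-warning behavior the abstract reports.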
NASA Technical Reports Server (NTRS)
Morscher, Gregory N.
1999-01-01
Ceramic matrix composites are being developed for elevated-temperature engine applications. A leading material system in this class of materials is silicon carbide (SiC) fiber-reinforced SiC matrix composites. Unfortunately, the nonoxide fibers, matrix, and interphase (boron nitride in this system) can react with oxygen or water vapor in the atmosphere, leading to strength degradation of the composite at elevated temperatures. For this study, constant-load stress-rupture tests were performed in air at temperatures ranging from 815 to 960 C until failure. From these data, predictions can be made for the useful life of such composites under similar stressed-oxidation conditions. During these experiments, the sounds of failure events (matrix cracking and fiber breaking) were monitored with a modal acoustic emission (AE) analyzer through transducers that were attached at the ends of the tensile bars. Such failure events, which are caused by applied stress and oxidation reactions, cause these composites to fail prematurely. Because of the nature of acoustic waveform propagation in thin tensile bars, the location of individual source events and the eventual failure event could be detected accurately.
Changing scenes: memory for naturalistic events following change blindness.
Mäntylä, Timo; Sundström, Anna
2004-11-01
Research on scene perception indicates that viewers often fail to detect large changes to scene regions when these changes occur during a visual disruption such as a saccade or a movie cut. In two experiments, we examined whether this relative inability to detect changes would produce systematic biases in event memory. In Experiment 1, participants decided whether two successively presented images were the same or different, followed by a memory task, in which they recalled the content of the viewed scene. In Experiment 2, participants viewed a short video, in which an actor carried out a series of daily activities, and central scenes' attributes were changed during a movie cut. A high degree of change blindness was observed in both experiments, and these effects were related to scene complexity (Experiment 1) and level of retrieval support (Experiment 2). Most important, participants reported the changed, rather than the initial, event attributes following a failure in change detection. These findings suggest that attentional limitations during encoding contribute to biases in episodic memory.
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1985-01-01
The application of the failure detection filter to the detection and identification of aircraft control element failures was evaluated in a linear digital simulation of the longitudinal dynamics of a B-737 Aircraft. Simulation results show that with a simple correlator and threshold detector used to process the filter residuals, the failure detection performance is seriously degraded by the effects of turbulence.
Redundancy relations and robust failure detection
NASA Technical Reports Server (NTRS)
Chow, E. Y.; Lou, X. C.; Verghese, G. C.; Willsky, A. S.
1984-01-01
All failure detection methods are based on the use of redundancy, that is, on (possibly dynamic) relations among the measured variables. Consequently, the robustness of the failure detection process depends to a great degree on the reliability of the redundancy relations, given the inevitable presence of model uncertainties. The problem of determining redundancy relations that are optimally robust, in a sense that includes the major issues of importance in practical failure detection, is addressed. A significant amount of intuition concerning the geometry of robust failure detection is provided.
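A minimal example of such a dynamic redundancy relation (the first-order plant, decay constant, bias size, and threshold below are invented for illustration): for the model x[k+1] = a·x[k] with measurement y = x, the parity quantity y[k+1] − a·y[k] is zero for a healthy sensor, so a nonzero residual signals a failure.

```python
def parity_residuals(y, a):
    """Residuals of the redundancy relation y[k+1] - a*y[k] = 0."""
    return [y[k + 1] - a * y[k] for k in range(len(y) - 1)]

def alarms(residuals, threshold):
    """Indices whose residual magnitude exceeds the detection threshold."""
    return [k for k, r in enumerate(residuals) if abs(r) > threshold]

a = 0.5
x = [1.0]
for _ in range(19):
    x.append(a * x[-1])          # true state: exponential decay
# The sensor develops a +0.5 bias from sample 10 onward.
y = x[:10] + [v + 0.5 for v in x[10:]]
res = parity_residuals(y, a)
```

The residual at index k spans samples k and k+1, so the bias appearing at sample 10 fires the residual at index 9 with magnitude 0.5; afterwards the residual settles at 0.5·(1 − a) = 0.25, illustrating why threshold choice interacts with the failure mode.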
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Wagner, E.; Motyka, P.; Hall, S. R.
1985-01-01
The performance of the detection filter in detecting and isolating aircraft control surface and actuator failures is evaluated. The basic detection filter theory assumption of no direct input-output coupling is violated in this application due to the use of acceleration measurements for detecting and isolating failures. With this coupling, residuals produced by control surface failures may only be constrained to a known plane rather than to a single direction. A detection filter design with such planar failure signatures is presented, with the design issues briefly addressed. In addition, a modification to constrain the residual to a single known direction even with direct input-output coupling is also presented. Both the detection filter and the modification are tested using a nonlinear aircraft simulation. While no thresholds were selected, both filters demonstrated an ability to detect control surface and actuator failures. Failure isolation may be a problem if there are several control surfaces which produce similar effects on the aircraft. In addition, the detection filter was sensitive to wind turbulence and modeling errors.
NASA Technical Reports Server (NTRS)
Hall, Steven R.; Walker, Bruce K.
1990-01-01
A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.
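The GLR idea the abstract compares against can be sketched for the simplest failure mode, a constant bias of unknown onset time in white Gaussian residuals. The residual sequence, noise variance, and bias below are invented for illustration; the statistic maximizes the log-likelihood ratio over candidate onset times.

```python
def glr_bias(residuals, sigma):
    """GLR statistic for a constant-bias failure with unknown onset time.
    For residuals ~ N(0, sigma^2) under no failure, the log-likelihood ratio
    for a bias starting at theta is S(theta)^2 / (2*sigma^2*m), where S(theta)
    is the sum of the last m = n - theta residuals.  Returns (statistic, onset)."""
    n = len(residuals)
    best_stat, best_theta = 0.0, None
    for theta in range(n):
        m = n - theta
        s = sum(residuals[theta:])
        stat = s * s / (2.0 * sigma * sigma * m)
        if stat > best_stat:
            best_stat, best_theta = stat, theta
    return best_stat, best_theta

res = [0.0] * 10 + [1.0] * 10      # a unit bias appears at sample 10
stat, theta = glr_bias(res, sigma=1.0)
```

A failure is declared when the statistic exceeds a threshold chosen for a desired false-alarm rate; here the maximizing onset recovers the true change point.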
Detection of faults and software reliability analysis
NASA Technical Reports Server (NTRS)
Knight, J. C.
1986-01-01
Multiversion or N-version programming was proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. Specific topics addressed are: failure probabilities in N-version systems, consistent comparison in N-version systems, descriptions of the faults found in the Knight and Leveson experiment, analytic models of comparison testing, characteristics of the input regions that trigger faults, fault tolerance through data diversity, and the relationship between failures caused by automatically seeded faults.
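The core error-detection mechanism of N-version programming, majority voting over independently developed versions, can be sketched as follows. This is a toy voter with invented outputs; real systems must also handle the consistent-comparison problem the abstract mentions, since independently computed floating-point results may legitimately differ in the last bits.

```python
from collections import Counter

def majority_vote(outputs):
    """Return (value, agreed): the most common output across the N versions and
    whether it holds a strict majority.  A split vote with no majority is
    itself a detected error condition."""
    value, count = Counter(outputs).most_common(1)[0]
    return value, count > len(outputs) // 2

# Three hypothetical versions compute the same tracking estimate; version 3 is faulty.
ok = majority_vote([41.8, 41.8, 97.3])
split = majority_vote([1.0, 2.0, 3.0])
```

Exact equality is used here only because the outputs are invented; a practical voter would compare within a tolerance, which is precisely where the consistent-comparison problem arises.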
Automatically Detecting Failures in Natural Language Processing Tools for Online Community Text.
Park, Albert; Hartzler, Andrea L; Huh, Jina; McDonald, David W; Pratt, Wanda
2015-08-31
The prevalence and value of patient-generated health text are increasing, but processing such text remains problematic. Although existing biomedical natural language processing (NLP) tools are appealing, most were developed to process clinician- or researcher-generated text, such as clinical notes or journal articles. In addition to being constructed for different types of text, other challenges of using existing NLP include constantly changing technologies, source vocabularies, and characteristics of text. These continuously evolving challenges warrant the need for applying low-cost systematic assessment. However, the primarily accepted evaluation method in NLP, manual annotation, requires tremendous effort and time. The primary objective of this study is to explore an alternative approach: using low-cost, automated methods to detect failures (eg, incorrect boundaries, missed terms, mismapped concepts) when processing patient-generated text with existing biomedical NLP tools. We first characterize common failures that NLP tools can make in processing online community text. We then demonstrate the feasibility of our automated approach in detecting these common failures using one of the most popular biomedical NLP tools, MetaMap. Using 9657 posts from an online cancer community, we explored our automated failure detection approach in two steps: (1) to characterize the failure types, we first manually reviewed MetaMap's commonly occurring failures, grouped the inaccurate mappings into failure types, and then identified causes of the failures through iterative rounds of manual review using open coding, and (2) to automatically detect these failure types, we then explored combinations of existing NLP techniques and dictionary-based matching for each failure cause. Finally, we manually evaluated the automatically detected failures.
From our manual review, we characterized three types of failure: (1) boundary failures, (2) missed term failures, and (3) word ambiguity failures. Within these three failure types, we discovered 12 causes of inaccurate mappings of concepts. Our automated methods flagged almost half of MetaMap's 383,572 mappings as problematic. Word sense ambiguity failure was the most widely occurring, comprising 82.22% of failures. Boundary failure was the second most frequent, amounting to 15.90% of failures, while missed term failures were the least common, making up 1.88% of failures. The automated failure detection achieved precision, recall, accuracy, and F1 score of 83.00%, 92.57%, 88.17%, and 87.52%, respectively. We illustrate the challenges of processing patient-generated online health community text and characterize failures of NLP tools on this patient-generated health text, demonstrating the feasibility of our low-cost approach to automatically detect those failures. Our approach shows the potential for scalable and effective solutions to automatically assess the constantly evolving NLP tools and source vocabularies to process patient-generated text.
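The reported evaluation figures are internally consistent; as a quick sanity check, the F1 score follows directly from the reported precision and recall (the two input values below are taken from the abstract):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(0.83, 0.9257)   # reported precision and recall
```

Rounded to four decimal places this reproduces the reported 87.52%.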
Lanying Lin; Sheng He; Feng Fu; Xiping Wang
2015-01-01
Wood failure percentage (WFP) is an important index for evaluating the bond strength of plywood. Currently, the method used for detecting WFP is visual inspection, which lacks efficiency. In order to improve it, image processing methods are applied to wood failure detection. The present study used thresholding and K-means clustering algorithms in wood failure detection...
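A minimal sketch of the clustering step (the grayscale values are hypothetical, and real plywood shear-surface images need preprocessing the abstract does not detail): 1-D K-means with k = 2 splits pixel intensities into a dark "wood fibre" cluster and a bright "glue" cluster, and WFP is the dark cluster's share of the pixels.

```python
def kmeans_1d_two(values, iters=10):
    """Two-cluster 1-D K-means seeded at the extreme intensities.
    Assumes both clusters stay nonempty, which holds for bimodal data."""
    c_dark, c_bright = min(values), max(values)
    for _ in range(iters):
        dark = [v for v in values if abs(v - c_dark) <= abs(v - c_bright)]
        bright = [v for v in values if abs(v - c_dark) > abs(v - c_bright)]
        c_dark, c_bright = sum(dark) / len(dark), sum(bright) / len(bright)
    return dark, bright

# Toy shear-surface image: 40% dark wood-fibre pixels, 60% bright glue pixels.
pixels = [28, 30, 32, 30] * 10 + [196, 200, 204, 200] * 15
dark, bright = kmeans_1d_two(pixels)
wfp = len(dark) / len(pixels)
```

Thresholding (e.g., Otsu's method) would give a similar split on data this well separated; K-means generalizes more gracefully when the two intensity modes shift between images.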
A geometric approach to failure detection and identification in linear systems
NASA Technical Reports Server (NTRS)
Massoumnia, M. A.
1986-01-01
Using concepts of (C,A)-invariant and unobservability (complementary observability) subspaces, a geometric formulation of the failure detection and identification filter problem is stated. Using these geometric concepts, it is shown that it is possible to design a causal linear time-invariant processor that can be used to detect and uniquely identify a component failure in a linear time-invariant system, under either of two assumptions: (1) the components can fail simultaneously, or (2) the components can fail only one at a time. In addition, a geometric formulation of Beard's failure detection filter problem is stated. This new formulation completely clarifies the notions of output separability and mutual detectability introduced by Beard and also exploits the dual relationship between a restricted version of the failure detection and identification problem and the control decoupling problem. Moreover, the frequency domain interpretation of the results is used to relate the concepts of failure sensitive observers with the generalized parity relations introduced by Chow. This interpretation unifies the various failure detection and identification concepts and design procedures.
Flight experience with a fail-operational digital fly-by-wire control system
NASA Technical Reports Server (NTRS)
Brown, S. R.; Szalai, K. J.
1977-01-01
The NASA Dryden Flight Research Center is flight testing a triply redundant digital fly-by-wire (DFBW) control system installed in an F-8 aircraft. The full-time, full-authority system performs three-axis flight control computations, including stability and command augmentation, autopilot functions, failure detection and isolation, and self-test functions. Advanced control law experiments include an active flap mode for ride smoothing and maneuver drag reduction. This paper discusses research being conducted on computer synchronization, fault detection, fault isolation, and recovery from transient faults. The F-8 DFBW system has demonstrated immunity from nuisance fault declarations while quickly identifying truly faulty components.
Nishii, Nobuhiro; Miyoshi, Akihito; Kubo, Motoki; Miyamoto, Masakazu; Morimoto, Yoshimasa; Kawada, Satoshi; Nakagawa, Koji; Watanabe, Atsuyuki; Nakamura, Kazufumi; Morita, Hiroshi; Ito, Hiroshi
2018-03-01
Remote monitoring (RM) has been advocated as the new standard of care for patients with cardiovascular implantable electronic devices (CIEDs). RM has allowed the early detection of adverse clinical events, such as arrhythmia, lead failure, and battery depletion. However, lead failure is often identified only by arrhythmic events, not by impedance abnormalities. The aim of this study was to compare the usefulness of arrhythmic events with that of conventional impedance abnormalities for identifying lead failure in CIED patients followed by RM. CIED patients in 12 hospitals have been followed by the RM center at Okayama University Hospital. All transmitted data have been analyzed and summarized. From April 2009 to March 2016, 1,873 patients were followed by the RM center. During the mean follow-up period of 775 days, 42 lead failure events (atrial lead 22, right ventricular pacemaker lead 5, implantable cardioverter defibrillator [ICD] lead 15) were detected. The proportion of lead failures detected only by arrhythmic events, which were not detected by conventional impedance abnormalities, was significantly higher than that detected by impedance abnormalities (arrhythmic events 76.2%, 95% CI: 60.5-87.9%; impedance abnormalities 23.8%, 95% CI: 12.1-39.5%). Twenty-seven events (64.7%) were detected without any alert. Of 15 patients with ICD lead failure, none experienced inappropriate therapy. RM can detect lead failure early, before clinical adverse events. However, CIEDs often diagnose lead failure as just arrhythmic events without any warning. Thus, to detect lead failure earlier, careful human analysis of arrhythmic events is useful. © 2017 Wiley Periodicals, Inc.
Detection of system failures in multi-axes tasks. [pilot monitored instrument approach
NASA Technical Reports Server (NTRS)
Ephrath, A. R.
1975-01-01
The effects of the pilot's participation mode in the control task on his workload level and failure detection performance were examined considering a low visibility landing approach. It is found that the participation mode had a strong effect on the pilot's workload, the induced workload being lowest when the pilot acted as a monitoring element during a coupled approach and highest when the pilot was an active element in the control loop. The effects of workload and participation mode on failure detection were separated. The participation mode was shown to have a dominant effect on the failure detection performance, with a failure in a monitored (coupled) axis being detected significantly faster than a comparable failure in a manually controlled axis.
On-Board Particulate Filter Failure Prevention and Failure Diagnostics Using Radio Frequency Sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sappok, Alex; Ragaller, Paul; Herman, Andrew
The increasing use of diesel and gasoline particulate filters requires advanced on-board diagnostics (OBD) to prevent and detect filter failures and malfunctions. Early detection of upstream (engine-out) malfunctions is paramount to preventing irreversible damage to downstream aftertreatment system components. Such early detection can mitigate the failure of the particulate filter resulting in the escape of emissions exceeding permissible limits and extend the component life. However, despite best efforts at early detection and filter failure prevention, the OBD system must also be able to detect filter failures when they occur. In this study, radio frequency (RF) sensors were used to directly monitor the particulate filter state of health for both gasoline particulate filter (GPF) and diesel particulate filter (DPF) applications. The testing included controlled engine dynamometer evaluations, which characterized soot slip from various filter failure modes, as well as on-road fleet vehicle tests. The results show a high sensitivity to detect conditions resulting in soot leakage from the particulate filter, as well as potential for direct detection of structural failures including internal cracks and melted regions within the filter media itself. Furthermore, the measurements demonstrate, for the first time, the capability to employ a direct and continuous monitor of particulate filter diagnostics to both prevent and detect potential failure conditions in the field.
NASA Astrophysics Data System (ADS)
Mahmood, Faleh H.; Kadhim, Hussein T.; Resen, Ali K.; Shaban, Auday H.
2018-05-01
Failures such as air-gap eccentricity, rubbing, and scraping between the stator and the rotor of the generator arise unavoidably and may have severe consequences for a wind turbine. More attention should therefore be paid to detecting such failures and identifying their cause, bearing failure, in order to improve operational reliability. The current paper uses a power spectral density analysis method to detect inner-race and outer-race bearing failures in a micro wind turbine by analyzing the stator current signal of the generator. The results show that the method is well suited and effective for bearing failure detection.
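The detection principle, fault-related spectral lines in the stator current, can be sketched with a toy periodogram. The pure-Python DFT below and the 50 Hz supply component plus a 90 Hz fault signature are invented numbers for illustration; real bearing defect frequencies depend on bearing geometry and shaft speed, and practical analysis would use an FFT-based PSD estimator such as Welch's method.

```python
import math

def periodogram(x, fs):
    """One-sided power spectral estimate via a direct DFT: list of (freq, power)."""
    n = len(x)
    spectrum = []
    for k in range(n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spectrum.append((k * fs / n, (re * re + im * im) / n))
    return spectrum

fs, n = 256, 256
# Simulated stator current: 50 Hz supply plus a weak 90 Hz bearing-fault component.
current = [math.sin(2 * math.pi * 50 * t / fs)
           + 0.3 * math.sin(2 * math.pi * 90 * t / fs)
           for t in range(n)]
peaks = [f for f, p in periodogram(current, fs) if p > 1.0]
```

Both frequencies fall on exact DFT bins here, so the two spectral lines stand far above the (numerically zero) background and a simple power threshold recovers them.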
Turbofan engine demonstration of sensor failure detection
NASA Technical Reports Server (NTRS)
Merrill, Walter C.; Delaat, John C.; Abdelwahab, Mahmood
1991-01-01
In the paper, the results of a full-scale engine demonstration of a sensor failure detection algorithm are presented. The algorithm detects, isolates, and accommodates sensor failures using analytical redundancy. The experimental hardware, including the F100 engine, is described. Demonstration results were obtained over a large portion of a typical flight envelope for the F100 engine. They include both subsonic and supersonic conditions at both medium and full (nonafterburning) power. Estimated accuracy, minimum detectable levels of sensor failures, and failure accommodation performance for an F100 turbofan engine control system are discussed.
Study of an automatic trajectory following control system
NASA Technical Reports Server (NTRS)
Vanlandingham, H. F.; Moose, R. L.; Zwicke, P. E.; Lucas, W. H.; Brinkley, J. D.
1983-01-01
It is shown that the estimator part of the Modified Partitioned Adaptive Controller (MPAC), developed for the nonlinear aircraft dynamics of a small jet transport, can adapt to sensor failures. In addition, an investigation is made into the potential usefulness of the configuration detection technique used in the MPAC, and a failure detection filter is developed that determines how a noisy plant output is associated with a line or plane characteristic of a failure. It is shown by computer simulation that the estimator part and the configuration detection part of the MPAC can readily adapt to actuator and sensor failures, and that the failure detection filter technique cannot detect actuator or sensor failures accurately for this type of system because of plant modeling errors. In addition, it is shown that the decision technique developed for the failure detection filter can accurately determine that the plant output is related to the characteristic line or plane in the presence of sensor noise.
Evaluation of bridge cables corrosion using acoustic emission technique
NASA Astrophysics Data System (ADS)
Li, Dongsheng; Ou, Jinping
2010-04-01
Owing to the nature of the stress, corrosion of bridge cables may result in catastrophic failure of the structure. However, electrochemical techniques are not fully effective for online detection and monitoring of the corrosion phenomenon. A non-destructive testing method based on the acoustic emission (AE) technique for monitoring bridge cable corrosion was therefore explored. Steel strands were placed in a 5% NaCl solution at room temperature, and AE characteristic parameters were recorded throughout the corrosion experiment. Based on the plot of cumulated acoustic activity, the cable corrosion process comprised three stages, each with clearly distinct AE signal characteristics, and the AE characteristic parameters increased as corrosion developed. Finally, corrosion experiments under different stress states and corrosion environments were performed. The results show that the stress magnitude affects only the time to cable failure, while the AE characteristic parameter values changed little. It was verified that the AE technique can be used to detect early bridge cable corrosion, to investigate corrosion trends, and to monitor and evaluate corrosion damage.
Babiker, Amir; Amer, Yasser S; Osman, Mohamed E; Al-Eyadhy, Ayman; Fatani, Solafa; Mohamed, Sarar; Alnemri, Abdulrahman; Titi, Maher A; Shaikh, Farheen; Alswat, Khalid A; Wahabi, Hayfaa A; Al-Ansary, Lubna A
2018-02-01
Implementation of clinical practice guidelines (CPGs) has been shown to reduce variation in practice and improve health care quality and patient safety. There is limited experience of CPG implementation (CPGI) in the Middle East. The CPG program in our institution was launched in 2009, and the Quality Management department conducted a Failure Mode and Effect Analysis (FMEA) for further improvement of CPGI. This is a prospective study of a qualitative/quantitative design. Our FMEA included (1) process review: recording the steps and activities of CPGI; (2) hazard analysis: recording activity-related failure modes and their effects, identifying the actions required, assigning severity, occurrence, and detection scores for each failure mode, and calculating the risk priority number (RPN) using an online interactive FMEA tool; (3) planning: RPNs were prioritized, and recommendations and plans for new interventions were identified; and (4) monitoring: after reduction or elimination of the failure modes, the calculated RPNs will be compared with those of a subsequent analysis in the post-implementation phase. The data were scrutinized from the feedback of quality team members using a FMEA framework to enhance the implementation of 29 adapted CPGs. The identified potential common failure modes with the highest RPNs (≥ 80) included awareness/training activities, accessibility of CPGs, fewer advocates among clinical champions, and CPG auditing. Actions included (1) organizing regular awareness activities, (2) making printed and electronic copies of CPGs accessible, (3) encouraging senior practitioners to get involved in CPGI, and (4) enhancing CPG auditing as part of the quality sustainability plan. In our experience, FMEA could be a useful tool to enhance CPGI. It helped us to identify potential barriers and prepare relevant solutions. © 2017 John Wiley & Sons, Ltd.
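The RPN arithmetic in the hazard-analysis step is simple to reproduce: RPN = severity × occurrence × detection, with each factor rated 1-10, and modes at or above the cut-off (80 in this study) prioritized. The ratings below are invented for illustration and are not the study's actual scores.

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number; each factor is rated on a 1-10 scale."""
    return severity * occurrence * detection

# Hypothetical ratings for the four high-risk failure modes named in the study.
modes = {
    "awareness/training activities": (7, 6, 4),
    "accessibility of CPGs": (6, 5, 3),
    "fewer clinical champions": (5, 4, 5),
    "CPG auditing": (6, 4, 4),
}
scores = {name: rpn(*ratings) for name, ratings in modes.items()}
high_risk = sorted((name for name, s in scores.items() if s >= 80),
                   key=scores.get, reverse=True)
```

Sorting by RPN gives the worklist order for corrective actions; re-scoring after the interventions, as the study plans, shows whether each mode's RPN actually fell.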
Lambird, Kathleen Hoffman; Mann, Traci
2006-09-01
High self-esteem (HSE) is increasingly recognized as heterogeneous. By measuring subtypes of HSE, the present research reevaluates the finding that HSE individuals show poor self-regulation following ego threat (Baumeister, Heatherton, & Tice, 1993). In Experiment 1, participants with HSE showed poor self-regulation after ego threat only if they also were defensive (high in self-presentation bias). In Experiment 2, two measures--self-presentation bias and implicit self-esteem--were used to subtype HSE individuals as defensive. Both operationalizations of defensive HSE predicted poor self-regulation after ego threat. The results indicate that (a) only defensive HSE individuals are prone to self-regulation failure following ego threat and (b) measures of self-presentation bias and implicit self-esteem can both be used to detect defensiveness.
EEMD-based multiscale ICA method for slewing bearing fault detection and diagnosis
NASA Astrophysics Data System (ADS)
Žvokelj, Matej; Zupan, Samo; Prebil, Ivan
2016-05-01
A novel multivariate and multiscale statistical process monitoring method is proposed with the aim of detecting incipient failures in large slewing bearings, where subjective influence plays a minor role. The proposed method integrates the strengths of the Independent Component Analysis (ICA) multivariate monitoring approach with the benefits of Ensemble Empirical Mode Decomposition (EEMD), which adaptively decomposes signals into different time scales and can thus cope with multiscale system dynamics. The method, which was named EEMD-based multiscale ICA (EEMD-MSICA), not only enables bearing fault detection but also offers a mechanism of multivariate signal denoising and, in combination with the Envelope Analysis (EA), a diagnostic tool. The multiscale nature of the proposed approach makes the method convenient to cope with data which emanate from bearings in complex real-world rotating machinery and frequently represent the cumulative effect of many underlying phenomena occupying different regions in the time-frequency plane. The efficiency of the proposed method was tested on simulated as well as real vibration and Acoustic Emission (AE) signals obtained through conducting an accelerated run-to-failure lifetime experiment on a purpose-built laboratory slewing bearing test stand. The ability to detect and locate the early-stage rolling-sliding contact fatigue failure of the bearing indicates that AE and vibration signals carry sufficient information on the bearing condition and that the developed EEMD-MSICA method is able to effectively extract it, thereby representing a reliable bearing fault detection and diagnosis strategy.
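The Envelope Analysis (EA) diagnostic step mentioned above can be sketched on a synthetic bearing-fault signal. This is only the EA stage, not the full EEMD-MSICA pipeline; the sampling rate, resonance frequency, and defect frequency below are assumed values for illustration.

```python
import numpy as np
from scipy.signal import hilbert

# Envelope Analysis sketch: a bearing defect excites a structural resonance
# at each impact, so the *envelope* spectrum peaks at the impact repetition
# frequency even though the raw spectrum is dominated by the resonance.
fs = 10_000                     # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
fault_freq = 37.0               # hypothetical defect repetition frequency (Hz)

# Impulse train at fault_freq driving a 2 kHz "resonance" burst, plus noise.
impulses = (np.sin(2 * np.pi * fault_freq * t) > 0.999).astype(float)
carrier = np.sin(2 * np.pi * 2000 * t)
rng = np.random.default_rng(0)
signal = (np.convolve(impulses, carrier[:200], mode="same")
          + 0.1 * rng.standard_normal(t.size))

# Envelope via the Hilbert transform, then the envelope spectrum.
envelope = np.abs(hilbert(signal))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
peak_freq = freqs[np.argmax(spectrum[freqs < 500])]   # search below 500 Hz
```

The dominant low-frequency peak of the envelope spectrum recovers the defect repetition frequency, which is the diagnostic cue EA contributes to the EEMD-MSICA scheme.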
Accelerated Aging Experiments for Prognostics of Damage Growth in Composite Materials
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Goebel, Kai Frank; Larrosa, Cecilia C.; Janapati, Vishnuvardhan; Roy, Surajit; Chang, Fu-Kuo
2011-01-01
Composite structures are gaining importance for use in the aerospace industry. Compared to metallic structures, their behavior is less well understood, and this lack of understanding may constrain their use. One possible way to deal with some of the risks associated with potential failure is to perform in-situ monitoring to detect precursors of failure. Prognostic algorithms can then be used to predict impending failures, but they require large amounts of training data to build and tune damage models for making useful predictions. One of the key aspects is to obtain confirmatory feedback from data as damage progresses. Such data are rarely available from actual systems; the next best resource for collecting them is an accelerated aging platform. To that end, this paper describes a fatigue cycling experiment designed to stress carbon-carbon composite coupons with various layups. Piezoelectric disc sensors were used to periodically interrogate the system. Analysis showed distinct differences in the signatures of growing failures between data collected at different conditions. Periodic X-radiographs were taken to assess the damage ground truth. Results after signal processing showed clear trends of damage growth that correlated with the damage assessed from the X-ray images.
Masini, Laura; Donis, Laura; Loi, Gianfranco; Mones, Eleonora; Molina, Elisa; Bolchini, Cesare; Krengli, Marco
2014-01-01
The aim of this study was to analyze the application of the failure modes and effects analysis (FMEA) to intracranial stereotactic radiation surgery (SRS) by linear accelerator in order to identify the potential failure modes in the process tree and adopt appropriate safety measures to prevent adverse events (AEs) and near-misses, thus improving the process quality. A working group was set up to perform FMEA for intracranial SRS in the framework of a quality assurance program. FMEA was performed in 4 consecutive tasks: (1) creation of a visual map of the process; (2) identification of possible failure modes; (3) assignment of a risk priority number (RPN) to each failure mode based on tabulated scores of severity, frequency of occurrence and detectability; and (4) identification of preventive measures to minimize the risk of occurrence. The whole SRS procedure was subdivided into 73 single steps; 116 total possible failure modes were identified and a score of severity, occurrence, and detectability was assigned to each. Based on these scores, RPN was calculated for each failure mode, thus obtaining values from 1 to 180. In our analysis, 112/116 (96.6%) RPN values were <60, 2 (1.7%) between 60 and 125 (63, 70), and 2 (1.7%) >125 (135, 180). The 2 highest RPN scores were assigned to the risk of using the wrong collimator size and incorrect coordinates on the laser target localizer frame. Failure modes and effects analysis is a simple and practical proactive tool for systematic analysis of risks in radiation therapy. In our experience of SRS, FMEA led to the adoption of major changes in various steps of the SRS procedure.
NASA Technical Reports Server (NTRS)
Motyka, P.
1983-01-01
A methodology is developed and applied for quantitatively analyzing the reliability of a dual, fail-operational redundant strapdown inertial measurement unit (RSDIMU). A Markov evaluation model is defined in terms of the operational states of the RSDIMU to predict system reliability. A 27-state model is defined based upon a candidate redundancy management system that can detect and isolate a spectrum of failure magnitudes. The results of parametric studies are presented which show the effect on reliability of the gyro failure rate; the gyro and accelerometer failure rates together; false alarms; the probabilities of failure detection, failure isolation, and damage effects; and mission time. A technique is developed and evaluated for generating dynamic thresholds for detecting and isolating failures of the dual, separated IMU. Special emphasis is given to the detection of multiple, nonconcurrent failures. Digital simulation time histories are presented which show the thresholds obtained and their effectiveness in detecting and isolating sensor failures.
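The Markov evaluation idea can be shown with a deliberately tiny chain. This is a three-state toy, not the paper's 27-state model; the failure rate, coverage probability, and mission time below are hypothetical.

```python
import numpy as np

# Toy Markov reliability model: state 0 = fully operational, state 1 =
# fail-operational (one failure correctly detected and isolated), state 2 =
# system loss (absorbing). Rates are hypothetical per-hour values.
lam = 1e-4      # sensor failure rate per hour (assumed)
cov = 0.99      # probability a failure is detected AND isolated (coverage)
dt = 1.0        # time step in hours

# One-step transition matrix; an undetected/unisolated failure (1 - cov)
# sends the system straight to the loss state.
P = np.array([
    [1 - lam * dt, lam * dt * cov, lam * dt * (1 - cov)],
    [0.0,          1 - lam * dt,   lam * dt],
    [0.0,          0.0,            1.0],
])

state = np.array([1.0, 0.0, 0.0])   # start fully operational
for _ in range(10):                  # 10-hour mission
    state = state @ P

reliability = state[0] + state[1]    # probability the system was not lost
```

Parametric studies like those in the abstract amount to sweeping `lam`, `cov`, or the mission length and re-evaluating `reliability`.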
Li, Tongyang; Wang, Shaoping; Zio, Enrico; Shi, Jian; Hong, Wei
2018-03-15
Leakage, caused by wear between the friction pairs of components, is the most important failure mode in aircraft hydraulic systems. Accurate detection of abrasive debris can reveal the wear condition and help predict a system's lifespan. The radial magnetic field (RMF)-based debris detection method provides an online solution for monitoring the wear condition intuitively, potentially enabling more accurate diagnosis and prognosis of ongoing failures in aviation hydraulic systems. To address the severe mixing of abrasive debris in the pipe, this paper focuses on separating the superimposed debris signals of an RMF abrasive sensor using the degenerate unmixing estimation technique. By accurately separating the debris and computing its morphology and amount, the RMF-based abrasive sensor can provide the system's wear trend and estimates of the wear-particle sizes. A well-designed experiment was conducted, and the results show that the proposed method can effectively separate the mixed debris and give an accurate debris count based on RMF abrasive sensor detection.
Caballero Morales, Santiago Omar
2013-01-01
Preventive Maintenance (PM) and Statistical Process Control (SPC) are important practices for achieving high product quality, a low frequency of failures, and cost reduction in a production process. However, some aspects of their joint application have not been explored in depth. First, most SPC is performed with the X-bar control chart, which does not fully consider the variability of the production process. Second, many studies on the design of control charts consider only the economic aspect, while statistical restrictions must also be considered to achieve charts with low probabilities of false failure detection. Third, the effect of PM on processes with different failure probability distributions has not been studied. Hence, this paper covers these points, presenting the Economic Statistical Design (ESD) of joint X-bar-S control charts with a cost model that integrates PM with a general failure distribution. Experiments showed statistically significant cost reductions when PM is performed on processes with high failure rates, along with reductions in the sampling frequency of units for testing under SPC. PMID:23527082
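The joint X-bar-S chart that the design optimizes has a standard statistical form, sketched below for subgroup size n = 5 using the usual tabulated chart constants. This shows only the classical limits; the paper's economic-statistical cost optimization is not reproduced, and the data are made up.

```python
import statistics

# Classical X-bar and S control limits for subgroups of size n = 5,
# using the standard tabulated constants A3, B3, B4 for that subgroup size.
A3, B3, B4 = 1.427, 0.0, 2.089

def xbar_s_limits(subgroups):
    """Return ((LCL, UCL) for the X-bar chart, (LCL, UCL) for the S chart)."""
    means = [sum(g) / len(g) for g in subgroups]
    stds = [statistics.stdev(g) for g in subgroups]
    xbarbar = sum(means) / len(means)   # grand mean of subgroup means
    sbar = sum(stds) / len(stds)        # mean subgroup standard deviation
    return ((xbarbar - A3 * sbar, xbarbar + A3 * sbar),
            (B3 * sbar, B4 * sbar))

# Hypothetical subgroups of five measurements each.
xbar_limits, s_limits = xbar_s_limits([[1, 2, 3, 4, 5], [2, 3, 4, 5, 6]])
```

An ESD would treat the subgroup size, sampling interval, and control-limit width as decision variables in a cost model rather than fixing them as here.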
Modeling Soft Tissue Damage and Failure Using a Combined Particle/Continuum Approach.
Rausch, M K; Karniadakis, G E; Humphrey, J D
2017-02-01
Biological soft tissues experience damage and failure as a result of injury, disease, or simply age; examples include torn ligaments and arterial dissections. Given the complexity of tissue geometry and material behavior, computational models are often essential for studying both damage and failure. Yet, because of the need to account for discontinuous phenomena such as crazing, tearing, and rupturing, continuum methods are limited. Therefore, we model soft tissue damage and failure using a particle/continuum approach. Specifically, we combine continuum damage theory with Smoothed Particle Hydrodynamics (SPH). Because SPH is a meshless particle method, and particle connectivity is determined solely through a neighbor list, discontinuities can be readily modeled by modifying this list. We show, for the first time, that an anisotropic hyperelastic constitutive model commonly employed for modeling soft tissue can be conveniently implemented within a SPH framework and that SPH results show excellent agreement with analytical solutions for uniaxial and biaxial extension as well as finite element solutions for clamped uniaxial extension in 2D and 3D. We further develop a simple algorithm that automatically detects damaged particles and disconnects the spatial domain along rupture lines in 2D and rupture surfaces in 3D. We demonstrate the utility of this approach by simulating damage and failure under clamped uniaxial extension and in a peeling experiment of virtual soft tissue samples. In conclusion, SPH in combination with continuum damage theory may provide an accurate and efficient framework for modeling damage and failure in soft tissues.
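The neighbor-list mechanism described above is the essential trick: SPH connectivity lives entirely in a pair list, so rupture can be modeled by deleting pairs. The sketch below shows only that bookkeeping step on a random 2D particle cloud with a fixed rupture line at x = 0; the smoothing length and geometry are assumed, and no hyperelastic constitutive model or damage evolution is included.

```python
import numpy as np

# Minimal neighbor-list sketch for SPH-style rupture: build the pair list
# within the smoothing radius, then disconnect pairs crossing x = 0.
rng = np.random.default_rng(1)
positions = rng.uniform(-1.0, 1.0, size=(200, 2))   # 2D particle cloud
h = 0.3                                             # smoothing length (assumed)

def neighbor_list(pos, h):
    """All particle pairs (i, j), i < j, closer than the smoothing radius h."""
    pairs = []
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            if np.linalg.norm(pos[i] - pos[j]) < h:
                pairs.append((i, j))
    return pairs

def rupture(pos, pairs):
    """Disconnect pairs whose connection straddles the rupture line x = 0."""
    return [(i, j) for i, j in pairs if pos[i, 0] * pos[j, 0] > 0]

pairs = neighbor_list(positions, h)
intact = rupture(positions, pairs)
```

In the paper's approach the disconnection is driven by a damage criterion evaluated per particle rather than by a prescribed line, but the list surgery is the same.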
NASA Technical Reports Server (NTRS)
Morrell, Frederick R.; Bailey, Melvin L.
1987-01-01
A vector-based failure detection and isolation technique for a skewed array of two-degree-of-freedom inertial sensors is developed. Failure detection is based on comparison of parity equations with a threshold, and isolation is based on comparison of logic variables keyed to the pass/fail results of the parity test. A multi-level approach to failure detection is used to ensure adequate coverage for the flight control, display, and navigation avionics functions. Sensor error models are introduced to expose the susceptibility of the parity equations to sensor errors and physical separation effects. The algorithm is evaluated in a simulation of a commercial transport operating in a range of light to severe turbulence environments. A gyro bias-jump failure level of 0.2 deg/hr was detected and isolated properly in the light and moderate turbulence environments, but not detected in the extreme turbulence environment. An accelerometer bias-jump failure level of 1.5 milli-g was detected in all turbulence environments. For both types of inertial sensor, hard-over and null-type failures were detected in all environments without incident. The algorithm functioned without false alarms or false isolations over all turbulence environments for the runs tested.
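The parity-equation test at the heart of the method is a textbook construction, sketched below for a generic redundant array (four single-axis sensors measuring a three-axis rate), not the paper's specific two-degree-of-freedom equations. The geometry, rate, bias magnitude, and threshold are all assumed for illustration.

```python
import numpy as np

# Parity-space residual for a redundant sensor array. H maps the true
# 3-axis rate to 4 sensor outputs; V spans the left null space of H, so
# V @ H = 0 and the parity vector p = V @ m responds only to sensor errors.
H = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.577, 0.577, 0.577],   # skewed fourth sensor axis (hypothetical)
])
U, _, _ = np.linalg.svd(H)
V = U[:, 3:].T               # left null space of H (one row here)

rate = np.array([0.1, -0.2, 0.05])   # arbitrary true body rate
m_ok = H @ rate                      # fault-free measurements
m_bad = m_ok.copy()
m_bad[1] += 0.2                      # bias-jump failure on sensor 1

threshold = 1e-3                     # detection threshold (assumed)
detected_ok = np.linalg.norm(V @ m_ok) > threshold
detected_bad = np.linalg.norm(V @ m_bad) > threshold
```

Because the parity residual is driven by sensor errors alone, turbulence-induced sensing noise inflates it too, which is exactly why the fixed threshold in the abstract fails for small bias jumps in extreme turbulence.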
NASA Technical Reports Server (NTRS)
Shives, T. R. (Editor); Willard, W. A. (Editor)
1981-01-01
The contribution of failure detection, diagnosis and prognosis to the energy challenge is discussed. Areas of special emphasis included energy management, techniques for failure detection in energy related systems, improved prognostic techniques for energy related systems and opportunities for detection, diagnosis and prognosis in the energy field.
NASA Technical Reports Server (NTRS)
Bundick, W. Thomas
1990-01-01
A methodology for designing a failure detection and identification (FDI) system to detect and isolate control element failures in aircraft control systems is reviewed. An FDI system design for a modified B-737 aircraft resulting from this methodology is also reviewed, and the results of evaluating this system via simulation are presented. The FDI system performed well in a no-turbulence environment, but it experienced an unacceptable number of false alarms in atmospheric turbulence. An adaptive FDI system, which adjusts thresholds and other system parameters based on the estimated turbulence level, was developed and evaluated. The adaptive system performed well over all turbulence levels simulated, reliably detecting all but the smallest magnitude partially-missing-surface failures.
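The adaptive-threshold idea above can be sketched in a few lines: scale the FDI decision threshold with the estimated turbulence level so that residual noise in turbulence does not trigger false alarms. The base threshold and gain are hypothetical values, not taken from the B-737 design.

```python
# Sketch of turbulence-adaptive FDI thresholding. All numbers are
# illustrative assumptions, not parameters of the paper's system.
def adaptive_threshold(base, turbulence_rms, gain=2.0):
    """Raise the detection threshold as estimated turbulence grows."""
    return base + gain * turbulence_rms

def failure_detected(residual, turbulence_rms, base=0.5):
    return abs(residual) > adaptive_threshold(base, turbulence_rms)
```

The trade-off in the abstract falls out directly: raising the threshold in turbulence suppresses false alarms but also hides the smallest-magnitude failures, whose residuals no longer clear the inflated threshold.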
NASA Technical Reports Server (NTRS)
Wolf, J. A.
1978-01-01
The highly maneuverable aircraft technology (HIMAT) remotely piloted research vehicle (RPRV) uses cross-ship comparison monitoring of actuator ram positions to detect a failure in the aileron, canard, and elevator control-surface servosystems. Some possible sources of nuisance trips for this failure detection technique are analyzed. A FORTRAN model of the simplex servosystems and the failure detection technique was used to provide a convenient means of changing parameters and introducing system noise. The sensitivity of the technique to differences between servosystems and operating conditions was determined. The cross-ship comparison monitoring method presently appears marginal in its capability both to detect an actual failure and to withstand nuisance trips.
Real-time failure control (SAFD)
NASA Technical Reports Server (NTRS)
Panossian, Hagop V.; Kemp, Victoria R.; Eckerling, Sherry J.
1990-01-01
The Real Time Failure Control program involves the development of a failure detection algorithm, referred to as the System for Failure and Anomaly Detection (SAFD), for the Space Shuttle Main Engine (SSME). This failure detection approach is signal-based: it monitors SSME measurement signals against predetermined and computed mean values and standard deviations. Twenty-four engine measurements are included in the algorithm, and provisions are made to add more parameters if needed. Six major areas of research are presented: (1) SAFD algorithm development; (2) SAFD simulations; (3) Digital Transient Model failure simulation; (4) closed-loop simulation; (5) current SAFD limitations; and (6) planned enhancements.
Measurement of fault latency in a digital avionic miniprocessor
NASA Technical Reports Server (NTRS)
Mcgough, J. G.; Swern, F. L.
1981-01-01
The results of fault injection experiments utilizing a gate-level emulation of the central processor unit of the Bendix BDX-930 digital computer are presented. The failure detection coverage of comparison-monitoring and a typical avionics CPU self-test program was determined. The specific tasks and experiments included: (1) inject randomly selected gate-level and pin-level faults and emulate six software programs using comparison-monitoring to detect the faults; (2) based upon the derived empirical data, develop and validate a model of fault latency that will forecast a software program's detecting ability; (3) given a typical avionics self-test program, inject randomly selected faults at both the gate-level and pin-level and determine the proportion of faults detected; (4) determine why faults were undetected; (5) recommend how the emulation can be extended to multiprocessor systems such as SIFT; and (6) determine the proportion of faults detected by a uniprocessor BIT (built-in-test) irrespective of self-test.
1983-03-01
aiding the operator in detecting and locating failures. A brief conclusion reviews how these experiments fit together and speculates on problems and...and issues high-level commands to the TIS (subgoal statements, instructions on how to reach each subgoal or what to do otherwise, and changes in...parameters). The TIS, insofar as it has subgoals to reach and instructions on how to try or what to do if it is impeded, functions as an automaton. It uses
Glovebox and Experiment Safety
NASA Astrophysics Data System (ADS)
Maas, Gerard
2005-12-01
Human spaceflight hardware and operations must comply with NSTS 1700.7. This paper discusses how a glovebox can help. A short layout is given of the process according to NSTS/ISS 13830, explaining the responsibility of the payload organization, the approval authority of the PSRP, and the defined review phases (0 through III). Amongst others, the following requirement has to be met: "200.1 Design to Tolerate Failures. Failure tolerance is the basic safety requirement that shall be used to control most payload hazards. The payload must tolerate a minimum number of credible failures and/or operator errors determined by the hazard level. This criterion applies when the loss of a function or the inadvertent occurrence of a function results in a hazardous event. 200.1a Critical Hazards. Critical hazards shall be controlled such that no single failure or operator error can result in damage to STS/ISS equipment, a nondisabling personnel injury, or the use of unscheduled safing procedures that affect operations of the Orbiter/ISS or another payload. 200.1b Catastrophic Hazards. Catastrophic hazards shall be controlled such that no combination of two failures or operator errors can result in the potential for a disabling or fatal personnel injury or loss of the Orbiter/ISS, ground facilities or STS/ISS equipment." For experiments in material science, biological science, and life science that require real-time operator manipulation, the above requirement may be hard or impossible to meet, especially if the experiment contains substances that are considered hazardous when released into the habitable environment. In this case, operating the experiment in a glovebox can help achieve compliance. A glovebox provides containment of the experiment while still allowing manipulation of, and visibility into, the experiment. The containment inside the glovebox provides failure tolerance because the glovebox maintains a negative pressure inside the working volume (WV).
The achievable level of failure tolerance depends on the identified failure case and the hazardous substance being released (chemical, biological, or other). The principle of glovebox operation is explained, including the mechanical enclosure, air circulation, air filtration, and operational modes. Limitations of the glovebox are presented: the inability of an experiment fire to be detected by the ASDA, containment only with respect to specified substances, etc. There are also requirements induced by the glovebox that the experiment must comply with: compatibility with the glovebox filter system, thermal limitations, remaining safe without glovebox services, parameter monitoring when a fire hazard is credible, sufficient containment when entering the glovebox and after the experiment, etc. Experiments that are operated in a glovebox shall assess this integrated set-up and the associated operations for compliance with the safety requirements. During this assessment, the PSRP shall determine whether the provided failure tolerance is sufficient. The gloveboxes that Bradford Engineering (co-)built for human space flight are: USML-1 and 2, MGBX (STS and MIR), MSG, PGBX, LSG-WVA, BGB and PGB. Some of their evolutions are pointed out (experiment services added without compromising safety levels), and the major differences among the gloveboxes are presented. For the gloveboxes in operation at this time (MSG) or in the near future (BGB, LSG-WVA and PGB), the specific applications are presented.
White, Richard A.; Lu, Chunling; Rodriguez, Carly A.; Bayona, Jaime; Becerra, Mercedes C.; Burgos, Marcos; Centis, Rosella; Cohen, Theodore; Cox, Helen; D'Ambrosio, Lia; Danilovitz, Manfred; Falzon, Dennis; Gelmanova, Irina Y.; Gler, Maria T.; Grinsdale, Jennifer A.; Holtz, Timothy H.; Keshavjee, Salmaan; Leimane, Vaira; Menzies, Dick; Milstein, Meredith B.; Mishustin, Sergey P.; Pagano, Marcello; Quelapio, Maria I.; Shean, Karen; Shin, Sonya S.; Tolman, Arielle W.; van der Walt, Martha L.; Van Deun, Armand; Viiklepp, Piret
2016-01-01
Debate persists about monitoring method (culture or smear) and interval (monthly or less frequently) during treatment for multidrug-resistant tuberculosis (MDR-TB). We analysed existing data and estimated the effect of monitoring strategies on timing of failure detection. We identified studies reporting microbiological response to MDR-TB treatment and solicited individual patient data from authors. Frailty survival models were used to estimate pooled relative risk of failure detection in the last 12 months of treatment; hazard of failure using monthly culture was the reference. Data were obtained for 5410 patients across 12 observational studies. During the last 12 months of treatment, failure detection occurred in a median of 3 months by monthly culture; failure detection was delayed by 2, 7, and 9 months relying on bimonthly culture, monthly smear and bimonthly smear, respectively. Risk (95% CI) of failure detection delay resulting from monthly smear relative to culture is 0.38 (0.34–0.42) for all patients and 0.33 (0.25–0.42) for HIV-co-infected patients. Failure detection is delayed by reducing the sensitivity and frequency of the monitoring method. Monthly monitoring of sputum cultures from patients receiving MDR-TB treatment is recommended. Expanded laboratory capacity is needed for high-quality culture, and for smear microscopy and rapid molecular tests. PMID:27587552
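The core finding above, that less frequent and less sensitive monitoring delays failure detection, can be illustrated with a toy simulation. This is not the study's frailty survival model; the failure times, test sensitivities, and intervals below are invented solely to show the mechanism.

```python
import random

# Toy simulation of failure-detection delay under different monitoring
# strategies. All parameters are hypothetical illustrations.
random.seed(0)

def detection_month(failure_month, interval, sensitivity):
    """First scheduled test at or after failure onset that detects it."""
    month = interval
    while True:
        if month >= failure_month and random.random() < sensitivity:
            return month
        month += interval

failures = [random.uniform(0, 12) for _ in range(2000)]

def mean_delay(interval, sensitivity):
    return sum(detection_month(f, interval, sensitivity) - f
               for f in failures) / len(failures)

monthly_culture = mean_delay(1, 0.95)   # frequent and sensitive (assumed)
bimonthly_smear = mean_delay(2, 0.50)   # infrequent, less sensitive (assumed)
```

Even this crude model reproduces the qualitative ordering in the abstract: delay grows with both the monitoring interval and the miss rate of the test.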
Intelligent on-line fault tolerant control for unanticipated catastrophic failures.
Yen, Gary G; Ho, Liang-Wei
2004-10-01
As dynamic systems become increasingly complex, experience rapidly changing environments, and encounter a greater variety of unexpected component failures, solving the control problems of such systems is a grand challenge for control engineers. Traditional control design techniques are not adequate to cope with these systems, which may suffer from unanticipated dynamic failures. In this work, we investigate the on-line fault tolerant control problem and propose an intelligent on-line control strategy to handle the desired-trajectory tracking problem for systems suffering from various unanticipated catastrophic faults. Through theoretical analysis, the sufficient condition for system stability is derived and two different on-line control laws are developed. The proposed intelligent control strategy continuously monitors system performance and identifies the system's current state using a fault detection method based upon the best available knowledge of the nominal system and nominal controller. Once a fault is detected, the proposed intelligent controller adjusts its control signal to compensate for the unknown system failure dynamics, using an artificial neural network as an on-line estimator to approximate the unexpected and unknown failure dynamics. The first control law is derived directly from Lyapunov stability theory, while the second is based upon the discrete-time sliding mode control technique. Both control laws have been implemented in a variety of failure scenarios to validate the proposed intelligent control scheme. The simulation results, including a three-tank benchmark problem, agree with the theoretical analysis and demonstrate a significant improvement in trajectory-following performance under the proposed intelligent control strategy.
SCADA alarms processing for wind turbine component failure detection
NASA Astrophysics Data System (ADS)
Gonzalez, E.; Reder, M.; Melero, J. J.
2016-09-01
Wind turbine failures and downtime can often compromise the profitability of a wind farm due to their high impact on operation and maintenance (O&M) costs. Early detection of failures can facilitate the changeover from corrective maintenance towards a predictive approach. This paper presents a cost-effective methodology that combines various alarm analysis techniques, using data from the Supervisory Control and Data Acquisition (SCADA) system, to detect component failures. The approach categorises alarms according to a reviewed taxonomy, turning overwhelming data into valuable information for assessing component status. Different alarm analysis techniques are then applied for two purposes: evaluating the capability of the SCADA alarm system to detect failures, and investigating whether faults in one component tend to be followed by failures in others. Various case studies are presented and discussed. The study highlights the relationship between faulty behaviour in different components, and between failures and adverse environmental conditions.
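The categorisation step that turns a raw alarm log into per-component status can be sketched very simply. The taxonomy mapping and alarm codes below are hypothetical placeholders, not the reviewed taxonomy used in the study.

```python
from collections import Counter

# Sketch of SCADA alarm categorisation: map raw alarm codes to component
# categories, then count activity per component. Codes and the mapping
# are invented for illustration.
taxonomy = {
    101: "pitch system", 102: "pitch system",
    201: "gearbox", 202: "gearbox",
    301: "generator",
}

alarm_log = [101, 201, 101, 102, 201, 202, 201, 301, 201]  # hypothetical log
by_component = Counter(taxonomy[code] for code in alarm_log)
most_active = by_component.most_common(1)[0]
```

Once alarms are aggregated per component like this, the two analyses in the abstract become counting exercises: comparing alarm activity against known failure dates, and checking whether bursts in one component's category precede failures in another's.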
Gunderson, Bruce D; Gillberg, Jeffrey M; Wood, Mark A; Vijayaraman, Pugazhendhi; Shepard, Richard K; Ellenbogen, Kenneth A
2006-02-01
Implantable cardioverter-defibrillator (ICD) lead failures often present as inappropriate shock therapy. An algorithm that can reliably discriminate between ventricular tachyarrhythmias and noise due to lead failure may prevent patient discomfort and anxiety and avoid device-induced proarrhythmia by preventing inappropriate ICD shocks. The goal of this analysis was to test an ICD tachycardia detection algorithm that differentiates noise due to lead failure from ventricular tachyarrhythmias. We tested an algorithm that uses a measure of the ventricular intracardiac electrogram baseline to discriminate the sinus rhythm isoelectric line from the right ventricular coil-can (i.e., far-field) electrogram during oversensing of noise caused by a lead failure. The baseline measure was defined as the product of the sum (mV) and standard deviation (mV) of the voltage samples for a 188-ms window centered on each sensed electrogram. If the minimum baseline measure of the last 12 beats was <0.35 mV-mV, then the detected rhythm was considered noise due to a lead failure. The first ICD-detected episode of lead failure and inappropriate detection from 24 ICD patients with a pace/sense lead failure and all ventricular arrhythmias from 56 ICD patients without a lead failure were selected. The stored data were analyzed to determine the sensitivity and specificity of the algorithm to detect lead failures. The minimum baseline measure for the 24 lead failure episodes (0.28 +/- 0.34 mV-mV) was smaller than the 135 ventricular tachycardia (40.8 +/- 43.0 mV-mV, P <.0001) and 55 ventricular fibrillation episodes (19.1 +/- 22.8 mV-mV, P <.05). A minimum baseline <0.35 mV-mV threshold had a sensitivity of 83% (20/24) with a 100% (190/190) specificity. A baseline measure of the far-field electrogram had a high sensitivity and specificity to detect lead failure noise compared with ventricular tachycardia or fibrillation.
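The baseline measure defined above is concrete enough to transcribe: per sensed beat, take the far-field electrogram samples in the 188-ms window and form the product of their sum and standard deviation, then compare the minimum over the last 12 beats against 0.35 mV-mV. The sketch below uses the absolute value of the sum (an assumption, since the abstract does not state how a negative sum is handled) and synthetic sample windows.

```python
import statistics

# Baseline-measure sketch for lead-failure discrimination. Sample windows
# are synthetic; abs() on the sum is an assumption not stated in the source.
def baseline_measure(window_mv):
    """Product of the window's summed voltage and its standard deviation."""
    return abs(sum(window_mv)) * statistics.stdev(window_mv)

def lead_failure_detected(beats, threshold=0.35):
    """True if the minimum measure over the last 12 beats is below threshold."""
    measures = [baseline_measure(w) for w in beats[-12:]]
    return min(measures) < threshold

# During lead-failure noise the far-field channel shows a near-isoelectric
# line (tiny measure); during VT it shows large deflections (large measure).
flat = [0.01 * ((-1) ** i) for i in range(47)]  # ~isoelectric window
vt = [5.0 * ((-1) ** i) for i in range(47)]     # large-swing window
```

The discrimination works because oversensed lead noise is a near-field artifact: the far-field coil-can channel stays quiet, so its measure collapses below threshold, whereas a true tachyarrhythmia deflects both channels.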
Artificial-neural-network-based failure detection and isolation
NASA Astrophysics Data System (ADS)
Sadok, Mokhtar; Gharsalli, Imed; Alouani, Ali T.
1998-03-01
This paper presents the design of a systematic failure detection and isolation system that uses the concept of failure sensitive variables (FSV) and artificial neural networks (ANN). The proposed approach was applied to tube leak detection in a utility boiler system. Results of the experimental testing are presented in the paper.
Automatic patient respiration failure detection system with wireless transmission
NASA Technical Reports Server (NTRS)
Dimeff, J.; Pope, J. M.
1968-01-01
Automatic respiration failure detection system detects respiration failure in patients with a surgically implanted tracheostomy tube, and actuates an audible and/or visual alarm. The system incorporates a miniature radio transmitter so that the patient is unencumbered by wires yet can be monitored from a remote location.
NASA Technical Reports Server (NTRS)
Weiss, Jerold L.; Hsu, John Y.
1986-01-01
The use of a decentralized approach to failure detection and isolation for use in restructurable control systems is examined. This work has produced: (1) A method for evaluating fundamental limits to FDI performance; (2) Application using flight recorded data; (3) A working control element FDI system with maximal sensitivity to critical control element failures; (4) Extensive testing on realistic simulations; and (5) A detailed design methodology involving parameter optimization (with respect to model uncertainties) and sensitivity analyses. This project has concentrated on detection and isolation of generic control element failures since these failures frequently lead to emergency conditions and since knowledge of remaining control authority is essential for control system redesign. The failures are generic in the sense that no temporal failure signature information was assumed. Thus, various forms of functional failures are treated in a unified fashion. Such a treatment results in a robust FDI system (i.e., one that covers all failure modes) but sacrifices some performance when detailed failure signature information is known, useful, and employed properly. It was assumed throughout that all sensors are validated (i.e., contain only in-spec errors) and that only the first failure of a single control element needs to be detected and isolated. The FDI system which has been developed will handle a class of multiple failures.
Survey of Failure in Engineering Education and Industry
NASA Astrophysics Data System (ADS)
Arimitsu, Yutaka; Yagi, Hidetsugu
Students experience failure in project-based learning, but they rarely speak about these experiences. In industry, by contrast, failures and accidents are analyzed frequently, and knowledge databases on failure and QC activities have been introduced. To turn failure experience in education to advantage, the authors surveyed the properties of failures in project-based learning and the views of students, teachers, and managers of design divisions in companies. Teachers and students regard failure experiences as instructive and acceptable. The typical causes of failure in educational institutions are lack of manufacturing skill and inadequate planning, which are minor causes of failure in industry. To establish a knowledge database on failure in educational institutions, these properties of failure in education should be taken into account.
Levin, Daniel T; Drivdahl, Sarah B; Momen, Nausheen; Beck, Melissa R
2002-12-01
Recently, a number of experiments have emphasized the degree to which subjects fail to detect large changes in visual scenes. This finding, referred to as "change blindness," is often considered surprising because many people have the intuition that such changes should be easy to detect. Earlier work documented this intuition by showing that the majority of subjects believe they would notice changes that are actually very rarely detected. Thus subjects exhibit a metacognitive error we refer to as "change blindness blindness" (CBB). Here, we test whether CBB is caused by a misestimation of the perceptual experience associated with visual changes and show that it persists even when the pre- and postchange views are separated by long delays. In addition, subjects overestimate their change detection ability both when the relevant changes are illustrated by still pictures and when they are illustrated using videos showing the changes occurring in real time. We conclude that CBB is a robust phenomenon that cannot be accounted for by failure to understand the specific perceptual experience associated with a change. Copyright 2002 Elsevier Science (USA)
Fault detection and fault tolerance in robotics
NASA Technical Reports Server (NTRS)
Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.
1992-01-01
Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.
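The fault-tree analysis described above combines component failure probabilities through logic gates. A minimal sketch of that arithmetic, assuming independent events and purely illustrative probabilities (the motor/encoder example is hypothetical, not from the paper):

```python
def and_gate(*probs):
    # All inputs must fail (independent events): product of probabilities
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    # Any input failing triggers the event: complement of "none fail"
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical top event: a robot joint fails if its motor fails OR
# both redundant encoders fail (probabilities are illustrative only)
p_joint = or_gate(0.01, and_gate(0.05, 0.05))
```

Evaluating the tree bottom-up like this also shows where redundancy helps: the AND of the two encoders contributes far less to the top event than the single motor.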
Faught, Jacqueline Tonigan; Balter, Peter A; Johnson, Jennifer L; Kry, Stephen F; Court, Laurence E; Stingo, Francesco C; Followill, David S
2017-11-01
The objective of this work was to assess both the perception of failure modes in Intensity Modulated Radiation Therapy (IMRT) when the linac is operated at the edge of the tolerances given in AAPM TG-40 (Kutcher et al.) and TG-142 (Klein et al.), and the application of FMEA to this specific section of the IMRT process. An online survey was distributed to approximately 2000 physicists worldwide who participate in quality services provided by the Imaging and Radiation Oncology Core - Houston (IROC-H). The survey briefly described eleven different failure modes covered by basic quality assurance in step-and-shoot IMRT at or near TG-40 and TG-142 tolerance criteria levels. Respondents were asked to estimate the worst-case-scenario percent dose error that could be caused by each of these failure modes in a head and neck patient, as well as the FMEA scores: Occurrence, Detectability, and Severity. Risk priority number (RPN) scores were calculated as the product of these three scores. Demographic data were also collected. A total of 181 individual and three group responses were submitted; 84% were from North America. Most (76%) individual respondents performed at least 80% clinical work and 92% were nationally certified. Respondents' medical physics experience ranged from 2.5 to 45 yr (average 18 yr). A total of 52% of individual respondents were at least somewhat familiar with FMEA, while 17% were not familiar. Several IMRT techniques, treatment planning systems, and linear accelerator manufacturers were represented. All failure modes received widely varying scores, ranging from 1 to 10 for occurrence, at least 1-9 for detectability, and at least 1-7 for severity. Ranking failure modes by RPN scores also resulted in large variability, with each failure mode being ranked both most risky (1st) and least risky (11th) by different respondents. On average, MLC modeling had the highest RPN scores.
Individual estimated percent dose errors and severity scores were positively correlated (P < 0.01) for each failure mode, as expected. No universal correlations were found between the demographic information collected and the scoring, percent dose errors, or rankings. Overall, the failure modes investigated were evaluated as low to medium risk, with average RPNs less than 110. The ranking of the 11 failure modes was not agreed upon by the community. The large variability in FMEA scoring may be caused by individual interpretation and/or experience, reflecting the subjective nature of the FMEA tool. © 2017 American Association of Physicists in Medicine.
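The RPN arithmetic used in the survey is simply the product of the three FMEA scores on their conventional 1-10 scales. A minimal sketch (the failure-mode names and scores below are hypothetical, not survey data):

```python
def rpn(occurrence, detectability, severity):
    # Risk priority number: product of the three FMEA scores (each rated 1-10)
    for score in (occurrence, detectability, severity):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores are conventionally rated 1-10")
    return occurrence * detectability * severity

# Rank hypothetical failure modes by descending risk
modes = {"MLC modeling": (6, 7, 5), "output calibration": (3, 4, 8)}
ranking = sorted(modes, key=lambda m: rpn(*modes[m]), reverse=True)
```

Because RPN is a bare product, very different score combinations can yield the same rank, which is one reason the survey saw such variable orderings.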
ERIC Educational Resources Information Center
Tulis, Maria; Ainley, Mary
2011-01-01
The current investigation was designed to identify emotion states students experience during mathematics activities, and in particular to distinguish emotions contingent on experiences of success and experiences of failure. Students' task-related emotional responses were recorded following experiences of success and failure while working with an…
New Approach for Monitoring Seismic and Volcanic Activities Using Microwave Radiometer Data
NASA Astrophysics Data System (ADS)
Maeda, Takashi; Takano, Tadashi
Interferograms formed from the data of satellite-borne synthetic aperture radar (SAR) enable us to detect slight land-surface deformations related to volcanic eruptions and earthquakes. Currently, however, we cannot determine when land-surface deformations occurred with high time resolution, since the time lag between the two SAR scenes used to form an interferogram is longer than the recurrence period of the satellite carrying it (several tens of days). In order to solve this problem, we are investigating a new approach to monitor seismic and volcanic activities with higher time resolution from satellite-borne sensor data, focusing on a satellite-borne microwave radiometer. It is less subject to clouds and rainfall over the ground than an infrared spectrometer, and so is more suitable for observing emission from land surfaces. With this advantage, we can expect thermal microwave energy from increasing land-surface temperatures to be detected before a volcanic eruption. Additionally, laboratory experiments recently confirmed that rocks emit microwave energy when fractured. This microwave energy may result from micro-discharges during the destruction of materials, or from the motion of fragments with charged surfaces. We first extrapolated the microwave signal power generated by rock failures in an earthquake from the experimental results and concluded that the microwave signals generated by rock failures near the land surface are strong enough to be detected by a satellite-borne radiometer. Accordingly, microwave energy generated by rock failures associated with seismic activity is likely to be detected as well. However, a satellite-borne microwave radiometer has a serious problem: its spatial resolution is too coarse compared to SAR or an infrared spectrometer. In order to raise the possibility of detection, a new methodology to compensate for the coarse spatial resolution is essential.
Therefore, we investigated and developed an analysis method to detect local and faint changes from the data of the Advanced Microwave Scanning Radiometer for Earth-Observation System (AMSR-E) aboard the Aqua satellite, and then an algorithm to evaluate microwave energy from land surfaces. Finally, using this algorithm, we have detected characteristic microwave signals emitted from land surfaces in association with some large earthquakes which occurred in Morocco (2004), Sumatra (2007) and Wenchuan (2008) and some large volcanic eruptions which occurred at Reventador in Ecuador (2002) and Chaiten in Chile (2008). In this presentation, the results of these case studies are presented.
SSME leak detection feasibility investigation by utilization of infrared sensor technology
NASA Technical Reports Server (NTRS)
Shohadaee, Ahmad A.; Crawford, Roger A.
1990-01-01
This investigation examined the potential of using state-of-the-art infrared (IR) thermal imaging systems combined with computers, digital image processing, and expert systems for Space Shuttle Main Engine (SSME) propellant path leak detection as an early warning system of imminent engine failure. A low-cost laboratory experiment was devised and an experimental approach established. The system was installed, checked out, and data were successfully acquired, demonstrating the proof of concept. The conclusion from this investigation is that both numerical and experimental results indicate that leak detection using infrared sensor technology is feasible for a rocket engine health monitoring system.
Sensor Failure Detection of FASSIP System using Principal Component Analysis
NASA Astrophysics Data System (ADS)
Sudarno; Juarsa, Mulya; Santosa, Kussigit; Deswandri; Sunaryo, Geni Rina
2018-02-01
In the nuclear reactor accident at Fukushima Daiichi in Japan, the damage to the core and pressure vessel was caused by the failure of its active cooling system (the diesel generators were inundated by the tsunami). Thus, research on passive cooling systems for nuclear power plants is being performed to improve the safety of nuclear reactors. The FASSIP system (Passive System Simulation Facility) is an installation used to study the characteristics of passive cooling systems at nuclear power plants. Accurate sensor measurement in the FASSIP system is essential, because it is the basis for determining the characteristics of a passive cooling system. In this research, a sensor failure detection method for the FASSIP system is developed, so that indications of sensor failures can be detected early. The method uses Principal Component Analysis (PCA) to reduce the dimensionality of the sensor data, with the Squared Prediction Error (SPE) and Hotelling's T² statistic as criteria for detecting sensor failure indications. The results show that the PCA method is capable of detecting the occurrence of a failure at any sensor.
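A rough sketch of the PCA/SPE mechanics described above: fit a principal subspace to healthy training data, then flag samples whose residual (squared prediction error) is large. The two-channel data and single component are illustrative assumptions, not FASSIP specifics:

```python
import numpy as np

def fit_pca(X, n_components):
    # Center the training data and keep the leading principal directions
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]        # rows of P are principal directions

def spe(x, mu, P):
    # Squared prediction error: squared norm of the part of a sample
    # that the principal subspace cannot explain
    d = x - mu
    residual = d - P.T @ (P @ d)
    return float(residual @ residual)

rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
# Two correlated "channels" plus small noise stand in for rig sensors
X = np.hstack([t, 2.0 * t]) + 0.01 * rng.normal(size=(200, 2))
mu, P = fit_pca(X, n_components=1)

healthy = np.array([1.0, 2.0])    # consistent with the learned correlation
faulty = np.array([1.0, -2.0])    # second sensor violates it (e.g., stuck)
```

In practice the SPE is compared against a statistical control limit; here the healthy sample's SPE is orders of magnitude below the faulty one's.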
Hydrologic Triggering of Shallow Landslides in a Field-scale Flume
NASA Astrophysics Data System (ADS)
Reid, M. E.; Iverson, R. M.; Iverson, N. R.; Brien, D. L.; Lahusen, R. G.; Logan, M.
2006-12-01
Shallow landslides are often triggered by pore-water pressure increases driven by (1) groundwater inflow from underlying bedrock or soil, (2) prolonged moderate-intensity rainfall or snowmelt, or (3) bursts of high-intensity rainfall. These shallow failures are difficult to capture in the field, limiting our understanding of how different water pathways control failure style or timing. We used the field-scale USGS debris-flow flume for 7 controlled landslide initiation experiments designed to examine the influence of different hydrologic triggers and the role of soil density, relative to critical state, on failure style and timing. Using sprinklers and/or groundwater injectors, we induced failure in a 0.65 m thick, 2 m wide, 6 m³ prism of loamy sand on a 31° slope, placed behind a retaining wall. We monitored ~50 sensors to measure soil deformation (tiltmeters and extensometers), pore pressure (tensiometers and transducers), and soil moisture (TDR probes). We also extracted soil samples for laboratory estimates of porosity, shear strength, saturated hydraulic conductivity at differing porosities, unsaturated moisture retention characteristics, and compressibility. Experiments with loose soil all resulted in abrupt failure along the concrete flume bed with rapid mobilization into a debris flow. Each of the 3 water pathways, however, resulted in slightly different pore-pressure fields at failure and different times to failure. For example, groundwater injection at the flume bed led to a saturated zone that advanced upward, wetting over half the soil prism before pressures at the bed were sufficient to provoke collapse.
With moderate-intensity surface sprinkling, an unsaturated wetting front propagated downward until reaching the bed, then a saturated zone built upward, with the highest pressures at the bed. With the third trigger, soils were initially wetted (but not saturated) with moderate-intensity sprinkling and then subjected to a high-intensity burst, causing failure without widespread positive pressures. It appears that a small pressure perturbation from the burst traveled rapidly downward through tension-saturated soil and led to positive pressure development at the flume bed resulting in failure. In contrast, failures in experiments with stronger, denser soil were gradual and episodic, requiring both sprinkling and groundwater injection. Numerical simulations of variably saturated groundwater flow mimic the behaviors described above. Simulated rainfall with an intensity greater than soil hydraulic conductivity generates rapid pressure perturbations, whereas lower intensity rainfall leads to wetting front propagation and water table buildup. Our results suggest that transient responses induced by high intensity bursts require relatively high frequency monitoring of unsaturated zone changes; in this case conventional piezometers would be unlikely to detect failure-inducing pore pressure changes. These experiments also indicate that although different water pathways control the timing of failure, initial soil density controls the style of failure.
Understanding and managing the effects of battery charger and inverter aging
NASA Astrophysics Data System (ADS)
Gunther, W.; Aggarwal, S.
An aging assessment of battery chargers and inverters was conducted under the auspices of the NRC's Nuclear Plant Aging Research (NPAR) Program. The intentions of this program are to resolve issues related to the aging and service wear of equipment and systems at operating reactor facilities and to assess their impact on safety. Inverters and battery chargers are used in nuclear power plants to perform significant functions related to plant safety and availability. The specific impact of a battery charger or inverter failure varies with plant configuration. Operating experience data have demonstrated that reactor trips, safety injection system actuations, and inoperable emergency core cooling systems have resulted from inverter failures, and that dc bus degradation leading to diesel generator inoperability or loss of control room annunciation and indication has resulted from battery and battery charger failures. For both the battery charger and the inverter, the aging and service wear of subcomponents have contributed significantly to equipment failures. This paper summarizes the data and then describes methods that can be used to detect battery charger and inverter degradation prior to failure, as well as methods to minimize the failure effects. In both cases, the managing of battery charger and inverter aging is emphasized.
NASA Astrophysics Data System (ADS)
Kim, S.; Adams, D. E.; Sohn, H.
2013-01-01
As the wind power industry has grown rapidly in the recent decade, maintenance costs have become a significant concern. Due to the high repair costs for wind turbine blades, it is especially important to detect initial blade defects before they become structural failures leading to other potential failures in the tower or nacelle. This research presents a method of detecting cracks on wind turbine blades using the Vibro-Acoustic Modulation technique. Using Vibro-Acoustic Modulation, a crack detection test was conducted on a WHISPER 100 wind turbine in its operating environment. Wind turbines provide ideal conditions in which to utilize Vibro-Acoustic Modulation because they experience large structural vibrations. The structural vibration of the wind turbine blade was used as the pumping signal, and a PZT transducer was used to generate the probing signal. Because the nonlinear portion of the dynamic response is more sensitive to the presence of a crack than to environmental conditions or operating loads, the Vibro-Acoustic Modulation technique can provide a robust structural health monitoring approach for wind turbines. Structural health monitoring can significantly reduce maintenance costs when paired with predictive modeling to minimize unscheduled maintenance.
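A sketch of how Vibro-Acoustic Modulation is commonly quantified: a breathing crack lets the low-frequency pump (structural vibration) amplitude-modulate the high-frequency PZT probing tone, producing sidebands at the probe frequency ± the pump frequency. The frequencies, modulation depth, and index definition below are illustrative assumptions:

```python
import numpy as np

fs, dur = 10_000, 1.0
t = np.arange(0, dur, 1 / fs)
f_pump, f_probe = 12.0, 1_000.0   # structural vibration vs. PZT tone (assumed)

def modulation_index(signal):
    # Ratio of sideband amplitude to carrier amplitude in the spectrum
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    amp = lambda f: spec[np.argmin(np.abs(freqs - f))]
    return (amp(f_probe - f_pump) + amp(f_probe + f_pump)) / amp(f_probe)

intact = np.sin(2 * np.pi * f_probe * t)
# A breathing crack lets the pump amplitude-modulate the probe tone
cracked = (1 + 0.2 * np.sin(2 * np.pi * f_pump * t)) * np.sin(2 * np.pi * f_probe * t)
```

A growing modulation index over time would indicate crack growth, independent of the absolute vibration level.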
NASA Technical Reports Server (NTRS)
Bueno, R. A.
1977-01-01
Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft applications are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found satisfactory, but problems in correctly identifying the mode of a failure may arise. These issues are closely examined, as is the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.
Failure detection in high-performance clusters and computers using chaotic map computations
Rao, Nageswara S.
2015-09-01
A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
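One way to read the idea: healthy nodes iterating the same chaotic map from the same seed must agree bit-for-bit, while any arithmetic or memory fault is exponentially amplified by the map's sensitivity to perturbations. A minimal sketch (the logistic map and majority comparison are illustrative choices, not the patented design):

```python
from collections import Counter

def logistic_trajectory(x0, steps, r=3.9):
    # Iterate the logistic map; chaos amplifies any tiny computational error
    x, traj = x0, []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        traj.append(x)
    return tuple(traj)

def flag_divergent_nodes(trajectories):
    # Nodes whose trajectory differs from the majority are flagged as failed
    consensus, _ = Counter(trajectories).most_common(1)[0]
    return [i for i, t in enumerate(trajectories) if t != consensus]

healthy = logistic_trajectory(0.3, 50)
faulty = logistic_trajectory(0.3 + 1e-12, 50)   # stands in for a bit flip
```

Even a 1e-12 perturbation diverges visibly within a few dozen iterations, which is what makes chaotic maps attractive as cheap hardware self-checks.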
NASA Technical Reports Server (NTRS)
Eberlein, A. J.; Lahm, T. G.
1976-01-01
The degree to which flight-critical failures in a strapdown laser gyro tetrad sensor assembly can be isolated in short-haul aircraft after a failure occurrence has been detected by the skewed-sensor failure-detection voting logic is investigated, along with the degree to which a failure in the tetrad computer can be detected and isolated at the computer level, assuming a dual-redundant computer configuration. The tetrad system was mechanized with two two-axis inertial navigation channels (INCs), each containing two gyro/accelerometer axes, a computer, control circuitry, and input/output circuitry. Gyro/accelerometer data are crossfed between the two INCs to enable each computer to independently perform the navigation task. Computer calculations are synchronized between the computers so that calculated quantities are identical and may be compared. Fail-safe performance (identification of the first failure) is achieved with a probability approaching 100 percent, while fail-operational performance (identification and isolation of the first failure) is achieved 93 to 96 percent of the time.
NASA Astrophysics Data System (ADS)
Rabiei, Masoud; Sheldon, Jeremy; Palmer, Carl
2012-04-01
The applicability of the Electro-Mechanical Impedance (EMI) approach to damage detection, localization and quantification in a mobile bridge structure is investigated in this paper. The developments focus on assessing the health of Armored Vehicle Launched Bridges (AVLBs). Specifically, two key failure mechanisms of the AVLB were identified for monitoring: fatigue crack growth and damaged (loose) rivets (bolts). It was shown through experiment that bolt damage (defined here as different torque levels applied to bolts) can be detected, quantified and located using a network of lead zirconate titanate (PZT) transducers distributed on the structure. It was also shown that cracks of various sizes can be detected and quantified using the EMI approach. The experiments were performed on smaller laboratory specimens as well as full-size bridge-like components that were built as part of this research. The effects of various parameters, such as transducer type and size, on the performance of the proposed health assessment approach were also investigated.
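A common scalar damage metric in EMI-based monitoring is the root-mean-square deviation between a baseline impedance signature and the current one. A rough sketch with synthetic signatures (the frequency band and resonance shapes are assumptions, not AVLB data):

```python
import numpy as np

def rmsd(baseline, current):
    # Root-mean-square deviation between impedance signatures, normalized
    # by the baseline energy; larger values indicate more damage
    b, c = np.asarray(baseline), np.asarray(current)
    return float(np.sqrt(np.sum((c - b) ** 2) / np.sum(b ** 2)))

freqs = np.linspace(30e3, 40e3, 200)                # sweep band (assumed)
baseline = 1 / (1 + ((freqs - 35e3) / 500) ** 2)    # synthetic resonance peak
loosened = 1 / (1 + ((freqs - 34.8e3) / 600) ** 2)  # shifted/damped by damage
```

In a deployed system, one RMSD value per PZT lets the network both detect and roughly localize damage, since transducers nearest the defect change most.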
Li, Tongyang; Wang, Shaoping; Zio, Enrico; Shi, Jian; Hong, Wei
2018-01-01
Leakage is the most important failure mode in aircraft hydraulic systems, caused by wear and tear between friction pairs of components. The accurate detection of abrasive debris can reveal the wear condition and predict a system's lifespan. The radial magnetic field (RMF)-based debris detection method provides an online solution for monitoring the wear condition intuitively, which potentially enables a more accurate diagnosis and prognosis of an aviation hydraulic system's ongoing failures. To address the serious mixing of pipe abrasive debris, this paper focuses on the separation of superimposed abrasive debris signals from an RMF abrasive sensor based on the degenerate unmixing estimation technique. Through accurately separating and calculating the morphology and amount of the abrasive debris, the RMF-based abrasive sensor can provide wear-trend information and size estimates of the wear particles. A well-designed experiment was conducted, and the results show that the proposed method can effectively separate the mixed debris and give an accurate count of the debris based on RMF abrasive sensor detection. PMID:29543733
Optimally robust redundancy relations for failure detection in uncertain systems
NASA Technical Reports Server (NTRS)
Lou, X.-C.; Willsky, A. S.; Verghese, G. C.
1986-01-01
All failure detection methods are based, either explicitly or implicitly, on the use of redundancy, i.e. on (possibly dynamic) relations among the measured variables. The robustness of the failure detection process consequently depends to a great degree on the reliability of the redundancy relations, which in turn is affected by the inevitable presence of model uncertainties. In this paper the problem of determining redundancy relations that are optimally robust is addressed in a sense that includes several major issues of importance in practical failure detection and that provides a significant amount of intuition concerning the geometry of robust failure detection. A procedure is given involving the construction of a single matrix and its singular value decomposition for the determination of a complete sequence of redundancy relations, ordered in terms of their level of robustness. This procedure also provides the basis for comparing levels of robustness in redundancy provided by different sets of sensors.
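In the exactly-known-model case, the construction reduces to linear algebra: left singular vectors of the measurement matrix with zero singular values are redundancy (parity) relations, and their residuals depart from zero when a sensor fails. The four-sensor, two-state model below is a made-up illustration:

```python
import numpy as np

# Hypothetical measurement model y = C x: four sensors observing two states
C = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [2.0, -1.0]])

# Left singular vectors with zero singular value are parity relations:
# each row w of `parity` satisfies w @ C = 0, so w @ y stays near zero
# for failure-free measurements regardless of the state x
U, s, _ = np.linalg.svd(C)
parity = U[:, C.shape[1]:].T

x = np.array([0.7, -1.3])
y = C @ x                        # healthy measurements
y_faulty = y.copy()
y_faulty[2] += 5.0               # bias ("hard-over") failure on sensor 3

healthy_residual = np.abs(parity @ y).max()
faulty_residual = np.abs(parity @ y_faulty).max()
```

With model uncertainty, the paper's point is that the singular values are no longer exactly zero, and their magnitudes order the relations from most to least robust.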
NASA Technical Reports Server (NTRS)
Behbehani, K.
1980-01-01
A new sensor/actuator failure analysis technique for turbofan jet engines was developed. Three phases of failure analysis, namely detection, isolation, and accommodation, are considered. Failure detection and isolation techniques are developed by utilizing the concept of Generalized Likelihood Ratio (GLR) tests. These techniques are applicable to both time-varying and time-invariant systems. Three GLR detectors are developed for: (1) hard-over sensor failure; (2) hard-over actuator failure; and (3) brief disturbances in the actuators. The probability distribution of the GLR detectors and the detectability of sensor/actuator failures are established. Failure type is determined by the maximum of the GLR detectors. Failure accommodation is accomplished by extending the Multivariable Nyquist Array (MNA) control design techniques to nonsquare system designs. The performance and effectiveness of the failure analysis technique are studied by applying the technique to a turbofan jet engine, namely the Quiet Clean Short Haul Experimental Engine (QCSEE). Single and multiple sensor/actuator failures in the QCSEE are simulated and analyzed, and the effects of model degradation are studied.
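For the hard-over (constant-bias) sensor case, the GLR statistic on zero-mean white Gaussian residuals reduces to a simple closed form. A sketch with simulated residuals (the noise level, bias size, and threshold are illustrative, not QCSEE values):

```python
import numpy as np

def glr_bias(residuals, sigma):
    # GLR statistic for a constant-bias failure in zero-mean white Gaussian
    # residuals: 2*ln(likelihood ratio) = N * mean(r)^2 / sigma^2,
    # chi-square with 1 dof under the no-failure hypothesis
    r = np.asarray(residuals, dtype=float)
    return len(r) * r.mean() ** 2 / sigma ** 2

rng = np.random.default_rng(1)
sigma, n = 0.5, 200
healthy = rng.normal(0.0, sigma, n)
failed = rng.normal(0.0, sigma, n) + 1.0   # hard-over bias of 2 sigma
threshold = 10.83                          # approx. chi-square(1) 0.999 quantile
```

Running one such detector per failure hypothesis and taking the maximum is the decision rule the abstract describes for identifying the failure type.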
Conditioned suppression of sexual behavior in stallions and reversal with diazepam.
McDonnell, S M; Kenney, R M; Meckley, P E; Garcia, M C
1985-06-01
Sexual behavior dysfunction unaccompanied by detectable physical or endocrine abnormality is an important cause of reproductive failure among domestic stallions. Several authors have suggested that such dysfunction may be psychogenic, related to negative experience associated with intense handling and training. An experimental model of experience-related dysfunction was developed by exposing pony stallions to erection-contingent aversive conditioning. This resulted in rapid, specific suppression of sexual arousal and response similar to spontaneously occurring dysfunction. Subsequently, treatment with a CNS-active benzodiazepine derivative (diazepam) reversed these effects.
New early warning system for gravity-driven ruptures based on codetection of acoustic signal
NASA Astrophysics Data System (ADS)
Faillettaz, J.
2016-12-01
Gravity-driven rupture phenomena in natural media - e.g. landslides, rockfalls, snow or ice avalanches - represent an important class of natural hazards in mountainous regions. To protect the population against such events, a timely evacuation often constitutes the only effective way to secure the potentially endangered area. However, reliable prediction of the imminence of such failure events remains challenging due to the nonlinear and complex nature of geological material failure, hampered by inherent heterogeneity, unknown initial mechanical state, and complex load application (rainfall, temperature, etc.). Here, a simple method for real-time early warning that considers both the heterogeneity of natural media and the attenuation characteristics of acoustic emissions is proposed. This new method capitalizes on codetection of elastic waves emanating from microcracks by multiple, spatially separated sensors. Codetection is treated as a surrogate for large event size, with more frequent codetected events (i.e., events detected concurrently on more than one sensor) marking the imminence of catastrophic failure. A simple numerical model, based on a fiber bundle model that accounts for signal attenuation and hypothetical sensor arrays, confirms the early warning potential of the codetection principle. Results suggest that although the statistical properties of attenuated signal amplitudes could lead to misleading results, monitoring the emergence of large events announcing impending failure is possible even with attenuated signals, depending on sensor network geometry and detection threshold. Preliminary application of the proposed method to acoustic emissions during failure of snow samples has confirmed the potential use of codetection as an indicator of imminent failure at lab scale. The applicability of such a simple and cheap early warning system is now being investigated at a larger scale (hillslope). First results of this pilot field experiment are presented and analysed.
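The codetection rule itself is simple to state in code: a pick counts toward an alarm only if picks from at least two sensors fall within a short coincidence window. A sketch with hypothetical sensor names, timestamps, and window length:

```python
def codetection_count(detections, min_sensors=2, window=0.01):
    # detections: mapping of sensor name -> list of pick timestamps (s).
    # Picks from different sensors falling in the same coincidence window
    # are treated as one codetected event.
    picks = sorted((t, s) for s, times in detections.items() for t in times)
    count, i = 0, 0
    while i < len(picks):
        t0 = picks[i][0]
        cluster, j = set(), i
        while j < len(picks) and picks[j][0] - t0 <= window:
            cluster.add(picks[j][1])
            j += 1
        if len(cluster) >= min_sensors:
            count += 1
        i = j
    return count

# Hypothetical picks: one strong event seen by all three sensors near t = 1 s,
# plus small single-sensor events that should not raise an alarm
picks = {"A": [1.000, 2.0, 3.0], "B": [1.002, 5.0], "C": [1.004]}
```

A rising rate of codetections over time, rather than any single count, is what would serve as the precursor signal.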
NASA Technical Reports Server (NTRS)
Littell, Justin D.; Binienda, Wieslaw K.; Arnold, William A.; Roberts, Gary D.; Goldberg, Robert K.
2008-01-01
In previous work, the ballistic impact resistance of triaxial braided carbon/epoxy composites made with large flat tows (12k and 24k) was examined by impacting 2 x 2 x 0.125" composite panels with gelatin projectiles. Several high-strength, intermediate-modulus carbon fibers were used in combination with both untoughened and toughened matrix materials. A wide range of penetration thresholds were measured for the various fiber/matrix combinations. However, there was no clear relationship between the penetration threshold and the properties of the constituents. During some of these experiments high speed cameras were used to view the failure process, and full-field strain measurements were made to determine the strain at the onset of failure. However, these experiments provided only limited insight into the microscopic failure processes responsible for the wide range of impact resistance observed. In order to investigate potential microscopic failure processes in more detail, quasi-static tests were performed in tension, compression, and shear. Full-field strain measurement techniques were used to identify local regions of high strain resulting from microscopic failures. Microscopic failure events near the specimen surface, such as splitting of fiber bundles in surface plies, were easily identified. Subsurface damage, such as fiber fracture or fiber bundle splitting, could be identified by its effect on in-plane surface strains. Subsurface delamination could be detected as an out-of-plane deflection at the surface. Using this data, failure criteria could be established at the fiber tow level for use in analysis. An analytical formulation was developed to allow the microscopic failure criteria to be used in place of macroscopic properties as input to simulations performed using the commercial explicit finite element code, LS-DYNA.
The test methods developed to investigate microscopic failure will be presented along with methods for determining local failure criteria that can be used in analysis. Results of simulations performed using LS-DYNA will be presented to illustrate the capabilities and limitations for simulating failure during quasi-static deformation and during ballistic impact of large unit cell size triaxial braid composites.
Sensor failure detection system. [for the F100 turbofan engine
NASA Technical Reports Server (NTRS)
Beattie, E. C.; Laprad, R. F.; Mcglone, M. E.; Rock, S. M.; Akhter, M. M.
1981-01-01
Advanced concepts for detecting, isolating, and accommodating sensor failures were studied to determine their applicability to the gas turbine control problem. Five concepts were formulated based upon techniques such as Kalman filtering, and a screening process led to the selection of one advanced concept for further evaluation. The selected advanced concept uses a Kalman filter to generate residuals, a weighted sum-squared residuals (WSSR) technique to detect soft failures, likelihood ratio testing of a bank of Kalman filters for isolation, and reconfiguration of the normal-mode Kalman filter by eliminating the failed input to accommodate the failure. The advanced concept was compared to a baseline parameter synthesis technique and was shown to be viable for detecting, isolating, and accommodating sensor failures in gas turbine applications.
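The weighted sum-square-residuals test slides a window over the Kalman-filter residuals; when the model is correct the statistic is approximately chi-square distributed, and a soft failure inflates it. A sketch with simulated residuals (the covariance, window length, and injected bias are illustrative assumptions):

```python
import numpy as np

def wssr(residuals, cov, window=20):
    # Weighted sum of squared residuals over a sliding window; with
    # 2-dimensional residuals it is roughly chi-square with 2*window dof
    # under the no-failure hypothesis
    s_inv = np.linalg.inv(cov)
    r = np.asarray(residuals, dtype=float)
    return np.array([sum(ri @ s_inv @ ri for ri in r[k - window:k])
                     for k in range(window, len(r) + 1)])

rng = np.random.default_rng(2)
cov = np.diag([0.04, 0.09])
res = rng.multivariate_normal([0.0, 0.0], cov, size=100)
res[60:] += [0.6, 0.0]          # soft (3-sigma) bias failure on sensor 1
scores = wssr(res, cov)
```

Soft failures that would be invisible sample-by-sample accumulate in the windowed statistic, which is the motivation for WSSR over a per-sample test.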
Repetition blindness and illusory conjunctions: errors in binding visual types with visual tokens.
Kanwisher, N
1991-05-01
Repetition blindness (Kanwisher, 1986, 1987) has been defined as the failure to detect or recall repetitions of words presented in rapid serial visual presentation (RSVP). The experiments presented here suggest that repetition blindness (RB) is a more general visual phenomenon, and examine its relationship to feature integration theory (Treisman & Gelade, 1980). Experiment 1 shows RB for letters distributed through space, time, or both. Experiment 2 demonstrates RB for repeated colors in RSVP lists. In Experiments 3 and 4, RB was found for repeated letters and colors in spatial arrays. Experiment 5 provides evidence that the mental representations of discrete objects (called "visual tokens" here) that are necessary to detect visual repetitions (Kanwisher, 1987) are the same as the "object files" (Kahneman & Treisman, 1984) in which visual features are conjoined. In Experiment 6, repetition blindness for the second occurrence of a repeated letter resulted only when the first occurrence was attended to. The overall results suggest that a general dissociation between types and tokens in visual information processing can account for both repetition blindness and illusory conjunctions.
Fault detection and identification in missile system guidance and control: a filtering approach
NASA Astrophysics Data System (ADS)
Padgett, Mary Lou; Evers, Johnny; Karplus, Walter J.
1996-03-01
Real-world applications of computational intelligence can enhance the fault detection and identification capabilities of a missile guidance and control system. A simulation of a bank-to-turn missile demonstrates that actuator failure may cause the missile to roll and miss the target. Failure of one fin actuator can be detected using a filter and depicting the filter output as fuzzy numbers. The properties and limitations of artificial neural networks fed by these fuzzy numbers are explored. A suite of networks is constructed to (1) detect a fault and (2) determine which fin (if any) failed. Both the zero-order moment term and the fin rate term show changes during actuator failure. Simulations address the following questions: (1) How bad does the actuator failure have to be for detection to occur? (2) How bad does it have to be for fault detection and isolation to occur? (3) Are both the zero-order moment and fin rate terms needed? A suite of target trajectories is simulated, and the properties and limitations of the approach are reported. In some cases, detection of the failed actuator occurs within 0.1 second, and isolation of the failure occurs 0.1 second after that. Suggestions for further research are offered.
Heart-rate variability depression in porcine peritonitis-induced sepsis without organ failure.
Jarkovska, Dagmar; Valesova, Lenka; Chvojka, Jiri; Benes, Jan; Danihel, Vojtech; Sviglerova, Jitka; Nalos, Lukas; Matejovic, Martin; Stengl, Milan
2017-05-01
Depression of heart-rate variability (HRV) in conditions of systemic inflammation has been shown in both patients and experimental animal models and HRV has been suggested as an early indicator of sepsis. The sensitivity of HRV-derived parameters to the severity of sepsis, however, remains unclear. In this study we modified the clinically relevant porcine model of peritonitis-induced sepsis in order to avoid the development of organ failure and to test the sensitivity of HRV to such non-severe conditions. In 11 anesthetized, mechanically ventilated and instrumented domestic pigs of both sexes, sepsis was induced by fecal peritonitis. The dose of feces was adjusted and antibiotic therapy was administered to avoid multiorgan failure. Experimental subjects were screened for 40 h from the induction of sepsis. In all septic animals, sepsis with hyperdynamic circulation and increased plasma levels of inflammatory mediators developed within 12 h from the induction of peritonitis. The sepsis did not progress to multiorgan failure and there was no spontaneous death during the experiment despite a modest requirement for vasopressor therapy in most animals (9/11). A pronounced reduction of HRV and elevation of heart rate developed quickly (within 5 h, time constant of 1.97 ± 0.80 h for HRV parameter TINN) upon the induction of sepsis and were maintained throughout the experiment. The frequency domain analysis revealed a decrease in the high-frequency component. The reduction of HRV parameters and elevation of heart rate preceded sepsis-associated hemodynamic changes by several hours (time constant of 11.28 ± 2.07 h for systemic vascular resistance decline). A pronounced and fast reduction of HRV occurred in the setting of a moderate experimental porcine sepsis without organ failure. Inhibition of parasympathetic cardiac signaling probably represents the main mechanism of HRV reduction in sepsis. 
The sensitivity of HRV to systemic inflammation may allow early detection of a moderate sepsis without organ failure. Impact statement A pronounced and fast reduction of heart-rate variability occurred in the setting of a moderate experimental porcine sepsis without organ failure. Dominant reduction of heart-rate variability was found in the high-frequency band indicating inhibition of parasympathetic cardiac signaling as the main mechanism of heart-rate variability reduction. The sensitivity of heart-rate variability to systemic inflammation may contribute to an early detection of moderate sepsis without organ failure.
An Adaptive Failure Detector Based on Quality of Service in Peer-to-Peer Networks
Dong, Jian; Ren, Xiao; Zuo, Decheng; Liu, Hongwei
2014-01-01
The failure detector is one of the fundamental components that maintain high availability of Peer-to-Peer (P2P) networks. Under different network conditions, the adaptive failure detector based on quality of service (QoS) can achieve the detection time and accuracy required by upper applications with lower detection overhead. In P2P systems, complexity of network and high churn lead to high message loss rate. To reduce the impact on detection accuracy, baseline detection strategy based on retransmission mechanism has been employed widely in many P2P applications; however, Chen's classic adaptive model cannot describe this kind of detection strategy. In order to provide an efficient service of failure detection in P2P systems, this paper establishes a novel QoS evaluation model for the baseline detection strategy. The relationship between the detection period and the QoS is discussed and on this basis, an adaptive failure detector (B-AFD) is proposed, which can meet the quantitative QoS metrics under changing network environment. Meanwhile, it is observed from the experimental analysis that B-AFD achieves better detection accuracy and time with lower detection overhead compared to the traditional baseline strategy and the adaptive detectors based on Chen's model. Moreover, B-AFD has better adaptability to P2P network. PMID:25198005
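Chen's classic adaptive model referenced above predicts the next heartbeat arrival from recent inter-arrival times plus a safety margin. The sketch below is a minimal illustration of that idea, not the paper's B-AFD; the class name, window size, and margin value are assumptions.

```python
from collections import deque

class ChenStyleFailureDetector:
    """Predict the next heartbeat arrival as the mean of the last n
    inter-arrival times plus a safety margin alpha; suspect the peer
    once the current time passes that prediction."""
    def __init__(self, window=10, alpha=0.5):
        self.arrivals = deque(maxlen=window)  # recent inter-arrival times
        self.alpha = alpha                    # safety margin (seconds)
        self.last = None                      # timestamp of last heartbeat

    def heartbeat(self, t):
        if self.last is not None:
            self.arrivals.append(t - self.last)
        self.last = t

    def suspect(self, now):
        if not self.arrivals:
            return False
        expected = self.last + sum(self.arrivals) / len(self.arrivals)
        return now > expected + self.alpha

fd = ChenStyleFailureDetector(alpha=0.5)
for t in [0.0, 1.0, 2.1, 3.0, 4.0]:   # roughly periodic heartbeats
    fd.heartbeat(t)
print(fd.suspect(4.5), fd.suspect(6.0))  # False True
```

A larger alpha lowers false suspicions at the cost of slower detection, which is exactly the QoS trade-off the abstract discusses.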
Human interaction with an intelligent computer in multi-task situations
NASA Technical Reports Server (NTRS)
Rouse, W. B.
1975-01-01
A general formulation of human decision making in multiple task situations is presented. It includes a description of the state, event, and action space in which the multiple task supervisor operates. A specific application to a failure detection and correction situation is discussed and results of a simulation experiment presented. Issues considered include static vs. dynamic allocation of responsibility and competitive vs. cooperative intelligence.
NASA Astrophysics Data System (ADS)
Spinner, Neil S.; Field, Christopher R.; Hammond, Mark H.; Williams, Bradley A.; Myers, Kristina M.; Lubrano, Adam L.; Rose-Pehrsson, Susan L.; Tuttle, Steven G.
2015-04-01
A 5-cubic-meter decompression chamber was re-purposed as a fire test chamber to conduct failure and abuse experiments on lithium-ion batteries. Various modifications were performed to enable remote control and monitoring of chamber functions, along with collection of data from instrumentation during tests, including high-speed and infrared cameras, a Fourier transform infrared spectrometer, real-time gas analyzers, and compact reconfigurable input and output devices. Single- and multi-cell packages of LiCoO2-chemistry 18650 lithium-ion batteries were constructed, and data were obtained and analyzed for abuse and failure tests. Surrogate 18650 cells were designed and fabricated for multi-cell packages that mimicked the thermal behavior of real cells without using any active components, enabling internal temperature monitoring of cells adjacent to the active cell undergoing failure. Heat propagation and video recordings before, during, and after energetic failure events revealed a high degree of heterogeneity; some batteries exhibited short bursts of sparks while others experienced a longer, sustained flame during failure. Carbon monoxide, carbon dioxide, methane, dimethyl carbonate, and ethylene carbonate were detected via gas analysis, and the presence of these species was consistent throughout all failure events. These results highlight the inherent danger in large-format lithium-ion battery packs with regard to cell-to-cell failure, and illustrate the need for effective safety features.
NASA Technical Reports Server (NTRS)
Hopson, Charles B.
1987-01-01
The results of an analysis performed on seven successive Space Shuttle Main Engine (SSME) static test firings, utilizing envelope detection of external accelerometer data are discussed. The results clearly show the great potential for using envelope detection techniques in SSME incipient failure detection.
Device for detecting imminent failure of high-dielectric stress capacitors. [Patent application
McDuff, G.G.
1980-11-05
A device is described for detecting imminent failure of a high-dielectric stress capacitor utilizing circuitry for detecting pulse width variations and pulse magnitude variations. Inexpensive microprocessor circuitry is utilized to make numerical calculations of digital data supplied by detection circuitry for comparison of pulse width data and magnitude data to determine if preselected ranges have been exceeded, thereby indicating imminent failure of a capacitor. Detection circuitry may be incorporated in transmission lines, pulse power circuitry, including laser pulse circuitry or any circuitry where capacitors or capacitor banks are utilized.
Device for detecting imminent failure of high-dielectric stress capacitors
McDuff, George G.
1982-01-01
A device for detecting imminent failure of a high-dielectric stress capacitor utilizing circuitry for detecting pulse width variations and pulse magnitude variations. Inexpensive microprocessor circuitry is utilized to make numerical calculations of digital data supplied by detection circuitry for comparison of pulse width data and magnitude data to determine if preselected ranges have been exceeded, thereby indicating imminent failure of a capacitor. Detection circuitry may be incorporated in transmission lines, pulse power circuitry, including laser pulse circuitry, or any circuitry where capacitors or capacitor banks are utilized.
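The range-comparison logic described in both patent abstracts reduces to checking each digitized pulse's width and magnitude against preselected ranges. A minimal sketch, with all units and limits hypothetical:

```python
def imminent_failure(widths, magnitudes, width_range, mag_range):
    """Compare each pulse's width and magnitude against preselected
    ranges; any out-of-range value indicates imminent capacitor failure."""
    for w, m in zip(widths, magnitudes):
        if not (width_range[0] <= w <= width_range[1]):
            return True
        if not (mag_range[0] <= m <= mag_range[1]):
            return True
    return False

# hypothetical digitized pulse data (microseconds, kilovolts)
ok = imminent_failure([1.0, 1.1, 0.9], [5.0, 5.1, 4.9], (0.8, 1.2), (4.5, 5.5))
bad = imminent_failure([1.0, 1.6], [5.0, 5.1], (0.8, 1.2), (4.5, 5.5))
print(ok, bad)  # False True
```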
The Inclusion of Arbitrary Load Histories in the Strength Decay Model for Stress Rupture
NASA Technical Reports Server (NTRS)
Reeder, James R.
2014-01-01
Stress rupture is a failure mechanism where failures can occur after a period of time, even though the material has seen no increase in load. Carbon/epoxy composite materials have demonstrated the stress rupture failure mechanism. In a previous work, a model was proposed for stress rupture of composite overwrapped pressure vessels (COPVs) and similar composite structures based on strength degradation. However, the original model was limited to constant-load periods (holds). The model was expanded in this paper to address arbitrary loading histories, specifically the inclusion of ramp loadings up to holds and back down. The broadening of the model allows failures on loading to be treated as any other failure that may occur during testing, instead of having to be treated as a special case. The inclusion of ramps can also influence the length of the "safe period" following proof loading that was previously predicted by the model. No stress rupture failures are predicted in a safe period because time is required for strength to decay from above the proof level to the lower level of loading. Although the model can predict failures during the ramp periods, no closed-form solution for the failure times could be derived; therefore, two solution techniques were proposed. Finally, the model was used to design an experiment that could detect the difference between the strength decay model and a commonly used model for stress rupture. Although these types of models are necessary to help guide experiments for stress rupture, only experimental evidence will determine how well the model may predict actual material response. If the model can be shown to be accurate, current proof loading requirements may result in predicted safe periods as long as 10^13 years. COPV design requirements for stress rupture may then be relaxed, allowing more efficient designs while still maintaining an acceptable level of safety.
Epidemic failure detection and consensus for extreme parallelism
Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas; ...
2017-02-01
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
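The logarithmic scaling of gossip cycles reported above can be seen in a toy push-gossip model: one node first observes a failure, and in each cycle every informed node tells one randomly chosen peer. This simulation is an illustrative assumption, not the paper's algorithms.

```python
import math
import random

def gossip_cycles_to_consensus(n, seed=0):
    """Count cycles of push-gossip until every one of n nodes has learned
    of a failure first observed by a single node (rumor-spreading model)."""
    rng = random.Random(seed)
    informed = {0}          # node 0 detects the failure
    cycles = 0
    while len(informed) < n:
        cycles += 1
        for _ in range(len(informed)):
            informed.add(rng.randrange(n))  # each informed node pushes to one peer
    return cycles

for n in (64, 1024, 16384):
    print(n, gossip_cycles_to_consensus(n), "log2(n) =", round(math.log2(n)))
```

In this model the informed set roughly doubles each early cycle, so the cycle count grows like log n, consistent with the scaling the abstract reports.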
Advanced detection, isolation and accommodation of sensor failures: Real-time evaluation
NASA Technical Reports Server (NTRS)
Merrill, Walter C.; Delaat, John C.; Bruton, William M.
1987-01-01
The objective of the Advanced Detection, Isolation, and Accommodation (ADIA) Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines by using analytical redundancy to detect sensor failures. The results of a real-time hybrid computer evaluation of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 engine control system are determined. Also included are details about the microprocessor implementation of the algorithm as well as a description of the algorithm itself.
SLSF in-reactor local fault safety experiment P4. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, D. H.; Holland, J. W.; Braid, T. H.
The Sodium Loop Safety Facility (SLSF), a major facility in the US fast-reactor safety program, has been used to simulate a variety of sodium-cooled fast reactor accidents. SLSF experiment P4 was conducted to investigate the behavior of a worst-case local fault configuration. Objectives of this experiment were to eject molten fuel into a 37-pin bundle of full-length Fast-Test-Reactor-type fuel pins from heat-generating fuel canisters, to characterize the severity of any molten fuel-coolant interaction, and to demonstrate that any resulting blockage could either be tolerated during continued power operation or detected by global monitors to prevent fuel failure propagation. The design goal for molten fuel release was 10 to 30 g. Expulsion of molten fuel from the fuel canisters caused failure of adjacent pins and a partial flow channel blockage in the fuel bundle during full-power operation. Molten fuel and fuel debris also lodged against the inner surface of the test subassembly hex-can wall. The total fuel disruption of 310 g evaluated from posttest examination data was in excellent agreement with results from the SLSF delayed neutron detection system, but exceeded the target molten fuel release by an order of magnitude. This report contains a summary description of the SLSF in-reactor loop and support systems and the experiment operations. Results of the detailed macro- and microexamination of disrupted fuel and metal, and results from the analysis of the on-line experimental data, are described, as are the interpretations and conclusions drawn from the posttest evaluations. 60 refs., 74 figs.
NASA Astrophysics Data System (ADS)
Amirat, Yassine; Choqueuse, Vincent; Benbouzid, Mohamed
2013-12-01
Failure detection has always been a demanding task in the electrical machines community; it has become more challenging in wind energy conversion systems because the sustainability and viability of wind farms are highly dependent on the reduction of operational and maintenance costs. Indeed, the most efficient way of reducing these costs would be to continuously monitor the condition of these systems. This allows for early detection of generator health degeneration, facilitating a proactive response, minimizing downtime, and maximizing productivity. This paper then provides an assessment of a failure detection technique based on the homopolar component of the generator stator current, and attempts to highlight the use of ensemble empirical mode decomposition as a tool for failure detection in wind turbine generators for stationary and non-stationary cases.
Failure detection and fault management techniques for flush airdata sensing systems
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.
1992-01-01
Methods based on chi-squared analysis are presented for detecting system and individual-port failures in the high-angle-of-attack flush airdata sensing (HI-FADS) system on the NASA F-18 High Alpha Research Vehicle. The HI-FADS hardware is introduced, and the aerodynamic model describes measured pressure in terms of dynamic pressure, angle of attack, angle of sideslip, and static pressure. Chi-squared analysis is described in the presentation of the concept for failure detection and fault management, which includes nominal, iteration, and fault-management modes. A matrix of pressure orifices arranged in concentric circles on the nose of the aircraft provides the measurements that are applied to the regression algorithms. The sensing techniques are applied to F-18 flight data, and two examples are given of the computed angle-of-attack time histories. The failure-detection and fault-management techniques permit the matrix to be multiply redundant, and the chi-squared analysis is shown to be useful in the detection of failures.
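A chi-squared check over redundant port measurements, in the spirit of the abstract above, can be sketched as follows. The model pressures, noise level, and threshold are invented, and a real FADS system estimates the aerodynamic model by regression rather than assuming it known.

```python
def detect_failed_port(measured, predicted, sigma, threshold=20.0):
    """Chi-squared check over redundant pressure ports: if the fit
    statistic exceeds the threshold, suspect the port with the largest
    normalized residual; otherwise report no failure."""
    resid = [(m - p) / sigma for m, p in zip(measured, predicted)]
    chi2 = sum(r * r for r in resid)
    if chi2 > threshold:
        worst = max(range(len(resid)), key=lambda i: abs(resid[i]))
        return chi2, worst
    return chi2, None

predicted = [101.3, 99.8, 100.5, 98.9, 100.1]  # model pressures (kPa, made up)
nominal = [101.4, 99.7, 100.6, 98.8, 100.0]    # healthy readings
failed = [101.4, 99.7, 100.6, 94.0, 100.0]     # port 3 reads far too low
print(detect_failed_port(nominal, predicted, 0.1))
print(detect_failed_port(failed, predicted, 0.1))
```

In a fault-management mode like the one described, the suspected port would be removed and the regression repeated on the remaining redundant ports.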
Failure Detecting Method of Fault Current Limiter System with Rectifier
NASA Astrophysics Data System (ADS)
Tokuda, Noriaki; Matsubara, Yoshio; Asano, Masakuni; Ohkuma, Takeshi; Sato, Yoshibumi; Takahashi, Yoshihisa
A fault current limiter (FCL) is widely needed to suppress fault currents, particularly in trunk power systems connecting high-voltage transmission lines, such as the 500 kV class systems that constitute the nucleus of the electric power grid. We proposed a new type of FCL system (rectifier-type FCL), consisting of solid-state diodes, a DC reactor and a bypass AC reactor, and demonstrated the excellent performance of this FCL by developing small 6.6 kV and 66 kV models. To maintain the reliability of the power system, it is important to detect failures of the power devices used in the rectifier under normal operating conditions. In this paper, we propose a new failure detecting method for power devices best suited to the rectifier-type FCL. The failure detecting system is simple and compact. We applied the proposed system to the 66 kV prototype single-phase model and successfully demonstrated detection of power device failures.
Syndromic surveillance for health information system failures: a feasibility study.
Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico
2013-05-01
To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65-0.85. Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures.
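The control-chart monitoring described above can be illustrated with a simple Shewhart-style rule: flag any interval whose record count falls more than k standard deviations below a baseline mean, suggesting record-level data loss. The counts and limits below are hypothetical.

```python
def shewhart_flags(baseline, observed, k=3.0):
    """Flag observations more than k standard deviations below the
    baseline mean; a sudden drop in records created is a syndromic
    signal of record-level data loss."""
    n = len(baseline)
    mean = sum(baseline) / n
    var = sum((x - mean) ** 2 for x in baseline) / (n - 1)
    lower = mean - k * var ** 0.5
    return [x < lower for x in observed]

baseline = [500, 512, 495, 503, 498, 507, 501, 493, 510, 499]  # hourly counts
observed = [504, 497, 380, 502]  # third hour: roughly a quarter of records lost
print(shewhart_flags(baseline, observed))  # [False, False, True, False]
```

The abstract's time-series models play the role of the baseline here; larger error rates push the observed index further past the control limit, matching the reported rise in sensitivity with error rate.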
Point counts from clustered populations: Lessons from an experiment with Hawaiian crows
Hayward, G.D.; Kepler, C.B.; Scott, J.M.
1991-01-01
We designed an experiment to identify the factors contributing most to error in counts of Hawaiian Crow or Alala (Corvus hawaiiensis) groups that are detected aurally. Seven observers failed to detect calling Alala on 197 of 361 3-min point counts on four transects extending from cages with captive Alala. A detection curve describing the relation between frequency of flock detection and distance typified the distribution expected in transect or point counts. Failure to detect calling Alala was affected most by distance, observer, and Alala calling frequency. The number of individual Alala calling was not important in detection rate. Estimates of the number of Alala calling (flock size) were biased and imprecise: the average difference between the number of Alala calling and the number heard was 3.24 (±0.277). Distance, observer, number of Alala calling, and Alala calling frequency all contributed to errors in estimates of group size (P < 0.0001). Multiple regression suggested that the number of Alala calling contributed most to errors. These results suggest that well-designed point counts may be used to estimate the number of Alala flocks but cast doubt on attempts to estimate flock size when individuals are counted aurally.
Review of Literature on Probability of Detection for Liquid Penetrant Nondestructive Testing
2011-11-01
increased maintenance costs, or catastrophic failure of safety-critical structure. Knowledge of the reliability achieved by NDT methods, including ... representative components to gather data for statistical analysis, which can be prohibitively expensive. To account for sampling variability inherent in any ... Sioux City and Pensacola. (Those recommendations were discussed in Section 3.4.) Drury et al. report on a factorial experiment aimed at identifying the
Transmission Bearing Damage Detection Using Decision Fusion Analysis
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Lewicki, David G.; Decker, Harry J.
2004-01-01
A diagnostic tool was developed for detecting fatigue damage to rolling element bearings in an OH-58 main rotor transmission. Two different monitoring technologies, oil debris analysis and vibration, were integrated using data fusion into a health monitoring system for detecting bearing surface fatigue pitting damage. This integrated system showed improved detection and decision-making capabilities as compared to using the individual monitoring technologies. The diagnostic tool was evaluated by collecting vibration and oil debris data from tests performed in the NASA Glenn 500 hp Helicopter Transmission Test Stand. Data were collected during experiments performed in this test rig when two unanticipated bearing failures occurred. Results show that combining the vibration and oil debris measurement technologies improves the detection of pitting damage on spiral bevel gear duplex ball bearings and spiral bevel pinion triplex ball bearings in a main rotor transmission.
NASA Astrophysics Data System (ADS)
Yim, Keun Soo
This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long-latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor with a set of new instructions to support software-implemented fault detection techniques (Ch. 7). This work takes on added importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers; GPUs were used in three of the five fastest supercomputers operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers.
In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of program states that included dynamically allocated memory (to be spatially comprehensive). In GPUs, we used fault injection studies to demonstrate the importance of detecting silent data corruption (SDC) errors that are mainly due to the lack of fine-grained protections and the massive use of fault-insensitive data. This dissertation also presents transparent fault tolerance frameworks and techniques that are directly applicable to hybrid computers built using only commercial off-the-shelf hardware components. This dissertation shows that by developing understanding of the failure characteristics and error propagation paths of target programs, we were able to create fault tolerance frameworks and techniques that can quickly detect and recover from hardware faults with low performance and hardware overheads.
Microseismic Signature of Magma Failure: Testing Failure Forecast in Heterogeneous Material
NASA Astrophysics Data System (ADS)
Vasseur, J.; Lavallee, Y.; Hess, K.; Wassermann, J. M.; Dingwell, D. B.
2012-12-01
Volcanoes exhibit a range of seismic precursors prior to eruptions. These signals derive from different processes which, if quantified, may tell us when and how a volcano will erupt: effusively or explosively. This quantification can be performed in the laboratory. Here we investigated the signals associated with the deformation and failure of single-phase silicate liquids compared to multi-phase magmas containing pores and crystals as heterogeneities. For the past decades, magmas have been simplified as viscoelastic fluids with grossly predictable failure, following an analysis of the stress and strain rate conditions in volcanic conduits. Yet it is clear that the way magmas fail is not unique, and evidence increasingly illustrates the role of heterogeneities in the process of magmatic fragmentation. In such multi-phase magmas, failure cannot be predicted using current rheological laws. Microseismicity, as detected in the laboratory by analogous acoustic emissions (AE), can be used to monitor fracture initiation and propagation, and thus provides invaluable information to characterise the process of brittle failure underlying explosive eruptions. Tri-axial press experiments on different synthesised and natural glass samples have been performed to investigate the acoustic signature of failure. We observed that the failure of single-phase liquids occurs without much strain and is preceded by the constant nucleation, propagation and coalescence of cracks, as demonstrated by the monitored AE. In contrast, the failure of multi-phase magmas depends on the applied stress and is strain dependent. The path dependence of magma failure is nonetheless accompanied by a supra-exponential acceleration in released AEs. Analysis of the released AEs following the material Failure Forecast Method (FFM) suggests that the predictability of failure is enhanced by the presence of heterogeneities in magmas. We discuss our observations in terms of volcanic scenarios.
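The Failure Forecast Method (FFM) referenced above is commonly applied by extrapolating the inverse rate of precursory events (here, acoustic emissions) linearly to zero; the intercept estimates the failure time. A minimal sketch on synthetic data, where the accelerating-rate form is an assumption for illustration:

```python
def ffm_forecast(times, rates):
    """Failure Forecast Method (inverse-rate form): fit a straight line
    to 1/rate versus time; its zero crossing estimates the failure time."""
    inv = [1.0 / r for r in rates]
    n = len(times)
    tm = sum(times) / n
    im = sum(inv) / n
    slope = sum((t - tm) * (i - im) for t, i in zip(times, inv)) / \
            sum((t - tm) ** 2 for t in times)
    intercept = im - slope * tm
    return -intercept / slope   # time at which 1/rate reaches zero

# synthetic AE rates accelerating toward failure at t = 10: rate ~ 1/(10 - t)
times = [2.0, 4.0, 6.0, 8.0]
rates = [1.0 / (10.0 - t) for t in times]
print(round(ffm_forecast(times, rates), 3))  # 10.0
```

With noisy, path-dependent data such as the multi-phase magma experiments, the quality of this linear fit is itself a measure of how forecastable the failure is.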
Remote Structural Health Monitoring and Advanced Prognostics of Wind Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Douglas Brown; Bernard Laskowski
The prospect of substantial investment in wind energy generation represents a significant capital commitment. To maximize the life-cycle of wind turbines and their associated rotors, gears, and structural towers, a capability to detect and predict (prognostics) the onset of mechanical faults at a sufficiently early stage for maintenance actions to be planned would significantly reduce both maintenance and operational costs. Advancement towards this effort has been made through the development of anomaly detection, fault detection and fault diagnosis routines to identify selected fault modes of a wind turbine based on available sensor data preceding an unscheduled emergency shutdown. The anomaly detection approach employs spectral techniques to find an approximation of the data using a combination of attributes that capture the bulk of variability in the data. Fault detection and diagnosis (FDD) is performed using a neural network-based classifier trained from baseline and fault data recorded during known failure conditions. The approach has been evaluated for known baseline conditions and three selected failure modes: pitch rate failure, low oil pressure failure and a gearbox gear-tooth failure. Experimental results demonstrate the approach can distinguish between these failure modes and normal baseline behavior within a specified statistical accuracy.
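The spectral anomaly-detection step described above (approximating the data by the combination of attributes that captures the bulk of its variability) is in the spirit of principal component analysis. The sketch below uses power iteration to find the dominant principal direction and scores points by their residual distance from it; the sensor pairing and data are invented.

```python
import random

def principal_axis(data, iters=100):
    """Power iteration on the (uncentered-scale) covariance to find the
    dominant principal direction of the centered data."""
    d = len(data[0])
    means = [sum(col) / len(data) for col in zip(*data)]
    centered = [[x - m for x, m in zip(row, means)] for row in data]
    v = [1.0] * d
    for _ in range(iters):
        w = [0.0] * d            # w = C v, summed row by row
        for row in centered:
            s = sum(r * vi for r, vi in zip(row, v))
            for j in range(d):
                w[j] += s * row[j]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return means, v

def anomaly_score(x, means, v):
    """Residual distance from x to its projection onto the principal axis."""
    c = [xi - m for xi, m in zip(x, means)]
    proj = sum(ci * vi for ci, vi in zip(c, v))
    resid = [ci - proj * vi for ci, vi in zip(c, v)]
    return sum(r * r for r in resid) ** 0.5

random.seed(0)
# baseline sensor pairs that move together (e.g., rotor speed vs. power output)
data = [(t, 2.0 * t + random.gauss(0, 0.1)) for t in [i / 10 for i in range(50)]]
means, v = principal_axis(data)
normal = anomaly_score((2.0, 4.0), means, v)   # consistent with baseline
fault = anomaly_score((2.0, 1.0), means, v)    # power far too low for this speed
print(normal < fault)  # True
```

Points consistent with the learned correlation score near zero, while a sensor reading that breaks the correlation scores high, which is the anomaly signal that would precede the neural-network FDD stage.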
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Baaklini, George Y.; Roth, Don J.
2004-01-01
Engine makers and aviation safety government institutions continue to have a strong interest in monitoring the health of rotating components in aircraft engines to improve safety and to lower maintenance costs. To prevent catastrophic failure (burst) of the engine, they use nondestructive evaluation (NDE) and major overhauls for periodic inspections to discover any cracks that might have formed. The lowest-cost fluorescent penetrant inspection NDE technique can fail to disclose cracks that are tightly closed at rest or that are below the surface. The NDE eddy current system is more effective at detecting both crack types, but it requires careful setup and operation, and only a small portion of the disk can be practically inspected. Health-monitoring systems require the sensor system to sustain normal function in a severe environment, to transmit a signal if a crack detected in the component exceeds a predetermined length (but is below the length that would lead to failure), and to act neutrally on the overall performance of the engine system without interfering with engine maintenance operations. Therefore, more reliable diagnostic tools and high-level techniques for detecting damage and monitoring the health of rotating components are essential in maintaining engine safety and reliability and in assessing life.
Real-time diagnostics of the reusable rocket engine using on-line system identification
NASA Technical Reports Server (NTRS)
Guo, T.-H.; Merrill, W.; Duyar, A.
1990-01-01
A model-based failure diagnosis system has been proposed for real-time diagnosis of SSME failures. Actuation, sensor, and system degradation failure modes are all considered by the proposed system. In the case of SSME actuation failures, it was shown that real-time identification can effectively be used for failure diagnosis purposes. It is a direct approach since it reduces the detection, isolation, and the estimation of the extent of the failures to the comparison of parameter values before and after the failure. As with any model-based failure detection system, the proposed approach requires a fault model that embodies the essential characteristics of the failure process. The proposed diagnosis approach has the added advantage that it can be used as part of an intelligent control system for failure accommodation purposes.
Corrosivity Sensor for Exposed Pipelines Based on Wireless Energy Transfer.
Lawand, Lydia; Shiryayev, Oleg; Al Handawi, Khalil; Vahdati, Nader; Rostron, Paul
2017-05-30
External corrosion has been identified as one of the main causes of pipeline failures worldwide. A solution has been developed that addresses the detection and quantification of the corrosivity of the environment for application to existing exposed pipelines. It consists of a sensing array made of an assembly of thin strips of pipeline steel and a circuit that provides a visual sensor reading to the operator. The proposed sensor is passive and does not require a constant power supply. The circuit design was validated through simulations and lab experiments. An accelerated corrosion experiment was conducted to confirm the feasibility of the proposed corrosivity sensor design.
Possible consequences of operation with KIVN fuel elements in K Zircaloy process tubes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, P.A.
1963-08-06
From considerations of the results of experimental simulations of non-axial placement of fuel elements in process tubes and in-reactor experience, it is concluded that the ultimate outcome of a charging error which results in operation with one or more unsupported fuel elements in a K Zircaloy-2 process tube would be multiple fuel failure and failure of the process tube. The outcome of the accident is determined by the speed with which the fuel failure is detected and the reactor is shut down. The release of fission products would be expected to be no greater than that which has occurred following severe fuel failure incidents. The highest probability for fission product release occurs during the discharge of failed fuel elements, when a small fraction of the exposed uranium of the fuel element may be oxidized when exposed to air before the element falls into the water-filled discharge chute. The confinement and fog spray facilities were installed to reduce the amount of fission products which might escape from the reactor building after such an event.
Syndromic surveillance for health information system failures: a feasibility study
Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico
2013-01-01
Objective To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. Methods A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. Results In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65–0.85. Conclusions Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures. PMID:23184193
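The index monitoring described above can be sketched with a simple Shewhart-style control chart. This is a minimal illustration with hypothetical daily counts and plain mean ± k·sigma limits, not the time-series models actually fitted in the study:

```python
import statistics

def control_chart_alarms(history, observed, k=3.0):
    """Flag observations falling outside mean +/- k*stdev of a
    historical baseline (a basic Shewhart control chart)."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return [i for i, x in enumerate(observed) if abs(x - mu) > k * sigma]

# Hypothetical daily counts of laboratory records with missing results
baseline = [5, 7, 6, 4, 8, 5, 6, 7, 5, 6]
this_week = [6, 5, 30, 7]  # day 2 shows a surge in missing results
print(control_chart_alarms(baseline, this_week))  # -> [2]
```

In practice the study modeled weekly and seasonal patterns in the indices before applying control limits; the fixed baseline here is purely illustrative.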
Risk analysis of analytical validations by probabilistic modification of FMEA.
Barends, D M; Oldenhof, M T; Vredenbregt, M J; Nauta, M J
2012-05-01
Risk analysis is a valuable addition to the validation of an analytical chemistry process, enabling the detection not only of technical risks but also of risks related to human failures. Failure Mode and Effect Analysis (FMEA) can be applied, using a categorical risk scoring of the occurrence, detection, and severity of failure modes, and calculating the Risk Priority Number (RPN) to select failure modes for correction. We propose a probabilistic modification of FMEA, replacing the categorical scoring of occurrence and detection by their estimated relative frequencies while maintaining the categorical scoring of severity. In an example, the results of a traditional FMEA of a Near Infrared (NIR) analytical procedure used for the screening of suspected counterfeit tablets are re-interpreted by this probabilistic modification of FMEA. Using this probabilistic modification, the frequency of occurrence of undetected failure modes can be estimated quantitatively, for each individual failure mode, for a set of failure modes, and for the full analytical procedure. Copyright © 2012 Elsevier B.V. All rights reserved.
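The contrast between the traditional RPN and the proposed probabilistic variant can be sketched as follows. The numbers are invented for illustration, and the product form for the undetected-failure frequency is an assumption consistent with the abstract's description:

```python
def rpn(severity, occurrence, detection):
    """Traditional FMEA: three categorical 1-10 scores multiplied into
    a Risk Priority Number used to rank failure modes."""
    return severity * occurrence * detection

def undetected_failure_freq(p_occurrence, p_detection):
    """Probabilistic modification: occurrence and detection become
    estimated relative frequencies, so the frequency of an undetected
    failure is the chance the mode occurs times the chance that
    detection misses it (severity stays categorical)."""
    return p_occurrence * (1.0 - p_detection)

print(rpn(8, 3, 5))                        # -> 120
print(undetected_failure_freq(0.02, 0.9))  # -> ~0.002 (about 2 in 1000 runs)
```

Because the probabilistic quantities multiply like probabilities, the per-mode results can be summed to estimate the undetected-failure frequency of a set of modes or of the full procedure, as the abstract notes.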
DC-to-AC inverter ratio failure detector
NASA Technical Reports Server (NTRS)
Ebersole, T. J.; Andrews, R. E.
1975-01-01
The failure detection technique is based upon input-output ratios and is therefore independent of inverter loading. Since the inverter has a fixed relationship between V-in/V-out and I-in/I-out, the failure detection criteria are based on this ratio, which is simply the inverter transformer turns ratio, K, equal to primary turns divided by secondary turns.
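The ratio check can be written down directly. This is a sketch with invented voltages, currents, and tolerance, using the ideal-transformer relations V_in/V_out = I_out/I_in = K:

```python
def inverter_ratio_fault(v_in, v_out, i_in, i_out, k, tol=0.1):
    """Loading-independent failure check: for an ideal transformer-coupled
    inverter, both V_in/V_out and I_out/I_in should equal the turns
    ratio K; a departure beyond the tolerance flags a failure."""
    v_ratio = v_in / v_out
    i_ratio = i_out / i_in
    return abs(v_ratio - k) > tol * k or abs(i_ratio - k) > tol * k

K = 28.0 / 115.0  # hypothetical turns ratio for a 28 V -> 115 V inverter
print(inverter_ratio_fault(28.0, 115.0, 8.2, 2.0, k=K))  # -> False (healthy)
print(inverter_ratio_fault(28.0, 80.0, 8.2, 2.0, k=K))   # -> True (failure)
```

Because both ratios equal K regardless of load current, the check works across the operating range without load-dependent thresholds, which is the point made in the abstract.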
Impaired face detection may explain some but not all cases of developmental prosopagnosia.
Dalrymple, Kirsten A; Duchaine, Brad
2016-05-01
Developmental prosopagnosia (DP) is defined by severe face recognition difficulties due to the failure to develop the visual mechanisms for processing faces. The two-process theory of face recognition (Morton & Johnson, 1991) implies that DP could result from a failure of an innate face detection system; this failure could prevent an individual from then tuning higher-level processes for face recognition (Johnson, 2005). Work with adults indicates that some individuals with DP have normal face detection whereas others are impaired. However, face detection has not been addressed in children with DP, even though their results may be especially informative because they have had less opportunity to develop strategies that could mask detection deficits. We tested the face detection abilities of seven children with DP. Four were impaired at face detection to some degree (i.e. abnormally slow, or failed to find faces) while the remaining three children had normal face detection. Hence, the cases with impaired detection are consistent with the two-process account suggesting that DP could result from a failure of face detection. However, the cases with normal detection implicate a higher-level origin. The dissociation between normal face detection and impaired identity perception also indicates that these abilities depend on different neurocognitive processes. © 2015 John Wiley & Sons Ltd.
A Framework for Creating a Function-based Design Tool for Failure Mode Identification
NASA Technical Reports Server (NTRS)
Arunajadai, Srikesh G.; Stone, Robert B.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)
2002-01-01
Knowledge of potential failure modes during design is critical for the prevention of failures. Industries currently use procedures such as Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis, or Failure Modes, Effects and Criticality Analysis (FMECA), as well as knowledge and experience, to determine potential failure modes. When new products are being developed, there is often a lack of sufficient knowledge of potential failure modes and/or a lack of sufficient experience to identify all failure modes. This gives rise to a situation in which engineers are unable to extract maximum benefit from the above procedures. This work describes a function-based failure identification methodology that acts as a storehouse of information and experience, providing useful information about the potential failure modes for the design under consideration and enhancing the usefulness of procedures like FMEA. As an example, the method is applied to fifteen products and the benefits are illustrated.
NASA Technical Reports Server (NTRS)
Santi, Louis M.; Butas, John P.; Aguilar, Robert B.; Sowers, Thomas S.
2008-01-01
The J-2X is an expendable liquid hydrogen (LH2)/liquid oxygen (LOX) gas generator cycle rocket engine that is currently being designed as the primary upper stage propulsion element for the new NASA Ares vehicle family. The J-2X engine will contain abort logic that functions as an integral component of the Ares vehicle abort system. This system is responsible for detecting and responding to conditions indicative of impending Loss of Mission (LOM), Loss of Vehicle (LOV), and/or catastrophic Loss of Crew (LOC) failure events. As an earth orbit ascent phase engine, the J-2X is a high power density propulsion element with non-negligible risk of fast propagation rate failures that can quickly lead to LOM, LOV, and/or LOC events. Aggressive reliability requirements for manned Ares missions and the risk of fast propagating J-2X failures dictate the need for on-engine abort condition monitoring and autonomous response capability as well as traditional abort agents such as the vehicle computer, flight crew, and ground control not located on the engine. This paper describes the baseline J-2X abort subsystem concept of operations, as well as the development process for this subsystem. A strategy that leverages heritage system experience and responds to an evolving engine design as well as J-2X specific test data to support abort system development is described. The utilization of performance and failure simulation models to support abort system sensor selection, failure detectability and discrimination studies, decision threshold definition, and abort system performance verification and validation is outlined. The basis for abort false positive and false negative performance constraints is described. Development challenges associated with information shortfalls in the design cycle, abort condition coverage and response assessment, engine-vehicle interface definition, and abort system performance verification and validation are also discussed.
NASA Technical Reports Server (NTRS)
Bueno, R.; Chow, E.; Gershwin, S. B.; Willsky, A. S.
1975-01-01
Research is reported on the problems of failure detection and reliable system design for digital aircraft control systems. Failure modes, cross detection probability, wrong-time detection, application of performance tools, and the GLR computer package are discussed.
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Godiwala, P. M.
1985-01-01
The performance analysis results of a fault inferring nonlinear detection system (FINDS) using sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment are presented. First, a statistical analysis of the flight-recorded sensor data was made in order to determine the characteristics of sensor inaccuracies. Next, modifications were made to the detection and decision functions in the FINDS algorithm in order to improve false alarm and failure detection performance under the real modelling errors present in the flight data. Finally, the failure detection and false alarm performance of the FINDS algorithm were analyzed by injecting bias failures into fourteen sensor outputs over six repetitive runs of the five minutes of flight data. In general, the detection speed, failure level estimation, and false alarm performance showed a marked improvement over the previously reported simulation runs. In agreement with earlier results, detection speed was faster for filter measurement sensors such as MLS than for filter input sensors such as flight control accelerometers.
22nd Annual Logistics Conference and Exhibition
2006-04-20
Presentation: Prognostics & Health Management at GE, Dr. Piero P. Bonissone, Industrial AI Lab, GE Global Research. Recoverable slide topics: selection of detection models and anomaly detection results, failure mode histograms, anomaly detection from event-log data, and diagnostics/prognostics for failure monitoring and assessment in a tactical C4ISR sense-and-respond setting.
A Review of Transmission Diagnostics Research at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Zakajsek, James J.
1994-01-01
This paper presents a summary of the transmission diagnostics research work conducted at NASA Lewis Research Center over the last four years. In 1990, the Transmission Health and Usage Monitoring Research Team at NASA Lewis conducted a survey to determine the critical needs of the diagnostics community. Survey results indicated that experimental verification of gear and bearing fault detection methods, improved fault detection in planetary systems, and damage magnitude assessment and prognostics research were all critical to a highly reliable health and usage monitoring system. In response to this, a variety of transmission fault detection methods were applied to experimentally obtained fatigue data. Failure modes of the fatigue data include a variety of gear pitting failures, tooth wear, tooth fracture, and bearing spalling failures. Overall results indicate that, of the gear fault detection techniques, no one method can successfully detect all possible failure modes. The more successful methods need to be integrated into a single more reliable detection technique. A recently developed method, NA4, in addition to being one of the more successful gear fault detection methods, was also found to exhibit damage magnitude estimation capabilities.
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Godiwala, P. M.; Morrell, F. R.
1985-01-01
This paper presents the performance analysis results of a fault inferring nonlinear detection system (FINDS) using integrated avionics sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment. First, an overview of the FINDS algorithm structure is given. Then, aircraft state estimate time histories and statistics for the flight data sensors are discussed. This is followed by an explanation of modifications made to the detection and decision functions in FINDS to improve false alarm and failure detection performance. Next, the failure detection and false alarm performance of the FINDS algorithm are analyzed by injecting bias failures into fourteen sensor outputs over six repetitive runs of the five minutes of flight data. Results indicate that the detection speed, failure level estimation, and false alarm performance show a marked improvement over the previously reported simulation runs. In agreement with earlier results, detection speed is faster for filter measurement sensors such as MLS than for filter input sensors such as flight control accelerometers. Finally, the progress in modifications of the FINDS algorithm design to accommodate flight computer constraints is discussed.
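The bias-injection experiment described above can be mimicked in toy form. Everything here (the constant reference estimate, the noise values, the running-mean detector, and the threshold) is invented for illustration; FINDS itself uses filter-based state estimates and more elaborate detection and decision functions:

```python
def detect_bias(reference, sensor, threshold):
    """Declare a sensor failure when the running mean of the residual
    (sensor minus reference estimate) exceeds a threshold; a persistent
    bias drives the mean past the limit while zero-mean noise does not."""
    total = 0.0
    for i, (s, r) in enumerate(zip(sensor, reference), start=1):
        total += s - r
        if abs(total / i) > threshold:
            return i - 1  # index at which the failure is declared
    return None  # no failure detected

reference = [100.0] * 10                      # filter estimate (constant here)
sensor = [100.1, 99.9, 100.0, 100.2, 99.8,    # healthy samples
          105.1, 104.9, 105.0, 105.2, 104.8]  # +5 bias injected from index 5
print(detect_bias(reference, sensor, threshold=2.0))  # -> 8
```

The lag between bias onset (index 5) and the declaration (index 8) is the "detection speed" that the flight-data evaluations above measure and compare across sensor types.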
Cell Phones ≠ Self and Other Problems with Big Data Detection and Containment during Epidemics.
Erikson, Susan L
2018-03-08
Evidence from Sierra Leone reveals the significant limitations of big data in disease detection and containment efforts. Early in the 2014-2016 Ebola epidemic in West Africa, media heralded HealthMap's ability to detect the outbreak from newsfeeds. Later, big data, specifically call detail record data collected from millions of cell phones, was hyped as useful for stopping the disease by tracking contagious people. It did not work. In this article, I trace the causes of big data's containment failures. During epidemics, big data experiments can have opportunity costs: namely, forestalling urgent response. Finally, what counts as data during epidemics must include data coming from anthropological technologies, because they are so useful for detection and containment. © 2018 The Authors Medical Anthropology Quarterly published by Wiley Periodicals, Inc. on behalf of American Anthropological Association.
Denny, Diane S; Allen, Debra K; Worthington, Nicole; Gupta, Digant
2014-01-01
Delivering radiation therapy in an oncology setting is a high-risk process where system failures are more likely to occur because of increasing utilization, complexity, and sophistication of the equipment and related processes. Healthcare failure mode and effect analysis (FMEA) is a method used to proactively detect risks to the patient in a particular healthcare process and correct potential errors before adverse events occur. FMEA is a systematic, multidisciplinary team-based approach to error prevention and enhancing patient safety. We describe our experience of using FMEA as a prospective risk-management technique in radiation oncology at a national network of oncology hospitals in the United States, capitalizing not only on the use of a team-based tool but also creating momentum across a network of collaborative facilities seeking to learn from and share best practices with each other. The major steps of our analysis across 4 sites and collectively were: choosing the process and subprocesses to be studied, assembling a multidisciplinary team at each site responsible for conducting the hazard analysis, and developing and implementing actions related to our findings. We identified 5 areas of performance improvement for which risk-reducing actions were successfully implemented across our enterprise. © 2012 National Association for Healthcare Quality.
Detection of Failure in Asynchronous Motor Using Soft Computing Method
NASA Astrophysics Data System (ADS)
Vinoth Kumar, K.; Sony, Kevin; Achenkunju John, Alan; Kuriakose, Anto; John, Ano P.
2018-04-01
This paper investigates stator short-winding failures of asynchronous motors and their effects on motor current spectra. A model-based fuzzy logic approach can help detect asynchronous motor failures: fuzzy logic, in a manner similar to human reasoning, enables linguistic inference from vague data. A dynamic model of the asynchronous motor is developed with a fuzzy logic classifier to investigate stator inter-turn failures and open-phase failures. A hardware implementation was carried out with LabVIEW for online monitoring of faults.
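A minimal flavor of such a fuzzy classifier is sketched below. The input feature, the triangular membership ranges, and the class labels are all hypothetical, standing in for the stator-current spectrum analysis the paper performs:

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a to b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(distortion):
    """Map a (hypothetical) stator-current distortion feature to fuzzy
    memberships in three motor-condition classes."""
    return {
        "healthy": tri(distortion, -0.1, 0.0, 0.2),
        "inter-turn fault": tri(distortion, 0.1, 0.3, 0.5),
        "open-phase fault": tri(distortion, 0.4, 0.7, 1.1),
    }

scores = classify(0.32)
print(max(scores, key=scores.get))  # -> inter-turn fault
```

Overlapping memberships are what give the fuzzy approach its tolerance of vague, noisy measurements: a borderline reading simply yields partial membership in two classes rather than a brittle hard decision.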
NASA Technical Reports Server (NTRS)
Kaufman, Howard
1998-01-01
Many papers relevant to reconfigurable flight control have appeared over the past fifteen years. In general these have consisted of theoretical studies, simulation experiments, and in some cases actual flight tests. Results indicate that reconfiguration of flight controls is certainly feasible for a wide class of failures. However, many of the proposed procedures, although quite attractive, need further analytical and experimental study for meaningful validation. Many procedures assume the availability of failure detection and identification logic that will supply, sufficiently quickly, the dynamics corresponding to the failed aircraft. This in general implies that the failure detection and fault identification logic must have access to all possible anticipated faults and the corresponding dynamical equations of motion. Unless some form of explicit on-line parameter identification is included, the computational demands could be excessive. This suggests the need for some form of adaptive control, either by itself as the prime procedure for control reconfiguration or in conjunction with the failure detection logic. If explicit or indirect adaptive control is used, then it is important that the identified models be such that the corresponding computed controls deliver adequate performance to the actual aircraft. Unknown changes in trim should be modelled, and parameter identification needs to be adequately insensitive to noise while remaining capable of tracking abrupt changes. If, however, both failure detection and system parameter identification turn out to be too time consuming in an emergency situation, then the concepts of direct adaptive control should be considered. If direct model reference adaptive control is to be used (on a linear model) with stability assurances, then a positive real or passivity condition needs to be satisfied for all possible configurations. This condition is often satisfied with a feedforward compensator around the plant. This compensator must be robustly designed so that the compensated plant satisfies the required positive real conditions over all expected parameter values. Furthermore, with the feedforward compensator only around the plant, a nonzero (but bounded) error will exist in steady state between the plant and model outputs. This error can be removed by placing the compensator in the reference model as well. Designing such a compensator should not be too difficult, since for flight control it is generally possible to feed back all the system states.
A novel strategy for rapid detection of NT-proBNP
NASA Astrophysics Data System (ADS)
Cui, Qiyao; Sun, Honghao; Zhu, Hui
2017-09-01
To establish a simple, rapid, sensitive, and specific quantitative assay for biomarkers of heart failure, this study combined biotin-streptavidin technology with a fluorescence immunochromatographic assay to measure biomarker concentrations in serum. The method was applied to NT-proBNP, which is valuable for the diagnostic evaluation of heart failure.
Fear of failure and student athletes' interpersonal antisocial behaviour in education and sport.
Sagar, Sam S; Boardley, Ian D; Kavussanu, Maria
2011-09-01
BACKGROUND. The link between fear of failure and students' antisocial behaviour has received scant research attention despite associations between fear of failure, hostility, and aggression. Also, the effect of sport experience on antisocial behaviour has not been considered outside of the sport context in adult populations. Further, to date, sex differences have not been considered in fear of failure research. AIMS. To examine whether (a) fear of failure and sport experience predict antisocial behaviour in the university and sport contexts in student athletes, and whether this prediction is the same in males and females; and (b) sex differences exist in antisocial behaviour and fear of failure. SAMPLE. British university student athletes (n= 176 male; n= 155 female; M(age) = 20.11 years). METHOD. Participants completed questionnaires assessing fear of failure, sport experience, and antisocial behaviour in both contexts. RESULTS. (a) Fear of failure and sport experience positively predicted antisocial behaviour in university and sport and the strength of these predictions did not differ between males and females; (b) females reported higher levels of fear of devaluing one's self-estimate than males whereas males reported higher levels of fear of important others losing interest than females. Males engaged more frequently than females in antisocial behaviour in both contexts. CONCLUSIONS. Fear of failure and sport experience may be important considerations when trying to understand antisocial behaviour in student athletes in education and sport; moreover, the potential effect of overall fear of failure and of sport experience on this frequency does not differ by sex. The findings make an important contribution to the fear of failure and morality literatures. ©2010 The British Psychological Society.
Sensor failure detection for jet engines
NASA Technical Reports Server (NTRS)
Beattie, E. C.; Laprad, R. F.; Akhter, M. M.; Rock, S. M.
1983-01-01
Revisions to the advanced sensor failure detection, isolation, and accommodation (DIA) algorithm, developed under the sensor failure detection system program, were studied to eliminate the steady-state errors due to estimation filter biases. Three algorithm revisions were formulated and one was chosen for detailed evaluation. The selected version modifies the DIA algorithm to feed back the actual sensor outputs to the integral portion of the control for the no-failure case. In case of a failure, the estimate of the failed sensor's output is fed back to the integral portion. The estimator outputs are fed back to the linear regulator portion of the control at all times. The revised algorithm is evaluated and compared to the baseline algorithm developed previously.
Failure prediction of thin beryllium sheets used in spacecraft structures
NASA Technical Reports Server (NTRS)
Roschke, Paul N.; Mascorro, Edward; Papados, Photios; Serna, Oscar R.
1991-01-01
The primary objective of this study is to develop a method for prediction of failure of thin beryllium sheets that undergo complex states of stress. Major components of the research include experimental evaluation of strength parameters for cross-rolled beryllium sheet, application of the Tsai-Wu failure criterion to plate bending problems, development of a higher-order failure criterion, application of the new criterion to a variety of structures, and incorporation of both failure criteria into a finite element code. A Tsai-Wu failure model for SR-200 sheet material is developed from available tensile data, experiments carried out by NASA on two circular plates, and compression and off-axis experiments performed in this study. The failure surface obtained from the resulting criterion forms an ellipsoid. By supplementing the experimental data used in the two-dimensional criterion and modifying previously suggested failure criteria, a multi-dimensional failure surface is proposed for thin beryllium structures. The new criterion for orthotropic material is represented by a failure surface in six-dimensional stress space. In order to determine the coefficients of the governing equation, a number of uniaxial, biaxial, and triaxial experiments are required. Details of these experiments and a complementary ultrasonic investigation are described in detail. Finally, the validity of the criterion and the newly determined mechanical properties is established through experiments on structures composed of SR-200 sheet material. These experiments include a plate-plug arrangement under a complex state of stress and a series of plates with an out-of-plane central point load. Both criteria have been incorporated into a general purpose finite element analysis code. The numerical simulation incrementally applies loads to the structural component being designed and checks each nodal point in the model for exceedance of a failure criterion.
If stresses at all locations do not exceed the failure criterion, the load is increased and the process is repeated. Failure results for the plate-plug and clamped plate tests are accurate to within 2 percent.
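The plane-stress form of the Tsai-Wu criterion that serves as the study's starting point can be sketched as follows. The strength values below are invented placeholders, not SR-200 data, and the F12 interaction term uses a common estimate rather than the coefficients determined experimentally in the study:

```python
def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Plane-stress Tsai-Wu failure index; failure is predicted when the
    index reaches 1. Xt/Xc: tensile/compressive strengths along axis 1,
    Yt/Yc: along axis 2, S: shear strength. F12 is approximated as
    -0.5*sqrt(F11*F22) in the absence of biaxial test data."""
    F1 = 1.0 / Xt - 1.0 / Xc
    F2 = 1.0 / Yt - 1.0 / Yc
    F11 = 1.0 / (Xt * Xc)
    F22 = 1.0 / (Yt * Yc)
    F66 = 1.0 / S ** 2
    F12 = -0.5 * (F11 * F22) ** 0.5
    return (F1 * s1 + F2 * s2 + F11 * s1 ** 2 + F22 * s2 ** 2
            + F66 * t12 ** 2 + 2.0 * F12 * s1 * s2)

# Placeholder strengths (MPa) and an in-plane stress state (s1, s2, shear)
idx = tsai_wu_index(200.0, 50.0, 30.0,
                    Xt=345.0, Xc=270.0, Yt=345.0, Yc=270.0, S=200.0)
print(round(idx, 2))  # -> 0.17, below 1: no failure predicted
```

The incremental finite element procedure described above amounts to scaling the applied load, re-evaluating an index like this at every nodal point, and stopping when any point reaches 1.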
NASA Astrophysics Data System (ADS)
Simioni, Stephan; Sidler, Rolf; Dual, Jürg; Schweizer, Jürg
2015-04-01
Avalanche control by explosives is among the key temporary preventive measures. Yet, little is known about the mechanism involved in releasing avalanches by the effect of an explosion. Here, we test the hypothesis that the stress induced by acoustic waves exceeds the strength of weak snow layers. Consequently the snow fails and the onset of rapid crack propagation might finally lead to the release of a snow slab avalanche. We performed experiments with explosive charges over a snowpack. We installed microphones above the snowpack to measure near-surface air pressure and accelerometers within three snow pits. We also recorded the pit walls of each pit with high-speed cameras to detect weak layer failure. Empirical relationships and a priori information from ice and air were used to characterize a porous layered model from density measurements of snow profiles in the snow pits. This model was used to perform two-dimensional numerical simulations of wave propagation in Biot-type porous material. Locations of snow failure were identified in the simulation by comparing the axial and deviatoric stress fields of the simulation to the corresponding snow strength. The identified snow failure locations corresponded well with the observed failure locations in the experiment. The acceleration measured in the snowpack best correlated with the modeled acceleration of the fluid relative to the ice frame. Even though the near field of the explosion is expected to be governed by non-linear effects, such as the observed supersonic wave propagation in the air above the snow surface, the results of the linear poroelastic simulation fit well with the measured air pressure and snowpack accelerations. The results of this comparison are an important step towards quantifying the effectiveness of avalanche control by explosives.
NASA Technical Reports Server (NTRS)
Mesloh, Nick; Hill, Tim; Kosyk, Kathy
1993-01-01
This paper presents the integrated approach toward failure detection, isolation, and recovery/reconfiguration to be used for the Space Station Freedom External Active Thermal Control System (EATCS). The on-board and on-ground diagnostic capabilities of the EATCS are discussed. Time and safety critical features, as well as noncritical failures, and the detection coverage for each provided by existing capabilities are reviewed. The allocation of responsibility between on-board software and ground-based systems, to be shown during ground testing at the Johnson Space Center, is described. Failure isolation capabilities allocated to the ground include some functionality originally found on orbit but moved to the ground to reduce on-board resource requirements. Complex failures requiring the analysis of multiple external variables, such as environmental conditions, heat loads, or station attitude, are also allocated to ground personnel.
EVALUATION OF SAFETY IN A RADIATION ONCOLOGY SETTING USING FAILURE MODE AND EFFECTS ANALYSIS
Ford, Eric C.; Gaudette, Ray; Myers, Lee; Vanderver, Bruce; Engineer, Lilly; Zellars, Richard; Song, Danny Y.; Wong, John; DeWeese, Theodore L.
2013-01-01
Purpose Failure mode and effects analysis (FMEA) is a widely used tool for prospectively evaluating safety and reliability. We report our experiences in applying FMEA in the setting of radiation oncology. Methods and Materials We performed an FMEA analysis for our external beam radiation therapy service, which consisted of the following tasks: (1) create a visual map of the process; (2) identify possible failure modes and assign a risk priority number (RPN) to each failure mode based on tabulated scores for the severity, frequency of occurrence, and detectability, each on a scale of 1 to 10; and (3) identify improvements that are both feasible and effective. RPN scores can span a range of 1 to 1000, with higher scores indicating the relative importance of a given failure mode. Results Our process map consisted of 269 different nodes. We identified 127 possible failure modes with RPN scores ranging from 2 to 160. Fifteen of the top-ranked failure modes, representing RPN scores of 75 or more, were considered for process improvements. These improvement suggestions were incorporated into our practice, with review and implementation by each department team responsible for the process. Conclusions The FMEA technique provides a systematic method for finding vulnerabilities in a process before they result in an error. The FMEA framework can naturally incorporate further quantification and monitoring. A general-use system for incident and near-miss reporting would be useful in this regard. PMID:19409731
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Motyka, P.; Wagner, E.; Hall, S. R.
1986-01-01
The performance of the orthogonal series generalized likelihood ratio (OSGLR) test in detecting and isolating commercial aircraft control surface and actuator failures is evaluated. A modification to incorporate age-weighting which significantly reduces the sensitivity of the algorithm to modeling errors is presented. The steady-state implementation of the algorithm based on a single linear model valid for a cruise flight condition is tested using a nonlinear aircraft simulation. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection and isolation performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling on dynamic pressure and flap deflection is examined. Based on this testing, the OSGLR algorithm should be capable of detecting control surface failures that would affect the safe operation of a commercial aircraft. Isolation may be difficult if there are several surfaces which produce similar effects on the aircraft. Extending the algorithm over the entire operating envelope of a commercial aircraft appears feasible.
Effectiveness of back-to-back testing
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.; Eckhardt, David E.; Caglayan, Alper; Kelly, John P. J.
1987-01-01
Three models of back-to-back testing processes are described. Two models treat the case where there is no intercomponent failure dependence. The third model describes the more realistic case where there is correlation among the failure probabilities of the functionally equivalent components. The theory indicates that back-to-back testing can, under the right conditions, provide a considerable gain in software reliability. The models are used to analyze the data obtained in a fault-tolerant software experiment. It is shown that the expected gain is indeed achieved, and exceeded, provided the intercomponent failure dependence is sufficiently small. However, even with relatively high correlation, the use of several functionally equivalent components coupled with back-to-back testing may provide a considerable reliability gain. This finding implies that multiversion software development is a feasible and cost-effective approach to providing highly reliable software components for fault-tolerant software systems, on condition that special attention is directed at early detection and elimination of correlated faults.
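The back-to-back testing process itself can be sketched in a few lines: run functionally equivalent versions on the same inputs and flag any disagreement as a suspected fault. The three toy "versions" below (integer midpoint, one with a deliberate bug on negative ranges) are hypothetical stand-ins, not components from the experiment:

```python
import random

# Three independently written versions of the same function. version_c
# carries a deliberate bug on negative ranges so back-to-back testing
# has something to expose.
def version_a(lo, hi):
    return lo + (hi - lo) // 2

def version_b(lo, hi):
    return (lo + hi) // 2

def version_c(lo, hi):  # buggy when lo is negative
    return (lo + hi) // 2 if lo >= 0 else lo

def back_to_back(versions, inputs):
    """Return the inputs on which the functionally equivalent versions disagree."""
    discrepancies = []
    for args in inputs:
        outputs = {v(*args) for v in versions}
        if len(outputs) > 1:        # any disagreement flags a suspected fault
            discrepancies.append(args)
    return discrepancies

random.seed(0)
cases = [(random.randint(-100, 100), random.randint(100, 200)) for _ in range(50)]
print(len(back_to_back([version_a, version_b, version_c], cases)))
```

Correlated faults, the caveat in the abstract, are exactly the cases where all versions agree on a wrong answer, which this comparison cannot catch.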
Conflict adaptation in positive and negative mood: Applying a success-failure manipulation.
Schuch, Stefanie; Zweerings, Jana; Hirsch, Patricia; Koch, Iring
2017-05-01
Conflict adaptation is a cognitive mechanism denoting increased cognitive control upon detection of conflict. This mechanism can be measured by the congruency sequence effect, indicating the reduction of congruency effects after incongruent trials (where response conflict occurs) relative to congruent trials (without response conflict). Several studies have reported increased conflict adaptation under negative, as compared to positive, mood. In these studies, sustained mood states were induced by film clips or music combined with imagination techniques; these kinds of mood manipulations are highly obvious, possibly distorting the actual mood states experienced by the participants. Here, we report two experiments where mood states were induced in a less obvious way, and with higher ecological validity. Participants received success or failure feedback on their performance in a bogus intelligence test, and this mood manipulation proved highly effective. We largely replicated previous findings of larger conflict adaptation under negative mood than under positive mood, both with a Flanker interference paradigm (Experiment 1) and a Stroop-like interference paradigm (Experiment 2). Results are discussed with respect to current theories on affective influences on cognitive control. Copyright © 2017 Elsevier B.V. All rights reserved.
Mohammadi, Abdolreza Rashidi; Chen, Keqin; Ali, Mohamed Sultan Mohamed; Takahata, Kenichi
2011-12-15
The rupture of a cerebral aneurysm is the most common cause of subarachnoid hemorrhage. Endovascular embolization of the aneurysms by implantation of Guglielmi detachable coils (GDC) has become a major treatment approach in the prevention of a rupture. Implantation of the coils induces formation of tissues over the coils, embolizing the aneurysm. However, blood entry into the coiled aneurysm often occurs due to failures in the embolization process. Current diagnostic methods used for aneurysms, such as X-ray angiography and computed tomography, are ineffective for continuous monitoring of the disease and require extremely expensive equipment. Here we present a novel technique for wireless monitoring of cerebral aneurysms using implanted embolization coils as radiofrequency resonant sensors that detect the blood entry. The experiments show that commonly used embolization coils could be utilized as electrical inductors or antennas. As the blood flows into a coil-implanted aneurysm, the parasitic capacitance of the coil is modified because of the difference in permittivity between the blood and the tissues grown around the coil, resulting in a change in the coil's resonant frequency. The resonances of platinum GDC-like coils embedded in aneurysm models are detected to show average responses of 224-819 MHz/ml to saline injected into the models. This preliminary demonstration indicates a new possibility in the use of implanted GDC as a wireless sensor for embolization failures, the first step toward realizing long-term, noninvasive, and cost-effective remote monitoring of cerebral aneurysms treated with coil embolization. Copyright © 2011 Elsevier B.V. All rights reserved.
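The sensing principle rests on the LC resonance relation f = 1/(2π√(LC)): blood entering the coiled aneurysm raises the coil's parasitic capacitance, lowering its resonant frequency. A sketch with hypothetical order-of-magnitude component values (the paper reports measured frequency responses, not L and C):

```python
import math

def resonant_freq_hz(inductance_h, capacitance_f):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Hypothetical values for an implanted coil, chosen only to land in the
# hundreds-of-MHz range the paper measures.
L = 100e-9            # 100 nH coil inductance
C_tissue = 1.0e-12    # parasitic capacitance with tissue around the coil
C_blood = 1.3e-12     # blood's higher permittivity raises the capacitance

f_tissue = resonant_freq_hz(L, C_tissue)
f_blood = resonant_freq_hz(L, C_blood)
print(f"shift: {(f_tissue - f_blood) / 1e6:.1f} MHz")  # blood entry lowers f
```

Tracking that downward frequency shift wirelessly is what turns the coil into a passive sensor for embolization failure.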
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
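As a rough illustration of why gossip-based consensus scales logarithmically, the toy simulation below pushes a failed-process list to a random peer each cycle and counts cycles until every alive process holds the full list. It is a sketch of push gossip in general, not of the three algorithms in the paper:

```python
import math
import random

def gossip_consensus_cycles(n_alive, n_failed, seed=0):
    """Count push-gossip cycles until every alive process knows the full
    list of failed processes (global consensus)."""
    rng = random.Random(seed)
    failed = set(range(n_alive, n_alive + n_failed))
    knowledge = [set() for _ in range(n_alive)]
    knowledge[0] = set(failed)   # only process 0 has detected the failures
    cycles = 0
    while any(k != failed for k in knowledge):
        for i in range(n_alive):
            peer = rng.randrange(n_alive - 1)
            peer += peer >= i                  # any peer except ourselves
            knowledge[peer] |= knowledge[i]    # push our suspicion list
        cycles += 1
    return cycles

for n in (16, 256, 4096):
    print(n, gossip_consensus_cycles(n, n_failed=4), math.ceil(math.log2(n)))
```

The cycle count grows roughly with log2 of the system size, which is the scaling behaviour the paper reports for its algorithms.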
A Fault Tolerant System for an Integrated Avionics Sensor Configuration
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Lancraft, R. E.
1984-01-01
An aircraft sensor fault tolerant system methodology for the Transport Systems Research Vehicle in a Microwave Landing System (MLS) environment is described. The fault tolerant system provides reliable estimates in the presence of possible failures both in ground-based navigation aids and in on-board flight control and inertial sensors. Sensor failures are identified by utilizing the analytic relationships between the various sensors arising from the aircraft point-mass equations of motion. The estimation and failure detection performance of the software implementation (called FINDS) of the developed system was analyzed on a nonlinear digital simulation of the research aircraft. Simulation results showing the detection performance of FINDS, using a dual-redundant sensor complement, are presented for bias, hardover, null, ramp, increased-noise and scale-factor failures. In general, the results show that FINDS can distinguish between normal operating sensor errors and failures while providing excellent detection speed for bias failures in the MLS, indicated airspeed, attitude and radar altimeter sensors.
A cohort study of Chlamydia trachomatis treatment failure in women: a study protocol
2013-01-01
Background Chlamydia trachomatis is the most commonly diagnosed bacterial sexually transmitted infection in the developed world and diagnosis rates have increased dramatically over the last decade. Repeat infections of chlamydia are very common and may represent re-infection from an untreated partner or treatment failure. The aim of this cohort study is to estimate the proportion of women infected with chlamydia who experience treatment failure after treatment with 1 gram azithromycin. Methods/design This cohort study will follow women diagnosed with chlamydia for up to 56 days post treatment. Women will provide weekly genital specimens for further assay. The primary outcome is the proportion of women who are classified as having treatment failure 28, 42 or 56 days after recruitment. Comprehensive sexual behavior data collection and the detection of Y chromosome DNA and high discriminatory chlamydial genotyping will be used to differentiate between chlamydia re-infection and treatment failure. Azithromycin levels in high-vaginal specimens will be measured using a validated liquid chromatography – tandem mass spectrometry method to assess whether poor azithromycin absorption could be a cause of treatment failure. Chlamydia culture and minimal inhibitory concentrations will be performed to further characterize the chlamydia infections. Discussion Distinguishing between treatment failure and re-infection is important in order to refine treatment recommendations and focus infection control mechanisms. If a large proportion of repeat chlamydia infections are due to antibiotic treatment failure, then international recommendations on chlamydia treatment may need to be re-evaluated. If most are re-infections, then strategies to expedite partner treatment are necessary. PMID:23957327
Liquid and Solid Metal Embrittlement.
1981-09-05
example, embrittlement of AISI 4140 steel begins at T/Tm ≈ 0.75 for cadmium, and 0.85 for lead and tin environments (2). In a few cases, e.g. zinc...has recently proposed, however, that liquid zinc can penetrate to very near the tip of a sharp crack in 4140 steel, based upon both direct observation...long could be detected, was observed in delayed failure experiments on unnotched 4140 steel, in the quenched and tempered condition, embrittled by
CrossTalk: The Journal of Defense Software Engineering. Volume 24, Number 2, March/April 2011
2011-04-01
and insider attacks, we plan to conduct experiments and collect concrete and empirical evidence. As we have done in prior research projects [11...subsequent service failure." Yet, a faulty state can continue to render service; an erroneous state cannot. Consider a system that receives concrete ...that does not satisfy specifications. The faults in the concrete are not detected during (faulty) acceptance testing. A two-deck bridge is built using
Advanced Diagnostic System on Earth Observing One
NASA Technical Reports Server (NTRS)
Hayden, Sandra C.; Sweet, Adam J.; Christa, Scott E.; Tran, Daniel; Shulman, Seth
2004-01-01
In this infusion experiment, the Livingstone 2 (L2) model-based diagnosis engine, developed by the Computational Sciences division at NASA Ames Research Center, has been uploaded to the Earth Observing One (EO-1) satellite. L2 is integrated with the Autonomous Sciencecraft Experiment (ASE), which provides an on-board planning capability and a software bridge to the spacecraft's 1773 data bus. Using a model of the spacecraft subsystems, L2 predicts nominal state transitions initiated by control commands, monitors the spacecraft sensors, and, in the case of failure, isolates the fault based on the discrepant observations. Fault detection and isolation is done by determining a set of component modes, including most likely failures, which satisfy the current observations. All mode transitions and diagnoses are telemetered to the ground for analysis. The initial L2 model is scoped to EO-1's imaging instruments and solid state recorder. Diagnostic scenarios for EO-1's nominal imaging timeline are demonstrated by injecting simulated faults on board the spacecraft. The solid state recorder stores the science images and also hosts the experiment software. The main objective of the experiment is to mature the L2 technology to Technology Readiness Level (TRL) 7. Experiment results are presented, as well as a discussion of the challenging technical issues encountered. Future extensions may explore coordination with the planner, and model-based ground operations.
Corrosivity Sensor for Exposed Pipelines Based on Wireless Energy Transfer
Lawand, Lydia; Shiryayev, Oleg; Al Handawi, Khalil; Vahdati, Nader; Rostron, Paul
2017-01-01
External corrosion has been identified as one of the main causes of pipeline failures worldwide. A solution that addresses the issue of detecting and quantifying the corrosivity of the environment, for application to existing exposed pipelines, has been developed. It consists of a sensing array made of an assembly of thin strips of pipeline steel and a circuit that provides a visual sensor reading to the operator. The proposed sensor is passive and does not require a constant power supply. The circuit design was validated through simulations and lab experiments. An accelerated corrosion experiment was conducted to confirm the feasibility of the proposed corrosivity sensor design. PMID:28556805
Speedy routing recovery protocol for large failure tolerance in wireless sensor networks.
Lee, Joa-Hyoung; Jung, In-Bum
2010-01-01
Wireless sensor networks are expected to play an increasingly important role in data collection in hazardous areas. However, the physical fragility of a sensor node makes reliable routing in hazardous areas a challenging problem. Because several sensor nodes in a hazardous area could be damaged simultaneously, the network should be able to recover routing after node failures over large areas. Many routing protocols take single-node failure recovery into account, but it is difficult for these protocols to recover the routing after large-scale failures. In this paper, we propose a routing protocol, referred to as ARF (Adaptive routing protocol for fast Recovery from large-scale Failure), to recover a network quickly after failures over large areas. ARF detects failures by counting the packet losses from parent nodes, and upon failure detection, it decreases the routing interval to notify the neighbor nodes of the failure. Our experimental results indicate that ARF can recover from large-area failures quickly, with fewer packets and less energy consumption than previous protocols.
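The two mechanisms the abstract names, detecting a parent's failure by counting its packet losses and shrinking the routing interval after detection, can be sketched as follows. The loss threshold and interval policy here are hypothetical; the paper's constants are not given in this abstract:

```python
def detect_parent_failure(received_flags, loss_threshold=3):
    """Declare the parent node failed after `loss_threshold` consecutive
    missed packets (True = packet arrived, False = packet lost).
    Returns the index at which failure is declared, or None."""
    consecutive = 0
    for t, ok in enumerate(received_flags):
        consecutive = 0 if ok else consecutive + 1
        if consecutive >= loss_threshold:
            return t
    return None

def next_routing_interval(interval_s, failure_detected, min_s=1.0, max_s=64.0):
    """After a failure, shorten the routing-update interval so neighbor
    nodes learn of the failure quickly; otherwise back off to save energy."""
    if failure_detected:
        return max(min_s, interval_s / 4)
    return min(max_s, interval_s * 2)

print(detect_parent_failure([True, True, False, False, False]))  # -> 4
```

Shortening the routing interval only upon detection is what lets the protocol react quickly to large-area failures without paying the energy cost of frequent updates in normal operation.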
Failure detection and identification for a reconfigurable flight control system
NASA Technical Reports Server (NTRS)
Dallery, Francois
1987-01-01
Failure detection and identification logic for a fault-tolerant longitudinal control system were investigated. Aircraft dynamics were based upon the cruise condition for a hypothetical transonic business jet transport configuration. The fault-tolerant control system consists of conventional control and estimation plus a new outer loop containing failure detection, identification, and reconfiguration (FDIR) logic. It is assumed that the additional logic has access to all measurements, as well as to the outputs of the control and estimation logic. The pilot may also command the FDIR logic to perform special tests.
NASA Technical Reports Server (NTRS)
1976-01-01
Analytic techniques have been developed for detecting and identifying abrupt changes in dynamic systems. The GLR technique monitors the output of the Kalman filter and searches for the time at which the failure occurred, thus allowing it to be sensitive to new data and consequently increasing the chances of fast system recovery following detection of a failure. All failure detections are based on functional redundancy. Performance tests of the F-8 aircraft flight control system and computerized modelling of the technique are presented.
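A scalar version of the idea can be sketched: under no failure the Kalman-filter residuals are zero-mean white noise, and the GLR statistic searches over candidate failure times for a step change in the residual mean. This is a generic GLR sketch on simulated residuals, not the F-8 implementation:

```python
import random

def glr_step_detector(residuals, sigma, threshold):
    """Generalized likelihood ratio test for an abrupt mean jump in
    Kalman-filter residuals. Returns (detection_time, estimated_onset)
    or (None, None). Searching over all candidate failure times theta is
    what keeps the test sensitive to new data."""
    for k in range(len(residuals)):
        best, best_theta = 0.0, None
        for theta in range(k + 1):
            window = residuals[theta:k + 1]
            stat = sum(window) ** 2 / (sigma ** 2 * len(window))
            if stat > best:
                best, best_theta = stat, theta
        if best > threshold:
            return k, best_theta
    return None, None

random.seed(1)
sigma = 1.0
clean = [random.gauss(0, sigma) for _ in range(60)]
faulty = clean[:40] + [r + 3.0 for r in clean[40:]]   # bias failure at t = 40
print(glr_step_detector(faulty, sigma, threshold=25.0))
```

A 3-sigma bias is flagged within a few samples of onset, while the same threshold leaves the clean residual sequence alone; production implementations bound the search window to keep the cost per step constant.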
Kapich, Davorin D.
1987-01-01
A bearing system includes backup bearings for supporting a rotating shaft upon failure of primary bearings. In the preferred embodiment, the backup bearings are rolling element bearings having their rolling elements disposed out of contact with their associated respective inner races during normal functioning of the primary bearings. Displacement detection sensors are provided for detecting displacement of the shaft upon failure of the primary bearings. Upon detection of the failure of the primary bearings, the rolling elements and inner races of the backup bearings are brought into mutual contact by axial displacement of the shaft.
Failure detection system risk reduction assessment
NASA Technical Reports Server (NTRS)
Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)
2012-01-01
A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.
Data-Driven Anomaly Detection Performance for the Ares I-X Ground Diagnostic Prototype
NASA Technical Reports Server (NTRS)
Martin, Rodney A.; Schwabacher, Mark A.; Matthews, Bryan L.
2010-01-01
In this paper, we will assess the performance of a data-driven anomaly detection algorithm, the Inductive Monitoring System (IMS), which can be used to detect simulated Thrust Vector Control (TVC) system failures. However, the ability of IMS to detect these failures in a true operational setting may be related to the realistic nature of how they are simulated. As such, we will investigate both a low fidelity and high fidelity approach to simulating such failures, with the latter based upon the underlying physics. Furthermore, the ability of IMS to detect anomalies that were previously unknown and not previously simulated will be studied in earnest, as well as apparent deficiencies or misapplications that result from using the data-driven paradigm. Our conclusions indicate that robust detection performance of simulated failures using IMS is not appreciably affected by the use of a high fidelity simulation. However, we have found that the inclusion of a data-driven algorithm such as IMS into a suite of deployable health management technologies does add significant value.
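In the data-driven paradigm the abstract describes, IMS learns nominal behaviour from archived data and scores new samples by their deviation from it. A heavily simplified nearest-neighbour sketch (the sensor values are hypothetical, and the real IMS clusters its training data rather than keeping raw points):

```python
import math

def anomaly_score(sample, nominal_data):
    """Simplified IMS-style monitoring: score a new sample by its Euclidean
    distance to the nearest point in the nominal training set; large
    distances flag off-nominal (possibly failed) behaviour."""
    return min(math.dist(sample, x) for x in nominal_data)

# Hypothetical 2-D readings (e.g. TVC actuator current, position error)
# recorded during nominal operation.
nominal = [(1.0, 0.10), (1.1, 0.12), (0.95, 0.09), (1.05, 0.11)]

print(anomaly_score((1.02, 0.10), nominal))   # near nominal -> small score
print(anomaly_score((2.50, 0.90), nominal))   # simulated failure -> large score
```

The paper's caveat follows directly from this construction: the detector can only flag departures from the nominal envelope it was trained on, so the fidelity of simulated failures matters less than the coverage of the nominal data.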
The embodiment of success and failure as forward versus backward movements.
Robinson, Michael D; Fetterman, Adam K
2015-01-01
People often speak of success (e.g., "advance") and failure (e.g., "setback") as if they were forward versus backward movements through space. Two experiments sought to examine whether grounded associations of this type influence motor behavior. In Experiment 1, participants categorized success versus failure words by moving a joystick forward or backward. Failure categorizations were faster when moving backward, whereas success categorizations were faster when moving forward. Experiment 2 removed the requirement to categorize stimuli and used a word rehearsal task instead. Even without Experiment 1's response procedures, a similar cross-over interaction was obtained (e.g., failure memorizations sped backward movements relative to forward ones). The findings are novel yet consistent with theories of embodied cognition and self-regulation.
Think Again: First Do No Harm: A Case of Munchausen Syndrome by Proxy.
Yalndağ-Öztürk, Nilüfer; Erkek, Nilgün; Şirinoğlu, Melis Bayram
2015-10-01
Apparent life-threatening events caused by Munchausen syndrome by proxy (MSP) are rare but difficult to resolve medically. Failure to properly diagnose MSP can lead to further abuse by the caregiver and increase the risk of complications due to long hospital stays and invasive tests. In this paper, we describe our experiences with a baby who was ultimately diagnosed with MSP, including our initial failure to find a pathology, the delay of the MSP diagnosis, our growing suspicion of MSP despite technical setbacks, and our actions after we confirmed MSP as the cause of his hospitalizations. We also describe the difficulties of diagnosing MSP compared to more traditional problems and explain a series of precautions and guidelines to help detect it in a timely manner.
A Voyager attitude control perspective on fault tolerant systems
NASA Technical Reports Server (NTRS)
Rasmussen, R. D.; Litty, E. C.
1981-01-01
In current spacecraft design, a trend can be observed toward achieving greater fault tolerance through the application of on-board software dedicated to detecting and isolating failures. Whether fault tolerance through software can meet the desired objectives depends on very careful consideration and control of the system in which the software is embedded. The investigation reported here aims to provide some of the insight needed for the required analysis of the system. The techniques developed in this connection during the development of the Voyager spacecraft are described. The Voyager Galileo Attitude and Articulation Control Subsystem (AACS) fault tolerant design is discussed to emphasize basic lessons learned from this experience. The central driver of hardware redundancy implementation on Voyager was known as the 'single point failure criterion'.
NASA Astrophysics Data System (ADS)
Yfantis, G.; Carvajal, H. E.; Pytharouli, S.; Lunn, R. J.
2013-12-01
A number of published studies use seismic sensors to understand the physics involved in slope deformation. In this research we artificially induce failure in two metre-scale slopes in the field and use 12 short-period 3D seismometers to monitor the failure. To our knowledge there have been no previous controlled experiments that allow calibration and validation of the interpreted seismic signals. Inside the body of one of the artificial landslides we embed a pile of glass shards. During movement the pile deforms, emitting seismic signals due to friction among the glass shards. Our aim is twofold: first, we investigate whether the seismic sensors can record precursory and failure signals; second, we test our hypothesis that the glass shards produce seismic signals with higher amplitudes and a distinct frequency pattern, compared to those emitted by common landslide seismicity and local background noise. Two vertical faces, 2 m high, were excavated 3 m apart in highly porous tropical clay. This highly attenuating material makes the detection of weak seismic signals challenging. Slope failure was induced by increasing the vertical load at the landslide's crown. Special care was taken in the design of all experimental procedures not to add to the area's seismic noise. Measurements took place over 18 hours (during afternoon and night) without any change in soil or weather conditions. The 3D sensors were placed on the ground surface close to the crown, forming a dense microseismic network with 5-to-10 m spacing and two nanoseismic arrays, with aperture sizes of 10 and 20 m. This design allowed a direct comparison of the recorded signals emitted by the two landslides. The two faces failed under loads between 70 and 100 kN, and as a result the pile of glass shards was horizontally deformed, allowing differential movement between the shards. After the main failure both landslides continued to deform due to soil compaction and horizontal displacement.
We apply signal processing techniques to identify and locate the emitted signals related to slope movement, despite high background noise levels and highly attenuating geological conditions. Results were ground-truthed by visual observations. Our study shows that short-period seismic sensors can successfully monitor the brittle behaviour of dry clays for deformations larger than 1 centimetre, as well as weak ground failures. The use of glass, or any other coarse and brittle material, has advantages over soil alone, since the friction among the glass shards produces a more distinct frequency pattern. This makes detection of slope movements easier in heterogeneous environments where signals are emitted by movements of different material types, as well as in areas characterised by high background noise levels. Our results provide information on slope behaviour, a powerful tool for geotechnical engineering applications.
Mikulincer, M
1986-12-01
Following the learned helplessness paradigm, I assessed in this study the effects of global and specific attributions for failure on the generalization of performance deficits in a dissimilar situation. Helplessness training consisted of experience with noncontingent failures on four cognitive discrimination problems attributed to either global or specific causes. Experiment 1 found that performance in a dissimilar situation was impaired following exposure to globally attributed failure. Experiment 2 examined the behavioral effects of the interaction between stable and global attributions of failure. Exposure to unsolvable problems resulted in reduced performance in a dissimilar situation only when failure was attributed to global and stable causes. Finally, Experiment 3 found that learned helplessness deficits were a product of the interaction of global and internal attribution. Performance deficits following unsolvable problems were recorded when failure was attributed to global and internal causes. Results were discussed in terms of the reformulated learned helplessness model.
Simulation Assisted Risk Assessment Applied to Launch Vehicle Conceptual Design
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Go, Susie; Gee, Ken; Lawrence, Scott
2008-01-01
A simulation-based risk assessment approach is presented and is applied to the analysis of abort during the ascent phase of a space exploration mission. The approach utilizes groupings of launch vehicle failures, referred to as failure bins, which are mapped to corresponding failure environments. Physical models are used to characterize the failure environments in terms of the risk due to blast overpressure, resulting debris field, and the thermal radiation due to a fireball. The resulting risk to the crew is dynamically modeled by combining the likelihood of each failure, the severity of the failure environments as a function of initiator and time of the failure, the robustness of the crew module, and the warning time available due to early detection. The approach is shown to support the launch vehicle design process by characterizing the risk drivers and identifying regions where failure detection would significantly reduce the risk to the crew.
Robust failure detection filters. M.S. Thesis
NASA Technical Reports Server (NTRS)
Sanmartin, A. M.
1985-01-01
The robustness of detection filters applied to the detection of actuator failures on a free-free beam is analyzed. This analysis is based on computer simulation tests of the detection filters in the presence of different types of model mismatch, and on frequency response functions of the transfers corresponding to the model mismatch. The robustness of detection filters based on a model of the beam containing a large number of structural modes varied dramatically with the placement of some of the filter poles. The dynamics of these filters were very hard to analyze. The design of detection filters with a number of modes equal to the number of sensors was trivial. They can be configured to detect any number of actuator failure events. The dynamics of these filters were very easy to analyze and their robustness properties were much improved. A change of the output transformation allowed the filter to perform satisfactorily with realistic levels of model mismatch.
Creating and evaluating a data-driven curriculum for central venous catheter placement.
Duncan, James R; Henderson, Katherine; Street, Mandie; Richmond, Amy; Klingensmith, Mary; Beta, Elio; Vannucci, Andrea; Murray, David
2010-09-01
Central venous catheter placement is a common procedure with a high incidence of error. Other fields requiring high reliability have used Failure Mode and Effects Analysis (FMEA) to prioritize quality and safety improvement efforts. To use FMEA in the development of a formal, standardized curriculum for central venous catheter training. We surveyed interns regarding their prior experience with central venous catheter placement. A multidisciplinary team used FMEA to identify high-priority failure modes and to develop online and hands-on training modules to decrease the frequency, diminish the severity, and improve the early detection of these failure modes. We required new interns to complete the modules and tracked their progress using multiple assessments. Survey results showed new interns had little prior experience with central venous catheter placement. Using FMEA, we created a curriculum that focused on planning and execution skills and identified 3 priority topics: (1) retained guidewires, which led to training on handling catheters and guidewires; (2) improved needle access, which prompted the development of an ultrasound training module; and (3) catheter-associated bloodstream infections, which were addressed through training on maximum sterile barriers. Each module included assessments that measured progress toward recognition and avoidance of common failure modes. Since introducing this curriculum, the number of retained guidewires has fallen more than 4-fold. Rates of catheter-associated infections have not yet declined, and it will take time before ultrasound training will have a measurable effect. The FMEA provided a process for curriculum development. Precise definitions of failure modes for retained guidewires facilitated development of a curriculum that contributed to a dramatic decrease in the frequency of this complication. 
Although infections and access complications have not yet declined, failure mode identification, curriculum development, and monitored implementation show substantial promise for improving patient safety during placement of central venous catheters.
HIV resistance testing and detected drug resistance in Europe.
Schultze, Anna; Phillips, Andrew N; Paredes, Roger; Battegay, Manuel; Rockstroh, Jürgen K; Machala, Ladislav; Tomazic, Janez; Girard, Pierre M; Januskevica, Inga; Gronborg-Laut, Kamilla; Lundgren, Jens D; Cozzi-Lepri, Alessandro
2015-07-17
To describe regional differences and trends in resistance testing among individuals experiencing virological failure and the prevalence of detected resistance among those individuals who had a genotypic resistance test done following virological failure. Multinational cohort study. Individuals in EuroSIDA with virological failure (>1 RNA measurement >500 on ART after >6 months on ART) after 1997 were included. Adjusted odds ratios (aORs) for resistance testing following virological failure and aORs for the detection of resistance among those who had a test were calculated using logistic regression with generalized estimating equations. Compared to 74.2% of ART-experienced individuals in 1997, only 5.1% showed evidence of virological failure in 2012. The odds of resistance testing declined after 2004 (global P < 0.001). Resistance was detected in 77.9% of the tests, NRTI resistance being most common (70.3%), followed by NNRTI (51.6%) and protease inhibitor (46.1%) resistance. The odds of detecting resistance were lower in tests done in 1997-1998, 1999-2000 and 2009-2010, compared to those carried out in 2003-2004 (global P < 0.001). Resistance testing was less common in Eastern Europe [aOR 0.72, 95% confidence interval (CI) 0.55-0.94] compared to Southern Europe, whereas the detection of resistance given that a test was done was less common in Northern (aOR 0.29, 95% CI 0.21-0.39) and Central Eastern (aOR 0.47, 95% CI 0.29-0.76) Europe, compared to Southern Europe. Despite a concurrent decline in virological failure and testing, drug resistance was commonly detected. This suggests a selective approach to resistance testing. The regional differences identified indicate that policy aiming to minimize the emergence of resistance is of particular relevance in some European regions, notably in the countries in Eastern Europe.
Real-Time Detection of Infusion Site Failures in a Closed-Loop Artificial Pancreas.
Howsmon, Daniel P; Baysal, Nihat; Buckingham, Bruce A; Forlenza, Gregory P; Ly, Trang T; Maahs, David M; Marcal, Tatiana; Towers, Lindsey; Mauritzen, Eric; Deshpande, Sunil; Huyett, Lauren M; Pinsker, Jordan E; Gondhalekar, Ravi; Doyle, Francis J; Dassau, Eyal; Hahn, Juergen; Bequette, B Wayne
2018-05-01
As evidence emerges that artificial pancreas systems improve clinical outcomes for patients with type 1 diabetes, the burden of this disease will hopefully begin to be alleviated for many patients and caregivers. However, reliance on automated insulin delivery potentially means patients will be slower to act when devices stop functioning appropriately. One such scenario involves an insulin infusion site failure, where the insulin that is recorded as delivered fails to affect the patient's glucose as expected. Alerting patients to these events in real time would potentially reduce hyperglycemia and ketosis associated with infusion site failures. An infusion site failure detection algorithm was deployed in a randomized crossover study with artificial pancreas and sensor-augmented pump (SAP) arms in an outpatient setting. Each arm lasted two weeks. Nineteen participants wore infusion sets for up to 7 days. Clinicians contacted patients to confirm infusion site failures detected by the algorithm and instructed on set replacement if failure was confirmed. In real time and under zone model predictive control, the infusion site failure detection algorithm achieved a sensitivity of 88.0% (n = 25) while issuing only 0.22 false positives per day, compared with a sensitivity of 73.3% (n = 15) and 0.27 false positives per day in the SAP arm (as indicated by retrospective analysis). No association between intervention strategy and duration of infusion sets was observed (P = .58). As patient burden is reduced by each generation of advanced diabetes technology, fault detection algorithms will help ensure that patients are alerted when they need to manually intervene. Clinical Trial Identifier: www.clinicaltrials.gov, NCT02773875.
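The detection metrics reported above follow directly from raw counts. A minimal sketch, assuming 22 of 25 true failures were flagged (which matches the reported 88.0%) and using an invented day total for the false-alarm rate:

```python
# Sensitivity = fraction of true failures flagged; false-alarm burden is
# expressed per day of monitoring. The counts 22/3 reproduce the reported
# 88.0%; the false-positive and day totals are placeholders.
def sensitivity(true_pos, false_neg):
    return true_pos / (true_pos + false_neg)

def false_positives_per_day(false_pos, days_monitored):
    return false_pos / days_monitored

print(f"{sensitivity(22, 3):.1%}")                # 88.0%
print(f"{false_positives_per_day(58, 266):.2f}")  # illustrative counts
```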
Development of three-axis inkjet printer for gear sensors
NASA Astrophysics Data System (ADS)
Iba, Daisuke; Rodriguez Lopez, Ricardo; Kamimoto, Takahiro; Nakamura, Morimasa; Miura, Nanako; Iizuka, Takashi; Masuda, Arata; Moriwaki, Ichiro; Sone, Akira
2016-04-01
The long-term objective of our research is to develop sensor systems for detecting signs of gear failure. As a first step, this paper proposes a new method of creating sensors printed directly on gears using a printer and conductive ink, and describes the printing system configuration and the sensor development procedure. The printer system under development is a laser sintering system consisting of a laser and CNC machinery. The laser is able to synthesize micro conductive patterns and is introduced to the CNC machinery as a tool. In order to synthesize sensors on gears, we first design the micro-circuit pattern on a gear using 3D-CAD and then create a program (G-code) for the CNC machinery with CAM. This paper presents initial experiments with the laser sintering process aimed at obtaining optimal laser settings. The method proposed here may provide a new manufacturing process for mechanical parts with the additional functionality of detecting failure, and possible improvements include creating more economical and sustainable systems.
ERIC Educational Resources Information Center
Tempel, Tobias; Neumann, Roland
2016-01-01
We investigated processes underlying performance decrements of highly test-anxious persons. Three experiments contrasted conditions that differed in the degree of activation of concepts related to failure. Participants memorized a list of words either containing words related to failure or containing no words related to failure in Experiment 1. In…
NASA Astrophysics Data System (ADS)
Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.
2016-03-01
Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods for the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists), it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods intended to assess the performance of an automatic CP segmentation algorithm are presented. The first is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method, supervised classification, was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers were trained using the designed features: linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC); each was evaluated using leave-one-out cross-validation. Results show that LR performs worst among the four classifiers while the other three perform comparably, demonstrating the feasibility of automatically detecting segmentation failures using classification methods.
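The first method above (box-whisker outlier detection) amounts to Tukey's fences applied to a per-case quality feature. A minimal stdlib sketch, with invented feature values standing in for the paper's designed features:

```python
# Flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] as candidate
# segmentation failures. The score list is invented for illustration.
import statistics

def tukey_outliers(values, k=1.5):
    q = statistics.quantiles(values, n=4, method="inclusive")
    q1, q3 = q[0], q[2]
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

scores = [0.91, 0.93, 0.90, 0.92, 0.94, 0.89, 0.41, 0.92]  # 0.41 = a failure
print(tukey_outliers(scores))  # → [0.41]
```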
A Fault Tolerance Mechanism for On-Road Sensor Networks
Feng, Lei; Guo, Shaoyong; Sun, Jialu; Yu, Peng; Li, Wenjing
2016-01-01
On-Road Sensor Networks (ORSNs) play an important role in capturing traffic flow data for predicting short-term traffic patterns, driving assistance and self-driving vehicles. However, this kind of network is prone to large-scale communication failure if a few sensors physically fail. In this paper, to ensure that the network works normally, an effective fault-tolerance mechanism for ORSNs is proposed, consisting mainly of backup on-road sensor deployment, redundant cluster head deployment and an adaptive failure detection and recovery method. Firstly, based on the N − x principle and the sensors' failure rate, this paper formulates the backup sensor deployment problem as a two-objective optimization, which captures the trade-off between cost and fault resumption. To further improve network resilience, this paper introduces a redundant cluster head deployment model subject to the coverage constraint. Then a common solving method combining integer-continuing and sequential quadratic programming is explored to determine the optimal locations for these two deployment problems. Moreover, an Adaptive Detection and Resume (ADR) protocol is designed to recover system communication through route and cluster adjustment if there is a backup on-road sensor mismatch. The final experiments show that the proposed mechanism can achieve an average 90% recovery rate and reduce the average number of failed sensors by at most 35.7%. PMID:27918483
Wilkin, Timothy J.; Su, Zhaohui; Krambrink, Amy; Long, Jianmin; Greaves, Wayne; Gross, Robert; Hughes, Michael D.; Flexner, Charles; Skolnik, Paul R.; Coakley, Eoin; Godfrey, Catherine; Hirsch, Martin; Kuritzkes, Daniel R.; Gulick, Roy M.
2010-01-01
Background: Vicriviroc, an investigational CCR5 antagonist, demonstrated short-term safety and antiretroviral activity. Methods: Phase 2, double-blind, randomized study of vicriviroc in treatment-experienced subjects with CCR5-using HIV-1. Vicriviroc (5, 10 or 15 mg) or placebo was added to a failing regimen with optimization of background antiretroviral medications at day 14. Subjects experiencing virologic failure and subjects completing 48 weeks were offered open-label vicriviroc. Results: 118 subjects were randomized. Virologic failure (<1 log10 decline in HIV-1 RNA ≥16 weeks post-randomization) occurred by week 48 in 24/28 (86%), 12/30 (40%), 8/30 (27%), and 10/30 (33%) of subjects randomized to placebo, 5, 10 and 15 mg, respectively. Overall, 113 subjects received vicriviroc at randomization or after virologic failure, and 52 (46%) achieved HIV-1 RNA <50 copies/mL within 24 weeks. Through 3 years, 49% of those achieving suppression did not experience confirmed viral rebound. Dual or mixed-tropic HIV-1 was detected in 33 (29%). Vicriviroc resistance (progressive decrease in maximal percentage inhibition on phenotypic testing) was detected in 6 subjects. Nine subjects discontinued vicriviroc due to adverse events. Conclusions: Vicriviroc appears safe and demonstrates sustained virologic suppression through 3 years of follow-up. Further trials of vicriviroc will establish its clinical utility for the treatment of HIV-1 infection. PMID:20672447
Ferrographic and spectrographic analysis of oil sampled before and after failure of a jet engine
NASA Technical Reports Server (NTRS)
Jones, W. R., Jr.
1980-01-01
An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. Several engine oil samples (before and after the failure) were analyzed with a Ferrograph as well as plasma, atomic absorption, and emission spectrometers. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism, nor a high level of wear debris was detected in the oil sample from the engine just prior to the test in which the failure occurred. However, low concentrations of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure.
Varma, Neelam; Varma, Subhash; Marwaha, Ram Kumar; Malhotra, Pankaj; Bansal, Deepak; Malik, Kiran; Kaur, Sukhdeep; Garewal, Gurjeevan
2006-07-01
A large number of patients diagnosed with bone marrow failure syndromes (BMFS), comprising aplastic anaemia (AA) and myelodysplastic syndromes (MDS), remain aetiologically uncharacterized the world over, especially in resource-constrained settings. We carried out this study to identify a few constitutional causes in BMFS patients attending a tertiary care hospital in north India. Peripheral blood lymphocyte cultures were performed (with and without clastogens) in a cohort of 135 consecutive BMFS patients, in order to detect Fanconi anaemia (FA), Down's syndrome (+21), trisomy 8 (+8) and monosomy 7 (-7). Constitutional factors were detected in 17 (12.6%) patients. The FA defect was observed in 24.07 percent (13/54), 16.66 percent (1/6) and 2.85 percent (1/35) of paediatric aplastic anaemia, paediatric MDS and adult MDS patients, respectively. Down's syndrome was detected in 5.00 percent (2/40) of adult aplastic anaemia patients. None of the patients revealed trisomy 8 or monosomy 7. The presence of an underlying factor determines appropriate management, prognostication, family screening and genetic counselling of BMFS patients. Special tests required to confirm or exclude constitutional aetiological factors are not available to the majority of patients in our country. The diepoxybutane (DEB) test yielded better results than the mitomycin C (MMC) test in our experience.
The Embodiment of Success and Failure as Forward versus Backward Movements
Robinson, Michael D.; Fetterman, Adam K.
2015-01-01
People often speak of success (e.g., “advance”) and failure (e.g., “setback”) as if they were forward versus backward movements through space. Two experiments sought to examine whether grounded associations of this type influence motor behavior. In Experiment 1, participants categorized success versus failure words by moving a joystick forward or backward. Failure categorizations were faster when moving backward, whereas success categorizations were faster when moving forward. Experiment 2 removed the requirement to categorize stimuli and used a word rehearsal task instead. Even without Experiment 1’s response procedures, a similar cross-over interaction was obtained (e.g., failure memorizations sped backward movements relative to forward ones). The findings are novel yet consistent with theories of embodied cognition and self-regulation. PMID:25658923
2014-09-30
Duration AUV Missions with Minimal Human Intervention. James Bellingham, Monterey Bay Aquarium Research Institute, 7700 Sandholdt Road, Moss Landing ... subsystem failures and environmental challenges. For example, should an AUV suffer the failure of one of its internal actuators, can that failure be ... reduce the need for operator intervention in the event of performance anomalies on long-duration AUV deployments; to allow the vehicle to detect ...
On-line detection of key radionuclides for fuel-rod failure in a pressurized water reactor.
Qin, Guoxiu; Chen, Xilin; Guo, Xiaoqing; Ni, Ning
2016-08-01
For early on-line detection of fuel rod failure, the key radionuclides useful in monitoring must leak easily from failing rods. Yield, half-life, and mass share of fission products that enter the primary coolant also need to be considered in on-line analyses. From all the nuclides that enter the primary coolant during fuel-rod failure, (135)Xe and (88)Kr were ultimately chosen as crucial for on-line monitoring of fuel-rod failure. A monitoring system for fuel-rod failure detection in a pressurized water reactor (PWR) based on a LaBr3(Ce) detector was assembled and tested. Samples of coolant from the PWR were measured using the system as well as a HPGe γ-ray spectrometer. A comparison showed the method was feasible. Finally, the γ-ray spectra of primary coolant were measured under normal operations and during fuel-rod failure. The two peaks of (135)Xe (249.8 keV) and (88)Kr (2392.1 keV) were visible, confirming that the method is capable of monitoring fuel-rod failure on-line.
Yang, Changbing; Hovorka, Susan D; Treviño, Ramón H; Delgado-Alonso, Jesus
2015-07-21
This study presents a combined use of site characterization, laboratory experiments, single-well push-pull tests (PPTs), and reactive transport modeling to assess the potential impacts of CO2 leakage on groundwater quality and the leakage-detection ability of a groundwater monitoring network (GMN) in a potable aquifer at a CO2 enhanced oil recovery (CO2 EOR) site. Site characterization indicates that failures of plugged and abandoned wells are possible CO2 leakage pathways. Groundwater chemistry in the shallow aquifer is dominated mainly by silicate mineral weathering, and no CO2 leakage signals have been detected in the shallow aquifer. Results of the laboratory experiments and the field test show no obvious damage to groundwater chemistry should CO2 leakage occur; these results were further confirmed with a regional-scale reactive transport model (RSRTM) that was built upon the batch experiments and validated with the single-well PPT. Results of the RSRTM indicate that dissolved CO2 works better as an indicator for CO2 leakage detection than dissolved inorganic carbon, pH, and alkalinity at the CO2 EOR site. The detection ability of a GMN was assessed in terms of monitoring efficiency, which depends on various factors, including the natural hydraulic gradient, the leakage rate, the number of monitoring wells, the aquifer heterogeneity, and the time for a CO2 plume to travel to the monitoring well.
Signal analysis techniques for incipient failure detection in turbomachinery
NASA Technical Reports Server (NTRS)
Coffin, T.
1985-01-01
Signal analysis techniques for the detection and classification of incipient mechanical failures in turbomachinery were developed, implemented and evaluated. Signal analysis techniques available to describe dynamic measurement characteristics are reviewed. Time domain and spectral methods are described, and statistical classification in terms of moments is discussed. Several of these waveform analysis techniques were implemented on a computer and applied to dynamic signals. A laboratory evaluation of the methods with respect to signal detection capability is described. Plans for further technique evaluation and data base development to characterize turbopump incipient failure modes from Space Shuttle main engine (SSME) hot firing measurements are outlined.
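The moment-based waveform statistics the report reviews are straightforward to compute. A small stdlib sketch showing RMS and the fourth standardized moment (kurtosis), which rises when impulsive fault signatures appear; the synthetic signal is invented for illustration:

```python
# RMS measures overall signal energy; kurtosis (m4 / m2^2) is sensitive
# to impulsive content. A pure sine has kurtosis 1.5; an injected spike
# raises it sharply, which is the basis of several fault indicators.
import math

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

def kurtosis(x):
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m4 / m2 ** 2

healthy = [math.sin(2 * math.pi * i / 32) for i in range(256)]
faulty = healthy[:100] + [5.0] + healthy[101:]   # one impulsive event

print(round(kurtosis(healthy), 2))               # 1.5 for a pure sine
print(kurtosis(faulty) > kurtosis(healthy))      # impulse raises kurtosis
```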
Gear Fault Detection Effectiveness as Applied to Tooth Surface Pitting Fatigue Damage
NASA Technical Reports Server (NTRS)
Lewicki, David G.; Dempsey, Paula J.; Heath, Gregory F.; Shanthakumaran, Perumal
2009-01-01
A study was performed to evaluate fault detection effectiveness as applied to gear tooth pitting fatigue damage. Vibration and oil-debris monitoring (ODM) data were gathered from 24 sets of spur pinion and face gears run during a previous endurance evaluation study. Three common condition indicators (RMS, FM4, and NA4) were deduced from the time-averaged vibration data and used with the ODM to evaluate their performance for gear fault detection. The NA4 parameter proved to be a very good condition indicator for the detection of gear tooth surface pitting failures. The FM4 and RMS parameters performed average to below average in detecting gear tooth surface pitting failures. The ODM sensor was successful in detecting a significant amount of debris from all the gear tooth pitting fatigue failures. Excluding outliers, the average cumulative mass at the end of a test was 40 mg.
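FM4 is conventionally defined as the normalized fourth moment (kurtosis) of the "difference signal": the time-averaged vibration with the gear-mesh tone and its harmonics removed. This is a hedged stdlib sketch of that construction, with an invented mesh order and synthetic signal, not the study's data:

```python
# Remove the mesh tone and its harmonics by least-squares projection
# (exact over integer periods), then take the kurtosis of what remains.
# Mesh order 8, noise level, and the spike amplitude are all invented.
import math, random

def remove_harmonics(x, order, n_harm=3):
    """Subtract the gear-mesh tone and its harmonics, leaving the
    'difference signal' that FM4 is computed from."""
    n = len(x)
    d = list(x)
    for h in range(1, n_harm + 1):
        k = order * h
        c = [math.cos(2 * math.pi * k * i / n) for i in range(n)]
        s = [math.sin(2 * math.pi * k * i / n) for i in range(n)]
        a = 2 / n * sum(di * ci for di, ci in zip(d, c))
        b = 2 / n * sum(di * si for di, si in zip(d, s))
        d = [di - a * ci - b * si for di, ci, si in zip(d, c, s)]
    return d

def fm4(d):
    n = len(d)
    mean = sum(d) / n
    m2 = sum((v - mean) ** 2 for v in d) / n
    m4 = sum((v - mean) ** 4 for v in d) / n
    return m4 / m2 ** 2   # normalized fourth moment

random.seed(1)
n, order = 512, 8
mesh = [math.sin(2 * math.pi * order * i / n) for i in range(n)]
healthy = [m + random.gauss(0, 0.05) for m in mesh]
damaged = list(healthy)
damaged[100] += 1.5       # local tooth-fault impulse

fm4_h = fm4(remove_harmonics(healthy, order))
fm4_d = fm4(remove_harmonics(damaged, order))
print(fm4_d > fm4_h)      # True: the fault drives FM4 up
```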
A preliminary design for flight testing the FINDS algorithm
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Godiwala, P. M.
1986-01-01
This report presents a preliminary design for flight testing the FINDS (Fault Inferring Nonlinear Detection System) algorithm on a target flight computer. The FINDS software was ported onto the target flight computer by reducing the code size by 65%. Several modifications were made to the computational algorithms, resulting in near real-time execution speed. Finally, a new failure detection strategy was developed, resulting in a significant improvement in detection time performance. In particular, low-level MLS, IMU, and IAS sensor failures are detected instantaneously with the new detection strategy, while accelerometer and rate gyro failures are detected within the minimum time allowed by the information generated in the sensor residuals based on the point-mass equations of motion. All of the results have been demonstrated using five minutes of sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment.
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
Ramtinfar, Sara; Chabok, Shahrokh Yousefzadeh; Chari, Aliakbar Jafari; Reihanian, Zoheir; Leili, Ehsan Kazemnezhad; Alizadeh, Arsalan
2016-10-01
The aim of this study was to compare the discriminant function of the multiple organ dysfunction score (MODS) and sequential organ failure assessment (SOFA) components in predicting Intensive Care Unit (ICU) mortality and neurologic outcome. A descriptive-analytic study was conducted at a level I trauma center. Data were collected from patients with severe traumatic brain injury admitted to the neurosurgical ICU. Basic demographic data and SOFA and MOD scores were recorded daily for all patients. Odds ratios (ORs) were calculated to determine the relationship of each component score to mortality, and the area under the receiver operating characteristic (AUROC) curve was used to compare the discriminative ability of the two tools with respect to ICU mortality. The most common organ failure observed was respiratory, detected by SOFA in 26% and by MODS in 13% of patients; the second most common was cardiovascular, detected by SOFA in 18% and by MODS in 13%. No hepatic or renal failure occurred, and coagulation failure was reported as 2.5% by both SOFA and MODS. Cardiovascular failure defined by both tools correlated with ICU mortality, and the relationship was stronger for SOFA (OR = 6.9, CI = 3.6-13.3, P < 0.05 for SOFA; OR = 5, CI = 3-8.3, P < 0.05 for MODS; AUROC = 0.82 for SOFA; AUROC = 0.73 for MODS). The relationship of cardiovascular failure to dichotomized neurologic outcome was not statistically significant. ICU mortality was not associated with respiratory or coagulation failure. Cardiovascular failure defined by either tool was significantly related to ICU mortality. Compared to MODS, SOFA-defined cardiovascular failure was a stronger predictor of death.
The Identification of Software Failure Regions
1990-06-01
... be used to detect non-obviously redundant test cases. A preliminary examination of the manual analysis method is performed with a set of programs ... failure regions are defined and a method of failure region analysis is described in detail. The thesis describes how this analysis may be used to detect ... is the termination of the ability of a functional unit to perform its required function (Glossary, 1983). The presence of faults in program code ...
Fault Detection and Isolation for Hydraulic Control
NASA Technical Reports Server (NTRS)
1987-01-01
Pressure sensors and isolation valves act to shut down defective servochannel. Redundant hydraulic system indirectly senses failure in any of its electrical control channels and mechanically isolates hydraulic channel controlled by faulty electrical channel so that it cannot participate in operating system. With failure-detection and isolation technique, system can sustain two failed channels and still function at full performance levels. Scheme useful on aircraft or other systems with hydraulic servovalves where failure cannot be tolerated.
Dielectric Spectroscopic Detection of Early Failures in 3-D Integrated Circuits.
Obeng, Yaw; Okoro, C A; Ahn, Jung-Joon; You, Lin; Kopanski, Joseph J
The commercial introduction of three-dimensional integrated circuits (3D-ICs) has been hindered by reliability challenges, such as stress-related failures, resistivity changes, and unexplained early failures. In this paper, we discuss a new RF-based metrology, based on dielectric spectroscopy, for detecting and characterizing electrically active defects in fully integrated 3D devices. These defects are traceable to the chemistry of the isolation dielectrics used in through-silicon via (TSV) construction. We show that these defects may be responsible for some of the unexplained early reliability failures observed in TSV-enabled 3D devices.
Redundancy management of multiple KT-70 inertial measurement units applicable to the space shuttle
NASA Technical Reports Server (NTRS)
Cook, L. J.
1975-01-01
Results of an investigation of velocity failure detection and isolation (FDI) for three-IMU and two-IMU configurations are presented. The failure detection and isolation algorithm performed highly successfully, and most types of velocity errors were detected and isolated. The algorithm also included attitude FDI, but this was not evaluated because of a lack of time and the low resolution of the gimbal angle synchro outputs. The shuttle KT-70 IMUs will have dual-speed resolvers and high-resolution gimbal angle readouts. These tests demonstrated that a single computer utilizing a serial data bus can successfully control a redundant three-IMU system and perform FDI.
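The velocity FDI described above relies on cross-comparison among redundant units. This toy majority-vote sketch shows the idea only; the threshold, readings, and voting rule are invented, not the KT-70 logic:

```python
# Compare each pair of IMU velocity outputs; a unit that disagrees with
# all of the others by more than a threshold is declared failed and
# isolated. Threshold and readings are illustrative placeholders.
def fdi_triplex(readings, threshold=1.0):
    n = len(readings)
    votes = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if abs(readings[i] - readings[j]) > threshold:
                votes[i] += 1
                votes[j] += 1
    # a unit that miscompares against every other unit is the failed one
    failed = [i for i, v in enumerate(votes) if v == n - 1]
    return failed[0] if len(failed) == 1 else None

print(fdi_triplex([100.2, 100.4, 97.8]))   # → 2 (third IMU isolated)
print(fdi_triplex([100.2, 100.4, 100.3]))  # → None (no failure)
```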
Detecting Structural Failures Via Acoustic Impulse Responses
NASA Technical Reports Server (NTRS)
Bayard, David S.; Joshi, Sanjay S.
1995-01-01
Advanced method of acoustic pulse reflectivity testing developed for use in determining sizes and locations of failures within structures. Used to detect breaks in electrical transmission lines, detect faults in optical fibers, and determine mechanical properties of materials. In method, structure vibrationally excited with acoustic pulse (a "ping") at one location and acoustic response measured at same or different location. Measured acoustic response digitized, then processed by finite-impulse-response (FIR) filtering algorithm unique to method and based on acoustic-wave-propagation and -reflection properties of structure. Offers several advantages: does not require training, does not require prior knowledge of mathematical model of acoustic response of structure, enables detection and localization of multiple failures, and yields data on extent of damage at each location.
Experimental evidence for beta-decay as a source of chirality by enantiomer analysis
NASA Technical Reports Server (NTRS)
Bonner, W. A.
1984-01-01
Earlier experiments testing the Vester-Ulbricht beta-decay hypothesis for the origin of molecular chirality are reviewed, followed by descriptions of experiments involving attempted asymmetric radiolysis of DL-amino acids using quantitative gas chromatography as a probe for optical activity. The radiation sources included Sr-90-Y-90, C-14, and P-32 Bremsstrahlen, longitudinally polarized electrons from a linear accelerator, and longitudinally polarized protons from a cyclotron. With the possible exception of the linear accelerator irradiations, these experiments failed to produce g.c.-detectable enantiomeric excesses, even at 50-70 percent gross radiolysis. Thus no unambiguous support for the Vester-Ulbricht hypothesis is found in any of the attempted asymmetric radiolyses performed to date. Radioracemization, a possible reason for these failures, is discussed.
Fung, Erik; Hui, Elsie; Yang, Xiaobo; Lui, Leong T; Cheng, King F; Li, Qi; Fan, Yiting; Sahota, Daljit S; Ma, Bosco H M; Lee, Jenny S W; Lee, Alex P W; Woo, Jean
2018-01-01
Heart failure and frailty are clinical syndromes that present with overlapping phenotypic characteristics. Importantly, their co-presence is associated with increased mortality and morbidity. While mechanical and electrical device therapies for heart failure are vital for select patients with advanced stage disease, the majority of patients and especially those with undiagnosed heart failure would benefit from early disease detection and prompt initiation of guideline-directed medical therapies. In this article, we review the problematic interactions between heart failure and frailty, introduce a focused cardiac screening program for community-living elderly initiated by a mobile communication device app leading to the Undiagnosed heart Failure in frail Older individuals (UFO) study, and discuss how the knowledge of pre-frailty and frailty status could be exploited for the detection of previously undiagnosed heart failure or advanced cardiac disease. The widespread use of mobile devices coupled with increasing availability of novel, effective medical and minimally invasive therapies have incentivized new approaches to heart failure case finding and disease management.
Change detection and change blindness in pigeons (Columba livia).
Herbranson, Walter T; Trinh, Yvan T; Xi, Patricia M; Arand, Mark P; Barker, Michael S K; Pratt, Theodore H
2014-05-01
Change blindness is a phenomenon in which even obvious details in a visual scene change without being noticed. Although change blindness has been studied extensively in humans, we do not yet know if it is a phenomenon that also occurs in other animals. Thus, investigation of change blindness in a nonhuman species may prove to be valuable by beginning to provide some insight into its ultimate causes. Pigeons learned a change detection task in which pecks to the location of a change in a sequence of stimulus displays were reinforced. They were worse at detecting changes if the stimulus displays were separated by a brief interstimulus interval, during which the display was blank, and this primary result matches the general pattern seen in previous studies of change blindness in humans. A second experiment attempted to identify specific stimulus characteristics that most reliably produced a failure to detect changes. Change detection was more difficult when interstimulus intervals were longer and when the change was iterated fewer times.
Robust Fault Detection for Aircraft Using Mixed Structured Singular Value Theory and Fuzzy Logic
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G.
2000-01-01
The purpose of fault detection is to identify when a fault or failure has occurred in a system such as an aircraft or expendable launch vehicle. The faults may occur in sensors, actuators, structural components, etc. One of the primary approaches to model-based fault detection relies on analytical redundancy. That is, the output of a computer-based model (actually a state estimator) is compared with the sensor measurements of the actual system to determine when a fault has occurred. Unfortunately, the state estimator is based on an idealized mathematical description of the underlying plant that is never totally accurate. As a result of these modeling errors, false alarms can occur. This research uses mixed structured singular value theory, a relatively recent and powerful robustness analysis tool, to develop robust estimators and demonstrates the use of these estimators in fault detection. To allow qualitative human experience to be effectively incorporated into the detection process, fuzzy logic is used to predict the seriousness of the fault that has occurred.
Flight test results of the strapdown ring laser gyro tetrad inertial navigation system
NASA Technical Reports Server (NTRS)
Carestia, R. A.; Hruby, R. J.; Bjorkman, W. S.
1983-01-01
A helicopter flight test program undertaken to evaluate the performance of Tetrad (a strapdown, laser gyro, inertial navigation system) is described. The results of 34 flights show a mean final navigational velocity error of 5.06 knots, with a standard deviation of 3.84 knots; a corresponding mean final position error of 2.66 n. mi., with a standard deviation of 1.48 n. mi.; and a modeled mean position error growth rate for the 34 tests of 1.96 knots, with a standard deviation of 1.09 knots. No laser gyro or accelerometer failures were detected during the flight tests. Off-line parity residual studies used simulated failures with the prerecorded flight test and laboratory test data. The airborne Tetrad system's failure-detection logic, exercised during the tests, successfully demonstrated the detection of simulated "hard" failures and the system's ability to continue navigating by removing the simulated faulted sensor from the computations. Tetrad's four ring laser gyros provided reliable and accurate angular rate sensing during the 4 yr of the test program, and no sensor failures were detected during the evaluation of free inertial navigation performance.
Initial flight results of the TRMM Kalman filter
NASA Technical Reports Server (NTRS)
Andrews, Stephen F.; Morgenstern, Wendy M.
1998-01-01
The Tropical Rainfall Measuring Mission (TRMM) spacecraft is a nadir pointing spacecraft that nominally controls attitude based on the Earth Sensor Assembly (ESA) output. After a potential single point failure in the ESA was identified, the contingency attitude determination method chosen to backup the ESA-based system was a sixth-order extended Kalman filter that uses magnetometer and digital sun sensor measurements. A brief description of the TRMM Kalman filter will be given, including some implementation issues and algorithm heritage. Operational aspects of the Kalman filter and some failure detection and correction will be described. The Kalman filter was tested in a sun pointing attitude and in a nadir pointing attitude during the in-orbit checkout period, and results from those tests will be presented. This paper will describe some lessons learned from the experience of the TRMM team.
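The TRMM contingency filter is a sixth-order extended Kalman filter fusing magnetometer and digital sun sensor measurements. As a much simpler illustration of the predict/update cycle such a filter relies on, here is a scalar textbook Kalman step for a random-walk state; the process and measurement noise variances Q and R are arbitrary assumptions, not mission values:

```python
def kalman_step(x, P, z, Q=1e-4, R=0.01):
    """One predict/update cycle for a scalar random-walk state.
    x, P: prior state estimate and variance; z: new measurement."""
    # predict: random-walk model leaves the state unchanged, uncertainty grows
    P = P + Q
    # update: blend in the measurement using the Kalman gain
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1.0 - K) * P
    return x, P
```

Iterating this on a stream of measurements drives the estimate toward the true state while the variance P settles to a small steady-state value; the TRMM filter does the same in six dimensions with nonlinear attitude measurement models linearized at each step.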
Sensor failure detection for jet engines using analytical redundancy
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1984-01-01
Analytical redundant sensor failure detection, isolation and accommodation techniques for gas turbine engines are surveyed. Both the theoretical technology base and demonstrated concepts are discussed. Also included is a discussion of current technology needs and ongoing Government sponsored programs to meet those needs.
The tether inspection and repair experiment (TIRE)
NASA Technical Reports Server (NTRS)
Wood, George M.; Loria, Alberto; Harrison, James K.
1988-01-01
The successful development and deployment of reusable tethers for space applications will require methods for detecting, locating, and repairing damage to the tether. This requirement becomes especially important whenever the safety of the STS or the Space Station may be diminished or when critical supplies or systems would be lost in the event of a tether failure. A joint NASA/PSN study endeavor has recently been initiated to evaluate and address the problems to be solved for such an undertaking. The objectives of the Tether Inspection and Repair Experiment (TIRE) are to develop instrumentation and repair technology for specific classes of tethers defined as standards, and to demonstrate the technologies in ground-based and in-flight testing on the STS.
Immunity-based detection, identification, and evaluation of aircraft sub-system failures
NASA Astrophysics Data System (ADS)
Moncayo, Hever Y.
This thesis describes the design, development, and flight-simulation testing of an integrated Artificial Immune System (AIS) for detection, identification, and evaluation of a wide variety of sensor, actuator, propulsion, and structural failures/damages, including the prediction of the achievable states and other limitations on performance and handling qualities. The AIS scheme achieves a high detection rate and a low number of false alarms for all the failure categories considered. Data collected using a motion-based flight simulator are used to define the self for an extended sub-region of the flight envelope. The NASA IFCS F-15 research aircraft model is used; it represents a supersonic fighter and includes model-following adaptive control laws based on non-linear dynamic inversion and artificial neural network augmentation. The flight simulation tests are designed to analyze and demonstrate the performance of the immunity-based aircraft failure detection, identification and evaluation (FDIE) scheme. A general robustness analysis is also presented by determining the achievable limits for a desired performance in the presence of atmospheric perturbations. For the purpose of this work, the integrated AIS scheme is implemented with three main components. The first component performs detection when one of the considered failures is present in the system. The second component identifies the failure category and classifies the failure according to the failed element. During the third phase, a general evaluation of the failure is performed, with estimation of the magnitude/severity of the failure and prediction of its effect on reducing the flight envelope of the aircraft system.
Solutions and alternatives to specific design issues of the AIS scheme, such as data clustering and empty-space optimization, data fusion and duplication removal, definition of features, dimensionality reduction, and selection of cluster/detector shape, are also analyzed in this thesis. These choices were shown to have an important effect on detection performance and are a critical aspect of designing the AIS configuration. The results presented in this thesis show that the AIS paradigm directly addresses the complexity and multi-dimensionality associated with a damaged aircraft's dynamic response and provides the tools necessary for a comprehensive, integrated solution to the FDIE problem. Excellent detection, identification, and evaluation performance was recorded for all types of failures considered. The implementation of the proposed AIS-based scheme can potentially have a significant impact on the safety of aircraft operation. The output information obtained from the scheme will be useful for increasing pilot situational awareness and determining automated compensation.
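As a loose illustration of the self/non-self discrimination underlying an AIS (not the thesis's actual clustering, detector shapes, or flight-data features), a minimal negative-selection sketch in a hypothetical 2-D feature space might look like this; the detector radius and count are arbitrary assumptions:

```python
import random

def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def train_detectors(self_set, n_detectors=200, radius=0.15, seed=1):
    """Censoring step of negative selection: keep only randomly generated
    detectors that do NOT cover any 'self' (nominal-flight) sample."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        d = (rng.random(), rng.random())
        if all(dist(d, s) > radius for s in self_set):
            detectors.append(d)
    return detectors

def is_anomalous(sample, detectors, radius=0.15):
    """A sample matched by any detector is classified as non-self (failure)."""
    return any(dist(sample, d) <= radius for d in detectors)
```

By construction, no detector can fire on a point that was in the self set; coverage of the non-self region (and hence detection rate) depends on detector count, radius, and shape — exactly the design issues the thesis analyzes.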
Gordon, N. C.; Wareham, D. W.
2009-01-01
We report the failure of the automated MicroScan WalkAway system to detect carbapenem heteroresistance in Enterobacter aerogenes. Carbapenem resistance has become an increasing concern in recent years, and robust surveillance is required to prevent dissemination of resistant strains. Reliance on automated systems may delay the detection of emerging resistance. PMID:19641071
Myers, Larry; Downie, Steven; Taylor, Grant; Marrington, Jessica; Tehan, Gerald; Ireland, Michael J
2018-01-01
The importance of self-regulation in human behavior is readily apparent, and diverse theoretical accounts for explaining self-regulation failures have been proposed. Typically, these accounts are based on a sequential task methodology in which an initial task is presented to deplete self-regulatory resources, and carryover effects are then examined on a second outcome task. In the aftermath of high-profile replication failures using a popular letter-crossing task as a means of depleting self-regulatory resources, and subsequent criticisms of that task, research into self-control is currently at an impasse. This is largely due to the lack of empirical research that tests explicit assumptions regarding the initial task. One such untested assumption is that for resource depletion to occur, the initial task must first establish a habitual response and then this habitual response must be inhibited, with behavioral inhibition being the causal factor in inducing depletion. This study reports on four experiments exploring performance on a letter-canceling task, where the rules for target identification remained constant but the method of responding differed (Experiment 1) and the coherence of the text was manipulated (Experiments 1-4). Experiment 1 established that habit forming and behavioral inhibition did not produce any performance decrement when the targets were embedded in random letter strings. Experiments 2-4 established that target detection was sensitive to language characteristics and the coherence of the background text, suggesting that participants' automatic reading processes are a key driver of performance in the letter-e task.
Failure detection and identification
NASA Technical Reports Server (NTRS)
Massoumnia, Mohammad-Ali; Verghese, George C.; Willsky, Alan S.
1989-01-01
Using the geometric concept of an unobservability subspace, a solution is given to the problem of detecting and identifying control system component failures in linear, time-invariant systems. Conditions are developed for the existence of a causal, linear, time-invariant processor that can detect and uniquely identify a component failure, first for the case where components can fail simultaneously, and then for the case where they fail only one at a time. Explicit design algorithms are provided when these conditions are satisfied. In addition to time-domain solvability conditions, frequency-domain interpretations of the results are given, and connections are drawn with results already available in the literature.
A survey of design methods for failure detection in dynamic systems
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1975-01-01
A number of methods for detecting abrupt changes (such as failures) in stochastic dynamical systems are surveyed. The class of linear systems is concentrated on but the basic concepts, if not the detailed analyses, carry over to other classes of systems. The methods surveyed range from the design of specific failure-sensitive filters, to the use of statistical tests on filter innovations, to the development of jump process formulations. Tradeoffs in complexity versus performance are discussed.
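One of the surveyed families — statistical tests on filter innovations — can be illustrated with a toy windowed normalized-innovations-squared (NIS) test: under nominal conditions the innovations of a well-tuned filter are zero-mean with known variance, so their normalized sum of squares follows a chi-square distribution, and an abrupt failure pushes it past the quantile. The measurement variance, window length, and threshold (18.31, roughly the 95th percentile of chi-square with 10 degrees of freedom) are illustrative assumptions:

```python
def innovation_test(innovations, R=1.0, window=10, chi2_thresh=18.31):
    """Flag non-overlapping windows whose normalized innovations squared
    (NIS) sum exceeds a chi-square threshold, signaling an abrupt change."""
    alarms = []
    for start in range(0, len(innovations) - window + 1, window):
        nis = sum(v * v / R for v in innovations[start:start + window])
        if nis > chi2_thresh:
            alarms.append(start)   # record the window's starting index
    return alarms
```

This is the simplest end of the complexity/performance trade-off the survey discusses; failure-sensitive filters and jump-process formulations buy faster or more specific detection at higher design cost.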
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
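The core PFA move — propagate parameter uncertainty through an analytical failure model to obtain a failure probability — can be caricatured with a Monte Carlo load-versus-strength sketch. The Gaussian parameter distributions and the simple stress/strength criterion are stand-in assumptions for illustration, not the paper's fatigue crack growth models:

```python
import random

def pfa_monte_carlo(n=100_000, load_mean=80.0, load_sd=10.0,
                    strength_mean=120.0, strength_sd=15.0, seed=42):
    """Estimate failure probability as P(load > strength) by sampling the
    uncertain parameters; the comparison stands in for a failure model."""
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(n)
        if rng.gauss(load_mean, load_sd) > rng.gauss(strength_mean, strength_sd)
    )
    return failures / n
```

In the full methodology, the resulting failure probability distribution would then be updated with test and flight experience via the PFA's statistical procedures rather than taken at face value.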
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
NASA Astrophysics Data System (ADS)
Helsen, Jan; Gioia, Nicoletta; Peeters, Cédric; Jordaens, Pieter-Jan
2017-05-01
Particularly offshore, there is a trend to cluster wind turbines in large wind farms and, in the near future, to operate such a farm as an integrated power production plant. Predictability of individual turbine behavior across the entire fleet is key in such a strategy. Failure of turbine subcomponents should be detected well in advance to allow early planning of all necessary maintenance actions, so that they can be performed during periods of low wind and low electricity demand. In order to obtain the insights needed to predict component failure, it is necessary to have an integrated, clean dataset spanning all turbines of the fleet for a sufficiently long period of time. This paper illustrates our big-data approach to doing this. In addition, advanced failure detection algorithms are necessary to detect failures in this dataset. This paper discusses a multi-level monitoring approach that combines machine learning with advanced physics-based signal-processing techniques. The advantage of combining different data sources to detect system degradation is the higher certainty afforded by multivariable criteria. In order to be able to perform long-term acceleration-data signal processing at high frequency, a streaming processing approach is necessary. This allows the data to be analysed as the sensors generate it. This paper illustrates this streaming concept on 5 kHz acceleration data. A continuous spectrogram is generated from the data stream. Real-life offshore wind turbine data is used. Using this streaming approach to calculate bearing failure features on continuous acceleration data will support failure propagation detection.
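The streaming-spectrogram concept — consume acceleration samples as the sensors generate them and emit one spectrum per accumulated frame — might be sketched as follows. The naive O(N^2) DFT, 64-sample non-overlapping frames, and absence of windowing are deliberate simplifications of the paper's 5 kHz production pipeline:

```python
import cmath

def frame_spectrum(frame):
    """Magnitude of the DFT of one frame (naive O(N^2), fine for a sketch).
    Returns the first n//2 bins, normalized by frame length."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def streaming_spectrogram(stream, frame_len=64):
    """Consume samples as they arrive; emit one spectrum per full frame."""
    buf, spectra = [], []
    for sample in stream:
        buf.append(sample)
        if len(buf) == frame_len:
            spectra.append(frame_spectrum(buf))
            buf = []            # non-overlapping frames for simplicity
    return spectra
```

A pure tone landing on an integer bin shows up as a single spectral peak in every frame; tracking how bearing-defect peaks in such spectra grow over weeks is the failure-propagation signal the paper is after.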
Using failure mode and effects analysis to improve the safety of neonatal parenteral nutrition.
Arenas Villafranca, Jose Javier; Gómez Sánchez, Araceli; Nieto Guindo, Miriam; Faus Felipe, Vicente
2014-07-15
Failure mode and effects analysis (FMEA) was used to identify potential errors and to enable the implementation of measures to improve the safety of neonatal parenteral nutrition (PN). FMEA was used to analyze the preparation and dispensing of neonatal PN from the perspective of the pharmacy service in a general hospital. A process diagram was drafted, illustrating the different phases of the neonatal PN process. Next, the failures that could occur in each of these phases were compiled and cataloged, and a questionnaire was developed in which respondents were asked to rate the following aspects of each error: incidence, detectability, and severity. The highest scoring failures were considered high risk and identified as priority areas for improvements to be made. The evaluation process detected a total of 82 possible failures. Among the phases with the highest number of possible errors were transcription of the medical order, formulation of the PN, and preparation of material for the formulation. After the classification of these 82 possible failures and of their relative importance, a checklist was developed to achieve greater control in the error-detection process. FMEA demonstrated that use of the checklist reduced the level of risk and improved the detectability of errors. FMEA was useful for detecting medication errors in the PN preparation process and enabling corrective measures to be taken. A checklist was developed to reduce errors in the most critical aspects of the process. Copyright © 2014 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
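FMEA prioritization as described here rates each failure mode on severity, occurrence, and detectability, multiplies the three into a Risk Priority Number (RPN), and treats the highest-RPN modes as priority targets for improvement. A minimal sketch, with hypothetical parenteral-nutrition failure modes and made-up 1-10 scores:

```python
def rank_failure_modes(modes):
    """Rank failure modes by Risk Priority Number = S * O * D (each 1-10).
    Returns (name, RPN) pairs, highest risk first."""
    scored = [(name, s * o * d) for name, (s, o, d) in modes.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Hypothetical failure modes with (severity, occurrence, detectability) scores
pn_modes = {
    "transcription error":  (8, 5, 6),
    "wrong additive dose":  (9, 3, 4),
    "label mix-up":         (7, 2, 3),
}
```

Note that RPN collapses three ordinal scales into one number, so two very different risk profiles can share a score; this is one reason checklists targeting the individual high-scoring steps, as in the study, are used alongside the raw ranking.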
Brattabø, Ingfrid Vaksdal; Iversen, Anette Christine; Åstrøm, Anne Nordrehaug; Bjørknes, Ragnhild
2016-11-01
Detecting and responding to child maltreatment is a serious challenge and public health concern. In Norway, public dental health personnel (PDHP) have a mandatory obligation to report to child welfare services (CWS) if they suspect child maltreatment. This study aimed to assess PDHP's frequency of reporting and of failing to report to CWS, and whether these frequencies varied according to personal, organizational and external characteristics. An electronic questionnaire was sent to 1542 public dental hygienists and dentists in Norway, 1200 of whom responded (77.8%). The majority (60.0%) reported having sent reports of concern to CWS during their career, 32.6% had suspected child maltreatment but failed to report it during their career, and 42.5% had sent reports during the three-year period from 2012 to 2014. The reporting frequency to CWS was influenced by PDHP's personal, organizational and external characteristics, while failure to report was influenced by personal characteristics. Compared to international studies, PDHP in Norway send reports of concern, and fail to report, to CWS at relatively high rates. PDHP's likelihood of reporting was influenced by age, working experience, number of patients treated, size of the municipality and geographical region, while failure to report to CWS was influenced by working experience.
Lobaz, Steven; Sammut, Mario; Damodaran, Anand
2013-01-01
We describe our experience of a 71-year-old patient with severe renal failure, who exhibited an unusually prolonged rocuronium-induced neuromuscular blockade (>4 h) and apparent recurarisation, following emergency rapid sequence induction (RSI). At the end of operation, 45 min post induction, train-of-four (TOF) testing had been 4/4 prior to wake up. No respiratory effort was seen 150 min postinduction, despite further neostigmine/glycopyrrolate and repeat TOF 4/4. The patient was resedated and transferred to the intensive care unit (ICU). At 180 min postinduction, fade was evident on TOF, suggestive of rocuronium reblockade. At 285 min, the patient was extubated safely following sugammadex administration and discharged uneventfully from the ICU. An important lesson to recognise is the potential for extremely prolonged neuromuscular blockade following rocuronium in patients with severe renal failure, particularly when using the higher doses (1.2 mg/kg) required for RSI, and that TOF in such cases may not be reliable in detecting residual blockade. PMID:23396837
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
Bartlett, Ashley; Wales, Larry; Houfburg, Rodney; Durfee, William K; Griffith, Steven L; Bentley, Ishmael
2013-10-01
In vitro comparative, laboratory experiments. This study developed a laboratory apparatus that measured resistance to failure using pressures similar to intradiscal pressure of a lumbar spinal disk. Various combinations of an anular repair device were compared. Herniated material of the intervertebral disk is removed during a lumbar discectomy; however, the defect in the anulus fibrosus remains and can provide a pathway for future herniation. Repairing the anulus fibrosus could mitigate this reherniation and improve patient outcomes. A pneumatic cylinder was used to increase the pressure of a sealed chamber until artificial nucleus pulposus material was expulsed through either a 3-mm circular (diameter) or a 6-mm slit anular defect created in a surrogate anulus fibrosus. Each unrepaired condition was compared with 3 repaired conditions using a commercially available soft tissue repair system. The repaired conditions included: (1) a single tension band; (2) 2 tension bands in a cruciate pattern; or (3) 2 tension bands in a parallel pattern. Maximum pressure at the point of extrusion of the internal chamber material and failure or nonfailure of the repair was measured. Significant differences were detected (P<0.05) in maximum failure pressures for the nonrepaired (control) versus repaired conditions. With 1 or 2 tension bands repairing the circular defect, the maximum failure pressure increased by approximately 76% and 131%, respectively. In addition, the failure pressure for 2 tension bands in either a cruciate or parallel configuration was not different, and was approximately 32% higher (P<0.05) than a single tension band in the case of the circular defect. Similar results were seen for the slit defect, with the exception that no difference between the repaired conditions (ie, single vs. 2 tension bands) was detected. 
This laboratory simulation demonstrated that repairing the anulus fibrosus after a discectomy procedure can be beneficial for retaining intradiscal material. The use of 2 tension bands, versus a single tension band, in either a cruciate or parallel configuration may further improve the ability to retain disk material.
Kreitz, Carina; Schnuerch, Robert; Furley, Philip A; Memmert, Daniel
2018-03-01
Inattentional blindness-the phenomenon that clearly visible, yet currently unexpected objects go unnoticed when our attention is focused elsewhere-is an ecologically valid failure of awareness. It is currently subject to debate whether previous events and experiences determine whether or not inattentional blindness occurs. Using a simple two-phase paradigm in the present study, we found that the likelihood of missing an unexpected object due to inattention did not change when its defining characteristic (its color) was perceptually preactivated (Experiment 1; N = 188). Likewise, noticing rates were not significantly reduced if the object's color was previously motivationally relevant during an unrelated detection task (Experiment 2; N = 184). These results corroborate and extend recent findings questioning the influence of previous experience on subsequent inattentional blindness. This has implications for possible countermeasures intended to thwart the potentially harmful effects of inattention. Copyright © 2018 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Dweck, Carol S.; And Others
1980-01-01
Two experiments were conducted to examine the role of sex differences in learned helplessness in the generalization of failure experience. Subjects in experiment 1 were fifth graders and subjects in experiment 2 were fourth, fifth, and sixth graders. (MP)
On Deviations between Observed and Theoretically Estimated Values on Additivity-Law Failures
NASA Astrophysics Data System (ADS)
Nayatani, Yoshinobu; Sobagaki, Hiroaki
The authors have reported in previous studies that the average observed results are about half of the corresponding predictions in experiments with large additivity-law failures. One of the reasons for these deviations is studied and clarified by using the original observed data on additivity-law failures from the Nakano experiment. The observations and their analyses clarified that it is essentially difficult to obtain good agreement between the average observed results and the corresponding theoretical predictions in experiments with large additivity-law failures. This is caused by a kind of unavoidable psychological pressure on the subjects who participated in the experiments. We should be satisfied with agreement in trend between them.
Pennell, William E.; Sutton, Jr., Harry G.
1981-01-01
Method and apparatus for detecting failure in a welded connection, particularly applicable to welds that are not readily accessible, such as those joining components within the reactor vessel of a nuclear reactor system. A preselected tag gas is sealed within a chamber which extends through selected portions of the base metal and weld deposit. In the event of a failure, such as development of a crack extending from the chamber to an outer surface, the tag gas is released. The environment about the welded area is directed to an analyzer which, in the event of the presence of the tag gas, evidences the failure. A trigger gas can be included with the tag gas to actuate the analyzer.
An investigation of gear mesh failure prediction techniques. M.S. Thesis - Cleveland State Univ.
NASA Technical Reports Server (NTRS)
Zakrajsek, James J.
1989-01-01
A study was performed in which several gear failure prediction methods were investigated and applied to experimental data from a gear fatigue test apparatus. The primary objective was to provide a baseline understanding of the prediction methods and to evaluate their diagnostic capabilities. The methods investigated use the signal average in both the time and frequency domain to detect gear failure. Data from eleven gear fatigue tests were recorded at periodic time intervals as the gears were run from initiation to failure. Four major failure modes, consisting of heavy wear, tooth breakage, single pits, and distributed pitting were observed among the failed gears. Results show that the prediction methods were able to detect only those gear failures which involved heavy wear or distributed pitting. None of the methods could predict fatigue cracks, which resulted in tooth breakage, or single pits. It is suspected that the fatigue cracks were not detected because of limitations in data acquisition rather than in methodology. Additionally, the frequency response between the gear shaft and the transducer was found to significantly affect the vibration signal. The specific frequencies affected were filtered out of the signal average prior to application of the methods.
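Two of the time-domain condition indicators this line of work relies on, RMS and the kurtosis-style FM4 metric, are easy to sketch. Note that FM4 is properly computed on the difference signal (the time-synchronous signal average with the regular gear-mesh components removed); that preprocessing is assumed to have been done already here:

```python
def rms(x):
    """Root-mean-square level of a signal; tracks overall energy (wear)."""
    return (sum(v * v for v in x) / len(x)) ** 0.5

def fm4(diff):
    """Normalized fourth moment (kurtosis) of the difference signal.
    A spike-free signal gives a low value; isolated tooth defects add
    impulsive spikes and drive FM4 up."""
    n = len(diff)
    mean = sum(diff) / n
    m2 = sum((v - mean) ** 2 for v in diff) / n   # variance
    m4 = sum((v - mean) ** 4 for v in diff) / n   # fourth central moment
    return m4 / (m2 * m2)
```

This also hints at why such indicators catch heavy wear and distributed pitting (broad energy and spikiness changes) more readily than a single fatigue crack, whose vibration signature can be too subtle for these global statistics.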
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faught, J Tonigan; Johnson, J; Stingo, F
2015-06-15
Purpose: To assess the perception of TG-142 tolerance-level dose delivery failures in IMRT and the application of the FMEA process to this specific aspect of IMRT. Methods: An online survey was distributed to medical physicists worldwide that briefly described 11 different failure modes (FMs) covered by basic quality assurance in step-and-shoot IMRT at or near TG-142 tolerance criteria levels. For each FM, respondents estimated the worst-case H&N patient percent dose error and FMEA scores for Occurrence, Detectability, and Severity. Demographic data were also collected. Results: 181 individual and three group responses were submitted. 84% were from North America. Most (76%) individual respondents performed at least 80% clinical work and 92% were nationally certified. Respondent medical physics experience ranged from 2.5-45 years (average 18 years). 52% of individual respondents were at least somewhat familiar with FMEA, while 17% were not familiar. Several IMRT techniques, treatment planning systems and linear accelerator manufacturers were represented. All FMs received widely varying scores, ranging from 1-10 for occurrence, at least 1-9 for detectability, and at least 1-7 for severity. Ranking FMs by RPN scores also resulted in large variability, with each FM being ranked both most risky (1st) and least risky (11th) by different respondents. On average, MLC modeling had the highest RPN scores. Individual estimated percent dose errors and severity scores positively correlated (p<0.10) for each FM, as expected. No universal correlations were found between the demographic information collected and scoring, percent dose errors, or ranking. Conclusion: The FMs investigated were overall evaluated as low to medium risk, with average RPNs less than 110. The ranking of the 11 FMs was not agreed upon by the community. Large variability in FMEA scoring may be caused by individual interpretation and/or experience, thus reflecting the subjective nature of the FMEA tool.
Gear Fault Detection Effectiveness as Applied to Tooth Surface Pitting Fatigue Damage
NASA Technical Reports Server (NTRS)
Lewicki, David G.; Dempsey, Paula J.; Heath, Gregory F.; Shanthakumaran, Perumal
2010-01-01
A study was performed to evaluate fault detection effectiveness as applied to gear-tooth-pitting-fatigue damage. Vibration and oil-debris monitoring (ODM) data were gathered from 24 sets of spur pinion and face gears run during a previous endurance evaluation study. Three common condition indicators (RMS, FM4, and NA4 [Ed.'s note: See Appendix A-Definitions]) were deduced from the time-averaged vibration data and used with the ODM to evaluate their performance for gear fault detection. The NA4 parameter proved to be a very good condition indicator for the detection of gear tooth surface pitting failures. The FM4 and RMS parameters performed average to below average in detection of gear tooth surface pitting failures. The ODM sensor was successful in detecting a significant amount of debris from all the gear tooth pitting fatigue failures. Excluding outliers, the average cumulative mass at the end of a test was 40 mg.
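The RMS and FM4 indicators can be sketched from their standard definitions: RMS of the time-averaged vibration signal, and FM4 as the normalized fourth moment (kurtosis) of the residual signal left after regular gear-mesh components are removed. A minimal sketch (residual construction is assumed done upstream):

```python
import math

def rms(signal):
    """Root mean square of the vibration signal."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def fm4(residual):
    """FM4: normalized fourth moment (kurtosis) of the residual signal,
    i.e. the time-averaged signal with regular mesh components removed."""
    n = len(residual)
    mean = sum(residual) / n
    var = sum((x - mean) ** 2 for x in residual) / n
    m4 = sum((x - mean) ** 4 for x in residual) / n
    return m4 / (var * var)

# A near-Gaussian residual gives FM4 near 3; isolated spikes from
# localized tooth damage drive it higher.
```

This kurtosis normalization is why FM4 responds to localized damage but, as the study found, can underperform when pitting is distributed across many teeth.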
NASA Astrophysics Data System (ADS)
Edwards, John L.; Beekman, Randy M.; Buchanan, David B.; Farner, Scott; Gershzohn, Gary R.; Khuzadi, Mbuyi; Mikula, D. F.; Nissen, Gerry; Peck, James; Taylor, Shaun
2007-04-01
Human space travel is inherently dangerous. Hazardous conditions will exist. Real-time health monitoring of critical subsystems is essential for providing a safe abort timeline in the event of a catastrophic subsystem failure. In this paper, we discuss a practical and cost-effective process for developing critical subsystem failure detection, diagnosis and response (FDDR). We also present the results of a real-time health monitoring simulation of a propellant ullage pressurization subsystem failure. The health monitoring development process identifies hazards, isolates hazard causes, defines software partitioning requirements and quantifies software algorithm development. The process provides a means to establish the number and placement of sensors necessary to provide real-time health monitoring. We discuss how health monitoring software tracks subsystem control commands, interprets off-nominal operational sensor data, predicts failure propagation timelines, corroborates failure predictions and formats failure protocols.
Analytical study of different types of network failure detection and possible remedies
NASA Astrophysics Data System (ADS)
Saxena, Shikha; Chandra, Somnath
2012-07-01
Faults in a network have various causes, such as the failure of one or more routers, fiber cuts, failure of physical elements at the optical layer, or extraneous causes like power outages. These faults are usually detected as failures of a set of dependent logical entities and the links affected by the failed components. A reliable control plane plays a crucial role in creating high-level services in the next-generation transport network based on the Generalized Multiprotocol Label Switching (GMPLS) or Automatically Switched Optical Networks (ASON) model. In this paper, approaches to control-plane survivability, based on protection and restoration mechanisms, are examined. Procedures for control-plane state recovery are also discussed, including link and node failure recovery and the concepts of monitoring paths (MPs) and monitoring cycles (MCs) for unique localization of shared risk link group (SRLG) failures in all-optical networks. An SRLG failure is a failure of multiple links due to a failure of a common resource. MCs (MPs) start and end at the same (distinct) monitoring location(s). They are constructed such that any SRLG failure results in the failure of a unique combination of paths and cycles. We derive necessary and sufficient conditions on the set of MCs and MPs needed for localizing an SRLG failure in an arbitrary graph. A procedure for protection and restoration of SRLG failures by a backup re-provisioning algorithm is also discussed.
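The unique-localization idea can be sketched with sets: each monitoring path or cycle covers a set of links, an SRLG failure raises alarms on every path it intersects, and localization succeeds when each SRLG maps to a distinct alarm pattern. The paths and SRLGs below are hypothetical:

```python
def signature(srlg, paths):
    """Alarm pattern an SRLG failure produces: the indices of the
    monitoring paths/cycles traversing at least one failed link."""
    return frozenset(i for i, p in enumerate(paths) if srlg & p)

def localize(alarm_pattern, srlgs, paths):
    """Return the unique SRLG matching the observed alarms, or None."""
    matches = [s for s in srlgs if signature(s, paths) == alarm_pattern]
    return matches[0] if len(matches) == 1 else None

paths = [{"a", "b"}, {"b", "c"}, {"c", "d"}]                   # hypothetical MPs
srlgs = [frozenset({"a"}), frozenset({"b"}), frozenset({"c", "d"})]
print(localize(signature(srlgs[1], paths), srlgs, paths))
```

The paper's necessary and sufficient conditions amount to requiring that `signature` be injective over the set of SRLGs.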
Brisco, Meredith A; Coca, Steven G; Chen, Jennifer; Owens, Anjali Tiku; McCauley, Brian D; Kimmel, Stephen E; Testani, Jeffrey M
2013-03-01
Identifying reversible renal dysfunction (RD) in the setting of heart failure is challenging. The goal of this study was to evaluate whether elevated admission blood urea nitrogen/creatinine ratio (BUN/Cr) could identify decompensated heart failure patients likely to experience improvement in renal function (IRF) with treatment. Consecutive hospitalizations with a discharge diagnosis of heart failure were reviewed. IRF was defined as ≥20% increase and worsening renal function as ≥20% decrease in estimated glomerular filtration rate. IRF occurred in 31% of the 896 patients meeting eligibility criteria. Higher admission BUN/Cr was associated with in-hospital IRF (odds ratio, 1.5 per 10 increase; 95% confidence interval [CI], 1.3-1.8; P<0.001), an association persisting after adjustment for baseline characteristics (odds ratio, 1.4; 95% CI, 1.1-1.8; P=0.004). However, higher admission BUN/Cr was also associated with post-discharge worsening renal function (odds ratio, 1.4; 95% CI, 1.1-1.8; P=0.011). Notably, in patients with an elevated admission BUN/Cr, the risk of death associated with RD (estimated glomerular filtration rate <45) was substantial (hazard ratio, 2.2; 95% CI, 1.6-3.1; P<0.001). However, in patients with a normal admission BUN/Cr, RD was not associated with increased mortality (hazard ratio, 1.2; 95% CI, 0.67-2.0; P=0.59; p interaction=0.03). An elevated admission BUN/Cr identifies decompensated patients with heart failure likely to experience IRF with treatment, providing proof of concept that reversible RD may be a discernible entity. However, this improvement seems to be largely transient, and RD, in the setting of an elevated BUN/Cr, remains strongly associated with death. Further research is warranted to develop strategies for the optimal detection and treatment of these high-risk patients.
Brisco, Meredith A.; Coca, Steven G.; Chen, Jennifer; Owens, Anjali Tiku; McCauley, Brian D.; Kimmel, Stephen E.; Testani, Jeffrey M.
2014-01-01
Background Identifying reversible renal dysfunction (RD) in the setting of heart failure is challenging. The goal of this study was to evaluate whether elevated admission blood urea nitrogen/creatinine ratio (BUN/Cr) could identify decompensated heart failure patients likely to experience improvement in renal function (IRF) with treatment. Methods and Results Consecutive hospitalizations with a discharge diagnosis of heart failure were reviewed. IRF was defined as ≥20% increase and worsening renal function as ≥20% decrease in estimated glomerular filtration rate. IRF occurred in 31% of the 896 patients meeting eligibility criteria. Higher admission BUN/Cr was associated with inhospital IRF (odds ratio, 1.5 per 10 increase; 95% confidence interval [CI], 1.3–1.8; P<0.001), an association persisting after adjustment for baseline characteristics (odds ratio, 1.4; 95% CI, 1.1–1.8; P=0.004). However, higher admission BUN/Cr was also associated with post-discharge worsening renal function (odds ratio, 1.4; 95% CI, 1.1–1.8; P=0.011). Notably, in patients with an elevated admission BUN/Cr, the risk of death associated with RD (estimated glomerular filtration rate <45) was substantial (hazard ratio, 2.2; 95% CI, 1.6–3.1; P<0.001). However, in patients with a normal admission BUN/Cr, RD was not associated with increased mortality (hazard ratio, 1.2; 95% CI, 0.67–2.0; P=0.59; p interaction=0.03). Conclusions An elevated admission BUN/Cr identifies decompensated patients with heart failure likely to experience IRF with treatment, providing proof of concept that reversible RD may be a discernible entity. However, this improvement seems to be largely transient, and RD, in the setting of an elevated BUN/Cr, remains strongly associated with death. Further research is warranted to develop strategies for the optimal detection and treatment of these high-risk patients. PMID:23325460
NASA Technical Reports Server (NTRS)
Hunter, H. E.
1972-01-01
The Avco Data Analysis and Prediction Techniques (ADAPT) were employed to determine laws capable of detecting failures in a heat plant up to three days in advance of the occurrence of the failure. The projected performance of algorithms yielded a detection probability of 90% with false alarm rates of the order of 1 per year for a sample rate of 1 per day with each detection, followed by 3 hourly samplings. This performance was verified on 173 independent test cases. The program also demonstrated diagnostic algorithms and the ability to predict the time of failure to approximately plus or minus 8 hours up to three days in advance of the failure. The ADAPT programs produce simple algorithms which have a unique possibility of a relatively low cost updating procedure. The algorithms were implemented on general purpose computers at Kennedy Space Flight Center and tested against current data.
NASA Astrophysics Data System (ADS)
Reid, M. E.; Iverson, R. M.; Brien, D. L.; Iverson, N. R.; Lahusen, R. G.; Logan, M.
2004-12-01
Most studies of landslide initiation employ limit-equilibrium analyses of slope stability. Owing to a lack of detailed data, however, few studies have tested limit-equilibrium predictions against physical measurements of slope failure. We have conducted a series of field-scale, highly controlled landslide initiation experiments at the USGS debris-flow flume in Oregon; these experiments provide exceptional data to test limit-equilibrium methods. In each of seven experiments, we attempted to induce failure in a 0.65-m-thick, 2-m-wide, 6-m³ prism of loamy sand placed behind a retaining wall in the 31° sloping flume. We systematically investigated triggering of sliding by groundwater injection, by prolonged moderate-intensity sprinkling, and by bursts of high-intensity sprinkling. We also used vibratory compaction to control soil porosity and thereby investigate differences in failure behavior of dense and loose soils. About 50 sensors were monitored at 20 Hz during the experiments, including nests of tiltmeters buried at 7 cm spacing to define subsurface failure geometry, and nests of tensiometers and pore-pressure sensors to define evolving pore-pressure fields. In addition, we performed ancillary laboratory tests to measure soil porosity, shear strength, hydraulic conductivity, and compressibility. In loose soils (porosity of 0.52 to 0.55), abrupt failure typically occurred along the flume bed after substantial soil deformation. In denser soils (porosity of 0.41 to 0.44), gradual failure occurred within the soil prism. All failure surfaces had a maximum length to depth ratio of about 7. In even denser soil (porosity of 0.39), we could not induce failure by sprinkling. The internal friction angle of the soils varied from 28° to 40° with decreasing porosity. We analyzed stability at failure, given the observed pore-pressure conditions just prior to large movement, using a 1-D infinite-slope method and a more complete 2-D Janbu method.
Each method provides a static Factor of Safety (FS), and in theory failure occurs when FS ≤ 1. Using the 1-D analysis, all experiments having failure had FS well below 1 (typically 0.5-0.8). Using the 2-D analysis for these same conditions, FS was less than but closer to 1 (typically 0.8-0.9). For the experiment with no failure, the 2-D FS was, reassuringly, > 1. These results indicate that the 2-D Janbu analysis is more accurate than the 1-D infinite-slope method for computing limit-equilibrium slope stability in shallow slides with limited areal extent.
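The 1-D infinite-slope calculation used above follows the standard form FS = [c + (gamma*z*cos²(theta) - u)*tan(phi)] / (gamma*z*sin(theta)*cos(theta)). A minimal sketch; the parameter values below are illustrative, not the measured flume soil properties:

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, theta_deg, u):
    """1-D infinite-slope Factor of Safety with pore pressure u (Pa)
    acting on a slip surface at depth z (m) in soil of unit weight
    gamma (N/m^3), cohesion c (Pa), and friction angle phi."""
    theta, phi = math.radians(theta_deg), math.radians(phi_deg)
    normal = gamma * z * math.cos(theta) ** 2 - u   # effective normal stress
    shear = gamma * z * math.sin(theta) * math.cos(theta)
    return (c + normal * math.tan(phi)) / shear

# Cohesionless sandy soil on a 31-degree slope: rising pore pressure
# drives FS below 1, the nominal failure condition.
print(infinite_slope_fs(c=0.0, phi_deg=35.0, gamma=18e3, z=0.65, theta_deg=31.0, u=3e3))
```

Because this 1-D form ignores the resistance of the slide's margins, it understates FS for short slides, which is consistent with the 2-D Janbu values falling closer to 1.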
Respiratory failure in diabetic ketoacidosis.
Konstantinov, Nikifor K; Rohrscheib, Mark; Agaba, Emmanuel I; Dorin, Richard I; Murata, Glen H; Tzamaloukas, Antonios H
2015-07-25
Respiratory failure complicating the course of diabetic ketoacidosis (DKA) is a source of increased morbidity and mortality. Detection of respiratory failure in DKA requires focused clinical monitoring, careful interpretation of arterial blood gases, and investigation for conditions that can adversely affect respiration. Conditions that compromise respiratory function caused by DKA can be detected at presentation but are usually more prevalent during treatment. These conditions include deficits of potassium, magnesium and phosphate and hydrostatic or non-hydrostatic pulmonary edema. Conditions not caused by DKA that can worsen respiratory function under the added stress of DKA include infections of the respiratory system, pre-existing respiratory or neuromuscular disease and miscellaneous other conditions. Prompt recognition and management of the conditions that can lead to respiratory failure in DKA may prevent respiratory failure and improve mortality from DKA.
Respiratory failure in diabetic ketoacidosis
Konstantinov, Nikifor K; Rohrscheib, Mark; Agaba, Emmanuel I; Dorin, Richard I; Murata, Glen H; Tzamaloukas, Antonios H
2015-01-01
Respiratory failure complicating the course of diabetic ketoacidosis (DKA) is a source of increased morbidity and mortality. Detection of respiratory failure in DKA requires focused clinical monitoring, careful interpretation of arterial blood gases, and investigation for conditions that can adversely affect respiration. Conditions that compromise respiratory function caused by DKA can be detected at presentation but are usually more prevalent during treatment. These conditions include deficits of potassium, magnesium and phosphate and hydrostatic or non-hydrostatic pulmonary edema. Conditions not caused by DKA that can worsen respiratory function under the added stress of DKA include infections of the respiratory system, pre-existing respiratory or neuromuscular disease and miscellaneous other conditions. Prompt recognition and management of the conditions that can lead to respiratory failure in DKA may prevent respiratory failure and improve mortality from DKA. PMID:26240698
NASA integrated vehicle health management technology experiment for X-37
NASA Astrophysics Data System (ADS)
Schwabacher, Mark; Samuels, Jeff; Brownston, Lee
2002-07-01
The NASA Integrated Vehicle Health Management (IVHM) Technology Experiment for X-37 was intended to run IVHM software on board the X-37 spacecraft. The X-37 is an unpiloted vehicle designed to orbit the Earth for up to 21 days before landing on a runway. The objectives of the experiment were to demonstrate the benefits of in-flight IVHM to the operation of a Reusable Launch Vehicle, to advance the Technology Readiness Level of this IVHM technology within a flight environment, and to demonstrate that the IVHM software could operate on the Vehicle Management Computer. The scope of the experiment was to perform real-time fault detection and isolation for X-37's electrical power system and electro-mechanical actuators. The experiment used Livingstone, a software system that performs diagnosis using a qualitative, model-based reasoning approach that searches system-wide interactions to detect and isolate failures. Two of the challenges we faced were to make this research software more efficient so that it would fit within the limited computational resources that were available to us on the X-37 spacecraft, and to modify it so that it satisfied the X-37's software safety requirements. Although the experiment is currently unfunded, the development effort resulted in major improvements in Livingstone's efficiency and safety. This paper reviews some of the details of the modeling and integration efforts, and some of the lessons that were learned.
Experimental design for three-color and four-color gene expression microarrays.
Woo, Yong; Krueger, Winfried; Kaur, Anupinder; Churchill, Gary
2005-06-01
Three-color microarrays, compared with two-color microarrays, can increase design efficiency and power to detect differential expression without additional samples and arrays. Furthermore, three-color microarray technology is currently available at a reasonable cost. Despite the potential advantages, clear guidelines for designing and analyzing three-color experiments do not exist. We propose a three- and a four-color cyclic design (loop) and a complementary graphical representation to help design experiments that are balanced, efficient and robust to hybridization failures. In theory, three-color loop designs are more efficient than two-color loop designs. Experiments using both two- and three-color platforms were performed in parallel and their outputs were analyzed using linear mixed model analysis in R/MAANOVA. These results demonstrate that three-color experiments using the same number of samples (and fewer arrays) will perform as efficiently as two-color experiments. The improved efficiency of the design is somewhat offset by a reduced dynamic range and increased variability in the three-color experimental system. This result suggests that, with minor technological improvements, three-color microarrays using loop designs could detect differential expression more efficiently than two-color loop designs. http://www.jax.org/staff/churchill/labsite/software Multicolor cyclic design construction methods and examples along with additional results of the experiment are provided at http://www.jax.org/staff/churchill/labsite/pubs/yong.
A new method to estimate location and slip of simulated rock failure events
NASA Astrophysics Data System (ADS)
Heinze, Thomas; Galvan, Boris; Miller, Stephen Andrew
2015-05-01
At the laboratory scale, identifying and locating acoustic emissions (AEs) is a common method for short term prediction of failure in geomaterials. Above average AE typically precedes the failure process and is easily measured. At larger scales, increase in micro-seismic activity sometimes precedes large earthquakes (e.g. Tohoku, L'Aquila, oceanic transforms), and can be used to assess seismic risk. The goal of this work is to develop a methodology and numerical algorithms for extracting a measurable quantity analogous to AE arising from the solution of equations governing rock deformation. Since there is no physical property to quantify AE derivable from the governing equations, an appropriate rock-mechanical analog needs to be found. In this work, we identify a general behavior of the AE generation process preceding rock failure. This behavior includes arbitrary localization of low magnitude events during the pre-failure stage, followed by increase in number and amplitude, and finally localization around the incipient failure plane during macroscopic failure. We propose deviatoric strain rate as the numerical analog that mimics this behavior, and develop two different algorithms designed to detect rapid increases in deviatoric strain using moving averages. The numerical model solves a fully poro-elasto-plastic continuum model and is coupled to a two-phase flow model. We test our model by comparing simulation results with experimental data of drained compression and of fluid injection experiments. We find for both cases that occurrence and amplitude of our AE analog mimic the observed general behavior of the AE generation process. Our technique can be extended to modeling at the field scale, possibly providing a mechanistic basis for seismic hazard assessment from seismicity that occasionally precedes large earthquakes.
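One way to realize the moving-average detection described above is to compare a short-window mean of the AE-analog signal against a long-window baseline; the window lengths, threshold ratio, and synthetic series here are illustrative assumptions, not the paper's algorithm parameters:

```python
def moving_average_trigger(series, short=3, long=10, ratio=2.0):
    """Flag indices where the short-window mean of the AE-analog signal
    (e.g. deviatoric strain rate) exceeds `ratio` times the long-window
    mean, marking a rapid increase relative to the recent baseline."""
    alarms = []
    for k in range(long, len(series)):
        s = sum(series[k - short:k]) / short
        l = sum(series[k - long:k]) / long
        if l > 0 and s > ratio * l:
            alarms.append(k)
    return alarms

# Synthetic rate: quiescence followed by accelerating deformation.
rate = [1.0] * 30 + [1.5, 3.0, 8.0, 20.0]
print(moving_average_trigger(rate))
```

The trigger fires only once the short window is dominated by the accelerating phase, which mirrors the observed localization of events just before macroscopic failure.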
Relationship between age at natural menopause and risk of heart failure.
Rahman, Iffat; Åkesson, Agneta; Wolk, Alicja
2015-01-01
We investigated whether younger age at natural menopause confers a risk of heart failure. We also examined a possible modifying effect of tobacco smoking. This study used the population-based Swedish Mammography Cohort; 22,256 postmenopausal women with information on age at natural menopause were followed from 1997 through 2011. First event of heart failure was ascertained through the Swedish National Patient Register and the Cause of Death Register. Cox proportional hazards regression analyses were conducted to estimate multivariable-adjusted hazard ratios (HRs) and 95% CIs. During a mean follow-up of 13 years, we ascertained 2,532 first events of heart failure hospitalizations and deaths. The mean age at menopause was 51 years. Early natural menopause (40-45 y), compared with menopause at ages 50 to 54 years, was significantly associated with heart failure (HR, 1.40; 95% CI, 1.19 to 1.64). In analyses stratified by smoking status, similar HRs were observed for this age group among never smokers (HR, 1.33; 95% CI, 1.06 to 1.66) and ever smokers (HR, 1.39; 95% CI, 1.09 to 1.78). Among ever smokers, increased incidence (HR, 1.25; 95% CI, 1.06 to 1.47) of heart failure could be detected even among those who entered menopause at ages 46 to 49 years. We found a significant interaction between age at natural menopause and smoking (P = 0.019). This study indicates that women who experience early natural menopause are at increased risk for developing heart failure and that smoking can modify the association by increasing the risk even among women who enter menopause around ages 46 to 49 years.
Applications of matched field processing to damage detection in composite wind turbine blades
NASA Astrophysics Data System (ADS)
Tippmann, Jeffery D.; Lanza di Scalea, Francesco
2015-03-01
There are many structures serving vital infrastructure, energy, and national security purposes. Inspecting the components and areas of the structure most prone to failure during maintenance operations by using non-destructive evaluation methods has been essential in avoiding costly, but preventable, catastrophic failures. In many cases, the inspections are performed by introducing acoustic, ultrasonic, or even thermographic waves into the structure and then evaluating the response. Sometimes the structure, or a component, is not accessible for active inspection methods. Because of this, there is a growing interest in using passive methods, such as ambient noise or sources of opportunity, to produce a passive impulse response function similar to the active approach. Several matched field processing techniques, most notably used in oceanography and seismology applications, are examined in more detail. While sparse array imaging in structures has been studied for years, all methods studied previously have used an active interrogation approach. Here, structural damage detection is studied by using impulse response functions reconstructed from ambient noise within sparse array imaging techniques, such as matched-field processing. This has been studied in experiments on a 9-m wind turbine blade.
NASA Technical Reports Server (NTRS)
Craig, Larry G.
2010-01-01
This slide presentation reviews three software failures and how each contributed to or caused the loss of a launch vehicle or the failure to insert a payload into orbit. Failure mitigation strategies are suggested to help avoid such systematic failures in the future.
Fung, Erik; Hui, Elsie; Yang, Xiaobo; Lui, Leong T.; Cheng, King F.; Li, Qi; Fan, Yiting; Sahota, Daljit S.; Ma, Bosco H. M.; Lee, Jenny S. W.; Lee, Alex P. W.; Woo, Jean
2018-01-01
Heart failure and frailty are clinical syndromes that present with overlapping phenotypic characteristics. Importantly, their co-presence is associated with increased mortality and morbidity. While mechanical and electrical device therapies for heart failure are vital for select patients with advanced stage disease, the majority of patients and especially those with undiagnosed heart failure would benefit from early disease detection and prompt initiation of guideline-directed medical therapies. In this article, we review the problematic interactions between heart failure and frailty, introduce a focused cardiac screening program for community-living elderly initiated by a mobile communication device app leading to the Undiagnosed heart Failure in frail Older individuals (UFO) study, and discuss how the knowledge of pre-frailty and frailty status could be exploited for the detection of previously undiagnosed heart failure or advanced cardiac disease. The widespread use of mobile devices coupled with increasing availability of novel, effective medical and minimally invasive therapies have incentivized new approaches to heart failure case finding and disease management. PMID:29740330
Extended Testability Analysis Tool
NASA Technical Reports Server (NTRS)
Melcher, Kevin; Maul, William A.; Fulton, Christopher
2012-01-01
The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.
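The logic behind the Detectability, Test Utilization, and Failure Mode Isolation reports can be illustrated with a small failure-mode-by-test detection matrix (hypothetical data, not TEAMS Designer output):

```python
# D[fm][t] == 1 if test t detects failure mode fm (hypothetical data).
D = {
    "fm1": {"t1": 1, "t2": 0, "t3": 1},
    "fm2": {"t1": 1, "t2": 0, "t3": 1},
    "fm3": {"t1": 0, "t2": 1, "t3": 0},
}

def detecting_tests(fm):
    """Detectability-report view: which tests detect this failure mode."""
    return sorted(t for t, hit in D[fm].items() if hit)

def isolable(fm_a, fm_b):
    """Isolation-report view: two failure modes are distinguishable
    only if their detection signatures differ."""
    return D[fm_a] != D[fm_b]

print(detecting_tests("fm1"), isolable("fm1", "fm2"), isolable("fm1", "fm3"))
```

Here fm1 and fm2 share an identical signature, so an isolation report would group them as an ambiguity set; removing a sensor corresponds to deleting a column, which is the kind of sensitivity variation the tool studies.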
NASA Technical Reports Server (NTRS)
Frickland, P. O.; Repar, J.
1982-01-01
A previously developed test design for accelerated aging of photovoltaic modules was experimentally evaluated. The studies included a review of relevant field experience, environmental chamber cycling of full size modules, and electrical and physical evaluation of the effects of accelerated aging during and after the tests. The test results indicated that thermally induced fatigue of the interconnects was the primary mode of module failure as measured by normalized power output. No chemical change in the silicone encapsulant was detectable after 360 test cycles.
NASA Astrophysics Data System (ADS)
Jules, Kenol; Lin, Paul P.
2007-06-01
With the International Space Station currently operational, a significant amount of acceleration data is being down-linked, processed and analyzed daily on the ground on a continuous basis for the space station reduced gravity environment characterization, the vehicle design requirements verification and science data collection. To help understand the impact of the unique spacecraft environment on the science data, an artificial intelligence monitoring system was developed, which detects in near real time any change in the reduced gravity environment susceptible to affect the on-going experiments. Using a dynamic graphical display, the monitoring system allows science teams, at any time and any location, to see the active vibration disturbances, such as pumps, fans, compressors, crew exercise, re-boosts and extra-vehicular activities, that might impact the reduced gravity environment the experiments are exposed to. The monitoring system can detect both known and unknown vibratory disturbance activities. It can also perform trend analysis and prediction by analyzing past data over many increments (an increment usually lasts 6 months) collected onboard the station for selected disturbances. This feature can be used to monitor the health of onboard mechanical systems to detect and prevent potential system failures. The monitoring system has two operating modes: online and offline. Both near real-time online vibratory disturbance detection and offline detection and trend analysis are discussed in this paper.
46 CFR 161.002-8 - Automatic fire detecting systems, general requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... detecting system shall consist of a power supply; a control unit on which are located visible and audible... control unit. Power failure alarm devices may be separately housed from the control unit and may be combined with other power failure alarm systems when specifically approved. (b) [Reserved] [21 FR 9032, Nov...
46 CFR 161.002-8 - Automatic fire detecting systems, general requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... detecting system shall consist of a power supply; a control unit on which are located visible and audible... control unit. Power failure alarm devices may be separately housed from the control unit and may be combined with other power failure alarm systems when specifically approved. (b) [Reserved] [21 FR 9032, Nov...
A dual-processor multi-frequency implementation of the FINDS algorithm
NASA Technical Reports Server (NTRS)
Godiwala, Pankaj M.; Caglayan, Alper K.
1987-01-01
This report presents a parallel processing implementation of the FINDS (Fault Inferring Nonlinear Detection System) algorithm on a dual processor configured target flight computer. First, a filter initialization scheme is presented which allows the no-fail filter (NFF) states to be initialized using the first iteration of the flight data. A modified failure isolation strategy, compatible with the new failure detection strategy reported earlier, is discussed, and the performance of the new FDI algorithm is analyzed using flight recorded data from the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment. The results show that low-level MLS, IMU, and IAS sensor failures are detected and isolated instantaneously, while accelerometer and rate gyro failures continue to take comparatively longer to detect and isolate. The parallel implementation is accomplished by partitioning the FINDS algorithm into two parts: one based on the translational dynamics and the other based on the rotational kinematics. Finally, a multi-rate implementation of the algorithm is presented, yielding significantly lower execution times with acceptable estimation and FDI performance.
Anomaly Monitoring Method for Key Components of Satellite
Fan, Linjun; Xiao, Weidong; Tang, Jun
2014-01-01
This paper presented a fault diagnosis method for key components of satellite, called Anomaly Monitoring Method (AMM), which is made up of state estimation based on Multivariate State Estimation Techniques (MSET) and anomaly detection based on Sequential Probability Ratio Test (SPRT). On the basis of failure analysis of lithium-ion batteries (LIBs), we divided the failure of LIBs into internal failure, external failure, and thermal runaway and selected the electrolyte resistance (Re) and the charge transfer resistance (Rct) as the key parameters of state estimation. Then, through the actual in-orbit telemetry data of the key parameters of LIBs, we obtained the actual residual value (RX) and healthy residual value (RL) of LIBs based on the state estimation of MSET, and then, through the residual values (RX and RL) of LIBs, we detected the anomaly states based on the anomaly detection of SPRT. Lastly, we conducted an example of AMM for LIBs and, according to the results, validated the feasibility and effectiveness of AMM by comparing it with the results of the threshold detection method (TDM). PMID:24587703
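The SPRT stage can be sketched for a scalar residual: accumulate the log-likelihood ratio between a healthy model N(0, sigma²) and a degraded model N(mu1, sigma²), and decide when it crosses Wald's thresholds. The parameter values are illustrative, not the LIB telemetry settings:

```python
import math

def sprt(residuals, mu1=1.0, sigma=1.0, alpha=0.05, beta=0.05):
    """Sequential Probability Ratio Test on state-estimation residuals:
    H0 ~ N(0, sigma^2) (healthy) vs H1 ~ N(mu1, sigma^2) (anomalous).
    Returns (decision, index), or (None, n) if no threshold is crossed."""
    upper = math.log((1 - beta) / alpha)   # cross upward -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross downward -> accept H0
    llr = 0.0
    for k, r in enumerate(residuals):
        llr += (mu1 / sigma ** 2) * (r - mu1 / 2.0)  # Gaussian LLR increment
        if llr >= upper:
            return "H1", k
        if llr <= lower:
            return "H0", k
    return None, len(residuals)

print(sprt([1.2, 0.9, 1.1, 1.3, 0.8, 1.0, 1.2, 0.9]))  # drifting residuals
```

Compared with a fixed threshold (the TDM baseline), the sequential test trades a few extra samples for explicit control of the false-alarm rate alpha and miss rate beta.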
A Weibull distribution accrual failure detector for cloud computing.
Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin
2017-01-01
Failure detectors are a fundamental building block for constructing high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, is proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions of cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared using public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector performs better in terms of speed and accuracy in unstable scenarios, especially in cloud computing.
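As an illustration of the accrual idea behind such a detector, the sketch below fits a Weibull model to recent heartbeat inter-arrival times (by method of moments, for simplicity; the paper's estimator may differ) and reports a continuous suspicion level instead of a binary alive/failed verdict. All names and parameters are illustrative.

```python
import math
from collections import deque

class WeibullAccrualDetector:
    """Accrual failure detector sketch: outputs a suspicion level phi based
    on how unlikely the current heartbeat silence is under a Weibull model
    of past inter-arrival times."""

    def __init__(self, window=100):
        self.arrivals = deque(maxlen=window)  # recent inter-arrival times
        self.last = None

    def heartbeat(self, now):
        if self.last is not None:
            self.arrivals.append(now - self.last)
        self.last = now

    def _fit(self):
        # Method-of-moments fit of Weibull(shape k, scale lam):
        #   mean = lam*G(1+1/k),  var = lam^2*(G(1+2/k) - G(1+1/k)^2)
        xs = list(self.arrivals)
        n = len(xs)
        mean = sum(xs) / n
        var = sum((x - mean) ** 2 for x in xs) / n
        if var <= 0:
            return 10.0, mean          # degenerate: near-constant intervals
        cv2 = var / mean ** 2

        def f(k):                      # decreasing in k (Weibull CV shrinks)
            return math.gamma(1 + 2 / k) / math.gamma(1 + 1 / k) ** 2 - 1 - cv2

        lo, hi = 0.1, 50.0
        for _ in range(60):            # bisection on the shape parameter
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        k = (lo + hi) / 2
        lam = mean / math.gamma(1 + 1 / k)
        return k, lam

    def phi(self, now):
        """Suspicion level: -log10 P(silence this long | process alive)."""
        if self.last is None or len(self.arrivals) < 2:
            return 0.0
        t = now - self.last
        k, lam = self._fit()
        # Weibull survival function: P(T > t) = exp(-(t/lam)^k)
        return (t / lam) ** k / math.log(10)
```

A monitor would compare `phi` against per-application thresholds, which is what lets one accrual detector serve applications with different timeliness needs.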
Ferrographic and spectrometer oil analysis from a failed gas turbine engine
NASA Technical Reports Server (NTRS)
Jones, W. R., Jr.
1982-01-01
An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. It was concluded that a severe surge may have caused interference between rotating and stationary compressor components that either directly or indirectly ignited the titanium. Several engine oil samples (taken before and after the failure) were analyzed with a Ferrograph and with plasma, atomic absorption, and emission spectrometers to see whether this information would aid in diagnosing the engine failure. The analyses indicated that a lubrication system failure was not a causative factor. Neither an abnormal wear mechanism nor a high level of wear debris was detected in the engine oil sample taken just prior to the test in which the failure occurred. However, low concentrations (0.2 to 0.5 ppm) of titanium were evident in this sample and in samples taken earlier. After the failure, higher titanium concentrations (2 ppm) were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure. The oil analyses eliminated a lubrication system bearing or shaft seal failure as the cause of the engine failure.
NASA Technical Reports Server (NTRS)
Anderson, Leif F.; Harrington, Sean P.; Omeke, Ojei, II; Schwaab, Douglas G.
2009-01-01
This is a case study on revised estimates of induced failure for International Space Station (ISS) on-orbit replacement units (ORUs). We devise a heuristic to leverage operational experience data by aggregating ORU, associated-function (vehicle sub-system), and vehicle 'effective' k-factors using actual failure experience. With this input, we determine a significant failure threshold and minimize the difference between the actual and predicted failure rates. We conclude with a discussion of both qualitative and quantitative improvements of the heuristic method and its potential benefits to ISS supportability engineering analysis.
Validation diagnostics for defective thermocouple circuits
NASA Astrophysics Data System (ADS)
Reed, R. P.
Thermocouples, properly used under favorable conditions, can measure temperature within an accepted tolerance. However, when improperly applied or exposed to hostile mechanical, chemical, thermal, or radiation environments, they often fail without the error being evident in the temperature record. Conversely, features that appear unreasonable in temperature records can be authentic. When hidden failure occurs during measurement, deliberate recording of supplementary information is necessary to distinguish valid from faulty data. Loop resistance change, circuit isolation, isolated noise potential, and other measures can reveal symptoms of developing defects. Monitored continually along with temperature, they can reveal the occurrence, location, and nature of damage incurred during measurement. Special multiterminal branched thermocouple circuits and combinatorial multiplex switching allow detection of dc measurement noise and decalibration. Symptoms of insidious, often consequential, failure are illustrated by examples from field experience in measuring the temperature of a propagating retorting front in underground coal gasification.
Development of a Distributed Crack Sensor Using Coaxial Cable.
Zhou, Zhi; Jiao, Tong; Zhao, Peng; Liu, Jia; Xiao, Hai
2016-07-29
Cracks, an important factor in structural failure, directly reflect structural damage, so it is important to realize distributed, real-time crack monitoring. To overcome the shortcomings of traditional crack detectors, such as inconvenient installation, vulnerability, and limited measurement range, an improved topology-based cable sensor with a shallow helical groove on the outer surface of a coaxial cable is proposed in this paper. The sensing mechanism, fabrication method, and performance are investigated both numerically and experimentally. Crack-monitoring experiments on reinforced beams are also presented, illustrating the utility of this sensor in practical applications. These studies show that the sensor can identify a minimum crack width of 0.02 mm and can measure multiple cracks with a spatial resolution of 3 mm. The sensor is also shown to detect the initiation and development of cracks up to structural failure.
Prediction of Fatigue Crack Growth in Gas Turbine Engine Blades Using Acoustic Emission
Zhang, Zhiheng; Yang, Guoan; Hu, Kun
2018-01-01
Fatigue failure is the main type of failure that occurs in gas turbine engine blades, and an online monitoring method for detecting fatigue cracks in blades is urgently needed. Therefore, in the present study, we propose the use of acoustic emission (AE) monitoring for the online identification of blade status. Experiments on fatigue crack propagation based on AE monitoring of gas turbine engine blades and TC11 titanium alloy plates were conducted. The relationship between the cumulative AE hits and the fatigue crack length was established, and a method of using AE parameters to determine the crack propagation stage was then proposed. A method for predicting the degree of crack propagation and residual fatigue life based on the AE energy was obtained. The results provide a new method for the online monitoring of cracks in gas turbine engine blades. PMID:29693556
Miftari, Rame; Nura, Adem; Topçiu-Shufta, Valdete; Miftari, Valon; Murseli, Arbenita; Haxhibeqiri, Valdete
2017-01-01
Aim: The aim of this study was to determine the validity of 99mTc-DTPA estimation of GFR for the early detection of chronic kidney failure. Material and methods: 110 patients (54 males and 56 females) with kidney disease were referred for evaluation of renal function at the UCC of Kosovo. Patients were divided into two groups: the first comprised 30 patients with confirmed renal failure, and the second comprised 80 patients with other renal diseases. Only patients with available results for creatinine, urea, and glucose in blood serum were included. GFR was estimated using the Gates 99mTc-DTPA method. Statistical processing used the arithmetic average, Student's t-test, percentages, and the sensitivity, specificity, and accuracy of the test. Results: The average age of all patients was 36 years (females 37, males 35). Patients with renal failure were significantly older than patients with other renal diseases (p<0.005). Renal failure was found in 30 patients (27.27%). Serum urea and creatinine concentrations in patients with renal failure were significantly higher than in patients with other renal diseases (p<0.00001), and GFR was significantly lower, at 51.75 ml/min (p<0.00001). The sensitivity of uraemia and creatininaemia for detecting renal failure was 83.33%, whereas the sensitivity of 99mTc-DTPA GFR was 100%. The specificity of uraemia and creatininaemia was 63%, versus 47.5% for 99mTc-DTPA GFR. The diagnostic accuracy of blood urea and creatinine in detecting renal failure was 69%, versus 61.8% for 99mTc-DTPA GFR. Conclusion: 99mTc-DTPA scintigraphy (Gates method), together with biochemical tests, is a very sensitive method for the early detection of chronic renal failure. PMID:28883673
Failure detection and recovery in the assembly/contingency subsystem
NASA Technical Reports Server (NTRS)
Gantenbein, Rex E.
1993-01-01
The Assembly/Contingency Subsystem (ACS) is the primary communications link on board the Space Station. Any failure in a component of this system, or in the external devices through which it communicates with ground-based systems, will isolate the Station. The ACS software design includes a failure management capability (ACFM) that provides protocols for failure detection, isolation, and recovery (FDIR). The ACFM design requirements, as outlined in the current ACS software requirements specification document, are reviewed. The activities carried out in this review include: (1) an informal but thorough end-to-end failure mode and effects analysis of the proposed software architecture for the ACFM; and (2) a prototype of the ACFM software, implemented as a C program under the UNIX operating system. The purpose of this review is to evaluate the FDIR protocols specified in the ACS design, and the specifications themselves, in light of their use in implementing the ACFM. The basis of failure detection in the ACFM is the loss of signal between the ground and the Station, which (under the appropriate circumstances) initiates recovery to restore communications. This recovery involves the reconfiguration of the ACS to either a backup set of components or a degraded communications mode. The initiation of recovery depends largely on the criticality of the failure mode, which is defined by tables in the ACFM and can be modified to provide a measure of flexibility in recovery procedures.
People's Financial Choice Depends on their Previous Task Success or Failure.
Sekścińska, Katarzyna
2015-01-01
Existing knowledge about the impact of the experience prior to financial choices has been limited almost exclusively to single risky choices. Moreover, the results obtained in these studies have not been entirely consistent. For example, some studies suggested that the experience of success makes people more willing to take a risk, while other studies led to the opposite conclusions. The results of the two experimental studies presented in this paper provide evidence for the hypothesis that the experience of success or failure influences people's financial choices, but the effect of the success or failure depends on the type of task (financial or non-financial) preceding a financial decision. The experience of success in financial tasks increased participants' tendency to invest and make risky investment choices, while it also made them less prone to save. On the other hand, the experience of failure heightened the amount of money that participants decided to save, and lowered their tendency to invest and make risky investment choices. However, the effects of the experience of success or failure in non-financial tasks were exactly the opposite. The presented studies indicate that the specific circumstances in which the individual gains the experience may explain the discrepancies in the results of studies on the relationship between experience prior to a financial choice and the tendency to take risks.
NASA Technical Reports Server (NTRS)
Davis, Robert N.; Polites, Michael E.; Trevino, Luis C.
2004-01-01
This paper details a novel scheme for autonomous component health management (ACHM) with failed-actuator detection and failed-sensor detection, identification, and avoidance. This new scheme has features that far exceed the performance of systems with triple-redundant sensing and voting, yet requires fewer sensors and could be applied to any system with redundant sensing. Relevant background to the ACHM scheme is provided, and simulation results are presented for the application of the scheme to a single-axis spacecraft attitude control system with a 3rd-order plant and dual-redundant measurement of system states. ACHM fulfills key functions needed by an integrated vehicle health monitoring (IVHM) system: it is autonomous and adaptive; works in real time; provides optimal state estimation; identifies and avoids failed components; reconfigures for multiple and for intermittent failures; works for hard-over, soft, and zero-output failures; and works for both open- and closed-loop systems. The ACHM scheme combines a prefilter, which generates preliminary state estimates, detects and identifies failed sensors and actuators, and avoids the use of failed sensors in state estimation, with a fixed-gain Kalman filter, which generates optimal state estimates and provides the model-based state estimates that form an integral part of the failure detection logic. The results show that ACHM successfully isolates multiple persistent and intermittent hard-over, soft, and zero-output failures. It is now ready to be tested on a computer model of an actual system.
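The core idea of testing each redundant sensor's innovation against a model-based prediction before fusing can be sketched minimally. This is a toy scalar illustration with a constant-state model and a fixed gain, not the ACHM implementation; the threshold and gain values are hypothetical.

```python
def run_filter(meas_a, meas_b, gain=0.3, threshold=3.0):
    """meas_a, meas_b: dual-redundant measurement streams of one scalar
    state.  Returns (estimates, failed_flags), where failed_flags[i] marks
    which sensor was excluded from the fusion at step i."""
    x = meas_a[0]                      # initialize from the first reading
    estimates, failed = [], []
    for za, zb in zip(meas_a, meas_b):
        # innovation of each sensor against the model-based prediction;
        # a large innovation flags the sensor as failed for this step
        flags = (abs(za - x) > threshold, abs(zb - x) > threshold)
        good = [z for z, bad in zip((za, zb), flags) if not bad]
        if good:                       # fuse only unfailed sensors
            z = sum(good) / len(good)
            x = x + gain * (z - x)     # fixed-gain (steady-state) update
        estimates.append(x)
        failed.append(flags)
    return estimates, failed
```

A hard-over failure on one sensor is flagged immediately, and because the failed sensor is excluded rather than averaged in, the state estimate is unaffected.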
An intelligent control system for failure detection and controller reconfiguration
NASA Technical Reports Server (NTRS)
Biswas, Saroj K.
1994-01-01
We present an architecture of an intelligent restructurable control system to automatically detect failure of system components, assess its impact on system performance and safety, and reconfigure the controller for performance recovery. Fault detection is based on neural network associative memories and pattern classifiers, and is implemented using a multilayer feedforward network. Details of the fault detection network along with simulation results on health monitoring of a dc motor have been presented. Conceptual developments for fault assessment using an expert system and controller reconfiguration using a neural network are outlined.
Characterization of emission microscopy and liquid crystal thermography in IC fault localization
NASA Astrophysics Data System (ADS)
Lau, C. K.; Sim, K. S.
2013-05-01
This paper characterizes two fault localization techniques, Emission Microscopy (EMMI) and Liquid Crystal Thermography (LCT), using integrated circuit (IC) leakage failures. The majority of today's semiconductor failures do not reveal a clear visual defect on the die surface and therefore require fault localization tools to identify the fault location. Among the various fault localization tools, liquid crystal thermography and frontside emission microscopy are commonly used in most semiconductor failure analysis laboratories. The two techniques are often mistakenly assumed to be the same, both detecting hot spots in chips failing with shorts or leakage. As a result, analysts tend to use only LCT, since it involves a much simpler test setup than EMMI. The omission of EMMI as an alternative technique often leads to incomplete analysis when LCT fails to localize any hot spot on a failing chip. This research was therefore established to characterize and compare both techniques in terms of their sensitivity in detecting the fault location in common semiconductor failures. A new method was also proposed as an alternative technique, namely backside LCT. Both techniques successfully detected the defect locations resulting from the leakage failures. LCT was observed to be more sensitive than EMMI in the frontside analysis approach; conversely, EMMI performed better in the backside analysis approach. LCT was more sensitive in localizing ESD defect locations, and EMMI was more sensitive in detecting non-ESD defect locations. Backside LCT was proven to work as effectively as frontside LCT and is ready to serve as an alternative to backside EMMI. The research confirmed that LCT detects heat generation whereas EMMI detects photon emission (recombination radiation). The results also suggest that the two techniques complement each other in IC fault localization, and that a failure analyst should use both when one of them produces no result.
Microwave imaging of spinning object using orbital angular momentum
NASA Astrophysics Data System (ADS)
Liu, Kang; Li, Xiang; Gao, Yue; Wang, Hongqiang; Cheng, Yongqiang
2017-09-01
The linear Doppler shift used for the detection of a spinning object becomes significantly weakened when the line of sight (LOS) is perpendicular to the object, which will result in the failure of detection. In this paper, a new detection and imaging technique for spinning objects is developed. The rotational Doppler phenomenon is observed by using the microwave carrying orbital angular momentum (OAM). To converge the radiation energy on the area where objects might exist, the generation method of OAM beams is proposed based on the frequency diversity principle, and the imaging model is derived accordingly. The detection method of the rotational Doppler shift and the imaging approach of the azimuthal profiles are proposed, which are verified by proof-of-concept experiments. Simulation and experimental results demonstrate that OAM beams can still be used to obtain the azimuthal profiles of spinning objects even when the LOS is perpendicular to the object. This work remedies the insufficiency in existing microwave sensing technology and offers a new solution to the object identification problem.
Ureaplasma parvum prosthetic joint infection detected by PCR.
Farrell, John J; Larson, Joshua A; Akeson, Jeffrey W; Lowery, Kristin S; Rounds, Megan A; Sampath, Rangarajan; Bonomo, Robert A; Patel, Robin
2014-06-01
We describe the first reported case of Ureaplasma parvum prosthetic joint infection (PJI) detected by PCR. Ureaplasma species do not possess a cell wall and are usually associated with colonization and infection of mucosal surfaces (not prosthetic material). U. parvum is a relatively new species name for certain serovars of Ureaplasma urealyticum, and PCR is useful for species determination. Our patient presented with late infection of his right total knee arthroplasty. Intraoperative fluid and tissue cultures and pre- and postoperative synovial fluid cultures were all negative. To discern the pathogen, we employed PCR coupled with electrospray ionization mass spectrometry (PCR/ESI-MS). Our patient's failure to respond to empirical antimicrobial treatment and our previous experience with PCR/ESI-MS in culture-negative cases of infection prompted us to use this approach over other diagnostic modalities. PCR/ESI-MS detected U. parvum in all samples. U. parvum-specific PCR testing was performed on all synovial fluid samples to confirm the U. parvum detection. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
Safety consequences of local initiating events in an LMFBR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, R.M.; Marr, W.W.; Padilla, A. Jr.
1975-12-01
The potential for fuel-failure propagation in an LMFBR at or near normal conditions is examined. Results are presented to support the conclusion that although individual fuel-pin failure may occur, rapid failure-propagation spreading among a large number of fuel pins in a subassembly is unlikely in an operating LMFBR. This conclusion is supported by operating experience, mechanistic analyses of failure-propagation phenomena, and experiments. In addition, some of the consequences of continued operation with defected fuel are considered.
Cardio-Pulmonary Stethoscope: Clinical Validation With Heart Failure and Hemodialysis Patients.
Iskander, Magdy F; Seto, Todd B; Perron, Ruthsenne Rg; Lim, Eunjung; Qazi, Farhan
2018-05-01
The purpose of this study is to evaluate the accuracy of a noninvasive radiofrequency-based device, the Cardio-Pulmonary Stethoscope (CPS), in monitoring heart and respiration rates and detecting changes in lung water content in human experiments and clinical trials. Three populations (healthy subjects, heart failure patients, and hemodialysis patients) were enrolled in this study, which was conducted at the University of Hawaii and the Queen's Medical Center in Honolulu, HI, USA. Heart and respiration rate measurements for all patients were compared with standard FDA-approved monitoring methods. For lung water measurements, CPS data were compared with simultaneous pulmonary capillary wedge pressure (PCWP) measurements for heart failure patients, and with the weight of extracted fluid for hemodialysis patients. Statistical correlation methods (Pearson, mixed, and intraclass) were used to compare the data and examine the accuracy of the CPS results. Heart and respiration rates of all patients showed excellent correlation factors, r ≥ 0.9. Comparisons with fluid removed during hemodialysis treatment showed correlation factors of up to 1, while PCWP measurements of heart failure patients showed correlation factors of up to 0.97. These results suggest that CPS technology accurately quantifies heart and respiration rates and measures fluid changes in the lungs. The CPS has the potential to monitor lung fluid status noninvasively and continuously in clinical and outpatient settings. Early and efficient management of lung fluid status is key to managing chronic conditions such as heart failure, pulmonary hypertension, and acute respiratory distress syndrome.
NASA Technical Reports Server (NTRS)
Hruby, R. J.; Bjorkman, W. S.; Schmidt, S. F.; Carestia, R. A.
1979-01-01
Algorithms were developed that attempt to identify which sensor in a tetrad configuration has experienced a step failure. An algorithm is also described that provides a measure of the confidence with which the correct identification was made. Experimental results are presented from real-time tests conducted on a three-axis motion facility utilizing an ortho-skew tetrad strapdown inertial sensor package. The effects of prediction errors and of quantization on correct failure identification are discussed as well as an algorithm for detecting second failures through prediction.
NASA Astrophysics Data System (ADS)
Sheikh, Muhammad; Elmarakbi, Ahmed; Elkady, Mustafa
2017-12-01
This paper focuses on state-of-charge (SOC) dependent mechanical failure analysis of an 18650 lithium-ion battery to detect signs of thermal runaway. Quasi-static loading conditions are used with four test protocols (rod, circular punch, three-point bend, and flat plate) to analyse the propagation of mechanical failures and failure-induced temperature changes. Finite element analysis (FEA) is used to model a single battery cell with a concentric layered formation representing a complete cell. The numerical simulation model uses a solid element formulation, in which the steel casing and all layers follow the same formulation, with a fine mesh used for all layers. Experimental work is also performed to analyse the deformation of the 18650 lithium-ion cell, and the numerical simulation model is validated against the experimental results. Cell deformation is used to mimic the onset of thermal runaway, and various thermal-runaway detection strategies are employed in this work, including force-displacement, voltage-temperature, stress-strain, SOC dependency, and separator failure. Results show that the cell can reach severe conditions even with no fracture or rupture; these conditions may be slow to develop, but they can lead to catastrophic failures. The numerical simulation technique proves useful in predicting initial battery failures, and the results correlate well with the experimental results.
Kreitz, Carina; Schnuerch, Robert; Gibbons, Henning; Memmert, Daniel
2015-01-01
Human awareness is highly limited, which is vividly demonstrated by the phenomenon that unexpected objects go unnoticed when attention is focused elsewhere (inattentional blindness). Typically, some people fail to notice unexpected objects while others detect them instantaneously. Whether this pattern reflects stable individual differences is unclear to date. In particular, hardly anything is known about the influence of personality on the likelihood of inattentional blindness. To fill this empirical gap, we examined the role of multiple personality factors, namely the Big Five, BIS/BAS, absorption, achievement motivation, and schizotypy, in these failures of awareness. In a large-scale sample (N = 554), susceptibility to inattentional blindness was associated with a low level of openness to experience and marginally with a low level of achievement motivation. However, in a multiple regression analysis, only openness emerged as an independent, negative predictor. This suggests that the general tendency to be open to experience extends to the domain of perception. Our results complement earlier work on the possible link between inattentional blindness and personality by demonstrating, for the first time, that failures to consciously perceive unexpected objects reflect individual differences on a fundamental dimension of personality. PMID:26011567
Laboratory evidence of MTBE biodegradation in Borden aquifer material
NASA Astrophysics Data System (ADS)
Schirmer, Mario; Butler, Barbara J.; Church, Clinton D.; Barker, James F.; Nadarajah, Nalina
2003-02-01
Mainly due to intrinsic biodegradation, monitored natural attenuation can be an effective and inexpensive remediation strategy at petroleum release sites. However, gasoline additives such as methyl tert-butyl ether (MTBE) can jeopardize this strategy because these compounds often degrade, if at all, at a slower rate than the benzene, toluene, ethylbenzene, and xylene (BTEX) compounds collectively. Investigation of whether a compound degrades under certain conditions, and at what rate, is therefore important to the assessment of the intrinsic remediation potential of aquifers. A natural gradient experiment with dissolved MTBE-containing gasoline in the shallow, aerobic sand aquifer at Canadian Forces Base (CFB) Borden (Ontario, Canada) from 1988 to 1996 suggested that biodegradation was the main cause of attenuation for MTBE within the aquifer. This laboratory study demonstrates biologically catalyzed MTBE degradation in Borden aquifer-like environments, and so supports the idea that attenuation due to biodegradation may have occurred in the natural gradient experiment. In an experiment with batch microcosms of aquifer material, three of the microcosms ultimately degraded MTBE to below detection, although this required more than 189 days (or >300 days in one case). Failure to detect the daughter product tert-butyl alcohol (TBA) in the field and the batch experiments could be because TBA was more readily degradable than MTBE under Borden conditions.
The Artful Dodger: Answering the Wrong Question the Right Way
ERIC Educational Resources Information Center
Rogers, Todd; Norton, Michael I.
2011-01-01
What happens when speakers try to "dodge" a question they would rather not answer by answering a different question? In 4 studies, we show that listeners can fail to detect dodges when speakers answer similar--but objectively incorrect--questions (the "artful dodge"), a detection failure that goes hand-in-hand with a failure to rate dodgers more…
A detailed description of the sequential probability ratio test for 2-IMU FDI
NASA Technical Reports Server (NTRS)
Rich, T. M.
1976-01-01
The sequential probability ratio test (SPRT) for 2-IMU FDI (inertial measuring unit failure detection/isolation) is described. The SPRT is a statistical technique for detecting and isolating soft IMU failures originally developed for the strapdown inertial reference unit. The flowchart of a subroutine incorporating the 2-IMU SPRT is included.
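The SPRT referenced here is Wald's classical sequential test. A generic sketch for detecting a mean shift in Gaussian sensor residuals (not the report's specific 2-IMU parity formulation) looks like this:

```python
import math
import random

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test on Gaussian residuals:
    H0 (no failure, mean mu0) vs H1 (soft failure, mean mu1).
    Returns (decision, number_of_samples_used)."""
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood-ratio increment for one Gaussian observation
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma ** 2
        if llr >= upper:
            return "failure", n
        if llr <= lower:
            return "no-failure", n
    return "undecided", len(samples)

rng = random.Random(42)
healthy = [rng.gauss(0.0, 1.0) for _ in range(500)]  # unbiased residuals
failed = [rng.gauss(1.0, 1.0) for _ in range(500)]   # residuals with a soft bias
decision_h, _ = sprt(healthy, alpha=1e-4, beta=1e-4)
decision_f, _ = sprt(failed, alpha=1e-4, beta=1e-4)
```

The appeal for soft-failure detection is that the test accumulates weak evidence over many samples and stops as soon as either error bound (alpha, beta) can be guaranteed, rather than thresholding any single measurement.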
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brett Emery Trabun; Gamage, Thoshitha Thanushka; Bakken, David Edward
This disclosure describes, in part, a system management component and failure detection component for use in a power grid data network to identify anomalies within the network and systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, desired latency, minimum acceptable latency and a priority for each subscription. The failure detection component may identify an anomaly within the network and a source of the anomaly. Based on the identified anomaly, data rates and/or data paths may be adjusted in real-time to ensure that the power grid data network does not become overloaded and/or fail.
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1986-01-01
A simplified simulation of a hypothetical turbofan engine with multivariable control and sensor failure detection, isolation, and accommodation logic (HYTESS II) is presented. The digital program, written in FORTRAN, is self-contained, efficient, realistic and easily used. Simulated engine dynamics were developed from linearized operating point models; however, essential nonlinear effects are retained. The simulation is representative of a hypothetical low-bypass-ratio turbofan engine with advanced control and failure detection logic. Included is a description of the engine dynamics, the control algorithm, and the sensor failure detection logic. Details of the simulation including block diagrams, variable descriptions, common block definitions, subroutine descriptions, and input requirements are given. Example simulation results are also presented.
A solenoid failure detection system for cold gas attitude control jet valves
NASA Technical Reports Server (NTRS)
Johnston, P. A.
1970-01-01
The development of a solenoid valve failure detection system is described. The technique requires the addition of a radioactive gas to the propellant of a cold gas jet attitude control system. Solenoid failure is detected with an avalanche radiation detector located in the jet nozzle which senses the radiation emitted by the leaking radioactive gas. Measurements of carbon monoxide leakage rates through a Mariner type solenoid valve are presented as a function of gas activity and detector configuration. A cylindrical avalanche detector with a factor of 40 improvement in leak sensitivity is proposed for flight systems because it allows the quantity of radioactive gas that must be added to the propellant to be reduced to a practical level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rimner, Andreas, E-mail: rimnera@mskcc.org; Spratt, Daniel E.; Zauderer, Marjorie G.
Purpose: We previously reported our technique for delivering intensity modulated radiation therapy (IMRT) to the entire pleura while attempting to spare the lung in patients with malignant pleural mesothelioma (MPM). Herein, we report a detailed pattern-of-failure analysis in patients with MPM who were unresectable or underwent pleurectomy/decortication (P/D), uniformly treated with hemithoracic pleural IMRT. Methods and Materials: Sixty-seven patients with MPM were treated with definitive or adjuvant hemithoracic pleural IMRT between November 2004 and May 2013. Pretreatment imaging, treatment plans, and posttreatment imaging were retrospectively reviewed to determine failure location(s). Failures were categorized as in-field (within the 90% isodose line), marginal (<90% and ≥50% isodose lines), out-of-field (outside the 50% isodose line), or distant. Results: The median follow-up was 24 months from diagnosis and the median time to in-field local failure from the end of RT was 10 months. Forty-three in-field local failures (64%) were found with a 1- and 2-year actuarial failure rate of 56% and 74%, respectively. For patients who underwent P/D versus those who received a partial pleurectomy or were deemed unresectable, the median time to in-field local failure was 14 months versus 6 months, respectively, with 1- and 2-year actuarial in-field local failure rates of 43% and 60% versus 66% and 83%, respectively (P=.03). There were 13 marginal failures (19%). Five of the marginal failures (38%) were located within the costomediastinal recess. Marginal failures decreased with increasing institutional experience (P=.04). Twenty-five patients (37%) had out-of-field failures. Distant failures occurred in 32 patients (48%). Conclusions: After hemithoracic pleural IMRT, local failure remains the dominant form of failure pattern.
Patients treated with adjuvant hemithoracic pleural IMRT after P/D experience a significantly longer time to local and distant failure than patients treated with definitive pleural IMRT. Increasing experience and improvement in target delineation minimize the incidence of avoidable marginal failures.
Koen, Joshua D.; Aly, Mariam; Wang, Wei-Chun; Yonelinas, Andrew P.
2013-01-01
A prominent finding in recognition memory is that studied items are associated with more variability in memory strength than new items. Here, we test three competing theories for why this occurs - the encoding variability, attention failure, and recollection accounts. Distinguishing amongst these theories is critical because each provides a fundamentally different account of the processes underlying recognition memory. The encoding variability and attention failure accounts propose that old item variance will be unaffected by retrieval manipulations because the processes producing this effect are ascribed to encoding. The recollection account predicts that both encoding and retrieval manipulations that preferentially affect recollection will affect memory variability. These contrasting predictions were tested by examining the effect of response speeding (Experiment 1), dividing attention at retrieval (Experiment 2), context reinstatement (Experiment 3), and increased test delay (Experiment 4) on recognition performance. The results of all four experiments confirmed the predictions of the recollection account, and were inconsistent with the encoding variability account. The evidence supporting the attention failure account was mixed, with two of the four experiments confirming the account and two disconfirming the account. These results indicate that encoding variability and attention failure are insufficient accounts of memory variance, and provide support for the recollection account. Several alternative theoretical accounts of the results are also considered. PMID:23834057
Failure detection and isolation analysis of a redundant strapdown inertial measurement unit
NASA Technical Reports Server (NTRS)
Motyka, P.; Landey, M.; Mckern, R.
1981-01-01
The objective of this study was to define and develop failure detection and isolation (FDI) algorithms for a dual fail/operational redundant strapdown inertial navigation system. The FDI techniques chosen include provisions for hard and soft failure detection in the context of flight control and navigation. Analyses were done to determine error detection and switching levels for the inertial navigation system, which is intended for a conventional takeoff or landing (CTOL) operating environment. In addition, investigations of false alarms and missed alarms were included for the FDI techniques developed, along with the analyses of filters to be used in conjunction with FDI processing. Two specific FDI algorithms were compared: the generalized likelihood test and the edge vector test. A deterministic digital computer simulation was used to compare and evaluate the algorithms and FDI systems.
Wang, Junhua; Sun, Shuaiyi; Fang, Shouen; Fu, Ting; Stipancic, Joshua
2017-02-01
This paper aims both to identify the factors affecting driver drowsiness and to develop a real-time drowsy driving probability model based on virtual Location-Based Services (LBS) data obtained using a driving simulator. A driving simulation experiment was designed and conducted using 32 participant drivers. Collected data included the continuous driving time before detection of drowsiness and virtual LBS data related to temperature, time of day, lane width, average travel speed, driving time in heavy traffic, and driving time on different roadway types. Demographic information, such as nap habit, age, gender, and driving experience was also collected through questionnaires distributed to the participants. An Accelerated Failure Time (AFT) model was developed to estimate the driving time before detection of drowsiness. The results of the AFT model showed driving time before drowsiness was longer during the day than at night, and was longer at lower temperatures. Additionally, drivers who identified as having a nap habit were more vulnerable to drowsiness. Generally, higher average travel speeds were correlated to a higher risk of drowsy driving, as were longer periods of low-speed driving in traffic jam conditions. Considering different road types, drivers felt drowsy more quickly on freeways compared to other facilities. The proposed model provides a better understanding of how driver drowsiness is influenced by different environmental and demographic factors. The model can be used to provide real-time data for the LBS-based drowsy driving warning system, improving on past methods based only on a fixed driving time. Copyright © 2016 Elsevier Ltd. All rights reserved.
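As a rough illustration of how an AFT model maps covariates to a predicted time before drowsiness, the sketch below assumes a Weibull AFT form with purely hypothetical coefficients: the signs follow the directions reported in the abstract (night driving, higher temperature, and a nap habit all shorten the time), but the magnitudes are invented.

```python
import math

# Hypothetical Weibull AFT coefficients (illustrative only, NOT the study's
# fitted values): log(T) = intercept + b_night*night + b_high_temp*high_temp
#                          + b_nap*nap_habit + sigma * W
beta = {"intercept": 4.0, "night": -0.3, "high_temp": -0.2, "nap": -0.25}

def median_minutes_to_drowsiness(night, high_temp, nap_habit, sigma=0.5):
    """Median survival time of a Weibull AFT model: exp(x.beta) * (ln 2)**sigma,
    using the parameterization S(t) = exp(-(t/exp(x.beta))**(1/sigma))."""
    lp = (beta["intercept"] + beta["night"] * night
          + beta["high_temp"] * high_temp + beta["nap"] * nap_habit)
    return math.exp(lp) * math.log(2) ** sigma

day_driver = median_minutes_to_drowsiness(night=0, high_temp=0, nap_habit=0)
night_napper = median_minutes_to_drowsiness(night=1, high_temp=0, nap_habit=1)
```

In the AFT framing, a covariate multiplicatively accelerates or decelerates the whole time-to-event distribution, which is what makes it convenient for a real-time warning system: the same linear predictor rescales every quantile, not just the median.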
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rhee, Seung; Spencer, Cherrill; /Stanford U. /SLAC
2009-01-23
Failure occurs when one or more of the intended functions of a product are no longer fulfilled to the customer's satisfaction. The most critical product failures are those that escape design reviews and in-house quality inspection and are found by the customer. The product may work for a while until its performance degrades to an unacceptable level or it may have not worked even before the customer took possession of the product. The end results of failures which may lead to unsafe conditions or major losses of the main function are rated high in severity. Failure Modes and Effects Analysis (FMEA) is a tool widely used in the automotive, aerospace, and electronics industries to identify, prioritize, and eliminate known potential failures, problems, and errors from systems under design, before the product is released (Stamatis, 1997). Several industrial FMEA standards such as those published by the Society of Automotive Engineers, US Department of Defense, and the Automotive Industry Action Group employ the Risk Priority Number (RPN) to measure risk and severity of failures. The Risk Priority Number (RPN) is a product of 3 indices: Occurrence (O), Severity (S), and Detection (D). In a traditional FMEA process design engineers typically analyze the 'root cause' and 'end-effects' of potential failures in a sub-system or component and assign penalty points through the O, S, D values to each failure. The analysis is organized around categories called failure modes, which link the causes and effects of failures. A few actions are taken upon completing the FMEA worksheet. The RPN column generally will identify the high-risk areas. The idea of performing FMEA is to eliminate or reduce known and potential failures before they reach the customers. Thus, a plan of action must be in place for the next task. Not all failures can be resolved during the product development cycle, thus prioritization of actions must be made within the design group.
One definition of detection difficulty (D) is how well the organization controls the development process. Another definition relates to the detectability of a particular failure in the product when it is in the hands of the customer. The former asks 'What is the chance of catching the problem before we give it to the customer'? The latter asks 'What is the chance of the customer catching the problem before the problem results in a catastrophic failure?' (Palady, 1995) These differing definitions confuse the FMEA users when one tries to determine detection difficulty. Are we trying to measure how easy it is to detect where a failure has occurred or when it has occurred? Or are we trying to measure how easy or difficult it is to prevent failures? Ordinal scale variables are used to rank-order industries such as, hotels, restaurants, and movies (Note that a 4 star hotel is not necessarily twice as good as a 2 star hotel). Ordinal values preserve rank in a group of items, but the distance between the values cannot be measured since a distance function does not exist. Thus, the product or sum of ordinal variables loses its rank since each parameter has different scales. The RPN is a product of 3 independent ordinal variables, it can indicate that some failure types are 'worse' than others, but give no quantitative indication of their relative effects. To resolve the ambiguity of measuring detection difficulty and the irrational logic of multiplying 3 ordinal indices, a new methodology was created to overcome these shortcomings, Life Cost-Based FMEA. Life Cost-Based FMEA measures failure/risk in terms of monetary cost. Cost is a universal parameter that can be easily related to severity by engineers and others. 
Thus, failure cost can be estimated in its simplest form as: Expected Failure Cost = Σ_{i=1}^{n} p_i c_i, where p_i is the probability of a particular failure occurring, c_i is the monetary cost associated with that failure, and n is the total number of failure scenarios. FMEA is most effective when there are inputs into it from all concerned disciplines of the product development team. However, FMEA is a long process and can become tedious and won't be effective if too many people participate. An ideal team should have 3 to 4 people from the design, manufacturing, and service departments if possible. Depending on how complex the system is, the entire process can take anywhere from one to four weeks working full time. Thus, it is important to agree to the time commitment before starting the analysis; otherwise, anxious managers might stop the procedure before it is completed.
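The two scoring approaches contrasted above can be put side by side in a few lines. The failure scenarios and index values below are invented for illustration:

```python
# Traditional RPN vs. Life Cost-Based FMEA. All numbers are hypothetical.

def rpn(occurrence, severity, detection):
    """Traditional Risk Priority Number: the product of three ordinal
    indices. It rank-orders failures but has no physical units."""
    return occurrence * severity * detection

def expected_failure_cost(scenarios):
    """Life Cost-Based FMEA: sum over scenarios of p_i * c_i,
    i.e. probability of a failure times its monetary cost."""
    return sum(p * c for p, c in scenarios)

# Hypothetical failure scenarios: (probability, cost in dollars)
scenarios = [(0.05, 2000.0), (0.001, 150000.0), (0.20, 50.0)]

cost = expected_failure_cost(scenarios)  # ≈ 100 + 150 + 10 = 260 dollars
score = rpn(3, 8, 6)                     # 144 — an ordinal product, not a cost
```

The contrast is the abstract's point: multiplying the ordinal O, S, D scales yields a rank with no distance interpretation, whereas the expected-cost form puts every failure on one monetary scale that engineers and managers can compare directly.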
Emotional reactivity and awareness of task performance in Alzheimer's disease.
Mograbi, Daniel C; Brown, Richard G; Salas, Christian; Morris, Robin G
2012-07-01
Lack of awareness about performance in tasks is a common feature of Alzheimer's disease. Nevertheless, clinical anecdotes have suggested that patients may show emotional or behavioural responses to the experience of failure despite reporting limited awareness, an aspect which has been little explored experimentally. The current study investigated emotional reactions to success or failure in tasks despite unawareness of performance in Alzheimer's disease. For this purpose, novel computerised tasks which expose participants to systematic success or failure were used in a group of Alzheimer's disease patients (n=23) and age-matched controls (n=21). Two experiments, the first with reaction time tasks and the second with memory tasks, were carried out, and in each experiment two parallel tasks were used, one in a success condition and one in a failure condition. Awareness of performance was measured comparing participant estimations of performance with actual performance. Emotional reactivity was assessed with a self-report questionnaire and rating of filmed facial expressions. In both experiments the results indicated that, relative to controls, Alzheimer's disease patients exhibited impaired awareness of performance, but comparable differential reactivity to failure relative to success tasks, both in terms of self-report and facial expressions. This suggests that affective valence of failure experience is processed despite unawareness of task performance, which might indicate implicit processing of information in neural pathways bypassing awareness. Copyright © 2012 Elsevier Ltd. All rights reserved.
Soverini, Simona; De Benedittis, Caterina; Castagnetti, Fausto; Gugliotta, Gabriele; Mancini, Manuela; Bavaro, Luana; Machova Polakova, Katerina; Linhartova, Jana; Iurlo, Alessandra; Russo, Domenico; Pane, Fabrizio; Saglio, Giuseppe; Rosti, Gianantonio; Cavo, Michele; Baccarani, Michele; Martinelli, Giovanni
2016-08-02
Imatinib-resistant chronic myeloid leukemia (CML) patients receiving second-line tyrosine kinase inhibitor (TKI) therapy with dasatinib or nilotinib have a higher risk of disease relapse and progression, and not infrequently BCR-ABL1 kinase domain (KD) mutations are implicated in therapeutic failure. In this setting, earlier detection of emerging BCR-ABL1 KD mutations would offer greater chances of efficacy for subsequent salvage therapy and limit the biological consequences of full BCR-ABL1 kinase reactivation. Taking advantage of an already set up and validated next-generation deep amplicon sequencing (DS) assay, we aimed to assess whether DS may allow a larger window of detection of emerging BCR-ABL1 KD mutants predicting for an impending relapse. A total of 125 longitudinal samples from 51 CML patients who had acquired dasatinib- or nilotinib-resistant mutations during second-line therapy were analyzed by DS from the time of failure and mutation detection by conventional sequencing backwards. BCR-ABL1/ABL1%(IS) transcript levels were used to define whether the patient had 'optimal response', 'warning' or 'failure' at the time of first mutation detection by DS. DS was able to backtrack dasatinib- or nilotinib-resistant mutations to the previous sample(s) in 23/51 (45 %) patients. Median mutation burden at the time of first detection by DS was 5.5 % (range, 1.5-17.5 %); median interval between detection by DS and detection by conventional sequencing was 3 months (range, 1-9 months). In 5 cases, the mutations were detectable at baseline. In the remaining cases, response level at the time mutations were first detected by DS could be defined as 'Warning' (according to the 2013 ELN definitions of response to 2nd-line therapy) in 13 cases, as 'Optimal response' in one case, as 'Failure' in 4 cases.
No dasatinib- or nilotinib-resistant mutations were detected by DS in 15 randomly selected patients with 'warning' at various timepoints, that later turned into optimal responders with no treatment changes. DS enables a larger window of detection of emerging BCR-ABL1 KD mutations predicting for an impending relapse. A 'Warning' response may represent a rational trigger, besides 'Failure', for DS-based mutation screening in CML patients undergoing second-line TKI therapy.
Independent Orbiter Assessment (IOA): Analysis of the Orbiter Experiment (OEX) subsystem
NASA Technical Reports Server (NTRS)
Compton, J. M.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Experiments hardware. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. The Orbiter Experiments (OEX) Program consists of a multiple set of experiments for the purpose of gathering environmental and aerodynamic data to develop more accurate ground models for Shuttle performance and to facilitate the design of future spacecraft. This assessment only addresses currently manifested experiments and their support systems. Specifically this list consists of: Shuttle Entry Air Data System (SEADS); Shuttle Upper Atmosphere Mass Spectrometer (SUMS); Forward Fuselage Support System for OEX (FFSSO); Shuttle Infrared Leeside Temperature Sensing (SILTS); Aerodynamic Coefficient Identification Package (ACIP); and Support System for OEX (SSO). There are only two potential critical items for the OEX, since the experiments only gather data for analysis post mission and are totally independent systems except for power. Failure of any experiment component usually only causes a loss of experiment data and in no way jeopardizes the crew or mission.
A new test apparatus for studying the failure process during loading experiments of snow
NASA Astrophysics Data System (ADS)
Capelli, Achille; Reiweger, Ingrid; Schweizer, Jürg
2016-04-01
We developed a new apparatus for fully load-controlled snow failure experiments. The deformation and applied load are measured with two displacement and two force sensors, respectively. The loading experiments are recorded with a high-speed camera, and the local strain is derived by a particle image velocimetry (PIV) algorithm. To monitor the progressive failure process within the snow sample, our apparatus includes six piezoelectric transducers that record the acoustic emissions in the ultrasonic range. The six sensors allow localizing the sources of the acoustic emissions, i.e. where the failure process starts and how it develops with time towards catastrophic failure. The square snow samples have a side length of 50 cm and a height of 10 to 20 cm. With an area of 0.25 m² they are clearly larger than samples used in previous experiments. The size of the samples, which is comparable to the critical size for the onset of crack propagation leading to dry-snow slab avalanche release, allows studying the failure nucleation process and its relation to the spatial distribution of the recorded acoustic emissions. Furthermore, the occurrence of features in the acoustic emissions typical for imminent failure of the samples can be analysed. We present preliminary results of the acoustic emissions recorded during tests with homogeneous as well as layered snow samples, including a weak layer, for varying loading rates and loading angles.
Failure mode and effects analysis outputs: are they valid?
Shebl, Nada Atef; Franklin, Bryony Dean; Barber, Nick
2012-06-10
Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: Face validity: by comparing the FMEA participants' mapped processes with observational work. Content validity: by presenting the FMEA findings to other healthcare professionals. Criterion validity: by comparing the FMEA findings with data reported on the trust's incident report database. Construct validity: by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number. Face validity was positive as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. 
As for FMEA's methodology for scoring failures, there were discrepancies between the teams' estimates and similar incidents reported on the trust's incident database. Furthermore, the concept of multiplying ordinal scales to prioritise failures is mathematically flawed. Until FMEA's validity is further explored, healthcare organisations should not solely depend on their FMEA results to prioritise patient safety issues.
NASA Astrophysics Data System (ADS)
Ortuño, María; Guinau, Marta; Calvet, Jaume; Furdada, Glòria; Bordonau, Jaume; Ruiz, Antonio; Camafort, Miquel
2017-10-01
Slope failures have been traditionally detected by field inspection and aerial-photo interpretation. These approaches are generally insufficient to identify subtle landforms, especially those generated during the early stages of failures, and particularly where the site is located in forested and remote terrains. We present the identification and characterization of several large and medium size slope failures previously undetected within the Orri massif, Central Pyrenees. Around 130 scarps were interpreted as being part of Rock Slope Failures (RSFs), while other smaller and more superficial failures were interpreted as complex movements combining colluvium slow flow/slope creep and RSFs. Except for one of them, these slope failures had not been previously detected, although they extend across 15% of the studied region. The failures were identified through the analysis of a high-resolution (1 m) LIDAR-derived bare earth Digital Elevation Model (DEM). Most of the scarps are undetectable either by fieldwork, photo interpretation or 5 m resolution topography analysis owing to their small heights (0.5 to 2 m) and their location within forest areas. In many cases, these landforms are not evident in the field due to the presence of other minor irregularities in the slope and the lack of open views due to the forest. 2D and 3D visualization of hillshade maps with different sun azimuths provided an overall picture of the scarp assemblage and permitted a more complete analysis of the geometry of the scarps with respect to the slope and the structural fabric. The sharpness of some of the landforms suggests ongoing activity, which should be explored in future detailed studies in order to assess potential hazards affecting the Portainé ski resort.
Our results reveal that close analysis of the 1 m LIDAR-derived DEM can significantly help to detect early-stage slope deformations in high mountain regions, and that expert judgment of the DEM is essential when dealing with subtle landforms. The incorporation of this approach in regional mapping represents a great advance in completing the catalogue of slope failures and will eventually contribute to a better understanding of the spatial factors controlling them.
System for Anomaly and Failure Detection (SAFD) system development
NASA Technical Reports Server (NTRS)
Oreilly, D.
1992-01-01
This task specified developing the hardware and software necessary to implement the System for Anomaly and Failure Detection (SAFD) algorithm, developed under Technology Test Bed (TTB) Task 21, on the TTB engine stand. This effort involved building two units: one unit to be installed in the Block II Space Shuttle Main Engine (SSME) Hardware Simulation Lab (HSL) at Marshall Space Flight Center (MSFC), and one unit to be installed at the TTB engine stand. Rocketdyne personnel from the HSL performed the task. The SAFD algorithm was developed as an improvement over the current redline system used in the Space Shuttle Main Engine Controller (SSMEC). Simulation tests and execution against previous hot fire tests demonstrated that the SAFD algorithm can detect engine failure as much as tens of seconds before the redline system recognized the failure. Although the current algorithm only operates during steady state conditions (engine not throttling), work is underway to expand the algorithm to work during transient conditions.
Ferrographic and spectrometer oil analysis from a failed gas turbine engine
NASA Technical Reports Server (NTRS)
Jones, W. R., Jr.
1983-01-01
An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. It was concluded that a severe surge may have caused interference between rotating and stationary compressor parts that either directly or indirectly ignited the titanium components. Several engine oil samples (before and after the failure) were analyzed with a Ferrograph, and with plasma, atomic absorption, and emission spectrometers to see if this information would aid in the engine failure diagnosis. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism nor a high level of wear debris was detected in the engine oil sample taken just prior to the test in which the failure occurred. However, low concentrations (0.2 to 0.5 ppm) of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations (2 ppm) were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure. The oil analyses eliminated a lubrication system bearing or shaft seal failure as the cause of the engine failure. Previously announced in STAR as N83-12433
Triaxial testing of Lopez Fault gouge at 150 MPa mean effective stress
Scott, D.R.; Lockner, D.A.; Byerlee, J.D.; Sammis, C.G.
1994-01-01
Triaxial compression experiments were performed on samples of natural granular fault gouge from the Lopez Fault in Southern California. This material consists primarily of quartz and has a self-similar grain size distribution thought to result from natural cataclasis. The experiments were performed at a constant mean effective stress of 150 MPa, to expose the volumetric strains associated with shear failure. The failure strength is parameterized by the coefficient of internal friction ??, based on the Mohr-Coulomb failure criterion. Samples of remoulded Lopez gouge have internal friction ??=0.6??0.02. In experiments where the ends of the sample are constrained to remain axially aligned, suppressing strain localisation, the sample compacts before failure and dilates persistently after failure. In experiments where one end of the sample is free to move laterally, the strain localises to a single oblique fault at around the point of failure; some dilation occurs but does not persist. A comparison of these experiments suggests that dilation is confined to the region of shear localisation in a sample. Overconsolidated samples have slightly larger failure strengths than normally consolidated samples, and smaller axial strains are required to cause failure. A large amount of dilation occurs after failure in heavily overconsolidated samples, suggesting that dilation is occurring throughout the sample. Undisturbed samples of Lopez gouge, cored from the outcrop, have internal friction in the range ??=0.4-0.6; the upper end of this range corresponds to the value established for remoulded Lopez gouge. Some kind of natural heterogeneity within the undisturbed samples is probably responsible for their low, variable strength. In samples of simulated gouge, with a more uniform grain size, active cataclasis during axial loading leads to large amounts of compaction. 
Larger axial strains are required to cause failure in simulated gouge, but the failure strength is similar to that of natural Lopez gouge. Use of the Mohr-Coulomb failure criterion to interpret the results from this study, and other recent studies on intact rock and granular gouge, leads to values of μ that depend on the loading configuration and the intact or granular state of the sample. Conceptual models are advanced to account for these discrepancies. The consequences for strain-weakening of natural faults are also discussed. © 1994 Birkhäuser Verlag.
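The Mohr-Coulomb criterion used throughout the study can be illustrated with a short sketch: given normal and shear stresses on the failure plane at several failure states, the coefficient of internal friction μ is the slope of the linear fit τ = c + μσn. The data below are illustrative values, not measurements from the paper.

```python
import numpy as np

# Illustrative (sigma_n, tau) failure states in MPa -- not data from the study.
sigma_n = np.array([100.0, 150.0, 200.0, 250.0])
tau = 5.0 + 0.6 * sigma_n  # synthetic points with cohesion c = 5 MPa, mu = 0.6

# Mohr-Coulomb: tau = c + mu * sigma_n; fit slope (mu) and intercept (c).
mu, c = np.polyfit(sigma_n, tau, 1)
print(round(mu, 3), round(c, 3))  # recovers mu = 0.6, c = 5.0
```

In practice the fitted μ would come from a suite of triaxial tests at different mean stresses; the fit here simply recovers the synthetic inputs.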
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
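A minimal sketch of the idea, not the authors' algorithm: fit a Gaussian process to a few evaluations of an expensive limit-state function g, then pick the next sample where the GP is least certain about the sign of g. The |mean|/std acquisition score below is a common choice from the surrogate-based reliability literature and is an assumption, not necessarily the paper's design criterion; the 1-D test function is also illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g(x):
    # Stand-in "expensive" limit-state function; failure where g(x) < 0.
    return 1.5 - x

X_train = np.array([[0.0], [1.0], [3.0]])
y_train = g(X_train).ravel()

# Fixed kernel (optimizer=None) so the sketch stays deterministic.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              alpha=1e-8, optimizer=None)
gp.fit(X_train, y_train)

# Candidate pool; choose the point whose sign of g is least certain:
# a small |mean|/std means the GP cannot tell failure from safety there.
X_cand = np.linspace(0.0, 3.0, 31).reshape(-1, 1)
mean, std = gp.predict(X_cand, return_std=True)
score = np.abs(mean) / (std + 1e-12)
x_next = X_cand[np.argmin(score), 0]
print(x_next)  # lands between the two training points straddling g = 0
```

Iterating this loop concentrates the expensive evaluations near the failure boundary, which is what makes the surrogate cheap to refine.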
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katti, Amogh; Di Fatta, Giuseppe; Naughton III, Thomas J
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus.
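The logarithmic scaling of gossip dissemination can be illustrated with a toy push-gossip simulation. This is not the paper's algorithms, which must also tolerate faulty gossipers and reach agreed consensus; it only shows why knowledge of a failed-process list spreads in roughly log(n) cycles.

```python
import random

def gossip_consensus(n, failed, seed=1):
    """Simulate push-gossip dissemination of a failed-process list.

    Each cycle, every process pushes its current set of known failures
    to one random peer. Returns the number of cycles until every
    process holds the full list (global consensus in this toy model).
    """
    rng = random.Random(seed)
    known = [set() for _ in range(n)]
    known[0] = set(failed)          # one process detects the failures first
    cycles = 0
    while any(k != set(failed) for k in known):
        cycles += 1
        for i in range(n):
            j = rng.randrange(n)
            known[j] |= known[i]    # push local knowledge to a random peer
    return cycles

print(gossip_consensus(64, failed={7, 42}))
```

Doubling n should add only a few cycles, mirroring the logarithmic scaling reported in the abstract.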
Decrease the Number of Glovebox Glove Breaches and Failures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurtle, Jackie C.
2013-12-24
Los Alamos National Laboratory (LANL) is committed to the protection of the workers, public, and environment while performing work and uses gloveboxes as engineered controls to protect workers from exposure to hazardous materials while performing plutonium operations. Glovebox gloves are a weak link in the engineered controls and are a major cause of radiation contamination events which can result in potential worker exposure and localized contamination making operational areas off-limits and putting programmatic work on hold. Each day of lost opportunity at Technical Area (TA) 55, Plutonium Facility (PF) 4 is estimated at $1.36 million. Between July 2011 and June 2013, TA-55-PF-4 had 65 glovebox glove breaches and failures with an average of 2.7 per month. The glovebox work follows the five step safety process promoted at LANL with a decision diamond interjected for whether or not a glove breach or failure event occurred in the course of performing glovebox work. In the event that no glove breach or failure is detected, there is an additional decision for whether or not contamination is detected. In the event that contamination is detected, the possibility for a glove breach or failure event is revisited.
Imran, Muhammad; Zafar, Nazir Ahmad
2012-01-01
Maintaining inter-actor connectivity is extremely crucial in mission-critical applications of Wireless Sensor and Actor Networks (WSANs), as actors have to quickly plan optimal coordinated responses to detected events. Failure of a critical actor partitions the inter-actor network into disjoint segments besides leaving a coverage hole, and thus hinders the network operation. This paper presents a Partitioning detection and Connectivity Restoration (PCR) algorithm to tolerate critical actor failure. As part of pre-failure planning, PCR determines critical/non-critical actors based on localized information and designates each critical node with an appropriate backup (preferably non-critical). The pre-designated backup detects the failure of its primary actor and initiates a post-failure recovery process that may involve coordinated multi-actor relocation. To prove the correctness, we construct a formal specification of PCR using Z notation. We model WSAN topology as a dynamic graph and transform PCR to corresponding formal specification using Z notation. Formal specification is analyzed and validated using the Z Eves tool. Moreover, we simulate the specification to quantitatively analyze the efficiency of PCR. Simulation results confirm the effectiveness of PCR and show that it outperforms contemporary schemes found in the literature.
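The pre-failure step of identifying critical actors can be sketched with a standard cut-vertex search. Note this is a global DFS for illustration, whereas PCR determines criticality from localized information only; the example network is hypothetical.

```python
def critical_actors(adj):
    """Find cut vertices (critical actors) of a connected network via DFS.

    adj: dict mapping node -> list of neighbours.
    A critical actor's failure partitions the network into disjoint segments.
    """
    disc, low, critical = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]; timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge to an ancestor
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    critical.add(u)            # subtree at v needs u to reach the rest
        if parent is None and children > 1:
            critical.add(u)                    # root with multiple DFS subtrees

    dfs(next(iter(adj)), None)
    return critical

# Actors 0-1-2 form a cycle (non-critical); actor 2 also bridges to actor 3.
net = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(critical_actors(net))  # {2}
```

Each critical actor found this way would then be assigned a (preferably non-critical) backup before any failure occurs.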
Inductive System Monitors Tasks
NASA Technical Reports Server (NTRS)
2008-01-01
The Inductive Monitoring System (IMS) software developed at Ames Research Center uses artificial intelligence and data mining techniques to build system-monitoring knowledge bases from archived or simulated sensor data. This information is then used to detect unusual or anomalous behavior that may indicate an impending system failure. Currently helping analyze data from systems that help fly and maintain the space shuttle and the International Space Station (ISS), the IMS derives classes of nominal system behavior from the archived data; these data classes are then used to build a monitoring knowledge base. In real time, IMS performs monitoring functions: determining and displaying the degree of deviation from nominal performance. IMS trend analyses can detect conditions that may indicate a failure or required system maintenance. The development of IMS was motivated by the difficulty of producing detailed diagnostic models of some system components due to complexity or unavailability of design information. Successful applications have ranged from real-time monitoring of aircraft engine and control systems to anomaly detection in space shuttle and ISS data. IMS was used on shuttle missions STS-121, STS-115, and STS-116 to search the Wing Leading Edge Impact Detection System (WLEIDS) data for signs of possible damaging impacts during launch. It independently verified findings of the WLEIDS Mission Evaluation Room (MER) analysts and indicated additional points of interest that were subsequently investigated by the MER team. In support of the Exploration Systems Mission Directorate, IMS is being deployed as an anomaly detection tool on ISS mission control consoles in the Johnson Space Center Mission Operations Directorate. IMS has been trained to detect faults in the ISS Control Moment Gyroscope (CMG) systems. In laboratory tests, it has already detected several minor anomalies in real-time CMG data.
When tested on archived data, IMS was able to detect precursors of the CMG1 failure nearly 15 hours in advance of the actual failure event. In the Aeronautics Research Mission Directorate, IMS successfully performed real-time engine health analysis. IMS was able to detect simulated failures and actual engine anomalies in an F/A-18 aircraft during the course of 25 test flights. IMS is also being used in colla
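The core IMS idea, building a knowledge base of nominal behavior and scoring live data by its distance from that base, can be sketched crudely. IMS clusters its nominal data; the version below just archives nominal vectors and uses nearest-neighbour distance as the degree of deviation. The sensor values are hypothetical.

```python
import numpy as np

def deviation(sample, knowledge_base):
    """Degree of deviation from nominal performance: Euclidean distance to
    the nearest archived nominal data vector (a crude stand-in for IMS's
    clustered knowledge base)."""
    kb = np.asarray(knowledge_base, dtype=float)
    return float(np.min(np.linalg.norm(kb - np.asarray(sample, float), axis=1)))

# "Training": archive nominal sensor vectors, e.g. (temperature, pressure).
nominal = [[20.0, 1.00], [21.0, 1.01], [19.5, 0.99], [20.5, 1.02]]

ok = deviation([20.2, 1.00], nominal)    # close to archived behaviour
bad = deviation([35.0, 0.60], nominal)   # far from anything seen in training
print(ok, bad)
```

A real deployment would threshold and trend this score over time, which is how precursors of a failure like the CMG1 event could surface hours in advance.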
Managing heart failure in the long-term care setting: nurses' experiences in Ontario, Canada.
Strachan, Patricia H; Kaasalainen, Sharon; Horton, Amy; Jarman, Hellen; D'Elia, Teresa; Van Der Horst, Mary-Lou; Newhouse, Ian; Kelley, Mary Lou; McAiney, Carrie; McKelvie, Robert; Heckman, George A
2014-01-01
Implementation of heart failure guidelines in long-term care (LTC) settings is challenging. Understanding the conditions of nursing practice can improve management, reduce suffering, and prevent hospital admission of LTC residents living with heart failure. The aim of the study was to understand the experiences of LTC nurses managing care for residents with heart failure. This was a descriptive qualitative study nested in Phase 2 of a three-phase mixed methods project designed to investigate barriers and solutions to implementing the Canadian Cardiovascular Society heart failure guidelines into LTC homes. Five focus groups totaling 33 nurses working in LTC settings in Ontario, Canada, were audiorecorded, then transcribed verbatim, and entered into NVivo9. A complex adaptive systems framework informed this analysis. Thematic content analysis was conducted by the research team. Triangulation, rigorous discussion, and a search for negative cases were conducted. Data were collected between May and July 2010. Nurses characterized their experiences managing heart failure in relation to many influences on their capacity for decision-making in LTC settings: (a) a reactive versus proactive approach to chronic illness; (b) ability to interpret heart failure signs, symptoms, and acuity; (c) compromised information flow; (d) access to resources; and (e) moral distress. Heart failure guideline implementation reflects multiple dynamic influences. Leadership that addresses these factors is required to optimize the conditions of heart failure care and related nursing practice.
Sensor failure detection for jet engines
NASA Technical Reports Server (NTRS)
Merrill, Walter C.
1988-01-01
The use of analytical redundancy to improve gas turbine engine control system reliability through sensor failure detection, isolation, and accommodation is surveyed. Both the theoretical and application papers that form the technology base of turbine engine analytical redundancy research are discussed. Also, several important application efforts are reviewed. An assessment of the state-of-the-art in analytical redundancy technology is given.
Detection of structural deterioration and associated airline maintenance problems
NASA Technical Reports Server (NTRS)
Henniker, H. D.; Mitchell, R. G.
1972-01-01
Airline operations involving the detection of structural deterioration and associated maintenance problems are discussed. The standard approach to the maintenance and inspection of aircraft components and systems is described. The frequency of inspections and the application of preventive maintenance practices are examined. The types of failure which airline transport aircraft encounter and the steps taken to prevent catastrophic failure are reported.
NASA Technical Reports Server (NTRS)
Delaat, J. C.; Merrill, W. C.
1983-01-01
A sensor failure detection, isolation, and accommodation algorithm was developed which incorporates analytic sensor redundancy through software. This algorithm was implemented in a high level language on a microprocessor based controls computer. Parallel processing and state-of-the-art 16-bit microprocessors are used along with efficient programming practices to achieve real-time operation.
Tapered Roller Bearing Damage Detection Using Decision Fusion Analysis
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Kreider, Gary; Fichter, Thomas
2006-01-01
A diagnostic tool was developed for detecting fatigue damage of tapered roller bearings. Tapered roller bearings are used in helicopter transmissions and have potential for use in high bypass advanced gas turbine aircraft engines. A diagnostic tool was developed and evaluated experimentally by collecting oil debris data from failure progression tests conducted using health monitoring hardware. Failure progression tests were performed with tapered roller bearings under simulated engine load conditions. Tests were performed on one healthy bearing and three pre-damaged bearings. During each test, data from an on-line, in-line, inductance type oil debris sensor and three accelerometers were monitored and recorded for the occurrence of bearing failure. The bearing was removed and inspected periodically for damage progression throughout testing. Using data fusion techniques, two different monitoring technologies, oil debris analysis and vibration, were integrated into a health monitoring system for detecting bearing surface fatigue pitting damage. The data fusion diagnostic tool was evaluated during bearing failure progression tests under simulated engine load conditions. This integrated system showed improved detection of fatigue damage and health assessment of the tapered roller bearings as compared to using individual health monitoring technologies.
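The abstract does not specify the fusion method, so the sketch below uses a minimal weighted-average fusion of two normalized health indicators (oil debris and vibration) purely for illustration; the weights, threshold, and indicator values are assumptions.

```python
def fuse(oil_debris_level, vibration_level, w_oil=0.6, w_vib=0.4, threshold=0.5):
    """Weighted-average decision fusion of two normalized health indicators
    (0 = healthy, 1 = failed). Weights and threshold are illustrative only."""
    fused = w_oil * oil_debris_level + w_vib * vibration_level
    return fused, fused >= threshold

print(fuse(0.1, 0.2))  # healthy bearing: low fused index, no alarm
print(fuse(0.8, 0.6))  # surface fatigue pitting: both indicators elevated
```

The benefit reported in the paper comes from exactly this kind of combination: neither sensor alone is as reliable an indicator of pitting damage as the fused index.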
Detecting failure of climate predictions
Runge, Michael C.; Stroeve, Julienne C.; Barrett, Andrew P.; McDonald-Madden, Eve
2016-01-01
The practical consequences of climate change challenge society to formulate responses that are more suited to achieving long-term objectives, even if those responses have to be made in the face of uncertainty [1, 2]. Such a decision-analytic focus uses the products of climate science as probabilistic predictions about the effects of management policies [3]. Here we present methods to detect when climate predictions are failing to capture the system dynamics. For a single model, we measure goodness of fit based on the empirical distribution function, and define failure when the distribution of observed values significantly diverges from the modelled distribution. For a set of models, the same statistic can be used to provide relative weights for the individual models, and we define failure when there is no linear weighting of the ensemble models that produces a satisfactory match to the observations. Early detection of failure of a set of predictions is important for improving model predictions and the decisions based on them. We show that these methods would have detected a range shift in northern pintail 20 years before it was actually discovered, and are increasingly giving more weight to those climate models that forecast a September ice-free Arctic by 2055.
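The single-model test can be sketched with the classic empirical-distribution-function statistic: the maximum gap between the empirical CDF of the observations and the model's predictive CDF, compared against a critical value (about 1.36/√n at the 5% level). The standard-normal model and synthetic observations below are illustrative, not the paper's data.

```python
import numpy as np
from math import erf, sqrt

def ks_statistic(observations, model_cdf):
    """Two-sided max gap between the empirical distribution function of the
    observations and the model's predictive CDF."""
    x = np.sort(np.asarray(observations, dtype=float))
    n = len(x)
    F = model_cdf(x)
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return max(np.max(ecdf_hi - F), np.max(F - ecdf_lo))

std_normal_cdf = lambda x: 0.5 * (1 + np.vectorize(erf)(x / sqrt(2)))

rng = np.random.default_rng(0)
ok = ks_statistic(rng.normal(0, 1, 200), std_normal_cdf)       # model captures dynamics
shifted = ks_statistic(rng.normal(1, 1, 200), std_normal_cdf)  # system has shifted
print(round(ok, 3), round(shifted, 3))  # compare each to 1.36/sqrt(200) = 0.096
```

A persistent statistic above the critical value is the signal that the prediction is failing, the same logic that flags a range shift long before it is obvious in the raw data.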
Failure mode and effect analysis-based quality assurance for dynamic MLC tracking systems
Sawant, Amit; Dieterich, Sonja; Svatos, Michelle; Keall, Paul
2010-01-01
Purpose: To develop and implement a failure mode and effect analysis (FMEA)-based commissioning and quality assurance framework for dynamic multileaf collimator (DMLC) tumor tracking systems. Methods: A systematic failure mode and effect analysis was performed for a prototype real-time tumor tracking system that uses implanted electromagnetic transponders for tumor position monitoring and a DMLC for real-time beam adaptation. A detailed process tree of DMLC tracking delivery was created and potential tracking-specific failure modes were identified. For each failure mode, a risk probability number (RPN) was calculated from the product of the probability of occurrence, the severity of effect, and the detectability of the failure. Based on the insights obtained from the FMEA, commissioning and QA procedures were developed to check (i) the accuracy of coordinate system transformation, (ii) system latency, (iii) spatial and dosimetric delivery accuracy, (iv) delivery efficiency, and (v) accuracy and consistency of system response to error conditions. The frequency of testing for each failure mode was determined from the RPN value. Results: Failure modes with RPN≥125 were recommended to be tested monthly. Failure modes with RPN<125 were assigned to be tested during comprehensive evaluations, e.g., during commissioning, annual quality assurance, and after major software/hardware upgrades. System latency was determined to be ∼193 ms. The system showed consistent and accurate response to erroneous conditions. Tracking accuracy was within 3%–3 mm gamma (100% pass rate) for sinusoidal as well as a wide variety of patient-derived respiratory motions. The total time taken for monthly QA was ∼35 min, while that taken for comprehensive testing was ∼3.5 h. Conclusions: FMEA proved to be a powerful and flexible tool to develop and implement a quality management (QM) framework for DMLC tracking.
The authors conclude that the use of FMEA-based QM ensures efficient allocation of clinical resources because the most critical failure modes receive the most attention. It is expected that the set of guidelines proposed here will serve as a living document that is updated with the accumulation of progressively more intrainstitutional and interinstitutional experience with DMLC tracking. PMID:21302802
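The RPN arithmetic and the paper's RPN ≥ 125 testing rule are simple enough to sketch directly; the failure mode named below and its scores are hypothetical, not from the paper's analysis.

```python
def rpn(occurrence, severity, detectability):
    """Risk priority number: product of three 1-10 scores
    (10 = most frequent / most severe / hardest to detect)."""
    return occurrence * severity * detectability

def qa_frequency(rpn_value, monthly_cutoff=125):
    # Mirrors the paper's rule: RPN >= 125 -> monthly QA;
    # lower-risk modes are covered by comprehensive evaluations.
    return "monthly" if rpn_value >= monthly_cutoff else "comprehensive"

# Hypothetical tracking failure mode, for illustration only.
leaf_position_error = rpn(occurrence=5, severity=7, detectability=5)
print(leaf_position_error, qa_frequency(leaf_position_error))  # 175 monthly
```

Because all three factors are capped at 10, the 125 cutoff corresponds to an average factor of 5, a convenient midpoint for separating routine from comprehensive testing.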
Dynamic Failure and Fragmentation of a Hot-Pressed Boron Carbide
NASA Astrophysics Data System (ADS)
Sano, Tomoko; Vargas-Gonzalez, Lionel; LaSalvia, Jerry; Hogan, James David
2017-12-01
This study investigates the failure and fragmentation of a hot-pressed boron carbide during high rate impact experiments. Four impact experiments are performed using a composite-backed target configuration at similar velocities, where two of the impact experiments resulted in complete target penetration and two resulted in partial penetration. This paper seeks to evaluate and understand the dynamic behavior of the ceramic that led to either the complete or partial penetration cases, focusing on: (1) surface and internal failure features of fragments using optical, scanning electron, and transmission electron microscopy, and (2) fragment size analysis using state-of-the-art particle-sizing technology that informs about the consequences of failure. Detailed characterization of the mechanical properties and the microstructure is also performed. Results indicate that transgranular fracture was the primary mode of failure in this boron carbide material, and no stress-induced amorphization features were observed. Analysis of the fragment sizes for the partial and completely penetrated experiments revealed a possible correlation between larger fragment sizes and impact performance. The results will add insight into designing improved advanced ceramics for impact protection applications.
Demeter, Lisa M.; DeGruttola, Victor; Lustgarten, Stephanie; Bettendorf, Daniel; Fischl, Margaret; Eshleman, Susan; Spreen, William; Nguyen, Bach-Yen; Koval, Christine E.; Eron, Joseph J.; Hammer, Scott; Squires, Kathleen
2010-01-01
Purpose To evaluate the association of efavirenz hypersusceptibility (EFV-HS) with clinical outcome in a double-blind, placebo-controlled, randomized trial of EFV plus indinavir (EFV+IDV) vs. EFV+IDV plus abacavir (ABC) in 283 nucleoside-experienced HIV-infected patients. Methods and Results Rates of virologic failure were similar in the 2 arms at week 16 (p=0.509). Treatment discontinuations were more common in the ABC arm (p=0.001). Using logistic regression, there was no association between virologic failure and either baseline ABC resistance or regimen sensitivity score. Using 3 different genotypic scoring systems, EFV-HS was significantly associated with reduced virologic failure at week 16, independent of treatment assignment. In some patients on the nucleoside-sparing arm, the nucleoside-resistant mutant L74V was selected for in combination with the uncommonly occurring EFV-resistant mutant K103N+L100I; L74V was not detected as a minority variant, using clonal sequence analysis, when the nucleoside-sparing regimen was initiated. Conclusions Premature treatment discontinuations in the ABC arm and the presence of EFV-hypersusceptible HIV variants in this patient population likely made it difficult to detect a benefit of adding ABC to EFV+IDV. In addition, L74V, when combined with K103N+L100I, may confer a selective advantage to the virus that is independent of its effects on nucleoside resistance. PMID:18215978
Photomultiplier tube failure under hydrostatic pressure in future neutrino detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambliss, K.; Diwan, M.; Simos, N.
2014-10-09
Failure of photomultiplier tubes (PMTs) under hydrostatic pressure is a concern in neutrino detection, specifically in the proposed Long-Baseline Neutrino Experiment project. Controlled hydrostatic implosion tests were performed on prototypic PMT bulbs of 10-inch diameter and recorded using high-speed filming techniques to capture failures in detail. These high-speed videos were analyzed frame-by-frame in order to identify the origin of a crack, measure the progression of individual cracks along the surface of the bulb as they propagate through the glass, and estimate crack velocity. Crack velocity was calculated for each individual crack, and an average velocity was determined for all measurable cracks on each bulb. Overall, 32 cracks were measured in 9 different bulbs tested. Finite element modeling (FEM) of crack formation and growth in prototypic PMTs shows stress concentration near the middle section of the PMT bulbs that correlates well with our crack velocity measurements in that section. The FEM model predicts a crack velocity value that is close to the terminal crack velocity reported. Our measurements also reveal significantly reduced crack velocities compared to terminal crack velocities measured in glasses using fracture mechanics testing and reported in the literature.
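The frame-by-frame velocity estimate described above reduces to simple arithmetic: divide the change in crack-tip position between consecutive frames by the frame interval. The frame rate and positions below are illustrative values, not measurements from the study.

```python
# Estimate crack-front velocity from per-frame crack-tip positions.
frame_rate_hz = 100000.0                              # assumed high-speed camera rate
tip_position_m = [0.000, 0.004, 0.009, 0.013, 0.018]  # illustrative tip positions

dt = 1.0 / frame_rate_hz
velocities = [(b - a) / dt for a, b in zip(tip_position_m, tip_position_m[1:])]
average_velocity = sum(velocities) / len(velocities)
print(average_velocity)  # 450.0 m/s for these illustrative numbers
```

Averaging the per-frame velocities over all measurable cracks on a bulb gives the per-bulb average the study reports and compares against the FEM prediction.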
Predicting cancellous bone failure during screw insertion.
Reynolds, Karen J; Cleek, Tammy M; Mohtar, Aaron A; Hearn, Trevor C
2013-04-05
Internal fixation of fractures often requires the tightening of bone screws to stabilise fragments. Inadequate application of torque can leave the fracture unstable, while over-tightening results in the stripping of the thread and loss of fixation. The optimal amount of screw torque is specific to each application and in practice is difficult to attain due to the wide variability in bone properties including bone density. The aim of the research presented in this paper is to investigate the relationships between motor torque and screw compression during powered screw insertion, and to evaluate whether the torque during insertion can be used to predict the ultimate failure torque of the bone. A custom test rig was designed and built for bone screw experiments. By inserting cancellous bone screws into synthetic, ovine and human bone specimens, it was established that variations related to bone density could be automatically detected through the effects of the bone on the rotational characteristics of the screw. The torque measured during screw insertion was found to be directly related to bone density and can be used, on its own, as a good predictor of ultimate failure torque of the bone. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
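The predictive relationship described above can be sketched as a simple linear regression of ultimate failure (stripping) torque on measured insertion torque. The coefficients and synthetic data below are illustrative assumptions, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic insertion-torque / failure-torque pairs in N*m; the 2.5x + 0.1
# relationship and noise level are assumed for illustration only.
insertion = rng.uniform(0.2, 2.0, 30)
failure = 2.5 * insertion + 0.1 + rng.normal(0, 0.05, 30)

# Fit failure torque as a linear function of the torque measured during insertion.
slope, intercept = np.polyfit(insertion, failure, 1)

# Predicted stripping torque for a screw showing 1.0 N*m insertion torque.
predicted = slope * 1.0 + intercept
print(round(slope, 2), round(predicted, 2))
```

In an instrumented driver, this prediction could be made on the fly, letting the tool stop tightening at a safe fraction of the predicted stripping torque.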
NASA Astrophysics Data System (ADS)
Zhang, Jian; Liu, Wei; Gao, Weicheng
2018-02-01
This work studies the influence of double cutouts and stiffener reinforcements on the performance of an I-section carbon fibre/epoxy composite beam, including buckling, post-buckling behaviour and ultimate failure. The cantilever I-section beam, with two diamond-shaped cutouts in the web and three longitudinal L-shaped stiffeners bonded to one side, is subjected to a shear load at the free end. Both numerical modelling and experiments on the I-section CFRP beam are performed. In the numerical analysis, the Tsai-Wu failure criterion is used to detect the first-ply-failure load in a nonlinear analysis that predicts the load-deflection response. Good agreement is obtained between the numerical simulations and the test results. For the double-cutout beam web, the two cutouts show similar surface deformation amplitudes, indicating that the stiffeners make the force transfer more effective. Compared with the numerical result for the corresponding beam with a single cutout and stiffener reinforcement, the longitudinal stiffeners not only improve structural stability significantly (by about 30%) but also improve the deformation compatibility of the structure. Local buckling occurred within the sub-webs partitioned by the stiffeners, with buckling loads that differ only slightly. In the post-buckling regime, the two areas show similar deformation characteristics, while the sub-web close to the fixed end bears more shear load than the sub-web close to the loading end as the normal deformation of the structure increases. The catastrophic failure load is approximately 75.6% higher than the buckling load. The results show that tensile fracture of the fibres is the immediate cause of the ultimate failure of the structure.
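The Tsai-Wu first-ply-failure check used in the analysis can be sketched for a single lamina under plane stress: failure is predicted when the quadratic index reaches 1. The lamina strengths below are typical carbon/epoxy values chosen for illustration, not the paper's material data, and F12 uses the common −0.5·√(F11·F22) approximation.

```python
import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Plane-stress Tsai-Wu failure index; failure is predicted when index >= 1.
    Strengths are magnitudes (Xc, Yc entered as positive compressive strengths).
    F12 uses the common -0.5*sqrt(F11*F22) interaction approximation."""
    F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
    F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
    F12 = -0.5 * math.sqrt(F11 * F22)
    return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2
            + F66*t12**2 + 2*F12*s1*s2)

# Illustrative carbon/epoxy lamina strengths in MPa (assumed values).
Xt, Xc, Yt, Yc, S = 1500.0, 1200.0, 50.0, 250.0, 70.0
print(tsai_wu_index(1500.0, 0.0, 0.0, Xt, Xc, Yt, Yc, S))  # 1.0 at uniaxial strength
print(tsai_wu_index(750.0, 0.0, 0.0, Xt, Xc, Yt, Yc, S))   # below 1: no ply failure
```

In a finite-element model, this index is evaluated ply by ply at each load increment; the first load at which any ply reaches 1 is the first-ply-failure load.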
Xia, Tingting; Chai, Xichen; Shen, Jiaqing
2017-01-01
Appetite loss is one complication of chronic heart failure (CHF), and its association with pancreatic exocrine insufficiency (PEI) is not well investigated in CHF. We attempted to detect the association between PEI and CHF-induced appetite loss. Patients with CHF were enrolled, and body mass index (BMI), left ventricular ejection fraction (LVEF), New York Heart Association (NYHA) cardiac function grading, B-type natriuretic peptide (BNP), serum albumin, pro-albumin and hemoglobin were evaluated. The pancreatic exocrine function was measured by fecal elastase-1 (FE-1) levels in the enrolled patients. Appetite assessment was tested by completing the simplified nutritional appetite questionnaire (SNAQ). The improvement of appetite loss by supplemented pancreatic enzymes was also researched in this study. A decrease of FE-1 levels was found in patients with CHF, as well as of SNAQ scores. A positive correlation was observed between SNAQ scores and FE-1 levels (r = 0.694, p < 0.001). Pancreatic enzyme supplementation could attenuate the decrease of SNAQ scores in CHF patients with FE-1 levels <200 μg/g stool and SNAQ < 14. Appetite loss is commonly seen in CHF, and is partially associated with pancreatic exocrine insufficiency. Oral pancreatic enzyme replacement therapy attenuates the chronic heart failure-induced appetite loss. These results suggest a possible pancreatic-cardiac relationship in chronic heart failure, and further experiments are needed to clarify the possible mechanisms.
Howe, Andrew J; McKeag, Nicholas A; Wilson, Carol M; Ashfield, Kyle P; Roberts, Michael J
2015-06-01
Implantable cardioverter defibrillator (ICD) lead insulation failure and conductor externalization have been increasingly reported. The 7.8F silicon-insulated Linox SD and Linox S ICD leads (Biotronik, Berlin, Germany) were released in 2006 and 2007, respectively, with an estimated 85,000 implantations worldwide. A 39-year-old female suffered an out-of-hospital ventricular fibrillation (VF) arrest with successful resuscitation. An ICD was implanted utilizing a single coil active fixation Linox(Smart) S lead (Biotronik, Berlin, Germany). A device-triggered alert approximately 3 years after implantation confirmed nonphysiological high rate sensing leading to VF detection. A chest X-ray showed an abnormality of the ICD lead and fluoroscopic screening confirmed conductor externalization proximal to the defibrillator coil. In view of the combined electrical and fluoroscopic abnormalities, urgent lead extraction and replacement were performed. A review of Linox (Biotronik) and Vigila (Sorin Group, Milan, Italy) lead implantations within our center (n = 98) identified 3 additional patients presenting with premature lead failure, 2 associated with nonphysiological sensed events and one associated with a significant decrease in lead impedance. All leads were subsequently removed and replaced. This case provides a striking example of insulation failure affecting the Linox ICD lead and, we believe, is the first to demonstrate conductor externalization manifesting both electrical and fluoroscopic abnormalities. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Zhi, Xuan; Han, Baoqin; Sui, Xianxian; Hu, Rui; Liu, Wanshun
2015-02-01
The effects of low-molecular-weight chitosan (LMWC) on chronic renal failure (CRF) rats induced by adenine were investigated in vivo and in vitro. Chitosan was hydrolyzed using chitosanase at pH 6-7 and 37 °C for 24 h to obtain LMWC. In vitro, the effect of LMWC on the proliferation of renal tubular epithelial cells (RTEC) showed that it had no cytotoxic effect and could promote cell growth. For the in vivo experiment, chronic renal failure rats induced by adenine were randomly divided into a control group, a Niaoduqing group, and high-, medium- and low-dose LMWC groups. For each group, we measured serum creatinine (SCR), blood urea nitrogen (BUN), and total superoxide dismutase (T-SOD) and glutathione peroxidase (GSH-Px) activities of renal tissue, and obtained the ratio of kidney weight to body weight and the pathological changes of the kidney. The levels of serum SCR and BUN were higher in the adenine-induced rats than in the control group, indicating that the rat chronic renal failure model was established successfully. The results after treatment showed that LMWC could reduce the SCR and BUN levels and enhance the activities of T-SOD and GSH-Px in the kidney compared to the control group. Histopathological examination revealed that adenine-induced renal alterations were restored by LMWC at the three tested dosages, especially at the low dosage of 100 mg/kg/day.
Expert systems for automated maintenance of a Mars oxygen production system
NASA Technical Reports Server (NTRS)
Ash, Robert L.; Huang, Jen-Kuang; Ho, Ming-Tsang
1989-01-01
A prototype expert system was developed for maintaining autonomous operation of a Mars oxygen production system. Normal operating conditions and failure modes according to certain desired criteria are tested and identified. Several failure detection and isolation schemes, using forward chaining, backward chaining, and knowledge-based and rule-based reasoning, are devised to perform several housekeeping functions. These functions include self-health checkout, an emergency shutdown program, fault detection, and conventional control activities. An effort was made to derive the dynamic model of the system using the Bond-Graph technique in order to develop a model-based failure detection and isolation scheme based on estimation methods. Finally, computer simulations and experimental results demonstrated the feasibility of the expert system, and a preliminary reliability analysis for the oxygen production system is also provided.
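Forward chaining of the kind mentioned above can be sketched with a naive rule engine: fire any rule whose premises all hold, add its conclusion as a new fact, and repeat until nothing new is derived. The fault rules and sensor facts below are hypothetical, invented for illustration only.

```python
def forward_chain(facts, rules):
    """Naive forward-chaining inference: repeatedly fire rules whose
    premises are all satisfied until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical fault rules for an oxygen production plant.
rules = [
    (["cell_voltage_high", "o2_flow_low"], "electrolyzer_fault"),
    (["electrolyzer_fault"], "isolate_electrolyzer"),
    (["isolate_electrolyzer"], "emergency_shutdown"),
]
result = forward_chain(["cell_voltage_high", "o2_flow_low"], rules)
print("emergency_shutdown" in result)  # True
```

Backward chaining works the other way, starting from a goal such as "electrolyzer_fault" and checking whether its premises can be established, which suits diagnostic queries rather than continuous housekeeping.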
Gyro and accelerometer failure detection and identification in redundant sensor systems
NASA Technical Reports Server (NTRS)
Potter, J. E.; Deckert, J. C.
1972-01-01
Algorithms for failure detection and identification for redundant noncollinear arrays of single-degree-of-freedom gyros and accelerometers are described. These algorithms are optimum in the sense that detection occurs as soon as it is no longer possible to account for the instrument outputs as the outputs of good instruments operating within their noise tolerances, and identification occurs as soon as only a particular instrument failure could account for the actual instrument outputs within the noise tolerance of good instruments. An estimation algorithm is described which minimizes the maximum possible estimation error magnitude for the given set of instrument outputs. Monte Carlo simulation results are presented for the application of the algorithms to an inertial reference unit consisting of six gyros and six accelerometers in two alternate configurations.
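Failure detection and isolation in redundant noncollinear sensor arrays is often organized around a parity vector: a projection of the measurements that is insensitive to the vehicle state and responds only to instrument errors. The sketch below illustrates that general idea in Python; the simple norm test, the signature-matching isolation rule, and the six-sensor geometry used in the example are assumptions of this sketch, not the optimal decision rules described in the abstract.

```python
import numpy as np
from scipy.linalg import null_space

def parity_fdi(H, y, threshold):
    """Parity-vector failure detection/isolation for a redundant
    single-axis sensor array (illustrative sketch only).

    H is m x 3 (each row a sensor's sensitive axis), y the m
    measurements.  V spans the left null space of H, so the parity
    vector p = V @ y is zero for fault-free, noise-free outputs
    regardless of the true state, and responds only to sensor errors.
    Isolation picks the sensor whose parity signature (its column of
    V) best aligns with p.  Returns the failed sensor index, or None."""
    V = null_space(H.T).T              # (m-3) x m parity matrix, V @ H = 0
    p = V @ y                          # parity vector
    if p @ p <= threshold ** 2:
        return None                    # residual small: no failure declared
    scores = [(p @ c) ** 2 / (c @ c) for c in V.T]  # alignment per sensor
    return int(np.argmax(scores))
```

A bias added to one sensor pushes `p` along that sensor's signature column, so the squared-projection score identifies it; with noisy instruments the threshold would be set from the noise tolerance.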
The NASA Integrated Vehicle Health Management Technology Experiment for X-37
NASA Technical Reports Server (NTRS)
Schwabacher, Mark; Samuels, Jeff; Brownston, Lee; Clancy, Daniel (Technical Monitor)
2002-01-01
The NASA Integrated Vehicle Health Management (IVHM) Technology Experiment for X-37 was intended to run IVHM software on board the X-37, an unpiloted spacecraft that would orbit the Earth for up to 21 days before landing on a runway. The objectives of the experiment were to demonstrate the benefits of in-flight IVHM to the operation of a Reusable Launch Vehicle, to advance the Technology Readiness Level of this IVHM technology within a flight environment, and to demonstrate that the IVHM software could operate on the Vehicle Management Computer. The scope of the experiment was to perform real-time fault detection and isolation for the X-37's electrical power system and electro-mechanical actuators. The experiment used Livingstone, a software system that performs diagnosis using a qualitative, model-based reasoning approach that searches system-wide interactions to detect and isolate failures. Two of the challenges we faced were to make this research software efficient enough to fit within the limited computational resources available to us on the X-37 spacecraft, and to modify it to satisfy the X-37's software safety requirements. Although the experiment is currently unfunded, the development effort had value in that it resulted in major improvements in Livingstone's efficiency and safety. This paper reviews some of the details of the modeling and integration efforts, and some of the lessons that were learned.
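Model-based diagnosis of the Livingstone kind searches for assignments of component modes that make the model's predictions consistent with the observations. The toy sketch below conveys only that consistency-based idea, using a brute-force search over fault sets in a linear "power chain"; the component names and the exhaustive search are illustrative assumptions, not Livingstone's conflict-directed algorithm or its actual models.

```python
from itertools import combinations

def simulate(faulty, chain):
    """Qualitative toy model: a True signal propagates down a chain of
    components; any faulty component drops it to False thereafter."""
    signal, outputs = True, {}
    for comp in chain:
        if comp in faulty:
            signal = False
        outputs[comp] = signal
    return outputs

def diagnose(chain, observed):
    """Return the smallest fault sets whose predicted outputs match the
    observation -- a brute-force stand-in for conflict-directed search."""
    for size in range(len(chain) + 1):
        hits = [set(c) for c in combinations(chain, size)
                if simulate(set(c), chain) == observed]
        if hits:
            return hits           # minimal (most-likely) diagnoses first
    return []
```

Observing power at the bus but not at the relay or actuator isolates the relay as the single minimal fault, without any stored fault signatures; the real system does this search efficiently and over far richer qualitative models.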
Koen, Joshua D; Aly, Mariam; Wang, Wei-Chun; Yonelinas, Andrew P
2013-11-01
A prominent finding in recognition memory is that studied items are associated with more variability in memory strength than new items. Here, we test 3 competing theories for why this occurs: the encoding variability, attention failure, and recollection accounts. Distinguishing among these theories is critical because each provides a fundamentally different account of the processes underlying recognition memory. The encoding variability and attention failure accounts propose that old item variance will be unaffected by retrieval manipulations because the processes producing this effect are ascribed to encoding. The recollection account predicts that both encoding and retrieval manipulations that preferentially affect recollection will affect memory variability. These contrasting predictions were tested by examining the effect of response speeding (Experiment 1), dividing attention at retrieval (Experiment 2), context reinstatement (Experiment 3), and increased test delay (Experiment 4) on recognition performance. The results of all 4 experiments confirm the predictions of the recollection account and are inconsistent with the encoding variability account. The evidence supporting the attention failure account is mixed, with 2 of the 4 experiments confirming the account and 2 disconfirming it. These results indicate that encoding variability and attention failure are insufficient accounts of memory variance and provide support for the recollection account. Several alternative theoretical accounts of the results are also considered. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Van Eygen, Veerle; Thys, Kim; Van Hove, Carl; Rimsky, Laurence T; De Meyer, Sandra; Aerssens, Jeroen; Picchio, Gaston; Vingerhoets, Johan
2016-05-01
Minority variants (1.0-25.0%) were evaluated by deep sequencing (DS) at baseline and virological failure (VF) in a selection of antiretroviral treatment-naïve, HIV-1-infected patients from the rilpivirine ECHO/THRIVE phase III studies. Linkage between frequently emerging resistance-associated mutations (RAMs) was determined. DS (Illumina®) and population sequencing (PS) results were available at baseline for 47 VFs and at time of failure for 48 VFs; and at baseline for 49 responders matched for baseline characteristics. Minority mutations were accurately detected at frequencies down to 1.2% of the HIV-1 quasispecies. No baseline minority rilpivirine RAMs were detected in VFs; one responder carried 1.9% F227C. Baseline minority mutations associated with resistance to other non-nucleoside reverse transcriptase inhibitors (NNRTIs) were detected in 8/47 VFs (17.0%) and 7/49 responders (14.3%). Baseline minority nucleoside/nucleotide reverse transcriptase inhibitor (NRTI) RAMs M184V and L210W were each detected in one VF (none in responders). At failure, two patients without NNRTI RAMs by PS carried minority rilpivirine RAMs K101E and/or E138K; and five additional patients carried other minority NNRTI RAMs V90I, V106I, V179I, V189I, and Y188H. Overall at failure, minority NNRTI RAMs and NRTI RAMs were found in 29/48 (60.4%) and 16/48 VFs (33.3%), respectively. Linkage analysis showed that E138K and K101E were usually not observed on the same viral genome. In conclusion, baseline minority rilpivirine RAMs and other NNRTI/NRTI RAMs were uncommon in the rilpivirine arm of the ECHO and THRIVE studies. DS at failure showed emerging NNRTI-resistant minority variants in seven rilpivirine VFs who had no detectable NNRTI RAMs by PS. © 2015 Wiley Periodicals, Inc.
An accelerating precursor to predict "time-to-failure" in creep and volcanic eruptions
NASA Astrophysics Data System (ADS)
Hao, Shengwang; Yang, Hang; Elsworth, Derek
2017-09-01
Real-time prediction by monitoring the evolution of response variables is a central goal in predicting rock failure. A linear relation, Ω̇/Ω̈ = C(t_f − t), has been developed to describe the time to failure, where Ω represents a response quantity, C is a constant, and t_f represents the failure time. Observations from laboratory creep failure experiments and precursors to volcanic eruptions are used to test the validity of the approach. Both cumulative and simple moving window techniques are developed to perform predictions and to illustrate the effects of data selection on the results. Laboratory creep failure experiments on granites show that the linear relation works well during the final approach to failure. For blind prediction, the simple moving window technique is preferred because it always uses the most recent data and excludes effects of early data that deviate significantly from the predicted trend. When the predicted results show only small fluctuations, failure is imminent.
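The relation above lends itself to a direct numerical recipe: differentiate the monitored quantity twice, fit the ratio Ω̇/Ω̈ against time over a (moving) window, and extrapolate the line to its zero crossing. The Python sketch below implements that reading of the abstract; the function name, the least-squares fit, and the synthetic logarithmic test signal are assumptions of this sketch, not details from the paper.

```python
import numpy as np

def predict_failure_time(t, omega, window=None):
    """Estimate failure time t_f from the linear relation
    dOmega/dt / d2Omega/dt2 = C * (t_f - t).

    Since the ratio equals C*t_f - C*t, a least-squares line
    ratio ~ m*t + b has slope m = -C and intercept b = C*t_f,
    giving t_f = -b/m.  `window` keeps only the most recent samples
    (the simple moving window the paper prefers for blind
    prediction, which discards early off-trend data)."""
    if window is not None:
        t, omega = t[-window:], omega[-window:]
    d1 = np.gradient(omega, t)       # response rate  (Omega-dot)
    d2 = np.gradient(d1, t)          # response acceleration (Omega-ddot)
    ratio = d1 / d2
    m, b = np.polyfit(t, ratio, 1)   # ratio ~ m*t + b
    return -b / m                    # predicted t_f
```

For a signal obeying the relation exactly, e.g. Ω(t) = −ln(t_f − t) (so Ω̇/Ω̈ = t_f − t with C = 1), the extrapolation recovers t_f; real monitoring data would need smoothing before the double differentiation.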
Survey of HEPA filter experience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbaugh, E.H.
1982-07-01
A survey of high efficiency particulate air (HEPA) filter applications and experience at Department of Energy (DOE) sites was conducted to provide an overview of the reasons and magnitude of HEPA filter changeouts and failures. Results indicated that approximately 58% of the filters surveyed were changed out in the three year study period, and some 18% of all filters were changed out more than once. Most changeouts (63%) were due to the existence of a high pressure drop across the filter, indicative of filter plugging. Other reasons for changeout included leak-test failure (15%), preventive maintenance service life limit (13%), suspected damage (5%) and radiation buildup (4%). Filter failures occurred with approximately 12% of all installed filters. Of these failures, most (64%) occurred for unknown or unreported reasons. Handling or installation damage accounted for an additional 19% of reported failures. Media ruptures, filter-frame failures and seal failures each accounted for approximately 5 to 6% of the reported failures.
CPV Cell Infant Mortality Study
NASA Astrophysics Data System (ADS)
Bosco, Nick; Sweet, Cassi; Silverman, Timothy J.; Kurtz, Sarah
2011-12-01
Six hundred and fifty CPV cells were characterized before packaging and then after a four-hour concentrated on-sun exposure. An observed infant mortality failure rate was reproduced and attributed to epoxy die-attach voiding at the corners of the cells. These voids increase the local thermal resistance allowing thermal runaway to occur under normal operating conditions in otherwise defect-free cells. FEM simulations and experiments support this hypothesis. X-ray transmission imaging of the affected assemblies was found incapable of detecting all suspect voids and therefore cannot be considered a reliable screening technique in the case of epoxy die-attach.
Failure to replicate the Mehta and Zhu (2009) color-priming effect on anagram solution times.
Steele, Kenneth M
2014-06-01
Mehta and Zhu (Science, 323, 1226-1229, 2009) hypothesized that the color red induces avoidance motivation and that the color blue induces approach motivation. In one experiment, they reported that anagrams of avoidance motivation words were solved more quickly on red backgrounds and that approach motivation anagrams were solved more quickly on blue backgrounds. Reported here is a direct replication of that experiment, using the same anagrams, instructions, and colors, with more than triple the number of participants used in the original study. The results did not show the Mehta and Zhu color-priming effects, even though statistical power was sufficient to detect the effect. The results call into question the existence of their color-priming effect on the solution of anagrams.
Snow fracture: From micro-cracking to global failure
NASA Astrophysics Data System (ADS)
Capelli, Achille; Reiweger, Ingrid; Schweizer, Jürg
2017-04-01
Slab avalanches are caused by a crack forming and propagating in a weak layer within the snow cover, which eventually causes the detachment of the overlying cohesive slab. The gradual damage process leading to the nucleation of the initial failure is still not entirely understood. We therefore studied the damage process preceding snow failure by analyzing the acoustic emissions (AE) generated by bond failure or micro-cracking; AE allow the ongoing progressive failure to be studied non-destructively. We performed fully load-controlled failure experiments on snow samples containing a weak layer and recorded the generated AE. The size and frequency of the AE increased before failure, revealing an acceleration of the damage process. The AE energy was power-law distributed, and the exponent (b-value) decreased approaching failure. The waiting times followed an exponential distribution whose coefficient λ increased before failure. The decrease of the b-value and the increase of λ correspond to a change in the event statistics, indicating a transition from homogeneously distributed, uncorrelated damage producing mostly small AE to localized damage producing larger, correlated events, which leads to brittle failure. We observed brittle failure in the fast experiment and more ductile behavior in the slow experiments. This rate dependence was also reflected in the AE signature: in the slow experiments the b-value and λ were almost constant and the energy rate increased only moderately, indicating that the damage process was in a stable state, suggesting that damage and healing processes were balanced. On a shorter time scale, however, the AE parameters varied, indicating that the damage process was not steady but consisted of a sum of small bursts.
We assume that the bursts were generated by cascades of correlated micro-cracks caused by localization of stresses at a small scale. The healing process may then have prevented the self-organization of this small-scale damage and, therefore, the total failure of the sample.
Detecting gear tooth fracture in a high contact ratio face gear mesh
NASA Technical Reports Server (NTRS)
Zakrajsek, James J.; Handschuh, Robert F.; Lewicki, David G.; Decker, Harry J.
1995-01-01
This paper summarizes the results of a study in which three different vibration diagnostic methods were used to detect gear tooth fracture in a high-contact-ratio face gear mesh. The NASA spiral bevel gear fatigue test rig was used to produce unseeded-fault, natural failures of four face gear specimens. During the fatigue tests, which were run to determine load capacity and primary failure mechanisms for face gears, vibration signals were monitored and recorded for gear diagnostic purposes. Gear tooth bending fatigue and surface pitting were the primary failure modes found in the tests. The damage ranged from partial tooth fracture on a single tooth in one test to heavy wear, severe pitting, and complete fracture of several teeth in another test. Three gear fault detection techniques, FM4, NA4*, and NB4, were applied to the experimental data. These methods use the signal average in both the time and frequency domains. Method NA4* conclusively detected the gear tooth fractures in three of the four fatigue tests, along with gear tooth surface pitting and heavy wear. For multiple tooth fractures, all of the methods gave a clear indication of the damage. It was also found that, due to the high contact ratio of the face gear mesh, single tooth fractures did not significantly affect the vibration signal, making this type of failure difficult to detect.
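Of the three indicators, FM4 is commonly defined as the normalized kurtosis of the "difference signal": the time-synchronous average (TSA) with the regular gear-mesh harmonics removed, so that localized tooth damage stands out as impulsiveness. A minimal sketch of that definition follows; the bin bookkeeping (one TSA revolution per record), the sideband `bandwidth` parameter, and the toy test signal are assumptions of this sketch rather than the paper's exact processing.

```python
import numpy as np

def fm4(tsa, mesh_orders, bandwidth=1):
    """FM4: normalized kurtosis of the difference signal.

    `tsa` is assumed to span exactly one shaft revolution, so FFT bin k
    corresponds to shaft order k.  `mesh_orders` lists the gear-mesh
    harmonic orders to delete; `bandwidth` also removes that many
    adjacent bins (first-order sidebands) around each harmonic.
    A healthy (roughly Gaussian) difference signal gives FM4 near 3;
    localized tooth damage drives it higher."""
    spec = np.fft.rfft(tsa)
    spec[0] = 0.0                           # drop the mean
    for k in mesh_orders:
        lo = max(k - bandwidth, 0)
        spec[lo:k + bandwidth + 1] = 0.0    # delete mesh harmonic + sidebands
    d = np.fft.irfft(spec, n=len(tsa))      # difference signal
    d = d - d.mean()
    return len(d) * np.sum(d**4) / np.sum(d**2) ** 2
```

A single-sample impulse riding on an otherwise regular mesh signal survives the harmonic removal and inflates the fourth-moment numerator, which is why FM4 responds to single-tooth faults that barely change overall vibration energy.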
NASA Technical Reports Server (NTRS)
Lewicki, David George; Lambert, Nicholas A.; Wagoner, Robert S.
2015-01-01
The diagnostic capability of micro-electro-mechanical systems (MEMS) based rotating accelerometer sensors in detecting gear tooth crack failures in helicopter main-rotor transmissions was evaluated. MEMS sensors were installed on a pre-notched OH-58C spiral-bevel pinion gear. Endurance tests were performed and the gear was run to tooth fracture failure. Results from the MEMS sensors were compared to conventional accelerometers mounted on the transmission housing. Most of the four stationary accelerometers mounted on the gearbox housing, and most of the condition indicators (CIs) used, gave indications of failure at the end of the test. The MEMS system performed well and lasted the entire test; all MEMS accelerometers gave an indication of failure at the end of the test. The MEMS system performed as well as, if not better than, the stationary accelerometers mounted on the gearbox housing with regard to gear tooth fault detection. For both the MEMS sensors and the stationary sensors, the fault detection time was not much sooner than the actual tooth fracture time. The MEMS sensor spectrum data showed large first-order shaft frequency sidebands due to the rotating frame of reference of the measurement. The method of constructing a pseudo tachometer signal from periodic characteristics of the vibration data successfully derived a time-synchronous average (TSA) signal without an actual tachometer, and proved an effective way to improve fault detection for the MEMS.
The Need and Requirements for Validating Damage Detection Capability
2011-09-01
Testing of Airborne Equipment [11], 2) Materials / Structure Certification, 3) NDE (POD) Validation Procedures, 4) Failure Mode Effects and Criticality...Analysis (FMECA), and 5) Cost Benefits Analysis [12]. Existing procedures for environmental testing of airborne equipment ensure flight...e.g. ultrasound or eddy current), damage type or failure conditions to detect, criticality of the damage state (e.g. safety of flight), likelihood of
Sun, Dan; Yang, Fei
2017-04-29
To investigate whether metformin can improve cardiac function by improving mitochondrial function in a model of heart failure after myocardial infarction, male C57/BL6 mice aged about 8 weeks were selected and the anterior descending branch was ligated to establish the heart failure model. Cardiac function was evaluated via ultrasound after 3 days to confirm that modeling was successful, and the mice were randomly divided into two groups: a saline group (Saline), which received intragastric administration of normal saline for 4 weeks, and a metformin group (Met), which received intragastric administration of metformin for 4 weeks. A sham group (Sham) was also set up. Changes in cardiac function were measured at 4 weeks after operation. Hearts were harvested after 4 weeks, and apoptosis in myocardial tissue was detected using the TUNEL method. Fresh mitochondria were isolated, and changes in the oxygen consumption rate (OCR) and respiratory control ratio (RCR) of mitochondria in each group were measured with a bio-energy metabolism tester; changes in the mitochondrial membrane potential (MMP) of myocardial tissue were detected via JC-1 staining. The expression of Bcl-2, Bax, Sirt3, PGC-1α and acetylated PGC-1α in myocardial tissue was detected by Western blot, and Sirt3 mRNA levels were detected by RT-PCR. Metformin improved the systolic function of the heart failure model mice after myocardial infarction and reduced the apoptosis of myocardial cells. Myocardial mitochondrial respiratory function and membrane potential were decreased after myocardial infarction, and metformin treatment significantly improved both; metformin also up-regulated the expression of Sirt3 and the activity of PGC-1α in myocardial tissue.
In conclusion, metformin decreases the acetylation level of PGC-1α by up-regulating Sirt3, mitigates the damage to mitochondrial membrane potential in this model of heart failure after myocardial infarction, and improves mitochondrial respiratory function, thus improving the cardiac function of the mice. Copyright © 2017. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Mullin, Daniel Richard
2013-09-01
The majority of space programs, whether manned or unmanned, for science or exploration, require that a Failure Modes, Effects and Criticality Analysis (FMECA) be performed as part of their safety and reliability activities. This comes as no surprise given that FMECAs have been an integral part of the reliability engineer's toolkit since the 1950s. The reasons for performing a FMECA are well known, including fleshing out system single-point failures, system hazards, and critical components and functions. However, in the author's ten years of experience as a space systems safety and reliability engineer, the FMECA is often performed as an afterthought, simply to meet contract deliverable requirements, and is often started long after the system requirements allocation and preliminary design have been completed. Important qualitative and quantitative components are also often missing that can provide useful data to all project stakeholders: probability of occurrence, probability of detection, time to effect, time to detect, and, finally, the Risk Priority Number. This is unfortunate, as the FMECA is a powerful system design tool that, when used effectively, can help optimize system function while minimizing the risk of failure. When performed as early as possible, in conjunction with writing the top-level system requirements, the FMECA can provide instant feedback on the viability of the requirements while providing a valuable sanity check early in the design process. It can indicate which areas of the system will require redundancy and which areas are inherently the most risky from the onset.
Based on historical and practical examples, it is this author's contention that FMECAs are an immense source of important information for all stakeholders in a given project and can provide several benefits, including efficient project management with respect to cost and schedule, system engineering and requirements management, assembly integration and test (AI&T), and operations, if applied early, performed to completion and updated along with the system design.
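The Risk Priority Number mentioned above is conventionally the product of severity, occurrence, and detection rankings, each on a 1-10 scale. A minimal sketch of a FMECA worksheet row under that convention follows; the component names and the numeric values are invented for illustration, not taken from any real analysis.

```python
from dataclasses import dataclass

@dataclass
class FmecaRow:
    item: str
    failure_mode: str
    severity: int    # 1-10: consequence of the failure effect
    occurrence: int  # 1-10: likelihood of the failure mode
    detection: int   # 1-10: 10 = hardest to detect before the effect

    @property
    def rpn(self) -> int:
        # Risk Priority Number: the conventional FMECA ranking product
        return self.severity * self.occurrence * self.detection

# Hypothetical worksheet rows, invented for illustration
rows = [
    FmecaRow("isolation valve", "stuck closed", 9, 3, 6),   # RPN 162
    FmecaRow("pressure sensor", "slow drift", 4, 6, 8),     # RPN 192
]
ranked = sorted(rows, key=lambda r: r.rpn, reverse=True)
```

Ranking by RPN illustrates the author's point about missing quantitative fields: a low-severity but frequent, hard-to-detect mode (the drifting sensor here) can outrank a severe one, which is exactly the kind of insight lost when the analysis is done as an afterthought.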
Refurbishment of the cryogenic coolers for the Skylab earth resources experiment package
NASA Technical Reports Server (NTRS)
Smithson, J. C.; Luksa, N. C.
1975-01-01
Skylab Earth Resources Experiment Package (EREP) experiments S191 and S192 required a cold temperature reference for operation of a spectrometer. This reference was provided by a subminiature Stirling-cycle cooler. However, the failure of the cooler to pass the qualification test necessitated additional cooler development, refurbishment, and qualification. A description of the failures and their causes for each of the coolers is presented. The solutions to the various failure modes are discussed, along with problems that arose during the refurbishment program. The rationale and results of various tests are presented. The successful completion of the cryogenic cooler refurbishment program resulted in four of these coolers being flown on Skylab. The system operation during the flight is presented.
Composite Bending Box Section Modal Vibration Fault Detection
NASA Technical Reports Server (NTRS)
Werlink, Rudy
2002-01-01
One of the primary concerns with composite construction in critical structures such as wings and stabilizers is that hidden faults and cracks can develop in operation; in the real world, catastrophic sudden failure can result from these undetected faults. Vibration data incorporating a broad-frequency modal approach could detect significant changes prior to failure. The purpose of this report is to investigate the usefulness of frequency mode testing before and after bending and torsion loading of a composite bending box test section. This test article is representative of construction techniques developed for the recent NASA Blended Wing Body Low Speed Vehicle Project. Modal testing using an impact hammer provides a frequency fingerprint before and after bending and torsional loading; if a significant structural discontinuity develops, the vibration response is expected to change. The limitations of the data are evaluated for future use as a non-destructive, in-situ method of assessing hidden damage in similarly constructed composite wing assemblies. The sensitivity of modal vibration fault detection to bandwidth, location, and axis is investigated: do the sensor accelerometers need to be near the fault and/or in the same axis? The response data used in this report were recorded at 17 locations using tri-axial accelerometers. The modal tests were conducted following 5 independent loading conditions before load-to-failure and 2 following load-to-failure, over a period of 6 weeks. Redundant data were used to minimize effects from uncontrolled variables that could lead to incorrect interpretations. It is shown that vibrational modes detected failure at many locations when skin de-bonding failures occurred near the center section. Important considerations are the axis selected and the frequency range.
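A modal "fingerprint" of the kind described can be compared before and after loading by tracking the strongest spectral peaks of each impact response and flagging frequency shifts. The sketch below illustrates one simple way to do this; picking peaks from a single response record (rather than proper frequency response functions from hammer-force/accelerometer pairs) and the synthetic test frequencies are simplifying assumptions of this sketch.

```python
import numpy as np

def modal_peaks(signal, fs, n_peaks=2):
    """Frequencies of the n strongest spectral local maxima of an
    impact response -- a crude modal fingerprint (illustrative only)."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    # keep local maxima only, so adjacent bins of one resonance
    # are not counted as separate modes
    interior = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])
    idx = np.where(interior)[0] + 1
    top = idx[np.argsort(spec[idx])[-n_peaks:]]
    return np.sort(freqs[top])

def frequency_shift_pct(before, after, fs, n_peaks=2):
    """Percent shift of each tracked modal peak between two impact
    tests; a large shift suggests structural change such as the
    skin de-bonding discussed in the report."""
    f0 = modal_peaks(before, fs, n_peaks)
    f1 = modal_peaks(after, fs, n_peaks)
    return 100.0 * (f1 - f0) / f0
```

Because a de-bond softens the structure locally, it typically lowers some modal frequencies while leaving others nearly untouched, which is why sensor axis and location matter: a mode the sensor barely observes cannot reveal its shift.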
Technology platform development for targeted plasma metabolites in human heart failure.
Chan, Cy X'avia; Khan, Anjum A; Choi, Jh Howard; Ng, Cm Dominic; Cadeiras, Martin; Deng, Mario; Ping, Peipei
2013-01-01
Heart failure is a multifactorial disease associated with staggeringly high morbidity and mortality. Recently, alterations of multiple metabolites have been implicated in heart failure; however, the lack of an effective technology platform to assess these metabolites has limited our understanding of how they contribute to this disease phenotype. We have successfully developed a new workflow combining specific sample preparation with tandem mass spectrometry that enables us to extract most of the targeted metabolites. 19 metabolites were chosen owing to their biological relevance to heart failure, including extracellular matrix remodeling, inflammation, insulin resistance, renal dysfunction, and cardioprotection against ischemic injury. In this report, we systematically engineered, optimized and refined a protocol applicable to human plasma samples; this study contributes to the methodology development with respect to deproteinization, incubation, reconstitution, and detection with mass spectrometry. The deproteinization step was optimized with 20% methanol/ethanol at a plasma:solvent ratio of 1:3. Subsequently, an incubation step was implemented, which remarkably enhanced the metabolite signals and the number of metabolite peaks detected by mass spectrometry in both positive and negative modes. With respect to the reconstitution step, 0.1% formic acid was designated as the reconstitution solvent over 6.5 mM ammonium bicarbonate, based on the comparable number of metabolite peaks detected in both solvents and the higher signal detected in the former. By adopting this finalized protocol, we were able to retrieve 13 of the 19 targeted metabolites from human plasma. We have successfully devised a simple yet effective workflow for the targeted plasma metabolites relevant to human heart failure.
This workflow will be employed in tandem with a high-throughput liquid chromatography mass spectrometry platform to validate and characterize these potential metabolic biomarkers for diagnostic and therapeutic development for heart failure patients.
On Undecidability Aspects of Resilient Computations and Implications to Exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S
2014-01-01
Future Exascale computing systems with a large number of processors, memory elements and interconnection links, are expected to experience multiple, complex faults, which affect both applications and operating-runtime systems. A variety of algorithms, frameworks and tools are being proposed to realize and/or verify the resilience properties of computations that guarantee correct results on failure-prone computing systems. We analytically show that certain resilient computation problems in presence of general classes of faults are undecidable, that is, no algorithms exist for solving them. We first show that the membership verification in a generic set of resilient computations is undecidable. We describe classes of faults that can create infinite loops or non-halting computations, whose detection in general is undecidable. We then show certain resilient computation problems to be undecidable by using reductions from the loop detection and halting problems under two formulations, namely, an abstract programming language and Turing machines, respectively. These two reductions highlight different failure effects: the former represents program and data corruption, and the latter illustrates incorrect program execution. These results call for broad-based, well-characterized resilience approaches that complement purely computational solutions using methods such as hardware monitors, co-designs, and system- and application-specific diagnosis codes.
Chen, Yue; Gao, Qin; Song, Fei; Li, Zhizhong; Wang, Yufan
2017-08-01
In the main control rooms of nuclear power plants, operators frequently have to switch between procedure displays and system information displays. In this study, we proposed an operation-unit-based integrated design, which combines the two displays to facilitate the synthesis of information. We grouped actions that complete a single goal into operation units and showed these operation units on the displays of system states. In addition, we used different levels of visual salience to highlight the current unit and provided a list of execution history records. A laboratory experiment, with 42 students performing a simulated procedure to deal with an unexpectedly high pressuriser level, was conducted to compare this design against an action-based integrated design and the existing separated-displays design. The results indicate that our operation-unit-based integrated design yields the best performance in terms of time and completion rate and helps more participants to detect unexpected system failures. Practitioner Summary: In current nuclear control rooms, operators frequently have to switch between procedure and system information displays. We developed an integrated design that incorporates procedure information into system displays. A laboratory study showed that the proposed design significantly improved participants' performance and increased the probability of detecting unexpected system failures.
A Long-Term Study of the Microbial Community Structure in a ...
Many US water treatment facilities use chloramination to limit regulated disinfectant by-product formation. However, chloramination has been shown to promote nitrifying bacteria, and 30 to 63% of water utilities using secondary chloramine disinfection experience nitrification episodes. In this study, we examined the bacterial population in a simulated chloraminated drinking water distribution system (DWDS). After six months of continuous operation, coupons were incubated in CDC reactors receiving water from the simulated DWDS to study biofilm development. The DWDS was then subjected to episodes of nitrification, followed by a 'chlorine burn' by switching disinfectant from chloramine to chlorine, a common nitrification control strategy. The study was organized into five distinct operational schemes: (1) PRE-MODIFIED; system stabilization, (2) STANDARD I; stable chloramine residual, (3) FAILURE; complete nitrification and minimal chloramine residual, (4) RESTORE; chlorine burn, and (5) STANDARD II; stable chloramine residual. Bulk water and biofilm samples were collected and analyzed for water quality parameters and microbial composition. No change in microbial biomass (ATP) in bulk water and biofilm samples was detected during the STANDARD I scheme, while an increase in biofilms was detected after 80 days (FAILURE, i.e. nitrification), followed by a decrease after a chlorine burn with a final increase to previous values (STANDARD I) during the STANDARD I
The evaluation of the OSGLR algorithm for restructurable controls
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.
1986-01-01
The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques, and the OSGLR algorithm in particular, is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined; incorporating age-weighting into the algorithm was the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions, including maneuvers, nonzero flap deflections, different turbulence levels and steady winds, were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope, by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection, was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.
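The generalized likelihood ratio family of tests, which OSGLR extends with an orthogonal series representation of failure signatures, can be illustrated in its simplest scalar form: hypothesize a failure onset time, estimate the induced residual bias by maximum likelihood, and compare the resulting log-likelihood ratio statistic to a threshold. The sketch below shows that basic GLR bias test; the constant-bias signature in white residuals, the threshold value, and the test data are assumptions of this sketch, not the paper's algorithm.

```python
import numpy as np

def glr_bias_test(residuals, sigma, threshold):
    """Scalar GLR test for the onset of a constant bias in zero-mean
    white residuals with known standard deviation `sigma`.

    For each hypothesized onset k, the bias MLE is the post-onset
    mean, and twice the log-likelihood ratio reduces to
    n_k * mean^2 / sigma^2.  Returns (detected, best_onset)."""
    r = np.asarray(residuals, dtype=float)
    best_k, best_stat = None, 0.0
    for k in range(len(r) - 1):
        tail = r[k:]
        stat = len(tail) * tail.mean() ** 2 / sigma ** 2
        if stat > best_stat:
            best_stat, best_k = stat, k
    return best_stat > threshold, best_k
```

Age-weighting of the kind the paper found effective would correspond to discounting old residuals in the likelihood so that slow modeling-error drifts do not accumulate into false alarms; it is omitted here for brevity.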
Vibration detection of component health and operability
NASA Technical Reports Server (NTRS)
Baird, B. C.
1975-01-01
In order to prevent catastrophic failure and eliminate unnecessary periodic maintenance in the shuttle orbiter program environmental control system components, some means of detecting incipient failure in these components is required. The utilization of vibrational/acoustic phenomena was investigated as one of the principal physical parameters on which to base the design of this instrumentation. Baseline vibration/acoustic data were collected from three aircraft-type fans and two aircraft-type pumps over a frequency range from a few hertz to greater than 3000 kHz. The baseline data included spectrum analysis of the baseband vibration signal, spectrum analysis of the detected high-frequency bandpass acoustic signal, and the amplitude distribution of the high-frequency bandpass acoustic signal. A total of eight bearing defects and two unbalancings were introduced into the five test items. All defects were detected by at least one of a set of vibration/acoustic parameters with a margin of at least 2:1 over the worst-case baseline. The design of a portable instrument using this set of vibration/acoustic parameters for detecting incipient failures in environmental control system components is described.
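The core comparison described above (band-limited signal energy against a worst-case baseline with a 2:1 margin) can be sketched with a basic FFT. This is a simplified illustration, not the instrument's actual signal chain; the band limits and margin are parameters the reader would choose per defect signature.

```python
import numpy as np

def band_energy(signal, fs, f_lo, f_hi):
    """Energy in a frequency band of the one-sided amplitude spectrum."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.sum(spec[mask] ** 2))

def exceeds_margin(test_sig, baseline_sig, fs, band, margin=2.0):
    """Flag incipient failure when the band energy exceeds the
    worst-case baseline energy by the given margin (2:1 above)."""
    return band_energy(test_sig, fs, *band) >= margin * band_energy(baseline_sig, fs, *band)
```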
NASA Astrophysics Data System (ADS)
Karpenko, S. S.; Zybin, E. Yu; Kosyanchuk, V. V.
2018-02-01
In this paper we design a nonparametric method for failure detection and localization in the aircraft control system that uses measurements of the control signals and the aircraft states only. It doesn’t require a priori information about the aircraft model parameters, training or statistical calculations, and is based on algebraic solvability conditions for the aircraft model identification problem. This makes it possible to significantly increase the efficiency of the detection and localization problem solution by completely eliminating errors associated with aircraft model uncertainties.
NASA Technical Reports Server (NTRS)
Lo, Yunnhon; Johnson, Stephen B.; Breckenridge, Jonathan T.
2014-01-01
The theory of System Health Management (SHM) and of its operational subset Fault Management (FM) states that FM is implemented as a "meta" control loop, known as an FM Control Loop (FMCL). The FMCL detects that all or part of a system is now failed, or in the future will fail (that is, cannot be controlled within acceptable limits to achieve its objectives), and takes a control action (a response) to return the system to a controllable state. In terms of control theory, the effectiveness of each FMCL is estimated based on its ability to correctly estimate the system state, and on the speed of its response to the current or impending failure effects. This paper describes how this theory has been successfully applied on the National Aeronautics and Space Administration's (NASA) Space Launch System (SLS) Program to quantitatively estimate the effectiveness of proposed abort triggers so as to select the most effective suite to protect the astronauts from catastrophic failure of the SLS. The premise behind this process is to be able to quantitatively provide the value versus risk trade-off for any given abort trigger, allowing decision makers to make more informed decisions. All current and planned crewed launch vehicles have some form of vehicle health management system integrated with an emergency launch abort system to ensure crew safety. While the design can vary, the underlying principle is the same: detect imminent catastrophic vehicle failure, initiate launch abort, and extract the crew to safety. Abort triggers are the detection mechanisms that identify that a catastrophic launch vehicle failure is occurring or is imminent and cause the initiation of a notification to the crew vehicle that the escape system must be activated. While ensuring that the abort triggers provide this function, designers must also ensure that the abort triggers do not signal that a catastrophic failure is imminent when in fact the launch vehicle can successfully achieve orbit. 
That is, the abort triggers must have low false negative rates to be sure that real crew-threatening failures are detected, and also low false positive rates to ensure that the crew does not abort from non-crew-threatening launch vehicle behaviors. The analysis process described in this paper is a compilation of over six years of lessons learned and refinements from experiences developing abort triggers for NASA's Constellation Program (Ares I Project) and the SLS Program, as well as the simultaneous development of SHM/FM theory. The paper will describe the abort analysis concepts and process, developed in conjunction with SLS Safety and Mission Assurance (S&MA) to define a common set of mission phase, failure scenario, and Loss of Mission Environment (LOME) combinations upon which the SLS Loss of Mission (LOM) Probabilistic Risk Assessment (PRA) models are built. This abort analysis also requires strong coordination with the Multi-Purpose Crew Vehicle (MPCV) and SLS Structures and Environments (STE) to formulate a series of abortability tables that encapsulate explosion dynamics over the ascent mission phase. The design and assessment of abort conditions and triggers to estimate their Loss of Crew (LOC) benefits also requires in-depth integration with other groups, including Avionics, Guidance, Navigation and Control (GN&C), the Crew Office, Mission Operations, and Ground Systems. The outputs of this analysis are a critical input to SLS S&MA's LOC PRA models. The process described here may well be the first full quantitative application of SHM/FM theory to the selection of a sensor suite for any aerospace system.
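The false-positive/false-negative trade-off described above reduces to simple counting once trial outcomes are tabulated. A minimal sketch under an assumed data layout (pairs of "trigger fired" and "failure was real" flags; this is an illustration, not the SLS analysis itself):

```python
def trigger_rates(events):
    """events: iterable of (triggered, catastrophic) boolean pairs,
    one per trial. Returns (false_positive_rate, false_negative_rate),
    the two quantities the abort-trigger trade-off balances."""
    fp = sum(1 for t, c in events if t and not c)        # needless abort
    fn = sum(1 for t, c in events if not t and c)        # missed failure
    benign = sum(1 for _, c in events if not c)
    failures = sum(1 for _, c in events if c)
    fpr = fp / benign if benign else 0.0
    fnr = fn / failures if failures else 0.0
    return fpr, fnr
```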
Patel, R S; Tarrant, C; Bonas, S; Shaw, R L
2015-05-12
Failing a high-stakes assessment at medical school is a major event for those who go through the experience. Students who fail at medical school may be more likely to struggle in professional practice, therefore helping individuals overcome problems and respond appropriately is important. There is little understanding about what factors influence how individuals experience failure or make sense of the failing experience in remediation. The aim of this study was to investigate the complexity surrounding the failure experience from the student's perspective using interpretative phenomenological analysis (IPA). The accounts of three medical students who had failed final re-sit exams were subjected to in-depth analysis using IPA methodology. IPA was used to analyse each transcript case-by-case, allowing the researcher to make sense of the participant's subjective world. The analysis process allowed the complexity surrounding the failure to be highlighted, alongside a narrative describing how students made sense of the experience. The circumstances surrounding students as they approached assessment and experienced failure at finals were a complex interaction between academic problems, personal problems (specifically finance and relationships), strained relationships with friends, family or faculty, and various mental health problems. Each student experienced multi-dimensional issues, each with their own individual combination of problems, but experienced remediation as a one-dimensional intervention with a focus only on improving performance in written exams. What these students needed included help with clinical skills, plus social and emotional support. Fear of termination of their course was a barrier to open communication with staff. These students' experience of failure was complex. The experience of remediation is influenced by the way in which students make sense of failing.
Generic remediation programmes may fail to meet the needs of students for whom personal, social and mental health issues are a part of the picture.
NASA Technical Reports Server (NTRS)
Waas, A.; Babcock, C., Jr.
1986-01-01
A series of experiments was carried out to determine the mechanism of failure in compressively loaded laminated plates with a circular cutout. Real-time holographic interferometry and photomicrography were used to observe the progression of failure. These observations, together with post-experiment plate sectioning and deplying for interior damage observation, provide useful information for modelling the failure process. It is revealed that failure initiates as a localised instability in the 0° layers at the hole surface. With increasing load, extensive delamination cracking is observed. The progression of failure is by growth of these delaminations induced by delamination buckling. Upon reaching a critical state, catastrophic failure of the plate is observed. The levels of applied load and the rate at which these events occur depend on the plate stacking sequence.
Kanaji, Shingo; Ohyama, Masato; Yasuda, Takashi; Sendo, Hiroyoshi; Suzuki, Satoshi; Kawasaki, Kentaro; Tanaka, Kenichi; Fujino, Yasuhiro; Tominaga, Masahiro; Kakeji, Yoshihiro
2016-07-01
Anastomotic failures that cannot be detected during surgery often lead to postoperative leakage. There have been no detailed reports on the intraoperative leak test for esophagojejunal anastomosis. Our purpose was to investigate the utility of routine intraoperative leak testing to prevent postoperative anastomotic leakage after performing esophagojejunostomy. We prospectively performed routine air leak tests and reviewed the records of 185 consecutive patients with gastric cancer who underwent open total gastrectomy followed by esophagojejunostomy. A positive leak test was found for six patients (3.2 %). These patients with positive leak tests were subsequently treated with additional suturing, and they developed no postoperative anastomotic leakage. However, anastomotic leakage occurred in nine patients (4.9 %) with negative leak tests. A multivariate analysis demonstrated that a patient age >75 years and the surgeon's experience <30 cases were risk factors for anastomotic leakage. Intraoperative leak testing can detect some physical dehiscence, and additional suturing may prevent anastomotic leakage. However, it cannot prevent all anastomotic leakage caused by other factors, such as the surgeons' experience and patients' age.
Study on electromagnetic radiation and mechanical characteristics of coal during an SHPB test
NASA Astrophysics Data System (ADS)
Chengwu, Li; Qifei, Wang; Pingyang, Lyu
2016-06-01
Dynamic loads provided by a split Hopkinson pressure bar are applied in an impact failure experiment on coal with impact velocities of 4.174-17.652 m s-1. The mechanical property characteristics of coal and the electromagnetic radiation signal can be detected and measured during the experiment. The variation of coal stress, strain, incident energy, dissipated energy and other mechanical parameters is analyzed using one-dimensional stress wave theory. The results suggest that with an increase of the impact velocity, the mechanical parameters and electromagnetic radiation increase significantly, and the dissipated energy of the coal sample shows a highly discrete growing trend during the impact failure process. Combined with the received energy of the electromagnetic radiation signal, the relationship between these mechanical parameters and electromagnetic radiation during the failure process of a coal burst can be analyzed with the grey correlation model. The results show that the descending order of the grey correlation degree between the mechanical characteristics and electromagnetic radiation energy is: impact velocity, maximum stress, average stress, incident energy, average strain, maximum strain, average strain rate and dissipated energy. The correlation degrees of impact velocity and incident energy are relatively large, so the main factor affecting the electromagnetic radiation energy of coal is the energy magnitude. The relationship between extreme stress and the radiation energy trend is also close, so the stress state of coal has a greater impact on electromagnetic radiation than strain and destruction; these findings can deepen research on electromagnetic monitoring techniques for coal-rock dynamic disasters.
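The grey correlation (grey relational) degree used to rank those mechanical parameters can be sketched as follows. This is a generic implementation of Deng's grey relational analysis with the conventional distinguishing coefficient of 0.5, offered as an illustration of the ranking computation, not a reproduction of the paper's data.

```python
import numpy as np

def grey_relational_degree(reference, comparison, rho=0.5):
    """Grey relational degree between a reference sequence (e.g.
    electromagnetic radiation energy) and one comparison sequence
    (e.g. impact velocity). Sequences are min-max normalised to
    [0, 1] first; a higher degree means a closer trend."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min())
    x0, xi = norm(reference), norm(comparison)
    delta = np.abs(x0 - xi)
    d_min, d_max = delta.min(), delta.max()
    # Deng's grey relational coefficient at each point
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)
    return float(coeff.mean())
```

Ranking the candidate parameters by this degree against the radiation energy reproduces the kind of descending order reported in the abstract.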
40 CFR 63.164 - Standards: Compressors.
Code of Federal Regulations, 2013 CFR
2013-07-01
... with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be observed daily or shall be equipped with an... indicates failure of the seal system, the barrier fluid system, or both. (f) If the sensor indicates failure...
40 CFR 63.164 - Standards: Compressors.
Code of Federal Regulations, 2012 CFR
2012-07-01
... with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be observed daily or shall be equipped with an... indicates failure of the seal system, the barrier fluid system, or both. (f) If the sensor indicates failure...
NASA Astrophysics Data System (ADS)
Oommen, T.; Baise, L. G.; Gens, R.; Prakash, A.; Gupta, R. P.
2008-12-01
Seismic liquefaction is the loss of strength of soil due to shaking that leads to various ground failures such as lateral spreading, settlements, tilting, and sand boils. It is important to document these failures after earthquakes to advance our understanding of when and where liquefaction occurs. The current approach of mapping these failures by field investigation teams suffers from the inaccessibility of some sites immediately after the event, the short life of some of these failures, difficulties in mapping the aerial extent of the failure, incomplete coverage, etc. After the 2001 Bhuj earthquake (India), researchers using the Indian remote sensing satellite illustrated that satellite remote sensing can provide a synoptic view of the terrain and offer unbiased estimates of liquefaction failures. However, a multisensor (data from different sensors onboard the same or different satellites) and multispectral (data collected in different spectral regions) approach is needed to efficiently document liquefaction incidences and/or the potential of their occurrence, because a particular satellite may be located inappropriately to image an area shortly after an earthquake. The use of SAR satellite imagery ensures the acquisition of data in all weather conditions, day and night, as well as information complementary to the optical data sets. In this study, we analyze the applicability of various satellites (Landsat, RADARSAT, Terra-MISR, IRS-1C, IRS-1D) in mapping liquefaction failures after the 2001 Bhuj earthquake using Support Vector Data Description (SVDD). SVDD is a kernel-based nonparametric outlier detection algorithm inspired by support vector machines (SVMs), a new generation of learning algorithms based on statistical learning theory. We present the applicability of SVDD for unsupervised change-detection studies (i.e. to identify post-earthquake liquefaction failures).
The liquefaction occurrences identified from the different satellites using SVDD have been compared to the ground truth in terms of documented liquefaction failures by other researchers. We present the applicability and appropriateness of the various satellites and spectral regions for documenting liquefaction related failures. Results illustrate that the SVDD is a promising unsupervised change-detection algorithm, which can help in automating the documentation of earthquake induced liquefaction failures.
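The intuition behind SVDD-style outlier detection can be illustrated with a drastically simplified stand-in: enclose the "no-change" pixels in a hypersphere and flag anything outside it. Real SVDD solves a kernelised minimum-enclosing-sphere optimisation; the class below (a hypothetical name, centroid plus radius quantile) only conveys the decision geometry.

```python
import numpy as np

class SimpleSphereDetector:
    """Toy stand-in for SVDD: fit a hypersphere (centroid plus a
    radius quantile) around training samples assumed 'normal';
    anything outside the sphere is flagged as a change/outlier
    candidate (e.g. a liquefaction pixel)."""
    def fit(self, X, quantile=0.95):
        X = np.asarray(X, dtype=float)
        self.center = X.mean(axis=0)
        dists = np.linalg.norm(X - self.center, axis=1)
        self.radius = np.quantile(dists, quantile)
        return self

    def predict(self, X):
        d = np.linalg.norm(np.asarray(X, dtype=float) - self.center, axis=1)
        return (d > self.radius).astype(int)  # 1 = outlier/change
```

The quantile plays the role of SVDD's slack trade-off: it controls how many training samples are allowed to fall outside the boundary.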
Patel, Teresa; Fisher, Stanley P.
2016-01-01
Objective: This study aimed to utilize failure modes and effects analysis (FMEA) to transform clinical insights into a risk mitigation plan for intrathecal (IT) drug delivery in pain management. Methods: The FMEA methodology, which has been used for quality improvement, was adapted to assess risks (i.e., failure modes) associated with IT therapy. Ten experienced pain physicians scored 37 failure modes in the following categories: patient selection for therapy initiation (efficacy and safety concerns), patient safety during IT therapy, and product selection for IT therapy. Participants assigned severity, probability, and detection scores for each failure mode, from which a risk priority number (RPN) was calculated. Failure modes with the highest RPNs (i.e., most problematic) were discussed, and strategies were proposed to mitigate risks. Results: Strategic discussions focused on 17 failure modes with the most severe outcomes, the highest probabilities of occurrence, and the most challenging detection. The topic of the highest-ranked failure mode (RPN = 144) was manufactured monotherapy versus compounded combination products. Addressing failure modes associated with appropriate patient and product selection was predicted to be clinically important for the success of IT therapy. Conclusions: The methodology of FMEA offers a systematic approach to prioritizing risks in a complex environment such as IT therapy. Unmet needs and information gaps are highlighted through the process. Risk mitigation and strategic planning to prevent and manage critical failure modes can contribute to therapeutic success. PMID:27477689
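The RPN used to rank failure modes is conventionally the product of the three 1-10 scores the participants assigned. A minimal sketch; note the (8, 6, 3) example below is just one combination yielding 144, since the abstract does not give the actual factor scores.

```python
def risk_priority_number(severity, occurrence, detection):
    """RPN as used in FMEA: severity, occurrence (probability) and
    detection difficulty are each scored 1-10, and their product
    prioritises failure modes (higher = more critical)."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores are conventionally 1-10")
    return severity * occurrence * detection
```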
Risk Management in ETS-8 Project
NASA Astrophysics Data System (ADS)
Homma, M.
2002-01-01
Engineering Test Satellite 8 (ETS-8) is Japan's largest geosynchronous satellite, with a mass of 3 tons, and its mission is mobile communications and navigation experiments. It is now in the flight model manufacturing phase. This paper introduces the risk management undertaken in this project as a reference. The mission success criteria of ETS-8 are described first; all risk management activities are planned with these criteria in mind. ETS-8 incorporates many new technologies, such as the large deployable antenna (19 m x 17 m), a 64-bit MPU, a 100 V solar paddle and so on, and attention must be paid to controlling these risks through each phase of development. In the system design of ETS-8, almost all components have redundancy, and there are back-up functions to avoid fatal failure. Which back-up functions should be adopted is one of the hot issues in this project, and the consideration process is described as an actual case. In addition to conventional risk management procedures (FMEA, identification of critical items and so on), we conducted a validation experiment in space using a scale model launched on Ariane 5. The decision to conduct this kind of experiment was taken after weighing risk against cost, because it consumes considerable project resources. The effect of this experiment is also presented. Failure detection, isolation and reconfiguration in the flight software become more important as the satellite system grows large and complicated. We performed independent verification and validation of the software, and some remarks are noted with respect to its effectiveness.
Nakar, C; Manco-Johnson, M J; Lail, A; Donfield, S; Maahs, J; Chong, Y; Blades, T; Shapiro, A
2015-05-01
Current guidelines recommend delaying the start of immune tolerance induction (ITI) until the inhibitor titre is <10 Bethesda units (BU) to improve success. This study was conducted to evaluate ITI outcome relative to time to start ITI from inhibitor detection irrespective of inhibitor titre. Data were retrospectively collected from two U.S. haemophilia treatment centres (HTCs) on subjects with severe/moderate factor VIII (FVIII) deficiency with inhibitors who underwent ITI. Outcomes were defined pragmatically: success--negative inhibitor titre and ability to use FVIII concentrate for treatment/bleed prevention; partial success--inhibitor titre 1 to <5 BU with ability to use FVIII concentrate for treatment of bleeding; failure--ITI ongoing >3 years without achieving success/partial success, or ITI discontinuation. Fifty-eight subjects were included; 32 of 39 (82%) with high-responding inhibitor (HRI) achieved success and 7 failed. HRI subjects were subdivided based on ITI start time: 23/39 subjects started within 1 month of detection and 22/23 (96%) achieved success. Of these 23, 13 started ITI with an inhibitor titre ≥10 BU; all were successes. Eleven of 39 HRI subjects had an interval >6 months until ITI start; 7 (64%) achieved success. Time from inhibitor detection to ITI start may play a critical role in outcome. A titre ≥10 BU at ITI start did not influence outcome in subjects when ITI was initiated within 1 month of detection. Prompt ITI should be considered a viable therapeutic option in newly identified patients with inhibitors regardless of current inhibitor titre. © 2014 John Wiley & Sons Ltd.
Damage of composite structures: Detection technique, dynamic response and residual strength
NASA Astrophysics Data System (ADS)
Lestari, Wahyu
2001-10-01
Reliable and accurate health monitoring techniques can prevent catastrophic failures of structures. Conventional damage detection methods are based on visual or localized experimental methods and very often require prior information concerning the vicinity of the damage or defect. The structure must also be readily accessible for inspections, and the techniques are labor intensive. In comparison to these methods, health-monitoring techniques based on the structural dynamic response offer unique information on the failure of structures. However, systematic relations between the experimental data and the defect are not available, and frequently the number of vibration modes needed for an accurate identification of defects is much higher than the number of modes that can be readily identified in an experiment. This motivated us to develop an experimental-data-based detection method with systematic relationships between the experimentally identified information and the analytical or mathematical model representing the defective structure. The developed technique uses changes in vibrational curvature modes and natural frequencies. To avoid misinterpretation of the identified information, we also need to understand the effects of defects on the structural dynamic response prior to developing health-monitoring techniques. In this thesis work we focus on two types of defects in composite structures, namely delamination and edge-notch-like defects. Effects of nonlinearity due to the presence of a defect and due to axial stretching are studied for beams with delamination. Once defects are detected in a structure, the next concern is determining their effects on the strength of the structure and its residual stiffness under dynamic loading. In this thesis, the energy release rate due to dynamic loading in a delaminated structure is studied, which provides a foundation toward determining the residual strength of the structure.
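The basic idea behind curvature-mode damage detection can be sketched numerically: take the second spatial derivative of a measured mode shape by central differences and look for localized changes between healthy and damaged states. This is a generic illustration of the method family, not the thesis's specific formulation.

```python
import numpy as np

def mode_curvature(mode_shape, dx):
    """Curvature (second spatial derivative) of a vibration mode
    shape via central differences; interior points only."""
    w = np.asarray(mode_shape, dtype=float)
    return (w[2:] - 2.0 * w[1:-1] + w[:-2]) / dx**2

def curvature_damage_index(healthy_mode, damaged_mode, dx):
    """Absolute curvature change along the span; a localized peak
    suggests the damage location, since stiffness loss perturbs
    curvature far more than displacement."""
    return np.abs(mode_curvature(damaged_mode, dx) - mode_curvature(healthy_mode, dx))
```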
Confident failures: Lapses of working memory reveal a metacognitive blind spot.
Adam, Kirsten C S; Vogel, Edward K
2017-07-01
Working memory performance fluctuates dramatically from trial to trial. On many trials, performance is no better than chance. Here, we assessed participants' awareness of working memory failures. We used a whole-report visual working memory task to quantify both trial-by-trial performance and trial-by-trial subjective ratings of inattention to the task. In Experiment 1 (N = 41), participants were probed for task-unrelated thoughts immediately following 20% of trials. In Experiment 2 (N = 30), participants gave a rating of their attentional state following 25% of trials. Finally, in Experiments 3a (N = 44) and 3b (N = 34), participants reported confidence of every response using a simple mouse-click judgment. Attention-state ratings and off-task thoughts predicted the number of items correctly identified on each trial, replicating previous findings that subjective measures of attention state predict working memory performance. However, participants correctly identified failures on only around 28% of failure trials. Across experiments, participants' metacognitive judgments reliably predicted variation in working memory performance but consistently and severely underestimated the extent of failures. Further, individual differences in metacognitive accuracy correlated with overall working memory performance, suggesting that metacognitive monitoring may be key to working memory success.
Simulation-driven machine learning: Bearing fault classification
NASA Astrophysics Data System (ADS)
Sobie, Cameron; Freitas, Carina; Nicolai, Mike
2018-01-01
Increasing the accuracy of mechanical fault detection has the potential to improve system safety and economic performance by minimizing scheduled maintenance and the probability of unexpected system failure. Advances in computational performance have enabled the application of machine learning algorithms across numerous applications, including condition monitoring and failure detection. Past applications of machine learning to physical failure have relied explicitly on historical data, which limits the feasibility of this approach to in-service components with extended service histories. Furthermore, recorded failure data is often only valid for the specific circumstances and components for which it was collected. This work directly addresses these challenges for roller bearings with race faults by generating training data using information gained from high-resolution simulations of roller bearing dynamics, which is used to train machine learning algorithms that are then validated against four experimental datasets. Several different machine learning methodologies are compared, starting from well-established statistical feature-based methods and extending to convolutional neural networks, and a novel application of dynamic time warping (DTW) to bearing fault classification is proposed as a robust, parameter-free method for race fault detection.
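DTW-based classification in its simplest form compares a measured signal against labeled fault templates and picks the nearest one under the warping distance. The sketch below uses the classic O(nm) dynamic programme; the template dictionary is illustrative, not the paper's simulated training data.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences,
    classic O(len(a) * len(b)) dynamic programme with absolute
    difference as the local cost."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def classify_by_dtw(signal, templates):
    """Nearest-template fault classification: the label whose
    template has the smallest DTW distance to the signal."""
    return min(templates, key=lambda lbl: dtw_distance(signal, templates[lbl]))
```

Because the warping absorbs timing variation, this works without the tuned features or hyperparameters the statistical methods require, which is the "parameter-free" appeal noted in the abstract.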
Microfluidic stretchable RF electronics.
Cheng, Shi; Wu, Zhigang
2010-12-07
Stretchable electronics is a revolutionary technology that will potentially create a world of radically different electronic devices and systems and open up an entirely new spectrum of possibilities. This article proposes a microfluidic solution for stretchable radio frequency (RF) electronics, using hybrid integration of active circuits assembled on flex foils and liquid alloy passive structures embedded in elastic substrates, e.g. polydimethylsiloxane (PDMS). This concept was employed to implement a 900 MHz stretchable RF radiation sensor, consisting of a large-area elastic antenna and a cluster of conventional rigid components for RF power detection. The integrated radiation sensor, except for the power supply, was fully embedded in a thin elastomeric substrate. Good electrical performance of the standalone stretchable antenna as well as the RF power detection sub-module was verified by experiments. The sensor successfully detected RF radiation over a 5 m distance in the system demonstration. Experiments on two-dimensional (2D) stretching up to 15%, folding and twisting of the demonstrated sensor were also carried out. Although the integrated device was severely deformed, no failure in RF radiation sensing was observed in the tests. This technique illuminates a promising route to realizing stretchable and foldable large-area integrated RF electronics of great interest to a variety of applications such as wearable computing, health monitoring, medical diagnostics, and curvilinear electronics.
General test plan redundant sensor strapdown IMU evaluation program
NASA Technical Reports Server (NTRS)
Hartwell, T.; Irwin, H. A.; Miyatake, Y.; Wedekind, D. E.
1971-01-01
The general test plan for a redundant sensor strapdown inertial measuring unit evaluation program is presented. The inertial unit contains six gyros and three orthogonal accelerometers. The software incorporates failure detection and correction logic and a land vehicle navigation program. The principal objective of the test is a demonstration of the practicability, reliability, and performance of the inertial measuring unit with failure detection and correction in operational environments.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-16
... the package's failure. A failure of the package could expose the medical device to microbes, bacteria... research and development efforts, including, but not limited to, designs and experiments and the results of successful and unsuccessful designs and experiments; and (b) With respect to any intangible assets that are...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, T.Y.; Bentz, J.; Simpson, R.
1997-02-01
The objective of the Lower Head Failure (LHF) Experiment Program is to experimentally investigate and characterize the failure of the reactor vessel lower head due to thermal and pressure loads under severe accident conditions. The experiment is performed using 1/5-scale models of a typical PWR pressure vessel. Experiments are performed for various internal pressures and imposed heat flux distributions, with and without instrumentation guide tube penetrations. The experimental program is complemented by a modest modeling program based on the application of vessel creep rupture codes developed in the TMI Vessel Investigation Project. The first three experiments under the LHF program investigated the creep rupture of simulated reactor pressure vessels without penetrations. The heat flux distributions for the three experiments were uniform (LHF-1), center-peaked (LHF-2), and side-peaked (LHF-3), respectively. For all the experiments, appreciable vessel deformation was observed to initiate at vessel wall temperatures above 900 K, and the vessels typically failed at approximately 1000 K. The size of the failure was always observed to be smaller than the heated region. For experiments with non-uniform heat flux distributions, failure typically occurs in the region of peak temperature. A brief discussion of the effect of penetrations is also presented.
Reliable dual-redundant sensor failure detection and identification for the NASA F-8 DFBW aircraft
NASA Technical Reports Server (NTRS)
Deckert, J. C.; Desai, M. N.; Deyst, J. J., Jr.; Willsky, A. S.
1978-01-01
A technique was developed which provides reliable failure detection and identification (FDI) for a dual-redundant subset of the flight control sensors onboard the NASA F-8 digital fly-by-wire (DFBW) aircraft. The technique was successfully applied to simulated sensor failures on the real-time F-8 digital simulator and to sensor failures injected on telemetry data from a test flight of the F-8 DFBW aircraft. For failure identification the technique utilized the analytic redundancy which exists as functional and kinematic relationships among the various quantities being measured by the different control sensor types. The technique can be used not only in a dual-redundant sensor system, but also in a more highly redundant system after conventional voting techniques have reduced the number of unfailed sensors of a particular type to two. In addition, the technique can easily be extended to the case in which only one sensor of a particular type is available.
Failure detection and correction for turbofan engines
NASA Technical Reports Server (NTRS)
Corley, R. C.; Spang, H. A., III
1977-01-01
In this paper, a failure detection and correction strategy for turbofan engines is discussed. This strategy allows continued control of the engine in the event of a sensor failure. An extended Kalman filter is used to provide the best estimate of the state of the engine based on currently available sensor outputs. Should a sensor failure occur, the control is based on the best estimate rather than the sensor output. The extended Kalman filter consists essentially of two parts: a nonlinear model of the engine and update logic which causes the model to track the actual engine. Details on the model and update logic are presented. To allow implementation, approximations are made to the feedback gain matrix which result in a single feedback matrix suitable for use over the entire flight envelope. The effect of these approximations on stability and response is discussed. Results from a detailed nonlinear simulation indicate that good control can be maintained even under multiple failures.
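The estimate-substitution idea (control from the filter estimate when the sensor fails) can be illustrated with a toy scalar Kalman filter and an innovation gate. This is a deliberately simplified sketch assuming a random-walk state model; the real strategy uses a full extended Kalman filter around a nonlinear engine model.

```python
import numpy as np

def detect_and_substitute(measurements, q=1e-3, r=0.1, gate=4.0):
    """Scalar random-walk Kalman filter with an innovation gate:
    when the normalised innovation exceeds 'gate' standard
    deviations, the sample is treated as a failed-sensor reading
    and the model prediction is used in place of the sensor."""
    x, p = measurements[0], 1.0
    outputs, flags = [], []
    for z in measurements:
        p_pred = p + q                        # predict (random walk)
        s = p_pred + r                        # innovation variance
        nu = z - x                            # innovation
        if abs(nu) > gate * np.sqrt(s):       # failed sample: coast on model
            flags.append(True)
            outputs.append(x)
            p = p_pred
        else:                                 # normal measurement update
            k = p_pred / s
            x = x + k * nu
            p = (1 - k) * p_pred
            flags.append(False)
            outputs.append(x)
    return np.array(outputs), flags
```

In the engine context the "coast on model" branch corresponds to feeding the control law the filter's state estimate instead of the failed sensor's output.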
Critical fault patterns determination in fault-tolerant computer systems
NASA Technical Reports Server (NTRS)
Mccluskey, E. J.; Losq, J.
1978-01-01
The method proposed tries to enumerate all the critical fault-patterns (successive occurrences of failures) without analyzing every single possible fault. The conditions for the system to be operating in a given mode can be expressed in terms of the static states. Thus, one can find all the system states that correspond to a given critical mode of operation. The next step consists of analyzing the fault-detection mechanisms, the diagnosis algorithm, and the process of switch control. From these, one can find all the possible system configurations that can result from a failure occurrence. Thus, one can list all the characteristics, with respect to detection, diagnosis, and switch control, that failures must have to constitute critical fault-patterns. Such an enumeration of the critical fault-patterns can be used directly to evaluate the overall system tolerance to failures. Present research is focused on how to make efficient use of these system-level characteristics to enumerate all the failures that satisfy them.
Nirschl, Jeffrey J.; Janowczyk, Andrew; Peyster, Eliot G.; Frank, Renee; Margulies, Kenneth B.; Feldman, Michael D.; Madabhushi, Anant
2018-01-01
Over 26 million people worldwide suffer from heart failure annually. When the cause of heart failure cannot be identified, endomyocardial biopsy (EMB) represents the gold standard for the evaluation of disease. However, manual EMB interpretation has high inter-rater variability. Deep convolutional neural networks (CNNs) have been successfully applied to detect cancer, diabetic retinopathy, and dermatologic lesions from images. In this study, we develop a CNN classifier to detect clinical heart failure from H&E-stained whole-slide images from a total of 209 patients; 104 patients were used for training and the remaining 105 patients for independent testing. The CNN was able to identify patients with heart failure or severe pathology with 99% sensitivity and 94% specificity on the test set, outperforming conventional feature-engineering approaches. Importantly, the CNN outperformed two expert pathologists by nearly 20%. Our results suggest that deep learning analytics of EMB can be used to predict cardiac outcome. PMID:29614076
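The reported test-set performance reduces to simple confusion-matrix ratios. The counts below are invented for illustration and merely reproduce the quoted 99%/94% figures, not the study's actual tallies.

```python
# Sensitivity and specificity from a confusion matrix. The counts are
# hypothetical, chosen only to reproduce the quoted 99% / 94% figures.

def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # fraction of heart-failure cases detected
    specificity = tn / (tn + fp)  # fraction of non-failure cases cleared
    return sensitivity, specificity

print(sens_spec(99, 1, 94, 6))  # (0.99, 0.94)
```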
Haile, Demewoz; Takele, Abulie; Gashaw, Ketema; Demelash, Habtamu; Nigatu, Dabere
2016-01-01
Treatment failure is defined as progression of disease after initiation of ART or when the anti-HIV medications cannot control the infection. One of the major concerns over the rapid scaling up of ART is the emergence and transmission of HIV drug-resistant strains at the population level due to treatment failure. This could lead to the failure of basic ART programs. Thus this study aimed to investigate the predictors of treatment failure among adult ART clients in Bale Zone Hospitals, Southeast Ethiopia. A retrospective cohort study was employed in four hospitals of Bale zone, named Goba, Robe, Ginir and Delomena. A total of 4,809 adult ART clients were included in the analysis from these four hospitals. Adherence was measured by the pill count method. The Kaplan-Meier (KM) curve was used to describe the survival time of ART patients without treatment failure. Bivariate and multivariable Cox proportional hazards regression models were used to identify factors associated with treatment failure. The incidence rate of treatment failure was found to be 9.38 (95% CI 7.79-11.30) per 1000 person-years. Male ART clients were more likely to experience treatment failure as compared to females [AHR = 4.49; 95% CI: (2.61-7.73)]. Similarly, a lower CD4 count (<100 cells/mm3) at initiation of ART was found to be significantly associated with higher odds of treatment failure [AHR = 3.79; 95% CI: (2.46-5.84)]. Bedridden [AHR = 5.02; 95% CI: (1.98-12.73)] and ambulatory [AHR = 2.12; 95% CI: (1.08-4.07)] patients were more likely to experience treatment failure as compared to patients with working functional status. TB co-infected clients also had higher odds of experiencing treatment failure [AHR = 3.06; 95% CI: (1.72-5.44)]. Those patients who developed TB after ART initiation had higher odds of experiencing treatment failure as compared to their counterparts [AHR = 4.35; 95% CI: (1.99-9.54)].
Having another opportunistic infection at ART initiation was also associated with higher odds of experiencing treatment failure [AHR = 7.0; 95% CI: (3.19-15.37)]. Similarly, fair [AHR = 4.99; 95% CI: (1.90-13.13)] and poor drug adherence [AHR = 2.56; 95% CI: (1.12-5.86)] were significantly associated with higher odds of treatment failure as compared to clients with good adherence. The rate of treatment failure in Bale zone hospitals needs attention. Prevention and control of TB and other opportunistic infections, promotion of ART initiation at a higher CD4 level and better functional status, and improving drug adherence are important interventions to reduce treatment failure among ART clients in Southeastern Ethiopia.
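The Kaplan-Meier estimator used to describe survival without treatment failure can be sketched in a few lines. The data in the test are toy values, not the study's cohort.

```python
# Minimal Kaplan-Meier estimator for time to treatment failure.
# events: list of (time, failed) pairs; failed=False means censored.

def kaplan_meier(events):
    """Return [(time, survival_probability)] at each failure time."""
    s = 1.0
    curve = []
    for t in sorted({t for t, failed in events if failed}):
        at_risk = sum(1 for ti, _ in events if ti >= t)
        failures = sum(1 for ti, failed in events if ti == t and failed)
        s *= 1.0 - failures / at_risk  # survival drops at each failure time
        curve.append((t, s))
    return curve
```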
NASA Technical Reports Server (NTRS)
Culpepper, William X.; ONeill, Pat; Nicholson, Leonard L.
2000-01-01
An internuclear cascade and evaporation model has been adapted to estimate the LET spectrum generated during testing with 200 MeV protons. The model-generated heavy ion LET spectrum is compared to the heavy ion LET spectrum seen on orbit. This comparison is the basis for predicting single event failure rates from heavy ions using results from a single proton test. Of equal importance, this spectra comparison also establishes an estimate of the risk of encountering a failure mode on orbit that was not detected during proton testing. Verification of the general results of the model is presented based on experiments, individual part test results, and flight data. Acceptance of this model and its estimate of remaining risk opens the hardware verification philosophy to the consideration of radiation testing with high energy protons at the board and box level instead of the more standard method of individual part testing with low energy heavy ions.
Use of luminescent gunshot residues markers in forensic context.
Weber, I T; Melo, A J G; Lucena, M A M; Consoli, E F; Rodrigues, M O; de Sá, G F; Maldaner, A O; Talhavini, M; Alves, S
2014-11-01
Chemical evaluation of gunshot residues (GSR) produced by non-toxic lead-free ammunition (NTA) has been a challenge to forensic analyses. Our group developed some luminescent markers specific to the detection of GSR. Here, we evaluated the performance of selected markers in experiments that mimic forensic context and/or routines in which luminescent characteristics would be very useful. We evaluated the influence of markers' addition on the bullet's speed, the rate of shot failure (i.e., when the cartridge case is not fully ejected and/or a new ammunition is not automatically replaced in the gun chamber) as a function of marker percentage, the possibility of collecting luminescent gunshot residue (LGSR) in unconventional locations (e.g. the shooters' nostrils), the LGSR lifetime after hand washing, the transfer of LGSR to objects handled by the shooter, and the dispersion of LGSR at the crime scene and on simulated victims. It was observed that high amounts of marker (10 wt%) cause high rates of failure on pistols, as well as a substantial decrease in bullet speed. However, the use of 2 wt% of marker minimizes these effects and allows LGSR detection, collection and analysis. Moreover, in all conditions tested, markers showed high performance and provided important information for forensic analyses. For instance, the LGSR particles were found on the floor, ranging from 0 to 9.4 m away from the shooter, on the door panel and seats after a car shooting experiment, and were found easily on a pig leg used to simulate a victim. When a selective tagging was done, it was possible to obtain positive or negative correlation between the victim and shooter. Additionally LGSR possesses a fairly long lifetime (9 h) and good resistance to hand washing (up to 16 washes). Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Naumann, R Wendel; Brown, Jubilee
2015-01-01
To evaluate adverse events associated with electromechanical morcellation as reported to the Manufacturer and User Facility Device Experience (MAUDE) database. Retrospective analysis of an established database (Canadian Task Force classification III). A search of the MAUDE database for terms associated with commercially available electromechanical morcellation devices was undertaken for events leading to injury or death between 2004 and 2014. Data, including the types of injury, need for conversion to open surgery, type of open surgery, and clinical outcomes, were extracted from the records. Over a 10-year period, 9 events associated with death and 215 events associated with patient injury or significant delay of the surgical procedure were recorded. These involved 137 device failures, 51 organ injuries, and the morcellation of 27 previously undiagnosed malignancies. Of the 9 deaths, 1 was associated with organ injury, and the other 8 were associated with morcellation of cancer. Of the 27 undiagnosed cancers, 5 were reported by the manufacturer, 8 were reported by the patient or family, 9 were reported by medical or news reports, 2 were reported by medical professionals, and 3 were due to litigation. Morcellation of an undiagnosed malignancy was first reported to the database in December 2013. The MAUDE database appears to detect perioperative events, such as device failures and organ injury at the time of surgery, but appears to be poor at detecting late events after surgery, such as the potential spread of cancer. Outcome registries are likely a more efficient means of tracking potential long-term adverse events associated with surgical devices. Copyright © 2015 AAGL. Published by Elsevier Inc. All rights reserved.
Ducharme, Francine M; Zemek, Roger; Chauhan, Bhupendrasinh F; Gravel, Jocelyn; Chalut, Dominic; Poonai, Naveen; Guertin, Marie-Claude; Quach, Caroline; Blondeau, Lucie; Laberge, Sophie
2016-12-01
The management of paediatric asthma exacerbations is based on trials in children of all ages. Recent studies from 2009 raised the possibility that preschoolers (younger than 6 years) with viral-induced wheezing and children exposed to tobacco smoke might be at an increased risk of treatment failure. The study objective was to identify factors associated with management failure in children presenting to the emergency department with moderate or severe asthma exacerbations. We undertook a prospective, multicentre cohort study of children aged 1-17 years presenting to five emergency departments with moderate or severe asthma (defined as a Pediatric Respiratory Assessment Measure [PRAM] of 4 to 12). Children received oral corticosteroids and severity-specific inhaled bronchodilator therapy. The primary outcome was emergency department management failure (hospital admission, prolonged emergency department therapy [≥8 h], or relapse within 72 h of discharge from the emergency department with admission to hospital or prolonged emergency department stay). Viral cause was ascertained by PCR on nasopharyngeal specimens and environmental tobacco smoke exposure by salivary cotinine concentration. This study is registered at ClinicalTrials.gov (NCT02013076). Between Feb 14, 2011, and Dec 20, 2013, we screened 1893 children and enrolled 1012 eligible children. Of those eligible children, 973 participants were included in the analysis. 165 (17%) of 965 children experienced management failure in the emergency department, which was significantly associated with viral detection (110 [19%] of 579 participants with virus detection vs 46 [13%] of 354 participants without viral detection, odds ratio [OR] 1·57; 95% CI 1·04-2·37), fever (24% vs 15%, 1·96; 1·32-2·92), baseline PRAM (OR 1·38 per 1-point increase; 1·22-1·56), oxygen saturation of less than 92% (50% vs 12%, 3·94; 1·97-7·89), and presence of symptoms between exacerbations (21% vs 16%, 1·73; 1·13-2·64). 
Age, salivary cotinine concentration, and oral corticosteroid dose were not significantly associated with management failure. Viral detection (67% vs 46%, p<0·0001) and fever (31% vs 16%, p<0·0001) occurred more frequently in preschoolers than in older children. Viral detection was also associated with reduced speed of recovery over the 10 days after discharge. In children presenting with moderate or severe asthma, viral detection, but not age, was associated with failure of symptom management, independently of exacerbation severity (ie, baseline PRAM and oxygen saturation), fever, and symptom chronicity. Although it did not reach statistical significance, the association between management failure and exposure to tobacco smoke warrants further investigation. Canadian Institutes of Health Research. Copyright © 2016 Elsevier Ltd. All rights reserved.
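The viral-detection odds ratio quoted above follows directly from the reported counts (110 of 579 with virus detected vs 46 of 354 without):

```python
# Odds ratio for management failure by viral detection, computed from the
# counts reported in the abstract.

def odds_ratio(events_a, total_a, events_b, total_b):
    odds_a = events_a / (total_a - events_a)
    odds_b = events_b / (total_b - events_b)
    return odds_a / odds_b

print(round(odds_ratio(110, 579, 46, 354), 2))  # 1.57, the reported OR
```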
Effects of age and sex ratios on offspring recruitment rates in translocated black rhinoceros.
Gedir, Jay V; Law, Peter R; du Preez, Pierre; Linklater, Wayne L
2018-06-01
Success of animal translocations depends on improving postrelease demographic rates toward establishment and subsequent growth of released populations. Short-term metrics for evaluating translocation success and its drivers, like postrelease survival and fecundity, are unlikely to represent longer-term outcomes. We used information theory to investigate 25 years of data on black rhinoceros (Diceros bicornis) translocations. We used the offspring recruitment rate (ORR) of translocated females (a metric integrating survival, fecundity, and offspring recruitment at sexual maturity) to detect determinants of success. Our unambiguously best model (AICω = 0.986) predicted that ORR increases with female age at release as a function of lower postrelease adult rhinoceros sex ratio (males:females). Delay of first postrelease reproduction and failure of some females to recruit any calves to sexual maturity most influenced the pattern of ORRs, and the leading causes of recruitment failure were postrelease female death (23% of all females) and failure to calve (24% of surviving females). We recommend translocating older females (≥6 years old) because they do not exhibit the reproductive delay and low ORRs of juveniles (<4 years old) or the higher rates of recruitment failure of juveniles and young adults (4-5.9 years old). Where translocation of juveniles is necessary, they should be released into female-biased populations, where they have higher ORRs. Our study offers the unique advantage of a long-term analysis across a large number of replicate populations: a science-by-management experiment as a proxy for a manipulative experiment, and a rare opportunity, particularly for a large, critically endangered taxon such as the black rhinoceros.
Our findings differ from previous recommendations, reinforce the importance of long-term data sets and comprehensive metrics of translocation success, and suggest attention be shifted from ecological to social constraints on population growth and species recovery, particularly when translocating species with polygynous breeding systems. © 2017 Society for Conservation Biology.
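The AICω value cited for the best model is an Akaike weight, the share of model-selection support a candidate carries. A generic sketch, with invented candidate AIC scores, is:

```python
# Akaike weights: relative support for each candidate model given its AIC.
# The example AIC values are invented for illustration.
import math

def akaike_weights(aics):
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]  # relative likelihoods
    total = sum(rel)
    return [r / total for r in rel]

print(akaike_weights([100.0, 110.0]))  # first model carries ~99% of the weight
```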
Bone Marrow Failure Secondary to Cytokinesis Failure
2015-12-01
Fanconi anemia (FA) is a human genetic disease characterized by a progressive bone marrow failure and heightened... Fanconi anemia (FA) is the most commonly inherited bone marrow failure syndrome. FA patients develop bone marrow failure during the first decade of... experiments proposed in specific aims 1-3 (Tasks 1-3). Task 1: To determine whether HSCs from Fanconi anemia mouse models have increased cytokinesis
Experience reveals ways to minimize failures in rod-pumped wells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patterson, J.C.; Bucaram, S.M.; Curfew, J.V.
From the experience gained over the past 25 years, ARCO Oil and Gas Co. has developed recommendations to reduce equipment failure in sucker-rod pumping installations. These recommendations include equipment selection and design, operating procedures, and chemical treatment. Equipment failure and its attendant costs are extremely important in today's petroleum industry. Because rod pumping is the predominant means of artificial lift, minimizing equipment failure in rod pumped wells can have a significant impact on profitability. This compilation of recommendations comes from field locations throughout the US and other countries. The goal is to address and solve problems on a well-by-well basis.
Multimaterial lamination as a means of retarding penetration and spallation failures in plates
NASA Technical Reports Server (NTRS)
Dibattista, J. D.; Humes, D. H.
1972-01-01
Experimental data are presented which show that hypervelocity impact spallation and penetration failures of a single solid aluminum plate and of a solid aluminum plate spaced a distance behind a Whipple meteor bumper may be retarded by replacing the solid aluminum plate with a laminated plate. Four sets of experiments were conducted. The first set of experiments was conducted with projectile mass and velocity held constant and with polycarbonate cylinders impacted into single plates of different construction. The second set of experiments was done with single plates of various construction and aluminum spherical projectiles of similar mass but different velocities. These two experiments showed that a laminated plate of aluminum and polycarbonate or aluminum and methyl methacrylate could prevent spallation and penetration failures with a lower areal density than either an all-aluminum laminated plate or a solid aluminum plate. The aluminum laminated plate was in turn superior to the solid aluminum plate in resisting spallation and penetration failures. In addition, through an example of 6061-T6 aluminum and methyl methacrylate, it is shown that a laminated structure ballistically superior to its parent materials may be built. The last two sets of experiments were conducted using bumper-protected main walls of solid aluminum and of laminated aluminum and polycarbonate. Again, under hypervelocity impact conditions, the laminated main walls were superior to the solid aluminum main walls in retarding spallation and penetration failures.
Acoustic Emission Analysis of Prestressed Concrete Structures
NASA Astrophysics Data System (ADS)
Elfergani, H. A.; Pullin, R.; Holford, K. M.
2011-07-01
Corrosion is a substantial problem in numerous structures; it is particularly serious in reinforced and prestressed concrete and must, in certain applications, be given special consideration because failure may result in loss of life and high financial cost. Furthermore, corrosion cannot be considered only a long-term problem, with many studies reporting failure of bridges and concrete pipes due to corrosion within a short period after they were constructed. The concrete pipes which transport water are examples of structures that have suffered from corrosion; for example, the pipes of the Great Man-Made River Project of Libya. Five pipe failures due to corrosion have occurred since their installation. The main reason for the damage is corrosion of the prestressed wires in the pipes due to the attack of chloride ions from the surrounding soil. Detecting the corrosion in its initial stages is very important to avoid further failures and the interruption of water flow. Even though most non-destructive methods used in the project are able to detect wire breaks, they cannot detect the presence of corrosion. Hence, in areas where no excavation has been completed, areas of serious damage can go undetected. Therefore, the major problem facing engineers is to find the best way to detect the corrosion and prevent the pipes from deteriorating. This paper reports on the use of the Acoustic Emission (AE) technique to detect the early stages of corrosion prior to deterioration of concrete structures.
Detection of imminent vein graft occlusion: what is the optimal surveillance program?
Tinder, Chelsey N; Bandyk, Dennis F
2009-12-01
The prediction of infrainguinal vein bypass failure remains an inexact judgment. Patient demographics, technical factors, and vascular laboratory graft surveillance testing are helpful in identifying a high-risk graft cohort. The optimal surveillance program to detect a bypass at risk for imminent occlusion continues to be developed, but the required elements are known and include clinical assessment for new or changed limb ischemia symptoms, measurement of ankle and/or toe systolic pressure, and duplex ultrasound imaging of the bypass graft. Duplex ultrasound assessment of bypass hemodynamics may be the most accurate method to detect imminent vein graft occlusion. The finding of low graft flow during intraoperative assessment or at a scheduled surveillance study predicts failure; if associated with an occlusive lesion, a graft revision can prolong patency. The most common abnormality producing graft failure is conduit stenosis caused by myointimal hyperplasia, and the majority can be repaired by an endovascular intervention. The frequency of testing to detect the failing bypass should be individualized to the patient, the type of arterial bypass, and prior duplex ultrasound scan findings. The focus of surveillance is on identification of the low-flow arterial bypass and timely repair of detected critical stenosis, defined by duplex velocity spectra criteria of a peak systolic velocity >300 cm/s and a peak systolic velocity ratio across the stenosis >3.5, correlating with a >70% diameter-reducing stenosis. When conducted appropriately, a graft surveillance program should result in an unexpected graft failure rate of <3% per year.
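The duplex criteria quoted in the abstract amount to a two-condition predicate (interpreting the velocity criterion as PSV above 300 cm/s, which the abstract's notation appears to intend):

```python
# Critical-stenosis predicate from the duplex velocity spectra criteria in
# the abstract: peak systolic velocity (PSV) above 300 cm/s together with
# a PSV ratio across the stenosis greater than 3.5.

def critical_stenosis(psv_cm_s, psv_ratio):
    return psv_cm_s > 300 and psv_ratio > 3.5
```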
FINDS: A fault inferring nonlinear detection system programmers manual, version 3.0
NASA Technical Reports Server (NTRS)
Lancraft, R. E.
1985-01-01
Detailed software documentation of the digital computer program FINDS (Fault Inferring Nonlinear Detection System), Version 3.0, is provided. FINDS is a highly modular and extensible computer program designed to monitor and detect sensor failures while at the same time providing reliable state estimates. In this version of the program, the FINDS methodology is used to detect, isolate, and compensate for failures in simulated avionics sensors used by the Advanced Transport Operating Systems (ATOPS) Transport System Research Vehicle (TSRV) in a Microwave Landing System (MLS) environment. It is intended that this report serve as a programmer's guide to aid in the maintenance, modification, and revision of the FINDS software.
Bohnhoff, Marco; Dresen, Georg; Ellsworth, William L.; Ito, Hisao; Cloetingh, Sierd; Negendank, Jörg
2010-01-01
An important discovery in crustal mechanics has been that the Earth’s crust is commonly stressed close to failure, even in tectonically quiet areas. As a result, small natural or man-made perturbations to the local stress field may trigger earthquakes. To understand these processes, Passive Seismic Monitoring (PSM) with seismometer arrays is a widely used technique that has been successfully applied to study seismicity at different magnitude levels ranging from acoustic emissions generated in the laboratory under controlled conditions, to seismicity induced by hydraulic stimulations in geological reservoirs, and up to great earthquakes occurring along plate boundaries. In all these environments the appropriate deployment of seismic sensors, i.e., directly on the rock sample, at the earth’s surface or in boreholes close to the seismic sources allows for the detection and location of brittle failure processes at sufficiently low magnitude-detection threshold and with adequate spatial resolution for further analysis. One principal aim is to develop an improved understanding of the physical processes occurring at the seismic source and their relationship to the host geologic environment. In this paper we review selected case studies and future directions of PSM efforts across a wide range of scales and environments. These include induced failure within small rock samples, hydrocarbon reservoirs, and natural seismicity at convergent and transform plate boundaries. Each example represents a milestone with regard to bridging the gap between laboratory-scale experiments under controlled boundary conditions and large-scale field studies. The common motivation for all studies is to refine the understanding of how earthquakes nucleate, how they proceed and how they interact in space and time. This is of special relevance at the larger end of the magnitude scale, i.e., for large devastating earthquakes due to their severe socio-economic impact.
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1985-01-01
The application of the Generalized Likelihood Ratio technique to the detection and identification of aircraft control element failures has been evaluated in a linear digital simulation of the longitudinal dynamics of a B-737 aircraft. Simulation results show that the technique has potential but that the effects of wind turbulence and Kalman filter model errors are problems which must be overcome.
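A scalar version of the GLR test for a step change in a residual's mean can illustrate the technique; this is a textbook sketch assuming unit variance, not the B-737 control-element implementation.

```python
# Generalized Likelihood Ratio test for a step change in the mean of a
# Gaussian residual sequence with unit variance (textbook sketch).

def glr_mean_jump(residuals):
    """Maximize the GLR statistic over candidate jump onsets k.
    Returns (statistic, onset_index); compare statistic to a threshold."""
    best, best_k = 0.0, 0
    for k in range(len(residuals)):
        seg = residuals[k:]
        jump_hat = sum(seg) / len(seg)           # ML estimate of the jump size
        stat = len(seg) * jump_hat ** 2 / 2.0    # log-likelihood ratio
        if stat > best:
            best, best_k = stat, k
    return best, best_k
```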
NASA Technical Reports Server (NTRS)
Cole, H. A., Jr.
1973-01-01
Random decrement signatures of structures vibrating in a random environment are studied through the use of computer-generated and experimental data. The statistical properties obtained indicate that these signatures are stable in form and scale and hence should have wide application in on-line failure detection and damping measurement. On-line procedures are described, and equations for estimating the record-length requirements needed to obtain signatures of a prescribed precision are given.
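The signature is formed by averaging record segments that begin at identical trigger conditions. A minimal sketch with a level-crossing trigger (toy data in the usage, not the paper's records):

```python
# Random decrement signature: average all segments of the response record
# that start where the signal crosses the trigger level from below.

def randomdec(signal, trigger, length):
    segments = [signal[i:i + length]
                for i in range(1, len(signal) - length + 1)
                if signal[i - 1] < trigger <= signal[i]]
    n = len(segments)
    # Ensemble average across segments gives the signature.
    return [sum(seg[j] for seg in segments) / n for j in range(length)]
```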
An evaluation of a real-time fault diagnosis expert system for aircraft applications
NASA Technical Reports Server (NTRS)
Schutte, Paul C.; Abbott, Kathy H.; Palmer, Michael T.; Ricks, Wendell R.
1987-01-01
A fault monitoring and diagnosis expert system called Faultfinder was conceived and developed to detect and diagnose in-flight failures in an aircraft. Faultfinder is an automated intelligent aid whose purpose is to assist the flight crew in fault monitoring, fault diagnosis, and recovery planning. The present implementation of this concept performs monitoring and diagnosis for a generic aircraft's propulsion and hydraulic subsystems. This implementation is capable of detecting and diagnosing failures of known and unknown (i.e., unforeseeable) type in a real-time environment. Faultfinder uses both rule-based and model-based reasoning strategies which operate on causal, temporal, and qualitative information. A preliminary evaluation is made of the diagnostic concepts implemented in Faultfinder. The evaluation used actual aircraft accident and incident cases which were simulated to assess the effectiveness of Faultfinder in detecting and diagnosing failures. Results of this evaluation, together with a description of the current Faultfinder implementation, are presented.
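The rule-based side of such a monitor can be sketched as symptom-set matching. The rules and symptom names below are invented for illustration; they are not Faultfinder's knowledge base.

```python
# Toy rule-based diagnosis: each rule maps a set of observed symptoms to a
# candidate failure. Rules and symptom names are hypothetical.

RULES = [
    ({'oil_pressure_low', 'oil_quantity_low'}, 'oil leak'),
    ({'epr_low', 'egt_high'}, 'engine degradation'),
]

def diagnose(symptoms):
    """Return every failure whose trigger symptoms are all present."""
    return [fault for trigger, fault in RULES if trigger <= symptoms]
```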
Two-IMU FDI performance of the sequential probability ratio test during shuttle entry
NASA Technical Reports Server (NTRS)
Rich, T. M.
1976-01-01
Performance data for the sequential probability ratio test (SPRT) during shuttle entry are presented. Current modeling constants and failure thresholds are included for the full mission 3B from entry through landing trajectory. Minimum 100 percent detection/isolation failure levels and a discussion of the effects of failure direction are presented. Finally, a limited comparison of failures introduced at trajectory initiation shows that the SPRT algorithm performs slightly worse than the data tracking test.
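Wald's SPRT, the algorithm evaluated here, accumulates a log-likelihood ratio sample by sample until it crosses a decision threshold. A generic Gaussian-residual sketch (parameters illustrative, not the shuttle constants):

```python
# Wald's sequential probability ratio test: decide between 'no failure'
# (mean 0) and 'failure' (mean mu) from unit-variance Gaussian residuals.
# Error rates alpha/beta set the decision thresholds; values illustrative.
import math

def sprt(samples, mu=1.0, alpha=0.01, beta=0.01):
    upper = math.log((1 - beta) / alpha)   # cross above: declare failure
    lower = math.log(beta / (1 - alpha))   # cross below: declare no failure
    llr = 0.0
    for n, x in enumerate(samples, 1):
        llr += mu * x - mu ** 2 / 2        # Gaussian log-likelihood increment
        if llr >= upper:
            return 'failure', n
        if llr <= lower:
            return 'no failure', n
    return 'undecided', len(samples)
```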
32-Bit-Wide Memory Tolerates Failures
NASA Technical Reports Server (NTRS)
Buskirk, Glenn A.
1990-01-01
An electronic memory system of 32-bit words corrects bit errors caused by some common types of failures - even the failure of an entire 4-bit-wide random-access-memory (RAM) chip. It detects the failure of two such chips, so the user is warned that the output of the memory may contain errors. The system includes eight 4-bit-wide DRAMs configured so that each bit of each DRAM is assigned to a different one of four parallel 8-bit words. Each DRAM thus contributes only 1 bit to each 8-bit word.
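The layout trick can be simulated directly: assigning each bit of a 4-bit chip to a different word guarantees a whole-chip failure costs at most one bit per word. The mapping below sketches the general idea and is not the actual memory design.

```python
# Sketch of bit interleaving across 4-bit-wide DRAM chips: bit b of every
# chip is routed to word b, so a whole-chip failure corrupts at most one
# bit in any word (repairable by a single-error-correcting code).
# The mapping is illustrative, not the actual memory design.

def errors_per_word_after_chip_failure(failed_chip, num_chips=8, word_count=4):
    words = {w: [] for w in range(word_count)}
    for chip in range(num_chips):
        for bit in range(word_count):
            words[bit].append((chip, bit))   # bit b of every chip -> word b
    # Count how many errors a whole-chip failure puts into each word.
    return [sum(1 for chip, _ in words[w] if chip == failed_chip)
            for w in range(word_count)]

print(errors_per_word_after_chip_failure(3))  # [1, 1, 1, 1]
```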
Remote monitoring of heart failure: benefits for therapeutic decision making.
Martirosyan, Mihran; Caliskan, Kadir; Theuns, Dominic A M J; Szili-Torok, Tamas
2017-07-01
Chronic heart failure is a cardiovascular disorder with high prevalence and incidence worldwide. The course of heart failure is characterized by periods of stability and instability. Decompensation of heart failure is associated with frequent and prolonged hospitalizations and it worsens the prognosis for the disease and increases cardiovascular mortality among affected patients. It is therefore important to monitor these patients carefully to reveal changes in their condition. Remote monitoring has been designed to facilitate an early detection of adverse events and to minimize regular follow-up visits for heart failure patients. Several new devices have been developed and introduced to the daily practice of cardiology departments worldwide. Areas covered: Currently, special tools and techniques are available to perform remote monitoring. Concurrently there are a number of modern cardiac implantable electronic devices that incorporate a remote monitoring function. All the techniques that have a remote monitoring function are discussed in this paper in detail. All the major studies on this subject have been selected for review of the recent data on remote monitoring of HF patients and demonstrate the role of remote monitoring in the therapeutic decision making for heart failure patients. Expert commentary: Remote monitoring represents a novel intensified follow-up strategy of heart failure management. Overall, theoretically, remote monitoring may play a crucial role in the early detection of heart failure progression and may improve the outcome of patients.
Redundancy management of inertial systems.
NASA Technical Reports Server (NTRS)
Mckern, R. A.; Musoff, H.
1973-01-01
The paper reviews developments in failure detection and isolation techniques applicable to gimballed and strapdown systems. It examines basic redundancy management goals of improved reliability, performance and logistic costs, and explores mechanizations available for both input and output data handling. The meaning of redundant system reliability in terms of available coverage, system MTBF, and mission time is presented and the practical hardware performance limitations of failure detection and isolation techniques are explored. Simulation results are presented illustrating implementation coverages attainable considering IMU performance models and mission detection threshold requirements. The implications of a complete GN&C redundancy management method on inertial techniques are also explored.
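The link between detection coverage and redundant-system reliability mentioned above can be illustrated with the standard duplex formula: with single-unit reliability r = e^(-lambda*t) and coverage c, R = r^2 + 2*c*r*(1 - r). This is a textbook expression under an exponential-failure assumption, not the paper's specific models.

```python
# Reliability of a duplex (two-unit) system with imperfect failure-detection
# coverage c: either both units survive, or one fails and the failure is
# successfully detected and isolated (probability c). Textbook sketch.
import math

def duplex_reliability(lam, t, coverage):
    r = math.exp(-lam * t)                  # single-unit reliability
    return r * r + 2 * coverage * r * (1 - r)
```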
NASA Technical Reports Server (NTRS)
Delaat, John C.; Merrill, Walter C.
1990-01-01
The objective of the Advanced Detection, Isolation, and Accommodation Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, an algorithm was developed which detects, isolates, and accommodates sensor failures by using analytical redundancy. The performance of this algorithm was evaluated on a real time engine simulation and was demonstrated on a full scale F100 turbofan engine. The real time implementation of the algorithm is described. The implementation used state-of-the-art microprocessor hardware and software, including parallel processing and high order language programming.
A Big Data Analysis Approach for Rail Failure Risk Assessment.
Jamshidi, Ali; Faghih-Roohi, Shahrzad; Hajizadeh, Siamak; Núñez, Alfredo; Babuska, Robert; Dollevoet, Rolf; Li, Zili; De Schutter, Bart
2017-08-01
Railway infrastructure monitoring is a vital task to ensure rail transportation safety. A rail failure could have a considerable impact not only on train delays and maintenance costs, but also on passenger safety. In this article, the aim is to assess the risk of a rail failure by analyzing a type of rail surface defect called squats, detected automatically among the huge number of records from video cameras. We propose an image processing approach for automatic detection of squats, especially severe types that are prone to rail breaks. We measure the visual length of the squats and use it to model the failure risk. For the assessment of the rail failure risk, we estimate the probability of rail failure based on the growth of squats. Moreover, we perform severity and crack growth analyses to consider the impact of rail traffic loads on defects in three different growth scenarios. The failure risk estimations are provided for several samples of squats with different crack growth lengths on a busy rail track of the Dutch railway network. The results illustrate the practicality and efficiency of the proposed approach. © 2017 The Authors Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.
2017-01-01
Background Appetite loss is one complication of chronic heart failure (CHF), and its association with pancreatic exocrine insufficiency (PEI) in CHF is not well investigated. Aim We attempted to detect the association between PEI and CHF-induced appetite loss. Methods Patients with CHF were enrolled, and body mass index (BMI), left ventricular ejection fraction (LVEF), New York Heart Association (NYHA) cardiac function grading, B-type natriuretic peptide (BNP), serum albumin, pro-albumin and hemoglobin were evaluated. Pancreatic exocrine function was measured by fecal elastase-1 (FE-1) levels in the enrolled patients. Appetite was assessed with the simplified nutritional appetite questionnaire (SNAQ). The improvement of appetite loss by supplemented pancreatic enzymes was also investigated in this study. Results A decrease of FE-1 levels was found in patients with CHF, as were lower SNAQ scores. A positive correlation was observed between SNAQ scores and FE-1 levels (r = 0.694, p < 0.001). Pancreatic enzyme supplementation attenuated the decrease of SNAQ scores in CHF patients with FE-1 levels <200 μg/g stool and SNAQ < 14. Conclusions Appetite loss is commonly seen in CHF and is partially associated with pancreatic exocrine insufficiency. Oral pancreatic enzyme replacement therapy attenuates chronic heart failure-induced appetite loss. These results suggest a possible pancreatic-cardiac relationship in chronic heart failure, and further experiments are needed to clarify the possible mechanisms. PMID:29155861
Improving the treatment planning and delivery process of Xoft electronic skin brachytherapy.
Manger, Ryan; Rahn, Douglas; Hoisak, Jeremy; Dragojević, Irena
2018-05-14
To develop an improved Xoft electronic skin brachytherapy process and identify areas of further improvement. A multidisciplinary team conducted a failure modes and effects analysis (FMEA) by developing a process map and a corresponding list of failure modes. The failure modes were scored for their occurrence, severity, and detectability, and a risk priority number (RPN) was calculated for each failure mode as the product of occurrence, severity, and detectability. Corrective actions were implemented to address the higher risk failure modes, and a revised process was generated. The RPNs of the failure modes were compared between the initial process and final process to assess the perceived benefits of the corrective actions. The final treatment process consists of 100 steps and 114 failure modes. The FMEA took approximately 20 person-hours (one physician, three physicists, and two therapists) to complete. The 10 most dangerous failure modes had RPNs ranging from 336 to 630. Corrective actions were effective at addressing most failure modes (10 riskiest RPNs ranging from 189 to 310), yet the RPNs were higher than those published for alternative systems. Many of these high-risk failure modes remained due to hardware design limitations. FMEA helps guide process improvement efforts by emphasizing the riskiest steps. Significant risks are apparent when using a Xoft treatment unit for skin brachytherapy due to hardware limitations such as the lack of several interlocks, a short source lifespan, and variability in source output. The process presented in this article is expected to reduce but not eliminate these risks. Copyright © 2018 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
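The risk-ranking arithmetic used in the FMEA above (an RPN computed as the product of occurrence, severity, and detectability scores) can be sketched in a few lines. This is a minimal illustration only; the failure modes and scores below are hypothetical examples, not values from the Xoft study.

```python
# Illustrative sketch of FMEA risk ranking: each failure mode is scored
# for occurrence (O), severity (S), and detectability (D) on 1-10 scales,
# and the risk priority number (RPN) is their product.
# The failure modes and scores below are hypothetical, not from the study.

def rpn(occurrence, severity, detectability):
    """Risk priority number: product of the three 1-10 scores."""
    return occurrence * severity * detectability

failure_modes = {
    "wrong applicator size selected": (4, 9, 5),
    "source output drift undetected": (3, 7, 6),
    "treatment plan transcription error": (2, 8, 4),
}

# Rank failure modes so corrective actions target the riskiest steps first.
ranked = sorted(failure_modes.items(),
                key=lambda kv: rpn(*kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: RPN = {rpn(*scores)}")
```

Re-scoring after corrective actions and comparing the two RPN lists, as the study does, is then a matter of running the same ranking on the revised scores.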
Experiences of Patients Living With Heart Failure: A Descriptive Qualitative Study.
Seah, Alvin Chuen Wei; Tan, Khoon Kiat; Huang Gan, Juvena Chew; Wang, Wenru
2016-07-01
The purpose of this study was to explore the experiences, needs, and coping strategies of patients living with heart failure in Singapore. A descriptive qualitative design was used. A purposive sample of 15 informants was recruited from two cardiology wards of a tertiary public hospital in Singapore. Individual face-to-face interviews were conducted with a semistructured interview guideline that was developed based on a review of the literature and a pilot study. Content analysis was adopted to analyze the data, and four main categories were identified: perceived causes, manifestations, and prognosis; enduring emotions; managing the condition; and needs from health care professionals. The informants were overwhelmed with the experience of living with heart failure due to the disruptive and uncertain nature of the condition. This study offers health care professionals practical and useful suggestions when providing holistic care for patients with heart failure. © The Author(s) 2015.
Performance deficits following failure: learned helplessness or self-esteem protection?
Witkowski, T; Stiensmeier-Pelster, J
1998-03-01
We report two laboratory experiments which compare two competing explanations of performance deficits following failure: one based on Seligman's learned helplessness theory (LHT), and the other on self-esteem protection theory (SEPT). In both studies, participants (Study 1: N = 40 pupils from secondary schools in Walbrzych, Poland; Study 2: N = 45 students from the University of Bielefeld, Germany) were confronted with either success or failure in a first phase of the experiment. Then, in the second phase of the experiment, the participants had to work on a set of mathematical problems (Study 1) or a set of tasks taken from Raven's Progressive Matrices (Study 2) either privately or in public. In both studies, failure in the first phase caused performance deficits in the second phase only if the participants had to solve the test tasks in public. These results were interpreted as consistent with SEPT and incompatible with LHT.
Nonexplicit change detection in complex dynamic settings: what eye movements reveal.
Vachon, François; Vallières, Benoît R; Jones, Dylan M; Tremblay, Sébastien
2012-12-01
We employed a computer-controlled command-and-control (C2) simulation and recorded eye movements to examine the extent and nature of the inability to detect critical changes in dynamic displays when change detection is implicit (i.e., requires no explicit report) to the operator's task. Change blindness, the failure to notice significant changes to a visual scene, may have dire consequences for performance in C2 and surveillance operations. Participants performed a radar-based risk-assessment task involving multiple subtasks. Although participants were not required to explicitly report critical changes to the operational display, change detection was critical in informing decision making. Participants' eye movements were used as an index of visual attention across the display. Nonfixated (i.e., unattended) changes were more likely to be missed than were fixated (i.e., attended) changes, supporting the idea that focused attention is necessary for conscious change detection. The finding of significant pupil dilation for changes undetected but fixated suggests that attended changes can nonetheless be missed because of a failure of attentional processes. Change blindness in complex dynamic displays takes the form of failures in establishing task-appropriate patterns of attentional allocation. These findings have implications for the design of change-detection support tools for dynamic displays and work procedures in C2 and surveillance.
NASA Technical Reports Server (NTRS)
Litt, Jonathan; Kurtkaya, Mehmet; Duyar, Ahmet
1994-01-01
This paper presents an application of a fault detection and diagnosis scheme for the sensor faults of a helicopter engine. The scheme utilizes a model-based approach with real time identification and hypothesis testing which can provide early detection, isolation, and diagnosis of failures. It is an integral part of a proposed intelligent control system with health monitoring capabilities. The intelligent control system will allow for accommodation of faults, reduce maintenance cost, and increase system availability. The scheme compares the measured outputs of the engine with the expected outputs of an engine whose sensor suite is functioning normally. If the differences between the real and expected outputs exceed threshold values, a fault is detected. The isolation of sensor failures is accomplished through a fault parameter isolation technique where parameters which model the faulty process are calculated on-line with a real-time multivariable parameter estimation algorithm. The fault parameters and their patterns can then be analyzed for diagnostic and accommodation purposes. The scheme is applied to the detection and diagnosis of sensor faults of a T700 turboshaft engine. Sensor failures are induced in a T700 nonlinear performance simulation and data obtained are used with the scheme to detect, isolate, and estimate the magnitude of the faults.
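The core detection step described above, comparing measured engine outputs with the expected outputs of a normally functioning model and declaring a fault when the residual exceeds a threshold, can be sketched minimally as follows. The sensor names, values, and thresholds are illustrative assumptions, not T700 engine data.

```python
# Minimal sketch of residual-based sensor fault detection as described
# above: measured outputs are compared with model-expected outputs, and
# a fault is declared when the residual exceeds a threshold.
# Sensor names, values, and thresholds are illustrative, not engine data.

def detect_faults(measured, expected, thresholds):
    """Return the names of sensors whose residual exceeds its threshold."""
    faults = []
    for name in measured:
        residual = abs(measured[name] - expected[name])
        if residual > thresholds[name]:
            faults.append(name)
    return faults

measured   = {"N1": 101.8, "T45": 845.0, "Q": 312.0}
expected   = {"N1": 100.0, "T45": 840.0, "Q": 300.0}
thresholds = {"N1": 1.5,   "T45": 10.0, "Q": 15.0}

print(detect_faults(measured, expected, thresholds))  # ['N1']
```

In the paper's scheme, a detected fault then triggers on-line parameter estimation to isolate which sensor failed and to estimate the fault magnitude; the sketch above covers only the threshold test.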
Failure prediction in ceramic composites using acoustic emission and digital image correlation
NASA Astrophysics Data System (ADS)
Whitlow, Travis; Jones, Eric; Przybyla, Craig
2016-02-01
The objective of the work performed here was to develop a methodology for linking in-situ detection of localized matrix cracking to the final failure location in continuous fiber reinforced CMCs. First, the initiation and growth of matrix cracking are measured and triangulated via acoustic emission (AE) detection. High amplitude events at relatively low static loads can be associated with initiation of large matrix cracks. When there is a localization of high amplitude events, a measurable effect on the strain field can be observed. Full field surface strain measurements were obtained using digital image correlation (DIC). An analysis using the combination of the AE and DIC data was able to predict the final failure location.
Free-Swinging Failure Tolerance for Robotic Manipulators
NASA Technical Reports Server (NTRS)
English, James
1997-01-01
Under this GSRP fellowship, software-based failure-tolerance techniques were developed for robotic manipulators. The focus was on failures characterized by the loss of actuator torque at a joint, called free-swinging failures. The research results spanned many aspects of the free-swinging failure-tolerance problem, from preparing for an expected failure to discovery of postfailure capabilities to establishing efficient methods to realize those capabilities. Developed algorithms were verified using computer-based dynamic simulations, and these were further verified using hardware experiments at Johnson Space Center.
Resilience to emotional distress in response to failure, error or mistakes: A systematic review.
Johnson, Judith; Panagioti, Maria; Bass, Jennifer; Ramsey, Lauren; Harrison, Reema
2017-03-01
Perceptions of failure have been implicated in a range of psychological disorders, and even a single experience of failure can heighten anxiety and depression. However, not all individuals experience significant emotional distress following failure, indicating the presence of resilience. The current systematic review synthesised studies investigating resilience factors to emotional distress resulting from the experience of failure. For the definition of resilience we used the Bi-Dimensional Framework for resilience research (BDF) which suggests that resilience factors are those which buffer the impact of risk factors, and outlines criteria a variable should meet in order to be considered as conferring resilience. Studies were identified through electronic searches of PsycINFO, MEDLINE, EMBASE and Web of Knowledge. Forty-six relevant studies reported in 38 papers met the inclusion criteria. These provided evidence of the presence of factors which confer resilience to emotional distress in response to failure. The strongest support was found for the factors of higher self-esteem, more positive attributional style, and lower socially-prescribed perfectionism. Weaker evidence was found for the factors of lower trait reappraisal, lower self-oriented perfectionism and higher emotional intelligence. The majority of studies used experimental or longitudinal designs. These results identify specific factors which should be targeted by resilience-building interventions. Resilience; failure; stress; self-esteem; attributional style; perfectionism. Copyright © 2016 Elsevier Ltd. All rights reserved.
Transfer Failure and Proactive Interference in Short-Term Memory
ERIC Educational Resources Information Center
Ellis, John A.
1977-01-01
Two experiments tested the hypothesis that proactive interference over a series of Brown-Peterson trials results from a combination of the subject's failure to transfer information to a permanent memory state and failure to retrieve information from permanent memory. (Editor)
What factors determine the severity of hepatitis A-related acute liver failure?
Ajmera, V; Xia, G; Vaughan, G; Forbi, J C; Ganova-Raeva, L M; Khudyakov, Y; Opio, C K; Taylor, R; Restrepo, R; Munoz, S; Fontana, R J; Lee, W M
2011-07-01
The reason(s) that hepatitis A virus (HAV) infection may progress infrequently to acute liver failure are poorly understood. We examined host and viral factors in 29 consecutive adult patients with HAV-associated acute liver failure enrolled at 10 sites participating in the US ALF Study Group. Eighteen of twenty-four acute liver failure sera were PCR positive while six had no detectable virus. HAV genotype was determined using phylogenetic analysis and the full-length genome sequences of the HAV from acute liver failure sera were compared to those from self-limited acute HAV cases selected from the CDC database. We found that rates of nucleotide substitution did not vary significantly between the liver failure and non-liver failure cases and there was no significant variation in amino acid sequences between the two groups. Four of 18 HAV isolates were sub-genotype IB, acquired from the same study site over a 3.5-year period. Sub-genotype IB was found more frequently among acute liver failure cases compared to the non-liver failure cases (chi-square test, P < 0.01). At another centre, a mother and her son presented with HAV and liver failure within 1 month of each other. Predictors of spontaneous survival included detectable serum HAV RNA, while age, gender, HAV genotype and nucleotide substitutions were not associated with outcome. The more frequent appearance of rapid viral clearance and its association with poor outcomes in acute liver failure as well as the finding of familial cases imply a possible host genetic predisposition that contributes to a fulminant course. Recurrent cases of the rare sub-genotype IB over several years at a single centre imply a community reservoir of infection and possible increased pathogenicity of certain infrequent viral genotypes. © 2010 Blackwell Publishing Ltd.
ERIC Educational Resources Information Center
Hebda-Bauer, Elaine K.; Watson, Stanley J.; Akil, Huda
2005-01-01
The impact of a previously successful or unsuccessful experience on the subsequent acquisition of a related task is not well understood. The nature of past experience may have even greater impact in individuals with learning deficits, as their cognitive processes can be easily disrupted. Mice with a targeted disruption of the α and δ…
Aging, Loss-of-Coolant Accident (LOCA), and high potential testing of damaged cables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vigil, R.A.; Jacobus, M.J.
1994-04-01
Experiments were conducted to assess the effects of high potential testing of cables and to assess the survivability of aged and damaged cables under Loss-of-Coolant Accident (LOCA) conditions. High potential testing at 240 Vdc/mil on undamaged cables suggested that no damage was incurred on the selected virgin cables. During aging and LOCA testing, Okonite ethylene propylene rubber (EPR) cables with a bonded jacket experienced unexpected failures. The failures appear to be primarily related to the level of thermal aging and the presence of a bonded jacket that ages more rapidly than the insulation. For Brand Rex crosslinked polyolefin (XLPO) cables, the results suggest that 7 mils of remaining insulation should give the cables a high probability of surviving accident exposure following aging. The voltage necessary to detect when 7 mils of insulation remain on unaged Brand Rex cables is approximately 35 kVdc. This voltage level would almost certainly be unacceptable to a utility for use as a damage assessment tool. However, additional tests indicated that a 35 kVdc voltage application would not damage virgin Brand Rex cables when tested in water. Although two damaged Rockbestos silicone rubber cables also failed during the accident test, no correlation between failures and level of damage was apparent.
Chen, Xiaoliang; Zhao, Bin; Ma, Shoujiang; Chen, Cen; Hu, Daoyun; Zhou, Wenshuang; Zhu, Zuqing
2015-03-23
In this paper, we study how to improve the control plane resiliency of software-defined elastic optical networks (SD-EONs) and design a master-slave OpenFlow (OF) controller arrangement. Specifically, we introduce two OF controllers (OF-Cs), i.e., the master and slave OF-Cs, and make them work in a collaborative way to protect the SD-EON against controller failures. We develop a controller communication protocol (CCP) to facilitate the cooperation of the two OF-Cs. With the CCP, the master OF-C (M-OF-C) can synchronize network status to the slave OF-C (S-OF-C) in real time, while the S-OF-C can quickly detect the failure of the M-OF-C and take over the network control and management (NC&M) tasks in a timely manner to avoid service disruption. We implement the proposed framework in an SD-EON control plane testbed built with high-performance servers, and perform NC&M experiments with different network failure scenarios to demonstrate its effectiveness. Experimental results indicate that the proposed system can restore services in both the data and control planes of the SD-EON jointly while maintaining relatively good scalability. To the best of our knowledge, this is the first demonstration that realizes control plane resiliency in SD-EONs.
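The master/slave takeover idea described above (the slave mirrors network state from the master's periodic synchronization messages and assumes control when those messages stop) can be sketched as a toy heartbeat monitor. The timeout value and class design are illustrative assumptions, not the paper's CCP protocol.

```python
import time

# Toy sketch of heartbeat-based master failure detection: the slave
# controller tracks the master's periodic heartbeats (which also carry
# synchronized network status) and promotes itself to master when no
# heartbeat arrives within a timeout. Timeout and field names are
# hypothetical, not taken from the paper's CCP design.

class SlaveController:
    def __init__(self, timeout=3.0):
        self.timeout = timeout
        self.last_heartbeat = time.monotonic()
        self.is_master = False
        self.network_state = {}

    def on_heartbeat(self, state):
        """Record a heartbeat from the master and mirror its network status."""
        self.last_heartbeat = time.monotonic()
        self.network_state.update(state)

    def check_master(self, now=None):
        """Promote self to master if the heartbeat has gone silent."""
        now = time.monotonic() if now is None else now
        if not self.is_master and now - self.last_heartbeat > self.timeout:
            self.is_master = True
        return self.is_master
```

Because the slave already holds a mirrored copy of the network state when the timeout fires, takeover does not require rebuilding state from the switches, which is what keeps the failover fast.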
DOE Office of Scientific and Technical Information (OSTI.GOV)
Homce, G.T.; Thalimer, J.R.
1996-05-01
Most electric motor predictive maintenance methods have drawbacks that limit their effectiveness in the mining environment. The US Bureau of Mines (USBM) is developing an alternative approach to detect winding insulation breakdown in advance of complete motor failure. In order to evaluate the analysis algorithms necessary for this approach, the USBM has designed and installed a system to monitor 120 electric motors in a coal preparation plant. The computer-based experimental system continuously gathers, stores, and analyzes electrical parameters for each motor. The results are then correlated with data from conventional motor-maintenance methods and in-service failures to determine whether the analysis algorithms can detect signs of insulation deterioration and impending failure. This paper explains the on-line testing approach used in this research and describes the monitoring system design and implementation. At this writing, data analysis is underway, but conclusive results are not yet available.
Using pattern analysis methods to do fast detection of manufacturing pattern failures
NASA Astrophysics Data System (ADS)
Zhao, Evan; Wang, Jessie; Sun, Mason; Wang, Jeff; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh; Ding, Hua
2016-03-01
At advanced technology nodes, logic design has become extremely complex and grows more challenging as pattern geometry sizes decrease. Small layout patterns are becoming very sensitive to process variations. Meanwhile, the high pressure of yield ramp is always present due to time-to-market competition. The company that achieves patterning maturity earlier than others has a great advantage and a better chance to realize maximum profit margins. For debugging silicon failures, DFT diagnostics can identify which nets or cells caused the yield loss, but identifying which failures are due to one common layout pattern or structure normally takes a long time and many resources. This paper presents a new yield diagnostic flow, based on preliminary EFA results, to show how pattern analysis can more efficiently detect pattern-related systematic defects. Increased visibility into design-pattern-related failures also allows more precise yield loss estimation.
NASA Technical Reports Server (NTRS)
Bergmann, E.
1976-01-01
The current baseline method and software implementation of the space shuttle reaction control subsystem failure detection and identification (RCS FDI) system is presented. This algorithm is recommended for inclusion in the redundancy management (RM) module of the space shuttle guidance, navigation, and control system. Supporting software is presented, and recommended for inclusion in the system management (SM) and display and control (D&C) systems. RCS FDI uses data from sensors in the jets, in the manifold isolation valves, and in the RCS fuel and oxidizer storage tanks. A list of jet failures and fuel imbalance warnings is generated for use by the jet selection algorithm of the on-orbit and entry flight control systems, and to inform the crew and ground controllers of RCS failure status. Manifold isolation valve close commands are generated in the event of failed-on or leaking jets to prevent loss of large quantities of RCS fuel.
Multi-physics modeling of multifunctional composite materials for damage detection
NASA Astrophysics Data System (ADS)
Sujidkul, Thanyawalai
This study presents a multi-physics model of multifunctional composite materials for damage detection, with verification and validation against the mechanical behavior of carbon fibre reinforced polymer composites (CFRPs), CFRP laminated composites, and woven SiC/SiC matrix composites subjected to fracture damage. These materials offer low cost, low density, a high strength-to-weight ratio, and comparable specific tensile properties; SiC/SiC in particular offers good environmental stability at high temperature. As a result, such composites have been used for many important structures such as helicopter rotors, aerojet engines, gas turbines, hot control surfaces, sporting goods, and windmill blades. Detecting damage or material defects in a mechanical component can provide vital information for predicting remaining useful life, and thereby for preventing catastrophic failures. Understanding the mechanical behavior of composites across different scales is therefore central to preventing damage and failure. Damage detection methods for composites have been investigated widely in recent years. Non-destructive techniques such as X-ray, acoustic emission and thermography are the traditional methods of detecting damage. However, because damage in composites can be invisible, improved detection methods are needed to prevent failure. Since carbon fibers are electrically conductive, CFRPs can be made self-sensing for damage detection. Electrical resistance has been shown to be a sensitive measure of internal damage, and this work also investigates whether thermal resistance can detect damage in composites. To date, only a small number of micromechanical modeling schemes have been proposed in the published literature for the various types of composites.
This work provides numerical, analytical, and theoretical failure models for different damage types to predict mechanical damage behavior together with electrical and thermal properties.
Failure mode and effects analysis outputs: are they valid?
2012-01-01
Background Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Methods Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted:
· Face validity: by comparing the FMEA participants’ mapped processes with observational work.
· Content validity: by presenting the FMEA findings to other healthcare professionals.
· Criterion validity: by comparing the FMEA findings with data reported on the trust’s incident report database.
· Construct validity: by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number.
Results Face validity was positive as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust’s incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. Conclusion There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures.
As for FMEA’s methodology for scoring failures, there were discrepancies between the teams’ estimates and similar incidents reported on the trust’s incident database. Furthermore, the concept of multiplying ordinal scales to prioritise failures is mathematically flawed. Until FMEA’s validity is further explored, healthcare organisations should not solely depend on their FMEA results to prioritise patient safety issues. PMID:22682433
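The mathematical objection raised above, that multiplying ordinal severity, probability and detectability scores is flawed, can be made concrete with a small example: two very different risk profiles can collapse to the identical RPN. The score triples below are hypothetical illustrations.

```python
# Illustration of the ordinal-scale problem noted above: RPN multiplies
# ordinal scores, so profiles with very different risk character can
# produce the same number. Scores (O, S, D) below are hypothetical.

profiles = {
    "rare but catastrophic, hard to detect": (1, 10, 10),
    "frequent, minor, moderately detectable": (10, 2, 5),
}

rpns = {name: o * s * d for name, (o, s, d) in profiles.items()}
print(rpns)  # both failure modes score RPN = 100
```

Since the scores are ordinal ranks rather than measured quantities, equal products like these carry no guarantee of equal risk, which is the heart of the validity concern.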
Risk assessment of failure modes of gas diffuser liner of V94.2 siemens gas turbine by FMEA method
NASA Astrophysics Data System (ADS)
Mirzaei Rafsanjani, H.; Rezaei Nasab, A.
2012-05-01
Failure of the welding connection between the gas diffuser liner and the exhaust casing is one of the failure modes of V94.2 gas turbines and has occurred in some power plants. This defect is one of the uncertainties customers raise when deciding whether to accept the final commissioning of this product. Accordingly, the risk priority of this failure was evaluated by the failure modes and effects analysis (FMEA) method to find out whether the failure is catastrophic for turbine performance and harmful to humans. Using the history of 110 gas turbines of this model operating in several power plants, the severity, occurrence and detection numbers of the failure were determined, and consequently the Risk Priority Number (RPN) of the failure was determined. Finally, a criticality matrix of potential failures was created, illustrating that the failure modes are located in the safe zone.
Molecular cytogenetic analysis of Xq critical regions in premature ovarian failure
2013-01-01
Background One of the frequent reasons for unsuccessful conception is premature ovarian failure/primary ovarian insufficiency (POF/POI), defined as the loss of functional follicles below the age of 40 years. Among the genetic causes the most common ones involve the X chromosome, as in Turner syndrome, partial X deletion and X-autosome translocations. Here we report the case of a 27-year-old female patient referred to genetic counselling because of premature ovarian failure. The aim of this case study was to perform molecular genetic and cytogenetic analyses in order to identify the exact genetic background of the pathogenic phenotype. Results For premature ovarian failure diagnostics we analysed the Fragile mental retardation 1 (FMR1) gene using the Southern blot technique and repeat-primed PCR in order to assess the relationship between FMR1 premutation status and premature ovarian failure. In this early-onset premature ovarian failure patient we detected one normal allele of the FMR1 gene and could not verify the methylated allele; we therefore performed cytogenetic analyses using G-banding and fluorescent in situ hybridization, together with a high-resolution molecular cytogenetic method, array comparative genomic hybridization. For this patient, G-banding identified a large deletion on the X chromosome at the critical region (ChrX q21.31-q28) which is associated with the premature ovarian failure phenotype. In order to detect the exact breakpoints, we used a special cytogenetic array (ISCA plus CGH array) and verified a 67.355 Mb loss at the critical region encompassing a total of 795 genes.
Conclusions We conclude for this case study that the karyotyping is definitely helpful in the evaluation of premature ovarian failure patients, to identify the non submicroscopic chromosomal rearrangement, and using the array CGH technique we can contribute to the most efficient detection and mapping of exact deletion breakpoints of the deleted Xq region. PMID:24359613
Flight Test of Propulsion Monitoring and Diagnostic System
NASA Technical Reports Server (NTRS)
Gabel, Steve; Elgersma, Mike
2002-01-01
The objective of this program was to perform flight tests of the propulsion monitoring and diagnostic system (PMDS) technology concept developed by Honeywell under the NASA Advanced General Aviation Transport Experiment (AGATE) program. The PMDS concept is intended to independently monitor the performance of the engine, providing continuous status to the pilot along with warnings if necessary as well as making the data available to ground maintenance personnel via a special interface. These flight tests were intended to demonstrate the ability of the PMDS concept to detect a class of selected sensor hardware failures, and the ability to successfully model the engine for the purpose of engine diagnosis.
NASA Astrophysics Data System (ADS)
Fortmann, C. M.; Farley, M. V.; Smoot, M. A.; Fieselmann, B. F.
1988-07-01
Solarex is one of the leaders in amorphous-silicon-based photovoltaic production and research. The large-scale production environment presents unique safety concerns related to the quantity of dangerous materials as well as the number of personnel handling these materials. The safety measures explored by this work include gas detection systems, training, and failure-resistant gas handling systems. Our experience with flow-restricting orifices in the CGA connections and with the use of steel cylinders is reviewed. The hazards and efficiency of wet scrubbers for silane exhausts are examined. We have found it useful to provide the scrubber with temperature alarms.
NASA Technical Reports Server (NTRS)
Manchala, Daniel W.; Palazzolo, Alan B.; Kascak, Albert F.; Montague, Gerald T.; Brown, Gerald V.; Lawrence, Charles; Klusman, Steve
1994-01-01
Jet engines may experience severe vibration due to the sudden imbalance caused by blade failure. This research investigates the use of on-board magnetic bearings or piezoelectric actuators to cancel these forces in flight. This operation requires identification of the source of the vibrations via an expert system, determination of the required phase angles and amplitudes of the correction forces, and application of the desired control signals to the magnetic bearings or piezoelectric actuators. This paper presents the architecture of the software system, details of the control algorithm used for the sudden imbalance correction project described above, and the laboratory test results.
Robust detection, isolation and accommodation for sensor failures
NASA Technical Reports Server (NTRS)
Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.
1986-01-01
The objective is to extend recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and to estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which can be used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and on process inputs and outputs, generate these innovations. Thresholds for failure detection are computed from bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed; it represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by the introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping into the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies, which used thresholds that were selected empirically. Comparison of the two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method over previous techniques.
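The core of an innovations-based DIA scheme can be sketched as a residual test against a robust threshold. This is a scalar toy under stated assumptions: the estimator is given, the threshold is simply the sum of a modeling-error bound and a noise bound, and all numbers are invented; the actual method derives thresholds from frequency-shaped estimator design, which this sketch does not capture.

```python
# Illustrative innovations-based sensor failure detection: flag samples
# whose innovation (sensor reading minus estimator output) exceeds a
# threshold built from assumed model-error and noise bounds.

def detect_failures(measurements, estimates, model_error_bound, noise_bound):
    """Return a per-sample list of failure flags.

    The threshold combines a bound on modeling error with a bound on
    sensor noise, so normal model/sensor mismatch is not declared a
    failure (the robustness vs. sensitivity trade-off in the abstract).
    """
    threshold = model_error_bound + noise_bound
    flags = []
    for y, y_hat in zip(measurements, estimates):
        innovation = y - y_hat          # residual between sensor and estimate
        flags.append(abs(innovation) > threshold)
    return flags

# A sensor stuck at 0.0 while the estimator tracks a ramp:
flags = detect_failures([0.0, 0.0, 0.0], [0.1, 1.0, 2.0],
                        model_error_bound=0.3, noise_bound=0.2)
```

Raising either bound lowers DIA sensitivity but increases robustness to false alarms, which is exactly the trade-off the analysis tools quantify.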
Dissociative detachment and memory impairment: reversible amnesia or encoding failure?
Allen, J G; Console, D A; Lewis, L
1999-01-01
The authors propose that clinicians endeavor to differentiate between reversible and irreversible memory failures in patients with dissociative symptoms who report "memory gaps" and "lost time." The classic dissociative disorders, such as dissociative amnesia and dissociative identity disorder, entail reversible memory failures associated with encoding experience in altered states. The authors propose another realm of memory failures associated with severe dissociative detachment that may preclude the level of encoding of ongoing experience needed to support durable autobiographical memories. They describe how dissociative detachment may be intertwined with neurobiological factors that impair memory, and they spell out the significance of distinguishing reversible and irreversible memory impairment for diagnosis, patient education, psychotherapy, and research.
Investigation of an automatic trim algorithm for restructurable aircraft control
NASA Technical Reports Server (NTRS)
Weiss, J.; Eterno, J.; Grunberg, D.; Looze, D.; Ostroff, A.
1986-01-01
This paper develops and solves an automatic trim problem for restructurable aircraft control. The trim solution is applied as a feed-forward control to reject measurable disturbances following control element failures. Disturbance rejection and command following performances are recovered through the automatic feedback control redesign procedure described by Looze et al. (1985). For this project the existence of a failure detection mechanism is assumed, and methods to cope with potential detection and identification inaccuracies are addressed.
Optimally Robust Redundancy Relations for Failure Detection in Uncertain Systems,
1983-04-01
particular applications. While the general methods provide the basis for what in principle should be a widely applicable failure detection methodology… modifications to this result which overcome them at no fundamental increase in complexity. 4.1 Scaling: A critical problem with the criteria of the preceding… criterion which takes scaling into account… As in (38), we can multiply the C_i by positive scalars to take into account unequal weightings on
Rigatelli, Gianluca; Cardaioli, Paolo; Braggion, Gabriele; Aggio, Silvio; Giordan, Massimo; Magro, Beatrice; Nascimben, Alberto; Favaro, Alberto; Roncon, Loris
2007-02-01
We sought to prospectively assess the role of transesophageal (TEE) and intracardiac echocardiography (ICE) in detecting potential technical difficulties or failures in patients undergoing percutaneous closure of interatrial shunts. We prospectively enrolled 46 consecutive patients (mean age 35 ± 28.8 years, 30 female) referred to our center for catheter-based closure of interatrial shunts. All patients were screened with TEE before the intervention. Patients who met the inclusion criteria underwent an ICE study before the closure attempt (40 patients). TEE detected potential technical difficulties in 22.5% (9/40) of patients, whereas ICE detected technical difficulties in 32.5% (13/40). In patients with positive TEE/ICE, the procedural success rate (92.4% versus 100%, P = ns) and follow-up failure rate (7.7% versus 0%, P = ns) were similar to those of patients with negative TEE/ICE, whereas the fluoroscopy time (7 ± 1.2 versus 5 ± 0.7 minutes, P < 0.03), the procedural time (41 ± 4.1 versus 30 ± 8.2 minutes, P < 0.03), and the rate of technical difficulties (23.1% versus 0%, P = 0.013) were higher. Differences between ICE and TEE in the evaluation of rims, measurement of the ASD or fossa ovalis, and detection of venous valve and embryonic septal membrane remnants affected the technical challenges and the procedural and fluoroscopy times, but did not influence the success rate or follow-up failure rate.
Toward Failure Modeling In Complex Dynamic Systems: Impact of Design and Manufacturing Variations
NASA Technical Reports Server (NTRS)
Tumer, Irem Y.; McAdams, Daniel A.; Clancy, Daniel (Technical Monitor)
2001-01-01
When designing vehicle vibration monitoring systems for aerospace devices, it is common to use well-established models of vibration features to determine whether failures or defects exist. Most of the algorithms used for failure detection rely on these models to detect significant changes in a flight environment. In actual practice, however, most vehicle vibration monitoring systems are corrupted by high rates of false alarms and missed detections. Research conducted at the NASA Ames Research Center has determined that a major reason for these high rates of false alarms and missed detections is the numerous sources of statistical variation that are not taken into account in the modeling assumptions. In this paper, we address one such source of variation, namely, that introduced during the design and manufacturing of the rotating machinery components that make up aerospace systems. We present a novel way of modeling the vibration response by including design variations via probabilistic methods. The results demonstrate the initial feasibility of the method, showing great promise toward a general methodology for designing more accurate aerospace vehicle vibration monitoring systems.
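The idea of folding manufacturing variation into a vibration-feature model via probabilistic methods can be sketched with a small Monte Carlo experiment. Everything below is an invented toy: the feature formula, nominal geometry, and tolerance are assumptions for illustration, not the paper's actual rotor model.

```python
# Monte Carlo sketch: propagate a manufacturing tolerance on blade-tip
# radius into the spread of a vibration feature, so that a detection
# threshold can account for design/manufacturing variation instead of
# assuming a single nominal feature value. All numbers are invented.
import math
import random

def blade_passing_feature(shaft_hz, n_blades, radius_m):
    """Toy vibration feature: blade-passing frequency scaled by tip radius."""
    return shaft_hz * n_blades * radius_m

def feature_distribution(shaft_hz=100.0, n_blades=24, nominal_radius=0.30,
                         radius_tol=0.002, n_samples=10_000, seed=0):
    """Sample the feature over a uniform tolerance band on the radius and
    return the sample mean and standard deviation."""
    rng = random.Random(seed)
    samples = [blade_passing_feature(shaft_hz, n_blades,
                                     rng.uniform(nominal_radius - radius_tol,
                                                 nominal_radius + radius_tol))
               for _ in range(n_samples)]
    mean = sum(samples) / n_samples
    var = sum((s - mean) ** 2 for s in samples) / n_samples
    return mean, math.sqrt(var)

mean, std = feature_distribution()   # spread around the nominal feature value
```

A monitoring threshold set from `mean` and `std` rather than from the nominal value alone is the kind of adjustment that reduces false alarms from unit-to-unit variation.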
miRNAs as biomarkers for diagnosis of heart failure: A systematic review and meta-analysis.
Yan, Hualin; Ma, Fan; Zhang, Yi; Wang, Chuan; Qiu, Dajian; Zhou, Kaiyu; Hua, Yimin; Li, Yifei
2017-06-01
With the rapid development of molecular biology, microRNAs (miRNAs) have emerged as playing roles in both cardiac development and pathological processes. We therefore conducted this meta-analysis to determine the value of circulating miRNAs as biomarkers for detecting heart failure. We searched PubMed, EMBASE, the Cochrane Central Register of Controlled Trials, and the World Health Organization clinical trials registry to identify relevant studies up to August 2016. We performed the meta-analysis in a fixed/random-effects model using Meta-DiSc 1.4, used STATA 14.0 to estimate publication bias and perform meta-regression, and used SPSS 17.0 to evaluate variance between groups. Information on true positives, false positives, false negatives, and true negatives, as well as study quality, was extracted. Results from 10 articles were used to analyze the pooled accuracy. The overall performance of total mixed miRNAs (TmiRs) detection was: pooled sensitivity, 0.74 (95% confidence interval [CI], 0.72 to 0.75); pooled specificity, 0.69 (95% CI, 0.67 to 0.71); and area under the summary receiver operating characteristic curve (SROC), 0.7991. For miRNA-423-5p (miR-423-5p) detection: pooled sensitivity, 0.81 (95% CI, 0.76 to 0.85); pooled specificity, 0.67 (95% CI, 0.61 to 0.73); and SROC, 0.8600. For comparison, in the same patient populations we extracted data on BNP for detecting heart failure and performed a meta-analysis yielding an acceptable SROC of 0.9291. In the variance analysis, the diagnostic performance of miR-423-5p showed significant advantages over the other pooled results, and the combination of miRNAs and BNP increased the accuracy of heart failure detection; however, miR-423-5p showed no dramatic advantage over the BNP protocol. Despite interstudy variability, the performance of miRNAs for detecting heart failure indicates that miR-423-5p has potential as a biomarker.
However, the other miRNAs could not provide sufficient evidence of promising diagnostic value for heart failure on the current data. Moreover, the combination of miRNAs and BNP could serve as a better detection method. Nevertheless, BNP remains the most convincing biomarker for this disease.
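The per-study accuracy measures behind such a meta-analysis come from 2x2 diagnostic tables. The sketch below shows the sensitivity/specificity arithmetic with invented counts; note the naive count-summing shown here is only for illustration, since the actual analysis (Meta-DiSc fixed/random-effects pooling) weights studies and models between-study heterogeneity.

```python
# Diagnostic-accuracy arithmetic for a meta-analysis, with invented 2x2
# counts. Real pooling (e.g. Meta-DiSc) weights studies; this naive
# count-summing is only a didactic approximation.

def sensitivity(tp, fn):
    """Fraction of diseased patients correctly detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of healthy patients correctly ruled out."""
    return tn / (tn + fp)

def pooled(studies):
    """Naive pooling: sum the 2x2 counts over studies, then recompute."""
    tp = sum(s["tp"] for s in studies)
    fp = sum(s["fp"] for s in studies)
    fn = sum(s["fn"] for s in studies)
    tn = sum(s["tn"] for s in studies)
    return sensitivity(tp, fn), specificity(tn, fp)

studies = [
    {"tp": 40, "fp": 10, "fn": 10, "tn": 40},  # invented study 1
    {"tp": 30, "fp": 15, "fn": 20, "tn": 35},  # invented study 2
]
sens, spec = pooled(studies)
```

Plotting pooled sensitivity against 1 - specificity across threshold variations is what produces the SROC curve whose area the abstract reports.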
Cognitive impairment in heart failure: issues of measurement and etiology.
Riegel, Barbara; Bennett, Jill A; Davis, Andra; Carlson, Beverly; Montague, John; Robin, Howard; Glaser, Dale
2002-11-01
Clinicians need easy methods of screening for cognitive impairment in patients with heart failure. If correlates of cognitive impairment could be identified, more patients with early cognitive impairment could be treated before the problem interfered with adherence to treatment. To describe cognitive impairment in patients with heart failure, to explore the usefulness of 4 measures of cognitive impairment, and to assess correlates of cognitive impairment. A descriptive, correlational design was used. Four screening measures of cognition were assessed in 42 patients with heart failure: Commands subtest and Complex Ideational Material subtest of the Boston Diagnostic Aphasia Examination, Mini-Mental State Examination, and Draw-a-Clock Test. Cognitive impairment was defined as performance less than the standardized (T-score) cutoff point on at least 1 of the 4 measures. Possible correlates of cognitive impairment included age, education, hypotension, fluid overload (serum osmolality < 269 mOsm/kg), and dehydration (serum osmolality > or = 295 mOsm/kg). Cognitive impairment was detected in 12 (28.6%) of 42 participants. The 4 screening tests varied in effectiveness, but the Draw-a-Clock Test indicated impairment in 50% of the 12 impaired patients. A summed standardized score for the 4 measures was not significantly associated with age, education, hypotension, fluid overload, or dehydration in this sample. Cognitive impairment is relatively common in patients with heart failure. The Draw-a-Clock Test was most useful in detecting cognitive impairment, although it cannot be used to detect problems with verbal learning or delayed recall and should not be used as the sole screening method for patients with heart failure. Correlates of cognitive impairment require further study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rabinovitch, M.A.; Rose, C.P.; Rouleau, J.L.
1987-12-01
In heart failure secondary to chronic mechanical overload, cardiac sympathetic neurons demonstrate depressed catecholamine synthetic and transport function. To assess the potential of sympathetic neuronal imaging for detection of depressed transport function, serial scintigrams were acquired after the intravenous administration of metaiodobenzylguanidine (/sup 131/I) to 13 normal dogs, 3 autotransplanted (denervated) dogs, 5 dogs with left ventricular failure, and 5 dogs with compensated left ventricular hypertrophy due to a surgical arteriovenous shunt. Nine dogs were killed at 14 hours postinjection for determination of metaiodobenzylguanidine (/sup 131/I) and endogenous norepinephrine content in left atrium, left ventricle, liver, and spleen. By 4 hours postinjection, autotransplanted dogs had a 39% reduction in mean left ventricular tracer accumulation, reflecting an absent intraneuronal tracer pool. Failure dogs demonstrated an accelerated early mean left ventricular tracer efflux rate (26.0%/hour versus 13.7%/hour in normals), reflecting a disproportionately increased extraneuronal tracer pool. They also showed reduced late left ventricular and left atrial concentrations of tracer, consistent with a reduced intraneuronal tracer pool. By contrast, compensated hypertrophy dogs demonstrated a normal early mean left ventricular tracer efflux rate (16.4%/hour) and essentially normal late left ventricular and left atrial concentrations of tracer. Metaiodobenzylguanidine (/sup 131/I) scintigraphic findings reflect the integrity of the cardiac sympathetic neuronal transport system in canine mechanical-overload heart failure. Metaiodobenzylguanidine (/sup 123/I) scintigraphy should be explored as a means of early detection of mechanical-overload heart failure in patients.
Takele, Abulie; Gashaw, Ketema; Demelash, Habtamu; Nigatu, Dabere
2016-01-01
Background Treatment failure is defined as progression of disease after initiation of ART, or when the anti-HIV medications can no longer control the infection. One of the major concerns over the rapid scaling-up of ART is the emergence and transmission of HIV drug-resistant strains at the population level due to treatment failure, which could lead to the failure of basic ART programs. This study therefore aimed to investigate the predictors of treatment failure among adult ART clients in Bale Zone hospitals, southeast Ethiopia. Methods A retrospective cohort study was conducted in four hospitals of Bale zone: Goba, Robe, Ginir and Delomena. A total of 4,809 adult ART clients from these four hospitals were included in the analysis. Adherence was measured by the pill-count method. The Kaplan-Meier (KM) curve was used to describe the survival time of ART patients without treatment failure. Bivariate and multivariable Cox proportional hazards regression models were used to identify factors associated with treatment failure. Results The incidence rate of treatment failure was 9.38 (95% CI 7.79–11.30) per 1000 person-years. Male ART clients were more likely to experience treatment failure than females [AHR = 4.49; 95% CI: (2.61–7.73)]. Similarly, a lower CD4 count (<100 cells/mm3) at ART initiation was significantly associated with a higher hazard of treatment failure [AHR = 3.79; 95% CI: (2.46–5.84)]. Bedridden [AHR = 5.02; 95% CI: (1.98–12.73)] and ambulatory [AHR = 2.12; 95% CI: (1.08–4.07)] patients were more likely to experience treatment failure than patients with working functional status. TB co-infected clients also had a higher hazard of treatment failure [AHR = 3.06; 95% CI: (1.72–5.44)], as did patients who developed TB after ART initiation [AHR = 4.35; 95% CI: (1.99–9.54)].
Having another opportunistic infection at ART initiation was also associated with a higher hazard of treatment failure [AHR = 7.0; 95% CI: (3.19–15.37)]. Similarly, fair [AHR = 4.99; 95% CI: (1.90–13.13)] and poor drug adherence [AHR = 2.56; 95% CI: (1.12–5.86)] were significantly associated with treatment failure compared with good adherence. Conclusion The rate of treatment failure in Bale zone hospitals needs attention. Prevention and control of TB and other opportunistic infections, promotion of ART initiation at higher CD4 levels and better functional status, and improvement of drug adherence are important interventions to reduce treatment failure among ART clients in southeastern Ethiopia. PMID:27716827
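The headline incidence rate in the abstract is events per person-time at risk, scaled to 1000 person-years. A minimal sketch, with cohort numbers invented for illustration (the study's actual event count and person-time are not given in the abstract):

```python
# Incidence-rate calculation (failures per 1000 person-years).
# The event count and person-time below are invented examples.

def incidence_rate_per_1000py(n_events, total_person_years):
    """Events divided by person-time at risk, scaled to 1000 person-years."""
    return 1000.0 * n_events / total_person_years

# e.g. 47 treatment failures over 5,010 person-years of follow-up would
# give a rate close to the 9.38 per 1000 person-years reported above:
rate = incidence_rate_per_1000py(47, 5010.0)
```

Using person-time rather than a simple proportion is what lets the Cox models in the study handle clients followed for different lengths of time.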
NASA Technical Reports Server (NTRS)
McCarty, John P.; Lyles, Garry M.
1997-01-01
Propulsion system quality is defined in this paper as high reliability, that is, a high probability of within-tolerance performance or operation. Since failures are out-of-tolerance performance, the probability of failures and their occurrence is what separates high-quality from low-quality systems. Failures can be described at three levels: the system failure (the detectable end of a failure), the failure mode (the failure process), and the failure cause (the start). Failure causes can be evaluated and classified by type. Typing the failures in flight history shows that most failures occur in unrecognized modes and result from human error or noise; in other words, failures are how engineers learn how things really work. Although the study is based on US launch vehicles, a sampling of failures from other countries indicates that the finding has broad application. The design parameters of a propulsion system are not single-valued but have dispersions associated with the manufacturing of parts. If the dispersions are large relative to the tolerances, many tests are needed to find failures, which could contribute to the large number of failures in unrecognized modes.
Success and Failure in Adult Education: The Immigrant Experience 1914-1924.
ERIC Educational Resources Information Center
Seller, Maxine S.
The educational experience of adult immigrants to the United States between 1914-24 is discussed. Attempts of educators and Americanization agencies to reach adult immigrants are described and reasons for the failure of these attempts are given, including inadequate funding, narrowness in subject matter and methods, and insensitivity to ethnic…
Construction of Academic Success and Failure in School Memories
ERIC Educational Resources Information Center
Kaya, Gamze Inan
2018-01-01
The idea of "Apprenticeship of Observation", proposing that pre-service teachers' early academic experiences might have effects on their professional development, has been a concern in teacher education in the last forty years. Early success or failure experiences of pre-service teachers in school may have a role in their professional…
Expert systems for automated maintenance of a Mars oxygen production system
NASA Astrophysics Data System (ADS)
Huang, Jen-Kuang; Ho, Ming-Tsang; Ash, Robert L.
1992-08-01
Application of expert system concepts to a breadboard Mars oxygen processor unit has been studied and tested. The research was directed toward developing the methodology required to enable autonomous operation and control of these simple chemical processors at Mars. Failure detection and isolation was the key area of concern, and schemes using forward chaining, backward chaining, knowledge-based expert systems, and rule-based expert systems were examined. Tests and simulations investigated self-health checkout, emergency shutdown, and fault detection, in addition to normal control activities. A dynamic system model was developed using the bond-graph technique. The dynamic model agreed well with tests involving sudden reductions in throughput; however, nonlinear effects were observed during tests that incorporated step-function increases in flow variables. Computer simulations and experiments have demonstrated the feasibility of expert systems utilizing rule-based diagnosis and decision-making algorithms.
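Forward chaining, one of the inference schemes examined above, can be shown in a toy rule engine: starting from observed sensor facts, rules fire whenever all their premises are known, until no new conclusions appear. The rules and sensor names below are invented for illustration and are not taken from the Mars oxygen processor unit.

```python
# Toy forward-chaining inference for rule-based fault diagnosis.
# Rules are (premises, conclusion) pairs; the engine fires any rule whose
# premises are all established facts, repeating until a fixpoint.

def forward_chain(facts, rules):
    """Derive all conclusions reachable from the initial facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Invented diagnostic rules for a generic gas-processing loop:
RULES = [
    ({"low_flow", "pump_on"}, "possible_blockage"),
    ({"possible_blockage", "high_upstream_pressure"}, "filter_fault"),
    ({"filter_fault"}, "shutdown_advised"),
]

facts = forward_chain({"low_flow", "pump_on", "high_upstream_pressure"}, RULES)
```

Backward chaining would run the same rule base in the opposite direction, starting from a hypothesis such as `"filter_fault"` and checking whether its premises can be established.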
Monitoring robot actions for error detection and recovery
NASA Technical Reports Server (NTRS)
Gini, M.; Smith, R.
1987-01-01
Reliability is a serious problem in computer-controlled robot systems. Although robots serve successfully in relatively simple applications such as painting and spot welding, their potential in areas such as automated assembly is hampered by programming problems. A program for assembling parts may be logically correct, execute correctly on a simulator, and even execute correctly on a robot most of the time, yet still fail unexpectedly in the face of real-world uncertainties. Recovery from such errors is far more complicated than recovery from simple controller errors, since even expected errors can often manifest themselves in unexpected ways. Here, a novel approach to improving robot reliability is presented. Instead of anticipating errors, the researchers use knowledge-based programming techniques so that the robot can autonomously exploit knowledge about its task and environment to detect and recover from failures. They describe a preliminary experiment with a system that they designed and constructed.
Detection of Local Temperature Change on HTS Cables via Time-Frequency Domain Reflectometry
NASA Astrophysics Data System (ADS)
Bang, Su Sik; Lee, Geon Seok; Kwon, Gu-Young; Lee, Yeong Ho; Ji, Gyeong Hwan; Sohn, Songho; Park, Kijun; Shin, Yong-June
2017-07-01
High temperature superconducting (HTS) cables are drawing attention as transmission and distribution cables for future grids, and research on HTS cables has been conducted actively. As HTS cables have reached the demonstration stage, failures of the cooling systems, which induce quench phenomena in the cables, have become significant. Several diagnostic methods for HTS cables have been developed, but they still face limitations in their experimental setups. In this paper, a non-destructive diagnostic technique for detecting the point of local temperature change is proposed. A simulation model of HTS cables with a local temperature change point is also presented to verify the proposed diagnosis. The performance of the diagnosis is checked by comparative analysis of the simulation results against experimental results from a real-world HTS cable. The suggested simulation model and diagnosis are expected to contribute to the commercialization of HTS cables in the power grid.
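The localization idea underlying reflectometry diagnostics can be sketched with plain time-domain cross-correlation: find the delay at which the reflected echo best matches the injected probe, then convert the round-trip delay to a distance. This is a deliberately simplified toy; actual time-frequency domain reflectometry uses chirp probes and a time-frequency cross-correlation, and all signal values and cable parameters below are invented.

```python
# Toy reflectometry localization: cross-correlate a probe signal against a
# recorded trace to find the echo delay, then convert delay to distance.
# Numbers (sample rate, propagation velocity, signals) are invented.

def xcorr_peak_lag(reference, received):
    """Lag (in samples) at which the received trace best matches the probe."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(received) - len(reference) + 1):
        score = sum(r * received[lag + i] for i, r in enumerate(reference))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

def fault_distance_m(lag_samples, sample_rate_hz, velocity_m_s):
    """Round-trip delay converted to one-way distance to the reflection point."""
    return 0.5 * velocity_m_s * lag_samples / sample_rate_hz

probe = [1.0, -1.0, 1.0]
trace = [0.0] * 40 + [0.6, -0.6, 0.6] + [0.0] * 10   # attenuated echo at lag 40
lag = xcorr_peak_lag(probe, trace)
dist = fault_distance_m(lag, sample_rate_hz=1e9, velocity_m_s=2.0e8)
```

A local temperature change alters the cable's local impedance, which is what produces the partial reflection that such a correlation peak localizes.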
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, D; Vile, D; Rosu, M
Purpose: To assess the correct implementation of the risk-based methodology of TG-100 for optimizing quality management and patient safety procedures in Stereotactic Body Radiation Therapy (SBRT). Methods: A detailed process map of the SBRT treatment procedure was generated by a team of three physicists with varying clinical experience at our institution to assess the potential high-risk failure modes. The probabilities of occurrence (O), severity (S) and detectability (D) for each potential failure mode in each step of the process map were assigned independently by these individuals on a scale from 1 to 10. The risk priority numbers (RPN) were computed and analyzed, and the 30 highest-ranked potential failure modes from each physicist's analysis were then compared. Results: The RPN values assessed by the three physicists ranged from 30 to 300. The magnitudes of the RPN values differed between physicists, and there was no concordance among the highest RPN values recorded independently by the three physicists. The 10 highest RPN values belonged to sub-steps of CT simulation, contouring and delivery in the SBRT process map. For these 10 highest RPN values, at least two physicists, irrespective of their length of experience, showed concordance, but no general conclusions emerged. Conclusion: This study clearly shows that risk-based assessment of a clinical process map requires a great deal of preparation, group discussion, and participation by all stakeholders. One group alone, albeit physicists, cannot effectively implement the risk-based methodology proposed by TG-100; it should be a team effort in which physicists can certainly play the leading role. This also corroborates the TG-100 recommendation that risk-based assessment of clinical processes is a multidisciplinary team effort.
Health Monitoring of a Rotating Disk Using a Combined Analytical-Experimental Approach
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Woike, Mark R.; Lekki, John D.; Baaklini, George Y.
2009-01-01
Rotating disks undergo rigorous mechanical loading conditions that make them subject to a variety of failure mechanisms leading to structural deformities and cracking. During operation, periodic loading fluctuations and other related factors cause fractures and hidden internal cracks that can only be detected via noninvasive health monitoring and/or nondestructive evaluation. These evaluations further inspect material discontinuities and other irregularities that have grown into critical defects that can lead to failure. Hence, the objective of this work is to conduct a combined analytical and experimental study to present a well-rounded structural assessment of a rotating disk by means of a health monitoring approach, and to appraise the capabilities of an in-house rotor spin system. The analyses utilized the finite element method to analyze the disk with and without an induced crack at different loading levels, at rotational speeds from 3000 up to 10,000 rpm. A parallel experiment spun the disk at the desired speeds in an attempt to correlate the experimental findings with the analytical results. The testing involved spin experiments covering the rotor in both damaged and undamaged (i.e., notched and unnotched) states. Damaged disks had artificially induced through-thickness flaws in the web region ranging from 2.54 to 5.08 cm (1 to 2 in.) in length. This study aims to identify defects greater than 1.27 cm (0.5 in.), applying available means of structural health monitoring and nondestructive evaluation, and documenting the failure mechanisms experienced by the rotor system under typical turbine engine operating conditions.
Spink, Kevin S; Brawley, Lawrence R; Gyurcsik, Nancy C
2016-10-01
The relationship between attributional dimensions women assign to the cause of their perceived success or failure at meeting the recommended physical activity dose and self-regulatory efficacy for future physical activity was examined among women with arthritis. Women (N = 117) aged 18-84 years, with self-reported medically-diagnosed arthritis, completed on-line questions in the fall of 2013 assessing endurance physical activity, perceived outcome for meeting the recommended levels of endurance activity, attributions for one's success or failure in meeting the recommendations, and self-regulatory efficacy to schedule/plan endurance activity over the next month. The main theoretically-driven finding revealed that the interaction of the stability dimension with perceived success/failure was significantly related to self-regulatory efficacy for scheduling and planning future physical activity (β = 0.35, p = .002). Outcomes attributed to more versus less stable factors accentuated differences in self-regulatory efficacy beliefs following perceived success and failure at being active. It appears that attributional dimensions were associated with self-regulatory efficacy in women with arthritis. This suggests that rather than objectively observed past mastery experience, women's subjective perceptions and explanations of their past experiences were related to efficacy beliefs, especially following a failure experience.
Ampoule failure sensor time response testing: Experiment 1
NASA Technical Reports Server (NTRS)
Johnson, M. L.; Watring, D. A.
1994-01-01
The response time of an ampoule failure sensor exposed to liquid or vapor gallium arsenide (GaAs) is investigated. The experimental configuration represents the sample/ampoule cartridge assembly used in NASA's Crystal Growth Furnace (CGF). The sensor is a chemical fuse made from a metal with which the semiconductor material reacts more rapidly than it does with the containing cartridge. For the III-V compound GaAs, platinum was chosen because platinum reacts with arsenic at elevated temperatures to form a low-melting eutectic. Ampoule failure is indicated by a step change in the resistance of the failure sensor on the order of megohms. The sensors will increase the safety of crystal growth experiments by providing an indication that an ampoule has failed. Experimental results indicate that the response times (after a known ampoule failure) for the 0.003 and 0.010 inch ampoule failure sensors are 2.4 and 3.6 minutes, respectively. This ampoule failure sensor will be used in the CGF during the second United States Microgravity Laboratory mission (USML-2) and is the subject of a NASA patent application.
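The fuse-style detection logic described above reduces to a threshold test: an intact sensor reads near-zero resistance, and a failed ampoule produces a megohm-scale step change. A minimal sketch, assuming hypothetical names and an illustrative 1 MΩ threshold (neither taken from the experiment):

```python
FAILURE_THRESHOLD_OHMS = 1e6  # step change "on the order of megohms"

def detect_ampoule_failure(resistance_log):
    """Return the index of the first sample whose resistance exceeds
    the failure threshold, or None if the ampoule stayed intact."""
    for i, r in enumerate(resistance_log):
        if r >= FAILURE_THRESHOLD_OHMS:
            return i
    return None

# An intact run followed by a step change at sample 3 (illustrative data):
readings = [0.8, 1.1, 0.9, 2.4e6, 2.5e6]  # ohms
assert detect_ampoule_failure(readings) == 3
```

The reported 2.4- and 3.6-minute response times would correspond to the delay between the actual breach and the sample at which this threshold is first crossed.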
Is Non-Completion a Failure or a New Beginning? Research Non-Completion from a Student's Perspective
ERIC Educational Resources Information Center
McCormack, Coralie
2005-01-01
Today's performance-driven model of higher degree research has constructed student withdrawal and non-completion as failure. This failure is often internalized by the student as their own failure. This paper draws on a longitudinal study that examined the experiences of four female Master's by Research degree students--Anna, Carla, Grace and…
NASA Astrophysics Data System (ADS)
Xu, Yuan; Dai, Feng
2018-03-01
A novel method is developed for characterizing the mechanical response and failure mechanism of brittle rocks under dynamic compression-shear loading: an inclined cylinder specimen tested in a modified split Hopkinson pressure bar (SHPB) system. With the specimen axis inclined to the loading direction of the SHPB, a shear component can be introduced into the specimen. Both static and dynamic experiments are conducted on sandstone specimens. With careful pulse shaping, the dynamic equilibrium of the inclined specimens can be satisfied, and thus the quasi-static data reduction is employed. The normal and shear stress-strain relationships of the specimens are subsequently established. The progressive failure process of the specimen, illustrated via high-speed photographs, manifests a mixed failure mode combining shear-dominated failure and localized tensile damage. The elastic and shear moduli exhibit a certain loading-path dependence under quasi-static loading but loading-path insensitivity under high loading rates. Loading-rate dependence is evident in the failure characteristics, including fragmentation, compressive and shear strengths, and failure surfaces based on the Drucker-Prager criterion. Our proposed method is convenient and reliable for studying the dynamic response and failure mechanism of rocks under combined compression-shear loading.
Quality control of inkjet technology for DNA microarray fabrication.
Pierik, Anke; Dijksman, Frits; Raaijmakers, Adrie; Wismans, Ton; Stapert, Henk
2008-12-01
A robust manufacturing process is essential to make high-quality DNA microarrays, especially for use in diagnostic tests. We investigated different failure modes of the inkjet printing process used to manufacture low-density microarrays. A single-nozzle inkjet spotter was fitted with two optical imaging systems that monitor the flight path of every droplet in real time. If a droplet emission failure is detected, the printing process is automatically stopped. We analyzed over 1.3 million droplets. This information was used to investigate the performance of the inkjet system and to obtain detailed insight into the frequency and causes of jetting failures. Of all the substrates investigated, 96.2% were produced without any system or jetting failures. In 1.6% of the substrates, droplet emission failed and was correctly identified, and appropriate measures could then be taken to get the process back on track. In 2.2%, the imaging systems failed while droplet emission occurred correctly. In 0.1% of the substrates, a droplet emission failure occurred that was not detected in time. Thus, the overall yield of the microarray manufacturing process was 99.9%, which is highly acceptable for prototyping.
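The yield arithmetic above follows directly from the reported failure-mode fractions: only the undetected emission failures (0.1%) produce defective shipped substrates, so the yield is everything else. A short check using the abstract's own numbers (variable names are ours):

```python
# Failure-mode fractions reported in the abstract.
no_failure       = 0.962  # no system or jetting failure
detected_failure = 0.016  # droplet emission failed and was caught in time
imaging_failure  = 0.022  # imaging failed while droplets were fine
undetected       = 0.001  # emission failure that went undetected

# Only undetected emission failures yield bad microarrays that pass QC,
# so the overall process yield is the complement of that fraction.
yield_fraction = 1.0 - undetected
assert abs(yield_fraction - 0.999) < 1e-9  # the reported 99.9% yield

# Note: the four rounded percentages sum to 100.1% due to rounding.
total = no_failure + detected_failure + imaging_failure + undetected
```

This also makes the design choice explicit: substrates where the imaging system (not the printer) failed are counted as good product, since droplet emission actually occurred correctly.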
NASA Astrophysics Data System (ADS)
Sassa, S.
2017-12-01
This presentation shows some recent research advances in tsunami-seabed-structure interaction following the 2011 Tohoku Earthquake Tsunami, Japan. It presents a concise summary and discussion of utilizing a geotechnical centrifuge and a large-scale hydro flume for the modelling of tsunami-seabed-structure interaction. I highlight here the role of tsunami-induced seepage in piping/boiling, erosion, and bearing capacity decrease and failure of the rubble/seabed foundation. A comparison and discussion are made on the stability assessment for the design of tsunami-resistant structures on the basis of the results from both geo-centrifuge and large-scale hydrodynamic experiments. The concurrent processes of the instability, involving the scour of the mound/sandy seabed, bearing capacity failure and flow of the foundation, and the failure of caisson breakwaters under tsunami overflow and seepage coupling, are made clear in this presentation. Three series of experiments were conducted under fifty gravities. The first series of experiments targeted the instability of the mounds themselves, and the second series clarified how the mound scour would affect the overall stability of the caissons. The third series examined the effect of a countermeasure on the basis of the results from the first two series. The experimental results first demonstrated that the coupled overflow-seepage actions promoted the development of the mound scour significantly and caused bearing capacity failure of the mound, resulting in the total failure of the caisson breakwater, which otherwise remained stable without the coupling effect. The velocity vectors obtained from the high-resolution image analysis illustrated the series of such concurrent scour/bearing-capacity-failure/flow processes leading to the instability of the breakwater.
The stability of the breakwaters was significantly improved with decreasing hydraulic gradient underneath the caissons due to an embankment effect. These findings elucidate the crucial role of overflow/seepage coupling in tsunami-seabed-structure interaction from both geotechnical and hydrodynamic perspectives, as an interdisciplinary tsunami science, warranting an enhanced disaster resilience.
Structures and geriatrics from a failure analysis experience viewpoint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopper, D.M.
In a failure analysis consulting engineering practice one sees a variety of structural failures from which observations may be made concerning geriatric structures. Representative experience with power plants, refineries, offshore structures, and forensic investigations is summarized, and generic observations are made regarding the maintenance of fitness for purpose of structures. Although it is important to optimize the engineering design for a range of operational and environmental variables, it is essential that fabrication and inspection controls exist, along with common-sense-based ongoing monitoring and operations procedures. 18 figs.
Habitat selection and overlap of Atlantic salmon and smallmouth bass juveniles in nursery streams
Wathen, G.; Coghlan, S.M.; Zydlewski, Joseph D.; Trial, J.G.
2011-01-01
Introduced smallmouth bass Micropterus dolomieu have invaded much of the historic freshwater habitat of Atlantic salmon Salmo salar in North America, yet little is known about the ecological interactions between the two species. We investigated the possibility of competition for habitat between age-0 Atlantic salmon and age-0 and age-1 smallmouth bass by means of in situ observations and a mesocosm experiment. We used snorkel observation to identify the degree and timing of overlap in habitat use in our in situ observations and to describe habitat shifts by Atlantic salmon in the presence of smallmouth bass in our mesocosm experiments. In late July 2008, we observed substantial overlap in the depths and mean water column velocities used by both species in sympatric in situ conditions and an apparent shift by age-0 Atlantic salmon to shallower water that coincided with the period of high overlap. In the mesocosm experiments, we detected no overlap or habitat shifts by age-0 Atlantic salmon in the presence of age-1 smallmouth bass, and low overlap and no habitat shifts of Atlantic salmon and age-0 smallmouth bass in fall 2009. In 2009, summer floods with sustained high flows and low temperatures resulted in the nearly complete reproductive failure of the smallmouth bass in our study streams, and we did not observe a midsummer habitat shift by Atlantic salmon similar to that seen in 2008. Although this prevented us from replicating our 2008 experiments under similar conditions, the virtual year-class failure of smallmouth bass itself is enlightening. We suggest that future studies incorporate the effects of varying temperature and discharge to determine how abiotic factors affect the interactions between these species and thus mediate the outcomes of potential competition.
NASA Astrophysics Data System (ADS)
Bell, Andrew F.; Naylor, Mark; Heap, Michael J.; Main, Ian G.
2011-08-01
Power-law accelerations in the mean rate of strain, earthquakes and other precursors have been widely reported prior to material failure phenomena, including volcanic eruptions, landslides and laboratory deformation experiments, as predicted by several theoretical models. The Failure Forecast Method (FFM), which linearizes the power-law trend, has been routinely used to forecast the failure time in retrospective analyses; however, its performance has never been formally evaluated. Here we use synthetic and real data, recorded in laboratory brittle creep experiments and at volcanoes, to show that the assumptions of the FFM are inconsistent with the error structure of the data, leading to biased and imprecise forecasts. We show that a Generalized Linear Model method provides higher-quality forecasts that converge more accurately to the eventual failure time, accounting for the appropriate error distributions. This approach should be employed in place of the FFM to provide reliable quantitative forecasts and estimate their associated uncertainties.
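The classical FFM procedure that this abstract critiques can be sketched briefly. For the common power-law exponent p = 1, the precursor rate grows as k/(t_f - t), so the inverse rate decays linearly to zero at the failure time t_f; the FFM fits a straight line to the inverse rate and forecasts t_f at its x-intercept. A minimal illustration (all names ours; the ordinary least-squares fit shown here embodies exactly the error-structure assumption the authors argue is inappropriate):

```python
import numpy as np

def ffm_forecast(times, rates):
    """Forecast failure time as the x-intercept of a straight line
    fitted to the inverse precursor rate (valid for exponent p = 1)."""
    inv_rate = 1.0 / np.asarray(rates, dtype=float)
    slope, intercept = np.polyfit(np.asarray(times, dtype=float), inv_rate, 1)
    return -intercept / slope  # time at which the inverse rate reaches zero

# Synthetic accelerating precursor: rate = k / (t_f - t) with t_f = 100.
t_f, k = 100.0, 50.0
t = np.arange(0.0, 90.0, 5.0)
rate = k / (t_f - t)
assert abs(ffm_forecast(t, rate) - t_f) < 1e-6
```

With noise-free synthetic data the forecast recovers t_f exactly; the paper's point is that with realistic (e.g. Poisson-distributed) count data, this least-squares linearization yields biased and imprecise forecasts, motivating their Generalized Linear Model alternative.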
Using diagnostic experiences in experience-based innovative design
NASA Astrophysics Data System (ADS)
Prabhakar, Sattiraju; Goel, Ashok K.
1992-03-01
Designing a novel class of devices requires innovation. Often, the design knowledge of these devices does not identify and address the constraints that are required for their performance in the real-world operating environment, so any new design adapted from these devices tends to be similarly sketchy. To address this problem, we propose a case-based reasoning method called performance-driven innovation (PDI). We model the design as a dynamic process, arrive at a design by adaptation from the known designs, generate failures for this design for some new constraints, and then use this failure knowledge to generate the required design knowledge for the new constraints. In this paper, we discuss two aspects of PDI: the representation of PDI cases and the translation of failure knowledge into design knowledge for a constraint. Each case in PDI has two components, design knowledge and failure knowledge, both represented using a substance-behavior-function model. Failure knowledge comprises internal device failure behaviors and external environmental behaviors. For a given constraint, the environmental behavior interacts with the design behaviors to produce the internal failure behavior. The failure adaptation strategy generates, from the failure knowledge, functions that can be addressed using routine design methods. These ideas are illustrated using a coffee-maker example.
Studies on Automobile Clutch Release Bearing Characteristics with Acoustic Emission
NASA Astrophysics Data System (ADS)
Chen, Guoliang; Chen, Xiaoyang
Automobile clutch release bearings are important automotive driveline components. For the clutch release bearing, early fatigue failure diagnosis is significant, but the early fatigue failure response signal is not obvious, because failure signals are susceptible to noise on the transmission path and to interference from working-environment factors. With improvements in vehicle design, clutch release bearing fatigue life has increasingly become an important requirement. Contact fatigue is the main failure mode of release rolling bearing components. Acoustic emission techniques have unique advantages in contact fatigue failure detection, being highly sensitive and nondestructive. When a bearing is monitored with the acoustic emission technique, signals are collected from multiple sensors; each signal contains partial fault information, and the fault information in the signals overlaps. Integrating the source information received simultaneously by the sensors into a complete acoustic emission signal of the rolling bearing fault is therefore the key issue for accurate fault diagnosis. A release bearing comprises an outer ring, an inner ring, rolling balls, and a cage. When a failure (such as cracking or pitting) occurs, the other components impact the damaged point and produce an acoustic emission signal. The acoustic emission from a release bearing propagates mainly as a Rayleigh wave: elastic waves emitted from the source are scattered at the surfaces of the bearing parts. Dynamic simulation of rolling bearing failure will contribute to a more in-depth understanding of failure characteristics, providing a theoretical basis and foundation for the monitoring and fault diagnosis of rolling bearings.
[Pulse wave velocity as an early marker of diastolic heart failure in patients with hypertension].
Moczulska, Beata; Kubiak, Monika; Bryczkowska, Anna; Malinowska, Ewa
2017-04-21
According to the WHO, hypertension is one of the major causes of death worldwide. It leads to a number of severe complications. Diastolic heart failure, that is, heart failure with preserved ejection fraction (HFPEF), is especially common. New but simple indices for the early detection of patients who have not yet developed complications, or whose complications are in their early developmental stages, are still being sought. The aim of this study is to examine the correlation between pulse wave velocity (PWV) and markers of diastolic heart failure (DHF) assessed by echocardiography in patients with hypertension and no symptoms of heart failure. The study comprised 65 patients with treated hypertension. Patients with symptoms of heart failure, those with diabetes, and smokers were excluded. Arterial stiffness was measured with the Mobil-O-Graph NG PWA, and pulse wave velocity (PWV) was estimated. The following markers of diastolic heart failure were assessed in the echocardiographic examination: the E/A ratio (the ratio of the early (E) to late (A) ventricular filling velocities), DT (deceleration time), and E/E' (the ratio of the mitral peak velocity of early filling (E) to the early diastolic mitral annular velocity (E') in tissue Doppler echocardiography). PWV was statistically significantly higher in the DHF group. In the group of patients with heart failure, the average E/A ratio was significantly lower than in the group with no heart failure. Oscillometric measurement of pulse wave velocity is non-invasive, takes a few minutes, and does not require the presence of a specialist. It allows for the early detection of patients at risk of diastolic heart failure even within primary health care.
For biomonitoring efforts aimed at early detection of aquatic invasive species (AIS), the ability to detect rare individuals is key and requires accurate species level identification to maintain a low occurrence probability of non-detection errors (failure to detect a present spe...
Model-Biased, Data-Driven Adaptive Failure Prediction
NASA Technical Reports Server (NTRS)
Leen, Todd K.
2004-01-01
This final report, which contains a research summary and a viewgraph presentation, addresses clustering and data simulation techniques for failure prediction. The researchers applied their techniques to both helicopter gearbox anomaly detection and segmentation of Earth Observing System (EOS) satellite imagery.
Macki, Mohamed; Syeda, Sbaa; Kerezoudis, Panagiotis; Bydon, Ali; Witham, Timothy F; Sciubba, Daniel M; Wolinsky, Jean-Paul; Bydon, Mohamad; Gokaslan, Ziya
2016-10-01
The objective of this independent study is to determine the impact of recombinant human bone morphogenetic protein 2 (rhBMP-2) on reoperation for pseudarthrosis and/or instrumentation failure. A nested case-control study of first-time posterolateral, instrumented fusion of the lumbar spine for degenerative spinal disease was undertaken. Cases of reoperation for pseudarthrosis and/or instrumentation failure were matched with controls, who did not experience the primary outcome measure at the time of reoperation. Cases and controls were matched on the number of interspaces fused and the inclusion of an interbody device. Predictors of reoperation for pseudarthrosis and/or instrumentation failure were assessed with a conditional logistic regression controlling for rhBMP-2, age, obesity, and smoking. Of the 448 patients, 155 cases of reoperation for pseudarthrosis and/or instrumentation failure were matched with 293 controls. Twenty-six percent of first-time surgeries included rhBMP-2, which was statistically more commonly used in the control cohort (33.11%) than in the case cohort (12.90%) (unadjusted odds ratio [ORunadj] = 0.28; 95% confidence interval [CI]: 0.16-0.49). Following a multivariate analysis controlling for age, obesity, and smoking, the rhBMP-2 recipients incurred a 73% lower odds of reoperation for pseudarthrosis and/or instrumentation failure (95% CI: 0.15-0.48). Neither sarcomatous nor osseous neoplasm was detected in the study population. Mean follow-up did not differ between the cases (81.57 ± standard deviation [SD] 4.98 months) and controls (74.75 ± 2.49 months) (ORunadj = 1.01; 95% CI: 1.00-1.01). rhBMP-2 in lumbar fusion constructs protects against reoperation for pseudarthrosis and/or instrumentation failure. However, the decision to include fusion supplements should be weighed between surgical determinants and clinical outcomes. Copyright © 2016 Elsevier Ltd. All rights reserved.
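The exposure percentages above imply the counts behind the unadjusted odds ratio: about 20 of 155 cases (12.90%) and 97 of 293 controls (33.11%) received rhBMP-2. A back-of-envelope crude 2x2 odds ratio lands close to the reported 0.28; the small difference is expected because the study's estimate came from a matched (conditional) analysis rather than this crude table. A sketch, with counts inferred by us from the percentages:

```python
def odds_ratio(exposed_cases, n_cases, exposed_controls, n_controls):
    """Crude odds ratio for a 2x2 exposure/outcome table."""
    a, b = exposed_cases, n_cases - exposed_cases            # case row
    c, d = exposed_controls, n_controls - exposed_controls   # control row
    return (a / b) / (c / d)

# Counts inferred from the abstract's percentages (not stated directly).
or_crude = odds_ratio(20, 155, 97, 293)
assert round(20 / 155, 4) == 0.129    # matches the reported 12.90%
assert round(97 / 293, 4) == 0.3311   # matches the reported 33.11%
assert 0.25 < or_crude < 0.35         # near the reported ORunadj of 0.28
```

An odds ratio below 1 here means rhBMP-2 exposure is associated with lower odds of the reoperation outcome, consistent with the abstract's protective conclusion.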
Incipient failure detection (IFD) of SSME ball bearings
NASA Technical Reports Server (NTRS)
1982-01-01
Because of the immense noise background during the operation of a large engine such as the SSME, the relatively low-level, unique ball bearing signatures were often buried by the overall machine signal. As a result, the most commonly used bearing failure detection technique, pattern recognition using power spectral densities (PSD) constructed from the extracted bearing signals, is rendered useless. Data enhancement was carried out using an HP5451C Fourier Analyzer. The signal was preprocessed by a Digital Audio Corp. DAC-1024I noise-cancelling filter in order to estimate the desired signal corrupted by the background noise. Reference levels of good bearings were established; any deviation of bearing signals from these reference levels indicates an incipient bearing failure.
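The detection scheme described above, in its simplest form, compares a test spectrum against a reference PSD from a known-good bearing and flags any band that deviates beyond a margin. A hedged sketch: the plain periodogram, the synthetic signals, and the 10x margin are our illustrative assumptions, not values from the SSME study.

```python
import numpy as np

def psd(signal, fs):
    """One-sided periodogram estimate of the power spectral density."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / (fs * n)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spectrum

def deviates_from_reference(ref_psd, test_psd, margin=10.0):
    """True if any frequency bin exceeds the good-bearing reference
    level by more than `margin` times."""
    return bool(np.any(test_psd > margin * (ref_psd + 1e-12)))

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
good = np.sin(2 * np.pi * 60 * t) + 0.1 * rng.standard_normal(t.size)
bad = good + 2.0 * np.sin(2 * np.pi * 157 * t)  # emerging defect tone

_, ref_psd = psd(good, fs)
_, bad_psd = psd(bad, fs)
assert not deviates_from_reference(ref_psd, ref_psd)
assert deviates_from_reference(ref_psd, bad_psd)
```

In practice the SSME work required the noise-cancelling preprocessing step first, precisely because the defect tones were buried far below the machine background rather than standing clear of it as in this toy example.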
Using process groups to implement failure detection in asynchronous environments
NASA Technical Reports Server (NTRS)
Ricciardi, Aleta M.; Birman, Kenneth P.
1991-01-01
Agreement on the membership of a group of processes in a distributed system is a basic problem that arises in a wide range of applications. Such groups occur when a set of processes cooperate to perform some task, share memory, monitor one another, subdivide a computation, and so forth. The group membership problem is discussed as it relates to failure detection in asynchronous, distributed systems. A rigorous, formal specification for group membership is presented under this interpretation. A solution is then presented for this problem.
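An illustrative sketch of the kind of heartbeat-style failure detector such group-membership services build on: a process is declared "suspected" once its last heartbeat is older than a timeout. In a truly asynchronous system this can only suspect, never prove, a crash (a slow process is indistinguishable from a crashed one), which is exactly why the membership itself must be agreed on by the group. All names and the timeout value are our assumptions, not the paper's protocol.

```python
class HeartbeatFailureDetector:
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, process, now):
        """Record that `process` was heard from at time `now`."""
        self.last_seen[process] = now

    def suspected(self, now):
        """Processes whose last heartbeat is older than the timeout."""
        return {p for p, t in self.last_seen.items()
                if now - t > self.timeout}

detector = HeartbeatFailureDetector(timeout=3.0)
detector.heartbeat("p1", now=0.0)
detector.heartbeat("p2", now=2.0)
assert detector.suspected(now=4.0) == {"p1"}            # p1 silent for 4 s
assert detector.suspected(now=6.0) == {"p1", "p2"}      # both timed out
```

A membership protocol layered on top would then run agreement among the unsuspected processes so that all members install the same view, even when suspicions are mistaken.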
NASA Technical Reports Server (NTRS)
Merrill, W. C.; Delaat, J. C.
1986-01-01
An advanced sensor failure detection, isolation, and accommodation (ADIA) algorithm has been developed for use with an aircraft turbofan engine control system. In a previous paper the authors described the ADIA algorithm and its real-time implementation. Subsequent improvements made to the algorithm and implementation are discussed, and the results of an evaluation are presented. The evaluation used a real-time, hybrid computer simulation of an F100 turbofan engine.
Study of SEM induced current and voltage contrast modes to assess semiconductor reliability
NASA Technical Reports Server (NTRS)
Beall, J. R.
1976-01-01
The purpose of the scanning electron microscopy study was to review the failure history of existing integrated circuit technologies to identify predominant failure mechanisms, and to evaluate the feasibility of their detection using SEM application techniques. The study investigated the effects of E-beam irradiation damage and contamination deposition rates; developed the necessary methods for applying the techniques to the detection of latent defects and weaknesses in integrated circuits; and made recommendations for applying the techniques.
Free-Swinging Failure Tolerance for Robotic Manipulators. Degree awarded by Purdue Univ.
NASA Technical Reports Server (NTRS)
English, James
1997-01-01
Under this GSRP fellowship, software-based failure-tolerance techniques were developed for robotic manipulators. The focus was on failures characterized by the loss of actuator torque at a joint, called free-swinging failures. The research results spanned many aspects of the free-swinging failure-tolerance problem, from preparing for an expected failure to discovery of postfailure capabilities to establishing efficient methods to realize those capabilities. Developed algorithms were verified using computer-based dynamic simulations, and these were further verified using hardware experiments at Johnson Space Center.
Shifting and Sharing: Academic Physicians' Strategies for Navigating Underperformance and Failure.
LaDonna, Kori A; Ginsburg, Shiphra; Watling, Christopher
2018-05-22
Medical practice is uncertain and complex. Consequently, even outstanding performers will inevitably experience moments of underperformance and failure. Coping relies on insight and resilience. However, how physicians develop and use these skills to navigate struggle remains underexplored. A better understanding may reveal strategies to support both struggling learners and stressed practitioners. In 2015, 28 academic physicians were interviewed about their experiences with underperformance or failure. Constructivist grounded theory informed data collection and analysis. Participants' experiences with struggle ranged from patient errors and academic failures to frequent, smaller moments of interpersonal conflict and work-life imbalance. To buffer impact, participants sometimes shifted their focus to an aspect of their identity where they felt successful. Additionally, while participants perceived that insight develops by acknowledging and reflecting on error, they sometimes deflected blame for performance gaps. More often, participants seemed to accept personal responsibility while simultaneously sharing accountability for underperformance or failure with external forces. Paradoxically, participants perceived learners who used these strategies as lacking in insight. Participants demonstrated the protective and functional value of distributing responsibility for underperformance and failure. Shifting and sharing may be an element of reflection and resilience; recognizing external factors may provide a way to gain perspective and to preserve the self. However, this strategy challenges educators' assumptions that learners who deflect are avoiding personal responsibility. The authors' findings raise questions about what it means to be resilient, and how assumptions about learners' responses to failure may affect strategies to support underperforming learners.
Tutoring for Success: Empowering Graduate Nurses After Failure on the NCLEX-RN.
Lutter, Stacy L; Thompson, Cheryl W; Condon, Marian C
2017-12-01
Failure on the National Council Licensure Examination for Registered Nurses (NCLEX-RN) is a devastating experience. Most research related to NCLEX-RN is focused on predicting and preventing failure. Despite these efforts, more than 20,000 nursing school graduates experience failure on the NCLEX-RN each year, and there is a paucity of literature regarding remediation after failure. The aim of this article is to describe an individualized tutoring approach centered on establishing a trusting relationship and incorporating two core strategies for remediation: the nugget method, and a six-step strategy for question analysis. This individualized tutoring method has been used by three nursing faculty with a 95% success rate on an NCLEX retake attempt. Further research is needed to identify the elements of this tutoring method that influence success. [J Nurs Educ. 2017;56(12):758-761.]. Copyright 2017, SLACK Incorporated.
Wingham, Jennifer; Harding, Geoff; Britten, Nicky; Dalal, Hayes
2014-06-01
To develop a model of heart failure patients' attitudes, beliefs, expectations, and experiences based on published qualitative research that could influence the development of self-management strategies. A synthesis of 19 qualitative research studies using the method of meta-ethnography. This synthesis offers a conceptual model of the attitudes, beliefs, and expectations of patients with heart failure. Patients experienced a sense of disruption before developing a mental model of heart failure. Patients' reactions included becoming a strategic avoider, a selective denier, a well-intentioned manager, or an advanced self-manager. Patients responded by forming self-management strategies and finally assimilated the strategies into everyday life seeking to feel safe. This conceptual model suggests that there are a range of interplaying factors that facilitate the process of developing self-management strategies. Interventions should take into account patients' concepts of heart failure and their subsequent reactions.
Junaid, Sarah; Gregory, Thomas; Fetherston, Shirley; Emery, Roger; Amis, Andrew A; Hansen, Ulrich
2018-03-23
Definite glenoid implant loosening is identifiable on radiographs; however, identifying early loosening still eludes clinicians. Methods to monitor glenoid loosening in vitro have not been validated against clinical imaging. This study investigates the correlation between in vitro measures and CT images. Ten cadaveric scapulae were implanted with a pegged glenoid implant and fatigue tested to failure. Each scapula was cyclically loaded superiorly and CT scanned every 20,000 cycles until failure to monitor progressive radiolucent lines. Superior and inferior rim displacements were also measured. A finite element (FE) model of one scapula was used to analyze the interfacial stresses at the implant/cement and cement/bone interfaces. All ten implants failed inferiorly at the implant-cement interface, two also failed at the cement-bone interface inferiorly, and three showed superior failure. Failure occurred at 80,966 ± 53,729 (mean ± SD) cycles. CT scans confirmed failure of the fixation, which in most cases was observed either before or together with visual failure. Significant correlations were found between inferior rim displacement, vertical head displacement, and failure of the glenoid implant. The FE model showed peak tensile stresses inferiorly and high compressive stresses superiorly, corroborating the experimental findings. In vitro monitoring methods correlated with failure progression in clinical CT images, possibly indicating their capacity to detect loosening earlier for earlier clinical intervention if needed. Their use in detecting failure non-destructively for implant development and testing is also valuable. The study highlights failure at the implant-cement interface, and early signs of failure are identifiable in CT images. © 2018 The Authors. Journal of Orthopaedic Research® Published by Wiley Periodicals, Inc. on behalf of the Orthopaedic Research Society. J Orthop Res 9999:XX-XX, 2018.
Minimizing the Disruptive Effects of Prospective Memory in Simulated Air Traffic Control
Loft, Shayne; Smith, Rebekah E.; Remington, Roger
2015-01-01
Prospective memory refers to remembering to perform an intended action in the future. Failures of prospective memory can occur in air traffic control. In two experiments, we examined the utility of external aids for facilitating air traffic management in a simulated air traffic control task with prospective memory requirements. Participants accepted and handed off aircraft and detected aircraft conflicts. The prospective memory task involved remembering to deviate from a routine operating procedure when accepting target aircraft. External aids that contained details of the prospective memory task appeared and flashed when target aircraft needed acceptance. In Experiment 1, external aids presented either adjacent or non-adjacent to each of the 20 target aircraft presented over the 40-min test phase reduced prospective memory error by 11% compared with a condition without external aids. In Experiment 2, only a single target aircraft was presented a significant time (39-42 min) after presentation of the prospective memory instruction, and the external aids reduced prospective memory error by 34%. In both experiments, costs to the efficiency of non-prospective memory air traffic management (non-target aircraft acceptance response time, conflict detection response time) were reduced by non-adjacent aids compared with no aids or adjacent aids. In contrast, in both experiments, the efficiency of the prospective memory air traffic management (target aircraft acceptance response time) was facilitated by adjacent aids compared with non-adjacent aids. Together, these findings have potential implications for the design of automated alerting systems to maximize multi-task performance in work settings where operators monitor and control demanding perceptual displays. PMID:24059825
NASA Astrophysics Data System (ADS)
Zhao, Qi
The rock failure process is a complex phenomenon that involves elastic and plastic deformation, microscopic cracking, macroscopic fracturing, and frictional slipping of fractures. Understanding this complex behaviour has been the focus of a significant amount of research. In this work, the combined finite-discrete element method (FDEM) was first employed to study (1) the influence of rock discontinuities on hydraulic fracturing and associated seismicity and (2) the influence of in-situ stress on seismic behaviour. Simulated seismic events were analyzed using post-processing tools including the frequency-magnitude distribution (b-value), spatial fractal dimension (D-value), seismic rate, and fracture clustering. These simulations demonstrated that at the local scale, fractures tended to propagate following the rock mass discontinuities, while at the reservoir scale, they developed in the direction parallel to the maximum in-situ stress. Moreover, the seismic signature (i.e., b-value, D-value, and seismic rate) can help to distinguish different phases of the failure process. The FDEM modelling technique and the developed analysis tools were then coupled with laboratory experiments to further investigate the different phases of the progressive rock failure process. Firstly, a uniaxial compression experiment, monitored using a time-lapse ultrasonic tomography method, was carried out and reproduced by the numerical model. Using this combination of technologies, the entire deformation and failure processes were studied at macroscopic and microscopic scales. The results not only illustrated the rock failure and seismic behaviours at different stress levels, but also suggested several precursory behaviours indicating the catastrophic failure of the rock. Secondly, rotary shear experiments were conducted using a newly developed rock physics experimental apparatus (ERDmu-T) that was paired with X-ray micro-computed tomography (muCT).
This combination of technologies has significant advantages over conventional rotary shear experiments since it allowed for the direct observation of how two rough surfaces interact and deform without perturbing the experimental conditions. Some intriguing observations were made pertaining to key areas of the study of fault evolution, making possible a more comprehensive interpretation of the frictional sliding behaviour. Lastly, a carefully calibrated FDEM model that was built based on the rotary shear experiment was utilized to investigate facets that the experiment was not able to resolve, for example, the time-continuous stress condition and the seismic activity on the shear surface. The model reproduced the mechanical behaviour observed in the laboratory experiment, shedding light on the understanding of fault evolution.
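The spatial fractal dimension (D-value) used above to characterize simulated seismicity is commonly estimated from the correlation integral (Grassberger-Procaccia method). The sketch below is a minimal illustrative implementation on a synthetic catalogue, not the authors' code; the function name, event coordinates, and radii are all assumptions.

```python
import math
import random

def correlation_dimension(points, r1, r2):
    """Grassberger-Procaccia estimate: slope of log C(r) between radii r1 < r2,
    where C(r) is the fraction of event pairs separated by less than r."""
    def corr_integral(r):
        n, close = len(points), 0
        for i in range(n):
            for j in range(i + 1, n):
                if math.dist(points[i], points[j]) < r:
                    close += 1
        return 2 * close / (n * (n - 1))
    return (math.log(corr_integral(r2)) - math.log(corr_integral(r1))) / \
           (math.log(r2) - math.log(r1))

# Synthetic "catalogue": hypocentres clustered along a single fault line,
# so the spatial fractal dimension should come out close to 1
random.seed(2)
fault_events = [(t, t) for t in (random.random() for _ in range(300))]
print(round(correlation_dimension(fault_events, 0.02, 0.2), 1))
```

Events scattered over a plane would instead give a D-value near 2, which is how the D-value distinguishes localized from distributed seismicity.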
NASA Astrophysics Data System (ADS)
Simola, Kaisa; Laakso, Kari
1992-01-01
Eight years of operating experiences of 104 motor operated closing valves in different safety systems in nuclear power units were analyzed in a systematic way. The qualitative methods used were Failure Mode and Effect Analysis (FMEA) and Maintenance Effects and Criticality Analysis (MECA). These reliability engineering methods are commonly used in the design stage of equipment. The successful application of these methods for analysis and utilization of operating experiences was demonstrated.
40 CFR 264.1101 - Design and operating standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
... hazardous waste (e.g., upon detection of leakage from the primary barrier) the owner or operator must: (A... constituents into the barrier, and a leak detection system that is capable of detecting failure of the primary... requirements of the leak detection component of the secondary containment system are satisfied by installation...
Li, Jonathan Z; Chapman, Brad; Charlebois, Patrick; Hofmann, Oliver; Weiner, Brian; Porter, Alyssa J; Samuel, Reshmi; Vardhanabhuti, Saran; Zheng, Lu; Eron, Joseph; Taiwo, Babafemi; Zody, Michael C; Henn, Matthew R; Kuritzkes, Daniel R; Hide, Winston; Wilson, Cara C; Berzins, Baiba I; Acosta, Edward P; Bastow, Barbara; Kim, Peter S; Read, Sarah W; Janik, Jennifer; Meres, Debra S; Lederman, Michael M; Mong-Kryspin, Lori; Shaw, Karl E; Zimmerman, Louis G; Leavitt, Randi; De La Rosa, Guy; Jennings, Amy
2014-01-01
The impact of raltegravir-resistant HIV-1 minority variants (MVs) on raltegravir treatment failure is unknown. Illumina sequencing offers greater throughput than 454, but sequence analysis tools for viral sequencing are needed. We evaluated Illumina and 454 for the detection of HIV-1 raltegravir-resistant MVs. A5262 was a single-arm study of raltegravir and darunavir/ritonavir in treatment-naïve patients. Pre-treatment plasma was obtained from 5 participants with raltegravir resistance at the time of virologic failure. A control library was created by pooling integrase clones at predefined proportions. Multiplexed sequencing was performed with Illumina and 454 platforms at comparable costs. Illumina sequence analysis was performed with the novel snp-assess tool and 454 sequencing was analyzed with V-Phaser. Illumina sequencing resulted in significantly higher sequence coverage and a 0.095% limit of detection. Illumina accurately detected all MVs in the control library at ≥0.5% and 7/10 MVs expected at 0.1%. 454 sequencing failed to detect any MVs at 0.1% with 5 false positive calls. For MVs detected in the patient samples by both 454 and Illumina, the correlation in the detected variant frequencies was high (R2 = 0.92, P<0.001). Illumina sequencing detected 2.4-fold greater nucleotide MVs and 2.9-fold greater amino acid MVs compared to 454. The only raltegravir-resistant MV detected was an E138K mutation in one participant by Illumina sequencing, but not by 454. In participants of A5262 with raltegravir resistance at virologic failure, baseline raltegravir-resistant MVs were rarely detected. At comparable costs to 454 sequencing, Illumina demonstrated greater depth of coverage, increased sensitivity for detecting HIV MVs, and fewer false positive variant calls.
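Frequency-threshold calling of minority variants against a limit of detection, as evaluated above, can be sketched as follows. This is a simplified illustration with hypothetical read counts, not the snp-assess or V-Phaser implementation.

```python
def call_minority_variants(counts, ref, lod=0.00095):
    """Report non-reference bases whose frequency meets the limit of detection.
    `counts` maps base -> read count at one position (hypothetical numbers);
    `lod` echoes the 0.095% limit of detection quoted in the abstract."""
    depth = sum(counts.values())
    return {base: n / depth for base, n in counts.items()
            if base != ref and n / depth >= lod}

# At 100,000x coverage, a 0.1% minority variant clears a 0.095% detection limit
position_counts = {"A": 99880, "G": 100, "C": 15, "T": 5}
print(call_minority_variants(position_counts, ref="A"))  # only G, at 0.1%
```

In practice the threshold must also account for sequencing error rates per position, which is the hard part that tools like snp-assess address.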
Earthquake Prediction in Large-scale Faulting Experiments
NASA Astrophysics Data System (ADS)
Junger, J.; Kilgore, B.; Beeler, N.; Dieterich, J.
2004-12-01
We study repeated earthquake slip of a 2 m long laboratory granite fault surface with approximately homogeneous frictional properties. In this apparatus earthquakes follow a period of controlled, constant rate shear stress increase, analogous to tectonic loading. Slip initiates and accumulates within a limited area of the fault surface while the surrounding fault remains locked. Dynamic rupture propagation and slip of the entire fault surface is induced when slip in the nucleating zone becomes sufficiently large. We report on the event-to-event reproducibility of loading time (recurrence interval), failure stress, stress drop, and precursory activity. We tentatively interpret these variations as indications of the intrinsic variability of small earthquake occurrence and source physics in this controlled setting. We use the results to produce measures of earthquake predictability based on the probability density of repeating occurrence and the reproducibility of near-field precursory strain. At 4 MPa normal stress and a loading rate of 0.0001 MPa/s, the loading time is ˜25 min, with a coefficient of variation of around 10%. Static stress drop has a similar variability, which results almost entirely from variability of the final (rather than initial) stress. Thus, the initial stress has low variability and event times are slip-predictable. The variability of loading time to failure is comparable to the lowest variability of recurrence time of small repeating earthquakes at Parkfield (Nadeau et al., 1998) and our result may be a good estimate of the intrinsic variability of recurrence. Distributions of loading time can be adequately represented by a log-normal or Weibull distribution, but long-term prediction of the next event time based on probabilistic representation of previous occurrence is not dramatically better than for field-observed small- or large-magnitude earthquake datasets.
The gradually accelerating precursory aseismic slip observed in the region of nucleation in these experiments is consistent with observations and theory of Dieterich and Kilgore (1996). Precursory strains can be detected typically after 50% of the total loading time. The Dieterich and Kilgore approach implies an alternative method of earthquake prediction based on comparing real-time strain monitoring with previous precursory strain records or with physically-based models of accelerating slip. Near failure, time to failure t is approximately inversely proportional to precursory slip rate V. Based on a least squares fit to accelerating slip velocity from ten or more events, the standard deviation of the residual between predicted and observed log t is typically 0.14. Scaling these results to natural recurrence suggests that a year prior to an earthquake, failure time can be predicted from measured fault slip rate with a typical error of 140 days, and a day prior to the earthquake with a typical error of 9 hours. However, such predictions require detecting aseismic nucleating strains, which have not yet been found in the field, and distinguishing earthquake precursors from other strain transients. There is some field evidence of precursory seismic strain for large earthquakes (Bufe and Varnes, 1993) which may be related to our observations. In instances where precursory activity is spatially variable during the interseismic period, as in our experiments, distinguishing precursory activity might be best accomplished with deep arrays of near-fault instruments and pattern recognition algorithms such as principal component analysis (Rundle et al., 2000).
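The least-squares prediction of failure time from precursory slip rate described above amounts to a straight-line fit in log-log space: for t ∝ 1/V, the slope of log t against log V should be near -1. The code below is an illustrative reconstruction on synthetic data, not the authors' analysis; the 0.14 scatter in log t is taken from the text, and all other numbers are assumptions.

```python
import math
import random

def fit_loglog(times, rates):
    """Ordinary least squares for log10(t) = a + b * log10(V)."""
    xs = [math.log10(v) for v in rates]
    ys = [math.log10(t) for t in times]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def predict_time_to_failure(v, a, b):
    """Predicted time to failure for a measured precursory slip rate v."""
    return 10 ** (a + b * math.log10(v))

# Synthetic precursory records: t roughly proportional to 1/V, with ~0.14
# residual scatter in log t as quoted in the text (all values illustrative)
random.seed(0)
rates = [10 ** random.uniform(-2, 1) for _ in range(20)]
times = [0.5 / v * 10 ** random.gauss(0, 0.14) for v in rates]

a, b = fit_loglog(times, rates)
print(round(b, 2))  # slope near -1 for an inverse relation
```

Given the fitted line, a measured slip rate yields a failure-time forecast whose uncertainty scales with the 0.14 dex residual, matching the error estimates in the abstract.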
Tribology symposium -- 1994. PD-Volume 61
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masudi, H.
This year marks the first Tribology Symposium within the Energy-Sources Technology Conference, sponsored by the ASME Petroleum Division. The program was divided into five sessions: Tribology in High Technology, a historical discussion of some watershed events in tribology; Research/Development, covering design, research, and development for modern manufacturing; Tribology in Manufacturing, the impact of tribology on modern manufacturing; Design/Design Representation, aspects of design related to tribological systems; and Failure Analysis, an analysis of failure, failure detection, and failure monitoring as they relate to manufacturing processes. Eleven papers have been processed separately for inclusion in the data base.
Transient Region Coverage in the Propulsion IVHM Technology Experiment
NASA Technical Reports Server (NTRS)
Balaban, Edward; Sweet, Adam; Bajwa, Anupa; Maul, William; Fulton, Chris; Chicatelli, Amy
2004-01-01
Over the last several years researchers at NASA Glenn and Ames Research Centers have developed a real-time fault detection and isolation system for propulsion subsystems of future space vehicles. The Propulsion IVHM Technology Experiment (PITEX), as it is called, follows the model-based diagnostic methodology and employs Livingstone, developed at NASA Ames, as its reasoning engine. The system has been tested on flight-like hardware through a series of nominal and fault scenarios. These scenarios have been developed using a highly detailed simulation of the X-34 flight demonstrator main propulsion system and include realistic failures involving valves, regulators, microswitches, and sensors. This paper focuses on one of the recent research and development efforts under PITEX: to provide more complete transient region coverage. It describes the development of the transient monitors, the corresponding modeling methodology, and the interface software responsible for coordinating the flow of information between the quantitative monitors and the qualitative, discrete representation used by Livingstone.
Blind jealousy? Romantic insecurity increases emotion-induced failures of visual perception.
Most, Steven B; Laurenceau, Jean-Philippe; Graber, Elana; Belcher, Amber; Smith, C Veronica
2010-04-01
Does the influence of close relationships pervade so deeply as to impact visual awareness? Results from two experiments involving heterosexual romantic couples suggest that it does. Female partners from each couple performed a rapid detection task in which negative emotional distractors typically disrupt visual awareness of subsequent targets; at the same time, their male partners rated the attractiveness first of landscapes, then of photos of other women. At the end of both experiments, the degree to which female partners indicated uneasiness about their male partner looking at and rating other women correlated significantly with the degree to which negative emotional distractors had disrupted their target perception during that time. This relationship was robust even when controlling for individual differences in baseline performance. Thus, emotions elicited by social contexts appear to wield power even at the level of perceptual processing. Copyright 2010 APA, all rights reserved.
Extension of Gutenberg-Richter distribution to MW -1.3, no lower limit in sight
NASA Astrophysics Data System (ADS)
Boettcher, Margaret S.; McGarr, A.; Johnston, Malcolm
2009-05-01
With twelve years of seismic data from TauTona Gold Mine, South Africa, we show that mining-induced earthquakes follow the Gutenberg-Richter relation with no scale break down to the completeness level of the catalog, at moment magnitude Mw -1.3. Events recorded during relatively quiet hours in 2006 indicate that catalog detection limitations, not earthquake source physics, controlled the previously reported minimum magnitude in this mine. Within the Natural Earthquake Laboratory in South African Mines (NELSAM) experiment's dense seismic array, earthquakes that exhibit shear failure at magnitudes as small as Mw -3.9 are observed, but we find no evidence that Mw -3.9 represents the minimum magnitude. In contrast to previous work, our results imply small nucleation zones and that earthquake processes in the mine can readily be scaled to those in either laboratory experiments or natural faults.
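The Gutenberg-Richter b-value underlying the scaling claim above is commonly estimated by Aki's maximum-likelihood formula, b = log10(e) / (mean(M) - Mc), where Mc is the completeness magnitude. A minimal sketch on a synthetic catalogue (the catalogue itself and all numbers are assumptions; only Mc = -1.3 echoes the completeness level quoted in the abstract):

```python
import math
import random

def aki_b_value(mags, m_c):
    """Maximum-likelihood b-value (Aki, 1965) for magnitudes at or above m_c."""
    above = [m for m in mags if m >= m_c]
    return math.log10(math.e) / (sum(above) / len(above) - m_c)

# Synthetic catalogue: under Gutenberg-Richter, magnitudes above completeness
# are exponentially distributed with rate b * ln(10)
random.seed(1)
b_true, m_c = 1.0, -1.3
mags = [m_c + random.expovariate(b_true * math.log(10)) for _ in range(5000)]
print(round(aki_b_value(mags, m_c), 2))
```

A change in the fitted b-value below some magnitude would signal the kind of scale break that the study looked for and did not find.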
Next Generation Monitoring: Tier 2 Experience
NASA Astrophysics Data System (ADS)
Fay, R.; Bland, J.; Jones, S.
2017-10-01
Monitoring IT infrastructure is essential for maximizing availability and minimizing disruption by detecting failures and developing issues. The HEP group at Liverpool have recently updated our monitoring infrastructure with the goal of increasing coverage, improving visualization capabilities, and streamlining configuration and maintenance. Here we present a summary of Liverpool’s experience, the monitoring infrastructure, and the tools used to build it. In brief, system checks are configured in Puppet using Hiera, and managed by Sensu, replacing Nagios. Centralised logging is managed with Elasticsearch, together with Logstash and Filebeat. Kibana provides an interface for interactive analysis, including visualization and dashboards. Metric collection is also configured in Puppet, managed by collectd and stored in Graphite, with Grafana providing a visualization and dashboard tool. The Uchiwa dashboard for Sensu provides a web interface for viewing infrastructure status. Alert capabilities are provided via external handlers. A custom alert handler is in development to provide an easily configurable, extensible and maintainable alert facility.
NASA Astrophysics Data System (ADS)
Anderson, Charles E., Jr.; O'Donoghue, Padraic E.; Lankford, James; Walker, James D.
1992-06-01
Complementary to a study of the compressive strength of ceramic as a function of strain rate and confinement, numerical simulations of the split-Hopkinson pressure bar (SHPB) experiments have been performed using the two-dimensional wave propagation computer program HEMP. The numerical effort had two main thrusts. Firstly, the interpretation of the experimental data relies on several assumptions. The numerical simulations were used to investigate the validity of these assumptions. The second part of the effort focused on computing the idealized constitutive response of a ceramic within the SHPB experiment. These numerical results were then compared against experimental data. Idealized models examined included a perfectly elastic material, an elastic-perfectly plastic material, and an elastic material with failure. Post-failure material was modeled as having either no strength, or a strength proportional to the mean stress. The effects of confinement were also studied. Conclusions concerning the dynamic behavior of a ceramic up to and after failure are drawn from the numerical study.
NASA Astrophysics Data System (ADS)
Kim, S. Y.; Yoo, J. H.; Kim, H. K.; Shin, K. Y.; Yoon, S. J.
2018-06-01
In this paper, we discuss the structural behavior of bolted lap-joint connections in pultruded FRP structural members. In particular, bolted connections in pultruded FRP members are investigated for their failure modes and strength. Specimens with single and multiple bolt-holes are tested in tension under bolt-loading conditions. All of the specimens are instrumented with strain gages and the load-strain responses are monitored. The failed specimens are examined for cracks and failure patterns. The purpose of this paper is to predict the failure strength by using the ratio of the results obtained by experiment and by finite element analysis. In the study, several tests are conducted to determine the mechanical properties of pultruded FRP materials before the main experiment. The results are used in the finite element analysis of single and multiple bolted lap-joint specimens. The results obtained by experiment are compared with those obtained by the finite element analysis.
Characteristics of self-worth protection in achievement behaviour.
Thompson, T
1993-11-01
Two experiments are reported comprising an investigation of individual difference variables associated with self-worth protection. This is a phenomenon whereby students in achievement situations adopt one of a number of strategies, including withdrawing effort, in order to avoid damage to self-esteem which results from attributing failure to inability. Experiment 1 confirmed the adequacy of an operational definition which identified self-worth students on the basis of two criteria. These were deteriorated performance following failure, together with subsequent enhanced performance following a face-saving excuse allowing students to explain failure without implicating low ability. The results of Experiment 2 established that the behaviour of self-worth protective students in achievement situations may be understood in terms of their low academic self-esteem coupled with uncertainty about their level of global self-esteem. Investigation of the manner in which self-worth students explain success and failure outcomes failed to demonstrate a tendency to internalise failure but revealed a propensity on the part of these students to reject due credit for their successes. The implications of these findings in terms of the prevention and modification of self-worth protective reactions in achievement situations are discussed.
Target Detection over the Diurnal Cycle Using a Multispectral Infrared Sensor.
Zhao, Huijie; Ji, Zheng; Li, Na; Gu, Jianrong; Li, Yansong
2016-12-29
When detecting a target over the diurnal cycle, a conventional infrared thermal sensor might lose the target due to the thermal crossover, which can happen at any time throughout the day when the infrared image contrast between target and background in a scene becomes indistinguishable due to temperature variation. In this paper, the benefits of using a multispectral-based infrared sensor over the diurnal cycle are shown. Firstly, a brief theoretical analysis is presented of how the thermal crossover influences a conventional thermal sensor, the conditions under which the thermal crossover can happen, and why mid-infrared (3~5 μm) multispectral technology is effective. Secondly, we describe how the prototype was designed and how multispectral technology is employed to help solve the thermal crossover detection problem. Thirdly, several targets were set up outside and imaged in a field experiment over a 24-h period. The experimental results show that the multispectral infrared imaging system can enhance the contrast of the detected images and effectively overcome the failure of the conventional infrared sensor during the diurnal cycle, which is of great significance for infrared surveillance applications.
Crotty, T P
2015-11-01
Experiments on canine lateral saphenous vein segments have shown that noradrenaline causes potent, flow-dependent effects, at a threshold concentration comparable to that of plasma noradrenaline, when it stimulates a segment by diffusion from its microcirculation (vasa vasorum). The effects it causes contrast with those caused by neuronal noradrenaline in vivo. In light of the principle that all information is transmitted in patterns that need contrast to be detected (star patterns need darkness; sound patterns, quietness), this has generated the hypothesis that plasma noradrenaline provides the obligatory contrast tissues need to detect and respond to the regulatory information encrypted in the diffusion pattern of neuronal noradrenaline. Based on the implications of that hypothesis, the controlled variable of the peripheral noradrenergic system is believed to be the maintenance of a set-point balance between the contrasting effects of plasma and neuronal noradrenaline on a tissue. The hypothalamic sympathetic centres are believed to monitor that balance through the level of afferent sympathetic traffic they receive from a tissue and to correct any deviation they detect in the balance by adjusting the level of efferent sympathetic input they project to the tissue. The failure of the centres to maintain the correct balance is believed to be responsible for inflammatory and genetic disorders.
When the failure causes the balance to be polarised in favour of the effect of plasma noradrenaline, it is believed to cause inflammatory diseases like dilator cardiac failure, renal hypertension, varicose veins and aneurysms; when it causes the balance to be polarised in favour of the effect of neuronal noradrenaline, it is believed to cause genetic diseases like hypertrophic cardiopathy, pulmonary hypertension and stenoses; and when, in pregnancy, a factor causes the polarity to favour plasma noradrenaline in all the maternal tissues except the uterus and conceptus, where it favours neuronal noradrenaline, that is believed to cause preeclampsia. Finally, the shift in the balance caused by the slow physiological increase in plasma noradrenaline concentration over life is believed to be responsible for ageing. Copyright © 2014 Elsevier Ltd. All rights reserved.
Ballistic Experiments with Titanium and Aluminum Targets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gogolewski, R.; Morgan, B.R.
1999-11-23
During the course of the project we conducted two sets of fundamental experiments in penetration mechanics in the LLNL Terminal Ballistics Laboratory of the Physics Directorate. The first set of full-scale experiments was conducted with a 14.5mm air propelled launcher. The object of the experiments was to determine the ballistic limit speed of 6Al-4V-alloy titanium, low fineness ratio projectiles centrally impacting 2024-T3 alloy aluminum flat plates and the failure modes of the projectiles and the targets. The second set of one-third scale experiments was conducted with a 14.5mm powder launcher. The object of these experiments was to determine the ballistic limit speed of 6Al-4V alloy titanium high fineness ratio projectiles centrally impacting 6Al-4V alloy titanium flat plates and the failure modes of the projectiles and the target. We employed radiography to observe a projectile just before and after interaction with a target plate. Early on, we employed a non-damaging "soft-catch" technique to capture projectiles after they perforated targets. Once we realized that a projectile was not damaged during interaction with a target, we used a 4-inch thick 6061-T6-alloy aluminum witness block with a 6.0-inch x 6.0-inch cross-section to measure projectile residual penetration. We have recorded and tabulated below projectile impact speed, projectile residual (post-impact) speed, projectile failure mode, target failure mode, and pertinent comments for the experiments. The ballistic techniques employed for the experiments are similar to those employed in an earlier study.
Deep Internal Structure of Mars and the Geophysical Package of Netlander
NASA Technical Reports Server (NTRS)
Lognonne, P.; Giardini, D.; Banerdt, B.; Dehant, V.; Barriot, J. P.; Musmann, G.; Menvielle, M.
2000-01-01
Our present understanding of the interior structure of Mars is mostly based on the interpretation of gravity and rotation data, the chemistry of the SNC (shergottites, nakhlites, chassignites) meteorites, and a comparison with the much better-known interior structure of the Earth. However, geophysical information from previous missions has been insufficient to determine the deep internal structure of the planet. Therefore the state and size of the core and the depth and type of mantle discontinuities are unknown. Most previous seismic experiments have indeed failed, either due to a launch failure (as for the Optimism seismometer onboard the small surface stations of Mars 96) or after failure on Mars (as for the Viking 1 seismometer). The remaining Viking 2 seismometer did not produce a convincing marsquake detection, basically due to its strong wind sensitivity and low resolution in the teleseismic frequency band. After almost a decade of continuous activity and proposals, the first network mission to Mars, NetLander (NL), is expected to be launched between 2005 and 2007. One of the main scientific objectives of this four-lander network mission will be the determination of the internal structure of the planet using a geophysical package. This package will have a seismometer, a magnetometer, and a geodetic experiment, allowing a complementary approach that will yield many new constraints on the mineralogy and temperature of the mantle and core of the planet.
Simulating fail-stop in asynchronous distributed systems
NASA Technical Reports Server (NTRS)
Sabel, Laura; Marzullo, Keith
1994-01-01
The fail-stop failure model appears frequently in the distributed systems literature. However, in an asynchronous distributed system, the fail-stop model cannot be implemented. In particular, it is impossible to reliably detect crash failures in an asynchronous system. In this paper, we show that it is possible to specify and implement a failure model that is indistinguishable from the fail-stop model from the point of view of any process within an asynchronous system. We give necessary conditions for a failure model to be indistinguishable from the fail-stop model, and derive lower bounds on the amount of process replication needed to implement such a failure model. We present a simple one-round protocol for implementing one such failure model, which we call simulated fail-stop.
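The impossibility of reliable crash detection mentioned above is usually illustrated with timeout-based failure detectors: a heartbeat monitor can only suspect a crash, and in an asynchronous system a slow process is indistinguishable from a crashed one. The class below is a minimal sketch of that idea (invented name, not from the paper), not the paper's simulated fail-stop protocol.

```python
import time

class TimeoutFailureDetector:
    """Suspect a process has crashed if no heartbeat arrives within a timeout.
    In a truly asynchronous system this can be wrong - a slow process is
    indistinguishable from a crashed one - which is why fail-stop can only
    be simulated, not implemented, in such systems."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_heartbeat = {}

    def heartbeat(self, pid):
        self.last_heartbeat[pid] = time.monotonic()

    def suspected(self, pid):
        last = self.last_heartbeat.get(pid)
        return last is None or time.monotonic() - last > self.timeout

fd = TimeoutFailureDetector(timeout=0.05)
fd.heartbeat("p1")
print(fd.suspected("p1"))  # just heard from p1: not suspected
time.sleep(0.1)
print(fd.suspected("p1"))  # silent past the timeout: suspected, perhaps wrongly
```

The paper's contribution is to mask such wrong suspicions with process replication so that, to any process inside the system, the result is indistinguishable from true fail-stop.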
Micromechanics of failure waves in glass. 2: Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Espinosa, H.D.; Xu, Y.; Brar, N.S.
1997-08-01
In an attempt to elucidate the failure mechanism responsible for the so-called failure waves in glass, numerical simulations of plate and rod impact experiments, with a multiple-plane model, have been performed. These simulations show that the failure wave phenomenon can be modeled by the nucleation and growth of penny-shaped shear defects from the specimen surface to its interior. Lateral stress increase, reduction of spall strength, and progressive attenuation of axial stress behind the failure front are properly predicted by the multiple-plane model. Numerical simulations of high-strain-rate pressure-shear experiments indicate that the model predicts reasonably well the shear resistance of the material at strain rates as high as 1 × 10^6/s. The agreement is believed to be the result of the model's capability of simulating damage-induced anisotropy. By examining the kinetics of the failure process in plate experiments, the authors show that the progressive glass spallation in the vicinity of the failure front and the rate of increase in lateral stress are more consistent with a representation of inelasticity based on shear-activated flow surfaces, inhomogeneous flow, and microcracking, rather than pure microcracking. In the former mechanism, microcracks are likely formed at a later time at the intersection of flow surfaces. In the case of rod-on-rod impact, stress and radial velocity histories predicted by the microcracking model are in agreement with the experimental measurements. Stress attenuation, pulse duration, and release structure are properly simulated. It is shown that failure wave speeds in excess of 3,600 m/s are required for adequate prediction of rod radial expansion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Aiman; Laguna, Ignacio; Sato, Kento
Future high-performance computing systems may face frequent failures with their rapid increase in scale and complexity. Resilience to faults has become a major challenge for large-scale applications running on supercomputers, which demands fault tolerance support for prevalent MPI applications. Among failure scenarios, process failures are one of the most severe issues as they usually lead to termination of applications. However, the widely used MPI implementations do not provide mechanisms for fault tolerance. We propose FTA-MPI (Fault Tolerance Assistant MPI), a programming model that provides support for failure detection, failure notification and recovery. Specifically, FTA-MPI exploits a try/catch model that enables failure localization and transparent recovery of process failures in MPI applications. We demonstrate FTA-MPI with synthetic applications and a molecular dynamics code CoMD, and show that FTA-MPI provides high programmability for users and enables convenient and flexible recovery of process failures.
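The abstract does not give FTA-MPI's actual API, but the try/catch recovery model it describes can be illustrated generically. The sketch below uses plain Python with invented names (`ProcessFailure`, `run_with_recovery`) to show the pattern of localized failure handling and retry; it makes no real MPI calls.

```python
class ProcessFailure(Exception):
    """Signals that a peer process was detected as failed (invented name)."""

def compute_step(step, fail_at):
    # Stand-in for one MPI communication/computation step; raises once when
    # the simulated process failure strikes.
    if step == fail_at:
        raise ProcessFailure(f"rank lost at step {step}")
    return step * step

def run_with_recovery(n_steps, fail_at):
    """try/catch-style localized recovery: catch the failure, restore any
    needed state, and retry the failed step instead of aborting the job."""
    results, step, recovered = [], 0, False
    while step < n_steps:
        try:
            results.append(compute_step(step, -1 if recovered else fail_at))
            step += 1
        except ProcessFailure:
            recovered = True  # e.g. respawn the lost rank and rebuild its state
    return results

print(run_with_recovery(5, fail_at=3))  # completes despite the failure at step 3
```

The point of the pattern is that the failure is handled where it is detected, so the surviving steps and ranks continue instead of the whole application terminating.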
New diagnostic and therapeutic possibilities for diastolic heart failure.
Jeong, Euy-Myoung; Dudley, Samuel C
2014-02-03
Despite the fact that up to half of all heart failure occurs in patients without evidence of systolic cardiac dysfunction, there are no universally accepted diagnostic markers and no approved therapies for heart failure with preserved ejection fraction (HFpEF). HFpEF, otherwise known as diastolic heart failure, has nearly the same grim prognosis as systolic heart failure, and diastolic heart failure is increasing in incidence and prevalence. Major trials have shown that many of the treatments that are salutary in systolic heart failure have no beneficial effects in diastolic heart failure, suggesting different underlying mechanisms for these two disorders. Even criteria for diagnosis of HFpEF are still debated, and there is still no gold standard marker to detect diastolic dysfunction. Here, we will review some promising new insights into the pathogenesis of diastolic dysfunction that may lead to new diagnostic and therapeutic tools.
Fault detection for hydraulic pump based on chaotic parallel RBF network
NASA Astrophysics Data System (ADS)
Lu, Chen; Ma, Ning; Wang, Zhipeng
2011-12-01
In this article, a parallel radial basis function network in conjunction with chaos theory (CPRBF network) is presented and applied to practical fault detection for the hydraulic pump, a critical component in aircraft. The CPRBF network consists of a number of radial basis function (RBF) subnets connected in parallel. The number of input nodes for each RBF subnet is determined by a different embedding dimension based on chaotic phase-space reconstruction. The output of the CPRBF is a weighted sum of all RBF subnets. The network was first trained on a dataset from the normal, fault-free state, and a residual error generator was then designed to detect failures based on the trained CPRBF network, so that failure detection is achieved by analysis of the residual error. Finally, two case studies are introduced to compare the proposed CPRBF network with traditional RBF networks in terms of prediction and detection accuracy.
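The residual-error idea in this abstract can be illustrated with a toy one-dimensional sketch: a predictor trained on fault-free data estimates the next sample, and a large residual flags a fault. The RBF, the weighted-sum output layer, and all parameters below are illustrative, not the paper's model.

```python
import math

def rbf(x, center, width=1.0):
    # Gaussian radial basis function
    return math.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def parallel_predict(x, subnets, weights):
    # weighted sum of subnet outputs, as in the CPRBF output layer
    return sum(w * net(x) for net, w in zip(subnets, weights))

def residual_flags(signal, predict, threshold):
    # residual between each sample and the one-step-ahead prediction
    return [abs(signal[t + 1] - predict(signal[t])) > threshold
            for t in range(len(signal) - 1)]
```

With a predictor fitted to the healthy regime, an abrupt excursion produces a residual far above the threshold and is flagged.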
The blind leading the blind: Mutual refinement of approximate theories
NASA Technical Reports Server (NTRS)
Kedar, Smadar T.; Bresina, John L.; Dent, C. Lisa
1991-01-01
The mutual refinement theory, a method for refining world models in a reactive system, is described. The method detects failures, explains their causes, and repairs the approximate models which cause the failures. The approach focuses on using one approximate model to refine another.
The Lived Experience of Heart Failure at the End of Life: A Systematic Literature Review
ERIC Educational Resources Information Center
Hopp, Faith Pratt; Thornton, Nancy; Martin, Lindsey
2010-01-01
The growing number of older adults with heart failure (HF) suggests the need for more information about how people with this condition experience their illness and strategies for coping with this condition. To address this need, the authors conducted a systematic review of the literature and an in-depth, thematic analysis of qualitative…
ERIC Educational Resources Information Center
Shu, Tse-Mei; Lam, Shui-fong
2011-01-01
The present study extended regulatory focus theory (Idson & Higgins, 2000) to an educational setting and attempted to identify individuals with high motivation after both success and failure feedback. College students in Hong Kong (N = 180) participated in an experiment with a 2 promotion focus (high vs. low) x 2 prevention focus (high vs.…
Acoustic emission and nondestructive evaluation of biomaterials and tissues.
Kohn, D H
1995-01-01
Acoustic emission (AE) is an acoustic wave generated by the release of energy from localized sources in a material subjected to an externally applied stimulus. This technique may be used nondestructively to analyze tissues, materials, and biomaterial/tissue interfaces. Applications of AE include use as an early warning tool for detecting tissue and material defects and incipient failure, monitoring damage progression, predicting failure, characterizing failure mechanisms, and serving as a tool to aid in understanding material properties and structure-function relations. All these applications may be performed in real time. This review discusses general principles of AE monitoring and the use of the technique in 3 areas of importance to biomedical engineering: (1) analysis of biomaterials, (2) analysis of tissues, and (3) analysis of tissue/biomaterial interfaces. Focus in these areas is on detection sensitivity, methods of signal analysis in both the time and frequency domains, the relationship between acoustic signals and microstructural phenomena, and the uses of the technique in establishing a relationship between signals and failure mechanisms.
Data-Driven Packet Loss Estimation for Node Healthy Sensing in Decentralized Cluster.
Fan, Hangyu; Wang, Huandong; Li, Yong
2018-01-23
Decentralized clustering in modern information technology has been widely adopted in various fields in recent years. One of the main reasons is its high availability and failure tolerance, which prevent the entire system from breaking down due to a failure at a single point. Recently, toolkits such as Akka have been commonly used to build this kind of cluster easily. However, clusters of this kind, which use Gossip as their membership-management protocol and rely on a link-failure-detection mechanism, cannot handle the scenario in which a node stochastically drops packets and corrupts the member status of the cluster. In this paper, we formulate the problem as evaluating link quality and finding a maximum clique (an NP-complete problem) in the connectivity graph. We then propose an algorithm consisting of two models, driven by application-layer data, to solve these two problems respectively. Through simulations with statistical data and a real-world product, we demonstrate that our algorithm performs well.
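The two-step formulation in this abstract can be sketched on made-up data: score each link from its observed packet-loss rate, keep links under a loss threshold, and take the largest clique of mutually well-connected nodes as the healthy membership. The brute-force clique search below is only viable for small clusters; the paper's data-driven loss estimation is replaced here by a fixed threshold.

```python
from itertools import combinations

def good_links(loss_rates, threshold):
    # loss_rates maps sorted node pairs to estimated packet-loss rates
    return {pair for pair, rate in loss_rates.items() if rate <= threshold}

def max_clique(nodes, edges):
    # brute force: try subsets from largest to smallest (NP-complete in general)
    for k in range(len(nodes), 0, -1):
        for subset in combinations(sorted(nodes), k):
            if all(pair in edges for pair in combinations(subset, 2)):
                return set(subset)
    return set()
```

A node that drops packets on most of its links falls out of every large clique and is thus excluded from the healthy core.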
Pilot performance in zero-visibility precision approach. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Ephrath, A. R.
1975-01-01
The pilot's short-term decisions regarding performance assessment and failure monitoring are examined. The performance of airline pilots who flew simulated zero-visibility landing approaches is reported. Results indicate that the pilot's mode of participation in the control task has a strong effect on workload, the induced workload being lowest when the pilot acts as a monitor during a coupled approach and highest when the pilot is an active element in the control loop. A marked increase in workload at altitudes below 500 ft is documented in all participation modes; this increase is inversely related to distance-to-go. The participation mode is shown to have a dominant effect on failure-detection performance, with a failure in a monitored (coupled) axis being detected faster than a comparable failure in a manually controlled axis. Touchdown performance is also documented. It is concluded that the conventional instrument panel and its associated displays are inadequate for zero-visibility operations in the final phases of the landing approach.
Data processing device test apparatus and method therefor
Wilcox, Richard Jacob; Mulig, Jason D.; Eppes, David; Bruce, Michael R.; Bruce, Victoria J.; Ring, Rosalinda M.; Cole, Jr., Edward I.; Tangyunyong, Paiboon; Hawkins, Charles F.; Louie, Arnold Y.
2003-04-08
A method and apparatus for testing data processing devices are implemented. The test mechanism isolates critical paths by correlating a scanning-microscope image with a selected speed-path failure. A trigger signal having a preselected value is generated at the start of each pattern vector. The sweep of the scanning microscope is controlled by a computer, which also receives and processes the image signals returned from the microscope. The value of the trigger signal is correlated with the set of pattern lines being driven on the device under test (DUT). The trigger is either asserted or negated depending on the detection of a pattern-line failure and on the particular line that failed. In response to the detection of the particular speed-path failure being characterized, and to the trigger signal, the control computer overlays a mask on the image of the DUT. The overlaid image provides a visual correlation of the failure with the structural elements of the DUT at the resolution of the microscope itself.
Investigation of Tapered Roller Bearing Damage Detection Using Oil Debris Analysis
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Krieder, Gary; Fichter, Thomas
2006-01-01
A diagnostic tool was developed for detecting fatigue damage to tapered roller bearings. Tapered roller bearings are used in helicopter transmissions and have potential for use in high bypass advanced gas turbine aircraft engines. This diagnostic tool was developed and evaluated experimentally by collecting oil debris data from failure progression tests performed by The Timken Company in their Tapered Roller Bearing Health Monitoring Test Rig. Failure progression tests were performed under simulated engine load conditions. Tests were performed on one healthy bearing and three predamaged bearings. During each test, data from an on-line, in-line, inductance type oil debris sensor was monitored and recorded for the occurrence of debris generated during failure of the bearing. The bearing was removed periodically for inspection throughout the failure progression tests. Results indicate the accumulated oil debris mass is a good predictor of damage on tapered roller bearings. The use of a fuzzy logic model to enable an easily interpreted diagnostic metric was proposed and demonstrated.
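The finding that accumulated oil-debris mass predicts bearing damage, combined with the proposed fuzzy-logic metric, can be illustrated by mapping cumulative mass to a [0, 1] damage index. The warn/fail mass thresholds below are invented for illustration, not Timken or NASA values.

```python
def damage_index(cumulative_mass_mg, warn_mg=20.0, fail_mg=50.0):
    # 0 below the warning mass, 1 above the failure mass, linear in between
    if cumulative_mass_mg <= warn_mg:
        return 0.0
    if cumulative_mass_mg >= fail_mg:
        return 1.0
    return (cumulative_mass_mg - warn_mg) / (fail_mg - warn_mg)

def track_damage(debris_readings_mg):
    # accumulate sensor readings and report the damage index after each one
    total, history = 0.0, []
    for mass in debris_readings_mg:
        total += mass
        history.append(damage_index(total))
    return history
```

The ramp between the two thresholds plays the role of a fuzzy membership function, yielding an easily interpreted metric instead of a hard alarm.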
NASA Technical Reports Server (NTRS)
1973-01-01
This summary provides the general engineering community with the accumulated experience from ALERT reports issued by NASA and the Government-Industry Data Exchange Program, and related experience gained by Government and industry. It provides expanded information on selected topics by relating the problem area (failure) to the cause, the investigation and findings, the suggestions for avoidance (inspections, screening tests, proper part applications, requirements for manufacturers' plant facilities, etc.), and failure analysis procedures. Diodes, integrated circuits, and transistors are covered in this volume.
ATS-6 engineering performance report. Volume 2: Orbit and attitude controls
NASA Technical Reports Server (NTRS)
Wales, R. O. (Editor)
1981-01-01
Attitude control is reviewed, encompassing the attitude control subsystem, the spacecraft attitude precision pointing and slewing adaptive control experiment, and the RF interferometer experiment. The spacecraft propulsion system (SPS) is discussed, including the subsystem itself, the SPS design description and validation, orbital operations and performance, in-orbit anomalies and contingency operations, and the cesium bombardment ion engine experiment. Thruster failure due to plugging of the propellant feed passages, a major cause of mission termination, is considered among the critical generic failures on the satellite.
The attribution of success when using navigation aids.
Brown, Michael; Houghton, Robert; Sharples, Sarah; Morley, Jeremy
2015-01-01
Attitudes towards geographic information technology are a seldom-explored research area that can be explained with reference to established theories of attribution. This article reports on a study of how the attribution of success and failure in pedestrian navigation varies with level of automation, degree of success and locus of control. A total of 113 participants took part in a survey exploring reflections on personal experiences and vignettes describing fictional navigation experiences. A complex relationship was discovered in which success tends to be attributed to skill and failure to the navigation aid when participants describe their own experiences. A reversed pattern of results was found when discussing the navigation of others. It was also found that navigation success and failure are associated with personal skill to a greater extent when using paper maps than when using web-based routing engines or satellite navigation systems. This article explores the influences on the attribution of success and failure when using navigation aids. A survey was performed exploring interpretations of navigation experiences. Level of success, self or other as navigator, and type of navigation aid used are all found to influence the attribution of outcomes to internal or external factors.
Escobar, R F; Astorga-Zaragoza, C M; Téllez-Anguiano, A C; Juárez-Romero, D; Hernández, J A; Guerrero-Ramírez, G V
2011-07-01
This paper deals with fault detection and isolation (FDI) in sensors applied to a concentric-pipe counter-flow heat exchanger. The proposed FDI is based on analytical redundancy, implementing nonlinear high-gain observers (acting as software sensors) that generate residuals when a sensor fault is present. By evaluating the generated residual, it is possible to switch between the sensor and the observer when a failure is detected. Experiments in a heat exchanger pilot plant validate the effectiveness of the approach. The FDI technique is easy to implement, giving industry an excellent alternative tool to keep heat-transfer processes under supervision. The main contribution of this work is a dynamic model, with heat-transfer coefficients that depend on temperature and flow, used to estimate the output temperatures of the heat exchanger. This model provides a satisfactory approximation of the states of the heat exchanger, allowing its implementation in an FDI system used to perform supervision tasks. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
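The sensor/observer switching logic this abstract describes can be reduced to a minimal sketch: when the residual between a sensor reading and the observer estimate exceeds a threshold, the supervisor flags the sensor and substitutes the observer output. The threshold and values are illustrative, not from the paper.

```python
def select_output(sensor_value, observer_value, threshold):
    # residual-based switching: trust the sensor unless the residual
    # against the model-based observer exceeds the threshold
    faulty = abs(sensor_value - observer_value) > threshold
    return (observer_value if faulty else sensor_value), faulty
```

In the healthy case the supervisor passes the sensor value through; after a fault the observer serves as a software sensor so supervision can continue.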
Applications of infrared thermography for nondestructive testing of fatigue cracks in steel bridges
NASA Astrophysics Data System (ADS)
Sakagami, Takahide; Izumi, Yui; Kobayashi, Yoshihiro; Mizokami, Yoshiaki; Kawabata, Sunao
2014-05-01
In recent years, fatigue crack propagation in aged steel bridges, which may lead to catastrophic structural failures, has become a serious problem. For large-scale steel structures such as orthotropic steel decks in highway bridges, nondestructive inspection of deterioration and fatigue damage is indispensable for securing their safety and for estimating their remaining strength. Visual testing, magnetic particle testing and ultrasonic testing have been commonly employed as conventional NDT techniques for steel bridges. However, these techniques are time- and labor-consuming, because special equipment such as scaffolding or a truck-mounted aerial work platform is required for inspection. In this paper, a new thermographic NDT technique for detecting fatigue cracks is developed, based on the temperature gap that appears on the surface of structural members due to the thermal insulation effect of the crack. The practicability of the developed technique is demonstrated by field experiments on highway steel bridges in service. Detectable crack size, and factors that influence crack detectability such as measurement time, season and spatial resolution, are investigated.
NASA Astrophysics Data System (ADS)
Sanga, Ramesh; Srinivasan, V. S.; Sivaramakrishna, M.; Prabhakara Rao, G.
2018-07-01
In rotating machinery, continuous rotation-induced wear and tear produces metallic debris that mixes with the in-service lubricant oil over time. This debris signals potential machine failure due to aging of critical parts such as gears and bearings. The size and type of wear debris have a direct relationship with the degree of wear in the machine and give information about the health of the equipment. This article presents an inductive quasi-digital sensor to detect metallic debris, and its type and size, in the lubrication oil of rotating machinery. A microcontroller-based, low-cost, low-power, high-resolution and high-precision instrument with an alarm indication and an LCD was developed to detect ferrous debris of sizes from 30 µm and non-ferrous debris from 50 µm. It was thoroughly tested and calibrated with ferrous and non-ferrous debris of different sizes in an air environment. Finally, an experiment was conducted to check the performance of the instrument by circulating lubricant oil containing ferrous and non-ferrous debris through the sensor.
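The alarm behavior implied by the quoted detection limits can be sketched directly: ferrous debris is flagged from 30 µm and non-ferrous from 50 µm. How the instrument actually distinguishes debris type (presumably from the inductive coil signature) is an assumption here, not described in enough detail above to reproduce.

```python
# Detection limits quoted in the abstract; classification of debris
# type is assumed to come from the sensor's inductive signature.
DETECTION_LIMIT_UM = {"ferrous": 30.0, "non-ferrous": 50.0}

def debris_alarm(kind, size_um):
    # raise the alarm when a particle meets its type's detection limit
    return size_um >= DETECTION_LIMIT_UM[kind]
```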
High-speed polarized light microscopy for in situ, dynamic measurement of birefringence properties
NASA Astrophysics Data System (ADS)
Wu, Xianyu; Pankow, Mark; Shadow Huang, Hsiao-Ying; Peters, Kara
2018-01-01
A high-speed, quantitative polarized light microscopy (QPLM) instrument has been developed to monitor optical slow-axis spatial realignment during controlled medium-to-high strain rate experiments at acquisition rates up to 10 kHz. This high-speed QPLM instrument is implemented within a modified drop tower and demonstrated using polycarbonate specimens. By utilizing a rotating quarter-wave plate and a high-speed camera, the minimum acquisition time to generate an alignment map of a birefringent specimen is 6.1 ms. A sequential analysis method allows the QPLM instrument to generate QPLM data at the high-speed camera's imaging frequency of 10 kHz. The obtained QPLM data are processed using a vector correlation technique to detect anomalous optical-axis realignment and retardation changes throughout the loading event. The detected anomalous optical-axis realignment is shown to be associated with crack initiation, propagation, and specimen failure in a dynamically loaded polycarbonate specimen. The work provides a foundation for detecting damage in biological tissues through local collagen fiber realignment and fracture during dynamic loading.
NASA Astrophysics Data System (ADS)
Simpson, Amber; Maltese, Adam
2017-04-01
The term failure typically evokes negative connotations in educational settings and is likely to be accompanied by negative emotional states, a low sense of confidence, and lack of persistence. These negative emotional and behavioral states may factor into an individual not pursuing a degree or career in science, technology, engineering, or mathematics (STEM). This is of particular concern considering the low number of women and underrepresented minorities pursuing and working in STEM fields. Utilizing interview data with professionals across STEM, we sought to understand the role failure played in the persistence of individuals who enter and pursue paths toward STEM-related careers. Findings highlighted how participants' experiences with failure (1) shaped their outlooks or views of failure, (2) shaped their trajectories within STEM, and (3) provided them with additional skills or qualities. A few differences based on participants' sex, field, and highest degree also manifested in our analysis. We expect the results from this study to add research-based results to the current conversation around whether experiences with failure should be part of formal and informal educational settings and standards-based practices.
NASA Astrophysics Data System (ADS)
Cohen, D.; Michlmayr, G.; Or, D.
2012-04-01
Shearing of dense granular materials appears in many engineering and Earth science applications. Under a constant strain rate, the shearing stress at steady state oscillates, with slow rises followed by rapid drops that are linked to the build-up and failure of force chains. Experiments indicate that these drops display exponential statistics. Measurements of acoustic emissions during shearing indicate that the energy liberated by failure of these force chains has power-law statistics. Representing force chains as fibers, we use a stick-slip fiber bundle model to obtain analytical solutions for the statistical distributions of stress drops and failure energy. In the model, fibers stretch, fail, and regain strength during deformation. Fibers have Weibull-distributed threshold strengths with either quenched or annealed disorder. The shapes of the distributions of drops and energy obtained from the model are similar to those measured during shearing experiments. This simple model may be useful for identifying failure events linked to force-chain failures. Future generalizations of the model that include different types of fiber failure may also allow identification of different types of granular failures that have distinct statistical acoustic emission signatures.
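One load step of an equal-load-sharing fiber bundle, in the spirit of the stick-slip model above, can be sketched as follows: fibers whose threshold is below the per-fiber load fail, the load redistributes over the survivors, and the resulting avalanche size is one "stress drop". In the paper the thresholds are Weibull-distributed and fibers regain strength; here they are fixed values and failure is permanent, purely for illustration.

```python
def avalanche(total_load, thresholds):
    # equal load sharing: each surviving fiber carries total_load / n;
    # failing the weakest fiber redistributes load and may cascade
    alive = sorted(thresholds)
    failed = 0
    while alive and total_load / len(alive) > alive[0]:
        alive.pop(0)          # weakest fiber fails, load redistributes
        failed += 1
    return failed, alive
```

Small loads produce no failures, intermediate loads produce small drops, and a sufficiently large load triggers a cascade that empties the bundle, mirroring the build-up-and-drop dynamics described above.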
Sparacia, Gianvincenzo; Cannella, Roberto; Lo Re, Vincenzina; Gambino, Angelo; Mamone, Giuseppe; Miraglia, Roberto
2018-02-17
Cerebral microbleeds (CMBs) are small rounded lesions representing cerebral hemosiderin deposits, surrounded by macrophages, that result from previous microhemorrhages. The aim of this study was to review the distribution of cerebral microbleeds in patients with end-stage organ failure and their association with specific end-stage organ failure risk factors. Between August 2015 and June 2017, we evaluated 15 patients, 9 males and 6 females (mean age 65.5 years). The patient population was subdivided into three groups according to the organ failure: (a) chronic kidney failure (n = 8), (b) restrictive cardiomyopathy undergoing heart transplantation (n = 1), and (c) end-stage liver failure undergoing liver transplantation (n = 6). The MR exams were performed on a 3T MR unit, and the SWI sequence was used for the detection of CMBs. CMBs were classified as supratentorial lobar, supratentorial non-lobar, or infratentorial. A total of 91 microbleeds were observed in 15 patients. Fifty-nine CMB lesions (64.8%) had a supratentorial lobar distribution, 17 (18.8%) had a supratentorial non-lobar distribution, and the remaining 15 (16.4%) were infratentorial. An overall predominance of multiple supratentorial lobar localizations was found in all types of end-stage organ failure. The presence of CMBs was significantly correlated with age, hypertension, and specific end-stage organ failure risk factors (p < 0.001). CMBs are mostly found in supratentorial lobar locations in end-stage organ failure. The improved detection of CMBs with SWI sequences may contribute to a more accurate identification of patients with cerebral risk factors, to prevent complications during or after organ transplantation.
NASA Astrophysics Data System (ADS)
Sataer, G.; Sultan, M.; Yellich, J. A.; Becker, R.; Emil, M. K.; Palaseanu, M.
2017-12-01
Throughout the 20th century and into the 21st century, significant losses of residential, commercial and governmental property were reported along the shores of the Great Lakes region due to one or more of the following factors: high lake levels, wave action, and groundwater discharge. A collaborative effort (Western Michigan University, University of Toledo, Michigan Geological Survey [MGS], United States Geological Survey [USGS], National Oceanic and Atmospheric Administration [NOAA]) is underway to examine the temporal topographic variations along the shoreline and the adjacent bluff extending from the City of South Haven in the south to the City of Saugatuck in the north, within Allegan County. Our objectives include two main tasks: (1) identification of the timing of, and the areas witnessing, slope failure and shoreline erosion, and (2) investigation of the factors causing the observed failures and erosion. This is being accomplished over the study area by: (1) detecting and measuring slope subsidence rates (velocities along the line of sight) and failures using radar interferometric persistent scatterer (PS) techniques applied to data from ESA's European Remote Sensing satellites ERS-1 and -2 (spatial resolution: 25 m) acquired from 1995 to 2007, (2) extracting temporal high-resolution (20 cm) digital elevation models (DEMs) for the study area from temporal imagery acquired by Unmanned Aerial Vehicles (UAVs) and applying change detection techniques to the extracted DEMs, (3) detecting change in elevation and slope profiles extracted from two LIDAR Coastal National Elevation Database (CoNED) DEMs (spatial resolution: 0.5 m) acquired in 2008 and 2012, and (4) spatial and temporal correlation of the detected changes in elevation with relevant data sets (e.g., lake levels, precipitation, groundwater levels) in search of causal effects.
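The DEM change-detection step in task (2) can be sketched on toy grids: difference two elevation rasters and flag cells whose elevation dropped by more than a threshold as candidate slope-failure or erosion sites. The grid values and threshold are invented; real CoNED/UAV processing also requires co-registration and uncertainty handling.

```python
def elevation_change(dem_old, dem_new):
    # per-cell elevation difference between two co-registered DEM grids
    return [[new - old for old, new in zip(row_old, row_new)]
            for row_old, row_new in zip(dem_old, dem_new)]

def failure_cells(dem_old, dem_new, drop_threshold):
    # (row, col) indices where elevation fell by at least the threshold
    change = elevation_change(dem_old, dem_new)
    return [(i, j) for i, row in enumerate(change)
            for j, dz in enumerate(row) if dz <= -drop_threshold]
```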
Expanded envelope concepts for aircraft control-element failure detection and identification
NASA Technical Reports Server (NTRS)
Weiss, Jerold L.; Hsu, John Y.
1988-01-01
The purpose of this effort was to develop and demonstrate concepts for expanding the envelope of failure detection and isolation (FDI) algorithms for aircraft-path failures. An algorithm which uses analytic-redundancy in the form of aerodynamic force and moment balance equations was used. Because aircraft-path FDI uses analytical models, there is a tradeoff between accuracy and the ability to detect and isolate failures. For single flight condition operation, design and analysis methods are developed to deal with this robustness problem. When the departure from the single flight condition is significant, algorithm adaptation is necessary. Adaptation requirements for the residual generation portion of the FDI algorithm are interpreted as the need for accurate, large-motion aero-models, over a broad range of velocity and altitude conditions. For the decision-making part of the algorithm, adaptation may require modifications to filtering operations, thresholds, and projection vectors that define the various hypothesis tests performed in the decision mechanism. Methods of obtaining and evaluating adequate residual generation and decision-making designs have been developed. The application of the residual generation ideas to a high-performance fighter is demonstrated by developing adaptive residuals for the AFTI-F-16 and simulating their behavior under a variety of maneuvers using the results of a NASA F-16 simulation.
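The analytic-redundancy idea above — force and moment balance equations as residual generators — can be sketched with a point-mass stand-in: compare measured acceleration against the model's prediction F/m and flag a control-element failure when the residual exceeds a threshold. The linear model and threshold below are illustrative; the report's residuals use full aero models and adaptive thresholds.

```python
def force_balance_residual(measured_accel, modeled_force, mass):
    # residual = measurement minus model prediction (Newton's second law)
    return measured_accel - modeled_force / mass

def failure_detected(measured_accel, modeled_force, mass, threshold):
    # decision step: a residual beyond the threshold implies the control
    # element did not produce the modeled force
    return abs(force_balance_residual(measured_accel, modeled_force, mass)) > threshold
```

The robustness tradeoff discussed above shows up here directly: a tighter threshold detects smaller failures but is more sensitive to model error.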
Conesa-Muñoz, Jesús; Gonzalez-de-Soto, Mariano; Gonzalez-de-Santos, Pablo; Ribeiro, Angela
2015-03-05
This paper describes a supervisor system for monitoring the operation of automated agricultural vehicles. The system analyses all of the information provided by the sensors and subsystems on the vehicles in real time and notifies the user when a failure or potentially dangerous situation is detected. In some situations, it is even able to execute a neutralising protocol to remedy the failure. The system is based on a distributed and multi-level architecture that divides the supervision into different subsystems, allowing for better management of the detection and repair of failures. The proposed supervision system was developed to perform well in several scenarios, such as spraying canopy treatments against insects and diseases and selective weed treatments, by either spraying herbicide or burning pests with a mechanical-thermal actuator. Results are presented for selective weed treatment by the spraying of herbicide. The system successfully supervised the task; it detected failures such as service disruptions, incorrect working speeds, incorrect implement states, and potential collisions. Moreover, the system was able to prevent collisions between vehicles by taking action to avoid intersecting trajectories. The results show that the proposed system is a highly useful tool for managing fleets of autonomous vehicles. In particular, it can be used to manage agricultural vehicles during treatment operations.
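Two of the supervisor checks listed above — an out-of-range working speed and a predicted trajectory intersection between vehicles — can be sketched as follows. The thresholds, state layout, and straight-line motion model are assumptions for illustration.

```python
def speed_ok(speed, lo, hi):
    # working-speed check against the treatment's allowed range
    return lo <= speed <= hi

def collision_predicted(p1, v1, p2, v2, horizon, min_dist):
    # sample both straight-line trajectories over the horizon (in ticks)
    # and flag any moment the vehicles come closer than min_dist
    for t in range(horizon + 1):
        x1, y1 = p1[0] + v1[0] * t, p1[1] + v1[1] * t
        x2, y2 = p2[0] + v2[0] * t, p2[1] + v2[1] * t
        if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 < min_dist:
            return True
    return False
```

When a check fails, a supervisor of this kind would notify the user or trigger a neutralising action, e.g. commanding one vehicle to stop.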
Processing device with self-scrubbing logic
Wojahn, Christopher K.
2016-03-01
An apparatus includes a processing unit including a configuration memory and self-scrubber logic coupled to read the configuration memory to detect compromised data stored in the configuration memory. The apparatus also includes a watchdog unit external to the processing unit and coupled to the self-scrubber logic to detect a failure in the self-scrubber logic. The watchdog unit is coupled to the processing unit to selectively reset the processing unit in response to detecting the failure in the self-scrubber logic. The apparatus also includes an external memory external to the processing unit and coupled to send configuration data to the configuration memory in response to a data feed signal outputted by the self-scrubber logic.
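The watchdog relationship this patent abstract describes can be modeled in a few lines: the self-scrubber must "kick" the watchdog periodically, and a missed kick (i.e., a failure in the self-scrubber logic) makes the watchdog reset the processing unit. Timing is modeled in abstract ticks; the class and its interface are illustrative.

```python
class Watchdog:
    # external watchdog: resets the processing unit if not kicked in time
    def __init__(self, timeout_ticks):
        self.timeout = timeout_ticks
        self.counter = 0
        self.resets = 0

    def kick(self):
        # called by healthy self-scrubber logic
        self.counter = 0

    def tick(self):
        # called once per time unit
        self.counter += 1
        if self.counter >= self.timeout:
            self.resets += 1      # stand-in for resetting the processing unit
            self.counter = 0
```

A scrubber that stops kicking (because it has itself failed) is caught within one timeout period, which is the failure mode the external watchdog exists to cover.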
Comparison of mode of failure between primary and revision total knee arthroplasties.
Liang, H; Bae, J K; Park, C H; Kim, K I; Bae, D K; Song, S J
2018-04-01
Cognizance of common reasons for failure in primary and revision TKA, together with their time course, facilitates prevention. However, there have been few reports specifically comparing modes of failure for primary vs. revision TKA using a single prosthesis. The goal of the study was to compare the survival rates, modes of failure, and time periods associated with each mode of failure of primary vs. revision TKA. The hypothesis was that the survival rates, modes of failure, time period for each mode of failure, and risk factors would differ between primary and revision TKA. Data from a consecutive cohort comprising 1606 knees (1174 patients) of primary TKA patients and 258 knees (224 patients) of revision TKA patients, in all of whom surgery involved a P.F.C.® prosthesis (Depuy, Johnson & Johnson, Warsaw, IN), were retrospectively reviewed. The mean follow-up periods of primary and revision TKAs were 9.2 and 9.8 years, respectively. The average 10- and 15-year survival rates were 96.7% (95% CI, ±0.7%) and 85.4% (95% CI, ±2.0%) for primary TKA, and 91.4% (95% CI, ±2.5%) and 80.5% (95% CI, ±4.5%) for revision TKA. Common modes of failure included polyethylene wear, loosening, and infection. The most common mode of failure was polyethylene wear in primary TKA and infection in revision TKA. The mean periods (i.e., latencies) of polyethylene wear and loosening did not differ between primary and revision TKAs, but the mean period of infection was significantly longer for revision TKA (1.2 vs. 4.8 years, P = 0.003). Survival rates decreased with time, particularly more than 10 years post-surgery, for both primary and revision TKAs. Continuous efforts are required to prevent and detect the various modes of failure during long-term follow-up. Greater attention is necessary to detect late infection-induced failure following revision TKA. Case-control study, Level III. Copyright © 2017 Elsevier Masson SAS. All rights reserved.