Redundancy relations and robust failure detection
NASA Technical Reports Server (NTRS)
Chow, E. Y.; Lou, X. C.; Verghese, G. C.; Willsky, A. S.
1984-01-01
All failure detection methods are based on the use of redundancy, that is, on (possibly dynamic) relations among the measured variables. Consequently, the robustness of the failure detection process depends to a great degree on the reliability of the redundancy relations, given the inevitable presence of model uncertainties. The problem of determining redundancy relations which are optimally robust, in a sense which includes the major issues of importance in practical failure detection, is addressed. A significant amount of intuition concerning the geometry of robust failure detection is provided.
Optimally robust redundancy relations for failure detection in uncertain systems
NASA Technical Reports Server (NTRS)
Lou, X.-C.; Willsky, A. S.; Verghese, G. C.
1986-01-01
All failure detection methods are based, either explicitly or implicitly, on the use of redundancy, i.e. on (possibly dynamic) relations among the measured variables. The robustness of the failure detection process consequently depends to a great degree on the reliability of the redundancy relations, which in turn is affected by the inevitable presence of model uncertainties. In this paper the problem of determining redundancy relations that are optimally robust is addressed in a sense that includes several major issues of importance in practical failure detection and that provides a significant amount of intuition concerning the geometry of robust failure detection. A procedure is given involving the construction of a single matrix and its singular value decomposition for the determination of a complete sequence of redundancy relations, ordered in terms of their level of robustness. This procedure also provides the basis for comparing levels of robustness in redundancy provided by different sets of sensors.
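The SVD procedure lends itself to a compact numerical illustration. Below is a minimal numpy sketch, assuming a toy four-sensor/two-state suite and treating a few perturbed output matrices as the uncertainty set; the paper's actual construction (which also handles dynamics and weighting) is more elaborate, so this only conveys the ordering idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor suite: 4 sensors observing 2 states. The nominal
# output matrix plus two perturbed copies stand in for model uncertainty.
C_nominal = np.array([[1.0,  0.0],
                      [0.0,  1.0],
                      [1.0,  1.0],
                      [1.0, -1.0]])
models = [C_nominal,
          C_nominal + 0.02 * rng.standard_normal((4, 2)),
          C_nominal + 0.02 * rng.standard_normal((4, 2))]

# A parity vector v is robust if v.T @ C_k is small for every model k,
# i.e. if v nearly annihilates all the models stacked side by side.
stacked = np.hstack(models)              # shape (4, 6)
U, s, _ = np.linalg.svd(stacked)

# Left singular vectors ordered by increasing singular value give a
# complete sequence of parity relations, most robust first.
for v, sigma in zip(U[:, ::-1].T, s[::-1]):
    worst = max(np.linalg.norm(v @ C) for C in models)
    print(f"sigma = {sigma:.4f}   worst-case model leakage = {worst:.4f}")
```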
Robust failure detection filters. M.S. Thesis
NASA Technical Reports Server (NTRS)
Sanmartin, A. M.
1985-01-01
The robustness of detection filters applied to the detection of actuator failures on a free-free beam is analyzed. This analysis is based on computer simulation tests of the detection filters in the presence of different types of model mismatch, and on frequency response functions of the transfers corresponding to the model mismatch. The robustness of detection filters based on a model of the beam containing a large number of structural modes varied dramatically with the placement of some of the filter poles. The dynamics of these filters were very hard to analyze. The design of detection filters with a number of modes equal to the number of sensors was trivial. They can be configured to detect any number of actuator failure events. The dynamics of these filters were very easy to analyze and their robustness properties were much improved. A change of the output transformation allowed the filter to perform satisfactorily with realistic levels of model mismatch.
Analytical redundancy and the design of robust failure detection systems
NASA Technical Reports Server (NTRS)
Chow, E. Y.; Willsky, A. S.
1984-01-01
The Failure Detection and Identification (FDI) process is viewed as consisting of two stages: residual generation and decision making. It is argued that a robust FDI system can be achieved by designing a robust residual generation process. Analytical redundancy, the basis for residual generation, is characterized in terms of a parity space. Using the concept of parity relations, residuals can be generated in a number of ways, and the design of a robust residual generation process can be formulated as a minimax optimization problem. An example is included to illustrate this design methodology. Previously announced in STAR as N83-20653
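As a concrete illustration of a parity relation, the sketch below builds a static parity space for a hypothetical four-sensor, two-state configuration: any vector in the left null space of the output matrix yields a residual that is insensitive to the (unknown) state and responds only to faults and noise. This is a simplification of the paper's dynamic, minimax-optimized relations.

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical direct redundancy: 4 sensors measuring 2 states.
C = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])

# Columns of V span the parity space: V.T @ C = 0, so the residual
# r = V.T @ y depends only on sensor faults and noise, not on the state.
V = null_space(C.T)                      # shape (4, 2)

x = np.array([0.3, -0.7])                # unknown true state
fault = np.array([0.0, 0.0, 0.5, 0.0])   # bias fault on the third sensor
y = C @ x + fault + 0.01 * np.random.randn(4)

r = V.T @ y
print("residual norm:", np.linalg.norm(r))   # large only when a fault is present
```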
Robust detection-isolation-accommodation for sensor failures
NASA Technical Reports Server (NTRS)
Weiss, J. L.; Pattipati, K. R.; Willsky, A. S.; Eterno, J. S.; Crawford, J. T.
1985-01-01
The results of a one-year study to: (1) develop a theory for Robust Failure Detection and Identification (FDI) in the presence of model uncertainty, (2) develop a design methodology which utilizes the robust FDI theory, (3) apply the methodology to a sensor FDI problem for the F-100 jet engine, and (4) demonstrate the application of the theory to the evaluation of alternative FDI schemes are presented. Theoretical results in statistical discrimination are used to evaluate the robustness of residual signals (or parity relations) in terms of their usefulness for FDI. Furthermore, optimally robust parity relations are derived through the optimization of robustness metrics. The result is viewed as decentralization of the FDI process. A general structure for decentralized FDI is proposed and robustness metrics are used for determining various parameters of the algorithm.
Robust detection, isolation and accommodation for sensor failures
NASA Technical Reports Server (NTRS)
Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.
1986-01-01
The objective is to extend the recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are to be used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and process inputs and outputs, are used to generate these innovations. Thresholds used to determine failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed. It represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping of the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies which used thresholds that were selected empirically. Comparison of these two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method compared to previous techniques.
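The threshold selector concept can be caricatured in a few lines: a residual is compared against a bound assembled from the modeling-error and noise bounds, so that a healthy system can never (in theory) trip the alarm. The gains and bounds below are illustrative assumptions, not values from the study.

```python
import numpy as np

def threshold(u_history, model_error_gain, noise_bound):
    """Hypothetical adaptive threshold: a healthy residual is bounded by
    the modeling-error gain times the recent input energy plus the noise
    level, so any exceedance is attributed to a failure."""
    return model_error_gain * np.linalg.norm(u_history) + noise_bound

residual = 0.9
u_hist = np.array([0.2, 1.5, 2.0])       # recent actuator commands (made up)
if abs(residual) > threshold(u_hist, model_error_gain=0.05, noise_bound=0.3):
    print("declare sensor failure")
else:
    print("within healthy bounds")
```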
Robust Fault Detection and Isolation for Stochastic Systems
NASA Technical Reports Server (NTRS)
George, Jemin; Gregory, Irene M.
2010-01-01
This paper outlines the formulation of a robust fault detection and isolation scheme that can precisely detect and isolate simultaneous actuator and sensor faults for uncertain linear stochastic systems. The given robust fault detection scheme based on the discontinuous robust observer approach would be able to distinguish between model uncertainties and actuator failures and therefore eliminate the problem of false alarms. Since the proposed approach involves precise reconstruction of sensor faults, it can also be used for sensor fault identification and the reconstruction of true outputs from faulty sensor outputs. Simulation results presented here validate the effectiveness of the robust fault detection and isolation system.
NASA Technical Reports Server (NTRS)
Hall, Steven R.; Walker, Bruce K.
1990-01-01
A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.
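A minimal sketch of the truncated-series idea behind OSGLR follows, assuming a scalar residual stream and a low-order polynomial basis as a stand-in for the orthogonal series; the statistic is the residual energy explained by the basis, compared against a threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
t = np.linspace(0.0, 1.0, N)

# Healthy residuals are white noise; a failure adds a smooth signature.
sigma = 0.1
residual = sigma * rng.standard_normal(N)
residual[60:] += 0.3 * (t[60:] - t[60])          # ramp-type failure onset

# Truncated-series assumption: the failure signature lies in the span of
# a few basis functions (low-order polynomials here, standing in for the
# orthogonal series used by OSGLR).
B = np.vstack([t**k for k in range(3)]).T        # N x 3 basis matrix

# GLR-style statistic: residual energy explained by the basis,
# normalized by the noise variance.
coef, *_ = np.linalg.lstsq(B, residual, rcond=None)
glr = (residual @ B @ coef) / sigma**2
print("GLR statistic:", glr)                     # compare to a threshold
```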
Gordon, N. C.; Wareham, D. W.
2009-01-01
We report the failure of the automated MicroScan WalkAway system to detect carbapenem heteroresistance in Enterobacter aerogenes. Carbapenem resistance has become an increasing concern in recent years, and robust surveillance is required to prevent dissemination of resistant strains. Reliance on automated systems may delay the detection of emerging resistance. PMID:19641071
Reliability issues in active control of large flexible space structures
NASA Technical Reports Server (NTRS)
Vandervelde, W. E.
1986-01-01
Efforts in this reporting period were centered on four research tasks: design of failure detection filters for robust performance in the presence of modeling errors, design of generalized parity relations for robust performance in the presence of modeling errors, design of failure-sensitive observers using the geometric system theory of Wonham, and computational techniques for evaluation of the performance of control systems with fault tolerance and redundancy management.
A Robust Damage-Reporting Strategy for Polymeric Materials Enabled by Aggregation-Induced Emission.
Robb, Maxwell J; Li, Wenle; Gergely, Ryan C R; Matthews, Christopher C; White, Scott R; Sottos, Nancy R; Moore, Jeffrey S
2016-09-28
Microscopic damage inevitably leads to failure in polymers and composite materials, but it is difficult to detect without the aid of specialized equipment. The ability to enhance the detection of small-scale damage prior to catastrophic material failure is important for improving the safety and reliability of critical engineering components, while simultaneously reducing life cycle costs associated with regular maintenance and inspection. Here, we demonstrate a simple, robust, and sensitive fluorescence-based approach for autonomous detection of damage in polymeric materials and composites enabled by aggregation-induced emission (AIE). This simple, yet powerful system relies on a single active component, and the general mechanism delivers outstanding performance in a wide variety of materials with diverse chemical and mechanical properties.
NASA Astrophysics Data System (ADS)
Guo, Wenzhang; Wang, Hao; Wu, Zhengping
2018-03-01
Most existing cascading failure mitigation strategies for power grids based on complex networks ignore the impact of electrical characteristics on dynamic performance. In this paper, the robustness of the power grid under a power decentralization strategy is analysed through cascading failure simulation based on AC flow theory. The flow-sensitive (FS) centrality is introduced by integrating topological features and electrical properties to help determine the siting of the generation nodes. The simulation results of the IEEE-bus systems show that the flow-sensitive centrality method is a more stable and accurate approach and can enhance the robustness of the network remarkably. Through the study of the optimal flow-sensitive centrality selection for different networks, we find that the robustness of a network with an obvious small-world effect depends more on the contribution of the generation nodes detected by community structure; otherwise, the contribution of the generation nodes with an important influence on power flow is more critical. In addition, community structure plays a significant role in balancing the power flow distribution and further slowing the propagation of failures. These results are useful in power grid planning and cascading failure prevention.
Failure detection system design methodology. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chow, E. Y.
1980-01-01
The design of a failure detection and identification system consists of designing a robust residual generation process and a high performance decision making process. The design of these two processes are examined separately. Residual generation is based on analytical redundancy. Redundancy relations that are insensitive to modelling errors and noise effects are important for designing robust residual generation processes. The characterization of the concept of analytical redundancy in terms of a generalized parity space provides a framework in which a systematic approach to the determination of robust redundancy relations are developed. The Bayesian approach is adopted for the design of high performance decision processes. The FDI decision problem is formulated as a Bayes sequential decision problem. Since the optimal decision rule is incomputable, a methodology for designing suboptimal rules is proposed. A numerical algorithm is developed to facilitate the design and performance evaluation of suboptimal rules.
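For the sequential decision-making side, a classic suboptimal rule of the kind such designs approximate is Wald's sequential probability ratio test; the sketch below applies it to a scalar residual stream with assumed healthy/failed means (the thesis's actual suboptimal rules are more refined than this).

```python
import numpy as np

def sprt(samples, mu0=0.0, mu1=0.5, sigma=0.2, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test on a residual stream:
    decide between no-failure mean mu0 and failure mean mu1."""
    a = np.log(beta / (1 - alpha))          # lower (accept-healthy) boundary
    b = np.log((1 - beta) / alpha)          # upper (declare-failure) boundary
    llr = 0.0
    for k, x in enumerate(samples):
        # Gaussian log-likelihood ratio increment for this sample.
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= b:
            return "failure", k
        if llr <= a:
            return "healthy", k
    return "undecided", len(samples)

rng = np.random.default_rng(1)
print(sprt(0.5 + 0.2 * rng.standard_normal(50)))   # ('failure', small k)
```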
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Belcastro, Christine; Khong, Thuan
2006-01-01
Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. Such systems developed for failure detection, identification, and reconfiguration, as well as upset recovery, need to be evaluated over broad regions of the flight envelope or under extreme flight conditions, and should include various sources of uncertainty. To apply formal robustness analysis, formulation of linear fractional transformation (LFT) models of complex parameter-dependent systems is required, which represent system uncertainty due to parameter uncertainty and actuator faults. This paper describes a detailed LFT model formulation procedure from the nonlinear model of a transport aircraft by using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The closed-loop system is evaluated over the entire flight envelope based on the generated LFT model which can cover nonlinear dynamics. The robustness analysis results of the closed-loop fault tolerant control system of a transport aircraft are presented. A reliable flight envelope (safe flight regime) is also calculated from the robust performance analysis results, over which the closed-loop system can achieve the desired performance of command tracking and failure detection.
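For readers unfamiliar with LFT models, the sketch below evaluates a scalar upper LFT, F_u(M, Delta) = M22 + M21 Delta (I - M11 Delta)^(-1) M12, over a gridded real uncertainty; this is a crude stand-in for the structured-singular-value machinery such analyses rely on, and the matrix entries are made up for illustration.

```python
import numpy as np

def upper_lft(M11, M12, M21, M22, Delta):
    """Evaluate the upper linear fractional transformation
    F_u(M, Delta) = M22 + M21 @ Delta @ inv(I - M11 @ Delta) @ M12."""
    I = np.eye(M11.shape[0])
    return M22 + M21 @ Delta @ np.linalg.inv(I - M11 @ Delta) @ M12

# Scalar toy blocks (hypothetical values, not from the paper).
M11, M12 = np.array([[0.3]]), np.array([[1.0]])
M21, M22 = np.array([[0.5]]), np.array([[2.0]])

# Grid the normalized real uncertainty and record the worst-case gain,
# a brute-force stand-in for a mu (structured singular value) bound.
gains = [abs(upper_lft(M11, M12, M21, M22, np.array([[d]]))[0, 0])
         for d in np.linspace(-1.0, 1.0, 41)]
print("worst-case closed-loop gain over the grid:", max(gains))
```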
The WorkPlace distributed processing environment
NASA Technical Reports Server (NTRS)
Ames, Troy; Henderson, Scott
1993-01-01
Real time control problems require robust, high performance solutions. Distributed computing can offer high performance through parallelism and robustness through redundancy. Unfortunately, implementing distributed systems with these characteristics places a significant burden on the applications programmers. Goddard Code 522 has developed WorkPlace to alleviate this burden. WorkPlace is a small, portable, embeddable network interface which automates message routing, failure detection, and re-configuration in response to failures in distributed systems. This paper describes the design and use of WorkPlace, and its application in the construction of a distributed blackboard system.
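WorkPlace's failure detection protocol is not specified in the abstract; as a generic illustration of the mechanism such a layer automates, here is a minimal heartbeat-timeout detector (class name, node names, and timeout are assumptions).

```python
import time

class HeartbeatMonitor:
    """Toy failure detector of the kind a layer like WorkPlace automates:
    a peer is suspected once no heartbeat arrives within a timeout."""
    def __init__(self, timeout_s=2.0):
        self.timeout_s = timeout_s
        self.last_seen = {}

    def heartbeat(self, node):
        self.last_seen[node] = time.monotonic()

    def suspected(self):
        now = time.monotonic()
        return [n for n, t in self.last_seen.items()
                if now - t > self.timeout_s]

mon = HeartbeatMonitor(timeout_s=2.0)
mon.heartbeat("node-a")
mon.heartbeat("node-b")
time.sleep(2.1)
mon.heartbeat("node-b")          # node-a misses its deadline
print(mon.suspected())           # ['node-a']
```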
NASA Technical Reports Server (NTRS)
Weiss, Jerold L.; Hsu, John Y.
1986-01-01
The use of a decentralized approach to failure detection and isolation for use in restructurable control systems is examined. This work has produced: (1) A method for evaluating fundamental limits to FDI performance; (2) Application using flight recorded data; (3) A working control element FDI system with maximal sensitivity to critical control element failures; (4) Extensive testing on realistic simulations; and (5) A detailed design methodology involving parameter optimization (with respect to model uncertainties) and sensitivity analyses. This project has concentrated on detection and isolation of generic control element failures since these failures frequently lead to emergency conditions and since knowledge of remaining control authority is essential for control system redesign. The failures are generic in the sense that no temporal failure signature information was assumed. Thus, various forms of functional failures are treated in a unified fashion. Such a treatment results in a robust FDI system (i.e., one that covers all failure modes) but sacrifices some performance when detailed failure signature information is known, useful, and employed properly. It was assumed throughout that all sensors are validated (i.e., contain only in-spec errors) and that only the first failure of a single control element needs to be detected and isolated. The FDI system which has been developed will handle a class of multiple failures.
Optimally Robust Redundancy Relations for Failure Detection in Uncertain Systems,
1983-04-01
particular applications. While the general methods provide the basis for what in principle should be a widely applicable failure detection methodology... modifications to this result which overcome them at no fundamental increase in complexity. 4.1 Scaling: A critical problem with the criteria of the preceding... criterion which takes scaling into account, Eq. (45). As in (38), we can multiply the C_i by positive scalars to take into account unequal weightings on
Chen, Wen; Chowdhury, Fahmida N; Djuric, Ana; Yeh, Chih-Ping
2014-09-01
This paper provides a new design of robust fault detection for turbofan engines with adaptive controllers. The critical issue is that the adaptive controllers can mask the effects of faults, so that the actual system outputs remain at their pre-specified values, making it difficult to detect faults/failures. To solve this problem, a Total Measurable Fault Information Residual (ToMFIR) technique with the aid of system transformation is adopted to detect faults in turbofan engines with adaptive controllers. This design is a ToMFIR-redundancy-based robust fault detection. The ToMFIR is first introduced and existing results are summarized. The detailed design process of the ToMFIRs is presented and a turbofan engine model is simulated to verify the effectiveness of the proposed ToMFIR-based fault-detection strategy.
Simulation Assisted Risk Assessment Applied to Launch Vehicle Conceptual Design
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Go, Susie; Gee, Ken; Lawrence, Scott
2008-01-01
A simulation-based risk assessment approach is presented and is applied to the analysis of abort during the ascent phase of a space exploration mission. The approach utilizes groupings of launch vehicle failures, referred to as failure bins, which are mapped to corresponding failure environments. Physical models are used to characterize the failure environments in terms of the risk due to blast overpressure, resulting debris field, and the thermal radiation due to a fireball. The resulting risk to the crew is dynamically modeled by combining the likelihood of each failure, the severity of the failure environments as a function of initiator and time of the failure, the robustness of the crew module, and the warning time available due to early detection. The approach is shown to support the launch vehicle design process by characterizing the risk drivers and identifying regions where failure detection would significantly reduce the risk to the crew.
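The risk composition described can be sketched as a sum over failure bins of likelihood times severity, discounted by what early warning buys back; all names and numbers below are invented placeholders, not values from the study.

```python
# Hypothetical failure bins with per-bin likelihood, environment severity
# (probability the environment defeats the crew module), and the fraction
# of risk removed by early detection/warning time.
failure_bins = [
    {"name": "booster burst",   "p_fail": 1e-3, "severity": 0.8, "warning_benefit": 0.6},
    {"name": "engine shutdown", "p_fail": 5e-3, "severity": 0.1, "warning_benefit": 0.9},
    {"name": "fireball on pad", "p_fail": 2e-4, "severity": 0.9, "warning_benefit": 0.3},
]

def crew_risk(bins):
    # Risk = sum over bins of likelihood x severity, discounted by the
    # mitigation that warning time makes possible.
    return sum(b["p_fail"] * b["severity"] * (1 - b["warning_benefit"])
               for b in bins)

print(f"loss-of-crew probability: {crew_risk(failure_bins):.2e}")
```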
NASA Astrophysics Data System (ADS)
Fauji, Shantanu
We consider the problem of energy-efficient and fault-tolerant in-network aggregation for wireless sensor networks (WSNs). In-network aggregation is the process of aggregating data while collecting it from sensors to the base station. This process should be energy efficient due to the limited energy at the sensors, and tolerant to the high failure rates common in sensor networks. Tree-based in-network aggregation protocols, although energy efficient, are not robust to network failures. Multipath routing protocols are robust to failures to a certain degree but are not energy efficient due to the overhead in the maintenance of multiple paths. We propose a new protocol for in-network aggregation in WSNs, which is energy efficient, achieves high lifetime, and is robust to changes in the network topology. Our protocol, the gossip-based protocol for in-network aggregation (GPIA), is based on the spreading of information via gossip. GPIA is not only adaptive to failures and changes in the network topology, but is also energy efficient. The energy efficiency of GPIA comes from all the nodes being capable of selective message reception and detecting convergence of the aggregation early. We experimentally show that GPIA provides significant improvement over competitors like Ridesharing, Synopsis Diffusion and the pure version of gossip. GPIA shows tenfold, fivefold and twofold improvement over the pure gossip, synopsis diffusion and Ridesharing protocols in terms of network lifetime, respectively. Further, GPIA retains gossip's robustness to failures and improves upon the accuracy of synopsis diffusion and Ridesharing.
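As an illustration of the gossip core such a protocol builds on (GPIA itself adds selective reception and early convergence detection), here is a standard push-sum averaging sketch on a fully connected toy network; the topology and round count are assumptions.

```python
import random

def gossip_average(values, rounds=50, seed=0):
    """Push-sum gossip: every node keeps a (sum, weight) pair and
    repeatedly shares half of it with a random peer; sum/weight then
    converges to the network-wide average at every node."""
    random.seed(seed)
    n = len(values)
    s = list(values)          # running sums
    w = [1.0] * n             # running weights
    for _ in range(rounds):
        for i in range(n):
            j = random.randrange(n)          # fully connected for brevity
            half_s, half_w = s[i] / 2, w[i] / 2
            s[i], w[i] = half_s, half_w
            s[j] += half_s
            w[j] += half_w
    return [si / wi for si, wi in zip(s, w)]

print(gossip_average([10.0, 20.0, 30.0, 40.0]))   # every estimate near 25
```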
Data-Driven Anomaly Detection Performance for the Ares I-X Ground Diagnostic Prototype
NASA Technical Reports Server (NTRS)
Martin, Rodney A.; Schwabacher, Mark A.; Matthews, Bryan L.
2010-01-01
In this paper, we will assess the performance of a data-driven anomaly detection algorithm, the Inductive Monitoring System (IMS), which can be used to detect simulated Thrust Vector Control (TVC) system failures. However, the ability of IMS to detect these failures in a true operational setting may be related to the realistic nature of how they are simulated. As such, we will investigate both a low fidelity and high fidelity approach to simulating such failures, with the latter based upon the underlying physics. Furthermore, the ability of IMS to detect anomalies that were previously unknown and not previously simulated will be studied in earnest, as well as apparent deficiencies or misapplications that result from using the data-driven paradigm. Our conclusions indicate that robust detection performance of simulated failures using IMS is not appreciably affected by the use of a high fidelity simulation. However, we have found that the inclusion of a data-driven algorithm such as IMS into a suite of deployable health management technologies does add significant value.
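IMS is, at heart, a distance-to-nominal monitor; the sketch below captures that flavor with a plain nearest-neighbor distance to healthy training vectors. The real IMS clusters the training data and scores against cluster range envelopes, so treat this as a simplification.

```python
import numpy as np

# IMS-like idea: store vectors of nominal sensor data, then score new
# vectors by their distance to the closest nominal point.
rng = np.random.default_rng(2)
nominal = rng.standard_normal((500, 3))             # training: healthy ops

def anomaly_score(x, nominal_data):
    return np.min(np.linalg.norm(nominal_data - x, axis=1))

healthy = rng.standard_normal(3)
failed = np.array([4.0, -3.5, 5.0])                 # e.g. a TVC actuator bias
print(anomaly_score(healthy, nominal))              # small
print(anomaly_score(failed, nominal))               # large -> alarm
```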
Penza, Veronica; Du, Xiaofei; Stoyanov, Danail; Forgione, Antonello; Mattos, Leonardo S; De Momi, Elena
2018-04-01
Despite the benefits introduced by robotic systems in abdominal Minimally Invasive Surgery (MIS), major complications can still affect the outcome of the procedure, such as intra-operative bleeding. One of the causes is attributed to accidental damage to arteries or veins by the surgical tools, and some of the possible risk factors are related to the lack of sub-surface visibility. Assistive tools guiding the surgical gestures to prevent this kind of injury would represent a relevant step towards safer clinical procedures. However, it is still challenging to develop computer vision systems able to fulfill the main requirements: (i) long-term robustness, (ii) adaptation to environment/object variation and (iii) real-time processing. The purpose of this paper is to develop computer vision algorithms to robustly track soft tissue areas (Safety Area, SA), defined intra-operatively by the surgeon based on the real-time endoscopic images, or registered from a pre-operative surgical plan. We propose a framework that combines an optical flow algorithm with a tracking-by-detection approach in order to be robust against failures caused by: (i) partial occlusion, (ii) total occlusion, (iii) SA out of the field of view, (iv) deformation, (v) illumination changes, (vi) abrupt camera motion, (vii) blur and (viii) smoke. A Bayesian inference-based approach is used to detect failure of the tracker, based on online context information. A Model Update Strategy (MUpS) is also proposed to improve SA re-detection after failures, taking into account the changes of appearance of the SA model due to contact with instruments or image noise. The performance of the algorithm was assessed on two datasets, representing ex-vivo organs and in-vivo surgical scenarios. Results show that the proposed framework, enhanced with MUpS, is capable of maintaining high tracking performance for extended periods of time (≃ 4 min, containing the aforementioned events) with high precision (0.7) and recall (0.8) values, and with a recovery time after a failure between 1 and 8 frames in the worst case.
Robust Fault Detection for Aircraft Using Mixed Structured Singular Value Theory and Fuzzy Logic
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G.
2000-01-01
The purpose of fault detection is to identify when a fault or failure has occurred in a system such as an aircraft or expendable launch vehicle. The faults may occur in sensors, actuators, structural components, etc. One of the primary approaches to model-based fault detection relies on analytical redundancy: the output of a computer-based model (actually a state estimator) is compared with the sensor measurements of the actual system to determine when a fault has occurred. Unfortunately, the state estimator is based on an idealized mathematical description of the underlying plant that is never totally accurate. As a result of these modeling errors, false alarms can occur. This research uses mixed structured singular value theory, a relatively recent and powerful robustness analysis tool, to develop robust estimators and demonstrates the use of these estimators in fault detection. To allow qualitative human experience to be effectively incorporated into the detection process, fuzzy logic is used to predict the seriousness of the fault that has occurred.
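The fuzzy-logic stage can be illustrated independently of the robust estimation: the toy rule base below maps a residual magnitude to a seriousness score via triangular memberships and weighted-average defuzzification. Breakpoints and scores are invented for illustration, not taken from the report.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def fault_seriousness(residual_norm):
    # Fuzzy sets over the residual magnitude (made-up breakpoints) and a
    # representative seriousness score for each rule consequent.
    mu = {"small":  tri(residual_norm, -0.5, 0.0, 0.5),
          "medium": tri(residual_norm, 0.25, 0.75, 1.25),
          "large":  tri(residual_norm, 1.0, 2.0, 3.0)}
    score = {"small": 0.1, "medium": 0.5, "large": 0.9}
    total = sum(mu.values())
    return sum(mu[k] * score[k] for k in mu) / total if total else 0.0

for r in (0.1, 0.8, 2.2):
    print(r, "->", round(fault_seriousness(r), 2))
```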
On Robustness of Deadlock Detection Algorithms for Distributed Computing Systems.
1982-02-01
terms make it much more difficult to detect, avoid or prevent than in the earlier multiprogramming centralized computing systems. Deadlock prevention... failure of site C would not have been critical after the B had been sent. The effect of a type c site (site _ in our example) failing would have no
Triplexer Monitor Design for Failure Detection in FTTH System
NASA Astrophysics Data System (ADS)
Fu, Minglei; Le, Zichun; Hu, Jinhua; Fei, Xia
2012-09-01
The triplexer is one of the key components in FTTH systems, which employ an analog overlay channel for video broadcasting in addition to bidirectional digital transmission. To enhance the survivability of the triplexer as well as the robustness of the FTTH system, a multi-port device named the triplexer monitor was designed and realized, by which failures at triplexer ports can be detected and localized. The triplexer monitor is composed of integrated circuits, and its four input ports are connected to beam splitters whose power division ratio is 95:5. By detecting the sampled optical signal from the beam splitters, the triplexer monitor tracks the status of the four triplexer ports (the 1310 nm, 1490 nm, 1550 nm and com ports). In this paper, the operation scenario of the triplexer monitor with external optical devices is addressed, and the integrated circuit structure of the triplexer monitor is also given. Furthermore, a failure localization algorithm based on a state transition diagram is proposed. To measure the failure detection and localization time for different failed ports, an experimental test-bed was built. Experimental results showed that the detection time for a failure at the 1310 nm, 1490 nm or 1550 nm port was less than 8.20 ms, and for a failure at the com port it was less than 7.20 ms.
2014-05-01
vulnerable to failure is air. This could be a discharge through an air medium or along an air/surface interface. Achieving robustness in dc power... ("sputtering" arcs) are discharges that are most commonly located in series with the intended load; the electrical impedance of the load limits the... particularly those used at voltages > 1000 V, is detection and measurement of partial-discharge (PD) activity. The presence of PD in a component typically
Rule-Based Relaxation of Reference Identification Failures. Technical Report No. 396.
ERIC Educational Resources Information Center
Goodman, Bradley A.
In a step toward creating a robust natural language understanding system which detects and avoids miscommunication, this artificial intelligence research report provides a taxonomy of miscommunication problems that arise in expert-apprentice dialogues (including misunderstandings, wrong communication, and bad analogies), and proposes a flexible…
Simulation-driven machine learning: Bearing fault classification
NASA Astrophysics Data System (ADS)
Sobie, Cameron; Freitas, Carina; Nicolai, Mike
2018-01-01
Increasing the accuracy of mechanical fault detection has the potential to improve system safety and economic performance by minimizing scheduled maintenance and the probability of unexpected system failure. Advances in computational performance have enabled the application of machine learning algorithms across numerous applications including condition monitoring and failure detection. Past applications of machine learning to physical failure have relied explicitly on historical data, which limits the feasibility of this approach to in-service components with extended service histories. Furthermore, recorded failure data is often only valid for the specific circumstances and components for which it was collected. This work directly addresses these challenges for roller bearings with race faults by generating training data using information gained from high resolution simulations of roller bearing dynamics, which is used to train machine learning algorithms that are then validated against four experimental datasets. Several different machine learning methodologies are compared starting from well-established statistical feature-based methods to convolutional neural networks, and a novel application of dynamic time warping (DTW) to bearing fault classification is proposed as a robust, parameter free method for race fault detection.
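Since DTW is proposed here as a parameter-free classifier, a self-contained implementation is short enough to sketch; classification then amounts to nearest-template search under this distance. The simulated fault signatures below are illustrative, not drawn from the cited datasets.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance; parameter
    free, which is what makes it attractive for comparing vibration
    signatures whose fault impulses are shifted in time."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 1, 200)
template = np.sin(40 * t) * (t > 0.5)             # toy race-fault impulse train
measured = np.sin(40 * (t - 0.05)) * (t > 0.55)   # same fault, time-shifted
print(dtw_distance(template, measured))           # small despite the shift
```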
A Diagnostic Approach for Electro-Mechanical Actuators in Aerospace Systems
NASA Technical Reports Server (NTRS)
Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai Frank; Stoelting, Paul; Curran, Simon
2009-01-01
Electro-mechanical actuators (EMA) are finding increasing use in aerospace applications, especially with the trend towards all all-electric aircraft and spacecraft designs. However, electro-mechanical actuators still lack the knowledge base accumulated for other fielded actuator types, particularly with regard to fault detection and characterization. This paper presents a thorough analysis of some of the critical failure modes documented for EMAs and describes experiments conducted on detecting and isolating a subset of them. The list of failures has been prepared through an extensive Failure Modes and Criticality Analysis (FMECA) reference, literature review, and accessible industry experience. Methods for data acquisition and validation of algorithms on EMA test stands are described. A variety of condition indicators were developed that enabled detection, identification, and isolation among the various fault modes. A diagnostic algorithm based on an artificial neural network is shown to operate successfully using these condition indicators and furthermore, robustness of these diagnostic routines to sensor faults is demonstrated by showing their ability to distinguish between them and component failures. The paper concludes with a roadmap leading from this effort towards developing successful prognostic algorithms for electromechanical actuators.
Reference and Reference Failures. Technical Report No. 398.
ERIC Educational Resources Information Center
Goodman, Bradley A.
In order to build robust natural language processing systems that can detect and recover from miscommunication, the investigation of how people communicate and how they recover from problems in communication described in this artificial intelligence report focused on reference problems which a listener may have in determining what or whom a…
Reliability Assessment for Low-cost Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Freeman, Paul Michael
Existing low-cost unmanned aerospace systems are unreliable, and engineers must blend reliability analysis with fault-tolerant control in novel ways. This dissertation introduces the University of Minnesota unmanned aerial vehicle flight research platform, a comprehensive simulation and flight test facility for reliability and fault-tolerance research. An industry-standard reliability assessment technique, the failure modes and effects analysis, is performed for an unmanned aircraft. Particular attention is afforded to the control surface and servo-actuation subsystem. Maintaining effector health is essential for safe flight; failures may lead to loss of control incidents. Failure likelihood, severity, and risk are qualitatively assessed for several effector failure modes. Design changes are recommended to improve aircraft reliability based on this analysis. Most notably, the control surfaces are split, providing independent actuation and dual-redundancy. The simulation models for control surface aerodynamic effects are updated to reflect the split surfaces using a first-principles geometric analysis. The failure modes and effects analysis is extended by using a high-fidelity nonlinear aircraft simulation. A trim state discovery is performed to identify the achievable steady, wings-level flight envelope of the healthy and damaged vehicle. Tolerance of elevator actuator failures is studied using familiar tools from linear systems analysis. This analysis reveals significant inherent performance limitations for candidate adaptive/reconfigurable control algorithms used for the vehicle. Moreover, it demonstrates how these tools can be applied in a design feedback loop to make safety-critical unmanned systems more reliable. Control surface impairments that do occur must be quickly and accurately detected. This dissertation also considers fault detection and identification for an unmanned aerial vehicle using model-based and model-free approaches and applies those algorithms to experimental faulted and unfaulted flight test data. Flight tests are conducted with actuator faults that affect the plant input and sensor faults that affect the vehicle state measurements. A model-based detection strategy is designed and uses robust linear filtering methods to reject exogenous disturbances, e.g. wind, while providing robustness to model variation. A data-driven algorithm is developed to operate exclusively on raw flight test data without physical model knowledge. The fault detection and identification performance of these complementary but different methods is compared. Together, enhanced reliability assessment and multi-pronged fault detection and identification techniques can help to bring about the next generation of reliable low-cost unmanned aircraft.
Time-Frequency Methods for Structural Health Monitoring †
Pyayt, Alexander L.; Kozionov, Alexey P.; Mokhov, Ilya I.; Lang, Bernhard; Meijer, Robert J.; Krzhizhanovskaya, Valeria V.; Sloot, Peter M. A.
2014-01-01
Detection of early warning signals for the imminent failure of large and complex engineered structures is a daunting challenge with many open research questions. In this paper we report on novel ways to perform Structural Health Monitoring (SHM) of flood protection systems (levees, earthen dikes and concrete dams) using sensor data. We present a robust data-driven anomaly detection method that combines time-frequency feature extraction, using wavelet analysis and phase shift, with one-sided classification techniques to identify the onset of failure anomalies in real-time sensor measurements. The methodology has been successfully tested at three operational levees. We detected a dam leakage in the retaining dam (Germany) and “strange” behaviour of sensors installed in a Boston levee (UK) and a Rhine levee (Germany). PMID:24625740
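A stripped-down version of this time-frequency pipeline can be sketched, assuming a single-level Haar transform as the wavelet stage and a percentile bound learned from healthy data as the one-sided classifier; the paper's feature set (multi-scale wavelets plus phase shift) and classifiers are considerably richer.

```python
import numpy as np

def haar_level1(x):
    """Single-level Haar transform: smooth and detail coefficients."""
    x = np.asarray(x, float)
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def detail_energy(window):
    _, d = haar_level1(window)
    return float(np.sum(d**2))

# One-sided classification: learn an upper bound on the feature from
# healthy sensor windows only, then flag anything above it.
rng = np.random.default_rng(3)
healthy_windows = 0.1 * rng.standard_normal((200, 64))
bound = np.percentile([detail_energy(w) for w in healthy_windows], 99)

leak = 0.1 * rng.standard_normal(64) + 0.5 * np.sin(np.arange(64))  # anomalous
print(detail_energy(leak) > bound)    # True -> early-warning alarm
```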
A Study of Energy Management Systems and its Failure Modes in Smart Grid Power Distribution
NASA Astrophysics Data System (ADS)
Musani, Aatif
The subject of this thesis is distribution level load management using a pricing signal in a smart grid infrastructure. The project relates to energy management in a specialized distribution system known as the Future Renewable Electric Energy Delivery and Management (FREEDM) system. Energy management through demand response is one of the key applications of smart grid. Demand response today is envisioned as a method in which the price could be communicated to the consumers and they may shift their loads from high price periods to the low price periods. The development and deployment of the FREEDM system necessitates controls of energy and power at the point of end use. In this thesis, the main objective is to develop the control model of the Energy Management System (EMS). The energy and power management in the FREEDM system is digitally controlled, therefore all signals containing system states are discrete. The EMS is modeled as a discrete closed loop transfer function in the z-domain. A breakdown of power and energy control devices such as EMS components may result in energy consumption error. This leads to one of the main focuses of the thesis, which is to identify and study component failures of the designed control system. Moreover, the H-infinity robust control method is applied to ensure effectiveness of the control architecture. A focus of the study is cyber security attacks, specifically bad data detection in price. Test cases are used to illustrate the performance of the EMS control design, the effect of failure modes and the application of the robust control technique. The EMS was represented by a linear z-domain model. The transfer function between the pricing signal and the demand response was designed and used as a test bed. EMS potential failure modes were identified and studied. Three bad data detection methodologies were implemented and a voting policy was used to declare bad data. The running mean and standard deviation analysis method proves to be the best method to detect bad data. An H-infinity robust control technique was applied for the first time to design a discrete EMS controller for the FREEDM system.
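Of the three bad-data detectors, the running mean/standard deviation check is the easiest to sketch; the window length, 3-sigma gate, and injected spike below are assumptions for illustration, and the thesis additionally votes across three such detectors before declaring bad data.

```python
import numpy as np

def bad_price_flags(prices, window=24, k=3.0):
    """Running mean/std check: a price sample is suspect when it falls
    more than k standard deviations from the trailing-window mean."""
    prices = np.asarray(prices, float)
    flags = np.zeros(len(prices), bool)
    for i in range(window, len(prices)):
        hist = prices[i - window:i]
        mu, sd = hist.mean(), hist.std(ddof=1)
        flags[i] = abs(prices[i] - mu) > k * max(sd, 1e-9)
    return flags

rng = np.random.default_rng(4)
price = 10 + 0.2 * rng.standard_normal(100)
price[70] = 25.0                               # injected bad data / attack
print(np.nonzero(bad_price_flags(price))[0])   # [70]
```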
Quality control of inkjet technology for DNA microarray fabrication.
Pierik, Anke; Dijksman, Frits; Raaijmakers, Adrie; Wismans, Ton; Stapert, Henk
2008-12-01
A robust manufacturing process is essential to make high-quality DNA microarrays, especially for use in diagnostic tests. We investigated different failure modes of the inkjet printing process used to manufacture low-density microarrays. A single nozzle inkjet spotter was provided with two optical imaging systems, monitoring in real time the flight path of every droplet. If a droplet emission failure is detected, the printing process is automatically stopped. We analyzed over 1.3 million droplets. This information was used to investigate the performance of the inkjet system and to obtain detailed insight into the frequency and causes of jetting failures. Of all the substrates investigated, 96.2% were produced without any system or jetting failures. In 1.6% of the substrates, droplet emission failed and was correctly identified. Appropriate measures could then be taken to get the process back on track. In 2.2%, the imaging systems failed while droplet emission occurred correctly. In 0.1% of the substrates, droplet emission failure that was not timely detected occurred. Thus, the overall yield of the microarray manufacturing process was 99.9%, which is highly acceptable for prototyping.
Daily rhythmicity of body temperature in the dog.
Refinetti, R; Piccione, G
2003-08-01
Research over the past 50 years has demonstrated the existence of circadian or daily rhythmicity in the body core temperature of a large number of mammalian species. However, previous studies have failed to identify daily rhythmicity of body temperature in dogs. We report here the successful recording of daily rhythms of rectal temperature in female Beagle dogs. The low robustness of the rhythms (41% of maximal robustness) and the small range of excursion (0.5 degrees C) are probably responsible for previous failures in detecting rhythmicity in dogs.
Robustness surfaces of complex networks
NASA Astrophysics Data System (ADS)
Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis
2014-09-01
Although the robustness of complex networks has been extensively studied in the last decade, there is still no unifying framework able to embrace all the proposed metrics. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution for the two aforementioned problems by defining the R*-value and introducing the concept of the robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). We first adjust the initial robustness of a network to 1. Second, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several percentages of failures and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared.
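A toy numpy rendering of the PCA step may help: rows are failure percentages, columns are robustness metrics, and the leading principal component supplies the metric weighting from which an R*-like profile is formed. The paper's exact normalization and surface construction differ; all values here are invented.

```python
import numpy as np

# Rows: increasing percentage-of-failures levels; columns: robustness
# metrics (e.g. largest-component fraction, efficiency, ...), averaged
# over failure realizations. Values are illustrative only.
M = np.array([[1.00, 1.00, 1.00],
              [0.90, 0.85, 0.95],
              [0.70, 0.60, 0.80],
              [0.40, 0.30, 0.55],
              [0.10, 0.05, 0.20]])

# PCA via SVD of the centered matrix: the leading principal component
# identifies the most informative combination of metrics.
X = M - M.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
weights = Vt[0]
if weights.sum() < 0:
    weights = -weights            # resolve the SVD sign ambiguity

r_star = M @ weights              # one scalar robustness value per level
print("metric weights:", np.round(weights, 3))
print("R*-like profile:", np.round(r_star, 3))
```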
Jiang, Ye; Hu, Qinglei; Ma, Guangfu
2010-01-01
In this paper, a robust adaptive fault-tolerant control approach to attitude tracking of flexible spacecraft is proposed for use in situations when there are reaction wheel/actuator failures, persistent bounded disturbances and unknown inertia parameter uncertainties. The controller is designed based on an adaptive backstepping sliding mode control scheme, and a sufficient condition under which this control law can render the system semi-globally input-to-state stable is also provided, such that the closed-loop system is robust with respect to any disturbance within a quantifiable restriction on the amplitude, as well as the set of initial conditions, if the control gains are designed appropriately. Moreover, the control law does not need a fault detection and isolation mechanism even if the failure time instants, patterns and values of the actuator failures are unknown to the designers, as motivated by a practical spacecraft control application. In addition to detailed derivations of the new controller design and a rigorous sketch of all the associated stability and attitude error convergence proofs, illustrative simulation results of an application to flexible spacecraft show that highly precise attitude control and vibration suppression are successfully achieved under various scenarios of control-effectiveness failures.
Robust-yet-fragile nature of interdependent networks
NASA Astrophysics Data System (ADS)
Tan, Fei; Xia, Yongxiang; Wei, Zhi
2015-05-01
Interdependent networks have been shown to be extremely vulnerable based on the percolation model. Parshani et al. [Europhys. Lett. 92, 68002 (2010), 10.1209/0295-5075/92/68002] further indicated that the more intersimilar networks are, the more robust they are to random failures. When traffic load is considered, how do the coupling patterns impact cascading failures in interdependent networks? This question has been largely unexplored until now. In this paper, we address this question by investigating the robustness of interdependent Erdös-Rényi random graphs and Barabási-Albert scale-free networks under either random failures or intentional attacks. It is found that interdependent Erdös-Rényi random graphs are robust yet fragile under either random failures or intentional attacks. Interdependent Barabási-Albert scale-free networks, however, are only robust yet fragile under random failures but fragile under intentional attacks. We further analyze the interdependent communication network and power grid and achieve similar results. These results advance our understanding of how interdependency shapes network robustness.
Extended Testability Analysis Tool
NASA Technical Reports Server (NTRS)
Melcher, Kevin; Maul, William A.; Fulton, Christopher
2012-01-01
The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.
Predicting effects of structural stress in a genome-reduced model bacterial metabolism
NASA Astrophysics Data System (ADS)
Güell, Oriol; Sagués, Francesc; Serrano, M. Ángeles
2012-08-01
Mycoplasma pneumoniae is a human pathogen recently proposed as a genome-reduced model for bacterial systems biology. Here, we study the response of its metabolic network to different forms of structural stress, including removal of individual and pairs of reactions and knockout of genes and clusters of co-expressed genes. Our results reveal a network architecture as robust as that of other model bacteria regarding multiple failures, although less robust against individual reaction inactivation. Interestingly, metabolite motifs associated with reactions can predict the propagation of inactivation cascades and damage amplification effects arising in double knockouts. We also detect a significant correlation between gene essentiality and the damage produced by single gene knockouts, and find that genes controlling high-damage reactions tend to be expressed independently of each other, a functional switch mechanism that, simultaneously, acts as a genetic firewall to protect metabolism. Prediction of failure propagation is crucial for metabolic engineering or disease treatment.
The evaluation of the OSGLR algorithm for restructurable controls
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.
1986-01-01
The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques, and the OSGLR algorithm in particular, is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.
Reliable Communication Models in Interdependent Critical Infrastructure Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Sangkeun; Chinthavali, Supriya; Shankar, Mallikarjun
Modern critical infrastructure networks are becoming increasingly interdependent, where the failures in one network may cascade to other dependent networks, causing severe widespread national-scale failures. A number of previous efforts have been made to analyze the resiliency and robustness of interdependent networks based on different models. However, communication networks, which play an important role in today's infrastructures to detect and handle failures, have attracted little attention in the interdependency studies, and no previous models have captured enough practical features of the critical infrastructure networks. In this paper, we study the interdependencies between communication networks and other kinds of critical infrastructure networks with an aim to identify vulnerable components and design resilient communication networks. We propose several interdependency models that systematically capture various features and dynamics of failures spreading in critical infrastructure networks. We also discuss several research challenges in building reliable communication solutions to handle failures in these models.
The impact of the topology on cascading failures in a power grid model
NASA Astrophysics Data System (ADS)
Koç, Yakup; Warnier, Martijn; Mieghem, Piet Van; Kooij, Robert E.; Brazier, Frances M. T.
2014-05-01
Cascading failures are one of the main reasons for large-scale blackouts in power transmission grids. Secure electrical power supply requires, together with careful operation, a robust design of the electrical power grid topology. Currently, the impact of the topology on grid robustness is mainly assessed by purely topological approaches that fail to capture the essence of electric power flow. This paper proposes a metric, the effective graph resistance, to relate the topology of a power grid to its robustness against cascading failures caused by deliberate attacks, while also taking fundamental characteristics of the electric power grid into account, such as power flow allocation according to Kirchhoff's laws. Experimental verification on synthetic power systems shows that the proposed metric reflects the grid robustness accurately. The proposed metric is used to optimize a grid topology for a higher level of robustness. To demonstrate its applicability, the metric is applied to the IEEE 118-bus power system to improve its robustness against cascading failures.
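Effective graph resistance has a closed form in the Laplacian spectrum, R_G = n * sum of 1/mu_i over the nonzero eigenvalues, which makes the metric cheap to compute; the toy comparison below shows an added line lowering the resistance of a four-node ring (an illustrative topology, not a power system model).

```python
import numpy as np

def effective_graph_resistance(A):
    """Sum of pairwise effective resistances, computed as
    R_G = n * sum(1 / nonzero Laplacian eigenvalues)."""
    A = np.asarray(A, float)
    n = len(A)
    L = np.diag(A.sum(axis=1)) - A
    eig = np.linalg.eigvalsh(L)
    nonzero = eig[eig > 1e-9]          # assumes a connected graph
    return n * np.sum(1.0 / nonzero)

# Toy 4-node ring vs. the ring plus one chord: the extra line lowers the
# effective resistance, i.e. increases robustness by this metric.
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]], float)
chord = ring.copy()
chord[0, 2] = chord[2, 0] = 1
print(effective_graph_resistance(ring), effective_graph_resistance(chord))
```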
NASA Astrophysics Data System (ADS)
Li, Yixiao; Zhang, Lin; Huang, Chaogeng; Shen, Bin
2016-06-01
Failures of real-world infrastructure networks due to natural disasters often originate in a certain region, but this feature has seldom been considered in theoretical models. In this article, we introduce a possible failure pattern of geographical networks, "regional failure", by which nodes and edges within a region malfunction. Based on a previous spatial network model (Louf et al., 2013), we study the robustness of geographical networks against regional failure, which is measured by the fraction of nodes that remain in the largest connected component, via simulations. A small-area failure results in a large reduction of their robustness measure. Furthermore, we investigate two pre-deployed mechanisms to enhance their robustness: one is to extend the cost-benefit growth mechanism of the original network model by adding more than one link in a growth step, and the other is to strengthen the interconnection of hubs in generated networks. We measure the robustness-enhancing effects of both mechanisms on the basis of their costs, i.e., the amount of excessive links and the induced geographical length. The latter mechanism is better than the former one if a normal level of costs is considered. When costs exceed a certain level, the former has an advantage. Because the costs of excessive links affect the investment decision of real-world infrastructure networks, it is practical to enhance their robustness by adding more links between hubs. These results might help design robust geographical networks economically.
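The regional-failure experiment is straightforward to reproduce in outline: delete all nodes within a disc and measure the surviving largest-component fraction. The sketch below uses a networkx random geometric graph purely as a stand-in for the paper's cost-benefit grown networks; the size, radii, and center choice are assumptions.

```python
import networkx as nx

def lcc_fraction_after_regional_failure(g, center, radius):
    """Remove every node within `radius` of `center` (positions stored in
    the node attribute 'pos'), then return the fraction of the original
    nodes that survive in the largest connected component."""
    cx, cy = g.nodes[center]["pos"]
    hit = [n for n, d in g.nodes(data=True)
           if (d["pos"][0] - cx) ** 2 + (d["pos"][1] - cy) ** 2 <= radius ** 2]
    h = g.copy()
    h.remove_nodes_from(hit)
    if h.number_of_nodes() == 0:
        return 0.0
    largest = max(nx.connected_components(h), key=len)
    return len(largest) / g.number_of_nodes()

g = nx.random_geometric_graph(300, 0.12, seed=5)   # stand-in spatial network
print(lcc_fraction_after_regional_failure(g, center=0, radius=0.2))
```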
Expanded envelope concepts for aircraft control-element failure detection and identification
NASA Technical Reports Server (NTRS)
Weiss, Jerold L.; Hsu, John Y.
1988-01-01
The purpose of this effort was to develop and demonstrate concepts for expanding the envelope of failure detection and isolation (FDI) algorithms for aircraft-path failures. An algorithm which uses analytic redundancy in the form of aerodynamic force and moment balance equations was used. Because aircraft-path FDI uses analytical models, there is a tradeoff between accuracy and the ability to detect and isolate failures. For single-flight-condition operation, design and analysis methods are developed to deal with this robustness problem. When the departure from the single flight condition is significant, algorithm adaptation is necessary. Adaptation requirements for the residual generation portion of the FDI algorithm are interpreted as the need for accurate, large-motion aero-models over a broad range of velocity and altitude conditions. For the decision-making part of the algorithm, adaptation may require modifications to the filtering operations, thresholds, and projection vectors that define the various hypothesis tests performed in the decision mechanism. Methods of obtaining and evaluating adequate residual generation and decision-making designs have been developed. The application of the residual generation ideas to a high-performance fighter is demonstrated by developing adaptive residuals for the AFTI/F-16 and simulating their behavior under a variety of maneuvers using the results of a NASA F-16 simulation.
Optimization of cascading failure on complex network based on NNIA
NASA Astrophysics Data System (ADS)
Zhu, Qian; Zhu, Zhiliang; Qi, Yi; Yu, Hai; Xu, Yanjie
2018-07-01
Recently, the robustness of networks under cascading failure has attracted extensive attention. Unlike previous studies, we concentrate on how to improve the robustness of networks from the perspective of intelligent optimization. We establish two multi-objective optimization models that jointly consider the operational cost of the edges in the network and the robustness of the network. The NNIA (Non-dominated Neighbor Immune Algorithm) is applied to solve the optimization models. We carried out simulations on the Barabási-Albert (BA) network and the Erdös-Rényi (ER) network. From the solutions, we identify the edges that facilitate the propagation of cascading failure and the edges that suppress it. Based on these findings, optimal protection measures can be taken to weaken the damage caused by cascading failures. We also account for the practical feasibility of the operational cost of the edges, so that a more practical choice can be made on the basis of operational cost. Our work will be helpful in designing highly robust networks or improving the robustness of existing networks in the future.
Robust fault-tolerant tracking control design for spacecraft under control input saturation.
Bustan, Danyal; Pariz, Naser; Sani, Seyyed Kamal Hosseini
2014-07-01
In this paper, a continuous globally stable tracking control algorithm is proposed for a spacecraft in the presence of unknown actuator failure, control input saturation, uncertainty in the inertia matrix, and external disturbances. The design method is based on variable structure control and has the following properties: (1) fast and accurate response in the presence of bounded disturbances; (2) robustness to partial loss of actuator effectiveness; (3) explicit consideration of control input saturation; and (4) robustness to uncertainty in the inertia matrix. In contrast to traditional fault-tolerant control methods, the proposed controller does not require knowledge of the actuator faults and is implemented without explicit fault detection and isolation processes. In the proposed controller a single parameter is adjusted dynamically in such a way that it is possible to prove that both attitude and angular velocity errors tend to zero asymptotically. The stability proof is based on a Lyapunov analysis and the properties of the singularity-free quaternion representation of spacecraft dynamics. Results of numerical simulations show that the proposed controller is successful in achieving high attitude performance in the presence of external disturbances, actuator failures, and control input saturation. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
A Bayesian Framework for Human Body Pose Tracking from Depth Image Sequences
Zhu, Youding; Fujimura, Kikuo
2010-01-01
This paper addresses the problem of accurate and robust tracking of 3D human body pose from depth image sequences. Recovering the large number of degrees of freedom in human body movements from a depth image sequence is challenging due to the need to resolve the depth ambiguity caused by self-occlusions and the difficulty of recovering from tracking failure. Human body poses can be estimated through model fitting using dense correspondences between depth data and an articulated human model (local optimization method). Although this usually achieves high accuracy due to the dense correspondences, it may fail to recover from tracking failure. Alternatively, human pose may be reconstructed by detecting and tracking human body anatomical landmarks (key-points) based on low-level depth image analysis. While this key-point based method is robust and recovers from tracking failure, its pose estimation accuracy depends solely on the image-based localization accuracy of the key-points. To address these limitations, we present a flexible Bayesian framework for integrating pose estimation results obtained by the key-point based and local optimization methods. Experimental results and a performance comparison are presented to demonstrate the effectiveness of the proposed approach. PMID:22399933
Research on cascading failure in multilayer network with different coupling preference
NASA Astrophysics Data System (ADS)
Zhang, Yong; Jin, Lei; Wang, Xiao Juan
This paper is aimed at constructing multilayer networks that are robust against cascading failure. Considering link protection strategies used in reality, we design a cascading failure model based on load distribution and extend it to multiple layers. We use the cascading failure model to deduce the scale of the largest connected component after cascading failure, from which we find that the performance of four kinds of load distribution strategies is associated with the load ratio of the current edge to its adjacent edges. Coupling preference is a typical characteristic of multilayer networks and is closely tied to network robustness. The coupling preference of multilayer networks takes two forms: the coupling preference within layers and the coupling preference between layers. To analyze the relationship between coupling preference and multilayer network robustness, we design a construction algorithm to generate multilayer networks with different coupling preferences. Simulation results show that the load distribution based on node betweenness performs best. When the coupling coefficient within layers is zero, the scale-free network is the most robust. In the random network, assortative coupling within layers is more robust than disassortative coupling. For the coupling preference between layers, assortative coupling between layers is more robust than disassortative coupling in both the scale-free network and the random network.
A scoring mechanism for the rank aggregation of network robustness
NASA Astrophysics Data System (ADS)
Yazdani, Alireza; Dueñas-Osorio, Leonardo; Li, Qilin
2013-10-01
To date, a number of metrics have been proposed to quantify the inherent robustness of a network topology against failures. However, each single metric usually offers only a limited view of network vulnerability to different types of random failures and targeted attacks. When applied to certain network configurations, different metrics rank network topology robustness in different, rather inconsistent orders, and no single metric fully characterizes network robustness against different modes of failure. To overcome such inconsistency, this work proposes a multi-metric approach as the basis for evaluating an aggregate ranking of network topology robustness. It relies on the simultaneous utilization of a minimal set of distinct robustness metrics that are standardized so as to allow direct comparison of vulnerability across networks with different sizes and configurations, leading to an initial scoring of inherent topology robustness. Subsequently, based on the initial scores, a rank aggregation method is employed to allocate an overall robustness ranking to each network topology. A discussion is presented in support of the proposed multi-metric approach and its application to more realistic assessment and ranking of network topology robustness.
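The general shape of such a multi-metric scoring step can be sketched as follows: standardize each robustness metric across networks, then aggregate the per-metric ranks. The Borda-style aggregation below is one simple choice, not necessarily the paper's exact method, and all numbers are illustrative:

```python
import numpy as np

# Rows: candidate networks; columns: distinct robustness metrics
# (e.g., algebraic connectivity, resistance-based score, ...). Values invented.
scores = np.array([[0.31, 1.8, 0.62],
                   [0.12, 2.9, 0.48],
                   [0.25, 2.1, 0.71]])

z = (scores - scores.mean(axis=0)) / scores.std(axis=0)  # standardize per metric
ranks = np.argsort(np.argsort(-z, axis=0), axis=0)       # 0 = best on each metric
borda = ranks.sum(axis=1)                                # aggregate: lower = more robust
print("aggregate ranking (best first):", np.argsort(borda))
```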
An evidential reasoning extension to quantitative model-based failure diagnosis
NASA Technical Reports Server (NTRS)
Gertler, Janos J.; Anderson, Kenneth C.
1992-01-01
The detection and diagnosis of failures in physical systems characterized by continuous-time operation are studied. A quantitative diagnostic methodology has been developed that utilizes the mathematical model of the physical system. On the basis of the latter, diagnostic models are derived each of which comprises a set of orthogonal parity equations. To improve the robustness of the algorithm, several models may be used in parallel, providing potentially incomplete and/or conflicting inferences. Dempster's rule of combination is used to integrate evidence from the different models. The basic probability measures are assigned utilizing quantitative information extracted from the mathematical model and from online computation performed therewith.
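Dempster's rule of combination, used here to fuse evidence from the parallel diagnostic models, has a compact generic implementation. A sketch with hypothetical failure hypotheses f1 and f2, not tied to the paper's parity equations:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (focal elements as
    frozensets) with Dempster's rule, normalizing out the conflict mass."""
    combined, conflict = {}, 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Evidence from two diagnostic models about failure hypotheses f1, f2:
m1 = {frozenset({"f1"}): 0.6, frozenset({"f1", "f2"}): 0.4}
m2 = {frozenset({"f2"}): 0.5, frozenset({"f1", "f2"}): 0.5}
print(dempster_combine(m1, m2))
```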
Overview of the Smart Network Element Architecture and Recent Innovations
NASA Technical Reports Server (NTRS)
Perotti, Jose M.; Mata, Carlos T.; Oostdyk, Rebecca L.
2008-01-01
In industrial environments, system operators rely on the availability and accuracy of sensors to monitor processes and detect failures of components and/or processes. The sensors must be networked in such a way that their data are reported to a central human interface, where operators are tasked with making real-time decisions based on the state of the sensors and the components being monitored. Incorporating health management functions at this central location aids the operator by automating the decision-making process to suggest, and sometimes perform, the action required by current operating conditions. Integrated Systems Health Management (ISHM) aims to incorporate data from many sources, including real-time and historical data and user input, and to extract information and knowledge from those data to diagnose failures and predict future failures of the system. Distributing health management processing to lower levels of the architecture reduces the bandwidth required for ISHM, enhances data fusion, makes systems and processes more robust, and improves resolution for the detection and isolation of failures in a system, subsystem, component, or process. The Smart Network Element (SNE) has been developed at NASA Kennedy Space Center to perform intelligent functions at the sensor and actuator level in support of ISHM.
Intelligent failure-tolerant control
NASA Technical Reports Server (NTRS)
Stengel, Robert F.
1991-01-01
An overview of failure-tolerant control is presented, beginning with robust control, progressing through parallel and analytical redundancy, and ending with rule-based systems and artificial neural networks. By design or implementation, failure-tolerant control systems are 'intelligent' systems. All failure-tolerant systems require some degree of robustness to protect against catastrophic failure; failure tolerance often can be improved by adaptivity in decision-making and control, as well as by redundancy in measurement and actuation. Reliability, maintainability, and survivability can be enhanced by failure tolerance, although each objective poses different goals for control system design. Artificial intelligence concepts are helpful for integrating and codifying failure-tolerant control systems, not as alternatives but as adjuncts to conventional design methods.
Immunity-based detection, identification, and evaluation of aircraft sub-system failures
NASA Astrophysics Data System (ADS)
Moncayo, Hever Y.
This thesis describes the design, development, and flight-simulation testing of an integrated Artificial Immune System (AIS) for detection, identification, and evaluation of a wide variety of sensor, actuator, propulsion, and structural failures/damages, including the prediction of the achievable states and other limitations on performance and handling qualities. The AIS scheme achieves a high detection rate and a low number of false alarms for all the failure categories considered. Data collected using a motion-based flight simulator are used to define the self for an extended sub-region of the flight envelope. The NASA IFCS F-15 research aircraft model is used; it represents a supersonic fighter and includes model-following adaptive control laws based on non-linear dynamic inversion and artificial-neural-network augmentation. The flight simulation tests are designed to analyze and demonstrate the performance of the immunity-based aircraft failure detection, identification and evaluation (FDIE) scheme. A general robustness analysis is also presented by determining the achievable limits for a desired performance in the presence of atmospheric perturbations. For the purpose of this work, the integrated AIS scheme is implemented in three main components. The first component performs the detection when one of the considered failures is present in the system. The second component consists of the identification of the failure category and the classification according to the failed element. During the third phase a general evaluation of the failure is performed, with the estimation of the magnitude/severity of the failure and the prediction of its effect in reducing the flight envelope of the aircraft system. Solutions and alternatives to specific design issues of the AIS scheme, such as data clustering and empty space optimization, data fusion and duplication removal, definition of features, dimensionality reduction, and selection of cluster/detector shape, are also analyzed in this thesis. These were shown to have an important effect on detection performance and are a critical aspect when designing the configuration of the AIS. The results presented in this thesis show that the AIS paradigm directly addresses the complexity and multi-dimensionality associated with a damaged aircraft's dynamic response and provides the tools necessary for a comprehensive/integrated solution to the FDIE problem. Excellent detection, identification, and evaluation performance has been recorded for all types of failures considered. The implementation of the proposed AIS-based scheme can potentially have a significant impact on the safety of aircraft operation. The output information obtained from the scheme will be useful for increasing pilot situational awareness and determining automated compensation.
Tractable Quantification of Metastability for Robust Bipedal Locomotion
2015-06-01
environmental conditions, including rough terrain. The intuitive and meaningful robustness quantification adopted in this thesis begins by modeling the system as a Markov chain; failure rates can then be easily quantified by calculating the expected number of steps before failure. [Remainder of the abstract is truncated in the source; the surviving fragments are table-of-contents entries on performance under sensor noise, performance evaluation on a dense mesh, and stability.]
Robust pedestrian detection and tracking from a moving vehicle
NASA Astrophysics Data System (ADS)
Tuong, Nguyen Xuan; Müller, Thomas; Knoll, Alois
2011-01-01
In this paper, we address the problem of multi-person detection, tracking and distance estimation in a complex scenario using multiple cameras. Specifically, we are interested in a vision system for supporting the driver in avoiding unwanted collisions with pedestrians. We propose an approach using Histograms of Oriented Gradients (HOG) to detect pedestrians in static images and a particle filter as a robust tracking technique to follow targets from frame to frame. Because a full depth map requires expensive computation, we extract depth information of targets using the Direct Linear Transformation (DLT) to reconstruct the 3D coordinates of corresponding points found by running Speeded Up Robust Features (SURF) on the two input images. Using the particle filter, the proposed tracker can efficiently handle target occlusions in a simple background environment. However, to achieve reliable performance in complex scenarios with frequent target occlusions and complex cluttered backgrounds, results from the detection module are integrated to provide feedback and recover the tracker from tracking failures due to the complexity of the environment and the variability of the target appearance model. The proposed approach is evaluated on different data sets, both in a simple background scenario and in a cluttered background environment. The results show that, by integrating detector and tracker, reliable and stable performance is possible even if occlusion occurs frequently in a highly complex environment. A vision-based collision avoidance system for an intelligent car can therefore be achieved.
Fuzzy-information-based robustness of interconnected networks against attacks and failures
NASA Astrophysics Data System (ADS)
Zhu, Qian; Zhu, Zhiliang; Wang, Yifan; Yu, Hai
2016-09-01
Cascading failure is fatal in applications, and its investigation is essential; it has therefore become a focal topic in the field of complex networks in the last decade. In this paper, a cascading failure model is established for interconnected networks and the associated data-packet transport problem is discussed. A distinguishing feature of the new model is its utilization of fuzzy information in resisting uncertain failures and malicious attacks. We numerically find that the giant component of the network after failures increases with the tolerance parameter for any coupling preference and attacking ambiguity. Moreover, considering the effect of the coupling probability on the robustness of the networks, we find that the robustness of the assortative and random couplings of the network model increases with the coupling probability; for disassortative coupling, however, there exists a critical phenomenon in the coupling probability. In addition, a critical value of the attack-information accuracy, at which the network robustness changes, is observed. Finally, as a practical example, the interconnected AS-level Internet of South Korea and Japan is analyzed. The actual data validate the theoretical model and analytic results. This paper thus provides some guidelines for preventing cascading failures in the design of architecture and optimization of real-world interconnected networks.
Denimal, Emmanuel; Marin, Ambroise; Guyot, Stéphane; Journaux, Ludovic; Molin, Paul
2015-08-01
In biology, hemocytometers such as Malassez slides are widely used and are effective tools for counting cells manually. In a previous work, a robust algorithm was developed for grid extraction in Malassez slide images. This algorithm was evaluated on a set of 135 images and grids were accurately detected in most cases, but there remained failures for the most difficult images. In this work, we present an optimization of this algorithm that allows for 100% grid detection and a 25% improvement in grid positioning accuracy. These improvements make the algorithm fully reliable for grid detection. This optimization also allows complete erasing of the grid without altering the cells, which eases their segmentation.
Control optimization, stabilization and computer algorithms for aircraft applications
NASA Technical Reports Server (NTRS)
1975-01-01
Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.
Robustness of assembly supply chain networks by considering risk propagation and cascading failure
NASA Astrophysics Data System (ADS)
Tang, Liang; Jing, Ke; He, Jie; Stanley, H. Eugene
2016-10-01
An assembly supply chain network (ASCN) is composed of manufacturers located in different geographical regions. To analyze the robustness of an ASCN when it suffers catastrophic disruption events, we construct a cascading failure model of risk propagation. In our model, different disruption scenarios s are considered and the probability equation over all disruption scenarios is developed. Using production capability loss as the robustness index (RI) of an ASCN, we conduct a numerical simulation to assess its robustness. Through simulation, we compare the network robustness at different values of linking intensity and node threshold and find that weak linking intensity or a high node threshold increases the robustness of the ASCN. We also compare network robustness levels under different disruption scenarios.
Model-Based Method for Sensor Validation
NASA Technical Reports Server (NTRS)
Vatan, Farrokh
2012-01-01
Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Such methods can only predict the most probably faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work is instead model-based: it provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). It is also more suitable for systems in which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concept of analytical redundant relations (ARRs).
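As a toy illustration of logical (rather than probabilistic) inference from redundancy relations, consider directly redundant sensors that should agree pairwise: a sensor that violates every parity relation is implicated without any prior failure probabilities. This is a drastic simplification of the method described, with hypothetical sensor names and tolerance:

```python
from itertools import combinations

def consistent_sensors(readings, tol=0.5):
    """Return the sensors that appear in at least one mutually consistent
    pair; sensors outside every consistent pair are logically implicated."""
    ok = set()
    for (i, xi), (j, xj) in combinations(readings.items(), 2):
        if abs(xi - xj) <= tol:
            ok |= {i, j}
    return ok

readings = {"s1": 10.1, "s2": 10.0, "s3": 13.7}  # s3 violates every parity relation
print("suspect sensors:", set(readings) - consistent_sensors(readings))  # -> {'s3'}
```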
NASA Astrophysics Data System (ADS)
Marhadi, Kun Saptohartyadi
Structural optimization for damage tolerance under various unforeseen damage scenarios is computationally challenging. It couples non-linear progressive failure analysis with sampling-based stochastic analysis of random damage. The goal of this research was to understand the relationship between the alternate load paths available in a structure and its damage tolerance, and to use this information to develop computationally efficient methods for designing damage-tolerant structures. Progressive failure of a redundant truss structure subjected to small random variability was investigated to identify features that correlate with robustness and predictability of the structure's progressive failure. The identified features were used to develop numerical surrogate measures that permit computationally efficient deterministic optimization to achieve robustness and predictability of progressive failure. Analysis of damage tolerance on designs with robust progressive failure indicated that robustness and predictability of progressive failure do not guarantee damage tolerance. Damage tolerance requires a structure to redistribute its load to alternate load paths. In order to investigate the load distribution characteristics that lead to damage tolerance in structures, designs with varying degrees of damage tolerance were generated using brute-force stochastic optimization. A method based on principal component analysis was used to describe load distributions (alternate load paths) in the structures. Results indicate that a structure that can develop alternate paths is not necessarily damage tolerant: the alternate load paths must have a required minimum load capability. Robustness analysis of damage-tolerant optimum designs indicates that designs are tailored to the specified damage; a design optimized under one damage specification can be sensitive to other damages not considered. The effectiveness of existing load path definitions and characterizations was investigated for continuum structures. A load path definition using a relative compliance change measure (U* field) was demonstrated to be the most useful measure of load path, providing quantitative information on load path trajectories and qualitative information on the effectiveness of the load path. The use of the U* description of load paths in optimizing structures for effective load paths was investigated.
NASA Technical Reports Server (NTRS)
Burken, John J.
2005-01-01
This viewgraph presentation reviews the use of a robust servo Linear Quadratic Regulator (LQR) and a Radial Basis Function (RBF) neural network in reconfigurable flight control designs that adapt to an aircraft part failure. The method uses a robust LQR servomechanism design with model-reference adaptive control and RBF neural networks. During the failure the LQR servomechanism behaved well, and using the neural networks improved the tracking.
Robustness of spatial micronetworks
NASA Astrophysics Data System (ADS)
McAndrew, Thomas C.; Danforth, Christopher M.; Bagrow, James P.
2015-04-01
Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.
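The percolation model described, where each link fails with probability proportional to its spatial length, is easy to reproduce in a few lines; normalizing by the longest link, as below, is our assumption rather than necessarily the authors' choice:

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(3)
G = nx.random_geometric_graph(200, 0.15, seed=3)  # stand-in spatial network
pos = nx.get_node_attributes(G, "pos")

lengths = {(u, v): float(np.hypot(pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]))
           for u, v in G.edges()}
lmax = max(lengths.values())

H = G.copy()
for e, l in lengths.items():           # each link fails with probability ~ its length
    if rng.random() < l / lmax:
        H.remove_edge(*e)

giant = max(nx.connected_components(H), key=len)
print("surviving fraction:", len(giant) / G.number_of_nodes())
```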
NASA Astrophysics Data System (ADS)
Kim, S.; Adams, D. E.; Sohn, H.
2013-01-01
As the wind power industry has grown rapidly in the recent decade, maintenance costs have become a significant concern. Due to the high repair costs for wind turbine blades, it is especially important to detect initial blade defects before they become structural failures leading to other potential failures in the tower or nacelle. This research presents a method of detecting cracks on wind turbine blades using the Vibro-Acoustic Modulation technique. Using Vibro-Acoustic Modulation, a crack detection test was conducted on a WHISPER 100 wind turbine in its operating environment. Wind turbines provide ideal conditions in which to utilize Vibro-Acoustic Modulation because they experience large structural vibrations. The structural vibration of the wind turbine blade was used as the pumping signal, and a PZT was used to generate the probing signal. Because the non-linear portion of the dynamic response is more sensitive to the presence of a crack than to environmental conditions or operating loads, the Vibro-Acoustic Modulation technique can provide a robust structural health monitoring approach for wind turbines. Structural health monitoring can significantly reduce maintenance costs when paired with predictive modeling to minimize unscheduled maintenance.
NASA Astrophysics Data System (ADS)
Wang, Xiao Juan; Guo, Shi Ze; Jin, Lei; Chen, Mo
We study the structural robustness of scale-free networks against cascading failure induced by overload. In this paper, a failure mechanism based on the betweenness-degree ratio distribution is proposed. In the cascading failure model we build, the initial load of an edge is proportional to the node betweenness of its ends. During random edge deletion we find a phase transition, and based on it we divide the process of cascading failure into two parts, the robust area and the vulnerable area, and define corresponding indicators to measure the performance of the networks in both areas. From derivation, we find that the vulnerability of the network is determined by the distribution of the betweenness-degree ratio. We then use the connection between the node ability coefficient and the distribution of the betweenness-degree ratio to explain the cascading failure mechanism. In simulations, we verify the correctness of our derivations. By changing connecting preferences, we find that scale-free networks with a slight assortativity perform better in both the robust area and the vulnerable area.
Robustness and Vulnerability of Networks with Dynamical Dependency Groups.
Bai, Ya-Nan; Huang, Ning; Wang, Lei; Wu, Zhi-Xi
2016-11-28
The dependency property and self-recovery of failure nodes both have great effects on the robustness of networks during the cascading process. Existing investigations focused mainly on the failure mechanism of static dependency groups without considering the time-dependency of interdependent nodes and the recovery mechanism in reality. In this study, we present an evolving network model consisting of failure mechanisms and a recovery mechanism to explore network robustness, where the dependency relations among nodes vary over time. Based on generating function techniques, we provide an analytical framework for random networks with arbitrary degree distribution. In particular, we theoretically find that an abrupt percolation transition exists corresponding to the dynamical dependency groups for a wide range of topologies after initial random removal. Moreover, when the abrupt transition point is above the failure threshold of dependency groups, the evolving network with the larger dependency groups is more vulnerable; when below it, the larger dependency groups make the network more robust. Numerical simulations employing the Erdős-Rényi network and Barabási-Albert scale free network are performed to validate our theoretical results.
Early and simple detection of diastolic dysfunction during weaning from mechanical ventilation
Voga, Gorazd
2012-07-06
Weaning from mechanical ventilation imposes additional work on the cardiovascular system and can provoke or unmask left ventricular diastolic dysfunction with consecutive pulmonary edema, or systolic dysfunction with inadequate increase of cardiac output and unsuccessful weaning. Echocardiography, which is increasingly used for hemodynamic assessment of critically ill patients, allows differentiation between systolic and diastolic failure. For various reasons, transthoracic echocardiographic assessment was limited to patients with good echo visibility and to those with sinus rhythm without excessive tachycardia. In these patients, often selected after unsuccessful weaning, echocardiographic findings were predictive of weaning failure of cardiac origin. In some studies, patients with various degrees of systolic dysfunction were included, making evaluation of the contribution of diastolic dysfunction to weaning failure even more difficult. The recent study by Moschietto and coworkers included unselected patients and used very simple diastolic variables for assessment of diastolic function. They also included patients with atrial fibrillation and repeated the echocardiographic examination only 10 minutes after starting a spontaneous breathing trial. The main finding was that weaning failure was associated not with systolic dysfunction but with diastolic dysfunction. By measuring simple and robust parameters for the detection of diastolic dysfunction, the study was able to predict weaning failure in patients with sinus rhythm or atrial fibrillation as early as 10 minutes after beginning a spontaneous breathing trial. Further studies are necessary to determine whether appropriate treatment tailored according to the echocardiographic findings will result in successful weaning. PMID:22770365
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoisak, J; Manger, R; Dragojevic, I
Purpose: To perform a failure mode and effects analysis (FMEA) of the process for treating superficial skin cancers with the Xoft Axxent electronic brachytherapy (eBx) system, given the recent introduction of expanded quality control (QC) initiatives at our institution. Methods: A process map was developed listing all steps in superficial treatments with Xoft eBx, from the initial patient consult to the completion of the treatment course. The process map guided the FMEA to identify the failure modes for each step in the treatment workflow and assign Risk Priority Numbers (RPN), calculated as the product of the failure mode's probability of occurrence (O), severity (S), and lack of detectability (D). The FMEA was done with and without the inclusion of recent QC initiatives such as increased staffing, physics oversight, and standardized source calibration, treatment planning, and documentation. The failure modes with the highest RPNs were identified and contrasted before and after introduction of the QC initiatives. Results: Based on the FMEA, the failure modes with the highest RPN were related to source calibration, treatment planning, and patient setup/treatment delivery (Fig. 1). The introduction of additional physics oversight, standardized planning, and safety initiatives such as checklists and time-outs reduced the RPNs of these failure modes. High-risk failure modes that could be mitigated with improved hardware and software interlocks were identified. Conclusion: The FMEA identified the steps in the treatment process presenting the highest risk. The introduction of enhanced QC initiatives mitigated the risk of some of these failure modes by decreasing their probability of occurrence and increasing their detectability. This analysis demonstrates the importance of well-designed QC policies, procedures and oversight in a Xoft eBx programme for treatment of superficial skin cancers. Unresolved high-risk failure modes highlight the need for non-procedural quality initiatives such as improved planning software and more robust hardware interlock systems.
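For reference, the RPN arithmetic underlying such an FMEA is simply the product of the three scores. A minimal sketch with illustrative values (not the study's actual scores):

```python
# Each failure mode scored 1-10 for occurrence (O), severity (S), and
# lack of detectability (D); RPN = O * S * D. Entries are invented.
modes = [
    ("source calibration error", 4, 9, 6),
    ("treatment planning error", 3, 8, 5),
    ("patient setup error",      5, 7, 4),
]
for name, O, S, D in sorted(modes, key=lambda m: -(m[1] * m[2] * m[3])):
    print(f"RPN={O * S * D:4d}  {name}")   # highest RPN = top intervention target
```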
Prabhakaran, Shyam; Khorzad, Rebeca; Brown, Alexandra; Nannicelli, Anna P; Khare, Rahul; Holl, Jane L
2015-10-01
Although best practices have been developed for achieving door-to-needle (DTN) times ≤60 minutes for stroke thrombolysis, critical DTN process failures persist. We sought to compare these failures in the Emergency Departments of an academic medical center and a community hospital. Failure modes, effects, and criticality analysis was used to identify system and process failures. Multidisciplinary teams involved in DTN care participated in moderated sessions at each site. As a result, DTN process maps were created, and potential failures and their causes, frequency, severity, and existing safeguards were identified. For each failure, a risk priority number and criticality score were calculated; failures were then ranked, with the highest scores representing the most critical failures and targets for intervention. We detected a total of 70 failures in 50 process steps at the community hospital and 76 failures in 42 process steps at the academic medical center. At the community hospital, critical failures included (1) delay in registration because of Emergency Department overcrowding, (2) incorrect triage diagnosis among walk-in patients, and (3) delay in obtaining consent for thrombolytic treatment. At the academic medical center, critical failures included (1) incorrect triage diagnosis among walk-in patients, (2) delay in stroke team activation, and (3) delay in obtaining computed tomographic imaging. Although the identification of common critical failures suggests opportunities for a generalizable process redesign, differences in the criticality and nature of failures must be addressed at the individual hospital level to develop robust and sustainable solutions to reduce DTN time. © 2015 American Heart Association, Inc.
Effects of traffic generation patterns on the robustness of complex networks
NASA Astrophysics Data System (ADS)
Wu, Jiajing; Zeng, Junwen; Chen, Zhenhao; Tse, Chi K.; Chen, Bokui
2018-02-01
Cascading failures in communication networks with heterogeneous node functions are studied in this paper. In such networks, the traffic dynamics are highly dependent on the traffic generation patterns which are in turn determined by the locations of the hosts. The data-packet traffic model is applied to Barabási-Albert scale-free networks to study the cascading failures in such networks and to explore the effects of traffic generation patterns on network robustness. It is found that placing the hosts at high-degree nodes in a network can make the network more robust against both intentional attacks and random failures. It is also shown that the traffic generation pattern plays an important role in network design.
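A sketch of the host-placement idea under the stated finding that hubs make better host locations. The betweenness-subset computation is used here only as a convenient proxy for packet load; that proxy, and the function names, are our assumptions rather than the paper's exact traffic model:

```python
import networkx as nx

def pick_hosts(G, n_hosts, pattern="high-degree"):
    """Choose the traffic-generating host nodes; placing hosts at hubs
    was found above to improve robustness against cascades."""
    nodes = sorted(G.degree, key=lambda kv: kv[1],
                   reverse=(pattern == "high-degree"))
    return [v for v, _ in nodes[:n_hosts]]

G = nx.barabasi_albert_graph(500, 3, seed=4)
hosts = pick_hosts(G, 50)
# Load proxy: betweenness restricted to host-to-host traffic
load = nx.betweenness_centrality_subset(G, sources=hosts, targets=hosts)
print("peak load proxy:", max(load.values()))
```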
NASA Astrophysics Data System (ADS)
Belmonte, D.; Vedova, M. D. L. Dalla; Ferro, C.; Maggiore, P.
2017-06-01
Prognostic algorithms able to identify precursors of incipient failures of primary flight command electromechanical actuators (EMA) are beneficial for anticipating the incoming failure: an early and correct interpretation of the failure degradation pattern can trigger an early alert to the maintenance crew, who can properly schedule the replacement of the servomechanism. An innovative prognostic model-based approach, able to recognize progressive EMA degradations before anomalous behaviors become critical, is proposed: Fault Detection and Identification (FDI) of the considered incipient failures is performed by analyzing proper system operational parameters, able to reveal the corresponding degradation path, by means of a numerical algorithm based on spectral analysis techniques. Subsequently, these operational parameters are correlated with the actual EMA health condition by means of failure maps created by a reference monitoring model-based algorithm. In this work, the proposed method has been tested on an EMA affected by combined progressive failures: in particular, a partial stator single-phase turn-to-turn short circuit and rotor static eccentricity are considered. In order to evaluate the prognostic method, a numerical test bench has been conceived. Results show that the method exhibits adequate robustness and a high degree of confidence in its ability to identify an eventual malfunction early, minimizing the risk of false alarms or unannounced failures.
Norrie, Gillian; Farquharson, Roy G; Greaves, Mike
2009-01-01
The significance of heritable thrombophilia in pregnancy failure is controversial. We surveyed all UK Early Pregnancy Units and 70% responded. The majority test routinely for heritable thrombophilias; 80%, 76% and 88% undertook at least one screening test in late miscarriage, recurrent miscarriage and placental abruption, respectively. The range of thrombophilias sought is inconsistent: testing for proteins C and S deficiency and F5 R506Q (factor V Leiden) is most prevalent. Detection of heritable thrombophilia frequently leads to administration of antithrombotics in subsequent pregnancies. Thus, thrombophilia testing and use of antithrombotics are widespread in the UK despite controversies regarding the role of heritable thrombophilia in the pathogenesis of pregnancy complications, and the lack of robust evidence for the efficacy of antithrombotic therapy.
Measure of robustness for complex networks
NASA Astrophysics Data System (ADS)
Youssef, Mina Nabil
Critical infrastructures are repeatedly attacked by external triggers causing tremendous amounts of damage. Any infrastructure can be studied using the powerful theory of complex networks. A complex network is composed of an extremely large number of different elements that exchange commodities, providing significant services. The main functions of complex networks can be damaged by different types of attacks and failures that degrade network performance. These attacks and failures are considered disturbing dynamics, such as the spread of viruses in computer networks, the spread of epidemics in social networks, and cascading failures in power grids. Depending on the network structure and the attack strength, every network suffers damage and performance degradation differently. Hence, quantifying the robustness of complex networks becomes an essential task. In this dissertation, new metrics are introduced to measure the robustness of technological and social networks with respect to the spread of epidemics, and the robustness of power grids with respect to cascading failures. First, we introduce a new metric called the viral conductance (VC_SIS) to assess the robustness of networks with respect to the spread of epidemics that are modeled through the susceptible/infected/susceptible (SIS) approach. In contrast to assessing the robustness of networks based on a classical metric, the epidemic threshold, the new metric integrates the fraction of infected nodes at steady state over all possible effective infection strengths. Through examples, VC_SIS provides more insight into the robustness of networks than the epidemic threshold. In addition, both the paradoxical robustness of Barabási-Albert preferential attachment networks and the effect of topology on the steady-state infection are studied, to show the importance of quantifying the robustness of networks. Second, a new metric, VC_SIR, is introduced to assess the robustness of networks with respect to the spread of susceptible/infected/recovered (SIR) epidemics. To compute VC_SIR, we propose a novel individual-based approach to model the spread of SIR epidemics in networks, which captures the infection size for a given effective infection rate. Thus, VC_SIR quantitatively integrates the infection strength with the corresponding infection size. To optimize the VC_SIR metric, a new mitigation strategy is proposed, based on a temporary reduction of contacts in social networks. The social contact network is modeled as a weighted graph that describes the frequency of contacts among individuals. We thus consider the spread of an epidemic as a dynamical system, with the total number of infection cases as the state of the system and the weight reduction in the social network as the control variable used to slow or reduce the spread of epidemics. Using optimal control theory, the obtained solution represents an optimal adaptive weighted network defined over a finite time interval. Moreover, given the high complexity of the optimization problem, we propose two heuristics to find near-optimal solutions by reducing the contacts among individuals in a decentralized way. Finally, the cascading failures that can take place in power grids, and have recently caused several blackouts, are studied. We propose a new metric to assess the robustness of the power grid with respect to cascading failures.
The power grid topology is modeled as a network, which consists of nodes and links representing power substations and transmission lines, respectively. We also propose an optimal islanding strategy to protect the power grid when a cascading failure event takes place in the grid. The robustness metrics are numerically evaluated using real and synthetic networks to quantify their robustness with respect to disturbing dynamics. We show that the proposed metrics outperform the classical metrics in quantifying the robustness of networks and the efficiency of the mitigation strategies. In summary, our work advances the network science field in assessing the robustness of complex networks with respect to various disturbing dynamics.
Framework for a space shuttle main engine health monitoring system
NASA Technical Reports Server (NTRS)
Hawman, Michael W.; Galinaitis, William S.; Tulpule, Sharayu; Mattedi, Anita K.; Kamenetz, Jeffrey
1990-01-01
A framework developed for a health management system (HMS) which is directed at improving the safety of operation of the Space Shuttle Main Engine (SSME) is summarized. An emphasis was placed on near term technology through requirements to use existing SSME instrumentation and to demonstrate the HMS during SSME ground tests within five years. The HMS framework was developed through an analysis of SSME failure modes, fault detection algorithms, sensor technologies, and hardware architectures. A key feature of the HMS framework design is that a clear path from the ground test system to a flight HMS was maintained. Fault detection techniques based on time series, nonlinear regression, and clustering algorithms were developed and demonstrated on data from SSME ground test failures. The fault detection algorithms exhibited 100 percent detection of faults, had an extremely low false alarm rate, and were robust to sensor loss. These algorithms were incorporated into a hierarchical decision making strategy for overall assessment of SSME health. A preliminary design for a hardware architecture capable of supporting real time operation of the HMS functions was developed. Utilizing modular, commercial off-the-shelf components produced a reliable low cost design with the flexibility to incorporate advances in algorithm and sensor technology as they become available.
False Positive and False Negative Effects on Network Attacks
NASA Astrophysics Data System (ADS)
Shang, Yilun
2018-01-01
Robustness against attacks serves as evidence for complex network structures and the failure mechanisms that lie behind them. Most often, due to limited detection capability or good disguises, attacks on networks are subject to false positives and false negatives, meaning that functional nodes may be falsely regarded as compromised by the attacker, and vice versa. In this work, we initiate a study of false positive/negative effects on network robustness against three fundamental types of attack strategies: random attacks (RA), localized attacks (LA), and targeted attacks (TA). Developing a general mathematical framework based upon the percolation model, we investigate, analytically and by numerical simulation, attack robustness with false positive/negative rates (FPR/FNR) on three benchmark models: Erdős-Rényi (ER) networks, random regular (RR) networks, and scale-free (SF) networks. We show that ER networks are equivalently robust against RA and LA only when the FPR equals zero or the initial network is intact. We find several interesting crossovers in RR and SF networks when the FPR is taken into consideration. By defining the cost of attack, we observe diminishing marginal attack efficiency for RA, LA, and TA. Our finding highlights the potential risk of underestimating or ignoring the FPR in understanding attack robustness. The results may provide insights into ways of enhancing the robustness of network architectures and improving the level of protection of critical infrastructures.
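One plausible reading of a random attack filtered through an imperfect detector can be simulated directly: attacked nodes escape removal with probability FNR, while intact nodes are falsely flagged and removed with probability FPR. A sketch under those assumptions (the paper's framework is analytical, not simulation-based):

```python
import networkx as nx
import numpy as np

def attack_with_errors(G, frac, fpr, fnr, seed=0):
    """Random attack on `frac` of nodes, observed through an imperfect
    detector; returns the surviving fraction in the giant component."""
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes())
    targeted = set(rng.choice(nodes, size=int(frac * len(nodes)), replace=False))
    removed = [v for v in nodes
               if (v in targeted and rng.random() > fnr)       # true positives
               or (v not in targeted and rng.random() < fpr)]  # false positives
    H = G.copy()
    H.remove_nodes_from(removed)
    if H.number_of_nodes() == 0:
        return 0.0
    return len(max(nx.connected_components(H), key=len)) / len(nodes)

G = nx.erdos_renyi_graph(1000, 0.01, seed=5)
print(attack_with_errors(G, frac=0.3, fpr=0.05, fnr=0.1))
```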
Cascading failures in interdependent systems under a flow redistribution model.
Zhang, Yingrui; Arenas, Alex; Yağan, Osman
2018-02-01
Robustness and cascading failures in interdependent systems has been an active research field in the past decade. However, most existing works use percolation-based models where only the largest component of each network remains functional throughout the cascade. Although suitable for communication networks, this assumption fails to capture the dependencies in systems carrying a flow (e.g., power systems, road transportation networks), where cascading failures are often triggered by redistribution of flows leading to overloading of lines. Here, we consider a model consisting of systems A and B with initial line loads and capacities given by {L_{A,i},C_{A,i}}_{i=1}^{n} and {L_{B,i},C_{B,i}}_{i=1}^{n}, respectively. When a line fails in system A, a fraction of its load is redistributed to alive lines in B, while remaining (1-a) fraction is redistributed equally among all functional lines in A; a line failure in B is treated similarly with b giving the fraction to be redistributed to A. We give a thorough analysis of cascading failures of this model initiated by a random attack targeting p_{1} fraction of lines in A and p_{2} fraction in B. We show that (i) the model captures the real-world phenomenon of unexpected large scale cascades and exhibits interesting transition behavior: the final collapse is always first order, but it can be preceded by a sequence of first- and second-order transitions; (ii) network robustness tightly depends on the coupling coefficients a and b, and robustness is maximized at non-trivial a,b values in general; (iii) unlike most existing models, interdependence has a multifaceted impact on system robustness in that interdependency can lead to an improved robustness for each individual network.
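A compact simulation of this two-system redistribution model, under our own implementation choices for the update order within a cascade round (the paper's analysis is exact rather than simulation-based):

```python
import numpy as np

def cascade(LA, CA, LB, CB, a, b, p1, p2, seed=0):
    """Flow-redistribution cascade between coupled systems A and B: a
    failed line in A sheds fraction `a` of its load equally over the
    alive lines of B and (1 - a) over the alive lines of A; a failure
    in B behaves symmetrically with fraction `b`."""
    rng = np.random.default_rng(seed)
    L = {"A": LA.astype(float).copy(), "B": LB.astype(float).copy()}
    C = {"A": CA, "B": CB}
    frac_out = {"A": a, "B": b}
    other = {"A": "B", "B": "A"}
    alive = {s: np.ones(len(L[s]), bool) for s in "AB"}
    queue = {"A": list(rng.choice(len(LA), int(p1 * len(LA)), replace=False)),
             "B": list(rng.choice(len(LB), int(p2 * len(LB)), replace=False))}
    while queue["A"] or queue["B"]:
        for s in "AB":                    # mark this round's failures dead
            alive[s][queue[s]] = False
        for s in "AB":                    # shed the failed lines' loads
            t, load = other[s], L[s][queue[s]].sum()
            if alive[t].any():
                L[t][alive[t]] += frac_out[s] * load / alive[t].sum()
            if alive[s].any():
                L[s][alive[s]] += (1 - frac_out[s]) * load / alive[s].sum()
        queue = {s: list(np.where(alive[s] & (L[s] > C[s]))[0]) for s in "AB"}
    return {s: alive[s].mean() for s in "AB"}   # surviving fraction per system

rng = np.random.default_rng(1)
LA, LB = rng.uniform(1, 2, 1000), rng.uniform(1, 2, 1000)
print(cascade(LA, LA + 0.7, LB, LB + 0.7, a=0.4, b=0.4, p1=0.1, p2=0.1))
```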
Optimizing the robustness of electrical power systems against cascading failures.
Zhang, Yingrui; Yağan, Osman
2016-06-21
Electrical power systems are one of the most important infrastructures that support our society. However, their vulnerabilities have raised great concern recently due to several large-scale blackouts around the world. In this paper, we investigate the robustness of power systems against cascading failures initiated by a random attack. This is done under a simple yet useful model based on global and equal redistribution of load upon failures. We provide a comprehensive understanding of system robustness under this model by (i) deriving an expression for the final system size as a function of the size of initial attacks; (ii) deriving the critical attack size after which system breaks down completely; (iii) showing that complete system breakdown takes place through a first-order (i.e., discontinuous) transition in terms of the attack size; and (iv) establishing the optimal load-capacity distribution that maximizes robustness. In particular, we show that robustness is maximized when the difference between the capacity and initial load is the same for all lines; i.e., when all lines have the same redundant space regardless of their initial load. This is in contrast with the intuitive and commonly used setting where capacity of a line is a fixed factor of its initial load.
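Finding (iv), that uniform "redundant space" beats proportional capacity for the same total capacity budget, can be probed with a short simulation of the equal-redistribution cascade; the parameter values below are illustrative, and the update rule is our reading of the model:

```python
import numpy as np

def final_size(L0, C, p_attack, seed=0):
    """Equal global redistribution: every failed line's load is split
    equally over all surviving lines; returns the surviving fraction."""
    rng = np.random.default_rng(seed)
    L = L0.astype(float).copy()
    alive = np.ones(len(L), bool)
    hit = rng.choice(len(L), int(p_attack * len(L)), replace=False)
    alive[hit] = False
    shed = L[hit].sum()
    while shed > 0 and alive.any():
        L[alive] += shed / alive.sum()
        over = alive & (L > C)        # newly overloaded lines fail next
        shed = L[over].sum()
        alive[over] = False
    return alive.mean()

rng = np.random.default_rng(7)
L0 = rng.uniform(1, 3, 10_000)
extra = 0.5 * len(L0)                                     # fixed capacity budget
print(final_size(L0, L0 + extra / len(L0), 0.2),          # uniform free space
      final_size(L0, L0 * (1 + extra / L0.sum()), 0.2))   # proportional capacity
```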
Automatic segmentation of the wire frame of stent grafts from CT data.
Klein, Almar; van der Vliet, J Adam; Oostveen, Luuk J; Hoogeveen, Yvonne; Kool, Leo J Schultze; Renema, W Klaas Jan; Slump, Cornelis H
2012-01-01
Endovascular aortic replacement (EVAR) is an established technique, which uses stent grafts to treat aortic aneurysms in patients at risk of aneurysm rupture. Late stent graft failure is a serious complication in endovascular repair of aortic aneurysms. Better understanding of the motion characteristics of stent grafts will be beneficial for designing future devices. In addition, analysis of stent graft movement in individual patients in vivo can be valuable for predicting stent graft failure in these patients. To be able to gather information on stent graft motion in a quick and robust fashion, we propose an automatic method to segment stent grafts from CT data, consisting of three steps: the detection of seed points, finding the connections between these points to produce a graph, and graph processing to obtain the final geometric model in the form of an undirected graph. Using annotated reference data, the method was optimized and its accuracy was evaluated. The experiments were performed using data containing the AneuRx and Zenith stent grafts. The algorithm is robust to noise and small variations in the used parameter values, does not require much memory by modern standards, and is fast enough to be used in a clinical setting (65 s and 30 s for the two stent types, respectively). Further, it is shown that the resulting graphs have a 95% (AneuRx) and 92% (Zenith) correspondence with the annotated data. The geometric model produced by the algorithm allows incorporation of high-level information and material properties. This enables us to study the in vivo motions and forces that act on the frame of the stent. We believe that such studies will provide new insights into the behavior of the stent graft in vivo, enable the detection and prediction of stent failure in individual patients, and help in designing better stent grafts in the future.
Defect Detection and Segmentation Framework for Remote Field Eddy Current Sensor Data.
Falque, Raphael; Vidal-Calleja, Teresa; Miro, Jaime Valls
2017-10-06
Remote-Field Eddy-Current (RFEC) technology is often used as a Non-Destructive Evaluation (NDE) method to prevent water pipe failures. By analyzing the RFEC data, it is possible to quantify the corrosion present in pipes. Quantifying the corrosion involves detecting defects and extracting their depth and shape. For large sections of pipelines, this can be extremely time-consuming if performed manually. Automated approaches are therefore well motivated. In this article, we propose an automated framework to locate and segment defects in individual pipe segments, starting from raw RFEC measurements taken over large pipelines. The framework relies on a novel feature to robustly detect these defects and a segmentation algorithm applied to the deconvolved RFEC signal. The framework is evaluated using both simulated and real datasets, demonstrating its ability to efficiently segment the shape of corrosion defects.
Rockfall hazard and risk assessments along roads at a regional scale: example in Swiss Alps
NASA Astrophysics Data System (ADS)
Michoud, C.; Derron, M.-H.; Horton, P.; Jaboyedoff, M.; Baillifard, F.-J.; Loye, A.; Nicolet, P.; Pedrazzini, A.; Queyrel, A.
2012-03-01
Unlike fragmental rockfall runout assessments, there are only a few robust methods to quantify rock-mass-failure susceptibilities at the regional scale. A detailed slope angle analysis of recent Digital Elevation Models (DEM) can be used to detect potential rockfall source areas, thanks to the Slope Angle Distribution procedure. However, this method does not provide any information on block-release frequencies inside the identified areas. The present paper augments the Slope Angle Distribution of the cliff units with its normalized cumulative distribution function. This improvement amounts to a quantitative weighting of slope angles, introducing rock-mass-failure susceptibilities inside the rockfall source areas previously detected. Rockfall runout assessment is then performed using the GIS- and process-based software Flow-R, providing relative frequencies for runout. Thus, taking both susceptibility results into consideration, this approach can be used to establish, after calibration, hazard and risk maps at the regional scale. As an example, a risk analysis of vehicle traffic exposed to rockfalls is performed along the main roads of the Swiss alpine valley of Bagnes.
Robustness and fragility in coupled oscillator networks under targeted attacks.
Yuan, Tianyu; Aihara, Kazuyuki; Tanaka, Gouhei
2017-01-01
The dynamical tolerance of coupled oscillator networks against local failures is studied. As the fraction of failed oscillator nodes gradually increases, the mean oscillation amplitude in the entire network decreases and then suddenly vanishes at a critical fraction, as in a phase transition. This critical fraction, widely used as a measure of network robustness, had so far been derived analytically for random failures but not for targeted attacks. Here we derive the general formula for the critical fraction, which can be applied to both random failures and targeted attacks. We consider the effects of targeting oscillator nodes based on their degrees. First, we deal with coupled identical oscillators with homogeneous edge weights. Then our theory is applied to networks with heterogeneous edge weights and to those with nonidentical oscillators. The analytical results are validated by numerical experiments. Our results reveal the key factors governing the robustness and fragility of oscillator networks.
NASA Astrophysics Data System (ADS)
Grieggs, Samuel M.; McLaughlin, Michael J.; Ezekiel, Soundararajan; Blasch, Erik
2015-06-01
As technology and internet use grow at an exponential rate, video and imagery data are becoming increasingly important. Various techniques such as Wide Area Motion Imagery (WAMI), Full Motion Video (FMV), and Hyperspectral Imaging (HSI) are used to collect motion data and extract relevant information. Detecting and identifying a particular object in imagery data is an important step in understanding visual imagery, such as in content-based image retrieval (CBIR). Imagery data are segmented, automatically analyzed, and stored in a dynamic and robust database. In our system, we seek to utilize image fusion methods, which require quality metrics. Many Image Fusion (IF) algorithms have been proposed, but only a few metrics are used to evaluate their performance. In this paper, we seek a robust, objective metric to evaluate the performance of IF algorithms which compares the outcome of a given algorithm to ground truth and reports several types of errors. Given the ground truth of motion imagery data, it computes detection failure, false alarm, precision and recall metrics, background and foreground region statistics, as well as splits and merges of foreground regions. Using the Structural Similarity Index (SSIM), Mutual Information (MI), and entropy metrics, experimental results demonstrate the effectiveness of the proposed methodology for object detection, activity exploitation, and CBIR.
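As a simple illustration of the ground-truth comparison described (detection failures, false alarms, precision and recall over foreground regions), the sketch below scores a predicted binary foreground mask against an annotated one. The function name and the pixel-level formulation are illustrative assumptions, not the authors' metric.

    import numpy as np

    def detection_scores(pred, truth):
        # Compare binary foreground masks (boolean arrays of equal shape).
        tp = np.logical_and(pred, truth).sum()    # correctly detected pixels
        fp = np.logical_and(pred, ~truth).sum()   # false alarms
        fn = np.logical_and(~pred, truth).sum()   # detection failures (misses)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return {"precision": precision, "recall": recall,
                "false_alarm_pixels": int(fp), "missed_pixels": int(fn)}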
Optimization of robustness of interdependent network controllability by redundant design
2018-01-01
Controllability of complex networks has been a hot topic in recent years. Real-world networks are often coupled together, forming interdependent networks. The cascading process of interdependent networks, including interdependent failure and overload failure, will destroy the robustness of controllability for the whole network. Therefore, the optimization of the robustness of interdependent network controllability is of great importance in the research area of complex networks. In this paper, based on a model of interdependent networks constructed first, we determine the cascading process under different proportions of node attacks. Then, the structural controllability of interdependent networks is measured by the minimum set of driver nodes. Furthermore, we propose a parameter which can be obtained from the structure and minimum driver set of interdependent networks under different proportions of node attacks, and analyze the robustness of interdependent network controllability. Finally, we optimize the robustness of interdependent network controllability by redundant design, including node backup and redundancy edge backup, and improve the redundant design by proposing different strategies according to their cost. Comparisons of redundant-design strategies are conducted to find the best strategy. Results show that node backup and redundancy edge backup can indeed reduce the number of nodes suffering from failure and improve the robustness of controllability. Considering the cost of redundant design, we should choose BBS (betweenness-based strategy) or DBS (degree-based strategy) for node backup and HDF (high-degree-first) for redundancy edge backup. Above all, our proposed strategies are feasible and effective at improving the robustness of interdependent network controllability.
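Measuring structural controllability by minimum driver nodes, as done above, reduces to a maximum matching computation on the bipartite representation of the directed network (the Liu-Slotine-Barabási result). Below is a minimal sketch assuming networkx is available; the random-graph example is illustrative.

    import networkx as nx

    def n_driver_nodes(G):
        # Minimum driver nodes of a directed network via maximum matching on
        # its bipartite representation (structural controllability).
        B = nx.Graph()
        out_side = [("out", u) for u in G.nodes()]
        B.add_nodes_from(out_side)
        B.add_nodes_from(("in", v) for v in G.nodes())
        B.add_edges_from((("out", u), ("in", v)) for u, v in G.edges())
        matching = nx.bipartite.hopcroft_karp_matching(B, top_nodes=out_side)
        n_matched = len(matching) // 2  # the dict contains both directions
        return max(G.number_of_nodes() - n_matched, 1)

    # Illustrative example: driver nodes of a random directed network.
    G = nx.gnp_random_graph(200, 0.03, directed=True, seed=1)
    print(n_driver_nodes(G))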
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bojechko, Casey; Phillips, Mark; Kalet, Alan
Purpose: Complex treatments in radiation therapy require robust verification in order to prevent errors that can adversely affect the patient. For this purpose, the authors estimate the effectiveness of detecting errors with a “defense in depth” system composed of electronic portal imaging device (EPID) based dosimetry and a software-based system composed of rules-based and Bayesian network verifications. Methods: The authors analyzed incidents with a high potential severity score, scored as a 3 or 4 on a 4-point scale, recorded in an in-house voluntary incident reporting system, collected from February 2012 to August 2014. The incidents were categorized into different failure modes. The detectability, defined as the number of incidents that are detectable divided by the total number of incidents, was calculated for each failure mode. Results: In total, 343 incidents were used in this study. Of the incidents, 67% were related to photon external beam therapy (EBRT). The majority of the EBRT incidents were related to patient positioning, and only a small number of these could be detected by EPID dosimetry when performed prior to treatment (6%). A large fraction could be detected by in vivo dosimetry performed during the first fraction (74%). Rules-based and Bayesian network verifications were found to be complementary to EPID dosimetry, able to detect errors related to patient prescriptions and documentation, and errors unrelated to photon EBRT. Combining all of the verification steps together, 91% of all EBRT incidents could be detected. Conclusions: This study shows that the defense in depth system is potentially able to detect a large majority of incidents. The most effective EPID-based dosimetry verification is in vivo measurement during the first fraction, complemented by rules-based and Bayesian network plan checking.
Real-Time Diagnosis of Faults Using a Bank of Kalman Filters
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2006-01-01
A new robust method of automated real-time diagnosis of faults in an aircraft engine or a similar complex system involves the use of a bank of Kalman filters. In order to be highly reliable, a diagnostic system must be designed to account for the numerous failure conditions that an aircraft engine may encounter in operation. The method achieves this objective through the utilization of multiple Kalman filters, each of which is uniquely designed based on a specific failure hypothesis. A fault-detection-and-isolation (FDI) system, developed based on this method, is able to isolate faults in sensors and actuators while detecting component faults (abrupt degradation in engine component performance). By affording a capability for real-time identification of minor faults before they grow into major ones, the method promises to enhance safety and reduce operating costs. The robustness of this method is further enhanced by incorporating information regarding the aging condition of an engine. In general, real-time fault diagnostic methods use the nominal performance of a "healthy" new engine as a reference condition in the diagnostic process. Such an approach does not account for gradual changes in performance associated with aging of an otherwise healthy engine. By incorporating information on gradual, aging-related changes, the new method makes it possible to retain at least some of the sensitivity and accuracy needed to detect incipient faults while preventing false alarms that could result from erroneous interpretation of symptoms of aging as symptoms of failures. The figure schematically depicts an FDI system according to the new method. The FDI system is integrated with an engine, from which it accepts two sets of input signals: sensor readings and actuator commands. Two main parts of the FDI system are a bank of Kalman filters and a subsystem that implements FDI decision rules. Each Kalman filter is designed to detect a specific sensor or actuator fault. When a sensor or actuator fault occurs, large estimation errors are generated by all filters except the one using the correct hypothesis. By monitoring the residual output of each filter, the specific fault that has occurred can be detected and isolated on the basis of the decision rules. A set of parameters that indicate the performance of the engine components is estimated by the "correct" Kalman filter for use in detecting component faults. To reduce the loss of diagnostic accuracy and sensitivity in the face of aging, the FDI system accepts information from a steady-state condition-monitoring system. This information is used to update the Kalman filters and a data bank of trim values representative of the current aging condition.
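A minimal sketch of the residual-monitoring idea behind such a filter bank: one scalar Kalman filter per failure hypothesis, with the hypothesis whose filter produces the smallest normalized residual energy declared the match. The scalar random-walk model, bias hypotheses, and noise levels are illustrative assumptions, not the engine model.

    import numpy as np

    def kalman_bank(z, biases, q=1e-4, r=0.01):
        # One scalar Kalman filter per bias hypothesis; the filter whose
        # hypothesis matches the actual fault yields the smallest residuals.
        scores = []
        for b in biases:
            x, p, energy = 0.0, 1.0, 0.0
            for zk in z:
                p += q                        # predict (random-walk state model)
                innov = (zk - b) - x          # innovation under this hypothesis
                s = p + r                     # innovation variance
                k = p / s
                x += k * innov                # measurement update
                p *= 1.0 - k
                energy += innov**2 / s        # normalized residual energy
            scores.append(energy / len(z))
        return int(np.argmin(scores)), scores

    # Example: a sensor develops a bias of 0.5; hypothesis set {0.0, 0.5, -0.5}.
    rng = np.random.default_rng(2)
    truth = np.cumsum(rng.normal(0.0, 0.01, 200))
    z = truth + 0.5 + rng.normal(0.0, 0.1, 200)
    best, _ = kalman_bank(z, biases=[0.0, 0.5, -0.5])
    print(best)  # 1: the correct failure hypothesis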
Real-time Bayesian anomaly detection in streaming environmental data
NASA Astrophysics Data System (ADS)
Hill, David J.; Minsker, Barbara S.; Amir, Eyal
2009-04-01
With large volumes of data arriving in near real time from environmental sensors, there is a need for automated detection of anomalous data caused by sensor or transmission errors or by infrequent system behaviors. This study develops and evaluates three automated anomaly detection methods using dynamic Bayesian networks (DBNs), which perform fast, incremental evaluation of data as they become available, scale to large quantities of data, and require no a priori information regarding process variables or types of anomalies that may be encountered. This study investigates these methods' abilities to identify anomalies in eight meteorological data streams from Corpus Christi, Texas. The results indicate that DBN-based detectors, using either robust Kalman filtering or Rao-Blackwellized particle filtering, outperform a DBN-based detector using Kalman filtering, with the former having false positive/negative rates of less than 2%. These methods were successful at identifying data anomalies caused by two real events: a sensor failure and a large storm.
Color constancy by characterization of illumination chromaticity
NASA Astrophysics Data System (ADS)
Nikkanen, Jarno T.
2011-05-01
Computational color constancy algorithms play a key role in achieving the desired color reproduction in digital cameras. Failure to estimate the illumination chromaticity correctly will result in an invalid overall color cast in the image that is easily detected by human observers. A new algorithm is presented for computational color constancy. Low computational complexity and a low memory requirement make the algorithm suitable for resource-limited camera devices, such as consumer digital cameras and camera phones. Operation of the algorithm relies on characterization of the range of possible illumination chromaticities in terms of camera sensor response. The fact that only the illumination chromaticity is characterized, instead of the full color gamut, for example, increases robustness against variations in sensor characteristics and against failure of the diagonal model of illumination change. Multiple databases are used to demonstrate the good performance of the algorithm in comparison with state-of-the-art color constancy algorithms.
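The paper's characterization-based algorithm is not reproduced here, but the underlying task of estimating illumination chromaticity and correcting the image under a diagonal (von Kries) model can be sketched with the classic gray-world baseline, used here purely as a stand-in:

    import numpy as np

    def gray_world_correction(img):
        # Estimate the illuminant as the per-channel mean (gray-world
        # assumption) and correct with a diagonal (von Kries) transform.
        img = img.astype(float)
        means = img.reshape(-1, 3).mean(axis=0)
        chroma = means / means.sum()        # illumination chromaticity (r, g, b)
        gains = means.mean() / means        # diagonal correction gains
        corrected = np.clip(img * gains, 0, 255).astype(np.uint8)
        return corrected, chroma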
Axioms for Obligation and Robustness with Temporal Logic
NASA Astrophysics Data System (ADS)
French, Tim; McCabe-Dansted, John C.; Reynolds, Mark
RoCTL* was proposed to model and specify the robustness of reactive systems. RoCTL* extended CTL* with the addition of Obligatory and Robustly operators, which quantify over failure-free paths and paths with one more failure respectively. This paper gives an axiomatisation for all the operators of RoCTL* with the exception of the Until operator; this fragment is able to express similar contrary-to-duty obligations to the full RoCTL* logic. We call this formal system NORA, and give a completeness proof. We also consider the fragments of the language containing only path quantifiers (but where variables are dependent on histories). We examine semantic properties and potential axiomatisations for these fragments.
Modeling and analyzing cascading dynamics of the Internet based on local congestion information
NASA Astrophysics Data System (ADS)
Zhu, Qian; Nie, Jianlong; Zhu, Zhiliang; Yu, Hai; Xue, Yang
2018-06-01
Cascading failure has already become one of the vital issues in network science. By considering realistic network operational settings, we propose a congestion function to represent the congested extent of a node and construct a local congestion-aware routing strategy with a tunable parameter. We investigate cascading failures on the Internet triggered by deliberate attacks. Simulation results show that the tunable parameter has an optimal value that makes the network achieve a maximum level of robustness. The robustness of the network has a positive correlation with the tolerance parameter, but a negative correlation with the packet generation rate. In addition, there exists a threshold of the attacked proportion of nodes at which the network reaches its lowest robustness. Moreover, by introducing the concept of time delay for information transmission on the Internet, we find that increasing the time delay rapidly decreases the robustness of the network. The findings of the paper will be useful for enhancing the robustness of the Internet in the future.
Overload-based cascades on multiplex networks and effects of inter-similarity
Zhou, Dong
2017-01-01
Although cascading failures caused by overload on interdependent/interconnected networks have been studied in recent years, the effect of overlapping links (inter-similarity) on robustness to such cascades in coupled networks is not well understood. This is an important issue since shared links exist in many real-world coupled networks. In this paper, we propose a new model for load-based cascading failures in multiplex networks. We leverage it to compare different network structures, coupling schemes, and overload rules. More importantly, we systematically investigate the impact of inter-similarity on the robustness of the whole system under an initial intentional attack. Surprisingly, we find that inter-similarity can have a negative impact on robustness to overload cascades. To the best of our knowledge, we are the first to report the competition between the positive and the negative impacts of overlapping links on the robustness of coupled networks. These results provide useful suggestions for designing robust coupled traffic systems.
Multi-criteria robustness analysis of metro networks
NASA Astrophysics Data System (ADS)
Wang, Xiangrong; Koç, Yakup; Derrible, Sybil; Ahmad, Sk Nasir; Pino, Willem J. A.; Kooij, Robert E.
2017-05-01
Metros (heavy rail transit systems) are integral parts of urban transportation systems. Failures in their operations can have serious impacts on urban mobility, and measuring their robustness is therefore critical. Moreover, as physical networks, metros can be viewed as topological entities, and as such they possess measurable network properties. In this article, by using network science and graph theory, we investigate ten theoretical and four numerical robustness metrics and their performance in quantifying the robustness of 33 metro networks under random failures or targeted attacks. We find that the ten theoretical metrics capture two distinct aspects of the robustness of metro networks. First, several metrics place an emphasis on alternative paths. Second, other metrics place an emphasis on the length of the paths. To account for all aspects, we standardize all ten indicators and plot them on radar diagrams to assess the overall robustness of the metro networks. Overall, we find that Tokyo and Rome are the most robust networks. Rome benefits from short transfers, and Tokyo has a significant number of transfer stations, both in the city center and in the peripheral area of the city, promoting both a higher number of alternative paths and overall relatively short path lengths.
On-line Bayesian model updating for structural health monitoring
NASA Astrophysics Data System (ADS)
Rocchetta, Roberto; Broggi, Matteo; Huchet, Quentin; Patelli, Edoardo
2018-03-01
Fatigue-induced cracking is a dangerous failure mechanism which affects mechanical components subject to alternating load cycles. System health monitoring should be adopted to identify cracks which can jeopardise the structure. Real-time damage detection may fail to identify cracks due to different sources of uncertainty which have been poorly assessed or even fully neglected. In this paper, a novel, efficient and robust procedure is used for the detection of crack locations and lengths in mechanical components. A Bayesian model updating framework is employed, which allows accounting for relevant sources of uncertainty. The idea underpinning the approach is to identify the most probable crack consistent with the experimental measurements. To tackle the computational cost of the Bayesian approach, an emulator is adopted to replace the computationally costly finite element model. To improve the overall robustness of the procedure, different numerical likelihoods, measurement noises and imprecision in the values of model parameters are analysed and their effects quantified. The accuracy of the stochastic updating and the efficiency of the numerical procedure are discussed. An experimental aluminium frame and a numerical model of a typical car suspension arm are used to demonstrate the applicability of the approach.
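The Bayesian updating step can be sketched with a random-walk Metropolis sampler over crack length, with a cheap surrogate standing in for the finite element model. Everything below (the surrogate freq_model, the measurement y, the noise level) is an illustrative assumption, not the paper's emulator:

    import numpy as np

    def metropolis(log_post, x0, n=5000, step=0.05, seed=0):
        # Random-walk Metropolis sampler for a one-dimensional posterior.
        rng = np.random.default_rng(seed)
        x, lp = x0, log_post(x0)
        samples = np.empty(n)
        for i in range(n):
            xp = x + step * rng.normal()
            lpp = log_post(xp)
            if np.log(rng.uniform()) < lpp - lp:  # accept/reject
                x, lp = xp, lpp
            samples[i] = x
        return samples

    # Hypothetical surrogate for the FE model: natural frequency vs crack length.
    freq_model = lambda a: 100.0 - 25.0 * a
    y, sigma = 95.0, 0.5  # measured frequency and noise std (illustrative)
    log_post = lambda a: (-0.5 * ((y - freq_model(a)) / sigma) ** 2
                          if 0.0 <= a <= 1.0 else -np.inf)
    samples = metropolis(log_post, x0=0.5)
    print(samples[1000:].mean())  # posterior mean crack length, near 0.2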
Sliding Mode Control of the X-33 with an Engine Failure
NASA Technical Reports Server (NTRS)
Shtessel, Yuri B.; Hall, Charles E.
2000-01-01
Ascent flight control of the X-33 is performed using two XRS-2200 linear aerospike engines, in addition to aerosurfaces. The baseline control algorithms are PID with gain scheduling. Flight control using an innovative method, Sliding Mode Control, is presented for nominal and engine-failed modes of flight. An easy-to-implement, robust controller, requiring no reconfiguration or gain scheduling, is demonstrated through high-fidelity flight simulations. The proposed sliding mode controller utilizes a two-loop structure and provides robust, de-coupled tracking of both orientation angle command profiles and angular rate command profiles in the presence of engine failure, bounded external disturbances (wind gusts) and an uncertain matrix of inertia. Sliding mode control causes the angular rate and orientation angle tracking error dynamics to be constrained to linear, de-coupled, homogeneous, vector-valued differential equations with desired eigenvalues. Conditions that restrict engine failures to the robustness domain of the sliding mode controller are derived. Overall stability of the two-loop flight control system is assessed. Simulation results show that the designed controller provides robust, accurate, de-coupled tracking of the orientation angle command profiles in the presence of external disturbances and vehicle inertia uncertainties, as well as in the single-engine-failed case. The designed robust controller will significantly reduce the time and cost associated with flying new trajectory profiles or orbits, with new payloads, and with modified vehicles.
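For readers unfamiliar with the technique, a minimal sliding mode controller for a double-integrator tracking loop is sketched below; the plant, gains, and boundary-layer smoothing are illustrative assumptions with no connection to the X-33 model. The switching term drives the state onto the sliding surface s = 0 despite the unknown disturbance, after which the error decays with the dynamics set by lam.

    import numpy as np

    def smc_step(theta, omega, theta_cmd, omega_cmd, lam=2.0, eta=5.0, phi=0.05):
        # One sliding-mode update for the double integrator theta'' = u + d.
        # lam sets the error dynamics on the surface, eta must dominate the
        # disturbance bound, and phi is a boundary layer that smooths the
        # sign() switching to reduce chattering.
        e, e_dot = theta - theta_cmd, omega - omega_cmd
        s = e_dot + lam * e                        # sliding surface
        u_eq = -lam * e_dot                        # equivalent control
        u_sw = -eta * np.clip(s / phi, -1.0, 1.0)  # smoothed switching term
        return u_eq + u_sw

    # Regulate to theta_cmd = 1 under an unknown constant disturbance d = 0.8.
    theta, omega, dt = 0.0, 0.0, 0.01
    for _ in range(1000):
        u = smc_step(theta, omega, 1.0, 0.0)
        omega += (u + 0.8) * dt                    # disturbance magnitude < eta
        theta += omega * dt
    print(round(theta, 3))                         # ~1.0, small boundary-layer offset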
Time-frequency vibration analysis for the detection of motor damages caused by bearing currents
NASA Astrophysics Data System (ADS)
Prudhom, Aurelien; Antonino-Daviu, Jose; Razik, Hubert; Climente-Alarcon, Vicente
2017-02-01
Motor failure due to bearing currents is an issue that has drawn increasing industrial interest over recent years. Bearing currents usually appear in motors operated by variable frequency drives (VFD); these drives may lead to common-mode voltages which cause currents to be induced in the motor shaft that are discharged through the bearings. The presence of these currents may lead to motor bearing failure only a few months after system startup. Vibration monitoring is one of the most common ways of detecting bearing damage caused by circulating currents; evaluating the amplitudes of well-known characteristic components in the vibration Fourier spectrum that are associated with race, ball or cage defects enables assessment of the bearing condition and, hence, identification of an eventual damage due to bearing currents. However, the inherent constraints of the Fourier transform may complicate the detection of progressive bearing degradation; for instance, in some cases, other frequency components may mask or be confused with bearing defect-related components while, in other cases, the analysis may not be suitable due to the eventual non-stationary nature of the captured vibration signals. Moreover, the fact that this analysis loses the time dimension limits the amount of information obtained from the technique. This work proposes the use of time-frequency (T-F) transforms to analyse vibration data in motors affected by bearing currents. The experimental results obtained on real machines show that vibration analysis via T-F tools may provide significant advantages for the detection of bearing current damage; among others, these techniques enable visualising the progressive degradation of the bearing while providing effective discrimination versus other components that are not related to the fault. Moreover, their application is valid regardless of the operating regime of the machine. Both factors confirm the robustness and reliability of these tools, which may be an interesting alternative for detecting this type of failure in induction motors.
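The characteristic race, ball and cage frequencies referred to above follow from the bearing geometry via standard formulas, and a T-F view can be obtained with an off-the-shelf spectrogram. A hedged sketch, assuming scipy is available; the geometry values and the synthetic signal are illustrative:

    import numpy as np
    from scipy import signal

    def bearing_frequencies(fr, n_balls, d, D, phi=0.0):
        # Classic characteristic defect frequencies for shaft frequency fr [Hz],
        # ball diameter d, pitch diameter D, contact angle phi [rad].
        k = (d / D) * np.cos(phi)
        return {"BPFO": 0.5 * n_balls * fr * (1 - k),    # outer race
                "BPFI": 0.5 * n_balls * fr * (1 + k),    # inner race
                "FTF": 0.5 * fr * (1 - k),               # cage
                "BSF": 0.5 * (D / d) * fr * (1 - k**2)}  # ball spin

    # Time-frequency view of a vibration record (synthetic example signal).
    fs = 20000.0
    t = np.arange(0.0, 2.0, 1.0 / fs)
    x = np.sin(2 * np.pi * 157.0 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
    f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=2048, noverlap=1536)
    print(bearing_frequencies(fr=25.0, n_balls=9, d=7.9e-3, D=39.0e-3))
    print(f[Sxx.mean(axis=1).argmax()])  # dominant spectral track, ~157 Hz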
Robustness of controllability for networks based on edge-attack.
Nie, Sen; Wang, Xuwen; Zhang, Haifeng; Li, Qilang; Wang, Binghong
2014-01-01
We study the controllability of networks in the process of cascading failures under two different attacking strategies, random and intentional attack. For the highest-load edge attack, it is found that the controllability of Erdős-Rényi networks with moderate average degree is less robust, whereas scale-free networks with moderate power-law exponent show strong robustness of controllability under the same attack strategy. The vulnerability of controllability under random and intentional attacks behaves differently as the removal fraction increases; in particular, we find that the robustness of control plays an important role in cascades for large removal fractions. The simulation results show that for scale-free networks with various power-law exponents, a larger scale of cascades does not necessarily mean a larger increase in the number of driver nodes. Meanwhile, the number of driver nodes in cascading failures is also related to the number of edges in strongly connected components.
NASA Astrophysics Data System (ADS)
Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.; Krenczyk, D.
2016-08-01
In the paper a survey of predictive and reactive scheduling methods is carried out in order to evaluate how the ability to predict reliability characteristics influences robustness criteria. The most important reliability characteristics are Mean Time to Failure and Mean Time to Repair. The survey analysis is done for a job shop scheduling problem. The paper answers the question: which method generates robust schedules when a bottleneck failure occurs before, at the beginning of, or after planned maintenance actions? Efficiency of predictive schedules is evaluated using the criteria of makespan, total tardiness, flow time, and idle time. Efficiency of reactive schedules is evaluated using a solution robustness criterion and a quality robustness criterion. This paper is a continuation of the research conducted in [1], where the survey of predictive and reactive scheduling methods was done only for small-size scheduling problems.
Robust Aircraft Squadron Scheduling in the Face of Absenteeism
2008-03-01
Complicating matters is absenteeism. If one or more pilots are unable to perform their previously assigned tasks, due to sickness, aircraft failure, or ... (M.S. thesis by Osman B. Gokcen, AFIT/GOR/ENS/08-06).
Vivanti, Refael; Joskowicz, Leo; Lev-Cohain, Naama; Ephrat, Ariel; Sosna, Jacob
2018-03-10
Radiological longitudinal follow-up of tumors in CT scans is essential for disease assessment and liver tumor therapy. Currently, most tumor size measurements follow the RECIST guidelines, which can be off by as much as 50%. True volumetric measurements are more accurate but require manual delineation, which is time-consuming and user-dependent. We present a convolutional neural network (CNN) based method for robust automatic liver tumor delineation in longitudinal CT studies that uses both global and patient-specific CNNs trained on a small database of delineated images. The inputs are the baseline scan and the tumor delineation, a follow-up scan, and a liver tumor global CNN voxel classifier built from radiologist-validated liver tumor delineations. The outputs are the tumor delineations in the follow-up CT scan. The baseline scan tumor delineation serves as a high-quality prior for the tumor characterization in the follow-up scans. It is used to evaluate the global CNN performance on the new case and to reliably predict failures of the global CNN on the follow-up scan. High-scoring cases are segmented with a global CNN; low-scoring cases, which are predicted to be failures of the global CNN, are segmented with a patient-specific CNN built from the baseline scan. Our experimental results on 222 tumors from 31 patients yield an average overlap error of 17% (std = 11.2) and surface distance of 2.1 mm (std = 1.8), far better than stand-alone segmentation. Importantly, the robustness of our method improved from 67% for stand-alone global CNN segmentation to 100%. Unlike other medical imaging deep learning approaches, which require large annotated training datasets, our method exploits the follow-up framework to yield accurate tumor tracking and failure detection and correction with a small training dataset. Graphical abstract: flow diagram of the proposed method. In the offline mode (orange), a global CNN is trained as a voxel classifier to segment liver tumors as in [31]. The online mode (blue) is used for each new case. The input is the baseline scan with delineation and the follow-up CT scan to be segmented. The main novelty is the ability to predict failures by testing the system on the baseline scan and the ability to correct them using the patient-specific CNN.
Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1
NASA Technical Reports Server (NTRS)
Park, Thomas; Smith, Austin; Oliver, T. Emerson
2018-01-01
The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate out vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GN&C software from the set of healthy measurements. This paper explores the trades and analyses that were performed in selecting a set of robust fault-detection algorithms included in the GN&C flight software. These trades included both an assessment of hardware-provided health and status data and an evaluation of different algorithms based on time-to-detection, type of failures detected, and probability of detecting false positives. We then provide an overview of the algorithms used for both fault detection and measurement down-selection. We next discuss the role of trajectory design, flexible-body models, and vehicle response to off-nominal conditions in setting the detection thresholds. Lastly, we present lessons learned from software integration and hardware-in-the-loop testing.
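One common down-selection pattern for redundant rate sensors is to cross-compare each channel against the median of the currently healthy set and disqualify persistent outliers. The sketch below illustrates that generic pattern only; the thresholds, persistence counts, and voting rule are illustrative assumptions, not the SLS SDQ algorithms.

    import numpy as np

    class RateVoter:
        # Median-based down-selection for redundant rate channels: disqualify
        # channels that persistently disagree with the healthy-set median.
        def __init__(self, n_channels, threshold=0.02, persistence=10):
            self.healthy = np.ones(n_channels, dtype=bool)
            self.counts = np.zeros(n_channels, dtype=int)
            self.threshold = threshold      # rad/s disagreement limit (illustrative)
            self.persistence = persistence  # consecutive bad frames before disqualifying

        def step(self, rates):
            ref = np.median(rates[self.healthy])
            bad = np.abs(rates - ref) > self.threshold
            self.counts = np.where(bad, self.counts + 1, 0)
            if self.healthy.sum() > 2:      # keep at least two channels voting
                self.healthy[self.counts >= self.persistence] = False
            return np.median(rates[self.healthy])

    # Example: channel 2 develops a hard-over failure.
    voter = RateVoter(4)
    rng = np.random.default_rng(0)
    for _ in range(50):
        rates = 0.1 + rng.normal(0.0, 0.002, 4)
        rates[2] = 1.0                      # failed channel
        out = voter.step(rates)
    print(voter.healthy, round(out, 3))     # channel 2 disqualified, output ~0.1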
Cascading failure in scale-free networks with tunable clustering
NASA Astrophysics Data System (ADS)
Zhang, Xue-Jun; Gu, Bo; Guan, Xiang-Min; Zhu, Yan-Bo; Lv, Ren-Li
2016-02-01
Cascading failure is ubiquitous in many networked infrastructure systems, such as power grids, the Internet and air transportation systems. In this paper, we extend the cascading failure model to a scale-free network with tunable clustering and focus on the effect of the clustering coefficient on system robustness. It is found that the network robustness undergoes a nonmonotonic transition as the clustering coefficient increases: both highly and lowly clustered networks are fragile under intentional attack, and networks with moderate clustering coefficients can better resist the spread of cascading failures. We then provide an extensive explanation for this constructive phenomenon from a microscopic point of view and through quantitative analysis. Our work can be useful to the design and optimization of infrastructure systems.
NASA Technical Reports Server (NTRS)
Kaufman, Howard
1998-01-01
Many papers relevant to reconfigurable flight control have appeared over the past fifteen years. In general these have consisted of theoretical issues, simulation experiments, and in some cases, actual flight tests. Results indicate that reconfiguration of flight controls is certainly feasible for a wide class of failures. However, many of the proposed procedures, although quite attractive, need further analytical and experimental studies for meaningful validation. Many procedures assume the availability of failure detection and identification logic that will supply, sufficiently quickly, the dynamics corresponding to the failed aircraft. This in general implies that the failure detection and fault identification logic must have access to all possible anticipated faults and the corresponding dynamical equations of motion. Unless some sort of explicit on-line parameter identification is included, the computational demands could be excessive. This suggests the need for some form of adaptive control, either by itself as the prime procedure for control reconfiguration or in conjunction with the failure detection logic. If explicit or indirect adaptive control is used, then it is important that the identified models be such that the corresponding computed controls deliver adequate performance to the actual aircraft. Unknown changes in trim should be modelled, and parameter identification needs to be adequately insensitive to noise and at the same time capable of tracking abrupt changes. If, however, both failure detection and system parameter identification turn out to be too time consuming in an emergency situation, then the concepts of direct adaptive control should be considered. If direct model reference adaptive control is to be used (on a linear model) with stability assurances, then a positive real or passivity condition needs to be satisfied for all possible configurations. This condition is often satisfied with a feedforward compensator around the plant. This compensator must be robustly designed such that the compensated plant satisfies the required positive real conditions over all expected parameter values. Furthermore, with the feedforward only around the plant, a nonzero (but bounded) error will exist in steady state between the plant and model outputs. This error can be removed by placing the compensator also in the reference model. Design of such a compensator should not be too difficult a problem, since for flight control it is generally possible to feed back all the system states.
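Direct model reference adaptive control, discussed at the end of the passage, can be illustrated for a first-order plant with the classic Lyapunov-based adaptive law. The plant, reference model, and gains below are illustrative assumptions; the point is that the adaptation needs no failure detection or explicit parameter identification.

    import numpy as np

    # Direct MRAC for the scalar plant y' = a*y + b*u (a, b unknown, b > 0),
    # reference model ym' = -am*ym + am*r, control u = th_r*r + th_y*y.
    # Lyapunov-based adaptation: th_r' = -g*e*r, th_y' = -g*e*y, with e = y - ym.
    a, b = 1.0, 2.0                 # true plant parameters, unknown to the controller
    am, g, dt = 3.0, 2.0, 0.001     # model pole, adaptation gain, step (illustrative)
    y = ym = th_r = th_y = 0.0
    for i in range(int(60.0 / dt)):
        r = 1.0 if (i * dt) % 10.0 < 5.0 else -1.0   # square-wave reference
        u = th_r * r + th_y * y
        e = y - ym
        th_r += -g * e * r * dt     # adaptive laws (stable by a Lyapunov argument)
        th_y += -g * e * y * dt
        y += (a * y + b * u) * dt
        ym += (-am * ym + am * r) * dt
    # Gains drift toward th_r* = am/b = 1.5 and th_y* = -(a+am)/b = -2.0
    print(round(th_r, 2), round(th_y, 2), round(abs(e), 4))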
The fate of object memory traces under change detection and change blindness.
Busch, Niko A
2013-07-03
Observers often fail to detect substantial changes in a visual scene. This so-called change blindness is often taken as evidence that visual representations are sparse and volatile. This notion rests on the assumption that the failure to detect a change implies that representations of the changing objects are lost altogether. However, recent evidence suggests that under change blindness, object memory representations may be formed and stored, but not retrieved. This study investigated the fate of object memory representations when changes go unnoticed. Participants were presented with scenes consisting of real-world objects, one of which changed on each trial, while event-related potentials (ERPs) were recorded. Participants were first asked to localize where the change had occurred. In an additional recognition task, participants then discriminated old objects, either from the pre-change or the post-change scene, from entirely new objects. Neural traces of object memories were studied by comparing ERPs for old and novel objects. Participants performed poorly in the detection task and often failed to recognize objects from the scene, especially pre-change objects. However, a robust old/novel effect was observed in the ERP, even when participants were change blind and did not recognize the old object. This implicit memory trace was found both for pre-change and post-change objects. These findings suggest that object memories are stored even under change blindness. Thus, visual representations may not be as sparse and volatile as previously thought. Rather, change blindness may point to a failure to retrieve and use these representations for change detection.
Robust Feature Matching in Terrestrial Image Sequences
NASA Astrophysics Data System (ADS)
Abbas, A.; Ghuffar, S.
2018-04-01
Over the last decade, feature detection, description and matching techniques have been widely exploited in various photogrammetric and computer vision applications, including 3D reconstruction of scenes, image stitching for panorama creation, image classification, and object recognition. However, terrestrial imagery of urban scenes contains various issues, including duplicate and identical structures (e.g., repeated windows and doors) that cause problems in the feature matching phase and ultimately lead to failures, especially in camera pose and scene structure estimation. In this paper, we address the issue of ambiguous feature matching in urban environments due to repeating patterns.
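One standard way to suppress the ambiguous matches produced by repeated facade elements is Lowe's ratio test: keep a match only when its best candidate is clearly better than the second best. The sketch below assumes OpenCV (cv2) with SIFT available and is a generic baseline, not the paper's method.

    import cv2

    def ratio_test_matches(img1, img2, ratio=0.75):
        # Match SIFT features and keep a match only when its best candidate is
        # clearly better than the second best; ambiguous matches produced by
        # repeated structures (windows, doors) tend to fail this test.
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                if m.distance < ratio * n.distance]
        return kp1, kp2, good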
Advances in Micromechanics Modeling of Composites Structures for Structural Health Monitoring
NASA Astrophysics Data System (ADS)
Moncada, Albert
Although high performance, light-weight composites are increasingly being used in applications ranging from aircraft, rotorcraft, weapon systems and ground vehicles, the assurance of structural reliability remains a critical issue. In composites, damage is absorbed through various fracture processes, including fiber failure, matrix cracking and delamination. An important element in achieving reliable composite systems is a strong capability of assessing and inspecting physical damage of critical structural components. Installation of a robust Structural Health Monitoring (SHM) system would be very valuable in detecting the onset of composite failure. A number of major issues still require serious attention in connection with the research and development aspects of sensor-integrated reliable SHM systems for composite structures. In particular, the sensitivity of currently available sensor systems does not allow detection of micro level damage; this limits the capability of data driven SHM systems. As a fundamental layer in SHM, modeling can provide in-depth information on material and structural behavior for sensing and detection, as well as data for learning algorithms. This dissertation focuses on the development of a multiscale analysis framework, which is used to detect various forms of damage in complex composite structures. A generalized method of cells based micromechanics analysis, as implemented in NASA's MAC/GMC code, is used for the micro-level analysis. First, a baseline study of MAC/GMC is performed to determine the governing failure theories that best capture the damage progression. The deficiencies associated with various layups and loading conditions are addressed. In most micromechanics analysis, a representative unit cell (RUC) with a common fiber packing arrangement is used. The effect of variation in this arrangement within the RUC has been studied and results indicate this variation influences the macro-scale effective material properties and failure stresses. The developed model has been used to simulate impact damage in a composite beam and an airfoil structure. The model data was verified through active interrogation using piezoelectric sensors. The multiscale model was further extended to develop a coupled damage and wave attenuation model, which was used to study different damage states such as fiber-matrix debonding in composite structures with surface bonded piezoelectric sensors.
Kitano, Hiroaki
2004-11-01
Robustness is a ubiquitously observed property of biological systems. It is considered to be a fundamental feature of complex evolvable systems. It is attained by several underlying principles that are universal to both biological organisms and sophisticated engineering systems. Robustness facilitates evolvability and robust traits are often selected by evolution. Such a mutually beneficial process is made possible by specific architectural features observed in robust systems. But there are trade-offs between robustness, fragility, performance and resource demands, which explain system behaviour, including the patterns of failure. Insights into inherent properties of robust systems will provide us with a better understanding of complex diseases and a guiding principle for therapy design.
Health management system for rocket engines
NASA Technical Reports Server (NTRS)
Nemeth, Edward
1990-01-01
The functional framework of a failure detection algorithm for the Space Shuttle Main Engine (SSME) is developed. The basic algorithm is based only on existing SSME measurements. Supplemental measurements, expected to enhance failure detection effectiveness, are identified. To support the algorithm development, a figure of merit is defined to estimate the likelihood of SSME criticality 1 failure modes and the failure modes are ranked in order of likelihood of occurrence. Nine classes of failure detection strategies are evaluated and promising features are extracted as the basis for the failure detection algorithm. The failure detection algorithm provides early warning capabilities for a wide variety of SSME failure modes. Preliminary algorithm evaluation, using data from three SSME failures representing three different failure types, demonstrated indications of imminent catastrophic failure well in advance of redline cutoff in all three cases.
Jarolim, Petr; Patel, Purvish P; Conrad, Michael J; Chang, Lei; Melenovsky, Vojtech; Wilson, David H
2015-10-01
The association between increases in cardiac troponin and adverse cardiac outcomes is well established. There is a growing interest in exploring routine cardiac troponin monitoring as a potential early indicator of adverse heart health trends. Prognostic use of cardiac troponin measurements requires an assay with very high sensitivity and outstanding analytical performance. We report development and preliminary validation of an investigational assay meeting these requirements and demonstrate its applicability to cohorts of healthy individuals and patients with heart failure. On the basis of single molecule array technology, we developed a 45-min immunoassay for cardiac troponin I (cTnI) for use on a novel, fully automated digital analyzer. We characterized its analytical performance and measured cTnI in healthy individuals and heart failure patients in a preliminary study of assay analytical efficacy. The assay exhibited a limit of detection of 0.01 ng/L, a limit of quantification of 0.08 ng/L, and a total CV of 10% at 2.0 ng/L. cTnI concentrations were well above the assay limit of detection for all samples tested, including samples from healthy individuals. cTnI was significantly higher in heart failure patients, and exhibited increasing median and interquartile concentrations with increasing New York Heart Association classification of heart failure severity. The robust 2-log increase in sensitivity relative to contemporary high-sensitivity cardiac troponin immunoassays, combined with full automation, make this assay suitable for exploring cTnI concentrations in cohorts of healthy individuals and for the potential prognostic application of serial cardiac troponin measurements in both apparently healthy and diseased individuals.
Wang, Baofeng; Qi, Zhiquan; Chen, Sizhong; Liu, Zhaodu; Ma, Guocheng
2017-01-01
Vision-based vehicle detection is an important issue for advanced driver assistance systems. In this paper, we present an improved multi-vehicle detection and tracking method using a cascade Adaboost classifier and an adaptive Kalman filter (AKF) with target identity awareness. A cascade Adaboost classifier using Haar-like features was built for vehicle detection, followed by a more comprehensive verification process which refines the vehicle hypothesis in terms of both location and dimension. In vehicle tracking, each vehicle is tracked with an independent identity by an adaptive Kalman filter in collaboration with a data association approach. The AKF adaptively adjusts the measurement and process noise covariances through on-line stochastic modelling to compensate for dynamics changes. The data association correctly assigns different detections to tracks using the global nearest neighbour (GNN) algorithm while considering local validation. During tracking, a temporal-context-based track management is proposed to decide whether to initiate, maintain or terminate the tracks of different objects, thus suppressing sparse false alarms and compensating for temporary detection failures. Finally, the proposed method was tested on various challenging real roads, and the experimental results showed that the vehicle detection performance was greatly improved, with higher accuracy and robustness.
Uncertainty Modeling for Robustness Analysis of Control Upset Prevention and Recovery Systems
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Khong, Thuan H.; Shin, Jong-Yeob; Kwatny, Harry; Chang, Bor-Chin; Balas, Gary J.
2005-01-01
Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. Such systems (developed for failure detection, identification, and reconfiguration, as well as upset recovery) need to be evaluated over broad regions of the flight envelope and under extreme flight conditions, and should include various sources of uncertainty. However, formulation of linear fractional transformation (LFT) models for representing system uncertainty can be very difficult for complex parameter-dependent systems. This paper describes a preliminary LFT modeling software tool which uses a matrix-based computational approach that can be directly applied to parametric uncertainty problems involving multivariate matrix polynomial dependencies. Several examples are presented (including an F-16 at an extreme flight condition, a missile model, and a generic example with numerous cross-product terms), and comparisons are given with other LFT modeling tools that are currently available. The LFT modeling method and preliminary software tool presented in this paper are shown to compare favorably with these methods.
A Hybrid Procedural/Deductive Executive for Autonomous Spacecraft
NASA Technical Reports Server (NTRS)
Pell, Barney; Gamble, Edward B.; Gat, Erann; Keesing, Ron; Kurien, James; Millar, William; Nayak, P. Pandurang; Plaunt, Christian; Williams, Brian C.; Lau, Sonie (Technical Monitor)
1998-01-01
The New Millennium Remote Agent (NMRA) will be the first AI system to control an actual spacecraft. The spacecraft domain places a strong premium on autonomy and requires dynamic recoveries and robust concurrent execution, all in the presence of tight real-time deadlines, changing goals, scarce resource constraints, and a wide variety of possible failures. To achieve this level of execution robustness, we have integrated a procedural executive based on generic procedures with a deductive model-based executive. A procedural executive provides sophisticated control constructs such as loops, parallel activity, locks, and synchronization which are used for robust schedule execution, hierarchical task decomposition, and routine configuration management. A deductive executive provides algorithms for sophisticated state inference and optimal failure recovery planning. The integrated executive enables designers to code knowledge via a combination of procedures and declarative models, yielding a rich modeling capability suitable to the challenges of real spacecraft control. The interface between the two executives ensures both that recovery sequences are smoothly merged into high-level schedule execution and that a high degree of reactivity is retained to effectively handle additional failures during recovery.
Robust dynamic inversion controller design and analysis (using the X-38 vehicle as a case study)
NASA Astrophysics Data System (ADS)
Ito, Daigoro
A new way to approach robust Dynamic Inversion controller synthesis is addressed in this paper. A Linear Quadratic Gaussian outer-loop controller improves the robustness of a Dynamic Inversion inner-loop controller in the presence of uncertainties. Desired dynamics are given by the dynamic compensator, which shapes the loop. The selected dynamics are based on both performance and stability robustness requirements. These requirements are straightforwardly formulated as frequency-dependent singular value bounds during synthesis of the controller. Performance and robustness of the designed controller is tested using a worst case time domain quadratic index, which is a simple but effective way to measure robustness due to parameter variation. Using this approach, a lateral-directional controller for the X-38 vehicle is designed and its robustness to parameter variations and disturbances is analyzed. It is found that if full state measurements are available, the performance of the designed lateral-directional control system, measured by the chosen cost function, improves by approximately a factor of four. Also, it is found that the designed system is stable up to a parametric variation of 1.65 standard deviation with the set of uncertainty considered. The system robustness is determined to be highly sensitive to the dihedral derivative and the roll damping coefficients. The controller analysis is extended to the nonlinear system where both control input displacements and rates are bounded. In this case, the considered nonlinear system is stable up to 48.1° in bank angle and 1.59° in sideslip angle variations, indicating it is more sensitive to variations in sideslip angle than in bank angle. This nonlinear approach is further extended for the actuator failure mode analysis. The results suggest that the designed system maintains a high level of stability in the event of aileron failure. However, only 35% or less of the original stability range is maintained for the rudder failure case. Overall, this combination of controller synthesis and robustness criteria compares well with the mu-synthesis technique. It also is readily accessible to the practicing engineer, in terms of understanding and use.
Forensic identification of CITES protected slimming cactus (Hoodia) using DNA barcoding.
Gathier, Gerard; van der Niet, Timotheus; Peelen, Tamara; van Vugt, Rogier R; Eurlings, Marcel C M; Gravendeel, Barbara
2013-11-01
Slimming cactus (Hoodia), found only in southwestern Africa, is a well-known herbal product for losing weight. Consequently, Hoodia extracts are sought-after worldwide despite a CITES Appendix II status. The failure to eradicate illegal trade is due to problems with detecting and identifying Hoodia using morphological and chemical characters. Our aim was to evaluate the potential of molecular identification of Hoodia based on DNA barcoding. Screening of nrITS1 and psbA-trnH DNA sequences from 26 accessions of Ceropegieae resulted in successful identification, while conventional chemical profiling using DLI-MS led to inaccurate detection and identification of Hoodia. The presence of Hoodia in herbal products was also successfully established using DNA sequences. A validation procedure of our DNA barcoding protocol demonstrated its robustness to changes in PCR conditions. We conclude that DNA barcoding is an effective tool for Hoodia detection and identification which can contribute to preventing illegal trade. © 2013 American Academy of Forensic Sciences.
Rolling Bearing Fault Diagnosis Based on an Improved HTT Transform
Tang, Guiji; Tian, Tian; Zhou, Chong
2018-01-01
When rolling bearing failure occurs, vibration signals generally contain different signal components, such as impulsive fault feature signals, background noise and harmonic interference signals. One of the most challenging aspects of rolling bearing fault diagnosis is how to inhibit noise and harmonic interference signals, while enhancing impulsive fault feature signals. This paper presents a novel bearing fault diagnosis method, namely an improved Hilbert time–time (IHTT) transform, by combining a Hilbert time–time (HTT) transform with principal component analysis (PCA). Firstly, the HTT transform was performed on vibration signals to derive a HTT transform matrix. Then, PCA was employed to de-noise the HTT transform matrix in order to improve the robustness of the HTT transform. Finally, the diagonal time series of the de-noised HTT transform matrix was extracted as the enhanced impulsive fault feature signal and the contained fault characteristic information was identified through further analyses of amplitude and envelope spectra. Both simulated and experimental analyses validated the superiority of the presented method for detecting bearing failures. PMID:29662013
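The PCA de-noising step lends itself to a short sketch. Below, a generic time-time matrix stands in for the HTT transform, whose exact construction is not reproduced here; the matrix is de-noised by truncated SVD, its diagonal is extracted as the enhanced feature signal, and an envelope spectrum is computed.

```python
# PCA de-noising of a time-time matrix followed by envelope analysis.
import numpy as np
from scipy.signal import hilbert

def pca_denoise(M, n_components=2):
    # Keep only the leading principal components of the matrix M.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[n_components:] = 0.0
    return U @ np.diag(s) @ Vt

# Toy "time-time" matrix: low-rank structure plus noise (illustrative).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
signal = np.sin(2 * np.pi * 30 * t) * (1 + 0.5 * np.sign(np.sin(2 * np.pi * 5 * t)))
M = np.outer(signal, signal) + 0.5 * rng.standard_normal((256, 256))

denoised = pca_denoise(M)
feature = np.diag(denoised)                 # diagonal time series
envelope = np.abs(hilbert(feature))         # envelope of the feature signal
spectrum = np.abs(np.fft.rfft(envelope))    # envelope spectrum for diagnosis
```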
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1985-01-01
The application of the failure detection filter to the detection and identification of aircraft control element failures was evaluated in a linear digital simulation of the longitudinal dynamics of a B-737 aircraft. Simulation results show that with a simple correlator and threshold detector used to process the filter residuals, the failure detection performance is seriously degraded by the effects of turbulence.
An object detection and tracking system for unmanned surface vehicles
NASA Astrophysics Data System (ADS)
Yang, Jian; Xiao, Yang; Fang, Zhiwen; Zhang, Naiwen; Wang, Li; Li, Tao
2017-10-01
Object detection and tracking are critical parts of unmanned surface vehicles (USV) for achieving automatic obstacle avoidance. Off-the-shelf object detection methods have achieved impressive accuracy on public datasets, though they still encounter bottlenecks in practice, such as high time consumption and low detection quality. In this paper, we propose a novel system for USV, which is able to locate objects more accurately while being fast and stable simultaneously. Firstly, we employ Faster R-CNN to acquire several initial raw bounding boxes. Secondly, the image is segmented into superpixels. For each initial box, the superpixels inside are grouped into a whole according to a combination strategy, and a new box is thereafter generated as the circumscribed bounding box of the final superpixel. Thirdly, we utilize KCF to track these objects over several frames; Faster R-CNN is then used to re-detect objects inside tracked boxes to prevent tracking failure as well as to remove empty boxes. Finally, we utilize Faster R-CNN to detect objects in the next image, and refine object boxes by repeating the second module of our system. The experimental results demonstrate that our system is fast, robust and accurate, which can be applied to USV in practice.
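A minimal sketch of the detect-then-track loop follows, assuming OpenCV's KCF tracker (cv2.TrackerKCF_create, available in opencv-contrib builds; some versions expose it under cv2.legacy) and a placeholder detect() standing in for the Faster R-CNN and superpixel stages, which are not reproduced here.

```python
# Periodic re-detection to recover from tracking failures and drop empty boxes.
import cv2

def detect(frame):
    # Placeholder for the Faster R-CNN + superpixel refinement stage;
    # returns a list of (x, y, w, h) boxes. Hard-coded for illustration.
    return [(100, 80, 60, 40)]

cap = cv2.VideoCapture("usv_camera.mp4")   # hypothetical input file
trackers, frame_idx, REDETECT_EVERY = [], 0, 10

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % REDETECT_EVERY == 0:
        # Re-detect periodically to re-initialize the trackers.
        trackers = []
        for box in detect(frame):
            t = cv2.TrackerKCF_create()
            t.init(frame, box)
            trackers.append(t)
    else:
        # Keep only trackers whose update succeeded on this frame.
        trackers = [t for t in trackers if t.update(frame)[0]]
    frame_idx += 1
```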
Vegter, Eline L; Ovchinnikova, Ekaterina S; Silljé, Herman H W; Meems, Laura M G; van der Pol, Atze; van der Velde, A Rogier; Berezikov, Eugene; Voors, Adriaan A; de Boer, Rudolf A; van der Meer, Peter
2017-01-01
We recently identified a set of plasma microRNAs (miRNAs) that are downregulated in patients with heart failure in comparison with control subjects. To better understand their meaning and function, we sought to validate these circulating miRNAs in 3 different well-established rat and mouse heart failure models, and correlated the miRNAs to parameters of cardiac function. The previously identified let-7i-5p, miR-16-5p, miR-18a-5p, miR-26b-5p, miR-27a-3p, miR-30e-5p, miR-199a-3p, miR-223-3p, miR-423-3p, miR-423-5p and miR-652-3p were measured by means of quantitative real time polymerase chain reaction (qRT-PCR) in plasma samples of 8 homozygous TGR(mREN2)27 (Ren2) transgenic rats and 8 (control) Sprague-Dawley rats, 6 mice with angiotensin II-induced heart failure (AngII) and 6 control mice, and 8 mice with ischemic heart failure and 6 controls. Circulating miRNA levels were compared between the heart failure animals and healthy controls. Ren2 rats, AngII mice and mice with ischemic heart failure showed clear signs of heart failure, exemplified by increased left ventricular and lung weights, elevated end-diastolic left ventricular pressures, increased expression of cardiac stress markers and reduced left ventricular ejection fraction. All miRNAs were detectable in plasma from rats and mice. No significant differences were observed between the circulating miRNAs in heart failure animals when compared to the healthy controls (all P>0.05) and no robust associations with cardiac function could be found. The previous observation that miRNAs circulate in lower levels in human patients with heart failure could not be validated in well-established rat and mouse heart failure models. These results question the translation of data on human circulating miRNA levels to experimental models, and vice versa the validity of experimental miRNA data for human heart failure.
Development and Evaluation of Fault-Tolerant Flight Control Systems
NASA Technical Reports Server (NTRS)
Song, Yong D.; Gupta, Kajal (Technical Monitor)
2004-01-01
The research is concerned with developing a new approach to enhancing fault tolerance of flight control systems. The original motivation for fault-tolerant control comes from the need for safe operation of control elements (e.g. actuators) in the event of hardware failures in high reliability systems. One such example is a modern space vehicle subjected to actuator/sensor impairments. A major task in flight control is to revise the control policy to balance impairment detectability and to achieve sufficient robustness. This involves careful selection of the types and parameters of the controllers and the impairment detecting filters used. It also involves a decision, upon the identification of some failures, on whether and how a control reconfiguration should take place in order to maintain a certain system performance level. In this project a new flight dynamics model under uncertain flight conditions is considered, in which the effects of both ramp and jump faults are reflected. Stabilization algorithms based on neural networks and adaptive methods are derived. The control algorithms are shown to be effective in dealing with uncertain dynamics due to external disturbances and unpredictable faults. The overall strategy is easy to set up and requires much less computation than other strategies. Computer simulation software was developed, and a series of simulation studies has been conducted under varying flight conditions.
Redundant Design in Interdependent Networks
2016-01-01
Modern infrastructure networks are often coupled together and thus can be modeled as interdependent networks. Overload and interdependence effects make interdependent networks more fragile when suffering from attacks. Existing research has primarily concentrated on the cascading failure process of interdependent networks without load, or the robustness of an isolated network with load. Only limited research has been done on the cascading failure process caused by overload in interdependent networks. Redundant design is a primary approach to enhance the reliability and robustness of a system. In this paper, we propose two redundancy methods, node back-up and dependency redundancy, and the experimental results indicate that both measures are effective and inexpensive. Two detailed models of redundant design are introduced based on the non-linear load-capacity model. Based on the attributes and historical failure distribution of nodes, we introduce three static selection strategies (Random-based, Degree-based, and Initial load-based) and a dynamic strategy, HFD (historical failure distribution), to identify which nodes should be given back-up priority. In addition, we consider the cost and efficiency of different redundancy proportions to determine the best proportion with maximal enhancement and minimal cost. Experiments on interdependent networks demonstrate that the combination of HFD and dependency redundancy is an effective and preferred measure to implement redundant design on interdependent networks. The results suggest that the redundant design proposed in this paper can permit construction of highly robust interactive networked systems. PMID:27764174
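As a rough illustration of overload cascades with node back-up, the sketch below uses a linear load-capacity rule and betweenness centrality as a load proxy; the paper's non-linear load-capacity model, the interdependent coupling, and the HFD strategy are not reproduced.

```python
# Overload cascade on a single network, with backed-up nodes exempt from failure.
import networkx as nx

def cascade(G, capacity, backed_up, seed):
    failed = {seed} if seed not in backed_up else set()
    changed = bool(failed)
    while changed:
        changed = False
        H = G.subgraph(n for n in G if n not in failed)
        load = nx.betweenness_centrality(H)     # load proxy: betweenness
        for n, l in load.items():
            if l > capacity[n] and n not in backed_up:
                failed.add(n)                   # node overloads and fails
                changed = True
    return failed

G = nx.barabasi_albert_graph(100, 2, seed=1)
alpha = 0.3                                     # tolerance margin
base = nx.betweenness_centrality(G)
capacity = {n: (1 + alpha) * base[n] for n in G}
# Back up the highest-degree nodes (a Degree-based static strategy).
backed_up = set(sorted(G, key=G.degree, reverse=True)[:10])
print(len(cascade(G, capacity, backed_up, seed=0)))
```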
Robustness of risk maps and survey networks to knowledge gaps about a new invasive pest.
Yemshanov, Denys; Koch, Frank H; Ben-Haim, Yakov; Smith, William D
2010-02-01
In pest risk assessment it is frequently necessary to make management decisions regarding emerging threats under severe uncertainty. Although risk maps provide useful decision support for invasive alien species, they rarely address knowledge gaps associated with the underlying risk model or how they may change the risk estimates. Failure to recognize uncertainty leads to risk-ignorant decisions and miscalculation of expected impacts as well as the costs required to minimize these impacts. Here we use the information gap concept to evaluate the robustness of risk maps to uncertainties in key assumptions about an invading organism. We generate risk maps with a spatial model of invasion that simulates potential entries of an invasive pest via international marine shipments, their spread through a landscape, and establishment on a susceptible host. In particular, we focus on the question of how much uncertainty in risk model assumptions can be tolerated before the risk map loses its value. We outline this approach with an example of a forest pest recently detected in North America, Sirex noctilio Fabricius. The results provide a spatial representation of the robustness of predictions of S. noctilio invasion risk to uncertainty and show major geographic hotspots where the consideration of uncertainty in model parameters may change management decisions about a new invasive pest. We then illustrate how the dependency between the extent of uncertainties and the degree of robustness of a risk map can be used to select a surveillance network design that is most robust to knowledge gaps about the pest.
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Wagner, E.; Motyka, P.; Hall, S. R.
1985-01-01
The performance of the detection filter in detecting and isolating aircraft control surface and actuator failures is evaluated. The basic detection filter theory assumption of no direct input-output coupling is violated in this application due to the use of acceleration measurements for detecting and isolating failures. With this coupling, residuals produced by control surface failures may only be constrained to a known plane rather than to a single direction. A detection filter design with such planar failure signatures is presented, with the design issues briefly addressed. In addition, a modification to constrain the residual to a single known direction even with direct input-output coupling is also presented. Both the detection filter and the modification are tested using a nonlinear aircraft simulation. While no thresholds were selected, both filters demonstrated an ability to detect control surface and actuator failures. Failure isolation may be a problem if there are several control surfaces which produce similar effects on the aircraft. In addition, the detection filter was sensitive to wind turbulence and modeling errors.
IRAC Full-Scale Flight Testbed Capabilities
NASA Technical Reports Server (NTRS)
Lee, James A.; Pahle, Joseph; Cogan, Bruce R.; Hanson, Curtis E.; Bosworth, John T.
2009-01-01
Overview: Provide validation of adaptive control law concepts through full-scale flight evaluation in a representative avionics architecture. Develop an understanding of the dynamics of current vehicles in damaged and upset conditions. Real-world conditions include: a) turbulence, sensor noise, and feedback biases; and b) coupling between the pilot and the adaptive system. Simulated damage includes: 1) "B" matrix (surface) failures; and 2) "A" matrix failures. Evaluate robustness of control systems to anticipated and unanticipated failures.
Yazdani, Sahar; Haeri, Mohammad
2017-11-01
In this work, we study the flocking problem of multi-agent systems with uncertain dynamics subject to actuator failure and external disturbances. Under some standard assumptions, we propose a robust adaptive fault-tolerant protocol that compensates for actuator bias faults, partial loss of actuator effectiveness, model uncertainties, and external disturbances. Under the designed protocol, convergence of the agents' velocities to that of the virtual leader is guaranteed, while connectivity preservation of the network and collision avoidance among agents are ensured as well. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Levin, Daniel T; Drivdahl, Sarah B; Momen, Nausheen; Beck, Melissa R
2002-12-01
Recently, a number of experiments have emphasized the degree to which subjects fail to detect large changes in visual scenes. This finding, referred to as "change blindness," is often considered surprising because many people have the intuition that such changes should be easy to detect. Prior work documented this intuition by showing that the majority of subjects believe they would notice changes that are actually very rarely detected. Thus subjects exhibit a metacognitive error we refer to as "change blindness blindness" (CBB). Here, we test whether CBB is caused by a misestimation of the perceptual experience associated with visual changes and show that it persists even when the pre- and postchange views are separated by long delays. In addition, subjects overestimate their change detection ability both when the relevant changes are illustrated by still pictures, and when they are illustrated using videos showing the changes occurring in real time. We conclude that CBB is a robust phenomenon that cannot be accounted for by failure to understand the specific perceptual experience associated with a change. Copyright 2002 Elsevier Science (USA)
Wang, Baofeng; Qi, Zhiquan; Chen, Sizhong; Liu, Zhaodu; Ma, Guocheng
2017-01-01
Vision-based vehicle detection is an important issue for advanced driver assistance systems. In this paper, we present an improved multi-vehicle detection and tracking method using a cascade Adaboost classifier and an Adaptive Kalman filter (AKF) with target identity awareness. A cascade Adaboost classifier using Haar-like features was built for vehicle detection, followed by a more comprehensive verification process which refines the vehicle hypothesis in terms of both location and dimension. In vehicle tracking, each vehicle is tracked with an independent identity by an Adaptive Kalman filter in collaboration with a data association approach. The AKF adaptively adjusts the measurement and process noise covariances through on-line stochastic modelling to compensate for changes in the dynamics. The data association correctly assigns different detections to tracks using the global nearest neighbour (GNN) algorithm while considering local validation. During tracking, a temporal-context-based track management was proposed to decide whether to initiate, maintain or terminate the tracks of different objects, thus suppressing sparse false alarms and compensating for temporary detection failures. Finally, the proposed method was tested on various challenging real roads, and the experimental results showed that the vehicle detection performance was greatly improved, with higher accuracy and robustness. PMID:28296902
Automatically Detecting Failures in Natural Language Processing Tools for Online Community Text.
Park, Albert; Hartzler, Andrea L; Huh, Jina; McDonald, David W; Pratt, Wanda
2015-08-31
The prevalence and value of patient-generated health text are increasing, but processing such text remains problematic. Although existing biomedical natural language processing (NLP) tools are appealing, most were developed to process clinician- or researcher-generated text, such as clinical notes or journal articles. In addition to being constructed for different types of text, other challenges of using existing NLP include constantly changing technologies, source vocabularies, and characteristics of text. These continuously evolving challenges warrant the need for applying low-cost systematic assessment. However, the primarily accepted evaluation method in NLP, manual annotation, requires tremendous effort and time. The primary objective of this study is to explore an alternative approach: using low-cost, automated methods to detect failures (eg, incorrect boundaries, missed terms, mismapped concepts) when processing patient-generated text with existing biomedical NLP tools. We first characterize common failures that NLP tools can make in processing online community text. We then demonstrate the feasibility of our automated approach in detecting these common failures using one of the most popular biomedical NLP tools, MetaMap. Using 9657 posts from an online cancer community, we explored our automated failure detection approach in two steps: (1) to characterize the failure types, we first manually reviewed MetaMap's commonly occurring failures, grouped the inaccurate mappings into failure types, and then identified causes of the failures through iterative rounds of manual review using open coding; and (2) to automatically detect these failure types, we then explored combinations of existing NLP techniques and dictionary-based matching for each failure cause. Finally, we manually evaluated the automatically detected failures. From our manual review, we characterized three types of failure: (1) boundary failures, (2) missed term failures, and (3) word ambiguity failures. Within these three failure types, we discovered 12 causes of inaccurate mappings of concepts. We used automated methods to detect almost half of MetaMap's 383,572 mappings as problematic. Word sense ambiguity failure was the most widely occurring, comprising 82.22% of failures. Boundary failure was the second most frequent, amounting to 15.90% of failures, while missed term failures were the least common, making up 1.88% of failures. The automated failure detection achieved precision, recall, accuracy, and F1 score of 83.00%, 92.57%, 88.17%, and 87.52%, respectively. We illustrate the challenges of processing patient-generated online health community text and characterize failures of NLP tools on this patient-generated health text, demonstrating the feasibility of our low-cost approach to automatically detect those failures. Our approach shows the potential for scalable and effective solutions to automatically assess the constantly evolving NLP tools and source vocabularies to process patient-generated text.
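As a check on the reported figures, the F1 score follows directly from the stated precision and recall:

```latex
F_1 = \frac{2PR}{P+R}
    = \frac{2 \times 0.8300 \times 0.9257}{0.8300 + 0.9257}
    \approx 0.8752
```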
Cascade defense via routing in complex networks
NASA Astrophysics Data System (ADS)
Xu, Xiao-Lan; Du, Wen-Bo; Hong, Chen
2015-05-01
As cascading failures in networked traffic systems become more and more serious, research on cascade defense in complex networks has become a hotspot in recent years. In this paper, we propose a traffic-based cascading failure model, in which each packet in the network has its own source and destination. When a cascade is triggered, packets are redistributed according to a given routing strategy. Here, a global hybrid (GH) routing strategy, which uses dynamic information (the queue length) and static information (node degree), is proposed to defend the network against cascading failures. Comparing the GH strategy with the shortest path (SP), efficient routing (ER) and global dynamic (GD) routing strategies, we find that the GH strategy is more effective than the other routing strategies in improving network robustness against cascading failures. Our work provides insight into the robustness of networked traffic systems.
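One plausible form of such a hybrid routing weight is sketched below, blending dynamic queue length with static node degree through a mixing parameter h; the exact functional form and parameter values of the GH strategy in the paper may differ.

```python
# Hybrid routing: path cost mixes per-node queue length (dynamic) and degree (static).
import networkx as nx

def gh_path(G, queue, src, dst, h=0.5):
    def weight(u, v, d):
        # Cost of stepping onto node v; h trades off congestion vs. hub avoidance.
        return h * queue[v] + (1 - h) * G.degree(v)
    return nx.shortest_path(G, src, dst, weight=weight)

G = nx.barabasi_albert_graph(50, 2, seed=2)
queue = {n: 0 for n in G}    # would be updated as packets queue up in practice
print(gh_path(G, queue, 0, 7))
```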
Lanying Lin; Sheng He; Feng Fu; Xiping Wang
2015-01-01
Wood failure percentage (WFP) is an important index for evaluating the bond strength of plywood. Currently, WFP is assessed by visual inspection, which lacks efficiency. To improve on this, image processing methods are applied to wood failure detection. The present study used thresholding and K-means clustering algorithms in wood failure detection...
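A minimal sketch of K-means-based wood failure segmentation follows, assuming a grayscale bond-line image in which exposed wood fibre is brighter than adhesive; the cluster count and the brightness assumption are illustrative, not taken from the study.

```python
# Two-cluster K-means on pixel intensities; WFP = share of "wood" pixels.
import numpy as np
from sklearn.cluster import KMeans

def wood_failure_percentage(gray_image):
    pixels = gray_image.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
    # Assume the brighter cluster corresponds to exposed wood fibre.
    wood_label = int(np.argmax(km.cluster_centers_.ravel()))
    return 100.0 * np.mean(km.labels_ == wood_label)

# Synthetic test image: ~40% bright "wood" pixels plus noise.
rng = np.random.default_rng(0)
img = np.where(rng.random((64, 64)) < 0.4, 200, 60) + rng.normal(0, 5, (64, 64))
print(f"WFP = {wood_failure_percentage(img):.1f}%")   # close to 40%
```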
Robustness of power systems under a democratic-fiber-bundle-like model
NASA Astrophysics Data System (ADS)
Yaǧan, Osman
2015-06-01
We consider a power system with N transmission lines whose initial loads (i.e., power flows) L_1, ..., L_N are independent and identically distributed with P_L(x) = P[L ≤ x]. The capacity C_i defines the maximum flow allowed on line i and is assumed to be given by C_i = (1 + α)L_i, with α > 0. We study the robustness of this power system against random attacks (or failures) that target a p fraction of the lines, under a democratic fiber-bundle-like model. Namely, when a line fails, the load it was carrying is redistributed equally among the remaining lines. Our contributions are as follows. (i) We show analytically that the final breakdown of the system always takes place through a first-order transition at the critical attack size p* = 1 − E[L] / max_x ( P[L > x] (αx + E[L | L > x]) ), where E[·] is the expectation operator; (ii) we derive conditions on the distribution P_L(x) for which the first-order breakdown of the system occurs abruptly without any preceding diverging rate of failure; (iii) we provide a detailed analysis of the robustness of the system under three specific load distributions (uniform, Pareto, and Weibull), showing that with the minimum load L_min and mean load E[L] fixed, the Pareto distribution is the worst (in terms of robustness) among the three, whereas the Weibull distribution is the best when its shape parameter is selected relatively large; (iv) we provide numerical results that confirm our mean-field analysis; and (v) we show that p* is maximized when the load distribution is a Dirac delta function centered at E[L], i.e., when all lines carry the same load. This last finding is particularly surprising given that heterogeneity is known to lead to high robustness against random failures in many other systems.
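The equal-redistribution dynamics are easy to simulate. The Monte Carlo sketch below attacks a p fraction of lines, spreads the failed lines' initial loads equally over the survivors (equivalent, by load conservation, to redistributing the loads they were carrying), and iterates to the final surviving fraction; all parameter values are illustrative.

```python
# Democratic fiber-bundle-like cascade with capacities C_i = (1 + alpha) * L_i.
import numpy as np

def surviving_fraction(loads, alpha, p, rng):
    n = len(loads)
    capacity = (1 + alpha) * loads
    alive = np.ones(n, dtype=bool)
    attacked = rng.choice(n, size=int(p * n), replace=False)
    alive[attacked] = False
    extra = loads[attacked].sum()          # total load to spread over survivors
    while True:
        load_now = loads + extra / alive.sum()
        newly_failed = alive & (load_now > capacity)
        if not newly_failed.any():
            return alive.sum() / n
        extra += loads[newly_failed].sum()
        alive[newly_failed] = False
        if not alive.any():
            return 0.0                     # total breakdown

rng = np.random.default_rng(0)
loads = rng.uniform(0.5, 1.5, size=100_000)   # uniform load distribution
for p in (0.1, 0.3, 0.5):
    print(p, surviving_fraction(loads, alpha=0.6, p=p, rng=rng))
```

With the uniform loads used here (E[L] = 1) and α = 0.6, the formula above gives p* = 1 − 1/1.3 ≈ 0.23, so the p = 0.1 attack leaves most lines intact while p = 0.3 and p = 0.5 collapse the system.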
NASA Astrophysics Data System (ADS)
Hsieh, Fu-Shiung
2011-03-01
Design of robust supervisory controllers for manufacturing systems with unreliable resources has received significant attention recently. Robustness analysis provides an alternative way to analyse a perturbed system so as to quickly respond to resource failures. Although we have analysed the robustness properties of several subclasses of ordinary Petri nets (PNs), analysis for non-ordinary PNs has not been done. Non-ordinary PNs have weighted arcs and have the advantage of compactly modelling operations requiring multiple parts or resources. In this article, we consider a class of flexible assembly/disassembly manufacturing systems and propose a non-ordinary flexible assembly/disassembly Petri net (NFADPN) model for this class of systems. As the class of flexible assembly/disassembly manufacturing systems can be regarded as the integration and interactions of a set of assembly/disassembly subprocesses, a bottom-up approach is adopted in this article to construct the NFADPN models. Due to the routing flexibility in NFADPN, there may exist different ways to accomplish the tasks. To characterise them, we propose the concept of completely connected subprocesses. As long as there exists a set of completely connected subprocesses for a given product type, production of that type can be maintained without requiring the whole NFADPN to be live. To take advantage of the alternative routes without enforcing liveness for the whole system, we generalise the concept of persistent production, proposed previously, to NFADPN. We propose a condition for persistent production based on the concept of completely connected subprocesses. We extend robustness analysis to NFADPN by exploiting its structure. We identify several patterns of resource failures and characterise the conditions to maintain operation in the presence of resource failures.
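To see why weighted arcs model multi-part operations compactly, here is a minimal sketch of weighted-arc (non-ordinary) Petri net firing; the net is invented for illustration and does not reproduce the NFADPN structure.

```python
# Weighted-arc Petri net: a transition consumes pre-arc weights and
# produces post-arc weights in one firing.
def enabled(marking, pre):
    return all(marking[p] >= w for p, w in pre.items())

def fire(marking, pre, post):
    assert enabled(marking, pre), "transition not enabled"
    m = dict(marking)
    for p, w in pre.items():
        m[p] -= w
    for p, w in post.items():
        m[p] = m.get(p, 0) + w
    return m

# Assembly needs 2 parts of type A, 1 of type B, and a free robot.
pre = {"A": 2, "B": 1, "robot": 1}
post = {"assembly": 1, "robot": 1}     # robot is released after the operation
marking = {"A": 3, "B": 2, "robot": 1, "assembly": 0}
print(fire(marking, pre, post))        # {'A': 1, 'B': 1, 'robot': 1, 'assembly': 1}
```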
A geometric approach to failure detection and identification in linear systems
NASA Technical Reports Server (NTRS)
Massoumnia, M. A.
1986-01-01
Using concepts of (C,A)-invariant and unobservability (complementary observability) subspaces, a geometric formulation of the failure detection and identification filter problem is stated. Using these geometric concepts, it is shown that it is possible to design a causal linear time-invariant processor that can be used to detect and uniquely identify a component failure in a linear time-invariant system, under either of two assumptions: (1) the components can fail simultaneously, or (2) the components can fail only one at a time. In addition, a geometric formulation of Beard's failure detection filter problem is stated. This new formulation completely clarifies the concepts of output separability and mutual detectability introduced by Beard and also exploits the dual relationship between a restricted version of the failure detection and identification problem and the control decoupling problem. Moreover, the frequency domain interpretation of the results is used to relate the concept of failure sensitive observers to the generalized parity relations introduced by Chow. This interpretation unifies the various failure detection and identification concepts and design procedures.
Robust Stability and Control of Multi-Body Ground Vehicles with Uncertain Dynamics and Failures
2010-01-01
Attack Vulnerability of Network Controllability.
Lu, Zhe-Ming; Li, Xin-Feng
2016-01-01
Controllability of complex networks has attracted much attention, and understanding the robustness of network controllability against potential attacks and failures is of practical significance. In this paper, we systematically investigate the attack vulnerability of network controllability for the canonical model networks as well as the real-world networks subject to attacks on nodes and edges. The attack strategies are selected based on degree and betweenness centralities calculated for either the initial network or the current network during the removal, among which random failure is as a comparison. It is found that the node-based strategies are often more harmful to the network controllability than the edge-based ones, and so are the recalculated strategies than their counterparts. The Barabási-Albert scale-free model, which has a highly biased structure, proves to be the most vulnerable of the tested model networks. In contrast, the Erdős-Rényi random model, which lacks structural bias, exhibits much better robustness to both node-based and edge-based attacks. We also survey the control robustness of 25 real-world networks, and the numerical results show that most real networks are control robust to random node failures, which has not been observed in the model networks. And the recalculated betweenness-based strategy is the most efficient way to harm the controllability of real-world networks. Besides, we find that the edge degree is not a good quantity to measure the importance of an edge in terms of network controllability.
Real time health monitoring and control system methodology for flexible space structures
NASA Astrophysics Data System (ADS)
Jayaram, Sanjay
This dissertation is concerned with near real-time autonomous health monitoring of flexible space structures. The dynamics of multi-body flexible systems are uncertain due to factors such as high non-linearity, the contribution of higher modal frequencies, high dimensionality, multiple inputs and outputs, operational constraints, and unexpected failures of sensors and/or actuators. Hence, a systematic framework for developing a high-fidelity dynamic model of a flexible structural system is needed. The fault detection mechanism, an integrated part of an autonomous health monitoring system, comprises the detection of abnormalities in the sensors and/or actuators and the correction of these detected faults where possible. Actuator faults are rectified by applying a robust control law together with robust measures capable of detecting failed actuators and recovering or replacing them. The fault-tolerant concept applied to the sensors takes the form of an Extended Kalman Filter (EKF). The EKF weighs the information coming from multiple sensors (redundant sensors used to measure the same information), automatically identifies the faulty sensors, and forms the best estimate from the remaining sensors. The mechanization comprises instrumenting flexible deployable panels (a solar array) with multiple angular position and rate sensors connected to a data acquisition system. The sensors give position and rate information for the solar panel in all three axes (i.e. roll, pitch and yaw). The position data correspond to the steady-state response, and the rate data give better insight into the transient response of the system. This is a critical factor for real-time autonomous health monitoring. MATLAB (and/or C++) software will be used for high-fidelity modeling and the fault-tolerant mechanism.
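A minimal sketch of the redundant-sensor screening idea follows: residual gating plus inverse-variance fusion stands in for the EKF mechanism described above, which is not reproduced here, and all numbers are illustrative.

```python
# Screen redundant sensors by normalized residual, then fuse the survivors.
import numpy as np

def fuse(measurements, variances, estimate, gate=3.0):
    z, r = np.asarray(measurements, float), np.asarray(variances, float)
    residual = np.abs(z - estimate) / np.sqrt(r)
    ok = residual < gate                  # flag sensors with outlier residuals
    w = ok / r                            # inverse-variance weights, zero if faulty
    # Assumes at least one sensor passes the gate.
    return (w @ z) / w.sum(), ~ok

estimate = 1.00                           # prior estimate of panel angle (rad)
fused, faulty = fuse([1.02, 0.98, 4.70], [0.01, 0.01, 0.01], estimate)
print(fused, faulty)                      # third sensor is flagged as faulty
```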
APN Perceptions of Telemedicine and Homecare for Heart Failure Patients
ERIC Educational Resources Information Center
Martinez, Elea Ann
2017-01-01
Heart failure (HF) is a preventable and serious life threatening disease. Robust information is available on patient satisfaction when telehealth is used to manage chronic illness, but minimal information is available on Advance Practice Nurse (APN) satisfaction when APNs use this modality to deliver remote healthcare. The purpose of this project…
λ-augmented tree for robust data collection in Advanced Metering Infrastructure
Kamto, Joseph; Qian, Lijun; Li, Wei; ...
2016-01-01
In this study, a tree multicast configuration of smart meters (SMs) can maintain connectivity and meet the latency requirements of the Advanced Metering Infrastructure (AMI). However, such a topology is extremely weak, as any single failure suffices to break its connectivity. On the other hand, the impact of a SM node failure can be more or less significant: a noncut SM node will have a limited local impact compared to a cut SM node that will break the network connectivity. In this work, we design a highly connected tree with a set of backup links to minimize the weakness of the tree topology of SMs. A topology repair scheme is proposed to address the impact of a SM node failure on the connectivity of the augmented tree network. It relies on a loop detection scheme to define the criticality of a SM node and specifically targets cut SM nodes by selecting backup parent SMs to cover their children. Detailed algorithms to create such an AMI tree and related theoretical and complexity analysis are provided, with insightful simulation results: sufficient redundancy is provided to alleviate data loss at the cost of signaling overhead. It is observed, however, that the biconnected tree provides the best compromise between the two.
Nishii, Nobuhiro; Miyoshi, Akihito; Kubo, Motoki; Miyamoto, Masakazu; Morimoto, Yoshimasa; Kawada, Satoshi; Nakagawa, Koji; Watanabe, Atsuyuki; Nakamura, Kazufumi; Morita, Hiroshi; Ito, Hiroshi
2018-03-01
Remote monitoring (RM) has been advocated as the new standard of care for patients with cardiovascular implantable electronic devices (CIEDs). RM has allowed the early detection of adverse clinical events, such as arrhythmia, lead failure, and battery depletion. However, lead failure has often been identified only by arrhythmic events, not by impedance abnormalities. We aimed to compare the usefulness of arrhythmic events with conventional impedance abnormalities for identifying lead failure in CIED patients followed by RM. CIED patients in 12 hospitals have been followed by the RM center at Okayama University Hospital. All transmitted data have been analyzed and summarized. From April 2009 to March 2016, 1,873 patients were followed by the RM center. During the mean follow-up period of 775 days, 42 lead failure events (atrial lead 22, right ventricular pacemaker lead 5, implantable cardioverter defibrillator [ICD] lead 15) were detected. The proportion of lead failures detected only by arrhythmic events, which were not detected by conventional impedance abnormalities, was significantly higher than that detected by impedance abnormalities (arrhythmic events 76.2%, 95% CI: 60.5-87.9%; impedance abnormalities 23.8%, 95% CI: 12.1-39.5%). Twenty-seven events (64.7%) were detected without any alert. Of 15 patients with ICD lead failure, none experienced inappropriate therapy. RM can detect lead failure early, before adverse clinical events occur. However, CIEDs often diagnose lead failure as mere arrhythmic events without any warning. Thus, to detect lead failure earlier, careful human analysis of arrhythmic events is useful. © 2017 Wiley Periodicals, Inc.
Detection of system failures in multi-axes tasks [pilot monitored instrument approach]
NASA Technical Reports Server (NTRS)
Ephrath, A. R.
1975-01-01
The effects of the pilot's participation mode in the control task on workload level and failure detection performance were examined for a low-visibility landing approach. It is found that the participation mode had a strong effect on the pilot's workload, the induced workload being lowest when the pilot acted as a monitoring element during a coupled approach and highest when the pilot was an active element in the control loop. The effects of workload and participation mode on failure detection were separated. The participation mode was shown to have a dominant effect on failure detection performance, with a failure in a monitored (coupled) axis being detected significantly faster than a comparable failure in a manually controlled axis.
On-Board Particulate Filter Failure Prevention and Failure Diagnostics Using Radio Frequency Sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sappok, Alex; Ragaller, Paul; Herman, Andrew
The increasing use of diesel and gasoline particulate filters requires advanced on-board diagnostics (OBD) to prevent and detect filter failures and malfunctions. Early detection of upstream (engine-out) malfunctions is paramount to preventing irreversible damage to downstream aftertreatment system components. Such early detection can mitigate failures of the particulate filter that result in the escape of emissions exceeding permissible limits, and can extend the component life. However, despite best efforts at early detection and filter failure prevention, the OBD system must also be able to detect filter failures when they occur. In this study, radio frequency (RF) sensors were used to directly monitor the particulate filter state of health for both gasoline particulate filter (GPF) and diesel particulate filter (DPF) applications. The testing included controlled engine dynamometer evaluations, which characterized soot slip from various filter failure modes, as well as on-road fleet vehicle tests. The results show a high sensitivity to detect conditions resulting in soot leakage from the particulate filter, as well as potential for direct detection of structural failures including internal cracks and melted regions within the filter media itself. Furthermore, the measurements demonstrate, for the first time, the capability to employ a direct and continuous monitor of particulate filter state of health to both prevent and detect potential failure conditions in the field.
Probabilistic BPRRC: Robust Change Detection against Illumination Changes and Background Movements
NASA Astrophysics Data System (ADS)
Yokoi, Kentaro
This paper presents Probabilistic Bi-polar Radial Reach Correlation (PrBPRRC), a change detection method that is robust against illumination changes and background movements. Most traditional change detection methods are robust against either illumination changes or background movements; BPRRC is one of the illumination-robust change detection methods. We introduce a probabilistic background texture model into BPRRC and add robustness against background movements, including foreground invasions such as moving cars, walking people, swaying trees, and falling snow. We show the superiority of PrBPRRC in environments with illumination changes and background movements using three public datasets and one private dataset: ATON Highway data, Karlsruhe traffic sequence data, PETS 2007 data, and Walking-in-a-room data.
NASA Astrophysics Data System (ADS)
Mahmood, Faleh H.; Kadhim, Hussein T.; Resen, Ali K.; Shaban, Auday H.
2018-05-01
Failures such as air gap irregularity and rubbing or scraping between the stator and rotor of the generator arise unavoidably and may have severe consequences for a wind turbine. Therefore, more attention should be paid to detecting and identifying bearing failures in wind turbines to improve operational reliability. The current paper applies a power spectral density analysis method to detect inner race and outer race bearing failures in a micro wind turbine by analyzing the stator current signal of the generator. The results show that the method is well suited and effective for bearing failure detection.
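A minimal sketch of PSD-based screening of a stator current with scipy.signal.welch follows, assuming a known supply frequency and a hypothetical bearing-defect sideband; the frequencies and amplitudes are illustrative, not measured values.

```python
# Welch PSD of a synthetic stator current with one defect-related sideband.
import numpy as np
from scipy.signal import welch

fs, f_supply, f_defect = 5000, 50.0, 87.0     # Hz (assumed values)
t = np.arange(0, 10, 1 / fs)
current = np.sin(2 * np.pi * f_supply * t)
current += 0.05 * np.sin(2 * np.pi * (f_supply + f_defect) * t)  # sideband
current += 0.01 * np.random.default_rng(0).standard_normal(t.size)

f, pxx = welch(current, fs=fs, nperseg=8192)
band = (f > 100) & (f < 200)                  # search above the supply line
peak = f[band][np.argmax(pxx[band])]
print(f"dominant sideband near {peak:.1f} Hz")  # ~137 Hz = f_supply + f_defect
```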
Turbofan engine demonstration of sensor failure detection
NASA Technical Reports Server (NTRS)
Merrill, Walter C.; Delaat, John C.; Abdelwahab, Mahmood
1991-01-01
In this paper, the results of a full-scale engine demonstration of a sensor failure detection algorithm are presented. The algorithm detects, isolates, and accommodates sensor failures using analytical redundancy. The experimental hardware, including the F100 engine, is described. Demonstration results were obtained over a large portion of a typical flight envelope for the F100 engine. They include both subsonic and supersonic conditions at both medium and full (non-afterburning) power. Estimated accuracy, minimum detectable levels of sensor failures, and failure accommodation performance for an F100 turbofan engine control system are discussed.
Study of an automatic trajectory following control system
NASA Technical Reports Server (NTRS)
Vanlandingham, H. F.; Moose, R. L.; Zwicke, P. E.; Lucas, W. H.; Brinkley, J. D.
1983-01-01
It is shown that the estimator part of the Modified Partitioned Adaptive Controller (MPAC), developed for the nonlinear aircraft dynamics of a small jet transport, can adapt to sensor failures. In addition, an investigation is made into the potential usefulness of the configuration detection technique used in the MPAC, and a failure detection filter is developed that determines how a noisy plant output is associated with a line or plane characteristic of a failure. It is shown by computer simulation that the estimator part and the configuration detection part of the MPAC can readily adapt to actuator and sensor failures, and that the failure detection filter technique cannot detect actuator or sensor failures accurately for this type of system because of plant modeling errors. In addition, it is shown that the decision technique, developed for the failure detection filter, can accurately determine that the plant output is related to the characteristic line or plane in the presence of sensor noise.
A robust background regression based score estimation algorithm for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei
2016-12-01
Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to form the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is applied to the robust background and potential anomaly datasets to produce the detection output. The experimental results show that RBRSE achieves better ROC curves, AUC values, and background-anomaly separation than some of the other state-of-the-art anomaly detection methods, and is easy to implement in practice.
NASA Technical Reports Server (NTRS)
Motyka, P.
1983-01-01
A methodology is developed and applied for quantitatively analyzing the reliability of a dual, fail-operational redundant strapdown inertial measurement unit (RSDIMU). A Markov evaluation model is defined in terms of the operational states of the RSDIMU to predict system reliability. A 27-state model is defined based upon a candidate redundancy management system which can detect and isolate a spectrum of failure magnitudes. The results of parametric studies are presented which show the effect on reliability of the gyro failure rate, the gyro and accelerometer failure rates together, false alarms, the probability of failure detection, the probability of failure isolation, the probability of damage effects, and mission time. A technique is developed and evaluated for generating dynamic thresholds for detecting and isolating failures of the dual, separated IMU. Special emphasis is given to the detection of multiple, nonconcurrent failures. Digital simulation time histories are presented which show the thresholds obtained and their effectiveness in detecting and isolating sensor failures.
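For illustration, here is a toy three-state Markov reliability evaluation in the spirit of the model described above; the actual 27-state RSDIMU model and its rates are not reproduced, and all numbers are assumptions.

```python
# Discrete-time Markov reliability model: full -> degraded -> failed.
import numpy as np

lam, mu = 1e-4, 5e-5          # assumed failure rates per hour
dt, hours = 1.0, 10_000
# One-step transition matrix over states (full, degraded, failed).
P = np.array([[1 - lam * dt, lam * dt, 0.0],
              [0.0, 1 - mu * dt, mu * dt],
              [0.0, 0.0, 1.0]])

state = np.array([1.0, 0.0, 0.0])   # start fully operational
for _ in range(hours):
    state = state @ P
print("mission reliability:", state[0] + state[1])   # P(not failed)
```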
Robust Kalman filter design for predictive wind shear detection
NASA Technical Reports Server (NTRS)
Stratton, Alexander D.; Stengel, Robert F.
1991-01-01
Severe, low-altitude wind shear is a threat to aviation safety. Airborne sensors under development measure the radial component of wind along a line directly in front of an aircraft. In this paper, optimal estimation theory is used to define a detection algorithm to warn of hazardous wind shear from these sensors. To achieve robustness, a wind shear detection algorithm must distinguish threatening wind shear from less hazardous gustiness, despite variations in wind shear structure. This paper presents statistical analysis methods to refine wind shear detection algorithm robustness. Computational methods predict the ability to warn of severe wind shear and avoid false warning. Comparative capability of the detection algorithm as a function of its design parameters is determined, identifying designs that provide robust detection of severe wind shear.
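A minimal sketch of the detection idea follows: a scalar Kalman filter with an assumed random-walk wind model smooths the radial wind measurements, and an alarm is raised when the estimated along-track gradient exceeds a hazard threshold; the noise levels and threshold are illustrative, not the paper's design values.

```python
# Kalman-smoothed wind estimates with a windowed gradient (shear) alarm.
import numpy as np

def shear_alarm(wind_meas, dx, q=0.02, r=1.0, window=10, threshold=0.01):
    x, p, hist = 0.0, 1.0, []
    for z in wind_meas:
        p += q                       # predict (random-walk wind model)
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # update with radial wind measurement
        p *= (1 - k)
        hist.append(x)
        if len(hist) > window:
            shear = (hist[-1] - hist[-1 - window]) / (window * dx)
            if abs(shear) > threshold:
                return True          # estimated shear exceeds hazard level
    return False

rng = np.random.default_rng(1)
dist = np.arange(0, 3000, 30.0)                        # range bins (m)
true_wind = np.where(dist > 1500, -0.02 * (dist - 1500), 0.0)
print(shear_alarm(true_wind + rng.normal(0, 1, dist.size), dx=30.0))
```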
A Taxonomy of Fallacies in System Safety Arguments
NASA Technical Reports Server (NTRS)
Greenwell, William S.; Knight, John C.; Holloway, C. Michael; Pease, Jacob J.
2006-01-01
Safety cases are gaining acceptance as assurance vehicles for safety-related systems. A safety case documents the evidence and argument that a system is safe to operate; however, logical fallacies in the underlying argument may undermine a system's safety claims. Removing these fallacies is essential to reduce the risk of safety-related system failure. We present a taxonomy of common fallacies in safety arguments that is intended to assist safety professionals in avoiding and detecting fallacious reasoning in the arguments they develop and review. The taxonomy derives from a survey of general argument fallacies and a separate survey of fallacies in real-world safety arguments. Our taxonomy is specific to safety argumentation, and it is targeted at professionals who work with safety arguments but may lack formal training in logic or argumentation. We discuss the rationale for the selection and categorization of fallacies in the taxonomy. In addition to its applications to the development and review of safety cases, our taxonomy could also support the analysis of system failures and promote the development of more robust safety case patterns.
Methodology for balancing design and process tradeoffs for deep-subwavelength technologies
NASA Astrophysics Data System (ADS)
Graur, Ioana; Wagner, Tina; Ryan, Deborah; Chidambarrao, Dureseti; Kumaraswamy, Anand; Bickford, Jeanne; Styduhar, Mark; Wang, Lee
2011-04-01
For process development of deep-subwavelength technologies, it has become accepted practice to use model-based simulation to predict systematic and parametric failures. Increasingly, these techniques are being used by designers to ensure layout manufacturability, as an alternative to, or complement to, restrictive design rules. The benefit of model-based simulation tools in the design environment is that manufacturability problems are addressed in a design-aware way by making appropriate trade-offs, e.g., between overall chip density and manufacturing cost and yield. The paper shows how library elements and the full ASIC design flow benefit from eliminating hot spots and improving design robustness early in the design cycle. It demonstrates a path to yield optimization and first time right designs implemented in leading edge technologies. The approach described herein identifies those areas in the design that could benefit from being fixed early, leading to design updates and avoiding later design churn by careful selection of design sensitivities. This paper shows how to achieve this goal by using simulation tools incorporating various models from sparse to rigorously physical, pattern detection and pattern matching, checking and validating failure thresholds.
Malfunctions in radioactivity sensors' networks
NASA Astrophysics Data System (ADS)
Khalipova, Veronika; Damart, Guillaume; Beauzamy, Bernard; Bruna, Giovanni
2018-01-01
The capacity to promptly and efficiently detect any source of contamination of the environment (a radioactive cloud) at local and country scales is mandatory for the safe and secure exploitation of civil nuclear energy. It must rely upon a robust network of measurement devices, to be optimized against several parameters, including overall reliability, investment, and operation and maintenance costs. We show that a network can be arranged in different ways, but many of them are inadequate. Through simulations, we test the efficiency of several configurations of sensors over the same domain. The densest arrangement turns out to be the most efficient, but efficiency is further increased when sensors are non-uniformly distributed over the country, with accumulation at the borders. In the case of France, as radioactive threats are most likely to come from the east, the best solution is to densify the sensors close to the eastern border. Our approach differs from previous work because it is "failure oriented": we determine the laws of probability for all types of failures and deduce in this respect the best organization of the network.
Comprehensive, Quantitative Risk Assessment of CO2 Geologic Sequestration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lepinski, James
2013-09-30
A Quantitative Failure Modes and Effects Analysis (QFMEA) was developed to conduct comprehensive, quantitative risk assessments on CO2 capture, transportation, and sequestration or use in deep saline aquifers, enhanced oil recovery operations, or enhanced coal bed methane operations. The model identifies and characterizes potential risks; identifies the likely failure modes, causes, effects and methods of detection; lists possible risk prevention and risk mitigation steps; estimates potential damage recovery costs, mitigation costs and cost savings resulting from mitigation; and ranks (prioritizes) risks according to the probability of failure, the severity of failure, the difficulty of early failure detection, and the potential for fatalities. The QFMEA model generates the information needed for effective project risk management. Diverse project information can be integrated into a concise, common format that allows comprehensive, quantitative analysis, by a cross-functional team of experts, to determine: What can possibly go wrong? How much will damage recovery cost? How can it be prevented or mitigated? What is the cost savings or benefit of prevention or mitigation? Which risks should be given highest priority for resolution? The QFMEA model can be tailored to specific projects and is applicable to new projects as well as mature projects. The model can be revised and updated as new information becomes available. It accepts input from multiple sources, such as literature searches, site characterization, field data, computer simulations, analogues, process influence diagrams, probability density functions, financial analysis models, cost factors, and heuristic best practices manuals, and converts the information into a standardized format in an Excel spreadsheet. Process influence diagrams, geologic models, financial models, cost factors and an insurance schedule were developed to support the QFMEA model. Comprehensive, quantitative risk assessments were conducted on three (3) sites using the QFMEA model: (1) SACROC Northern Platform CO2-EOR Site in the Permian Basin, Scurry County, TX, (2) Pump Canyon CO2-ECBM Site in the San Juan Basin, San Juan County, NM, and (3) Farnsworth Unit CO2-EOR Site in the Anadarko Basin, Ochiltree County, TX. The sites were sufficiently different from each other to test the robustness of the QFMEA model.
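A minimal sketch of FMEA-style quantitative ranking follows, using the classic risk priority number (RPN = severity x occurrence x detection difficulty) as a stand-in for the QFMEA model's scoring, which is not reproduced here; the failure modes and ratings are invented for illustration.

```python
# Rank failure modes by risk priority number (RPN).
failure_modes = [
    # (name, severity 1-10, occurrence 1-10, detection difficulty 1-10)
    ("wellbore casing leak",       9, 3, 6),
    ("caprock fracture",          10, 2, 8),
    ("surface equipment failure",  5, 6, 2),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for name, s, o, d in ranked:
    print(f"RPN={s * o * d:4d}  {name}")
```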
Solving bezel reliability and CRT obsolescence
NASA Astrophysics Data System (ADS)
Schwartz, Richard J.; Bowen, Arlen R.; Knowles, Terry
2003-09-01
Scientific Research Corporation designed a Smart Multi-Function Color Display with Positive Pilot Feedback under the funding of a U.S. Navy Small Business Innovative Research program. The Smart Multi-Function Color Display can replace the obsolete monochrome Cathode Ray Tube display currently on the T-45C aircraft built by Boeing. The design utilizes a flat-panel color Active Matrix Liquid Crystal Display and TexZec's patented Touch Thru Metal bezel technology, providing both visual and biomechanical feedback to the pilot in a form, fit, and function replacement for the current T-45C display. Use of an existing color AMLCD requires the least adaptation to fill the requirements of this application, thereby minimizing the risk associated with developing a new display technology and maximizing the investment in improved user interface technology. The improved user interface uses TexZec's Touch Thru Metal technology to eliminate all of the moving parts that have traditionally limited Mean-Time-Between-Failure. The touch detection circuit consists of Commercial-Off-The-Shelf components, creating touch detection circuitry which is simple and durable. This technology provides robust switch activation and a high level of environmental immunity, both mechanical and electrical. Replacement of all the T-45C multi-function displays with this design will improve the Mean-Time-Between-Failure and drastically reduce display life cycle costs. The design methodology described in this paper can be adapted to any new or replacement display.
A synthetic biology-based device prevents liver injury in mice.
Bai, Peng; Ye, Haifeng; Xie, Mingqi; Saxena, Pratik; Zulewski, Henryk; Charpin-El Hamri, Ghislaine; Djonov, Valentin; Fussenegger, Martin
2016-07-01
The liver performs a panoply of complex activities coordinating metabolic, immunologic and detoxification processes. Despite the liver's robustness and unique self-regeneration capacity, viral infection, autoimmune disorders, fatty liver disease, alcohol abuse and drug-induced hepatotoxicity contribute to the increasing prevalence of liver failure. Liver injuries impair the clearance of bile acids from the hepatic portal vein, which leads to their spilling over into the peripheral circulation, where they activate the G-protein-coupled bile acid receptor TGR5 to initiate a variety of hepatoprotective processes. By functionally linking activation of ectopically expressed TGR5 to an artificial promoter controlling transcription of hepatocyte growth factor (HGF), we created a closed-loop synthetic signalling network that coupled liver injury-associated serum bile acid levels to expression of HGF in a self-sufficient, reversible and dose-dependent manner. After implantation of genetically engineered human cells inside auto-vascularizing, immunoprotective and clinically validated alginate-poly-(L-lysine)-alginate beads into mice, the liver-protection device detected pathologic serum bile acid levels and produced therapeutic HGF levels that protected the animals from acute drug-induced liver failure. Genetically engineered cells containing theranostic gene circuits that dynamically interface with host metabolism may provide novel opportunities for preventive, acute and chronic healthcare. Liver diseases leading to organ failure may go unnoticed, as they do not trigger symptoms or significant discomfort. We have designed a synthetic gene circuit that senses excessive bile acid levels associated with liver injuries and automatically produces a therapeutic protein in response. When integrated into mammalian cells and implanted into mice, the circuit detects the onset of liver injuries and coordinates the production of a protein pharmaceutical which prevents liver damage. Copyright © 2016 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
Procalcitonin Identifies Cell Injury, Not Bacterial Infection, in Acute Liver Failure.
Rule, Jody A; Hynan, Linda S; Attar, Nahid; Sanders, Corron; Korzun, William J; Lee, William M
2015-01-01
Because acute liver failure (ALF) patients share many clinical features with severe sepsis and septic shock, identifying bacterial infection clinically in ALF patients is challenging. Procalcitonin (PCT) has proven to be a useful marker in detecting bacterial infection. We sought to determine whether PCT discriminated between the presence and absence of infection in patients with ALF. In a retrospective analysis, data and samples from 115 ALF patients of the United States Acute Liver Failure Study Group, randomly selected from 1863 patients, were classified for disease severity and ALF etiology. Twenty uninfected chronic liver disease (CLD) subjects served as controls. Procalcitonin concentrations in most samples were elevated, with median values for all ALF groups near or above a 2.0 ng/mL cut-off that generally indicates severe sepsis. While PCT concentrations increased somewhat with apparent liver injury severity, there were no differences in PCT levels between the pre-defined severity groups: non-SIRS and SIRS groups with no documented infections, and Severe Sepsis and Septic Shock groups with documented infections (p = 0.169). PCT values from CLD patients differed from all ALF groups (median CLD PCT value 0.104 ng/mL, p ≤ 0.001). Subjects with acetaminophen (APAP) toxicity, many without evidence of infection, demonstrated median PCT >2.0 ng/mL regardless of SIRS features, while some culture-positive subjects had PCT values <2.0 ng/mL. While PCT appears to be a robust assay for detecting bacterial infection in the general population, there was poor discrimination between ALF patients with or without bacterial infection, presumably because of the massive inflammation observed. Severe hepatocyte necrosis with inflammation results in elevated PCT levels, rendering this biomarker unreliable in the ALF setting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobson, Ian; Hiskens, Ian; Linderoth, Jeffrey
Building on models of electrical power systems, and on powerful mathematical techniques including optimization, model predictive control, and simulation, this project investigated important issues related to the stable operation of power grids. A topic of particular focus was cascading failures of the power grid: simulation, quantification, mitigation, and control. We also analyzed the vulnerability of networks to component failures, and the design of networks that are responsive and robust to such failures. Numerous other related topics were investigated, including energy hubs and cascading stall of induction machines.
Monks, K; Molnár, I; Rieger, H-J; Bogáti, B; Szabó, E
2012-04-06
Robust HPLC separations lead to fewer analysis failures and better method transfer, as well as providing an assurance of quality. This work presents the systematic development of an optimal, robust, fast UHPLC method for the simultaneous assay of two APIs of an eye drop sample and their impurities, in accordance with Quality by Design principles. Chromatography software is employed to generate design spaces (Method Operable Design Regions), which are then used to determine the final method conditions and to evaluate robustness prior to validation. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Cheng, Xiang-Qin; Qu, Jing-Yuan; Yan, Zhe-Ping; Bian, Xin-Qian
2010-03-01
In order to improve the security and reliability of autonomous underwater vehicle (AUV) navigation, an H∞ robust fault-tolerant controller was designed after analyzing variations in state-feedback gain. Operating conditions and the design method were then analyzed so that the control problem could be expressed as a mathematical optimization problem. This permitted the use of linear matrix inequalities (LMI) to solve for the H∞ controller for the system. Different actuator failures were then also expressed mathematically, allowing the H∞ robust controller to accommodate these events and thus be fault-tolerant. Finally, simulation results showed that the H∞ robust fault-tolerant controller could provide precise AUV navigation control with strong robustness.
NASA Technical Reports Server (NTRS)
Morrell, Frederick R.; Bailey, Melvin L.
1987-01-01
A vector-based failure detection and isolation technique for a skewed array of two degree-of-freedom inertial sensors is developed. Failure detection is based on comparison of parity equations with a threshold, and isolation is based on comparison of logic variables which are keyed to pass/fail results of the parity test. A multi-level approach to failure detection is used to ensure adequate coverage for the flight control, display, and navigation avionics functions. Sensor error models are introduced to expose the susceptibility of the parity equations to sensor errors and physical separation effects. The algorithm is evaluated in a simulation of a commercial transport operating in a range of light to severe turbulence environments. A gyro bias-jump failure level of 0.2 deg/hr was detected and isolated properly in the light and moderate turbulence environments, but not detected in the extreme turbulence environment. An accelerometer bias-jump failure level of 1.5 milli-g was detected over all turbulence environments. For both types of inertial sensor, hard-over and null-type failures were detected in all environments without incident. The algorithm functioned without false alarms or false isolations over all turbulence environments for the runs tested.
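For readers unfamiliar with parity-equation failure detection, the following Python sketch illustrates the basic residual test for a redundant sensor array. The geometry, noise-free measurements, injected bias, and threshold are illustrative assumptions; the paper's two-degree-of-freedom tetrad and its isolation logic are not reproduced.

# Minimal parity-equation sketch for a skewed redundant sensor array (assumed).
import numpy as np

H = np.array([[1.0, 0.0, 0.0],        # measurement matrix: one row per sensor axis
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.577, 0.577, 0.577]]) # skewed redundant axis

# Rows of V span the left null space of H (V @ H = 0): the parity space.
u, _, _ = np.linalg.svd(H)
V = u[:, 3:].T

x_true = np.array([0.1, -0.2, 0.05])  # true angular rates
m = H @ x_true
m[3] += 0.5                           # inject a bias-jump failure on the skewed axis

residual = np.linalg.norm(V @ m)
threshold = 0.1                       # would be set from sensor noise statistics
print("failure detected:", residual > threshold)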
NASA Astrophysics Data System (ADS)
Yang, Hua; Zhong, Donghong; Liu, Chenyi; Song, Kaiyou; Yin, Zhouping
2018-03-01
Object tracking is still a challenging problem in computer vision, as it entails learning an effective model to account for appearance changes caused by occlusion, out-of-view motion, plane rotation, scale change, and background clutter. This paper proposes a robust visual tracking algorithm, termed DCNNCT, that combines deep convolutional neural networks (DCNNs) with correlation tracking to simultaneously address these challenges. The proposed DCNNCT algorithm utilizes a DCNN to extract the image feature of a tracked target, and the full range of information regarding each convolutional layer is used to express the image feature. Subsequently, kernelized correlation filters (CFs) in each convolutional layer are adaptively learned, and their correlation response maps are combined to estimate the location of the tracked target. To handle tracking failure, an online random ferns classifier is employed to redetect the tracked target, and a dual-threshold scheme is used to obtain the final target location by comparing the tracking result with the detection result. Finally, the change in scale of the target is determined by building scale pyramids and training a CF. Extensive experiments demonstrate that the proposed algorithm is effective at tracking, especially when evaluated using an index called the overlap rate. The DCNNCT algorithm is also highly competitive in terms of robustness with respect to state-of-the-art trackers in various challenging scenarios.
NASA Technical Reports Server (NTRS)
Shives, T. R. (Editor); Willard, W. A. (Editor)
1981-01-01
The contribution of failure detection, diagnosis, and prognosis to the energy challenge is discussed. Areas of special emphasis include energy management, techniques for failure detection in energy-related systems, improved prognostic techniques for energy-related systems, and opportunities for detection, diagnosis, and prognosis in the energy field.
Cascading failures with local load redistribution in interdependent Watts-Strogatz networks
NASA Astrophysics Data System (ADS)
Hong, Chen; Zhang, Jun; Du, Wen-Bo; Sallan, Jose Maria; Lordan, Oriol
2016-05-01
Cascading failures of loads in isolated networks have been studied extensively over the last decade. Since 2010, such research has extended to interdependent networks. In this paper, we study cascading failures with local load redistribution in interdependent Watts-Strogatz (WS) networks. The effects of rewiring probability and coupling strength on the resilience of interdependent WS networks are investigated extensively. We find that, for small values of the tolerance parameter, interdependent networks become more vulnerable as the rewiring probability increases. For larger values of the tolerance parameter, the robustness of interdependent networks first decreases and then increases as the rewiring probability increases. Coupling strength has a different impact on robustness. For low values of coupling strength, the resilience of interdependent networks decreases as coupling strength increases until a certain threshold value is reached; for values of coupling strength above this threshold, the opposite effect is observed. Our results are helpful for understanding and designing resilient interdependent networks.
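A minimal simulation in the spirit of this abstract is sketched below in Python (networkx). The degree-based initial load, the tolerance parameter alpha, and the single-node trigger are assumptions, and the interdependent coupling between two WS networks is omitted for brevity.

# Cascading failure with local load redistribution on one WS network (sketch).
import networkx as nx

def cascade_fraction(n=200, k=4, p_rewire=0.1, alpha=0.5, seed=1):
    g = nx.watts_strogatz_graph(n, k, p_rewire, seed=seed)
    load = {v: g.degree(v) for v in g}            # initial load ~ degree (assumed)
    cap = {v: (1 + alpha) * load[v] for v in g}   # capacity with tolerance alpha
    failed, frontier = {0}, {0}                   # trigger: fail node 0
    while frontier:
        overloaded = set()
        for v in frontier:
            nbrs = [u for u in g.neighbors(v) if u not in failed]
            if not nbrs:
                continue
            for u in nbrs:                        # redistribute load locally
                load[u] += load[v] / len(nbrs)
                if load[u] > cap[u]:
                    overloaded.add(u)
        failed |= overloaded
        frontier = overloaded
    return len(failed) / n

print("failed fraction:", cascade_fraction())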
Towards designing robust coupled networks
NASA Astrophysics Data System (ADS)
Schneider, Christian M.; Yazdani, Nuri; Araújo, Nuno A. M.; Havlin, Shlomo; Herrmann, Hans J.
2013-06-01
Natural and technological interdependent systems have been shown to be highly vulnerable due to cascading failures and an abrupt collapse of global connectivity under initial failure. Mitigating the risk by partial disconnection endangers their functionality. Here we propose a systematic strategy of selecting a minimum number of autonomous nodes that guarantee a smooth transition in robustness. Our method, which is based on betweenness centrality, is tested on various examples, including the famous 2003 electrical blackout of Italy. We show that, with this strategy, the necessary number of autonomous nodes can be reduced by a factor of five compared to a random choice. We also find that the transition to abrupt collapse follows tricritical scaling characterized by a set of exponents that is independent of the protection strategy.
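One plausible reading of the selection step is sketched below in Python (networkx): rank nodes by betweenness centrality and make the top fraction autonomous. The stand-in graph, the fraction, and the omission of the interdependent-failure dynamics are assumptions.

# Betweenness-based choice of autonomous nodes (simplified sketch).
import networkx as nx

def autonomous_nodes(g, fraction=0.05):
    bc = nx.betweenness_centrality(g)
    k = max(1, int(fraction * g.number_of_nodes()))
    return sorted(bc, key=bc.get, reverse=True)[:k]

g = nx.barabasi_albert_graph(500, 3, seed=7)   # stand-in for a real grid topology
print(autonomous_nodes(g)[:10])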
NASA Technical Reports Server (NTRS)
Bundick, W. Thomas
1990-01-01
A methodology for designing a failure detection and identification (FDI) system to detect and isolate control element failures in aircraft control systems is reviewed. An FDI system design for a modified B-737 aircraft resulting from this methodology is also reviewed, and the results of evaluating this system via simulation are presented. The FDI system performed well in a no-turbulence environment, but it experienced an unacceptable number of false alarms in atmospheric turbulence. An adaptive FDI system, which adjusts thresholds and other system parameters based on the estimated turbulence level, was developed and evaluated. The adaptive system performed well over all turbulence levels simulated, reliably detecting all but the smallest magnitude partially-missing-surface failures.
NASA Technical Reports Server (NTRS)
Wolf, J. A.
1978-01-01
The Highly Maneuverable Aircraft Technology (HiMAT) remotely piloted research vehicle (RPRV) uses cross-ship comparison monitoring of actuator ram positions to detect a failure in the aileron, canard, and elevator control surface servosystems. Some possible sources of nuisance trips for this failure detection technique are analyzed. A FORTRAN model of the simplex servosystems and the failure detection technique was used to provide a convenient means of changing parameters and introducing system noise. The sensitivity of the technique to differences between servosystems and operating conditions was determined. The cross-ship comparison monitoring method presently appears to be marginal in its capability to detect an actual failure and to withstand nuisance trips.
Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization.
Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona
2016-05-31
Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches exist, but their robustness is usually not addressed or investigated. The goal of this paper is to show how to robustify floor estimation when probabilistic approaches with a low number of parameters are employed. Indeed, such an approach allows building-independent estimation and lower computing power at the mobile side. Four robustified algorithms are presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We show that robustification can indeed increase the performance of RSS-based floor detection algorithms.
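As a rough illustration of the first of these algorithms, the Python sketch below computes a weighted-centroid vertical estimate from RSS values, with a simple median-based trimming step standing in for robustification. The AP heights, weight model, trimming rule, and floor height are all assumptions rather than the paper's exact formulation.

# Robustified weighted-centroid floor estimate (illustrative sketch).
import numpy as np

ap_z = np.array([3.0, 3.0, 6.0, 6.0, 9.0])            # known AP heights (m)
rss = np.array([-50.0, -55.0, -60.0, -90.0, -70.0])   # measured RSS (dBm)

w = 10 ** (rss / 10.0)                       # stronger APs weigh more (assumed model)
keep = np.abs(rss - np.median(rss)) < 20.0   # trim outliers around the median
z_hat = np.sum(w[keep] * ap_z[keep]) / np.sum(w[keep])

floor_height = 3.0                           # assumed storey height (m)
print("estimated floor:", round(z_hat / floor_height))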
Real-time failure control (SAFD)
NASA Technical Reports Server (NTRS)
Panossian, Hagop V.; Kemp, Victoria R.; Eckerling, Sherry J.
1990-01-01
The Real Time Failure Control program involves the development of a failure detection algorithm, referred to as the System for Anomaly and Failure Detection (SAFD), for the Space Shuttle Main Engine (SSME). This failure detection approach is signal-based; it entails monitoring SSME measurement signals against predetermined and computed mean values and standard deviations. Twenty-four engine measurements are included in the algorithm, and provisions are made to add more parameters if needed. Six major areas of research are presented: (1) SAFD algorithm development; (2) SAFD simulations; (3) Digital Transient Model failure simulation; (4) closed-loop simulation; (5) current SAFD limitations; and (6) planned enhancements.
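A minimal sketch of this kind of signal-based monitoring is given below in Python. The 3-sigma band, the persistence count, and the synthetic data are assumptions standing in for SAFD's engine-specific limits.

# Signal-based anomaly check in the spirit of SAFD (illustrative sketch).
import numpy as np

def confirm_anomaly(signal, mean, sigma, k=3.0, persist=5):
    """Return the first index at which |signal - mean| > k*sigma has held
    for `persist` consecutive samples, or None if never confirmed."""
    run = 0
    for i, x in enumerate(signal):
        run = run + 1 if abs(x - mean) > k * sigma else 0
        if run >= persist:
            return i
    return None

rng = np.random.default_rng(0)
x = rng.normal(100.0, 2.0, 1000)      # synthetic engine measurement
x[600:] += 15.0                       # simulated shift in the monitored signal
print("anomaly confirmed at sample", confirm_anomaly(x, 100.0, 2.0))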
Adaptive Peer Sampling with Newscast
NASA Astrophysics Data System (ADS)
Tölgyesi, Norbert; Jelasity, Márk
The peer sampling service is a middleware service that provides random samples from a large decentralized network to support gossip-based applications such as multicast, data aggregation and overlay topology management. Lightweight gossip-based implementations of the peer sampling service have been shown to provide good quality random sampling while also being extremely robust to many failure scenarios, including node churn and catastrophic failure. We identify two problems with these approaches. The first problem is related to message drop failures: if a node experiences a higher-than-average message drop rate, then the probability of sampling this node in the network decreases. The second problem is that the application layer at different nodes might request random samples at very different rates, which can result in very poor random sampling, especially at nodes with high request rates. We propose solutions for both problems. We focus on Newscast, a robust implementation of the peer sampling service. Our solution is based on simple extensions of the protocol and an adaptive self-control mechanism for its parameters: without involving failure detectors, nodes passively monitor local protocol events and use them as feedback for a local control loop that self-tunes the protocol parameters. The proposed solution is evaluated by simulation experiments.
White, Richard A.; Lu, Chunling; Rodriguez, Carly A.; Bayona, Jaime; Becerra, Mercedes C.; Burgos, Marcos; Centis, Rosella; Cohen, Theodore; Cox, Helen; D'Ambrosio, Lia; Danilovitz, Manfred; Falzon, Dennis; Gelmanova, Irina Y.; Gler, Maria T.; Grinsdale, Jennifer A.; Holtz, Timothy H.; Keshavjee, Salmaan; Leimane, Vaira; Menzies, Dick; Milstein, Meredith B.; Mishustin, Sergey P.; Pagano, Marcello; Quelapio, Maria I.; Shean, Karen; Shin, Sonya S.; Tolman, Arielle W.; van der Walt, Martha L.; Van Deun, Armand; Viiklepp, Piret
2016-01-01
Debate persists about monitoring method (culture or smear) and interval (monthly or less frequently) during treatment for multidrug-resistant tuberculosis (MDR-TB). We analysed existing data and estimated the effect of monitoring strategies on timing of failure detection. We identified studies reporting microbiological response to MDR-TB treatment and solicited individual patient data from authors. Frailty survival models were used to estimate pooled relative risk of failure detection in the last 12 months of treatment; hazard of failure using monthly culture was the reference. Data were obtained for 5410 patients across 12 observational studies. During the last 12 months of treatment, failure detection occurred in a median of 3 months by monthly culture; failure detection was delayed by 2, 7, and 9 months relying on bimonthly culture, monthly smear and bimonthly smear, respectively. Risk (95% CI) of failure detection delay resulting from monthly smear relative to culture is 0.38 (0.34–0.42) for all patients and 0.33 (0.25–0.42) for HIV-co-infected patients. Failure detection is delayed by reducing the sensitivity and frequency of the monitoring method. Monthly monitoring of sputum cultures from patients receiving MDR-TB treatment is recommended. Expanded laboratory capacity is needed for high-quality culture, and for smear microscopy and rapid molecular tests. PMID:27587552
NASA Astrophysics Data System (ADS)
Dong, Zhengcheng; Fang, Yanjun; Tian, Meng; Kong, Zhengmin
The hierarchical k-core structure is common in various complex networks, and a real network typically has successive layers from the 1-core layer (the peripheral layer) to the km-core layer (the core layer). The nodes within the core layer have been proved to be the most influential spreaders, but little work has addressed how the depth of the k-core layers (the value of km) affects robustness against cascading failures, particularly in interdependent networks. First, following preferential attachment, a novel method is proposed to generate a scale-free network with successive k-core layers (a KCBA network), and the KCBA network is validated as more realistic than the traditional BA network. Then, with KCBA interdependent networks, the effect of the depth of the k-core layers is investigated. Considering the load-based model, the loss of capacity on nodes is adopted to quantify robustness instead of the number of functional nodes remaining at the end. We consider two attack strategies: RO-attack (randomly remove only one node) and RF-attack (randomly remove a fraction of nodes). Results show that the robustness of KCBA networks not only depends on the depth of the k-core layers but is also slightly influenced by the initial load. Under RO-attack, networks with fewer k-core layers are more robust when the initial load is small. Under RF-attack, robustness improves with small km, but the improvement weakens as the initial load increases. In short, the lower the depth, the more robust the networks will be.
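The k-core depth km referred to above is easy to measure with standard tools. The short Python/networkx sketch below computes each node's coreness in a BA network and reports the layer sizes; the network generator and size are assumptions, and KCBA generation itself is not reproduced.

# Measuring k-core depth and layer sizes of a scale-free network (sketch).
import networkx as nx

g = nx.barabasi_albert_graph(1000, 3, seed=0)
core = nx.core_number(g)                    # coreness of each node
km = max(core.values())                     # depth of the k-core hierarchy
layers = {k: sum(1 for c in core.values() if c == k) for k in range(1, km + 1)}
print("km =", km, "layer sizes:", layers)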
SCADA alarms processing for wind turbine component failure detection
NASA Astrophysics Data System (ADS)
Gonzalez, E.; Reder, M.; Melero, J. J.
2016-09-01
Wind turbine failures and downtime can often compromise the profitability of a wind farm due to their high impact on operation and maintenance (O&M) costs. Early detection of failures can facilitate the changeover from corrective maintenance towards a predictive approach. This paper presents a cost-effective methodology that combines various alarm analysis techniques, using data from the Supervisory Control and Data Acquisition (SCADA) system, in order to detect component failures. The approach categorises the alarms according to a reviewed taxonomy, turning overwhelming data into valuable information for assessing component status. Different alarm analysis techniques are then applied for two purposes: evaluating the capability of the SCADA alarm system to detect failures, and investigating the relation between faults in some components and subsequent failures in others. Various case studies are presented and discussed. The study highlights the relationship between faulty behaviour in different components, and between failures and adverse environmental conditions.
Development of An Intelligent Flight Propulsion Control System
NASA Technical Reports Server (NTRS)
Calise, A. J.; Rysdyk, R. T.; Leonhardt, B. K.
1999-01-01
The initial design and demonstration of an Intelligent Flight Propulsion and Control System (IFPCS) is documented. The design is based on the implementation of a nonlinear adaptive flight control architecture. This initial design of the IFPCS enhances flight safety by using propulsion sources to provide redundancy in flight control. The IFPCS enhances the conventional gain scheduled approach in significant ways: (1) The IFPCS provides a back up flight control system that results in consistent responses over a wide range of unanticipated failures. (2) The IFPCS is applicable to a variety of aircraft models without redesign and (3) significantly reduces the laborious research and design necessary in a gain scheduled approach. The control augmentation is detailed within an approximate Input-Output Linearization setting. The availability of propulsion only provides two control inputs, symmetric and differential thrust. Earlier Propulsion Control Augmentation (PCA) work performed by NASA provided for a trajectory controller with pilot command input of glidepath and heading. This work is aimed at demonstrating the flexibility of the IFPCS in providing consistency in flying qualities under a variety of failure scenarios. This report documents the initial design phase where propulsion only is used. Results confirm that the engine dynamics and associated hard nonlinearities result in poor handling qualities at best. However, as demonstrated in simulation, the IFPCS is capable of results similar to the gain scheduled designs of the NASA PCA work. The IFPCS design uses crude estimates of aircraft behaviour. The adaptive control architecture demonstrates robust stability and provides robust performance. In this work, robust stability means that all states, errors, and adaptive parameters remain bounded under a wide class of uncertainties and input and output disturbances. Robust performance is measured in the quality of the tracking. The results demonstrate the flexibility of the IFPCS architecture and the ability to provide robust performance under a broad range of uncertainty. Robust stability is proved using Lyapunov-like analysis. Future development of the IFPCS will include integration of conventional control surfaces with the use of propulsion augmentation, and utilization of available lift and drag devices, to demonstrate adaptive control capability under a greater variety of failure scenarios. Further work will specifically address the effects of actuator saturation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank, S; Garden, A; Anderson, M
Purpose: Multi-field optimization intensity modulated proton therapy (MFO-IMPT) for oropharyngeal tumors has been established using robust planning, robust analysis, and robust optimization techniques. While there are inherent uncertainties in proton therapy treatment planning and delivery, outcome reporting is important to validate the proton treatment process. The purpose of this study is to report on the first 50 oropharyngeal tumor patients treated de novo at a single institution with MFO-IMPT. Methods: Data from the first 50 patients with squamous cell carcinoma of the oropharynx treated at MD Anderson Cancer Center from January 2011 to December 2014 on a prospective IRB-approved protocol were analyzed. Outcomes were analyzed to include local, regional, and distant treatment failures. Acute and late toxicities were analyzed by CTCAE v4.0. Results: All patients were treated with definitive intent. The median follow-up time of the 50 patients was 25 months. Patients by gender were male (84%) and female (16%). The average age was 61 years. 50% of patients were never smokers and 4% were current smokers. Presentation by stage: I, 1; II, 0; III, 9; IVA, 37 (74%); IVB, 3. 88% of patients were HPV/p16+. Patients were treated to 66-70 CGE. One local failure was reported at 13 months following treatment. One neck failure was reported at 12 months. 94% of patients were alive with no evidence of disease. One patient died without evidence of disease. There were no Grade 4 or Grade 5 toxicities. Conclusion: MFO-IMPT for oropharyngeal tumors is robust and provides excellent outcomes 2 years after treatment. A randomized trial is underway to determine whether proton therapy will reduce chronic late toxicities of IMRT.
Gunderson, Bruce D; Gillberg, Jeffrey M; Wood, Mark A; Vijayaraman, Pugazhendhi; Shepard, Richard K; Ellenbogen, Kenneth A
2006-02-01
Implantable cardioverter-defibrillator (ICD) lead failures often present as inappropriate shock therapy. An algorithm that can reliably discriminate between ventricular tachyarrhythmias and noise due to lead failure may prevent patient discomfort and anxiety and avoid device-induced proarrhythmia by preventing inappropriate ICD shocks. The goal of this analysis was to test an ICD tachycardia detection algorithm that differentiates noise due to lead failure from ventricular tachyarrhythmias. We tested an algorithm that uses a measure of the ventricular intracardiac electrogram baseline to discriminate the sinus rhythm isoelectric line from the right ventricular coil-can (i.e., far-field) electrogram during oversensing of noise caused by a lead failure. The baseline measure was defined as the product of the sum (mV) and standard deviation (mV) of the voltage samples for a 188-ms window centered on each sensed electrogram. If the minimum baseline measure of the last 12 beats was <0.35 mV-mV, then the detected rhythm was considered noise due to a lead failure. The first ICD-detected episode of lead failure and inappropriate detection from 24 ICD patients with a pace/sense lead failure and all ventricular arrhythmias from 56 ICD patients without a lead failure were selected. The stored data were analyzed to determine the sensitivity and specificity of the algorithm to detect lead failures. The minimum baseline measure for the 24 lead failure episodes (0.28 +/- 0.34 mV-mV) was smaller than the 135 ventricular tachycardia (40.8 +/- 43.0 mV-mV, P <.0001) and 55 ventricular fibrillation episodes (19.1 +/- 22.8 mV-mV, P <.05). A minimum baseline <0.35 mV-mV threshold had a sensitivity of 83% (20/24) with a 100% (190/190) specificity. A baseline measure of the far-field electrogram had a high sensitivity and specificity to detect lead failure noise compared with ventricular tachycardia or fibrillation.
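The baseline measure lends itself to a near-direct transcription. In the Python sketch below, the sampling rate, the use of the absolute value of the window sum, and the synthetic demo signal are assumptions, while the 188-ms window, the 12-beat minimum, and the 0.35 mV-mV threshold come from the abstract.

# Baseline-measure lead-failure discriminator (sketch of the described algorithm).
import numpy as np

def baseline_measure(egm, beat_centers, fs=256):
    """Minimum of (|sum| * std) over 188-ms windows centered on the last 12 beats."""
    half = int(0.188 * fs / 2)
    vals = []
    for c in beat_centers[-12:]:
        w = egm[max(0, c - half):c + half]
        vals.append(abs(np.sum(w)) * np.std(w))
    return min(vals)

def is_lead_failure_noise(egm, beat_centers, fs=256, threshold=0.35):
    # Lead-failure noise rides on a near-isoelectric far-field signal, so the
    # baseline measure stays tiny; true VT/VF produces large deflections.
    return baseline_measure(egm, beat_centers, fs) < threshold

rng = np.random.default_rng(0)
flat_egm = rng.normal(0.0, 0.01, 5000)        # near-isoelectric far-field EGM (mV)
print(is_lead_failure_noise(flat_egm, list(range(500, 5000, 300))))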
Artificial-neural-network-based failure detection and isolation
NASA Astrophysics Data System (ADS)
Sadok, Mokhtar; Gharsalli, Imed; Alouani, Ali T.
1998-03-01
This paper presents the design of a systematic failure detection and isolation system that uses the concept of failure sensitive variables (FSV) and artificial neural networks (ANN). The proposed approach was applied to tube leak detection in a utility boiler system. Results of the experimental testing are presented in the paper.
Automatic patient respiration failure detection system with wireless transmission
NASA Technical Reports Server (NTRS)
Dimeff, J.; Pope, J. M.
1968-01-01
Automatic respiration failure detection system detects respiration failure in patients with a surgically implanted tracheostomy tube, and actuates an audible and/or visual alarm. The system incorporates a miniature radio transmitter so that the patient is unencumbered by wires yet can be monitored from a remote location.
Fault detection and fault tolerance in robotics
NASA Technical Reports Server (NTRS)
Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.
1992-01-01
Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.
Multiple perspective vulnerability analysis of the power network
NASA Astrophysics Data System (ADS)
Wang, Shuliang; Zhang, Jianhua; Duan, Na
2018-02-01
To understand the vulnerability of the power network from multiple perspectives, multi-angle and multi-dimensional vulnerability analysis, as well as community-based vulnerability analysis, are proposed in this paper. Taking the central China power grid as an example, correlation analysis of different vulnerability models is discussed. Then, vulnerabilities produced by different vulnerability metrics under the given vulnerability models and failure scenarios are analyzed. Finally, applying a community detection approach, critical areas of the central China power grid are identified, and vulnerable and robust communities are obtained and analyzed from both topological and functional perspectives. The approach introduced in this paper can be used to help decision makers develop optimal protection strategies. It will also be useful for multi-perspective vulnerability analysis of other infrastructure systems.
Assuring SS7 dependability: A robustness characterization of signaling network elements
NASA Astrophysics Data System (ADS)
Karmarkar, Vikram V.
1994-04-01
Current and evolving telecommunication services will rely on signaling network performance and reliability properties to build competitive call and connection control mechanisms under increasing demands on flexibility without compromising on quality. The dimensions of signaling dependability most often evaluated are the Rate of Call Loss and End-to-End Route Unavailability. A third dimension of dependability that captures the concern about large or catastrophic failures can be termed Network Robustness. This paper is concerned with the dependability aspects of the evolving Signaling System No. 7 (SS7) networks and attempts to strike a balance between the probabilistic and deterministic measures that must be evaluated to accomplish a risk-trend assessment to drive architecture decisions. Starting with high-level network dependability objectives and field experience with SS7 in the U.S., potential areas of growing stringency in network element (NE) dependability are identified to improve against current measures of SS7 network quality, as per-call signaling interactions increase. A sensitivity analysis is presented to highlight the impact due to imperfect coverage of duplex network component or element failures (i.e., correlated failures), to assist in the setting of requirements on NE robustness. A benefit analysis, covering several dimensions of dependability, is used to generate the domain of solutions available to the network architect in terms of network and network element fault tolerance that may be specified to meet the desired signaling quality goals.
Linear Parameter Varying Control Synthesis for Actuator Failure, Based on Estimated Parameter
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Wu, N. Eva; Belcastro, Christine
2002-01-01
The design of a linear parameter varying (LPV) controller for an aircraft under actuator failure cases is presented. The controller synthesis for actuator failure cases is formulated as linear matrix inequality (LMI) optimizations based on an estimated failure parameter with pre-defined estimation error bounds. The inherent conservatism of the LPV control synthesis methodology is reduced using a scaling factor on the uncertainty block that represents estimated parameter uncertainties. The fault parameter is estimated using a two-stage Kalman filter. Simulation results of the designed LPV controller for a HiMAT (Highly Maneuverable Aircraft Technology) vehicle with the on-line estimator show that the desired performance and robustness objectives are achieved for actuator failure cases.
Real-time automated failure analysis for on-orbit operations
NASA Technical Reports Server (NTRS)
Kirby, Sarah; Lauritsen, Janet; Pack, Ginger; Ha, Anhhoang; Jowers, Steven; Mcnenny, Robert; Truong, The; Dell, James
1993-01-01
A system which is to provide real-time failure analysis support to controllers at the NASA Johnson Space Center Control Center Complex (CCC) for both Space Station and Space Shuttle on-orbit operations is described. The system employs monitored systems' models of failure behavior and model evaluation algorithms which are domain-independent. These failure models are viewed as a stepping stone to more robust algorithms operating over models of intended function. The described system is designed to meet two sets of requirements. It must provide a useful failure analysis capability enhancement to the mission controller. It must satisfy CCC operational environment constraints such as cost, computer resource requirements, verification, and validation. The underlying technology and how it may be used to support operations is also discussed.
Achieving fast and stable failure detection in WDM Networks
NASA Astrophysics Data System (ADS)
Gao, Donghui; Zhou, Zhiyu; Zhang, Hanyi
2005-02-01
In dynamic networks, failure detection time takes a major part of the convergence time, which is an important network performance index. To detect a node or link failure, traditional protocols, like the Hello protocol in OSPF or RSVP, exchange keep-alive messages between neighboring nodes to keep track of the link/node state. With default settings, however, the minimum detection time is on the order of tens of seconds, which cannot meet the demands of fast network convergence and failure recovery, and tuning the related parameters to reduce the detection time introduces notable instability problems. In this paper, we analyze the problem and design a new failure detection algorithm that reduces the network overhead of detection signaling. Our experiments show that stability can be enhanced effectively by implicitly treating other signaling messages as keep-alive messages. We evaluated our proposal and the previous approaches on an ASON test-bed. The experimental results show that our algorithm outperforms previous schemes, with roughly an order-of-magnitude reduction in both false failure alarms and queuing delay for other messages, especially under light traffic load.
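The implicit-acknowledgement idea can be sketched in a few lines of Python; the timer value and the class interface are assumptions, not the paper's protocol.

# Keep-alive monitor where any signaling message refreshes liveness (sketch).
import time

class NeighborMonitor:
    def __init__(self, dead_interval=2.0):
        self.dead_interval = dead_interval   # seconds without traffic => failed
        self.last_seen = {}

    def on_message(self, neighbor):
        # Hellos AND ordinary signaling messages both count as keep-alives,
        # cutting dedicated hello traffic and, per the abstract, false alarms.
        self.last_seen[neighbor] = time.monotonic()

    def failed_neighbors(self):
        now = time.monotonic()
        return [n for n, t in self.last_seen.items()
                if now - t > self.dead_interval]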
Correlated network of networks enhances robustness against catastrophic failures.
Min, Byungjoon; Zheng, Muhua
2018-01-01
Networks in nature rarely function in isolation; instead they interact with one another in the form of a network of networks (NoN). A network of networks with interdependency between distinct networks is prone to the instability of abrupt collapse related to the global rule of activation. As a remedy for this collapse instability, here we investigate a model of correlated NoN. We find that the collapse instability can be removed when hubs provide the majority of interconnections and interconnections are convergent between hubs. Thus, our study identifies a stable structure of correlated NoN against catastrophic failures. Our results further suggest a plausible way to enhance network robustness by manipulating connection patterns, along with other methods such as controlling the state of a node based on a local rule. PMID:29668730
Sensor Failure Detection of FASSIP System using Principal Component Analysis
NASA Astrophysics Data System (ADS)
Sudarno; Juarsa, Mulya; Santosa, Kussigit; Deswandri; Sunaryo, Geni Rina
2018-02-01
In the nuclear reactor accident at Fukushima Daiichi in Japan, the damage to the core and pressure vessel was caused by the failure of the active cooling system (the diesel generators were inundated by the tsunami). Research on passive cooling systems for nuclear power plants is therefore being performed to improve the safety aspects of nuclear reactors. The FASSIP system (Passive System Simulation Facility) is an installation used to study the characteristics of passive cooling systems in nuclear power plants. The accuracy of the FASSIP system's sensor measurements is essential, because they are the basis for determining the characteristics of a passive cooling system. In this research, a sensor failure detection method for the FASSIP system is developed, so that indications of sensor failure can be detected early. The method used is Principal Component Analysis (PCA) to reduce the dimension of the sensor data, with the Squared Prediction Error (SPE) and Hotelling's statistic as criteria for detecting sensor failure indications. The results show that the PCA method is capable of detecting the occurrence of a failure at any sensor.
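A generic version of this PCA-based detection scheme is sketched below in Python. The synthetic eight-sensor data, the three retained components, and the percentile-based control limits are assumptions; the SPE and Hotelling T^2 statistics follow their standard definitions.

# PCA-based sensor fault detection with SPE and Hotelling T^2 (sketch).
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 8))          # healthy data from 8 sensors (assumed)
mu, sd = train.mean(axis=0), train.std(axis=0)

eigval, eigvec = np.linalg.eigh(np.cov((train - mu) / sd, rowvar=False))
order = np.argsort(eigval)[::-1][:3]       # keep 3 principal components (assumed)
P, lam = eigvec[:, order], eigval[order]

def spe_t2(x):
    z = (x - mu) / sd
    t = P.T @ z                            # scores in the principal subspace
    spe = np.sum((z - P @ t) ** 2)         # squared prediction error (residual)
    t2 = np.sum(t ** 2 / lam)              # Hotelling T^2 statistic
    return spe, t2

stats = np.array([spe_t2(x) for x in train])
spe_lim, t2_lim = np.percentile(stats, 99, axis=0)   # empirical control limits

faulty = train[0].copy()
faulty[2] += 6.0                           # bias fault injected on sensor 2
spe, t2 = spe_t2(faulty)
print("sensor failure indicated:", spe > spe_lim or t2 > t2_lim)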
Hao, Li-Ying; Yang, Guang-Hong
2013-09-01
This paper is concerned with the problem of robust fault-tolerant compensation control for uncertain linear systems subject to both state and input signal quantization. By combining a novel matrix full-rank factorization technique with sliding surface design, the total failure of certain actuators can be coped with, under a special actuator redundancy assumption. In order to compensate for quantization errors, an adjustment range of quantization sensitivity for a dynamic uniform quantizer is given through flexible choices of design parameters. Compared with existing results, the derived inequality condition leads to stronger fault-tolerance ability and a much wider scope of applicability. With a static adjustment policy of quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances and parameter uncertainties without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated via a model of a rocket fairing structural-acoustic system. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Bueno, R. A.
1977-01-01
Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft applications are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found to be satisfactory, but problems may arise in correctly identifying the mode of a failure. These issues are closely examined, as is the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.
Pre-configured polyhedron based protection against multi-link failures in optical mesh networks.
Huang, Shanguo; Guo, Bingli; Li, Xin; Zhang, Jie; Zhao, Yongli; Gu, Wanyi
2014-02-10
This paper focuses on protection against random multi-link failures in optical mesh networks, instead of the single, dual, or sequential failures of previous studies. Spare resource efficiency and failure robustness are major concerns in designing a link protection strategy, and a k-regular, k-edge-connected structure is proved to be one of the optimal solutions for a link protection network. Based on this, a novel pre-configured polyhedron based protection structure is proposed; it provides protection for both simultaneous and sequential random link failures with improved spare resource efficiency. Its performance is evaluated in terms of spare resource consumption, recovery rate and average recovery path length, and compared with ring-based and subgraph protection under probabilistic link failure scenarios. Results show the proposed link protection approach performs better than previous works.
Robust Fault Detection Using Robust l1 Estimation and Fuzzy Logic
NASA Technical Reports Server (NTRS)
Curry, Tramone; Collins, Emmanuel G., Jr.; Selekwa, Majura; Guo, Ten-Huei (Technical Monitor)
2001-01-01
This research considers the application of robust l(sub 1) estimation in conjunction with fuzzy logic to robust fault detection for an aircraft flight control system. It begins with the development of robust l(sub 1) estimators based on multiplier theory and then develops a fixed-threshold approach to fault detection (FD). It then considers the use of fuzzy logic for robust residual evaluation and FD. Due to modeling errors and unmeasurable disturbances, it is difficult to distinguish between the effects of an actual fault and those caused by uncertainty and disturbance. Hence, it is the aim of a robust FD system to be sensitive to faults while remaining insensitive to uncertainty and disturbances. While fixed thresholds only allow a decision on whether a fault has or has not occurred, it is more valuable to have the residual evaluation lead to a conclusion related to the degree of, or probability of, a fault. Fuzzy logic is a viable means of determining the degree of a fault and allows the introduction of human observations that may not be incorporated in the rigorous threshold theory. Hence, fuzzy logic can provide a more reliable and informative fault detection process. Using an aircraft flight control system, the results of FD using robust l(sub 1) estimation with a fixed threshold are demonstrated. FD that combines robust l(sub 1) estimation and fuzzy logic is also demonstrated. It is seen that combining the robust estimator with fuzzy logic proves to be advantageous in increasing the sensitivity to smaller faults while remaining insensitive to uncertainty and disturbances.
Failure detection in high-performance clusters and computers using chaotic map computations
Rao, Nageswara S.
2015-09-01
A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
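The underlying idea, that identical healthy components iterating the same chaotic map produce identical trajectories while any arithmetic fault is exponentially amplified, can be sketched as follows. The logistic map, perturbation size, and tolerance are illustrative assumptions, not the patented implementation.

# Chaotic-map trajectory comparison for component fault detection (sketch).
def logistic_trajectory(x0, steps=100, r=3.99):
    xs, x = [], x0
    for _ in range(steps):
        x = r * x * (1.0 - x)        # chaotic logistic map iteration
        xs.append(x)
    return xs

ref = logistic_trajectory(0.123456)           # reference replica
node = logistic_trajectory(0.123456)          # trajectory computed on a node
glitch = node[39] * (1.0 + 1e-7)              # simulate a tiny arithmetic fault
node = node[:39] + [glitch] + logistic_trajectory(glitch, steps=60)

# Chaos amplifies the glitch exponentially, so a simple tolerance test suffices.
diverged = any(abs(a - b) > 1e-6 for a, b in zip(ref, node))
print("component fault detected:", diverged)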
NASA Technical Reports Server (NTRS)
Eberlein, A. J.; Lahm, T. G.
1976-01-01
The degree to which flight-critical failures in a strapdown laser gyro tetrad sensor assembly can be isolated in short-haul aircraft after a failure occurrence has been detected by the skewed-sensor failure-detection voting logic is investigated, along with the degree to which a failure in the tetrad computer can be detected and isolated at the computer level, assuming a dual-redundant computer configuration. The tetrad system was mechanized with two two-axis inertial navigation channels (INCs), each containing two gyro/accelerometer axes, a computer, control circuitry, and input/output circuitry. Gyro/accelerometer data are crossfed between the two INCs to enable each computer to independently perform the navigation task. Computer calculations are synchronized between the computers so that calculated quantities are identical and may be compared. Fail-safe performance (identification of the first failure) is accomplished with a probability approaching 100 percent, while fail-operational performance (identification and isolation of the first failure) is achieved 93 to 96 percent of the time.
Brown, Dorothy Cimino; Bell, Margie; Rhodes, Linda
2013-12-01
To determine the optimal method for use of the Canine Brief Pain Inventory (CBPI) to quantitate responses of dogs with osteoarthritis to treatment with carprofen or placebo. 150 dogs with osteoarthritis. Data were analyzed from 2 studies with identical protocols in which owner-completed CBPIs were used. Treatment for each dog was classified as a success or failure by comparing the pain severity score (PSS) and pain interference score (PIS) on day 0 (baseline) with those on day 14. Treatment success or failure was defined on the basis of various combinations of reduction in the 2 scores when inclusion criteria were set as a PSS and PIS ≥ 1, 2, or 3 at baseline. Statistical analyses were performed to select the definition of treatment success that had the greatest statistical power to detect differences between carprofen and placebo treatments. Defining treatment success as a reduction of ≥ 1 in PSS and ≥ 2 in PIS in each dog had consistently robust power. Power was 62.8% in the population that included only dogs with baseline scores ≥ 2 and 64.7% in the population that included only dogs with baseline scores ≥ 3. The CBPI had robust statistical power to evaluate the treatment effect of carprofen in dogs with osteoarthritis when protocol success criteria were predefined as a reduction of ≥ 1 in PSS and ≥ 2 in PIS. Results indicated the CBPI can be used as an outcome measure in clinical trials to evaluate new pain treatments when it is desirable to evaluate success in individual dogs rather than overall mean or median scores in a test population.
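The success rule the study settled on reduces to a one-line predicate; the Python sketch below encodes it with hypothetical scores.

# CBPI treatment-success rule from the abstract: PSS down >= 1 and PIS down >= 2.
def cbpi_success(pss_day0, pis_day0, pss_day14, pis_day14):
    return (pss_day0 - pss_day14) >= 1 and (pis_day0 - pis_day14) >= 2

print(cbpi_success(5.0, 6.0, 3.5, 3.0))   # True: PSS fell 1.5, PIS fell 3.0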
NASA Technical Reports Server (NTRS)
Vanschalkwyk, Christiaan Mauritz
1991-01-01
Many applications require that a control system be tolerant to the failure of its components. This is especially true for large space-based systems that must work unattended and with long periods between maintenance. Fault tolerance can be obtained by detecting the failure of a control system component, determining which component has failed, and reconfiguring the system so that the failed component is isolated from the controller. Component failure detection experiments that were conducted on an experimental space structure, the NASA Langley Mini-Mast, are presented. Two methodologies for failure detection and isolation (FDI) exist that do not require the specification of failure modes and are applicable to both actuators and sensors. These methods are known as the Failure Detection Filter and the method of Generalized Parity Relations. The latter method was applied to three different sensor types on the Mini-Mast. Failures were simulated in input-output data that were recorded during operation of the Mini-Mast. Both single and double sensor parity relations were tested, and the effect of several design parameters on the performance of these relations is discussed. The detection of actuator failures is also treated. It is shown that in all cases it is possible to identify the parity relations directly from input-output data. Frequency domain analysis is used to explain the behavior of the parity relations.
NASA Astrophysics Data System (ADS)
Abdul-Aziz, Ali; Woike, Mark R.; Clem, Michelle; Baaklini, George
2015-03-01
Efforts to update and improve turbine engine components to meet flight safety and durability requirements are commitments that engine manufacturers continuously try to fulfill. Most of their concerns and development energies focus on rotating components such as rotor disks. These components typically undergo rigorous operating conditions and are subject to high centrifugal loadings, which expose them to various failure mechanisms. Thus, developing highly advanced health monitoring technology to screen their efficacy and performance is essential to their prolonged service life and operational success. Nondestructive evaluation techniques are among the many screening methods presently used to detect hidden flaws and mini cracks before any catastrophic event occurs. Most of these methods are confined to evaluating material discontinuities and other defects that have matured to a point where failure is imminent. Hence, development of more robust techniques to predict faults prior to any catastrophic events in these components is highly vital. This paper presents the ongoing research efforts at the NASA Glenn Research Center (GRC) rotor dynamics laboratory in support of developing a fault detection system for key critical turbine engine components. Data obtained from spin test experiments of a rotor disk, relating to the behavior of blade tip clearance, tip timing and shaft displacement based on measurements acquired from sensor devices such as eddy current, capacitive and microwave sensors, are presented. Additional results linking test data with finite element modeling to characterize the structural durability of a cracked rotor, as it relates to the experimental tests and findings, are also presented. An obvious difference in the vibration response is shown between the notched and baseline (no-notch) rotor disks, indicating the presence of some type of irregularity.
Dynamic one-dimensional modeling of secondary settling tanks and design impacts of sizing decisions.
Li, Ben; Stenstrom, Michael K
2014-03-01
As one of the most significant components in the activated sludge process (ASP), secondary settling tanks (SSTs) can be investigated with mathematical models to optimize design and operation. This paper takes a new look at the one-dimensional (1-D) SST model by analyzing the impacts of numerical problems, especially on process robustness. An improved SST model with the Yee-Roe-Davis technique as the PDE solver is proposed and compared with the widely used Takács model to show its improvement in numerical solution quality. The improved and Takács models are coupled with a bioreactor model to reevaluate the ASP design basis and several popular control strategies for economic plausibility, contaminant removal efficiency and system robustness. The time to failure due to a rising sludge blanket during overloading, a key robustness indicator, is analyzed to demonstrate the differences caused by numerical issues in SST models. The calculated results indicate that the Takács model significantly underestimates time to failure, thus leading to a conservative design. Copyright © 2013 Elsevier Ltd. All rights reserved.
Automatic FDG-PET-based tumor and metastatic lymph node segmentation in cervical cancer
NASA Astrophysics Data System (ADS)
Arbonès, Dídac R.; Jensen, Henrik G.; Loft, Annika; Munck af Rosenschöld, Per; Hansen, Anders Elias; Igel, Christian; Darkner, Sune
2014-03-01
Treatment of cervical cancer, one of the three most commonly diagnosed cancers worldwide, often relies on delineations of the tumour and metastases based on PET imaging using the contrast agent 18F-Fluorodeoxyglucose (FDG). We present a robust automatic algorithm for segmenting the gross tumour volume (GTV) and metastatic lymph nodes in such images. As the cervix is located next to the bladder and FDG is washed out through the urine, the PET-positive GTV and the bladder cannot be easily separated. Our processing pipeline starts with a histogram-based region-of-interest detection followed by level set segmentation. After that, morphological image operations combined with clustering, region growing, and nearest neighbour labelling allow removal of the bladder and identification of the tumour and metastatic lymph nodes. The proposed method was applied to 125 patients, and no failure could be detected by visual inspection. We compared our segmentations with results from manual delineations of corresponding MR and CT images, showing that the detected GTV lies at least 97.5% within the MR/CT delineations. We conclude that the algorithm has very high potential for substituting the tedious manual delineation of PET-positive areas.
Robustness of Oscillatory Behavior in Correlated Networks
Sasai, Takeyuki; Morino, Kai; Tanaka, Gouhei; Almendral, Juan A.; Aihara, Kazuyuki
2015-01-01
Understanding network robustness against failures of network units is useful for preventing large-scale breakdowns and damages in real-world networked systems. The tolerance of networked systems whose functions are maintained by collective dynamical behavior of the network units has recently been analyzed in the framework called dynamical robustness of complex networks. The effect of network structure on the dynamical robustness has been examined with various types of network topology, but the role of network assortativity, or degree–degree correlations, is still unclear. Here we study the dynamical robustness of correlated (assortative and disassortative) networks consisting of diffusively coupled oscillators. Numerical analyses for the correlated networks with Poisson and power-law degree distributions show that network assortativity enhances the dynamical robustness of the oscillator networks but the impact of network disassortativity depends on the detailed network connectivity. Furthermore, we theoretically analyze the dynamical robustness of correlated bimodal networks with two-peak degree distributions and show the positive impact of the network assortativity. PMID:25894574
2017-03-01
A Low-Power Wireless Image Sensor Node with Noise-Robust Moving Object Detection and a Region-of-Interest Based Rate Controller Jong Hwan Ko...Atlanta, GA 30332 USA Contact Author Email: jonghwan.ko@gatech.edu Abstract: This paper presents a low-power wireless image sensor node for...present a low-power wireless image sensor node with a noise-robust moving object detection and region-of-interest based rate controller [Fig. 1]. The
Cyber War Game in Temporal Networks
2016-02-09
a node's mobility, failure or its resource depletion over time or action(s), this optimization problem becomes NP-complete. We propose two heuristic ... representing the interactions between nodes [1, 2]. One of the most important properties of a network is robustness against random failures and targeted attacks...authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S
VHSIC/VHSIC-Like Reliability Prediction Modeling
1989-10-01
prediction would require knowledge of event statistics as well as device robustness. Additionally, although this is primarily a theoretical, bottom...Degradation in Section 5.3 P = Power PDIP = Plastic DIP P(f) = Probability of Failure due to EOS or ESD P(f|c) = Probability of Failure given Contact from an...the results of those stresses: Device Stress Part Number Power Dissipation Manufacturer Test Type Part Description Junction Temperature Package Type
2015-03-26
albeit powerful, method available for exploring CAS. As discussed above, there are many useful mathematical tools appropriate for CAS modeling. Agent-based...cells, telephone calls, and sexual contacts approach power-law distributions. [48] Networks in general are robust against random failures, but...targeted failures can have powerful effects, provided the targeter has a good understanding of the network structure. Some argue (convincingly) that all
A Complex Network Analysis of Granular Fabric Evolution in Three-Dimensions
2011-01-01
organized pattern formation (e.g., strain localization), and co-evolution of emergent internal structures (e.g., force cycles and force chains) [15...these networks, particularly recurring patterns or motifs, and understanding how these co-evolve are crucial to the robust characterization and...the lead-up to and during failure. Since failure patterns and boundaries of flow in three-dimensional specimens can be quite complicated and difficult
NASA Technical Reports Server (NTRS)
Behbehani, K.
1980-01-01
A new sensor/actuator failure analysis technique for turbofan jet engines was developed. Three phases of failure analysis, namely detection, isolation, and accommodation, are considered. Failure detection and isolation techniques are developed by utilizing the concept of Generalized Likelihood Ratio (GLR) tests. These techniques are applicable to both time varying and time invariant systems. Three GLR detectors are developed for: (1) hard-over sensor failure; (2) hard-over actuator failure; and (3) brief disturbances in the actuators. The probability distribution of the GLR detectors and the detectability of sensor/actuator failures are established. Failure type is determined by the maximum of the GLR detectors. Failure accommodation is accomplished by extending the Multivariable Nyquist Array (MNA) control design techniques to nonsquare system designs. The performance and effectiveness of the failure analysis technique are studied by applying the technique to a turbofan jet engine, namely the Quiet Clean Short Haul Experimental Engine (QCSEE). Single and multiple sensor/actuator failures in the QCSEE are simulated and analyzed and the effects of model degradation are studied.
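As context, the GLR idea lends itself to a compact illustration. Below is a minimal sketch, not drawn from the report itself, of a GLR test for a hard-over (bias-jump) failure in a scalar residual sequence; it assumes white Gaussian residuals with known variance, and all names and values are illustrative.

```python
import numpy as np

def glr_bias_jump(residuals, sigma2, window=20):
    """GLR statistic for a mean jump (hard-over failure) at some unknown
    time k inside a sliding window. For white Gaussian residuals with
    variance sigma2, the GLR for a jump at k reduces to
    (sum of residuals from k onward)^2 / (2 * sigma2 * n_k),
    maximized over candidate jump times k."""
    r = np.asarray(residuals)[-window:]
    best = 0.0
    for k in range(len(r)):
        tail = r[k:]
        best = max(best, tail.sum() ** 2 / (2.0 * sigma2 * len(tail)))
    return best

# Example: a 3-sigma bias appears halfway through the record.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 100)
faulty = clean.copy()
faulty[50:] += 3.0  # hard-over bias

print(glr_bias_jump(clean, sigma2=1.0))   # small value: no alarm
print(glr_bias_jump(faulty, sigma2=1.0))  # large value: declare failure
```

For the vector residuals of a real engine control system, the same statistic generalizes by weighting the residual sums with the innovation covariance.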
Decrease of cardiac chaos in congestive heart failure
NASA Astrophysics Data System (ADS)
Poon, Chi-Sang; Merrill, Christopher K.
1997-10-01
The electrical properties of the mammalian heart undergo many complex transitions in normal and diseased states. It has been proposed that the normal heartbeat may display complex nonlinear dynamics, including deterministic chaos, and that such cardiac chaos may be a useful physiological marker for the diagnosis and management of certain heart trouble. However, it is not clear whether the heartbeat series of healthy and diseased hearts are chaotic or stochastic, or whether cardiac chaos represents normal or abnormal behaviour. Here we have used a highly sensitive technique, which is robust to random noise, to detect chaos. We analysed the electrocardiograms from a group of healthy subjects and those with severe congestive heart failure (CHF), a clinical condition associated with a high risk of sudden death. The short-term variations of beat-to-beat interval exhibited strongly and consistently chaotic behaviour in all healthy subjects, but were frequently interrupted by periods of seemingly non-chaotic fluctuations in patients with CHF. Chaotic dynamics in the CHF data, even when discernible, exhibited a high degree of random variability over time, suggesting a weaker form of chaos. These findings suggest that cardiac chaos is prevalent in the healthy heart, and a decrease in such chaos may be indicative of CHF.
El-Far, Mohamed; Kouassi, Pascale; Sylla, Mohamed; Zhang, Yuwei; Fouda, Ahmed; Fabre, Thomas; Goulet, Jean-Philippe; van Grevenynghe, Julien; Lee, Terry; Singer, Joel; Harris, Marianne; Baril, Jean-Guy; Trottier, Benoit; Ancuta, Petronela; Routy, Jean-Pierre; Bernard, Nicole; Tremblay, Cécile L.; Angel, Jonathan; Conway, Brian; Côté, Pierre; Gill, John; Johnston, Lynn; Kovacs, Colin; Loutfy, Mona; Logue, Kenneth; Piché, Alain; Rachlis, Anita; Rouleau, Danielle; Thompson, Bill; Thomas, Réjean; Trottier, Sylvie; Walmsley, Sharon; Wobeser, Wendy
2016-01-01
HIV-infected slow progressors (SP) represent a heterogeneous group of subjects who spontaneously control HIV infection without treatment for several years while showing moderate signs of disease progression. Under conditions that remain poorly understood, a subgroup of these subjects experience failure of spontaneous immunological and virological control. Here we determined the frequency of SP subjects who showed loss of HIV control within our Canadian Cohort of HIV+ Slow Progressors and identified the proinflammatory cytokine IL-32 as a robust biomarker for control failure. Plasmatic levels of the proinflammatory isoforms of IL-32 (mainly β and γ) at earlier clinic visits positively correlated with the decline of CD4 T-cell counts, increased viral load, lower CD4/CD8 ratio and levels of inflammatory markers (sCD14 and IL-6) at later clinic visits. We present here a proof-of-concept for the use of IL-32 as a predictive biomarker for disease progression in SP subjects and identify IL-32 as a potential therapeutic target. PMID:26978598
Reliability Assessment of a Robust Design Under Uncertainty for a 3-D Flexible Wing
NASA Technical Reports Server (NTRS)
Gumbert, Clyde R.; Hou, Gene J. -W.; Newman, Perry A.
2003-01-01
The paper presents reliability assessment results for the robust designs under uncertainty of a 3-D flexible wing previously reported by the authors. Reliability assessments (additional optimization problems) of the active constraints at the various probabilistic robust design points are obtained and compared with the constraint values or target constraint probabilities specified in the robust design. In addition, reliability-based sensitivity derivatives with respect to design variable mean values are also obtained and shown to agree with finite difference values. These derivatives allow one to perform reliability-based design without having to obtain second-order sensitivity derivatives. However, an inner-loop optimization problem must be solved for each active constraint to find the most probable point on that constraint failure surface.
Percolation of localized attack on complex networks
NASA Astrophysics Data System (ADS)
Shao, Shuai; Huang, Xuqing; Stanley, H. Eugene; Havlin, Shlomo
2015-02-01
The robustness of complex networks against node failure and malicious attack has been of interest for decades, while most of the research has focused on random attack or hub-targeted attack. In many real-world scenarios, however, attacks are neither random nor hub-targeted, but localized, where a group of neighboring nodes in a network are attacked and fail. In this paper we develop a percolation framework to analytically and numerically study the robustness of complex networks against such localized attack. In particular, we investigate this robustness in Erdős-Rényi networks, random-regular networks, and scale-free networks. Our results provide insight into how to better protect networks, enhance cybersecurity, and facilitate the design of more robust infrastructures.
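For readers who want to experiment with the localized-attack setting, the following sketch (an illustration under simple assumptions, not the authors' analytical percolation framework) grows a ball of neighboring nodes by breadth-first search from a seed, removes it, and measures the surviving giant component with networkx.

```python
import networkx as nx

def localized_attack(G, seed, fraction):
    """Remove a connected 'ball' of nodes grown by breadth-first search
    from a seed node, then return the relative size of the largest
    connected component of what remains."""
    n_remove = int(fraction * G.number_of_nodes())
    ball = [seed] + [v for _, v in nx.bfs_edges(G, seed)]
    H = G.copy()
    H.remove_nodes_from(ball[:n_remove])
    if H.number_of_nodes() == 0:
        return 0.0
    giant = max(nx.connected_components(H), key=len)
    return len(giant) / G.number_of_nodes()

# Erdos-Renyi graph with mean degree ~4, attacked with growing local holes.
G = nx.erdos_renyi_graph(n=5000, p=4 / 5000, seed=1)
seed = next(iter(max(nx.connected_components(G), key=len)))  # start in the giant component
for f in (0.1, 0.3, 0.5, 0.7):
    print(f, round(localized_attack(G, seed, f), 3))
```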
Sensor failure detection system. [for the F100 turbofan engine
NASA Technical Reports Server (NTRS)
Beattie, E. C.; Laprad, R. F.; Mcglone, M. E.; Rock, S. M.; Akhter, M. M.
1981-01-01
Advanced concepts for detecting, isolating, and accommodating sensor failures were studied to determine their applicability to the gas turbine control problem. Five concepts were formulated based upon such techniques as Kalman filters, and a screening process led to the selection of one advanced concept for further evaluation. The selected advanced concept uses a Kalman filter to generate residuals, a weighted sum square residuals technique to detect soft failures, likelihood ratio testing of a bank of Kalman filters for isolation, and reconfiguring of the normal mode Kalman filter by eliminating the failed input to accommodate the failure. The advanced concept was compared to a baseline parameter synthesis technique. The advanced concept was shown to be a viable concept for detecting, isolating, and accommodating sensor failures for gas turbine applications.
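The weighted sum of squared residuals (WSSR) test at the heart of the selected concept is easy to sketch. The fragment below is a simplified illustration rather than the report's implementation: it assumes the Kalman innovations are zero-mean with known covariance S under the no-failure hypothesis, so the windowed statistic is approximately chi-square distributed.

```python
import numpy as np
from scipy.stats import chi2

def wssr_alarm(innovations, S, window=10, alpha=1e-3):
    """WSSR test on Kalman-filter innovations. Each innovation r_t (dim m)
    is normalized by the innovation covariance S; with no failure the
    windowed sum is approximately chi-square with window*m degrees of
    freedom, so exceeding the (1-alpha) quantile raises an alarm."""
    Sinv = np.linalg.inv(S)
    r = np.asarray(innovations)[-window:]
    wssr = float(sum(v @ Sinv @ v for v in r))
    threshold = chi2.ppf(1.0 - alpha, df=window * r.shape[1])
    return wssr, threshold, wssr > threshold

rng = np.random.default_rng(2)
S = np.eye(2)
healthy = rng.multivariate_normal([0, 0], S, size=50)
failed = healthy.copy()
failed[-10:] += np.array([2.0, 0.0])  # soft sensor bias in channel 0

print(wssr_alarm(healthy, S))  # statistic below threshold
print(wssr_alarm(failed, S))   # statistic above threshold: alarm
```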
Fault detection and identification in missile system guidance and control: a filtering approach
NASA Astrophysics Data System (ADS)
Padgett, Mary Lou; Evers, Johnny; Karplus, Walter J.
1996-03-01
Real-world applications of computational intelligence can enhance the fault detection and identification capabilities of a missile guidance and control system. A simulation of a bank-to-turn missile demonstrates that actuator failure may cause the missile to roll and miss the target. Failure of one fin actuator can be detected using a filter and depicting the filter output as fuzzy numbers. The properties and limitations of artificial neural networks fed by these fuzzy numbers are explored. A suite of networks is constructed to (1) detect a fault and (2) determine which fin (if any) failed. Both the zero-order moment term and the fin rate term show changes during actuator failure. Simulations address the following questions: (1) How bad does the actuator failure have to be for detection to occur? (2) How bad does the actuator failure have to be for fault detection and isolation to occur? (3) Are both zero-order moment and fin rate terms needed? A suite of target trajectories is simulated, and the properties and limitations of the approach are reported. In some cases, detection of the failed actuator occurs within 0.1 second, and isolation of the failure occurs 0.1 second after that. Suggestions for further research are offered.
An Adaptive Failure Detector Based on Quality of Service in Peer-to-Peer Networks
Dong, Jian; Ren, Xiao; Zuo, Decheng; Liu, Hongwei
2014-01-01
The failure detector is one of the fundamental components that maintain high availability of Peer-to-Peer (P2P) networks. Under different network conditions, an adaptive failure detector based on quality of service (QoS) can achieve the detection time and accuracy required by upper applications with lower detection overhead. In P2P systems, the complexity of the network and high churn lead to a high message loss rate. To reduce the impact on detection accuracy, a baseline detection strategy based on a retransmission mechanism has been employed widely in many P2P applications; however, Chen's classic adaptive model cannot describe this kind of detection strategy. In order to provide an efficient failure detection service in P2P systems, this paper establishes a novel QoS evaluation model for the baseline detection strategy. The relationship between the detection period and the QoS is discussed and, on this basis, an adaptive failure detector (B-AFD) is proposed, which can meet the quantitative QoS metrics under a changing network environment. Meanwhile, it is observed from the experimental analysis that B-AFD achieves better detection accuracy and time with lower detection overhead compared to the traditional baseline strategy and the adaptive detectors based on Chen's model. Moreover, B-AFD has better adaptability to P2P networks. PMID:25198005
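As background for adaptive detectors of this family, here is a minimal Chen-style heartbeat estimator. It is a deliberately simplified sketch, not the paper's B-AFD: the next heartbeat is expected at the last arrival plus the mean inter-arrival time, and the peer is suspected once that estimate plus a fixed safety margin has passed.

```python
from collections import deque

class AdaptiveFailureDetector:
    """Chen-style adaptive failure detector (simplified illustration)."""

    def __init__(self, margin=0.5, history=100):
        self.margin = margin                  # safety margin in seconds
        self.arrivals = deque(maxlen=history) # recent heartbeat arrival times

    def heartbeat(self, t):
        """Record a heartbeat received at time t (seconds)."""
        self.arrivals.append(t)

    def suspects(self, now):
        """True if the monitored peer should be suspected at time `now`."""
        if len(self.arrivals) < 2:
            return False
        a = list(self.arrivals)
        mean_gap = (a[-1] - a[0]) / (len(a) - 1)
        return now > a[-1] + mean_gap + self.margin

# Heartbeats roughly every second, then the peer goes silent after t = 5.
fd = AdaptiveFailureDetector(margin=0.5)
for t in (0.0, 1.0, 2.1, 3.0, 4.0, 5.0):
    fd.heartbeat(t)
print(fd.suspects(6.0))  # False: still within expected arrival + margin
print(fd.suspects(7.0))  # True: deadline passed, peer suspected
```

A QoS-driven detector would additionally tune the margin on-line to hit target detection time and mistake-rate metrics.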
A distributed database view of network tracking systems
NASA Astrophysics Data System (ADS)
Yosinski, Jason; Paffenroth, Randy
2008-04-01
In distributed tracking systems, multiple non-collocated trackers cooperate to fuse local sensor data into a global track picture. Generating this global track picture at a central location is fairly straightforward, but the single point of failure and excessive bandwidth requirements introduced by centralized processing motivate the development of decentralized methods. In many decentralized tracking systems, trackers communicate with their peers via a lossy, bandwidth-limited network in which dropped, delayed, and out-of-order packets are typical. Oftentimes the decentralized tracking problem is viewed as a local tracking problem with a networking twist; we believe this view can underestimate the network complexities to be overcome. Indeed, a subsequent 'oversight' layer is often introduced to detect and handle track inconsistencies arising from a lack of robustness to network conditions. We instead pose the decentralized tracking problem as a distributed database problem, enabling us to draw inspiration from the vast extant literature on distributed databases. Using the two-phase commit algorithm, a well known technique for resolving transactions across a lossy network, we describe several ways in which one may build a distributed multiple hypothesis tracking system from the ground up to be robust to typical network intricacies. We pay particular attention to the dissimilar challenges presented by network track initiation vs. maintenance and suggest a hybrid system that balances speed and robustness by utilizing two-phase commit for only track initiation transactions. Finally, we present simulation results contrasting the performance of such a system with that of more traditional decentralized tracking implementations.
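The two-phase commit pattern the authors borrow from distributed databases can be sketched in a few lines. The code below is a schematic coordinator with in-memory participants; all names are hypothetical, and in the tracking setting `transaction` would carry a proposed track initiation.

```python
class Participant:
    """Toy participant: votes on, then applies or discards, a transaction."""
    def __init__(self, name):
        self.name, self.state = name, {}
    def prepare(self, txn):
        return True            # vote yes if the update can be applied locally
    def commit(self, txn):
        self.state.update(txn) # make the update durable
    def abort(self, txn):
        pass                   # discard any tentative state

def two_phase_commit(participants, transaction):
    """Coordinator side of 2PC. Phase 1 collects votes; phase 2 commits
    only on a unanimous 'yes', otherwise aborts everywhere. An unreachable
    participant counts as a 'no', so a lossy network can delay agreement
    but not corrupt it."""
    votes = []
    for p in participants:
        try:
            votes.append(p.prepare(transaction))
        except ConnectionError:
            votes.append(False)
    decision = all(votes)
    for p in participants:
        try:
            (p.commit if decision else p.abort)(transaction)
        except ConnectionError:
            pass  # participant re-asks the coordinator for the outcome on recovery
    return decision

trackers = [Participant("tracker-a"), Participant("tracker-b")]
print(two_phase_commit(trackers, {"track_id": 7, "state": "initiated"}))
```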
Effect of interaction strength on robustness of controlling edge dynamics in complex networks
NASA Astrophysics Data System (ADS)
Pang, Shao-Peng; Hao, Fei
2018-05-01
Robustness plays a critical role in the controllability of complex networks to withstand failures and perturbations. Recent advances in the edge controllability show that the interaction strength among edges plays a more important role than network structure. Therefore, we focus on the effect of interaction strength on the robustness of edge controllability. Using three categories of all edges to quantify the robustness, we develop a universal framework to evaluate and analyze the robustness in complex networks with arbitrary structures and interaction strengths. Applying our framework to a large number of model and real-world networks, we find that the interaction strength is a dominant factor for the robustness in undirected networks. Meanwhile, the strongest robustness and the optimal edge controllability in undirected networks can be achieved simultaneously. Different from the case of undirected networks, the robustness in directed networks is determined jointly by the interaction strength and the network's degree distribution. Moreover, a stronger robustness is usually associated with a larger number of driver nodes required to maintain full control in directed networks. This prompts us to provide an optimization method by adjusting the interaction strength to optimize the robustness of edge controllability.
NASA Technical Reports Server (NTRS)
Hopson, Charles B.
1987-01-01
The results of an analysis performed on seven successive Space Shuttle Main Engine (SSME) static test firings, utilizing envelope detection of external accelerometer data are discussed. The results clearly show the great potential for using envelope detection techniques in SSME incipient failure detection.
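Envelope detection of an accelerometer channel is commonly implemented via the analytic signal. The following sketch is an illustration of that general technique under assumed signal parameters, not the analysis performed in the study; it uses scipy's Hilbert transform and a crude baseline-derived limit as an incipient-failure flag.

```python
import numpy as np
from scipy.signal import hilbert

# Amplitude envelope of a synthetic accelerometer channel.
fs = 10_000                       # Hz, illustrative sample rate
t = np.arange(0, 1.0, 1 / fs)
vib = np.sin(2 * np.pi * 800 * t) * (1 + 0.5 * (t > 0.6))  # vibration grows at t=0.6
envelope = np.abs(hilbert(vib))   # magnitude of the analytic signal

# Crude incipient-failure flag: envelope exceeding a baseline-derived limit.
baseline = envelope[t < 0.5]
limit = baseline.mean() + 5 * baseline.std()
print("alarm" if (envelope > limit).any() else "ok")
```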
Device for detecting imminent failure of high-dielectric stress capacitors. [Patent application
McDuff, G.G.
1980-11-05
A device is described for detecting imminent failure of a high-dielectric stress capacitor utilizing circuitry for detecting pulse width variations and pulse magnitude variations. Inexpensive microprocessor circuitry is utilized to make numerical calculations on digital data supplied by the detection circuitry, comparing pulse width data and magnitude data to determine if preselected ranges have been exceeded, thereby indicating imminent failure of a capacitor. Detection circuitry may be incorporated in transmission lines, pulse power circuitry, including laser pulse circuitry, or any circuitry where capacitors or capacitor banks are utilized.
Device for detecting imminent failure of high-dielectric stress capacitors
McDuff, George G.
1982-01-01
A device for detecting imminent failure of a high-dielectric stress capacitor utilizing circuitry for detecting pulse width variations and pulse magnitude variations. Inexpensive microprocessor circuitry is utilized to make numerical calculations on digital data supplied by the detection circuitry, comparing pulse width data and magnitude data to determine if preselected ranges have been exceeded, thereby indicating imminent failure of a capacitor. Detection circuitry may be incorporated in transmission lines, pulse power circuitry, including laser pulse circuitry, or any circuitry where capacitors or capacitor banks are utilized.
Spatial correlation analysis of cascading failures: Congestions and Blackouts
Daqing, Li; Yinan, Jiang; Rui, Kang; Havlin, Shlomo
2014-01-01
Cascading failures have become major threats to network robustness due to their potentially catastrophic consequences, where local perturbations can induce global propagation of failures. Unlike failures spreading via direct contacts due to structural interdependencies, overload failures usually propagate through collective interactions among system components. Despite the critical need for developing protection or mitigation strategies in networks such as power grids and transportation, the propagation behavior of cascading failures is essentially unknown. Here we find, by analyzing our collected data, that jams in city traffic and faults in the power grid are spatially long-range correlated, with correlations decaying slowly with distance. Moreover, we find in daily traffic that the correlation length increases dramatically and reaches a maximum when the morning or evening rush hour is approaching. Our study can impact all efforts towards actively improving system resilience, ranging from evaluation of design schemes and development of protection strategies to implementation of mitigation programs. PMID:24946927
Cultural variation of perceptions of crew behaviour in multi-pilot aircraft.
Hörmann, H J
2001-09-01
As the "last line of defence", pilots in commercial aviation often have to counteract the effects of unexpected system flaws that could endanger the safety of a given flight. In order to detect and mitigate the consequences of latent or active failures in a timely manner, effective team behaviour of the crew members is an indispensable condition. While this is generally agreed upon in the aviation community, there seems to be a wide range of concepts of how crews should interact most effectively. Within the framework of the European project JARTEL, the cultural robustness of evaluations of crew behaviour was examined. 105 instructor pilots from 14 different airlines representing 12 European countries participated in this project. The instructors' evaluations of crew behaviours in eight video scenarios are compared in relation to cultural differences on Hofstede's dimensions of Power Distance and Individualism.
Hidden Markov models for fault detection in dynamic systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J. (Inventor)
1995-01-01
The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, $p(\omega_i \mid x)$, $1 \le i \le m$. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.
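A minimal sketch of the temporal-integration step may help; it is an illustration consistent with the abstract, not the patented implementation. Per-frame class posteriors are converted to scaled likelihoods (dividing by the class priors) and folded into a standard HMM forward recursion:

```python
import numpy as np

def hmm_filter(posteriors, priors, A):
    """Temporal integration of instantaneous class posteriors with an HMM.
    Each per-frame posterior p(w_i | x_t) is converted to a scaled
    likelihood p(x_t | w_i) ∝ p(w_i | x_t) / p(w_i), then combined with
    the transition matrix A by the forward recursion (predict, update,
    normalize)."""
    belief = np.array(priors, dtype=float)
    for post in posteriors:
        like = np.asarray(post) / priors   # scaled likelihoods
        belief = like * (A.T @ belief)     # predict with A, update with evidence
        belief /= belief.sum()
    return belief

# Two classes: nominal (0) and fault (1); faults tend to persist.
A = np.array([[0.99, 0.01],
              [0.02, 0.98]])
priors = np.array([0.95, 0.05])
frames = [[0.8, 0.2], [0.6, 0.4], [0.3, 0.7], [0.2, 0.8]]  # noisy per-frame posteriors
print(hmm_filter(frames, priors, A))  # smoothed fault probability
```

The temporal smoothing is what suppresses isolated misclassifications that would otherwise trigger false alarms.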
Hidden Markov models for fault detection in dynamic systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J. (Inventor)
1993-01-01
The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, $p(\omega_i \mid x)$, $1 \le i \le m$. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.
Epidemic failure detection and consensus for extreme parallelism
Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas; ...
2017-02-01
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
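A toy gossip-style heartbeat protocol conveys the flavor of such algorithms; the simulation below is an illustrative sketch and not one of the paper's three algorithms. Each live process increments its own counter and merges tables with random peers, and a counter that stops advancing marks a failed process.

```python
import random

def gossip_round(heartbeats, alive, fanout=2):
    """One gossip cycle: every alive process increments its own heartbeat
    counter and merges its table with `fanout` random peers, taking the
    element-wise max (higher counters are fresher)."""
    n = len(heartbeats)
    for i in range(n):
        if not alive[i]:
            continue
        heartbeats[i][i] += 1
        for j in random.sample([k for k in range(n) if k != i], fanout):
            if alive[j]:  # a dead peer neither sends nor receives
                for p in range(n):
                    m = max(heartbeats[i][p], heartbeats[j][p])
                    heartbeats[i][p] = heartbeats[j][p] = m

random.seed(42)
n = 8
heartbeats = [[0] * n for _ in range(n)]
alive = [True] * n
alive[3] = False  # process 3 fails before the first cycle

for cycle in range(10):
    gossip_round(heartbeats, alive)

# Every live process sees a stale (never-incremented) counter for process 3.
print(heartbeats[0])
```

Consensus then amounts to all live processes agreeing on the set of stale entries, which is where the paper's three algorithms differ.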
Advanced detection, isolation and accommodation of sensor failures: Real-time evaluation
NASA Technical Reports Server (NTRS)
Merrill, Walter C.; Delaat, John C.; Bruton, William M.
1987-01-01
The objective of the Advanced Detection, Isolation, and Accommodation (ADIA) Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines by using analytical redundancy to detect sensor failures. The results of a real-time hybrid computer evaluation of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 engine control system are determined. Also included are details about the microprocessor implementation of the algorithm as well as a description of the algorithm itself.
Robust QRS peak detection by multimodal information fusion of ECG and blood pressure signals.
Ding, Quan; Bai, Yong; Erol, Yusuf Bugra; Salas-Boni, Rebeca; Zhang, Xiaorong; Hu, Xiao
2016-11-01
QRS peak detection is a challenging problem when the ECG signal is corrupted. However, additional physiological signals may also provide information about the QRS position. In this study, we focus on a unique benchmark provided by the PhysioNet/Computing in Cardiology Challenge 2014 and the Physiological Measurement focus issue on robust detection of heart beats in multimodal data, which aimed to explore robust methods for QRS detection in multimodal physiological signals. A dataset of 200 training and 210 testing records is used, where the testing records are hidden and used for evaluating performance only. An information fusion framework for robust QRS detection is proposed by leveraging existing ECG and ABP analysis tools and combining heart beats derived from different sources. Results show that our approach achieves an overall accuracy of 90.94% and 88.66% on the training and testing datasets, respectively. Furthermore, we observe the expected performance at each step of the proposed approach, as evidence of its effectiveness. Discussion of the limitations of our approach is also provided.
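The fusion idea, combining beats derived from different sources, can be sketched simply. The fragment below is a hypothetical simplification of such a framework: ABP-derived pulses are shifted back by a nominal pulse-transit delay and used to fill gaps left by a corrupted ECG. All times and tolerances are illustrative.

```python
def fuse_beats(ecg_beats, abp_beats, tol=0.15, abp_delay=0.2):
    """Merge beat times (seconds) detected on ECG with pulse onsets from
    arterial blood pressure. ABP pulses are shifted back by a nominal
    pulse-transit delay; any shifted ABP beat without an ECG beat within
    `tol` seconds is added, recovering beats missed on a corrupted ECG."""
    shifted = [t - abp_delay for t in abp_beats]
    fused = list(ecg_beats)
    for t in shifted:
        if all(abs(t - e) > tol for e in fused):
            fused.append(t)
    return sorted(fused)

ecg = [0.8, 1.6, 3.2, 4.0]       # the beat near 2.4 s is lost to ECG artifact
abp = [1.0, 1.8, 2.6, 3.4, 4.2]  # pressure pulses lag the R peaks
print(fuse_beats(ecg, abp))      # the missing ~2.4 s beat is restored
```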
Analysis of Infrared Signature Variation and Robust Filter-Based Supersonic Target Detection
Sun, Sun-Gu; Kim, Kyung-Tae
2014-01-01
The difficulty of small infrared target detection originates from the variations of infrared signatures. This paper presents the fundamental physics of infrared target variations and reports the results of a variation analysis of infrared images acquired using a long wave infrared camera over a 24-hour period for different types of backgrounds. The detection parameters, such as the signal-to-clutter ratio, were compared according to recording time, temperature and humidity. Through variation analysis, robust target detection methodologies are derived by controlling thresholds and designing a temporal contrast filter to achieve a high detection rate and a low false alarm rate. Experimental results validate the robustness of the proposed scheme by applying it to synthetic and real infrared sequences. PMID:24672290
NASA Astrophysics Data System (ADS)
Amirat, Yassine; Choqueuse, Vincent; Benbouzid, Mohamed
2013-12-01
Failure detection has always been a demanding task in the electrical machines community; it has become more challenging in wind energy conversion systems because the sustainability and viability of wind farms are highly dependent on the reduction of operational and maintenance costs. Indeed, the most efficient way of reducing these costs would be to continuously monitor the condition of these systems. This allows for early detection of generator health degeneration, facilitating a proactive response, minimizing downtime, and maximizing productivity. This paper then provides an assessment of a failure detection technique based on the homopolar component of the generator stator current and attempts to highlight the use of ensemble empirical mode decomposition as a tool for failure detection in wind turbine generators for stationary and non-stationary cases.
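For concreteness, the homopolar (zero-sequence) component that the technique monitors is just the common-mode part of the three phase currents, i0 = (ia + ib + ic)/3, which is near zero for a healthy balanced machine. The sketch below uses illustrative synthetic signals; the ensemble empirical mode decomposition stage discussed in the paper is omitted here.

```python
import numpy as np

# Healthy balanced three-phase stator currents at 50 Hz.
fs = 5000
t = np.arange(0, 0.2, 1 / fs)
ia = np.sin(2 * np.pi * 50 * t)
ib = np.sin(2 * np.pi * 50 * t - 2 * np.pi / 3)
ic = np.sin(2 * np.pi * 50 * t + 2 * np.pi / 3)

fault = 0.1 * np.sin(2 * np.pi * 150 * t)  # illustrative common-mode fault signature

# Homopolar component: cancels exactly for the balanced machine,
# but carries the fault's common-mode content when one appears.
i0_healthy = (ia + ib + ic) / 3.0
i0_faulty = (ia + ib + ic + fault) / 3.0

# RMS of the homopolar component as a simple health index.
print(np.sqrt(np.mean(i0_healthy**2)), np.sqrt(np.mean(i0_faulty**2)))
```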
Robust lane detection and tracking using multiple visual cues under stochastic lane shape conditions
NASA Astrophysics Data System (ADS)
Huang, Zhi; Fan, Baozheng; Song, Xiaolin
2018-03-01
As one of the essential components of environment perception for an intelligent vehicle, lane detection is confronted with challenges including robustness against complicated disturbances and illumination, as well as adaptability to stochastic lane shapes. To overcome these issues, we propose a robust lane detection method that applies a classification-generation-growth-based (CGG) operator to the detected lines, whereby the linear lane markings are identified by synergizing multiple visual cues with a priori knowledge and spatial-temporal information. According to the quality of the linear lane fitting, the linear and linear-parabolic models are dynamically switched to describe the actual lane. A Kalman filter with adaptive noise covariance and region-of-interest (ROI) tracking are applied to improve robustness and efficiency. Experiments were conducted with images covering various challenging scenarios. The experimental results demonstrate the effectiveness of the presented method under complicated disturbances, illumination, and stochastic lane shapes.
Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji
2003-01-01
Reliable detection of ordinary facial expressions (e.g. smile) despite the variability among individuals as well as face appearance is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The results show reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.
Failure detection and fault management techniques for flush airdata sensing systems
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.
1992-01-01
Methods based on chi-squared analysis are presented for detecting system and individual-port failures in the high-angle-of-attack flush airdata sensing (HI-FADS) system on the NASA F-18 High Alpha Research Vehicle. The HI-FADS hardware is introduced, and the aerodynamic model describes measured pressure in terms of dynamic pressure, angle of attack, angle of sideslip, and static pressure. Chi-squared analysis is described in the presentation of the concept for failure detection and fault management, which includes nominal, iteration, and fault-management modes. A matrix of pressure orifices arranged in concentric circles on the nose of the aircraft provides the pressure measurements that are applied to the regression algorithms. The sensing techniques are applied to F-18 flight data, and two examples are given of the computed angle-of-attack time histories. The failure-detection and fault-management techniques permit the matrix to be multiply redundant, and the chi-squared analysis is shown to be useful in the detection of failures.
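A stripped-down version of the chi-squared consistency check is sketched below. It illustrates the general idea rather than the HI-FADS algorithm, assuming independent Gaussian pressure errors with known standard deviation; the port readings are invented for the example.

```python
import numpy as np
from scipy.stats import chi2

def port_failure_check(measured, predicted, sigma, alpha=0.01):
    """Chi-squared consistency check between measured port pressures and
    the aerodynamic-model fit. A large normalized residual sum flags a
    failure; the largest single contribution points at the suspect port,
    which can then be dropped from the redundant orifice matrix."""
    r = (np.asarray(measured) - np.asarray(predicted)) / sigma
    stat = float(r @ r)
    limit = chi2.ppf(1 - alpha, df=len(r))
    worst = int(np.argmax(np.abs(r)))
    return stat > limit, worst

measured = [101.2, 99.8, 100.5, 100.1, 92.0]   # port 4 reads far too low
predicted = [101.0, 100.0, 100.4, 100.2, 100.1]
print(port_failure_check(measured, predicted, sigma=0.5))  # (True, 4)
```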
Failure Detecting Method of Fault Current Limiter System with Rectifier
NASA Astrophysics Data System (ADS)
Tokuda, Noriaki; Matsubara, Yoshio; Asano, Masakuni; Ohkuma, Takeshi; Sato, Yoshibumi; Takahashi, Yoshihisa
A fault current limiter (FCL) is widely needed to suppress fault currents, particularly in trunk power systems connecting high-voltage transmission lines, such as the 500 kV class power system which constitutes the nucleus of the electric power system. We proposed a new type of FCL system (rectifier-type FCL), consisting of solid-state diodes, a DC reactor and a bypass AC reactor, and demonstrated the excellent performance of this FCL by developing small 6.6 kV and 66 kV models. It is important to detect the failure of the power devices used in the rectifier under normal operating conditions in order to maintain the reliability of the power system. In this paper, we propose a new failure detecting method for power devices best suited to the rectifier-type FCL. This failure detecting system is simple and compact. We applied the proposed system to the 66 kV prototype single-phase model and successfully demonstrated detection of power device failures.
Melin, Michael; Montelius, Andreas; Rydén, Lars; Gonon, Adrian; Hagerman, Inger; Rullman, Eric
2018-01-01
Enhanced external counterpulsation (EECP) is a non-invasive treatment in which leg cuff compressions increase diastolic aortic pressure and coronary perfusion. EECP is offered to patients with refractory angina pectoris and increases physical capacity. Benefits in heart failure patients have been noted, but EECP is still considered to be experimental and its effects must be confirmed. The mechanism of action is still unclear. The aim of this study was to evaluate the effect of EECP on skeletal muscle gene expression and physical performance in patients with severe heart failure. Patients (n = 9) in NYHA III-IV despite pharmacological therapy were subjected to 35 h of EECP during 7 weeks. Before and after, lateral vastus muscle biopsies were obtained, and functional capacity was evaluated with a 6-min walk test. Skeletal muscle gene expression was evaluated using Affymetrix Hugene 1.0 arrays. Maximum walking distance increased by 15%, which is on a par with that achieved after aerobic exercise training in similar patients. Skeletal muscle gene expression analysis using Ingenuity Pathway Analysis showed an increased expression of two networks of genes with FGF-2 and IGF-1 as central regulators. The increase in gene expression was quantitatively small, and no overlap with gene expression profiles after exercise training could be detected despite adequate statistical power. EECP treatment leads to a robust improvement in walking distance in patients with severe heart failure and does induce a skeletal muscle transcriptional response, but this response is small and has no significant overlap with the transcriptional signature seen after exercise training. © 2016 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
Space Shuttle Main Engine: Advanced Health Monitoring System
NASA Technical Reports Server (NTRS)
Singer, Chris
1999-01-01
The main goal of the Space Shuttle Main Engine (SSME) Advanced Health Management system is to improve flight safety. To this end, the new SSME has robust new components to improve the operating margin and operability. The features of the current SSME health monitoring system include automated checkouts, a closed-loop redundant control system, catastrophic failure mitigation, fail-operational/fail-safe algorithms, and post-flight data and inspection trend analysis. The features of the advanced health monitoring system include a real-time vibration monitoring system, a linear engine model, and an optical plume anomaly detection system. Since vibration is a fundamental measure of SSME turbopump health, it stands to reason that monitoring the vibration will give some idea of the health of the turbopumps. The challenge, however, is to avoid shutting down the engine when it is not necessary. A sensor algorithm has been developed and exposed to over 400 test cases in order to evaluate its logic. The optical plume anomaly detection (OPAD) system has been developed to be a sensitive monitor of engine wear, erosion, and breakage.
Wave failure at strong coupling in an intracellular Ca2+ signaling system with clustered channels
NASA Astrophysics Data System (ADS)
Li, Xiang; Wu, Yuning; Gao, Xuejuan; Cai, Meichun; Shuai, Jianwei
2018-01-01
As an important intracellular signal, Ca2+ ions control diverse cellular functions. In this paper, we discuss Ca2+ signaling with a two-dimensional model in which the inositol 1,4,5-trisphosphate (IP3) receptor channels are distributed in clusters on the endoplasmic reticulum membrane. The wave failure at large Ca2+ diffusion coupling is discussed in detail in the model. We show that with varying model parameters the wave failure is a robust behavior with either deterministic or stochastic channel dynamics. We suggest that the wave failure should be a general behavior in inhomogeneous diffusing systems with clustered excitable regions and may occur in biological Ca2+ signaling systems.
Syndromic surveillance for health information system failures: a feasibility study.
Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico
2013-05-01
To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65-0.85. Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures.
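A minimal control-chart monitor in the spirit of index (1), total laboratory records created, might look as follows. This is a hedged sketch with simulated counts, not the hospital system described in the study; a Shewhart chart stands in for the paper's time-series models.

```python
import numpy as np

def shewhart_alarms(series, train=30, k=3.0):
    """Statistical process control on a monitored index (e.g., hourly
    count of laboratory records created). The first `train` points set
    the in-control mean and spread; later points beyond k-sigma limits
    raise alarms, signalling a possible HIT system failure."""
    base = np.asarray(series[:train], dtype=float)
    mu, sd = base.mean(), base.std(ddof=1)
    return [i for i, x in enumerate(series[train:], start=train)
            if abs(x - mu) > k * sd]

rng = np.random.default_rng(5)
counts = list(rng.poisson(200, 60))
counts[45:] = [int(c * 0.65) for c in counts[45:]]  # 35% record loss begins at index 45
print(shewhart_alarms(counts))  # indices where the record count breaches limits
```

As in the study, detection sensitivity depends on the size of the induced deviation: a 35% loss breaches the limits almost immediately, while a 1% loss would rarely clear them.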
NASA Astrophysics Data System (ADS)
Chen, Xianshun; Feng, Liang; Ong, Yew Soon
2012-07-01
In this article, we propose a self-adaptive memeplex robust search (SAMRS) for finding robust and reliable solutions that are less sensitive to the stochastic behaviour of customer demands and have a low probability of route failures in the vehicle routing problem with stochastic demands (VRPSD). In particular, the contribution of this article is three-fold. First, the proposed SAMRS employs the robust solution search scheme (RS 3) as an approximation of the computationally intensive Monte Carlo simulation, thus reducing the computational cost of fitness evaluation in VRPSD, while directing the search towards robust and reliable solutions. Furthermore, a self-adaptive individual learning based on the conceptual modelling of the memeplex is introduced in SAMRS. Finally, SAMRS incorporates a gene-meme co-evolution model with genetic and memetic representation to effectively manage the search for solutions in VRPSD. Extensive experimental results are then presented for benchmark problems to demonstrate that the proposed SAMRS serves as an effective means of generating high-quality robust and reliable solutions in VRPSD.
Robustness of networks with assortative dependence groups
NASA Astrophysics Data System (ADS)
Wang, Hui; Li, Ming; Deng, Lin; Wang, Bing-Hong
2018-07-01
Assortativity is one of the important characteristics of real networks. To study the effects of this characteristic on the robustness of networks, we propose a percolation model on networks with assortative dependence groups. The assortativity in this model means that nodes with the same or similar degrees form dependence groups, such that if one node fails, the other nodes in the same group are very likely to fail. We find that the assortativity makes the nodes with large degrees easier to survive the cascading failure. In this way, such networks are more robust than those with random dependence groups, which also demonstrates, from another perspective, that assortative networks are robust. Furthermore, we also present exact solutions for the size of the giant component and the critical point, which agree well with the simulation results.
Influence of Different Coupling Modes on the Robustness of Smart Grid under Targeted Attack.
Kang, WenJie; Hu, Gang; Zhu, PeiDong; Liu, Qiang; Hang, Zhi; Liu, Xin
2018-05-24
Many previous works only focused on the cascading failure of global coupling of one-to-one structures in interdependent networks, but the local coupling of dual coupling structures has rarely been studied due to its complex structure. This can have the serious consequence that many conclusions drawn for the one-to-one structure may be incorrect in the dual coupling network and do not apply to the smart grid. Therefore, it is very necessary to subdivide the dual coupling link into a top-down coupling link and a bottom-up coupling link in order to study their influence on network robustness in combination with different coupling modes. Additionally, the power flow of the power grid can cause the load of a failed node to be allocated to its neighboring nodes and trigger a new round of load distribution when the load of these nodes exceeds their capacity. This means that the robustness of smart grids may be affected by four factors, i.e., load redistribution, local coupling, the dual coupling link and the coupling mode; however, research on the influence of those factors on network robustness is missing. In this paper, firstly, we construct the smart grid as a two-layer network with a dual coupling link and divide the power grid and communication network into many subnets based on the geographical location of their nodes. Secondly, we define node importance (NI) as an evaluation index to assess the impact of nodes on the cyber or physical network and propose three types of coupling modes based on the NI of nodes in the cyber and physical subnets, i.e., Assortative Coupling in Subnets (ACIS), Disassortative Coupling in Subnets (DCIS), and Random Coupling in Subnets (RCIS). Thirdly, a cascading failure model is proposed for studying the effect of local coupling of the dual coupling link in combination with ACIS, DCIS, and RCIS on the robustness of the smart grid against a targeted attack, and the survival rate of functional nodes is used to assess the robustness of the smart grid. Finally, we use the IEEE 118-Bus System and the Italian High-Voltage Electrical Transmission Network to verify our model and obtain the same conclusions: (I) DCIS applied to the top-down coupling link is better able to enhance the robustness of the smart grid against a targeted attack than RCIS or ACIS, (II) ACIS applied to a bottom-up coupling link is better able to enhance the robustness of the smart grid against a targeted attack than RCIS or DCIS, and (III) the robustness of the smart grid can be improved by increasing the tolerance α. This paper provides some guidelines for slowing down the speed of cascading failures in the design of architecture and optimization of interdependent networks, such as a top-down link with DCIS, a bottom-up link with ACIS, and an increased tolerance α.
Integrated health management and control of complex dynamical systems
NASA Astrophysics Data System (ADS)
Tolani, Devendra K.
2005-11-01
A comprehensive control and health management strategy for human-engineered complex dynamical systems is formulated for achieving high performance and reliability over a wide range of operation. Results from diverse research areas such as Probabilistic Robust Control (PRC), Damage Mitigating/Life Extending Control (DMC), Discrete Event Supervisory (DES) Control, Symbolic Time Series Analysis (STSA) and Health and Usage Monitoring System (HUMS) have been employed to achieve this goal. Continuous-domain control modules at the lower level are synthesized by PRC and DMC theories, whereas the upper-level supervision is based on DES control theory. In the PRC approach, by allowing different levels of risk under different flight conditions, the control system can achieve the desired trade off between stability robustness and nominal performance. In the DMC approach, component damage is incorporated in the control law to reduce the damage rate for enhanced structural durability. The DES controller monitors the system performance and, based on the mission requirements (e.g., performance metrics and level of damage mitigation), switches among various lower-level controllers. The core idea is to design a framework where the DES controller at the upper-level, mimics human intelligence and makes appropriate decisions to satisfy mission requirements, enhance system performance and structural durability. Recently developed tools in STSA have been used for anomaly detection and failure prognosis. The DMC deals with the usage monitoring or operational control part of health management, whereas the issue of health monitoring is addressed by the anomaly detection tools. The proposed decision and control architecture has been validated on two test-beds, simulating the operations of rotorcraft dynamics and aircraft propulsion.
Remote Structural Health Monitoring and Advanced Prognostics of Wind Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Douglas Brown; Bernard Laskowski
The prospect of substantial investment in wind energy generation represents a significant capital investment strategy. In order to maximize the life-cycle of wind turbines, associated rotors, gears, and structural towers, a capability to detect and predict (prognostics) the onset of mechanical faults at a sufficiently early stage for maintenance actions to be planned would significantly reduce both maintenance and operational costs. Advancement towards this effort has been made through the development of anomaly detection, fault detection and fault diagnosis routines to identify selected fault modes of a wind turbine based on available sensor data preceding an unscheduled emergency shutdown. The anomaly detection approach employs spectral techniques to find an approximation of the data using a combination of attributes that capture the bulk of variability in the data. Fault detection and diagnosis (FDD) is performed using a neural network-based classifier trained from baseline and fault data recorded during known failure conditions. The approach has been evaluated for known baseline conditions and three selected failure modes: pitch rate failure, low oil pressure failure and a gearbox gear-tooth failure. Experimental results demonstrate the approach can distinguish between these failure modes and normal baseline behavior within a specified statistical accuracy.
Robust video copy detection approach based on local tangent space alignment
NASA Astrophysics Data System (ADS)
Nie, Xiushan; Qiao, Qianping
2012-04-01
We propose a robust content-based video copy detection approach based on local tangent space alignment (LTSA), an efficient dimensionality reduction algorithm. The approach is motivated by the fact that video content is becoming richer and its dimensionality higher, which makes direct video analysis and understanding difficult. The proposed approach reduces the dimensionality of the video content using LTSA and then generates video fingerprints in the low-dimensional space for copy detection. Furthermore, a dynamic sliding window is applied to fingerprint matching. Experimental results show that the video copy detection approach has good robustness and discrimination.
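LTSA is available off the shelf, so the pipeline is easy to prototype. The sketch below is an illustrative pipeline, not the authors' system: it uses scikit-learn's LTSA variant of locally linear embedding to map random stand-in per-frame features to a low-dimensional trajectory, from which a crude clip-level fingerprint is taken.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Stand-in data: 300 frames, each described by 64 raw features.
rng = np.random.default_rng(7)
frames = rng.random((300, 64))

# LTSA embedding of the frame trajectory into 3 dimensions
# (for method='ltsa', n_neighbors must exceed n_components).
ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=3, method="ltsa")
embedded = ltsa.fit_transform(frames)   # shape (300, 3)

# One crude clip-level fingerprint; a real system would fingerprint
# sliding windows of the embedded trajectory instead.
fingerprint = embedded.mean(axis=0)
print(fingerprint)
```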
Robustness Elasticity in Complex Networks
Matisziw, Timothy C.; Grubesic, Tony H.; Guo, Junyu
2012-01-01
Network robustness refers to a network’s resilience to stress or damage. Given that most networks are inherently dynamic, with changing topology, loads, and operational states, their robustness is also likely subject to change. However, in most analyses of network structure, it is assumed that interaction among nodes has no effect on robustness. To investigate the hypothesis that network robustness is not sensitive or elastic to the level of interaction (or flow) among network nodes, this paper explores the impacts of network disruption, namely arc deletion, over a temporal sequence of observed nodal interactions for a large Internet backbone system. In particular, a mathematical programming approach is used to identify exact bounds on robustness to arc deletion for each epoch of nodal interaction. Elasticity of the identified bounds relative to the magnitude of arc deletion is assessed. Results indicate that system robustness can be highly elastic to spatial and temporal variations in nodal interactions within complex systems. Further, the presence of this elasticity provides evidence that a failure to account for nodal interaction can confound characterizations of complex networked systems. PMID:22808060
Error and attack tolerance of complex networks
NASA Astrophysics Data System (ADS)
Albert, Réka; Jeong, Hawoong; Barabási, Albert-László
2000-07-01
Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network. Complex communication networks display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web, the Internet, social networks and cells. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks.
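The error-versus-attack asymmetry is easy to reproduce numerically. The following sketch, with illustrative parameters, compares the giant component of a scale-free graph after random node failures and after removal of an equal number of highest-degree hubs.

```python
import random
import networkx as nx

def giant_fraction_after_removal(G, nodes):
    """Fraction of the original network in the largest component
    after deleting the given nodes."""
    H = G.copy()
    H.remove_nodes_from(nodes)
    giant = max(nx.connected_components(H), key=len)
    return len(giant) / G.number_of_nodes()

n = 3000
sf = nx.barabasi_albert_graph(n, 2, seed=3)  # scale-free test network
random.seed(3)
f = 0.05                                     # remove 5% of the nodes

random_nodes = random.sample(list(sf), int(f * n))             # "errors"
hubs = sorted(sf, key=sf.degree, reverse=True)[: int(f * n)]   # "attack"

print("random failures:", giant_fraction_after_removal(sf, random_nodes))  # barely shrinks
print("hub attack:     ", giant_fraction_after_removal(sf, hubs))          # shrinks far more
```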
NASA Astrophysics Data System (ADS)
Gao, Gang; Wang, Jinzhi; Wang, Xianghua
2017-05-01
This paper investigates fault-tolerant control (FTC) for feedback linearisable systems (FLSs) and its application to an aircraft. To ensure desired transient and steady-state behaviours of the tracking error under actuator faults, the dynamic effect caused by the actuator failures on the error dynamics of a transformed model is analysed, and three control strategies are designed. The first FTC strategy is proposed as a robust controller, which relies on the explicit information about several parameters of the actuator faults. To eliminate the need for these parameters and the input chattering phenomenon, the robust control law is later combined with the adaptive technique to generate the adaptive FTC law. Next, the adaptive control law is further improved to achieve the prescribed performance under more severe input disturbance. Finally, the proposed control laws are applied to an air-breathing hypersonic vehicle (AHV) subject to actuator failures, which confirms the effectiveness of the proposed strategies.
Preliminary results from BCG and ECG measurements in the heart failure clinic.
Giovangrandi, Laurent; Inan, Omer T; Banerjee, Dipanjan; Kovacs, Gregory T A
2012-01-01
We report on the preliminary deployment of a bathroom scale-based ballistocardiogram (BCG) system for the in-hospital monitoring of patients with heart failure. These early trials provided valuable insights into the challenges and opportunities for such monitoring. In particular, the need for robust algorithms and adapted BCG metrics is suggested. The system was designed to be robust and user-friendly, with dual ballistocardiogram (BCG) and electrocardiogram (ECG) capabilities. The BCG was measured from a modified bathroom scale, while the ECG (used as a timing reference) was measured using dry handlebar electrodes. The signal conditioning and digitization circuits were USB-powered, and data acquisition was performed using a netbook. Four patients with NYHA class III at admission were measured daily for the duration of their treatment at Stanford hospital. A measure of BCG quality, in essence a quantitative implementation of the BCG classes originally defined in the 1950s, is proposed as a practical parameter.
Robust holographic storage system design.
Watanabe, Takahiro; Watanabe, Minoru
2011-11-21
Demand is increasing daily for large data storage systems that are useful for applications in spacecraft, space satellites, and space robots, which are all exposed to the radiation-rich space environment. As candidates for use in space embedded systems, holographic storage systems are promising because they can easily provide the demanded large storage capability. In particular, holographic storage systems that have no rotation mechanism are in demand because they are virtually maintenance-free. Although a holographic memory itself is an extremely robust device even in a space radiation environment, its associated lasers and drive circuit devices are vulnerable. Such vulnerabilities sometimes engender severe problems that prevent reading of all contents of the holographic memory, which is a turn-off failure mode of a laser array. This paper therefore presents a proposal for a recovery method for the turn-off failure mode of a laser array on a holographic storage system, and describes the results of an experimental demonstration. © 2011 Optical Society of America
Wong, Timothy C.; Piehler, Kayla M.; Zareba, Karolina M.; Lin, Kathie; Phrampus, Ashley; Patel, Agam; Moon, James C.; Ugander, Martin; Valeti, Uma; Holtz, Jonathan E.; Fu, Bo; Chang, Chung‐Chou H.; Mathier, Michael; Kellman, Peter; Butler, Javed; Gheorghiade, Mihai; Schelbert, Erik B.
2013-01-01
Background Hospitalization for heart failure (HHF) is among the most important problems confronting medicine. Late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) robustly identifies intrinsic myocardial damage. LGE may indicate inherent vulnerability to HHF, regardless of etiology, across the spectrum of heart failure stage or left ventricular ejection fraction (LVEF). Methods and Results We enrolled 1068 consecutive patients referred for CMR where 448 (42%) exhibited LGE. After a median of 1.4 years (Q1 to Q3: 0.9 to 2.0 years), 57 HHF events occurred, 15 deaths followed HHF, and 43 deaths occurred without antecedent HHF (58 total deaths). Using multivariable Cox regression adjusting for LVEF, heart failure stage, and other covariates, LGE was associated with first HHF after CMR (HR: 2.70, 95% CI: 1.32 to 5.50), death (HR: 2.13, 95% CI: 1.08 to 4.21), or either death or HHF (HR: 2.52, 95% CI: 1.49 to 4.25). Quantifying LGE extent yielded similar results; more LGE equated to higher risk. LGE improved model discrimination (IDI: 0.016, 95% CI: 0.005 to 0.028, P=0.002) and reclassification of individuals at risk (continuous NRI: 0.40, 95% CI: 0.05 to 0.70, P=0.024). Adjustment for the competing risk of death, which shares common risk factors with HHF, strengthened the LGE and HHF association (HR: 4.85, 95% CI: 1.40 to 16.9). Conclusions The presence and extent of LGE is associated with vulnerability to HHF, including higher risks of HHF across the spectrum of heart failure stage and LVEF. Even when LVEF is severely decreased, those without LGE appear to fare reasonably well. LGE may enhance risk stratification for HHF and may enhance both clinical and research efforts to reduce HHF through targeted treatment. PMID:24249712
Real-time diagnostics of the reusable rocket engine using on-line system identification
NASA Technical Reports Server (NTRS)
Guo, T.-H.; Merrill, W.; Duyar, A.
1990-01-01
A model-based failure diagnosis system has been proposed for real-time diagnosis of SSME failures. Actuation, sensor, and system degradation failure modes are all considered by the proposed system. In the case of SSME actuation failures, it was shown that real-time identification can effectively be used for failure diagnosis purposes. It is a direct approach since it reduces the detection, isolation, and the estimation of the extent of the failures to the comparison of parameter values before and after the failure. As with any model-based failure detection system, the proposed approach requires a fault model that embodies the essential characteristics of the failure process. The proposed diagnosis approach has the added advantage that it can be used as part of an intelligent control system for failure accommodation purposes.
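On-line identification of this kind is often done with recursive least squares. The sketch below is a generic illustration under simple assumptions (a first-order model whose gain drops mid-run), not the SSME diagnosis system itself; diagnosis reduces to comparing parameter estimates before and after the suspected failure.

```python
import numpy as np

class RecursiveLeastSquares:
    """On-line identification of a linear-in-parameters model
    y_t = theta . phi_t, with exponential forgetting so the
    estimates can track parameter changes caused by a failure."""

    def __init__(self, n, lam=0.98):
        self.theta = np.zeros(n)     # parameter estimates
        self.P = np.eye(n) * 1e3     # estimate covariance (large: uninformed)
        self.lam = lam               # forgetting factor

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain
        self.theta = self.theta + k * (y - phi @ self.theta) # correct estimate
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

# First-order actuator model y_t = a*y_{t-1} + b*u_{t-1}; the gain b
# drops at t = 200, emulating a degraded valve.
rng = np.random.default_rng(11)
rls = RecursiveLeastSquares(2)
y = 0.0
for t in range(400):
    u = rng.uniform(-1, 1)
    a, b = 0.9, (1.0 if t < 200 else 0.5)
    y_next = a * y + b * u + 0.01 * rng.normal()
    rls.update([y, u], y_next)
    y = y_next

print(rls.theta)  # estimated (a, b); b settles near 0.5 after the failure
```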
The Need for Intelligent Control of Space Power Systems
NASA Technical Reports Server (NTRS)
May, Ryan David; Soeder, James F.; Beach, Raymond F.; McNelis, Nancy B.
2013-01-01
As manned spacecraft venture farther from Earth, the need for reliable, autonomous control of vehicle subsystems becomes critical. This is particularly true for the electrical power system, which is critical to every other system. Autonomy cannot be achieved by simple scripting techniques because of communication latency times and the difficulty of handling failures (or combinations of failures) in as graceful a manner as possible to ensure system availability. Therefore, an intelligent control system must be developed that can respond to disturbances and failures in a robust manner and ensure that critical system loads are served and all system constraints are respected.
A robust human face detection algorithm
NASA Astrophysics Data System (ADS)
Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.
2012-01-01
Human face detection plays a vital role in many applications, such as video surveillance, managing a face image database, and human-computer interfaces, among others. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a conjunction of skin color histograms, morphological processing, and geometrical analysis to detect human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of interest.
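A rough sketch of a skin-colour-plus-morphology pipeline of the general kind outlined above, using OpenCV; the YCrCb skin bounds, the area cutoff, and the aspect-ratio test are common heuristic choices, not the paper's exact values.

```python
# Candidate face regions via skin-colour segmentation, morphological cleanup,
# and simple geometric filtering. All numeric bounds are heuristic assumptions.
import cv2
import numpy as np

def candidate_face_regions(bgr):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # heuristic bounds
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, kernel)      # remove speckle
    skin = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, kernel)     # fill small holes
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 400 and 0.6 < w / float(h) < 1.4:           # face-like geometry
            boxes.append((x, y, w, h))
    return boxes   # eye/mouth verification would follow, per the abstract
```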
Reducing Cascading Failure Risk by Increasing Infrastructure Network Interdependence.
Korkali, Mert; Veneman, Jason G; Tivnan, Brian F; Bagrow, James P; Hines, Paul D H
2017-03-20
Increased interconnection between critical infrastructure networks, such as electric power and communications systems, has important implications for infrastructure reliability and security. Others have shown that increased coupling between networks that are vulnerable to internetwork cascading failures can increase vulnerability. However, the mechanisms of cascading in these models differ from those in real systems and such models disregard new functions enabled by coupling, such as intelligent control during a cascade. This paper compares the robustness of simple topological network models to models that more accurately reflect the dynamics of cascading in a particular case of coupled infrastructures. First, we compare a topological contagion model to a power grid model. Second, we compare a percolation model of internetwork cascading to three models of interdependent power-communication systems. In both comparisons, the more detailed models suggest substantially different conclusions, relative to the simpler topological models. In all but the most extreme case, our model of a "smart" power network coupled to a communication system suggests that increased power-communication coupling decreases vulnerability, in contrast to the percolation model. Together, these results suggest that robustness can be enhanced by interconnecting networks with complementary capabilities if modes of internetwork failure propagation are constrained.
Syndromic surveillance for health information system failures: a feasibility study
Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico
2013-01-01
Objective To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. Methods A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. Results In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65–0.85. Conclusions Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures. PMID:23184193
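A minimal sketch of statistical process control on one monitored index (e.g., the daily count of records with missing results); the trailing-window model below is a stand-in for the paper's time-series models, and the window length and k-sigma limit are assumptions.

```python
# Shewhart-style control chart: flag days whose count deviates more than
# k standard deviations from a trailing-window mean. Window and k are
# illustrative choices, not the paper's fitted models.
import numpy as np

def control_chart_alarms(counts, window=28, k=3.0):
    counts = np.asarray(counts, dtype=float)
    alarms = []
    for t in range(window, len(counts)):
        hist = counts[t - window:t]
        mu, sigma = hist.mean(), hist.std(ddof=1) + 1e-9
        if abs(counts[t] - mu) > k * sigma:
            alarms.append(t)   # possible HIT failure on day t
    return alarms
```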
Risk analysis of analytical validations by probabilistic modification of FMEA.
Barends, D M; Oldenhof, M T; Vredenbregt, M J; Nauta, M J
2012-05-01
Risk analysis is a valuable addition to the validation of an analytical chemistry process, enabling detection not only of technical risks but also of risks related to human failures. Failure Mode and Effect Analysis (FMEA) can be applied, using categorical risk scoring of the occurrence, detection, and severity of failure modes and calculating the Risk Priority Number (RPN) to select failure modes for correction. We propose a probabilistic modification of FMEA, replacing the categorical scoring of occurrence and detection by their estimated relative frequencies while maintaining the categorical scoring of severity. In an example, the results of traditional FMEA of a Near Infrared (NIR) analytical procedure used for the screening of suspected counterfeit tablets are reinterpreted by this probabilistic modification of FMEA. Using this probabilistic modification, the frequency of occurrence of undetected failure modes can be estimated quantitatively for each individual failure mode, for a set of failure modes, and for the full analytical procedure. Copyright © 2012 Elsevier B.V. All rights reserved.
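The core of the probabilistic modification can be illustrated in a few lines: occurrence and detection become estimated relative frequencies, severity stays categorical, and the frequency of an undetected failure is the product p_occurrence × (1 − p_detection). The failure modes and numbers below are invented for illustration.

```python
# Probabilistic FMEA sketch with invented failure modes and frequencies.
failure_modes = [
    # (name, p_occurrence per run, p_detection, severity category 1-5)
    ("wrong reference spectrum", 0.002, 0.95, 4),
    ("operator skips blank scan", 0.010, 0.80, 3),
    ("instrument drift",          0.005, 0.60, 5),
]

for name, p_occ, p_det, sev in failure_modes:
    p_undetected = p_occ * (1.0 - p_det)
    print(f"{name}: undetected failure frequency = {p_undetected:.2e}, severity {sev}")

# Frequency of at least one undetected failure over the full procedure:
p_none = 1.0
for _, p_occ, p_det, _ in failure_modes:
    p_none *= 1.0 - p_occ * (1.0 - p_det)
print(f"procedure-level undetected failure frequency = {1.0 - p_none:.2e}")
```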
DC-to-AC inverter ratio failure detector
NASA Technical Reports Server (NTRS)
Ebersole, T. J.; Andrews, R. E.
1975-01-01
The failure detection technique is based upon input-output ratios, which are independent of inverter loading. Since the inverter has a fixed relationship between V-in/V-out and I-in/I-out, the failure detection criteria are based on this ratio, which is simply the inverter transformer turns ratio, K, equal to primary turns divided by secondary turns.
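A minimal sketch of the ratio test, assuming an ideal transformer (so V_in/V_out = K and, by power conservation, I_out/I_in = K) and an assumed tolerance:

```python
# Ratio-based inverter failure check; tol is an assumed design margin.
def inverter_failed(v_in, v_out, i_in, i_out, K, tol=0.05):
    voltage_ok = abs(v_in / v_out - K) <= tol * K
    current_ok = abs(i_out / i_in - K) <= tol * K   # currents scale inversely
    return not (voltage_ok and current_ok)
```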
Impaired face detection may explain some but not all cases of developmental prosopagnosia.
Dalrymple, Kirsten A; Duchaine, Brad
2016-05-01
Developmental prosopagnosia (DP) is defined by severe face recognition difficulties due to the failure to develop the visual mechanisms for processing faces. The two-process theory of face recognition (Morton & Johnson, 1991) implies that DP could result from a failure of an innate face detection system; this failure could prevent an individual from then tuning higher-level processes for face recognition (Johnson, 2005). Work with adults indicates that some individuals with DP have normal face detection whereas others are impaired. However, face detection has not been addressed in children with DP, even though their results may be especially informative because they have had less opportunity to develop strategies that could mask detection deficits. We tested the face detection abilities of seven children with DP. Four were impaired at face detection to some degree (i.e. abnormally slow, or failed to find faces) while the remaining three children had normal face detection. Hence, the cases with impaired detection are consistent with the two-process account suggesting that DP could result from a failure of face detection. However, the cases with normal detection implicate a higher-level origin. The dissociation between normal face detection and impaired identity perception also indicates that these abilities depend on different neurocognitive processes. © 2015 John Wiley & Sons Ltd.
Robust and efficient anomaly detection using heterogeneous representations
NASA Astrophysics Data System (ADS)
Hu, Xing; Hu, Shiqiang; Xie, Jinhua; Zheng, Shiyou
2015-05-01
Various approaches have been proposed for video anomaly detection. Yet these approaches typically suffer from one or more limitations: they often characterize the pattern using its internal information, but ignore its external relationship which is important for local anomaly detection. Moreover, the high-dimensionality and the lack of robustness of pattern representation may lead to problems, including overfitting, increased computational cost and memory requirements, and high false alarm rate. We propose a video anomaly detection framework which relies on a heterogeneous representation to account for both the pattern's internal information and external relationship. The internal information is characterized by slow features learned by slow feature analysis from low-level representations, and the external relationship is characterized by the spatial contextual distances. The heterogeneous representation is compact, robust, efficient, and discriminative for anomaly detection. Moreover, both the pattern's internal information and external relationship can be taken into account in the proposed framework. Extensive experiments demonstrate the robustness and efficiency of our approach by comparison with the state-of-the-art approaches on the widely used benchmark datasets.
NASA Astrophysics Data System (ADS)
Andrews, Stephen K.; Kelvin, Lee S.; Driver, Simon P.; Robotham, Aaron S. G.
2014-01-01
The 2MASS, UKIDSS-LAS, and VISTA VIKING surveys have all now observed the GAMA 9hr region in the Ks band. Here we compare the detection rates, photometry, basic size measurements, and single-component GALFIT structural measurements for a sample of 37 591 galaxies. We explore the sensitivity limits within which the data agree for a variety of issues, including detection, star-galaxy separation, photometric measurements, size and ellipticity measurements, and Sérsic measurements. We find that 2MASS fails to detect at least 20% of the galaxy population within all magnitude bins; however, for those galaxies that are detected, we find photometry is robust (±0.2 mag) to 14.7 AB mag and star-galaxy separation to 14.8 AB mag. For UKIDSS-LAS we find incompleteness starts to enter at a flux limit of 18.9 AB mag, star-galaxy separation is robust to 16.3 AB mag, and structural measurements are robust to 17.7 AB mag. VISTA VIKING data are complete to approximately 20.0 AB mag and structural measurements appear robust to 18.8 AB mag.
Real-time estimation of ionospheric delay using GPS measurements
NASA Astrophysics Data System (ADS)
Lin, Lao-Sheng
1997-12-01
When radio waves such as the GPS signals propagate through the ionosphere, they experience an extra time delay. The ionospheric delay can be eliminated (to the first order) through a linear combination of L1 and L2 observations from dual-frequency GPS receivers. Taking advantage of this dispersive principle, one or more dual-frequency GPS receivers can be used to determine a model of the ionospheric delay across a region of interest and, if implemented in real-time, can support single-frequency GPS positioning and navigation applications. The research objectives of this thesis were: (1) to develop algorithms to obtain accurate absolute Total Electron Content (TEC) estimates from dual-frequency GPS observables, and (2) to develop an algorithm to improve the accuracy of real-time ionosphere modelling. In order to fulfil these objectives, four algorithms have been proposed in this thesis. A 'multi-day multipath template technique' is proposed to mitigate the pseudo-range multipath effects at static GPS reference stations. This technique is based on the assumption that the multipath disturbance at a static station will be constant if the physical environment remains unchanged from day to day. The multipath template, either single-day or multi-day, can be generated from the previous days' GPS data. A 'real-time failure detection and repair algorithm' is proposed to detect and repair the GPS carrier phase 'failures', such as the occurrence of cycle slips. The proposed algorithm uses two procedures: (1) application of a statistical test on the state difference estimated from robust and conventional Kalman filters in order to detect and identify the carrier phase failure, and (2) application of a Kalman filter algorithm to repair the 'identified carrier phase failure'. A 'L1/L2 differential delay estimation algorithm' is proposed to estimate GPS satellite transmitter and receiver L1/L2 differential delays. This algorithm, based on the single-site modelling technique, is able to estimate the sum of the satellite and receiver L1/L2 differential delay for each tracked GPS satellite. A 'UNSW grid-based algorithm' is proposed to improve the accuracy of real-time ionosphere modelling. The proposed algorithm is similar to the conventional grid-based algorithm. However, two modifications were made to the algorithm: (1) an 'exponential function' is adopted as the weighting function, and (2) the 'grid-based ionosphere model' estimated from the previous day is used to predict the ionospheric delay ratios between the grid point and reference points. (Abstract shortened by UMI.)
Nonlinear damage detection in composite structures using bispectral analysis
NASA Astrophysics Data System (ADS)
Ciampa, Francesco; Pickering, Simon; Scarselli, Gennaro; Meo, Michele
2014-03-01
The literature offers a number of diagnostic methods that can continuously provide detailed information on material defects and damage in aerospace and civil engineering applications. Indeed, low-velocity impact damage can considerably degrade the integrity of structural components and, if not detected, can result in catastrophic failure conditions. This paper presents a nonlinear Structural Health Monitoring (SHM) method, based on ultrasonic guided waves (GW), for the detection of the nonlinear signature in a damaged composite structure. The proposed technique, based on a bispectral analysis of ultrasonic input waveforms, allows for the evaluation of the nonlinear response due to the presence of cracks and delaminations. Such a methodology was used to characterize the nonlinear behaviour of the structure by exploiting the frequency mixing of the original waveform acquired from a sparse array of sensors. The robustness of bispectral analysis was experimentally demonstrated on a damaged carbon fibre reinforced plastic (CFRP) composite panel, and the nonlinear source was retrieved with a high level of accuracy. Unlike other linear and nonlinear ultrasonic methods for damage detection, this methodology requires neither a baseline with the undamaged structure for the evaluation of the nonlinear source, nor a priori knowledge of the mechanical properties of the specimen. Moreover, bispectral analysis can be considered a nonlinear elastic wave spectroscopy (NEWS) technique for materials showing either classical or non-classical nonlinear behaviour.
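As a sketch of the underlying tool, here is a direct (FFT-based) bispectrum estimate, B(f1, f2) = E[X(f1) X(f2) X*(f1+f2)], in which nonlinear frequency mixing appears as off-axis peaks; the segment count, window, and FFT length are generic choices, not the paper's settings.

```python
# Direct bispectrum estimate averaged over segments; off-axis peaks indicate
# frequency mixing (nonlinearity). Parameters are generic assumptions.
import numpy as np

def bispectrum(x, nseg=32, nfft=256):
    x = np.asarray(x, dtype=float)
    seg_len = len(x) // nseg
    half = nfft // 2
    idx = np.add.outer(np.arange(half), np.arange(half))  # f1 + f2 index grid
    B = np.zeros((half, half), dtype=complex)
    for s in range(nseg):
        seg = x[s * seg_len:(s + 1) * seg_len]
        seg = (seg - seg.mean()) * np.hanning(len(seg))    # detrend and window
        X = np.fft.fft(seg, nfft)
        B += np.outer(X[:half], X[:half]) * np.conj(X[idx])
    return np.abs(B) / nseg
```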
NASA Technical Reports Server (NTRS)
Troudet, T.; Garg, S.; Merrill, W.
1992-01-01
The design of a dynamic neurocontroller with good robustness properties is presented for a multivariable aircraft control problem. The internal dynamics of the neurocontroller are synthesized by a state estimator feedback loop. The neurocontrol is generated by a multilayer feedforward neural network which is trained through backpropagation to minimize an objective function that is a weighted sum of tracking errors and control input commands and rates. The neurocontroller exhibits good robustness, with stability margins in phase and in vehicle output gains. By maintaining performance and stability in the presence of sensor failures in the error loops, the structure of the neurocontroller is also consistent with the classical approach of flight control design.
NASA Technical Reports Server (NTRS)
Bueno, R.; Chow, E.; Gershwin, S. B.; Willsky, A. S.
1975-01-01
Research on the problems of failure detection and reliable system design for digital aircraft control systems is reported. Failure modes, cross-detection probability, wrong-time detection, application of performance tools, and the GLR computer package are discussed.
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Godiwala, P. M.
1985-01-01
The performance analysis results of a fault inferring nonlinear detection system (FINDS) using sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment are presented. First, a statistical analysis of the flight-recorded sensor data was made in order to determine the characteristics of sensor inaccuracies. Next, modifications were made to the detection and decision functions in the FINDS algorithm in order to improve false alarm and failure detection performance under the real modelling errors present in the flight data. Finally, the failure detection and false alarm performance of the FINDS algorithm were analyzed by injecting bias failures into fourteen sensor outputs over six repetitive runs of the five-minute flight data. In general, the detection speed, failure level estimation, and false alarm performance showed a marked improvement over the previously reported simulation runs. In agreement with earlier results, detection speed was faster for filter measurement sensors such as MLS than for filter input sensors such as flight control accelerometers.
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander; Sahu, Kusum
2003-01-01
Potential users of plastic encapsulated microcircuits (PEMs) need to be reminded that, unlike the military system of producing robust high-reliability microcircuits designed to perform acceptably in a variety of harsh environments, PEMs are primarily designed for use in benign environments where equipment is easily accessed for repair or replacement. The methods of analysis applied to military products to demonstrate high reliability cannot always be applied to PEMs. This makes it difficult for users to characterize PEMs for two reasons: 1. Due to the major differences in design and construction, the standard test practices used to ensure that military devices are robust and have high reliability often cannot be applied to PEMs, which have a smaller operating temperature range and are typically more frail and susceptible to moisture absorption. In contrast, high-reliability military microcircuits usually utilize large, robust, high-temperature packages that are hermetically sealed. 2. Unlike the military high-reliability system, users of PEMs have little visibility into commercial manufacturers' proprietary designs, materials, die traceability, and production processes and procedures. There is no central authority that monitors commercial PEM product for quality, and there are no controls in place that can be imposed across all commercial manufacturers to provide confidence to high-reliability users that a common acceptable level of quality exists for all PEM manufacturers. Consequently, there is no guaranteed control over the type of reliability that is built into commercial product, and there is no guarantee that different lots from the same manufacturer are equally acceptable. And regarding application, there is no guarantee that commercial products intended for use in benign environments will provide acceptable performance and reliability in harsh space environments. The qualification and screening processes contained in this document are intended to detect poor-quality lots and screen out early random failures from use in space flight hardware. However, since it cannot be guaranteed that quality appropriate for space applications was designed and built into PEMs, users cannot screen in quality that may not exist. It must be understood that, due to the variety of materials, processes, and technologies used to design and produce PEMs, this test process may not accelerate and detect all failure mechanisms. While the tests herein will increase user confidence that PEMs with otherwise unknown reliability can be used in space environments, such testing may not guarantee the same level of reliability offered by military microcircuits. PEMs should only be used where, due to performance needs, there are no alternatives in the military high-reliability market, and projects are willing to accept higher risk.
Xu, Yuan; Ding, Kun; Huo, Chunlei; Zhong, Zisha; Li, Haichang; Pan, Chunhong
2015-01-01
Very high resolution (VHR) image change detection is challenging due to the low discriminative ability of change features and the difficulty of making change decisions that utilize multilevel contextual information. Most change feature extraction techniques put emphasis on describing the change degree (i.e., to what degree the changes have happened) while ignoring the change pattern (i.e., how the changes occurred), which is of equal importance in characterizing change signatures. Moreover, the simultaneous consideration of classification robust to registration noise and of multiscale region-consistent fusion is often neglected in change decision. To overcome such drawbacks, in this paper, a novel VHR image change detection method is proposed based on a sparse change descriptor and robust discriminative dictionary learning. The sparse change descriptor combines a change degree component and a change pattern component, which are encoded by the sparse representation error and the morphological profile feature, respectively. Robust change decision is conducted by multiscale region-consistent fusion, which is implemented by superpixel-level cosparse representation with a robust discriminative dictionary and a conditional random field model. Experimental results confirm the effectiveness of the proposed change detection technique. PMID:25918748
Fracture - An Unforgiving Failure Mode
NASA Technical Reports Server (NTRS)
Goodin, James Ronald
2006-01-01
During the 2005 Conference for the Advancement of Space Safety, after a typical presentation of safety tools, a Russian in the audience simply asked, "How does that affect the hardware?" Having participated in several International System Safety Conferences, I recalled that most attention is dedicated to safety tools and little, if any, to hardware. This paper on the hazard of fracture and the failure modes associated with fracture is my attempt to draw attention to the grass roots of system safety - improving hardware robustness and resilience.
Mahieu, Nathaniel G.; Spalding, Jonathan L.; Patti, Gary J.
2016-01-01
Motivation: Current informatic techniques for processing raw chromatography/mass spectrometry data break down under several common, non-ideal conditions. Importantly, hydrophilic liquid interaction chromatography (a key separation technology for metabolomics) produces data which are especially challenging to process. We identify three critical points of failure in current informatic workflows: compound-specific drift, integration region variance, and naive missing value imputation. We implement the Warpgroup algorithm to address these challenges. Results: Warpgroup adds peak subregion detection, consensus integration bound detection, and intelligent missing value imputation steps to the conventional informatic workflow. When compared with the conventional workflow, Warpgroup made major improvements to the processed data. The coefficient of variation for peaks detected in replicate injections of a complex Escherichia coli extract was halved (a reduction of 19%). Integration regions across samples were much more robust. Additionally, many signals lost by the conventional workflow were 'rescued' by the Warpgroup refinement, thereby resulting in greater analyte coverage in the processed data. Availability and implementation: Warpgroup is an open source R package available on GitHub at github.com/nathaniel-mahieu/warpgroup. The package includes example data and XCMS compatibility wrappers for ease of use. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: nathaniel.mahieu@wustl.edu or gjpattij@wustl.edu PMID:26424859
22nd Annual Logistics Conference and Exhibition
2006-04-20
Prognostics & Health Management at GE, Dr. Piero P. Bonissone, Industrial AI Lab, GE Global Research: anomaly detection model selection and results; failure mode histograms; anomaly detection from event-log data; diagnostics and prognostics for failure monitoring and assessment in tactical C4ISR sense-and-respond applications.
On the robustness of EC-PC spike detection method for online neural recording.
Zhou, Yin; Wu, Tong; Rastegarnia, Amir; Guan, Cuntai; Keefer, Edward; Yang, Zhi
2014-09-30
Online spike detection is an important step in compressing neural data and performing real-time neural information decoding. An unsupervised, automatic, yet robust signal processing method is strongly desired so that it can support a wide range of applications. We have developed a novel spike detection algorithm called "exponential component-polynomial component" (EC-PC) spike detection. We first evaluate the robustness of the EC-PC spike detector under different firing rates and SNRs. Second, we show that the detection precision can be quantitatively derived without requiring additional user input parameters. We have realized the algorithm (including training) in a 0.13 μm CMOS chip, where an unsupervised, nonparametric operation has been demonstrated. Both simulated and real data are used to evaluate the method under different firing rates (FRs) and SNRs. The results show that the EC-PC spike detector is the most robust in comparison with several popular detectors, and that it can track changes in the background noise due to its ability to re-estimate the neural data distribution. Comparisons cover the absolute thresholding detector (AT), median absolute deviation detector (MAD), nonlinear energy operator detector (NEO), and continuous wavelet detector (CWD); testing results reveal that the EC-PC detection algorithm performs better than the other algorithms regardless of recording conditions. The EC-PC spike detector can be considered an unsupervised and robust online spike detection method and is also suitable for hardware implementation. Copyright © 2014 Elsevier B.V. All rights reserved.
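The EC-PC detector itself fits an exponential-plus-polynomial model to the sample distribution, which is beyond a short sketch; for reference, here is the far simpler median-absolute-deviation (MAD) baseline it is compared against, with a conventional (not paper-specified) threshold multiplier.

```python
# MAD-threshold spike detection baseline: estimate noise level robustly, then
# keep supra-threshold local maxima of |x|. The multiplier k is a conventional
# choice, not a value from the paper.
import numpy as np

def mad_spike_indices(x, k=5.0):
    x = np.asarray(x, dtype=float)
    sigma = np.median(np.abs(x - np.median(x))) / 0.6745  # robust noise estimate
    above = np.abs(x) > k * sigma
    peaks = np.flatnonzero(above[1:-1] &
                           (np.abs(x[1:-1]) >= np.abs(x[:-2])) &
                           (np.abs(x[1:-1]) >= np.abs(x[2:]))) + 1
    return peaks
```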
Detecting dynamical boundaries from kinematic data in biomechanics
NASA Astrophysics Data System (ADS)
Ross, Shane D.; Tanaka, Martin L.; Senatore, Carmine
2010-03-01
Ridges in the state space distribution of finite-time Lyapunov exponents can be used to locate dynamical boundaries. We describe a method for obtaining dynamical boundaries using only trajectories reconstructed from time series, expanding on the current approach which requires a vector field in the phase space. We analyze problems in musculoskeletal biomechanics, considered as exemplars of a class of experimental systems that contain separatrix features. Particular focus is given to postural control and balance, considering both models and experimental data. Our success in determining the boundary between recovery and failure in human balance activities suggests this approach will provide new robust stability measures, as well as measures of fall risk, that currently are not available and may have benefits for the analysis and prevention of low back pain and falls leading to injury, both of which affect a significant portion of the population.
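A minimal sketch of computing a finite-time Lyapunov exponent field from trajectory endpoints alone, in the spirit of the approach above; the grid layout and integration time are illustrative assumptions.

```python
# FTLE from trajectories: seed a grid, record where each point ends after
# time T, differentiate the flow map numerically, and measure stretching via
# the Cauchy-Green tensor. Ridges of the field approximate dynamical boundaries.
import numpy as np

def ftle_field(x0, y0, xT, yT, T):
    """x0, y0, xT, yT: 2-D arrays of initial and final positions on a grid."""
    dxdX = np.gradient(xT, x0[0, :], axis=1)   # d x(T) / d x(0)
    dxdY = np.gradient(xT, y0[:, 0], axis=0)
    dydX = np.gradient(yT, x0[0, :], axis=1)
    dydY = np.gradient(yT, y0[:, 0], axis=0)
    ftle = np.zeros_like(xT)
    for i in range(xT.shape[0]):
        for j in range(xT.shape[1]):
            F = np.array([[dxdX[i, j], dxdY[i, j]],
                          [dydX[i, j], dydY[i, j]]])
            C = F.T @ F                          # Cauchy-Green strain tensor
            lam_max = np.linalg.eigvalsh(C)[-1]  # largest eigenvalue
            ftle[i, j] = np.log(np.sqrt(lam_max)) / abs(T)
    return ftle
```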
Blind jealousy? Romantic insecurity increases emotion-induced failures of visual perception.
Most, Steven B; Laurenceau, Jean-Philippe; Graber, Elana; Belcher, Amber; Smith, C Veronica
2010-04-01
Does the influence of close relationships pervade so deeply as to impact visual awareness? Results from two experiments involving heterosexual romantic couples suggest that it does. Female partners from each couple performed a rapid detection task in which negative emotional distractors typically disrupt visual awareness of subsequent targets; at the same time, their male partners rated the attractiveness first of landscapes, then of photos of other women. At the end of both experiments, the degree to which female partners indicated uneasiness about their male partner looking at and rating other women correlated significantly with the degree to which negative emotional distractors had disrupted their target perception during that time. This relationship was robust even when controlling for individual differences in baseline performance. Thus, emotions elicited by social contexts appear to wield power even at the level of perceptual processing. Copyright 2010 APA, all rights reserved.
Partial and total actuator faults accommodation for input-affine nonlinear process plants.
Mihankhah, Amin; Salmasi, Farzad R; Salahshoor, Karim
2013-05-01
In this paper, a new fault-tolerant control system is proposed for input-affine nonlinear plants based on the Model Reference Adaptive System (MRAS) structure. The proposed method has the capability to accommodate both partial and total actuator failures along with bounded external disturbances. In this methodology, the conventional MRAS control law is modified by augmenting two compensating terms. One of these terms is added to cancel the nonlinear dynamics, while the other compensates for the disruptive effects of total actuator faults and external disturbances. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed method. Moreover, the control structure has good robustness against parameter variations. The performance of this scheme was evaluated using a CSTR system, and the results were satisfactory. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Sokalskis, Vladislavs; Peluso, Diletta; Jagodzinski, Annika; Sinning, Christoph
2017-06-01
Right heart dysfunction has been found to be a strong prognostic factor predicting adverse outcome in various cardiopulmonary diseases. Conventional echocardiographic measurements can be limited by geometrical assumptions and impaired reproducibility. Speckle tracking-derived strain provides a robust quantification of right ventricular function. It explicitly evaluates myocardial deformation, as opposed to tissue Doppler-derived strain, which is computed from tissue velocity gradients. Right ventricular longitudinal strain provides a sensitive tool for detecting right ventricular dysfunction, even at subclinical levels. Moreover, the longitudinal strain can be applied for prognostic stratification of patients with pulmonary hypertension, pulmonary embolism, and congestive heart failure. Speckle tracking-derived right atrial strain, right ventricular longitudinal strain-derived mechanical dyssynchrony, and three-dimensional echocardiography-derived strain are emerging imaging parameters and methods. Their application in research is paving the way for their clinical use. © 2017, Wiley Periodicals, Inc.
An enhancement to the NA4 gear vibration diagnostic parameter
NASA Technical Reports Server (NTRS)
Decker, Harry J.; Handschuh, Robert F.; Zakrajsek, James J.
1994-01-01
A new vibration diagnostic parameter for health monitoring of gears, NA4*, is proposed and tested. The recently developed gear vibration diagnostic parameter NA4 outperformed other fault detection methods at indicating the start and initial progression of damage. However, in some cases, as the damage progressed, the sensitivity of the NA4 and FM4 parameters tended to decrease so that they no longer indicated damage. The new parameter NA4* was developed by enhancing NA4 to improve its trending, allowing damage to be indicated both at initiation and as it progresses. The NA4* parameter was verified and compared to the NA4 and FM4 parameters using experimental data from single-mesh spur and spiral bevel gear fatigue rigs. The primary failure mode for the test cases was naturally occurring tooth surface pitting. The NA4* parameter is shown to be a more robust indicator of damage.
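A hedged sketch of the NA4/NA4* idea as described in the literature: the fourth moment of the residual signal (time-synchronous average minus regular meshing components) is normalized by the squared run-averaged variance, and NA4* "locks" that denominator once damage inflates the variance, so the parameter keeps trending as damage spreads. The lock rule and threshold below are illustrative assumptions, not the published specification.

```python
# Illustrative NA4/NA4* computation on successive residual-signal records.
import numpy as np

class NA4Star:
    def __init__(self):
        self.var_history = []
        self.locked_var = None   # NA4* freezes the denominator after damage onset

    def update(self, residual, lock_threshold=1.5):
        r = residual - residual.mean()
        fourth = np.mean(r ** 4)             # fourth moment of current record
        var = np.mean(r ** 2)
        self.var_history.append(var)
        mean_var = np.mean(self.var_history)  # variance averaged over the run
        # assumed lock rule: stop updating once variance departs from baseline
        if self.locked_var is None and len(self.var_history) > 5 \
                and var > lock_threshold * mean_var:
            self.locked_var = np.mean(self.var_history[:-1])
        denom = self.locked_var if self.locked_var is not None else mean_var
        return fourth / denom ** 2            # NA4 (or NA4* once locked)
```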
Protection of health research participants in the United States: a review of two cases.
Douglass, Alison; Crampton, Peter
2004-06-01
Two research-related deaths and controversies in the United States during recent years have raised public concern over the safety of research participants. This paper explores the reasons why, in two studies, there was a failure of ethical oversight. The issues exposed by these failures have international relevance, as they could occur anywhere human health research is carried out. Five factors that contributed to these failures are highlighted: 1. failure to support and resource research ethics committees; 2. failure of the research oversight process to adequately assess the risks and benefits of research, while giving undue emphasis to informed consent; 3. conflicts of interest arising from financial relationships and research ethics committee membership; 4. lack of consistent oversight of privately funded research; and 5. incompetent or intentional failure to adhere to ethical guidelines. There is considerable headway to be made in the United States, as in other countries, in fostering and maintaining robust systems of human research oversight.
NASA Astrophysics Data System (ADS)
Thionnet, A.; Chou, H. Y.; Bunsell, A.
2015-04-01
The purpose of these three papers is not just to revisit the modelling of unidirectional composites; it is to provide a robust framework, based on physical processes, that can be used to optimise the design and long-term reliability of internally pressurised filament-wound structures. The model presented in Part 1 for the case of monotonically loaded unidirectional composites is further developed to consider the effects of the viscoelastic nature of the matrix in determining the kinetics of fibre breaks under slow or sustained loading. It is shown that the relaxation of the matrix around fibre breaks leads to locally increasing loads on neighbouring fibres and, in some cases, their delayed failure. Although ultimate failure resembles the elastic case in that clusters of fibre breaks ultimately control composite failure, the kinetics of their development differ significantly from the elastic case. Failure loads have been shown to decrease when loading rates are lowered.
Robust multiperson detection and tracking for mobile service and social robots.
Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou
2012-10-01
This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.
Predicting the Lifetime of Dynamic Networks Experiencing Persistent Random Attacks.
Podobnik, Boris; Lipic, Tomislav; Horvatic, Davor; Majdandzic, Antonio; Bishop, Steven R; Eugene Stanley, H
2015-09-21
Estimating the critical points at which complex systems abruptly flip from one state to another is one of the remaining challenges in network science. Due to lack of knowledge about the underlying stochastic processes controlling critical transitions, it is widely considered difficult to determine the location of critical points for real-world networks, and it is even more difficult to predict the time at which these potentially catastrophic failures occur. We analyse a class of decaying dynamic networks experiencing persistent failures in which the magnitude of the overall failure is quantified by the probability that a potentially permanent internal failure will occur. When the fraction of active neighbours is reduced to a critical threshold, cascading failures can trigger a total network failure. For this class of network we find that the time to network failure, which is equivalent to network lifetime, is inversely dependent upon the magnitude of the failure and logarithmically dependent on the threshold. We analyse how permanent failures affect network robustness using network lifetime as a measure. These findings provide new methodological insight into system dynamics and, in particular, of the dynamic processes of networks. We illustrate the network model by selected examples from biology, and social science.
RECOVERY ACT: MULTIMODAL IMAGING FOR SOLAR CELL MICROCRACK DETECTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janice Hudgings; Lawrence Domash
2012-02-08
Undetected microcracks in solar cells are a principal cause of failure in service due to subsequent weather exposure, mechanical flexing or diurnal temperature cycles. Existing methods have not been able to detect cracks early enough in the production cycle to prevent inadvertent shipment to customers. This program, sponsored under the DOE Photovoltaic Supply Chain and Cross-Cutting Technologies program, studied the feasibility of quantifying surface micro-discontinuities by use of a novel technique, thermoreflectance imaging, to detect surface temperature gradients with very high spatial resolution, in combination with a suite of conventional imaging methods such as electroluminescence. The project carried out laboratory tests together with computational image analyses using sample solar cells with known defects supplied by industry sources or DOE National Labs. Quantitative comparisons between the effectiveness of the new technique and conventional methods were determined in terms of the smallest detectable crack. Also the robustness of the new technique for reliable microcrack detection was determined at various stages of processing such as before and after antireflectance treatments. An overall assessment is that the new technique compares favorably with existing methods such as lock-in thermography or ultrasonics. The project was 100% completed in Sept, 2010. A detailed report of key findings from this program was published as: Q. Zhou, X. Hu, K. Al-Hemyari, K. McCarthy, L. Domash and J. Hudgings, High spatial resolution characterization of silicon solar cells using thermoreflectance imaging, J. Appl. Phys. 110, 053108 (2011).
Dual permeability FEM models for distributed fiber optic sensors development
NASA Astrophysics Data System (ADS)
Aguilar-López, Juan Pablo; Bogaard, Thom
2017-04-01
Fiber optic cables are commonly known as robust and reliable media for transferring information at the speed of light in glass. Billions of kilometers of cable have been installed around the world for internet connection and real-time information sharing. Yet a fiber optic cable is not only a means of information transfer but also a way to sense and measure physical properties of the medium in which it is installed. For dike monitoring, it has been used in the past to detect temperature changes in the inner core and foundation, which allows estimation of water infiltration during high water events. The DOMINO research project aims to develop a fiber optic based dike monitoring system that can directly sense and measure any pore pressure change inside the dike structure. For this purpose, questions such as where to place sensors, how many are needed, and what measuring frequency and accuracy are required must be answered during sensor development. All these questions may be initially answered with a finite element model that estimates the effects of pore pressure change at different locations along the cross section while providing a time-dependent estimate of a stability factor. The sensor aims to monitor two main failure mechanisms at the same time: the piping erosion failure mechanism and the macro-stability failure mechanism. Both mechanisms are modeled and assessed in detail with a finite element based dual-permeability Darcy-Richards numerical solution. In this manner, it is possible to assess different sensing configurations under different loading scenarios (e.g., high water levels, rainfall events, and initial soil moisture and permeability conditions). The results obtained for the different configurations are then evaluated with an entropy-based performance measure. The added value of this kind of modelling approach for sensor development is that it allows the piping erosion and macro-stability failure mechanisms to be modeled simultaneously in a time-dependent manner; in this way, the estimated pore pressures may be related to the monitored pressures and to both failure mechanisms. Furthermore, the approach is intended to be used at a later stage for real-time monitoring of failure.
Robust automatic line scratch detection in films.
Newson, Alasdair; Almansa, Andrés; Gousseau, Yann; Pérez, Patrick
2014-03-01
Line scratch detection in old films is a particularly challenging problem due to the variable spatiotemporal characteristics of this defect. Some of the main problems include sensitivity to noise and texture, and false detections due to thin vertical structures belonging to the scene. We propose a robust and automatic algorithm for frame-by-frame line scratch detection in old films, as well as a temporal algorithm for the filtering of false detections. In the frame-by-frame algorithm, we relax some of the hypotheses used in previous algorithms in order to detect a wider variety of scratches. This step's robustness and lack of external parameters is ensured by the combined use of an a contrario methodology and local statistical estimation. In this manner, over-detection in textured or cluttered areas is greatly reduced. The temporal filtering algorithm eliminates false detections due to thin vertical structures by exploiting the coherence of their motion with that of the underlying scene. Experiments demonstrate the ability of the resulting detection procedure to deal with difficult situations, in particular in the presence of noise, texture, and slanted or partial scratches. Comparisons show significant advantages over previous work.
A Review of Transmission Diagnostics Research at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Zakrajsek, James J.
1994-01-01
This paper presents a summary of the transmission diagnostics research work conducted at NASA Lewis Research Center over the last four years. In 1990, the Transmission Health and Usage Monitoring Research Team at NASA Lewis conducted a survey to determine the critical needs of the diagnostics community. Survey results indicated that experimental verification of gear and bearing fault detection methods, improved fault detection in planetary systems, and damage magnitude assessment and prognostics research were all critical to a highly reliable health and usage monitoring system. In response to this, a variety of transmission fault detection methods were applied to experimentally obtained fatigue data. Failure modes of the fatigue data include a variety of gear pitting failures, tooth wear, tooth fracture, and bearing spalling failures. Overall results indicate that, of the gear fault detection techniques, no one method can successfully detect all possible failure modes. The more successful methods need to be integrated into a single more reliable detection technique. A recently developed method, NA4, in addition to being one of the more successful gear fault detection methods, was also found to exhibit damage magnitude estimation capabilities.
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Godiwala, P. M.; Morrell, F. R.
1985-01-01
This paper presents the performance analysis results of a fault inferring nonlinear detection system (FINDS) using integrated avionics sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment. First, an overview of the FINDS algorithm structure is given. Then, aircraft state estimate time histories and statistics for the flight data sensors are discussed. This is followed by an explanation of modifications made to the detection and decision functions in FINDS to improve false alarm and failure detection performance. Next, the failure detection and false alarm performance of the FINDS algorithm are analyzed by injecting bias failures into fourteen sensor outputs over six repetitive runs of the five minutes of flight data. Results indicate that the detection speed, failure level estimation, and false alarm performance show a marked improvement over the previously reported simulation runs. In agreement with earlier results, detection speed is faster for filter measurement sensors such as MLS than for filter input sensors such as flight control accelerometers. Finally, the progress in modifications of the FINDS algorithm design to accommodate flight computer constraints is discussed.
Sliding-Mode Control Applied for Robust Control of a Highly Unstable Aircraft
NASA Technical Reports Server (NTRS)
Vetter, Travis Kenneth
2002-01-01
An investigation into the application of an observer-based sliding mode controller for robust control of a highly unstable aircraft, and into methods of compensating for actuator dynamics, is performed. After a brief overview of some reconfigurable controllers, sliding mode control (SMC) is selected because of its invariance properties and its lack of need for parameter identification. SMC is reviewed, and issues with parasitic dynamics, which cause system instability, are addressed. Utilizing sliding manifold boundary layers, the nonlinear control is converted to a linear control, and sliding manifold design is performed in the frequency domain. An additional feedback form of model reference hedging is employed; it is similar to a prefilter and substantially benefits system performance. The effects of including actuator dynamics in the designed plant are heavily investigated. Multiple Simulink models of the full longitudinal dynamics and wing deflection modes of the forward-swept aeroelastic vehicle (FSAV) are constructed, along with linear state-space models to analyze the effects of various system parameters. The FSAV has a pole at +7 rad/sec and is non-minimum phase. The use of 'model actuators' in the feedback path, and variations in their design, is heavily investigated for the resulting effects on plant robustness and tolerance to actuator failure. The use of redundant actuators is also explored and improved robustness is shown. All models are simulated under severe failures; excellent tracking, task-dependent handling qualities, and low pilot-induced-oscillation tendency are shown.
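A minimal sliding-mode-control sketch with a boundary layer, the mechanism discussed above: a saturated switching law on s = e_dot + lambda*e acts as a linear control inside a layer of width phi. The double-integrator plant and all gains are illustrative, not the FSAV model.

```python
# Boundary-layer sliding mode control of a double integrator x'' = u toward
# x_ref = 0. Gains, layer width, and plant are illustrative assumptions.
import numpy as np

def smc_step(e, e_dot, lam=2.0, k=10.0, phi=0.1):
    s = e_dot + lam * e                       # sliding manifold variable
    sat = np.clip(s / phi, -1.0, 1.0)         # boundary layer smoothing
    return -k * sat                           # control input

x, v, dt = 1.0, 0.0, 0.001
for _ in range(5000):
    u = smc_step(x, v)
    v += u * dt
    x += v * dt
print(f"final error: {x:.4f}")   # converges to within the boundary layer
```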
NASA Astrophysics Data System (ADS)
Mendoza, G.; Tkach, M.; Kucharski, J.; Chaudhry, R.
2017-12-01
This discussion focuses on the application of a bottom-up vulnerability assessment procedure for climate resilience planning for a water treatment plant serving the city of Iolanda, Zambia. The project is a Millennium Challenge Corporation (MCC) initiative with technical support from the UNESCO category II International Center for Integrated Water Resources Management (ICIWaRM), whose secretariat is at the US Army Corps of Engineers Institute for Water Resources. The MCC is an innovative and independent U.S. foreign aid agency that is helping lead the fight against global poverty. The bottom-up vulnerability assessment framework examines critical performance thresholds, examines the external drivers that would lead to failure, establishes the plausibility of and analytical uncertainty around failure, and provides the economic justification for robustness or adaptability. This presentation will showcase experiences in applying the bottom-up framework to a region that is very vulnerable to climate variability, has poor institutional capacities, and has very limited data. It will illustrate the technical analysis and a decision process that led to a non-obvious, climate-robust solution. Most importantly, it will highlight the challenges of using discounted cash flow analysis (DCFA), such as net present value, to justify robust or adaptive solutions, i.e., comparing solutions under different future risks. We highlight a solution to manage the potential biases these DCFA procedures can incur.
Phase-space dissimilarity measures for industrial and biomedical applications
NASA Astrophysics Data System (ADS)
Protopopescu, V. A.; Hively, L. M.
2005-12-01
One of the most important problems in time-series analysis is the suitable characterization of the dynamics for timely, accurate, and robust condition assessment of the underlying system. Machine and physiological processes display complex, non-stationary behaviors that are affected by noise and may range from (quasi-)periodic to completely irregular (chaotic) regimes. Nevertheless, extensive experimental evidence indicates that even when the systems behave very irregularly (e.g., severe tool chatter or cardiac fibrillation), one may assume that - for all practical purposes - the dynamics are confined to low dimensional manifolds. As a result, the behavior of these systems can be described via traditional nonlinear measures (TNM), such as Lyapunov exponents, Kolmogorov entropy, and correlation dimension. While these measures are adequate for discriminating between clear-cut regular and chaotic dynamics, they are not sufficiently sensitive to distinguish between slightly different irregular (chaotic) regimes, especially when data are noisy and/or limited. Both machine and physiological dynamics usually fall into this latter category, creating a massive stumbling block to prognostication of abnormal regimes. We present here a recently developed approach that captures changes in the underlying dynamics more efficiently. We start with process-indicative, time-serial data that are checked for quality and discarded if inadequate. Acceptable data are filtered to remove confounding artifacts (e.g., sinusoidal variation in three-phase electrical signals or eye-blinks and muscular activity in EEG). The artifact-filtered data are then used to recover the essential features of the underlying dynamics via standard time-delay phase-space reconstruction. One of the main results of this reconstruction is a discrete approximation of the distribution function (DF) on the attractor. Unaltered dynamics yield an unchanging geometry of the attractor and of the visitation frequencies of its various points, corresponding to the baseline DF. Condition change is established by comparing the baseline DFs to subsequent test-case DFs via new phase-space dissimilarity measures (PSDM), namely the L1-distance and χ-square statistics between two DFs. A clear trend in the dissimilarity measures over time indicates substantial departure from the baseline dynamics, thus signaling condition change. The severity of this departure can be interpreted as a "normal" fluctuation, abnormal behavior, impending failure, or complete breakdown. We illustrate the new approach on an assortment of machinery and biomedical examples. The machine data were collected during laboratory tests on industrial equipment, for diverse failure modes, via seeded faults and accelerated failures. The biomedical applications involve detection of physiological changes, such as epileptic seizures from EEG; ventricular fibrillation, fainting, and sepsis onset from ECG; and breathing difficulty from chest sounds. The PSDM show a consistent discrimination of normal-to-abnormal transitions, allowing earlier, more accurate, and more robust detection of the dynamical change for all of these applications in comparison to TNM.
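A minimal sketch of the dissimilarity computation, assuming the phase-space partitioning and binning have been done upstream so that the baseline and test DFs are discrete histograms over the same bins:

```python
# L1 and chi-square dissimilarity between baseline and test distribution
# functions (histograms over a shared phase-space partition).
import numpy as np

def psdm(baseline_counts, test_counts):
    P = baseline_counts / baseline_counts.sum()
    Q = test_counts / test_counts.sum()
    l1 = np.abs(P - Q).sum()
    nz = (P + Q) > 0
    chi2 = (((P - Q) ** 2)[nz] / (P + Q)[nz]).sum()
    return l1, chi2   # a sustained upward trend signals condition change
```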
Trong Bui, Duong; Nguyen, Nhan Duc; Jeong, Gu-Min
2018-06-25
Human activity recognition and pedestrian dead reckoning are interesting fields because of their important applications in daily-life healthcare. Currently, these fields face many challenges, one of which is the lack of a robust, high-performance algorithm. This paper proposes a new method to implement a robust step detection and adaptive distance estimation algorithm based on the classification of five daily wrist activities during walking at various speeds using a smart band. The key idea is that the non-parametric adaptive distance estimator is performed after two activity classifiers and a robust step detector. In this study, two classifiers perform two phases of recognizing five wrist activities during walking. Then, a robust step detection algorithm, which is integrated with an adaptive threshold and a peak and valley correction algorithm, is applied to the classified activities to detect the walking steps. In addition, misclassified activities are fed back to the previous layer. Finally, three adaptive distance estimators, which are based on a non-parametric model of the average walking speed, calculate the length of each stride. The experimental results show that the average classification accuracy is about 99%, and the accuracy of the step detection is 98.7%. The error of the estimated distance is 2.2-4.2% depending on the type of wrist activity.
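As a rough illustration of the step-detection stage only (the two classifier phases and the valley-correction step are omitted), the Python sketch below counts steps as local maxima of the acceleration magnitude that exceed a window-adaptive threshold and respect a minimum inter-step gap; all parameter values are our assumptions, not the paper's.

import numpy as np

def detect_steps(acc_mag, fs=50, win=2.0, min_gap=0.3):
    """Indices of step peaks: local maxima above an adaptive (windowed) threshold."""
    w = int(win * fs)
    steps, last = [], float("-inf")
    for i in range(1, len(acc_mag) - 1):
        seg = acc_mag[max(0, i - w): i + w]
        thresh = seg.mean() + 0.5 * seg.std()      # adapts to walking speed
        is_peak = acc_mag[i] > acc_mag[i - 1] and acc_mag[i] >= acc_mag[i + 1]
        if is_peak and acc_mag[i] > thresh and (i - last) / fs >= min_gap:
            steps.append(i)
            last = i
    return steps

# Synthetic 1.8 Hz walking signal with noise: expect roughly 18 steps in 10 s.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 50)
acc = 1.0 + 0.6 * np.sin(2 * np.pi * 1.8 * t) + 0.1 * rng.standard_normal(t.size)
print(len(detect_steps(acc)))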
Detection of Failure in Asynchronous Motor Using Soft Computing Method
NASA Astrophysics Data System (ADS)
Vinoth Kumar, K.; Sony, Kevin; Achenkunju John, Alan; Kuriakose, Anto; John, Ano P.
2018-04-01
This paper investigates stator short-winding failures of asynchronous motors and their effects on the motor current spectrum. A fuzzy logic approach, i.e., a model-based technique, can help detect asynchronous motor failures. Fuzzy logic resembles human reasoning in that it enables linguistic inferences from vague data. A dynamic model of the asynchronous motor is developed with a fuzzy logic classifier to investigate stator inter-turn failures and open-phase failures. A hardware implementation was carried out with LabVIEW for online monitoring of faults.
Robust Spacecraft Component Detection in Point Clouds.
Wei, Quanmao; Jiang, Zhiguo; Zhang, Haopeng
2018-03-21
Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives like plane, cuboid and cylinder. Based on this prior, we propose a robust automatic detection scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and has fine robustness against noise and point distribution density.
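The abstract's plane stage uses a Hough transform; as a generic stand-in for primitive detection in point clouds, here is a short RANSAC plane fit in Python/NumPy (our substitution, not the authors' formulation). In their pipeline, such a detected plane would then be cropped to its minimum bounding rectangle and paired into cuboid hypotheses.

import numpy as np

def ransac_plane(points, iters=500, tol=0.02, rng=None):
    """Fit one dominant plane to an Nx3 cloud; returns (normal, d, inlier mask)."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best = None, (None, None)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n = n / norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, (n, d)
    return best[0], best[1], best_inliers

# Usage: a noisy plane plus clutter; the mask should select the planar points.
rng = np.random.default_rng(1)
plane_pts = np.c_[rng.uniform(-1, 1, (500, 2)), 0.01 * rng.standard_normal(500)]
clutter = rng.uniform(-1, 1, (200, 3))
n, d, mask = ransac_plane(np.vstack([plane_pts, clutter]))
print(n, d, mask.sum())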
A novel strategy for rapid detection of NT-proBNP
NASA Astrophysics Data System (ADS)
Cui, Qiyao; Sun, Honghao; Zhu, Hui
2017-09-01
In order to establish a simple, rapid, sensitive, and specific quantitative assay for biomarkers of heart failure, biotin-streptavidin technology was employed in this study with a fluorescence immunochromatographic assay to measure biomarker concentrations in serum. The method was applied to detect NT-proBNP, which is valuable for the diagnostic evaluation of heart failure.
Emergence of robustness in networks of networks
NASA Astrophysics Data System (ADS)
Roth, Kevin; Morone, Flaviano; Min, Byungjoon; Makse, Hernán A.
2017-06-01
A model of interdependent networks of networks (NONs) was introduced recently [Proc. Natl. Acad. Sci. (USA) 114, 3849 (2017), 10.1073/pnas.1620808114] in the context of brain activation to identify the neural collective influencers in the brain NON. Here we investigate the emergence of robustness in such a model, and we develop an approach to derive an exact expression for the random percolation transition in Erdös-Rényi NONs of this kind. Analytical calculations are in agreement with numerical simulations, and highlight the robustness of the NON against random node failures, which thus presents a new robust universality class of NONs. The key aspect of this robust NON model is that a node can be activated even if it does not belong to the giant mutually connected component, thus allowing the NON to be built from below the percolation threshold, which is not possible in previous models of interdependent networks. Interestingly, the phase diagram of the model unveils particular patterns of interconnectivity for which the NON is most vulnerable, thereby marking the boundary above which the robustness of the system improves with increasing dependency connections.
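For intuition about the kind of robustness being measured, the sketch below (Python with networkx; ordinary single-network percolation, not the interdependent-NON model of the paper) removes random fractions of nodes from an Erdős-Rényi graph and tracks the giant-component fraction, which collapses near the percolation threshold.

import networkx as nx
import numpy as np

n, k = 2000, 4.0
g0 = nx.gnp_random_graph(n, k / n, seed=0)
rng = np.random.default_rng(0)
for f in (0.0, 0.2, 0.4, 0.6, 0.8):
    g = g0.copy()
    g.remove_nodes_from(rng.choice(n, size=int(f * n), replace=False).tolist())
    giant = max((len(c) for c in nx.connected_components(g)), default=0)
    # for ER with mean degree k, the giant component vanishes near f = 1 - 1/k
    print(f"failed fraction {f:.1f}: giant component {giant / n:.3f}")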
Sensor failure detection for jet engines
NASA Technical Reports Server (NTRS)
Beattie, E. C.; Laprad, R. F.; Akhter, M. M.; Rock, S. M.
1983-01-01
Revisions to the advanced sensor failure detection, isolation, and accommodation (DIA) algorithm, developed under the sensor failure detection system program, were studied to eliminate the steady-state errors due to estimation filter biases. Three algorithm revisions were formulated and one was chosen for detailed evaluation. The selected version modifies the DIA algorithm to feed back the actual sensor outputs to the integral portion of the control in the no-failure case. In case of a failure, the estimate of the failed sensor output is fed back to the integral portion. The estimator outputs are fed back to the linear regulator portion of the control at all times. The revised algorithm is evaluated and compared to the baseline algorithm developed previously.
A study of the temporal robustness of the growing global container-shipping network
Wang, Nuo; Wu, Nuan; Dong, Ling-ling; Yan, Hua-kun; Wu, Di
2016-01-01
For any constantly expanding network, one must determine whether it continues to thrive as it grows. However, few studies have focused on this important network feature or on the development of quantitative analytical methods. Motivated by the formation and growth of the global container-shipping network, we propose the concept of network temporal robustness and a quantitative method for it. As an example, we collected container liner companies' data at two time points (2004 and 2014) and built a shipping network with ports as nodes and routes as links, thus obtaining a quantitative value of the temporal robustness. The temporal robustness is a significant network property because, for the first time, we can clearly recognize that the shipping network has become more vulnerable to damage over the last decade: when the node failure scale reached 50% of the entire network, the temporal robustness was approximately −0.51% for random errors and −12.63% for intentional attacks. The proposed concept and analytical method are significant for other network studies.
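The abstract does not give the formula, so here is one plausible quantification in Python/networkx, offered only as a sketch: compute a Schneider-style robustness index R for each snapshot under random node removal and report the relative change between the two years (the synthetic graphs stand in for the port networks).

import networkx as nx
import numpy as np

def robustness_index(g, rng):
    """Schneider-style R: mean giant-component fraction under random node removal."""
    g = g.copy()
    n = g.number_of_nodes()
    total = 0.0
    for v in rng.permutation(list(g)):
        g.remove_node(v)
        if g.number_of_nodes():
            total += max(len(c) for c in nx.connected_components(g)) / n
    return total / n

rng = np.random.default_rng(1)
g2004 = nx.barabasi_albert_graph(300, 2, seed=4)    # hypothetical 2004 snapshot
g2014 = nx.barabasi_albert_graph(600, 2, seed=14)   # hypothetical 2014 snapshot
r_old, r_new = robustness_index(g2004, rng), robustness_index(g2014, rng)
print((r_new - r_old) / r_old)   # negative => the grown network is more vulnerable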
NASA Technical Reports Server (NTRS)
Mesloh, Nick; Hill, Tim; Kosyk, Kathy
1993-01-01
This paper presents the integrated approach toward failure detection, isolation, and recovery/reconfiguration to be used for the Space Station Freedom External Active Thermal Control System (EATCS). The on-board and on-ground diagnostic capabilities of the EATCS are discussed. Time and safety critical features, as well as noncritical failures, and the detection coverage for each provided by existing capabilities are reviewed. The allocation of responsibility between on-board software and ground-based systems, to be shown during ground testing at the Johnson Space Center, is described. Failure isolation capabilities allocated to the ground include some functionality originally found on orbit but moved to the ground to reduce on-board resource requirements. Complex failures requiring the analysis of multiple external variables, such as environmental conditions, heat loads, or station attitude, are also allocated to ground personnel.
Transmission expansion with smart switching under demand uncertainty and line failures
Schumacher, Kathryn M.; Chen, Richard Li-Yang; Cohn, Amy E. M.
2016-06-07
One of the major challenges in deciding where to build new transmission lines is the uncertainty regarding future loads, renewable generation output, and equipment failures. We propose a robust optimization model whose transmission expansion solutions ensure that demand can be met over a wide range of conditions. Specifically, we require feasible operation for all loads and renewable generation levels within given ranges, and for all single transmission line failures. Furthermore, we consider transmission switching as an allowable recovery action. This relatively inexpensive method of redirecting power flows improves resiliency, but introduces computational challenges. Lastly, we present a novel algorithm to solve this model. Computational results are discussed.
Saingam, Prakit; Li, Bo; Yan, Tao
2018-06-01
DNA-based molecular detection of microbial pathogens in complex environments is still plagued by sensitivity, specificity and robustness issues. We propose to address these issues by viewing them as inadvertent consequences of requiring specific and adequate amplification (SAA) of target DNA molecules by current PCR methods. Using the invA gene of Salmonella as the model system, we investigated whether next generation sequencing (NGS) can be used to directly detect target sequences in false-negative PCR reactions (PCR-NGS) in order to remove the SAA requirement from PCR. False-negative PCR and qPCR reactions were first created using serial dilutions of laboratory-prepared Salmonella genomic DNA and then analyzed directly by NGS. Target invA sequences were detected in all false-negative PCR and qPCR reactions, which lowered the method detection limits to near the theoretical minimum of single gene copy detection. The capability of the PCR-NGS approach in correcting false negativity was further tested and confirmed under more environmentally relevant conditions using Salmonella-spiked stream water and sediment samples. Finally, the PCR-NGS approach was applied to ten urban stream water samples and detected invA sequences in eight samples that would otherwise be deemed Salmonella negative. Analysis of the non-target sequences in the false-negative reactions helped to identify primer dimer-like short sequences as the main cause of the false negativity. Together, the results demonstrated that the PCR-NGS approach can significantly improve method sensitivity, correct false-negative detections, and enable sequence-based analysis for failure diagnostics in complex environmental samples.
Evaluation of Anomaly Detection Capability for Ground-Based Pre-Launch Shuttle Operations. Chapter 8
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander
2010-01-01
This chapter will provide a thorough end-to-end description of the process for evaluating three different data-driven algorithms for anomaly detection in order to select the best candidate for deployment as part of a suite of IVHM (Integrated Vehicle Health Management) technologies. These algorithms were deemed sufficiently mature to be considered viable candidates for deployment in support of the maiden launch of Ares I-X, the successor to the Space Shuttle for NASA's Constellation program. Data-driven algorithms are just one of three different types being deployed. The other two types are a "rule-based" expert system and a "model-based" system. Within these two categories, the deployable candidates have already been selected based upon qualitative factors such as flight heritage. For the rule-based system, SHINE (Spacecraft High-speed Inference Engine) has been selected for deployment; it is a component of BEAM (Beacon-based Exception Analysis for Multimissions), a patented technology developed at NASA's JPL (Jet Propulsion Laboratory), and serves to aid in the management and identification of operational modes. For the "model-based" system, a commercially available package developed by QSI (Qualtech Systems, Inc.), TEAMS (Testability Engineering and Maintenance System), has been selected for deployment to aid in diagnosis. In the context of this particular deployment, distinctions among the use of the terms "data-driven," "rule-based," and "model-based" can be found in. Although there are three different categories of algorithms selected for deployment, our main focus in this chapter is the evaluation of the three candidates for data-driven anomaly detection. These algorithms are evaluated on their capability for robustly detecting incipient faults or failures in the ground-based phase of pre-launch Space Shuttle operations, rather than on heritage as in previous studies. Robust detection will allow for the achievement of pre-specified minimum false alarm and/or missed detection rates in the selection of alert thresholds. All algorithms will also be optimized with respect to an aggregation of these same criteria. Our study relies upon the use of Shuttle data as a proxy for, and in preparation for application to, Ares I-X data, which uses a very similar hardware platform for the subsystems being targeted (the TVC (Thrust Vector Control) subsystem of the SRB (Solid Rocket Booster)).
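The threshold-selection idea in the penultimate sentences can be illustrated directly: given anomaly scores collected on no-failure data, choose the alert threshold as a high quantile so the false-alarm rate is capped by construction. A minimal Python sketch (generic, not any of the three deployed candidates; the gamma score distribution is an arbitrary stand-in):

import numpy as np

def alert_threshold(nominal_scores, max_false_alarm=0.01):
    """Smallest threshold whose exceedance rate on nominal data <= the target."""
    return float(np.quantile(nominal_scores, 1.0 - max_false_alarm))

rng = np.random.default_rng(0)
nominal = rng.gamma(2.0, 1.0, 100_000)    # anomaly scores on no-failure data
tau = alert_threshold(nominal, 0.001)     # <= 0.1% false alarms by construction
print(tau, (nominal > tau).mean())

The missed-detection side is handled symmetrically when failure-labeled scores are available, and the two rates can be aggregated into the single optimization criterion the chapter mentions.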
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Motyka, P.; Wagner, E.; Hall, S. R.
1986-01-01
The performance of the orthogonal series generalized likelihood ratio (OSGLR) test in detecting and isolating commercial aircraft control surface and actuator failures is evaluated. A modification to incorporate age-weighting which significantly reduces the sensitivity of the algorithm to modeling errors is presented. The steady-state implementation of the algorithm based on a single linear model valid for a cruise flight condition is tested using a nonlinear aircraft simulation. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection and isolation performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling on dynamic pressure and flap deflection is examined. Based on this testing, the OSGLR algorithm should be capable of detecting control surface failures that would affect the safe operation of a commercial aircraft. Isolation may be difficult if there are several surfaces which produce similar effects on the aircraft. Extending the algorithm over the entire operating envelope of a commercial aircraft appears feasible.
Haimovitz, Kyla; Dweck, Carol S
2016-06-01
Children's intelligence mind-sets (i.e., their beliefs about whether intelligence is fixed or malleable) robustly influence their motivation and learning. Yet, surprisingly, research has not linked parents' intelligence mind-sets to their children's. We tested the hypothesis that a different belief of parents-their failure mind-sets-may be more visible to children and therefore more prominent in shaping their beliefs. In Study 1, we found that parents can view failure as debilitating or enhancing, and that these failure mind-sets predict parenting practices and, in turn, children's intelligence mind-sets. Study 2 probed more deeply into how parents display failure mind-sets. In Study 3a, we found that children can indeed accurately perceive their parents' failure mind-sets but not their parents' intelligence mind-sets. Study 3b showed that children's perceptions of their parents' failure mind-sets also predicted their own intelligence mind-sets. Finally, Study 4 showed a causal effect of parents' failure mind-sets on their responses to their children's hypothetical failure. Overall, parents who see failure as debilitating focus on their children's performance and ability rather than on their children's learning, and their children, in turn, tend to believe that intelligence is fixed rather than malleable.
Zhou, Xu; Yang, Long; Tan, Xiaoping; Zhao, Genfu; Xie, Xiaoguang; Du, Guanben
2018-07-30
Prostate specific antigen (PSA) is the most significant biomarker for the screening of prostate cancer in human serum. However, most methods for the detection of PSA require major laboratories, precise analytical instruments and complicated operations. Currently, the design and development of satisfactory electrochemical biosensors based on biomimetic materials (e.g. synthetic receptors) and nanotechnology is highly desired. Thus, we focused on the combination of molecular recognition and versatile nanomaterials in electrochemical devices for advancing their analytical performance and robustness. Herein, by using the prepared multifunctional hydroxyl pillar[5]arene@gold nanoparticles@graphitic carbon nitride (HP5@AuNPs@g-C3N4) hybrid nanomaterial as a robust biomimetic element, a high-performance electrochemical immunosensor for the detection of PSA was constructed. The as-prepared immunosensor, with the competitive advantages of low cost, simple preparation and fast detection, exhibited remarkable robustness, ultra-sensitivity, excellent selectivity and reproducibility. The limit of detection (LOD) and linear range were 0.12 pg/mL (S/N = 3) and 0.0005-10.00 ng/mL, respectively. These satisfying results provide a promising approach for the clinical detection of PSA in human serum.
Thermoreflectance imaging of electromigration evolution in asymmetric aluminum constrictions
NASA Astrophysics Data System (ADS)
Tian, Hao; Ahn, Woojin; Maize, Kerry; Si, Mengwei; Ye, Peide; Alam, Muhammad Ashraful; Shakouri, Ali; Bermel, Peter
2018-01-01
Electromigration (EM) is a phenomenon whereby the flow of current in metal wires moves the underlying atoms, potentially inducing electronic interconnect failures. The continued decrease in commercial lithographically defined feature sizes means that EM presents an increasing risk to the reliability of modern electronics. To mitigate these risks, it is important to look for novel mechanisms to extend lifetime without forfeiting miniaturization. Typically, only the overall increase in the interconnect resistance and failure voltage are characterized. However, if the current flows non-uniformly, spatially resolving the resulting hot spots during electromigration aging experiments may provide better insights into the fundamental mechanisms of this process. In this study, we focus on aluminum interconnects containing asymmetric reservoir and void pairs with contact pads on each end. Such reservoirs are potential candidates for self-healing. Thermoreflectance imaging was used to detect hot spots in electrical interconnects at risk of failure as the voltage was gradually increased. It reveals differential heating with increasing voltage for each polarity. We find that while current flow going from a constriction to a reservoir causes a break at the void, the identical structure with the opposite polarity can sustain higher current (J = 21 × 10⁶ A/cm²) and more localized joule heating and yet is more stable. Ultimately, a break takes place at the contact pad where the current flows from narrow interconnect to larger pads. In summary, thermoreflectance imaging with submicron spatial resolution provides valuable information about localized electromigration evolution and the potential role of reservoirs to create more robust interconnects.
Model Based Autonomy for Robust Mars Operations
NASA Technical Reports Server (NTRS)
Kurien, James A.; Nayak, P. Pandurang; Williams, Brian C.; Lau, Sonie (Technical Monitor)
1998-01-01
Space missions have historically relied upon a large ground staff, numbering in the hundreds for complex missions, to maintain routine operations. When an anomaly occurs, this small army of engineers attempts to identify and work around the problem. A piloted Mars mission, with its multiyear duration, cost pressures, half-hour communication delays and two-week blackouts, cannot be closely controlled by a battalion of engineers on Earth. Flight crew involvement in routine system operations must also be minimized to maximize science return. It also may be unrealistic to require that the crew have the expertise in each mission subsystem needed to diagnose a system failure and effect a timely repair, as engineers did for Apollo 13. Enter model-based autonomy, which allows complex systems to autonomously maintain operation despite failures or anomalous conditions, contributing to safe, robust, and minimally supervised operation of spacecraft, life support, In Situ Resource Utilization (ISRU) and power systems. Autonomous reasoning is central to the approach. A reasoning algorithm uses a logical or mathematical model of a system to infer how to operate the system, diagnose failures and generate appropriate behavior to repair or reconfigure the system in response. The 'plug and play' nature of the models enables low-cost development of autonomy for multiple platforms. Declarative, reusable models capture relevant aspects of the behavior of simple devices (e.g. valves or thrusters). Reasoning algorithms combine device models to create a model of the system-wide interactions and behavior of a complex, unique artifact such as a spacecraft. Rather than requiring engineers to anticipate all possible interactions and failures at design time or perform analysis during the mission, the reasoning engine generates the appropriate response to the current situation, taking into account its system-wide knowledge, the current state, and even sensor failures or unexpected behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm that provides consistency guarantees even in very large and extreme-scale systems while remaining memory and bandwidth efficient.
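The logarithmic scaling of Gossip cycles is easy to see in a toy push-gossip model (plain Python; a drastic simplification of the paper's three algorithms, with no message loss and a single initial detector):

import math
import random

def gossip_cycles(n_alive, fanout=1, seed=0):
    """Cycles of push-gossip until every alive process knows the failed-process list."""
    rng = random.Random(seed)
    informed = {0}                 # process 0 detects the failure first
    cycles = 0
    while len(informed) < n_alive:
        for _ in range(len(informed)):
            informed.add(rng.randrange(n_alive))   # each informed process pushes
        cycles += 1
    return cycles

for n in (100, 1000, 10_000, 100_000):
    print(n, gossip_cycles(n), "~log2:", round(math.log2(n), 1))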
A Fault Tolerant System for an Integrated Avionics Sensor Configuration
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Lancraft, R. E.
1984-01-01
An aircraft sensor fault tolerant system methodology for the Transport Systems Research Vehicle in a Microwave Landing System (MLS) environment is described. The fault tolerant system provides reliable estimates in the presence of possible failures both in ground-based navigation aids, and in on-board flight control and inertial sensors. Sensor failures are identified by utilizing the analytic relationships between the various sensors arising from the aircraft point mass equations of motion. The estimation and failure detection performance of the software implementation (called FINDS) of the developed system was analyzed on a nonlinear digital simulation of the research aircraft. Simulation results showing the detection performance of FINDS, using a dual redundant sensor complement, are presented for bias, hardover, null, ramp, increased noise and scale factor failures. In general, the results show that FINDS can distinguish between normal operating sensor errors and failures while providing an excellent detection speed for bias failures in the MLS, indicated airspeed, attitude and radar altimeter sensors.
Robust obstacle detection for unmanned surface vehicles
NASA Astrophysics Data System (ADS)
Qin, Yueming; Zhang, Xiuzhi
2018-03-01
Obstacle detection is of essential importance for Unmanned Surface Vehicles (USV). Although some obstacles (e.g., ships, islands) can be detected by Radar, there are many other obstacles (e.g., floating pieces of wood, swimmers) which are difficult to detect via Radar because they have a low radar cross section. Therefore, detecting obstacles from images taken onboard is an effective supplement. In this paper, a robust vision-based obstacle detection method for USVs is developed. The proposed method employs the monocular image sequence captured by the camera on the USV and detects obstacles on the sea surface from the image sequence. The experimental results show that the proposed scheme is efficient in fulfilling the obstacle detection task.
Efficient and robust quantum random number generation by photon number detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Applegate, M. J.; Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE; Thomas, O.
2015-08-17
We present an efficient and robust quantum random number generator based upon high-rate room temperature photon number detection. We employ an electric field-modulated silicon avalanche photodiode, a type of device particularly suited to high-rate photon number detection with excellent photon number resolution, to detect, without an applied dead-time, up to 4 photons from the optical pulses emitted by a laser. By both measuring and modeling the response of the detector to the incident photons, we are able to determine the illumination conditions that achieve an optimal bit rate that we show is robust against variation in the photon flux. We extract random bits from the detected photon numbers with an efficiency of 99%, corresponding to 1.97 bits per detected photon number and yielding a bit rate of 143 Mbit/s, and verify that the extracted bits pass stringent statistical tests for randomness. Our scheme is highly scalable and has the potential of multi-Gbit/s bit rates.
Topological patterns in street networks of self-organized urban settlements
NASA Astrophysics Data System (ADS)
Buhl, J.; Gautrais, J.; Reeves, N.; Solé, R. V.; Valverde, S.; Kuntz, P.; Theraulaz, G.
2006-02-01
Many urban settlements result from a spatially distributed, decentralized building process. Here we analyze the topological patterns of organization of a large collection of such settlements using the approach of complex networks. The global efficiency (based on the inverse of shortest-path lengths), robustness to disconnections and cost (in terms of length) of these graphs is studied and their possible origins analyzed. A wide range of patterns is found, from tree-like settlements (highly vulnerable to random failures) to meshed urban patterns. The latter are shown to be more robust and efficient.
Speedy routing recovery protocol for large failure tolerance in wireless sensor networks.
Lee, Joa-Hyoung; Jung, In-Bum
2010-01-01
Wireless sensor networks are expected to play an increasingly important role in data collection in hazardous areas. However, the physical fragility of a sensor node makes reliable routing in hazardous areas a challenging problem. Because several sensor nodes in a hazardous area could be damaged simultaneously, the network should be able to recover routing after node failures over large areas. Many routing protocols take single-node failure recovery into account, but it is difficult for these protocols to recover the routing after large-scale failures. In this paper, we propose a routing protocol, referred to as ARF (Adaptive routing protocol for fast Recovery from large-scale Failure), to recover a network quickly after failures over large areas. ARF detects failures by counting the packet losses from parent nodes, and upon failure detection, it decreases the routing interval to notify the neighbor nodes of the failure. Our experimental results indicate that ARF can recover from large-area failures quickly with fewer packets and less energy consumption than previous protocols.
A Computational Framework to Control Verification and Robustness Analysis
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2010-01-01
This paper presents a methodology for evaluating the robustness of a controller based on its ability to satisfy the design requirements. The framework proposed is generic since it allows for high-fidelity models, arbitrary control structures and arbitrary functional dependencies between the requirements and the uncertain parameters. The cornerstone of this contribution is the ability to bound the region of the uncertain parameter space where the degradation in closed-loop performance remains acceptable. The size of this bounding set, whose geometry can be prescribed according to deterministic or probabilistic uncertainty models, is a measure of robustness. The robustness metrics proposed herein are the parametric safety margin, the reliability index, the failure probability and upper bounds to this probability. The performance observed at the control verification setting, where the assumptions and approximations used for control design may no longer hold, will fully determine the proposed control assessment.
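Of the metrics listed, the failure probability is the most direct to approximate by sampling: draw the uncertain parameters from their assumed distribution and count requirement violations. A hedged Python sketch with a made-up requirement function g (everything below is illustrative, not the paper's framework):

import numpy as np

def failure_probability(requirement_ok, sampler, n=50_000):
    """Monte Carlo estimate of P(requirements violated) over uncertain parameters."""
    return float(np.mean([not requirement_ok(p) for p in sampler(n)]))

# Hypothetical closed-loop requirement: overshoot g(p) must stay below 0.2.
g = lambda p: 0.1 + 0.5 * p[0] ** 2 + 0.3 * abs(p[1])
requirement_ok = lambda p: g(p) < 0.2
sampler = lambda n: np.random.default_rng(0).normal(0.0, 0.2, size=(n, 2))
print(failure_probability(requirement_ok, sampler))

The parametric safety margin is the deterministic counterpart: instead of averaging over samples, one searches for the smallest parameter perturbation that violates a requirement.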
Failure detection and identification for a reconfigurable flight control system
NASA Technical Reports Server (NTRS)
Dallery, Francois
1987-01-01
Failure detection and identification logic for a fault-tolerant longitudinal control system was investigated. Aircraft dynamics were based upon the cruise condition for a hypothetical transonic business jet transport configuration. The fault-tolerant control system consists of conventional control and estimation plus a new outer loop containing failure detection, identification, and reconfiguration (FDIR) logic. It is assumed that the additional logic has access to all measurements, as well as to the outputs of the control and estimation logic. The pilot may also command the FDIR logic to perform special tests.
NASA Technical Reports Server (NTRS)
1976-01-01
Analytic techniques have been developed for detecting and identifying abrupt changes in dynamic systems. The GLR technique monitors the output of the Kalman filter and searches for the time at which the failure occurred, allowing it to be sensitive to new data and consequently increasing the chances for fast system recovery following detection of a failure. All failure detections are based on functional redundancy. Performance tests on the F-8 aircraft flight control system and computerized modelling of the technique are presented.
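In scalar form, the GLR idea reduces to a search over candidate failure onset times for the bias hypothesis that best explains the Kalman filter innovations. A simplified Python sketch (assuming unit-variance innovations and a pure bias-type failure; the flight implementation is considerably richer):

import numpy as np

def glr_bias_test(innovations, sigma2, window=100):
    """Scalar GLR for an abrupt bias: max over onset times of the LR statistic."""
    n = len(innovations)
    best_stat, best_onset = 0.0, None
    for theta in range(max(0, n - window), n):
        seg = innovations[theta:]
        stat = seg.sum() ** 2 / (sigma2 * len(seg))   # 2*log LR for a jump at theta
        if stat > best_stat:
            best_stat, best_onset = stat, theta
    return best_stat, best_onset

rng = np.random.default_rng(0)
innov = rng.normal(0, 1, 200)
innov[150:] += 1.5                # a sensor failure injects a bias at t = 150
stat, onset = glr_bias_test(innov, sigma2=1.0)
print(stat, onset)                # large statistic, onset near 150 => declare failure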
Kapich, Davorin D.
1987-01-01
A bearing system includes backup bearings for supporting a rotating shaft upon failure of primary bearings. In the preferred embodiment, the backup bearings are rolling element bearings having their rolling elements disposed out of contact with their associated respective inner races during normal functioning of the primary bearings. Displacement detection sensors are provided for detecting displacement of the shaft upon failure of the primary bearings. Upon detection of the failure of the primary bearings, the rolling elements and inner races of the backup bearings are brought into mutual contact by axial displacement of the shaft.
Failure detection system risk reduction assessment
NASA Technical Reports Server (NTRS)
Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)
2012-01-01
A process includes determining the probability of a failure mode of the system being analyzed reaching a failure limit as a function of time to failure limit, determining the probability of a mitigation of the failure mode as a function of time to failure limit, and quantifying the risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.
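Read literally, the claimed process is a pointwise computation over two time-dependent probability curves. A toy Python illustration with hypothetical curves (exponential failure growth, logistic mitigation), not the patented procedure itself:

import numpy as np

t = np.linspace(0.0, 10.0, 101)                  # time to failure limit
p_fail = 1.0 - np.exp(-0.25 * t)                 # P(failure mode reaches the limit)
p_mitigate = 1.0 / (1.0 + np.exp(-(t - 3.0)))    # P(mitigation succeeds in time)

# Risk reduction: the portion of the failure probability removed by mitigation.
risk_unmitigated = p_fail
risk_mitigated = p_fail * (1.0 - p_mitigate)
reduction = risk_unmitigated - risk_mitigated
print(reduction[-1], reduction[-1] / risk_unmitigated[-1])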
NASA Astrophysics Data System (ADS)
Zarrabian, Sina; Belkacemi, Rabie; Babalola, Adeniyi A.
2016-12-01
In this paper, a novel intelligent control is proposed based on Artificial Neural Networks (ANN) to mitigate cascading failure (CF) and prevent blackout in smart grid systems after an N-1-1 contingency condition in real time. The fundamental contribution of this research is to deploy the machine learning concept for preventing blackout at the early stages of its occurrence and to make smart grids more resilient, reliable, and robust. The proposed method provides the best action-selection strategy for adaptive adjustment of generators' output power through frequency control. This method is able to relieve congestion of transmission lines and prevent consecutive transmission line outages after an N-1-1 contingency condition. The proposed ANN-based control approach is tested on an experimental 100 kW test system developed by the authors to test intelligent systems. Additionally, the proposed approach is validated on the large-scale IEEE 118-bus power system by simulation studies. Experimental results show that the ANN approach is very promising and provides accurate and robust control by preventing blackout. The technique is compared to a heuristic multi-agent system (MAS) approach based on communication interchanges. The ANN approach showed a more accurate and robust response than the MAS algorithm.
Precise visual navigation using multi-stereo vision and landmark matching
NASA Astrophysics Data System (ADS)
Zhu, Zhiwei; Oskiper, Taragay; Samarasekera, Supun; Kumar, Rakesh
2007-04-01
Traditional vision-based navigation systems often drift over time during navigation. In this paper, we propose a set of techniques which greatly reduce the long-term drift and also improve robustness to many failure conditions. In our approach, two pairs of stereo cameras are integrated to form a forward/backward multi-stereo camera system. As a result, the field of view of the system is extended significantly to capture more natural landmarks from the scene. This helps to increase the pose estimation accuracy as well as reduce the failure situations. Secondly, a global landmark matching technique is used to recognize previously visited locations during navigation. Using the matched landmarks, a pose correction technique is used to eliminate the accumulated navigation drift. Finally, in order to further improve the robustness of the system, measurements from low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) sensors are integrated with the visual odometry in an extended Kalman filtering framework. Our system is significantly more accurate and robust than previously published techniques (1~5% localization error) over long-distance navigation both indoors and outdoors. Real-world experiments on a human-worn system show that the location can be estimated within 1 meter over 500 meters (around 0.1% localization error on average) without the use of GPS information.
A robust vision-based sensor fusion approach for real-time pose estimation.
Assa, Akbar; Janabi-Sharifi, Farrokh
2014-02-01
Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
Building a robust vehicle detection and classification module
NASA Astrophysics Data System (ADS)
Grigoryev, Anton; Khanipov, Timur; Koptelov, Ivan; Bocharov, Dmitry; Postnikov, Vassily; Nikolaev, Dmitry
2015-12-01
The growing adoption of intelligent transportation systems (ITS) and autonomous driving requires robust real-time solutions for various event and object detection problems. Most real-world systems still cannot rely on computer vision algorithms and instead employ a wide range of costly additional hardware such as LIDARs. In this paper we explore engineering challenges encountered in building a highly robust visual vehicle detection and classification module that works under a broad range of environmental and road conditions. The resulting technology is competitive with traditional non-visual means of traffic monitoring. The main focus of the paper is on software and hardware architecture, algorithm selection and domain-specific heuristics that help the computer vision system avoid implausible answers.
TU-FG-201-09: Predicting Accelerator Dysfunction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Able, C; Nguyen, C; Baydush, A
Purpose: To develop an integrated statistical process control (SPC) framework using digital performance and component data accumulated within the accelerator system that can detect dysfunction prior to unscheduled downtime. Methods: Seven digital accelerators were monitored for 12 to 18 months. The accelerators were operated in a 'run to failure' mode with the individual institutions determining when service would be initiated. Institutions were required to submit detailed service reports. Trajectory and text log files resulting from a robust daily VMAT QA delivery were decoded and evaluated using Individual and Moving Range (I/MR) control charts. The SPC evaluation was presented in a customized dashboard interface that allows the user to review 525 monitored parameters (480 MLC parameters). Chart limits were calculated using a hybrid technique that includes the standard SPC 3σ limits and an empirical factor based on the parameter/system specification. The individual (I) grand mean values and control limit ranges of the I/MR charts of all accelerators were compared using statistical (ranked analysis of variance (ANOVA)) and graphical analyses to determine the consistency of operating parameters. Results: When an alarm or warning was directly connected to field service, process control charts predicted dysfunction consistently on beam generation related parameters (BGP): RF driver voltage, gun grid voltage, and forward power (W); beam uniformity parameters: angle and position steering coil currents; and the gantry position accuracy parameter: cross-correlation max-value. Control charts for individual MLC cross-correlation max-value/position detected 50% to 60% of MLCs serviced prior to dysfunction or failure. In general, non-random changes were detected 5 to 80 days prior to a service intervention. The ANOVA comparison of BGP determined that each accelerator parameter operated at a distinct value. Conclusion: The SPC framework shows promise. Long term monitoring coordinated with service will be required to definitively determine the effectiveness of the model. Varian Medical Systems, Inc. provided funding in support of the research presented.
HIV resistance testing and detected drug resistance in Europe.
Schultze, Anna; Phillips, Andrew N; Paredes, Roger; Battegay, Manuel; Rockstroh, Jürgen K; Machala, Ladislav; Tomazic, Janez; Girard, Pierre M; Januskevica, Inga; Gronborg-Laut, Kamilla; Lundgren, Jens D; Cozzi-Lepri, Alessandro
2015-07-17
To describe regional differences and trends in resistance testing among individuals experiencing virological failure and the prevalence of detected resistance among those individuals who had a genotypic resistance test done following virological failure. Multinational cohort study. Individuals in EuroSIDA with virological failure (>1 RNA measurement >500 on ART after >6 months on ART) after 1997 were included. Adjusted odds ratios (aORs) for resistance testing following virological failure and aORs for the detection of resistance among those who had a test were calculated using logistic regression with generalized estimating equations. Compared to 74.2% of ART-experienced individuals in 1997, only 5.1% showed evidence of virological failure in 2012. The odds of resistance testing declined after 2004 (global P < 0.001). Resistance was detected in 77.9% of the tests, NRTI resistance being most common (70.3%), followed by NNRTI (51.6%) and protease inhibitor (46.1%) resistance. The odds of detecting resistance were lower in tests done in 1997-1998, 1999-2000 and 2009-2010, compared to those carried out in 2003-2004 (global P < 0.001). Resistance testing was less common in Eastern Europe [aOR 0.72, 95% confidence interval (CI) 0.55-0.94] compared to Southern Europe, whereas the detection of resistance given that a test was done was less common in Northern (aOR 0.29, 95% CI 0.21-0.39) and Central Eastern (aOR 0.47, 95% CI 0.29-0.76) Europe, compared to Southern Europe. Despite a concurrent decline in virological failure and testing, drug resistance was commonly detected. This suggests a selective approach to resistance testing. The regional differences identified indicate that policy aiming to minimize the emergence of resistance is of particular relevance in some European regions, notably in the countries in Eastern Europe.
NASA Technical Reports Server (NTRS)
Vanschalkwyk, Christiaan M.
1992-01-01
We discuss the application of Generalized Parity Relations to two experimental flexible space structures, the NASA Langley Mini-Mast and the Marshall Space Flight Center ACES mast. We concentrate on the generation of residuals and make no attempt to implement the Decision Function. It should be clear from the examples that are presented whether it would be possible to detect the failure of a specific component. We derive the equations from Generalized Parity Relations. Two special cases are treated: namely, Single Sensor Parity Relations (SSPR) and Double Sensor Parity Relations (DSPR). Generalized Parity Relations for actuators are also derived. The NASA Langley Mini-Mast and the application of SSPR and DSPR to a set of displacement sensors located at the tip of the Mini-Mast are discussed. The performance of a reduced order model that includes the first five modes of the mast is compared to a set of parity relations that was identified on a set of input-output data. Both time domain and frequency domain comparisons are made. The effect of the sampling period and model order on the performance of the Residual Generators are also discussed. Failure detection experiments where the sensor set consisted of two gyros and an accelerometer are presented. The effects of model order and sampling frequency are again illustrated. The detection of actuator failures is discussed. We use Generalized Parity Relations to monitor control system component failures on the ACES mast. An overview is given of the Failure Detection Filter and experimental results are discussed. Conclusions and directions for future research are given.
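A single-sensor parity relation checks one sensor against its own past through the ARMA relation implied by the structural model, so the residual stays small while the sensor is healthy and jumps when it fails. A minimal Python sketch under assumed dynamics (an identified AR(2) model standing in for a modal response; the coefficients and the bias-failure scenario are hypothetical, not Mini-Mast data):

import numpy as np

def ssp_residual(y, a):
    """Single-sensor parity residual: r[k] = y[k] - sum_i a[i] * y[k-1-i]."""
    p = len(a)
    r = np.zeros(len(y))
    for k in range(p, len(y)):
        r[k] = y[k] - np.dot(a, y[k - p : k][::-1])
    return r

rng = np.random.default_rng(0)
a = np.array([1.8, -0.9])            # identified AR coefficients for the sensor
y = np.zeros(500)
for k in range(2, 500):
    y[k] = a @ y[k - 2 : k][::-1] + 0.01 * rng.standard_normal()
y[300:] += 0.5                       # abrupt sensor bias failure at k = 300
r = ssp_residual(y, a)
print(np.abs(r[2:300]).max(), np.abs(r[300:306]).max())  # residual jumps at onset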
Real-Time Detection of Infusion Site Failures in a Closed-Loop Artificial Pancreas.
Howsmon, Daniel P; Baysal, Nihat; Buckingham, Bruce A; Forlenza, Gregory P; Ly, Trang T; Maahs, David M; Marcal, Tatiana; Towers, Lindsey; Mauritzen, Eric; Deshpande, Sunil; Huyett, Lauren M; Pinsker, Jordan E; Gondhalekar, Ravi; Doyle, Francis J; Dassau, Eyal; Hahn, Juergen; Bequette, B Wayne
2018-05-01
As evidence emerges that artificial pancreas systems improve clinical outcomes for patients with type 1 diabetes, the burden of this disease will hopefully begin to be alleviated for many patients and caregivers. However, reliance on automated insulin delivery potentially means patients will be slower to act when devices stop functioning appropriately. One such scenario involves an insulin infusion site failure, where the insulin that is recorded as delivered fails to affect the patient's glucose as expected. Alerting patients to these events in real time would potentially reduce hyperglycemia and ketosis associated with infusion site failures. An infusion site failure detection algorithm was deployed in a randomized crossover study with artificial pancreas and sensor-augmented pump (SAP) arms in an outpatient setting. Each arm lasted two weeks. Nineteen participants wore infusion sets for up to 7 days. Clinicians contacted patients to confirm infusion site failures detected by the algorithm and instructed on set replacement if failure was confirmed. In real time and under zone model predictive control, the infusion site failure detection algorithm achieved a sensitivity of 88.0% (n = 25) while issuing only 0.22 false positives per day, compared with a sensitivity of 73.3% (n = 15) and 0.27 false positives per day in the SAP arm (as indicated by retrospective analysis). No association between intervention strategy and duration of infusion sets was observed (P = .58). As patient burden is reduced by each generation of advanced diabetes technology, fault detection algorithms will help ensure that patients are alerted when they need to manually intervene. Clinical Trial Identifier: www.clinicaltrials.gov, NCT02773875.
Automatic OPC repair flow: optimized implementation of the repair recipe
NASA Astrophysics Data System (ADS)
Bahnas, Mohamed; Al-Imam, Mohamed; Word, James
2007-10-01
Virtual manufacturing, enabled by rapid, accurate, full-chip simulation, is a main pillar in achieving successful mask tape-out in cutting-edge low-k1 lithography. It facilitates detecting printing failures before a costly and time-consuming mask tape-out and wafer print occur. The OPC verification step is critical in the early production phases of a new process development, since various layout patterns will be suspected of failing or causing performance degradation, and in turn need to be accurately flagged and fed back to the OPC engineer for further learning and enhancement of the OPC recipe. In the advanced phases of process development there is much less probability of detecting failures, but the OPC verification step still acts as the last line of defense for the whole implemented RET work. In a recent publication, the optimum approach to responding to these detected failures was addressed, and a solution was proposed to repair these defects with an automated methodology that is fully integrated and compatible with the main RET/OPC flow. In this paper the authors present further work and optimizations of this repair flow. An automated analysis methodology for the root causes of the defects, with a classification covering all possible causes, will be discussed. This automated analysis incorporates the learning from previously highlighted causes and includes any new discoveries. Next, according to the automated pre-classification of the defects, the appropriate OPC repair approach (i.e. OPC knob) can be selected for each classified defect location, instead of applying all approaches to all locations. This helps cut down the runtime of the OPC repair processing and reduces the number of iterations needed to reach the status of zero defects. An output report of the existing causes of defects, and how the tool handled them, is generated. The report will help further learning and facilitate the enhancement of the main OPC recipe. Accordingly, the main OPC recipe can be made more robust, converging faster and probably in fewer iterations. This knowledge feedback loop is one of the fruitful benefits of the Automatic OPC Repair flow.
Optimized feature-detection for on-board vision-based surveillance
NASA Astrophysics Data System (ADS)
Gond, Laetitia; Monnin, David; Schneider, Armin
2012-06-01
The detection and matching of robust features in images is an important step in many computer vision applications. In this paper, the importance of the keypoint detection algorithms and their inherent parameters in the particular context of an image-based change detection system for IED detection is studied. Through extensive application-oriented experiments, we draw an evaluation and comparison of the most popular feature detectors proposed by the computer vision community. We analyze how to automatically adjust these algorithms to changing imaging conditions and suggest improvements in order to achieve more flexibility and robustness in their practical implementation.
2009-12-01
facilitating reliable stereo matching, occlusion handling, accurate 3D reconstruction and robust moving target detection. From a moving platform, motion parallax and object occlusions must be handled naturally and effectively in order to detect moving targets.
Cascade phenomenon against subsequent failures in complex networks
NASA Astrophysics Data System (ADS)
Jiang, Zhong-Yuan; Liu, Zhi-Quan; He, Xuan; Ma, Jian-Feng
2018-06-01
Cascade phenomena may lead to catastrophic disasters which extremely imperil network safety or security in various complex systems such as communication networks, power grids, social networks and so on. In some flow-based networks, the load of failed nodes can be redistributed locally to their neighboring nodes to maximally prevent traffic oscillations or large-scale cascading failures. However, in such a local flow redistribution model, a small set of key nodes attacked subsequently can result in network collapse. It is therefore a critical problem to effectively find the set of key nodes in the network. To the best of our knowledge, this work is the first to study this problem comprehensively. We first introduce extra capacity for every node to put up with flow fluctuations from neighbors, and two extra capacity distributions, degree-based distribution and average distribution, are employed. Four heuristic key node discovery methods, including High-Degree-First (HDF), Low-Degree-First (LDF), Random and Greedy Algorithms (GA), are presented. Extensive simulations are performed on both scale-free networks and random networks. The results show that the greedy algorithm can efficiently find the set of key nodes in both scale-free and random networks. Our work studies network robustness against cascading failures from a very novel perspective, and the methods and results are very useful for network robustness evaluation and protection.
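A sketch of this setup in Python/networkx (our toy instantiation: degree-proportional initial load, uniform extra headroom, and the greedy criterion of largest triggered cascade; details differ from the paper):

import networkx as nx

def cascade_size(g, seeds, alpha=0.2):
    """Local-redistribution cascade: failed nodes shed load equally to alive neighbors."""
    load = {v: g.degree(v) for v in g}                  # initial load ~ degree
    cap = {v: (1 + alpha) * load[v] + 1.0 for v in g}   # capacity incl. extra headroom
    failed, frontier = set(seeds), list(seeds)
    while frontier:
        v = frontier.pop()
        alive = [u for u in g[v] if u not in failed]
        for u in alive:
            load[u] += load[v] / len(alive)             # local flow redistribution
            if load[u] > cap[u]:
                failed.add(u)
                frontier.append(u)
    return len(failed)

def greedy_key_nodes(g, k=3):
    """Greedily pick nodes whose (joint) failure triggers the largest cascade."""
    chosen = []
    for _ in range(k):
        best = max((v for v in g if v not in chosen),
                   key=lambda v: cascade_size(g, chosen + [v]))
        chosen.append(best)
    return chosen

g = nx.barabasi_albert_graph(200, 2, seed=0)
print(greedy_key_nodes(g))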
Stamping SERS for creatinine sensing
NASA Astrophysics Data System (ADS)
Li, Ming; Du, Yong; Zhao, Fusheng; Zeng, Jianbo; Santos, Greggy M.; Mohan, Chandra; Shih, Wei-Chuan
2015-03-01
Urine can be obtained easily, readily and non-invasively. The analysis of urine can provide metabolic information about the body and the condition of renal function. Creatinine is one of the major components of human urine associated with muscle metabolism. Since the amount of creatinine excreted into urine is relatively constant, it is used as an internal standard to normalize water variations. Moreover, the detection of creatinine concentration in urine is important for the renal clearance test, which can monitor the filtration function of the kidney and health status; in particular, kidney failure can be imminent when the creatinine concentration in urine is high. A simple device and protocol for creatinine sensing in urine samples can be valuable for point-of-care applications. We report quantitative analysis of creatinine in urine samples using the stamping surface enhanced Raman scattering (S-SERS) technique with a nanoporous gold disk (NPGD) based SERS substrate. S-SERS enables label-free and multiplexed molecular sensing under dry conditions, while NPGDs provide a robust, controllable, and high-sensitivity SERS substrate. The performance of S-SERS with NPGDs is evaluated by the detection and quantification of pure creatinine and creatinine in artificial urine within physiologically relevant concentration ranges.
Chakraborty, Arindom
2016-12-01
A common objective in longitudinal studies is to characterize the relationship between a longitudinal response process and time-to-event data. The ordinal nature of the response and possible missing information on covariates add complications to the joint model. In such circumstances, some influential observations often present in the data may upset the analysis. In this paper, a joint model based on an ordinal partial mixed model and an accelerated failure time model is used to account for the repeated ordered response and the time-to-event data, respectively. We propose an influence function-based robust estimation method. A Monte Carlo expectation maximization-based algorithm is used for parameter estimation. A detailed simulation study has been done to evaluate the performance of the proposed method. As an application, data on muscular dystrophy among children are used. Robust estimates are then compared with classical maximum likelihood estimates.
Linear quadratic servo control of a reusable rocket engine
NASA Technical Reports Server (NTRS)
Musgrave, Jeffrey L.
1991-01-01
A design method for a servo compensator is developed in the frequency domain using singular values. The method is applied to a reusable rocket engine. An intelligent control system for reusable rocket engines was proposed which includes a diagnostic system, a control system, and an intelligent coordinator which determines engine control strategies based on the identified failure modes. The method provides a means of generating various linear multivariable controllers capable of meeting performance and robustness specifications and accommodating failure modes identified by the diagnostic system. Command following with set point control is necessary for engine operation. A Kalman filter reconstructs the state while loop transfer recovery recovers the required degree of robustness while maintaining satisfactory rejection of sensor noise from the command error. The approach is applied to the design of a controller for a rocket engine satisfying performance constraints in the frequency domain. Simulation results demonstrate the performance of the linear design on a nonlinear engine model over all power levels during mainstage operation.
NASA Astrophysics Data System (ADS)
Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.
2016-03-01
Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic methods for segmenting the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data for which no ground truth exists, it is highly desirable to detect and assess algorithm failures efficiently so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed at assessing the performance of an automatic CP segmentation algorithm are presented. The first is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either successes or failures. We then design three groups of features from the image data of the nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method, supervised classification, was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers (linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)) were trained using the designed features and evaluated using leave-one-out cross validation. Results show that LR performs worst among the four classifiers while the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
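A compact sketch of the supervised-classification step with leave-one-out cross validation, using scikit-learn; the features and success/failure labels below are synthetic stand-ins for the designed image features and the manual categorization.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(249, 3))    # 3 assumed features for 249 subjects
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=249) > 0).astype(int)

clf = LogisticRegression()       # one of the four classifiers compared
acc = cross_val_score(clf, X, y, cv=LeaveOneOut())  # one fold per subject
print("LOOCV accuracy:", acc.mean())
```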
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sallaberry, Cedric Jean-Marie.; Helton, Jon Craig
2012-10-01
Weak link (WL)/strong link (SL) systems are important parts of the overall operational design of high-consequence systems. In such designs, the SL system is very robust and is intended to permit operation of the entire system under, and only under, intended conditions. In contrast, the WL system is intended to fail in a predictable and irreversible manner under accident conditions and render the entire system inoperable before an accidental operation of the SL system. The likelihood that the WL system will fail to deactivate the entire system before the SL system fails (i.e., degrades into a configuration that could allow an accidental operation of the entire system) is referred to as probability of loss of assured safety (PLOAS). This report describes the Fortran 90 program CPLOAS_2 that implements the following representations for PLOAS for situations in which both link physical properties and link failure properties are time-dependent: (i) failure of all SLs before failure of any WL, (ii) failure of any SL before failure of any WL, (iii) failure of all SLs before failure of all WLs, and (iv) failure of any SL before failure of all WLs. The effects of aleatory uncertainty and epistemic uncertainty in the definition and numerical evaluation of PLOAS can be included in the calculations performed by CPLOAS_2.
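One of the four PLOAS definitions, failure of all SLs before failure of any WL, lends itself to a simple Monte Carlo check, sketched below with invented Weibull failure-time models. CPLOAS_2 itself evaluates analytical representations; these distributions and counts are not its inputs.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_sl, n_wl = 100_000, 2, 2

# Assumed failure-time models: standard Weibull shapes with different scales.
sl_times = rng.weibull(2.0, size=(n_trials, n_sl)) * 5.0
wl_times = rng.weibull(2.0, size=(n_trials, n_wl)) * 4.0

# Definition (i): all SLs fail before any WL, i.e. max SL time < min WL time.
ploas = np.mean(sl_times.max(axis=1) < wl_times.min(axis=1))
print("estimated PLOAS, case (i):", ploas)
```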
Ferrographic and spectrographic analysis of oil sampled before and after failure of a jet engine
NASA Technical Reports Server (NTRS)
Jones, W. R., Jr.
1980-01-01
An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. Several engine oil samples (taken before and after the failure) were analyzed with a Ferrograph as well as with plasma, atomic absorption, and emission spectrometers. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism nor a high level of wear debris was detected in the oil sample taken from the engine just prior to the test in which the failure occurred. However, low concentrations of titanium were evident in this sample and in samples taken earlier. After the failure, higher titanium concentrations were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that after the failure most of the titanium was contained in spherical metallic debris.
Jiang, Xuejun; Guo, Xu; Zhang, Ning; Wang, Bo
2018-01-01
This article presents and investigates the performance of a series of robust multivariate nonparametric tests for detecting location shift between two multivariate samples in randomized controlled trials. The tests are built upon robust estimators of distribution location (medians, Hodges-Lehmann estimators, and an extended U statistic) in both unscaled and scaled versions. The nonparametric tests are robust to outliers and do not assume that the two samples are drawn from multivariate normal distributions. Bootstrap and permutation approaches are introduced for determining the p-values of the proposed test statistics. Simulation studies are conducted and numerical results are reported to examine the performance of the proposed statistical tests. The numerical results demonstrate that the robust multivariate nonparametric tests constructed from the Hodges-Lehmann estimators are more efficient than those based on medians and the extended U statistic. The permutation approach provides a more stringent control of Type I error and is generally more powerful than the bootstrap procedure. The proposed robust nonparametric tests are applied to detect multivariate distributional differences between the intervention and control groups in the Thai Healthy Choices study and to examine the effect of a four-session motivational interviewing-based intervention developed in the study to reduce risk behaviors among youth living with HIV. PMID:29672555
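A sketch of one such test: a componentwise Hodges-Lehmann shift estimate (the median of all pairwise differences between the two samples) with a permutation p-value computed on its norm. Using the norm as the test statistic is an assumption made for illustration.

```python
import numpy as np

def hl_shift(x, y):
    """Componentwise Hodges-Lehmann estimate of the shift between samples."""
    diffs = x[:, None, :] - y[None, :, :]          # all pairwise differences
    return np.median(diffs.reshape(-1, x.shape[1]), axis=0)

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    observed = np.linalg.norm(hl_shift(x, y))
    pooled, n = np.vstack([x, y]), len(x)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))         # relabel the two groups
        stat = np.linalg.norm(hl_shift(pooled[idx[:n]], pooled[idx[n:]]))
        count += stat >= observed
    return (count + 1) / (n_perm + 1)
```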
Monahan, Mark; Barton, Pelham; Taylor, Clare J; Roalfe, Andrea K; Hobbs, F D Richard; Cowie, Martin; Davis, Russell; Deeks, Jon; Mant, Jonathan; McCahon, Deborah; McDonagh, Theresa; Sutton, George; Tait, Lynda
2017-08-15
Detection and treatment of heart failure (HF) can improve quality of life and reduce premature mortality. However, symptoms such as breathlessness are common in primary care, have a variety of causes, and not all patients require cardiac imaging. In systems where healthcare resources are limited, ensuring that patients who are likely to have HF undergo appropriate and timely investigation is vital. A decision tree was developed to assess the cost-effectiveness of using the MICE (Male, Infarction, Crepitations, Edema) decision rule compared to other diagnostic strategies for identifying HF patients presenting to primary care. Data from REFER (REFer for EchocaRdiogram), an HF diagnostic accuracy study, were used to determine which patients received the correct diagnosis decision. The model adopted a UK National Health Service (NHS) perspective. The currently recommended National Institute for Health and Care Excellence (NICE) guideline for identifying patients with HF was the most cost-effective option, with a cost of £4400 per quality-adjusted life year (QALY) gained compared to a "do nothing" strategy. That is, patients presenting with symptoms suggestive of HF should be referred straight for echocardiography if they have a history of myocardial infarction or if their NT-proBNP level is ≥400 pg/ml. The MICE rule was more expensive and less effective than the other comparators. Base-case results were robust to sensitivity analyses. This represents the first cost-utility analysis comparing HF diagnostic strategies for symptomatic patients. Current guidelines in England were the most cost-effective option for identifying patients for confirmatory HF diagnosis. The low number of HF with reduced ejection fraction patients (12%) in the REFER patient population limited the benefits of early detection. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
2014-09-30
Duration AUV Missions with Minimal Human Intervention
Bellingham, James
Monterey Bay Aquarium Research Institute, 7700 Sandholdt Road, Moss Landing
...subsystem failures and environmental challenges. For example, should an AUV suffer the failure of one of its internal actuators, can that failure be...
- To reduce the need for operator intervention in the event of performance anomalies on long-duration AUV deployments
- To allow the vehicle to detect...
On-line detection of key radionuclides for fuel-rod failure in a pressurized water reactor.
Qin, Guoxiu; Chen, Xilin; Guo, Xiaoqing; Ni, Ning
2016-08-01
For early on-line detection of fuel-rod failure, the key radionuclides used for monitoring must leak easily from failing rods. The yield, half-life, and mass share of fission products that enter the primary coolant also need to be considered in on-line analyses. From all the nuclides that enter the primary coolant during fuel-rod failure, (135)Xe and (88)Kr were ultimately chosen as crucial for on-line monitoring of fuel-rod failure. A monitoring system for fuel-rod failure detection in a pressurized water reactor (PWR) based on a LaBr3(Ce) detector was assembled and tested. Samples of coolant from the PWR were measured using the system as well as an HPGe γ-ray spectrometer, and a comparison showed the method was feasible. Finally, the γ-ray spectra of primary coolant were measured under normal operations and during fuel-rod failure. The two peaks of (135)Xe (249.8 keV) and (88)Kr (2392.1 keV) were visible, confirming that the method is capable of monitoring fuel-rod failure on-line. Copyright © 2016 Elsevier Ltd. All rights reserved.
Signal analysis techniques for incipient failure detection in turbomachinery
NASA Technical Reports Server (NTRS)
Coffin, T.
1985-01-01
Signal analysis techniques for the detection and classification of incipient mechanical failures in turbomachinery were developed, implemented and evaluated. Signal analysis techniques available to describe dynamic measurement characteristics are reviewed. Time domain and spectral methods are described, and statistical classification in terms of moments is discussed. Several of these waveform analysis techniques were implemented on a computer and applied to dynamic signals. A laboratory evaluation of the methods with respect to signal detection capability is described. Plans for further technique evaluation and data base development to characterize turbopump incipient failure modes from Space Shuttle main engine (SSME) hot firing measurements are outlined.
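As a small illustration of such time-domain and spectral descriptors, the sketch below computes a few generic waveform features; this feature set is an assumption for illustration, not the toolset developed in the study.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def waveform_features(x, fs):
    """Generic time-domain and spectral features of a dynamic signal x."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    crest = np.max(np.abs(x)) / rms                  # peakiness indicator
    spec = np.abs(np.fft.rfft(x)) ** 2               # power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    centroid = np.sum(freqs * spec) / np.sum(spec)   # spectral centroid (Hz)
    return {"rms": rms, "skewness": skew(x), "kurtosis": kurtosis(x),
            "crest_factor": crest, "spectral_centroid": centroid}
```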
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Silva, T; Ketcha, M; Siewerdsen, J H
Purpose: In image-guided spine surgery, mapping 3D preoperative images to 2D intraoperative images via 3D-2D registration can provide valuable assistance in target localization. However, the presence of surgical instrumentation, hardware implants, and soft-tissue resection/displacement causes mismatches in image content, confounding existing registration methods. Manual/semi-automatic methods to mask such extraneous content are time-consuming, user-dependent, error-prone, and disruptive to clinical workflow. We developed and evaluated two novel similarity metrics within a robust registration framework to overcome such challenges in target localization. Methods: An IRB-approved retrospective study in 19 spine surgery patients included 19 preoperative 3D CT images and 50 intraoperative mobile radiographs in cervical, thoracic, and lumbar spine regions. A neuroradiologist provided truth definition of vertebral positions in CT and radiography. 3D-2D registration was performed using the CMA-ES optimizer with 4 gradient-based image similarity metrics: (1) gradient information (GI); (2) gradient correlation (GC); (3) a novel variant referred to as gradient orientation (GO); and (4) a second variant referred to as truncated gradient correlation (TGC). Registration accuracy was evaluated in terms of the projection distance error (PDE) of the vertebral levels. Results: Conventional similarity metrics were susceptible to gross registration error and failure modes associated with the presence of surgical instrumentation: for GI, the median PDE (± interquartile range) was 33.0±43.6 mm, and for GC, 23.0±92.6 mm. The robust metrics GO and TGC, on the other hand, demonstrated major improvement in PDE (7.6±9.4 mm and 8.1±18.1 mm, respectively) and elimination of gross failure modes. Conclusion: The proposed GO and TGC similarity measures improve registration accuracy and robustness to gross failure in the presence of strong image content mismatch. Such registration capability could offer valuable assistance in target localization without disruption of clinical workflow. G. Kleinszig and S. Vogt are employees of Siemens Healthcare.
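A minimal sketch of a gradient-correlation style similarity: the normalized cross-correlation of image gradients, averaged over the two axes. The GO and TGC variants named above differ in how gradient mismatch is weighted or truncated; this simplified form is only for orientation.

```python
import numpy as np

def gradient_correlation(a, b):
    """Mean normalized cross-correlation of the gradients of images a, b."""
    def ncc(u, v):
        u, v = u - u.mean(), v - v.mean()
        return (u * v).sum() / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    gax, gay = np.gradient(a.astype(float))
    gbx, gby = np.gradient(b.astype(float))
    return 0.5 * (ncc(gax, gbx) + ncc(gay, gby))
```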
Scheduling policies of intelligent sensors and sensor/actuators in flexible structures
NASA Astrophysics Data System (ADS)
Demetriou, Michael A.; Potami, Raffaele
2006-03-01
In this note, we revisit the problem of actuator/sensor placement in large civil infrastructures and flexible space structures within the context of spatial robustness. The positioning of these devices becomes more important in systems employing wireless sensor and actuator networks (WSAN) for improved control performance and for rapid failure detection. The ability of the sensing and actuating devices to possess the property of spatial robustness results in reduced control energy, and therefore the spatial distribution of disturbances is integrated into the location optimization measures. In our studies, the structure under consideration is a flexible plate clamped at all sides. First, we consider the case of sensor placement, where the optimization scheme attempts to produce those locations that minimize the effects of the spatial distribution of disturbances on the state estimation error; thus the sensor locations produce state estimators with minimized disturbance-to-error transfer function norms. A two-stage optimization procedure is employed whereby one first considers the open-loop system and finds the spatial distribution of disturbances that produces the maximal effects on the entire open-loop state. Once this "worst" spatial distribution of disturbances is found, the optimization scheme subsequently finds the locations that produce state estimators with minimum transfer function norms. In the second part, we consider collocated actuator/sensor pairs, and the optimization scheme produces those locations that result in compensators with the smallest norms of the disturbance-to-state transfer functions. Going a step further, an intelligent control scheme is presented which, at each time interval, activates a subset of the actuator/sensor pairs in order to provide robustness against spatiotemporally moving disturbances and to minimize power consumption by keeping some sensor/actuator pairs in sleep mode.
Test plan. GCPS task 7, subtask 7.1: IHM development
NASA Technical Reports Server (NTRS)
Greenberg, H. S.
1994-01-01
The overall objective of Task 7 is to identify cost-effective life cycle integrated health management (IHM) approaches for a reusable launch vehicle's primary structure. Acceptable IHM approaches must: eliminate and accommodate faults through robust designs, identify optimum inspection/maintenance periods, automate ground and on-board test and check-out, and detect and accommodate structural faults by providing wide and localized area sensor and test coverage as required. These requirements are elements of our targeted primary structure low-cost operations approach using airline-like maintenance-by-exception philosophies. This development plan will follow an evolutionary path paving the way to the ultimate development of flight-quality production, operations, and vehicle systems. This effort will be focused on maturing the recommended sensor technologies required for localized and wide area health monitoring to a technology readiness level (TRL) of 6 and on establishing flight-ready system design requirements. The following is a brief list of IHM program objectives: design out faults by analyzing material properties, structural geometry, and load and environment variables and identify failure modes and damage tolerance requirements; design in system robustness while meeting the performance objectives (weight limitations) of the reusable launch vehicle primary structure; establish structural integrity margins to preclude the need for test and checkout and predict optimum inspection/maintenance periods through life prediction analysis; identify optimum fault protection system concept definitions combining the system robustness and integrity margins established above with cost-effective health monitoring technologies; and use coupons, panels, and integrated full-scale primary structure test articles to identify, evaluate, and characterize the preferred NDE/NDI/IHM sensor technologies that will be part of the fault protection system.
NASA Astrophysics Data System (ADS)
Mannar, Kamal; Ceglarek, Darek
2005-11-01
Customer feedback in the form of warranty/field performance is an important and direct indicator of the quality and robustness of a product. Linking warranty information to manufacturing measurements can identify key design parameters and process variables (DPs and PVs) that are related to warranty failures. Warranty data have traditionally been used in reliability studies to determine failure distributions and warranty cost. This paper proposes a novel Fault Region Localization (FRL) methodology to map warranty failures to manufacturing measurements (and hence to DPs/PVs) in order to diagnose warranty failures and perform tolerance re-evaluation. The FRL methodology consists of two parts: (1) identifying relations between warranty failures and DPs/PVs using the Generalized Rough Set (GRS) method, a supervised learning technique that identifies the specific DPs and PVs related to given warranty failures and then determines the corresponding Warranty Fault Region (WFR), Normal Region (NR), and Boundary Region (BND); GRS extends the traditional Rough Set method by allowing for noise and uncertainty in the warranty data classes; and (2) re-evaluating the original tolerances of DPs/PVs based on the identified WFR and BND regions. The FRL methodology is illustrated using case studies based on two warranty failures from the electronics industry.
Failure detection and fault management techniques for flush airdata sensing systems
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.
1992-01-01
A high-angle-of-attack flush airdata sensing system was installed and flight tested on the F-18 High Alpha Research Vehicle at NASA-Dryden. This system uses a matrix of pressure orifices arranged in concentric circles on the nose of the vehicle to determine angle of attack, angle of sideslip, dynamic pressure, and static pressure, as well as other airdata parameters. Results presented use an arrangement of 11 symmetrically distributed ports on the aircraft nose. Experience with this sensing system indicates that the primary concern for real-time implementation is the detection and management of overall system and individual pressure sensor failures. The multiple-port sensing system is more tolerant of small disturbances in the measured pressure data than conventional probe-based intrusive airdata systems. However, under adverse circumstances, large undetected failures in individual pressure ports can result in algorithm divergence and catastrophic failure of the entire system. It is shown how system and individual port failures may be detected using chi-square analysis. Once identified, the effects of failures are eliminated using weighted least squares.
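A simplified sketch of residual-based port screening in this spirit: fit an assumed linear pressure model by least squares, apply a chi-square test to the normalized residuals, and refit with the worst port removed. The actual system uses a nonlinear aerodynamic pressure model; H, sigma, and the threshold below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

def screen_ports(H, p, sigma, alpha=0.01):
    """H: ports x states model, p: measured pressures, sigma: noise std."""
    x, *_ = np.linalg.lstsq(H, p, rcond=None)
    r = (p - H @ x) / sigma                        # normalized residuals
    dof = len(p) - H.shape[1]
    if np.sum(r ** 2) > chi2.ppf(1 - alpha, dof):  # failure suspected
        worst = int(np.argmax(np.abs(r)))          # most discrepant port
        w = np.ones(len(p))
        w[worst] = 0.0                             # exclude it via weights
        W = np.diag(w)
        x, *_ = np.linalg.lstsq(W @ H, W @ p, rcond=None)
        return x, worst
    return x, None
```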
Gear Fault Detection Effectiveness as Applied to Tooth Surface Pitting Fatigue Damage
NASA Technical Reports Server (NTRS)
Lewicki, David G.; Dempsey, Paula J.; Heath, Gregory F.; Shanthakumaran, Perumal
2009-01-01
A study was performed to evaluate fault detection effectiveness as applied to gear tooth pitting fatigue damage. Vibration and oil-debris monitoring (ODM) data were gathered from 24 sets of spur pinion and face gears run during a previous endurance evaluation study. Three common condition indicators (RMS, FM4, and NA4) were computed from the time-averaged vibration data and used with the ODM to evaluate their performance for gear fault detection. The NA4 parameter proved to be a very good condition indicator for the detection of gear tooth surface pitting failures. The FM4 and RMS parameters performed average to below average in detecting gear tooth surface pitting failures. The ODM sensor successfully detected a significant amount of debris from all the gear tooth pitting fatigue failures. Excluding outliers, the average cumulative mass at the end of a test was 40 mg.
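For reference, a sketch of the FM4 indicator: the normalized fourth moment (kurtosis) of the "difference signal", i.e. the time-synchronous average with the regular gear-mesh tones removed. The FFT-bin removal of mesh harmonics below is a simplification, and mesh_bins is an assumed input.

```python
import numpy as np

def fm4(tsa_signal, mesh_bins):
    """FM4 = N * sum(d^4) / (sum(d^2))^2 for the zero-mean difference signal."""
    spec = np.fft.rfft(tsa_signal)
    spec[list(mesh_bins)] = 0.0                  # crude mesh-harmonic removal
    d = np.fft.irfft(spec, n=len(tsa_signal))    # difference signal
    d = d - d.mean()
    return len(d) * np.sum(d ** 4) / np.sum(d ** 2) ** 2
```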
A preliminary design for flight testing the FINDS algorithm
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Godiwala, P. M.
1986-01-01
This report presents a preliminary design for flight testing the FINDS (Fault Inferring Nonlinear Detection System) algorithm on a target flight computer. The FINDS software was ported onto the target flight computer by reducing the code size by 65%. Several modifications were made to the computational algorithms, resulting in near real-time execution speed. Finally, a new failure detection strategy was developed, resulting in a significant improvement in detection time performance. In particular, low-level MLS, IMU, and IAS sensor failures are detected instantaneously with the new detection strategy, while accelerometer and rate gyro failures are detected within the minimum time allowed by the information generated in the sensor residuals based on the point-mass equations of motion. All of the results have been demonstrated using five minutes of sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment.
Reliable Broadcast under Cascading Failures in Interdependent Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Sisi; Lee, Sangkeun; Chinthavali, Supriya
Reliable broadcast is an essential tool to disseminate information among a set of nodes in the presence of failures. We present a novel study of reliable broadcast in interdependent networks, in which failures in one network may cascade to another network. In particular, we focus on the interdependency between the communication network and the power grid, where the power grid depends on signals from the communication network for control and the communication network depends on the grid for power. In this paper, we build a resilient solution to handle crash failures in the communication network that may cause cascading failures and may even partition the network. In order to guarantee that all correct nodes deliver the messages, we use soft links, which are inactive backup links to non-neighboring nodes that are activated only when failures occur. At the core of our work is a fully distributed algorithm for the nodes to predict and collect information about cascading failures so that soft links can be maintained to correct nodes prior to the failures. In the presence of failures, soft links are activated to guarantee message delivery, and new soft links are built accordingly for long-term robustness. Our evaluation results show that the algorithm achieves a low packet drop rate and handles cascading failures with little overhead.
Probability of loss of assured safety in systems with multiple time-dependent failure modes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon Craig; Pilch, Martin.; Sallaberry, Cedric Jean-Marie.
2012-09-01
Weak link (WL)/strong link (SL) systems are important parts of the overall operational design of high-consequence systems. In such designs, the SL system is very robust and is intended to permit operation of the entire system under, and only under, intended conditions. In contrast, the WL system is intended to fail in a predictable and irreversible manner under accident conditions and render the entire system inoperable before an accidental operation of the SL system. The likelihood that the WL system will fail to deactivate the entire system before the SL system fails (i.e., degrades into a configuration that could allow an accidental operation of the entire system) is referred to as probability of loss of assured safety (PLOAS). Representations for PLOAS for situations in which both link physical properties and link failure properties are time-dependent are derived and numerically evaluated for a variety of WL/SL configurations, including PLOAS defined by (i) failure of all SLs before failure of any WL, (ii) failure of any SL before failure of any WL, (iii) failure of all SLs before failure of all WLs, and (iv) failure of any SL before failure of all WLs. The effects of aleatory uncertainty and epistemic uncertainty in the definition and numerical evaluation of PLOAS are considered.
Kelly, Damian J; McCann, Gerald P; Blackman, Daniel; Curzen, Nicholas P; Dalby, Miles; Greenwood, John P; Fairbrother, Kathryn; Shipley, Lorraine; Kelion, Andrew; Heatherington, Simon; Khan, Jamal N; Nazir, Sheraz; Alahmar, Albert; Flather, Marcus; Swanton, Howard; Schofield, Peter; Gunning, Mark; Hall, Roger; Gershlick, Anthony H
2013-02-22
Primary percutaneous coronary intervention (PPCI) is the preferred strategy for acute ST-segment elevation myocardial infarction (STEMI), with evidence of improved clinical outcomes compared to fibrinolytic therapy. However, there is no consensus on how best to manage multivessel coronary disease (MVD) detected at the time of PPCI, with little robust data on the best management of angiographically significant stenoses detected in non-infarct-related coronary arteries (N-IRAs). CVLPRIT will determine the optimal management of N-IRA lesions detected during PPCI. CVLPRIT (Complete Versus culprit-Lesion only PRimary PCI Trial) is an open-label, prospective, randomised, multicentre trial. STEMI patients undergo verbal "assent" on presentation. Patients are included when angiographic MVD has been detected and are randomised to culprit (IRA)-only PCI (n=150) or in-patient complete multivessel PCI (n=150). Cumulative major adverse cardiac events (MACE: all-cause mortality, recurrent MI, heart failure, and need for revascularisation by PCI or CABG) will be recorded at 12 months. Secondary endpoints include the safety endpoints of confirmed ischaemic stroke, intracranial haemorrhage, major non-intracranial bleeding, and repair of vascular complications. A cardiac magnetic resonance (CMR) substudy will provide mechanistic data on infarct size, myocardial salvage index, and microvascular obstruction. A cost-efficacy analysis will be undertaken. The management of multivessel coronary artery disease in the setting of PPCI for STEMI, including the timing of non-culprit-artery revascularisation if undertaken, remains unresolved. CVLPRIT will yield mechanistic insights into the myocardial consequences of N-IRA intervention undertaken during the peri-infarct period.
Level of Automation and Failure Frequency Effects on Simulated Lunar Lander Performance
NASA Technical Reports Server (NTRS)
Marquez, Jessica J.; Ramirez, Margarita
2014-01-01
A human-in-the-loop experiment was conducted at the NASA Ames Research Center Vertical Motion Simulator, where instrument-rated pilots completed a simulated terminal descent phase of a lunar landing. Ten pilots participated in a 2 x 2 mixed design experiment, with level of automation as the within-subjects factor and failure frequency as the between-subjects factor. The two evaluated levels of automation were high (fully automated landing) and low (manually controlled landing). During test trials, participants were exposed to either a high number of failures (75% failure frequency) or a low number of failures (25% failure frequency). To investigate the pilots' sensitivity to changes in level of automation and failure frequency, the dependent measure selected for this experiment was accuracy of failure diagnosis, from which d-prime and decision criterion were derived. For each of the dependent measures, no significant difference was found for level of automation, and no significant interaction was detected between level of automation and failure frequency. A significant effect was identified for failure frequency, suggesting that failure frequency affects pilots' sensitivity to failure detection and diagnosis. Participants were more likely to correctly identify and diagnose failures if they experienced the higher level of failures, regardless of level of automation.
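A short sketch of how d-prime and the decision criterion are commonly derived from diagnosis accuracy counts; the abstract does not give its exact formulas, and the small-sample correction used here is one common convention.

```python
from scipy.stats import norm

def d_prime_and_criterion(hits, misses, false_alarms, correct_rejections):
    # Correction keeps hit/false-alarm rates strictly inside (0, 1).
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hr, z_far = norm.ppf(hr), norm.ppf(far)
    return z_hr - z_far, -0.5 * (z_hr + z_far)   # (d', criterion c)
```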
Robust reliable sampled-data control for switched systems with application to flight control
NASA Astrophysics Data System (ADS)
Sakthivel, R.; Joby, Maya; Shi, P.; Mathiyalagan, K.
2016-11-01
This paper addresses the robust reliable stabilisation problem for a class of uncertain switched systems with random delays and norm-bounded uncertainties. The main aim is to obtain a reliable robust sampled-data control design, involving random time delay, with an appropriate control gain matrix that achieves robust exponential stabilisation of the uncertain switched system against actuator failures. In particular, the involved delays are assumed to be randomly time-varying, obeying certain mutually uncorrelated Bernoulli-distributed white noise sequences. By constructing an appropriate Lyapunov-Krasovskii functional (LKF) and employing an average dwell-time approach, a new set of criteria is derived for ensuring the robust exponential stability of the closed-loop switched system. More precisely, the Schur complement and Jensen's integral inequality are used in the derivation of the stabilisation criteria. By considering the relationship among the random time-varying delay and its lower and upper bounds, a new set of sufficient conditions is established for the existence of a reliable robust sampled-data control in terms of the solution to linear matrix inequalities (LMIs). Finally, an illustrative example based on the F-18 aircraft model is provided to show the effectiveness of the proposed design procedures.
Robust Fault Detection for Switched Fuzzy Systems With Unknown Input.
Han, Jian; Zhang, Huaguang; Wang, Yingchun; Sun, Xun
2017-10-03
This paper investigates the fault detection problem for a class of switched nonlinear systems in the T-S fuzzy framework, with an unknown input considered in the systems. A novel fault detection unknown input observer design method is proposed. Based on the proposed observer, the unknown input can be removed from the fault detection residual. The weighted H∞ performance level is considered to ensure robustness, and the weighted H₋ performance level is introduced to increase the sensitivity of the proposed detection method. To verify the proposed scheme, a numerical simulation example and an electromechanical system simulation example are provided at the end of the paper.
NASA Astrophysics Data System (ADS)
El Houda Thabet, Rihab; Combastel, Christophe; Raïssi, Tarek; Zolghadri, Ali
2015-09-01
The paper develops a set membership detection methodology which is applied to the detection of abnormal positions of aircraft control surfaces. Robust and early detection of such abnormal positions is an important issue for early system reconfiguration and overall optimisation of aircraft design. In order to improve fault sensitivity while ensuring a high level of robustness, the method combines a data-driven characterisation of noise and a model-driven approach based on interval prediction. The efficiency of the proposed methodology is illustrated through simulation results obtained based on data recorded in several flight scenarios of a highly representative aircraft benchmark.
Ramtinfar, Sara; Chabok, Shahrokh Yousefzadeh; Chari, Aliakbar Jafari; Reihanian, Zoheir; Leili, Ehsan Kazemnezhad; Alizadeh, Arsalan
2016-10-01
The aim of this study is to compare the discriminative ability of multiple organ dysfunction score (MODS) and sequential organ failure assessment (SOFA) components in predicting Intensive Care Unit (ICU) mortality and neurologic outcome. A descriptive-analytic study was conducted at a level I trauma center. Data were collected from patients with severe traumatic brain injury admitted to the neurosurgical ICU. Basic demographic data and SOFA and MOD scores were recorded daily for all patients. Odds ratios (ORs) were calculated to determine the relationship of each component score to mortality, and the area under the receiver operating characteristic (AUROC) curve was used to compare the discriminative ability of the two tools with respect to ICU mortality. The most common organ failure observed was respiratory (detected by SOFA in 26% and by MODS in 13% of patients), and the second most common was cardiovascular (SOFA 18%, MODS 13%). No hepatic or renal failure occurred, and coagulation failure was reported as 2.5% by both SOFA and MODS. Cardiovascular failure defined by both tools correlated with ICU mortality, more significantly for SOFA (OR = 6.9, CI = 3.6-13.3, P < 0.05 for SOFA; OR = 5, CI = 3-8.3, P < 0.05 for MODS; AUROC = 0.82 for SOFA; AUROC = 0.73 for MODS). The relationship of cardiovascular failure to dichotomized neurologic outcome was not statistically significant, and ICU mortality was not associated with respiratory or coagulation failure. Cardiovascular failure defined by either tool was significantly related to ICU mortality; compared to MODS, SOFA-defined cardiovascular failure was a stronger predictor of death.
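The two summary statistics in the abstract can be sketched generically: an odds ratio from a 2x2 exposure-outcome table and an AUROC from scores against outcomes; all inputs below are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def odds_ratio(exposed_dead, exposed_alive, unexposed_dead, unexposed_alive):
    return (exposed_dead * unexposed_alive) / (exposed_alive * unexposed_dead)

rng = np.random.default_rng(3)
score = rng.normal(size=200)        # stand-in for a daily organ-failure score
died = (score + rng.normal(size=200) > 0.5).astype(int)  # synthetic outcome
print("OR:", odds_ratio(40, 60, 20, 80))
print("AUROC:", roc_auc_score(died, score))
```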
A Robust Zero-Watermarking Algorithm for Audio
NASA Astrophysics Data System (ADS)
Chen, Ning; Zhu, Jie
2007-12-01
In traditional watermarking algorithms, the insertion of a watermark into the host signal inevitably introduces some perceptible quality degradation. Another problem is the inherent conflict between imperceptibility and robustness. The zero-watermarking technique can solve these problems successfully: instead of embedding a watermark, it extracts essential characteristics from the host signal and uses them for watermark detection. However, most of the available zero-watermarking schemes are designed for still images, and their robustness is not satisfactory. In this paper, an efficient and robust zero-watermarking technique for audio signals is presented. The multiresolution characteristic of the discrete wavelet transform (DWT), the energy compaction characteristic of the discrete cosine transform (DCT), and the Gaussian noise suppression property of higher-order cumulants are combined to extract essential features from the host audio signal, which are then used for watermark recovery. Simulation results demonstrate the effectiveness of our scheme in terms of inaudibility, detection reliability, and robustness.
On the robustness of a Bayes estimate. [in reliability theory
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1974-01-01
This paper examines the robustness of a Bayes estimator with respect to the assigned prior distribution. A Bayesian analysis for the stochastic scale parameter of a Weibull failure model is summarized, in which the natural conjugate is assigned as the prior distribution of the random parameter. The sensitivity analysis is carried out by the Monte Carlo method: although an inverted gamma is the assigned prior, realizations are generated using distribution functions of varying shape. For several distributional forms, and even for some fixed values of the parameter, simulated mean squared errors of the Bayes and minimum variance unbiased estimators are determined and compared. Results indicate that the Bayes estimator remains superior in squared error and appears to be largely robust to the form of the assigned prior distribution.
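In the same spirit, a Monte Carlo sketch comparing the squared error of a conjugate Bayes estimate and the minimum variance unbiased estimate of an exponential scale when the "true" parameter comes from a mismatched prior. The exponential model (a Weibull with known shape, after transformation) and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, n, trials = 3.0, 4.0, 10, 20_000   # assumed inv-gamma prior, sample size

se_bayes = se_mvu = 0.0
for _ in range(trials):
    theta = rng.lognormal(mean=0.5, sigma=0.4)   # mismatched "true" prior
    x = rng.exponential(theta, size=n)
    bayes = (b + x.sum()) / (a + n - 1)          # posterior mean (inv-gamma)
    mvu = x.mean()                               # minimum variance unbiased
    se_bayes += (bayes - theta) ** 2
    se_mvu += (mvu - theta) ** 2

print("MSE Bayes:", se_bayes / trials, " MSE MVU:", se_mvu / trials)
```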
The Identification of Software Failure Regions
1990-06-01
Failure regions are defined and a method of failure region analysis is described in detail. The thesis describes how this analysis may be used to detect non-obviously redundant test cases. A preliminary examination of the manual analysis method is performed with a set of programs... A failure is the termination of the ability of a functional unit to perform its required function (Glossary, 1983). The presence of faults in program code...
Fault Detection and Isolation for Hydraulic Control
NASA Technical Reports Server (NTRS)
1987-01-01
Pressure sensors and isolation valves act to shut down a defective servochannel. The redundant hydraulic system indirectly senses a failure in any of its electrical control channels and mechanically isolates the hydraulic channel controlled by the faulty electrical channel so that it cannot participate in operating the system. With this failure-detection and isolation technique, the system can sustain two failed channels and still function at full performance levels. The scheme is useful on aircraft and other systems with hydraulic servovalves where failure cannot be tolerated.
Davidovitch, Lior; Stoklosa, Richard; Majer, Jonathan; Nietrzeba, Alex; Whittle, Peter; Mengersen, Kerrie; Ben-Haim, Yakov
2009-06-01
Surveillance for invasive non-indigenous species (NIS) is an integral part of a quarantine system. Estimating the efficiency of a surveillance strategy relies on many uncertain parameters estimated by experts, such as the efficiency of its components in the face of the specific NIS, the ability of the NIS to inhabit different environments, and so on. Because of the importance of detecting an invasive NIS within a critical period of time, it is crucial that these uncertainties be accounted for in the design of the surveillance system. We formulate a detection model that takes into account, in addition to structured sampling for incursive NIS, incidental detection by untrained workers. We use info-gap theory for satisficing (not maximizing) the probability of detection, while at the same time maximizing the robustness to uncertainty. We demonstrate the trade-off between robustness to uncertainty and an increase in the required probability of detection. An empirical example based on the detection of Pheidole megacephala on Barrow Island demonstrates the use of info-gap analysis to select a surveillance strategy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerlach, Joerg; Kessler, Lutz; Paul, Udo
2007-05-17
The concept of forming limit curves (FLC) is widely used in industrial practice. The required data should be delivered by the material suppliers for typical material properties (measured on coils with properties within +/- the standard deviation of the mean production values). It should be noted, however, that the concept cannot be used to validate forming robustness, since forming limit curves cannot be provided for the full scatter of the mechanical properties. A forecast of the expected limit strains that avoids expensive and time-consuming experiments is therefore necessary. In this paper, the quality of a regression analysis for determining forming limit curves based on tensile test results is presented and discussed. Owing to the specific definition of limit strains with FLCs following linear strain paths, the significance of this failure definition is limited. To account for nonlinear strain path effects, different methods are given in the literature. One simple method is the concept of limit stresses; it should be noted that the determined value of the critical stress depends on the extrapolation of the tensile test curve. When the yield curve extrapolation is very similar to an exponential function, the definition of the critical stress value is very complicated due to the low slope of the hardening function at large strains. A new method to determine general failure behavior in sheet metal forming is the combined use and interpretation of three criteria: the onset of material instability (comparable with the FLC concept), the value of critical shear fracture, and the value of ductile fracture. This method appears particularly successful for newly developed high-strength steel grades in connection with more complex strain paths for some specific material elements. Nevertheless, the effort to identify the different material failure parameters or functions will increase, and the user has to learn to interpret the numerical results.
Dielectric Spectroscopic Detection of Early Failures in 3-D Integrated Circuits.
Obeng, Yaw; Okoro, C A; Ahn, Jung-Joon; You, Lin; Kopanski, Joseph J
The commercial introduction of three-dimensional integrated circuits (3D-ICs) has been hindered by reliability challenges, such as stress-related failures, resistivity changes, and unexplained early failures. In this paper, we discuss a new RF-based metrology, based on dielectric spectroscopy, for detecting and characterizing electrically active defects in fully integrated 3D devices. These defects are traceable to the chemistry of the isolation dielectrics used in through-silicon via (TSV) construction. We show that these defects may be responsible for some of the unexplained early reliability failures observed in TSV-enabled 3D devices.
Rodríguez-Lázaro, David; Pla, Maria; Scortti, Mariela; Monzó, Héctor J.; Vázquez-Boland, José A.
2005-01-01
We describe a novel quantitative real-time (Q)-PCR assay for Listeria monocytogenes based on the coamplification of a target hly gene fragment and an internal amplification control (IAC). The IAC is a chimeric double-stranded DNA containing a fragment of the rapeseed BnACCg8 gene flanked by the hly-specific target sequences. This IAC is detected using a second TaqMan probe labeled with a different fluorophore, enabling the simultaneous monitoring of the hly and IAC signals. The hly-IAC assay had a specificity and sensitivity of 100%, as assessed using 49 L. monocytogenes isolates of different serotypes and 96 strains of nontarget bacteria, including 51 Listeria isolates. The detection and quantification limits were 8 and 30 genome equivalents, and the coefficients for PCR linearity (R2) and efficiency (E) were 0.997 and 0.80, respectively. We tested the performance of the hly-IAC Q-PCR assay using various broth media and food matrices. Fraser and half-Fraser media, raw pork, and raw or cold-smoked salmon were strongly PCR-inhibitory. This Q-PCR assay for L. monocytogenes, the first incorporating an IAC to be described for quantitative detection of a food-borne pathogen, is a simple and robust tool facilitating the identification of false negatives or underestimations of contamination loads due to PCR failure. PMID:16332910
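For orientation, amplification efficiency is conventionally derived from a standard curve of Cq against log10 input copies, as sketched below with invented values; the reported E = 0.80 and R2 = 0.997 would come from such a fit, though the paper's exact convention is not stated here.

```python
import numpy as np

copies = np.array([3e1, 3e2, 3e3, 3e4, 3e5])    # assumed standard dilutions
cq = np.array([33.1, 29.5, 25.9, 22.4, 18.8])   # assumed quantification cycles

slope, intercept = np.polyfit(np.log10(copies), cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0          # 1.0 corresponds to 100%
r2 = np.corrcoef(np.log10(copies), cq)[0, 1] ** 2
print(f"slope={slope:.2f}  E={efficiency:.2f}  R^2={r2:.3f}")
```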
Redundancy management of multiple KT-70 inertial measurement units applicable to the space shuttle
NASA Technical Reports Server (NTRS)
Cook, L. J.
1975-01-01
Results of an investigation of velocity failure detection and isolation (FDI) for three-IMU and two-IMU (inertial measurement unit) configurations are presented. The FDI algorithm performed very well: most types of velocity errors were detected and isolated. The algorithm also included attitude FDI, but this was not evaluated because of time constraints and the low resolution of the gimbal angle synchro outputs. The shuttle KT-70 IMUs will have dual-speed resolvers and high-resolution gimbal angle readouts. These tests demonstrated that a single computer utilizing a serial data bus can successfully control a redundant three-IMU system and perform FDI.
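A toy sketch of the three-IMU velocity FDI idea: pairwise parity differences flag a disagreeing unit, which is then excluded by majority vote. The threshold and the voting rule are illustrative assumptions; the tested algorithm is not specified at this level of detail.

```python
import numpy as np

def fdi_three_imu(v1, v2, v3, thresh=1.0):
    """Return (blended velocity, index of isolated IMU or None)."""
    d12 = np.linalg.norm(v1 - v2)
    d13 = np.linalg.norm(v1 - v3)
    d23 = np.linalg.norm(v2 - v3)
    if max(d12, d13, d23) < thresh:
        return (v1 + v2 + v3) / 3.0, None        # all healthy: average
    # The unit appearing in the two largest disagreements is the suspect.
    scores = {0: d12 + d13, 1: d12 + d23, 2: d13 + d23}
    bad = max(scores, key=scores.get)
    healthy = [v for i, v in enumerate((v1, v2, v3)) if i != bad]
    return (healthy[0] + healthy[1]) / 2.0, bad
```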
Detecting Structural Failures Via Acoustic Impulse Responses
NASA Technical Reports Server (NTRS)
Bayard, David S.; Joshi, Sanjay S.
1995-01-01
Advanced method of acoustic pulse reflectivity testing developed for use in determining sizes and locations of failures within structures. Used to detect breaks in electrical transmission lines, detect faults in optical fibers, and determine mechanical properties of materials. In method, structure vibrationally excited with acoustic pulse (a "ping") at one location and acoustic response measured at same or different location. Measured acoustic response digitized, then processed by finite-impulse-response (FIR) filtering algorithm unique to method and based on acoustic-wave-propagation and -reflection properties of structure. Offers several advantages: does not require training, does not require prior knowledge of mathematical model of acoustic response of structure, enables detection and localization of multiple failures, and yields data on extent of damage at each location.
Fung, Erik; Hui, Elsie; Yang, Xiaobo; Lui, Leong T; Cheng, King F; Li, Qi; Fan, Yiting; Sahota, Daljit S; Ma, Bosco H M; Lee, Jenny S W; Lee, Alex P W; Woo, Jean
2018-01-01
Heart failure and frailty are clinical syndromes that present with overlapping phenotypic characteristics. Importantly, their co-presence is associated with increased mortality and morbidity. While mechanical and electrical device therapies for heart failure are vital for select patients with advanced stage disease, the majority of patients and especially those with undiagnosed heart failure would benefit from early disease detection and prompt initiation of guideline-directed medical therapies. In this article, we review the problematic interactions between heart failure and frailty, introduce a focused cardiac screening program for community-living elderly initiated by a mobile communication device app leading to the Undiagnosed heart Failure in frail Older individuals (UFO) study, and discuss how the knowledge of pre-frailty and frailty status could be exploited for the detection of previously undiagnosed heart failure or advanced cardiac disease. The widespread use of mobile devices coupled with increasing availability of novel, effective medical and minimally invasive therapies have incentivized new approaches to heart failure case finding and disease management.
An improved, robust, axial line singularity method for bodies of revolution
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.
1989-01-01
The failures encountered in attempts to increase the range of applicability of the axial line singularity method for representing incompressible, inviscid flow about an inclined, slender body of revolution are noted to be common to all efforts to solve Fredholm equations of the first kind. It is shown that a previously developed smoothing technique yields a robust method for numerical solution of the governing equations; this technique is easily retrofitted to existing codes and allows the number of singularities to be increased until the most accurate line singularity solution is obtained.
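The smoothing idea can be sketched for a discretized first-kind Fredholm equation A q = b: plain least squares amplifies noise, so one penalizes the roughness of the axial source strengths (a Tikhonov-style regularization). This is a generic stand-in under stated assumptions, not the paper's specific technique.

```python
import numpy as np

def smoothed_solve(A, b, lam=1e-3):
    """Least squares with a second-difference roughness penalty on q."""
    n = A.shape[1]
    D = np.diff(np.eye(n), n=2, axis=0)   # discrete second-difference operator
    lhs = A.T @ A + lam * D.T @ D         # normal equations with smoothing
    return np.linalg.solve(lhs, A.T @ b)
```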
Robust Kriged Kalman Filtering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baingana, Brian; Dall'Anese, Emiliano; Mateos, Gonzalo
2015-11-11
Although the kriged Kalman filter (KKF) has well-documented merits for prediction of spatial-temporal processes, its performance degrades in the presence of outliers due to anomalous events or measurement equipment failures. This paper proposes a robust KKF model that explicitly accounts for the presence of measurement outliers. Exploiting outlier sparsity, a novel l1-regularized estimator is put forth that jointly predicts the spatial-temporal process at unmonitored locations while identifying measurement outliers. Numerical tests are conducted on a synthetic Internet protocol (IP) network and real transformer load data. Test results corroborate the effectiveness of the novel estimator in joint spatial prediction and outlier identification.
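The outlier-sparsity idea can be sketched as follows: model the measurements as y = s + o + noise with a sparse outlier vector o, whose l1-regularized estimate (for a fixed signal estimate s_hat) reduces to soft-thresholding of the residuals. The paper couples this with kriged Kalman filtering; lam is an assumed tuning parameter.

```python
import numpy as np

def soft_threshold(r, lam):
    """Closed-form solution of min_o 0.5*(r - o)^2 + lam*|o|, elementwise."""
    return np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

def flag_outliers(y, s_hat, lam):
    o_hat = soft_threshold(y - s_hat, lam)   # nonzero entries mark outliers
    return o_hat, np.nonzero(o_hat)[0]
```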
Control of large flexible space structures
NASA Technical Reports Server (NTRS)
Vandervelde, W. E.
1986-01-01
Progress is described in the robust design of generalized parity relations, the design of failure-sensitive observers using the geometric system theory of Wonham, computational techniques for evaluating the performance of control systems with fault tolerance and redundancy management features, and the design and evaluation of control systems for structures having nonlinear joints.
Flight test results of the strapdown ring laser gyro tetrad inertial navigation system
NASA Technical Reports Server (NTRS)
Carestia, R. A.; Hruby, R. J.; Bjorkman, W. S.
1983-01-01
A helicopter flight test program undertaken to evaluate the performance of Tetrad (a strapdown, laser gyro, inertial navigation system) is described. The results of 34 flights show a mean final navigational velocity error of 5.06 knots, with a standard deviation of 3.84 knots; a corresponding mean final position error of 2.66 n. mi., with a standard deviation of 1.48 n. mi.; and a modeled mean position error growth rate for the 34 tests of 1.96 knots, with a standard deviation of 1.09 knots. No laser gyro or accelerometer failures were detected during the flight tests. Off-line parity residual studies used simulated failures with the prerecorded flight test and laboratory test data. The airborne Tetrad system's failure-detection logic, exercised during the tests, successfully demonstrated the detection of simulated "hard" failures and the system's ability to continue navigating by removing the simulated faulted sensor from the computations. Tetrad's four ring laser gyros provided reliable and accurate angular rate sensing during the four years of the test program, and no sensor failures were detected during the evaluation of free inertial navigation performance.
Sauer, Juergen; Chavaillaz, Alain; Wastell, David
2016-06-01
This work examined the effects of operators' exposure to various types of automation failures in training. Forty-five participants were trained for 3.5 h on a simulated process control environment. During training, participants either experienced a fully reliable, automatic fault repair facility (i.e. faults detected and correctly diagnosed), a misdiagnosis-prone one (i.e. faults detected but not correctly diagnosed) or a miss-prone one (i.e. faults not detected). One week after training, participants were tested for 3 h, experiencing two types of automation failures (misdiagnosis, miss). The results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system. Operator errors resulting from automation bias were much higher when automation misdiagnosed a fault than when it missed one. Differences in trust levels that were instilled by the different training experiences disappeared during the testing session. Practitioner Summary: The experience of automation failures during training has some consequences. A greater potential for operator errors may be expected when an automatic system failed to diagnose a fault than when it failed to detect one.
Sensor failure detection for jet engines using analytical redundancy
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1984-01-01
Analytical redundant sensor failure detection, isolation and accommodation techniques for gas turbine engines are surveyed. Both the theoretical technology base and demonstrated concepts are discussed. Also included is a discussion of current technology needs and ongoing Government sponsored programs to meet those needs.
Robust Mokken Scale Analysis by Means of the Forward Search Algorithm for Outlier Detection
ERIC Educational Resources Information Center
Zijlstra, Wobbe P.; van der Ark, L. Andries; Sijtsma, Klaas
2011-01-01
Exploratory Mokken scale analysis (MSA) is a popular method for identifying scales from larger sets of items. As with any statistical method, in MSA the presence of outliers in the data may result in biased results and wrong conclusions. The forward search algorithm is a robust diagnostic method for outlier detection, which we adapt here to…
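A univariate sketch of a forward search: start from a small, robustly chosen subset, grow it one observation at a time by nearness to the current fit, and monitor the entry distance; sharp jumps in the trace suggest outliers. MSA operates on item-score matrices, so this scalar version is only a schematic under simplifying assumptions.

```python
import numpy as np

def forward_search(x, m0=5):
    x = np.asarray(x, dtype=float)
    idx = np.argsort(np.abs(x - np.median(x)))[:m0]   # robust starting subset
    trace = []
    for m in range(m0, len(x)):
        mu, sd = x[idx].mean(), x[idx].std(ddof=1)
        d = np.abs(x - mu) / sd
        outside = np.setdiff1d(np.arange(len(x)), idx)
        trace.append(d[outside].min())   # distance of the nearest non-member
        idx = np.argsort(d)[: m + 1]     # grow the fitting subset by one
    return np.array(trace)               # inspect the trace for sharp jumps
```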
Robustness of remote stress detection from visible spectrum recordings
NASA Astrophysics Data System (ADS)
Kaur, Balvinder; Moses, Sophia; Luthra, Megha; Ikonomidou, Vasiliki N.
2016-05-01
In our recent work, we have shown that it is possible to extract high-fidelity timing information of the cardiac pulse wave from visible spectrum videos, which can then be used as a basis for stress detection. In that approach, we used both heart rate variability (HRV) metrics and the differential pulse transit time (dPTT) as indicators of the presence of stress. One of the main concerns in this analysis is its robustness in the presence of noise, as the remotely acquired signal, which we call the blood wave (BW) signal, is degraded with respect to signals acquired using contact sensors. In this work, we discuss the robustness of our metrics in the presence of multiplicative noise. Specifically, we study the effects of subtle motion due to respiration and of changes in illumination levels due to light flickering on the BW signal, the HRV-driven features, and the dPTT. Our sensitivity study involved both Monte Carlo simulations and experimental data from human facial videos, and indicates that our metrics are robust even under moderate amounts of noise. The generated results will help the remote stress detection community develop requirements for visible-spectrum-based stress detection systems.
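Two standard HRV metrics such an analysis might use, computed from pulse peak times, are sketched below; the abstract does not specify which HRV metrics were used, so SDNN and RMSSD are assumptions.

```python
import numpy as np

def hrv_metrics(peak_times):
    """SDNN and RMSSD from a sequence of pulse peak times (seconds)."""
    ibi = np.diff(np.asarray(peak_times, dtype=float))  # inter-beat intervals
    sdnn = np.std(ibi, ddof=1)                          # overall variability
    rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2))         # short-term variability
    return {"SDNN": sdnn, "RMSSD": rmssd}
```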
An energy-efficient failure detector for vehicular cloud computing.
Liu, Jiaxi; Wu, Zhibo; Dong, Jian; Wu, Jin; Wen, Dongxin
2018-01-01
Failure detectors are one of the fundamental components for maintaining the high availability of vehicular cloud computing. In vehicular cloud computing, many roadside units (RSUs) are deployed along the road to improve connectivity. Many of them are equipped with solar batteries due to the unavailability or excessive expense of wired electrical power, so it is important to reduce the battery consumption of RSUs. However, existing failure detection algorithms are not designed to save the battery consumption of RSUs. To solve this problem, a new energy-efficient failure detector, 2E-FD, is proposed specifically for vehicular cloud computing. 2E-FD not only provides acceptable failure detection service, but also saves the battery consumption of RSUs. Comparative experiments show that our failure detector has better performance in terms of speed, accuracy, and battery consumption.
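For context, a generic heartbeat-style failure detector loop is sketched below: estimate the next expected arrival from a window of past heartbeats and raise a suspicion once that deadline plus a safety margin passes. This is not the 2E-FD algorithm itself, whose energy-saving policy the abstract does not detail; window and margin are assumed parameters.

```python
from collections import deque

class HeartbeatDetector:
    def __init__(self, window=100, margin=0.2):
        self.arrivals = deque(maxlen=window)   # recent heartbeat timestamps
        self.margin = margin                   # safety slack in seconds

    def heartbeat(self, t):
        self.arrivals.append(t)

    def suspect(self, now):
        if len(self.arrivals) < 2:
            return False
        a = list(self.arrivals)
        mean_gap = (a[-1] - a[0]) / (len(a) - 1)   # average inter-arrival time
        return now > a[-1] + mean_gap + self.margin
```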
Metabolomic analysis of urine samples by UHPLC-QTOF-MS: Impact of normalization strategies.
Gagnebin, Yoric; Tonoli, David; Lescuyer, Pierre; Ponte, Belen; de Seigneux, Sophie; Martin, Pierre-Yves; Schappler, Julie; Boccard, Julien; Rudaz, Serge
2017-02-22
Among the various biological matrices used in metabolomics, urine is a biofluid of major interest because of its non-invasive collection and its availability in large quantities. However, significant sources of variability in urine metabolomics based on UHPLC-MS are related to the analytical drift and variation of the sample concentration, thus requiring normalization. A sequential normalization strategy was developed to remove these detrimental effects, including: (i) pre-acquisition sample normalization by individual dilution factors to narrow the concentration range and to standardize the analytical conditions, (ii) post-acquisition data normalization by quality control-based robust LOESS signal correction (QC-RLSC) to correct for potential analytical drift, and (iii) post-acquisition data normalization by MS total useful signal (MSTUS) or probabilistic quotient normalization (PQN) to prevent the impact of concentration variability. This generic strategy was performed with urine samples from healthy individuals and was further implemented in the context of a clinical study to detect alterations in urine metabolomic profiles due to kidney failure. In the case of kidney failure, the relation between creatinine/osmolality and the sample concentration is modified, and relying only on these measurements for normalization could be highly detrimental. The sequential normalization strategy was demonstrated to significantly improve patient stratification by decreasing the unwanted variability and thus enhancing data quality. Copyright © 2016 Elsevier B.V. All rights reserved.
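A minimal sketch of the PQN step mentioned in (iii): divide each sample's features by the median quotient against a reference profile. Using the median spectrum as reference, and assuming a prior total-signal normalization, follows common practice rather than the paper's exact protocol.

```python
import numpy as np

def pqn_normalize(X):
    """X: samples x features intensity matrix, already integral-normalized."""
    reference = np.median(X, axis=0)           # reference profile
    quotients = X / reference                  # assumes no zero-valued features
    dilution = np.median(quotients, axis=1)    # per-sample dilution estimate
    return X / dilution[:, None]
```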
Integrated optical monitoring of MEMS for closed-loop control
NASA Astrophysics Data System (ADS)
Dawson, Jeremy M.; Wang, Limin; McCormick, W. B.; Rittenhouse, S. A.; Famouri, Parviz F.; Hornak, Lawrence A.
2003-01-01
Robust control and failure assessment of MEMS employed in physically demanding, mission critical applications will allow for higher degrees of quality assurance in MEMS operation. Device fault detection and closed-loop control require detailed knowledge of the operational states of MEMS over the lifetime of the device, obtained by a means decoupled from the system. Preliminary through-wafer optical monitoring research efforts have shown that through-wafer optical probing is suitable for characterizing and monitoring the behavior of MEMS, and can be implemented in an integrated optical monitoring package for continuous in-situ device monitoring. This presentation will discuss research undertaken to establish integrated optical device metrology for closed-loop control of a MUMPS fabricated lateral harmonic oscillator. Successful linear closed-loop control results using a through-wafer optical microprobe position feedback signal will be presented. A theoretical optical output field intensity study of grating structures, fabricated on the shuttle of the resonator, was performed to improve the position resolution of the optical microprobe position signal. Through-wafer microprobe signals providing a positional resolution of 2 μm using grating structures will be shown, along with initial binary Fresnel diffractive optical microelement design layout, process development, and testing results. Progress in the design, fabrication, and test of integrated optical elements for multiple microprobe signal delivery and recovery will be discussed, as well as simulation of device system model parameter changes for failure assessment.
Failure detection and identification
NASA Technical Reports Server (NTRS)
Massoumnia, Mohammad-Ali; Verghese, George C.; Willsky, Alan S.
1989-01-01
Using the geometric concept of an unobservability subspace, a solution is given to the problem of detecting and identifying control system component failures in linear, time-invariant systems. Conditions are developed for the existence of a causal, linear, time-invariant processor that can detect and uniquely identify a component failure, first for the case where components can fail simultaneously, and then for the case where they fail only one at a time. Explicit design algorithms are provided when these conditions are satisfied. In addition to time-domain solvability conditions, frequency-domain interpretations of the results are given, and connections are drawn with results already available in the literature.
A survey of design methods for failure detection in dynamic systems
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1975-01-01
A number of methods for detecting abrupt changes (such as failures) in stochastic dynamical systems are surveyed. The class of linear systems is concentrated on but the basic concepts, if not the detailed analyses, carry over to other classes of systems. The methods surveyed range from the design of specific failure-sensitive filters, to the use of statistical tests on filter innovations, to the development of jump process formulations. Tradeoffs in complexity versus performance are discussed.
Vision-based vehicle detection and tracking algorithm design
NASA Astrophysics Data System (ADS)
Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi
2009-12-01
The vision-based vehicle detection in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. The feasibility of vehicle detection in a passenger car requires accurate and robust sensing performance. A multivehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filter, feature detector, template matching, and epipolar constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes the tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained based on the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.
Autonomous control system reconfiguration for spacecraft with non-redundant actuators
NASA Astrophysics Data System (ADS)
Grossman, Walter
1995-05-01
The Small Satellite Technology Initiative (SSTI) 'CLARK' spacecraft is required to be single-failure tolerant, i.e., no failure of any single component or subsystem shall result in complete mission loss. Fault tolerance is usually achieved by implementing redundant subsystems. Fault tolerant systems are therefore heavier and cost more to build and launch than non-redundant, non-fault-tolerant spacecraft. The SSTI CLARK satellite Attitude Determination and Control System (ADACS) achieves single-fault tolerance without redundancy. The attitude determination system uses a Kalman filter which is inherently robust to the loss of any single attitude sensor. The attitude control system uses three orthogonal reaction wheels for attitude control and three magnetic dipoles for momentum control. The nominal six-actuator control system functions by projecting the attitude correction torque onto the reaction wheels while a slower momentum management outer loop removes the excess momentum in the direction normal to the local B field. The actuators are not redundant, so the nominal control law cannot be implemented in the event of the loss of a single actuator (dipole or reaction wheel). The spacecraft dynamical state (attitude, angular rate, and momentum) is controllable from any five-element subset of the six actuators. With the loss of an actuator the instantaneous control authority may not span \(\mathbb{R}^3\), but the controllability Gramian \(\int_0^t \Phi(t,\tau)\,B(\tau)B'(\tau)\,\Phi'(t,\tau)\,d\tau\) retains full rank. Upon detection of an actuator failure the control torque is decomposed onto the remaining active axes. The attitude control torque is effected and the over-orbit momentum is controlled. The resulting control system performance approaches that of the nominal system.
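The controllability argument above can be checked numerically. The sketch below (Python, with a random stable stand-in for the spacecraft dynamics, not the CLARK model) computes the infinite-horizon controllability Gramian, which for a time-invariant pair (A, B) solves A W + W A' + B B' = 0, and verifies that it keeps full rank after deleting any one of the six actuator columns.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n = 6                                    # state dimension (stand-in)
A = rng.standard_normal((n, n))
A -= (np.linalg.eigvals(A).real.max() + 1.0) * np.eye(n)   # shift to stable
B = rng.standard_normal((n, 6))          # six actuators: 3 wheels + 3 dipoles

def gramian_rank(A, B):
    # Solve A W + W A' = -B B' for the controllability Gramian W.
    W = solve_continuous_lyapunov(A, -B @ B.T)
    return np.linalg.matrix_rank(W, tol=1e-9)

print("all six actuators:", gramian_rank(A, B))
for k in range(B.shape[1]):              # drop each actuator in turn
    print(f"without actuator {k}:", gramian_rank(A, np.delete(B, k, axis=1)))
```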
Onboard Sensor Data Qualification in Human-Rated Launch Vehicles
NASA Technical Reports Server (NTRS)
Wong, Edmond; Melcher, Kevin J.; Maul, William A.; Chicatelli, Amy K.; Sowers, Thomas S.; Fulton, Christopher; Bickford, Randall
2012-01-01
The avionics system software for human-rated launch vehicles requires an implementation approach that is robust to failures, especially the failure of sensors used to monitor vehicle conditions that might result in an abort determination. Sensor measurements provide the basis for operational decisions on human-rated launch vehicles. This data is often used to assess the health of system or subsystem components, to identify failures, and to take corrective action. An incorrect conclusion and/or response may result if the sensor itself provides faulty data, or if the data provided by the sensor has been corrupted. Operational decisions based on faulty sensor data have the potential to be catastrophic, resulting in loss of mission or loss of crew. To prevent these latter situations from occurring, a Modular Architecture and Generalized Methodology for Sensor Data Qualification in Human-rated Launch Vehicles has been developed. Sensor Data Qualification (SDQ) is a set of algorithms that can be implemented in onboard flight software, and can be used to qualify data obtained from flight-critical sensors prior to the data being used by other flight software algorithms. Qualified data has been analyzed by SDQ and is determined to be a true representation of the sensed system state; that is, the sensor data is determined not to be corrupted by sensor faults or signal transmission faults. Sensor data can become corrupted by faults at any point in the signal path between the sensor and the flight computer. Qualifying the sensor data has the benefit of ensuring that erroneous data is identified and flagged before otherwise being used for operational decisions, thus increasing confidence in the response of the other flight software processes using the qualified data, and decreasing the probability of false alarms or missed detections.
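The abstract does not give the SDQ algorithms themselves; the sketch below shows one plausible flavor of qualification logic for redundant channels (range check plus cross-channel agreement around the median). The limits, tolerance, and voting rule are illustrative assumptions, not the published architecture.

```python
import statistics

def qualify_redundant(readings, lo, hi, agree_tol):
    """Qualify redundant sensor channels (illustrative logic only).

    1. Range check: discard channels outside plausible physical limits.
    2. Cross-check: keep channels within agree_tol of the channel median.
    Returns (qualified_value, suspect_channels).
    """
    in_range = {ch: v for ch, v in readings.items() if lo <= v <= hi}
    if not in_range:
        return None, set(readings)                    # nothing qualifies
    med = statistics.median(in_range.values())
    good = {ch: v for ch, v in in_range.items() if abs(v - med) <= agree_tol}
    suspect = set(readings) - set(good)
    return statistics.median(good.values()), suspect

value, bad = qualify_redundant({"P1": 101.2, "P2": 101.4, "P3": 250.0},
                               lo=0.0, hi=200.0, agree_tol=1.0)
print(value, bad)     # -> 101.3 {'P3'}
```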
NASA Astrophysics Data System (ADS)
Helsen, Jan; Gioia, Nicoletta; Peeters, Cédric; Jordaens, Pieter-Jan
2017-05-01
Particularly offshore there is a trend to cluster wind turbines in large wind farms, and in the near future to operate such a farm as an integrated power production plant. Predictability of individual turbine behavior across the entire fleet is key in such a strategy. Failure of turbine subcomponents should be detected well in advance to allow early planning of all necessary maintenance actions, so that they can be performed during low wind and low electricity demand periods. In order to obtain the insights to predict component failure, it is necessary to have an integrated clean dataset spanning all turbines of the fleet for a sufficiently long period of time. This paper illustrates our big-data approach to do this. In addition, advanced failure detection algorithms are necessary to detect failures in this dataset. This paper discusses a multi-level monitoring approach that consists of a combination of machine learning and advanced physics-based signal-processing techniques. The advantage of combining different data sources to detect system degradation is the higher certainty obtained from multivariable criteria. In order to be able to perform long-term acceleration data signal processing at high frequency, a streaming processing approach is necessary. This allows the data to be analysed as the sensors generate it. This paper illustrates this streaming concept on 5 kHz acceleration data. A continuous spectrogram is generated from the data-stream. Real-life offshore wind turbine data is used. Using this streaming approach for calculating bearing failure features on continuous acceleration data will support failure propagation detection.
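A minimal sketch of the streaming spectrogram idea, assuming 5 kHz samples arriving in chunks; the window sizes are illustrative and the seam handling is simplified relative to a production streaming pipeline.

```python
import numpy as np
from scipy.signal import spectrogram

FS = 5_000                                # 5 kHz acceleration stream

def stream_spectrogram(chunks, nperseg=1024, noverlap=512):
    """Yield spectrogram blocks as chunks arrive, carrying window overlap."""
    tail = np.empty(0)
    for chunk in chunks:
        buf = np.concatenate([tail, chunk])
        if len(buf) < nperseg:            # not enough samples yet
            tail = buf
            continue
        f, t, Sxx = spectrogram(buf, fs=FS, nperseg=nperseg, noverlap=noverlap)
        yield f, Sxx                      # columns for this block
        tail = buf[-noverlap:]            # simplified seam handling

# Illustrative use on synthetic one-second chunks:
chunks = (np.random.randn(FS) for _ in range(3))
for f, Sxx in stream_spectrogram(chunks):
    print(Sxx.shape)                      # (freq bins, time columns)
```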
Using failure mode and effects analysis to improve the safety of neonatal parenteral nutrition.
Arenas Villafranca, Jose Javier; Gómez Sánchez, Araceli; Nieto Guindo, Miriam; Faus Felipe, Vicente
2014-07-15
Failure mode and effects analysis (FMEA) was used to identify potential errors and to enable the implementation of measures to improve the safety of neonatal parenteral nutrition (PN). FMEA was used to analyze the preparation and dispensing of neonatal PN from the perspective of the pharmacy service in a general hospital. A process diagram was drafted, illustrating the different phases of the neonatal PN process. Next, the failures that could occur in each of these phases were compiled and cataloged, and a questionnaire was developed in which respondents were asked to rate the following aspects of each error: incidence, detectability, and severity. The highest scoring failures were considered high risk and identified as priority areas for improvements to be made. The evaluation process detected a total of 82 possible failures. Among the phases with the highest number of possible errors were transcription of the medical order, formulation of the PN, and preparation of material for the formulation. After the classification of these 82 possible failures and of their relative importance, a checklist was developed to achieve greater control in the error-detection process. FMEA demonstrated that use of the checklist reduced the level of risk and improved the detectability of errors. FMEA was useful for detecting medication errors in the PN preparation process and enabling corrective measures to be taken. A checklist was developed to reduce errors in the most critical aspects of the process. Copyright © 2014 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Moussawi, A; Derzsy, N; Lin, X; Szymanski, B K; Korniss, G
2017-09-15
Cascading failures are a critical vulnerability of complex information or infrastructure networks. Here we investigate the properties of load-based cascading failures in real and synthetic spatially-embedded network structures, and propose mitigation strategies to reduce the severity of damages caused by such failures. We introduce a stochastic method for optimal heterogeneous distribution of resources (node capacities) subject to a fixed total cost. Additionally, we design and compare the performance of networks with N-stable and (N-1)-stable network-capacity allocations by triggering cascades using various real-world node-attack and node-failure scenarios. We show that failure mitigation through increased node protection can be effectively achieved against single-node failures. However, mitigating against multiple node failures is much more difficult due to the combinatorial increase in possible sets of initially failing nodes. We analyze the robustness of the system with increasing protection, and find that a critical tolerance exists at which the system undergoes a phase transition, and above which the network almost completely survives an attack. Moreover, we show that cascade-size distributions measured in this region exhibit a power-law decay. Finally, we find a strong correlation between cascade sizes induced by individual nodes and sets of nodes. We also show that network topology alone is a weak predictor in determining the progression of cascading failures.
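A compact way to reproduce the flavor of load-based cascades with heterogeneous capacities is the classic load/capacity model, sketched below with betweenness as the load proxy; the graph, alpha, and seed nodes are illustrative assumptions, not the paper's networks or its stochastic capacity-allocation method.

```python
import networkx as nx

def cascade(G, seeds, alpha=0.2):
    """Load-based cascade: load = betweenness, capacity = (1 + alpha) * load."""
    load0 = nx.betweenness_centrality(G)
    cap = {n: (1 + alpha) * load0[n] for n in G}
    H = G.copy()
    H.remove_nodes_from(seeds)
    failed = set(seeds)
    while True:
        load = nx.betweenness_centrality(H)           # loads redistribute
        over = [n for n in H if load[n] > cap[n]]     # overloaded nodes fail
        if not over:
            return failed
        H.remove_nodes_from(over)
        failed.update(over)

G = nx.barabasi_albert_graph(200, 2, seed=1)          # synthetic network
print(len(cascade(G, seeds=[0])))                     # cascade size, one seed
```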
Shah, Sheel; Lubeck, Eric; Schwarzkopf, Maayan; He, Ting-Fang; Greenbaum, Alon; Sohn, Chang Ho; Lignell, Antti; Choi, Harry M T; Gradinaru, Viviana; Pierce, Niles A; Cai, Long
2016-08-01
Accurate and robust detection of mRNA molecules in thick tissue samples can reveal gene expression patterns in single cells within their native environment. Preserving spatial relationships while accessing the transcriptome of selected cells is a crucial feature for advancing many biological areas - from developmental biology to neuroscience. However, because of the high autofluorescence background of many tissue samples, it is difficult to detect single-molecule fluorescence in situ hybridization (smFISH) signals robustly in opaque thick samples. Here, we draw on principles from the emerging discipline of dynamic nucleic acid nanotechnology to develop a robust method for multi-color, multi-RNA imaging in deep tissues using single-molecule hybridization chain reaction (smHCR). Using this approach, single transcripts can be imaged using epifluorescence, confocal or selective plane illumination microscopy (SPIM) depending on the imaging depth required. We show that smHCR has high sensitivity in detecting mRNAs in cell culture and whole-mount zebrafish embryos, and that combined with SPIM and PACT (passive CLARITY technique) tissue hydrogel embedding and clearing, smHCR can detect single mRNAs deep within thick (0.5 mm) brain slices. By simultaneously achieving ∼20-fold signal amplification and diffraction-limited spatial resolution, smHCR offers a robust and versatile approach for detecting single mRNAs in situ, including in thick tissues where high background undermines the performance of unamplified smFISH. © 2016. Published by The Company of Biologists Ltd.
Inter-computer communication architecture for a mixed redundancy distributed system
NASA Technical Reports Server (NTRS)
Lala, Jaynarayan H.; Adams, Stuart J.
1987-01-01
The triply redundant intercomputer network for the Advanced Information Processing System (AIPS), an architecture developed to serve as the core avionics system for a broad range of aerospace vehicles, is discussed. The AIPS intercomputer network provides a high-speed, Byzantine-fault-resilient communication service between processing sites, even in the presence of arbitrary failures of simplex and duplex processing sites on the IC network. The IC network contention poll has evolved from the Laning Poll. An analysis of the failure modes and effects and a simulation of the AIPS contention poll demonstrate the robustness of the system.
Diamond Blackfan Anemia: Diagnosis, Treatment and Molecular Pathogenesis
Lipton, Jeffrey M.; Ellis, Steven R.
2009-01-01
Diamond Blackfan anemia (DBA) is a genetically and clinically heterogeneous disorder characterized by erythroid failure, congenital anomalies and a predisposition to cancer. Faulty ribosome biogenesis, resulting in pro-apoptotic erythropoiesis leading to erythroid failure, is hypothesized to be the underlying defect. The genes identified to date that are mutated in DBA all encode ribosomal proteins associated with either the small (RPS) or large (RPL) subunit and in these cases haploinsufficiency gives rise to the disease. Extraordinarily robust laboratory and clinical investigations have recently led to demonstrable improvements in clinical care for patients with DBA. PMID:19327583
Sleep apnoea syndromes and the cardiovascular system.
Pepperell, Justin C
2011-06-01
Management of SAS and cardiovascular disease risk should be closely linked. It is important to screen for cardiovascular disease risk in patients with SAS and vice versa. CSA/CSR may be improved by ventilation strategies in heart failure, but benefit remains to be proven. For OSA, although CPAP may reduce cardiovascular disease risk, its main benefit is symptom control. In the longer-term, CPAP should be used alongside standard cardiovascular risk reduction strategies including robust weight management programmes, with referral for bariatric surgery in appropriate cases. CPAP and NIV should be considered for acute admissions with decompensated cardiac failure.
Online two-stage association method for robust multiple people tracking
NASA Astrophysics Data System (ADS)
Lv, Jingqin; Fang, Jiangxiong; Yang, Jie
2011-07-01
Robust multiple people tracking is very important for many applications. It is a challenging problem due to occlusion and interaction in crowded scenarios. This paper proposes an online two-stage association method for robust multiple people tracking. In the first stage, short tracklets generated by linking people detection responses are grown longer by particle-filter-based tracking, with detection confidence embedded into the observation model. An examining scheme runs at each frame to check the reliability of tracking. In the second stage, multiple people tracking is achieved by linking tracklets to generate trajectories. An online tracklet association method is proposed to solve the linking problem, which allows applications in time-critical scenarios. This method is evaluated on the popular CAVIAR dataset. The experimental results show that our two-stage method is robust.
Notes on power of normality tests of error terms in regression models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Střelec, Luboš
2015-03-10
Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to assess non-normality of the error terms may lead to incorrect results of usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary in order to make inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
2016-01-01
Passive content fingerprinting is widely used for video content identification and monitoring. However, many challenges remain unsolved especially for partial-copies detection. The main challenge is to find the right balance between the computational cost of fingerprint extraction and fingerprint dimension, without compromising detection performance against various attacks (robustness). Fast video detection performance is desirable in several modern applications, for instance, in those where video detection involves the use of large video databases or in applications requiring real-time video detection of partial copies, a process whose difficulty increases when videos suffer severe transformations. In this context, conventional fingerprinting methods are not fully suitable to cope with the attacks and transformations mentioned before, either because the robustness of these methods is not enough or because their execution time is very high, where the time bottleneck is commonly found in the fingerprint extraction and matching operations. Motivated by these issues, in this work we propose a content fingerprinting method based on the extraction of a set of independent binary global and local fingerprints. Although these features are robust against common video transformations, their combination is more discriminant against severe video transformations such as signal processing attacks, geometric transformations and temporal and spatial desynchronization. Additionally, we use an efficient multilevel filtering system accelerating the processes of fingerprint extraction and matching. This multilevel filtering system helps to rapidly identify potential similar video copies upon which the fingerprint process is carried out only, thus saving computational time. We tested with datasets of real copied videos, and the results show how our method outperforms state-of-the-art methods regarding detection scores. Furthermore, the granularity of our method makes it suitable for partial-copy detection; that is, by processing only short segments of 1 second length. PMID:27861492
Fire flame detection based on GICA and target tracking
NASA Astrophysics Data System (ADS)
Rong, Jianzhong; Zhou, Dechuang; Yao, Wei; Gao, Wei; Chen, Juan; Wang, Jian
2013-04-01
To improve the video fire detection rate, a robust fire detection algorithm based on the color, motion and pattern characteristics of fire targets was proposed, which achieved a satisfactory fire detection rate for different fire scenes. In this fire detection algorithm: (a) a rule-based generic color model was developed based on analysis of a large quantity of flame pixels; (b) from the traditional GICA (Geometrical Independent Component Analysis) model, a Cumulative Geometrical Independent Component Analysis (C-GICA) model was developed for motion detection without a static background; and (c) a BP neural network fire recognition model based on multi-features of the fire pattern was developed. Fire detection tests on benchmark fire video clips of different scenes have shown the robustness, accuracy and fast response of the algorithm.
Tol, Trupti; Kadam, Nilesh; Raotole, Nilesh; Desai, Anita; Samanta, Gautam
2016-02-05
The combination of Abacavir, Lamivudine and Dolutegravir is an anti-retroviral formulation that displays high efficacy and superiority in comparison to other anti-retroviral combinations. Analysis of related substances in this combination drug product was very challenging due to the presence of nearly thirty peaks including the three active pharmaceutical ingredients (APIs), eleven known impurities and other pharmaceutical excipients. The objective of this study was to develop a single, selective, and robust high performance liquid chromatography method for the efficient separation of all peaks. Initially, a one-factor-at-a-time (OFAT) approach was adopted to develop the method, but it could not resolve all the critical peaks in such a complex matrix. This led to two different HPLC methods for the determination of related substances, one for Abacavir and Lamivudine and the other for Dolutegravir. But, since analysis of a single sample using two methods instead of one is time and resource consuming and thus expensive, an attempt was made to develop a single and robust method by adopting quality by design (QbD) principles. Design of Experiments (DoE) was applied as a tool to achieve the optimum conditions through response surface methodology with three method variables: pH, temperature, and mobile phase composition. As the study progressed, it was discovered that establishment of the design space was not viable due to the completely distant pH requirements of the two responses, i.e. (i) retention time for Lamivudine carboxylic acid and (ii) resolution between Abacavir impurity B and an unknown impurity. Eventually, neglecting one of these two responses each time, two distinct design spaces were established and verified. Edges of failure in both design spaces indicate a high probability of failure. It therefore becomes very important to identify the most robust zone or normal operating range (NOR) within the design space with low risk of failure and high quality assurance. For NOR establishment, Monte Carlo simulation was performed, on the basis of which the process capability index (Cpk) was derived. Finally, the selectivity problem arising from the pH dependency and the dissimilar pH needs of the two critical responses was resolved by introducing a pH gradient into the program. This new ternary gradient program has provided a single robust method. Thus, two HPLC methods for the analysis of the combination drug product have been replaced with a selective, robust, and cost effective single method. Copyright © 2015 Elsevier B.V. All rights reserved.
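The NOR step can be illustrated with a small Monte Carlo sketch: simulate routine variation of the method variables through an assumed (toy) response model, then derive a one-sided Cpk against a lower resolution limit. All numbers and the response model are illustrative, not the published method.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# Monte Carlo over method variables around their set points (toy model).
ph   = rng.normal(3.0, 0.05, N)                  # pH set point +/- variation
temp = rng.normal(30.0, 0.5, N)                  # column temperature (deg C)
resolution = (2.5 + 1.8 * (ph - 3.0)             # assumed response model
              - 0.05 * (temp - 30.0)
              + rng.normal(0.0, 0.05, N))

LSL = 2.0                                        # minimum acceptable resolution
mu, sigma = resolution.mean(), resolution.std(ddof=1)
cpk = (mu - LSL) / (3 * sigma)                   # one-sided Cpk, lower limit
print(f"Cpk = {cpk:.2f}, P(fail) = {(resolution < LSL).mean():.5f}")
```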
Resolving Multi-Stakeholder Robustness Asymmetries in Coupled Agricultural and Urban Systems
NASA Astrophysics Data System (ADS)
Li, Yu; Giuliani, Matteo; Castelletti, Andrea; Reed, Patrick
2016-04-01
The evolving pressures from a changing climate and society are increasingly motivating decision support frameworks that consider the robustness of management actions across many possible futures. Focusing on robustness is helpful for investigating key vulnerabilities within current water systems and for identifying potential tradeoffs across candidate adaptation responses. To date, most robustness studies assume a social planner perspective by evaluating highly aggregated measures of system performance. This aggregate treatment of stakeholders does not explore the equity or intrinsic multi-stakeholder conflicts implicit to the system-wide measures of performance benefits and costs. The commonly present heterogeneity across complex management interests, however, may produce strong asymmetries for alternative adaptation options designed to satisfy system-level targets. In this work, we advance traditional robustness decision frameworks by replacing the centralized social planner with a bottom-up, agent-based approach, where stakeholders are modeled as individuals, and represented as potentially self-interested agents. This agent-based model enables a more explicit exploration of the potential inequities and asymmetries in the distribution of the system-wide benefit. The approach is demonstrated by exploring the potential conflicts between urban flooding and agricultural production in the Lake Como system (Italy). Lake Como is a regulated lake that is operated to supply water to the downstream agricultural district (Muzza as the pilot study area in this work) composed of a set of farmers with heterogeneous characteristics in terms of water allocation, cropping patterns, and land properties. Supplying water to farmers increases the risk of floods along the lakeshore and therefore the system is operated based on the tradeoff between these two objectives. We generated an ensemble of co-varying climate and socio-economic conditions and evaluated the robustness of the current Lake Como system management as well as of possible adaptation options (e.g., improved irrigation efficiency or changes in the dam operating rules). Numerical results show that crop prices and costs are the main drivers of the simulated system failures when evaluated in terms of system-level expected profitability. Analysis conducted at the farmer-agent scale instead highlights that temperature and inflows are the critical drivers leading to failures. Finally, we show that the robustness of the considered adaptation options varies spatially, strongly influenced by stakeholders' context, the metrics used to define success, and the assumed preferences for reservoir operations in balancing urban flooding and agricultural productivity.
Arduino-based noise robust online heart-rate detection.
Das, Sangita; Pal, Saurabh; Mitra, Madhuchhanda
2017-04-01
This paper introduces a noise robust real time heart rate detection system from electrocardiogram (ECG) data. An online data acquisition system is developed to collect ECG signals from human subjects. Heart rate is detected using window-based autocorrelation peak localisation technique. A low-cost Arduino UNO board is used to implement the complete automated process. The performance of the system is compared with PC-based heart rate detection technique. Accuracy of the system is validated through simulated noisy ECG data with various levels of signal to noise ratio (SNR). The mean percentage error of detected heart rate is found to be 0.72% for the noisy database with five different noise levels.
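A minimal sketch of window-based autocorrelation peak localization for heart-rate estimation, assuming a single ECG window and illustrative physiological bounds; this is the general technique, not the authors' Arduino implementation.

```python
import numpy as np

def heart_rate_bpm(ecg, fs, min_bpm=40, max_bpm=200):
    """Estimate heart rate from one ECG window via its autocorrelation peak."""
    x = ecg - ecg.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags
    lo = int(fs * 60 / max_bpm)                         # shortest credible period
    hi = int(fs * 60 / min_bpm)                         # longest credible period
    lag = lo + np.argmax(ac[lo:hi])                     # dominant periodicity
    return 60.0 * fs / lag

# Illustrative use: synthetic 1.2 Hz (72 bpm) pulse train plus noise.
fs = 250
t = np.arange(0.0, 10.0, 1.0 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63 + 0.1 * np.random.randn(len(t))
print(round(heart_rate_bpm(ecg, fs)))                   # ~72
```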
Pennell, William E.; Sutton, Jr., Harry G.
1981-01-01
Method and apparatus for detecting failure in a welded connection, particularly applicable to not readily accessible welds such as those joining components within the reactor vessel of a nuclear reactor system. A preselected tag gas is sealed within a chamber which extends through selected portions of the base metal and weld deposit. In the event of a failure, such as development of a crack extending from the chamber to an outer surface, the tag gas is released. The environment about the welded area is directed to an analyzer which, in the event of presence of the tag gas, evidences the failure. A trigger gas can be included with the tag gas to actuate the analyzer.
An investigation of gear mesh failure prediction techniques. M.S. Thesis - Cleveland State Univ.
NASA Technical Reports Server (NTRS)
Zakrajsek, James J.
1989-01-01
A study was performed in which several gear failure prediction methods were investigated and applied to experimental data from a gear fatigue test apparatus. The primary objective was to provide a baseline understanding of the prediction methods and to evaluate their diagnostic capabilities. The methods investigated use the signal average in both the time and frequency domain to detect gear failure. Data from eleven gear fatigue tests were recorded at periodic time intervals as the gears were run from initiation to failure. Four major failure modes, consisting of heavy wear, tooth breakage, single pits, and distributed pitting were observed among the failed gears. Results show that the prediction methods were able to detect only those gear failures which involved heavy wear or distributed pitting. None of the methods could predict fatigue cracks, which resulted in tooth breakage, or single pits. It is suspected that the fatigue cracks were not detected because of limitations in data acquisition rather than in methodology. Additionally, the frequency response between the gear shaft and the transducer was found to significantly affect the vibration signal. The specific frequencies affected were filtered out of the signal average prior to application of the methods.
ERIC Educational Resources Information Center
Baker, Ryan S. J. d.; Corbett, Albert T.; Gowda, Sujith M.
2013-01-01
Recently, there has been growing emphasis on supporting robust learning within intelligent tutoring systems, assessed by measures such as transfer to related skills, preparation for future learning, and longer term retention. It has been shown that different pedagogical strategies promote robust learning to different degrees. However, the student…
Impact analysis of two kinds of failure strategies in Beijing road transportation network
NASA Astrophysics Data System (ADS)
Zhang, Zundong; Xu, Xiaoyang; Zhang, Zhaoran; Zhou, Huijuan
The Beijing road transportation network (BRTN), as a large-scale technological network, exhibits very complex and complicated features during daily periods. How statistical characteristics (e.g. average path length and global network efficiency) change as the network evolves has been widely highlighted. In this paper, by using different modeling concepts, three kinds of network models of BRTN, namely the abstract network model, the static network model with road mileage as weights and the dynamic network model with travel time as weights, are constructed according to the topological data and the real detected flow data. The degree distributions of the three kinds of network models are analyzed, which shows that the urban road infrastructure network and the dynamic network behave like scale-free networks. By analyzing and comparing the important statistical characteristics of the three models under random attacks and intentional attacks, it is shown that the urban road infrastructure network and the dynamic network of BRTN are both robust and vulnerable.
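The random-versus-intentional attack comparison can be reproduced on any graph with a few lines of networkx; below, a scale-free synthetic graph stands in for BRTN and global efficiency is the tracked statistic. Sizes and fractions are illustrative.

```python
import random
import networkx as nx

def efficiency_after_attack(G, fraction=0.05, targeted=False, seed=0):
    """Remove a fraction of nodes, return the remaining global efficiency."""
    H = G.copy()
    k = int(fraction * H.number_of_nodes())
    if targeted:                           # intentional attack: hubs first
        victims = sorted(H.degree, key=lambda nd: nd[1], reverse=True)[:k]
        H.remove_nodes_from(n for n, _ in victims)
    else:                                  # random failures
        random.seed(seed)
        H.remove_nodes_from(random.sample(list(H.nodes), k))
    return nx.global_efficiency(H)

G = nx.barabasi_albert_graph(500, 3, seed=2)   # scale-free stand-in
print("random failures    :", round(efficiency_after_attack(G), 3))
print("intentional attacks:", round(efficiency_after_attack(G, targeted=True), 3))
```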
Functional Fault Modeling Conventions and Practices for Real-Time Fault Isolation
NASA Technical Reports Server (NTRS)
Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara
2010-01-01
The purpose of this paper is to present the conventions, best practices, and processes that were established based on the prototype development of a Functional Fault Model (FFM) for a Cryogenic System that would be used for real-time Fault Isolation in a Fault Detection, Isolation, and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real-time. The FFMs were created offline but would eventually be used by a real-time reasoner to isolate faults in a Cryogenic System. Through their development and review, a set of modeling conventions and best practices were established. The prototype FFM development also provided a pathfinder for future FFM development processes. This paper documents the rationale and considerations for robust FFMs that can easily be transitioned to a real-time operating environment.
Stereo matching algorithm based on double components model
NASA Astrophysics Data System (ADS)
Zhou, Xiao; Ou, Kejun; Zhao, Jianxin; Mou, Xingang
2018-03-01
Tiny wires are a great threat to the safety of UAV flight because they occupy only a few isolated pixels far from the background, while most existing stereo matching methods require a support region of a certain area to improve robustness, or assume depth dependence among neighboring pixels to meet the requirements of global or semi-global optimization methods. So there will be false alarms or even failures when images contain tiny wires. A new stereo matching algorithm is proposed in this paper based on a double components model. According to different texture types the input image is decomposed into two independent component images: one contains only the sparse wire texture and the other contains all remaining parts. Different matching schemes are adopted for each pair of component images. Experiments prove that the algorithm can effectively calculate the depth image of complex scenes for a patrol UAV, detecting tiny wires as well as large objects. Compared with the current mainstream method it has obvious advantages.
Gear Fault Detection Effectiveness as Applied to Tooth Surface Pitting Fatigue Damage
NASA Technical Reports Server (NTRS)
Lewicki, David G.; Dempsey, Paula J.; Heath, Gregory F.; Shanthakumaran, Perumal
2010-01-01
A study was performed to evaluate fault detection effectiveness as applied to gear-tooth-pitting-fatigue damage. Vibration and oil-debris monitoring (ODM) data were gathered from 24 sets of spur pinion and face gears run during a previous endurance evaluation study. Three common condition indicators (RMS, FM4, and NA4 [Ed.'s note: See Appendix A, Definitions]) were deduced from the time-averaged vibration data and used with the ODM to evaluate their performance for gear fault detection. The NA4 parameter proved to be a very good condition indicator for the detection of gear tooth surface pitting failures. The FM4 and RMS parameters performed average to below average in detection of gear tooth surface pitting failures. The ODM sensor was successful in detecting a significant amount of debris from all the gear tooth pitting fatigue failures. Excluding outliers, the average cumulative mass at the end of a test was 40 mg.
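For reference, RMS and FM4 are simple to compute from a time-synchronous-averaged (TSA) signal: FM4 is the normalized fourth moment (kurtosis) of the difference signal, i.e. the TSA with the regular mesh components removed. The sketch below uses an order-domain notch of the mesh harmonic and its sidebands; the synthetic signals are illustrative, not the test-rig data.

```python
import numpy as np

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def fm4(residual):
    """FM4: kurtosis (normalized 4th moment) of the difference signal."""
    d = residual - residual.mean()
    return len(d) * np.sum(d ** 4) / np.sum(d ** 2) ** 2

def difference_signal(tsa, n_teeth):
    """Notch the gear-mesh order and its +/-1 sidebands in the order domain."""
    X = np.fft.rfft(tsa)
    for k in (n_teeth - 1, n_teeth, n_teeth + 1):
        if k < len(X):
            X[k] = 0.0
    return np.fft.irfft(X, n=len(tsa))

# Illustrative use: a clean 28-tooth mesh tone vs one with a local defect.
n = 1024
rev = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
tsa = np.sin(28 * rev) + 0.05 * np.random.randn(n)
tsa_pitted = tsa.copy()
tsa_pitted[300:310] += 1.5                       # localized tooth impact
for name, sig in [("healthy", tsa), ("pitted", tsa_pitted)]:
    d = difference_signal(sig, n_teeth=28)
    print(f"{name}: RMS={rms(sig):.3f}  FM4={fm4(d):.2f}")   # FM4 ~3 if healthy
```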
NASA Astrophysics Data System (ADS)
Edwards, John L.; Beekman, Randy M.; Buchanan, David B.; Farner, Scott; Gershzohn, Gary R.; Khuzadi, Mbuyi; Mikula, D. F.; Nissen, Gerry; Peck, James; Taylor, Shaun
2007-04-01
Human space travel is inherently dangerous. Hazardous conditions will exist. Real time health monitoring of critical subsystems is essential for providing a safe abort timeline in the event of a catastrophic subsystem failure. In this paper, we discuss a practical and cost effective process for developing critical subsystem failure detection, diagnosis and response (FDDR). We also present the results of a real time health monitoring simulation of a propellant ullage pressurization subsystem failure. The health monitoring development process identifies hazards, isolates hazard causes, defines software partitioning requirements and quantifies software algorithm development. The process provides a means to establish the number and placement of sensors necessary to provide real time health monitoring. We discuss how health monitoring software tracks subsystem control commands, interprets off-nominal operational sensor data, predicts failure propagation timelines, corroborates failure predictions and formats failure protocols.
Robust Gain-Scheduled Fault Tolerant Control for a Transport Aircraft
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Gregory, Irene
2007-01-01
This paper presents an application of robust gain-scheduled control concepts using a linear parameter-varying (LPV) control synthesis method to design fault tolerant controllers for a civil transport aircraft. To apply the robust LPV control synthesis method, the nonlinear dynamics must be represented by an LPV model, which is developed using the function substitution method over the entire flight envelope. The developed LPV model associated with the aerodynamic coefficient uncertainties represents nonlinear dynamics including those outside the equilibrium manifold. Passive and active fault tolerant controllers (FTC) are designed for the longitudinal dynamics of the Boeing 747-100/200 aircraft in the presence of elevator failure. Both FTC laws are evaluated in the full nonlinear aircraft simulation in the presence of the elevator fault and the results are compared to show pros and cons of each control law.
Karmazyn, Morris; Gan, Xiaohong Tracey
2017-10-01
Heart failure is a major medical and economic burden throughout the world. Although various treatment options are available to treat heart failure, death rates in both men and women remain high. Potential adjunctive therapies may lie with use of herbal medications, many of which possess potent pharmacological properties. Among the most widely studied is ginseng, a member of the genus Panax that is grown in many parts of the world and that has been used as a medical treatment for a variety of conditions for thousands of years, particularly in Asian societies. There are a number of ginseng species, each possessing distinct pharmacological effects due primarily to differences in their bioactive components including saponin ginsenosides and polysaccharides. While experimental evidence for salutary effects of ginseng on heart failure is robust, clinical evidence is less so, primarily due to a paucity of large-scale well-controlled clinical trials. However, there is evidence from small trials that ginseng-containing Chinese medications such as Shenmai can offer benefit when administered as adjunctive therapy to heart failure patients. Substantial additional studies are required, particularly in the clinical arena, to provide evidence for a favourable effect of ginseng in heart failure patients.
Anker, Stefan D; Schroeder, Stefan; Atar, Dan; Bax, Jeroen J; Ceconi, Claudio; Cowie, Martin R; Crisp, Adam; Dominjon, Fabienne; Ford, Ian; Ghofrani, Hossein-Ardeschir; Gropper, Savion; Hindricks, Gerhard; Hlatky, Mark A; Holcomb, Richard; Honarpour, Narimon; Jukema, J Wouter; Kim, Albert M; Kunz, Michael; Lefkowitz, Martin; Le Floch, Chantal; Landmesser, Ulf; McDonagh, Theresa A; McMurray, John J; Merkely, Bela; Packer, Milton; Prasad, Krishna; Revkin, James; Rosano, Giuseppe M C; Somaratne, Ransi; Stough, Wendy Gattis; Voors, Adriaan A; Ruschitzka, Frank
2016-05-01
Composite endpoints are commonly used as the primary measure of efficacy in heart failure clinical trials to assess the overall treatment effect and to increase the efficiency of trials. Clinical trials still must enrol large numbers of patients to accrue a sufficient number of outcome events and have adequate power to draw conclusions about the efficacy and safety of new treatments for heart failure. Additionally, the societal and health system perspectives on heart failure have raised interest in ascertaining the effects of therapy on outcomes such as repeat hospitalization and the patient's burden of disease. Thus, novel methods for using composite endpoints in clinical trials (e.g. clinical status composite endpoints, recurrent event analyses) are being applied in current and planned trials. Endpoints that measure functional status or reflect the patient experience are important but used cautiously because heart failure treatments may improve function yet have adverse effects on mortality. This paper discusses the use of traditional and new composite endpoints, identifies qualities of robust composites, and outlines opportunities for future research. © 2016 The Authors. European Journal of Heart Failure © 2016 European Society of Cardiology.
Analytical Study of different types Of network failure detection and possible remedies
NASA Astrophysics Data System (ADS)
Saxena, Shikha; Chandra, Somnath
2012-07-01
Faults in a network have various causes, such as the failure of one or more routers, fiber cuts, failure of physical elements at the optical layer, or extraneous causes like power outages. These faults are usually detected as failures of a set of dependent logical entities and the links affected by the failed components. A reliable control plane plays a crucial role in creating high-level services in the next-generation transport network based on the Generalized Multiprotocol Label Switching (GMPLS) or Automatically Switched Optical Networks (ASON) model. In this paper, approaches to control-plane survivability, based on protection and restoration mechanisms, are examined. Procedures for control plane state recovery are also discussed, including link and node failure recovery and the concepts of monitoring paths (MPs) and monitoring cycles (MCs) for unique localization of shared risk link group (SRLG) failures in all-optical networks. An SRLG failure is a failure of multiple links due to a failure of a common resource. MCs (MPs) start and end at the same (distinct) monitoring location(s). They are constructed such that any SRLG failure results in the failure of a unique combination of paths and cycles. We derive necessary and sufficient conditions on the set of MCs and MPs needed for localizing an SRLG failure in an arbitrary graph. Procedures for protection and restoration of SRLG failures using a backup re-provisioning algorithm are also discussed.
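The uniqueness condition on MCs/MPs reduces to a signature check: every SRLG must fail a distinct, non-empty set of monitoring paths or cycles. A toy check, with illustrative link groups and monitors:

```python
# Each monitoring path/cycle is modeled as the set of links it traverses;
# an SRLG failure takes down a monitor iff they share a link. Localization
# is unambiguous iff every SRLG yields a distinct, non-empty signature.

def signature(srlg_links, monitors):
    return tuple(bool(srlg_links & m) for m in monitors)

srlgs = {                                   # illustrative shared-risk groups
    "S1": {"a", "b"},
    "S2": {"c"},
    "S3": {"b", "d"},
}
monitors = [{"a", "c"}, {"b"}, {"c", "d"}]  # three monitoring paths/cycles

sigs = {name: signature(links, monitors) for name, links in srlgs.items()}
print(sigs)                                 # S1:(T,T,F) S2:(T,F,T) S3:(F,T,T)
unique = len(set(sigs.values())) == len(sigs) and all(any(s) for s in sigs.values())
print("uniquely localizable:", unique)      # -> True
```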
NASA Technical Reports Server (NTRS)
Hunter, H. E.
1972-01-01
The Avco Data Analysis and Prediction Techniques (ADAPT) were employed to determine laws capable of detecting failures in a heat plant up to three days in advance of the occurrence of the failure. The projected performance of the algorithms yielded a detection probability of 90% with false alarm rates of the order of 1 per year for a sample rate of 1 per day, with each detection followed by 3 hourly samplings. This performance was verified on 173 independent test cases. The program also demonstrated diagnostic algorithms and the ability to predict the time of failure to approximately plus or minus 8 hours up to three days in advance of the failure. The ADAPT programs produce simple algorithms which have a unique possibility of a relatively low cost updating procedure. The algorithms were implemented on general purpose computers at Kennedy Space Flight Center and tested against current data.
Respiratory failure in diabetic ketoacidosis.
Konstantinov, Nikifor K; Rohrscheib, Mark; Agaba, Emmanuel I; Dorin, Richard I; Murata, Glen H; Tzamaloukas, Antonios H
2015-07-25
Respiratory failure complicating the course of diabetic ketoacidosis (DKA) is a source of increased morbidity and mortality. Detection of respiratory failure in DKA requires focused clinical monitoring, careful interpretation of arterial blood gases, and investigation for conditions that can affect adversely the respiration. Conditions that compromise respiratory function caused by DKA can be detected at presentation but are usually more prevalent during treatment. These conditions include deficits of potassium, magnesium and phosphate and hydrostatic or non-hydrostatic pulmonary edema. Conditions not caused by DKA that can worsen respiratory function under the added stress of DKA include infections of the respiratory system, pre-existing respiratory or neuromuscular disease and miscellaneous other conditions. Prompt recognition and management of the conditions that can lead to respiratory failure in DKA may prevent respiratory failure and improve mortality from DKA.
Misra, Sudip; Singh, Ranjit; Rohith Mohan, S. V.
2010-01-01
The proposed mechanism for jamming attack detection for wireless sensor networks is novel in three respects: firstly, it upgrades the jammer to include versatile military jammers; secondly, it graduates from the existing node-centric detection system to the network-centric system making it robust and economical at the nodes, and thirdly, it tackles the problem through fuzzy inference system, as the decision regarding intensity of jamming is seldom crisp. The system with its high robustness, ability to grade nodes with jamming indices, and its true-detection rate as high as 99.8%, is worthy of consideration for information warfare defense purposes. PMID:22319307
Simulated performance of an order statistic threshold strategy for detection of narrowband signals
NASA Technical Reports Server (NTRS)
Satorius, E.; Brady, R.; Deich, W.; Gulkis, S.; Olsen, E.
1988-01-01
The application of order statistics to signal detection is becoming an increasingly active area of research. This is due to the inherent robustness of rank estimators in the presence of large outliers that would significantly degrade more conventional mean-level-based detection systems. A detection strategy is presented in which the threshold estimate is obtained using order statistics. The performance of this algorithm in the presence of simulated interference and broadband noise is evaluated. In this way, the robustness of the proposed strategy in the presence of the interference can be fully assessed as a function of the interference, noise, and detector parameters.
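A minimal sketch of an order-statistic threshold: the noise level is estimated from a mid-rank order statistic of the spectrum bins, so a few strong narrowband outliers barely perturb it, unlike a mean-level estimate. The rank fraction and scale factor are illustrative, not the paper's detector parameters.

```python
import numpy as np

def os_threshold(spectrum, rank_frac=0.75, scale=10.0):
    """Detection threshold from an order statistic of the spectrum bins."""
    k = int(rank_frac * len(spectrum))
    noise_level = np.partition(spectrum, k)[k]   # k-th order statistic
    return scale * noise_level                   # robust to strong outliers

rng = np.random.default_rng(3)
spec = rng.exponential(1.0, 4096)                # broadband-noise periodogram
spec[1234] = 60.0                                # injected narrowband tone
print(np.flatnonzero(spec > os_threshold(spec))) # -> [1234]
```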
Smart phones: platform enabling modular, chemical, biological, and explosives sensing
NASA Astrophysics Data System (ADS)
Finch, Amethist S.; Coppock, Matthew; Bickford, Justin R.; Conn, Marvin A.; Proctor, Thomas J.; Stratis-Cullum, Dimitra N.
2013-05-01
Reliable, robust, and portable technologies are needed for the rapid identification and detection of chemical, biological, and explosive (CBE) materials. A key to addressing the persistent threat to U.S. troops in the current war on terror is the rapid detection and identification of the precursor materials used in development of improvised explosive devices, homemade explosives, and bio-warfare agents. However, a universal methodology for detection and prevention of CBE materials in the use of these devices has proven difficult. Herein, we discuss our efforts towards the development of a modular, robust, inexpensive, pervasive, archival, and compact platform (an Android-based smart phone) enabling the rapid detection of these materials.
Optimizing Data Management in Grid Environments
NASA Astrophysics Data System (ADS)
Zissimos, Antonis; Doka, Katerina; Chazapis, Antony; Tsoumakos, Dimitrios; Koziris, Nectarios
Grids currently serve as platforms for numerous scientific as well as business applications that generate and access vast amounts of data. In this paper, we address the need for efficient, scalable and robust data management in Grid environments. We propose a fully decentralized and adaptive mechanism comprising of two components: A Distributed Replica Location Service (DRLS) and a data transfer mechanism called GridTorrent. They both adopt Peer-to-Peer techniques in order to overcome performance bottlenecks and single points of failure. On one hand, DRLS ensures resilience by relying on a Byzantine-tolerant protocol and is able to handle massive concurrent requests even during node churn. On the other hand, GridTorrent allows for maximum bandwidth utilization through collaborative sharing among the various data providers and consumers. The proposed integrated architecture is completely backwards-compatible with already deployed Grids. To demonstrate these points, experiments have been conducted in LAN as well as WAN environments under various workloads. The evaluation shows that our scheme vastly outperforms the conventional mechanisms in both efficiency (up to 10 times faster) and robustness in case of failures and flash crowd instances.
Controlling Tensegrity Robots Through Evolution
NASA Technical Reports Server (NTRS)
Iscen, Atil; Agogino, Adrian; SunSpiral, Vytas; Tumer, Kagan
2013-01-01
Tensegrity structures (built from interconnected rods and cables) have the potential to offer a revolutionary new robotic design that is light-weight, energy-efficient, robust to failures, capable of unique modes of locomotion, impact tolerant, and compliant (reducing damage between the robot and its environment). Unfortunately robots built from tensegrity structures are difficult to control with traditional methods due to their oscillatory nature, nonlinear coupling between components and overall complexity. Fortunately this formidable control challenge can be overcome through the use of evolutionary algorithms. In this paper we show that evolutionary algorithms can be used to efficiently control a ball-shaped tensegrity robot. Experimental results performed with a variety of evolutionary algorithms in a detailed soft-body physics simulator show that a centralized evolutionary algorithm performs 400 percent better than a hand-coded solution, while the multi-agent evolution performs 800 percent better. In addition, evolution is able to discover diverse control solutions (both crawling and rolling) that are robust against structural failures and can be adapted to a wide range of energy and actuation constraints. These successful controls will form the basis for building high-performance tensegrity robots in the near future.
Fung, Erik; Hui, Elsie; Yang, Xiaobo; Lui, Leong T.; Cheng, King F.; Li, Qi; Fan, Yiting; Sahota, Daljit S.; Ma, Bosco H. M.; Lee, Jenny S. W.; Lee, Alex P. W.; Woo, Jean
2018-01-01
Heart failure and frailty are clinical syndromes that present with overlapping phenotypic characteristics. Importantly, their co-presence is associated with increased mortality and morbidity. While mechanical and electrical device therapies for heart failure are vital for select patients with advanced stage disease, the majority of patients and especially those with undiagnosed heart failure would benefit from early disease detection and prompt initiation of guideline-directed medical therapies. In this article, we review the problematic interactions between heart failure and frailty, introduce a focused cardiac screening program for community-living elderly initiated by a mobile communication device app leading to the Undiagnosed heart Failure in frail Older individuals (UFO) study, and discuss how the knowledge of pre-frailty and frailty status could be exploited for the detection of previously undiagnosed heart failure or advanced cardiac disease. The widespread use of mobile devices coupled with increasing availability of novel, effective medical and minimally invasive therapies have incentivized new approaches to heart failure case finding and disease management. PMID:29740330
Towards an Automated Acoustic Detection System for Free Ranging Elephants.
Zeppelzauer, Matthias; Hensman, Sean; Stoeger, Angela S
The human-elephant conflict is one of the most serious conservation problems in Asia and Africa today. The involuntary confrontation of humans and elephants claims the lives of many animals and humans every year. A promising approach to alleviate this conflict is the development of an acoustic early warning system. Such a system requires the robust automated detection of elephant vocalizations under unconstrained field conditions. Today, no system exists that fulfills these requirements. In this paper, we present a method for the automated detection of elephant vocalizations that is robust to the diverse noise sources present in the field. We evaluate the method on a dataset recorded under natural field conditions to simulate a real-world scenario. The proposed method outperformed existing approaches and robustly and accurately detected elephants. It thus can form the basis for a future automated early warning system for elephants. Furthermore, the method may be a useful tool for scientists in bioacoustics for the study of wildlife recordings.
Self-synchronization for spread spectrum audio watermarks after time scale modification
NASA Astrophysics Data System (ADS)
Nadeau, Andrew; Sharma, Gaurav
2014-02-01
De-synchronizing operations such as insertion, deletion, and warping pose significant challenges for watermarking. Because these operations are not typical for classical communications, watermarking techniques such as spread spectrum can perform poorly. Conversely, specialized synchronization solutions can be challenging to analyze/optimize. This paper addresses desynchronization for blind spread spectrum watermarks, detected without reference to any unmodified signal, using the robustness properties of short blocks. Synchronization relies on dynamic time warping to search over block alignments to find a sequence with maximum correlation to the watermark. This differs from synchronization schemes that must first locate invariant features of the original signal, or estimate and reverse desynchronization before detection. Without these extra synchronization steps, analysis for the proposed scheme builds on classical SS concepts and characterizes the relationship between the size of the search space (number of detection alignment tests) and intrinsic robustness (the continuous search-space region covered by each individual detection test). The critical metrics that determine the search space, robustness, and performance are the time-frequency resolution of the watermarking transform and the block-length resolution of the alignment. Simultaneous robustness to (a) MP3 compression, (b) insertion/deletion, and (c) time-scale modification is also demonstrated for a practical audio watermarking scheme developed in the proposed framework.
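The core of the synchronization search can be sketched with a rigid block-offset search standing in for dynamic time warping (which additionally admits local stretching between blocks); the offsets, chip length, and watermark strength below are illustrative assumptions.

```python
import numpy as np

def best_offset(signal, chips, nominal, max_shift):
    """Return the block shift (around nominal) maximizing watermark correlation."""
    n = len(chips)
    scores = {s: float(np.dot(signal[nominal + s : nominal + s + n], chips))
              for s in range(-max_shift, max_shift + 1)}
    return max(scores, key=scores.get)

rng = np.random.default_rng(5)
chips = rng.choice([-1.0, 1.0], 512)             # spreading sequence (one block)
audio = rng.normal(0.0, 1.0, 2048)               # host signal stand-in
audio[210:722] += 0.5 * chips                    # watermark embedded at offset 210
print(best_offset(audio, chips, nominal=200, max_shift=60))   # -> 10
```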
A hybrid robust fault tolerant control based on adaptive joint unscented Kalman filter.
Shabbouei Hagh, Yashar; Mohammadi Asl, Reza; Cocquempot, Vincent
2017-01-01
In this paper, a new hybrid robust fault tolerant control scheme is proposed. A robust H∞ control law is used in the non-faulty situation, while a Non-Singular Terminal Sliding Mode (NTSM) controller is activated as soon as an actuator fault is detected. Since a linear robust controller is designed, the system is first linearized through the feedback linearization method. To switch from one controller to the other, a fuzzy-based switching system is used. An Adaptive Joint Unscented Kalman Filter (AJUKF) is used for fault detection and diagnosis. The proposed method is based on the simultaneous estimation of the system states and parameters. In order to show the efficiency of the proposed scheme, a simulated 3-DOF robotic manipulator is used. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
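A toy sketch of the controller hand-off, with a linear membership ramp standing in for the paper's fuzzy switching system; the thresholds and the normalized fault score are illustrative assumptions.

```python
def select_control(u_hinf, u_ntsm, fault_score, low=0.2, high=0.8):
    """Blend the nominal H-infinity command with the NTSM command based on a
    normalized fault score (e.g., derived from AJUKF residuals)."""
    if fault_score <= low:
        w = 0.0                                  # nominal: pure H-infinity
    elif fault_score >= high:
        w = 1.0                                  # confirmed fault: pure NTSM
    else:
        w = (fault_score - low) / (high - low)   # smooth hand-off region
    return (1.0 - w) * u_hinf + w * u_ntsm
```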
Change Deafness and the Organizational Properties of Sounds
ERIC Educational Resources Information Center
Gregg, Melissa K.; Samuel, Arthur G.
2008-01-01
Change blindness, or the failure to detect (often large) changes to visual scenes, has been demonstrated in a variety of different situations. Failures to detect auditory changes are far less studied, and thus little is known about the nature of change deafness. Five experiments were conducted to explore the processes involved in change deafness…
46 CFR 161.002-8 - Automatic fire detecting systems, general requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... detecting system shall consist of a power supply; a control unit on which are located visible and audible... control unit. Power failure alarm devices may be separately housed from the control unit and may be combined with other power failure alarm systems when specifically approved. (b) [Reserved] [21 FR 9032, Nov...
46 CFR 161.002-8 - Automatic fire detecting systems, general requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... detecting system shall consist of a power supply; a control unit on which are located visible and audible... control unit. Power failure alarm devices may be separately housed from the control unit and may be combined with other power failure alarm systems when specifically approved. (b) [Reserved] [21 FR 9032, Nov...
A dual-processor multi-frequency implementation of the FINDS algorithm
NASA Technical Reports Server (NTRS)
Godiwala, Pankaj M.; Caglayan, Alper K.
1987-01-01
This report presents a parallel processing implementation of the FINDS (Fault Inferring Nonlinear Detection System) algorithm on a dual-processor configured target flight computer. First, a filter initialization scheme is presented which allows the no-fail filter (NFF) states to be initialized using the first iteration of the flight data. A modified failure isolation strategy, compatible with the new failure detection strategy reported earlier, is discussed, and the performance of the new FDI algorithm is analyzed using flight-recorded data from the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment. The results show that low-level MLS, IMU, and IAS sensor failures are detected and isolated instantaneously, while accelerometer and rate gyro failures take comparatively longer to detect and isolate. The parallel implementation is accomplished by partitioning the FINDS algorithm into two parts: one based on the translational dynamics and the other based on the rotational kinematics. Finally, a multi-rate implementation of the algorithm is presented, yielding significantly lower execution times with acceptable estimation and FDI performance.
Anomaly Monitoring Method for Key Components of Satellite
Fan, Linjun; Xiao, Weidong; Tang, Jun
2014-01-01
This paper presented a fault diagnosis method for key components of satellites, called the Anomaly Monitoring Method (AMM), which is made up of state estimation based on Multivariate State Estimation Techniques (MSET) and anomaly detection based on the Sequential Probability Ratio Test (SPRT). On the basis of failure analysis of lithium-ion batteries (LIBs), we divided the failures of LIBs into internal failure, external failure, and thermal runaway, and selected the electrolyte resistance (Re) and the charge transfer resistance (Rct) as the key parameters for state estimation. Then, from the actual in-orbit telemetry data of the key parameters of LIBs, we obtained the actual residual value (RX) and the healthy residual value (RL) of the LIBs through the state estimation of MSET, and, using these residual values, we detected anomalous states through the anomaly detection of SPRT. Lastly, we conducted an example of AMM for LIBs and, according to the results, validated the feasibility and effectiveness of AMM by comparing it with the results of the threshold detection method (TDM). PMID:24587703
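For orientation, the SPRT decision step that AMM applies to the MSET residuals can be sketched in a few lines; the Gaussian residual model and all thresholds below are illustrative assumptions, not values from the paper.

```python
import math

def sprt(residuals, m0=0.0, m1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald SPRT over a residual stream: decide between healthy residuals
    (Gaussian, mean m0) and anomalous residuals (mean m1)."""
    A = math.log((1 - beta) / alpha)   # upper threshold -> declare anomaly
    B = math.log(beta / (1 - alpha))   # lower threshold -> declare healthy
    llr = 0.0
    for k, r in enumerate(residuals):
        # incremental Gaussian log-likelihood ratio for this sample
        llr += (m1 - m0) * (r - 0.5 * (m0 + m1)) / sigma**2
        if llr >= A:
            return "anomaly", k
        if llr <= B:
            return "healthy", k
    return "undecided", len(residuals)
```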
Robust and Accurate Anomaly Detection in ECG Artifacts Using Time Series Motif Discovery
Sivaraks, Haemwaan
2015-01-01
Electrocardiogram (ECG) anomaly detection is an important technique for detecting dissimilar heartbeats, helping to identify abnormal ECGs before the diagnosis process. Currently available ECG anomaly detection methods, ranging from academic research to commercial ECG machines, still suffer from a high false alarm rate because they cannot differentiate ECG artifacts from real ECG signals, especially artifacts that resemble ECG signals in shape and/or frequency. This problem imposes a heavy vigilance burden on physicians and a misinterpretation risk for nonspecialists. Therefore, this work proposes a novel anomaly detection technique that is highly robust and accurate in the presence of ECG artifacts and can effectively reduce the false alarm rate. Expert knowledge from cardiologists and a motif discovery technique are utilized in our design. In addition, every step of the algorithm conforms to the interpretation of cardiologists. Our method can be applied to both single-lead and multilead ECGs. Our experimental results on real ECG datasets were interpreted and evaluated by cardiologists. Our proposed algorithm can mostly achieve 100% accuracy of detection (AoD), sensitivity, specificity, and positive predictive value with a 0% false alarm rate. The results demonstrate that our proposed method is highly accurate and robust to artifacts, compared with competitive anomaly detection methods. PMID:25688284
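For intuition, the heart of a discord/motif-style detector can be sketched as a brute-force nearest-neighbor search over heartbeat-length windows; this generic illustration omits the paper's artifact handling and cardiologist-derived rules.

```python
import numpy as np

def discord_scores(ecg, win):
    """Score each window by the distance to its nearest non-overlapping
    neighbor; the window with the largest score is the top discord
    (candidate anomalous beat)."""
    n = len(ecg) - win + 1
    W = np.array([ecg[i:i + win] for i in range(n)])
    # z-normalize so shape, not amplitude, drives the distance
    W = (W - W.mean(axis=1, keepdims=True)) / (W.std(axis=1, keepdims=True) + 1e-12)
    scores = np.full(n, np.inf)
    for i in range(n):
        for j in range(n):
            if abs(i - j) >= win:  # skip trivial (overlapping) matches
                scores[i] = min(scores[i], np.linalg.norm(W[i] - W[j]))
    return scores  # np.argmax(scores) indexes the most anomalous window
```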
Health information systems: failure, success and improvisation.
Heeks, Richard
2006-02-01
The generalised assumption of health information systems (HIS) success is questioned by a few commentators in the medical informatics field, who point to widespread HIS failure. The purpose of this paper was therefore to develop a better conceptual foundation for, and practical guidance on, health information systems failure (and success). The methods comprised literature and case analysis plus pilot testing of the developed model. Defining HIS failure and success is complex, and the current evidence base on HIS success and failure rates was found to be weak. Nonetheless, the best current estimate is that HIS failure is an important problem. The paper therefore derives and explains the "design-reality gap" conceptual model. This is shown to be robust in explaining multiple cases of HIS success and failure, yet provides a contingency that encompasses the differences existing across HIS contexts. The design-reality gap model is piloted to demonstrate its value as a tool for risk assessment and mitigation on HIS projects. It also throws into question traditional, structured development methodologies, highlighting the importance of emergent change and improvisation in HIS. The design-reality gap model can be used to address the problem of HIS failure, both as a post hoc evaluative tool and as a pre hoc risk assessment and mitigation tool. It also validates a set of methods, techniques, roles and competencies needed to support the dynamic improvisations that are found to underpin cases of HIS success.
Ferrographic and spectrometer oil analysis from a failed gas turbine engine
NASA Technical Reports Server (NTRS)
Jones, W. R., Jr.
1982-01-01
An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. It was concluded that a severe surge may have caused interference between rotating and stationary compressor components that either directly or indirectly ignited the titanium parts. Several engine oil samples (taken before and after the failure) were analyzed with a Ferrograph and with plasma, atomic absorption, and emission spectrometers to see whether this information would aid in the engine failure diagnosis. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism nor a high level of wear debris was detected in the engine oil sample taken just prior to the test in which the failure occurred. However, low concentrations (0.2 to 0.5 ppm) of titanium were evident in this sample and in samples taken earlier. After the failure, higher titanium concentrations (2 ppm) were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure. The oil analyses eliminated a lubrication-system bearing or shaft-seal failure as the cause of the engine failure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lall, Pradeep; Wei, Junchao; Sakalaukus, Peter
A new method has been developed for assessing the onset of degradation in solid state luminaires, classifying failure mechanisms by using metrics beyond the lumen degradation currently used to identify failure. Luminous flux output and correlated color temperature data on Philips LED lamps were gathered under 85°C/85%RH until lamp failure. Failure modes of the test population of lamps were studied to understand the failure mechanisms in the 85°C/85%RH accelerated test. Results indicate that the dominant failure mechanism is discoloration of the LED encapsulant inside the lamps, the likely cause of the luminous flux degradation and the color shift. The acquired data were used in conjunction with Bayesian probabilistic models to identify luminaires with onset of degradation well before failure, through identification of decision boundaries in the feature space between lamps with accrued damage and lamps beyond the failure threshold. In addition, luminaires with different failure modes were classified separately from healthy pristine luminaires. The α-λ plots were used to evaluate the robustness of the proposed methodology. Results show that the predicted degradation for the lamps tracks the true degradation observed during the 85°C/85%RH accelerated life test fairly closely, within the ±20% confidence bounds. Correlation of model predictions with experimental results indicates that the presented methodology allows early identification of the onset of failure well before complete failure distributions develop, and can be used for assessing the damage state of SSLs in fairly large deployments. It is expected that the new prediction technique will allow the development of failure distributions without testing to L70 life for the manifestation of failure.
NASA Astrophysics Data System (ADS)
Kim, Sungho; Choi, Byungin; Kim, Jieun; Kwon, Soon; Kim, Kyung-Tae
2012-05-01
This paper presents a separate spatio-temporal filter based small infrared target detection method to address the sea-based infrared search and track (IRST) problem in dense sun-glint environments. It is critical for national defense to detect small infrared targets such as sea-skimming missiles or small asymmetric ships. On the sea surface, sun-glint clutter degrades detection performance. Furthermore, if true targets must be detected using only three images from a low-frame-rate camera, the problem is more difficult. We propose a novel three-plot correlation filter and a statistics-based clutter reduction method to achieve a robust small target detection rate in dense sun-glint environments. We validate the robust detection performance of the proposed method on real infrared test sequences including synthetic targets.
Miftari, Rame; Nura, Adem; Topçiu-Shufta, Valdete; Miftari, Valon; Murseli, Arbenita; Haxhibeqiri, Valdete
2017-01-01
Aim: The aim of this study was to determine the validity of 99mTc-DTPA estimation of GFR for early detection of chronic kidney failure. Material and methods: There were 110 patients (54 males and 56 females) with kidney disease referred for evaluation of renal function at the UCC of Kosovo. The patients were divided into two groups: the first group included 30 patients with confirmed renal failure, whereas the second group included 80 patients with other renal diseases. Only patients with available results for creatinine, urea, and glucose in blood serum were included in the study. For estimation of GFR we used the Gates GFR DTPA method. Statistical data processing was conducted using methods such as the arithmetic average, Student's t-test, percentage or rate, and the sensitivity, specificity, and accuracy of the test. Results: The average age of all patients was 36 years; the average age of females was 37 and of males 35. Patients with renal failure were significantly older than patients with other renal diseases (p<0.005). Renal failure was found in 30 patients (27.27%). The concentrations of urea and creatinine in the blood serum of patients with renal failure were significantly higher than in patients with other renal diseases (p<0.00001). GFR in patients with renal failure was significantly lower than in patients with other renal diseases, 51.75 ml/min (p<0.00001). The sensitivity of uremia and creatininemia for detection of renal failure was 83.33%, whereas the sensitivity of 99mTc-DTPA GFR was 100%. The specificity of uremia and creatininemia was 63%, whereas the specificity of 99mTc-DTPA GFR was 47.5%. The diagnostic accuracy of blood urea and creatinine in detecting renal failure was 69%, whereas the diagnostic accuracy of 99mTc-DTPA GFR was 61.8%. Conclusion: Gates 99mTc-DTPA scintigraphy in combination with biochemical tests is a very sensitive method for early detection of chronic renal failure. PMID:28883673
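The reported figures follow the standard confusion-matrix definitions; a quick reference, where the example counts in the comment come from the abstract itself.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Illustration: the 99mTc-DTPA GFR test detected all 30 renal-failure
# patients (tp=30, fn=0), so its sensitivity is 30/30 = 100%.
```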
Failure detection and recovery in the assembly/contingency subsystem
NASA Technical Reports Server (NTRS)
Gantenbein, Rex E.
1993-01-01
The Assembly/Contingency Subsystem (ACS) is the primary communications link on board the Space Station. Any failure in a component of this system, or in the external devices through which it communicates with ground-based systems, will isolate the Station. The ACS software design includes a failure management capability (ACFM) that provides protocols for failure detection, isolation, and recovery (FDIR). The ACFM design requirements, as outlined in the current ACS software requirements specification document, are reviewed. The activities carried out in this review include: (1) an informal, but thorough, end-to-end failure mode and effects analysis of the proposed software architecture for the ACFM; and (2) a prototype of the ACFM software, implemented as a C program under the UNIX operating system. The purpose of this review is to evaluate the FDIR protocols specified in the ACS design, and the specifications themselves, in light of their use in implementing the ACFM. The basis of failure detection in the ACFM is the loss of signal between the ground and the Station, which (under the appropriate circumstances) will initiate recovery to restore communications. This recovery involves the reconfiguration of the ACS to either a backup set of components or a degraded communications mode. The initiation of recovery depends largely on the criticality of the failure mode, which is defined by tables in the ACFM and can be modified to provide a measure of flexibility in recovery procedures.
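A minimal sketch of the loss-of-signal trigger described above, with hypothetical names and a two-level criticality rule standing in for the ACFM's configurable tables.

```python
import time

def loss_of_signal_watchdog(last_rx_time, timeout_s, criticality):
    """Declare loss of signal when no ground frame has arrived within
    timeout_s, then pick a recovery action by failure-mode criticality."""
    if time.monotonic() - last_rx_time < timeout_s:
        return "nominal"
    # in the ACFM the mapping is table-driven and modifiable; this is a stub
    return "reconfigure_to_backup" if criticality == "high" else "degraded_mode"
```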
Impact of self-healing capability on network robustness
NASA Astrophysics Data System (ADS)
Shang, Yilun
2015-04-01
A wide spectrum of real-life systems ranging from neurons to botnets display spontaneous recovery ability. Using the generating function formalism applied to static uncorrelated random networks with arbitrary degree distributions, the microscopic mechanism underlying the depreciation-recovery process is characterized and the effect of varying self-healing capability on network robustness is revealed. It is found that the self-healing capability of nodes has a profound impact on the phase transition in the emergence of percolating clusters, and that salient difference exists in upholding network integrity under random failures and intentional attacks. The results provide a theoretical framework for quantitatively understanding the self-healing phenomenon in varied complex systems.
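For background, the generating-function machinery invoked here is standard percolation theory; a compact statement in our notation (the paper's contribution is to carry such analysis through a depreciation-recovery process, which is not reproduced here):

```latex
% Generating functions of the degree distribution p_k and the excess degree:
G_0(x) = \sum_{k} p_k x^k , \qquad G_1(x) = \frac{G_0'(x)}{G_0'(1)} .
% After random failure of a fraction q of nodes, a giant (percolating)
% component survives if and only if
(1-q)\, G_1'(1) \;=\; (1-q)\,\frac{\langle k^2 \rangle - \langle k \rangle}{\langle k \rangle} \;>\; 1 .
```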
NASA Astrophysics Data System (ADS)
Cauquil, Jean-Marc; Seguineau, Cédric; Vasse, Christophe; Raynal, Gaetan; Benschop, Tonny
2018-05-01
Cooler reliability is a major performance requirement from customers, especially for round-the-clock applications, which are a growing market. Thales has built a reliability policy based on accelerated ageing and tests to establish robust knowledge of acceleration factors. The current trend seems to prove that the RM2 mean time to failure is now higher than 30,000 hr. Even with accelerated ageing, reliability growth becomes hardly manageable for such large figures. The paper focuses on these figures and comments on the robustness of such a method when projections beyond 30,000 hr of MTTF are needed.
Generic, scalable and decentralized fault detection for robot swarms.
Tarapore, Danesh; Christensen, Anders Lyhne; Timmis, Jon
2017-01-01
Robot swarms are large-scale multirobot systems with decentralized control, which means that each robot acts based only on local perception and on local coordination with neighboring robots. The decentralized approach to control confers a number of potential benefits. In particular, inherent scalability and robustness are often highlighted as key distinguishing features of robot swarms compared with systems that rely on traditional approaches to multirobot coordination. It has, however, been shown that swarm robotics systems are not always fault tolerant. To realize the robustness potential of robot swarms, it is thus essential to give systems the capacity to actively detect and accommodate faults. In this paper, we present a generic fault-detection system for robot swarms. We show how robots with limited and imperfect sensing capabilities are able to observe and classify the behavior of one another. In order to achieve this, the underlying classifier is an immune system-inspired algorithm that learns to distinguish between normal behavior and abnormal behavior online. Through a series of experiments, we systematically assess the performance of our approach in a detailed simulation environment. In particular, we analyze our system's capacity to correctly detect robots with faults, false positive rates, performance in a foraging task in which each robot exhibits a composite behavior, and performance under perturbations of the task environment. Results show that our generic fault-detection system is robust, that it is able to detect faults in a timely manner, and that it achieves a low false positive rate. The developed fault-detection system has the potential to enable long-term autonomy for robust multirobot systems, thus increasing the usefulness of robots for a diverse repertoire of upcoming applications in the area of distributed intelligent automation. PMID:28806756
NASA Technical Reports Server (NTRS)
Davis, Robert N.; Polites, Michael E.; Trevino, Luis C.
2004-01-01
This paper details a novel scheme for autonomous component health management (ACHM) with failed-actuator detection and failed-sensor detection, identification, and avoidance. This new scheme has features that far exceed the performance of systems with triple-redundant sensing and voting, yet requires fewer sensors and could be applied to any system with redundant sensing. Relevant background to the ACHM scheme is provided, and simulation results are presented for the application of that scheme to a single-axis spacecraft attitude control system with a 3rd-order plant and dual-redundant measurement of system states. ACHM fulfills key functions needed by an integrated vehicle health monitoring (IVHM) system. It is autonomous and adaptive; works in real time; provides optimal state estimation; identifies failed components; avoids failed components; reconfigures for multiple failures; reconfigures for intermittent failures; works for hard-over, soft, and zero-output failures; and works for both open- and closed-loop systems. The ACHM scheme combines a prefilter, which generates preliminary state estimates, detects and identifies failed sensors and actuators, and avoids the use of failed sensors in state estimation, with a fixed-gain Kalman filter that generates optimal state estimates and provides model-based state estimates forming an integral part of the failure detection logic. The results show that ACHM successfully isolates multiple persistent and intermittent hard-over, soft, and zero-output failures. It is now ready to be tested on a computer model of an actual system.
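A minimal innovation-gating sketch in the spirit of the model-based detection logic described above; the chi-square gate and Gaussian assumptions are ours, not the paper's.

```python
import numpy as np

def residual_fault_check(z, H, x_pred, S, gate=9.0):
    """Flag a measurement whose innovation is statistically inconsistent
    with the Kalman-predicted state. z: measurement vector, H: measurement
    matrix, x_pred: predicted state, S: innovation covariance.
    gate=9.0 is roughly a 3-sigma test for a scalar innovation."""
    innovation = z - H @ x_pred                             # measurement residual
    d2 = float(innovation @ np.linalg.inv(S) @ innovation)  # Mahalanobis^2
    return d2 > gate, d2                                    # True -> declare failure
```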
An intelligent control system for failure detection and controller reconfiguration
NASA Technical Reports Server (NTRS)
Biswas, Saroj K.
1994-01-01
We present an architecture of an intelligent restructurable control system to automatically detect failure of system components, assess its impact on system performance and safety, and reconfigure the controller for performance recovery. Fault detection is based on neural network associative memories and pattern classifiers, and is implemented using a multilayer feedforward network. Details of the fault detection network along with simulation results on health monitoring of a dc motor have been presented. Conceptual developments for fault assessment using an expert system and controller reconfiguration using a neural network are outlined.
Optimized autonomous operations of a 20 K space hydrogen sorption cryocooler
NASA Astrophysics Data System (ADS)
Borders, J.; Morgante, G.; Prina, M.; Pearson, D.; Bhandari, P.
2004-06-01
A fully redundant hydrogen sorption cryocooler is being developed for the European Space Agency Planck mission, dedicated to the measurement of the temperature anisotropies of the cosmic microwave background radiation with unprecedented sensitivity and resolution [Advances in Cryogenic Engineering 45A (2000) 499]. In order to achieve this ambitious scientific task, the cooler is required to provide a stable temperature reference (~20 K) and appropriate cooling (~1 W) to the two instruments on board, with a flight operational lifetime of 18 months. During mission operations, communication with the spacecraft will be possible only in a restricted time window, not longer than 2 h/day. This implies the need for an operations control structure with the robustness required to safely perform autonomous procedures. The cooler performance depends on many operating parameters (such as the temperatures of the pre-cooling stages and the warm radiator), so the operation control system needs the capability to adapt to variations of these boundary conditions while maintaining safe operating procedures. An engineering breadboard (EBB) cooler was assembled and tested to evaluate the behavior of the system under conditions simulating flight operations, and the test data were used to refine and improve the operation control software. In order to minimize scientific data loss, the cooler is required to detect all possible failure modes and to react to them autonomously by rapidly taking the appropriate action. Various procedures and schemes, both general and specific in nature, were developed, tested and implemented to achieve these goals. In general, robustness to malfunctions was increased by implementing an automatic classification of anomalies into levels reflecting the seriousness of the error, so that the response is proportional to the failure level. Specifically, the start-up sequence duration was significantly reduced, allowing a much faster activation of the system, particularly useful in case of restarts after inadvertent shutdowns arising from malfunctions in the spacecraft. The capacity of the system to detect J-T plugs was increased to the point that the cooler is able to autonomously distinguish actual contaminant clogging from gas-flow reductions due to off-nominal operating conditions. Once a plug is confirmed, the software autonomously energizes, and subsequently turns off, a J-T defrost heater until the clog is removed, bringing the system back to normal operating conditions. In this paper, all the cooler Operational Modes are presented, together with a description of the logic structure of the procedures and the advantages they produce for operations.
Holmes, Scott; Pena Diaz, Ana M; Athwal, George S; Faber, Kenneth J; O'Gorman, David B
2017-02-01
Propionibacterium (P) acnes infection of the shoulder after arthroplasty is a common and serious complication. Current detection methods for P acnes involve anaerobic cultures that require prolonged incubation periods (typically 7-14 days). We have developed a polymerase chain reaction (PCR)-restriction fragment length polymorphism (RFLP) approach that sensitively and specifically identifies P acnes in tissue specimens within a 24-hour period. Primers were designed to amplify a unique region of the 16S rRNA gene in P acnes that contained a unique HaeIII restriction enzyme site. PCR and RFLP analyses were optimized to detect P acnes DNA in in vitro cultures and in arthroscopic surgical biopsy specimens from patients with P acnes infections. A 564 base-pair PCR amplicon was derived from all of the known P acnes strains. HaeIII digests of the amplicon yielded a restriction fragment pattern that was unique to P acnes. P acnes-specific amplicons were detected in as few as 10 bacterial cells and in clinical biopsy specimens of infected shoulder tissues. This PCR-RFLP assay combines the sensitivity of PCR with the specificity of RFLP mapping to identify P acnes in surgical isolates. The assay is robust and rapid, and a P acnes-positive tissue specimen can be confirmed within 24 hours of sampling, facilitating treatment decision making, targeted antibiotic therapy, and monitoring to minimize implant failure and revision surgery. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Sakai, Kenshi; Upadhyaya, Shrinivasa K; Andrade-Sanchez, Pedro; Sviridova, Nina V
2017-03-01
Real-world processes are often combinations of deterministic and stochastic processes. Soil failure observed during farm tillage is one example of this phenomenon. In this paper, we investigated the nonlinear features of soil failure patterns in a farm tillage process. We demonstrate emerging determinism in soil failure patterns arising from stochastic processes under specific soil conditions. We normalized the deterministic nonlinear prediction by taking autocorrelation into account and propose this as a robust way of extracting a nonlinear dynamical system from noise-contaminated motion. Soil is a typical granular material, so the results obtained here are expected to be applicable to granular materials in general. From the global scale to the nanoscale, granular materials feature in seismology, geotechnology, soil mechanics, and particle technology, and the results and discussions presented here are applicable across these research areas. The proposed method and our findings are useful for applying nonlinear dynamics to investigate complex motions generated by granular materials.
NASA Technical Reports Server (NTRS)
Burken, John J.; Hanson, Curtis E.; Lee, James A.; Kaneshige, John T.
2009-01-01
This report describes the improvements and enhancements to a neural network based approach for directly adapting to aerodynamic changes resulting from damage or failures. This research is a follow-on effort to flight tests performed on the NASA F-15 aircraft as part of the Intelligent Flight Control System research effort. Previous flight test results demonstrated the potential for performance improvement under destabilizing damage conditions. Little or no improvement was provided under simulated control surface failures, however, and the adaptive system was prone to pilot-induced oscillations. An improved controller was designed to reduce the occurrence of pilot-induced oscillations and increase robustness to failures in general. This report presents an analysis of the neural networks used in the previous flight test, the improved adaptive controller, and the baseline case with no adaptation. Flight test results demonstrate significant improvement in performance by using the new adaptive controller compared with the previous adaptive system and the baseline system for control surface failures.
Susceptibility to Cracking of Different Lots of CDR35 Capacitors
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander
2017-01-01
On-orbit flight anomalies that occurred after several months of operation were attributed to excessive leakage currents in CDR35-style 0.47 microF 50 V capacitors operating at 10 V. In this work, a lot of capacitors similar to the lot that caused the anomaly has been evaluated in parallel with another lot of similar parts to assess their susceptibility to cracking under manual soldering conditions and to gain insight into a possible mechanism of failure. Leakage currents in the capacitors were monitored at different voltages and environmental conditions before and after terminal solder dip testing, which was used to simulate thermal shock during manual soldering. Results of cross-sectioning, acoustic microscopy, and measurements of electrical and mechanical characteristics of the parts have been analyzed, and possible mechanisms of failure considered. It is shown that the susceptibility to cracking and failures caused by manual soldering is lot-related. Recommendations are made for testing that would help select lots that are more robust against manual soldering stresses and mitigate the risk of failures.
Kim, Sanghyeok; Won, Sejeong; Sim, Gi-Dong; Park, Inkyu; Lee, Soon-Bok
2013-03-01
Metal nanoparticle solutions are widely used for the fabrication of printed electronic devices. The mechanical properties of the solution-processed metal nanoparticle thin films are very important for the robust and reliable operation of printed electronic devices. In this paper, we report the tensile characteristics of silver nanoparticle (Ag NP) thin films on flexible polymer substrates by observing the microstructures and measuring the electrical resistance under tensile strain. The effects of the annealing temperatures and periods of Ag NP thin films on their failure strains are explained with a microstructural investigation. The maximum failure strain for Ag NP thin film was 6.6% after initial sintering at 150 °C for 30 min. Thermal annealing at higher temperatures for longer periods resulted in a reduction of the maximum failure strain, presumably due to higher porosity and larger pore size. We also found that solution-processed Ag NP thin films have lower failure strains than those of electron beam evaporated Ag thin films due to their highly porous film morphologies.
Failure Diagnosis for the Holdup Tank System via ISFA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Huijuan; Bragg-Sitton, Shannon; Smidts, Carol
This paper discusses the use of the integrated system failure analysis (ISFA) technique for fault diagnosis for the holdup tank system. ISFA is a simulation-based, qualitative and integrated approach used to study fault propagation in systems containing both hardware and software subsystems. The holdup tank system consists of a tank containing a fluid whose level is controlled by an inlet valve and an outlet valve. We introduce the component and functional models of the system, quantify the main parameters and simulate possible failure-propagation paths based on the fault propagation approach, ISFA. The results show that most component failures in the holdup tank system can be identified clearly and that ISFA is viable as a technique for fault diagnosis. Since ISFA is a qualitative technique that can be used in the very early stages of system design, this case study provides indications that it can be used early to study design aspects that relate to robustness and fault tolerance.
Characterization of emission microscopy and liquid crystal thermography in IC fault localization
NASA Astrophysics Data System (ADS)
Lau, C. K.; Sim, K. S.
2013-05-01
This paper characterizes two fault localization techniques, Emission Microscopy (EMMI) and Liquid Crystal Thermography (LCT), using integrated circuit (IC) leakage failures. The majority of today's semiconductor failures do not reveal a clear visual defect on the die surface and therefore require fault localization tools to identify the fault location. Among the various fault localization tools, liquid crystal thermography and frontside emission microscopy are commonly used in most semiconductor failure analysis laboratories. Many analysts mistakenly assume that the two techniques are the same and that both detect hot spots in chips failing with shorts or leakage. As a result, analysts tend to use only LCT, since this technique involves a very simple test setup compared to EMMI. The omission of EMMI as an alternative technique in fault localization often leads to incomplete analysis when LCT fails to localize any hot spot on a failing chip. Therefore, this research was established to characterize and compare both techniques in terms of their sensitivity in detecting the fault location in common semiconductor failures. A new method was also proposed as an alternative technique, i.e., the backside LCT technique. The research observed that both techniques successfully detected the defect locations resulting from the leakage failures. LCT was observed to be more sensitive than EMMI in the frontside analysis approach. On the other hand, EMMI performed better in the backside analysis approach. LCT was more sensitive in localizing ESD defect locations, and EMMI was more sensitive in detecting non-ESD defect locations. Backside LCT was proven to work as effectively as frontside LCT and is ready to serve as an alternative to backside EMMI. The research confirmed that LCT detects heat generation and EMMI detects photon emission (recombination radiation). The analysis results also suggested that the two techniques complement each other in IC fault localization; it is necessary for a failure analyst to use both when one of them produces no result.
From Three-Photon Greenberger-Horne-Zeilinger States to Ballistic Universal Quantum Computation.
Gimeno-Segovia, Mercedes; Shadbolt, Pete; Browne, Dan E; Rudolph, Terry
2015-07-10
Single photons, manipulated using integrated linear optics, constitute a promising platform for universal quantum computation. A series of increasingly efficient proposals have shown linear-optical quantum computing to be formally scalable. However, existing schemes typically require extensive adaptive switching, which is experimentally challenging and noisy, thousands of photon sources per renormalized qubit, and/or large quantum memories for repeat-until-success strategies. Our work overcomes all these problems. We present a scheme to construct a cluster state universal for quantum computation, which uses no adaptive switching, no large memories, and which is at least an order of magnitude more resource efficient than previous passive schemes. Unlike previous proposals, it is constructed entirely from loss-detecting gates and offers robustness to photon loss. Even without the use of an active loss-tolerant encoding, our scheme naturally tolerates a total loss rate of ∼1.6% in the photons detected in the gates. This scheme uses only three-photon Greenberger-Horne-Zeilinger states as a resource, together with a passive linear-optical network. We fully describe and model the iterative process of cluster generation, including photon loss and gate failure. This demonstrates that building a linear-optical quantum computer need be less challenging than previously thought.
Nwakanma, Davis C.; Duffy, Craig W.; Amambua-Ngwa, Alfred; Oriero, Eniyou C.; Bojang, Kalifa A.; Pinder, Margaret; Drakeley, Chris J.; Sutherland, Colin J.; Milligan, Paul J.; MacInnis, Bronwyn; Kwiatkowski, Dominic P.; Clark, Taane G.; Greenwood, Brian M.; Conway, David J.
2014-01-01
Background. Analysis of genome-wide polymorphism in many organisms has potential to identify genes under recent selection. However, data on historical allele frequency changes are rarely available for direct confirmation. Methods. We genotyped single nucleotide polymorphisms (SNPs) in 4 Plasmodium falciparum drug resistance genes in 668 archived parasite-positive blood samples of a Gambian population between 1984 and 2008. This covered a period before antimalarial resistance was detected locally, through subsequent failure of multiple drugs until introduction of artemisinin combination therapy. We separately performed genome-wide sequence analysis of 52 clinical isolates from 2008 to prospect for loci under recent directional selection. Results. Resistance alleles increased from very low frequencies, peaking in 2000 for chloroquine resistance-associated crt and mdr1 genes and at the end of the survey period for dhfr and dhps genes respectively associated with pyrimethamine and sulfadoxine resistance. Temporal changes fit a model incorporating likely selection coefficients over the period. Three of the drug resistance loci were in the top 4 regions under strong selection implicated by the genome-wide analysis. Conclusions. Genome-wide polymorphism analysis of an endemic population sample robustly identifies loci with detailed documentation of recent selection, demonstrating power to prospectively detect emerging drug resistance genes. PMID:24265439
NASA Astrophysics Data System (ADS)
Meillier, Céline; Chatelain, Florent; Michel, Olivier; Bacon, Roland; Piqueras, Laure; Bacher, Raphael; Ayasso, Hacheme
2016-04-01
We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for the detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) with unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin2 field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes 3D data either for modeling or for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields, such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are considered random variables. The Bayesian model leads to a general and robust algorithm in which the parameters are estimated in a fully data-driven way. This detection algorithm was applied to the MUSE observation of the Hubble Deep Field-South. With 27 h total integration time, these observations provide a catalog of 189 sources of various categories with secure redshifts. The algorithm retrieved 91% of the galaxies with only 9% false detections. This method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure for some targets and found that the most important limitation of the method arises when faint sources lie in the vicinity of bright, spatially resolved galaxies that cannot be approximated by a Sérsic elliptical profile. The software and its documentation are available on the MUSE science web service (muse-vlt.eu/science).
Visual encoding and fixation target selection in free viewing: presaccadic brain potentials
Nikolaev, Andrey R.; Jurica, Peter; Nakatani, Chie; Plomp, Gijs; van Leeuwen, Cees
2013-01-01
In scrutinizing a scene, the eyes alternate between fixations and saccades. During a fixation, two component processes can be distinguished: visual encoding and selection of the next fixation target. We aimed to distinguish the neural correlates of these processes in the electrical brain activity prior to a saccade onset. Participants viewed color photographs of natural scenes, in preparation for a change detection task. Then, for each participant and each scene we computed an image heat map, with temperature representing the duration and density of fixations. The temperature difference between the start and end points of saccades was taken as a measure of the expected task-relevance of the information concentrated in specific regions of a scene. Visual encoding was evaluated according to whether subsequent change was correctly detected. Saccades with larger temperature difference were more likely to be followed by correct detection than ones with smaller temperature differences. The amplitude of presaccadic activity over anterior brain areas was larger for correct detection than for detection failure. This difference was observed for short “scrutinizing” but not for long “explorative” saccades, suggesting that presaccadic activity reflects top-down saccade guidance. Thus, successful encoding requires local scanning of scene regions which are expected to be task-relevant. Next, we evaluated fixation target selection. Saccades “moving up” in temperature were preceded by presaccadic activity of higher amplitude than those “moving down”. This finding suggests that presaccadic activity reflects attention deployed to the following fixation location. Our findings illustrate how presaccadic activity can elucidate concurrent brain processes related to the immediate goal of planning the next saccade and the larger-scale goal of constructing a robust representation of the visual scene. PMID:23818877
Robust path planning for flexible needle insertion using Markov decision processes.
Tan, Xiaoyu; Yu, Pengqian; Lim, Kah-Bin; Chui, Chee-Kong
2018-05-11
The flexible needle has the potential to accurately navigate to a treatment region in the least invasive manner. We propose a new planning method using Markov decision processes (MDPs) for flexible needle navigation that can perform robust path planning and steering under complex tissue-needle interactions. This method enhances the robustness of flexible needle steering from three perspectives. First, the method considers the problem caused by soft tissue deformation. It then resolves the common needle penetration failure caused by patterns of targets, while the last solution addresses the uncertainty in flexible needle motion due to complex and unpredictable tissue-needle interaction. Computer simulation and phantom experimental results show that the proposed method can perform robust planning and generate a secure control policy for flexible needle steering. Compared with a traditional method using MDPs, the proposed method achieves higher accuracy and probability of success in avoiding obstacles under complicated and uncertain tissue-needle interactions. Future work will involve experiments with biological tissue in vivo. The proposed robust path planning method can securely steer the flexible needle within soft phantom tissues and achieves high adaptability in computer simulation.
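For readers unfamiliar with MDP planning, a generic value-iteration kernel is sketched below; the paper's needle state space, reward shaping, and tissue-interaction model are not reproduced, and all names here are illustrative.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """P[a] is an (S, S) transition matrix for steering action a; R an (S,)
    reward vector. Uncertain tissue-needle interaction enters through the
    stochastic transitions P."""
    n_actions, n_states = len(P), len(R)
    V = np.zeros(n_states)
    while True:
        Q = np.array([R + gamma * P[a] @ V for a in range(n_actions)])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    policy = Q.argmax(axis=0)  # best steering action per discretized state
    return V, policy
```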
A robust real-time abnormal region detection framework from capsule endoscopy images
NASA Astrophysics Data System (ADS)
Cheng, Yanfen; Liu, Xu; Li, Huiping
2009-02-01
In this paper we present a novel method to detect abnormal regions in capsule endoscopy images. Wireless Capsule Endoscopy (WCE) is a recent technology in which a capsule with an embedded camera is swallowed by the patient to visualize the gastrointestinal tract. One challenge is that a single diagnostic procedure produces over 50,000 images, making the physicians' review process expensive. That review involves identifying images containing abnormal regions (tumor, bleeding, etc.) in this large image sequence. In this paper we construct a novel framework for robust, real-time abnormal region detection from large numbers of capsule endoscopy images. Detected potential abnormal regions can be labeled automatically for physicians to review further, thereby shortening the overall review process. Our framework has the following advantages: 1) Trainable: users can define and label any type of abnormal region they want to find; abnormal regions such as tumor or bleeding can be pre-defined and labeled using the graphical user interface tool we provide. 2) Efficient: given the large volume of image data, detection speed is very important, and our system detects efficiently at different scales due to the integral image features we use. 3) Robust: after feature selection, we use a cascade of classifiers to further enforce detection accuracy.
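The efficiency claim rests on integral images (summed-area tables), which make any rectangular feature sum cost four lookups regardless of scale; a minimal sketch:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[r, c] = img[:r, :c].sum()."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```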
NASA Technical Reports Server (NTRS)
Hruby, R. J.; Bjorkman, W. S.; Schmidt, S. F.; Carestia, R. A.
1979-01-01
Algorithms were developed that attempt to identify which sensor in a tetrad configuration has experienced a step failure. An algorithm is also described that provides a measure of the confidence with which the correct identification was made. Experimental results are presented from real-time tests conducted on a three-axis motion facility utilizing an ortho-skew tetrad strapdown inertial sensor package. The effects of prediction errors and of quantization on correct failure identification are discussed, as is an algorithm for detecting second failures through prediction.
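The tetrad setting admits a compact parity-space formulation; the sketch below is a generic illustration of step-failure detection (isolating which sensor failed needs more information, such as the prediction over time the abstract mentions), not the flown algorithm.

```python
import numpy as np

def tetrad_parity_residual(H, m):
    """Four sensors measuring a 3-axis quantity: m = H x (+ fault), with H a
    4x3 geometry matrix. The left null space of H gives a scalar residual
    that is zero for any true state x and jumps on a step failure."""
    U, _, _ = np.linalg.svd(H)
    v = U[:, -1]          # v @ H == 0: one-dimensional parity space
    return float(v @ m)   # compare |residual| against a noise threshold
```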
NASA Astrophysics Data System (ADS)
Sheikh, Muhammad; Elmarakbi, Ahmed; Elkady, Mustafa
2017-12-01
This paper focuses on state of charge (SOC) dependent mechanical failure analysis of an 18650 lithium-ion battery to detect signs of thermal runaway. Quasi-static loading conditions are used with four test protocols (rod, circular punch, three-point bend, and flat plate) to analyse the propagation of mechanical failures and failure-induced temperature changes. Finite element analysis (FEA) is used to model a single battery cell with a concentric layered formation that represents the complete cell. The numerical simulation model is built with a solid element formulation, in which the steel casing and all layers follow the same formulation and a fine mesh is used for all layers. Experimental work is also performed to analyse the deformation of the 18650 lithium-ion cell, and the numerical simulation model is validated against the experimental results. Deformation of the cell mimics the onset of thermal runaway, and various thermal runaway detection strategies are employed in this work, including force-displacement, voltage-temperature, stress-strain, SOC dependency, and separator failure. Results show that a cell can undergo severe conditions even with no fracture or rupture; these conditions may be slow to develop, but they can lead to catastrophic failures. The numerical simulation technique proves useful in predicting initial battery failures, and the results are in good correlation with the experimental results.
Robust Platinum Resistor Thermometer (PRT) Sensors and Reliable Bonding for Space Missions
NASA Technical Reports Server (NTRS)
Cucullu, Gordy C., III; Mikhaylov, Rebecca; Ramesham, Rajeshuni; Petkov, Mihail; Hills, David; Uribe, Jose; Okuno, James; De Los Santos, Greg
2013-01-01
Platinum resistance thermometers (PRTs) provide accurate temperature measurements over a wide temperature range and are used extensively on space missions due to their simplicity and linearity. A standard on spacecraft, PRTs are used to provide precision temperature control and vehicle health assessment. This paper reviews the extensive reliability testing of platinum resistor thermometer sensors (PRTs) and bonding methods used on the Mars Science Laboratory (MSL) mission and for the upcoming Soil Moisture Active Passive (SMAP) mission. During the Mars Exploration Rover (MER) mission, several key, JPL-packaged PRTs failed on those rovers prior to and within 1-Sol of landing due to thermally induced stresses. Similar failures can be traced back to other JPL missions dating back thirty years. As a result, MSL sought out a PRT more forgiving to the packaging configurations used at JPL, and extensively tested the Honeywell HRTS-5760-B-U-0-12 sensor to successfully demonstrate suitable robustness to thermal cycling. Specifically, this PRT was cycled 2,000 times, simulating three Martian winters and summers. The PRTs were bonded to six substrate materials (Aluminum 7050, treated Magnesium AZ231-B, Stainless Steel 304, Albemet, Titanium 6AL4V, and G-10), using four different aerospace adhesives--two epoxies and two silicones--that conformed to MSL's low out-gassing requirements. An additional epoxy was tested in a shorter environmental cycling test, when the need for a different temperature range adhesive was necessary for mobility and actuator hardware late in the fabrication process. All of this testing, along with electrostatic discharge (ESD) and destructive part analyses, demonstrate that this PRT is highly robust, and not subject to the failure of PRTs on previous missions. While there were two PRTs that failed during fabrication, to date there have been no in-flight PRT failures on MSL, including those on the Curiosity rover. Since MSL, the sensor has gone through a change in construction such that the manufacturer significantly restricts the minimum temperature. However, significant subsequent testing was performed with this new version of the part to show that it indeed is still robust to at least Mars minimum temperatures of -135 °C. The additional completed testing will be described. This work has resulted in a successful sensor package qualification and a reliable bonding method suitable for use over large temperature extremes.
Detection-enhanced steady state entanglement with ions.
Bentley, C D B; Carvalho, A R R; Kielpinski, D; Hope, J J
2014-07-25
Driven dissipative steady state entanglement schemes take advantage of coupling to the environment to robustly prepare highly entangled states. We present a scheme for two trapped ions to generate a maximally entangled steady state with fidelity above 0.99, appropriate for use in quantum protocols. Furthermore, we extend the scheme by introducing detection of our dissipation process, significantly enhancing the fidelity. Our scheme is robust to anomalous heating and requires no sympathetic cooling.
A method for detecting nonlinear determinism in normal and epileptic brain EEG signals.
Meghdadi, Amir H; Fazel-Rezai, Reza; Aghakhani, Yahya
2007-01-01
A robust method of detecting determinism in short time series is proposed and applied to both healthy and epileptic EEG signals. The method provides a robust measure of determinism by characterizing the trajectories of the signal components obtained through singular value decomposition. The robustness of the method is shown by calculating the proposed index of determinism at different levels of white and colored noise added to a simulated chaotic signal; the method is able to detect determinism at considerably high levels of additive noise. The method is then applied to both intracranial and scalp EEG recordings collected in different data sets for healthy and epileptic brain signals. The results show that all of the studied EEG data sets provide sufficient evidence of determinism. The determinism is more significant for intracranial EEG recordings, particularly during seizure activity.
Heart failure symptom relationships: a systematic review.
Herr, Janet K; Salyer, Jeanne; Lyon, Debra E; Goodloe, Lauren; Schubert, Christine; Clement, Dolores G
2014-01-01
Heart failure is a prevalent chronic health condition in the United States. Individuals who have heart failure experience from 2 to 9 symptoms. The examination of relationships among heart failure symptoms may benefit patients and clinicians who are charged with managing heart failure symptoms. The purpose of this systematic review was to summarize what is known about relationships among heart failure symptoms, a precursor to the identification of heart failure symptom clusters, as well as to examine studies specifically addressing symptom clusters described in this population. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines were followed in the conduct of this systematic review. PubMed, PsycINFO, the Cumulative Index of Nursing and Allied Health Literature, and the Cochrane Database were searched using the search term heart failure in combination with a pair of symptoms. Of a total of 1316 studies identified from database searches, 34 were included in this systematic review. More than one investigator found a moderate level of correlation between depression and fatigue, depression and anxiety, depression and sleep, depression and pain, anxiety and fatigue, and dyspnea and fatigue. The findings of this systematic review provide support for the presence of heart failure symptom clusters. Depression was related to several of the symptoms, providing an indication to clinicians that individuals with heart failure who experience depression may have other concurrent symptoms. Some symptom relationships, such as those between fatigue and anxiety, sleep, or pain, depended on the symptom characteristics studied. Symptom prevalence in the sample and restricted sampling may influence the robustness of the symptom relationships. These findings suggest that studies defining the phenotype of individual heart failure symptoms may be a beneficial step in the study of heart failure symptom clusters.
A Convex Approach to Fault Tolerant Control
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Cox, David E.; Bauer, Frank (Technical Monitor)
2002-01-01
The design of control laws for dynamic systems with the potential for actuator failures is considered in this work. The use of Linear Matrix Inequalities allows more freedom in controller design criteria than is typically available with robust control. This work proposes an extension of fault-scheduled control design techniques that can find a fixed controller with provable performance over a set of plants. Through convexity of the objective function, performance bounds on this set of plants imply performance bounds on a range of systems defined by a convex hull. This is used to incorporate performance bounds for a variety of soft and hard failures into the control design problem.
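The convexity argument in this abstract can be illustrated with a small semidefinite feasibility problem: if a common quadratic Lyapunov function certifies stability at each vertex plant, it certifies every plant in their convex hull. The sketch below uses cvxpy with two invented 2x2 vertex plants (nominal and degraded-damping dynamics); it illustrates the principle, not the paper's design procedure.

```python
import numpy as np
import cvxpy as cp

# Two hypothetical vertex plants: nominal dynamics and the same plant
# with degraded damping (a soft actuator failure). Invented numbers.
A1 = np.array([[0.0, 1.0], [-2.0, -1.0]])
A2 = np.array([[0.0, 1.0], [-2.0, -0.3]])

n = 2
P = cp.Variable((n, n), symmetric=True)
eps = 1e-3
constraints = [P >> eps * np.eye(n)]
# Enforce the Lyapunov LMI at each vertex; because the inequality is
# affine in A, it then holds for every plant in conv{A1, A2}.
for A in (A1, A2):
    constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)   # 'optimal' -> a common Lyapunov matrix exists
print(P.value)
```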
Parameters affecting the resilience of scale-free networks to random failures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Link, Hamilton E.; LaViolette, Randall A.; Lane, Terran
2005-09-01
It is commonly believed that scale-free networks are robust to massive numbers of random node deletions. For example, Cohen et al. in (1) study scale-free networks including some which approximate the measured degree distribution of the Internet. Their results suggest that if each node in this network failed independently with probability 0.99, most of the remaining nodes would still be connected in a giant component. In this paper, we show that a large and important subclass of scale-free networks are not robust to massive numbers of random node deletions. In particular, we study scale-free networks which have minimum node degree of 1 and a power-law degree distribution beginning with nodes of degree 1 (power-law networks). We show that, in a power-law network approximating the Internet's reported distribution, when the probability of deletion of each node is 0.5 only about 25% of the surviving nodes in the network remain connected in a giant component, and the giant component does not persist beyond a critical failure rate of 0.9. The new result is partially due to improved analytical accommodation of the large number of degree-0 nodes that result after node deletions. Our results apply to power-law networks with a wide range of power-law exponents, including Internet-like networks. We give both analytical and empirical evidence that such networks are not generally robust to massive random node deletions.
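The paper's empirical claim is easy to probe with a small experiment: build a configuration-model graph from a power-law degree sequence with minimum degree 1, delete nodes at random, and measure the fraction of survivors in the giant component. The sketch below (exponent and network size are illustrative choices, not the paper's exact parameters) follows that recipe with networkx.

```python
import random
import networkx as nx

def giant_fraction(G, p_fail, trials=5):
    """Mean fraction of surviving nodes that lie in the giant
    component when each node fails independently with p_fail."""
    out = []
    for _ in range(trials):
        survivors = [v for v in G if random.random() > p_fail]
        H = G.subgraph(survivors)
        if H.number_of_nodes() == 0:
            out.append(0.0)
            continue
        giant = max(nx.connected_components(H), key=len)
        out.append(len(giant) / H.number_of_nodes())
    return sum(out) / len(out)

# Power-law degree sequence with minimum degree 1; exponent and size
# are illustrative, not the paper's Internet-like parameters.
n, exponent = 10000, 2.2
seq = [max(1, int(round(d))) for d in nx.utils.powerlaw_sequence(n, exponent)]
if sum(seq) % 2:                    # configuration model needs an even sum
    seq[0] += 1
G = nx.Graph(nx.configuration_model(seq))   # collapse parallel edges
G.remove_edges_from(nx.selfloop_edges(G))

for p in (0.5, 0.9, 0.99):
    print(p, round(giant_fraction(G, p), 3))
```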
Verification and Tuning of an Adaptive Controller for an Unmanned Air Vehicle
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Matsutani, Megumi; Annaswamy, Anuradha M.
2010-01-01
This paper focuses on the analysis and tuning of a controller based on the Adaptive Control Technology for Safe Flight (ACTS) architecture. The ACTS architecture consists of a nominal, non-adaptive controller that provides satisfactory performance under nominal flying conditions, and an adaptive controller that provides robustness under off-nominal ones. A framework unifying control verification and gain tuning is used to make the controller's ability to satisfy the closed-loop requirements more robust to uncertainty. In this paper we tune the gains of both controllers using this approach. Some advantages and drawbacks of adaptation are identified by performing a global robustness assessment of both the adaptive controller and its non-adaptive counterpart. The analyses used to determine these characteristics are based on evaluating the degradation in closed-loop performance resulting from uncertainties having increasing levels of severity. The specific adverse conditions considered can be grouped into three categories: aerodynamic uncertainties, structural damage, and actuator failures. These failures include partial and total loss of control effectiveness, locked-in-place control surface deflections, and engine out conditions. The requirements considered are the peak structural loading, the ability of the controller to track pilot commands, the ability of the controller to keep the aircraft's state within the reliable flight envelope, and the handling/riding qualities of the aircraft. The nominal controller resulting from these tuning strategies was successfully validated using the NASA GTM Flight Test Vehicle.
The Artful Dodger: Answering the Wrong Question the Right Way
ERIC Educational Resources Information Center
Rogers, Todd; Norton, Michael I.
2011-01-01
What happens when speakers try to "dodge" a question they would rather not answer by answering a different question? In 4 studies, we show that listeners can fail to detect dodges when speakers answer similar--but objectively incorrect--questions (the "artful dodge"), a detection failure that goes hand-in-hand with a failure to rate dodgers more…
A detailed description of the sequential probability ratio test for 2-IMU FDI
NASA Technical Reports Server (NTRS)
Rich, T. M.
1976-01-01
The sequential probability ratio test (SPRT) for 2-IMU FDI (inertial measuring unit failure detection/isolation) is described. The SPRT is a statistical technique for detecting and isolating soft IMU failures originally developed for the strapdown inertial reference unit. The flowchart of a subroutine incorporating the 2-IMU SPRT is included.
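For readers unfamiliar with the SPRT itself, the sketch below implements Wald's classical test with thresholds set from the desired false-alarm and missed-detection rates; the Gaussian bias-shift example is a loose stand-in for a soft sensor failure, not the document's 2-IMU formulation.

```python
import math
import random

def sprt(samples, f0, f1, alpha=0.01, beta=0.01):
    """Wald's SPRT: accumulate the log-likelihood ratio and stop at
    thresholds set by the target error rates. Returns the decision
    and the number of samples consumed."""
    upper = math.log((1.0 - beta) / alpha)   # decide H1 (failure)
    lower = math.log(beta / (1.0 - alpha))   # decide H0 (no failure)
    llr, k = 0.0, 0
    for k, x in enumerate(samples, 1):
        llr += math.log(f1(x) / f0(x))
        if llr >= upper:
            return "H1", k
        if llr <= lower:
            return "H0", k
    return "undecided", k

def gauss_pdf(mu, sigma):
    return lambda x: math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (
        sigma * math.sqrt(2.0 * math.pi))

# Toy stand-in for a soft failure: measurement residuals whose mean
# shifts from 0 to 1 when a sensor degrades.
f0, f1 = gauss_pdf(0.0, 1.0), gauss_pdf(1.0, 1.0)
biased = (random.gauss(1.0, 1.0) for _ in range(1000))
print(sprt(biased, f0, f1))     # typically ('H1', ~10)
```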
Lenhard, Stephen C.; Yerby, Brittany; Forsgren, Mikael F.; Liachenko, Serguei; Johansson, Edvin; Pilling, Mark A.; Peterson, Richard A.; Yang, Xi; Williams, Dominic P.; Ungersma, Sharon E.; Morgan, Ryan E.; Brouwer, Kim L. R.; Jucker, Beat M.; Hockings, Paul D.
2018-01-01
Drug-induced liver injury (DILI) is a leading cause of acute liver failure and transplantation. DILI can be the result of impaired hepatobiliary transporters, with altered bile formation, flow, and subsequent cholestasis. We used gadoxetate dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), combined with pharmacokinetic modelling, to measure hepatobiliary transporter function in vivo in rats. The sensitivity and robustness of the method was tested by evaluating the effect of a clinical dose of the antibiotic rifampicin in four different preclinical imaging centers. The mean gadoxetate uptake rate constant for the vehicle groups at all centers was 39.3 +/- 3.4 s^-1 (n = 23) and 11.7 +/- 1.3 s^-1 (n = 20) for the rifampicin groups. The mean gadoxetate efflux rate constant for the vehicle groups was 1.53 +/- 0.08 s^-1 (n = 23) and for the rifampicin treated groups was 0.94 +/- 0.08 s^-1 (n = 20). Both the uptake and excretion transporters of gadoxetate were statistically significantly inhibited by the clinical dose of rifampicin at all centers and the size of this treatment group effect was consistent across the centers. Gadoxetate is a clinically approved MRI contrast agent, so this method is readily transferable to the clinic. Conclusion: Rate constants of gadoxetate uptake and excretion are sensitive and robust biomarkers to detect early changes in hepatobiliary transporter function in vivo in rats prior to established biomarkers of liver toxicity. PMID:29771932
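A minimal sketch of the kind of rate-constant estimation described here, assuming a generic one-compartment model with uptake from a known plasma input and first-order efflux; the study's actual pharmacokinetic model, units, and parameter values are not reproduced.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

def plasma(t):
    """Assumed plasma input function (arbitrary units)."""
    return np.exp(-t / 5.0)

def liver_signal(t, k_up, k_ef):
    """Liver tracer level from an uptake (k_up) and efflux (k_ef) rate."""
    dL = lambda L, ti: k_up * plasma(ti) - k_ef * L
    return odeint(dL, 0.0, t)[:, 0]

t = np.linspace(0.0, 30.0, 60)
truth = liver_signal(t, 0.8, 0.1)     # invented rate constants
noisy = truth + 0.01 * np.random.default_rng(0).standard_normal(t.size)

(k_up, k_ef), _ = curve_fit(liver_signal, t, noisy, p0=(0.5, 0.05))
print(round(k_up, 3), round(k_ef, 3))   # recovers ~0.8 and ~0.1
```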
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brett Emery Trabun; Gamage, Thoshitha Thanushka; Bakken, David Edward
This disclosure describes, in part, a system management component and failure detection component for use in a power grid data network to identify anomalies within the network and systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, desired latency, minimum acceptable latency, and a priority for each subscription. The failure detection component may identify an anomaly within the network and a source of the anomaly. Based on the identified anomaly, data rates and/or data paths may be adjusted in real-time to ensure that the power grid data network does not become overloaded and/or fail.
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1986-01-01
A hypothetical turbofan engine simplified simulation with a multivariable control and sensor failure detection, isolation, and accommodation logic (HYTESS II) is presented. The digital program, written in FORTRAN, is self-contained, efficient, realistic and easily used. Simulated engine dynamics were developed from linearized operating point models. However, essential nonlinear effects are retained. The simulation is representative of the hypothetical, low bypass ratio turbofan engine with an advanced control and failure detection logic. Included is a description of the engine dynamics, the control algorithm, and the sensor failure detection logic. Details of the simulation including block diagrams, variable descriptions, common block definitions, subroutine descriptions, and input requirements are given. Example simulation results are also presented.
A solenoid failure detection system for cold gas attitude control jet valves
NASA Technical Reports Server (NTRS)
Johnston, P. A.
1970-01-01
The development of a solenoid valve failure detection system is described. The technique requires the addition of a radioactive gas to the propellant of a cold gas jet attitude control system. Solenoid failure is detected with an avalanche radiation detector located in the jet nozzle which senses the radiation emitted by the leaking radioactive gas. Measurements of carbon monoxide leakage rates through a Mariner type solenoid valve are presented as a function of gas activity and detector configuration. A cylindrical avalanche detector with a factor of 40 improvement in leak sensitivity is proposed for flight systems because it allows the quantity of radioactive gas that must be added to the propellant to be reduced to a practical level.
NASA Technical Reports Server (NTRS)
Siwakosit, W.; Hess, R. A.; Bacon, Bart (Technical Monitor); Burken, John (Technical Monitor)
2000-01-01
A multi-input, multi-output reconfigurable flight control system design utilizing a robust controller and an adaptive filter is presented. The robust control design consists of a reduced-order, linear dynamic inversion controller with an outer-loop compensation matrix derived from Quantitative Feedback Theory (QFT). A principal feature of the scheme is placement of the adaptive filter in series with the QFT compensator, thus exploiting the inherent robustness of the nominal flight control system in the presence of plant uncertainties. An example of the scheme is presented in a pilot-in-the-loop computer simulation using a simplified model of the lateral-directional dynamics of the NASA F18 High Angle of Attack Research Vehicle (HARV) that included nonlinear anti-windup logic and actuator limitations. Prediction of handling qualities and pilot-induced oscillation tendencies in the presence of these nonlinearities is included in the example.
System and Method for Dynamic Aeroelastic Control
NASA Technical Reports Server (NTRS)
Suh, Peter M. (Inventor)
2015-01-01
The present invention proposes a hardware and software architecture for dynamic modal structural monitoring that uses a robust modal filter to monitor a potentially very large-scale array of sensors in real time, tolerant of asymmetric sensor noise and sensor failures, to achieve aircraft performance optimization such as minimizing aircraft flutter and drag and maximizing fuel efficiency.
MeDICi Software Superglue for Data Analysis Pipelines
Ian Gorton
2017-12-09
The Middleware for Data-Intensive Computing (MeDICi) Integration Framework is an integrated middleware platform developed to solve data analysis and processing needs of scientists across many domains. MeDICi is scalable, easily modified, and robust to multiple languages, protocols, and hardware platforms, and in use today by PNNL scientists for bioinformatics, power grid failure analysis, and text analysis.
Fast traffic sign recognition with a rotation invariant binary pattern based feature.
Yin, Shouyi; Ouyang, Peng; Liu, Leibo; Guo, Yike; Wei, Shaojun
2015-01-19
Robust and fast traffic sign recognition is very important but difficult for safe driving assistance systems. This study addresses fast and robust traffic sign recognition to enhance driving safety. The proposed method includes three stages. First, a typical Hough transformation is adopted to implement coarse-grained location of the candidate regions of traffic signs. Second, a RIBP (Rotation Invariant Binary Pattern) based feature in the affine and Gaussian space is proposed to reduce the time of traffic sign detection and achieve robust traffic sign detection in terms of scale, rotation, and illumination. Third, the techniques of ANN (Artificial Neural Network) based feature dimension reduction and classification are designed to reduce the traffic sign recognition time. Compared with the current work, the experimental results in the public datasets show that this work achieves robustness in traffic sign recognition with comparable recognition accuracy and faster processing speed, including training speed and recognition speed.
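Stage one of the pipeline, coarse candidate localization with a Hough transform, can be sketched with OpenCV as below; the file name and parameter values are placeholders, and the RIBP description and ANN classification stages are not reproduced.

```python
import cv2
import numpy as np

# Placeholder input image; all parameter values are illustrative.
img = cv2.imread("road_scene.png")
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)

# Coarse-grained localization of circular sign candidates.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=120, param2=40, minRadius=10, maxRadius=80)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        # Each candidate region would next be described with the
        # rotation-invariant binary pattern and classified by the ANN.
        cv2.rectangle(img, (x - r, y - r), (x + r, y + r), (0, 255, 0), 2)
cv2.imwrite("candidates.png", img)
```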
Failure detection and isolation analysis of a redundant strapdown inertial measurement unit
NASA Technical Reports Server (NTRS)
Motyka, P.; Landey, M.; Mckern, R.
1981-01-01
Techniques for failure detection and isolation (FDI) algorithms for a dual fail-operational redundant strapdown inertial navigation system are defined and developed. The FDI techniques chosen include provisions for hard and soft failure detection in the context of flight control and navigation. Analyses were performed to determine error detection and switching levels for the inertial navigation system, which is intended for a conventional takeoff and landing (CTOL) operating environment. In addition, false alarms and missed alarms were investigated for the FDI techniques developed, along with analyses of filters to be used in conjunction with FDI processing. Two specific FDI algorithms were compared: the generalized likelihood test and the edge vector test. A deterministic digital computer simulation was used to compare and evaluate the algorithms and FDI systems.
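A minimal sketch of a parity-vector generalized likelihood test of the kind compared in this study, under an assumed six-sensor skewed-axis geometry; the flight configuration, thresholds, and filtering of the actual study are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed geometry: six single-axis sensors, rows = sensing directions.
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
              [0.577, 0.577, 0.577], [0.577, -0.577, 0.577],
              [-0.577, 0.577, 0.577]])

# Parity matrix: orthonormal rows spanning the left null space of H,
# so V @ H = 0 and the parity vector ignores the true angular rate.
U, _, _ = np.linalg.svd(H)
V = U[:, 3:].T

omega = np.array([0.1, -0.2, 0.05])        # true rate
m = H @ omega + 1e-3 * rng.standard_normal(6)
m[4] += 0.05                               # soft failure on sensor 4

p = V @ m
# GLT-style decision functions: parity energy along each sensor's
# failure signature; the largest identifies the failed sensor.
dfs = (V.T @ p) ** 2 / np.sum(V ** 2, axis=0)
print("failure detected:", p @ p > 1e-4)
print("isolated sensor:", int(np.argmax(dfs)))   # 4
```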
CNV-TV: a robust method to discover copy number variation from short sequencing reads.
Duan, Junbo; Zhang, Ji-Gang; Deng, Hong-Wen; Wang, Yu-Ping
2013-05-02
Copy number variation (CNV) is an important structural variation (SV) in the human genome. Various studies have shown that CNVs are associated with complex diseases. Traditional CNV detection methods such as fluorescence in situ hybridization (FISH) and array comparative genomic hybridization (aCGH) suffer from low resolution. The next generation sequencing (NGS) technique promises higher resolution detection of CNVs, and several methods were recently proposed for realizing such a promise. However, the performances of these methods are not robust under some conditions; e.g., some of them may fail to detect CNVs of short sizes. There has been a strong demand for reliable detection of CNVs from high resolution NGS data. A novel and robust method to detect CNVs from short sequencing reads is proposed in this study. The detection of CNV is modeled as a change-point detection from the read depth (RD) signal derived from the NGS, which is fitted with a total variation (TV) penalized least squares model. The performance (e.g., sensitivity and specificity) of the proposed approach is evaluated by comparison with several recently published methods on both simulated and real data from the 1000 Genomes Project. The experimental results showed that both the true positive rate and false positive rate of the proposed detection method do not change significantly for CNVs with different copy numbers and lengths, when compared with several existing methods. Therefore, our proposed approach results in a more reliable detection of CNVs than the existing methods.
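A small sketch of the core idea, fitting a total-variation-penalized least squares model to a simulated read-depth signal and reading change points off the fitted profile; the smoothed absolute value and all parameter values are illustrative choices, not the CNV-TV defaults.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
# Simulated read depth: diploid baseline with a duplicated segment.
rd = np.r_[np.full(100, 2.0), np.full(40, 3.0), np.full(100, 2.0)]
y = rd + 0.3 * rng.standard_normal(rd.size)

lam, eps = 2.0, 1e-6    # TV weight; eps smooths |.| for the optimizer

def objective(x):
    d = np.diff(x)
    return 0.5 * np.sum((y - x) ** 2) + lam * np.sum(np.sqrt(d * d + eps))

def gradient(x):
    d = np.diff(x)
    t = d / np.sqrt(d * d + eps)
    g = x - y
    g[:-1] -= lam * t
    g[1:] += lam * t
    return g

x = minimize(objective, y, jac=gradient, method="L-BFGS-B").x
# Jumps in the fitted piecewise-constant profile are change points;
# segments between them with shifted mean depth are CNV candidates.
print(np.where(np.abs(np.diff(x)) > 0.2)[0])   # near 99 and 139
```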
Immunity-Based Aircraft Fault Detection System
NASA Technical Reports Server (NTRS)
Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.
2004-01-01
In the study reported in this paper, we have developed and applied an Artificial Immune System (AIS) algorithm for aircraft fault detection, as an extension to a previous work on intelligent flight control (IFC). Though the prior studies had established the benefits of IFC, one area of weakness that needed to be strengthened was the control dead band induced by commanding a failed surface. Since the IFC approach uses fault accommodation with no detection, the dead band, although it reduces over time due to learning, is present and causes degradation in handling qualities. If the failure can be identified, this dead band can be further reduced to ensure rapid fault accommodation and better handling qualities. The paper describes the application of an immunity-based approach that can detect a broad spectrum of known and unforeseen failures. The approach incorporates the knowledge of the normal operational behavior of the aircraft from sensory data, and probabilistically generates a set of pattern detectors that can detect any abnormalities (including faults) in the behavior pattern indicating unsafe in-flight operation. We developed a tool called MILD (Multi-level Immune Learning Detection) based on a real-valued negative selection algorithm that can generate a small number of specialized detectors (as signatures of known failure conditions) and a larger set of generalized detectors for unknown (or possible) fault conditions. Once the fault is detected and identified, an adaptive control system would use this detection information to stabilize the aircraft by utilizing available resources (control surfaces). We experimented with data sets collected under normal and various simulated failure conditions using a piloted motion-base simulation facility. The reported results are from a collection of test cases that reflect the performance of the proposed immunity-based fault detection algorithm.
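The negative selection idea is compact enough to sketch: generate random detectors, discard any that match normal ('self') samples, and flag new observations that activate a surviving detector. The 2-D toy below is illustrative only; MILD's variable-size detectors and real flight features are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# 'Self' set: features of normal operation (2-D toy data in [0, 1]^2).
self_data = 0.1 * rng.standard_normal((500, 2)) + 0.5
self_radius = 0.05

# Negative selection: keep only candidate detectors that fail to
# match every normal sample.
detectors = []
while len(detectors) < 500:
    d = rng.uniform(0.0, 1.0, size=2)
    if np.min(np.linalg.norm(self_data - d, axis=1)) > self_radius + 0.05:
        detectors.append(d)
detectors = np.array(detectors)

def is_anomalous(x, activation_radius=0.06):
    """Flag x as off-nominal if it activates any detector."""
    return bool(np.min(np.linalg.norm(detectors - x, axis=1))
                < activation_radius)

print(is_anomalous(np.array([0.5, 0.5])))    # inside self -> False
print(is_anomalous(np.array([0.95, 0.1])))   # off-nominal -> True
```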
Repeated Induction of Inattentional Blindness in a Simulated Aviation Environment
NASA Technical Reports Server (NTRS)
Kennedy, Kellie D.; Stephens, Chad L.; Williams, Ralph A.; Schutte, Paul C.
2017-01-01
The study reported herein is a subset of a larger investigation on the role of automation in the context of the flight deck and used a fixed-base, human-in-the-loop simulator. This paper explored the relationship between automation and inattentional blindness (IB) occurrences in a repeated induction paradigm using two types of runway incursions. The critical stimuli for both runway incursions were directly relevant to primary task performance. Sixty non-pilot participants performed the final five minutes of a landing scenario twice in one of three automation conditions: full automation (FA), partial automation (PA), and no automation (NA). The first induction resulted in a 70 percent (42 of 60) detection failure rate, with those in the PA condition significantly more likely to detect the incursion than those in the FA or NA conditions. The second induction yielded a 50 percent detection failure rate. Although detection improved (detection failure rates declined) in all conditions, those in the FA condition demonstrated the greatest improvement, with doubled detection rates. Detection behavior in the first trial did not preclude a failed detection in the second induction. Considering group membership (IB vs. Detection), participants in the FA condition showed greater improvement than those in the NA condition and rated the Mental Demand and Effort subscales of the NASA-TLX (NASA Task Load Index) significantly higher at Time 2 than at Time 1. Participants in the FA condition used the experience of IB exposure to improve task performance, whereas those in the NA condition did not, indicating the availability and reallocation of attentional resources in the FA condition. These findings support the role of engagement in operational attention detriment and the consideration of attentional failure causation to determine appropriate mitigation strategies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rhee, Seung; Spencer, Cherrill (Stanford U. / SLAC)
2009-01-23
Failure occurs when one or more of the intended functions of a product are no longer fulfilled to the customer's satisfaction. The most critical product failures are those that escape design reviews and in-house quality inspection and are found by the customer. The product may work for a while until its performance degrades to an unacceptable level, or it may not have worked even before the customer took possession of the product. The end results of failures which may lead to unsafe conditions or major losses of the main function are rated high in severity. Failure Modes and Effects Analysis (FMEA) is a tool widely used in the automotive, aerospace, and electronics industries to identify, prioritize, and eliminate known potential failures, problems, and errors from systems under design, before the product is released (Stamatis, 1997). Several industrial FMEA standards such as those published by the Society of Automotive Engineers, US Department of Defense, and the Automotive Industry Action Group employ the Risk Priority Number (RPN) to measure risk and severity of failures. The Risk Priority Number (RPN) is a product of 3 indices: Occurrence (O), Severity (S), and Detection (D). In a traditional FMEA process, design engineers typically analyze the 'root cause' and 'end-effects' of potential failures in a sub-system or component and assign penalty points through the O, S, D values to each failure. The analysis is organized around categories called failure modes, which link the causes and effects of failures. A few actions are taken upon completing the FMEA worksheet. The RPN column generally will identify the high-risk areas. The idea of performing FMEA is to eliminate or reduce known and potential failures before they reach the customers. Thus, a plan of action must be in place for the next task. Not all failures can be resolved during the product development cycle, thus prioritization of actions must be made within the design group. One definition of detection difficulty (D) is how well the organization controls the development process. Another definition relates to the detectability of a particular failure in the product when it is in the hands of the customer. The former asks 'What is the chance of catching the problem before we give it to the customer?' The latter asks 'What is the chance of the customer catching the problem before the problem results in a catastrophic failure?' (Palady, 1995) These differing definitions confuse FMEA users when one tries to determine detection difficulty. Are we trying to measure how easy it is to detect where a failure has occurred or when it has occurred? Or are we trying to measure how easy or difficult it is to prevent failures? Ordinal scale variables are used to rank-order industries such as hotels, restaurants, and movies (note that a 4-star hotel is not necessarily twice as good as a 2-star hotel). Ordinal values preserve rank in a group of items, but the distance between the values cannot be measured since a distance function does not exist. Thus, the product or sum of ordinal variables loses its rank since each parameter has different scales. The RPN is a product of 3 independent ordinal variables; it can indicate that some failure types are 'worse' than others, but gives no quantitative indication of their relative effects. To resolve the ambiguity of measuring detection difficulty and the irrational logic of multiplying 3 ordinal indices, a new methodology was created to overcome these shortcomings: Life Cost-Based FMEA.
Life Cost-Based FMEA measures failure/risk in terms of monetary cost. Cost is a universal parameter that can be easily related to severity by engineers and others. Thus, failure cost can be estimated, in its simplest form, as Expected Failure Cost = sum over i = 1 to n of p_i * c_i, where p_i is the probability of a particular failure occurring, c_i is the monetary cost associated with that failure, and n is the total number of failure scenarios. FMEA is most effective when there are inputs into it from all concerned disciplines of the product development team. However, FMEA is a long process and can become tedious, and it won't be effective if too many people participate. An ideal team should have 3 to 4 people from design, manufacturing, and service departments if possible. Depending on how complex the system is, the entire process can take anywhere from one to four weeks working full time. Thus, it is important to agree to the time commitment before starting the analysis; otherwise, anxious managers might stop the procedure before it is completed.
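The contrast between the ordinal RPN and the cost-based measure is easy to make concrete; the failure modes, scores, probabilities, and costs below are invented for illustration.

```python
# Invented failure modes with ordinal scores (1-10) and cost data.
failures = [
    # (name, occurrence, severity, detection, probability, cost $)
    ("connector fatigue", 4, 7, 6, 0.020, 15000.0),
    ("sensor drift",      6, 3, 4, 0.100,  1200.0),
    ("firmware lock-up",  2, 9, 8, 0.005, 80000.0),
]

for name, o, s, d, p, c in failures:
    # Traditional ordinal measure vs. Life Cost-Based expected cost.
    print(f"{name:18s} RPN = {o * s * d:4d}   E[cost] = ${p * c:8.0f}")

total = sum(p * c for _, _, _, _, p, c in failures)
print(f"{'all modes':18s} expected failure cost = ${total:.0f}")
```

Note that the two measures need not agree: the invented firmware failure has the largest expected cost but only the second-largest RPN, which is exactly the kind of discrepancy that motivates the cost-based approach.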
Elgendi, Mohamed; Eskofier, Björn; Dokos, Socrates; Abbott, Derek
2014-01-01
Cardiovascular diseases are the number one cause of death worldwide. Currently, portable battery-operated systems such as mobile phones with wireless ECG sensors have the potential to be used in continuous cardiac function assessment that can be easily integrated into daily life. These portable point-of-care diagnostic systems can therefore help unveil and treat cardiovascular diseases. The basis for ECG analysis is a robust detection of the prominent QRS complex, as well as other ECG signal characteristics. However, it is not clear from the literature which ECG analysis algorithms are suited for an implementation on a mobile device. We investigate current QRS detection algorithms based on three assessment criteria: 1) robustness to noise, 2) parameter choice, and 3) numerical efficiency, in order to target a universal fast-robust detector. Furthermore, existing QRS detection algorithms may provide an acceptable solution only on small segments of ECG signals, within a certain amplitude range, or amid particular types of arrhythmia and/or noise. These issues are discussed in the context of a comparison with the most conventional algorithms, followed by future recommendations for developing reliable QRS detection schemes suitable for implementation on battery-operated mobile devices.
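As one concrete example of the kind of QRS detector surveyed here, the sketch below implements a minimal Pan-Tompkins-style pipeline (band-pass, differentiate, square, integrate, peak-pick); parameter values are common textbook choices rather than recommendations from the review.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_qrs(ecg, fs):
    """Minimal Pan-Tompkins-style QRS detector returning sample
    indices of detected beats."""
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    x = filtfilt(b, a, ecg)
    x = np.diff(x) ** 2                      # emphasize steep QRS slopes
    win = int(0.15 * fs)                     # 150 ms integration window
    x = np.convolve(x, np.ones(win) / win, mode="same")
    peaks, _ = find_peaks(x, height=0.3 * np.max(x),
                          distance=int(0.25 * fs))
    return peaks

# Synthetic test: 1 Hz train of narrow spikes plus noise at 250 Hz.
fs = 250
ecg = 0.05 * np.random.randn(10 * fs)
ecg[::fs] = 1.0
print(detect_qrs(ecg, fs))   # ~10 beats, one per second
```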
Hongyi Xu; Barbic, Jernej
2017-01-01
We present an algorithm for fast continuous collision detection between points and signed distance fields, and demonstrate how to robustly use it for 6-DoF haptic rendering of contact between objects with complex geometry. Continuous collision detection is often needed in computer animation, haptics, and virtual reality applications, but has so far only been investigated for polygon (triangular) geometry representations. We demonstrate how to robustly and continuously detect intersections between points and level sets of the signed distance field. We suggest using an octree subdivision of the distance field for fast traversal of distance field cells. We also give a method to resolve continuous collisions between point clouds organized into a tree hierarchy and a signed distance field, enabling rendering of contact between rigid objects with complex geometry. We investigate and compare two 6-DoF haptic rendering methods now applicable to point-versus-distance field contact for the first time: continuous integration of penalty forces, and a constraint-based method. An experimental comparison to discrete collision detection demonstrates that the continuous method is more robust and can correctly resolve collisions even under high velocities and during complex contact.
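The point-versus-SDF continuous test can be illustrated without the octree machinery: march along the point's trajectory, look for a sign change of the signed distance, and refine the crossing time by bisection. The analytic sphere SDF below stands in for a sampled distance field.

```python
import numpy as np

def sdf_sphere(p, center=np.zeros(3), radius=1.0):
    """Signed distance to a sphere (stand-in for a sampled SDF grid)."""
    return np.linalg.norm(p - center) - radius

def continuous_collision(p0, p1, sdf, tol=1e-6):
    """First t in [0, 1] where p(t) = p0 + t*(p1 - p0) crosses the zero
    level set: sample for a sign change, refine by bisection."""
    ts = np.linspace(0.0, 1.0, 32)
    vals = [sdf(p0 + t * (p1 - p0)) for t in ts]
    for i in range(len(ts) - 1):
        if vals[i] > 0.0 >= vals[i + 1]:     # outside -> inside
            lo, hi = ts[i], ts[i + 1]
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if sdf(p0 + mid * (p1 - p0)) > 0.0:
                    lo = mid
                else:
                    hi = mid
            return hi
    return None

# A fast point passing straight through the sphere: both endpoints lie
# outside, so a discrete endpoint check would tunnel through, but the
# swept test still reports the entry time.
t_hit = continuous_collision(np.array([-2.0, 0.0, 0.0]),
                             np.array([2.0, 0.0, 0.0]), sdf_sphere)
print(t_hit)   # ~0.25 (entry at x = -1)
```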
NASA Astrophysics Data System (ADS)
Gharibnezhad, Fahit; Mujica, Luis E.; Rodellar, José
2015-01-01
Using Principal Component Analysis (PCA) for Structural Health Monitoring (SHM) has received considerable attention over the past few years. PCA has been used not only as a direct method to identify, classify and localize damage but also as a significant primary step for other methods. Despite the several strengths that PCA offers, it is very sensitive to outliers. Outliers are anomalous observations that can affect the variance and the covariance, which are vital ingredients of the PCA method. Therefore, results based on PCA in the presence of outliers are not fully satisfactory. As a main contribution, this work suggests the use of a robust variant of PCA that is not sensitive to outliers as an effective way to deal with this problem in the SHM field. In addition, the robust PCA is compared with the classical PCA in the sense of detecting probable damage. The comparison shows that robust PCA can distinguish the damage much better than the classical one, and in many cases allows detection where classical PCA is not able to discern between damaged and non-damaged structures. Moreover, different types of robust PCA are compared with each other, as well as with their classical counterpart, in terms of damage detection. All the results are obtained through experiments with an aircraft turbine blade using piezoelectric transducers as sensors and actuators and adding simulated damage.
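One way to make the classical-versus-robust contrast concrete is to compare principal axes computed from the empirical covariance against those from a robust estimator; the sketch below uses scikit-learn's minimum covariance determinant, which is one robust variant among the several the study compares, on invented data with gross outliers.

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance, MinCovDet

rng = np.random.default_rng(3)

# Baseline 'healthy structure' features with a dominant direction,
# contaminated by a few gross outliers (e.g. sensor spikes).
X = rng.standard_normal((300, 5)) @ np.diag([3.0, 1.0, 0.5, 0.3, 0.1])
X[:10] += 25.0 * rng.standard_normal((10, 5))   # outliers

def principal_axes(cov):
    w, V = np.linalg.eigh(cov)
    return w[::-1], V[:, ::-1]      # eigenvalues/vectors, descending

w_cls, _ = principal_axes(EmpiricalCovariance().fit(X).covariance_)
w_rob, _ = principal_axes(MinCovDet(random_state=0).fit(X).covariance_)

# Classical eigenvalues are inflated by the outliers; the robust
# estimate stays near the clean spectrum ~ [9, 1, 0.25, 0.09, 0.01].
print(np.round(w_cls, 2))
print(np.round(w_rob, 2))
```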
Soverini, Simona; De Benedittis, Caterina; Castagnetti, Fausto; Gugliotta, Gabriele; Mancini, Manuela; Bavaro, Luana; Machova Polakova, Katerina; Linhartova, Jana; Iurlo, Alessandra; Russo, Domenico; Pane, Fabrizio; Saglio, Giuseppe; Rosti, Gianantonio; Cavo, Michele; Baccarani, Michele; Martinelli, Giovanni
2016-08-02
Imatinib-resistant chronic myeloid leukemia (CML) patients receiving second-line tyrosine kinase inhibitor (TKI) therapy with dasatinib or nilotinib have a higher risk of disease relapse and progression, and not infrequently BCR-ABL1 kinase domain (KD) mutations are implicated in therapeutic failure. In this setting, earlier detection of emerging BCR-ABL1 KD mutations would offer greater chances of efficacy for subsequent salvage therapy and limit the biological consequences of full BCR-ABL1 kinase reactivation. Taking advantage of an already set up and validated next-generation deep amplicon sequencing (DS) assay, we aimed to assess whether DS may allow a larger window of detection of emerging BCR-ABL1 KD mutants predicting an impending relapse. A total of 125 longitudinal samples from 51 CML patients who had acquired dasatinib- or nilotinib-resistant mutations during second-line therapy were analyzed by DS from the time of failure and mutation detection by conventional sequencing backwards. BCR-ABL1/ABL1%(IS) transcript levels were used to define whether the patient had 'optimal response', 'warning' or 'failure' at the time of first mutation detection by DS. DS was able to backtrack dasatinib- or nilotinib-resistant mutations to the previous sample(s) in 23/51 (45%) patients. The median mutation burden at the time of first detection by DS was 5.5% (range, 1.5-17.5%); the median interval between detection by DS and detection by conventional sequencing was 3 months (range, 1-9 months). In 5 cases, the mutations were detectable at baseline. In the remaining cases, the response level at the time mutations were first detected by DS could be defined as 'Warning' (according to the 2013 ELN definitions of response to second-line therapy) in 13 cases, as 'Optimal response' in one case, and as 'Failure' in 4 cases. No dasatinib- or nilotinib-resistant mutations were detected by DS in 15 randomly selected patients with 'warning' at various timepoints who later turned into optimal responders with no treatment changes. DS enables a larger window of detection of emerging BCR-ABL1 KD mutations predicting an impending relapse. A 'Warning' response may represent a rational trigger, besides 'Failure', for DS-based mutation screening in CML patients undergoing second-line TKI therapy.
NASA Astrophysics Data System (ADS)
Ortuño, María; Guinau, Marta; Calvet, Jaume; Furdada, Glòria; Bordonau, Jaume; Ruiz, Antonio; Camafort, Miquel
2017-10-01
Slope failures have been traditionally detected by field inspection and aerial-photo interpretation. These approaches are generally insufficient to identify subtle landforms, especially those generated during the early stages of failure, and particularly where the site is located in forested and remote terrain. We present the identification and characterization of several large and medium size slope failures previously undetected within the Orri massif, Central Pyrenees. Around 130 scarps were interpreted as being part of Rock Slope Failures (RSFs), while other smaller and more superficial failures were interpreted as complex movements combining colluvium slow flow/slope creep and RSFs. Except for one of them, these slope failures had not been previously detected, although they extend across about 15% of the studied region. The failures were identified through the analysis of a high-resolution (1 m) LIDAR-derived bare earth Digital Elevation Model (DEM). Most of the scarps are undetectable by fieldwork, photo interpretation or 5 m resolution topography analysis owing to their small heights (0.5 to 2 m) and their location within forest areas. In many cases, these landforms are not evident in the field due to the presence of other minor irregularities in the slope and the lack of open views caused by the forest. 2D and 3D visualization of hillshade maps with different sun azimuths provided an overall picture of the scarp assemblage and permitted a more complete analysis of the geometry of the scarps with respect to the slope and the structural fabric. The sharpness of some of the landforms suggests ongoing activity, which should be explored in future detailed studies in order to assess potential hazards affecting the Portainé ski resort. Our results reveal that close analysis of the 1 m LIDAR-derived DEM can significantly help to detect early-stage slope deformations in high mountain regions, and that expert judgment of the DEM is essential when dealing with subtle landforms. The incorporation of this approach in regional mapping represents a great advance in completing the catalogue of slope failures and will eventually contribute to a better understanding of the spatial factors controlling them.
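The multi-azimuth hillshading used here follows a standard analytical formula; the sketch below applies it to a toy DEM containing a 1 m scarp of the size discussed in the text (cell size and sun angles are illustrative).

```python
import numpy as np

def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Standard analytical hillshade of a DEM array."""
    az = np.radians(360.0 - azimuth_deg + 90.0)   # map to math angles
    alt = np.radians(altitude_deg)
    dzdy, dzdx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(dzdy, -dzdx)
    shade = (np.sin(alt) * np.cos(slope)
             + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)

# Toy 1 m DEM: a gentle planar slope cut by a 1 m-high scarp.
y, x = np.mgrid[0:200, 0:200].astype(float)
dem = 0.2 * y + 1.0 * (x > 100)

# Rendering under several sun azimuths highlights scarps of different
# orientations, as described above.
for azimuth in (315.0, 45.0, 135.0):
    hs = hillshade(dem, azimuth_deg=azimuth)
    print(azimuth, round(float(hs[:, 95:106].std()), 3))
```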
System for Anomaly and Failure Detection (SAFD) system development
NASA Technical Reports Server (NTRS)
Oreilly, D.
1992-01-01
This task specified developing the hardware and software necessary to implement the System for Anomaly and Failure Detection (SAFD) algorithm, developed under Technology Test Bed (TTB) Task 21, on the TTB engine stand. This effort involved building two units: one unit to be installed in the Block II Space Shuttle Main Engine (SSME) Hardware Simulation Lab (HSL) at Marshall Space Flight Center (MSFC), and one unit to be installed at the TTB engine stand. Rocketdyne personnel from the HSL performed the task. The SAFD algorithm was developed as an improvement over the current redline system used in the Space Shuttle Main Engine Controller (SSMEC). Simulation tests and execution against previous hot fire tests demonstrated that the SAFD algorithm could detect engine failure as much as tens of seconds before the redline system recognized the failure. Although the current algorithm only operates during steady state conditions (engine not throttling), work is underway to expand the algorithm to work during transient conditions.
Ferrographic and spectrometer oil analysis from a failed gas turbine engine
NASA Technical Reports Server (NTRS)
Jones, W. R., Jr.
1983-01-01
An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. It was concluded that a severe surge may have caused interference between rotating and stationary compressor parts that either directly or indirectly ignited the titanium components. Several engine oil samples (before and after the failure) were analyzed with a Ferrograph, and with plasma, atomic absorption, and emission spectrometers to see if this information would aid in the engine failure diagnosis. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism nor a high level of wear debris was detected in the engine oil sample taken just prior to the test in which the failure occurred. However, low concentrations (0.2 to 0.5 ppm) of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations (2 ppm) were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure. The oil analyses eliminated a lubrication system bearing or shaft seal failure as the cause of the engine failure. Previously announced in STAR as N83-12433
Negative Selection Algorithm for Aircraft Fault Detection
NASA Technical Reports Server (NTRS)
Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.
2004-01-01
We investigated a real-valued Negative Selection Algorithm (NSA) for fault detection in man-in-the-loop aircraft operation. The detection algorithm uses body-axes angular rate sensory data exhibiting the normal flight behavior patterns, to generate probabilistically a set of fault detectors that can detect any abnormalities (including faults and damages) in the behavior pattern of the aircraft flight. We performed experiments with datasets (collected under normal and various simulated failure conditions) using the NASA Ames man-in-the-loop high-fidelity C-17 flight simulator. The paper provides results of experiments with different datasets representing various failure conditions.
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method is demonstrated by both academic and practical examples.
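A minimal sketch of the overall loop, with scikit-learn's Gaussian process in place of the paper's surrogate and a U-type acquisition (an AK-MCS-style choice, not necessarily the paper's design criterion) selecting one point per iteration rather than the paper's batches:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)

def g(x):
    """Stand-in limit-state function; failure where g(x) < 0. A real
    application would call the expensive computer model here."""
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

X = rng.uniform(-3.0, 3.0, size=(12, 2))        # initial design
mc = rng.uniform(-3.0, 3.0, size=(20000, 2))    # Monte Carlo points
gp = GaussianProcessRegressor(kernel=RBF(1.0), alpha=1e-8)

for _ in range(20):
    gp.fit(X, g(X))
    mu, sd = gp.predict(mc, return_std=True)
    # U-type acquisition: refine where the sign of g is least certain.
    u = np.abs(mu) / np.maximum(sd, 1e-12)
    X = np.vstack([X, mc[np.argmin(u)]])

pf_hat = float(np.mean(gp.predict(mc) < 0.0))
pf_ref = float(np.mean(g(mc) < 0.0))
print(pf_hat, pf_ref)    # surrogate estimate vs. direct Monte Carlo
```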
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katti, Amogh; Di Fatta, Giuseppe; Naughton III, Thomas J
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus.
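A deliberately simplified simulation of gossip-based dissemination of failure information, useful for seeing the logarithmic scaling of cycle counts; the paper's two algorithms and the Extreme-scale Simulator environment are not reproduced.

```python
import random

def gossip_cycles(n, n_failed=5, fanout=2, seed=0):
    """Cycles of push gossip until every alive process knows the full
    failed set. Synchronous rounds are approximated."""
    rng = random.Random(seed)
    failed = set(range(n_failed))
    alive = list(range(n_failed, n))
    known = {p: set() for p in alive}
    for f in failed:                      # each failure is first
        known[rng.choice(alive)].add(f)   # observed by some process
    cycles = 0
    while any(known[p] != failed for p in alive):
        cycles += 1
        for p in alive:
            for q in rng.sample(alive, fanout):
                known[q] |= known[p]      # push local suspect list
    return cycles

for n in (64, 256, 1024, 4096):
    print(n, gossip_cycles(n))
# Cycle counts grow roughly logarithmically with system size.
```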
Evolution of Biomarker Guided Therapy for Heart Failure: Current Concepts and Trial Evidence
Pruett, Amanda E; Lee, Amanda K; Patterson, Herbert; Schwartz, Todd A; Glotzer, Jana M; Adams, Jr, Kirkwood F
2015-01-01
Optimizing management of patients with heart failure remains quite challenging despite many significant advances in drug and device therapy for this syndrome. Although a large body of evidence from robust clinical trials supports multiple therapies, utilization of these well-established treatments remains inconsistent and outcomes suboptimal in “real-world” patients with heart failure. Disease management programs may be effective, but are difficult to implement due to cost and logistical issues. Another approach to optimizing therapy is to utilize biomarkers to guide therapeutic choices. Natriuretic peptides provide additional information of significant clinical value in the diagnosis and estimation of risk in patients with heart failure. Ongoing research suggests a potentially important added role for natriuretic peptides in heart failure. Guiding therapy based on serial changes in these biomarkers may be an effective strategy to optimize treatment and achieve better outcomes in this syndrome. Initial, innovative, proof-of-concept studies have provided encouraging results and important insights into key aspects of this strategy, but well designed, large-scale, multicenter, randomized, outcome trials are needed to definitively establish this novel approach to management. Given the immense and growing public health burden of heart failure, identification of cost-effective ways to decrease the morbidity and mortality due to this syndrome is critical. PMID:24251462
Environmental testing to prevent on-orbit TDRS failures
NASA Technical Reports Server (NTRS)
Cutler, Robert M.
1994-01-01
Can improved environmental testing prevent on-orbit component failures such as those experienced in the Tracking and Data Relay Satellite (TDRS) constellation? TDRS communications have been available to user spacecraft continuously for over 11 years, during which the five TDRS's placed in orbit have demonstrated their redundancies and robustness by surviving 26 component failures. Nevertheless, additional environmental testing prior to launch could prevent the occurrence of some types of failures, and could help to maintain communication services. Specific testing challenges involve traveling wave tube assemblies (TWTA's), whose lives may decrease with on-off cycling, and heaters that are subject to thermal cycles. The development of test conditions and procedures should account for known thermal variations. Testing may also have the potential to prevent failures in which components such as diplexers have had their lives dramatically shortened because of particle migration in a weightless environment. Reliability modeling could be used to select additional components that could benefit from special testing, but experience shows that this approach has serious limitations. Through knowledge of on-orbit experience, and with advances in testing, communication satellite programs might avoid the occurrence of some types of failures and extend future spacecraft longevity beyond the current TDRS design life of ten years. However, determining which components to test, and how much testing to do, remains problematic.
Failures of Perception in the Low-Prevalence Effect: Evidence From Active and Passive Visual Search
Hout, Michael C.; Walenchok, Stephen C.; Goldinger, Stephen D.; Wolfe, Jeremy M.
2017-01-01
In visual search, rare targets are missed disproportionately often. This low-prevalence effect (LPE) is a robust problem with demonstrable societal consequences. What is the source of the LPE? Is it a perceptual bias against rare targets or a later process, such as premature search termination or motor response errors? In 4 experiments, we examined the LPE using standard visual search (with eye tracking) and 2 variants of rapid serial visual presentation (RSVP) in which observers made present/absent decisions after sequences ended. In all experiments, observers looked for 2 target categories (teddy bear and butterfly) simultaneously. To minimize simple motor errors, caused by repetitive absent responses, we held overall target prevalence at 50%, with 1 low-prevalence and 1 high-prevalence target type. Across conditions, observers either searched for targets among other real-world objects or searched for specific bears or butterflies among within-category distractors. We report 4 main results: (a) In standard search, high-prevalence targets were found more quickly and accurately than low-prevalence targets. (b) The LPE persisted in RSVP search, even though observers never terminated search on their own. (c) Eye-tracking analyses showed that high-prevalence targets elicited better attentional guidance and faster perceptual decisions. And (d) even when observers looked directly at low-prevalence targets, they often (12%–34% of trials) failed to detect them. These results strongly argue that low-prevalence misses represent failures of perception when early search termination or motor errors are controlled. PMID:25915073
Systems biology approaches to pancreatic cancer detection, prevention and treatment.
Alian, Osama M; Philip, Philip A; Sarkar, Fazlul H; Azmi, Asfar S
2014-01-01
Pancreatic cancer [PC] is a complex disease harboring multiple genetic alterations. It is now well known that deregulation in the expression and function of oncogenes and tumor suppressor genes contributes to the development and progression of PC. The last 40 years have not seen any major improvements in the dismal overall cure rate for PC, where drug resistance is an emerging and recurring obstacle to successful treatment. Additionally, the lack of molecular biomarkers for patient selection limits drug availability for tailored therapy for patients diagnosed with PC. The very high failure rate of new drugs in Phase III clinical trials in PC calls for more robust pre-clinical and clinical testing of new compounds. In order to rationally choose combinations of targeted agents that may improve therapeutic outcome by overcoming drug resistance, one needs to apply newer research tools such as systems and network biology. These newer tools are expected to assist in the design of effective drug combinations for the treatment of PC and are expected to become an important part of any future clinical trials. In this review we provide background information on the current state of PC research, the reasons for drug failure, and how to overcome these issues using systems sciences. We conclude this review with an example of how systems and network methodologies can help in the design of efficacious drug combinations for this deadly and thus far incurable disease.
Robust spike classification based on frequency domain neural waveform features.
Yang, Chenhui; Yuan, Yuan; Si, Jennie
2013-12-01
We introduce a new spike classification algorithm based on frequency domain features of the spike snippets. The goals for the algorithm are high classification accuracy, a low misclassification rate, ease of implementation, robustness to signal degradation, and objectivity in classification outcomes. In this paper, we propose a spike classification algorithm based on frequency domain features (CFDF). It makes use of the frequency domain contents of the recorded neural waveforms for spike classification. The self-organizing map (SOM) is used as a tool to determine the cluster number intuitively and directly by viewing the SOM output map. After that, spike classification can be easily performed using clustering algorithms such as k-Means. In conjunction with our previously developed multiscale correlation of wavelet coefficients (MCWC) spike detection algorithm, we show that the MCWC and CFDF detection and classification system is robust when tested on several sets of artificial and real neural waveforms. The CFDF is comparable to or outperforms some popular automatic spike classification algorithms on artificial and real neural data. The detection and classification of neural action potentials, or neural spikes, is an important step in single-unit-based neuroscientific studies and applications. After the detection of neural snippets potentially containing neural spikes, a robust classification algorithm is applied to the snippets to (1) group similar waveforms into one class so that they can be attributed to one unit, and (2) remove noise snippets that do not contain any features of an action potential. Usually, a snippet is a small 2 or 3 ms segment of the recorded waveform, and differences in neural action potentials can be subtle from one unit to another. Therefore, a robust, high performance classification system like the CFDF is necessary. In addition, the proposed algorithm does not require any assumptions on the statistical properties of the noise and proves to be robust under noise contamination.
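A reduced sketch of the frequency-domain idea: compute magnitude spectra of spike snippets and cluster them. Here k-Means with an assumed cluster count stands in for the SOM-guided choice described above, and the synthetic snippets are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
fs, width = 24000, 48                 # 2 ms snippets at 24 kHz

# Two synthetic spike shapes plus pure-noise snippets.
t = np.arange(width) / fs
unit_a = -np.exp(-(((t - 1e-3) / 2e-4) ** 2))
unit_b = 10.0 * np.diff(
    np.exp(-(((np.arange(width + 1) / fs - 1e-3) / 3e-4) ** 2)))
snippets = np.vstack(
    [unit_a + 0.05 * rng.standard_normal(width) for _ in range(100)]
    + [unit_b + 0.05 * rng.standard_normal(width) for _ in range(100)]
    + [0.05 * rng.standard_normal(width) for _ in range(50)])

# Frequency-domain features: magnitude spectrum of each snippet.
feats = np.abs(np.fft.rfft(snippets, axis=1))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)
print(np.bincount(labels))            # roughly 100/100/50 in some order
```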