NASA Technical Reports Server (NTRS)
Weiss, Jerold L.; Hsu, John Y.
1986-01-01
The use of a decentralized approach to failure detection and isolation (FDI) in restructurable control systems is examined. This work has produced: (1) a method for evaluating fundamental limits to FDI performance; (2) application using flight-recorded data; (3) a working control element FDI system with maximal sensitivity to critical control element failures; (4) extensive testing on realistic simulations; and (5) a detailed design methodology involving parameter optimization (with respect to model uncertainties) and sensitivity analyses. This project has concentrated on detection and isolation of generic control element failures, since these failures frequently lead to emergency conditions and since knowledge of remaining control authority is essential for control system redesign. The failures are generic in the sense that no temporal failure signature information was assumed. Thus, various forms of functional failures are treated in a unified fashion. Such a treatment results in a robust FDI system (i.e., one that covers all failure modes) but sacrifices some performance when detailed failure signature information is known, useful, and employed properly. It was assumed throughout that all sensors are validated (i.e., contain only in-spec errors) and that only the first failure of a single control element needs to be detected and isolated. The FDI system which has been developed will handle a class of multiple failures.
Operational Failures and Interruptions in Hospital Nursing
Tucker, Anita L; Spear, Steven J
2006-01-01
Objective: To describe the work environment of hospital nurses with particular focus on the performance of work systems supplying information, materials, and equipment for patient care. Data Sources: Primary observation, semistructured interviews, and surveys of hospital nurses. Study Design: We sampled a cross-sectional group of six U.S. hospitals to examine the frequency of work system failures and their impact on nurse productivity. Data Collection: We collected minute-by-minute data on the activities of 11 nurses. In addition, we conducted interviews with six of these nurses using questions related to obstacles to care. Finally, we created and administered two surveys in 48 nursing units, one for nurses and one for managers, asking about the frequency of specific work system failures. Principal Findings: Nurses we observed experienced an average of 8.4 work system failures per 8-hour shift. The five most frequent types of failures, accounting for 6.4 of these obstacles, involved medications, orders, supplies, staffing, and equipment. Survey questions asking nurses how frequently they experienced these five categories of obstacles yielded similar frequencies. For an average 8-hour shift, the average task time was only 3.1 minutes, and in spite of this, nurses were interrupted mid-task an average of eight times per shift. Conclusions: Our findings suggest that nurse effectiveness can be increased by creating improvement processes triggered by the occurrence of work system failures, with the goal of reducing future occurrences. Second, given that nursing work is fragmented and unpredictable, designing processes that are robust to interruption can help prevent errors. PMID:16704505
Lunar Module Electrical Power System Design Considerations and Failure Modes
NASA Technical Reports Server (NTRS)
Interbartolo, Michael
2009-01-01
This slide presentation reviews the design and redesign considerations of the Apollo lunar module electrical power system. Included are graphics showing the lunar module power system. The presentation describes the in-flight failures and the lessons learned from them.
The WorkPlace distributed processing environment
NASA Technical Reports Server (NTRS)
Ames, Troy; Henderson, Scott
1993-01-01
Real time control problems require robust, high performance solutions. Distributed computing can offer high performance through parallelism and robustness through redundancy. Unfortunately, implementing distributed systems with these characteristics places a significant burden on the applications programmers. Goddard Code 522 has developed WorkPlace to alleviate this burden. WorkPlace is a small, portable, embeddable network interface which automates message routing, failure detection, and re-configuration in response to failures in distributed systems. This paper describes the design and use of WorkPlace, and its application in the construction of a distributed blackboard system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayo, Jackson R.; Chen, Frank Xiaoxiao; Pebay, Philippe Pierre
2010-06-01
Effective failure prediction and mitigation strategies in high-performance computing systems could provide huge gains in resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates these using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors. While many research studies have quantified the expected impact of growing system size, and the associated shortened mean time to failure (MTTF), on application performance in large-scale high-performance computing (HPC) platforms, there has been little if any work to quantify the possible gains from predicting system resource failures with significant but imperfect accuracy. This possibly stems from HPC system complexity and the fact that, to date, no one has established any good predictors of failure in these systems. Our work in the OVIS project aims to discover these predictors via a variety of data collection techniques and statistical analysis methods that yield probabilistic predictions. The question then is, 'How good or useful are these predictions?' We investigate methods for answering this question in a general setting, and illustrate them using a specific failure predictor discovered on a production system at Sandia.
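As a rough illustration of the cost-benefit question this abstract raises (not the OVIS methodology itself), the Python sketch below compares machine time lost under purely periodic checkpointing against checkpointing driven by an imperfect failure predictor. The MTTF, checkpoint cost, recall, and false-positive rate are all hypothetical, and the square-root checkpoint interval is the textbook first-order (Daly) approximation.

```python
import math, random

random.seed(1)
MTTF = 24.0      # hours between node failures (hypothetical)
CKPT = 0.1       # hours per checkpoint
RECALL = 0.7     # fraction of failures the predictor warns about in time
FP_RATE = 0.05   # false warnings per hour; each triggers one extra checkpoint
HOURS = 10_000.0

def lost_time(use_predictor):
    tau = math.sqrt(2.0 * CKPT * MTTF)         # first-order periodic interval
    t, lost = 0.0, 0.0
    while t < HOURS:
        gap = random.expovariate(1.0 / MTTF)   # time to next failure
        lost += (gap / tau) * CKPT             # periodic checkpoint overhead
        if use_predictor:
            lost += FP_RATE * gap * CKPT       # wasted false-positive checkpoints
            if random.random() < RECALL:
                lost += CKPT                   # warned: checkpoint, lose almost nothing
            else:
                lost += random.uniform(0, tau) # missed: rework since last checkpoint
        else:
            lost += random.uniform(0, tau)     # every failure costs rework
        t += gap
    return lost

print("baseline  lost hours:", round(lost_time(False), 1))
print("predictor lost hours:", round(lost_time(True), 1))
```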
DEPEND - A design environment for prediction and evaluation of system dependability
NASA Technical Reports Server (NTRS)
Goswami, Kumar K.; Iyer, Ravishankar K.
1990-01-01
The development of DEPEND, an integrated simulation environment for the design and dependability analysis of fault-tolerant systems, is described. DEPEND models both hardware and software components at a functional level, and allows automatic failure injection to assess system performance and reliability. It relieves the user of the work needed to inject failures, maintain statistics, and output reports. The automatic failure injection scheme is geared toward evaluating a system under high stress (workload) conditions. The failures that are injected can affect both hardware and software components. To illustrate the capability of the simulator, a distributed system which employs a prediction-based, dynamic load-balancing heuristic is evaluated. Experiments were conducted to determine the impact of failures on system performance and to identify the failures to which the system is especially susceptible.
Estimating distributions with increasing failure rate in an imperfect repair model.
Kvam, Paul H; Singh, Harshinder; Whitaker, Lyn R
2002-03-01
A failed system is repaired minimally if after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate. Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown-Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability, and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data in the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators are consistent and show distinct differences from the ordinary MLEs. Simulation results suggest significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.
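To make the Brown-Proschan model concrete, here is a small simulation sketch (hypothetical Weibull parameters, not the nuclear-industry data): after each failure the unit is repaired perfectly with probability p (its age resets to zero) and minimally otherwise (its age is unchanged), with failure times drawn from the age-conditional Weibull distribution. A shape parameter above 1 gives the increasing failure rate (IFR) property the estimators exploit.

```python
import math, random

random.seed(0)
SHAPE, SCALE = 2.5, 100.0   # hypothetical Weibull baseline; shape > 1 means IFR
P_PERFECT = 0.3             # Brown-Proschan: perfect repair w.p. p, else minimal

def next_failure_age(age):
    """Sample the unit's age at next failure, conditional on surviving to 'age'."""
    u = random.random()
    return SCALE * ((age / SCALE) ** SHAPE - math.log(u)) ** (1.0 / SHAPE)

def simulate(horizon):
    age, clock, times = 0.0, 0.0, []
    while True:
        fail_age = next_failure_age(age)
        clock += fail_age - age
        if clock > horizon:
            return times
        times.append(clock)
        # perfect repair resets age to zero; minimal repair keeps the unit as old as it was
        age = 0.0 if random.random() < P_PERFECT else fail_age

failures = simulate(1000.0)
print(len(failures), "failures; first five at", [round(t, 1) for t in failures[:5]])
```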
Failure propagation in multi-cell lithium ion batteries
Lamb, Joshua; Orendorff, Christopher J.; Steele, Leigh Anna M.; ...
2014-10-22
Traditionally, safety and impact of failure concerns of lithium ion batteries have dealt with the field failure of single cells. However, large and complex battery systems require the consideration of how a single cell failure will impact the system as a whole. Initial failure that leads to the thermal runaway of other cells within the system creates a much more serious condition than the failure of a single cell. This work examines the behavior of small modules of cylindrical and stacked pouch cells after thermal runaway is induced in a single cell through a nail penetration trigger [1] within the module. Cylindrical cells are observed to be less prone to propagate, if failure propagates at all, owing to the limited contact between neighboring cells. However, the electrical connectivity is found to be impactful, as the 10S1P cylindrical cell module did not show failure propagation through the module, while the 1S10P module had an energetic thermal runaway consuming the module minutes after the initiating failure trigger. Modules built using pouch cells conversely showed the impact of strong heat transfer between cells. In this case, a large surface area of the cells was in direct contact with its neighbors, allowing failure to propagate through the entire battery within 60-80 seconds for all configurations (parallel or series) tested. This work demonstrates the increased severity possible when a point failure impacts the surrounding battery system.
On-board fault management for autonomous spacecraft
NASA Technical Reports Server (NTRS)
Fesq, Lorraine M.; Stephan, Amy; Doyle, Susan C.; Martin, Eric; Sellers, Suzanne
1991-01-01
The dynamic nature of the Cargo Transfer Vehicle's (CTV) mission and the high level of autonomy required mandate a complete fault management system capable of operating under uncertain conditions. Such a fault management system must take into account the current mission phase and the environment (including the target vehicle), as well as the CTV's state of health. This level of capability is beyond the scope of current on-board fault management systems. This presentation will discuss work in progress at TRW to apply artificial intelligence to the problem of on-board fault management. The goal of this work is to develop fault management systems that can meet the needs of spacecraft that have long-range autonomy requirements. We have implemented a model-based approach to fault detection and isolation that does not require explicit characterization of failures prior to launch. It is thus able to detect failures that were not considered in the failure modes and effects analysis. We have applied this technique to several different subsystems and tested our approach against both simulations and an electrical power system hardware testbed. We present findings from simulation and hardware tests which demonstrate the ability of our model-based system to detect and isolate failures, and describe our work in porting the Ada version of this system to a flight-qualified processor. We also discuss current research aimed at expanding our system to monitor the entire spacecraft.
NASA Technical Reports Server (NTRS)
McCarty, John P.; Lyles, Garry M.
1997-01-01
Propulsion system quality is defined in this paper as having high reliability; that is, quality is a high probability of within-tolerance performance or operation. Since failures are out-of-tolerance performance, the probability of failures and their occurrence is the difference between high and low quality systems. Failures can be described at three levels: the system failure (the detectable end of a failure), the failure mode (the failure process), and the failure cause (the start). Failure causes can be evaluated and classified by type. The results of typing flight history failures show that most failures are in unrecognized modes and result from human error or noise; that is, failures are how engineers learn how things really work. Although the study was based on US launch vehicles, a sampling of failures from other countries indicates the finding has broad application. The parameters of the design of a propulsion system are not single valued, but have dispersions associated with the manufacturing of parts. If the dispersions are large relative to tolerances, many tests are needed to find failures, which could contribute to the large number of failures in unrecognized modes.
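To make the dispersion-versus-tolerance point concrete, here is a small Monte Carlo sketch with invented numbers (not from the paper): as manufacturing dispersion widens relative to a fixed tolerance band, the probability of out-of-tolerance operation grows, and the reciprocal of that probability is roughly the number of tests needed to observe one failure.

```python
import random

random.seed(0)

def failure_prob(sigma, n=200_000):
    """P(out-of-tolerance) for a normally dispersed parameter, tolerance band +/- 1."""
    fails = sum(abs(random.gauss(0.0, sigma)) > 1.0 for _ in range(n))
    return fails / n

for sigma in (0.25, 0.35, 0.5):
    p = failure_prob(sigma)
    tests = "inf" if p == 0 else f"{1 / p:.0f}"
    print(f"dispersion/tolerance = {sigma}: P(failure) ~ {p:.4f}, ~{tests} tests per failure")
```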
Military Health Service System Ambulatory Work Unit (AWU).
1988-04-01
[Table-of-contents fragment from the report appendix: distribution screen pass/failure statistics and descriptive statistics for the Ambulatory Work Unit, covering the Neurosurgery Clinic (BBC) and the Ophthalmology Clinic (BBD).]
NASA Technical Reports Server (NTRS)
Davis, Robert N.; Polites, Michael E.; Trevino, Luis C.
2004-01-01
This paper details a novel scheme for autonomous component health management (ACHM) with failed actuator detection and failed sensor detection, identification, and avoidance. This new scheme has features that far exceed the performance of systems with triple-redundant sensing and voting, yet requires fewer sensors and could be applied to any system with redundant sensing. Relevant background to the ACHM scheme is provided, and the simulation results for the application of that scheme to a single-axis spacecraft attitude control system with a 3rd-order plant and dual-redundant measurement of system states are presented. ACHM fulfills key functions needed by an integrated vehicle health monitoring (IVHM) system. It is autonomous and adaptive; works in real time; provides optimal state estimation; identifies failed components; avoids failed components; reconfigures for multiple failures; reconfigures for intermittent failures; works for hard-over, soft, and zero-output failures; and works for both open- and closed-loop systems. The ACHM scheme combines a prefilter, which generates preliminary state estimates, detects and identifies failed sensors and actuators, and avoids the use of failed sensors in state estimation, with a fixed-gain Kalman filter that generates optimal state estimates and provides model-based state estimates that form an integral part of the failure detection logic. The results show that ACHM successfully isolates multiple persistent and intermittent hard-over, soft, and zero-output failures. It is now ready to be tested on a computer model of an actual system.
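As a much-simplified illustration of this residual-based idea (not the ACHM design itself), the sketch below runs a fixed-gain observer on a scalar plant with dual-redundant sensors; a sensor whose residual against the model-based estimate exceeds a threshold is flagged and excluded from the estimate. All dynamics, gains, and thresholds are invented, and reinstatement logic for intermittent failures is omitted.

```python
import random

random.seed(2)
dt, a, L, THRESH = 0.01, -0.5, 2.0, 0.5
x, xhat = 1.0, 1.0                # true state and model-based estimate
trusted = [True, True]            # dual-redundant sensor health flags

for k in range(1000):
    t = k * dt
    u = 1.0
    x += dt * (a * x + u)                            # true plant
    ys = [x + random.gauss(0.0, 0.05) for _ in range(2)]
    if t >= 5.0:
        ys[1] = 8.0                                  # sensor 2 hard-over failure
    for i, y in enumerate(ys):                       # residuals drive detection
        if trusted[i] and abs(y - xhat) > THRESH:
            trusted[i] = False
            print(f"t = {t:.2f} s: sensor {i + 1} flagged failed and excluded")
    good = [y for y, ok in zip(ys, trusted) if ok]
    y_used = sum(good) / len(good)                   # fuse only trusted sensors
    xhat += dt * (a * xhat + u) + L * dt * (y_used - xhat)  # fixed-gain update

print(f"final estimate {xhat:.3f} vs true state {x:.3f}")
```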
Generic Sensor Failure Modeling for Cooperative Systems.
Jäger, Georg; Zug, Sebastian; Casimiro, António
2018-03-20
The advent of cooperative systems entails a dynamic composition of their components. As this contrasts with current, statically composed systems, new approaches for maintaining their safety are required. In that endeavor, we propose an integration step that evaluates the failure model of shared information in relation to an application's fault tolerance and thereby promises maintainability of such a system's safety. However, it also poses new requirements on failure models, which are not fulfilled by state-of-the-art approaches. Consequently, this work presents a mathematically defined generic failure model as well as a processing chain for automatically extracting such failure models from empirical data. By examining data of a Sharp GP2D12 distance sensor, we show that the generic failure model not only fulfills the predefined requirements, but also models failure characteristics appropriately when compared to traditional techniques.
System for Anomaly and Failure Detection (SAFD) system development
NASA Technical Reports Server (NTRS)
Oreilly, D.
1992-01-01
This task specified developing the hardware and software necessary to implement the System for Anomaly and Failure Detection (SAFD) algorithm, developed under Technology Test Bed (TTB) Task 21, on the TTB engine stand. This effort involved building two units: one unit to be installed in the Block II Space Shuttle Main Engine (SSME) Hardware Simulation Lab (HSL) at Marshall Space Flight Center (MSFC), and one unit to be installed at the TTB engine stand. Rocketdyne personnel from the HSL performed the task. The SAFD algorithm was developed as an improvement over the current redline system used in the Space Shuttle Main Engine Controller (SSMEC). Simulation tests and execution against previous hot fire tests demonstrated that the SAFD algorithm can detect engine failure as much as tens of seconds before the redline system recognized the failure. Although the current algorithm only operates during steady state conditions (engine not throttling), work is underway to expand the algorithm to work during transient conditions.
Quantitative method of medication system interface evaluation.
Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F
2007-01-01
The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of estimated failure rates provided quantitative data for fault analysis. The authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures, so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called the Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
The moral failure of the patriarchy.
Watson, J
1990-01-01
The present health care system operates within a larger structure that now has to be openly acknowledged as patriarchal: caring is viewed as women's work, which is not valued and which is considered less important than the work of men. The moral failure of this worldview is evident in such health care crises as care of the homeless and those with AIDS, and dramatic rises in rates of infant mortality among the poor. This failure demands a health care revolution--a revolution in the sense that society must give up that which no longer works.
Fault management for the Space Station Freedom control center
NASA Technical Reports Server (NTRS)
Clark, Colin; Jowers, Steven; Mcnenny, Robert; Culbert, Chris; Kirby, Sarah; Lauritsen, Janet
1992-01-01
This paper describes model based reasoning fault isolation in complex systems using automated digraph analysis. It discusses the use of the digraph representation as the paradigm for modeling physical systems and a method for executing these failure models to provide real-time failure analysis. It also discusses the generality, ease of development and maintenance, complexity management, and susceptibility to verification and validation of digraph failure models. It specifically describes how a NASA-developed digraph evaluation tool and an automated process working with that tool can identify failures in a monitored system when supplied with one or more fault indications. This approach is well suited to commercial applications of real-time failure analysis in complex systems because it is both powerful and cost effective.
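As a toy illustration of digraph-based fault isolation (not the NASA-developed tool itself), the sketch below encodes a hypothetical failure-propagation digraph as an adjacency map and, given one or more fault indications, returns the components whose downstream effects cover all of them. Node names and edges are invented.

```python
from collections import defaultdict

# Hypothetical failure-propagation digraph: edge a -> b means "failure of a can cause b".
edges = [("pump", "low_pressure"), ("low_pressure", "alarm_P"),
         ("valve", "low_pressure"), ("sensor_P", "alarm_P"),
         ("pump", "high_temp"), ("high_temp", "alarm_T")]
graph = defaultdict(list)
for src, dst in edges:
    graph[src].append(dst)

def reachable(node):
    """All nodes reachable from 'node', i.e. its potential downstream effects."""
    seen, stack = set(), [node]
    while stack:
        for m in graph[stack.pop()]:
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen

def candidate_sources(indications, all_nodes):
    """Nodes whose downstream effects cover every observed fault indication."""
    return [n for n in all_nodes if set(indications) <= reachable(n) | {n}]

nodes = {n for e in edges for n in e}
print(candidate_sources({"alarm_P", "alarm_T"}, nodes))   # -> ['pump']
```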
Comprehension and retrieval of failure cases in airborne observatories
NASA Technical Reports Server (NTRS)
Alvarado, Sergio J.; Mock, Kenrick J.
1995-01-01
This paper describes research dealing with the computational problem of analyzing and repairing failures of electronic and mechanical systems of telescopes in NASA's airborne observatories, such as KAO (Kuiper Airborne Observatory) and SOFIA (Stratospheric Observatory for Infrared Astronomy). The research has resulted in the development of an experimental system that acquires knowledge of failure analysis from input text, and answers questions regarding failure detection and correction. The system's design builds upon previous work on text comprehension and question answering, including: knowledge representation for conceptual analysis of failure descriptions, strategies for mapping natural language into conceptual representations, case-based reasoning strategies for memory organization and indexing, and strategies for memory search and retrieval. These techniques have been combined into a model that accounts for: (a) how to build a knowledge base of system failures and repair procedures from descriptions that appear in telescope-operators' logbooks and FMEA (failure modes and effects analysis) manuals; and (b) how to use that knowledge base to search and retrieve answers to questions about causes and effects of failures, as well as diagnosis and repair procedures. This model has been implemented in FANSYS (Failure ANalysis SYStem), a prototype text comprehension and question answering program for failure analysis.
Strategies of learning from failure.
Edmondson, Amy C
2011-04-01
Many executives believe that all failure is bad (although it usually provides lessons) and that learning from it is pretty straightforward. The author, a professor at Harvard Business School, thinks both beliefs are misguided. In organizational life, she says, some failures are inevitable and some are even good. And successful learning from failure is not simple: it requires context-specific strategies. But first leaders must understand how the blame game gets in the way and work to create an organizational culture in which employees feel safe admitting or reporting on failure. Failures fall into three categories: preventable ones in predictable operations, which usually involve deviations from spec; unavoidable ones in complex systems, which may arise from unique combinations of needs, people, and problems; and intelligent ones at the frontier, where "good" failures occur quickly and on a small scale, providing the most valuable information. Strong leadership can build a learning culture, one in which failures large and small are consistently reported and deeply analyzed, and opportunities to experiment are proactively sought. Executives commonly and understandably worry that taking a sympathetic stance toward failure will create an "anything goes" work environment. They should instead recognize that failure is inevitable in today's complex work organizations.
NASA Technical Reports Server (NTRS)
Tao, Gang; Joshi, Suresh M.
2008-01-01
In this paper, the problem of controlling systems with failures and faults is introduced, and an overview of recent work on direct adaptive control for compensation of uncertain actuator failures is presented. Actuator failures may be characterized by some unknown system inputs being stuck at some unknown (fixed or varying) values at unknown time instants, that cannot be influenced by the control signals. The key task of adaptive compensation is to design the control signals in such a manner that the remaining actuators can automatically and seamlessly take over for the failed ones, and achieve desired stability and asymptotic tracking. A certain degree of redundancy is necessary to accomplish failure compensation. The objective of adaptive control design is to effectively use the available actuation redundancy to handle failures without the knowledge of the failure patterns, parameters, and time of occurrence. This is a challenging problem because failures introduce large uncertainties in the dynamic structure of the system, in addition to parametric uncertainties and unknown disturbances. The paper addresses some theoretical issues in adaptive actuator failure compensation: actuator failure modeling, redundant actuation requirements, plant-model matching, error system dynamics, adaptation laws, and stability, tracking, and performance analysis. Adaptive control designs can be shown to effectively handle uncertain actuator failures without explicit failure detection. Some open technical challenges and research problems in this important research area are discussed.
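The following toy simulation, which is only loosely inspired by the framework surveyed here and is not the paper's design, shows the core idea on a scalar plant with two redundant actuators: when one actuator sticks at an unknown value, a gradient adaptation law on a bias term lets the healthy actuator take over without explicit failure detection. The plant, gains, and stuck value are all invented.

```python
# Scalar plant xdot = -x + u1 + u2 with two redundant actuators.
# Actuator 2 sticks at an unknown constant at t = 5 s; an adaptive bias on
# actuator 1 absorbs the resulting mismatch so tracking error returns to zero.
dt, r, gamma, u_stuck = 0.001, 1.0, 2.0, 0.3
x, theta = 0.0, 0.0
for k in range(20000):                               # simulate 20 s
    t = k * dt
    u_nom = -x + 2.0 * r                             # total effort for xdot = -2(x - r)
    u1 = 0.5 * u_nom + theta                         # healthy actuator + adaptive bias
    u2 = u_stuck if t >= 5.0 else 0.5 * u_nom        # actuator 2 fails (sticks) at 5 s
    x += dt * (-x + u1 + u2)                         # Euler step of the plant
    theta += dt * (-gamma * (x - r))                 # gradient law on tracking error
print(f"x = {x:.4f} (target {r}), adaptive bias = {theta:.4f}")
```

For these numbers the bias converges to 0.5*r - u_stuck = 0.2, exactly the share the stuck actuator stopped providing.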
Mini-Ckpts: Surviving OS Failures in Persistent Memory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiala, David; Mueller, Frank; Ferreira, Kurt Brian
Concern is growing in the high-performance computing (HPC) community about the reliability of future extreme-scale systems. Current efforts have focused on application fault-tolerance rather than the operating system (OS), despite the fact that recent studies have suggested that failures in OS memory are more likely. The OS is critical to the correct and efficient operation of the node and the processes it governs -- and in HPC also for any other nodes a parallelized application runs on and communicates with: any single node failure generally forces all processes of this application to terminate due to tight communication in HPC. Therefore, the OS itself must be capable of tolerating failures. In this work, we introduce mini-ckpts, a framework which enables application survival despite the occurrence of a fatal OS failure or crash. Mini-ckpts achieves this tolerance by ensuring that the critical data describing a process is preserved in persistent memory prior to the failure. Following the failure, the OS is rejuvenated via a warm reboot and the application continues execution, effectively making the failure and restart transparent. The mini-ckpts rejuvenation and recovery process is measured to take between three to six seconds and has a failure-free overhead of 3-5% for a number of key HPC workloads. In contrast to current fault-tolerance methods, this work ensures that the operating and runtime system can continue in the presence of faults. This is a much finer-grained and dynamic method of fault-tolerance than the current, coarse-grained, application-centric methods. Handling faults at this level has the potential to greatly reduce overheads and enables mitigation of additional fault scenarios.
Sophisticated Calculation of the 1oo4-architecture for Safety-related Systems Conforming to IEC61508
NASA Astrophysics Data System (ADS)
Hayek, A.; Bokhaiti, M. Al; Schwarz, M. H.; Boercsoek, J.
2012-05-01
With the publication and enforcement of the IEC 61508 standard for safety-related systems, recent system architectures have been presented and evaluated. Among a number of techniques and measures for the evaluation of the safety integrity level (SIL) of safety-related systems, measures such as reliability block diagrams and Markov models are used to analyze the probability of failure on demand (PFD) and the mean time to failure (MTTF) in conformance with IEC 61508. The current paper deals with the quantitative analysis of the novel 1oo4 (one-out-of-four) architecture presented in recent work, and sophisticated calculations for the required parameters are introduced. The 1oo4 architecture represents an advanced safety architecture based on on-chip redundancy and is 3-failure safe: at least one of the four channels has to work correctly in order to trigger the safety function.
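For orientation, the standard first-order textbook approximations for a 1ooN architecture under a constant dangerous undetected failure rate and periodic proof testing look like this; the rate and test interval below are hypothetical, and the paper's own calculations are more sophisticated than these formulas.

```python
# First-order textbook approximations for a 1oo4 channel architecture, assuming
# a constant dangerous undetected failure rate and a fixed proof-test interval.
lam = 1e-6     # dangerous failures per hour (hypothetical)
T = 8760.0     # proof-test interval: one year in hours

n = 4
pfd_avg = (lam * T) ** n / (n + 1)                   # 1ooN: PFD_avg ~ (lam*T)^N / (N+1)
mttf = sum(1.0 / k for k in range(1, n + 1)) / lam   # parallel system, any 1 of N suffices

print(f"PFD_avg ~ {pfd_avg:.3e}")                    # extremely low for these inputs
print(f"MTTF ~ {mttf:.3e} hours")
```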
NASA Technical Reports Server (NTRS)
Mccann, Robert S.; Spirkovska, Lilly; Smith, Irene
2013-01-01
Integrated System Health Management (ISHM) technologies have advanced to the point where they can provide significant automated assistance with real-time fault detection, diagnosis, guided troubleshooting, and failure consequence assessment. To exploit these capabilities in actual operational environments, however, ISHM information must be integrated into operational concepts and associated information displays in ways that enable human operators to process and understand the ISHM system information rapidly and effectively. In this paper, we explore these design issues in the context of an advanced caution and warning system (ACAWS) for next-generation crewed spacecraft missions. User interface concepts for depicting failure diagnoses, failure effects, redundancy loss, "what-if" failure analysis scenarios, and resolution of ambiguity groups are discussed and illustrated.
Thermal barrier coating life prediction model
NASA Technical Reports Server (NTRS)
Pilsner, B. H.; Hillery, R. V.; Mcknight, R. L.; Cook, T. S.; Kim, K. S.; Duderstadt, E. C.
1986-01-01
The objectives of this program are to determine the predominant modes of degradation of a plasma sprayed thermal barrier coating system, and then to develop and verify life prediction models accounting for these degradation modes. The program is divided into two phases, each consisting of several tasks. The work in Phase 1 is aimed at identifying the relative importance of the various failure modes, and developing and verifying life prediction model(s) for the predominant mode for a thermal barrier coating system. Two possible predominant failure mechanisms being evaluated are bond coat oxidation and bond coat creep. The work in Phase 2 will develop design-capable, causal life prediction models for thermomechanical and thermochemical failure modes, and for the exceptional conditions of foreign object damage and erosion.
An immune-inspired swarm aggregation algorithm for self-healing swarm robotic systems.
Timmis, J; Ismail, A R; Bjerknes, J D; Winfield, A F T
2016-08-01
Swarm robotics is concerned with the decentralised coordination of multiple robots having only limited communication and interaction abilities. Although fault tolerance and robustness to individual robot failures have often been used to justify the use of swarm robotic systems, recent studies have shown that swarm robotic systems are susceptible to certain types of failure. In this paper we propose an approach to self-healing swarm robotic systems and take inspiration from the process of granuloma formation, a process of containment and repair found in the immune system. We use a case study of a swarm performing team work, where previous works have demonstrated that partially failed robots have the most detrimental effect on overall swarm behaviour. We have developed an immune-inspired approach that permits recovery from certain failure modes during operation of the swarm, overcoming issues that affect swarm behaviour associated with partially failed robots.
[Examination of safety improvement by failure record analysis that uses reliability engineering].
Kato, Kyoichi; Sato, Hisaya; Abe, Yoshihisa; Ishimori, Yoshiyuki; Hirano, Hiroshi; Higashimura, Kyoji; Amauchi, Hiroshi; Yanakita, Takashi; Kikuchi, Kei; Nakazawa, Yasuo
2010-08-20
We verified how maintenance checks of medical systems, including the start-of-work check and the end-of-day check, contribute to preventive maintenance and safety improvement. In this research, data on device failures in multiple facilities were collected, and the trouble repair records were analyzed using reliability engineering techniques. Data were analyzed for the systems used in eight hospitals (8 general systems, 6 Angio systems, 11 CT systems, 8 MRI systems, 8 RI systems, and 9 radiation therapy systems). The data collection period was the nine months from April to December 2008. The items analyzed included: (1) mean time between failures (MTBF); (2) mean time to repair (MTTR); (3) mean down time (MDT); (4) the number of failures found by the morning check; and (5) failure occurrence time by modality. Introducing reliability engineering made it possible to understand the classification of breakdowns per device, their incidence, and their trends. Analysis, evaluation, and feedback on the failure history are useful to keep downtime to a minimum and to ensure safety.
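As a minimal sketch of the reliability-engineering bookkeeping behind these metrics (hypothetical log entries, not the study's data), MTBF, MTTR, and MDT can be computed from a repair record like so:

```python
# Hypothetical repair log: (failure_time_h, repair_start_h, repair_end_h) per event.
log = [(120.0, 121.0, 125.0), (400.0, 402.0, 403.5), (690.0, 690.5, 700.0)]
observation_h = 1000.0

n = len(log)
mtbf = observation_h / n                              # mean time between failures
mttr = sum(end - start for _, start, end in log) / n  # active repair time only
mdt  = sum(end - fail for fail, _, end in log) / n    # failure until back in service

print(f"MTBF = {mtbf:.1f} h, MTTR = {mttr:.1f} h, MDT = {mdt:.1f} h")
```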
Reliability and Maintainability Analysis for the Amine Swingbed Carbon Dioxide Removal System
NASA Technical Reports Server (NTRS)
Dunbar, Tyler
2016-01-01
I have performed a reliability & maintainability analysis for the Amine Swingbed payload system. The Amine Swingbed is a carbon dioxide removal technology that has gone through 2,400 hours of International Space Station on-orbit use between 2013 and 2016. While the Amine Swingbed is currently an experimental payload system, the Amine Swingbed may be converted to system hardware. If the Amine Swingbed becomes system hardware, it will supplement the Carbon Dioxide Removal Assembly (CDRA) as the primary CO2 removal technology on the International Space Station. NASA is also considering using the Amine Swingbed as the primary carbon dioxide removal technology for future extravehicular mobility units and for the Orion, which will be used for the Asteroid Redirect and Journey to Mars missions. The qualitative component of the reliability and maintainability analysis is a Failure Modes and Effects Analysis (FMEA). In the FMEA, I have investigated how individual components in the Amine Swingbed may fail, and what the worst case scenario is should a failure occur. The significant failure effects are the loss of ability to remove carbon dioxide, the formation of ammonia due to chemical degradation of the amine, and loss of atmosphere because the Amine Swingbed uses the vacuum of space to regenerate the amine. In the quantitative component of the reliability and maintainability analysis, I have assumed a constant failure rate for both electronic and nonelectronic parts. Using this data, I have created a Poisson distribution to predict the failure rate of the Amine Swingbed as a whole. I have determined a mean time to failure for the Amine Swingbed to be approximately 1,400 hours. The observed mean time to failure for the system is between 600 and 1,200 hours. This range includes initial testing of the Amine Swingbed, as well as software faults that are understood to be non-critical. If many of the commercial parts were switched to military-grade parts, the expected mean time to failure would be 2,300 hours. Both calculated mean times to failure for the Amine Swingbed use conservative failure rate models. The observed mean time to failure for CDRA is 2,500 hours. Working on this project and for NASA in general has helped me gain insight into current aeronautics missions, reliability engineering, circuit analysis, and different cultures. Prior to my internship, I did not have a lot of knowledge about the work being performed at NASA. As a chemical engineer, I had not really considered working for NASA as a career path. By engaging in interactions with civil servants, contractors, and other interns, I have learned a great deal about modern challenges that NASA is addressing. My work has helped me develop a knowledge base in safety and reliability that would be difficult to find elsewhere. Prior to this internship, I had not thought about reliability engineering. Now, I have gained a skillset in performing reliability analyses, and understanding the inner workings of a large mechanical system. I have also gained experience in understanding how electrical systems work while I was analyzing the electrical components of the Amine Swingbed. I did not expect to be exposed to as many different cultures as I have while working at NASA. I am referring to both within NASA and the Houston area. NASA employs individuals with a broad range of backgrounds. It has been great to learn from individuals who have highly diverse experiences and outlooks on the world.
In the Houston area, I have come across individuals from different parts of the world. Interacting with such a high number of individuals with significantly different backgrounds has helped me to grow as a person in ways that I did not expect. My time at NASA has opened a window into the field of aeronautics. After earning a bachelor's degree in chemical engineering, I plan to go to graduate school for a PhD in engineering. Prior to coming to NASA, I was not aware of the graduate Pathways program. I intend to apply for the graduate Pathways program as positions are opened up. I would like to pursue future opportunities with NASA, especially as my engineering career progresses.
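A minimal sketch of the constant-failure-rate roll-up described above (with invented part rates, not the Amine Swingbed's actual data): part failure rates add for a series system, the MTTF is the reciprocal of the total rate, and a Poisson distribution gives the probability of seeing k failures in a mission window.

```python
import math

# Hypothetical part failure rates (failures per hour), constant-rate assumption.
part_rates = [1e-4, 2e-4, 1.5e-4, 2.5e-4]
lam = sum(part_rates)            # series system: failure rates add
mttf = 1.0 / lam                 # mean time to failure

t = 1000.0                       # mission hours
p_k = [math.exp(-lam * t) * (lam * t) ** k / math.factorial(k) for k in range(4)]

print(f"system MTTF ~ {mttf:.0f} h")
print("P(k failures in 1000 h):", [f"{p:.3f}" for p in p_k])
```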
Complex Failure Forewarning System - DHS Conference Proceedings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Hively, Lee M; Prowell, Stacy J
2011-01-01
As the critical infrastructures of the United States have become more and more dependent on public and private networks, the potential for widespread national impact resulting from disruption or failure of these networks has also increased. Securing the nation's critical infrastructures requires protecting not only their physical systems but, just as important, the cyber portions of the systems on which they rely. A failure is inclusive of random events, design flaws, and instabilities caused by cyber (and/or physical) attack. One such domain, aging bridges, is used to explain the Complex Structure Failure Forewarning System. We discuss the workings of such a system in the context of the necessary sensors, command and control, and data collection, as well as the cyber security efforts that would support this system. Their application and the implications of this computing architecture are also discussed, with respect to our nation's aging infrastructure.
NASA Technical Reports Server (NTRS)
Vanschalkwyk, Christiaan Mauritz
1991-01-01
Many applications require that a control system be tolerant to the failure of its components. This is especially true for large space-based systems that must work unattended and with long periods between maintenance. Fault tolerance can be obtained by detecting the failure of a control system component, determining which component has failed, and reconfiguring the system so that the failed component is isolated from the controller. Component failure detection experiments that were conducted on an experimental space structure, the NASA Langley Mini-Mast, are presented. Two methodologies for failure detection and isolation (FDI) exist that do not require the specification of failure modes and are applicable to both actuators and sensors. These methods are known as the Failure Detection Filter and the method of Generalized Parity Relations. The latter method was applied to three different sensor types on the Mini-Mast. Failures were simulated in input-output data that were recorded during operation of the Mini-Mast. Both single and double sensor parity relations were tested, and the effect of several design parameters on the performance of these relations is discussed. The detection of actuator failures is also treated. It is shown that in all the cases it is possible to identify the parity relations directly from input-output data. Frequency domain analysis is used to explain the behavior of the parity relations.
de Carvalho, Paulo Victor Rodrigues; Gomes, José Orlando; Huber, Gilbert Jacob; Vidal, Mario Cesar
2009-05-01
A fundamental challenge in improving the safety of complex systems is to understand how accidents emerge in normal working situations, with equipment functioning normally in normally structured organizations. We present a field study of the en route mid-air collision, in the clear afternoon Amazon sky, between a commercial carrier and an executive jet, in which 154 people lost their lives; it illustrates one response to this challenge. Our focus was on how and why the several safety barriers of a well-structured air traffic system melted down, enabling this tragedy to occur without any catastrophic component failure and in a situation where everything was functioning normally. We identify strong consistencies and feedbacks regarding factors of day-to-day system functioning that made monitoring and awareness difficult, and the cognitive strategies that operators have developed to deal with overall system behavior. These findings emphasize the active problem-solving behavior needed in air traffic control work, and highlight how the day-to-day functioning of the system can jeopardize such behavior. An immediate consequence is that safety managers and engineers should review their traditional safety approaches and accident models based on equipment failure probability, linear combinations of failures, rules and procedures, and human errors, in order to deal with complex patterns of coincidence possibilities, unexpected links, resonance among system functions and activities, and system cognition.
Forewarning of Failure in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Hively, Lee M; Prowell, Stacy J
2011-01-01
As the critical infrastructures of the United States have become more and more dependent on public and private networks, the potential for widespread national impact resulting from disruption or failure of these networks has also increased. Securing the nation's critical infrastructures requires protecting not only their physical systems but, just as important, the cyber portions of the systems on which they rely. A failure is inclusive of random events, design flaws, and instabilities caused by cyber (and/or physical) attack. One such domain is failure in critical equipment. A second is aging bridges. We discuss the workings of such a system in the context of the necessary sensors, command and control, and data collection, as well as the cyber security efforts that would support this system. Their application and the implications of this computing architecture are also discussed, with respect to our nation's aging infrastructure.
Low-cost failure sensor design and development for water pipeline distribution systems.
Khan, K; Widdop, P D; Day, A J; Wood, A S; Mounce, S R; Machell, J
2002-01-01
This paper describes the design and development of a new sensor which is low cost to manufacture and install and is reliable in operation with sufficient accuracy, resolution and repeatability for use in newly developed systems for pipeline monitoring and leakage detection. To provide an appropriate signal, the concept of a "failure" sensor is introduced, in which the output is not necessarily proportional to the input, but is unmistakably affected when an unusual event occurs. The design of this failure sensor is based on the water opacity which can be indicative of an unusual event in a water distribution network. The laboratory work and field trials necessary to design and prove out this type of failure sensor are described here. It is concluded that a low-cost failure sensor of this type has good potential for use in a comprehensive water monitoring and management system based on Artificial Neural Networks (ANN).
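The "failure sensor" concept, an output that need not track the measurand but must react unmistakably to unusual events, can be caricatured in a few lines. The sketch below flags opacity readings that deviate strongly from a running baseline; the window, threshold, and data are invented, and the paper's actual system feeds such signals to an ANN rather than a simple z-score test.

```python
import random

random.seed(3)

class FailureSensor:
    """Flags unusual opacity readings instead of measuring opacity precisely."""
    def __init__(self, window=50, z_limit=4.0):
        self.readings, self.window, self.z_limit = [], window, z_limit

    def update(self, opacity):
        xs = self.readings[-self.window:]
        if len(xs) >= 10:
            mean = sum(xs) / len(xs)
            var = sum((x - mean) ** 2 for x in xs) / len(xs)
            if var > 0 and abs(opacity - mean) / var ** 0.5 > self.z_limit:
                return True               # unusual event: assert the failure output
        self.readings.append(opacity)     # only normal readings extend the baseline
        return False

sensor = FailureSensor()
stream = [random.gauss(0.10, 0.01) for _ in range(200)] + [0.35]  # turbidity burst
for i, v in enumerate(stream):
    if sensor.update(v):
        print(f"sample {i}: unusual opacity {v:.2f} flagged")
```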
Full Envelope Reconfigurable Control Design for the X-33 Vehicle
NASA Technical Reports Server (NTRS)
Cotting, M. Christopher; Burken, John J.; Lee, Seung-Hee (Technical Monitor)
2001-01-01
In the event of a control surface failure, the purpose of a reconfigurable control system is to redistribute the control effort among the remaining working surfaces such that satisfactory stability and performance are retained. An Off-line Nonlinear General Constrained Optimization (ONCO) approach was used for the reconfigurable X-33 control design method. Three example failures are shown using a high fidelity 6-DOF simulation: case 1, ascent with a left body flap jammed at 25 deg.; case 2, entry with a right inboard elevon jammed at 25 deg.; and case 3, landing (TAEM) with a left rudder jammed at -30 deg. Failure comparisons between responses with the nominal controller and reconfigurable controllers show the benefits of reconfiguration. Single-jam aerosurface failures were considered, and failure detection and identification is considered accomplished in the actuator controller. The X-33 flight control system will incorporate reconfigurable flight control in the baseline system.
Fundamental Technology Development for Gas-Turbine Engine Health Management
NASA Technical Reports Server (NTRS)
Mercer, Carolyn R.; Simon, Donald L.; Hunter, Gary W.; Arnold, Steven M.; Reveley, Mary S.; Anderson, Lynn M.
2007-01-01
Integrated vehicle health management technologies promise to dramatically improve the safety of commercial aircraft by reducing system and component failures as causal and contributing factors in aircraft accidents. To realize this promise, fundamental technology development is needed to produce reliable health management components. These components include diagnostic and prognostic algorithms, physics-based and data-driven lifing and failure models, sensors, and a sensor infrastructure including wireless communications, power scavenging, and electronics. In addition, system assessment methods are needed to effectively prioritize development efforts. Development work is needed throughout the vehicle, but particular challenges are presented by the hot, rotating environment of the propulsion system. This presentation describes current work in the field of health management technologies for propulsion systems for commercial aviation.
Approximation of Failure Probability Using Conditional Sampling
NASA Technical Reports Server (NTRS)
Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.
2008-01-01
In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
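A minimal sketch of the idea with toy geometry (not the authors' bounding constructions): if the failure set F is known to lie inside a bounding set B whose probability is computable analytically, then P(F) = P(B) * P(F|B), and P(F|B) can be estimated by sampling only inside B, where failures are no longer rare.

```python
import math, random

random.seed(4)
CX, CY, R = 0.5, 0.5, 0.01          # failure set F: a small disk (hypothetical)

def fails(x, y):
    return (x - CX) ** 2 + (y - CY) ** 2 <= R ** 2

N = 20_000
# Raw Monte Carlo over the whole unit square: failure is rare, so hits are scarce.
raw = sum(fails(random.random(), random.random()) for _ in range(N)) / N

# Conditional sampling: B = [0.45, 0.55]^2 contains F and has P(B) = 0.01 analytically.
P_B = 0.1 * 0.1
hits = sum(fails(random.uniform(0.45, 0.55), random.uniform(0.45, 0.55)) for _ in range(N))
conditional = P_B * hits / N        # P(F) = P(B) * P(F | B)

print(f"exact P(F)      = {math.pi * R * R:.2e}")
print(f"raw Monte Carlo = {raw:.2e}")
print(f"conditional MC  = {conditional:.2e}")
```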
Epidemic failure detection and consensus for extreme parallelism
Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas; ...
2017-02-01
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
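The logarithmic-scaling claim is easy to see in a toy push-gossip simulation (a generic gossip model, not the paper's three algorithms): each process that knows about a failure tells one random peer per cycle, and the number of cycles until everyone knows grows roughly with the logarithm of the system size.

```python
import math, random

random.seed(5)

def cycles_to_consensus(n):
    """Push gossip: each informed process tells one random peer per cycle."""
    informed = {0}                      # process 0 detects the failure
    cycles = 0
    while len(informed) < n:
        targets = {random.randrange(n) for _ in informed}
        informed |= targets
        cycles += 1
    return cycles

for n in (64, 256, 1024, 4096):
    avg = sum(cycles_to_consensus(n) for _ in range(20)) / 20
    print(f"n = {n:5d}: ~{avg:.1f} cycles (log2 n = {math.log2(n):.1f})")
```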
Decomposition-Based Failure Mode Identification Method for Risk-Free Design of Large Systems
NASA Technical Reports Server (NTRS)
Tumer, Irem Y.; Stone, Robert B.; Roberts, Rory A.; Clancy, Daniel (Technical Monitor)
2002-01-01
When designing products, it is crucial to assure failure- and risk-free operation in the intended operating environment. Failures are typically studied and eliminated as much as possible during the early stages of design. The few failures that go undetected result in unacceptable damage and losses in high-risk applications where public safety is of concern. Published NASA and NTSB accident reports point to a variety of components identified as sources of failures in the reported cases. In previous work, data from these reports were processed and placed in matrix form for all the system components and failure modes encountered, and then manipulated using matrix methods to determine similarities between the different components and failure modes. In this paper, these matrices are represented in the form of a linear combination of failure modes, mathematically formed using Principal Components Analysis (PCA) decomposition. The PCA decomposition results in a low-dimensionality representation of all failure modes and components of interest, represented in a transformed coordinate system. Such a representation opens the way for efficient pattern analysis and prediction of the failure modes with the highest potential risks to the final product, rather than making decisions based on the large space of component and failure mode data. The mathematics of the proposed method are explained first using a simple example problem. The method is then applied to component failure data gathered from helicopter accident reports to demonstrate its potential.
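A minimal numpy sketch of this kind of decomposition (the incidence counts, component names, and failure-mode columns are all invented): a component-by-failure-mode matrix is centered and factored by SVD, and the leading principal-component coordinates place similar components near each other.

```python
import numpy as np

# Hypothetical component-by-failure-mode incidence counts (rows: components;
# columns: fatigue, corrosion, electrical short, wear).
components = ["gearbox", "rotor", "hyd_pump", "wiring"]
X = np.array([[4, 1, 0, 2],
              [5, 0, 0, 3],
              [1, 3, 0, 4],
              [0, 1, 6, 0]], dtype=float)

Xc = X - X.mean(axis=0)                 # center before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = U[:, :2] * s[:2]               # 2-D principal-component coordinates

explained = s**2 / (s**2).sum()
print("variance explained by first two PCs:", np.round(explained[:2], 2))
for name, xy in zip(components, coords):
    print(f"{name:9s} -> ({xy[0]:+.2f}, {xy[1]:+.2f})")
```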
Narrowing the scope of failure prediction using targeted fault load injection
NASA Astrophysics Data System (ADS)
Jordan, Paul L.; Peterson, Gilbert L.; Lin, Alan C.; Mendenhall, Michael J.; Sellers, Andrew J.
2018-05-01
As society becomes more dependent upon computer systems to perform increasingly critical tasks, ensuring that those systems do not fail becomes increasingly important. Many organizations depend heavily on desktop computers for day-to-day operations. Unfortunately, the software that runs on these computers is written by humans and, as such, is still subject to human error and consequent failure. A natural solution is to use statistical machine learning to predict failure. However, since failure is still a relatively rare event, obtaining labelled training data to train these models is not a trivial task. This work presents new simulated fault-inducing loads that extend the focus of traditional fault injection techniques to predict failure in the Microsoft enterprise authentication service and Apache web server. These new fault loads were successful in creating failure conditions that were identifiable using statistical learning methods, with fewer irrelevant faults being created.
Flight Validation of a Metrics Driven L1 Adaptive Control
NASA Technical Reports Server (NTRS)
Dobrokhodov, Vladimir; Kitsios, Ioannis; Kaminer, Isaac; Jones, Kevin D.; Xargay, Enric; Hovakimyan, Naira; Cao, Chengyu; Lizarraga, Mariano I.; Gregory, Irene M.
2008-01-01
The paper addresses the initial steps involved in the development and flight implementation of a new metrics-driven L1 adaptive flight control system. The work concentrates on (i) definition of appropriate control-driven metrics that account for control surface failures; (ii) tailoring the recently developed L1 adaptive controller to the design of adaptive flight control systems that explicitly address these metrics in the presence of control surface failures and dynamic changes under adverse flight conditions; (iii) development of a flight control system for implementation of the resulting algorithms onboard a small UAV; and (iv) conducting a comprehensive flight test program that demonstrates the performance of the developed adaptive control algorithms in the presence of failures. As the initial milestone, the paper concentrates on the adaptive flight system setup and initial efforts addressing the ability of a commercial off-the-shelf AP, with and without adaptive augmentation, to recover from control surface failures.
Cascading failure in scale-free networks with tunable clustering
NASA Astrophysics Data System (ADS)
Zhang, Xue-Jun; Gu, Bo; Guan, Xiang-Min; Zhu, Yan-Bo; Lv, Ren-Li
2016-02-01
Cascading failure is ubiquitous in many networked infrastructure systems, such as power grids, the Internet and air transportation systems. In this paper, we extend the cascading failure model to a scale-free network with tunable clustering and focus on the effect of the clustering coefficient on system robustness. It is found that network robustness undergoes a nonmonotonic transition as the clustering coefficient increases: both highly and lowly clustered networks are fragile under intentional attack, while networks with a moderate clustering coefficient can better resist the spread of cascading failures. We then provide an extensive explanation for this phenomenon from a microscopic point of view and through quantitative analysis. Our work can be useful for the design and optimization of infrastructure systems.
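The flavor of such a model can be sketched with networkx, whose Holme-Kim generator produces scale-free graphs with tunable clustering. The sketch follows the common Motter-Lai convention (load = betweenness centrality, capacity proportional to initial load); the paper's exact model may differ in details:

    import networkx as nx

    def cascade_survivors(n=200, m=2, p=0.4, alpha=0.3, seed=1):
        # Scale-free graph with clustering tuned by triad probability p.
        G = nx.powerlaw_cluster_graph(n, m, p, seed=seed)
        load = nx.betweenness_centrality(G)
        capacity = {v: (1 + alpha) * load[v] for v in G}
        # Intentional attack: remove the highest-degree node.
        G.remove_node(max(G, key=G.degree))
        while True:
            load = nx.betweenness_centrality(G)
            overloaded = [v for v in G if load[v] > capacity[v]]
            if not overloaded:
                return G.number_of_nodes()
            G.remove_nodes_from(overloaded)

    for p in (0.1, 0.4, 0.8):
        print(f"triad probability {p}: {cascade_survivors(p=p)} nodes survive")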
NASA-LaRc Flight-Critical Digital Systems Technology Workshop
NASA Technical Reports Server (NTRS)
Meissner, C. W., Jr. (Editor); Dunham, J. R. (Editor); Crim, G. (Editor)
1989-01-01
The outcome of a Flight-Critical Digital Systems Technology Workshop held at NASA-Langley on December 13-15, 1988, is documented. The purpose of the workshop was to elicit the aerospace industry's view of the issues which must be addressed for the practical realization of flight-critical digital systems. The workshop was divided into three parts: an overview session; three half-day meetings of seven working groups addressing aeronautical and space requirements, system design for validation, failure modes, system modeling, reliable software, and flight test; and a half-day summary of the research issues presented by the working group chairmen. Issues that generated the most consensus across the workshop were: (1) the lack of effective design and validation methods with support tools to enable engineering of highly-integrated, flight-critical digital systems, and (2) the lack of high quality laboratory and field data on system failures, especially those due to the electromagnetic environment (EME).
A Convex Approach to Fault Tolerant Control
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Cox, David E.; Bauer, Frank (Technical Monitor)
2002-01-01
The design of control laws for dynamic systems with the potential for actuator failures is considered in this work. The use of Linear Matrix Inequalities allows more freedom in controller design criteria than is typically available with robust control. This work proposes an extension of fault-scheduled control design techniques that can find a fixed controller with provable performance over a set of plants. Through convexity of the objective function, performance bounds on this set of plants imply performance bounds on the range of systems defined by their convex hull. This is used to incorporate performance bounds for a variety of soft and hard failures into the control design problem.
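The convexity argument can be illustrated with a small LMI feasibility problem: if one quadratic Lyapunov function certifies stability at the vertex plants, it certifies every plant in their convex hull. The plant, gain, and 50%-effectiveness failure below are hypothetical, and cvxpy is assumed for the semidefinite program:

    import cvxpy as cp
    import numpy as np

    A = np.array([[0.0, 1.0], [-2.0, -1.0]])
    B = np.array([[0.0], [1.0]])
    K = np.array([[-1.0, -1.0]])              # fixed controller gain
    # Vertex closed loops: healthy actuator and a 50%-effective actuator.
    vertices = [A + B @ K, A + 0.5 * (B @ K)]

    P = cp.Variable((2, 2), symmetric=True)
    eps = 1e-6
    cons = [P >> eps * np.eye(2)]
    for Acl in vertices:
        M = Acl.T @ P + P @ Acl
        cons.append((M + M.T) / 2 << -eps * np.eye(2))  # symmetrize for cvxpy

    cp.Problem(cp.Minimize(0), cons).solve()
    print("common Lyapunov matrix P:\n", P.value)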
Self-healing failures in the aerial plant
NASA Astrophysics Data System (ADS)
Kiss, Gabor D.
1994-03-01
This account begins in the wee hours of a bitterly cold night in the winter of '92 - '93. A fiber optic transmission system starts to incur unacceptable errors and switches to a protect channel. The system is run at 1550 nm because the route is long enough that it would otherwise require a repeater at 1310 nm. OTDR measurements show high splice losses. By dawn the high-loss splices have partially recovered, so the system is switched back to the original fibers. Failure of the mechanical splices is suspected, the RBOC requests post-mortem assistance from Bellcore, and a team is dispatched immediately to work with RBOC personnel in determining the cause of the failure.
UAV Swarm Behavior Modeling for Early Exposure of Failure Modes
2016-09-01
The MP modeling environment provides a breakdown of all potential event traces. Given that the research questions call for the revelation of potential failure modes, MP was selected as the modeling environment because it provides a substantial set of results and data
Kim, Dong Seong; Park, Jong Sou
2014-01-01
It is important to assess the availability of virtualized systems in IT business infrastructures. Previous work on availability modeling and analysis of virtualized systems used simplified configurations and assumptions in which only one virtual machine (VM) runs on a virtual machine monitor (VMM) hosted on a physical server. In this paper, we show a comprehensive availability model using stochastic reward nets (SRN). The model takes into account (i) the detailed failure and recovery behaviors of multiple VMs, (ii) various other failure modes and corresponding recovery behaviors (e.g., hardware faults, failure and recovery due to Mandelbugs and aging-related bugs), and (iii) dependencies between subcomponents (e.g., between physical host failure and the VMM) in a virtualized server system. We also show numerical analysis of steady-state availability, downtime in hours per year, transaction loss, and sensitivity analysis. This model provides a new finding on how to increase system availability by combining software rejuvenation at both the VM and VMM levels in a wise manner. PMID:25165732
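The full SRN model is beyond a snippet, but its headline outputs, steady-state availability and downtime hours per year, follow the same arithmetic as this deliberately simplified series model with hypothetical MTTF/MTTR values (the paper's model additionally captures the dependencies this sketch ignores):

    # Hypothetical (MTTF, MTTR) pairs in hours for each layer.
    layers = {"host": (4000.0, 8.0), "vmm": (2000.0, 2.0), "vm": (1000.0, 0.5)}

    availability = 1.0
    for name, (mttf, mttr) in layers.items():
        a = mttf / (mttf + mttr)          # steady-state layer availability
        availability *= a                 # series: all layers must be up
        print(f"{name}: A = {a:.6f}")

    print(f"system availability: {availability:.6f}")
    print(f"downtime: {(1 - availability) * 8760:.1f} hours/year")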
Deterministic Reconfigurable Control Design for the X-33 Vehicle
NASA Technical Reports Server (NTRS)
Wagner, Elaine A.; Burken, John J.; Hanson, Curtis E.; Wohletz, Jerry M.
1998-01-01
In the event of a control surface failure, the purpose of a reconfigurable control system is to redistribute the control effort among the remaining working surfaces such that satisfactory stability and performance are retained. Four reconfigurable control design methods were investigated for the X-33 vehicle: Redistributed Pseudo-Inverse, General Constrained Optimization, Automated Failure Dependent Gain Schedule, and Off-line Nonlinear General Constrained Optimization. The Off-line Nonlinear General Constrained Optimization approach was chosen for implementation on the X-33. Two example failures are shown: a right outboard elevon jam at 25 degrees at a Mach 3 entry condition, and a left rudder jam at 30 degrees. Note, however, that reconfigurable control laws have been designed for the entire flight envelope. Comparisons between responses with the nominal controller and the reconfigurable controllers show the benefits of reconfiguration. Single-jam aerosurface failures were considered, and failure detection and identification are assumed to be accomplished in the actuator controller. The X-33 flight control system will incorporate reconfigurable flight control in the baseline system.
Importance of teamwork, communication and culture on failure-to-rescue in the elderly.
Ghaferi, A A; Dimick, J B
2016-01-01
Surgical mortality increases significantly with age. Wide variations in mortality rates across hospitals suggest potential levers for improvement. Failure-to-rescue has been posited as a potential mechanism underlying these differences. A review was undertaken of the literature evaluating surgery, mortality, failure-to-rescue and the elderly. This was followed by a review of ongoing studies and unpublished work aiming to understand better the mechanisms underlying variations in surgical mortality in elderly patients. Multiple hospital macro-system factors, such as nurse staffing, available hospital technology and teaching status, are associated with differences in failure-to-rescue rates. There is emerging literature regarding important micro-system factors associated with failure-to-rescue. These are grouped into three broad categories: hospital resources, attitudes and behaviours. Ongoing work to produce interventions to reduce variations in failure-to-rescue rates include a focus on teamwork, communication and safety culture. Researchers are using novel mixed-methods approaches and theories adapted from organizational studies in high-reliability organizations in an effort to improve the care of elderly surgical patients. Although elderly surgical patients experience failure-to-rescue events at much higher rates than their younger counterparts, patient-level effects do not sufficiently explain these differences. Increased attention to the role of organizational dynamics in hospitals' ability to rescue these high-risk patients will establish high-yield interventions aimed at improving patient safety. © 2015 BJS Society Ltd Published by John Wiley & Sons Ltd.
An expert system to perform on-line controller restructuring for abrupt model changes
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.
1990-01-01
Work in progress on an expert system used to reconfigure and tune airframe/engine control systems on-line in real time in response to battle damage or structural failures is presented. The closed loop system is monitored constantly for changes in structure and performance, the detection of which prompts the expert system to choose and apply a particular control restructuring algorithm based on the type and severity of the damage. Each algorithm is designed to handle specific types of failures and each is applicable only in certain situations. The expert system uses information about the system model to identify the failure and to select the technique best suited to compensate for it. A depth-first search is used to find a solution. Once a new controller is designed and implemented it must be tuned to recover the original closed-loop handling qualities and responsiveness from the degraded system. Ideally, the pilot should not be able to tell the difference between the original and redesigned systems. The key is that the system must have inherent redundancy so that degraded or missing capabilities can be restored by creative use of alternate functionalities. With enough redundancy in the control system, minor battle damage affecting individual control surfaces or actuators, compressor efficiency, etc., can be compensated for such that the closed-loop performance is not noticeably altered. The work is applied to a Black Hawk/T700 system.
Queering Social Work Education
ERIC Educational Resources Information Center
Hillock, Susan, Ed.; Mulé, Nick J., Ed.
2016-01-01
Until now there has been a systemic failure within social work education to address the unique experiences and concerns of LGBTQ individuals and communities. "Queering Social Work Education", the first book of its kind in North America, responds to the need for theoretically informed, inclusive, and sensitive approaches in social work…
1985-04-24
reliability/ downtime/ communication lines/ man-machine interface/ other: 2. A noticeable (to the user) failure happens about and that number has been...improving/ steady/ getting worse. 3. The number of failures/errors for NOHIMS is acceptable/ somewhat acceptable/ somewhat unacceptable/ unacceptable...somewhat fast/ somewhat slow/ slow. 7. When a NWHIMS failure occurs, it affects the day-to-day provision of medical care because work procedures must
How Systems Engineering and Risk Management Defend Against Murphy's Law and Human Error
NASA Technical Reports Server (NTRS)
Bay, Michael; Connley, Warren
2004-01-01
Systems Engineering and Risk Management processes can work synergistically to defend against the causes of many mission-ending failures. Defending against mission-ending failures is facilitated by fostering a team that has a healthy respect for Murphy's Law and a curiosity for how things work, how they can fail, and what they need to know. This curiosity is channeled into making the unknowns known or what is uncertain more certain. Efforts to assure mission success require the expenditure of energy in the following areas: 1. Understanding what defines Mission Success as guided by the customer's needs, objectives and constraints. 2. Understanding how the system is supposed to work and how the system is to be produced, fueled by the curiosity of how the system should work and how it should be produced. 3. Understanding how the system can fail and how the system might not be produced on time and within cost, fueled by the curiosity of how the system might fail and how production might be difficult. 4. Understanding what we need to know and what we need to learn for proper completion of the above three items, fueled by the curiosity of what we might not know in order to make the best decisions.
Creating Resilient IT: How the Sign-Out Sheet Shows Clinicians Make Healthcare Work
Nemeth, Christopher; Nunnally, Mark; O’Connor, Michael; Cook, Richard
2006-01-01
Information technology (IT) systems have been described as brittle and prone to automation surprises. Recent reports of information system failures, particularly in computerized physician order entry (CPOE) systems, show the result of IT failure in actual practice. Such mismatches with healthcare work requirements necessitate improvement to IT research and development. Efforts to develop successful IT systems for healthcare's sharp end must incorporate properties that reflect workers' initiative in response to domain constraints. Resilience is the feature of some systems that makes it possible for them to respond to sudden, unanticipated demands for performance and return to normal operation quickly, with minimum decrement in performance. Workers create resilience at healthcare's sharp end by daily confronting constraints and obstacles that need to be surmounted in order to accomplish results. The sign-out sheet is an example of resilience that can be used to guide IT development. PMID:17238408
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katti, Amogh; Di Fatta, Giuseppe; Naughton III, Thomas J
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus.
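A toy push-gossip simulation shows why the cycle count scales logarithmically with system size: it spreads a failed-process list until every alive process knows it. The real algorithms piggyback the consensus state on the gossip messages, which this sketch omits:

    import math
    import random

    def gossip_cycles(n, failed, seed=0):
        rng = random.Random(seed)
        alive = [p for p in range(n) if p not in failed]
        informed = {alive[0]}        # the process that detected the failure
        cycles = 0
        while len(informed) < len(alive):
            cycles += 1
            for p in list(informed):
                # Each informed process pushes to one random peer; picking
                # an already-informed peer is gossip's inherent redundancy.
                informed.add(rng.choice(alive))
        return cycles

    for n in (64, 256, 1024, 4096):
        print(f"n={n}: {gossip_cycles(n, {0})} cycles "
              f"(log2 n = {math.log2(n):.0f})")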
Preventing Spacecraft Failures Due to Tribological Problems
NASA Technical Reports Server (NTRS)
Fusaro, Robert L.
2001-01-01
Many mechanical failures that occur on spacecraft are caused by tribological problems. This publication presents a study that was conducted by the author on various preventatives, analyses, controls and tests (PACTs) that could be used to prevent spacecraft mechanical system failure. A matrix is presented in the paper that plots tribology failure modes versus the various PACTs that should be performed before a spacecraft is launched in order to ensure success. A strawman matrix was constructed by the author and then sent out to industry and government spacecraft designers, scientists, and builders of spacecraft for their input. The final matrix is the result of their input. In addition to the matrix, this publication describes the various PACTs that can be performed and some fundamental knowledge on the correct usage of lubricants for spacecraft applications. Even though the work was done specifically to prevent spacecraft failures, the basic methodology can be applied to other mechanical system areas.
NASA Astrophysics Data System (ADS)
Hutchenson, K. D.; Hartley-McBride, S.; Saults, T.; Schmidt, D. P.
2006-05-01
The International Monitoring System (IMS) is composed in part of radionuclide particulate and gas monitoring systems. Monitoring the operational status of these systems is an important aspect of nuclear weapon test monitoring. Quality data, process control techniques, and predictive models are necessary to detect and predict system component failures. Predicting failures in advance provides time to mitigate these failures, thus minimizing operational downtime. The Provisional Technical Secretariat (PTS) requires that IMS radionuclide systems be operational 95 percent of the time. The United States National Data Center (US NDC) offers contributing components to the IMS. This effort focuses on the initial research and process development using prognostics for monitoring and predicting failures of the RASA two (2) days into the future. The predictions, using time series methods, are input to an expert decision system called SHADES (State of Health Airflow and Detection Expert System). The results enable personnel to make informed judgments about the health of the RASA system. Data are read from a relational database, processed, and displayed to the user in a GIS as a prototype GUI. This procedure mimics the real-time application process that could be implemented as an operational system. This initial proof-of-concept effort developed predictive models focused on RASA components for a single site (USP79). Future work shall include the incorporation of other RASA systems, as well as the environmental conditions that play a significant role in performance. Similarly, SHADES currently accommodates specific component behaviors at this one site. Future work shall also include important environmental variables that play an important part in the prediction algorithms.
NASA Technical Reports Server (NTRS)
Lekki, John; Tokars, Roger; Jaros, Dave; Riggs, M. Terrence; Evans, Kenneth P.; Gyekenyesi, Andrew
2009-01-01
A self-diagnostic accelerometer system has been shown to be sensitive to multiple failure modes of charge mode accelerometers. These failures include sensor structural damage, an electrical open circuit and, most importantly, sensor detachment. In this paper, experimental work that was performed to determine the capabilities of a self-diagnostic accelerometer system while operating in the presence of various levels of mechanical noise, emulating real-world conditions, is presented. The results show that the system can successfully conduct a self-diagnostic routine under these conditions.
DOT National Transportation Integrated Search
1974-08-01
Volume 4 describes the automation requirements. A presentation of automation requirements is made for an advanced air traffic management system in terms of controller work force, computer resources, controller productivity, system manning, failure ef...
NASA Technical Reports Server (NTRS)
Lewis, John F.; Cole, Harold; Cronin, Gary; Gazda, Daniel B.; Steele, John
2006-01-01
Following the Columbia accident, the Extravehicular Mobility Units (EMU) onboard ISS were unused for several months. Upon startup, the units experienced a failure in the coolant system. This failure resulted in the loss of Extravehicular Activity (EVA) capability from the US segment of ISS. With limited on-orbit evidence, a team of chemists, engineers, metallurgists, and microbiologists was able to identify the cause of the failure and develop recovery hardware and procedures. As a result of this work, the ISS crew regained the capability to perform EVAs from the US segment of the ISS.
Development of wheelchair caster testing equipment and preliminary testing of caster models
Mhatre, Anand; Ott, Joseph
2017-01-01
Background Because of the adverse environmental conditions present in less-resourced environments (LREs), the World Health Organization (WHO) has recommended that specialised wheelchair test methods may need to be developed to support product quality standards in these environments. A group of experts identified caster test methods as a high priority because of their common failure in LREs, and the insufficiency of existing test methods described in the International Organization for Standardization (ISO) Wheelchair Testing Standards (ISO 7176). Objectives To develop and demonstrate the feasibility of a caster system test method. Method Background literature and expert opinions were collected to identify existing caster test methods, caster failures common in LREs and environmental conditions present in LREs. Several conceptual designs for the caster testing method were developed, and through an iterative process using expert feedback, a final concept and a design were developed and a prototype was fabricated. Feasibility tests were conducted by testing a series of caster systems from wheelchairs used in LREs, and failure modes were recorded and compared to anecdotal reports about field failures. Results The new caster testing system was developed and it provides the flexibility to expose caster systems to typical conditions in LREs. Caster failures such as stem bolt fractures, fork fractures, bearing failures and tire cracking occurred during testing trials and are consistent with field failures. Conclusion The new caster test system has the capability to incorporate necessary test factors that degrade caster quality in LREs. Future work includes developing and validating a testing protocol that results in failure modes common during wheelchair use in LREs. PMID:29062762
SNS STRIPPER FOIL FAILURE MODES AND THEIR CURES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galambos, John D; Luck, Chris; Plum, Michael A
2010-01-01
The diamond stripper foils in use at the Spallation Neutron Source worked successfully with no failures until May 3, 2009, when we started experiencing a rash of foil system failures after increasing the beam power to ~840 kW. The main contributors to the failures are thought to be 1) convoy electrons, stripped from the incoming H beam, that strike the foil bracket and may also reflect back from the electron catcher, and 2) vacuum breakdown from the charge developed on the foil by secondary electron emission. In this paper we will detail these and other failure mechanisms, and describe the improvements we have made to mitigate them.
NASA Astrophysics Data System (ADS)
Singh, Gurmeet; Naikan, V. N. A.
2017-12-01
Thermography has been widely used as a technique for anomaly detection in induction motors. The International Electrical Testing Association (NETA) has proposed guidelines for thermographic inspection of electrical systems and rotating equipment. These guidelines help in anomaly detection and in estimating its severity. However, they focus only on the location of the hotspot rather than on diagnosing the fault. This paper addresses two such faults, i.e. the inter-turn fault and failure of the cooling system, both of which result in an increase of stator temperature. The present paper proposes two thermal profile indicators based on thermal analysis of IRT images. These indicators are in compliance with the NETA standard and help in correctly diagnosing an inter-turn fault and failure of the cooling system. The work has been experimentally validated on induction motors for both healthy and seeded-fault scenarios.
Modular space vehicle boards, control software, reprogramming, and failure recovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judd, Stephen; Dallmann, Nicholas; McCabe, Kevin
A space vehicle may have a modular board configuration that commonly uses some or all components and a common operating system for at least some of the boards. Each modular board may have its own dedicated processing, and processing loads may be distributed. The space vehicle may be reprogrammable, and may be launched without code that enables all functionality and/or components. Code errors may be detected and the space vehicle may be reset to a working code version to prevent system failure.
Intelligent on-line fault tolerant control for unanticipated catastrophic failures.
Yen, Gary G; Ho, Liang-Wei
2004-10-01
As dynamic systems become increasingly complex, experience rapidly changing environments, and encounter a greater variety of unexpected component failures, solving the control problems of such systems is a grand challenge for control engineers. Traditional control design techniques are not adequate to cope with these systems, which may suffer from unanticipated dynamic failures. In this research work, we investigate the on-line fault tolerant control problem and propose an intelligent on-line control strategy to handle the desired trajectory tracking problem for systems suffering from various unanticipated catastrophic faults. Through theoretical analysis, the sufficient condition of system stability has been derived and two different on-line control laws have been developed. The approach of the proposed intelligent control strategy is to continuously monitor the system performance and identify what the system's current state is by using a fault detection method based upon our best knowledge of the nominal system and nominal controller. Once a fault is detected, the proposed intelligent controller will adjust its control signal to compensate for the unknown system failure dynamics by using an artificial neural network as an on-line estimator to approximate the unexpected and unknown failure dynamics. The first control law is derived directly from the Lyapunov stability theory, while the second control law is derived based upon the discrete-time sliding mode control technique. Both control laws have been implemented in a variety of failure scenarios to validate the proposed intelligent control scheme. The simulation results, including a three-tank benchmark problem, comply with theoretical analysis and demonstrate a significant improvement in trajectory following performance based upon the proposed intelligent control strategy.
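The estimator role of the network can be sketched in a few lines: a one-hidden-layer network adapts its weights online from the observed prediction error so that it tracks an unknown failure residual. This is a generic gradient-descent stand-in for the paper's Lyapunov- and sliding-mode-derived update laws, with an invented residual function for demonstration:

    import numpy as np

    rng = np.random.default_rng(0)

    W1 = rng.normal(scale=0.5, size=(8, 1))   # hidden-layer weights
    b1 = np.zeros((8, 1))
    W2 = rng.normal(scale=0.5, size=(1, 8))   # output weights
    lr = 0.05

    def f_true(x):
        # Unknown post-failure dynamics residual (hypothetical, for demo).
        return 0.8 * np.sin(2 * x) - 0.3 * x

    for _ in range(5000):
        x = rng.uniform(-2, 2, size=(1, 1))   # measured state sample
        h = np.tanh(W1 @ x + b1)
        e = W2 @ h - f_true(x)                # online prediction error
        W2 -= lr * e @ h.T                    # gradient-descent updates
        dh = (W2.T * e) * (1 - h**2)
        W1 -= lr * dh @ x.T
        b1 -= lr * dh

    for x in (-1.5, 0.0, 1.2):
        xv = np.array([[x]])
        est = float(W2 @ np.tanh(W1 @ xv + b1))
        print(f"x={x:+.1f}: true={f_true(xv).item():+.3f}, est={est:+.3f}")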
30 CFR 75.222 - Roof control plan-approval criteria.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., roof bolts should be installed on at least 5-foot centers where the work is performed. (2) Where the... opening before any other work or travel in the intersection. (f) ATRS systems in working sections where... panel in advance of the frontal abutment stresses of the panel being mined. (2) When a ground failure...
30 CFR 75.222 - Roof control plan-approval criteria.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., roof bolts should be installed on at least 5-foot centers where the work is performed. (2) Where the... opening before any other work or travel in the intersection. (f) ATRS systems in working sections where... panel in advance of the frontal abutment stresses of the panel being mined. (2) When a ground failure...
Quantifying Safety Margin Using the Risk-Informed Safety Margin Characterization (RISMC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, David; Bucknor, Matthew; Brunett, Acacia
2015-04-26
The Risk-Informed Safety Margin Characterization (RISMC), developed by Idaho National Laboratory as part of the Light-Water Reactor Sustainability Project, utilizes a probabilistic safety margin comparison between a load and capacity distribution, rather than a deterministic comparison between two values, as is usually done in best-estimate plus uncertainty analyses. The goal is to determine the failure probability, or in other words, the probability of the system load equaling or exceeding the system capacity. While this method has been used in pilot studies, there has been little work investigating the statistical significance of the resulting failure probability. In particular, it is difficult to determine how many simulations are necessary to properly characterize the failure probability. This work uses classical (frequentist) statistics and confidence intervals to examine the impact on statistical accuracy when the number of simulations is varied. Two methods are proposed to establish confidence intervals related to the failure probability established using a RISMC analysis. The confidence interval provides information about the statistical accuracy of the method utilized to explore the uncertainty space, and offers a quantitative method to gauge the increase in statistical accuracy due to performing additional simulations.
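The core computation is small enough to sketch: draw load and capacity samples, count the runs where load meets or exceeds capacity, and put a classical confidence interval around the resulting proportion. The distributions below are hypothetical stand-ins for simulation output:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 10_000
    load = rng.normal(1100.0, 80.0, n)        # simulated system load
    capacity = rng.normal(1400.0, 60.0, n)    # simulated system capacity

    failures = int(np.sum(load >= capacity))
    p_hat = failures / n
    print(f"estimated failure probability: {p_hat:.4f} ({failures}/{n})")

    # Exact (Clopper-Pearson) 95% interval from the beta distribution.
    lo = stats.beta.ppf(0.025, failures, n - failures + 1) if failures else 0.0
    hi = stats.beta.ppf(0.975, failures + 1, n - failures)
    print(f"95% confidence interval: [{lo:.4f}, {hi:.4f}]")
    # Quadrupling n roughly halves the interval width, quantifying the
    # statistical accuracy bought by additional simulations.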
NASA Astrophysics Data System (ADS)
Gromek, Katherine Emily
A novel computational and inference framework of the physics-of-failure (PoF) reliability modeling for complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real time simulation of system failure processes, so that the system level reliability modeling would constitute inferences from checking the status of component level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference in modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. Contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of the PoF-based system reliability modeling, new approaches to the learning and the autonomy properties of the intelligent agents, and modeling interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as the agent's ability to self-activate, deactivate or completely redefine their role in the analysis. This property of agents and the ability to model interacting failure mechanisms of the system elements makes the agent autonomy fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.
Holden, Richard J.; Schubert, Christiane C.; Mickelson, Robin S.
2014-01-01
Human factors and ergonomics approaches have been successfully applied to study and improve the work performance of healthcare professionals. However, there has been relatively little work in “patient-engaged human factors,” or the application of human factors to the health-related work of patients and other nonprofessionals. This study applied a foundational human factors tool, the systems model, to investigate the barriers to self-care performance among chronically ill elderly patients and their informal (family) caregivers. A Patient Work System model was developed to guide the collection and analysis of interviews, surveys, and observations of patients with heart failure (n=30) and their informal caregivers (n=14). Iterative analyses revealed the nature and prevalence of self-care barriers across components of the Patient Work System. Person-related barriers were common and stemmed from patients’ biomedical conditions, limitations, knowledge deficits, preferences, and perceptions as well as the characteristics of informal caregivers and healthcare professionals. Task barriers were also highly prevalent and included task difficulty, timing, complexity, ambiguity, conflict, and undesirable consequences. Tool barriers were related to both availability and access of tools and technologies and their design, usability, and impact. Context barriers were found across three domains—physical-spatial, social-cultural, and organizational—and multiple “spaces” such as “at home,” “on the go,” and “in the community.” Barriers often stemmed not from single factors but from the interaction of several work system components. Study findings suggest the need to further explore multiple actors, context, and interactions in the patient work system during research and intervention design, as well as the need to develop new models and measures for studying patient and family work. PMID:25479983
RICIS Symposium 1992: Mission and Safety Critical Systems Research and Applications
NASA Technical Reports Server (NTRS)
1992-01-01
This conference deals with computer systems that control systems whose failure to operate correctly could produce loss of life and/or property: mission and safety critical systems. Topics covered are: the work of standards groups, computer systems design and architecture, software reliability, process control systems, knowledge-based expert systems, and computer and telecommunication protocols.
NASA Astrophysics Data System (ADS)
Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko
2017-08-01
We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.
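The FIT bookkeeping behind such a claim is simple: FIT counts failures per 10^9 device-hours, so a per-bit upset cross-section and a particle flux scale up to a raw memory upset rate. The numbers below are illustrative placeholders, not the paper's measured values:

    # FIT = failures per 1e9 device-hours.
    bits = 1 * 2**30          # 1 Gbit MRAM working memory
    sigma = 1e-18             # assumed per-bit SEU cross-section, cm^2
    flux = 10.0               # assumed particle flux, particles/(cm^2 h)

    upsets_per_hour = bits * sigma * flux
    print(f"raw memory upset rate: {upsets_per_hour * 1e9:.1f} FIT")
    # A raw bit upset only becomes a system failure if the corrupted word
    # is architecturally live, which is why a CPU-level simulation can
    # report a much lower failure rate than the raw upset rate.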
Tethered Satellite System Contingency Investigation Board
NASA Technical Reports Server (NTRS)
1992-01-01
The Tethered Satellite System (TSS-1) was launched aboard the Space Shuttle Atlantis (STS-46) on July 31, 1992. During the attempted on-orbit operations, the Tethered Satellite System failed to deploy successfully beyond 256 meters. The satellite was retrieved successfully and was returned on August 6, 1992. The National Aeronautics and Space Administration (NASA) Associate Administrator for Space Flight formed the Tethered Satellite System (TSS-1) Contingency Investigation Board on August 12, 1992. The TSS-1 Contingency Investigation Board was asked to review the anomalies which occurred, to determine the probable cause, and to recommend corrective measures to prevent recurrence. The board was supported by the TSS Systems Working group as identified in MSFC-TSS-11-90, 'Tethered Satellite System (TSS) Contingency Plan'. The board identified five anomalies for investigation: initial failure to retract the U2 umbilical; initial failure to flyaway; unplanned tether deployment stop at 179 meters; unplanned tether deployment stop at 256 meters; and failure to move tether in either direction at 224 meters. Initial observations of the returned flight hardware revealed evidence of mechanical interference by a bolt with the level wind mechanism travel as well as a helical shaped wrap of tether which indicated that the tether had been unwound from the reel beyond the travel by the level wind mechanism. Examination of the detailed mission events from flight data and mission logs related to the initial failure to flyaway and the failure to move in either direction at 224 meters, together with known preflight concerns regarding slack tether, focused the assessment of these anomalies on the upper tether control mechanism. After the second meeting, the board requested the working group to complete and validate a detailed integrated mission sequence to focus the fault tree analysis on a stuck U2 umbilical, level wind mechanical interference, and slack tether in upper tether control mechanism and to prepare a detailed plan for hardware inspection, test, and analysis including any appropriate hardware disassembly.
NASA Astrophysics Data System (ADS)
Zheng, W.; Gao, J. M.; Wang, R. X.; Chen, K.; Jiang, Y.
2017-12-01
This paper puts forward a new method of technical characteristics deployment based on Reliability Function Deployment (RFD), developed by analysing the advantages and shortcomings of related research on mechanical reliability design. The matrix decomposition structure of RFD was used to describe the correlative relation between failure mechanisms, soft failures and hard failures. By considering the correlation of multiple failure modes, the reliability loss of one failure mode to the whole part was defined, and a calculation and analysis model for reliability loss was presented. According to the reliability loss, the reliability index value of the whole part was allocated to each failure mode. On the basis of the deployment of reliability index values, the inverse reliability method was employed to acquire the values of the technical characteristics. The feasibility and validity of the proposed method were illustrated by a development case of a machining centre's transmission system.
Factors Affecting Employment at Initiation of Dialysis
Muehrer, Rebecca J.; Schatell, Dori; Witten, Beth; Gangnon, Ronald; Becker, Bryan N.
2011-01-01
Summary Background and objectives Half the individuals who reach ESRD are working age (<65 years old) and many are at risk for job loss. Factors that contribute to job retention among working-age patients with chronic kidney disease before ESRD are unknown. The purpose of the study is to understand factors associated with maintaining employment among working-age patients with advanced kidney failure. Design, setting, participants, & measurements In this retrospective study we reviewed the United States Renal Data System database (1992 through 2003) and selected all patients (n = 102,104) who were working age and employed 6 months before dialysis initiation. Factors that were examined for an association with maintaining employment status included demographics, comorbid conditions, ESRD cause, insurance, predialysis erythropoietin use, and dialysis modality. Results Maintaining employment at the same level during the final 6 months before dialysis was more likely among (1) white men ages 30 to 49 years; (2) patients with either glomerulonephritis, cystic, or urologic causes of renal failure; (3) patients choosing peritoneal dialysis for their first treatment; (4) those with employer group or other health plans; and (5) erythropoietin usage before ESRD. Maintaining employment status was less likely among patients with congestive heart failure, cardiovascular disease, cancer, and other chronic illnesses. Conclusions The rate of unemployment in working-age patients with chronic kidney disease and ESRD is high compared with that of the general population. Treating anemia with erythropoietin before kidney failure and educating patients about work-friendly home dialysis options might improve job retention. PMID:21393489
X-framework: Space system failure analysis framework
NASA Astrophysics Data System (ADS)
Newman, John Steven
Space program and space systems failures result in financial losses in the multi-hundred million dollar range every year. In addition to financial loss, space system failures may also represent the loss of opportunity, loss of critical scientific, commercial and/or national defense capabilities, as well as loss of public confidence. The need exists to improve learning and expand the scope of lessons documented and offered to the space industry project team. One of the barriers to incorporating lessons learned is the way in which space system failures are documented. Multiple classes of space system failure information are identified, ranging from "sound bite" summaries in space insurance compendia, to articles in journals, lengthy data-oriented (what happened) reports, and in some rare cases, reports that treat not only the what, but also the why. In addition there are periodically published "corporate crisis" reports, typically issued after multiple or highly visible failures, that explore management roles in the failure, often within a politically oriented context. Given the general lack of consistency, it is clear that a good multi-level space system/program failure framework with analytical and predictive capability is needed. This research effort set out to develop such a model. The X-Framework (x-fw) is proposed as an innovative forensic failure analysis approach, providing a multi-level understanding of the space system failure event, beginning with the proximate cause, extending to the directly related work or operational processes and upward through successive management layers. The x-fw focus is on capability and control at the process level and examines: (1) management accountability and control, (2) resource and requirement allocation, and (3) planning, analysis, and risk management at each level of management. The x-fw model provides an innovative failure analysis approach for acquiring a multi-level perspective, identifying direct and indirect causation of failures, and generating better and more consistent reports. Through this approach failures can be more fully understood, existing programs can be evaluated, and future failures avoided. The x-fw development involved a review of the historical failure analysis and prevention literature, coupled with examination of numerous failure case studies. Analytical approaches included use of a relational failure "knowledge base" for classification and sorting of x-fw elements and attributes for each case. In addition, a novel "management mapping" technique was developed as a means of displaying an integrated snapshot of indirect causes within the management chain. Further research opportunities will extend the depth of knowledge available for many of the component level cases. In addition, the x-fw has the potential to expand the scope of space sector lessons learned and contribute to knowledge management and organizational learning.
NASA Technical Reports Server (NTRS)
Werlink, Rudolph J.; Pena, Francisco
2015-01-01
This paper describes the results of pressurizing 100-gallon composite tanks to failure using liquid nitrogen. Advanced methods of health monitoring are compared, as are the experimental data to a finite element model. The testing was performed wholly under NASA, including unique PZT (lead zirconate titanate) based active vibration technology. Other technologies include fiber-optic strain-based systems (including NASA AFRC technology), acoustic emission, and the Acellent smart sensor. This work is expected to lead to a practical in-situ system for composite tanks.
Conesa-Muñoz, Jesús; Gonzalez-de-Soto, Mariano; Gonzalez-de-Santos, Pablo; Ribeiro, Angela
2015-03-05
This paper describes a supervisor system for monitoring the operation of automated agricultural vehicles. The system analyses all of the information provided by the sensors and subsystems on the vehicles in real time and notifies the user when a failure or potentially dangerous situation is detected. In some situations, it is even able to execute a neutralising protocol to remedy the failure. The system is based on a distributed and multi-level architecture that divides the supervision into different subsystems, allowing for better management of the detection and repair of failures. The proposed supervision system was developed to perform well in several scenarios, such as spraying canopy treatments against insects and diseases and selective weed treatments, by either spraying herbicide or burning pests with a mechanical-thermal actuator. Results are presented for selective weed treatment by the spraying of herbicide. The system successfully supervised the task; it detected failures such as service disruptions, incorrect working speeds, incorrect implement states, and potential collisions. Moreover, the system was able to prevent collisions between vehicles by taking action to avoid intersecting trajectories. The results show that the proposed system is a highly useful tool for managing fleets of autonomous vehicles. In particular, it can be used to manage agricultural vehicles during treatment operations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winter, J.R.; Keywood, S.S.
PTFE-based gaskets in chemical plant service typically fail in an extrusion mode, sometimes referred to as blowout. Test work previously published by Monsanto indicated that correctly installed PTFE-based gaskets have pressure performance far exceeding system pressure ratings. These results have since been confirmed by extensive testing at the Montreal-based Ecole Polytechnique Tightness Testing and Research Laboratory (TTRL), funded by a consortium of gasket users and manufacturers. With the knowledge that properly installed gaskets can withstand system pressures in excess of 1,000 psig [6,894 kPa], failures at two chemical plants were re-examined. This analysis indicates that extrusion type failures can be caused by excessive internal pressures, associated with sections of pipe having an external source of heat coincident with a blocked flow condition. This results in high system pressures which explain the extrusion type failures observed. The paper discusses details of individual failures and examines methods to prevent them. Other causes for extrusion failures are reviewed, with a recommendation that stronger gasket materials not be utilized to correct problems until it is verified that excessive pressure build-up is not the problem. Also summarized are the requirements for proper installation to achieve the potential blowout resistance found in these gaskets.
Defining Human Failure Events for Petroleum Risk Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring; Knut Øien
2014-06-01
In this paper, an identification and description of barriers and human failure events (HFEs) for human reliability analysis (HRA) is performed. The barriers, called target systems, are identified from risk significant accident scenarios represented as defined situations of hazard and accident (DSHAs). This report serves as the foundation for further work to develop petroleum HFEs compatible with the SPAR-H method and intended for reuse in future HRAs.
Investigating the Interplay between Energy Efficiency and Resilience in High Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Li; Song, Shuaiwen; Wu, Panruo
2015-05-29
Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.
Jipp, Meike
2016-12-01
This study explored whether working memory and sustained attention influence cognitive lock-up, which is a delay in the response to consecutive automation failures. Previous research has demonstrated that the information that automation provides about failures and the time pressure that is associated with a task influence cognitive lock-up. Previous research has also demonstrated considerable variability in cognitive lock-up between participants. This is why individual differences might influence cognitive lock-up. The present study tested whether working memory (including flexibility in executive functioning) and sustained attention might be crucial in this regard. Eighty-five participants were asked to monitor automated aircraft functions. The experimental manipulation consisted of whether or not an initial automation failure was followed by a consecutive failure. Reaction times to the failures were recorded. Participants' working-memory and sustained-attention abilities were assessed with standardized tests. As expected, participants' reactions to consecutive failures were slower than their reactions to initial failures. In addition, working-memory and sustained-attention abilities enhanced the speed with which participants reacted to failures, more so with regard to consecutive than to initial failures. The findings highlight that operators with better working memory and sustained attention have small advantages when initial failures occur, but their advantages increase across consecutive failures. The results stress the need to consider personnel selection strategies to mitigate cognitive lock-up in general and training procedures to enhance the performance of low-ability operators. © 2016, Human Factors and Ergonomics Society.
Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on-chip
NASA Astrophysics Data System (ADS)
Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang
2016-09-01
Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate system-on-chip reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, some parameters used to evaluate the system's reliability and safety were calculated using Isograph Reliability Workbench 11.0, such as failure rate, unavailability and mean time to failure (MTTF). According to the fault tree analysis for the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.
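The reliability quantities named above follow from standard constant-rate relations, sketched below with invented rates (not the measured Zynq-7010 values) and simple OR/AND gate combinations standing in for the full fault tree.

```python
# Hedged sketch of the textbook constant-rate relations behind a fault-tree
# evaluation: MTTF = 1/lambda, steady-state unavailability = lambda/(lambda+mu),
# and OR/AND gate combination of basic-event unavailabilities.
# All rates below are hypothetical, not the measured Zynq-7010 values.

def unavailability(lam, mu):
    # steady-state unavailability of a repairable component
    return lam / (lam + mu)

def gate_or(qs):   # top event occurs if ANY input fails
    p = 1.0
    for q in qs:
        p *= (1.0 - q)
    return 1.0 - p

def gate_and(qs):  # top event occurs only if ALL inputs fail
    p = 1.0
    for q in qs:
        p *= q
    return p

lam_cpu, lam_mem, mu = 2e-6, 5e-6, 1e-2   # per-hour rates (assumed)
q_cpu, q_mem = unavailability(lam_cpu, mu), unavailability(lam_mem, mu)
print("MTTF_cpu   =", 1 / lam_cpu, "h")
print("Q_top (OR) =", gate_or([q_cpu, q_mem]))
print("Q_top (AND)=", gate_and([q_cpu, q_mem]))
```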
Physical nature of longevity of light actinides in dynamic failure phenomenon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uchaev, A. Ya., E-mail: uchaev@expd.vniief.ru; Punin, V. T.; Selchenkova, N. I.
It is shown in this work that the physical nature of the longevity of light actinides under extreme conditions in a range of nonequilibrium states of t ∼ 10^{-6}-10^{-10} s is determined by the time needed for the formation of a critical concentration of a cascade of failure centers, which changes connectivity of the body. These centers form a percolation cluster. The longevity is composed of waiting time t_w for the appearance of failure centers and clusterization time t_c of the cascade of failure centers, when connectivity in the system of failure centers and the percolation cluster arise. A unique mechanism of the dynamic failure process, a unique order parameter, and an equal dimensionality of the space in which the process occurs determine the physical nature of the longevity of metals, including fissionable materials.
Availability analysis of an HTGR fuel recycle facility. Summary report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharmahd, J.N.
1979-11-01
An availability analysis of reprocessing systems in a high-temperature gas-cooled reactor (HTGR) fuel recycle facility was completed. This report summarizes work done to date to define and determine reprocessing system availability for a previously planned HTGR recycle reference facility (HRRF). Schedules and procedures for further work during reprocessing development and for HRRF design and construction are proposed in this report. Probable failure rates, transfer times, and repair times are estimated for major system components. Unscheduled down times are summarized.
Kumar, Mohit; Yadav, Shiv Prasad
2012-03-01
This paper addresses fuzzy system reliability analysis using different types of intuitionistic fuzzy numbers. Until now, the literature on fuzzy system reliability has assumed that the failure rates of all components of a system follow the same type of fuzzy set or intuitionistic fuzzy set. In practical problems, however, such a situation rarely occurs. Therefore, in the present paper, a new algorithm has been introduced to construct the membership function and non-membership function of the fuzzy reliability of a system whose components follow different types of intuitionistic fuzzy failure rates. Functions of intuitionistic fuzzy numbers are calculated to construct the membership and non-membership functions of fuzzy reliability via non-linear programming techniques. Using the proposed algorithm, membership and non-membership functions of the fuzzy reliability of a series system and a parallel system are constructed. Our study generalizes various works in the literature. Numerical examples are given to illustrate the proposed algorithm. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
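A much-simplified sketch of the series/parallel construction is shown below using plain interval arithmetic, which corresponds to a single alpha-cut of a fuzzy reliability. The paper's intuitionistic treatment additionally carries non-membership functions, which this sketch omits; the intervals are invented.

```python
# Simplified sketch: series/parallel reliability when each component
# reliability is known only as an interval (one alpha-cut of a fuzzy number).

def series(intervals):
    lo = hi = 1.0
    for a, b in intervals:       # system reliability is increasing in each component
        lo *= a
        hi *= b
    return (lo, hi)

def parallel(intervals):
    q_hi = q_lo = 1.0
    for a, b in intervals:
        q_hi *= (1.0 - a)        # worst case: every component at its lower bound
        q_lo *= (1.0 - b)        # best case: every component at its upper bound
    return (1.0 - q_hi, 1.0 - q_lo)

comps = [(0.90, 0.95), (0.85, 0.92)]   # hypothetical alpha-cut intervals
print("series  :", series(comps))      # (0.765, 0.874)
print("parallel:", parallel(comps))    # (0.985, 0.996)
```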
Comparison between four dissimilar solar panel configurations
NASA Astrophysics Data System (ADS)
Suleiman, K.; Ali, U. A.; Yusuf, Ibrahim; Koko, A. D.; Bala, S. I.
2017-12-01
Several studies on photovoltaic systems have focused on how they operate and the energy required to operate them. Little attention has been paid to their configurations, the modeling of mean time to system failure, availability, cost benefit analysis, and comparisons of parallel and series-parallel designs. In this research work, four system configurations were studied. Configuration I consists of two sub-components arranged in parallel with 24 V each, configuration II consists of four sub-components arranged logically in parallel with 12 V each, configuration III consists of four sub-components arranged in series-parallel with 8 V each, and configuration IV has six sub-components with 6 V each arranged in series-parallel. Comparative analysis was made using the Chapman-Kolmogorov method. Explicit expressions for the mean time to system failure, steady-state availability, and cost-benefit measures were derived for the comparison. A ranking method was used to determine the optimal configuration of the systems. The analytical and numerical solutions for system availability and mean time to system failure show that configuration I is the optimal configuration.
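As a rough illustration of how such configurations can be compared, the sketch below computes static system reliabilities for four structures resembling configurations I-IV, treating every sub-component as independent with an assumed reliability r. The structures, the value of r, and the omission of repair, cost, and the Chapman-Kolmogorov availability analysis are all simplifying assumptions, so the resulting ranking need not match the paper's, which found configuration I optimal once costs were considered.

```python
# Toy static comparison (structural assumptions are mine, not the paper's
# exact Chapman-Kolmogorov model): reliability of the four configurations.

def parallel(rs):
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

def series(rs):
    p = 1.0
    for r in rs:
        p *= r
    return p

r = 0.9                                        # assumed sub-component reliability
cfg1 = parallel([r] * 2)                       # I  : 2 in parallel
cfg2 = parallel([r] * 4)                       # II : 4 in parallel
cfg3 = series([parallel([r] * 2)] * 2)         # III: 2 parallel pairs in series
cfg4 = series([parallel([r] * 2)] * 3)         # IV : 3 parallel pairs in series
for name, val in [("I", cfg1), ("II", cfg2), ("III", cfg3), ("IV", cfg4)]:
    print(f"config {name:3s} R = {val:.6f}")   # cost per configuration is ignored here
```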
Towards a Personal Health Management Assistant.
Ferguson, G; Quinn, J; Horwitz, C; Swift, M; Allen, J; Galescu, L
2010-10-01
We describe design and prototyping efforts for a Personal Health Management Assistant for heart failure patients as part of Project HealthDesign. An assistant is more than simply an application. An assistant understands what its users need to do, interacts naturally with them, reacts to what they say and do, and is proactive in helping them manage their health. In this project, we focused on heart failure, which is not only a prevalent and economically significant disease, but also one that is very amenable to self-care. Working with patients, and building on our prior experience with conversational assistants, we designed and developed a prototype system that helps heart failure patients record objective and subjective observations using spoken natural language conversation. Our experience suggests that it is feasible to build such systems and that patients would use them. The system is designed to support rapid application to other self-care settings. Copyright © 2010 Elsevier Inc. All rights reserved.
Brianza, Stefano; Vogel, Susan; Rothstock, Stephan; Desrochers, Andrè; Boure, Ludovic
2013-01-01
To compare the torsional strength of calf metatarsal bones with defects produced by removal of 2 different implants. In vitro mechanical comparison of paired bones with bicortical defects resulting from the implantation of 2 different external fixation systems: the transfixation pin (TP) and the pin sleeve system (PS). Neonatal calf metatarsal bones (n = 6 pairs). From each pair, 1 bone was surgically instrumented with 2 PS implants and the contralateral bone with 2 TP implants. Implants were removed immediately, leaving bicortical defects at identical locations between paired metatarsi. Each bone was tested in torque until failure. The mechanical variables statistically compared were torsional stiffness, torque and angle at failure, and work to failure. There were no significant differences between construct types for any of the variables tested (TP and PS, respectively; mean ± SD): torsional stiffness, 5.50 ± 2.68 and 5.35 ± 1.79 Nm/°, P = .75; torque at failure, 57.42 ± 14.84 and 53.43 ± 10.16 Nm, P = .34; angle at failure, 14.76 ± 4.33 and 15.45 ± 4.84°, P = .69; and work to failure, 7.45 ± 3.19 and 8.89 ± 3.79 J, P = .17. Bicortical defects resulting from the removal of PS and TP implants equally affect the investigated mechanical properties of neonate calf metatarsal bones. © Copyright 2012 by The American College of Veterinary Surgeons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, B; Sun, B; Yaddanapudi, S
Purpose: To describe the clinical use of a linear accelerator (Linac) DailyQA system with only EPID and OBI, to assess its reliability over an 18-month period, and to improve the robustness of the system based on QA failure analysis. Methods: A DailyQA solution utilizing an in-house designed phantom, combined EPID and OBI image acquisitions, and a web-based data analysis and reporting system was commissioned and used in our clinic to measure geometric, dosimetry and imaging components of a Varian Truebeam Linac. During an 18-month period (335 working days), the DailyQA results, including the output constancy, beam flatness and symmetry, uniformity, TPR20/10, and MV and KV imaging quality, were collected and analyzed. For the output constancy measurement, an independent monthly QA system with an ionization chamber (IC) and annual/incidental TG-51 measurements with an ADCL IC were performed and cross-compared to the DailyQA system. Thorough analyses were performed on the recorded QA failures to evaluate machine performance, optimize the data analysis algorithm, adjust the tolerance settings and improve the training procedure to prevent future failures. Results: A clinical workflow including beam delivery, data analysis, QA report generation and physics approval was established and optimized to suit daily clinical operation. The output tests over the 335 working days cross-correlated with the monthly QA system within 1.3% and with TG-51 results within 1%. QA passed on the first attempt on 236 days out of 335. Based on the QA failure analysis, the Gamma criteria were revised from (1%, 1 mm) to (2%, 1 mm), considering both QA accuracy and efficiency. The data analysis algorithm was improved to handle multiple entries for a repeated test. Conclusion: We described our 18-month clinical experience with a novel DailyQA system using only EPID and OBI. The long term data presented demonstrate the system is suitable and reliable for Linac daily QA.
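For context, Gamma criteria such as (2%, 1 mm) combine a dose-difference tolerance with a distance-to-agreement (DTA) tolerance. The sketch below is a minimal 1-D gamma-index calculation over assumed profile data; it illustrates the general method, not the vendor's or the clinic's implementation.

```python
# Minimal 1-D gamma index sketch: a measured point passes if some reference
# point lies within the combined dose-difference / DTA ellipsoid (gamma <= 1).
# Profiles below are invented and normalized so relative dose 1.0 = 100%.
import math

def gamma_1d(x_m, d_m, xs_r, ds_r, dd=0.02, dta=1.0):
    """Gamma for one measured point; dd is the dose criterion (fraction of
    normalized dose), dta the distance criterion in mm."""
    best = float("inf")
    for x_r, d_r in zip(xs_r, ds_r):
        g = math.sqrt(((x_m - x_r) / dta) ** 2 + ((d_m - d_r) / dd) ** 2)
        best = min(best, g)
    return best

xs = [0.0, 0.5, 1.0, 1.5]          # positions (mm), hypothetical profile
ref = [1.00, 0.98, 0.90, 0.70]     # reference relative dose
meas = [1.01, 0.97, 0.91, 0.72]    # measured relative dose
print([round(gamma_1d(x, d, xs, ref), 2) for x, d in zip(xs, meas)])
```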
Low Fidelity Simulation of a Zero-G Robot
NASA Technical Reports Server (NTRS)
Sweet, Adam
2001-01-01
The item to be cleared is a low-fidelity software simulation model of a hypothetical freeflying robot designed for use in zero gravity environments. This simulation model works with the HCC simulation system that was developed by Xerox PARC and NASA Ames Research Center. HCC has been previously cleared for distribution. When used with the HCC software, the model computes the location and orientation of the simulated robot over time. Failures (such as a broken motor) can be injected into the simulation to produce simulated behavior corresponding to the failure. Release of this simulation will allow researchers to test their software diagnosis systems by attempting to diagnose the simulated failure from the simulated behavior. This model does not contain any encryption software nor can it perform any control tasks that might be export controlled.
The effects of heart failure on renal function.
Udani, Suneel M; Koyner, Jay L
2010-08-01
Heart-kidney interactions have been increasingly recognized by clinicians and researchers who study and treat heart failure and kidney disease. A classification system has been developed to categorize the different manifestations of cardiac and renal dysfunction. Work has highlighted the significant negative prognostic effect of worsening renal function on outcomes for individuals with heart failure. The etiology of concomitant cardiac and renal dysfunction remains unclear; however, evidence supports alternatives to the established theory of underfilling, including effects of venous congestion and changes in intra-abdominal pressure. Conventional therapy focuses on blockade of the renin-angiotensin-aldosterone system with expanding use of direct renin and aldosterone antagonists. Novel therapeutic interventions using extracorporeal therapy and antagonists of the adenosine pathway show promise and require further investigation. 2010 Elsevier Inc. All rights reserved.
Hazards/Failure Modes and Effects Analysis MK 1 MOD 0 LSO-HUD Console System.
1980-03-24
[OCR-damaged front matter; recoverable fragments include a contents listing: Scope and Methodology of Analysis; Figure 1: H/FMEA (SSA) Work Sheet Format; Appendix A: Hazard/Failure Modes and Effects Analysis (H/FMEA) Work Sheets; Subsystem Unit 1: Heads-Up Display Console; Unit 2: Auxiliary.]
Boundary Spanning in Offshored Information Systems Development Projects
ERIC Educational Resources Information Center
Krishnan, Poornima
2010-01-01
Recent growth in offshore outsourcing of information systems (IS) services has been accompanied by the challenge of managing offshore projects successfully. Many of the project failures can be attributed to geographic and organizational boundaries, which create differences in culture, language, work patterns, and decision-making processes among the offshore project…
2013 update on congenital heart disease, clinical cardiology, heart failure, and heart transplant.
Subirana, M Teresa; Barón-Esquivias, Gonzalo; Manito, Nicolás; Oliver, José M; Ripoll, Tomás; Lambert, Jose Luis; Zunzunegui, José L; Bover, Ramon; García-Pinilla, José Manuel
2014-03-01
This article presents the most relevant developments in 2013 in 3 key areas of cardiology: congenital heart disease, clinical cardiology, and heart failure and transplant. Within the area of congenital heart disease, we reviewed contributions related to sudden death in adult congenital heart disease, the importance of specific echocardiographic parameters in assessing the systemic right ventricle, problems in patients with repaired tetralogy of Fallot and indication for pulmonary valve replacement, and confirmation of the role of specific factors in the selection of candidates for Fontan surgery. The most recent publications in clinical cardiology include a study by a European working group on correct diagnostic work-up in cardiomyopathies, studies on the cost-effectiveness of percutaneous aortic valve implantation, a consensus document on the management of type B aortic dissection, and guidelines on aortic valve and ascending aortic disease. The most noteworthy developments in heart failure and transplantation include new American guidelines on heart failure, therapeutic advances in acute heart failure (serelaxin), the management of comorbidities such as iron deficiency, risk assessment using new biomarkers, and advances in ventricular assist devices. Copyright © 2013 Sociedad Española de Cardiología. Published by Elsevier Espana. All rights reserved.
Providing the full DDF link protection for bus-connected SIEPON based system architecture
NASA Astrophysics Data System (ADS)
Hwang, I.-Shyan; Pakpahan, Andrew Fernando; Liem, Andrew Tanny; Nikoukar, AliAkbar
2016-09-01
Currently a massive amount of traffic per second is delivered through EPON systems, one of the prominent access network technologies for delivering the next generation network. It is therefore vital to keep the EPON optical distribution network (ODN) working by providing the necessary protection mechanisms in the deployed devices; otherwise, failures will cause great losses for both network operators and business customers. In this paper, we propose a bus-connected architecture to protect and recover from distribution drop fiber (DDF) link faults or transceiver failures at the ONU(s) in a SIEPON system. The proposed architecture is cost-effective and delivers high fault tolerance in handling multiple DDF faults, while also providing flexibility in choosing the backup ONU assignments. Simulation results show that the proposed architecture provides reliability and maintains quality of service (QoS) performance in terms of mean packet delay, system throughput, packet loss and EF jitter when DDF link failures occur.
Student ability to apply the concepts of work and energy to extended systems
NASA Astrophysics Data System (ADS)
Lindsey, Beth A.; Heron, Paula R. L.; Shaffer, Peter S.
2009-11-01
We report results from an investigation of student ability to apply the concepts of work and energy to situations in which the internal structure of a system cannot be ignored, that is, the system cannot be treated as a particle. Students in introductory calculus-based physics courses were asked written and online questions after relevant instruction by lectures, textbook, and laboratory. Several difficulties were identified. Some related to student ability to calculate the work done on a system. Failure to associate work with the change in energy of a system was also widespread. The results have implications for instruction that aims for a rigorous treatment of energy concepts that is consistent with the first law of thermodynamics. The findings are guiding the development of two tutorials to supplement instruction.
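An illustrative worked contrast of the kind such instruction targets (this example is mine, not taken from the study) is a skater pushing off a wall:

```latex
% Particle (center-of-mass) relation -- correctly predicts the final speed:
\[
  \int \vec{F}_{\text{net}} \cdot d\vec{x}_{\text{cm}} = \Delta K_{\text{cm}}
\]
% First law for the extended system: the wall does no work on the skater
% (its contact point does not move), so with W_ext = 0 and Q = 0,
\[
  W_{\text{ext}} + Q = \Delta K_{\text{cm}} + \Delta E_{\text{int}} = 0
  \quad\Longrightarrow\quad
  \Delta K_{\text{cm}} = -\Delta E_{\text{int}},
\]
% i.e., the kinetic energy gained comes from internal (chemical) energy,
% not from work done by the wall -- the distinction a particle model hides.
```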
Reliability of a k-out-of-n : G System with Identical Repairable Elements
NASA Astrophysics Data System (ADS)
Sharifi, M.; Nia, A. Torabi; Shafie, P.; Norozi-Zare, F.; Sabet-Ghadam, A.
2009-09-01
k-out-of-n models are among the most useful models for calculating the reliability of complex systems such as electrical and mechanical devices. In this paper, we consider a k-out-of-n : G system with identical elements. The failure rate of each element is constant. The elements are repairable, and the repair rate of each element is constant. The system works when at least k elements work. The system of equations is established and solved for parameters such as the MTTF in a real-time situation. This model appears capable of handling more realistic situations.
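A minimal sketch of the standard birth-death treatment of such a system follows; it is not the paper's exact system of equations. State i counts failed elements, each working element fails at rate lam, each failed element is repaired at rate mu, and the system is down once i = n - k + 1. The rates used are invented.

```python
# MTTF of a repairable k-out-of-n:G system via mean-first-passage equations
# on the birth-death chain of failed-element counts.
import numpy as np

def mttf_k_out_of_n(n, k, lam, mu):
    m = n - k + 1                      # absorbing (system failed) state
    A = np.zeros((m, m))
    b = -np.ones(m)
    for i in range(m):                 # transient states 0..m-1
        up = (n - i) * lam             # one more element fails
        down = i * mu                  # one element gets repaired
        A[i, i] = -(up + down)
        if i + 1 < m:
            A[i, i + 1] = up           # transition into the absorbing state drops out
        if i > 0:
            A[i, i - 1] = down
    T = np.linalg.solve(A, b)          # T[i] = expected time to failure from state i
    return T[0]

print("MTTF(2-out-of-3, with repair):", mttf_k_out_of_n(3, 2, lam=1e-3, mu=1e-1))
print("MTTF(2-out-of-3, no repair)  :", mttf_k_out_of_n(3, 2, lam=1e-3, mu=0.0))
```

The no-repair case reproduces the closed form 1/(3 lam) + 1/(2 lam), a quick sanity check on the equations.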
Marinetto, Michael
2011-01-01
This paper explores the issue of joined-up governance by considering child protection failures, firstly the case of Victoria Climbié, who was killed by her guardians despite being known as an at-risk child by various public agencies. The seeming inability of the child protection system to prevent Victoria Climbié's death resulted in a public inquiry under the chairmanship of Lord Laming. The Laming report of 2003 looked, in part, to the lack of joined-up working between agencies to explain this failure to intervene and made a number of recommendations to improve joined-up governance. Using evidence from detailed testimonies given by key personnel during the Laming Inquiry, the argument of this paper is that we cannot focus exclusively on formal structures or decision-making processes but must also consider the normal, daily and informal routines of professional workers. These very same routines may inadvertently culminate in the sort of systemic failures that lead to child protection tragedies. Analysis of the micro-world inhabited by professional workers would benefit most, it is argued here, from the policy-based concept of street-level bureaucracy developed by Michael Lipsky some 30 years ago. The latter half of the paper considers child protection failures that emerged after the Laming-inspired reforms. In particular, the case of ‘Baby P’ highlights, once again, how the working practices of street-level professionals, rather than a lack of joined-up systems, may complement an analysis of, and help us to explain, failures in the child protection system. A Lipskian analysis generally offers, although there are some caveats, only pessimistic conclusions about the prospects of governing authorities being able to avoid future child protection disasters. These conclusions are not wholeheartedly accepted. There exists a glimmer of optimism because street-level bureaucrats still remain accountable, not necessarily in terms of top-down relations of authority but rather in terms of interpersonal forms of accountability – accountability to professionals and citizen consumers of services.
Job characteristics in nursing and cognitive failure at work.
Elfering, Achim; Grebner, Simone; Dudan, Anna
2011-06-01
Stressors in nursing put high demands on cognitive control and, therefore, may increase the risk of cognitive failures that put patients at risk. Task-related stressors were expected to be positively associated with cognitive failure at work, and job control was expected to be negatively associated with cognitive failure at work. Ninety-six registered nurses from 11 Swiss hospitals were investigated (89 women, 7 men, mean age = 36 years, standard deviation = 12 years, 80% supervisors, response rate 48%). A new German version of the Workplace Cognitive Failure Scale (WCFS) was employed to assess failure in memory function, failure in attention regulation, and failure in action exertion. In linear regression analyses, WCFS was related to work characteristics, neuroticism, and conscientiousness. The German WCFS was valid and reliable, and the factorial structure of the original WCFS could be replicated. In multilevel regression analyses, task-related stressors and conscientiousness were significantly related to attention control and action exertion. The study sheds light on the association between job characteristics and work-related cognitive failure. These associations were unique; i.e., associations were shown even when individual differences in conscientiousness and neuroticism were controlled for. Job redesign in nursing should address task stressors.
Fracture of a Brittle-Particle Ductile Matrix Composite with Applications to a Coating System
NASA Astrophysics Data System (ADS)
Bianculli, Steven J.
In material systems consisting of hard second-phase particles in a ductile matrix, failure initiating from cracking of the second-phase particles is an important failure mechanism. This dissertation applies the principles of fracture mechanics to consider this problem, first from the standpoint of fracture of the particles, and then the onset of crack propagation from fractured particles. This research was inspired by observation of the failure mechanism of a commercial zinc-based anti-corrosion coating, and the analysis was initially approached as a coatings problem. As the work progressed, it became evident that the failure mechanism was relevant to a broad range of composite material systems, and the research approach was generalized to consider failure of a system consisting of ellipsoidal second-phase particles in a ductile matrix. The starting point for the analysis is the classical Eshelby problem, which considered stress transfer from the matrix to an ellipsoidal inclusion. The particle fracture problem is approached by considering cracks within particles and how they are affected by the particle/matrix interface, the difference in properties between the particle and matrix, and particle shape. These effects are mapped out for a wide range of material combinations. The trends developed show that, although the particle fracture problem is very complex, the potential for fracture can, for certain ranges in particle shape, be assessed easily on the basis of the Eshelby stress alone. Additionally, the evaluation of cracks near the curved particle/matrix interface adds to the existing body of work on cracks approaching bi-material interfaces in layered material systems. The onset of crack propagation from fractured particles is then considered as a function of particle shape and mismatch in material properties between the particle and matrix. This behavior is mapped out for a wide range of material combinations. The final section of this dissertation qualitatively considers an approach to determining critical particle sizes, below which crack propagation will not occur, for a coating system that exhibited stable cracks in an interfacial layer between the coating and substrate.
The resilient hybrid fiber sensor network with self-healing function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Shibo, E-mail: Shibo-Xu@tju.edu.cn; Liu, Tiegen; Ge, Chunfeng
This paper presents a novel resilient fiber sensor network (FSN) with a multi-ring architecture, which can interconnect various kinds of fiber sensors responsible for more than one measurand. We explain how the intelligent control system provides sensors with a self-healing function while they are working properly, and how each fiber in the FSN is kept under real-time monitoring. We explain the software process and the emergency mechanism for responding to failures and other circumstances. To improve the efficiency of limited spectrum resources in some situations, we use two different structures to distribute the light sources rationally. Then, we propose a hybrid sensor working in the FSN, a combination of a distributed sensor and an FBG (Fiber Bragg Grating) array fused in a common fiber, sensing temperature and vibration simultaneously with negligible crosstalk between them. By introducing a failure into a working fiber in an experiment, the feasibility and effectiveness of the network with a hybrid sensor have been demonstrated: hybrid sensors not only work as designed but also survive destructive failures with the help of the resilient network and smart, quick self-healing actions. The network has improved the viability of the fiber sensors and the diversity of measurands.
Effect of Surge Current Testing on Reliability of Solid Tantalum Capacitors
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander
2008-01-01
Tantalum capacitors manufactured per military specifications are established reliability components and have less than 0.001% failures per 1000 hours for grades D or S, placing these parts among the electronic components with the highest reliability characteristics. Still, failures of tantalum capacitors do happen, and when they occur they can have catastrophic consequences for the system. To reduce this risk, further development of the screening and qualification system, with special attention to possible deficiencies in the existing procedures, is necessary. The purpose of this work is to evaluate the effect of surge current stress testing on the reliability of the parts under both steady-state and multiple surge current stress conditions. In order to reveal possible degradation and precipitate more failures, various part types were tested and stressed over a range of voltage and temperature conditions exceeding the specified limits. A model for estimating the probability of failures after surge current testing-screening, and measures to improve the effectiveness of the screening process, have been suggested.
Reconfigurable Control Design for the Full X-33 Flight Envelope
NASA Technical Reports Server (NTRS)
Cotting, M. Christopher; Burken, John J.
2001-01-01
A reconfigurable control law for the full X-33 flight envelope has been designed to accommodate a failed control surface and redistribute the control effort among the remaining working surfaces to retain satisfactory stability and performance. An offline nonlinear constrained optimization approach has been used for the X-33 reconfigurable control design method. Using a nonlinear, six-degree-of-freedom simulation, three example failures are evaluated: ascent with a left body flap jammed at maximum deflection; entry with a right inboard elevon jammed at maximum deflection; and landing with a left rudder jammed at maximum deflection. Failure detection and identification are accomplished in the actuator controller. Failure response comparisons between the nominal control mixer and the reconfigurable control subsystem (mixer) show the benefits of reconfiguration. Single aerosurface jamming failures are considered. The cases evaluated are representative of the study conducted to prove the adequate and safe performance of the reconfigurable control mixer throughout the full flight envelope. The X-33 flight control system incorporates reconfigurable flight control in the existing baseline system.
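As a toy illustration of the redistribution idea (not the X-33 design or its offline optimization method), the sketch below freezes a jammed surface at its failed deflection and solves a least-squares allocation over the remaining surfaces; the effectiveness matrix and commanded moments are invented.

```python
# Control reallocation sketch: reproduce commanded moments with the surviving
# surfaces after one surface jams. Numbers are made up for illustration.
import numpy as np

# Control effectiveness matrix B (3 moments x 4 surfaces), assumed values.
B = np.array([[ 0.8, -0.8,  0.1, -0.1],   # roll
              [ 0.3,  0.3,  0.5,  0.5],   # pitch
              [ 0.1, -0.1, -0.4,  0.4]])  # yaw
m_cmd = np.array([0.2, 0.5, 0.0])         # commanded moments

jammed, delta_jam = 1, 0.35               # surface 1 stuck at 0.35 rad
free = [i for i in range(B.shape[1]) if i != jammed]

# Subtract the moments the jammed surface already produces, then let the
# remaining surfaces absorb the residual in a least-squares sense.
residual = m_cmd - B[:, jammed] * delta_jam
delta_free, *_ = np.linalg.lstsq(B[:, free], residual, rcond=None)

delta = np.zeros(B.shape[1])
delta[jammed] = delta_jam
delta[free] = delta_free
print("deflections:", np.round(delta, 3), " achieved:", np.round(B @ delta, 3))
```

A real mixer would also enforce deflection and rate limits, which is where the constrained optimization described above comes in.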
Model Based Autonomy for Robust Mars Operations
NASA Technical Reports Server (NTRS)
Kurien, James A.; Nayak, P. Pandurang; Williams, Brian C.; Lau, Sonie (Technical Monitor)
1998-01-01
Space missions have historically relied upon a large ground staff, numbering in the hundreds for complex missions, to maintain routine operations. When an anomaly occurs, this small army of engineers attempts to identify and work around the problem. A piloted Mars mission, with its multiyear duration, cost pressures, half-hour communication delays and two-week blackouts, cannot be closely controlled by a battalion of engineers on Earth. Flight crew involvement in routine system operations must also be minimized to maximize science return. It also may be unrealistic to require that the crew have the expertise in each mission subsystem needed to diagnose a system failure and effect a timely repair, as engineers did for Apollo 13. Enter model-based autonomy, which allows complex systems to autonomously maintain operation despite failures or anomalous conditions, contributing to safe, robust, and minimally supervised operation of spacecraft, life support, In Situ Resource Utilization (ISRU) and power systems. Autonomous reasoning is central to the approach. A reasoning algorithm uses a logical or mathematical model of a system to infer how to operate the system, diagnose failures and generate appropriate behavior to repair or reconfigure the system in response. The 'plug and play' nature of the models enables low cost development of autonomy for multiple platforms. Declarative, reusable models capture relevant aspects of the behavior of simple devices (e.g., valves or thrusters). Reasoning algorithms combine device models to create a model of the system-wide interactions and behavior of a complex, unique artifact such as a spacecraft. Rather than requiring engineers to anticipate all possible interactions and failures at design time or perform analysis during the mission, the reasoning engine generates the appropriate response to the current situation, taking into account its system-wide knowledge, the current state, and even sensor failures or unexpected behavior.
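A toy sketch of the consistency-based flavor of model-based diagnosis follows (illustrative only, not the flight system): a device model predicts behavior under each fault hypothesis, and hypotheses inconsistent with observations are discarded.

```python
# Consistency-based diagnosis sketch: two valves feed a thruster in series;
# keep every fault hypothesis whose prediction matches the observation.
from itertools import product

COMPONENTS = ["valve1", "valve2"]

def predict(modes, cmd_open):
    """Model: a nominal ('ok') valve passes flow when commanded open; a
    'stuck_closed' valve never does. Thrust requires flow through both."""
    flows = [cmd_open and (m == "ok") for m in modes]
    return flows[0] and flows[1]

observed_thrust = False          # valves commanded open, but no thrust seen
candidates = []
for modes in product(["ok", "stuck_closed"], repeat=2):
    if predict(modes, cmd_open=True) == observed_thrust:
        candidates.append(dict(zip(COMPONENTS, modes)))

# Prefer minimal diagnoses (fewest faulted components).
min_faults = min(sum(m != "ok" for m in c.values()) for c in candidates)
print([c for c in candidates if sum(m != "ok" for m in c.values()) == min_faults])
```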
Design and simulation of liquid cooled system for power battery of PHEV
NASA Astrophysics Data System (ADS)
Wang, Jianpeng; Xu, Haijun; Xu, Xiaojun; Pan, Cunyun
2017-09-01
Various battery chemistries have different responses to failure, but the most common failure mode of a cell under abusive conditions is the generation of heat and gas. To prevent battery thermal abuse, a battery thermal management system is essential. An excellent design of a battery thermal management system can ensure that the battery works at a suitable temperature and keeps the battery temperature difference at 2-3 °C. This paper presents a thermal-electric coupling model for a 37 Ah lithium battery using AMESim. A liquid cooled system for the power battery of a hybrid electric vehicle is designed to control the battery temperature. A liquid cooled model of the thermal management system is built using AMESim; the simulation results show that the temperature difference among cells in the pack is within 3 °C.
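A minimal lumped thermal-electric sketch of the same physics is shown below (my simplification, not the paper's AMESim model): one cell heated by Joule losses and cooled by a liquid plate. All parameters are illustrative, not the 37 Ah cell's measured values.

```python
# Lumped single-cell model: dT/dt = (I^2 R - hA (T - T_cool)) / C.

def simulate(t_end=3600.0, dt=1.0):
    I, R = 74.0, 0.0015        # current (A, about 2C of 37 Ah) and resistance (ohm)
    hA = 2.5                   # coolant-to-cell conductance (W/K), assumed
    C = 900.0                  # cell heat capacity (J/K), assumed
    T, T_cool = 25.0, 25.0     # initial cell and coolant temperatures (degC)
    t = 0.0
    while t < t_end:
        q_gen = I * I * R                  # Joule heating (W)
        q_out = hA * (T - T_cool)          # heat removed by the cold plate (W)
        T += (q_gen - q_out) * dt / C      # explicit Euler step
        t += dt
    return T

print("cell temperature after 1 h: %.1f degC" % simulate())
```

With these assumed numbers the cell settles about 3 K above the coolant, the same order as the 2-3 °C spread the paper targets.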
The assessment of low probability containment failure modes using dynamic PRA
NASA Astrophysics Data System (ADS)
Brunett, Acacia Joann
Although low probability containment failure modes in nuclear power plants may lead to large releases of radioactive material, these modes are typically crudely modeled in system level codes and have large associated uncertainties. Conventional risk assessment techniques (i.e. the fault-tree/event-tree methodology) are capable of accounting for these failure modes to some degree, however, they require the analyst to pre-specify the ordering of events, which can vary within the range of uncertainty of the phenomena. More recently, dynamic probabilistic risk assessment (DPRA) techniques have been developed which remove the dependency on the analyst. Through DPRA, it is now possible to perform a mechanistic and consistent analysis of low probability phenomena, with the timing of the possible events determined by the computational model simulating the reactor behavior. The purpose of this work is to utilize DPRA tools to assess low probability containment failure modes and the driving mechanisms. Particular focus is given to the risk-dominant containment failure modes considered in NUREG-1150, which has long been the standard for PRA techniques. More specifically, this work focuses on the low probability phenomena occurring during a station blackout (SBO) with late power recovery in the Zion Nuclear Power Plant, a Westinghouse pressurized water reactor (PWR). Subsequent to the major risk study performed in NUREG-1150, significant experimentation and modeling regarding the mechanisms driving containment failure modes have been performed. In light of this improved understanding, NUREG-1150 containment failure modes are reviewed in this work using the current state of knowledge. For some unresolved mechanisms, such as containment loading from high pressure melt ejection and combustion events, additional analyses are performed using the accident simulation tool MELCOR to explore the bounding containment loads for realistic scenarios. A dynamic treatment in the characterization of combustible gas ignition is also presented in this work. In most risk studies, combustion is treated simplistically in that it is assumed an ignition occurs if the gas mixture achieves a concentration favorable for ignition under the premise that an adequate ignition source is available. However, the criteria affecting ignition (such as the magnitude, location and frequency of the ignition sources) are complicated. This work demonstrates a technique for characterizing the properties of an ignition source to determine a probability of ignition. The ignition model developed in this work and implemented within a dynamic framework is utilized to analyze the implications and risk significance of late combustion events. This work also explores the feasibility of using dynamic event trees (DETs) with a deterministic sampling approach to analyze low probability phenomena. The flexibility of this approach is demonstrated through the rediscretization of containment fragility curves used in construction of the DET to show convergence to a true solution. Such a rediscretization also reduces the computational burden introduced through extremely fine fragility curve discretization by subsequent refinement of fragility curve regions of interest. Another advantage of the approach is the ability to perform sensitivity studies on the cumulative distribution functions (CDFs) used to determine branching probabilities without the need for rerunning the simulation code. 
Through review of the NUREG-1150 containment failure modes using the current state of knowledge, it is found that some failure modes, such as Alpha and rocket, can be excluded from further studies; other failure modes, such as failure to isolate, bypass, high pressure melt ejection (HPME), combustion-induced failure and overpressurization, are still concerns to varying degrees. As part of this analysis, scoping studies performed in MELCOR show that HPME and the resulting direct containment heating (DCH) do not pose a significant threat to containment integrity. Additional scoping studies regarding the effect of recovery actions on in-vessel hydrogen generation show that reflooding a partially degraded core does not significantly affect hydrogen generation in-vessel, and the NUREG-1150 assumption that insufficient hydrogen is generated in-vessel to produce an energetic deflagration is confirmed. The DET analyses performed in this work show that very late power recovery produces the potential for very energetic combustion events which are capable of failing containment with a non-negligible probability, and that containment cooling systems have a significant impact on core concrete attack, and therefore on combustible gas generation ex-vessel. Ultimately, the overall risk of combustion-induced containment failure is low, but its conditional likelihood can have a significant effect on accident mitigation strategies. It is also shown in this work that DETs are particularly well suited to examining low probability events because of their ability to rediscretize CDFs and observe solution convergence.
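One small ingredient of such a DET analysis can be sketched concretely: turning a containment fragility curve into branch probabilities. The lognormal fragility parameters and pressure intervals below are invented for illustration and are not the Zion plant values.

```python
# Discretizing a fragility curve (CDF of containment capacity vs. pressure)
# into DET branch probabilities: each branch carries the probability that
# the capacity falls inside its pressure interval.
import math

def fragility(p, median=0.9, beta=0.25):
    """Lognormal fragility: P(capacity <= p), pressure p in MPa (assumed)."""
    return 0.5 * (1.0 + math.erf(math.log(p / median) / (beta * math.sqrt(2))))

edges = [0.0, 0.6, 0.8, 1.0, 1.2, float("inf")]
probs = [fragility(min(b, 5.0)) - fragility(max(a, 1e-9))
         for a, b in zip(edges, edges[1:])]
for (a, b), pr in zip(zip(edges, edges[1:]), probs):
    print(f"capacity in [{a:.1f}, {b:.1f}) MPa : prob {pr:.3f}")
```

Refining the edges is exactly the rediscretization described above: it changes the branching resolution without rerunning the accident simulation code.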
SU-E-T-627: Failure Modes and Effect Analysis for Monthly Quality Assurance of Linear Accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, J; Xiao, Y; Wang, J
2014-06-15
Purpose: To develop and implement a failure mode and effect analysis (FMEA) of the routine monthly Quality Assurance (QA) tests (physical tests part) of a linear accelerator. Methods: A systematic failure mode and effect analysis was performed for the monthly QA procedures. A detailed process tree of monthly QA was created and potential failure modes were defined. Each failure mode may have many influencing factors. For each factor, a risk probability number (RPN) was calculated as the product of the probability of occurrence (O), the severity of effect (S), and the detectability of the failure (D). The RPN scores are in a range of 1 to 1000, with higher scores indicating stronger correlation to a given influencing factor of a failure mode. Five medical physicists in our institution were responsible for discussing and defining the O, S, and D values. Results: 15 possible failure modes were identified, and the RPN scores of all influencing factors of these 15 failure modes ranged from 8 to 150; a checklist for FMEA in monthly QA was drawn up. The system showed consistent and accurate response to erroneous conditions. Conclusion: Influencing factors with RPN greater than 50 were considered highly correlated factors of a given out-of-tolerance monthly QA test. FMEA is a fast and flexible tool to develop and implement a quality management (QM) framework for monthly QA, which improved the efficiency of our QA team. The FMEA work may incorporate more quantification and monitoring functions in the future.
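The RPN bookkeeping described above is simple enough to show directly. The sketch below assumes 1-10 scales for O, S, and D (hence the 1-1000 range) and flags factors above the RPN = 50 threshold; the failure modes and scores are invented placeholders, not the study's data.

```python
# RPN = O * S * D per influencing factor, flagging anything above 50.

factors = [
    ("laser misalignment",   {"O": 3, "S": 7, "D": 5}),
    ("output drift > 2%",    {"O": 2, "S": 8, "D": 3}),
    ("couch indexing error", {"O": 4, "S": 5, "D": 2}),
]
for name, s in factors:
    rpn = s["O"] * s["S"] * s["D"]
    flag = "HIGH" if rpn > 50 else "ok"
    print(f"{name:22s} RPN = {rpn:4d} [{flag}]")
```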
Investigation of pump and pump switch failures in rainwater harvesting systems
NASA Astrophysics Data System (ADS)
Moglia, Magnus; Gan, Kein; Delbridge, Nathan; Sharma, Ashok K.; Tjandraatmadja, Grace
2016-07-01
Rainwater harvesting is an important technology in cities that can contribute to a number of functions, such as sustainable water management in the face of demand growth and drought as well as the detention of rainwater to increase flood protection and reduce damage to waterways. The objective of this article is to investigate the integrity of residential rainwater harvesting systems, drawing on the results of the field inspection of 417 rainwater systems across Melbourne that was combined with a survey of householders' situation, maintenance behaviour and attitudes. Specifically, the study moves beyond the assumption that rainwater systems are always operational and functional and draws on the collected data to explore the various reasons and rates of failure associated with pumps and pump switches, leaving for later further exploration of the failure in other components such as the collection area, gutters, tank, and overflows. To the best of the authors' knowledge, there is no data like this in academic literature or in the water sector. Straightforward Bayesian Network models were constructed in order to analyse the factors contributing to various types of failures, including system age, type of use, the reason for installation, installer, and maintenance behaviour. Results show that a number of issues commonly exist, such as failure of pumps (5% of systems), automatic pump switches that mediate between the tank and reticulated water (9% of systems), and systems with inadequate setups (i.e. no pump) limiting their use. In conclusion, there appears to be a lack of enforcement or quality controls in both installation practices by sometimes unskilled contractors and lack of ongoing maintenance checks. Mechanisms for quality control and asset management are required, but difficult to promote or enforce. Further work is needed into how privately owned assets that have public benefits could be better managed.
Incorporation of RAM techniques into simulation modeling
NASA Astrophysics Data System (ADS)
Nelson, S. C., Jr.; Haire, M. J.; Schryver, J. C.
1995-01-01
This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model to represent the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army's next generation self-propelled cannon artillery system. The FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve operational performance and reduce manpower. The network simulation model used in this work is task-based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs--upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.--is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task, decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment and includes failure management subnetworks. RAM information and other performance measures are collected which have impact on design requirements. Design changes are evaluated through 'what if' questions, sensitivity studies, and battle scenario changes.
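A minimal sketch of the task-based simulation idea follows: a mission is a chain of tasks, failures arrive at an assumed exponential rate, and each failure inserts a repair delay before the task resumes. The task list, MTBF, and MTTR are invented, not the Army model's values.

```python
# Monte Carlo mission-time estimate for a task chain with random failures.
import random

TASKS = [("upload", 30.0), ("travel", 45.0), ("refuel", 15.0), ("resupply", 60.0)]
MTBF, MTTR = 120.0, 20.0            # minutes, assumed vehicle-level values

def mission_time(rng):
    total = 0.0
    for _, duration in TASKS:
        remaining = duration
        while remaining > 0:
            t_fail = rng.expovariate(1.0 / MTBF)
            if t_fail >= remaining:          # task finishes before a failure
                total += remaining
                remaining = 0.0
            else:                            # failure mid-task: repair, resume
                total += t_fail + rng.expovariate(1.0 / MTTR)
                remaining -= t_fail
    return total

rng = random.Random(1)
times = [mission_time(rng) for _ in range(10000)]
print("mean mission time: %.1f min" % (sum(times) / len(times)))
```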
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method is demonstrated by both academic and practical examples.
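A minimal 1-D sketch of the idea is given below, with an invented stand-in for the expensive model: fit a Gaussian process to the available evaluations and repeatedly sample where the sign of the limit-state function is most ambiguous (a "U-function"-style criterion). The kernel, acquisition rule, and single-point selection are simplifying choices; the paper's method selects multiple points at a time.

```python
# GP surrogate with adaptive sampling toward the limit state g(x) = 0.
import numpy as np

def rbf(A, B, ls=0.4):
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(Xtr, ytr, Xte, noise=1e-6):
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks, Kss = rbf(Xtr, Xte), rbf(Xte, Xte)
    mean = Ks.T @ np.linalg.solve(K, ytr)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    sd = np.sqrt(np.clip(np.diag(cov), 1e-12, None))
    return mean, sd

g = lambda x: np.sin(3 * x) - 0.5          # stand-in for the expensive model
X = np.array([0.0, 1.0, 2.0])              # initial design points
y = g(X)
grid = np.linspace(0.0, 2.0, 201)

for _ in range(5):                          # adaptive enrichment, one point each
    mu, sd = gp_posterior(X, y, grid)
    x_new = grid[np.argmin(np.abs(mu) / sd)]   # where the sign of g is least certain
    X, y = np.append(X, x_new), np.append(y, g(x_new))

mu, _ = gp_posterior(X, y, grid)
print("estimated limit state near x =", grid[np.argmin(np.abs(mu))])
# true roots of sin(3x) = 0.5 on [0, 2]: x ~ 0.1745 and x ~ 0.8727
```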
NASA Technical Reports Server (NTRS)
Gaier, James R.; Siamidis, John; Larkin, Elizabeth M. G.
2010-01-01
The National Aeronautics and Space Administration (NASA) is currently developing a new universal docking mechanism for future space exploration missions called the Low Impact Docking System (LIDS). A candidate LIDS main interface seal design is a composite assembly of silicone elastomer seals vacuum molded into grooves in an electroless nickel plated aluminum retainer. The strength of the silicone-to-metal bond is a critical consideration for the new system, especially due to the presence of small areas of disbond created during the molding process. In the work presented herein, seal-to-retainer bonds of subscale seal specimens with different sizes of intentional disbond were destructively tensile tested. Nominal specimens without intentional disbonds were also tested. Tension was applied either uniformly on the entire seal circumference or locally in one short circumferential length. Bond failure due to uniform tension produced a wide scatter of observable failure modes and measured load-displacement behaviors. Although the preferable failure mode for the seal-to-retainer bond is cohesive failure of the elastomer material, the dominant observed failure mode under the uniform loading condition was found to be the less desirable adhesive failure of the bond in question. The uniform tension case results did not show a correlation between disbond size and bond strength. Localized tension was found to produce failure either as immediate tearing of the elastomer material outside the bond region or as complete peel-out of the seal in one piece. The obtained results represent a valuable benchmark for comparison in the future between adhesion loads under various separation conditions and composite seal bond strength.
Biofeedback in the treatment of heart failure.
McKee, Michael G; Moravec, Christine S
2010-07-01
Biofeedback training can be used to reduce activation of the sympathetic nervous system (SNS) and increase activation of the parasympathetic nervous system (PNS). It is well established that hyperactivation of the SNS contributes to disease progression in chronic heart failure. It has been postulated that underactivation of the PNS may also play a role in heart failure pathophysiology. In addition to autonomic imbalance, a chronic inflammatory process is now recognized as being involved in heart failure progression, and recent work has established that activation of the inflammatory process may be attenuated by vagal nerve stimulation. By interfering with both autonomic imbalance and the inflammatory process, biofeedback-assisted stress management may be an effective treatment for patients with heart failure by improving clinical status and quality of life. Recent studies have suggested that biofeedback and stress management have a positive impact in patients with chronic heart failure, and patients with higher perceived control over their disease have been shown to have better quality of life. Our ongoing study of biofeedback-assisted stress management in the treatment of end-stage heart failure will also examine biologic end points in treated patients at the time of heart transplant, in order to assess the effects of biofeedback training on the cellular and molecular components of the failing heart. We hypothesize that the effects of biofeedback training will extend to remodeling the failing human heart, in addition to improving quality of life.
Reliability and Maintainability Data for Lead Lithium Cooling Systems
Cadwallader, Lee
2016-11-16
This article presents component failure rate data for use in assessment of lead lithium cooling systems. Best estimate data applicable to this liquid metal coolant is presented. Repair times for similar components are also referenced in this work. These data support probabilistic safety assessment and reliability, availability, maintainability and inspectability analyses.
Failure Impact Analysis of Key Management in AMI Using Cybernomic Situational Assessment (CSA)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Sheldon, Frederick T; Hauser, Katie R
2013-01-01
In earlier work, we presented a computational framework for quantifying the security of a system in terms of the average loss a stakeholder stands to sustain as a result of threats to the system. We named this system the Cyberspace Security Econometrics System (CSES). In this paper, we refine the framework and apply it to cryptographic key management within the Advanced Metering Infrastructure (AMI) as an example. The stakeholders, requirements, components, and threats are determined. We then populate the matrices with justified values by addressing the AMI at a higher level, rather than trying to consider every piece of hardware and software involved. We accomplish this task by leveraging the recently established NISTIR 7628 guideline for smart grid security. This allowed us to choose the stakeholders, requirements, components, and threats realistically. We reviewed the literature and selected an industry technical working group to select three representative threats from a collection of 29 threats. From this subset, we populate the stakes, dependency, and impact matrices, and the threat vector, with realistic numbers. Each stakeholder's Mean Failure Cost is then computed.
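The quantification itself is a chain of matrix products. The sketch below shows the shape of that computation (stakes, dependency, and impact matrices applied to a threat-emergence vector) with invented dimensions and numbers, not the AMI values populated in the paper.

```python
# Mean Failure Cost as MFC = ST x DP x IM x PT, with invented numbers:
# ST (stakeholder x requirement), DP (requirement x component),
# IM (component x threat), PT (threat emergence probabilities).
import numpy as np

ST = np.array([[80., 20.],        # utility:  cost at stake per requirement ($k)
               [40., 60.]])       # customer
DP = np.array([[0.7, 0.3],        # P(requirement violated | component fails)
               [0.2, 0.8]])
IM = np.array([[0.5, 0.1, 0.0],   # P(component fails | threat materializes)
               [0.0, 0.3, 0.6]])
PT = np.array([1e-3, 5e-4, 2e-4]) # P(threat materializes) per unit time

mfc = ST @ DP @ IM @ PT
for who, cost in zip(["utility", "customer"], mfc):
    print(f"{who:9s} mean failure cost = ${1000 * cost:.2f} per period")
```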
Evaluation of a Linear Cumulative Damage Failure Model for Epoxy Adhesive
NASA Technical Reports Server (NTRS)
Richardson, David E.; Batista-Rodriquez, Alicia; Macon, David; Totman, Peter; McCool, Alex (Technical Monitor)
2001-01-01
Recently a significant amount of work has been conducted to provide more complex and accurate material models for use in the evaluation of adhesive bondlines. Some of this has been prompted by recent studies into the effects of residual stresses on the integrity of bondlines. Several techniques have been developed for the analysis of bondline residual stresses. Key to these analyses is the criterion used for predicting failure. Residual stress loading of an adhesive bondline can occur over the life of the component; for many bonded systems, this can be several years. It is impractical to directly characterize failure of adhesive bondlines under a constant load for several years, so alternative approaches for predicting bondline failures are required. In the past, cumulative damage failure models have been developed, ranging from very simple to very complex. This paper documents the generation and evaluation of some of the simplest linear damage accumulation tensile failure models for an epoxy adhesive. The paper shows how several variations on the failure model were generated and presents an evaluation of the accuracy of these failure models in predicting creep failure of the adhesive. The paper shows that a simple failure model can be generated from short-term failure data for accurate predictions of long-term adhesive performance.
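The simplest linear damage-accumulation rule named above can be written in a few lines: damage accrues as time-at-stress divided by time-to-failure at that stress, with failure predicted when the summed fractions reach one. The power-law rupture life and load profile below are invented placeholders, not the epoxy data.

```python
# Linear (Miner-style) damage accumulation under a piecewise-constant load.

def time_to_failure(stress_mpa):
    return 1e6 * (10.0 / stress_mpa) ** 8   # hours; assumed creep-rupture law

def cumulative_damage(history):
    """history: list of (stress in MPa, hours held at that stress)."""
    return sum(hours / time_to_failure(s) for s, hours in history)

profile = [(12.0, 2000.0), (8.0, 20000.0), (15.0, 500.0)]   # hypothetical life
D = cumulative_damage(profile)
print(f"accumulated damage D = {D:.3f} "
      f"({'failure predicted' if D >= 1 else 'survives'})")
```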
Testing of a variable-stroke Stirling engine
NASA Technical Reports Server (NTRS)
Thieme, Lanny G.; Allen, David J.
1986-01-01
Testing of a variable-stroke Stirling engine at NASA Lewis has been completed. In support of the DOE Stirling Engine Highway Vehicle Systems Program, the engine was tested for about 70 hours total with both He and H2 as working fluids over a range of pressures and strokes. A direct comparison was made of part-load efficiencies obtained with variable-stroke (VS) and variable-pressure operation. Two failures with the variable-angle swash-plate drive system limited testing to low power levels. These failures are not thought to be caused by problems inherent with the VS concept but do emphasize the need for careful design in the area of the crossheads.
Yazdani, Sahar; Haeri, Mohammad
2017-11-01
In this work, we study the flocking problem of multi-agent systems with uncertain dynamics subject to actuator failure and external disturbances. Under some standard assumptions, we propose a robust adaptive fault-tolerant protocol that compensates for actuator bias faults, partial loss of actuator effectiveness, model uncertainties, and external disturbances. Under the designed protocol, velocity convergence of the agents to that of the virtual leader is guaranteed, while connectivity preservation of the network and collision avoidance among agents are ensured as well. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
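A stripped-down sketch of the velocity-consensus core is given below, without the paper's adaptive compensation terms, disturbances, or connectivity/collision machinery; the complete-graph topology, gains, and loss-of-effectiveness factor are invented for illustration.

```python
# Velocity consensus with a virtual leader; one agent has a partial
# loss-of-effectiveness actuator fault that scales its control input.
import numpy as np

n, dt, steps = 5, 0.02, 2000
A = np.ones((n, n)) - np.eye(n)        # complete graph, assumed topology
v_leader = 1.0
v = np.random.default_rng(0).uniform(-2, 2, n)
eff = np.ones(n); eff[2] = 0.4         # agent 2 retains 40% actuator effectiveness

for _ in range(steps):
    u = A @ v - A.sum(1) * v + (v_leader - v)   # consensus + leader tracking
    v = v + dt * eff * u                        # fault scales the applied input
print("final velocities:", np.round(v, 3))      # all near the leader's 1.0
```

Even without explicit compensation, the faulty agent still converges here, only more slowly; the paper's adaptive terms are what restore guaranteed performance under bias faults and disturbances.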
An autonomous recovery mechanism against optical distribution network failures in EPON
NASA Astrophysics Data System (ADS)
Liem, Andrew Tanny; Hwang, I.-Shyan; Nikoukar, AliAkbar
2014-10-01
Ethernet Passive Optical Network (EPON) is chosen for servicing diverse applications with higher bandwidth and Quality-of-Service (QoS), from Fiber-To-The-Home (FTTH) to FTTB (business/building) and FTTO (office). Typically, a single Optical Line Terminal (OLT) can provide services to both residential and business customers on the same OLT port; thus, any failure in the system will cause a great loss for both network operators and customers. Network operators are looking for low-cost, high-availability mechanisms that focus on failures occurring within the drop fiber section, because the majority of faults arise in this particular section. Therefore, in this paper, we propose an autonomous recovery mechanism that provides protection and recovery against Drop Distribution Fiber (DDF) link faults or transceiver failure at the ONU(s) in EPON systems. In the proposed mechanism, the ONU can automatically detect any signal anomalies in the physical layer or transceiver failures, switch the working line to the protection line, and send a critical event alarm to the OLT via its neighbor. Each ONU has a protection line connected to the nearest neighbor ONU, so that when a failure occurs, the ONU can still transmit and receive data via the neighbor ONU. Lastly, the Fault Dynamic Bandwidth Allocation for the recovery mechanism is presented. Simulation results show that our proposed autonomous recovery mechanism is able to maintain overall QoS performance in terms of mean packet delay, system throughput, packet loss and EF jitter.
Idris, Hajara; Junaidu, Sahalu B.; Adewumi, Aderemi O.
2017-01-01
The Grid scheduler schedules user jobs on the best available resource in terms of resource characteristics by optimizing job execution time. Resource failure in the Grid is no longer an exception but a regularly occurring event, as resources are increasingly used by the scientific community to solve computationally intensive problems which typically run for days or even months. It is therefore absolutely essential that these long-running applications are able to tolerate failures and avoid re-computation from scratch after a resource failure has occurred, to satisfy the user's Quality of Service (QoS) requirement. Job Scheduling with Fault Tolerance in Grid Computing using Ant Colony Optimization is proposed to ensure that jobs are executed successfully even when resource failure has occurred. The technique employed in this paper uses the resource failure rate together with a checkpoint-based rollback recovery strategy. Checkpointing aims at reducing the amount of work that is lost upon failure of the system by immediately saving the state of the system. A comparison of the proposed approach with an existing Ant Colony Optimization (ACO) algorithm is discussed. The experimental results of the implemented fault-tolerant scheduling algorithm show an improvement in the user's QoS requirement over the existing ACO algorithm, which has no fault tolerance integrated into it. The performance of the two algorithms was evaluated in terms of three main scheduling performance metrics: makespan, throughput and average turnaround time. PMID:28545075
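A toy sketch of the flavor of failure-aware ACO resource selection follows (not the paper's exact algorithm): ants choose a resource with probability proportional to pheromone^alpha times heuristic^beta, where the heuristic discounts a resource's speed by its observed failure rate, and pheromone evaporates and is reinforced mainly by successful runs. All resource figures and parameters are invented.

```python
# Failure-aware ACO-style resource selection sketch.
import random

resources = {            # name: (relative speed, failure rate) -- assumed values
    "siteA": (1.0, 0.02),
    "siteB": (1.6, 0.20),
    "siteC": (0.8, 0.01),
}
tau = {r: 1.0 for r in resources}      # pheromone levels
alpha, beta, rho = 1.0, 2.0, 0.1       # pheromone weight, heuristic weight, evaporation
rng = random.Random(7)

def pick():
    weights = {r: tau[r] ** alpha * (speed / (1.0 + fr)) ** beta
               for r, (speed, fr) in resources.items()}
    x = rng.random() * sum(weights.values())
    for r, w in weights.items():
        x -= w
        if x <= 0:
            return r
    return r

for job in range(300):
    r = pick()
    speed, fr = resources[r]
    success = rng.random() > fr        # crude per-job failure chance
    for k in tau:                      # evaporation on every resource
        tau[k] *= (1.0 - rho)
    tau[r] += speed if success else 0.1   # strong reward on success, weak on failure

print({k: round(v, 2) for k, v in tau.items()})
```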
The Emergency Landing Planner Experiment
NASA Technical Reports Server (NTRS)
Meuleau, Nicolas F.; Neukom, Christian; Plaunt, Christian John; Smith, David E.; Smith, Tristan B.
2011-01-01
In previous work, we described an Emergency Landing Planner (ELP) designed to assist pilots in choosing the best emergency landing site when damage or failures occur in an aircraft. In this paper, we briefly describe the system, but focus on the integration of this system into the cockpit of a 6 DOF full-motion simulator and a study designed to evaluate the ELP. We discuss the results of this study, the lessons learned, and some of the issues involved in advancing this work further.
Cascading failures in interdependent systems under a flow redistribution model
NASA Astrophysics Data System (ADS)
Zhang, Yingrui; Arenas, Alex; Yaǧan, Osman
2018-02-01
Robustness and cascading failures in interdependent systems has been an active research field in the past decade. However, most existing works use percolation-based models where only the largest component of each network remains functional throughout the cascade. Although suitable for communication networks, this assumption fails to capture the dependencies in systems carrying a flow (e.g., power systems, road transportation networks), where cascading failures are often triggered by redistribution of flows leading to overloading of lines. Here, we consider a model consisting of systems A and B with initial line loads and capacities given by {L_{A,i}, C_{A,i}}_{i=1}^{n} and {L_{B,i}, C_{B,i}}_{i=1}^{n}, respectively. When a line fails in system A, a fraction a of its load is redistributed to alive lines in B, while the remaining (1-a) fraction is redistributed equally among all functional lines in A; a line failure in B is treated similarly, with b giving the fraction to be redistributed to A. We give a thorough analysis of cascading failures of this model initiated by a random attack targeting a p_1 fraction of lines in A and a p_2 fraction in B. We show that (i) the model captures the real-world phenomenon of unexpected large-scale cascades and exhibits interesting transition behavior: the final collapse is always first order, but it can be preceded by a sequence of first- and second-order transitions; (ii) network robustness tightly depends on the coupling coefficients a and b, and robustness is maximized at non-trivial a, b values in general; (iii) unlike in most existing models, interdependence has a multifaceted impact on system robustness, in that interdependency can lead to improved robustness for each individual network.
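The redistribution rule lends itself to a compact simulation. The sketch below is one illustrative reading of the model (equal redistribution within each system, coupling fractions a and b, load shed onto a fully dead system simply dropped); it is not the authors' code:

```python
import numpy as np

def cascade(L_A, C_A, L_B, C_B, a, b, attacked_A, attacked_B):
    """Run the flow-redistribution cascade: when a line fails in A,
    fraction a of its load goes equally to the live lines of B and (1-a)
    equally to the live lines of A; symmetrically with b for B.
    Returns the surviving fraction of lines in each system."""
    L_A, L_B = L_A.astype(float).copy(), L_B.astype(float).copy()
    alive_A = np.ones(len(L_A), bool); alive_A[list(attacked_A)] = False
    alive_B = np.ones(len(L_B), bool); alive_B[list(attacked_B)] = False
    shed_A, shed_B = L_A[~alive_A].sum(), L_B[~alive_B].sum()
    while shed_A > 0 or shed_B > 0:
        to_A = (1 - a) * shed_A + b * shed_B   # load landing on A's lines
        to_B = a * shed_A + (1 - b) * shed_B   # load landing on B's lines
        if alive_A.any():
            L_A[alive_A] += to_A / alive_A.sum()
        if alive_B.any():
            L_B[alive_B] += to_B / alive_B.sum()
        over_A = alive_A & (L_A > C_A)          # newly overloaded lines fail
        over_B = alive_B & (L_B > C_B)
        shed_A, shed_B = L_A[over_A].sum(), L_B[over_B].sum()
        alive_A[over_A] = False; alive_B[over_B] = False
    return alive_A.mean(), alive_B.mean()

rng = np.random.default_rng(0)
n = 1000
L = rng.uniform(10.0, 30.0, n)
attack = rng.choice(n, size=100, replace=False)   # random attack on A only
print(cascade(L, 1.6 * L, L, 1.6 * L, a=0.3, b=0.3,
              attacked_A=attack, attacked_B=[]))
```

Sweeping a and b in such a simulation is one way to reproduce the paper's observation that robustness peaks at non-trivial coupling values.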
Missile and Space Systems Reliability versus Cost Trade-Off Study
1983-01-01
Robert C. Schneider, Boeing Aerospace Company. [Report-form OCR fragments] …reliability problems, which has the real bearing on program effectiveness. A well planned and funded reliability effort can prevent or ferret out… failure analysis, and the incorporation and verification of design corrections to prevent recurrence of failures. …302.2.2 A TMJ test plan shall be…
Sauer, Juergen; Chavaillaz, Alain; Wastell, David
2016-06-01
This work examined the effects of operators' exposure to various types of automation failures in training. Forty-five participants were trained for 3.5 h on a simulated process control environment. During training, participants either experienced a fully reliable, automatic fault repair facility (i.e. faults detected and correctly diagnosed), a misdiagnosis-prone one (i.e. faults detected but not correctly diagnosed) or a miss-prone one (i.e. faults not detected). One week after training, participants were tested for 3 h, experiencing two types of automation failures (misdiagnosis, miss). The results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system. Operator errors resulting from automation bias were much higher when automation misdiagnosed a fault than when it missed one. Differences in trust levels that were instilled by the different training experiences disappeared during the testing session. Practitioner Summary: The experience of automation failures during training has some consequences. A greater potential for operator errors may be expected when an automatic system failed to diagnose a fault than when it failed to detect one.
Making intelligent systems team players. A guide to developing intelligent monitoring systems
NASA Technical Reports Server (NTRS)
Land, Sherry A.; Malin, Jane T.; Thronesberry, Carroll; Schreckenghost, Debra L.
1995-01-01
This reference guide for developers of intelligent monitoring systems is based on lessons learned by developers of the DEcision Support SYstem (DESSY), an expert system that monitors Space Shuttle telemetry data in real time. DESSY makes inferences about commands, state transitions, and simple failures. It performs failure detection rather than in-depth failure diagnostics. A listing of rules from DESSY and cue cards from DESSY subsystems are included to give the development community a better understanding of the selected model system. The G-2 programming tool used in developing DESSY provides an object-oriented, rule-based environment, but many of the principles in use here can be applied to any type of monitoring intelligent system. The step-by-step instructions and examples given for each stage of development are in G-2, but can be used with other development tools. This guide first defines the authors' concept of real-time monitoring systems, then tells prospective developers how to determine system requirements, how to build the system through a combined design/development process, and how to solve problems involved in working with real-time data. It explains the relationships among operational prototyping, software evolution, and the user interface. It also explains methods of testing, verification, and validation. It includes suggestions for preparing reference documentation and training users.
Jun, Jin; Faulkner, Kenneth M
2018-04-01
To review the current literature on hospital nursing factors associated with 30-day readmission rates of patients with heart failure. Heart failure is a common yet debilitating chronic illness with high mortality and morbidity. One in five patients with heart failure will experience an unplanned readmission to a hospital within 30 days. Given the significance of heart failure to individuals, families, and the healthcare system, the Center for Medicare and Medicaid Services has made reducing 30-day readmission rates a priority. A scoping review, which maps the key concepts of a research area, was used. Primary studies assessing factors related to nurses in hospitals and readmission of patients with heart failure were included, provided they were written in English and published in peer-reviewed journals. The search returned 2,782 articles. After removing duplicates and applying the inclusion and exclusion criteria, five articles were selected. Three nursing workforce factors emerged: (i) nurse staffing, (ii) nursing care and work environment, and (iii) nurses' knowledge of heart failure. This is the first scoping review examining the association between hospital nursing factors and 30-day readmission rates of patients with heart failure. Further studies examining the extent to which nursing structural and process factors influence the outcomes of patients with heart failure are needed. Nurses are an integral part of the healthcare system. Identifying the factors related to nurses in hospitals is important to ensure comprehensive delivery of care to the chronically ill population. Hospital administrators, managers, and policymakers can use the findings from this review to implement strategies to reduce 30-day readmission rates of patients with heart failure. © 2018 John Wiley & Sons Ltd.
Medium Fidelity Simulation of Oxygen Tank Venting
NASA Technical Reports Server (NTRS)
Sweet, Adam; Kurien, James; Lau, Sonie (Technical Monitor)
2001-01-01
The item to be cleared is a medium-fidelity software simulation model of a vented cryogenic tank. Such tanks are commonly used to transport cryogenic liquids such as liquid oxygen via truck, and have appeared on liquid-fueled rockets for decades. This simulation model works with the HCC simulation system that was developed by Xerox PARC and NASA Ames Research Center. HCC has been previously cleared for distribution. When used with the HCC software, the model generates simulated readings for the tank pressure and temperature as the simulated cryogenic liquid boils off and is vented. Failures (such as a broken vent valve) can be injected into the simulation to produce readings corresponding to the failure. Release of this simulation will allow researchers to test their software diagnosis systems by attempting to diagnose the simulated failure from the simulated readings. This model does not contain any encryption software, nor can it perform any control tasks that might be export controlled.
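A toy version of what such a fault-injectable simulation produces, with made-up physics constants and a stuck-closed vent valve injected mid-run; the cleared NASA/PARC model is far more detailed:

```python
def simulate_tank(vent_stuck_closed_at=None, dt=1.0, t_end=600.0):
    """Toy lumped model of a vented cryogenic oxygen tank (illustrative
    constants, not the cleared model). Boil-off raises pressure; an open
    vent relieves it. Injecting a stuck-closed vent failure lets the
    pressure climb past the relief setpoint."""
    p, T = 120.0, 90.0          # kPa, K: notional initial state
    boil_rate = 0.8             # kPa/s pressure rise from boil-off
    vent_rate = 1.0             # kPa/s relief when the vent is open
    setpoint = 150.0            # vent opens above this pressure
    readings, t = [], 0.0
    while t < t_end:
        failed = vent_stuck_closed_at is not None and t >= vent_stuck_closed_at
        vent_open = p > setpoint and not failed
        p += (boil_rate - (vent_rate if vent_open else 0.0)) * dt
        T = 90.0 + 0.01 * (p - 120.0)   # crude saturation coupling
        readings.append((t, p, T))
        t += dt
    return readings

nominal = simulate_tank()
faulty = simulate_tank(vent_stuck_closed_at=200.0)
print(nominal[-1], faulty[-1])   # faulty run ends at a much higher pressure
```

A diagnosis system under test would be given only the `readings` stream and asked to infer that the vent valve failed near t = 200 s.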
Cascade defense via routing in complex networks
NASA Astrophysics Data System (ADS)
Xu, Xiao-Lan; Du, Wen-Bo; Hong, Chen
2015-05-01
As cascading failures in networked traffic systems become more and more serious, research on cascade defense in complex networks has become a hotspot in recent years. In this paper, we propose a traffic-based cascading failure model in which each packet in the network has its own source and destination. When a cascade is triggered, packets are redistributed according to a given routing strategy. Here, a global hybrid (GH) routing strategy, which uses the dynamic information of the queue length and the static information of nodes' degree, is proposed to defend the network against cascades. Comparing the GH strategy with the shortest path (SP), efficient routing (ER), and global dynamic (GD) routing strategies, we found that the GH strategy is more effective than the other routing strategies in improving network robustness against cascading failures. Our work provides insight into the robustness of networked traffic systems.
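One plausible reading of such a hybrid strategy is a shortest-path search over a node cost that blends the dynamic queue length with the static degree; the weighting h and exponent beta below are illustrative assumptions, not necessarily the paper's exact functional form:

```python
import heapq

def gh_path(adj, degree, queue_len, src, dst, h=0.5, beta=1.0):
    """Dijkstra search under a hybrid node cost mixing static degree
    (as in efficient routing) and dynamic queue length (as in global
    dynamic routing)."""
    cost, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        c, u = heapq.heappop(heap)
        if u == dst:
            break
        if c > cost.get(u, float("inf")):
            continue
        for v in adj[u]:
            # node cost blends congestion (queue) with topology (degree)
            w = h * queue_len[v] + (1 - h) * degree[v] ** beta
            nc = c + w
            if nc < cost.get(v, float("inf")):
                cost[v], prev[v] = nc, u
                heapq.heappush(heap, (nc, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
degree = {u: len(vs) for u, vs in adj.items()}
queue_len = {0: 0, 1: 5, 2: 0, 3: 1}
print(gh_path(adj, degree, queue_len, 0, 3))   # routes around congested node 1
```

Setting h = 0 recovers a purely static (ER-like) cost and h = 1 a purely dynamic (GD-like) one, which is the trade-off the comparison in the abstract explores.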
Ellinas, Christos; Allan, Neil; Durugbo, Christopher; Johansson, Anders
2015-01-01
Current societal requirements necessitate the effective delivery of complex projects that can do more while using less. Yet recent large-scale project failures suggest that our ability to deliver them successfully is still in its infancy. Such failures can arise through various failure mechanisms; this work focuses on one such mechanism. Specifically, it examines the likelihood of a project sustaining a large-scale catastrophe, triggered by a single task failure and delivered via a cascading process. To do so, an analytical model was developed and tested on an empirical dataset by means of numerical simulation. This paper makes three main contributions. First, it provides a methodology to identify the tasks most capable of impacting a project. In doing so, it is noted that a significant number of tasks induce no cascades, while a handful are capable of triggering surprisingly large ones. Secondly, it illustrates that crude task characteristics cannot aid in identifying them, highlighting the complexity of the underlying process and the utility of this approach. Thirdly, it draws parallels with systems encountered within the natural sciences by noting the emergence of self-organised criticality, commonly found within natural systems. These findings strengthen the need to account for structural intricacies of a project's underlying task precedence structure, as they can provide the conditions upon which large-scale catastrophes materialise.
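A toy illustration of the cascade mechanism on a task precedence network; the damping and threshold parameters are hypothetical stand-ins, not the authors' analytical model:

```python
def cascade_impact(succ, dur, failed_task, damping=0.5, threshold=0.1):
    """Propagate the impact of a single task failure through a task
    precedence DAG. Each task passes a damped share of its impact to its
    successors; propagation stops below a threshold. Returns the
    duration-weighted total impacted work."""
    impact = {failed_task: 1.0}
    frontier = [failed_task]
    while frontier:
        t = frontier.pop()
        share = damping * impact[t]
        for s in succ.get(t, []):
            if share >= threshold:
                impact[s] = impact.get(s, 0.0) + share
                frontier.append(s)
    return sum(dur[t] * i for t, i in impact.items())

succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}   # hypothetical precedence
dur = {"A": 5.0, "B": 3.0, "C": 2.0, "D": 8.0}     # task durations
print(cascade_impact(succ, dur, "A"))
```

Ranking every task by this impact score is the spirit of the paper's first contribution: most tasks trigger nothing, while a few sit upstream of large cascades.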
Why system safety programs can fail
NASA Technical Reports Server (NTRS)
Hammer, W.
1971-01-01
Factors that cause system safety programs to fail are discussed from the viewpoint that, in general, these programs have not achieved their intended aims. The item considered to contribute most to the failure of a system safety program is a poor statement of work, marked by ambiguity, lack of clear definition, use of obsolete requirements, and pure typographical errors. It is pointed out that unless safety requirements are stated clearly, so that they are readily apparent as firm requirements, some of them will be overlooked by designers and contractors. The lack of clarity is stated as being a major contributing factor in system safety program failure and is usually evidenced in: (1) lack of clear requirements by the procuring activity, (2) lack of clear understanding of system safety by other managers, and (3) lack of clear methodology to be employed by system safety engineers.
Methods for Decontamination of a Bipropellant Propulsion System
NASA Technical Reports Server (NTRS)
McClure, Mark B.; Greene, Benjamin
2012-01-01
Most propulsion systems are designed to be filled and flown; draining can be done, but decontamination may be difficult. Transport of these systems may be difficult as well, because flight-weight vessels are not designed around DOT or UN shipping requirements. Repairs, failure analysis work, or post-firing inspections may be difficult or impossible to perform due to the hazards of residual propellants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rhee, Seung; Spencer, Cherrill; /Stanford U. /SLAC
2009-01-23
Failure occurs when one or more of the intended functions of a product are no longer fulfilled to the customer's satisfaction. The most critical product failures are those that escape design reviews and in-house quality inspection and are found by the customer. The product may work for a while until its performance degrades to an unacceptable level, or it may have not worked even before the customer took possession of the product. The end results of failures which may lead to unsafe conditions or major losses of the main function are rated high in severity. Failure Modes and Effects Analysis (FMEA) is a tool widely used in the automotive, aerospace, and electronics industries to identify, prioritize, and eliminate known potential failures, problems, and errors from systems under design, before the product is released (Stamatis, 1997). Several industrial FMEA standards, such as those published by the Society of Automotive Engineers, US Department of Defense, and the Automotive Industry Action Group, employ the Risk Priority Number (RPN) to measure risk and severity of failures. The Risk Priority Number (RPN) is a product of 3 indices: Occurrence (O), Severity (S), and Detection (D). In a traditional FMEA process, design engineers typically analyze the 'root cause' and 'end-effects' of potential failures in a sub-system or component and assign penalty points through the O, S, D values to each failure. The analysis is organized around categories called failure modes, which link the causes and effects of failures. A few actions are taken upon completing the FMEA worksheet. The RPN column generally will identify the high-risk areas. The idea of performing FMEA is to eliminate or reduce known and potential failures before they reach the customers. Thus, a plan of action must be in place for the next task. Not all failures can be resolved during the product development cycle, thus prioritization of actions must be made within the design group. One definition of detection difficulty (D) is how well the organization controls the development process. Another definition relates to the detectability of a particular failure in the product when it is in the hands of the customer. The former asks 'What is the chance of catching the problem before we give it to the customer?' The latter asks 'What is the chance of the customer catching the problem before the problem results in a catastrophic failure?' (Palady, 1995) These differing definitions confuse FMEA users when one tries to determine detection difficulty. Are we trying to measure how easy it is to detect where a failure has occurred, or when it has occurred? Or are we trying to measure how easy or difficult it is to prevent failures? Ordinal scale variables are used to rank-order industries such as hotels, restaurants, and movies (note that a 4-star hotel is not necessarily twice as good as a 2-star hotel). Ordinal values preserve rank in a group of items, but the distance between the values cannot be measured since a distance function does not exist. Thus, the product or sum of ordinal variables loses its rank since each parameter has different scales. The RPN is a product of 3 independent ordinal variables; it can indicate that some failure types are 'worse' than others, but gives no quantitative indication of their relative effects. To resolve the ambiguity of measuring detection difficulty and the irrational logic of multiplying 3 ordinal indices, a new methodology, Life Cost-Based FMEA, was created to overcome these shortcomings.
Life Cost-Based FMEA measures failure/risk in terms of monetary cost. Cost is a universal parameter that can be easily related to severity by engineers and others. Thus, failure cost can be estimated in its simplest form as Expected Failure Cost = Σ_{i=1}^{n} p_i c_i, where p_i is the probability of a particular failure occurring, c_i is the monetary cost associated with that failure, and n is the total number of failure scenarios. FMEA is most effective when there are inputs into it from all concerned disciplines of the product development team. However, FMEA is a long process and can become tedious and ineffective if too many people participate. An ideal team should have 3 to 4 people from design, manufacturing, and service departments if possible. Depending on how complex the system is, the entire process can take anywhere from one to four weeks working full time. Thus, it is important to agree on the time commitment before starting the analysis; otherwise, anxious managers might stop the procedure before it is completed.
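In code, the cost-based ranking reduces to an expectation over failure scenarios; the scenario list below is invented purely for illustration:

```python
def expected_failure_cost(scenarios):
    """Life Cost-Based FMEA ranks failures by expected monetary cost
    instead of the ordinal RPN product. Each scenario carries a
    probability of occurrence and a cost (detection + fix + delay)."""
    return sum(p * c for p, c in scenarios)

# hypothetical scenarios: (probability per unit built, cost in dollars)
scenarios = [(0.02, 1500.0),     # e.g. miswired connector caught at integration
             (0.001, 250000.0)]  # e.g. component fails in service
print(expected_failure_cost(scenarios))   # -> 280.0
```

Because the result is in dollars rather than an ordinal score, scenarios can be summed, compared, and traded off against the cost of corrective actions, which is exactly what the RPN's ordinal arithmetic cannot support.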
Web-Enhanced General Chemistry Increases Student Completion Rates, Success, and Satisfaction
ERIC Educational Resources Information Center
Amaral, Katie E.; Shank, John D.; Shibley, Ivan A., Jr.; Shibley, Lisa R.
2013-01-01
General Chemistry I historically had one of the highest failure and withdrawal rates at Penn State Berks, a four-year college within the Penn State system. The course was completely redesigned to incorporate more group work, the use of classroom response systems, peer mentors, and a stronger online presence via the learning management system…
Working Group on Ice Forces on Structures. A State-of-the-Art Report.
1980-06-01
…observed in Soviet Design Codes, but the randomness of ice properties is not directly observed anywhere. 3.3 Contact system: The mode of ice failure against… [Table-of-contents fragments: 2.133 Factors limiting ice ride-up; 2.134 Procedures for designing…; 3.3 Contact system; 3.4 Damping]
From Accountability to Prevention: Early Warning Systems Put Data to Work for Struggling Students
ERIC Educational Resources Information Center
O'Cummings, Mindee; Therriault, Susan Bowles
2015-01-01
Educators at all levels care deeply about helping students succeed academically, graduate on time, and emerge from school well prepared for college and careers. Today's emphasis on holding schools accountable for failure--while necessary--is at best a means to an end. Findings from local and statewide accountability systems can help state…
2017-10-01
…activin A Quantikine ELISA (R&D Systems) following the manufacturer's instructions. All samples were run in duplicate after a 1:4 dilution in PBS. Ang-1…
Increased cardiac work provides a link between systemic hypertension and heart failure.
Wilson, Alexander J; Wang, Vicky Y; Sands, Gregory B; Young, Alistair A; Nash, Martyn P; LeGrice, Ian J
2017-01-01
The spontaneously hypertensive rat (SHR) is an established model of human hypertensive heart disease transitioning into heart failure. The study of the progression to heart failure in these animals has been limited by the lack of longitudinal data. We used MRI to quantify left ventricular mass, volume, and cardiac work in SHRs at ages 3 to 21 months and compared these indices to data from Wistar-Kyoto (WKY) controls. SHRs had a lower ejection fraction than WKY at all ages, but there was no difference in cardiac output at any age. At 21 months the SHR had significantly elevated stroke work (51 ± 3 mL.mmHg SHR vs. 24 ± 2 mL.mmHg WKY; n = 8, 4; P < 0.001) and cardiac minute work (14.2 ± 1.2 L.mmHg/min SHR vs. 6.2 ± 0.8 L.mmHg/min WKY; n = 8, 4; P < 0.001) compared to control, in addition to a significantly larger left ventricular mass to body mass ratio (3.61 ± 0.15 mg/g SHR vs. 2.11 ± 0.008 mg/g WKY; n = 8, 6; P < 0.001). SHRs showed impaired systolic function but developed hypertrophy to compensate and successfully maintained cardiac output. However, this was associated with an increase in cardiac work at 21 months, an age at which fibrosis and cell death have previously been demonstrated. The interplay between these factors may be the mechanism for progression to failure in this animal model. © 2017 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of The Physiological Society and the American Physiological Society.
Conceptual modeling of coincident failures in multiversion software
NASA Technical Reports Server (NTRS)
Littlewood, Bev; Miller, Douglas R.
1989-01-01
Recent work by Eckhardt and Lee (1985) shows that independently developed program versions fail dependently (specifically, the probability of simultaneous failure of several versions is greater than it would be under true independence). The present authors show there is a precise duality between input choice and program choice in this model and consider a generalization in which different versions can be developed using diverse methodologies. The use of diverse methodologies is shown to decrease the probability of the simultaneous failure of several versions. Indeed, it is theoretically possible to obtain versions which exhibit better-than-independent failure behavior. The authors try to formalize the notion of methodological diversity by considering the sequence of decision outcomes that constitute a methodology. They show that diversity of decision implies likely diversity of behavior for the different versions developed under such forced diversity. For certain one-out-of-n systems the authors obtain an optimal method for allocating diversity between versions. For two-out-of-three systems there seem to be no simple optimality results that do not depend on constraints which cannot be verified in practice.
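The Eckhardt-Lee effect is easy to reproduce numerically: versions fail independently given an input, but a difficulty function that varies across the input space makes their failures dependent on average. A minimal Monte Carlo sketch (the step difficulty function is an invented example):

```python
import random

rng = random.Random(42)

def theta(x):
    """Difficulty theta(x): probability that an independently developed
    version fails on input x. A few 'hard' inputs dominate (illustrative)."""
    return 0.001 if x < 0.9 else 0.05

trials, both, single = 200_000, 0, 0
for _ in range(trials):
    x = rng.random()
    f1 = rng.random() < theta(x)   # version 1, developed independently
    f2 = rng.random() < theta(x)   # version 2, developed independently
    both += f1 and f2
    single += f1

p1 = single / trials
print(p1**2, both / trials)   # P(both fail) clearly exceeds P(fail)^2
```

Analytically, P(both fail) = E[theta^2] >= (E[theta])^2 = P(fail)^2, with equality only when difficulty is constant; forcing methodological diversity amounts to giving the two versions different difficulty functions, which can push the joint failure probability below the independent bound.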
Simulation-driven machine learning: Bearing fault classification
NASA Astrophysics Data System (ADS)
Sobie, Cameron; Freitas, Carina; Nicolai, Mike
2018-01-01
Increasing the accuracy of mechanical fault detection has the potential to improve system safety and economic performance by minimizing scheduled maintenance and the probability of unexpected system failure. Advances in computational performance have enabled the application of machine learning algorithms across numerous applications including condition monitoring and failure detection. Past applications of machine learning to physical failure have relied explicitly on historical data, which limits the feasibility of this approach to in-service components with extended service histories. Furthermore, recorded failure data is often only valid for the specific circumstances and components for which it was collected. This work directly addresses these challenges for roller bearings with race faults by generating training data using information gained from high resolution simulations of roller bearing dynamics, which is used to train machine learning algorithms that are then validated against four experimental datasets. Several different machine learning methodologies are compared, starting from well-established statistical feature-based methods to convolutional neural networks, and a novel application of dynamic time warping (DTW) to bearing fault classification is proposed as a robust, parameter-free method for race fault detection.
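A minimal sketch of the DTW-based classification idea: the classic dynamic-programming distance plus a nearest-template decision. This is not the paper's pipeline (which trains on simulated bearing dynamics); the templates here are simple stand-ins:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance between
    two 1-D signals; parameter-free, tolerant of time-axis stretching."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = abs(a[i - 1] - b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(signal, templates):
    """1-nearest-neighbour under DTW against labelled fault templates."""
    return min(templates, key=lambda lbl: dtw_distance(signal, templates[lbl]))

templates = {"healthy": np.sin(np.linspace(0, 6.28, 60)),
             "race fault": np.sin(np.linspace(0, 6.28, 60)) ** 3}
probe = np.sin(np.linspace(0.1, 6.4, 55)) ** 3    # warped faulty signal
print(classify(probe, templates))                  # -> "race fault"
```

The appeal noted in the abstract is that, unlike feature-based methods or neural networks, this pipeline has essentially nothing to tune.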
A Review of Transmission Diagnostics Research at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Zakrajsek, James J.
1994-01-01
This paper presents a summary of the transmission diagnostics research work conducted at NASA Lewis Research Center over the last four years. In 1990, the Transmission Health and Usage Monitoring Research Team at NASA Lewis conducted a survey to determine the critical needs of the diagnostics community. Survey results indicated that experimental verification of gear and bearing fault detection methods, improved fault detection in planetary systems, and damage magnitude assessment and prognostics research were all critical to a highly reliable health and usage monitoring system. In response to this, a variety of transmission fault detection methods were applied to experimentally obtained fatigue data. Failure modes of the fatigue data include a variety of gear pitting failures, tooth wear, tooth fracture, and bearing spalling failures. Overall results indicate that, of the gear fault detection techniques, no one method can successfully detect all possible failure modes. The more successful methods need to be integrated into a single more reliable detection technique. A recently developed method, NA4, in addition to being one of the more successful gear fault detection methods, was also found to exhibit damage magnitude estimation capabilities.
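NA4, as commonly defined in the gear-diagnostics literature, is the normalized fourth moment of the current residual signal, with the variance averaged over the run history so that a growing fault keeps raising the metric. A sketch with invented signal data:

```python
import numpy as np

def na4(residual_records):
    """NA4 gear-fault metric (common literature definition, not
    necessarily NASA Lewis's exact implementation): fourth moment of the
    latest residual record, normalized by the square of the variance
    averaged over all records seen so far in the run."""
    r = residual_records[-1]
    fourth = np.mean((r - r.mean()) ** 4)
    run_var = np.mean([np.var(x) for x in residual_records])
    return fourth / run_var ** 2

rng = np.random.default_rng(0)
healthy = [rng.normal(size=2048) for _ in range(5)]
damaged = healthy + [np.concatenate([rng.normal(size=2000),
                                     rng.normal(scale=6.0, size=48)])]
print(na4(healthy), na4(damaged))   # metric jumps for the damaged record
```

For a Gaussian residual the metric sits near 3; localized damage adds heavy tails to the latest record while the run-averaged variance lags behind, so NA4 rises, which is the damage-trending property the survey highlights.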
Automated Generation of Fault Management Artifacts from a Simple System Model
NASA Technical Reports Server (NTRS)
Kennedy, Andrew K.; Day, John C.
2013-01-01
Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA) through querying a representation of the system in a SysML model. This work builds off the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and it was restructured in an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior, and depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
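A toy stand-in for the traversal step: walk a small component graph, collect the transitively affected items, and emit one FMEA row per failure mode. The component structure and mode names below are hypothetical, not from the SMAP model or the JPL tooling:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    failure_modes: list                          # e.g. ["stuck", "drift"]
    feeds: list = field(default_factory=list)    # downstream components

def build_fmea(components):
    """Emit one FMEA row per failure mode: the failing item plus every
    component reachable downstream of it (its potential end effects)."""
    rows = []
    for c in components:
        affected, stack = [], list(c.feeds)
        while stack:
            d = stack.pop()
            if d.name not in affected:
                affected.append(d.name)
                stack.extend(d.feeds)
        for mode in c.failure_modes:
            rows.append({"item": c.name, "failure mode": mode,
                         "end effects on": affected or ["none"]})
    return rows

sensor = Component("soil sensor", ["no output", "bias"])
ctrl = Component("controller", ["watchdog reset"])
sensor.feeds.append(ctrl)
for row in build_fmea([sensor, ctrl]):
    print(row)
```

The paper's contribution is doing this against a full SysML representation (control-system layers, design authority) rather than a flat graph, but the query-and-tabulate pattern is the same.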
Progressive Damage Analysis of Bonded Composite Joints
NASA Technical Reports Server (NTRS)
Leone, Frank A., Jr.; Girolamo, Donato; Davila, Carlos G.
2012-01-01
The present work is related to the development and application of progressive damage modeling techniques to bonded joint technology. The joint designs studied in this work include a conventional composite splice joint and a NASA-patented durable redundant joint. Both designs involve honeycomb sandwich structures with carbon/epoxy facesheets joined using adhesively bonded doublers. Progressive damage modeling allows for the prediction of the initiation and evolution of damage within a structure. For structures that include multiple material systems, such as the joint designs under consideration, the number of potential failure mechanisms that must be accounted for drastically increases the complexity of the analyses. Potential failure mechanisms include fiber fracture, intraply matrix cracking, delamination, core crushing, adhesive failure, and their interactions. The bonded joints were modeled using highly parametric, explicitly solved finite element models, with damage modeling implemented via custom user-written subroutines. Each ply was discretely meshed using three-dimensional solid elements. Layers of cohesive elements were included between each ply to account for the possibility of delaminations and were used to model the adhesive layers forming the joint. Good correlation with experimental results was achieved both in terms of load-displacement history and the predicted failure mechanism(s).
Progressive Damage Modeling of Durable Bonded Joint Technology
NASA Technical Reports Server (NTRS)
Leone, Frank A.; Davila, Carlos G.; Lin, Shih-Yung; Smeltzer, Stan; Girolamo, Donato; Ghose, Sayata; Guzman, Juan C.; McCarville, Douglas A.
2013-01-01
The development of durable bonded joint technology for assembling composite structures for launch vehicles is being pursued for the U.S. Space Launch System. The present work is related to the development and application of progressive damage modeling techniques to bonded joint technology applicable to a wide range of sandwich structures for a Heavy Lift Launch Vehicle. The joint designs studied in this work include a conventional composite splice joint and a NASA-patented Durable Redundant Joint. Both designs involve a honeycomb sandwich with carbon/epoxy facesheets joined with adhesively bonded doublers. Progressive damage modeling allows for the prediction of the initiation and evolution of damage. For structures that include multiple materials, the number of potential failure mechanisms that must be considered increases the complexity of the analyses. Potential failure mechanisms include fiber fracture, matrix cracking, delamination, core crushing, adhesive failure, and their interactions. The joints were modeled using Abaqus parametric finite element models, in which damage was modeled with user-written subroutines. Each ply was meshed discretely, and layers of cohesive elements were used to account for delaminations and to model the adhesive layers. Good correlation with experimental results was achieved both in terms of load-displacement history and predicted failure mechanisms.
Health IT success and failure: recommendations from literature and an AMIA workshop.
Kaplan, Bonnie; Harris-Salamone, Kimberly D
2009-01-01
With the United States joining other countries in national efforts to reap the many benefits that use of health information technology can bring for health care quality and savings, sobering reports recall the complexity and difficulties of implementing even smaller-scale systems. Despite best practice research that identified success factors for health information technology projects, a majority, in some sense, still fail. Similar problems plague a variety of different kinds of applications, and have done so for many years. Ten AMIA working groups sponsored a workshop at the AMIA Fall 2006 Symposium. It was entitled "Avoiding The F-Word: IT Project Morbidity, Mortality, and Immortality" and focused on this under-addressed problem. Participants discussed communication, workflow, and quality; the complexity of information technology undertakings; the need to integrate all aspects of projects, work environments, and regulatory and policy requirements; and the difficulty of getting all the parts and participants in harmony. While recognizing that there still are technical issues related to functionality and interoperability, discussion affirmed the emerging consensus that problems are due to sociological, cultural, and financial issues, and hence are more managerial than technical. Participants drew on lessons from experience and research in identifying important issues, action items, and recommendations to address the following: what "success" and "failure" mean, what contributes to making successful or unsuccessful systems, how to use failure as an enhanced learning opportunity for continued improvement, how system successes or failures should be studied, and what AMIA should do to enhance opportunities for successes. The workshop laid out a research agenda and recommended action items, reflecting the conviction that AMIA members and AMIA as an organization can take a leadership role to make projects more practical and likely to succeed in health care settings.
NASA Astrophysics Data System (ADS)
Vallianatos, Filippos; Chatzopoulos, George
2014-05-01
Strong observational indications support the hypothesis that many large earthquakes are preceded by accelerating seismic release rates, which are described by a power-law time-to-failure relation. In the present work, a unified theoretical framework is discussed based on the ideas of non-extensive statistical physics along with fundamental principles of physics such as energy conservation in a faulted crustal volume undergoing stress loading. We derive the time-to-failure power law of: (a) the cumulative number of earthquakes, (b) the cumulative Benioff strain, and (c) the cumulative energy released in a fault system that obeys a hierarchical distribution law extracted from Tsallis entropy. Considering the analytic conditions near the time of failure, we derive from first principles the time-to-failure power law and show that a common critical exponent m(q) exists, which is a function of the non-extensive entropic parameter q. We conclude that the cumulative precursory parameters are functions of the energy supplied to the system and the size of the precursory volume. In addition, the q-exponential distribution which describes the fault system is a crucial factor in the appearance of power-law acceleration in the seismicity. Our results, based on Tsallis entropy and energy conservation, give a new view on the empirical laws derived by other researchers. Examples and applications of this technique to observations of accelerating seismicity are also presented and discussed. This work was implemented through the project IMPACT-ARC in the framework of the action "ARCHIMEDES III-Support of Research Teams at TEI of Crete" (MIS380353) of the Operational Program "Education and Lifelong Learning" and is co-financed by the European Union (European Social Fund) and Greek national funds.
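For reference, the accelerating-release laws the abstract refers to take the standard time-to-failure form (the Bufe-Varnes relation); A and B are empirical constants, and the q-dependence of the exponent is this paper's contribution:

```latex
% Omega(t): cumulative event count, Benioff strain, or released energy,
% accelerating toward the failure time t_f with a common exponent m(q)
\Omega(t) = A + B \, (t_f - t)^{m(q)}, \qquad 0 < m(q) < 1
```

With 0 < m(q) < 1 the release rate diverges as t approaches t_f, which is the observational signature of accelerating precursory seismicity.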
Thermal barrier coating life prediction model development
NASA Technical Reports Server (NTRS)
Hillery, R. V.; Pilsner, B. H.; Mcknight, R. L.; Cook, T. S.; Hartle, M. S.
1988-01-01
This report describes work performed to determine the predominant modes of degradation of a plasma-sprayed thermal barrier coating system and to develop and verify life prediction models accounting for these degradation modes. The primary TBC system consisted of a low-pressure plasma-sprayed NiCrAlY bond coat, an air plasma-sprayed ZrO2-Y2O3 top coat, and a Rene' 80 substrate. The work was divided into three technical tasks. The primary failure mode to be addressed was loss of the zirconia layer through spalling. Experiments showed that oxidation of the bond coat is a significant contributor to coating failure. It was evident from the test results that the species of oxide scale initially formed on the bond coat plays a role in coating degradation and failure. It was also shown that elevated-temperature creep of the bond coat plays a role in coating failure. An empirical model was developed for predicting the test life of specimens with selected coating, specimen, and test condition variations. In the second task, a coating life prediction model was developed based on the data from Task 1 experiments, results from thermomechanical experiments performed as part of Task 2, and finite element analyses of the TBC system during thermal cycles. The third and final task attempted to verify the validity of the model developed in Task 2. This was done by using the model to predict the test lives of several coating variations and specimen geometries, then comparing these predicted lives to experimentally determined test lives. It was found that the model correctly predicts trends, but that additional refinement is needed to accurately predict coating life.
NASA Astrophysics Data System (ADS)
Mertens, James Charles Edwin
For decades, microelectronics manufacturing has been concerned with failures related to electromigration phenomena in conductors experiencing high current densities. The influence of interconnect microstructure on device failures related to electromigration in BGA and flip-chip solder interconnects has become a significant interest as individual solder interconnect volumes shrink. A survey indicates that x-ray computed micro-tomography (μXCT) is an emerging, novel means for characterizing the microstructure's role in governing electromigration failures. This work details the design and construction of a lab-scale μXCT system to characterize electromigration in the Sn-0.7Cu lead-free solder system by leveraging in situ imaging. In order to enhance the attenuation contrast observed in multi-phase material systems, a modeling approach has been developed to predict settings for the controllable imaging parameters which yield relatively high detection rates over the range of x-ray energies for which maximum attenuation contrast is expected in the polychromatic x-ray imaging system. To develop this predictive tool, a model has been constructed for the Bremsstrahlung spectrum of an x-ray tube, calculations have been made for the detector's efficiency over the relevant range of x-ray energies, and the product of the emitted and detected spectra has been used to calculate the effective x-ray imaging spectrum. An approach has also been established for filtering 'zinger' noise in x-ray radiographs, which has proven problematic at the high x-ray energies used for solder imaging. The performance of this filter has been compared with a known existing method, and the results indicate a significant increase in the accuracy of zinger-filtered radiographs. The results obtained demonstrate a powerful means for the study of failure-causing processes in solder systems used as interconnects in microelectronic packaging devices, including the volumetric quantification of parameters which are indicative of both the electromigration tolerance of solders and the dominant mechanisms for atomic migration in response to current stressing. This work aims to further the community's understanding of failure-causing electromigration processes in industrially relevant material systems for microelectronic interconnect applications and to advance the capability of available characterization techniques for their interrogation.
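For context, a generic median-test zinger filter looks like the following; this is a common approach in the tomography literature, not necessarily the thesis's specific algorithm, and the threshold value is illustrative:

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_zingers(radiograph, threshold=1.5, size=3):
    """Replace 'zinger' pixels (isolated, anomalously bright counts from
    stray high-energy photons hitting the detector) with the local
    median. A pixel is flagged when it exceeds the median-filtered image
    by the given multiplicative threshold."""
    med = median_filter(radiograph, size=size)
    zingers = radiograph > threshold * med
    cleaned = radiograph.copy()
    cleaned[zingers] = med[zingers]
    return cleaned
```

Because zingers are spatially isolated single-frame events, a local-median test removes them without blurring the genuine attenuation features that the tomographic reconstruction depends on.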
Failure to Launch: Structural Shift and the New Lost Generation
ERIC Educational Resources Information Center
Carnevale, Anthony P.; Hanson, Andrew R.; Gulish, Artem
2013-01-01
The lockstep march from school to work and then on to retirement no longer applies for a growing share of Americans. Many young adults are launching their careers later, while older adults are working longer. As a result, the education and labor market institutions that were the foundation of a 20th century system are out of sync with the 21st…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Sonjoy; Goswami, Kundan; Datta, Biswa N.
2014-12-10
Failure of structural systems under dynamic loading can be prevented via active vibration control, which shifts the damped natural frequencies of the systems away from the dominant range of the loading spectrum. The damped natural frequencies and the dynamic load typically show significant variations in practice. A computationally efficient methodology based on the quadratic partial eigenvalue assignment technique and optimization under uncertainty has been formulated in the present work that rigorously accounts for these variations and results in an economic and resilient design of structures. A novel scheme based on hierarchical clustering and importance sampling is also developed in this work for accurate and efficient estimation of the probability of failure, to guarantee the desired resilience level of the designed system. Numerical examples are presented to illustrate the proposed methodology.
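A minimal sketch of importance sampling for a failure probability, with the paper's clustering-driven choice of sampling center replaced by a user-supplied mean shift; the limit state is an invented example:

```python
import numpy as np

def pf_importance_sampling(limit_state, mu_shift, n=20_000, seed=0):
    """Estimate P(g(X) < 0) for standard-normal X by sampling from a
    normal centered near the failure region and reweighting each sample
    by the likelihood ratio phi(x) / phi(x - mu)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, len(mu_shift))) + mu_shift
    # for X ~ N(mu, I): w(x) = exp(-x.mu + |mu|^2 / 2)
    w = np.exp(-x @ mu_shift + 0.5 * mu_shift @ mu_shift)
    fails = np.array([limit_state(xi) < 0 for xi in x])
    return (w * fails).mean()

# toy limit state: failure when x1 + x2 exceeds 6
pf = pf_importance_sampling(lambda x: 6.0 - x.sum(),
                            mu_shift=np.array([3.0, 3.0]))
print(pf)   # close to 1 - Phi(6 / sqrt(2)) ~ 1.1e-5
```

Plain Monte Carlo would need on the order of 10^7 samples to see this event at all; shifting the sampling density toward the failure region and reweighting recovers the same estimate from a few thousand, which is the efficiency argument behind the paper's scheme.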
Initial design and evaluation of automatic restructurable flight control system concepts
NASA Technical Reports Server (NTRS)
Weiss, J. L.; Looze, D. P.; Eterno, J. S.; Grunberg, D. B.
1986-01-01
Results of efforts to develop automatic control design procedures for restructurable aircraft control systems are presented. The restructurable aircraft control problem involves designing a fault-tolerant control system which can accommodate a wide variety of unanticipated aircraft failures. Under NASA sponsorship, many of the technologies which make such a system possible were developed and tested. Future work will focus on developing a methodology for integrating these technologies and on demonstration of a complete system.
Protoflight photovoltaic power module system-level tests in the space power facility
NASA Technical Reports Server (NTRS)
Rivera, Juan C.; Kirch, Luke A.
1989-01-01
Work Package Four, which includes NASA Lewis and Rocketdyne, has selected an approach for Space Station Freedom Photovoltaic (PV) Power Module flight certification that combines system-level qualification and acceptance testing in the thermal vacuum environment: the protoflight vehicle approach. This approach maximizes ground test verification to assure system-level performance and to minimize the risk of on-orbit failures. The preliminary plans for system-level thermal vacuum environmental testing of the protoflight PV Power Module in the NASA Lewis Space Power Facility (SPF) are addressed. Details of the facility modifications to refurbish the SPF, after 13 years of downtime, are briefly discussed. The results of an evaluation of the effectiveness of system-level environmental testing in screening out incipient part and workmanship defects and unique failure modes are discussed. Preliminary test objectives, test hardware configurations, test support equipment, and operations are presented.
Value measurement in health care: a new perspective.
Michelman, J E; Rausch, P E; Barton, T L
1999-08-01
Vital to the success of any healthcare organization is the ability to obtain useful information and feedback about its performance. In particular, healthcare organizations need to begin to understand how non-value-adding work activities detract from their bottom lines. Additionally, financial managers and information systems need to provide data and reports throughout the continuum of care. Overall, healthcare organizations must align the management information and control systems with the planning and decision-making processes. The horizontal information system is a tool to manage three common problems facing today's healthcare managers: (1) the use of existing information to focus on control rather than improve business, (2) failure to focus on satisfying the customer, and (3) failure to combine their efforts with those of the employees by developing trust and a common focus.
Effect of Component Failures on Economics of Distributed Photovoltaic Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lubin, Barry T.
2012-02-02
This report describes an applied research program to assess the realistic costs of grid-connected photovoltaic (PV) installations. A Board of Advisors was assembled that included management from the regional electric power utilities, as well as other participants from companies that work in the electric power industry. Although the program started with the intention of addressing effective load carrying capacity (ELCC) for utility-owned photovoltaic installations, results from the literature study and recommendations from the Board of Advisors led investigators to the conclusion that obtaining effective data for this analysis would be difficult, if not impossible. The effort was then re-focused on assessing the realistic costs and economic valuations of grid-connected PV installations. The 17 kW PV installation on the University of Hartford's Lincoln Theater was used as one source of actual data. The change in objective required a more technically oriented group. The re-organized working group made site visits to medium-sized PV installations in Connecticut with the objective of developing sources of operating histories. An extensive literature review helped to focus efforts in several technical and economic subjects. The objective of determining the consequences of component failures on both generation and economic returns required three analyses. The first was a Monte-Carlo-based simulation model for failure occurrences and the resulting downtime. Published failure data, though limited, was used to verify the results. A second model was developed to predict the reduction in or loss of electrical generation related to the downtime due to these failures. Finally, a comprehensive economic analysis, including these failures, was developed to determine realistic net present values of installed PV arrays. Two types of societal benefits were explored, with quantitative valuations developed for both. Some societal benefits associated with financial benefits to the utility of having a distributed generation capacity that is not fossil-fuel based have been included in the economic models. Also included and quantified in the models are several benefits to society more generally: job creation and some estimates of benefits from avoiding greenhouse emissions. PV system failures result in a lowering of the economic values of a grid-connected system, but this turned out to be a surprisingly small effect on the overall economics. The most significant benefit noted resulted from including the societal benefits accrued to the utility. This provided a marked increase in the valuations of the array and made the overall value proposition a financially attractive one, in that net present values exceeded installation costs. These results indicate that the Department of Energy and state regulatory bodies should consider focusing on societal benefits that create economic value for the utility, confirm these quantitative values, and work to have them accepted by the utilities and reflected in the rate structures for power obtained from grid-connected arrays. Understanding and applying the economic benefits evident in this work can significantly improve the business case for grid-connected PV installations. This work also indicates that the societal benefits to the population are real and defensible, but not nearly as easy to justify in a business case as are the benefits that accrue directly to the utility.
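A Monte Carlo sketch of the failure-to-lost-generation step the report describes: draw exponential times to failure, accrue repair downtime, and convert downtime to lost energy. The MTBF, MTTR, and annual-yield numbers are invented, not the report's data:

```python
import random

def pv_energy_loss(years=25, mtbf_y=8.0, mttr_d=14.0, annual_kwh=20_000,
                   rng=None):
    """Simulate one array lifetime: exponential times between component
    failures (mtbf_y, years) and exponential repair times (mttr_d, days).
    Returns the kWh of generation lost to downtime."""
    rng = rng or random.Random(0)
    t, downtime_d = 0.0, 0.0
    while True:
        t += rng.expovariate(1.0 / mtbf_y)           # years to next failure
        if t > years:
            break
        downtime_d += rng.expovariate(1.0 / mttr_d)  # repair time, days
    return annual_kwh * downtime_d / 365.25

losses = [pv_energy_loss(rng=random.Random(s)) for s in range(1000)]
print(sum(losses) / len(losses), "kWh lost on average over 25 years")
```

Feeding the resulting lost-kWh distribution into a net-present-value calculation is the chain of models described above, and with plausible rates the loss is a small fraction of lifetime output, consistent with the report's "surprisingly small effect" finding.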
Modeling Hydraulic Components for Automated FMEA of a Braking System
2014-12-23
Struss, Peter; Fraracci, Alessandro (Tech. Univ. of Munich, 85748 Garching, Germany)
This paper presents work on model-based automation of failure-modes-and-effects analysis (FMEA) applied to the hydraulic part of a vehicle braking system. We describe the FMEA task and the application problem and outline the foundations for automating the…
Learning from failure in health care: frequent opportunities, pervasive barriers.
Edmondson, A C
2004-12-01
The notion that hospitals and medical practices should learn from failures, both their own and others', has obvious appeal. Yet, healthcare organisations that systematically and effectively learn from the failures that occur in the care delivery process, especially from small mistakes and problems rather than from consequential adverse events, are rare. This article explores pervasive barriers embedded in healthcare's organisational systems that make shared or organisational learning from failure difficult and then recommends strategies for overcoming these barriers to learning from failure, emphasising the critical role of leadership. Firstly, leaders must create a compelling vision that motivates and communicates urgency for change; secondly, leaders must work to create an environment of psychological safety that fosters open reporting, active questioning, and frequent sharing of insights and concerns; and thirdly, case study research on one hospital's organisational learning initiative suggests that leaders can empower and support team learning throughout their organisations as a way of identifying, analysing, and removing hazards that threaten patient safety.
Derivation and experimental verification of clock synchronization theory
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.
1994-01-01
The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Mid-Point Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the clock system's behavior. It is found that a 100% penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as 3 clock ticks. Clock skew grows to 6 clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.
Experimental validation of clock synchronization algorithms
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Graham, R. Lynn
1992-01-01
The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Midpoint Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the behavior of the clock system. It is found that a 100 percent penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as three clock ticks. Clock skew grows to six clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.
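For reference, one resynchronization round of the Interactive Convergence Algorithm studied in both reports above can be sketched as follows; this is a simplified reading (real implementations bound read errors and iterate over rounds):

```python
def interactive_convergence(skews, delta):
    """One round of Interactive Convergence clock synchronization: each
    clock averages all readings, substituting its own value for any clock
    that appears more than `delta` away (suspected faulty). `skews` are
    the clocks' current offsets from ideal time."""
    corrected = []
    for own in skews:
        readings = [s if abs(s - own) <= delta else own for s in skews]
        corrected.append(sum(readings) / len(readings))
    return corrected

clocks = [0.0, 2.0, -1.0, 30.0]        # last clock is faulty/malicious
print(interactive_convergence(clocks, delta=5.0))
```

The egocentric substitution is what limits the damage a malicious clock can do: the three good clocks converge toward each other while the faulty one's reading is discarded by all of them, illustrating why the theories tolerate such failures only at a penalty in achievable skew.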
NASA Technical Reports Server (NTRS)
Castner, Willard L.; Jacobs, Jeremy B.
2006-01-01
In April 2004 a Space Shuttle Orbiter Reaction Control System (RCS) thruster was found to be cracked while undergoing a nozzle (niobium/C103 alloy) retrofit. As a failure resulting from an in-flight RCS thruster burn-through (initiated from a crack) could be catastrophic, an official Space Shuttle Program flight constraint was issued until flight safety could be adequately demonstrated. This paper describes the laboratory test program which was undertaken to reproduce the cracking in order to fully understand and bound the driving environments. The associated rationale developed to justify continued safe flight of the Orbiter RCS system is also described. The laboratory testing successfully reproduced the niobium cracking and established specific bounding conditions necessary to cause cracking in the C103 thruster injectors. Each of the following conditions is necessary in combination with the others: 1) a mechanically disturbed / cold-worked free surface, 2) an externally applied sustained tensile stress near yield strength, 3) presence of fluorine-containing fluids on exposed tensile / cold-worked free surfaces, and 4) sustained exposure to temperatures greater than 400 F. As a result of this work, it was concluded that fluorine-containing materials (e.g., HF acid, Krytox, Brayco, etc.) should be carefully controlled or altogether eliminated during processing of niobium and its alloys.
Confident failures: Lapses of working memory reveal a metacognitive blind spot.
Adam, Kirsten C S; Vogel, Edward K
2017-07-01
Working memory performance fluctuates dramatically from trial to trial. On many trials, performance is no better than chance. Here, we assessed participants' awareness of working memory failures. We used a whole-report visual working memory task to quantify both trial-by-trial performance and trial-by-trial subjective ratings of inattention to the task. In Experiment 1 (N = 41), participants were probed for task-unrelated thoughts immediately following 20% of trials. In Experiment 2 (N = 30), participants gave a rating of their attentional state following 25% of trials. Finally, in Experiments 3a (N = 44) and 3b (N = 34), participants reported confidence of every response using a simple mouse-click judgment. Attention-state ratings and off-task thoughts predicted the number of items correctly identified on each trial, replicating previous findings that subjective measures of attention state predict working memory performance. However, participants correctly identified failures on only around 28% of failure trials. Across experiments, participants' metacognitive judgments reliably predicted variation in working memory performance but consistently and severely underestimated the extent of failures. Further, individual differences in metacognitive accuracy correlated with overall working memory performance, suggesting that metacognitive monitoring may be key to working memory success.
1991-04-04
solution to this immediate problem and, as the technology developed, opened doors to applied tribology for advanced maintenance through Mechanical Systems...Integrity Management. The development of other technologies as well enhanced Spectron's capability, but it was the major advances in electronics and...strain gages will also be studied. The results of this program will provide a basis for future work in the area of advanced sensor technology.
Social Support and Heart Failure: Differing Effects by Race
2015-05-11
responses. These compensatory physiologic responses include increased sympathetic nervous system activity, inflammation, and constriction of blood vessels...physiological differences between African Americans and Caucasians. For instance, the way sodium is processed in the body may vary between...associated cardiovascular and inflammatory diseases (76). One important hormone at work in the cardiovascular system is aldosterone, and it may have a
RELAV - RELIABILITY/AVAILABILITY ANALYSIS PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
RELAV (Reliability/Availability Analysis Program) is a comprehensive analytical tool to determine the reliability or availability of any general system which can be modeled as embedded k-out-of-n groups of items (components) and/or subgroups. Both ground and flight systems at NASA's Jet Propulsion Laboratory have utilized this program. RELAV can assess current system performance during the later testing phases of a system design, as well as model candidate designs/architectures or validate and form predictions during the early phases of a design. Systems are commonly modeled as System Block Diagrams (SBDs). RELAV calculates the success probability of each group of items and/or subgroups within the system assuming k-out-of-n operating rules apply for each group. The program operates on a folding basis; i.e. it works its way towards the system level from the most embedded level by folding related groups into single components. The entire folding process involves probabilities; therefore, availability problems are performed in terms of the probability of success, and reliability problems are performed for specific mission lengths. An enhanced cumulative binomial algorithm is used for groups where all probabilities are equal, while a fast algorithm based upon "Computing k-out-of-n System Reliability", Barlow & Heidtmann, IEEE TRANSACTIONS ON RELIABILITY, October 1984, is used for groups with unequal probabilities. Inputs to the program include a description of the system and any one of the following: 1) availabilities of the items, 2) mean time between failures and mean time to repairs for the items from which availabilities are calculated, 3) mean time between failures and mission length(s) from which reliabilities are calculated, or 4) failure rates and mission length(s) from which reliabilities are calculated. The results are probabilities of success of each group and the system in the given configuration. RELAV assumes exponential failure distributions for reliability calculations and infinite repair resources for availability calculations. No more than 967 items or groups can be modeled by RELAV. If larger problems can be broken into subsystems of 967 items or less, the subsystem results can be used as item inputs to a system problem. The calculated availabilities are steady-state values. Group results are presented in the order in which they were calculated (from the most embedded level out to the system level). This provides a good mechanism to perform trade studies. Starting from the system result and working backwards, the granularity gets finer; therefore, system elements that contribute most to system degradation are detected quickly. RELAV is a C-language program originally developed under the UNIX operating system on a MASSCOMP MC500 computer. It has been modified, as necessary, and ported to an IBM PC compatible with a math coprocessor. The current version of the program runs in the DOS environment and requires a Turbo C vers. 2.0 compiler. RELAV has a memory requirement of 103 KB and was developed in 1989. RELAV is a copyrighted work with all copyright vested in NASA.
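To make the k-out-of-n folding concrete, the following is a minimal Python sketch (not RELAV's C implementation) of the dynamic-programming idea behind the Barlow & Heidtmann algorithm cited above, for a single group with unequal component probabilities; the example group and its values are illustrative only.

```python
# Sketch (not RELAV itself): probability that at least k of n components
# succeed, with unequal success probabilities, in the spirit of the
# Barlow & Heidtmann (1984) algorithm cited above. O(n*k) dynamic program.

def k_out_of_n_reliability(k, probs):
    """probs: per-component success probabilities (availability, or
    reliability for a given mission length)."""
    n = len(probs)
    if k > n:
        return 0.0
    # dist[j] = P(exactly j successes) over the components processed so far
    dist = [1.0] + [0.0] * n
    for p in probs:
        for j in range(n, 0, -1):   # update in reverse so each item counts once
            dist[j] = dist[j] * (1.0 - p) + dist[j - 1] * p
        dist[0] *= (1.0 - p)
    return sum(dist[k:])

# Example: a 2-out-of-3 group, as might appear in a system block diagram.
print(k_out_of_n_reliability(2, [0.95, 0.90, 0.85]))  # ~0.974
```

In RELAV's folding scheme, the group result computed this way would then feed in as a single "component" probability at the next level up.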
Eccher, Claudio; Eccher, Lorenzo; Izzo, Umberto
2005-01-01
In this poster we describe the security solutions implemented in a web-based cooperative work framework for managing heart failure patients among the different health care professionals involved in the care process. The solution, developed in close collaboration with the Law Department of the University of Trento, is compliant with the new Italian Personal Data Protection Code, issued in 2003, which also regulates the storage and processing of health data.
Launch Vehicle Failure Dynamics and Abort Triggering Analysis
NASA Technical Reports Server (NTRS)
Hanson, John M.; Hill, Ashley D.; Beard, Bernard B.
2011-01-01
Launch vehicle ascent is a time of high risk for an on-board crew. There are many types of failures that can kill the crew if the crew is still on-board when the failure becomes catastrophic. For some failure scenarios there is plenty of time for the crew to be warned and to depart, whereas in others there is insufficient time for the crew to escape. There is a large fraction of possible failures for which time is of the essence and a successful abort is possible if detection and action happen quickly enough. This paper focuses on abort determination based primarily on data already available from the GN&C system. This work is the result of failure analysis efforts performed during the Ares I launch vehicle development program. Derivation of attitude and attitude-rate abort triggers, designed so that abort occurs as quickly as possible when needed while false positives are avoided, forms a major portion of the paper. Some of the potential failure modes requiring use of these triggers are described, along with the analysis used to determine the success rate of getting the crew off prior to vehicle demise.
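As an illustration of the kind of trigger logic described above (not the actual Ares I triggers), a persistence-filtered attitude/rate exceedance check might look like the sketch below; all limits and cycle counts are hypothetical.

```python
# Illustrative sketch only: an attitude-error abort trigger with a
# persistence requirement. Thresholds and cycle counts are invented; in
# practice they come from failure/no-failure simulation campaigns.

ATT_ERR_LIMIT_DEG = 15.0    # hypothetical attitude-error threshold
RATE_LIMIT_DEG_S = 20.0     # hypothetical attitude-rate threshold
PERSISTENCE_CYCLES = 5      # consecutive GN&C cycles before declaring abort

def make_abort_trigger():
    count = 0
    def update(att_err_deg, rate_deg_s):
        nonlocal count
        exceeded = (abs(att_err_deg) > ATT_ERR_LIMIT_DEG or
                    abs(rate_deg_s) > RATE_LIMIT_DEG_S)
        # persistence filter: a brief transient (sensor noise, a hard gust)
        # resets the counter instead of tripping an abort
        count = count + 1 if exceeded else 0
        return count >= PERSISTENCE_CYCLES
    return update

trigger = make_abort_trigger()
for err, rate in [(3, 2), (18, 25), (19, 26), (20, 27), (21, 28), (22, 30)]:
    if trigger(err, rate):
        print("ABORT")   # fires on the 5th consecutive exceedance
```

The persistence count is the knob that trades detection speed against false-positive rate, which is exactly the tension the abstract describes.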
School, Alienation, and Delinquency
ERIC Educational Resources Information Center
Liazos, Alexander
1978-01-01
Schools create delinquents because of their success, not their failure. Under the present economic system, schools must prepare youths, especially of the lower classes, for alienated work and lives. Society and economy must change first, since they demand alienated labor, before schools can prepare people for liberated lives. (Author)
SU-E-T-495: Neutron Induced Electronics Failure Rate Analysis for a Single Room Proton Accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knutson, N; DeWees, T; Klein, E
2014-06-01
Purpose: To determine the failure rate as a function of neutron dose of the range modulator's servo motor controller system (SMCS) while shielded with Borated Polyethylene (BPE) and unshielded in a single room proton accelerator. Methods: Two experimental setups were constructed using two servo motor controllers and two motors. Each SMCS was then placed 30 cm from the end of the plugged proton accelerator applicator. The motor was then turned on and observed from outside of the vault while being irradiated to known neutron doses determined from bubble detector measurements. Anytime the motor deviated from the programmed motion, a failure was recorded along with the delivered dose. The experiment was repeated using 9 cm of BPE shielding surrounding the SMCS. Results: Ten SMCS failures were recorded in each experiment. The dose per monitor unit was 0.0211 mSv/MU for the unshielded SMCS and 0.0144 mSv/MU for the shielded SMCS. The mean dose to produce a failure was 63.5 ± 58.3 mSv for the unshielded SMCS versus 17.0 ± 12.2 mSv for the shielded. The mean number of MUs between failures was 2297 ± 1891 MU for the unshielded SMCS and 2122 ± 1523 MU for the shielded. A Wilcoxon signed-rank test showed the dose between failures was significantly different (P value = 0.044) while the number of MUs between failures was not (P value = 1.000). Statistical analysis determined that an SMCS neutron dose of 5.3 mSv produces a 5% chance of failure. Depending on the workload and location of the SMCS, this failure rate could impede clinical workflow. Conclusion: BPE shielding was shown not to reduce the failure rate of the SMCS, and relocation of the system outside of the accelerator vault was required to lower the failure rate enough to avoid impeding clinical workflow.
A Tissue Engineered Model of Aging: Interdependence and Cooperative Effects in Failing Tissues.
Acun, A; Vural, D C; Zorlutuna, P
2017-07-11
Aging remains a fundamental open problem in modern biology. Although there exist a number of theories of aging at the cellular scale, nearly nothing is known about how microscopic failures cascade to macroscopic failures of tissues, organs, and ultimately the organism. The goal of this work is to bridge microscopic cell failure to macroscopic manifestations of aging. We use tissue engineered constructs to control the cellular-level damage and cell-cell distance in individual tissues to establish the role of complex interdependence and interactions between cells in aging tissues. We found that while microscopic mechanisms drive aging, the interdependency between cells plays a major role in tissue death, providing evidence on how cellular aging is connected to its higher-level systemic consequences.
Launch Vehicle Abort Analysis for Failures Leading to Loss of Control
NASA Technical Reports Server (NTRS)
Hanson, John M.; Hill, Ashley D.; Beard, Bernard B.
2013-01-01
Launch vehicle ascent is a time of high risk for an onboard crew. There is a large fraction of possible failures for which time is of the essence and a successful abort is possible if the detection and action happens quickly enough. This paper focuses on abort determination based on data already available from the Guidance, Navigation, and Control system. This work is the result of failure analysis efforts performed during the Ares I launch vehicle development program. The two primary areas of focus are the derivation of abort triggers to ensure that abort occurs as quickly as possible when needed, but that false aborts are avoided, and evaluation of success in aborting off the failing launch vehicle.
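The two evaluation criteria named above (avoiding false aborts, succeeding on real failures) can be pictured with a small scoring sketch over Monte Carlo runs; the run format, names, and warning-time requirement below are assumptions for illustration, not the study's actual criteria.

```python
# Illustrative only: scoring an abort trigger over Monte Carlo runs. Each run
# records when (if ever) the trigger fired and when the vehicle became
# unsurvivable; the warning-time requirement is a hypothetical stand-in for
# crew-escape timing.

WARNING_TIME_REQ_S = 1.0   # assumed lead time needed for a survivable abort

def score(runs):
    """runs: list of (fired_at, lost_control_at), with None if the event
    never occurred. Returns (false-positive rate, successful-abort rate)."""
    nominal = [r for r in runs if r[1] is None]
    failures = [r for r in runs if r[1] is not None]
    fp = sum(f is not None for f, _ in nominal) / max(len(nominal), 1)
    ok = sum(f is not None and f + WARNING_TIME_REQ_S <= t
             for f, t in failures) / max(len(failures), 1)
    return fp, ok

print(score([(None, None), (2.0, None), (5.0, 7.5), (None, 6.0)]))
# -> (0.5, 0.5): one false positive in two nominal runs; one of two
#    failure runs detected early enough to escape.
```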
S&MA Internship to Support Orion and the European Service Module
NASA Technical Reports Server (NTRS)
Hutcheson, Connor
2016-01-01
As a University Space Research Association (USRA) intern for NASA Johnson Space Center (JSC) during the summer 2016 work term, I worked on three main projects for the Space Exploration Division (NC) of the Safety and Mission Assurance (S&MA) Directorate. I worked on all three projects concurrently. One of the projects involved facilitating the status and closure of technical actions that were created during European Service Module (ESM) safety reviews by the MPCV Safety & Engineering Review Panel (MSERP). The two main duties included accurately collecting and summarizing qualitative data, and communicating that information to the European Space Agency (ESA) and Airbus (ESA's prime contractor) in a clear, succinct and precise manner. This project also required that I create a report on the challenges and opportunities of international S&MA. With its heavy emphasis on soft skills, this project taught me how to communicate better, by showing me how to present and share information in an easy-to-read and understandable format, and by showing me how to cooperate with and culturally respect international partners on a technical project. The second project involved working with the Orion Thermal Protection System (TPS) Process Failure Modes and Effects Analysis (PFMEA) Working Group to create the first full version of the Orion TPS PFMEA. The Orion TPS PFMEA Working Group met twice a week to analyze the Avcoat block installation process for failure modes, the failure modes' effects, and how such failure modes could be controlled. I was in charge of implementing changes that were discussed in meetings but not implemented in real time. Another major task included creating a significant portion of the content alongside another team member outside the two weekly meetings. This project caused me to become knowledgeable about TPS, heatshields, space-rated manufacturing, and non-destructive evaluation (NDE). The project also helped me to become better at working with a small team and helped improve my technical communication skills. My main duty for the third project was creating a Safety Verification Tracking Log (SVTL) for the Orbital Maneuvering System Engine (OMS-E), and contacting subject matter experts to close Hazard Report (HR) control verifications. This project also required me to support other OMS-E safety process tasks, like monitoring OMS-E vibration testing for Quality Assurance (QA) purposes. This project helped me become even more proficient in Excel. Throughout the project, I gained knowledge about the OMS-E system and improved my understanding of pressure systems and propellant systems. In terms of education goals, this work term has affirmed my desire to take a few more space-related courses, like orbital mechanics, so that I can have a better understanding of human spaceflight and the industry surrounding it. However, the work term did not persuade me to pursue a master's degree. In terms of career goals, this work term has helped me clarify the direction I would like to head in the future. The perspective of three summer terms working for NASA in S&MA has allowed me to observe that most S&MA employees joined S&MA after working in other NASA directorates, such as Engineering or Flight Operations.
It is my belief that it would be advantageous for both NASA and me to broaden my knowledge base and technical skill set by completing hands-on technical work on human spaceflight projects, and to integrate my safety experience directly into technical work in other directorates. The other significant advantage to this proposed situation is that if I were to eventually return to S&MA, I would be returning with a substantial set of hands-on technical experience and knowledge, which would be a significant resource for S&MA tasks and projects.
New Approaches to Capture High Frequency Agricultural Dynamics in Africa through Mobile Phones
NASA Astrophysics Data System (ADS)
Evans, T. P.; Attari, S.; Plale, B. A.; Caylor, K. K.; Estes, L. D.; Sheffield, J.
2015-12-01
Crop failure early warning systems relying on remote sensing constitute a critical new resource for assessing areas where food shortages may arise, but there is a disconnect between the patterns of crop production on the ground and the environmental and decision-making dynamics that led to a particular crop production outcome. In Africa, many governments use mid-growing-season household surveys to get an on-the-ground assessment of current agricultural conditions. But these efforts are cost prohibitive over large scales and offer only a one-time snapshot at a particular time point. They also rely on farmers to recall past decisions, and farmer recall may be imperfect when answering retrospectively about a decision made several months back (e.g., quantity of seed planted). We introduce a novel mobile-phone-based approach to acquire information from farmers over large spatial extents, at high frequency, and at relatively low cost compared to household survey approaches. This system compromises on the number of questions that can feasibly be asked of a respondent (compared to household interviews), but offers the considerable benefit of capturing weekly data from farmers. We present data gathered from farmers in Kenya and Zambia to understand key dimensions of agricultural decision making, such as choice of seed variety and planting date, frequency and timing of weeding and fertilizing, and coping strategies such as pursuing off-farm labor. A particularly novel aspect of this work is farmers' week-by-week reporting of their expected end-of-season harvest. Farmers themselves can serve as sentinels of crop failure in this system, and farmers' responses to drought are strongly driven by their own expectations of looming crop failure, which may differ from remote-sensing-based assessments. This work is one piece of a larger design to link farmers to high-density meteorological data in Africa as an additional tool to improve crop failure early warning systems and understand adaptation to climate variability.
ERIC Educational Resources Information Center
Price, Hugh B.
2007-01-01
This working paper examines the approaches, wisdom, and experience generated by the ChalleNGe program, as well as the vast storehouse of knowledge and research, models and systems possessed by the military services that are potentially applicable to educating and developing youngsters who are at greatest risk of academic failure, economic…
Yun, Richard J; Krystal, John H; Mathalon, Daniel H
2010-03-01
The human working memory system provides an experimentally useful model for examination of neural overload effects on subsequent functioning of the overloaded system. This study employed functional magnetic resonance imaging in conjunction with a parametric working memory task to characterize the behavioral and neural effects of cognitive overload on subsequent cognitive performance, with particular attention to cognitive-limbic interactions. Overloading the working memory system was associated with varying degrees of subsequent decline in performance accuracy and reduced activation of brain regions central to both task performance and suppression of negative affect. The degree of performance decline was independently predicted by three separate factors operating during the overload condition: the degree of task failure, the degree of amygdala activation, and the degree of inverse coupling between the amygdala and dorsolateral prefrontal cortex. These findings suggest that vulnerability to overload effects in cognitive functioning may be mediated by reduced amygdala suppression and subsequent amygdala-prefrontal interaction.
Medication management strategies used by older adults with heart failure: A systems-based analysis.
Mickelson, Robin S; Holden, Richard J
2017-09-01
Older adults with heart failure use strategies to cope with the constraining barriers impeding medication management. Strategies are behavioral adaptations that allow goal achievement despite these constraining conditions. When strategies do not exist, or are ineffective or maladaptive, medication performance and health outcomes are at risk. While constraints to medication adherence are described in the literature, the strategies patients use to manage medications are less well described or understood. Guided by cognitive engineering concepts, the aim of this study was to describe and analyze the strategies used by older adults with heart failure to achieve their medication management goals. This mixed-methods study employed an empirical strategies-analysis method to elicit the medication management strategies used by older adults with heart failure. Observation and interview data collected from 61 older adults with heart failure and 31 caregivers were analyzed using qualitative content analysis to derive categories, patterns, and themes within and across cases. Thematic sub-categories derived from the data described planned and ad hoc methods of strategic adaptation. Stable strategies proactively adjusted the medication management process, the environment, or the patients themselves. Patients applied situational strategies (planned or ad hoc) to irregular or unexpected situations. Medication non-adherence was a strategy employed when life goals conflicted with medication adherence. The health system was a source of constraints without providing commensurate strategies. Patients strived to control their medication system and achieve goals using adaptive strategies. Future patient self-management research can benefit from the methods and theories used to study professional work, such as strategies analysis.
McHugh, Matthew D.; Ma, Chenjuan
2013-01-01
Background Provisions of the Affordable Care Act that increase hospitals' financial accountability for preventable readmissions have heightened interest in identifying system-level interventions to reduce readmissions. Objectives To determine the relationship between hospital nursing (i.e., nurse work environment, nurse staffing levels, and nurse education) and 30-day readmissions among Medicare patients with heart failure, acute myocardial infarction, and pneumonia. Method and Design Analysis of linked data from California, New Jersey, and Pennsylvania that included information on the organization of hospital nursing (i.e., work environment, patient-to-nurse ratios, and proportion of nurses holding a BSN degree) from a survey of nurses, as well as patient discharge data, and American Hospital Association Annual Survey data. Robust logistic regression was used to estimate the relationship between nursing factors and 30-day readmission. Results Nearly one-quarter of heart failure index admissions (23.3% [n=39,954]); 19.1% (n=12,131) of myocardial infarction admissions; and 17.8% (n=25,169) of pneumonia admissions were readmitted within 30 days. Each additional patient per nurse in the average nurse's workload was associated with a 7% higher odds of readmission for heart failure (OR=1.07, [1.05–1.09]), 6% for pneumonia patients (OR=1.06, [1.03–1.09]), and 9% for myocardial infarction patients (OR=1.09, [1.05–1.13]). Care in a hospital with a good versus poor work environment was associated with odds of readmission that were 7% lower for heart failure (OR = 0.93, [0.89–0.97]); 6% lower for myocardial infarction (OR = 0.94, [0.88–0.98]); and 10% lower for pneumonia (OR = 0.90, [0.85–0.96]) patients. Conclusions Improving nurses' work environments and staffing may be effective interventions for preventing readmissions. PMID:23151591
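A hedged sketch of the kind of model behind such odds ratios: fit a logistic regression and exponentiate the coefficients. The data below are synthetic, and the study's actual adjustments (hospital clustering, robust errors, patient covariates) are omitted.

```python
# Synthetic illustration of odds ratios from logistic regression. An odds
# ratio is exp(coefficient) for a one-unit change in a predictor, e.g., one
# extra patient per nurse. Not the study's dataset or full model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
workload = rng.normal(5.0, 1.0, n)     # patients per nurse
good_env = rng.integers(0, 2, n)       # 1 = good work environment
logit = -1.5 + np.log(1.07) * workload + np.log(0.93) * good_env
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([workload, good_env]))
fit = sm.Logit(y, X).fit(disp=False)
print(np.exp(fit.params[1:]))   # odds ratios; should land near 1.07 and 0.93
```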
NASA Astrophysics Data System (ADS)
Vandre, Eric
2014-11-01
Dynamic wetting is crucial to processes where a liquid displaces another fluid along a solid surface, such as the deposition of a coating liquid onto a moving substrate. Dynamic wetting fails when process speed exceeds some critical value, leading to incomplete fluid displacement and transient phenomena that impact a variety of applications, such as microfluidic devices, oil-recovery systems, and splashing droplets. Liquid coating processes are particularly sensitive to wetting failure, which can induce air entrainment and other catastrophic coating defects. Despite the industrial incentives for careful control of wetting behavior, the hydrodynamic factors that influence the transition to wetting failure remain poorly understood from empirical and theoretical perspectives. This work investigates the fundamentals of wetting failure in a variety of systems that are relevant to industrial coating flows. A hydrodynamic model is developed where an advancing fluid displaces a receding fluid along a smooth, moving substrate. Numerical solutions predict the onset of wetting failure at a critical substrate speed, which coincides with a turning point in the steady-state solution path for a given set of system parameters. Flow-field analysis reveals a physical mechanism where wetting failure results when capillary forces can no longer support the pressure gradients necessary to steadily displace the receding fluid. Novel experimental systems are used to measure the substrate speeds and meniscus shapes associated with the onset of air entrainment during wetting failure. Using high-speed visualization techniques, air entrainment is identified by the elongation of triangular air films with system-dependent size. Air films become unstable to thickness perturbations and ultimately rupture, leading to the entrainment of air bubbles. Meniscus confinement in a narrow gap between the substrate and a stationary plate is shown to delay air entrainment to higher speeds for a variety of water/glycerol solutions. In addition, liquid pressurization (relative to ambient air) further postpones air entrainment when the meniscus is located near a sharp corner along the plate. Recorded critical speeds compare well to predictions from the model, supporting the hydrodynamic mechanism for the onset of wetting failure. Lastly, the industrial practice of curtain coating is investigated using the hydrodynamic model. Due to the complexity of this system, a new computational approach is developed combining a finite element method and lubrication theory in order to improve the efficiency of the numerical analysis. Results show that the onset of wetting failure varies strongly with the operating conditions of this system. In addition, stresses from the air flow dramatically affect the steady wetting behavior of curtain coating. Ultimately, these findings emphasize the important role of two-fluid displacement mechanics in high-speed wetting systems.
Advanced Caution and Warning System, Final Report - 2011
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly; Aaseng, Gordon; Iverson, David; McCann, Robert S.; Robinson, Peter; Dittemore, Gary; Liolios, Sotirios; Baskaran, Vijay; Johnson, Jeremy; Lee, Charles;
2013-01-01
The work described in this report is a continuation of the ACAWS work funded in fiscal year (FY) 2010 under the Exploration Technology Development Program (ETDP), Integrated Systems Health Management (ISHM) project. In FY 2010, we developed requirements for an ACAWS system and vetted the requirements with potential users via a concept demonstration system. In FY 2011, we developed a working prototype of aspects of that concept, with placeholders for technologies to be fully developed in future phases of the project. The objective is to develop general capability to assist operators with system health monitoring and failure diagnosis. Moreover, ACAWS was integrated with the Discrete Controls (DC) task of the Autonomous Systems and Avionics (ASA) project. The primary objective of DC is to demonstrate an electronic and interactive procedure display environment and multiple levels of automation (automatic execution by computer, execution by computer if the operator consents, and manual execution by the operator).
NASA Astrophysics Data System (ADS)
Donati, Massimiliano; Bacchillone, Tony; Saponara, Sergio; Fanucci, Luca
2011-05-01
Today Chronic Heart Failure (CHF) represents one of the leading causes of hospitalization among chronic diseases, especially for elderly citizens, with a considerable impact on patient quality of life, resource congestion, and healthcare costs for the national health system. The current healthcare model is mostly hospital-based and consists of periodic visits, which unfortunately do not allow prompt detection of exacerbations, resulting in a large number of rehospitalizations. Recently, physicians and administrators have identified telemonitoring systems as a strategy able to provide effective and cost-efficient healthcare services for CHF patients, ensuring early diagnosis and treatment in case of necessity. This work presents a complete and integrated ICT solution to improve the management of chronic heart failure through remote monitoring of vital signs at the patient's home, able to connect in-hospital care of the acute syndrome with out-of-hospital follow-up. The proposed platform represents the patient's interface, acting as the link between biomedical sensors and the data collection point at the Hospital Information System (HIS), handling in a transparent way the reception, analysis, and forwarding of the main physiological parameters.
Condition-based diagnosis of mechatronic systems using a fractional calculus approach
NASA Astrophysics Data System (ADS)
Gutiérrez-Carvajal, Ricardo Enrique; Flávio de Melo, Leonimer; Maurício Rosário, João; Tenreiro Machado, J. A.
2016-07-01
While fractional calculus (FC) is as old as integer calculus, its application has been mainly restricted to mathematics. However, many real systems are better described using FC equations than with integer models. FC is a suitable tool for describing systems characterised by their fractal nature, long-term memory and chaotic behaviour. It is a promising methodology for failure analysis and modelling, since the behaviour of a failing system depends on factors that increase the model's complexity. This paper explores the proficiency of FC in modelling complex behaviour by tuning only a few parameters. This work proposes a novel two-step strategy for diagnosis, first modelling common failure conditions and, second, by comparing these models with real machine signals and using the difference to feed a computational classifier. Our proposal is validated using an electrical motor coupled with a mechanical gear reducer.
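One standard numerical entry point to FC models of sampled machine signals is the Grünwald-Letnikov derivative; the sketch below is illustrative and does not reproduce the paper's own two-step modelling-and-classification strategy.

```python
# Minimal Grünwald-Letnikov fractional derivative on uniformly sampled data,
# a common numerical route to fractional-calculus models of measured signals.
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Approximate D^alpha f on a uniform grid with step h."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):                 # w_j = w_{j-1} * (j - 1 - alpha)/j
        w[j] = w[j - 1] * (j - 1 - alpha) / j
    out = np.empty(n)
    for i in range(n):
        # convolve weights with the signal history up to sample i
        out[i] = np.dot(w[:i + 1], f[i::-1]) / h**alpha
    return out

# Check against a known result: the half-derivative of f(t) = t
# is 2*sqrt(t/pi), i.e., about 1.128 at t = 1.
t = np.linspace(0, 1, 201)
d = gl_fractional_derivative(t, 0.5, t[1] - t[0])
print(d[-1], 2 * np.sqrt(1 / np.pi))      # both approximately 1.128
```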
Information Extraction for System-Software Safety Analysis: Calendar Year 2007 Year-End Report
NASA Technical Reports Server (NTRS)
Malin, Jane T.
2008-01-01
This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis on the models to identify possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations; 4) perform discrete-time-based simulation on the models to investigate scenarios where these paths may play a role in failures and mishaps; and 5) identify resulting candidate scenarios for software integration testing. This paper describes new challenges in a NASA abort system case, and enhancements made to develop the integrated tool set.
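Task (3) can be pictured as reachability analysis over a directed model graph. The miniature below uses networkx with invented node names; the actual tool set and models described in the report are far richer.

```python
# Hypothetical miniature of task (3): enumerate paths from hazard sources to
# vulnerable entities and functions over a directed model graph. All node
# names are invented for illustration.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("hydrazine_leak", "valve_ctrl_sw"),   # hazard source -> software
    ("valve_ctrl_sw", "abort_motor"),      # software -> vulnerable function
    ("sensor_dropout", "valve_ctrl_sw"),
    ("hydrazine_leak", "wiring_harness"),
    ("wiring_harness", "abort_motor"),
])

for src in ("hydrazine_leak", "sensor_dropout"):
    for path in nx.all_simple_paths(g, src, "abort_motor"):
        print(" -> ".join(path))  # candidate scenario for integration testing
```

Each printed path corresponds to a candidate scenario of the kind the report feeds into discrete-time simulation and software integration testing.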
ESSAA: Embedded system safety analysis assistant
NASA Technical Reports Server (NTRS)
Wallace, Peter; Holzer, Joseph; Guarro, Sergio; Hyatt, Larry
1987-01-01
The Embedded System Safety Analysis Assistant (ESSAA) is a knowledge-based tool that can assist in identifying disaster scenarios. Embedded software can issue hazardous control commands to the surrounding hardware. ESSAA is intended to work from outputs to inputs, as a complement to simulation and verification methods. Rather than treating the software in isolation, it examines the context in which the software is to be deployed. Given a specified disastrous outcome, ESSAA works from a qualitative, abstract model of the complete system to infer sets of environmental conditions and/or failures that could cause that outcome. The scenarios can then be examined in depth for plausibility using existing techniques.
Failing to retain a new generation of doctors: qualitative insights from a high-income country.
Humphries, Niamh; Crowe, Sophie; Brugha, Ruairí
2018-02-27
The failure of high-income countries, such as Ireland, to achieve a self-sufficient medical workforce has global implications, particularly for low-income, source countries. In the past decade, Ireland has doubled the number of doctors it trains annually, but because of its failure to retain doctors, it remains heavily reliant on internationally trained doctors to staff its health system. To halve its dependence on internationally trained doctors by 2030, in line with World Health Organisation (WHO) recommendations, Ireland must become more adept at retaining doctors. This paper presents findings from in-depth interviews conducted with 50 early career doctors between May and July 2015. The paper explores the generational component of Ireland's failure to retain doctors and makes recommendations for retention policy and practice. Interviews revealed that a new generation of doctors differ from previous generations in several distinct ways. Their early experiences of training and practice have been in an over-stretched, under-staffed health system and this shapes their decision to remain in Ireland, or to leave. Perhaps as a result of the distinct challenges they have faced in an austerity-constrained health system and their awareness of the working conditions available globally, they challenge the traditional view of medicine as a vocation that should be prioritised before family and other commitments. A new generation of doctors have career options that are also strongly shaped by globalisation and by the opportunities presented by emigration. Understanding the medical workforce from a generational perspective requires that the health system address the issues of concern to a new generation of doctors, in terms of working conditions and training structures and also in terms of their desire for a more acceptable balance between work and life. This will be an important step towards future-proofing the medical workforce and is essential to achieving medical workforce self-sufficiency.
Position, Attitude, and Fault-Tolerant Control of Tilting-Rotor Quadcopter
NASA Astrophysics Data System (ADS)
Kumar, Rumit
The aim of this thesis is to present algorithms for autonomous control of a tilt-rotor quadcopter UAV. In particular, this research work describes position, attitude, and fault-tolerant control of the tilt-rotor quadcopter. Quadcopters are among the most popular and reliable unmanned aerial systems because of their design simplicity, hovering capabilities, and minimal operational cost. Numerous applications for quadcopters have been explored all over the world, but very little work has been done to explore design enhancements and address the fault-tolerant capabilities of quadcopters. The tilting-rotor quadcopter is a structural advancement of the traditional quadcopter, providing additional actuated controls as the propeller motors are actuated for tilt, which can be utilized to improve the efficiency of the aerial vehicle during flight. The tilting-rotor quadcopter design is accomplished by using an additional servo motor for each rotor that enables the rotor to tilt about the axis of the quadcopter arm. The tilting-rotor quadcopter is a more agile version of the conventional quadcopter, and it is a fully actuated system, capable of following complex trajectories with ease. The control strategy in this work is to use the propeller tilts for position and orientation control during autonomous flight of the quadcopter. In conventional quadcopters, two propellers rotate in the clockwise direction and the other two rotate in the counter-clockwise direction to cancel out the net yawing moment of the system, and the variation in the rotational speeds of these four propellers is utilized for maneuvering. This work, by contrast, uses varying propeller rotational speeds along with tilting of the propellers for maneuvering during flight. The rotational speeds of the propellers work in sync with the propeller tilts to control the position and orientation of the UAV during flight. A PD flight controller is developed to achieve the various modes of flight. Further, the performance of the controller and the tilt-rotor design has been compared with the conventional quadcopter in the presence of wind disturbances and sensor uncertainties. In this work, another novel feed-forward control design approach is presented for complex trajectory tracking during autonomous flight. Differential-flatness-based feed-forward position control is employed to enhance the performance of the UAV during complex trajectory tracking. By accounting for the differential-flatness-based feed-forward control input parameters, a new PD controller is designed to achieve the desired performance in autonomous flight. Results for tracking complex trajectories are presented from numerical simulations with and without environmental uncertainties to demonstrate the robustness of the controller during flight. Conventional quadcopters are under-actuated systems and, upon failure of one propeller, a conventional quadcopter has a tendency to spin about the primary axis fixed to the vehicle as an outcome of the asymmetry in the resultant yawing moment of the system. In this work, control of the tilt-rotor quadcopter is presented upon failure of one propeller during flight. The tilt-rotor quadcopter is capable of handling a propeller failure and hence is a fault-tolerant system. The dynamic model of the tilting-rotor quadcopter with one propeller failure is derived, and a controller has been designed to achieve hovering and navigation capability.
The simulation results for waypoint navigation, complex trajectory tracking, and fault tolerance are presented.
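The PD flight-control idea above can be reduced to a single-axis sketch; the gains, inertia, and rigid-body model below are hypothetical stand-ins for the thesis's full coupled tilt-rotor dynamics.

```python
# Single-axis PD attitude regulation sketch (hypothetical numbers), showing
# the control law u = -(Kp*e + Kd*e_dot) that underlies a PD flight controller.

KP, KD = 8.0, 4.0                    # hypothetical PD gains
I_AXIS = 0.02                        # roll-axis inertia, kg*m^2 (assumed)
dt, phi, phi_dot = 0.002, 0.3, 0.0   # start with a 0.3 rad roll error

for _ in range(2000):                # 4 s of simulated flight
    torque = -(KP * phi + KD * phi_dot)   # PD law on attitude error
    phi_dot += (torque / I_AXIS) * dt     # rigid-body rotational dynamics
    phi += phi_dot * dt
print(abs(phi) < 1e-3)               # error regulated to near zero -> True
```

In the actual vehicle the commanded torque would be allocated across propeller speeds and tilt angles rather than applied directly.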
Development of An Intelligent Flight Propulsion Control System
NASA Technical Reports Server (NTRS)
Calise, A. J.; Rysdyk, R. T.; Leonhardt, B. K.
1999-01-01
The initial design and demonstration of an Intelligent Flight Propulsion and Control System (IFPCS) is documented. The design is based on the implementation of a nonlinear adaptive flight control architecture. This initial design of the IFPCS enhances flight safety by using propulsion sources to provide redundancy in flight control. The IFPCS enhances the conventional gain scheduled approach in significant ways: (1) The IFPCS provides a back up flight control system that results in consistent responses over a wide range of unanticipated failures; (2) The IFPCS is applicable to a variety of aircraft models without redesign; and (3) it significantly reduces the laborious research and design necessary in a gain scheduled approach. The control augmentation is detailed within an approximate Input-Output Linearization setting. The availability of propulsion only provides two control inputs, symmetric and differential thrust. Earlier Propulsion Control Augmentation (PCA) work performed by NASA provided for a trajectory controller with pilot command input of glidepath and heading. This work is aimed at demonstrating the flexibility of the IFPCS in providing consistency in flying qualities under a variety of failure scenarios. This report documents the initial design phase where propulsion only is used. Results confirm that the engine dynamics and associated hard nonlinearities result in poor handling qualities at best. However, as demonstrated in simulation, the IFPCS is capable of results similar to the gain scheduled designs of the NASA PCA work. The IFPCS design uses crude estimates of aircraft behaviour. The adaptive control architecture demonstrates robust stability and provides robust performance. In this work, robust stability means that all states, errors, and adaptive parameters remain bounded under a wide class of uncertainties and input and output disturbances. Robust performance is measured in the quality of the tracking. The results demonstrate the flexibility of the IFPCS architecture and the ability to provide robust performance under a broad range of uncertainty. Robust stability is proved using Lyapunov-like analysis. Future development of the IFPCS will include integration of conventional control surfaces with the use of propulsion augmentation, and utilization of available lift and drag devices, to demonstrate adaptive control capability under a greater variety of failure scenarios. Further work will specifically address the effects of actuator saturation.
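The flavor of the adaptive augmentation can be shown with a toy scalar model-reference adaptive controller. The IFPCS itself is a full nonlinear architecture, so the first-order plant, gains, and every number below are invented purely for illustration.

```python
# Toy scalar model-reference adaptive control (MRAC): the plant parameters
# are unknown, and Lyapunov-based adaptive laws drive the plant to track a
# reference model. All values are invented; this is not the IFPCS design.

a, b = 1.0, 3.0              # unknown plant x' = a*x + b*u (sign of b known)
am, bm = 4.0, 4.0            # reference model xm' = -am*xm + bm*r
gamma, dt = 2.0, 0.001       # adaptation gain and integration step
x = xm = kx = kr = 0.0

for _ in range(20000):       # 20 s with a constant step command
    r = 1.0
    u = kx * x + kr * r
    e = x - xm               # tracking error drives adaptation
    kx -= gamma * e * x * dt # Lyapunov-based adaptive laws (sign(b) > 0)
    kr -= gamma * e * r * dt
    x += (a * x + b * u) * dt
    xm += (-am * xm + bm * r) * dt

print(round(abs(x - xm), 3)) # tracking error shrinks toward zero
```

The "robust stability" language in the abstract corresponds to the boundedness of x, e, kx, and kr under such Lyapunov-style adaptive laws.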
Using Combined SFTA and SFMECA Techniques for Space Critical Software
NASA Astrophysics Data System (ADS)
Nicodemos, F. G.; Lahoz, C. H. N.; Abdala, M. A. D.; Saotome, O.
2012-01-01
This work addresses the combined Software Fault Tree Analysis (SFTA) and Software Failure Modes, Effects and Criticality Analysis (SFMECA) techniques applied to space critical software of satellite launch vehicles. The combined approach is under research as part of the Verification and Validation (V&V) efforts to increase software dependability, with future application intended in other projects under development at Instituto de Aeronáutica e Espaço (IAE). The applicability of the approach was evaluated on a system software specification in a case study based on the Brazilian Satellite Launcher (VLS). The main goal is to identify possible failure causes and obtain compensating provisions that lead to the inclusion of new functional and non-functional system software requirements.
NASA Astrophysics Data System (ADS)
Vasseur, Jeremie; Lavallée, Yan; Hess, Kai-Uwe; Wassermann, Joachim; Dingwell, Donald B.
2013-04-01
Like many other catastrophic phenomena, volcanic unrest is regarded as a material failure process and is often preceded by diverse precursory signals. Although a volcanic system intrinsically behaves in a non-linear and stochastic way, these precursors display systematic evolutionary trends ahead of upcoming eruptions. Seismic signals in particular generally increase dramatically prior to an eruption and have been extensively reported to show accelerating rates through time, as they do in the laboratory before failure of rock samples. At the laboratory scale, acoustic emissions (AE) are high-frequency transient stress waves used to track fracture initiation and propagation inside a rock sample. Synthesized glass samples spanning a range of porosities (0-30%) and natural rock samples from Volcán de Colima, Mexico, were taken to failure in high-temperature uniaxial compression experiments at constant stresses and strain rates. Using the monitored AEs and the mechanical work generated during deformation, we investigated the evolutionary trends of energy patterns associated with different degrees of heterogeneity. We observed that failure of dense, poorly porous glasses requires exceeding an elevated strength and thus a significant accumulation of strain, with only pervasive small-scale cracking occurring. More porous glasses, as well as volcanic samples, need much lower applied stress and deformation to fail, as fractures nucleate, propagate, and coalesce into localized large-scale cracks, taking advantage of the numerous defects present (voids for glasses; voids and crystals for volcanic rocks). These observations demonstrate that the mechanical work generated through cracking is efficiently distributed inside denser, more homogeneous samples, as underlined by the overall lower AE energy released during experiments. In contrast, the quicker and larger AE energy release during loading of heterogeneous samples shows that mechanical work tends to concentrate in specific weak regions, facilitating dynamic failure of the material through dissipation of the accumulated strain energy. Applying a statistical Global Linearization Method (GLM) to multi-phase silicate liquid samples yields a maximum-likelihood power-law fit of the accelerating rates of released AEs. The calculated α exponent of the empirical Failure Forecast Method (FFM) tends to decrease from 2 towards 1 with increasing porosity, suggesting a shift towards an idealized exponential-like acceleration. Single-phase silicate liquids behave more elastically during deformation, with little cracking, suddenly releasing their accumulated strain energy at failure and yielding less clear trends in monitored AEs. In a predictive perspective, these results support the view that failure forecasting power is enhanced by the presence of heterogeneities inside a material.
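For readers unfamiliar with the FFM mentioned above, its classical inverse-rate special case (Voight's relation with α = 2) can be sketched in a few lines on synthetic data; the paper's GLM-based maximum-likelihood fitting is not reproduced here.

```python
# Inverse-rate Failure Forecast Method sketch: for alpha = 2 the event rate
# accelerates as 1/(t_f - t), so 1/rate is linear in time and its zero
# crossing forecasts the failure time t_f. Synthetic data for illustration.
import numpy as np

t_f = 100.0
t = np.linspace(10, 95, 18)
rate = 1.0 / (t_f - t)                        # hyperbolic AE-rate acceleration
rate *= np.exp(0.05 * np.random.default_rng(1).normal(size=t.size))  # noise

inv = 1.0 / rate
slope, intercept = np.polyfit(t, inv, 1)      # straight-line fit to 1/rate
print("forecast failure time:", -intercept / slope)   # close to t_f = 100
```

For α between 1 and 2, as reported above for porous samples, the inverse-rate plot curves and a power-law fit of the kind the authors use is required instead.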
Managing heart failure in the long-term care setting: nurses' experiences in Ontario, Canada.
Strachan, Patricia H; Kaasalainen, Sharon; Horton, Amy; Jarman, Hellen; D'Elia, Teresa; Van Der Horst, Mary-Lou; Newhouse, Ian; Kelley, Mary Lou; McAiney, Carrie; McKelvie, Robert; Heckman, George A
2014-01-01
Implementation of heart failure guidelines in long-term care (LTC) settings is challenging. Understanding the conditions of nursing practice can improve management, reduce suffering, and prevent hospital admission of LTC residents living with heart failure. The aim of the study was to understand the experiences of LTC nurses managing care for residents with heart failure. This was a descriptive qualitative study nested in Phase 2 of a three-phase mixed methods project designed to investigate barriers and solutions to implementing the Canadian Cardiovascular Society heart failure guidelines into LTC homes. Five focus groups totaling 33 nurses working in LTC settings in Ontario, Canada, were audiorecorded, then transcribed verbatim, and entered into NVivo9. A complex adaptive systems framework informed this analysis. Thematic content analysis was conducted by the research team. Triangulation, rigorous discussion, and a search for negative cases were conducted. Data were collected between May and July 2010. Nurses characterized their experiences managing heart failure in relation to many influences on their capacity for decision-making in LTC settings: (a) a reactive versus proactive approach to chronic illness; (b) ability to interpret heart failure signs, symptoms, and acuity; (c) compromised information flow; (d) access to resources; and (e) moral distress. Heart failure guideline implementation reflects multiple dynamic influences. Leadership that addresses these factors is required to optimize the conditions of heart failure care and related nursing practice.
Simultaneously Coupled Mechanical-Electrochemical-Thermal Simulation of Lithium-Ion Cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, C.; Santhanagopalan, S.; Sprague, M. A.
2016-07-28
Understanding the combined electrochemical-thermal and mechanical response of a system has a variety of applications, for example, structural failure from electrochemical fatigue and the potential induced changes of material properties. For lithium-ion batteries, there is an added concern over the safety of the system in the event of mechanical failure of the cell components. In this work, we present a generic multi-scale simultaneously coupled mechanical-electrochemical-thermal model to examine the interaction between mechanical failure and electrochemical-thermal responses. We treat the battery cell as a homogeneous material while locally we explicitly solve for the mechanical response of individual components using a homogenization model and the electrochemical-thermal responses using an electrochemical model for the battery. A benchmark problem is established to demonstrate the proposed modeling framework. The model shows the capability to capture the gradual evolution of cell electrochemical-thermal responses, and predicts the variation of those responses under different short-circuit conditions.
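A deliberately crude lumped sketch of the coupling at issue, with invented parameters and none of the paper's multi-scale detail: short-circuit Joule heating raises cell temperature, which feeds back on the short resistance.

```python
# Lumped electro-thermal coupling sketch (all values invented): a shorted
# cell heats up until Joule generation balances convective loss.

m_cp = 40.0          # cell heat capacity, J/K (assumed)
hA = 0.5             # convective loss coefficient, W/K (assumed)
V = 3.7              # cell voltage, V (assumed constant for simplicity)
T, T_amb, dt = 298.0, 298.0, 0.1

for _ in range(6000):                             # 10 minutes, explicit Euler
    R_short = 0.05 * (1 + 0.002 * (T - 298.0))    # assumed T-dependent short
    q_gen = V**2 / R_short                        # Joule heating, W
    T += (q_gen - hA * (T - T_amb)) / m_cp * dt   # energy balance
print(round(T, 1))   # settles near ~628 K with these invented numbers
```

The full model replaces each of these scalar terms with resolved mechanical, electrochemical, and thermal fields, which is what lets it distinguish different short-circuit conditions.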
Simultaneously Coupled Mechanical-Electrochemical-Thermal Simulation of Lithium-Ion Cells: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chao; Santhanagopalan, Shriram; Sprague, Michael A.
2016-08-01
Understanding the combined electrochemical-thermal and mechanical response of a system has a variety of applications, for example, structural failure from electrochemical fatigue and the potential induced changes of material properties. For lithium-ion batteries, there is an added concern over the safety of the system in the event of mechanical failure of the cell components. In this work, we present a generic multi-scale simultaneously coupled mechanical-electrochemical-thermal model to examine the interaction between mechanical failure and electrochemical-thermal responses. We treat the battery cell as a homogeneous material while locally we explicitly solve for the mechanical response of individual components using a homogenization model and the electrochemical-thermal responses using an electrochemical model for the battery. A benchmark problem is established to demonstrate the proposed modeling framework. The model shows the capability to capture the gradual evolution of cell electrochemical-thermal responses, and predicts the variation of those responses under different short-circuit conditions.
ERIC Educational Resources Information Center
Felder, Nathaniel L.
The grading system at the University of North Carolina at Asheville before fall 1978 provided four designations: H (honors); G (good or well above average); P (pass or satisfactory); and F (failure). This range does not provide a grade for unsatisfactory but passing work. It was suspected that this led teachers to give "average" grades…
Impacts of Job Stress and Cognitive Failure on Patient Safety Incidents among Hospital Nurses.
Park, Young-Mi; Kim, Souk Young
2013-12-01
This study aimed to identify the impacts of job stress and cognitive failure on patient safety incidents among hospital nurses in Korea. The study included 279 nurses who worked for at least 6 months in five general hospitals in Korea. Data were collected with self-administered questionnaires designed to measure job stress, cognitive failure, and patient safety incidents. This study showed that 27.9% of the participants had experienced patient safety incidents in the past 6 months. Factors affecting incidents were found to be shift work [odds ratio (OR) = 6.85], cognitive failure (OR = 2.92), lacking job autonomy (OR = 0.97), and job instability (OR = 1.02). Patient safety incidents were affected by shift work, cognitive failure, and job stress. Countermeasures against incidents caused by shift work are required, along with plans to reduce job stress and thereby workers' cognitive failure. In addition, there is a need to reduce job instability and to clearly define the scope and authority of duties directly related to patient safety.
Impacts of Job Stress and Cognitive Failure on Patient Safety Incidents among Hospital Nurses
Park, Young-Mi; Kim, Souk Young
2013-01-01
Background This study aimed to identify the impacts of job stress and cognitive failure on patient safety incidents among hospital nurses in Korea. Methods The study included 279 nurses who worked for at least 6 months in five general hospitals in Korea. Data were collected with self-administered questionnaires designed to measure job stress, cognitive failure, and patient safety incidents. Results This study showed that 27.9% of the participants had experienced patient safety incidents in the past 6 months. Factors affecting incidents were found to be shift work [odds ratio (OR) = 6.85], cognitive failure (OR = 2.92), lacking job autonomy (OR = 0.97), and job instability (OR = 1.02). Conclusion Patient safety incidents were affected by shift work, cognitive failure, and job stress. Countermeasures against incidents caused by shift work are required, along with plans to reduce job stress and thereby workers' cognitive failure. In addition, there is a need to reduce job instability and to clearly define the scope and authority of duties directly related to patient safety. PMID:24422177
NASA Technical Reports Server (NTRS)
Gibbel, Mark; Larson, Timothy
2000-01-01
An Engineering-of-Failure approach to designing and executing an accelerated product qualification test was applied to support a risk assessment of a "work-around" necessitated by an on-orbit failure of another piece of hardware on the Mars Global Surveyor spacecraft. The proposed work-around involved exceeding the previous qualification experience both in terms of extreme cold exposure level and in terms of demonstrated low-cycle fatigue life for the power shunt assemblies. An analysis was performed to identify potential failure sites, modes, and associated failure mechanisms consistent with the new use conditions. A test was then designed and executed which accelerated the failure mechanisms identified by analysis. Verification of the resulting failure mechanism concluded the effort.
Error, blame, and the law in health care--an antipodean perspective.
Runciman, William B; Merry, Alan F; Tito, Fiona
2003-06-17
Patients are frequently harmed by problems arising from the health care process itself. Addressing these problems requires understanding the role of errors, violations, and system failures in their genesis. Problem-solving is inhibited by a tendency to blame those involved, often inappropriately. This has been aggravated by the need to attribute blame before compensation can be obtained through tort and the human failing of attributing blame simply because there has been a serious outcome. Blaming and punishing for errors that are made by well-intentioned people working in the health care system drives the problem of iatrogenic harm underground and alienates people who are best placed to prevent such problems from recurring. On the other hand, failure to assign blame when it is due is also undesirable and erodes trust in the medical profession. Understanding the distinction between blameworthy behavior and inevitable human errors and appreciating the systemic factors that underlie most failures in complex systems are essential for the response to a harmed patient to be informed, fair, and effective in improving safety. It is important to meet society's needs to blame and exact retribution when appropriate. However, this should not be a prerequisite for compensation, which should be appropriately structured, fair, timely, and, ideally, properly funded as an intrinsic part of health care and social security systems.
2017-07-01
work, the guideline document (1) provides a basis for identifying high voltage design risks, (2) defines areas of concern as a function of environment...pressures (figure axis: breakdown voltage [volts-peak]). As an example of the impact of the aerospace environment, consider the calculation of the safe
Parent Anxiety and School Reform: When Interests Collide, Whose Needs Come First?
ERIC Educational Resources Information Center
Fried, Robert L.
1998-01-01
Alfie Kohn's attack on affluent, educationally focused parents (in the April 1998 "Kappan") sidesteps citizens' dismay at reformers' failure to make schools work for disadvantaged, average, or gifted students. What are the systemic implications? Elite colleges must champion reforms, schools must communicate curriculum essentials…
Object Relations and the Development of Values.
ERIC Educational Resources Information Center
Gazda, George M.; Sedgwick, Charlalee
1990-01-01
Claims acquisition of values is related to successes and failures of early relationships. Describes steps person goes through in making identifications, explaining steps that move person toward construction of value system. Refers to works of Heinz Kohut to explain how child's idealizing has within it necessary components for child's growth in…
49 CFR 229.49 - Main reservoir system.
Code of Federal Regulations, 2012 CFR
2012-10-01
... inch above the maximum working air pressure fixed by the chief mechanical officer of the carrier... reservoir of air under pressure to be used for operating those power controls. The reservoir shall be provided with means to automatically prevent the loss of pressure in the event of a failure of main air...
49 CFR 229.49 - Main reservoir system.
Code of Federal Regulations, 2011 CFR
2011-10-01
... inch above the maximum working air pressure fixed by the chief mechanical officer of the carrier... reservoir of air under pressure to be used for operating those power controls. The reservoir shall be provided with means to automatically prevent the loss of pressure in the event of a failure of main air...
49 CFR 229.49 - Main reservoir system.
Code of Federal Regulations, 2013 CFR
2013-10-01
... inch above the maximum working air pressure fixed by the chief mechanical officer of the carrier... reservoir of air under pressure to be used for operating those power controls. The reservoir shall be provided with means to automatically prevent the loss of pressure in the event of a failure of main air...
Monitoring Distributed Real-Time Systems: A Survey and Future Directions
NASA Technical Reports Server (NTRS)
Goodloe, Alwyn E.; Pike, Lee
2010-01-01
Runtime monitors have been proposed as a means to increase the reliability of safety-critical systems. In particular, this report addresses runtime monitors for distributed hard real-time systems. This class of systems has had little attention from the monitoring community. The need for monitors is shown by discussing examples of avionic systems failure. We survey related work in the field of runtime monitoring. Several potential monitoring architectures for distributed real-time systems are presented along with a discussion of how they might be used to monitor properties of interest.
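A runtime monitor for one timing property can be stated compactly; the event format and deadline below are assumptions for illustration, not drawn from the surveyed architectures.

```python
# Toy runtime monitor for the property "every request is followed by a
# response within its deadline" over a time-ordered event stream. The event
# format and the deadline value are invented for illustration.
DEADLINE = 0.050   # seconds (assumed)

def monitor(events):
    """events: iterable of (timestamp, kind, msg_id), time-ordered."""
    pending = {}
    for ts, kind, mid in events:
        if kind == "request":
            pending[mid] = ts
        elif kind == "response":
            t0 = pending.pop(mid, None)
            if t0 is not None and ts - t0 > DEADLINE:
                yield (mid, ts - t0)          # deadline violation observed

trace = [(0.000, "request", 1), (0.030, "response", 1),
         (0.100, "request", 2), (0.180, "response", 2)]
print(list(monitor(trace)))   # [(2, ~0.08)] -> message 2 missed its deadline
```

The distributed-systems difficulty the survey addresses is precisely what this sketch glosses over: obtaining a consistent, time-ordered event stream across nodes without perturbing hard real-time behavior.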
Evaluation of 1.5-T Cell Flash Memory Total Ionizing Dose Response
NASA Astrophysics Data System (ADS)
Clark, Lawrence T.; Holbert, Keith E.; Adams, James W.; Navale, Harshad; Anderson, Blake C.
2015-12-01
Flash memory is an essential part of systems used in harsh environments, in both terrestrial and aerospace applications where total ionizing dose (TID) is a concern. This paper presents studies of COTS flash memory TID hardness. While there is substantial literature on flash memory TID response, this work focuses for the first time on 1.5-transistor-per-cell flash memory. The experimental results show hardness varying from about 100 krad(Si) to over 250 krad(Si), depending on the usage model. We explore the circuit and device aspects of the results, based on the extensive reliability literature for this flash memory type. Failure modes indicate both device damage and circuit marginalities. Sector erase failure limits the achievable dose, but read-only operation allows TID exceeding 200 krad(Si). The failures are analyzed by type.
NASA Technical Reports Server (NTRS)
Gentry, Gregory J.; Reysa, Richard P.; Williams, Dave E.
2004-01-01
The International Space Station continues to build up its life support equipment capability. Several ECLS equipment failures have occurred since Lab activation in February 2001. Major problems occurring between February 2001 and February 2002 were discussed in other works. Major problems occurring between February 2002 and February 2003 are discussed in this paper, as are updates on previously ongoing unresolved problems. This paper addresses the failures and their root causes, with particular emphasis on likely micro-gravity causes. Impacts on overall station operations, as well as proposed and accomplished fixes, are also discussed.
Chest Wall Diseases: Respiratory Pathophysiology.
Tzelepis, George E
2018-06-01
The chest wall consists of various structures that function in an integrated fashion to ventilate the lungs. Disorders affecting the bony structures or soft tissues of the chest wall may impose elastic loads by stiffening the chest wall and decreasing respiratory system compliance. These alterations increase the work of breathing and lead to hypoventilation and hypercapnia. Respiratory failure may occur acutely or after a variable period of time. This review focuses on the pathophysiology of respiratory function in specific diseases and disorders of the chest wall, and highlights pathogenic mechanisms of respiratory failure. Copyright © 2018 Elsevier Inc. All rights reserved.
Real-time diagnostics of the reusable rocket engine using on-line system identification
NASA Technical Reports Server (NTRS)
Guo, T.-H.; Merrill, W.; Duyar, A.
1990-01-01
A model-based failure diagnosis system has been proposed for real-time diagnosis of SSME failures. Actuation, sensor, and system degradation failure modes are all considered by the proposed system. In the case of SSME actuation failures, it was shown that real-time identification can effectively be used for failure diagnosis purposes. It is a direct approach since it reduces the detection, isolation, and the estimation of the extent of the failures to the comparison of parameter values before and after the failure. As with any model-based failure detection system, the proposed approach requires a fault model that embodies the essential characteristics of the failure process. The proposed diagnosis approach has the added advantage that it can be used as part of an intelligent control system for failure accommodation purposes.
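A minimal sketch of the parameter-comparison idea described above, with invented parameter names and thresholds rather than the actual SSME model: a failure is detected and isolated by comparing on-line identified parameter values against their pre-failure baselines.

```python
# Hedged sketch of parameter-comparison FDI: flag a failure when an
# identified parameter drifts from its pre-failure value by more than a
# threshold. Parameter names and thresholds are illustrative assumptions.
nominal = {"pump_gain": 1.00, "valve_tau": 0.20}    # identified before failure
thresholds = {"pump_gain": 0.10, "valve_tau": 0.05}

def diagnose(identified):
    """Compare newly identified parameters against pre-failure values."""
    report = {}
    for name, ref in nominal.items():
        delta = identified[name] - ref
        if abs(delta) > thresholds[name]:
            # The sign and size of delta estimate the extent of the failure.
            report[name] = delta
    return report or None

print(diagnose({"pump_gain": 0.82, "valve_tau": 0.21}))
# {'pump_gain': -0.18} -> failure detected and isolated to the pump path
```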
Deranged Cardiac Metabolism and the Pathogenesis of Heart Failure
2016-01-01
Activation of the neuro-hormonal system is a pathophysiological consequence of heart failure. Neuro-hormonal activation promotes metabolic changes, such as insulin resistance, and determines an increased use of non-carbohydrate substrates for energy production. Fasting blood ketone bodies as well as fat oxidation are increased in patients with heart failure, yielding a state of metabolic inefficiency. The net result is additional depletion of myocardial adenosine triphosphate, phosphocreatine and creatine kinase levels, with further decreased efficiency of mechanical work. In this context, manipulation of cardiac energy metabolism by modification of substrate use by the failing heart has produced positive clinical results. The results of current research support the concept that shifting the energy substrate preference away from fatty acid metabolism and towards glucose metabolism could be an effective adjunctive treatment in patients with heart failure. The additional use of drugs able to partially inhibit fatty acid oxidation in patients with heart failure may therefore yield a significant protective effect on clinical symptoms and cardiac function, and simultaneously ameliorate left ventricular remodelling. Certainly, to clarify the exact therapeutic role of metabolic therapy in heart failure, a large multicentre, randomised controlled trial should be performed. PMID:28785448
1999-02-24
technology. Y2K related failures in business systems will generally cause an enterprise to lose partial or complete control of critical... generation systems may include steam turbines, diesel engines, or hydraulic turbines connected to alternators that generate... control centers used to manage sub-transmission and distribution systems. These systems are typically operated using a subset of an energy
Service Life Extension of the Propulsion System of Long-Term Manned Orbital Stations
NASA Technical Reports Server (NTRS)
Kamath, Ulhas; Kuznetsov, Sergei; Spencer, Victor
2014-01-01
One of the critical non-replaceable systems of a long-term manned orbital station is the propulsion system. Since the propulsion system operates beginning with the launch of station elements into orbit, its service life determines the service life of the station overall. Weighing almost a million pounds, the International Space Station (ISS) is about four times as large as the Russian space station Mir and about five times as large as the U.S. Skylab. Constructed over a span of more than a decade with the help of over 100 space flights, the elements and modules of the ISS provide more research space than any spacecraft ever built. Originally envisaged for a service life of fifteen years, this Earth-orbiting laboratory has been in orbit since 1998. Some elements launched later in the assembly sequence had not yet been built when the first elements were placed in orbit. Hence, some of the early modules that were launched at the inception of the program were already nearing the end of their design life when the ISS was finally ready and operational. To maximize the return on global investments in the ISS, it is essential for the valuable research on the ISS to continue as long as the station can be sustained safely in orbit. This paper describes the work performed to extend the service life of the ISS propulsion system. A system comprises many components with varying failure rates. Reliability of a system is the probability that it will perform its intended function under encountered operating conditions, for a specified period of time. As we are interested in how reliable a system will be in the future, reliability expressed as a function of time provides valuable insight. In a hypothetical bathtub-shaped failure rate curve, the failure rate, defined as the number of failures per unit time that a currently healthy component will suffer in a given future time interval, decreases during the infant-mortality period, stays nearly constant during the service life, and increases at the end, when the design service life ends and the wear-out phase begins. However, component failure rates do not remain constant over the entire cycle life. The failure rate depends on various factors such as design complexity, the current age of the component, operating conditions, and the severity of environmental stress factors. Development, qualification, and acceptance test processes provide rigorous screening of components to weed out imperfections that might otherwise cause infant-mortality failures. If sufficient samples are tested to failure, the failure time versus failure quantity can be analyzed statistically to develop a failure probability distribution function (PDF), a statistical model of the probability of failure versus time. Driven by cost and schedule constraints, however, spacecraft components are generally not tested in large numbers. Uncertainties in failure rate and remaining-life estimates increase when fewer units are tested. To account for this, spacecraft operators prefer to limit useful operations to a period shorter than the maximum demonstrated service life of the weakest component. Running each component to failure to determine the maximum possible service life of a system can become overly expensive and impractical. Spacecraft operators therefore specify the required service life and an acceptable factor of safety (FOS). The designers use these requirements to limit the life test duration.
Midway through the design life, when benefits justify additional investments, a supplementary life test may be performed to demonstrate the capability to safely extend the service life of the system. An innovative approach is required to evaluate the entire system without going through an elaborate test program of propulsion system elements. Evaluating every component through a brute-force test program would be a cost-prohibitive and time-consuming endeavor. The ISS propulsion system components were designed and built decades ago, and there are no representative ground test articles for some of the components; a 'test everything' approach would require manufacturing new test articles. The paper outlines some of the techniques used for selective testing, cherry-picking candidate components based on failure mode effects analysis, system-level impacts, hazard analysis, etc. The type of testing required for extending the service life depends on the design and criticality of the component, the failure modes and failure mechanisms, the life-cycle margin provided by the original certification, the operational and environmental stresses encountered, etc. When the specific failure mechanism being considered and the underlying relationship of that mode to the stresses applied in the test can be correlated by supporting analysis, the time and effort required for life extension testing can be significantly reduced. Exposure to corrosive propellants over long periods of time, for instance, leads to specific failure mechanisms in several components used in the propulsion system. Using the Arrhenius model, which is tied to chemically dependent failure mechanisms such as corrosion or chemical reactions, it is possible to subject carefully selected test articles to an accelerated life test. The Arrhenius model reflects the proportional relationship between the time to failure of a component and the exponential of the inverse of the absolute temperature acting on the component. The acceleration factor is used to perform tests at higher stresses, allowing direct correlation between the time to failure at a high test temperature and the time to failure at the temperatures expected in actual use. As long as the temperatures are such that new failure mechanisms are not introduced, this becomes a very useful method for testing to failure a relatively small sample of items in a much shorter amount of time. In this article, based on the example of the propulsion system of the first ISS module, Zarya, theoretical approaches and practical activities for extending the service life of the propulsion system are reviewed with the goal of determining the maximum duration of its safe operation.
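To make the Arrhenius relationship above concrete, the sketch below computes the standard acceleration factor AF = exp[(Ea/k)(1/T_use - 1/T_test)]; the activation energy and temperatures are illustrative assumptions, not values from the ISS program.

```python
# Hedged sketch of Arrhenius accelerated-life scaling. The activation
# energy and temperatures below are invented for illustration.
import math

K_BOLTZMANN = 8.617e-5          # Boltzmann constant, eV/K

def acceleration_factor(e_a, t_use_k, t_test_k):
    """AF = exp[(Ea/k) * (1/T_use - 1/T_test)], temperatures in kelvin."""
    return math.exp((e_a / K_BOLTZMANN) * (1.0 / t_use_k - 1.0 / t_test_k))

af = acceleration_factor(e_a=0.7, t_use_k=300.0, t_test_k=350.0)
print(f"AF = {af:.1f}")                                    # ~48x
print(f"2 years of service ~ {2 * 365 / af:.0f} days of testing")
```

Under these assumed numbers, raising the test temperature by 50 K compresses two years of exposure into roughly two weeks of testing, provided the higher temperature does not introduce new failure mechanisms.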
ERIC Educational Resources Information Center
Faubert, Brenton
2012-01-01
The purpose of this report is to review the body of literature concerned with reducing school failure by improving equity in schools and classrooms. The literature review will be used to inform the Organisation for Economic Cooperation and Development (OECD) Project "Overcoming School Failure: Policies that Work" and hopefully, future educational…
Contralateral Delay Activity Tracks Fluctuations in Working Memory Performance.
Adam, Kirsten C S; Robison, Matthew K; Vogel, Edward K
2018-01-08
Neural measures of working memory storage, such as the contralateral delay activity (CDA), are powerful tools in working memory research. CDA amplitude is sensitive to working memory load, reaches an asymptote at known behavioral limits, and predicts individual differences in capacity. An open question, however, is whether neural measures of load also track trial-by-trial fluctuations in performance. Here, we used a whole-report working memory task to test the relationship between CDA amplitude and working memory performance. If working memory failures are due to decision-based errors and retrieval failures, CDA amplitude would not differentiate good and poor performance trials when load is held constant. If failures arise during storage, then CDA amplitude should track both working memory load and trial-by-trial performance. As expected, CDA amplitude tracked load (Experiment 1), reaching an asymptote at three items. In Experiment 2, we tracked fluctuations in trial-by-trial performance. CDA amplitude was larger (more negative) for high-performance trials compared with low-performance trials, suggesting that fluctuations in performance were related to the successful storage of items. During working memory failures, participants oriented their attention to the correct side of the screen (lateralized P1) and maintained covert attention to the correct side during the delay period (lateralized alpha power suppression). Despite the preservation of attentional orienting, we found impairments consistent with an executive attention theory of individual differences in working memory capacity; fluctuations in executive control (indexed by pretrial frontal theta power) may be to blame for storage failures.
Common Cause Failures and Ultra Reliability
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2012-01-01
A common cause failure occurs when several failures have the same origin. Common cause failures are either common event failures, where the cause is a single external event, or common mode failures, where two systems fail in the same way for the same reason. Common mode failures can occur at different times because of a design defect or a repeated external event. Common event failures reduce the reliability of on-line redundant systems but not of systems using off-line spare parts. Common mode failures reduce the dependability of systems using off-line spare parts and on-line redundancy.
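The abstract's claim that common event failures erode on-line redundancy can be illustrated with the standard beta-factor model (a common textbook device, not one named in the abstract): a fraction beta of failures strikes both redundant units at once.

```python
# Illustrative beta-factor sketch: how a common-cause fraction beta
# erodes the benefit of on-line redundancy. Numbers are invented.
def pair_failure_prob(p, beta):
    """p: single-unit failure probability over the mission;
    beta: fraction of failures that are common-cause (hit both units)."""
    p_indep = (1.0 - beta) * p       # failures striking one unit only
    p_common = beta * p              # failures striking both units at once
    return p_indep ** 2 + p_common   # independent pair loss + common loss

print(pair_failure_prob(p=1e-3, beta=0.0))   # 1e-06: ideal redundancy
print(pair_failure_prob(p=1e-3, beta=0.1))   # ~1e-04: common cause dominates
```

Even a 10% common-cause fraction raises the pair's failure probability by two orders of magnitude over the ideal independent case.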
Identification of priorities for medication safety in neonatal intensive care.
Kunac, Desireé L; Reith, David M
2005-01-01
Although neonates are reported to be at greater risk of medication error than infants and older children, little is known about the causes and characteristics of error in this patient group. Failure mode and effects analysis (FMEA) is a technique used in industry to evaluate system safety and identify potential hazards in advance. The aim of this study was to identify and prioritize potential failures in the neonatal intensive care unit (NICU) medication use process through application of FMEA. Using the FMEA framework and a systems-based approach, an eight-member multidisciplinary panel worked as a team to create a flow diagram of the neonatal unit medication use process. Then by brainstorming, the panel identified all potential failures, their causes and their effects at each step in the process. Each panel member independently rated failures based on occurrence, severity and likelihood of detection to allow calculation of a risk priority score (RPS). The panel identified 72 failures, with 193 associated causes and effects. Vulnerabilities were found to be distributed across the entire process, but multiple failures and associated causes were possible when prescribing the medication and when preparing the drug for administration. The top ranking issue was a perceived lack of awareness of medication safety issues (RPS score 273), due to a lack of medication safety training. The next highest ranking issues were found to occur at the administration stage. Common potential failures related to errors in the dose, timing of administration, infusion pump settings and route of administration. Perceived causes were multiple, but were largely associated with unsafe systems for medication preparation and storage in the unit, variable staff skill level and lack of computerised technology. Interventions to decrease medication-related adverse events in the NICU should aim to increase staff awareness of medication safety issues and focus on medication administration processes.
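The risk priority scoring used by the panel follows the usual FMEA pattern of multiplying the three ratings; a toy illustration with invented failure modes and scores (not the study's data):

```python
# Hedged sketch of FMEA risk-priority scoring: occurrence, severity,
# and detectability are each rated 1-10 and multiplied; the product
# ranks the failure modes. Items and ratings below are invented.
failure_modes = [
    # (description,                  occurrence, severity, detectability)
    ("wrong infusion-pump setting",          6,        9,          5),
    ("dose calculation error",               5,        9,          4),
    ("medication given late",                7,        5,          3),
]

ranked = sorted(
    ((o * s * d, desc) for desc, o, s, d in failure_modes),
    reverse=True,
)
for rps, desc in ranked:
    print(f"RPS {rps:3d}  {desc}")
```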
Work-hardening behaviour of Mg single crystals oriented for basal slip
NASA Astrophysics Data System (ADS)
Bhattacharya, B.; Niewczas, M.
2011-06-01
Work-hardening behaviour of Mg single crystals oriented for basal slip was studied by means of tensile tests carried out at 4, 78 and 295 K. The crystals show critical resolved shear stress (CRSS) values for the {0001}⟨11-20⟩ basal slip system in the range 1-1.5 MPa. The samples exhibit two-stage work-hardening characteristics consisting of a long easy-glide stage and a stage of rapid hardening terminated by failure. The onset of plastic flow up to the point of fracture is accompanied by a low work-hardening rate in the range 5 × 10^-5 to 5 × 10^-4 µ, corresponding to the hardening rate in Stage I of copper single crystals. The analysis of thermally activated glide parameters suggests that forest interactions are rate-controlling processes. The very low value of the activation distance found at 4 K, ∼0.047b, is attributed to zero-point energy effects. The failure of the crystals occurs well before their hardening capacity is exhausted, by mechanisms which are characteristic of the deformation temperature.
Intelligent failure-tolerant control
NASA Technical Reports Server (NTRS)
Stengel, Robert F.
1991-01-01
An overview of failure-tolerant control is presented, beginning with robust control, progressing through parallel and analytical redundancy, and ending with rule-based systems and artificial neural networks. By design or implementation, failure-tolerant control systems are 'intelligent' systems. All failure-tolerant systems require some degrees of robustness to protect against catastrophic failure; failure tolerance often can be improved by adaptivity in decision-making and control, as well as by redundancy in measurement and actuation. Reliability, maintainability, and survivability can be enhanced by failure tolerance, although each objective poses different goals for control system design. Artificial intelligence concepts are helpful for integrating and codifying failure-tolerant control systems, not as alternatives but as adjuncts to conventional design methods.
Application of failure mode and effect analysis in an assisted reproduction technology laboratory.
Intra, Giulia; Alteri, Alessandra; Corti, Laura; Rabellotti, Elisa; Papaleo, Enrico; Restelli, Liliana; Biondo, Stefania; Garancini, Maria Paola; Candiani, Massimo; Viganò, Paola
2016-08-01
Assisted reproduction technology laboratories have a very high degree of complexity. Mismatches of gametes or embryos can occur, with catastrophic consequences for patients. To minimize the risk of error, a multi-institutional working group applied failure mode and effects analysis (FMEA) to each critical activity/step as a method of risk assessment. This analysis led to the identification of the potential failure modes, together with their causes and effects, using the risk priority number (RPN) scoring system. In total, 11 individual steps and 68 different potential failure modes were identified. The highest ranked failure modes, with an RPN score of 25, encompassed 17 failures and pertained to "patient mismatch" and "biological sample mismatch". The maximum reduction in risk, with the RPN reduced from 25 to 5, was mostly related to the introduction of witnessing. The RPN scores of the critical failure modes in sample processing were improved by 50% by focusing on staff training. Three indicators of FMEA success, based on technical skill, competence and traceability, were evaluated after FMEA implementation. Witnessing by a second human operator should be introduced in the laboratory to avoid sample mix-ups. These findings confirm that FMEA can effectively reduce errors in assisted reproduction technology laboratories. Copyright © 2016 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
The probability of containment failure by direct containment heating in Zion. Supplement 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilch, M.M.; Allen, M.D.; Stamps, D.W.
1994-12-01
Supplement 1 of NUREG/CR-6075 brings to closure the DCH issue for the Zion plant. It includes the documentation of the peer review process for NUREG/CR-6075, the assessments of four new splinter scenarios defined in working group meetings, and modeling enhancements recommended by the working groups. In the four new scenarios, consistency of the initial conditions has been implemented by using insights from systems-level codes. SCDAP/RELAP5 was used to analyze three short-term station blackout cases with different leak rates. In all three cases, the hot leg or surge line failed well before the lower head, and thus the primary system depressurized to a point where DCH was no longer considered a threat. However, these calculations were continued to lower head failure in order to gain insights that were useful in establishing the initial and boundary conditions. The most useful insights are that the RCS pressure is low at vessel breach, metallic blockages in the core region do not melt and relocate into the lower plenum, and melting of upper plenum steel is correlated with hot leg failure. The SCDAP/RELAP5 output was used as input to CONTAIN to assess the containment conditions at vessel breach. The containment-side conditions predicted by CONTAIN are similar to those originally specified in NUREG/CR-6075.
Fokkinga, Wietske A; Kreulen, Cees M; Vallittu, Pekka K; Creugers, Nico H J
2004-01-01
This study sought to aggregate literature data on in vitro failure loads and failure modes of prefabricated fiber-reinforced composite (FRC) post systems and to compare them to those of prefabricated metal, custom-cast, and ceramic post systems. The literature was searched using MEDLINE from 1984 to 2003 for dental articles in English. Keywords used were (post or core or buildup or dowel) and (teeth or tooth). Additional inclusion/exclusion steps were conducted, each step by two independent readers: (1) Abstracts describing post-and-core techniques to reconstruct endodontically treated teeth and their mechanical and physical characteristics were included (descriptive studies or reviews were excluded); (2) articles that included FRC post systems were selected; (3) in vitro studies, single-rooted human teeth, prefabricated FRC posts, and composite as the core material were the selection criteria; and (4) failure loads and modes were extracted from the selected papers, and failure modes were dichotomized (distinction was made between "favorable failures," defined as reparable failures, and "unfavorable failures," defined as irreparable [root] fractures). The literature search revealed 1,984 abstracts. Included were 244, 42, and 12 articles in the first, second, and third selection steps, respectively. Custom-cast post systems showed higher failure loads than prefabricated FRC post systems, whereas ceramic showed lower failure loads. Significantly more favorable failures occurred with prefabricated FRC post systems than with prefabricated and custom-cast metal post systems. The variable "post system" had a significant effect on mean failure loads. FRC post systems more frequently showed favorable failure modes than did metal post systems.
Work-related fatalities among youth ages 11-17 in North Carolina, 1990-2008.
Rauscher, Kimberly J; Runyan, Carol W; Radisch, Deborah
2011-02-01
Local and national surveillance systems are in place that identify occupational deaths. However, due to certain restrictions, they are limited in their ability to accurately count these deaths among adolescent workers. In this population-based study, we relied on primary data from the North Carolina medical examiner system to identify and describe all work-related fatalities among North Carolina youth under age 18 between 1990 and 2008. We identified 31 work-related deaths among youth ages 11-17. The majority occurred between 1990 and 1999. Most occurred in construction and agriculture. Vehicles and guns were responsible for the majority of deaths. Although the prevalence of adolescent work-related fatalities has seen a decline in North Carolina, the 31 deaths we detected signal a failure of the systems in place to prevent young worker fatalities. More remains to be done to protect the lives of adolescent workers. Copyright © 2010 Wiley-Liss, Inc.
Inductive Learning Approaches for Improving Pilot Awareness of Aircraft Faults
NASA Technical Reports Server (NTRS)
Spikovska, Lilly; Iverson, David L.; Poll, Scott; Pryor, Anna
2005-01-01
Neural network flight controllers are able to accommodate a variety of aircraft control surface faults without detectable degradation of aircraft handling qualities. Under some faults, however, the effective flight envelope is reduced; this can lead to unexpected behavior if a pilot performs an action that exceeds the remaining control authority of the damaged aircraft. The goal of our work is to increase the pilot's situational awareness by informing him of the type of damage and the resulting reduction in flight envelope. Our methodology integrates two inductive learning systems with novel visualization techniques. One learning system, the Inductive Monitoring System (IMS), learns to detect when a simulation includes faulty controls, while two others, the Inductive Classification System (INCLASS) and a multiple binary decision tree system (utilizing C4.5), determine the type of fault. In off-line training using only non-failure data, IMS constructs a characterization of nominal flight control performance based on control signals issued by the neural net flight controller. This characterization can be used to determine the degree of control augmentation required in the pitch, roll, and yaw command channels to counteract control surface failures. This derived information is typically sufficient to distinguish between the various control surface failures and is used to train both INCLASS and C4.5. Using data from failed control surface flight simulations, INCLASS and C4.5 independently discover and amplify features in IMS results that can be used to differentiate each distinct control surface failure situation. In real-time flight simulations, distinguishing features learned during training are used to classify control surface failures. Knowledge about the type of failure can be used by an additional automated system to alter its approach for planning tactical and strategic maneuvers. The knowledge can also be used directly to increase the pilot's situational awareness and inform manual maneuver decisions. Our multi-modal display of this information provides speech output to issue control surface failure warnings to a lesser-used communication channel and provides graphical displays with pilot-selectable levels of detail to issue additional information about the failure. We also describe a potential presentation for flight envelope reduction that can be viewed separately or integrated with an existing attitude indicator instrument. Preliminary results suggest that the inductive approach is capable of detecting that a control surface has failed and determining the type of fault. Furthermore, preliminary evaluations suggest that the interface discloses a concise summary of this information to the pilot.
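A highly simplified sketch of the nominal-characterization idea behind IMS as described above (the real system clusters nominal data; this toy version only learns per-channel bounds, and all channel names and values are invented, not NASA's code):

```python
# Hedged sketch: learn per-channel min/max bounds from nominal control
# signals, then score new vectors by how far they fall outside the
# learned envelope. A nonzero deviation would be handed to a classifier.
def learn_envelope(nominal_vectors):
    """Per-channel [min, max] bounds over the nominal training data."""
    lo = [min(col) for col in zip(*nominal_vectors)]
    hi = [max(col) for col in zip(*nominal_vectors)]
    return lo, hi

def deviation(vector, envelope):
    """Sum of per-channel distances outside the envelope (0.0 = nominal)."""
    lo, hi = envelope
    return sum(max(l - x, 0.0, x - h) for x, l, h in zip(vector, lo, hi))

# Channels: pitch, roll, yaw command augmentation (illustrative values).
nominal = [(0.1, 0.0, 0.0), (0.2, 0.1, -0.1), (0.0, -0.1, 0.1)]
env = learn_envelope(nominal)
print(deviation((0.15, 0.0, 0.0), env))   # 0.0 -> nominal behavior
print(deviation((0.9, 0.0, 0.0), env))    # 0.7 -> flag for classification
```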
Ultrawideband Electromagnetic Interference to Aircraft Radios
NASA Technical Reports Server (NTRS)
Ely, Jay J.; Fuller, Gerald L.; Shaver, Timothy W.
2002-01-01
A very recent FCC Final Rule now permits marketing and operation of new products that incorporate Ultrawideband (UWB) technology into handheld devices. Wireless product developers are working to rapidly bring this versatile, powerful and expectedly inexpensive technology into numerous consumer wireless devices. Past studies addressing the potential for passenger-carried portable electronic devices (PEDs) to interfere with aircraft electronic systems suggest that UWB transmitters may pose a significant threat to aircraft communication and navigation radio receivers. NASA, United Airlines and Eagles Wings Incorporated have performed preliminary testing that clearly shows the potential for handheld UWB transmitters to cause cockpit failure indications for the air traffic control radio beacon system (ATCRBS), blanking of aircraft on the traffic alert and collision avoidance system (TCAS) displays, and erratic motion and failure of instrument landing system (ILS) localizer and glideslope pointers on the pilot horizontal situation and attitude director displays. This paper provides details of the preliminary testing and recommends further assessment of aircraft systems for susceptibility to UWB electromagnetic interference.
The Hamiltonian and Lagrangian approaches to the dynamics of nonholonomic systems
NASA Astrophysics Data System (ADS)
Koon, Wang Sang; Marsden, Jerrold E.
1997-08-01
This paper compares the Hamiltonian approach to systems with nonholonomic constraints (see [31, 2, 4, 29] and references therein) with the Lagrangian approach (see [16, 27, 9]). There are many differences in the approaches and each has its own advantages; some structures have been discovered on one side and their analogues on the other side are interesting to clarify. For example, the momentum equation and the reconstruction equation were first found on the Lagrangian side and are useful for the control theory of these systems, while the failure of the reduced two-form to be closed (i.e., the failure of the Poisson bracket to satisfy the Jacobi identity) was first noticed on the Hamiltonian side. Clarifying the relation between these approaches is important for the future development of the control theory and stability and bifurcation theory for such systems. In addition to this work, we treat, in this unified framework, a simplified model of the bicycle (see [12, 13]), which is an important underactuated (nonminimum phase) control system.
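The "failure of the reduced two-form to be closed" mentioned above corresponds to a nonvanishing Jacobiator for the almost-Poisson bracket of the nonholonomic system; written out (a standard statement, not a formula quoted from this paper):

```latex
% Jacobiator of the almost-Poisson bracket of a nonholonomic system:
J(f,g,h) = \{f,\{g,h\}\} + \{g,\{h,f\}\} + \{h,\{f,g\}\} \neq 0
```

For an unconstrained Hamiltonian system the Jacobiator vanishes identically for all smooth f, g, h; its nonvanishing here is exactly the failure of the Jacobi identity noticed on the Hamiltonian side.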
Nuclear Safety for Space Systems
NASA Astrophysics Data System (ADS)
Offiong, Etim
2010-09-01
It is trite, albeit a truism, to say that nuclear power can provide the propulsion thrust needed to launch space vehicles and also the electricity for powering on-board systems, especially for missions to the Moon, Mars and other deep space destinations. Nuclear Power Sources (NPSs) are known to provide more capability than solar power, fuel cells and conventional chemical means. The worry has always been that of safety. The earliest superpowers (the US and the former Soviet Union) designed and launched several nuclear-powered systems, with some failures. Nuclear failures and accidents, however small their number, can be far-reaching geographically and catastrophic to humans and the environment. Building on the numerous research works on nuclear power on Earth and in space, this paper seeks to bring to bear issues relating to the safety of space systems - spacecraft, astronauts, the Earth environment and extraterrestrial habitats - in the use and application of nuclear power sources. It also introduces a new formal training course in Space Systems Safety.
ERIC Educational Resources Information Center
Kirst, Michael W.
The slump experienced by many high school seniors stems in part from the failure of the K-12 school system and colleges and universities to provide incentives for high school seniors to work hard. Senior slump appears to be the rational response of students to some disjunctions between the K-12 and postsecondary systems, including a lack of…
NASA Astrophysics Data System (ADS)
Belmonte, D.; Vedova, M. D. L. Dalla; Ferro, C.; Maggiore, P.
2017-06-01
The proposal of prognostic algorithms able to identify precursors of incipient failures of primary flight command electromechanical actuators (EMAs) is beneficial for anticipating the incoming failure: an early and correct interpretation of the failure degradation pattern can trigger an early alert to the maintenance crew, who can properly schedule the servomechanism replacement. An innovative prognostic model-based approach, able to recognize progressive EMA degradations before the anomalous behavior becomes critical, is proposed: the Fault Detection and Identification (FDI) of the considered incipient failures is performed by analyzing proper system operational parameters, able to reveal the corresponding degradation path, by means of a numerical algorithm based on spectral analysis techniques. Subsequently, these operational parameters are correlated with the actual EMA health condition by means of failure maps created by a reference monitoring model-based algorithm. In this work, the proposed method has been tested on an EMA affected by combined progressive failures: in particular, a partial stator single-phase turn-to-turn short circuit and rotor static eccentricity are considered. In order to evaluate the prognostic method, a numerical test bench has been conceived. Results show that the method exhibits adequate robustness and a high degree of confidence in its ability to identify an eventual malfunction early, minimizing the risk of false alarms or unannounced failures.
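A hedged sketch of the spectral-analysis step described above: extract the amplitude of a fault-sensitive harmonic from a motor-current signal and compare it against healthy behavior. The sampling rate, fault frequency, threshold, and synthetic signal are all illustrative assumptions, not the authors' model.

```python
# Toy spectral FDI: an incipient fault is assumed to excite a known
# harmonic; its amplitude is tracked as a degradation indicator.
import numpy as np

FS = 10_000.0                  # sampling rate, Hz (assumed)
F_FAULT = 250.0                # harmonic excited by the assumed fault, Hz

def harmonic_amplitude(signal, fs, f0):
    """Amplitude of the spectral component nearest f0."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return 2.0 * spectrum[np.argmin(np.abs(freqs - f0))]

t = np.arange(0, 1.0, 1.0 / FS)
healthy = np.sin(2 * np.pi * 50 * t)                        # 50 Hz supply
faulty = healthy + 0.05 * np.sin(2 * np.pi * F_FAULT * t)   # incipient fault

for name, sig in (("healthy", healthy), ("faulty", faulty)):
    amp = harmonic_amplitude(sig, FS, F_FAULT)
    print(name, f"{amp:.3f}", "ALERT" if amp > 0.02 else "ok")
```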
Liu, Zengkai; Liu, Yonghong; Cai, Baoping
2014-01-01
Reliability analysis of the electrical control system of a subsea blowout preventer (BOP) stack is carried out based on the Markov method. For the subsea BOP electrical control system used in the current work, the 3-2-1-0 and 3-2-0 input voting schemes are available. The effects of the voting schemes on system performance are evaluated based on Markov models. In addition, the effects of the failure rates of the modules and the repair time on system reliability indices are also investigated. PMID:25409010
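The simplest instance of the Markov approach above is a single repairable module with two states (up, down), failure rate lambda and repair rate mu; the rates below are invented for illustration, not the paper's data.

```python
# Minimal two-state Markov sketch: steady-state availability follows
# from the balance equation lam * pi_up = mu * pi_down.
lam = 1e-4     # failures per hour (assumed)
mu = 1e-1      # repairs per hour (assumed, ~10 h mean repair time)

availability = mu / (lam + mu)          # steady-state probability of "up"
mttf = 1.0 / lam                        # mean time to failure, hours
print(f"steady-state availability = {availability:.6f}")
print(f"MTTF = {mttf:.0f} h")
```

Larger models of the kind used in the paper chain many such states (voting configurations, degraded modes) and solve the resulting system of balance equations, but the mechanics are the same.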
The Management and Security Expert (MASE)
NASA Technical Reports Server (NTRS)
Miller, Mark D.; Barr, Stanley J.; Gryphon, Coranth D.; Keegan, Jeff; Kniker, Catherine A.; Krolak, Patrick D.
1991-01-01
The Management and Security Expert (MASE) is a distributed expert system that monitors the operating systems and applications of a network. It is capable of gleaning the information provided by the different operating systems in order to optimize hardware and software performance; recognize potential hardware and/or software failures, and either repair the problem before it becomes an emergency or notify the systems manager of the problem; and monitor applications and known security holes for indications of an intruder or virus. MASE can eradicate much of the guesswork of system management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mossahebi, S; Feigenberg, S; Nichols, E
Purpose: GammaPod™, the first stereotactic radiotherapy device for early stage breast cancer treatment, has been recently installed and commissioned at our institution. A multidisciplinary working group applied the failure mode and effects analysis (FMEA) approach to perform a risk analysis. Methods: FMEA was applied to the GammaPod™ treatment process by: 1) generating process maps for each stage of treatment; 2) identifying potential failure modes and outlining their causes and effects; 3) scoring the potential failure modes using the risk priority number (RPN) system based on the product of severity, frequency of occurrence, and detectability (each ranging 1-10). An RPN higher than 150 was set as the threshold for minimal concern of risk. For these high-risk failure modes, potential quality assurance procedures and risk control techniques have been proposed. A new set of severity, occurrence, and detectability values was re-assessed in the presence of the suggested mitigation strategies. Results: In the single-day image-and-treat workflow, 19, 22, and 27 sub-processes were identified for the stages of simulation, treatment planning, and delivery, respectively. During the simulation stage, 38 potential failure modes were found and scored, in terms of RPN, in the range of 9-392. 34 potential failure modes were analyzed in treatment planning, with a score range of 16-200. For the treatment delivery stage, 47 potential failure modes were found, with an RPN score range of 16-392. The most critical failure modes consisted of breast-cup pressure loss and incorrect target localization due to patient upper-body alignment inaccuracies. The final RPN scores of these failure modes, based on the recommended actions, were assessed to be below 150. Conclusion: The FMEA risk analysis technique was applied to the treatment process of GammaPod™, a new stereotactic radiotherapy technology. Application of systematic risk analysis methods is projected to lead to improved quality of GammaPod™ treatments. Ying Niu and Cedric Yu are affiliated with Xcision Medical Systems.
Reliability demonstration test for load-sharing systems with exponential and Weibull components.
Xu, Jianyu; Hu, Qingpei; Yu, Dan; Xie, Min
2017-01-01
Conducting a Reliability Demonstration Test (RDT) is a crucial step in production. Products are tested under certain schemes to demonstrate whether their reliability indices reach pre-specified thresholds. Test schemes for RDT have been studied in different situations, e.g., lifetime testing, degradation testing and accelerated testing. Systems designed with several structures are also investigated in many RDT plans. Despite the availability of a range of test plans for different systems, RDT planning for load-sharing systems hasn't yet received the attention it deserves. In this paper, we propose a demonstration method for two specific types of load-sharing systems with components subject to two distributions: exponential and Weibull. Based on the assumptions and interpretations made in several previous works on such load-sharing systems, we set the mean time to failure (MTTF) of the total system as the demonstration target. We represent the MTTF as a summation of mean time between successive component failures. Next, we introduce generalized test statistics for both the underlying distributions. Finally, RDT plans for the two types of systems are established on the basis of these test statistics.
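A hedged sketch of the MTTF decomposition described above for a simple load-sharing system with exponential components: while i components survive they share the load equally, each seeing a load-dependent rate, and the system MTTF is the sum of the mean times between successive failures. The load dependence below is an invented assumption, not the paper's model.

```python
# Toy load-sharing MTTF: the system fails when the last component fails;
# each stage with i survivors contributes mean time 1 / (i * lam(i)).
def stage_rate(i, total_load=3.0, lam0=1e-3):
    """Per-component failure rate when i components share the load
    (assumed linear in per-component load)."""
    return lam0 * (total_load / i)

def system_mttf(n):
    return sum(1.0 / (i * stage_rate(i)) for i in range(n, 0, -1))

print(f"MTTF of a 3-component load-sharing system: {system_mttf(3):.0f} h")
```

With this linear load assumption every stage sees the same total rate (i cancels), so each of the three stages contributes the same mean time; other load laws break that symmetry.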
Tatsumi, Eisuke; Nakatani, Takeshi; Imachi, Kou; Umezu, Mitsuo; Kyo, Shun-Ei; Sase, Kazuhiro; Takatani, Setsuo; Matsuda, Hikaru
2007-01-01
A series of guidelines for development and assessment of next-generation medical devices has been drafted under an interagency collaborative project by the Ministry of Health, Labor and Welfare and the Ministry of Economy, Trade and Industry. The working group for assessment guidelines of next-generation artificial hearts reviewed the trend in the prevalence of heart failure and examined the potential usefulness of such devices in Japan and in other countries as a fundamental part of the process of establishing appropriate guidelines. At present, more than 23 million people suffer from heart failure in developed countries, including Japan. Although Japan currently has the lowest mortality from heart failure among those countries, the number of patients is gradually increasing as our lifestyle becomes more Westernized; the associated medical expenses are rapidly growing. The number of heart transplantations, however, is limited due to the overwhelming shortage of donor hearts, not only in Japan but worldwide. Meanwhile, clinical studies and surveys have revealed that the major causes of death in patients undergoing long-term use of ventricular assist devices (VADs) were infection, thrombosis, and mechanical failure, all of which are typical of VADs. It is therefore of urgent and universal necessity to develop next-generation artificial hearts that have excellent durability to provide at least 2 years of event-free operation with a superior quality of life and that can be used for destination therapy to save patients with irreversible heart failure. It is also very important to ensure that an environment that facilitates the development, testing, and approval evaluation processes of next-generation artificial hearts be established as soon as possible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Paul Allan
2016-02-28
We investigate dynamic wave-triggered slip under laboratory shear conditions. The experiment is composed of a three-block system containing two gouge layers composed of glass beads and held in place by a fixed load in a biaxial configuration. When the system is sheared under steady state conditions at a normal load of 4 MPa, we find that shear failure may be instantaneously triggered by a dynamic wave, corresponding to material weakening and softening, if the system is in a critical shear stress state (near failure). Following triggering, the gouge material remains in a perturbed state over multiple slip cycles, as evidenced by the recovery of the material strength, shear modulus, and slip recurrence time. This work suggests that faults must be critically stressed to trigger under dynamic conditions and that the recovery process following a dynamically triggered event differs from the recovery following a spontaneous event.
Trends and problems in development of the power plants electrical part
NASA Astrophysics Data System (ADS)
Gusev, Yu. P.
2015-03-01
The article discusses some problems relating to the development of the electrical part of modern nuclear and thermal power plants, which stem from the use of new process and electrical equipment, such as gas turbine units, power converters, and intelligent microprocessor devices in relay protection and automated control systems. It is pointed out that the failure rates of electrical equipment at Russian and foreign power plants tend to increase. The ongoing power plant technical refitting and innovative development processes generate the need to significantly widen the scope of research work on the electrical part of power plants and to render scientific support to efforts to put innovative equipment into use. It is indicated that one of the main factors causing the growth of electrical equipment failures is that some components of this equipment have insufficiently compatible dynamic characteristics. This, in turn, may be due to the lack or obsolescence of regulatory documents specifying the requirements for design solutions and operation of electric power equipment that incorporates electronic and microprocessor control and protection devices. It is proposed to restore the system, which existed in the 1970s, of developing new and updating existing departmental regulatory technical documents, one of the fundamental principles of which was placing long-term responsibility on higher schools and leading design institutions for rendering scientific-technical support to the innovative development of the components and systems forming the electrical part of power plants. This will make it possible to achieve lower failure rates of electrical equipment and to steadily improve the competitiveness of the Russian electric power industry and the energy efficiency of generating companies.
Coelli, Fernando C; Almeida, Renan M V R; Pereira, Wagner C A
2010-12-01
This work develops a cost estimation for a mammography clinic, taking into account resource utilization and equipment failure rates. Two standard clinic models were simulated, the first with one mammography unit, two technicians and one doctor, and the second (based on an actually functioning clinic) with two units, three technicians and one doctor. Cost data and model parameters were obtained by direct measurements, literature reviews and other hospital data. A discrete-event simulation model was developed in order to estimate the unit cost (total costs/number of examinations in a defined period) of mammography examinations at those clinics. The cost analysis considered simulated changes in resource utilization rates and in examination failure probabilities (failures of the image acquisition system). In addition, a sensitivity analysis was performed, taking into account changes in the probabilities of equipment failure types. For the two clinic configurations, the estimated mammography unit costs were, respectively, US$ 41.31 and US$ 53.46 in the absence of examination failures. As examination failures increased up to 10% of total examinations, unit costs approached US$ 54.53 and US$ 53.95, respectively. The sensitivity analysis showed that increases in type 3 (the most serious) failures had a very large impact on patient attendance, up to the point of actually making attendance unfeasible. Discrete-event simulation allowed for the definition of the more efficient clinic, contingent on the expected prevalence of resource utilization and equipment failures. © 2010 Blackwell Publishing Ltd.
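A rough sketch of the unit-cost mechanism described above: each examination may fail (consuming time without producing a billable result), and the unit cost is the fixed cost divided by completed examinations. All numbers are invented; the paper's model is a full discrete-event simulation of resources and queues.

```python
# Toy Monte Carlo of unit cost under examination failures.
import random

def unit_cost(daily_cost=2000.0, exam_minutes=20.0, hours=8.0,
              p_failure=0.05, seed=1):
    rng = random.Random(seed)
    minutes_left = hours * 60.0
    completed = 0
    while minutes_left >= exam_minutes:
        minutes_left -= exam_minutes
        if rng.random() >= p_failure:     # failed exams consume time only
            completed += 1
    return daily_cost / completed if completed else float("inf")

print(f"no failures : US$ {unit_cost(p_failure=0.0):.2f}")
print(f"5% failures : US$ {unit_cost(p_failure=0.05):.2f}")
```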
Reliability and availability evaluation of Wireless Sensor Networks for industrial applications.
Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco
2012-01-01
Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach enabling the evaluation of permanent faults prevents system designers from optimizing decisions that minimize these occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements.
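A hedged sketch of the fault-tree evaluation idea described above: device failure probabilities combine through OR gates (any input failure brings the branch down) and AND gates (redundant devices must all fail). The topology and probabilities below are invented, not the paper's case studies.

```python
# Toy fault-tree gates, assuming independent device failures.
def gate_or(*p):          # branch fails if ANY input fails
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def gate_and(*p):         # branch fails only if ALL inputs fail
    q = 1.0
    for pi in p:
        q *= pi
    return q

p_sensor, p_router, p_gateway = 0.02, 0.01, 0.005
# Two redundant routers (AND) in series with a sensor and a gateway (OR):
p_system = gate_or(p_sensor, gate_and(p_router, p_router), p_gateway)
print(f"system failure probability = {p_system:.4f}")   # ~0.0250
```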
WSEAT Shock Testing Margin Assessment Using Energy Spectra Final Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisemore, Carl; Babuska, Vit; Booher, Jason
Several programs at Sandia National Laboratories have adopted energy spectra as a metric to relate the severity of mechanical insults to structural capacity, the purpose being to gain insight into the system's capability and reliability, and to quantify the ultimate margin between the normal operating envelope and the likely system failure point -- a system margin assessment. The fundamental concern with the use of energy metrics was that the applicability domain and implementation details were not completely defined for many problems of interest. The goal of this WSEAT project was to examine that domain of applicability and work out the necessary implementation details, and to provide experimental validation for the energy-spectra-based methods in the context of margin assessment as they relate to shock environments. The extensive test results showed that failure predictions using energy methods did not agree with failure predictions using S-N data. As a result, a modification to the energy methods was developed, following the form of Basquin's equation, to incorporate the power-law exponent for fatigue damage. This update to the energy-based framework brings the energy-based metrics into agreement with experimental data and historical S-N data.
Spacecraft Parachute Recovery System Testing from a Failure Rate Perspective
NASA Technical Reports Server (NTRS)
Stewart, Christine E.
2013-01-01
Spacecraft parachute recovery systems, especially those with a parachute cluster, require testing to identify and reduce failures. This is especially important when the spacecraft in question is human-rated. Due to the recent effort to make spaceflight affordable, the importance of determining a minimum testing requirement has increased. The number of tests required to achieve a mature design, with a relatively constant failure rate, can be estimated from a review of previous complex spacecraft recovery systems. Examination of the Apollo parachute testing and the Shuttle Solid Rocket Booster recovery chute system operation will clarify at which point in those programs the system reached maturity. This examination will also clarify the risks inherent in not performing a sufficient number of tests prior to operation with humans on board. When looking at complex parachute systems used in spaceflight landing systems, a pattern emerges regarding the minimum amount of testing required to wring out the failure modes and reduce the failure rate of the parachute system to a level acceptable for human spaceflight. Not only sufficient system-level testing, but also the ability to update the design as failure modes are found, is required to drive the failure rate of the system down to an acceptable level. In addition, sufficient data and images are necessary to identify incipient failure modes or to identify failure causes when a system failure occurs. In order to demonstrate the need for sufficient system-level testing to reach an acceptable failure rate, the Apollo Earth Landing System (ELS) test program and the Shuttle Solid Rocket Booster Recovery System failure history will be examined, and some experiences from the Orion Capsule Parachute Assembly System will be noted.
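The question of a minimum number of tests can be illustrated with standard success-run statistics (a textbook device, not the paper's method): with zero failures in n tests, a reliability R is demonstrated at confidence C when R^n <= 1 - C.

```python
# Illustrative zero-failure demonstration sizing, assuming an unchanged
# design between tests. Targets and confidence level are invented.
import math

def tests_required(reliability, confidence):
    """Zero-failure demonstration: R**n <= 1 - C  =>  n >= ln(1-C)/ln(R)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

for r in (0.90, 0.95, 0.99):
    print(f"R={r:.2f} at 90% confidence: {tests_required(r, 0.90)} tests")
# R=0.90 -> 22 tests; R=0.95 -> 45; R=0.99 -> 230
```

The steep growth in required tests at higher reliability targets is one way to see why design updates between tests, rather than raw test count alone, drive the failure rate down in practice.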
Use of a Modern Polymerization Pilot-Plant for Undergraduate Control Projects.
ERIC Educational Resources Information Center
Mendoza-Bustos, S. A.; And Others
1991-01-01
Described is a project where students gain experience in handling large volumes of hazardous materials, process start up and shut down, equipment failures, operational variations, scaling up, equipment cleaning, and run-time scheduling while working in a modern pilot plant. Included are the system design, experimental procedures, and results. (KR)
Young People and Prostitution: An End to the Beginning?
ERIC Educational Resources Information Center
Ayre, Patrick; Barrett, David
2000-01-01
Examines some reasons for the failure to protect young people in England and Wales from sexual abuse inherent in prostitution. Identifies characteristics of the child protection system which fit poorly for work with these youth. Argues that lasting improvement of these children's well-being depends on the creation of "joined-up,"…
The Nature of Teacher Interactions at a High Achieving, High-Risk Urban Middle School
ERIC Educational Resources Information Center
Heaton, Charles Richard
2010-01-01
Increasingly, teachers are asked to work both systemically and systematically in addressing school performance and student failure. Structuring school communities for teacher collaboration is one area where educators have found common ground; the present and unprecedented need is reflected in the policy statements of the leading education…
Understanding Teamwork in Trauma Resuscitation through Analysis of Team Errors
ERIC Educational Resources Information Center
Sarcevic, Aleksandra
2009-01-01
An analysis of human errors in complex work settings can lead to important insights into the workspace design. This type of analysis is particularly relevant to safety-critical, socio-technical systems that are highly dynamic, stressful and time-constrained, and where failures can result in catastrophic societal, economic or environmental…
State Energy Resilience Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, J.; Finster, M.; Pillon, J.
2016-12-01
The energy sector infrastructure's high degree of interconnectedness with other critical infrastructure systems can lead to cascading and escalating failures that can strongly affect both economic and social activities. The operational goal is to maintain energy availability for customers and consumers. For this body of work, a State Energy Resilience Framework in five steps is proposed.
From Dissemination to Propagation: A New Paradigm for Education Developers
ERIC Educational Resources Information Center
Froyd, Jeffrey E.; Henderson, Charles; Cole, Renée S.; Friedrichsen, Debra; Khatri, Raina; Stanford, Courtney
2017-01-01
Scholarly studies and national reports document failure of current efforts to achieve broad, sustained adoption of research-based instructional practices, despite compelling bodies of evidence supporting efficacy of many of these practices. The authors of this paper argue that many change agents who are working to promote systemic adoption of…
Goldstein, N
1983-12-01
In the midst of the critical struggle over the failures of rehabilitation and the impotency of the prison system, the role of the psychiatrist in the prison has become increasingly unclear. This article presents a persuasive argument for working in prisons and discusses ethical considerations, treatment approaches, and the special problems and challenges provided by prison psychiatry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, D. I.; Han, S. H.
A PSA analyst has traditionally determined fire-induced component failure modes manually and modeled them into the PSA logic. These can be difficult and time-consuming tasks, as they require much information and many events must be modeled. KAERI has been developing the IPRO-ZONE (interface program for constructing a zone effect table) to facilitate fire PSA work in identifying and modeling fire-induced component failure modes, and to construct a one-top fire event PSA model. With the output of the IPRO-ZONE, the AIMS-PSA, and an internal-event one-top PSA model, a one-top fire event PSA model is automatically constructed. The outputs of the IPRO-ZONE include information on fire zones/fire scenarios, fire propagation areas, equipment failure modes affected by a fire, internal PSA basic events corresponding to fire-induced equipment failure modes, and fire events to be modeled. This paper introduces the IPRO-ZONE and its application to the fire PSA of Ulchin Unit 3 and SMART (System-integrated Modular Advanced Reactor). (authors)
Failure analysis of energy storage spring in automobile composite brake chamber
NASA Astrophysics Data System (ADS)
Luo, Zai; Wei, Qing; Hu, Xiaofeng
2015-02-01
This paper takes the energy-storage spring of the parking brake cavity, part of an automobile composite brake chamber, as its research object. A fault tree model of energy-storage spring faults causing parking brake failure was constructed using the fault tree analysis method. Next, the parking brake failure model of the energy-storage spring was established by analyzing the working principle of the composite brake chamber. Finally, working-load and push-rod-stroke data measured on a comprehensive valve test bed were used to validate the failure model. The experimental results show that the failure model can distinguish whether the energy-storage spring has failed.
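For independent basic events, a fault tree of the kind described reduces to simple AND/OR gate arithmetic. The sketch below is a minimal illustration; the event names and probabilities are hypothetical and are not taken from the paper.

```python
def p_or(*probs):
    """OR gate, independent events: 1 - product of complements."""
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

def p_and(*probs):
    """AND gate, independent events: product of probabilities."""
    out = 1.0
    for p in probs:
        out *= p
    return out

# Hypothetical basic-event probabilities per braking demand
p_fracture   = 1e-4    # spring fracture
p_relaxation = 5e-4    # loss of preload through stress relaxation
p_corrosion  = 2e-4    # corrosion-induced weakening
p_seal_leak  = 1e-3    # chamber seal leak

# Illustrative top event: spring degraded (any mechanism) AND the
# chamber cannot hold pressure
top = p_and(p_or(p_fracture, p_relaxation, p_corrosion), p_seal_leak)
print(f"P(parking brake failure) = {top:.2e}")
```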
Requirement Generation for Space Infrastructure Systems
NASA Astrophysics Data System (ADS)
Hempsell, M.
Despite heavy investment, the half-century between 1970 and 2020 will have seen almost no progress in the capability provided by the space infrastructure. It is argued that this is due to a failure during the requirement generation phase of the infrastructure's elements, a failure that stems primarily from following the accepted good practice of involving stakeholders while establishing a mission-based set of technical requirements. This argument is supported both by a consideration of the history of the requirement generation phase of past space infrastructure projects, in particular the Space Shuttle, and by an analysis of the interactions of the stakeholders during this phase. Traditional stakeholder involvement only works well in mature infrastructures where investment aims to make minor improvements, whereas space activity is still in its early experimental stages and is open to major new initiatives that aim to radically change the way we work in space. A new approach to requirement generation is proposed, which is more appropriate to these current circumstances. It uses a methodology centred on the basic functions the system is intended to perform rather than its expected missions.
Designing an efficient LT-code with unequal error protection for image transmission
NASA Astrophysics Data System (ADS)
S. Marques, F.; Schwartz, C.; Pinho, M. S.; Finamore, W. A.
2015-10-01
The use of images from earth observation satellites spans applications such as car navigation systems and disaster monitoring. In general, those images are captured by on-board imaging devices and must be transmitted to the Earth using a communication system. Even though a high resolution image can produce a better Quality of Service, it leads to transmitters with high bit rates, which require a large bandwidth and expend a large amount of energy. Therefore, it is very important to design efficient communication systems. From communication theory, it is well known that a source encoder is crucial in an efficient system. In remote sensing satellite image transmission, this efficiency is achieved by using an image compressor to reduce the amount of data which must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, establishes a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel, the transmitted signal is corrupted by the presence of noise, interference signals, etc. Therefore, the receiver of a digital communication system may fail to recover the transmitted bit. A channel code can be used to reduce the effect of this failure. In 2002, the Luby Transform code (LT-code) was introduced and shown to be very efficient under the binary erasure channel model. Since the effect of a bit recovery failure depends on the position of the bit in the compressed image stream, many efforts have been made over the last decade to develop LT-codes with unequal error protection. In 2012, Arslan et al. showed improvements when LT-codes with unequal error protection were used on images compressed by the SPIHT algorithm. The techniques presented by Arslan et al. can be adapted to work with the image compression algorithm recommended by CCSDS. In fact, to design an LT-code with unequal error protection, the bit stream produced by the CCSDS-recommended algorithm must be partitioned into M disjoint sets of bits. Using the weighted approach, the LT-code produces M different failure probabilities, p1, ..., pM, one for each set of bits, leading to a total probability of failure, p, which is an average of p1, ..., pM. In general, the parameters of an LT-code with unequal error protection are chosen using a heuristic procedure. In this work, we analyze the problem of choosing the LT-code parameters to optimize two figures of merit: (a) the probability of achieving a minimum acceptable PSNR, and (b) the mean PSNR, given that the minimum acceptable PSNR has been achieved. Given the rate-distortion curve achieved by the CCSDS-recommended algorithm, this work establishes a closed form for the mean PSNR (given that the minimum acceptable PSNR has been achieved) as a function of p1, ..., pM. The main contribution of this work is the study of a criterion for selecting the parameters p1, ..., pM to optimize the performance of image transmission.
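The weighted-average relationship described above is easy to make concrete. The sketch below, with purely illustrative class fractions and per-class failure probabilities (not values from the paper), computes the total failure probability p from p1, ..., pM.

```python
# Hypothetical partition of the compressed stream into M = 3 importance classes
fractions = [0.10, 0.30, 0.60]    # fraction of bits in each class
p_class   = [1e-4, 1e-3, 1e-2]    # per-class recovery failure probabilities
assert abs(sum(fractions) - 1.0) < 1e-9

# Total failure probability p as the weighted average of p1..pM
p_total = sum(w * p for w, p in zip(fractions, p_class))
print(f"average failure probability p = {p_total:.3e}")
```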
Model-Based Method for Sensor Validation
NASA Technical Reports Server (NTRS)
Vatan, Farrokh
2012-01-01
Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which methods based on Bayesian networks are the most popular. However, these methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). It is also better suited to systems for which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. It builds on the concepts of analytical redundancy relations (ARRs).
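The flavor of reasoning from analytical redundancy relations can be shown with a toy example. The sketch below assumes three sensors that nominally measure the same quantity, so the pattern of violated pairwise relations logically isolates a single faulty sensor; it illustrates the ARR idea, not the algorithm developed in this work.

```python
TOL = 0.05   # consistency tolerance for redundant measurements

def validate(a, b, c):
    """Isolate a single faulty sensor from violated redundancy relations."""
    r_ab = abs(a - b) > TOL   # relation 'a == b' violated?
    r_bc = abs(b - c) > TOL
    r_ac = abs(a - c) > TOL
    signatures = {
        (True,  False, True):  "sensor a faulty",
        (True,  True,  False): "sensor b faulty",
        (False, True,  True):  "sensor c faulty",
        (False, False, False): "all sensors consistent",
    }
    return signatures.get((r_ab, r_bc, r_ac), "ambiguous (multiple faults?)")

print(validate(1.00, 1.01, 0.99))   # -> all sensors consistent
print(validate(1.50, 1.01, 0.99))   # -> sensor a faulty
```

No prior failure probabilities are needed: the diagnosis follows logically from which relations hold, which is the property the abstract contrasts with Bayesian-network methods.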
A decentralized approach to reducing the social costs of cascading failures
NASA Astrophysics Data System (ADS)
Hines, Paul
Large cascading failures in electrical power networks come with enormous social costs. These can be direct financial costs, such as the loss of refrigerated foods in grocery stores, or more indirect social costs, such as the traffic congestion that results from the failure of traffic signals. While engineers and policy makers have made numerous technical and organizational changes to reduce the frequency and impact of large cascading failures, the existing data, as described in Chapter 2 of this work, indicate that the overall frequency and impact of large electrical blackouts in the United States are not decreasing. Motivated by the cascading failure problem, this thesis describes a new method for Distributed Model Predictive Control and a power systems application. The central goal of the method, when applied to power systems, is to reduce the social costs of cascading failures by making small, targeted reductions in load and generation and changes to generator voltage set points. Unlike some existing schemes that operate from centrally located control centers, the method is operated by software agents located at substations distributed throughout the power network. The resulting multi-agent control system is a new approach to decentralized control, combining Distributed Model Predictive Control and Reciprocal Altruism. Experimental results indicate that this scheme can in fact decrease the average size, and thus social costs, of cascading failures. Over 100 randomly generated disturbances to a model of the IEEE 300 bus test network, the method resulted in nearly an order of magnitude decrease in average event size (measured in cost) relative to cascading failure simulations without remedial control actions. Additionally, the communication requirements for the method are measured, and found to be within the bandwidth capabilities of current communications technology (on the order of 100kB/second). Experiments on several resistor networks with varying structures, including a random graph, a scale-free network and a power grid indicate that the effectiveness of decentralized control schemes, like the method proposed here, is a function of the structure of the network that is to be controlled.
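As a rough illustration of why small, targeted load reductions can shrink cascades, the toy model below propagates line overloads around a ring and compares outcomes with and without load shedding. It is a deliberately crude sketch with made-up loads and capacities, not the Distributed Model Predictive Control scheme of the thesis.

```python
def cascade(loads, capacity, neighbors, seed_line, shed=0.0):
    """Propagate overload failures; shed a fraction of load on redistribution."""
    loads = dict(loads)
    failed = {seed_line}
    frontier = [seed_line]
    while frontier:
        nxt = []
        for f in frontier:
            # Failed line's load is split among its neighbors, minus shedding
            share = loads[f] * (1.0 - shed) / max(len(neighbors[f]), 1)
            for n in neighbors[f]:
                if n in failed:
                    continue
                loads[n] += share
                if loads[n] > capacity:
                    failed.add(n)
                    nxt.append(n)
        frontier = nxt
    return len(failed)

# Ring of 6 lines, each initially carrying 70% of its capacity
neighbors = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
loads = {i: 0.7 for i in range(6)}
print("no control:  ", cascade(loads, 1.0, neighbors, seed_line=0), "lines lost")
print("15% shedding:", cascade(loads, 1.0, neighbors, seed_line=0, shed=0.15), "line lost")
```

In this toy, the uncontrolled cascade consumes the whole ring, while shedding 15% of the redistributed load keeps every neighbor under capacity and halts the cascade after the initial failure.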
Krikalev works with the TORU teleoperated control system in the SM during Expedition 11
2005-06-19
ISS011-E-09184 (18 June 2005) --- Cosmonaut Sergei K. Krikalev, Expedition 11 commander representing Russia's Federal Space Agency, practices docking procedures with the TORU teleoperated control system in the Zvezda Service Module of the International Space Station (ISS) in preparation for the docking of the Progress 18 spacecraft. Krikalev, using the Simvol-TS screen and hand controllers, could manually dock the Progress to the Station in the event of a failure of the Kurs automated docking system.
Tyurin works with the TORU teleoperated control system in the SM during Expedition 14
2007-01-20
ISS014-E-12482 (19 Jan. 2007) --- Cosmonaut Mikhail Tyurin, Expedition 14 flight engineer representing Russia's Federal Space Agency, practices docking procedures with the TORU teleoperated control system in the Zvezda Service Module of the International Space Station in preparation for the docking of the Progress 24 spacecraft. Tyurin, using the Simvol-TS screen and hand controllers, could manually dock the Progress to the station in the event of a failure of the Kurs automated docking system.
Histological evaluation and optimization of surgical vessel sealing systems
NASA Astrophysics Data System (ADS)
Lathrop, Robert; Ryan, Thomas; Gaspredes, Jonathan; Woloszko, Jean; Coad, James E.
2017-02-01
Surgical vessel sealing systems are widely used to achieve hemostasis and dissection in open surgery and minimally invasive, laparoscopic surgery. This enabling technology was developed about 17 years ago and continues to evolve, with new devices and systems achieving improved outcomes. Histopathological assessment of thermally sealed tissues is a valuable tool for refining and comparing performance among surgical vessel sealing systems. Early work in this field typically assessed seal time, burst rate, and failure rate (in situ). Later work compared histological staining methods with birefringence to assess the extent of thermal damage to tissues adjacent to the device. Understanding the microscopic architecture of a sealed vessel is crucial to optimizing the performance of power delivery algorithms and device design parameters. Manufacturers rely on these techniques to develop new products. A system for histopathological evaluation of vessels and sealing performance was established to enable direct assessment of a treatment's tissue effects. The parameters included the commonly used seal time, pressure burst rate, and failure rate, as well as extensions of the assessment to include the likelihood of steam vacuole formation, the adjacent thermal effect near the device, and the extent of thermally affected tissue extruded back into the vessel lumen. This comprehensive assessment method provides an improved means of assessing the quality of a sealed vessel and understanding the exact mechanisms that create an optimally sealed vessel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riesen, Rolf E.; Bridges, Patrick G.; Stearley, Jon R.
Next-generation exascale systems, those capable of performing a quintillion (10^18) operations per second, are expected to be delivered in the next 8-10 years. These systems, which will be 1,000 times faster than current systems, will be of unprecedented scale. As these systems continue to grow in size, faults will become increasingly common, even over the course of small calculations. Therefore, issues such as fault tolerance and reliability will limit application scalability. Current techniques to ensure progress across faults, like checkpoint/restart, the dominant fault tolerance mechanism for the last 25 years, are increasingly problematic at the scales of future systems due to their excessive overheads. In this work, we evaluate a number of techniques to decrease the overhead of checkpoint/restart and keep this method viable for future exascale systems. More specifically, this work evaluates state-machine replication to dramatically increase the checkpoint interval (the time between successive checkpoints) and hash-based, probabilistic incremental checkpointing using graphics processing units to decrease the checkpoint commit time (the time to save one checkpoint). Using a combination of empirical analysis, modeling, and simulation, we study the costs and benefits of these approaches over a wide range of parameters. These results, which cover a number of high-performance computing capability workloads, different failure distributions, hardware mean times to failure, and I/O bandwidths, show the potential benefits of these techniques for meeting the reliability demands of future exascale platforms.
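For a sense of the trade-off between commit time and checkpoint interval, a standard first-order approximation (Young's formula, not the report's model) can be applied: the near-optimal interval is sqrt(2*C*M) for commit time C and system mean time to failure M. The platform numbers below are illustrative.

```python
from math import sqrt

def young_interval(commit_time_s, mtbf_s):
    """Near-optimal checkpoint interval, t ~ sqrt(2*C*M) (Young's formula)."""
    return sqrt(2.0 * commit_time_s * mtbf_s)

# Hypothetical platform: 100,000 nodes, 5-year per-node MTBF
node_mtbf_s = 5 * 365 * 24 * 3600
system_mtbf_s = node_mtbf_s / 100_000     # roughly 26 minutes
for commit in (600.0, 60.0, 6.0):         # commit times: 10 min, 1 min, 6 s
    t = young_interval(commit, system_mtbf_s)
    print(f"commit {commit:5.0f} s -> interval {t:6.0f} s "
          f"(commit overhead ~{commit / t:.0%})")
```

The rapidly shrinking overhead as commit time falls is the motivation for techniques, like the hash-based incremental checkpointing evaluated in the report, that reduce the cost of saving each checkpoint.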
NASA Technical Reports Server (NTRS)
Maggio, Gaspare; Groen, Frank; Hamlin, Teri; Youngblood, Robert
2010-01-01
Accident Precursor Analysis (APA) serves as the bridge between existing risk modeling activities, which are often based on historical or generic failure statistics, and system anomalies, which provide crucial information about the failure mechanisms that are actually operative in the system. APA does more than simply track experience: it systematically evaluates experience, looking for under-appreciated risks that may warrant changes to design or operational practice. This paper presents the pilot application of the NASA APA process to Space Shuttle Orbiter systems. In this effort, the working sessions conducted at Johnson Space Center (JSC) piloted the APA process developed by Information Systems Laboratories (ISL) over the last two years under the auspices of NASA's Office of Safety & Mission Assurance, with the assistance of the Safety & Mission Assurance (S&MA) Shuttle & Exploration Analysis Branch. This process is built around facilitated working sessions involving diverse system experts. One important aspect of this particular APA process is its focus on understanding the physical mechanism responsible for an operational anomaly, followed by evaluation of the risk significance of the observed anomaly as well as consideration of generalizations of the underlying mechanism to other contexts. Model completeness will probably always be an issue, but this process tries to leverage operating experience to the extent possible in order to address completeness issues before a catastrophe occurs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jay Dean; Oberkampf, William Louis; Helton, Jon Craig
2004-12-01
Relationships to determine the probability that a weak link (WL)/strong link (SL) safety system will fail to function as intended in a fire environment are investigated. In the systems under study, failure of the WL system before failure of the SL system is intended to render the overall system inoperational and thus prevent the possible occurrence of accidents with potentially serious consequences. Formal developments of the probability that the WL system fails to deactivate the overall system before failure of the SL system (i.e., the probability of loss of assured safety, PLOAS) are presented for several WL/SL configurations: (i) one WL, one SL; (ii) multiple WLs, multiple SLs, with failure of any SL before any WL constituting failure of the safety system; (iii) multiple WLs, multiple SLs, with failure of all SLs before any WL constituting failure of the safety system; and (iv) multiple WLs, multiple SLs, and multiple sublinks in each SL, with failure of any sublink constituting failure of the associated SL and failure of all SLs before failure of any WL constituting failure of the safety system. The indicated probabilities derive from time-dependent temperatures in the WL/SL system and variability (i.e., aleatory uncertainty) in the temperatures at which the individual components of this system fail, and are formally defined as multidimensional integrals. Numerical procedures based on quadrature (i.e., trapezoidal rule, Simpson's rule) and on Monte Carlo techniques (i.e., simple random sampling, importance sampling) are described and illustrated for the evaluation of these integrals. Example uncertainty and sensitivity analyses for PLOAS, involving the representation of uncertainty (i.e., epistemic uncertainty) with probability theory and also with evidence theory, are presented.
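A minimal Monte Carlo version of the simplest configuration (one WL, one SL) can be sketched as follows, assuming linear temperature ramps and normally distributed failure temperatures; all rates and distributions here are hypothetical, not from the report.

```python
import random

def ploas_mc(n=200_000, seed=1):
    """Estimate P(SL fails before WL) by simple random sampling."""
    rng = random.Random(seed)
    losses = 0
    for _ in range(n):
        # Aleatory uncertainty: failure temperatures of each link (deg C)
        t_wl = rng.gauss(300.0, 20.0)   # weak link designed to fail early
        t_sl = rng.gauss(450.0, 50.0)   # strong link fails much hotter
        # Time-dependent temperatures: linear ramps at different rates
        time_wl = t_wl / 2.0            # WL location heats at 2 deg C/s
        time_sl = t_sl / 2.5            # SL location heats at 2.5 deg C/s
        if time_sl < time_wl:           # SL fails first: assured safety lost
            losses += 1
    return losses / n

# Analytic check for these numbers: time_wl ~ N(150,10), time_sl ~ N(180,20),
# so PLOAS = Phi(-(180-150)/sqrt(10**2 + 20**2)) ~ 0.09
print(f"estimated PLOAS = {ploas_mc():.3f}")
```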
Heroic Reliability Improvement in Manned Space Systems
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2017-01-01
System reliability can be significantly improved by a strong, continued effort to identify and remove all the causes of actual failures. Newly designed systems often have unexpectedly high failure rates which can be reduced by successive design improvements until the final operational system has an acceptable failure rate. There are many causes of failures and many ways to remove them. New systems may have poor specifications, design errors, or mistaken operations concepts. Correcting unexpected problems as they occur can produce large early gains in reliability. Improved technology in materials, components, and design approaches can increase reliability. Reliability growth is achieved by repeatedly operating the system until it fails, identifying the failure cause, and fixing the problem. The failure rate reduction that can be obtained depends on the number and the failure rates of the correctable failures. Under the strong assumption that the failure causes can be removed, the decline in overall failure rate can be predicted. If a failure occurs at the rate of lambda per unit time, the expected time before the failure occurs and can be corrected is 1/lambda, the Mean Time Before Failure (MTBF). Finding and fixing a less frequent failure with the rate of lambda/2 per unit time requires twice as long, a time of 1/(2 lambda). Cutting the failure rate in half requires doubling the test and redesign time spent finding and eliminating the failure causes. Reducing the failure rate significantly requires a heroic reliability improvement effort.
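The arithmetic in this abstract compounds quickly, as the short sketch below shows: each halving of the residual failure rate doubles the expected test time needed to observe, and hence fix, the next failure mode. The initial rate is illustrative.

```python
lam = 0.01                  # illustrative initial dominant failure rate (per hour)
rate, total_time = lam, 0.0
for step in range(5):
    mtbf = 1.0 / rate       # expected hours of testing to observe this failure
    total_time += mtbf
    print(f"fix mode at rate {rate:.5f}/h: +{mtbf:6.0f} h "
          f"(cumulative {total_time:6.0f} h)")
    rate /= 2.0             # next-dominant mode is half as frequent
```

With these numbers, the first mode is found in about 100 hours, but the fifth takes 1,600 hours on its own; the cumulative test time grows geometrically, which is the sense in which significant failure-rate reduction becomes "heroic".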
NASA Technical Reports Server (NTRS)
Pineda, Evan Jorge; Waas, Anthony M.
2013-01-01
A thermodynamically based work potential theory for modeling progressive damage and failure in fiber-reinforced laminates is presented. The current multiple-internal-state-variable (ISV) formulation, referred to as enhanced Schapery theory (EST), utilizes separate ISVs for modeling the effects of damage and failure. Consistent characteristic lengths are introduced into the formulation to govern the evolution of the failure ISVs. Using the stationarity of the total work potential with respect to each ISV, a set of thermodynamically consistent evolution equations for the ISVs is derived. The theory is implemented into a commercial finite element code. The model is verified against experimental results from two laminated T800/3900-2 panels containing a central notch and different fiber-orientation stacking sequences. Global load versus displacement, global load versus local strain gage data, and macroscopic failure paths obtained from the models are compared against the experimental results.
NASA Astrophysics Data System (ADS)
Pantazopoulos, G.; Vazdirvanidis, A.
2014-03-01
Emphasis is placed on the evaluation of corrosion failures of copper and machineable brass alloys during service. The presented case histories mainly concern stress corrosion cracking and dezincification, which acted as the major degradation mechanisms in components used in piping and water supply systems. SEM assessment, coupled with EDS spectroscopy, revealed the main cracking modes together with the root source(s) responsible for damage initiation and evolution. In addition, fracture surface observations contributed to the identification of the incurred fracture mechanisms and potential environmental issues that stimulated crack initiation and propagation. Very frequently, the detection of chlorides among the corrosion products served as suggestive evidence of the influence of the working environment on passive layer destabilisation and metal dissolution.
Probability of loss of assured safety in systems with multiple time-dependent failure modes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon Craig; Pilch, Martin.; Sallaberry, Cedric Jean-Marie.
2012-09-01
Weak link (WL)/strong link (SL) systems are important parts of the overall operational design of high-consequence systems. In such designs, the SL system is very robust and is intended to permit operation of the entire system under, and only under, intended conditions. In contrast, the WL system is intended to fail in a predictable and irreversible manner under accident conditions and render the entire system inoperable before an accidental operation of the SL system. The likelihood that the WL system will fail to deactivate the entire system before the SL system fails (i.e., degrades into a configuration that could allow an accidental operation of the entire system) is referred to as probability of loss of assured safety (PLOAS). Representations for PLOAS for situations in which both link physical properties and link failure properties are time-dependent are derived and numerically evaluated for a variety of WL/SL configurations, including PLOAS defined by (i) failure of all SLs before failure of any WL, (ii) failure of any SL before failure of any WL, (iii) failure of all SLs before failure of all WLs, and (iv) failure of any SL before failure of all WLs. The effects of aleatory uncertainty and epistemic uncertainty in the definition and numerical evaluation of PLOAS are considered.
PACS technologies and reliability: are we making things better or worse?
NASA Astrophysics Data System (ADS)
Horii, Steven C.; Redfern, Regina O.; Kundel, Harold L.; Nodine, Calvin F.
2002-05-01
In the process of installing picture archiving and communications systems (PACS) and speech recognition equipment, upgrading it, and working with previously stored digital image information, the authors encountered a number of problems. Examination of these difficulties illustrated the complex nature of our existing systems and how difficult it is, in many cases, to predict their behavior. This was found to be true even for our relatively small number of interconnected systems. The purpose of this paper is to illustrate some of the principles of understanding complex system interaction through examples from our experience. The work for this paper grew out of a number of studies we had carried out on our PACS over several years. The complex nature of our systems was evaluated through comparison of our operations with known examples of systems in other industries. Three scenarios (a network failure, a system software upgrade, and an attempt to read media from an old archive) showed that the major systems used in the radiology departments of many healthcare facilities (HIS, RIS, PACS, and speech recognition) are likely to interact in complex and often unpredictable ways. These interactions may be very difficult or impossible to predict, so plans should be made to overcome the negative aspects of the problems that result. Failures and problems, often unpredictable ones, are a likely side effect of having multiple information handling and processing systems interconnected and interoperating. Planning to avoid, or at least be less vulnerable to, such difficulties is an important aspect of systems planning.
McGrath, Rita Gunther
2011-04-01
It's hardly news that business leaders work in increasingly uncertain environments, where failures are bound to be more common than successes. Yet if you ask executives how well, on a scale of one to 10, their organizations learn from failure, you'll often get a sheepish "Two-or maybe three" in response. Such organizations are missing a big opportunity: Failure may be inevitable but, if managed well, can be very useful. A certain amount of failure can help you keep your options open, find out what doesn't work, create the conditions to attract resources and attention, make room for new leaders, and develop intuition and skill. The key to reaping these benefits is to foster "intelligent failure" throughout your organization. McGrath describes several principles that can help you put intelligent failure to work. You should decide what success and failure would look like before you start a project. Document your initial assumptions, test and revise them as you go, and convert them into knowledge. Fail fast-the longer something takes, the less you'll learn-and fail cheaply, to contain your downside risk. Limit the number of uncertainties in new projects, and build a culture that tolerates, and sometimes even celebrates, failure. Finally, codify and share what you learn. These principles won't give you a means of avoiding all failures down the road-that's simply not realistic. They will help you use small losses to attain bigger wins over time.
NASA Technical Reports Server (NTRS)
Bazley, Jesse A.
2011-01-01
This presentation will discuss the International Space Station's (ISS) Regenerative Environmental Control and Life Support System (ECLSS) operations, with discussion of the on-orbit lessons learned, specifically regarding the challenges that have been faced as the system has expanded with a growing ISS crew. Over the 10-year history of the ISS, there have been numerous challenges, failures, and triumphs in the quest to keep the crew alive and comfortable. Successful operation of the ECLSS not only requires maintenance of the hardware, but also management of the station resources in case of hardware failure or missed re-supply. This involves effective communication between the primary International Partners (NASA and Roskosmos) and the secondary partners (JAXA and ESA) in order to keep a reserve of the contingency consumables and allow for re-supply of failed hardware. The ISS ECLSS utilizes consumables storage for contingency usage as well as longer-term regenerative systems, which allow for conservation of the expensive resources brought up by re-supply vehicles. This long-term hardware, and its interactions with software, was a challenge for systems engineers when designed, and it requires multiple operational workarounds in order to function continuously. On a day-to-day basis, the ECLSS presents big challenges to the on-console controllers. The main challenges involve the utilization of the resources brought up by the visiting vehicles prior to undocking, the balance of contributions between the International Partners for both systems and resources, and maintaining balance between the many interdependent systems, which includes providing the resources they need when they need them. The current biggest challenge for ECLSS is the Regenerative ECLSS system, which continuously recycles urine and condensate water into drinking water and oxygen. These systems were brought to full functionality on the STS-126 (ULF-2) mission. Through system failures and recovery, the ECLSS console has learned how to balance the water within the systems, store and use water for contingencies, and continue to work with the International Partners for short-term failures. Through these challenges and the system failures, the most important lesson learned has been the importance of redundancy and operational workarounds. It is only because of the flexibility of the hardware and the software that flight controllers have the opportunity to continue operating the system as a whole for mission success.
Risk Based Reliability Centered Maintenance of DOD Fire Protection Systems
1999-01-01
2.2.3 Failure Mode and Effect Analysis (FMEA); 2.2.4 Failure Mode Risk Characterization. Step 2: system functions and functional failures definition; Step 3: failure mode and effect analysis (FMEA); Step 4: failure mode risk characterization. The Interface Location column identifies the location where the FMEA of the fire protection system began or stopped. For example, for the fire ...
Peer-to-peer architectures for exascale computing : LDRD final report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vorobeychik, Yevgeniy; Mayo, Jackson R.; Minnich, Ronald G.
2010-09-01
The goal of this research was to investigate the potential for employing dynamic, decentralized software architectures to achieve reliability in future high-performance computing platforms. These architectures, inspired by peer-to-peer networks such as botnets that already scale to millions of unreliable nodes, hold promise for enabling scientific applications to run usefully on next-generation exascale platforms (~10^18 operations per second). Traditional parallel programming techniques suffer rapid deterioration of performance scaling with growing platform size, as the work of coping with increasingly frequent failures dominates over useful computation. Our studies suggest that new architectures, in which failures are treated as ubiquitous and their effects are considered as simply another controllable source of error in a scientific computation, can remove such obstacles to exascale computing for certain applications. We have developed a simulation framework, as well as a preliminary implementation in a large-scale emulation environment, for exploration of these 'fault-oblivious computing' approaches. High-performance computing (HPC) faces a fundamental problem of increasing total component failure rates due to increasing system sizes, which threaten to degrade system reliability to an unusable level by the time the exascale range is reached (~10^18 operations per second, requiring on the order of millions of processors). As computer scientists seek a way to scale system software for next-generation exascale machines, it is worth considering peer-to-peer (P2P) architectures that are already capable of supporting 10^6-10^7 unreliable nodes. Exascale platforms will require a different way of looking at systems and software because the machine will likely not be available in its entirety for a meaningful execution time. Realistic estimates of failure rates range from a few times per day to more than once per hour for these platforms. P2P architectures give us a starting point for crafting applications and system software for exascale. In the context of the Internet, P2P applications (e.g., file sharing, botnets) have already solved this problem for 10^6-10^7 nodes. Usually based on a fractal distributed hash table structure, these systems have proven robust in practice to constant and unpredictable outages, failures, and even subversion. For example, a recent estimate of botnet turnover (i.e., the number of machines leaving and joining) is about 11% per week. Nonetheless, P2P networks remain effective despite these failures: the Conficker botnet has grown to ~5 x 10^6 peers. Unlike today's system software and applications, those for next-generation exascale machines cannot assume a static structure and, to be scalable over millions of nodes, must be decentralized. P2P architectures achieve both, and provide a promising model for 'fault-oblivious computing'. This project aimed to study the dynamics of P2P networks in the context of a design for exascale systems and applications. Having no single point of failure, the most successful P2P architectures are adaptive and self-organizing. While there has been some previous work applying P2P to message passing, little attention has previously been paid to the tightly coupled exascale domain. Typically, the per-node footprint of P2P systems is small, making them ideal for HPC use.
The implementation on each peer node cooperates en masse to 'heal' disruptions rather than relying on a controlling 'master' node. Understanding this cooperative behavior from a complex systems viewpoint is essential to predicting useful environments for the inextricably unreliable exascale platforms of the future. We sought to obtain theoretical insight into the stability and large-scale behavior of candidate architectures, and to work toward leveraging Sandia's Emulytics platform to test promising candidates in a realistic (ultimately ≥10^7 nodes) setting. Our primary example applications are drawn from linear algebra: a Jacobi relaxation solver for the heat equation, and the closely related technique of value iteration in optimization. We aimed to apply P2P concepts in designing implementations capable of surviving an unreliable machine of 10^6 nodes.
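The Jacobi example can be made concrete in a few lines. In the sketch below, each grid point of a 1-D heat-equation relaxation is imagined to be owned by a peer that randomly fails to deliver its update in a given sweep; the stale value is simply reused, and the iteration still converges. This is an illustration of the fault-oblivious idea, not the project's implementation.

```python
import random

def jacobi_fault_oblivious(n=32, iters=5000, p_fail=0.2, seed=0):
    rng = random.Random(seed)
    u = [0.0] * n
    u[0], u[-1] = 1.0, 0.0             # fixed boundary temperatures
    for _ in range(iters):
        new = u[:]
        for i in range(1, n - 1):
            if rng.random() < p_fail:   # peer i failed this sweep:
                continue                # keep its stale value and move on
            new[i] = 0.5 * (u[i - 1] + u[i + 1])
        u = new
    # Steady state of the 1-D heat equation is linear between the boundaries
    return max(abs(u[i] - (1.0 - i / (n - 1))) for i in range(n))

print(f"max error with 20% dropped updates: {jacobi_fault_oblivious():.2e}")
```

Dropped updates slow convergence but do not prevent it, so node failures behave as just another controllable error source, which is the essence of fault-oblivious computing.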
Paper 5643 - Role of Maintenance in the Performance of Stormwater Control Measures
NASA Astrophysics Data System (ADS)
Hunt, W. F., III; Merriman, L.; Winston, R.; Brown, R. A.
2014-12-01
Stormwater Control Measures (SCMs) are required by jurisdictions across the USA and internationally to treat runoff quantity and quality. Like any anthropogenic device, these systems must be maintained. However, oftentimes once a system has been constructed it is neglected, either assumed to work in perpetuity or (more likely) simply forgotten. Recent research on multiple stormwater practices illustrates the pitfalls of neglecting certain practices, while highlighting other SCMs that are resilient despite lack of care. The focus of this presentation is to highlight three often-used SCMs, constructed stormwater wetlands, bioretention, and permeable pavement, describing each SCM's failure modes. The degree to which water quality and hydrologic mitigation function is lost will be presented for each practice. Moreover, design and construction guidance will be provided so that exposure to failure mechanisms is limited for each practice. Of the three practices, their resilience to failure appears to be (in descending order): constructed stormwater wetlands, bioretention, and permeable pavement. One key to the robustness of the former two practices seems to be the important role of vegetation, which helps heal the "wounds" of neglect. Because constructed stormwater wetlands do not rely upon filtration, they tend to be slightly less prone to failure than bioretention (which is a filtration-based SCM).
Evolution of a modular software network
Fortuna, Miguel A.; Bonachela, Juan A.; Levin, Simon A.
2011-01-01
“Evolution behaves like a tinkerer” (François Jacob, Science, 1977). Software systems provide a singular opportunity to understand biological processes using concepts from network theory. The Debian GNU/Linux operating system allows us to explore the evolution of a complex network in a unique way. The modular design detected during its growth is based on the reuse of existing code in order to minimize costs during programming. The increase of modularity experienced by the system over time has not counterbalanced the increase in incompatibilities between software packages within modules. This negative effect is far from being a failure of design. A random process of package installation shows that the higher the modularity, the larger the fraction of packages working properly in a local computer. The decrease in the relative number of conflicts between packages from different modules avoids a failure in the functionality of one package spreading throughout the entire system. Some potential analogies with the evolutionary and ecological processes determining the structure of ecological networks of interacting species are discussed. PMID:22106260
NASA Technical Reports Server (NTRS)
Gunn, Jody M.
2006-01-01
The state-of-the-practice for engineering and development of Ground Systems has evolved significantly over the past half decade. Missions that challenge ground system developers with significantly reduced budgets in spite of requirements for greater and previously unimagined functionality are now the norm. Making the right trades early in the mission lifecycle is one of the key factors to minimizing ground system costs. The Mission Operations Strategic Leadership Team at the Jet Propulsion Laboratory has spent the last year collecting and working through successes and failures in ground systems for application to future missions.
Security Informatics Research Challenges for Mitigating Cyber Friendly Fire
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carroll, Thomas E.; Greitzer, Frank L.; Roberts, Adam D.
This paper addresses cognitive implications and research needs surrounding the problem of cyber friendly fire (FF). We define cyber FF as intentional offensive or defensive cyber/electronic actions intended to protect cyber systems against enemy forces or to attack enemy cyber systems, which unintentionally harm the mission effectiveness of friendly or neutral forces. We describe examples of cyber FF and discuss how it fits within a general conceptual framework for cyber security failures. Because it involves human failure, cyber FF may be considered to belong to a sub-class of cyber security failures characterized as unintentional insider threats. Cyber FF is closely related to combat friendly fire in that maintaining situation awareness (SA) is paramount to avoiding unintended consequences. Cyber SA concerns knowledge of a system's topology (connectedness and relationships of the nodes in a system), and critical knowledge elements such as the characteristics and vulnerabilities of the components that comprise the system and its nodes, the nature of the activities or work performed, and the available defensive and offensive countermeasures that may be applied to thwart network attacks. We describe a test bed designed to support empirical research on factors affecting cyber FF. Finally, we discuss mitigation strategies to combat cyber FF, including both training concepts and suggestions for decision aids and visualization approaches.
Comparing Different Fault Identification Algorithms in Distributed Power System
NASA Astrophysics Data System (ADS)
Alkaabi, Salim
A power system is a huge, complex system that delivers electrical power from the generation units to the consumers. As the demand for electrical power increases, distributed generation has been introduced into the power system. Faults may occur in the power system at any time and in different locations. These faults can cause severe damage, as they may lead to full failure of the power system. The use of distributed generation in the power system makes it even harder to identify the location of faults in the system. The main objective of this work is to test different fault location identification algorithms on a power system with varying amounts of power injected by distributed generators. As faults may lead the system to full failure, this is an important area for research. In this thesis, different fault location identification algorithms are tested and compared while different amounts of power are injected from distributed generators. The algorithms were tested on the IEEE 34-node test feeder using MATLAB, and the results were compared to determine when these algorithms might fail and how reliable these methods are.
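One classical single-ended approach that such comparisons typically include is reactance-based fault location, sketched below with hypothetical relay phasors (this is an illustration of the general technique, not necessarily one of the algorithms compared in the thesis).

```python
def fault_distance_km(v_relay, i_relay, x_per_km):
    """Reactance-based estimate of distance to a bolted fault."""
    z_apparent = v_relay / i_relay        # complex apparent impedance at the relay
    return z_apparent.imag / x_per_km

# Hypothetical line: r = 0.25 ohm/km, x = 0.35 ohm/km, fault 12 km out
z_line = complex(0.25, 0.35) * 12.0
i_fault = complex(400.0, -150.0)          # relay current phasor (A)
v_relay = z_line * i_fault                # relay voltage phasor (V)
print(f"estimated distance: {fault_distance_km(v_relay, i_fault, 0.35):.1f} km")
```

Current infeed from distributed generators between the relay and the fault biases the apparent impedance, which is exactly why the performance of such algorithms must be re-examined as the injected power varies.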
Gray, Shannon E; Hassani-Mahmooei, Behrooz; Cameron, Ian D; Kendall, Elizabeth; Kenardy, Justin; Collie, Alex
2018-02-12
Purpose To determine the incidence of employed people who try and fail to return to work (RTW) following a transport crash, and to identify predictors of RTW failure. A historical cohort study was conducted in the state of Victoria, Australia. People insured through the state-based compulsory third party transport accident compensation scheme were included. Inclusion criteria were a date of crash between 2003 and 2012 (inclusive), age 15-70 years at the time of crash, a non-catastrophic injury, and receipt of at least 1 day of income replacement. A matrix was created from an administrative payments dataset that mapped each person's RTW pattern for each day up to 3 years post-crash. A gap of 7 days with no payment followed by resumption of a payment was considered an RTW failure and was flagged. These event flags were then entered into a regression analysis to determine the odds of having a failed RTW attempt. Seventeen percent of individuals had an RTW failure, with males having 20% lower odds of experiencing RTW failure. Those who were younger, had minor injuries (sprains, strains, contusions, abrasions, non-limb fractures), or were from a more advantaged socio-economic group were less likely to experience an RTW failure. Most likely to experience an RTW failure were individuals with whiplash, dislocations or, particularly, those admitted to hospital. Understanding the causes and predictors of failed RTW can help insurers, employers and health systems identify at-risk individuals. This can enable earlier and more targeted support and more effective employment outcomes.
USAF Evaluation of an Automated Adaptive Flight Training System
1975-10-01
system. C. What is the most effective way to utilize the system in operational training? Student opinion on this question is equally divided ... None; utility hydraulic failure; flap failure; left engine failure; right engine failure; stab aug failure; no-gyro approach procedure; no ...
Application of Density Functional Theory to Systems Containing Metal Atoms
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.
2006-01-01
The accuracy of density functional theory (DFT) for problems involving metal atoms is considered. The DFT results are compared with experiment as well as with results obtained using the coupled cluster approach. The comparisons include geometries, frequencies, and bond energies. The systems considered include MO2, M(OH)n+, MNO+, and MCO2+. DFT works well for frequencies and geometries, even in cases with symmetry breaking; however, some examples have been found where the symmetry breaking is quite severe and the DFT methods do not work well. The calculation of bond energies is more difficult, and examples of successes as well as failures of DFT are given.
Respiratory failure happens when not enough oxygen passes from your lungs into your blood. Your body's organs, ... brain, need oxygen-rich blood to work well. Respiratory failure also can happen if your lungs can' ...
Exploring the temperature dependence of failure mechanisms in fragmenting metal cylinders
NASA Astrophysics Data System (ADS)
Jones, David; Chapman, David; Hazell, Paul; Bland, Simon; Eakins, Daniel
2011-06-01
We present current work to investigate the influence of temperature on the dynamic fragmentation of metals. Pre-heated/cooled cylinders of Ti-6Al-4V were subjected to rapid radial expansion up to and past the point of failure using a modified expanding insert method on a single stage gas gun. Additional experiments were performed using an electromagnetic drive system to produce uniform deformations on targets of differing dimensions (radius, wall thickness). Issues concerning the geometry of the experiments, methods of heating and cooling the sample and diagnostics are covered. Finally, the role of temperature on adiabatic shear banding and fragment distribution statistics is discussed.
A Note About HARP's State Trimming Method
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Hayhurst, Kelly J.; Johnson, Sally C.
1998-01-01
This short note provides some additional insight into how the HARP program works. In some cases, it is possible for HARP to trim away too many states and obtain an optimistic result. The HARP Version 7.0 manual warns the user that 'Unlike the ALL model, the SAME model can automatically drop failure modes for certain system models. The user is cautioned to insure that no important failure modes are dropped; otherwise, a non-conservative result can be given.' This note provides an example of where this occurs and a pointer to further documentation that gives a means of bounding the error associated with trimming these states.
Immunity-based detection, identification, and evaluation of aircraft sub-system failures
NASA Astrophysics Data System (ADS)
Moncayo, Hever Y.
This thesis describes the design, development, and flight-simulation testing of an integrated Artificial Immune System (AIS) for detection, identification, and evaluation of a wide variety of sensor, actuator, propulsion, and structural failures/damages, including the prediction of the achievable states and other limitations on performance and handling qualities. The AIS scheme achieves a high detection rate and a low number of false alarms for all the failure categories considered. Data collected using a motion-based flight simulator are used to define the self for an extended sub-region of the flight envelope. The NASA IFCS F-15 research aircraft model is used; it represents a supersonic fighter and includes model-following adaptive control laws based on non-linear dynamic inversion with artificial neural network augmentation. The flight simulation tests are designed to analyze and demonstrate the performance of the immunity-based aircraft failure detection, identification and evaluation (FDIE) scheme. A general robustness analysis is also presented by determining the achievable limits for a desired performance in the presence of atmospheric perturbations. For the purposes of this work, the integrated AIS scheme is implemented with three main components. The first component performs detection when one of the considered failures is present in the system. The second component identifies the failure category and classifies it according to the failed element. In the third phase, a general evaluation of the failure is performed, with estimation of the magnitude/severity of the failure and prediction of its effect in reducing the flight envelope of the aircraft system. Solutions and alternatives to specific design issues of the AIS scheme, such as data clustering and empty-space optimization, data fusion and duplication removal, definition of features, dimensionality reduction, and selection of cluster/detector shape, are also analyzed in this thesis. These were shown to have an important effect on detection performance and are a critical aspect when designing the configuration of the AIS. The results presented in this thesis show that the AIS paradigm directly addresses the complexity and multi-dimensionality associated with a damaged aircraft's dynamic response and provides the tools necessary for a comprehensive/integrated solution to the FDIE problem. Excellent detection, identification, and evaluation performance has been recorded for all types of failures considered. The implementation of the proposed AIS-based scheme can potentially have a significant impact on the safety of aircraft operation. The output information obtained from the scheme will be useful for increasing pilot situational awareness and determining automated compensation.
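The self/non-self mechanism at the heart of such a scheme can be illustrated with a minimal negative-selection sketch: detectors are generated at random and kept only if they do not match 'self' (nominal data), and any new sample matching a detector is flagged. The feature space, radii, and data below are hypothetical, not the thesis's configuration.

```python
import random

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train_detectors(self_set, n_detectors, self_radius, rng):
    """Keep only random candidates that match no 'self' sample."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = (rng.uniform(0, 1), rng.uniform(0, 1))
        if all(dist(cand, s) > self_radius for s in self_set):
            detectors.append(cand)
    return detectors

def is_anomalous(sample, detectors, match_radius):
    return any(dist(sample, d) <= match_radius for d in detectors)

rng = random.Random(42)
# 'Self': nominal flight-condition features, clustered mid-range
self_set = [(rng.gauss(0.5, 0.05), rng.gauss(0.5, 0.05)) for _ in range(200)]
detectors = train_detectors(self_set, 300, self_radius=0.25, rng=rng)

print(is_anomalous((0.50, 0.50), detectors, match_radius=0.1))  # nominal -> False
print(is_anomalous((0.80, 0.20), detectors, match_radius=0.1))  # failure-like -> True
```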
Nordgren, Lena; Söderlund, Anne
2016-11-09
To live with heart failure means that life is delimited. Still, people with heart failure can have a desire to stay active in working life as long as possible. Although a number of factors affect sick leave and rehabilitation processes, little is known about sick leave and vocational rehabilitation concerning people with heart failure. This study aimed to identify emotions and encounters with healthcare professionals as possible predictors for the self-estimated ability to return to work in people on sick leave due to heart failure. A population-based cross-sectional study design was used. The study was conducted in Sweden. Data were collected in 2012 from 3 different sources: 2 official registries and 1 postal questionnaire. A total of 590 individuals were included. Descriptive statistics, correlation analysis and linear multiple regression analysis were used. 3 variables, feeling strengthened in the situation (β=-0.21, p=0.02), feeling happy (β=-0.24, p=0.02) and receiving encouragement about work (β=-0.32, p≤0.001), were identified as possible predictive factors for the self-estimated ability to return to work. To feel strengthened, happy and to receive encouragement about work can affect the return to work process for people on sick leave due to heart failure. In order to develop and implement rehabilitation programmes to meet these needs, more research is needed. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olalla, Carlos; Maksimovic, Dragan; Deline, Chris
This paper quantifies the impact of distributed power electronics in photovoltaic (PV) systems in terms of end-of-life energy-capture performance and reliability. The analysis is based on simulations of PV installations over the system lifetime at various degradation rates. It is shown how module-level or submodule-level power converters can mitigate variations in cell degradation over time, effectively increasing the system lifespan by 5-10 years compared with the nominal 25-year lifetime. An important aspect typically overlooked when characterizing such improvements is the reliability of the distributed power electronics themselves, as power converter failures may not only diminish energy yield improvements but also adversely affect overall system operation. Failure models are developed, and power electronics reliability is taken into account in this work, in order to provide a more comprehensive view of the opportunities and limitations offered by distributed power electronics in PV systems. Lastly, it is shown how a differential power-processing approach achieves the best mismatch mitigation performance and the least susceptibility to converter faults.
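A back-of-envelope version of the mismatch argument: in a deliberately crude series-string model that ignores bypass diodes, string output is limited by the weakest module, whereas per-module (or differential) power processing recovers each module's own maximum power minus converter losses. The numbers are illustrative, not the paper's data.

```python
# Hypothetical per-module maximum power after uneven degradation (W)
module_power_w = [180, 200, 195, 150, 205]
eta_converter = 0.98            # assumed per-module converter efficiency

string_limited = len(module_power_w) * min(module_power_w)   # weakest-module limit
module_level = eta_converter * sum(module_power_w)           # per-module MPPT
print(f"string-limited: {string_limited} W, module-level: {module_level:.0f} W")
```

Even with converter losses, the module-level figure exceeds the string-limited one whenever degradation is sufficiently uneven, which is why converter reliability, not just yield, determines whether the electronics pay off.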
A Taxonomy of Fallacies in System Safety Arguments
NASA Technical Reports Server (NTRS)
Greenwell, William S.; Knight, John C.; Holloway, C. Michael; Pease, Jacob J.
2006-01-01
Safety cases are gaining acceptance as assurance vehicles for safety-related systems. A safety case documents the evidence and argument that a system is safe to operate; however, logical fallacies in the underlying argument may undermine a system's safety claims. Removing these fallacies is essential to reduce the risk of safety-related system failure. We present a taxonomy of common fallacies in safety arguments that is intended to assist safety professionals in avoiding and detecting fallacious reasoning in the arguments they develop and review. The taxonomy derives from a survey of general argument fallacies and a separate survey of fallacies in real-world safety arguments. Our taxonomy is specific to safety argumentation, and it is targeted at professionals who work with safety arguments but may lack formal training in logic or argumentation. We discuss the rationale for the selection and categorization of fallacies in the taxonomy. In addition to its applications to the development and review of safety cases, our taxonomy could also support the analysis of system failures and promote the development of more robust safety case patterns.
Space rescue system definition (system performance analysis and trades)
NASA Astrophysics Data System (ADS)
Housten, Sam; Elsner, Tim; Redler, Ken; Svendsen, Hal; Wenzel, Sheri
This paper addresses key technical issues involved in the system definition of the Assured Crew Return Vehicle (ACRV). The perspective on these issues is that of a prospective ACRV contractor, performing system analysis and trade studies. The objective of these analyses and trade studies is to develop the recovery vehicle system concept and top level requirements. The starting point for this work is the definition of the set of design missions for the ACRV. This set of missions encompasses three classes of contingency/emergency (crew illness/injury, space station catastrophe/failure, transportation element catastrophe/failure). The need is to provide a system to return Space Station crew to Earth quickly (less than 24 hours) in response to randomly occurring contingency events over an extended period of time (30 years of planned Space Station life). The main topics addressed and characterized in this paper include the following: Key Recovery (Rescue) Site Access Considerations; Rescue Site Locations and Distribution; Vehicle Cross Range vs Site Access; On-orbit Loiter Capability and Vehicle Design; and Water vs. Land Recovery.
Failure Accommodation Tested in Magnetic Suspension Systems for Rotating Machinery
NASA Technical Reports Server (NTRS)
Provenza, Andy J.
2000-01-01
The NASA Glenn Research Center at Lewis Field and Texas A&M University are developing techniques for accommodating certain types of failures in magnetic suspension systems used in rotating machinery. In recent years, magnetic bearings have become a viable alternative to rolling element bearings for many applications. For example, industrial machinery such as machine tool spindles and turbomolecular pumps can today be bought off the shelf with magnetically supported rotating components. Nova Gas Transmission Ltd. has large gas compressors in Canada that have been running flawlessly for years on magnetic bearings. To help mature this technology and quiet concerns over the reliability of magnetic bearings, NASA researchers have been investigating ways of making the bearing system tolerant to faults. Since the potential benefits from an oil-free, actively controlled bearing system are so attractive, research that is focused on assuring system reliability and safety is justifiable. With support from the Fast Quiet Engine program, Glenn's Structural Mechanics and Dynamics Branch is working to demonstrate fault-tolerant magnetic suspension systems targeted for aerospace engine applications. The Flywheel Energy Storage Program is also helping to fund this research.
Meteoroid and Orbital Debris Threats to NASA's Docking Seals: Initial Assessment and Methodology
NASA Technical Reports Server (NTRS)
deGroh, Henry C., III; Gallo, Christopher A.; Nahra, Henry K.
2009-01-01
The Crew Exploration Vehicle (CEV) will be exposed to the Micrometeoroid Orbital Debris (MMOD) environment in Low Earth Orbit (LEO) during missions to the International Space Station (ISS) and to the micrometeoroid environment during lunar missions. The CEV will be equipped with a docking system which enables it to connect to ISS and the lunar module known as Altair; this docking system includes a hatch that opens so crew and supplies can pass between the spacecraft. This docking system is known as the Low Impact Docking System (LIDS) and uses a silicone rubber seal to seal in cabin air. The rubber seal on LIDS presses against a metal flange on ISS (or Altair). All of these mating surfaces are exposed to the space environment prior to docking. The effects of atomic oxygen, ultraviolet and ionizing radiation, and MMOD have been estimated using ground based facilities. This work presents an initial methodology to predict meteoroid and orbital debris threats to candidate docking seals being considered for LIDS. The methodology integrates the results of ground based hypervelocity impacts on silicone rubber seals and aluminum sheets, risk assessments of the MMOD environment for a variety of mission scenarios, and candidate failure criteria. The experimental effort that addressed the effects of projectile incidence angle, speed, mass, and density, relations between projectile size and resulting crater size, and relations between crater size and the leak rate of candidate seals has culminated in a definition of the seal/flange failure criteria. The risk assessment performed with the BUMPER code used the failure criteria to determine the probability of failure of the seal/flange system and compared the risk to the allotted risk dictated by NASA's program requirements.
ERIC Educational Resources Information Center
Lapierre, Laurent M.; Hammer, Leslie B.; Truxillo, Donald M.; Murphy, Lauren A.
2012-01-01
The first goal of this study was to test whether family interference with work (FIW) is positively related to increased workplace cognitive failure (WCF), which is defined as errors made at work that indicate lapses in memory (e.g., failing to recall work procedures), attention (e.g., not fully listening to instruction), and motor function (e.g.,…
A study of unstable rock failures using finite difference and discrete element methods
NASA Astrophysics Data System (ADS)
Garvey, Ryan J.
Case histories in mining have long described pillars or faces of rock failing violently with an accompanying rapid ejection of debris and broken material into the working areas of the mine. These unstable failures have resulted in large losses of life and collapses of entire mine panels. Modern mining operations take significant steps to reduce the likelihood of unstable failure; however, eliminating their occurrence is difficult in practice. Researchers over several decades have supplemented studies of unstable failures through the application of various numerical methods. The direction of the current research is to extend these methods and to develop improved numerical tools with which to study unstable failures in underground mining layouts. An extensive study is first conducted on the expression of unstable failure in discrete element and finite difference methods. Simulated uniaxial compressive strength tests are run on brittle rock specimens. Stable or unstable loading conditions are applied onto the brittle specimens by a pair of elastic platens with ranging stiffnesses. Determinations of instability are established through stress and strain histories taken for the specimen and the system. Additional numerical tools are then developed for the finite difference method to analyze unstable failure in larger mine models. Instability identifiers are established for assessing the locations and relative magnitudes of unstable failure through measures of rapid dynamic motion. An energy balance is developed which calculates the excess energy released as a result of unstable equilibria in rock systems. These tools are validated through uniaxial and triaxial compressive strength tests and are extended to models of coal pillars and a simplified mining layout. The results of the finite difference simulations reveal that the instability identifiers and excess energy calculations provide a generalized methodology for assessing unstable failures within potentially complex mine models. These combined numerical tools may be applied in future studies to design primary and secondary supports in bump-prone conditions, evaluate retreat mining cut sequences, assess pillar de-stressing techniques, or perform back-analyses on unstable failures in select mining layouts.
a Study on Satellite Diagnostic Expert Systems Using Case-Based Approach
NASA Astrophysics Data System (ADS)
Park, Young-Tack; Kim, Jae-Hoon; Park, Hyun-Soo
1997-06-01
Many research efforts are under way to monitor and diagnose the diverse malfunctions of satellite systems as the complexity and number of satellites increase. Currently, much monitoring and diagnosis is carried out by human experts, but there is a need to automate much of their routine work. Hence, it is worthwhile to study expert systems that can perform routine tasks automatically, thereby allowing human experts to devote their expertise to the more critical and important areas of monitoring and diagnosis. In this paper, we employ artificial intelligence techniques to model human experts' knowledge and to perform inference over the constructed knowledge base. In particular, case-based approaches are used to construct a knowledge base that models human expert capabilities in terms of previous typical exemplars. We have designed and implemented a prototype case-based system for diagnosing satellite malfunctions. Our system remembers typical failure cases and diagnoses a current malfunction by indexing into the case base. Diverse methods are used to build a more user-friendly interface that allows human experts to build a knowledge base easily.
Developing and implementing a heart failure data mart for research and quality improvement.
Abu-Rish Blakeney, Erin; Wolpin, Seth; Lavallee, Danielle C; Dardas, Todd; Cheng, Richard; Zierler, Brenda
2018-04-19
The purpose of this project was to build and formatively evaluate a near-real-time heart failure (HF) data mart. HF is a leading cause of hospital readmissions. Increased efforts to use data meaningfully may enable healthcare organizations to better evaluate effectiveness of care pathways and quality improvements, and to prospectively identify risk among HF patients. We followed a modified version of the Systems Development Life Cycle: 1) Conceptualization, 2) Requirements Analysis, 3) Iterative Development, and 4) Application Release. This foundational work reflects the first of a two-phase project; phase two (in progress) involves the implementation and evaluation of predictive analytics for clinical decision support. We engaged stakeholders to build working definitions and established automated processes for creating an HF data mart containing actionable information for diverse audiences. As of December 2017, the data mart contains information from over 175,000 distinct patients and >100 variables from each of their nearly 300,000 visits. The HF data mart will be used to enhance care, assist in clinical decision-making, and improve overall quality of care. This model holds the potential to be scaled and generalized beyond the initial focus and setting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sallaberry, Cedric Jean-Marie.; Helton, Jon Craig
2012-10-01
Weak link (WL)/strong link (SL) systems are important parts of the overall operational design of high-consequence systems. In such designs, the SL system is very robust and is intended to permit operation of the entire system under, and only under, intended conditions. In contrast, the WL system is intended to fail in a predictable and irreversible manner under accident conditions and render the entire system inoperable before an accidental operation of the SL system. The likelihood that the WL system will fail to deactivate the entire system before the SL system fails (i.e., degrades into a configuration that could allow an accidental operation of the entire system) is referred to as probability of loss of assured safety (PLOAS). This report describes the Fortran 90 program CPLOAS_2 that implements the following representations for PLOAS for situations in which both link physical properties and link failure properties are time-dependent: (i) failure of all SLs before failure of any WL, (ii) failure of any SL before failure of any WL, (iii) failure of all SLs before failure of all WLs, and (iv) failure of any SL before failure of all WLs. The effects of aleatory uncertainty and epistemic uncertainty in the definition and numerical evaluation of PLOAS can be included in the calculations performed by CPLOAS_2.
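The four representations lend themselves to direct Monte Carlo estimation once link failure times can be sampled. The sketch below is not CPLOAS_2; it estimates case (iv) under hypothetical Weibull failure-time models, with every distribution and parameter invented for illustration:

```python
import random

def sample_failure_time(scale, shape):
    # Weibull-distributed link failure time (placeholder model, not CPLOAS_2 data).
    return random.weibullvariate(scale, shape)

def ploas_any_sl_before_all_wl(n_trials=100_000):
    """Estimate PLOAS for case (iv): any SL fails before all WLs have failed.
    Two SLs and two WLs with assumed scale/shape parameters."""
    count = 0
    for _ in range(n_trials):
        sl_times = [sample_failure_time(1.2, 3.0), sample_failure_time(1.4, 2.5)]
        wl_times = [sample_failure_time(1.0, 2.0), sample_failure_time(0.9, 2.2)]
        # Loss of assured safety: some SL degrades before the last WL fails.
        if min(sl_times) < max(wl_times):
            count += 1
    return count / n_trials

if __name__ == "__main__":
    print(f"Estimated PLOAS, case (iv): {ploas_any_sl_before_all_wl():.4f}")
```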
Program Helps In Analysis Of Failures
NASA Technical Reports Server (NTRS)
Stevenson, R. W.; Austin, M. E.; Miller, J. G.
1993-01-01
Failure Environment Analysis Tool (FEAT) computer program developed to enable people to see and better understand effects of failures in system. User selects failures from either engineering schematic diagrams or digraph-model graphics, and effects or potential causes of failures highlighted in color on same schematic-diagram or digraph representation. Uses digraph models to answer two questions: What will happen to system if set of failure events occurs? and What are possible causes of set of selected failures? Helps design reviewers understand exactly what redundancies built into system and where there is need to protect weak parts of system or remove them by redesign. Program also useful in operations, where it helps identify causes of failure after they occur. FEAT reduces costs of evaluation of designs, training, and learning how failures propagate through system. Written using Macintosh Programmers Workshop C v3.1. Can be linked with CLIPS 5.0 (MSC-21927, available from COSMIC).
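To make the two digraph questions concrete, here is a toy propagation sketch; the node names and adjacency format are invented, and FEAT's actual digraph models are far richer:

```python
# Downstream effects and upstream causes on a failure digraph (illustrative only).
effects = {
    "bus_B": ["fuel_pump", "nav_computer"],
    "fuel_pump": ["engine_1"],
    "engine_1": ["thrust_loss"],
}

def reachable(graph, start):
    """All nodes reachable from `start` -- the propagated consequences of a failure."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Question 1: what will happen to the system if bus_B fails?
print(reachable(effects, "bus_B"))
# Question 2: what are possible causes of thrust_loss? Search the reversed digraph.
reverse = {}
for src, dsts in effects.items():
    for d in dsts:
        reverse.setdefault(d, []).append(src)
print(reachable(reverse, "thrust_loss"))
```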
A Low-Pressure Oxygen Storage System for Oxygen Supply in Low-Resource Settings.
Rassool, Roger P; Sobott, Bryn A; Peake, David J; Mutetire, Bagayana S; Moschovis, Peter P; Black, Jim Fp
2017-12-01
Widespread access to medical oxygen would reduce global pneumonia mortality. Oxygen concentrators are one proposed solution, but they have limitations, in particular vulnerability to electricity fluctuations and failure during blackouts. The low-pressure oxygen storage system addresses these limitations in low-resource settings. This study reports testing of the system in Melbourne, Australia, and nonclinical field testing in Mbarara, Uganda. The system included a power-conditioning unit, a standard oxygen concentrator, and an oxygen store. In Melbourne, pressure and flows were monitored during cycles of filling/emptying, with forced voltage fluctuations. The bladders were tested by increasing pressure until they ruptured. In Mbarara, the system was tested by accelerated cycles of filling/emptying and then run on grid power for 30 d. The low-pressure oxygen storage system performed well, including sustaining a pressure approximately twice the standard working pressure before rupture of the outer bag. Flow of 1.2 L/min was continuously maintained to a simulated patient during 30 d on grid power, despite power failures totaling 2.9% of the total time, with durations of 1-176 min (mean 36.2, median 18.5). The low-pressure oxygen storage system was robust and durable, with accelerated testing equivalent to at least 2 y of operation revealing no visible signs of imminent failure. Despite power cuts, the system continuously provided oxygen, equivalent to the treatment of one child, for 30 d under typical power conditions for sub-Saharan Africa. The low-pressure oxygen storage system is ready for clinical field trials. Copyright © 2017 by Daedalus Enterprises.
Aircraft Capability Management
NASA Technical Reports Server (NTRS)
Mumaw, Randy; Feary, Mike
2018-01-01
This presentation presents an overview of work performed at NASA Ames Research Center in 2017. The work concerns the analysis of current aircraft system management displays, and the initial development of an interface for providing information about aircraft system status. The new interface proposes a shift away from current aircraft system alerting interfaces that report the status of physical components, and towards displaying the implications of degradations on mission capability. The proposed interface describes these component failures in terms of operational consequences of aircraft system degradations. The research activity was an effort to examine the utility of different representations of complex systems and operating environments to support real-time decision making of off-nominal situations. A specific focus was to develop representations that provide better integrated information to allow pilots to more easily reason about the operational consequences of the off-nominal situations. The work is also seen as a pathway to autonomy, as information is integrated and understood in a form that automated responses could be developed for the off-nominal situations in the future.
Technical Reliability Studies. EOS/ESD Technology Abstracts
1981-01-01
[Keyword-index residue from the abstract listing; recoverable entries include automatic machine ESD precautions, elimination of EOS-induced secondary failure mechanisms, and use of melamine work surfaces for ESD potential bleed-off.] The abstracts cover microwave devices, optoelectronics, and selected nonelectronic parts employed in military, space, and commercial applications.
Tests of landscape influence: nest predation and brood parasitism in fragmented ecosystems
Joshua J. Tewksbury; Lindy Garner; Shannon H. Garner; John D. Lloyd; Victoria A. Saab; Thomas E. Martin
2006-01-01
The effects of landscape fragmentation on nest predation and brood parasitism, the two primary causes of avian reproductive failure, have been difficult to generalize across landscapes, yet few studies have clearly considered the context and spatial scale of fragmentation. Working in two river systems fragmented by agricultural and rural-housing development, we tracked...
ERIC Educational Resources Information Center
Milner, H. Richard, IV; Delale-O'Connor, Lori A.; Murray, Ira E.; Farinde, Abiola A.
2016-01-01
Background/Context: Prior research on "Milliken v. Bradley" focuses on the failure of this case to implement interdistrict busing in the highly segregated Detroit schools. Much of this work focuses explicitly on desegregation, rather than on equity and addressing individual, systemic, institutional, and organizational challenges that may…
A Methodology for Quantifying Certain Design Requirements During the Design Phase
NASA Technical Reports Server (NTRS)
Adams, Timothy; Rhodes, Russel
2005-01-01
A methodology for developing and balancing quantitative design requirements for safety, reliability, and maintainability has been proposed. Conceived as the basis of a more rational approach to the design of spacecraft, the methodology would also be applicable to the design of automobiles, washing machines, television receivers, or almost any other commercial product. Heretofore, it has been common practice to start by determining the requirements for reliability of elements of a spacecraft or other system to ensure a given design life for the system. Next, safety requirements are determined by assessing the total reliability of the system and adding redundant components and subsystems necessary to attain safety goals. As thus described, common practice leaves the maintainability burden to fall to chance; therefore, there is no control of recurring costs or of the responsiveness of the system. The means that have been used in assessing maintainability have been oriented toward determining the logistical sparing of components so that the components are available when needed. The process established for developing and balancing quantitative requirements for safety (S), reliability (R), and maintainability (M) derives and integrates NASA's top-level safety requirements and the controls needed to obtain program key objectives for safety and recurring cost. Being quantitative, the process conveniently uses common mathematical models. Even though the process is shown as being worked from the top down, it can also be worked from the bottom up. This process uses three math models: (1) the binomial distribution (greater-than-or-equal-to case), (2) reliability for a series system, and (3) the Poisson distribution (less-than-or-equal-to case). The zero-fail case for the binomial distribution approximates the commonly known exponential distribution or "constant failure rate" distribution. Either model can be used. The binomial distribution was selected for modeling flexibility because it conveniently addresses both the zero-fail and failure cases. The failure case is typically used for unmanned spacecraft as with missiles.
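To make the three models concrete, the sketch below evaluates the binomial greater-than-or-equal-to case (including its zero-fail special case and the constant-failure-rate approximation mentioned above), series-system reliability, and the Poisson less-than-or-equal-to case; all numbers are assumed for illustration:

```python
from math import comb, exp, factorial, prod

def binomial_at_least(n, k, R):
    # P(at least k of n elements succeed), each with reliability R.
    return sum(comb(n, i) * R**i * (1 - R)**(n - i) for i in range(k, n + 1))

def series_reliability(reliabilities):
    # Series system: every element must work.
    return prod(reliabilities)

def poisson_at_most(k, expected_failures):
    # P(at most k failures) given the expected failure count.
    lam = expected_failures
    return sum(exp(-lam) * lam**i / factorial(i) for i in range(k + 1))

R, n = 0.999, 100
print(binomial_at_least(n, n, R))   # zero-fail case: equals R**n
print(R**n, exp(-n * (1 - R)))      # closed form vs. constant-failure-rate approximation
print(series_reliability([0.99, 0.995, 0.9999]))
print(poisson_at_most(2, 0.5))      # e.g. at most 2 maintenance demands expected
```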
NASA Astrophysics Data System (ADS)
Niu, Xiqun
Polybutylene (PB) is a semicrystalline thermoplastic that has been widely used in potable water distribution piping systems. However, field practice shows that failure occurs much earlier than the expected service lifetime; determining the causes and appropriately evaluating the lifetime motivate this study. This thesis comprises three parts of work. The first is the understanding of PB, which includes thermal and mechanical characterization of the material, aging phenomena, and notch sensitivity. The second part analyzes the applicability of the existing lifetime testing method for PB. It is shown that PB is an anomaly in terms of the temperature-lifetime relation because of the fracture mechanism transition across the testing temperature range. The third part is the development of a methodology of lifetime prediction for PB pipe. The fracture process of PB pipe consists of three stages, i.e., crack initiation, slow crack growth (SCG), and crack instability. The practical lifetime of PB pipe is primarily determined by the duration of the first two stages. The mechanism of crack initiation and the quantitative estimation of the time to crack initiation are studied by employing an environmental stress cracking technique. A fatigue slow crack growth testing method has been developed and applied in the study of SCG. By using the Paris-Erdogan equation, a model is constructed to evaluate the time for SCG. As a result, the total lifetime is determined. Through this work, the failure mechanisms of PB pipe have been analyzed and the lifetime prediction methodology has been developed.
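For reference, the Paris-Erdogan relation ties the crack growth rate to the stress intensity factor range; integrating it from the initial flaw size to the critical size yields the SCG portion of the life. The generic form below is standard fracture mechanics, not the thesis's fitted model; C, m, and the geometry factor Y are material- and specimen-dependent constants:

```latex
\frac{da}{dN} = C\,(\Delta K)^{m}, \qquad \Delta K = Y\,\Delta\sigma\,\sqrt{\pi a},
\qquad
N_{\mathrm{SCG}} = \int_{a_0}^{a_c} \frac{da}{C\,\bigl(Y\,\Delta\sigma\,\sqrt{\pi a}\bigr)^{m}}
= \frac{a_c^{\,1-m/2} - a_0^{\,1-m/2}}{C\,\bigl(Y\,\Delta\sigma\,\sqrt{\pi}\bigr)^{m}\,\bigl(1-\tfrac{m}{2}\bigr)}
\quad (m \neq 2).
```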
Person to Person Biological Heat Bypass During EVA Emergencies
NASA Technical Reports Server (NTRS)
Koscheyev, Victor S.; Leon, Gloria R.; Lee, Joo-Young; Kim, Jung-Hyun; Berowiski, Anna; Trevino, Robert C.
2007-01-01
During EVA and in other extreme environments, mutual human support is sometimes the last way to survive when there is a failure of the life support equipment. The possibility of transferring a coolant to remove heat, or a warming fluid to add heat, from one individual to another to support the thermal balance of the individual with the failed system was assessed. The following scenarios were considered: (1) one participant has a cooling system that is not working well and already has a body heat deficit of 100-120 kcal and a finger temperature decline to 25 C; (2) one participant has the same overcooling status while the other is mildly overheated. Preliminary findings showed promise in using such sharing tactics to extend the duration of survival in extreme situations when there is a high metabolic rate in the donor.
Statistical modeling of software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1992-01-01
This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.
Advancement of High Power Quasi-CW Laser Diode Arrays For Space-based Laser Instruments
NASA Technical Reports Server (NTRS)
Amzajerdian, Farzin; Meadows, Byron L.; Baker, Nathaniel R.; Baggott, Renee S.; Singh, Upendra N.; Kavaya, Michael J.
2004-01-01
Space-based laser and lidar instruments play an important role in NASA's plans for meeting its objectives in both the Earth Science and Space Exploration areas. Almost all the lidar instrument concepts being considered by NASA scientists utilize moderate- to high-power diode-pumped solid state lasers as their transmitter source. Perhaps the most critical component of any solid state laser system is its pump laser diode array, which essentially dictates instrument efficiency, reliability, and lifetime. For this reason, premature failures and rapid degradation of high power laser diode arrays that have been experienced by laser system designers are of major concern to NASA. This work addresses these reliability and lifetime issues by attempting to eliminate the causes of failures and developing methods for screening laser diode arrays and qualifying them for operation in space.
Proven Innovations and New Initiatives in Ground System Development
NASA Technical Reports Server (NTRS)
Gunn, Jody M.
2006-01-01
The state of the practice for engineering and development of ground systems has evolved significantly over the past half decade. Missions that challenge ground system developers with significantly reduced budgets, in spite of requirements for greater and previously unimagined functionality, are now the norm. Making the right trades early in the mission lifecycle is one of the key factors in minimizing ground system costs. The Mission Operations Strategic Leadership Team at the Jet Propulsion Laboratory has spent the last year collecting and working through successes and failures in ground systems for application to future missions.
Failure detection system risk reduction assessment
NASA Technical Reports Server (NTRS)
Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)
2012-01-01
A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of a time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.
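One plausible reading of this quantification is a race between the failure mode's progression to its limit and the mitigation; the Monte Carlo sketch below illustrates that reading, with both time distributions and all parameters invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Assumed models: time for the failure mode to reach its limit, and time to mitigate.
t_fail = rng.lognormal(mean=3.0, sigma=0.5, size=n)   # hours to failure limit (hypothetical)
t_mitigate = rng.exponential(scale=10.0, size=n)      # hours to detect and mitigate (hypothetical)

p_unmitigated = np.mean(t_mitigate >= t_fail)  # failure limit reached before mitigation completes
p_baseline = 1.0                               # without mitigation, the limit is assumed always reached
risk_reduction = p_baseline - p_unmitigated
print(f"P(mitigation wins the race): {1 - p_unmitigated:.3f}")
print(f"Quantified risk reduction:   {risk_reduction:.3f}")
```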
Identifying the latent failures underpinning medication administration errors: an exploratory study.
Lawton, Rebecca; Carruthers, Sam; Gardner, Peter; Wright, John; McEachan, Rosie R C
2012-08-01
The primary aim of this article was to identify the latent failures that are perceived to underpin medication errors. The study was conducted within three medical wards in a hospital in the United Kingdom. The study employed a cross-sectional qualitative design. Interviews were conducted with 12 nurses and eight managers. Interviews were transcribed and subjected to thematic content analysis. A two-step inter-rater comparison tested the reliability of the themes. Ten latent failures were identified based on the analysis of the interviews. These were ward climate, local working environment, workload, human resources, team communication, routine procedures, bed management, written policies and procedures, supervision and leadership, and training. The discussion focuses on ward climate, the most prevalent theme, which is conceptualized here as interacting with failures in the nine other organizational structures and processes. This study is the first of its kind to identify the latent failures perceived to underpin medication errors in a systematic way. The findings can be used as a platform for researchers to test the impact of organization-level patient safety interventions and to design proactive error management tools and incident reporting systems in hospitals. © Health Research and Educational Trust.
Make Program Failures Work for You.
ERIC Educational Resources Information Center
Keller, M. Jean; Mills, Helen H.
1984-01-01
Recreation program planners can learn from program failures. Failures should not be viewed as negative statements about personnel. Examining feelings in a supportive staff environment is suggested as a technique for developing competence. (DF)
Bone Graft Substitute Provides Metaphyseal Fixation for a Stemless Humeral Implant.
Kim, Myung-Sun; Kovacevic, David; Milks, Ryan A; Jun, Bong-Jae; Rodriguez, Eric; DeLozier, Katherine R; Derwin, Kathleen A; Iannotti, Joseph P
2015-07-01
Stemless humeral fixation has become an alternative to traditional total shoulder arthroplasty, but metaphyseal fixation may be compromised by the quality of the trabecular bone that diminishes with age and disease, and augmentation of the fixation may be desirable. The authors hypothesized that a bone graft substitute (BGS) could achieve initial fixation comparable to polymethylmethacrylate (PMMA) bone cement. Fifteen fresh-frozen human male humerii were randomly implanted using a stemless humeral prosthesis, and metaphyseal fixation was augmented with either high-viscosity PMMA bone cement (PMMA group) or a magnesium-based injectable BGS (OsteoCrete; Bone Solutions Inc, Dallas, Texas) (OC group). Both groups were compared with a control group with no augmentation. Initial stiffness, failure load, failure displacement, failure cycle, and total work were compared among groups. The PMMA and OC groups showed markedly higher failure loads, failure displacements, and failure cycles than the control group (P<.01). There were no statistically significant differences in initial stiffness, failure load, failure displacement, failure cycle, or total work between the PMMA and OC groups. The biomechanical properties of magnesium-based BGS fixation compared favorably with PMMA bone cement in the fixation of stemless humeral prostheses and may provide sufficient initial fixation for this clinical application. Future work will investigate the long-term remodeling characteristics and bone quality at the prosthetic-bone interface in an in vivo model to evaluate the clinical efficacy of this approach. Copyright 2015, SLACK Incorporated.
Data Mining for ISHM of Liquid Rocket Propulsion Status Update
NASA Technical Reports Server (NTRS)
Srivastava, Ashok; Schwabacher, Mark; Oza, Nikunj; Martin, Rodney; Watson, Richard; Matthews, Bryan
2006-01-01
This document consists of presentation slides that review the current status of data mining to support the work with Integrated Systems Health Management (ISHM) for the systems associated with liquid rocket propulsion. The aim of this project is to use test stand data from Rocketdyne to design algorithms that will aid in the early detection of impending failures during operation. These methods will be extended and improved for future platforms (i.e., CEV/CLV).
Improving patient safety: patient-focused, high-reliability team training.
McKeon, Leslie M; Cunningham, Patricia D; Oswaks, Jill S Detty
2009-01-01
Healthcare systems are recognizing "human factor" flaws that result in adverse outcomes. Nurses work around system failures, although increasing healthcare complexity makes this harder to do without risk of error. Aviation and military organizations achieve ultrasafe outcomes through high-reliability practice. We describe how reliability principles were used to teach nurses to improve patient safety at the front line of care. Outcomes include safety-oriented, teamwork communication competency; reflections on safety culture and clinical leadership are discussed.
Preliminary calculations related to the accident at Three Mile Island
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirchner, W.L.; Stevenson, M.G.
This report discusses preliminary studies of the Three Mile Island Unit 2 (TMI-2) accident based on available methods and data. The work reported includes: (1) a TRAC base case calculation out to 3 hours into the accident sequence; (2) TRAC parametric calculations, which are the same as the base case except for a single hypothetical change in the system conditions, such as assuming the high pressure injection (HPI) system operated as designed rather than as in the accident; (3) estimates of fuel rod cladding failure, cladding oxidation due to zirconium metal-steam reactions, hydrogen release due to cladding oxidation, cladding ballooning, cladding embrittlement, and subsequent cladding breakup, based on TRAC calculated cladding temperatures and system pressures. Some conclusions of this work are: the TRAC base case accident calculation agrees very well with known system conditions to nearly 3 hours into the accident; the parametric calculations indicate that loss-of-core cooling was most influenced by the throttling of HPI flows, given the accident initiating events and the pressurizer electromagnetic-operated valve (EMOV) failing to close as designed; failure of nearly all the rods and gaseous fission product release from the failed rods is predicted to have occurred at about 2 hours and 30 minutes; and cladding oxidation (zirconium-steam reaction) up to 3 hours resulted in the production of approximately 40 kilograms of hydrogen.
Profitable failure: antidepressant drugs and the triumph of flawed experiments.
McGoey, Linsey
2010-01-01
Drawing on an analysis of Irving Kirsch and colleagues' controversial 2008 article in "PLoS [Public Library of Science] Magazine" on the efficacy of SSRI antidepressant drugs such as Prozac, I examine flaws within the methodologies of randomized controlled trials (RCTs) that have made it difficult for regulators, clinicians and patients to determine the therapeutic value of this class of drug. I then argue, drawing analogies to work by Pierre Bourdieu and Michael Power, that it is the very limitations of RCTs -- their inadequacies in producing reliable evidence of clinical effects -- that help to strengthen assumptions of their superiority as methodological tools. Finally, I suggest that the case of RCTs helps to explore the question of why failure is often useful in consolidating the authority of those who have presided over that failure, and why systems widely recognized to be ineffective tend to assume greater authority at the very moment when people speak of their malfunction.
Reliability Analysis of Systems Subject to First-Passage Failure
NASA Technical Reports Server (NTRS)
Lutes, Loren D.; Sarkani, Shahram
2009-01-01
An obvious goal of reliability analysis is the avoidance of system failure. However, it is generally recognized that it is often not feasible to design a practical or useful system for which failure is impossible. Thus it is necessary to use techniques that estimate the likelihood of failure based on modeling the uncertainty about such items as the demands on and capacities of various elements in the system. This usually involves the use of probability theory, and a design is considered acceptable if it has a sufficiently small probability of failure. This report contains findings of analyses of systems subject to first-passage failure.
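First-passage failure probability is commonly estimated by simulating response trajectories and counting barrier crossings within the design life. The sketch below is a generic Monte Carlo treatment under an assumed Ornstein-Uhlenbeck response, not the authors' analysis; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def first_passage_prob(barrier=3.0, t_end=10.0, dt=0.01, n_paths=20_000):
    """Estimate P(max_t |X(t)| > barrier) for a discretized Ornstein-Uhlenbeck
    process standing in for a structural response; barrier plays the role of
    the capacity against the demand."""
    n_steps = int(t_end / dt)
    x = np.zeros(n_paths)
    crossed = np.zeros(n_paths, dtype=bool)
    for _ in range(n_steps):
        # Euler-Maruyama step of dX = -X dt + sqrt(2) dW (unit stationary variance).
        x += -x * dt + np.sqrt(2 * dt) * rng.standard_normal(n_paths)
        crossed |= np.abs(x) > barrier
    return crossed.mean()

print(f"First-passage failure probability: {first_passage_prob():.4f}")
```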
Mass and Reliability System (MaRS)
NASA Technical Reports Server (NTRS)
Barnes, Sarah
2016-01-01
The Safety and Mission Assurance (S&MA) Directorate is responsible for mitigating risk, providing system safety, and lowering risk for space programs from ground to space. S&MA is divided into four divisions: the Space Exploration Division (NC), the International Space Station Division (NE), the Safety & Test Operations Division (NS), and the Quality and Flight Equipment Division (NT). The interns, myself and Arun Aruljothi, will be working with the Risk & Reliability Analysis Branch under the NC Division. The mission of this division is to identify, characterize, diminish, and communicate risk by implementing an efficient and effective assurance model. The team utilizes Reliability and Maintainability (R&M) and Probabilistic Risk Assessment (PRA) to ensure decisions concerning risks are informed, vehicles are safe and reliable, and program/project requirements are realistic and realized. This project pertains to the Orion mission, so it is geared toward long-duration Human Space Flight Programs. For space missions, payload is a critical concept; balancing what hardware can be replaced by components versus by Orbital Replacement Units (ORUs) or subassemblies is key. For this effort a database was created that combines mass and reliability data, called the Mass and Reliability System, or MaRS. U.S. International Space Station (ISS) components are used as reference parts in the MaRS database. Using ISS components as a platform is beneficial because of the historical context and the environmental similarities to a space flight mission. MaRS combines several systems: International Space Station PART for failure data, the Vehicle Master Database (VMDB) for ORUs and components, the Maintenance & Analysis Data Set (MADS) for operation hours and other pertinent data, and the Hardware History Retrieval System (HHRS) for unit weights. MaRS is populated using a Visual Basic application. Once populated, the spreadsheet comprises information on ISS components including operation hours, random/nonrandom failures, software/hardware failures, quantity, orbital replacement units (ORUs), date of placement, unit weight, and frequency of part. The motivation for creating such a database is the development of a mass/reliability parametric model to estimate the mass required for replacement parts. Once complete, engineers working on future space flight missions will have access to mean times to failure for parts along with their masses, which will be used to make proper decisions for long-duration space flight missions.
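As a toy illustration of the kind of join behind MaRS (the real sources are the PART, VMDB, MADS, and HHRS systems named above; the table columns and values below are invented):

```python
import pandas as pd

# Hypothetical extracts standing in for the source systems.
failures = pd.DataFrame({"part_no": ["A1", "B2"], "failures": [3, 1]})          # PART-like
hours = pd.DataFrame({"part_no": ["A1", "B2"], "op_hours": [52_000.0, 18_000.0]})  # MADS-like
weights = pd.DataFrame({"part_no": ["A1", "B2"], "unit_mass_kg": [12.4, 3.1]})  # HHRS-like

mars = failures.merge(hours, on="part_no").merge(weights, on="part_no")
mars["mtbf_hours"] = mars["op_hours"] / mars["failures"]
# A toy mass-versus-reliability view of the kind a sparing model could draw on:
print(mars[["part_no", "unit_mass_kg", "mtbf_hours"]])
```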
Forecasting overhaul or replacement intervals based on estimated system failure intensity
NASA Astrophysics Data System (ADS)
Gannon, James M.
1994-12-01
System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLE's) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows based on the distributions of the cost inputs and confidence intervals of the MLE's.
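Under the stated power-law (Weibull-intensity) NHPP, the MLEs and the integral of the ROCOF have closed forms; a minimal sketch with invented failure times, assuming failure-truncated data:

```python
import math

def powerlaw_mles(times):
    """MLEs for a power-law NHPP with ROCOF u(t) = lam * beta * t**(beta - 1),
    failure-truncated at the last observed failure time T."""
    n, T = len(times), times[-1]
    # The term for t_n = T would be ln(1) = 0, so sum over the first n-1 times.
    beta = n / sum(math.log(T / t) for t in times[:-1])
    lam = n / T**beta
    return lam, beta

def expected_failures(lam, beta, a, b):
    # Integral of the ROCOF over [a, b]: expected failure count in that usage interval.
    return lam * (b**beta - a**beta)

# Hypothetical failure times (operating hours) for one repairable system:
times = [120.0, 410.0, 760.0, 980.0, 1150.0, 1270.0]
lam, beta = powerlaw_mles(times)
print(f"beta_hat = {beta:.2f} (>1 suggests deterioration)")
print(f"Expected failures over the next 1000 h: "
      f"{expected_failures(lam, beta, 1270.0, 2270.0):.2f}")
```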
Frantz, Stefan; Falcao-Pires, Ines; Balligand, Jean-Luc; Bauersachs, Johann; Brutsaert, Dirk; Ciccarelli, Michele; Dawson, Dana; de Windt, Leon J; Giacca, Mauro; Hamdani, Nazha; Hilfiker-Kleiner, Denise; Hirsch, Emilio; Leite-Moreira, Adelino; Mayr, Manuel; Thum, Thomas; Tocchetti, Carlo G; van der Velden, Jolanda; Varricchi, Gilda; Heymans, Stephane
2018-03-01
Activation of the immune system in heart failure (HF) has been recognized for over 20 years. Initially, experimental studies demonstrated a maladaptive role of the immune system. However, several phase III trials failed to show beneficial effects in HF with therapies directed against an immune activation. Preclinical studies today describe positive and negative effects of immune activation in HF. These different effects depend on timing and aetiology of HF. Therefore, herein we give a detailed review on immune mechanisms and their importance for the development of HF with a special focus on commonalities and differences between different forms of cardiomyopathies. The role of the immune system in ischaemic, hypertensive, diabetic, toxic, viral, genetic, peripartum, and autoimmune cardiomyopathy is discussed in depth. Overall, initial damage to the heart leads to disease specific activation of the immune system whereas in the chronic phase of HF overlapping mechanisms occur in different aetiologies. © 2018 The Authors. European Journal of Heart Failure published by John Wiley & Sons Ltd on behalf of European Society of Cardiology.
Unified continuum damage model for matrix cracking in composite rotor blades
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pollayi, Hemaraju; Harursampath, Dineshkumar
This paper deals with modeling of the first damage mode, matrix micro-cracking, in helicopter rotor/wind turbine blades and how this affects the overall cross-sectional stiffness. The helicopter/wind turbine rotor system operates in a highly dynamic and unsteady environment, leading to severe vibratory loads present in the system. Repeated exposure to this loading condition can induce damage in the composite rotor blades. These rotor/turbine blades are generally made of fiber-reinforced laminated composites and exhibit various competing modes of damage such as matrix micro-cracking, delamination, and fiber breakage. There is a need to study the behavior of the composite rotor system under various key damage modes in composite materials for developing a Structural Health Monitoring (SHM) system. Each blade is modeled as a beam based on geometrically non-linear 3-D elasticity theory. Each blade thus splits into 2-D analyses of cross-sections and non-linear 1-D analyses along the beam reference curves. Two different tools are used here for complete 3-D analysis: VABS for 2-D cross-sectional analysis and GEBT for 1-D beam analysis. Physically based failure models for the matrix in compression and tension loading are used in the present work. Matrix cracking is detected using two failure criteria, Matrix Failure in Compression and Matrix Failure in Tension, which are based on the recovered field. A strain variable is set which drives the damage variable for matrix cracking, and this damage variable is used to estimate the reduced cross-sectional stiffness. The matrix micro-cracking is performed in two different approaches: (i) element-wise, and (ii) node-wise. The procedure presented in this paper is implemented in VABS as a matrix micro-cracking modeling module. Three examples are presented to investigate the matrix failure model, illustrating the effect of matrix cracking on cross-sectional stiffness by varying the applied cyclic load.
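The stiffness-reduction step can be pictured as a damage-variable knockdown of matrix-dominated entries of the cross-sectional stiffness matrix; the sketch below is an illustrative assumption about which entries are affected and how, not VABS's actual update rule:

```python
import numpy as np

def degrade_stiffness(K, d, matrix_dominated=(3, 4, 5)):
    """Scale matrix-dominated rows and columns of a 6x6 cross-sectional
    stiffness matrix by sqrt(1 - d) each, so affected diagonal entries carry
    the full (1 - d) knockdown. d in [0, 1] is the matrix-cracking damage
    variable; the entry selection and scaling rule are assumptions."""
    s = np.ones(K.shape[0])
    s[list(matrix_dominated)] = np.sqrt(1.0 - d)
    return K * np.outer(s, s)

K = np.diag([1e8, 4e6, 4e6, 2e5, 9e5, 9e5])  # toy stiffness values
print(degrade_stiffness(K, d=0.3).diagonal())
```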
A global analysis approach for investigating structural resilience in urban drainage systems.
Mugume, Seith N; Gomez, Diego E; Fu, Guangtao; Farmani, Raziyeh; Butler, David
2015-09-15
Building resilience in urban drainage systems requires consideration of a wide range of threats that contribute to urban flooding. Existing hydraulic reliability based approaches have focused on quantifying functional failure caused by extreme rainfall or increases in dry weather flows that lead to hydraulic overloading of the system. Such approaches, however, do not fully explore the system failure scenario space, because they exclude crucial threats such as equipment malfunction, pipe collapse, and blockage that can also lead to urban flooding. In this research, a new analytical approach based on global resilience analysis is investigated and applied to systematically evaluate the performance of an urban drainage system when subjected to a wide range of structural failure scenarios resulting from random cumulative link failure. Link failure envelopes, which represent the resulting loss of system functionality (impacts), are determined by computing the upper and lower limits of the simulation results for total flood volume (failure magnitude) and average flood duration (failure duration) at each link failure level. A new resilience index that combines the failure magnitude and duration into a single metric is applied to quantify system residual functionality at each considered link failure level. With this approach, resilience has been tested and characterised for an existing urban drainage system in Kampala city, Uganda. In addition, the effectiveness of potential adaptation strategies in enhancing its resilience to cumulative link failure has been tested. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
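A resilience index of the kind described combines normalized failure magnitude and normalized failure duration into one residual-functionality number; the multiplicative form below is an assumed illustration, and the paper's exact normalization may differ:

```python
def resilience_index(flood_volume, total_inflow, mean_flood_duration, total_duration):
    """Residual functionality at one link-failure level. Severity multiplies the
    flooded fraction of total inflow (failure magnitude) by the flooded fraction
    of the simulation horizon (failure duration); both normalizations are
    illustrative assumptions."""
    severity = (flood_volume / total_inflow) * (mean_flood_duration / total_duration)
    return 1.0 - severity

# Toy numbers for one failure level:
print(resilience_index(flood_volume=1200.0, total_inflow=20000.0,
                       mean_flood_duration=3.5, total_duration=24.0))
```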
Nordgren, Lena; Söderlund, Anne
2015-01-01
Younger people with heart failure often experience poor self-rated health. Furthermore, poor self-rated health is associated with long-term sick leave and disability pension. Socio-demographic factors affect the ability to return to work. However, little is known about people on sick leave due to heart failure. The aim of this study was to investigate associations between self-rated health, mood, socio-demographic factors, sick leave compensation, encounters with healthcare professionals and social insurance officers, and self-estimated ability to return to work for people on sick leave due to heart failure. This population-based investigation had a cross-sectional design. Data were collected in Sweden in 2012 from two official registries and from a postal questionnaire. In total, 590 subjects, aged 23-67, responded (response rate 45.8%). Descriptive statistics, correlation analyses (Spearman bivariate analysis) and logistic regression analyses were used to investigate associations. Poor self-rated health was strongly associated with full sick leave compensation (OR = 4.1, p < .001). Compared self-rated health was moderately associated with low income (OR = .6, p = .003). Good self-rated health was strongly associated with positive encounters with healthcare professionals (OR = 3.0, p = .022) and with the impact of positive encounters with healthcare professionals on self-estimated ability to return to work (OR = 3.3, p < .001). People with heart failure are sick-listed for long periods of time and to a great extent receive disability pension. Not being able to work reduces quality of life. Positive encounters with healthcare professionals and social insurance officers can be supportive when people with heart failure struggle to remain in working life.
Generic Health Management: A System Engineering Process Handbook Overview and Process
NASA Technical Reports Server (NTRS)
Wilson, Moses Lee; Spruill, Jim; Hong, Yin Paw
1995-01-01
Health Management, a systems engineering process, is one of those processes, techniques, and technologies used to define, design, analyze, build, verify, and operate a system from the viewpoint of preventing, or minimizing, the effects of failure or degradation. It supports all ground and flight elements during manufacturing, refurbishment, integration, and operation through combined use of hardware, software, and personnel. This document will integrate the Health Management Processes (six phases) into five phases in such a manner that Health Management is never a stand-alone task/effort which separately defines independent work functions.
Reliability analysis based on the losses from failures.
Todinov, M T
2006-04-01
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure are a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in the case of a system consisting of blocks arranged in series is achieved by determining, for each block individually, the reliabilities of the components in the block that minimize the sum of the capital costs, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages over cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different numbers of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
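The linear-combination statement above translates directly into a one-line computation; a minimal sketch with invented mode probabilities and losses:

```python
def expected_loss_given_failure(modes):
    """Expected losses given failure for mutually exclusive failure modes:
    each mode's expected loss weighted by the conditional probability that
    it is the mode initiating failure. All numbers are illustrative."""
    return sum(p * loss for p, loss in modes)

# (conditional probability of initiating failure, expected loss in $k)
modes = [(0.6, 40.0), (0.3, 150.0), (0.1, 900.0)]
print(expected_loss_given_failure(modes))  # 0.6*40 + 0.3*150 + 0.1*900 = 159.0
```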
Decrease the Number of Glovebox Glove Breaches and Failures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurtle, Jackie C.
2013-12-24
Los Alamos National Laboratory (LANL) is committed to the protection of the workers, public, and environment while performing work, and uses gloveboxes as engineered controls to protect workers from exposure to hazardous materials while performing plutonium operations. Glovebox gloves are a weak link in the engineered controls and are a major cause of radiation contamination events, which can result in potential worker exposure and localized contamination, making operational areas off-limits and putting programmatic work on hold. Each day of lost opportunity at Technical Area (TA) 55, Plutonium Facility (PF) 4 is estimated at $1.36 million. Between July 2011 and June 2013, TA-55-PF-4 had 65 glovebox glove breaches and failures, an average of 2.7 per month. The glovebox work follows the five-step safety process promoted at LANL, with a decision diamond interjected for whether or not a glove breach or failure event occurred in the course of performing glovebox work. In the event that no glove breach or failure is detected, there is an additional decision for whether or not contamination is detected. In the event that contamination is detected, the possibility of a glove breach or failure event is revisited.
ERIC Educational Resources Information Center
Byrom, Tina; Lightfoot, Nic
2013-01-01
Higher education (HE) is often viewed as a conduit for social mobility through which working-class students can secure improved life-chances. However, the link between HE and social mobility is largely viewed as unproblematic. Little research has explored the possible impact of academic failure (in HE) on the trajectories of working-class students…
Reliability culture at La Silla Paranal Observatory
NASA Astrophysics Data System (ADS)
Gonzalez, Sergio
2010-07-01
The Maintenance Department at the La Silla Paranal Observatory has been an important base for keeping the operations of the observatory at a good level of reliability and availability. Several strategies have been implemented and improved in order to meet these requirements and keep the systems and equipment working properly when they are required. For that reason, one of the latest improvements has been the introduction of the concept of reliability, which involves much more than simply speaking about reliability concepts. It involves the use of technologies, data collection, data analysis, decision making, committees concentrated on analysis of failure modes and how they can be eliminated, aligning the results with the requirements of our internal partners, and establishing steps to achieve success. Some of these steps have already been implemented: data collection, use of technologies, analysis of data, development of priority tools, committees dedicated to analyzing data, and people dedicated to reliability analysis. This has permitted us to optimize our process, analyze where we can improve, avoid functional failures, and reduce the range of failures in several systems and subsystems; all this has had a positive impact in terms of results for our Observatory. All these tools are part of the reliability culture that allows our system to operate with a high level of reliability and availability.
Investigation of Mechanical Breakdowns Leading to Lock Closures
2017-06-01
A wide variety of issues lead to emergency closures, yet no specific problem(s) that frequently cause unscheduled closures were identified (Table 2-3). [Extraction residue: the maintenance-management system described is used to create basic work orders, report problems or malfunctions, or request work to be done, and to create and process work orders; the work order page records the failure class, problem, cause, and remedy (Figure 3-1).] Appendix C includes a full list of failure classes.
49 CFR 214.529 - In-service failure of primary braking system.
Code of Federal Regulations, 2014 CFR
2014-10-01
49 CFR 214.529 (2014-10-01 edition), Roadway Maintenance Machines and Hi-Rail Vehicles: In-service failure of primary braking system. (a) In the event of a total in-service failure of its primary braking system, an on-track roadway maintenance...
49 CFR 214.529 - In-service failure of primary braking system.
Code of Federal Regulations, 2013 CFR
2013-10-01
49 CFR 214.529 (2013-10-01 edition), Roadway Maintenance Machines and Hi-Rail Vehicles: In-service failure of primary braking system. (a) In the event of a total in-service failure of its primary braking system, an on-track roadway maintenance...
49 CFR 214.529 - In-service failure of primary braking system.
Code of Federal Regulations, 2010 CFR
2010-10-01
49 CFR 214.529 (2010-10-01 edition), Roadway Maintenance Machines and Hi-Rail Vehicles: In-service failure of primary braking system. (a) In the event of a total in-service failure of its primary braking system, an on-track roadway maintenance...
Studies in knowledge-based diagnosis of failures in robotic assembly
NASA Technical Reports Server (NTRS)
Lam, Raymond K.; Pollard, Nancy S.; Desai, Rajiv S.
1990-01-01
The telerobot diagnostic system (TDS) is a knowledge-based system that is being developed for identification and diagnosis of failures in the space robotic domain. The system is able to isolate the symptoms of the failure, generate failure hypotheses based on these symptoms, and test their validity at various levels by interpreting or simulating the effects of the hypotheses on results of plan execution. The implementation of the TDS is outlined. The classification of failures and the types of system models used by the TDS are discussed. A detailed example of the TDS approach to failure diagnosis is provided.
Education, Social Class and Social Exclusion.
ERIC Educational Resources Information Center
Whitty, Geoff
2001-01-01
Concerned about working-class failure, argues that recent (British) government policies have insufficiently considered sociological studies on how social class affects educational success or failure. Social-inclusion policies must address forms of middle-class self-exclusion from mainstream public education as well as working-class social…
CRYOGENIC UPPER STAGE SYSTEM SAFETY
NASA Technical Reports Server (NTRS)
Smith, R. Kenneth; French, James V.; LaRue, Peter F.; Taylor, James L.; Pollard, Kathy (Technical Monitor)
2005-01-01
NASA's Exploration Initiative will require development of many new systems or systems of systems. One specific example is that safe, affordable, and reliable upper stage systems to place cargo and crew in stable low Earth orbit are urgently required. In this paper, we examine the failure history of previous upper stages with liquid oxygen (LOX)/liquid hydrogen (LH2) propulsion systems. Launch data from 1964 until midyear 2005 are analyzed and presented. This data analysis covers upper stage systems from the Ariane, Centaur, H-IIA, Saturn, and Atlas in addition to other vehicles. Upper stage propulsion system elements have the highest impact on reliability. This paper discusses failure occurrence in all aspects of the operational phases (i.e., initial burn, coast, and restarts) and trends in failure rates over time. In an effort to understand the likelihood of future failures in flight, we present timelines of engine system failures relevant to initial flight histories. Some evidence suggests that propulsion system failures resulting from design problems occur shortly after initial development of the propulsion system, whereas failures due to manufacturing or assembly processing errors may occur during any phase of the system build process. This paper also explores the detectability of historical failures. Observations from this review are used to ascertain the potential for increased upper stage reliability given investments in integrated system health management. Based on a clear understanding of the failure and success history of previous efforts by multiple space hardware development groups, the paper investigates potential improvements that can be realized through application of system safety principles.
Experiences with Extra-Vehicular Activities in Response to Critical ISS Contingencies
NASA Technical Reports Server (NTRS)
Van Cise, Edward A.; Kelly, Brian J.; Radigan, Jeffery P.; Cranmer, Curtis W.
2016-01-01
Initial "Big 14" work was put to the test for the first time in 2010. Deficiencies were found in some of the planning and approaches to that work; Failure Response Assessment Team created in 2010 to address deficiencies -Identify and perform engineering analysis in operations products prior to failure; incorporate results into operations products -Identify actions for protecting ISS against a Next Worse Failure after the first failure occurs -Better document not only EVA products but also planning products, assumptions, and open actions; Pre-failure investments against critical failures best postures ISS for swift response and recovery -A type of insurance policy -Has proven effective in a number of contingency EVA cases since 2010. Planning for MBSU R&R in 2012, Second PM R&R in 2013, EXT MDM R&R in 2014; Current FRAT schedule projects completion of all analysis in 2018
Complex Dynamics of the Power Transmission Grid (and other Critical Infrastructures)
NASA Astrophysics Data System (ADS)
Newman, David
2015-03-01
Our modern societies depend crucially on a web of complex critical infrastructures such as power transmission networks, communication systems, transportation networks and many others. These infrastructure systems display a great number of the characteristic properties of complex systems. Important among these characteristics, they exhibit infrequent large cascading failures that often obey a power law distribution in their probability versus size. This power law behavior suggests that conventional risk analysis does not apply to these systems. It is thought that much of this behavior comes from the dynamical evolution of the system as it ages, is repaired, and is upgraded, and as the operational rules evolve, with human decision making playing an important role in the dynamics. In this talk, infrastructure systems as complex dynamical systems will be introduced and some of their properties explored. The majority of the talk will then be focused on the electric power transmission grid, though many of the results can be easily applied to other infrastructures. General properties of the grid will be discussed and results from a dynamical complex systems power transmission model will be compared with real world data. Then we will look at a variety of uses of this type of model. As examples, we will discuss the impact of size and network homogeneity on grid robustness, the change in risk of failure as the generation mix changes (more distributed vs. centralized, for example), as well as the effect of operational changes such as changing the operational risk aversion or grid upgrade strategies. One of the important outcomes from this work is the realization that "improvements" in the system components and operational efficiency do not always improve the system robustness, and can in fact greatly increase the risk, when measured as the risk of large failure.
A review of wiring system safety in space power systems
NASA Technical Reports Server (NTRS)
Stavnes, Mark W.; Hammoud, Ahmad N.
1993-01-01
Wiring system failures have resulted from arc propagation in the wiring harnesses of current aerospace vehicles. These failures occur when the insulation becomes conductive upon the initiation of an arc. In some cases, the conductive path of the carbon arc track displays a high enough resistance such that the current is limited, and therefore may be difficult to detect using conventional circuit protection. Often, such wiring failures are not simply the result of insulation failure, but are due to a combination of wiring system factors. Inadequate circuit protection, unforgiving system designs, and careless maintenance procedures can contribute to a wiring system failure. This paper approaches the problem with respect to the overall wiring system, in order to determine what steps can be taken to improve the reliability, maintainability, and safety of space power systems. Power system technologies, system designs, and maintenance procedures which have led to past wiring system failures will be discussed. New technologies, design processes, and management techniques which may lead to improved wiring system safety will be introduced.
NASA Astrophysics Data System (ADS)
Hayes, Richard; Beets, Tim; Beno, Joseph; Booth, John; Cornell, Mark; Good, John; Heisler, James; Hill, Gary; Kriel, Herman; Penney, Charles; Rafal, Marc; Savage, Richard; Soukup, Ian; Worthington, Michael; Zierer, Joseph
2012-09-01
In support of the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX), the Center for Electromechanics at The University of Texas at Austin was tasked with developing the new Tracker and control system to support the HETDEX Wide-Field Upgrade. The tracker carries the 3,100 kg Prime Focus Instrument Package and Wide Field Corrector approximately 13 m above the 10 m diameter primary mirror. Its safe and reliable operation by a sophisticated control system over a 20-year lifetime is a paramount requirement for the project. To account for potential failures and hazards to both the equipment and the personnel involved, an extensive Failure Modes and Effects Analysis (FMEA) was completed early in the project. This task required participation of all the stakeholders in a multi-day meeting with numerous follow-up exchanges. The exercise drove a number of significant design decisions and requirements that might not have been identified this early in the project without this process. The result is a system that has multiple layers of active and passive safety systems to protect the tens of millions of dollars of hardware involved and the people who operate it. This paper will describe the background of the FMEA process, how it was utilized on HETDEX, the critical outcomes, how the required safety systems were implemented, and how they have worked in operation. It should be of interest to engineers, designers, and managers engaging in complex multi-disciplinary and parallel engineering projects that involve automated hardware and control systems with potentially hazardous operating scenarios.
Effect of reducing interns' weekly work hours on sleep and attentional failures.
Lockley, Steven W; Cronin, John W; Evans, Erin E; Cade, Brian E; Lee, Clark J; Landrigan, Christopher P; Rothschild, Jeffrey M; Katz, Joel T; Lilly, Craig M; Stone, Peter H; Aeschbach, Daniel; Czeisler, Charles A
2004-10-28
Knowledge of the physiological effects of extended (24 hours or more) work shifts in postgraduate medical training is limited. We aimed to quantify work hours, sleep, and attentional failures among first-year residents (postgraduate year 1) during a traditional rotation schedule that included extended work shifts and during an intervention schedule that limited scheduled work hours to 16 or fewer consecutive hours. Twenty interns were studied during two three-week rotations in intensive care units, each during both the traditional and the intervention schedule. Subjects completed daily sleep logs that were validated with regular weekly episodes (72 to 96 hours) of continuous polysomnography (r=0.94) and work logs that were validated by means of direct observation by study staff (r=0.98). Seventeen of 20 interns worked more than 80 hours per week during the traditional schedule (mean, 84.9; range, 74.2 to 92.1). All interns worked less than 80 hours per week during the intervention schedule (mean, 65.4; range, 57.6 to 76.3). On average, interns worked 19.5 hours per week less (P<0.001), slept 5.8 hours per week more (P<0.001), slept more in the 24 hours preceding each working hour (P<0.001), and had less than half the rate of attentional failures while working during on-call nights (P=0.02) on the intervention schedule as compared with the traditional schedule. Eliminating interns' extended work shifts in an intensive care unit significantly increased sleep and decreased attentional failures during night work hours. Copyright 2004 Massachusetts Medical Society.
A fault-tolerant intelligent robotic control system
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Tso, Kam Sing
1993-01-01
This paper describes the concept, design, and features of a fault-tolerant intelligent robotic control system being developed for space and commercial applications that require high dependability. The comprehensive strategy integrates system level hardware/software fault tolerance with task level handling of uncertainties and unexpected events for robotic control. The underlying architecture for system level fault tolerance is the distributed recovery block which protects against application software, system software, hardware, and network failures. Task level fault tolerance provisions are implemented in a knowledge-based system which utilizes advanced automation techniques such as rule-based and model-based reasoning to monitor, diagnose, and recover from unexpected events. The two level design provides tolerance of two or more faults occurring serially at any level of command, control, sensing, or actuation. The potential benefits of such a fault tolerant robotic control system include: (1) a minimized potential for damage to humans, the work site, and the robot itself; (2) continuous operation with a minimum of uncommanded motion in the presence of failures; and (3) more reliable autonomous operation providing increased efficiency in the execution of robotic tasks and decreased demand on human operators for controlling and monitoring the robotic servicing routines.
Application of Density Functional Theory to Systems Containing Metal Atoms
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Arnold, James O. (Technical Monitor)
1997-01-01
The accuracy of density functional theory (DFT) for problems involving metal atoms is considered. The DFT results are compared with experiment as well as with results obtained using the coupled cluster approach. The comparisons include geometries, frequencies, and bond energies. The systems considered include MO2, M(OH)n+, MNO+, and MCO2+. DFT works well for frequencies and geometries, even in cases with symmetry breaking; however, some examples have been found where the symmetry breaking is quite severe and the DFT methods do not work well. The calculation of bond energies is more difficult, and examples of the successes as well as failures of DFT will be given.
The Widening Gap: A New Book on the Struggle To Balance Work and Caregiving. Research-in-Brief.
ERIC Educational Resources Information Center
Rahmanou, Hedieh
This research brief presents some main findings from a study of employer-based support systems in the United States to help families meet their caregiving responsibilities, and focuses on the failure of existing policies to support caregiving responsibilities of low-income parents and women. The brief also presents policy alternatives to help…
Cameras Monitor Spacecraft Integrity to Prevent Failures
NASA Technical Reports Server (NTRS)
2014-01-01
The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained from working with NASA into an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.
Accelerated Aging Experiments for Capacitor Health Monitoring and Prognostics
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan S.; Celaya, Jose Ramon; Biswas, Gautam; Goebel, Kai
2012-01-01
This paper discusses experimental setups for health monitoring and prognostics of electrolytic capacitors under nominal operation and accelerated aging conditions. Electrolytic capacitors have higher failure rates than other components in electronic systems such as power drives and power converters. Our current work focuses on developing first-principles-based degradation models for electrolytic capacitors under varying electrical and thermal stress conditions. Prognostics and health management for electronic systems aims to predict the onset of faults, study causes for system degradation, and accurately compute remaining useful life. Accelerated life test methods are often used in prognostics research as a way to model multiple causes and assess the effects of the degradation process through time. They also allow for the identification and study of different failure mechanisms and their relationships under different operating conditions. Experiments are designed for aging of the capacitors such that the degradation pattern induced by the aging can be monitored and analyzed. Experimental setups and data collection methods are presented to demonstrate this approach.
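As a rough illustration of the prognostics idea described above (tracking a degradation signal under aging and extrapolating to a failure threshold), here is a minimal sketch in Python. The degradation values, exponential trend, and 2x failure threshold are illustrative assumptions, not data from the study.

```python
import numpy as np

# Illustrative aging data: hours vs. a monitored degradation signal
# (e.g., normalized ESR). Values are synthetic, not from the study.
t = np.array([0, 50, 100, 150, 200, 250], dtype=float)
esr = np.array([1.00, 1.08, 1.17, 1.27, 1.38, 1.50])

# Assume exponential degradation, esr(t) = exp(a*t + b);
# fit a straight line to log(esr) by least squares.
a, b = np.polyfit(t, np.log(esr), 1)

# Remaining useful life: time until the signal crosses an assumed
# failure threshold (here 2x the initial value).
threshold = 2.0
t_fail = (np.log(threshold) - b) / a
rul = t_fail - t[-1]
print(f"predicted failure at ~{t_fail:.0f} h, RUL ~ {rul:.0f} h")
```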
Machine vision method for online surface inspection of easy open can ends
NASA Astrophysics Data System (ADS)
Mariño, Perfecto; Pastoriza, Vicente; Santamaría, Miguel
2006-10-01
Easy open can end manufacturing in the food canning sector currently relies on a manual, non-destructive testing procedure to guarantee can end repair coating quality. This surface inspection is a visual inspection made by human inspectors. Due to the high production rate (100 to 500 ends per minute), only a small part of each lot is verified (statistical sampling); an automatic, online inspection system based on machine vision has therefore been developed to improve this quality control. The inspection system uses a fuzzy model to make the acceptance/rejection decision for each can end from the information obtained by the vision sensor. In this work, the inspection method is presented. This surface inspection system checks the total production, classifies the ends in agreement with an expert human inspector, provides operators with interpretable output for identifying failure causes and reducing mean time to repair, and allows the minimum acceptable can end repair coating quality to be adjusted.
[Application of root cause analysis in healthcare].
Hsu, Tsung-Fu
2007-12-01
The main purpose of this study was to explore various aspects of root cause analysis (RCA), including its definition, underlying rationale, main objective, implementation procedures, most common analysis methodology (fault tree analysis, FTA), and advantages and methodologic limitations in regard to healthcare. Several adverse events that occurred at a certain hospital were also analyzed by the author using FTA as part of this study. RCA is a process employed to identify basic and contributing causal factors underlying performance variations associated with adverse events. The rationale of RCA offers a systemic approach to improving patient safety that does not assign blame or liability to individuals. The four-step process involved in conducting an RCA includes: RCA preparation, proximate cause identification, root cause identification, and recommendation generation and implementation. FTA is a logical, structured process that can help identify potential causes of system failure before actual failures occur. Some advantages and significant methodologic limitations of RCA are discussed. Finally, we emphasize that errors stem principally from faults attributable to system design, practice guidelines, work conditions, and other human factors, which lead health professionals into negligence or mistakes in healthcare delivery. We must explore the root causes of medical errors to eliminate potential system failure factors. A systemic approach is also needed to resolve medical errors and move beyond a culture centered on assigning fault to individuals. By constructing a real environment of patient-centered, safe healthcare, we can encourage clients to accept state-of-the-art healthcare services.
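To make the FTA step concrete, the sketch below computes a top-event probability from basic-event probabilities through AND/OR gates, assuming independent events. The tree structure and the numbers are hypothetical and are not taken from the hospital cases analyzed in the study.

```python
# Fault tree gates for independent basic events:
# AND gate: all inputs must fail; OR gate: at least one input fails.
def gate_and(probs):
    p = 1.0
    for x in probs:
        p *= x
    return p

def gate_or(probs):
    p_none = 1.0
    for x in probs:
        p_none *= (1.0 - x)
    return 1.0 - p_none

# Hypothetical tree: a wrong-dose event occurs if
# (prescription error AND verification failure) OR dispensing error.
p_prescription = 0.01
p_verification = 0.10
p_dispensing = 0.002

p_top = gate_or([gate_and([p_prescription, p_verification]), p_dispensing])
print(f"P(top event) = {p_top:.4f}")  # ~0.0030
```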
Reversal of cognitive decline: A novel therapeutic program
Bredesen, Dale E.
2014-01-01
This report describes a novel, comprehensive, and personalized therapeutic program that is based on the underlying pathogenesis of Alzheimer's disease, and which involves multiple modalities designed to achieve metabolic enhancement for neurodegeneration (MEND). The first 10 patients who have utilized this program include patients with memory loss associated with Alzheimer's disease (AD), amnestic mild cognitive impairment (aMCI), or subjective cognitive impairment (SCI). Nine of the 10 displayed subjective or objective improvement in cognition beginning within 3-6 months, with the one failure being a patient with very late stage AD. Six of the patients had had to discontinue working or were struggling with their jobs at the time of presentation, and all were able to return to work or continue working with improved performance. Improvements have been sustained, and at this time the longest patient follow-up is two and one-half years from initial treatment, with sustained and marked improvement. These results suggest that a larger, more extensive trial of this therapeutic program is warranted. The results also suggest that, at least early in the course, cognitive decline may be driven in large part by metabolic processes. Furthermore, given the failure of monotherapeutics in AD to date, the results raise the possibility that such a therapeutic system may be useful as a platform on which drugs that would fail as monotherapeutics may succeed as key components of a therapeutic system. PMID:25324467
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Waas, Anthony M.
2011-01-01
A thermodynamically-based work potential theory for modeling progressive damage and failure in fiber-reinforced laminates is presented. The current, multiple-internal state variable (ISV) formulation, enhanced Schapery theory (EST), utilizes separate ISVs for modeling the effects of damage and failure. Damage is considered to be the effect of any structural changes in a material that manifest as pre-peak non-linearity in the stress versus strain response. Conversely, failure is taken to be the effect of the evolution of any mechanisms that results in post-peak strain softening. It is assumed that matrix microdamage is the dominant damage mechanism in continuous fiber-reinforced polymer matrix laminates, and its evolution is controlled with a single ISV. Three additional ISVs are introduced to account for failure due to mode I transverse cracking, mode II transverse cracking, and mode I axial failure. Typically, failure evolution (i.e., post-peak strain softening) results in pathologically mesh dependent solutions within a finite element method (FEM) setting. Therefore, consistent character element lengths are introduced into the formulation of the evolution of the three failure ISVs. Using the stationarity of the total work potential with respect to each ISV, a set of thermodynamically consistent evolution equations for the ISVs is derived. The theory is implemented into commercial FEM software. Objectivity of total energy dissipated during the failure process, with regards to refinements in the FEM mesh, is demonstrated. The model is also verified against experimental results from two laminated, T800/3900-2 panels containing a central notch and different fiber-orientation stacking sequences. Global load versus displacement, global load versus local strain gage data, and macroscopic failure paths obtained from the models are compared to the experiments.
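The stationarity principle invoked above can be written compactly. The notation below, a total work potential split into an elastic part and per-ISV dissipation potentials, is a generic rendering assumed for illustration; the paper's own symbols may differ.

```latex
% Total work potential: elastic part plus work dissipated by each ISV S_i
W_T \;=\; W_e(\boldsymbol{\varepsilon}, S_1, \ldots, S_n)
      \;+\; \sum_i W_{s_i}(S_i)

% Stationarity with respect to each ISV yields its evolution equation
\frac{\partial W_T}{\partial S_i} = 0
\quad\Longrightarrow\quad
\frac{\partial W_e}{\partial S_i} \;=\; -\,\frac{\mathrm{d} W_{s_i}}{\mathrm{d} S_i}
```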
Tensile strength and failure load of sutures for robotic surgery.
Abiri, Ahmad; Paydar, Omeed; Tao, Anna; LaRocca, Megan; Liu, Kang; Genovese, Bradley; Candler, Robert; Grundfest, Warren S; Dutson, Erik P
2017-08-01
Robotic surgical platforms have seen increased use among minimally invasive gastrointestinal surgeons (von Fraunhofer et al. in J Biomed Mater Res 19(5):595-600, 1985. doi: 10.1002/jbm.820190511). However, these systems still suffer from a lack of haptic feedback, which results in exertion of excessive force, often leading to suture failures (Barbash et al. in Ann Surg 259(1):1-6, 2014. doi: 10.1097/SLA.0b013e3182a5c8b8). This work catalogs tensile strength and failure load among commonly used sutures in an effort to prevent robotic surgical consoles from exceeding identified thresholds. Trials were thus conducted on common sutures varying in material type, gauge size, rate of pulling force, and method of applied force. Polydioxanone, Silk, Vicryl, and Prolene, gauges 5-0 to 1-0, were pulled till failure using a commercial mechanical testing system. 2-0 and 3-0 sutures were further tested for the effect of pull rate on failure load at rates of 50, 200, and 400 mm/min. 3-0 sutures were also pulled till failure using a da Vinci robotic surgical system in unlooped, looped, and at-the-needle-body arrangements. Generally, Vicryl and PDS sutures had the highest mechanical strength (47-179 kN/cm^2), while Silk had the lowest (40-106 kN/cm^2). Larger diameter sutures withstand higher total force, but finer gauges consistently show higher force per unit area. The difference between material types becomes increasingly significant as the diameters decrease. Comparisons of identical suture materials and gauges show 27-50% improvement in tensile strength over data obtained in 1985 (Ballantyne in Surg Endosc Other Interv Tech 16(10):1389-1402, 2002. doi: 10.1007/s00464-001-8283-7). No significant differences were observed when sutures were pulled at different rates. Reduction in suture strength appeared to be strongly affected by the technique used to manipulate the suture. Availability of suture tensile strength and failure load data will help define software safety protocols for alerting a surgeon prior to suture failure during robotic surgery. Awareness of suture strength weakening with direct instrument manipulation may lead to the development of better techniques to further reduce intraoperative suture breakage.
NASA Technical Reports Server (NTRS)
Bundick, W. Thomas
1990-01-01
A methodology for designing a failure detection and identification (FDI) system to detect and isolate control element failures in aircraft control systems is reviewed. An FDI system design for a modified B-737 aircraft resulting from this methodology is also reviewed, and the results of evaluating this system via simulation are presented. The FDI system performed well in a no-turbulence environment, but it experienced an unacceptable number of false alarms in atmospheric turbulence. An adaptive FDI system, which adjusts thresholds and other system parameters based on the estimated turbulence level, was developed and evaluated. The adaptive system performed well over all turbulence levels simulated, reliably detecting all but the smallest magnitude partially-missing-surface failures.
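The adaptive mechanism described here, raising detection thresholds with the estimated turbulence level to suppress false alarms, can be sketched as follows. The residual model, gain, and threshold values are illustrative assumptions, not the parameters of the B-737 system.

```python
import numpy as np

rng = np.random.default_rng(0)

def detect(residual, turbulence_level, base_threshold=1.0, gain=3.0):
    """Flag a failure when the residual exceeds a threshold that
    grows with the estimated turbulence level."""
    threshold = base_threshold + gain * turbulence_level
    return abs(residual) > threshold

# No-failure residuals: noise whose spread scales with turbulence.
for sigma in (0.1, 0.5, 1.0):
    r = rng.normal(0.0, 1.0 + 2.0 * sigma, size=10_000)
    fixed = np.mean(np.abs(r) > 2.0)                      # fixed threshold
    adaptive = np.mean([detect(x, sigma, 2.0) for x in r])  # adaptive
    print(f"turbulence={sigma}: false alarms fixed={fixed:.3f}, "
          f"adaptive={adaptive:.3f}")
```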
Destructive Single-Event Failures in Diodes
NASA Technical Reports Server (NTRS)
Casey, Megan C.; Gigliuto, Robert A.; Lauenstein, Jean-Marie; Wilcox, Edward P.; Kim, Hak; Chen, Dakai; Phan, Anthony M.; LaBel, Kenneth A.
2013-01-01
In this summary, we have shown that diodes are susceptible to destructive single-event effects and that these failures occur along the guard ring. By determining the last passing voltages, a safe operating area can be derived. By derating from those values, rather than from the rated voltage, as is currently done with power MOSFETs, we can work to ensure the safety of future missions. However, there are still open questions about these failures. Are they limited to a single manufacturer, a small number of manufacturers, or do they affect all of them? Is there a threshold rated voltage that must be exceeded to see these failures? With future work, we hope to answer these questions. In the full paper, laser results will also be presented to verify that failures only occur along the guard ring.
Reliability Effects of Surge Current Testing of Solid Tantalum Capacitors
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander
2007-01-01
Solid tantalum capacitors are widely used in space applications to filter low-frequency ripple currents in power supply circuits and stabilize DC voltages in the system. Tantalum capacitors manufactured per military specifications (MIL-PRF-55365) are established reliability components and have less than 0.001% failures per 1000 hours (a failure rate below 10 FIT) for grades D or S, positioning these parts among electronic components with the highest reliability characteristics. Still, failures of tantalum capacitors do happen, and when they occur they can have catastrophic consequences for the system. This is due to a short-circuit failure mode, which might be damaging to a power supply, and also to the capability of tantalum capacitors with manganese cathodes to self-ignite when a failure occurs in low-impedance applications. During such a failure, a substantial amount of energy is released by the exothermic reaction of the tantalum pellet with oxygen generated by the overheated manganese oxide cathode, resulting not only in destruction of the part but also in damage to the board and surrounding components. A specific feature of tantalum capacitors, compared to ceramic parts, is a relatively large capacitance, which in contemporary low-size chip capacitors reaches tens or hundreds of microfarads. This might result in so-called surge current or turn-on failures when the board is first powered up. Such a failure, which is considered the most prevalent type of failure in tantalum capacitors [1], is due to fast changes of the voltage in the circuit, dV/dt, producing high surge current spikes, I_sp = C × (dV/dt), when current in the circuit is unrestricted. These spikes can reach hundreds of amperes and cause catastrophic failures in the system. The mechanism of surge current failures is not yet completely understood, and different hypotheses have been discussed in the relevant literature. These include a sustained scintillation breakdown model [1-3]; electrical oscillations in circuits with a relatively high inductance [4-6]; local overheating of the cathode [5, 7, 8]; mechanical damage to the tantalum pentoxide dielectric caused by the impact of MnO2 crystals [2, 9, 10]; and stress-induced generation of electron traps caused by electromagnetic forces developed during current spikes [11]. A commonly accepted explanation of surge current failures is that with unlimited current supply during surge current conditions, the self-healing mechanism in tantalum capacitors does not work, and what would be a minor scintillation spike if the current were limited becomes a catastrophic failure of the part [1, 12]. However, our data show that the scintillation breakdown voltages are significantly greater than the surge current breakdown voltages, so it is still not clear why a part that has no scintillations would fail at the same voltage during surge current testing (SCT).
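As a quick worked illustration of the spike formula quoted above, I_sp = C × (dV/dt): with assumed values of C = 100 µF and a 28 V bus voltage applied over 100 µs, the unrestricted spike would be 100e-6 × (28 / 100e-6) = 28 A. The sketch below just automates that arithmetic; the numbers are not from the paper.

```python
def surge_current(capacitance_f, delta_v, rise_time_s):
    """Peak surge current I = C * dV/dt for an unrestricted turn-on."""
    return capacitance_f * (delta_v / rise_time_s)

# Illustrative values: 100 uF part, 28 V bus, 100 us voltage rise.
print(surge_current(100e-6, 28.0, 100e-6), "A")  # 28.0 A
```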
Physiology of respiratory disturbances in muscular dystrophies
Lo Mauro, Antonella
2016-01-01
Muscular dystrophy is a group of inherited myopathies characterised by progressive skeletal muscle wasting, including of the respiratory muscles. Respiratory failure, i.e. when the respiratory system fails in its gas exchange functions, is a common feature in muscular dystrophy, being the main cause of death, and it is a consequence of lung failure, pump failure or a combination of the two. The former is due to recurrent aspiration, the latter to progressive weakness of respiratory muscles and an increase in the load against which they must contract. In fact, both the resistive and elastic components of the work of breathing increase due to airway obstruction and chest wall and lung stiffening, respectively. The respiratory disturbances in muscular dystrophy are restrictive pulmonary function, hypoventilation, altered thoracoabdominal pattern, hypercapnia, dyspnoea, impaired regulation of breathing, inefficient cough and sleep disordered breathing. They can be present at different rates according to the type of muscular dystrophy and its progression, leading to different onset of each symptom, prognosis and degree of respiratory involvement. Key points A common feature of muscular dystrophy is respiratory failure, i.e. the inability of the respiratory system to provide proper oxygenation and carbon dioxide elimination. In the lung, respiratory failure is caused by recurrent aspiration, and leads to hypoxaemia and hypercarbia. Ventilatory failure in muscular dystrophy is caused by increased respiratory load and respiratory muscles weakness. Respiratory load increases in muscular dystrophy because scoliosis makes chest wall compliance decrease, atelectasis and fibrosis make lung compliance decrease, and airway obstruction makes airway resistance increase. The consequences of respiratory pump failure are restrictive pulmonary function, hypoventilation, altered thoracoabdominal pattern, hypercapnia, dyspnoea, impaired regulation of breathing, inefficient cough and sleep disordered breathing. Educational aims To understand the mechanisms leading to respiratory disturbances in patients with muscular dystrophy. To understand the impact of respiratory disturbances in patients with muscular dystrophy. To provide a brief description of the main forms of muscular dystrophy with their respiratory implications. PMID:28210319
Physiology of respiratory disturbances in muscular dystrophies.
Lo Mauro, Antonella; Aliverti, Andrea
2016-12-01
Muscular dystrophy is a group of inherited myopathies characterised by progressive skeletal muscle wasting, including of the respiratory muscles. Respiratory failure, i.e. when the respiratory system fails in its gas exchange functions, is a common feature in muscular dystrophy, being the main cause of death, and it is a consequence of lung failure, pump failure or a combination of the two. The former is due to recurrent aspiration, the latter to progressive weakness of respiratory muscles and an increase in the load against which they must contract. In fact, both the resistive and elastic components of the work of breathing increase due to airway obstruction and chest wall and lung stiffening, respectively. The respiratory disturbances in muscular dystrophy are restrictive pulmonary function, hypoventilation, altered thoracoabdominal pattern, hypercapnia, dyspnoea, impaired regulation of breathing, inefficient cough and sleep disordered breathing. They can be present at different rates according to the type of muscular dystrophy and its progression, leading to different onset of each symptom, prognosis and degree of respiratory involvement. A common feature of muscular dystrophy is respiratory failure, i.e. the inability of the respiratory system to provide proper oxygenation and carbon dioxide elimination. In the lung, respiratory failure is caused by recurrent aspiration, and leads to hypoxaemia and hypercarbia. Ventilatory failure in muscular dystrophy is caused by increased respiratory load and respiratory muscle weakness. Respiratory load increases in muscular dystrophy because scoliosis makes chest wall compliance decrease, atelectasis and fibrosis make lung compliance decrease, and airway obstruction makes airway resistance increase. The consequences of respiratory pump failure are restrictive pulmonary function, hypoventilation, altered thoracoabdominal pattern, hypercapnia, dyspnoea, impaired regulation of breathing, inefficient cough and sleep disordered breathing. To understand the mechanisms leading to respiratory disturbances in patients with muscular dystrophy. To understand the impact of respiratory disturbances in patients with muscular dystrophy. To provide a brief description of the main forms of muscular dystrophy with their respiratory implications.
Stetson, Peter D.; McKnight, Lawrence K.; Bakken, Suzanne; Curran, Christine; Kubose, Tate T.; Cimino, James J.
2002-01-01
Medical errors are common, costly and often preventable. Work in understanding the proximal causes of medical errors demonstrates that systems failures predispose to adverse clinical events. Most of these systems failures are due to lack of appropriate information at the appropriate time during the course of clinical care. Problems with clinical communication are common proximal causes of medical errors. We have begun a project designed to measure the impact of wireless computing on medical errors. We report here on our efforts to develop an ontology representing the intersection of medical errors, information needs and the communication space. We will use this ontology to support the collection, storage and interpretation of project data. The ontology’s formal representation of the concepts in this novel domain will help guide the rational deployment of our informatics interventions. A real-life scenario is evaluated using the ontology in order to demonstrate its utility.
NASA Astrophysics Data System (ADS)
Espinosa, H. D.; Peng, B.; Moldovan, N.; Friedmann, T. A.; Xiao, X.; Mancini, D. C.; Auciello, O.; Carlisle, J.; Zorman, C. A.; Mehregany, M.
2006-08-01
In this work, the authors report the mechanical properties of three emerging materials in thin film form: single crystal silicon carbide (3C-SiC), ultrananocrystalline diamond, and hydrogen-free tetrahedral amorphous carbon. The materials are being employed in micro- and nanoelectromechanical systems. Several reports addressed some of the mechanical properties of these materials but they are based in different experimental approaches. Here, they use a single testing method, the membrane deflection experiment, to compare these materials' Young's moduli, characteristic strengths, fracture toughnesses, and theoretical strengths. Furthermore, they analyze the applicability of Weibull theory [Proc. Royal Swedish Inst. Eng. Res. 153, 1 (1939); ASME J. Appl. Mech. 18, 293 (1951)] in the prediction of these materials' failure and document the volume- or surface-initiated failure modes by fractographic analysis. The findings are of particular relevance to the selection of micro- and nanoelectromechanical systems materials for various applications of interest.
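For readers unfamiliar with the Weibull treatment mentioned above, the failure probability of a brittle specimen under stress sigma is commonly written P_f = 1 - exp[-(sigma/sigma_0)^m], with modulus m and scale sigma_0. The sketch below fits these two parameters to synthetic strength data; the values are illustrative, not the paper's measurements.

```python
import numpy as np

# Synthetic fracture strengths (GPa) for illustration only.
strengths = np.sort(np.array([2.1, 2.4, 2.6, 2.8, 2.9, 3.1, 3.3, 3.6]))
n = len(strengths)

# Median-rank estimate of failure probability for each specimen.
pf = (np.arange(1, n + 1) - 0.3) / (n + 0.4)

# Weibull plot: ln(-ln(1-Pf)) vs ln(sigma) is linear with slope m.
x = np.log(strengths)
y = np.log(-np.log(1.0 - pf))
m, c = np.polyfit(x, y, 1)
sigma0 = np.exp(-c / m)  # intercept c = -m * ln(sigma0)
print(f"Weibull modulus m ~ {m:.1f}, scale sigma0 ~ {sigma0:.2f} GPa")
```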
CTS TEP thermal anomalies: Heat pipe system performance
NASA Technical Reports Server (NTRS)
Marcus, B. D.
1977-01-01
Part of the investigation of the thermal anomalies of the transmitter experiment package (TEP) on the Communications Technology Satellite (CTS), which were observed on four occasions in 1977, is summarized. Specifically, the possible failure modes of the variable conductance heat pipe system (VCHPS) used for principal thermal control of the high-power traveling wave tube in the TEP are considered. Further, the investigation examines how those malfunctions may have given rise to the TEP thermal anomalies. Using CTS flight data, ground test results, analysis conclusions, and other relevant information, the investigation concentrated on artery depriming as the most likely VCHPS failure mode. Possible depriming mechanisms considered in the study include freezing of the working fluid, Marangoni flow, and gas evolution within the arteries. The report concludes that while depriming of the heat pipe arteries is consistent with the bulk of the observed data, the factors which cause the arteries to deprime have yet to be identified.
NASA Astrophysics Data System (ADS)
Fremlin, Carl; Beckers, Jasper; Crowley, Brendan; Rauch, Joseph; Scoville, Jim
2017-10-01
The Neutral Beam system on the DIII-D tokamak consists of eight ion sources using the Common Long Pulse Source (CLPS) design. During helium operation, desired for research regarding the ITER pre-nuclear phase, it has been observed that ion source arc chamber performance steadily deteriorates, with the source eventually failing due to electrical breakdown of the insulation. A significant investment of manpower and time is required for repairs. To study the cause of failure, a small analogue of the DIII-D neutral beam arc chamber has been constructed. This poster presents the design and analysis of the arc chamber, including the PLC-based operational control system for the experiment, analysis of the magnetic confinement, and details of the diagnostic suite. Work supported in part by US DoE under the Science Undergraduate Laboratory Internship (SULI) program and under DE-FC02-04ER54698.
Control allocation for gimballed/fixed thrusters
NASA Astrophysics Data System (ADS)
Servidia, Pablo A.
2010-02-01
Some overactuated control systems use a control distribution law, usually called a control allocator, between the controller and the set of actuators. Beyond the allocator itself, the actuator configuration may be designed to remain operable after a single point of failure, or to meet system optimization and/or decentralization objectives. For some types of actuators a control allocator is used even without redundancy, a good example being the design and operation of thruster configurations. In fact, since the thruster mass flow direction and magnitude can only be changed within certain limits, this must be considered in the feedback implementation. In this work, thruster configuration design is considered for the fixed (F), single-gimbal (SG), and double-gimbal (DG) thruster cases. The minimum number of thrusters for each case is obtained, and for the resulting configurations a specific control allocation is proposed using a nonlinear programming algorithm, under nominal and single-point-of-failure conditions.
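A common way to pose the allocation step described here is as a constrained least-squares problem: find bounded thruster commands u whose combined effect B u best matches the controller demand v. The effectiveness matrix, bounds, and use of scipy below are stand-in assumptions; the paper's nonlinear programming formulation for gimballed thrusters is more involved.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Effectiveness matrix B (3 demanded torque axes, 6 fixed thrusters):
# columns are per-thruster torque contributions (illustrative values).
B = np.array([[ 1.0, -1.0,  0.0,  0.0,  0.5, -0.5],
              [ 0.0,  0.0,  1.0, -1.0,  0.5,  0.5],
              [ 0.5,  0.5, -0.5, -0.5,  1.0, -1.0]])

v = np.array([0.4, -0.2, 0.6])   # demanded torque from the controller

# Thrusters can only push (0 <= u <= u_max); solve min ||B u - v||^2.
res = lsq_linear(B, v, bounds=(0.0, 1.0))
print("commands:", np.round(res.x, 3))
print("residual torque error:", np.round(B @ res.x - v, 4))
```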
Phased Array Probe Optimization for the Inspection of Titanium Billets
NASA Astrophysics Data System (ADS)
Rasselkorde, E.; Cooper, I.; Wallace, P.; Lupien, V.
2010-02-01
The manufacturing process of titanium billets can produce multiple sub-surface defects that are particularly difficult to detect during the early stages of production. Failure to detect these defects can lead to subsequent in-service failure. A novel automated quality control system is being developed for the inspection of titanium billets destined for use in aerospace applications. The sensors will be deployed by an automated system to minimise the use of manual inspections, which should improve the quality and reliability of these critical inspections early in the manufacturing process. This paper presents the first part of the work: the design and simulation of the phased array ultrasonic inspection of the billets. A series of phased array transducers were designed to optimise the ultrasonic inspection of a ten-inch-diameter billet made from Titanium 6Al-4V. A comparison was performed between different probes, including a 2D annular sectorial array.
Dynamic least-cost optimisation of wastewater system remedial works requirements.
Vojinovic, Z; Solomatine, D; Price, R K
2006-01-01
In recent years, there has been increasing concern about wastewater system failure and about identifying an optimal set of remedial works requirements. Several methodologies have been developed and applied in asset management activities by water companies worldwide, but often with limited success. To fill the gap, several research projects have explored algorithms for optimising remedial works requirements, but mostly for drinking water supply systems; very limited work has been carried out for wastewater assets. Commonly used methods suffer from one or more of the following deficiencies: inadequate representation of system complexity, failure to incorporate a dynamic model into the decision-making loop, a poor choice of optimisation technique, and limited experience in applying that technique. This paper is oriented towards resolving these issues and discusses a new approach for the optimisation of wastewater system remedial works requirements. It is proposed that the search for the optimum is performed by a global optimisation tool (with various random search algorithms) while system performance is simulated by a hydrodynamic pipe network model. The work on assembling all required elements and developing appropriate interface protocols between the two tools, aimed at decoding potential remedial solutions into the pipe network model and calculating the corresponding scenario costs, is currently underway.
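The proposed coupling, a global random-search optimiser proposing remedial options that a hydrodynamic network model then scores, follows a generic simulate-and-score loop like the sketch below. The candidate encoding, cost terms, and the simulate_network stand-in are assumptions for illustration; a real implementation would call the hydrodynamic pipe network model instead.

```python
import random

ACTIONS = [None, "reline", "replace"]          # per-pipe remedial options
COST = {None: 0.0, "reline": 1.0, "replace": 3.0}
N_PIPES = 20

def simulate_network(plan):
    """Stand-in for the hydrodynamic model: returns a penalty for
    predicted flooding/failure given the remedial plan."""
    fixed = sum(a is not None for a in plan)
    return max(0.0, 50.0 - 4.0 * fixed) + random.uniform(0, 2)

def total_cost(plan):
    capex = sum(COST[a] for a in plan)
    return capex + simulate_network(plan)

best, best_cost = None, float("inf")
random.seed(1)
for _ in range(2000):                          # global random search
    plan = [random.choice(ACTIONS) for _ in range(N_PIPES)]
    c = total_cost(plan)
    if c < best_cost:
        best, best_cost = plan, c
print("best cost:", round(best_cost, 2))
```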
[Failure modes and effects analysis in the prescription, validation and dispensing process].
Delgado Silveira, E; Alvarez Díaz, A; Pérez Menéndez-Conde, C; Serna Pérez, J; Rodríguez Sagrado, M A; Bermejo Vicedo, T
2012-01-01
To apply a failure modes and effects analysis to the prescription, validation and dispensing process for hospitalised patients. A work group analysed all of the stages included in the process from prescription to dispensing, identifying the most critical errors and establishing potential failure modes which could produce a mistake. The possible causes, their potential effects, and the existing control systems were analysed to try and stop them from developing. The Hazard Score was calculated, choosing those that were ≥ 8, and a Severity Index = 4 was selected independently of the hazard Score value. Corrective measures and an implementation plan were proposed. A flow diagram that describes the whole process was obtained. A risk analysis was conducted of the chosen critical points, indicating: failure mode, cause, effect, severity, probability, Hazard Score, suggested preventative measure and strategy to achieve so. Failure modes chosen: Prescription on the nurse's form; progress or treatment order (paper); Prescription to incorrect patient; Transcription error by nursing staff and pharmacist; Error preparing the trolley. By applying a failure modes and effects analysis to the prescription, validation and dispensing process, we have been able to identify critical aspects, the stages in which errors may occur and the causes. It has allowed us to analyse the effects on the safety of the process, and establish measures to prevent or reduce them. Copyright © 2010 SEFH. Published by Elsevier Espana. All rights reserved.
Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis
NASA Astrophysics Data System (ADS)
Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang
2017-07-01
To address the problems that, in traditional reliability evaluation of machine center components, the component reliability model exhibits deviation and the evaluation result is low because failure propagation is overlooked, a new reliability evaluation method based on cascading failure analysis and failure influence degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influence degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and the results show the following: 1) The reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) The difference between the comprehensive and inherent reliability of a system component is positively correlated with that component's failure influence degree, which provides a theoretical basis for reliability allocation in machine center systems.
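The influence-assessment step can be illustrated with a directed cascading-failure graph and the PageRank algorithm named in the abstract. The toy graph below (edges point from a failing component to components it can drag down) and the use of networkx are illustrative assumptions, not the paper's machine center data.

```python
import networkx as nx

# Directed cascading-failure graph: edge (u, v) means a failure of u
# can propagate to v. Toy machine-center components.
G = nx.DiGraph([("spindle", "toolholder"), ("coolant", "spindle"),
                ("axis_drive", "spindle"), ("controller", "axis_drive"),
                ("controller", "coolant")])

# Rank components by how strongly failures funnel into them:
# PageRank on the propagation graph gives an influence-degree score.
scores = nx.pagerank(G, alpha=0.85)
for node, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{node:12s} {s:.3f}")
```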
NASA Technical Reports Server (NTRS)
Watring, Dale A. (Inventor); Johnson, Martin L. (Inventor)
1996-01-01
An ampoule failure system for use in material processing furnaces comprising a containment cartridge and an ampoule failure sensor. The containment cartridge contains an ampoule of toxic material therein and is positioned within a furnace for processing. An ampoule failure probe is positioned in the containment cartridge adjacent the ampoule for detecting a potential harmful release of toxic material therefrom during processing. The failure probe is spaced a predetermined distance from the ampoule and is chemically chosen so as to undergo a timely chemical reaction with the toxic material upon the harmful release thereof. The ampoule failure system further comprises a data acquisition system which is positioned externally of the furnace and is electrically connected to the ampoule failure probe so as to form a communicating electrical circuit. The data acquisition system includes an automatic shutdown device for shutting down the furnace upon the harmful release of toxic material. It also includes a resistance measuring device for measuring the resistance of the failure probe during processing. The chemical reaction causes a step increase in resistance of the failure probe whereupon the automatic shutdown device will responsively shut down the furnace.
Magnezi, Racheli; Hemi, Asaf; Hemi, Rina
2016-01-01
Background Risk management in health care systems applies to all hospital employees and directors as they deal with human life and emergency routines. There is a constant need to decrease risk and increase patient safety in the hospital environment. The purpose of this article is to review the laboratory testing procedures for parathyroid hormone and adrenocorticotropic hormone (which are characterized by short half-lives) and to track failure modes and risks, and offer solutions to prevent them. During a routine quality improvement review at the Endocrine Laboratory in Tel Hashomer Hospital, we discovered these tests are frequently repeated unnecessarily due to multiple failures. The repetition of the tests inconveniences patients and leads to extra work for the laboratory and logistics personnel as well as the nurses and doctors who have to perform many tasks with limited resources. Methods A team of eight staff members accompanied by the Head of the Endocrine Laboratory formed the team for analysis. The failure mode and effects analysis model (FMEA) was used to analyze the laboratory testing procedure and was designed to simplify the process steps and indicate and rank possible failures. Results A total of 23 failure modes were found within the process, 19 of which were ranked by level of severity. The FMEA model prioritizes failures by their risk priority number (RPN). For example, the most serious failure was the delay after the samples were collected from the department (RPN = 226.1). Conclusion This model helped us to visualize the process in a simple way. After analyzing the information, solutions were proposed to prevent failures, and a method to completely avoid the top four problems was also developed. PMID:27980440
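The FMEA ranking used in studies like this one multiplies severity, occurrence, and detectability scores into a risk priority number (RPN) and sorts failure modes by it. The sketch below shows that arithmetic on hypothetical failure modes; the 1-10 scales and the entries are illustrative, not the laboratory's actual worksheet.

```python
# FMEA: RPN = severity * occurrence * detection (1-10 scales assumed).
failure_modes = [
    # (description, severity, occurrence, detection)
    ("sample delayed after collection", 8, 7, 4),
    ("tube mislabeled",                 9, 2, 5),
    ("assay calibration drift",         6, 4, 3),
]

ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3],
                reverse=True)
for desc, s, o, d in ranked:
    print(f"RPN={s*o*d:3d}  {desc}")
```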
Simulating fail-stop in asynchronous distributed systems
NASA Technical Reports Server (NTRS)
Sabel, Laura; Marzullo, Keith
1994-01-01
The fail-stop failure model appears frequently in the distributed systems literature. However, in an asynchronous distributed system, the fail-stop model cannot be implemented. In particular, it is impossible to reliably detect crash failures in an asynchronous system. In this paper, we show that it is possible to specify and implement a failure model that is indistinguishable from the fail-stop model from the point of view of any process within an asynchronous system. We give necessary conditions for a failure model to be indistinguishable from the fail-stop model, and derive lower bounds on the amount of process replication needed to implement such a failure model. We present a simple one-round protocol for implementing one such failure model, which we call simulated fail-stop.
Neutron-Induced Failures in Semiconductor Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wender, Stephen Arthur
2017-03-13
Single event effects are a very significant failure mode in modern semiconductor devices and may limit their reliability. Accelerated testing is important for the semiconductor industry. Considerably more work is needed in this field to mitigate the problem; mitigation will probably come from physicists and electrical engineers working together.
ERIC Educational Resources Information Center
Tulis, Maria; Ainley, Mary
2011-01-01
The current investigation was designed to identify emotion states students experience during mathematics activities, and in particular to distinguish emotions contingent on experiences of success and experiences of failure. Students' task-related emotional responses were recorded following experiences of success and failure while working with an…
Diagnosis of delay-deadline failures in real time discrete event models.
Biswas, Santosh; Sarkar, Dipankar; Bhowal, Prodip; Mukhopadhyay, Siddhartha
2007-10-01
In this paper, a method for fault detection and diagnosis (FDD) of real-time systems is developed. A modeling framework termed real time discrete event system (RTDES) is presented and a mechanism for FDD of such models is developed. The use of the RTDES framework for FDD is an extension of work reported in the discrete event system (DES) literature based on finite state machines (FSM). FDD of RTDES models is suited to real-time systems because of the capability of representing timing faults that lead to failures in terms of erroneous delays and deadlines, which FSM-based approaches cannot address. The concept of measurement restriction of variables is introduced for RTDES, and the consequent equivalence of states and indistinguishability of transitions are characterized. Faults are modeled in terms of an unmeasurable condition variable in the state map. Diagnosability is defined and a procedure for constructing a diagnoser is provided. A checkable property of the diagnoser is shown to be a necessary and sufficient condition for diagnosability. The methodology is illustrated with an example of a hydraulic cylinder.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Sangkeun; Chen, Liangzhe; Duan, Sisi
Critical Infrastructures (CIs) such as energy, water, and transportation are complex networks that are crucial for sustaining day-to-day commodity flows vital to national security, economic stability, and public safety. The nature of these CIs is such that failures caused by an extreme weather event or a man-made incident can trigger widespread cascading failures, sending ripple effects at regional or even national scales. To minimize such effects, it is critical for emergency responders to identify existing or potential vulnerabilities within CIs during such stressor events in a systematic and quantifiable manner and take appropriate mitigating actions. We present here a novel critical infrastructure monitoring and analysis system named URBAN-NET. The system includes a software stack and tools for monitoring CIs, pre-processing data, interconnecting multiple CI datasets as a heterogeneous network, identifying vulnerabilities through graph-based topological analysis, and predicting consequences based on what-if simulations along with visualization. As a proof-of-concept, we present several case studies to show the capabilities of our system. We also discuss remaining challenges and future work.
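One simple instance of the graph-based topological analysis mentioned above is locating articulation points, i.e., nodes whose loss disconnects part of the interdependent network. The sketch below does this with networkx on a toy multi-infrastructure graph; the topology and choice of metrics are illustrative assumptions rather than URBAN-NET's actual pipeline.

```python
import networkx as nx

# Toy interdependent infrastructure graph: power, water, transport
# nodes joined by service dependencies (illustrative only).
G = nx.Graph([("substation_A", "pump_1"), ("substation_A", "signal_hub"),
              ("pump_1", "treatment"), ("signal_hub", "rail_yard"),
              ("substation_B", "treatment"), ("substation_B", "rail_yard"),
              ("treatment", "hospital_feed")])

# Articulation points: single failures that disconnect the network.
critical = list(nx.articulation_points(G))
print("single points of failure:", critical)

# Betweenness centrality as a coarse criticality ranking.
for node, c in sorted(nx.betweenness_centrality(G).items(),
                      key=lambda kv: -kv[1])[:3]:
    print(f"{node:14s} {c:.2f}")
```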
Degradation and ESR Failures in MnO2 Chip Tantalum Capacitors
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander A.
2017-01-01
Equivalent series resistance (ESR) of chip tantalum capacitors determines the rate of energy delivery and power dissipation, thus affecting the temperature and reliability of the parts. Employment of advanced capacitors with reduced ESR decreases power losses and improves efficiency in power systems. Stability of ESR is essential for correct operation of power units, and malfunctions and failures can result when ESR becomes too high or too low. Several recently observed cases of ESR values in CWR29 capacitors exceeding the specified limit raised concerns regarding environmental factors affecting ESR and the adequacy of the existing screening and qualification testing. In this work, results of stress testing of various types of military and commercial capacitors, obtained over years by the GSFC test lab and NEPP projects involving ESR measurements, are described. Environmental stress tests include testing in humidity and vacuum chambers, temperature cycling, long-term storage at high temperatures, and various soldering simulation tests. Note that in many cases parts failed due to excessive leakage currents or reduced breakdown voltages; however, only ESR-related degradation and failures are discussed here. Mechanisms of the moisture effect are discussed, and recommendations to improve the screening and qualification system are suggested.
Fault-tolerant nonlinear adaptive flight control using sliding mode online learning.
Krüger, Thomas; Schnetter, Philipp; Placzek, Robin; Vörsmann, Peter
2012-08-01
An expanded nonlinear model inversion flight control strategy using sliding mode online learning for neural networks is presented. The proposed control strategy is implemented for a small unmanned aircraft system (UAS). This class of aircraft is highly susceptible to nonlinearities and disturbances such as atmospheric turbulence, model uncertainties, and, of course, system failures. These systems therefore make a sensible testbed for evaluating fault-tolerant, adaptive flight control strategies. Within this work, the concept of feedback linearization is combined with feedforward neural networks to compensate for inversion errors and other nonlinear effects. Backpropagation-based adaptation laws of the network weights are used for online training. Within these adaptation laws, the standard gradient descent backpropagation algorithm is augmented with the concept of sliding mode control (SMC). Implemented as a learning algorithm, this nonlinear control strategy treats the neural network as a controlled system and allows a stable, dynamic calculation of the learning rates. While considering the system's stability, this robust online learning method therefore offers a higher speed of convergence, especially in the presence of external disturbances. The SMC-based flight controller is tested and compared with the standard gradient descent backpropagation algorithm in the presence of system failures. Copyright © 2012 Elsevier Ltd. All rights reserved.
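A rough sketch of the kind of update rule described, i.e., gradient descent backpropagation augmented with a sliding-mode term that bounds the learning rate. The specific sliding variable, gains, and form of the law are placeholders, not the authors' exact adaptation law.

import numpy as np

def smc_backprop_step(w, grad, error, lam=1.0, eta0=0.05, k=0.1):
    """One weight update: gradient descent plus a sliding-mode term.
    error = (e, de_dt); s is the sliding variable, and sign(s) drives the
    state toward s = 0. Gains and the form of s are illustrative only."""
    s = error[1] + lam * error[0]
    eta = eta0 / (1.0 + abs(s))  # bounded, state-dependent learning rate
    smc_term = k * np.sign(s) * grad / (np.linalg.norm(grad) + 1e-9)
    return w - eta * grad - smc_term

w = np.array([0.5, -0.2])
print(smc_backprop_step(w, grad=np.array([0.1, 0.4]), error=(0.3, -0.05)))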
Revisiting control establishments for emerging energy hubs
NASA Astrophysics Data System (ADS)
Nasirian, Vahidreza
Emerging small-scale energy systems, i.e., microgrids and smart grids, rely on centralized controllers for voltage regulation, load sharing, and economic dispatch. However, the central controller is a single point of failure in such a design, as failure of either the controller or the attached communication links can render the entire system inoperable. This work seeks alternative distributed control structures to improve system reliability and scalability. A cooperative distributed controller is proposed that uses a noise-resilient voltage estimator and handles global voltage regulation and load sharing across a DC microgrid. Distributed adaptive droop control is also investigated as an alternative solution. A droop-free distributed control is offered to handle voltage/frequency regulation and load sharing in AC systems; this solution does not require frequency measurement and thus features fast frequency regulation. Distributed economic dispatch is also studied, where a distributed protocol is designed that controls generation units to merge their incremental costs into a consensus and thus drive the entire system to generate at minimum cost. Experimental verification and Hardware-in-the-Loop (HIL) simulations are used to study the efficacy of the proposed control protocols.
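The incremental-cost consensus idea can be sketched in a few lines: each unit exchanges its incremental cost with neighbors and nudges it by the global supply-demand mismatch, so all costs converge to a common value while total generation meets demand. The cost coefficients, gains, and communication graph below are invented for illustration.

import numpy as np

# Quadratic costs C_i(P) = 0.5*a_i*P^2 + b_i*P, incremental cost a_i*P + b_i.
a = np.array([0.10, 0.08, 0.12])         # assumed cost coefficients
b = np.array([2.0, 2.5, 1.8])
demand = 90.0
neighbors = {0: [1], 1: [0, 2], 2: [1]}  # sparse communication graph

lam = b.copy()                           # each unit's local incremental cost
for _ in range(2000):
    P = (lam - b) / a                    # generation implied by local lambda
    mismatch = demand - P.sum()
    new = lam.copy()
    for i, nbrs in neighbors.items():    # consensus + mismatch feedback
        new[i] += 0.4 * sum(lam[j] - lam[i] for j in nbrs) + 0.001 * mismatch
    lam = new
print(lam.round(3), (lam - b) / a)       # equal lambdas; powers sum to ~demand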
Cascade phenomenon against subsequent failures in complex networks
NASA Astrophysics Data System (ADS)
Jiang, Zhong-Yuan; Liu, Zhi-Quan; He, Xuan; Ma, Jian-Feng
2018-06-01
The cascade phenomenon may lead to catastrophic disasters which severely imperil network safety or security in various complex systems such as communication networks, power grids, social networks, and so on. In some flow-based networks, the load of failed nodes can be redistributed locally to their neighboring nodes to prevent, as far as possible, traffic oscillations or large-scale cascading failures. However, in such a local flow redistribution model, a small set of key nodes attacked in sequence can result in network collapse. It is therefore a critical problem to effectively find this set of key nodes in the network. To the best of our knowledge, this work is the first to study the problem comprehensively. We first introduce extra capacity for every node to absorb flow fluctuations from neighbors, and employ two extra-capacity distributions: a degree-based distribution and an average (uniform) distribution. Four heuristic key-node discovery methods are presented: High-Degree-First (HDF), Low-Degree-First (LDF), Random, and a Greedy Algorithm (GA). Extensive simulations are performed on both scale-free networks and random networks. The results show that the greedy algorithm can efficiently find the set of key nodes in both scale-free and random networks. Our work studies network robustness against cascading failures from a novel perspective, and the methods and results are useful for network robustness evaluation and protection.
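A compact simulation of the local flow redistribution model with extra capacity, plus the greedy key-node search, might look like the following; for brevity the seed nodes are failed together rather than attacked one after another, and the degree-proportional initial load and 30% extra capacity are assumptions.

import networkx as nx

def cascade(G, cap, seeds):
    """Local redistribution: a failed node's load is split evenly among its
    surviving neighbors; any neighbor pushed past capacity fails next."""
    load = {n: G.degree(n) for n in G}   # initial load ~ degree (assumed)
    failed, frontier = set(), set(seeds)
    while frontier:
        failed |= frontier
        nxt = set()
        for u in frontier:
            alive = [v for v in G[u] if v not in failed]
            for v in alive:
                load[v] += load[u] / len(alive)
                if load[v] > cap[v]:
                    nxt.add(v)
        frontier = nxt - failed
    return failed

def greedy_key_nodes(G, cap, k):
    """Greedily add the node whose failure enlarges the cascade the most."""
    chosen = []
    for _ in range(k):
        best = max((n for n in G if n not in chosen),
                   key=lambda n: len(cascade(G, cap, chosen + [n])))
        chosen.append(best)
    return chosen

G = nx.barabasi_albert_graph(200, 2, seed=1)
cap = {n: G.degree(n) * 1.3 for n in G}  # degree-based extra capacity (30%)
print(greedy_key_nodes(G, cap, 3))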
Graphical Displays Assist In Analysis Of Failures
NASA Technical Reports Server (NTRS)
Pack, Ginger; Wadsworth, David; Razavipour, Reza
1995-01-01
Failure Environment Analysis Tool (FEAT) computer program enables people to see and better understand effects of failures in system. Uses digraph models to determine what will happen to system if set of failure events occurs and to identify possible causes of selected set of failures. Digraphs or engineering schematics used. Also used in operations to help identify causes of failures after they occur. Written in C language.
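FEAT itself is a C program, but the underlying digraph logic is easy to illustrate: forward reachability gives the effects of a set of failure events, and reverse reachability gives candidate causes of an observed set. The event names below are hypothetical.

import networkx as nx

# Digraph model: an edge u -> v means "failure u can cause failure v".
D = nx.DiGraph([("pump_seal_leak", "pump_cavitation"),
                ("pump_cavitation", "low_coolant_flow"),
                ("valve_stuck_closed", "low_coolant_flow"),
                ("low_coolant_flow", "core_overtemp")])

def effects(model, events):
    """Everything that can happen if the given failure events occur."""
    out = set(events)
    for e in events:
        out |= nx.descendants(model, e)
    return out

def possible_causes(model, observed):
    """Single events whose downstream effects cover the observed failures."""
    return [n for n in model if observed <= (nx.descendants(model, n) | {n})]

print(effects(D, {"valve_stuck_closed"}))
print(possible_causes(D, {"low_coolant_flow", "core_overtemp"}))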
Wire Rope Failure on the Guppy Winch
NASA Technical Reports Server (NTRS)
Figert, John
2016-01-01
On January 6, 2016, at El Paso, the Guppy winch motor was changed. After completion of the operational checks, the load bar was being reinstalled on the cargo pallet when the motor control FORWARD relay failed in the energized position. The pallet was pinned at all locations (each pin has a load capacity of 16,000 lbs.) while the winch was running. The wire rope snapped before aircraft power could be removed. After disassembly, the fractured wire rope was shipped to the ES4 lab for further characterization of the wire rope portion of the failure. The system was being operated without a clear understanding of the system's capability and function. The proximate causes were the failure of the K48 Forward Winch Control Relay in the energized position, which allowed the motor to run continuously without command from the hand controller, and operation of the winch system with both controllers connected to the system, which prevented the emergency stop feature on the hand controller from functioning as designed. An electrical checkout engineering work instruction was completed; it identified the failed relay and confirmed that the emergency stop only paused the system when the STOP buttons on both connected hand controllers were depressed simultaneously. The winch system incorporates a torque-limiting clutch; it is suspected that the clutch did not slip and that the motor did not stall or overload the current limiter. Aircraft Engineering is looking at how to change the procedures to provide a checkout of the clutch and to set a slip torque limit appropriate to support operations.
Sensor Failure Detection of FASSIP System using Principal Component Analysis
NASA Astrophysics Data System (ADS)
Sudarno; Juarsa, Mulya; Santosa, Kussigit; Deswandri; Sunaryo, Geni Rina
2018-02-01
In the Fukushima Daiichi nuclear reactor accident in Japan, the damage to the core and pressure vessel was caused by the failure of the active cooling system (the diesel generators were inundated by the tsunami). Research on passive cooling systems for nuclear power plants is therefore performed to improve the safety aspects of nuclear reactors. The FASSIP system (Passive System Simulation Facility) is an installation used to study the characteristics of passive cooling systems at nuclear power plants. The accuracy of sensor measurements in the FASSIP system is essential, because they are the basis for determining the characteristics of a passive cooling system. In this research, a sensor failure detection method for the FASSIP system is developed, so that indications of sensor failures can be detected early. The method used is Principal Component Analysis (PCA) to reduce the dimension of the sensor data, with the Squared Prediction Error (SPE) and Hotelling's T² statistic as criteria for detecting sensor failure indications. The results show that the PCA method is capable of detecting the occurrence of a failure at any sensor.
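A minimal sketch of the PCA/SPE/Hotelling-T^2 detection logic on synthetic data (the FASSIP sensor set, thresholds, and fault signatures are not available here, so correlated dummy sensors stand in):

import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))             # two hidden process variables
X = latent @ rng.normal(size=(2, 6)) + 0.05 * rng.normal(size=(500, 6))

mu, sd = X.mean(0), X.std(0)
Z = (X - mu) / sd
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
k = 2                                          # retained principal components
P = Vt[:k].T                                   # loadings
lam = S[:k] ** 2 / (len(Z) - 1)                # PC variances

def spe_t2(x):
    z = (x - mu) / sd
    t = z @ P                                  # scores
    spe = float(((z - t @ P.T) ** 2).sum())    # Squared Prediction Error
    t2 = float((t ** 2 / lam).sum())           # Hotelling's T^2
    return spe, t2

good = X[0]
faulty = X[0].copy()
faulty[3] += 8 * sd[3]                         # biased/stuck sensor no. 3
print("good  :", spe_t2(good))
print("faulty:", spe_t2(faulty))               # SPE jumps far above normal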
NASA Astrophysics Data System (ADS)
Dulo, D. A.
Safety critical software systems permeate spacecraft, and in a long term venture like a starship would be pervasive in every system of the spacecraft. Yet software failure today continues to plague both the systems and the organizations that develop them resulting in the loss of life, time, money, and valuable system platforms. A starship cannot afford this type of software failure in long journeys away from home. A single software failure could have catastrophic results for the spaceship and the crew onboard. This paper will offer a new approach to developing safe reliable software systems through focusing not on the traditional safety/reliability engineering paradigms but rather by focusing on a new paradigm: Resilience and Failure Obviation Engineering. The foremost objective of this approach is the obviation of failure, coupled with the ability of a software system to prevent or adapt to complex changing conditions in real time as a safety valve should failure occur to ensure safe system continuity. Through this approach, safety is ensured through foresight to anticipate failure and to adapt to risk in real time before failure occurs. In a starship, this type of software engineering is vital. Through software developed in a resilient manner, a starship would have reduced or eliminated software failure, and would have the ability to rapidly adapt should a software system become unstable or unsafe. As a result, long term software safety, reliability, and resilience would be present for a successful long term starship mission.
Localized Fault Recovery for Nested Fork-Join Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kestor, Gokcen; Krishnamoorthy, Sriram; Ma, Wenjing
Nested fork-join programs scheduled using work stealing can automatically balance load and adapt to changes in the execution environment. In this paper, we design an approach to efficiently recover from faults encountered by these programs. Specifically, we focus on localized recovery of the task space in the presence of fail-stop failures. We present an approach to efficiently track, under work stealing, the relationships between the work executed by various threads. This information is used to identify and schedule the tasks to be re-executed without interfering with normal task execution. The algorithm precisely computes the work lost, incurs minimal re-execution overhead, and can recover from an arbitrary number of failures. Experimental evaluation demonstrates low overheads in the absence of failures, recovery overheads on the same order as the lost work, and much lower recovery costs than alternative strategies.
Performance-based maintenance of gas turbines for reliable control of degraded power systems
NASA Astrophysics Data System (ADS)
Mo, Huadong; Sansavini, Giovanni; Xie, Min
2018-03-01
Maintenance actions are necessary for ensuring proper operation of control systems under component degradation. However, current condition-based maintenance (CBM) models based on component health indices are not suitable for degraded control systems. Indeed, failures of control systems are determined only by the controller outputs, and the feedback mechanism compensates for the control performance loss caused by component deterioration. Thus, control systems may still operate normally even if the component health indices exceed failure thresholds. This work investigates a CBM model for control systems and employs the reduced control performance as a direct degradation measure for deciding maintenance activities. The reduced control performance depends on the underlying component degradation, modelled as a Wiener process, and the feedback mechanism. To this aim, the controller features are quantified by developing a dynamic and stochastic control block diagram-based simulation model, consisting of the degraded components and the control mechanism. At each inspection, the system receives a maintenance action if the control performance deterioration exceeds its preventive-maintenance or failure thresholds. Inspired by realistic cases, the component degradation model considers random start time and unit-to-unit variability. The cost analysis of the maintenance model is conducted via Monte Carlo simulation. Optimal maintenance strategies are investigated to minimize the expected maintenance costs, which are a direct consequence of the control performance. The proposed framework is able to design preventive maintenance actions for a gas power plant, ensuring the required load frequency control performance against a sudden load increase. The optimization results identify the trade-off between system downtime and maintenance costs as a function of preventive maintenance thresholds and inspection frequency. Finally, the control performance-based maintenance model can reduce maintenance costs as compared to CBM and pre-scheduled maintenance.
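The maintenance-threshold optimization can be sketched by simulating a Wiener degradation path, inspecting periodically, and accumulating inspection, preventive, and corrective costs. All rates, costs, and thresholds below are illustrative, and the degradation variable stands in for the control performance loss rather than a raw health index.

import numpy as np

rng = np.random.default_rng(42)

def cost_rate(pm_th, fail_th=10.0, dt=5.0, mu=0.12, sigma=0.4,
              c_insp=1.0, c_pm=20.0, c_cm=100.0, horizon=500.0, runs=2000):
    """Mean cost per unit time of periodic inspection with PM/CM thresholds
    on a Wiener degradation path (here: control performance loss)."""
    total = 0.0
    for _ in range(runs):
        x, t, cost = 0.0, 0.0, 0.0
        while t < horizon:
            t += dt
            x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            cost += c_insp
            if x >= fail_th:    # corrective maintenance: performance lost
                cost, x = cost + c_cm, 0.0
            elif x >= pm_th:    # preventive maintenance
                cost, x = cost + c_pm, 0.0
        total += cost / horizon
    return total / runs

for th in (4.0, 6.0, 8.0):      # sweep the PM threshold, keep the cheapest
    print(th, round(cost_rate(th), 3))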
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Sheldon, Frederick T.
Cyber physical computing infrastructures typically consist of a number of interconnected sites. Their operation critically depends on both cyber components and physical components. Both types of components are subject to attacks of different kinds and frequencies, which must be accounted for in the initial provisioning and subsequent operation of the infrastructure via information security analysis. Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the electric sector failure scenarios and impact analyses presented in the NESCOR Working Group study. From the Section 5 electric sector representative failure scenarios, we extracted the four generic failure scenarios and grouped them into three specific threat categories (confidentiality, integrity, and availability) to the system. These specific failure scenarios serve as a demonstration of our simulation. The analysis using our ABGT simulation demonstrates how to model the electric sector functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the cyber physical infrastructure network with respect to CIA.
Nonlinear viscoelasticity and generalized failure criterion for biopolymer gels
NASA Astrophysics Data System (ADS)
Divoux, Thibaut; Keshavarz, Bavand; Manneville, Sébastien; McKinley, Gareth
2016-11-01
Biopolymer gels display a multiscale microstructure that is responsible for their solid-like properties. Upon external deformation, these soft viscoelastic solids exhibit a generic nonlinear mechanical response characterized by pronounced stress- or strain-stiffening prior to irreversible damage and failure, most often through macroscopic fractures. Here we show on a model acid-induced protein gel that the nonlinear viscoelastic properties of the gel can be described in terms of a 'damping function' which predicts the gel mechanical response quantitatively up to the onset of macroscopic failure. Using a nonlinear integral constitutive equation built upon the experimentally-measured damping function in conjunction with power-law linear viscoelastic response, we derive the form of the stress growth in the gel following the start up of steady shear. We also couple the shear stress response with Bailey's durability criteria for brittle solids in order to predict the critical values of the stress σc and strain γc for failure of the gel, and how they scale with the applied shear rate. This provides a generalized failure criterion for biopolymer gels in a range of different deformation histories. This work was funded by the MIT-France seed fund and by the CNRS PICS-USA scheme (#36939). BK acknowledges financial support from Axalta Coating Systems.
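The stress-growth construction can be made concrete with a single-integral (Wagner-type) model: a power-law relaxation modulus G(t) = S t^(-n) and a damping function h weight the strain history. The exponential damping function and the parameter values below are placeholders; the paper measures h experimentally.

import numpy as np

S_gel, n = 100.0, 0.4             # G(t) = S * t**-n (illustrative power-law gel)

def h(g):
    return np.exp(-g / 2.0)       # placeholder damping function

def stress_startup(gdot, t):
    """sigma(t) = G(t) h(gdot t) gdot t + int_0^t m(s) h(gdot s) gdot s ds,
    with memory function m(s) = -dG/ds = n S s**-(n+1)."""
    s = np.linspace(1e-6, t, 20000)
    f = n * S_gel * s ** (-n) * h(gdot * s) * gdot   # m(s)*h*gdot*s simplified
    integral = ((f[:-1] + f[1:]) / 2 * np.diff(s)).sum()
    return S_gel * t ** -n * h(gdot * t) * gdot * t + integral

for t in (0.1, 1.0, 10.0):
    print(t, round(stress_startup(1.0, t), 2))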
NASA Astrophysics Data System (ADS)
Ye, Xiang; Gao, Weihua; Yan, Yanjun; Osadciw, Lisa A.
2010-04-01
Wind is an important renewable energy source, and the energy and economic returns from building wind farms justify the expensive investments in doing so. However, without an effective monitoring system, underperforming or faulty turbines will cause a huge loss in revenue; early detection of such failures helps prevent these undesired working conditions. We develop three tests, on the power curve, rotor speed curve, and pitch angle curve of an individual turbine. In each test, multiple states are defined to distinguish different working conditions, including complete shut-downs, under-performing states, abnormally frequent default states, as well as normal working states. These three tests are combined to reach a final conclusion, which is more effective than any single test. Through extensive data mining of historical data and verification from farm operators, some state combinations are discovered to be strong indicators of spindle failures, lightning strikes, anemometer faults, etc., for fault detection. In each individual test, and in the score fusion of these tests, we apply multidimensional scaling (MDS) to reduce the high-dimensional feature space to a 3-dimensional visualization, from which it is easier to discover turbine working information. This approach gains a qualitative understanding of turbine performance status to detect faults, and also provides explanations of what has happened for detailed diagnostics. State-of-the-art SCADA (Supervisory Control And Data Acquisition) systems in industry can only answer the question of whether there are abnormal working states; our evaluation of multiple states in multiple tests is also promising for diagnostics. In the future, these tests can be readily incorporated in a Bayesian network for intelligent analysis and decision support.
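The per-test state assignment and the MDS projection can be sketched as follows; the fusion rule (worst state dominates), the three-level state coding, and the synthetic scores are assumptions for illustration.

import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(3)
# Per-turbine states from three tests (power, rotor speed, pitch angle):
# 0 = normal, 1 = under-performing, 2 = shut-down / frequent default.
states = rng.integers(0, 3, size=(40, 3))
states[:5] = [2, 2, 1]                       # a cluster of suspect turbines

def fuse(row):
    return int(row.max())                    # toy fusion: worst test dominates

coords = MDS(n_components=3, dissimilarity="euclidean",
             random_state=0).fit_transform(states.astype(float))
for i in range(5):
    print(i, states[i], "fused:", fuse(states[i]), "xyz:", coords[i].round(2))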
Commercial Aircraft Integrated Vehicle Health Management Study
NASA Technical Reports Server (NTRS)
Reveley, Mary S.; Briggs, Jeffrey L.; Evans, Joni K.; Jones, Sharon Monica; Kurtoglu, Tolga; Leone, Karen M.; Sandifer, Carl E.; Thomas, Megan A.
2010-01-01
Statistical data and literature from academia, industry, and other government agencies were reviewed and analyzed to establish requirements for future work in detection, diagnosis, prognosis, and mitigation for IVHM-related hardware and software. Around 15 to 20 percent of commercial aircraft accidents between 1988 and 2003 involved malfunctions or failures of some aircraft system or component. Engine and landing gear failures/malfunctions dominate both accidents and incidents. The IVHM Project research technologies were found to map to the Joint Planning and Development Office's National Research and Development Plan (RDP) as well as the Safety Working Group's National Aviation Safety Strategic Plan (NASSP). Future directions in aviation technology as related to IVHM were identified by reviewing papers from three conferences across a five-year time span. A total of twenty-one trend groups in propulsion, aeronautics, and aircraft categories were compiled. Current and future directions of IVHM-related technologies were gathered and classified according to eight categories: measurement and inspection, sensors, sensor management, detection, component and subsystem monitoring, diagnosis, prognosis, and mitigation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.
Representations for margins associated with loss of assured safety (LOAS) for weak link (WL)/strong link (SL) systems involving multiple time-dependent failure modes are developed. The following topics are described: (i) defining properties for WLs and SLs, (ii) background on cumulative distribution functions (CDFs) for link failure time, link property value at link failure, and time at which LOAS occurs, (iii) CDFs for failure time margins defined by (time at which SL system fails) – (time at which WL system fails), (iv) CDFs for SL system property values at LOAS, (v) CDFs for WL/SL property value margins defined by (property value at which SL system fails) – (property value at which WL system fails), and (vi) CDFs for SL property value margins defined by (property value of failing SL at time of SL system failure) – (property value of this SL at time of WL system failure). Included in this presentation is a demonstration of a verification strategy based on defining and approximating the indicated margin results with (i) procedures based on formal integral representations and associated quadrature approximations and (ii) procedures based on algorithms for sampling-based approximations.
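For instance, margin (iii) above can be approximated by sampling hypothetical link failure-time distributions and taking the empirical CDF of the difference; the lognormal models and the "system fails when its last link fails" convention below are assumptions, not the report's models.

import numpy as np

rng = np.random.default_rng(7)
N = 100_000
t_wl = rng.lognormal(3.0, 0.5, size=(N, 2))    # WL failure times (assumed)
t_sl = rng.lognormal(4.5, 0.3, size=(N, 2))    # SL failure times (assumed)

# Margin (iii): (time at which SL system fails) - (time at which WL system
# fails), taking "system fails" = last link fails for both.
margin = t_sl.max(axis=1) - t_wl.max(axis=1)
print("P(margin <= 0) =", (margin <= 0).mean())
print("1%/50%/99% quantiles:", np.quantile(margin, [0.01, 0.5, 0.99]).round(1))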
Quek, H C; Tan, Keson B; Nicholls, Jack I
2008-01-01
Biomechanical load-fatigue performance data on single-tooth implant systems with different implant-abutment interface designs are lacking in the literature. This study evaluated the load fatigue performance of 4 implant-abutment interface designs (Brånemark-CeraOne; 3i Osseotite-STA abutment; Replace Select-Easy abutment; and Lifecore Stage-1-COC abutment system). The number of load cycles to fatigue failure of the 4 implant-abutment designs was tested with a custom rotational load fatigue machine. The effect of increasing and decreasing the tightening torque by 20%, respectively, on the load fatigue performance was also investigated. Three different tightening torque levels (recommended torque, -20% recommended torque, +20% recommended torque) were applied to the 4 implant systems. There were 12 test groups with 5 samples in each group. The rotational load fatigue machine subjected specimens to a sinusoidally applied 35 Ncm bending moment at a test frequency of 14 Hz. The number of cycles to failure was recorded, with a cutoff of 5 × 10^6 cycles applied as an upper limit. There were 2 implant failures and 1 abutment screw failure in the Brånemark group. Five abutment screw failures and 4 implant failures were recorded for the 3i system. The Replace Select system had 1 implant failure. Five cone screw failures were noted for the Lifecore system. Analysis of variance revealed no statistically significant difference in load cycles to failure for the 4 different implant-abutment systems torqued at the recommended torque level. A statistically significant difference was found between the -20% torque group and the +20% torque group (P < .05) for the 3i system. Load fatigue performance and failure location are system specific and related to the design characteristics of the implant-abutment combination. It appeared that if the implant-abutment interface was maintained, load fatigue failure would occur at the weakest point of the implant. It is important to use the torque level recommended by the manufacturer.
NASA Technical Reports Server (NTRS)
Farner, Bruce
2013-01-01
A moveable valve for controlling flow of a pressurized working fluid was designed. This valve consists of a hollow, moveable floating piston pressed against a stationary solid seat, and can use the working fluid to seal the valve. This open/closed, novel valve is able to use metal-to-metal seats, without requiring seat sliding action; therefore there are no associated damaging effects. During use, existing standard high-pressure ball valve seats tend to become damaged during rotation of the ball. Additionally, forces acting on the ball and stem create large amounts of friction. The combination of these effects can lead to system failure. In an attempt to reduce damaging effects and seat failures, soft seats in the ball valve have been eliminated; however, the sliding action of the ball across the highly loaded seat still tends to scratch the seat, causing failure. Also, in order to operate, ball valves require the use of large actuators. Positioning the metal-to-metal seats requires more loading, which tends to increase the size of the required actuator, and can also lead to other failures in other areas such as the stem and bearing mechanisms, thus increasing cost and maintenance. This novel non-sliding seat surface valve allows metal-to-metal seats without the damaging effects that can lead to failure, and enables large seating forces without damaging the valve. Additionally, this valve design, even when used with large, high-pressure applications, does not require large conventional valve actuators and the valve stem itself is eliminated. Actuation is achieved with the use of a small, simple solenoid valve. This design also eliminates the need for many seals used with existing ball valve and globe valve designs, which commonly cause failure, too. This, coupled with the elimination of the valve stem and conventional valve actuator, improves valve reliability and seat life. Other mechanical liftoff seats have been designed; however, they have only resulted in increased cost, and incurred other reliability issues. With this novel design, the seat is lifted by simply removing the working fluid pressure that presses it against the seat and no external force is required. By eliminating variables associated with existing ball and globe configurations that can have damaging effects upon a valve, this novel design reduces downtime in rocket engine test schedules and maintenance costs.
Surveys for sensitivity to fibers and potential impacts from fiber induced failures
NASA Technical Reports Server (NTRS)
Butterfield, A. J.
1979-01-01
The surveys for sensitivity to fibers and potential impacts from fiber-induced failures begin with a review of the survey work completed to date, followed by an impact study involving four industrial installations located in Virginia. The observations and results from both the surveys and the study provide guidelines for future efforts. The survey work was done with three broad objectives: (1) identify the pieces of potentially vulnerable equipment as candidates for test; (2) support the transfer function work by gaining an understanding of how fibers could get into a building; and (3) support the economic analysis by understanding what would happen if fibers precipitated a failure in an item of equipment.
Incorporating ideas from computer-supported cooperative work.
Pratt, Wanda; Reddy, Madhu C; McDonald, David W; Tarczy-Hornoch, Peter; Gennari, John H
2004-04-01
Many information systems have failed when deployed into complex health-care settings. We believe that one cause of these failures is the difficulty in systematically accounting for the collaborative and exception-filled nature of medical work. In this methodological review paper, we highlight research from the field of computer-supported cooperative work (CSCW) that could help biomedical informaticists recognize and design around the kinds of challenges that lead to unanticipated breakdowns and eventual abandonment of their systems. The field of CSCW studies how people collaborate with each other and the role that technology plays in this collaboration for a wide variety of organizational settings. Thus, biomedical informaticists could benefit from the lessons learned by CSCW researchers. In this paper, we provide a focused review of CSCW methods and ideas-we review aspects of the field that could be applied to improve the design and deployment of medical information systems. To make our discussion concrete, we use electronic medical record systems as an example medical information system, and present three specific principles from CSCW: accounting for incentive structures, understanding workflow, and incorporating awareness.
Impact of Extended-Duration Shifts on Medical Errors, Adverse Events, and Attentional Failures
Barger, Laura K; Ayas, Najib T; Cade, Brian E; Cronin, John W; Rosner, Bernard; Speizer, Frank E; Czeisler, Charles A
2006-01-01
Background A recent randomized controlled trial in critical-care units revealed that the elimination of extended-duration work shifts (≥24 h) reduces the rates of significant medical errors and polysomnographically recorded attentional failures. This raised the concern that the extended-duration shifts commonly worked by interns may contribute to the risk of medical errors being made, and perhaps to the risk of adverse events more generally. Our current study assessed whether extended-duration shifts worked by interns are associated with significant medical errors, adverse events, and attentional failures in a diverse population of interns across the United States. Methods and Findings We conducted a Web-based survey, across the United States, in which 2,737 residents in their first postgraduate year (interns) completed 17,003 monthly reports. The association between the number of extended-duration shifts worked in the month and the reporting of significant medical errors, preventable adverse events, and attentional failures was assessed using a case-crossover analysis in which each intern acted as his/her own control. Compared to months in which no extended-duration shifts were worked, during months in which between one and four extended-duration shifts and five or more extended-duration shifts were worked, the odds ratios of reporting at least one fatigue-related significant medical error were 3.5 (95% confidence interval [CI], 3.3–3.7) and 7.5 (95% CI, 7.2–7.8), respectively. The respective odds ratios for fatigue-related preventable adverse events, 8.7 (95% CI, 3.4–22) and 7.0 (95% CI, 4.3–11), were also increased. Interns working five or more extended-duration shifts per month reported more attentional failures during lectures, rounds, and clinical activities, including surgery and reported 300% more fatigue-related preventable adverse events resulting in a fatality. Conclusions In our survey, extended-duration work shifts were associated with an increased risk of significant medical errors, adverse events, and attentional failures in interns across the United States. These results have important public policy implications for postgraduate medical education. PMID:17194188
Extended Testability Analysis Tool
NASA Technical Reports Server (NTRS)
Melcher, Kevin; Maul, William A.; Fulton, Christopher
2012-01-01
The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.
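The report logic can be illustrated on a hypothetical failure-mode-by-test detection matrix (the ETA Tool derives the real one from TEAMS Designer output); detectability, test utilization, and ambiguity-group isolation then reduce to simple matrix queries.

import numpy as np

modes = ["fm_pump", "fm_valve", "fm_sensor", "fm_ctrl", "fm_seal"]
tests = ["t_press", "t_flow", "t_volt"]
D = np.array([[1, 1, 0],      # rows: failure modes, cols: tests; 1 = detected
              [0, 1, 0],
              [0, 1, 1],
              [0, 1, 1],
              [0, 0, 0]])

# Detectability report: failure modes no test detects.
print("undetected:", [m for m, row in zip(modes, D) if not row.any()])
# Test utilization report: what each test detects.
for j, t in enumerate(tests):
    print(t, "detects", [modes[i] for i in np.flatnonzero(D[:, j])])
# Isolation report: identical test signatures form ambiguity groups.
sig = {}
for m, row in zip(modes, D):
    sig.setdefault(tuple(row), []).append(m)
print("ambiguity groups:", [g for g in sig.values() if len(g) > 1])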
Evaluation of methods for determining hardware projected life
NASA Technical Reports Server (NTRS)
1971-01-01
An investigation of existing methods of predicting hardware life is summarized by reviewing programs having long life requirements, current research efforts on long life problems, and technical papers reporting work on life predicting techniques. The results indicate that there are no accurate quantitative means to predict hardware life for system level hardware. The effectiveness of test programs and the cause of hardware failures is considered.
Human Support Issues and Systems for the Space Exploration Initiative: Results from Project Outreach
1991-01-01
...that human factors were responsible for mission failure more often than equipment factors. Spacecraft habitability and ergonomics also require more... substantial challenges for designing reliable, flexible joints and dexterous, reliable gloves. Submission #100701 dealt with the ergonomics of work... perception that human factors deals primarily with cockpit displays and ergonomics. The success of long-duration missions will be highly dependent on...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sallaberry, Cedric Jean-Marie; Helton, Jon C.
2015-05-01
Weak link (WL)/strong link (SL) systems are important parts of the overall operational design of high-consequence systems. In such designs, the SL system is very robust and is intended to permit operation of the entire system under, and only under, intended conditions. In contrast, the WL system is intended to fail in a predictable and irreversible manner under accident conditions and render the entire system inoperable before an accidental operation of the SL system. The likelihood that the WL system will fail to deactivate the entire system before the SL system fails (i.e., degrades into a configuration that could allow an accidental operation of the entire system) is referred to as probability of loss of assured safety (PLOAS). This report describes the Fortran 90 program CPLOAS_2 that implements the following representations for PLOAS for situations in which both link physical properties and link failure properties are time-dependent: (i) failure of all SLs before failure of any WL, (ii) failure of any SL before failure of any WL, (iii) failure of all SLs before failure of all WLs, and (iv) failure of any SL before failure of all WLs. The effects of aleatory uncertainty and epistemic uncertainty in the definition and numerical evaluation of PLOAS can be included in the calculations performed by CPLOAS_2. Keywords: Aleatory uncertainty, CPLOAS_2, Epistemic uncertainty, Probability of loss of assured safety, Strong link, Uncertainty analysis, Weak link
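The four orderings can be estimated by straightforward sampling once failure-time distributions are fixed; the Weibull models below are invented for illustration and carry none of the aleatory/epistemic separation that CPLOAS_2 supports.

import numpy as np

rng = np.random.default_rng(11)
N = 200_000
wl = rng.weibull(2.0, size=(N, 2)) * 50.0    # WL failure times (assumed)
sl = rng.weibull(3.0, size=(N, 2)) * 120.0   # SL failure times (assumed)

ploas = {   # the four orderings, estimated as sample fractions
    "all SLs before any WL": (sl.max(1) < wl.min(1)).mean(),
    "any SL before any WL": (sl.min(1) < wl.min(1)).mean(),
    "all SLs before all WLs": (sl.max(1) < wl.max(1)).mean(),
    "any SL before all WLs": (sl.min(1) < wl.max(1)).mean(),
}
for name, p in ploas.items():
    print(f"{name:>24}: {p:.4g}")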
High temperature, oxygen, and performance: Insights from reptiles and amphibians.
Gangloff, Eric J; Telemeco, Rory S
2018-04-25
Much recent theoretical and empirical work has sought to describe the physiological mechanisms underlying thermal tolerance in animals. Leading hypotheses can be broadly divided into two categories that primarily differ in organizational scale: 1) high temperature directly reduces the function of subcellular machinery, such as enzymes and cell membranes, or 2) high temperature disrupts system-level interactions, such as mismatches in the supply and demand of oxygen, prior to having any direct negative effect on the subcellular machinery. Nonetheless, a general framework describing the contexts under which either subcellular component or organ system failure limits organisms at high temperatures remains elusive. With this commentary, we leverage decades of research on the physiology of ectothermic tetrapods (amphibians and non-avian reptiles) to address these hypotheses. Available data suggest both mechanisms are important. Thus, we expand previous work and propose the Hierarchical Mechanisms of Thermal Limitation (HMTL) hypothesis, which explains how subcellular and organ system failures interact to limit performance and set tolerance limits at high temperatures. We further integrate this framework with the thermal performance curve paradigm commonly used to predict the effects of thermal environments on performance and fitness. The HMTL framework appears to successfully explain diverse observations in reptiles and amphibians and makes numerous predictions that remain untested. We hope that this framework spurs further research in diverse taxa and facilitates mechanistic forecasts of biological responses to climate change.
Independent Orbiter Assessment (IOA): Analysis of the purge, vent and drain subsystem
NASA Technical Reports Server (NTRS)
Bynum, M. C., III
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter PV and D (Purge, Vent and Drain) Subsystem hardware. The PV and D Subsystem controls the environment of unpressurized compartments and window cavities, senses hazardous gases, and purges Orbiter/ET Disconnect. The subsystem is divided into six systems: Purge System (controls the environment of unpressurized structural compartments); Vent System (controls the pressure of unpressurized compartments); Drain System (removes water from unpressurized compartments); Hazardous Gas Detection System (HGDS) (monitors hazardous gas concentrations); Window Cavity Conditioning System (WCCS) (maintains clear windows and provides pressure control of the window cavities); and External Tank/Orbiter Disconnect Purge System (prevents cryo-pumping/icing of disconnect hardware). Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Four of the sixty-two failure modes analyzed were determined as single failures which could result in the loss of crew or vehicle. A possible loss of mission could result if any of twelve single failures occurred. Two of the criticality 1/1 failures are in the Window Cavity Conditioning System (WCCS) outer window cavity, where leakage and/or restricted flow will cause failure to depressurize/repressurize the window cavity. Two criticality 1/1 failures represent leakage and/or restricted flow in the Orbiter/ET disconnect purge network which prevent cryopumping/icing of disconnect hardware.
Hall, Allison B; Ziadi, Maria C; Leech, Judith A; Chen, Shin-Yee; Burwash, Ian G; Renaud, Jennifer; deKemp, Robert A; Haddad, Haissam; Mielniczuk, Lisa M; Yoshinaga, Keiichiro; Guo, Ann; Chen, Li; Walter, Olga; Garrard, Linda; DaSilva, Jean N; Floras, John S; Beanlands, Rob S B
2014-09-09
Heart failure with reduced ejection fraction and obstructive sleep apnea (OSA), 2 states of increased metabolic demand and sympathetic nervous system activation, often coexist. Continuous positive airway pressure (CPAP), which alleviates OSA, can improve ventricular function. It is unknown whether this is due to altered oxidative metabolism or presynaptic sympathetic nerve function. We hypothesized that short-term (6-8 weeks) CPAP in patients with OSA and heart failure with reduced ejection fraction would improve myocardial sympathetic nerve function and energetics. Forty-five patients with OSA and heart failure with reduced ejection fraction (left ventricular ejection fraction 35.8±9.7% [mean±SD]) were evaluated with the use of echocardiography and 11C-acetate and 11C-hydroxyephedrine positron emission tomography before and ≈6 to 8 weeks after randomization to receive short-term CPAP (n=22) or no CPAP (n=23). Work metabolic index, an estimate of myocardial efficiency, was calculated as follows: (stroke volume index×heart rate×systolic blood pressure÷Kmono), where Kmono is the monoexponential function fit to the myocardial 11C-acetate time-activity data, reflecting oxidative metabolism. Presynaptic sympathetic nerve function was measured with the use of the 11C-hydroxyephedrine retention index. CPAP significantly increased hydroxyephedrine retention versus no CPAP (Δretention: +0.012 [0.002, 0.021] versus -0.006 [-0.013, 0.005] min(-1); P=0.003). There was no significant change in work metabolic index between groups. However, in those with more severe OSA (apnea-hypopnea index>20 events per hour), CPAP significantly increased both work metabolic index and systolic blood pressure (P<0.05). In patients with heart failure with reduced ejection fraction and OSA, short-term CPAP increased hydroxyephedrine retention, indicating improved myocardial sympathetic nerve function, but overall did not affect energetics. In those with more severe OSA, CPAP may improve cardiac efficiency. Further outcome-based investigation of the consequences of CPAP is warranted. http://www.clinicaltrials.gov. Unique identifier: NCT00756366. © 2014 American Heart Association, Inc.
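As a worked example of the efficiency index defined above (all input values assumed, for illustration only):

svi = 35.0    # stroke volume index, mL/m^2 (assumed)
hr = 70.0     # heart rate, beats/min (assumed)
sbp = 120.0   # systolic blood pressure, mmHg (assumed)
kmono = 0.06  # 11C-acetate clearance constant, min^-1 (assumed)
wmi = svi * hr * sbp / kmono
print(f"work metabolic index = {wmi:.3g}")   # ~4.9e6 in these units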
NASA Astrophysics Data System (ADS)
Sajun Prasad, K.; Panda, Sushanta Kumar; Kar, Sujoy Kumar; Sen, Mainak; Murty, S. V. S. Naryana; Sharma, Sharad Chandra
2017-04-01
Recently, aerospace industries have shown increasing interest in the forming limits of Inconel 718 sheet metals, which can be utilised in designing tools and selecting process parameters for successful fabrication of components. In the present work, the stress-strain response with failure strains was evaluated by uniaxial tensile tests in different orientations, and two-stage work-hardening behavior was observed. In spite of a highly preferred texture, tensile properties showed minor variations in different orientations due to the random distribution of nanoprecipitates. The forming limit strains were evaluated by deforming specimens in seven different strain paths using a limiting dome height (LDH) test facility. Mostly, the specimens failed without prior indication of localized necking. Thus, a fracture forming limit diagram (FFLD) was evaluated, and a bending correction was imposed due to the use of a sub-size hemispherical punch. The failure strains of the FFLD were converted into major-minor stress space (σ-FFLD) and effective plastic strain-stress triaxiality space (ηEPS-FFLD) as failure criteria to avoid strain path dependence. Moreover, an FE model was developed, and the LDH, strain distribution, and failure location were predicted successfully using the above-mentioned failure criteria with two stages of work hardening. Fractographs were correlated with the fracture behavior and formability of the sheet metal.
A diagnosis system using object-oriented fault tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
Spaceborne computing systems must provide reliable, continuous operation for extended periods. Due to weight, power, and volume constraints, these systems must manage resources very effectively. A fault diagnosis algorithm is described which enables fast and flexible diagnoses in the dynamic distributed computing environments planned for future space missions. The algorithm uses a knowledge base that is easily changed and updated to reflect current system status. Augmented fault trees represented in an object-oriented form provide deep system knowledge that is easy to access and revise as a system changes. Given such a fault tree, a set of failure events that have occurred, and a set of failure events that have not occurred, this diagnosis system uses forward and backward chaining to propagate causal and temporal information about other failure events in the system being diagnosed. Once the system has established temporal and causal constraints, it reasons backward from heuristically selected failure events to find a set of basic failure events which are a likely cause of the occurrence of the top failure event in the fault tree. The diagnosis system has been implemented in Common Lisp using Flavors.
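A toy version of the fault-tree reasoning: known-absent failure events prune explanations (the forward-chaining effect), and the remaining sets of basic events explain the top event (the backward-chaining search). The gate structure and event names are invented, and the sketch is written in Python rather than Lisp/Flavors.

class Gate:
    def __init__(self, kind, children):     # kind: "AND" | "OR" | "BASIC"
        self.kind, self.children = kind, children

    def explanations(self, absent):
        """Sets of basic events that could explain this node, after pruning
        any explanation contradicted by known-absent failure events."""
        if self.kind == "BASIC":
            return [] if self.children[0] in absent else [{self.children[0]}]
        child_exps = [c.explanations(absent) for c in self.children]
        if self.kind == "OR":
            return [e for exps in child_exps for e in exps]
        out = [set()]                        # AND: combine child explanations
        for exps in child_exps:
            out = [a | b for a in out for b in exps]
            if not out:
                return []
        return out

basic = lambda name: Gate("BASIC", [name])
top = Gate("OR", [Gate("AND", [basic("pump_fail"), basic("backup_fail")]),
                  basic("bus_fault")])
print(top.explanations(absent=set()))           # all candidate explanations
print(top.explanations(absent={"bus_fault"}))   # bus fault ruled out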
Does working memory capacity predict cross-modally induced failures of awareness?
Kreitz, Carina; Furley, Philip; Simons, Daniel J; Memmert, Daniel
2016-01-01
People often fail to notice unexpected stimuli when they are focusing attention on another task. Most studies of this phenomenon address visual failures induced by visual attention tasks (inattentional blindness). Yet, such failures also occur within audition (inattentional deafness), and people can even miss unexpected events in one sensory modality when focusing attention on tasks in another modality. Such cross-modal failures are revealing because they suggest the existence of a common, central resource limitation. And, such central limits might be predicted from individual differences in cognitive capacity. We replicated earlier evidence, establishing substantial rates of inattentional deafness during a visual task and inattentional blindness during an auditory task. However, neither individual working memory capacity nor the ability to perform the primary task predicted noticing in either modality. Thus, individual differences in cognitive capacity did not predict failures of awareness even though the failures presumably resulted from central resource limitations. Copyright © 2015 Elsevier Inc. All rights reserved.
Knowledge Repository for FMEA-Related Knowledge
NASA Astrophysics Data System (ADS)
Cândea, Gabriela Simona; Kifor, Claudiu Vasile; Cândea, Ciprian
2014-11-01
This paper presents an innovative use of a knowledge system in the Failure Mode and Effects Analysis (FMEA) process, using an ontology to represent the knowledge. The knowledge system is built to serve the multi-project work that is nowadays in place in any manufacturing or service provider, where knowledge must be retained and reused at the company level and not only at the project level. The system follows the FMEA methodology, and the validation of the concept is compliant with the automotive industry standards published by the Automotive Industry Action Group, among others. Collaboration is assured through a web-based GUI that supports multiple-user access at any time.
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2012 CFR
2012-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2013 CFR
2013-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2014 CFR
2014-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
Real-Time Sensor Validation System Developed for Reusable Launch Vehicle Testbed
NASA Technical Reports Server (NTRS)
Jankovsky, Amy L.
1997-01-01
A real-time system for validating sensor health has been developed for the reusable launch vehicle (RLV) program. This system, which is part of the propulsion checkout and control system (PCCS), was designed for use in an integrated propulsion technology demonstrator testbed built by Rockwell International and located at the NASA Marshall Space Flight Center. Work on the sensor health validation system, a result of an industry-NASA partnership, was completed at the NASA Lewis Research Center, then delivered to Marshall for integration and testing. The sensor validation software performs three basic functions: it identifies failed sensors, it provides reconstructed signals for failed sensors, and it identifies off-nominal system transient behavior that cannot be attributed to a failed sensor. The code is initiated by host software before the start of a propulsion system test, and it is called by the host program every control cycle. The output is posted to global memory for use by other PCCS modules. Output includes a list indicating the status of each sensor (i.e., failed, healthy, or reconstructed) and a list of features that are not due to a sensor failure. If a sensor failure is found, the system modifies that sensor's data array by substituting a reconstructed signal, when possible, for use by other PCCS modules.
Statistical Physics of Cascading Failures in Complex Networks
NASA Astrophysics Data System (ADS)
Panduranga, Nagendra Kumar
Systems such as the power grid, world wide web (WWW), and internet are categorized as complex systems because of the presence of a large number of interacting elements. For example, the WWW is estimated to have a billion webpages, and understanding the dynamics of such a large number of individual agents (whose individual interactions might not be fully known) is a challenging task. Complex network representations of these systems have proved to be of great utility. Statistical physics is the study of the emergence of macroscopic properties of systems from the characteristics of the interactions between individual molecules. Hence, statistical physics of complex networks has been an effective approach to study these systems. In this dissertation, I have used statistical physics to study two distinct phenomena in complex systems: i) cascading failures and ii) shortest paths in complex networks. Understanding cascading failures is considered to be one of the "holy grails" in the study of complex systems such as the power grid, transportation networks, and economic systems. Studying failures of these systems as percolation on complex networks has proved to be insightful. Previously, cascading failures have been studied extensively using two different models: k-core percolation and interdependent networks. The first part of this work combines the two models into a general model, solves it analytically, and validates the theoretical predictions through extensive computer simulations. The phase diagram of the percolation transition has been systematically studied as one varies the average local k-core threshold and the coupling between networks. The phase diagram of the combined processes is very rich and includes novel features that do not appear in the models which study each of the processes separately. For example, the phase diagram consists of first- and second-order transition regions separated by two tricritical lines that merge together and enclose a two-stage transition region. In the two-stage transition, the size of the giant component undergoes a first-order jump at a certain occupation probability followed by a continuous second-order transition at a smaller occupation probability. Furthermore, at certain fixed interdependencies, the percolation transition cycles from first-order to second-order to two-stage to first-order as the k-core threshold is increased. We set up the analytical equations describing the phase boundaries of the two-stage transition region and derive the critical exponents for each type of transition. Understanding the shortest paths between individual elements in systems like communication networks and social media networks is important in the study of information cascades in these systems. Often, large heterogeneity can be present in the connections between nodes in these networks. Certain sets of nodes can be more highly connected among themselves than with nodes from other sets; these sets of nodes are often referred to as 'communities'. The second part of this work studies the effect of the presence of communities on the distribution of shortest paths in a network using a modular Erdős-Rényi network model, in which the number of communities and the degree of modularity of the network can be tuned using the parameters of the model. We find that the model reaches a percolation threshold while tuning the degree of modularity of the network, and the distribution of the shortest paths in the network can be used as an indicator of how the communities are connected.
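The k-core half of the combined model is easy to simulate directly: occupy nodes at random, then iteratively prune nodes whose surviving degree falls below their local threshold, and track the giant component. The sketch below uses a single network with a uniform threshold; the interdependent-network coupling studied in the dissertation is omitted.

import networkx as nx
import random

def kcore_fraction(G, k_of, p, seed=0):
    """Occupy nodes with probability p, prune nodes whose surviving degree is
    below their local threshold k_of[n], return giant-component fraction."""
    rng = random.Random(seed)
    g = G.subgraph([n for n in G if rng.random() < p]).copy()
    while True:
        weak = [n for n in g if g.degree(n) < k_of[n]]
        if not weak:
            break
        g.remove_nodes_from(weak)
    if g.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(g)) / len(G)

G = nx.fast_gnp_random_graph(5000, 8 / 5000, seed=2)  # mean degree ~ 8
k_of = {n: 2 for n in G}                              # uniform local threshold
for p in (0.3, 0.5, 0.7, 0.9):                        # occupation probability
    print(p, round(kcore_fraction(G, k_of, p), 3))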
Vulnerability Management for an Enterprise Resource Planning System
NASA Astrophysics Data System (ADS)
Goel, Shivani; Kiran, Ravi; Garg, Deepak
2012-09-01
Enterprise resource planning (ERP) systems are commonly used in technical educational institutions (TEIs). ERP systems should continue providing services to their users irrespective of the level of failure. There can be many types of failures in ERP systems, and different types of measures or characteristics can be defined for ERP systems to handle these levels of failure. Here, various types of failure levels are identified along with the characteristics concerned with those failures, and the relations among them are summarized. The disruptions causing vulnerabilities in TEIs are identified. A vulnerability management cycle is suggested along with many commercial and open source vulnerability management tools. The paper also highlights the importance of resiliency in ERP systems in TEIs.
Global resilience analysis of water distribution systems.
Diao, Kegong; Sweetapple, Chris; Farmani, Raziyeh; Fu, Guangtao; Ward, Sarah; Butler, David
2016-12-01
Evaluating and enhancing resilience in water infrastructure is a crucial step towards more sustainable urban water management. As a prerequisite to enhancing resilience, a detailed understanding is required of the inherent resilience of the underlying system. Differing from traditional risk analysis, here we propose a global resilience analysis (GRA) approach that shifts the objective from analysing multiple and unknown threats to analysing the more identifiable and measurable system responses to extreme conditions, i.e. potential failure modes. GRA aims to evaluate a system's resilience to a possible failure mode regardless of the causal threat(s) (known or unknown, external or internal). The method is applied to test the resilience of four water distribution systems (WDSs) with various features to three typical failure modes (pipe failure, excess demand, and substance intrusion). The study reveals GRA provides an overview of a water system's resilience to various failure modes. For each failure mode, it identifies the range of corresponding failure impacts and reveals extreme scenarios (e.g. the complete loss of water supply with only 5% pipe failure, or still meeting 80% of demand despite over 70% of pipes failing). GRA also reveals that increased resilience to one failure mode may decrease resilience to another and increasing system capacity may delay the system's recovery in some situations. It is also shown that selecting an appropriate level of detail for hydraulic models is of great importance in resilience analysis. The method can be used as a comprehensive diagnostic framework to evaluate a range of interventions for improving system resilience in future studies. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
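The GRA stress-to-impact sweep for one failure mode can be sketched generically: increase the failure magnitude, sample many failure combinations at each level, and record the spread of impacts. Connectivity to a source node on a toy grid stands in here for a proper hydraulic simulation.

import networkx as nx
import random

def gra_pipe_failure(G, source, fractions, trials=50):
    """Sweep the failure magnitude (fraction of pipes removed) and record the
    impact spread; impact = fraction of demand nodes cut off from the source."""
    demands = [n for n in G if n != source]
    out = {}
    for f in fractions:
        impacts = []
        for trial in range(trials):
            rng = random.Random(trial)
            g = G.copy()
            g.remove_edges_from(rng.sample(list(G.edges),
                                           int(f * G.number_of_edges())))
            ok = sum(nx.has_path(g, source, n) for n in demands)
            impacts.append(1 - ok / len(demands))
        out[f] = (min(impacts), sum(impacts) / trials, max(impacts))
    return out

G = nx.grid_2d_graph(8, 8)                 # toy looped network
for f, (lo, mean, hi) in gra_pipe_failure(G, (0, 0), [0.1, 0.3, 0.5]).items():
    print(f, (round(lo, 2), round(mean, 2), round(hi, 2)))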
Reduction of water losses by rehabilitation of water distribution network.
Güngör, Mahmud; Yarar, Ufuk; Firat, Mahmut
2017-09-11
Physical or real losses are the most important component of the water losses occurring in a water distribution network (WDN). The objective of this study is to examine the effects of piping material management and network rehabilitation on physical water losses and water loss management in a WDN. To this end, the Denizli WDN, consisting of very old pipes that have exhausted their economic life, is selected as the study area. The age of the current network results in decreased pressure strength, increased failure intensity, and inefficient use of water resources, thus leading to the application of a rehabilitation program. In Denizli, network renewal works have been carried out since 2009 under the rehabilitation program. It was determined that the failure rate in regions where network renewal has been completed decreased to zero. Renewal of piping material minimizes leakage losses as well as the failure rate. Moreover, the system rehabilitation has the potential to amortize itself in a very short time when the initial investment cost of network renewal is considered along with the operating costs of the old and new systems and the cost of water losses. As a result, it can be stated that renewal of piping material in water distribution systems and enhancement of the physical properties of the system provide significant benefits, such as increased water and energy efficiency and more effective use of resources.
Effect of system workload on operating system reliability - A study on IBM 3081
NASA Technical Reports Server (NTRS)
Iyer, R. K.; Rossetti, D. J.
1985-01-01
This paper presents an analysis of operating system failures on an IBM 3081 running VM/SP. Three broad categories of software failures are found: error handling, program control or logic, and hardware related; more than 25 percent of software failures are found to occur in the hardware/software interface. Measurements show that results on software reliability cannot be considered representative unless the system workload is taken into account. The overall CPU execution rate, although measured to be close to 100 percent most of the time, is not found to correlate strongly with the occurrence of failures. Possible reasons for the observed workload-failure dependency, based on detailed investigations of the failure data, are discussed.
Abrahamyan, G
2017-01-01
The occurrence of pregnancy after in vitro fertilization depends on two components: the functional adequacy of the embryo at the blastocyst stage and the receptivity of the endometrium, which, according to modern understanding, determine whether optimal conditions for implantation are achieved. From the point of view of pregnancy occurrence, as well as its further development, implantation is the most crucial phase of IVF/ICSI and ET. At the same time, this phase is also the most vulnerable. Multiple studies have proven the role of maternal thrombophilia in the genesis of gestational complications and early embryo losses, but in relation to this problem in the context of IVF there is still much to be detailed. The objective of this work was to increase the efficiency of IVF and to research the causes of IVF failures related to thrombophilic genetic mutations and polymorphisms. To achieve this goal, 354 women with infertility who presented to the department of assisted reproductive technologies (ART) for infertility treatment by means of IVF were examined. Of these women, 237 (66.9%) had primary infertility and 117 (33.1%) secondary infertility. For 228 of these women the IVF (in vitro fertilization) program was undertaken for the first time (study group 1); 126 patients had a history of failed IVF (1 to 9 failed attempts). Patients were 23 to 43 years of age. The obtained results confirm the relation between hemostasis defects, changes in hemostasis system activity, and the efficiency of IVF. One of the main reasons for IVF failure and, probably, for infertility is disturbance of the hemostasis system of a thrombophilic nature. A high correlation is established between hemostasis system disturbances of a thrombophilic nature, preconditioned by genetic mutations and polymorphisms, and failed IVFs. Failure of IVF is an indication for expanded examination of genetically determined factors of the hemostasis system. In the presence of genetic defects of a thrombophilic nature in the hemostasis system, the risk of failure in an IVF program is two or more times higher.
Distributed environmental control
NASA Technical Reports Server (NTRS)
Cleveland, Gary A.
1992-01-01
We present an architecture of distributed, independent control agents designed to work with the Computer Aided System Engineering and Analysis (CASE/A) simulation tool. CASE/A simulates behavior of Environmental Control and Life Support Systems (ECLSS). We describe a lattice of agents capable of distributed sensing and overcoming certain sensor and effector failures. We address how the architecture can achieve the coordinating functions of a hierarchical command structure while maintaining the robustness and flexibility of independent agents. These agents work between the time steps of the CASE/A simulation tool to arrive at command decisions based on the state variables maintained by CASE/A. Control is evaluated according to both effectiveness (e.g., how well temperature was maintained) and resource utilization (the amount of power and materials used).
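As a rough, language-agnostic sketch of the agent pattern described above (hypothetical state variables and gains; the CASE/A interface itself is not shown in the abstract): independent agents read shared state between simulation time steps, post command decisions, and degrade gracefully when a sensor fails.

```python
class TemperatureAgent:
    """Independent agent: reads shared state between time steps and posts a
    command decision; falls back on a neighboring sensor if its own fails."""
    def __init__(self, setpoint=22.0):
        self.setpoint = setpoint

    def step(self, state, commands):
        reading = state.get("cabin_temp")
        if reading is None:                      # failed sensor: use neighbor
            reading = state.get("node2_temp")
        if reading is not None:
            commands["heater_power"] = max(0.0, 0.5 * (self.setpoint - reading))

def run(agents, steps=5):
    true_temp, sensor_ok = 20.0, True
    for t in range(steps):
        state = {"cabin_temp": true_temp if sensor_ok else None,
                 "node2_temp": true_temp + 0.5}
        commands = {}
        for agent in agents:                     # agents act between time steps
            agent.step(state, commands)
        # Stand-in for one simulation time step applying the posted commands.
        true_temp += 0.8 * commands.get("heater_power", 0.0) - 0.2
        if t == 1:
            sensor_ok = False                    # inject a sensor failure
        print(t, round(true_temp, 2), commands)

run([TemperatureAgent()])
```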
Fatigue Behavior of a Box-Type Welded Structure of Hydraulic Support Used in Coal Mine
Zhao, Xiaohui; Li, Fuyong; Liu, Yu; Fan, Yanjun
2015-01-01
Hydraulic supports are the main supporting equipment of coal mining systems, and they are usually subjected to fatigue failure under high dynamic loads. The fracture positions are generally at welded joints where there is serious stress concentration. In order to investigate and further improve the fatigue strength of hydraulic supports, the present work first located the possible positions where fatigue failure occurs through finite element analysis, and then fatigue tests were carried out on the different forms of welded joints for the dangerous parts. Finally, fatigue strength-life (S-N) curves and the fracture mechanism were studied. This research provides a theoretical reference for the fatigue design of welded structures for hydraulic supports. PMID:28793586
Track circuit diagnosis for railway lines equipped with an automatic block signalling system
NASA Astrophysics Data System (ADS)
Spunei, E.; Piroi, I.; Muscai, C.; Răduca, E.; Piroi, F.
2018-01-01
This work presents a diagnosis method for detecting track circuit failures on a railway traffic line equipped with an Automatic Block Signalling installation. The diagnosis method uses the installation's electrical schematics, from which a series of diagnosis charts have been created. Further, the diagnosis charts were used to develop a software package, CDCBla, which substantially reduces the diagnosis time and human error during failure remediation. The proposed method can also be used as a training package for the maintenance staff. Since the diagnosis method does not need signal or measurement inputs, using it requires no additional IT knowledge, and it can be deployed on a mobile computing device (tablet, smart phone).
Engine Icing Modeling and Simulation (Part 2): Performance Simulation of Engine Rollback Phenomena
NASA Technical Reports Server (NTRS)
May, Ryan D.; Guo, Ten-Huei; Veres, Joseph P.; Jorgenson, Philip C. E.
2011-01-01
Ice buildup in the compressor section of a commercial aircraft gas turbine engine can cause a number of engine failures. One of these failure modes is known as engine rollback: an uncommanded decrease in thrust accompanied by a decrease in fan speed and an increase in turbine temperature. This paper describes the development of a model which simulates the system-level impact of engine icing using the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k). When an ice blockage is added to C-MAPSS40k, the control system responds in a manner similar to that of an actual engine, and, in cases with severe blockage, an engine rollback is observed. Using this capability to simulate engine rollback, a proof-of-concept detection scheme is developed and tested using only typical engine sensors. This paper concludes that the engine control system's limit protection is the proximate cause of iced engine rollback and that the controller can detect the buildup of ice particles in the compressor section. This work serves as a feasibility study for continued research into the detection and mitigation of engine rollback using the propulsion control system.
SDI satellite autonomy using AI and Ada
NASA Technical Reports Server (NTRS)
Fiala, Harvey E.
1990-01-01
The use of Artificial Intelligence (AI) and the programming language Ada to help a satellite recover from selected failures that could otherwise lead to mission failure is described. An unmanned satellite will have a separate AI subsystem running in parallel with the normal satellite subsystems. A satellite monitoring subsystem (SMS), under the control of a blackboard system, will continuously monitor selected satellite subsystems to stay alert to any actual or potential problems. In the case of loss of communications with the Earth or the home base, the satellite will go into a survival mode to reestablish communications. The use of an AI subsystem in this manner could have avoided the tragic loss of the two recent Soviet probes that were sent to investigate the planet Mars and its moons. The blackboard system works in conjunction with the SMS and a reconfiguration control subsystem (RCS). It can be shown to be an effective way for one central control subsystem to monitor and coordinate the activities and loads of many interacting subsystems that may or may not contain redundant and/or fault-tolerant elements. The blackboard system will be coded in Ada using tools such as the ABLE development system and the Ada Production system.
Advanced techniques in reliability model representation and solution
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Nicol, David M.
1992-01-01
The current trend in flight control system design is toward increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.
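To make the Markov-modeling idea concrete, here is a tiny transient-unreliability computation for an assumed duplex system with imperfect failure coverage; the rates are illustrative, and this is not the SURE/ASSIST tooling itself, only the class of model such tools solve:

```python
import numpy as np
from scipy.linalg import expm

lam, c = 1e-4, 0.99          # failures/hour per unit, coverage probability
# States: 0 = two good units, 1 = one good unit, 2 = system failed (absorbing)
Q = np.array([
    [-2 * lam,  2 * lam * c,  2 * lam * (1 - c)],
    [0.0,       -lam,         lam              ],
    [0.0,       0.0,          0.0              ],
])                           # continuous-time Markov generator (rows sum to 0)
p0 = np.array([1.0, 0.0, 0.0])
for t in (10.0, 100.0, 1000.0):          # mission times in hours
    p = p0 @ expm(Q * t)                 # transient state probabilities
    print(f"t={t:6.0f} h  P(system failure) = {p[2]:.3e}")
```

Even this toy model shows the coverage term dominating short-mission unreliability, which is the kind of structural insight the graphical RMG/ASSURE pipeline automates for much larger systems.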
NASA Technical Reports Server (NTRS)
Ely, Jay J.; Shaver, Timothy W.; Fuller, Gerald L.
2002-01-01
On February 14, 2002, the FCC adopted a FIRST REPORT AND ORDER, released it on April 22, 2002, and on May 16, 2002 published in the Federal Register a Final Rule permitting marketing and operation of new products incorporating UWB technology. Wireless product developers are working to rapidly bring this versatile, powerful, and expectedly inexpensive technology into numerous consumer wireless devices. Past studies addressing the potential for passenger-carried portable electronic devices (PEDs) to interfere with aircraft electronic systems suggest that UWB transmitters may pose a significant threat to aircraft communication and navigation radio receivers. NASA, United Airlines, and Eagles Wings Incorporated have performed preliminary testing that clearly shows the potential for handheld UWB transmitters to cause cockpit failure indications for the air traffic control radio beacon system (ATCRBS), blanking of aircraft on traffic alert and collision avoidance system (TCAS) displays, and erratic motion and failure of instrument landing system (ILS) localizer and glideslope pointers on the pilot's horizontal situation and attitude director displays. This report provides details of the preliminary testing and recommends further assessment of aircraft systems for susceptibility to UWB electromagnetic interference.
A systems engineering approach to automated failure cause diagnosis in space power systems
NASA Technical Reports Server (NTRS)
Dolce, James L.; Faymon, Karl A.
1987-01-01
Automatic failure-cause diagnosis is a key element in autonomous operation of space power systems such as Space Station's. A rule-based diagnostic system has been developed for determining the cause of degraded performance. The knowledge required for such diagnosis is elicited from the system engineering process by using traditional failure analysis techniques. Symptoms, failures, causes, and detector information are represented with structured data; and diagnostic procedural knowledge is represented with rules. Detected symptoms instantiate failure modes and possible causes consistent with currently held beliefs about the likelihood of the cause. A diagnosis concludes with an explanation of the observed symptoms in terms of a chain of possible causes and subcauses.
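A minimal sketch of the rule-based pattern described above, with hypothetical symptoms, failure modes, and prior beliefs (the actual knowledge base is elicited from the system engineering process, not shown here): detected symptoms instantiate failure modes, each linked to candidate causes weighted by current belief.

```python
RULES = [
    # (symptom, failure mode, candidate causes with prior likelihoods)
    ("bus_voltage_low", "degraded_array_output",
     [("array_shadowing", 0.5), ("cell_string_open", 0.3), ("shunt_fault", 0.2)]),
    ("battery_temp_high", "overcharge",
     [("charge_controller_fault", 0.7), ("sensor_bias", 0.3)]),
]

def diagnose(symptoms):
    """Forward-chain over the rules: each detected symptom instantiates a
    failure mode and its causes, ranked by prior belief."""
    findings = []
    for symptom, mode, causes in RULES:
        if symptom in symptoms:
            for cause, belief in sorted(causes, key=lambda c: -c[1]):
                findings.append((mode, cause, belief))
    return findings

for mode, cause, belief in diagnose({"bus_voltage_low"}):
    print(f"failure mode {mode}: possible cause {cause} (belief {belief})")
```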
Taylor, David; Wilkison, Michelle; Voyich, Jovanka; Meissner, Nicole
2011-05-15
We recently demonstrated that lack of type I IFN signaling (IFNAR knockout) in lymphocyte-deficient mice (IFrag(-/-)) results in bone marrow (BM) failure after Pneumocystis lung infection, whereas lymphocyte-deficient mice with intact IFNAR (RAG(-/-)) had normal hematopoiesis. In the current work, we performed studies to define further the mechanisms involved in the induction of BM failure in this system. BM chimera experiments revealed that IFNAR expression was required on BM-derived but not stroma-derived cells to prevent BM failure. Signals elicited after day 7 postinfection appeared critical in determining BM cell fate. We observed caspase-8- and caspase-9-mediated apoptotic cell death, beginning with neutrophils. Death of myeloid precursors was associated with secondary oxidative stress, and decreasing colony-forming activity in BM cell cultures. Treatment with N-acetylcysteine could slow the progression of, but not prevent, BM failure. Type I IFN signaling has previously been shown to expand the neutrophil life span and regulate the expression of some antiapoptotic factors. Quantitative RT-PCR demonstrated reduced mRNA abundance for the antiapoptotic factors BCL-2, IAP2, MCL-1, and others in BM cells from IFrag(-/-) compared with that in BM cells from RAG(-/-) mice at day 7. mRNA and protein for the proapoptotic cytokine TNF-α was increased, whereas mRNA for the growth factors G-CSF and GM-CSF was reduced. In vivo anti-TNF-α treatment improved precursor cell survival and activity in culture. Thus, we propose that lack of type I IFN signaling results in decreased resistance to inflammation-induced proapoptotic stressors and impaired replenishment by precursors after systemic responses to Pneumocystis lung infection. Our finding may have implications in understanding mechanisms underlying regenerative BM depression/failure during complex immune deficiencies such as AIDS.
NASA Astrophysics Data System (ADS)
Umezu, Yasuyoshi; Watanabe, Yuko; Ma, Ninshu
2005-08-01
Since 1996, Japan Research Institute Limited (JRI) has been providing a sheet metal forming simulation system called JSTAMP-Works, packaging the FEM solvers LS-DYNA and JOH/NIKE; it may have been the first multistage system at the time and has enjoyed a good reputation among users in Japan. To match the recent needs of process designers and CAE engineers ("faster, more accurate, and easier"), a new metal forming simulation system, JSTAMP-Works/NV, has been developed. JSTAMP-Works/NV packages an automatic CAD-healing function and many new capabilities, such as prediction of 3D trimming lines for flanging or hemming, remote control of solver execution for multi-stage forming processes, and shape evaluation between FEM and CAD. Separately, a multi-stage, multi-purpose inverse FEM solver, HYSTAMP, has been developed and will soon be put on the market; it has proved to be very fast, accurate, and robust. Lastly, the authors give some application examples of a user-defined ductile damage subroutine in LS-DYNA for the estimation of material failure and springback in metal forming simulation.
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Morris, Jon; Turowski, Mark; Franzl, Richard; Walker, Mark; Kapadia, Ravi; Venkatesh, Meera; Schmalzel, John
2010-01-01
Severe weather events are likely occurrences on the Mississippi Gulf Coast. It is important to rapidly diagnose and mitigate the effects of storms on Stennis Space Center's rocket engine test complex to avoid delays to critical test article programs, reduce costs, and maintain safety. An Integrated Systems Health Management (ISHM) approach and technologies are employed to integrate environmental (weather) monitoring, structural modeling, and the suite of available facility instrumentation to provide information for readiness before storms, rapid initial damage assessment to guide mitigation planning, on-going assurance as repairs are effected, and finally support for recertification. The system is denominated the Katrina Storm Monitoring System (KStorMS). Integrated Systems Health Management (ISHM) describes a comprehensive set of capabilities that provide insight into the behavior and health of a system. Knowing the status of a system allows decision makers to effectively plan and execute their mission; for example, early insight into component degradation and impending failures provides more time to develop work-around strategies and to plan for maintenance. Failures of system elements generally develop over time. Information extracted from sensor data, combined with system-wide knowledge bases and methods for information extraction and fusion, inference, and decision making, can be used to detect incipient failures. If failures do occur, it is critical to detect and isolate them and to suggest an appropriate course of action. ISHM enables determining the condition (health) of every element in a complex system-of-systems (SoS), detecting anomalies, diagnosing causes, and predicting future anomalies, and it provides data, information, and knowledge (DIaK) to control systems for safe and effective operation. ISHM capability is achieved by using a wide range of technologies that enable anomaly detection, diagnostics, prognostics, and advice for control: (1) anomaly detection algorithms and strategies, (2) fusion of DIaK for anomaly detection (model-based, numerical, statistical, empirical, expert-based, qualitative, etc.), (3) diagnostic/prognostic strategies and methods, (4) user interfaces, (5) advanced control strategies, (6) integration architectures/frameworks, and (7) embedding of intelligence. Many of these technologies are mature, and they are being used in the KStorMS. The paper describes the design, implementation, and operation of the KStorMS and discusses further evolution to support other needs such as condition-based maintenance (CBM).
Cognitive failures in late adulthood: The role of age, social context and depressive symptoms.
Hitchcott, Paul Kenneth; Fastame, Maria Chiara; Langiu, Dalila; Penna, Maria Pietronilla
2017-01-01
The incidence of self-reported cognitive failures among older adults may be an index of successful cognitive aging. However, self-reported cognitive failures are biased by variation in depressive symptomatology. This study examined age-related and socio-cultural context effects on cognitive failures while controlling for depressive symptoms. Both overall and specific factors of cognitive failures were determined. A further goal was to investigate the relationship between working memory and cognitive efficiency measures and cognitive failures. One hundred and thirty-nine cognitively healthy adults were recruited from two populations known to differ in their dispositions toward cognitive failures and depressive symptoms (Sardinia and northern Italy). The participants were assigned to Young Old (65-74 years), Old (75-84 years), or Oldest Old (≥85 years) groups, and individually presented with a test battery including the Cognitive Failures Questionnaire, the Centre for Epidemiological Studies of Depression Scale, and Forward and Backward Digit Span tests. Specific factors of cognitive failures were differentially associated with measures of depression and working memory. While age had no impact on any aspect of cognitive failures, overall and specific dispositions varied between the two populations. The overall liability to cognitive failure was lower in participants from Sardinia; however, this group also had a higher liability to lapses of action (Blunders factor). Overall, these findings highlight that richer information about cognitive failures may be revealed through the investigation of specific factors of cognitive failures. They also confirm that the absence of changes in cognitive failures across old age is independent of variation in depressive symptoms, at least among cognitively healthy elders.
Failure detection and fault management techniques for flush airdata sensing systems
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.
1992-01-01
A high-angle-of-attack flush airdata sensing system was installed and flight tested on the F-18 High Alpha Research Vehicle at NASA-Dryden. This system uses a matrix of pressure orifices arranged in concentric circles on the nose of the vehicle to determine angles of attack, angles of sideslip, dynamic pressure, and static pressure, as well as other airdata parameters. Results presented use an arrangement of 11 symmetrically distributed ports on the aircraft nose. Experience with this sensing system indicates that the primary concern for real-time implementation is the detection and management of overall system and individual pressure sensor failures. The multiple-port sensing system is more tolerant of small disturbances in the measured pressure data than conventional probe-based intrusive airdata systems. However, under adverse circumstances, large undetected failures in individual pressure ports can result in algorithm divergence and catastrophic failure of the entire system. How system and individual port failures may be detected using chi-square analysis is shown. Once identified, the effects of failures are eliminated using weighted least squares.
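A minimal numerical sketch of the detection-and-reconstruction scheme described (the linear measurement model, noise level, fault size, and threshold are invented for illustration; the real system uses a nonlinear aerodynamic model of the nose pressure distribution): with more ports than airdata states, the least-squares residuals yield a chi-square statistic when all ports are healthy, and the offending port can then be de-weighted.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ports, n_states, sigma = 11, 4, 5.0           # ports, airdata states, noise (Pa)
H = rng.standard_normal((n_ports, n_states))    # stand-in linear measurement model
x_true = np.array([1.0, -0.5, 0.3, 2.0])

p = H @ x_true + sigma * rng.standard_normal(n_ports)
p[3] += 60.0                                    # inject a hard-over failure on port 3

x_hat, *_ = np.linalg.lstsq(H, p, rcond=None)
r = p - H @ x_hat                               # fit residuals
chi2 = float(r @ r) / sigma**2                  # ~ chi-square, n_ports - n_states dof
print(f"chi-square statistic {chi2:.1f} vs threshold ~24 (7 dof, p ~ 0.001)")

bad = int(np.argmax(np.abs(r)))                 # isolate the most suspect port...
w = np.ones(n_ports)
w[bad] = 0.0                                    # ...then re-solve with it zero-weighted
x_w, *_ = np.linalg.lstsq(H * w[:, None], p * w, rcond=None)
print("isolated port:", bad, "state estimate error:", np.round(x_w - x_true, 3))
```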
Quality Assurance and T&E of Inertial Systems for RLV Mission
NASA Astrophysics Data System (ADS)
Sathiamurthi, S.; Thakur, Nayana; Hari, K.; Peter, Pilmy; Biju, V. S.; Mani, K. S.
2017-12-01
This work describes the quality assurance and Test and Evaluation (T&E) activities carried out for the inertial systems flown successfully in India's first reusable launch vehicle technology demonstrator hypersonic experiment mission. As part of reliability analysis, failure mode effect and criticality analysis and derating analysis were carried out in the initial design phase, findings presented to design review forums and the recommendations were implemented. T&E plan was meticulously worked out and presented to respective forums for review and implementation. Test data analysis, health parameter plotting and test report generation was automated and these automations significantly reduced the time required for these activities and helped to avoid manual errors. Further, T&E cycle is optimized without compromising on quality aspects. These specific measures helped to achieve zero defect delivery of inertial systems for RLV application.
Failure Mode, Effects, and Criticality Analysis (FMECA)
1993-04-01
10 CFR 1008.24 - Criminal penalties-failure to publish a system notice.
Code of Federal Regulations, 2010 CFR
2010-01-01
Subsection (i)(2) of the Act provides that an agency officer or employee who willfully maintains a system of records without publishing a system notice as required by...
The contribution of attentional lapses to individual differences in visual working memory capacity.
Adam, Kirsten C S; Mance, Irida; Fukuda, Keisuke; Vogel, Edward K
2015-08-01
Attentional control and working memory capacity are important cognitive abilities that substantially vary between individuals. Although much is known about how attentional control and working memory capacity relate to each other and to constructs like fluid intelligence, little is known about how trial-by-trial fluctuations in attentional engagement impact trial-by-trial working memory performance. Here, we employ a novel whole-report memory task that allowed us to distinguish between varying levels of attentional engagement in humans performing a working memory task. By characterizing low-performance trials, we can distinguish between models in which working memory performance failures are caused by either (1) complete lapses of attention or (2) variations in attentional control. We found that performance failures increase with set-size and strongly predict working memory capacity. Performance variability was best modeled by an attentional control model of attention, not a lapse model. We examined neural signatures of performance failures by measuring EEG activity while participants performed the whole-report task. The number of items correctly recalled in the memory task was predicted by frontal theta power, with decreased frontal theta power associated with poor performance on the task. In addition, we found that poor performance was not explained by failures of sensory encoding; the P1/N1 response and ocular artifact rates were equivalent for high- and low-performance trials. In all, we propose that attentional lapses alone cannot explain individual differences in working memory performance. Instead, we find that graded fluctuations in attentional control better explain the trial-by-trial differences in working memory that we observe.
Tulpule, Asmin; Lensch, M William; Miller, Justine D; Austin, Karyn; D'Andrea, Alan; Schlaeger, Thorsten M; Shimamura, Akiko; Daley, George Q
2010-04-29
Fanconi anemia (FA) is a genetically heterogeneous, autosomal recessive disorder characterized by pediatric bone marrow failure and congenital anomalies. The effect of FA gene deficiency on hematopoietic development in utero remains poorly described as mouse models of FA do not develop hematopoietic failure and such studies cannot be performed on patients. We have created a human-specific in vitro system to study early hematopoietic development in FA using a lentiviral RNA interference (RNAi) strategy in human embryonic stem cells (hESCs). We show that knockdown of FANCA and FANCD2 in hESCs leads to a reduction in hematopoietic fates and progenitor numbers that can be rescued by FA gene complementation. Our data indicate that hematopoiesis is impaired in FA from the earliest stages of development, suggesting that deficiencies in embryonic hematopoiesis may underlie the progression to bone marrow failure in FA. This work illustrates how hESCs can provide unique insights into human development and further our understanding of genetic disease.
Ubel, Peter A.; Zhang, Cecilia J.; Hesson, Ashley; Davis, J. Kelly; Kirby, Christine; Barnett, Jamison; Hunter, Wynn G.
2018-01-01
Some experts contend that requiring patients to pay out of pocket for a portion of their care will bring consumer discipline to health care markets. But are physicians prepared to help patients factor out-of-pocket expenses into medical decisions? In this qualitative study of audiorecorded clinical encounters, we identified physician behaviors that stand in the way of helping patients navigate out-of-pocket spending. Some behaviors reflected a failure to fully engage with patients’ financial concerns, from never acknowledging such concerns to dismissing them too quickly. Other behaviors reflected a failure to resolve uncertainty about out-of-pocket expenses or reliance on temporary solutions without making long-term plans to reduce spending. Many of these failures resulted from systemic barriers to health care spending conversations, such as a lack of price transparency. For consumer health care markets to work as intended, physicians need to be prepared to help patients navigate out-of-pocket expenses when financial concerns arise during clinical encounters. PMID:27044966
Usability Evaluation of a Web-Based Symptom Monitoring Application for Heart Failure.
Wakefield, Bonnie; Pham, Kassie; Scherubel, Melody
2015-07-01
Symptom recognition and reporting by patients with heart failure are critical to avoid hospitalization. This project evaluated a patient symptom tracking application. Fourteen end users (nine patients, five clinicians) from a Midwestern Veterans Affairs Medical Center evaluated the website using a think aloud protocol. A structured observation protocol was used to assess success or failure for each task. Measures included task time, success, and satisfaction. Patients had a mean age of 70 years; clinicians averaged 42 years in age. Patients took 9.3 min and clinicians took less than 3 min per scenario. Most patients needed some assistance, but few patients were completely unable to complete some tasks. Clinicians demonstrated few problems navigating the site. Patient System Usability Scale item scores ranged from 2.0 to 3.6; clinician item scores ranged from 1.8 to 4.0. Further work is needed to determine whether using the web-based tool improves symptom recognition and reporting. © The Author(s) 2015.
Workplace discrimination and cumulative trauma disorders: the national EEOC ADA research project.
Armstrong, Amy J; McMahon, Brian T; West, Steven L; Lewis, Allen
2005-01-01
Employment discrimination of persons with cumulative trauma disorders (CTDs) was explored using the Integrated Mission System dataset of the US Equal Employment Opportunity Commission. Demographic characteristics and merit resolutions of the Charging Parties (persons with CTD) were compared to individuals experiencing other physical, sensory and neurological impairments. Factors compared also included industry designation, geographic region, and size of Respondents against which allegations were filed. Persons with CTD had proportionately greater allegations among large Respondents (greater than 500 workers) engaged in manufacturing, utilities, transportation, finance insurance and real estate. The types of discrimination Issues that were proportionately greater in the CTD group included layoff, failure to reinstate, and failure to provide reasonable accommodation. The CTD group was significantly less likely than the comparison group to be involved in discrimination Issues such as assignment to less desirable duty, shift or work location; demotion; termination, or failure to hire or provide training. Persons with CTD had higher proportions of merit Resolutions where allegations were voluntarily withdrawn by the Charging Party with benefits.
Common Cause Failure Modeling: Aerospace Versus Nuclear
NASA Technical Reports Server (NTRS)
Stott, James E.; Britton, Paul; Ring, Robert W.; Hark, Frank; Hatfield, G. Spencer
2010-01-01
Aggregate nuclear plant failure data is used to produce generic common-cause factors that are specifically for use in the common-cause failure models of NUREG/CR-5485. Furthermore, the models presented in NUREG/CR-5485 are specifically designed to incorporate two significantly distinct assumptions about the methods of surveillance testing from which this aggregate failure data came. What are the implications of using these NUREG generic factors to model the common-cause failures of aerospace systems? Herein, the implications of using the NUREG generic factors in the modeling of aerospace systems are investigated in detail, and strong recommendations for modeling the common-cause failures of aerospace systems are given.
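For illustration only, the simplest parametric common-cause model (the beta-factor model; NUREG/CR-5485 develops richer ones such as the alpha-factor and MGL models) shows why the choice of generic factor dominates redundant-system estimates. All rates below are assumed:

```python
import math

lam_total = 1e-5      # total failure rate per hour per component (assumed)
beta = 0.05           # generic common-cause fraction (nuclear-derived, assumed)
t = 1000.0            # mission time, hours

lam_ind = (1 - beta) * lam_total      # independent failure part
lam_ccf = beta * lam_total            # common-cause part fails both units at once
# 1-out-of-2 redundant pair: lose it if both fail independently or one CCF event
p_ind = (1 - math.exp(-lam_ind * t)) ** 2
p_ccf = 1 - math.exp(-lam_ccf * t)
p_sys = p_ind + p_ccf - p_ind * p_ccf
print(f"independent-only: {p_ind:.2e}   with CCF: {p_sys:.2e}")
```

Even a modest beta makes the common-cause term dominate the pair's failure probability, which is why transplanting nuclear-derived factors into aerospace models deserves the scrutiny the paper gives it.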
Software For Fault-Tree Diagnosis Of A System
NASA Technical Reports Server (NTRS)
Iverson, Dave; Patterson-Hine, Ann; Liao, Jack
1993-01-01
Fault Tree Diagnosis System (FTDS) computer program is automated-diagnostic-system program identifying likely causes of specified failure on basis of information represented in system-reliability mathematical models known as fault trees. Is modified implementation of failure-cause-identification phase of Narayanan's and Viswanadham's methodology for acquisition of knowledge and reasoning in analyzing failures of systems. Knowledge base of if/then rules replaced with object-oriented fault-tree representation. Enhancement yields more-efficient identification of causes of failures and enables dynamic updating of knowledge base. Written in C language, C++, and Common LISP.
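A minimal sketch of an object-oriented fault-tree representation of the kind FTDS replaces rule bases with (the tree, event names, and probabilities are hypothetical): gates are objects, and the tree can be queried for the top-event probability and for likely causes of an observed failure.

```python
class Basic:
    def __init__(self, name, p):
        self.name, self.p = name, p
    def prob(self):
        return self.p
    def leaves(self):
        return [self]

class Gate:
    def __init__(self, *children):
        self.children = children
    def leaves(self):
        return [leaf for c in self.children for leaf in c.leaves()]

class And(Gate):
    def prob(self):                    # assumes independent basic events
        out = 1.0
        for c in self.children:
            out *= c.prob()
        return out

class Or(Gate):
    def prob(self):
        out = 1.0
        for c in self.children:
            out *= 1.0 - c.prob()
        return 1.0 - out

pump = Basic("pump_failure", 1e-3)
valve = Basic("valve_stuck", 5e-4)
bus_a = Basic("bus_A_loss", 1e-4)
bus_b = Basic("bus_B_loss", 1e-4)
top = Or(pump, valve, And(bus_a, bus_b))   # hypothetical: loss of coolant flow

print("top event probability:", top.prob())
# Rank basic events as likely causes of an observed top-level failure.
for leaf in sorted(top.leaves(), key=lambda l: -l.p):
    print(f"  candidate cause: {leaf.name} (p={leaf.p})")
```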
NASA Astrophysics Data System (ADS)
Moscardelli, L.; Wood, L. J.
2006-12-01
Several late Pleistocene-age seafloor destabilization events, of sufficient scale to produce tsunamigenic forces, have been identified in the continental margin of eastern offshore Trinidad. This area, situated along the obliquely converging boundary of the Caribbean/South American plates and proximal to the Orinoco Delta, is characterized by catastrophic shelf-margin processes, intrusive-extrusive mobile shales, and active tectonism. A mega-merged, 10,000 km2, 3D seismic survey reveals several mass transport complexes that range in area from 11.3 km2 to 2,017 km2. Historical records indicate that this region has experienced submarine-landslide-generated tsunamigenic events, including tsunamis that affected Venezuela during the 1700s-1900s. This work concentrates on defining those ancient deep marine mass transport complexes whose occurrence could potentially have triggered tsunamis. Three types of failures are identified: 1) source-attached failures, fed by shelf-edge deltas whose sediment input is controlled by sea-level fluctuations and sedimentation rates; 2) source-detached systems, which occur when upper-slope sediments catastrophically fail due to gas hydrate disruptions and/or earthquakes; and 3) locally sourced failures, formed when local instabilities in the sea floor trigger relatively smaller collapses. Such classification of the relationship between slope mass failures and their source regions enables a better understanding of the nature of initiation, the length of development history, and the petrography of such mass transport deposits. Source-detached systems, generated by sudden sediment remobilizations, are more likely to disrupt the overlying water column, raising the tsunamigenic risk. Unlike 2D seismic data, 3D seismic data enable scientists to calculate more accurate deposit volumes and improve deposit imaging, and thus to increase the accuracy of physical and computer simulations of mass failure processes.
The U.S. EPA finalized a settlement agreement with two N.H. companies for their alleged failure to follow lead-safe work practices and provide proper lead paint disclosure to tenants at a residential property in Manchester, N.H.
Achievement Goals as Mediators of the Relationship between Competence Beliefs and Test Anxiety
ERIC Educational Resources Information Center
Putwain, David W.; Symes, Wendy
2012-01-01
Background: Previous work suggests that the expectation of failure is related to higher test anxiety and achievement goals grounded in a fear of failure. Aim: To test the hypothesis, based on the work of Elliot and Pekrun (2007), that the relationship between perceived competence and test anxiety is mediated by achievement goal orientations.…
NASA Technical Reports Server (NTRS)
Behbehani, K.
1980-01-01
A new sensor/actuator failure analysis technique for turbofan jet engines was developed. Three phases of failure analysis, namely detection, isolation, and accommodation, are considered. Failure detection and isolation techniques are developed by utilizing the concept of Generalized Likelihood Ratio (GLR) tests. These techniques are applicable to both time-varying and time-invariant systems. Three GLR detectors are developed for: (1) hard-over sensor failure; (2) hard-over actuator failure; and (3) brief disturbances in the actuators. The probability distribution of the GLR detectors and the detectability of sensor/actuator failures are established. Failure type is determined by the maximum of the GLR detectors. Failure accommodation is accomplished by extending the Multivariable Nyquist Array (MNA) control design techniques to nonsquare system designs. The performance and effectiveness of the failure analysis technique are studied by applying the technique to a turbofan jet engine, namely the Quiet Clean Short Haul Experimental Engine (QCSEE). Single and multiple sensor/actuator failures in the QCSEE are simulated and analyzed, and the effects of model degradation are studied.
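A minimal sketch of a GLR detector for a hard-over (step) failure in a residual sequence; the engine application adds model-based residual generation and separate detectors per failure type, but the core statistic looks like this (threshold and noise level assumed):

```python
import numpy as np

def glr_step_detector(residuals, sigma, threshold):
    """Maximize, over candidate onset times k, the likelihood-ratio statistic
    for a mean jump of unknown size starting at k; the statistic equals twice
    the maximized log-likelihood ratio. Returns (fired, k_hat, statistic)."""
    r = np.asarray(residuals, dtype=float)
    best_stat, best_k = 0.0, None
    for k in range(len(r)):
        s = r[k:].sum()
        stat = s * s / (sigma**2 * (len(r) - k))
        if stat > best_stat:
            best_stat, best_k = stat, k
    return best_stat > threshold, best_k, best_stat

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 40)
failed = np.concatenate([clean[:25], clean[25:] + 4.0])  # hard-over at sample 25
print(glr_step_detector(clean, sigma=1.0, threshold=20.0))    # no alarm expected
print(glr_step_detector(failed, sigma=1.0, threshold=20.0))   # alarm near k=25
```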
Optimal maintenance policy incorporating system level and unit level for mechanical systems
NASA Astrophysics Data System (ADS)
Duan, Chaoqun; Deng, Chao; Wang, Bingran
2018-04-01
This study develops a multi-level maintenance policy combining the system level and the unit level under soft and hard failure modes. The system undergoes system-level preventive maintenance (SLPM) when the conditional reliability of the entire system exceeds the SLPM threshold, and each single unit undergoes two-level maintenance: one level is initiated when a unit exceeds its preventive maintenance (PM) threshold, and the other is performed simultaneously the moment any unit goes in for maintenance. The units undergo both periodic inspections and aperiodic inspections triggered by failures of hard-type units. To model practical situations, two types of economic dependence are taken into account: set-up cost dependence and maintenance expertise dependence, since the same technology and tools/equipment can be utilised. The optimisation problem is formulated and solved in a semi-Markov decision process framework. The objective is to find the optimal system-level threshold and unit-level thresholds by minimising the long-run expected average cost per unit time. A formula for the mean residual life is derived for the proposed multi-level maintenance policy. The method is illustrated by a real case study of the feed subsystem of a boring machine, and a comparison with other policies demonstrates the effectiveness of the approach.
Health assessment of cooling fan bearings using wavelet-based filtering.
Miao, Qiang; Tang, Chao; Liang, Wei; Pecht, Michael
2012-12-24
As commonly used forced convection air cooling devices in electronics, cooling fans are crucial for guaranteeing the reliability of electronic systems. In a cooling fan assembly, fan bearing failure is a major failure mode that causes excessive vibration, noise, reduction in rotation speed, locked rotor, failure to start, and other problems; therefore, it is necessary to conduct research on the health assessment of cooling fan bearings. This paper presents a vibration-based fan bearing health evaluation method using comblet filtering and exponentially weighted moving average. A new health condition indicator (HCI) for fan bearing degradation assessment is proposed. In order to collect the vibration data for validation of the proposed method, a cooling fan accelerated life test was conducted to simulate the lubricant starvation of fan bearings. A comparison between the proposed method and methods in previous studies (i.e., root mean square, kurtosis, and fault growth parameter) was carried out to assess the performance of the HCI. The analysis results suggest that the HCI can identify incipient fan bearing failures and describe the bearing degradation process. Overall, the work presented in this paper provides a promising method for fan bearing health evaluation and prognosis.
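A minimal sketch of the smoothing step: a raw vibration feature (plain frame RMS here, standing in for the paper's comblet-filtered feature) is turned into a health condition indicator by exponentially weighted moving average, and an alarm fires on a simple threshold (all constants assumed):

```python
import numpy as np

def ewma(values, alpha=0.1):
    """Exponentially weighted moving average of a feature sequence."""
    out, z = [], values[0]
    for v in values:
        z = alpha * v + (1 - alpha) * z
        out.append(z)
    return np.array(out)

rng = np.random.default_rng(3)
# Simulated vibration frames with slowly growing amplitude (degradation).
frames = [rng.normal(0, 1.0 + 0.02 * i, 1024) for i in range(200)]
rms = np.array([np.sqrt(np.mean(f**2)) for f in frames])   # raw feature
hci = ewma(rms)                                            # smoothed indicator

baseline = hci[:20].mean()                  # healthy reference level
alarm = int(np.argmax(hci > 1.5 * baseline))  # first threshold crossing
print("incipient-failure alarm at frame:", alarm)
```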
NASA Astrophysics Data System (ADS)
Zeng, Yajun; Skibniewski, Miroslaw J.
2013-08-01
Enterprise resource planning (ERP) system implementations are often characterised by large capital outlay, long implementation duration, and high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches, which have mostly focused on meeting project budget and schedule objectives, the proposed approach addresses the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system usage failure and to quantify the impact of critical component failures or critical risk events in the implementation process.
Jaski, B E; Fifer, M A; Wright, R F; Braunwald, E; Colucci, W S
1985-01-01
Milrinone is a potent positive inotropic and vascular smooth muscle-relaxing agent in vitro; it is therefore not known to what extent each of these actions contributes to the drug's hemodynamic effects in patients with heart failure. In 11 patients with New York Heart Association class III or IV congestive heart failure, incremental intravenous doses of milrinone were administered to determine the dose-response relationships for heart rate, systemic vascular resistance, and inotropic state, the latter measured by peak positive left ventricular derivative of pressure with respect to time (dP/dt). To further clarify the role of a positive inotropic action, the relative effects of milrinone and nitroprusside on left ventricular stroke work and dP/dt were compared in each patient at doses matched to cause equivalent reductions in mean arterial pressure or systemic vascular resistance, indices of left ventricular afterload. Milrinone caused heart rate, stroke volume, and dP/dt to increase, and systemic vascular resistance to decrease, in a concentration-related manner. At the two lowest milrinone doses, resulting in serum concentrations of 63 ± 4 and 156 ± 5 ng/ml, respectively, milrinone caused significant increases in stroke volume and dP/dt but no changes in systemic vascular resistance or heart rate. At the maximum milrinone dose administered (mean serum concentration, 427 ± 11 ng/ml), heart rate increased from 92 ± 4 to 99 ± 4 bpm (P < 0.01), mean aortic pressure fell from 82 ± 3 to 71 ± 3 mmHg (P < 0.01), right atrial pressure fell from 15 ± 2 to 7 ± 1 mmHg (P < 0.005), left ventricular end-diastolic pressure fell from 26 ± 3 to 18 ± 3 mmHg (P < 0.005), stroke volume index increased from 20 ± 2 to 30 ± 2 ml/m2 (P < 0.005), stroke work index increased from 14 ± 2 to 21 ± 2 g·m/m2 (P < 0.01), and dP/dt increased from 858 ± 54 to 1,130 ± 108 mmHg/s (P < 0.005). When compared with nitroprusside for a matched reduction in mean aortic pressure or systemic vascular resistance, milrinone caused a significantly greater increase in stroke work index at the same or lower left ventricular end-diastolic pressure. Milrinone caused a concentration-related increase in dP/dt (a 32% increase at the maximum milrinone dose), whereas nitroprusside had no effect. These data in patients with severe heart failure indicate that, in addition to a vasodilating effect, milrinone exerts a concentration-related positive inotropic action that contributes significantly to the drug's overall hemodynamic effects. The positive inotropic action occurs at drug levels that do not exert significant chronotropic or vasodilator effects. PMID:3973022
NASA Astrophysics Data System (ADS)
Ishikawa, Kaoru; Nakamura, Taro; Osumi, Hisashi
A reliable control method is proposed for multiple-loop control systems. After a feedback loop failure, such as a sensor breakdown, the control system becomes unstable and exhibits large fluctuations even if it has a disturbance observer. To cope with this problem, the proposed method uses an equivalent transfer function (ETF) as active redundancy compensation after the loop failure. The ETF is designed so that the transfer function of the whole system does not change before and after the loop failure. In this paper, the characteristics of a reliable control system that uses an ETF and a disturbance observer are examined in an experiment using a DC servo motor, for a current feedback loop failure in the position servo system.
Toward lean satellites reliability improvement using HORYU-IV project as case study
NASA Astrophysics Data System (ADS)
Faure, Pauline; Tanaka, Atomu; Cho, Mengu
2017-04-01
Lean satellite programs are programs in which the satellite development philosophy is driven by fast delivery and low cost. Though this concept offers the possibility to develop and fly risky missions without jeopardizing a space program, most of these satellites suffer infant mortality and fail to achieve their mission's minimum success. The high infant mortality rate of lean satellites indicates that testing prior to launch is insufficient. In this study, the authors monitored failures occurring during the development of the lean satellite HORYU-IV to identify the evolution of the cumulative number of failures against cumulative testing time. Moreover, the sub-systems driving the failures in the different development phases were identified. The results showed that half to two-thirds of the failures are discovered during the early stage of testing. Moreover, when the mean time before failure was calculated, it appeared that, for any development phase considered, a new failure appears on average every 20 h of testing. Simulations were also performed, showing that for an initial testing time of 50 h, reliability one month after launch can be improved by nearly a factor of six compared to an initial testing time of 20 h. Through this work, the authors aim to provide a qualitative reference to help lean satellite developers manage resources so as to follow a fast-delivery, low-cost philosophy while ensuring sufficient reliability to achieve mission minimum success.
Veronese, Ivan; De Martin, Elena; Martinotti, Anna Stefania; Fumagalli, Maria Luisa; Vite, Cristina; Redaelli, Irene; Malatesta, Tiziana; Mancosu, Pietro; Beltramo, Giancarlo; Fariselli, Laura; Cantone, Marie Claire
2015-06-13
A multidisciplinary and multi-institutional working group applied the Failure Mode and Effects Analysis (FMEA) approach to assess the risks for patients undergoing Stereotactic Body Radiation Therapy (SBRT) treatments for lesions located in the spine and liver in two CyberKnife® Centres. The various sub-processes characterizing the SBRT treatment were identified to generate the process trees of both the treatment planning and delivery phases. This analysis led to the identification and subsequent scoring of the potential failure modes, together with their causes and effects, using the risk priority number (RPN) scoring system. Novel solutions aimed at increasing patient safety were considered accordingly. The process tree characterising the SBRT treatment planning stage comprised a total of 48 sub-processes. Similarly, 42 sub-processes were identified in the stage of delivery to liver tumours and 30 in the stage of delivery to spine lesions. All the sub-processes were judged to be potentially prone to one or more failure modes. Nineteen failures (i.e., 5 in the treatment planning stage, 5 in delivery to liver lesions, and 9 in delivery to spine lesions) were considered of high concern in view of their high RPN and/or severity index values. The analysis of the potential failures, their causes, and their effects made it possible to supplement the safety strategies already adopted in clinical practice with additional measures for optimizing the quality management workflow and increasing patient safety.
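The RPN arithmetic behind such an analysis is simple to sketch; the failure modes and 1-10 scores below are illustrative, not the working group's actual entries:

```python
# Each failure mode is scored for severity (S), occurrence (O), and
# detectability (D) on 1-10 scales; RPN = S * O * D ranks modes for mitigation.
failure_modes = [
    # (sub-process, failure mode, S, O, D) -- hypothetical entries
    ("patient setup", "wrong target volume imported", 9, 2, 4),
    ("planning", "outdated CT used for dose calculation", 8, 2, 3),
    ("delivery", "tracking lost during respiratory motion", 7, 4, 2),
]
for sub, mode, s, o, d in sorted(failure_modes,
                                 key=lambda m: -(m[2] * m[3] * m[4])):
    print(f"RPN {s * o * d:3d}  [{sub}] {mode}")
```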
A failure management prototype: DR/Rx
NASA Technical Reports Server (NTRS)
Hammen, David G.; Baker, Carolyn G.; Kelly, Christine M.; Marsh, Christopher A.
1991-01-01
This failure management prototype performs failure diagnosis and recovery management of hierarchical, distributed systems. The prototype, which evolved from a series of previous prototypes following a spiral model for development, focuses on two functions: (1) the diagnostic reasoner (DR) performs integrated failure diagnosis in distributed systems; and (2) the recovery expert (Rx) develops plans to recover from the failure. Issues related to expert system prototype design and the previous history of this prototype are discussed. The architecture of the current prototype is described in terms of the knowledge representation and functionality of its components.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meeuwsen, J.J.; Kling, W.L.; Ploem, W.A.G.A.
1997-01-01
Protection systems in power systems can fail either by not responding when they should (failure to operate) or by operating when they should not (false tripping). The former type of failure is particularly serious since it may result in the isolation of large sections of the network. However, the probability of a failure to operate can be reduced by carrying out preventive maintenance on protection systems. This paper describes an approach to determine the impact of preventive maintenance on protection systems on the reliability of the power supply to customers. The proposed approach is based on Markov models.
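A minimal sketch of the kind of Markov reasoning involved: unrevealed "failure to operate" faults accumulate at an assumed rate and are cleared only by preventive maintenance, so steady-state unavailability falls as maintenance becomes more frequent (rates and intervals below are illustrative, not the paper's data):

```python
# Two-state continuous-time Markov chain: OK <-> failed-unrevealed.
# Steady-state probability of the failed state is lam / (lam + mu).
lam = 1e-5 * 24 * 365            # unrevealed failures per year (assumed)
for interval_months in (3, 6, 12, 24):
    mu = 12.0 / interval_months  # maintenance (restoration) actions per year
    unavail = lam / (lam + mu)
    print(f"maintenance every {interval_months:2d} months: "
          f"P(fails to operate on demand) = {unavail:.4f}")
```

Feeding such per-protection-system unavailabilities into the network model is what lets the approach quantify the impact of maintenance frequency on the reliability of supply to customers.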
Reports on work in support of NASA's tracking and communication division
NASA Technical Reports Server (NTRS)
Feagin, Terry; Lekkos, Anthony
1991-01-01
This is a report on the research conducted during the period October 1, 1991 through December 31, 1991. The research is divided into two primary areas: (1) generalization of the Fault Isolation using Bit Strings (FIBS) technique to permit fuzzy information to be used to isolate faults in the tracking and communications system of the Space Station; and (2) a study of the activity that should occur in the on board systems in order to attempt to recover from failures that are external to the Space Station.
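A minimal sketch of bit-string fault isolation in the FIBS spirit, together with a naive fuzzy extension of the kind the report contemplates (the symptoms, signatures, and matching scores are all hypothetical):

```python
SIGNATURES = {                  # symptom bits: [no_carrier, high_ber, lost_sync]
    "antenna_gimbal_stuck": 0b100,
    "transponder_degraded": 0b010,
    "receiver_lo_drift":    0b011,
}
ORDER = ["no_carrier", "high_ber", "lost_sync"]

def isolate(observed_bits):
    """Crisp match: rank faults by Hamming distance between the observed
    symptom bit string and each stored fault signature (0 = exact match)."""
    return sorted(SIGNATURES,
                  key=lambda f: bin(SIGNATURES[f] ^ observed_bits).count("1"))

def fuzzy_isolate(observed):
    """Fuzzy extension: symptoms carry confidences in [0, 1] instead of bits;
    score each fault by 1 minus the mean deviation from its signature."""
    scores = {}
    for fault, sig in SIGNATURES.items():
        bits = [(sig >> (len(ORDER) - 1 - i)) & 1 for i in range(len(ORDER))]
        dev = sum(abs(b - observed[s]) for s, b in zip(ORDER, bits))
        scores[fault] = 1 - dev / len(ORDER)
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(isolate(0b011))                          # crisp observation
print(fuzzy_isolate({"no_carrier": 0.1, "high_ber": 0.8, "lost_sync": 0.7}))
```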
Kaleri works with the TORU teleoperated control system in the SM during Expedition 8
2004-01-30
ISS008-E-14073 (30 January 2004) --- Cosmonaut Alexander Y. Kaleri, Expedition 8 flight engineer, practices docking procedures with the manual TORU rendezvous system in the Zvezda Service Module on the International Space Station (ISS) in preparation for the docking of the Progress 13 on January 31. With the manual TORU mode, Kaleri can perform necessary guidance functions from Zvezda via two hand controllers in the event of a failure of the Kurs automated rendezvous and docking (AR&D) of the Progress. Kaleri represents Rosaviakosmos.
Kaleri works with the TORU teleoperated control system in the SM during Expedition 8
2004-01-30
ISS008-E-14076 (30 January 2004) --- Cosmonaut Alexander Y. Kaleri, Expedition 8 flight engineer, practices docking procedures with the manual TORU rendezvous system in the Zvezda Service Module on the International Space Station (ISS) in preparation for the docking of the Progress 13 on January 31. With the manual TORU mode, Kaleri can perform necessary guidance functions from Zvezda via two hand controllers in the event of a failure of the Kurs automated rendezvous and docking (AR&D) of the Progress. Kaleri represents Rosaviakosmos.
Kaleri works with the TORU teleoperated control system in the SM during Expedition 8
2004-01-30
ISS008-E-14067 (30 January 2004) --- Cosmonaut Alexander Y. Kaleri, Expedition 8 flight engineer, practices docking procedures with the manual TORU rendezvous system in the Zvezda Service Module on the International Space Station (ISS) in preparation for the docking of the Progress 13 on January 31. With the manual TORU mode, Kaleri can perform necessary guidance functions from Zvezda via two hand controllers in the event of a failure of the Kurs automated rendezvous and docking (AR&D) of the Progress. Kaleri represents Rosaviakosmos.
Food for thought: food systems, livestock futures and animal health.
Wilkinson, Angela
2013-12-01
Global food security, livestock production and animal health are inextricably bound. However, our focus on the future tends to disaggregate food and health into largely separate domains. Indeed, much foresight work is either food systems or health-based, with little overlap in terms of predictions or narratives. Work on animal health is no exception. Part of the problem is a fundamental misunderstanding of the role, nature and impact of the modern futures tool kit. Here, I outline three key issues in futures research: methodological confusion over the application of scenarios; the failure to integrate multiple methodologies effectively; and the gap between the need for more evidence and questions of power and control over futures processes. At its core, however, a better understanding of the narrative and worldview framing much of the futures work in animal health is required to enhance the value and impact of such exercises.
Immunity-Based Aircraft Fault Detection System
NASA Technical Reports Server (NTRS)
Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.
2004-01-01
In the study reported in this paper, we have developed and applied an Artificial Immune System (AIS) algorithm for aircraft fault detection, as an extension of previous work on intelligent flight control (IFC). Though the prior studies had established the benefits of IFC, one area of weakness that needed to be strengthened was the control dead band induced by commanding a failed surface. Since the IFC approach uses fault accommodation with no detection, the dead band, although it reduces over time due to learning, is present and causes degradation in handling qualities. If the failure can be identified, this dead band can be further reduced to ensure rapid fault accommodation and better handling qualities. The paper describes the application of an immunity-based approach that can detect a broad spectrum of known and unforeseen failures. The approach incorporates knowledge of the normal operational behavior of the aircraft from sensory data, and probabilistically generates a set of pattern detectors that can detect any abnormalities (including faults) in the behavior pattern indicating unsafe in-flight operation. We developed a tool called MILD (Multi-level Immune Learning Detection) based on a real-valued negative selection algorithm that can generate a small number of specialized detectors (as signatures of known failure conditions) and a larger set of generalized detectors for unknown (or possible) fault conditions. Once the fault is detected and identified, an adaptive control system would use this detection information to stabilize the aircraft by utilizing available resources (control surfaces). We experimented with data sets collected under normal and various simulated failure conditions using a piloted motion-base simulation facility. The reported results are from a collection of test cases that reflect the performance of the proposed immunity-based fault detection algorithm.
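MILD itself is not reproduced in the abstract; the following is a minimal sketch of a real-valued negative selection detector generator of the kind the paper describes. The dimensionality, radii, and thresholds are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

# "Self" set: feature vectors recorded during normal flight (2-D toy data).
self_samples = rng.normal(loc=0.5, scale=0.05, size=(500, 2))
self_radius = 0.1  # region around each self sample considered normal

# Generate candidate detectors uniformly in [0,1]^2 and keep only those
# that do NOT match self (negative selection).
detectors = []
while len(detectors) < 50:
    d = rng.uniform(0.0, 1.0, size=2)
    if np.min(np.linalg.norm(self_samples - d, axis=1)) > self_radius:
        detectors.append(d)
detectors = np.array(detectors)

def is_anomalous(x, match_radius=0.1):
    # A sample is flagged as a fault pattern if any detector covers it.
    return bool(np.min(np.linalg.norm(detectors - x, axis=1)) < match_radius)

print(is_anomalous(np.array([0.5, 0.5])))  # normal behavior -> likely False
print(is_anomalous(np.array([0.9, 0.1])))  # abnormal pattern -> likely True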
40 CFR 63.164 - Standards: Compressors.
Code of Federal Regulations, 2013 CFR
2013-07-01
... with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be observed daily or shall be equipped with an... indicates failure of the seal system, the barrier fluid system, or both. (f) If the sensor indicates failure...
40 CFR 63.164 - Standards: Compressors.
Code of Federal Regulations, 2012 CFR
2012-07-01
... with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be observed daily or shall be equipped with an... indicates failure of the seal system, the barrier fluid system, or both. (f) If the sensor indicates failure...
Space tug propulsion system failure mode, effects and criticality analysis
NASA Technical Reports Server (NTRS)
Boyd, J. W.; Hardison, E. P.; Heard, C. B.; Orourke, J. C.; Osborne, F.; Wakefield, L. T.
1972-01-01
For purposes of the study, the propulsion system was considered as consisting of the following: (1) main engine system, (2) auxiliary propulsion system, (3) pneumatic system, (4) hydrogen feed, fill, drain and vent system, (5) oxygen feed, fill, drain and vent system, and (6) helium reentry purge system. Each component was critically examined to identify possible failure modes and the subsequent effect on mission success. Each space tug mission consists of three phases: launch to separation from shuttle, separation to redocking, and redocking to landing. The analysis considered the results of failure of a component during each phase of the mission. After the failure modes of each component were tabulated, those components whose failure would result in possible or certain loss of mission or inability to return the Tug to ground were identified as critical components and a criticality number determined for each. The criticality number of a component denotes the number of mission failures in one million missions due to the loss of that component. A total of 68 components were identified as critical with criticality numbers ranging from 1 to 2990.
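The criticality number defined in the abstract is straightforward to compute; the sketch below, with entirely invented component data, shows the calculation and ranking.

# Criticality number: expected mission losses per 1e6 missions attributable
# to one component = P(component fails in mission) * P(mission lost | failure) * 1e6.
# Components and probabilities are illustrative, not from the study.
components = {
    "main engine turbopump":  (2.0e-3, 0.9),
    "APS thruster valve":     (1.5e-3, 0.2),
    "helium purge regulator": (5.0e-4, 0.05),
}

for name, (p_fail, p_loss) in sorted(
        components.items(), key=lambda kv: -kv[1][0] * kv[1][1]):
    crit = p_fail * p_loss * 1e6
    print(f"{name:25s} criticality = {crit:6.0f} per million missions")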
The Impact of System Factors on Quality and Safety in Arterial Surgery: A Systematic Review.
Lear, R; Godfrey, A D; Riga, C; Norton, C; Vincent, C; Bicknell, C D
2017-07-01
A systems approach to patient safety proposes that a wide range of factors contribute to surgical outcome, yet the impact of team, work environment, and organisational factors, is not fully understood in arterial surgery. The aim of this systematic review is to summarize and discuss what is already known about the impact of system factors on quality and safety in arterial surgery. A systematic review of original research papers in English using MEDLINE, Embase, PsycINFO, and Cochrane databases, was performed according to PRISMA guidelines. Independent reviewers selected papers according to strict inclusion and exclusion criteria, and using predefined data fields, extracted relevant data on team, work environment, and organisational factors, and measures of quality and/or safety, in arterial procedures. Twelve papers met the selection criteria. Study endpoints were not consistent between papers, and most failed to report their clinical significance. A variety of tools were used to measure team skills in five papers; only one paper measured the relationship between team factors and patient outcomes. Two papers reported that equipment failures were common and had a significant impact on operating room efficiency. The influence of hospital characteristics on failure-to-rescue rates was tested in one large study, although their conclusions were limited to the American Medicare population. Five papers implemented changes in the patient pathway, but most studies failed to account for potential confounding variables. A small number of heterogeneous studies have evaluated the relationship between system factors and quality or safety in arterial surgery. There is some evidence of an association between system factors and patient outcomes, but there is more work to be done to fully understand this relationship. Future research would benefit from consistency in definitions, the use of validated assessment tools, measurement of clinically relevant endpoints, and adherence to national reporting guidelines. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olama, Mohammed M; Allgood, Glenn O; Kuruganti, Phani Teja
Electric utilities have a main responsibility to protect the lives and safety of their workers when they are working on low-, medium-, and high-voltage power lines and distribution circuits. With the anticipated widespread deployment of smart grids, a secure and highly reliable means of maintaining isolation of customer-owned distributed generation (DG) from the affected distribution circuits during maintenance is necessary to provide a fully de-energized work area, ensure utility personnel safety, and prevent hazards that can lead to accidents such as accidental electrocution from unanticipated power sources. Some circuits are serviced while energized (live line work), while others are de-energized for maintenance. For servicing de-energized circuits and equipment, lock-out tag-out (LOTO) programs provide a verifiable procedure for ensuring that circuit breakers are locked in the off state and tagged to indicate that status to operational personnel, so that the lines will be checked for voltage to verify they are de-energized. The de-energized area is isolated from any energized sources, which traditionally are the substations. This procedure works well when all power sources and their interconnections are known; armed with this knowledge, utility personnel can determine the appropriate circuits to de-energize for isolating the target line or equipment. However, with customer-owned DG tied into the grid, the risk of inadvertently reenergizing a circuit increases because circuit connections may not be adequately documented and are not under the direct control of the local utility. Thus, the active device may not be properly de-energized or isolated from the work area. Further, a remote means of de-energizing and locking out energized devices provides an opportunity for greatly reduced safety risk to utility personnel compared to manual operations. In this paper, we present a remotely controllable LOTO system that allows individual workers to determine the configuration and status of electrical system circuits and permits them to lock out customer-owned DG devices for safety purposes using a highly secure and ultra-reliable radio signal. The system consists of: (1) individual personal lockout devices, (2) a lockout communications and logic module at circuit breakers, which are located at all DG devices, and (3) a database and configuration control process located at the utility operations center. The lockout system is a close permissive, i.e., loss of control power or communications will cause the circuit breaker to open. Once the DG device is tripped open, a visual means will provide confirmation of a loss of voltage and current that verifies the disconnected status of the DG. Further, the utility personnel will be able to place their own lock electronically on the system to ensure lockout functionality. The proposed LOTO system provides enhanced worker safety and protection against unintended energized lines when DG is present. The main approaches and challenges encountered in designing the proposed region-wide LOTO system are discussed in this paper. These approaches include: (1) evaluating the reliability of the proposed approach under N-modular redundancy with voter/spares configurations and (2) conducting a system-level risk assessment study using the failure modes and effects analysis (FMEA) technique to identify and rank failure modes by probability of occurrence, probability of detection, and severity of consequences.
This ranking allows a cost-benefit analysis to be conducted such that dollars and effort are applied to the failures that provide the greatest incremental gains in system capability (resilience, survivability, security, reliability, availability, etc.) per dollar spent, whether capital, operations, or investment. Several simulation scenarios and their results are presented to demonstrate the viability of these approaches.
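The reliability evaluation under N-modular redundancy mentioned above is not detailed in the abstract; a minimal sketch of the standard majority-voting NMR calculation, with an assumed per-module reliability, follows.

from math import comb

def nmr_reliability(r_module, n, r_voter=1.0):
    # Majority-voting NMR: the system works if more than n/2 modules work.
    k = n // 2 + 1  # votes needed for a majority
    r = sum(comb(n, i) * r_module**i * (1 - r_module)**(n - i)
            for i in range(k, n + 1))
    return r_voter * r

r = 0.95  # assumed single-module reliability over the mission
for n in (1, 3, 5):
    print(f"{n}-modular redundancy: R = {nmr_reliability(r, n):.5f}")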
A Novel Solution-Technique Applied to a Novel WAAS Architecture
NASA Technical Reports Server (NTRS)
Bavuso, J.
1998-01-01
The Federal Aviation Administration has embarked on an historic task of modernizing and significantly improving the national air transportation system. One system that uses the Global Positioning System (GPS) to determine aircraft navigational information is called the Wide Area Augmentation System (WAAS). This paper describes a reliability assessment of one candidate system architecture for the WAAS. A unique aspect of this study concerns the modeling and solution of a candidate system that allows a novel cold sparing scheme. The cold spare is a WAAS communications satellite that is fabricated and launched after a predetermined number of orbiting satellite failures have occurred and after some stochastic fabrication time transpires. Because these satellites are complex systems with redundant components, they exhibit an increasing failure rate with a Weibull time-to-failure distribution. Moreover, the cold spare satellite build time is Weibull-distributed, and upon launch the spare is considered to be a good-as-new system, again with an increasing failure rate and a Weibull time-to-failure distribution. The reliability model for this system is non-Markovian because three distinct system clocks are required: the time to failure of the orbiting satellites, the build time for the cold spare, and the time to failure for the launched spare satellite. A powerful dynamic fault tree modeling notation and a Monte Carlo simulation technique with importance sampling are used to arrive at a reliability prediction for a 10-year mission.
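As a rough illustration of the kind of non-Markovian Monte Carlo model described (not the paper's actual model), the sketch below simulates one orbiting satellite with a cold spare whose fabrication starts at the first failure. All Weibull parameters are invented, and a temporary outage during fabrication is tolerated as long as coverage is restored before the mission ends.

import numpy as np

rng = np.random.default_rng(1)
MISSION_YRS = 10.0

# Assumed Weibull parameters (shape > 1 gives an increasing failure rate).
SAT_SHAPE, SAT_SCALE = 2.0, 12.0     # orbiting satellite time to failure [yr]
BUILD_SHAPE, BUILD_SCALE = 3.0, 1.5  # cold-spare fabrication time [yr]

def one_mission():
    t_fail = SAT_SCALE * rng.weibull(SAT_SHAPE)      # first satellite fails
    if t_fail >= MISSION_YRS:
        return True
    t_spare_up = t_fail + BUILD_SCALE * rng.weibull(BUILD_SHAPE)
    if t_spare_up >= MISSION_YRS:
        return False                                 # gap never closed in time
    spare_life = SAT_SCALE * rng.weibull(SAT_SHAPE)  # good-as-new at launch
    return t_spare_up + spare_life >= MISSION_YRS

n = 20_000
print(f"estimated 10-yr reliability ~ {sum(one_mission() for _ in range(n)) / n:.4f}")

Each trial draws from a different clock (orbit time, build time, spare lifetime), which is exactly why the model is non-Markovian and is handled here by simulation rather than by a state-transition matrix.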
Failure environment analysis tool applications
NASA Astrophysics Data System (ADS)
Pack, Ginger L.; Wadsworth, David B.
1993-02-01
Understanding risks and avoiding failure are daily concerns for the women and men of NASA. Although NASA's mission propels us to push the limits of technology, and though the risks are considerable, the NASA community has instilled within it the determination to preserve the integrity of the systems upon which our mission and our employees' lives and well-being depend. One of the ways this is being done is by expanding and improving the tools used to perform risk assessment. The Failure Environment Analysis Tool (FEAT) was developed to help engineers and analysts more thoroughly and reliably conduct risk assessment and failure analysis. FEAT accomplishes this by providing answers to questions regarding what might have caused a particular failure, or, conversely, what effect the occurrence of a failure might have on an entire system. Additionally, FEAT can determine what common causes could have resulted in other combinations of failures. FEAT will even help determine the vulnerability of a system to failures, in light of reduced capability. FEAT also is useful in training personnel who must develop an understanding of particular systems. FEAT facilitates training on system behavior by providing an automated environment in which to conduct 'what-if' evaluations. These types of analyses make FEAT a valuable tool for engineers and operations personnel in the design, analysis, and operation of NASA space systems.
Failure environment analysis tool applications
NASA Technical Reports Server (NTRS)
Pack, Ginger L.; Wadsworth, David B.
1993-01-01
Understanding risks and avoiding failure are daily concerns for the women and men of NASA. Although NASA's mission propels us to push the limits of technology, and though the risks are considerable, the NASA community has instilled within it the determination to preserve the integrity of the systems upon which our mission and our employees' lives and well-being depend. One of the ways this is being done is by expanding and improving the tools used to perform risk assessment. The Failure Environment Analysis Tool (FEAT) was developed to help engineers and analysts more thoroughly and reliably conduct risk assessment and failure analysis. FEAT accomplishes this by providing answers to questions regarding what might have caused a particular failure, or, conversely, what effect the occurrence of a failure might have on an entire system. Additionally, FEAT can determine what common causes could have resulted in other combinations of failures. FEAT will even help determine the vulnerability of a system to failures, in light of reduced capability. FEAT also is useful in training personnel who must develop an understanding of particular systems. FEAT facilitates training on system behavior by providing an automated environment in which to conduct 'what-if' evaluations. These types of analyses make FEAT a valuable tool for engineers and operations personnel in the design, analysis, and operation of NASA space systems.
Failure environment analysis tool applications
NASA Technical Reports Server (NTRS)
Pack, Ginger L.; Wadsworth, David B.
1994-01-01
Understanding risks and avoiding failure are daily concerns for the women and men of NASA. Although NASA's mission propels us to push the limits of technology, and though the risks are considerable, the NASA community has instilled within it the determination to preserve the integrity of the systems upon which our mission and our employees' lives and well-being depend. One of the ways this is being done is by expanding and improving the tools used to perform risk assessment. The Failure Environment Analysis Tool (FEAT) was developed to help engineers and analysts more thoroughly and reliably conduct risk assessment and failure analysis. FEAT accomplishes this by providing answers to questions regarding what might have caused a particular failure, or, conversely, what effect the occurrence of a failure might have on an entire system. Additionally, FEAT can determine what common causes could have resulted in other combinations of failures. FEAT will even help determine the vulnerability of a system to failures, in light of reduced capability. FEAT also is useful in training personnel who must develop an understanding of particular systems. FEAT facilitates training on system behavior by providing an automated environment in which to conduct 'what-if' evaluations. These types of analyses make FEAT a valuable tool for engineers and operations personnel in the design, analysis, and operation of NASA space systems.
ERIC Educational Resources Information Center
Marsh, Sheila; Rodrigues, Jeff
2015-01-01
The paper reflects on the implications of selecting local multifunctional networks as a principal method of achieving improvement in the transition experience of young people with life-limiting conditions, given the range of blocking factors identified. It summarises a programme of work that aimed to tackle these blocks through developing local…
How to Build a Robot: Collaborating to Strengthen STEM Programming in a Citywide System
ERIC Educational Resources Information Center
Groome, Meghan; Rodríguez, Linda M.
2014-01-01
You have to stick with it. It takes time, patience, trial and error, failure, and persistence. It is almost never perfect or finished, but, with a good team, you can build something that works. These are the lessons youth learn when building a robot, as many do in the out-of-school time (OST) programs supported by the initiative described in this…
Challenges in Resolution for IC Failure Analysis
NASA Astrophysics Data System (ADS)
Martinez, Nick
1999-10-01
Resolution is becoming more and more of a challenge in the world of failure analysis in integrated circuits, a result of the ongoing size reduction in microelectronics. Determining the cause of a failure depends upon being able to find the responsible defect. The time it takes to locate a given defect is extremely important so that proper corrective actions can be taken. The limits of current microscopy tools are being pushed. With sub-micron feature sizes and even smaller killing defects, optical microscopes are becoming obsolete. With scanning electron microscopy (SEM), the resolution is high, but the voltage involved can make these small defects transparent due to the large mean free path of incident electrons. In this presentation, I will give an overview of the use of inspection methods in failure analysis and show example studies from my work as an intern student at Texas Instruments. 1. Work at Texas Instruments, Stafford, TX, was supported by TI. 2. Work at Texas Tech University was supported by NSF Grant DMR9705498.
SLAMM: Visual monocular SLAM with continuous mapping using multiple maps
Md. Sabri, Aznul Qalid; Loo, Chu Kiong; Mansoor, Ali Mohammed
2018-01-01
This paper presents the concept of Simultaneous Localization and Multi-Mapping (SLAMM). It is a system that ensures continuous mapping and information preservation despite failures in tracking due to corrupted frames or sensor malfunction, making it suitable for real-world applications. It works with single or multiple robots. In a single-robot scenario the algorithm generates a new map at the time of tracking failure, and later merges maps at the event of loop closure. Similarly, maps generated from multiple robots are merged without prior knowledge of their relative poses, which makes this algorithm flexible. The system works in real time at frame-rate speed. The proposed approach was tested on the KITTI and TUM RGB-D public datasets and showed superior results compared to the state of the art in calibrated visual monocular keyframe-based SLAM. The mean tracking time is around 22 milliseconds. The initialization is twice as fast as it is in ORB-SLAM, and the retrieved map can reach up to 90 percent more in terms of information preservation, depending on tracking loss and loop closure events. For the benefit of the community, the source code, along with a framework to be run with the Bebop drone, is made available at https://github.com/hdaoud/ORBSLAMM. PMID:29702697
Faught, Jacqueline Tonigan; Balter, Peter A; Johnson, Jennifer L; Kry, Stephen F; Court, Laurence E; Stingo, Francesco C; Followill, David S
2017-11-01
The objective of this work was to assess both the perception of failure modes in Intensity Modulated Radiation Therapy (IMRT) when the linac is operated at the edge of tolerances given in AAPM TG-40 (Kutcher et al.) and TG-142 (Klein et al.), and the application of FMEA to this specific section of the IMRT process. An online survey was distributed to approximately 2000 physicists worldwide who participate in quality services provided by the Imaging and Radiation Oncology Core - Houston (IROC-H). The survey briefly described eleven different failure modes covered by basic quality assurance in step-and-shoot IMRT at or near TG-40 (Kutcher et al.) and TG-142 (Klein et al.) tolerance criteria levels. Respondents were asked to estimate the worst-case percent dose error that could be caused by each of these failure modes in a head and neck patient, as well as the FMEA scores: occurrence, detectability, and severity. Risk priority number (RPN) scores were calculated as the product of these scores. Demographic data were also collected. A total of 181 individual and three group responses were submitted; 84% were from North America. Most (76%) individual respondents performed at least 80% clinical work and 92% were nationally certified. Respondent medical physics experience ranged from 2.5 to 45 yr (average 18 yr). A total of 52% of individual respondents were at least somewhat familiar with FMEA, while 17% were not familiar. Several IMRT techniques, treatment planning systems, and linear accelerator manufacturers were represented. All failure modes received widely varying scores, ranging from 1 to 10 for occurrence, at least 1-9 for detectability, and at least 1-7 for severity. Ranking failure modes by RPN scores also resulted in large variability, with each failure mode being ranked both most risky (1st) and least risky (11th) by different respondents. On average, MLC modeling had the highest RPN scores. Individual estimated percent dose errors and severity scores positively correlated (P < 0.01) for each failure mode, as expected. No universal correlations were found between the demographic information collected and scoring, percent dose errors, or ranking. Overall, the failure modes investigated were evaluated as low to medium risk, with average RPNs less than 110. The ranking of the 11 failure modes was not agreed upon by the community. Large variability in FMEA scoring may be caused by individual interpretation and/or experience, reflecting the subjective nature of the FMEA tool. © 2017 American Association of Physicists in Medicine.
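The RPN arithmetic used in the survey reduces to a product of three 1-10 scores; a toy ranking (failure modes and scores invented, not survey data) looks like this:

# FMEA risk priority number: RPN = occurrence x detectability x severity.
# Example failure modes and 1-10 scores are illustrative only.
failure_modes = [
    ("MLC leaf position at tolerance edge", 6, 7, 5),
    ("output calibration drift",            4, 3, 6),
    ("couch indexing error",                2, 5, 7),
]

ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for name, o, d, s in ranked:
    print(f"RPN = {o*d*s:3d}  ({name})")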
NASA Technical Reports Server (NTRS)
Obrien, Maureen E.
1990-01-01
Telerobotic operations, whether under autonomous or teleoperated control, require a much more sophisticated safety system than that needed for most industrial applications. Industrial robots generally perform very repetitive tasks in a controlled, static environment. The safety system in that case can be as simple as shutting down the robot if a human enters the work area, or even simply building a cage around the work space. Telerobotic operations, however, will take place in a dynamic, sometimes unpredictable environment, and will involve complicated and perhaps unrehearsed manipulations. This creates a much greater potential for damage to the robot or objects in its vicinity. The Procedural Safety System (PSS) collects data from external sensors and the robot, then processes the data through an expert system shell to determine whether an unsafe condition or potential unsafe condition exists. Unsafe conditions could include exceeding velocity, acceleration, torque, or joint limits, imminent collision, exceeding temperature limits, and robot or sensor component failure. If a threat to safety exists, the operator is warned. If the threat is serious enough, the robot is halted. The PSS therefore uses expert system technology to enhance safety, reducing operator workload and allowing him/her to focus on performing the task at hand without the distraction of worrying about violating safety criteria.
Diverse Redundant Systems for Reliable Space Life Support
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2015-01-01
Reliable life support systems are required for deep space missions. The probability of a fatal life support failure should be less than one in a thousand in a multi-year mission. It is far too expensive to develop a single system with such high reliability. Using three redundant units would require only that each have a failure probability of one in ten over the mission. Since system development cost is inversely related to the failure probability, this would cut cost by a factor of one hundred. Using replaceable subsystems instead of full systems would further cut cost. Using full sets of replaceable components improves reliability more than using complete systems as spares, since a set of components could repair many different failures instead of just one. Replaceable components would require more tools, space, and planning than full systems or replaceable subsystems. However, identical system redundancy cannot be relied on in practice. Common cause failures can disable all the identical redundant systems. Typical levels of common cause failures will defeat redundancy greater than two. Diverse redundant systems are required for reliable space life support. Three, four, or five diverse redundant systems could be needed for sufficient reliability. One system with lower level repair could be substituted for two diverse systems to save cost.
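The arithmetic behind the redundancy argument, plus a beta-factor common-cause term (an assumed model, used here only to illustrate why identical redundancy beyond two units saturates), can be sketched as:

# Independent redundancy: P(system fails) = p**n for n identical units,
# each failing with probability p over the mission.
p = 0.1
for n in (1, 2, 3):
    print(f"{n} independent unit(s): P_fail = {p**n:.0e}")

# Beta-factor common-cause model: a fraction beta of unit failures
# takes out every redundant copy at once.
beta = 0.05  # assumed common-cause fraction
for n in (2, 3, 4):
    p_sys = beta * p + ((1 - beta) * p) ** n
    print(f"{n} units with beta={beta}: P_fail ~ {p_sys:.1e}")

With beta = 0.05, the system failure probability floors near beta*p no matter how many identical copies are added, which is the point the abstract makes about common cause failures defeating redundancy greater than two.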
A Fault Tolerant System for an Integrated Avionics Sensor Configuration
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Lancraft, R. E.
1984-01-01
An aircraft sensor fault tolerant system methodology for the Transport Systems Research Vehicle in a Microwave Landing System (MLS) environment is described. The fault tolerant system provides reliable estimates in the presence of possible failures both in ground-based navigation aids, and in on-board flight control and inertial sensors. Sensor failures are identified by utilizing the analytic relationships between the various sensors arising from the aircraft point mass equations of motion. The estimation and failure detection performance of the software implementation (called FINDS) of the developed system was analyzed on a nonlinear digital simulation of the research aircraft. Simulation results showing the detection performance of FINDS, using a dual redundant sensor complement, are presented for bias, hardover, null, ramp, increased noise and scale factor failures. In general, the results show that FINDS can distinguish between normal operating sensor errors and failures while providing an excellent detection speed for bias failures in the MLS, indicated airspeed, attitude and radar altimeter sensors.
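FINDS itself is not reproduced here; as a minimal sketch of the analytic-redundancy idea it is built on (comparing a sensor against an estimate formed from other measurements and flagging a persistent residual), consider the following, with synthetic data and invented thresholds.

import numpy as np

rng = np.random.default_rng(2)
n = 400
truth = np.sin(np.linspace(0, 8, n))        # true airspeed-like signal
estimate = truth + rng.normal(0, 0.02, n)   # analytic estimate from other sensors
sensor = truth + rng.normal(0, 0.02, n)
sensor[250:] += 0.3                          # inject a bias failure at sample 250

residual = sensor - estimate
window, threshold = 20, 0.15                 # invented test parameters
avg = np.convolve(residual, np.ones(window) / window, mode="valid")
alarm = avg > threshold
if alarm.any():
    print("bias detected near sample", np.argmax(alarm) + window - 1)
else:
    print("no alarm")

The moving-average test passes normal sensor noise but trips within a few samples of the injected bias, mirroring the bias-detection speed the abstract highlights.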
Monitoring SLAC High Performance UNIX Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC
2005-12-15
Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
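The paper's schema is not given in the abstract; the sketch below shows the general shape of a script-driven relational metrics store, using Python's built-in sqlite3 in place of MySQL so the example runs without a database server, and with an invented table layout.

import sqlite3, time

con = sqlite3.connect(":memory:")  # stand-in for the MySQL backend
con.execute("""CREATE TABLE metrics (
    host TEXT, metric TEXT, value REAL, ts INTEGER)""")

# A collector script would poll Ganglia's monitoring feed; here we insert
# fabricated samples to show the flow.
samples = [("node01", "load_one", 0.42), ("node01", "mem_free", 1.2e9)]
con.executemany(
    "INSERT INTO metrics VALUES (?, ?, ?, ?)",
    [(h, m, v, int(time.time())) for h, m, v in samples])

for row in con.execute(
        "SELECT host, metric, value FROM metrics WHERE host = ?", ("node01",)):
    print(row)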
Reliability Growth in Space Life Support Systems
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2014-01-01
A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.
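The abstract does not name the mathematical model; one standard choice is the Crow-AMSAA power-law model, in which cumulative failures follow N(t) = lambda * t**beta and beta < 1 indicates reliability growth (a decreasing failure intensity). A sketch of its maximum-likelihood fit, with fabricated failure times, follows.

import math

# Fabricated cumulative failure times [days] over a T-day observation window.
t = [12, 30, 55, 100, 160, 270, 410, 600]
T = 700.0

n = len(t)
beta = n / sum(math.log(T / ti) for ti in t)  # MLE shape parameter
lam = n / T**beta                              # MLE scale parameter
rho = lam * beta * T**(beta - 1)               # current failure intensity

print(f"beta = {beta:.2f}  ({'reliability growth' if beta < 1 else 'no growth'})")
print(f"failure intensity at T: {rho:.4f} failures/day")

Fitted to shuttle-like data with widening gaps between failures, beta comes out below 1 (growth); a constant-rate system like the one the abstract describes for ISS would fit beta close to 1.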
Westhoff-Bleck, Mechthild; Schieffer, Bernhard; Tegtbur, Uwe; Meyer, Gerd Peter; Hoy, Ludwig; Schaefer, Arnd; Tallone, Ezequiel Marcello; Tutarel, Oktay; Mertins, Ramona; Wilmink, Lena Mara; Anker, Stefan D; Bauersachs, Johann; Roentgen, Philipp
2013-12-05
Exercise training safely and efficiently improves symptoms in patients with heart failure due to left ventricular dysfunction. However, studies in congenital heart disease with a systemic right ventricle are scarce and results are controversial. In a randomised controlled study we investigated the effect of aerobic exercise training on exercise capacity and systemic right ventricular function in adults with d-transposition of the great arteries after atrial redirection surgery (28.2 ± 3.0 years after the Mustard procedure). 48 patients (31 male, age 29.3 ± 3.4 years) were randomly allocated to 24 weeks of structured exercise training or usual care. The primary endpoint was the change in maximum oxygen uptake (peak VO2). Secondary endpoints were systemic right ventricular diameters determined by cardiac magnetic resonance imaging (CMR). Data were analysed according to the intention-to-treat principle. At baseline peak VO2 was 25.5 ± 4.7 ml/kg/min in the control group and 24.0 ± 5 ml/kg/min in the training group (p=0.3). Training significantly improved exercise capacity (treatment effect for peak VO2 3.8 ml/kg/min, 95% CI: 1.8 to 5.7; p=0.001), work load (p=0.002), maximum exercise time (p=0.002), and NYHA class (p=0.046). Systemic ventricular function and volumes determined by CMR remained unchanged. None of the patients developed signs of cardiac decompensation or arrhythmias while on exercise training. Aerobic exercise training did not detrimentally affect systemic right ventricular function, but significantly improved exercise capacity and heart failure symptoms. Aerobic exercise training can be recommended for patients following atrial redirection surgery to improve exercise capacity and to lessen or prevent heart failure symptoms. (ClinicalTrials.gov #NCT00837603). © 2013.
Matrix Dominated Failure of Fiber-Reinforced Composite Laminates Under Static and Dynamic Loading
NASA Astrophysics Data System (ADS)
Schaefer, Joseph Daniel
Hierarchical material systems provide the unique opportunity to connect material knowledge to solving specific design challenges. Representing the fastest-growing class of hierarchical materials in use, fiber-reinforced polymer composites (FRPCs) offer superior strength and stiffness-to-weight ratios, damage tolerance, and decreasing production costs compared to metals and alloys. However, the implementation of FRPCs has historically been hampered by inadequate knowledge of material failure behavior, due to incomplete verification of recent computational constitutive models and improper (or non-existent) experimental validation, which has severely slowed creation and development. As noted by the recent Materials Genome Initiative and the Worldwide Failure Exercise, current state-of-the-art qualification programs endure a 20-year gap between material conceptualization and implementation due to the lack of effective partnership between computational coding (simulation) and experimental characterization. Qualification processes are primarily experiment driven; the anisotropic nature of composites predisposes matrix-dominant properties to be sensitive to strain rate, which necessitates extensive testing. To decrease the qualification time, a framework that practically combines theoretical prediction of material failure with limited experimental validation is required. In this work, the Northwestern Failure Theory (NU Theory) for composite lamina is presented as the theoretical basis from which the failure of unidirectional and multidirectional composite laminates is investigated. From an initial experimental characterization of basic lamina properties, the NU Theory is employed to predict the matrix-dependent failure of composites under any state of biaxial stress at strain rates from quasi-static to 1000 s^-1. It was found that the number of experiments required to characterize the strain-rate-dependent failure of a new composite material was reduced by an order of magnitude, and the resulting strain-rate dependence was applicable to a large class of materials. The presented framework provides engineers with the capability to quickly identify fiber and matrix combinations for a given application and determine the failure behavior over the range of practical loading cases. The failure-mode-based NU Theory may be especially useful when partnered with computational approaches (which often employ micromechanics to determine constituent and constitutive response) to provide accurate validation of the matrix-dominated failure modes experienced by laminates during progressive failure.
[Biochemical failure after curative treatment for localized prostate cancer].
Zouhair, Abderrahim; Jichlinski, Patrice; Mirimanoff, René-Olivier
2005-12-07
Biochemical failure after curative treatment for localized prostate cancer is frequent. The diagnosis of biochemical failure is clear when PSA levels rise after radical prostatectomy, but may be more difficult after external beam radiation therapy. The main difficulty once biochemical failure is diagnosed is to distinguish between local and distant failure, given the low sensitivity of standard work-up exams. Metabolic imaging techniques currently under evaluation may in the future help us to localize the site of failures. There are several therapeutic options depending on the initial curative treatment, each with morbidity risks that should be considered in multidisciplinary decision-making.
NASA Technical Reports Server (NTRS)
Steurer, W. H.
1980-01-01
A survey of all presently defined or proposed large space systems indicated an ever-increasing demand for flexible components and materials, primarily as a result of the widening disparity between the stowage space of launch vehicles and the size of advanced systems. Typical flexible components and material requirements were identified on the basis of recurrence and/or functional commonality. This was followed by the evaluation of candidate materials and the search for material capabilities which promise to satisfy the postulated requirements. Particular attention was placed on thin films, and on the requirements of deployable antennas. The assessment of the performance of specific materials was based primarily on the failure mode, derived from a detailed failure analysis. In view of extensive ongoing work on thermal and environmental degradation effects, prime emphasis was placed on the assessment of the performance loss by meteoroid damage. Quantitative data were generated for tension members and antenna reflector materials. A methodology was developed for the representation of the overall materials performance as related to systems service life. A number of promising new concepts for flexible materials were identified.
Optimization of Composite Material System and Lay-up to Achieve Minimum Weight Pressure Vessel
NASA Astrophysics Data System (ADS)
Mian, Haris Hameed; Wang, Gang; Dar, Uzair Ahmed; Zhang, Weihong
2013-10-01
The use of composite pressure vessels, particularly in the aerospace industry, is escalating rapidly because of their superiority in directional strength and colossal weight advantage. The present work elucidates the procedure to optimize the lay-up for a composite pressure vessel using finite element analysis and to calculate the relative weight saving compared with a reference metallic pressure vessel. The determination of proper fiber orientation and laminate thickness is very important to decrease manufacturing difficulties and increase structural efficiency. In the present work different lay-up sequences for laminates, including cross-ply [0_m/90_n]_s, angle-ply [±θ]_ns, [90/±θ]_ns and [0/±θ]_ns, are analyzed. The lay-up sequence, orientation and laminate thickness (number of layers) are optimized for three candidate composite materials: S-glass/epoxy, Kevlar/epoxy and carbon/epoxy. Finite element analysis of the composite pressure vessel is performed using the commercial finite element code ANSYS, utilizing the capabilities of the ANSYS Parametric Design Language and Design Optimization module to automate the optimization process. For verification, a code is developed in MATLAB based on classical lamination theory, incorporating the Tsai-Wu failure criterion for first-ply failure (FPF). The results of the MATLAB code show its effectiveness in theoretical prediction of first-ply failure strengths of laminated composite pressure vessels and close agreement with the FEA results. The optimization results show that for all the composite material systems considered, the angle-ply [±θ]_ns is the optimum lay-up. For a given fixed ply thickness, the total laminate thickness is obtained resulting in a factor of safety slightly higher than two. Both carbon/epoxy and Kevlar/epoxy resulted in approximately the same laminate thickness and a considerable percentage of weight saving, but S-glass/epoxy resulted in a weight increase.
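The MATLAB code is not reproduced in the abstract; a minimal sketch of the plane-stress Tsai-Wu first-ply-failure check it implements is below, with typical (assumed) carbon/epoxy strengths rather than the paper's data.

import math

# Assumed carbon/epoxy lamina strengths [MPa].
Xt, Xc = 1500.0, 1200.0  # longitudinal tension / compression
Yt, Yc = 50.0, 250.0     # transverse tension / compression
S = 70.0                 # in-plane shear

F1,  F2  = 1/Xt - 1/Xc, 1/Yt - 1/Yc
F11, F22 = 1/(Xt*Xc), 1/(Yt*Yc)
F66 = 1/S**2
F12 = -0.5 * math.sqrt(F11 * F22)  # common estimate of the interaction term

def tsai_wu_index(s1, s2, t12):
    # First-ply failure is predicted when the index reaches 1.
    return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2
            + F66*t12**2 + 2*F12*s1*s2)

print(tsai_wu_index(800.0, 20.0, 30.0))  # < 1: the ply survives this stress state

In an FPF analysis, classical lamination theory supplies the per-ply stresses (s1, s2, t12) for each candidate lay-up, and the load is scaled until the worst ply's index reaches 1.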
Digital I and C system upgrade integration technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, H. W.; Shih, C.; Wang, J. R.
2012-07-01
This work developed an integration technique for digital I and C system upgrade; a utility can replace its I and C systems step by step systematically by this method. The Institute of Nuclear Energy Research (INER) developed a digital instrumentation and control (I and C) replacement integration technique on the basis of the requirements of the three existing nuclear power plants (NPPs) in Taiwan, which are Chin-Shan (CS) NPP, Kuo-Sheng (KS) NPP, and Maanshan (MS) NPP, and also developed the related Critical Digital Review (CDR) procedure. The digital I and C replacement integration technique includes: (1) establishment of a Nuclear Power Plant Digital Replacement Integration Guideline, (2) preliminary investigation on I and C system digitalization, (3) evaluation on I and C system digitalization, and (4) establishment of I and C system digitalization architectures. These works can be a reference for performing I and C system digital replacement integration at the three existing NPPs of Taiwan Power Company (TPC). A CDR is the review for a critical system digital I and C replacement. The major reference for this procedure is EPRI TR-1011710 (2005), 'Handbook for Evaluating Critical Digital Equipment and Systems', published by the Electric Power Research Institute (EPRI). With this document, INER developed a TPC-specific CDR procedure. Currently, CDR is one of the policies for digital I and C replacement in TPC. The contents of this CDR procedure include: scope, responsibility, operation procedure, operation flow chart, and CDR review items. The CDR review items include comparison of the design change, Software Verification and Validation (SV&V), Failure Mode and Effects Analysis (FMEA), evaluation of diversity and defense-in-depth (D3), evaluation of the watchdog timer, evaluation of electromagnetic compatibility (EMC), evaluation of grounding for system/component, seismic evaluation, witness and inspection, and lessons learnt from digital I and C failure events. A solid review can assure the quality of the digital I and C system replacement. (authors)
DOT National Transportation Integrated Search
1981-06-01
The purpose of Task 5 in the Extended System Operations Studies Project, DPM Failure Management, is to enhance the capabilities of the Downtown People Mover Simulation (DPMS) and the Discrete Event Simulation Model (DESM) by increasing the failure mo...
Development of software to improve AC power quality on large spacecraft
NASA Technical Reports Server (NTRS)
Kraft, L. Alan
1991-01-01
To ensure the reliability of a 20 kHz, alternating current (AC) power system on spacecraft, it is essential to analyze its behavior under many adverse operating conditions. Some of these conditions include overloads, short circuits, switching surges, and harmonic distortions. Harmonic distortion can become a serious problem. It can cause malfunctions in equipment that the power system is supplying, and, during distortions such as voltage resonance, it can cause equipment and insulation failures due to the extreme peak voltages. To address the harmonic distortion issue, work was begun under the 1990 NASA-ASEE Summer Faculty Fellowship Program. Software originally developed by EPRI, called HARMFLO, a power flow program capable of analyzing harmonic conditions on three-phase, balanced, 60 Hz AC power systems, was modified to analyze single-phase, 20 kHz AC power systems. Since almost all of the equipment used on spacecraft power systems is electrically different from equipment used on terrestrial power systems, it was also necessary to develop mathematical models for the equipment to be used on the spacecraft. The modelling was also started during the same fellowship period. Details of the modifications and models completed during the 1990 NASA-ASEE Summer Faculty Fellowship Program can be found in a project report. As a continuation of the work to develop a complete package necessary for the full analysis of spacecraft AC power system behavior, development work has continued through NASA Grant NAG3-1254. This report details the work covered by the above-mentioned grant.
Automatic Monitoring System Design and Failure Probability Analysis for River Dikes on Steep Channel
NASA Astrophysics Data System (ADS)
Chang, Yin-Lung; Lin, Yi-Jun; Tung, Yeou-Koung
2017-04-01
The purposes of this study include: (1) to design an automatic monitoring system for river dikes; and (2) to develop a framework which enables the determination of dike failure probabilities for various failure modes during a rainstorm. The historical dike failure data collected in this study indicate that most dikes in Taiwan collapsed under the 20-year return-period discharge, which means the probability of dike failure is much higher than that of overtopping. We installed the dike monitoring system on the Chiu-She Dike, which is located on the middle reach of the Dajia River, Taiwan. The system includes: (1) vertically distributed pore water pressure sensors in front of and behind the dike; (2) Time Domain Reflectometry (TDR) to measure the displacement of the dike; (3) a wireless floating device to measure the scouring depth at the toe of the dike; and (4) a water level gauge. The monitoring system recorded the variation of pore pressure inside the Chiu-She Dike and the scouring depth during Typhoon Megi. The recorded data showed that the highest groundwater level inside the dike occurred 15 hours after the peak discharge. We developed a framework which accounts for the uncertainties in return-period discharge, Manning's n, scouring depth, soil cohesion, and friction angle, and enables the determination of dike failure probabilities for various failure modes such as overtopping, surface erosion, mass failure, toe sliding and overturning. The framework was applied to the Chiu-She, Feng-Chou, and Ke-Chuang Dikes on the Dajia River. The results indicate that toe sliding or overturning has a higher probability than the other failure modes. Furthermore, the overall failure probability (integrating the different failure modes) reaches 50% under the 10-year return-period flood, which agrees with the historical failure data for the study reaches.
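The framework's structure, integrating several uncertain inputs over multiple failure modes, can be sketched with a toy Monte Carlo; the distributions and limit-state functions below are invented, not the paper's.

import numpy as np

rng = np.random.default_rng(3)
N = 200_000

# Assumed uncertain inputs (illustrative distributions).
discharge = rng.lognormal(mean=7.0, sigma=0.4, size=N)  # flood discharge [m3/s]
scour     = rng.normal(2.0, 0.6, size=N)                 # toe scour depth [m]
cohesion  = rng.normal(15.0, 3.0, size=N)                # soil cohesion [kPa]

# Invented limit states: a mode fails when demand exceeds capacity.
overtop = discharge > 3500.0
sliding = scour > 1.2 + 0.15 * cohesion
mass    = (scour > 2.8) & (cohesion < 12.0)

any_mode = overtop | sliding | mass  # overall = union of the modes
for name, mode in [("overtopping", overtop), ("toe sliding", sliding),
                   ("mass failure", mass), ("overall", any_mode)]:
    print(f"P({name}) ~ {mode.mean():.3f}")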
12 CFR 263.21 - Failure to appear.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 3 2011-01-01 2011-01-01 false Failure to appear. 263.21 Section 263.21 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM RULES OF PRACTICE FOR HEARINGS Uniform Rules of Practice and Procedure § 263.21 Failure to appear. Failure...
Law, Yuk Ming; Ettedgui, Jose; Beerman, Lee; Maisel, Alan; Tofovic, Stevan
2006-08-15
The measurement of plasma B-type natriuretic peptide (BNP) has emerged as a useful biomarker of heart failure in patients with cardiomyopathy. The pathophysiology of heart failure in single ventricle (SV) circulation may be distinct from that of cardiomyopathies. A distinct pattern of BNP elevation in heart failure in the SV population was hypothesized: it is elevated in heart failure secondary to ventricular dysfunction but not in isolated cavopulmonary failure. BNP was measured prospectively in SV patients at catheterization (n = 22) and when assessing for heart failure (n = 11) (7 normal controls). Of 33 SV subjects (median age 62 months), 13 had aortopulmonary connections and 20 had cavopulmonary connections. Median and mean +/- SD BNP levels by shunt type were 184 and 754 +/- 1,086 pg/ml in the patients with aortopulmonary connections, 38 and 169 +/- 251 pg/ml in the patients with cavopulmonary connections, and 10 and 11 +/- 5 pg/ml in normal controls, respectively (p = 0.004). Median systemic ventricular end-diastolic pressure (8mm Hg, R = 0.45), mean pulmonary artery pressure (14.5 mm Hg, R = 0.62), and mean right atrial pressure (6.5 mm Hg, R = 0.54) were correlated with plasma BNP. SV subjects with symptomatic heart failure from dysfunctional systemic ventricles had median and mean +/- SD BNP levels of 378 and 714 +/- 912 pg/ml (n = 18) compared with patients with isolated failed Glenn or Fontan connections (19 and 23 +/- 16 pg/ml [n = 7, p = 0.001]) and those with no heart failure (22 and 22 +/- 12 pg/ml [n = 8, p = 0.001]). Excluding the group with cavopulmonary failure, the severity of heart failure from systemic ventricular dysfunction was associated with plasma BNP. In conclusion, plasma BNP is elevated in SV patients with systemic ventricular or left-sided cardiac failure. BNP is not elevated in patients missing a pulmonary ventricle with isolated cavopulmonary failure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appavoo, Jonathan
Exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today's machines. Systems software for exascale machines must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. The FOX project explored systems software and runtime support for a new approach to the data and work distribution for fault-oblivious application execution. Our major OS work at Boston University focused on developing a new light-weight operating systems model that provides an appropriate context for both multi-core and multi-node application development. This work is discussed in section 1. Early on in the FOX project, BU developed infrastructure for prototyping dynamic HPC environments in which the sets of nodes that an application is run on can be dynamically grown or shrunk. This work was an extension of the Kittyhawk project and is discussed in section 2. Section 3 documents the publications and software repositories that we have produced. To put our work in context of the complete FOX project contribution, we include in section 4 an extended version of a paper that documents the complete work of the FOX team.
Getting Home Safe and Sound: Occupational Safety and Health Administration at 38
Silverstein, Michael
2008-01-01
The Occupational Safety and Health Act of 1970 (OSHAct) declared that every worker is entitled to safe and healthful working conditions, and that employers are responsible for work being free from all recognized hazards. Thirty-eight years after these assurances, however, it is difficult to find anyone who believes the promise of the OSHAct has been met. The persistence of preventable, life-threatening hazards at work is a failure to keep a national promise. I review the history of the Occupational Safety and Health Administration and propose measures to better ensure that those who go to work every day return home safe and sound. These measures fall into 6 areas: leverage and accountability, safety and health systems, employee rights, equal protection, framing, and infrastructure. PMID:18235060
Getting home safe and sound: occupational safety and health administration at 38.
Silverstein, Michael
2008-03-01
The Occupational Safety and Health Act of 1970 (OSHAct) declared that every worker is entitled to safe and healthful working conditions, and that employers are responsible for work being free from all recognized hazards. Thirty-eight years after these assurances, however, it is difficult to find anyone who believes the promise of the OSHAct has been met. The persistence of preventable, life-threatening hazards at work is a failure to keep a national promise. I review the history of the Occupational Safety and Health Administration and propose measures to better ensure that those who go to work every day return home safe and sound. These measures fall into 6 areas: leverage and accountability, safety and health systems, employee rights, equal protection, framing, and infrastructure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ofek, Y.
1994-05-01
This work describes a new technique, based on exchanging control signals between neighboring nodes, for constructing a stable and fault-tolerant global clock in a distributed system with an arbitrary topology. It is shown that it is possible to construct a global clock reference with a time step that is much smaller than the propagation delay over the network's links. The synchronization algorithm ensures that the global clock 'tick' has a stable periodicity, and therefore it is possible to tolerate failures of links and clocks that operate faster and/or slower than nominally specified, as well as hard failures. The approach taken in this work is to generate a global clock from the ensemble of the local transmission clocks and not to directly synchronize these high-speed clocks. The steady-state algorithm, which generates the global clock, is executed in hardware by the network interface of each node. At the network interface, it is possible to measure accurately the propagation delay between neighboring nodes with a small error or uncertainty, and thereby to achieve global synchronization that is proportional to these error measurements. It is shown that the local clock drift (or rate uncertainty) has only a secondary effect on the maximum global clock rate. The synchronization algorithm can tolerate any physical failure. 18 refs.
Failure mode analysis of silicon-based intracortical microelectrode arrays in non-human primates
Barrese, James C; Rao, Naveen; Paroo, Kaivon; Triebwasser, Corey; Vargas-Irwin, Carlos; Franquemont, Lachlan; Donoghue, John P
2016-01-01
Objective Brain–computer interfaces (BCIs) using chronically implanted intracortical microelectrode arrays (MEAs) have the potential to restore lost function to people with disabilities if they work reliably for years. Current sensors fail to provide reliably useful signals over extended periods of time for reasons that are not clear. This study reports a comprehensive retrospective analysis from a large set of implants of a single type of intracortical MEA in a single species, with a common set of measures in order to evaluate failure modes. Approach Since 1996, 78 silicon MEAs were implanted in 27 monkeys (Macaca mulatta). We used two approaches to find reasons for sensor failure. First, we classified the time course leading up to complete recording failure as acute (abrupt) or chronic (progressive). Second, we evaluated the quality of electrode recordings over time based on signal features and electrode impedance. Failure modes were divided into four categories: biological, material, mechanical, and unknown. Main results Recording duration ranged from 0 to 2104 days (5.75 years), with a mean of 387 days and a median of 182 days (n = 78). Sixty-two arrays failed completely with a mean time to failure of 332 days (median = 133 days) while nine array experiments were electively terminated for experimental reasons (mean = 486 days). Seven remained active at the close of this study (mean = 753 days). Most failures (56%) occurred within a year of implantation, with acute mechanical failures the most common class (48%), largely because of connector issues (83%). Among grossly observable biological failures (24%), a progressive meningeal reaction that separated the array from the parenchyma was most prevalent (14.5%). In the absence of acute interruptions, electrode recordings showed a slow progressive decline in spike amplitude, noise amplitude, and number of viable channels that predicts complete signal loss by about eight years. Impedance measurements showed systematic early increases, which did not appear to affect recording quality, followed by a slow decline over years. The combination of slowly falling impedance and signal quality in these arrays indicate that insulating material failure is the most significant factor. Significance This is the first long-term failure mode analysis of an emerging BCI technology in a large series of non-human primates. The classification system introduced here may be used to standardize how neuroprosthetic failure modes are evaluated. The results demonstrate the potential for these arrays to record for many years, but achieving reliable sensors will require replacing connectors with implantable wireless systems, controlling the meningeal reaction, and improving insulation materials. These results will focus future research in order to create clinical neuroprosthetic sensors, as well as valuable research tools, that are able to safely provide reliable neural signals for over a decade. PMID:24216311