Sample records for tool failure monitoring

  1. Accurate Prediction of Motor Failures by Application of Multi CBM Tools: A Case Study

    NASA Astrophysics Data System (ADS)

    Dutta, Rana; Singh, Veerendra Pratap; Dwivedi, Jai Prakash

    2018-02-01

    Motor failures are difficult to predict accurately with a single condition-monitoring tool because the electrical and mechanical systems are closely related. Electrical problems, such as phase unbalance and stator winding insulation failure, can at times lead to vibration problems, while mechanical failures such as bearing failure lead to rotor eccentricity. In this case study of a 550 kW blower motor, a rotor bar crack was detected by current signature analysis and confirmed by vibration monitoring. In later months, in a similar motor, vibration monitoring predicted a bearing failure and current signature analysis confirmed it. In both cases, the predictions were found to be accurate after the motors were dismantled. This paper discusses the accurate prediction of motor failures through the use of multiple condition-monitoring tools, illustrated by two case studies.

  2. Comprehensive in-hospital monitoring in acute heart failure: applications for clinical practice and future directions for research. A statement from the Acute Heart Failure Committee of the Heart Failure Association (HFA) of the European Society of Cardiology (ESC).

    PubMed

    Harjola, Veli-Pekka; Parissis, John; Brunner-La Rocca, Hans-Peter; Čelutkienė, Jelena; Chioncel, Ovidiu; Collins, Sean P; De Backer, Daniel; Filippatos, Gerasimos S; Gayat, Etienne; Hill, Loreena; Lainscak, Mitja; Lassus, Johan; Masip, Josep; Mebazaa, Alexandre; Miró, Òscar; Mortara, Andrea; Mueller, Christian; Mullens, Wilfried; Nieminen, Markku S; Rudiger, Alain; Ruschitzka, Frank; Seferovic, Petar M; Sionis, Alessandro; Vieillard-Baron, Antoine; Weinstein, Jean Marc; de Boer, Rudolf A; Crespo Leiro, Maria G; Piepoli, Massimo; Riley, Jillian P

    2018-04-30

    This paper provides a practical clinical application of guideline recommendations relating to the inpatient monitoring of patients with acute heart failure, through the evaluation of various clinical, biomarker, imaging, invasive and non-invasive approaches. Comprehensive inpatient monitoring is crucial to the optimal management of acute heart failure patients. The European Society of Cardiology heart failure guidelines provide recommendations for the inpatient monitoring of acute heart failure, but the level of evidence underpinning most recommendations is limited. Many tools are available for the in-hospital monitoring of patients with acute heart failure, and each plays a role at various points throughout the patient's treatment course, including the emergency department, intensive care or coronary care unit, and the general ward. Clinical judgment is the preeminent factor guiding application of inpatient monitoring tools, as the various techniques have different patient population targets. When applied appropriately, these techniques enable decision making. However, there is limited evidence demonstrating that implementation of these tools improves patient outcome. Research priorities are identified to address these gaps in evidence. Future research initiatives should aim to identify the optimal in-hospital monitoring strategies that decrease morbidity and prolong survival in patients with acute heart failure. © 2018 The Authors. European Journal of Heart Failure © 2018 European Society of Cardiology.

  3. Process tool monitoring and matching using interferometry technique

    NASA Astrophysics Data System (ADS)

    Anberg, Doug; Owen, David M.; Mileham, Jeffrey; Lee, Byoung-Ho; Bouche, Eric

    2016-03-01

    The semiconductor industry makes dramatic device technology changes over short time periods. As the industry advances toward the 10 nm device node, precise management and control of processing tools has become a significant manufacturing challenge. Some processes require multiple tool sets, and some tools have multiple chambers for mass production, so tool and chamber matching has become a critical consideration for meeting today's manufacturing requirements. Additionally, process tool and chamber conditions must be monitored to ensure uniform process performance across the tool and chamber fleet. Many parameters are available for managing and monitoring tools and chambers. Particle defect monitoring is a well-established example, in which defect inspection tools directly detect particles on the wafer surface. However, leading-edge processes are driving the need to also monitor invisible defects (e.g., stress and contamination), because some device failures cannot be directly correlated with traditional visualized defect maps or other known sources. Some failure maps show the same signatures as stress or contamination maps, which implies a correlation with device performance or yield. In this paper we present process tool monitoring and matching using an interferometry technique. Among the many interferometry techniques used for process monitoring applications, we use a Coherent Gradient Sensing (CGS) interferometer, which is self-referencing and enables high-throughput measurements. Using this technique, we can quickly measure the topography of an entire wafer surface and obtain stress and displacement data from the topography measurement. For improved tool and chamber matching and reduced device failure, wafer stress measurements on unpatterned or patterned wafers can be implemented as a regular tool or chamber monitoring test, serving as a useful criterion for improved process stability.

  4. Failure analysis in the identification of synergies between cleaning monitoring methods.

    PubMed

    Whiteley, Greg S; Derry, Chris; Glasbey, Trevor

    2015-02-01

    The four monitoring methods used to manage the quality assurance of cleaning outcomes within health care settings are visual inspection, microbial recovery, fluorescent marker assessment, and rapid ATP bioluminometry. These methods each generate different types of information, presenting a challenge to the successful integration of monitoring results. A systematic approach to safety and quality control can be used to interrogate the known qualities of cleaning monitoring methods and provide a prospective management tool for infection control professionals. We investigated the use of failure mode and effects analysis (FMEA) for measuring the failure risk arising through each cleaning monitoring method. FMEA uses existing data in a structured risk assessment tool that identifies weaknesses in products or processes. Our FMEA approach used the literature and a small experienced team to construct a series of analyses investigating the cleaning monitoring methods in a way that minimized identified failure risks. FMEA applied to each of the cleaning monitoring methods revealed failure modes for each. The combined use of cleaning monitoring methods in sequence is preferable to their use in isolation. When these four methods are used in combination in a logical sequence, the failure modes noted for any one can be complemented by the strengths of the alternatives, thereby circumventing the risk of failure of any individual method. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
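    The FMEA scoring behind a risk analysis like this is conventionally summarized as a risk priority number (RPN): each failure mode receives 1-10 ratings for severity, occurrence, and detection difficulty, and RPN = S × O × D ranks the modes. A minimal sketch, with entirely hypothetical ratings not taken from the study:

```python
# Illustrative FMEA scoring sketch; ratings below are hypothetical examples,
# not values reported in the paper.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk priority number for one failure mode (conventional 1-10 scales)."""
    for r in (severity, occurrence, detection):
        if not 1 <= r <= 10:
            raise ValueError("FMEA ratings must be in 1..10")
    return severity * occurrence * detection

# Hypothetical failure modes for two cleaning-monitoring methods:
failure_modes = {
    "visual inspection misses invisible soil": rpn(7, 8, 6),
    "ATP bioluminometry false negative":       rpn(6, 4, 3),
}

# Rank failure modes so the riskiest are addressed first.
ranked = sorted(failure_modes.items(), key=lambda kv: kv[1], reverse=True)
```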

  5. Tapered Roller Bearing Damage Detection Using Decision Fusion Analysis

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Kreider, Gary; Fichter, Thomas

    2006-01-01

    A diagnostic tool was developed for detecting fatigue damage of tapered roller bearings. Tapered roller bearings are used in helicopter transmissions and have potential for use in high bypass advanced gas turbine aircraft engines. A diagnostic tool was developed and evaluated experimentally by collecting oil debris data from failure progression tests conducted using health monitoring hardware. Failure progression tests were performed with tapered roller bearings under simulated engine load conditions. Tests were performed on one healthy bearing and three pre-damaged bearings. During each test, data from an on-line, in-line, inductance type oil debris sensor and three accelerometers were monitored and recorded for the occurrence of bearing failure. The bearing was removed and inspected periodically for damage progression throughout testing. Using data fusion techniques, two different monitoring technologies, oil debris analysis and vibration, were integrated into a health monitoring system for detecting bearing surface fatigue pitting damage. The data fusion diagnostic tool was evaluated during bearing failure progression tests under simulated engine load conditions. This integrated system showed improved detection of fatigue damage and health assessment of the tapered roller bearings as compared to using individual health monitoring technologies.
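    The decision-fusion step described above can be illustrated in miniature: combine per-sensor damage confidences from two monitoring technologies into one health assessment. The weighted-average scheme, weights, and threshold below are assumptions for illustration, not the paper's actual fusion model:

```python
# Hypothetical decision-fusion sketch: fuse two per-sensor damage confidences
# (oil debris, vibration), each in [0, 1], into one assessment.

def fuse(debris_conf: float, vibration_conf: float,
         w_debris: float = 0.6, w_vib: float = 0.4) -> float:
    """Weighted-average fusion of the two sensor confidences."""
    return w_debris * debris_conf + w_vib * vibration_conf

def damaged(debris_conf: float, vibration_conf: float,
            threshold: float = 0.5) -> bool:
    """Declare damage when the fused confidence exceeds the threshold."""
    return fuse(debris_conf, vibration_conf) > threshold
```

A fused indicator like this can stay quiet when only one sensor reads marginally high, which is one reason combined monitoring outperformed the individual technologies in the study.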

  6. Nondestructive evaluation tools and experimental studies for monitoring the health of space propulsion systems

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    1991-01-01

    An overview is given of background information on space propulsion systems at both the programmatic and technical levels. Feasibility studies indicate that nondestructive evaluation tools such as ultrasonic, eddy current, and x-ray techniques may be successfully used to monitor the life-limiting failure mechanisms of space propulsion systems. Encouraging results were obtained for monitoring the life-limiting failure mechanisms of three space propulsion systems: the degradation of tungsten arcjet and magnetoplasmadynamic electrodes; the presence and thickness of spallable, electrically conducting molybdenum films in ion thrusters; and the degradation of the catalyst in hydrazine thrusters.

  7. Remote monitoring of heart failure: benefits for therapeutic decision making.

    PubMed

    Martirosyan, Mihran; Caliskan, Kadir; Theuns, Dominic A M J; Szili-Torok, Tamas

    2017-07-01

    Chronic heart failure is a cardiovascular disorder with high prevalence and incidence worldwide. The course of heart failure is characterized by periods of stability and instability. Decompensation of heart failure is associated with frequent and prolonged hospitalizations and it worsens the prognosis for the disease and increases cardiovascular mortality among affected patients. It is therefore important to monitor these patients carefully to reveal changes in their condition. Remote monitoring has been designed to facilitate an early detection of adverse events and to minimize regular follow-up visits for heart failure patients. Several new devices have been developed and introduced to the daily practice of cardiology departments worldwide. Areas covered: Currently, special tools and techniques are available to perform remote monitoring. Concurrently there are a number of modern cardiac implantable electronic devices that incorporate a remote monitoring function. All the techniques that have a remote monitoring function are discussed in this paper in detail. All the major studies on this subject have been selected for review of the recent data on remote monitoring of HF patients and demonstrate the role of remote monitoring in the therapeutic decision making for heart failure patients. Expert commentary: Remote monitoring represents a novel intensified follow-up strategy of heart failure management. Overall, theoretically, remote monitoring may play a crucial role in the early detection of heart failure progression and may improve the outcome of patients.

  8. Virtual-Instrument-Based Online Monitoring System for Hands-on Laboratory Experiment of Partial Discharges

    ERIC Educational Resources Information Center

    Karmakar, Subrata

    2017-01-01

    Online monitoring of high-voltage (HV) equipment is a vital tool for early detection of insulation failure. Most insulation failures are caused by partial discharges (PDs) inside the HV equipment. Because of the very high cost of establishing HV equipment facility and the limitations of electromagnetic interference-screened laboratories, only a…

  9. An analysis of potential stream fish and fish habitat monitoring procedures for the Inland Northwest: Annual Report 1999

    Treesearch

    James T. Peterson; Sherry P. Wollrab

    1999-01-01

    Natural resource managers in the Inland Northwest need tools for assessing the success or failure of conservation policies and the impacts of management actions on fish and fish habitats. Effectiveness monitoring is one such potential tool, but there are currently no established monitoring protocols. Since 1991, U.S. Forest Service biologists have used the standardized...

  10. The cost-effectiveness of monitoring strategies for antiretroviral therapy of HIV infected patients in resource-limited settings: software tool.

    PubMed

    Estill, Janne; Salazar-Vizcaya, Luisa; Blaser, Nello; Egger, Matthias; Keiser, Olivia

    2015-01-01

    The cost-effectiveness of routine viral load (VL) monitoring of HIV-infected patients on antiretroviral therapy (ART) depends on various factors that differ between settings and across time. Low-cost point-of-care (POC) tests for VL are in development and may make routine VL monitoring affordable in resource-limited settings. We developed a software tool to study the cost-effectiveness of switching to second-line ART with different monitoring strategies, and focused on POC-VL monitoring. We used a mathematical model to simulate cohorts of patients from start of ART until death. We modeled 13 strategies (no 2nd-line, clinical, CD4 (with or without targeted VL), POC-VL, and laboratory-based VL monitoring, with different frequencies). We included a scenario with identical failure rates across strategies, and one in which routine VL monitoring reduces the risk of failure. We compared lifetime costs and averted disability-adjusted life-years (DALYs). We calculated incremental cost-effectiveness ratios (ICER). We developed an Excel tool to update the results of the model for varying unit costs and cohort characteristics, and conducted several sensitivity analyses varying the input costs. Introducing 2nd-line ART had an ICER of US$1651-1766/DALY averted. Compared with clinical monitoring, the ICER of CD4 monitoring was US$1896-US$5488/DALY averted and VL monitoring US$951-US$5813/DALY averted. We found no difference between POC- and laboratory-based VL monitoring, except for the highest measurement frequency (every 6 months), where laboratory-based testing was more effective. Targeted VL monitoring was on the cost-effectiveness frontier only if the difference between 1st- and 2nd-line costs remained large, and if we assumed that routine VL monitoring does not prevent failure. Compared with the less expensive strategies, the cost-effectiveness of routine VL monitoring essentially depends on the cost of 2nd-line ART. 
Our Excel tool is useful for determining optimal monitoring strategies for specific settings, with specific sex- and age-distributions and unit costs.
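    The ICER arithmetic reported above can be made concrete: an incremental cost-effectiveness ratio divides the extra cost of a strategy by the extra DALYs it averts relative to a cheaper comparator. The figures below are hypothetical, not taken from the study:

```python
# Minimal ICER sketch:
#   ICER = (cost_B - cost_A) / (DALYs_averted_B - DALYs_averted_A)
# where A is the cheaper comparator strategy. All numbers are illustrative.

def icer(cost_a: float, dalys_averted_a: float,
         cost_b: float, dalys_averted_b: float) -> float:
    """US$ per additional DALY averted by strategy B over strategy A."""
    delta_effect = dalys_averted_b - dalys_averted_a
    if delta_effect <= 0:
        raise ValueError("strategy B must avert more DALYs than A")
    return (cost_b - cost_a) / delta_effect

# Hypothetical per-patient lifetime figures: clinical monitoring (A)
# vs routine VL monitoring (B).
example = icer(cost_a=4000.0, dalys_averted_a=2.0,
               cost_b=7000.0, dalys_averted_b=4.0)
```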

  11. The Cost-Effectiveness of Monitoring Strategies for Antiretroviral Therapy of HIV Infected Patients in Resource-Limited Settings: Software Tool

    PubMed Central

    Estill, Janne; Salazar-Vizcaya, Luisa; Blaser, Nello; Egger, Matthias; Keiser, Olivia

    2015-01-01

    Background The cost-effectiveness of routine viral load (VL) monitoring of HIV-infected patients on antiretroviral therapy (ART) depends on various factors that differ between settings and across time. Low-cost point-of-care (POC) tests for VL are in development and may make routine VL monitoring affordable in resource-limited settings. We developed a software tool to study the cost-effectiveness of switching to second-line ART with different monitoring strategies, and focused on POC-VL monitoring. Methods We used a mathematical model to simulate cohorts of patients from start of ART until death. We modeled 13 strategies (no 2nd-line, clinical, CD4 (with or without targeted VL), POC-VL, and laboratory-based VL monitoring, with different frequencies). We included a scenario with identical failure rates across strategies, and one in which routine VL monitoring reduces the risk of failure. We compared lifetime costs and averted disability-adjusted life-years (DALYs). We calculated incremental cost-effectiveness ratios (ICER). We developed an Excel tool to update the results of the model for varying unit costs and cohort characteristics, and conducted several sensitivity analyses varying the input costs. Results Introducing 2nd-line ART had an ICER of US$1651-1766/DALY averted. Compared with clinical monitoring, the ICER of CD4 monitoring was US$1896-US$5488/DALY averted and VL monitoring US$951-US$5813/DALY averted. We found no difference between POC- and laboratory-based VL monitoring, except for the highest measurement frequency (every 6 months), where laboratory-based testing was more effective. Targeted VL monitoring was on the cost-effectiveness frontier only if the difference between 1st- and 2nd-line costs remained large, and if we assumed that routine VL monitoring does not prevent failure. Conclusion Compared with the less expensive strategies, the cost-effectiveness of routine VL monitoring essentially depends on the cost of 2nd-line ART. 
Our Excel tool is useful for determining optimal monitoring strategies for specific settings, with specific sex- and age-distributions and unit costs. PMID:25793531

  12. Value of Telemonitoring and Telemedicine in Heart Failure Management

    PubMed Central

    Alderighi, Camilla; Rasoini, Raffaele; Mazzanti, Marco; Casolo, Giancarlo

    2017-01-01

    The use of telemonitoring and telemedicine is a relatively new but quickly developing area in medicine. As new digital tools and applications are being created and used to manage medical conditions such as heart failure, many implications require close consideration and further study, including the effectiveness and safety of these telemonitoring tools in diagnosing, treating and managing heart failure compared to traditional face-to-face doctor–patient interaction. When compared to multidisciplinary intervention programs which are frequently hindered by economic, geographic and bureaucratic barriers, non-invasive remote monitoring could be a solution to support and promote the care of patients over time. Therefore it is crucial to identify the most relevant biological parameters to monitor, which heart failure sub-populations may gain real benefits from telehealth interventions and in which specific healthcare subsets these interventions should be implemented in order to maximise value. PMID:29387464

  13. Value of Telemonitoring and Telemedicine in Heart Failure Management.

    PubMed

    Gensini, Gian Franco; Alderighi, Camilla; Rasoini, Raffaele; Mazzanti, Marco; Casolo, Giancarlo

    2017-11-01

    The use of telemonitoring and telemedicine is a relatively new but quickly developing area in medicine. As new digital tools and applications are being created and used to manage medical conditions such as heart failure, many implications require close consideration and further study, including the effectiveness and safety of these telemonitoring tools in diagnosing, treating and managing heart failure compared to traditional face-to-face doctor-patient interaction. When compared to multidisciplinary intervention programs which are frequently hindered by economic, geographic and bureaucratic barriers, non-invasive remote monitoring could be a solution to support and promote the care of patients over time. Therefore it is crucial to identify the most relevant biological parameters to monitor, which heart failure sub-populations may gain real benefits from telehealth interventions and in which specific healthcare subsets these interventions should be implemented in order to maximise value.

  14. A novel methodology for in-process monitoring of flow forming

    NASA Astrophysics Data System (ADS)

    Appleby, Andrew; Conway, Alastair; Ion, William

    2017-10-01

    Flow forming (FF) is an incremental cold working process with near-net-shape forming capability. Failures by fracture due to high deformation can be unexpected and sometimes catastrophic, causing tool damage. If process failures can be identified in real time, an automatic cut-out could prevent costly tool damage. Sound and vibration monitoring is well established and commercially viable in the machining sector to detect current and incipient process failures, but not for FF. A broad-frequency microphone was used to record the sound signature of the manufacturing cycle for a series of FF parts. Parts were flow formed using single and multiple passes, and flaws were introduced into some of the parts to simulate the presence of spontaneously initiated cracks. The results show that this methodology is capable of identifying both introduced defects and spontaneous failures during flow forming. Further investigation is needed to categorise and identify different modes of failure and identify further potential applications in rotary forming.
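    A minimal sketch of the kind of sound-signature monitoring described, assuming a sampled microphone signal: compare band-limited spectral energy against a healthy baseline and cut out when it spikes. The band limits, threshold factor, and synthetic signals are illustrative assumptions, not the paper's method:

```python
# Hypothetical acoustic-monitoring sketch: flag a frame whose high-frequency
# band energy far exceeds a healthy baseline. Band and threshold are made up.
import numpy as np

def band_energy(signal: np.ndarray, fs: float, f_lo: float, f_hi: float) -> float:
    """Energy of the signal within [f_lo, f_hi] Hz via an FFT power spectrum."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return float(spectrum[mask].sum())

def crack_alarm(signal, fs, baseline_energy, factor=5.0):
    """True when 10-20 kHz energy exceeds `factor` times the healthy baseline."""
    return band_energy(signal, fs, 10_000.0, 20_000.0) > factor * baseline_energy

# Synthetic example: a healthy noise frame vs one with an injected 15 kHz burst.
fs = 48_000.0
t = np.arange(4096) / fs
rng = np.random.default_rng(0)
healthy = 0.01 * rng.standard_normal(t.size)
cracked = healthy + 0.5 * np.sin(2 * np.pi * 15_000.0 * t)
baseline = band_energy(healthy, fs, 10_000.0, 20_000.0)
```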

  15. Nurses' strategies to address self-care aspects related to medication adherence and symptom recognition in heart failure patients: an in-depth look.

    PubMed

    Jaarsma, Tiny; Nikolova-Simons, Mariana; van der Wal, Martje H L

    2012-01-01

    Despite an increasing body of knowledge on self-care in heart failure patients, the need for effective interventions remains. We sought to deepen the understanding of interventions that heart failure nurses use in clinical practice to improve patient adherence to medication and symptom monitoring. A qualitative study with a directed content analysis was performed, using data from a selected sample of Dutch-speaking heart failure nurses who completed booklets with two vignettes involving medication adherence and symptom recognition. Nurses regularly assess and reassess patients before they decide on an intervention. They evaluate basic/factual information and barriers in a patient's behavior, and try to find room for improvement in a patient's behavior. Interventions that heart failure nurses use to improve adherence to medication and symptom monitoring were grouped into the themes of increasing knowledge, increasing motivation, and providing patients with practical tools. Nurses also described using technology-based tools, increased social support, alternative communication, partnership approaches, and coordination of care to improve adherence to medications and symptom monitoring. Despite a strong focus on educational strategies, nurses also reported other strategies to increase patient adherence. Nurses use several strategies to improve patient adherence that are not incorporated into guidelines. These interventions need to be evaluated for further applications in improving heart failure management. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Acoustic emission and nondestructive evaluation of biomaterials and tissues.

    PubMed

    Kohn, D H

    1995-01-01

    Acoustic emission (AE) is an acoustic wave generated by the release of energy from localized sources in a material subjected to an externally applied stimulus. This technique may be used nondestructively to analyze tissues, materials, and biomaterial/tissue interfaces. Applications of AE include use as an early warning tool for detecting tissue and material defects and incipient failure, monitoring damage progression, predicting failure, characterizing failure mechanisms, and serving as a tool to aid in understanding material properties and structure-function relations. All these applications may be performed in real time. This review discusses general principles of AE monitoring and the use of the technique in 3 areas of importance to biomedical engineering: (1) analysis of biomaterials, (2) analysis of tissues, and (3) analysis of tissue/biomaterial interfaces. Focus in these areas is on detection sensitivity, methods of signal analysis in both the time and frequency domains, the relationship between acoustic signals and microstructural phenomena, and the uses of the technique in establishing a relationship between signals and failure mechanisms.

  17. Investigation of Tapered Roller Bearing Damage Detection Using Oil Debris Analysis

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Krieder, Gary; Fichter, Thomas

    2006-01-01

    A diagnostic tool was developed for detecting fatigue damage to tapered roller bearings. Tapered roller bearings are used in helicopter transmissions and have potential for use in high bypass advanced gas turbine aircraft engines. This diagnostic tool was developed and evaluated experimentally by collecting oil debris data from failure progression tests performed by The Timken Company in their Tapered Roller Bearing Health Monitoring Test Rig. Failure progression tests were performed under simulated engine load conditions. Tests were performed on one healthy bearing and three predamaged bearings. During each test, data from an on-line, in-line, inductance type oil debris sensor was monitored and recorded for the occurrence of debris generated during failure of the bearing. The bearing was removed periodically for inspection throughout the failure progression tests. Results indicate the accumulated oil debris mass is a good predictor of damage on tapered roller bearings. The use of a fuzzy logic model to enable an easily interpreted diagnostic metric was proposed and demonstrated.
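    The fuzzy-logic metric mentioned above can be sketched as fuzzy membership functions over accumulated debris mass. The categories and breakpoints below are hypothetical; the abstract reports only that accumulated mass predicts damage well and that a fuzzy model yields an interpretable metric:

```python
# Hypothetical fuzzy damage metric driven by accumulated oil debris mass.
# Membership breakpoints (milligrams) are invented for illustration.

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular fuzzy membership: 0 at a and c, peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def damage_level(debris_mg: float) -> str:
    """Map accumulated debris mass to the most-activated damage category."""
    memberships = {
        "healthy":   tri(debris_mg, -1.0, 0.0, 20.0),
        "incipient": tri(debris_mg, 10.0, 40.0, 80.0),
        # Open-ended ramp: fully "failing" at 100 mg and beyond.
        "failing":   max(0.0, min(1.0, (debris_mg - 60.0) / 40.0)),
    }
    return max(memberships, key=memberships.get)
```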

  18. Fault management for the Space Station Freedom control center

    NASA Technical Reports Server (NTRS)

    Clark, Colin; Jowers, Steven; Mcnenny, Robert; Culbert, Chris; Kirby, Sarah; Lauritsen, Janet

    1992-01-01

    This paper describes model based reasoning fault isolation in complex systems using automated digraph analysis. It discusses the use of the digraph representation as the paradigm for modeling physical systems and a method for executing these failure models to provide real-time failure analysis. It also discusses the generality, ease of development and maintenance, complexity management, and susceptibility to verification and validation of digraph failure models. It specifically describes how a NASA-developed digraph evaluation tool and an automated process working with that tool can identify failures in a monitored system when supplied with one or more fault indications. This approach is well suited to commercial applications of real-time failure analysis in complex systems because it is both powerful and cost effective.
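    The digraph approach can be illustrated with a toy failure model: edges run from a failure source to the effects it can cause, and fault isolation intersects the upstream ancestor sets of all active fault indications. The model below is invented for illustration, not the NASA tool's actual representation:

```python
# Toy digraph fault-isolation sketch with a hypothetical failure model.
from collections import defaultdict

# edges[x] = set of observable effects that failure x can cause
edges = {
    "pump_A":   {"low_pressure"},
    "valve_3":  {"low_pressure", "high_temp"},
    "sensor_7": {"high_temp"},
}

# Reverse adjacency: effect -> possible upstream causes.
causes = defaultdict(set)
for src, effects in edges.items():
    for e in effects:
        causes[e].add(src)

def ancestors(node: str) -> set:
    """All nodes from which `node` is reachable in the failure digraph."""
    seen, stack = set(), [node]
    while stack:
        for parent in causes[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def isolate(indications: list) -> set:
    """Failure sources consistent with *every* active fault indication."""
    sets = [ancestors(i) for i in indications]
    return set.intersection(*sets) if sets else set()
```

With these hypothetical edges, a single low-pressure indication leaves two candidate sources, while adding a high-temperature indication narrows the diagnosis to the one failure that explains both.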

  19. Making intelligent systems team players. A guide to developing intelligent monitoring systems

    NASA Technical Reports Server (NTRS)

    Land, Sherry A.; Malin, Jane T.; Thronesberry, Carroll; Schreckenghost, Debra L.

    1995-01-01

    This reference guide for developers of intelligent monitoring systems is based on lessons learned by developers of the DEcision Support SYstem (DESSY), an expert system that monitors Space Shuttle telemetry data in real time. DESSY makes inferences about commands, state transitions, and simple failures. It performs failure detection rather than in-depth failure diagnostics. A listing of rules from DESSY and cue cards from DESSY subsystems are included to give the development community a better understanding of the selected model system. The G-2 programming tool used in developing DESSY provides an object-oriented, rule-based environment, but many of the principles in use here can be applied to any type of monitoring intelligent system. The step-by-step instructions and examples given for each stage of development are in G-2, but can be used with other development tools. This guide first defines the authors' concept of real-time monitoring systems, then tells prospective developers how to determine system requirements, how to build the system through a combined design/development process, and how to solve problems involved in working with real-time data. It explains the relationships among operational prototyping, software evolution, and the user interface. It also explains methods of testing, verification, and validation. It includes suggestions for preparing reference documentation and training users.

  20. General Purpose Data-Driven Online System Health Monitoring with Applications to Space Operations

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; Spirkovska, Lilly; Schwabacher, Mark

    2010-01-01

    Modern space transportation and ground support system designs are becoming increasingly sophisticated and complex. Determining the health state of these systems using traditional parameter limit checking, or model-based or rule-based methods is becoming more difficult as the number of sensors and component interactions grows. Data-driven monitoring techniques have been developed to address these issues by analyzing system operations data to automatically characterize normal system behavior. System health can be monitored by comparing real-time operating data with these nominal characterizations, providing detection of anomalous data signatures indicative of system faults, failures, or precursors of significant failures. The Inductive Monitoring System (IMS) is a general purpose, data-driven system health monitoring software tool that has been successfully applied to several aerospace applications and is under evaluation for anomaly detection in vehicle and ground equipment for next generation launch systems. After an introduction to IMS application development, we discuss these NASA online monitoring applications, including the integration of IMS with complementary model-based and rule-based methods. Although the examples presented in this paper are from space operations applications, IMS is a general-purpose health-monitoring tool that is also applicable to power generation and transmission system monitoring.
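    The data-driven idea behind IMS can be sketched in a simplified form: characterize nominal behavior from archived operations data, then flag live vectors that lie far from it. IMS itself clusters nominal data; the nearest-neighbor variant and the numbers below are assumptions for illustration only:

```python
# Simplified data-driven monitoring sketch (not the actual IMS algorithm):
# score a live sensor vector by its distance to the nearest nominal sample.
import numpy as np

class NominalMonitor:
    def __init__(self, nominal: np.ndarray, threshold: float):
        self.nominal = np.asarray(nominal, dtype=float)  # rows = nominal samples
        self.threshold = threshold

    def score(self, x) -> float:
        """Euclidean distance from x to the nearest nominal sample."""
        d = np.linalg.norm(self.nominal - np.asarray(x, dtype=float), axis=1)
        return float(d.min())

    def is_anomalous(self, x) -> bool:
        return self.score(x) > self.threshold

# Hypothetical two-sensor system (normalized pressure, temperature).
nominal = np.array([[0.50, 0.40], [0.52, 0.41], [0.49, 0.43]])
mon = NominalMonitor(nominal, threshold=0.1)
```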

  1. Fault Injection Techniques and Tools

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen; Tsai, Timothy K.; Iyer, Ravishankar K.

    1997-01-01

    Dependability evaluation involves the study of failures and errors. The destructive nature of a crash and long error latency make it difficult to identify the causes of failures in the operational environment. It is particularly hard to recreate a failure scenario for a large, complex system. To identify and understand potential failures, we use an experiment-based approach for studying the dependability of a system. Such an approach is applied not only during the conception and design phases, but also during the prototype and operational phases. To take an experiment-based approach, we must first understand a system's architecture, structure, and behavior. Specifically, we need to know its tolerance for faults and failures, including its built-in detection and recovery mechanisms, and we need specific instruments and tools to inject faults, create failures or errors, and monitor their effects.

  2. Monitoring the response to pharmacologic therapy in patients with stable chronic heart failure: is BNP or NT-proBNP a useful assessment tool?

    PubMed

    Balion, Cynthia M; McKelvie, Robert S; Reichert, Sonja; Santaguida, Pasqualina; Booker, Lynda; Worster, Andrew; Raina, Parminder; McQueen, Matthew J; Hill, Stephen

    2008-03-01

    B-type natriuretic peptides are biomarkers of heart failure (HF) that can decrease following treatment. We sought to determine whether B-type natriuretic peptide (BNP) or N-terminal proBNP (NT-proBNP) concentration changes occurred in parallel to changes in other measures of heart failure following treatment. We conducted a systematic review of the literature for studies that assessed B-type natriuretic peptide measurements in treatment monitoring of patients with stable chronic heart failure. Selected studies had to include at least three consecutive measurements of BNP or NT-proBNP. Of 4338 citations screened, only 12 met all of the selection criteria. The selected studies included populations with a wide range of heart failure severity and therapy. BNP and NT-proBNP decreased following treatment in nine studies and was associated with improvement in clinical measures of HF. There was limited data to support using BNP or NT-proBNP to monitor therapy in patients with HF.

  3. Integrative Assessment of Congestion in Heart Failure Throughout the Patient Journey.

    PubMed

    Girerd, Nicolas; Seronde, Marie-France; Coiro, Stefano; Chouihed, Tahar; Bilbault, Pascal; Braun, François; Kenizou, David; Maillier, Bruno; Nazeyrollas, Pierre; Roul, Gérard; Fillieux, Ludivine; Abraham, William T; Januzzi, James; Sebbag, Laurent; Zannad, Faiez; Mebazaa, Alexandre; Rossignol, Patrick

    2018-04-01

    Congestion is one of the main predictors of poor outcome in patients with heart failure. However, congestion is difficult to assess, especially when symptoms are mild. Although numerous clinical scores, imaging tools, and biological tests are available to assist physicians in ascertaining and quantifying congestion, not all are appropriate for use in all stages of patient management. In recent years, multidisciplinary management in the community has become increasingly important to prevent heart failure hospitalizations. Electronic alert systems and communication platforms are emerging that could be used to facilitate patient home monitoring that identifies congestion from heart failure decompensation at an earlier stage. This paper describes the role of congestion detection methods at key stages of patient care: pre-admission, admission to the emergency department, in-hospital management, and lastly, discharge and continued monitoring in the community. The multidisciplinary working group, which consisted of cardiologists, emergency physicians, and a nephrologist with both clinical and research backgrounds, reviewed the current literature regarding the various scores, tools, and tests to detect and quantify congestion. The role of each tool at these key stages is discussed, along with the advantages of telemedicine as a means of providing truly integrated patient care. Copyright © 2018. Published by Elsevier Inc.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maxwell, Don E; Ezell, Matthew A; Becklehimer, Jeff

    While sites generally have systems in place to monitor the health of Cray computers themselves, often the cooling systems are ignored until a computer failure requires investigation into the source of the failure. The Liebert XDP units used to cool the Cray XE/XK models, as well as the Cray proprietary cooling system used for the Cray XC30 models, provide data useful for health monitoring. Unfortunately, this valuable information is often available only to custom solutions not accessible by a center-wide monitoring system, or is simply ignored entirely. In this paper, methods and tools used to harvest the available monitoring data are discussed, and the implementation needed to integrate the data into a center-wide monitoring system at the Oak Ridge National Laboratory is provided.
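
    As one hedged example of feeding such cooling data into a center-wide system, harvested readings could be reformatted into Graphite's plaintext protocol (`path value timestamp` lines, one per metric, sent to a carbon port). The sensor names below are illustrative stand-ins, not the actual XDP register map or the tooling used at ORNL:

```python
import time

def to_graphite_lines(readings, prefix="cooling.xdp", now=None):
    """Format sensor readings as Graphite plaintext-protocol lines
    ('path value timestamp'), ready to send to a carbon listener."""
    ts = int(now if now is not None else time.time())
    return ["%s.%s %s %d" % (prefix, name, value, ts)
            for name, value in sorted(readings.items())]

# Hypothetical readings from one cooling unit.
readings = {"unit3.coolant_temp_c": 18.5, "unit3.pump_rpm": 3400}
lines = to_graphite_lines(readings, now=1700000000)
```

    Each line can then be shipped over TCP to the monitoring system's carbon ingest port, making the cooling metrics visible alongside compute-node health data.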

  5. Structural Health Monitoring with Fiber Bragg Grating and Piezo Arrays

    NASA Technical Reports Server (NTRS)

    Black, Richard J.; Faridian, Ferey; Moslehi, Behzad; Sotoudeh, Vahid

    2012-01-01

    Structural health monitoring (SHM) is one of the most important tools available for the maintenance, safety, and integrity of aerospace structural systems. Lightweight, electromagnetic-interference-immune, fiber-optic sensor-based SHM will play an increasing role in more secure air transportation systems. Manufacturers and maintenance personnel have pressing needs for significantly improving safety and reliability while providing for lower inspection and maintenance costs. Undetected or untreated damage may grow and lead to catastrophic structural failure. Damage can originate from the strain/stress history of the material, imperfections of domain boundaries in metals, delamination in multi-layer materials, or the impact of machine tools in the manufacturing process. Damage can likewise develop during service life from wear and tear, or under extraordinary circumstances such as with unusual forces, temperature cycling, or impact of flying objects. Monitoring and early detection are key to preventing a catastrophic failure of structures, especially when these are expected to perform near their limit conditions.

  6. Efficacy of intrathoracic impedance and remote monitoring in patients with an implantable device after the 2011 great East Japan earthquake.

    PubMed

    Suzuki, Hitoshi; Yamada, Shinya; Kamiyama, Yoshiyuki; Takeishi, Yasuchika

    2014-01-01

    Several studies have revealed that stress after catastrophic disasters can trigger cardiovascular events; however, little is known about its association with the occurrence of heart failure in past earthquakes. The objective of the present study was to determine whether the Great East Japan Earthquake on March 11, 2011, increased the incidence of worsening heart failure in chronic heart failure (CHF) patients with implantable devices. Furthermore, we examined whether intrathoracic impedance using remote monitoring was effective for the management of CHF. We enrolled 44 CHF patients (32 males, mean age 63 ± 12 years) with implantable devices that can check intrathoracic impedance using remote monitoring. We defined worsening heart failure as accumulated impedance under the reference impedance exceeding 60 ohm-days (fluid index threshold), and compared the incidence of worsening heart failure and arrhythmic events 30 days before and after March 11. Within the 30 days after March 11, 10 patients exceeded the threshold, compared with only 2 patients in the preceding 30 days (P < 0.05). Although 9 of the 10 patients with threshold crossings who used remote monitoring were not hospitalized, one patient without the system was hospitalized due to acute decompensated heart failure. In contrast, arrhythmic events did not change before and after March 11. Our results suggest that earthquake-induced stress causes an increased risk of worsening heart failure without changes in arrhythmia. Furthermore, intrathoracic impedance using remote monitoring may be a useful tool for the management of CHF in catastrophic disasters.
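
    The fluid-index rule used in this study, accumulating the shortfall of measured impedance below a reference in ohm-days and flagging when the accumulation exceeds 60 ohm-days, can be sketched as follows. The slowly adapting reference update and the reset rule are simplified assumptions for illustration, not the device manufacturer's proprietary algorithm:

```python
def fluid_index_alerts(daily_impedance, threshold=60.0, alpha=0.05):
    """Accumulate (reference - measured) impedance in ohm-days; alert past threshold."""
    reference = daily_impedance[0]
    fluid_index = 0.0
    alerts = []
    for day, z in enumerate(daily_impedance):
        if z < reference:
            fluid_index += reference - z      # ohm-days below the reference
        else:
            fluid_index = 0.0                 # impedance recovered; reset index
        reference += alpha * (z - reference)  # reference drifts toward measurements
        if fluid_index > threshold:
            alerts.append(day)
    return alerts

# Stable impedance, then a sustained drop (thoracic fluid accumulation).
series = [75.0] * 20 + [70.0, 66.0, 63.0, 60.0, 58.0, 57.0, 56.0, 55.0, 55.0, 55.0]
alerts = fluid_index_alerts(series)  # days on which the 60 ohm-day threshold is exceeded
```

    A sustained impedance drop accumulates quickly, while brief dips that recover reset the index, which is what makes the ohm-day formulation robust to day-to-day noise.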

  7. Network monitoring in the Tier2 site in Prague

    NASA Astrophysics Data System (ADS)

    Eliáš, Marek; Fiala, Lukáš; Horký, Jiří; Chudoba, Jiří; Kouba, Tomáš; Kundrát, Jan; Švec, Jan

    2011-12-01

    Network monitoring provides different views of network traffic. Its output enables computing centre staff to make qualified decisions about changes in the organization of the computing centre network and to spot possible problems. In this paper we present the network monitoring framework used at the Tier-2 site in Prague at the Institute of Physics (FZU). The framework consists of standard software and custom tools. We discuss our system for hardware failure detection using syslog logging and Nagios active checks, bandwidth monitoring of physical links, and analysis of NetFlow exports from Cisco routers. We present a tool for automatic detection of the network layout based on SNMP. This tool also records topology changes in an SVN repository. An adapted weathermap4rrd is used to visualize the recorded data, providing a fast overview of the current bandwidth usage of links in the network.

  8. Detection of Impaired Cerebral Autoregulation Using Selected Correlation Analysis: A Validation Study

    PubMed Central

    Brawanski, Alexander

    2017-01-01

    Multimodal brain monitoring has been utilized to optimize treatment of patients with critical neurological diseases. However, the amount of data requires an integrative tool set to unmask pathological events in a timely fashion. Recently we have introduced a mathematical model allowing the simulation of pathophysiological conditions such as reduced intracranial compliance and impaired autoregulation. Utilizing a mathematical tool set called selected correlation analysis (sca), correlation patterns, which indicate impaired autoregulation, can be detected in patient data sets (scp). In this study we compared the results of the sca with the pressure reactivity index (PRx), an established marker for impaired autoregulation. Mean PRx values were significantly higher in time segments identified as scp compared to segments showing no selected correlations (nsc). The sca based approach predicted cerebral autoregulation failure with a sensitivity of 78.8% and a specificity of 62.6%. Autoregulation failure, as detected by the results of both analysis methods, was significantly correlated with poor outcome. Sca of brain monitoring data detects impaired autoregulation with high sensitivity and sufficient specificity. Since the sca approach allows the simultaneous detection of both major pathological conditions, disturbed autoregulation and reduced compliance, it may become a useful analysis tool for brain multimodal monitoring data. PMID:28255331

  9. Detection of Impaired Cerebral Autoregulation Using Selected Correlation Analysis: A Validation Study.

    PubMed

    Proescholdt, Martin A; Faltermeier, Rupert; Bele, Sylvia; Brawanski, Alexander

    2017-01-01

    Multimodal brain monitoring has been utilized to optimize treatment of patients with critical neurological diseases. However, the amount of data requires an integrative tool set to unmask pathological events in a timely fashion. Recently we have introduced a mathematical model allowing the simulation of pathophysiological conditions such as reduced intracranial compliance and impaired autoregulation. Utilizing a mathematical tool set called selected correlation analysis (sca), correlation patterns, which indicate impaired autoregulation, can be detected in patient data sets (scp). In this study we compared the results of the sca with the pressure reactivity index (PRx), an established marker for impaired autoregulation. Mean PRx values were significantly higher in time segments identified as scp compared to segments showing no selected correlations (nsc). The sca based approach predicted cerebral autoregulation failure with a sensitivity of 78.8% and a specificity of 62.6%. Autoregulation failure, as detected by the results of both analysis methods, was significantly correlated with poor outcome. Sca of brain monitoring data detects impaired autoregulation with high sensitivity and sufficient specificity. Since the sca approach allows the simultaneous detection of both major pathological conditions, disturbed autoregulation and reduced compliance, it may become a useful analysis tool for brain multimodal monitoring data.
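
    The pressure reactivity index (PRx) used as the comparator in this study is, in essence, a moving-window correlation between slow waves of arterial blood pressure and intracranial pressure: a positive correlation indicates that ICP passively follows ABP, i.e. impaired autoregulation. A minimal sketch of such an index (the window length, sampling, and synthetic signals are illustrative choices, not those of the study):

```python
import math

def moving_correlation(x, y, window=30):
    """Pearson correlation of x and y over a sliding window (PRx-style index)."""
    out = []
    for i in range(len(x) - window + 1):
        xs, ys = x[i:i + window], y[i:i + window]
        mx, my = sum(xs) / window, sum(ys) / window
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        vx = sum((a - mx) ** 2 for a in xs)
        vy = sum((b - my) ** 2 for b in ys)
        out.append(cov / math.sqrt(vx * vy) if vx > 0 and vy > 0 else 0.0)
    return out

# Impaired autoregulation: ICP passively follows ABP, so the index sits near +1.
abp = [90 + 10 * math.sin(0.2 * t) for t in range(120)]
icp_impaired = [15 + 0.3 * (p - 90) for p in abp]
prx = moving_correlation(abp, icp_impaired)
```

    With intact autoregulation, cerebral vessels counteract ABP swings and the correlation falls toward zero or below; thresholding this index is what turns it into a marker of autoregulation failure.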

  10. Bridge Scour Technology Transfer

    DOT National Transportation Integrated Search

    2018-01-24

    Scour and flooding are the leading causes of bridge failures in the United States and therefore should be monitored. New applications of tools and technologies are being developed, tested, and implemented to reduce bridge scour risk. The National Coo...

  11. Design and Evaluation of a Web-Based Symptom Monitoring Tool for Heart Failure.

    PubMed

    Wakefield, Bonnie J; Alexander, Gregory; Dohrmann, Mary; Richardson, James

    2017-05-01

    Heart failure is a chronic condition where symptom recognition and between-visit communication with providers are critical. Patients are encouraged to track disease-specific data, such as weight and shortness of breath. Use of a Web-based tool that facilitates data display in graph form may help patients recognize exacerbations and more easily communicate out-of-range data to clinicians. The purposes of this study were to (1) design a Web-based tool to facilitate symptom monitoring and symptom recognition in patients with chronic heart failure and (2) conduct a usability evaluation of the Web site. Patient participants generally had a positive view of the Web site and indicated it would support recording their health status and communicating with their doctors. Clinician participants generally had a positive view of the Web site and indicated it would be a potentially useful adjunct to electronic health delivery systems. Participants expressed a need to incorporate decision support within the site and wanted to add other data, for example, blood pressure, and have the ability to adjust font size. A few expressed concerns about data privacy and security. Technologies require careful design and testing to ensure they are useful, usable, and safe for patients and do not add to the burden of busy providers.

  12. Configuration Management and Infrastructure Monitoring Using CFEngine and Icinga for Real-time Heterogeneous Data Taking Environment

    NASA Astrophysics Data System (ADS)

    Poat, M. D.; Lauret, J.; Betts, W.

    2015-12-01

    The STAR online computing environment is an intensive, ever-growing system used for real-time data collection and analysis. Composed of heterogeneous, sometimes custom-tuned groups of machines, the computing infrastructure was previously managed by manual configuration and inconsistently monitored by a combination of tools. This situation led to configuration inconsistency and an overload of repetitive tasks, along with lackluster communication between personnel and machines. Globally securing this heterogeneous cyberinfrastructure was tedious at best, so an agile, policy-driven system ensuring consistency was pursued. Three configuration management tools, Chef, Puppet, and CFEngine, were compared in reliability, versatility, and performance, along with a comparison of the infrastructure monitoring tools Nagios and Icinga. STAR selected the CFEngine configuration management tool and the Icinga infrastructure monitoring system, leading to a versatile and sustainable solution. By leveraging these two tools, STAR can now swiftly upgrade and modify the environment to its needs and promptly react to cyber-security requests. By creating a sustainable long-term monitoring solution, the detection of failures was reduced from days to minutes, allowing rapid action before issues become dire problems that could cause loss of precious experimental data or uptime.

  13. On-line tool breakage monitoring of vibration tapping using spindle motor current

    NASA Astrophysics Data System (ADS)

    Li, Guangjun; Lu, Huimin; Liu, Gang

    2008-10-01

    The input current of a drive motor has been employed successfully to monitor the cutting state in manufacturing processes for more than a decade. In vibration tapping, however, on-line monitoring of motor current has not been reported. In this paper, a tap failure prediction method is proposed to monitor the vibration tapping process using the current signal of the spindle motor. The process of vibration tapping is first described. Then the relationship between the vibration tapping torque and the motor current is investigated by theoretical derivation and experimental measurement. Based on those results, a tool breakage monitoring method is proposed that tracks the ratio of current amplitudes in adjacent vibration tapping periods. Finally, a low-frequency vibration tapping system with motor current monitoring is built using a B-106B servo motor and its CR06 driver. The proposed method is demonstrated with experimental data from vibration tapping in titanium alloys. The results show that the method is feasible for tool breakage monitoring when tapping small threaded holes: with an amplitude-ratio threshold of 1.2 and at least two overruns among 50 adjacent periods, it prevents tool breakage while raising few false alarms.
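
    The decision rule the abstract describes, alarming when the adjacent-period amplitude ratio exceeds 1.2 at least twice within 50 periods, can be sketched as below. This is a toy reconstruction from the abstract's stated thresholds, not the authors' implementation:

```python
def breakage_alarm(amplitudes, ratio_threshold=1.2, min_overruns=2, window=50):
    """Alarm when adjacent-period current-amplitude ratios exceed the threshold
    at least `min_overruns` times within any `window` consecutive periods."""
    overrun_periods = [i for i in range(1, len(amplitudes))
                       if amplitudes[i] / amplitudes[i - 1] > ratio_threshold]
    for j, p in enumerate(overrun_periods):
        # count overruns inside the window of periods ending at p
        inside = [q for q in overrun_periods[:j + 1] if p - q < window]
        if len(inside) >= min_overruns:
            return p  # period at which the alarm fires
    return None

# Steady tapping current, then torque spikes as the tap begins to fail.
normal = [1.0 + 0.02 * (i % 5) for i in range(60)]
faulty = normal[:40] + [1.5, 1.1, 1.6] + normal[43:]
```

    Requiring two overruns in a window, rather than alarming on a single ratio excursion, is what suppresses false alarms from isolated current spikes.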

  14. A usability study of a mobile monitoring system for congestive heart failure patients.

    PubMed

    Svagård, I; Austad, H O; Seeberg, T; Vedum, J; Liverud, A; Mathiesen, B M; Keller, B; Bendixen, O C; Osborne, P; Strisland, F

    2014-01-01

    Sensor-based monitoring of congestive heart failure (CHF) patients living at home can improve quality of care, detect exacerbations of disease at an earlier stage, and motivate the patient towards better self-care. This paper reports on a usability study of the ESUMS system, which provides continuous measurements of heart rate, activity, upper body posture, and skin temperature via a sensor belt and a smartphone as patient terminal. Five CHF patients were included in the trial, all recently discharged from hospital. The nurses experienced continuous heart rate, activity, and posture monitoring as useful and objective tools that helped them in their daily assessment of patient health. They also saw the system as an important educational tool to help patients gain insight into their own condition. Three patients liked that they could view their own physiological and activity data; however, the smartphones used in the study turned out to be too complicated for the patients to operate. A smartphone is built to be a multi-purpose device, and this may (conceptually and practically) be incompatible with the patients' demands for ease of use.

  15. In-Flight Validation of a Pilot Rating Scale for Evaluating Failure Transients in Electronic Flight Control Systems

    NASA Technical Reports Server (NTRS)

    Kalinowski, Kevin F.; Tucker, George E.; Moralez, Ernesto, III

    2006-01-01

    Engineering development and qualification of a Research Flight Control System (RFCS) for the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) JUH-60A has motivated the development of a pilot rating scale for evaluating failure transients in fly-by-wire flight control systems. The RASCAL RFCS includes a highly-reliable, dual-channel Servo Control Unit (SCU) to command and monitor the performance of the fly-by-wire actuators and protect against the effects of erroneous commands from the flexible, but single-thread Flight Control Computer. During the design phase of the RFCS, two piloted simulations were conducted on the Ames Research Center Vertical Motion Simulator (VMS) to help define the required performance characteristics of the safety monitoring algorithms in the SCU. Simulated failures, including hard-over and slow-over commands, were injected into the command path, and the aircraft response and safety monitor performance were evaluated. A subjective Failure/Recovery Rating (F/RR) scale was developed as a means of quantifying the effects of the injected failures on the aircraft state and the degree of pilot effort required to safely recover the aircraft. A brief evaluation of the rating scale was also conducted on the Army/NASA CH-47B variable stability helicopter to confirm that the rating scale was likely to be equally applicable to in-flight evaluations. Following the initial research flight qualification of the RFCS in 2002, a flight test effort was begun to validate the performance of the safety monitors and to validate their design for the safe conduct of research flight testing. Simulated failures were injected into the SCU, and the F/RR scale was applied to assess the results. The results validate the performance of the monitors, and indicate that the Failure/Recovery Rating scale is a very useful tool for evaluating failure transients in fly-by-wire flight control systems.

  16. Distributed Interplanetary Delay/Disruption Tolerant Network (DTN) Monitor and Control System

    NASA Technical Reports Server (NTRS)

    Wang, Shin-Ywan

    2012-01-01

    The Distributed Interplanetary Delay/Disruption Tolerant Network (DTN) Monitor and Control System, a DTN network management implementation at JPL, is designed to provide methods and tools that monitor DTN operation status and detect and resolve DTN operation failures in a partly automated fashion when a space network or a heterogeneous network is infused with DTN capability. In this paper, the DTN Monitor and Control system for the Deep Space Network (DSN) exemplifies how the system can be adapted to a DTN-enabled space network.

  17. Fault Tree Analysis as a Planning and Management Tool: A Case Study

    ERIC Educational Resources Information Center

    Witkin, Belle Ruth

    1977-01-01

    Fault Tree Analysis is an operations research technique used to analyse the most probable modes of failure in a system, so that the system can be redesigned, or monitored more closely, to increase its likelihood of success. (Author)
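
    Quantitatively, a fault tree combines basic-event probabilities through AND/OR gates up to a top event. A minimal evaluator, assuming independent basic events (the example tree and probabilities are invented for illustration, not taken from the case study):

```python
def ft_prob(node, p_basic):
    """Top-event probability of a fault tree with independent basic events.
    node is ("AND", [children]), ("OR", [children]), or a basic-event name."""
    if isinstance(node, str):
        return p_basic[node]
    gate, children = node
    probs = [ft_prob(c, p_basic) for c in children]
    out = 1.0
    if gate == "AND":
        for p in probs:
            out *= p          # all children must fail
        return out
    for p in probs:
        out *= 1.0 - p        # OR: 1 minus probability that none fail
    return 1.0 - out

# Top event: (pump fails AND backup fails) OR power fails.
tree = ("OR", [("AND", ["pump", "backup"]), "power"])
p = {"pump": 0.1, "backup": 0.2, "power": 0.01}
top = ft_prob(tree, p)  # 1 - (1 - 0.1*0.2) * (1 - 0.01) = 0.0298
```

    Ranking which basic events contribute most to the top-event probability is what guides the redesign or closer monitoring the abstract mentions.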

  18. GenSAA: A tool for advancing satellite monitoring with graphical expert systems

    NASA Technical Reports Server (NTRS)

    Hughes, Peter M.; Luczak, Edward C.

    1993-01-01

    During numerous contacts with a satellite each day, spacecraft analysts must closely monitor real time data for combinations of telemetry parameter values, trends, and other indications that may signify a problem or failure. As satellites become more complex and the number of data items increases, this task is becoming increasingly difficult for humans to perform at acceptable performance levels. At the NASA Goddard Space Flight Center, fault-isolation expert systems have been developed to support data monitoring and fault detection tasks in satellite control centers. Based on the lessons learned during these initial efforts in expert system automation, a new domain-specific expert system development tool named the Generic Spacecraft Analyst Assistant (GenSAA) is being developed to facilitate the rapid development and reuse of real-time expert systems to serve as fault-isolation assistants for spacecraft analysts. Although initially domain-specific in nature, this powerful tool will support the development of highly graphical expert systems for data monitoring purposes throughout the space and commercial industry.
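
    At its core, a fault-isolation assistant of this kind evaluates telemetry frames against a rule base and surfaces the rules that fire. A minimal sketch of that pattern (the parameter names, limits, and rule format are invented for illustration; they are not GenSAA's rule language):

```python
# Hypothetical monitoring rules: (name, predicate over one telemetry frame).
RULES = [
    ("battery_undervoltage", lambda t: t["bus_voltage"] < 26.0),
    ("wheel_overspeed",      lambda t: abs(t["wheel_rpm"]) > 5000),
    ("thermal_trend",        lambda t: t["temp"] > 40.0 and t["temp_rate"] > 0.5),
]

def evaluate(telemetry):
    """Return the names of all rules that fire on one telemetry frame."""
    return [name for name, pred in RULES if pred(telemetry)]

frame = {"bus_voltage": 25.4, "wheel_rpm": 1200, "temp": 41.0, "temp_rate": 0.7}
fired = evaluate(frame)
```

    The value of the expert-system approach is that analysts see named conditions ("thermal_trend") rather than raw parameter streams, which is what keeps the task tractable as the number of data items grows.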

  19. Data Fusion Tool for Spiral Bevel Gear Condition Indicator Data

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Antolick, Lance J.; Branning, Jeremy S.; Thomas, Josiah

    2014-01-01

    Tests were performed on two spiral bevel gear sets in the NASA Glenn Spiral Bevel Gear Fatigue Test Rig to simulate the fielded failures of spiral bevel gears installed in a helicopter. Gear sets were tested until damage initiated and progressed on two or more gear or pinion teeth. During testing, gear health monitoring data was collected with two different health monitoring systems. Operational parameters were measured with a third data acquisition system. Tooth damage progression was documented with photographs taken at inspection intervals throughout the test. A software tool was developed for fusing the operational data and the vibration based gear condition indicator (CI) data collected from the two health monitoring systems. Results of this study illustrate the benefits of combining the data from all three systems to indicate progression of damage for spiral bevel gears. The tool also enabled evaluation of the effectiveness of each CI with respect to operational conditions and fault mode.

  20. Characterization of delamination and transverse cracking in graphite/epoxy laminates by acoustic emission

    NASA Technical Reports Server (NTRS)

    Garg, A.; Ishaei, O.

    1983-01-01

    Efforts to characterize and differentiate between two major failure processes in graphite/epoxy composites, transverse cracking and Mode I delamination, are described. Representative laminates were tested in uniaxial tension and flexure. The failure processes were monitored and identified by acoustic emission (AE). The effect of moisture on AE was also investigated. Each damage process was found to have a distinctive AE output that is significantly affected by moisture conditions. It is concluded that AE can serve as a useful tool for detecting and identifying failure modes in composite structures in laboratory and service environments.

  1. Improving inflammatory arthritis management through tighter monitoring of patients and the use of innovative electronic tools

    PubMed Central

    van Riel, Piet; Combe, Bernard; Abdulganieva, Diana; Bousquet, Paola; Courtenay, Molly; Curiale, Cinzia; Gómez-Centeno, Antonio; Haugeberg, Glenn; Leeb, Burkhard; Puolakka, Kari; Ravelli, Angelo; Rintelen, Bernhard; Sarzi-Puttini, Piercarlo

    2016-01-01

    Treating to target by monitoring disease activity and adjusting therapy to attain remission or low disease activity has been shown to lead to improved outcomes in chronic rheumatic diseases such as rheumatoid arthritis and spondyloarthritis. Patient-reported outcomes (PROs), used in conjunction with clinical measures, add an important perspective of disease activity as perceived by the patient. Several validated PROs are available for inflammatory arthritis, and advances in electronic patient monitoring tools are helping patients with chronic diseases to self-monitor and assess their symptoms and health. Frequent patient monitoring could potentially lead to the early identification of disease flares or adverse events, early intervention for patients who may require treatment adaptation, and possibly reduced appointment frequency for those with stable disease. A literature search was conducted to evaluate the potential role of patient self-monitoring and innovative monitoring tools in optimising disease control in inflammatory arthritis. Experience from the treatment of congestive heart failure, diabetes and hypertension shows improved outcomes with remote electronic self-monitoring by patients. In inflammatory arthritis, electronic self-monitoring has been shown to be feasible in patients despite manual disability and to be acceptable to older patients. Patients' self-assessment of disease activity using such methods correlates well with disease activity assessed by rheumatologists. This review also describes several remote monitoring tools that are being developed and used in inflammatory arthritis, offering the potential to improve disease management and reduce pressure on specialists. PMID:27933206

  2. Spiral-Bevel-Gear Damage Detected Using Decision Fusion Analysis

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Handschuh, Robert F.

    2003-01-01

    Helicopter transmission integrity is critical to helicopter safety because helicopters depend on the power train for propulsion, lift, and flight maneuvering. To detect impending transmission failures, the ideal diagnostic tools used in the health-monitoring system would provide real-time health monitoring of the transmission, demonstrate a high level of reliable detection to minimize false alarms, and provide end users with clear information on the health of the system without requiring them to interpret large amounts of sensor data. A diagnostic tool for detecting damage to spiral bevel gears was developed. (Spiral bevel gears are used in helicopter transmissions to transfer power between nonparallel intersecting shafts.) Data fusion was used to integrate two different monitoring technologies, oil debris analysis and vibration, into a health-monitoring system for detecting surface fatigue pitting damage on the gears.

  3. On the use of high-frequency SCADA data for improved wind turbine performance monitoring

    NASA Astrophysics Data System (ADS)

    Gonzalez, E.; Stephen, B.; Infield, D.; Melero, J. J.

    2017-11-01

    SCADA-based condition monitoring of wind turbines facilitates the move from costly corrective repairs towards more proactive maintenance strategies. In this work, we advocate the use of high-frequency SCADA data and quantile regression to build a cost-effective performance monitoring tool. The benefits of the approach are demonstrated through a comparison between state-of-the-art deterministic power curve modelling techniques and the suggested probabilistic model. Detection capabilities are compared for low- and high-frequency SCADA data, providing evidence for monitoring at higher resolutions. Operational data from healthy and faulty turbines are used to provide a practical example of usage of the proposed tool, effectively achieving the detection of an incipient gearbox malfunction more than one month prior to the actual occurrence of the failure.
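
    The underlying idea, flagging power output that falls below a low quantile of the expected power curve for the observed wind speed, can be sketched with a binned empirical quantile as a simplified stand-in for the paper's quantile regression model. The data, bin width, and quantile level here are illustrative assumptions:

```python
import random

def binned_quantile_curve(speeds, powers, q=0.05, bin_width=1.0):
    """Lower-quantile power curve: empirical q-quantile of power per wind-speed bin.
    A binned stand-in for a full quantile regression model."""
    bins = {}
    for s, p in zip(speeds, powers):
        bins.setdefault(int(s / bin_width), []).append(p)
    curve = {}
    for b, vals in bins.items():
        vals.sort()
        curve[b] = vals[max(0, int(q * len(vals)) - 1)]
    return curve

def underperforming(curve, speeds, powers, bin_width=1.0):
    """Indices of samples falling below the lower-quantile curve for their bin."""
    return [i for i, (s, p) in enumerate(zip(speeds, powers))
            if p < curve.get(int(s / bin_width), float("-inf"))]

# Synthetic healthy data: power roughly cubic in wind speed, with noise.
rng = random.Random(1)
speeds = [4 + 8 * rng.random() for _ in range(2000)]
powers = [0.5 * s ** 3 * (1 + 0.05 * rng.uniform(-1, 1)) for s in speeds]
curve = binned_quantile_curve(speeds, powers)

# A degraded sample: 30% power loss at 10 m/s falls below the 5% quantile.
flags = underperforming(curve, [10.0], [0.5 * 10 ** 3 * 0.7])
```

    Modelling a quantile rather than the mean is what gives the probabilistic approach its sensitivity: sustained operation below a low quantile is unlikely for a healthy turbine even under noisy high-frequency data.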

  4. Ground-penetrating radar: A tool for monitoring bridge scour

    USGS Publications Warehouse

    Anderson, N.L.; Ismael, A.M.; Thitimakorn, T.

    2007-01-01

    Ground-penetrating radar (GPR) data were acquired across shallow streams and/or drainage ditches at 10 bridge sites in Missouri by maneuvering the antennae across the surface of the water and riverbank from the bridge deck, manually or by boat. The acquired two-dimensional and three-dimensional data sets accurately image the channel bottom, demonstrating that the GPR tool can be used to estimate and/or monitor water depths in shallow fluvial environments. The study results demonstrate that the GPR tool is a safe and effective tool for measuring and/or monitoring scour in proximity to bridges. The technique can be used to safely monitor scour at assigned time intervals during peak flood stages, thereby enabling owners to take preventative action prior to potential failure. The GPR tool can also be used to investigate depositional and erosional patterns over time, thereby elucidating these processes on a local scale. In certain instances, in-filled scour features can also be imaged and mapped. This information may be critically important to those engaged in bridge design. GPR has advantages over other tools commonly employed for monitoring bridge scour (reflection seismic profiling, echo sounding, and electrical conductivity probing). The tool doesn't need to be coupled to the water, can be moved rapidly across (or above) the surface of a stream, and provides an accurate depth-structure model of the channel bottom and subchannel bottom sediments. The GPR profiles can be extended across emerged sand bars or onto the shore.

  5. Time series analysis of tool wear in sheet metal stamping using acoustic emission

    NASA Astrophysics Data System (ADS)

    Vignesh Shanbhag, V.; Pereira, P. Michael; Rolfe, F. Bernard; Arunachalam, N.

    2017-09-01

    Galling is an adhesive wear mode that often limits the lifespan of stamping tools. Since stamping tools represent a significant economic cost, even a slight improvement in maintenance cost is of high importance to the stamping industry. In other manufacturing industries, online tool condition monitoring has been used to prevent tool wear-related failure. However, monitoring the acoustic emission signal from a stamping process is a non-trivial task, since the acoustic emission signal is non-stationary and non-transient. There have been numerous studies examining acoustic emissions in sheet metal stamping, but very few have focused in detail on how the signals change as wear on the tool surface progresses prior to failure. In this study, time-domain analysis was applied to the acoustic emission signals to extract features related to tool wear. To understand the wear progression, accelerated stamping tests were performed using a semi-industrial stamping setup that performs clamping, piercing, and stamping in a single cycle. The time-domain features were computed from the acoustic emission signal of each part. The sidewalls of the stamped parts were scanned using an optical profilometer to obtain profiles of the worn parts, and these were qualitatively correlated with the acoustic emission signal. Based on the wear behaviour, the wear data can be divided into three stages: in the first stage, no wear is observed; in the second, adhesive wear is likely to occur; and in the third, severe abrasive plus adhesive wear is likely to occur. Scanning electron microscopy showed the formation of lumps on the stamping tool, which represents galling behaviour. The correlation between the time-domain features of the acoustic emission signal and the wear progression identified in this study lays the basis for tool diagnostics in the stamping industry.
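
    Typical time-domain features of the kind extracted in such studies (RMS, peak, crest factor, kurtosis) can be computed per stamping cycle as below. The feature choice is illustrative; the paper's exact feature set is not reproduced here:

```python
import math

def time_domain_features(signal):
    """RMS, peak, crest factor, and kurtosis of one AE burst (one stamping cycle)."""
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    peak = max(abs(x) for x in signal)
    var = sum((x - mean) ** 2 for x in signal) / n
    kurt = (sum((x - mean) ** 4 for x in signal) / n) / (var ** 2) if var else 0.0
    return {"rms": rms, "peak": peak, "crest": peak / rms, "kurtosis": kurt}

# Sanity check on a pure sine burst: crest factor sqrt(2), kurtosis 1.5.
burst = [math.sin(2 * math.pi * k / 64) for k in range(640)]
f = time_domain_features(burst)
```

    Tracking how such features drift from one stamped part to the next (rather than inspecting raw waveforms) is what turns the non-stationary AE signal into a usable wear indicator.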

  6. Next Generation Monitoring: Tier 2 Experience

    NASA Astrophysics Data System (ADS)

    Fay, R.; Bland, J.; Jones, S.

    2017-10-01

    Monitoring IT infrastructure is essential for maximizing availability and minimizing disruption by detecting failures and developing issues. The HEP group at Liverpool has recently updated its monitoring infrastructure with the goal of increasing coverage, improving visualization capabilities, and streamlining configuration and maintenance. Here we present a summary of Liverpool's experience, the monitoring infrastructure, and the tools used to build it. In brief, system checks are configured in Puppet using Hiera and managed by Sensu, replacing Nagios. Centralised logging is managed with Elasticsearch, together with Logstash and Filebeat. Kibana provides an interface for interactive analysis, including visualization and dashboards. Metric collection is also configured in Puppet, managed by collectd, and stored in Graphite, with Grafana providing a visualization and dashboard tool. The Uchiwa dashboard for Sensu provides a web interface for viewing infrastructure status. Alert capabilities are provided via external handlers. A custom alert handler is in development to provide an easily configurable, extensible, and maintainable alert facility.

  7. Acoustic emission detection of macro-cracks on engraving tool steel inserts during the injection molding cycle using PZT sensors.

    PubMed

    Svečko, Rajko; Kusić, Dragan; Kek, Tomaž; Sarjaš, Andrej; Hančič, Aleš; Grum, Janez

    2013-05-14

    This paper presents an improved monitoring system for the failure detection of engraving tool steel inserts during the injection molding cycle. This system uses acoustic emission PZT sensors mounted through acoustic waveguides on the engraving insert. We were thus able to clearly distinguish the defect through measured AE signals. Two engraving tool steel inserts were tested during the production of standard test specimens, each under the same processing conditions. By closely comparing the captured AE signals on both engraving inserts during the filling and packing stages, we were able to detect the presence of macro-cracks on one engraving insert. Gabor wavelet analysis was used for closer examination of the captured AE signals' peak amplitudes during the filling and packing stages. The obtained results revealed that such a system could be used successfully as an improved tool for monitoring the integrity of an injection molding process.

  8. Acoustic Emission Detection of Macro-Cracks on Engraving Tool Steel Inserts during the Injection Molding Cycle Using PZT Sensors

    PubMed Central

    Svečko, Rajko; Kusić, Dragan; Kek, Tomaž; Sarjaš, Andrej; Hančič, Aleš; Grum, Janez

    2013-01-01

    This paper presents an improved monitoring system for the failure detection of engraving tool steel inserts during the injection molding cycle. This system uses acoustic emission PZT sensors mounted through acoustic waveguides on the engraving insert. We were thus able to clearly distinguish the defect through measured AE signals. Two engraving tool steel inserts were tested during the production of standard test specimens, each under the same processing conditions. By closely comparing the captured AE signals on both engraving inserts during the filling and packing stages, we were able to detect the presence of macro-cracks on one engraving insert. Gabor wavelet analysis was used for closer examination of the captured AE signals' peak amplitudes during the filling and packing stages. The obtained results revealed that such a system could be used successfully as an improved tool for monitoring the integrity of an injection molding process. PMID:23673677

  9. Electrical failure debug using interlayer profiling method

    NASA Astrophysics Data System (ADS)

    Yang, Thomas; Shen, Yang; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh

    2017-03-01

    It is well known that as technology nodes shrink, the number of design rules increases, design structures become more regular, and the number of process manufacturing steps increases as well. Normal inspection tools can only monitor hard failures on a single layer; electrical failures caused by interlayer misalignment can only be detected through testing. This paper presents a working flow that uses pattern analysis interlayer profiling techniques to turn multi-layer physical information into grouped, linked parameter values. Using this data analysis flow combined with an electrical model allows us to find critical regions on a layout for yield learning.

  10. Application of Vibration and Oil Analysis for Reliability Information on Helicopter Main Rotor Gearbox

    NASA Astrophysics Data System (ADS)

    Murrad, Muhamad; Leong, M. Salman

    Based on the experiences of the Malaysian Armed Forces (MAF), failure of the main rotor gearbox (MRGB) was one of the major contributing factors to helicopter breakdowns. Even though vibration and oil analysis are effective techniques for monitoring the health of helicopter components, these two techniques were rarely combined to form an effective assessment tool in the MAF. Results of the oil analysis were often used only for the oil changing schedule, while assessments of MRGB condition were based mainly on overall vibration readings. A study group was formed and given a mandate to improve the maintenance strategy of the S61-A4 helicopter fleet in the MAF. The improvement consisted of a structured approach to the reassessment/redefinition of suitable maintenance actions that should be taken for the MRGB. Basic and enhanced tools for condition monitoring (CM) are investigated to address the predominant failures of the MRGB. Quantitative accelerated life testing (QALT) was considered in this work with the intent of obtaining the required reliability information in a shorter time than tests under normal stress conditions. When performed correctly, these tests can provide valuable information about MRGB performance under normal operating conditions, enabling maintenance personnel to make decisions more quickly, accurately, and economically. The time-to-failure and probability-of-failure information for the MRGB were generated by applying QALT analysis principles. This study is anticipated to make a dramatic change in the MAF's approach to CM, bringing significant savings and various benefits to the MAF.
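
    Time-to-failure and probability-of-failure results from a QALT analysis are commonly expressed through a Weibull life model. A minimal sketch, assuming Weibull shape/scale parameters that are purely illustrative (the study's fitted values are not given):

```python
import math

def weibull_reliability(t, beta, eta):
    """Weibull reliability R(t) = exp(-(t/eta)**beta)."""
    return math.exp(-((t / eta) ** beta))

def b_life(fraction_failed, beta, eta):
    """Time by which a given fraction of units has failed
    (fraction_failed=0.10 gives the familiar B10 life)."""
    return eta * (-math.log(1.0 - fraction_failed)) ** (1.0 / beta)

# Illustrative parameters only: shape beta, scale eta in hours.
beta, eta = 2.0, 5000.0
print("P(failure by 1000 h):", round(1.0 - weibull_reliability(1000.0, beta, eta), 3))
print("B10 life (h):", round(b_life(0.10, beta, eta), 1))
```

    In a QALT workflow, beta and eta would be fitted to accelerated-test failure times and extrapolated to the normal stress level before evaluating quantities like these.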

  11. Towards a geophysical decision-support system for monitoring and managing unstable slopes

    NASA Astrophysics Data System (ADS)

    Chambers, J. E.; Meldrum, P.; Wilkinson, P. B.; Uhlemann, S.; Swift, R. T.; Inauen, C.; Gunn, D.; Kuras, O.; Whiteley, J.; Kendall, J. M.

    2017-12-01

    Conventional approaches to condition monitoring, such as walk-over surveys, remote sensing or intrusive sampling, are often inadequate for predicting instabilities in natural and engineered slopes. Surface observations cannot detect the subsurface precursors to failure events; they can only identify failure once it has begun. On the other hand, intrusive investigations using boreholes sample only a very small volume of ground, and hence small-scale deterioration processes in heterogeneous ground conditions can easily be missed. It is increasingly being recognised that geophysical techniques can complement conventional approaches by providing spatial subsurface information. Here we describe the development and testing of a new geophysical slope monitoring system. It is built around low-cost electrical resistivity tomography instrumentation, combined with integrated geotechnical logging capability, and coupled with data telemetry. An automated data processing and analysis workflow is being developed to streamline information delivery. The development of this approach has provided the basis of a decision-support tool for monitoring and managing unstable slopes. The hardware component of the system has been operational at a number of field sites, covering a range of natural and engineered slopes, for up to two years. We report on the monitoring results from these sites, discuss the practicalities of installing and maintaining long-term geophysical monitoring infrastructure, and consider the requirements of a fully automated data processing and analysis workflow. We propose that the result of this development work is a practical decision-support tool that can provide near-real-time information on the internal condition of problematic slopes.

  12. Self-actuating and self-diagnosing plastically deforming piezo-composite flapping wing MAV

    NASA Astrophysics Data System (ADS)

    Harish, Ajay B.; Harursampath, Dineshkumar; Mahapatra, D. Roy

    2011-04-01

    In this work, we propose a constitutive model to describe the behavior of Piezoelectric Fiber Reinforced Composite (PFRC) material consisting of an elasto-plastic matrix reinforced by strong elastic piezoelectric fibers. Computational efficiency is achieved using analytical solutions for the elastic stiffness matrix derived from the Variational Asymptotic Method (VAM). This is extended to provide Structural Health Monitoring (SHM) based on plasticity-induced degradation of the flapping frequency of the PFRC. Overall, this work provides an effective mathematical tool for structural self-health monitoring of plasticity-induced flapping degradation of PFRC flapping wing MAVs. The developed tool can be re-calibrated to also provide SHM for other failure modes such as fatigue and matrix cracking.

  13. ARC-2007-ACD07-0140-001

    NASA Image and Video Library

    2007-07-31

    David L. Iverson of NASA Ames Research Center, Moffett Field, California, led development of computer software to monitor the conditions of the gyroscopes that keep the International Space Station (ISS) properly oriented in space as the ISS orbits Earth. The gyroscopes are flywheels that control the station's attitude without the use of propellant fuel. NASA computer scientists designed the new software, the Inductive Monitoring System, to detect warning signs that precede a gyroscope's failure. According to NASA officials, engineers will add the new software tool to a group of existing tools to identify and track problems related to the gyroscopes. If the software detects warning signs, it will quickly warn the space station's mission control center.

  14. The Generic Spacecraft Analyst Assistant (gensaa): a Tool for Developing Graphical Expert Systems

    NASA Technical Reports Server (NTRS)

    Hughes, Peter M.

    1993-01-01

    During numerous contacts with a satellite each day, spacecraft analysts must closely monitor real-time data. The analysts must watch for combinations of telemetry parameter values, trends, and other indications that may signify a problem or failure. As the satellites become more complex and the number of data items increases, this task is becoming increasingly difficult for humans to perform at acceptable performance levels. At NASA GSFC, fault-isolation expert systems are in operation supporting this data monitoring task. Based on the lessons learned during these initial efforts in expert system automation, a new domain-specific expert system development tool named the Generic Spacecraft Analyst Assistant (GenSAA) is being developed to facilitate the rapid development and reuse of real-time expert systems to serve as fault-isolation assistants for spacecraft analysts. Although initially domain-specific in nature, this powerful tool will readily support the development of highly graphical expert systems for data monitoring purposes throughout the space and commercial industries.

  15. E-beam column monitoring for improved CD SEM stability and tool matching

    NASA Astrophysics Data System (ADS)

    Hayes, Timothy S.; Henninger, Randall S.

    2000-06-01

    Tool matching is an important metric for in-line semiconductor metrology systems. The ability to obtain the same measurement results on two or more systems allows a semiconductor fabrication facility (fab) to deploy product efficiently, improving overall equipment efficiency (OEE). Many parameters on critical dimension scanning electron microscopes (CDSEMs) can affect the long-term precision component of the tool-matching metric. One such class of parameters relates to electron beam column stability. The alignment and condition of the gun and apertures, as well as astigmatism correction, have all been found to affect the overall measurements of the CDSEM. These effects are now becoming dominant factors in sub-3 nm tool-matching criteria. This paper discusses the methodologies of column parameter monitoring, along with actions and controls for improving overall stability. Results have shown that column instabilities caused by contamination, gun fluctuations, component failures, detector efficiency, and external issues can be identified through parameter monitoring. The Applied Materials (AMAT) 7830 Series CDSEMs evaluated at IBM's Burlington, Vermont manufacturing facility have demonstrated 5 nm tool matching across 11 systems, which has resulted in non-dedicated product deployment and has significantly reduced cost of ownership.
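
    Column parameter monitoring of this kind typically reduces to trending each parameter against statistical limits. A minimal sketch, assuming a simple Shewhart-style three-sigma rule (the paper's actual control scheme and the parameter values below are not specified in the abstract):

```python
import statistics

def out_of_control(history, latest, k=3.0):
    """Shewhart-style check: flag a column parameter whose latest
    reading falls outside mean +/- k standard deviations of its
    recent history. Illustrative only."""
    mu = statistics.fmean(history)      # fmean requires Python 3.8+
    sigma = statistics.stdev(history)
    return abs(latest - mu) > k * sigma

# Hypothetical gun-current readings (arbitrary units).
gun_current = [10.0, 10.1, 9.9, 10.05, 9.95, 10.0, 10.1, 9.9]
print(out_of_control(gun_current, 10.02))  # stable reading
print(out_of_control(gun_current, 11.5))   # drift or component failure
```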

  16. Transmission Bearing Damage Detection Using Decision Fusion Analysis

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Lewicki, David G.; Decker, Harry J.

    2004-01-01

    A diagnostic tool was developed for detecting fatigue damage to rolling element bearings in an OH-58 main rotor transmission. Two different monitoring technologies, oil debris analysis and vibration, were integrated using data fusion into a health monitoring system for detecting bearing surface fatigue pitting damage. This integrated system showed improved detection and decision-making capabilities as compared to using individual monitoring technologies. This diagnostic tool was evaluated by collecting vibration and oil debris data from tests performed in the NASA Glenn 500 hp Helicopter Transmission Test Stand. Data were collected during experiments performed in this test rig when two unanticipated bearing failures occurred. Results show that combining the vibration and oil debris measurement technologies improves the detection of pitting damage on spiral bevel gear duplex ball bearings and spiral bevel pinion triplex ball bearings in a main rotor transmission.

  17. An Observation Tool for Monitoring Social Skill Implementation in Contextually Relevant Environments

    ERIC Educational Resources Information Center

    Morgan, Joseph John; Hsiao, Yun-Ju; Dobbins, Nicole; Brown, Nancy B.; Lyons, Catherine

    2015-01-01

    Skills related to social-emotional learning (SEL) are essential for college and career readiness. Failure to use appropriate skills for SEL in school is often linked to several negative academic outcomes, including rejection by school community members, academic deficits, and higher rates of problematic behavior. Social skills interventions are…

  18. EEMD-based wind turbine bearing failure detection using the generator stator current homopolar component

    NASA Astrophysics Data System (ADS)

    Amirat, Yassine; Choqueuse, Vincent; Benbouzid, Mohamed

    2013-12-01

    Failure detection has always been a demanding task in the electrical machines community, and it has become more challenging in wind energy conversion systems because the sustainability and viability of wind farms are highly dependent on the reduction of operational and maintenance costs. Indeed, the most efficient way of reducing these costs is to continuously monitor the condition of these systems. This allows for early detection of generator health degeneration, facilitating a proactive response, minimizing downtime, and maximizing productivity. This paper then provides an assessment of a failure detection technique based on the homopolar component of the generator stator current, and highlights the use of ensemble empirical mode decomposition as a tool for failure detection in wind turbine generators for stationary and non-stationary cases.
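
    The monitored quantity itself is simple to state: the homopolar (zero-sequence) component is the instantaneous average of the three stator phase currents, which stays near zero for a balanced machine and departs from zero under fault-related unbalance. A minimal sketch with synthetic currents (the EEMD stage applied to this signal is omitted, and the 50 Hz supply is an assumption):

```python
import math

def homopolar(ia, ib, ic):
    """Zero-sequence (homopolar) component of three-phase currents:
    i0[k] = (ia[k] + ib[k] + ic[k]) / 3."""
    return [(a + b + c) / 3.0 for a, b, c in zip(ia, ib, ic)]

t = [k / 1000.0 for k in range(1000)]
w = 2.0 * math.pi * 50.0  # 50 Hz supply -- an assumption
ia = [math.sin(w * x) for x in t]
ib = [math.sin(w * x - 2.0 * math.pi / 3.0) for x in t]
ic = [math.sin(w * x + 2.0 * math.pi / 3.0) for x in t]

# Balanced phases cancel at every instant...
print(max(abs(v) for v in homopolar(ia, ib, ic)))
# ...while an amplitude unbalance (e.g. a developing fault) leaves a
# residual that EEMD can then decompose under stationary and
# non-stationary operating conditions.
ib_faulty = [0.9 * v for v in ib]
print(max(abs(v) for v in homopolar(ia, ib_faulty, ic)))
```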

  19. Operation Reliability Assessment for Cutting Tools by Applying a Proportional Covariate Model to Condition Monitoring Information

    PubMed Central

    Cai, Gaigai; Chen, Xuefeng; Li, Bing; Chen, Baojia; He, Zhengjia

    2012-01-01

    The reliability of cutting tools is critical to machining precision and production efficiency. The conventional statistic-based reliability assessment method aims at providing a general and overall estimation of reliability for a large population of identical units under given and fixed conditions. However, it has limited effectiveness in depicting the operational characteristics of a cutting tool. To overcome this limitation, this paper proposes an approach to assess the operation reliability of cutting tools. A proportional covariate model is introduced to construct the relationship between operation reliability and condition monitoring information. The wavelet packet transform and an improved distance evaluation technique are used to extract sensitive features from vibration signals, and a covariate function is constructed based on the proportional covariate model. Ultimately, the failure rate function of the cutting tool being assessed is calculated using the baseline covariate function obtained from a small sample of historical data. Experimental results and a comparative study show that the proposed method is effective for assessing the operation reliability of cutting tools. PMID:23201980
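
    The core idea of the proportional covariate model, that condition-monitoring covariates are proportional to the failure rate, so a baseline covariate function lets observed features be converted into an operating failure rate, can be sketched as follows. This is a simplification: the paper's wavelet packet feature extraction and covariate construction are not reproduced, and the numbers are illustrative only.

```python
def operational_hazard(baseline_hazard, z_baseline, z_observed):
    """Proportional covariate model sketch: scale the baseline failure
    rate by the ratio of the observed condition covariate to the
    baseline covariate value (assumed constant here for simplicity)."""
    return [h * (z / z_baseline) for h, z in zip(baseline_hazard, z_observed)]

# A vibration-derived covariate that doubles relative to baseline
# doubles the assessed failure rate at that point in time.
baseline_hazard = [0.01, 0.02, 0.04]   # illustrative failure rates
covariate = [1.0, 1.5, 2.0]            # normalized condition feature
print(operational_hazard(baseline_hazard, 1.0, covariate))
```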

  20. Clinical risk analysis with failure mode and effect analysis (FMEA) model in a dialysis unit.

    PubMed

    Bonfant, Giovanna; Belfanti, Pietro; Paternoster, Giuseppe; Gabrielli, Danila; Gaiter, Alberto M; Manes, Massimo; Molino, Andrea; Pellu, Valentina; Ponzetti, Clemente; Farina, Massimo; Nebiolo, Pier E

    2010-01-01

    The aim of clinical risk management is to improve the quality of care provided by health care organizations and to assure patients' safety. Failure mode and effect analysis (FMEA) is a tool employed for clinical risk reduction. We applied FMEA to chronic hemodialysis outpatients. The FMEA steps were: (i) Process study: we recorded phases and activities. (ii) Hazard analysis: we listed activity-related failure modes and their effects, described control measures, assigned severity, occurrence, and detection scores for each failure mode, and calculated the risk priority numbers (RPNs) by multiplying the three scores; the total RPN is calculated by adding the single failure mode RPNs. (iii) Planning: we prioritized RPNs on a priority matrix taking into account the three scores, analyzed the causes of failure modes, made recommendations, and planned new control measures. (iv) Monitoring: after failure mode elimination or reduction, we compared the resulting RPN with the previous one. Our failure modes with the highest RPNs came from communication and organization problems. Two tools were created to improve information flow: "dialysis agenda" software and nursing datasheets. We scheduled nephrological examinations, and we changed both the medical and the nursing organization. The total RPN value decreased from 892 to 815 (8.6%) after reorganization. Employing FMEA, we worked on a few critical activities and reduced patients' clinical risk. A priority matrix also takes into account the weight of the control measures: we believe this evaluation is quick, because of simple priority selection, and that it decreases action times.
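
    The RPN arithmetic described above is straightforward to reproduce. The failure-mode names and scores below are hypothetical, chosen only to illustrate the severity x occurrence x detection product and the prioritization step:

```python
def rpn(severity, occurrence, detection):
    """Risk priority number for one failure mode: RPN = S * O * D."""
    return severity * occurrence * detection

# Hypothetical dialysis-unit failure modes (names and scores invented).
failure_modes = {
    "prescription not transmitted": (8, 3, 4),
    "wrong dialysate conductivity": (9, 2, 2),
    "nursing datasheet incomplete": (5, 5, 3),
}
rpns = {name: rpn(*scores) for name, scores in failure_modes.items()}
total_rpn = sum(rpns.values())

# Address the highest-RPN modes first; re-scoring after the corrective
# actions should lower the total RPN, as in the study's 892 -> 815 result.
for name in sorted(rpns, key=rpns.get, reverse=True):
    print(name, rpns[name])
print("total RPN:", total_rpn)
```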

  1. Self-Management of On-Task Homework Behavior: A Promising Strategy for Adolescents with Attention and Behavior Problems

    ERIC Educational Resources Information Center

    Axelrod, Michael I.; Zhe, Elizabeth J.; Haugen, Kimberly A.; Klein, Jean A.

    2009-01-01

    Students with attention and behavior problems oftentimes experience difficulty finishing academic work. On-task behavior is frequently cited as a primary reason for students' failure to complete homework assignments. Researchers have identified self-monitoring and self-management of on-task behavior as effective tools for improving homework…

  2. Damage tolerance modeling and validation of a wireless sensory composite panel for a structural health monitoring system

    NASA Astrophysics Data System (ADS)

    Talagani, Mohamad R.; Abdi, Frank; Saravanos, Dimitris; Chrysohoidis, Nikos; Nikbin, Kamran; Ragalini, Rose; Rodov, Irena

    2013-05-01

    The paper proposes the diagnostic and prognostic modeling and test validation of a Wireless Integrated Strain Monitoring and Simulation System (WISMOS). The effort verifies a hardware and web-based software tool that is able to evaluate and optimize sensorized aerospace composite structures for the purpose of Structural Health Monitoring (SHM). The tool is an extension of an existing suite of an SHM system based on a diagnostic-prognostic system (DPS) methodology. The goal of the extended SHM-DPS is to apply multi-scale nonlinear physics-based progressive failure analyses to the "as-is" structural configuration to determine residual strength, remaining service life, and future inspection intervals and maintenance procedures. The DPS solution meets the JTI Green Regional Aircraft (GRA) goals of low-weight, durable, and reliable commercial aircraft. It takes advantage of the methodologies developed within the European Clean Sky JTI project WISMOS, with the capability to transmit, store, and process strain data from a network of wireless sensors (e.g. strain gages, FBGA) and utilize a DPS-based methodology, based on multi-scale progressive failure analysis (MS-PFA), to determine structural health and to advise with respect to condition-based inspection and maintenance. As part of the validation of the diagnostic and prognostic system, Carbon/Epoxy ASTM coupons were fabricated and tested to extract the mechanical properties. Subsequently, two composite stiffened panels were manufactured, instrumented, and tested under compressive loading: 1) an undamaged stiffened buckling panel; and 2) a damaged stiffened buckling panel including an initial diamond cut. Next, numerical finite element models of the two panels were developed and analyzed under test conditions using Multi-Scale Progressive Failure Analysis (an extension of FEM) to evaluate the damage/fracture evolution process, as well as to identify the contributing failure modes. The comparisons between predictions and test results were within 10% accuracy.

  3. Frequency characteristics of the heart rate variability produced by Cheyne-Stokes respiration during 24-hr ambulatory electrocardiographic monitoring.

    PubMed

    Ichimaru, Y; Yanaga, T

    1989-06-01

    Spectral analysis of heart rates during 24-hr ambulatory electrocardiographic monitoring was carried out to characterize the heart rate spectral components of Cheyne-Stokes respiration (CSR) using the fast Fourier transform (FFT). Eight patients with congestive heart failure were selected for the study. FFT analyses were performed over 614.4 sec. From the power spectrum, five parameters were extracted to characterize the CSR. The low peak frequencies in the eight subjects were between 0.0179 Hz (56 sec) and 0.0081 Hz (123 sec). The algorithm used to detect CSR is the following: (i) if the LFPA/ULFA ratio is above the absolute value of 1.0, and (ii) the LFPP/MLFP ratio is above the absolute value of 4.0, then the power spectrum is suggestive of CSR. We conclude that the automatic detection of CSR by heart rate spectral analysis during ambulatory ECG monitoring may afford a tool for the evaluation of patients with congestive heart failure.
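
    The detection idea rests on finding a low-frequency spectral peak in the reported 0.0081-0.0179 Hz band. A minimal sketch with a synthetic heart-rate series (the 1 Hz sampling rate is an assumption, and the LFPA/ULFA and LFPP/MLFP ratio tests are not reproduced because the abstract does not define those parameters):

```python
import math

def power_spectrum(x, fs):
    """Single-sided power spectrum of a real series via a direct DFT
    (adequate for short series; a real implementation would use an FFT)."""
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]
    spec = []
    for k in range(1, n // 2):
        re = sum(xc[j] * math.cos(2 * math.pi * k * j / n) for j in range(n))
        im = sum(-xc[j] * math.sin(2 * math.pi * k * j / n) for j in range(n))
        spec.append((k * fs / n, (re * re + im * im) / n))
    return spec

# Synthetic heart-rate series with a CSR-like oscillation: period 100 s
# (0.01 Hz), inside the 0.0081-0.0179 Hz band reported in the study.
fs = 1.0  # one heart-rate sample per second -- an assumption
hr = [70.0 + 5.0 * math.sin(2 * math.pi * 0.01 * t) for t in range(600)]
peak_freq = max(power_spectrum(hr, fs), key=lambda p: p[1])[0]
print("low-frequency peak at %.4f Hz" % peak_freq)
```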

  4. Applications of the Petri net to simulate, test, and validate the performance and safety of complex, heterogeneous, multi-modality patient monitoring alarm systems.

    PubMed

    Sloane, E B; Gelhot, V

    2004-01-01

    This research is motivated by the rapid pace of medical device and information system integration. Although the ability to interconnect many medical devices and information systems may help improve patient care, there is no way to detect whether incompatibilities among devices might cause critical events such as patient alarms to go unnoticed, or cause one or more of the devices to become stuck in a disabled state. Petri net tools allow automated testing of all possible states and transitions between devices and/or systems to detect potential failure modes in advance. This paper describes an early research project using Petri nets to simulate and validate a multi-modality central patient monitoring system. A free Petri net tool, HPSim, is used to simulate two wireless patient monitoring networks: one with 44 heart monitors and a central monitoring system, and a second version that includes an additional 44 wireless pulse oximeters. In the latter Petri net simulation, a potentially dangerous heart arrhythmia and pulse oximetry alarms were detected.
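
    The state-exploration idea can be illustrated with a toy Petri net: places hold tokens, and a transition may fire only when every input place holds a token. The alarm scenario below is hypothetical, not the paper's HPSim model:

```python
# Minimal Petri net sketch. A transition is a pair (inputs, outputs)
# of place-name lists; a marking maps place names to token counts.

def enabled(marking, transition):
    inputs, _ = transition
    return all(marking[p] >= 1 for p in inputs)

def fire(marking, transition):
    inputs, outputs = transition
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] += 1
    return m

# Hypothetical alarm path: an arrhythmia event reaches the central
# station only when the wireless channel is free.
transitions = {
    "send_alarm": (["arrhythmia_detected", "channel_free"],
                   ["alarm_at_central", "channel_free"]),
}
marking = {"arrhythmia_detected": 1, "channel_free": 0, "alarm_at_central": 0}
# With the channel stuck busy, the transition never becomes enabled --
# exactly the kind of unnoticed-alarm state that exhaustive Petri net
# analysis is meant to expose before deployment.
print(enabled(marking, transitions["send_alarm"]))
marking["channel_free"] = 1
marking = fire(marking, transitions["send_alarm"])
print(marking["alarm_at_central"])
```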

  5. Inductive System Monitors Tasks

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Inductive Monitoring System (IMS) software developed at Ames Research Center uses artificial intelligence and data mining techniques to build system-monitoring knowledge bases from archived or simulated sensor data. This information is then used to detect unusual or anomalous behavior that may indicate an impending system failure. The IMS is currently helping analyze data from systems that help fly and maintain the space shuttle and the International Space Station (ISS): classes derived from nominal sensor data are used to build a monitoring knowledge base, and in real time IMS performs monitoring functions, determining and displaying the degree of deviation from nominal performance. IMS trend analyses can detect conditions that may indicate a failure or required system maintenance. The development of IMS was motivated by the difficulty of producing detailed diagnostic models of some system components due to complexity or unavailability of design information. Successful applications have ranged from real-time monitoring of aircraft engine and control systems to anomaly detection in space shuttle and ISS data. IMS was used on shuttle missions STS-121, STS-115, and STS-116 to search the Wing Leading Edge Impact Detection System (WLEIDS) data for signs of possibly damaging impacts during launch. It independently verified the findings of the WLEIDS Mission Evaluation Room (MER) analysts and indicated additional points of interest that were subsequently investigated by the MER team. In support of the Exploration Systems Mission Directorate, IMS is being deployed as an anomaly detection tool on ISS mission control consoles in the Johnson Space Center Mission Operations Directorate. IMS has been trained to detect faults in the ISS Control Moment Gyroscope (CMG) systems. In laboratory tests, it has already detected several minor anomalies in real-time CMG data. When tested on archived data, IMS was able to detect precursors of the CMG1 failure nearly 15 hours in advance of the actual failure event. In the Aeronautics Research Mission Directorate, IMS successfully performed real-time engine health analysis, detecting simulated failures and actual engine anomalies in an F/A-18 aircraft during the course of 25 test flights. IMS is also being used in colla…
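
    An IMS-style deviation score can be approximated as the distance from an observed sensor vector to the nearest vector of nominal training data. This is a simplification (the real system first clusters the nominal data), and the sensor names and values are invented:

```python
import math

def deviation(nominal_vectors, observed):
    """Distance from the observed sensor vector to the nearest nominal
    training vector; larger values mean greater departure from
    nominal behavior. Nearest-neighbour search here stands in for
    IMS's clustered knowledge base."""
    return min(math.dist(observed, v)   # math.dist requires Python 3.8+
               for v in nominal_vectors)

# Hypothetical nominal (wheel_speed, bearing_temp) readings.
nominal = [(100.0, 30.0), (102.0, 31.0), (98.0, 29.5)]
print(deviation(nominal, (101.0, 30.5)))  # small: near nominal
print(deviation(nominal, (120.0, 45.0)))  # large: anomalous reading
```

    Thresholding and trending this score over time is what turns it into the kind of early warning that preceded the CMG1 failure detection described above.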

  6. Detecting Slow Deformation Signals Preceding Dynamic Failure: A New Strategy For The Mitigation Of Natural Hazards (SAFER)

    NASA Astrophysics Data System (ADS)

    Vinciguerra, Sergio; Colombero, Chiara; Comina, Cesare; Ferrero, Anna Maria; Mandrone, Giuseppe; Umili, Gessica; Fiaschi, Andrea; Saccorotti, Gilberto

    2014-05-01

    Rock slope monitoring is a major aim in territorial risk assessment and mitigation. The high velocity that usually characterizes the failure phase of rock instabilities makes traditional instruments based on slope deformation measurements inapplicable for early warning systems. On the other hand, acoustic emission records have often been a good tool for slope monitoring in underground mining. Here we aim to identify the characteristic signs of impending failure by deploying a "site specific" microseismic monitoring system on an unstable patch of the Madonna del Sasso landslide in the Italian Western Alps, designed to monitor subtle changes in the mechanical properties of the medium and installed as close as possible to the source region. An initial characterization based on geomechanical and geophysical tests allowed us to understand the instability mechanism and to design the monitoring systems to be placed. Stability analysis showed that the stability of the slope is due to rock bridges, whose progressive failure can result in a global slope failure. Consequently, the rock bridges potentially generating dynamic ruptures need to be monitored. A first array, consisting of instruments provided by the University of Turin, was deployed in October 2013: four triaxial 4.5 Hz seismometers connected to a 12-channel data logger, arranged in a 'large aperture' configuration that encompasses the entire unstable rock mass. Preliminary data indicate the occurrence of microseismic swarms with different spectral contents. Two additional geophones and four triaxial piezoelectric accelerometers able to operate at frequencies up to 23 kHz will be installed during summer 2014. This will allow us to develop a network capable of recording events with Mw < 0.5 and frequencies between 700 Hz and 20 kHz. Rock physical and mechanical characterization, along with rock deformation laboratory experiments in which the evolution of related physical parameters under simulated conditions of stress and fluid content will be studied, together with theoretical modelling, will allow us to arrive at a full hazard assessment and to test new methodologies for a much wider scale of applications within the EU.

  7. Fetal transesophageal echocardiography: clinical introduction as a monitoring tool during cardiac intervention in a human fetus.

    PubMed

    Kohl, T; Müller, A; Tchatcheva, K; Achenbach, S; Gembruch, U

    2005-12-01

    Because of insufficient imaging by maternal transabdominal fetal echocardiography (TAE) in a human fetus with aortic atresia, imperforate atrial septum and progressive cardiac failure, we assessed the feasibility of fetal transesophageal echocardiography (TEE) as a monitoring tool during fetal cardiac intervention at 24 + 6 weeks of gestation. Percutaneous fetoscopic intraesophageal deployment of the ultrasound catheter was achieved and did not result in any maternal or fetal complications. Fetal TEE permitted substantially clearer definition of fetal cardiac anatomy and intracardiac device manipulations than conventional maternal TAE. Despite the employment of various devices, no sufficiently large opening could be achieved within the atrial septum. Although the fetus tolerated the procedure remarkably well and satisfactory fetoplacental flow could be documented at the end of the procedure, the fetus died from progressive cardiac failure 3 days after the intervention. Fetoscopic TEE is feasible in the human fetus and permits substantially clearer definition of fetal cardiac anatomy and intracardiac manipulations than conventional maternal TAE. Based on the observation of spontaneous closure of multiple iatrogenic perforations of the atrial septum, specialized devices are required in order to improve the technical success rate of septoplasty methods and hence the survival odds of these high-risk patients.

  8. Spinoff 2013

    NASA Technical Reports Server (NTRS)

    2014-01-01

    Topics covered include: Innovative Software Tools Measure Behavioral Alertness; Miniaturized, Portable Sensors Monitor Metabolic Health; Patient Simulators Train Emergency Caregivers; Solar Refrigerators Store Life-Saving Vaccines; Monitors Enable Medication Management in Patients' Homes; Handheld Diagnostic Device Delivers Quick Medical Readings; Experiments Result in Safer, Spin-Resistant Aircraft; Interfaces Visualize Data for Airline Safety, Efficiency; Data Mining Tools Make Flights Safer, More Efficient; NASA Standards Inform Comfortable Car Seats; Heat Shield Paves the Way for Commercial Space; Air Systems Provide Life Support to Miners; Coatings Preserve Metal, Stone, Tile, and Concrete; Robots Spur Software That Lends a Hand; Cloud-Based Data Sharing Connects Emergency Managers; Catalytic Converters Maintain Air Quality in Mines; NASA-Enhanced Water Bottles Filter Water on the Go; Brainwave Monitoring Software Improves Distracted Minds; Thermal Materials Protect Priceless, Personal Keepsakes; Home Air Purifiers Eradicate Harmful Pathogens; Thermal Materials Drive Professional Apparel Line; Radiant Barriers Save Energy in Buildings; Open Source Initiative Powers Real-Time Data Streams; Shuttle Engine Designs Revolutionize Solar Power; Procedure-Authoring Tool Improves Safety on Oil Rigs; Satellite Data Aid Monitoring of Nation's Forests; Mars Technologies Spawn Durable Wind Turbines; Programs Visualize Earth and Space for Interactive Education; Processor Units Reduce Satellite Construction Costs; Software Accelerates Computing Time for Complex Math; Simulation Tools Prevent Signal Interference on Spacecraft; Software Simplifies the Sharing of Numerical Models; Virtual Machine Language Controls Remote Devices; Micro-Accelerometers Monitor Equipment Health; Reactors Save Energy, Costs for Hydrogen Production; Cameras Monitor Spacecraft Integrity to Prevent Failures; Testing Devices Garner Data on Insulation Performance; Smart Sensors Gather Information 
for Machine Diagnostics; Oxygen Sensors Monitor Bioreactors and Ensure Health and Safety; Vision Algorithms Catch Defects in Screen Displays; and Deformable Mirrors Capture Exoplanet Data, Reflect Lasers.

  9. Chronic Heart Failure Follow-up Management Based on Agent Technology.

    PubMed

    Mohammadzadeh, Niloofar; Safdari, Reza

    2015-10-01

    Monitoring heart failure patients through continuous assessment of signs and symptoms with information technology tools leads to a large reduction in re-hospitalization. Agent technology is one of the strongest artificial intelligence areas; therefore, it can be expected to facilitate, accelerate, and improve health services, especially in home care and telemedicine. The aim of this article is to provide an agent-based model for chronic heart failure (CHF) follow-up management. This research was performed in 2013-2014 to determine appropriate scenarios and the data required to monitor and follow up CHF patients, and then an agent-based model was designed. Agents in the proposed model perform the following tasks: medical data access, communication with other agents of the framework, and intelligent data analysis, including medical data processing, reasoning, negotiation for decision-making, and learning capabilities. The proposed multi-agent system has the ability to learn and thus improve itself. Implementing this model at a broader level, with more and varied monitoring intervals, could achieve better results. The proposed multi-agent system is no substitute for cardiologists, but it could assist them in decision-making.
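
    The intelligent-analysis task one such agent performs can be pictured as a rule check over daily home readings. A minimal sketch, assuming hypothetical field names and alert thresholds (the paper does not specify its rules):

```python
# Minimal sketch of the intelligent-analysis task of one monitoring agent.
# Field names and alert thresholds are illustrative assumptions, not the
# model's actual rules.

HF_ALERT_RULES = {
    "weight_gain_kg_3d": 2.0,   # rapid weight gain suggests fluid retention
    "resting_hr_bpm": 100,      # tachycardia at rest
    "spo2_pct_min": 92,         # low oxygen saturation
}

def assess_vitals(vitals):
    """Return alert messages for a single daily home reading."""
    alerts = []
    if vitals.get("weight_gain_kg_3d", 0) >= HF_ALERT_RULES["weight_gain_kg_3d"]:
        alerts.append("weight gain >= 2 kg over 3 days")
    if vitals.get("resting_hr_bpm", 0) > HF_ALERT_RULES["resting_hr_bpm"]:
        alerts.append("resting heart rate above 100 bpm")
    if vitals.get("spo2_pct", 100) < HF_ALERT_RULES["spo2_pct_min"]:
        alerts.append("SpO2 below 92%")
    return alerts

print(assess_vitals({"weight_gain_kg_3d": 2.4, "resting_hr_bpm": 88, "spo2_pct": 95}))
```

    In a multi-agent framework, a rule check like this would be only one agent's role; the others would fetch the data, negotiate over conflicting alerts, and adapt thresholds over time.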

  10. Safety and feasibility of pulmonary artery pressure-guided heart failure therapy: rationale and design of the prospective CardioMEMS Monitoring Study for Heart Failure (MEMS-HF).

    PubMed

    Angermann, Christiane E; Assmus, Birgit; Anker, Stefan D; Brachmann, Johannes; Ertl, Georg; Köhler, Friedrich; Rosenkranz, Stephan; Tschöpe, Carsten; Adamson, Philip B; Böhm, Michael

    2018-05-19

    Wireless monitoring of pulmonary artery (PA) pressures with the CardioMEMS HF™ system is indicated in patients with New York Heart Association (NYHA) class III heart failure (HF). Randomized and observational trials have shown a reduction in HF-related hospitalizations and improved quality of life in patients using this device in the United States. MEMS-HF is a prospective, non-randomized, open-label, multicenter study to characterize safety and feasibility of using remote PA pressure monitoring in a real-world setting in Germany, The Netherlands and Ireland. After informed consent, adult patients with NYHA class III HF and a recent HF-related hospitalization are evaluated for suitability for permanent implantation of a CardioMEMS™ sensor. Participation in MEMS-HF is open to qualifying subjects regardless of left ventricular ejection fraction (LVEF). Patients with reduced ejection fraction must be on stable guideline-directed pharmacotherapy as tolerated. The study will enroll 230 patients in approximately 35 centers. Expected duration is 36 months (24-month enrolment plus ≥ 12-month follow-up). Primary endpoints are freedom from device/system-related complications and freedom from pressure sensor failure at 12 months post-implant. Secondary endpoints include the annualized rate of HF-related hospitalization at 12 months versus the rate over the 12 months preceding implant, and health-related quality of life. Endpoints will be evaluated using data obtained after each subject's 12-month visit. The MEMS-HF study will provide robust evidence on the clinical safety and feasibility of implementing haemodynamic monitoring as a novel disease management tool in routine out-patient care in selected European healthcare systems. ClinicalTrials.gov; NCT02693691.

  11. Distributed multi-level supervision to effectively monitor the operations of a fleet of autonomous vehicles in agricultural tasks.

    PubMed

    Conesa-Muñoz, Jesús; Gonzalez-de-Soto, Mariano; Gonzalez-de-Santos, Pablo; Ribeiro, Angela

    2015-03-05

    This paper describes a supervisor system for monitoring the operation of automated agricultural vehicles. The system analyses all of the information provided by the sensors and subsystems on the vehicles in real time and notifies the user when a failure or potentially dangerous situation is detected. In some situations, it is even able to execute a neutralising protocol to remedy the failure. The system is based on a distributed and multi-level architecture that divides the supervision into different subsystems, allowing for better management of the detection and repair of failures. The proposed supervision system was developed to perform well in several scenarios, such as spraying canopy treatments against insects and diseases and selective weed treatments, by either spraying herbicide or burning pests with a mechanical-thermal actuator. Results are presented for selective weed treatment by the spraying of herbicide. The system successfully supervised the task; it detected failures such as service disruptions, incorrect working speeds, incorrect implement states, and potential collisions. Moreover, the system was able to prevent collisions between vehicles by taking action to avoid intersecting trajectories. The results show that the proposed system is a highly useful tool for managing fleets of autonomous vehicles. In particular, it can be used to manage agricultural vehicles during treatment operations.

  12. Distributed Multi-Level Supervision to Effectively Monitor the Operations of a Fleet of Autonomous Vehicles in Agricultural Tasks

    PubMed Central

    Conesa-Muñoz, Jesús; Gonzalez-de-Soto, Mariano; Gonzalez-de-Santos, Pablo; Ribeiro, Angela

    2015-01-01

    This paper describes a supervisor system for monitoring the operation of automated agricultural vehicles. The system analyses all of the information provided by the sensors and subsystems on the vehicles in real time and notifies the user when a failure or potentially dangerous situation is detected. In some situations, it is even able to execute a neutralising protocol to remedy the failure. The system is based on a distributed and multi-level architecture that divides the supervision into different subsystems, allowing for better management of the detection and repair of failures. The proposed supervision system was developed to perform well in several scenarios, such as spraying canopy treatments against insects and diseases and selective weed treatments, by either spraying herbicide or burning pests with a mechanical-thermal actuator. Results are presented for selective weed treatment by the spraying of herbicide. The system successfully supervised the task; it detected failures such as service disruptions, incorrect working speeds, incorrect implement states, and potential collisions. Moreover, the system was able to prevent collisions between vehicles by taking action to avoid intersecting trajectories. The results show that the proposed system is a highly useful tool for managing fleets of autonomous vehicles. In particular, it can be used to manage agricultural vehicles during treatment operations. PMID:25751079

  13. The TrialsTracker: Automated ongoing monitoring of failure to share clinical trial results by all major companies and research institutions.

    PubMed

    Powell-Smith, Anna; Goldacre, Ben

    2016-01-01

    Background: Failure to publish trial results is a prevalent ethical breach with a negative impact on patient care. Audit is an important tool for quality improvement. We set out to produce an online resource that automatically identifies the sponsors with the best and worst record for failing to share trial results. Methods: A tool was produced that identifies all completed trials from clinicaltrials.gov, searches for results in the clinicaltrials.gov registry and on PubMed, and presents summary statistics for each sponsor online. Results: The TrialsTracker tool is now available. Results are consistent with previous publication bias cohort studies using manual searches. The prevalence of missing studies is presented for various classes of sponsor. All code and data are shared. Discussion: We have designed, built, and launched an easily accessible online service, the TrialsTracker, that identifies sponsors who have failed in their duty to make results of clinical trials available, and which can be maintained at low cost. Sponsors who wish to improve their performance metrics in this tool can do so by publishing the results of their trials.
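
    The per-sponsor summary step can be sketched in a few lines, assuming trial records have already been fetched from the registry; the field names (`sponsor`, `has_results`) are illustrative, not the tool's actual schema:

```python
# Hedged sketch of the TrialsTracker-style tally: given completed-trial
# records, count results missing per sponsor. Field names are illustrative.
from collections import defaultdict

def missing_results_by_sponsor(trials):
    stats = defaultdict(lambda: {"completed": 0, "missing": 0})
    for t in trials:
        s = stats[t["sponsor"]]
        s["completed"] += 1
        if not t.get("has_results", False):
            s["missing"] += 1
    return dict(stats)

trials = [
    {"sponsor": "Acme Pharma", "has_results": True},
    {"sponsor": "Acme Pharma", "has_results": False},
    {"sponsor": "Uni Hospital", "has_results": False},
]
print(missing_results_by_sponsor(trials))
```

    The real tool's harder work is upstream of this step: reliably matching registry entries to publications on PubMed before a trial can be counted as "missing".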

  14. URBAN-NET: A Network-based Infrastructure Monitoring and Analysis System for Emergency Management and Public Safety

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sangkeun; Chen, Liangzhe; Duan, Sisi

    Critical Infrastructures (CIs) such as energy, water, and transportation are complex networks that are crucial for sustaining day-to-day commodity flows vital to national security, economic stability, and public safety. The nature of these CIs is such that failures caused by an extreme weather event or a man-made incident can trigger widespread cascading failures, sending ripple effects at regional or even national scales. To minimize such effects, it is critical for emergency responders to identify existing or potential vulnerabilities within CIs during such stressor events in a systematic and quantifiable manner and take appropriate mitigating actions. We present here a novel critical infrastructure monitoring and analysis system named URBAN-NET. The system includes a software stack and tools for monitoring CIs, pre-processing data, interconnecting multiple CI datasets as a heterogeneous network, identifying vulnerabilities through graph-based topological analysis, and predicting consequences based on what-if simulations along with visualization. As a proof-of-concept, we present several case studies to show the capabilities of our system. We also discuss remaining challenges and future work.
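
    The what-if cascading-failure idea can be illustrated with a toy dependency graph; the node names below are invented for illustration, whereas the real system operates on interconnected CI datasets:

```python
# Hedged sketch of cascading-failure propagation over a dependency graph:
# if a component fails, every component that depends on it fails too.
# The toy grid below is illustrative, not URBAN-NET data.

def cascade(dependents, initial_failure):
    """dependents[u] = components that fail when u fails; return full failed set."""
    failed, frontier = set(), [initial_failure]
    while frontier:
        u = frontier.pop()
        if u in failed:
            continue
        failed.add(u)
        frontier.extend(dependents.get(u, []))
    return failed

grid = {
    "substation_A": ["water_pump_1", "traffic_signals"],
    "water_pump_1": ["hospital_water"],
}
print(sorted(cascade(grid, "substation_A")))
```

    Running such a simulation from every node, and ranking nodes by the size of the failure set they trigger, is one simple way to surface the topological vulnerabilities the abstract describes.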

  15. Angular approach combined to mechanical model for tool breakage detection by eddy current sensors

    NASA Astrophysics Data System (ADS)

    Ritou, M.; Garnier, S.; Furet, B.; Hascoet, J. Y.

    2014-02-01

    The paper presents a new complete approach for Tool Condition Monitoring (TCM) in milling. The aim is the early detection of small damages so that catastrophic tool failures are prevented. A versatile in-process monitoring system is introduced for reliability concerns. The tool condition is determined by estimates of the radial eccentricity of the teeth. An adequate criterion is proposed, combining a mechanical model of milling and an angular approach. Then, a new solution is proposed for estimating the cutting force using eddy current sensors implemented close to the spindle nose. Signals are analysed in the angular domain, notably by the synchronous averaging technique. Phase shifts induced by changes of machining direction are compensated. Results are compared with cutting forces measured with a dynamometer table. The proposed method is implemented in an industrial case of a pocket machining operation. One of the cutting edges was slightly damaged during the machining, as shown by a direct measurement of the tool. A control chart is established with the estimates of cutter eccentricity obtained during machining from the eddy current sensor signals. The efficiency and reliability of the method are demonstrated by the successful detection of the damage.
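
    The synchronous (angle-domain) averaging step can be sketched as follows, assuming one sample per encoder increment so that samples at the same angular position line up across revolutions; the sample counts are illustrative:

```python
# Hedged sketch of synchronous averaging: samples at the same angular
# position are averaged across revolutions, reinforcing components locked
# to rotation and suppressing everything else. Parameters are illustrative.

def synchronous_average(signal, samples_per_rev):
    n_revs = len(signal) // samples_per_rev
    avg = [0.0] * samples_per_rev
    for r in range(n_revs):
        for i in range(samples_per_rev):
            avg[i] += signal[r * samples_per_rev + i]
    return [a / n_revs for a in avg]

# Two revolutions of a 4-samples/rev signal, with noise that differs
# between revolutions:
sig = [1.0, 0.0, -1.0, 0.0,  1.2, 0.2, -0.8, -0.2]
print(synchronous_average(sig, 4))
```

    Averaging over revolutions is what makes the cutter-eccentricity estimate robust: tooth-passing forces repeat every revolution and survive the average, while asynchronous noise cancels out.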

  16. Sources and characteristics of acoustic emissions from mechanically stressed geologic granular media — A review

    NASA Astrophysics Data System (ADS)

    Michlmayr, Gernot; Cohen, Denis; Or, Dani

    2012-05-01

    The formation of cracks and emergence of shearing planes and other modes of rapid macroscopic failure in geologic granular media involve numerous grain-scale mechanical interactions often generating high-frequency (kHz) elastic waves, referred to as acoustic emissions (AE). These acoustic signals have been used primarily for monitoring and characterizing fatigue and progressive failure in engineered systems, with only a few applications concerning geologic granular media reported in the literature. Similar to the monitoring of seismic events preceding an earthquake, AE may offer a means for non-invasive, in-situ assessment of mechanical precursors associated with imminent landslides or other types of rapid mass movements (debris flows, rock falls, snow avalanches, glacier stick-slip events). Despite diverse applications and potential usefulness, a systematic description of the AE method and its relevance to mechanical processes in Earth sciences is lacking. This review is aimed at providing a sound foundation for linking observed AE with various micro-mechanical failure events in geologic granular materials, not only for monitoring of triggering events preceding mass mobilization, but also as a non-invasive tool in its own right for probing the rich spectrum of mechanical processes at scales ranging from a single grain to a hillslope. We first review studies reporting the use of AE for monitoring of failure in various geologic materials, and describe AE-generating source mechanisms in mechanically stressed geologic media (e.g., frictional sliding, micro-cracking, particle collisions, rupture of water bridges, etc.) including AE statistical features, such as frequency content and occurrence probabilities. We summarize available AE sensors and measurement principles. 
The high sampling rates of advanced AE systems enable detection of numerous discrete failure events within a volume and thus provide access to statistical descriptions of progressive collapse of systems with many interacting mechanical elements such as the fiber bundle model (FBM). We highlight intrinsic links between AE characteristics and established statistical models often used in structural engineering and material sciences, and outline potential applications for failure prediction and early-warning using the AE method in combination with the FBM. The biggest challenge for field application of the AE method is strong signal attenuation. We provide an outlook for overcoming such limitations considering the emergence of a class of fiber-optic based distributed AE sensors and the deployment of acoustic waveguides as part of monitoring networks.
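
    The fiber bundle model (FBM) mentioned above can be sketched in its simplest equal-load-sharing form: each breaking fiber raises the load on the survivors, producing avalanche "bursts" analogous to AE events. Thresholds and load below are illustrative:

```python
# Equal-load-sharing fiber bundle model (FBM) sketch: the total load is
# shared equally by surviving fibers; each failure raises the per-fiber
# load, which may break further fibers in an avalanche.

def fbm_avalanche_sizes(thresholds, total_load):
    """Apply total_load to the bundle; return the sizes of breaking avalanches."""
    alive = list(thresholds)
    sizes = []
    while alive:
        per_fiber = total_load / len(alive)
        survivors = [t for t in alive if t >= per_fiber]
        n_broken = len(alive) - len(survivors)
        if n_broken == 0:
            break                      # bundle holds the load
        sizes.append(n_broken)
        alive = survivors
    return sizes

# One weak fiber breaks first; the load transfer then breaks two more,
# and the single strongest fiber finally holds the whole load:
print(fbm_avalanche_sizes([0.3, 0.5, 0.6, 2.5], 2.0))  # -> [1, 2]
```

    The statistics of these avalanche sizes (and their acceleration as global failure approaches) is what links FBM-type models to the AE burst sequences used for early warning.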

  17. Non-destructive measurement and role of surface residual stress monitoring in residual life assessment of a steam turbine blading material

    NASA Astrophysics Data System (ADS)

    Prabhu-Gaunkar, Gajanana; Rawat, M. S.; Prasad, C. R.

    2014-02-01

    Steam turbine blades in power generation equipment are made from martensitic stainless steels having high strength, good toughness and corrosion resistance. However, these steels are susceptible to pitting, which can promote early failures of blades in the turbines, particularly in the low pressure dry/wet areas, by stress corrosion and corrosion fatigue. The presence of tensile residual stresses is known to accelerate failures, whereas compressive stresses can help in delaying failures. Shot peening has been employed as an effective tool to induce compressive residual stresses, which offset a part of the local surface tensile stresses in the surface layers of components. Maintaining local stresses at stress raisers, such as pits formed during service, below a threshold level can help in preventing the initiation of microcracks and failures. The thickness of the layer in compression will, however, depend on the shot peening parameters and should extend below the bottom of corrosion pits. The magnitude of surface compressive stress drops progressively during service exposure, and over time the effectiveness of shot peening is lost, making the material susceptible to micro-crack initiation once again. Measurement and monitoring of surface residual stress therefore become important for assessing the residual life of components in service. This paper shows the applicability of surface stress monitoring to life assessment of steam turbine blade material, based on data generated in the laboratory on residual surface stress measurements in relation to fatigue exposure. An empirical model is proposed to calculate the remaining life of shot peened steam turbine blades in service.

  18. High-sensitivity c-reactive protein (hs-CRP) value with 90 days mortality in patients with heart failure

    NASA Astrophysics Data System (ADS)

    Nursyamsiah; Hasan, R.

    2018-03-01

    Hospitalization in patients with chronic heart failure is associated with high rates of mortality and morbidity, both during treatment and post-treatment. Despite the various therapies available today, mortality and re-hospitalization rates within 60 to 90 days post-hospitalization are still quite high. This period is known as the vulnerable phase. Prognostic evaluation tools in patients with heart failure are expected to help identify high-risk individuals, so that more rigorous monitoring and interventions can be undertaken. To determine whether hs-CRP has an impact on mortality within 90 days in hospitalized patients with heart failure, an observational cohort study was conducted in 39 patients with heart failure who were hospitalized due to worsening chronic heart failure. Patients were followed for up to 90 days after initial evaluation, with the primary endpoint being death. Among patients with hs-CRP values >4.25 mg/L, 70% died within 90 days, whereas among those with values <4.25 mg/L only 6.9% died (p < 0.001). In conclusion, hs-CRP values differed between patients with heart failure who died and those who survived within 90 days.

  19. Chronic Heart Failure Follow-up Management Based on Agent Technology

    PubMed Central

    Safdari, Reza

    2015-01-01

    Objectives Monitoring heart failure patients through continuous assessment of signs and symptoms with information technology tools leads to a large reduction in re-hospitalization. Agent technology is one of the strongest artificial intelligence areas; therefore, it can be expected to facilitate, accelerate, and improve health services, especially in home care and telemedicine. The aim of this article is to provide an agent-based model for chronic heart failure (CHF) follow-up management. Methods This research was performed in 2013-2014 to determine appropriate scenarios and the data required to monitor and follow up CHF patients, and then an agent-based model was designed. Results Agents in the proposed model perform the following tasks: medical data access, communication with other agents of the framework, and intelligent data analysis, including medical data processing, reasoning, negotiation for decision-making, and learning capabilities. Conclusions The proposed multi-agent system has the ability to learn and thus improve itself. Implementing this model at a broader level, with more and varied monitoring intervals, could achieve better results. The proposed multi-agent system is no substitute for cardiologists, but it could assist them in decision-making. PMID:26618038

  20. Could Acoustic Emission Testing Show a Pipe Failure in Advance?

    NASA Astrophysics Data System (ADS)

    Soares, S. D.; Teixeira, J. C. G.

    2004-02-01

    During the last 20 years PETROBRAS has been attempting to use Acoustic Emission (AE) as an inspection tool. In this period the AE concept has changed from a revolutionary method to a way of finding areas for complete inspection. PETROBRAS has many pressure vessels inspected both by AE and by other NDT techniques, to establish the relationship between them. On the other hand, the PETROBRAS R&D Center has conducted destructive hydrostatic tests on pipeline samples with artificial defects made by milling. Those tests were monitored by acoustic emission and manual ultrasonic testing until complete failure of the pipe sample. This article shows the results obtained and a brief proposal of analysis criteria for this test environment.

  1. Monitoring damage growth in titanium matrix composites using acoustic emission

    NASA Technical Reports Server (NTRS)

    Bakuckas, J. G., Jr.; Prosser, W. H.; Johnson, W. S.

    1993-01-01

    The application of the acoustic emission (AE) technique to locate and monitor damage growth in titanium matrix composites (TMC) was investigated. Damage growth was studied using several optical techniques including a long focal length, high magnification microscope system with image acquisition capabilities. Fracture surface examinations were conducted using a scanning electron microscope (SEM). The AE technique was used to locate damage based on the arrival times of AE events between two sensors. Using model specimens exhibiting a dominant failure mechanism, correlations were established between the observed damage growth mechanisms and the AE results in terms of event amplitude. These correlations were used to monitor the damage growth process in laminates exhibiting multiple modes of damage. Results revealed that the AE technique is a viable and effective tool to monitor damage growth in TMC.
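
    In one dimension, the two-sensor arrival-time location technique reduces to a simple formula: with the sensors a distance L apart and wave speed v, an arrival-time difference Δt = t1 − t2 places the source at x = (L + vΔt)/2 from sensor 1. A sketch with illustrative values (the actual wave speed in TMC laminates depends on layup and wave mode):

```python
# Hedged 1-D AE source location sketch. Sensors sit at 0 and sensor_spacing;
# dt is the arrival-time difference t_sensor1 - t_sensor2. Wave speed and
# geometry are illustrative values, not the specimens' measured properties.

def locate_ae_source(dt, wave_speed, sensor_spacing):
    """Return the source position measured from sensor 1 (metres)."""
    return 0.5 * (sensor_spacing + wave_speed * dt)

# Event reaches sensor 1 ten microseconds after sensor 2, with an assumed
# wave speed of 5000 m/s and 0.2 m sensor spacing:
print(locate_ae_source(1e-5, 5000.0, 0.2))  # -> 0.125 (i.e. nearer sensor 2)
```

    Events locating outside the gauge length, or with inconsistent amplitudes at the two sensors, are typically rejected as noise; only the remaining events enter the amplitude-based damage correlations.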

  2. Trends in non-stationary signal processing techniques applied to vibration analysis of wind turbine drive train - A contemporary survey

    NASA Astrophysics Data System (ADS)

    Uma Maheswari, R.; Umamaheswari, R.

    2017-02-01

    Condition Monitoring System (CMS) substantiates potential economic benefits and enables prognostic maintenance in wind turbine-generator failure prevention. Vibration monitoring and analysis is a powerful tool in drive train CMS, which enables the early detection of impending failure/damage. In variable speed drives such as wind turbine-generator drive trains, the acquired vibration signal is non-stationary and non-linear. Traditional stationary signal processing techniques are inefficient at diagnosing machine faults under time-varying conditions. The current research trend in CMS for drive trains focuses on developing/improving non-linear, non-stationary feature extraction and fault classification algorithms to improve fault detection/prediction sensitivity and selectivity, thereby reducing misdetection and false alarm rates. In the literature, stationary signal processing algorithms employed in vibration analysis have been reviewed extensively. In this paper, an attempt is made to review the recent research advances in non-linear, non-stationary signal processing algorithms particularly suited for variable speed wind turbines.
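
    Why stationary tools fall short on variable-speed signals can be illustrated with a minimal short-time analysis: a frame-by-frame windowed DFT tracks a frequency that changes over time, where a single global spectrum would smear both components together. All parameters here are illustrative:

```python
# Minimal short-time (windowed DFT) analysis of a variable-frequency signal.
# Frame-by-frame analysis recovers the frequency trajectory that a single
# global spectrum would smear. All parameters are illustrative.
import cmath
import math

def dominant_freq(frame, fs):
    """Frequency (Hz) of the largest-magnitude DFT bin, DC excluded."""
    n = len(frame)
    mags = []
    for k in range(1, n // 2):
        s = sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        mags.append((abs(s), k))
    return max(mags)[1] * fs / n

fs = 1000.0
# 50 Hz tone for half a second, then a 100 Hz tone (a speed-up analogue):
sig = [math.sin(2 * math.pi * 50 * t / fs) for t in range(500)]
sig += [math.sin(2 * math.pi * 100 * t / fs) for t in range(500)]
print([dominant_freq(sig[i:i + 500], fs) for i in range(0, 1000, 500)])
```

    The surveyed literature goes well beyond this fixed-window sketch, to wavelet, empirical-mode, and order-tracking methods whose resolution adapts to the varying shaft speed.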

  3. 40 CFR 49.4166 - Monitoring requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... burning pilot flame, electronically controlled automatic igniters, and monitoring system failures, using a... failure, electronically controlled automatic igniter failure, or improper monitoring equipment operation... and natural gas emissions in the event that natural gas recovered for pipeline injection must be...

  4. 40 CFR 49.4166 - Monitoring requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... burning pilot flame, electronically controlled automatic igniters, and monitoring system failures, using a... failure, electronically controlled automatic igniter failure, or improper monitoring equipment operation... and natural gas emissions in the event that natural gas recovered for pipeline injection must be...

  5. Trastuzumab-induced cardiotoxicity.

    PubMed

    Moss, Lisa Stegall; Starbuck, Mandy Fields; Mayer, Deborah K; Harwood, Elaine Brooks; Glotzer, Jana

    2009-11-01

    To review trastuzumab-related cardiotoxic effects in the breast cancer adjuvant setting, present a system for pretreatment screening for cardiovascular risk factors, describe monitoring recommendations, provide a tool to facilitate adherence to monitoring guidelines, and discuss implications for patient education. Literature regarding cardiotoxicity and trastuzumab in breast cancer. Trastuzumab was approved in 2006 for use in the adjuvant setting. A small percentage of women (approximately 4%) developed heart failure during or after treatment. However, the trials excluded women with cardiac disease. Current screening for cardiotoxicity relies on sequential left ventricular function measurements with either echocardiography or multigated acquisition scanning at baseline and every three months. Treatment modifications are recommended if changes from baseline are detected. Long-term and late effects have yet to be determined. Although a small number of women experienced cardiotoxicity in the adjuvant setting, an increase may be seen because women with preexisting heart disease receive this treatment. Guidelines and tools will be helpful for appropriate and consistent screening of cardiac risk factors and disease prior to initiation of trastuzumab and for monitoring during and after administration. Nurses are instrumental in assessing, monitoring, and treating women receiving trastuzumab. Implementing guidelines to promote adherence to recommended monitoring is important in the early detection of cardiotoxicity in this population. Educating women about their treatment and side effects is an important aspect of care.

  6. Lifecycle Prognostics Architecture for Selected High-Cost Active Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    N. Lybeck; B. Pham; M. Tawfik

    There is an extensive body of knowledge and some commercial products available for calculating prognostics, remaining useful life, and damage index parameters. The application of these technologies within the nuclear power community is still in its infancy. Online monitoring and condition-based maintenance is seeing increasing acceptance and deployment, and these activities provide the technological bases for expanding to add predictive/prognostics capabilities. In looking to deploy prognostics, three key aspects of systems are presented and discussed: (1) component/system/structure selection, (2) prognostic algorithms, and (3) prognostics architectures. Criteria are presented for component selection: feasibility, failure probability, consequences of failure, and benefits of the prognostics and health management (PHM) system. The basis and methods commonly used for prognostics algorithms are reviewed and summarized. Criteria for evaluating PHM architectures are presented: open, modular architecture; platform independence; graphical user interface for system development and/or results viewing; web enabled tools; scalability; and standards compatibility. Thirteen software products were identified and discussed in the context of being potentially useful for deployment in a PHM program applied to systems in a nuclear power plant (NPP). These products were evaluated by using information available from company websites, product brochures, fact sheets, scholarly publications, and direct communication with vendors. The thirteen products were classified into four groups of software: (1) research tools, (2) PHM system development tools, (3) deployable architectures, and (4) peripheral tools. Eight software tools fell into the deployable architectures category. Of those eight, only two employ all six modules of a full PHM system. 
Five systems did not offer prognostic estimates, and one system employed the full health monitoring suite but lacked operations and maintenance support. Each product is briefly described in Appendix A. Selection of the most appropriate software package for a particular application will depend on the chosen component, system, or structure. Ongoing research will determine the most appropriate choices for a successful demonstration of PHM systems in aging NPPs.

  7. CNC machine tool's wear diagnostic and prognostic by using dynamic Bayesian networks

    NASA Astrophysics Data System (ADS)

    Tobon-Mejia, D. A.; Medjaher, K.; Zerhouni, N.

    2012-04-01

    The failure of critical components in industrial systems may have negative consequences on availability, productivity, security and the environment. To avoid such situations, the health condition of the physical system, and particularly of its critical components, can be constantly assessed by using the monitoring data to perform on-line system diagnostics and prognostics. The present paper is a contribution on the assessment of the health condition of a computer numerical control (CNC) machine tool and the estimation of its remaining useful life (RUL). The proposed method relies on two main phases: an off-line phase and an on-line phase. During the first phase, the raw data provided by the sensors are processed to extract reliable features. The latter are used as inputs to learning algorithms in order to generate the models that represent the wear behavior of the cutting tool. Then, in the second, assessment phase, the constructed models are exploited to identify the tool's current health state and predict its RUL and the associated confidence bounds. The proposed method is applied to a benchmark of condition monitoring data gathered during several cuts of a CNC tool. Simulation results are obtained and discussed at the end of the paper.
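
    The diagnostic half of such a model can be sketched as a discrete Bayesian forward filter over hidden wear states; the transition and observation probabilities below are invented for illustration, not learned from the paper's benchmark:

```python
# Hedged sketch of discrete Bayesian filtering over hidden tool-wear states
# (the recursion underlying dynamic Bayesian network diagnostics). All
# probabilities are illustrative, not learned from the paper's data.

STATES = ["good", "worn", "failed"]
# P(next state | current state): wear only progresses.
TRANS = [[0.95, 0.05, 0.00],
         [0.00, 0.90, 0.10],
         [0.00, 0.00, 1.00]]
# P(observed feature level | state):
OBS = {"low": [0.9, 0.4, 0.1], "high": [0.1, 0.6, 0.9]}

def filter_step(belief, observation):
    """One predict-then-update step; returns the new normalized belief."""
    predicted = [sum(belief[i] * TRANS[i][j] for i in range(3)) for j in range(3)]
    unnorm = [predicted[j] * OBS[observation][j] for j in range(3)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

belief = [1.0, 0.0, 0.0]            # start from a fresh tool
for obs in ["low", "high", "high"]:
    belief = filter_step(belief, obs)
print(max(zip(belief, STATES))[1])  # most likely wear state after 3 readings
```

    Prognostics then amounts to propagating the belief forward through the transition model alone (no more observations) until the probability of "failed" crosses a threshold, yielding the RUL estimate and its confidence bounds.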

  8. SDI satellite autonomy using AI and Ada

    NASA Technical Reports Server (NTRS)

    Fiala, Harvey E.

    1990-01-01

    The use of Artificial Intelligence (AI) and the programming language Ada to help a satellite recover from selected failures that could lead to mission failure is described. An unmanned satellite will have a separate AI subsystem running in parallel with the normal satellite subsystems. A satellite monitoring subsystem (SMS), under the control of a blackboard system, will continuously monitor selected satellite subsystems to become alert to any actual or potential problems. In the case of loss of communications with the earth or the home base, the satellite will go into a survival mode to reestablish communications with the earth. The use of an AI subsystem in this manner would have avoided the tragic loss of the two recent Soviet probes that were sent to investigate the planet Mars and its moons. The blackboard system works in conjunction with an SMS and a reconfiguration control subsystem (RCS). It can be shown to be an effective way for one central control subsystem to monitor and coordinate the activities and loads of many interacting subsystems that may or may not contain redundant and/or fault-tolerant elements. The blackboard system will be coded in Ada using tools such as the ABLE development system and the Ada Production system.

  9. Volcanic alert system (VAS) developed during the 2011-2014 El Hierro (Canary Islands) volcanic process

    NASA Astrophysics Data System (ADS)

    García, Alicia; Berrocoso, Manuel; Marrero, José M.; Fernández-Ros, Alberto; Prates, Gonçalo; De la Cruz-Reyna, Servando; Ortiz, Ramón

    2014-06-01

    The 2011 volcanic unrest at El Hierro Island illustrated the need for a Volcanic Alert System (VAS) specifically designed for the management of volcanic crises developing after long repose periods. The VAS comprises the monitoring network, the software tools for analysis of the monitoring parameters, the Volcanic Activity Level (VAL) management, and the assessment of hazard. The VAS presented here focuses on phenomena related to moderate eruptions, and on potentially destructive volcano-tectonic earthquakes and landslides. We introduce a set of new data analysis tools, aimed to detect data trend changes, as well as spurious signals related to instrumental failure. When data-trend changes and/or malfunctions are detected, a watchdog is triggered, issuing a watch-out warning (WOW) to the Monitoring Scientific Team (MST). The changes in data patterns are then translated by the MST into a VAL that is easy to use and understand by scientists, technicians, and decision-makers. Although the VAS was designed specifically for the unrest episodes at El Hierro, the methodologies may prove useful at other volcanic systems.
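
    A data-trend watchdog of the kind described can be sketched as a one-sided CUSUM detector that issues a watch-out warning (WOW) when the cumulative drift from baseline exceeds a threshold; the tuning values are illustrative, not the El Hierro configuration:

```python
# Hedged sketch of a trend-change watchdog: a one-sided CUSUM on a monitored
# parameter triggers a watch-out warning (WOW) when cumulative drift from the
# baseline exceeds a threshold. Tuning values are illustrative.

def cusum_watchdog(samples, baseline, slack=0.5, threshold=4.0):
    """Return the index of the first WOW trigger, or None if never triggered."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - baseline - slack))
        if s > threshold:
            return i
    return None

# Seismicity-rate-like series: stable around 10, then a sustained upward trend.
series = [10, 9, 11, 10, 10, 12, 13, 14, 15, 16]
print(cusum_watchdog(series, baseline=10.0))  # -> 7
```

    The slack term makes the watchdog ignore isolated fluctuations while accumulating evidence of a sustained trend, which matches the stated goal of separating real trend changes from spurious, instrument-related signals before the MST translates them into a VAL.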

  10. Renal function monitoring in heart failure – what is the optimal frequency? A narrative review

    PubMed Central

    Al-Naher, Ahmed; Wright, David; Devonald, Mark Alexander John; Pirmohamed, Munir

    2017-01-01

    The second most common cause of hospitalization due to adverse drug reactions in the UK is renal dysfunction due to diuretics, particularly in patients with heart failure, where diuretic therapy is a mainstay of treatment regimens. Therefore, the optimal frequency for monitoring renal function in these patients is an important consideration for preventing renal failure and hospitalization. This review looks at the current evidence for optimal monitoring practices of renal function in patients with heart failure according to national and international guidelines on the management of heart failure (AHA/NICE/ESC/SIGN). Current guidance on renal function monitoring is in large part based on expert opinion, with a lack of clinical studies that have specifically evaluated the optimal frequency of renal function monitoring in patients with heart failure. Furthermore, there is variability between guidelines, and recommendations are typically nonspecific. Safer prescribing of diuretics in combination with other anti-heart-failure treatments requires better evidence for the frequency of renal function monitoring. We suggest moving away from the current medication-based guidance toward more personalized monitoring. Such flexible clinical guidelines could be implemented using intelligent clinical decision support systems. Personalized renal function monitoring would be more effective in preventing renal decline than reacting to it. PMID:28901643

  11. Association rule mining on grid monitoring data to detect error sources

    NASA Astrophysics Data System (ADS)

    Maier, Gerhild; Schiffers, Michael; Kranzlmueller, Dieter; Gaidioz, Benjamin

    2010-04-01

    Error handling is a crucial task in an infrastructure as complex as a grid. Several monitoring tools are in place, which report failing grid jobs including exit codes. However, the exit codes do not always denote the actual fault that caused the job failure. Human time and knowledge are required to manually trace errors back to the real underlying fault. We perform association rule mining on grid job monitoring data to automatically retrieve knowledge about the grid components' behavior by taking dependencies between grid job characteristics into account. In this way, problematic grid components are located automatically, and this information, expressed as association rules, is visualized in a web interface. This work reduces the time needed for fault recovery and improves the grid's reliability.
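    The idea of mining rules whose consequent is job failure can be shown with a toy example; the job records, attribute names, and confidence threshold below are fabricated and far simpler than real grid monitoring data:

```python
from collections import Counter
from itertools import combinations

jobs = [  # one set of attributes per monitored grid job (fabricated)
    {"site=A", "ce=ce1", "status=failed"},
    {"site=A", "ce=ce1", "status=failed"},
    {"site=A", "ce=ce2", "status=ok"},
    {"site=B", "ce=ce3", "status=ok"},
    {"site=B", "ce=ce3", "status=failed"},
]

def failure_rules(transactions, target="status=failed", min_conf=0.8):
    """Return antecedent -> failure rules whose confidence reaches min_conf."""
    ante, both = Counter(), Counter()
    for t in transactions:
        items = sorted(i for i in t if not i.startswith("status="))
        for r in (1, 2):
            for c in combinations(items, r):
                ante[c] += 1               # how often the antecedent occurs
                if target in t:
                    both[c] += 1           # ... together with a failure
    return {c: both[c] / ante[c] for c in ante if both[c] / ante[c] >= min_conf}

# Computing element ce1 is implicated: every job that ran on it failed.
print(failure_rules(jobs))
```

    Confidence here plays the role of the rule-quality measure; a production miner would also filter by support and visualize the surviving rules, as the abstract describes.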

  12. Failure Mode and Effect Analysis (FMEA) may enhance implementation of clinical practice guidelines: An experience from the Middle East.

    PubMed

    Babiker, Amir; Amer, Yasser S; Osman, Mohamed E; Al-Eyadhy, Ayman; Fatani, Solafa; Mohamed, Sarar; Alnemri, Abdulrahman; Titi, Maher A; Shaikh, Farheen; Alswat, Khalid A; Wahabi, Hayfaa A; Al-Ansary, Lubna A

    2018-02-01

    Implementation of clinical practice guidelines (CPGs) has been shown to reduce variation in practice and improve health care quality and patient safety. There is limited experience of CPG implementation (CPGI) in the Middle East. The CPG program in our institution was launched in 2009. The Quality Management department conducted a Failure Mode and Effect Analysis (FMEA) for further improvement of CPGI. This is a prospective study of a qualitative/quantitative design. Our FMEA included (1) process review: recording the steps and activities of CPGI; (2) hazard analysis: recording activity-related failure modes and their effects, identifying the actions required, assigning severity, occurrence, and detection scores for each failure mode, and calculating the risk priority number (RPN) using an online interactive FMEA tool; (3) planning: RPNs were prioritized, and recommendations and plans for new interventions were identified; and (4) monitoring after reduction or elimination of the failure modes: the calculated RPNs will be compared with those from a subsequent analysis in the post-implementation phase. The data were drawn from feedback by quality team members using the FMEA framework to enhance the implementation of 29 adapted CPGs. The identified common potential failure modes with the highest RPN (≥ 80) concerned awareness/training activities, accessibility of CPGs, a shortage of clinical champions advocating for CPGs, and CPG auditing. Actions included (1) organizing regular awareness activities, (2) making printed and electronic copies of CPGs accessible, (3) encouraging senior practitioners to get involved in CPGI, and (4) enhancing CPG auditing as part of the quality sustainability plan. In our experience, FMEA could be a useful tool to enhance CPGI. It helped us to identify potential barriers and prepare relevant solutions. © 2017 John Wiley & Sons, Ltd.
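    The RPN arithmetic underlying the hazard analysis and planning steps is simple to reproduce. In this sketch the failure modes and their scores are invented, with the same RPN ≥ 80 cut-off the study used:

```python
def prioritise(failure_modes, threshold=80):
    """RPN = severity x occurrence x detection; keep modes whose RPN reaches
    the threshold, highest risk first."""
    scored = [(s * o * d, name) for name, (s, o, d) in failure_modes.items()]
    return [(name, rpn) for rpn, name in sorted(scored, reverse=True)
            if rpn >= threshold]

modes = {  # (severity, occurrence, detection), each scored 1-10 (invented)
    "awareness/training gaps": (8, 5, 3),
    "CPGs hard to access":     (6, 4, 4),
    "auditing not done":       (7, 3, 2),
}
print(prioritise(modes))  # [('awareness/training gaps', 120), ('CPGs hard to access', 96)]
```

    After corrective actions, the same calculation is repeated and the new RPNs compared against these baselines, which is exactly the monitoring step the abstract describes.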

  13. Functional Fault Model Development Process to Support Design Analysis and Operational Assessment

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Maul, William A.; Hemminger, Joseph A.

    2016-01-01

    A functional fault model (FFM) is an abstract representation of the failure space of a given system. As such, it simulates the propagation of failure effects along paths between the origin of the system failure modes and points within the system capable of observing the failure effects. As a result, FFMs may be used to diagnose the presence of failures in the modeled system. FFMs necessarily contain a significant amount of information about the design, operations, and failure modes and effects. One of the important benefits of FFMs is that they may be qualitative, rather than quantitative and, as a result, may be implemented early in the design process when there is more potential to positively impact the system design. FFMs may therefore be developed and matured throughout the monitored system's design process and may subsequently be used to provide real-time diagnostic assessments that support system operations. This paper provides an overview of a generalized NASA process that is being used to develop and apply FFMs. FFM technology has been evolving for more than 25 years. The FFM development process presented in this paper was refined during NASA's Ares I, Space Launch System, and Ground Systems Development and Operations programs (i.e., from about 2007 to the present). Process refinement took place as new modeling, analysis, and verification tools were created to enhance FFM capabilities. In this paper, standard elements of a model development process (i.e., knowledge acquisition, conceptual design, implementation & verification, and application) are described within the context of FFMs. Further, newer tools and analytical capabilities that may benefit the broader systems engineering process are identified and briefly described. The discussion is intended as a high-level guide for future FFM modelers.
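    A qualitative FFM can be thought of as a directed graph along which failure effects propagate from failure-mode origins to observation points. The following sketch is a minimal illustration with invented node names, not NASA's tooling:

```python
def propagate(edges, origin):
    """Return every node reachable from a failure origin, i.e. the set of
    effects the failure mode can produce."""
    reached, frontier = set(), [origin]
    while frontier:
        node = frontier.pop()
        for nxt in edges.get(node, []):
            if nxt not in reached:
                reached.add(nxt)
                frontier.append(nxt)
    return reached

edges = {  # invented propulsion example: cause -> downstream effects
    "valve stuck": ["low fuel flow"],
    "low fuel flow": ["low chamber pressure"],
    "low chamber pressure": ["pressure sensor P1 low"],
}

# Diagnosis runs the model the other way: a low P1 reading is consistent
# with any failure mode whose effect set contains it.
print("pressure sensor P1 low" in propagate(edges, "valve stuck"))  # True
```

    Because only connectivity matters, such a model can be built early in design, before quantitative failure rates exist, which is the benefit the paper emphasizes.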

  14. On possibilities of using global monitoring in effective prevention of tailings storage facilities failures.

    PubMed

    Stefaniak, Katarzyna; Wróżyńska, Magdalena

    2018-02-01

    Protection of common natural goods is one of the greatest challenges man faces every day. Extracting and processing natural resources such as mineral deposits contributes to the transformation of the natural environment. A number of activities designed to maintain this balance are undertaken in accordance with the concept of integrated order. One of them is the use of comprehensive systems of tailings storage facility monitoring. Despite such monitoring, failures still occur. The number of failures illustrates both the scale of the problem and the severity of their consequences. The paper presents the vast possibilities offered by global monitoring for the effective prevention of these failures. Particular attention is drawn to the potential of multidirectional monitoring, including technical and environmental monitoring, using the example of one of the world's biggest hydrotechnical constructions: the Żelazny Most Tailings Storage Facility (TSF) in Poland. Analysis of monitoring data allows preventive action to be taken against construction failures of facility dams, which can have devastating effects on human life and the natural environment.

  15. Development of a Real Time Internal Charging Tool for Geosynchronous Orbit

    NASA Technical Reports Server (NTRS)

    Posey, Nathaniel A.; Minow, Joseph I.

    2013-01-01

    The high-energy electron fluxes encountered by satellites in geosynchronous orbit pose a serious threat to onboard instrumentation and other circuitry. A substantial build-up of charge within a satellite's insulators can lead to electric fields in excess of the breakdown strength, which can result in destructive electrostatic discharges. The software tool we've developed uses data on the plasma environment taken from NOAA's GOES-13 satellite to track the resulting electric field strength within a material of arbitrary depth and conductivity and allows us to monitor the risk of material failure in real time. The tool also utilizes a transport algorithm to simulate the effects of shielding on the dielectric. Data on the plasma environment and the resulting electric fields are logged to allow for playback at a variable frame rate.
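    The charging physics the abstract describes can be approximated by a simple balance, dE/dt = (J − σE)/ε: deposited current charges the dielectric while its conductivity bleeds charge off. The sketch below integrates this with invented material values; it is not the GOES-based tool itself:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def field_history(J, sigma, eps_r, dt, steps):
    """Euler-integrate dE/dt = (J - sigma*E)/eps for a dielectric under a
    constant deposited current density J (A/m^2); returns E (V/m) per step."""
    eps = eps_r * EPS0
    E, out = 0.0, []
    for _ in range(steps):
        E += dt * (J - sigma * E) / eps
        out.append(E)
    return out

# Invented values roughly typical of a spacecraft dielectric:
hist = field_history(J=1e-9, sigma=1e-16, eps_r=2.6, dt=3600.0, steps=500)
# The field climbs toward the steady state J/sigma; the risk question is
# whether it crosses the material's breakdown strength first.
print(hist[-1] / (1e-9 / 1e-16))  # close to 1.0
```

    A real-time tool feeds measured, time-varying electron fluxes (here replaced by a constant J) into such a model and raises an alert when E approaches breakdown.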

  16. The Borg scale as an important tool of self-monitoring and self-regulation of exercise prescription in heart failure patients during hydrotherapy. A randomized blinded controlled trial.

    PubMed

    Carvalho, Vitor Oliveira; Bocchi, Edimar Alcides; Guimarães, Guilherme Veiga

    2009-10-01

    The Borg Scale may be a useful tool for heart failure patients to self-monitor and self-regulate exercise on land or in water (hydrotherapy) by maintaining the heart rate (HR) between the anaerobic threshold and the respiratory compensation point. Patients performed a cardiopulmonary exercise test to determine their anaerobic threshold/respiratory compensation points. The mean HR during the exercise session was expressed as a percentage of the anaerobic threshold HR (%EHR-AT), of the respiratory compensation point HR (%EHR-RCP), of the peak HR from the exercise test (%EHR-Peak), and of the maximum predicted HR (%EHR-Predicted). Next, patients were randomized into the land or water exercise group. One blinded investigator instructed the patients in each group to exercise at a level between "relatively easy and slightly tiring". The mean HR throughout the 30-min exercise session was recorded. The %EHR-AT and %EHR-Predicted did not differ between the land and water exercise groups, but the groups differed in %EHR-RCP (95 ± 7 vs. 86 ± 7, P < 0.001) and in %EHR-Peak (85 ± 8 vs. 78 ± 9, P = 0.007). Exercise guided by the Borg scale maintains the patient's HR between the anaerobic threshold and the respiratory compensation point (i.e., in the exercise training zone).

  17. Sensors and systems for space applications: a methodology for developing fault detection, diagnosis, and recovery

    NASA Astrophysics Data System (ADS)

    Edwards, John L.; Beekman, Randy M.; Buchanan, David B.; Farner, Scott; Gershzohn, Gary R.; Khuzadi, Mbuyi; Mikula, D. F.; Nissen, Gerry; Peck, James; Taylor, Shaun

    2007-04-01

    Human space travel is inherently dangerous, and hazardous conditions will exist. Real-time health monitoring of critical subsystems is essential for providing a safe abort timeline in the event of a catastrophic subsystem failure. In this paper, we discuss a practical and cost-effective process for developing critical subsystem failure detection, diagnosis and response (FDDR). We also present the results of a real-time health monitoring simulation of a propellant ullage pressurization subsystem failure. The health monitoring development process identifies hazards, isolates hazard causes, defines software partitioning requirements, and quantifies software algorithm development. The process provides a means to establish the number and placement of sensors necessary for real-time health monitoring. We discuss how health monitoring software tracks subsystem control commands, interprets off-nominal operational sensor data, predicts failure propagation timelines, corroborates failure predictions, and formats failure protocols.

  18. Use of failure mode effect analysis (FMEA) to improve medication management process.

    PubMed

    Jain, Khushboo

    2017-03-13

    Purpose Medication management is a complex process at high risk of error with life-threatening consequences. The focus should be on devising strategies to avoid errors and make the process self-reliable by ensuring prevention of errors and/or error detection at subsequent stages. The purpose of this paper is to use failure mode effect analysis (FMEA), a systematic proactive tool, to identify the likelihood of and the causes for the process to fail at various steps, and to prioritise them to devise risk reduction strategies to improve patient safety. Design/methodology/approach The study was designed as an observational analytical study of the medication management process in the inpatient area of a multi-speciality hospital in Gurgaon, Haryana, India. A team was formed to study the complex process of medication management in the hospital, using the FMEA tool. Corrective actions were developed based on the prioritised failure modes, then implemented and monitored. Findings As per the team's observations, transcription errors made up the largest share of medication errors (37 per cent), followed by administration errors (29 per cent), indicating the need to identify the causes and effects of their occurrence. In all, 11 failure modes were identified, of which the five most significant were prioritised based on the risk priority number (RPN). The process was repeated after corrective actions were taken, which resulted in about 40 per cent (average) and around 60 per cent reduction in the RPN of prioritised failure modes. Research limitations/implications FMEA is a time-consuming process and requires a multidisciplinary team with a good understanding of the process being analysed. FMEA only helps in identifying the possibilities of a process to fail; it does not eliminate them, and additional efforts are required to develop action plans and implement them. Frank discussion and agreement among the team members are required not only for successfully conducting FMEA but also for implementing the corrective actions. Practical implications FMEA is an effective proactive risk-assessment tool and a continuous process which can be carried out in phases. The corrective actions taken resulted in a reduction in RPN, subject to further evaluation and usage by others depending on the facility type. Originality/value The application of the tool helped the hospital identify failures in the medication management process, thereby prioritising and correcting them, leading to improvement.

  19. Renal function monitoring in heart failure - what is the optimal frequency? A narrative review.

    PubMed

    Al-Naher, Ahmed; Wright, David; Devonald, Mark Alexander John; Pirmohamed, Munir

    2018-01-01

    The second most common cause of hospitalization due to adverse drug reactions in the UK is renal dysfunction due to diuretics, particularly in patients with heart failure, where diuretic therapy is a mainstay of treatment regimens. Therefore, the optimal frequency for monitoring renal function in these patients is an important consideration for preventing renal failure and hospitalization. This review looks at the current evidence for optimal monitoring practices of renal function in patients with heart failure according to national and international guidelines on the management of heart failure (AHA/NICE/ESC/SIGN). Current guidance on renal function monitoring is in large part based on expert opinion, with a lack of clinical studies that have specifically evaluated the optimal frequency of renal function monitoring in patients with heart failure. Furthermore, there is variability between guidelines, and recommendations are typically nonspecific. Safer prescribing of diuretics in combination with other anti-heart-failure treatments requires better evidence for the frequency of renal function monitoring. We suggest moving away from the current medication-based guidance toward more personalized monitoring. Such flexible clinical guidelines could be implemented using intelligent clinical decision support systems. Personalized renal function monitoring would be more effective in preventing renal decline than reacting to it. © 2017 The Authors. British Journal of Clinical Pharmacology published by John Wiley & Sons Ltd on behalf of British Pharmacological Society.

  20. Remote monitoring to Improve long-term prognosis in heart failure patients with implantable cardioverter-defibrillators.

    PubMed

    Ono, Maki; Varma, Niraj

    2017-05-01

    Strong evidence exists for the utility of remote monitoring in cardiac implantable electronic devices for early detection of arrhythmias and evaluation of system performance. The application of remote monitoring to the management of chronic diseases such as heart failure has been an active area of research. Areas covered: This review aims to cover the latest evidence on remote monitoring of implantable cardioverter-defibrillators in relation to heart failure prognosis. The article also provides an update on the current technology behind the method and discusses key factors to be addressed in order to make better use of the approach. PubMed and internet searches were conducted to acquire the most recent data and technology information. Expert commentary: Multiparameter monitoring with automatic transmission is useful for heart failure management. Improved adherence to remote monitoring and an optimal algorithm for transmitted alerts and their management are warranted in the management of heart failure.

  1. Implantable cardiac resynchronization therapy devices to monitor heart failure clinical status.

    PubMed

    Fung, Jeffrey Wing-Hong; Yu, Cheuk-Man

    2007-03-01

    Cardiac resynchronization therapy is a standard therapy for selected patients with heart failure. With advances in technology and storage capacity, the device acts as a convenient platform to provide valuable information about heart failure status in these high-risk patients. Unlike other investigative modalities, which may only allow one-off evaluation, heart failure status can be monitored through device diagnostics, including heart rate variability, activity status, and intrathoracic impedance, on a continuous basis. These parameters not only provide long-term prognostic information but may also be useful for predicting upcoming heart failure exacerbation. Prompt and early intervention may abort decompensation, prevent hospitalization, improve quality of life, and reduce health care costs. Moreover, this information may be applied to titrate the dosage of medication and monitor response to heart failure treatment. This review will focus on the prognostic and predictive value of heart failure status monitoring provided by these devices.

  2. DiAs Web Monitoring: A Real-Time Remote Monitoring System Designed for Artificial Pancreas Outpatient Trials

    PubMed Central

    Place, Jérôme; Robert, Antoine; Ben Brahim, Najib; Keith-Hynes, Patrick; Farret, Anne; Pelletier, Marie-Josée; Buckingham, Bruce; Breton, Marc; Kovatchev, Boris; Renard, Eric

    2013-01-01

    Background Developments in an artificial pancreas (AP) for patients with type 1 diabetes have allowed a move toward performing outpatient clinical trials. A “home-like” environment implies specific protocol and system adaptations, among which the introduction of remote monitoring is particularly meaningful. We present a novel tool allowing multiple patients to be monitored during AP use in home-like settings. Methods We investigated existing systems, performed interviews of experienced clinical teams, listed required features, and drew several mockups of the user interface. The resulting application was tested on the bench before it was used in three outpatient studies representing 3480 h of remote monitoring. Results Our tool, called DiAs Web Monitoring (DWM), is a web-based application that ensures reception, storage, and display of data sent by AP systems. Continuous glucose monitoring (CGM) and insulin delivery data are presented in a colored chart to facilitate reading and interpretation. Several subjects can be monitored simultaneously on the same screen, and alerts are triggered to help detect events such as hypoglycemia or CGM failures. In the third trial, DWM received approximately 460 data items per subject per hour: 77% were log messages and 5% CGM data. More than 97% of transmissions were achieved in less than 5 min. Conclusions Transition from a hospital setting to home-like conditions requires specific AP supervision, to which remote monitoring systems can contribute valuably. DiAs Web Monitoring worked properly when tested in our outpatient studies. It could facilitate subject monitoring and even accelerate medical and technical assessment of the AP. It should now be adapted for long-term studies with an enhanced notification feature. J Diabetes Sci Technol 2013;7(6):1427–1435 PMID:24351169

  3. DiAs web monitoring: a real-time remote monitoring system designed for artificial pancreas outpatient trials.

    PubMed

    Place, Jérôme; Robert, Antoine; Ben Brahim, Najib; Keith-Hynes, Patrick; Farret, Anne; Pelletier, Marie-Josée; Buckingham, Bruce; Breton, Marc; Kovatchev, Boris; Renard, Eric

    2013-11-01

    Developments in an artificial pancreas (AP) for patients with type 1 diabetes have allowed a move toward performing outpatient clinical trials. A "home-like" environment implies specific protocol and system adaptations, among which the introduction of remote monitoring is particularly meaningful. We present a novel tool allowing multiple patients to be monitored during AP use in home-like settings. We investigated existing systems, performed interviews of experienced clinical teams, listed required features, and drew several mockups of the user interface. The resulting application was tested on the bench before it was used in three outpatient studies representing 3480 h of remote monitoring. Our tool, called DiAs Web Monitoring (DWM), is a web-based application that ensures reception, storage, and display of data sent by AP systems. Continuous glucose monitoring (CGM) and insulin delivery data are presented in a colored chart to facilitate reading and interpretation. Several subjects can be monitored simultaneously on the same screen, and alerts are triggered to help detect events such as hypoglycemia or CGM failures. In the third trial, DWM received approximately 460 data items per subject per hour: 77% were log messages and 5% CGM data. More than 97% of transmissions were achieved in less than 5 min. Transition from a hospital setting to home-like conditions requires specific AP supervision, to which remote monitoring systems can contribute valuably. DiAs Web Monitoring worked properly when tested in our outpatient studies. It could facilitate subject monitoring and even accelerate medical and technical assessment of the AP. It should now be adapted for long-term studies with an enhanced notification feature. © 2013 Diabetes Technology Society.

  4. Decoupled tracking and thermal monitoring of non-stationary targets.

    PubMed

    Tan, Kok Kiong; Zhang, Yi; Huang, Sunan; Wong, Yoke San; Lee, Tong Heng

    2009-10-01

    Fault diagnosis and predictive maintenance address pertinent economic issues relating to production systems as an efficient technique can continuously monitor key health parameters and trigger alerts when critical changes in these variables are detected, before they lead to system failures and production shutdowns. In this paper, we present a decoupled tracking and thermal monitoring system which can be used on non-stationary targets of closed systems such as machine tools. There are three main contributions from the paper. First, a vision component is developed to track moving targets under a monitor. Image processing techniques are used to resolve the target location to be tracked. Thus, the system is decoupled and applicable to closed systems without the need for a physical integration. Second, an infrared temperature sensor with a built-in laser for locating the measurement spot is deployed for non-contact temperature measurement of the moving target. Third, a predictive motion control system holds the thermal sensor and follows the moving target efficiently to enable continuous temperature measurement and monitoring.

  5. Disease management: remote monitoring in heart failure patients with implantable defibrillators, resynchronization devices, and haemodynamic monitors.

    PubMed

    Abraham, William T

    2013-06-01

    Heart failure represents a major public health concern, associated with high rates of morbidity and mortality. A particular focus of contemporary heart failure management is reduction of hospital admission and readmission rates. While optimal medical therapy favourably impacts the natural history of the disease, devices such as cardiac resynchronization therapy devices and implantable cardioverter defibrillators have added incremental value in improving heart failure outcomes. These devices also enable remote patient monitoring via device-based diagnostics. Device-based measurement of physiological parameters, such as intrathoracic impedance and heart rate variability, provide a means to assess risk of worsening heart failure and the possibility of future hospitalization. Beyond this capability, implantable haemodynamic monitors have the potential to direct day-to-day management of heart failure patients to significantly reduce hospitalization rates. The use of a pulmonary artery pressure measurement system has been shown to significantly reduce the risk of heart failure hospitalization in a large randomized controlled study, the CardioMEMS Heart Sensor Allows Monitoring of Pressure to Improve Outcomes in NYHA Class III Heart Failure Patients (CHAMPION) trial. Observations from a pilot study also support the potential use of a left atrial pressure monitoring system and physician-directed patient self-management paradigm; these observations are under further investigation in the ongoing LAPTOP-HF trial. All these devices depend upon high-intensity remote monitoring for successful detection of parameter deviations and for directing and following therapy.

  6. Investigation of fatigue crack growth in acrylic bone cement using the acoustic emission technique.

    PubMed

    Roques, A; Browne, M; Thompson, J; Rowland, C; Taylor, A

    2004-02-01

    Failure of the bone cement mantle has been implicated in the loosening process of cemented hip stems. Current methods of investigating degradation of the cement mantle in vitro often require sectioning of the sample to confirm failure paths. The present research investigates acoustic emission (AE) as a passive experimental method for the assessment of bone cement failure. Damage in bone cement was monitored during four-point bending fatigue tests through an analysis of the peak amplitude, duration, rise time (RT) and energy of the events emitted from the damage sections. A difference in AE trends was observed during failure for specimens aged and tested in (i) air and (ii) Ringer's solution at 37 °C. It was noted that the acoustic behaviour varied according to the applied load level; events of higher duration and RT were emitted during fatigue at lower stresses. A good correlation was observed between crack location and the source of acoustic emission, and the acoustic parameters best suited to bone cement failure characterisation were identified. The methodology employed in this study could potentially be used as a pre-clinical assessment tool for the integrity of cemented load-bearing implants.

  7. Subcritical crack growth in SiNx thin-film barriers studied by electro-mechanical two-point bending

    NASA Astrophysics Data System (ADS)

    Guan, Qingling; Laven, Jozua; Bouten, Piet C. P.; de With, Gijsbertus

    2013-06-01

    Mechanical failure resulting from subcritical crack growth in a SiNx inorganic barrier layer applied on a flexible multilayer structure was studied by an electro-mechanical two-point bending method. A 10 nm conducting tin-doped indium oxide layer was sputtered on as an electrical probe to monitor subcritical crack growth in the 150 nm dielectric SiNx layer carried by a polyethylene naphthalate substrate. In the electro-mechanical two-point bending test, dynamic and static loads were applied to investigate crack propagation in the barrier layer. As a consequence of using two loading modes, both the characteristic failure strain and the failure time could be determined. The failure probability distributions of strain and lifetime under each loading condition were described by Weibull statistics. Results from the tests in dynamic and static loading modes were linked by a power-law description to determine critical failure over a range of conditions. The fatigue parameter n from the power law decreases greatly, from 70 to 31, upon correcting for internal strain. The testing method and analysis tools described in the paper can be used to understand the limits of thin-film barriers in terms of their mechanical properties.
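    Weibull analysis of failure strains, as used above, can be reproduced with a short median-rank least-squares fit; the strain values below are synthetic, not the paper's data:

```python
import math

def weibull_fit(values):
    """Least-squares fit of ln(-ln(1-F)) = m*(ln x - ln x0) using median
    ranks; returns the Weibull modulus m and characteristic value x0."""
    xs = sorted(values)
    n = len(xs)
    pts = [(math.log(x), math.log(-math.log(1 - (i + 1 - 0.3) / (n + 0.4))))
           for i, x in enumerate(xs)]
    mx = sum(px for px, _ in pts) / n
    my = sum(py for _, py in pts) / n
    m = (sum((px - mx) * (py - my) for px, py in pts)
         / sum((px - mx) ** 2 for px, _ in pts))
    x0 = math.exp(mx - my / m)       # value at 63.2% failure probability
    return m, x0

strains = [0.010, 0.011, 0.0115, 0.012, 0.0125, 0.013, 0.014]  # synthetic
m, x0 = weibull_fit(strains)
print(round(m, 1), round(x0, 4))    # a narrow scatter gives a high modulus
```

    Fitting the same form to lifetimes under static load gives the second distribution, and a power law then links the two loading modes as the abstract describes.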

  8. Hybrid neural intelligent system to predict business failure in small-to-medium-size enterprises.

    PubMed

    Borrajo, M Lourdes; Baruque, Bruno; Corchado, Emilio; Bajo, Javier; Corchado, Juan M

    2011-08-01

    In recent years there has been a growing need to develop innovative tools that can help small-to-medium-size enterprises predict business failure and financial crisis. In this study we present a novel hybrid intelligent system aimed at monitoring the modus operandi of companies and predicting possible failures. The system is implemented by means of a neural-based multi-agent system that models the different actors of the companies as agents. The core of the multi-agent system is a type of agent that incorporates a case-based reasoning system and automates the business control process and failure prediction. The stages of the case-based reasoning system are implemented by means of web services: the retrieval stage uses an innovative weighted-voting summarization of self-organizing map ensembles, and the reuse stage is implemented by means of a radial basis function neural network. An initial prototype was developed, and the results obtained for small and medium enterprises in a real scenario are presented.

  9. Tools for distributed application management

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith; Cooper, Robert; Wood, Mark; Birman, Kenneth P.

    1990-01-01

    Distributed application management consists of monitoring and controlling an application as it executes in a distributed environment. It encompasses such activities as configuration, initialization, performance monitoring, resource scheduling, and failure response. The Meta system (a collection of tools for constructing distributed application management software) is described. Meta provides the mechanism, while the programmer specifies the policy for application management. The policy is manifested as a control program which is a soft real-time reactive program. The underlying application is instrumented with a variety of built-in and user-defined sensors and actuators. These define the interface between the control program and the application. The control program also has access to a database describing the structure of the application and the characteristics of its environment. Some of the more difficult problems for application management occur when preexisting, nondistributed programs are integrated into a distributed application for which they may not have been intended. Meta allows management functions to be retrofitted to such programs with a minimum of effort.

  11. Automatically Detecting Failures in Natural Language Processing Tools for Online Community Text.

    PubMed

    Park, Albert; Hartzler, Andrea L; Huh, Jina; McDonald, David W; Pratt, Wanda

    2015-08-31

    The prevalence and value of patient-generated health text are increasing, but processing such text remains problematic. Although existing biomedical natural language processing (NLP) tools are appealing, most were developed to process clinician- or researcher-generated text, such as clinical notes or journal articles. In addition to being constructed for different types of text, other challenges of using existing NLP tools include constantly changing technologies, source vocabularies, and characteristics of text. These continuously evolving challenges warrant low-cost, systematic assessment. However, the generally accepted evaluation method in NLP, manual annotation, requires tremendous effort and time. The primary objective of this study is to explore an alternative approach: using low-cost, automated methods to detect failures (eg, incorrect boundaries, missed terms, mismapped concepts) when processing patient-generated text with existing biomedical NLP tools. We first characterize common failures that NLP tools make in processing online community text, and we then demonstrate the feasibility of our automated approach in detecting these failures using one of the most popular biomedical NLP tools, MetaMap. Using 9657 posts from an online cancer community, we explored our automated failure detection approach in two steps: (1) to characterize the failure types, we manually reviewed MetaMap's commonly occurring failures, grouped the inaccurate mappings into failure types, and identified causes of the failures through iterative rounds of manual review using open coding; and (2) to automatically detect these failure types, we explored combinations of existing NLP techniques and dictionary-based matching for each failure cause. Finally, we manually evaluated the automatically detected failures.
From our manual review, we characterized three types of failure: (1) boundary failures, (2) missed term failures, and (3) word ambiguity failures. Within these three failure types, we discovered 12 causes of inaccurate concept mappings. Our automated methods flagged almost half of MetaMap's 383,572 mappings as problematic. Word ambiguity failures were the most common, comprising 82.22% of failures; boundary failures were the second most frequent, at 15.90%; and missed term failures were the least common, at 1.88%. The automated failure detection achieved precision, recall, accuracy, and F1 score of 83.00%, 92.57%, 88.17%, and 87.52%, respectively. We illustrate the challenges of processing patient-generated online health community text, characterize the failures NLP tools make on this text, and demonstrate the feasibility of our low-cost approach to automatically detecting those failures. Our approach shows the potential for scalable, effective solutions to automatically assess constantly evolving NLP tools and source vocabularies for processing patient-generated text.
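
For reference, the reported metrics follow the standard confusion-matrix definitions. The sketch below uses hypothetical counts (not the study's raw data) to show the arithmetic:

```python
# Standard confusion-matrix metrics; the counts passed in at the bottom
# are hypothetical, chosen only to illustrate the formulas.

def prf1(tp: int, fp: int, fn: int, tn: int):
    """Return precision, recall, accuracy, and F1 score as fractions."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1

# Hypothetical counts: detected failures that were real (tp), detections
# that were not failures (fp), failures missed (fn), correct non-detections (tn).
p, r, a, f = prf1(tp=830, fp=170, fn=67, tn=933)
print(f"precision={p:.2%} recall={r:.2%} accuracy={a:.2%} F1={f:.2%}")
```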

  12. Outcomes and complications of intracranial pressure monitoring in acute liver failure: a retrospective cohort study.

    PubMed

    Karvellas, Constantine J; Fix, Oren K; Battenhouse, Holly; Durkalski, Valerie; Sanders, Corron; Lee, William M

    2014-05-01

    To determine if intracranial pressure monitor placement in patients with acute liver failure is associated with significant clinical outcomes. Retrospective multicenter cohort study. Academic liver transplant centers comprising the U.S. Acute Liver Failure Study Group. Adult critically ill patients with acute liver failure presenting with grade III/IV hepatic encephalopathy (n = 629) prospectively enrolled between March 2004 and August 2011. Intracranial pressure monitored (n = 140) versus nonmonitored controls (n = 489). Intracranial pressure monitored patients were younger than controls (35 vs 43 yr, p < 0.001) and more likely to be on renal replacement therapy (52% vs 38%, p = 0.003). Of 87 intracranial pressure monitored patients with detailed information, 44 (51%) had evidence of intracranial hypertension (intracranial pressure > 25 mm Hg) and overall 21-day mortality was higher in patients with intracranial hypertension (43% vs 23%, p = 0.05). During the first 7 days, intracranial pressure monitored patients received more intracranial hypertension-directed therapies (mannitol, 56% vs 21%; hypertonic saline, 14% vs 7%; hypothermia, 24% vs 10%; p < 0.03 for each). Forty-one percent of intracranial pressure monitored patients received liver transplant (vs 18% controls; p < 0.001). Overall 21-day mortality was similar (intracranial pressure monitored 33% vs controls 38%, p = 0.24). Where data were available, hemorrhagic complications were rare in intracranial pressure monitored patients (4 of 56 [7%]; three died). When stratifying by acetaminophen status and adjusting for confounders, intracranial pressure monitor placement did not impact 21-day mortality in acetaminophen patients (p = 0.89). However, intracranial pressure monitor was associated with increased 21-day mortality in nonacetaminophen patients (odds ratio, ~ 3.04; p = 0.014). In intracranial pressure monitored patients with acute liver failure, intracranial hypertension is commonly observed. 
The use of intracranial pressure monitor in acetaminophen acute liver failure did not confer a significant 21-day mortality benefit, whereas in nonacetaminophen acute liver failure, it may be associated with worse outcomes. Hemorrhagic complications from intracranial pressure monitor placement were uncommon and cannot account for mortality trends. Although our results cannot conclusively confirm or refute the utility of intracranial pressure monitoring in patients with acute liver failure, patient selection and ancillary assessments of cerebral blood flow likely have a significant role. Prospective studies would be required to conclusively account for confounding by illness severity and transplant.

  13. Applicability of a Crack-Detection System for Use in Rotor Disk Spin Test Experiments Being Evaluated

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Baaklini, George Y.; Roth, Don J.

    2004-01-01

    Engine makers and government aviation-safety institutions continue to have a strong interest in monitoring the health of rotating components in aircraft engines to improve safety and lower maintenance costs. To prevent catastrophic failure (burst) of the engine, they use nondestructive evaluation (NDE) and major overhauls for periodic inspections to discover any cracks that might have formed. The lowest-cost NDE technique, fluorescent penetrant inspection, can fail to disclose cracks that are tightly closed at rest or that lie below the surface. The eddy-current NDE system is more effective at detecting both crack types, but it requires careful setup and operation, and only a small portion of the disk can practically be inspected. A health-monitoring system therefore requires a sensor system that can sustain normal function in a severe environment, transmit a signal if a detected crack exceeds a predetermined length (but is below the length that would lead to failure), and act neutrally on the overall performance of the engine without interfering with engine maintenance operations. More reliable diagnostic tools and high-level techniques for detecting damage and monitoring the health of rotating components are therefore essential for maintaining engine safety and reliability and for assessing life.

  14. Intelligent data analysis: the best approach for chronic heart failure (CHF) follow up management.

    PubMed

    Mohammadzadeh, Niloofar; Safdari, Reza; Baraani, Alireza; Mohammadzadeh, Farshid

    2014-08-01

    Intelligent data analysis can uncover and present complex relations between symptoms, diseases, and treatment outcomes, and it has a significant role in improving the follow-up management of chronic heart failure (CHF) patients: increasing the speed and accuracy of diagnosis and treatment, reducing costs, and supporting the design and implementation of clinical guidelines. The aim of this article is to describe intelligent data analysis methods for improving patient monitoring in the follow-up and treatment of CHF patients as the best approach for CHF follow-up management. A minimum data set (MDS) for monitoring and follow-up of CHF patients was designed as a checklist with six main parts. All CHF patients discharged from Tehran Heart Center in 2013 were selected. The MDS on CHF patient status was collected over 5 months at three different follow-up times. The gathered data were imported into RapidMiner 5. Modeling was based on decision-tree methods (C4.5, CHAID, and ID3) and the k-nearest neighbors algorithm (K-NN) with k=1; the final analysis was based on a voting method, and the decision trees and K-NN were evaluated by cross-validation. Creating and using standard terminologies, and databases consistent with those terminologies, helps meet the challenges of collecting data from various places and applying them in intelligent data analysis. It should be noted that intelligent analysis of health data can never replace cardiologists; it can only act as a helpful tool to support the cardiologist's decision making.
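
The final voting step can be sketched as follows; the model names, labels, and predictions are illustrative, not the study's data:

```python
from collections import Counter

# Combine per-model predictions for one patient by plurality vote,
# as in the voting method described above. Data is hypothetical.

def majority_vote(predictions):
    """Plurality label across the base models' predictions for one patient."""
    return Counter(predictions.values()).most_common(1)[0][0]

# Hypothetical per-model predictions for a single CHF patient.
patient_votes = {"C4.5": "stable", "CHAID": "stable",
                 "ID3": "worsening", "K-NN": "stable"}
print(majority_vote(patient_votes))  # prints "stable"
```

Note that `Counter.most_common` breaks ties by insertion order, so a real system would need an explicit tie-breaking rule.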

  15. PET-CMR in heart failure - synergistic or redundant imaging?

    PubMed

    Quail, Michael A; Sinusas, Albert J

    2017-07-01

    Imaging in heart failure (HF) provides data for diagnosis, prognosis and disease monitoring. Both MRI and nuclear imaging techniques have been used successfully for this purpose in HF. Positron Emission Tomography-Cardiac Magnetic Resonance (PET-CMR) is an example of a new multimodality diagnostic imaging technique with potential applications in HF. The threshold for adopting a new diagnostic tool into clinical practice must necessarily be high, lest it increase costs without improving care. A new modality must demonstrate clinical superiority, or at least equivalence combined with another important advantage, such as lower cost or improved patient safety. The purpose of this review is to outline the current status of multimodality PET-CMR with regard to HF applications and to determine whether the clinical utility of this new technology justifies its cost.

  16. Random safety auditing, root cause analysis, failure mode and effects analysis.

    PubMed

    Ursprung, Robert; Gray, James

    2010-03-01

    Improving quality and safety in health care is a major concern for health care providers, the general public, and policy makers. Errors and quality issues are leading causes of morbidity and mortality across the health care industry. There is evidence that patients in the neonatal intensive care unit (NICU) are at high risk for serious medical errors. To facilitate compliance with safe practices, many institutions have established quality-assurance monitoring procedures. Three techniques that have been found useful in the health care setting are failure mode and effects analysis, root cause analysis, and random safety auditing. When used together, these techniques are effective tools for system analysis and redesign focused on providing safe delivery of care in the complex NICU system. Copyright 2010 Elsevier Inc. All rights reserved.
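
Of the three techniques, failure mode and effects analysis is the most numerical: each failure mode is conventionally scored 1-10 for severity, occurrence, and detectability, and their product, the risk priority number (RPN), ranks the modes. A minimal sketch, with hypothetical NICU failure modes and scores:

```python
# Conventional FMEA risk prioritization: RPN = severity x occurrence x
# detection. The failure modes and 1-10 scores below are hypothetical.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk priority number for one failure mode."""
    return severity * occurrence * detection

failure_modes = {
    "wrong-dose medication order": (9, 4, 3),
    "mislabeled specimen": (7, 3, 5),
    "alarm fatigue / missed alarm": (8, 5, 6),
}
ranked = sorted(failure_modes.items(), key=lambda kv: rpn(*kv[1]), reverse=True)
for name, scores in ranked:
    print(f"RPN={rpn(*scores):3d}  {name}")
```

The highest-RPN modes are the ones a team would target first for redesign or auditing.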

  17. Remote maintenance monitoring system

    NASA Technical Reports Server (NTRS)

    Simpkins, Lorenz G. (Inventor); Owens, Richard C. (Inventor); Rochette, Donn A. (Inventor)

    1992-01-01

    A remote maintenance monitoring system retrofits to a given hardware device with a sensor implant which gathers and captures failure data from the hardware device, without interfering with its operation. Failure data is continuously obtained from predetermined critical points within the hardware device, and is analyzed with a diagnostic expert system, which isolates failure origin to a particular component within the hardware device. For example, monitoring of a computer-based device may include monitoring of parity error data therefrom, as well as monitoring power supply fluctuations therein, so that parity error and power supply anomaly data may be used to trace the failure origin to a particular plane or power supply within the computer-based device. A plurality of sensor implants may be retrofitted to corresponding plural devices comprising a distributed large-scale system. Transparent interface of the sensors to the devices precludes operative interference with the distributed network. Retrofit capability of the sensors permits monitoring of even older devices having no built-in testing technology. Continuous real-time monitoring of a distributed network of such devices, coupled with diagnostic expert system analysis thereof, permits capture and analysis of even intermittent failures, thereby facilitating maintenance of the monitored large-scale system.

  18. Remote Monitoring to Reduce Heart Failure Readmissions.

    PubMed

    Emani, Sitaramesh

    2017-02-01

    Rehospitalization for heart failure remains a challenge in the treatment of affected patients. The ability to remotely monitor patients for worsening heart failure may provide an avenue through which therapeutic interventions can be made to prevent a rehospitalization. Available data on remote monitoring to reduce heart failure rehospitalizations are reviewed here. Strategies to reduce readmissions include clinical telemonitoring, bioimpedance changes, biomarkers, and remote hemodynamic monitoring. Telemonitoring is readily available but has low sensitivity and adherence, and no data exist to demonstrate its efficacy in reducing admissions. Bioimpedance offers improved sensitivity compared with telemonitoring, but has not demonstrated an ability to reduce hospitalizations and is currently limited to patients who have separate indications for an implantable device. Biomarker levels have shown variable results in reducing hospitalizations and remain without definitive proof supporting their use. Remote hemodynamic monitoring has shown the strongest ability to reduce heart failure readmissions and is currently approved for this purpose; however, it requires an invasive procedure and may not be cost-effective. All currently available remote monitoring strategies to reduce hospitalizations have drawbacks and challenges. Remote hemodynamic monitoring is currently the most efficacious based on available data, but is not without its own imperfections.

  19. Toward a synthetic economic systems modeling tool for sustainable exploitation of ecosystems.

    PubMed

    Richardson, Colin; Courvisanos, Jerry; Crawford, John W

    2011-02-01

    Environmental resources that underpin the basic human needs of water, energy, and food are predicted to become in such short supply by 2050 that global security and the well-being of millions will be under threat. These natural commodities have been allowed to reach crisis levels of supply because of a failure of economic systems to sustain them. This is largely because there have been no means of integrating their exploitation into any economic model that effectively addresses ecological systemic failures in a way that provides an integrated ecological-economic tool that can monitor and evaluate market and policy targets. We review the reasons for this and recent attempts to address the problem while identifying outstanding issues. The key elements of a policy-oriented economic model that integrates ecosystem processes are described and form the basis of a proposed new synthesis approach. The approach is illustrated by an indicative case study that develops a simple model for rainfed and irrigated food production in the Murray-Darling basin of southeastern Australia. © 2011 New York Academy of Sciences.

  20. Model-based diagnostics for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Stephan, Amy; Martin, Eric R.; Lerutte, Marcel G.

    1991-01-01

    An innovative approach to fault management was recently demonstrated for the NASA LeRC Space Station Freedom (SSF) power system testbed. This project capitalized on research in model-based reasoning, which uses knowledge of a system's behavior to monitor its health. The fault management system (FMS) can isolate failures online or in a post-analysis mode, and requires no knowledge of failure symptoms to perform its diagnostics. An in-house tool called MARPLE was used to develop and run the FMS. MARPLE's capabilities are similar to those available from commercial expert system shells, although MARPLE is designed to build model-based as opposed to rule-based systems. These capabilities include functions for capturing behavioral knowledge, a reasoning engine that implements a model-based technique known as constraint suspension, and a tool for quickly generating new user interfaces. The prototype produced by applying MARPLE to SSF not only demonstrated that model-based reasoning is a valuable diagnostic approach, but also suggested several new applications of MARPLE, including an integration and testing aid and a complement to state estimation.

  1. A SOA-Based Solution to Monitor Vaccination Coverage Among HIV-Infected Patients in Liguria.

    PubMed

    Giannini, Barbara; Gazzarata, Roberta; Sticchi, Laura; Giacomini, Mauro

    2016-01-01

    Vaccination in HIV-infected patients constitutes an essential tool in the prevention of the most common infectious diseases. The Ligurian Vaccination in HIV Program is a proposed vaccination schedule specifically dedicated to this risk group. Selective strategies are proposed within this program, employing ICT (information and communication technology) tools to identify this susceptible target group, to monitor immunization coverage over time, and to manage failures and defaulting. The proposal is to connect an immunization registry system to an existing regional platform that allows clinical data re-use among several medical structures, to completely manage the vaccination process. This architecture will adopt a Service Oriented Architecture (SOA) approach and standard HSSP (Health Services Specification Program) interfaces to support interoperability. According to the presented solution, vaccination administration information retrieved from the immunization registry will be structured according to the specifications of the immunization section of the HL7 (Health Level 7) CCD (Continuity of Care Document). Immunization coverage will be evaluated through continuous monitoring of serology and antibody titers gathered from the hospital LIS (Laboratory Information System) and structured into an HL7 Version 3 (v3) Clinical Document Architecture Release 2 (CDA R2) document.

  2. Attitudes among healthcare professionals towards ICT and home follow-up in chronic heart failure care.

    PubMed

    Gund, Anna; Lindecrantz, Kaj; Schaufelberger, Maria; Patel, Harshida; Sjöqvist, Bengt Arne

    2012-11-28

    eHealth applications for out-of-hospital monitoring and treatment follow-up have been advocated for many years as promising tools to improve treatment compliance, promote individualized care, and achieve person-centred care. Despite these benefits and a large number of promising projects, a major breakthrough in everyday care is generally still lacking. Inappropriate organization for eHealth technology, reluctance from users to adopt new working methods, and resistance to information and communication technology (ICT) in general could be reasons for this. Another reason may be attitudes towards the potential of out-of-hospital eHealth applications. It is therefore of interest to study the general opinions among healthcare professionals towards ICT in healthcare, as well as attitudes towards using ICT as a tool for patient monitoring and follow-up at home. One specific area of interest is in-home follow-up of elderly patients with chronic heart failure (CHF). The aim of this paper is to investigate the attitudes towards ICT, as well as distance monitoring and follow-up, among healthcare professionals working with this patient group. This paper covers an attitude survey based on responses from 139 healthcare professionals working with CHF care in Swedish hospital departments, i.e. cardiology and medicine departments. Comparisons between physicians and nurses, and in some cases between genders, were made on attitudes towards ICT tools and follow-up at home. Of the 425 forms sent out, 139 were collected, covering 17 of the 21 counties and regions. Among the respondents, 66% were nurses, 30% physicians and 4% others. As for gender, 90% of the nurses were female and 60% of the physicians were male. The Internet was used daily by 67% of the respondents.
Attitudes towards healthcare ICT were positive: 74% were positive about healthcare ICT today, 96% were positive about its future, and 54% had high confidence in it. The possibilities for distance monitoring/follow-up were rated good by 63% of the respondents, 78% thought it leads to increased patient involvement, and 80% thought it would improve the possibility of delivering better care. Finally, 72% of the respondents said CHF patients would benefit from home monitoring/follow-up to some extent, and 19% to a large extent. However, the best method of follow-up was considered to be home visits by a nurse, or phone contact. The results indicate that a majority of the healthcare professionals in this study are positive towards both current and future use of ICT tools in healthcare and home follow-up. Consequently, other factors must play an important role in the slow penetration of out-of-hospital eHealth applications into daily healthcare practice.

  3. Heart failure patients utilizing an electronic home monitor: What effects does heart failure have on their quality of life?

    NASA Astrophysics Data System (ADS)

    Simuel, Gloria J.

    Heart failure continues to be a major public health problem associated with high mortality and morbidity. It is the leading cause of hospitalization for persons older than 65 years, has a poor prognosis, and is associated with poor quality of life. More than 5.3 million American adults are living with heart failure, and despite maximum medical therapy and frequent hospitalizations to stabilize their condition, one in five heart failure patients dies within the first year of diagnosis. Several disease-management programs have been proposed and tested to improve the quality of heart failure care. Studies have shown that hospital admissions and emergency room visits decrease with increased nursing interventions in the home and community setting. An alternative strategy for promoting self-management of heart failure is the use of electronic home monitoring. The purpose of this study was to examine the effects of heart failure on the quality of life of patients who had been monitored with an electronic home monitor for longer than 2 months. Twenty-one questionnaires were distributed to patients using an electronic home monitor by their home health agency nurses; eleven patients completed the questionnaire. The findings showed some deterioration in quality of life, associated more with the physical than the emotional aspects of life, a pattern probably influenced by the small sample size. There was no significant difference in readmission rates among patients using an electronic home monitor. Further research with a larger population of patients with chronic heart failure and other chronic diseases is needed to provide more data and to address issues such as patient compliance with self-care, the impact of heart failure on quality of life, functional capacity, and heart failure patients' utilization of emergency rooms and hospitals. Telemonitoring holds promise for improving the self-care abilities of persons with HF.

  4. Usability Evaluation of a Web-Based Symptom Monitoring Application for Heart Failure.

    PubMed

    Wakefield, Bonnie; Pham, Kassie; Scherubel, Melody

    2015-07-01

    Symptom recognition and reporting by patients with heart failure are critical to avoid hospitalization. This project evaluated a patient symptom tracking application. Fourteen end users (nine patients, five clinicians) from a Midwestern Veterans Affairs Medical Center evaluated the website using a think aloud protocol. A structured observation protocol was used to assess success or failure for each task. Measures included task time, success, and satisfaction. Patients had a mean age of 70 years; clinicians averaged 42 years in age. Patients took 9.3 min and clinicians took less than 3 min per scenario. Most patients needed some assistance, but few patients were completely unable to complete some tasks. Clinicians demonstrated few problems navigating the site. Patient System Usability Scale item scores ranged from 2.0 to 3.6; clinician item scores ranged from 1.8 to 4.0. Further work is needed to determine whether using the web-based tool improves symptom recognition and reporting. © The Author(s) 2015.

  5. Triplexer Monitor Design for Failure Detection in FTTH System

    NASA Astrophysics Data System (ADS)

    Fu, Minglei; Le, Zichun; Hu, Jinhua; Fei, Xia

    2012-09-01

    The triplexer is one of the key components in FTTH systems, which employ an analog overlay channel for video broadcasting in addition to bidirectional digital transmission. To enhance the survivability of the triplexer as well as the robustness of the FTTH system, a multi-port device named the triplexer monitor was designed and realized, by which failures at triplexer ports can be detected and localized. The triplexer monitor was composed of integrated circuits, and its four input ports were connected to beam splitters with a power division ratio of 95:5. By detecting the sampled optical signal from the beam splitters, the triplexer monitor tracked the status of the four triplexer ports (the 1310 nm, 1490 nm, 1550 nm, and com ports). In this paper, the operation scenario of the triplexer monitor with external optical devices is addressed, and its integrated circuit structure is given. Furthermore, a failure localization algorithm based on a state-transition diagram is proposed. To measure failure detection and localization times for different failed ports, an experimental test-bed was built. Experimental results showed that the triplexer monitor detected a failure at the 1310 nm port in less than 8.20 ms, a failure at the 1490 nm or 1550 nm port in less than 8.20 ms, and a failure at the com port in less than 7.20 ms.
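
A much-simplified version of the port-level failure detection can be sketched as a threshold check on the sampled powers; the actual algorithm in the paper is driven by a state-transition diagram, and the threshold and readings below are hypothetical:

```python
# Naive threshold-based failure localization over the four monitored
# ports. The threshold and power readings are hypothetical.

LOSS_THRESHOLD_DBM = -30.0  # assumed minimum healthy sampled power

def locate_failures(sampled_power_dbm):
    """Return the ports whose sampled power has dropped below threshold."""
    return [port for port, power in sampled_power_dbm.items()
            if power < LOSS_THRESHOLD_DBM]

readings = {"1310nm": -12.4, "1490nm": -45.0, "1550nm": -13.1, "com": -11.8}
print(locate_failures(readings))  # prints ['1490nm']
```

A state-transition formulation additionally distinguishes transient dips from persistent failures by requiring the low-power state to hold across consecutive samples.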

  6. Detection of system failures in multi-axes tasks. [pilot monitored instrument approach

    NASA Technical Reports Server (NTRS)

    Ephrath, A. R.

    1975-01-01

    The effects of the pilot's participation mode in the control task on workload level and failure-detection performance were examined for a low-visibility landing approach. The participation mode was found to have a strong effect on the pilot's workload: the induced workload was lowest when the pilot acted as a monitoring element during a coupled approach and highest when the pilot was an active element in the control loop. The effects of workload and participation mode on failure detection were separated; participation mode had the dominant effect on failure-detection performance, with a failure in a monitored (coupled) axis being detected significantly faster than a comparable failure in a manually controlled axis.

  7. Selective monitoring

    NASA Astrophysics Data System (ADS)

    Homem-de-Mello, Luiz S.

    1992-04-01

    While in NASA's earlier space missions, such as Voyager, the number of sensors was in the hundreds, future platforms such as Space Station Freedom will have tens of thousands of sensors. For these planned missions it will be impossible to use the comprehensive monitoring strategy of the past, in which human operators monitored all sensors all the time. A selective monitoring strategy must be substituted for the current comprehensive one. This selective strategy uses computer tools to preprocess the incoming data and direct the operators' attention to the most critical parts of the physical system at any given time. Several techniques can be used to preprocess the incoming information. This paper presents an approach that uses diagnostic reasoning techniques to preprocess the sensor data and detect which parts of the physical system require more attention because components have failed or are most likely to have failed. Given the sensor readings and a model of the physical system, a number of assertions are generated and expressed as Boolean equations, and the resulting system of Boolean equations is solved symbolically. Using a priori probabilities of component failure and Bayes' rule, revised probabilities of failure can be computed; these indicate which components have failed or are the most likely to have failed. The approach is suitable for systems that are well understood and for which the correctness of the assertions can be guaranteed, provided that assertions can be made from instantaneous measurements and that changes are slow enough to allow the computation.
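
The final probability-revision step can be sketched directly. Assuming a single-fault model, with hypothetical components, priors, and likelihoods (none of these appear in the paper):

```python
# Bayes' rule over candidate component failures: posterior is proportional
# to prior times the likelihood that the observed assertion violation
# would occur under that component's failure. All numbers are hypothetical.

def revise(priors, likelihoods):
    """Posterior P(component failed | violation), single-fault assumption."""
    joint = {c: priors[c] * likelihoods[c] for c in priors}
    total = sum(joint.values())
    return {c: j / total for c, j in joint.items()}

priors = {"pump": 0.01, "valve": 0.02, "sensor": 0.05}       # P(fault)
likelihoods = {"pump": 0.9, "valve": 0.6, "sensor": 0.1}     # P(violation | fault)
posterior = revise(priors, likelihoods)
print(max(posterior, key=posterior.get))  # prints "valve"
```

Note how the high-prior "sensor" is demoted because the observed violation is unlikely under a sensor fault: the revised probabilities, not the priors, drive the operator's attention.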

  8. Effective technologies for noninvasive remote monitoring in heart failure.

    PubMed

    Conway, Aaron; Inglis, Sally C; Clark, Robyn A

    2014-06-01

    Trials of new technologies to remotely monitor for signs and symptoms of worsening heart failure are continually emerging. The extent to which technological differences impact the effectiveness of noninvasive remote monitoring for heart failure management is unknown. This study examined the effect of specific technology used for noninvasive remote monitoring of people with heart failure on all-cause mortality and heart failure-related hospitalizations. A subanalysis of a large systematic review and meta-analysis was conducted. Studies were stratified according to the specific type of technology used, and separate meta-analyses were performed. Four different types of noninvasive remote monitoring technologies were identified, including structured telephone calls, videophone, interactive voice response devices, and telemonitoring. Only structured telephone calls and telemonitoring were effective in reducing the risk of all-cause mortality (relative risk [RR]=0.87; 95% confidence interval [CI], 0.75-1.01; p=0.06; and RR=0.62; 95% CI, 0.50-0.77; p<0.0001, respectively) and heart failure-related hospitalizations (RR=0.77; 95% CI, 0.68-0.87; p<0.001; and RR=0.75; 95% CI, 0.63-0.91; p=0.003, respectively). More research data are required for videophone and interactive voice response technologies. This subanalysis identified that only two of the four specific technologies used for noninvasive remote monitoring in heart failure improved outcomes. When results of studies that involved these disparate technologies were combined in previous meta-analyses, significant improvements in outcomes were identified. As such, this study has highlighted implications for future meta-analyses of randomized controlled trials focused on evaluating the effectiveness of remote monitoring in heart failure.
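
For reference, pooled relative risks and confidence intervals of the kind quoted above are conventionally computed on the log scale. A sketch with hypothetical 2x2 event counts:

```python
import math

# Relative risk with a 95% CI from a 2x2 table, using the standard
# log-scale standard error. The counts below are hypothetical, not the
# meta-analysis data.

def relative_risk(e1, n1, e2, n2):
    """RR of group 1 (e1 events / n1) vs group 2 (e2 events / n2), 95% CI."""
    rr = (e1 / n1) / (e2 / n2)
    se = math.sqrt(1/e1 - 1/n1 + 1/e2 - 1/n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

rr, lo, hi = relative_risk(e1=60, n1=500, e2=80, n2=500)
print(f"RR={rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A CI that crosses 1.0, as in the structured-telephone-call mortality result above (0.75-1.01), indicates the risk reduction is not statistically significant at the 5% level.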

  9. Upgrading the Digital Electronics of the PEP-II Bunch Current Monitors at the Stanford Linear Accelerator Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kline, Josh; /SLAC

    2006-08-28

    The testing of the upgrade prototype for the bunch current monitors (BCMs) in the PEP-II storage rings at the Stanford Linear Accelerator Center (SLAC) is the topic of this paper. Bunch current monitors are used to measure the charge in the electron/positron bunches traveling in particle storage rings. The BCMs in the PEP-II storage rings need to be upgraded because components of the current system have failed and are known to be failure prone with age, and several of the integrated chips are no longer produced, making repairs difficult if not impossible. The main upgrade is replacing twelve old (1995) field programmable gate arrays (FPGAs) with a single Virtex II FPGA. The prototype was tested using computer synthesis tools, a commercial signal generator, and a fast pulse generator.

  10. Data Auditor: Analyzing Data Quality Using Pattern Tableaux

    NASA Astrophysics Data System (ADS)

    Srivastava, Divesh

    Monitoring databases maintain configuration and measurement tables about computer systems, such as networks and computing clusters, and serve important business functions, such as troubleshooting customer problems, analyzing equipment failures, planning system upgrades, etc. These databases are prone to many data quality issues: configuration tables may be incorrect due to data entry errors, while measurement tables may be affected by incorrect, missing, duplicate and delayed polls. We describe Data Auditor, a tool for analyzing data quality and exploring data semantics of monitoring databases. Given a user-supplied constraint, such as a boolean predicate expected to be satisfied by every tuple, a functional dependency, or an inclusion dependency, Data Auditor computes "pattern tableaux", which are concise summaries of subsets of the data that satisfy or fail the constraint. We discuss the architecture of Data Auditor, including the supported types of constraints and the tableau generation mechanism. We also show the utility of our approach on an operational network monitoring database.
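A toy illustration of the tableau idea, not the actual Data Auditor implementation: for each pattern (here, a single grouping attribute's value), report the fraction of tuples satisfying a user-supplied Boolean predicate, keeping only patterns above a confidence threshold. Attribute names and data are invented.

```python
# Illustrative sketch: a one-attribute "pattern tableau" summarizing which
# subsets of a monitoring table satisfy a Boolean-predicate constraint.
def pattern_tableau(rows, group_attr, predicate, min_confidence=0.9):
    """Return {pattern_value: confidence} for patterns whose tuples
    satisfy `predicate` with at least `min_confidence`."""
    counts = {}
    for row in rows:
        key = row[group_attr]
        sat, tot = counts.get(key, (0, 0))
        counts[key] = (sat + int(predicate(row)), tot + 1)
    return {k: s / t for k, (s, t) in counts.items()
            if s / t >= min_confidence}

# Hypothetical polling table: every poll should report latency < 500 ms.
rows = [
    {"region": "east", "latency_ms": 120},
    {"region": "east", "latency_ms": 90},
    {"region": "west", "latency_ms": 700},
    {"region": "west", "latency_ms": 80},
]
tableau = pattern_tableau(rows, "region", lambda r: r["latency_ms"] < 500)
# "east" satisfies the constraint for all its tuples; "west" for only half.
```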

  11. Preventing overtraining in athletes in high-intensity sports and stress/recovery monitoring.

    PubMed

    Kellmann, M

    2010-10-01

    In sports, the importance of optimizing the recovery-stress state is critical. Effective recovery from the intense training loads often faced by elite athletes can determine sporting success or failure. In recent decades, athletes, coaches, and sport scientists have been keen to find creative, new methods for improving the quality and quantity of training for athletes. These efforts have consistently faced barriers, including overtraining, fatigue, injury, illness, and burnout. Physiological and psychological limits dictate a need for research that addresses the avoidance of overtraining, maximizes recovery, and successfully negotiates the fine line between high and excessive training loads. Monitoring instruments like the Recovery-Stress Questionnaire for Athletes can assist with this research by providing a tool to assess athletes' perceived state of recovery. This article will highlight the importance of recovery for elite athletes and provide an overview of monitoring instruments. © 2010 John Wiley & Sons A/S.

  12. Automated terrestrial laser scanning with near-real-time change detection - monitoring of the Séchilienne landslide

    NASA Astrophysics Data System (ADS)

    Kromer, Ryan A.; Abellán, Antonio; Hutchinson, D. Jean; Lato, Matt; Chanut, Marie-Aurelie; Dubois, Laurent; Jaboyedoff, Michel

    2017-05-01

    We present an automated terrestrial laser scanning (ATLS) system with automatic near-real-time change detection processing. The ATLS system was tested on the Séchilienne landslide in France for a 6-week period with data collected at 30 min intervals. The purpose of developing the system was to fill the gap of high-temporal-resolution TLS monitoring studies of earth surface processes and to offer a cost-effective, light, portable alternative to ground-based interferometric synthetic aperture radar (GB-InSAR) deformation monitoring. During the study, we detected the flux of talus, displacement of the landslide and pre-failure deformation of discrete rockfall events. Additionally, we found the ATLS system to be an effective tool in monitoring landslide and rockfall processes despite missing points due to poor atmospheric conditions or rainfall. Furthermore, such a system has the potential to help us better understand a wide variety of slope processes at high levels of temporal detail.
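The core of near-real-time change detection can be sketched in a heavily simplified form (brute-force nearest neighbour on synthetic coordinates; the actual ATLS processing chain, with alignment and filtering, is far more elaborate):

```python
# Minimal change-detection sketch: flag points of a new scan whose nearest
# neighbour in the reference scan is farther than a detection threshold.
def detect_change(reference, new_scan, threshold):
    """reference, new_scan: lists of (x, y, z) tuples, assumed already
    co-registered. Returns the points of new_scan that moved."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    changed = []
    for p in new_scan:
        nearest = min(dist2(p, q) for q in reference)
        if nearest > threshold ** 2:
            changed.append(p)
    return changed

ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
new = [(0.0, 0.0, 0.05), (1.0, 0.0, 0.8)]  # second point displaced ~0.8 m
moved = detect_change(ref, new, threshold=0.1)
```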

  13. Multidrug-resistant tuberculosis treatment failure detection depends on monitoring interval and microbiological method

    PubMed Central

    White, Richard A.; Lu, Chunling; Rodriguez, Carly A.; Bayona, Jaime; Becerra, Mercedes C.; Burgos, Marcos; Centis, Rosella; Cohen, Theodore; Cox, Helen; D'Ambrosio, Lia; Danilovitz, Manfred; Falzon, Dennis; Gelmanova, Irina Y.; Gler, Maria T.; Grinsdale, Jennifer A.; Holtz, Timothy H.; Keshavjee, Salmaan; Leimane, Vaira; Menzies, Dick; Milstein, Meredith B.; Mishustin, Sergey P.; Pagano, Marcello; Quelapio, Maria I.; Shean, Karen; Shin, Sonya S.; Tolman, Arielle W.; van der Walt, Martha L.; Van Deun, Armand; Viiklepp, Piret

    2016-01-01

    Debate persists about monitoring method (culture or smear) and interval (monthly or less frequently) during treatment for multidrug-resistant tuberculosis (MDR-TB). We analysed existing data and estimated the effect of monitoring strategies on timing of failure detection. We identified studies reporting microbiological response to MDR-TB treatment and solicited individual patient data from authors. Frailty survival models were used to estimate pooled relative risk of failure detection in the last 12 months of treatment; hazard of failure using monthly culture was the reference. Data were obtained for 5410 patients across 12 observational studies. During the last 12 months of treatment, failure detection occurred in a median of 3 months by monthly culture; failure detection was delayed by 2, 7, and 9 months relying on bimonthly culture, monthly smear and bimonthly smear, respectively. Risk (95% CI) of failure detection delay resulting from monthly smear relative to culture is 0.38 (0.34–0.42) for all patients and 0.33 (0.25–0.42) for HIV-co-infected patients. Failure detection is delayed by reducing the sensitivity and frequency of the monitoring method. Monthly monitoring of sputum cultures from patients receiving MDR-TB treatment is recommended. Expanded laboratory capacity is needed for high-quality culture, and for smear microscopy and rapid molecular tests. PMID:27587552

  14. Poster - 30: Use of a Hazard-Risk Analysis for development of a new eye immobilization tool for treatment of choroidal melanoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prooijen, Monique van; Breen, Stephen

    Purpose: Our treatment for choroidal melanoma utilizes the GTC frame. The patient looks at a small LED to stabilize target position. The LED is attached to a metal arm attached to the GTC frame. A camera on the arm allows therapists to monitor patient compliance. To move to mask-based immobilization we need a new LED/camera attachment mechanism. We used a Hazard-Risk Analysis (HRA) to guide the design of the new tool. Method: A pre-clinical model was built with input from therapy and machine shop personnel. It consisted of an aluminum frame placed in aluminum guide posts attached to the couch top. Further development was guided by the Department of Defense Standard Practice - System Safety hazard risk analysis technique. Results: An Orfit mask was selected because it allowed access to indexes on the couch top which assist with setup reproducibility. The first HRA table was created considering mechanical failure modes of the device. Discussions with operators and manufacturers identified other failure modes and solutions. HRA directed the design towards a safe clinical device. Conclusion: A new immobilization tool has been designed using hazard-risk analysis which resulted in an easier-to-use and safer tool compared to the initial design. The remaining risks are all low probability events and not dissimilar from those currently faced with the GTC setup. Given the gains in ease of use for therapists and patients as well as the lower costs for the hospital, we will implement this new tool.

  15. Extended Testability Analysis Tool

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin; Maul, William A.; Fulton, Christopher

    2012-01-01

    The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.
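The flavor of these reports can be illustrated with a small, hypothetical fault-test detection matrix (a sketch, not ETA Tool code): detectability is whether any test sees a failure mode, test utilization inverts the matrix, and isolation fails when two modes share the same test signature.

```python
# Toy testability analysis over a failure-mode x test detection matrix.
def analyze(d_matrix):
    """d_matrix: {failure_mode: set of tests that detect it}.
    Returns (detectable modes, test utilization, ambiguous mode pairs)."""
    detectable = {fm for fm, tests in d_matrix.items() if tests}
    utilization = {}
    for fm, tests in d_matrix.items():
        for t in tests:
            utilization.setdefault(t, set()).add(fm)
    signatures = {fm: frozenset(tests) for fm, tests in d_matrix.items()}
    ambiguous = [(a, b) for a in d_matrix for b in d_matrix
                 if a < b and signatures[a] == signatures[b]]
    return detectable, utilization, ambiguous

# Hypothetical modes and tests:
d = {
    "valve_stuck":  {"T1", "T2"},
    "pump_leak":    {"T2"},
    "sensor_drift": {"T2"},  # same signature as pump_leak -> not isolable
}
detectable, util, ambiguous = analyze(d)
```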

  16. An FMS Dynamic Production Scheduling Algorithm Considering Cutting Tool Failure and Cutting Tool Life

    NASA Astrophysics Data System (ADS)

    Setiawan, A.; Wangsaputra, R.; Martawirya, Y. Y.; Halim, A. H.

    2016-02-01

    This paper deals with Flexible Manufacturing System (FMS) production rescheduling due to unavailability of cutting tools, caused either by cutting tool failure or by reaching the tool life limit. The FMS consists of parallel identical machines integrated with an automatic material handling system, and it runs fully automatically. Each machine has the same cutting tool configuration, consisting of different geometrical cutting tool types in each tool magazine. A job usually takes two stages. Each stage has sequential operations allocated to machines considering the cutting tool life. In practice, a cutting tool can fail before its expected life is reached. The objective of this paper is to develop a dynamic scheduling algorithm for when a cutting tool breaks during unmanned operation and rescheduling is needed. The algorithm consists of four steps: the first step generates the initial schedule, the second determines the cutting tool failure time, the third determines the system status at the failure time, and the fourth reschedules the unfinished jobs. The approaches used to solve the problem are complete-reactive scheduling and robust-proactive scheduling. The new schedules yield different starting and completion times for each operation compared with the initial schedule.
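The four steps can be sketched for a single machine with invented operation names (a drastic simplification of the paper's parallel-machine formulation, in the complete-reactive spirit):

```python
# 1) build an initial schedule; 2) take a tool-failure time as given;
# 3) determine which operations completed before it; 4) reschedule the
# unfinished operations after a tool-change delay.
def initial_schedule(ops):
    """ops: [(name, duration)]. Returns [(name, start, end)] back to back."""
    t, sched = 0, []
    for name, dur in ops:
        sched.append((name, t, t + dur))
        t += dur
    return sched

def reschedule(sched, failure_time, tool_change):
    finished = [(n, s, e) for n, s, e in sched if e <= failure_time]
    t = failure_time + tool_change       # machine down for the tool change
    redone = []
    for n, s, e in sched:
        if e > failure_time:             # unfinished: restart from scratch
            redone.append((n, t, t + (e - s)))
            t += e - s
    return finished + redone

plan = initial_schedule([("op1", 3), ("op2", 4), ("op3", 2)])
new_plan = reschedule(plan, failure_time=5, tool_change=1)
```

A robust-proactive variant would instead build slack into the initial schedule so that a single tool failure shifts fewer operations.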

  17. Dissolution Failure of Solid Oral Drug Products in Field Alert Reports.

    PubMed

    Sun, Dajun; Hu, Meng; Browning, Mark; Friedman, Rick L; Jiang, Wenlei; Zhao, Liang; Wen, Hong

    2017-05-01

    From 2005 to 2014, 370 data entries of dissolution failures of solid oral drug products were assessed with respect to the solubility of drug substances, dosage forms [immediate release (IR) vs. modified release (MR)], and manufacturers (brand name vs. generic). The study results show that the solubility of drug substances does not play a significant role in dissolution failures; however, MR drug products fail dissolution tests more frequently than IR drug products. When multiple variables were analyzed simultaneously, poorly water-soluble IR drug products failed the most dissolution tests, followed by poorly soluble MR drug products and very soluble MR drug products. Interestingly, generic drug products fail dissolution tests at an earlier time point during a stability study than brand name drug products. Whether the dissolution failure of these solid oral drug products has any in vivo implication will require further pharmacokinetic, pharmacodynamic, clinical, and drug safety evaluation. The Food and Drug Administration is currently conducting risk-based assessment using in-house dissolution testing, physiologically based pharmacokinetic modeling and simulation, and post-market surveillance tools. In the meantime, this interim report outlines a general scheme of monitoring dissolution failures of solid oral dosage forms as a pharmaceutical quality indicator. Published by Elsevier Inc.

  18. 40 CFR 49.10711 - Federal Implementation Plan for the Astaris-Idaho LLC Facility (formerly owned by FMC Corporation...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... section, consistent with any averaging period specified for averaging the results of monitoring. Fugitive... beneficial. Monitoring malfunction means any sudden, infrequent, not reasonably preventable failure of the monitoring to provide valid data. Monitoring failures that are caused in part by poor maintenance or careless...

  19. 40 CFR 49.10711 - Federal Implementation Plan for the Astaris-Idaho LLC Facility (formerly owned by FMC Corporation...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... section, consistent with any averaging period specified for averaging the results of monitoring. Fugitive... beneficial. Monitoring malfunction means any sudden, infrequent, not reasonably preventable failure of the monitoring to provide valid data. Monitoring failures that are caused in part by poor maintenance or careless...

  20. Immunisation Information Systems – useful tools for monitoring vaccination programmes in EU/EEA countries, 2016

    PubMed Central

    Derrough, Tarik; Olsson, Kate; Gianfredi, Vincenza; Simondon, Francois; Heijbel, Harald; Danielsson, Niklas; Kramarz, Piotr; Pastore-Celentano, Lucia

    2017-01-01

    Immunisation Information Systems (IIS) are computerised, confidential, population-based systems containing individual-level information on vaccines received in a given area. They benefit individuals directly by ensuring vaccination according to the schedule and they provide information to vaccine providers and public health authorities responsible for the delivery and monitoring of an immunisation programme. In 2016, the European Centre for Disease Prevention and Control (ECDC) conducted a survey on the level of implementation and functionalities of IIS in 30 European Union/European Economic Area (EU/EEA) countries. It explored the governance and financial support for the systems, IIS software, system characteristics in terms of population, identification of immunisation recipients, vaccinations received, and integration with other health record systems, the use of the systems for surveillance and programme management as well as the challenges involved with implementation. The survey was answered by 27 of the 30 EU/EEA countries having either a system in production at national or subnational levels (n = 16), or being piloted (n = 5) or with plans for setting up a system in the future (n = 6). The results demonstrate the added value of IIS in a number of areas of vaccination programme monitoring such as monitoring vaccine coverage at local geographical levels, linking individual immunisation history with health outcome data for safety investigations, monitoring vaccine effectiveness and failures and as an educational tool for both vaccine providers and vaccine recipients. IIS represent a significant way forward for life-long vaccination programme monitoring. PMID:28488999

  1. Immunisation Information Systems - useful tools for monitoring vaccination programmes in EU/EEA countries, 2016.

    PubMed

    Derrough, Tarik; Olsson, Kate; Gianfredi, Vincenza; Simondon, Francois; Heijbel, Harald; Danielsson, Niklas; Kramarz, Piotr; Pastore-Celentano, Lucia

    2017-04-27

    Immunisation Information Systems (IIS) are computerised, confidential, population-based systems containing individual-level information on vaccines received in a given area. They benefit individuals directly by ensuring vaccination according to the schedule and they provide information to vaccine providers and public health authorities responsible for the delivery and monitoring of an immunisation programme. In 2016, the European Centre for Disease Prevention and Control (ECDC) conducted a survey on the level of implementation and functionalities of IIS in 30 European Union/European Economic Area (EU/EEA) countries. It explored the governance and financial support for the systems, IIS software, system characteristics in terms of population, identification of immunisation recipients, vaccinations received, and integration with other health record systems, the use of the systems for surveillance and programme management as well as the challenges involved with implementation. The survey was answered by 27 of the 30 EU/EEA countries having either a system in production at national or subnational levels (n = 16), or being piloted (n = 5) or with plans for setting up a system in the future (n = 6). The results demonstrate the added value of IIS in a number of areas of vaccination programme monitoring such as monitoring vaccine coverage at local geographical levels, linking individual immunisation history with health outcome data for safety investigations, monitoring vaccine effectiveness and failures and as an educational tool for both vaccine providers and vaccine recipients. IIS represent a significant way forward for life-long vaccination programme monitoring. This article is copyright of The Authors, 2017.

  2. Volcanic Alert System (VAS) developed during the (2011-2013) El Hierro (Canary Islands) volcanic process

    NASA Astrophysics Data System (ADS)

    Ortiz, Ramon; Berrocoso, Manuel; Marrero, Jose Manuel; Fernandez-Ros, Alberto; Prates, Gonçalo; De la Cruz-Reyna, Servando; Garcia, Alicia

    2014-05-01

    In volcanic areas with long repose periods (such as El Hierro), recently installed monitoring networks offer no instrumental record of past eruptions and no experience in handling a volcanic crisis. Both conditions, uncertainty and inexperience, make the communication of hazard more difficult. In fact, in the initial phases of the unrest at El Hierro, the perception of volcanic risk was somewhat distorted, as even relatively low volcanic hazards caused a high political impact. The need for a Volcanic Alert System then became evident. In general, a Volcanic Alert System comprises the monitoring network, the software tools for the analysis of the observables, the management of the Volcanic Activity Level, and the assessment of the threat. The Volcanic Alert System presented here places special emphasis on phenomena associated with moderate eruptions, as well as on volcano-tectonic earthquakes and landslides, which in some cases, as at El Hierro, may be more destructive than an eruption itself. As part of the Volcanic Alert System, we introduce the Volcanic Activity Level, which continuously applies a routine analysis of monitoring data (particularly seismic and deformation data) to detect data trend changes or monitoring network failures. The data trend changes are quantified according to the Failure Forecast Method (FFM). When data changes and/or malfunctions are detected by an automated watchdog, warnings are automatically issued to the Monitoring Scientific Team. Changes in the data patterns are then translated by the Monitoring Scientific Team into a simple Volcanic Activity Level that is easy to use and understand by the scientists and technicians in charge of the technical management of the unrest. The main features of the Volcanic Activity Level are its objectivity, as it does not depend on expert opinions, which are left to the Scientific Committee, and its capability for early detection of precursors.
As a consequence of the El Hierro experience we consider the objectivity of the Volcanic Activity Level a powerful tool to focus the discussions in a Scientific Committee on the activity forecast and on the expected scenarios, rather than on the multiple explanations of the data fluctuations, which is one of the main sources of conflict in the Scientific Committee discussions. Although the Volcanic Alert System was designed specifically for the unrest episodes at El Hierro, the involved methodologies may be applied to other situations of unrest.
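The inverse-rate form of the FFM mentioned above can be sketched as follows (synthetic data; operational implementations add smoothing and uncertainty estimates): as failure approaches, the rate of a precursor observable accelerates, so its inverse tends linearly toward zero, and extrapolating a fitted line to zero gives the forecast time.

```python
# Inverse-rate Failure Forecast Method sketch on synthetic precursor data.
def ffm_forecast(times, rates):
    """Least-squares fit of 1/rate vs time; returns the time where the
    fitted line crosses zero (the FFM forecast failure time)."""
    inv = [1.0 / r for r in rates]
    n = len(times)
    mt = sum(times) / n
    mi = sum(inv) / n
    slope = (sum((t - mt) * (i - mi) for t, i in zip(times, inv))
             / sum((t - mt) ** 2 for t in times))
    intercept = mi - slope * mt
    return -intercept / slope

# Synthetic accelerating precursor: rate = 1 / (10 - t), so the inverse
# rate reaches zero at t = 10.
times = [0.0, 2.0, 4.0, 6.0, 8.0]
rates = [1.0 / (10.0 - t) for t in times]
t_failure = ffm_forecast(times, rates)
```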

  3. EVALUATION OF SAFETY IN A RADIATION ONCOLOGY SETTING USING FAILURE MODE AND EFFECTS ANALYSIS

    PubMed Central

    Ford, Eric C.; Gaudette, Ray; Myers, Lee; Vanderver, Bruce; Engineer, Lilly; Zellars, Richard; Song, Danny Y.; Wong, John; DeWeese, Theodore L.

    2013-01-01

    Purpose Failure mode and effects analysis (FMEA) is a widely used tool for prospectively evaluating safety and reliability. We report our experiences in applying FMEA in the setting of radiation oncology. Methods and Materials We performed an FMEA analysis for our external beam radiation therapy service, which consisted of the following tasks: (1) create a visual map of the process, (2) identify possible failure modes and assign a risk probability number (RPN) to each failure mode based on tabulated scores for severity, frequency of occurrence, and detectability, each on a scale of 1 to 10, and (3) identify improvements that are both feasible and effective. The RPN scores can span a range of 1 to 1000, with higher scores indicating the relative importance of a given failure mode. Results Our process map consisted of 269 different nodes. We identified 127 possible failure modes with RPN scores ranging from 2 to 160. Fifteen of the top-ranked failure modes were considered for process improvements, representing RPN scores of 75 or more. These specific improvement suggestions were incorporated into our practice with a review and implementation by each department team responsible for the process. Conclusions The FMEA technique provides a systematic method for finding vulnerabilities in a process before they result in an error. The FMEA framework can naturally incorporate further quantification and monitoring. A general-use system for incident and near miss reporting would be useful in this regard. PMID:19409731
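The RPN bookkeeping is simple enough to sketch directly (failure-mode names and scores below are hypothetical, not from the study):

```python
# RPN = severity x occurrence x detectability, each scored 1-10, so RPN
# spans 1-1000; modes at or above a threshold are flagged for improvement.
def rank_failure_modes(modes, threshold=75):
    """modes: {name: (severity, occurrence, detectability)}.
    Returns [(rpn, name)] sorted high-to-low, plus the flagged subset."""
    scored = sorted(
        ((s * o * d, name) for name, (s, o, d) in modes.items()),
        reverse=True)
    flagged = [name for rpn, name in scored if rpn >= threshold]
    return scored, flagged

modes = {
    "wrong_patient_chart": (9, 2, 5),  # RPN 90
    "couch_misposition":   (6, 3, 4),  # RPN 72
    "stale_plan_loaded":   (8, 1, 3),  # RPN 24
}
scored, flagged = rank_failure_modes(modes)
```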

  4. Implantable Hemodynamic Monitoring for Heart Failure Patients.

    PubMed

    Abraham, William T; Perl, Leor

    2017-07-18

    Rates of heart failure hospitalization remain unacceptably high. Such hospitalizations are associated with substantial patient, caregiver, and economic costs. Randomized controlled trials of noninvasive telemedical systems have failed to demonstrate reduced rates of hospitalization. The failure of these technologies may be due to the limitations of the signals measured. Intracardiac and pulmonary artery pressure-guided management has become a focus of hospitalization reduction in heart failure. Early studies using implantable hemodynamic monitors demonstrated the potential of pressure-based heart failure management, whereas subsequent studies confirmed the clinical utility of this approach. One large pivotal trial proved the safety and efficacy of pulmonary artery pressure-guided heart failure management, showing a marked reduction in heart failure hospitalizations in patients randomized to active pressure-guided management. "Next-generation" implantable hemodynamic monitors are in development, and novel approaches for the use of this data promise to expand the use of pressure-guided heart failure management. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  5. [Progressive damage monitoring of corrugated composite skins by the FBG spectral characteristics].

    PubMed

    Zhang, Yong; Wang, Bang-Feng; Lu, Ji-Yun; Gu, Li-Li; Su, Yong-Gang

    2014-03-01

    In the present paper, a method of monitoring progressive damage of composite structures by the non-uniform fiber Bragg grating (FBG) reflection spectrum is proposed. Through finite element analysis of corrugated composite skin specimens, the failure process under tensile load and the corresponding critical failure loads were predicted. The non-uniform reflection spectrum of the FBG sensor could then be reconstructed, and the correspondence between the ply failure sequence of the corrugated composite skin and the FBG sensor reflection spectra was obtained. A monitoring system based on the FBG non-uniform reflection spectrum, which can be used to monitor progressive damage of corrugated composite skins, was built, and the corrugated composite skins were stretched under this monitoring system. The results indicate that real-time spectra acquired by the system show the same trend as the reconstructed reflection spectra. The maximum error between the observed failure and the predicted value is 8.6%, which demonstrates the feasibility of using FBG sensors to monitor progressive damage of corrugated composite skins. In this method, real-time changes in the FBG non-uniform reflection spectrum within the failure range are acquired through monitoring and prediction, the extent of progressive damage and the ply failure sequence are estimated, the structure of the specimen is not destroyed, and the method is simple to operate. The measurement and transmission sections of the system are composed entirely of optical fiber, which provides new ideas and an experimental reference for the dynamic monitoring of smart skins.

  6. Estimation procedures to measure and monitor failure rates of components during thermal-vacuum testing

    NASA Technical Reports Server (NTRS)

    Williams, R. E.; Kruger, R.

    1980-01-01

    Estimation procedures are described for measuring component failure rates, for comparing the failure rates of two different groups of components, and for formulating confidence intervals for testing hypotheses (based on failure rates) that the two groups perform similarly or differently. Appendix A contains an example of an analysis in which these methods are applied to investigate the characteristics of two groups of spacecraft components. The estimation procedures are adaptable to system level testing and to monitoring failure characteristics in orbit.
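As an illustration of the kind of estimate involved (synthetic numbers; the report's exact procedures are not reproduced): a failure rate is failures per unit of accumulated component time, and two groups can be compared through the ratio of their rates, with a log-scale normal-approximation confidence interval.

```python
# Stdlib-only sketch of failure-rate estimation and two-group comparison.
import math

def failure_rate(failures, hours):
    """Point estimate: failures per component-hour."""
    return failures / hours

def rate_ratio_ci(f1, h1, f2, h2, z=1.96):
    """Ratio of group-1 to group-2 failure rates with an approximate 95%
    CI (log-scale normal approximation; requires f1, f2 > 0)."""
    ratio = failure_rate(f1, h1) / failure_rate(f2, h2)
    se = math.sqrt(1.0 / f1 + 1.0 / f2)
    return ratio, ratio * math.exp(-z * se), ratio * math.exp(z * se)

# Hypothetical thermal-vacuum data: 8 failures in 4000 component-hours
# for group 1 vs 3 failures in 6000 component-hours for group 2.
ratio, lo, hi = rate_ratio_ci(8, 4000, 3, 6000)
# A CI excluding 1 suggests the two groups' failure rates differ.
```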

  7. Structural health monitoring of wind turbine blades : SE 265 Final Project.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barkley, W. C.; Jacobs, Laura D.; Rutherford, A. C.

    2006-03-23

    ACME Wind Turbine Corporation has contacted our dynamic analysis firm regarding structural health monitoring of their wind turbine blades. ACME has had several failures in previous years. Examples are shown in Figure 1. These failures have resulted in economic loss for the company due to down time of the turbines (lost revenue) and repair costs. Blade failures can occur in several modes, which may depend on the type of construction and load history. Cracking and delamination are some typical modes of blade failure. ACME warranties its turbines and wishes to decrease the number of blade failures they have to repair and replace. The company wishes to implement a real time structural health monitoring system in order to better understand when blade replacement is necessary. Because of warranty costs incurred to date, ACME is interested in either changing the warranty period for the blades in question or predicting imminent failure before it occurs. ACME's current practice is to increase the number of physical inspections when blades are approaching the end of their fatigue lives. Implementation of an in situ monitoring system would eliminate or greatly reduce the need for such physical inspections. Another benefit of such a monitoring system is that the life of any given component could be extended since real conditions would be monitored. The SHM system designed for ACME must be able to operate while the wind turbine is in service. This means that wireless communication options will likely be implemented. Because blade failures occur due to cyclic stresses in the blade material, the sensing system will focus on monitoring strain at various points.

  8. Investigation of the cross-ship comparison monitoring method of failure detection in the HIMAT RPRV. [digital control techniques using airborne microprocessors

    NASA Technical Reports Server (NTRS)

    Wolf, J. A.

    1978-01-01

    The highly maneuverable aircraft technology (HiMAT) remotely piloted research vehicle (RPRV) uses cross-ship comparison monitoring of the actuator ram positions to detect a failure in the aileron, canard, and elevator control surface servosystems. Some possible sources of nuisance trips for this failure detection technique are analyzed. A FORTRAN model of the simplex servosystems and the failure detection technique were utilized to provide a convenient means of changing parameters and introducing system noise. The sensitivity of the technique to differences between servosystems and operating conditions was determined. The cross-ship comparison monitoring method presently appears to be marginal in its capability to detect an actual failure and to withstand nuisance trips.
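The basic mechanism, with a persistence counter as one common guard against nuisance trips, can be sketched as follows (thresholds and signals are invented for illustration, not taken from the study):

```python
# Cross-channel comparison monitor: declare a failure only when the two
# channels' actuator positions disagree beyond a threshold for several
# consecutive samples, so single-sample noise spikes do not trip it.
def monitor(pos_a, pos_b, threshold, persistence):
    """pos_a, pos_b: sampled positions from the two channels.
    Returns True if a failure is declared."""
    count = 0
    for a, b in zip(pos_a, pos_b):
        count = count + 1 if abs(a - b) > threshold else 0
        if count >= persistence:
            return True
    return False

# A one-sample noise spike does not trip; a sustained mismatch does.
spike = monitor([0, 0, 5, 0, 0], [0, 0, 0, 0, 0],
                threshold=2, persistence=3)
sustained = monitor([5, 5, 5, 5, 5], [0, 0, 0, 0, 0],
                    threshold=2, persistence=3)
```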

  9. In situ monitoring of the integrity of bonded repair patches on aircraft and civil infrastructures

    NASA Astrophysics Data System (ADS)

    Kumar, Amrita; Roach, Dennis; Beard, Shawn; Qing, Xinlin; Hannum, Robert

    2006-03-01

    Monitoring the continued health of aircraft subsystems and identifying problems before they affect airworthiness has been a long-term goal of the aviation industry. Because in-service conditions and failure modes experienced by structures are generally complex and unknown, conservative calendar-based or usage-based scheduled maintenance practices are overly time-consuming, labor-intensive and expensive. Metal structures such as helicopters and other transportation systems are likely to develop fatigue cracks under cyclic loads and corrosive service environments. Early detection of cracks is a key element to prevent catastrophic failure and prolong structural life. Furthermore, as structures age, maintenance service frequency and costs increase while performance and availability decrease. Current non-destructive inspection (NDI) techniques that can potentially be used for this purpose typically involve complex, time-intensive procedures, which are labor-intensive and expensive. Most techniques require access to the damaged area on at least one side, and sometimes on both sides. This can be very difficult for monitoring of certain inaccessible regions. In those cases, inspection may require removal of access panels or even structural disassembly. Once access has been obtained, automated inspection techniques likely will not be practical due to the bulk of the required equipment. Results obtained from these techniques may also be sensitive to the sweep speed, tool orientation, and downward pressure. This can be especially problematic for hand-held inspection tools where none of these parameters is mechanically controlled. As a result, data can vary drastically from one inspection to the next, from one technician to the next, and even from one sweep to the next. Structural health monitoring (SHM) offers the promise of a paradigm shift from schedule-driven maintenance to condition-based maintenance (CBM) of assets. 
Sensors embedded permanently in aircraft safety critical structures that can monitor damage can provide for improved reliability and streamlining of aircraft maintenance. Early detection of damage such as fatigue crack initiation can improve personnel safety and prolong service life. This paper presents the testing of an acousto-ultrasonic piezoelectric sensor based structural health monitoring system for real-time monitoring of fatigue cracks and disbonds in bonded repairs. The system utilizes a network of distributed miniature piezoelectric sensors/actuators embedded on a thin dielectric carrier film, to query, monitor and evaluate the condition of a structure. The sensor layers are extremely flexible and can be integrated with any type of metal or composite structure. Diagnostic signals obtained from a structure during structural monitoring are processed by a portable diagnostic unit. With appropriate diagnostic software, the signals can be analyzed to ascertain the integrity of the structure being monitored. Details on the system, its integration and examples of detection of fatigue crack and disbond growth and quantification for bonded repairs will be presented here.

  10. On-line detection of key radionuclides for fuel-rod failure in a pressurized water reactor.

    PubMed

    Qin, Guoxiu; Chen, Xilin; Guo, Xiaoqing; Ni, Ning

    2016-08-01

For early on-line detection of fuel-rod failure, the key radionuclides used for monitoring must leak easily from failing rods. The yield, half-life, and mass share of fission products that enter the primary coolant also need to be considered in on-line analyses. From all the nuclides that enter the primary coolant during fuel-rod failure, (135)Xe and (88)Kr were ultimately chosen as crucial for on-line monitoring of fuel-rod failure. A monitoring system for fuel-rod failure detection in a pressurized water reactor (PWR), based on a LaBr3(Ce) detector, was assembled and tested. Samples of coolant from the PWR were measured using the system as well as an HPGe γ-ray spectrometer; a comparison showed the method was feasible. Finally, the γ-ray spectra of the primary coolant were measured under normal operation and during fuel-rod failure. The two peaks of (135)Xe (249.8 keV) and (88)Kr (2392.1 keV) were visible, confirming that the method is capable of monitoring fuel-rod failure on-line. Copyright © 2016 Elsevier Ltd. All rights reserved.
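The two-line check described above can be sketched as a window-count test on a γ-ray spectrum. The marker energies come from the abstract; the window width, side-band background estimate, and 3σ criterion are illustrative assumptions, not the paper's method:

```python
import numpy as np

# Marker gamma lines from the abstract (keV); window width is an assumption.
MARKERS = {"Xe-135": 249.8, "Kr-88": 2392.1}
WINDOW_KEV = 5.0

def peak_present(energies, counts, line_kev, window=WINDOW_KEV):
    """Flag a line when net counts in its window exceed background by ~3 sigma."""
    dist = np.abs(energies - line_kev)
    in_win = dist <= window
    side = (dist > window) & (dist <= 3 * window)   # side bands for background
    bkg = counts[side].sum() * in_win.sum() / max(side.sum(), 1)
    net = counts[in_win].sum() - bkg
    return net > 3 * np.sqrt(max(counts[side].sum(), 1))

def rod_failure_alarm(energies, counts):
    """Alarm only when both marker nuclides are visible in the spectrum."""
    return all(peak_present(energies, counts, e) for e in MARKERS.values())
```

With a flat background spectrum no alarm is raised; adding peaks at both marker energies trips it.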

  11. SU-E-T-627: Failure Modes and Effect Analysis for Monthly Quality Assurance of Linear Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, J; Xiao, Y; Wang, J

    2014-06-15

Purpose: To develop and implement a failure mode and effect analysis (FMEA) on routine monthly quality assurance (QA) tests (the physical tests part) of a linear accelerator. Methods: A systematic FMEA was performed for monthly QA procedures. A detailed process tree of monthly QA was created and potential failure modes were defined. Each failure mode may have many influencing factors. For each factor, a risk probability number (RPN) was calculated as the product of the probability of occurrence (O), the severity of effect (S), and the detectability of the failure (D). RPN scores range from 1 to 1000, with higher scores indicating stronger correlation to a given influencing factor of a failure mode. Five medical physicists in our institution discussed and defined the O, S, and D values. Results: 15 possible failure modes were identified; the RPN scores of all their influencing factors ranged from 8 to 150, and a checklist of FMEA in monthly QA was drawn up. The system showed consistent and accurate response to erroneous conditions. Conclusion: Influencing factors with RPN greater than 50 were considered highly correlated factors of a given out-of-tolerance monthly QA test. FMEA is a fast and flexible tool to develop and implement a quality management (QM) framework for monthly QA, which improved the efficiency of our QA team. The FMEA work may incorporate more quantification and monitoring functions in the future.
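The RPN bookkeeping described above is straightforward to sketch. The O × S × D product and the RPN > 50 screening rule follow the abstract; the data structure and the example factor names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class InfluencingFactor:
    """One influencing factor of a monthly-QA failure mode (names illustrative)."""
    failure_mode: str
    factor: str
    occurrence: int     # O, 1-10
    severity: int       # S, 1-10
    detectability: int  # D, 1-10 (10 = hardest to detect)

    @property
    def rpn(self):
        # RPN = O x S x D, giving the 1-1000 range quoted in the abstract.
        return self.occurrence * self.severity * self.detectability

def high_priority(factors, threshold=50):
    """Factors with RPN above the threshold (50 in the study), highest first."""
    return sorted((f for f in factors if f.rpn > threshold),
                  key=lambda f: f.rpn, reverse=True)
```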

  12. Effectiveness of distributed temperature measurements for early detection of piping in river embankments

    NASA Astrophysics Data System (ADS)

    Bersan, Silvia; Koelewijn, André R.; Simonini, Paolo

    2018-02-01

    Internal erosion is the cause of a significant percentage of failure and incidents involving both dams and river embankments in many countries. In the past 20 years the use of fibre-optic Distributed Temperature Sensing (DTS) in dams has proved to be an effective tool for the detection of leakages and internal erosion. This work investigates the effectiveness of DTS for dike monitoring, focusing on the early detection of backward erosion piping, a mechanism that affects the foundation layer of structures resting on permeable, sandy soils. The paper presents data from a piping test performed on a large-scale experimental dike equipped with a DTS system together with a large number of accompanying sensors. The effect of seepage and piping on the temperature field is analysed, eventually identifying the processes that cause the onset of thermal anomalies around piping channels and thus enable their early detection. Making use of dimensional analysis, the factors that influence this thermal response of a dike foundation are identified. Finally some tools are provided that can be helpful for the design of monitoring systems and for the interpretation of temperature data.
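One simple way to automate the early detection of the thermal anomalies the paper describes is a robust spatial-baseline test along the fibre: flag positions whose temperature departs from the local median by several robust standard deviations. The window size and threshold below are illustrative tuning choices, not values from the paper:

```python
import numpy as np

def thermal_anomalies(temps, window=51, k=3.0):
    """Flag fibre positions whose temperature departs from the local spatial
    baseline by more than k robust (MAD-based) standard deviations.
    `temps` is one DTS snapshot along the fibre."""
    n = len(temps)
    half = window // 2
    flags = np.zeros(n, dtype=bool)
    for i in range(n):
        seg = temps[max(0, i - half):min(n, i + half + 1)]
        med = np.median(seg)
        mad = np.median(np.abs(seg - med)) or 1e-9   # avoid divide-by-zero
        flags[i] = abs(temps[i] - med) > k * 1.4826 * mad
    return flags
```

A localized warm spot around a suspected piping channel stands out against the surrounding fibre, while uniform seasonal temperature shifts do not trigger the test.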

  13. Real-time thermal imaging of solid oxide fuel cell cathode activity in working condition.

    PubMed

    Montanini, Roberto; Quattrocchi, Antonino; Piccolo, Sebastiano A; Amato, Alessandra; Trocino, Stefano; Zignani, Sabrina C; Faro, Massimiliano Lo; Squadrito, Gaetano

    2016-09-01

    Electrochemical methods such as voltammetry and electrochemical impedance spectroscopy are effective for quantifying solid oxide fuel cell (SOFC) operational performance, but not for identifying and monitoring the chemical processes that occur on the electrodes' surface, which are thought to be strictly related to the SOFCs' efficiency. Because of their high operating temperature, mechanical failure or cathode delamination is a common shortcoming of SOFCs that severely affects their reliability. Infrared thermography may provide a powerful tool for probing in situ SOFC electrode processes and the materials' structural integrity, but, due to the typical design of pellet-type cells, a complete optical access to the electrode surface is usually prevented. In this paper, a specially designed SOFC is introduced, which allows temperature distribution to be measured over all the cathode area while still preserving the electrochemical performance of the device. Infrared images recorded under different working conditions are then processed by means of a dedicated image processing algorithm for quantitative data analysis. Results reported in the paper highlight the effectiveness of infrared thermal imaging in detecting the onset of cell failure during normal operation and in monitoring cathode activity when the cell is fed with different types of fuels.

  14. U.S. Governmental Information Operations and Strategic Communications: A Discredited Tool or User Failure? Implications for Future Conflict

    DTIC Science & Technology

    2013-12-01

U.S. Governmental Information Operations and Strategic Communications: A Discredited Tool or User Failure? Implications for Future Conflict. Steve Tatham.

  15. Integrated Systems Health Management (ISHM) Toolkit

    NASA Technical Reports Server (NTRS)

    Venkatesh, Meera; Kapadia, Ravi; Walker, Mark; Wilkins, Kim

    2013-01-01

    A framework of software components has been implemented to facilitate the development of ISHM systems according to a methodology based on Reliability Centered Maintenance (RCM). This framework is collectively referred to as the Toolkit and was developed using General Atomics' Health MAP (TM) technology. The toolkit is intended to provide assistance to software developers of mission-critical system health monitoring applications in the specification, implementation, configuration, and deployment of such applications. In addition to software tools designed to facilitate these objectives, the toolkit also provides direction to software developers in accordance with an ISHM specification and development methodology. The development tools are based on an RCM approach for the development of ISHM systems. This approach focuses on defining, detecting, and predicting the likelihood of system functional failures and their undesirable consequences.

  16. CIM at GE's factory of the future

    NASA Astrophysics Data System (ADS)

    Waldman, H.

    Functional features of a highly automated aircraft component batch processing factory are described. The system has processing, working, and methodology components. A rotating parts operation installed 20 yr ago features a high density of numerically controlled machines, and is connected to a hierarchical network of data communications and apparatus for moving the rotating parts and tools of engines. Designs produced at one location in the country are sent by telephone link to other sites for development of manufacturing plans, tooling, numerical control programs, and process instructions for the rotating parts. Direct numerical control is implemented at the work stations, which have instructions stored on tape for back-up in case the host computer goes down. Each machine is automatically monitored at 48 points and notice of failure can originate from any point in the system.

  17. Tool vibration detection with eddy current sensors in machining process and computation of stability lobes using fuzzy classifiers

    NASA Astrophysics Data System (ADS)

    Devillez, Arnaud; Dudzinski, Daniel

    2007-01-01

Today, knowledge of a process is very important for engineers to find the optimal combination of control parameters warranting productivity, quality and functioning without defects and failures. In our laboratory, we carry out research in the field of high speed machining with modelling, simulation and experimental approaches. The aim of our investigation is to develop software allowing optimisation of the cutting conditions, to limit the number of predictive tests, and process monitoring, to prevent any trouble during machining operations. This software is based on models and experimental data sets which constitute the knowledge of the process. In this paper, we deal with the problem of vibrations occurring during a machining operation. These vibrations may cause failures and defects in the process, such as workpiece surface alteration and rapid tool wear. To measure the tool micro-movements on line, we equipped a lathe with specific instrumentation using eddy current sensors. The obtained signals were correlated with surface finish, and a signal processing algorithm was used to determine whether a test was stable or unstable. Then, a fuzzy classification method was proposed to classify the tests in a space defined by the width of cut and the cutting speed. Finally, it was shown that the fuzzy classification takes the measurement uncertainty into account to compute the stability limit, or stability lobes, of the process.

  18. A novel tool for the prediction of tablet sticking during high speed compaction.

    PubMed

    Abdel-Hamid, Sameh; Betz, Gabriele

    2012-01-01

During tableting, capping is a problem of cohesion while sticking is a problem of adhesion. Sticking is a multi-composite problem; its causes are either material or machine related. Nowadays, detecting such a problem is a prerequisite in the early stages of development. The aim of our study was to investigate sticking by radial die-wall pressure monitoring guided by compaction simulation. This was done by using the highly sticking drug mefenamic acid (MA) at different drug loadings with different fillers, compacted at different pressures and speeds. By increasing MA loading, we found that viscoelastic fillers showed high residual radial pressure after compaction while plastic/brittle fillers showed high radial pressure during compaction, p < 0.05. Visually, plastic/brittle fillers showed greater tendencies for adhesion to the punches than viscoelastic fillers, while the latter showed higher tendencies for adhesion to the die wall. This was confirmed by higher values of axial stress transmission for plastic/brittle than viscoelastic fillers (higher punch surface/powder interaction), and higher residual die-wall and ejection forces for viscoelastic than plastic/brittle fillers, p < 0.05. Take-off force was not a useful tool to estimate sticking due to cohesive failure of the compacts. Radial die-wall pressure monitoring is suggested as a robust tool to predict sticking.

  19. Attitudes among healthcare professionals towards ICT and home follow-up in chronic heart failure care

    PubMed Central

    2012-01-01

Background eHealth applications for out-of-hospital monitoring and treatment follow-up have been advocated for many years as a promising tool to improve treatment compliance, promote individualized care and achieve person-centred care. Despite these benefits and a large number of promising projects, a major breakthrough in everyday care is generally still lacking. Inappropriate organization for eHealth technology, reluctance from users in the introduction of new working methods, and resistance to information and communication technology (ICT) in general could be reasons for this. Another reason may be attitudes towards the potential of out-of-hospital eHealth applications. It is therefore of interest to study the general opinions among healthcare professionals towards ICT in healthcare, as well as their attitudes towards using ICT as a tool for patient monitoring and follow-up at home. One specific area of interest is in-home follow-up of elderly patients with chronic heart failure (CHF). The aim of this paper is to investigate the attitudes towards ICT, as well as distance monitoring and follow-up, among healthcare professionals working with this patient group. Method This paper covers an attitude survey study based on responses from 139 healthcare professionals working with CHF care in Swedish hospital departments, i.e. cardiology and medicine departments. Comparisons between physicians and nurses, and in some cases between genders, on attitudes towards ICT tools and follow-up at home were performed. Results Out of the 425 forms sent out, 139 were collected, and 17 out of 21 counties and regions were covered in the replies. Among the respondents, 66% were nurses, 30% physicians and 4% others. As for gender, 90% of the nurses were female and 60% of the physicians were male. The Internet was used daily by 67% of the respondents.
Attitudes towards healthcare ICT were found to be positive: 74% were positive concerning healthcare ICT today, 96% were positive regarding the future of healthcare ICT, and 54% had high confidence in healthcare ICT. Possibilities for distance monitoring/follow-up are good according to 63% of the respondents, 78% thought that this leads to increased patient involvement, and 80% thought it would improve possibilities to deliver better care. Finally, 72% of the respondents said CHF patients would benefit from home monitoring/follow-up to some extent, and 19% to a large extent. However, the best method of follow-up was considered to be home visits by a nurse, or phone contact. Conclusion The results indicate that a majority of the healthcare professionals in this study are positive towards both current and future use of ICT tools in healthcare and home follow-up. Consequently, other factors must play an important role in the slow penetration of out-of-hospital eHealth applications in daily healthcare practice. PMID:23190602

  20. Inverter ratio failure detector

    NASA Technical Reports Server (NTRS)

    Wagner, A. P.; Ebersole, T. J.; Andrews, R. E. (Inventor)

    1974-01-01

A failure detector which detects the failure of a dc to ac inverter is disclosed. The inverter under failureless conditions is characterized by a known linear relationship between its input and output voltages and by a known linear relationship between its input and output currents. The detector includes circuitry which is responsive to the inverter's input and output voltages and which provides a failure-indicating signal only when the monitored output voltage is less, by a selected factor, than the expected output voltage for the monitored input voltage, based on the known voltage relationship. Similarly, the detector includes circuitry which is responsive to the input and output currents and provides a failure-indicating signal only when the input current exceeds, by a selected factor, the expected input current for the monitored output current, based on the known current relationship.
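The dual voltage/current check can be sketched as below. The abstract gives only the form of the linear relationships, so the gains and the margin factor here are illustrative placeholders, not values from the patent:

```python
def inverter_failure(v_in, v_out, i_in, i_out,
                     kv=2.0, ki=0.5, factor=0.9):
    """Sketch of the disclosed dual-check logic (kv, ki, factor are
    illustrative). Healthy inverter: v_out ~ kv * v_in and i_in ~ ki * i_out.
    Failure is indicated when the output voltage falls below the expected
    value by the selected factor, or the input current exceeds it by 1/factor."""
    expected_v_out = kv * v_in
    expected_i_in = ki * i_out
    voltage_fail = v_out < factor * expected_v_out
    current_fail = i_in > expected_i_in / factor
    return voltage_fail or current_fail
```

Using both checks catches failures that depress the output voltage as well as those that draw excess input current.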

  1. Automatic Monitoring System Design and Failure Probability Analysis for River Dikes on Steep Channel

    NASA Astrophysics Data System (ADS)

    Chang, Yin-Lung; Lin, Yi-Jun; Tung, Yeou-Koung

    2017-04-01

The purposes of this study include: (1) designing an automatic monitoring system for river dikes; and (2) developing a framework which enables the determination of dike failure probabilities for various failure modes during a rainstorm. The historical dike failure data collected in this study indicate that most dikes in Taiwan collapsed under the 20-year return period discharge, which means the probability of dike failure is much higher than that of overtopping. We installed the dike monitoring system on the Chiu-She Dike, located on the middle reach of the Dajia River, Taiwan. The system includes: (1) vertically distributed pore water pressure sensors in front of and behind the dike; (2) Time Domain Reflectometry (TDR) to measure the displacement of the dike; (3) a wireless floating device to measure the scouring depth at the toe of the dike; and (4) a water level gauge. The monitoring system recorded the variation of pore pressure inside the Chiu-She Dike and the scouring depth during Typhoon Megi. The recorded data showed that the highest groundwater level inside the dike occurred 15 hours after the peak discharge. We developed a framework which accounts for the uncertainties in return period discharge, Manning's n, scouring depth, soil cohesion, and friction angle, and enables the determination of dike failure probabilities for various failure modes such as overtopping, surface erosion, mass failure, toe sliding and overturning. The framework was applied to the Chiu-She, Feng-Chou, and Ke-Chuang Dikes on the Dajia River. The results indicate that toe sliding or overturning has a higher probability than the other failure modes. Furthermore, the overall failure probability (integrating the different failure modes) reaches 50% under the 10-year return period flood, which agrees with the historical failure data for the study reaches.
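A multi-mode framework of this kind lends itself to a Monte Carlo sketch: sample the uncertain inputs, evaluate each failure-mode limit state, and count trials where any mode fails. The structure below is illustrative; the actual limit-state functions and input distributions are the study's own:

```python
import random

def overall_failure_probability(mode_checks, sample_params, n=20000, seed=1):
    """Monte Carlo estimate of the probability that ANY failure mode occurs.

    mode_checks: list of functions; each takes a dict of sampled parameters
        and returns True if that mode's limit state is exceeded (overtopping,
        surface erosion, mass failure, toe sliding, overturning, ...).
    sample_params: function drawing one realization of the uncertain inputs
        (discharge, Manning's n, scour depth, cohesion, friction angle, ...).
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        params = sample_params(rng)         # one joint draw of all inputs
        if any(check(params) for check in mode_checks):
            failures += 1                   # union over the failure modes
    return failures / n
```

With a toy sampler drawing scour depth uniformly on [0, 2] m and a single mode failing above 1.5 m, the estimate converges to about 0.25.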

  2. Multimodal brain monitoring in fulminant hepatic failure

    PubMed Central

    Paschoal Jr, Fernando Mendes; Nogueira, Ricardo Carvalho; Ronconi, Karla De Almeida Lins; de Lima Oliveira, Marcelo; Teixeira, Manoel Jacobsen; Bor-Seng-Shu, Edson

    2016-01-01

    Acute liver failure, also known as fulminant hepatic failure (FHF), embraces a spectrum of clinical entities characterized by acute liver injury, severe hepatocellular dysfunction, and hepatic encephalopathy. Cerebral edema and intracranial hypertension are common causes of mortality in patients with FHF. The management of patients who present acute liver failure starts with determining the cause and an initial evaluation of prognosis. Regardless of whether or not patients are listed for liver transplantation, they should still be monitored for recovery, death, or transplantation. In the past, neuromonitoring was restricted to serial clinical neurologic examination and, in some cases, intracranial pressure monitoring. Over the years, this monitoring has proven insufficient, as brain abnormalities were detected at late and irreversible stages. The need for real-time monitoring of brain functions to favor prompt treatment and avert irreversible brain injuries led to the concepts of multimodal monitoring and neurophysiological decision support. New monitoring techniques, such as brain tissue oxygen tension, continuous electroencephalogram, transcranial Doppler, and cerebral microdialysis, have been developed. These techniques enable early diagnosis of brain hemodynamic, electrical, and biochemical changes, allow brain anatomical and physiological monitoring-guided therapy, and have improved patient survival rates. The purpose of this review is to discuss the multimodality methods available for monitoring patients with FHF in the neurocritical care setting. PMID:27574545

  3. General Purpose Data-Driven Monitoring for Space Operations

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; Martin, Rodney A.; Schwabacher, Mark A.; Spirkovska, Liljana; Taylor, William McCaa; Castle, Joseph P.; Mackey, Ryan M.

    2009-01-01

    As modern space propulsion and exploration systems improve in capability and efficiency, their designs are becoming increasingly sophisticated and complex. Determining the health state of these systems, using traditional parameter limit checking, model-based, or rule-based methods, is becoming more difficult as the number of sensors and component interactions grow. Data-driven monitoring techniques have been developed to address these issues by analyzing system operations data to automatically characterize normal system behavior. System health can be monitored by comparing real-time operating data with these nominal characterizations, providing detection of anomalous data signatures indicative of system faults or failures. The Inductive Monitoring System (IMS) is a data-driven system health monitoring software tool that has been successfully applied to several aerospace applications. IMS uses a data mining technique called clustering to analyze archived system data and characterize normal interactions between parameters. The scope of IMS based data-driven monitoring applications continues to expand with current development activities. Successful IMS deployment in the International Space Station (ISS) flight control room to monitor ISS attitude control systems has led to applications in other ISS flight control disciplines, such as thermal control. It has also generated interest in data-driven monitoring capability for Constellation, NASA's program to replace the Space Shuttle with new launch vehicles and spacecraft capable of returning astronauts to the moon, and then on to Mars. Several projects are currently underway to evaluate and mature the IMS technology and complementary tools for use in the Constellation program. These include an experiment on board the Air Force TacSat-3 satellite, and ground systems monitoring for NASA's Ares I-X and Ares I launch vehicles. 
The TacSat-3 Vehicle System Management (TVSM) project is a software experiment to integrate fault and anomaly detection algorithms and diagnosis tools with executive and adaptive planning functions contained in the flight software on-board the Air Force Research Laboratory TacSat-3 satellite. The TVSM software package will be uploaded after launch to monitor spacecraft subsystems such as power and guidance, navigation, and control (GN&C). It will analyze data in real-time to demonstrate detection of faults and unusual conditions, diagnose problems, and react to threats to spacecraft health and mission goals. The experiment will demonstrate the feasibility and effectiveness of integrated system health management (ISHM) technologies with both ground and on-board experiments.
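The clustering idea behind IMS-style monitoring can be sketched minimally: characterize nominal parameter interactions by clustering archived data, then flag live vectors that fall outside the envelope of nominal distances. IMS itself is NASA software; scikit-learn's KMeans and the distance-envelope rule here are stand-ins, not the actual IMS algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

class ClusterMonitor:
    """Illustrative data-driven monitor in the spirit of IMS: cluster nominal
    operating data, then score live vectors by distance to the nearest
    cluster centre and alarm beyond the nominal envelope."""

    def __init__(self, n_clusters=8, margin=1.1):
        self.km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        self.margin = margin

    def fit(self, nominal):
        self.km.fit(nominal)
        nearest = self.km.transform(nominal).min(axis=1)   # distance to centres
        self.threshold = nearest.max() * self.margin       # nominal envelope
        return self

    def anomalous(self, x):
        nearest = self.km.transform(np.atleast_2d(x)).min(axis=1)
        return nearest > self.threshold
```

Training data drawn from normal operations define the envelope, so no labeled failure examples are needed, which is the main appeal of the data-driven approach described above.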

  4. Analytical Study of different types Of network failure detection and possible remedies

    NASA Astrophysics Data System (ADS)

    Saxena, Shikha; Chandra, Somnath

    2012-07-01

Faults in a network have various causes, such as the failure of one or more routers, fiber cuts, failure of physical elements at the optical layer, or extraneous causes like power outages. These faults are usually detected as failures of a set of dependent logical entities and the links affected by the failed components. A reliable control plane plays a crucial role in creating high-level services in the next-generation transport network based on the Generalized Multiprotocol Label Switching (GMPLS) or Automatically Switched Optical Networks (ASON) model. In this paper, approaches to control-plane survivability, based on protection and restoration mechanisms, are examined. Procedures for control plane state recovery are also discussed, including link and node failure recovery and the concepts of monitoring paths (MPs) and monitoring cycles (MCs) for unique localization of shared risk link group (SRLG) failures in all-optical networks. An SRLG failure is a failure of multiple links due to a failure of a common resource. MCs (MPs) start and end at the same (distinct) monitoring location(s). They are constructed such that any SRLG failure results in the failure of a unique combination of paths and cycles. We derive necessary and sufficient conditions on the set of MCs and MPs needed for localizing an SRLG failure in an arbitrary graph. Procedures for protection and restoration after an SRLG failure using a backup re-provisioning algorithm are also discussed.
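The unique-signature condition can be made concrete: if every SRLG breaks a distinct, non-empty set of monitoring paths/cycles, then the observed set of failed monitors identifies the failed SRLG by lookup. The mapping below is toy data for illustration, not from the paper:

```python
def build_localizer(srlg_traversals):
    """srlg_traversals: dict mapping each SRLG to the set of monitoring
    paths/cycles (MP/MC IDs) that traverse one of its links and therefore
    fail when it does. Raises ValueError when localization is impossible
    (an empty or duplicated signature); otherwise returns a localize()
    function mapping an observed alarm set to the failed SRLG."""
    sig_to_srlg = {}
    for srlg, monitors in srlg_traversals.items():
        sig = frozenset(monitors)
        if not sig:
            raise ValueError(f"{srlg} triggers no monitor: undetectable")
        if sig in sig_to_srlg:
            raise ValueError(f"{srlg} and {sig_to_srlg[sig]} are ambiguous")
        sig_to_srlg[sig] = srlg

    def localize(failed_monitors):
        return sig_to_srlg.get(frozenset(failed_monitors))

    return localize
```

Distinctness of the signatures is exactly the kind of necessary condition the paper formalizes for an arbitrary graph.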

  5. Application of a neural network as a potential aid in predicting NTF pump failure

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Hill, Jeffrey S.; Lamarsh, William J., II; Bradley, David E.

    1993-01-01

    The National Transonic Facility has three centrifugal multi-stage pumps to supply liquid nitrogen to the wind tunnel. Pump reliability is critical to facility operation and test capability. A highly desirable goal is to be able to detect a pump rotating component problem as early as possible during normal operation and avoid serious damage to other pump components. If a problem is detected before serious damage occurs, the repair cost and downtime could be reduced significantly. A neural network-based tool was developed for monitoring pump performance and aiding in predicting pump failure. Once trained, neural networks can rapidly process many combinations of input values other than those used for training to approximate previously unknown output values. This neural network was applied to establish relationships among the critical frequencies and aid in predicting failures. Training pairs were developed from frequency scans from typical tunnel operations. After training, various combinations of critical pump frequencies were propagated through the neural network. The approximated output was used to create a contour plot depicting the relationships of the input frequencies to the output pump frequency.
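The workflow described above — train on frequency-scan pairs, then propagate unseen frequency combinations through the network to build a contour plot — can be sketched with a generic regressor. The synthetic response surface and the network shape below are assumptions for illustration, not the NTF data or the paper's architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical stand-in for the paper's network: learn a mapping from two
# critical input frequencies to a pump response frequency using synthetic
# "frequency scan" training pairs.
rng = np.random.default_rng(0)
X = rng.uniform(10.0, 100.0, size=(200, 2))    # two critical frequencies (Hz)
y = 0.4 * X[:, 0] + 0.1 * X[:, 1]              # assumed response surface
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                   random_state=0).fit(X, y)

# Propagate combinations of frequencies the network was not trained on.
grid = np.array([[f1, f2] for f1 in range(20, 100, 10)
                           for f2 in range(20, 100, 10)], dtype=float)
surface = net.predict(grid)                    # values for the contour plot
```

The `surface` values over the frequency grid are what would be fed to a contour-plotting routine to visualize the input/output frequency relationships.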

  6. Ambulatory Holter monitoring in asymptomatic patients with DDD pacemakers - do we need ACC/AHA Guidelines revision?

    PubMed

    Chudzik, Michal; Klimczak, Artur; Wranicz, Jerzy Krzysztof

    2013-10-31

We sought to determine the usefulness of ambulatory 24-hour Holter monitoring in detecting asymptomatic pacemaker (PM) malfunction episodes in patients with dual-chamber pacemakers whose pacing and sensing parameters were proper, as seen in routine post-implantation follow-ups. Ambulatory 24-hour Holter recordings (HM) were performed in 100 patients with DDD pacemakers 1 day after implantation. Only asymptomatic patients with proper pacing and sensing parameters (assessed by PM telemetry on the first day post-implantation) were enrolled in the study. The following parameters were assessed: failure to pace, failure to sense (both oversensing and undersensing episodes), as well as the percentage of all PM disturbances. Despite proper sensing and pacing parameters, HM revealed PM disturbances in 23 of 100 patients (23%). Atrial undersensing episodes were found in 12 patients (p < 0.005), with 963 episodes in total, and failure to capture in 1 patient (1%). T-wave oversensing was the most common ventricular channel disorder (1316 episodes in 9 patients, p < 0.0005). Malfunction episodes occurred sporadically, leading to pauses of up to 1.6 s or temporary bradycardia, which were, nevertheless, not accompanied by clinical symptoms. No ventricular pacing disturbances were found. Asymptomatic pacemaker dysfunction may be observed in nearly 25% of patients with proper DDD parameters after implantation. Thus, ambulatory HM during the early post-implantation period may be a useful tool to detect the need to reprogram PM parameters.
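An automated scan of Holter RR intervals for the kinds of episodes reported above (pauses up to 1.6 s, transient bradycardia) might look like the sketch below; both thresholds are illustrative choices, not the study's detection criteria:

```python
def flag_malfunction_episodes(rr_intervals_s, pause_limit=1.6, brady_bpm=40.0):
    """Scan RR intervals (seconds) from a Holter recording.
    Returns (beat index, episode type, RR) for pauses at or above
    `pause_limit` and for beats slower than `brady_bpm`."""
    episodes = []
    for i, rr in enumerate(rr_intervals_s):
        if rr >= pause_limit:
            episodes.append((i, "pause", rr))
        elif 60.0 / rr < brady_bpm:               # instantaneous rate in bpm
            episodes.append((i, "bradycardia", rr))
    return episodes
```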

  7. EEMD-based multiscale ICA method for slewing bearing fault detection and diagnosis

    NASA Astrophysics Data System (ADS)

    Žvokelj, Matej; Zupan, Samo; Prebil, Ivan

    2016-05-01

    A novel multivariate and multiscale statistical process monitoring method is proposed with the aim of detecting incipient failures in large slewing bearings, where subjective influence plays a minor role. The proposed method integrates the strengths of the Independent Component Analysis (ICA) multivariate monitoring approach with the benefits of Ensemble Empirical Mode Decomposition (EEMD), which adaptively decomposes signals into different time scales and can thus cope with multiscale system dynamics. The method, which was named EEMD-based multiscale ICA (EEMD-MSICA), not only enables bearing fault detection but also offers a mechanism of multivariate signal denoising and, in combination with the Envelope Analysis (EA), a diagnostic tool. The multiscale nature of the proposed approach makes the method convenient to cope with data which emanate from bearings in complex real-world rotating machinery and frequently represent the cumulative effect of many underlying phenomena occupying different regions in the time-frequency plane. The efficiency of the proposed method was tested on simulated as well as real vibration and Acoustic Emission (AE) signals obtained through conducting an accelerated run-to-failure lifetime experiment on a purpose-built laboratory slewing bearing test stand. The ability to detect and locate the early-stage rolling-sliding contact fatigue failure of the bearing indicates that AE and vibration signals carry sufficient information on the bearing condition and that the developed EEMD-MSICA method is able to effectively extract it, thereby representing a reliable bearing fault detection and diagnosis strategy.
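A heavily simplified stand-in for the EEMD-MSICA idea: split the multichannel signal into frequency scales (here with fixed band-pass filters, whereas EEMD derives the scales adaptively from the data), fit ICA per scale on nominal data, and alarm on an I²-style statistic of the independent components. The bands, the 99th-percentile control limit, and the filter choice are all illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.decomposition import FastICA

def band_scales(x, fs, bands):
    """Split (samples, channels) data into frequency scales (EEMD stand-in)."""
    return [sosfiltfilt(butter(4, b, btype="bandpass", fs=fs, output="sos"),
                        x, axis=0)
            for b in bands]

class MultiscaleICAMonitor:
    def __init__(self, fs, bands):
        self.fs, self.bands = fs, bands

    def fit(self, nominal):
        """Fit one ICA model per scale and set an I^2 control limit."""
        self.models = []
        for scale in band_scales(nominal, self.fs, self.bands):
            ica = FastICA(n_components=scale.shape[1], random_state=0,
                          max_iter=500).fit(scale)
            i2 = (ica.transform(scale) ** 2).sum(axis=1)
            self.models.append((ica, np.percentile(i2, 99)))
        return self

    def alarms(self, x):
        """Flag samples whose I^2 exceeds the limit at any scale."""
        flags = np.zeros(len(x), dtype=bool)
        for (ica, limit), scale in zip(self.models,
                                       band_scales(x, self.fs, self.bands)):
            flags |= (ica.transform(scale) ** 2).sum(axis=1) > limit
        return flags
```

Working per scale is what lets the monitor separate phenomena occupying different regions of the time-frequency plane, which is the motivation the abstract gives for the multiscale approach.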

  8. [Telemetric monitoring reduces visits to the emergency room and cost of care in patients with chronic heart failure].

    PubMed

    Pérez-Rodríguez, Gilberto; Brito-Zurita, Olga Rosa; Sistos-Navarro, Enrique; Benítez-Aréchiga, Zaria Margarita; Sarmiento-Salazar, Gloria Leticia; Vargas-Lizárraga, José Feliciano

    2015-01-01

Tele-cardiology is the use of information technologies to help prolong survival, improve quality of life and reduce costs in health care. Heart failure is a chronic disease that leads to high care costs. The aim was to determine the effectiveness of telemetric monitoring for controlling clinical variables, reducing emergency room visits, and lowering the cost of care in a group of patients with heart failure compared to traditional medical consultation. A randomized, controlled and open clinical trial was conducted on 40 patients with heart failure in a tertiary care centre in north-western Mexico. The patients were divided randomly into 2 groups of 20 patients each (telemetric monitoring, traditional medical consultation). Each participant was evaluated for blood pressure, heart rate and body weight. The telemetric monitoring group was monitored remotely, and the traditional medical consultation group came to the hospital on scheduled dates. All patients could come to the emergency room if necessary. The telemetric monitoring group decreased their weight and improved control of the disease (P=.01). Systolic blood pressure and cost of care decreased (51%) significantly compared with the traditional medical consultation group (P>.05). Admission to the emergency room was avoided in 100% of patients in the telemetric monitoring group. In patients with heart failure, telemetric monitoring was effective in reducing emergency room visits and saved significant resources in care during follow-up. Copyright © 2015 Academia Mexicana de Cirugía A.C. Published by Masson Doyma México S.A. All rights reserved.

  9. Progressive Damage and Failure Analysis of Composite Laminates

    NASA Astrophysics Data System (ADS)

    Joseph, Ashith P. K.

    Composite materials are widely used in various industries for making structural parts due to their higher strength-to-weight ratio, better fatigue life, corrosion resistance, and material property tailorability. To fully exploit the capability of composites, it is necessary to know the load-carrying capacity of the parts made from them. Unlike metals, composites are orthotropic in nature and fail in a complex manner under various loading conditions, which makes them hard to analyze. The lack of reliable and efficient failure analysis tools for composites has led industries to rely more on coupon- and component-level testing to estimate the design space. Due to the complex failure mechanisms, composite materials require a very large number of coupon-level tests to fully characterize their behavior. This makes the entire testing process very time consuming and costly. The alternative is to use virtual testing tools which can predict the complex failure mechanisms accurately, reducing the cost to the associated computational expense and yielding significant savings. Some of the most desired features in a virtual testing tool are: (1) Accurate representation of failure mechanisms: failure progression predicted by the virtual tool must match that observed in experiments, and a tool has to be assessed based on the mechanisms it can capture. (2) Computational efficiency: the greatest advantage of a virtual tool is the savings in time and money, so computational efficiency is one of the most needed features. (3) Applicability to a wide range of problems: structural parts are subjected to a variety of loading conditions, including static, dynamic, and fatigue conditions, and a good virtual testing tool should make good predictions for all of them. The aim of this PhD thesis is to develop a computational tool which can model the progressive failure of composite laminates under different quasi-static loading conditions.
The analysis tool is validated by comparing the simulations against experiments for a selected number of quasi-static loading cases.
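    The kind of progressive failure analysis described above can be illustrated with a classical ply-discount scheme, sketched under strong assumptions (equal-strain load sharing, made-up ply stiffnesses and strengths); it is not the thesis's tool.

```python
# Minimal ply-discount progressive-failure sketch (illustrative only).
# Plies share load in proportion to stiffness; when a ply's stress exceeds
# its strength it is "discounted" (stiffness zeroed) and the load is
# redistributed until no further plies fail or the laminate is exhausted.

def laminate_failure_load(stiffness, strength, d_load=1.0):
    stiff = list(stiffness)
    load = 0.0
    while any(stiff):
        load += d_load
        progressing = True
        while progressing:
            progressing = False
            total = sum(stiff)
            if total == 0:
                return load  # last ply group failed: ultimate load
            for i, k in enumerate(stiff):
                if k == 0:
                    continue
                stress = load * k / total  # equal-strain load sharing
                if stress > strength[i]:
                    stiff[i] = 0.0         # discount the failed ply
                    progressing = True
                    break
    return load

# Hypothetical three-ply laminate: one stiff ply, two compliant plies.
print(laminate_failure_load([2.0, 1.0, 1.0], [60.0, 50.0, 50.0]))
```

    In this example the stiff ply fails first; the redistributed load then immediately fails the remaining plies, illustrating the cascade a progressive failure tool must capture.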

  10. Delamination-Indicating Thermal Barrier Coatings

    NASA Technical Reports Server (NTRS)

    Eldridge, Jeffrey I.

    2007-01-01

    The risk of premature failure of thermal barrier coatings (TBCs), typically composed of yttria-stabilized zirconia (YSZ), compromises the reliability of TBCs used to provide thermal protection for turbine engine components. Unfortunately, TBC delamination proceeds well beneath the TBC surface and cannot be monitored by visible inspection. Nondestructive diagnostic tools that could reliably probe the subsurface damage state of TBCs would alleviate the risk of TBC premature failure by indicating when the TBC needs to be replaced before the level of TBC damage threatens engine performance or safety. To meet this need, a new coating design for thermal barrier coatings (TBCs) that are self-indicating for delamination has been successfully implemented by incorporating a europium-doped luminescent sublayer at the base of a TBC composed of YSZ. The luminescent sublayer has the same YSZ composition as the rest of the TBC except for the addition of low-level europium doping and therefore does not alter TBC performance.

  11. A Fuzzy Reasoning Design for Fault Detection and Diagnosis of a Computer-Controlled System

    PubMed Central

    Ting, Y.; Lu, W.B.; Chen, C.H.; Wang, G.K.

    2008-01-01

    A Fuzzy Reasoning and Verification Petri Nets (FRVPNs) model is established for an error detection and diagnosis mechanism (EDDM) applied to a complex fault-tolerant PC-controlled system. The inference accuracy can be improved through the hierarchical design of a two-level fuzzy rule decision tree (FRDT) and a Petri nets (PNs) technique to transform the fuzzy rules into the FRVPNs model. Several simulation examples of the assumed failure events were carried out using the FRVPNs and the Mamdani fuzzy method with MATLAB tools. The reasoning performance of the developed FRVPNs was verified by comparing the inference outcome to that of the Mamdani method. Both methods reach the same conclusions. Thus, the present study demonstrates that the proposed FRVPNs model is able to achieve the purpose of reasoning and, furthermore, of determining the failure event of the monitored application program. PMID:19255619
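    A minimal Mamdani-style inference step of the kind used as the reference method can be sketched as follows, assuming two hypothetical rules and triangular membership functions; it is not the FRVPNs model.

```python
# Mamdani fuzzy inference sketch (illustrative): two rules map an error
# magnitude to a fault severity via triangular memberships, min-clipping,
# max-aggregation, and centroid defuzzification.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani_severity(error, steps=200):
    # Rule 1: IF error is small THEN severity is low.
    # Rule 2: IF error is large THEN severity is high.
    w_small = tri(error, -1.0, 0.0, 1.0)
    w_large = tri(error, 0.0, 1.0, 2.0)
    num = den = 0.0
    for i in range(steps + 1):
        s = i / steps  # candidate severity in [0, 1]
        # Clip each rule's output set by its firing strength, aggregate by max.
        mu = max(min(w_small, tri(s, -1.0, 0.0, 1.0)),
                 min(w_large, tri(s, 0.0, 1.0, 2.0)))
        num += mu * s
        den += mu
    return num / den if den else 0.0

print(round(mamdani_severity(0.5), 2))  # balanced rules give mid severity
```

    Larger errors shift the centroid toward high severity, which is the behavior a diagnosis rule base exploits.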

  12. A probabilistic-based approach to monitoring tool wear state and assessing its effect on workpiece quality in nickel-based alloys

    NASA Astrophysics Data System (ADS)

    Akhavan Niaki, Farbod

    The objective of this research is first to investigate the applicability and advantage of statistical state estimation methods for predicting tool wear in machining nickel-based superalloys over deterministic methods, and second to study the effects of cutting tool wear on the quality of the part. Nickel-based superalloys are among those classes of materials that are known as hard-to-machine alloys. These materials exhibit a unique combination of maintaining their strength at high temperature and have high resistance to corrosion and creep. These unique characteristics make them an ideal candidate for harsh environments like combustion chambers of gas turbines. However, the same characteristics that make nickel-based alloys suitable for aggressive conditions introduce difficulties when machining them. High strength and low thermal conductivity accelerate the cutting tool wear and increase the possibility of the in-process tool breakage. A blunt tool nominally deteriorates the surface integrity and damages quality of the machined part by inducing high tensile residual stresses, generating micro-cracks, altering the microstructure or leaving a poor roughness profile behind. As a consequence in this case, the expensive superalloy would have to be scrapped. The current dominant solution for industry is to sacrifice the productivity rate by replacing the tool in the early stages of its life or to choose conservative cutting conditions in order to lower the wear rate and preserve workpiece quality. Thus, monitoring the state of the cutting tool and estimating its effects on part quality is a critical task for increasing productivity and profitability in machining superalloys. This work aims to first introduce a probabilistic-based framework for estimating tool wear in milling and turning of superalloys and second to study the detrimental effects of functional state of the cutting tool in terms of wear and wear rate on part quality. 
In the milling operation, the mechanisms of tool failure were first identified and, based on the rapid catastrophic failure of the tool, a Bayesian inference method (i.e., Markov Chain Monte Carlo, MCMC) was used for parameter calibration of tool wear using a power mechanistic model. The calibrated model was then used in the state space probabilistic framework of a Kalman filter to estimate the tool flank wear. Furthermore, an on-machine laser measuring system was utilized and fused into the Kalman filter to improve the estimation accuracy. In the turning operation the behavior of progressive wear was investigated as well. Due to the nonlinear nature of wear in turning, an extended Kalman filter was designed for tracking progressive wear, and the results of the probabilistic-based method were compared with a deterministic technique, where significant improvement (more than 60% increase in estimation accuracy) was achieved. To fulfill the second objective of this research in understanding the underlying effects of wear on part quality in cutting nickel-based superalloys, a comprehensive study on surface roughness, dimensional integrity and residual stress was conducted. The estimated results derived from a probabilistic filter were used for finding the proper correlations between wear, surface roughness and dimensional integrity, along with a finite element simulation for predicting the residual stress profile for sharp and worn cutting tool conditions. The output of this research provides the essential information on condition monitoring of the tool and its effects on product quality. The low-cost Hall effect sensor used in this work to capture spindle power in the context of the stochastic filter can effectively estimate tool wear in both milling and turning operations, while the estimated wear can be used to generate knowledge of the state of workpiece surface integrity. 
Therefore the true functionality and efficiency of the tool in superalloy machining can be evaluated without additional high-cost sensing.
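    The scalar Kalman recursion underlying the wear-tracking framework can be sketched as follows; the wear rate, noise levels, and the power-derived measurement are hypothetical stand-ins, not the thesis's calibrated values.

```python
# Scalar Kalman-filter sketch for flank-wear tracking (illustrative).
# State: wear, assumed to grow at a known nominal rate per pass.
# Measurement: a noisy wear estimate inferred from spindle power.
import random

random.seed(0)

rate, q, r = 0.01, 1e-6, 4e-4     # wear rate, process noise, measurement noise
w_true, w_est, p_est = 0.0, 0.0, 1e-2

errs = []
for k in range(200):
    # True wear progresses; the sensor reads it with noise.
    w_true += rate + random.gauss(0.0, q ** 0.5)
    z = w_true + random.gauss(0.0, r ** 0.5)

    # Predict step.
    w_pred = w_est + rate
    p_pred = p_est + q
    # Update step with the Kalman gain.
    gain = p_pred / (p_pred + r)
    w_est = w_pred + gain * (z - w_pred)
    p_est = (1.0 - gain) * p_pred
    errs.append(abs(w_est - w_true))

print(sum(errs[-50:]) / 50 < 0.02)  # filtered error well under raw sensor noise
```

    The filtered error settles well below the raw measurement noise (std 0.02 here), which is the accuracy gain that motivates the probabilistic approach over direct sensor readout.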

  13. Failure environment analysis tool applications

    NASA Astrophysics Data System (ADS)

    Pack, Ginger L.; Wadsworth, David B.

    1993-02-01

    Understanding risks and avoiding failure are daily concerns for the women and men of NASA. Although NASA's mission propels us to push the limits of technology, and though the risks are considerable, the NASA community has instilled within itself the determination to preserve the integrity of the systems upon which our mission and our employees' lives and well-being depend. One of the ways this is being done is by expanding and improving the tools used to perform risk assessment. The Failure Environment Analysis Tool (FEAT) was developed to help engineers and analysts more thoroughly and reliably conduct risk assessment and failure analysis. FEAT accomplishes this by providing answers to questions regarding what might have caused a particular failure, or, conversely, what effect the occurrence of a failure might have on an entire system. Additionally, FEAT can determine what common causes could have resulted in other combinations of failures. FEAT will even help determine the vulnerability of a system to failures in light of reduced capability. FEAT is also useful in training personnel who must develop an understanding of particular systems. FEAT facilitates training on system behavior by providing an automated environment in which to conduct 'what-if' evaluations. These types of analyses make FEAT a valuable tool for engineers and operations personnel in the design, analysis, and operation of NASA space systems.
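    The forward ("what effects") and backward ("what causes") queries FEAT answers can be sketched as reachability on a cause-effect digraph; the example system below is hypothetical, and the real FEAT models are far richer.

```python
# FEAT-style what-if analysis sketch (illustrative). Edges point from a
# failure to the effects it can produce; causes are found by walking the
# reversed graph.
from collections import deque

EFFECTS = {  # failure -> downstream effects (hypothetical system)
    "pump_seal_leak": ["low_coolant_pressure"],
    "low_coolant_pressure": ["engine_overheat"],
    "sensor_fault": ["low_coolant_pressure_reading"],
    "engine_overheat": ["engine_shutdown"],
}

def reachable(graph, start):
    """All events reachable from `start` (BFS), i.e. possible effects."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def causes(graph, event):
    """Events that can lead to `event`: reachability in the reversed graph."""
    rev = {}
    for src, dsts in graph.items():
        for d in dsts:
            rev.setdefault(d, []).append(src)
    return reachable(rev, event)

print(sorted(reachable(EFFECTS, "pump_seal_leak")))
print(sorted(causes(EFFECTS, "engine_overheat")))
```

    The same reversed-graph walk also supports the common-cause query: intersecting the cause sets of two observed failures yields candidates that explain both.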

  14. Failure environment analysis tool applications

    NASA Technical Reports Server (NTRS)

    Pack, Ginger L.; Wadsworth, David B.

    1993-01-01

    Understanding risks and avoiding failure are daily concerns for the women and men of NASA. Although NASA's mission propels us to push the limits of technology, and though the risks are considerable, the NASA community has instilled within itself the determination to preserve the integrity of the systems upon which our mission and our employees' lives and well-being depend. One of the ways this is being done is by expanding and improving the tools used to perform risk assessment. The Failure Environment Analysis Tool (FEAT) was developed to help engineers and analysts more thoroughly and reliably conduct risk assessment and failure analysis. FEAT accomplishes this by providing answers to questions regarding what might have caused a particular failure, or, conversely, what effect the occurrence of a failure might have on an entire system. Additionally, FEAT can determine what common causes could have resulted in other combinations of failures. FEAT will even help determine the vulnerability of a system to failures in light of reduced capability. FEAT is also useful in training personnel who must develop an understanding of particular systems. FEAT facilitates training on system behavior by providing an automated environment in which to conduct 'what-if' evaluations. These types of analyses make FEAT a valuable tool for engineers and operations personnel in the design, analysis, and operation of NASA space systems.

  15. Failure environment analysis tool applications

    NASA Technical Reports Server (NTRS)

    Pack, Ginger L.; Wadsworth, David B.

    1994-01-01

    Understanding risks and avoiding failure are daily concerns for the women and men of NASA. Although NASA's mission propels us to push the limits of technology, and though the risks are considerable, the NASA community has instilled within itself the determination to preserve the integrity of the systems upon which our mission and our employees' lives and well-being depend. One of the ways this is being done is by expanding and improving the tools used to perform risk assessment. The Failure Environment Analysis Tool (FEAT) was developed to help engineers and analysts more thoroughly and reliably conduct risk assessment and failure analysis. FEAT accomplishes this by providing answers to questions regarding what might have caused a particular failure, or, conversely, what effect the occurrence of a failure might have on an entire system. Additionally, FEAT can determine what common causes could have resulted in other combinations of failures. FEAT will even help determine the vulnerability of a system to failures in light of reduced capability. FEAT is also useful in training personnel who must develop an understanding of particular systems. FEAT facilitates training on system behavior by providing an automated environment in which to conduct 'what-if' evaluations. These types of analyses make FEAT a valuable tool for engineers and operations personnel in the design, analysis, and operation of NASA space systems.

  16. 49 CFR Appendix G to Part 227 - Schedule of Civil Penalties

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION OCCUPATIONAL NOISE EXPOSURE Pt. 227, App. G Appendix G to Part 227... Requirements 227.103Noise monitoring program: (a) Failure to develop and/or implement a noise monitoring... levels and/or make noise measurements as required 2,500 5,000 (d) Failure to repeat noise monitoring...

  17. Why Alzheimer trials fail: removing soluble oligomeric beta amyloid is essential, inconsistent, and difficult.

    PubMed

    Rosenblum, William I

    2014-05-01

    Before amyloid formation, peptides cleaved from the amyloid precursor protein (APP) exist as soluble oligomers. These are extremely neurotoxic. Their concentration is strongly correlated with synaptic impairment in animals and parallels cognitive decline in animals and humans. Clinical trials have largely been aimed at removing insoluble beta amyloid in senile plaques and have not reduced the soluble load. Even treatment that should remove soluble oligomers has not consistently reduced the load. Failure to significantly improve cognition has frequently been attributed to failure of the amyloid hypothesis or to irreversible alteration in the brain. Instead, trial failures may be due to failure to significantly reduce the load of toxic Aβ oligomers. Moreover, targeting only synthesis of Aβ peptides, only the oligomers themselves, or only the final insoluble amyloid may fail to significantly reduce the soluble load because of the interrelationship between these 3 points in the amyloid cascade. Thus, treatments may fail unless trials simultaneously target all 3 points in the equation: "triple therapy". Cerebrospinal fluid analysis and other monitoring tools may in the future provide reliable measurement of the soluble load. Currently, however, only analysis of autopsied brains can provide these data and thus enable proper evaluation and explanation of the outcome of clinical trials. These data are essential before attributing trial failures to the advanced nature of the disease or asserting that failures prove that the theory linking Alzheimer's disease to products of amyloid precursor protein is incorrect. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Failure strength of the bovine caudal disc under internal hydrostatic pressure.

    PubMed

    Schechtman, Helio; Robertson, Peter A; Broom, Neil D

    2006-01-01

    The structure of the disc is both complex and inhomogeneous, and it functions as a successful load-bearing organ by virtue of the integration of its various structural regions. These same features also render it impossible to assess the failure strength of the disc from isolated tissue samples, which at best can only yield material properties. This study investigated the intrinsic failure strength of the intact bovine caudal disc under a simple mode of internal hydrostatic pressure. Using a hydraulic actuator, coloured hydrogel was injected under monitored pressure into the nucleus through a hollow screw insert which passed longitudinally through one of the attached vertebrae. Failure did not involve vertebra/endplate structures. Rather, failure of the disc annulus was indicated by the simultaneous manifestation of a sudden loss of gel pressure, a flood of gel colouration appearing in the outer annulus and audible fibrous tearing. A mean hydrostatic failure pressure of 18+/-3 MPa was observed which was approximated as a thick-wall hoop stress of 45+/-7 MPa. The experiment provides a measurement of the intrinsic strength of the disc using a method of internal hydrostatic loading which avoids any disruption of the complex architecture of the annular wall. Although the disc in vivo is subjected to a much more complex pattern of loading than is achieved using simple hydrostatic pressurization, this latter mode provides a useful tool for investigating alterations in intrinsic disc strength associated with prior loading history or degeneration.
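    The reported figures are mutually consistent under the Lamé thick-walled cylinder solution; the sketch below back-computes a hypothetical radius ratio, since the disc's actual geometry is not given in the abstract.

```python
# Consistency check of the reported pressures using the Lamé thick-walled
# cylinder solution (a sketch; the study's exact annulus geometry is not
# stated here). Maximum hoop stress occurs at the inner wall:
#   sigma = p * (k**2 + 1) / (k**2 - 1),  k = outer/inner radius ratio.

def hoop_stress(p, k):
    return p * (k ** 2 + 1.0) / (k ** 2 - 1.0)

p_fail = 18.0                  # MPa, mean hydrostatic failure pressure
k = (7.0 / 3.0) ** 0.5         # hypothetical radius ratio, about 1.53
print(round(hoop_stress(p_fail, k), 1))  # 45.0 MPa, the reported hoop stress
```

    A ratio of roughly 1.5 between outer and inner annulus radii is the value under which an 18 MPa internal pressure maps to the quoted 45 MPa hoop stress.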

  19. Gerontechnologies for Older Patients with Heart Failure: What is the Role of Smartphones, Tablets, and Remote Monitoring Devices in Improving Symptom Monitoring and Self-Care Management?

    PubMed

    Masterson Creber, Ruth M; Hickey, Kathleen T; Maurer, Mathew S

    2016-10-01

    Older adults with heart failure have multiple chronic conditions and a large number and range of symptoms. A fundamental component of heart failure self-care management is regular symptom monitoring. Symptom monitoring can be facilitated by cost-effective, easily accessible technologies that are integrated into patients' lives. Technologies that are tailored to older adults by incorporating gerontological design principles are called gerontechnologies. Gerontechnology is an interdisciplinary academic and professional field that combines gerontology and technology with the goals of improving prevention, care, and enhancing the quality of life for older adults. The purpose of this article is to discuss the role of gerontechnologies, specifically the use of mobile applications available on smartphones and tablets as well as remote monitoring systems, for outpatient disease management among older adults with heart failure. While largely unproven, these rapidly developing technologies have great potential to improve outcomes among older persons.

  20. On the use of temperature for online condition monitoring of geared systems - A review

    NASA Astrophysics Data System (ADS)

    Touret, T.; Changenet, C.; Ville, F.; Lalmi, M.; Becquerelle, S.

    2018-02-01

    Gear unit condition monitoring is a key factor in mechanical system reliability management. When subject to failure, gears and bearings may generate excessive vibration, debris and heat. Vibratory, acoustic and debris analyses are proven approaches to condition monitoring. An alternative to those methods is to use temperature as a condition indicator to detect gearbox failure. This review focuses on condition monitoring studies which use the thermal approach. Studies are distinguished according to the failure type and the measurement method, i.e. whether a contact sensor (e.g. thermocouple) or a non-contact technique (e.g. thermography) is used. Capabilities and limitations of the approach are discussed. It is shown that the use of temperature for condition monitoring has clear potential as an alternative to vibratory or acoustic health monitoring.
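    One simple thermal condition indicator of the kind surveyed can be sketched as an exponentially weighted moving average (EWMA) of the temperature rise above a healthy baseline; the baseline, smoothing factor, and alarm threshold below are hypothetical.

```python
# EWMA-based temperature condition indicator (illustrative sketch).
# Smoothing the rise above a healthy baseline suppresses sensor noise
# while still reacting to a sustained fault-driven temperature climb.

def ewma_alarm(temps, baseline, alpha=0.2, threshold=5.0):
    """Return the first sample index at which the smoothed rise alarms."""
    ewma = 0.0
    for i, t in enumerate(temps):
        ewma = alpha * (t - baseline) + (1.0 - alpha) * ewma
        if ewma > threshold:
            return i
    return None

# Healthy operation around 60 C, then a fault drives temperature upward.
temps = [60.0] * 20 + [60.0 + 2.0 * j for j in range(1, 15)]
print(ewma_alarm(temps, baseline=60.0))
```

    The smoothing delays the alarm by a few samples relative to a raw threshold, the usual robustness-versus-latency trade-off in temperature-based monitoring.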

  1. Six sigma for revenue retrieval.

    PubMed

    Plonien, Cynthia

    2013-01-01

    Deficiencies in revenue retrieval due to failures in obtaining charges have contributed to a negative bottom line for numerous hospitals. Improving documentation practices through a Six Sigma process improvement initiative can minimize opportunities for errors through reviews and instill structure for compliance and consistency. Commitment to the Six Sigma principles with continuous monitoring of outcomes and constant communication of results to departments, management, and payers is a strong approach to reducing the financial impact of denials on an organization's revenues and expenses. Using Six Sigma tools can help improve the organization's financial performance not only for today, but also for health care's uncertain future.

  2. Real-time complex event processing for cloud resources

    NASA Astrophysics Data System (ADS)

    Adam, M.; Cordeiro, C.; Field, L.; Giordano, D.; Magnoni, L.

    2017-10-01

    The ongoing integration of clouds into the WLCG raises the need for detailed health and performance monitoring of the virtual resources in order to prevent problems of degraded service and interruptions due to undetected failures. When working at scale, the existing monitoring diversity can lead to a metric overflow whereby the operators need to manually collect and correlate data from several monitoring tools and frameworks, resulting in tens of different metrics to be constantly interpreted and analyzed per virtual machine. In this paper we present an ESPER-based standalone application which is able to process complex monitoring events coming from various sources and automatically interpret the data in order to issue alarms on the resources’ statuses, without interfering with the actual resources and data sources. We describe how this application has been used with both commercial and non-commercial cloud activities, allowing the operators to be quickly alarmed and to react to misbehaving VMs and LHC experiments’ workflows. We present the pattern analysis mechanisms being used, as well as the surrounding Elastic and REST API interfaces where the alarms are collected and served to users.
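    The paper's engine is ESPER, a Java engine with its own EPL query language; a minimal Python stand-in for one such rule (alarm when a VM emits several high-CPU events inside a time window) might look like this, with all event and metric names hypothetical.

```python
# Minimal complex-event-processing sketch (illustrative; not ESPER/EPL).
# Raise an alarm when one VM reports >= 3 "high_cpu" events inside a
# 60-second sliding window.
from collections import defaultdict, deque

def process(events, window=60.0, count=3):
    """events: iterable of (timestamp, vm, metric). Returns alarmed VMs."""
    recent = defaultdict(deque)
    alarms = []
    for ts, vm, metric in events:
        if metric != "high_cpu":
            continue
        q = recent[vm]
        q.append(ts)
        while q and ts - q[0] > window:
            q.popleft()            # expire events outside the window
        if len(q) >= count and vm not in alarms:
            alarms.append(vm)
    return alarms

stream = [(0, "vm1", "high_cpu"), (10, "vm2", "high_cpu"),
          (20, "vm1", "high_cpu"), (90, "vm1", "high_cpu"),
          (95, "vm1", "high_cpu"), (99, "vm1", "high_cpu")]
print(process(stream))  # only vm1 crosses the threshold within one window
```

    Note that vm1's early events expire before its later burst, so the count is evaluated strictly within the window, the behavior a time-window EPL statement would give.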

  3. A Selection of Composites Simulation Practices at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Ratcliffe, James G.

    2007-01-01

    One of the major areas of study at NASA Langley Research Center is the development of technologies that support the use of advanced composite materials in aerospace applications. Amongst the supporting technologies are analysis tools used to simulate the behavior of these materials. This presentation will discuss a number of examples of analysis tools and simulation practices conducted at NASA Langley. The presentation will include examples of damage tolerance analyses for both interlaminar and intralaminar failure modes. Tools for modeling interlaminar failure modes include fracture mechanics and cohesive methods, whilst tools for modeling intralaminar failure involve the development of various progressive failure analyses. Other examples of analyses developed at NASA Langley include a thermo-mechanical model of an orthotropic material and the simulation of delamination growth in z-pin reinforced laminates.

  4. 40 CFR 141.211 - Special notice for repeated failure to conduct monitoring of the source water for Cryptosporidium...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Special notice for repeated failure to conduct monitoring of the source water for Cryptosporidium and for failure to determine bin classification or mean Cryptosporidium level. 141.211 Section 141.211 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS ...

  5. Failure Modes and Effects Analysis (FMEA) Assistant Tool Feasibility Study

    NASA Technical Reports Server (NTRS)

    Flores, Melissa; Malin, Jane T.

    2013-01-01

    An effort to determine the feasibility of a software tool to assist in Failure Modes and Effects Analysis (FMEA) has been completed. This new and unique approach to FMEA uses model-based systems engineering concepts to recommend failure modes, causes, and effects to the user after they have made several selections from pick lists about a component's functions and inputs/outputs. Recommendations are made based on a library of common failure modes identified over the course of several major human spaceflight programs. However, the tool could be adapted for use in a wide range of applications from NASA to the energy industry.
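    The pick-list-driven recommendation idea can be sketched as a small lookup over a failure-mode library; the library contents and category names below are hypothetical, as the actual tool's library is not given in this record.

```python
# Sketch of a pick-list-driven FMEA recommender (illustrative only).
# A component's selected function maps to candidate (mode, effect) pairs
# drawn from a small, hypothetical library of historical failure modes.

FAILURE_MODE_LIBRARY = {
    "fluid_transfer": [("leak", "loss of coolant"),
                       ("blockage", "loss of downstream flow")],
    "power_conversion": [("short_circuit", "loss of output power"),
                         ("overheating", "degraded efficiency")],
}

def recommend_failure_modes(function, library=FAILURE_MODE_LIBRARY):
    """Return (mode, effect) candidates for the selected function."""
    return library.get(function, [])

for mode, effect in recommend_failure_modes("fluid_transfer"):
    print(f"{mode}: {effect}")
```

    In a real tool the key would combine function, inputs, and outputs, and each candidate would carry provenance from the programs in which it was observed.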

  6. Failure Modes and Effects Analysis (FMEA) Assistant Tool Feasibility Study

    NASA Astrophysics Data System (ADS)

    Flores, Melissa D.; Malin, Jane T.; Fleming, Land D.

    2013-09-01

    An effort to determine the feasibility of a software tool to assist in Failure Modes and Effects Analysis (FMEA) has been completed. This new and unique approach to FMEA uses model based systems engineering concepts to recommend failure modes, causes, and effects to the user after they have made several selections from pick lists about a component's functions and inputs/outputs. Recommendations are made based on a library using common failure modes identified over the course of several major human spaceflight programs. However, the tool could be adapted for use in a wide range of applications from NASA to the energy industry.

  7. Formal Verification of the Runway Safety Monitor

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu; Ciardo, Gianfranco

    2006-01-01

    The Runway Safety Monitor (RSM) designed by Lockheed Martin is part of NASA's effort to reduce runway accidents. We developed a Petri net model of the RSM protocol and used the model checking functions of our tool SMART to investigate a number of safety properties in RSM. To mitigate the impact of state-space explosion, we built a highly discretized model of the system, obtained by partitioning the monitored runway zone into a grid of smaller volumes and by considering scenarios involving only two aircraft. The model also assumes that there are no communication failures, such as bad input from radar or lack of incoming data, thus it relies on a consistent view of reality by all participants. In spite of these simplifications, we were able to expose potential problems in the RSM conceptual design. Our findings were forwarded to the design engineers, who undertook corrective action. Additionally, the results stress the efficiency attained by the new model checking algorithms implemented in SMART, and demonstrate their applicability to real-world systems.

  8. Diagnostics of wear in aeronautical systems

    NASA Technical Reports Server (NTRS)

    Wedeven, L. D.

    1979-01-01

    The use of appropriate diagnostic tools for aircraft oil-wetted components is reviewed, noting that it can reduce direct operating costs through reduced unscheduled maintenance, particularly in helicopter engine and transmission systems where bearing failures are a significant cost factor. Engine and transmission wear modes are described, and diagnostic methods for oil and wear particle analysis, the spectrometric oil analysis program, chip detectors, ferrography, in-line oil monitors and radioactive isotope tagging are discussed, noting that they are effective over a limited range of particle sizes but complement each other if used in parallel. Fine filtration can potentially increase time between overhauls, but reduces the effectiveness of conventional oil monitoring techniques, so that alternative diagnostic techniques must be used. It is concluded that the development of a diagnostic system should be parallel and integral with the development of a mechanical system.

  9. Optical Spectroscopy of New Materials

    NASA Technical Reports Server (NTRS)

    White, Susan M.; Arnold, James O. (Technical Monitor)

    1993-01-01

    Composites are currently used for a rapidly expanding number of applications including aircraft structures, rocket nozzles, thermal protection of spacecraft, high performance ablative surfaces, sports equipment including skis, tennis rackets and bicycles, lightweight automobile components, cutting tools, and optical-grade mirrors. Composites are formed from two or more insoluble materials to produce a material with properties superior to either component. Composites range from dispersion-hardened alloys to advanced fiber-reinforced composites. UV/VIS and FTIR spectroscopy are currently used to evaluate the bonding between the matrix and the fibers, monitor the curing process of a polymer, measure surface contamination, characterize the interphase material, monitor anion transport in polymer phases, characterize void formation (voids must be minimized because, like cracks in a bulk material, they lead to failure), characterize the surface of the fiber component, and measure the overall optical properties for energy balances.

  10. Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model: A Web-based program designed to evaluate the cost-effectiveness of disease management programs in heart failure.

    PubMed

    Reed, Shelby D; Neilson, Matthew P; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H; Polsky, Daniel E; Graham, Felicia L; Bowers, Margaret T; Paul, Sara C; Granger, Bradi B; Schulman, Kevin A; Whellan, David J; Riegel, Barbara; Levy, Wayne C

    2015-11-01

    Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics; use of evidence-based medications; and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model. Projections of resource use and quality of life are modeled using relationships with time-varying Seattle Heart Failure Model scores. The model can be used to evaluate parallel-group and single-cohort study designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. The Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. Copyright © 2015 Elsevier Inc. All rights reserved.
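    The cost-effectiveness arithmetic such a model automates ultimately reduces to incremental cost-effectiveness ratios; a sketch with hypothetical cohort numbers (not outputs of the actual model) follows.

```python
# Incremental cost-effectiveness ratio (ICER) sketch: the summary statistic
# a heart-failure disease-management simulation reports when comparing a
# program against usual care. All numbers below are hypothetical.

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost per quality-adjusted life-year (QALY) gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Program: $48,000 and 4.2 QALYs; usual care: $40,000 and 4.0 QALYs.
ratio = icer(48_000, 4.2, 40_000, 4.0)
print(round(ratio))  # dollars per QALY gained
```

    In a cohort simulation such as the one described, the two (cost, QALY) pairs would be means over the 10,000 simulated cohort pairs rather than single values.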

  11. 40 CFR 141.211 - Special notice for repeated failure to conduct monitoring of the source water for Cryptosporidium...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... classification or mean Cryptosporidium level must contain the following language: We are required to monitor the... or mean Cryptosporidium level. 141.211 Section 141.211 Protection of Environment ENVIRONMENTAL... Cryptosporidium level. (a) When is the special notice for repeated failure to monitor to be given? The owner or...

  12. Smart acoustic emission system for wireless monitoring of concrete structures

    NASA Astrophysics Data System (ADS)

    Yoon, Dong-Jin; Kim, Young-Gil; Kim, Chi-Yeop; Seo, Dae-Cheol

    2008-03-01

    Acoustic emission (AE) has emerged as a powerful nondestructive tool to detect preexisting defects or to characterize failure mechanisms. Recently, this technique, based on in-situ monitoring of internal damage in materials or structures, has become increasingly popular for monitoring the integrity of large structures. Concrete is one of the most widely used materials for constructing civil structures. From the nondestructive evaluation point of view, many AE signals are generated in concrete structures under loading, whether or not crack development is active. It is also necessary to detect symptoms of damage propagation before catastrophic failure through continuous monitoring. In this work we therefore conducted a practical study to fabricate a compact wireless AE sensor and to develop a diagnosis system. First, the study aims to identify the differences between AE event patterns caused by real damage sources and those caused by other, normal sources. Second, it focuses on developing an acoustic emission diagnosis system for assessing the deterioration of concrete structures such as bridges, dams, building slabs, and tunnels. Third, a wireless acoustic emission system was developed for application to the monitoring of concrete structures. From previous laboratory studies, such as analysis of AE event patterns under various loading conditions, we confirmed that AE analysis provides a promising approach for estimating the condition of damage and distress in concrete structures. In this work, an algorithm for determining the damage status of concrete structures was developed and typical criteria for decision making were suggested. For future application to wireless monitoring, a low-energy, compact, and robust wireless acoustic emission sensor module was developed and applied to a concrete beam for performance testing. Finally, based on the self-developed diagnosis algorithm and the compact wireless AE sensor, a new AE system for practical diagnosis was demonstrated for assessing the conditions of damage and distress in concrete structures.

  13. Review and Analysis of Existing Mobile Phone Apps to Support Heart Failure Symptom Monitoring and Self-Care Management Using the Mobile Application Rating Scale (MARS).

    PubMed

    Masterson Creber, Ruth M; Maurer, Mathew S; Reading, Meghan; Hiraldo, Grenny; Hickey, Kathleen T; Iribarren, Sarah

    2016-06-14

    Heart failure is the most common cause of hospital readmissions among Medicare beneficiaries and these hospitalizations are often driven by exacerbations in common heart failure symptoms. Patient collaboration with health care providers and decision making is a core component of increasing symptom monitoring and decreasing hospital use. Mobile phone apps offer a potentially cost-effective solution for symptom monitoring and self-care management at the point of need. The purpose of this review of commercially available apps was to identify and assess the functionalities of patient-facing mobile health apps targeted toward supporting heart failure symptom monitoring and self-care management. We searched 3 Web-based mobile app stores using multiple terms and combinations (eg, "heart failure," "cardiology," "heart failure and self-management"). Apps meeting inclusion criteria were evaluated using the Mobile Application Rating Scale (MARS), IMS Institute for Healthcare Informatics functionality scores, and Heart Failure Society of America (HFSA) guidelines for nonpharmacologic management. Apps were downloaded and assessed independently by 2-4 reviewers, interclass correlations between reviewers were calculated, and consensus was reached by discussion. Of 3636 potentially relevant apps searched, 34 met inclusion criteria. Most apps were excluded because they were unrelated to heart failure, not in English or Spanish, or were games. Interrater reliability between reviewers was high. The AskMD app had the highest average MARS total (4.9/5). More than half of the apps (23/34, 68%) had acceptable MARS scores (>3.0). Heart Failure Health Storylines (4.6) and AskMD (4.5) had the highest scores for behavior change. Factoring MARS, functionality, and HFSA guideline scores, the highest performing apps included Heart Failure Health Storylines, Symple, ContinuousCare Health App, WebMD, and AskMD. Peer-reviewed publications were identified for only 3 of the 34 apps. 
This review suggests that few apps meet prespecified criteria for quality, content, or functionality, highlighting the need for further refinement and mapping to evidence-based guidelines and room for overall quality improvement in heart failure symptom monitoring and self-care related apps.

  14. Model Based Optimal Sensor Network Design for Condition Monitoring in an IGCC Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Rajeeva; Kumar, Aditya; Dai, Dan

    2012-12-31

    This report summarizes the achievements and final results of this program. The objective of this program is to develop a general model-based sensor network design methodology and tools to address key issues in the design of an optimal sensor network configuration: the type, location, and number of sensors used in a network for online condition monitoring. In particular, the focus of this work is to develop software tools for optimal sensor placement (OSP) and to use these tools to design optimal sensor network configurations for online condition monitoring of gasifier refractory wear and radiant syngas cooler (RSC) fouling. The methodology developed will be applicable to sensing system design for online condition monitoring across a broad range of applications. The overall approach consists of (i) defining condition monitoring requirements in terms of OSP and mapping these requirements into mathematical terms for the OSP algorithm, (ii) analyzing trade-offs of alternate OSP algorithms, down-selecting the most relevant ones, and developing them for IGCC applications, (iii) enhancing the gasifier and RSC models as required by the OSP algorithms, and (iv) applying the developed OSP algorithms to design the optimal sensor network required for condition monitoring of IGCC gasifier refractory and RSC fouling. Two key requirements for OSP for condition monitoring are the desired precision for the monitoring variables (e.g., refractory wear) and the reliability of the proposed sensor network in the presence of expected sensor failures. The OSP problem is naturally posed within a Kalman filtering approach as an integer programming problem where the key requirements of precision and reliability are imposed as constraints, and the optimization is performed over the overall network cost. Based on an extensive literature survey, two formulations were identified as relevant to OSP for condition monitoring: one based on a linear matrix inequality (LMI) formulation and the other a standard integer nonlinear programming (INLP) formulation. Various algorithms to solve these two formulations were developed and validated. For a given OSP problem, the computational efficiency largely depends on the "size" of the problem. Initially, a simplified 1-D gasifier model assuming axial and azimuthal symmetry was used to test the various OSP algorithms. Finally, these algorithms were used to design the optimal sensor network for condition monitoring of IGCC gasifier refractory wear and RSC fouling. The sensor types and locations obtained as the solution to the OSP problem were validated using a model-based sensing approach. The OSP algorithm has been developed in modular form and packaged as a software tool for OSP design in which a designer can explore various OSP design algorithms in a user-friendly way. The OSP software tool is implemented in-house in Matlab/Simulink©. The tool also uses a few optimization routines that are freely available on the World Wide Web. In addition, a modular Extended Kalman Filter (EKF) block has been developed in Matlab/Simulink© that can be used for model-based sensing of important process variables that are not directly measured, by combining the online sensors with model-based estimation once the hardware sensors and their locations have been finalized. The OSP algorithm details and the results of applying these algorithms to obtain optimal sensor locations for condition monitoring of gasifier refractory wear and the RSC fouling profile are summarized in this final report.
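    The report frames OSP as an integer program over a Kalman-filter precision constraint plus a reliability constraint under sensor failures, minimizing total network cost. A minimal sketch of that idea for a scalar random-walk state; the sensor catalogue, noise variances, and costs below are illustrative, not values from the report:

```python
from itertools import combinations

def steady_state_var(q, noise_vars, iters=500):
    # Scalar Kalman filter for x_{k+1} = x_k + w, Var(w) = q, fused
    # over all working sensors (their measurement noise variances).
    p = 1.0
    for _ in range(iters):
        pred = p + q                                    # time update
        info = 1.0 / pred + sum(1.0 / r for r in noise_vars)
        p = 1.0 / info                                  # measurement update
    return p

def cheapest_network(sensors, q, precision, max_failures=1):
    """Cheapest sensor subset whose steady-state estimation variance
    stays below `precision` even after any `max_failures` sensors fail.
    sensors: list of (name, noise_variance, cost) tuples."""
    best, best_cost = None, float("inf")
    for k in range(max_failures + 1, len(sensors) + 1):
        for subset in combinations(sensors, k):
            # Reliability constraint: every worst-case survivor set
            # must still meet the precision requirement.
            ok = all(
                steady_state_var(q, [r for _, r, _ in survivors]) <= precision
                for survivors in combinations(subset, k - max_failures)
            )
            cost = sum(c for _, _, c in subset)
            if ok and cost < best_cost:
                best, best_cost = subset, cost
    return best
```

    Exhaustive enumeration stands in here for the LMI/INLP solvers the report mentions; it illustrates the constraint structure, not their computational approach.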

  15. Cadaveric study validating in vitro monitoring techniques to measure the failure mechanism of glenoid implants against clinical CT.

    PubMed

    Junaid, Sarah; Gregory, Thomas; Fetherston, Shirley; Emery, Roger; Amis, Andrew A; Hansen, Ulrich

    2018-03-23

    Definite glenoid implant loosening is identifiable on radiographs; however, identifying early loosening still eludes clinicians. Methods to monitor glenoid loosening in vitro have not been validated against clinical imaging. This study investigates the correlation between in vitro measures and CT images. Ten cadaveric scapulae were implanted with a pegged glenoid implant and fatigue tested to failure. Each scapula was cyclically loaded superiorly and CT scanned every 20,000 cycles until failure to monitor progressive radiolucent lines. Superior and inferior rim displacements were also measured. A finite element (FE) model of one scapula was used to analyze the interfacial stresses at the implant/cement and cement/bone interfaces. All ten implants failed inferiorly at the implant-cement interface; two also failed at the cement-bone interface inferiorly, and three showed superior failure. Failure occurred at 80,966 ± 53,729 (mean ± SD) cycles. CT scans confirmed failure of the fixation, which in most cases was observed either before or together with visual failure. Significant correlations were found between inferior rim displacement, vertical head displacement, and failure of the glenoid implant. The FE model showed peak tensile stresses inferiorly and high compressive stresses superiorly, corroborating the experimental findings. In vitro monitoring methods correlated with failure progression in clinical CT images, possibly indicating their capacity to detect loosening earlier, allowing earlier clinical intervention if needed. Their use in detecting failure non-destructively for implant development and testing is also valuable. The study highlights failure at the implant-cement interface and shows that early signs of failure are identifiable in CT images. © 2018 The Authors. Journal of Orthopaedic Research® Published by Wiley Periodicals, Inc. on behalf of the Orthopaedic Research Society. J Orthop Res 9999:XX-XX, 2018.

  16. Automation-induced monitoring inefficiency: role of display location.

    PubMed

    Singh, I L; Molloy, R; Parasuraman, R

    1997-01-01

    Operators can be poor monitors of automation if they are engaged concurrently in other tasks. However, in previous studies of this phenomenon the automated task was always presented in the periphery, away from the primary manual tasks that were centrally displayed. In this study we examined whether centrally locating an automated task would boost monitoring performance during a flight-simulation task consisting of system monitoring, tracking and fuel resource management sub-tasks. Twelve nonpilot subjects were required to perform the tracking and fuel management tasks manually while watching the automated system monitoring task for occasional failures. The automation reliability was constant at 87.5% for six subjects and variable (alternating between 87.5% and 56.25%) for the other six subjects. Each subject completed four 30 min sessions over a period of 2 days. In each automation reliability condition the automation routine was disabled for the last 20 min of the fourth session in order to simulate catastrophic automation failure (0% reliability). Monitoring for automation failure was inefficient when automation reliability was constant but not when it varied over time, replicating previous results. Furthermore, there was no evidence of a resource or speed-accuracy trade-off between tasks. Thus, automation-induced failures of monitoring cannot be prevented by centrally locating the automated task.

  17. Automation-induced monitoring inefficiency: role of display location

    NASA Technical Reports Server (NTRS)

    Singh, I. L.; Molloy, R.; Parasuraman, R.

    1997-01-01

    Operators can be poor monitors of automation if they are engaged concurrently in other tasks. However, in previous studies of this phenomenon the automated task was always presented in the periphery, away from the primary manual tasks that were centrally displayed. In this study we examined whether centrally locating an automated task would boost monitoring performance during a flight-simulation task consisting of system monitoring, tracking and fuel resource management sub-tasks. Twelve nonpilot subjects were required to perform the tracking and fuel management tasks manually while watching the automated system monitoring task for occasional failures. The automation reliability was constant at 87.5% for six subjects and variable (alternating between 87.5% and 56.25%) for the other six subjects. Each subject completed four 30 min sessions over a period of 2 days. In each automation reliability condition the automation routine was disabled for the last 20 min of the fourth session in order to simulate catastrophic automation failure (0% reliability). Monitoring for automation failure was inefficient when automation reliability was constant but not when it varied over time, replicating previous results. Furthermore, there was no evidence of a resource or speed-accuracy trade-off between tasks. Thus, automation-induced failures of monitoring cannot be prevented by centrally locating the automated task.

  18. Remote monitoring of Xpert® MTB/RIF testing in Mozambique: results of programmatic implementation of GxAlert.

    PubMed

    Cowan, J; Michel, C; Manhiça, I; Mutaquiha, C; Monivo, C; Saize, D; Beste, J; Creswell, J; Codlin, A J; Gloyd, S

    2016-03-01

    Electronic diagnostic tests, such as the Xpert® MTB/RIF assay, are being implemented in low- and middle-income countries (LMICs). However, timely information from these tests available via remote monitoring is underutilized. The failure to transmit real-time, actionable data to key individuals such as clinicians, patients, and national monitoring and evaluation teams may negatively impact patient care. To describe recently developed applications that allow for real-time, remote monitoring of Xpert results, and initial implementation of one of these products in central Mozambique. In partnership with the Mozambican National Tuberculosis Program, we compared three different remote monitoring tools for Xpert and selected one, GxAlert, to pilot and evaluate at five public health centers in Mozambique. GxAlert software was successfully installed on all five Xpert computers, and test results are now uploaded daily via a USB internet modem to a secure online database. A password-protected web-based interface allows real-time analysis of test results, and 1200 positive tests for tuberculosis generated 8000 SMS result notifications to key individuals. Remote monitoring of diagnostic platforms is feasible in LMICs. While promising, this effort needs to address issues around patient data ownership, confidentiality, interoperability, unique patient identifiers, and data security.

  19. [Research and implementation of a real-time monitoring system for running status of medical monitors based on the internet of things].

    PubMed

    Li, Yiming; Qian, Mingli; Li, Long; Li, Bin

    2014-07-01

    This paper proposed a real-time monitoring system for running status of medical monitors based on the internet of things. In the aspect of hardware, a solution of ZigBee networks plus 470 MHz networks is proposed. In the aspect of software, graphical display of monitoring interface and real-time equipment failure alarm is implemented. The system has the function of remote equipment failure detection and wireless localization, which provides a practical and effective method for medical equipment management.

  20. Integrating Near-Real Time Hydrologic-Response Monitoring and Modeling for Improved Assessments of Slope Stability Along the Coastal Bluffs of the Puget Sound Rail Corridor, Washington State

    NASA Astrophysics Data System (ADS)

    Mirus, B. B.; Baum, R. L.; Stark, B.; Smith, J. B.; Michel, A.

    2015-12-01

    Previous USGS research on landslide potential in hillside areas and coastal bluffs around Puget Sound, WA, has identified rainfall thresholds and antecedent moisture conditions that correlate with heightened probability of shallow landslides. However, physically based assessments of temporal and spatial variability in landslide potential require improved quantitative characterization of the hydrologic controls on landslide initiation in heterogeneous geologic materials. Here we present preliminary steps towards integrating monitoring of hydrologic response with physically based numerical modeling to inform the development of a landslide warning system for a railway corridor along the eastern shore of Puget Sound. We instrumented two sites along the steep coastal bluffs - one active landslide and one currently stable slope with the potential for failure - to monitor rainfall, soil-moisture, and pore-pressure dynamics in near-real time. We applied a distributed model of variably saturated subsurface flow for each site, with heterogeneous hydraulic-property distributions based on our detailed site characterization of the surficial colluvium and the underlying glacial-lacustrine deposits that form the bluffs. We calibrated the model with observed volumetric water content and matric potential time series, then used simulated pore pressures from the calibrated model to calculate the suction stress and the corresponding distribution of the factor of safety against landsliding with the infinite slope approximation. Although the utility of the model is limited by uncertainty in the deeper groundwater flow system, the continuous simulation of near-surface hydrologic response can help to quantify the temporal variations in the potential for shallow slope failures at the two sites. Thus the integration of near-real time monitoring and physically based modeling contributes a useful tool towards mitigating hazards along the Puget Sound railway corridor.
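    The stability assessment above converts simulated pore pressures to suction stress and then to a factor of safety with the infinite slope approximation. A minimal sketch of one common closed form (after Lu and Godt's suction-stress formulation; the abstract does not give the exact expression used, so both the formula choice and the numbers below are illustrative):

```python
import math

def factor_of_safety(phi_deg, beta_deg, c, gamma, z, sigma_s):
    """Infinite-slope factor of safety against shallow sliding.

    phi_deg : friction angle (deg), beta_deg : slope angle (deg),
    c : cohesion (Pa), gamma : unit weight (N/m^3), z : slip depth (m),
    sigma_s : suction stress (Pa, negative under unsaturated conditions).

    FS = tan(phi)/tan(beta)
       + 2c / (gamma * z * sin(2 beta))
       - sigma_s * (tan(beta) + cot(beta)) * tan(phi) / (gamma * z)
    """
    phi, beta = math.radians(phi_deg), math.radians(beta_deg)
    return (math.tan(phi) / math.tan(beta)
            + 2 * c / (gamma * z * math.sin(2 * beta))
            - sigma_s * (math.tan(beta) + 1 / math.tan(beta))
              * math.tan(phi) / (gamma * z))
```

    With illustrative numbers, letting sigma_s rise toward zero as the bluff wets lowers FS, which is exactly the temporal variation the monitoring is meant to capture.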

  1. Supporting dynamic change detection: using the right tool for the task.

    PubMed

    Vallières, Benoît R; Hodgetts, Helen M; Vachon, François; Tremblay, Sébastien

    2016-01-01

    Detecting task-relevant changes in a visual scene is necessary for successfully monitoring and managing dynamic command and control situations. Change blindness-the failure to notice visual changes-is an important source of human error. Change History EXplicit (CHEX) is a tool developed to aid change detection and maintain situation awareness; and in the current study we test the generality of its ability to facilitate the detection of changes when this subtask is embedded within a broader dynamic decision-making task. A multitasking air-warfare simulation required participants to perform radar-based subtasks, for which change detection was a necessary aspect of the higher-order goal of protecting one's own ship. In this task, however, CHEX rendered the operator even more vulnerable to attentional failures in change detection and increased perceived workload. Such support was only effective when participants performed a change detection task without concurrent subtasks. Results are interpreted in terms of the NSEEV model of attention behavior (Steelman, McCarley, & Wickens, Hum. Factors 53:142-153, 2011; J. Exp. Psychol. Appl. 19:403-419, 2013), and suggest that decision aids for use in multitasking contexts must be designed to fit within the available workload capacity of the user so that they may truly augment cognition.

  2. Safety Management of a Clinical Process Using Failure Mode and Effect Analysis: Continuous Renal Replacement Therapies in Intensive Care Unit Patients.

    PubMed

    Sanchez-Izquierdo-Riera, Jose Angel; Molano-Alvarez, Esteban; Saez-de la Fuente, Ignacio; Maynar-Moliner, Javier; Marín-Mateos, Helena; Chacón-Alves, Silvia

    2016-01-01

    The failure mode and effect analysis (FMEA) may improve the safety of continuous renal replacement therapies (CRRT) in the intensive care unit. We used this tool in three phases: 1) a retrospective observational study; 2) a process FMEA, with implementation of the improvement measures identified; 3) a cohort study after the FMEA. We included 54 patients in the pre-FMEA group and 72 patients in the post-FMEA group. Comparing the risk frequencies per patient in the two groups, there were fewer cases of filter survival time under 24 hours in the post-FMEA group (31 patients [57.4%] vs. 21 patients [29.6%]; p < 0.05); fewer patients suffered circuit coagulation with inability to return the blood to the patient (25 patients [46.3%] vs. 16 patients [22.2%]; p < 0.05); 54 patients (100%) versus 5 (6.94%) did not receive phosphorus level monitoring (p < 0.05); and in 14 patients (25.9%) versus 0 (0%), the CRRT prescription did not appear in the medical orders. As a measure of improvement, we adopted dynamic dosage management. After the process FMEA, there were several improvements in the management of intensive care unit patients receiving CRRT, and we consider it a useful tool for improving the safety of critically ill patients.

  3. Sleep Overnight Monitoring for Apnea in Patients Hospitalized with Heart Failure (SOMA-HF Study)

    PubMed Central

    Sharma, Sunil; Mather, Paul J.; Chowdhury, Anindita; Gupta, Suchita; Mukhtar, Umer; Willes, Leslee; Whellan, David J.; Malhotra, Atul; Quan, Stuart F.

    2017-01-01

    Introduction: Sleep-disordered breathing (SDB) is highly prevalent in hospitalized patients with congestive heart failure (CHF) and the condition is diagnosed and treated in only a minority of these patients. Portable monitoring (PM) is a screening option, but due to costs and the expertise required, many hospitals may find it impractical to implement. We sought to test the utility of an alternative approach for screening hospitalized CHF patients for SDB, high-resolution pulse oximetry (HRPO). Methods: We conducted a prospective controlled trial of 125 consecutive patients admitted to the hospital with CHF. Simultaneous PM and HRPO for a single night was performed. All but one patient were monitored while breathing room air. The HRPO-derived ODI (oxygen desaturation index) was compared with the PM-derived respiratory event index (REI) using both receiver operator characteristic (ROC) curve analysis and a Bland-Altman plot. Results: Of 105 consecutive CHF patients with analyzable data, 61 (58%) were males with mean age of 64.9 ± 15.1 years and mean body mass index of 30.3 ± 8.3 kg/m2. Of the 105 patients, 10 (9.5%) had predominantly central sleep apnea (central events > 50% of the total events), although central events were noted in 42 (40%) of the patients. The ROC analysis showed an area under the curve of 0.89 for REI > 5 events/h. The Bland-Altman plot showed acceptable agreement with 95% limits of agreement from −28.5 to 33.7 events/h and little bias. Conclusions: We conclude that high-resolution pulse oximetry is a simple and cost-effective screening tool for SDB in CHF patients admitted to the hospital. Such screening approaches may be valuable for large-scale implementation and for the optimal design of interventional trials. Citation: Sharma S, Mather PJ, Chowdhury A, Gupta S, Mukhtar U, Willes L, Whellan DJ, Malhotra A, Quan SF. Sleep overnight monitoring for apnea in patients hospitalized with heart failure (SOMA-HF Study). J Clin Sleep Med. 
2017;13(10):1185–1190. PMID:28859720
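    The screening comparison above rests on two standard computations, ROC area under the curve and Bland-Altman limits of agreement. A minimal pure-Python sketch of both (the data in the test are illustrative, not the study's measurements):

```python
import statistics

def bland_altman(a, b):
    """Bias and 95% limits of agreement between paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)          # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def auc(scores, labels):
    """ROC area via the Mann-Whitney identity: the probability that a
    randomly chosen positive case scores above a random negative one."""
    pos = [s for s, lab in zip(scores, labels) if lab]
    neg = [s for s, lab in zip(scores, labels) if not lab]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

    A wide limits-of-agreement band, as in the study (−28.5 to 33.7 events/h), means individual ODI readings can differ substantially from REI even when the average bias is small.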

  4. Memory Circuit Fault Simulator

    NASA Technical Reports Server (NTRS)

    Sheldon, Douglas J.; McClure, Tucker

    2013-01-01

    Spacecraft are known to experience significant memory part-related failures and problems, both pre- and post-launch. These memory parts include both static and dynamic memories (SRAM and DRAM). The failures manifest themselves in a variety of ways, such as pattern-sensitive failures, timing-sensitive failures, etc. Because of the mission-critical role memory devices play in spacecraft architecture and operation, understanding their failure modes is vital to successful mission operation. To support this need, a generic simulation tool that can model different data patterns in conjunction with variable write and read conditions was developed. This tool is a mathematical and graphical way to embed pattern, electrical, and physical information to perform what-if analysis as part of a root cause failure analysis effort.
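    A toy illustration of the kind of what-if modeling the abstract describes: a bit memory with an injected stuck-at fault and a neighborhood pattern-sensitive fault, exercised by a simplified March-style test. The fault set, locations, and test sequence are hypothetical, not taken from the NASA tool:

```python
class FaultyBitArray:
    """Toy bit memory with two injectable faults (hypothetical examples):
    - stuck-at-0 at address `stuck`: writes of 1 are silently lost;
    - pattern-sensitive at `psf`: reads 0 whenever both neighbors hold 1.
    """
    def __init__(self, size=16, stuck=3, psf=8):
        self.size, self.stuck, self.psf = size, stuck, psf
        self.bits = [0] * size

    def write(self, addr, val):
        self.bits[addr] = 0 if addr == self.stuck and val == 1 else val

    def read(self, addr):
        if (addr == self.psf
                and self.bits[addr - 1] == 1
                and self.bits[(addr + 1) % self.size] == 1):
            return 0
        return self.bits[addr]

def march_test(mem):
    # Simplified March-style sequence: write 0 everywhere, then an
    # ascending read-0/write-1 pass, then an ascending read-1 pass.
    failing = set()
    for a in range(mem.size):
        mem.write(a, 0)
    for a in range(mem.size):
        if mem.read(a) != 0:
            failing.add(a)
        mem.write(a, 1)
    for a in range(mem.size):
        if mem.read(a) != 1:
            failing.add(a)
    return failing
```

    Running the test on the faulty array flags both injected addresses, while a fault-free array passes cleanly; richer data backgrounds catch fault classes this short sequence misses.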

  5. Towards eradication of inappropriate therapies for ICD lead failure by combining comprehensive remote monitoring and lead noise alerts.

    PubMed

    Ploux, Sylvain; Swerdlow, Charles D; Strik, Marc; Welte, Nicolas; Klotz, Nicolas; Ritter, Philippe; Haïssaguerre, Michel; Bordachar, Pierre

    2018-06-02

    Recognition of implantable cardioverter defibrillator (ICD) lead malfunction before life-threatening complications occur is crucial. We aimed to assess the effectiveness of remote monitoring, with or without a lead noise alert, for early detection of ICD lead failure. From October 2013 to April 2017, a median of 1,224 (578-1,958) ICD patients were remotely monitored with comprehensive analysis of all transmitted data. ICD lead failures and subsequent device interventions were prospectively collected in patients with (RMLN) and without (RM) a lead noise alert (Abbott Secure Sense™ or Medtronic Lead Integrity Alert™) in their remote monitoring system. During a follow-up of 4,457 patient-years, 64 lead failures were diagnosed. Sixty-one (95%) of the diagnoses were made before any clinical complication occurred. Inappropriate shocks were delivered in only one patient of each group (3%), with an annual rate of 0.04%. All high-voltage conductor failures were identified remotely by a dedicated impedance alert in 10 patients. Pace-sense component failures were correctly identified by a dedicated alert in 77% (17 of 22) of the RMLN group versus 25% (8 of 32) of the RM group (P = 0.002). The absence of a lead noise alert was associated with a 16-fold increase in the likelihood of initiating either a shock or antitachycardia pacing (OR: 16.0, 95% CI 1.8-143.3; P = 0.01). ICD remote monitoring with systematic review of all transmitted data is associated with a very low rate of inappropriate shocks related to lead failure. Dedicated noise alerts further reduce inappropriate detection of ventricular arrhythmias. © 2018 Wiley Periodicals, Inc.
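    An odds ratio with its confidence interval, like the OR of 16.0 (95% CI 1.8-143.3) reported above, comes from a 2×2 table via the standard Woolf (log-normal) interval. A sketch with hypothetical counts, since the abstract does not give the underlying table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Woolf 95% CI for a 2x2 table:
         exposed:   event = a, no event = b
         unexposed: event = c, no event = d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

    The very wide interval in the study (1.8 to 143.3) is what this formula produces when one or more cells of the table are small.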

  6. Introduction of the Tools for Economic Analysis of Patient Management Interventions in Heart Failure Costing Tool: a user-friendly spreadsheet program to estimate costs of providing patient-centered interventions.

    PubMed

    Reed, Shelby D; Li, Yanhong; Kamble, Shital; Polsky, Daniel; Graham, Felicia L; Bowers, Margaret T; Samsa, Gregory P; Paul, Sara; Schulman, Kevin A; Whellan, David J; Riegel, Barbara J

    2012-01-01

    Patient-centered health care interventions, such as heart failure disease management programs, are under increasing pressure to demonstrate good value. Variability in costing methods and assumptions in economic evaluations of such interventions limit the comparability of cost estimates across studies. Valid cost estimation is critical to conducting economic evaluations and for program budgeting and reimbursement negotiations. Using sound economic principles, we developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure (TEAM-HF) Costing Tool, a spreadsheet program that can be used by researchers and health care managers to systematically generate cost estimates for economic evaluations and to inform budgetary decisions. The tool guides users on data collection and cost assignment for associated personnel, facilities, equipment, supplies, patient incentives, miscellaneous items, and start-up activities. The tool generates estimates of total program costs, cost per patient, and cost per week and presents results using both standardized and customized unit costs for side-by-side comparisons. Results from pilot testing indicated that the tool was well-formatted, easy to use, and followed a logical order. Cost estimates of a 12-week exercise training program in patients with heart failure were generated with the costing tool and were found to be consistent with estimates published in a recent study. The TEAM-HF Costing Tool could prove to be a valuable resource for researchers and health care managers to generate comprehensive cost estimates of patient-centered interventions in heart failure or other conditions for conducting high-quality economic evaluations and making well-informed health care management decisions.

  7. 14 CFR 171.323 - Fabrication and installation requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., using applicable electric and safety codes and Federal Communications Commission (FCC) licensing... time not to exceed 1.5 hours. This measure applies to correction of unscheduled failures of the monitor... measure applies to unscheduled outage, out-of-tolerance conditions, and failures of the monitor...

  8. Detecting Slow Deformation Signals Preceding Dynamic Failure: A New Strategy For The Mitigation Of Natural Hazards (SAFER)

    NASA Astrophysics Data System (ADS)

    Vinciguerra, S.; Colombero, C.; Comina, C.; Umili, G.

    2015-12-01

    Rock slope monitoring is a major aim in territorial risk assessment and mitigation. Site-specific microseismic monitoring systems can detect pre-failure signals in unstable sectors of a rock mass and predict possible acceleration toward failure. To this aim, multi-scale geophysical methods provide a unique tool for high-resolution imaging of the internal structure of the rock mass and for constraining the physical state of the medium. We present here a cross-hole seismic tomography survey, coupled with laboratory ultrasonic velocity measurements and determination of physical properties on rock samples, to characterize the damaged and potentially unstable granitic cliff of Madonna del Sasso (NW Italy). The results yielded two main advances: i) a lithological interpretation of the velocity field obtained at the site, and ii) a systematic correlation of the measured velocities with physical properties (density and porosity) and macroscopic features of the granite (weathering and anisotropy) of the cliff. A microseismic monitoring system developed by the University of Turin/Compagnia San Paolo, consisting of a network of 4 triaxial geophones (4.5 Hz) connected to a 12-channel data logger, was deployed on the unstable granitic cliff. More than 2000 events with different waveforms, durations, and frequency content were recorded between November 2013 and July 2014. By inspecting the acquired events, we identified the key parameters for reliably distinguishing the nature of each signal, i.e., the signal shape (in terms of amplitude, duration, and kurtosis) and the frequency content (maximum frequency and frequency distribution). Four main classes of recorded signals can be recognised: microseismic events, regional earthquakes, electrical noise and calibration signals, and unclassified events (probably grouping rockfalls, quarry blasts, and other anthropic and natural sources of seismic noise).
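    An illustrative sketch of the shape-based screening the abstract describes, separating impulsive microseismic events from continuous electrical noise using kurtosis and duration. The thresholds and class rules are hypothetical, not the network's actual criteria:

```python
import math

def kurtosis(x):
    # Population kurtosis: ~1.5 for a pure sinusoid, ~3 for Gaussian
    # noise, much higher for short impulsive bursts.
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    return sum((v - m) ** 4 for v in x) / n / var ** 2

def classify(signal, fs, k_impulsive=6.0, max_event_s=1.0):
    """Crude two-feature screen; fs is the sampling rate in Hz."""
    duration = len(signal) / fs
    k = kurtosis(signal)
    if k >= k_impulsive and duration <= max_event_s:
        return "microseismic event"
    if k < k_impulsive and duration > max_event_s:
        return "electrical noise"
    return "unclassified"
```

    A real classifier would add the other parameters the abstract lists (amplitude, maximum frequency, frequency distribution), but kurtosis alone already separates a sparse burst from a steady mains hum.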

  9. Award ER25750: Coordinated Infrastructure for Fault Tolerance Systems Indiana University Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumsdaine, Andrew

    2013-03-08

    The main purpose of the Coordinated Infrastructure for Fault Tolerance in Systems initiative has been to conduct research with a goal of providing end-to-end fault tolerance on a system-wide basis for applications and other system software. While fault tolerance has been an integral part of most high-performance computing (HPC) system software developed over the past decade, it has been treated mostly as a collection of isolated stovepipes. Visibility and response to faults have typically been limited to the particular hardware and software subsystems in which they are initially observed. Little fault information is shared across subsystems, allowing little flexibility or control on a system-wide basis and making it practically impossible to provide cohesive end-to-end fault tolerance in support of scientific applications. As an example, consider faults such as communication link failures that can be seen by a network library but are not directly visible to the job scheduler, or faults related to node failures that can be detected by system monitoring software but are not inherently visible to the resource manager. If information about such faults could be shared by the network libraries or monitoring software, then other system software, such as a resource manager or job scheduler, could ensure that failed nodes or failed network links were excluded from further job allocations and that further diagnosis could be performed. As a founding member and one of the lead developers of the Open MPI project, our efforts over the course of this project have focused on making Open MPI more robust to failures by supporting various fault tolerance techniques, and on using fault information exchange and coordination between MPI and the HPC system software stack, from the application, numeric libraries, and programming language runtime to other common system components such as job schedulers, resource managers, and monitoring tools.

  10. Availability analysis of mechanical systems with condition-based maintenance using semi-Markov and evaluation of optimal condition monitoring interval

    NASA Astrophysics Data System (ADS)

    Kumar, Girish; Jain, Vipul; Gandhi, O. P.

    2018-03-01

    Maintenance helps to extend equipment life by improving its condition and avoiding catastrophic failures. An appropriate model or mechanism is thus needed to quantify system availability vis-à-vis a given maintenance strategy, which will assist decision-making for optimal utilization of maintenance resources. This paper deals with semi-Markov process (SMP) modeling for steady-state availability analysis of mechanical systems that follow condition-based maintenance (CBM), and with evaluation of the optimal condition monitoring interval. The developed SMP model is solved using a two-stage analytical approach for steady-state availability analysis of the system, and the CBM interval that maximizes system availability is determined using a genetic algorithm. The main contribution of the paper is a predictive tool for system availability that helps in deciding the optimum CBM policy. The proposed methodology is demonstrated for a centrifugal pump.
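The steady-state availability of a semi-Markov process is the time-weighted occupancy of the up states, A = Σ_{i∈U} π_i τ_i / Σ_i π_i τ_i, where π is the stationary distribution of the embedded chain and τ_i the mean sojourn times. A minimal sketch (the three-state model and its numbers are illustrative, not from the paper, which also optimizes the CBM interval with a genetic algorithm rather than evaluating a single model):

```python
import numpy as np

def smp_availability(P, tau, up_states):
    """Steady-state availability of a semi-Markov process.

    P         : embedded-chain transition matrix (rows sum to 1)
    tau       : mean sojourn time in each state
    up_states : indices of states in which the system is operational
    """
    n = P.shape[0]
    # Stationary distribution of the embedded chain: pi P = pi, sum(pi) = 1.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    w = pi * tau                     # time-weighted state occupancy
    return w[up_states].sum() / w.sum()

# Hypothetical 3-state model: 0 = operating, 1 = preventive maintenance, 2 = failed
P = np.array([[0.0, 0.9, 0.1],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
tau = np.array([500.0, 8.0, 48.0])   # mean sojourn times in hours (illustrative)
print(round(smp_availability(P, tau, [0]), 4))   # → 0.9766
```

Sweeping the monitoring interval would change P and tau (shorter intervals catch more defects but cost more downtime), and the optimum is the interval maximizing A.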

  11. Reusable rocket engine turbopump condition monitoring

    NASA Technical Reports Server (NTRS)

    Hampson, M. E.

    1984-01-01

    Significant improvements in engine readiness, with reductions in maintenance costs and turn-around times, can be achieved with an engine condition monitoring system (CMS). The CMS provides health status of critical engine components, without disassembly, through monitoring with advanced sensors. Engine failure reports over 35 years were categorized into 20 different modes of failure. Rotor bearings and turbine blades were determined to be the most critical in limiting turbopump life. Measurement technologies were matched to each of the failure modes identified. Three were selected to monitor the rotor bearings and turbine blades: the isotope wear detector and fiberoptic deflectometer (bearings), and the fiberoptic pyrometer (blades). Signal processing algorithms were evaluated for their ability to provide useful health data to maintenance personnel. Design modifications to the Space Shuttle Main Engine (SSME) high-pressure turbopumps were developed to incorporate the sensors. Laboratory test fixtures have been designed for monitoring the rotor bearings and turbine blades under simulated turbopump operating conditions.

  12. The microbiome as engineering tool: Manufacturing and trading between microorganisms.

    PubMed

    De Vrieze, Jo; Christiaens, Marlies E R; Verstraete, Willy

    2017-10-25

    The integration of microbial technologies within the framework of the water-energy nexus has been taking place for over a century, but mixed microbial communities are still treated as 'black boxes' that are hard to deal with. Process steering is mainly based on avoiding process failure by monitoring conventional parameters, e.g., pH and temperature, which often leads to operation far below the intrinsic potential. Mixed microbial communities do not reflect a randomised individual mix, but an interacting microbiological entity. Advanced monitoring to obtain effective engineering of the microbiome is achievable, and even crucial to obtain the desired performance and products. This can be achieved via a top-down or a bottom-up approach. The top-down strategy is reflected in the microbial resource management concept, which considers the microbial community as a well-structured network. This network can be monitored by means of molecular techniques that will allow the development of accurate and quick decision tools. In contrast, the bottom-up approach makes use of synthetic cultures that can be composed from defined axenic cultures, based on the requirements of the process under consideration. The success of both approaches depends on real-time monitoring and control. Of particular importance is the need to identify and characterise the key players in the process. These key players relate not only to the establishment of functional conversions, but also to the interaction between partner bacteria. This emphasises the importance of molecular (screening) techniques to obtain structural and functional insights, minimise energy input, and maximise product output by means of integrated microbiome processes. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Integrated health management and control of complex dynamical systems

    NASA Astrophysics Data System (ADS)

    Tolani, Devendra K.

    2005-11-01

    A comprehensive control and health management strategy for human-engineered complex dynamical systems is formulated for achieving high performance and reliability over a wide range of operation. Results from diverse research areas such as Probabilistic Robust Control (PRC), Damage Mitigating/Life Extending Control (DMC), Discrete Event Supervisory (DES) Control, Symbolic Time Series Analysis (STSA) and Health and Usage Monitoring System (HUMS) have been employed to achieve this goal. Continuous-domain control modules at the lower level are synthesized by PRC and DMC theories, whereas the upper-level supervision is based on DES control theory. In the PRC approach, by allowing different levels of risk under different flight conditions, the control system can achieve the desired trade-off between stability robustness and nominal performance. In the DMC approach, component damage is incorporated in the control law to reduce the damage rate for enhanced structural durability. The DES controller monitors the system performance and, based on the mission requirements (e.g., performance metrics and level of damage mitigation), switches among various lower-level controllers. The core idea is to design a framework where the DES controller at the upper level mimics human intelligence and makes appropriate decisions to satisfy mission requirements and enhance system performance and structural durability. Recently developed tools in STSA have been used for anomaly detection and failure prognosis. The DMC deals with the usage monitoring or operational control part of health management, whereas the issue of health monitoring is addressed by the anomaly detection tools. The proposed decision and control architecture has been validated on two test-beds, simulating the operations of rotorcraft dynamics and aircraft propulsion.

  14. Automating usability of ATLAS Distributed Computing resources

    NASA Astrophysics Data System (ADS)

    Tupputi, S. A.; Di Girolamo, A.; Kouba, T.; Schovancová, J.; Atlas Collaboration

    2014-06-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and to allow performance-enhancing actions that improve the reliability of the system. A crucial case in this respect is the automatic handling of outages of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool provides a suitable solution by employing an inference algorithm that processes the history of storage monitoring test outcomes. SAAB accomplishes two tasks: providing global monitoring and performing automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of storage area monitoring and central management at all levels. This review involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows the status of storage resources to be monitored with fine time granularity and automatic actions to be taken in foreseen cases, such as automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up on problems, where and when needed. In this work we show SAAB's working principles and features. We also present the decrease in human interactions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
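The abstract does not spell out SAAB's inference algorithm, so the following is only a toy stand-in for the general idea: a sliding-window rule that blacklists a storage area when most of its recent monitoring tests fail. The function name, window size and threshold are hypothetical:

```python
def blacklist_decision(history, window=6, max_failures=4):
    """Toy blacklisting rule over a history of test outcomes.

    history      : iterable of booleans, True = test passed, oldest first
    window       : number of most recent tests to consider
    max_failures : failures within the window that trigger blacklisting
    """
    recent = list(history)[-window:]
    failures = sum(1 for ok in recent if not ok)
    return failures >= max_failures
```

A production system would also need hysteresis (separate whitelisting criteria) so that a single passing test does not flap a site in and out of the blacklist.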

  15. Prevalence and Predictors of Immunological Failure among HIV Patients on HAART in Southern Ethiopia.

    PubMed

    Yirdaw, Kesetebirhan Delele; Hattingh, Susan

    2015-01-01

    Immunological monitoring is part of the standard of care for patients on antiretroviral treatment. Yet, little is known about the routine implementation of immunological laboratory monitoring and its utilization in clinical care in Ethiopia. This study assessed the pattern of immunological monitoring, immunological response, the level of immunological treatment failure and factors related to it among patients on antiretroviral therapy in selected hospitals in southern Ethiopia. A retrospective longitudinal analytic study was conducted using the documents of patients started on antiretroviral therapy. Adequacy of timely immunological monitoring was assessed every six months in the first year and every year thereafter. Immunological response was assessed every six months at cohort level. Immunological failure was based on the criteria: fall of follow-up CD4 cell count to baseline (or below), CD4 levels persisting below 100 cells/mm3, or a 50% fall from the on-treatment peak value. Review of 1,321 patient documents revealed that timely immunological monitoring was inadequate. Immunological response was adequate, with pediatric patients, females, those with less advanced illness (baseline WHO Stage I or II) and those with higher baseline CD4 cell counts found to have better immunological recovery. Thirty-nine patients (3%) were not evaluated for immunological failure because they had frequent treatment interruptions. Despite overall adequate immunological response at group level, the prevalence of those who ever experienced immunological failure was 17.6% (n=226), dropping to 11.5% (n=147) after subsequent re-evaluation. Having WHO Stage III/IV disease or a higher CD4 cell count at baseline was identified as a risk for immunological failure. Few patients with confirmed failure were switched to second-line therapy. These findings highlight the magnitude of the problem of immunological failure and the gap in its management. Prioritizing care for high-risk patients may help in effective utilization of meager resources.
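The three failure criteria quoted in the abstract can be applied mechanically to a CD4 time series. A sketch, with "persisting below 100 cells/mm3" operationalized as two consecutive low values (an assumption; the study's exact persistence rule is not stated):

```python
def immunological_failure(baseline, cd4_series):
    """Apply the abstract's failure criteria to on-treatment CD4 counts.

    baseline   : pre-treatment CD4 count (cells/mm^3)
    cd4_series : on-treatment CD4 counts, oldest first
    """
    if not cd4_series:
        return False
    peak = max(cd4_series)
    current = cd4_series[-1]
    fall_to_baseline = current <= baseline
    # "Persisting" below 100 taken here as the last two measurements.
    persistent_low = len(cd4_series) >= 2 and all(c < 100 for c in cd4_series[-2:])
    fall_from_peak = current <= 0.5 * peak
    return fall_to_baseline or persistent_low or fall_from_peak
```

This is the kind of rule a clinic register review would apply patient by patient to estimate the failure prevalence reported above.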

  16. Prevalence and Predictors of Immunological Failure among HIV Patients on HAART in Southern Ethiopia

    PubMed Central

    2015-01-01

    Immunological monitoring is part of the standard of care for patients on antiretroviral treatment. Yet, little is known about the routine implementation of immunological laboratory monitoring and its utilization in clinical care in Ethiopia. This study assessed the pattern of immunological monitoring, immunological response, the level of immunological treatment failure and factors related to it among patients on antiretroviral therapy in selected hospitals in southern Ethiopia. A retrospective longitudinal analytic study was conducted using the documents of patients started on antiretroviral therapy. Adequacy of timely immunological monitoring was assessed every six months in the first year and every year thereafter. Immunological response was assessed every six months at cohort level. Immunological failure was based on the criteria: fall of follow-up CD4 cell count to baseline (or below), CD4 levels persisting below 100 cells/mm3, or a 50% fall from the on-treatment peak value. Review of 1,321 patient documents revealed that timely immunological monitoring was inadequate. Immunological response was adequate, with pediatric patients, females, those with less advanced illness (baseline WHO Stage I or II) and those with higher baseline CD4 cell counts found to have better immunological recovery. Thirty-nine patients (3%) were not evaluated for immunological failure because they had frequent treatment interruptions. Despite overall adequate immunological response at group level, the prevalence of those who ever experienced immunological failure was 17.6% (n=226), dropping to 11.5% (n=147) after subsequent re-evaluation. Having WHO Stage III/IV disease or a higher CD4 cell count at baseline was identified as a risk for immunological failure. Few patients with confirmed failure were switched to second-line therapy. These findings highlight the magnitude of the problem of immunological failure and the gap in its management. Prioritizing care for high-risk patients may help in effective utilization of meager resources. PMID:25961732

  17. Long term real-time monitoring of large alpine rockslides by GB-InSAR: mechanisms, triggers, scenario assessment and Early Warning

    NASA Astrophysics Data System (ADS)

    Crosta, G. B.; Agliardi, F.; Sosio, R.; Rivolta, C.; Leva, D.; Dei Cas, L.

    2012-04-01

    Large rockslides in alpine valleys can undergo catastrophic evolution, posing extraordinary risks to settlements, lives and critical infrastructure. These phenomena are controlled by a complex interplay of lithological, structural, hydrological and meteo-climatic factors, which result in complex triggering mechanisms and kinematics, highly variable activity, and regressive to progressive trends with superimposed acceleration and deceleration periods related to rainfall and snowmelt. Managing large rockslide risk remains challenging, due to the high uncertainty in their geological model and dynamics. In this context, the most promising approach to constraining rockslide kinematics, establishing correlations with triggering factors, and predicting future displacements, velocity, acceleration and, eventually, possible final collapse is the analysis and modelling of long-term series of monitoring data. Beyond traditional monitoring activities, remote sensing is an important tool for describing local rockslide displacements and kinematics, distinguishing rates of activity, and providing real-time data suitable for early warning. We analyze a long-term monitoring dataset collected for a deep-seated rockslide (Ruinon, Lombardy, Italy), actively monitored since 1997 through an in situ monitoring network (topographic and GPS, wire extensometers and distometer baselines) and since 2006 by ground-based radar (GB-InSAR). Monitoring made it possible to set up and update the geological model, identify the rockslide extent and geometry, analyze its sensitivity to seasonal changes, and assess their impact on the reliability and EW potential of monitoring data. GB-InSAR data allowed identification of sub-areas with different behaviors, associated with outcropping bedrock and thick debris cover, and the set-up of a "virtual monitoring network" by a posteriori selection of critical locations. The resulting displacement time series provide a large amount of information even in debris-covered areas, where traditional monitoring fails. Such spatially distributed, improved information, validated by selected ground-based measurements, allowed us to establish new velocity thresholds for EW purposes. Relationships between rainfall and displacement rates made it possible to identify different possible failure mechanisms and to constrain the applicability of rainfall EW thresholds. Comparison with temperature and snowmelt time series clarified the sensitivity of the rockslide movement to these controlling factors. Finally, recognition of the sensitivity to all these factors allowed us to accomplish a more complete hazard assessment by defining different failure scenarios and the associated triggering thresholds.
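A velocity-threshold early-warning check of the kind described above reduces to mapping a displacement rate onto alert levels. Purely illustrative sketch; the threshold values and level names are hypothetical, not those adopted for the Ruinon rockslide:

```python
def alert_level(velocity_mm_day, thresholds=(2.0, 10.0, 50.0)):
    """Map a displacement rate (mm/day) to an early-warning level.

    Thresholds must be sorted ascending; values here are illustrative only.
    """
    labels = ("ordinary", "attention", "alarm", "emergency")
    # Count how many thresholds the measured rate meets or exceeds.
    level = sum(velocity_mm_day >= t for t in thresholds)
    return labels[level]
```

In practice such thresholds are calibrated per sub-area from the GB-InSAR time series, since debris-covered and bedrock sectors move at different characteristic rates.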

  18. Continuous ECG Monitoring in Patients With Acute Coronary Syndrome or Heart Failure: EASI Versus Gold Standard.

    PubMed

    Lancia, Loreto; Toccaceli, Andrea; Petrucci, Cristina; Romano, Silvio; Penco, Maria

    2018-05-01

    The purpose of the study was to compare the EASI system with the standard 12-lead surface electrocardiogram (ECG) for accuracy in detecting the main electrocardiographic parameters (J point, PR, QT, and QRS) commonly monitored in patients with acute coronary syndromes or heart failure. In this observational comparative study, 253 patients consecutively admitted to the coronary care unit with acute coronary syndrome or heart failure were evaluated. In all patients, two complete 12-lead ECGs were acquired simultaneously. A total of 6,072 electrocardiographic leads were compared (3,036 standard and 3,036 EASI). No significant differences were found between the investigated parameters of the two measurement methods, either in patients with acute coronary syndrome or in those with heart failure. This study confirmed the accuracy of the EASI system in monitoring the main ECG parameters in patients admitted to the coronary care unit with acute coronary syndrome or heart failure.

  19. Rate-based structural health monitoring using permanently installed sensors

    PubMed Central

    2017-01-01

    Permanently installed sensors are becoming increasingly ubiquitous, facilitating very frequent in situ measurements and consequently improved monitoring of ‘trends’ in the observed system behaviour. It is proposed that this newly available data may be used to provide prior warning and forecasting of critical events, particularly system failure. Numerous damage mechanisms are examples of positive feedback; they are ‘self-accelerating’ with an increasing rate of damage towards failure. The positive feedback leads to a common time-response behaviour which may be described by an empirical relation allowing prediction of the time to criticality. This study focuses on Structural Health Monitoring of engineering components; failure times are projected well in advance of failure for fatigue, creep crack growth and volumetric creep damage experiments. The proposed methodology provides a widely applicable framework for using newly available near-continuous data from permanently installed sensors to predict time until failure in a range of application areas including engineering, geophysics and medicine. PMID:28989308
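One common empirical form for such self-accelerating damage is an inverse-rate relation: the reciprocal of the damage rate decays roughly linearly to zero at the failure time, so a straight-line fit to the inverse rate yields a failure-time forecast. Whether this matches the paper's exact empirical relation is an assumption of this sketch:

```python
import numpy as np

def failure_time_inverse_rate(t, rate):
    """Forecast time-to-failure from monitoring data.

    Fits a straight line to 1/rate versus time; for positive-feedback
    (self-accelerating) damage this quantity is assumed to decay linearly
    to zero at the failure time.
    """
    inv = 1.0 / np.asarray(rate, dtype=float)
    slope, intercept = np.polyfit(np.asarray(t, dtype=float), inv, 1)
    return -intercept / slope   # time at which 1/rate extrapolates to zero
```

With near-continuous data from permanently installed sensors, the fit can be refreshed at every new measurement, tightening the forecast as failure approaches.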

  20. Failure of platelet parameters and biomarkers to correlate platelet function to severity and etiology of heart failure in patients enrolled in the EPCOT trial. With special reference to the Hemodyne hemostatic analyzer. Whole Blood Impedance Aggregometry for the Assessment of Platelet Function in Patients with Congestive Heart Failure.

    PubMed

    Serebruany, Victor L; McKenzie, Marcus E; Meister, Andrew F; Fuzaylov, Sergey Y; Gurbel, Paul A; Atar, Dan; Gattis, Wendy A; O'Connor, Christopher M

    2002-01-01

    Data from small studies have suggested the presence of platelet abnormalities in patients with congestive heart failure (CHF). We sought to characterize the diagnostic utility of different platelet parameters and platelet-endothelial biomarkers in a random outpatient CHF population investigated in the EPCOT ('Whole Blood Impedance Aggregometry for the Assessment of Platelet Function in Patients with Congestive Heart Failure') Trial. Blood samples were obtained for measurement of platelet contractile force (PCF), whole blood aggregation, shear-induced closure time, expression of glycoprotein (GP) IIb/IIIa, and P-selectin in 100 consecutive patients with CHF. Substantial interindividual variability of platelet characteristics exists in patients with CHF. There were no statistically significant differences when patients were grouped according to incidence of vascular events, emergency revascularization needs, survival, or etiology of heart failure. Aspirin use did not affect instrument readings either. PCF correlates very poorly with whole blood aggregometry (r(2) = 0.023), closure time (r(2) = 0.028), platelet GP IIb/IIIa (r(2) = 0.0028), and P-selectin (r(2) = 0.002) expression. Furthermore, there was no correlation with brain natriuretic peptide concentrations, a marker of severity and prognosis in heart failure reflecting the neurohumoral status. Patients with heart failure enrolled in the EPCOT Trial exhibited a marginal, sometimes oppositely directed change in platelet function, challenging the diagnostic utility of these platelet parameters and biomarkers to serve as useful tools for the identification of platelet abnormalities, for predicting clinical outcomes, or for monitoring antiplatelet strategies in this population. The usefulness of these measurements for assessing platelets in the different clinical settings remains to be explored. 
Taken together, and contrary to our expectations, the major clinical characteristics of heart failure did not correlate well with the platelet characteristics investigated in this study. Copyright 2002 S. Karger AG, Basel

  1. Practical Insight to Monitor Home NIV in COPD Patients.

    PubMed

    Arnal, Jean-Michel; Texereau, Joëlle; Garnero, Aude

    2017-08-01

    Home noninvasive ventilation (NIV) is used in COPD patients with concomitant chronic hypercapnic respiratory failure to correct nocturnal hypoventilation and improve sleep quality, quality of life, and survival. Monitoring of home NIV is needed to assess the effectiveness of ventilation and adherence to therapy, resolve potential adverse effects, reinforce patient knowledge, provide maintenance of the equipment, and readjust the ventilator settings as the patient's condition changes. Clinical monitoring is very informative. Anamnesis focuses on the improvement of nocturnal hypoventilation symptoms, sleep quality, and side effects of NIV. Side effects are a major cause of intolerance; screening for them leads to modification of the interface, gas humidification, or ventilator settings. Home care providers maintain the ventilator and interface and educate patients in correct use; however, patient education should be supervised by specialized clinicians. Blood gas measurement shows a significant decrease in PaCO2 when NIV is effective. Analysis of ventilator data is very useful to assess daily use, unintentional leaks, upper airway obstruction, and patient-ventilator synchrony. Nocturnal oximetry and capnography are additional monitoring tools to assess the impact of NIV on gas exchange. In the near future, telemonitoring will reinforce and change the organization of home NIV for COPD patients.

  2. A sneak peek into digital innovations and wearable sensors for cardiac monitoring.

    PubMed

    Michard, Frederic

    2017-04-01

    Many mobile phone and tablet applications have been designed to control cardiovascular risk factors (obesity, smoking, sedentary lifestyle, diabetes and hypertension) or to optimize treatment adherence. Some have been shown to be useful, but the long-term benefits remain to be demonstrated. Digital stethoscopes make the interpretation of abnormal heart sounds easier, and the development of pocket-sized echo machines may quickly and significantly expand the use of ultrasound. Daily home monitoring of pulmonary artery pressures with wireless implantable sensors has been shown to be associated with a significant decrease in hospital readmissions for heart failure. There are more and more non-invasive, wireless, wearable sensors designed to monitor heart rate, heart rate variability, respiratory rate, arterial oxygen saturation, and thoracic fluid content. They have the potential to change the way we monitor and treat patients with cardiovascular diseases in the hospital and beyond. Some may improve quality of care, decrease the number of medical visits and hospitalizations, and ultimately reduce health care costs. Validation and outcome studies are needed to clarify, among the growing number of digital innovations and wearable sensors, which tools have real clinical value.

  3. Deep Space Network Antenna Logic Controller

    NASA Technical Reports Server (NTRS)

    Ahlstrom, Harlow; Morgan, Scott; Hames, Peter; Strain, Martha; Owen, Christopher; Shimizu, Kenneth; Wilson, Karen; Shaller, David; Doktomomtaz, Said; Leung, Patrick

    2007-01-01

    The Antenna Logic Controller (ALC) software controls and monitors the motion control equipment of the 4,000-metric-ton structure of the Deep Space Network 70-meter antenna. This program coordinates the control of 42 hydraulic pumps while monitoring several interlocks for personnel and equipment safety. Remote operation of the ALC runs via the Antenna Monitor & Control (AMC) computer, which orchestrates the tracking functions of the entire antenna. The software provides a graphical user interface for local control, monitoring, and fault identification, as well as high-level digital control of the axis brakes so that the AMC servo may control the motion of the antenna. Specific functions of the ALC also include routines for startup in cold weather, controlled shutdown in both normal and fault situations, and pump switching on failure. The increased monitoring, the ability to trend key performance characteristics, the improved fault detection and recovery, the centralization of all control at a single panel, and the simplification of the user interface have all reduced the workforce required to run 70-meter antennas. The ALC also increases antenna availability by reducing the time required to start up the antenna and to diagnose faults, and by providing additional insight into the performance of key parameters, which aids preventive maintenance to avoid key element failure. The ALC User Display (AUD) is a graphical user interface with a hierarchical display structure, which provides high-level status information on the operation of the ALC as well as detailed information on virtually all aspects of the ALC via drill-down displays. The operational status of an item, be it a function or an assembly, is shown in the higher-level display; pressing the item on the display screen opens a new screen with more detail of that function or assembly. Navigation tools and the map button allow immediate access to all screens.

  4. Ultrasonic Evaluation of Fatigue Damage

    NASA Astrophysics Data System (ADS)

    Bayer, P.; Singher, L.; Notea, A.

    2004-02-01

    Despite the fact that most engineers and designers are aware of fatigue, many severe breakdowns of industrial plant and machinery still occur because of it. Indeed, it has been estimated that fatigue causes at least 80% of the failures in modern engineering components. From an operational point of view, detecting fatigue damage, preferably at a very early stage, is critically important to prevent possible catastrophic equipment failure and associated losses. This paper describes an investigation of ultrasonic waves as a potential tool for early detection of fatigue damage. The parameters investigated were the ultrasonic wave velocities (longitudinal and transverse) and the attenuation coefficient, measured before fatigue damage and after progressive stages of fatigue. Although comparatively small uncertainties were observed, the feasibility of using ultrasonic wave velocity as a fatigue monitor was barely substantiated under the research conditions. However, careful measurements of the ultrasonic attenuation parameter demonstrated its potential to provide an early assessment of damage during fatigue.

  5. Progress feedback and the OQ-system: The past and the future.

    PubMed

    Lambert, Michael J

    2015-12-01

    A serious problem in routine clinical practice is clinician optimism about the benefit clients derive from the therapy they offer, compared to measured benefits. The consequence of seeing the silver lining is a failure to identify cases that, in the end, leave treatment worse off than when they started or are simply unaffected. It has become clear that some methods of measuring, monitoring, and providing feedback to clinicians about client mental health status over the course of routine care improve treatment outcomes for clients at risk of treatment failure (Shimokawa, Lambert, & Smart, 2010), and thus are a remedy for therapist optimism by identifying cases at risk of poor outcomes. The current article presents research findings related to the use of the Outcome Questionnaire-45 and Clinical Support Tools for this purpose. The necessary characteristics of feedback systems that work to benefit clients' well-being are identified. In addition, suggestions for future research and use in routine care are presented. (c) 2015 APA, all rights reserved.

  6. Construct validity of the Heart Failure Screening Tool (Heart-FaST) to identify heart failure patients at risk of poor self-care: Rasch analysis.

    PubMed

    Reynolds, Nicholas A; Ski, Chantal F; McEvedy, Samantha M; Thompson, David R; Cameron, Jan

    2018-02-14

    The aim of this study was to psychometrically evaluate the Heart Failure Screening Tool (Heart-FaST) via: (1) examination of internal construct validity; (2) testing of scale function in accordance with design; and (3) recommendation of changes, if items are not well adjusted, to improve its psychometric credentials. Self-care is vital to the management of heart failure. The Heart-FaST may provide a prospective assessment of risk, regarding the likelihood that patients with heart failure will engage in self-care. Psychometric validation of the Heart-FaST was performed using Rasch analysis. The Heart-FaST, a nurse-administered tool for screening heart failure patients at risk of poor self-care, was administered to 135 patients (median age = 68, IQR = 59-78 years; 105 males) enrolled in a multidisciplinary heart failure management program. A Rasch analysis of responses was conducted, testing the data against Rasch model expectations, including whether items serve as unbiased, non-redundant indicators of risk that measure a single construct, and whether rating scales operate as intended. The results showed that the data met Rasch model expectations after rescoring or deleting items due to poor discrimination, disordered thresholds, differential item functioning, or response dependence. There was no evidence of multidimensionality, which supports the use of total scores from the Heart-FaST as indicators of risk. Aggregate scores from this modified screening tool rank heart failure patients according to their "risk of poor self-care", demonstrating that the Heart-FaST items constitute a meaningful scale to identify heart failure patients at risk of poor engagement in heart failure self-care. © 2018 John Wiley & Sons Ltd.

  7. Intracranial pressure monitoring in pediatric and adult patients with hydrocephalus and tentative shunt failure: a single-center experience over 10 years in 146 patients.

    PubMed

    Sæhle, Terje; Eide, Per Kristian

    2015-05-01

    OBJECT In patients with hydrocephalus and shunts, lasting symptoms such as headache and dizziness may be indicative of shunt failure, which may necessitate shunt revision. In cases of doubt, the authors monitor intracranial pressure (ICP) to determine the presence of over- or underdrainage of CSF to tailor management. In this study, the authors reviewed their experience of ICP monitoring in shunt failure. The aims of the study were to identify the complications and impact of ICP monitoring, as well as to determine the mean ICP and characteristics of the cardiac-induced ICP waves in pediatric versus adult over- and underdrainage. METHODS The study population included all pediatric and adult patients with hydrocephalus and shunts undergoing diagnostic ICP monitoring for tentative shunt failure during the 10-year period from 2002 to 2011. The patients were allocated into 3 groups depending on how they were managed following ICP monitoring: no drainage failure, overdrainage, or underdrainage. While patients with no drainage failure were managed conservatively without further actions, over- or underdrainage cases were managed with shunt revision or shunt valve adjustment. The ICP and ICP wave scores were determined from the continuous ICP waveforms. RESULTS The study population included 71 pediatric and 75 adult patients. There were no major complications related to ICP monitoring, but 1 patient was treated for a postoperative superficial wound infection and another experienced a minor bleed at the tip of the ICP sensor. Following ICP monitoring, shunt revision was performed in 74 (51%) of 146 patients, while valve adjustment was conducted in 17 (12%) and conservative measures without any actions in 55 (38%). Overdrainage was characterized by a higher percentage of episodes with negative mean ICP less than -5 to -10 mm Hg. The ICP wave scores, in particular the mean ICP wave amplitude (MWA), best differentiated underdrainage. 
Neither mean ICP nor MWA levels showed any significant association with age. CONCLUSIONS In this cohort of pediatric and adult patients with hydrocephalus and tentative shunt failure, the risk of ICP monitoring was very low, and monitoring helped the authors avoid shunt revision in 49% of the patients. Mean ICP best differentiated overdrainage, which was characterized by a higher percentage of episodes with mean ICP below -5 to -10 mm Hg. Underdrainage was best characterized by elevated MWA values, indicative of impaired intracranial compliance.

  8. Telemonitoring in heart failure: Big Brother watching over you.

    PubMed

    Dierckx, R; Pellicori, P; Cleland, J G F; Clark, A L

    2015-01-01

Heart failure (HF) is a leading cause of hospitalisations in older people. Several strategies, supported by novel technologies, are now available to monitor patients' health from a distance. Although studies have suggested that remote monitoring may reduce HF hospitalisations and mortality, the study of different patient populations, the use of different monitoring technologies and the use of different endpoints limit the generalisability of the results of the clinical trials reported so far. In this review, we discuss the existing home monitoring modalities and relevant trials, and focus on future directions for telemonitoring.

  9. DMS augmented monitoring and diagnosis application (DMS AMDA) prototype

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Boyd, Mark A.; Iverson, David L.; Donnell, Brian; Lauritsen, Janet; Doubek, Sharon; Gibson, Jim; Monahan, Christine; Rosenthal, Donald A.

    1993-01-01

The Data Management System Augmented Monitoring and Diagnosis Application (DMS AMDA) is currently under development at NASA Ames Research Center (ARC). It will provide automated monitoring and diagnosis capabilities for the Space Station Freedom (SSF) Data Management System (DMS) in the Control Center Complex (CCC) at NASA Johnson Space Center. Several advanced automation applications are under development for use in the CCC for other SSF subsystems. The DMS AMDA, however, is the first application to utilize digraph failure analysis techniques and the Extended Realtime FEAT (ERF) application as the core of its diagnostic system design, since the other projects were begun before the digraph tools were available. Model-based diagnosis and expert systems techniques will provide additional capabilities and augment ERF where appropriate. Capturing system knowledge in digraphs during the design phase should yield both cost savings and a technical advantage during implementation of the diagnostic software. This paper addresses both the programmatic and technical considerations of this approach, and describes the software design and initial prototyping effort.

  10. mcrA Gene abundance correlates with hydrogenotrophic methane production rates in full-scale anaerobic waste treatment systems.

    PubMed

    Morris, R L; Tale, V P; Mathai, P P; Zitomer, D H; Maki, J S

    2016-02-01

Anaerobic treatment is a sustainable and economical technology for waste stabilization and production of methane as a renewable energy source. However, the process is under-utilized due to operational challenges. Organic overload or toxicants can stress the microbial community that performs waste degradation, resulting in system failure. In addition, not all methanogenic microbial communities are equally capable of consistent, maximum biogas production. Opinion varies as to which parameters should be used to monitor the fitness of digester biomass, and no standard molecular tools are currently in use to monitor and compare full-scale operations. It was hypothesized that the number of gene copies of mcrA, a methanogen-specific gene, would positively correlate with specific methanogenic activity (SMA) rates in biomass samples from six full-scale anaerobic digester systems. Positive correlations were observed between mcrA gene copy numbers and methane production rates against H2:CO2 and propionate (R2 = 0.67-0.70, P < 0.05) but not acetate (R2 = 0.49, P > 0.05). Results from this study indicate that mcrA-targeted qPCR can be used as an alternate tool to monitor and compare certain methanogen communities in anaerobic digesters. Using quantitative PCR (qPCR), we demonstrate that the abundance of mcrA correlated with SMA measurements when H2 and CO2, or propionate, were provided as substrates, but not when acetate was the substrate. SMA values are often used as a fitness indicator of anaerobic biomass; results from qPCR can be obtained within a day, while SMA analysis requires days to weeks to complete. Therefore, qPCR for mcrA abundance is a sensitive and fast method to compare and monitor the fitness of certain anaerobic biomass. As a monitoring tool, qPCR of mcrA will help anaerobic digester operators optimize treatment and encourage more widespread use of this valuable technology. © 2015 The Society for Applied Microbiology.
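The reported R2 values are coefficients of determination from correlating gene copy numbers with activity rates. A minimal sketch of that calculation, with wholly hypothetical sample data (the study's raw measurements are not reproduced here):

```python
from statistics import mean

def r_squared(x, y):
    # Coefficient of determination for a simple linear fit (Pearson r^2).
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Hypothetical digester samples: mcrA copies per gram of biomass versus
# SMA rate on an H2:CO2 substrate (units and values invented for illustration).
mcra_copies = [2.1e8, 4.5e8, 6.8e8, 9.9e8, 1.3e9, 1.6e9]
sma_rates = [0.21, 0.35, 0.52, 0.71, 0.88, 1.05]
print(round(r_squared(mcra_copies, sma_rates), 2))
```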

  11. A System for Fault Management for NASA's Deep Space Habitat

    NASA Technical Reports Server (NTRS)

    Colombano, Silvano P.; Spirkovska, Liljana; Aaseng, Gordon B.; Mccann, Robert S.; Baskaran, Vijayakumar; Ossenfort, John P.; Smith, Irene Skupniewicz; Iverson, David L.; Schwabacher, Mark A.

    2013-01-01

    NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies with ground control of as long as 20+ minutes make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and Graphic User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy.

  12. A System for Fault Management and Fault Consequences Analysis for NASA's Deep Space Habitat

    NASA Technical Reports Server (NTRS)

    Colombano, Silvano; Spirkovska, Liljana; Baskaran, Vijaykumar; Aaseng, Gordon; McCann, Robert S.; Ossenfort, John; Smith, Irene; Iverson, David L.; Schwabacher, Mark

    2013-01-01

NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies with ground control of as long as 20+ minutes make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and Graphic User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy.

  13. Integrating Oil Debris and Vibration Measurements for Intelligent Machine Health Monitoring. Degree awarded by Toledo Univ., May 2002

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.

    2003-01-01

A diagnostic tool for detecting damage to gears was developed. Two different measurement technologies, oil debris analysis and vibration, were integrated into a health monitoring system for detecting surface fatigue pitting damage on gears. This integrated system showed improved detection and decision-making capabilities as compared to using individual measurement technologies. This diagnostic tool was developed and evaluated experimentally by collecting vibration and oil debris data from fatigue tests performed in the NASA Glenn Spur Gear Fatigue Rig. An oil debris sensor and two vibration algorithms were adapted as the diagnostic tools. An inductance-type oil debris sensor was selected for the oil analysis measurement technology. Gear damage data for this type of sensor were limited to data collected in the NASA Glenn test rigs; for this reason, the analysis included development of a parameter for detecting gear pitting damage using this type of sensor. The vibration data were used to calculate two previously available gear vibration diagnostic algorithms, selected based on their maturity and published success in detecting damage to gears. Oil debris and vibration features were then developed using fuzzy-logic analysis techniques and input into a multisensor data fusion process. Results show that combining the vibration and oil debris measurement technologies improves the detection of pitting damage on spur gears. As a result of this research, this new diagnostic tool has significantly improved detection of gear damage in the NASA Glenn Spur Gear Fatigue Rigs. This research also produced several other findings that will improve the development of future health monitoring systems. Oil debris analysis was found to be more reliable than vibration analysis for detecting pitting fatigue failure of gears and is capable of indicating damage progression. Also, some vibration algorithms are as sensitive to operational effects as they are to damage. Another finding was that clear threshold limits must be established for diagnostic tools. Based on additional experimental data obtained from the NASA Glenn Spiral Bevel Gear Fatigue Rig, the methodology developed in this study can be successfully implemented on other geared systems.
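The fuzzy-logic fusion step described above can be illustrated with a minimal sketch. The membership breakpoints, feature names, and the max-combination rule below are assumptions chosen for illustration, not the study's actual design:

```python
def tri_membership(x, a, b, c):
    # Triangular fuzzy membership function: rises from a to b, falls b to c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fused_damage_level(oil_debris_mass, vibration_index):
    # Hypothetical fusion rule: take each sensor's "damaged" membership and
    # combine with a max (fuzzy OR), one simple data-fusion choice among many.
    mu_oil = tri_membership(oil_debris_mass, 20.0, 60.0, 100.0)  # mg, invented scale
    mu_vib = tri_membership(vibration_index, 0.3, 0.7, 1.1)      # invented scale
    return max(mu_oil, mu_vib)

print(fused_damage_level(40.0, 0.2))  # only the oil-debris channel fires here
```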

  14. 40 CFR 141.211 - Special notice for repeated failure to conduct monitoring of the source water for Cryptosporidium...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... determine whether water treatment at the (treatment plant name) is sufficient to adequately remove... source of your drinking water for Cryptosporidium in order to determine by (date) whether water treatment... conduct monitoring of the source water for Cryptosporidium and for failure to determine bin classification...

  15. Additional self-monitoring tools in the dietary modification component of The Women's Health Initiative.

    PubMed

    Mossavar-Rahmani, Yasmin; Henry, Holly; Rodabough, Rebecca; Bragg, Charlotte; Brewer, Amy; Freed, Trish; Kinzel, Laura; Pedersen, Margaret; Soule, C Oehme; Vosburg, Shirley

    2004-01-01

Self-monitoring promotes behavior change by increasing awareness of eating habits and building self-efficacy. It is an important component of the Women's Health Initiative dietary intervention. During the first year of intervention, 74% of the total sample of 19,542 dietary intervention participants self-monitored. As the study progressed, the self-monitoring rate declined to 59% by spring 2000. Participants were challenged by an inability to accurately estimate the fat content of restaurant foods and by the inconvenience of carrying bulky self-monitoring tools. In 1996, a Self-Monitoring Working Group was organized to develop additional self-monitoring options that were responsive to participant needs. This article describes the original and additional self-monitoring tools and trends in tool use over time. The original tools were the Food Diary and Fat Scan. Additional tools include the Keeping Track of Goals, Quick Scan, Picture Tracker, and Eating Pattern Changes instruments. The additional tools were used by the majority of self-monitoring participants (5,353 of 10,260, or 52%) by spring 2000. Developing self-monitoring tools that are responsive to participant needs increases the likelihood that self-monitoring can enhance dietary reporting adherence, especially in long-term clinical trials.

  16. Matrix Failure Modes and Effects Analysis as a Knowledge Base for a Real Time Automated Diagnosis Expert System

    NASA Technical Reports Server (NTRS)

    Herrin, Stephanie; Iverson, David; Spukovska, Lilly; Souza, Kenneth A. (Technical Monitor)

    1994-01-01

Failure modes and effects analyses (FMEAs) contain a wealth of information that can be used to create the knowledge base required for building automated diagnostic expert systems. A real-time monitoring and diagnosis expert system based on an actual NASA project's matrix failure modes and effects analysis was developed at NASA Ames Research Center. This system was first used as a case study to monitor the Research Animal Holding Facility (RAHF), a Space Shuttle payload used to house and monitor animals in orbit so the effects of space flight and microgravity can be studied. The techniques developed for the RAHF monitoring and diagnosis expert system are general enough to be used for monitoring and diagnosis of a variety of other systems that undergo a matrix FMEA. This automated diagnosis system was successfully used on-line and validated on Space Shuttle flight STS-58, mission SLS-2, in October 1993.

  17. Application of Failure Mode and Effects Analysis to Intraoperative Radiation Therapy Using Mobile Electron Linear Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciocca, Mario, E-mail: mario.ciocca@cnao.it; Cantone, Marie-Claire; Veronese, Ivan

    2012-02-01

Purpose: Failure mode and effects analysis (FMEA) represents a prospective approach for risk assessment. A multidisciplinary working group of the Italian Association for Medical Physics applied FMEA to electron beam intraoperative radiation therapy (IORT) delivered using mobile linear accelerators, aiming at preventing accidental exposures to the patient. Methods and Materials: FMEA was applied to the IORT process, for the stages of treatment delivery and verification, and consisted of three steps: 1) identification of the involved subprocesses; 2) identification and ranking of the potential failure modes, together with their causes and effects, using the risk probability number (RPN) scoring system, based on the product of three parameters (severity, frequency of occurrence, and detectability, each ranging from 1 to 10); 3) identification of additional safety measures to be proposed for process quality and safety improvement. The RPN upper threshold for little concern of risk was set at 125. Results: Twenty-four subprocesses were identified. Ten potential failure modes were found and scored, in terms of RPN, in the range of 42-216. The most critical failure modes consisted of internal shield misalignment, wrong monitor unit calculation, and incorrect data entry at the treatment console. Potential causes of failure included shield displacement; human errors, such as underestimation of CTV extension, mainly because of lack of adequate training and time pressures; failure in communication between operators; and machine malfunctioning. The main effects of failure were CTV underdose, wrong dose distribution and/or delivery, and unintended normal tissue irradiation. As additional safety measures, the utilization of dedicated staff for IORT, double-checking of MU calculation and data entry, and implementation of in vivo dosimetry were suggested. Conclusions: FMEA appeared to be a useful tool for prospective evaluation of patient safety in radiotherapy. The application of this method to IORT led to the identification of three safety measures for risk mitigation.
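The RPN scoring described above is a simple product of three 1-10 scores. A minimal sketch follows; the failure-mode names echo the abstract, but the individual severity/occurrence/detectability values are invented for the example:

```python
RPN_THRESHOLD = 125  # upper threshold for "little concern of risk" (from the study)

def rpn(severity, occurrence, detectability):
    # Risk probability number: product of three scores, each ranging 1-10.
    return severity * occurrence * detectability

# Illustrative scores only; the study's actual S/O/D assignments are not given here.
failure_modes = [
    ("internal shield misalignment", 8, 3, 9),
    ("wrong monitor unit calculation", 9, 2, 7),
    ("incorrect data entry at treatment console", 7, 4, 6),
]
for name, s, o, d in failure_modes:
    score = rpn(s, o, d)
    status = "needs additional safety measures" if score > RPN_THRESHOLD else "little concern"
    print(f"{name}: RPN={score} ({status})")
```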

  18. Blunt splenic injuries: have we watched long enough?

    PubMed

    Smith, Jason; Armen, Scott; Cook, Charles H; Martin, Larry C

    2008-03-01

    Nonoperative management (NOM) of blunt splenic injuries (BSIs) has been used with increasing frequency in adult patients. There are currently no definitive guidelines established for how long BSI patients should be monitored for failure of NOM after injury. This study was performed to ascertain the length of inpatient observation needed to capture most failures, and to identify factors associated with failure of NOM. We utilized the National Trauma Data Bank to determine time to failure after BSI. During the 5-year study period, 23,532 patients were identified with BSI, of which 2,366 (10% overall) were taken directly to surgery (within 2 hours of arrival). Of 21,166 patients initially managed nonoperatively, 18,506 were successful (79% of all-comers). Patients with isolated BSI are currently monitored approximately 5 days as inpatients. Of patients failing NOM, 95% failed during the first 72 hours, and monitoring 2 additional days saw only 1.5% more failures. Factors influencing success of NOM included computed tomographic injury grade, severity of patient injury, and American College of Surgeons designation of trauma center. Importantly, patients who failed NOM did not seem to have detrimental outcomes when compared with patients with successful NOM. No statistically significant predictive variables could be identified that would help predict patients who would go on to fail NOM. We conclude that at least 80% of BSI can be managed successfully with NOM, and that patients should be monitored as inpatients for failure after BSI for 3 to 5 days.

  19. Early detection of nonneurologic organ failure in patients with severe traumatic brain injury: Multiple organ dysfunction score or sequential organ failure assessment?

    PubMed

    Ramtinfar, Sara; Chabok, Shahrokh Yousefzadeh; Chari, Aliakbar Jafari; Reihanian, Zoheir; Leili, Ehsan Kazemnezhad; Alizadeh, Arsalan

    2016-10-01

The aim of this study is to compare the discriminant function of the multiple organ dysfunction score (MODS) and sequential organ failure assessment (SOFA) components in predicting Intensive Care Unit (ICU) mortality and neurologic outcome. A descriptive-analytic study was conducted at a level I trauma center. Data were collected from patients with severe traumatic brain injury admitted to the neurosurgical ICU. Basic demographic data and SOFA and MOD scores were recorded daily for all patients. Odds ratios (ORs) were calculated to determine the relationship of each component score to mortality, and the area under the receiver operating characteristic (AUROC) curve was used to compare the discriminative ability of the two tools with respect to ICU mortality. The most common organ failure observed was respiratory, detected by SOFA in 26% and by MODS in 13% of patients; the second most common was cardiovascular, detected by SOFA in 18% and by MODS in 13%. No hepatic or renal failure occurred, and coagulation failure was reported as 2.5% by both SOFA and MODS. Cardiovascular failure defined by both tools correlated with ICU mortality, and the association was stronger for SOFA (OR = 6.9, CI = 3.6-13.3, P < 0.05 for SOFA; OR = 5, CI = 3-8.3, P < 0.05 for MODS; AUROC = 0.82 for SOFA; AUROC = 0.73 for MODS). The relationship of cardiovascular failure to dichotomized neurologic outcome was not statistically significant, and ICU mortality was not associated with respiratory or coagulation failure. Cardiovascular failure defined by either tool was significantly related to ICU mortality; compared to MODS, SOFA-defined cardiovascular failure was a stronger predictor of death.
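The AUROC values used to compare the two tools can be computed directly from component scores via the Mann-Whitney formulation. A minimal sketch, with hypothetical score lists (the study's patient-level data are not reproduced here):

```python
def auroc(scores_died, scores_survived):
    # Area under the ROC curve equals the probability that a randomly chosen
    # patient who died has a higher score than a randomly chosen survivor,
    # counting ties as one half (Mann-Whitney U / (n_died * n_survived)).
    wins = 0.0
    for d in scores_died:
        for s in scores_survived:
            if d > s:
                wins += 1.0
            elif d == s:
                wins += 0.5
    return wins / (len(scores_died) * len(scores_survived))

# Hypothetical cardiovascular sub-scores (0-4 scale) for illustration only.
print(auroc([3, 4, 2, 4], [0, 1, 2, 1]))
```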

  20. A case study in nonconformance and performance trend analysis

    NASA Technical Reports Server (NTRS)

    Maloy, Joseph E.; Newton, Coy P.

    1990-01-01

    As part of NASA's effort to develop an agency-wide approach to trend analysis, a pilot nonconformance and performance trending analysis study was conducted on the Space Shuttle auxiliary power unit (APU). The purpose of the study was to (1) demonstrate that nonconformance analysis can be used to identify repeating failures of a specific item (and the associated failure modes and causes) and (2) determine whether performance parameters could be analyzed and monitored to provide an indication of component or system degradation prior to failure. The nonconformance analysis of the APU did identify repeating component failures, which possibly could be reduced if key performance parameters were monitored and analyzed. The performance-trending analysis verified that the characteristics of hardware parameters can be effective in detecting degradation of hardware performance prior to failure.

  1. Cost-utility analysis of the EVOLVO study on remote monitoring for heart failure patients with implantable defibrillators: randomized controlled trial.

    PubMed

    Zanaboni, Paolo; Landolina, Maurizio; Marzegalli, Maurizio; Lunati, Maurizio; Perego, Giovanni B; Guenzati, Giuseppe; Curnis, Antonio; Valsecchi, Sergio; Borghetti, Francesca; Borghi, Gabriella; Masella, Cristina

    2013-05-30

    Heart failure patients with implantable defibrillators place a significant burden on health care systems. Remote monitoring allows assessment of device function and heart failure parameters, and may represent a safe, effective, and cost-saving method compared to conventional in-office follow-up. We hypothesized that remote device monitoring represents a cost-effective approach. This paper summarizes the economic evaluation of the Evolution of Management Strategies of Heart Failure Patients With Implantable Defibrillators (EVOLVO) study, a multicenter clinical trial aimed at measuring the benefits of remote monitoring for heart failure patients with implantable defibrillators. Two hundred patients implanted with a wireless transmission-enabled implantable defibrillator were randomized to receive either remote monitoring or the conventional method of in-person evaluations. Patients were followed for 16 months with a protocol of scheduled in-office and remote follow-ups. The economic evaluation of the intervention was conducted from the perspectives of the health care system and the patient. A cost-utility analysis was performed to measure whether the intervention was cost-effective in terms of cost per quality-adjusted life year (QALY) gained. Overall, remote monitoring did not show significant annual cost savings for the health care system (€1962.78 versus €2130.01; P=.80). There was a significant reduction of the annual cost for the patients in the remote arm in comparison to the standard arm (€291.36 versus €381.34; P=.01). Cost-utility analysis was performed for 180 patients for whom QALYs were available. The patients in the remote arm gained 0.065 QALYs more than those in the standard arm over 16 months, with a cost savings of €888.10 per patient. Results from the cost-utility analysis of the EVOLVO study show that remote monitoring is a cost-effective and dominant solution. 
Remote management of heart failure patients with implantable defibrillators appears to be cost-effective compared to the conventional method of in-person evaluations. ClinicalTrials.gov NCT00873899; http://clinicaltrials.gov/show/NCT00873899 (Archived by WebCite at http://www.webcitation.org/6H0BOA29f).
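The cost-utility logic behind the "dominant" conclusion can be sketched as a generic incremental cost-effectiveness calculation; the example reuses the per-patient figures reported above (0.065 QALYs gained, EUR 888.10 saved over 16 months), while the function itself is a textbook construct rather than the study's analysis code:

```python
def icer(delta_cost, delta_qaly):
    # Incremental cost-effectiveness ratio: extra cost per QALY gained.
    # A strategy that both saves money and adds QALYs is "dominant",
    # and no ratio is reported in that case.
    if delta_cost < 0 and delta_qaly > 0:
        return "dominant"
    return delta_cost / delta_qaly

# EVOLVO per-patient result: remote monitoring saved EUR 888.10 and
# gained 0.065 QALYs versus standard in-office follow-up.
print(icer(-888.10, 0.065))   # dominant
```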

  2. Grid Stability Awareness System (GSAS) Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feuerborn, Scott; Ma, Jian; Black, Clifton

The project team developed a software suite named Grid Stability Awareness System (GSAS) for power system near real-time stability monitoring and analysis based on synchrophasor measurements. The software suite consists of five analytical tools: an oscillation monitoring tool, a voltage stability monitoring tool, a transient instability monitoring tool, an angle difference monitoring tool, and an event detection tool. These tools have been integrated into one framework to provide power grid operators with the real-time or near real-time stability status of the power grid as well as historical information about system stability. These tools are being considered for real-time use in the operation environment.

  3. Remote patient monitoring in chronic heart failure.

    PubMed

    Palaniswamy, Chandrasekar; Mishkin, Aaron; Aronow, Wilbert S; Kalra, Ankur; Frishman, William H

    2013-01-01

    Heart failure (HF) poses a significant economic burden on our health-care resources with very high readmission rates. Remote monitoring has a substantial potential to improve the management and outcome of patients with HF. Readmission for decompensated HF is often preceded by a stage of subclinical hemodynamic decompensation, where therapeutic interventions would prevent subsequent clinical decompensation and hospitalization. Various methods of remote patient monitoring include structured telephone support, advanced telemonitoring technologies, remote monitoring of patients with implanted cardiac devices such as pacemakers and defibrillators, and implantable hemodynamic monitors. Current data examining the efficacy of remote monitoring technologies in improving outcomes have shown inconsistent results. Various medicolegal and financial issues need to be addressed before widespread implementation of this exciting technology can take place.

  4. Enhanced Schapery Theory Software Development for Modeling Failure of Fiber-Reinforced Laminates

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Waas, Anthony M.

    2013-01-01

Progressive damage and failure analysis (PDFA) tools are needed to predict the nonlinear response of advanced fiber-reinforced composite structures. Predictive tools should incorporate the underlying physics of the damage and failure mechanisms observed in the composite, and should utilize as few input parameters as possible. The purpose of the Enhanced Schapery Theory (EST) was to create a PDFA tool that operates in conjunction with a commercially available finite element (FE) code (Abaqus). The tool captures the physics of the damage and failure mechanisms that result in the nonlinear behavior of the material, and the failure methodology employed yields numerical results that are relatively insensitive to changes in the FE mesh. The EST code is written in Fortran and compiled into a static library that is linked to Abaqus. A Fortran Abaqus UMAT material subroutine is used to facilitate the communication between Abaqus and EST. A clear distinction between damage and failure is imposed. Damage mechanisms result in pre-peak nonlinearity in the stress-strain curve. Four internal state variables (ISVs) are utilized to control the damage and failure degradation. All damage is assumed to result from matrix microdamage, and a single ISV tracks the microdamage evolution; it is used to degrade the transverse and shear moduli of the lamina using a set of experimentally obtainable matrix microdamage functions. Three separate failure ISVs are used to incorporate failure due to fiber breakage, mode I matrix cracking, and mode II matrix cracking. Failure initiation is determined using a failure criterion, and the evolution of these ISVs is controlled by a set of traction-separation laws. The traction-separation laws are postulated such that the area under the curves is equal to the fracture toughness of the material associated with the corresponding failure mechanism.
A characteristic finite element length is used to transform the traction-separation laws into stress-strain laws. The ISV evolution equations are derived in a thermodynamically consistent manner by invoking the stationary principle on the total work of the system with respect to each ISV. A novel feature is the inclusion of both pre-peak damage and appropriately scaled, post-peak strain softening failure. Also, the characteristic elements used in the failure degradation scheme are calculated using the element nodal coordinates, rather than simply the square root of the area of the element.
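The scaling described above (area under the traction-separation law equals the fracture toughness; a characteristic element length converts separation to strain) can be sketched for a linear softening law. The linear-softening form and symbol names are illustrative assumptions; EST's actual laws may differ:

```python
def failure_strain(g_c, sigma_max, l_char):
    # For linear softening, the area under the traction-separation curve
    # equals the fracture toughness: G_c = 0.5 * sigma_max * delta_f,
    # so the final separation is delta_f = 2 * G_c / sigma_max.
    # Dividing by the characteristic element length converts separation to
    # strain, keeping the dissipated energy per unit crack area mesh-independent.
    delta_f = 2.0 * g_c / sigma_max
    return delta_f / l_char

# Halving the element length doubles the softening failure strain, so the
# energy released per unit crack area stays constant across mesh refinements.
print(failure_strain(0.2, 60.0, 1.0), failure_strain(0.2, 60.0, 0.5))
```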

  5. Monitoring of waste disposal in deep geological formations

    NASA Astrophysics Data System (ADS)

    German, V.; Mansurov, V.

    2003-04-01

    This paper advances a kinetic approach to describing the rock failure process and to microseismic monitoring of waste disposal in deep geological formations. On the basis of a two-stage model of the failure process, the capability of forecasting rock fracture is demonstrated. Requirements for the monitoring system, such as a real-time mode of data registration and processing and its precision range, are formulated. A method for delineating failure nuclei in a rock mass is presented; implemented in a software program for forecasting strong seismic events, it is based on direct use of the fracture concentration criterion. The method is applied to the database of microseismic events from the North Ural Bauxite Mine, and the results of this application, including its efficiency, stability, and ability to forecast rockbursts, are discussed.
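
The fracture concentration criterion compares the mean spacing between cracks with their mean size. As a hedged sketch only: the K = N^(-1/3)/<L> form and the critical value of about 3 are taken from the general kinetic-failure literature, not from this paper, and the event catalogue below is synthetic.

```python
import numpy as np

def concentration_parameter(lengths, volume):
    """K = N^(-1/3) / <L>: mean crack spacing over mean crack length."""
    number_density = len(lengths) / volume      # cracks per unit volume
    return number_density ** (-1.0 / 3.0) / np.mean(lengths)

# Synthetic catalogue: 500 microcrack sizes (m) located in 1e6 m^3 of rock.
rng = np.random.default_rng(0)
lengths = rng.uniform(0.5, 2.0, size=500)
K = concentration_parameter(lengths, 1.0e6)
K_CRITICAL = 3.0         # commonly cited threshold for failure nucleation
print("K =", round(K, 2), "->", "failure nucleus" if K < K_CRITICAL else "stable")
```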

  6. Health information systems: failure, success and improvisation.

    PubMed

    Heeks, Richard

    2006-02-01

    The generalised assumption of health information systems (HIS) success is questioned by a few commentators in the medical informatics field. They point to widespread HIS failure. The purpose of this paper was therefore to develop a better conceptual foundation for, and practical guidance on, health information systems failure (and success). Literature and case analysis plus pilot testing of developed model. Defining HIS failure and success is complex, and the current evidence base on HIS success and failure rates was found to be weak. Nonetheless, the best current estimate is that HIS failure is an important problem. The paper therefore derives and explains the "design-reality gap" conceptual model. This is shown to be robust in explaining multiple cases of HIS success and failure, yet provides a contingency that encompasses the differences which exist in different HIS contexts. The design-reality gap model is piloted to demonstrate its value as a tool for risk assessment and mitigation on HIS projects. It also throws into question traditional, structured development methodologies, highlighting the importance of emergent change and improvisation in HIS. The design-reality gap model can be used to address the problem of HIS failure, both as a post hoc evaluative tool and as a pre hoc risk assessment and mitigation tool. It also validates a set of methods, techniques, roles and competencies needed to support the dynamic improvisations that are found to underpin cases of HIS success.

  7. A fuzzy case based reasoning tool for model based approach to rocket engine health monitoring

    NASA Technical Reports Server (NTRS)

    Krovvidy, Srinivas; Nolan, Adam; Hu, Yong-Lin; Wee, William G.

    1992-01-01

    In this system we develop a fuzzy case based reasoner that can build a case representation for several past anomalies detected, and we develop case retrieval methods that can be used to index a relevant case when a new problem (case) is presented using fuzzy sets. The choice of fuzzy sets is justified by the uncertain data. The new problem can be solved using knowledge of the model along with the old cases. This system can then be used to generalize the knowledge from previous cases and use this generalization to refine the existing model definition. This in turn can help to detect failures using the model based algorithms.
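
Fuzzy-set-based case retrieval of the kind described can be sketched in a few lines. This is a hypothetical illustration, not the authors' system: the triangular membership functions, linguistic labels, and anomaly names are all invented for the example.

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership of crisp value x in the fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def similarity(case, probe, fuzzy_sets):
    """Mean membership of the probe's crisp readings in the case's labels."""
    grades = [tri_membership(probe[f], *fuzzy_sets[case[f]]) for f in probe]
    return sum(grades) / len(grades)

# Invented linguistic labels and past anomaly cases.
fuzzy_sets = {"low": (0, 25, 50), "high": (50, 75, 100)}
cases = [
    {"id": "injector-leak", "pressure": "low",  "temp": "high"},
    {"id": "sensor-drift",  "pressure": "high", "temp": "low"},
]
probe = {"pressure": 20.0, "temp": 80.0}   # new, uncertain sensor readings
best = max(cases, key=lambda c: similarity(c, probe, fuzzy_sets))
print(best["id"])                          # the indexed (most similar) past case
```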

  8. Clinical Immunology Review Series: An approach to the patient with angio-oedema

    PubMed Central

    Grigoriadou, S; Longhurst, H J

    2009-01-01

    Angio-oedema is a common reason for attendance at the accident and emergency department and for referral to immunology/allergy clinics. Causative factors should always be sought, but a large proportion of patients have the idiopathic form of the disease. A minority of patients represent a diagnostic and treatment challenge. Failure to identify the more unusual causes of angio-oedema may result in life-threatening situations. Common and rare causes of angio-oedema will be discussed in this article, as well as the diagnostic and treatment pathways for the management of these patients. A comprehensive history and close monitoring of response to treatment are the most cost-effective diagnostic and treatment tools. PMID:19220828

  9. Ensuring Patient Safety by using Colored Petri Net Simulation in the Design of Heterogeneous, Multi-Vendor, Integrated, Life-Critical Wireless (802.x) Patient Care Device Networks.

    PubMed

    Sloane, Elliot; Gehlot, Vijay

    2005-01-01

    Hospitals and manufacturers are designing and deploying the IEEE 802.x wireless technologies in medical devices to promote patient mobility and flexible facility use. There is little information, however, on the reliability or ultimate safety of connecting multiple wireless life-critical medical devices from multiple vendors using commercial 802.11a, 802.11b, 802.11g or pre-802.11n devices. It is believed that 802.11-type devices can introduce unintended life-threatening risks unless delivery of critical patient alarms to central monitoring systems and/or clinical personnel is assured by proper use of 802.11e Quality of Service (QoS) methods. Petri net tools can be used to simulate all possible states and transitions between devices and/or systems in a wireless device network, and can identify failure modes in advance. Colored Petri Net (CPN) tools are ideal, in fact, as they allow tracking and controlling each message in a network based on pre-selected criteria. This paper describes a research project using CPN to simulate and validate alarm integrity in a small multi-modality wireless patient monitoring system. A 20-monitor wireless patient monitoring network is created in two versions: one with non-prioritized 802.x CSMA protocols and the second with simulated Quality of Service (QoS) capabilities similar to 802.11e (i.e., the second network allows message priority management). In the standard 802.x network, dangerous heart arrhythmia and pulse oximetry alarms could not be reliably and rapidly communicated, but the second network's QoS priority management reduced that risk significantly.
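
The effect the simulation measures, alarms stuck behind bulk traffic unless priorities exist, can be illustrated with a toy queue model. This is a sketch only: real 802.11e access categories involve contention windows rather than a simple heap, and the message names are invented.

```python
import heapq

# (priority, name): lower number = more urgent; the alarm arrives last.
traffic = [(2, "waveform-1"), (2, "waveform-2"), (2, "waveform-3"),
           (0, "arrhythmia-alarm")]

def fifo_order(msgs):
    """No QoS: messages leave in arrival order."""
    return [name for _, name in msgs]

def qos_order(msgs):
    """QoS sketch: urgent classes drain first; FIFO within a class."""
    heap = [(prio, i, name) for i, (prio, name) in enumerate(msgs)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

print(fifo_order(traffic)[-1])   # the alarm waits behind the bulk waveforms
print(qos_order(traffic)[0])     # with priorities, the alarm jumps the queue
```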

  10. Failure mode and effect analysis-based quality assurance for dynamic MLC tracking systems

    PubMed Central

    Sawant, Amit; Dieterich, Sonja; Svatos, Michelle; Keall, Paul

    2010-01-01

    Purpose: To develop and implement a failure mode and effect analysis (FMEA)-based commissioning and quality assurance framework for dynamic multileaf collimator (DMLC) tumor tracking systems. Methods: A systematic failure mode and effect analysis was performed for a prototype real-time tumor tracking system that uses implanted electromagnetic transponders for tumor position monitoring and a DMLC for real-time beam adaptation. A detailed process tree of DMLC tracking delivery was created and potential tracking-specific failure modes were identified. For each failure mode, a risk probability number (RPN) was calculated from the product of the probability of occurrence, the severity of effect, and the detectability of the failure. Based on the insights obtained from the FMEA, commissioning and QA procedures were developed to check (i) the accuracy of coordinate system transformation, (ii) system latency, (iii) spatial and dosimetric delivery accuracy, (iv) delivery efficiency, and (v) accuracy and consistency of system response to error conditions. The frequency of testing for each failure mode was determined from the RPN value. Results: Failure modes with RPN≥125 were recommended to be tested monthly. Failure modes with RPN<125 were assigned to be tested during comprehensive evaluations, e.g., during commissioning, annual quality assurance, and after major software/hardware upgrades. System latency was determined to be ∼193 ms. The system showed consistent and accurate response to erroneous conditions. Tracking accuracy was within 3%–3 mm gamma (100% pass rate) for sinusoidal as well as a wide variety of patient-derived respiratory motions. The total time taken for monthly QA was ∼35 min, while that taken for comprehensive testing was ∼3.5 h. Conclusions: FMEA proved to be a powerful and flexible tool to develop and implement a quality management (QM) framework for DMLC tracking.
The authors conclude that the use of FMEA-based QM ensures efficient allocation of clinical resources because the most critical failure modes receive the most attention. It is expected that the set of guidelines proposed here will serve as a living document that is updated with the accumulation of progressively more intrainstitutional and interinstitutional experience with DMLC tracking. PMID:21302802
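
The RPN arithmetic and the monthly-versus-comprehensive split described above are simple to express in code. In the minimal sketch below, only the RPN = occurrence × severity × detectability product and the threshold of 125 come from the abstract; the failure-mode names and scores are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    occurrence: int     # 1-10 scale
    severity: int       # 1-10 scale
    detectability: int  # 1-10 scale (10 = hardest to detect)

    @property
    def rpn(self):
        """Risk priority number: product of the three scores."""
        return self.occurrence * self.severity * self.detectability

def schedule(modes, threshold=125):
    """RPN >= threshold -> monthly QA; below -> comprehensive checks only."""
    return {m.name: ("monthly" if m.rpn >= threshold else "comprehensive")
            for m in modes}

# Hypothetical scores for two tracking-specific failure modes.
modes = [FailureMode("coordinate transform error", 3, 9, 6),  # RPN 162
         FailureMode("delivery inefficiency", 2, 5, 4)]       # RPN 40
print(schedule(modes))
```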

  11. Graphical Displays Assist In Analysis Of Failures

    NASA Technical Reports Server (NTRS)

    Pack, Ginger; Wadsworth, David; Razavipour, Reza

    1995-01-01

    Failure Environment Analysis Tool (FEAT) computer program enables people to see and better understand effects of failures in system. Uses digraph models to determine what will happen to system if set of failure events occurs and to identify possible causes of selected set of failures. Digraphs or engineering schematics used. Also used in operations to help identify causes of failures after they occur. Written in C language.
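
FEAT's two digraph queries, forward propagation of a failure set and backward search for candidate causes, amount to graph reachability. A hedged illustration only: FEAT itself is written in C and uses digraph models of real systems, while the tiny graph below is invented.

```python
from collections import defaultdict, deque

def effects(edges, failed):
    """Forward reachability: everything downstream of the failed events."""
    adj = defaultdict(list)
    for src, dst in edges:
        adj[src].append(dst)
    seen, queue = set(failed), deque(failed)
    while queue:
        node = queue.popleft()
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - set(failed)

def possible_causes(edges, observed):
    """Reverse reachability: upstream events that could explain a failure."""
    return effects([(dst, src) for src, dst in edges], observed)

# Invented system digraph: a pump failure propagates to loss of cooling.
edges = [("pump-fail", "low-flow"), ("low-flow", "overheat"),
         ("valve-stuck", "low-flow")]
assert effects(edges, {"pump-fail"}) == {"low-flow", "overheat"}
assert possible_causes(edges, {"overheat"}) == {"low-flow", "pump-fail",
                                                "valve-stuck"}
```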

  12. Naturally Acquired Learned Helplessness: The Relationship of School Failure to Achievement Behavior, Attributions, and Self-Concept.

    ERIC Educational Resources Information Center

    Johnson, Dona S.

    1981-01-01

    Personality and behavioral consequences of learned helplessness were monitored in children experiencing failure in school. The predictive quality of learned helplessness theory was compared with that of value expectancy theories. Low self-concept was predicted significantly by school failure, internal attributions for failure, and external…

  13. Increasing Psychotherapists’ Adoption and Implementation of the Evidence-based Practice of Progress Monitoring

    PubMed Central

    Persons, Jacqueline B.; Koerner, Kelly; Eidelman, Polina; Thomas, Cannon; Liu, Howard

    2015-01-01

    Evidence-based practices (EBPs) reach consumers slowly because practitioners are slow to adopt and implement them. We hypothesized that giving psychotherapists a tool + training intervention that was designed to help the therapist integrate the EBP of progress monitoring into his or her usual way of working would be associated with adoption and sustained implementation of the particular progress monitoring tool we trained them to use (the Depression Anxiety Stress Scales on our Online Progress Tracking tool) and would generalize to all types of progress monitoring measures. To test these hypotheses, we developed an online progress monitoring tool and a course that trained psychotherapists to use it, and we assessed progress monitoring behavior in 26 psychotherapists before, during, immediately after, and 12 months after they received the tool and training. Immediately after receiving the tool + training intervention, participants showed statistically significant increases in use of the online tool and of all types of progress monitoring measures. Twelve months later, participants showed sustained use of any type of progress monitoring measure but not the online tool. PMID:26618237

  14. Increasing psychotherapists' adoption and implementation of the evidence-based practice of progress monitoring.

    PubMed

    Persons, Jacqueline B; Koerner, Kelly; Eidelman, Polina; Thomas, Cannon; Liu, Howard

    2016-01-01

    Evidence-based practices (EBPs) reach consumers slowly because practitioners are slow to adopt and implement them. We hypothesized that giving psychotherapists a tool + training intervention that was designed to help the therapist integrate the EBP of progress monitoring into his or her usual way of working would be associated with adoption and sustained implementation of the particular progress monitoring tool we trained them to use (the Depression Anxiety Stress Scales on our Online Progress Tracking tool) and would generalize to all types of progress monitoring measures. To test these hypotheses, we developed an online progress monitoring tool and a course that trained psychotherapists to use it, and we assessed progress monitoring behavior in 26 psychotherapists before, during, immediately after, and 12 months after they received the tool and training. Immediately after receiving the tool + training intervention, participants showed statistically significant increases in use of the online tool and of all types of progress monitoring measures. Twelve months later, participants showed sustained use of any type of progress monitoring measure but not the online tool. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. 40 CFR 1065.410 - Maintenance limits for stabilized test engines.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... engineering grade tools to identify bad engine components. Any equipment, instruments, or tools used for... no longer use it as an emission-data engine. Also, if your test engine has a major mechanical failure... your test engine has a major mechanical failure that requires you to take it apart, you may no longer...

  16. 40 CFR 1065.410 - Maintenance limits for stabilized test engines.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... engineering grade tools to identify bad engine components. Any equipment, instruments, or tools used for... no longer use it as an emission-data engine. Also, if your test engine has a major mechanical failure... your test engine has a major mechanical failure that requires you to take it apart, you may no longer...

  17. Investigating Brittle Rock Failure and Associated Seismicity Using Laboratory Experiments and Numerical Simulations

    NASA Astrophysics Data System (ADS)

    Zhao, Qi

    Rock failure process is a complex phenomenon that involves elastic and plastic deformation, microscopic cracking, macroscopic fracturing, and frictional slipping of fractures. Understanding this complex behaviour has been the focus of a significant amount of research. In this work, the combined finite-discrete element method (FDEM) was first employed to study (1) the influence of rock discontinuities on hydraulic fracturing and associated seismicity and (2) the influence of in-situ stress on seismic behaviour. Simulated seismic events were analyzed using post-processing tools including frequency-magnitude distribution (b-value), spatial fractal dimension (D-value), seismic rate, and fracture clustering. These simulations demonstrated that at the local scale, fractures tended to propagate following the rock mass discontinuities; while at reservoir scale, they developed in the direction parallel to the maximum in-situ stress. Moreover, seismic signature (i.e., b-value, D-value, and seismic rate) can help to distinguish different phases of the failure process. The FDEM modelling technique and developed analysis tools were then coupled with laboratory experiments to further investigate the different phases of the progressive rock failure process. Firstly, a uniaxial compression experiment, monitored using a time-lapse ultrasonic tomography method, was carried out and reproduced by the numerical model. Using this combination of technologies, the entire deformation and failure processes were studied at macroscopic and microscopic scales. The results not only illustrated the rock failure and seismic behaviours at different stress levels, but also suggested several precursory behaviours indicating the catastrophic failure of the rock. Secondly, rotary shear experiments were conducted using a newly developed rock physics experimental apparatus (ERDμ-T) that was paired with X-ray micro-computed tomography (μCT).
This combination of technologies has significant advantages over conventional rotary shear experiments since it allowed for the direct observation of how two rough surfaces interact and deform without perturbing the experimental conditions. Some intriguing observations were made pertaining to key areas of the study of fault evolution, enabling a more comprehensive interpretation of the frictional sliding behaviour. Lastly, a carefully calibrated FDEM model that was built based on the rotary experiment was utilized to investigate facets that the experiment was not able to resolve, for example, the time-continuous stress condition and the seismic activity on the shear surface. The model reproduced the mechanical behaviour observed in the laboratory experiment, shedding light on the understanding of fault evolution.
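
Of the seismic signatures mentioned (b-value, D-value, seismic rate), the b-value is the easiest to sketch. The snippet below uses the standard Aki maximum-likelihood estimator, an assumption about how such catalogues are typically processed rather than a detail taken from this thesis, applied to a synthetic catalogue with a known b of 1.0:

```python
import math
import random

def b_value(magnitudes, m_c):
    """Aki maximum-likelihood b-value for events at or above completeness m_c."""
    above = [m for m in magnitudes if m >= m_c]
    return math.log10(math.e) / (sum(above) / len(above) - m_c)

# Synthetic Gutenberg-Richter catalogue: for b = 1, magnitudes above m_c are
# exponentially distributed with mean m_c + log10(e) / b.
random.seed(1)
mags = [random.expovariate(1.0 / math.log10(math.e)) for _ in range(20000)]
print(round(b_value(mags, 0.0), 2))   # close to the known b of 1.0
```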

  18. Prediction of muscle performance during dynamic repetitive movement

    NASA Technical Reports Server (NTRS)

    Byerly, D. L.; Byerly, K. A.; Sognier, M. A.; Squires, W. G.

    2003-01-01

    BACKGROUND: During long-duration spaceflight, astronauts experience progressive muscle atrophy and often perform strenuous extravehicular activities. Post-flight, there is a lengthy recovery period with an increased risk for injury. Currently, there is a critical need for an enabling tool to optimize muscle performance and to minimize the risk of injury to astronauts while on-orbit and during post-flight recovery. Consequently, these studies were performed to develop a method to address this need. METHODS: Eight test subjects performed a repetitive dynamic exercise to failure at 65% of their upper torso weight using a Lordex spinal machine. Surface electromyography (SEMG) data was collected from the erector spinae back muscle. The SEMG data was evaluated using a 5th order autoregressive (AR) model and linear regression analysis. RESULTS: The best predictor found was an AR parameter, the mean average magnitude of AR poles, with r = 0.75 and p = 0.03. This parameter can predict performance to failure as early as the second repetition of the exercise. CONCLUSION: A method for predicting human muscle performance early during dynamic repetitive exercise was developed. The capability to predict performance to failure has many potential applications to the space program including evaluating countermeasure effectiveness on-orbit, optimizing post-flight recovery, and potential future real-time monitoring capability during extravehicular activity.
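
The AR-parameter feature described (the mean magnitude of the AR poles) can be sketched as follows. This is a hypothetical reconstruction, not the authors' code: the least-squares fit, the synthetic signal, and all parameter values are illustrative, with only the 5th-order model taken from the abstract.

```python
import numpy as np

def fit_ar(x, order=5):
    """Least-squares fit of x[t] = a_1 x[t-1] + ... + a_p x[t-p]."""
    x = np.asarray(x, dtype=float)
    rows = [x[t - order:t][::-1] for t in range(order, len(x))]
    coeffs, *_ = np.linalg.lstsq(np.array(rows), x[order:], rcond=None)
    return coeffs

def pole_magnitudes(coeffs):
    """Roots of the AR characteristic polynomial z^p - a_1 z^(p-1) - ... - a_p."""
    return np.abs(np.roots(np.concatenate(([1.0], -coeffs))))

# Synthetic fatigue-like signal: a slowly decaying oscillation plus noise.
rng = np.random.default_rng(0)
t = np.arange(2000)
x = np.sin(0.3 * t) * np.exp(-t / 4000) + 0.01 * rng.standard_normal(2000)
a = fit_ar(x, order=5)
feature = pole_magnitudes(a).mean()   # the kind of scalar tracked per repetition
print(round(feature, 3))
```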

  19. Macrophage phagocytosis alters the MRI signal of ferumoxytol-labeled mesenchymal stromal cells in cartilage defects.

    PubMed

    Nejadnik, Hossein; Lenkov, Olga; Gassert, Florian; Fretwell, Deborah; Lam, Isaac; Daldrup-Link, Heike E

    2016-05-13

    Human mesenchymal stem cells (hMSCs) are a promising tool for cartilage regeneration in arthritic joints. hMSC labeling with iron oxide nanoparticles enables non-invasive in vivo monitoring of transplanted cells in cartilage defects with MR imaging. Since graft failure leads to macrophage phagocytosis of apoptotic cells, we evaluated in vitro and in vivo whether nanoparticle-labeled hMSCs show distinct MR signal characteristics before and after phagocytosis by macrophages. We found that apoptotic nanoparticle-labeled hMSCs were phagocytosed by macrophages while viable nanoparticle-labeled hMSCs were not. Serial MRI scans of hMSC transplants in arthritic joints of recipient rats showed that the iron signal of apoptotic, nanoparticle-labeled hMSCs engulfed by macrophages disappeared faster compared to viable hMSCs. This corresponded to poor cartilage repair outcomes of the apoptotic hMSC transplants. Therefore, rapid decline of iron MRI signal at the transplant site can indicate cell death and predict incomplete defect repair weeks later. Currently, hMSC graft failure can be only diagnosed by lack of cartilage defect repair several months after cell transplantation. The described imaging signs can diagnose hMSC transplant failure more readily, which could enable timely re-interventions and avoid unnecessary follow up studies of lost transplants.

  20. Macrophage phagocytosis alters the MRI signal of ferumoxytol-labeled mesenchymal stromal cells in cartilage defects

    NASA Astrophysics Data System (ADS)

    Nejadnik, Hossein; Lenkov, Olga; Gassert, Florian; Fretwell, Deborah; Lam, Isaac; Daldrup-Link, Heike E.

    2016-05-01

    Human mesenchymal stem cells (hMSCs) are a promising tool for cartilage regeneration in arthritic joints. hMSC labeling with iron oxide nanoparticles enables non-invasive in vivo monitoring of transplanted cells in cartilage defects with MR imaging. Since graft failure leads to macrophage phagocytosis of apoptotic cells, we evaluated in vitro and in vivo whether nanoparticle-labeled hMSCs show distinct MR signal characteristics before and after phagocytosis by macrophages. We found that apoptotic nanoparticle-labeled hMSCs were phagocytosed by macrophages while viable nanoparticle-labeled hMSCs were not. Serial MRI scans of hMSC transplants in arthritic joints of recipient rats showed that the iron signal of apoptotic, nanoparticle-labeled hMSCs engulfed by macrophages disappeared faster compared to viable hMSCs. This corresponded to poor cartilage repair outcomes of the apoptotic hMSC transplants. Therefore, rapid decline of iron MRI signal at the transplant site can indicate cell death and predict incomplete defect repair weeks later. Currently, hMSC graft failure can be only diagnosed by lack of cartilage defect repair several months after cell transplantation. The described imaging signs can diagnose hMSC transplant failure more readily, which could enable timely re-interventions and avoid unnecessary follow up studies of lost transplants.

  1. Suitability of amphibians and reptiles for translocation.

    PubMed

    Germano, Jennifer M; Bishop, Phillip J

    2009-02-01

    Translocations are important tools in the field of conservation. Despite increased use over the last few decades, the appropriateness of translocations for amphibians and reptiles has been debated widely over the past 20 years. To provide a comprehensive evaluation of the suitability of amphibians and reptiles for translocation, we reviewed the results of amphibian and reptile translocation projects published between 1991 and 2006. The success rate of amphibian and reptile translocations reported over this period was twice that reported in an earlier review in 1991. Success and failure rates were independent of the taxonomic class (Amphibia or Reptilia) released. Reptile translocations driven by human-wildlife conflict mitigation had a higher failure rate than those motivated by conservation, and more recent projects of reptile translocations had unknown outcomes. The outcomes of amphibian translocations were significantly related to the number of animals released, with projects releasing over 1000 individuals being most successful. The most common reported causes of translocation failure were homing and migration of introduced individuals out of release sites and poor habitat. The increased success of amphibian and reptile translocations reviewed in this study compared with the 1991 review is encouraging for future conservation projects. Nevertheless, more preparation, monitoring, reporting of results, and experimental testing of techniques and reintroduction questions need to occur to improve translocations of amphibians and reptiles as a whole.

  2. The respiratory system.

    PubMed

    Zifko, U; Chen, R

    1996-10-01

    Neurological disorders frequently contribute to respiratory failure in critically ill patients. They may be the primary reason for the initiation of mechanical ventilation, or may develop later as a secondary complication. Disorders of the central nervous system leading to respiratory failure include metabolic encephalopathies, acute stroke, lesions of the motor cortex and brain-stem respiratory centres, and their descending pathways. Guillain-Barré syndrome, critical illness polyneuropathy and acute quadriplegic myopathy are the more common neuromuscular causes of respiratory failure. Clinical observations and pulmonary function tests are important in monitoring respiratory function. Respiratory electrophysiological studies are useful in the investigation and monitoring of respiratory failure. Transcortical and cervical magnetic stimulation can assess the central respiratory drive, and may be useful in determining the prognosis in ventilated patients with cervical cord dysfunction. It is also helpful in the assessment of failure to wean, which is often caused by a combination of central and peripheral nervous system disorders. Phrenic nerve conduction studies and needle electromyography of the diaphragm and chest wall muscles are useful to characterize neuropathies and myopathies affecting the diaphragm. Repetitive phrenic nerve stimulation can assess neuromuscular transmission defects. It is important to identify patients at risk of respiratory failure. They should be carefully monitored and mechanical ventilation should be initiated before the development of severe hypoxaemia.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lherbier, Louis W.; Novotnak, David J.; Herling, Darrell R.

    Hot forming processes such as forging, die casting and glass forming require tooling that is subjected to high temperatures during the manufacturing of components. Current tooling is adversely affected by prolonged exposure at high temperatures. Initial studies were conducted to determine the root cause of tool failures in a number of applications. Results show that tool failures vary and depend on the operating environment under which they are used. Major root cause failures include (1) thermal softening, (2) fatigue and (3) tool erosion, all of which are affected by process boundary conditions such as lubrication, cooling, process speed, etc. While thermal management is a key to addressing tooling failures, it was clear that new tooling materials with superior high temperature strength could provide improved manufacturing efficiencies. These efficiencies are based on the use of functionally graded materials (FGM), a new subset of hybrid tools with customizable properties that can be fabricated using advanced powder metallurgy manufacturing technologies. Modeling studies of the various hot forming processes helped identify the effect of key variables such as stress, temperature and cooling rate and aided in the selection of tooling materials for specific applications. To address the problem of high temperature strength, several advanced powder metallurgy nickel and cobalt based alloys were selected for evaluation. These materials were manufactured into tooling using two relatively new consolidation processes. One process involved laser powder deposition (LPD) and the second involved a solid state dynamic powder consolidation (SSDPC) process. These processes made possible functionally graded materials (FGM) that resulted in shaped tooling that was monolithic, bi-metallic or substrate coated. Manufacturing of tooling with these processes was determined to be robust and consistent for a variety of materials.
Prototype and production testing of FGM tooling showed the benefits of the nickel and cobalt based powder metallurgy alloys in a number of applications evaluated. Improvements in tool life ranged from three (3) to twenty (20) or more times that of currently used tooling. Improvements were most dramatic where tool softening and deformation were the major cause of tool failures in hot/warm forging applications. Significant improvement was also noted in erosion of aluminum die casting tooling. Cost and energy savings can be realized as a result of increased tooling life, increased productivity and a reduction in scrap because of improved dimensional controls. Although LPD and SSDPC tooling usually have higher acquisition costs, net tooling costs per component produced drop dramatically with superior tool performance. Less energy is used to manufacture the tooling because fewer tools are required and less recycling of used tools is needed for the hot forming process. Energy is saved during the component manufacturing cycle because more parts can be produced in shorter periods of time. Energy is also saved by minimizing heating furnace idling time because of less downtime for tooling changes.

  4. Anthology of the Development of Radiation Transport Tools as Applied to Single Event Effects

    NASA Astrophysics Data System (ADS)

    Reed, R. A.; Weller, R. A.; Akkerman, A.; Barak, J.; Culpepper, W.; Duzellier, S.; Foster, C.; Gaillardin, M.; Hubert, G.; Jordan, T.; Jun, I.; Koontz, S.; Lei, F.; McNulty, P.; Mendenhall, M. H.; Murat, M.; Nieminen, P.; O'Neill, P.; Raine, M.; Reddell, B.; Saigné, F.; Santin, G.; Sihver, L.; Tang, H. H. K.; Truscott, P. R.; Wrobel, F.

    2013-06-01

    This anthology contains contributions from eleven different groups, each developing and/or applying Monte Carlo-based radiation transport tools to simulate a variety of effects that result from energy transferred to a semiconductor material by a single particle event. The topics span from basic mechanisms for single-particle induced failures to applied tasks like developing websites to predict on-orbit single event failure rates using Monte Carlo radiation transport tools.

  5. Failure mode and effect analysis in blood transfusion: a proactive tool to reduce risks.

    PubMed

    Lu, Yao; Teng, Fang; Zhou, Jie; Wen, Aiqing; Bi, Yutian

    2013-12-01

    The aim of blood transfusion risk management is to improve the quality of blood products and to assure patient safety. We utilize failure mode and effect analysis (FMEA), a tool employed for evaluating risks and identifying preventive measures to reduce the risks in blood transfusion. The failure modes and effects occurring throughout the whole process of blood transfusion were studied. Each failure mode was evaluated using three scores: severity of effect (S), likelihood of occurrence (O), and probability of detection (D). Risk priority numbers (RPNs) were calculated by multiplying the S, O, and D scores. The plan-do-check-act cycle was also used for continuous improvement. Analysis has showed that failure modes with the highest RPNs, and therefore the greatest risk, were insufficient preoperative assessment of the blood product requirement (RPN, 245), preparation time before infusion of more than 30 minutes (RPN, 240), blood transfusion reaction occurring during the transfusion process (RPN, 224), blood plasma abuse (RPN, 180), and insufficient and/or incorrect clinical information on request form (RPN, 126). After implementation of preventative measures and reassessment, a reduction in RPN was detected with each risk. The failure mode with the second highest RPN, namely, preparation time before infusion of more than 30 minutes, was shown in detail to prove the efficiency of this tool. FMEA evaluation model is a useful tool in proactively analyzing and reducing the risks associated with the blood transfusion procedure. © 2013 American Association of Blood Banks.

  6. Reliability of biologic indicators in a mail-return sterilization-monitoring service: a review of 3 years.

    PubMed

    Andrés, M T; Tejerina, J M; Fierro, J F

    1995-12-01

    Most mail-return sterilization-monitoring services use spore strips to test sterilizers in dental clinics, but factors such as delay caused by mailing to the laboratory could cause false negatives. The aims of this study were to determine the influence of poststerilization time and temperature on the biologic indicator recovery system and to evaluate sterilization failure and its possible causes in dental clinics subscribing to a mail-return sterilization-monitoring service. Spore strips used in independent tests revealed the poststerilization time and temperature after a 7-day delay to have no significant influence. Sixty-six dental clinics that received quarterly biologic indicators to evaluate the effectiveness of their sterilizers had sterilization failure rates of 28.7% in 1992, 18.1% in 1993, and 9.1% in 1994, a statistically significant decrease in sterilization failure during the 3-year period. The usual causes of failure were operator error in wrapping of instruments, loading, operating temperature, or exposure time.

  7. Pilot performance in zero-visibility precision approach. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Ephrath, A. R.

    1975-01-01

    The pilot's short-term decisions regarding performance assessment and failure monitoring are examined. The performance of airline pilots who flew simulated zero-visibility landing approaches is reported. Results indicate that the pilot's mode of participation in the control task has a strong effect on his workload, the induced workload being lowest when the pilot acts as a monitor during a coupled approach and highest when the pilot is an active element in the control loop. A marked increase in workload at altitudes below 500 ft is documented in all participation modes; this increase is inversely related to distance-to-go. The participation mode is shown to have a dominant effect on failure-detection performance, with a failure in a monitored (coupled) axis being detected faster than a comparable failure in a manually controlled axis. Touchdown performance is also documented. It is concluded that the conventional instrument panel and its associated displays are inadequate for zero-visibility operations in the final phases of the landing approach.

  8. Cardiomyocyte binucleation is associated with aberrant mitotic microtubule distribution, mislocalization of RhoA and IQGAP3, as well as defective actomyosin ring anchorage and cleavage furrow ingression.

    PubMed

    Leone, Marina; Musa, Gentian; Engel, Felix Benedikt

    2018-03-07

    After birth, mammalian cardiomyocytes initiate a last cell cycle which results in binucleation due to cytokinesis failure. Despite its importance for cardiac regenerative therapies, this process is poorly understood. Here, we aimed at better understanding the difference between cardiomyocyte proliferation and binucleation, and at providing a new tool to distinguish these two processes. Monitoring of cell division by time-lapse imaging revealed that rat cardiomyocyte binucleation stems from a failure to properly ingress the cleavage furrow. Astral microtubules required for actomyosin ring anchorage, and thus furrow ingression, were not symmetrically distributed at the periphery of the equatorial region during anaphase in binucleating cardiomyocytes. Consequently, RhoA, the master regulator of actomyosin ring formation and constriction, non-muscle myosin IIB, a central component of the actomyosin ring, as well as IQGAP3 were abnormally localized during cytokinesis. In agreement with improper furrow ingression, binucleation in vitro as well as in vivo was associated with a failure of RhoA as well as IQGAP3 to localize to the stembody of the midbody. Taken together, these results indicate that naturally occurring cytokinesis failure in primary cardiomyocytes is due to an aberrant mitotic microtubule apparatus resulting in inefficient anchorage of the actomyosin ring to the plasma membrane. Thus, cardiomyocyte binucleation and division can be discriminated by the analysis of RhoA as well as IQGAP3 localization.

  9. Safety Evaluation of an Automated Remote Monitoring System for Heart Failure in an Urban, Indigent Population.

    PubMed

    Gross-Schulman, Sandra; Sklaroff, Laura Myerchin; Hertz, Crystal Coyazo; Guterman, Jeffrey J

    2017-12-01

    Heart Failure (HF) is the most expensive preventable condition, regardless of patient ethnicity, race, socioeconomic status, sex, and insurance status. Remote telemonitoring with timely outpatient care can significantly reduce avoidable HF hospitalizations. Human outreach, the traditional method used for remote monitoring, is effective but costly. Automated systems can potentially provide positive clinical, fiscal, and satisfaction outcomes in chronic disease monitoring. The authors implemented a telephonic HF automated remote monitoring system that utilizes deterministic decision tree logic to identify patients who are at risk of clinical decompensation. This safety study evaluated the degree of clinical concordance between the automated system and traditional human monitoring. This study focused on a broad underserved population and demonstrated a safe, reliable, and inexpensive method of monitoring patients with HF.
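    The abstract describes deterministic decision-tree logic for flagging patients at risk of decompensation but does not publish the rules. A purely hypothetical sketch of such logic (thresholds and symptom inputs are illustrative, not from the study) might look like:

    ```python
    def triage(weight_gain_kg_3day, worsening_dyspnea, orthopnea):
        """Hypothetical decision-tree rules for telephonic HF monitoring.

        All thresholds are illustrative assumptions, not the study's rules.
        """
        if weight_gain_kg_3day >= 2.0 or worsening_dyspnea or orthopnea:
            return "escalate to human outreach"
        return "continue automated monitoring"
    ```

    A safety study like the one above would then measure the concordance between such automated dispositions and those of human monitors.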

  10. Remote monitoring of cardiovascular implanted electronic devices: a paradigm shift for the 21st century.

    PubMed

    Cronin, Edmond M; Varma, Niraj

    2012-07-01

    Traditional follow-up of cardiac implantable electronic devices involves the intermittent download of largely nonactionable data. Remote monitoring represents a paradigm shift from episodic office-based follow-up to continuous monitoring of device performance and patient and disease state. This lessens the clinical burden of device follow-up and may also lead to cost savings, although data on economic impact are only beginning to emerge. Remote monitoring technology has the potential to improve outcomes through earlier detection of arrhythmias and compromised device integrity, and possibly to predict heart failure hospitalizations through integration of heart failure diagnostics and hemodynamic monitors. Remote monitoring platforms are also huge databases of patients and devices, offering unprecedented opportunities to investigate real-world outcomes. Here, the current status of the field is described and future directions are predicted.

  11. Comparative study of two modes of gastroesophageal reflux measuring: conventional esophageal pH monitoring and wireless pH monitoring.

    PubMed

    Azzam, Rimon Sobhi; Sallum, Rubens A A; Brandão, Jeovana Ferreira; Navarro-Rodriguez, Tomás; Nasi, Ary

    2012-01-01

    Esophageal pH monitoring is considered to be the gold standard for the diagnosis of gastroesophageal acid reflux. However, this method is very troublesome and considerably limits the patient's routine activities. Wireless pH monitoring was developed to avoid these restrictions. The aim was to compare the first 24 hours of conventional and wireless pH monitoring, positioned 3 cm above the lower esophageal sphincter, in relation to: the occurrence of relevant technical failures, the ability to detect reflux, and the ability to correlate clinical symptoms with reflux. Twenty-five patients with typical symptoms of gastroesophageal reflux disease who were referred for esophageal pH monitoring were studied prospectively; they underwent clinical interview, endoscopy, and esophageal manometry, and were submitted, over a simultaneous initial period, to 24-hour catheter pH monitoring and 48-hour wireless pH monitoring. Early capsule detachment occurred in one (4%) case and there were no technical failures with the catheter pH monitoring (P = 0.463). Percentages of reflux time (total, upright, and supine) were higher with the wireless pH monitoring (P < 0.05). Pathological gastroesophageal reflux occurred in 16 (64%) patients submitted to the catheter and in 19 (76%) to the capsule (P = 0.355). The symptom index was positive in 12 (48%) patients with catheter pH monitoring and in 13 (52%) with wireless pH monitoring (P = 0.777). 1) No significant differences were found between the two methods of pH monitoring (capsule vs catheter) in regard to relevant technical failures; 2) Wireless pH monitoring detected higher percentages of reflux time than conventional pH-metry; 3) The two methods of pH monitoring were comparable in diagnosing pathological gastroesophageal reflux and in correlating clinical symptoms with gastroesophageal reflux.
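    The symptom index used above is conventionally the percentage of symptom episodes that coincide with a reflux episode, counted positive at ≥50%. The abstract does not restate the definition, so this sketch follows that common convention.

    ```python
    def symptom_index(symptoms_with_reflux, total_symptoms):
        """Percentage of symptom episodes temporally associated with reflux."""
        return 100.0 * symptoms_with_reflux / total_symptoms

    def symptom_index_positive(symptoms_with_reflux, total_symptoms, cutoff=50.0):
        """Conventionally, the symptom index is called positive at >= 50%."""
        return symptom_index(symptoms_with_reflux, total_symptoms) >= cutoff
    ```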

  12. Application of Generative Topographic Mapping to Gear Failures Monitoring

    NASA Astrophysics Data System (ADS)

    Liao, Guanglan; Li, Weihua; Shi, Tielin; Rao, Raj B. K. N.

    2002-07-01

    The Generative Topographic Mapping (GTM) model was introduced as a probabilistic reformulation of the self-organizing map and has already been used in a variety of applications. This paper presents a study of the GTM in industrial gear failure monitoring. Vibration signals are analyzed using the GTM model, and the results show that gear feature data sets can be projected into a two-dimensional space and clustered in different areas according to their conditions, clearly distinguishing gears with a cracked or broken tooth from those in normal condition. By tracing the image points in the two-dimensional space, the variation of gear work conditions can be observed visually; therefore, the occurrence and varying trend of gear failures can be monitored in a timely manner.

  13. Development of Wireless Subsurface Microsensors for Health Monitoring of Thermal Protection Systems

    NASA Technical Reports Server (NTRS)

    Pallix, Joan; Milos, Frank; Arnold, James O. (Technical Monitor)

    2000-01-01

    Low-cost access to space is a primary goal for both NASA and the U.S. aerospace industry. Integrated subsystem health diagnostics is an area where major improvements have been identified for potential implementation into the design of new reusable launch vehicles (RLVs) in order to reduce life cycle costs, increase safety margins, and improve mission reliability. A number of efforts are underway to use existing and emerging technologies to establish new methods for vehicle health monitoring on operational vehicles as well as X-vehicles. This paper summarizes a joint effort between several NASA centers and industry partners to develop rapid wireless diagnostic tools for failure management and long-term performance monitoring of thermal protection systems (TPS) on future RLVs. An embedded wireless microsensor suite is being designed to allow rapid subsurface TPS health monitoring and damage assessment. This sensor suite will consist of both passive overlimit sensors and sensors for continuous parameter monitoring in flight. The on-board diagnostic system can be used to radio in maintenance requirements before landing, and the data could also be used to assist in design validation for X-vehicles. For a third-generation vehicle, wireless diagnostics should be at a stage of technical development that will allow use in intelligent feedback systems for guidance and navigation control applications and can also serve as feedback for TPS that can intelligently adapt to its environment.

  14. Instrument Failures for the da Vinci Surgical System: a Food and Drug Administration MAUDE Database Study.

    PubMed

    Friedman, Diana C W; Lendvay, Thomas S; Hannaford, Blake

    2013-05-01

    Our goal was to analyze reported instances of da Vinci robotic surgical system instrument failures using the FDA's MAUDE (Manufacturer and User Facility Device Experience) database. From these data we identified some root causes of failures as well as trends that may assist surgeons and users of the robotic technology. We conducted a survey of the MAUDE database and tallied robotic instrument failures that occurred between January 2009 and December 2010. We categorized failures into five main groups (cautery, shaft, wrist or tool tip, cable, and control housing) based on technical differences in instrument design and function. A total of 565 instrument failures were documented through 528 reports. The majority of failures (285) were of the instrument's wrist or tool tip. Cautery problems comprised 174 failures, 76 were shaft failures, 29 were cable failures, and 7 were control housing failures. Of the reports, 10 had no discernible failure mode and 49 exhibited multiple failures. The data show that a number of robotic instrument failures occurred in a short period of time. In reality, many instrument failures may go unreported; thus, a true failure rate cannot be determined from these data. However, education of hospital administrators, operating room staff, surgeons, and patients should be incorporated into discussions regarding the introduction and utilization of robotic technology. We recommend institutions incorporate standard failure reporting policies so that the community of robotic surgery companies and surgeons can improve on existing technologies for optimal patient safety and outcomes.
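    The per-category counts above can be tallied directly. Note that they sum to 571, slightly more than the 565 total failures, presumably because some reports listed multiple failure modes.

    ```python
    # Per-category failure counts as reported in the MAUDE review above
    failures = {
        "wrist or tool tip": 285,
        "cautery":           174,
        "shaft":              76,
        "cable":              29,
        "control housing":     7,
    }
    total = sum(failures.values())  # 571, vs. 565 distinct failures reported
    # Share of each category among the per-category counts, in percent
    shares = {cat: round(100 * n / total, 1) for cat, n in failures.items()}
    ```

    Roughly half of the categorized failures involve the wrist or tool tip, which is where a surgeon's attention during pre-use inspection might best be focused.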

  15. VEGF in nuclear medicine: Clinical application in cancer and future perspectives (Review).

    PubMed

    Taurone, Samanta; Galli, Filippo; Signore, Alberto; Agostinelli, Enzo; Dierckx, Rudi A J O; Minni, Antonio; Pucci, Marcella; Artico, Marco

    2016-08-01

    Clinical trials using antiangiogenic drugs revealed their potential against cancer. Unfortunately, a large percentage of patients do not yet benefit from this therapeutic approach, highlighting the need for diagnostic tools to non-invasively evaluate and monitor response to therapy. Such tools would also make it possible to predict which patients are likely to benefit from antiangiogenic therapy. Reasons for treatment failure might include low expression of the drug targets or the prevalence of other pathways. Molecular imaging has therefore been explored as a diagnostic technique of choice. Since the vascular endothelial growth factor (VEGF/VEGFR) pathway is the main driver of tumor angiogenesis, several new drugs target either the soluble ligand or its receptor to inhibit signaling and induce tumor regression. To date, it is difficult to determine local VEGF or VEGFR levels, and their non-invasive measurement in tumors might give insight into the available target for VEGF/VEGFR-dependent antiangiogenic therapies, supporting therapy decision making and monitoring of response.

  16. Model-Based Anomaly Detection for a Transparent Optical Transmission System

    NASA Astrophysics Data System (ADS)

    Bengtsson, Thomas; Salamon, Todd; Ho, Tin Kam; White, Christopher A.

    In this chapter, we present an approach for anomaly detection at the physical layer of networks where detailed knowledge about the devices and their operations is available. The approach combines physics-based process models with observational data models to characterize the uncertainties and derive the alarm decision rules. We formulate and apply three different methods based on this approach for a well-defined problem in optical network monitoring that features many typical challenges for this methodology. Specifically, we address the problem of monitoring optically transparent transmission systems that use dynamically controlled Raman amplification systems. We use models of amplifier physics together with statistical estimation to derive alarm decision rules and use these rules to automatically discriminate between measurement errors, anomalous losses, and pump failures. Our approach has led to an efficient tool for systematically detecting anomalies in the system behavior of a deployed network, where pro-active measures to address such anomalies are key to preventing unnecessary disturbances to the system's continuous operation.
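    In its simplest form, the model-plus-statistics alarm rule described above amounts to thresholding the residual between a measured quantity and the physics-based prediction; a minimal sketch (the threshold form and parameter names are assumptions, not the chapter's actual decision rules):

    ```python
    def residual_alarm(measured_db, predicted_db, sigma_db, k=3.0):
        """Raise an alarm when the loss residual exceeds k standard deviations
        of the combined model/measurement uncertainty (illustrative rule)."""
        return abs(measured_db - predicted_db) > k * sigma_db
    ```

    Discriminating between measurement errors, anomalous losses, and pump failures would then hinge on which residuals fire together and on the amplifier-physics model, as the chapter describes.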

  17. Investigate the Capabilities of Remotely Sensed Crop Indicators for Agricultural Drought Monitoring in Kansas

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Becker-Reshef, I.; Justice, C. O.

    2013-12-01

    Although agricultural production has been rising in recent years, drought remains the primary cause of crop failure, leading to food price instability and threatening food security. The 'Global Food Crisis' episodes of 2008, 2011, and 2012 put drought and its impact on crop production at the forefront, highlighting the need for effective agricultural drought monitoring. Satellite observations have proven a practical, cost-effective, and dynamic tool for drought monitoring. However, most satellite-based methods were not developed specifically for agriculture, and their performance for agricultural drought monitoring needs further refinement. Wheat is the most widely grown crop in the world, and the recent droughts highlight the importance of drought monitoring in major wheat-producing areas. As the largest wheat-producing state in the US, Kansas plays an important role in both global and domestic wheat markets. Thus, the objective of this study is to investigate the capabilities of remotely sensed crop indicators for effective agricultural drought monitoring in Kansas wheat-growing regions using MODIS data and crop yield statistics. First, crop indicators such as NDVI, anomaly, and cumulative metrics were calculated. Second, the varying impacts of agricultural drought at different stages were explored by examining the relationship between the derived indicators and yields. Also, the starting date for effective agricultural drought early detection and the key agricultural drought alert period were identified. Finally, the thresholds of these indicators for agricultural drought early warning were derived and the implications of these indicators for agricultural drought monitoring were discussed.
The preliminary results indicate that drought shows significant impacts from the mid-growing season (after mid-April); NDVI anomaly enables effective drought early detection from late April, and late April to early June can serve as the key alert period for agricultural drought early warning; drought occurring in early May has the most significant agricultural impacts. This research intends to help prototype an agricultural drought alert system, which could alert crop analysts to drought-vulnerable areas and periods and provide tools for assessing crop outlooks in these regions.
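    The NDVI-based indicators mentioned above follow the standard definition; the anomaly is expressed here as the departure from a multi-year mean for the same period (the study's exact anomaly formulation is not stated in the abstract).

    ```python
    def ndvi(nir, red):
        """Normalized difference vegetation index from near-infrared
        and red surface reflectance."""
        return (nir - red) / (nir + red)

    def ndvi_anomaly(current_ndvi, climatological_mean):
        """Departure from the multi-year mean NDVI for the same period;
        negative values during the growing season suggest drought stress."""
        return current_ndvi - climatological_mean
    ```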

  18. Use of near-infrared spectroscopy (NIRs) in the biopharmaceutical industry for real-time determination of critical process parameters and integration of advanced feedback control strategies using MIDUS control.

    PubMed

    Vann, Lucas; Sheppard, John

    2017-12-01

    Control of biopharmaceutical processes is critical to achieving consistent product quality. The most challenging unit operation to control is cell growth in bioreactors, due to the exquisitely sensitive and complex nature of the cells that convert raw materials into new cells and products. Current monitoring capabilities are increasing; however, the main challenge is now becoming the ability to use the data generated in an effective manner. There are a number of contributors to this challenge, including integration of different monitoring systems as well as the functionality to perform data analytics in real time to generate process knowledge and understanding. In addition, there is a lack of ability to easily generate strategies and close the loop to feed back into the process for advanced process control (APC). The current research aims to demonstrate the use of advanced monitoring tools along with data analytics to generate process understanding in an Escherichia coli fermentation process. NIR spectroscopy was used to measure glucose and critical amino acids in real time to help determine the root cause of failures associated with different lots of yeast extract. First, scale-down of the process was required to execute a simple design of experiments, followed by scale-up to build NIR models as well as soft sensors for advanced process control. In addition, the research demonstrates the potential for a novel platform technology that enables manufacturers to consistently achieve "golden batch" performance through monitoring, integration, data analytics, understanding, strategy design and control (MIDUS control). MIDUS control was employed to increase batch-to-batch consistency in final product titers, decrease the coefficient of variability from 8.49 to 1.16%, predict possible exhaust filter failures, and close the loop to prevent their occurrence and avoid lost batches.
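    The batch-to-batch consistency figure quoted above (coefficient of variability falling from 8.49% to 1.16%) is the sample standard deviation of final titers expressed as a percentage of their mean:

    ```python
    import statistics

    def cv_percent(titers):
        """Coefficient of variation of final product titers, in percent."""
        return 100.0 * statistics.stdev(titers) / statistics.mean(titers)
    ```

    For example, final titers of 9, 10, and 11 g/L (illustrative numbers, not from the study) give a CV of 10%.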

  19. Failure Prevention of Hydraulic System Based on Oil Contamination

    NASA Astrophysics Data System (ADS)

    Singh, M.; Lathkar, G. S.; Basu, S. K.

    2012-07-01

    Oil contamination is the major source of failure and wear of hydraulic system components. According to the literature, approximately 70% of hydraulic system failures are caused by oil contamination. Hence, to operate a hydraulic system reliably, the hydraulic oil should be in perfect condition. This requires a proper 'Contamination Management System' which involves monitoring of various parameters such as oil viscosity, oil temperature, and contamination level. A study has been carried out on a vehicle-mounted, hydraulically operated system used for articulation of a heavy article, after leveling the platform with outrigger cylinders. It is observed that proper monitoring of the contamination level yields a considerable increase in reliability, economy of operation, and service life. This also prevents frequent failure of the hydraulic system.

  20. Remote Monitoring in Heart Failure: the Current State.

    PubMed

    Mohan, Rajeev C; Heywood, J Thomas; Small, Roy S

    2017-03-01

    The treatment of congestive heart failure is an expensive undertaking, with much of this cost occurring as a result of hospitalization. It is not surprising that many remote monitoring strategies have been developed to help patients maintain clinical stability by avoiding congestion. Most of these have failed. It seems very unlikely that these failures resulted from any one underlying false assumption; rather, they reflect the fact that heart failure is a progressive, deadly disease and that human behavior is hard to modify. One lesson that does stand out from the myriad of methods to detect congestion is that surrogates of congestion, such as weight and impedance, are not reliable or actionable enough to influence outcomes. Too many factors influence these surrogates to successfully and confidently use them to affect HF hospitalization. Surrogates are often attractive because they can be inexpensively measured and followed. They are, however, indirect estimations of congestion, and due to their lack of specificity, the time and expense expended affecting the surrogate do not provide enough benefit to warrant its use. We know that high filling pressures cause transudation of fluid into tissues and that pulmonary edema and peripheral edema drive patients to seek medical assistance. Direct measurement of these filling pressures appears to be the sole remote monitoring modality that shows a benefit in altering the course of the disease in these patients. Congestive heart failure is such a serious problem, and the consequences of hospitalization so onerous in terms of patient well-being and costs to society, that actual hemodynamic monitoring, despite its costs, is beneficial in carefully selected high-risk patients. Those patients who benefit are ones with a prior hospitalization and ongoing New York Heart Association (NYHA) class III symptoms. Patients with NYHA class I and II symptoms do not require hemodynamic monitoring because they largely have normal hemodynamics.
Those with NYHA class IV symptoms do not benefit because their hemodynamics are so deranged that they cannot be substantially altered except by mechanical circulatory support or heart transplantation. Finally, hemodynamic monitoring offers substantial hope to those patients with normal ejection fraction (EF) heart failure, a large group for whom medical therapy has largely been a failure. These patients have not benefited from the neurohormonal revolution that improved the lives of their brothers and sisters with reduced ejection fractions. Hemodynamic stabilization improves the condition of both but more so of the normal EF cohort. This is an important observation that will help us design future trials for the 50% of heart failure patients with normal systolic function.

  1. Studies and analyses of the space shuttle main engine

    NASA Technical Reports Server (NTRS)

    Tischer, Alan E.; Glover, R. C.

    1987-01-01

    The primary objectives were to: evaluate ways to maximize the information yield from the current Space Shuttle Main Engine (SSME) condition monitoring sensors, identify additional sensors or monitoring capabilities which would significantly improve SSME data, and provide continuing support of the Main Engine Cost/Operations (MECO) model. In the area of SSME condition monitoring, the principal tasks were a review of selected SSME failure data, a general survey of condition monitoring, and an evaluation of the current engine monitoring system. A computerized data base was developed to assist in modeling engine failure information propagations. Each of the above items is discussed in detail. Also included is a brief discussion of the activities conducted in support of the MECO model.

  2. Monitoring Corrosion of Steel Bars in Reinforced Concrete Structures

    PubMed Central

    Verma, Sanjeev Kumar; Bhadauria, Sudhir Singh; Akhtar, Saleem

    2014-01-01

    Corrosion of steel bars embedded in reinforced concrete (RC) structures reduces the service life and durability of structures, causing early failure and significant inspection and maintenance costs for deteriorating structures. Hence, monitoring of reinforcement corrosion is of significant importance for preventing premature failure of structures. This paper attempts to present the importance of monitoring reinforcement corrosion and describes the different methods for evaluating the corrosion state of RC structures, especially the half-cell potential (HCP) method. This paper also presents a few techniques to protect concrete from corrosion. PMID:24558346

  3. Monitoring corrosion of steel bars in reinforced concrete structures.

    PubMed

    Verma, Sanjeev Kumar; Bhadauria, Sudhir Singh; Akhtar, Saleem

    2014-01-01

    Corrosion of steel bars embedded in reinforced concrete (RC) structures reduces the service life and durability of structures, causing early failure and significant inspection and maintenance costs for deteriorating structures. Hence, monitoring of reinforcement corrosion is of significant importance for preventing premature failure of structures. This paper attempts to present the importance of monitoring reinforcement corrosion and describes the different methods for evaluating the corrosion state of RC structures, especially the half-cell potential (HCP) method. This paper also presents a few techniques to protect concrete from corrosion.
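    Half-cell potential readings like those discussed in the two entries above are conventionally interpreted against the ASTM C876 thresholds for a copper/copper-sulfate reference electrode (CSE); the abstracts do not restate them, so the values below follow that standard.

    ```python
    def corrosion_probability_cse(potential_mv):
        """ASTM C876 interpretation of half-cell potential measured against
        a Cu/CuSO4 (CSE) reference electrode, in millivolts."""
        if potential_mv > -200:
            return "low (<10% probability of active corrosion)"
        if potential_mv < -350:
            return "high (>90% probability of active corrosion)"
        return "uncertain (intermediate range)"
    ```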

  4. Weld monitor and failure detector for nuclear reactor system

    DOEpatents

    Sutton, Jr., Harry G.

    1987-01-01

    Critical but inaccessible welds in a nuclear reactor system are monitored throughout the life of the reactor by providing small aperture means projecting completely through the reactor vessel wall and also through the weld or welds to be monitored. The aperture means is normally sealed from the atmosphere within the reactor. Any incipient failure or cracking of the weld will cause the environment contained within the reactor to pass into the aperture means and thence to the outer surface of the reactor vessel where its presence is readily detected.

  5. Frequency Domain Reflectometry Modeling and Measurement for Nondestructive Evaluation of Nuclear Power Plant Cables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glass, Samuel W.; Fifield, Leonard S.; Jones, Anthony M.

    Cable insulation polymers are among the materials most susceptible to age-related degradation within a nuclear power plant. This is recognized by both regulators and utilities, so all plants have developed cable aging management programs to detect damage before critical component failure, in compliance with regulatory guidelines. Although a wide range of tools is available to evaluate cables and cable systems, cable aging management programs vary in how condition monitoring and NDE are conducted as utilities search for the most reliable and cost-effective ways to assess cable system condition. Frequency domain reflectometry (FDR) is emerging as a valuable tool for locating and assessing damaged portions of a cable system with minimal cost, and in most cases it requires access to only one of the cable's terminal ends. This work examines a physics-based model of a cable system and relates it to FDR measurements for a better understanding of specific damage influences on defect detectability.

  6. A study of unstable rock failures using finite difference and discrete element methods

    NASA Astrophysics Data System (ADS)

    Garvey, Ryan J.

    Case histories in mining have long described pillars or faces of rock failing violently with an accompanying rapid ejection of debris and broken material into the working areas of the mine. These unstable failures have resulted in large losses of life and collapses of entire mine panels. Modern mining operations take significant steps to reduce the likelihood of unstable failure; however, eliminating their occurrence is difficult in practice. Researchers over several decades have supplemented studies of unstable failures through the application of various numerical methods. The direction of the current research is to extend these methods and to develop improved numerical tools with which to study unstable failures in underground mining layouts. An extensive study is first conducted on the expression of unstable failure in discrete element and finite difference methods. Simulated uniaxial compressive strength tests are run on brittle rock specimens. Stable or unstable loading conditions are applied onto the brittle specimens by a pair of elastic platens with ranging stiffnesses. Determinations of instability are established through stress and strain histories taken for the specimen and the system. Additional numerical tools are then developed for the finite difference method to analyze unstable failure in larger mine models. Instability identifiers are established for assessing the locations and relative magnitudes of unstable failure through measures of rapid dynamic motion. An energy balance is developed which calculates the excess energy released as a result of unstable equilibria in rock systems. These tools are validated through uniaxial and triaxial compressive strength tests and are extended to models of coal pillars and a simplified mining layout.
The results of the finite difference simulations reveal that the instability identifiers and excess energy calculations provide a generalized methodology for assessing unstable failures within potentially complex mine models. These combined numerical tools may be applied in future studies to design primary and secondary supports in bump-prone conditions, evaluate retreat mining cut sequences, assess pillar de-stressing techniques, or perform back-analyses of unstable failures in select mining layouts.
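    The abstract does not state the instability criterion used; the classical stiffness comparison for compression tests with elastic platens of varying stiffness (the local-mine-stiffness concept) can be sketched as follows, as an assumption about the kind of rule such studies apply:

    ```python
    def is_unstable(system_stiffness, post_peak_slope):
        """Classical stiffness criterion: failure is unstable when the loading
        system's stiffness is smaller in magnitude than the specimen's
        post-peak softening slope (post_peak_slope is negative for softening).
        Both arguments share the same units, e.g. GN/m."""
        return system_stiffness < abs(post_peak_slope)
    ```

    A soft platen (low stiffness) releases stored elastic energy faster than the softening specimen can dissipate it, which is the excess-energy picture the thesis's energy balance quantifies.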

  7. GeneXpert HIV-1 quant assay, a new tool for scale up of viral load monitoring in the success of ART programme in India.

    PubMed

    Kulkarni, Smita; Jadhav, Sushama; Khopkar, Priyanka; Sane, Suvarna; Londhe, Rajkumar; Chimanpure, Vaishali; Dhilpe, Veronica; Ghate, Manisha; Yelagate, Rajendra; Panchal, Narayan; Rahane, Girish; Kadam, Dilip; Gaikwad, Nitin; Rewari, Bharat; Gangakhedkar, Raman

    2017-07-21

    Recent WHO guidelines identify virologic monitoring for diagnosing and confirming ART failure. In view of this, validation and scale-up of point-of-care viral load technologies is essential in resource-limited settings. A systematic validation of the GeneXpert® HIV-1 Quant assay (a point-of-care technology) was carried out in view of scaling up HIV-1 viral load testing in India to monitor the success of the national ART programme. Two hundred nineteen plasma specimens falling in nine viral load ranges (<40 to >5 L copies/ml) were tested by the Abbott m2000rt Real Time and GeneXpert HIV-1 Quant assays. Additionally, 20 seronegative specimens, 16 stored specimens, and 10 spiked controls were also tested. Statistical analysis was done using Stata/IC, and sensitivity, specificity, PPV, NPV, and % misclassification rates were calculated per DHS/AIS, WHO, and NACO cut-offs for virological failure. The GeneXpert assay compared well with the Abbott assay, with a higher sensitivity (97%), specificity (97-100%), and concordance (91.32%). The correlation between the two assays (r = 0.886) was statistically significant (p < 0.01), the linear regression showed a moderate fit (R² = 0.784), and differences were within limits of agreement. Reproducibility showed an average variation of 4.15 and 3.52%, while the lower limit of detection (LLD) and upper limit of detection (ULD) were 42 and 1,740,000 copies/ml respectively. The misclassification rates for the three viral load cut-offs were not statistically different (p = 0.736). All seronegative samples were negative, and viral loads of the stored samples showed a good fit (R² = 0.896 to 0.982). The viral load results of the GeneXpert HIV-1 Quant assay compared well with the Abbott HIV-1 m2000 Real Time PCR, suggesting its use as a point-of-care assay for viral load estimation in resource-limited settings. Its ease of performance and rapidity will aid in timely diagnosis of ART failures and integrated HIV-TB management, and will facilitate the UNAIDS 90-90-90 target.
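    The sensitivity, specificity, PPV, and NPV figures above derive from a 2×2 classification of virological failure at each cutoff; a generic sketch with hypothetical counts (not the paper's actual 2×2 tables):

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Standard 2x2 diagnostic accuracy metrics against a reference assay."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv":         tp / (tp + fp),
            "npv":         tn / (tn + fn),
        }

    # Hypothetical counts at one virological-failure cutoff (illustrative only)
    m = diagnostic_metrics(tp=97, fp=3, fn=3, tn=97)
    ```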

  8. Real-time diagnostics for a reusable rocket engine

    NASA Technical Reports Server (NTRS)

    Guo, T. H.; Merrill, W.; Duyar, A.

    1992-01-01

    A hierarchical, decentralized diagnostic system is proposed for the Real-Time Diagnostic System component of the Intelligent Control System (ICS) for reusable rocket engines. The proposed diagnostic system has three layers of information processing: condition monitoring, fault mode detection, and expert system diagnostics. The condition monitoring layer is the first level of signal processing. Here, important features of the sensor data are extracted. These processed data are then used by the higher level fault mode detection layer to do preliminary diagnosis on potential faults at the component level. Because of the closely coupled nature of the rocket engine propulsion system components, it is expected that a given engine condition may trigger more than one fault mode detector. Expert knowledge is needed to resolve the conflicting reports from the various failure mode detectors. This is the function of the diagnostic expert layer. Here, the heuristic nature of this decision process makes it desirable to use an expert system approach. Implementation of the real-time diagnostic system described above requires a wide spectrum of information processing capability. Generally, in the condition monitoring layer, fast data processing is often needed for feature extraction and signal conditioning. This is usually followed by some detection logic to determine the selected faults on the component level. Three different techniques are used to attack different fault detection problems in the NASA LeRC ICS testbed simulation. The first technique employed is the neural network application for real-time sensor validation which includes failure detection, isolation, and accommodation. The second approach demonstrated is the model-based fault diagnosis system using on-line parameter identification. Besides these model based diagnostic schemes, there are still many failure modes which need to be diagnosed by the heuristic expert knowledge. 
The heuristic expert knowledge is implemented using a real-time expert system tool called G2 by Gensym Corp. Finally, the distributed diagnostic system requires another level of intelligence to oversee the fault mode reports generated by component fault detectors. The decision making at this level can best be done using a rule-based expert system. This level of expert knowledge is also implemented using G2.
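
    The three-layer split described above (condition monitoring, component-level fault detectors, expert arbitration) can be sketched in miniature. All signal names, thresholds, and the arbitration rule below are invented for illustration:

```python
# Minimal sketch of a three-layer diagnostic pipeline: (1) condition
# monitoring extracts features, (2) per-component detectors flag candidate
# fault modes, (3) a rule layer arbitrates conflicting reports.
# Signal names and thresholds are invented for illustration.

import statistics

def extract_features(samples):
    """Condition-monitoring layer: reduce a raw signal to summary features."""
    return {"mean": statistics.mean(samples), "stdev": statistics.pstdev(samples)}

def pump_detector(features):
    """Component-level detector: flag a fault mode from features."""
    return ["pump_cavitation"] if features["stdev"] > 5.0 else []

def sensor_detector(features):
    return ["sensor_drift"] if features["mean"] > 100.0 else []

def arbitrate(reports):
    """Expert layer: a simple rule resolves conflicting component reports."""
    if "pump_cavitation" in reports and "sensor_drift" in reports:
        return "sensor_drift"   # rule: suspect the sensor before the pump
    return reports[0] if reports else "nominal"

samples = [101, 103, 99, 104, 102]          # steady but offset reading
features = extract_features(samples)
reports = pump_detector(features) + sensor_detector(features)
print(arbitrate(reports))                   # prints "sensor_drift"
```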

  9. Public Choice, Market Failure, and Government Failure in Principles Textbooks

    ERIC Educational Resources Information Center

    Fike, Rosemarie; Gwartney, James

    2015-01-01

    Public choice uses the tools of economics to analyze how the political process allocates resources and impacts economic activity. In this study, the authors examine twenty-three principles texts regarding coverage of public choice, market failure, and government failure. Approximately half the texts provide coverage of public choice and recognize…

  10. An artificial intelligence approach to onboard fault monitoring and diagnosis for aircraft applications

    NASA Technical Reports Server (NTRS)

    Schutte, P. C.; Abbott, K. H.

    1986-01-01

    Real-time onboard fault monitoring and diagnosis for aircraft applications, whether performed by the human pilot or by automation, presents many difficult problems. Quick response to failures may be critical, the pilot often must compensate for the failure while diagnosing it, his information about the state of the aircraft is often incomplete, and the behavior of the aircraft changes as the effect of the failure propagates through the system. A research effort was initiated to identify guidelines for automation of onboard fault monitoring and diagnosis and associated crew interfaces. The effort began by determining the flight crew's information requirements for fault monitoring and diagnosis and the various reasoning strategies they use. Based on this information, a conceptual architecture was developed for the fault monitoring and diagnosis process. This architecture represents an approach and a framework which, once incorporated with the necessary detail and knowledge, can be a fully operational fault monitoring and diagnosis system, as well as providing the basis for comparison of this approach to other fault monitoring and diagnosis concepts. The architecture encompasses all aspects of the aircraft's operation, including navigation, guidance and controls, and subsystem status. The portion of the architecture that encompasses subsystem monitoring and diagnosis was implemented for an aircraft turbofan engine to explore and demonstrate the AI concepts involved. This paper describes the architecture and the implementation for the engine subsystem.

  11. In vivo swine myocardial tissue characterization and monitoring during open chest surgery by time-resolved diffuse near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Spinelli, Lorenzo; Contini, Davide; Farina, Andrea; Torricelli, Alessandro; Pifferi, Antonio; Cubeddu, Rinaldo; Ascari, Luca; Potì, Luca; Trivella, Maria Giovanna; L'Abbate, Antonio; Puzzuoli, Stefano

    2011-03-01

    Cardiovascular diseases are the main cause of death in industrialized countries. Worldwide, a large number of patients suffering from cardiac diseases are treated by surgery. Despite the advances achieved in the last decades with myocardial protection, surgical failure can still occur. This is due at least in part to the imperfect control of the metabolic status of the heart in the various phases of surgical intervention. At present, this is indirectly controlled by the electrocardiogram and the echographic monitoring of cardiac mechanics as direct measurements are lacking. Diffuse optical technologies have recently emerged as promising tools for the characterization of biological tissues like breast, muscles and bone, and for the monitoring of important metabolic parameters such as blood oxygenation, volume and flow. As a matter of fact, their utility has been demonstrated in a variety of applications for functional imaging of the brain, optical mammography and monitoring of muscle metabolism. However, due to technological and practical difficulties, their potential for cardiac monitoring has not yet been exploited. In this work we show the feasibility of the in-vivo determination of absorption and scattering spectra of the cardiac muscle in the 600-1100 nm range, and of monitoring myocardial tissue hemodynamics by time domain near-infrared spectroscopy at 690 nm and 830 nm. Both measurements have been performed on the exposed beating heart during open chest surgery in pigs, an experimental model closely mimicking the clinical cardio-surgical setting.

  12. Cyber-Physical System Security With Deceptive Virtual Hosts for Industrial Control Networks

    DOE PAGES

    Vollmer, Todd; Manic, Milos

    2014-05-01

    A challenge facing industrial control network administrators is protecting the typically large number of connected assets for which they are responsible. These cyber devices may be tightly coupled with the physical processes they control, and human-induced failures risk dire real-world consequences. Dynamic virtual honeypots are effective tools for observing and attracting network intruder activity. This paper presents a design and implementation for self-configuring honeypots that passively examine control system network traffic and actively adapt to the observed environment. In contrast to prior work in the field, six tools were analyzed for suitability of network entity information gathering. Ettercap, an established network security tool not commonly used in this capacity, outperformed the other tools and was chosen for implementation. Utilizing Ettercap XML output, a novel four-step algorithm was developed for autonomous creation and update of a Honeyd configuration. This algorithm was tested on an existing small campus grid and sensor network by execution of a collaborative usage scenario. Automatically created virtual hosts were deployed in concert with an anomaly behavior (AB) system in an attack scenario. Virtual hosts were automatically configured with unique emulated network stack behaviors for 92% of the targeted devices. The AB system alerted on 100% of the monitored emulated devices.
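
    The core of the approach above is turning passively gathered host facts into honeypot templates. A hedged sketch: the XML shape and configuration directives below are simplified stand-ins, not real Ettercap output or exact Honeyd syntax:

```python
# Sketch of turning scanned-host records into a Honeyd-style configuration.
# The XML shape is invented for illustration; real Ettercap output and
# Honeyd directives differ in detail.

import xml.etree.ElementTree as ET

SCAN_XML = """
<hosts>
  <host ip="192.168.1.10" os="Linux 2.6"><port proto="tcp" number="22"/></host>
  <host ip="192.168.1.11" os="Windows XP"><port proto="tcp" number="80"/></host>
</hosts>
"""

def build_config(xml_text):
    """Emit one template per observed host: personality, open ports, bind."""
    lines = []
    for host in ET.fromstring(xml_text).findall("host"):
        name = "h_" + host.get("ip").replace(".", "_")
        lines.append(f'create {name}')
        lines.append(f'set {name} personality "{host.get("os")}"')
        for port in host.findall("port"):
            lines.append(f'add {name} {port.get("proto")} port {port.get("number")} open')
        lines.append(f'bind {host.get("ip")} {name}')
    return "\n".join(lines)

print(build_config(SCAN_XML))
```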

  13. Design of Friction Stir Spot Welding Tools by Using a Novel Thermal-Mechanical Approach

    PubMed Central

    Su, Zheng-Ming; Qiu, Qi-Hong; Lin, Pai-Chen

    2016-01-01

    A simple thermal-mechanical model for friction stir spot welding (FSSW) was developed to obtain similar weld performance from different weld tools. Use of the thermal-mechanical model and a combined approach enabled the design of weld tools of various sizes but similar quality. Three weld tools for weld radii of 4, 5, and 6 mm were made to join 6061-T6 aluminum sheets. Performance evaluations of the three weld tools compared the fracture behavior, microstructure, micro-hardness distribution, and welding temperature of welds in lap-shear specimens. For welds made by the three weld tools under identical processing conditions, failure loads were approximately proportional to tool size. Failure modes, microstructures, and micro-hardness distributions were similar. Welding temperatures correlated with frictional heat generation rate densities. Because the three weld tools sufficiently met all design objectives, the proposed approach is considered a simple and feasible guideline for preliminary tool design. PMID:28773800

  14. Design of Friction Stir Spot Welding Tools by Using a Novel Thermal-Mechanical Approach.

    PubMed

    Su, Zheng-Ming; Qiu, Qi-Hong; Lin, Pai-Chen

    2016-08-09

    A simple thermal-mechanical model for friction stir spot welding (FSSW) was developed to obtain similar weld performance from different weld tools. Use of the thermal-mechanical model and a combined approach enabled the design of weld tools of various sizes but similar quality. Three weld tools for weld radii of 4, 5, and 6 mm were made to join 6061-T6 aluminum sheets. Performance evaluations of the three weld tools compared the fracture behavior, microstructure, micro-hardness distribution, and welding temperature of welds in lap-shear specimens. For welds made by the three weld tools under identical processing conditions, failure loads were approximately proportional to tool size. Failure modes, microstructures, and micro-hardness distributions were similar. Welding temperatures correlated with frictional heat generation rate densities. Because the three weld tools sufficiently met all design objectives, the proposed approach is considered a simple and feasible guideline for preliminary tool design.

  15. Perspectives on Wellness Self-Monitoring Tools for Older Adults

    PubMed Central

    Huh, Jina; Le, Thai; Reeder, Blaine; Thompson, Hilaire J.; Demiris, George

    2013-01-01

    Purpose: Our purpose was to understand different stakeholder perceptions about the use of self-monitoring tools, specifically in the area of older adults' personal wellness. In conjunction with the advent of personal health records, tracking personal health using self-monitoring technologies shows promising patient support opportunities. While clinicians' tools for monitoring of older adults have been explored, we know little about how older adults may self-monitor their wellness and health and how their health care providers would perceive such use. Methods: We conducted three focus groups with health care providers (n=10) and four focus groups with community-dwelling older adults (n=31). Results: Older adult participants found the concept of self-monitoring unfamiliar, and this narrowed their interest in the use of wellness self-monitoring tools. On the other hand, health care provider participants showed open attitudes towards wellness monitoring tools for older adults and brainstormed about various stakeholders' use cases. The two participant groups showed diverging perceptions in terms of perceived uses, stakeholder interests, information ownership and control, and sharing of wellness monitoring tools. Conclusions: Our paper provides implications and solutions for how older adults' wellness self-monitoring tools can enhance patient-health care provider interaction, patient education, and improvement in overall wellness. PMID:24041452

  16. Perspectives on wellness self-monitoring tools for older adults.

    PubMed

    Huh, Jina; Le, Thai; Reeder, Blaine; Thompson, Hilaire J; Demiris, George

    2013-11-01

    Our purpose was to understand different stakeholder perceptions about the use of self-monitoring tools, specifically in the area of older adults' personal wellness. In conjunction with the advent of personal health records, tracking personal health using self-monitoring technologies shows promising patient support opportunities. While clinicians' tools for monitoring of older adults have been explored, we know little about how older adults may self-monitor their wellness and health and how their health care providers would perceive such use. We conducted three focus groups with health care providers (n=10) and four focus groups with community-dwelling older adults (n=31). Older adult participants found the concept of self-monitoring unfamiliar, and this narrowed their interest in the use of wellness self-monitoring tools. On the other hand, health care provider participants showed open attitudes toward wellness monitoring tools for older adults and brainstormed about various stakeholders' use cases. The two participant groups showed diverging perceptions in terms of perceived uses, stakeholder interests, information ownership and control, and sharing of wellness monitoring tools. Our paper provides implications and solutions for how older adults' wellness self-monitoring tools can enhance patient-health care provider interaction, patient education, and improvement in overall wellness. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Operating a global seismic network - perspectives from the USGS GSN

    NASA Astrophysics Data System (ADS)

    Gee, L. S.; Derr, J. S.; Hutt, C. R.; Bolton, H.; Ford, D.; Gyure, G. S.; Storm, T.; Leith, W.

    2007-05-01

    The Global Seismographic Network (GSN) is a permanent digital network of state-of-the-art seismological and geophysical sensors connected by a global telecommunications network, serving as a multi-use scientific facility for seismic monitoring and response applications, basic and applied research in solid-earth geophysics, and earth science education. A joint program of the U.S. Geological Survey (USGS), the National Science Foundation, and the Incorporated Research Institutions for Seismology (IRIS), the GSN provides near-uniform, worldwide monitoring of the Earth through 144 modern, globally distributed seismic stations. The USGS currently operates 90 GSN or GSN-affiliate stations. As a US government program, the USGS GSN is evaluated on several performance measures, including data availability, data latency, and cost effectiveness. The USGS component of the GSN, like the GSN as a whole, is in transition from a period of rapid growth to steady-state operations. The program faces challenges of aging equipment and increased operating costs at the same time that national and international earthquake and tsunami monitoring agencies place an increased reliance on GSN data. Data acquisition of the USGS GSN is based on the Quanterra Q680 datalogger, a workhorse system that is approaching twenty years in the field, often in harsh environments. An IRIS instrumentation committee recently selected the Quanterra Q330 HR as the "next generation" GSN data acquisition system, and the USGS will begin deploying the new equipment in the middle of 2007. These new systems will address many of the issues associated with the aging Q680 while providing a platform for interoperability across the GSN. In order to address the challenge of increasing operational costs, the USGS employs several tools. First, the USGS benefits from the contributions of local host institutions.
The station operators are the first line of defense when a station experiences problems, changing boards, swapping cables, and re-centering sensors. In order to facilitate this effort, the USGS maintains supplies of on-site spares at a number of stations, primarily at those with difficult shipping or travel logistics. In addition, the USGS is moving toward the GSN standard of installing a secondary broadband sensor at each site, to serve as a backup in case of failure of the primary broadband sensor. The recent transition to real-time telemetry has been an enormous boon for station operations as well as for earthquake and tsunami monitoring. For example, the USGS examines waveforms daily for data dropouts (gaps), out-of-nominal range data values, and overall noise levels. Higher level quality control focuses on problems in sensitivity, timing, polarity, orientation, and general instrument behavior. The quality control operations are essential for quickly identifying problems with stations, allowing for remedial or preventive maintenance that preserves data continuity and quality and minimizes catastrophic failure of the station or significant loss of data. The USGS tracks network performance using a variety of tools. Through Web pages with plots of waveforms (heliplots), data latency, and data availability, quick views of station status are available. The USGS has recently implemented other monitoring tools, such as SeisNetWatch, for evaluating station state of health.
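
    The daily check for data dropouts described above amounts to scanning sample timestamps for intervals longer than the nominal sample spacing. A minimal sketch with invented times and tolerance:

```python
# Sketch of a daily gap check: given sample timestamps from a station
# channel, report intervals longer than the nominal sample spacing.
# Times and tolerance are illustrative.

def find_gaps(timestamps, dt, tol=1.5):
    """Return (start, end) pairs where spacing exceeds tol * dt seconds."""
    gaps = []
    for a, b in zip(timestamps, timestamps[1:]):
        if b - a > tol * dt:
            gaps.append((a, b))
    return gaps

# 1 Hz channel with a dropout between t=3 and t=10
times = [0, 1, 2, 3, 10, 11, 12]
print(find_gaps(times, dt=1.0))   # prints [(3, 10)]
```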

  18. Reliable Collection of Real-Time Patient Physiologic Data from less Reliable Networks: a "Monitor of Monitors" System (MoMs).

    PubMed

    Hu, Peter F; Yang, Shiming; Li, Hsiao-Chi; Stansbury, Lynn G; Yang, Fan; Hagegeorge, George; Miller, Catriona; Rock, Peter; Stein, Deborah M; Mackenzie, Colin F

    2017-01-01

    Research and practice based on automated electronic patient monitoring and data collection systems is significantly limited by system downtime. We asked whether a triple-redundant Monitor of Monitors System (MoMs) to collect and summarize key information from system-wide data sources could achieve high fault tolerance, early diagnosis of system failure, and improved data collection rates. In our Level I trauma center, patient vital signs (VS) monitors were networked to collect real-time patient physiologic data streams from 94 bed units in our various resuscitation, operating, and critical care units. To minimize the impact of server collection failure, three BedMaster® VS servers were used in parallel to collect data from all bed units. To locate and diagnose system failures, we summarized critical information from high-throughput data streams in real time in a dashboard viewer, and compared the pre- and post-MoMs phases to evaluate data collection performance in terms of availability time, active collection rates, and gap duration, occurrence, and categories. Single-server collection rates in the 3-month period before MoMs deployment ranged from 27.8% to 40.5%, with a combined collection rate of 79.1%. Reasons for gaps included collection server failure, software instability, individual bed setting inconsistency, and monitor servicing. In the 6-month post-MoMs deployment period, average collection rates were 99.9%. A triple-redundant patient data collection system with real-time diagnostic information summarization and representation improved the reliability of massive clinical data collection to nearly 100% in a Level I trauma center. Such a data collection framework may also increase the automation level of hospital-wide information aggregation for optimal allocation of health care resources.
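
    The benefit of triple redundancy comes from union coverage: a time bin counts as collected if any one server captured it. A toy sketch with invented coverage sets:

```python
# Sketch of redundant collection: three servers each capture some time bins;
# overall collection rate is the fraction of bins covered by at least one.
# The coverage sets are invented for illustration.

def collection_rate(streams, total_bins):
    """Fraction of time bins covered by at least one collector."""
    covered = set().union(*streams)
    return len(covered) / total_bins

server_a = {0, 1, 2, 3}          # bins each server managed to collect
server_b = {2, 3, 4, 5, 6}
server_c = {0, 6, 7, 8, 9}

print(collection_rate([server_a], 10))                      # single server: 0.4
print(collection_rate([server_a, server_b, server_c], 10))  # combined: 1.0
```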

  19. Low-cost failure sensor design and development for water pipeline distribution systems.

    PubMed

    Khan, K; Widdop, P D; Day, A J; Wood, A S; Mounce, S R; Machell, J

    2002-01-01

    This paper describes the design and development of a new sensor which is low cost to manufacture and install and is reliable in operation, with sufficient accuracy, resolution, and repeatability for use in newly developed systems for pipeline monitoring and leakage detection. To provide an appropriate signal, the concept of a "failure" sensor is introduced, in which the output is not necessarily proportional to the input but is unmistakably affected when an unusual event occurs. The design of this failure sensor is based on water opacity, which can be indicative of an unusual event in a water distribution network. The laboratory work and field trials necessary to design and prove out this type of failure sensor are described here. It is concluded that a low-cost failure sensor of this type has good potential for use in a comprehensive water monitoring and management system based on Artificial Neural Networks (ANN).
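
    The "failure sensor" concept above can be reduced to a band check: the output need not track opacity linearly, it only has to flip decisively when opacity leaves its normal band. Baseline and band values below are invented:

```python
# Sketch of a "failure" sensor: output is a decisive flag rather than a
# value proportional to the input. Baseline and tolerance band are invented.

def failure_flag(opacity, baseline=0.10, band=0.05):
    """Return True when an opacity reading leaves the normal band."""
    return abs(opacity - baseline) > band

readings = [0.09, 0.11, 0.12, 0.31, 0.30]
print([failure_flag(r) for r in readings])  # prints [False, False, False, True, True]
```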

  20. Obesity and Natriuretic Peptides, BNP and NT-proBNP: Mechanisms and Diagnostic Implications for Heart Failure

    PubMed Central

    Madamanchi, Chaitanya; Alhosaini, Hassan; Sumida, Arihiro; Runge, Marschall S.

    2014-01-01

    Many advances have been made in the diagnosis and management of heart failure (HF) in recent years. Cardiac biomarkers are an essential tool for clinicians: point of care B-Type Natriuretic Peptide (BNP) and its N-terminal counterpart (NT-proBNP) levels help distinguish cardiac from non-cardiac causes of dyspnea and are also useful in the prognosis and monitoring of the efficacy of therapy. One of the major limitations of HF biomarkers is in obese patients where the relationship between BNP and NT-proBNP levels and myocardial stiffness is complex. Recent data suggest an inverse relationship between BNP and NT-proBNP levels and body mass index. Given the ever-increasing prevalence of obesity world-wide, it is important to understand the benefits and limitations of HF biomarkers in this population. This review will explore the biology, physiology, and pathophysiology of these peptides and the cardiac endocrine paradox in HF. We also examine the clinical evidence, mechanisms, and plausible biological explanations for the discord between BNP levels and HF in obese patients. PMID:25156856

  1. Cost-Utility Analysis of the EVOLVO Study on Remote Monitoring for Heart Failure Patients With Implantable Defibrillators: Randomized Controlled Trial

    PubMed Central

    Landolina, Maurizio; Marzegalli, Maurizio; Lunati, Maurizio; Perego, Giovanni B; Guenzati, Giuseppe; Curnis, Antonio; Valsecchi, Sergio; Borghetti, Francesca; Borghi, Gabriella; Masella, Cristina

    2013-01-01

    Background: Heart failure patients with implantable defibrillators place a significant burden on health care systems. Remote monitoring allows assessment of device function and heart failure parameters, and may represent a safe, effective, and cost-saving method compared to conventional in-office follow-up. Objective: We hypothesized that remote device monitoring represents a cost-effective approach. This paper summarizes the economic evaluation of the Evolution of Management Strategies of Heart Failure Patients With Implantable Defibrillators (EVOLVO) study, a multicenter clinical trial aimed at measuring the benefits of remote monitoring for heart failure patients with implantable defibrillators. Methods: Two hundred patients implanted with a wireless transmission–enabled implantable defibrillator were randomized to receive either remote monitoring or the conventional method of in-person evaluations. Patients were followed for 16 months with a protocol of scheduled in-office and remote follow-ups. The economic evaluation of the intervention was conducted from the perspectives of the health care system and the patient. A cost-utility analysis was performed to measure whether the intervention was cost-effective in terms of cost per quality-adjusted life year (QALY) gained. Results: Overall, remote monitoring did not show significant annual cost savings for the health care system (€1962.78 versus €2130.01; P=.80). There was a significant reduction of the annual cost for the patients in the remote arm in comparison to the standard arm (€291.36 versus €381.34; P=.01). Cost-utility analysis was performed for 180 patients for whom QALYs were available. The patients in the remote arm gained 0.065 QALYs more than those in the standard arm over 16 months, with a cost savings of €888.10 per patient. Results from the cost-utility analysis of the EVOLVO study show that remote monitoring is a cost-effective and dominant solution.
Conclusions: Remote management of heart failure patients with implantable defibrillators appears to be cost-effective compared to the conventional method of in-person evaluations. Trial Registration: ClinicalTrials.gov NCT00873899; http://clinicaltrials.gov/show/NCT00873899 (Archived by WebCite at http://www.webcitation.org/6H0BOA29f). PMID:23722666
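
    "Dominant" has a precise meaning in cost-utility analysis: the intervention both costs less and yields more QALYs, so no incremental cost-effectiveness ratio (ICER) is needed. A sketch using the per-patient figures summarized above:

```python
# Cost-utility classification: an intervention is "dominant" when the
# incremental cost is negative and the incremental QALYs are positive;
# otherwise an ICER (cost per QALY gained) is reported.

def dominance(d_cost, d_qaly):
    """Classify an intervention by incremental cost and incremental QALYs."""
    if d_cost < 0 and d_qaly > 0:
        return "dominant"          # cheaper and more effective
    if d_cost > 0 and d_qaly > 0:
        return f"ICER = {d_cost / d_qaly:.2f} per QALY gained"
    return "dominated or needs case-by-case judgement"

# EVOLVO summary: 0.065 QALYs gained and EUR 888.10 saved per patient
print(dominance(d_cost=-888.10, d_qaly=0.065))   # prints "dominant"
```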

  2. Tools for automated acoustic monitoring within the R package monitoR

    USGS Publications Warehouse

    Katz, Jonathan; Hafner, Sasha D.; Donovan, Therese

    2016-01-01

    The R package monitoR contains tools for managing an acoustic-monitoring program, including survey metadata, template creation and manipulation, automated detection, and results management. These tools are scalable for use with small projects as well as larger long-term projects and those with expansive spatial extents. Here, we describe the typical workflow when using the tools in monitoR, which follows a generic sequence of functions with the option of either binary point matching or spectrogram cross-correlation detectors.
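
    The spectrogram cross-correlation detector slides a template across a recording and flags offsets where the correlation score clears a threshold. A one-dimensional Python toy of that idea (monitoR itself works on 2-D spectrograms in R, and its scoring differs in detail):

```python
# Toy template-matching detector: slide a template along a signal, score
# each offset with a Pearson correlation, and report offsets above threshold.
# 1-D toy data stand in for spectrogram columns.

def correlate(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def detect(signal, template, threshold=0.9):
    """Offsets where the template matches the signal above threshold."""
    w = len(template)
    return [i for i in range(len(signal) - w + 1)
            if correlate(signal[i:i + w], template) >= threshold]

template = [0, 1, 0, -1]
signal = [0, 0, 0, 1, 0, -1, 0, 0, 0, 1, 0, -1]
print(detect(signal, template))   # prints [2, 8]
```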

  3. Checklists and Monitoring in the Cockpit: Why Crucial Defenses Sometimes Fail

    NASA Technical Reports Server (NTRS)

    Dismukes, R. Key; Berman, Ben

    2010-01-01

    Checklists and monitoring are two essential defenses against equipment failures and pilot errors. Problems with checklist use and pilots' failures to monitor adequately have a long history in aviation accidents. This study was conducted to explore why checklists and monitoring sometimes fail to catch errors and equipment malfunctions as intended. Flight crew procedures were observed from the cockpit jumpseat during normal airline operations in order to: 1) collect data on monitoring and checklist use in cockpit operations in typical flight conditions; 2) provide a plausible cognitive account of why deviations from formal checklist and monitoring procedures sometimes occur; 3) lay a foundation for identifying ways to reduce vulnerability to inadvertent checklist and monitoring errors; 4) compare checklist and monitoring execution in normal flights with performance issues uncovered in accident investigations; and 5) suggest ways to improve the effectiveness of checklists and monitoring. Cognitive explanations for deviations from prescribed procedures are provided, along with suggested countermeasures to reduce vulnerability to error.

  4. Advanced Signal Conditioners for Data-Acquisition Systems

    NASA Technical Reports Server (NTRS)

    Lucena, Angel; Perotti, Jose; Eckhoff, Anthony; Medelius, Pedro

    2004-01-01

    Signal conditioners embodying advanced concepts in analog and digital electronic circuitry and software have been developed for use in data-acquisition systems that are required to be compact and lightweight, to utilize electric energy efficiently, and to operate with high reliability, high accuracy, and high power efficiency, without intervention by human technicians. These signal conditioners were originally intended for use aboard spacecraft. There are also numerous potential terrestrial uses - especially in the fields of aeronautics and medicine, wherein it is necessary to monitor critical functions. Going beyond the usual analog and digital signal-processing functions of prior signal conditioners, the new signal conditioner performs the following additional functions: It continuously diagnoses its own electronic circuitry, so that it can detect failures and repair itself (as described below) within seconds. It continuously calibrates itself on the basis of a highly accurate and stable voltage reference, so that it can continue to generate accurate measurement data, even under extreme environmental conditions. It repairs itself in the sense that it contains a micro-controller that reroutes signals among redundant components as needed to maintain the ability to perform accurate and stable measurements. It detects deterioration of components, predicts future failures, and/or detects imminent failures by means of a real-time analysis in which, among other things, data on its present state are continuously compared with locally stored historical data. It minimizes unnecessary consumption of electric energy. The design architecture divides the signal conditioner into three main sections: an analog signal section, a digital module, and a power-management section. 
The design of the analog signal section does not follow the traditional approach of ensuring reliability through total redundancy of hardware: Instead, following an approach called spare parts tool box, the reliability of each component is assessed in terms of such considerations as risks of damage, mean times between failures, and the effects of certain failures on the performance of the signal conditioner as a whole system. Then, fewer or more spares are assigned for each affected component, pursuant to the results of this analysis, in order to obtain the required degree of reliability of the signal conditioner as a whole system. The digital module comprises one or more processors and field-programmable gate arrays, the number of each depending on the results of the aforementioned analysis. The digital module provides redundant control, monitoring, and processing of several analog signals. It is designed to minimize unnecessary consumption of electric energy, including, when possible, going into a low-power "sleep" mode that is implemented in firmware. The digital module communicates with external equipment via a personal-computer serial port. The digital module monitors the "health" of the rest of the signal conditioner by processing defined measurements and/or trends. It automatically makes adjustments to respond to channel failures, compensate for effects of temperature, and maintain calibration.
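
    The self-repair behavior described above (comparing current readings against history and rerouting among redundant components) can be sketched as follows; the channel names, reference voltage, and tolerance are invented:

```python
# Sketch of channel self-diagnosis and rerouting: each channel's latest
# reading of a stable voltage reference is checked against tolerance, and
# measurements are routed through the first healthy channel. Values invented.

class Channel:
    def __init__(self, name, readings):
        self.name, self.readings = name, readings

def healthy(channel, reference=2.500, tol=0.010):
    """A channel is healthy if its latest reference reading is in tolerance."""
    return abs(channel.readings[-1] - reference) <= tol

def select_channel(primary, spares):
    """Route measurements through the first healthy channel."""
    for ch in [primary] + spares:
        if healthy(ch):
            return ch.name
    return "fault: no healthy channel"

primary = Channel("A", [2.501, 2.502, 2.541])   # drifting out of tolerance
spare = Channel("B", [2.499, 2.500, 2.500])
print(select_channel(primary, [spare]))          # prints "B"
```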

  5. A single CD4 test with 250 cells/mm3 threshold predicts viral suppression in HIV-infected adults failing first-line therapy by clinical criteria.

    PubMed

    Gilks, Charles F; Walker, A Sarah; Munderi, Paula; Kityo, Cissy; Reid, Andrew; Katabira, Elly; Goodall, Ruth L; Grosskurth, Heiner; Mugyenyi, Peter; Hakim, James; Gibb, Diana M

    2013-01-01

    In low-income countries, viral load (VL) monitoring of antiretroviral therapy (ART) is rarely available in the public sector for HIV-infected adults or children. Using clinical failure alone to identify first-line ART failure and trigger regimen switch may result in unnecessary use of costly second-line therapy. Our objective was to identify CD4 threshold values to confirm clinically-determined ART failure when VL is unavailable. 3316 HIV-infected Ugandan/Zimbabwean adults were randomised to first-line ART with Clinically-Driven (CDM, CD4s measured but blinded) or routine Laboratory and Clinical Monitoring (LCM, 12-weekly CD4s) in the DART trial. CD4 at switch and ART failure criteria (new/recurrent WHO 4, single/multiple WHO 3 event; LCM: CD4<100 cells/mm(3)) were reviewed in 361 LCM, 314 CDM participants who switched over median 5 years follow-up. Retrospective VLs were available in 368 (55%) participants. Overall, 265/361 (73%) LCM participants failed with CD4<100 cells/mm(3); only 7 (2%) switched with CD4≥250 cells/mm(3), four switches triggered by WHO events. Without CD4 monitoring, 207/314 (66%) CDM participants failed with WHO 4 events, and 77(25%)/30(10%) with single/multiple WHO 3 events. Failure/switching with single WHO 3 events was more likely with CD4≥250 cells/mm(3) (28/77; 36%) (p = 0.0002). CD4 monitoring reduced switching with viral suppression: 23/187 (12%) LCM versus 49/181 (27%) CDM had VL<400 copies/ml at failure/switch (p<0.0001). Amongst CDM participants with CD4<250 cells/mm(3) only 11/133 (8%) had VL<400 copies/ml, compared with 38/48 (79%) with CD4≥250 cells/mm(3) (p<0.0001). Multiple, but not single, WHO 3 events predicted first-line ART failure. A CD4 threshold 'tiebreaker' of ≥250 cells/mm(3) for clinically-monitored patients failing first-line could identify ∼80% with VL<400 copies/ml, who are unlikely to benefit from second-line. 
Targeting CD4s to single WHO stage 3 'clinical failures' would particularly avoid premature, costly switch to second-line ART.
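The switching rule the abstract arrives at can be condensed into a small decision function. The sketch below is an illustrative encoding of the reported findings, not the DART trial's actual algorithm; the function name and arguments are hypothetical.

```python
def should_switch_to_second_line(who4_event: bool,
                                 multiple_who3_events: bool,
                                 single_who3_event: bool,
                                 cd4_cells_per_mm3: float) -> bool:
    """Sketch of the CD4 'tiebreaker' rule described in the abstract.

    WHO stage 4 events and multiple WHO stage 3 events predicted true
    first-line failure; a single WHO stage 3 event with CD4 >= 250
    cells/mm3 was usually virologically suppressed, so switching is
    withheld in that case.
    """
    if who4_event or multiple_who3_events:
        return True
    if single_who3_event:
        # Tiebreaker: only switch when CD4 has fallen below 250 cells/mm3
        return cd4_cells_per_mm3 < 250
    return False
```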

  6. Grid site availability evaluation and monitoring at CMS

    DOE PAGES

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; ...

    2017-10-01

The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from a hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup, scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection of and reaction to failures and a more dynamic handling of computing resources. Furthermore, enhancements to better distinguish site issues from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  7. Grid site availability evaluation and monitoring at CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe

The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from a hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup, scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection of and reaction to failures and a more dynamic handling of computing resources. Furthermore, enhancements to better distinguish site issues from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  8. Green light for liver function monitoring using indocyanine green? An overview of current clinical applications.

    PubMed

    Vos, J J; Wietasch, J K G; Absalom, A R; Hendriks, H G D; Scheeren, T W L

    2014-12-01

    The dye indocyanine green is familiar to anaesthetists, and has been studied for more than half a century for cardiovascular and hepatic function monitoring. It is still, however, not yet in routine clinical use in anaesthesia and critical care, at least in Europe. This review is intended to provide a critical analysis of the available evidence concerning the indications for clinical measurement of indocyanine green elimination as a diagnostic and prognostic tool in two areas: its role in peri-operative liver function monitoring during major hepatic resection and liver transplantation; and its role in critically ill patients on the intensive care unit, where it is used for prediction of mortality, and for assessment of the severity of acute liver failure or that of intra-abdominal hypertension. Although numerous studies have demonstrated that indocyanine green elimination measurements in these patient populations can provide diagnostic or prognostic information to the clinician, 'hard' evidence - i.e. high-quality prospective randomised controlled trials - is lacking, and therefore it is not yet time to give a green light for use of indocyanine green in routine clinical practice. © 2014 The Association of Anaesthetists of Great Britain and Ireland.

  9. Grid site availability evaluation and monitoring at CMS

    NASA Astrophysics Data System (ADS)

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; Lammel, Stephan; Sciabà, Andrea

    2017-10-01

The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from a hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup, scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection of and reaction to failures and a more dynamic handling of computing resources. Enhancements to better distinguish site issues from central service issues and to make evaluations more transparent and informative to site support staff are planned.
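At its core, the availability evaluation described in these records amounts to accounting for scheduled and unscheduled outage intervals within an evaluation window. A minimal sketch, assuming outages are reported as (start, end) hour pairs; this is not the actual CMS/WLCG implementation.

```python
def site_availability(window_hours: float,
                      outages: list[tuple[float, float]]) -> float:
    """Fraction of an evaluation window during which a site was usable.

    `outages` holds (start_h, end_h) intervals relative to the window
    start; intervals are clipped to the window and overlapping outages
    are merged before summing downtime.
    """
    clipped = sorted((max(0.0, s), min(window_hours, e)) for s, e in outages)
    downtime, cur_start, cur_end = 0.0, None, None
    for s, e in clipped:
        if e <= s:
            continue  # empty after clipping
        if cur_end is None or s > cur_end:
            # Disjoint from the current merged outage: flush and restart.
            downtime += 0.0 if cur_end is None else cur_end - cur_start
            cur_start, cur_end = s, e
        else:
            # Overlapping outage: extend the current merged interval.
            cur_end = max(cur_end, e)
    if cur_end is not None:
        downtime += cur_end - cur_start
    return 1.0 - downtime / window_hours
```

A site with overlapping outages from hours 0-2 and 1-3 plus a disjoint outage from 10-12 in a 24-hour window has 5 hours of downtime, i.e. an availability of 19/24.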

  10. Introduction of the TEAM-HF Costing Tool: A User-Friendly Spreadsheet Program to Estimate Costs of Providing Patient-Centered Interventions

    PubMed Central

    Reed, Shelby D.; Li, Yanhong; Kamble, Shital; Polsky, Daniel; Graham, Felicia L.; Bowers, Margaret T.; Samsa, Gregory P.; Paul, Sara; Schulman, Kevin A.; Whellan, David J.; Riegel, Barbara J.

    2011-01-01

    Background Patient-centered health care interventions, such as heart failure disease management programs, are under increasing pressure to demonstrate good value. Variability in costing methods and assumptions in economic evaluations of such interventions limit the comparability of cost estimates across studies. Valid cost estimation is critical to conducting economic evaluations and for program budgeting and reimbursement negotiations. Methods and Results Using sound economic principles, we developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure (TEAM-HF) Costing Tool, a spreadsheet program that can be used by researchers or health care managers to systematically generate cost estimates for economic evaluations and to inform budgetary decisions. The tool guides users on data collection and cost assignment for associated personnel, facilities, equipment, supplies, patient incentives, miscellaneous items, and start-up activities. The tool generates estimates of total program costs, cost per patient, and cost per week and presents results using both standardized and customized unit costs for side-by-side comparisons. Results from pilot testing indicated that the tool was well-formatted, easy to use, and followed a logical order. Cost estimates of a 12-week exercise training program in patients with heart failure were generated with the costing tool and were found to be consistent with estimates published in a recent study. Conclusions The TEAM-HF Costing Tool could prove to be a valuable resource for researchers and health care managers to generate comprehensive cost estimates of patient-centered interventions in heart failure or other conditions for conducting high-quality economic evaluations and making well-informed health care management decisions. PMID:22147884
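The cost roll-ups the tool reports (total program cost, cost per patient, cost per week) can be sketched as a simple aggregation over cost categories. The function and category names below are illustrative, not the TEAM-HF spreadsheet's actual formulas.

```python
def program_cost_summary(category_costs: dict[str, float],
                         n_patients: int, n_weeks: int) -> dict[str, float]:
    """Roll up per-category program costs into the three summary figures
    the abstract describes: total, per patient, and per week."""
    total = sum(category_costs.values())
    return {
        "total": total,
        "per_patient": total / n_patients,
        "per_week": total / n_weeks,
    }
```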

  11. A tool for assessment of heart failure prescribing quality: A systematic review and meta-analysis.

    PubMed

    El Hadidi, Seif; Darweesh, Ebtissam; Byrne, Stephen; Bermingham, Margaret

    2018-04-16

    Heart failure (HF) guidelines aim to standardise patient care. Internationally, prescribing practice in HF may deviate from guidelines and so a standardised tool is required to assess prescribing quality. A systematic review and meta-analysis were performed to identify a quantitative tool for measuring adherence to HF guidelines and its clinical implications. Eleven electronic databases were searched to include studies reporting a comprehensive tool for measuring adherence to prescribing guidelines in HF patients aged ≥18 years. Qualitative studies or studies measuring prescription rates alone were excluded. Study quality was assessed using the Good ReseArch for Comparative Effectiveness Checklist. In total, 2455 studies were identified. Sixteen eligible full-text articles were included (n = 14 354 patients, mean age 69 ± 8 y). The Guideline Adherence Index (GAI), and its modified versions, was the most frequently cited tool (n = 13). Other tools identified were the Individualised Reconciled Evidence Recommendations, the Composite Heart Failure Performance, and the Heart Failure Scale. The meta-analysis included the GAI studies of good to high quality. The average GAI-3 was 62%. Compared to low GAI, high GAI patients had lower mortality rate (7.6% vs 33.9%) and lower rehospitalisation rates (23.5% vs 24.5%); both P ≤ .05. High GAI was associated with reduced risk of mortality (hazard ratio = 0.29, 95% confidence interval, 0.06-0.51) and rehospitalisation (hazard ratio = 0.64, 95% confidence interval, 0.41-1.00). No tool was used to improve prescribing quality. The GAI is the most frequently used tool to assess guideline adherence in HF. High GAI is associated with improved HF outcomes. Copyright © 2018 John Wiley & Sons, Ltd.
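A GAI-style adherence score is, in essence, the fraction of guideline-indicated drug classes actually prescribed to a patient. The sketch below is a simplified illustration; published GAI definitions vary in the classes counted and the handling of contraindications.

```python
def guideline_adherence_index(indicated: set[str],
                              prescribed: set[str]) -> float:
    """Fraction of guideline-indicated drug classes (e.g. ACEi/ARB,
    beta-blocker, MRA for GAI-3) that were actually prescribed.
    Simplified sketch; real GAI variants also adjust for
    contraindications."""
    if not indicated:
        return 1.0  # nothing indicated: trivially adherent
    return len(indicated & prescribed) / len(indicated)
```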

  12. High satisfaction and low decisional conflict with advance care planning among chronically ill patients with advanced chronic obstructive pulmonary disease or heart failure using an online decision aid: A pilot study.

    PubMed

    Van Scoy, Lauren J; Green, Michael J; Dimmock, Anne Ef; Bascom, Rebecca; Boehmer, John P; Hensel, Jessica K; Hozella, Joshua B; Lehman, Erik B; Schubart, Jane R; Farace, Elana; Stewart, Renee R; Levi, Benjamin H

    2016-09-01

    Many patients with chronic illnesses report a desire for increased involvement in medical decision-making. This pilot study aimed to explore how patients with exacerbation-prone disease trajectories such as advanced heart failure or chronic obstructive pulmonary disease experience advance care planning using an online decision aid and to compare whether patients with different types of exacerbation-prone illnesses had varied experiences using the tool. Pre-intervention questionnaires measured advance care planning knowledge. Post-intervention questionnaires measured: (1) advance care planning knowledge; (2) satisfaction with tool; (3) decisional conflict; and (4) accuracy of the resultant advance directive. Comparisons were made between patients with heart failure and chronic obstructive pulmonary disease. Over 90% of the patients with heart failure (n = 24) or chronic obstructive pulmonary disease (n = 25) reported being "satisfied" or "highly satisfied" with the tool across all satisfaction domains; over 90% of participants rated the resultant advance directive as "very accurate." Participants reported low decisional conflict. Advance care planning knowledge scores rose by 18% (p < 0.001) post-intervention. There were no significant differences between participants with heart failure and chronic obstructive pulmonary disease. Patients with advanced heart failure and chronic obstructive pulmonary disease were highly satisfied after using an online advance care planning decision aid and had increased knowledge of advance care planning. This tool can be a useful resource for time-constrained clinicians whose patients wish to engage in advance care planning. © The Author(s) 2016.

  13. Portable Sleep Monitoring for Diagnosing Sleep Apnea in Hospitalized Patients With Heart Failure.

    PubMed

    Aurora, R Nisha; Patil, Susheel P; Punjabi, Naresh M

    2018-04-21

    Sleep apnea is an underdiagnosed condition in patients with heart failure. Efficient identification of sleep apnea is needed, as treatment may improve heart failure-related outcomes. Currently, use of portable sleep monitoring in hospitalized patients and those at risk for central sleep apnea is discouraged. This study examined whether portable sleep monitoring with respiratory polygraphy can accurately diagnose sleep apnea in patients hospitalized with decompensated heart failure. Hospitalized patients with decompensated heart failure underwent concurrent respiratory polygraphy and polysomnography. Both recordings were scored for obstructive and central disordered breathing events in a blinded fashion, using standard criteria, and the apnea-hypopnea index (AHI) was determined. Pearson's correlation coefficients and Bland-Altman plots were used to examine the concordance among the overall, obstructive, and central AHI values derived by respiratory polygraphy and polysomnography. The sample consisted of 53 patients (47% women) with a mean age of 59.0 years. The correlation coefficient for the overall AHI from the two diagnostic methods was 0.94 (95% CI, 0.89-0.96). The average difference in AHI between the two methods was 3.6 events/h. Analyses of the central and obstructive AHI values showed strong concordance between the two methods, with correlation coefficients of 0.98 (95% CI, 0.96-0.99) and 0.91 (95% CI, 0.84-0.95), respectively. Complete agreement in the classification of sleep apnea severity between the two methods was seen in 89% of the sample. Portable sleep monitoring can accurately diagnose sleep apnea in hospitalized patients with heart failure and may promote early initiation of treatment. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
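The concordance statistics reported here, Pearson's correlation coefficient and the average AHI difference (the Bland-Altman bias), can be computed from the paired AHI series as follows. This is a from-first-principles sketch, not the study's analysis code.

```python
def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient between two paired series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def bland_altman_bias(x: list[float], y: list[float]) -> float:
    """Mean difference between paired measurements -- the 'average
    difference in AHI' figure reported in the abstract."""
    return sum(a - b for a, b in zip(x, y)) / len(x)
```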

  14. On providing the fault-tolerant operation of information systems based on open content management systems

    NASA Astrophysics Data System (ADS)

    Kratov, Sergey

    2018-01-01

Modern information systems designed to serve a wide range of users, regardless of their subject area, are increasingly based on Web technologies and are available to users via the Internet. The article discusses the issues of providing fault-tolerant operation of such information systems based on free and open-source content management systems. The toolkit available to administrators of such systems is shown, and scenarios for using these tools are described. Options for organizing backups and restoring system operability after failures are suggested. Application of the proposed methods and approaches enables continuous monitoring of system state, timely response to emerging problems, and their prompt resolution.

  15. Monitoring the ongoing deformation and seasonal behaviour affecting Mosul Dam through space-borne SAR data

    NASA Astrophysics Data System (ADS)

    Tessari, G.; Riccardi, P.; Pasquali, P.

    2017-12-01

Monitoring of dam structural health is an important practice to control the structure itself and the water reservoir, to guarantee efficient operation and the safety of surrounding areas. Ensuring the longevity of the structure requires the timely detection of any behaviour that could deteriorate the dam and potentially result in its shutdown or failure. The detection and monitoring of surface displacements is increasingly performed through the analysis of satellite Synthetic Aperture Radar (SAR) data, thanks to the non-invasiveness of their acquisition, the possibility to cover large areas in a short time, and the new space missions equipped with high-spatial-resolution sensors. The availability of SAR satellite acquisitions from the early 1990s makes it possible to reconstruct the historical evolution of dam behaviour, defining its key parameters, possibly from construction to the present. Furthermore, progress on SAR Interferometry (InSAR) techniques through the development of Differential InSAR (DInSAR) and advanced stacking techniques (A-DInSAR) allows accurate velocity maps and displacement time-series to be obtained. The importance of these techniques emerges when environmental or logistic conditions do not allow dams to be monitored with traditional geodetic techniques. In such cases, A-DInSAR constitutes a reliable diagnostic tool of dam structural health to avoid any extraordinary failure that may lead to loss of lives. In this context, an emblematic test case is analysed: the Mosul Dam, the largest Iraqi dam, where monitoring and maintenance are impeded by political controversy, posing risks to the population. It is considered one of the most dangerous dams in the world because of the erosion of the gypsum rock at its base and the difficulty of intervention due to security problems.
The dam is a 113 m tall, 3.4 km long earth-fill embankment with a clay core, completed in 1984. The deformation fields obtained from SAR data are evaluated to assess the temporal evolution of the strains affecting the structure. The results obtained represent the preliminary stage of a multidisciplinary project aimed at assessing possible damage to a dam through remote sensing and civil engineering surveys.

  16. Actualities and Development of Heavy-Duty CNC Machine Tool Thermal Error Monitoring Technology

    NASA Astrophysics Data System (ADS)

    Zhou, Zu-De; Gui, Lin; Tan, Yue-Gang; Liu, Ming-Yao; Liu, Yi; Li, Rui-Ya

    2017-09-01

    Thermal error monitoring technology is the key technological support to solve the thermal error problem of heavy-duty CNC (computer numerical control) machine tools. Currently, there are many review literatures introducing the thermal error research of CNC machine tools, but those mainly focus on the thermal issues in small and medium-sized CNC machine tools and seldom introduce thermal error monitoring technologies. This paper gives an overview of the research on the thermal error of CNC machine tools and emphasizes the study of thermal error of the heavy-duty CNC machine tool in three areas. These areas are the causes of thermal error of heavy-duty CNC machine tool and the issues with the temperature monitoring technology and thermal deformation monitoring technology. A new optical measurement technology called the "fiber Bragg grating (FBG) distributed sensing technology" for heavy-duty CNC machine tools is introduced in detail. This technology forms an intelligent sensing and monitoring system for heavy-duty CNC machine tools. This paper fills in the blank of this kind of review articles to guide the development of this industry field and opens up new areas of research on the heavy-duty CNC machine tool thermal error.

  17. Making real-time reactive systems reliable

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith; Wood, Mark

    1990-01-01

    A reactive system is characterized by a control program that interacts with an environment (or controlled program). The control program monitors the environment and reacts to significant events by sending commands to the environment. This structure is quite general. Not only are most embedded real time systems reactive systems, but so are monitoring and debugging systems and distributed application management systems. Since reactive systems are usually long running and may control physical equipment, fault tolerance is vital. The research tries to understand the principal issues of fault tolerance in real time reactive systems and to build tools that allow a programmer to design reliable, real time reactive systems. In order to make real time reactive systems reliable, several issues must be addressed: (1) How can a control program be built to tolerate failures of sensors and actuators. To achieve this, a methodology was developed for transforming a control program that references physical value into one that tolerates sensors that can fail and can return inaccurate values; (2) How can the real time reactive system be built to tolerate failures of the control program. Towards this goal, whether the techniques presented can be extended to real time reactive systems is investigated; and (3) How can the environment be specified in a way that is useful for writing a control program. Towards this goal, whether a system with real time constraints can be expressed as an equivalent system without such constraints is also investigated.
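For the first question, tolerating failed or inaccurate sensors, one classical interval-based approach (associated with the first author's work on fault-tolerant sensor averaging) treats each sensor reading as an interval and returns the smallest interval consistent with at most f of the sensors being faulty, i.e. the span of all points covered by at least n - f intervals. A minimal sketch under that assumption; this is not the paper's actual methodology.

```python
def fuse_sensor_intervals(intervals: list[tuple[float, float]],
                          f: int) -> tuple[float, float]:
    """Smallest interval guaranteed to contain the true value when at
    most `f` of the n sensor intervals are faulty: the span of points
    covered by at least n - f intervals, found with an endpoint sweep."""
    n = len(intervals)
    need = n - f
    # Sweep events: +1 at each interval open, -1 at each close.
    # Opens sort before closes at equal positions so touching
    # intervals count as overlapping.
    events = sorted([(lo, +1) for lo, hi in intervals] +
                    [(hi, -1) for lo, hi in intervals],
                    key=lambda t: (t[0], -t[1]))
    count, lo, hi = 0, None, None
    for x, delta in events:
        prev = count
        count += delta
        if lo is None and prev < need <= count:
            lo = x  # first point where enough intervals overlap
        if prev >= need > count:
            hi = x  # last assignment wins: final drop below threshold
    if lo is None:
        raise ValueError("no region covered by enough sensors")
    return (lo, hi)
```

With readings (8, 12), (11, 13), and (14, 15) and one tolerated fault, the fused estimate is the region where two intervals agree, (11, 12).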

  18. System Modeling and Diagnostics for Liquefying-Fuel Hybrid Rockets

    NASA Technical Reports Server (NTRS)

    Poll, Scott; Iverson, David; Ou, Jeremy; Sanderfer, Dwight; Patterson-Hine, Ann

    2003-01-01

    A Hybrid Combustion Facility (HCF) was recently built at NASA Ames Research Center to study the combustion properties of a new fuel formulation that burns approximately three times faster than conventional hybrid fuels. Researchers at Ames working in the area of Integrated Vehicle Health Management recognized a good opportunity to apply IVHM techniques to a candidate technology for next generation launch systems. Five tools were selected to examine various IVHM techniques for the HCF. Three of the tools, TEAMS (Testability Engineering and Maintenance System), L2 (Livingstone2), and RODON, are model-based reasoning (or diagnostic) systems. Two other tools in this study, ICS (Interval Constraint Simulator) and IMS (Inductive Monitoring System) do not attempt to isolate the cause of the failure but may be used for fault detection. Models of varying scope and completeness were created, both qualitative and quantitative. In each of the models, the structure and behavior of the physical system are captured. In the qualitative models, the temporal aspects of the system behavior and the abstraction of sensor data are handled outside of the model and require the development of additional code. In the quantitative model, less extensive processing code is also necessary. Examples of fault diagnoses are given.

  19. 24 CFR 1000.538 - What remedies are available for substantial noncompliance?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT NATIVE AMERICAN HOUSING ACTIVITIES Recipient Monitoring... to programs, projects, or activities not affected by the failure to comply; or (4) In the case of... expenditure of funds for activities affected by such failure to comply. (c) If HUD determines that the failure...

  20. 24 CFR 1000.538 - What remedies are available for substantial noncompliance?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ..., DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT NATIVE AMERICAN HOUSING ACTIVITIES Recipient Monitoring... to programs, projects, or activities not affected by the failure to comply; or (4) In the case of... expenditure of funds for activities affected by such failure to comply. (c) If HUD determines that the failure...

  1. 24 CFR 1000.538 - What remedies are available for substantial noncompliance?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT NATIVE AMERICAN HOUSING ACTIVITIES Recipient Monitoring... to programs, projects, or activities not affected by the failure to comply; or (4) In the case of... expenditure of funds for activities affected by such failure to comply. (c) If HUD determines that the failure...

  2. Automatic patient respiration failure detection system with wireless transmission

    NASA Technical Reports Server (NTRS)

    Dimeff, J.; Pope, J. M.

    1968-01-01

    Automatic respiration failure detection system detects respiration failure in patients with a surgically implanted tracheostomy tube, and actuates an audible and/or visual alarm. The system incorporates a miniature radio transmitter so that the patient is unencumbered by wires yet can be monitored from a remote location.

  3. Application of Quality Management Tools for Evaluating the Failure Frequency of Cutter-Loader and Plough Mining Systems

    NASA Astrophysics Data System (ADS)

    Biały, Witold

    2017-06-01

Failure frequency in the mining process, with a focus on the mining machines, is presented and illustrated with the example of two coal mines. Two mining systems have been subjected to analysis: a cutter-loader and a plough system. In order to reduce the costs generated by failures, maintenance teams should regularly make sure that the machines are used and operated in a rational and effective way. Such activities will allow downtimes to be reduced and, in consequence, will increase the effectiveness of a mining plant. The evaluation of mining machines' failure frequency contained in this study is based on one of the traditional quality management tools, the Pareto chart.
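A Pareto analysis of the kind applied here ranks failure causes by frequency and tracks their cumulative share, so the "vital few" causes responsible for most downtime can be targeted first. The sketch below is illustrative; the cause names in the test data are hypothetical, not the study's data.

```python
def pareto_ranking(failure_counts: dict[str, int]) -> list[tuple[str, int, float]]:
    """Rank failure causes by frequency, attaching the cumulative share
    of all failures, as plotted on a Pareto chart:
    (cause, count, cumulative_fraction)."""
    ranked = sorted(failure_counts.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(failure_counts.values())
    out, cum = [], 0
    for cause, count in ranked:
        cum += count
        out.append((cause, count, cum / total))
    return out
```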

  4. Perceptions of seniors with heart failure regarding autonomous zero-effort monitoring of physiological parameters in the smart-home environment.

    PubMed

    Grace, Sherry L; Taherzadeh, Golnoush; Jae Chang, Isaac Sung; Boger, Jennifer; Arcelus, Amaya; Mak, Susanna; Chessex, Caroline; Mihailidis, Alex

    Technological advances are leading to the ability to autonomously monitor patient's health status in their own homes, to enable aging-in-place. To understand the perceptions of seniors with heart failure (HF) regarding smart-home systems to monitor their physiological parameters. In this qualitative study, HF outpatients were invited to a smart-home lab, where they completed a sequence of activities, during which the capacity of 5 autonomous sensing modalities was compared to gold standard measures. Afterwards, a semi-structured interview was undertaken. These were transcribed and analyzed using an interpretive-descriptive approach. Five themes emerged from the 26 interviews: (1) perceptions of technology, (2) perceived benefits of autonomous health monitoring, (3) disadvantages of autonomous monitoring, (4) lack of perceived need for continuous health monitoring, and (5) preferences for autonomous monitoring. Patient perception towards autonomous monitoring devices was positive, lending credence to zero-effort technology as a viable and promising approach. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Modified Sainsbury tool: an initial risk assessment tool for primary care mental health and learning disability services.

    PubMed

    Stein, W

    2005-10-01

    Risk assessments by health and social care professionals must encompass risk of suicide, of harm to others, and of neglect. The UK's National Confidential Inquiry into Homicide and Suicide paints a picture of failure to predict suicides and homicides, failure to identify opportunities for prevention and a failure to manage these opportunities. Assessing risk at 'first contact' with the mental health service assumes a special place in this regard. The initial opportunity to be alerted to, and thus to influence, risk, usually falls to the general psychiatric service (as opposed to forensic specialists) or to a joint health and local authority community mental health team. The Mental Health and Learning Disabilities Directorate of Renfrewshire & Inverclyde Primary Care NHS Trust, Scotland, determined to standardize their approach to risk assessment and selected a modified version of the Sainsbury Risk Assessment Tool. A year-long pilot revealed general support for its service-wide introduction but also some misgivings to address, including: (i) rejection of the tool by some medical staff; (ii) concerns about limited training; and (iii) a perceived failure on the part of the management to properly resource its use. The tool has the potential to fit well with the computer-networked needs assessment system used in joint-working with partner local authorities to allocate care resources.

  6. Attitudes of heart failure patients and health care providers towards mobile phone-based remote monitoring.

    PubMed

    Seto, Emily; Leonard, Kevin J; Masino, Caterina; Cafazzo, Joseph A; Barnsley, Jan; Ross, Heather J

    2010-11-29

    Mobile phone-based remote patient monitoring systems have been proposed for heart failure management because they are relatively inexpensive and enable patients to be monitored anywhere. However, little is known about whether patients and their health care providers are willing and able to use this technology. The objective of our study was to assess the attitudes of heart failure patients and their health care providers from a heart function clinic in a large urban teaching hospital toward the use of mobile phone-based remote monitoring. A questionnaire regarding attitudes toward home monitoring and technology was administered to 100 heart failure patients (94/100 returned a completed questionnaire). Semi-structured interviews were also conducted with 20 heart failure patients and 16 clinicians to determine the perceived benefits and barriers to using mobile phone-based remote monitoring, as well as their willingness and ability to use the technology. The survey results indicated that the patients were very comfortable using mobile phones (mean rating 4.5, SD 0.6, on a five-point Likert scale), even more so than with using computers (mean 4.1, SD 1.1). The difference in comfort level between mobile phones and computers was statistically significant (P< .001). Patients were also confident in using mobile phones to view health information (mean 4.4, SD 0.9). Patients and clinicians were willing to use the system as long as several conditions were met, including providing a system that was easy to use with clear tangible benefits, maintaining good patient-provider communication, and not increasing clinical workload. Clinicians cited several barriers to implementation of such a system, including lack of remuneration for telephone interactions with patients and medicolegal implications. Patients and clinicians want to use mobile phone-based remote monitoring and believe that they would be able to use the technology. 
However, they have several reservations, such as potential increased clinical workload, medicolegal issues, and difficulty of use for some patients due to lack of visual acuity or manual dexterity.

  7. Device monitoring strategies in acute heart failure syndromes.

    PubMed

    Samara, Michael A; Tang, W H Wilson

    2011-09-01

    Acute heart failure syndromes (AHFS) represent the most common discharge diagnoses in adults over age 65 and translate into dramatically increased heart failure-associated morbidity and mortality. Conventional approaches to the early detection of pulmonary and systemic congestion have been shown to be of limited sensitivity. Despite their proven efficacy, disease management and structured telephone support programs have failed to achieve widespread use in part due to their resource intensiveness and reliance upon motivated patients. While once thought to hold great promise, results from recent prospective studies on telemonitoring strategies have proven disappointing. Implantable devices with their capacity to monitor electrophysiologic and hemodynamic parameters over long periods of time and with minimal reliance on patient participation may provide solutions to some of these problems. Conventional electrophysiologic parameters and intrathoracic impedance data are currently available in the growing population of heart failure patients with equipped devices. A variety of implantable hemodynamic monitors are currently under investigation. How best to integrate these devices into a systematic approach to the management of patients before, during, and after AHFS is yet to be established.

  8. Learning from Failures: Archiving and Designing with Failure and Risk

    NASA Technical Reports Server (NTRS)

    VanWie, Michael; Bohm, Matt; Barrientos, Francesca; Turner, Irem; Stone, Robert

    2005-01-01

    Identifying and mitigating risks during conceptual design remains an ongoing challenge. This work presents the results of collaborative efforts between The University of Missouri-Rolla and NASA Ames Research Center to examine how an early-stage mission design team at NASA addresses risk and how a computational support tool can assist these designers in their tasks. Results of our observations are given, along with a brief example of our implementation of a repository-based computational tool that allows users to browse and search archived failure and risk data related to either physical artifacts or functionality.

  9. FFI: A software tool for ecological monitoring

    Treesearch

    Duncan C. Lutes; Nathan C. Benson; MaryBeth Keifer; John F. Caratti; S. Austin Streetman

    2009-01-01

    A new monitoring tool called FFI (FEAT/FIREMON Integrated) has been developed to assist managers with collection, storage and analysis of ecological information. The tool was developed through the complementary integration of two fire effects monitoring systems commonly used in the United States: FIREMON and the Fire Ecology Assessment Tool. FFI provides software...

  10. Acoustic emissions (AE) monitoring of large-scale composite bridge components

    NASA Astrophysics Data System (ADS)

    Velazquez, E.; Klein, D. J.; Robinson, M. J.; Kosmatka, J. B.

    2008-03-01

    Acoustic emission (AE) monitoring has been used successfully with composite structures both to locate damage and to measure damage accumulation. The current experimental study uses AE to monitor large-scale composite modular bridge components. The components consist of a carbon/epoxy beam structure as well as a composite-to-metallic bonded/bolted joint. The bonded joints consist of double-lap aluminum splice plates bonded and bolted to carbon/epoxy laminates representing the tension rail of a beam. The AE system is used to monitor the bridge component during failure loading to assess the failure progression, using time of arrival to give insight into the origins of the failures. A feature of the AE data, cumulative acoustic emission (CAE) counts, is used to estimate the severity and rate of damage accumulation. For the bolted/bonded joints, the AE data are used to interpret the source and location of the damage that induced failure in the joint. These results are used to investigate the use of bolts in conjunction with the bonded joint. A description of each of the components (beam and joint) is given with AE results, together with a summary of lessons learned for AE testing of large composite structures and insight into failure progression and location.

  11. TU-AB-BRD-02: Failure Modes and Effects Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huq, M.

    2015-06-15

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are often caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100 that has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how these tools can be used in a given radiotherapy clinic to develop a risk-based quality management program.
    Learning Objectives: (1) Learn how to design a process map for a radiotherapy process; (2) learn how to perform failure modes and effects analysis for a given process; (3) learn what fault trees are; (4) learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Disclosures: Dunscombe: Director, TreatSafely, LLC, and the Center for the Assessment of Radiological Sciences; consultant to the IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.

  12. Multi-category micro-milling tool wear monitoring with continuous hidden Markov models

    NASA Astrophysics Data System (ADS)

    Zhu, Kunpeng; Wong, Yoke San; Hong, Geok Soon

    2009-02-01

    In-process monitoring of tool conditions is important in micro-machining due to the high precision requirement and the high tool wear rate. Tool condition monitoring in micro-machining poses new challenges compared to conventional machining. In this paper, a multi-category classification approach is proposed for tool flank wear state identification in micro-milling. Continuous hidden Markov models (HMMs) are adapted for modeling the tool wear process in micro-milling and for estimating the tool wear state from cutting force features. For noise robustness, the HMM outputs are passed through a median filter to suppress spurious state transitions caused by the high noise level. A detailed study on the selection of HMM structures for tool condition monitoring (TCM) is presented. Case studies on tool state estimation in the micro-milling of pure copper and steel demonstrate the effectiveness and potential of these methods.
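
    The state-estimation step described above can be sketched with a toy discrete HMM (the paper uses continuous-observation HMMs on cutting-force features; the three wear states, the quantized force levels, and all probabilities below are illustrative assumptions, not the authors' model):

```python
# Toy HMM filter for tool-wear state estimation (hypothetical parameters).
# States: 0 = initial wear, 1 = moderate wear, 2 = severe wear.
# Observations: quantized cutting-force feature, 0 = low, 1 = mid, 2 = high.
trans = [[0.90, 0.10, 0.00],   # wear can only progress, never reverse
         [0.00, 0.90, 0.10],
         [0.00, 0.00, 1.00]]
emit = [[0.80, 0.15, 0.05],    # P(observed force level | wear state)
        [0.15, 0.70, 0.15],
        [0.05, 0.15, 0.80]]
prior = [1.0, 0.0, 0.0]        # the tool starts unworn

def forward_filter(obs):
    """Most likely wear state at each step, from the filtered posterior."""
    belief, path = prior[:], []
    for o in obs:
        # predict: propagate the belief through the transition matrix
        pred = [sum(belief[i] * trans[i][j] for i in range(3)) for j in range(3)]
        # update: weight by the emission likelihood, then normalize
        belief = [pred[j] * emit[j][o] for j in range(3)]
        z = sum(belief)
        belief = [b / z for b in belief]
        path.append(max(range(3), key=lambda j: belief[j]))
    return path

def median_smooth(seq, w=3):
    """Median filter suppressing spurious single-step state jumps."""
    half, out = w // 2, []
    for i in range(len(seq)):
        win = sorted(seq[max(0, i - half): i + half + 1])
        out.append(win[len(win) // 2])
    return out

obs = [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]
print(median_smooth(forward_filter(obs)))
```

    With clean observations the decoded sequence steps monotonically through the wear stages; the median-filter stage matters mainly when the force features are noisy.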

  13. Retrogressive slope failure in glaciolacustrine clays: Sauga landslide, western Estonia

    NASA Astrophysics Data System (ADS)

    Kohv, Marko; Talviste, Peeter; Hang, Tiit; Kalm, Volli

    2010-12-01

    The largest recent landslide in Estonia (ca. 60,000 m³), which occurred on 19 December 2005, has been investigated, modelled and monitored. Eight boreholes, geotechnical sampling and nine vane shear tests provided data on the geological setting, soil strength parameters and location of the rupture zones. Topographic surveys were carried out twice a year from April 2006 to April 2009 to monitor the evolution of the slope. Limit equilibrium modelling displayed a complex of six separate retrogressive failures, beginning near the Sauga River and ending 75 m from the former river channel. Modelling results are in agreement with the actual morphology of the multiple landslides. Monitoring records the enlargement of the landslide as the Sauga River downcuts through the slide and erodes its toe. Strength loss in the varved clays underlying the slope is a key factor in failure development.
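
    The record does not give the details of the limit-equilibrium model; as a minimal sketch of the kind of calculation involved, the classical infinite-slope factor of safety shows how strength loss in a clay drives a slope toward failure (all soil parameters below are hypothetical, not the Sauga values):

```python
import math

def infinite_slope_fs(c_kpa, phi_deg, gamma, h, beta_deg, u_kpa=0.0):
    """Infinite-slope factor of safety with Mohr-Coulomb strength.

    c_kpa: effective cohesion (kPa), phi_deg: friction angle (deg),
    gamma: soil unit weight (kN/m^3), h: slip-surface depth (m),
    beta_deg: slope angle (deg), u_kpa: pore pressure on the surface (kPa).
    """
    b, phi = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c_kpa + (gamma * h * math.cos(b) ** 2 - u_kpa) * math.tan(phi)
    driving = gamma * h * math.sin(b) * math.cos(b)
    return resisting / driving

# Intact vs. softened (remoulded) clay on the same slope:
print(round(infinite_slope_fs(20, 25, 18, 5, 15, u_kpa=20), 2))  # stable, FS > 1
print(round(infinite_slope_fs(4, 18, 18, 5, 15, u_kpa=20), 2))   # close to failure
```

    A retrogressive sequence like the one modelled here would apply such an analysis slice by slice; the point of the sketch is only that a drop in c' and φ' lowers the factor of safety at fixed geometry.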

  14. An evaluation of a real-time fault diagnosis expert system for aircraft applications

    NASA Technical Reports Server (NTRS)

    Schutte, Paul C.; Abbott, Kathy H.; Palmer, Michael T.; Ricks, Wendell R.

    1987-01-01

    A fault monitoring and diagnosis expert system called Faultfinder was conceived and developed to detect and diagnose in-flight failures in an aircraft. Faultfinder is an automated intelligent aid whose purpose is to assist the flight crew in fault monitoring, fault diagnosis, and recovery planning. The present implementation of this concept performs monitoring and diagnosis for a generic aircraft's propulsion and hydraulic subsystems. This implementation is capable of detecting and diagnosing failures of known and unknown (i.e., unforeseeable) type in a real-time environment. Faultfinder uses both rule-based and model-based reasoning strategies which operate on causal, temporal, and qualitative information. A preliminary evaluation is made of the diagnostic concepts implemented in Faultfinder. The evaluation used actual aircraft accident and incident cases which were simulated to assess the effectiveness of Faultfinder in detecting and diagnosing failures. Results of this evaluation, together with the description of the current Faultfinder implementation, are presented.

  15. Identification and classification of failure modes in laminated composites by using a multivariate statistical analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Baccar, D.; Söffker, D.

    2017-11-01

    Acoustic Emission (AE) is a suitable method to monitor the health of composite structures in real-time. However, AE-based failure mode identification and classification are still complex to apply because AE waves are generally released simultaneously from all AE-emitting damage sources. Hence, the use of advanced signal processing techniques in combination with pattern recognition approaches is required. In this paper, AE signals generated from laminated carbon fiber reinforced polymer (CFRP) subjected to an indentation test are examined and analyzed. A new pattern recognition approach involving a number of processing steps able to be implemented in real-time is developed. Unlike common classification approaches, here only CWT coefficients are extracted as relevant features. Firstly, the Continuous Wavelet Transform (CWT) is applied to the AE signals. Then, dimensionality reduction using Principal Component Analysis (PCA) is carried out on the coefficient matrices. The PCA-based feature distribution is analyzed using Kernel Density Estimation (KDE), allowing the determination of a specific pattern for each fault-specific AE signal. Moreover, the waveform and frequency content of the AE signals are examined in depth and compared with fundamental assumptions reported in this field. A correlation between the identified patterns and failure modes is achieved. The introduced method improves damage classification and can be used as a non-destructive evaluation tool.

  16. Wear and Adhesive Failure of Al2O3 Powder Coating Sprayed onto AISI H13 Tool Steel Substrate

    NASA Astrophysics Data System (ADS)

    Amanov, Auezhan; Pyun, Young-Sik

    2016-07-01

    In this study, an alumina (Al2O3) ceramic powder was sprayed onto an AISI H13 hot-work tool steel substrate that was subjected to sanding and ultrasonic nanocrystalline surface modification (UNSM) treatment processes. The significance of the UNSM technique for the adhesive failure of the Al2O3 coating and the hardness of the substrate was investigated. The adhesive failure of the coating sprayed onto sanded and UNSM-treated substrates was investigated by a micro-scratch tester at an incremental load. It was found, based on the obtained results, that the coating sprayed onto the UNSM-treated substrate exhibited better resistance to adhesive failure than the coating sprayed onto the sanded substrate. The dry friction and wear properties of the coatings sprayed onto the sanded and UNSM-treated substrates were assessed by means of a ball-on-disk tribometer against an AISI 52100 steel ball. It was demonstrated that the UNSM technique improved the resistance of the Al2O3 coating to adhesive failure, increasing the critical load by about 31%. Thus, it is expected that applying the UNSM technique to an AISI H13 tool steel substrate prior to coating may delay adhesive failure and improve adhesion between the coating and the substrate thanks to the modified and hardened surface.

  17. Feasibility and acceptability of a self-measurement using a portable bioelectrical impedance analysis, by the patient with chronic heart failure, in acute decompensated heart failure.

    PubMed

    Huguel, Benjamin; Vaugrenard, Thibaud; Saby, Ludivine; Benhamou, Lionel; Arméro, Sébastien; Camilleri, Élise; Langar, Aida; Alitta, Quentin; Grino, Michel; Retornaz, Frédérique

    2018-06-01

    Chronic heart failure (CHF) is a major public health issue. Mainly affecting the elderly, it is responsible for a high rate of hospitalization owing to frequent episodes of acute decompensated heart failure (ADHF). It is a disabling pathology for the patient and very costly for the health care system. Our study was designed to assess a connected, portable bioelectrical impedance analysis (BIA) device that could reduce these hospitalizations by detecting ADHF early. This prospective study included patients hospitalized in cardiology for ADHF. Patients performed three self-measurements using the BIA device during their hospitalization and answered a questionnaire evaluating the acceptability of this self-measurement. The results of these measurements were compared with the clinical, biological and echocardiographic criteria of the patients at the same time points. Twenty-three patients were included; self-measurement over the whole duration of the hospitalization was conducted autonomously by more than 80% of the patients. Acceptability of the portable BIA device was excellent (90%). Some correlations were statistically significant, such as that between the difference in total body water and the difference in weight (p=0.001), and the variation of the impedance measures shared common trends with the other evaluation criteria. The feasibility and acceptability of self-measured bioelectrical impedance analysis by patients in ADHF opens up major prospects for the monitoring of patients with CHF. The value of this tool for preventing ADHF-related hospitalizations and re-hospitalizations now needs to be confirmed by new studies.

  18. The future of telemedicine for the management of heart failure patients: a Consensus Document of the Italian Association of Hospital Cardiologists (A.N.M.C.O), the Italian Society of Cardiology (S.I.C.) and the Italian Society for Telemedicine and eHealth (Digital S.I.T.)

    PubMed Central

    Casolo, Giancarlo; Gulizia, Michele Massimo; Aspromonte, Nadia; Scalvini, Simonetta; Mortara, Andrea; Alunni, Gianfranco; Ricci, Renato Pietro; Mantovan, Roberto; Russo, Giancarmine; Gensini, Gian Franco; Romeo, Francesco

    2017-01-01

    Abstract Telemedicine applied to heart failure patients is a tool for recording and providing remote transmission, storage and interpretation of cardiovascular parameters and/or useful diagnostic images to allow for intensive home monitoring of patients with advanced heart failure, or during the vulnerable post-acute phase, to improve patient’s prognosis and quality of life. Recently, several meta-analyses have shown that telemedicine-supported care pathways are not only effective but also economically advantageous. Benefits seem to be substantial, with a 30–35% reduction in mortality and 15–20% decrease in hospitalizations. Patients implanted with cardiac devices can also benefit from an integrated remote clinical management since all modern devices can transmit technical and diagnostic data. However, telemedicine may provide benefits to heart failure patients only as part of a shared and integrated multi-disciplinary and multi-professional ‘chronic care model’. Moreover, the future development of remote telemonitoring programs in Italy will require the primary use of products certified as medical devices, validated organizational solutions as well as legislative and administrative adoption of new care methods and the widespread growth of clinical care competence to remotely manage the complexity of chronicity. Through this consensus document, Italian Cardiology reaffirms its willingness to contribute promoting a new phase of qualitative assessment, standardization of processes and testing of telemedicine-based care models in heart failure. By recognizing the relevance of telemedicine for the care of non-hospitalized patients with heart failure, its strategic importance for the design of innovative models of care, and the many challenges and opportunities it raises, ANMCO and SIC through this document report a consensus on the main directions for its widespread and sustainable clinical implementation. PMID:28751839

  19. The future of telemedicine for the management of heart failure patients: a Consensus Document of the Italian Association of Hospital Cardiologists (A.N.M.C.O), the Italian Society of Cardiology (S.I.C.) and the Italian Society for Telemedicine and eHealth (Digital S.I.T.).

    PubMed

    Di Lenarda, Andrea; Casolo, Giancarlo; Gulizia, Michele Massimo; Aspromonte, Nadia; Scalvini, Simonetta; Mortara, Andrea; Alunni, Gianfranco; Ricci, Renato Pietro; Mantovan, Roberto; Russo, Giancarmine; Gensini, Gian Franco; Romeo, Francesco

    2017-05-01

    Telemedicine applied to heart failure patients is a tool for recording and providing remote transmission, storage and interpretation of cardiovascular parameters and/or useful diagnostic images to allow for intensive home monitoring of patients with advanced heart failure, or during the vulnerable post-acute phase, to improve patient's prognosis and quality of life. Recently, several meta-analyses have shown that telemedicine-supported care pathways are not only effective but also economically advantageous. Benefits seem to be substantial, with a 30-35% reduction in mortality and 15-20% decrease in hospitalizations. Patients implanted with cardiac devices can also benefit from an integrated remote clinical management since all modern devices can transmit technical and diagnostic data. However, telemedicine may provide benefits to heart failure patients only as part of a shared and integrated multi-disciplinary and multi-professional 'chronic care model'. Moreover, the future development of remote telemonitoring programs in Italy will require the primary use of products certified as medical devices, validated organizational solutions as well as legislative and administrative adoption of new care methods and the widespread growth of clinical care competence to remotely manage the complexity of chronicity. Through this consensus document, Italian Cardiology reaffirms its willingness to contribute promoting a new phase of qualitative assessment, standardization of processes and testing of telemedicine-based care models in heart failure. By recognizing the relevance of telemedicine for the care of non-hospitalized patients with heart failure, its strategic importance for the design of innovative models of care, and the many challenges and opportunities it raises, ANMCO and SIC through this document report a consensus on the main directions for its widespread and sustainable clinical implementation.

  20. Evaluation Methodologies for Estimating the Likelihood of Program Implementation Failure

    ERIC Educational Resources Information Center

    Durand, Roger; Decker, Phillip J.; Kirkman, Dorothy M.

    2014-01-01

    Despite our best efforts as evaluators, program implementation failures abound. A wide variety of valuable methodologies have been adopted to explain and evaluate the "why" of these failures. Yet, typically these methodologies have been employed concurrently (e.g., project monitoring) or to the post-hoc assessment of program activities.…

  1. Prognostics using Engineering and Environmental Parameters as Applied to State of Health (SOH) Radionuclide Aerosol Sampler Analyzer (RASA) Real-Time Monitoring

    NASA Astrophysics Data System (ADS)

    Hutchenson, K. D.; Hartley-McBride, S.; Saults, T.; Schmidt, D. P.

    2006-05-01

    The International Monitoring System (IMS) is composed in part of radionuclide particulate and gas monitoring systems. Monitoring the operational status of these systems is an important aspect of nuclear weapon test monitoring. Quality data, process control techniques, and predictive models are necessary to detect and predict system component failures. Predicting failures in advance provides time to mitigate them, thus minimizing operational downtime. The Provisional Technical Secretariat (PTS) requires IMS radionuclide systems to be operational 95 percent of the time. The United States National Data Center (US NDC) offers contributing components to the IMS. This effort focuses on the initial research and process development using prognostics for monitoring and predicting failures of the RASA two (2) days into the future. The predictions, using time series methods, are input to an expert decision system called SHADES (State of Health Airflow and Detection Expert System). The results enable personnel to make informed judgments about the health of the RASA system. Data are read from a relational database, processed, and displayed to the user in a GIS as a prototype GUI. This procedure mimics the real-time application process that could be implemented as an operational system. This initial proof-of-concept effort developed predictive models focused on RASA components for a single site (USP79); similarly, SHADES currently accommodates the specific component behaviors at this one site. Future work shall incorporate other RASA systems, as well as the environmental variables that play a significant role in performance and form an important part of the prediction algorithms.
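
    The record names time series methods without specifying them; one plausible sketch of a two-day-ahead component-health forecast is Holt's linear-trend exponential smoothing applied to a degrading reading (the airflow series, smoothing constants, and threshold idea below are invented for illustration, not the RASA models):

```python
def holt_forecast(series, horizon, alpha=0.5, beta=0.3):
    """Holt's linear-trend exponential smoothing, forecast h steps ahead."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev = level
        # smooth the level, then smooth the trend estimate
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

# Hypothetical daily airflow readings (arbitrary units), steadily declining:
flow = [500, 498, 495, 491, 486, 480, 473]
day1, day2 = holt_forecast(flow, horizon=2)
# An expert decision system could alarm when the two-day-ahead forecast
# crosses an assumed minimum-airflow threshold.
print(round(day1, 1), round(day2, 1))
```

    Because the trend term is itself smoothed, an accelerating decline pushes the two-day-ahead forecast below the last observation, which is the signal a downstream rule can act on.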

  2. 40 CFR 63.8810 - How do I monitor and collect data to demonstrate continuous compliance?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... data. Monitoring failures that are caused by poor maintenance or careless operation are not... performance evaluation of each CMS in accordance with your site-specific monitoring plan. (4) You must operate...

  3. Failure Forecasting in Triaxially Stressed Sandstones

    NASA Astrophysics Data System (ADS)

    Crippen, A.; Bell, A. F.; Curtis, A.; Main, I. G.

    2017-12-01

    Precursory signals to fracturing events have been observed to follow power-law accelerations in spatial, temporal, and size distributions leading up to catastrophic failure. In previous studies this behavior was modeled using Voight's relation of a geophysical precursor in order to perform 'hindcasts' by solving for failure onset time. However, performing this analysis in retrospect creates a bias, as we know an event happened, when it happened, and we can search data for precursors accordingly. We aim to remove this retrospective bias, thereby allowing us to make failure forecasts in real-time in a rock deformation laboratory. We triaxially compressed water-saturated 100 mm sandstone cores (Pc = 25 MPa, Pp = 5 MPa, strain rate = 1.0 × 10⁻⁵ s⁻¹) to the point of failure while monitoring strain rate, differential stress, AEs, and continuous waveform data. Here we compare the current 'hindcast' methods on synthetic and our real laboratory data. We then apply these techniques to increasing fractions of the data sets to observe the evolution of the failure forecast time with precursory data. We discuss these results as well as our plan to mitigate false positives and minimize errors for real-time application. Real-time failure forecasting could revolutionize the field of hazard mitigation of brittle failure processes by allowing non-invasive monitoring of civil structures, volcanoes, and possibly fault zones.
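
    In the commonly used case of Voight's relation with exponent alpha = 2, solving for failure onset time reduces to the inverse-rate method: fit a line to 1/rate against time and read off its zero-crossing. A minimal sketch on synthetic precursor data (the rate law, constants, and sampling below are assumptions, not the laboratory values):

```python
def inverse_rate_forecast(times, rates):
    """Least-squares line through 1/rate vs. time; its zero-crossing is
    the forecast failure time (Voight's relation with alpha = 2)."""
    inv = [1.0 / r for r in rates]
    n = len(times)
    tbar, ybar = sum(times) / n, sum(inv) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, inv))
             / sum((t - tbar) ** 2 for t in times))
    intercept = ybar - slope * tbar
    return -intercept / slope  # time at which 1/rate extrapolates to zero

# Synthetic accelerating precursor rate, diverging at t_f = 100:
t_f, k = 100.0, 50.0
times = list(range(0, 90, 5))
rates = [k / (t_f - t) for t in times]
print(round(inverse_rate_forecast(times, rates), 3))  # → 100.0
```

    On noise-free synthetic data the fit recovers the true failure time exactly; the prospective (non-retrospective) problem studied in the record is deciding how much precursory data is enough for the extrapolation to stabilize.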

  4. Heart failure in children - overview

    MedlinePlus

    ... heart failure worse Monitor for side effects of medicines your child may be taking ... a safe and effective exercise and activity plan. MEDICINES, SURGERY, AND DEVICES Your child will need to take medicines to treat heart ...

  5. Sounds of Failure: Passive Acoustic Measurements of Excited Vibrational Modes

    NASA Astrophysics Data System (ADS)

    Brzinski, Theodore A.; Daniels, Karen E.

    2018-05-01

    Granular materials can fail through spontaneous events like earthquakes or brittle fracture. However, measurements and analytic models which forecast failure in this class of materials, while of both fundamental and practical interest, remain elusive. Materials including numerical packings of spheres, colloidal glasses, and granular materials have been known to develop an excess of low-frequency vibrational modes as the confining pressure is reduced. Here, we report experiments on sheared granular materials in which we monitor the evolving density of excited modes via passive monitoring of acoustic emissions. We observe a broadening of the distribution of excited modes coincident with both bulk and local plasticity, and evolution in the shape of the distribution before and after bulk failure. These results provide a new interpretation of the changing state of the material on its approach to stick-slip failure.

  6. Sounds of Failure: Passive Acoustic Measurements of Excited Vibrational Modes.

    PubMed

    Brzinski, Theodore A; Daniels, Karen E

    2018-05-25

    Granular materials can fail through spontaneous events like earthquakes or brittle fracture. However, measurements and analytic models which forecast failure in this class of materials, while of both fundamental and practical interest, remain elusive. Materials including numerical packings of spheres, colloidal glasses, and granular materials have been known to develop an excess of low-frequency vibrational modes as the confining pressure is reduced. Here, we report experiments on sheared granular materials in which we monitor the evolving density of excited modes via passive monitoring of acoustic emissions. We observe a broadening of the distribution of excited modes coincident with both bulk and local plasticity, and evolution in the shape of the distribution before and after bulk failure. These results provide a new interpretation of the changing state of the material on its approach to stick-slip failure.

  7. Syndromic surveillance for health information system failures: a feasibility study.

    PubMed

    Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico

    2013-05-01

    To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65-0.85. Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures.
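
    The control-chart step can be illustrated with a simple Shewhart-style rule: learn a baseline mean and standard deviation for an index (here, total records created) and alarm on points outside mean ± 3 SD. The counts and the simulated record-loss day below are invented:

```python
import statistics

def control_chart_alarms(baseline, monitored, k=3.0):
    """Indices of monitored values outside mean +/- k*SD of the baseline."""
    mu = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return [i for i, x in enumerate(monitored) if abs(x - mu) > k * sd]

baseline = [1000, 1010, 995, 1005, 990, 1008, 1002, 997]  # normal daily counts
monitored = [1003, 998, 700, 1001]  # day 2: simulated record loss
print(control_chart_alarms(baseline, monitored))  # → [2]
```

    The study fits time-series models rather than a static baseline precisely because such indices carry daily and weekly seasonality; the static rule here only shows the detection principle.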

  8. A web-based GPS system for displacement monitoring and failure mechanism analysis of reservoir landslide.

    PubMed

    Li, Yuanyao; Huang, Jinsong; Jiang, Shui-Hua; Huang, Faming; Chang, Zhilu

    2017-12-07

    It is important to monitor the displacement time series and to explore the failure mechanism of reservoir landslides for early warning. Traditionally, it is a challenge to monitor landslide displacements in real time and automatically. The Global Positioning System (GPS) is considered the best real-time monitoring technology; however, the accuracy of GPS-monitored landslide displacements is not assessed effectively. In this study, a web-based GPS system is developed to monitor landslide displacements in real time and automatically, and the discrete wavelet transform (DWT) is proposed to assess the accuracy of the GPS monitoring displacements. The Wangmiao landslide in the Three Gorges Reservoir area in China is used as a case study. The results show that the web-based GPS system has the advantages of high precision, real-time operation, remote control and automation for landslide monitoring; the root mean square errors of the monitored landslide displacements are less than 5 mm. The results also show that a rapidly falling reservoir water level can trigger the reactivation of the Wangmiao landslide. Heavy rainfall is also an important factor, but not a crucial one.

  9. 17 CFR 49.17 - Access to SDR data.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... legal and statutory responsibilities under the Act and related regulations. (2) Monitoring tools. A registered swap data repository is required to provide the Commission with proper tools for the monitoring... data structure and content. These monitoring tools shall be substantially similar in analytical...

  10. 17 CFR 49.17 - Access to SDR data.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... legal and statutory responsibilities under the Act and related regulations. (2) Monitoring tools. A registered swap data repository is required to provide the Commission with proper tools for the monitoring... data structure and content. These monitoring tools shall be substantially similar in analytical...

  11. Wind Turbine Bearing Diagnostics Based on Vibration Monitoring

    NASA Astrophysics Data System (ADS)

    Kadhim, H. T.; Mahmood, F. H.; Resen, A. K.

    2018-05-01

    Reliable maintenance rests on accurate condition monitoring, which increases the benefit and decreases the production cost of wind energy. The rolling element bearing supports the low-friction rotation of the wind turbine shaft, and it is the main part that suffers from failure. Rolling element failures have an economic impact and may lead to malfunctions and catastrophic failures. This paper concentrates on vibration monitoring as a non-destructive technique and demonstrates the feasibility of vibration monitoring for small wind turbine bearing defects based on LabVIEW software. Several bearing defects were created, such as an inner race defect, an outer race defect, and a ball spin defect. The spectral data were recorded and compared with the theoretical results. An accelerometer with a 4331 NI USB DAQ was utilized for acquisition, analysis, and recording. The experimental results showed that the vibration technique is suitable for diagnosing the defects that occur in small wind turbine bearings; a developing bearing fault increases the vibration amplitude and produces peaks in the spectrum.
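
    Matching spectral peaks to inner-race, outer-race, and ball-spin defects relies on the classical bearing characteristic frequencies, which follow from the bearing geometry. A small sketch (the example geometry is hypothetical, not the bearing used in the paper):

```python
import math

def bearing_defect_freqs(fr, n, d, D, theta_deg=0.0):
    """Characteristic defect frequencies (Hz) of a rolling-element bearing.

    fr: shaft speed (Hz), n: number of rolling elements, d: ball diameter,
    D: pitch diameter (same units as d), theta_deg: contact angle.
    """
    r = (d / D) * math.cos(math.radians(theta_deg))
    return {
        "FTF":  fr / 2 * (1 - r),                 # cage (train) frequency
        "BPFO": n * fr / 2 * (1 - r),             # outer-race defect
        "BPFI": n * fr / 2 * (1 + r),             # inner-race defect
        "BSF":  D / (2 * d) * fr * (1 - r ** 2),  # ball-spin defect
    }

# Example: 25 Hz shaft (1500 rpm), 9 balls, d = 7.94 mm, D = 38.5 mm
for name, f in bearing_defect_freqs(25.0, 9, 7.94, 38.5).items():
    print(name, round(f, 1))
```

    Peaks at these frequencies and their harmonics in the measured spectrum point to the corresponding defect, which is the comparison with "theoretical results" the record describes.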

  12. Accidental Beam Losses and Protection in the LHC

    NASA Astrophysics Data System (ADS)

    Schmidt, R.; Working Group On Machine Protection

    2005-06-01

    At top energy (proton momentum 7 TeV/c) with nominal beam parameters, each of the two LHC proton beams has a stored energy of 350 MJ, threatening to damage accelerator equipment in case of accidental beam loss. It is essential that the beams are properly extracted onto the dump blocks in case of failure, since these are the only elements that can withstand full beam impact. Although the energy stored in the beams at injection (450 GeV/c) is about 15 times smaller than at top energy, the beams must still be properly extracted in case of large accidental beam losses. Failures must be detected at a sufficiently early stage and initiate a beam dump. Quenches and power converter failures will be detected by monitoring the correct functioning of the hardware systems. In addition, safe operation throughout the cycle requires the use of beam loss monitors, collimators and absorbers. Concepts for detecting fast beam current decay, fast beam position changes and fast magnet current changes are discussed; these provide the required redundancy for machine protection.

  13. A risk-adjusted O-E CUSUM with monitoring bands for monitoring medical outcomes.

    PubMed

    Sun, Rena Jie; Kalbfleisch, John D

    2013-03-01

    In order to monitor a medical center's survival outcomes using simple plots, we introduce a risk-adjusted Observed-Expected (O-E) Cumulative SUM (CUSUM) along with monitoring bands as the decision criterion. The proposed monitoring bands can be used in place of a more traditional but complicated V-shaped mask or the simultaneous use of two one-sided CUSUMs. The resulting plot is designed to simultaneously monitor for failure time outcomes that are "worse than expected" or "better than expected." The slopes of the O-E CUSUM provide direct estimates of the relative risk (as compared with a standard or expected failure rate) for the data being monitored. Appropriate rejection regions are obtained by controlling the false alarm rate (type I error) over a period of given length. Simulation studies are conducted to illustrate the performance of the proposed method. A case study is carried out for 58 liver transplant centers. The use of CUSUM methods for quality improvement is stressed. Copyright © 2013, The International Biometric Society.
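
    The O-E statistic itself is simple to compute: it is the running sum of observed minus risk-adjusted expected events. A minimal sketch with invented numbers follows; the paper's actual contribution, the monitoring bands and their false-alarm calibration, is not reproduced here.

```python
# Running O-E CUSUM path: a center performing as expected hovers near 0;
# a sustained upward slope suggests worse-than-expected outcomes, and the
# slope itself estimates the relative risk versus the expected rate.
def oe_cusum(observed, expected):
    """Return the cumulative O-E path for paired (observed, expected) counts."""
    path, total = [], 0.0
    for o, e in zip(observed, expected):
        total += o - e
        path.append(total)
    return path

# Five monitoring periods with illustrative counts (not the paper's data).
path = oe_cusum([1, 0, 2, 1, 3], [0.8, 0.9, 1.1, 1.0, 1.2])
```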

  14. On-line Monitoring for Cutting Tool Wear Condition Based on the Parameters

    NASA Astrophysics Data System (ADS)

    Han, Fenghua; Xie, Feng

    2017-07-01

    When machining with cutting tools, it is very important to monitor the working state of the tool. Based on acceleration signals acquired at constant speed, time-domain and frequency-domain analysis of relevant indicators is used to monitor the tool wear condition online. The analysis results show that the method can effectively judge the tool wear condition during machining, and it has practical application value.
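
    Typical time-domain indicators of the kind this abstract refers to include the RMS and kurtosis of the acceleration signal; a worn tool generally raises vibration energy. A stdlib sketch on a synthetic signal follows; the paper's exact indicators are not specified, so these are assumptions for illustration.

```python
# Two common time-domain features for tool-wear monitoring, computed on
# synthetic "sharp" vs "worn" acceleration signals.
import math
import random

def rms(x):
    """Root-mean-square value: overall vibration energy."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def kurtosis(x):
    """Fourth standardized moment: sensitivity to impulsive peaks."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m4 / (var ** 2)

random.seed(0)
# Simulated signals: a worn tool has larger amplitude and more noise.
sharp = [math.sin(0.1 * i) + random.gauss(0, 0.1) for i in range(1000)]
worn = [1.5 * math.sin(0.1 * i) + random.gauss(0, 0.4) for i in range(1000)]
sharp_rms, worn_rms = rms(sharp), rms(worn)
```

    A monitoring rule would then threshold such features, or track their trend across consecutive cuts.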

  15. Weight Management Belief is the Leading Influential Factor of Weight Monitoring Compliance in Congestive Heart Failure Patients.

    PubMed

    Lu, Min-Xia; Zhang, Yan-Yun; Jiang, Jun-Fang; Ju, Yang; Wu, Qing; Zhao, Xin; Wang, Xiao-Hua

    2016-11-01

    Daily weight monitoring is frequently recommended as part of heart failure self-management to prevent exacerbations. The aim of this study was to identify factors that influence weight monitoring compliance of congestive heart failure patients at baseline and after a 1-year weight management (WM) program. This was a secondary analysis of an investigative study and a randomized controlled study. A general information questionnaire assessed patient demographics and clinical variables such as medicine use and diagnoses, and the weight management scale evaluated their WM abilities. Good and poor compliance, defined using the European Society of Cardiology criterion for abnormal weight gain (> 2 kg in 3 days), were compared, and hierarchical multiple logistic regression analysis was used to identify factors influencing weight monitoring compliance. A total of 316 patients were enrolled at baseline, and 66 patients were enrolled after the 1-year WM program. Of them, 12.66% and 60.61% had good weight monitoring compliance at baseline and after 1 year of WM, respectively. A high WM-related belief score indicated good weight monitoring compliance at both time points (odds ratio (OR), 1.043; 95% confidence interval (CI), 1.023-1.063; p < 0.001; and OR, 2.054; 95% CI, 1.209-3.487; p < 0.001, respectively). Patients with a high WM-related practice score had good weight monitoring compliance at baseline (OR, 1.046; 95% CI, 1.027-1.065; p < 0.001), and patients who had not monitored abnormal weight had poor weight monitoring compliance after the 1-year WM program (OR, 0.244; 95% CI, 0.006-0.991; p = 0.049). Data from this study suggest that belief related to WM plays an important role in weight monitoring compliance.

  16. On-Board Particulate Filter Failure Prevention and Failure Diagnostics Using Radio Frequency Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sappok, Alex; Ragaller, Paul; Herman, Andrew

    The increasing use of diesel and gasoline particulate filters requires advanced on-board diagnostics (OBD) to prevent and detect filter failures and malfunctions. Early detection of upstream (engine-out) malfunctions is paramount to preventing irreversible damage to downstream aftertreatment system components. Such early detection can mitigate particulate filter failures that result in the escape of emissions exceeding permissible limits, and can extend the component life. However, despite best efforts at early detection and filter failure prevention, the OBD system must also be able to detect filter failures when they occur. In this study, radio frequency (RF) sensors were used to directly monitor the particulate filter state of health for both gasoline particulate filter (GPF) and diesel particulate filter (DPF) applications. The testing included controlled engine dynamometer evaluations, which characterized soot slip from various filter failure modes, as well as on-road fleet vehicle tests. The results show a high sensitivity to detect conditions resulting in soot leakage from the particulate filter, as well as potential for direct detection of structural failures including internal cracks and melted regions within the filter media itself. Furthermore, the measurements demonstrate, for the first time, the capability to employ a direct and continuous monitor of particulate filter diagnostics to both prevent and detect potential failure conditions in the field.

  17. A Single CD4 Test with 250 Cells/Mm3 Threshold Predicts Viral Suppression in HIV-Infected Adults Failing First-Line Therapy by Clinical Criteria

    PubMed Central

    Munderi, Paula; Kityo, Cissy; Reid, Andrew; Katabira, Elly; Goodall, Ruth L.; Grosskurth, Heiner; Mugyenyi, Peter; Hakim, James; Gibb, Diana M.

    2013-01-01

    Background In low-income countries, viral load (VL) monitoring of antiretroviral therapy (ART) is rarely available in the public sector for HIV-infected adults or children. Using clinical failure alone to identify first-line ART failure and trigger regimen switch may result in unnecessary use of costly second-line therapy. Our objective was to identify CD4 threshold values to confirm clinically-determined ART failure when VL is unavailable. Methods 3316 HIV-infected Ugandan/Zimbabwean adults were randomised to first-line ART with Clinically-Driven (CDM, CD4s measured but blinded) or routine Laboratory and Clinical Monitoring (LCM, 12-weekly CD4s) in the DART trial. CD4 at switch and ART failure criteria (new/recurrent WHO 4, single/multiple WHO 3 event; LCM: CD4<100 cells/mm3) were reviewed in 361 LCM, 314 CDM participants who switched over median 5 years follow-up. Retrospective VLs were available in 368 (55%) participants. Results Overall, 265/361 (73%) LCM participants failed with CD4<100 cells/mm3; only 7 (2%) switched with CD4≥250 cells/mm3, four switches triggered by WHO events. Without CD4 monitoring, 207/314 (66%) CDM participants failed with WHO 4 events, and 77 (25%)/30 (10%) with single/multiple WHO 3 events. Failure/switching with single WHO 3 events was more likely with CD4≥250 cells/mm3 (28/77; 36%) (p = 0.0002). CD4 monitoring reduced switching with viral suppression: 23/187 (12%) LCM versus 49/181 (27%) CDM had VL<400 copies/ml at failure/switch (p<0.0001). Amongst CDM participants with CD4<250 cells/mm3 only 11/133 (8%) had VL<400 copies/ml, compared with 38/48 (79%) with CD4≥250 cells/mm3 (p<0.0001). Conclusion Multiple, but not single, WHO 3 events predicted first-line ART failure. A CD4 threshold ‘tiebreaker’ of ≥250 cells/mm3 for clinically-monitored patients failing first-line could identify ∼80% with VL<400 copies/ml, who are unlikely to benefit from second-line. Targeting CD4s to single WHO stage 3 ‘clinical failures’ would particularly avoid premature, costly switch to second-line ART. PMID:23437399

  18. Monitoring and Control Interface Based on Virtual Sensors

    PubMed Central

    Escobar, Ricardo F.; Adam-Medina, Manuel; García-Beltrán, Carlos D.; Olivares-Peregrino, Víctor H.; Juárez-Romero, David; Guerrero-Ramírez, Gerardo V.

    2014-01-01

    In this article, a toolbox based on a monitoring and control interface (MCI) is presented and applied to a heat exchanger. The MCI was programmed to perform sensor fault detection and isolation and fault tolerance using virtual sensors. The virtual sensors were designed from model-based high-gain observers. To carry out the control task, different kinds of control laws were included in the monitoring and control interface: PID, MPC and a non-linear model-based control law. The MCI helps to keep the heat exchanger in operation even if an outlet temperature sensor fault occurs; in the case of an outlet temperature sensor failure, the MCI displays an alarm. The monitoring and control interface is used as a practical tool to support electronic engineering students with heat transfer and control concepts applied to a double-pipe heat exchanger pilot plant. The method aims to teach the students through the observation and manipulation of the main variables of the process and through interaction with the monitoring and control interface (MCI) developed in LabVIEW©. The MCI provides the electronic engineering students with knowledge of heat exchanger behavior, since the interface includes a thermodynamic model that approximates the temperatures and the physical properties of the fluid (density and heat capacity). An advantage of the interface is the easy manipulation of the actuator for automatic or manual operation. Another advantage of the monitoring and control interface is that all algorithms can be manipulated and modified by the users. PMID:25365462

  19. Real World Experience With Ion Implant Fault Detection at Freescale Semiconductor

    NASA Astrophysics Data System (ADS)

    Sing, David C.; Breeden, Terry; Fakhreddine, Hassan; Gladwin, Steven; Locke, Jason; McHugh, Jim; Rendon, Michael

    2006-11-01

    The Freescale automatic fault detection and classification (FDC) system has logged data from over 3.5 million implants in the past two years. The Freescale FDC system is a low-cost system which collects summary implant statistics at the conclusion of each implant run. The data is collected by either downloading implant data log files from the implant tool workstation, or by exporting summary implant statistics through the tool's automation interface. Compared to traditional FDC systems which gather trace data from sensors on the tool as the implant proceeds, the Freescale FDC system cannot prevent scrap when a fault initially occurs, since the data is collected after the implant concludes. However, the system can prevent catastrophic scrap events due to faults which are not detected for days or weeks, leading to the loss of hundreds or thousands of wafers. At the Freescale ATMC facility, the practical applications of the FDC system fall into two categories: PM trigger rules which monitor tool signals such as ion gauges and charge control signals, and scrap prevention rules which are designed to detect specific failure modes that have been correlated to yield loss and scrap. PM trigger rules are designed to detect shifts in tool signals which indicate normal aging of tool systems. For example, charging parameters gradually shift as flood gun assemblies age, and when charge control rules start to fail a flood gun PM is performed. Scrap prevention rules are deployed to detect events such as particle bursts and excessive beam noise, events which have been correlated to yield loss. The FDC system does have tool log-down capability, and scrap prevention rules often use this capability to automatically log the tool into a maintenance state while simultaneously paging the sustaining technician for data review and disposition of the affected product.
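
    The two rule categories described above can be sketched as simple threshold checks: a PM trigger watching for slow drift in a tool signal across runs, and a scrap-prevention rule reacting to a single out-of-family run summary. All field names and limits below are hypothetical illustrations, not Freescale's actual rules.

```python
# Hypothetical sketch of the two FDC rule categories: drift-based PM
# triggers and single-run scrap-prevention checks.
def check_pm_trigger(history, limit, window=5):
    """Flag a PM when the recent mean of a tool signal drifts past a limit
    (e.g. a flood-gun charging parameter rising as the assembly ages)."""
    recent = history[-window:]
    return sum(recent) / len(recent) > limit

def check_scrap_rule(run_summary, max_particles, max_beam_noise):
    """Hold the tool if one run shows a particle burst or excessive beam
    noise; in the real system this would log the tool down and page."""
    return (run_summary["particles"] > max_particles
            or run_summary["beam_noise"] > max_beam_noise)

# A charging signal drifting upward across implant runs -> PM due.
pm_due = check_pm_trigger([1.0, 1.1, 1.2, 1.4, 1.6, 1.9, 2.3], limit=1.5)
# A single run with a particle burst -> hold the tool immediately.
hold = check_scrap_rule({"particles": 120, "beam_noise": 0.2},
                        max_particles=50, max_beam_noise=1.0)
```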

  20. Dual permeability FEM models for distributed fiber optic sensors development

    NASA Astrophysics Data System (ADS)

    Aguilar-López, Juan Pablo; Bogaard, Thom

    2017-04-01

    Fiber optic cables are commonly known as robust and reliable media for transferring information at the speed of light in glass. Billions of kilometers of cable have been installed around the world for internet connectivity and real-time information sharing. Yet a fiber optic cable is not only a means of information transfer but also a way to sense and measure physical properties of the medium in which it is installed. For dike monitoring, it has been used in the past to detect temperature changes in the inner core and foundation, which allows estimation of water infiltration during high-water events. The DOMINO research project aims to develop a fiber-optic-based dike monitoring system that can directly sense and measure pore pressure changes inside the dike structure. For this purpose, questions such as sensor location, number of sensors, measuring frequency and required accuracy must be answered during sensor development. These questions may be initially addressed with a finite element model that estimates the effect of pore pressure changes at different locations along the cross section while providing a time-dependent estimate of a stability factor. The sensor aims to monitor two main failure mechanisms at the same time: piping erosion and macro-instability. Both mechanisms are modeled and assessed in detail with a finite-element-based dual-permeability Darcy-Richards numerical solution. In this manner, different sensing configurations can be assessed under different loading scenarios (e.g. high water levels, rainfall events, and initial soil moisture and permeability conditions). The results obtained for the different configurations are then evaluated using an entropy-based performance measure. The added value of this modelling approach for sensor development is that it simultaneously models the piping erosion and macro-stability failure mechanisms in a time-dependent manner, so that the estimated pore pressures can be related both to the monitored pressures and to the two failure mechanisms. Furthermore, the approach is intended to be used at a later stage for real-time monitoring of failure.

  1. PACS quality control and automatic problem notifier

    NASA Astrophysics Data System (ADS)

    Honeyman-Buck, Janice C.; Jones, Douglas; Frost, Meryll M.; Staab, Edward V.

    1997-05-01

    One side effect of installing a clinical PACS is that users become dependent upon the technology, and in some cases it can be very difficult to revert to a film-based system if components fail. System failures range from slow deterioration of function, as seen in the loss of monitor luminance, to sudden catastrophic loss of the entire PACS network. This paper describes the quality control procedures in place at the University of Florida and the automatic notification system that alerts PACS personnel when a failure has happened or is anticipated. The goal is to recover from a failure with a minimum of downtime and no data loss. Routine quality control is practiced on all aspects of PACS, from acquisition, through network routing, through display, and including archiving. Whenever possible, the system components perform self-checks and cross-platform checks for active processes, file system status, errors in log files, and system uptime. When an error is detected or an exception occurs, an automatic page is sent to a pager with a diagnostic code. Documentation on each code, troubleshooting procedures, and repairs is kept on an intranet server accessible only to people involved in maintaining the PACS. In addition to the automatic paging system for error conditions, acquisition is assured by an automatic fax report sent daily to all technologists acquiring PACS images, used as a cross-check that all studies are archived before being removed from the acquisition systems. Daily quality control is performed to assure that studies can be moved from each acquisition station and that contrast adjustment functions correctly. The results of selected quality control reports will be presented. The intranet documentation server will be described along with the automatic pager system. Monitor quality control reports will be described, and the cost of quality control will be quantified. As PACS is accepted as a clinical tool, the same standards of quality control must be established as are expected of other equipment used in the diagnostic process.

  2. Hyper-X Stage Separation: Simulation Development and Results

    NASA Technical Reports Server (NTRS)

    Reubush, David E.; Martin, John G.; Robinson, Jeffrey S.; Bose, David M.; Strovers, Brian K.

    2001-01-01

    This paper provides an overview of stage separation simulation development and results for NASA's Hyper-X program, a focused hypersonic technology effort designed to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. This paper presents an account of the development of the current 14-degree-of-freedom stage separation simulation tool (SepSim) and results from use of the tool in a Monte Carlo analysis to evaluate the risk of failure for the separation event. Results from use of the tool show that there is only a very small risk of failure in the separation event.

  3. Continuous monitoring of intracranial pressure after endoscopic third ventriculostomy in the management of CSF shunt failure.

    PubMed

    Elgamal, E A

    2010-04-01

    The effectiveness of continuous intracranial pressure (ICP) monitoring during the adaptation period after endoscopic third ventriculostomy (ETV) and removal of the failed shunt in the management of CSF shunt failure is assessed. Nine patients with active hydrocephalus presenting with CSF shunt obstruction or infection were managed by ETV, removal of the shunt, and insertion of an external ventricular drain (EVD) containing an ICP sensor for postoperative monitoring of the ICP and intermittent drainage of CSF. Patient ages ranged from 8 months to 24 years, and six of them were female. Hydrocephalus was obstructive in seven patients and multiloculated in two. Six patients had a ventriculoperitoneal shunt (VPS), one of them bilateral; one patient had a ventriculoatrial shunt; and one had a VPS and a cystoperitoneal shunt (CPS). Shunt failure was caused by obstruction in six patients and infection in three. The post-operative ICP monitoring period ranged from 1 to 7 days. Intracranial hypertension persisted during the first day after ETV in 3 patients, and up to 110 mL of CSF was drained to relieve its symptoms. ETV was successful in six patients, and 3 required a permanent VPS. Post-operative continuous ICP monitoring and EVD insertion were very useful in the treatment of CSF shunt failure with ETV. This procedure allowed intermittent CSF drainage, relieving symptoms of elevated ICP, and provided accurate assessment of the success of the ETV and patency of the stoma in the early postoperative days by CT ventriculography; the EVD can also be used to instill antibiotics in cases of infection.

  4. Diagnostic value of different adherence measures using electronic monitoring and virologic failure as reference standards.

    PubMed

    Deschamps, Ann E; De Geest, Sabina; Vandamme, Anne-Mieke; Bobbaers, Herman; Peetermans, Willy E; Van Wijngaerden, Eric

    2008-09-01

    Nonadherence to antiretroviral therapy is a substantial problem in HIV and jeopardizes the success of treatment. Accurate measurement of nonadherence is therefore imperative for good clinical management, but no gold standard has yet been agreed on. In a single-center prospective study, nonadherence was assessed by electronic monitoring (percentage of doses missed and drug holidays) and by three self-reports: (1) a visual analogue scale (VAS): percentage of overall doses taken; (2) the Swiss HIV Cohort Study Adherence Questionnaire (SHCS-AQ): percentage of overall doses missed and drug holidays; and (3) the European HIV Treatment Questionnaire (EHTQ): percentage of doses missed and drug holidays for each antiretroviral drug separately. Virologic failure, prospectively assessed during 1 year, and electronic monitoring were used as reference standards. Using virologic failure as the reference standard, the best results were for (1) the SHCS-AQ after electronic monitoring (sensitivity, 87.5%; specificity, 78.6%); (2) electronic monitoring (sensitivity, 75%; specificity, 85.6%); and (3) the VAS combined with the SHCS-AQ before electronic monitoring (sensitivity, 87.5%; specificity, 58.6%). The sensitivity of the complex EHTQ was less than 50%. Asking simple questions about doses taken or missed is more sensitive than complex questioning about each drug separately. Combining the VAS with the SHCS-AQ seems a feasible nonadherence measure for daily clinical practice. Self-reports perform better after electronic monitoring: their diagnostic value could be lower when given independently.
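
    The sensitivity and specificity figures quoted above follow from the usual confusion-matrix definitions. A generic sketch with toy data (not the study's) follows.

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP), where the
# reference standard defines the true positives (here: nonadherence /
# virologic failure) and the measure under test supplies the predictions.
def sens_spec(predicted, reference):
    """predicted/reference are booleans: True = nonadherent / failure."""
    tp = sum(p and r for p, r in zip(predicted, reference))
    tn = sum((not p) and (not r) for p, r in zip(predicted, reference))
    fn = sum((not p) and r for p, r in zip(predicted, reference))
    fp = sum(p and (not r) for p, r in zip(predicted, reference))
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 8 patients, reference failure status vs. a self-report flag.
ref = [True, True, True, True, False, False, False, False]
pred = [True, True, True, False, False, False, False, True]
sensitivity, specificity = sens_spec(pred, ref)
```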

  5. Vegetation optical depth measured by microwave radiometry as an indicator of tree mortality risk

    NASA Astrophysics Data System (ADS)

    Rao, K.; Anderegg, W.; Sala, A.; Martínez-Vilalta, J.; Konings, A. G.

    2017-12-01

    Increased drought-related tree mortality has been observed across several regions in recent years. Vast spatial extent and high temporal variability make field monitoring of tree mortality cumbersome and expensive. With global coverage and high temporal revisit, satellite remote sensing offers an unprecedented tool to monitor terrestrial ecosystems and identify areas at risk of large drought-driven tree mortality events. To date, studies that use remote sensing data to monitor tree mortality have focused on external climatic thresholds such as temperature and evapotranspiration. However, this approach fails to consider internal water stress in vegetation, which can vary across trees even under similar climatic conditions due to differences in hydraulic behavior, soil type, and other factors, and may therefore be a poor basis for measuring mortality events. There is a consensus that xylem hydraulic failure often precedes drought-induced mortality, suggesting depleted canopy water content shortly before the onset of mortality. Observations of vegetation optical depth (VOD) derived from passive microwave radiometry are proportional to canopy water content. In this study, we propose to use variations in VOD as an indicator of potential tree mortality. Since VOD accounts for intrinsic water stress undergone by vegetation, it is expected to be more accurate than external climatic stress indicators. Analysis of tree mortality events in California, USA, observed by airborne detection shows a consistent relationship between mortality and the proposed VOD metric. Although this approach is limited by the kilometer-scale resolution of passive microwave radiometry, our results nevertheless demonstrate that microwave-derived estimates of vegetation water content can be used to study drought-driven tree mortality, and may be a valuable tool for mortality predictions if they can be combined with higher-resolution variables.
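
    One simple way to operationalize "variations in VOD" as a stress indicator is to score each pixel's current VOD against its own historical record. The z-score threshold below is an assumption for illustration, not necessarily the study's actual metric, and the values are invented.

```python
# Flag pixels whose current VOD falls well below the historical norm,
# as a crude canopy-water-stress indicator.
def vod_stress_flags(history, current, z_thresh=-2.0):
    """Flag entries where current VOD sits z_thresh std devs below its mean."""
    flags = []
    for series, now in zip(history, current):
        mean = sum(series) / len(series)
        var = sum((v - mean) ** 2 for v in series) / len(series)
        std = var ** 0.5 or 1e-9  # guard against a constant series
        flags.append((now - mean) / std < z_thresh)
    return flags

# Two pixels with identical histories: one stable, one sharply declining.
hist = [[0.80, 0.82, 0.79, 0.81], [0.80, 0.82, 0.79, 0.81]]
flags = vod_stress_flags(hist, current=[0.80, 0.55])
```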

  6. Experimental Modal Analysis and Dynamic Strain Fiber Bragg Gratings for Structural Health Monitoring of Composite Aerospace Structures

    NASA Astrophysics Data System (ADS)

    Panopoulou, A.; Fransen, S.; Gomez Molinero, V.; Kostopoulos, V.

    2012-07-01

    The objective of this work is to develop a new structural health monitoring system for composite aerospace structures based on dynamic response strain measurements and experimental modal analysis techniques. Fibre Bragg Grating (FBG) optical sensors were used for monitoring the dynamic response of the composite structure. The structural dynamic behaviour has been numerically simulated and experimentally verified by means of vibration testing. The hypothesis of all vibration tests was that actual damage in composites reduces their stiffness and produces the same result as a mass increase. Thus, damage was simulated by slightly varying the mass of the structure locally at different zones. Experimental modal analysis based on the strain responses was conducted, and the extracted strain mode shapes were the input for the damage detection expert system. A feed-forward back-propagation neural network was the core of the damage detection system. The input features to the neural network consisted of the strain mode shapes extracted from the experimental modal analysis. Dedicated training and validation activities were carried out based on the experimental results. The system showed high reliability, confirmed by the ability of the neural network to recognize the size and position of damage on the structure. The experiments were performed on a real structure, i.e. a lightweight antenna sub-reflector, manufactured and tested at EADS CASA ESPACIO. An integrated FBG sensor network, based on the advantage of multiplexing, was mounted on the structure with optimum topology. Numerical simulation of both structures was used as a support tool at all steps of the work. Potential applications for the proposed system are during extensive ground qualification tests of space structures and, during the mission, as an on-board modal analysis tool able to identify a potential failure via the FBG responses.

  7. Education Data in Conflict-Affected Countries: The Fifth Failure?

    ERIC Educational Resources Information Center

    Montjourides, Patrick

    2013-01-01

    Poor-quality, or completely absent, data deny millions of children the right to an education. This is often the case in conflict-ridden areas. The 2011 Education for All Global Monitoring Report (UNESCO 2011b) identified four failures that are holding back progress in education and damaging millions of children's lives: failures of protection,…

  8. Use of Semi-Autonomous Tools for ISS Commanding and Monitoring

    NASA Technical Reports Server (NTRS)

    Brzezinski, Amy S.

    2014-01-01

    As the International Space Station (ISS) has moved into a utilization phase, operations have shifted to become more ground-based with fewer mission control personnel monitoring and commanding multiple ISS systems. This shift to fewer people monitoring more systems has prompted use of semi-autonomous console tools in the ISS Mission Control Center (MCC) to help flight controllers command and monitor the ISS. These console tools perform routine operational procedures while keeping the human operator "in the loop" to monitor and intervene when off-nominal events arise. Two such tools, the Pre-positioned Load (PPL) Loader and Automatic Operators Recorder Manager (AutoORM), are used by the ISS Communications RF Onboard Networks Utilization Specialist (CRONUS) flight control position. CRONUS is responsible for simultaneously commanding and monitoring the ISS Command & Data Handling (C&DH) and Communications and Tracking (C&T) systems. PPL Loader is used to uplink small pieces of frequently changed software data tables, called PPLs, to ISS computers to support different ISS operations. In order to uplink a PPL, a data load command must be built that contains multiple user-input fields. Next, a multiple step commanding and verification procedure must be performed to enable an onboard computer for software uplink, uplink the PPL, verify the PPL has incorporated correctly, and disable the computer for software uplink. PPL Loader provides different levels of automation in both building and uplinking these commands. In its manual mode, PPL Loader automatically builds the PPL data load commands but allows the flight controller to verify and save the commands for future uplink. In its auto mode, PPL Loader automatically builds the PPL data load commands for flight controller verification, but automatically performs the PPL uplink procedure by sending commands and performing verification checks while notifying CRONUS of procedure step completion. 
If an off-nominal condition occurs during procedure execution, PPL Loader notifies CRONUS through popup messages, allowing CRONUS to examine the situation and choose an option for how PPL Loader should proceed with the procedure. The use of PPL Loader to perform frequent, routine PPL uplinks offloads CRONUS to better monitor two ISS systems. It also reduces procedure performance time and decreases risk of command errors. AutoORM identifies ISS communication outage periods and builds commands to lock, playback, and unlock ISS Operations Recorder files. Operations Recorder files are circular buffer files of continually recorded ISS telemetry data. Sections of these files can be locked from further writing, be played back to capture telemetry data that occurred during an ISS loss of signal (LOS) period, and then be unlocked for future recording use. Downlinked Operations Recorder files are used by mission support teams for data analysis, especially if failures occur during LOS. The commands to lock, playback, and unlock Operations Recorder files are encompassed in three different operational procedures and contain multiple user-input fields. AutoORM provides different levels of automation for building and uplinking the commands to lock, playback, and unlock Operations Recorder files. In its automatic mode, AutoORM automatically detects ISS LOS periods, then generates and uplinks the commands to lock, playback, and unlock Operations Recorder files when MCC regains signal with ISS. AutoORM also features semi-autonomous and manual modes which integrate CRONUS more into the command verification and uplink process. AutoORM's ability to automatically detect ISS LOS periods and build the necessary commands to preserve, playback, and release recorded telemetry data greatly offloads CRONUS to perform more high-level cognitive tasks, such as mission planning and anomaly troubleshooting.
Additionally, since Operations Recorder commands contain numerical time input fields which are tedious for a human to manually build, AutoORM's ability to automatically build commands reduces operational command errors. PPL Loader and AutoORM demonstrate principles of semi-autonomous operational tools that will benefit future space mission operations. Both tools employ different levels of automation to perform simple and routine procedures, thereby offloading human operators to perform higher-level cognitive tasks. Because both tools provide procedure execution status and highlight off-nominal indications, the flight controller is able to intervene during procedure execution if needed. Semi-autonomous tools and systems that can perform routine procedures, yet keep human operators informed of execution, will be essential in future long-duration missions where the onboard crew will be solely responsible for spacecraft monitoring and control.

  9. Super Learner Analysis of Electronic Adherence Data Improves Viral Prediction and May Provide Strategies for Selective HIV RNA Monitoring.

    PubMed

    Petersen, Maya L; LeDell, Erin; Schwab, Joshua; Sarovar, Varada; Gross, Robert; Reynolds, Nancy; Haberer, Jessica E; Goggin, Kathy; Golin, Carol; Arnsten, Julia; Rosen, Marc I; Remien, Robert H; Etoori, David; Wilson, Ira B; Simoni, Jane M; Erlen, Judith A; van der Laan, Mark J; Liu, Honghu; Bangsberg, David R

    2015-05-01

    Regular HIV RNA testing for all HIV-positive patients on antiretroviral therapy (ART) is expensive and has low yield since most tests are undetectable. Selective testing of those at higher risk of failure may improve efficiency. We investigated whether a novel analysis of adherence data could correctly classify virological failure and potentially inform a selective testing strategy. Multisite prospective cohort consortium. We evaluated longitudinal data on 1478 adult patients treated with ART and monitored using the Medication Event Monitoring System (MEMS) in 16 US cohorts contributing to the MACH14 consortium. Because the relationship between adherence and virological failure is complex and heterogeneous, we applied a machine-learning algorithm (Super Learner) to build a model for classifying failure and evaluated its performance using cross-validation. Application of the Super Learner algorithm to MEMS data, combined with data on CD4 T-cell counts and ART regimen, significantly improved classification of virological failure over a single MEMS adherence measure. Area under the receiver operating characteristic curve, evaluated on data not used in model fitting, was 0.78 (95% confidence interval: 0.75 to 0.80) and 0.79 (95% confidence interval: 0.76 to 0.81) for failure defined as single HIV RNA level >1000 copies per milliliter or >400 copies per milliliter, respectively. Our results suggest that 25%-31% of viral load tests could be avoided while maintaining sensitivity for failure detection at or above 95%, for a cost savings of $16-$29 per person-month. Our findings provide initial proof of concept for the potential use of electronic medication adherence data to reduce costs through behavior-driven HIV RNA testing.
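The cross-validated ensemble idea behind Super Learner can be sketched with scikit-learn's stacking classifier, which likewise fits a library of candidate learners and combines their out-of-fold predictions with a meta-learner. The MACH14 features and the study's actual candidate library are not reproduced here; synthetic data and a small two-learner library stand in for them.

```python
# Sketch of a Super Learner-style stacked ensemble, evaluated with
# cross-validated AUC as in the study. Data and learner library are
# illustrative stand-ins, not the MACH14 setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# Candidate learners; their out-of-fold predictions feed a logistic
# meta-learner, mirroring the cross-validated weighting in Super Learner.
stack = StackingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,
)

# AUC estimated on data not used in model fitting, as the paper reports.
auc = cross_val_score(stack, X, y, cv=5, scoring="roc_auc").mean()
```

A selective-testing policy would then threshold the ensemble's predicted failure probability to decide which patients receive an HIV RNA test.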

  10. Super learner analysis of electronic adherence data improves viral prediction and may provide strategies for selective HIV RNA monitoring

    PubMed Central

    Petersen, Maya L.; LeDell, Erin; Schwab, Joshua; Sarovar, Varada; Gross, Robert; Reynolds, Nancy; Haberer, Jessica E.; Goggin, Kathy; Golin, Carol; Arnsten, Julia; Rosen, Marc; Remien, Robert; Etoori, David; Wilson, Ira; Simoni, Jane M.; Erlen, Judith A.; van der Laan, Mark J.; Liu, Honghu; Bangsberg, David R

    2015-01-01

    Objective Regular HIV RNA testing for all HIV positive patients on antiretroviral therapy (ART) is expensive and has low yield since most tests are undetectable. Selective testing of those at higher risk of failure may improve efficiency. We investigated whether a novel analysis of adherence data could correctly classify virological failure and potentially inform a selective testing strategy. Design Multisite prospective cohort consortium. Methods We evaluated longitudinal data on 1478 adult patients treated with ART and monitored using the Medication Event Monitoring System (MEMS) in 16 United States cohorts contributing to the MACH14 consortium. Since the relationship between adherence and virological failure is complex and heterogeneous, we applied a machine-learning algorithm (Super Learner) to build a model for classifying failure and evaluated its performance using cross-validation. Results Application of the Super Learner algorithm to MEMS data, combined with data on CD4+ T cell counts and ART regimen, significantly improved classification of virological failure over a single MEMS adherence measure. Area under the ROC curve, evaluated on data not used in model fitting, was 0.78 (95% CI: 0.75, 0.80) and 0.79 (95% CI: 0.76, 0.81) for failure defined as single HIV RNA level >1000 copies/ml or >400 copies/ml, respectively. Our results suggest 25–31% of viral load tests could be avoided while maintaining sensitivity for failure detection at or above 95%, for a cost savings of $16–$29 per person-month. Conclusions Our findings provide initial proof-of-concept for the potential use of electronic medication adherence data to reduce costs through behavior-driven HIV RNA testing. PMID:25942462

  11. Time-frequency vibration analysis for the detection of motor damages caused by bearing currents

    NASA Astrophysics Data System (ADS)

    Prudhom, Aurelien; Antonino-Daviu, Jose; Razik, Hubert; Climente-Alarcon, Vicente

    2017-02-01

Motor failure due to bearing currents is an issue that has drawn increasing industrial interest over recent years. Bearing currents usually appear in motors operated by variable-frequency drives (VFDs); these drives can produce common-mode voltages that induce currents in the motor shaft, which are discharged through the bearings. The presence of these currents may lead to motor bearing failure only a few months after system startup. Vibration monitoring is one of the most common ways of detecting bearing damage caused by circulating currents; evaluating the amplitudes of well-known characteristic components in the vibration Fourier spectrum that are associated with race, ball, or cage defects makes it possible to assess the bearing condition and, hence, to identify eventual damage due to bearing currents. However, the inherent constraints of the Fourier transform may complicate detection of progressive bearing degradation; in some cases, other frequency components may mask or be confused with bearing defect-related components, while in other cases the analysis may be unsuitable because of the possibly non-stationary nature of the captured vibration signals. Moreover, the fact that this analysis discards the time dimension limits the amount of information the technique can provide. This work proposes the use of time-frequency (T-F) transforms to analyse vibration data in motors affected by bearing currents. Experimental results obtained on real machines show that vibration analysis via T-F tools provides significant advantages for the detection of bearing-current damage; among other benefits, these techniques make it possible to visualise the progressive degradation of the bearing while effectively discriminating it from components unrelated to the fault. Moreover, their application is valid regardless of the operating regime of the machine. Both factors confirm the robustness and reliability of these tools, which may be an interesting alternative for detecting this type of failure in induction motors.
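The standard bearing-defect kinematic formulas and the time-frequency analysis the abstract refers to can be sketched as below. The bearing geometry and the synthetic signal are invented for illustration; the paper's actual machines, signals, and T-F transform choice are not reproduced here.

```python
import numpy as np
from scipy.signal import stft

def bearing_frequencies(n, fr, d, D, phi=0.0):
    """Standard bearing characteristic defect frequencies.
    n: rolling elements, fr: shaft rotation frequency (Hz),
    d: ball diameter, D: pitch diameter, phi: contact angle (rad)."""
    ratio = (d / D) * np.cos(phi)
    return {
        "BPFO": n * fr / 2 * (1 - ratio),           # outer-race defect
        "BPFI": n * fr / 2 * (1 + ratio),           # inner-race defect
        "FTF":  fr / 2 * (1 - ratio),               # cage (train) frequency
        "BSF":  D * fr / (2 * d) * (1 - ratio**2),  # ball spin frequency
    }

# Synthetic vibration: shaft component plus a weaker outer-race tone.
fs = 5000.0
t = np.arange(0, 2.0, 1 / fs)
freqs = bearing_frequencies(n=9, fr=25.0, d=7.9, D=34.5)
x = np.sin(2 * np.pi * 25.0 * t) + 0.5 * np.sin(2 * np.pi * freqs["BPFO"] * t)

# The short-time Fourier transform keeps the time dimension that a plain
# Fourier spectrum discards, so a growing defect tone stays visible over time.
f, seg_times, Z = stft(x, fs=fs, nperseg=1024)
dominant = f[np.abs(Z).mean(axis=1).argmax()]  # strongest bin (shaft tone here)
```

In a real diagnosis, the T-F map would be inspected for energy tracking the computed BPFO/BPFI lines as the bearing degrades, rather than a single dominant bin.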

  12. Runtime Performance Monitoring Tool for RTEMS System Software

    NASA Astrophysics Data System (ADS)

    Cho, B.; Kim, S.; Park, H.; Kim, H.; Choi, J.; Chae, D.; Lee, J.

    2007-08-01

RTEMS is a commercial-grade real-time operating system that supports multi-processor computers. However, there are not many development tools for RTEMS. In this paper, we report a new RTEMS-based runtime performance monitoring tool. We have implemented a lightweight runtime monitoring task with an extension to the RTEMS APIs. Using our tool, software developers can verify various performance-related parameters during runtime. Our tool can be used during the software development phase as well as during in-orbit operation. Our implemented target agent is lightweight and has small overhead over the SpaceWire interface. Efforts to reduce overhead and to add other monitoring parameters are currently under way.

  13. Microstructure and Mechanical Performance of Friction Stir Spot-Welded Aluminum-5754 Sheets

    NASA Astrophysics Data System (ADS)

    Pathak, N.; Bandyopadhyay, K.; Sarangi, M.; Panda, Sushanta Kumar

    2013-01-01

Friction stir spot welding (FSSW) is a recent approach to joining light-weight sheet metals in the fabrication of automotive and aerospace body components. For the successful application of this solid-state welding process, it is imperative to have a thorough understanding of the weld microstructure, mechanical performance, and failure mechanism. In the present study, FSSW of aluminum-5754 sheet metal was performed using tools with circular and tapered pins, considering different tool rotational speeds, plunge depths, and dwell times. The effects of tool design and process parameters on temperature distribution near the sheet-tool interface, weld microstructure, weld strength, and failure modes were studied. It was found that the peak temperature was higher when welding with the circular-pin tool than with the tapered-pin tool, leading to a bigger dynamically recrystallized stir zone (SZ) with a hook tip bending towards the upper sheet and away from the keyhole. Hence, a higher lap-shear separation load was observed in the welds made with the circular pin than in those made with the tapered pin. Due to the influence of the size and hardness of the SZ on crack propagation, three different failure modes of the weld nugget were observed through optical cross-sectional micrographs and SEM fractographs.

  14. Spinoff 2012

    NASA Technical Reports Server (NTRS)

    2013-01-01

    Topics covered include: Water Treatment Technologies Inspire Healthy Beverages; Dietary Formulas Fortify Antioxidant Supplements; Rovers Pave the Way for Hospital Robots; Dry Electrodes Facilitate Remote Health Monitoring; Telescope Innovations Improve Speed, Accuracy of Eye Surgery; Superconductors Enable Lower Cost MRI Systems; Anti-Icing Formulas Prevent Train Delays; Shuttle Repair Tools Automate Vehicle Maintenance; Pressure-Sensitive Paints Advance Rotorcraft Design Testing; Speech Recognition Interfaces Improve Flight Safety; Polymers Advance Heat Management Materials for Vehicles; Wireless Sensors Pinpoint Rotorcraft Troubles; Ultrasonic Detectors Safely Identify Dangerous, Costly Leaks; Detectors Ensure Function, Safety of Aircraft Wiring; Emergency Systems Save Tens of Thousands of Lives; Oxygen Assessments Ensure Safer Medical Devices; Collaborative Platforms Aid Emergency Decision Making; Space-Inspired Trailers Encourage Exploration on Earth; Ultra-Thin Coatings Beautify Art; Spacesuit Materials Add Comfort to Undergarments; Gigapixel Images Connect Sports Teams with Fans; Satellite Maps Deliver More Realistic Gaming; Elemental Scanning Devices Authenticate Works of Art; Microradiometers Reveal Ocean Health, Climate Change; Sensors Enable Plants to Text Message Farmers; Efficient Cells Cut the Cost of Solar Power; Shuttle Topography Data Inform Solar Power Analysis; Photocatalytic Solutions Create Self-Cleaning Surfaces; Concentrators Enhance Solar Power Systems; Innovative Coatings Potentially Lower Facility Maintenance Costs; Simulation Packages Expand Aircraft Design Options; Web Solutions Inspire Cloud Computing Software; Behavior Prediction Tools Strengthen Nanoelectronics; Power Converters Secure Electronics in Harsh Environments; Diagnostics Tools Identify Faults Prior to Failure; Archiving Innovations Preserve Essential Historical Records; Meter Designs Reduce Operation Costs for Industry; Commercial Platforms Allow Affordable Space 
Research; Fiber Optics Deliver Real-Time Structural Monitoring; Camera Systems Rapidly Scan Large Structures; Terahertz Lasers Reveal Information for 3D Images; Thin Films Protect Electronics from Heat and Radiation; Interferometers Sharpen Measurements for Better Telescopes; and Vision Systems Illuminate Industrial Processes.

  15. Transition Flight Control Room Automation

    NASA Technical Reports Server (NTRS)

    Welborn, Curtis Ray

    1990-01-01

The Workstation Prototype Laboratory is currently working on a number of projects which we feel can have a direct impact on ground operations automation. These projects include: The Fuel Cell Monitoring System (FCMS), which will monitor and detect problems with the fuel cells on the Shuttle. FCMS will use a combination of rules (forward/backward) and multi-threaded procedures which run concurrently with the rules, to implement the malfunction algorithms of the EGIL flight controllers. The combination of rule-based reasoning and procedural reasoning allows us to more easily map the malfunction algorithms into a real-time system implementation. A graphical computation language (AGCOMPL). AGCOMPL is an experimental prototype to determine the benefits and drawbacks of using a graphical language to design computations (algorithms) to work on Shuttle or Space Station telemetry and trajectory data. The design of a system which will allow a model of an electrical system, including telemetry sensors, to be configured on the screen graphically using previously defined electrical icons. This electrical model would then be used to generate rules and procedures for detecting malfunctions in the electrical components of the model. A generic message management (GMM) system. GMM is being designed as a message management system for real-time applications which send advisory messages to a user. The primary purpose of GMM is to reduce the risk of overloading a user with information when multiple failures occur and to assist the developer in devising an explanation facility. The emphasis of our work is to develop practical tools and techniques, while determining the feasibility of a given approach, including identification of appropriate software tools to support research, application, and tool-building activities.

  16. Transition flight control room automation

    NASA Technical Reports Server (NTRS)

    Welborn, Curtis Ray

    1990-01-01

The Workstation Prototype Laboratory is currently working on a number of projects which can have a direct impact on ground operations automation. These projects include: (1) The fuel cell monitoring system (FCMS), which will monitor and detect problems with the fuel cells on the shuttle. FCMS will use a combination of rules (forward/backward) and multithreaded procedures, which run concurrently with the rules, to implement the malfunction algorithms of the EGIL flight controllers. The combination of rule-based reasoning and procedural reasoning allows us to more easily map the malfunction algorithms into a real-time system implementation. (2) A graphical computation language (AGCOMPL) is an experimental prototype to determine the benefits and drawbacks of using a graphical language to design computations (algorithms) to work on shuttle or space station telemetry and trajectory data. (3) The design of a system will allow a model of an electrical system, including telemetry sensors, to be configured on the screen graphically using previously defined electrical icons. This electrical model would then be used to generate rules and procedures for detecting malfunctions in the electrical components of the model. (4) A generic message management (GMM) system is being designed for real-time applications as a message management system which sends advisory messages to a user. The primary purpose of GMM is to reduce the risk of overloading a user with information when multiple failures occur and to assist the developer in devising an explanation facility. The emphasis of our work is to develop practical tools and techniques, including identification of appropriate software tools to support research, application, and tool-building activities, while determining the feasibility of a given approach.

  17. Real-time failure control (SAFD)

    NASA Technical Reports Server (NTRS)

    Panossian, Hagop V.; Kemp, Victoria R.; Eckerling, Sherry J.

    1990-01-01

The Real Time Failure Control program involves development of a failure detection algorithm, referred to as the System for Failure and Anomaly Detection (SAFD), for the Space Shuttle Main Engine (SSME). This failure detection approach is signal-based and entails monitoring SSME measurement signals against predetermined and computed mean values and standard deviations. Twenty-four engine measurements are included in the algorithm, and provisions are made to add more parameters if needed. Six major sections of research are presented: (1) SAFD algorithm development; (2) SAFD simulations; (3) Digital Transient Model failure simulation; (4) closed-loop simulation; (5) current SAFD limitations; and (6) planned enhancements.

  18. Patient engagement with a mobile web-based telemonitoring system for heart failure self-management: a pilot study.

    PubMed

    Zan, Shiyi; Agboola, Stephen; Moore, Stephanie A; Parks, Kimberly A; Kvedar, Joseph C; Jethwani, Kamal

    2015-04-01

    Intensive remote monitoring programs for congestive heart failure have been successful in reducing costly readmissions, but may not be appropriate for all patients. There is an opportunity to leverage the increasing accessibility of mobile technologies and consumer-facing digital devices to empower patients in monitoring their own health outside of the hospital setting. The iGetBetter system, a secure Web- and telephone-based heart failure remote monitoring program, which leverages mobile technology and portable digital devices, offers a creative solution at lower cost. The objective of this pilot study was to evaluate the feasibility of using the iGetBetter system for disease self-management in patients with heart failure. This was a single-arm prospective study in which 21 ambulatory, adult heart failure patients used the intervention for heart failure self-management over a 90-day study period. Patients were instructed to take their weight, blood pressure, and heart rate measurements each morning using a WS-30 bluetooth weight scale, a self-inflating blood pressure cuff (Withings LLC, Issy les Moulineaux, France), and an iPad Mini tablet computer (Apple Inc, Cupertino, CA, USA) equipped with cellular Internet connectivity to view their measurements on the Internet. Outcomes assessed included usability and satisfaction, engagement with the intervention, hospital resource utilization, and heart failure-related quality of life. Descriptive statistics were used to summarize data, and matched controls identified from the electronic medical record were used as comparison for evaluating hospitalizations. There were 20 participants (mean age 53 years) that completed the study. Almost all participants (19/20, 95%) reported feeling more connected to their health care team and more confident in performing care plan activities, and 18/20 (90%) felt better prepared to start discussions about their health with their doctor. 
Although heart failure-related quality of life improved from baseline, the change was not statistically significant (P=.55). Over half of the participants had greater than 80% (72/90 days) weekly and overall engagement with the program, and 15% (3/20) used the interactive voice response telephone system exclusively for managing their care plan. Hospital utilization did not differ between the intervention and control groups (planned hospitalizations P=.23, unplanned hospitalizations P=.99). Intervention participants recorded a shorter average length of hospital stay, but no significant difference was observed between the intervention and control groups (P=.30). This pilot study demonstrated the feasibility of a low-intensity remote monitoring program leveraging commonly used mobile and portable consumer devices to augment care for a fairly young population of ambulatory patients with heart failure. Further prospective studies with a larger sample size and more diverse patient populations are necessary to determine the effect of mobile-based remote monitoring programs such as the iGetBetter system on clinical outcomes in heart failure.

  19. Dynamic Analyses of Result Quality in Energy-Aware Approximate Programs

    NASA Astrophysics Data System (ADS)

    RIngenburg, Michael F.

    Energy efficiency is a key concern in the design of modern computer systems. One promising approach to energy-efficient computation, approximate computing, trades off output precision for energy efficiency. However, this tradeoff can have unexpected effects on computation quality. This thesis presents dynamic analysis tools to study, debug, and monitor the quality and energy efficiency of approximate computations. We propose three styles of tools: prototyping tools that allow developers to experiment with approximation in their applications, online tools that instrument code to determine the key sources of error, and online tools that monitor the quality of deployed applications in real time. Our prototyping tool is based on an extension to the functional language OCaml. We add approximation constructs to the language, an approximation simulator to the runtime, and profiling and auto-tuning tools for studying and experimenting with energy-quality tradeoffs. We also present two online debugging tools and three online monitoring tools. The first online tool identifies correlations between output quality and the total number of executions of, and errors in, individual approximate operations. The second tracks the number of approximate operations that flow into a particular value. Our online tools comprise three low-cost approaches to dynamic quality monitoring. They are designed to monitor quality in deployed applications without spending more energy than is saved by approximation. Online monitors can be used to perform real time adjustments to energy usage in order to meet specific quality goals. We present prototype implementations of all of these tools and describe their usage with several applications. 
Our prototyping, profiling, and autotuning tools allow us to experiment with approximation strategies and identify new strategies, our online tools succeed in providing new insights into the effects of approximation on output quality, and our monitors succeed in controlling output quality while still maintaining significant energy efficiency gains.

  20. Monitoring conservation success in a large oak woodland landscape

    Treesearch

    Rich Reiner; Emma Underwood; John-O Niles

    2002-01-01

    Monitoring is essential in understanding the success or failure of a conservation project and provides the information needed to conduct adaptive management. Although there is a large body of literature on monitoring design, it fails to provide sufficient information to practitioners on how to organize and apply monitoring when implementing landscape-scale conservation...

  1. Unified Geophysical Cloud Platform (UGCP) for Seismic Monitoring and other Geophysical Applications.

    NASA Astrophysics Data System (ADS)

    Synytsky, R.; Starovoit, Y. O.; Henadiy, S.; Lobzakov, V.; Kolesnikov, L.

    2016-12-01

We present the Unified Geophysical Cloud Platform (UGCP), or UniGeoCloud, an innovative approach to geophysical data processing in the cloud, with the ability to run any type of data-processing software in an isolated environment within a single cloud platform. We have developed a simple and quick installation method for several widely known open-source seismic software packages (SeisComp3, Earthworm, Geotool, MSNoise) that requires no knowledge of system administration, configuration, or OS compatibility issues, avoiding the often tedious details that waste time on system configuration work. The installation process is reduced to a mouse click on the selected software package in the cloud marketplace. The main objective of the developed capability was a software-tool concept with which users can quickly design and install their own highly reliable, highly available virtual IT infrastructure for organizing seismic (and, in future, other geophysical) data processing for either research or monitoring purposes. These tools provide access to any seismic station data available in open IP configuration from the different networks affiliated with different institutions and organizations. They also allow users to set up their own network by selecting either regionally deployed stations or the worldwide global network from the global map. The processing software, products, and research results can easily be monitored from anywhere using a variety of user devices, from desktop computers to mobile gadgets. Current efforts of the development team are directed at achieving Scalability, Reliability and Sustainability (SRS) of the proposed solutions, allowing any user to run their applications with confidence of no data loss and no failure of the monitoring or research software components. The system is suitable for quick rollout of the NDC-in-Box software package developed for State Signatories and aimed at promoting the processing of data collected by the IMS network.

  2. Failure mode and effects analysis drastically reduced potential risks in clinical trial conduct.

    PubMed

    Lee, Howard; Lee, Heechan; Baik, Jungmi; Kim, Hyunjung; Kim, Rachel

    2017-01-01

Failure mode and effects analysis (FMEA) is a risk management tool used to proactively identify and assess the causes and effects of potential failures in a system, thereby preventing them from happening. The objective of this study was to evaluate the effectiveness of FMEA applied to an academic clinical trial center in a tertiary care setting. A multidisciplinary FMEA focus group at the Seoul National University Hospital Clinical Trials Center selected 6 core clinical trial processes, for which potential failure modes were identified and their risk priority numbers (RPNs) assessed. Remedial action plans for high-risk failure modes (RPN >160) were devised, and a follow-up RPN scoring was conducted a year later. A total of 114 failure modes were identified, with RPN scores ranging from 3 to 378, driven mainly by the severity score. Fourteen failure modes were high risk, 11 of which were addressed by remedial actions. Rescoring showed a dramatic improvement, attributed to reductions in the occurrence and detection scores of >3 and >2 points, respectively. FMEA is a powerful tool for improving quality in clinical trials. The Seoul National University Hospital Clinical Trials Center is expanding its FMEA capability to other core clinical trial processes.
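The RPN arithmetic behind this study can be sketched directly: an RPN is the product of severity, occurrence, and detection scores (conventionally 1-10 each), and the study flagged RPN > 160 as high risk. The failure modes and scores below are invented examples, not the study's actual items.

```python
# RPN = severity x occurrence x detection; the study's high-risk cutoff
# was RPN > 160. Scores and failure modes here are illustrative only.
def rpn(severity, occurrence, detection):
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores must be in 1..10")
    return severity * occurrence * detection

failure_modes = {
    "missed informed consent": (9, 5, 4),   # hypothetical (S, O, D)
    "wrong drug dispensed":    (10, 2, 3),
    "late data entry":         (3, 6, 2),
}

HIGH_RISK = 160
high_risk = {name: rpn(*scores)
             for name, scores in failure_modes.items()
             if rpn(*scores) > HIGH_RISK}
```

Note how a high-severity mode ("wrong drug dispensed", RPN 60) can still fall below the cutoff when occurrence and detection scores are low, which is why the study observed RPNs driven mainly by severity yet reduced them through the occurrence and detection terms.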

  3. Mechanical properties of sugar beet root during storage

    NASA Astrophysics Data System (ADS)

    Nedomová, Šárka; Kumbár, Vojtěch; Pytel, Roman; Buchar, Jaroslav

    2017-10-01

This paper investigates, via two experimental methods, the textural properties of sugar beet roots during the storage period. Sugar beet root mechanical properties were evaluated during the post-harvest period - 1, 8, 22, 43, and 71 days after harvest. Both experimental methods, i.e. the compression test and the puncture test, suggest that the failure strength of the sugar beet root increases with storage time. The parameters obtained using the puncture test are more sensitive to storage duration than those obtained by the compression test. We also found that these mechanical properties serve as a reliable tool for monitoring the progress of sugar beet root storage. The described methods could also be used to extract important information on sugar beet evolution during storage.

  4. Stimulating Creativity and Innovation through Intelligent Fast Failure

    ERIC Educational Resources Information Center

    Tahirsylaj, Armend S.

    2012-01-01

    Literature on creativity and innovation has discussed the issue of failure in the light of its benefits and limitations for enhancing human potential in all domains of life, but in business, science, engineering, and industry more specifically. In this paper, the Intelligent Fast Failure (IFF) as a useful tool of creativity and innovation for…

  5. Implant experience with an implantable hemodynamic monitor for the management of symptomatic heart failure.

    PubMed

    Steinhaus, David; Reynolds, Dwight W; Gadler, Fredrik; Kay, G Neal; Hess, Mike F; Bennett, Tom

    2005-08-01

Management of congestive heart failure is a serious public health problem. The use of implantable hemodynamic monitors (IHMs) may assist in this management by providing continuous ambulatory filling pressure status for optimal volume management. The Chronicle system includes an implanted monitor, a pressure sensor lead with passive fixation, an external pressure reference (EPR), and data retrieval and viewing components. The tip of the lead is placed near the right ventricular outflow tract to minimize the risk of sensor tissue encapsulation. Implant technique and lead placement are similar to those of a permanent pacemaker. After the system had been successfully implanted in 148 patients, the type and frequency of implant-related adverse events were similar to those of a single-chamber pacemaker implant. R-wave amplitude was 15.2 +/- 6.7 mV, and the pressure waveform signal was acceptable in all but two patients, in whom the presence of artifacts required lead repositioning. Implant procedure time was not influenced by experience, remaining constant throughout the study. Based on this evaluation, permanent placement of an IHM in symptomatic heart failure patients is technically feasible. Further investigation is warranted to evaluate the use of continuous hemodynamic data in the management of heart failure patients.

  6. Leg edema quantification for heart failure patients via 3D imaging.

    PubMed

    Hayn, Dieter; Fruhwald, Friedrich; Riedel, Arthur; Falgenhauer, Markus; Schreier, Günter

    2013-08-14

    Heart failure is a common cardiac disease in elderly patients. After discharge, approximately 50% of all patients are readmitted to a hospital within six months. Recent studies show that home monitoring of heart failure patients can reduce the number of readmissions. Still, a large number of false positive alarms as well as underdiagnoses in other cases require more accurate alarm generation algorithms. New low-cost sensors for leg edema detection could be the missing link to help home monitoring to its breakthrough. We evaluated a 3D camera-based measurement setup in order to geometrically detect and quantify leg edemas. 3D images of legs were taken and geometric parameters were extracted semi-automatically from the images. Intra-subject variability for five healthy subjects was evaluated. Thereafter, correlation of 3D parameters with body weight and leg circumference was assessed during a clinical study at the Medical University of Graz. Strong correlation was found in between both reference values and instep height, while correlation in between curvature of the lower leg and references was very low. We conclude that 3D imaging might be a useful and cost-effective extension of home monitoring for heart failure patients, though further (prospective) studies are needed.

  7. A multicenter randomized controlled evaluation of automated home monitoring and telephonic disease management in patients recently hospitalized for congestive heart failure: the SPAN-CHF II trial.

    PubMed

    Weintraub, Andrew; Gregory, Douglas; Patel, Ayan R; Levine, Daniel; Venesy, David; Perry, Kathleen; Delano, Christine; Konstam, Marvin A

    2010-04-01

We performed a prospective, randomized investigation assessing the incremental effect of automated health monitoring (AHM) technology over and above that of a previously described nurse-directed heart failure (HF) disease management program. The AHM system measured and transmitted body weight, blood pressure, and heart rate data as well as subjective patient self-assessments via a standard telephone line to a central server. A total of 188 consented and eligible patients were randomized between intervention and control groups in a 1:1 ratio. Subjects randomized to the control arm received the Specialized Primary and Networked Care in Heart Failure (SPAN-CHF) heart failure disease management program. Subjects randomized to the intervention arm received the SPAN-CHF disease management program in conjunction with the AHM system. The primary end point was prespecified as the relative event rate of HF hospitalization between intervention and control groups at 90 days. The relative event rate of HF hospitalization for the intervention group compared with controls was 0.50 (95%CI [0.25-0.99], P = .05). Short-term reductions in the heart failure hospitalization rate were associated with the use of automated home monitoring equipment. Long-term benefits in this model remain to be studied.

  8. Reducing unscheduled plant maintenance delays -- Field test of a new method to predict electric motor failure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Homce, G.T.; Thalimer, J.R.

    1996-05-01

    Most electric motor predictive maintenance methods have drawbacks that limit their effectiveness in the mining environment. The US Bureau of Mines (USBM) is developing an alternative approach to detect winding insulation breakdown in advance of complete motor failure. In order to evaluate the analysis algorithms necessary for this approach, the USBM has designed and installed a system to monitor 120 electric motors in a coal preparation plant. The computer-based experimental system continuously gathers, stores, and analyzes electrical parameters for each motor. The results are then correlated to data from conventional motor-maintenance methods and in-service failures to determine if the analysis algorithms can detect signs of insulation deterioration and impending failure. This paper explains the on-line testing approach used in this research, and describes monitoring system design and implementation. At this writing data analysis is underway, but conclusive results are not yet available.

  9. Remote monitoring reduces healthcare use and improves quality of care in heart failure patients with implantable defibrillators: the evolution of management strategies of heart failure patients with implantable defibrillators (EVOLVO) study.

    PubMed

    Landolina, Maurizio; Perego, Giovanni B; Lunati, Maurizio; Curnis, Antonio; Guenzati, Giuseppe; Vicentini, Alessandro; Parati, Gianfranco; Borghi, Gabriella; Zanaboni, Paolo; Valsecchi, Sergio; Marzegalli, Maurizio

    2012-06-19

    Heart failure patients with implantable cardioverter-defibrillators (ICDs) or an ICD for resynchronization therapy often visit the hospital for unscheduled examinations, placing a great burden on healthcare providers. We hypothesized that Internet-based remote interrogation systems could reduce emergency healthcare visits. This multicenter randomized trial involving 200 patients compared remote monitoring with standard patient management consisting of scheduled visits and patient response to audible ICD alerts. The primary end point was the rate of emergency department or urgent in-office visits for heart failure, arrhythmias, or ICD-related events. Over 16 months, such visits were 35% less frequent in the remote arm (75 versus 117; incidence density, 0.59 versus 0.93 events per year; P=0.005). A 21% difference was observed in the rates of total healthcare visits for heart failure, arrhythmias, or ICD-related events (4.40 versus 5.74 events per year; P<0.001). The time from an ICD alert condition to review of the data was reduced from 24.8 days in the standard arm to 1.4 days in the remote arm (P<0.001). The patients' clinical status, as measured by the Clinical Composite Score, was similar in the 2 groups, whereas a more favorable change in quality of life (Minnesota Living With Heart Failure Questionnaire) was observed from the baseline to the 16th month in the remote arm (P=0.026). Remote monitoring reduces emergency department/urgent in-office visits and, in general, total healthcare use in patients with ICD or defibrillators for resynchronization therapy. Compared with standard follow-up through in-office visits and audible ICD alerts, remote monitoring results in increased efficiency for healthcare providers and improved quality of care for patients. URL: http://www.clinicaltrials.gov. Unique identifier: NCT00873899.

  10. Detecting Damage in Composite Material Using Nonlinear Elastic Wave Spectroscopy Methods

    NASA Astrophysics Data System (ADS)

    Meo, Michele; Polimeno, Umberto; Zumpano, Giuseppe

    2008-05-01

    Modern aerospace structures make increasing use of fibre reinforced plastic composites, due to their high specific mechanical properties. However, due to their brittleness, low-velocity impact can cause delaminations beneath the surface, while the surface may appear undamaged upon visual inspection. Such damage is called barely visible impact damage (BVID). Such internal damage leads to significant reductions in local strength and ultimately could lead to catastrophic failures. It is therefore important to detect and monitor damage in highly loaded composite components to receive an early warning for well-timed maintenance of the aircraft. Non-linear ultrasonic spectroscopy methods are promising damage detection and material characterization tools. In this paper, two different non-linear elastic wave spectroscopy (NEWS) methods are presented: single-mode nonlinear resonance ultrasound (NRUS) and the nonlinear wave modulation technique (NWMS). The NEWS methods were applied to detect delamination damage due to low-velocity impact (<12 J) on various composite plates. The results showed that the proposed methodology appears to be highly sensitive to the presence of damage, with very promising future NDT and structural health monitoring applications.

  11. Recruitment and retention monitoring: facilitating the mission of the National Institute of Neurological Disorders and Stroke (NINDS)

    PubMed Central

    Roberts, J; Waddy, S; Kaufmann, P

    2012-01-01

    It is commonly accepted that inefficient recruitment and inadequate retention continue to threaten the completion of clinical trials intended to reduce the public health burden of neurological disease. This article will discuss the scientific, economic, and ethical implications of failure to recruit and retain adequate samples in clinical trials, including the consequences of failing to recruit adequately diverse samples. We will also discuss the more common challenges and barriers to efficient and effective recruitment and retention, and the impact these have on successful clinical trial planning. We will explain the newly established efforts within National Institute of Neurological Disorders and Stroke (NINDS) to monitor recruitment and retention with well-defined metrics and implementation of grant awards that include feasibility milestones for continued funding. Finally, we will describe our efforts to address some of the common challenges to recruitment and retention through assistance to investigators and coordinators with evidence-based support, tools, and resources for planning and strategizing recruitment and retention as well as a trans-NIH effort to improve awareness of clinical research in the general public. PMID:23230460

  12. Diagnosis and Prognosis of Weapon Systems

    NASA Technical Reports Server (NTRS)

    Nolan, Mary; Catania, Rebecca; deMare, Gregory

    2005-01-01

    The Prognostics Framework is a set of software tools with an open architecture that affords a capability to integrate various prognostic software mechanisms and to provide information for operational and battlefield decision-making and logistical planning pertaining to weapon systems. The Prognostics Framework is also a system-level health-management software system that (1) receives data from performance-monitoring and built-in-test sensors and from other prognostic software and (2) processes the received data to derive a diagnosis and a prognosis for a weapon system. This software relates the diagnostic and prognostic information to the overall health of the system, to the ability of the system to perform specific missions, and to needed maintenance actions and maintenance resources. In the development of the Prognostics Framework, effort was focused primarily on extending previously developed model-based diagnostic-reasoning software to add prognostic reasoning capabilities, including capabilities to perform statistical analyses and to utilize information pertaining to deterioration of parts, failure modes, time sensitivity of measured values, mission criticality, historical data, and trends in measurement data. As thus extended, the software offers an overall health-monitoring capability.

  13. Arrhythmias and hemodialysis: role of potassium and new diagnostic tools.

    PubMed

    Buemi, Michele; Coppolino, Giuseppe; Bolignano, Davide; Sturiale, Alessio; Campo, Susanna; Buemi, Antoine; Crascì, Eleonora; Romeo, Adolfo

    2009-01-01

    Cardiovascular diseases represent the main causes of death in patients affected by renal failure, and arrhythmias are frequently observed in patients undergoing hemodialysis. Dialytic treatment per se can be considered an arrhythmogenic stimulus; moreover, uraemic patients are characterized by a "pro-arrhythmic substrate" because of the high prevalence of ischaemic heart disease, left ventricular hypertrophy and autonomic neuropathy. One of the most important pathogenetic elements involved in the onset of intra-dialytic arrhythmias is the alteration in electrolyte concentrations, particularly calcium and potassium. It may therefore be very useful to monitor the patient's cardiac activity during the whole hemodialytic session. Nevertheless, extended intradialytic electrocardiographic monitoring is not simple to apply because of several technical and structural impairments. We tried to overcome these difficulties using Whealthy, a wearable system consisting of a t-shirt composed of conductive and piezoresistive materials, integrated to form fibers and threads connected to tissue sensors, electrodes, and connectors. ECG and pneumographic impedance signals are acquired by the electrodes in the tissue, and the data are registered by a small computer and transmitted via GPRS or Bluetooth.

  14. Blood-based analyses of cancer: Circulating myeloid-derived suppressor cells - is a new era coming?

    PubMed

    Okla, Karolina; Wertel, Iwona; Wawruszak, Anna; Bobiński, Marcin; Kotarski, Jan

    2018-06-21

    Progress in cancer treatment made by the beginning of the 21st century has shifted the paradigm from one-size-fits-all to tailor-made treatment. The popular vision, to study solid tumors through the relatively noninvasive sampling of blood, is one of the most thrilling and rapidly advancing fields in global cancer diagnostics. From this perspective, immune-cell analysis in cancer could play a pivotal role in oncology practice. This approach is driven both by rapid technological developments, including the analysis of circulating myeloid-derived suppressor cells (cMDSCs), and by the increasing application of (immune) therapies, the success or failure of which may depend on effective and timely measurements of relevant biomarkers. Although the implementation of these powerful noninvasive diagnostic capabilities in guiding precision cancer treatment is poised to change the ways in which we select and monitor cancer therapy, challenges remain. Here, we discuss the challenges associated with the analysis and clinical aspects of cMDSCs and assess whether the problems in implementing tumor-evolution monitoring as a global tool in personalized oncology can be overcome.

  15. Study of Disseminating Landslide Early Warning Information in Malaysia

    NASA Astrophysics Data System (ADS)

    Koay, Swee Peng; Lateh, Habibah; Tien Tay, Lea; Ahamd, Jamilah; Chan, Huah Yong; Sakai, Naoki; Jamaludin, Suhaimi

    2015-04-01

    In Malaysia, rain-induced landslides occur more often than before. The Malaysian Government allocates millions of Malaysian Ringgit for slope monitoring and slope failure remedial measures in the budget every year. In rural areas, local authorities also play a major role in monitoring slopes to prevent casualties by giving information to residents who stay near the slopes. However, there are thousands of slopes classified as high-risk in Malaysia. Implementing site monitoring systems on all of these slopes to track soil movement, predict the occurrence of slope failure, and establish early warning systems would be too costly and almost impossible. In our study, we propose an Accumulated Rainfall vs. Rainfall Intensity prediction method to predict slope failure by referring to predicted rainfall data from radar and rain volume from rain gauges. The critical line, which determines whether the slope is in danger, is generated by a simulator using well-surveyed soil properties of the slope and compared with historical data. By establishing such a prediction system, slope failure warnings can be obtained and disseminated to the surroundings via SMS, the internet and sirens. However, establishing the early warning dissemination system is not enough for disaster prevention: educating school children and the community about landslides, such as what a landslide is, how and why slope failure happens and when a slope may fail, raises risk awareness and will reduce landslide casualties, especially in rural areas. Moreover, showing videos on the risks and symptoms of landslides in schools will also help school children gain knowledge of landslides. Generating hazard maps and landslide historical data provides further information on the occurrence of slope failure.
    In future, further study on fine-tuning the landslide prediction method and applying IT technology to educate school children and disseminate warning information will assist government authorities in reducing landslide casualties through prompt slope failure warnings and improved community awareness of disaster prevention.
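    The Accumulated Rainfall vs. Rainfall Intensity method described above amounts to checking a working point against a critical line. A minimal sketch; the linear form and its coefficients are illustrative assumptions, since real critical lines are calibrated per slope from simulation and historical failures:

```python
def exceeds_critical_line(accumulated_mm, intensity_mm_per_h,
                          slope=-0.4, intercept=80.0):
    """Return True when the (accumulated rainfall, rainfall intensity)
    working point lies on or above a linear critical line
    intensity = slope * accumulated + intercept.  The coefficients here
    are hypothetical; in practice they are fitted per slope."""
    critical_intensity = slope * accumulated_mm + intercept
    return intensity_mm_per_h >= max(critical_intensity, 0.0)

# Hypothetical readings: 150 mm accumulated at 30 mm/h crosses the line,
# while a light shower early in an event does not.
alarm = exceeds_critical_line(150.0, 30.0)
safe = exceeds_critical_line(10.0, 20.0)
```

    In an operational system, `alarm` turning True would trigger the SMS/internet/siren dissemination chain the abstract describes.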

  16. Prediction of morbidity and mortality in patients with type 2 diabetes.

    PubMed

    Wells, Brian J; Roth, Rachel; Nowacki, Amy S; Arrigain, Susana; Yu, Changhong; Rosenkrans, Wayne A; Kattan, Michael W

    2013-01-01

    Introduction. The objective of this study was to create a tool that accurately predicts the risk of morbidity and mortality in patients with type 2 diabetes according to the oral hypoglycemic agent prescribed. Materials and Methods. The model was based on a cohort of 33,067 patients with type 2 diabetes who were prescribed a single oral hypoglycemic agent at the Cleveland Clinic between 1998 and 2006. Competing risk regression models were created for coronary heart disease (CHD), heart failure, and stroke, while a Cox regression model was created for mortality. Propensity scores were used to account for possible treatment bias. A prediction tool was created and internally validated using tenfold cross-validation. The results were compared to a Framingham model and a model based on the United Kingdom Prospective Diabetes Study (UKPDS) for CHD and stroke, respectively. Results and Discussion. Median follow-up for the mortality outcome was 769 days. The numbers of patients experiencing events were as follows: CHD (3062), heart failure (1408), stroke (1451), and mortality (3661). The prediction tools demonstrated the following concordance indices (c-statistics) for the specific outcomes: CHD (0.730), heart failure (0.753), stroke (0.688), and mortality (0.719). The prediction tool was superior to the Framingham model at predicting CHD and was at least as accurate as the UKPDS model at predicting stroke. Conclusions. We created an accurate tool for predicting the risk of stroke, coronary heart disease, heart failure, and death in patients with type 2 diabetes. The calculator is available online at http://rcalc.ccf.org under the heading "Type 2 Diabetes" and entitled, "Predicting 5-Year Morbidity and Mortality." This may be a valuable tool to aid the clinician's choice of an oral hypoglycemic, to better inform patients, and to motivate dialogue between physician and patient.
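    The concordance indices (c-statistics) this record reports can be illustrated with a simplified Harrell's c computation. This sketch ignores censoring subtleties that full survival packages handle, and the small cohort values are hypothetical:

```python
def concordance_index(times, events, risk_scores):
    """Simplified Harrell's c-statistic: among usable pairs where the
    subject with the shorter follow-up had the event, count how often
    the model assigned that subject the higher risk score (ties 0.5)."""
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is usable if subject i had the event before time j
            if events[i] and times[i] < times[j]:
                usable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / usable

# Hypothetical cohort: earlier events mostly received higher risk scores
times = [2, 5, 7, 9]           # follow-up (years)
events = [1, 1, 0, 0]          # 1 = event observed, 0 = censored
scores = [0.9, 0.6, 0.7, 0.2]  # model-predicted risk
c = concordance_index(times, events, scores)  # 4 of 5 usable pairs -> 0.8
```

    A c of 0.5 is chance-level discrimination; the reported values of 0.69-0.75 indicate moderately good discrimination.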

  17. Remote Monitoring of Patients With Heart Failure: An Overview of Systematic Reviews

    PubMed Central

    Karunanithi, Mohanraj; Fatehi, Farhad; Ding, Hang; Walters, Darren

    2017-01-01

    Background Many systematic reviews exist on the use of remote patient monitoring (RPM) interventions to improve clinical outcomes and psychological well-being of patients with heart failure. However, research is broadly distributed from simple telephone-based to complex technology-based interventions. The scope and focus of such evidence also vary widely, creating challenges for clinicians who seek information on the effect of RPM interventions. Objective The aim of this study was to investigate the effects of RPM interventions on the health outcomes of patients with heart failure by synthesizing review-level evidence. Methods We searched PubMed, EMBASE, CINAHL (Cumulative Index to Nursing and Allied Health Literature), and the Cochrane Library from 2005 to 2015. We screened reviews based on relevance to RPM interventions using criteria developed for this overview. Independent authors screened, selected, and extracted information from systematic reviews. AMSTAR (Assessment of Multiple Systematic Reviews) was used to assess the methodological quality of individual reviews. We used standardized language to summarize results across reviews and to provide final statements about intervention effectiveness. Results A total of 19 systematic reviews met our inclusion criteria. Reviews consisted of RPM with diverse interventions such as telemonitoring, home telehealth, mobile phone–based monitoring, and videoconferencing. All-cause mortality and heart failure mortality were the most frequently reported outcomes, but others such as quality of life, rehospitalization, emergency department visits, and length of stay were also reported. Self-care and knowledge were less commonly identified. Conclusions Telemonitoring and home telehealth appear generally effective in reducing heart failure rehospitalization and mortality. Other interventions, including the use of mobile phone–based monitoring and videoconferencing, require further investigation. PMID:28108430

  18. Remote Monitoring of Patients With Heart Failure: An Overview of Systematic Reviews.

    PubMed

    Bashi, Nazli; Karunanithi, Mohanraj; Fatehi, Farhad; Ding, Hang; Walters, Darren

    2017-01-20

    Many systematic reviews exist on the use of remote patient monitoring (RPM) interventions to improve clinical outcomes and psychological well-being of patients with heart failure. However, research is broadly distributed from simple telephone-based to complex technology-based interventions. The scope and focus of such evidence also vary widely, creating challenges for clinicians who seek information on the effect of RPM interventions. The aim of this study was to investigate the effects of RPM interventions on the health outcomes of patients with heart failure by synthesizing review-level evidence. We searched PubMed, EMBASE, CINAHL (Cumulative Index to Nursing and Allied Health Literature), and the Cochrane Library from 2005 to 2015. We screened reviews based on relevance to RPM interventions using criteria developed for this overview. Independent authors screened, selected, and extracted information from systematic reviews. AMSTAR (Assessment of Multiple Systematic Reviews) was used to assess the methodological quality of individual reviews. We used standardized language to summarize results across reviews and to provide final statements about intervention effectiveness. A total of 19 systematic reviews met our inclusion criteria. Reviews consisted of RPM with diverse interventions such as telemonitoring, home telehealth, mobile phone-based monitoring, and videoconferencing. All-cause mortality and heart failure mortality were the most frequently reported outcomes, but others such as quality of life, rehospitalization, emergency department visits, and length of stay were also reported. Self-care and knowledge were less commonly identified. Telemonitoring and home telehealth appear generally effective in reducing heart failure rehospitalization and mortality. Other interventions, including the use of mobile phone-based monitoring and videoconferencing, require further investigation. ©Nazli Bashi, Mohanraj Karunanithi, Farhad Fatehi, Hang Ding, Darren Walters. 
Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 20.01.2017.

  19. Syndromic surveillance for health information system failures: a feasibility study

    PubMed Central

    Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico

    2013-01-01

    Objective To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. Methods A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. Results In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65–0.85. Conclusions Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures. PMID:23184193
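    The statistical process control step described above can be sketched as a simple Shewhart-style check on a monitored index; the baseline counts and the 3-sigma limit below are illustrative stand-ins for the study's time-series models:

```python
def spc_alarm(history, new_value, k=3.0):
    """Shewhart-style control check: flag new_value when it falls more
    than k sample standard deviations from the mean of recent history.
    An illustrative stand-in for the paper's time-series control charts."""
    n = len(history)
    mean = sum(history) / n
    var = sum((x - mean) ** 2 for x in history) / (n - 1)
    sd = var ** 0.5
    return abs(new_value - mean) > k * sd

# Hypothetical daily counts of laboratory records created: a sharp drop
# (simulated record-level data loss) is flagged, a normal day is not.
baseline = [1010, 998, 1004, 995, 1001, 1008, 990, 1002]
loss_day_alarm = spc_alarm(baseline, 720)
normal_day_alarm = spc_alarm(baseline, 1000)
```

    The same check applied to mean serum potassium or duplicate-test counts would cover the other failure syndromes the study monitors.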

  20. Estimation and Control for Autonomous Coring from a Rover Manipulator

    NASA Technical Reports Server (NTRS)

    Hudson, Nicolas; Backes, Paul; DiCicco, Matt; Bajracharya, Max

    2010-01-01

    A system consisting of a set of estimators and autonomous behaviors has been developed which allows robust coring from a low-mass rover platform, while accommodating for moderate rover slip. A redundant set of sensors, including a force-torque sensor, visual odometry, and accelerometers are used to monitor discrete critical and operational modes, as well as to estimate continuous drill parameters during the coring process. A set of critical failure modes pertinent to shallow coring from a mobile platform is defined, and autonomous behaviors associated with each critical mode are used to maintain nominal coring conditions. Autonomous shallow coring is demonstrated from a low-mass rover using a rotary-percussive coring tool mounted on a 5 degree-of-freedom (DOF) arm. A new architecture of using an arm-stabilized, rotary percussive tool with the robotic arm used to provide the drill z-axis linear feed is validated. Particular attention to hole start using this architecture is addressed. An end-to-end coring sequence is demonstrated, where the rover autonomously detects and then recovers from a series of slip events that exceeded 9 cm total displacement.

  1. Orthos, an alarm system for the ALICE DAQ operations

    NASA Astrophysics Data System (ADS)

    Chapeland, Sylvain; Carena, Franco; Carena, Wisla; Chibante Barroso, Vasco; Costa, Filippo; Denes, Ervin; Divia, Roberto; Fuchs, Ulrich; Grigore, Alexandru; Simonetti, Giuseppe; Soos, Csaba; Telesca, Adriana; Vande Vyvre, Pierre; von Haller, Barthelemy

    2012-12-01

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The DAQ (Data Acquisition System) facilities handle the data flow from the detector electronics up to the mass storage. The DAQ system is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches), and controls hundreds of distributed hardware and software components interacting together. This paper presents Orthos, the alarm system used to detect, log, report, and follow up abnormal situations on the DAQ machines at the experimental area. The main objective of this package is to integrate alarm detection and notification mechanisms with a full-featured issue tracker, in order to prioritize, assign, and fix system failures optimally. This tool relies on a database repository with a logic engine, SQL interfaces to inject or query metrics, and dynamic web pages for user interaction. We describe the system architecture, the technologies used for the implementation, and the integration with existing monitoring tools.

  2. SRTR center-specific reporting tools: Posttransplant outcomes.

    PubMed

    Dickinson, D M; Shearon, T H; O'Keefe, J; Wong, H-H; Berg, C L; Rosendale, J D; Delmonico, F L; Webb, R L; Wolfe, R A

    2006-01-01

    Measuring and monitoring performance--be it waiting list and posttransplant outcomes by a transplant center, or organ donation success by an organ procurement organization and its partnering hospitals--is an important component of ensuring good care for people with end-stage organ failure. Many parties have an interest in examining these outcomes, from patients and their families to payers such as insurance companies or the Centers for Medicare and Medicaid Services; from primary caregivers providing patient counseling to government agencies charged with protecting patients. The Scientific Registry of Transplant Recipients produces regular, public reports on the performance of transplant centers and organ procurement organizations. This article explains the statistical tools used to prepare these reports, with a focus on graft survival and patient survival rates of transplant centers--especially the methods used to fairly and usefully compare outcomes of centers that serve different populations. The article concludes with a practical application of these statistics--their use in screening transplant center performance to identify centers that may need remedial action by the OPTN/UNOS Membership and Professional Standards Committee.

  3. Physical and chemical analysis of lithium-ion battery cell-to-cell failure events inside custom fire chamber

    NASA Astrophysics Data System (ADS)

    Spinner, Neil S.; Field, Christopher R.; Hammond, Mark H.; Williams, Bradley A.; Myers, Kristina M.; Lubrano, Adam L.; Rose-Pehrsson, Susan L.; Tuttle, Steven G.

    2015-04-01

    A 5-cubic meter decompression chamber was re-purposed as a fire test chamber to conduct failure and abuse experiments on lithium-ion batteries. Various modifications were performed to enable remote control and monitoring of chamber functions, along with collection of data from instrumentation during tests, including high-speed and infrared cameras, a Fourier transform infrared spectrometer, real-time gas analyzers, and compact reconfigurable input and output devices. Single- and multi-cell packages of LiCoO2-chemistry 18650 lithium-ion batteries were constructed, and data were obtained and analyzed for abuse and failure tests. Surrogate 18650 cells were designed and fabricated for multi-cell packages that mimicked the thermal behavior of real cells without using any active components, enabling internal temperature monitoring of cells adjacent to the active cell undergoing failure. Heat propagation and video recordings before, during, and after energetic failure events revealed a high degree of heterogeneity; some batteries exhibited short bursts of sparks while others experienced a longer, sustained flame during failure. Carbon monoxide, carbon dioxide, methane, dimethyl carbonate, and ethylene carbonate were detected via gas analysis, and the presence of these species was consistent throughout all failure events. These results highlight the inherent danger in large-format lithium-ion battery packs with regard to cell-to-cell failure, and illustrate the need for effective safety features.

  4. Microseismic Signature of Magma Failure: Testing Failure Forecast in Heterogeneous Material

    NASA Astrophysics Data System (ADS)

    Vasseur, J.; Lavallee, Y.; Hess, K.; Wassermann, J. M.; Dingwell, D. B.

    2012-12-01

    Volcanoes exhibit a range of seismic precursors prior to eruptions. These signals derive from different processes which, if quantified, may tell us when and how a volcano will erupt: effusively or explosively. This quantification can be performed in the laboratory. Here we investigated the signals associated with the deformation and failure of single-phase silicate liquids compared to multi-phase magmas containing pores and crystals as heterogeneities. For the past decades, magmas have been simplified as viscoelastic fluids with grossly predictable failure, following an analysis of the stress and strain rate conditions in volcanic conduits. Yet it is clear that the way magmas fail is not unique, and evidence increasingly illustrates the role of heterogeneities in the process of magmatic fragmentation. In such multi-phase magmas, failure cannot be predicted using current rheological laws. Microseismicity, as detected in the laboratory by analogous acoustic emissions (AE), can be used to monitor fracture initiation and propagation, and thus provides invaluable information to characterise the process of brittle failure underlying explosive eruptions. Tri-axial press experiments on different synthesised and natural glass samples were performed to investigate the acoustic signature of failure. We observed that the failure of single-phase liquids occurs without much strain and is preceded by the constant nucleation, propagation and coalescence of cracks, as demonstrated by the monitored AE. In contrast, the failure of multi-phase magmas depends on the applied stress and is strain dependent. The path dependence of magma failure is nonetheless accompanied by a supra-exponential acceleration in released AEs. Analysis of the released AEs following the material Failure Forecast Method (FFM) suggests that the predictability of failure is enhanced by the presence of heterogeneities in magmas. We discuss our observations in terms of volcanic scenarios.
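    The Failure Forecast Method referenced here is commonly applied by fitting a straight line to the inverse AE rate and extrapolating it to zero (the classic Voight formulation with exponent alpha = 2). A sketch under that assumption, on synthetic data:

```python
def ffm_failure_time(times, rates):
    """Failure Forecast Method sketch: least-squares fit of the inverse
    event rate (1/rate) versus time; the line's zero crossing estimates
    the failure time.  Assumes the classic alpha = 2 case."""
    inv = [1.0 / r for r in rates]
    n = len(times)
    mt = sum(times) / n
    mi = sum(inv) / n
    slope = (sum((t - mt) * (y - mi) for t, y in zip(times, inv))
             / sum((t - mt) ** 2 for t in times))
    intercept = mi - slope * mt
    return -intercept / slope  # time at which 1/rate reaches zero

# Synthetic accelerating AE rates following rate = 1 / (100 - t),
# so the true failure time is t = 100 (arbitrary units).
times = [10.0, 30.0, 50.0, 70.0, 90.0]
rates = [1.0 / (100.0 - t) for t in times]
t_fail = ffm_failure_time(times, rates)
```

    On real AE records the inverse-rate trend is noisy and the fit is updated as new events arrive, which is where heterogeneity-enhanced predictability matters.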

  5. Image edge detection based tool condition monitoring with morphological component analysis.

    PubMed

    Yu, Xiaolong; Lin, Xin; Dai, Yiquan; Zhu, Kunpeng

    2017-07-01

    The measurement and monitoring of tool condition are keys to product precision in automated manufacturing. To meet this need, this study proposes a novel tool wear monitoring approach based on edge detection in the monitored image. Image edge detection has been a fundamental tool for obtaining features of images. This approach extracts the tool edge with morphological component analysis. Through the decomposition of the original tool wear image, the approach reduces the influence of texture and noise on edge measurement. Based on the sparse representation of the target image and edge detection, the approach can accurately extract the tool wear edge with a continuous and complete contour, and is convenient for characterizing tool conditions. Compared to established algorithms in the literature, this approach improves the integrity and connectivity of edges, and the results show that it achieves better geometric accuracy and a lower error rate in the estimation of tool conditions. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
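    The paper's morphological component analysis decomposition is beyond a short sketch, but the underlying edge-measurement idea, a gradient-magnitude edge map of the tool image, can be illustrated with plain Sobel kernels (synthetic image; not the paper's algorithm):

```python
import numpy as np

def sobel_edges(img, threshold=1.0):
    """Gradient-magnitude edge map via 3x3 Sobel kernels.  A minimal
    stand-in for the edge-detection stage only; the morphological
    component analysis decomposition is not reproduced here."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    mag = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx = float((patch * kx).sum())  # horizontal gradient
            gy = float((patch * ky).sum())  # vertical gradient
            mag[i, j] = (gx * gx + gy * gy) ** 0.5
    return mag > threshold

# Hypothetical "tool edge": a dark-to-bright vertical step in a tiny image
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)  # True only along the step
```

    In the paper's pipeline, the decomposition step would first strip texture and noise so that this edge map is continuous and complete.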

  6. Switching Kalman filter for failure prognostic

    NASA Astrophysics Data System (ADS)

    Lim, Chi Keong Reuben; Mba, David

    2015-02-01

    The use of condition monitoring (CM) data to predict remaining useful life has been growing with the increasing use of health and usage monitoring systems on aircraft. Many data-driven methodologies are available for this prediction; popular ones include artificial intelligence and statistically based approaches. The drawback of such approaches is that they require a lot of failure data for training, which can be scarce in practice. In view of this, methods using state-space and regression-based models that extract information from the data history itself have been explored. However, such methods have their own limitations, as they utilize a single time-invariant model which does not represent a changing degradation path well. This causes most degradation modeling studies to focus only on segments of their CM data that behave close to the assumed model. In this paper, a state-space-based method, the Switching Kalman Filter (SKF), is adopted for model estimation and life prediction. The SKF approach, however, uses multiple models, from which the most probable model is inferred from the CM data using Bayesian estimation before it is applied for prediction. At the same time, the inference of the degradation model itself can provide maintainers with more information for their planning. This SKF approach is demonstrated with a case study on gearbox bearings that were found defective in Republic of Singapore Air Force AH64D helicopters. The use of in-service CM data allows the approach to be applied in a practical scenario, and results showed that the developed SKF approach is a promising tool to support maintenance decision-making.
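    The core SKF idea, running several candidate degradation models in parallel and weighting them by how well each explains the measurements, can be sketched for a scalar health indicator. This is a simplified version (no inter-model state mixing, which a full SKF performs); the drifts, noise levels, and data are assumptions for illustration:

```python
import math

class ScalarKF:
    """Scalar Kalman filter for x_k = x_{k-1} + drift + w,  z_k = x_k + v."""
    def __init__(self, drift, q, r, x0=0.0, p0=1.0):
        self.drift, self.q, self.r = drift, q, r
        self.x, self.p = x0, p0

    def step(self, z):
        x_pred = self.x + self.drift          # predict
        p_pred = self.p + self.q
        innov = z - x_pred                    # innovation
        s = p_pred + self.r                   # innovation variance
        k = p_pred / s                        # Kalman gain
        self.x = x_pred + k * innov           # update
        self.p = (1.0 - k) * p_pred
        # Gaussian likelihood of the measurement under this model
        return math.exp(-0.5 * innov * innov / s) / math.sqrt(2 * math.pi * s)

# Two hypothesised degradation models: stable vs. steadily degrading.
models = {"stable": ScalarKF(drift=0.0, q=0.01, r=0.04),
          "degrading": ScalarKF(drift=0.5, q=0.01, r=0.04)}
weights = {name: 0.5 for name in models}      # uniform prior

for z in [0.5, 1.1, 1.4, 2.1, 2.4]:           # synthetic CM feature, rising
    likes = {name: kf.step(z) for name, kf in models.items()}
    total = sum(weights[n] * likes[n] for n in models)
    weights = {n: weights[n] * likes[n] / total for n in models}

best = max(weights, key=weights.get)          # most probable model
```

    Inferring `best` before extrapolating is what lets the approach follow a changing degradation path instead of committing to one time-invariant model.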

  7. TU-AB-BRD-00: Task Group 100

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2015-06-15

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are caused by flaws in clinical processes rather than by device failures. This suggests the need for a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100, which has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how they can be used in a given radiotherapy clinic to develop a risk-based quality management program.
    Learning Objectives: Learn how to design a process map for a radiotherapy process; learn how to perform failure modes and effects analysis for a given process; learn what fault trees are all about; learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Disclosures: Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; Consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.
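
    TG-100-style failure modes and effects analysis typically ranks failure modes by a risk priority number, RPN = O x S x D (occurrence x severity x lack of detectability), each scored on a 1-10 scale. A toy ranking with entirely hypothetical process steps and scores:

```python
# Hypothetical failure modes for an IMRT planning process; O, S, D are
# occurrence, severity, and lack-of-detectability scores (1-10 scales).
failure_modes = [
    {"step": "contouring",       "mode": "wrong target volume", "O": 4, "S": 9, "D": 6},
    {"step": "dose calculation", "mode": "wrong density table", "O": 2, "S": 8, "D": 7},
    {"step": "plan transfer",    "mode": "stale plan exported", "O": 3, "S": 7, "D": 3},
]

for fm in failure_modes:
    fm["RPN"] = fm["O"] * fm["S"] * fm["D"]   # risk priority number

ranked = sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True)
for fm in ranked:
    print(f'{fm["RPN"]:4d}  {fm["step"]:16s} {fm["mode"]}')
```

    The ranked list is then used exactly as the abstract describes: the highest-RPN modes get quality management controls designed for them first.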

  8. TU-AB-BRD-03: Fault Tree Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunscombe, P.

    2015-06-15

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are caused by flaws in clinical processes rather than by device failures. This suggests the need for a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100, which has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how they can be used in a given radiotherapy clinic to develop a risk-based quality management program.
    Learning Objectives: Learn how to design a process map for a radiotherapy process; learn how to perform failure modes and effects analysis for a given process; learn what fault trees are all about; learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Disclosures: Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; Consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.
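
    For the fault tree analysis portion of the session, the probability of a top event is propagated through AND/OR gates, assuming independent basic events. The events and probabilities below are hypothetical, purely to illustrate the arithmetic:

```python
def p_or(*ps):
    """Probability that at least one independent input event occurs."""
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*ps):
    """Probability that all independent input events occur."""
    q = 1.0
    for p in ps:
        q *= p
    return q

# hypothetical basic-event probabilities
p_wrong_plan   = 0.01    # wrong plan selected
p_check_missed = 0.10    # pre-treatment check fails to catch it
p_hw_fault     = 0.001   # delivery hardware fault

# top event: mistreatment = (wrong plan AND check missed) OR hardware fault
p_top = p_or(p_and(p_wrong_plan, p_check_missed), p_hw_fault)
print(round(p_top, 6))  # 0.001999
```

    In a real fault tree the basic-event probabilities come from incident data or expert estimates, and the tree structure itself is the analytical product.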

  9. TU-AB-BRD-01: Process Mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palta, J.

    2015-06-15

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are caused by flaws in clinical processes rather than by device failures. This suggests the need for a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100, which has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how they can be used in a given radiotherapy clinic to develop a risk-based quality management program.
    Learning Objectives: Learn how to design a process map for a radiotherapy process; learn how to perform failure modes and effects analysis for a given process; learn what fault trees are all about; learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Disclosures: Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; Consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.

  10. TU-AB-BRD-04: Development of Quality Management Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomadsen, B.

    2015-06-15

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are caused by flaws in clinical processes rather than by device failures. This suggests the need for a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100, which has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how they can be used in a given radiotherapy clinic to develop a risk-based quality management program.
    Learning Objectives: Learn how to design a process map for a radiotherapy process; learn how to perform failure modes and effects analysis for a given process; learn what fault trees are all about; learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Disclosures: Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; Consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.

  11. Cost-Effectiveness of Implantable Pulmonary Artery Pressure Monitoring in Chronic Heart Failure.

    PubMed

    Sandhu, Alexander T; Goldhaber-Fiebert, Jeremy D; Owens, Douglas K; Turakhia, Mintu P; Kaiser, Daniel W; Heidenreich, Paul A

    2016-05-01

    This study aimed to evaluate the cost-effectiveness of the CardioMEMS (CardioMEMS Heart Failure System, St Jude Medical Inc, Atlanta, Georgia) device in patients with chronic heart failure. The CardioMEMS device, an implantable pulmonary artery pressure monitor, was shown to reduce hospitalizations for heart failure and improve quality of life in the CHAMPION (CardioMEMS Heart Sensor Allows Monitoring of Pressure to Improve Outcomes in NYHA Class III Heart Failure Patients) trial. We developed a Markov model to determine the hospitalization, survival, quality of life, cost, and incremental cost-effectiveness ratio of CardioMEMS implantation compared with usual care among a CHAMPION trial cohort of patients with heart failure. We obtained event rates and utilities from published trial data; we used costs from literature estimates and Medicare reimbursement data. We performed subgroup analyses of preserved and reduced ejection fraction and an exploratory analysis in a lower-risk cohort on the basis of the CHARM (Candesartan in Heart failure: Reduction in Mortality and Morbidity) trials. CardioMEMS reduced lifetime hospitalizations (2.18 vs. 3.12), increased quality-adjusted life-years (QALYs) (2.74 vs. 2.46), and increased costs ($176,648 vs. $156,569), thus yielding a cost of $71,462 per QALY gained and $48,054 per life-year gained. The cost per QALY gained was $82,301 in patients with reduced ejection fraction and $47,768 in those with preserved ejection fraction. In the lower-risk CHARM cohort, the device would need to reduce hospitalizations for heart failure by 41% to cost <$100,000 per QALY gained. The cost-effectiveness was most sensitive to the device's durability. In populations similar to that of the CHAMPION trial, the CardioMEMS device is cost-effective if the trial effectiveness is sustained over long periods. Post-marketing surveillance data on durability will further clarify its value. Copyright © 2016 American College of Cardiology Foundation. 
Published by Elsevier Inc. All rights reserved.
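
    The headline figure can be sanity-checked from the abstract's own rounded numbers: an incremental cost-effectiveness ratio is the cost difference divided by the QALY difference. With the rounded values it comes out near, but not exactly at, the reported $71,462/QALY, which the paper derives from unrounded model outputs.

```python
# Rounded CHAMPION-based model outputs quoted in the abstract
cost_device, cost_usual = 176648.0, 156569.0
qaly_device, qaly_usual = 2.74, 2.46

icer = (cost_device - cost_usual) / (qaly_device - qaly_usual)
print(round(icer))  # 71711 from these rounded inputs; the paper
                    # reports $71,462/QALY from unrounded values
```

    The same division with life-years in the denominator reproduces the cost-per-life-year figure in the same way.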

  12. GEOGLAM Crop Assessment Tool: Adapting from global agricultural monitoring to food security monitoring

    NASA Astrophysics Data System (ADS)

    Humber, M. L.; Becker-Reshef, I.; Nordling, J.; Barker, B.; McGaughey, K.

    2014-12-01

    The GEOGLAM Crop Monitor's Crop Assessment Tool was released in August 2013 in support of the GEOGLAM Crop Monitor's objective to develop transparent, timely crop condition assessments in primary agricultural production areas, highlighting potential hotspots of stress/bumper crops. The Crop Assessment Tool allows users to view satellite derived products, best available crop masks, and crop calendars (created in collaboration with GEOGLAM Crop Monitor partners), then in turn submit crop assessment entries detailing the crop's condition, drivers, impacts, trends, and other information. Although the Crop Assessment Tool was originally intended to collect data on major crop production at the global scale, the types of data collected are also relevant to the food security and rangelands monitoring communities. In line with the GEOGLAM Countries at Risk philosophy of "foster[ing] the coordination of product delivery and capacity building efforts for national and regional organizations, and the development of harmonized methods and tools", a modified version of the Crop Assessment Tool is being developed for the USAID Famine Early Warning Systems Network (FEWS NET). As a member of the Countries at Risk component of GEOGLAM, FEWS NET provides agricultural monitoring, timely food security assessments, and early warnings of potential significant food shortages focusing specifically on countries at risk of food security emergencies. While the FEWS NET adaptation of the Crop Assessment Tool focuses on crop production in the context of food security rather than large scale production, the data collected is nearly identical to the data collected by the Crop Monitor. If combined, the countries monitored by FEWS NET and GEOGLAM Crop Monitor would encompass over 90 countries representing the most important regions for crop production and food security.

  13. Integrated medication management in mHealth applications.

    PubMed

    Ebner, Hubert; Modre-Osprian, Robert; Kastner, Peter; Schreier, Günter

    2014-01-01

    Continuous medication monitoring is essential for successful management of heart failure patients. Experience with the recently established heart failure network HerzMobil Tirol shows that medication monitoring limited to heart-failure-specific drugs can be insufficient, in particular for general practitioners. Additionally, some patients are confused by having only part of their prescribed drugs monitored. Sometimes medication is changed without informing the responsible physician. As part of the upcoming Austrian electronic health record system ELGA, the eMedication system will collect prescription and dispensing data for drugs, and these data will be accessible to authorized healthcare professionals on an inter-institutional level. We therefore propose two concepts for integrated medication management in mHealth applications that integrate ELGA eMedication and closed-loop mHealth-based telemonitoring. As a next step, we will implement these concepts and analyze, in a feasibility study, their usability and practicability as well as legal aspects with respect to automatic data transfer from the ELGA eMedication service.

  14. Monitoring the health of power transformers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirtley, J.L. Jr.; Hagman, W.H.; Lesieutre, B.C.

    This article reviews MIT's model-based system, which offers adaptive, intelligent surveillance of transformers and summons attention to anomalous operation through paging devices. Failures of large power transformers are problematic for four reasons. First, large transformers are generally situated so that failures present operational problems to the system. Second, large power transformers are encased in tanks of flammable and environmentally hazardous fluid. Third, failures are often accompanied by fire and/or spillage of this fluid, which presents hazards to people, other equipment and property, and the local environment. Finally, large power transformers are costly devices. There is a clear incentive for utilities to keep track of the health of their power transformers. The Massachusetts Institute of Technology (MIT) has developed an adaptive, intelligent monitoring system for large power transformers. Four large transformers on the Boston Edison system are under continuous surveillance by this system, which can summon attention to anomalous operation through paging devices. The monitoring system offers two advantages over more traditional (non-adaptive) methods of tracking transformer operation.
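
    The article does not specify MIT's model, but model-based transformer monitoring generally compares a measured quantity against a physical prediction and alarms on persistent residuals. A deliberately crude sketch with an assumed quadratic load-to-temperature model (the real adaptive system is far more sophisticated):

```python
def residual_alarm(loads, temps, k=20.0, ambient=25.0, limit=8.0, run=3):
    """Flag an anomaly when measured top-oil temperature deviates from a
    crude load-based prediction for `run` consecutive samples.
    temp_pred = ambient + k * load**2 is a stand-in for the real model."""
    streak = 0
    for load, temp in zip(loads, temps):
        pred = ambient + k * load * load
        streak = streak + 1 if abs(temp - pred) > limit else 0
        if streak >= run:
            return True   # would trigger the paging device
    return False

loads = [0.8] * 10
normal   = [25.0 + 20.0 * 0.64] * 10   # matches the model: 37.8 C
overheat = [55.0] * 10                 # ~17 C above the prediction
print(residual_alarm(loads, normal), residual_alarm(loads, overheat))
```

    Requiring several consecutive out-of-band samples before paging is one simple way to trade detection delay against nuisance alarms.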

  15. Bridging Empirical and Physical Approaches for Landslide Monitoring and Early Warning

    NASA Technical Reports Server (NTRS)

    Kirschbaum, Dalia; Peters-Lidard, Christa; Adler, Robert; Kumar, Sujay; Harrison, Ken

    2011-01-01

    Rainfall-triggered landslides typically occur and are evaluated at local scales, using slope-stability models to calculate coincident changes in driving and resisting forces at the hillslope level in order to anticipate slope failures. Over larger areas, detailed high resolution landslide modeling is often infeasible due to difficulties in quantifying the complex interaction between rainfall infiltration and surface materials as well as the dearth of available in situ soil and rainfall estimates and accurate landslide validation data. This presentation will discuss how satellite precipitation and surface information can be applied within a landslide hazard assessment framework to improve landslide monitoring and early warning by considering two disparate approaches to landslide hazard assessment: an empirical landslide forecasting algorithm and a physical slope-stability model. The goal of this research is to advance near real-time landslide hazard assessment and early warning at larger spatial scales. This is done by employing high resolution surface and precipitation information within a probabilistic framework to provide more physically-based grounding to empirical landslide triggering thresholds. The empirical landslide forecasting tool, running in near real-time at http://trmm.nasa.gov, considers potential landslide activity at the global scale and relies on Tropical Rainfall Measuring Mission (TRMM) precipitation data and surface products to provide a near real-time picture of where landslides may be triggered. The physical approach considers how rainfall infiltration on a hillslope affects the in situ hydro-mechanical processes that may lead to slope failure. Evaluation of these empirical and physical approaches is performed within the Land Information System (LIS), a high performance land surface model processing and data assimilation system developed within the Hydrological Sciences Branch at NASA's Goddard Space Flight Center.
LIS provides the capabilities to quantify uncertainty from model inputs and calculate probabilistic estimates for slope failures. Results indicate that remote sensing data can provide many of the spatiotemporal requirements for accurate landslide monitoring and early warning; however, higher resolution precipitation inputs will help to better identify small-scale precipitation forcings that contribute to significant landslide triggering. Future missions, such as the Global Precipitation Measurement (GPM) mission will provide more frequent and extensive estimates of precipitation at the global scale, which will serve as key inputs to significantly advance the accuracy of landslide hazard assessment, particularly over larger spatial scales.
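
    Empirical landslide triggering thresholds of the kind the forecasting algorithm relies on are often power laws in rainfall intensity and duration. A sketch with illustrative coefficients (not the algorithm's calibrated values):

```python
def exceeds_threshold(intensity_mm_h, duration_h, alpha=15.0, beta=0.39):
    """Generic rainfall intensity-duration landslide threshold of the
    form I = alpha * D**(-beta); alpha and beta here are illustrative
    placeholders, not the forecasting tool's calibrated coefficients."""
    return intensity_mm_h > alpha * duration_h ** (-beta)

# 10 mm/h sustained for 24 h vs. a 2 h shower at the same intensity
print(exceeds_threshold(10.0, 24.0), exceeds_threshold(10.0, 2.0))
```

    The decreasing threshold with duration captures the physical intuition that prolonged moderate rain can destabilize a slope that a short burst would not; the probabilistic LIS framework described above is one way to ground such coefficients in modeled soil conditions.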

  16. Left ventricular strain and twisting in heart failure with preserved ejection fraction: an updated review.

    PubMed

    Tadic, Marijana; Pieske-Kraigher, Elisabeth; Cuspidi, Cesare; Genger, Martin; Morris, Daniel A; Zhang, Kun; Walther, Nina Alexandra; Pieske, Burkert

    2017-05-01

    Despite the high prevalence of heart failure with preserved ejection fraction (HFpEF), our knowledge of this entity, from diagnostic tools to therapeutic approaches, is still not well established. The evaluation of patients with HFpEF is mainly based on echocardiography, the most widely accepted tool in cardiac imaging. Left ventricular (LV) diastolic dysfunction has long been considered the sole mechanism responsible for HFpEF, and its evaluation is still the "sine qua non" of HFpEF diagnostics. However, one should be aware that identifying cardiac dysfunction in HFpEF can be very challenging and often requires a more complex evaluation of cardiac structure and function. New echocardiographic modalities such as 2D and 3D speckle-tracking imaging can help in the diagnosis of HFpEF and provide further information regarding LV function and mechanics. Early diagnosis, medical management, and adequate monitoring of HFpEF patients are prerequisites of modern medical treatment. New healthcare approaches require individualized patient care, which is why clinicians should have all clinical, laboratory, and diagnostic data before making final decisions about the treatment of any patient. This is particularly important for HFpEF, which often remains undiagnosed for a long time, further delaying the start of adequate treatment and calling the outcome of these patients into question. The aim of this article is to provide an overview of the main principles of LV mechanics and summarize recent data regarding LV strain in patients with HFpEF.

  17. Are Your Students Ready for Anatomy and Physiology? Developing Tools to Identify Students at Risk for Failure

    ERIC Educational Resources Information Center

    Gultice, Amy; Witham, Ann; Kallmeyer, Robert

    2015-01-01

    High failure rates in introductory college science courses, including anatomy and physiology, are common at institutions across the country, and determining the specific factors that contribute to this problem is challenging. To identify students at risk for failure in introductory physiology courses at our open-enrollment institution, an online…

  18. Tips and Traps: Lessons From Codesigning a Clinician E-Monitoring Tool for Computerized Cognitive Behavioral Therapy

    PubMed Central

    Hawken, Susan J; Stasiak, Karolina; Lucassen, Mathijs FG; Fleming, Theresa; Shepherd, Matthew; Greenwood, Andrea; Osborne, Raechel; Merry, Sally N

    2017-01-01

    Background: Computerized cognitive behavioral therapy (cCBT) is an acceptable and promising treatment modality for adolescents with mild-to-moderate depression. Many cCBT programs are standalone packages with no way for clinicians to monitor progress or outcomes. We sought to develop an electronic monitoring (e-monitoring) tool in consultation with clinicians and adolescents to allow clinicians to monitor mood, risk, and treatment adherence of adolescents completing a cCBT program called SPARX (Smart, Positive, Active, Realistic, X-factor thoughts). Objective: The objectives of our study were as follows: (1) assess clinicians’ and adolescents’ views on using an e-monitoring tool and to use this information to help shape the development of the tool and (2) assess clinician experiences with a fully developed version of the tool that was implemented in their clinical service. Methods: A descriptive qualitative study using semistructured focus groups was conducted in New Zealand. In total, 7 focus groups included clinicians (n=50) who worked in primary care, and 3 separate groups included adolescents (n=29). Clinicians were general practitioners (GPs), school guidance counselors, clinical psychologists, youth workers, and nurses. Adolescents were recruited from health services and a high school. Focus groups were run to enable feedback at 3 phases that corresponded to the consultation, development, and postimplementation stages. Thematic analysis was applied to transcribed responses. Results: Focus groups during the consultation and development phases revealed the need for a simple e-monitoring registration process with guides for end users. Common concerns were raised in relation to clinical burden, monitoring risk (and effects on the therapeutic relationship), alongside confidentiality or privacy and technical considerations. Adolescents did not want to use their social media login credentials for e-monitoring, as they valued their privacy.
    However, adolescents did want information on seeking help and personalized monitoring and communication arrangements. Postimplementation, clinicians who had used the tool in practice revealed no adverse impact on the therapeutic relationship, and adolescents were not concerned about being e-monitored. Clinicians did need additional time to monitor adolescents, and the e-monitoring tool was used in a different way than was originally anticipated. Also, it was suggested that the registration process could be further streamlined and integrated with existing clinical data management systems, and the use of clinician alerts could be expanded beyond the scope of simply flagging adolescents of concern. Conclusions: An e-monitoring tool was developed in consultation with clinicians and adolescents. However, the study revealed the complexity of implementing the tool in clinical practice. Of salience were privacy, parallel monitoring systems, integration with existing electronic medical record systems, customization of the e-monitor, and preagreed monitoring arrangements between clinicians and adolescents. PMID:28077345

  19. PERMEABLE REACTIVE BARRIER PERFORMANCE MONITORING: LONG-TERM TRENDS IN GEOCHEMICAL PARAMETERS AT TWO SITES

    EPA Science Inventory

    A major goal of research on the long-term performance of subsurface reactive barriers is to identify standard ground water monitoring parameters that may be useful indicators of declining performance or impending system failure. Results are presented from ground water monitoring ...

  20. Personnel reliability impact on petrochemical facilities monitoring system's failure skipping probability

    NASA Astrophysics Data System (ADS)

    Kostyukov, V. N.; Naumenko, A. P.

    2017-08-01

    The paper addresses the urgent issue of evaluating how the actions of operators of complex technological systems affect safe operation, considering the application of condition monitoring systems to elements and subsystems of petrochemical production facilities. The main task of the research is to identify factors and criteria for describing monitoring system properties that allow the impact of personnel errors on the operation of real-time condition monitoring and diagnostic systems for petrochemical machinery to be evaluated, and to find objective criteria for classifying monitoring systems that take the human factor into account. On the basis of the real-time condition monitoring concepts of sudden-failure skipping risk and static and dynamic error, one may evaluate the impact of personnel qualification on monitoring system operation in terms of errors in personnel or operators' actions while receiving information from monitoring systems and operating a technological system. The operator is considered part of the technological system. Personnel behavior is usually described as a combination of the following stages: input signal (information perception), reaction (decision making), and response (decision implementation). Based on several studies of the behavior of nuclear power station operators in the USA, Italy, and other countries, as well as on research conducted by Russian scientists, reliability data on operators were selected for the analysis of operator behavior with diagnostic and monitoring systems at technological facilities.
    The calculations revealed that for the monitoring system selected as an example, the failure skipping risk for the set values of static (less than 0.01) and dynamic (less than 0.001) errors, considering reliability data on information perception, decision making, and response, is 0.037; in the case when all facilities and the error probability are under control, it is not more than 0.027. When only pump and compressor units are under control, the failure skipping risk is not more than 0.022, with a probability of error in the operator's actions of not more than 0.011. The results show that operator reliability can be assessed in this way for almost any kind of production, although only with respect to technological capabilities, since operators' psychological and general training vary considerably across production industries. Using the latest techniques of engineering psychology and the design of data support, situation assessment, decision-making, and response systems, together with advances in condition monitoring across production industries, one can evaluate the hazardous-condition skipping probability considering static and dynamic errors and the human factor.
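
    The abstract does not give the exact formula behind its 0.037/0.022 figures, but the three-stage operator model it describes suggests a simple serial reliability chain, sketched here with illustrative (assumed) probabilities rather than the paper's data:

```python
def operator_error_prob(p_perceive, p_decide, p_respond):
    """Probability the operator errs in at least one of the three stages
    (perception, decision, response), assuming independent stages."""
    return 1.0 - (1.0 - p_perceive) * (1.0 - p_decide) * (1.0 - p_respond)

def failure_skip_prob(p_system_miss, p_operator_error):
    """A failure is skipped if the monitoring system misses it, or if the
    system flags it but the operator mishandles the alarm."""
    return p_system_miss + (1.0 - p_system_miss) * p_operator_error

# illustrative stage error probabilities only; the paper's figures come
# from its own reliability data, which the abstract does not specify
p_op = operator_error_prob(0.004, 0.005, 0.002)
print(round(failure_skip_prob(0.011, p_op), 4))
```

    The structure makes the paper's point concrete: even a well-instrumented system inherits a floor on its failure-skipping risk from the human link in the chain.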

  1. Development and pilot testing of an online monitoring tool of depression symptoms and side effects for young people being treated for depression.

    PubMed

    Hetrick, Sarah E; Dellosa, Maria Kristina; Simmons, Magenta B; Phillips, Lisa

    2015-02-01

    To develop and examine the feasibility of an online monitoring tool of depressive symptoms, suicidality and side effects. The online tool was developed based on guideline recommendations, and employed already validated and widely used measures. Quantitative data about its use, and qualitative information on its functionality and usefulness were collected from surveys, a focus group and individual interviews. Fifteen young people completed the tool between 1 and 12 times, and reported it was easy to use. Clinicians suggested it was too long and could be completed in the waiting room to lessen impact on session time. Overall, clients and clinicians who used the tool found it useful. Results show that an online monitoring tool is potentially useful as a systematic means for monitoring symptoms, but further research is needed including how to embed the tool within clinical practice. © 2014 Wiley Publishing Asia Pty Ltd.

  2. 46 CFR 62.25-1 - General.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... control system; (3) A safety control system, if required by § 62.25-15; (4) Instrumentation to monitor... if instrumentation is not continuously monitored or is inappropriate for detection of a failure or...

  3. 46 CFR 62.25-1 - General.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... control system; (3) A safety control system, if required by § 62.25-15; (4) Instrumentation to monitor... if instrumentation is not continuously monitored or is inappropriate for detection of a failure or...

  4. 46 CFR 62.25-1 - General.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... control system; (3) A safety control system, if required by § 62.25-15; (4) Instrumentation to monitor... if instrumentation is not continuously monitored or is inappropriate for detection of a failure or...

  5. 46 CFR 62.25-1 - General.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... control system; (3) A safety control system, if required by § 62.25-15; (4) Instrumentation to monitor... if instrumentation is not continuously monitored or is inappropriate for detection of a failure or...

  6. Analysis of arrhythmic events is useful to detect lead failure earlier in patients followed by remote monitoring.

    PubMed

    Nishii, Nobuhiro; Miyoshi, Akihito; Kubo, Motoki; Miyamoto, Masakazu; Morimoto, Yoshimasa; Kawada, Satoshi; Nakagawa, Koji; Watanabe, Atsuyuki; Nakamura, Kazufumi; Morita, Hiroshi; Ito, Hiroshi

    2018-03-01

    Remote monitoring (RM) has been advocated as the new standard of care for patients with cardiovascular implantable electronic devices (CIEDs). RM allows the early detection of adverse clinical events, such as arrhythmia, lead failure, and battery depletion. However, lead failure is often identified only by arrhythmic events, not by impedance abnormalities. The aim was to compare the usefulness of arrhythmic events with conventional impedance abnormalities for identifying lead failure in CIED patients followed by RM. CIED patients in 12 hospitals have been followed by the RM center in Okayama University Hospital, and all transmitted data have been analyzed and summarized. From April 2009 to March 2016, 1,873 patients were followed by the RM center. During the mean follow-up period of 775 days, 42 lead failure events (atrial lead 22, right ventricular pacemaker lead 5, implantable cardioverter defibrillator [ICD] lead 15) were detected. The proportion of lead failures detected only by arrhythmic events, and not by conventional impedance abnormalities, was significantly higher than that detected by impedance abnormalities (arrhythmic events 76.2%, 95% CI: 60.5-87.9%; impedance abnormalities 23.8%, 95% CI: 12.1-39.5%). Twenty-seven events (64.7%) were detected without any alert. Of 15 patients with ICD lead failure, none experienced inappropriate therapy. RM can detect lead failure early, before clinical adverse events occur. However, CIEDs often record lead failure merely as arrhythmic events, without any warning. Thus, to detect lead failure earlier, careful human analysis of arrhythmic events is useful. © 2017 Wiley Periodicals, Inc.

  7. Remote monitoring of LED lighting system performance

    NASA Astrophysics Data System (ADS)

    Thotagamuwa, Dinusha R.; Perera, Indika U.; Narendran, Nadarajah

    2016-09-01

    The concept of connected lighting systems using LED lighting for the creation of intelligent buildings is becoming attractive to building owners and managers. In this application, the two most important parameters are power demand and the remaining useful life of the LED fixtures. The first enables energy-efficient buildings and the second helps building managers schedule maintenance services. The failure of an LED lighting system can be parametric (such as lumen depreciation) or catastrophic (such as complete cessation of light). Catastrophic failures in LED lighting systems can have serious consequences in safety-critical and emergency applications. Therefore, both failure mechanisms must be considered and the shorter of the two times must be used as the failure time. Furthermore, because of significant variation between the useful lives of similar products, it is difficult to accurately predict the life of LED systems. Real-time data gathering and analysis of key operating parameters of LED systems can enable accurate estimation of the useful life of a lighting system. This paper demonstrates the use of a data-driven method (Euclidean distance) to monitor the performance of an LED lighting system and predict its time to failure.
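The paper names its data-driven method (Euclidean distance) without detailing the feature set. A minimal sketch, assuming a hypothetical baseline vector of normalized lumen output, forward voltage, and heatsink temperature:

```python
import math

def euclidean_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Baseline features captured when the fixture was new (hypothetical choice:
# normalized lumen output, forward voltage, heatsink temperature).
BASELINE = [1.00, 1.00, 1.00]

def degradation_index(current_features):
    """Distance from the healthy baseline; larger means closer to failure."""
    return euclidean_distance(current_features, BASELINE)
```

A time-to-failure estimate would then extrapolate this index toward a failure threshold (for example, the L70 lumen-maintenance limit).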

  8. A failure effects simulation of a low authority flight control augmentation system on a UH-1H helicopter

    NASA Technical Reports Server (NTRS)

    Corliss, L. D.; Talbot, P. D.

    1977-01-01

    A two-pilot moving base simulator experiment was conducted to assess the effects of servo failures of a flight control system on the transient dynamics of a Bell UH-1H helicopter. The flight control hardware considered was part of the V/STOLAND system built with control authorities of from 20-40%. Servo hardover and oscillatory failures were simulated in each control axis. Measurements were made to determine the adequacy of the failure monitoring system time delay and the servo center and lock time constant, the pilot reaction times, and the altitude and attitude excursions of the helicopter at hover and 60 knots. Safe recoveries were made from all failures under VFR conditions. Pilot reaction times were from 0.5 to 0.75 sec. Reduction of monitor delay times below these values resulted in significantly reduced excursion envelopes. A subsequent flight test was conducted on a UH-1H helicopter with the V/STOLAND system installed. Series servo hardovers were introduced in hover and at 60 knots straight and level. Data from these tests are included for comparison.

  9. Interferon-γ–Inducible Protein 10 (IP-10) as a Screening Tool to Optimize Human Immunodeficiency Virus RNA Monitoring in Resource-Limited Settings

    PubMed Central

    Pastor, Lucía; Casellas, Aina; Carrillo, Jorge; Maculuve, Sonia; Jairoce, Chenjerai; Paredes, Roger; Blanco, Julià; Naniche, Denise

    2017-01-01

    Background Achieving effective antiretroviral treatment (ART) monitoring is a key determinant to ensure viral suppression and reach the UNAIDS 90-90-90 targets. The gold standard for detecting virological failure is plasma human immunodeficiency virus (HIV) RNA (viral load [VL]) testing; however, its availability is very limited in low-income countries due to cost and operational constraints. Methods HIV-1–infected adults on first-line ART attending routine visits at the Manhiça District Hospital, Mozambique, were previously evaluated for virologic failure. Plasma levels of interferon-γ–inducible protein 10 (IP-10) were quantified by enzyme-linked immunosorbent assay. Logistic regression was used to build an IP-10–based model able to identify individuals with VL >150 copies/mL. Of the 316 individuals analyzed, 253 (80%) were used for model training and 63 (20%) for validation. Receiver operating characteristic curves were employed to evaluate model prediction. Results Of the individuals included in the training set, 34% had detectable VL. Mean age was 41 years, 70% were females, and median time on ART was 3.4 years. IP-10 levels were significantly higher in subjects with detectable VL (108.2 pg/mL) as compared to those with undetectable VL (38.0 pg/mL) (P < .0001, U test). The IP-10 univariate model demonstrated high classification performance (area under the curve = 0.85 [95% confidence interval {CI}, .80–.90]). Using a cutoff value of IP-10 ≥44.2 pg/mL, the model identified detectable VL with 91.9% sensitivity (95% CI, 83.9%–96.7%) and 59.9% specificity (95% CI, 52.0%–67.4%), values confirmed in the validation set. Conclusions IP-10 is an accurate biomarker to screen individuals on ART for detectable viremia. Further studies should evaluate the benefits of IP-10 as a triage approach to monitor ART in resource-limited settings. PMID:29020145
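The reported screening rule reduces to a single cutoff (IP-10 >= 44.2 pg/mL). A sketch of applying it and recomputing sensitivity and specificity; the toy values in the usage are invented, not the study's data:

```python
CUTOFF = 44.2  # pg/mL, the cutoff reported in the abstract

def screen(ip10_pg_ml):
    """Flag a sample as 'possible detectable viral load'."""
    return ip10_pg_ml >= CUTOFF

def sensitivity_specificity(values, labels):
    """labels: True = detectable VL, False = undetectable VL."""
    tp = sum(1 for v, y in zip(values, labels) if y and screen(v))
    fn = sum(1 for v, y in zip(values, labels) if y and not screen(v))
    tn = sum(1 for v, y in zip(values, labels) if not y and not screen(v))
    fp = sum(1 for v, y in zip(values, labels) if not y and screen(v))
    return tp / (tp + fn), tn / (tn + fp)
```

A triage workflow would send only screen-positive samples on to the (more expensive) plasma VL test.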

  10. Development and validation of a liquid chromatography-mass spectrometry metabonomic platform in human plasma of liver failure caused by hepatitis B virus.

    PubMed

    Zhang, Lijun; Jia, Xiaofang; Peng, Xia; Ou, Qiang; Zhang, Zhengguo; Qiu, Chao; Yao, Yamin; Shen, Fang; Yang, Hua; Ma, Fang; Wang, Jiefei; Yuan, Zhenghong

    2010-10-01

    This paper presents a liquid chromatography (LC)/mass spectrometry (MS)-based metabonomic platform that combines the discovery of differential metabolites through principal component analysis (PCA) with verification by selective multiple reaction monitoring (MRM). These methods were applied to analyze plasma samples from liver disease patients and healthy donors. LC-MS raw data (about 1000 compounds) from the plasma of liver failure patients (n = 26) and healthy controls (n = 16) were analyzed through the PCA method, and a pattern recognition profile that differed significantly between liver failure patients and healthy controls (P < 0.05) was established. The profile was verified in 165 clinical subjects. The specificity and sensitivity of this model in predicting liver failure were 94.3 and 100.0%, respectively. The differential ions with m/z of 414.5, 432.0, 520.5, and 775.0 were verified by MRM mode in 40 clinical samples to be consistent with the PCA results, and rat model experiments showed that they were not caused by the medicines taken by patients. The compound with m/z of 520.5 was identified as 1-Linoleoylglycerophosphocholine or 1-Linoleoylphosphatidylcholine through exact mass measurements performed using Ion Trap-Time-of-Flight MS and a METLIN Metabolite Database search. This is the first study to integrate metabonomic screening with MRM relative quantification of differential peaks in a large number of clinical samples. A rat model was then used to exclude drug effects on the abundance of differential ion peaks, and 1-Linoleoylglycerophosphocholine or 1-Linoleoylphosphatidylcholine was identified as a potential biomarker. The LC/MS-based metabonomic platform could be a powerful tool for the metabonomic screening of plasma biomarkers.

  11. Reusable Rocket Engine Maintenance Study

    NASA Technical Reports Server (NTRS)

    Macgregor, C. A.

    1982-01-01

    Approximately 85,000 liquid rocket engine failure reports, obtained from 30 years of developing and delivering major pump feed engines, were reviewed and screened and reduced to 1771. These were categorized into 16 different failure modes. Failure propagation diagrams were established. The state of the art of engine condition monitoring for in-flight sensors and between flight inspection technology was determined. For the 16 failure modes, the potential measurands and diagnostic requirements were identified, assessed and ranked. Eight areas are identified requiring advanced technology development.

  12. Failure Control Techniques for the SSME

    NASA Technical Reports Server (NTRS)

    Taniguchi, M. H.

    1987-01-01

    Since ground testing of the Space Shuttle Main Engine (SSME) began in 1975, the detection of engine anomalies and the prevention of major damage have been achieved by a multi-faceted detection/shutdown system. The system continues the monitoring task today and consists of the following: sensors, automatic redline and other limit logic, redundant sensors and controller voting logic, conditional decision logic, and human monitoring. Typically, on the order of 300 to 500 measurements are sensed and recorded for each test, while on the order of 100 are used for control and monitoring. Despite extensive monitoring by the current detection system, twenty-seven (27) major incidents have occurred. This number would appear insignificant compared with over 1200 hot-fire tests which have taken place since 1976. However, the number suggests the requirement for and future benefits of a more advanced failure detection system.

  13. Flood scour monitoring system using fiber Bragg grating sensors

    NASA Astrophysics Data System (ADS)

    Lin, Yung Bin; Lai, Jihn Sung; Chang, Kuo Chun; Li, Lu Sheng

    2006-12-01

    The exposure and subsequent undermining of pier/abutment foundations through the scouring action of a flood can result in the structural failure of a bridge. Bridge scour is one of the leading causes of bridge failure. Bridges subject to periods of flood/high flow require monitoring during those times in order to protect the traveling public. In this study, an innovative scour monitoring system using button-like fiber Bragg grating (FBG) sensors was developed and applied successfully in the field during the Aere typhoon period in 2004. The in situ FBG scour monitoring system has been demonstrated to be robust and reliable for real-time scour-depth measurements, and to be valid for indicating depositional depth at the Dadu Bridge. The field results show that this system can function well and survive a typhoon flood.

  14. Robust detection, isolation and accommodation for sensor failures

    NASA Technical Reports Server (NTRS)

    Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.

    1986-01-01

    The objective is to extend recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and to estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and on process inputs and outputs, are used to generate these innovations. Thresholds used for failure detection are computed from bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed; it represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by the introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping into the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies, which used thresholds that were selected empirically. Comparison of the two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method over previous techniques.
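The innovations-based scheme compares the residual between each sensor reading and its estimator prediction against a threshold. A minimal sketch with a constant-gain observer; the gain and threshold values are illustrative assumptions, not the frequency-shaped design of the paper:

```python
# Illustrative innovations-based sensor-failure detection: a measurement is
# flagged when the innovation (measurement minus prediction) exceeds a
# threshold derived from modeling-error and noise bounds.

def detect_failure(measurement, predicted, threshold):
    innovation = measurement - predicted
    return abs(innovation) > threshold

def update_estimate(estimate, measurement, gain=0.2):
    """Constant-gain observer step (stand-in for the paper's estimator)."""
    return estimate + gain * (measurement - estimate)
```

The paper's contribution is precisely how the threshold is chosen (from modeling-error and noise bounds via the "threshold selector") rather than empirically; the fixed threshold here is only a placeholder.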

  15. The Utility of Failure Modes and Effects Analysis of Consultations in a Tertiary, Academic, Medical Center.

    PubMed

    Niv, Yaron; Itskoviz, David; Cohen, Michal; Hendel, Hagit; Bar-Giora, Yonit; Berkov, Evgeny; Weisbord, Irit; Leviron, Yifat; Isasschar, Assaf; Ganor, Arian

    Failure modes and effects analysis (FMEA) is a tool used to identify potential risks in health care processes. We used the FMEA tool to improve the consultation process in an academic medical center. A team of 10 staff members (5 physicians, 2 quality experts, 2 organizational consultants, and 1 nurse) was established. The consultation process steps, from ordering to delivery, were mapped. Failure modes were assessed for likelihood of occurrence, detection, and severity. A risk priority number (RPN) was calculated. An interventional plan was designed according to the highest RPNs. Thereafter, we compared the percentage of completed computer-based documented consultations before and after the intervention. The team identified 3 main categories of failure modes that reached the highest RPNs: initiation of consultation by a junior staff physician without senior approval, failure to document the consultation in the computerized patient registry, and requesting consultation by telephone. An interventional plan was designed, including meetings to update knowledge of the consultation request process, stressing the importance of approval by a senior physician, training sessions for closing requests in the patient file, and reporting of telephone requests. The number of electronically documented consultation results and recommendations increased significantly (75%) after the intervention. FMEA is an important and efficient tool for improving the consultation process in an academic medical center.
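The RPN used above follows the standard FMEA convention: the product of severity, occurrence, and detection scores, each conventionally rated 1-10. A sketch with hypothetical scores (the study's actual ratings are not reproduced here):

```python
def risk_priority_number(severity, occurrence, detection):
    """Classic FMEA RPN; each score is conventionally rated 1-10."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores are conventionally 1-10")
    return severity * occurrence * detection

# Hypothetical scores for the three failure-mode categories named above.
failure_modes = {
    "consult not documented in registry": (8, 6, 5),
    "junior orders consult without senior approval": (7, 5, 4),
    "telephone-only consultation request": (6, 4, 6),
}
ranked = sorted(failure_modes.items(),
                key=lambda kv: risk_priority_number(*kv[1]),
                reverse=True)
```

Sorting by RPN is what lets a team direct its interventional plan at the highest-risk failure modes first.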

  16. Construction and Validation of a Questionnaire about Heart Failure Patients' Knowledge of Their Disease

    PubMed Central

    Bonin, Christiani Decker Batista; dos Santos, Rafaella Zulianello; Ghisi, Gabriela Lima de Melo; Vieira, Ariany Marques; Amboni, Ricardo; Benetti, Magnus

    2014-01-01

    Background The lack of tools to measure heart failure patients' knowledge about their syndrome when participating in rehabilitation programs demonstrates the need for specific recommendations regarding the amount or content of information required. Objectives To develop and validate a questionnaire to assess heart failure patients' knowledge about their syndrome when participating in cardiac rehabilitation programs. Methods The tool was developed based on the Coronary Artery Disease Education Questionnaire and applied to 96 patients with heart failure, with a mean age of 60.22 ± 11.6 years, 64% being men. Reproducibility was assessed via the intraclass correlation coefficient, using the test-retest method. Internal consistency was assessed by use of Cronbach's alpha, and construct validity by use of exploratory factor analysis. Results The final version of the tool had 19 questions arranged in ten areas of importance for patient education. The proposed questionnaire had a clarity index of 8.94 ± 0.83. The intraclass correlation coefficient was 0.856, and Cronbach's alpha, 0.749. Factor analysis revealed five factors associated with the knowledge areas. Comparing the final scores with the characteristics of the population showed that low educational level and low income are significantly associated with low levels of knowledge. Conclusion The instrument has satisfactory clarity and validity indices and can be used to assess heart failure patients' knowledge about their syndrome when participating in cardiac rehabilitation programs. PMID:24652054
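The internal-consistency statistic reported above, Cronbach's alpha, can be computed directly from item-level scores. A self-contained sketch using population variances (this is the standard formula, not the authors' code):

```python
def cronbach_alpha(item_scores):
    """item_scores: one list per questionnaire item, each holding all
    respondents' scores for that item (same respondent order per item)."""
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):  # population variance, used consistently throughout
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))
```

Alpha approaches 1.0 when items covary strongly (respondents' per-item scores track their totals), which is why values around 0.75, as reported above, are read as acceptable internal consistency.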

  17. Adapting HIV patient and program monitoring tools for chronic non-communicable diseases in Ethiopia.

    PubMed

    Letebo, Mekitew; Shiferaw, Fassil

    2016-06-02

    Chronic non-communicable diseases (NCDs) have become a huge public health concern in developing countries. Many resource-poor countries facing this growing epidemic, however, lack systems for an organized and comprehensive response to NCDs. Lack of NCD national policy, strategies, treatment guidelines and surveillance and monitoring systems are features of health systems in many developing countries. Successfully responding to the problem requires a number of actions by the countries, including developing context-appropriate chronic care models and programs and standardization of patient and program monitoring tools. In this cross-sectional qualitative study we assessed existing monitoring and evaluation (M&E) tools used for NCD services in Ethiopia. Since HIV care and treatment program is the only large-scale chronic care program in the country, we explored the M&E tools being used in the program and analyzed how these tools might be adapted to support NCD services in the country. Document review and in-depth interviews were the main data collection methods used. The interviews were held with health workers and staff involved in data management purposively selected from four health facilities with high HIV and NCD patient load. Thematic analysis was employed to make sense of the data. Our findings indicate the apparent lack of information systems for NCD services, including the absence of standardized patient and program monitoring tools to support the services. We identified several HIV care and treatment patient and program monitoring tools currently being used to facilitate intake process, enrolment, follow up, cohort monitoring, appointment keeping, analysis and reporting. Analysis of how each tool being used for HIV patient and program monitoring can be adapted for supporting NCD services is presented. 
Given the similarity between HIV care and treatment and NCD services and the huge investment already made to implement standardized tools for HIV care and treatment program, adaptation and use of HIV patient and program monitoring tools for NCD services can improve NCD response in Ethiopia through structuring services, standardizing patient care and treatment, supporting evidence-based planning and providing information on effectiveness of interventions.

  18. A noncontact RF-based respiratory sensor: results of a clinical trial.

    PubMed

    Madsen, Spence; Baczuk, Jordan; Thorup, Kurt; Barton, Richard; Patwari, Neal; Langell, John T

    2016-06-01

    Respiratory rate (RR) is a critical vital sign monitored in health care settings. Current monitors suffer from sensor-contact failure, inaccurate data, and limited patient mobility. There is a critical need for an accurate, reliable, and noncontact system to monitor RR. We developed a contact-free radio frequency (RF)-based system that measures movement using WiFi signal diffraction, which is converted into interpretable data using a Fourier transform. Here, we investigate the system's ability to measure the fine movements associated with human respiration. Testing was conducted on subjects using visually cued, fixed-tempo instructions to breathe at standard RRs. Blinded instruction-based RRs were compared to RF-acquired data to determine measurement accuracy. The RF-based technology was then studied in postoperative ventilator-dependent patients. Blinded ventilator capnographic RR data were collected for each patient and compared to RF-acquired data to determine measurement accuracy. Respiratory rate data collected from 10 subjects breathing at a fixed RR (14, 16, 18, or 20 breaths/min) demonstrated 95.5% measurement accuracy between the patient's actual rate and that measured by our RF technology. Ten patients were enrolled in the clinical trial. Blinded ventilator capnographic RR data were compared to RF-acquired data; the RF-based data showed 88.8% measurement accuracy against ventilator capnography. Initial clinical pilot trials with our contact-free RF-based monitoring system demonstrate a high degree of RR measurement accuracy when compared to capnographic data. Based on these results, we believe RF-based systems present a promising noninvasive, inexpensive, and accurate tool for continuous RR monitoring. Copyright © 2016 Elsevier Inc. All rights reserved.
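The Fourier-transform step (movement signal to dominant frequency to breaths per minute) can be sketched with a naive DFT; the synthetic sinusoid below is an invented stand-in for the RF movement data:

```python
import math

def respiratory_rate_bpm(signal, sample_rate_hz):
    """Return breaths/min from the dominant frequency of a movement signal,
    found with a naive discrete Fourier transform (kept dependency-free)."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [x - mean for x in signal]
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2):  # skip DC, search up to Nyquist
        re = sum(centered[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(centered[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k * sample_rate_hz / n * 60.0

# Synthetic chest movement: 16 breaths/min sampled at 4 Hz for 60 s.
FS = 4.0
sig = [math.sin(2 * math.pi * (16 / 60.0) * (t / FS)) for t in range(240)]
```

A production system would use an FFT and confine the peak search to the physiological band (roughly 0.1-1 Hz); the brute-force DFT here is only for clarity.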

  19. [Electronic fetal monitoring and management of adverse outcomes: how to perform and improve a training program for clinicians?].

    PubMed

    Secourgeon, J-F

    2012-10-01

    Electronic fetal monitoring during labor is the most commonly used method to evaluate fetal status, but it remains exposed to criticism. Compared with intermittent auscultation, and in light of the results of the major studies of the last 30 years, it can be faulted for its failure to improve neonatal outcomes and for its role in the increase in operative deliveries. In practice, electronic fetal monitoring is a tool whose effectiveness depends on the accuracy of the clinician's analysis. Studies assessing tracing interpretation indicate a persistent lack of quality, which may be improved through training programs. They also show the benefit of fetal blood sampling in reducing operative deliveries, and the generalization of this method, in addition to electronic fetal monitoring, is recommended by referral agencies. More generally, continuous monitoring is only one part of the patient safety strategy in the labour ward, and in some European countries and in the United States training programs are being developed for the management of adverse outcomes in obstetrics. Good performance in quality of care is demonstrated by the findings of studies performed in centers that have implemented an active training policy. In France, professionals directly involved in perinatology should benefit from such educational programs, which could be organized within care networks under the authority of referral agencies. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  20. The Utility of a Wireless Implantable Hemodynamic Monitoring System in Patients Requiring Mechanical Circulatory Support.

    PubMed

    Feldman, David S; Moazami, Nader; Adamson, Philip B; Vierecke, Juliane; Raval, Nir; Shreenivas, Satya; Cabuay, Barry M; Jimenez, Javier; Abraham, William T; O'Connell, John B; Naka, Yoshifumi

    Proper timing of left ventricular assist device (LVAD) implantation in advanced heart failure patients is not well established and is an area of intense interest. In addition, optimizing LVAD performance after implantation remains difficult and represents a significant clinical need. Implantable hemodynamic monitoring systems may provide physicians with the physiologic information necessary to improve the timing of LVAD implantation as well as LVAD performance when compared with current methods. The CardioMEMS Heart Sensor Allows Monitoring of Pressures to Improve Outcomes in NYHA Class III Heart Failure Patients (CHAMPION) trial enrolled 550 previously hospitalized patients with New York Heart Association (NYHA) class III heart failure. All patients were implanted with a pulmonary artery (PA) pressure monitoring system and randomized to treatment and control groups. In the treatment group, physicians used the hemodynamic information to make heart failure management decisions; this information was not available to physicians for the control group. During an average randomized follow-up of 18 months, 27 patients required LVAD implantation. At the time of PA pressure sensor implantation, patients ultimately requiring advanced therapy had higher PA pressures, lower systemic pressure, and similar cardiac output measurements. Treatment and control patients in the LVAD subgroup had similar clinical profiles at the time of enrollment. There was a trend toward a shorter time to LVAD implantation in the treatment group, where hemodynamic information was available. After LVAD implantation, most treatment group patients continued to provide physicians with physiologic information from the hemodynamic monitoring system. As expected, PA pressures declined significantly post-LVAD implant in all patients, but the magnitude of decline was greater in patients with PA pressure monitoring. 
Implantable hemodynamic monitoring appeared to improve the timing of LVAD implantation as well as optimize LVAD performance when compared with current methods. Further studies are necessary to evaluate these findings in a prospective manner.

  1. Failure mode and effects analysis drastically reduced potential risks in clinical trial conduct

    PubMed Central

    Baik, Jungmi; Kim, Hyunjung; Kim, Rachel

    2017-01-01

    Background Failure mode and effects analysis (FMEA) is a risk management tool to proactively identify and assess the causes and effects of potential failures in a system, thereby preventing them from happening. The objective of this study was to evaluate effectiveness of FMEA applied to an academic clinical trial center in a tertiary care setting. Methods A multidisciplinary FMEA focus group at the Seoul National University Hospital Clinical Trials Center selected 6 core clinical trial processes, for which potential failure modes were identified and their risk priority number (RPN) was assessed. Remedial action plans for high-risk failure modes (RPN >160) were devised and a follow-up RPN scoring was conducted a year later. Results A total of 114 failure modes were identified with an RPN score ranging 3–378, which was mainly driven by the severity score. Fourteen failure modes were of high risk, 11 of which were addressed by remedial actions. Rescoring showed a dramatic improvement attributed to reduction in the occurrence and detection scores by >3 and >2 points, respectively. Conclusions FMEA is a powerful tool to improve quality in clinical trials. The Seoul National University Hospital Clinical Trials Center is expanding its FMEA capability to other core clinical trial processes. PMID:29089745

  2. Failure mode analysis to predict product reliability.

    NASA Technical Reports Server (NTRS)

    Zemanick, P. P.

    1972-01-01

    The failure mode analysis (FMA) is described as a design tool to predict and improve product reliability. The objectives of the failure mode analysis are presented as they influence component design, configuration selection, the product test program, the quality assurance plan, and engineering analysis priorities. The detailed mechanics of performing a failure mode analysis are discussed, including one suggested format. Some practical difficulties of implementation are indicated, drawn from experience with preparing FMAs on the nuclear rocket engine program.

  3. Fracture - An Unforgiving Failure Mode

    NASA Technical Reports Server (NTRS)

    Goodin, James Ronald

    2006-01-01

    During the 2005 Conference for the Advancement for Space Safety, after a typical presentation of safety tools, a Russian in the audience simply asked, "How does that affect the hardware?" Having participated in several International System Safety Conferences, I recalled that most attention is dedicated to safety tools and little, if any, to hardware. The intent of this paper on the hazard of fracture and failure modes associated with fracture is my attempt to draw attention to the grass roots of system safety - improving hardware robustness and resilience.

  4. Detection of Failure in Asynchronous Motor Using Soft Computing Method

    NASA Astrophysics Data System (ADS)

    Vinoth Kumar, K.; Sony, Kevin; Achenkunju John, Alan; Kuriakose, Anto; John, Ano P.

    2018-04-01

    This paper investigates stator short-winding failure of asynchronous motors and its effects on motor current spectra. A fuzzy logic approach, i.e., a model-based technique, can help detect asynchronous motor failure. Fuzzy logic resembles human reasoning and enables linguistic inference from vague data. A dynamic model of the asynchronous motor is developed with a fuzzy logic classifier to investigate stator inter-turn failure as well as open-phase failure. A hardware implementation was carried out with LabVIEW for online monitoring of faults.

  5. Field failure mechanisms for photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Dumas, L. N.; Shumka, A.

    1981-01-01

    Beginning in 1976, Department of Energy field centers have installed and monitored a number of field tests and application experiments using current state-of-the-art photovoltaic modules. On-site observations of module physical and electrical degradation, together with in-depth laboratory analysis of failed modules, permits an overall assessment of the nature and causes of early field failures. Data on failure rates are presented, and key failure mechanisms are analyzed with respect to origin, effect, and prospects for correction. It is concluded that all failure modes identified to date are avoidable or controllable through sound design and production practices.

  6. Rate of occurrence of failures based on a nonhomogeneous Poisson process: an ozone analyzer case study.

    PubMed

    de Moura Xavier, José Carlos; de Andrade Azevedo, Irany; de Sousa Junior, Wilson Cabral; Nishikawa, Augusto

    2013-02-01

    Atmospheric pollutant monitoring is a fundamental activity in public policies concerning air quality. In São Paulo State, Brazil, the São Paulo State Environment Company (CETESB) maintains an automatic network which continuously monitors CO, SO(2), NO(x), O(3), and particulate matter concentrations in the air. The accuracy of the monitoring process is a fundamental condition for the actions to be taken by CETESB. As one of the support systems, a preventive maintenance program for the different analyzers used is part of the data quality strategy. Knowledge of the behavior of analyzer failure times could help optimize the program. To achieve this goal, the failure times of an ozone analyzer, considered a repairable system, were modeled by means of a nonhomogeneous Poisson process. The rate of occurrence of failures (ROCOF) was estimated for the intervals 0-70,800 h and 0-88,320 h, in which six and seven failures were observed, respectively. The results showed that the ROCOF estimate is influenced by the choice of the observation period, t(0) = 70,800 h and t(7) = 88,320 h in the cases analyzed. The identification of preventive maintenance actions, mainly when parts replacement occurs in the last interval of observation, is highlighted, as it explains the change in the behavior of the inter-arrival times. A follow-up on each analyzer is recommended in order to record the impact of the preventive maintenance program on the enhancement of its useful life.
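For a power-law NHPP (the Crow-AMSAA model commonly used for repairable systems), the time-truncated maximum-likelihood ROCOF has a closed form; a sketch with hypothetical failure times, since the analyzer's actual record is not reproduced in the abstract:

```python
import math

def powerlaw_rocof(failure_times, observation_end):
    """Time-truncated MLE for the power-law NHPP (Crow-AMSAA):
    beta_hat = n / sum(ln(T / t_i)); ROCOF at T is n * beta_hat / T."""
    n = len(failure_times)
    beta = n / sum(math.log(observation_end / t) for t in failure_times)
    rocof = n * beta / observation_end
    return beta, rocof

# Hypothetical failure times in hours (six failures before 70,800 h).
times = [9000, 21000, 34000, 47000, 58000, 66000]
beta_hat, rocof_hat = powerlaw_rocof(times, 70800)
```

Because both beta_hat and the ROCOF depend on the truncation time T, extending the observation window changes the estimate, which is the sensitivity to the observation period that the abstract reports.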

  7. Structural health monitoring and damage evaluation for steel confined reinforced concrete column using the acoustic emission technique

    NASA Astrophysics Data System (ADS)

    Du, Fangzhu; Li, Dongsheng

    2018-03-01

    As a new type of composite structure, steel-confined reinforced concrete columns are attracting increasing attention in civil engineering. During the damage process, this structure exhibits a highly complex and invisible failure mechanism because of the combined effects of the steel tube, concrete, and steel rebar. The acoustic emission (AE) technique has been extensively studied in nondestructive testing (NDT) and is currently applied in civil engineering for structural health monitoring (SHM) and damage evaluation. In the present study, the damage properties and failure evolution of steel-confined and unconfined reinforced concrete (RC) columns were investigated under quasi-static loading through AE signals. Significantly improved loading capacity and excellent energy dissipation characteristics demonstrated the practicality of the proposed structure. AE monitoring results indicated that the progressive deformation of the test specimens occurs in three stages representing different damage conditions. The sentry function, the logarithm of the ratio between the stored strain energy (Es) and the released acoustic energy (Ea), explicitly discloses the damage growth and failure mechanism of the test specimens. Other extended AE features, including an index of damage (ID) and a relax ratio, are calculated to quantitatively evaluate the damage severity and critical points. The complicated temporal evolution of different AE features confirms the importance of integrated analysis of two or more parameters. The proposed multi-indicator analysis is capable of revealing the damage growth and failure mechanism of steel-confined RC columns and of providing critical warning information before structural failure.
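    The sentry function described above reduces to a simple log ratio; a minimal sketch, with made-up energy histories since the specimen data are not given here:

    ```python
    import math

    def sentry(strain_energy, acoustic_energy):
        """Sentry function: log10 of the ratio of stored strain energy Es to
        cumulative released acoustic energy Ea (both must be positive).
        Rising values indicate an energy-storing phase; sudden drops flag
        damage events where acoustic energy is released."""
        return [math.log10(es / ea)
                for es, ea in zip(strain_energy, acoustic_energy)]

    # Hypothetical loading history (arbitrary units): Es grows with load,
    # while Ea jumps when AE bursts accompany cracking.
    Es = [1.0, 4.0, 9.0, 16.0, 25.0]
    Ea = [0.1, 0.2, 0.4, 4.0, 8.0]
    s = sentry(Es, Ea)
    # A drop in s between successive points marks a damage event.
    ```

    In the study's terms, the sequence of rises and drops in s over the loading history is what discloses the three-stage damage progression.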

  8. Cardio-Pulmonary Stethoscope: Clinical Validation With Heart Failure and Hemodialysis Patients.

    PubMed

    Iskander, Magdy F; Seto, Todd B; Perron, Ruthsenne Rg; Lim, Eunjung; Qazi, Farhan

    2018-05-01

    The purpose of this study is to evaluate the accuracy of a noninvasive radiofrequency-based device, the Cardio-Pulmonary Stethoscope (CPS), in monitoring heart and respiration rates and detecting changes in lung water content in human experiments and clinical trials. Three human populations (healthy subjects ( ), heart failure ( ), and hemodialysis ( ) patients) were enrolled in this study, conducted at the University of Hawaii and the Queen's Medical Center in Honolulu, HI, USA. Measurements of heart and respiration rates for all patients were compared with standard FDA-approved monitoring methods. For lung water measurements, CPS data were compared with simultaneous pulmonary capillary wedge pressure (PCWP) measurements for heart failure patients, and with change in weight of extracted fluid for hemodialysis patients. Statistical correlation methods (Pearson, mixed, and intraclass) were used to compare the data and examine the accuracy of CPS results. Results show that heart and respiration rates of all patients have excellent correlation factors, r ≥ 0.9. Comparisons with fluid removed during hemodialysis treatment showed a correlation factor of to 1, while PCWP measurements of heart failure patients had a correlation factor of to 0.97. These results suggest that CPS technology accurately quantifies heart and respiration rates and measures fluid changes in the lungs. The CPS has the potential to monitor lung fluid status noninvasively and continuously in clinical and outpatient settings. Early and efficient management of lung fluid status is key in managing chronic conditions such as heart failure, pulmonary hypertension, and acute respiratory distress syndrome.

  9. Experimental investigation on the fracture behaviour of black shale by acoustic emission monitoring and CT image analysis during uniaxial compression

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Li, C. H.; Hu, Y. Z.

    2018-04-01

    Numerous mechanical experiments have investigated the deformation and failure characteristics of shale; however, the anisotropic failure mechanism has not been well studied. Here, laboratory uniaxial compressive strength tests were performed on cylindrical shale samples drilled at different inclinations to the bedding plane. The failure behaviour of the shale samples was studied by real-time acoustic emission (AE) monitoring and post-test X-ray computed tomography (CT) analysis. The experimental results suggest that the pronounced bedding planes of shale have a great influence on the mechanical properties and AE patterns. The AE count and AE cumulative energy release curves clearly demonstrate different morphologies, and the 'U'-shaped relationship between the AE counts, AE cumulative energy release, and bedding inclination is documented here for the first time. The post-test CT image analysis shows the crack patterns via 2-D image reconstructions, and an index of stimulated fracture density is defined to represent the anisotropic failure mode of shale. Most strikingly, the AE monitoring results are in good agreement with the CT analysis. The structural difference in the shale samples is the controlling factor behind the anisotropy of the AE patterns. The pronounced bedding structure in the shale formation results in anisotropy of elasticity, strength, and AE information, of which the changes in strength dominate the overall failure pattern of the shale samples.

  10. TEAM-HF Cost-Effectiveness Model: A Web-Based Program Designed to Evaluate the Cost-Effectiveness of Disease Management Programs in Heart Failure

    PubMed Central

    Reed, Shelby D.; Neilson, Matthew P.; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H.; Polsky, Daniel E.; Graham, Felicia L.; Bowers, Margaret T.; Paul, Sara C.; Granger, Bradi B.; Schulman, Kevin A.; Whellan, David J.; Riegel, Barbara; Levy, Wayne C.

    2015-01-01

    Background Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. Methods We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure (TEAM-HF) Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics, use of evidence-based medications, and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model (SHFM). Projections of resource use and quality of life are modeled using relationships with time-varying SHFM scores. The model can be used to evaluate parallel-group and single-cohort designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. Results The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. Conclusion The TEAM-HF Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. PMID:26542504
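    The simulation idea behind the model, paired virtual cohorts summarized as an incremental cost-effectiveness ratio, can be sketched with toy distributions; none of the numbers below are TEAM-HF inputs:

    ```python
    import random

    def simulate_icer(n_sims=10000, seed=42):
        """Toy sketch of the paired-cohort simulation idea: draw cost and
        QALY outcomes for a 'program' and a 'usual care' cohort pair in each
        simulation and summarize the incremental cost-effectiveness ratio
        (ICER). All distributions here are hypothetical illustrations."""
        rng = random.Random(seed)
        d_costs, d_qalys = [], []
        for _ in range(n_sims):
            usual_cost = rng.gauss(40000, 5000)
            usual_qaly = rng.gauss(4.0, 0.5)
            prog_cost = usual_cost + rng.gauss(6000, 1500)   # program adds cost
            prog_qaly = usual_qaly + rng.gauss(0.30, 0.10)   # and adds QALYs
            d_costs.append(prog_cost - usual_cost)
            d_qalys.append(prog_qaly - usual_qaly)
        mean_dc = sum(d_costs) / n_sims
        mean_dq = sum(d_qalys) / n_sims
        return mean_dc / mean_dq  # ICER: dollars per QALY gained

    icer = simulate_icer()
    ```

    The actual model drives both arms with time-varying Seattle Heart Failure Model scores rather than fixed distributions; the pairing across 10,000 simulations is what this sketch reproduces.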

  11. Unified continuum damage model for matrix cracking in composite rotor blades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pollayi, Hemaraju; Harursampath, Dineshkumar

    This paper deals with modeling of the first damage mode, matrix micro-cracking, in helicopter rotor/wind turbine blades and how it affects the overall cross-sectional stiffness. The helicopter/wind turbine rotor system operates in a highly dynamic and unsteady environment, leading to severe vibratory loads. Repeated exposure to this loading condition can induce damage in the composite rotor blades. These rotor/turbine blades are generally made of fiber-reinforced laminated composites and exhibit various competing modes of damage such as matrix micro-cracking, delamination, and fiber breakage. There is a need to study the behavior of the composite rotor system under various key damage modes in composite materials for developing a Structural Health Monitoring (SHM) system. Each blade is modeled as a beam based on geometrically non-linear 3-D elasticity theory, splitting into 2-D analyses of cross-sections and non-linear 1-D analyses along the beam reference curves. Two different tools are used for the complete 3-D analysis: VABS for the 2-D cross-sectional analysis and GEBT for the 1-D beam analysis. Physically based failure models for the matrix in compression and tension loading are used in the present work. Matrix cracking is detected using two failure criteria, Matrix Failure in Compression and Matrix Failure in Tension, which are based on the recovered field. A strain variable drives the damage variable for matrix cracking, and this damage variable is used to estimate the reduced cross-sectional stiffness. The matrix micro-cracking analysis is performed in two different approaches: (i) element-wise and (ii) node-wise. The procedure presented in this paper is implemented in VABS as a matrix micro-cracking modeling module. Three examples investigating the matrix failure model illustrate the effect of matrix cracking on cross-sectional stiffness under varying applied cyclic load.
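    The strain-driven stiffness-reduction step can be illustrated with a deliberately simple linear damage law; the paper's physically based matrix failure criteria are more elaborate, so treat this purely as a sketch of the mechanism:

    ```python
    def damage_variable(strain, eps0, eps_f):
        """Scalar damage d in [0, 1] driven by a strain measure: zero below
        the initiation strain eps0, growing linearly to 1 at the failure
        strain eps_f. This linear law is illustrative only."""
        if strain <= eps0:
            return 0.0
        return min(1.0, (strain - eps0) / (eps_f - eps0))

    def degraded_stiffness(k0, d):
        """Reduced cross-sectional stiffness under continuum damage."""
        return (1.0 - d) * k0

    # Hypothetical values: initiation at 0.2% strain, failure at 1.0% strain.
    d = damage_variable(0.006, eps0=0.002, eps_f=0.01)
    k = degraded_stiffness(100.0, d)   # stiffness in arbitrary units
    ```

    In the paper's workflow the damage variable would be updated element-wise or node-wise from the recovered strain field, then fed back into the VABS cross-sectional stiffness.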

  12. Fluid status monitoring with a wireless network to reduce cardiovascular-related hospitalizations and mortality in heart failure: rationale and design of the OptiLink HF Study (Optimization of Heart Failure Management using OptiVol Fluid Status Monitoring and CareLink).

    PubMed

    Brachmann, Johannes; Böhm, Michael; Rybak, Karin; Klein, Gunnar; Butter, Christian; Klemm, Hanno; Schomburg, Rolf; Siebermair, Johannes; Israel, Carsten; Sinha, Anil-Martin; Drexler, Helmut

    2011-07-01

    The Optimization of Heart Failure Management using OptiVol Fluid Status Monitoring and CareLink (OptiLink HF) study is designed to investigate whether OptiVol fluid status monitoring with an automatically generated wireless CareAlert notification via the CareLink Network can reduce all-cause death and cardiovascular hospitalizations in an HF population, compared with standard clinical assessment. Methods Patients with newly implanted or replacement cardioverter-defibrillator devices with or without cardiac resynchronization therapy, who have chronic HF in New York Heart Association class II or III and a left ventricular ejection fraction ≤35% will be eligible to participate. Following device implantation, patients are randomized to either OptiVol fluid status monitoring through CareAlert notification or regular care (OptiLink 'on' vs. 'off'). The primary endpoint is a composite of all-cause death or cardiovascular hospitalization. It is estimated that 1000 patients will be required to demonstrate superiority of the intervention group to reduce the primary outcome by 30% with 80% power. The OptiLink HF study is designed to investigate whether early detection of congestion reduces mortality and cardiovascular hospitalization in patients with chronic HF. The study is expected to close recruitment in September 2012 and to report first results in May 2014.
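    The quoted sample size (1000 patients, i.e. roughly 500 per arm) and 80% power for a 30% reduction can be sanity-checked by simulation once a control event rate is assumed; the abstract does not state one, so 25% is used below purely for illustration:

    ```python
    import math, random

    def power_two_proportions(p0, rr, n_per_arm, n_sims=2000, alpha=0.05, seed=1):
        """Simulation-based power for detecting a relative risk reduction rr
        with a two-sided two-proportion z-test. The control event rate p0 is
        an assumption, not a figure from the OptiLink HF abstract."""
        rng = random.Random(seed)
        p1 = p0 * (1 - rr)
        z_crit = 1.96  # two-sided alpha = 0.05
        hits = 0
        for _ in range(n_sims):
            e0 = sum(rng.random() < p0 for _ in range(n_per_arm))
            e1 = sum(rng.random() < p1 for _ in range(n_per_arm))
            ph0, ph1 = e0 / n_per_arm, e1 / n_per_arm
            pooled = (e0 + e1) / (2 * n_per_arm)
            se = math.sqrt(pooled * (1 - pooled) * 2 / n_per_arm)
            if se > 0 and abs(ph0 - ph1) / se > z_crit:
                hits += 1
        return hits / n_sims

    power = power_two_proportions(p0=0.25, rr=0.30, n_per_arm=500)
    ```

    With these assumptions the simulated power lands near the 80% target; the trial's own calculation would use event-time methods for the composite endpoint rather than this simple binomial approximation.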

  13. FRP debonding monitoring using OTDR techniques

    NASA Astrophysics Data System (ADS)

    Hou, Shuang; Cai, C. S. Steve; Ou, Jinping

    2009-07-01

    Debonding has been reported as the dominant failure mode for FRP strengthening in flexure. This paper explores a novel debonding monitoring method for FRP-strengthened structures by means of OTDR-based fiber optic technology. Interface slip, a key factor in debonding failures, is measured through sensing optic fibers instrumented at the interface between FRP and concrete, oriented perpendicular to the FRP filaments. Slip in the interface induces power losses in the optic fiber signal at the intersection of the FRP strip and the sensing optic fiber, and the signal change is detected with an OTDR device. FRP double shear tests and three-point bending tests were conducted to verify the effectiveness of the proposed monitoring method. It is found that early debonding can be detected before it causes interface failure. The sensing optic fiber shows signal changes at slip values of about 36-156 micrometers, which is beyond the sensing capacity of conventional sensors. The test results show that the proposed method is feasible for slip measurement with high sensitivity and would be cost effective because of the low price of the sensors used, showing its potential for large-scale application in civil infrastructure, especially bridges.

  14. Biological variation of the natriuretic peptides and their role in monitoring patients with heart failure.

    PubMed

    Wu, Alan H B; Smith, Andrew

    2004-03-15

    B-type natriuretic peptide (BNP) and its inactive metabolite NT-proBNP are proven tests for diagnosis and staging of severity in patients with heart failure. However, the utility of these biomarkers for monitoring the success of drug therapy remains to be determined; results of longitudinal studies on serial blood testing must be linked to overall patient morbidity and mortality outcomes. We previously determined the 8-week biological variability (BV) of BNP and NT-proBNP assays in healthy subjects and the 1-day BV of BNP alone in patients with compensated, stable heart failure. From these studies, the statistically significant change between serial samples was estimated at approximately a 100% difference (95% confidence). We applied these biological variability concepts to serial BNP and NT-proBNP results collected from patients with heart failure and compared the performance of the two markers. While there are minor differences between the assays from one time period to another, the overall interpretation of results is essentially identical. Moreover, the majority of individual serial time points are not significantly different from the previous value. Frequent testing (e.g. daily) of BNP and NT-proBNP to monitor therapy for patients with CHF is not indicated, as overall changes require several days to become evident.
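    The ~100% significant-change threshold follows the standard reference change value formula, RCV = sqrt(2) · z · sqrt(CVa² + CVi²), combining analytical and within-subject biological variation. A sketch with illustrative CVs (not the study's measured values):

    ```python
    import math

    def rcv(cv_analytical, cv_intraindividual, z=1.96):
        """Reference change value (two-sided, 95% confidence): the minimum
        percent difference between serial results that exceeds combined
        analytical (CVa) and within-subject biological (CVi) variation."""
        return math.sqrt(2) * z * math.sqrt(
            cv_analytical ** 2 + cv_intraindividual ** 2)

    # Illustrative CVs in percent; the study's exact values are in the paper.
    change = rcv(cv_analytical=10.0, cv_intraindividual=35.0)
    # A serial BNP change smaller than `change` percent is within the
    # expected analytical-plus-biological noise.
    ```

    With a within-subject CV in the 30-40% range typical of the natriuretic peptides, the formula lands close to the ~100% difference the abstract cites.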

  15. Respiratory Compromise as a New Paradigm for the Care of Vulnerable Hospitalized Patients.

    PubMed

    Morris, Timothy A; Gay, Peter C; MacIntyre, Neil R; Hess, Dean R; Hanneman, Sandra K; Lamberti, James P; Doherty, Dennis E; Chang, Lydia; Seckel, Maureen A

    2017-04-01

    Acute respiratory compromise describes a deterioration in respiratory function with a high likelihood of rapid progression to respiratory failure and death. Identifying patients at risk for respiratory compromise coupled with monitoring of patients who have developed respiratory compromise might allow earlier interventions to prevent or mitigate further decompensation. The National Association for the Medical Direction of Respiratory Care (NAMDRC) organized a workshop meeting with representation from many national societies to address the unmet needs of respiratory compromise from a clinical practice perspective. Respiratory compromise may arise de novo or may complicate preexisting lung disease. The group identified distinct subsets of respiratory compromise that present similar opportunities for early detection and useful intervention to prevent respiratory failure. The subtypes were characterized by the pathophysiological mechanisms they had in common: impaired control of breathing, impaired airway protection, parenchymal lung disease, increased airway resistance, hydrostatic pulmonary edema, and right-ventricular failure. Classification of acutely ill respiratory patients into one or more of these categories may help in selecting the screening and monitoring strategies that are most appropriate for the patient's particular pathophysiology. Standardized screening and monitoring practices for patients with similar mechanisms of deterioration may enhance the ability to predict respiratory failure early and prevent its occurrence. Copyright © 2017 by Daedalus Enterprises.

  16. An Acuity Tool for Heart Failure Case Management: Quantifying Workload, Service Utilization, and Disease Severity.

    PubMed

    Kilgore, Matthew D

    The cardiology service line director at a health maintenance organization (HMO) in Washington State required a valid, reliable, and practical means for measuring workloads and other productivity factors for six heart failure (HF) registered nurse case managers located across three geographical regions. The Kilgore Heart Failure Case Management (KHFCM) Acuity Tool was systematically designed, developed, and validated to measure workload as a dependent function of the number of heart failure case management (HFCM) services rendered and the duration of time spent on various care duties. Research and development occurred at various HMO-affiliated internal medicine and cardiology offices throughout Western Washington. The concepts, methods, and principles used to develop the KHFCM Acuity Tool are applicable for any type of health care professional aiming to quantify workload using a high-quality objective tool. The content matter, scaling, and language on the KHFCM Acuity Tool are specific to HFCM settings. The content matter and numeric scales for the KHFCM Acuity Tool were developed and validated using a mixed-method participant action research method applied to a group of six outpatient HF case managers and their respective caseloads. The participant action research method was selected because its application requires research participants to become directly involved in the diagnosis of research problems, the planning and execution of actions taken to address those problems, and the implementation of progressive strategies throughout the course of the study, as necessary, to produce the most credible and practical practice improvements. Heart failure case managers served clients with New York Heart Association Functional Class III-IV HF, and encounters were conducted primarily by telephone or in-office consultation. 
A mix of qualitative and quantitative results demonstrated a variety of quality improvement outcomes achieved by the design and practice application of the KHFCM Acuity Tool. Quality improvement outcomes included a more valid reflection of encounter times and demonstration of the KHFCM Acuity Tool as a reliable, practical, credible, and satisfying tool for reflecting HF case manager workloads and HF disease severity. The KHFCM Acuity Tool defines workload simply as a function of the number of HFCM services performed and the duration of time spent on a client encounter. The design of the tool facilitates the measure of workload, service utilization, and HF disease characteristics, independently from the overall measure of acuity, so that differences in individual case manager practice, as well as client characteristics within sites, across sites, and potentially throughout annual seasons, can be demonstrated. Data produced from long-term applications of the KHFCM Acuity Tool, across all regions, could serve as a driver for establishing systemwide HFCM productivity benchmarks or standards of practice for HF case managers. Data produced from localized applications could serve as a reference for coordinating staffing resources or developing HFCM productivity benchmarks within individual regions or sites.

  17. The use of spore strips for monitoring the sterilization of bottled fluids.

    PubMed Central

    Selkon, J. B.; Sisson, P. R.; Ingham, H. R.

    1979-01-01

    A bacterial spore test has been developed which enables the efficacy of the sterilizing cycle recommended by the British Pharmaceutical Codex (1973) for bottled fluids to be accurately monitored. During a 14-month period this test detected faults in 3.3% of the sterilizing cycles, representing five distinct episodes of sterilization failure that passed unnoticed by the conventional controls of physical measurements and sterility testing. There were no failures of sterilization as detected by conventional techniques which were not indicated by the spore test. PMID:458140

  18. Proceedings of the IDA Workshop on Formal Specification and Verification of Ada (Trade Name) (3rd) Held in Research Triangle Park, North Carolina on 14-16 May 1986

    DTIC Science & Technology

    1986-08-01

    sensitivity to software or hardware failures (bit transformation, register perversion, interface failures, etc.) which could cause the system to operate in a...of systems. She pointed to the need for safety concerns in a continually growing number of computer applications (e.g., monitor and/or control of...informal, definition. Finally, the definition is based on the SMoLCS (Structured Monitored Linear Concurrent Systems) methodology, an approach to the

  19. NASA Prototype All Composite Tank Cryogenic Pressure Tests to Failure with Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Werlink, Rudolph J.; Pena, Francisco

    2015-01-01

    This paper describes the results of pressurization-to-failure tests of 100-gallon composite tanks using liquid nitrogen. Advanced methods of health monitoring are compared with one another and the experimental data are compared to a finite element model. The testing is wholly under NASA direction and includes unique PZT (lead zirconate titanate) based active vibration technology. Other technologies include fiber-optic strain-based systems (including NASA AFRC technology), acoustic emission, and Acellent smart sensors. This work is expected to lead to a practical in-situ monitoring system for composite tanks.

  20. Application of Advanced Nondestructive Evaluation Techniques for Cylindrical Composite Test Samples

    NASA Technical Reports Server (NTRS)

    Martin, Richard E.; Roth, Donald J.; Salem, Jonathan A.

    2013-01-01

    Two nondestructive methods were applied to composite cylinder samples pressurized to failure in order to determine manufacturing quality and monitor damage progression under load. A unique computed tomography (CT) image processing methodology developed at NASA Glenn Research Center was used to assess the condition of the as-received samples, while acoustic emission (AE) monitoring was used to identify both the extent and location of damage within the samples up to failure. Results show the effectiveness of both methods in identifying potentially critical fabrication issues and their resulting impact on performance.

  1. CD4 count-based failure criteria combined with viral load monitoring may trigger worse switch decisions than viral load monitoring alone.

    PubMed

    Hoffmann, Christopher J; Maritz, Jean; van Zyl, Gert U

    2016-02-01

    CD4 count decline often triggers antiretroviral regimen switches in resource-limited settings, even when viral load testing is available. We therefore compared CD4 failure and CD4 trends in patients with viraemia with or without antiretroviral resistance. This retrospective cohort study investigated the association of HIV drug resistance with CD4 failure and CD4 trends in patients on first-line antiretroviral regimens during viraemia. Patients with viraemia (HIV RNA >1000 copies/ml) from two HIV treatment programmes in South Africa (n = 350) were included. We investigated the association of M184V and NNRTI resistance with WHO immunological failure criteria and CD4 count trends, using chi-square tests and linear mixed models. Fewer patients with the M184V mutation reached immunological failure criteria than those without: 51 of 151 (34%) vs. 90 of 199 (45%) (P = 0.03). Similarly, 79 of 220 (36%) patients with major NNRTI resistance had immunological failure, compared with 62 of 130 (48%) without (chi-square P = 0.03). The CD4 count decline among patients with the M184V mutation was 2.5 cells/mm³/year, whereas in those without M184V it was 14 cells/mm³/year (P = 0.1); the difference in CD4 count decline with and without NNRTI resistance was marginal. Our data suggest that CD4 count monitoring may lead to inappropriately delayed therapy switches for patients with HIV drug resistance. Conversely, patients with viraemia but no drug resistance are more likely to have a CD4 count decline and thus may be more likely to be switched to a second-line regimen. © 2015 John Wiley & Sons Ltd.
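    The reported comparison of immunological failure proportions is an ordinary 2x2 chi-square test, which can be reproduced from the counts in the abstract:

    ```python
    def chi2_2x2(a, b, c, d):
        """Pearson chi-square statistic (no continuity correction) for the
        2x2 table [[a, b], [c, d]]; compare against 3.84, the critical
        value for P < 0.05 at 1 degree of freedom."""
        n = a + b + c + d
        num = n * (a * d - b * c) ** 2
        den = (a + b) * (c + d) * (a + c) * (b + d)
        return num / den

    # M184V present: 51 of 151 reached immunological failure (100 did not);
    # M184V absent: 90 of 199 reached immunological failure (109 did not).
    stat = chi2_2x2(51, 100, 90, 109)
    significant = stat > 3.84   # consistent with the reported P = 0.03
    ```

    The statistic works out to roughly 4.7, matching the paper's P = 0.03 for the M184V comparison.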

  2. Ares I-X Ground Diagnostic Prototype

    NASA Technical Reports Server (NTRS)

    Schwabacher, Mark; Martin, Rodney; Waterman, Robert; Oostdyk, Rebecca; Ossenfort, John; Matthews, Bryan

    2010-01-01

    Automating prelaunch diagnostics for launch vehicles offers three potential benefits. First, it potentially improves safety by detecting faults that might otherwise have been missed so that they can be corrected before launch. Second, it potentially reduces launch delays by more quickly diagnosing the cause of anomalies that occur during prelaunch processing. Reducing launch delays will be critical to the success of NASA's planned future missions that require in-orbit rendezvous. Third, it potentially reduces costs by reducing both launch delays and the number of people needed to monitor the prelaunch process. NASA is currently developing the Ares I launch vehicle to bring the Orion capsule and its crew of four astronauts to low-earth orbit on their way to the moon. Ares I-X will be the first unmanned test flight of Ares I. It is scheduled to launch on October 27, 2009. The Ares I-X Ground Diagnostic Prototype is a prototype ground diagnostic system that will provide anomaly detection, fault detection, fault isolation, and diagnostics for the Ares I-X first-stage thrust vector control (TVC) and for the associated ground hydraulics while it is in the Vehicle Assembly Building (VAB) at John F. Kennedy Space Center (KSC) and on the launch pad. It will serve as a prototype for a future operational ground diagnostic system for Ares I. The prototype combines three existing diagnostic tools. The first tool, TEAMS (Testability Engineering and Maintenance System), is a model-based tool that is commercially produced by Qualtech Systems, Inc. It uses a qualitative model of failure propagation to perform fault isolation and diagnostics. We adapted an existing TEAMS model of the TVC to use for diagnostics and developed a TEAMS model of the ground hydraulics. The second tool, Spacecraft Health Inference Engine (SHINE), is a rule-based expert system developed at the NASA Jet Propulsion Laboratory. We developed SHINE rules for fault detection and mode identification. 
The prototype uses the outputs of SHINE as inputs to TEAMS. The third tool, the Inductive Monitoring System (IMS), is an anomaly detection tool developed at NASA Ames Research Center and is currently used to monitor the International Space Station Control Moment Gyroscopes. IMS automatically "learns" a model of historical nominal data in the form of a set of clusters and signals an alarm when new data fails to match this model. IMS offers the potential to detect faults that have not been modeled. The three tools have been integrated and deployed to Hangar AE at KSC where they interface with live data from the Ares I-X vehicle and from the ground hydraulics. The outputs of the tools are displayed on a console in Hangar AE, one of the locations from which the Ares I-X launch will be monitored. The full paper will describe how the prototype performed before the launch. It will include an analysis of the prototype's accuracy, including false-positive rates, false-negative rates, and receiver operating characteristics (ROC) curves. It will also include a description of the prototype's computational requirements, including CPU usage, main memory usage, and disk usage. If the prototype detects any faults during the prelaunch period then the paper will include a description of those faults. Similarly, if the prototype has any false alarms then the paper will describe them and will attempt to explain their causes.
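    The IMS approach described above, learn clusters from nominal data and alarm on points that fit none of them, can be sketched with a plain k-means model; this is a toy illustration of the idea, not the actual IMS algorithm:

    ```python
    import math

    def dist(p, q):
        """Euclidean distance between two equal-length vectors."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    def assign(points, centers):
        """Group each point with its nearest center."""
        groups = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)), key=lambda i: dist(p, centers[i]))
            groups[j].append(p)
        return groups

    def learn_clusters(nominal, k=2, iters=10):
        """Learn a model of nominal telemetry as k clusters (plain k-means)
        plus each cluster's radius (max member distance to its center)."""
        centers = [tuple(p) for p in nominal[:k]]
        for _ in range(iters):
            groups = assign(nominal, centers)
            centers = [
                tuple(sum(p[d] for p in g) / len(g) for d in range(len(g[0])))
                if g else centers[i]
                for i, g in enumerate(groups)
            ]
        groups = assign(nominal, centers)
        radii = [max((dist(p, centers[i]) for p in g), default=0.0)
                 for i, g in enumerate(groups)]
        return centers, radii

    def is_anomaly(point, centers, radii, margin=0.5):
        """Alarm when the point lies outside every learned cluster."""
        return all(dist(point, c) > r + margin
                   for c, r in zip(centers, radii))

    # Hypothetical two-mode nominal data (e.g. two sensor operating regimes).
    nominal = [(0.0, 0.0), (0.2, 0.1), (-0.1, 0.2),
               (5.0, 5.0), (5.2, 4.9), (4.8, 5.1)]
    centers, radii = learn_clusters(nominal, k=2)
    ```

    A new telemetry vector near either learned regime passes silently, while one between or beyond the regimes raises an alarm; this is what lets the approach flag faults that were never explicitly modeled.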

  3. AgroClimate: Simulating and Monitoring the Risk of Extreme Weather Events from a Crop Phenology Perspective

    NASA Astrophysics Data System (ADS)

    Fraisse, C.; Pequeno, D.; Staub, C. G.; Perry, C.

    2016-12-01

    Climate variability, particularly the occurrence of extreme weather conditions such as dry spells and heat stress during sensitive crop developmental phases, can substantially increase the prospect of reduced crop yields. Yield losses and crop failure risk due to stressful weather conditions vary mainly with stress severity, exposure timing, and duration. The magnitude of stress effects is also crop specific, differing in terms of thresholds and adaptation to environmental conditions. To help producers in the Southeast USA mitigate and monitor the risk of crop losses due to extreme weather events, we developed a web-based tool that evaluates the risk of extreme weather events during the season while taking the crop development stages into account. Producers can enter their plans for the upcoming season in a given field (e.g. crop, variety, planting date, acreage), optionally select a specific El Niño Southern Oscillation (ENSO) phase, and are presented with the probabilities (0-100%) of extreme weather events occurring during sensitive phases of the growing season for the selected conditions. The phenology components of the DSSAT models CERES-Maize, CROPGRO-Soybean, CROPGRO-Cotton, and N-Wheat have been translated from FORTRAN to standalone versions in the R language. These models were tested in collaboration with Extension faculty and producers during the 2016 season, and their usefulness for risk mitigation and monitoring was evaluated. A companion AgroClimate app was also developed to help producers track and monitor phenological development during the cropping season.
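    The event probabilities served by such a tool can be thought of as empirical frequencies over historical years, conditioned on a phenological window; a sketch with hypothetical temperature data (the real tool derives the windows from the DSSAT phenology models):

    ```python
    def extreme_event_probability(yearly_tmax, window, threshold):
        """Empirical probability (in percent) that the daily maximum
        temperature reaches `threshold` at least once inside a sensitive
        phenological window (a day-index slice), estimated over historical
        years. Here the window is fixed by hand for illustration."""
        start, end = window
        hits = sum(any(t >= threshold for t in year[start:end + 1])
                   for year in yearly_tmax)
        return 100.0 * hits / len(yearly_tmax)

    # Five hypothetical years of a 10-day max-temperature series (deg C);
    # the sensitive window spans days 3-6 and heat stress begins at 35 C.
    years = [
        [30, 31, 32, 33, 34, 33, 32, 31, 30, 29],
        [30, 31, 33, 36, 34, 33, 32, 31, 30, 29],
        [28] * 10,
        [30, 30, 30, 30, 35, 30, 30, 30, 30, 30],
        [30] * 10,
    ]
    prob = extreme_event_probability(years, window=(3, 6), threshold=35)
    ```

    Conditioning the historical years on an ENSO phase, as the tool allows, simply restricts `yearly_tmax` to the matching subset before the frequency is computed.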

  4. Monitoring and Follow-up of Chronic Heart Failure: a Literature Review of eHealth Applications and Systems.

    PubMed

    de la Torre Díez, Isabel; Garcia-Zapirain, Begoña; Méndez-Zorrilla, Amaia; López-Coronado, Miguel

    2016-07-01

    In developed countries, heart failure is one of the most important causes of death, followed closely by strokes and other cerebrovascular diseases. It is one of the major healthcare issues in terms of the growing number of patients, the rate of hospitalizations, and costs. The main aim of this paper is to present telemedicine applications for monitoring and follow-up of heart failure and to show how these systems can help reduce the costs of managing heart failure. The search for e-health applications and systems in the field of telemonitoring of heart failure was conducted in the IEEE Xplore, Science Direct, PubMed and Scopus systems, covering 2005 to the present. The search was carried out between May and June 2015, and the articles deemed to be of most interest regarding treatment, prevention, self-empowerment and stabilization of patients were selected. Over 100 articles about telemonitoring of heart failure published since 2005 were found in the literature, and the most scientifically interesting ones were selected. Many of them show that telemonitoring of patients at high risk of heart failure might help to reduce the risk of suffering from the disease. Following the review, it can be stated from the research articles analysed that telemonitoring systems can help to reduce the costs of managing heart failure and result in fewer re-hospitalizations of patients.

  5. Condition monitoring of turning process using infrared thermography technique - An experimental approach

    NASA Astrophysics Data System (ADS)

    Prasad, Balla Srinivasa; Prabha, K. Aruna; Kumar, P. V. S. Ganesh

    2017-03-01

    In metal cutting, the major factors that affect cutting tool life are machine tool vibrations, tool tip/chip temperature and surface roughness, along with machining parameters such as cutting speed, feed rate, depth of cut and tool geometry, so it is important for the manufacturing industry to find suitable levels of process parameters for maintaining tool life. Heat generation in cutting has always been a main topic of study in machining. Recent advances in signal processing and information technology have resulted in the use of multiple sensors for the development of effective tool condition monitoring systems with improved accuracy. From a process improvement point of view, it is more advantageous to proactively monitor quality directly in the process instead of in the product, so that the consequences of a defective part can be minimized or even eliminated. In the present work, a real-time process monitoring method using multiple sensors is explored. It focuses on the development of a test bed for monitoring the tool condition in turning of AISI 316L steel using both coated and uncoated carbide inserts. The proposed tool condition monitoring (TCM) scheme is evaluated in high speed turning using multiple sensors, namely a laser Doppler vibrometer and infrared thermography. The results indicate the feasibility of using the dominant frequency of the vibration signals, together with temperature gradients, for monitoring high speed turning operations. A possible correlation is identified for both regular and irregular cutting tool wear. Cutting speed and feed rate proved to be influential parameters on the measured temperatures, while depth of cut was less influential. Generally, lower heat and temperatures are generated when coated inserts are employed. Cutting temperatures are found to increase gradually as edge wear and deformation develop.

  6. Use of Dobutamine Stress Echocardiography for Periprocedural Evaluation of a Case of Critical Valvular Pulmonary Stenosis with Delayed Presentation.

    PubMed

    Barik, Ramachandra; Akula, Siva Prasad; Damera, Sheshagiri Rao

    2016-01-01

    We report a case of a 39-year-old man with delayed presentation of severe pulmonary valve (PV) stenosis and clinical evidence of congestive right heart failure in the form of an enlarged liver, raised jugular venous pressure, and anasarca without cyanosis. Echocardiography (echo) was used as the main tool both for diagnosis and for monitoring this patient. The contractile reserve of the right ventricle (RV) was evaluated with infusion of dobutamine and a diuretic for 4 days before pulmonary balloon valvotomy. Both the tricuspid annular peak systolic excursion and the diastolic function of the RV (diastolic anterograde flow through the PV) improved after percutaneous balloon pulmonary valvotomy. These improvements were clinically apparent as complete resolution of the anasarca and pericardial effusion and normalization of the albumin-globulin ratio. The periprocedural echo findings were quite unique in this case.

  7. Use of Dobutamine Stress Echocardiography for Periprocedural Evaluation of a Case of Critical Valvular Pulmonary Stenosis with Delayed Presentation

    PubMed Central

    Barik, Ramachandra; Akula, Siva Prasad; Damera, Sheshagiri Rao

    2016-01-01

    We report a case of a 39-year-old man with delayed presentation of severe pulmonary valve (PV) stenosis and clinical evidence of congestive right heart failure in the form of an enlarged liver, raised jugular venous pressure, and anasarca without cyanosis. Echocardiography (echo) was used as the main tool both for diagnosis and for monitoring this patient. The contractile reserve of the right ventricle (RV) was evaluated with infusion of dobutamine and a diuretic for 4 days before pulmonary balloon valvotomy. Both the tricuspid annular peak systolic excursion and the diastolic function of the RV (diastolic anterograde flow through the PV) improved after percutaneous balloon pulmonary valvotomy. These improvements were clinically apparent as complete resolution of the anasarca and pericardial effusion and normalization of the albumin-globulin ratio. The periprocedural echo findings were quite unique in this case. PMID:28465962

  8. Tool Wear Monitoring Using Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Song, Dong Yeul; Ohara, Yasuhiro; Tamaki, Haruo; Suga, Masanobu

    A tool wear monitoring approach that considers the nonlinear behavior of the cutting mechanism caused by tool wear and/or localized chipping is proposed, and its effectiveness is verified through cutting experiments and actual turning machining. Moreover, the variation in the surface roughness of the machined workpiece is also discussed using this approach. In this approach, the residual error between the actually measured vibration signal and the estimated signal obtained from a time series model corresponding to the dynamic model of cutting is introduced as the diagnostic feature. It is found that the early tool wear state (i.e. flank wear under 40 µm) can be monitored, and that the optimal tool exchange time and the tool wear state in actual turning machining can be judged from the change in this residual error. Moreover, the variation of surface roughness Pz in the range of 3 to 8 µm can be estimated by monitoring the residual error.
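
    The residual-error feature described above can be sketched as follows: fit an autoregressive (AR) time-series model to vibration from a sharp tool, then track the one-step prediction residual on later signals. The AR order, signal model and noise levels below are illustrative assumptions, not the paper's experimental settings:

```python
import numpy as np

def fit_ar(x, p=4):
    """Least-squares AR(p) model fitted to the sharp-tool vibration signal."""
    # Row t of X holds the lagged samples (x[t-1], ..., x[t-p]) for target x[t].
    X = np.column_stack([x[p - k - 1:-k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def residual_rms(x, a):
    """RMS of the one-step prediction residual: the diagnostic feature."""
    p = len(a)
    X = np.column_stack([x[p - k - 1:-k - 1] for k in range(p)])
    return float(np.sqrt(np.mean((x[p:] - X @ a) ** 2)))

rng = np.random.default_rng(1)
t = np.arange(2000)
sharp = np.sin(0.3 * t) + 0.05 * rng.standard_normal(t.size)  # baseline vibration
worn = sharp + 0.4 * rng.standard_normal(t.size)  # wear/chipping adds broadband energy

a = fit_ar(sharp)
print(residual_rms(sharp, a), residual_rms(worn, a))  # residual grows with wear
```

    A tool-exchange criterion then reduces to a threshold on the residual RMS relative to its sharp-tool baseline.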

  9. Performance Monitoring Of A Computer Numerically Controlled (CNC) Lathe Using Pattern Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Daneshmend, L. K.; Pak, H. A.

    1984-02-01

    On-line monitoring of the cutting process in a CNC lathe is desirable to ensure unattended, fault-free operation in an automated environment. The state of the cutting tool is one of the most important parameters characterising the cutting process. Direct monitoring of the cutting tool or workpiece is not feasible during machining; however, several variables related to the state of the tool can be measured on-line. A novel monitoring technique is presented which uses cutting torque as the variable for on-line monitoring. A classifier is designed on the basis of the empirical relationship between cutting torque and flank wear. The empirical model required by the on-line classifier is established during an automated training cycle using machine vision for off-line direct inspection of the tool.
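
    A minimal sketch of such a torque-based classifier, assuming a linear empirical torque-wear relationship; the training data and wear limit below are hypothetical stand-ins for the vision-measured values, since the abstract gives neither the model form nor its coefficients:

```python
import numpy as np

# Hypothetical training data from the off-line vision inspection cycle:
# cutting torque (N·m) measured on-line vs flank wear (mm) measured by camera.
torque = np.array([4.1, 4.4, 4.9, 5.3, 5.8, 6.4, 7.1])
wear = np.array([0.05, 0.09, 0.14, 0.18, 0.24, 0.30, 0.38])

# Empirical model: wear ≈ b0 + b1 * torque (least-squares fit).
b1, b0 = np.polyfit(torque, wear, 1)

def classify(torque_now, wear_limit=0.3):
    """On-line classifier: flag the tool as worn once predicted wear exceeds the limit."""
    return "worn" if b0 + b1 * torque_now >= wear_limit else "ok"

print(classify(4.5), classify(7.0))
```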

  10. A Remote Patient Monitoring System for Congestive Heart Failure

    PubMed Central

    Suh, Myung-kyung; Chen, Chien-An; Woodbridge, Jonathan; Tu, Michael Kai; Kim, Jung In; Nahapetian, Ani; Evangelista, Lorraine S.; Sarrafzadeh, Majid

    2011-01-01

    Congestive heart failure (CHF) is a leading cause of death in the United States affecting approximately 670,000 individuals. Due to the prevalence of CHF related issues, it is prudent to seek out methodologies that would facilitate the prevention, monitoring, and treatment of heart disease on a daily basis. This paper describes WANDA (Weight and Activity with Blood Pressure Monitoring System); a study that leverages sensor technologies and wireless communications to monitor the health related measurements of patients with CHF. The WANDA system is a three-tier architecture consisting of sensors, web servers, and back-end databases. The system was developed in conjunction with the UCLA School of Nursing and the UCLA Wireless Health Institute to enable early detection of key clinical symptoms indicative of CHF-related decompensation. This study shows that CHF patients monitored by WANDA are less likely to have readings fall outside a healthy range. In addition, WANDA provides a useful feedback system for regulating readings of CHF patients. PMID:21611788
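
    A range check of the kind used to flag out-of-range readings can be sketched as below; the thresholds are hypothetical, since WANDA's clinical limits are patient-specific and not given in the abstract:

```python
# Hypothetical healthy ranges for daily CHF telemonitoring readings.
HEALTHY = {"weight_kg": (50, 90), "systolic_mmHg": (90, 140), "heart_rate_bpm": (50, 100)}

def flag_readings(readings):
    """Return the measurements that fall outside their healthy range."""
    return {k: v for k, v in readings.items()
            if k in HEALTHY and not (HEALTHY[k][0] <= v <= HEALTHY[k][1])}

today = {"weight_kg": 93.5, "systolic_mmHg": 128, "heart_rate_bpm": 104}
print(flag_readings(today))  # weight and heart rate breach their ranges
```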

  11. Spatial and temporal analyses for multiscale monitoring of landslides: Examples from Northern Ireland

    NASA Astrophysics Data System (ADS)

    Bell, Andrew; McKinley, Jennifer; Hughes, David

    2013-04-01

    Landslides in the form of debris flows, large-scale rotational features and composite mudflows impact transport corridors, cutting off local communities and in some instances resulting in loss of life. This study presents landslide monitoring methods used for predicting and characterising landslide activity along transport corridors. A variety of approaches are discussed: desk-based risk assessment of slopes using Geographical Information Systems (GIS); aerial LiDAR surveys; and terrestrial LiDAR monitoring and field instrumentation of selected sites. A GIS-based case study is discussed which provides risk assessment for potential slope stability issues. Layers incorporated within the system include a Digital Elevation Model (DEM), slope, aspect, solid and drift geology, and groundwater conditions. Additional datasets include consequence of failure. These are combined within a risk model and presented as likelihoods of failure. This integrated spatial approach to slope risk assessment provides the user with a preliminary risk assessment of sites. An innovative "Flexviewer" web-based server interface allows users to gather information about selected areas without needing advanced GIS skills. On a macro landscape scale, aerial LiDAR (ALS) surveys are used to distinguish landslides from the surrounding terrain. DEMs are generated along with terrain derivatives: slope, curvature and various measures of terrain roughness. Spatial analysis of terrain morphological parameters allows characterisation of slope stability issues and is used to predict areas of potential failure or recently failed terrain. On a local scale, ground monitoring approaches are employed to monitor changes in selected slopes using ALS and risk assessment approaches. Results are shown from ongoing bimonthly terrestrial LiDAR (TLS) monitoring of the slope within a site-specific, geodetically referenced network. This has allowed a classification of changes in the slopes, with DEMs of difference showing areas of recent movement, erosion and deposition. In addition, changes in the structure of the slope, characterised by DEMs of difference and morphological parameters in the form of roughness, slope and curvature measures, are progressively linked to failures indicated by temporal DEM monitoring. Preliminary results are presented for a case site at Straidkilly Point, Glenarm, Co. Antrim, Northern Ireland, illustrating multiple approaches to the spatial and temporal monitoring of landslides. These indicate how spatial morphological approaches and risk assessment frameworks, coupled with TLS monitoring and field instrumentation, enable characterisation and prediction of potential areas of slope instability. On-site weather instrumentation and piezometers document changes in pore water pressures, providing site-specific information, with geotechnical observations parameterised within the temporal LiDAR monitoring. This provides a multifaceted approach to the characterisation and analysis of slope stability issues. The presented methodology of multiscale datasets and surveying approaches, utilising spatial parameters and risk index mapping, enables a more comprehensive and effective prediction of landslides and, in turn, effective characterisation and remediation strategies.
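
    The DEM-of-difference analysis mentioned above can be sketched numerically: subtract two co-registered DEMs and classify cells as erosion or deposition relative to a level of detection. The grid, the synthetic erosion patch, and the 0.15 m detection limit are illustrative assumptions, not survey values from this study:

```python
import numpy as np

rng = np.random.default_rng(2)
dem_t0 = rng.normal(100.0, 5.0, size=(50, 50))  # earlier TLS-derived DEM (m)
dem_t1 = dem_t0.copy()
dem_t1[10:20, 10:20] -= 0.8                     # hypothetical erosion patch

# DEM of difference: positive = deposition, negative = erosion.
dod = dem_t1 - dem_t0
lod = 0.15  # level of detection (m), set from the combined survey error
erosion = dod < -lod
deposition = dod > lod
print(f"eroding cells: {erosion.sum()}, depositing cells: {deposition.sum()}")
```

    Cells with |change| below the level of detection are treated as no-change, which keeps survey noise out of the movement classification.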

  12. Using the electronic health record to build a culture of practice safety: evaluating the implementation of trigger tools in one general practice.

    PubMed

    Margham, Tom; Symes, Natalie; Hull, Sally A

    2018-04-01

    Identifying patients at risk of harm in general practice is challenging for busy clinicians. In UK primary care, trigger tools and case note reviews are mainly used to identify rates of harm in sample populations. This study explores how adaptations to existing trigger tool methodology can identify patient safety events and engage clinicians in ongoing reflective work around safety. Mixed-method quantitative and narrative evaluation using thematic analysis was conducted in a single East London training practice. The project team developed and tested five trigger searches, supported by Excel worksheets to guide the case review process. Project evaluation included summary statistics of completed worksheets and a qualitative review focused on ease of use, barriers to implementation, and perception of value to clinicians. Trigger searches identified 204 patients for GP review. Overall, 117 (57%) of these cases were reviewed and 62 (53%) of those reviewed had patient safety events identified. These were usually incidents of omission, including failure to monitor or review. Key themes from interviews with practice members included that GPs' work is generally reactive and that GPs welcomed an approach that identified patients who were 'under the radar' of safety. All GPs expressed concern that the tool might identify too many patients at risk of harm, placing further demands on their time. Electronic trigger tools can identify patients for review in domains of clinical risk for primary care. The high yield of safety events engaged clinicians and provided validation of the need for routine safety checks. © British Journal of General Practice 2018.

  13. Public Notification - Revised Total Coliform Rule Failure To Report Template

    EPA Pesticide Factsheets

    When a PWS fails to report its monitoring results for total coliform bacteria, it must issue a public notice informing the consumers of its water of that failure to report. This template can be used as a guide for preparing that public notice.

  14. Exploration of Drone and Remote Sensing Technologies in Highway Embankment Monitoring and Management (Phase I) : research project capsule.

    DOT National Transportation Integrated Search

    2017-09-01

    Over time, many Louisiana highway embankments have experienced surface sliding failures, a safety issue causing traffic disruptions. Since no advance-warning system is available for these highway embankment failures, the Louisiana Department of Trans...

  15. Failure Analysis and Magnetic Evaluation of Tertiary Superheater Tube Used in Gas-Fired Boiler

    NASA Astrophysics Data System (ADS)

    Mohapatra, J. N.; Patil, Sujay; Sah, Rameshwar; Krishna, P. C.; Eswarappa, B.

    2018-02-01

    Failure analysis was carried out on a prematurely failed tertiary superheater tube from a gas-fired boiler. The analysis includes a comparative study of visual examination, chemical composition, hardness and microstructure at the failed region, adjacent to and far from the failure, as well as on a fresh tube. The chemistry was found to match the standard specification, whereas the hardness was low in the failed tube compared with the fish-mouth opening region and the fresh tube. Microscopic examination of the failed sample revealed the presence of spheroidal carbides of Cr and Mo, predominantly along the grain boundaries. The primary cause of failure was found to be localized heating. Magnetic hysteresis loop (MHL) measurements were carried out to correlate the magnetic parameters with microstructure and mechanical properties, in order to establish a possible non-destructive evaluation (NDE) method for health monitoring of the tubes. The coercivity of the MHL showed a very good correlation with the deterioration of microstructure and mechanical properties, enabling a possible NDE technique for health monitoring of the tubes.

  16. A hybrid feature selection and health indicator construction scheme for delay-time-based degradation modelling of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Deng, Congying; Zhang, Yi

    2018-03-01

    Rolling element bearings are mechanical components used frequently in most rotating machinery, and they are also vulnerable links representing a main source of failures in such systems. Thus, health condition monitoring and fault diagnosis of rolling element bearings have long been studied to improve the operational reliability and maintenance efficiency of rotating machines. Over the past decade, prognosis, which enables forewarning of failure and estimation of residual life, has attracted increasing attention. To accurately and efficiently predict failure of a rolling element bearing, its degradation needs to be well represented and modelled. For this purpose, degradation of the rolling element bearing is analysed with a delay-time-based model in this paper. Also, a hybrid feature selection and health indicator construction scheme is proposed for extracting bearing-health-relevant information from condition monitoring sensor data. The effectiveness of the presented approach is validated through case studies on rolling element bearing run-to-failure experiments.

  17. Novel tool wear monitoring method in milling difficult-to-machine materials using cutting chip formation

    NASA Astrophysics Data System (ADS)

    Zhang, P. P.; Guo, Y.; Wang, B.

    2017-05-01

    The main problems in milling difficult-to-machine materials are high cutting temperature and rapid tool wear, and tool wear is difficult to observe directly during machining. Tool wear and cutting chip formation are two of the most important indicators of machining efficiency and quality. The purpose of this paper is to develop a model relating tool wear to cutting chip formation (width of chip and radian of chip) for difficult-to-machine materials, so that tool wear can be monitored through chip formation. A milling experiment on a machining centre with three sets of cutting parameters was performed to obtain chip formation and tool wear data. The experimental results show that tool wear increases gradually as cutting proceeds, while the width and radian of the chip decrease. The model is developed by fitting the experimental data and applying formula transformations. Most of the tool wear monitoring errors obtained from chip formation are less than 10%, with the smallest error being 0.2%. Overall, the errors based on the radian of the chip are smaller than those based on the width of the chip. This provides a new way to monitor and detect tool wear through cutting chip formation when milling difficult-to-machine materials.
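
    The idea of predicting wear from chip geometry can be sketched with a simple linear fit; the calibration numbers and check point below are hypothetical, not the paper's measurements, and the paper's actual model involves further formula transformations:

```python
import numpy as np

# Hypothetical calibration data: chip width (mm) vs measured flank wear VB (mm).
chip_width = np.array([1.95, 1.90, 1.84, 1.79, 1.72, 1.66])
wear_vb = np.array([0.05, 0.11, 0.16, 0.20, 0.28, 0.35])

b1, b0 = np.polyfit(chip_width, wear_vb, 1)  # wear rises as the chip narrows (b1 < 0)

def monitored_wear(width_mm):
    """Predict flank wear from an on-line chip width measurement."""
    return b0 + b1 * width_mm

# Monitoring error against a held-back check point (width 1.75 mm, true VB 0.25 mm).
err_pct = 100 * abs(monitored_wear(1.75) - 0.25) / 0.25
print(f"monitoring error: {err_pct:.1f}%")
```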

  18. Comparative study of the failure rates among 3 implantable defibrillator leads.

    PubMed

    van Malderen, Sophie C H; Szili-Torok, Tamas; Yap, Sing C; Hoeks, Sanne E; Zijlstra, Felix; Theuns, Dominic A M J

    2016-12-01

    After the introduction of the Biotronik Linox S/SD high-voltage lead, several cases of early failure have been observed. The purpose of this article was to assess the performance of the Linox S/SD lead in comparison to 2 other contemporary leads. We used the prospective Erasmus MC ICD registry to identify all implanted Linox S/SD (n = 408), Durata (St. Jude Medical, model 7122) (n = 340), and Endotak Reliance (Boston Scientific, models 0155, 0138, and 0158) (n = 343) leads. Lead failure was defined by low- or high-voltage impedance, failure to capture, sense or defibrillate, or the presence of nonphysiological signals not due to external interference. During a median follow-up of 5.1 years, 24 Linox (5.9%), 5 Endotak (1.5%), and 5 Durata (1.5%) leads failed. At 5-year follow-up, the cumulative failure rate of Linox leads (6.4%) was higher than that of Endotak (0.4%; P < .0001) and Durata (2.0%; P = .003) leads. The incidence rate was higher in Linox leads (1.3 per 100 patient-years) than in Endotak and Durata leads (0.2 and 0.3 per 100 patient-years, respectively; P < .001). A log-log analysis of the cumulative hazard for Linox leads functioning at 3-year follow-up revealed a stable failure rate of 3% per year. The majority of failures consisted of noise (62.5%) and abnormal impedance (33.3%). This study demonstrates a higher failure rate of Linox S/SD high-voltage leads compared to contemporary leads. Although the mechanism of lead failure is unclear, the majority presents with abnormal electrical parameters. Comprehensive monitoring of Linox S/SD high-voltage leads includes remote monitoring to facilitate early detection of lead failure. Copyright © 2016. Published by Elsevier Inc.
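
    The incidence rates quoted above are failures per 100 patient-years; as a sketch, with a hypothetical patient-year denominator, since the registry's actual follow-up totals are not given in the abstract:

```python
def incidence_per_100py(failures, patient_years):
    """Crude incidence rate: failures per 100 patient-years of follow-up."""
    return 100.0 * failures / patient_years

# Hypothetical denominator of 1850 patient-years for the 24 Linox failures.
print(round(incidence_per_100py(24, 1850), 1))  # prints 1.3
```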

  19. PZT Active Frequency Based Wind Blade Fatigue to Failure Testing Results for Various Blade Designs

    DTIC Science & Technology

    2011-09-01

    R. J. Werlink. This paper summarizes NASA PZT Health Monitoring System results previously reported for 9 meter blade fatigue loading to failure.

  20. German disease management guidelines: surgical therapies for chronic heart failure.

    PubMed

    Sindermann, J R; Klotz, S; Rahbar, K; Hoffmeier, A; Drees, G

    2010-02-01

    The German Disease Management Guideline "Chronic Heart Failure" is intended to guide physicians working in the diagnosis and treatment of heart failure. The guideline provides a tool grounded in evidence-based medicine. The following short review aims to give insight into the role of selected surgical treatment options for heart failure, such as revascularization, ventricular reconstruction and aneurysmectomy, mitral valve reconstruction, ventricular assist devices, and heart transplantation. (c) Georg Thieme Verlag KG Stuttgart-New York.

  1. Variations of electric resistance and H2 and Rn emissions of concrete blocks under increasing uniaxial compression

    USGS Publications Warehouse

    King, C.-Y.; Luo, G.

    1990-01-01

    Electric resistance and emissions of hydrogen and radon isotopes of concrete (which is somewhat similar to fault-zone materials) under increasing uniaxial compression were continuously monitored to check whether they show any pre- and post-failure changes that may correspond to similar changes reported for earthquakes. The results show that all these parameters generally begin to increase when the applied stresses reach 20% to 90% of the corresponding failure stresses, probably due to the occurrence and growth of dilatant microcracks in the specimens. The prefailure changes have different patterns for different specimens, probably because of differences in spatial and temporal distributions of the microcracks. The resistance shows large co-failure increases, and the gas emissions show large post-failure increases. The post-failure increase of radon persists longer and stays at a higher level than that of hydrogen, suggesting a difference in the emission mechanisms for these two kinds of gases. The H2 increase may be mainly due to chemical reaction at the crack surfaces while they are fresh, whereas the Rn increases may be mainly the result of the increased emanation area of such surfaces. The results suggest that monitoring of resistivity and gas emissions may be useful for predicting earthquakes and failures of concrete structures. © 1990 Birkhäuser Verlag.

  2. Direct Left Atrial Pressure Monitoring in Severe Heart Failure: Long-Term Sensor Performance

    PubMed Central

    Ritzema, Jay; Eigler, Neal L.; Melton, Iain C.; Krum, Henry; Adamson, Philip B.; Kar, Saibal; Shah, Prediman K.; Whiting, James S.; Heywood, J. Thomas; Rosero, Spencer; Singh, Jagmeet P.; Saxon, Leslie; Matthews, Ray; Crozier, Ian G.; Abraham, William T.

    2010-01-01

    We report the stability, accuracy, and development history of a new left atrial pressure (LAP) sensing system in ambulatory heart failure (HF) patients. A total of 84 patients with advanced HF underwent percutaneous transseptal implantation of the pressure sensor. Quarterly noninvasive calibration by modified Valsalva maneuver was achieved in all patients, and 96.5% of calibration sessions were successful with a reproducibility of 1.2 mmHg. Absolute sensor drift was maximal after 3 months at 4.7 mmHg (95% CI, 3.2–6.2 mmHg) and remained stable through 48 months. LAP was highly correlated with simultaneous pulmonary wedge pressure at 3 and 12 months (r = 0.98, average difference of 0.8 ± 4.0 mmHg). Freedom from device failure was 95% (n = 37) at 2 years and 88% (n = 12) at 4 years. Causes of failure were identified and mitigated with 100% freedom from device failure and less severe anomalies in the last 41 consecutive patients (p = 0.005). Accurate and reliable LAP measurement using a chronic implanted monitoring system is safe and feasible in patients with advanced heart failure. PMID:20945124

  3. Monitoring the quality of total hip replacement in a tertiary care department using a cumulative summation statistical method (CUSUM).

    PubMed

    Biau, D J; Meziane, M; Bhumbra, R S; Dumaine, V; Babinet, A; Anract, P

    2011-09-01

    The purpose of this study was to define immediate post-operative 'quality' in total hip replacements and to study prospectively the occurrence of failure based on these definitions of quality. The evaluation and assessment of failure were based on ten radiological and clinical criteria. The cumulative summation (CUSUM) test was used to study 200 procedures over a one-year period. Technical criteria defined failure in 17 cases (8.5%), those related to the femoral component in nine (4.5%), the acetabular component in 32 (16%) and those relating to discharge from hospital in five (2.5%). Overall, the procedure was considered to have failed in 57 of the 200 total hip replacements (28.5%). The use of a new design of acetabular component was associated with more failures. For the CUSUM test, the level of adequate performance was set at a rate of failure of 20% and the level of inadequate performance set at a failure rate of 40%; no alarm was raised by the test, indicating that there was no evidence of inadequate performance. The use of a continuous monitoring statistical method is useful to ensure that the quality of total hip replacement is maintained, especially as newer implants are introduced.
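
    The CUSUM monitoring described above can be sketched as a Bernoulli CUSUM built from the stated adequate (20%) and inadequate (40%) failure rates; the decision limit h and the toy outcome sequence are assumptions for illustration, not the study's values:

```python
import math

p0, p1, h = 0.20, 0.40, 4.0  # adequate rate, inadequate rate, assumed decision limit
w_fail = math.log(p1 / p0)             # weight added after a failed procedure
w_ok = math.log((1 - p1) / (1 - p0))   # (negative) weight added after a success

def cusum(outcomes):
    """Bernoulli CUSUM: returns the final score and whether the alarm ever fired."""
    s, alarm = 0.0, False
    for failed in outcomes:
        s = max(0.0, s + (w_fail if failed else w_ok))
        alarm = alarm or s > h
    return s, alarm

# 200 procedures at roughly the observed 28.5% failure rate (deterministic toy sequence).
outcomes = [(i % 7 < 2) for i in range(200)]
print(cusum(outcomes))
```

    With a failure rate between the two design rates, the score drifts slowly and stays below the limit, which matches the study's finding that no alarm was raised.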

  4. Monitoring Distributed Real-Time Systems: A Survey and Future Directions

    NASA Technical Reports Server (NTRS)

    Goodloe, Alwyn E.; Pike, Lee

    2010-01-01

    Runtime monitors have been proposed as a means to increase the reliability of safety-critical systems. In particular, this report addresses runtime monitors for distributed hard real-time systems. This class of systems has had little attention from the monitoring community. The need for monitors is shown by discussing examples of avionic systems failure. We survey related work in the field of runtime monitoring. Several potential monitoring architectures for distributed real-time systems are presented along with a discussion of how they might be used to monitor properties of interest.

  5. 40 CFR 141.561 - What happens if my system's turbidity monitoring equipment fails?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 24 2013-07-01 2013-07-01 false What happens if my system's turbidity monitoring equipment fails? 141.561 Section 141.561 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... happens if my system's turbidity monitoring equipment fails? If there is a failure in the continuous...

  6. 40 CFR 141.561 - What happens if my system's turbidity monitoring equipment fails?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 23 2011-07-01 2011-07-01 false What happens if my system's turbidity monitoring equipment fails? 141.561 Section 141.561 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... happens if my system's turbidity monitoring equipment fails? If there is a failure in the continuous...

  7. 40 CFR 141.561 - What happens if my system's turbidity monitoring equipment fails?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 24 2012-07-01 2012-07-01 false What happens if my system's turbidity monitoring equipment fails? 141.561 Section 141.561 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... happens if my system's turbidity monitoring equipment fails? If there is a failure in the continuous...

  8. 40 CFR 141.561 - What happens if my system's turbidity monitoring equipment fails?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 23 2014-07-01 2014-07-01 false What happens if my system's turbidity monitoring equipment fails? 141.561 Section 141.561 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... happens if my system's turbidity monitoring equipment fails? If there is a failure in the continuous...

  9. 40 CFR 63.8192 - What work practice standards must I meet?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) of this section. (ii) Using the monitoring data collected according to paragraph (g)(1)(i) of this section, establish your action level at the 75th percentile of the data set. (iii) Submit your action... monitor the mercury concentration in the cell room. Failure to monitor and record the data according to...

  10. 40 CFR 63.8192 - What work practice standards must I meet?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) of this section. (ii) Using the monitoring data collected according to paragraph (g)(1)(i) of this section, establish your action level at the 75th percentile of the data set. (iii) Submit your action... monitor the mercury concentration in the cell room. Failure to monitor and record the data according to...

  11. 40 CFR 63.8192 - What work practice standards must I meet?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) of this section. (ii) Using the monitoring data collected according to paragraph (g)(1)(i) of this section, establish your action level at the 75th percentile of the data set. (iii) Submit your action... monitor the mercury concentration in the cell room. Failure to monitor and record the data according to...

  12. 40 CFR 63.8192 - What work practice standards must I meet?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) of this section. (ii) Using the monitoring data collected according to paragraph (g)(1)(i) of this section, establish your action level at the 75th percentile of the data set. (iii) Submit your action... monitor the mercury concentration in the cell room. Failure to monitor and record the data according to...

  13. 40 CFR 63.8192 - What work practice standards must I meet?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) of this section. (ii) Using the monitoring data collected according to paragraph (g)(1)(i) of this section, establish your action level at the 75th percentile of the data set. (iii) Submit your action... monitor the mercury concentration in the cell room. Failure to monitor and record the data according to...

  14. Self-Monitoring Success and Failure: Evidence for a Mediating Mechanism.

    ERIC Educational Resources Information Center

    Susser, Howard S.

    Two theories, the closed loop model (divides self-regulation into self-monitoring, self-evaluation, and self-reinforcement) and the non-mediational model (defines self-regulation as behavior that is controlled by its long-term and observable consequences), have been proposed to explain why behavior changes when self-monitoring occurs. Both…

  15. FIREMON: Fire effects monitoring and inventory system

    Treesearch

    Duncan C. Lutes; Robert E. Keane; John F. Caratti; Carl H. Key; Nathan C. Benson; Steve Sutherland; Larry J. Gangi

    2006-01-01

    Monitoring and inventory to assess the effects of wildland fire is critical for 1) documenting fire effects, 2) assessing ecosystem damage and benefit, 3) evaluating the success or failure of a burn, and 4) appraising the potential for future treatments. However, monitoring fire effects is often difficult because data collection requires abundant funds, resources, and...

  16. A multi-sensor approach to landslide monitoring of rainfall-induced failures in Scotland.

    NASA Astrophysics Data System (ADS)

    Gilles, Charlie; Hoey, Trevor; Williams, Richard

    2017-04-01

    Landslides are of significant interest in upland areas of the United Kingdom because of their complex mechanics, their potential to channelize into hazardous debris flows, and their costly impacts on infrastructure. The British Geological Survey National Landslide Database contains an average of 367 landslides per year (from 1970). Slope failures in the UK are typically triggered by extended periods of intense rainfall and can occur at any time of year. In any given rainfall event that triggers landslides, most potentially vulnerable slopes remain stable. Accurate warning systems would be facilitated by identifying landslide precursors prior to failure events. This project tests whether such precursors can be identified in the valley of Glen Ogle, Scotland (87 km north-west of Edinburgh), where in summer 2004 two debris flows blocked the main road (A85), trapping fifty-seven people. Two adjacent sites have been selected on a west-facing slope in Glen Ogle, one of which (the control) has been stable since at least 2004, while the other failed in 2004 and remains unstable. Understanding the immediate causes and antecedent conditions responsible for landslides requires a multi-scale approach. This project uses multiple sensors to assess landslide failure mechanisms in Glen Ogle: (1) three-monthly, high-resolution (1.8 arcsec) terrestrial laser scanning of topography to detect changes and identify patterns of movement prior to major failure, using the Riegl VZ-1000 (NERC Geophysical Equipment Fund); (2) rainfall and soil moisture data to monitor pore pressure prior to and after hydrologically triggered events; and (3) ground-motion monitoring using grain-scale sensors, which are becoming lower cost, more power-efficient, and wirelessly networkable; these will be used to detect small-scale movement of the landslide. Comparative data from the control and test sites will be presented, from which patterns of surface deformation between failure events will be derived.
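The kind of multi-sensor precursor detection described above can be sketched as a simple fusion rule: flag an alert only when hydrological loading (rainfall plus wet soil) coincides with measurable ground motion. The thresholds, field names, and units below are hypothetical placeholders, not calibrated values from the Glen Ogle project.

```python
from dataclasses import dataclass

@dataclass
class SensorSnapshot:
    rainfall_mm_per_hr: float   # from a rain gauge
    soil_moisture_frac: float   # volumetric water content, 0..1
    ground_motion_mm: float     # displacement from grain-scale sensors

def precursor_alert(s: SensorSnapshot,
                    rain_thresh: float = 8.0,
                    moisture_thresh: float = 0.45,
                    motion_thresh: float = 2.0) -> bool:
    """Flag a potential failure precursor when a hydrological trigger
    coincides with slope movement. Thresholds are illustrative only."""
    hydrologic_trigger = (s.rainfall_mm_per_hr > rain_thresh
                          and s.soil_moisture_frac > moisture_thresh)
    movement = s.ground_motion_mm > motion_thresh
    return hydrologic_trigger and movement
```

Requiring both conditions reflects the observation in the abstract that most vulnerable slopes stay stable even during triggering rainfall, so rainfall alone is a poor alarm criterion.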

  17. Factors Influencing Progressive Failure Analysis Predictions for Laminated Composite Structure

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.

    2008-01-01

    Progressive failure material modeling methods used for structural analysis, including failure initiation and material degradation, are presented. Different failure initiation criteria and material degradation models are described that define progressive failure formulations. These progressive failure formulations are implemented in a user-defined material model for use with a nonlinear finite element analysis tool. The failure initiation criteria include the maximum stress and maximum strain criteria, the Tsai-Wu failure polynomial, and the Hashin criteria. The material degradation model is based on the ply-discounting approach, where the local material constitutive coefficients are degraded. Applications and extensions of the progressive failure analysis material model address two-dimensional plate and shell finite elements and three-dimensional solid finite elements. Implementation details are described in the present paper. Parametric studies for laminated composite structures are discussed to illustrate the features of the progressive failure modeling methods that have been implemented and to demonstrate their influence on progressive failure analysis predictions.
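One of the initiation criteria named above, the maximum stress criterion, combined with ply-discounting degradation, can be sketched as follows. The strength allowables and knockdown factor are hypothetical illustrative values, not those used in the paper's user-defined material model.

```python
def max_stress_failed(sigma, strengths):
    """Maximum stress criterion: a ply fails when any in-plane stress
    component exceeds its corresponding strength allowable.
    sigma:     (s11, s22, s12) in-plane ply stresses
    strengths: (Xt, Xc, Yt, Yc, S) tension/compression/shear allowables
    """
    s11, s22, s12 = sigma
    Xt, Xc, Yt, Yc, S = strengths
    fiber_fail = s11 > Xt or s11 < -Xc
    matrix_fail = s22 > Yt or s22 < -Yc
    shear_fail = abs(s12) > S
    return fiber_fail or matrix_fail or shear_fail

def discount_ply(moduli, factor=1e-3):
    """Ply-discounting degradation: knock the local constitutive
    coefficients of a failed ply down to a small residual fraction."""
    return {name: value * factor for name, value in moduli.items()}

# Hypothetical carbon/epoxy ply allowables (MPa):
strengths = (1500.0, 1200.0, 50.0, 200.0, 70.0)
if max_stress_failed((1600.0, 10.0, 5.0), strengths):
    degraded = discount_ply({"E1": 140e3, "E2": 10e3, "G12": 5e3})
```

In a nonlinear analysis this check would run at every integration point each load increment; once a ply is discounted, the stiffness reduction redistributes load and the analysis continues until global failure.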

  18. Low cost structural health monitoring of bridges using wireless sensors : research summary.

    DOT National Transportation Integrated Search

    2012-05-01

    Problem: Structural health monitoring is critical to protecting bridges against aging, failures, and potentially collapse. However, instrumentation techniques suffer from non-scalability due to the high cost of instrumentation devices and inst...

  19. Assessing team performance in the operating room: development and use of a "black-box" recorder and other tools for the intraoperative environment.

    PubMed

    Guerlain, Stephanie; Adams, Reid B; Turrentine, F Beth; Shin, Thomas; Guo, Hui; Collins, Stephen R; Calland, J Forrest

    2005-01-01

    The objective of this research was to develop a digital system to archive the complete operative environment along with the assessment tools for analysis of this data, allowing prospective studies of operative performance, intraoperative errors, team performance, and communication. Ability to study this environment will yield new insights, allowing design of systems to avoid preventable errors that contribute to perioperative complications. A multitrack, synchronized, digital audio-visual recording system (RATE tool) was developed to monitor intraoperative performance, including software to synchronize data and allow assignment of independent observational scores. Cases were scored for technical performance, participants' situational awareness (knowledge of critical information), and their comfort and satisfaction with the conduct of the procedure. Laparoscopic cholecystectomy (n = 10) was studied. Technical performance of the RATE tool was excellent. The RATE tool allowed real time, multitrack data collection of all aspects of the operative environment, while permitting digital recording of the objective assessment data in a time synchronized and annotated fashion during the procedure. The mean technical performance score was 73% +/- 28% of maximum (perfect) performance. Situational awareness varied widely among team members, with the attending surgeon typically the only team member having comprehensive knowledge of critical case information. The RATE tool allows prospective analysis of performance measures such as technical judgments, team performance, and communication patterns, offers the opportunity to conduct prospective intraoperative studies of human performance, and allows for postoperative discussion, review, and teaching. This study also suggests that gaps in situational awareness might be an underappreciated source of operative adverse events. Future uses of this system will aid teaching, failure or adverse event analysis, and intervention research.

  20. An Automated Tool for Supporting FMEAs of Digital Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yue, M.; Chu, T.-L.; Martinez-Guridi, G.

    2008-09-07

    Although designs of digital systems can be very different from each other, they typically use many of the same types of generic digital components. Determining the impacts of the failure modes of these generic components on a digital system can be used to support development of a reliability model of the system. A novel approach was proposed for such a purpose by decomposing the system into a level of the generic digital components and propagating failure modes to the system level, which generally is time-consuming and difficult to implement. To overcome the associated issues of implementing the proposed FMEA approach, an automated tool for a digital feedwater control system (DFWCS) has been developed in this study. The automated FMEA tool is in essence a simulation platform developed by using or recreating the original source code of the different module software, interfaced by input and output variables that represent physical signals exchanged between modules, the system, and the controlled process. For any given failure mode, its impacts on associated signals are determined first, and the variables that correspond to these signals are modified accordingly by the simulation. Criteria are also developed, as part of the simulation platform, to determine whether the system has lost its automatic control function, which is defined as a system failure in this study. The conceptual development of the automated FMEA support tool can be generalized and applied to support FMEAs for reliability assessment of complex digital systems.
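The failure-mode injection loop described above can be sketched in miniature: override the signals a failure mode affects, run the simulation, and apply the system-failure criterion. The toy controller, signal names, and saturation-based failure criterion below are hypothetical stand-ins for the DFWCS module software, not the actual tool.

```python
# Nominal signal values exchanged between modules (hypothetical).
nominal_signals = {"level_sensor": 0.5}

def simulate(signals):
    # Toy feedwater controller: the flow command tracks the level error
    # and is clipped to its actuator range [0, 1].
    cmd = 1.0 - signals["level_sensor"]
    return {"flow_cmd": max(0.0, min(1.0, cmd))}

def system_failed(outputs):
    # System failure is defined, as in the study, as loss of the
    # automatic control function; modeled here as rail saturation.
    return outputs["flow_cmd"] in (0.0, 1.0)

def run_fmea(failure_modes):
    """For each failure mode, override the affected signals, run the
    simulation, and record whether the system-level criterion is met."""
    results = {}
    for mode, overrides in failure_modes.items():
        signals = dict(nominal_signals)
        signals.update(overrides)
        results[mode] = system_failed(simulate(signals))
    return results

results = run_fmea({
    "sensor_stuck_high": {"level_sensor": 1.0},
    "sensor_stuck_low": {"level_sensor": 0.0},
    "nominal": {},
})
```

Iterating this loop over every failure mode of every generic component yields the failure-mode-to-system-effect mapping that feeds the reliability model.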
