Savannah River Site generic data base development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanton, C.H.; Eide, S.A.
This report describes the results of a project to improve the generic component failure data base for the Savannah River Site (SRS). A representative list of components and failure modes for SRS risk models was generated by reviewing existing safety analyses and component failure data bases and from suggestions from SRS safety analysts. Then sources of data or failure rate estimates were identified and reviewed for applicability. A major source of information was the Nuclear Computerized Library for Assessing Reactor Reliability, or NUCLARR. This source includes an extensive collection of failure data and failure rate estimates for commercial nuclear power plants. A recent Idaho National Engineering Laboratory report on failure data from the Idaho Chemical Processing Plant was also reviewed. From these and other recent sources, failure data and failure rate estimates were collected for the components and failure modes of interest. This information was aggregated to obtain a recommended generic failure rate distribution (mean and error factor) for each component failure mode.
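The final aggregation step above reports each failure mode as a mean and error factor, which in PRA practice pins down a lognormal distribution. A minimal sketch of that correspondence (the pump numbers are hypothetical, not taken from the SRS database):

```python
import math

def lognormal_from_mean_ef(mean, ef):
    """Recover lognormal parameters (mu, sigma) from a reported mean
    failure rate and error factor EF = 95th percentile / median,
    using the standard relation EF = exp(1.645 * sigma)."""
    sigma = math.log(ef) / 1.645
    mu = math.log(mean) - 0.5 * sigma ** 2
    return mu, sigma

def bounds(mu, sigma):
    """Return the (5th percentile, median, 95th percentile) triple."""
    median = math.exp(mu)
    spread = math.exp(1.645 * sigma)
    return median / spread, median, median * spread

# Hypothetical entry: mean rate 3e-3 per demand with error factor 10
mu, sigma = lognormal_from_mean_ef(3e-3, 10.0)
p05, p50, p95 = bounds(mu, sigma)
```

Note the median sits below the mean here, as it must for a right-skewed lognormal.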
Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven
2015-01-01
Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.
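One standard way to realize this, not necessarily the presentation's own heuristic, is to encode the generic point estimate as the mean of a conjugate gamma prior and let an applicability judgment set its spread (here via an assumed coefficient of variation), then update with system-specific evidence:

```python
def gamma_prior_from_mean_spread(mean, cov):
    """Gamma(alpha, beta) prior for a Poisson failure rate, parameterized
    by its mean and coefficient of variation; a larger COV encodes weaker
    applicability of the generic source."""
    alpha = 1.0 / cov ** 2
    beta = alpha / mean
    return alpha, beta

def bayes_update(alpha, beta, failures, exposure_hours):
    """Conjugate gamma-Poisson update with system-specific evidence."""
    return alpha + failures, beta + exposure_hours

# Hypothetical numbers: generic point estimate 1e-5/h, poor
# applicability judged as COV = 2, then 2 failures in 1e5 hours observed.
a0, b0 = gamma_prior_from_mean_spread(1e-5, 2.0)
a1, b1 = bayes_update(a0, b0, failures=2, exposure_hours=1.0e5)
posterior_mean = a1 / b1
```

The weak prior lets the observed data dominate; a tighter COV would anchor the posterior closer to the generic estimate.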
A Generic Modeling Process to Support Functional Fault Model Development
NASA Technical Reports Server (NTRS)
Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.
2016-01-01
Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system, including the design, operation, and off-nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.
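An FFM's central operation, propagating a failure mode along effect paths to the observation points, amounts to a reachability search over a directed graph. The component names and graph below are a hypothetical illustration, not content from the paper or the AGSM IHM models:

```python
from collections import deque

# Hypothetical failure-effect propagation graph: edges point from a
# failure mode or effect to the downstream effects it can produce.
PROPAGATION = {
    "valve_stuck_closed": ["line_pressure_high"],
    "line_pressure_high": ["relief_valve_open", "pressure_sensor_high"],
    "relief_valve_open": ["vent_flow_detected"],
}
OBSERVATIONS = {"pressure_sensor_high", "vent_flow_detected"}

def observable_effects(failure_mode):
    """Return the observation points reachable from a failure mode
    via breadth-first traversal of the propagation graph."""
    seen, queue, hits = set(), deque([failure_mode]), set()
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        if node in OBSERVATIONS:
            hits.add(node)
        queue.extend(PROPAGATION.get(node, []))
    return hits
```

Diagnosis then runs this map in reverse: given a set of triggered observations, candidate failure modes are those whose reachable sets cover them.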
Generic Sensor Failure Modeling for Cooperative Systems.
Jäger, Georg; Zug, Sebastian; Casimiro, António
2018-03-20
The advent of cooperative systems entails a dynamic composition of their components. As this contrasts with current, statically composed systems, new approaches for maintaining their safety are required. In that endeavor, we propose an integration step that evaluates the failure model of shared information in relation to an application's fault tolerance and thereby promises maintainability of such systems' safety. However, it also poses new requirements on failure models, which are not fulfilled by state-of-the-art approaches. Consequently, this work presents a mathematically defined generic failure model as well as a processing chain for automatically extracting such failure models from empirical data. By examining data of a Sharp GP2D12 distance sensor, we show that the generic failure model not only fulfills the predefined requirements, but also models failure characteristics appropriately when compared to traditional techniques.
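The extraction chain the abstract describes builds a failure model from empirical sensor data. A much-simplified stand-in (not the paper's mathematically defined model) is to bin measurement errors by true distance and summarize each bin; the sample values are invented for illustration:

```python
import statistics
from collections import defaultdict

def empirical_failure_model(samples, bin_width=10.0):
    """Build a distance-binned error model from (true_distance, measured)
    pairs: per bin, the mean (bias) and population std. dev. (noise)."""
    bins = defaultdict(list)
    for true_d, measured in samples:
        bins[int(true_d // bin_width)].append(measured - true_d)
    model = {}
    for b, errs in bins.items():
        model[b * bin_width] = (statistics.mean(errs),
                                statistics.pstdev(errs))
    return model

# Invented calibration pairs for a distance sensor (cm)
data = [(15.0, 15.5), (15.0, 14.9), (15.0, 15.2),
        (25.0, 26.0), (25.0, 26.4)]
model = empirical_failure_model(data)
```

A cooperative consumer could then compare a bin's bias and noise against its own fault-tolerance margin before trusting the shared reading.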
An Automated Tool for Supporting FMEAs of Digital Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yue,M.; Chu, T.-L.; Martinez-Guridi, G.
2008-09-07
Although designs of digital systems can be very different from each other, they typically use many of the same types of generic digital components. Determining the impacts of the failure modes of these generic components on a digital system can be used to support development of a reliability model of the system. A novel approach was proposed for such a purpose by decomposing the system into a level of the generic digital components and propagating failure modes to the system level, which generally is time-consuming and difficult to implement. To overcome the associated issues of implementing the proposed FMEA approach, an automated tool for a digital feedwater control system (DFWCS) has been developed in this study. The automated FMEA tool is in essence a simulation platform, developed by using or recreating the original source code of the different software modules, interfaced by input and output variables that represent physical signals exchanged between modules, the system, and the controlled process. For any given failure mode, its impacts on associated signals are determined first, and the variables that correspond to these signals are modified accordingly by the simulation. Criteria are also developed, as part of the simulation platform, to determine whether the system has lost its automatic control function, which is defined as a system failure in this study. The conceptual development of the automated FMEA support tool can be generalized and applied to support FMEAs for reliability assessment of complex digital systems.
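The tool's core idea, injecting a failure mode into a signal variable and checking a loss-of-control criterion, can be sketched with a toy control loop. Everything below (the proportional controller, the stuck-sensor fault, the 5-unit failure criterion) is a hypothetical illustration, not the DFWCS implementation:

```python
def run_controller(steps, sensor_fault=None):
    """Toy level-control loop: the controller drives the level toward a
    setpoint using the sensed value. An optional fault callable overrides
    the sensor reading, mimicking failure-mode injection."""
    level, setpoint = 50.0, 60.0
    for _ in range(steps):
        sensed = level if sensor_fault is None else sensor_fault(level)
        level += 0.2 * (setpoint - sensed)   # proportional control
    return level

nominal = run_controller(100)
stuck_low = run_controller(100, sensor_fault=lambda _lvl: 40.0)
# System-failure criterion: final level deviates from setpoint by > 5 units
failed = abs(stuck_low - 60.0) > 5.0
```

Sweeping such injections over every signal and failure mode, and logging which trip the criterion, is what turns the simulation into an FMEA table.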
Failure behavior of generic metallic and composite aircraft structural components under crash loads
NASA Technical Reports Server (NTRS)
Carden, Huey D.; Robinson, Martha P.
1990-01-01
Failure behavior results are presented from crash dynamics research using concepts of aircraft elements and substructure not necessarily designed or optimized for energy absorption or crash loading considerations. To achieve desired new designs incorporating improved energy absorption capabilities often requires an understanding of how more conventional designs behave under crash loadings. Experimental and analytical data are presented which indicate some general trends in the failure behavior of a class of composite structures including individual fuselage frames, skeleton subfloors with stringers and floor beams without skin covering, and subfloors with skin added to the frame-stringer arrangement. Although the behavior is complex, a strong similarity in the static/dynamic failure behavior among these structures is illustrated through photographs of the experimental results and through analytical data of generic composite structural models.
NASA Technical Reports Server (NTRS)
Bloomquist, C. E.; Kallmeyer, R. H.
1972-01-01
Field failure rates and confidence factors are presented for 88 identifiable components of the ground support equipment at the John F. Kennedy Space Center. For most of these, supplementary information regarding failure mode and cause is tabulated. Complete reliability assessments are included for three systems, eight subsystems, and nine generic piece-part classifications. Procedures for updating or augmenting the reliability results are also included.
CONFIG: Qualitative simulation tool for analyzing behavior of engineering devices
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Basham, Bryan D.; Harris, Richard A.
1987-01-01
To design failure management expert systems, engineers mentally analyze the effects of failures and procedures as they propagate through device configurations. CONFIG is a generic device modeling tool for use in discrete event simulation, to support such analyses. CONFIG permits graphical modeling of device configurations and qualitative specification of local operating modes of device components. Computation requirements are reduced by focusing the level of component description on operating modes and failure modes, and specifying qualitative ranges of variables relative to mode transition boundaries. Simulation processing occurs only when modes change or variables cross qualitative boundaries. Device models are built graphically, using components from libraries. Components are connected at ports by graphical relations that define data flow. The core of a component model is its state transition diagram, which specifies modes of operation and transitions among them.
NASA Technical Reports Server (NTRS)
Packard, Michael H.
2002-01-01
Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used for taking test data related to particular failure modes and calculating failure rate distributions of electronic and electromechanical components. How can these individual failure time distributions of structural, electronic and electromechanical component failure modes be effectively combined into a top level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic model of a turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools (Quantitative Risk Assessment Software (QRAS) etc.). Hypothetical PSA results for a number of structural components along with mitigation factors that would restrict the failure mode from propagating to a Loss of Mission (LOM) failure were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to the overall Mission Success (MS) is also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, inclusion of new sensor detection of faults and other upgrades were evaluated in determining overall turbine engine reliability.
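The roll-up the paper describes, combining individual failure-mode results with mitigation factors into a loss-of-mission figure, can be illustrated at the point-estimate level. The failure probabilities and mitigation factors below are hypothetical, not QRAS outputs:

```python
def loss_of_mission_probability(modes):
    """Combine independent failure modes into a loss-of-mission (LOM)
    probability. Each mode carries a mission failure probability and a
    mitigation factor: the probability the failure is contained before
    it propagates to LOM."""
    p_ok = 1.0
    for p_fail, mitigation in modes:
        p_lom_mode = p_fail * (1.0 - mitigation)
        p_ok *= 1.0 - p_lom_mode
    return 1.0 - p_ok

# Hypothetical turbine-engine failure modes: (P(fail), mitigation)
modes = [
    (1e-4, 0.90),   # blade fatigue crack, usually caught by inspection
    (5e-4, 0.50),   # sensor electronics failure, partial redundancy
    (2e-5, 0.00),   # disk burst, unmitigated
]
p_lom = loss_of_mission_probability(modes)
```

Ranking the per-mode terms `p_fail * (1 - mitigation)` reproduces the contribution-to-mission-success ordering the paper mentions; in a full PRA each term would be a distribution, not a point value.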
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Y.A.; Feltus, M.A.
1995-07-01
Reliability-centered maintenance (RCM) methods are applied to boiling water reactor plant-specific emergency core cooling system probabilistic risk assessment (PRA) fault trees. RCM is a system-function-based technique for improving a preventive maintenance (PM) program, which is applied on a component basis. Many PM programs are based on time-directed maintenance tasks, while RCM methods focus on component condition-directed maintenance tasks. Stroke time test data for motor-operated valves (MOVs) are used to address three aspects concerning RCM: (a) to determine if MOV stroke time testing was useful as a condition-directed PM task; (b) to determine and compare the plant-specific MOV failure data from a broad RCM philosophy time period compared with a PM period and, also, compared with generic industry MOV failure data; and (c) to determine the effects and impact of the plant-specific MOV failure data on core damage frequency (CDF) and system unavailabilities for these emergency systems. The MOV stroke time test data from four emergency core cooling systems [i.e., high-pressure coolant injection (HPCI), reactor core isolation cooling (RCIC), low-pressure core spray (LPCS), and residual heat removal/low-pressure coolant injection (RHR/LPCI)] were gathered from Philadelphia Electric Company's Peach Bottom Atomic Power Station Units 2 and 3 between 1980 and 1992. The analyses showed that MOV stroke time testing was not a predictor of imminent failure and should be considered as a go/no-go test. The failure data from the broad RCM philosophy showed an improvement compared with the PM-period failure rates in the emergency core cooling system MOVs. Also, the plant-specific MOV failure rates for both maintenance philosophies were shown to be lower than the generic industry estimates.
Auxiliary feedwater system risk-based inspection guide for the Salem Nuclear Power Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pugh, R.; Gore, B.F.; Vo, T.V.
In a study for the US Nuclear Regulatory Commission (NRC), Pacific Northwest Laboratory has developed and applied a methodology for deriving plant-specific risk-based inspection guidance for the auxiliary feedwater (AFW) system at pressurized water reactors that have not undergone probabilistic risk assessment (PRA). This methodology uses existing PRA results and plant operating experience information. Existing PRA-based inspection guidance information recently developed for the NRC for various plants was used to identify generic component failure modes. This information was then combined with plant-specific and industry-wide component information and failure data to identify failure modes and failure mechanisms for the AFW system at the selected plants. Salem was selected as the fifth plant for study. The product of this effort is a prioritized listing of AFW failures which have occurred at the plant and at other PWRs. This listing is intended for use by NRC inspectors in the preparation of inspection plans addressing AFW risk-important components at the Salem plant. 23 refs., 1 fig., 1 tab.
Zentall, Shannon R; Morris, Bradley J
2010-10-01
Previous research has demonstrated that generic praise ("good drawer") is related to children giving up after failure because failure implies the lack of a critical trait (e.g., drawing ability). Conversely, nongeneric praise ("good job drawing") is related to mastery motivation because it implies that success is related to effort. Yet children may receive a mixture of these praise types (i.e., inconsistent praise), the effects of which are unclear. We tested how inconsistent praise influenced two components of motivation: self-evaluation and persistence. Kindergarteners (N=135) were randomly assigned to one of five conditions in which consistency of praise type was varied. After two failure scenarios, children reported self-evaluations and persistence. Results indicated that more nongeneric praise related linearly to greater motivation, yet self-evaluation and persistence were impacted differently by inconsistent praise types. Hearing even a small amount of generic praise reduced persistence, whereas hearing a small amount of nongeneric praise preserved self-evaluation. Copyright 2010 Elsevier Inc. All rights reserved.
Common Cause Failure Modeling: Aerospace Versus Nuclear
NASA Technical Reports Server (NTRS)
Stott, James E.; Britton, Paul; Ring, Robert W.; Hark, Frank; Hatfield, G. Spencer
2010-01-01
Aggregate nuclear plant failure data is used to produce generic common-cause factors that are specifically for use in the common-cause failure models of NUREG/CR-5485. Furthermore, the models presented in NUREG/CR-5485 are specifically designed to incorporate two significantly distinct assumptions about the methods of surveillance testing from which this aggregate failure data came. What are the implications of using these NUREG generic factors to model the common-cause failures of aerospace systems? Herein, the implications of using the NUREG generic factors in the modeling of aerospace systems are investigated in detail and strong recommendations for modeling the common-cause failures of aerospace systems are given.
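For concreteness, the simplest of the NUREG/CR-5485 parametric models is the beta factor, which splits a component's total failure rate into independent and common-cause parts. The numeric values below (a beta of 0.05 on a 1e-5/h component) are illustrative, not recommendations from the paper:

```python
def beta_factor_split(total_rate, beta):
    """Split a component's total failure rate into independent and
    common-cause parts under the beta-factor model of NUREG/CR-5485."""
    return (1.0 - beta) * total_rate, beta * total_rate

# Illustrative generic beta of 0.05 applied to a 1e-5/h component
independent, common_cause = beta_factor_split(1e-5, 0.05)

def one_of_two_redundant_rate(total_rate, beta):
    """Approximate failure rate of a 1-of-2 redundant pair: the
    common-cause term dominates the much smaller simultaneous
    independent-failure term, so redundancy gains saturate at beta."""
    return beta * total_rate

pair_rate = one_of_two_redundant_rate(1e-5, 0.05)
```

The question raised by the abstract is exactly whether a beta derived from aggregate nuclear surveillance data transfers to aerospace hardware with different testing regimes.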
A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp
High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
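The gap between closed-form MTTDL and simulation can be shown on the smallest possible example: a mirrored disk pair with exponential lifetimes. The Monte Carlo sketch below (a generic illustration, not the paper's framework; MTTF and rebuild time are arbitrary) converges to the textbook approximation:

```python
import random

def mttdl_mirrored_pair(mttf, rebuild, trials=5000, seed=42):
    """Monte Carlo estimate of mean time to data loss for a mirrored
    disk pair with exponential lifetimes. Data is lost only if the
    surviving disk also fails inside the rebuild window; otherwise the
    pair is restored and, by memorylessness, the process restarts."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        t = 0.0
        while True:
            t += rng.expovariate(2.0 / mttf)   # first of two disks fails
            if rng.expovariate(1.0 / mttf) < rebuild:
                total += t                      # second failure mid-rebuild
                break
            t += rebuild                        # rebuild completes; continue
    return total / trials

est = mttdl_mirrored_pair(mttf=100.0, rebuild=1.0)
analytic = 100.0 ** 2 / (2 * 1.0)   # textbook MTTDL ~ MTTF^2 / (2 * rebuild)
```

The two agree here precisely because the failure model is exponential and independent; correlated failures or dependency chains, which the paper targets, are where the closed form breaks down and simulation earns its keep.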
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Sheldon, Frederick T.
Cyber physical computing infrastructures typically consist of a number of interconnected sites. Their operation critically depends on both cyber components and physical components. Both types of components are subject to attacks of different kinds and frequencies, which must be accounted for in the initial provisioning and subsequent operation of the infrastructure via information security analysis. Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the electric sector failure scenarios and impact analyses by the NESCOR Working Group Study. From the Section 5 electric sector representative failure scenarios, we extracted the four generic failure scenarios and grouped them into three specific threat categories (confidentiality, integrity, and availability) to the system. These specific failure scenarios serve as a demonstration of our simulation. The analysis using our ABGT simulation demonstrates how to model the electric sector functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the cyber physical infrastructure network with respect to CIA.
NASA Astrophysics Data System (ADS)
Ren, Xusheng; Qian, Longsheng; Zhang, Guiyan
2005-12-01
According to Generic Reliability Assurance Requirements for Passive Optical Components GR-1221-CORE (Issue 2, January 1999), reliability determination testing was performed on several kinds of passive optical components intended for use in uncontrolled environments. The conditions of the High Temperature Storage (Dry Heat) Test and the Damp Heat Test are identical except for humidity. To save test time and cost, replacement of the dry heat test is discussed after a series of comparative tests. Taking into account the failure mechanisms of passive optical components under dry heat and damp heat, comparative dry heat and damp heat tests were performed on passive optical components (including DWDM, CWDM, coupler, isolator, and mini isolator); the test results for the isolator are listed. Telcordia testing tries not only the reliability of the passive optical components but also the patience of the experimenter: its cost in money, manpower, and material resources, and especially in time, is a heavy burden for the company. After a series of tests, we find that damp heat can adequately test the reliability of passive optical components, and an equipment manufacturer, in agreement with the component manufacturer, could omit the dry heat test if the damp heat test is performed first and passed.
Simultaneously Coupled Mechanical-Electrochemical-Thermal Simulation of Lithium-Ion Cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, C.; Santhanagopalan, S.; Sprague, M. A.
2016-07-28
Understanding the combined electrochemical-thermal and mechanical response of a system has a variety of applications, for example, structural failure from electrochemical fatigue and the potential induced changes of material properties. For lithium-ion batteries, there is an added concern over the safety of the system in the event of mechanical failure of the cell components. In this work, we present a generic multi-scale simultaneously coupled mechanical-electrochemical-thermal model to examine the interaction between mechanical failure and electrochemical-thermal responses. We treat the battery cell as a homogeneous material while locally we explicitly solve for the mechanical response of individual components using a homogenization model and the electrochemical-thermal responses using an electrochemical model for the battery. A benchmark problem is established to demonstrate the proposed modeling framework. The model shows the capability to capture the gradual evolution of cell electrochemical-thermal responses, and predicts the variation of those responses under different short-circuit conditions.
Chanchai, Rattanachai; Kanjanavanit, Rungsrit; Leemasawat, Krit; Amarittakomol, Anong; Topaiboon, Paleerat; Phrommintikul, Arintaya
2018-01-01
Background: Beta-blockers have been shown to decrease mortality and morbidity in heart failure with reduced ejection fraction (HFrEF) patients. However, the side effects are also dose-related, leading to underdosing. Cost constraint may be one of the limitations of appropriate beta-blocker use; this can be improved with generic drugs. However, the effects in real life practice have not been investigated. Methods and results: This study aimed to compare the efficacy and safety of generic and brand beta-blockers in HFrEF patients. We performed a retrospective cohort analysis in HFrEF patients who received either generic or brand beta-blocker in Chiang Mai Heart Failure Clinic. The primary endpoint was the proportion of patients who received at least 50% target dose of beta-blocker between generic and brand beta-blockers. Adverse events were secondary endpoints. 217 patients (119 and 98 patients received generic and brand beta-blocker, respectively) were enrolled. There were no differences between groups regarding age, gender, etiology of heart failure, New York Heart Association (NYHA) functional class, left ventricular ejection fraction (LVEF), rate of receiving angiotensin converting enzyme inhibitor (ACEI), angiotensin receptor blocker (ARB), or spironolactone. Patients receiving brand beta-blockers had lower resting heart rate at baseline (74.9 and 84.2 bpm, p = .001). Rate of achieved 50% target dose and target daily dose did not differ between groups (40.4 versus 44.5% and 48.0 versus 55.0%, p > .05, respectively). Rate of side effects was not different between groups (32.3 versus 29.5%, p > .05) and the most common side effect was hypotension. Conclusion: This study demonstrated that beta-blocker tolerability was comparable between brand and generic formulations. Generic or brand beta-blockers should be prescribed to HFrEF patients who have no contraindications.
Park-Wyllie, Laura; van Stralen, Judy; Castillon, Genaro; Sherman, Stephen E; Almagor, Doron
2017-10-01
Our study evaluated adverse events of therapeutic failure (and specifically reduced duration of action) with the use of a branded product, Osmotic Release Oral System (OROS) methylphenidate, which is approved for the treatment of attention deficit/hyperactivity disorder, and a generic product (methylphenidate, methylphenidate ER-C), which was approved for marketing in Canada based on bioequivalence to OROS methylphenidate. This study was initiated following reports that some US-marketed generic methylphenidate ER products had substantially higher reporting rates of therapeutic failure than did the referenced brands. Through methodology similar to that used by the US Food and Drug Administration to investigate the issue with the US-marketed generic, reporting rates were calculated from cases of therapeutic failure identified in the Canadian Vigilance Adverse Reaction Online database for a 1-year period beginning 8 months after each product launch. Corresponding population exposure was estimated from the number of tablets dispensed. An in-depth analysis of narratives of individual case safety reports (ICSRs) with the use of the generic product was conducted in duplicate by 2 physicians to assess causality and to characterize the potential safety risk and clinical pattern of therapeutic failure. Similar secondary analyses were conducted on the US-marketed products. Reporting rates of therapeutic failure with the use of methylphenidate ER-C (generic) and OROS methylphenidate (brand name) were 411.5 and 37.5 cases per 100,000 patient-years, respectively (reporting rate ratio, 10.99; 95% CI, 5.93-22.21). In-depth analysis of narratives of 230 ICSRs of therapeutic failure with the Canadian-marketed generic determined that all ICSRs were either probably (60 [26%]) or possibly (170 [74%]) causally related to methylphenidate ER-C. 
Clinical symptoms suggestive of overdose were present in 31 reports of loss of efficacy (13.5%) and occurred primarily in the morning, and premature loss of efficacy (shorter duration of action) was described in 98 cases (42.6%) and occurred primarily in the afternoon. Impacts on social functioning, such as disruption in work or school performance or adverse social behaviors, were found in 51 cases (22.2%). The ~10-fold higher reporting rate of therapeutic failure with the generic product relative to its reference product in the present Canadian study resembles findings with US-marketed generic products. While these results should be interpreted with caution due to the limitations of spontaneous adverse event reporting, which may confound comparisons across products, similar findings nonetheless led the US Food and Drug Administration to declare in 2014 that 2 methylphenidate ER generic products in the United States were neither bioequivalent nor interchangeable with OROS methylphenidate, their reference product. Our results indicate a potential safety issue with the Canadian-marketed generic and suggest a need for further investigation by Health Canada. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
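The study's headline statistic is a reporting-rate ratio with a 95% CI. A common way to compute one (a log-scale normal approximation; the case counts and patient-year denominators below are back-calculated illustrations consistent with the published rates, not the study's raw data) is:

```python
import math

def rate_ratio_ci(cases_a, py_a, cases_b, py_b, z=1.96):
    """Rate ratio of two event rates with an approximate 95% CI on the
    log scale, using SE = sqrt(1/cases_a + 1/cases_b)."""
    rr = (cases_a / py_a) / (cases_b / py_b)
    se = math.sqrt(1.0 / cases_a + 1.0 / cases_b)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# 230 generic cases over ~55,900 patient-years (~411.5 per 100,000) vs a
# hypothetical 21 brand cases over ~56,000 patient-years (~37.5 per 100,000)
rr, lo, hi = rate_ratio_ci(230, 55_900, 21, 56_000)
```

With these illustrative inputs the point estimate lands near the published ~11-fold ratio; the exact CI depends on the true brand-side count and exposure, which the abstract does not give.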
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana L. Kelly; Albert Malkhasyan
2010-06-01
There is a nearly ubiquitous assumption in PSA that parameter values are at least piecewise-constant in time. As a result, Bayesian inference tends to incorporate many years of plant operation, over which there have been significant changes in plant operational and maintenance practices, plant management, etc. These changes can cause significant changes in parameter values over time; however, failure to perform Bayesian inference in the proper time-dependent framework can mask these changes. Failure to question the assumption of constant parameter values, and failure to perform Bayesian inference in the proper time-dependent framework, were noted as important issues in NUREG/CR-6813, performed for the U.S. Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards in 2003. That report noted that “industry lacks tools to perform time-trend analysis with Bayesian updating.” This paper describes an application of time-dependent Bayesian inference methods developed for the European Commission Ageing PSA Network. These methods utilize open-source software, implementing Markov chain Monte Carlo sampling. The paper also illustrates the development of a generic prior distribution, which incorporates multiple sources of generic data via weighting factors that address differences in key influences, such as vendor, component boundaries, conditions of the operating environment, etc.
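As a toy version of the time-dependent inference described above (the paper itself used open-source MCMC software), the sketch below fits a loglinear failure-rate trend lambda_t = exp(a + b*t) to yearly event counts with a hand-rolled random-walk Metropolis sampler under flat priors. The trend form, step size, and all names are illustrative assumptions, not the Ageing PSA Network implementation.

```python
import math
import random

def loglik(a, b, counts, exposures):
    """Poisson log-likelihood for a loglinear time trend
    lambda_t = exp(a + b*t), t = 0, 1, ... (constant terms dropped)."""
    ll = 0.0
    for t, (n, T) in enumerate(zip(counts, exposures)):
        lam = math.exp(a + b * t)
        ll += n * math.log(lam * T) - lam * T
    return ll

def metropolis(counts, exposures, iters=5000, step=0.1, seed=1):
    """Random-walk Metropolis over (a, b) with flat priors; returns all samples
    (discard an initial burn-in portion before summarizing)."""
    rng = random.Random(seed)
    a, b = 0.0, 0.0
    cur = loglik(a, b, counts, exposures)
    samples = []
    for _ in range(iters):
        a2, b2 = a + rng.gauss(0, step), b + rng.gauss(0, step)
        new = loglik(a2, b2, counts, exposures)
        # Accept with probability min(1, exp(new - cur)).
        if new >= cur or rng.random() < math.exp(new - cur):
            a, b, cur = a2, b2, new
        samples.append((a, b))
    return samples
```

For counts that grow over time, the posterior mass for the trend parameter b concentrates above zero, which is the kind of time dependence a constant-rate model would mask.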
Ramiro, Miguel A; Llibre, Josep M
2014-11-01
The availability of generic lamivudine in the context of the current economic crisis has raised a new issue in some European countries: breaking up the once-daily fixed-dose antiretroviral combinations (FDAC) of efavirenz/tenofovir/emtricitabine, tenofovir/emtricitabine, or abacavir/lamivudine, in order to administer their components separately, thereby allowing the use of generic lamivudine instead of branded emtricitabine or lamivudine. The legal, ethical, and economic implications of this potential strategy are reviewed, particularly in those patients receiving a once-daily single-tablet regimen. An unfamiliar change in antiretroviral treatment from a successful patient-friendly FDAC into a more complex regimen including separately the components to allow the substitution of one (or some) of them for generic surrogates (in the absence of a generic bioequivalent FDAC) could be discriminatory because it does not guarantee access to equal excellence in healthcare to all citizens. Furthermore, it could violate the principle of non-maleficence by potentially causing harm both at the individual level (hindering adherence and favouring treatment failure and resistance), and at the community level (hampering control of disease transmission and transmission of HIV-1 resistance). Replacing a FDAC with the individual components of that combination should only be permitted when the substituting medication has the same qualitative and quantitative composition of active ingredients, pharmaceutical form, method of administration, dosage and presentation as the medication being replaced, and a randomized study has demonstrated its non-inferiority. Finally, a strict pharma-economic study supporting this change, comparing the effectiveness and the cost of a specific intervention with the best available alternative, should be undertaken before its potential implementation. Copyright © 2013 Elsevier España, S.L.U. 
and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.
NASA Technical Reports Server (NTRS)
Kleinhammer, Roger K.; Graber, Robert R.; DeMott, D. L.
2016-01-01
Reliability practitioners advocate getting reliability involved early in a product development process. However, when assigned to estimate or assess the (potential) reliability of a product or system early in the design and development phase, they are faced with a lack of reasonable models or methods for useful reliability estimation. Developing specific data is costly and time consuming. Instead, analysts rely on available data to assess reliability. Finding data relevant to the specific use and environment for any project is difficult, if not impossible. Instead, analysts attempt to develop the "best" or composite analog data to support the assessments. Industries, consortia, and vendors across many areas have spent decades collecting, analyzing, and tabulating fielded item and component reliability performance in terms of observed failures and operational use. This data resource provides a huge compendium of information for potential use, but it can also be compartmentalized by industry and difficult to discover, access, or manipulate. One method incorporates processes for reviewing these existing data sources and identifying the available information based on similar equipment, then using that generic data to derive an analog composite. Dissimilarities in equipment descriptions, environment of intended use, quality, and even failure modes affect the "best" data incorporated in an analog composite. Once developed, this composite analog data provides a "better" representation of the reliability of the equipment or component. It can be used to support early risk or reliability trade studies, or analytical models to establish the predicted reliability data points. It also establishes a baseline prior that may be updated based on test data or observed operational constraints and failures, i.e., using Bayesian techniques.
This tutorial presents a descriptive compilation of historical data sources across numerous industries and disciplines, along with examples of contents and data characteristics. It then presents methods for combining failure information from different sources and mathematical use of this data in early reliability estimation and analyses.
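One plausible reading of "composite analog data" with a mean and error factor is sketched below: a weighted geometric mean of the source failure rates as the composite median, with a lognormal error factor (95th/50th percentile ratio) derived from the weighted spread of the log-rates. This is a heuristic illustration of ours, not the tutorial's specific aggregation procedure; with a single source the error factor collapses to 1 because only between-source spread is modeled.

```python
import math

def composite_analog(rates, weights=None):
    """Composite analog from several generic failure-rate sources:
    weighted geometric mean as the median, and a lognormal error factor
    EF = exp(1.645 * sigma) from the weighted spread of the log-rates."""
    if weights is None:
        weights = [1.0] * len(rates)
    w = sum(weights)
    logs = [math.log(r) for r in rates]
    mu = sum(wi * li for wi, li in zip(weights, logs)) / w
    var = sum(wi * (li - mu) ** 2 for wi, li in zip(weights, logs)) / w
    ef = math.exp(1.645 * math.sqrt(var))
    return math.exp(mu), ef
```

Weights can encode the applicability judgments discussed in the tutorial (similarity of equipment, environment, quality): down-weighting a dissimilar source pulls the composite toward the better analogs and narrows the error factor.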
Aging assessment of large electric motors in nuclear power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villaran, M.; Subudhi, M.
1996-03-01
Large electric motors serve as the prime movers to drive high capacity pumps, fans, compressors, and generators in a variety of nuclear plant systems. This study examined the stressors that cause degradation and aging in large electric motors operating in various plant locations and environments. The operating history of these machines in nuclear plant service was studied by review and analysis of failure reports in the NPRDS and LER databases. This was supplemented by a review of motor designs, and their nuclear and balance of plant applications, in order to characterize the failure mechanisms that cause degradation, aging, and failure in large electric motors. A generic failure modes and effects analysis for large squirrel cage induction motors was performed to identify the degradation and aging mechanisms affecting various components of these large motors, the failure modes that result, and their effects upon the function of the motor. The effects of large motor failures upon the systems in which they are operating, and on the plant as a whole, were analyzed from failure reports in the databases. The effectiveness of the industry's large motor maintenance programs was assessed based upon the failure reports in the databases and reviews of plant maintenance procedures and programs.
Unique failure behavior of metal/composite aircraft structural components under crash type loads
NASA Technical Reports Server (NTRS)
Carden, Huey D.
1990-01-01
Failure behavior results are presented on some of the crash dynamics research conducted with concepts of aircraft elements and substructure which have not necessarily been designed or optimized for energy absorption or crash loading considerations. To achieve desired new designs which incorporate improved energy absorption capabilities often requires an understanding of how more conventional designs behave under crash type loadings. Experimental and analytical data are presented which indicate some general trends in the failure behavior of a class of composite structures which include individual fuselage frames, skeleton subfloors with stringers and floor beams but without skin covering, and subfloors with skin added to the frame-stringer arrangement. Although the behavior is complex, a strong similarity in the static/dynamic failure behavior among these structures is illustrated through photographs of the experimental results and through analytical data of generic composite structural models. It is believed that the thread of similarity in behavior is telling the designer and dynamicists a great deal about what to expect in the crash behavior of these structures and can guide designs for improving the energy absorption and crash behavior of such structures.
Lin, Yu-Shiuan; Jan, I-Shiow; Cheng, Shou-Hsia
2017-03-01
Generic medications used for chronic diseases are beneficial in containing healthcare costs and improving drug accessibility. However, the effects of generic drugs in acute and severe illness remain controversial. This study aims to investigate treatment costs and outcomes of generic antibiotics prescribed for adults with a urinary tract infection in outpatient settings. The data source was the Longitudinal Health Insurance Database of Taiwan. We included outpatients aged 20 years and above with a urinary tract infection who required one oral antibiotic for which brand-name and generic products were simultaneously available. Drug cost and overall healthcare expense of the index consultation, healthcare cost during a 42-day follow-up period, and treatment failure rates were the main dependent variables. Data were compared between brand-name and generic users from the entire cohort and a propensity score-matched sample. Results from the entire cohort and the propensity score-matched sample were similar. Daily antibiotic cost was significantly lower among generic users than brand-name users. Significantly lower total drug claims for the index consultation existed only in patients receiving the investigated antibiotics when the price difference between the brand-name and generic versions was relatively large (e.g., >50%). The overall healthcare cost of the index consultation, healthcare expenditure during a 42-day follow-up period, and treatment failure rates were similar between the two groups. Compared with those treated with brand-name antibiotics, outpatients who received generic antibiotics had equivalent treatment outcomes with lower drug costs. Generic antibiotics are effective and worthy of adoption among outpatients with simple infections for which oral antibiotic treatment is indicated. Copyright © 2016 John Wiley & Sons, Ltd.
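The propensity score matching mentioned above can be illustrated with a minimal greedy 1:1 nearest-neighbour matcher. This is a generic sketch of the technique, not the study's procedure: it assumes propensity scores were already estimated (e.g., from a logistic model of treatment choice on covariates), and the caliper value and greedy strategy are illustrative choices of ours.

```python
def greedy_match(treated, controls, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on precomputed propensity
    scores. `treated` and `controls` are lists of (id, score); each
    control is used at most once, and pairs must lie within the caliper."""
    pairs = []
    used = set()
    # Process treated units in score order; greedy, so not globally optimal.
    for tid, ts in sorted(treated, key=lambda x: x[1]):
        best, best_d = None, caliper
        for cid, cs in controls:
            d = abs(ts - cs)
            if cid not in used and d <= best_d:
                best, best_d = cid, d
        if best is not None:
            used.add(best)
            pairs.append((tid, best))
    return pairs
```

Treated units with no control inside the caliper are simply dropped, which is one reason matched-sample and whole-cohort analyses are usually reported side by side, as in this study.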
NASA Technical Reports Server (NTRS)
Weiss, Jerold L.; Hsu, John Y.
1986-01-01
The use of a decentralized approach to failure detection and isolation for use in restructurable control systems is examined. This work has produced: (1) A method for evaluating fundamental limits to FDI performance; (2) Application using flight recorded data; (3) A working control element FDI system with maximal sensitivity to critical control element failures; (4) Extensive testing on realistic simulations; and (5) A detailed design methodology involving parameter optimization (with respect to model uncertainties) and sensitivity analyses. This project has concentrated on detection and isolation of generic control element failures since these failures frequently lead to emergency conditions and since knowledge of remaining control authority is essential for control system redesign. The failures are generic in the sense that no temporal failure signature information was assumed. Thus, various forms of functional failures are treated in a unified fashion. Such a treatment results in a robust FDI system (i.e., one that covers all failure modes) but sacrifices some performance when detailed failure signature information is known, useful, and employed properly. It was assumed throughout that all sensors are validated (i.e., contain only in-spec errors) and that only the first failure of a single control element needs to be detected and isolated. The FDI system which has been developed will handle a class of multiple failures.
Sadashiv, Mucheli Shravan; Rupali, Priscilla; Manesh, Abi; Kannangai, Rajesh; Abraham, Ooriapadickal Cherian; Pulimood, Susanne A; Karthik, Rajiv; Rajkumar, S; Thomas, Kurien
2017-12-01
Since the NACO antiretroviral therapy (ART) roll-out, generic ART has been the mainstay of therapy. There are many studies documenting the efficacy of generic ART, but with the passage of time, failure of therapy is on the rise. As institution of second-line ART has significant financial implications both for a program and for an individual, it is imperative that we determine factors which contribute towards treatment failure in a cohort of patients on generic antiretroviral therapy. This was a nested matched case-control study assessing the predictors of treatment failure in our cohort who had been on antiretroviral therapy for at least a year. We identified 42 patients (cases) with documented treatment failure out of our cohort of 823 patients and 42 sex-, age- and duration of therapy-matched controls. Using a structured proforma, we collected information from the out-patient and in-patient charts of the Infectious Diseases clinic cohort in CMC, Vellore. A set of predetermined variables were studied as potential risk factors for treatment failure on ART. Univariate analysis showed significant association with 1) self-reported non-adherence <95% [OR 12.81 (95% CI 1.54-281.45)]; 2) treatment interruptions in adherent cases [OR 9.56 (95% CI 1.11-213.35)]; 3) past inappropriate therapies [OR 9.65 (95% CI 1.12-215.94)]; 4) diarrhoea [OR 16.40 (95% CI 2.02-355.96)]; 5) GI opportunistic infections [OR 11.06 (95% CI 1.31-244.27)]; and 6) drug toxicity [OR 3.69 (95% CI 1.15-12.35)]. In multiple logistic regression analysis, we found the independent risk factors of treatment failure to be: self-reported non-adherence (<95%) with OR 15.46 (95% CI 1.55-154.08), drug toxicity with OR 4.13 (95% CI 1.095-15.534), and history of diarrhoea with OR 23.446 (95% CI 2.572-213.70). This study reveals that besides adherence to therapy, presence of diarrhoea and occurrence of drug toxicity are significant risk factors associated with failure of anti-retroviral therapy.
There is a need for further prospective studies to assess their role in development of treatment failure on ART and thus help development of targeted interventions.
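The univariate odds ratios quoted above come from 2x2 tables of exposure by outcome. As a generic illustration (not the study's code), the snippet below computes an odds ratio with the standard Woolf (log-normal) confidence interval; the wide intervals in the abstract reflect the small cell counts in a 42-vs-42 matched design.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] (a = exposed cases,
    b = exposed controls, c = unexposed cases, d = unexposed controls),
    with a Woolf (log-normal) confidence interval. Assumes no zero cells."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, or_ * math.exp(-z * se), or_ * math.exp(z * se)
```

The standard error 1/a + 1/b + 1/c + 1/d under the square root makes clear why small cells blow up the interval: a single cell of size 1 contributes a full unit of variance on the log scale.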
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Curtis; Rabiti, Cristian; Martineau, Richard
Safety is central to the design, licensing, operation, and economics of Nuclear Power Plants (NPPs). As the current Light Water Reactor (LWR) NPPs age beyond 60 years, there are possibilities for increased frequency of Systems, Structures, and Components (SSCs) degradations or failures that initiate safety-significant events, reduce existing accident mitigation capabilities, or create new failure modes. Plant designers commonly “over-design” portions of NPPs and provide robustness in the form of redundant and diverse engineered safety features to ensure that, even in the case of well-beyond design basis scenarios, public health and safety will be protected with a very high degree of assurance. This form of defense-in-depth is a reasoned response to uncertainties and is often referred to generically as “safety margin.” Historically, specific safety margin provisions have been formulated, primarily based on “engineering judgment.”
Effect of stress concentrations in composite structures
NASA Technical Reports Server (NTRS)
Babcock, G. D.; Knauss, W. G.
1984-01-01
This work seeks a better understanding of the failure of complex composite structures. This type of structure requires a thorough understanding of its behavior under load, on both a macro and a micro scale, if failure mechanisms are to be understood. The two problems being studied are failure at a panel/stiffener interface and a generic problem of failure at a stress concentration.
NASA Technical Reports Server (NTRS)
Carden, Huey D.; Boitnott, Richard L.; Fasanella, Edwin L.
1990-01-01
Failure behavior results are presented from crash dynamics research using concepts of aircraft elements and substructure not necessarily designed or optimized for energy absorption or crash loading considerations. To achieve desired new designs which incorporate improved energy absorption capabilities often requires an understanding of how more conventional designs behave under crash loadings. Experimental and analytical data are presented which indicate some general trends in the failure behavior of a class of composite structures which include individual fuselage frames, skeleton subfloors with stringers and floor beams but without skin covering, and subfloors with skin added to the frame-stringer arrangement. Although the behavior is complex, a strong similarity in the static and dynamic failure behavior among these structures is illustrated through photographs of the experimental results and through analytical data of generic composite structural models. It is believed that the similarity in behavior is giving the designer and dynamicists much information about what to expect in the crash behavior of these structures and can guide designs for improving the energy absorption and crash behavior of such structures.
Error and attack tolerance of complex networks
NASA Astrophysics Data System (ADS)
Albert, Réka; Jeong, Hawoong; Barabási, Albert-László
2000-07-01
Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network. Complex communication networks display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web, the Internet, social networks and cells. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks.
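The error-versus-attack asymmetry described above is easy to reproduce in simulation. The sketch below (our illustration, standard-library only) grows a Barabási-Albert-style preferential-attachment graph, then compares the size of the largest connected component after removing the same number of random nodes ("errors") versus highest-degree nodes ("attack"); the graph size, attachment parameter, and removal fraction are illustrative.

```python
import random
from collections import defaultdict, deque

def ba_graph(n, m=2, seed=0):
    """Preferential-attachment (Barabási-Albert-style) graph as an
    adjacency dict; each new node attaches to ~m degree-biased targets."""
    rng = random.Random(seed)
    adj = defaultdict(set)
    targets = list(range(m))
    repeated = []  # node list in which each node appears once per edge end
    for new in range(m, n):
        for t in set(targets):
            adj[new].add(t)
            adj[t].add(new)
        repeated.extend(targets)
        repeated.extend([new] * m)
        targets = [rng.choice(repeated) for _ in range(m)]
    return adj

def giant_component(adj, removed):
    """Size of the largest connected component after deleting `removed`."""
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        q, size = deque([s]), 0
        seen.add(s)
        while q:
            u = q.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        best = max(best, size)
    return best

def attack_vs_error(adj, frac=0.05, seed=0):
    """Giant-component size after random failures vs a targeted hub attack."""
    rng = random.Random(seed)
    k = int(len(adj) * frac)
    rand_removed = rng.sample(list(adj), k)
    hubs = sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:k]
    return giant_component(adj, rand_removed), giant_component(adj, hubs)
```

On a scale-free topology the random-failure giant component barely shrinks, while removing the same number of hubs fragments the network much more severely, which is the paper's central point.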
Safety and efficacy of generic drugs with respect to brand formulation.
Gallelli, Luca; Palleria, Caterina; De Vuono, Antonio; Mumoli, Laura; Vasapollo, Piero; Piro, Brunella; Russo, Emilio
2013-12-01
Generic drugs are equivalent to the brand formulation if they have the same active substance, the same pharmaceutical form, and the same therapeutic indications, and similar bioequivalence with respect to the reference medicinal product. The use of generic drugs is encouraged by many countries in order to reduce medication prices. However, some points, such as bioequivalence and the role of excipients, remain to be clarified regarding clinical efficacy and safety during the switch from brand to generic formulations. In conclusion, the use of generic drugs could be associated with an increased number of days of disease (shorter time to relapse) or might lead to therapeutic failure; on the other hand, a higher drug concentration might expose patients to an increased risk of dose-dependent side effects.
Hydrodynamic design of generic pump components
NASA Technical Reports Server (NTRS)
Eastland, A. H. J.; Dodson, H. C.
1991-01-01
Inducer and impeller base geometries were defined for a fuel pump for a generic gas generator cycle. Blade surface data and inlet flowfield definition are available in sufficient detail to allow computational fluid dynamic analysis of the two components.
Fluctuating pressures in pump diffuser and collector scrolls, part 1
NASA Technical Reports Server (NTRS)
Sloteman, Donald P.
1989-01-01
The cracking of scroll liners on the SSME High Pressure Fuel Turbo Pump (HPFTP) during hot gas engine test firings has prompted a study into the nature of pressure fluctuations in centrifugal pump stages. The amplitudes of these fluctuations and where they originate in the pump stage are quantified. To accomplish this, a test program was conducted to map the pressure pulsation activity in a centrifugal pump stage. This stage is based on typical commercial (or generic) pump design practice and not the specialized design of the HPFTP. Measurements made in the various elements comprising the stage indicate that pulsation activity is dominated by synchronous related phenomena. Pulsation amplitudes measured in the scroll are low, on the order of 2 to 7 percent of the impeller exit tip speed velocity head. Significant non-synchronous pressure fluctuations occur at low flow and, while of interest to commercial pump designers, have little meaning to the HPFTP experience. Results obtained with the generic components do provide insights into possible pulsation related scroll failures on the HPFTP, and provide a basis for further study.
Sirimamilla, P Abhiram; Rimnac, Clare M; Furmanski, Jevan
2018-01-01
Highly crosslinked UHMWPE is now the material of choice for hard-on-soft bearing couples in total joint replacements. However, the fracture resistance of the polymer remains a design concern for increased longevity of the components in vivo. Fracture research utilizing the traditional linear elastic fracture mechanics (LEFM) or elastic plastic fracture mechanics (EPFM) approach has not yielded a definite failure criterion for UHMWPE. Therefore, an advanced viscous fracture model has been applied to various notched compact tension specimen geometries to estimate the fracture resistance of the polymer. Two generic crosslinked UHMWPE formulations (remelted 65kGy and remelted 100kGy) were analyzed in this study using notched test specimens with three different notch radii under static loading conditions. The results suggest that the viscous fracture model can be applied to crosslinked UHMWPE and a single value of critical energy governs crack initiation and propagation in the material. To our knowledge, this is one of the first studies to implement a mechanistic approach to study crack initiation and propagation in UHMWPE for a range of clinically relevant stress-concentration geometries. It is believed that a combination of structural analysis of components and material parameter quantification is a path to effective failure prediction in UHMWPE total joint replacement components, though additional testing is needed to verify the rigor of this approach. Copyright © 2017 Elsevier Ltd. All rights reserved.
Source Data Impacts on Epistemic Uncertainty for Launch Vehicle Fault Tree Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven; Ring, Robert
2016-01-01
Launch vehicle systems are designed and developed using both heritage and new hardware. Design modifications to the heritage hardware to fit new functional system requirements can impact the applicability of heritage reliability data. Risk estimates for newly designed systems must be developed from generic data sources such as commercially available reliability databases using reliability prediction methodologies, such as those addressed in MIL-HDBK-217F. Failure estimates must be converted from the generic environment to the specific operating environment of the system in which they are used. In addition, some qualification of the applicability of the data source to the current system should be made. Characterizing data applicability under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This paper will demonstrate a data-source applicability classification method for assigning epistemic component uncertainty to a target vehicle based on the source and operating environment of the originating data. The source applicability is determined using heuristic guidelines, while translation of operating environments is accomplished by applying statistical methods to MIL-HDBK-217F tables. The paper will provide one example of assigning environmental factor uncertainty when translating between operating environments for microelectronic part-type components. The heuristic guidelines will be followed by uncertainty-importance routines to assess the need for more applicable data to reduce model uncertainty.
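The environment translation mentioned above can be sketched as scaling a failure rate by the ratio of environmental (pi_E) factors. The factor values below are placeholders of ours, NOT the actual MIL-HDBK-217F values, which are part- and environment-specific; only the mechanics of the translation are shown.

```python
# Illustrative environmental factors -- placeholders, NOT actual
# MIL-HDBK-217F pi_E values, which depend on part type and category.
PI_E = {
    "ground_benign": 1.0,
    "ground_fixed": 2.0,
    "airborne": 8.0,
}

def translate_rate(rate, from_env, to_env, table=PI_E):
    """Scale a generic failure rate from its source operating environment
    to the target environment by the ratio of environmental factors."""
    return rate * table[to_env] / table[from_env]
```

Because each tabulated factor carries its own uncertainty, the translated rate inherits an epistemic spread on top of the point scaling, which is the uncertainty the paper's statistical treatment of the tables quantifies.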
NASA Astrophysics Data System (ADS)
Guler Yigitoglu, Askin
In the context of long operation of nuclear power plants (NPPs) (i.e., 60-80 years, and beyond), investigation of the aging of passive systems, structures and components (SSCs) is important to assess safety margins and to decide on reactor life extension, as indicated within the U.S. Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) Program. In the traditional probabilistic risk assessment (PRA) methodology, evaluating the potential significance of aging of passive SSCs on plant risk is challenging. Although passive SSC failure rates can be added as initiating event frequencies or basic event failure rates in the traditional event-tree/fault-tree methodology, these failure rates are generally based on generic plant failure data, which means that aging effects on the true state of a specific plant are not reflected in a realistic manner. Dynamic PRA methodologies have gained attention recently due to their capability to account for the plant state and thus address the difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models (and also in the modeling of digital instrumentation and control systems). Physics-based models can capture the impact of complex aging processes (e.g., fatigue, stress corrosion cracking, flow-accelerated corrosion, etc.) on SSCs and can be utilized to estimate passive SSC failure rates using realistic NPP data from reactor simulation, as well as considering effects of surveillance and maintenance activities. The objectives of this dissertation are twofold: the development of a methodology for the incorporation of aging modeling of passive SSCs into a reactor simulation environment, to provide a framework for evaluation of their risk contribution in both dynamic and traditional PRA; and the demonstration of the methodology through its application to pressurizer surge line pipe welds and steam generator tubes in commercial nuclear power plants.
In the proposed methodology, a multi-state physics-based model is selected to represent the aging process. The model is modified via a sojourn time approach to reflect the operational and maintenance history dependence of the transition rates. Thermal-hydraulic parameters of the model are calculated via the reactor simulation environment, and uncertainties associated with both the parameters and the models are assessed via a two-loop Monte Carlo approach (Latin hypercube sampling) to propagate input probability distributions through the physical model. The effort documented in this thesis towards this overall objective consists of: i) defining a process for selecting critical passive components and related aging mechanisms, ii) aging model selection, iii) calculating the probability that aging would cause the component to fail, iv) uncertainty/sensitivity analyses, v) procedure development for modifying an existing PRA to accommodate consideration of passive component failures, and vi) including the calculated failure probability in the modified PRA. The proposed methodology is applied to pressurizer surge line pipe weld aging and steam generator tube degradation in pressurized water reactors.
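The Latin hypercube sampling used in the two-loop Monte Carlo above can be sketched with the standard library alone: each of the n samples lands in a distinct equal-probability stratum in every dimension, with stratum order shuffled independently per dimension. This is a generic sketch of the technique (the dissertation's implementation is not shown); the unit-cube points would then be mapped through the inverse CDFs of the input distributions.

```python
import random

def latin_hypercube(n, dims, seed=0):
    """n stratified samples in [0, 1)^dims: one sample per equal-width
    stratum in each dimension, with stratum order shuffled per dimension."""
    rng = random.Random(seed)
    cols = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)
        # Jitter uniformly within each stratum of width 1/n.
        cols.append([(s + rng.random()) / n for s in strata])
    return [tuple(col[i] for col in cols) for i in range(n)]
```

In a two-loop scheme, an outer LHS draw fixes the epistemic parameters while an inner loop samples the aleatory variability, so the spread of the inner-loop results across outer draws separates the two kinds of uncertainty.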
NASA Technical Reports Server (NTRS)
Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.
2016-01-01
Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are a set of dependent failures that can be caused by: system environments; manufacturing; transportation; storage; maintenance; and assembly, as examples. Since there are many factors that contribute to CCFs, the effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failures, and data is limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) team of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCF due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCF and that launch vehicles are a highly redundant system by design, it is important to make design decisions that account for a range of values for independent failures and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate versus a common cause beta factor effect on a system's failure probability. This presentation will define a CCF and review estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the results of the different CCFs on the reliability of a one-out-of-two system.
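The response surface described above can be sampled from the standard beta-factor model, sketched below for a one-out-of-two system (our generic illustration, not the presentation's model): a fraction beta of each channel's failure probability is common cause, so the system fails if both channels fail independently or a common-cause event takes out both.

```python
def one_oo_two_failure_prob(q_total, beta):
    """Failure probability of a one-out-of-two redundant system under the
    beta-factor CCF model. q_total is each channel's total failure
    probability; a fraction beta of it is attributed to common cause."""
    q_ccf = beta * q_total          # defeats both channels at once
    q_ind = (1.0 - beta) * q_total  # independent portion per channel
    return q_ind ** 2 + q_ccf
```

Even a small beta dominates: with q_total = 1e-3, beta = 0 gives about 1e-6, while beta = 0.1 gives roughly 1e-4, showing why CCF, not independent failure, limits the benefit of redundancy.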
NASA Technical Reports Server (NTRS)
Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.
2015-01-01
Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are dependent failures that can be caused by, for example, system environments, manufacturing, transportation, storage, maintenance, and assembly. Since many factors contribute to CCFs, their effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failures, and data are limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) group of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCF, due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCF, and that launch vehicles are highly redundant systems by design, it is important to make design decisions that account for a range of values for independent failures and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate versus a common cause beta factor on the system's failure probability. This presentation will define a CCF and review estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the results of the different CCFs on the reliability of a one-out-of-two system.
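The response-surface idea can be sketched with the standard beta-factor model: a component's failure probability q is split into an independent part (1-β)q and a common cause part βq, and a one-out-of-two system fails either when both channels fail independently or when a single common cause event takes out both. The numbers below are illustrative, not the presentation's data.

```python
def one_out_of_two_failure_prob(q, beta):
    """Beta-factor approximation for a 1-out-of-2 redundant system:
    both channels fail independently, or one common cause event
    disables both at once."""
    q_independent = (1.0 - beta) * q
    return q_independent ** 2 + beta * q

# small response surface: system failure probability vs. q and beta
for beta in (0.0, 0.05, 0.10):
    surface = [one_out_of_two_failure_prob(q, beta) for q in (1e-4, 1e-3, 1e-2)]
    print(f"beta={beta:.2f}: {surface}")
```

Even a small β dominates the result: at q = 1e-3, moving β from 0 to 0.10 raises the system failure probability from 1e-6 to roughly 1e-4, which is why CCF reduction can matter more than adding further redundancy.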
Montefusco, Alberto; Consonni, Francesco; Beretta, Gian Paolo
2015-04-01
By reformulating the steepest-entropy-ascent (SEA) dynamical model for nonequilibrium thermodynamics in the mathematical language of differential geometry, we compare it with the primitive formulation of the general equation for the nonequilibrium reversible-irreversible coupling (GENERIC) model and discuss the main technical differences of the two approaches. In both dynamical models the description of dissipation is of the "entropy-gradient" type. SEA focuses only on the dissipative, i.e., entropy generating, component of the time evolution, chooses a sub-Riemannian metric tensor as dissipative structure, and uses the local entropy density field as potential. GENERIC emphasizes the coupling between the dissipative and nondissipative components of the time evolution, chooses two compatible degenerate structures (Poisson and degenerate co-Riemannian), and uses the global energy and entropy functionals as potentials. As an illustration, we rewrite the known GENERIC formulation of the Boltzmann equation in terms of the square root of the distribution function adopted by the SEA formulation. We then provide a formal proof that in more general frameworks, whenever all degeneracies in the GENERIC framework are related to conservation laws, the SEA and GENERIC models of the dissipative component of the dynamics are essentially interchangeable, provided of course they assume the same kinematics. As part of the discussion, we note that equipping the dissipative structure of GENERIC with the Leibniz identity makes it automatically SEA on metric leaves.
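For reference, the GENERIC structure the abstract compares against can be written in its standard two-generator form (as in the literature on which the paper builds):

```latex
\frac{dx}{dt} \;=\; L(x)\,\frac{\delta E}{\delta x} \;+\; M(x)\,\frac{\delta S}{\delta x},
\qquad
L(x)\,\frac{\delta S}{\delta x} = 0,
\qquad
M(x)\,\frac{\delta E}{\delta x} = 0,
```

where $E$ and $S$ are the global energy and entropy functionals, $L$ is antisymmetric (Poisson), and $M$ is symmetric and positive semidefinite. The degeneracy conditions yield energy conservation and nonnegative entropy production, $\dot S = \frac{\delta S}{\delta x}\cdot M\,\frac{\delta S}{\delta x} \ge 0$; the second ($M$-driven) term is the dissipative component that the abstract argues is interchangeable with the SEA description.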
GERICOS: A Generic Framework for the Development of On-Board Software
NASA Astrophysics Data System (ADS)
Plasson, P.; Cuomo, C.; Gabriel, G.; Gauthier, N.; Gueguen, L.; Malac-Allain, L.
2016-08-01
This paper presents an overview of the GERICOS framework (GEneRIC Onboard Software), its architecture, its various layers, and its future evolutions. The GERICOS framework, developed and qualified by LESIA, offers a set of generic, reusable, and customizable software components for the rapid development of payload flight software. The GERICOS framework has a layered structure. The first layer (GERICOS::CORE) implements the concept of active objects and forms an abstraction layer on top of real-time kernels. The second layer (GERICOS::BLOCKS) offers a set of reusable software components for building flight software based on generic solutions to recurrent functionalities. The third layer (GERICOS::DRIVERS) implements software drivers for several COTS IP cores of the LEON processor ecosystem.
PSHFT - COMPUTERIZED LIFE AND RELIABILITY MODELLING FOR TURBOPROP TRANSMISSIONS
NASA Technical Reports Server (NTRS)
Savage, M.
1994-01-01
The computer program PSHFT calculates the life of a variety of aircraft transmissions. A generalized life and reliability model is presented for turboprop and parallel-shaft geared prop-fan aircraft transmissions. The transmission life and reliability model is a combination of the individual reliability models for all the bearings and gears in the main load paths. The bearing and gear reliability models are based on the statistical two-parameter Weibull failure distribution method and classical fatigue theories. The computer program developed to calculate the transmission model is modular. In its present form, the program can analyze five different transmission arrangements. Moreover, the program can be easily modified to include additional transmission arrangements. PSHFT uses the properties of a common-block two-dimensional array to separate the component and transmission property values from the analysis subroutines. The rows correspond to specific components, with the first row containing the values for the entire transmission. Columns contain the values for specific properties. Since the subroutines (which determine the transmission life and dynamic capacity) interface solely with this property array, they are separated from any specific transmission configuration. The system analysis subroutines work in an identical manner for all transmission configurations considered. Thus, other configurations can be added to the program by simply adding component property determination subroutines. PSHFT consists of a main program, a series of configuration-specific subroutines, generic component property analysis subroutines, system analysis subroutines, and a common block. The main program selects the routines to be used in the analysis and sequences their operation.
The series of configuration-specific subroutines input the configuration data, perform the component force and life analyses (with the help of the generic component property analysis subroutines), fill the property array, call up the system analysis routines, and finally print out the analysis results for the system and components. PSHFT is written in FORTRAN 77 and compiled with a Microsoft FORTRAN compiler. The program will run on an IBM PC AT compatible with at least 104K bytes of memory. The program was developed in 1988.
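The combination of individual two-parameter Weibull component models into a transmission-level life estimate can be sketched as follows. The component parameters here are invented placeholders, not PSHFT's data, and the sketch is in Python rather than the program's FORTRAN 77.

```python
import math

def weibull_reliability(t, eta, beta):
    """Two-parameter Weibull survival probability at life t
    (eta = characteristic life, beta = shape parameter)."""
    return math.exp(-((t / eta) ** beta))

def system_reliability(t, components):
    """Series system: the transmission survives only if every bearing
    and gear in the main load path survives."""
    r = 1.0
    for eta, beta in components:
        r *= weibull_reliability(t, eta, beta)
    return r

def life_at_reliability(components, target=0.90, hi=1e6):
    """Bisect for the life at which system reliability reaches the
    target (e.g. the L10 life at 90% reliability)."""
    lo_t, hi_t = 0.0, hi
    for _ in range(200):
        mid = 0.5 * (lo_t + hi_t)
        if system_reliability(mid, components) > target:
            lo_t = mid
        else:
            hi_t = mid
    return 0.5 * (lo_t + hi_t)

# hypothetical (eta in hours, beta) pairs for two bearings and a gear
parts = [(8000.0, 1.5), (12000.0, 2.0), (15000.0, 1.1)]
print(life_at_reliability(parts))
```

The series-system product is the key structural assumption: the system's 90%-reliability life is far shorter than any single component's characteristic life, which is why transmission life models must aggregate every element in the load path.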
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daling, P.M.; Marler, J.E.; Vo, T.V.
This study evaluates the values (benefits) and impacts (costs) associated with potential resolutions to Generic Issue 143, "Availability of HVAC and Chilled Water Systems." The study identifies vulnerabilities related to failures of HVAC, chilled water, and room cooling systems; develops estimates of room heatup rates and safety-related equipment vulnerabilities following losses of HVAC/room cooler systems; develops estimates of the core damage frequencies and public risks associated with failures of these systems; develops three proposed resolution strategies for this generic issue; and performs a value/impact analysis of the proposed resolutions. Existing probabilistic risk assessments for four representative plants, including one plant from each vendor, form the basis for the core damage frequency and public risk calculations. Both internal and external events were considered. It was concluded that all three proposed resolution strategies exceed the $1,000/person-rem cost-effectiveness ratio. Additional evaluations were performed to develop "generic" insights on potential design-related and configuration-related vulnerabilities and potential high-frequency (approximately 1E-04/RY) accident sequences that involve failures of HVAC/room cooling functions. It was concluded that, although high-frequency accident sequences may exist at some plants, these high-frequency sequences are plant-specific in nature or have been resolved through hardware and/or operational changes. The plant-specific Individual Plant Examinations are an effective vehicle for identification and resolution of these plant-specific anomalies and hardware configurations.
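The $1,000/person-rem screening criterion divides an upgrade's cost by the population dose it is expected to avert. A minimal sketch of that arithmetic follows; the function and every number in it are illustrative assumptions, not the study's values.

```python
def cost_per_person_rem(upgrade_cost, delta_cdf_per_ry,
                        person_rem_per_core_damage, remaining_ry):
    """Dollars per person-rem averted: implementation cost divided by
    (reduction in core damage frequency x dose consequence per core
    damage event x remaining reactor-years of operation)."""
    averted_dose = delta_cdf_per_ry * person_rem_per_core_damage * remaining_ry
    return upgrade_cost / averted_dose

# illustrative: a $2M fix, 1E-6/RY CDF reduction, 1E6 person-rem per
# core damage event, 25 remaining reactor-years
ratio = cost_per_person_rem(2_000_000.0, 1e-6, 1e6, 25.0)
print(ratio)  # on the order of 1e4 $/person-rem: fails the $1,000 test
```

Under these made-up inputs the upgrade averts only 25 person-rem, so its cost-effectiveness ratio far exceeds $1,000/person-rem, mirroring the study's conclusion for all three resolution strategies.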
Dissolution Failure of Solid Oral Drug Products in Field Alert Reports.
Sun, Dajun; Hu, Meng; Browning, Mark; Friedman, Rick L; Jiang, Wenlei; Zhao, Liang; Wen, Hong
2017-05-01
From 2005 to 2014, 370 data entries of dissolution failures of solid oral drug products were assessed with respect to the solubility of drug substances, dosage forms [immediate release (IR) vs. modified release (MR)], and manufacturers (brand name vs. generic). The study results show that the solubility of drug substances does not play a significant role in dissolution failures; however, MR drug products fail dissolution tests more frequently than IR drug products. When multiple variables were analyzed simultaneously, poorly water-soluble IR drug products failed the most dissolution tests, followed by poorly soluble MR drug products and very soluble MR drug products. Interestingly, the generic drug products fail dissolution tests at an earlier time point during a stability study than the brand name drug products. Whether the dissolution failure of these solid oral drug products has any in vivo implication will require further pharmacokinetic, pharmacodynamic, clinical, and drug safety evaluation. The Food and Drug Administration is currently conducting risk-based assessments using in-house dissolution testing, physiologically based pharmacokinetic modeling and simulation, and post-market surveillance tools. In the meantime, this interim report will outline a general scheme for monitoring dissolution failures of solid oral dosage forms as a pharmaceutical quality indicator. Published by Elsevier Inc.
An industrial information integration approach to in-orbit spacecraft
NASA Astrophysics Data System (ADS)
Du, Xiaoning; Wang, Hong; Du, Yuhao; Xu, Li Da; Chaudhry, Sohail; Bi, Zhuming; Guo, Rong; Huang, Yongxuan; Li, Jisheng
2017-01-01
To operate an in-orbit spacecraft, the spacecraft status has to be monitored autonomously by collecting and analysing real-time data and then detecting abnormalities and malfunctions of system components. To develop an information system for spacecraft state detection, we investigate the feasibility of using ontology-based artificial intelligence in the system development. We propose a new modelling technique based on the semantic web, agents, scenarios and an ontologies model. In modelling, the subjects of astronautics fields are classified, corresponding agents and scenarios are defined, and they are connected by the semantic web to analyse data and detect failures. We introduce the modelling methodologies and the resulting framework of the status detection information system in this paper. We discuss the system components as well as their interactions in detail. The system has been prototyped and tested to illustrate its feasibility and effectiveness. The proposed modelling technique is generic and can be extended and applied to the development of other large-scale and complex information systems.
Process management using component thermal-hydraulic function classes
Morman, James A.; Wei, Thomas Y. C.; Reifman, Jaques
1999-01-01
A process management expert system that, following the malfunctioning of a component such as a pump, determines system realignment procedures, such as by-passing the malfunctioning component at on-line speeds, to maintain operation of the process at full or partial capacity or to provide safe shutdown of the system while isolating the malfunctioning component. The expert system uses thermal-hydraulic function classes at the component level for analyzing unanticipated as well as anticipated component malfunctions to provide recommended sequences of operator actions. Each component is classified according to its thermal-hydraulic function and the generic and component-specific characteristics for that function. Using the diagnosis of the malfunctioning component and its thermal-hydraulic class, the expert system analysis is carried out using generic thermal-hydraulic first principles. One aspect of the invention employs a qualitative physics-based forward search directed primarily downstream from the malfunctioning component in combination with a subsequent backward search directed primarily upstream from the serviced component. Generic classes of components are defined in the knowledge base according to the three thermal-hydraulic functions of mass, momentum, and energy transfer and are used to determine possible realignments of component configurations in response to the thermal-hydraulic function imbalance caused by the malfunctioning component. Each realignment to a new configuration produces an accompanying sequence of recommended operator actions. All possible new configurations are examined and a prioritized list of acceptable solutions is produced.
Process management using component thermal-hydraulic function classes
Morman, J.A.; Wei, T.Y.C.; Reifman, J.
1999-07-27
A process management expert system that, following the malfunctioning of a component such as a pump, determines system realignment procedures, such as by-passing the malfunctioning component at on-line speeds, to maintain operation of the process at full or partial capacity or to provide safe shutdown of the system while isolating the malfunctioning component. The expert system uses thermal-hydraulic function classes at the component level for analyzing unanticipated as well as anticipated component malfunctions to provide recommended sequences of operator actions. Each component is classified according to its thermal-hydraulic function and the generic and component-specific characteristics for that function. Using the diagnosis of the malfunctioning component and its thermal-hydraulic class, the expert system analysis is carried out using generic thermal-hydraulic first principles. One aspect of the invention employs a qualitative physics-based forward search directed primarily downstream from the malfunctioning component in combination with a subsequent backward search directed primarily upstream from the serviced component. Generic classes of components are defined in the knowledge base according to the three thermal-hydraulic functions of mass, momentum, and energy transfer and are used to determine possible realignments of component configurations in response to the thermal-hydraulic function imbalance caused by the malfunctioning component. Each realignment to a new configuration produces an accompanying sequence of recommended operator actions. All possible new configurations are examined and a prioritized list of acceptable solutions is produced.
Rahman, Md Motiur; Alatawi, Yasser; Cheng, Ning; Qian, Jingjing; Plotkina, Annya V; Peissig, Peggy L; Berg, Richard L; Page, David; Hansen, Richard A
2017-09-01
Despite the cost-saving role of generic anti-epileptic drugs (AEDs), debate exists as to whether generic substitution of branded AEDs may lead to therapeutic failure and increased toxicity. This study compared adverse event (AE) reporting rates for brand vs. authorized generic (AG) vs. generic AEDs. Since AGs are pharmaceutically identical to brand but perceived as generics, the generic vs. AG comparison minimized potential bias against generics. Events reported to the U.S. Food and Drug Administration Adverse Event Reporting System between January 2004 and March 2015 with lamotrigine, carbamazepine, and oxcarbazepine listed as primary or secondary suspect were classified as brand, generic, or AG based on the manufacturer. Disproportionality analyses using the reporting odds ratio (ROR) assessed the relative rate of reporting of labeled AEs compared to reporting of these events with all other drugs. The Breslow-Day statistic compared RORs across brand, AG, and other generics using a Bonferroni-corrected P<0.01. A total of 27,150 events with lamotrigine, 13,950 events with carbamazepine, and 5077 events with oxcarbazepine were reported, with generics accounting for 27%, 41%, and 32% of reports, respectively. Although RORs for the majority of known AEs were different between brand and generics for all three drugs of interest (Breslow-Day P<0.001), RORs generally were similar for AG and generic comparisons. Generic lamotrigine and carbamazepine were more commonly involved in reports of suicide or suicidal ideation compared with the respective AGs based on a multiple-comparison-adjusted P<0.01. Similar AED reporting rates were observed for the AG and generic comparisons for most outcomes and drugs, suggesting that brands and generics have similar reporting rates after accounting for generic perception biases. Disproportional suicide reporting was observed for generics compared with AGs and brand, although this finding needs further study. Copyright © 2017 Elsevier B.V. All rights reserved.
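The reporting odds ratio used in such disproportionality analyses is computed from a 2x2 table of spontaneous-report counts. A minimal sketch with made-up counts (not the study's data):

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """a = reports of the AE of interest with the drug of interest,
    b = all other reports with the drug,
    c = reports of the AE with all other drugs,
    d = all other reports with all other drugs.
    Returns the ROR and its 95% confidence interval."""
    ror = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return (ror,
            math.exp(math.log(ror) - 1.96 * se),
            math.exp(math.log(ror) + 1.96 * se))

# made-up counts: 120 reports of one AE among 27,150 for a drug,
# against a large background of reports for all other drugs
ror, lo, hi = reporting_odds_ratio(120, 27_030, 5_000, 5_000_000)
print(f"ROR = {ror:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A lower confidence bound above 1 is the usual signal threshold; the Breslow-Day step in the study then asks whether RORs computed this way differ across the brand, AG, and generic strata.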
Memory Circuit Fault Simulator
NASA Technical Reports Server (NTRS)
Sheldon, Douglas J.; McClure, Tucker
2013-01-01
Spacecraft are known to experience significant memory part-related failures and problems, both pre- and post-launch. These memory parts include both static and dynamic memories (SRAM and DRAM). These failures manifest themselves in a variety of ways, such as pattern-sensitive failures, timing-sensitive failures, etc. Because of the mission-critical role memory devices play in spacecraft architecture and operation, understanding their failure modes is vital to successful mission operation. To support this need, a generic simulation tool that can model different data patterns in conjunction with variable write and read conditions was developed. This tool is a mathematical and graphical way to embed pattern, electrical, and physical information to perform what-if analysis as part of a root cause failure analysis effort.
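One classic pattern-sensitive check that such a simulator can model is a checkerboard test. The sketch below is a generic illustration of the idea, not the tool's actual interface; the dict-backed RAM model is a stand-in for real device behavior.

```python
def checkerboard_test(read, write, size):
    """Write a checkerboard pattern (0xAA/0x55 alternating by address),
    read it back, then repeat with the inverted pattern. Returns the
    addresses whose read-back did not match what was written."""
    failures = []
    for phase in (0xAA, 0x55):
        for addr in range(size):
            write(addr, phase if addr % 2 == 0 else phase ^ 0xFF)
        for addr in range(size):
            expected = phase if addr % 2 == 0 else phase ^ 0xFF
            if read(addr) != expected:
                failures.append(addr)
    return failures

# fault-free RAM model: a plain dict
ram = {}
print(checkerboard_test(ram.get, ram.__setitem__, 16))  # prints []
```

Injecting a fault model into `write` (for example, forcing one address to always store 0x00) makes that address appear in the returned list, which is the kind of what-if experiment the abstract describes.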
Is There Evidence to Support Brand to Generic Interchange of the Mycophenolic Acid Products?
Phillips, Karen; Reddy, Prabashni; Gabardi, Steven
2017-02-01
The uptake of generic immunosuppressants lags behind that of other drug classes, even though the Food and Drug Administration (FDA) uses identical bioequivalence standards for all drugs. Transplant societies acknowledge the cost savings associated with generic immunosuppressants and support their use following heart, lung, kidney, or bone marrow transplantation. Seven published studies compare the pharmacokinetics or clinical efficacy of generic mycophenolate mofetil with the innovator product; all studies and products were from outside the United States. Three studies did not demonstrate any pharmacokinetic differences between generic and innovator products in healthy subjects, achieving FDA bioequivalence requirements. Two studies in renal allograft recipients demonstrated no difference in area under the curve between generic and innovator products, and in one, the maximum concentration (Cmax) fell outside the FDA regulatory range. Two studies revealed no difference in acute organ rejection or graft function in renal allograft recipients. Patient surveys indicate that cost is a barrier to immunosuppressant adherence. Generics present a viable method to reduce costs to payers, patients, and health care systems. Adherence to immunosuppressants is crucial to prevent graft failure. An affordable regimen potentially confers greater adherence. Concerns regarding the presumed inferiority of generic immunosuppressants should be assuaged by regulatory requirements for bioequivalence testing, transplant society position statements, and pharmacokinetic and clinical studies.
Computing Lives And Reliabilities Of Turboprop Transmissions
NASA Technical Reports Server (NTRS)
Coy, J. J.; Savage, M.; Radil, K. C.; Lewicki, D. G.
1991-01-01
Computer program PSHFT calculates lifetimes of a variety of aircraft transmissions. Consists of main program, series of subroutines applying to specific configurations, generic subroutines for analysis of properties of components, subroutines for analysis of system, and common block. Main program selects routines used in analysis and causes them to operate in desired sequence. Series of configuration-specific subroutines put in configuration data, perform force and life analyses for components (with help of generic component-property-analysis subroutines), fill property array, call up system-analysis routines, and finally print out results of analysis for system and components. Written in FORTRAN 77(IV).
Triple effect absorption cycles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, D.C.; Potnis, S.V.; Tang, J.
1996-12-31
Triple effect absorption chillers can achieve a 50% COP improvement over double-effect systems. However, to translate this potential into cost-effective hardware, the most promising embodiments must be identified. In this study, 12 generic triple effect cycles and 76 possible hermetic loop arrangements of those 12 generic cycles were identified. The generic triple effect cycles were screened based on their pressure and solubility field requirements, generic COPs, risk involved in the component design, and number of components in a highly corrosive environment. This screening identified four promising arrangements: the Alkitrate Topping cycle, Pressure Staged Envelope cycle, High Pressure Overlap cycle, and Dual Loop cycle. All of these arrangements have a very high COP (approximately 1.8); however, the development risk and cost involved are different for each arrangement. Therefore, the selection of a particular arrangement will depend upon the specific situation under consideration.
Nuclear power plant Generic Aging Lessons Learned (GALL). Main report and appendix A
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaza, K.E.; Diercks, D.R.; Holland, J.W.
The purpose of this generic aging lessons learned (GALL) review is to provide a systematic review of plant aging information in order to assess materials and component aging issues related to continued operation and license renewal of operating reactors. The literature reviewed on mechanical, structural, and thermal-hydraulic components and systems consisted of 97 Nuclear Plant Aging Research (NPAR) reports, 23 NRC Generic Letters, 154 Information Notices, 29 Licensee Event Reports (LERs), 4 Bulletins, and 9 Nuclear Management and Resources Council Industry Reports (NUMARC IRs); the literature reviewed on electrical components and systems consisted of 66 NPAR reports, 8 NRC Generic Letters, 111 Information Notices, 53 LERs, 1 Bulletin, and 1 NUMARC IR. More than 550 documents were reviewed. The results of these reviews were systematized using a standardized GALL tabular format and standardized definitions of aging-related degradation mechanisms and effects. The tables are included in volumes 1 and 2 of this report. A computerized data base has also been developed for all review tables and can be used to expedite the search for desired information on structures, components, and relevant aging effects. A survey of the GALL tables reveals that all ongoing significant component aging issues are currently being addressed by the regulatory process. However, the aging of what are termed passive components has been highlighted for continued scrutiny. This document is Volume 1, consisting of the executive summary, summary and observations, and an appendix listing the GALL literature review tables.
Budget impact analysis of 8 hormonal contraceptive options.
Crespi, Simone; Kerrigan, Matthew; Sood, Vipan
2013-07-01
To develop a model comparing costs of 8 hormonal contraceptives and determine whether acquisition costs for implants and intrauterine devices (IUDs) were offset by decreased pregnancy-related costs over a 3-year time horizon from a managed care perspective. A model was developed to assess the budget impact of branded or generic oral contraceptives (OCs), quarterly intramuscular depot medroxyprogesterone, etonogestrel/ethinyl estradiol vaginal ring, etonogestrel implant, levonorgestrel IUD, norelgestromin/ethinyl estradiol transdermal contraceptive, and ethinyl estradiol/levonorgestrel extended-cycle OC. Major variables included drug costs, typical use failure rates, discontinuation rates, and pregnancy costs. The base case assessed costs for 1000 women initiating each of the hormonal contraceptives. The etonogestrel implant and levonorgestrel IUD resulted in the fewest pregnancies, 63 and 85, respectively, and the least cost, $1.75 million and $2.0 million, respectively. In comparison, generic OC users accounted for a total of 243 pregnancies and $3.4 million in costs. At the end of year 1, costs for the etonogestrel implant ($800,471) and levonorgestrel IUD ($949,721) were already lower than those for generic OCs ($1,146,890). Sensitivity analysis showed that the cost of pregnancies, not product acquisition cost, was the primary cost driver. Higher initial acquisition costs for the etonogestrel implant and levonorgestrel IUD were offset within 1 year by lower contraceptive failure rates and consequent pregnancy costs. Thus, after accounting for typical use failure rates of contraceptive products, the etonogestrel implant and levonorgestrel IUD emerged as the least expensive hormonal contraceptives.
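The core of such a budget-impact model is simple arithmetic: per-cycle drug acquisition cost plus the cost of pregnancies implied by the typical-use failure rate. The sketch below is a deliberately simplified version with invented inputs (not the paper's model, which also tracks discontinuation and method-specific pregnancy costs).

```python
def cohort_cost(n_women, annual_drug_cost, typical_failure_rate,
                pregnancy_cost, years=3):
    """Total cost for a cohort over the time horizon: drug acquisition
    plus the cost of unintended pregnancies from typical-use failures.
    Ignores discontinuation, switching, and discounting."""
    total = 0.0
    for _ in range(years):
        total += n_women * annual_drug_cost
        total += n_women * typical_failure_rate * pregnancy_cost
    return total

# illustrative inputs: a cheap method with a high typical-use failure
# rate vs. a costly one with a very low rate (acquisition amortized)
print(cohort_cost(1000, 200.0, 0.09, 10_000.0))    # OC-like
print(cohort_cost(1000, 800.0, 0.0005, 10_000.0))  # implant-like
```

With these made-up numbers the expensive low-failure method is cheaper over three years, reproducing the paper's qualitative finding that pregnancy costs, not acquisition costs, drive the total.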
Kaplan, Robert M; Tally, Steven; Hays, Ron D; Feeny, David; Ganiats, Theodore G; Palta, Mari; Fryback, Dennis G
2011-05-01
To compare the responsiveness to clinical change of five widely used preference-based health-related quality-of-life indexes in two longitudinal cohorts. Five generic instruments were simultaneously administered to 376 adults undergoing cataract surgery and 160 adults in heart failure management programs. Patients were assessed at baseline and reevaluated after 1 and 6 months. The measures were the Short Form (SF)-6D (based on responses scored from the SF-36v2), the Self-Administered Quality of Well-Being Scale (QWB-SA), the EuroQol-5D developed by the EuroQol Group, and the Health Utilities Index Mark 2 (HUI2) and Mark 3 (HUI3). Cataract patients completed the National Eye Institute Visual Functioning Questionnaire-25, and heart failure patients completed the Minnesota Living with Heart Failure Questionnaire. Responsiveness was estimated by the standardized response mean. For cataract patients, mean changes between baseline and 1-month follow-up for the generic indices ranged from 0.00 (SF-6D) to 0.052 (HUI3) and were statistically significant for all indexes except the SF-6D. For heart failure patients, only the SF-6D showed significant change from baseline to 1 month, whereas only the QWB-SA change was significant between 1 and 6 months. Preference-based methods for measuring health outcomes are not equally responsive to change. Copyright © 2011 Elsevier Inc. All rights reserved.
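The standardized response mean used here is the mean within-person change divided by the standard deviation of that change. A small sketch with invented utility scores (not the study's data):

```python
import statistics

def standardized_response_mean(baseline, follow_up):
    """SRM = mean(change) / sd(change), where change is each
    patient's follow-up score minus baseline score."""
    changes = [f - b for b, f in zip(baseline, follow_up)]
    return statistics.mean(changes) / statistics.stdev(changes)

# invented utility scores for five patients before and after treatment
base = [0.60, 0.55, 0.70, 0.65, 0.50]
post = [0.68, 0.62, 0.71, 0.75, 0.58]
print(standardized_response_mean(base, post))
```

Because the denominator is the variability of the change scores rather than of the raw scores, two instruments can assign similar mean improvements yet have very different SRMs, which is exactly the kind of divergence the study reports across the five indexes.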
NASA Technical Reports Server (NTRS)
Mirdamadi, Massoud; Johnson, W. Steven
1992-01-01
Cross-ply laminate behavior of Ti-15V-3Cr-3Al-3Sn (Ti-15-3) matrix reinforced with continuous silicon carbide fibers (SCS-6) subjected to a generic hypersonic flight profile was evaluated experimentally and analytically. Thermomechanical fatigue test techniques were developed to conduct a simulation of a generic hypersonic flight profile. A micromechanical analysis was used. The analysis predicts the stress-strain response of the laminate and of the constituents in each ply during thermal and mechanical cycling using only constituent properties as input. The fiber was modeled using a thermo-viscoplastic constitutive relation. The fiber transverse modulus was reduced in the analysis to simulate fiber-matrix interface failure. Excellent correlation was found between measured and predicted laminate stress-strain response under the generic hypersonic flight profile when fiber debonding was modeled.
Status of nuclear Class 1 component requalification: Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooper, W.E.
1986-12-01
Qualification relates to assurance of acceptability of the component with respect to structural integrity, operability, and functional capability. Requalification is required if existing qualification is lost because of expiration of the qualified service life (life extension), reactivation of a cancelled or suspended plant, failure to conform with certain requirements of the Technical Specifications, or revision of the applicable regulatory requirements. The alternatives to requalification are replacement or removal from service. The choice between requalification, replacement, and removal from service is governed by economics. The purpose of requalification standards is to ensure the acceptability of the requalification process. A previous EPRI report prepared by Teledyne Engineering Services (TES) (NP-1921) developed a rationale for, and a draft of, a generic requalification standard for Class 1 Pressure Boundary Components presently considered by the Boiler and Pressure Vessel Code published by the American Society of Mechanical Engineers (ASME/BPVC). International Energy Associates Limited (IEA) prepared another report for EPRI shortly thereafter (NP-2418), which reviewed the economic and technological factors of nuclear plant life extension and concluded that NP-1921 makes a strong case that the nuclear industry will benefit from the development of the proposed standard.
Structures and geriatrics from a failure analysis experience viewpoint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopper, D.M.
In a failure analysis consulting engineering practice, one sees a variety of structural failures from which observations may be made concerning geriatric structures. Representative experience with power plants, refineries, offshore structures, and forensic investigations is summarized, and generic observations are made regarding the maintenance of fitness for purpose of structures. Although it is important to optimize the engineering design for a range of operational and environmental variables, it is essential that fabrication and inspection controls exist, along with common-sense-based ongoing monitoring and operations procedures.
Discipline-Specific Compared to Generic Training of Teachers in Higher Education.
Silva-Fletcher, Ayona; May, Stephen A
A recurrent theme arising in the higher education sector is the suitability and effectiveness of generic versus discipline-specific training of university teachers, who are often recruited based on their disciplinary specialties to become teachers in higher education. We compared two groups of participants who had undergone training using a generic post-graduate certificate in higher education (PGCertGeneric) versus a discipline-specific course in veterinary education (PGCertVetEd). The study was conducted using a survey that allowed comparison of participants who completed PGCertGeneric (n=21) with PGCertVetEd (n=22). Results indicated that participants from both PGCertGeneric and PGCertVetEd considered teaching to be satisfying and important to their careers, valued the teaching observation component of the course, and identified similar training needs. However, the participants of the PGCertVetEd felt that the course made them better teachers, valued the relevance of the components taught, understood course design better, were encouraged to do further courses/reading in teaching and learning, changed their teaching as a result of the course, and were less stressed about teaching as compared to the PGCertGeneric participants (p<.05). It is likely that the PGCertVetEd, which was designed and developed by veterinarians with a wider understanding of the veterinary sector, helped the participants perceive the training course as suited to their needs.
Generic particulate-monitoring system for retrofit to Hanford exhaust stacks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Camman, J.W.; Carbaugh, E.H.
1982-11-01
Evaluations of 72 sampling and monitoring systems were performed at Hanford as the initial phase of a program to upgrade such systems. Each evaluation included determination of theoretical sampling efficiencies for particle sizes ranging from 0.5 to 10 micrometers aerodynamic equivalent diameter, addressing anisokinetic bias, sample transport line losses, and collector device efficiency. Upgrades needed to meet current Department of Energy guidance for effluent sampling and monitoring were identified, and a cost for each upgrade was estimated. A relative priority for each system's upgrade was then established based on evaluation results, current operational status, and future plans for the facility being exhausted. Common system upgrade requirements led to the development of a generic design for common components of an exhaust stack sampling and monitoring system for airborne radioactive particulates. The generic design consists of commercially available off-the-shelf components to the extent practical and will simplify future stack sampling and monitoring system design, fabrication, and installation efforts. Evaluation results and their significance to system upgrades are emphasized. A brief discussion of the analytical models used and experience to date with the upgrade program is included. Development of the generic stack sampling and monitoring system design is outlined. Generic system design features and limitations are presented. Requirements for generic system retrofitting to existing exhaust stacks are defined and benefits derived from generic system application are discussed.
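The theoretical sampling efficiency described above is the product of the per-stage efficiencies (aspiration, transport, collection) at each particle size. A minimal sketch with purely illustrative numbers (the actual efficiency values from the Hanford evaluations are not reproduced here):

```python
# Illustrative stage efficiencies per particle size (micrometers AED);
# the values are hypothetical, not taken from the Hanford evaluations.
aspiration = {0.5: 0.99, 1: 0.98, 3: 0.92, 5: 0.85, 10: 0.65}  # anisokinetic bias
transport  = {0.5: 0.97, 1: 0.95, 3: 0.88, 5: 0.78, 10: 0.50}  # line losses
collector_efficiency = 0.95                                     # collection device

def overall_efficiency(size_um):
    """Fraction of particles of a given size that reach and are captured
    by the collector: the product of the per-stage efficiencies."""
    return aspiration[size_um] * transport[size_um] * collector_efficiency

for d in (0.5, 1, 3, 5, 10):
    print(f"{d:>4} um: {overall_efficiency(d):.2f}")
```

The sharp fall-off at large particle sizes is why transport-line losses dominated many of the upgrade findings.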
Zhang, Yuting; Baik, Seo Hyon; Zhou, Lei; Reynolds, Charles F; Lave, Judith R
2012-07-01
Maintenance antidepressant pharmacotherapy in late life prevents recurrent episodes of major depression. The coverage gap in Medicare Part D could reduce appropriate use of antidepressants, thereby exposing older adults to an increased risk for relapse of depressive episodes. To determine whether (1) beneficiaries reduce antidepressant use in the gap, (2) the reduction in antidepressant use is similar to the reduction in heart failure medications and antidiabetics, (3) the provision of generic coverage reduces the risk of reduction of medication use, and (4) medical spending increases in the gap. Observational before-after study with a comparison group design. A 5% random sample of US Medicare beneficiaries 65 years or older with depression (n = 65,223) enrolled in stand-alone Part D plans in 2007. Antidepressant pharmacotherapy, physician, outpatient, and inpatient spending. Being in the gap was associated with comparable reductions in the use of antidepressants, heart failure medications, and antidiabetics. Relative to the comparison group (those who had full coverage in the gap because of Medicare coverage or low-income subsidies), the no-coverage group reduced their monthly antidepressant prescriptions by 12.1% (95% CI, 9.9%-14.3%) from the pregap level, whereas they reduced use of heart failure drugs and antidiabetics by 12.9% and 13.4%, respectively. Those with generic drug coverage in the gap reduced their monthly antidepressant prescriptions by 6.9% (95% CI, 4.8%-9.1%); this decrease was entirely attributable to the reduction in the use of brand-name antidepressants. Medicare spending on medical care did not increase for either group relative to the comparison group. The Medicare Part D coverage gap was associated with modest reductions in the use of antidepressants. Those with generic coverage reduced their use of brand-name drugs and did not switch from brand-name to generic drugs. 
The reduction in antidepressant use was not associated with an increase in nondrug medical spending.
The KATE shell: An implementation of model-based control, monitor and diagnosis
NASA Technical Reports Server (NTRS)
Cornell, Matthew
1987-01-01
The conventional control and monitor software currently used by the Space Center for Space Shuttle processing has many limitations, such as high maintenance costs, limited diagnostic capabilities, and limited simulation support. These limitations have led to the development of a knowledge-based (or model-based) shell to generically control and monitor electro-mechanical systems. The knowledge base describes the system's structure and function and is used by a software shell to perform real-time constraint checking, low-level control of components, diagnosis of detected faults, sensor validation, automatic generation of schematic diagrams, and automatic recovery from failures. This approach is more versatile and more powerful than the conventional hard-coded approach and offers many advantages over it, although knowledge-based control and monitor systems may not be appropriate for systems that require high-speed reaction times or are not well understood.
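The real-time constraint checking such a shell performs can be pictured as comparing live sensor readings against bounds derived from the system model. A minimal sketch, with hypothetical component names, sensors, and bounds (not taken from KATE itself):

```python
# Hypothetical knowledge base: each component's expected sensor ranges.
MODEL = {
    "valve_V1": {"upstream_psi": (40.0, 60.0)},
    "pump_P1":  {"flow_gpm": (10.0, 20.0)},
}

def check_constraints(readings):
    """Return (component, sensor, value) for every reading outside its
    modeled range -- candidates for the diagnosis stage."""
    faults = []
    for comp, sensors in MODEL.items():
        for sensor, (lo, hi) in sensors.items():
            value = readings.get(f"{comp}.{sensor}")
            if value is not None and not (lo <= value <= hi):
                faults.append((comp, sensor, value))
    return faults

faults = check_constraints({"valve_V1.upstream_psi": 75.0, "pump_P1.flow_gpm": 15.0})
```

Because the checks are driven by the model rather than hard-coded, adding a component to the knowledge base extends monitoring without touching the shell, which is the maintainability advantage the abstract describes.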
Space flight requirements for fiber optic components: qualification testing and lessons learned
NASA Astrophysics Data System (ADS)
Ott, Melanie N.; Jin, Xiaodan Linda; Chuska, Richard; Friedberg, Patricia; Malenab, Mary; Matuszeski, Adam
2006-04-01
"Qualification" of fiber optic components holds a very different meaning than it did ten years ago. In the past, qualification meant extensive, prolonged testing and screening that led to a programmatic method of reliability assurance. For space flight programs today, the combination of higher performance commercial technology, shorter development schedules, and tighter mission budgets makes long-term testing and reliability characterization unfeasible. In many cases space flight missions will be using technology within years of its development; fiber laser technology is an example. Although the technology itself is not a new product, the components that comprise a fiber laser system change frequently as process and packaging changes occur. Once a process or the materials for manufacturing a component change, even the data that existed on its predecessor can no longer provide assurance for the newer version. In order to assure reliability during a space flight mission, the component engineer must understand the requirements of the space flight environment as well as the physics of failure of the components themselves. This understanding can be incorporated into an efficient and effective testing plan that "qualifies" a component to specific criteria defined by the program, given the mission requirements and the component limitations. This requires interaction at the earliest stages of design among the system design engineer, mechanical engineer, subsystem engineer, and component hardware engineer. Although this is the desired interaction, what typically occurs is that the subsystem engineer asks the components or development engineers to meet difficult requirements without knowledge of the current industry situation or the lack of qualification data. These requirements are then passed on to the vendor, who can provide little help with such a harsh set of requirements due to the high cost of testing for space flight environments. 
This presentation is designed to guide design, development, and component engineers, as well as vendors of commercial components, in making an efficient and effective qualification test plan, with some basic generic information about many space flight requirements. Issues related to the physics of failure, acceptance criteria, and lessons learned are also discussed to assist with understanding how to approach a space flight mission in an ever-changing commercial photonics industry.
Space Flight Requirements for Fiber Optic Components; Qualification Testing and Lessons Learned
NASA Technical Reports Server (NTRS)
Ott, Melanie N.; Jin, Xiaodan Linda; Chuska, Richard; Friedberg, Patricia; Malenab, Mary; Matuszeski, Adam
2007-01-01
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohamed Abdelrahman; Roger Haggard; Wagdy Mahmoud
The final goal of this project was the development of a system capable of controlling an industrial process effectively through the integration of information obtained through intelligent sensor fusion and intelligent control technologies. The industry of interest in this project was the metal casting industry as represented by cupola iron-melting furnaces; however, the developed technology is generic and hence applicable to several other industries. The system was divided into the following four major interacting components: 1. An object-oriented generic architecture to integrate the developed software and hardware components; 2. Generic algorithms for intelligent signal analysis and sensor and model fusion; 3. A supervisory structure for integration of intelligent sensor fusion data into the controller; 4. Hardware implementation of intelligent signal analysis and fusion algorithms.
NASA Astrophysics Data System (ADS)
Jackson, Andrew
2015-07-01
On launch, one of Swarm's absolute scalar magnetometers (ASMs) failed to function, leaving an asymmetrical arrangement of redundant spares on different spacecraft. A decision was required concerning the deployment of individual satellites into the low-orbit pair or the higher "lonely" orbit. I analyse the probabilities for successful operation of two of the science components of the Swarm mission in terms of a classical probabilistic failure analysis, with a view to concluding a favourable assignment for the satellite with the single working ASM. I concentrate on the following two science aspects: the east-west gradiometer aspect of the lower pair of satellites and the constellation aspect, which requires a working ASM in each of the two orbital planes. I use the so-called "expert solicitation" probabilities for instrument failure solicited from Mission Advisory Group (MAG) members. My conclusion from the analysis is that it is better to have redundancy of ASMs in the lonely satellite orbit. Although the opposite scenario, having redundancy (and thus four ASMs) in the lower orbit, increases the chance of a working gradiometer late in the mission, it does so at the expense of a likely constellation. Although the results are presented based on actual MAG members' probabilities, they are rather generic, except when the probability of individual ASM failure is very small; in that case, any arrangement will ensure a successful mission since essentially no failure is expected at all. Since the very design of the lower pair is to enable common-mode rejection of external signals, it is likely that its work can be successfully achieved during the first 5 years of the mission.
A generic coding approach for the examination of meal patterns.
Woolhead, Clara; Gibney, Michael J; Walsh, Marianne C; Brennan, Lorraine; Gibney, Eileen R
2015-08-01
Meal pattern analysis can be complex because of the large variability in meal consumption. The use of aggregated, generic meal data may address some of these issues. The objective was to develop a meal coding system and use it to explore meal patterns. Dietary data were used from the National Adult Nutrition Survey (2008-2010), which collected 4-d food diary information from 1500 healthy adults. Self-recorded meal types were listed for each food item. Common food group combinations were identified to generate a number of generic meals for each meal type: breakfast, light meals, main meals, snacks, and beverages. Mean nutritional compositions of the generic meals were determined and substituted into the data set to produce a generic meal data set. Statistical comparisons were performed against the original National Adult Nutrition Survey data. Principal component analysis was carried out by using these generic meals to identify meal patterns. A total of 21,948 individual meals were reduced to 63 generic meals. Good agreement was seen for nutritional comparisons (original compared with generic data sets mean ± SD), such as fat (75.7 ± 29.4 and 71.7 ± 12.9 g, respectively, P = 0.243) and protein (83.3 ± 26.9 and 80.1 ± 13.4 g, respectively, P = 0.525). Similarly, Bland-Altman plots demonstrated good agreement (<5% outside limits of agreement) for many nutrients, including protein, saturated fat, and polyunsaturated fat. Twelve meal types were identified from the principal component analysis ranging in meal-type inclusion/exclusion, varying in energy-dense meals, and differing in the constituents of the meals. A novel meal coding system was developed; dietary intake data were recoded by using generic meal consumption data. Analysis revealed that the generic meal coding system may be appropriate when examining nutrient intakes in the population. Furthermore, such a coding system was shown to be suitable for use in determining meal-based dietary patterns. 
© 2015 American Society for Nutrition.
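The recoding step described above can be sketched as grouping identical food-group combinations within each meal type and substituting every occurrence with the group's mean composition. A toy version (the food groups and nutrient values are invented, not survey data):

```python
from collections import defaultdict

# Hypothetical diary entries: (meal_type, food-group combination, fat in g).
diary = [
    ("breakfast", frozenset({"bread", "butter"}), 12.0),
    ("breakfast", frozenset({"bread", "butter"}), 16.0),
    ("breakfast", frozenset({"cereal", "milk"}), 6.0),
    ("main meal", frozenset({"meat", "potato"}), 30.0),
]

# 1) Aggregate identical combinations into generic meals.
groups = defaultdict(list)
for meal_type, combo, fat in diary:
    groups[(meal_type, combo)].append(fat)
generic_meals = {key: sum(v) / len(v) for key, v in groups.items()}

# 2) Recode the diary: substitute each meal's generic mean composition.
recoded = [generic_meals[(m, c)] for m, c, _ in diary]
```

By construction the overall mean intake is preserved under this substitution, which is why the original and generic data sets agree closely on mean nutrient values even though the meal-level variability collapses.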
Efficacy of generic allometric equations for estimating biomass: a test in Japanese natural forests.
Ishihara, Masae I; Utsugi, Hajime; Tanouchi, Hiroyuki; Aiba, Masahiro; Kurokawa, Hiroko; Onoda, Yusuke; Nagano, Masahiro; Umehara, Toru; Ando, Makoto; Miyata, Rie; Hiura, Tsutom
2015-07-01
Accurate estimation of tree and forest biomass is key to evaluating forest ecosystem functions and the global carbon cycle. Allometric equations that estimate tree biomass from a set of predictors, such as stem diameter and tree height, are commonly used. Most allometric equations are site specific, usually developed from a small number of trees harvested in a small area, and are either species specific or ignore interspecific differences in allometry. Due to lack of site-specific allometries, local equations are often applied to sites for which they were not originally developed (foreign sites), sometimes leading to large errors in biomass estimates. In this study, we developed generic allometric equations for aboveground biomass and component (stem, branch, leaf, and root) biomass using large, compiled data sets of 1203 harvested trees belonging to 102 species (60 deciduous angiosperm, 32 evergreen angiosperm, and 10 evergreen gymnosperm species) from 70 boreal, temperate, and subtropical natural forests in Japan. The best generic equations provided better biomass estimates than did local equations that were applied to foreign sites. The best generic equations included explanatory variables that represent interspecific differences in allometry in addition to stem diameter, reducing error by 4-12% compared to the generic equations that did not include the interspecific difference. Different explanatory variables were selected for different components. For aboveground and stem biomass, the best generic equations had species-specific wood specific gravity as an explanatory variable. For branch, leaf, and root biomass, the best equations had functional types (deciduous angiosperm, evergreen angiosperm, and evergreen gymnosperm) instead of functional traits (wood specific gravity or leaf mass per area), suggesting importance of other traits in addition to these traits, such as canopy and root architecture. 
Inclusion of tree height in addition to stem diameter improved the performance of the generic equation only for stem biomass and had no apparent effect on aboveground, branch, leaf, and root biomass at the site level. The development of a generic allometric equation taking account of interspecific differences is an effective approach for accurately estimating aboveground and component biomass in boreal, temperate, and subtropical natural forests.
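A generic equation of the kind described, ln B = a + b ln D + c ln WSG (with wood specific gravity as the interspecific term), can be fit by ordinary least squares. A sketch on synthetic data; the coefficients and variable ranges below are invented for illustration, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
D = rng.uniform(5.0, 60.0, n)        # stem diameter (cm), hypothetical range
wsg = rng.uniform(0.3, 0.8, n)       # species wood specific gravity
a, b, c = -2.0, 2.4, 1.0             # "true" coefficients for the synthetic data
ln_B = a + b * np.log(D) + c * np.log(wsg) + rng.normal(0.0, 0.05, n)

# Fit ln(biomass) = a + b*ln(D) + c*ln(WSG) by least squares.
X = np.column_stack([np.ones(n), np.log(D), np.log(wsg)])
coef, *_ = np.linalg.lstsq(X, ln_B, rcond=None)

def predict_biomass(diameter_cm, wood_sg):
    """Back-transformed biomass estimate from the fitted generic equation."""
    return float(np.exp(coef[0] + coef[1] * np.log(diameter_cm)
                        + coef[2] * np.log(wood_sg)))
```

The log-log form keeps the multiplicative error structure typical of allometric data; adding ln WSG is one way an equation can carry the interspecific differences the study found to reduce error by 4-12%.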
Olsen, Siv Js; Fridlund, Bengt; Eide, Leslie Sp; Hufthammer, Karl O; Kuiper, Karel Kj; Nordrehaug, Jan E; Skaar, Elisabeth; Norekvål, Tone M
2017-01-01
In addition to favourable results regarding mortality and morbidity it is important to identify the impact transcatheter aortic valve implantation (TAVI) has on patients' quality of life. The aims were: (i) to describe clinical characteristics, self-reported health and quality of life in octogenarians before TAVI intervention; (ii) to determine changes in self-reported health and quality of life one month after TAVI; and (iii) to establish the clinical importance of the findings. A prospective cohort study was conducted on consecutively enrolled octogenarians with severe aortic stenosis undergoing TAVI (N = 65). Self-reported health and quality of life were recorded at baseline and one month later using two global questions from the World Health Organization Quality of Life Instrument Abbreviated (WHOQOL-BREF), the generic Short Form Health 12 and the disease-specific Minnesota Living with Heart Failure Questionnaire. One month after TAVI, WHOQOL-BREF showed that self-reported health improved moderately (p < 0.001), while quality of life improved slightly, but not statistically significantly (p = 0.06). There were changes in all Short Form Health 12 domains, except social functioning and role emotional. The estimated changes were 3.6 to 5.8 with large confidence intervals. The Physical Component Summary increased statistically significantly from baseline to 30 days (30.6-34.7; p = 0.02), but the Mental Component Summary did not (46.9-50.0; p = 0.13). Despite being an advanced treatment performed in a high risk population, TAVI in octogenarians improves short-term self-reported global health and generic physical health and quality of life. These patient-reported outcomes have importance, particularly in this age group.
Development of failure criterion for Kevlar-epoxy fabric laminates
NASA Technical Reports Server (NTRS)
Tennyson, R. C.; Elliott, W. G.
1984-01-01
The development of the tensor polynomial failure criterion for composite laminate analysis is discussed. In particular, emphasis is given to the fabrication and testing of Kevlar-49 fabric (Style 285)/Narmco 5208 epoxy. The quadratic failure criterion with F(12)=0 provides accurate estimates of failure stresses for the Kevlar/epoxy investigated. The cubic failure criterion was recast into an operationally easier form, providing the engineer with design curves that can be applied to laminates fabricated from unidirectional prepregs. In the form presented, no interaction strength tests are required, although recourse to the quadratic model and the principal strength parameters is necessary. However, insufficient test data exist at present to generalize this approach for all unidirectional prepregs, and its use must be restricted to the generic materials investigated to date.
Grosso, A M; Bodalia, P N; Macallister, R J; Hingorani, A D; Moon, J C; Scott, M A
2011-03-01
The UK National Health Service (NHS) currently spends in excess of £250 million per annum on angiotensin II receptor blockers (ARBs) for the treatment of hypertension and heart failure; with candesartan currently dominating the market. With the recent introduction of generic losartan, we set out to directly compare the branded market leader to its now cheaper alternative. The primary objectives were to compare the blood pressure (BP) lowering efficacy and cardiovascular outcomes of candesartan and losartan in the treatment of essential hypertension and chronic heart failure, respectively. The secondary objective was to model their comparative incremental cost-effectiveness in a UK NHS setting. The Cochrane Central Register of Controlled Trials (Cochrane Library 2009, issue 2), which contains the Hypertension and Heart Group's specialist register, Medline (1950-February 2010), and Embase (1980-February 2010) were included in the search strategy. Selection criteria were randomised studies of candesartan versus losartan in adults (> 18 years). The main outcome measures were as follows: Hypertension: mean change from baseline in trough (24 h postdose) systolic and diastolic BP. Heart failure: composite of cardiovascular death and hospital admission for management of heart failure. Two reviewers applied inclusion criteria, assessed trial quality, and extracted data. Eight (three of which met inclusion criteria) and zero trials compared candesartan directly with losartan in the treatment of hypertension and heart failure, respectively. A between-treatment difference of -1.96 mmHg [95% confidence interval (CI) -2.40 to -1.51] for trough diastolic BP and -3.00 mmHg (95% CI -3.79 to -2.22) for trough systolic BP in favour of candesartan was observed. Based on this differential, a 10-year Markov model estimates the cost per quality-adjusted life-year gained to exceed £40,000 for using candesartan in place of generic losartan. 
Candesartan reduces BP to a slightly greater extent when compared with losartan, however, such difference is unlikely to be cost-effective based on current acquisition costs, perceived NHS affordability thresholds and use of combination regimens. We could find no robust evidence supporting the superiority of candesartan over losartan in the treatment of heart failure. We therefore recommend using generic losartan as the ARB of choice which could save the UK NHS approximately £200 million per annum in drug costs. © 2011 Blackwell Publishing Ltd.
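The cost-effectiveness logic can be sketched as a small annual-cycle Markov model in which the BP advantage lowers an event rate while the branded drug costs more per year. Every number below is invented for illustration; the review's actual model structure and inputs are not reproduced:

```python
def run_arm(p_event, annual_drug_cost, years=10):
    """Trace a cohort through well / post-event / dead states, accumulating
    discounted costs and QALYs. All parameters are hypothetical."""
    well, post, dead = 1.0, 0.0, 0.0
    cost = qalys = 0.0
    for year in range(years):
        disc = 1.035 ** -year                    # 3.5% annual discounting
        cost += disc * (well * annual_drug_cost + post * 2000.0)
        qalys += disc * (well * 0.80 + post * 0.60)
        new_post = well * p_event + post * 0.90  # 10%/yr mortality post-event
        dead += post * 0.10 + well * 0.01        # background mortality 1%/yr
        well *= 1.0 - p_event - 0.01
        post = new_post
    return cost, qalys

c_cand, q_cand = run_arm(p_event=0.030, annual_drug_cost=250.0)  # branded candesartan
c_los, q_los = run_arm(p_event=0.033, annual_drug_cost=30.0)     # generic losartan
icer = (c_cand - c_los) / (q_cand - q_los)  # incremental cost per QALY gained
```

Because the QALY gain from a 2-3 mmHg BP difference is small while the acquisition-cost difference recurs every year, the incremental cost per QALY comes out large, the same qualitative conclusion the review reaches.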
NASA Astrophysics Data System (ADS)
Ramanujam, G.; Bert, C. W.
1983-06-01
The objective of this paper is to provide a theoretical foundation to predict many aspects of dynamic behavior of flywheel systems when spin-tested with a quill shaft support and driven by an air turbine. Theoretical analyses for the following are presented: (1) determination of natural frequencies (or for brevity critical speeds of various orders), (2) Routh-type stability analysis to determine the stability limits (i.e., the speed range within which small perturbations attenuate rather than cause catastrophic failure), and (3) forced whirling analysis to estimate the response of major components of the system to flywheel mass eccentricity and initial tilt. For the first and third kinds of analyses, two different mathematical models of the generic system are investigated. One is a seven-degree-of-freedom lumped parameter analysis, while the other is a combined distributed and lumped parameter analysis.
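Natural frequencies of a lumped-parameter model come from the generalized eigenproblem K φ = ω² M φ. A minimal sketch for an invented three-degree-of-freedom chain, not the paper's seven-degree-of-freedom model; the masses and stiffnesses are illustrative:

```python
import numpy as np

# Hypothetical lumped inertias (kg) and shaft stiffnesses (N/m).
M = np.diag([10.0, 50.0, 5.0])
k1, k2 = 1.0e5, 2.0e5
K = np.array([[ k1,     -k1,      0.0],
              [-k1,      k1 + k2, -k2],
              [ 0.0,    -k2,       k2]])

# Symmetrize via A = M^{-1/2} K M^{-1/2}; eigvalsh returns omega^2 ascending.
m_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(M)))
omega_sq = np.linalg.eigvalsh(m_inv_sqrt @ K @ m_inv_sqrt)
freqs_hz = np.sqrt(np.clip(omega_sq, 0.0, None)) / (2.0 * np.pi)
```

The lowest eigenvalue is numerically zero: the free-free chain has a rigid-body mode, and the remaining eigenvalues give the elastic natural frequencies (the "critical speeds of various orders" in the paper's terminology).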
The Clinical and Economic Impact of Generic Locking Plate Utilization at a Level II Trauma Center.
McPhillamy, Austin; Gurnea, Taylor P; Moody, Alastair E; Kurnik, Christopher G; Lu, Minggen
2016-12-01
In today's climate of cost containment and fiscal responsibility, generic implant alternatives represent an interesting area of untapped resources. As patents have expired on many commonly used trauma implants, generic alternatives have recently become available from a variety of sources. The purpose of this study was to examine the clinical and economic impact of a cost containment program using high quality, generic orthopaedic locking plates. The implants available for study were anatomically precontoured plates for the clavicle, proximal humerus, distal radius, proximal tibia, distal tibia, and distal fibula. Retrospective review. Level II Trauma center. 828 adult patients with operatively managed clavicle, proximal humerus, distal radius, proximal tibia, tibial pilon, and ankle fractures. Operative treatment with conventional or generic implants. The 414 patients treated with generic implants were compared with 414 patients treated with conventional implants. There were no significant differences in age, sex, presence of diabetes, smoking history or fracture type between the generic and conventional groups. No difference in operative time, estimated blood loss or intraoperative complication rate was observed. No increase in postoperative infection rate, hardware failure, hardware loosening, malunion, nonunion or need for hardware removal was noted. Overall, our hospital realized a 56% reduction in implant costs, an average savings of $1197 per case, and a total savings of $458,080 for the study period. Use of generic orthopaedic implants has been successful at our institution, providing equivalent clinical outcomes while significantly reducing implant expenditures. Based on our data, the use of generic implants has the potential to markedly reduce operative costs as long as quality products are used. Therapeutic Level III.
Generic Drugs - Decreasing Costs and Room for Increased Number of Kidney Transplantations.
Spasovski, Goce
2015-01-01
Kidney transplantation is the best treatment option in comparison to dialysis, although patients are obliged to receive life-long medical treatment with immunosuppressive drugs (ISDs) for prevention of graft rejection. Such immunosuppressive treatment may be costly and associated with multiple adverse effects. Since costs are viewed as one of the major constraints on the increasing number of transplantations, the use of generic ISDs may decrease the overall cost of transplantation and raise the possibility of its further development. An ideal ISD should have a wide security margin between the toxic and therapeutic doses and prevent development of acute or chronic rejection of the transplanted kidney. This is particularly important for drugs with a "narrow therapeutic index" (NTI), where small differences in dose or concentration lead to dose- and concentration-dependent serious therapeutic failures and/or adverse drug reactions. An NTI generic drug is approved if its area under the curve falls within 90%-112% of that of the original product and its pharmacokinetics fulfills the strict criteria of pharmaceutical equivalence and bioequivalence. Every generic has to be proven bioequivalent to the innovator product, and not to other generic products, because of possible generic "drift". Thus, generic ISDs may be economically attractive, but theoretically they may pose a risk to transplant patients. Such risks may be reduced if long-term clinical studies showing cost-effectiveness of generic ISDs in de novo and prevalent transplant patients are performed for every new generic ISD. In conclusion, the increased number of solid organ transplantations goes in line with increased health care expenditure for ISDs. Generic immunosuppressants could be a possible solution if safely substituted for innovator products or another generic drug of choice. 
The substantial cost reduction needs to be redirected into organ donation initiatives so that more patients can benefit from the further increase in transplantation.
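The 90%-112% acceptance window for narrow-therapeutic-index generics mentioned above translates into a simple check on the AUC ratio. A sketch only: real bioequivalence assessment works with confidence intervals on log-transformed pharmacokinetic parameters, not point ratios, and the function name here is illustrative:

```python
def nti_acceptable(auc_generic, auc_innovator, lo=0.90, hi=1.12):
    """True if the generic's area-under-the-curve falls within the
    narrow-therapeutic-index window relative to the innovator product."""
    ratio = auc_generic / auc_innovator
    return lo <= ratio <= hi
```

Note the "generic drift" point in the abstract: each generic is compared against the innovator, not against other generics, so two generics that each pass this check may differ from one another by more than the window allows.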
Condition monitoring of Electric Components
NASA Astrophysics Data System (ADS)
Zaman, Ishtiaque
A universal, non-intrusive model of a flexible antenna array is presented in this paper to monitor and identify failures in electric machines. This adjustable antenna is designed for condition monitoring of a wide range of electrical components, including induction motors (IM), printed circuit boards (PCB), synchronous reluctance motors (SRM), and permanent magnet synchronous machines (PMSM), by capturing the low-frequency magnetic field radiated around these machines. The basic design and specification of the proposed antenna array for low-frequency components is presented first; the design is adjustable to fit a wide variety of components. After establishing the design and specifications, the optimal location of the most sensitive stray field is identified for healthy current flowing in the machinery. A short circuit representing a faulty condition is then introduced and compared with the healthy cases. Faults are recognized accurately using this single generic antenna model, and results are presented for three different machines: IM, SRM, and PMSM. The finite element method is used to design the antenna and to determine the optimum location and the faults in the machines. Finally, it is proposed that a 3D printer be employed to build the antenna to the specifications addressed in this paper, depending on the power components.
[Two cases of pulmonary aspergillosis, which deteriorated with generic itraconazole].
Saito, Wakana; Shishikura, Yutaka; Nishimaki, Katsushi; Kikuchi, Tadashi; Sasamori, Kan; Kikuchi, Yoshihiro; Miki, Hiroshi
2014-07-01
We experienced two cases of pulmonary aspergillosis which deteriorated during treatment with generic itraconazole (ITCZ) because of low plasma concentrations. One case was chronic pulmonary aspergillosis and the other was allergic bronchopulmonary aspergillosis (ABPA). Treatment of both cases was started with brand-name ITCZ and changed to a generic ITCZ. Deterioration of pulmonary aspergillosis occurred 8 months and 9 months, respectively, after the change to generic ITCZ. In the first case, the plasma concentration of ITCZ was 46.9 ng/mL and that of OH-ITCZ 96.5 ng/mL with generic ITCZ at a dose of 300 mg/day, but they increased to 1,559.7 ng/mL and 2,485.0 ng/mL, respectively, with brand-name ITCZ at 300 mg/day. In the second case, the plasma concentration of ITCZ was 27.2 ng/mL and that of OH-ITCZ 20.1 ng/mL with generic ITCZ at 150 mg/day, but they reached 857.3 ng/mL and 1,144.2 ng/mL, respectively, with brand-name ITCZ at 300 mg/day. After treatment failure, the first case was changed to voriconazole and then to brand-name ITCZ 300 mg/day, and the second case to brand-name ITCZ 300 mg/day, with successful clinical courses. Plasma concentrations of ITCZ can differ significantly depending on the patient or the type of ITCZ. The ITCZ plasma concentration should be monitored after changing from a brand-name ITCZ to a generic ITCZ.
Katikireddi, Srinivasa Vittal; Reilly, Jacqueline
2017-09-01
A dissertation is often a core component of the Masters in Public Health (MPH) qualification. This study aims to explore its purpose, from the perspective of both students and supervisors, and identify practices viewed as constituting good supervision. A multi-perspective qualitative study drawing on in-depth one-to-one interviews with MPH supervisors (n = 8) and students (n = 10), with data thematically analysed. The MPH dissertation was viewed as providing generic as well as discipline-specific knowledge and skills. It provided an opportunity for in-depth study on a chosen topic but different perspectives were evident as to whether the project should be grounded in public health practice rather than academia. Good supervision practice was thought to require topic knowledge, generic supervision skills (including clear communication of expectations and timely feedback) and adaptation of supervision to meet student needs. Two ideal types of the MPH dissertation process were identified. Supervisor-led projects focus on achieving a clearly defined output based on a supervisor-identified research question and aspire to harmonize research and teaching practice, but often have a narrower focus. Student-led projects may facilitate greater learning opportunities and better develop skills for public health practice but could be at greater risk of course failure. © The Author 2016. Published by Oxford University Press on behalf of Faculty of Public Health.
1991-01-01
games. A leader with limited rationality will make decisions that bear a reasonable relationship to his objectives and values, but they may be... possible reasoning of opponents before or during crisis and conflict. The methodology is intended for use in analysis and defense planning, especially... overconfidence in prediction, failure to hedge, and failure to actively find ways to determine and affect the opponent's reasoning before it is too late
AADL Fault Modeling and Analysis Within an ARP4761 Safety Assessment
2014-10-01
Analysis Generator; 3.2.3 Mapping to OpenFTA Format File; 3.2.4 Mapping to Generic XML Format; 3.2.5 AADL and FTA Mapping Rules; 3.2.6 Issues... PSSA), System Safety Assessment (SSA), Common Cause Analysis (CCA), Fault Tree Analysis (FTA), Failure Modes and Effects Analysis (FMEA), Failure Modes and Effects Summary, Markov Analysis (MA), and Dependence Diagrams (DDs), also referred to as Reliability Block Diagrams (RBDs). The
ATS-6 engineering performance report. Volume 2: Orbit and attitude controls
NASA Technical Reports Server (NTRS)
Wales, R. O. (Editor)
1981-01-01
Attitude control is reviewed, encompassing the attitude control subsystem, the spacecraft attitude precision pointing and slewing adaptive control experiment, and the RF interferometer experiment. The spacecraft propulsion system (SPS) is discussed, including the subsystem, SPS design description and validation, orbital operations and performance, in-orbit anomalies and contingency operations, and the cesium bombardment ion engine experiment. Thruster failure due to plugging of the propellant feed passages, a major cause of mission termination, is considered among the critical generic failures on the satellite.
An overview of the crash dynamics failure behavior of metal and composite aircraft structures
NASA Technical Reports Server (NTRS)
Carden, Huey D.; Boitnott, Richard L.; Fasanella, Edwin L.; Jones, Lisa E.
1991-01-01
An overview of failure behavior results is presented from some of the crash dynamics research conducted with concepts of aircraft elements and substructure not necessarily designed or optimized for energy absorption or crash loading considerations. Experimental and analytical data are presented that indicate some general trends in the failure behavior of a class of composite structures that includes fuselage panels, individual fuselage sections, fuselage frames, skeleton subfloors with stringers and floor beams without skin covering, and subfloors with skin added to the frame stringer structure. Although the behavior is complex, a strong similarity in the static/dynamic failure behavior among these structures is illustrated through photographs of the experimental results and through analytical data of generic composite structural models.
Failure Analysis of Space Shuttle Orbiter Valve Poppet
NASA Technical Reports Server (NTRS)
Russell, Rick
2010-01-01
The poppet failed during STS-126 due to fatigue cracking that most likely was initiated during MDC ground-testing. This failure ultimately led to the discovery that the cracking problem was a generic issue affecting numerous poppets throughout the Shuttle program's history. This presentation focuses on the laboratory analysis of the failed hardware, but this analysis was only one aspect of a comprehensive failure investigation. One critical aspect of the overall investigation was modeling of the fluid flow through the valve to determine the possible sources of cyclic loading. This work has led to the conclusion that the poppets are failing due to flow-induced vibration.
Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis
NASA Astrophysics Data System (ADS)
Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang
2017-07-01
To correct the deviation exhibited by component reliability models, and the underestimation that results when failure propagation is overlooked in traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure-influence degree assessment is proposed. A directed graph model of cascading failure among components is established from cascading failure mechanism analysis and graph theory. The failure-influence degrees of the system components are assessed using the adjacency matrix and its transpose, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, showing that: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; and 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure-influence degree, which provides a theoretical basis for reliability allocation of the machine center system.
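The failure-influence ranking described in this abstract can be illustrated with a short sketch. The four-component adjacency matrix below is hypothetical, not data from the paper; running PageRank on the matrix and on its transpose gives the two complementary influence views the abstract mentions.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-9):
    """Power-iteration PageRank on a directed adjacency matrix.

    adj[i][j] = 1 means a failure of component i can propagate to component j.
    """
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Row-normalize; dangling nodes (no outgoing edges) spread rank uniformly.
    M = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg, 1)[:, None],
                 1.0 / n).T
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - damping) / n + damping * M @ r
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Hypothetical 4-component cascade graph (illustration only).
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)

influence = pagerank(A)       # which components accumulate cascading failures
source_score = pagerank(A.T)  # which components tend to trigger cascades
print(influence.round(3), source_score.round(3))
```

Here the sink component (index 3) accumulates the highest rank, matching the intuition that components at the end of a cascade chain are most affected by propagated failures.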
NASA Technical Reports Server (NTRS)
Johnson, Charles S.
1986-01-01
It is nearly axiomatic that, to take the greatest advantage of the useful features available in a development system, and to avoid the negative interactions of those features, requires the exercise of a design methodology which constrains their use. A major design support feature of the Ada language is abstraction: for data, functions, processes, resources, and system elements in general. Atomic abstract types can be created in packages defining those private types and all of the overloaded operators, functions, and hidden data required for their use in an application. Generically structured abstract types can be created in generic packages defining those structured private types, as buildups from the user-defined data types which are input as parameters. A study is made of the design constraints required for software incorporating either atomic or generically structured abstract types, if the integration of software components based on them is to be subsequently performed. The impact of these techniques on the reusability of software and the creation of project-specific software support environments is also discussed.
Effect of DGPS failures on dynamic positioning of mobile drilling units in the North Sea.
Chen, Haibo; Moan, Torgeir; Verhoeven, Harry
2009-11-01
Basic features of the differential global positioning system (DGPS), and its operational configuration on dynamically positioned (DP) mobile offshore drilling units in the North Sea, are described. Generic failure modes of DGPS are discussed, and a critical DGPS failure with the potential to cause drive-off of mobile drilling units is identified: simultaneous erroneous position data from two DGPSs. The barrier method is used to analyze this critical DGPS failure, and barrier elements to prevent it are identified. Deficiencies of each barrier element are revealed based on incidents and operational experience in the North Sea. Recommendations to strengthen these barrier elements, i.e., to prevent erroneous position data from DGPS, are proposed. These recommendations contribute to the safety of DP operations of mobile offshore drilling units.
Cheng, Ning; Banerjee, Tannista; Qian, Jingjing; Hansen, Richard A
Prior research suggests that authorized generic drugs increase competition and decrease prices, but little empirical evidence supports this conclusion. This study evaluated the impact of authorized generic marketing on brand and generic prices. Longitudinal analysis of the household component of the Medical Expenditure Panel Survey. Interview panels over 12 years, with a new panel each year. For each panel, 5 rounds of household interviews were conducted over 30 months. Nationally representative sample of the U.S. civilian noninstitutionalized population, focusing on people using 1 of 5 antidepressant drugs that became generically available between 2000 and 2011. Drugs and dose/formulations with versus without an authorized generic drug marketed. Multiple linear regression models with lagged variables evaluated the effect of an authorized generic on average inflation-adjusted brand and generic price, adjusting for payment sources, generic entry time, competitor price, and year. During 2000-2011, annual brand antidepressant utilization decreased from 51.47 to 7.52 million prescriptions, and generic antidepressant utilization increased from 0 to 88.83 million prescriptions. Over time, payment per prescription for brand prescriptions increased 25% overall, and generic payments decreased 70% for all payer types. With unadjusted data, after generic entry the average brand price decreased $0.59 per year with and $3.62 per year without an authorized generic in the market. Average generic prices decreased $10.30 per year with and $8.47 per year without an authorized generic in the market. In multiple regression models with lagged variables adjusted for heteroscedasticity, payer source, time since generic entry, competitor price, and year, authorized generics significantly reduced average payment for generic (-$3.03) and brand (-$60.64) prescriptions, and over time this price change slowly diminished.
Availability of an authorized generic was associated with reduced average generic and brand prices in the antidepressant market, supporting prior evidence. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
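The kind of lagged-variable regression this abstract describes can be sketched on simulated data. Everything below is invented for illustration (two hypothetical drugs, an assumed authorized-generic price effect); it is not the MEPS data or the study's model, only the structure: regress current price on the one-year-lagged price and an authorized-generic indicator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy panel: 8 years of average generic price for two hypothetical drugs,
# one facing an authorized generic (AG) and one not (illustration only).
years = np.arange(2004, 2012)
ag = np.array([1, 0])                 # AG present? one flag per drug
price = np.empty((2, len(years)))
price[:, 0] = 40.0
for t in range(1, len(years)):
    # Prices drift down each year; AG competition pushes them down faster.
    price[:, t] = 0.9 * price[:, t - 1] - 3.0 * ag + rng.normal(0, 0.5, 2)

# Regress current price on lagged price and the AG indicator:
#   p_t = b0 + b1 * p_{t-1} + b2 * AG + e
y = price[:, 1:].ravel()
X = np.column_stack([
    np.ones(y.size),
    price[:, :-1].ravel(),            # one-year lagged price
    np.repeat(ag, len(years) - 1),    # AG indicator
])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"lag coefficient {b1:.2f}, AG effect {b2:.2f}")
```

With the simulated AG effect of -3.0, the recovered coefficient b2 comes out negative, mirroring the direction (though not the magnitude) of the study's reported price reductions.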
Babar, Zaheer Ud Din; Ibrahim, Mohamed Izham Mohamed; Singh, Harpal; Bukahri, Nadeem Irfan; Creese, Andrew
2007-01-01
Background Malaysia's stable health care system is facing challenges with increasing medicine costs. To investigate these issues a survey was carried out to evaluate medicine prices, availability, affordability, and the structure of price components. Methods and Findings The methodology developed by the World Health Organization (WHO) and Health Action International (HAI) was used. Price and availability data for 48 medicines was collected from 20 public sector facilities, 32 private sector retail pharmacies and 20 dispensing doctors in four geographical regions of West Malaysia. Medicine prices were compared with international reference prices (IRPs) to obtain a median price ratio. The daily wage of the lowest paid unskilled government worker was used to gauge the affordability of medicines. Price component data were collected throughout the supply chain, and markups, taxes, and other distribution costs were identified. In private pharmacies, innovator brand (IB) prices were 16 times higher than the IRPs, while generics were 6.6 times higher. In dispensing doctor clinics, the figures were 15 times higher for innovator brands and 7.5 for generics. Dispensing doctors applied high markups of 50%–76% for IBs, and up to 316% for generics. Retail pharmacy markups were also high—25%–38% and 100%–140% for IBs and generics, respectively. In the public sector, where medicines are free, availability was low even for medicines on the National Essential Drugs List. For a month's treatment for peptic ulcer disease and hypertension people have to pay about a week's wages in the private sector. Conclusions The free market by definition does not control medicine prices, necessitating price monitoring and control mechanisms. Markups for generic products are greater than for IBs. Reducing the base price without controlling markups may increase profits for retailers and dispensing doctors without reducing the price paid by end users. 
To increase access and affordability, promotion of generic medicines and improved availability of medicines in the public sector are required. PMID:17388660
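The WHO/HAI metrics this survey uses reduce to simple arithmetic: a median price ratio (median local unit price over the international reference price) and an affordability measure in days of the lowest government wage. The sketch below uses invented unit prices, wage, and course length, not the survey's data.

```python
import numpy as np

# Hypothetical unit prices in USD (illustration only; not the survey's data).
irp = 0.05                                            # international reference unit price
facility_prices = np.array([0.80, 0.75, 0.85, 0.78])  # private-sector pharmacies

# WHO/HAI median price ratio: median local unit price / IRP.
mpr = np.median(facility_prices) / irp
print(f"MPR = {mpr:.1f}")

# Affordability: days of the lowest unskilled government wage needed
# to buy one month's treatment (assumed 2 units/day for 30 days).
daily_wage = 4.6
units_per_course = 60
days_wages = units_per_course * np.median(facility_prices) / daily_wage
print(f"{days_wages:.1f} days' wages per course")
```

An MPR of 16, as reported for innovator brands in private pharmacies, means patients pay sixteen times the international reference price per unit.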
[Special considerations in generic substitution of immunosuppressive drugs in transplantation].
Remport, Adám; Dankó, Dávid; Gerlei, Zsuzsa; Czebe, Krisztina; Kiss, István
2012-08-26
Long-term success in solid organ transplantation strongly depends on the optimal use of maintenance immunosuppressive treatment. Cyclosporin and tacrolimus are the most frequently administered immunosuppressants, and both are narrow therapeutic index drugs. The substitution of the branded formulation by a generic counterpart may lead to economic benefit only if equivalent clinical outcomes can be achieved. There is no published evidence to date guaranteeing their long-term therapeutic equivalence, and cases of therapeutic failure due to inadvertent drug conversion have been reported. The disadvantageous clinical consequences of a non-medical, mechanistically forced switch from the original to a generic formulation of tacrolimus, and the estimated loss of the payer's presumed savings, are presented for a kidney transplant recipient population. Special problems related to pediatric patients, drug interactions with concurrent medications, and the burden of additional therapeutic drug monitoring and follow-up visits are also discussed. The authors are convinced that implementation of the European Society of Organ Transplantation guidelines on generic substitution may provide a safe path for patients and healthcare payers.
The impact of generic language about ability on children's achievement motivation.
Cimpian, Andrei
2010-09-01
Nuances in how adults talk about ability may have important consequences for children's sustained involvement and success in an activity. In this study, I tested the hypothesis that children would be less motivated while performing a novel activity if they were told that boys or girls in general are good at this activity (generic language) than if they were told that a particular boy or girl is good at it (non-generic language). Generic language may be detrimental because it expresses normative societal expectations regarding performance. If these expectations are negative, they may cause children to worry about confirming them; if positive, they may cause worries about failing to meet them. Moreover, generic statements may be threatening because they imply that performance is the result of stable traits rather than effort. Ninety-seven 4- to 7-year-olds were asked to play a game in which they succeeded at first but then made a few mistakes. Since young children remain optimistic in achievement situations until the possibility of failure is made clear, I hypothesized that 4- and 5-year-olds would not be affected by the implications of generic language until after they made mistakes; 6- and 7-year-olds, however, may be susceptible earlier. As expected, the older children who heard that boys or girls are good at this game displayed lower motivation (e.g., more negative emotions, lower perceived competence) from the start, while they were still succeeding and receiving praise. Four- and 5-year-olds who heard these generic statements had a similar reaction, but only after they made mistakes. These findings demonstrate that exposure to generic language about ability can be an obstacle to children's motivation and, potentially, their success.
Generic immunosuppression in solid organ transplantation: a Canadian perspective.
Harrison, Jennifer J; Schiff, Jeffrey R; Coursol, Christian J; Daley, Christopher J A; Dipchand, Anne I; Heywood, Norine M; Keough-Ryan, Tammy M; Keown, Paul A; Levy, Gary A; Lien, Dale C; Wichart, Jenny R; Cantarovich, Marcelo
2012-04-15
The introduction of generic immunosuppressant medications may present an opportunity for cost savings in solid organ transplantation if equivalent clinical outcomes to the branded counterparts can be achieved. An interprofessional working group of the Canadian Society of Transplantation was established to develop recommendations on the use of generic immunosuppression in solid organ transplant recipients (SOTR) based on a review of the available data. Under current Health Canada licensing requirements, a demonstration of bioequivalence with the branded formulation in healthy volunteers allows for bridging of clinical data. Cyclosporine, tacrolimus, and sirolimus are designated as "critical dose drugs" and are held to stricter criteria. However, whether this provides sufficient guarantee of therapeutic equivalence in SOTR remains controversial, and failure to maintain an appropriate balance of immunosuppression may have serious consequences, including rejection, graft loss, and death. Published evidence supporting therapeutic equivalence of generic formulations in SOTR is lacking. Moreover, in the setting of multiple generic formulations the potential for uncontrolled product switching is a major concern, since generic preparations are not required to demonstrate bioequivalence with each other. Although close monitoring is recommended with any change in formulation, drug product switches are likely to occur without prescriber knowledge and may pose a significant patient safety risk. The advent of generic immunosuppression will require new practices including more frequent therapeutic drug and clinical monitoring, and increased patient education. The additional workload placed on transplant centers without additional funding will create challenges and could ultimately jeopardize patient outcomes. 
Until more robust clinical data are available and adequate regulatory safeguards are instituted, caution in the use of generic immunosuppressive drugs in solid organ transplantation is warranted.
Customer focus in breast cancer screening services.
Buttimer, Andreas
2009-01-01
The purpose of the paper is to demonstrate how a generic value chain and customer focused system as demonstrated by the Scottish and Irish breast screening programmes can be used to provide a high quality health service. Literature relevant to aligning the entire operating model--the companies' culture, business processes, management systems to serve one value discipline, i.e. customer intimacy, is reviewed and considered in the context of the NHS Scottish Breast Screening Programme in Edinburgh and BreastCheck--the National Breast Screening Programme in Ireland. This paper demonstrates how an emphasis on customer focus and operational excellence, as used in other service industries, can help to provide a better health service. It uses the Scottish and Irish breast screening programmes as illustrative examples. The paper applies the key requirements in the delivery of a quality service including an understanding of the characteristics of a service industry, the management of discontinuities involved in its delivery and the environment in which it operates. System failure is commonly the cause of quality failure in the health system. Breast screening programmes are designed to prevent such a failure. This paper promotes and describes the use of the generic value chain by using the knowledge gained in delivering a mammography-screening programme.
Compressed natural gas bus safety: a quantitative risk assessment.
Chamberlain, Samuel; Modarres, Mohammad
2005-04-01
This study assesses the fire safety risks associated with compressed natural gas (CNG) vehicle systems, comprising primarily a typical school bus and supporting fuel infrastructure. The study determines the sensitivity of the results to variations in component failure rates and consequences of fire events. The components and subsystems that contribute most to fire safety risk are determined. Finally, the results are compared to fire risks of the present generation of diesel-fueled school buses. Direct computation of the safety risks associated with diesel-powered vehicles is possible because these are mature technologies for which historical performance data are available. Because of limited experience, fatal accident data for CNG bus fleets are minimal. Therefore, this study uses the probabilistic risk assessment (PRA) approach to model and predict fire safety risk of CNG buses. Generic failure data, engineering judgments, and assumptions are used in this study. This study predicts the mean fire fatality risk for typical CNG buses as approximately 0.23 fatalities per 100-million miles for all people involved, including bus passengers. The study estimates mean values of 0.16 fatalities per 100-million miles for bus passengers only. Based on historical data, diesel school bus mean fire fatality risk is 0.091 and 0.0007 per 100-million miles for all people and bus passengers, respectively. One can therefore conclude that CNG buses are more prone to fire fatality risk by 2.5 times that of diesel buses, with the bus passengers being more at risk by over two orders of magnitude. The study estimates a mean fire risk frequency of 2.2 x 10^-5 fatalities/bus per year. The 5% and 95% uncertainty bounds are 9.1 x 10^-6 and 4.0 x 10^-5, respectively. The risk result was found to be affected most by failure rates of pressure relief valves, CNG cylinders, and fuel piping.
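The study's headline comparison follows directly from the reported rates; the few lines below reproduce that arithmetic from the numbers given in the abstract.

```python
# Fire fatality rates per 100-million miles, as reported in the study.
cng_all, diesel_all = 0.23, 0.091    # all people involved
cng_pax, diesel_pax = 0.16, 0.0007   # bus passengers only

ratio_all = cng_all / diesel_all     # ~2.5x, matching the stated conclusion
ratio_pax = cng_pax / diesel_pax     # >100x, i.e. over two orders of magnitude
print(f"all occupants: CNG/diesel = {ratio_all:.1f}x")
print(f"passengers only: CNG/diesel = {ratio_pax:.0f}x")
```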
ERIC Educational Resources Information Center
Tynjälä, Päivi; Virtanen, Anne; Klemola, Ulla; Kostiainen, Emma; Rasku-Puttonen, Helena
2016-01-01
The purpose of the study was to examine how social competence and other generic skills can be developed in teacher education using a pedagogical model called Integrative Pedagogy. This model is based on the idea of integrating the four basic components of expertise: Theoretical knowledge, practical knowledge, self-regulative knowledge, and…
Can the Integration of a PLE in an E-Portfolio Platform Improve Generic Competences?
ERIC Educational Resources Information Center
Galván-Fernández, Cristina; Rubio-Hurtado, María José; Martínez-Olmo, Francesc; Rodríguez-Illera, José Luis
2017-01-01
The study analyzes the improvement in generic competences through e-portfolio/PLE platform and didactic planning. The new version of the platform, Digital Folder, contains utilities for students and teachers and some PLE components that help the learning process through e-portfolios. Didactic planning is compared for students from the University…
Designing of a Digital Behind-the-Ear Hearing Aid to Meet the World Health Organization Requirements
Bento, Ricardo Ferreira; Penteado, Silvio Pires
2010-01-01
Hearing loss is a common health issue that affects nearly 10% of the world population, as indicated by many international studies. The hearing impaired typically experience more frustration, anxiety, irritability, depression, and disorientation than those with normal hearing levels. The standard rehabilitation tool for hearing impairment is an electronic hearing aid whose main components are transducers (microphone and receiver) and a digital signal processor. These electronic components are manufactured by a supply chain rather than by the hearing aid manufacturers themselves, and are available either as application-specific products designed for a particular manufacturer or as generic off-the-shelf products. The choice of custom or generic components affects the specifications, pricing, manufacturing, life cycle, and marketing strategies of the product. The World Health Organization is interested in making available to developing countries hearing aids that are inexpensive to purchase and maintain. The hearing aid presented in this article was developed with these specifications in mind, together with additional contemporary features such as four channels with wide dynamic range compression, an adjustable compression rate for each channel, four comfort programs, an adaptive feedback manager, and full volume control. This digital hearing aid is fitted using a personal computer with minimal hardware requirements in intuitive three-step fitting software. A trimmer-adjusted version can be developed where human and material resources are scarce. PMID:20724354
2011-01-01
Background Ontologies are increasingly used to structure and semantically describe entities of domains, such as genes and proteins in life sciences. Their increasing size and the high frequency of updates resulting in a large set of ontology versions necessitates efficient management and analysis of this data. Results We present GOMMA, a generic infrastructure for managing and analyzing life science ontologies and their evolution. GOMMA utilizes a generic repository to uniformly and efficiently manage ontology versions and different kinds of mappings. Furthermore, it provides components for ontology matching, and determining evolutionary ontology changes. These components are used by analysis tools, such as the Ontology Evolution Explorer (OnEX) and the detection of unstable ontology regions. We introduce the component-based infrastructure and show analysis results for selected components and life science applications. GOMMA is available at http://dbs.uni-leipzig.de/GOMMA. Conclusions GOMMA provides a comprehensive and scalable infrastructure to manage large life science ontologies and analyze their evolution. Key functions include a generic storage of ontology versions and mappings, support for ontology matching and determining ontology changes. The supported features for analyzing ontology changes are helpful to assess their impact on ontology-dependent applications such as for term enrichment. GOMMA complements OnEX by providing functionalities to manage various versions of mappings between two ontologies and allows combining different match approaches. PMID:21914205
Environmental control system transducer development study
NASA Technical Reports Server (NTRS)
Brudnicki, M. J.
1973-01-01
A failure evaluation of the transducers used in the environmental control systems of the Apollo command service module, lunar module, and portable life support system is presented in matrix form for several generic categories of transducers to enable identification of chronic failure modes. Transducer vendors were contacted and asked to supply detailed information. The evaluation data generated for each category of transducer were compiled and published in failure design evaluation reports. The evaluation reports also present a review of the failure and design data for the transducers and suggest both design criteria to improve reliability of the transducers and, where necessary, design concepts for required redesign of the transducers. Remedial designs were implemented on a family of pressure transducers and on the oxygen flow transducer. The design concepts were subjected to analysis, breadboard fabrication, and verification testing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huff, Kathryn D.
Component level and system level abstraction of detailed computational geologic repository models have resulted in four rapid computational models of hydrologic radionuclide transport at varying levels of detail. Those models are described, as is their implementation in Cyder, a software library of interchangeable radionuclide transport models appropriate for representing natural and engineered barrier components of generic geologic repository concepts. A proof-of-principle demonstration was also conducted in which these models were used to represent the natural and engineered barrier components of a repository concept in a reducing, homogeneous, generic geology. This base case demonstrates integration of the Cyder open source library with the Cyclus computational fuel cycle systems analysis platform to facilitate calculation of repository performance metrics with respect to fuel cycle choices. (authors)
Decomposition-Based Failure Mode Identification Method for Risk-Free Design of Large Systems
NASA Technical Reports Server (NTRS)
Tumer, Irem Y.; Stone, Robert B.; Roberts, Rory A.; Clancy, Daniel (Technical Monitor)
2002-01-01
When designing products, it is crucial to assure failure- and risk-free operation in the intended operating environment. Failures are typically studied and eliminated as much as possible during the early stages of design. The few failures that go undetected result in unacceptable damage and losses in high-risk applications where public safety is of concern. Published NASA and NTSB accident reports point to a variety of components identified as sources of failures in the reported cases. In previous work, data from these reports were processed and placed in matrix form for all the system components and failure modes encountered, and then manipulated using matrix methods to determine similarities between the different components and failure modes. In this paper, these matrices are represented as a linear combination of failure modes, mathematically formed using Principal Components Analysis (PCA) decomposition. The PCA decomposition results in a low-dimensionality representation of all failure modes and components of interest, expressed in a transformed coordinate system. Such a representation opens the way for efficient pattern analysis and prediction of the failure modes posing the highest potential risk to the final product, rather than making decisions based on the large space of component and failure mode data. The mathematics of the proposed method is explained first using a simple example problem. The method is then applied to component failure data gathered from helicopter accident reports to demonstrate its potential.
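The decomposition the paper describes can be sketched with a centered singular value decomposition, which is the standard way to compute PCA. The component-by-failure-mode incidence matrix below is entirely hypothetical (the row labels are illustrative, not the NASA/NTSB data), so this is a sketch of the technique, not a reproduction of the paper's results.

```python
import numpy as np

# Hypothetical component-by-failure-mode count matrix (rows: components,
# columns: failure modes). Values are illustrative placeholders only.
X = np.array([
    [5.0, 1.0, 0.0, 2.0],   # e.g. landing gear (hypothetical)
    [4.0, 2.0, 1.0, 1.0],   # e.g. powerplant (hypothetical)
    [0.0, 0.0, 6.0, 3.0],   # e.g. avionics (hypothetical)
    [1.0, 0.0, 5.0, 4.0],   # e.g. instruments (hypothetical)
])

# Center the columns, then use SVD to obtain the principal axes.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Project onto the first two principal axes: a low-dimensional
# representation in which similar components land close together.
scores = Xc @ Vt[:2].T
explained = s**2 / np.sum(s**2)   # fraction of variance per axis
print(scores.shape)  # (4, 2)
```

Clustering or risk ranking can then operate on `scores` instead of the full component/failure-mode space.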
Evaluation of failure criterion for graphite/epoxy fabric laminates
NASA Technical Reports Server (NTRS)
Tennyson, R. C.; Wharram, G. E.
1985-01-01
The development and application of the tensor polynomial failure criterion for composite laminate analysis is described. Emphasis is given to the fabrication and testing of Narmco Rigidite 5208-WT300, a plain-weave fabric of Thornel 300 graphite fibers impregnated with Narmco 5208 resin. The quadratic failure criterion with F₁₂ = 0 provides accurate estimates of failure stresses for the graphite/epoxy investigated. The cubic failure criterion was recast into an operationally easier form, providing design curves that can be applied to laminates fabricated from orthotropic woven fabric prepregs. In the form presented, no interaction strength tests are required, although recourse to the quadratic model and the principal strength parameters is necessary. However, insufficient test data exist at present to generalize this approach for all prepreg constructions, and its use must be restricted to the generic materials and configurations investigated to date.
Sharing, samples, and generics: an antitrust framework.
Carrier, Michael A
Rising drug prices are in the news. By increasing price, drug companies have placed vital, even life-saving, medicines out of the reach of consumers. In a recent development, brand firms have prevented generics even from entering the market. The ruse for this strategy involves risk-management programs known as Risk Evaluation and Mitigation Strategies ("REMS"). Pursuant to legislation enacted in 2007, the FDA requires REMS when a drug's risks (such as death or injury) outweigh its rewards. Brands have used this regime, intended to bring drugs to the market, to block generic competition. Regulations such as the federal Hatch-Waxman Act and state substitution laws foster widespread generic competition. But these regimes can only be effectuated through generic entry. And that entry can take place only if a generic can use a brand's sample to show that its product is equivalent. More than 100 generic firms have complained that they have not been able to access needed samples. One study of 40 drugs subject to restricted access programs found that generics' inability to enter cost more than $5 billion a year. Brand firms have contended that antitrust law does not compel them to deal with their competitors and have highlighted concerns related to safety and product liability in justifying their refusals. This Article rebuts these claims. It highlights the importance of samples in the regulatory regime and the FDA's inability to address the issue. It shows how a sharing requirement in this setting is consistent with Supreme Court caselaw. And it demonstrates that the brands' behavior fails the defendant-friendly "no economic sense" test because the conduct literally makes no sense other than by harming generics. Brands' denial of samples offers a textbook case of monopolization. 
In the universe of pharmaceutical antitrust behavior, other conduct--such as "pay for delay" settlements between brands and generics and "product hopping" from one drug to a slightly modified version--has received the lion's share of attention. But sample denials are overdue for antitrust scrutiny. This Article fills this gap. Given the failure of Congress and the FDA to remedy the issue, antitrust can play a crucial role in ensuring generic access to samples, affirming a linchpin of the pharmaceutical regime.
Fatigue failure of metal components as a factor in civil aircraft accidents
NASA Technical Reports Server (NTRS)
Holshouser, W. L.; Mayner, R. D.
1972-01-01
A review of records maintained by the National Transportation Safety Board showed that 16,054 civil aviation accidents occurred in the United States during the 3-year period ending December 31, 1969. Material failure was an important factor in the cause of 942 of these accidents. Fatigue was identified as the mode of the material failures associated with the cause of 155 accidents, and in many other accidents the records indicated that fatigue failures might have been involved. There were 27 fatal accidents and 157 fatalities in accidents in which fatigue failures of metal components were definitely identified. Fatigue failures associated with accidents occurred most frequently in landing-gear components, followed in order by powerplant, propeller, and structural components in fixed-wing aircraft and tail-rotor and main-rotor components in rotorcraft. In a study of 230 laboratory reports on failed components associated with the cause of accidents, fatigue was identified as the mode of failure in more than 60 percent of the failed components. The most frequently identified cause of fatigue, as well as of most other types of material failures, was improper maintenance (including inadequate inspection). Fabrication defects, design deficiencies, defective material, and abnormal service damage also caused many fatigue failures. Four case histories of major accidents are included in the paper as illustrations of some of the factors involved in fatigue failures of aircraft components.
Development of a polymeric grout for the hydrostatic bearing at DSS 14
NASA Technical Reports Server (NTRS)
Mcclung, C. E.; Schwendeman, J. L.; Ball, G. L., III; Jenkins, G. H.; Casperson, R. D.; Gale, G. P.; Riewe, A. A.
1981-01-01
Results of an investigation into the causes of the deterioration and premature failure of the grout under the hydrostatic bearing runner at DSS 14 are reported. Generic types of materials were screened and tested to find a grout material more resistant to the causes of grout failure. Emphasis was placed on the physical properties, strength, modulus of elasticity, and resistance to erosion and chemical attack by oil, as well as the unique requirements imposed by each material for mixing, placing, compacting, and cooling. The polymeric grout developed to replace the dry grout is described.
Preparing for the workplace: fostering generic attributes in allied health education programs.
Higgs, J; Hunt, A
1999-01-01
Allied health curricula need to extend beyond the learning of discipline-specific skills to encompass broader learning goals. In particular, the acquisition of generic skills is necessary to enable graduates to function more competently and confidently within their rapidly changing work, professional, and societal environments. In health sciences education particularly, the rate of change in practice and education is rapid and unprecedented. If educators focus on components of the curriculum rather than the entire learning experience, they are likely to significantly limit the students' acquisition of such generic skills. To achieve the desired generic skills outcomes, an overarching, integrated, and consistently applied curriculum strategy is advocated. This article considers a number of such strategies relevant to allied health education.
A Generic Evaluation Model for Semantic Web Services
NASA Astrophysics Data System (ADS)
Shafiq, Omair
Semantic Web Services research has gained momentum over the last few years, and by now several realizations exist. They are being used in a number of industrial use cases. Soon software developers will be expected to use this infrastructure to build their B2B applications requiring dynamic integration. However, there is still a lack of guidelines for the evaluation of tools developed to realize Semantic Web Services and applications built on top of them. In normal software engineering practice such guidelines can already be found for traditional component-based systems. Also, some efforts are being made to build performance models for service-based systems. Drawing on these related efforts in component-oriented and service-based systems, we identified the need for a generic evaluation model for Semantic Web Services applicable to any realization. The generic evaluation model will help users and customers to orient their systems and solutions towards using Semantic Web Services. In this chapter, we present the requirements for the generic evaluation model for Semantic Web Services and discuss the initial steps that we took to sketch such a model. Finally, we discuss related activities for evaluating semantic technologies.
NASA Technical Reports Server (NTRS)
Williams, R. E.; Kruger, R.
1980-01-01
Estimation procedures are described for measuring component failure rates, for comparing the failure rates of two different groups of components, and for formulating confidence intervals for testing hypotheses (based on failure rates) that the two groups perform similarly or differently. Appendix A contains an example of an analysis in which these methods are applied to investigate the characteristics of two groups of spacecraft components. The estimation procedures are adaptable to system level testing and to monitoring failure characteristics in orbit.
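A textbook construction along these lines is a point estimate plus an approximate confidence interval for a constant failure rate observed over a test period. The sketch below uses the normal approximation to the Poisson failure count; it is a standard illustration of the kind of procedure described, not necessarily the report's exact method, and the numbers are hypothetical.

```python
import math

def failure_rate_ci(failures, hours, z=1.645):
    """Point estimate and approximate two-sided confidence interval for a
    constant failure rate (failures per hour), using the normal
    approximation to the Poisson count. z = 1.645 gives roughly a 90%
    interval. A textbook construction, not the report's exact procedure."""
    rate = failures / hours
    half_width = z * math.sqrt(failures) / hours
    return rate, max(rate - half_width, 0.0), rate + half_width

# Hypothetical group: 8 failures over 200,000 component-hours.
rate, lo, hi = failure_rate_ci(failures=8, hours=2.0e5)
print(rate)  # 4e-05
```

Comparing two groups of components then reduces to checking whether their intervals, computed the same way, overlap substantially.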
NASA Technical Reports Server (NTRS)
Hendricks, Robert C.; Zaretsky, Erwin V.
2001-01-01
Critical component design is based on minimizing product failures that result in loss of life. Potential catastrophic failures are reduced to secondary failures when components are removed for cause or for operating time in the system. Issues of liability and the cost of component removal become of paramount importance. Deterministic design with factors of safety and probabilistic design each address, but lack, the essential characteristics needed for the design of critical components. In deterministic design and fabrication there are heuristic rules and safety factors developed over time for large sets of structural/material components. These factors did not come without cost: many designs failed, and many rules (codes) have standing committees to oversee their proper usage and enforcement. In probabilistic design, not only are failures a given, the failures are calculated; an element of risk is assumed based on empirical failure data for large classes of component operations. Failure of a class of components can be predicted, yet one cannot predict when a specific component will fail. The analogy is to the life insurance industry, where careful statistics are kept on classes of individuals. For a specific class, life span can be predicted within statistical limits, yet the life span of a specific member of that class cannot be predicted.
The role of fixation and bone quality on the mechanical stability of tibial knee components.
Lee, R W; Volz, R G; Sheridan, D C
1991-12-01
Tibial component loosening remains one of the major causes of failure of cemented and noncemented total knee arthroplasties. In this study, the authors identified the role of implant design, method of fixation, and bone density as it related to implant stability. The physical properties of "good" and "bad" bone were simulated using a "good" and "bad" foam model of the proximal tibia, fabricated in the laboratory from DARO RF-100 foam. A generic tibial component permitting various fixation designs was implanted into "good" and "bad" variable density foam tibial models in both cemented and noncemented modes. The mechanical stability of the implants was determined using a Materials Testing Machine by the application of an eccentrically applied cyclic load. The micromotion (subsidence and lift-off) of the tibial implants was recorded using two Linear Variable Differential Transformers. Statistically significant differences in implant stability were recorded as a function of fixation method. The most rigid implant fixation was achieved using four peripherally placed, 6.5-mm cancellous screws. The addition of a central stem added stability only in the case of "poor" quality foam. The mechanical stability of noncemented implants related directly to the density of the foam. Implant stability was greatly enhanced in "poor" quality foam by the use of cement. The method of implant fixation and bone density are critical determinants to tibial implant stability.
Class Dismissed? Historical Materialism and the Politics of "Difference"
ERIC Educational Resources Information Center
Scatamburlo-D'Annibale, Valerie; McLaren, Peter
2004-01-01
Perhaps one of the most taken-for-granted features of contemporary social theory is the ritual and increasingly generic critique of Marxism in terms of its alleged failure to address forms of oppression other than that of "class." Marxism is considered to be theoretically bankrupt and intellectually passe, and class analysis is often savagely…
NASA Technical Reports Server (NTRS)
DiCarlo, James A.
2011-01-01
Under the Supersonics Project of the NASA Fundamental Aeronautics Program, modeling and experimental efforts are underway to develop generic physics-based tools to better implement lightweight ceramic matrix composites into supersonic engine components and to assure sufficient durability for these components in the engine environment. These activities, which have a crosscutting aspect for other areas of the Fundamental Aero program, are focusing primarily on improving the multi-directional design strength and rupture strength of high-performance SiC/SiC composites by advanced fiber architecture design. This presentation discusses progress in tool development with particular focus on the use of 2.5D-woven architectures and state-of-the-art constituents for a generic un-cooled SiC/SiC low-pressure turbine blade.
Sandau, Kristin E; Hoglund, Barbara A; Weaver, Carrie E; Boisjolie, Charlene; Feldman, David
2014-01-01
To develop a conceptual definition of quality of life (QoL) with a left ventricular assist device (LVAD). Conceptual and operational definitions of QoL with an LVAD are lacking. A grounded theory method was used. Adult, outpatient LVAD recipients (n = 11) participated twice in individual or paired interviews. A conceptual definition of QoL while living with an LVAD was established as: "Being well enough to do and enjoy day-to-day activities that are important to me." Participants described 5 important life domains consistent with the QoL literature: physical, emotional, social, cognitive, and spiritual/meaning. However, participants identified unique concerns not addressed by the generic or heart-failure-specific measures typically used in the LVAD population. Existing generic and heart-failure-specific QoL measures are not adequate for understanding QoL among LVAD patients. The cognitive and spiritual/meaning domains were significant; these need inclusion for comprehensive QoL assessment in the LVAD population. Copyright © 2014 Elsevier Inc. All rights reserved.
Source Data Applicability Impacts on Epistemic Uncertainty for Launch Vehicle Fault Tree Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven D.; Ring, Robert W.
2016-01-01
Launch vehicle systems are designed and developed using both heritage and new hardware. Design modifications to the heritage hardware to fit new functional system requirements can impact the applicability of heritage reliability data. Risk estimates for newly designed systems must be developed from generic data sources, such as commercially available reliability databases, using reliability prediction methodologies such as those addressed in MIL-HDBK-217F. Failure estimates must be converted from the generic environment to the specific operating environment of the system where the hardware is used. In addition, some qualification of the data source's applicability to the current system should be made. Characterizing data applicability under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This paper demonstrates a data-source applicability classification method for assigning uncertainty to a target vehicle based on the source and operating environment of the originating data. The source applicability is determined using heuristic guidelines, while translation of operating environments is accomplished by applying statistical methods to MIL-HDBK-217F tables. The paper provides a case study example by translating Ground Benign (GB) and Ground Mobile (GM) rates to the Airborne Uninhabited Fighter (AUF) environment for three electronic components often found in space launch vehicle control systems. The classification method is followed by uncertainty-importance routines to assess the need for more applicable data to reduce uncertainty.
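The environment translation step can be sketched as scaling a failure rate by the ratio of multiplicative environment factors (pi_E), in the spirit of MIL-HDBK-217F. The factor values below are placeholders for illustration only, NOT the handbook's actual tabulated values, and the starting rate is hypothetical.

```python
# Placeholder environment factors (pi_E). Real values come from the
# MIL-HDBK-217F tables and differ per part category.
PI_E = {
    "GB":  1.0,    # Ground Benign (placeholder)
    "GM":  4.0,    # Ground Mobile (placeholder)
    "AUF": 8.0,    # Airborne Uninhabited Fighter (placeholder)
}

def translate_rate(rate, source_env, target_env, factors=PI_E):
    """Scale a failure rate from its source environment to the target
    environment using the ratio of environment factors."""
    return rate * factors[target_env] / factors[source_env]

# A rate of 2.0 failures per million hours quoted in Ground Benign becomes
# 16.0 per million hours in the (placeholder) AUF environment.
print(translate_rate(2.0, "GB", "AUF"))  # 16.0
```

Spreading such translated point estimates with an applicability-based error factor then yields the prior distributions the paper discusses.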
An expert system based software sizing tool, phase 2
NASA Technical Reports Server (NTRS)
Friedlander, David
1990-01-01
A software tool was developed for predicting the size of a future computer program at an early stage in its development. The system is intended to enable a user who is not expert in Software Engineering to estimate software size in lines of source code with an accuracy similar to that of an expert, based on the program's functional specifications. The project was planned as a knowledge based system with a field prototype as the goal of Phase 2 and a commercial system planned for Phase 3. The researchers used techniques from Artificial Intelligence and knowledge from human experts and existing software from NASA's COSMIC database. They devised a classification scheme for the software specifications, and a small set of generic software components that represent complexity and apply to large classes of programs. The specifications are converted to generic components by a set of rules and the generic components are input to a nonlinear sizing function which makes the final prediction. The system developed for this project predicted code sizes from the database with a bias factor of 1.06 and a fluctuation factor of 1.77, an accuracy similar to that of human experts but without their significant optimistic bias.
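Bias and fluctuation factors for size predictions are commonly defined on the log scale of the actual-to-predicted ratios; the sketch below uses one such definition (geometric-mean bias, exponentiated log standard deviation for fluctuation). The report does not spell out its exact formulas, and the prediction data here are invented.

```python
import math

def bias_and_fluctuation(predicted, actual):
    """Geometric-mean bias factor and log-scale fluctuation factor for a
    set of size predictions. One common definition, assumed here; the
    report's exact formulas are not stated in the abstract."""
    logs = [math.log(a / p) for p, a in zip(predicted, actual)]
    n = len(logs)
    mean = sum(logs) / n
    var = sum((x - mean) ** 2 for x in logs) / n
    return math.exp(mean), math.exp(math.sqrt(var))

# Hypothetical predicted vs. actual lines of code for four programs.
predicted = [1000, 2500, 400, 8000]
actual    = [1200, 2400, 700, 9000]
bias, fluct = bias_and_fluctuation(predicted, actual)
```

A bias factor above 1.0 means the tool underpredicts on average; a fluctuation factor near 1.0 means the predictions are consistent from program to program.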
Generic drug names and social welfare.
Lobo, Félix; Feldman, Roger
2013-06-01
This article studies how well International Nonproprietary Names (INNs), the "generic" names for pharmaceuticals, address the problems of imperfect information. Left in private hands, the identification of medicines leads to confusion and errors. Developed in the 1950s by the World Health Organization, INNs are a common, global, scientific nomenclature designed to overcome this failure. Taking stock after sixty years, we argue that the contribution of INNs to social welfare is paramount. They enhance public health by reducing errors and improving patient safety. They also contribute to economic efficiency by creating transparency as the foundation of competitive generic drug markets, reducing transaction costs, and favoring trade. The law in most countries requires manufacturers to designate pharmaceuticals with INNs in labeling and advertising. Generic substitution is also permitted or mandatory in many countries. But not all the benefits of INNs are fully realized because prescribers may not use them. We advocate strong incentives or even legally binding provisions to extend the use of INNs by prescribing physicians and dispensing pharmacists, but we do not recommend replacing brand names entirely with INNs. Instead, we propose dual use of brand names and INNs in prescribing, as in drug labeling.
Rocketdyne/Westinghouse nuclear thermal rocket engine modeling
NASA Technical Reports Server (NTRS)
Glass, James F.
1993-01-01
The topics are presented in viewgraph form and include the following: systems approach needed for nuclear thermal rocket (NTR) design optimization; generic NTR engine power balance codes; Rocketdyne nuclear thermal system code; software capabilities; steady state model; NTR engine optimizer code logic; reactor power calculation logic; sample multi-component configuration; NTR design code output; generic NTR code at Rocketdyne; Rocketdyne NTR model; and nuclear thermal rocket modeling directions.
Simulation system architecture design for generic communications link
NASA Technical Reports Server (NTRS)
Tsang, Chit-Sang; Ratliff, Jim
1986-01-01
This paper addresses a computer simulation system architecture design for generic digital communications systems. It addresses the issues of an overall system architecture in order to achieve a user-friendly, efficient, and yet easily implementable simulation system. The system block diagram and its individual functional components are described in detail. Software implementation is discussed with the VAX/VMS operating system used as a target environment.
Garster, Noelle C; Palta, Mari; Sweitzer, Nancy K; Kaplan, Robert M; Fryback, Dennis G
2009-11-01
To compare HRQoL differences with CHD in generic indexes and a proxy CVD-specific score in a nationally representative sample of U.S. adults. The National Health Measurement Study, a cross-sectional random-digit-dialed telephone survey of adults aged 35-89, administered the EQ-5D, QWB-SA, HUI2, HUI3, SF-36v2 (yielding PCS, MCS, and SF-6D), and HALex. Analyses compared 3,350 without CHD (group 1), 265 with CHD not taking chest pain medication (group 2), and 218 with CHD currently taking chest pain medication (group 3), with and without adjustment for demographic variables and comorbidities. Data on 154 patients from heart failure clinics were used to construct a proxy score utilizing generic items probing CVD symptoms. Mean scores differed between CHD groups for all indexes with and without adjustment (P < 0.0001 for all except MCS P = 0.018). Unadjusted group 3 versus 1 differences were about three times larger than for group 2 versus 1. Standardized differences for the proxy score were similar to those for generic indexes, and were about 1.0 for all except MCS for group 3 versus 1. Generic indexes capture differences in HRQoL in population-based studies of CHD similarly to a score constructed from questions probing CVD-specific symptoms.
SCADA alarms processing for wind turbine component failure detection
NASA Astrophysics Data System (ADS)
Gonzalez, E.; Reder, M.; Melero, J. J.
2016-09-01
Wind turbine failures and downtime can often compromise the profitability of a wind farm due to their high impact on operation and maintenance (O&M) costs. Early detection of failures can facilitate the changeover from corrective maintenance towards a predictive approach. This paper presents a cost-effective methodology that combines various alarm analysis techniques, using data from the Supervisory Control and Data Acquisition (SCADA) system, in order to detect component failures. The approach categorises the alarms according to a reviewed taxonomy, turning overwhelming data into valuable information for assessing component status. Different alarm analysis techniques are then applied for two purposes: evaluating the capability of the SCADA alarm system to detect failures, and investigating the relation between faults in some components and subsequent failures in others. Various case studies are presented and discussed. The study highlights the relationship between faulty behaviour in different components, and between failures and adverse environmental conditions.
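The categorisation-and-aggregation step can be sketched as counting alarms per taxonomy category for one component over a time window, the kind of aggregate used to assess component status before a failure. The alarm log, component names, and category labels below are all hypothetical.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical SCADA alarm log: (timestamp, component, alarm category).
# The taxonomy labels here are made up for illustration.
alarms = [
    (datetime(2016, 3, 1, 4, 10), "gearbox", "temperature"),
    (datetime(2016, 3, 1, 4, 25), "gearbox", "temperature"),
    (datetime(2016, 3, 1, 5, 0),  "pitch",   "communication"),
    (datetime(2016, 3, 2, 9, 30), "gearbox", "vibration"),
]

def alarm_counts(alarms, component, window_start, window):
    """Count alarms per category for one component inside a time window."""
    end = window_start + window
    return Counter(cat for ts, comp, cat in alarms
                   if comp == component and window_start <= ts < end)

counts = alarm_counts(alarms, "gearbox",
                      datetime(2016, 3, 1), timedelta(days=1))
print(counts["temperature"])  # 2
```

A rising count in one category ahead of logged failures in another component is the kind of cross-component pattern the paper investigates.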
Prediction of Burst Pressure in Multistage Tube Hydroforming of Aerospace Alloys.
Saboori, M; Gholipour, J; Champliaud, H; Wanjara, P; Gakwaya, A; Savoie, J
2016-08-01
Bursting, an irreversible failure in tube hydroforming (THF), results mainly from the local plastic instabilities that occur when the biaxial stresses imparted during the process exceed the forming limit strains of the material. To predict the burst pressure, Oyane's and Brozzo's decoupled ductile fracture criteria (DFC) were implemented as user material models in a dynamic nonlinear commercial 3D finite-element (FE) software, ls-dyna. THF of a round to V-shape was selected as a generic representative of an aerospace component for the FE simulations and experimental trials. To validate the simulation results, THF experiments up to bursting were carried out using Inconel 718 (IN 718) tubes with a thickness of 0.9 mm to measure the internal pressures during the process. When comparing the experimental and simulation results, the burst pressure predicted by Oyane's decoupled damage criterion was found to agree better with the measured data for IN 718 than that from Brozzo's fracture criterion.
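A decoupled ductile fracture criterion of the Oyane type accumulates a damage indicator along the strain path, D = integral of (1 + sigma_m / (A * sigma_eq)) d(eps_bar), and predicts fracture when D reaches a material threshold. The sketch below integrates that form numerically; the constant A, the triaxiality history, and the strain path are illustrative placeholders, not IN 718 data.

```python
import numpy as np

# Illustrative material constant and loading history (NOT IN 718 data).
A = 0.4
eps = np.linspace(0.0, 0.6, 61)     # equivalent plastic strain path
triax = 0.5 + 0.3 * eps             # hypothetical triaxiality sigma_m/sigma_eq

# Damage-rate integrand of the Oyane-type criterion, accumulated with
# the trapezoidal rule along the strain path.
f = 1.0 + triax / A
damage = float(np.sum((f[1:] + f[:-1]) * np.diff(eps) / 2.0))
print(round(damage, 3))  # 1.485
```

In an FE user material model, the same accumulation runs per element per time step, and the internal pressure at which any element's `damage` crosses the calibrated threshold is reported as the burst pressure.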
SEL Ada reuse analysis and representations
NASA Technical Reports Server (NTRS)
Kester, Rush
1990-01-01
Overall, it was revealed that the pattern of Ada reuse has evolved from initial reuse of utility components into reuse of generalized application architectures. Utility components were both domain-independent utilities, such as queues and stacks, and domain-specific utilities, such as those that implement spacecraft orbit and attitude mathematical functions and physics or astronomical models. The level of reuse was significantly increased with the development of a generalized telemetry simulator architecture. The use of Ada generics significantly increased the level of verbatim reuse, owing to the ability, with Ada generics, to parameterize the aspects of design that are configurable during reuse. A key factor in implementing generalized architectures was the ability to use generic subprogram parameters to tailor parts of the algorithm embedded within the architecture. The use of object-oriented design (in which objects model real-world entities) significantly improved the modularity for reuse. Encapsulating into packages the data and operations associated with common real-world entities creates natural building blocks for reuse.
Space tug propulsion system failure mode, effects and criticality analysis
NASA Technical Reports Server (NTRS)
Boyd, J. W.; Hardison, E. P.; Heard, C. B.; Orourke, J. C.; Osborne, F.; Wakefield, L. T.
1972-01-01
For purposes of the study, the propulsion system was considered as consisting of the following: (1) main engine system, (2) auxiliary propulsion system, (3) pneumatic system, (4) hydrogen feed, fill, drain and vent system, (5) oxygen feed, fill, drain and vent system, and (6) helium reentry purge system. Each component was critically examined to identify possible failure modes and the subsequent effect on mission success. Each space tug mission consists of three phases: launch to separation from shuttle, separation to redocking, and redocking to landing. The analysis considered the results of failure of a component during each phase of the mission. After the failure modes of each component were tabulated, those components whose failure would result in possible or certain loss of mission or inability to return the Tug to ground were identified as critical components and a criticality number determined for each. The criticality number of a component denotes the number of mission failures in one million missions due to the loss of that component. A total of 68 components were identified as critical with criticality numbers ranging from 1 to 2990.
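The criticality number, as defined in the abstract, is the expected number of mission failures per one million missions due to loss of a given component. A minimal sketch of that calculation is below; the per-phase failure probabilities and conditional loss fractions are hypothetical, not values from the study.

```python
# Illustrative criticality-number calculation for one component across the
# three Tug mission phases. All probabilities are hypothetical.
phases = [
    # (phase, P(component fails in phase), P(mission lost | failure))
    ("launch to separation",    2.0e-4, 1.0),
    ("separation to redocking", 5.0e-4, 0.5),
    ("redocking to landing",    1.0e-4, 1.0),
]

# Expected mission losses per million missions attributable to this component.
criticality = sum(p_fail * p_loss for _, p_fail, p_loss in phases) * 1.0e6
print(round(criticality))  # 550
```

Components are then ranked by this number, which in the study ranged from 1 to 2990 across the 68 critical components.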
Composite Load Spectra for Select Space Propulsion Structural Components
NASA Technical Reports Server (NTRS)
Ho, Hing W.; Newell, James F.
1994-01-01
Generic load models are described with multiple levels of progressive sophistication to simulate the composite (combined) load spectra (CLS) that are induced in space propulsion system components representative of the Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, and liquid oxygen (LOX) posts. These generic (coupled) models combine deterministic models for dynamic, acoustic, high-pressure, and high-rotational-speed load simulation using statistically varying coefficients. These coefficients are then determined using advanced probabilistic simulation methods, with and without strategically selected experimental data. The entire simulation process is included in a CLS computer code. Applications of the computer code to various components, in conjunction with the PSAM (Probabilistic Structural Analysis Method) to perform probabilistic load evaluation and life prediction, are also described to illustrate the effectiveness of the coupled model approach.
Di Paolo, Antonello; Arrigoni, Elena
2018-03-01
Generic drugs are important components of measures introduced by healthcare regulatory authorities to reduce treatment costs. In most patients and conditions the switch from a branded drug to its generic counterpart is performed with no major complications. However, evidence from complex diseases suggests that generic substitution requires careful evaluation in some settings and that current bioequivalence criteria may not always be adequate for establishing the interchangeability of branded and generic products. Rare diseases, also called orphan diseases, are a group of heterogeneous diseases that share important characteristics: in addition to their scarcity, most are severe, chronic, highly debilitating, and often present in early childhood. Finding a treatment for a rare disease is challenging. Thanks to incentives that encourage research and development programs in rare diseases, several orphan drugs are currently available. The elevated cost of orphan drugs is a highly debated issue and a cause of limited access to treatment for many patients. As patent protection and the exclusivity period of several orphan drugs will expire soon, generic versions of orphan drugs should reach the market shortly, with great expectations about their impact on the economic burden of rare diseases. However, consistent with other complex diseases, generic substitution may require thoughtful considerations and may be even contraindicated in some rare conditions. This article provides an overview of rare disease characteristics, reviews reports of problematic generic substitution, and discusses why generic substitution of orphan drugs may be challenging and should be undertaken carefully in rare disease patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua
2014-11-01
Passive systems, structures and components (SSCs) will degrade over their operating life, and this degradation may cause a reduction in the safety margins of a nuclear power plant. In traditional probabilistic risk assessment (PRA) using the event-tree/fault-tree methodology, passive SSC failure rates are generally based on generic plant failure data, and the true state of a specific plant is not reflected realistically. To address aging effects of passive SSCs within the traditional PRA methodology, [1] considers physics-based models that account for the operating conditions in the plant; however, [1] does not include the effects of surveillance/inspection. This paper presents an overall methodology for the incorporation of aging modeling of passive components into the RAVEN/RELAP-7 environment, which provides a framework for performing dynamic PRA. Dynamic PRA allows consideration of both epistemic and aleatory uncertainties (including those associated with maintenance activities) in a consistent phenomenological and probabilistic framework and is often needed when there is complex process/hardware/software/firmware/human interaction [2]. Dynamic PRA has gained attention recently due to difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models and also in the modeling of digital instrumentation and control systems. RAVEN (Reactor Analysis and Virtual Control Environment) [3] is a software package under development at the Idaho National Laboratory (INL) as an online control logic driver and post-processing tool. It is coupled to the plant transient code RELAP-7 (Reactor Excursion and Leak Analysis Program), also currently under development at INL [3], as well as RELAP5 [4].
The overall methodology aims to: • Address multiple aging mechanisms involving a large number of components in a computationally feasible manner, where the sequencing of events is conditioned on the physical conditions predicted in a simulation environment such as RELAP-7. • Identify the risk-significant passive components, their failure modes, and anticipated rates of degradation. • Incorporate surveillance and maintenance activities and their effects into the plant state and into component aging progress. • Assess aging effects in a dynamic simulation environment. 1. C. L. SMITH, V. N. SHAH, T. KAO, G. APOSTOLAKIS, "Incorporating Ageing Effects into Probabilistic Risk Assessment – A Feasibility Study Utilizing Reliability Physics Models," NUREG/CR-5632, USNRC, (2001). 2. T. ALDEMIR, "A Survey of Dynamic Methodologies for Probabilistic Safety Assessment of Nuclear Power Plants," Annals of Nuclear Energy, 52, 113-124, (2013). 3. C. RABITI, A. ALFONSI, J. COGLIATI, D. MANDELLI and R. KINOSHITA, "Reactor Analysis and Virtual Control Environment (RAVEN) FY12 Report," INL/EXT-12-27351, (2012). 4. D. ANDERS et al., "RELAP-7 Level 2 Milestone Report: Demonstration of a Steady State Single Phase PWR Simulation with RELAP-7," INL/EXT-12-25924, (2012).
Orbiter post-tire failure and skid testing results
NASA Technical Reports Server (NTRS)
Daugherty, Robert H.; Stubbs, Sandy M.
1989-01-01
An investigation was conducted at the NASA Langley Research Center's Aircraft Landing Dynamics Facility (ALDF) to define the post-tire failure drag characteristics of the Space Shuttle Orbiter main tire and wheel assembly. Skid tests on various materials were also conducted to define their friction and wear rate characteristics under higher speed and bearing pressures than any previous tests. The skid tests were conducted to support a feasibility study of adding a skid to the orbiter strut between the main tires to protect an intact tire from failure due to overload should one of the tires fail. Roll-on-rim tests were conducted to define the ability of a standard and a modified orbiter main wheel to roll without a tire. Results of the investigation are combined into a generic model of strut drag versus time under failure conditions for inclusion into rollout simulators used to train the shuttle astronauts.
Quality of care and investment in property, plant, and equipment in hospitals.
Levitt, S W
1994-02-01
This study explores the relationship between quality of care and investment in property, plant, and equipment (PPE) in hospitals. Hospitals' investment in PPE was derived from audited financial statements for the fiscal years 1984-1989. Peer Review Organization (PRO) Generic Quality Screen (GQS) reviews and confirmed failures between April 1989 and September 1990 were obtained from the Massachusetts PRO. Weighted least squares regression models used PRO GQS confirmed failure rates as the dependent variable, and investment in PPE as the key explanatory variable. Investment in PPE was standardized, summed by the hospital over the six years, and divided by the hospital's average number of beds in that period. The number of PRO reviewed cases with one or more GQS confirmed failures was divided by the total number of cases reviewed to create confirmed failure rates. Investment in PPE in Massachusetts hospitals is correlated with GQS confirmed failure rates. A financial variable, investment in PPE, predicts certain dimensions of quality of care in hospitals.
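The regression setup described above can be sketched in a few lines. The hospital figures below are hypothetical, not the study's Massachusetts data, and the closed-form weighted least squares solution stands in for whatever estimation software the authors used; weighting by the number of reviewed cases is one common choice for rate outcomes.

```python
import numpy as np

# Hypothetical data: one row per hospital.
# ppe: standardized six-year PPE investment per bed (explanatory variable)
# failures, reviewed: GQS confirmed failures and total PRO-reviewed cases
ppe = np.array([0.8, 1.5, 2.1, 0.4, 3.0, 1.1])
failures = np.array([12, 9, 6, 15, 4, 10])
reviewed = np.array([200, 180, 220, 150, 240, 190])

rate = failures / reviewed                     # dependent variable: confirmed failure rate
X = np.column_stack([np.ones_like(ppe), ppe])  # design matrix: intercept + PPE investment
W = np.diag(reviewed.astype(float))            # weight each hospital by cases reviewed

# Closed-form WLS solution: beta = (X'WX)^{-1} X'W y
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ rate)
print(beta)  # [intercept, slope of failure rate in PPE investment]
```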
Reliability analysis of the F-8 digital fly-by-wire system
NASA Technical Reports Server (NTRS)
Brock, L. D.; Goodman, H. A.
1981-01-01
The F-8 Digital Fly-by-Wire (DFBW) flight test program, intended to provide the technology for advanced control systems that give aircraft enhanced performance and operational capability, is addressed. A detailed analysis of the experimental system was performed to estimate the probabilities of two significant safety-critical events: (1) loss of the primary flight control function, causing reversion to the analog bypass system; and (2) loss of the aircraft due to failure of the electronic flight control system. The analysis covers appraisal of risks due to random equipment failure, generic faults in the design of the system or its software, and induced failure due to external events. A unique diagrammatic technique was developed that details the combinatorial reliability equations for the entire system, promotes understanding of system failure characteristics, and identifies the most likely failure modes. The technique provides a systematic method of applying basic probability equations and is augmented by a computer program, written in a modular fashion, that duplicates the structure of these equations.
Life prediction systems for critical rotating components
NASA Technical Reports Server (NTRS)
Cunningham, Susan E.
1993-01-01
With the advent of advanced materials in rotating gas turbine engine components, the methodologies for life prediction of these parts must also increase in sophistication and capability. Pratt & Whitney's view of generic requirements for composite component life prediction systems are presented, efforts underway to develop these systems are discussed, and industry participation in key areas requiring development is solicited.
Stress Analysis of B-52B and B-52H Air-Launching Systems Failure-Critical Structural Components
NASA Technical Reports Server (NTRS)
Ko, William L.
2005-01-01
The operational life analysis of any airborne failure-critical structural component requires the stress-load equation, which relates the applied load to the maximum tangential tensile stress at the critical stress point. The failure-critical structural components identified are the B-52B Pegasus pylon adapter shackles, B-52B Pegasus pylon hooks, B-52H airplane pylon hooks, B-52H airplane front fittings, B-52H airplane rear pylon fitting, and the B-52H airplane pylon lower sway brace. Finite-element stress analysis was performed on the said structural components, and the critical stress point was located and the stress-load equation was established for each failure-critical structural component. The ultimate load, yield load, and proof load needed for operational life analysis were established for each failure-critical structural component.
An object oriented generic controller using CLIPS
NASA Technical Reports Server (NTRS)
Nivens, Cody R.
1990-01-01
In today's applications, the need for the division of code and data has focused on the growth of object oriented programming. This philosophy gives software engineers greater control over the environment of an application. Yet the use of object oriented design does not exclude the need for greater understanding by the application of what the controller is doing. Such understanding is only possible by using expert systems. Providing a controller that is capable of controlling an object by using rule-based expertise would expedite the use of both object oriented design and expert knowledge of the dynamic of an environment in modern controllers. This project presents a model of a controller that uses the CLIPS expert system and objects in C++ to create a generic controller. The polymorphic abilities of C++ allow for the design of a generic component stored in individual data files. Accompanying the component is a set of rules written in CLIPS which provide the following: the control of individual components, the input of sensory data from components and the ability to find the status of a given component. Along with the data describing the application, a set of inference rules written in CLIPS allows the application to make use of sensory facts and status and control abilities. As a demonstration of this ability, the control of the environment of a house is provided. This demonstration includes the data files describing the rooms and their contents as far as devices, windows and doors. The rules used for the home consist of the flow of people in the house and the control of devices by the home owner.
Composite load spectra for select space propulsion structural components
NASA Technical Reports Server (NTRS)
Newell, J. F.; Ho, H. W.; Kurth, R. E.
1991-01-01
The work performed to develop composite load spectra (CLS) for the Space Shuttle Main Engine (SSME) using probabilistic methods is described. Three probabilistic methods were implemented for the engine system influence model; RASCAL was chosen as the principal method, as most component load models were implemented with it. Validation of RASCAL was performed: accuracy comparable to the Monte Carlo method can be obtained if a large enough bin size is used. Generic probabilistic models were developed and implemented for load calculations using the probabilistic methods discussed above. Each engine mission, either a real flight or a test, has three mission phases: the engine start transient phase, the steady-state phase, and the engine cutoff transient phase. Power level and engine operating inlet conditions change during a mission. The load calculation module provides steady-state and quasi-steady-state calculation procedures with a duty-cycle-data option; the quasi-steady-state procedure is for engine transient phase calculations. In addition, a few generic probabilistic load models were developed for specific conditions. These include the fixed transient spike model, the Poisson arrival transient spike model, and the rare event model. These generic probabilistic load models provide sufficient latitude for simulating loads with specific conditions. For SSME components, turbine blades, transfer ducts, the LOX post, and the high-pressure oxidizer turbopump (HPOTP) discharge duct were selected for application of the CLS program. The loads include static and dynamic pressure loads for all four components, centrifugal force for the turbine blade, temperatures for thermal loads for all four components, and structural vibration loads for the ducts and LOX posts.
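As an illustration of the kind of generic probabilistic load model mentioned above, here is a minimal Monte Carlo sketch of a Poisson-arrival transient spike model. The arrival rate, spike-magnitude distribution, and design threshold are all assumed for illustration and are not taken from the CLS code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical Poisson-arrival transient spike model: load spikes arrive at
# rate lam (spikes/second) over a transient of duration t; each spike's
# normalized magnitude is lognormal. All parameter values are illustrative.
lam, t = 0.5, 4.0
n_missions = 10_000

peak_loads = np.empty(n_missions)
for i in range(n_missions):
    n_spikes = rng.poisson(lam * t)            # number of spikes this mission
    if n_spikes == 0:
        peak_loads[i] = 0.0
    else:
        spikes = rng.lognormal(mean=0.0, sigma=0.3, size=n_spikes)
        peak_loads[i] = spikes.max()           # governing (maximum) spike load

# Empirical probability that the peak transient load exceeds a design threshold
threshold = 1.5
p_exceed = np.mean(peak_loads > threshold)
print(p_exceed)
```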
NASA Astrophysics Data System (ADS)
Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah; Riveli, Nowo
2017-03-01
The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accidents that impact human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in an LNG regasification unit. In this case, the failure is caused by Boiling Liquid Expanding Vapor Explosion (BLEVE) and jet fire in the LNG storage tank component. The failure probability can be determined using Fault Tree Analysis (FTA). In addition, the heat radiation generated is calculated. Fault trees for BLEVE and jet fire on the storage tank component were constructed, yielding a failure probability of 5.63 × 10⁻¹⁹ for BLEVE and 9.57 × 10⁻³ for jet fire. The failure probability for jet fire is high enough that it needs to be reduced by customizing the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. After this customization, the failure probability was reduced to 4.22 × 10⁻⁶.
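The Fault Tree Analysis arithmetic that rolls basic-event probabilities up to a top event can be sketched as follows, assuming independent basic events. The gate structure and the probabilities below are hypothetical and do not reproduce the study's BLEVE or jet fire trees.

```python
# Minimal fault-tree gate arithmetic for independent basic events.

def or_gate(*p):
    """P(at least one event occurs) = 1 - prod(1 - p_i) for independent events."""
    prob = 1.0
    for pi in p:
        prob *= (1.0 - pi)
    return 1.0 - prob

def and_gate(*p):
    """P(all events occur) = prod(p_i) for independent events."""
    prob = 1.0
    for pi in p:
        prob *= pi
    return prob

# Hypothetical example: a jet fire requires a leak AND an ignition source;
# a leak can come from pipe rupture OR flange failure.
p_leak = or_gate(1e-3, 5e-4)          # pipe rupture, flange failure
p_jet_fire = and_gate(p_leak, 0.1)    # leak AND ignition
print(p_jet_fire)
```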
Compound estimation procedures in reliability
NASA Technical Reports Server (NTRS)
Barnes, Ron
1990-01-01
At NASA, components and subsystems of components in the Space Shuttle and Space Station generally go through a number of redesign stages. While data on failures for various design stages are sometimes available, the classical procedures for evaluating reliability only utilize the failure data on the present design stage of the component or subsystem. Often, few or no failures have been recorded on the present design stage. Previously, Bayesian estimators for the reliability of a single component, conditioned on the failure data for the present design, were developed. These new estimators permit NASA to evaluate the reliability, even when few or no failures have been recorded. Point estimates for the latter evaluation were not possible with the classical procedures. Since different design stages of a component (or subsystem) generally have a good deal in common, the development of new statistical procedures for evaluating the reliability, which consider the entire failure record for all design stages, has great intuitive appeal. A typical subsystem consists of a number of different components and each component has evolved through a number of redesign stages. The present investigations considered compound estimation procedures and related models. Such models permit the statistical consideration of all design stages of each component and thus incorporate all the available failure data to obtain estimates for the reliability of the present version of the component (or subsystem). A number of models were considered to estimate the reliability of a component conditioned on its total failure history from two design stages. It was determined that reliability estimators for the present design stage, conditioned on the complete failure history for two design stages have lower risk than the corresponding estimators conditioned only on the most recent design failure data. 
Several models were explored, and preliminary models involving the bivariate Poisson distribution and the Consael process (a bivariate Poisson process) were developed. Possible shortcomings of the models are noted. An example is given to illustrate the procedures. These investigations are ongoing, with the aim of developing estimators that extend to components (and subsystems) with three or more design stages.
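A minimal sketch of pooling failure data across design stages, using a conjugate Gamma-Poisson model with a hypothetical geometric down-weighting of older stages; the report's actual bivariate Poisson and Consael-process models are more elaborate, and the prior, discount factor, and data below are all assumptions for illustration.

```python
# Conjugate Gamma-Poisson sketch: failure rate lam ~ Gamma(a, b); observing
# k failures in exposure time t updates the posterior to Gamma(a + k, b + t).
# Here earlier design stages contribute with a geometric discount, so the
# current stage counts fully and older stages count partially.

def posterior(a, b, stages, discount=0.5):
    """stages: list of (failures, exposure_hours), oldest first.
    Returns updated Gamma parameters (a, b)."""
    n = len(stages)
    for i, (k, t) in enumerate(stages):
        w = discount ** (n - 1 - i)   # weight 1.0 for the current stage
        a += w * k
        b += w * t
    return a, b

a0, b0 = 0.5, 100.0                   # vague prior (hypothetical)
stages = [(3, 500.0), (0, 800.0)]     # old design: 3 failures; current: 0 failures
a, b = posterior(a0, b0, stages)
print(a / b)                          # posterior mean failure rate (per hour)
```

Even with zero failures on the current design, the pooled posterior mean is nonzero, which is the practical appeal noted in the abstract.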
Pujari, Sanjay; Dravid, Ameet; Gupte, Nikhil; Joshi, Kedar; Bele, Vivek
2008-01-01
To assess effectiveness and safety of a generic fixed-dose combination of tenofovir (TDF)/emtricitabine (FTC)/efavirenz (EFV) among HIV-1-infected patients in Western India. Antiretroviral (ARV)-naive and experienced (thymidine analog nucleoside reverse transcriptase inhibitor [tNRTI] replaced by TDF) patients were started on a regimen of 1 TDF/FTC/EFV pill once a day. They were followed clinically on a periodic basis, and viral loads and CD4 counts were measured at 6 and 12 months. Creatinine clearance was calculated at baseline and at 6 months and/or as clinically indicated. Effectiveness was defined as not having to discontinue the regimen due to failure or toxicity. One hundred forty-one patients who started TDF/FTC/EFV before 1 June 2007 were eligible. Of these, 130 (92.2%) and 44 (31.2%) had 6- and 12-months follow-up, respectively. Thirty-five percent of the patients were ARV-naive. Eleven patients discontinued treatment (4 for virologic failure, 1 for grade 3-4 central nervous system disturbances, 4 for grade 3-4 renal toxicity, and 2 for cost). Ninety-six percent of patients were virologically suppressed at 6 months. Frequency of TDF-associated grade 3-4 renal toxicity was 2.8%; however, 3 of these patients had comorbid conditions associated with renal dysfunction. A fixed-dose combination of generic TDF/FTC/EFV is effective in ARV-naive and experienced patients. Although frequency of severe renal toxicity was higher than has been reported in the literature, it was safe in patients with no comorbid renal conditions.
Thøgersen, Anna Margrethe; Larsen, Jacob Moesgaard; Johansen, Jens Brock; Abedin, Moeen
2017-01-01
Background: In clinical trials, manufacturer-specific, strategic programming of implantable cardioverter–defibrillators (ICDs), including faster detection rates, reduces unnecessary therapy but permits therapy for ventricular tachycardia/ventricular fibrillation (VF). Present consensus recommends a generic rate threshold between 185 and 200 beats per minute, which exceeds the rate tested in clinical trials for some manufacturers. In a case series, we sought to determine the relationship between programmed parameters and failure of modern ICDs to treat VF. Methods and Results: We reviewed cases in which normally functioning ICDs failed to deliver timely therapy for VF from April 2015 to January 2017 at 4 institutions. Of 10 ambulatory patients, 5 died from untreated VF, 4 had cardiac arrests requiring external shocks, and 1 was rescued by a delayed ICD shock. VF did not satisfy programmed detection criteria in 9 patients (90%). Seven of these patients had slowest detection rates that were consistent with generic recommendations but not tested in a peer-reviewed trial for their manufacturer’s ICDs. Manufacturer-specific factors interacted with fast detection rates to withhold therapy, including strict VF episode termination rules, enhancements to minimize T-wave oversensing, and features that restrict therapy to regular rhythms in ventricular tachycardia zones. Untreated VF despite recommended programming accounted for 56% of sudden deaths and 11% of all deaths during the study period. Conclusions: Complex and unanticipated interactions between manufacturer-specific features and generic programming can prevent therapy for VF. More data are needed to assess the risks and benefits of translating evidence-based detection parameters from one manufacturer to another. PMID:28916511
Socket position determines hip resurfacing 10-year survivorship.
Amstutz, Harlan C; Le Duff, Michel J; Johnson, Alicia J
2012-11-01
Modern metal-on-metal hip resurfacing arthroplasty designs have been used for over a decade. Risk factors for short-term failure include small component size, large femoral head defects, low body mass index, older age, high level of sporting activity, and component design, and it is established there is a surgeon learning curve. Owing to failures with early surgical techniques, we developed a second-generation technique to address those failures. However, it is unclear whether the techniques affected the long-term risk factors. We (1) determined survivorship for hips implanted with the second-generation cementing technique; (2) identified the risk factors for failure in these patients; and (3) determined the effect of the dominant risk factors on the observed modes of failure. We retrospectively reviewed the first 200 hips (178 patients) implanted using our second-generation surgical technique, which consisted of improvements in cleaning and drying the femoral head before and during cement application. There were 129 men and 49 women. Component orientation and contact patch to rim distance were measured. We recorded the following modes of failure: femoral neck fracture, femoral component loosening, acetabular component loosening, wear, dislocation, and sepsis. The minimum followup was 25 months (mean, 106.5 months; range, 25-138 months). Twelve hips were revised. Kaplan-Meier survivorship was 98.0% at 5 years and 94.3% at 10 years. The only variable associated with revision was acetabular component position. Contact patch to rim distance was lower in hips that dislocated, were revised for wear, or were revised for acetabular loosening. The dominant modes of failure were related to component wear or acetabular component loosening. Acetabular component orientation, a factor within the surgeon's control, determines the long-term success of our current hip resurfacing techniques. 
Current techniques have changed the modes of failure from aseptic femoral failure to wear or loosening of the acetabular component. Level III, prognostic study. See Guidelines for Authors for a complete description of levels of evidence.
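Kaplan-Meier survivorship figures like those quoted above come from a simple product-limit calculation. The follow-up times below are hypothetical, not the study's cohort; hips without a revision are treated as censored at last follow-up.

```python
# Kaplan-Meier (product-limit) estimator sketch.
# Each entry: (months of follow-up, revised?). Censored cases have revised=False.
data = [(25, False), (40, True), (60, False), (80, True), (100, False),
        (110, False), (120, True), (130, False), (138, False), (138, False)]

def kaplan_meier(data):
    """Return [(time, survival)] at each revision (event) time.
    Assumes at most one event per distinct time, as in this toy data."""
    events = sorted(t for t, revised in data if revised)
    surv, out = 1.0, []
    for t in events:
        at_risk = sum(1 for ti, _ in data if ti >= t)   # still under follow-up at t
        surv *= 1.0 - 1.0 / at_risk                     # step down at each event
        out.append((t, surv))
    return out

for t, s in kaplan_meier(data):
    print(t, round(s, 3))
```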
Survivorship analysis of failure pattern after revision total hip arthroplasty.
Retpen, J B; Varmarken, J E; Jensen, J S
1989-12-01
Failure, defined as established indication for or performed re-revision of one or both components, was analyzed using survivorship methods in 306 revision total hip arthroplasties. The longevity of revision total hip arthroplasties was inferior to that of previously reported primary total hip arthroplasties. The overall survival curve was two-phased, with a late failure period associated with aseptic loosening of one or both components and an early failure period associated with causes of failure other than loosening. Separate survival curves for aseptic loosening of femoral and acetabular components showed late and almost simultaneous decline, but with a tendency toward a higher rate of failure for the femoral component. No differences in survival could be found between the Stanmore, Lubinus standard, and Lubinus long-stemmed femoral components. A short interval between the index operation and the revision and intraoperative and postoperative complications were risk factors for early failure. Young age was a risk factor for aseptic loosening of the femoral component. Intraoperative fracture of the femoral shaft was not a risk factor for secondary loosening. No difference in survival was found between primary cemented total arthroplasty and primary noncemented hemiarthroplasty.
ERIC Educational Resources Information Center
Ronstadt, Robert
2007-01-01
In this article, the author defines the Corridor Principle, which explains how entrepreneurs are able to use knowledge and insight from earlier ventures to see new venture opportunities that they could not have seen and/or pursued had they not started an earlier venture. He discusses its importance to practitioners in allowing them to anticipate…
ERIC Educational Resources Information Center
Merino, Claralynn
Many Native American communities have high rates of alcoholism. Children growing up in alcoholic families often exhibit co-dependent or para-alcoholic behaviors, which place them at high risk of educational failure. The Love Bug model was designed to encourage culturally appropriate self-expression and to promote self-love and detachment from…
Application and Evaluation of Control Modes for Risk-Based Engine Performance Enhancements
NASA Technical Reports Server (NTRS)
Liu, Yuan; Litt, Jonathan S.; Sowers, T. Shane; Owen, A. Karl (Compiler); Guo, Ten-Huei
2014-01-01
The engine control system for civil transport aircraft imposes operational limits on the propulsion system to ensure compliance with safety standards. However, during certain emergency situations, aircraft survivability may benefit from engine performance beyond its normal limits despite the increased risk of failure. Accordingly, control modes were developed to improve the maximum thrust output and responsiveness of a generic high-bypass turbofan engine. The algorithms were designed such that the enhanced performance would always constitute an elevation in failure risk to a consistent predefined likelihood. This paper presents an application of these risk-based control modes to a combined engine/aircraft model. Through computer and piloted simulation tests, the aim is to present a notional implementation of these modes, evaluate their effects on a generic airframe, and demonstrate their usefulness during emergency flight situations. Results show that minimal control effort is required to compensate for the changes in flight dynamics due to control mode activation. The benefits gained from enhanced engine performance for various runway incursion scenarios are investigated. Finally, the control modes are shown to protect against potential instabilities during propulsion-only flight where all aircraft control surfaces are inoperable.
Mathematical Modeling Of Life-Support Systems
NASA Technical Reports Server (NTRS)
Seshan, Panchalam K.; Ganapathi, Balasubramanian; Jan, Darrell L.; Ferrall, Joseph F.; Rohatgi, Naresh K.
1994-01-01
Generic hierarchical model of life-support system developed to facilitate comparisons of options in design of system. Model represents combinations of interdependent subsystems supporting microbes, plants, fish, and land animals (including humans). Generic model enables rapid configuration of variety of specific life support component models for tradeoff studies culminating in single system design. Enables rapid evaluation of effects of substituting alternate technologies and even entire groups of technologies and subsystems. Used to synthesize and analyze life-support systems ranging from relatively simple, nonregenerative units like aquariums to complex closed-loop systems aboard submarines or spacecraft. Model, called Generic Modular Flow Schematic (GMFS), coded in such chemical-process-simulation languages as Aspen Plus and expressed as three-dimensional spreadsheet.
A portable MPI-based parallel vector template library
NASA Technical Reports Server (NTRS)
Sheffler, Thomas J.
1995-01-01
This paper discusses the design and implementation of a polymorphic collection library for distributed address-space parallel computers. The library provides a data-parallel programming model for C++ by providing three main components: a single generic collection class, generic algorithms over collections, and generic algebraic combining functions. Collection elements are the fourth component of a program written using the library and may be either of the built-in types of C or of user-defined types. Many ideas are borrowed from the Standard Template Library (STL) of C++, although a restricted programming model is proposed because of the distributed address-space memory model assumed. Whereas the STL provides standard collections and implementations of algorithms for uniprocessors, this paper advocates standardizing interfaces that may be customized for different parallel computers. Just as the STL attempts to increase programmer productivity through code reuse, a similar standard for parallel computers could provide programmers with a standard set of algorithms portable across many different architectures. The efficacy of this approach is verified by examining performance data collected from an initial implementation of the library running on an IBM SP-2 and an Intel Paragon.
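The three library components map naturally onto a small sketch. Python stands in here for the library's C++ templates, and the function names are illustrative, not the library's API; the point is the separation of collection, generic algorithm, and algebraic combining function.

```python
# Illustrative decomposition into the paper's three components.
from functools import reduce
import operator

# 1. A generic collection: any sequence of elements (here a plain list;
#    in the library, a distributed collection of user-defined element types).
collection = [1.5, 2.0, 3.5, 4.0]

# 2. A generic algorithm over collections: elementwise map.
def apply_each(fn, coll):
    return [fn(x) for x in coll]

# 3. A generic algebraic combining function: reduction with an associative
#    operator and its identity, which is what makes parallel combining valid.
def combine(op, coll, identity):
    return reduce(op, coll, identity)

squared = apply_each(lambda x: x * x, collection)
total = combine(operator.add, squared, 0.0)
print(total)
```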
76 FR 69292 - Aging Management of Stainless Steel Structures and Components in Treated Borated Water
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-08
... NUCLEAR REGULATORY COMMISSION [NRC-2011-0256] Aging Management of Stainless Steel Structures and... Stainless Steel Structures and Components in Treated Borated Water.'' This LR-ISG revises the guidance in...) and Generic Aging Lessons Learned (GALL) Report for the aging management of stainless steel structures...
Witt, Daniel M; Tillman, Donald J; Evans, Christy M; Plotkin, Tatyana V; Sadler, Melanie A
2003-03-01
Substitution of generic warfarin initially was discouraged because of concerns regarding therapeutic failure or toxicity. Although subsequent research with AB-rated (i.e., bioequivalent) warfarin did not confirm initial concerns, the issue is not settled for all clinicians. We sought to provide additional information regarding the clinical and economic impact of warfarin conversion by analyzing a real-life sample of patients receiving long-term anticoagulation therapy who were switched from brand name to generic warfarin. Patients who had been taking warfarin for at least 180 days and had received uninterrupted oral anticoagulation 90 days before and 90 days after switching to generic warfarin were included. The switch date was based on the first time generic warfarin was dispensed from our pharmacies. The primary end point was the calculated amount of time each patient's international normalized ratio (INR) values were within the patient-specific target INR range in the 90 days before and after the switch. Data regarding adverse events and medical resource utilization were also collected. Pharmacoeconomic analyses were performed. The analysis included 2299 patients. The overall difference in calculated time INR values were below (22.6% before vs 26.1% after switch, p<0.0001) and within (65.9% before vs 63.3% after switch, p=0.0002) the therapeutic INR range was statistically but not clinically significant. Only 28.0% of patients experienced a change in therapeutic INR control of 10% or less, 33.1% experienced INR control that improved by greater than 10%, and 38.9% experienced INR control that worsened by more than 10%. The difference in total treatment costs associated with brand name and generic warfarin was 3128 dollars/100 patient-years in favor of the generic product. 
Sensitivity analyses revealed that cost savings associated with warfarin conversion in this health care system were highly dependent on the difference between warfarin costs and cost of treating anticoagulation-related adverse events. Most of these patients were successfully switched from brand name to generic warfarin. However, supplemental INR monitoring is warranted when one warfarin product is substituted for another to allow timely detection of those patients who experience significant changes in anticoagulation response.
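The primary end point above, the amount of time a patient's INR is within a target range, is commonly computed by linear interpolation between successive INR measurements (the Rosendaal method). A minimal sketch, assuming that interpolation method (the abstract does not name its calculation method), with illustrative function names:

```python
def _frac_in_band(y0, y1, low, high):
    """Fraction of a linear segment from y0 to y1 lying within [low, high]."""
    if y0 == y1:
        return 1.0 if low <= y0 <= high else 0.0
    # Parameterize the segment as y(s) = y0 + s*(y1 - y0), s in [0, 1],
    # and find the sub-interval of s where low <= y(s) <= high.
    s_low = (low - y0) / (y1 - y0)
    s_high = (high - y0) / (y1 - y0)
    s1, s2 = sorted((s_low, s_high))
    return max(0.0, min(1.0, s2) - max(0.0, s1))

def time_in_range(days, inrs, low=2.0, high=3.0):
    """Percent of time the interpolated INR lies in [low, high],
    Rosendaal-style: INR is assumed linear between measurements."""
    in_band = total = 0.0
    for i in range(len(days) - 1):
        dt = days[i + 1] - days[i]
        in_band += dt * _frac_in_band(inrs[i], inrs[i + 1], low, high)
        total += dt
    return 100.0 * in_band / total
```

For example, an INR rising linearly from 2.0 to 4.0 over ten days spends half that time inside a 2.0-3.0 target range.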
Failure and recovery in dynamical networks.
Böttcher, L; Luković, M; Nagler, J; Havlin, S; Herrmann, H J
2017-02-03
Failure, damage spread and recovery crucially underlie many spatially embedded networked systems ranging from transportation structures to the human body. Here we study the interplay between spontaneous damage, induced failure and recovery in both embedded and non-embedded networks. In our model the network's components follow three realistic processes that capture these features: (i) spontaneous failure of a component independent of the neighborhood (internal failure), (ii) failure induced by failed neighboring nodes (external failure) and (iii) spontaneous recovery of a component. We identify a metastable domain in the global network phase diagram spanned by the model's control parameters where dramatic hysteresis effects and random switching between two coexisting states are observed. This dynamics depends on the characteristic link length of the embedded system. For the Euclidean lattice in particular, hysteresis and switching only occur in an extremely narrow region of the parameter space compared to random networks. We develop a unifying theory which links the dynamics of our model to contact processes. Our unifying framework may help to better understand controllability in spatially embedded and random networks where spontaneous recovery of components can mitigate spontaneous failure and damage spread in dynamical networks.
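The three node-level processes (i)-(iii) described above can be illustrated with a toy discrete-time simulation. The ring-lattice topology, parameter names, and threshold rule below are illustrative assumptions, not the paper's model details:

```python
import random

def simulate(n_nodes=100, n_steps=200, p_int=0.01, p_ext=0.2,
             frac_thresh=0.5, p_rec=0.1, seed=1):
    """Toy version of the three processes on a ring lattice:
    (i)  internal failure, independent of the neighborhood,
    (ii) external failure induced by failed neighbors,
    (iii) spontaneous recovery.
    Returns the fraction of failed nodes at each time step."""
    rng = random.Random(seed)
    failed = [False] * n_nodes
    history = []
    for _ in range(n_steps):
        nxt = failed[:]  # synchronous update
        for i in range(n_nodes):
            nbrs = [(i - 1) % n_nodes, (i + 1) % n_nodes]
            if not failed[i]:
                frac_failed = sum(failed[j] for j in nbrs) / len(nbrs)
                if rng.random() < p_int:                       # (i)
                    nxt[i] = True
                elif frac_failed >= frac_thresh and rng.random() < p_ext:
                    nxt[i] = True                              # (ii)
            elif rng.random() < p_rec:                         # (iii)
                nxt[i] = False
        failed = nxt
        history.append(sum(failed) / n_nodes)
    return history
```

Sweeping `p_int` and `p_ext` in such a model is one way to probe the metastable domain and hysteresis the abstract describes.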
Virtually-synchronous communication based on a weak failure suspector
NASA Technical Reports Server (NTRS)
Schiper, Andre; Ricciardi, Aleta
1993-01-01
Failure detectors (or, more accurately Failure Suspectors (FS)) appear to be a fundamental service upon which to build fault-tolerant, distributed applications. This paper shows that a FS with very weak semantics (i.e., that delivers failure and recovery information in no specific order) suffices to implement virtually-synchronous communication (VSC) in an asynchronous system subject to process crash failures and network partitions. The VSC paradigm is particularly useful in asynchronous systems and greatly simplifies building fault-tolerant applications that mask failures by replicating processes. We suggest a three-component architecture to implement virtually-synchronous communication: (1) at the lowest level, the FS component; (2) on top of it, a component (2a) that defines new views; and (3) a component (2b) that reliably multicasts messages within a view. The issues covered in this paper also lead to a better understanding of the various membership service semantics proposed in recent literature.
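The three-component architecture above can be sketched as a toy single-process simulation. The class names and interfaces are illustrative assumptions, not the paper's API:

```python
class FailureSuspector:
    """Lowest layer (1): delivers suspicions that a process has crashed
    or recovered, in no particular order, and possibly wrongly."""
    def __init__(self):
        self.suspected = set()
    def suspect(self, pid):
        self.suspected.add(pid)
    def unsuspect(self, pid):
        self.suspected.discard(pid)

class ViewManager:
    """Middle layer (2a): turns suspicion changes into a sequence of
    numbered membership views."""
    def __init__(self, members, fs):
        self.fs = fs
        self.members = set(members)
        self.view_id = 0
    def new_view(self):
        self.view_id += 1
        return self.view_id, sorted(self.members - self.fs.suspected)

class ViewMulticast:
    """Top layer (2b): multicasts a message to every member of the
    current view (delivery is just a local list in this sketch)."""
    def __init__(self, vm):
        self.vm = vm
    def multicast(self, msg):
        vid, members = self.vm.new_view()
        return [(m, vid, msg) for m in members]
```

The point of the layering is that only the bottom layer needs any (weak) liveness information; the view and multicast layers are deterministic given the suspicion set.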
NASA Technical Reports Server (NTRS)
Vanschalkwyk, Christiaan Mauritz
1991-01-01
Many applications require that a control system must be tolerant to the failure of its components. This is especially true for large space-based systems that must work unattended and with long periods between maintenance. Fault tolerance can be obtained by detecting the failure of the control system component, determining which component has failed, and reconfiguring the system so that the failed component is isolated from the controller. Component failure detection experiments that were conducted on an experimental space structure, the NASA Langley Mini-Mast are presented. Two methodologies for failure detection and isolation (FDI) exist that do not require the specification of failure modes and are applicable to both actuators and sensors. These methods are known as the Failure Detection Filter and the method of Generalized Parity Relations. The latter method was applied to three different sensor types on the Mini-Mast. Failures were simulated in input-output data that were recorded during operation of the Mini-Mast. Both single and double sensor parity relations were tested and the effect of several design parameters on the performance of these relations is discussed. The detection of actuator failures is also treated. It is shown that in all the cases it is possible to identify the parity relations directly from input-output data. Frequency domain analysis is used to explain the behavior of the parity relations.
Analysis of failed nuclear plant components
NASA Astrophysics Data System (ADS)
Diercks, D. R.
1993-12-01
Argonne National Laboratory has conducted analyses of failed components from nuclear power-generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (1) intergranular stress-corrosion cracking of core spray injection piping in a boiling water reactor, (2) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (3) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (4) failure of pump seal wear rings by nickel leaching in a boiling water reactor.
Quality of care and investment in property, plant, and equipment in hospitals.
Levitt, S W
1994-01-01
OBJECTIVE. This study explores the relationship between quality of care and investment in property, plant, and equipment (PPE) in hospitals. DATA SOURCES. Hospitals' investment in PPE was derived from audited financial statements for the fiscal years 1984-1989. Peer Review Organization (PRO) Generic Quality Screen (GQS) reviews and confirmed failures between April 1989 and September 1990 were obtained from the Massachusetts PRO. STUDY DESIGN. Weighted least squares regression models used PRO GQS confirmed failure rates as the dependent variable, and investment in PPE as the key explanatory variable. DATA EXTRACTION. Investment in PPE was standardized, summed by the hospital over the six years, and divided by the hospital's average number of beds in that period. The number of PRO reviewed cases with one or more GQS confirmed failures was divided by the total number of cases reviewed to create confirmed failure rates. PRINCIPAL FINDINGS. Investment in PPE in Massachusetts hospitals is correlated with GQS confirmed failure rates. CONCLUSIONS. A financial variable, investment in PPE, predicts certain dimensions of quality of care in hospitals. PMID:8113054
Using Generic Data to Establish Dormancy Failure Rates
NASA Technical Reports Server (NTRS)
Reistle, Bruce
2014-01-01
Many hardware items are dormant prior to being operated. The dormant period might be especially long, for example during missions to the moon or Mars. In missions with long dormant periods the risk incurred during dormancy can exceed the active risk contribution. Probabilistic Risk Assessments (PRAs) need to account for the dormant risk contribution as well as the active contribution. A typical method for calculating a dormant failure rate is to multiply the active failure rate by a constant, the dormancy factor. For example, some practitioners use a heuristic and divide the active failure rate by 30 to obtain an estimate of the dormant failure rate. To obtain a more empirical estimate of the dormancy factor, this paper uses the recently updated database NPRD-2011 [1] to arrive at a set of distributions for the dormancy factor. The resulting dormancy factor distributions are significantly different depending on whether the item is electrical, mechanical, or electro-mechanical. Additionally, this paper will show that using a heuristic constant fails to capture the uncertainty of the possible dormancy factors.
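The heuristic dormancy-factor calculation described above amounts to a one-line scaling; a sketch, with a constant-rate (exponential) dormant failure probability added for context. Parameter values are illustrative:

```python
import math

def dormant_failure_rate(active_rate, dormancy_factor=30.0):
    """Heuristic quoted in the abstract: divide the active failure rate
    by a dormancy factor (30 is one rule of thumb some practitioners use)."""
    return active_rate / dormancy_factor

def dormant_failure_prob(active_rate, dormant_hours, dormancy_factor=30.0):
    """Probability of at least one failure during a dormant period,
    assuming a constant (exponential) failure rate."""
    lam = dormant_failure_rate(active_rate, dormancy_factor)
    return 1.0 - math.exp(-lam * dormant_hours)
```

The paper's point is that replacing the constant `dormancy_factor` with a distribution (fit per equipment class from NPRD-2011) captures uncertainty that the single heuristic value hides.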
Pujari, Sanjay; Dravid, Ameet; Gupte, Nikhil; Joshi, Kedar; Bele, Vivek
2008-01-01
Objective To assess effectiveness and safety of a generic fixed-dose combination of tenofovir (TDF)/emtricitabine (FTC)/efavirenz (EFV) among HIV-1-infected patients in Western India. Methods Antiretroviral (ARV)-naive and experienced (thymidine analog nucleoside reverse transcriptase inhibitor [tNRTI] replaced by TDF) patients were started on a regimen of 1 TDF/FTC/EFV pill once a day. They were followed clinically on a periodic basis, and viral loads and CD4 counts were measured at 6 and 12 months. Creatinine clearance was calculated at baseline and at 6 months and/or as clinically indicated. Effectiveness was defined as not having to discontinue the regimen due to failure or toxicity. Results One hundred forty-one patients who started TDF/FTC/EFV before 1 June 2007 were eligible. Of these, 130 (92.2%) and 44 (31.2%) had 6- and 12-months follow-up, respectively. Thirty-five percent of the patients were ARV-naive. Eleven patients discontinued treatment (4 for virologic failure, 1 for grade 3-4 central nervous system disturbances, 4 for grade 3-4 renal toxicity, and 2 for cost). Ninety-six percent of patients were virologically suppressed at 6 months. Frequency of TDF-associated grade 3-4 renal toxicity was 2.8%; however, 3 of these patients had comorbid conditions associated with renal dysfunction. Conclusion A fixed-dose combination of generic TDF/FTC/EFV is effective in ARV-naive and experienced patients. Although frequency of severe renal toxicity was higher than has been reported in the literature, it was safe in patients with no comorbid renal conditions. PMID:18924648
Cosmological N -body simulations with generic hot dark matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandbyge, Jacob; Hannestad, Steen, E-mail: jacobb@phys.au.dk, E-mail: sth@phys.au.dk
2017-10-01
We have calculated the non-linear effects of generic fermionic and bosonic hot dark matter components in cosmological N -body simulations. For sub-eV masses, the non-linear power spectrum suppression caused by thermal free-streaming resembles the one seen for massive neutrinos, whereas for masses larger than 1 eV, the non-linear relative suppression of power is smaller than in linear theory. We furthermore find that in the non-linear regime, one can map fermionic to bosonic models by performing a simple transformation.
Composite structural materials
NASA Technical Reports Server (NTRS)
Loewy, R.; Wiberley, S. E.
1986-01-01
Overall emphasis is on basic long-term research in the following categories: constituent materials, composite materials, generic structural elements, processing science technology; and maintaining long-term structural integrity. Research in basic composition, characteristics, and processing science of composite materials and their constituents is balanced against the mechanics, conceptual design, fabrication, and testing of generic structural elements typical of aerospace vehicles so as to encourage the discovery of unusual solutions to present and future problems. Detailed descriptions of the progress achieved in the various component parts of this comprehensive program are presented.
Cosmological N-body simulations with generic hot dark matter
NASA Astrophysics Data System (ADS)
Brandbyge, Jacob; Hannestad, Steen
2017-10-01
We have calculated the non-linear effects of generic fermionic and bosonic hot dark matter components in cosmological N-body simulations. For sub-eV masses, the non-linear power spectrum suppression caused by thermal free-streaming resembles the one seen for massive neutrinos, whereas for masses larger than 1 eV, the non-linear relative suppression of power is smaller than in linear theory. We furthermore find that in the non-linear regime, one can map fermionic to bosonic models by performing a simple transformation.
NASA Technical Reports Server (NTRS)
Ganapathi, Gani B.; Seshan, P. K.; Ferrall, Joseph; Rohatgi, Naresh
1992-01-01
An extension is proposed for the NASA Space Exploration Initiative's Generic Modular Flow Schematics for physical/chemical life support systems which involves the addition of biological processes. The new system architecture includes plant, microbial, and animal habitat, as well as the human habitat subsystem. Major Feedstock Production and Food Preparation and Packaging components have also been incorporated. Inedible plant, aquaculture, microbial, and animal solids are processed for recycling.
A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities
Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.
1999-01-01
A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is ~2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
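The BPT distribution referenced above is the inverse Gaussian with mean μ and shape μ/α², so its density, CDF, and hazard can be written down directly. This is a standard parameterization, sketched here, not the authors' code:

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian passage-time density with mean mu and aperiodicity alpha."""
    return math.sqrt(mu / (2.0 * math.pi * alpha**2 * t**3)) * \
        math.exp(-(t - mu)**2 / (2.0 * mu * alpha**2 * t))

def bpt_cdf(t, mu, alpha):
    """Inverse-Gaussian CDF with shape parameter lam = mu / alpha**2."""
    lam = mu / alpha**2
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    a = math.sqrt(lam / t)
    return phi(a * (t / mu - 1.0)) + \
        math.exp(2.0 / alpha**2) * phi(-a * (t / mu + 1.0))

def bpt_hazard(t, mu, alpha):
    """Instantaneous failure rate of survivors, h(t) = f(t) / (1 - F(t))."""
    return bpt_pdf(t, mu, alpha) / (1.0 - bpt_cdf(t, mu, alpha))
```

Consistent with the abstract, for α = 0.5 the long-time hazard tends to 1/(2α²μ) = 2/μ, and h(t) already exceeds the mean rate 1/μ shortly after t = μ/2. (Note the survivor term 1 - F(t) underflows numerically at very large t/μ.)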
Failure Maps for Rectangular 17-4PH Stainless Steel Sandwiched Foam Panels
NASA Technical Reports Server (NTRS)
Raj, S. V.; Ghosn, L. J.
2007-01-01
A new and innovative concept is proposed for designing lightweight fan blades for aircraft engines using commercially available 17-4PH precipitation hardened stainless steel. Rotating fan blades in aircraft engines experience a complex loading state consisting of combinations of centrifugal, distributed pressure and torsional loads. Theoretical failure (plastic collapse) maps, showing plots of the foam relative density versus face sheet thickness, t, normalized by the fan blade span length, L, have been generated for rectangular 17-4PH sandwiched foam panels under these three loading modes, assuming three plastic collapse failure modes. These maps show that the 17-4PH sandwiched foam panels can fail by yielding of the face sheets, yielding of the foam core, or wrinkling of the face sheets, depending on the foam relative density, the magnitude of t/L, and the loading mode. The design envelope of a generic fan blade is superimposed on the maps to provide valuable insights into the probable failure modes in a sandwiched foam fan blade.
Space Station Furnace Facility. Volume 3: Program cost estimate
NASA Technical Reports Server (NTRS)
1992-01-01
The approach used to estimate costs for the Space Station Furnace Facility (SSFF) is based on a computer program developed internally at Teledyne Brown Engineering (TBE). The program produces time-phased estimates of cost elements for each hardware component, based on experience with similar components. Engineering estimates of the degree of similarity or difference between the current project and the historical data are then used to adjust the computer-produced cost estimate and to fit it to the current project's Work Breakdown Structure (WBS). The SSFF Concept as presented at the Requirements Definition Review (RDR) was used as the base configuration for the cost estimate. This program incorporates data on costs of previous projects and the allocation of those costs to the components of one of three time-phased generic WBSs. Input consists of a list of similar components for which cost data exist, number of interfaces with their type and complexity, identification of the extent to which previous designs are applicable, and programmatic data concerning schedules and miscellaneous data (travel, off-site assignments). Output is program cost in labor hours and material dollars, for each component, broken down by generic WBS task and program schedule phase.
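The analogy-based adjustment described above (historical cost scaled by an engineering similarity judgment, plus an allowance for interface count and complexity) might be sketched as follows; the scaling form, function name, and all numbers are hypothetical, not TBE's actual model:

```python
def estimate_component_cost(historical_hours, similarity, n_interfaces,
                            hours_per_interface=40.0):
    """Illustrative analogy-based labor estimate.

    historical_hours    -- labor hours for the most similar past component
    similarity          -- engineering judgment in (0, 1]; 1.0 = identical design,
                           smaller values inflate the estimate
    n_interfaces        -- count of interfaces to other components
    hours_per_interface -- hypothetical allowance per interface
    """
    return historical_hours / similarity + n_interfaces * hours_per_interface
```

A time-phased estimate would then spread this total across WBS tasks and schedule phases, as the abstract describes.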
Constructing the "Best" Reliability Data for the Job
NASA Technical Reports Server (NTRS)
DeMott, D. L.; Kleinhammer, R. K.
2014-01-01
Modern business and technical decisions are based on the results of analyses. When considering assessments using "reliability data", the concern is how long a system will continue to operate as designed. Generally, the results are only as good as the data used. Ideally, a large set of pass/fail tests or observations to estimate the probability of failure of the item under test would produce the best data. However, this is a costly endeavor if used for every analysis and design. Developing specific data is costly and time consuming. Instead, analysts rely on available data to assess reliability. Finding data relevant to the specific use and environment for any project is difficult, if not impossible. Instead, we attempt to develop the "best" or composite analog data to support our assessments. One method used incorporates processes for reviewing existing data sources and identifying the available information based on similar equipment, then using that generic data to derive an analog composite. Dissimilarities in equipment descriptions, environment of intended use, quality and even failure modes impact the "best" data incorporated in an analog composite. Once developed, this composite analog data provides a "better" representation of the reliability of the equipment or component and can be used to support early risk or reliability trade studies, or analytical models to establish the predicted reliability data points. Data that is more representative of reality and more project specific would provide more accurate analysis, and hopefully a better final decision.
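One common way to aggregate several generic failure-rate estimates for similar equipment into a single lognormal composite (a median and an error factor, the form used throughout this collection) is a geometric-mean fit. This sketch is illustrative and not the specific method of the paper:

```python
import math

def composite_analog(rates, percentile_z=1.645):
    """Fit a lognormal composite to a list of generic failure-rate
    estimates from similar equipment.

    Returns (median, error_factor), where the error factor is the
    ratio of the 95th percentile to the median (z = 1.645)."""
    logs = [math.log(r) for r in rates]
    mu = sum(logs) / len(logs)                      # mean of log-rates
    var = sum((x - mu) ** 2 for x in logs) / len(logs)
    median = math.exp(mu)                           # geometric mean of the rates
    error_factor = math.exp(percentile_z * math.sqrt(var))
    return median, error_factor
```

Widely scattered source estimates produce a large error factor, which is exactly the dissimilarity penalty the abstract warns about.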
Non-invasive terahertz imaging of tissue water content for flap viability assessment
Bajwa, Neha; Au, Joshua; Jarrahy, Reza; Sung, Shijun; Fishbein, Michael C.; Riopelle, David; Ennis, Daniel B.; Aghaloo, Tara; St. John, Maie A.; Grundfest, Warren S.; Taylor, Zachary D.
2016-01-01
Accurate and early prediction of tissue viability is the most significant determinant of tissue flap survival in reconstructive surgery. Perturbation in tissue water content (TWC) is a generic component of the tissue response to such surgeries, and, therefore, may be an important diagnostic target for assessing the extent of flap viability in vivo. We have previously shown that reflective terahertz (THz) imaging, a non-ionizing technique, can generate spatially resolved maps of TWC in superficial soft tissues, such as cornea and wounds, on the order of minutes. Herein, we report the first in vivo pilot study to investigate the utility of reflective THz TWC imaging for early assessment of skin flap viability. We obtained longitudinal visible and reflective THz imagery comparing 3 bipedicled flaps (i.e. survival model) and 3 fully excised flaps (i.e. failure model) in the dorsal skin of rats over a postoperative period of 7 days. While visual differences between both models manifested 48 hr after surgery, statistically significant (p < 0.05, independent t-test) local differences in TWC contrast were evident in THz flap image sets as early as 24 hr. Excised flaps, histologically confirmed as necrotic, demonstrated a significant, yet localized, reduction in TWC in the flap region compared to non-traumatized skin. In contrast, bipedicled flaps, histologically verified as viable, displayed mostly uniform, unperturbed TWC across the flap tissue. These results indicate the practical potential of THz TWC sensing to accurately predict flap failure 24 hours earlier than clinical examination. PMID:28101431
Constructing the Best Reliability Data for the Job
NASA Technical Reports Server (NTRS)
Kleinhammer, R. K.; Kahn, J. C.
2014-01-01
Modern business and technical decisions are based on the results of analyses. When considering assessments using "reliability data", the concern is how long a system will continue to operate as designed. Generally, the results are only as good as the data used. Ideally, a large set of pass/fail tests or observations to estimate the probability of failure of the item under test would produce the best data. However, this is a costly endeavor if used for every analysis and design. Developing specific data is costly and time consuming. Instead, analysts rely on available data to assess reliability. Finding data relevant to the specific use and environment for any project is difficult, if not impossible. Instead, we attempt to develop the "best" or composite analog data to support our assessments. One method used incorporates processes for reviewing existing data sources and identifying the available information based on similar equipment, then using that generic data to derive an analog composite. Dissimilarities in equipment descriptions, environment of intended use, quality and even failure modes impact the "best" data incorporated in an analog composite. Once developed, this composite analog data provides a "better" representation of the reliability of the equipment or component and can be used to support early risk or reliability trade studies, or analytical models to establish the predicted reliability data points. Data that is more representative of reality and more project specific would provide more accurate analysis, and hopefully a better final decision.
Kaur, Paramjeet; Jiang, Xiaojian; Stier, Ethan
2017-01-01
The US FDA's rule on "Requirements for Submission of Bioequivalence Data" requiring submission of all bioequivalence (BE) studies conducted on the same formulation of the drug product submitted for approval was published in the Federal Register in January 2009. With the publication of this rule, we evaluated the impact of data from non-pivotal BE studies in assessing BE and identified the reasons for failed in vivo BE studies for generic oral delayed-release (DR) drug products only. We searched the Agency databases from January 2009 to December 2016 to identify Abbreviated New Drug Applications (ANDAs) submitted for DR drug products containing non-pivotal BE studies. Out of 202 ANDAs, 43 ANDAs contained 102 non-pivotal BE studies. Forty-nine non-pivotal BE studies were conducted on the to-be-marketed (TBM) formulation and 53 were conducted on formulations different from the TBM formulation. These experimental formulations primarily differed in the ratio of components of the enteric coating layer and/or amount (i.e., %w/w) of enteric coating layer. Of the 49 non-pivotal BE studies conducted on the TBM formulation, 41 failed to meet the BE acceptance criteria. The majority of failed non-pivotal BE studies on the TBM DR generic products had insufficient power, which was expected as these studies are exploratory in nature and not designed to have adequate power to pass the BE statistical criteria. In addition, among the failed non-pivotal BE studies on the TBM DR generic products, the most commonly failing pharmacokinetic parameter was Cmax. The data from these non-pivotal BE studies indicate that inadequate BE study design can lead to failure of the BE on the same formulation. Also, the non-pivotal BE studies on formulations different from the TBM formulation help us link the formulation design to the product performance in vivo.
Inflated medicine prices in Vietnam: a qualitative study.
Nguyen, Tuan Anh; Knight, Rosemary; Mant, Andrea; Razee, Husna; Brooks, Geoffrey; Dang, Thu Ha; Roughead, Elizabeth Ellen
2017-06-01
One third of the world's population lacks regular access to essential medicines partly because of the high cost of medicines. In Vietnam, the cost to patients of medicines was 47 times the international reference price for originator brands and 11 times the price for generic equivalents in the public sector. In this article, we report the results of a qualitative study conducted to identify the principal reasons for inflated medicine prices in Vietnam.Between April 2008 and December 2009, 29 semi-structured interviews were conducted with staff from pharmaceutical companies, private pharmacies, the Ministry of Health, and the Ministry of Finance of Vietnam. Study participants were recruited using a combination of purposive and snowball sampling techniques. Interviews were recorded, transcribed and coded using NVivo8® software and analyzed using a framework of structure-conduct-performance (SCP).Participants attributed high prices of originator medicines to a monopoly of supply. The prices of generic medicines were also considered to be excessive, reportedly due to the need to recoup the cost of financial inducements paid to prescribers and procurement officers. These inducements constituted a dominant cost component of the end price of generic medicines. Poor market intelligence about current world prices, as well as failure to achieve economies of scale because of unwarranted duplication in pharmaceutical production and distribution system were also factors contributing to high prices. This was reported to be further compounded by multiple layers in the supply chain and unregulated retail mark-ups.To address these problems a multifaceted approach is needed encompassing policy and legislative responses. Policy options include establishing effective monitoring of medicine quality assurance, procurement, distribution and use. Rationalization of the domestic pharmaceutical production and distribution system to achieve economies of scale is also required. 
Appropriate legal responses include collaborations with the justice and law enforcement sectors to enforce existing laws.
NASA Technical Reports Server (NTRS)
Eck, M.; Mukunda, M.
1988-01-01
Given here are predictions of fragment velocities and azimuths resulting from the Space Transportation System Solid Rocket Motor range destruct, or random failure occurring at any time during the 120 seconds of Solid Rocket Motor burn. Results obtained using the analytical methods described showed good agreement between predictions and observations for two specific events. It was shown that these methods have good potential for use in predicting the fragmentation process of a number of generically similar casing systems. It was concluded that coupled Eulerian-Lagrangian calculational methods of the type described here provide a powerful tool for predicting Solid Rocket Motor response.
NASA Technical Reports Server (NTRS)
Hoffler, Keith D.; Fears, Scott P.; Carzoo, Susan W.
1997-01-01
A generic airplane model concept was developed to allow configurations with various agility, performance, handling qualities, and pilot vehicle interface to be generated rapidly for piloted simulation studies. The simple concept allows stick shaping and various stick command types or modes to drive an airplane with both linear and nonlinear components. Output from the stick shaping goes to linear models or a series of linear models that can represent an entire flight envelope. The generic model also has provisions for control power limitations, a nonlinear feature. Therefore, departures from controlled flight are possible. Note that only loss of control is modeled; the generic airplane does not accurately model post-departure phenomena. The model concept is presented herein, along with four example airplanes. Agility was varied across the four example airplanes without altering specific excess energy or significantly altering handling qualities. A new feedback scheme to provide angle-of-attack cueing to the pilot, while using a pitch rate command system, was implemented and tested.
Health monitoring display system for a complex plant
Ridolfo, Charles F. [Bloomfield, CT]; Harmon, Daryl L. [Enfield, CT]; Colin, Dreyfuss [Enfield, CT]
2006-08-08
A single-page enterprise-wide level display provides a comprehensive, readily understood representation of the overall health status of a complex plant. Color-coded failure domains allow rapid, intuitive recognition of component failure status. A three-tier hierarchy of displays provides details on the health status of the components and systems shown on the enterprise-wide level display, supporting a logical drill-down from the health status of sub-components on Tier 1, to expected faults of the sub-components on Tier 2, to specific information on expected sub-component failures on Tier 3.
Composite Interlaminar Shear Fracture Toughness, G(sub 2c): Shear Measurement or Sheer Myth?
NASA Technical Reports Server (NTRS)
O'Brien, T. Kevin
1997-01-01
The concept of G(sub 2c) as a measure of the interlaminar shear fracture toughness of a composite material is critically examined. In particular, it is argued that the apparent G(sub 2c) as typically measured is inconsistent with the original definition of shear fracture. It is shown that interlaminar shear failure actually consists of tension failures in the resin-rich layers between plies, followed by the coalescence of ligaments created by these failures, and not the sliding of two planes relative to one another that is assumed in fracture mechanics theory. Several strain energy release rate solutions are reviewed for delamination in composite laminates and structural components where failures have been experimentally documented. Failures typically occur at a location where the mode I component accounts for at least one half of the total G at failure. Hence, it is the mode I and mixed-mode interlaminar fracture toughness data that will be most useful in predicting delamination failure in composite components in service. Although apparent G(sub 2c) measurements may prove useful for completeness in generating mixed-mode criteria, the accuracy of these measurements may have very little influence on the prediction of mixed-mode failures in most structural components.
Linearization instability for generic gravity in AdS spacetime
NASA Astrophysics Data System (ADS)
Altas, Emel; Tekin, Bayram
2018-01-01
In general relativity, perturbation theory about a background solution fails if the background spacetime has a Killing symmetry and a compact spacelike Cauchy surface. This failure, dubbed linearization instability, shows itself as non-integrability of the perturbative infinitesimal deformation to a finite deformation of the background. Namely, the linearized field equations have spurious solutions which cannot be obtained from the linearization of exact solutions. In practice, one can show the failure of the linear perturbation theory by showing that a certain quadratic (integral) constraint on the linearized solutions is not satisfied. For non-compact Cauchy surfaces the situation is different: Minkowski space, for example, which has a non-compact Cauchy surface, is linearization stable. Here we study linearization instability in generic metric theories of gravity where Einstein's theory is modified with additional curvature terms. We show that, unlike the case of general relativity, some modified theories show linearization instability about their anti-de Sitter backgrounds even for non-compact Cauchy surfaces. Recent D-dimensional critical and three-dimensional chiral gravity theories are two such examples. This observation sheds light on the paradoxical behavior of vanishing conserved charges (mass, angular momenta) for non-vacuum solutions, such as black holes, in these theories.
Connolly, Martin J; Kenealy, Timothy; Barber, P Alan; Carswell, Peter; Clinton, Janet; Dyall, Lorna; Devlin, Gerard; Doughty, Robert N; Kerse, Ngaire; Kolbe, John; Lawrenson, Ross; Moffitt, Allan; Sheridan, Nicolette
2011-10-14
Chronic illness is the leading cause of morbidity, mortality, and inequitable health outcomes in New Zealand. The ABCCNZ Stocktake aimed to identify the extent of evidence-based long-term conditions management practices for stroke, cardiovascular disease, chronic obstructive pulmonary disease, and congestive heart failure in New Zealand's District Health Boards (DHBs). Eleven 'dimensions' of care for long-term conditions, identified by literature review and confirmed at workshops with long-term conditions professionals, formed the basis of the Stocktake of all 21 DHBs. It comprised two questionnaires: a generic component capturing perceptions of practice, and a disease-specific component assessing service provision. Fifteen DHBs completed all or parts of the questionnaires. Data accrual was completed in July 2008. Although most DHBs had developed long-term conditions management strategies to a moderate degree, there was considerable variability of practice between DHBs. DHBs thought their PHOs had developed strategies in some areas to a low to moderate level, though cardiovascular disease provision rated more highly. Regarding disease-specific services, larger DHBs had greater long-term conditions management provision not only of tertiary services, but also of standard care, leadership, self-management, case management, and audit. There is considerable variability in perceptions of long-term conditions management service provision across DHBs. In many instances, variability in actual disease-specific service provision appears to relate to DHB size.
Culture and error in space: implications from analog environments.
Helmreich, R L
2000-09-01
An ongoing study investigating national, organizational, and professional cultures in aviation and medicine is described. Survey data from 26 nations on 5 continents show highly significant national differences regarding appropriate relationships between leaders and followers, in group vs. individual orientation, and in values regarding adherence to rules and procedures. These findings replicate earlier research on dimensions of national culture. Data collected also isolate significant operational issues in multi-national flight crews. While there are no better or worse cultures, these cultural differences have operational implications for the way crews function in an international space environment. The positive professional cultures of pilots and physicians exhibit a high enjoyment of the job and professional pride. However, a negative component was also identified characterized by a sense of personal invulnerability regarding the effects of stress and fatigue on performance. This misperception of personal invulnerability has operational implications such as failures in teamwork and increased probability of error. A second component of the research examines team error in operational environments. From observational data collected during normal flight operations, new models of threat and error and their management were developed that can be generalized to operations in space and other socio-technological domains. Five categories of crew error are defined and their relationship to training programs in team performance, known generically as Crew Resource Management, is described. The relevance of these data for future spaceflight is discussed.
Mass and Reliability Source (MaRS) Database
NASA Technical Reports Server (NTRS)
Valdenegro, Wladimir
2017-01-01
The Mass and Reliability Source (MaRS) Database consolidates component mass and reliability data for all Orbital Replacement Units (ORUs) on the International Space Station (ISS) into a single database. It was created to help engineers develop a parametric model that relates hardware mass and reliability. MaRS supplies relevant failure data at the lowest possible component level while providing support for risk, reliability, and logistics analysis. Random-failure data is usually linked to the ORU assembly. MaRS uses this data to identify and display the lowest possible component failure level. As seen in Figure 1, the failure point is identified to the lowest level: Component 2.1. This is useful for efficient planning of spare supplies, supporting long-duration crewed missions, allowing quicker trade studies, and streamlining diagnostic processes. MaRS is composed of information from various databases: MADS (operating hours), VMDB (indentured part lists), and ISS PART (failure data). This information is organized in Microsoft Excel and accessed through a program made in Microsoft Access (Figure 2). The focus of the Fall 2017 internship tour was to identify the components that were the root cause of failure from the given random-failure data, develop a taxonomy for the database, and attach material headings to the component list. Secondary objectives included verifying the integrity of the data in MaRS, eliminating any part discrepancies, and generating documentation for future reference. Due to the nature of the random-failure data, data mining had to be done manually, without the assistance of an automated program, to ensure positive identification.
Development of STS/Centaur failure probabilities liftoff to Centaur separation
NASA Technical Reports Server (NTRS)
Hudson, J. M.
1982-01-01
The results of an analysis to determine STS/Centaur catastrophic vehicle response probabilities for the phases of vehicle flight from STS liftoff to Centaur separation from the Orbiter are presented. The analysis considers only category one component failure modes as contributors to the vehicle response mode probabilities. The relevant component failure modes are grouped into one of fourteen categories of potential vehicle behavior. By assigning failure rates to each component, for each of its failure modes, the STS/Centaur vehicle response probabilities in each phase of flight can be calculated. The results of this study will be used in a DOE analysis to ascertain the hazard from carrying a nuclear payload on the STS.
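The grouping-and-aggregation step this abstract describes (assigning component failure modes to vehicle response categories, then combining their rates) can be sketched as follows. This is a hedged illustration, not the study's actual model: component names, category labels, and rates are invented, and failure modes are assumed independent.

```python
import math

# Invented component failure modes mapped to vehicle response categories,
# with per-phase failure probabilities (illustrative values only).
failure_modes = [
    ("engine_turbopump_seizure", "loss_of_thrust",  1.2e-4),
    ("engine_valve_fail_open",   "loss_of_thrust",  3.0e-5),
    ("tank_overpressure",        "explosion",       5.0e-6),
    ("avionics_power_loss",      "loss_of_control", 8.0e-5),
]

def response_probabilities(modes):
    """P(category) = 1 - prod(1 - p_i) over the modes assigned to that
    category, assuming the component failures are independent."""
    by_category = {}
    for _, category, p in modes:
        by_category.setdefault(category, []).append(p)
    return {cat: 1.0 - math.prod(1.0 - p for p in ps)
            for cat, ps in by_category.items()}
```

For rare failures the category probability is close to the simple sum of its mode probabilities; the product form matters only when individual probabilities are large.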
An exploratory survey of methods used to develop measures of performance
NASA Astrophysics Data System (ADS)
Hamner, Kenneth L.; Lafleur, Charles A.
1993-09-01
Nonmanufacturing organizations are being challenged to provide high-quality products and services to their customers, with an emphasis on continuous process improvement. Measures of performance, referred to as metrics, can be used to foster process improvement. The application of performance measurement to nonmanufacturing processes can be very difficult. This research explored methods used to develop metrics in nonmanufacturing organizations. Several methods were formally defined in the literature, and the researchers used a two-step screening process to determine that the OMB Generic Method was the most likely to produce high-quality metrics. The OMB Generic Method was then used to develop metrics. A few other metric development methods were found in use at nonmanufacturing organizations. The researchers interviewed participants in metric development efforts to determine their satisfaction and to have them identify the strengths and weaknesses of, and recommended improvements to, the metric development methods used. Analysis of participants' responses allowed the researchers to identify the key components of a sound metric development method. Those components were incorporated into a proposed metric development method that is based on the OMB Generic Method and should be more likely to produce high-quality metrics that result in continuous process improvement.
An evaluation of a real-time fault diagnosis expert system for aircraft applications
NASA Technical Reports Server (NTRS)
Schutte, Paul C.; Abbott, Kathy H.; Palmer, Michael T.; Ricks, Wendell R.
1987-01-01
A fault monitoring and diagnosis expert system called Faultfinder was conceived and developed to detect and diagnose in-flight failures in an aircraft. Faultfinder is an automated intelligent aid whose purpose is to assist the flight crew in fault monitoring, fault diagnosis, and recovery planning. The present implementation of this concept performs monitoring and diagnosis for a generic aircraft's propulsion and hydraulic subsystems. This implementation is capable of detecting and diagnosing failures of known and unknown (i.e., unforeseeable) type in a real-time environment. Faultfinder uses both rule-based and model-based reasoning strategies which operate on causal, temporal, and qualitative information. A preliminary evaluation is made of the diagnostic concepts implemented in Faultfinder. The evaluation used actual aircraft accident and incident cases which were simulated to assess the effectiveness of Faultfinder in detecting and diagnosing failures. Results of this evaluation, together with the description of the current Faultfinder implementation, are presented.
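The hybrid strategy described for Faultfinder, rule-based matching for failures of known type with a model-based fallback for unknown ones, can be sketched roughly as follows. This is not NASA's implementation: the sensor names, rules, and thresholds are all invented for illustration.

```python
# Hypothetical signatures of known failures (rule-based layer).
KNOWN_FAULTS = {
    "fuel_leak":      lambda s: s["fuel_flow"] > 1.2 * s["commanded_flow"],
    "hydraulic_loss": lambda s: s["hyd_pressure"] < 1000.0,
}

def diagnose(sensors, model_prediction, tolerance=0.1):
    """Return diagnosed fault labels for one snapshot of sensor readings."""
    hits = [name for name, rule in KNOWN_FAULTS.items() if rule(sensors)]
    if hits:
        return hits
    # Model-based fallback: a large residual between the physical model's
    # predicted engine pressure ratio and the observed value flags an
    # anomaly of unknown type even when no known rule fires.
    residual = abs(sensors["epr"] - model_prediction) / max(model_prediction, 1e-9)
    return ["unknown_anomaly"] if residual > tolerance else []
```

The division of labor is the point: rules are fast and specific for anticipated failures, while the model residual catches failures the rule author never foresaw.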
ERIC Educational Resources Information Center
Al-Ali, Mohammed N.
2006-01-01
This study reports an investigation of the genre components and pragmatic strategies of letters of applications written by Jordanian Arabic--English bilinguals. Specifically it is set up to trace how far novice non-native speakers of English are able to utilise the generic components and politeness strategies of the target language that strongly…
Intelligence, Surveillance, and Reconnaissance Fusion for Coalition Operations
2008-07-01
classification of the targets of interest. The MMI features extracted in this manner have two properties that provide a sound justification for...are generalizations of well-known feature extraction methods such as Principal Components Analysis (PCA) and Independent Component Analysis (ICA...augment (without degrading performance) a large class of generic fusion processes. Ontologies Classifications Feature extraction Feature analysis
NASA Technical Reports Server (NTRS)
Abramson, Michael; Refai, Mohamad; Santiago, Confesor
2017-01-01
The paper describes the Generic Resolution Advisor and Conflict Evaluator (GRACE), a novel alerting and guidance algorithm that combines flexibility, robustness, and computational efficiency. GRACE is generic since it was designed without any assumptions regarding temporal or spatial scales, aircraft performance, or its sensor and communication systems. Therefore, GRACE was adopted as a core component of the Java Architecture for Detect-And-Avoid (DAA) Extensibility and Modeling, developed by NASA as a research and modeling tool for Unmanned Aerial Systems Integration in the National Airspace System (NAS). GRACE has been used in a number of real-time and fast-time experiments supporting evolving requirements of DAA research, including parametric studies, NAS-wide simulations, human-in-the-loop experiments, and live flight tests.
Patents and profits: A disparity of manufacturing margins in the tenofovir value chain.
Walwyn, David
2013-03-01
Registered in 2001, tenofovir disoproxil fumarate (TDF) has quickly become a mainstay of first-line regimens for the treatment of HIV. Initially the product was available only in developed countries, at a cost of US$5 000 per person per year (ppy); Gilead's Access Programme (GAP) has since extended its use to 2.4 million patients in low- and middle-income countries. The programme has two components: distribution of the branded product at reduced prices, and licensing partnerships with generic manufacturers. The licensing partnerships now supply 75% of the market by volume, at a treatment cost of US$57 ppy (1% of the branded cost). From Gilead's perspective, GAP must be considered a huge success. It has enabled the company to maintain high prices in developed countries whilst reducing its input costs and deflecting criticism of its failure to provide essential medicines for the poor, which risked the possibility of compulsory licensing. Over the period 2001 to 2011, TDF in its various forms has generated for Gilead more than US$31 billion in revenue at a gross margin of 80%, equivalent to a gross profit of US$25 billion. Analysis of the TDF value chain, from preparation of the active pharmaceutical ingredient (API) to sale of the formulated product, shows that manufacturing margins are highly skewed in favour of the originator, the latter's profit being US$3.2 billion vs. US$4 million for API manufacturers and US$39 million for formulators (2011). The data argue for a more rational approach to drug pricing, including possible regulation in developed countries and more sustainable margins for the generic producers.
NASA Technical Reports Server (NTRS)
Kennedy, Barbara J.
2004-01-01
The purpose of this study is to compare the current Space Shuttle Ground Support Equipment (GSE) infrastructure with the proposed GSE infrastructure upgrade modification. The methodology includes analyzing the first prototype installation equipment at Launch Pad B, called the "Pathfinder". The study begins by comparing the failure rate of the current components associated with the Hardware Interface Module (HIM) at the Kennedy Space Center to the failure rate of the new Pathfinder components. Quantitative data were gathered specifically on HIM components and on the Pad B hypergolic fuel and hypergolic oxidizer facility areas, which have the upgraded Pathfinder equipment installed. The proposed upgrades include utilizing industrial control modules, software, and a fiber-optic network. The results of this study provide evidence that there is a significant difference in the failure rates of the two studied infrastructure equipment components. There is also evidence that the support staffing for the two infrastructure systems is not equal. A recommendation to continue with future upgrades is based on a significant reduction of failures in the newly installed ground system components.
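The failure-rate comparison at the core of this study can be illustrated with a standard normal-approximation test on Poisson failure counts. The counts and operating hours below are invented, and the abstract does not specify which statistical test the study actually used; this is a generic sketch of one common choice.

```python
import math

def rate_ratio_z(failures_a, hours_a, failures_b, hours_b):
    """Z statistic for H0: equal failure rates in systems A and B,
    based on the log rate ratio of two Poisson counts with known
    exposures. Requires at least one failure in each system."""
    rate_a = failures_a / hours_a
    rate_b = failures_b / hours_b
    se = math.sqrt(1.0 / failures_a + 1.0 / failures_b)
    return math.log(rate_a / rate_b) / se
```

With equal exposures, 40 legacy failures against 10 Pathfinder failures gives |z| well above 1.96, i.e. a significant difference at the 5% level under this approximation.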
Solder Reflow Failures in Electronic Components During Manual Soldering
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander; Greenwell, Chris; Felt, Frederick
2008-01-01
This viewgraph presentation reviews solder reflow failures in electronic components that occur during manual soldering. It discusses the specifics of manual-soldering-induced failures in plastic devices with internal solder joints. The failure analysis revealed that molten solder had squeezed up to the die surface along the die/molding compound interface, and that the dice were not protected with glassivation, allowing solder to short the gate and source to the drain contact. The failure analysis concluded that the parts failed due to overheating during manual soldering.
21 CFR 892.5700 - Remote controlled radionuclide applicator system.
Code of Federal Regulations, 2011 CFR
2011-04-01
... source into the body or to the surface of the body for radiation therapy. This generic type of device may include patient and equipment supports, component parts, treatment planning computer programs, and...
Bouaud, Jacques; Guézennec, Gilles; Séroussi, Brigitte
2018-01-01
The integration of clinical information models and termino-ontological models into a unique ontological framework is highly desirable for it facilitates data integration and management using the same formal mechanisms for both data concepts and information model components. This is particularly true for knowledge-based decision support tools that aim to take advantage of all facets of semantic web technologies in merging ontological reasoning, concept classification, and rule-based inferences. We present an ontology template that combines generic data model components with (parts of) existing termino-ontological resources. The approach is developed for the guideline-based decision support module on breast cancer management within the DESIREE European project. The approach is based on the entity attribute value model and could be extended to other domains.
Generic immunosuppression in solid organ transplantation: systematic review and meta-analysis
Molnar, Amber O; Fergusson, Dean; Tsampalieros, Anne K; Bennett, Alexandria; Fergusson, Nicholas; Ramsay, Timothy
2015-01-01
Objective To compare the clinical efficacy and bioequivalence of generic immunosuppressive drugs in patients with solid organ transplants. Design Systematic review and meta-analysis of all studies comparing generic with innovator immunosuppressive drugs. Data sources Medline and Embase from 1980 to September 2014. Review methods A literature search was performed for all studies comparing a generic to an innovator immunosuppressive drug in solid organ transplantation. Two reviewers independently extracted data and assessed quality of studies. Meta-analyses of prespecified outcomes were performed when deemed appropriate. Outcomes included patient survival, allograft survival, acute rejection, adverse events, and bioequivalence. Results 1679 citations were screened, of which 50 studies met eligibility criteria (17 randomized trials, 15 non-randomized interventional studies, and 18 observational studies). Generics were compared with Neoral (cyclosporine) (32 studies), Prograf (tacrolimus) (12 studies), and Cellcept (mycophenolate mofetil) (six studies). Pooled analysis of randomized controlled trials in patients with kidney transplants that reported bioequivalence criteria showed that Neoral (two studies) and Prograf (three studies) were not bioequivalent with generic preparations according to criteria of the European Medicines Agency. The single Cellcept trial also did not meet bioequivalence. Acute rejection was rare but did not differ between groups. For Neoral, the pooled Peto odds ratio was 1.23 (95% confidence interval 0.64 to 2.36) for kidney randomized controlled trials and 0.66 (0.40 to 1.08) for observational studies. For kidney observational studies, the pooled Peto odds ratios were 0.98 (0.37 to 2.60) for Prograf and 0.49 (0.09 to 2.56) for Cellcept. Meta-analyses for non-renal solid organ transplants were not performed because of a lack of data. There were insufficient data reported on patient or graft survival.
Pooling of results was limited by inconsistent study methods and reporting of outcomes. Many studies did not report standard criteria used to determine bioequivalence. While rates of acute rejection seemed similar and were relatively rare, few studies were designed to properly compare clinical outcomes. Most studies had short follow-up times and included stable patients without a history of rejection. Conclusions High quality data showing bioequivalence and clinical efficacy of generic immunosuppressive drugs in patients with transplants are lacking. Given the serious consequences of rejection and allograft failure, well designed studies on bioequivalence and safety of generic immunosuppression in transplant recipients are needed. PMID:26101226
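For readers unfamiliar with the pooled statistic reported above, the single-table Peto one-step odds ratio can be computed as follows. This is a generic sketch with invented counts, not the review's data or its pooling code.

```python
import math

def peto_or(events_t, total_t, events_c, total_c):
    """Peto one-step odds ratio for one 2x2 table:
    ln(OR) = (O - E) / V, where O is the observed event count in the
    treatment arm and E, V are its expectation and hypergeometric
    variance under the null of no group difference."""
    n = total_t + total_c          # total patients
    m = events_t + events_c        # total events
    expected = m * total_t / n
    variance = m * (n - m) * total_t * total_c / (n ** 2 * (n - 1))
    return math.exp((events_t - expected) / variance)
```

The method works well for rare events such as acute rejection; with 3 events per 100 in one arm and 6 per 100 in the other, the odds ratio lands near 0.5.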
Application of Function-Failure Similarity Method to Rotorcraft Component Design
NASA Technical Reports Server (NTRS)
Roberts, Rory A.; Stone, Robert E.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)
2002-01-01
Performance and safety are the top concerns of high-risk aerospace applications at NASA. Eliminating or reducing performance and safety problems can be achieved with a thorough understanding of the potential failure modes in the designs that lead to these problems. The majority of techniques use prior knowledge and experience as well as Failure Modes and Effects Analysis (FMEA) to determine the potential failure modes of aircraft. During the design of aircraft, a general technique is needed to ensure that every potential failure mode is considered, while avoiding spending time on improbable failure modes. In this work, this is accomplished by mapping failure modes to specific components, which are described by their functionality. The failure modes are then linked to the basic functions that are carried within the components of the aircraft. Using this technique, designers can examine the basic functions and select appropriate analyses to eliminate or design out the potential failure modes. The fundamentals of this method were previously introduced for a simple rotating machine test rig with basic functions that are common to a rotorcraft. In this paper, the technique is applied to the engine and power train of a rotorcraft, using failures and functions obtained from accident reports and engineering drawings.
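The function-to-failure-mode mapping idea can be sketched as a simple lookup: describe a component by the elemental functions it carries, then take the union of failure modes historically observed against each function. The functions and failure modes below are invented placeholders, not the paper's actual knowledge base.

```python
# Hypothetical function-failure knowledge base, as might be compiled from
# accident reports (illustrative entries only).
FUNCTION_FAILURES = {
    "transmit_torque": {"shaft_fatigue", "spline_wear"},
    "store_energy":    {"fatigue", "overload_fracture"},
    "regulate_flow":   {"clogging", "seal_leak"},
}

def candidate_failure_modes(component_functions):
    """Union of failure modes linked to each function a component carries;
    unknown functions contribute nothing."""
    modes = set()
    for fn in component_functions:
        modes |= FUNCTION_FAILURES.get(fn, set())
    return modes
```

A designer then examines only this candidate set for a new component, rather than the full catalog of possible failure modes.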
Service Life Extension of the Propulsion System of Long-Term Manned Orbital Stations
NASA Technical Reports Server (NTRS)
Kamath, Ulhas; Kuznetsov, Sergei; Spencer, Victor
2014-01-01
One of the critical non-replaceable systems of a long-term manned orbital station is the propulsion system. Since the propulsion system operates beginning with the launch of station elements into orbit, its service life determines the service life of the station overall. Weighing almost a million pounds, the International Space Station (ISS) is about four times as large as the Russian space station Mir and about five times as large as the U.S. Skylab. Constructed over a span of more than a decade with the help of over 100 space flights, the elements and modules of the ISS provide more research space than any spacecraft ever built. Originally envisaged for a service life of fifteen years, this Earth-orbiting laboratory has been in orbit since 1998. Some elements launched later in the assembly sequence were not yet built when the first elements were placed in orbit. Hence, some of the early modules launched at the inception of the program were already nearing the end of their design life when the ISS was finally ready and operational. To maximize the return on global investments in the ISS, it is essential for the valuable research on the ISS to continue as long as the station can be sustained safely in orbit. This paper describes the work performed to extend the service life of the ISS propulsion system. A system comprises many components with varying failure rates. Reliability of a system is the probability that it will perform its intended function under encountered operating conditions, for a specified period of time. As we are interested in finding out how reliable a system would be in the future, reliability expressed as a function of time provides valuable insight.
In a hypothetical bathtub shaped failure rate curve, the failure rate, defined as the number of failures per unit time that a currently healthy component will suffer in a given future time interval, decreases during infant-mortality period, stays nearly constant during the service life and increases at the end when the design service life ends and wear-out phase begins. However, the component failure rates do not remain constant over the entire cycle life. The failure rate depends on various factors such as design complexity, current age of the component, operating conditions, severity of environmental stress factors, etc. Development, qualification and acceptance test processes provide rigorous screening of components to weed out imperfections that might otherwise cause infant mortality failures. If sufficient samples are tested to failure, the failure time versus failure quantity can be analyzed statistically to develop a failure probability distribution function (PDF), a statistical model of the probability of failure versus time. Driven by cost and schedule constraints however, spacecraft components are generally not tested in large numbers. Uncertainties in failure rate and remaining life estimates increase when fewer units are tested. To account for this, spacecraft operators prefer to limit useful operations to a period shorter than the maximum demonstrated service life of the weakest component. Running each component to its failure to determine the maximum possible service life of a system can become overly expensive and impractical. Spacecraft operators therefore, specify the required service life and an acceptable factor of safety (FOS). The designers use these requirements to limit the life test duration. Midway through the design life, when benefits justify additional investments, supplementary life test may be performed to demonstrate the capability to safely extend the service life of the system. 
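The bathtub behavior described above is commonly modeled piecewise with Weibull hazard functions, where the shape parameter selects the regime. The following is a generic illustration of that standard parameterization, not the paper's model; parameter values are arbitrary.

```python
def weibull_hazard(t, beta, eta):
    """Weibull hazard rate h(t) = (beta/eta) * (t/eta)**(beta - 1).
    beta < 1: decreasing hazard (infant mortality);
    beta == 1: constant hazard (useful life, exponential case);
    beta > 1: increasing hazard (wear-out)."""
    return (beta / eta) * (t / eta) ** (beta - 1.0)
```

Stitching three such segments together, one per regime, reproduces the bathtub shape, and fitting beta from time-to-failure data is one way to build the failure probability distribution function the paragraph mentions.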
An innovative approach is required to evaluate the entire system without going through an elaborate test program of propulsion system elements. Evaluating every component through a brute-force test program would be a cost-prohibitive and time-consuming endeavor. ISS propulsion system components were designed and built decades ago, and there are no representative ground test articles for some of the components. A 'test everything' approach would require manufacturing new test articles. The paper outlines some of the techniques used for selective testing, in which candidate components are chosen based on failure mode effects analysis, system-level impacts, hazard analysis, etc. The type of testing required for extending the service life depends on the design and criticality of the component, its failure modes and failure mechanisms, the life cycle margin provided by the original certification, the operational and environmental stresses encountered, etc. When the specific failure mechanism being considered, and the underlying relationship of that mechanism to the stresses applied in the test, can be correlated by supporting analysis, the time and effort required for conducting life extension testing can be significantly reduced. Exposure to corrosive propellants over long periods of time, for instance, leads to specific failure mechanisms in several components used in the propulsion system. Using the Arrhenius model, which is tied to chemically dependent failure mechanisms such as corrosion or chemical reactions, it is possible to subject carefully selected test articles to accelerated life testing. The Arrhenius model reflects the proportional relationship between the time to failure of a component and the exponential of the inverse of the absolute temperature acting on the component. The acceleration factor is used to perform tests at higher stresses, allowing direct correlation between the time to failure at a high test temperature and that expected at the temperatures of actual use.
As long as the temperatures are such that new failure mechanisms are not introduced, this becomes a very useful method for testing to failure a relatively small sample of items for a much shorter amount of time. In this article, based on the example of the propulsion system of the first ISS module Zarya, theoretical approaches and practical activities of extending the service life of the propulsion system are reviewed with the goal of determining the maximum duration of its safe operation.
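The Arrhenius acceleration factor invoked above has a standard closed form. A minimal sketch follows; the activation energy value used in the usage note is purely illustrative, and the formula is valid only while the elevated temperature introduces no new failure mechanisms, as the text cautions.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(t_use_k, t_test_k, ea_ev):
    """Acceleration factor AF = exp((Ea/k) * (1/T_use - 1/T_test)),
    with temperatures in kelvin and activation energy Ea in eV.
    One hour at the test temperature ages the article like AF hours
    at the use temperature."""
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_test_k))
```

For example, with an (assumed) activation energy of 0.7 eV, testing at 85 C instead of a 25 C use temperature accelerates the chemistry by roughly two orders of magnitude, which is why a short accelerated test can stand in for years of propellant exposure.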
Selective inhibition of a multicomponent response can be achieved without cost
Westrick, Zachary; Ivry, Richard B.
2014-01-01
Behavioral flexibility frequently requires the ability to modify an on-going action. In some situations, optimal performance requires modifying some components of an on-going action without interrupting other components of that action. This form of control has been studied with the selective stop-signal task, in which participants are instructed to abort only one movement of a multicomponent response. Previous studies have shown a transient disruption of the nonaborted component, suggesting limitations in our ability to use selective inhibition. This cost has been attributed to a structural limitation associated with the recruitment of a cortico-basal ganglia pathway that allows for the rapid inhibition of action but operates in a relatively generic manner. Using a model-based approach, we demonstrate that, with a modest amount of training and highly compatible stimulus-response mappings, people can perform a selective-stop task without any cost on the nonaborted component. Prior reports of behavioral costs in selective-stop tasks reflect, at least in part, a sampling bias in the method commonly used to estimate such costs. These results suggest that inhibition can be selectively controlled and present a challenge for models of inhibitory control that posit the operation of generic processes. PMID:25339712
Reliability analysis based on the losses from failures.
Todinov, M T
2006-04-01
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected loss given failure is a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed.
For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in the case of a system consisting of blocks arranged in series is achieved by determining for each block individually the reliabilities of the components in the block that minimize the sum of the capital costs, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages over cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different numbers of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
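The linear-combination rule for expected losses given failure can be illustrated with a short sketch (the failure-mode probabilities and costs are invented for the example):

```python
def expected_loss_given_failure(mode_probs, mode_losses):
    """Expected loss given failure: the loss for each mutually exclusive
    failure mode, weighted by the conditional probability that the mode
    initiates the failure (probabilities must sum to 1)."""
    assert abs(sum(mode_probs) - 1.0) < 1e-9
    return sum(p * c for p, c in zip(mode_probs, mode_losses))

# Hypothetical component with three mutually exclusive failure modes.
loss = expected_loss_given_failure([0.5, 0.3, 0.2], [1000.0, 5000.0, 20000.0])
# 0.5*1000 + 0.3*5000 + 0.2*20000 = 6000.0
```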
Balancing generality and specificity in component-based reuse
NASA Technical Reports Server (NTRS)
Eichmann, David A.; Beck, Jon
1992-01-01
For a component industry to be successful, we must move beyond the current techniques of black box reuse and genericity to a more flexible framework supporting customization of components as well as instantiation and composition of components. Customization of components strikes a balance between creating dozens of variations of a base component and requiring the overhead of unnecessary features of an 'everything but the kitchen sink' component. We argue that design and instantiation of reusable components have competing criteria - design-for-use strives for generality, design-with-reuse strives for specificity - and that providing mechanisms for each can be complementary rather than antagonistic. In particular, we demonstrate how program slicing techniques can be applied to customization of reusable components.
Code of Federal Regulations, 2013 CFR
2013-01-01
... installed swimming pool slide shall be such that no structural failures of any component part shall cause failures of any other component part of the slide as described in the performance tests in paragraphs (d)(4... number and placement of such fasteners shall not cause a failure of the tread under the ladder loading...
Jiang, Jiping; Wang, Peng; Lung, Wu-seng; Guo, Liang; Li, Mei
2012-08-15
This paper presents a generic framework and decision tools for real-time risk assessment in an Emergency Environmental Decision Support System for response to chemical spills in river basins. The generic "4-step-3-model" framework is able to delineate the warning area and the impact on vulnerable receptors, considering four types of hazards relating to functional areas, societal impact, human health and the ecological system. Decision tools, including a stand-alone system and software components, were implemented on a GIS platform. A detailed case study of the Songhua River nitrobenzene spill illustrated the effectiveness of the framework and tools. Spill first responders and catchment management decision makers will benefit from the rich, visual and dynamic hazard information output by the software. Copyright © 2012 Elsevier B.V. All rights reserved.
Failure analysis of aluminum alloy components
NASA Technical Reports Server (NTRS)
Johari, O.; Corvin, I.; Staschke, J.
1973-01-01
Analysis of six service failures in aluminum alloy components which failed in aerospace applications is reported. Identification of fracture surface features from fatigue and overload modes was straightforward, though the specimens were not always in the clean, smear-free condition most suitable for failure analysis. The presence of corrosion products and of chemically attacked or mechanically rubbed areas hindered precise determination of the cause of crack initiation, which was then indirectly inferred from the scanning electron fractography results. In five failures the crack propagation was by fatigue, though in each case the fatigue crack initiated from a different cause. Some of these causes could be eliminated in future components by better process control. In one failure, the cause was determined to be impact during a crash; the features of impact fracture were distinguished from overload fractures by direct comparisons of the received specimens with laboratory-generated failures.
A geometric approach to failure detection and identification in linear systems
NASA Technical Reports Server (NTRS)
Massoumnia, M. A.
1986-01-01
Using the concepts of (C,A)-invariant and unobservability (complementary observability) subspaces, a geometric formulation of the failure detection and identification filter problem is stated. Using these geometric concepts, it is shown that it is possible to design a causal linear time-invariant processor that can detect and uniquely identify a component failure in a linear time-invariant system, under either of two assumptions: (1) the components can fail simultaneously, or (2) the components can fail only one at a time. In addition, a geometric formulation of Beard's failure detection filter problem is stated. This new formulation completely clarifies the concepts of output separability and mutual detectability introduced by Beard and also exploits the dual relationship between a restricted version of the failure detection and identification problem and the control decoupling problem. Moreover, the frequency-domain interpretation of the results is used to relate the concept of failure-sensitive observers to the generalized parity relations introduced by Chow. This interpretation unifies the various failure detection and identification concepts and design procedures.
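As a rough illustration of residual-based failure detection, of the kind the detection filter generalizes, here is a minimal Luenberger-observer sketch for a discrete-time LTI system. The matrices, gain, and fault scenario are invented for the example; this is not Beard's or Massoumnia's filter itself:

```python
import numpy as np

# Plant: x[k+1] = A x + B u + F f,  y = C x, with additive fault signal f.
# The observer residual r = y - C x_hat stays at zero until the fault acts.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[1.0], [0.0]])   # direction in which the component fault enters
L = np.array([[0.5], [0.2]])   # observer gain, chosen so A - L C is stable

x = np.zeros((2, 1))
x_hat = np.zeros((2, 1))
residuals = []
for k in range(60):
    u = np.array([[1.0]])
    f = 1.0 if k >= 30 else 0.0          # component fails at step 30
    y = C @ x
    r = y - C @ x_hat                    # residual: fault indicator
    residuals.append(r.item())
    x = A @ x + B @ u + F * f
    x_hat = A @ x_hat + B @ u + L @ r
```

Before the fault, plant and observer agree exactly and the residual is zero; after step 30 the residual settles at a nonzero value, flagging the failure.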
Deriving Function-failure Similarity Information for Failure-free Rotorcraft Component Design
NASA Technical Reports Server (NTRS)
Roberts, Rory A.; Stone, Robert B.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)
2002-01-01
Performance and safety are the top concerns of high-risk aerospace applications at NASA. Eliminating or reducing performance and safety problems can be achieved with a thorough understanding of potential failure modes in the design that lead to these problems. The majority of techniques use prior knowledge and experience, as well as Failure Modes and Effects Analysis (FMEA), to determine the potential failure modes of aircraft. The aircraft design needs to be passed through a general technique to ensure that every potential failure mode is considered, while avoiding spending time on improbable failure modes. In this work, this is accomplished by mapping failure modes to specific components, which are described by their functionality. In turn, the failure modes are linked to the basic functions carried out within the components of the aircraft. Using the technique proposed in this paper, designers can examine the basic functions and select appropriate analyses to eliminate or design out the potential failure modes. This method was previously applied to a simple rotating-machine test rig with basic functions that are common to a rotorcraft. In this paper, the technique is applied to the engine and power train of a rotorcraft, using failures and functions obtained from accident reports and engineering drawings.
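The component-to-function and component-to-failure-mode mappings described above can be composed into a function-failure linkage; the components, functions, and failure modes below are hypothetical stand-ins, not data from the accident reports:

```python
# Components are described by the functions they carry and by failure
# modes observed in service; composing the two links each basic function
# to the failure modes a designer should consider.
component_functions = {
    "shaft":   ["transmit torque"],
    "bearing": ["support rotation"],
    "gearbox": ["change speed", "transmit torque"],
}
component_failures = {
    "shaft":   ["fatigue"],
    "bearing": ["wear", "fatigue"],
    "gearbox": ["wear"],
}

function_failures = {}
for comp, funcs in component_functions.items():
    for func in funcs:
        function_failures.setdefault(func, set()).update(component_failures[comp])

# "transmit torque" inherits failure modes from both shaft and gearbox.
```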
Health-related quality of life measurement in patients with chronic respiratory failure.
Oga, Toru; Windisch, Wolfram; Handa, Tomohiro; Hirai, Toyohiro; Chin, Kazuo
2018-05-01
The improvement of health-related quality of life (HRQL) is an important goal in managing patients with chronic respiratory failure (CRF) receiving long-term oxygen therapy (LTOT) and/or domiciliary noninvasive ventilation (NIV). Two condition-specific HRQL questionnaires have been developed to specifically assess these patients: the Maugeri Respiratory Failure Questionnaire (MRF) and the Severe Respiratory Insufficiency Questionnaire (SRI). The MRF is more advantageous in its ease of completion; conversely, the SRI measures diversified health impairments more multi-dimensionally and discriminatively with greater balance, especially in patients receiving NIV. The SRI is available in many different languages as a result of back-translation and validation processes, and is widely validated for various disorders such as chronic obstructive pulmonary disease, restrictive thoracic disorders, neuromuscular disorders, and obesity hypoventilation syndrome, among others. Dyspnea and psychological status were the main determinants for both questionnaires, while the MRF tended to place more emphasis on activity limitations than SRI. In comparison to existing generic questionnaires such as the Medical Outcomes Study 36-item short form (SF-36) and disease-specific questionnaires such as the St. George's Respiratory Questionnaire (SGRQ) and the Chronic Respiratory Disease Questionnaire (CRQ), both the MRF and the SRI have been shown to be valid and reliable, and have better discriminatory, evaluative, and predictive features than other questionnaires. Thus, in assessing the HRQL of patients with CRF using LTOT and/or NIV, we might consider avoiding the use of the SF-36 or even the SGRQ or CRQ alone and consider using the CRF-specific SRI and MRF in addition to existing generic and/or disease-specific questionnaires. Copyright © 2018 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.
Generic Design Procedures for the Repair of Acoustically Damaged Panels
2008-12-01
[Nomenclature and figure residue: h1, h2, h3 are the plate thicknesses of components 1, 2 and 3; h13 is the distance from the centroid of component 1 to the centroid of component 3. Figure 4 shows the geometry for constrained-layer damping of a simply supported/clamped plate, with physical dimensions h1, h2, h3, h13, Lx, Ly, 2a and material properties E1, E3, G2.]
Debonding Stress Concentrations in a Pressurized Lobed Sandwich-Walled Generic Cryogenic Tank
NASA Technical Reports Server (NTRS)
Ko, William L.
2004-01-01
A finite-element stress analysis has been conducted on a lobed composite sandwich tank subjected to internal pressure and cryogenic cooling. The lobed geometry consists of two obtuse circular walls joined together with a common flat wall. Under internal pressure and cryogenic cooling, this type of lobed tank wall experiences open-mode (a process in which the honeycomb is stretched in the depth direction) and shear stress concentrations at the junctures where the curved wall changes into the flat wall (known as curve-flat junctures). Open-mode and shear stress concentrations occur in the honeycomb core at the curve-flat junctures and could cause debonding failure. The levels of contributions from internal pressure and temperature loading to open-mode and shear debonding failure are compared. The lobed fuel tank with honeycomb sandwich walls has been found to be a structurally unsound geometry because of very low debonding failure strengths. The debonding failure problem could be eliminated if the honeycomb core at the curve-flat juncture were replaced with a solid core.
The Effect of Modified Control Limits on the Performance of a Generic Commercial Aircraft Engine
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.; May, Ryan D.; Gou, Ten-Huei; Litt, Jonathan S.
2012-01-01
This paper studies the effect of modifying the control limits of an aircraft engine to obtain additional performance. In an emergency situation, the ability to operate an engine above its normal operating limits, and thereby gain additional performance, may aid in the recovery of a distressed aircraft. However, the modification of an engine's limits is complex due to the risk of an engine failure. This paper focuses on the tradeoff between enhanced performance and the risk of either incurring a mechanical engine failure or compromising engine operability. The ultimate goal is to increase the engine performance, without a large increase in the risk of an engine failure, in order to increase the probability of recovering the distressed aircraft. The control limit modifications proposed are to extend the rotor speeds, temperatures, and pressures to allow more thrust to be produced by the engine, or to increase the rotor accelerations and allow the engine to follow a fast transient. These modifications do result in increased performance; however, this study indicates that they also lead to an increased risk of engine failure.
Environmental testing to prevent on-orbit TDRS failures
NASA Technical Reports Server (NTRS)
Cutler, Robert M.
1994-01-01
Can improved environmental testing prevent on-orbit component failures such as those experienced in the Tracking and Data Relay Satellite (TDRS) constellation? TDRS communications have been available to user spacecraft continuously for over 11 years, during which the five TDRS's placed in orbit have demonstrated their redundancies and robustness by surviving 26 component failures. Nevertheless, additional environmental testing prior to launch could prevent the occurrence of some types of failures and could help to maintain communication services. Specific testing challenges involve traveling wave tube assemblies (TWTA's), whose lives may decrease with on-off cycling, and heaters that are subject to thermal cycles. The development of test conditions and procedures should account for known thermal variations. Testing may also have the potential to prevent failures in which components such as diplexers have had their lives dramatically shortened because of particle migration in a weightless environment. Reliability modeling could be used to select additional components that could benefit from special testing, but experience shows that this approach has serious limitations. Through knowledge of on-orbit experience, and with advances in testing, communication satellite programs might avoid the occurrence of some types of failures and extend future spacecraft longevity beyond the current TDRS design life of ten years. However, determining which components to test, and how much testing to do, remains problematic.
FEAT - FAILURE ENVIRONMENT ANALYSIS TOOL (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Pack, G.
1994-01-01
The Failure Environment Analysis Tool, FEAT, enables people to see and better understand the effects of failures in a system. FEAT uses digraph models to determine what will happen to a system if a set of failure events occurs and to identify the possible causes of a selected set of failures. Failures can be user-selected from either engineering schematic or digraph model graphics, and the effects or potential causes of the failures will be color highlighted on the same schematic or model graphic. As a design tool, FEAT helps design reviewers understand exactly what redundancies have been built into a system and where weaknesses need to be protected or designed out. A properly developed digraph will reflect how a system functionally degrades as failures accumulate. FEAT is also useful in operations, where it can help identify causes of failures after they occur. Finally, FEAT is valuable both in conceptual development and as a training aid, since digraphs can identify weaknesses in scenarios as well as hardware. Digraph models for use with FEAT are generally built with the Digraph Editor, a Macintosh-based application which is distributed with FEAT. The Digraph Editor was developed specifically with the needs of FEAT users in mind and offers several time-saving features. It includes an icon toolbox of components required in a digraph model and a menu of functions for manipulating these components. It also offers FEAT users a convenient way to attach a formatted textual description to each digraph node. FEAT needs these node descriptions in order to recognize nodes and propagate failures within the digraph. FEAT users store their node descriptions in modeling tables using any word processing or spreadsheet package capable of saving data to an ASCII text file. From within the Digraph Editor they can then interactively attach a properly formatted textual description to each node in a digraph.
Once descriptions are attached to them, a selected set of nodes can be saved as a library file which represents a generic digraph structure for a class of components. The Generate Model feature can then use library files to generate digraphs for every component listed in the modeling tables, and these individual digraph files can be used in a variety of ways to speed generation of complete digraph models. FEAT contains a preprocessor which performs transitive closure on the digraph. This multi-step algorithm builds a series of phantom bridges, or gates, that allow accurate bi-directional processing of digraphs. This preprocessing can be time-consuming, but once preprocessing is complete, queries can be answered and displayed within seconds. A UNIX X-Windows port of version 3.5 of FEAT, XFEAT, is also available to speed the processing of digraph models created on the Macintosh. FEAT v3.6, which is only available for the Macintosh, has some report generation capabilities which are not available in XFEAT. For very large integrated systems, FEAT can be a real cost saver in terms of design evaluation, training, and knowledge capture. The capability of loading multiple digraphs and schematics into FEAT allows modelers to build smaller, more focused digraphs. Typically, each digraph file will represent only a portion of a larger failure scenario. FEAT will combine these files and digraphs from other modelers to form a continuous mathematical model of the system's failure logic. Since multiple digraphs can be cumbersome to use, FEAT ties propagation results to schematic drawings produced using MacDraw II (v1.1v2 or later) or MacDraw Pro. This makes it easier to identify single and double point failures that may have to cross several system boundaries and multiple engineering disciplines before creating a hazardous condition. FEAT v3.6 for the Macintosh is written in C-language using Macintosh Programmer's Workshop C v3.2. 
It requires at least a Mac II series computer running System 7 or System 6.0.8 and 32 Bit QuickDraw. It also requires a math coprocessor or coprocessor emulator and a color monitor (or one with 256 gray scale capability). A minimum of 4Mb of free RAM is highly recommended. The UNIX version of FEAT includes both FEAT v3.6 for the Macintosh and XFEAT. XFEAT is written in C-language for Sun series workstations running SunOS, SGI workstations running IRIX, DECstations running ULTRIX, and Intergraph workstations running CLIX version 6. It requires the MIT X Window System, Version 11 Revision 4, with OSF/Motif 1.1.3, and 16Mb of RAM. The standard distribution medium for FEAT 3.6 (Macintosh version) is a set of three 3.5 inch Macintosh format diskettes. The standard distribution package for the UNIX version includes the three FEAT 3.6 Macintosh diskettes plus a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format which contains XFEAT. Alternate distribution media and formats for XFEAT are available upon request. FEAT has been under development since 1990. Both FEAT v3.6 for the Macintosh and XFEAT v3.5 were released in 1993.
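The core digraph operation FEAT relies on, propagating a set of failures through the model to find everything affected, can be sketched as a simple reachability computation (the node names are illustrative, not from the tool):

```python
# Failure propagation on a digraph, in the spirit of FEAT's transitive
# closure preprocessing: find all nodes reachable from a set of failures.
from collections import deque

edges = {                      # "failure of key propagates to values"
    "power_bus": ["pump", "controller"],
    "pump": ["coolant_loop"],
    "coolant_loop": ["reactor_temp"],
    "controller": [],
    "reactor_temp": [],
}

def effects(graph, sources):
    """All nodes reachable from the given failed nodes (BFS closure)."""
    seen, queue = set(sources), deque(sources)
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

affected = effects(edges, {"power_bus"})
```

Running the same query in reverse (on the transposed graph) gives the possible causes of a selected set of failures, the other half of what FEAT's bidirectional processing answers.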
Coelho Neto, José; Lisboa, Fernanda L C
2017-07-01
Viagra and Cialis are among the most counterfeited medicines in many parts of the world, including Brazil. Despite the many studies that have been made regarding discrimination between genuine and counterfeit samples, most published works do not contemplate generic and similar versions of these medicines and also do not explore excipients/adjuvants contributions when characterizing genuine and suspected samples. In this study, we present our findings in exploring ATR-FTIR spectral profiles for characterizing both genuine and questioned samples of several generic and brand-name sildenafil- and tadalafil-based tablets available on the Brazilian market, including Viagra and Cialis. Multi-component spectral matching (deconvolution), objective visual comparison and correlation tests were used during analysis. Besides from allowing simple and quick identification of counterfeits, results obtained evidenced the strong spectral similarities between generic and brand-named tablets employing the same active ingredient and the indistinguishability between samples produced by the same manufacturer, generic or not. For all sildenafil-based and some tadalafil-based tablets tested, differentiation between samples from different manufacturers, attributed to slight variations in excipients/adjuvants proportions, was achieved, thus allowing the possibility of tracing an unknown/unidentified tablet back to a specific manufacturer. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
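A correlation test of the kind used above can be sketched on synthetic spectra; the band shapes and values are assumptions for illustration, not the study's actual ATR-FTIR data:

```python
import numpy as np

def spectral_correlation(a, b):
    """Pearson correlation between two equally sampled spectra."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic absorbance profiles on a common "wavenumber" grid.
x = np.linspace(0, 10, 500)
reference = np.exp(-(x - 4) ** 2) + 0.5 * np.exp(-(x - 7) ** 2 / 0.5)
genuine = reference + np.random.default_rng(0).normal(0, 0.01, x.size)
counterfeit = np.exp(-(x - 3) ** 2)   # bands in the wrong positions

r_genuine = spectral_correlation(reference, genuine)    # near 1
r_fake = spectral_correlation(reference, counterfeit)   # much lower
```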
Lima, Robson B DE; Alves, Francisco T; Oliveira, Cinthia P DE; Silva, José A A DA; Ferreira, Rinaldo L C
2017-01-01
Dry tropical forests are a key component in the global carbon cycle, and their biomass estimates depend almost exclusively on equations fitted to multi-species or individual-species data. Therefore, a systematic evaluation of statistical models through validation of estimates of aboveground biomass stocks is justified. This study analyzed the capacity of generic and specific equations, obtained from different locations in Mexico and Brazil, to estimate aboveground biomass at the multi-species level and for four different species. Generic equations developed in Mexico and Brazil performed better in estimating tree biomass for multi-species data. For Poincianella bracteosa and Mimosa ophthalmocentra, only the Sampaio and Silva (2005) generic equation is recommended. These equations show lower tendency and lower bias, and their biomass estimates are similar. For the species Mimosa tenuiflora and Aspidosperma pyrifolium, and for the genus Croton, the specific regional equations are more strongly recommended, although the generic equation of Sampaio and Silva (2005) is not discarded for biomass estimates. Models considering genus, family, successional group, climatic variables and wood specific gravity should be adjusted and tested, and the resulting equations should be validated at both local and regional levels, as well as at the scale of the tropics where dry forest dominates.
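Allometric biomass equations of the generic power-law form B = a·D^b are typically fitted by least squares on the log-log scale; the diameters, biomass values, and resulting coefficients below are invented for illustration and are not the Sampaio and Silva (2005) values:

```python
import math

# Synthetic tree measurements (diameter in cm, aboveground biomass in kg).
diameters = [3.2, 5.1, 7.8, 10.4, 14.9]
biomass = [2.1, 6.8, 21.5, 45.0, 118.0]

# Ordinary least squares on ln(B) = ln(a) + b * ln(D).
lx = [math.log(d) for d in diameters]
ly = [math.log(w) for w in biomass]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
intercept = my - slope * mx
a, b = math.exp(intercept), slope

def predict(d_cm):
    """Predicted aboveground biomass (kg) for a tree of diameter d_cm."""
    return a * d_cm ** b
```

Validating such a fitted equation against independent plots, as the study does for the Mexican and Brazilian equations, is what separates a usable generic model from a merely well-fitted one.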
Glauser, Bianca F; Vairo, Bruno C; Oliveira, Stephan-Nicollas M C G; Cinelli, Leonardo P; Pereira, Mariana S; Mourão, Paulo A S
2012-02-01
Patent protection for enoxaparin has expired. Generic preparations are developed and approved for clinical use in different countries. However, there is still skepticism about the possibility of making an exact copy of the original drug due to the complex processes involved in generating low-molecular-weight heparins. We have undertaken a careful analysis of generic versions of enoxaparin available for clinical use in Brazil. Thirty-three batches of active ingredient and 70 of the final pharmaceutical product were obtained from six different suppliers. They were analysed for their chemical composition, molecular size distribution, in vitro anticoagulant activity and pharmacological effects on animal models of experimental thrombosis and bleeding. Clearly, the generic versions of enoxaparin available for clinical use in Brazil are similar to the original drug. Only three out of 33 batches of active ingredient from one supplier showed differences in molecular size distribution, resulting from a low percentage of tetrasaccharide or the presence of a minor component eluted as monosaccharide. Three out of 70 batches of the final pharmaceutical products contained lower amounts of the active ingredient than that declared by the suppliers. Our results suggest that the generic versions of enoxaparin are a viable therapeutic option, but their use requires strict regulations to ensure accurate standards.
When 1+1 can be >2: Uncertainties compound when simulating climate, fisheries and marine ecosystems
NASA Astrophysics Data System (ADS)
Evans, Karen; Brown, Jaclyn N.; Sen Gupta, Alex; Nicol, Simon J.; Hoyle, Simon; Matear, Richard; Arrizabalaga, Haritz
2015-03-01
Multi-disciplinary approaches that combine oceanographic, biogeochemical, ecosystem, fisheries population and socio-economic models are vital tools for modelling whole ecosystems. Interpreting the outputs from such complex models requires an appreciation of the many different types of modelling frameworks being used and their associated limitations and uncertainties. Both users and developers of particular model components will often have little involvement or understanding of other components within such modelling frameworks. Failure to recognise limitations and uncertainties associated with components and how these uncertainties might propagate throughout modelling frameworks can potentially result in poor advice for resource management. Unfortunately, many of the current integrative frameworks do not propagate the uncertainties of their constituent parts. In this review, we outline the major components of a generic whole of ecosystem modelling framework incorporating the external pressures of climate and fishing. We discuss the limitations and uncertainties associated with each component of such a modelling system, along with key research gaps. Major uncertainties in modelling frameworks are broadly categorised into those associated with (i) deficient knowledge in the interactions of climate and ocean dynamics with marine organisms and ecosystems; (ii) lack of observations to assess and advance modelling efforts and (iii) an inability to predict with confidence natural ecosystem variability and longer term changes as a result of external drivers (e.g. greenhouse gases, fishing effort) and the consequences for marine ecosystems. As a result of these uncertainties and intrinsic differences in the structure and parameterisation of models, users are faced with considerable challenges associated with making appropriate choices on which models to use. We suggest research directions required to address these uncertainties, and caution against overconfident predictions. 
Understanding the full impact of uncertainty makes it clear that full comprehension and robust certainty about the systems themselves are not feasible. A key research direction is the development of management systems that are robust to this unavoidable uncertainty.
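The compounding of uncertainty across chained model components can be illustrated with a small Monte Carlo sketch; the per-component error levels are arbitrary assumptions:

```python
import random

random.seed(1)

def chained_output(stage_cvs, n=20000):
    """Relative spread (coefficient of variation) of the product of
    independent multiplicative factors, one per model component, each
    with its own coefficient of variation cv."""
    outputs = []
    for _ in range(n):
        value = 1.0
        for cv in stage_cvs:
            value *= random.gauss(1.0, cv)   # this component's error
        outputs.append(value)
    mean = sum(outputs) / n
    var = sum((o - mean) ** 2 for o in outputs) / n
    return var ** 0.5 / mean

one_stage = chained_output([0.10])                 # single component
three_stages = chained_output([0.10, 0.10, 0.10])  # spread grows with chaining
```

Chaining three components, each individually only 10% uncertain, yields an output roughly 17% uncertain, which is why frameworks that do not propagate component uncertainties understate the spread of their final predictions.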
NASA Technical Reports Server (NTRS)
Foster, John V.; Hartman, David C.
2017-01-01
The NASA Unmanned Aircraft System (UAS) Traffic Management (UTM) project is conducting research to enable civilian low-altitude airspace and UAS operations. A goal of this project is to develop probabilistic methods to quantify risk during failures and off-nominal flight conditions. An important part of this effort is the reliable prediction of feasible trajectories during off-nominal events such as control failure, atmospheric upsets, or navigation anomalies that can cause large deviations from the intended flight path or extreme vehicle upsets beyond the normal flight envelope. Few examples of high-fidelity modeling and prediction of off-nominal behavior for small UAS (sUAS) vehicles exist, and modeling requirements for accurately predicting flight dynamics for out-of-envelope or failure conditions are essentially undefined. In addition, the broad range of sUAS aircraft configurations already being fielded presents a significant modeling challenge, as these vehicles are often very different from one another, are likely to possess dramatically different flight dynamics and resultant trajectories, and may require different modeling approaches to capture off-nominal behavior. NASA has undertaken an extensive research effort to define sUAS flight dynamics modeling requirements and develop preliminary high-fidelity six-degree-of-freedom (6-DOF) simulations capable of more closely predicting off-nominal flight dynamics and trajectories. This research has included a literature review of existing sUAS modeling and simulation work as well as development of experimental testing methods to measure and model key components of propulsion, airframe and control characteristics. The ultimate objective of these efforts is to develop tools to support UTM risk analyses and the real-time prediction of off-nominal trajectories for use in the UTM Risk Assessment Framework (URAF).
This paper focuses on modeling and simulation efforts for a generic quad-rotor configuration typical of many commercial vehicles in use today. An overview of relevant off-nominal multi-rotor behaviors will be presented to define modeling goals and to identify the prediction capability lacking in simplified models of multi-rotor performance. A description of recent NASA wind tunnel testing of multi-rotor propulsion and airframe components will be presented illustrating important experimental and data acquisition methods, and a description of preliminary propulsion and airframe models will be presented. Lastly, examples of predicted off-nominal flight dynamics and trajectories from the simulation will be presented.
[Analysis of generic drug supply in France].
Taboulet, F; Haramburu, F; Latry, Ph
2003-09-01
The list of generic medicines (LGM), published since 1997 by the Agence Française de Sécurité Sanitaire des Produits de Santé (AFSSaPS), the French Medicines Agency, concerns a specific subset of the medicines reimbursed by the National Health Insurance (Social Security). The objectives of the present study were: i) to describe the components of this list, based on pharmaceutical, economic and therapeutic characteristics; ii) to study differences between generic and reference products (formulations, excipients, prices, etc.); iii) to analyze the information on excipients provided to health care professionals. The 21st version of the LGM (April 2001) was used. Therapeutic value was retrieved from the 2001 AFSSaPS report on the therapeutic value of 4490 reimbursed medicines. Information on excipients in the LGM and the Vidal dictionary (the reference prescription book in France) was compared. The products included in the LGM represent 20% of all reimbursed medicines. The mean price differences between generics and their reference products vary between 30 and 50% for more than two thirds of the generic groups. The therapeutic value of the products of the LGM was judged important in 71% of cases (vs 63% for the 4409 assessed medicines) and insufficient in 13% of cases (vs 19%). Information on excipients is often missing and sometimes erroneous. Although the LGM is regularly revised and the generic market is thus in perpetual change, this 2001 cross-sectional description of the pharmaceutical market provides much information and raises some concerns.
21 CFR 314.440 - Addresses for applications and abbreviated applications.
Code of Federal Regulations, 2011 CFR
2011-04-01
... mail code for the Office of Generic Drugs is HFD-600, the mail codes for the Divisions of Chemistry I... leukapheresis; (3) Blood component processing solutions and shelf life extenders; and (4) Oxygen carriers. [50...
21 CFR 314.440 - Addresses for applications and abbreviated applications.
Code of Federal Regulations, 2013 CFR
2013-04-01
... mail code for the Office of Generic Drugs is HFD-600, the mail codes for the Divisions of Chemistry I... leukapheresis; (3) Blood component processing solutions and shelf life extenders; and (4) Oxygen carriers. [50...
21 CFR 314.440 - Addresses for applications and abbreviated applications.
Code of Federal Regulations, 2014 CFR
2014-04-01
... mail code for the Office of Generic Drugs is HFD-600, the mail codes for the Divisions of Chemistry I... leukapheresis; (3) Blood component processing solutions and shelf life extenders; and (4) Oxygen carriers. [50...
21 CFR 314.440 - Addresses for applications and abbreviated applications.
Code of Federal Regulations, 2012 CFR
2012-04-01
... mail code for the Office of Generic Drugs is HFD-600, the mail codes for the Divisions of Chemistry I... leukapheresis; (3) Blood component processing solutions and shelf life extenders; and (4) Oxygen carriers. [50...
Progress on ultrasonic flaw sizing in turbine-engine rotor components: bore and web geometries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, J.H.; Gray, T.A.; Thompson, R.B.
1983-01-01
The application of generic flaw-sizing techniques to specific components generally involves difficulties associated with geometrical complexity and simplifications arising from a knowledge of the expected flaw distribution. This paper is concerned with the case of ultrasonic flaw sizing in turbine-engine rotor components. The sizing of flat penny-shaped cracks in the web geometry is discussed, and new crack-sizing algorithms based on the Born and Kirchhoff approximations are introduced. Additionally, we propose a simple method for finding the size of a flat, penny-shaped crack given only the magnitude of the scattering amplitude. The bore geometry is discussed with primary emphasis on the cylindrical focusing of the incident beam. Important questions which are addressed include the effects of diffraction and the position of the flaw with respect to the focal line. The appropriate deconvolution procedures to account for these effects are introduced. Generic features of the theory are compared with experiment. Finally, the effects of focused transducers on the Born inversion algorithm are discussed.
NASA Technical Reports Server (NTRS)
Storms, Bruce L.; Satran, Dale R.; Heineck, James T.; Walker, Stephen M.
2006-01-01
Experimental measurements of a generic tractor-trailer were obtained in two wind tunnels at Ames Research Center. After a preliminary study at atmospheric conditions in the 7- by 10-Foot Wind Tunnel, additional testing was conducted at Reynolds numbers corresponding to full-scale highway speeds in the 12-Foot Pressure Wind Tunnel. To facilitate computational modeling, the 1:8-scale geometry, designated the Generic Conventional Model, included a simplified underbody and omitted many small-scale details. The measurements included overall and component forces and moments, static and dynamic surface pressures, and three-component particle image velocimetry. This summary report highlights the effects of numerous drag reduction concepts and provides details of the model installation in both wind tunnels. To provide a basis for comparison, the wind-averaged drag coefficient was tabulated for all configurations tested. Relative to the baseline configuration representative of a modern class-8 tractor-trailer, the most effective concepts were the trailer base flaps and the trailer belly box, providing drag-coefficient reductions of 0.0855 and 0.0494, respectively. Trailer side skirts were less effective, yielding a drag reduction of 0.0260. The database of this experimental effort is publicly available for further analysis.
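For reference, a wind-averaged drag coefficient of the kind tabulated in the report is obtained by averaging CD over uniformly distributed wind directions. The sketch below uses assumed speeds (29.1 m/s vehicle, 3.1 m/s wind, roughly the SAE J1252 conditions) and dynamic-pressure weighting; it illustrates the averaging idea, not the report's exact procedure.

```python
import math

def wind_averaged_cd(cd_of_yaw, v_vehicle=29.1, v_wind=3.1, n=360):
    """Average the drag coefficient over uniformly distributed wind
    directions, weighting CD at each resultant yaw angle by the square of
    the resultant-to-vehicle speed ratio (dynamic-pressure weighting)."""
    total = 0.0
    for i in range(n):
        theta = 2.0 * math.pi * i / n             # wind direction
        u = v_vehicle + v_wind * math.cos(theta)  # along-track airspeed
        w = v_wind * math.sin(theta)              # cross-track airspeed
        yaw_deg = abs(math.degrees(math.atan2(w, u)))
        total += cd_of_yaw(yaw_deg) * (u * u + w * w) / (v_vehicle ** 2)
    return total / n
```

Because crosswind raises the mean dynamic pressure and trucks typically have higher CD at nonzero yaw, the wind-averaged value normally exceeds the zero-yaw CD, which is why it is the fairer basis for comparing drag-reduction devices.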
Distribution of a Generic Mission Planning and Scheduling Toolkit for Astronomical Spacecraft
NASA Technical Reports Server (NTRS)
Kleiner, Steven C.
1998-01-01
This 2-year report describes the progress made to date on the project to package and distribute the planning and scheduling toolkit for the SWAS astronomical spacecraft. SWAS was scheduled to be launched on a Pegasus XL vehicle in fall 1995. Three separate failures in the launch vehicle have delayed the SWAS launch. The researchers have used this time to continue developing scheduling algorithms and GUI design. SWAS is expected to be launched this year.
Pathophysiology of chest trauma.
Calhoon, J H; Trinkle, J K
1997-05-01
Recent information indicates that there is a complex cellular and molecular generic response to injury that can lead to multi-organ failure. For many years, basic physiology and biochemistry were thought to explain the systemic response to injury, but it is now known that subcellular and molecular events are the keys to unlocking the secrets of the body's response to trauma. The interaction of endothelial cells with neutrophils and platelets to produce cytokines and free radicals and to upregulate adhesion molecules is especially significant.
A generic bio-economic farm model for environmental and economic assessment of agricultural systems.
Janssen, Sander; Louhichi, Kamel; Kanellopoulos, Argyris; Zander, Peter; Flichman, Guillermo; Hengsdijk, Huib; Meuter, Eelco; Andersen, Erling; Belhouchette, Hatem; Blanco, Maria; Borkowski, Nina; Heckelei, Thomas; Hecker, Martin; Li, Hongtao; Oude Lansink, Alfons; Stokstad, Grete; Thorne, Peter; van Keulen, Herman; van Ittersum, Martin K
2010-12-01
Bio-economic farm models are tools to evaluate ex-post or to assess ex-ante the impact of policy and technology change on agriculture, economics and environment. Recently, various BEFMs have been developed, often for one purpose or location, but hardly any of these models are re-used later for other purposes or locations. The Farm System Simulator (FSSIM) provides a generic framework enabling the application of BEFMs under various situations and for different purposes (generating supply response functions and detailed regional or farm type assessments). FSSIM is set up as a component-based framework with components representing farmer objectives, risk, calibration, policies, current activities, alternative activities and different types of activities (e.g., annual and perennial cropping and livestock). The generic nature of FSSIM is evaluated using five criteria by examining its applications. FSSIM has been applied for different climate zones and soil types (criterion 1) and to a range of different farm types (criterion 2) with different specializations, intensities and sizes. In most applications FSSIM has been used to assess the effects of policy changes and in two applications to assess the impact of technological innovations (criterion 3). In the various applications, different data sources, level of detail (e.g., criterion 4) and model configurations have been used. FSSIM has been linked to an economic and several biophysical models (criterion 5). The model is available for applications to other conditions and research issues, and it is open to be further tested and to be extended with new components, indicators or linkages to other models.
Risk Analysis using Corrosion Rate Parameter on Gas Transmission Pipeline
NASA Astrophysics Data System (ADS)
Sasikirono, B.; Kim, S. J.; Haryadi, G. D.; Huda, A.
2017-05-01
In the oil and gas industry, the pipeline is a major component of the oil and gas transmission and distribution process. This process often routes pipelines across many types of environmental conditions. A pipeline should therefore operate safely so that it does not harm the surrounding environment. Corrosion is still a major cause of failure in some components of the equipment in a production facility. In pipeline systems, corrosion can cause failures in the wall and damage to the pipeline, so the pipeline system requires care and periodic inspection. Every production facility in an industry has a level of risk for damage that results from both the likelihood and the consequences of damage. The purpose of this research is to analyze the risk level of a 20-inch natural gas transmission pipeline using semi-quantitative risk-based inspection per API 581, considering the likelihood of failure and the consequences of failure of each equipment component. The result is then used to determine the next inspection plan. Nine pipeline components were observed, including straight inlet pipes, connection tees, and straight outlet pipes. The risk levels of the nine components are presented in a risk matrix; the components were assessed to be at a medium risk level. The failure mechanism considered in this research is thinning. From the calculated corrosion rates, the remaining age of each pipeline component can be obtained, so the remaining lifetime of each component is known; the results vary for each component. The final step is planning the inspection of the pipeline components using external NDT methods.
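Under a constant-rate thinning assumption, the remaining-life arithmetic described above reduces to a one-line calculation. The half-life inspection interval below is a common rule of thumb, and the numbers in the usage note are illustrative, not values from the study.

```python
def remaining_life(t_current_mm, t_min_mm, corrosion_rate_mm_per_yr):
    """Years until a wall thinning at a constant corrosion rate reaches
    its minimum allowable thickness."""
    if corrosion_rate_mm_per_yr <= 0:
        raise ValueError("corrosion rate must be positive")
    return max(0.0, (t_current_mm - t_min_mm) / corrosion_rate_mm_per_yr)

def next_inspection_year(current_year, t_current_mm, t_min_mm, rate):
    """Schedule the next inspection at half the remaining life (a common
    heuristic; the applicable code may prescribe a different interval)."""
    return current_year + remaining_life(t_current_mm, t_min_mm, rate) / 2.0
```

For example, a 10 mm wall with a 6 mm minimum allowable thickness and a measured rate of 0.2 mm/yr has 20 years of remaining life, so an inspection performed in 2017 would be repeated around 2027.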
Scientific Data Management (SDM) Center for Enabling Technologies. Final Report, 2007-2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ludascher, Bertram; Altintas, Ilkay
Our contributions to advancing the State of the Art in scientific workflows have focused on the following areas: Workflow development; Generic workflow components and templates; Provenance collection and analysis; and, Workflow reliability and fault tolerance.
A Generic Approach for Pen-Based User Interface Development
NASA Astrophysics Data System (ADS)
Macé, Sébastien; Anquetil, Éric
Pen-based interaction is an intuitive way to produce hand-drawn structured documents, but few applications take advantage of it. Indeed, interpreting the user's hand-drawn strokes in the context of the document is a complex problem. In this paper, we propose a new generic approach to developing such systems based on three independent components. The first is a set of graphical and editing functions adapted to pen interaction. The second is a rule-based formalism that models structured document composition and the corresponding interpretation process. The last is a hand-drawn stroke analyzer that interprets strokes progressively, directly while the user is drawing. We highlight in particular the human-computer interaction induced by this progressive interpretation process. Thanks to this generic approach, three pen-based system prototypes have already been developed: for musical score editing, for graph editing, and for UML class diagram editing.
NASA Technical Reports Server (NTRS)
White, A. L.
1983-01-01
This paper examines the reliability of three architectures for six components. For each architecture, the probabilities of the failure states are given by algebraic formulas involving the component fault rate, the system recovery rate, and the operating time. The dominant failure modes are identified, and the change in reliability is considered with respect to changes in fault rate, recovery rate, and operating time. The major conclusions concern the influence of system architecture on failure modes and parameter requirements. Without this knowledge, a system designer may pick an inappropriate structure.
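The algebraic failure-state formulas of this kind can also be checked numerically. The sketch below integrates a minimal three-state Markov model of a duplex pair (component fault rate lam, recovery rate delta); it is an assumed stand-in illustrating the interplay of fault rate, recovery rate, and operating time, not the paper's actual six-component architectures.

```python
def duplex_failure_prob(lam, delta, t, dt=1e-4):
    """Forward-Euler integration of a three-state Markov model:
    state 0 = both components good, state 1 = one faulty (recovery under
    way), state 2 = system failed (second fault arrives before recovery
    completes). Rates are per hour; t is the operating time in hours."""
    p_good, p_one, p_fail = 1.0, 0.0, 0.0
    for _ in range(int(t / dt)):
        d_good = -2.0 * lam * p_good + delta * p_one
        d_one = 2.0 * lam * p_good - (delta + lam) * p_one
        d_fail = lam * p_one
        p_good += d_good * dt
        p_one += d_one * dt
        p_fail += d_fail * dt
    return p_fail
```

With lam = 1e-4/h and delta = 1/h over a 10-hour mission, the failure probability is on the order of 2e-7, and it scales roughly with lam squared and inversely with delta, which is the sort of parameter sensitivity the paper examines for each architecture.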
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jadaan, O.M.; Powers, L.M.; Nemeth, N.N.
1995-08-01
A probabilistic design methodology which predicts the fast fracture and time-dependent failure behavior of thermomechanically loaded ceramic components is discussed using the CARES/LIFE integrated design computer program. Slow crack growth (SCG) is assumed to be the mechanism responsible for delayed failure behavior. Inert strength and dynamic fatigue data obtained from testing coupon specimens (O-ring and C-ring specimens) are initially used to calculate the fast fracture and SCG material parameters as a function of temperature using the parameter estimation techniques available with the CARES/LIFE code. Finite element analysis (FEA) is used to compute the stress distributions for the tube as a function of applied pressure. Knowing the stress and temperature distributions and the fast fracture and SCG material parameters, the lifetime for a given tube can be computed. A stress-failure probability-time to failure (SPT) diagram is subsequently constructed for these tubes. Such a diagram can be used by design engineers to estimate the time to failure at a given failure probability level for a component subjected to a given thermomechanical load.
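Fast-fracture and slow-crack-growth calculations of the kind described rest on the two-parameter Weibull distribution and a power-law crack-growth life relation. The sketch below shows both in their simplest uniaxial, unit-effective-volume form; parameter names and values are illustrative notation, not the CARES/LIFE interface.

```python
import math

def weibull_failure_probability(sigma, sigma_0, m):
    """Two-parameter Weibull probability of fast fracture at stress sigma
    (characteristic strength sigma_0, Weibull modulus m; unit effective
    volume assumed)."""
    return 1.0 - math.exp(-((sigma / sigma_0) ** m))

def scg_time_to_failure(sigma, pf_target, sigma_0, m, B, n):
    """Power-law slow-crack-growth life: t_f = B * S_i**(n - 2) / sigma**n,
    where S_i is the inert strength at the target failure probability.
    Common SCG notation, used here for illustration only."""
    s_inert = sigma_0 * (-math.log(1.0 - pf_target)) ** (1.0 / m)
    return B * s_inert ** (n - 2) / sigma ** n
```

Evaluating the life relation over a grid of applied stresses and failure-probability levels yields exactly the stress-failure probability-time (SPT) surface mentioned in the abstract.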
Enhanced Component Performance Study: Motor-Driven Pumps 1998–2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2016-02-01
This report presents an enhanced performance evaluation of motor-driven pumps at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for the component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The motor-driven pump failure modes considered for standby systems are failure to start, failure to run less than or equal to one hour, and failure to run more than one hour; for normally running systems, the failure modes considered are failure to start and failure to run. An eight-hour unreliability estimate is also calculated and trended. The component reliability estimates and the reliability data are trended for the most recent 10-year period, while yearly estimates for reliability are provided for the entire active period. Statistically significant increasing trends were identified in pump run hours per reactor year. Statistically significant decreasing trends were identified for standby systems industry-wide frequency of start demands, and run hours per reactor year for runs of less than or equal to one hour.
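Component reliability estimates in studies of this kind are typically posterior means under a Jeffreys prior rather than raw frequencies. A minimal sketch of the two standard estimators (rate per hour, probability per demand), assuming the usual NRC conventions:

```python
def jeffreys_rate_estimate(failures, exposure_hours):
    """Posterior mean Poisson failure rate under the Jeffreys prior:
    lambda = (n + 0.5) / T, per hour of exposure."""
    return (failures + 0.5) / exposure_hours

def jeffreys_demand_estimate(failures, demands):
    """Posterior mean failure-on-demand probability under the Jeffreys
    prior: p = (n + 0.5) / (N + 1)."""
    return (failures + 0.5) / (demands + 1)
```

A pump with 2 failures in 10,000 run hours gets a rate of 2.5e-4 per hour; one with no failures in 999 start demands still gets a nonzero 5e-4 per demand, which keeps zero-failure components from looking perfectly reliable.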
Estimating distributions with increasing failure rate in an imperfect repair model.
Kvam, Paul H; Singh, Harshinder; Whitaker, Lyn R
2002-03-01
A failed system is repaired minimally if after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate. Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown-Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability, and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data in the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators are consistent and show distinct differences from the ordinary MLEs. Simulation results suggest significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.
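Order-restricted estimation under an increasing-failure-rate constraint can be illustrated with the pool-adjacent-violators algorithm, which projects unconstrained piecewise hazard estimates onto nondecreasing sequences. This is an illustrative stand-in for the paper's order-restricted MLE, not its exact estimator.

```python
def pava_increasing(values, weights=None):
    """Pool-adjacent-violators algorithm: the weighted least-squares
    projection of a sequence onto the cone of nondecreasing sequences.
    Applied to piecewise hazard estimates, it enforces the IFR constraint."""
    if weights is None:
        weights = [1.0] * len(values)
    level, weight, count = [], [], []
    for v, w in zip(values, weights):
        level.append(v)
        weight.append(w)
        count.append(1)
        # Merge adjacent blocks while monotonicity is violated.
        while len(level) > 1 and level[-2] > level[-1]:
            v2, w2, c2 = level.pop(), weight.pop(), count.pop()
            v1, w1, c1 = level.pop(), weight.pop(), count.pop()
            level.append((v1 * w1 + v2 * w2) / (w1 + w2))
            weight.append(w1 + w2)
            count.append(c1 + c2)
    out = []
    for v, c in zip(level, count):
        out.extend([v] * c)
    return out
```

A local dip in the raw hazard estimates is pooled with its neighbors into a weighted average, so noisy interval-by-interval estimates are smoothed into an admissible increasing hazard, which is the sense in which the order restriction improves reliability estimation.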
An experimental and analytical investigation on the response of GR/EP composite I-frames
NASA Technical Reports Server (NTRS)
Moas, E., Jr.; Boitnott, R. L.; Griffin, O. H., Jr.
1991-01-01
Six-foot diameter, semicircular graphite/epoxy specimens representative of generic aircraft frames were loaded quasi-statically to determine their load response and failure mechanisms for large deflections that occur in an airplane crash. These frame-skin specimens consisted of a cylindrical skin section cocured with a semicircular I-frame. Various frame laminate stacking sequences and geometries were evaluated by statically loading the specimen until multiple failures occurred. Two analytical methods were compared for modeling the frame-skin specimens: a two-dimensional branched-shell finite element analysis and a one-dimensional, closed-form, curved beam solution derived using an energy method. Excellent correlation was obtained between experimental results and the finite element predictions of the linear response of the frames prior to the initial failure. The beam solution was used for rapid parameter and design studies, and was found to be stiff in comparison with the finite element analysis. The specimens were found to be useful for evaluating composite frame designs.
NASA Technical Reports Server (NTRS)
Stewart, E. C.; Brown, P. W.; Yenni, K. R.
1986-01-01
A simulation study was conducted to investigate the piloting problems associated with failure of an engine on a generic light twin-engine airplane. A primary piloting problem for a light twin-engine airplane after an engine failure is maintaining precise control of the airplane in the presence of large steady control forces. To address this problem, a simulated automatic trim system which drives the trim tabs as an open-loop function of propeller slipstream measurements was developed. The simulated automatic trim system was found to greatly increase the controllability in asymmetric powered flight without having to resort to complex control laws or an irreversible control system. However, the trim-tab control rates needed to produce the dramatic increase in controllability may require special design consideration for automatic trim system failures. Limited measurements obtained in full-scale flight tests confirmed the fundamental validity of the proposed control law.
Studies on Automobile Clutch Release Bearing Characteristics with Acoustic Emission
NASA Astrophysics Data System (ADS)
Chen, Guoliang; Chen, Xiaoyang
Automobile clutch release bearings are important automotive driveline components, and early diagnosis of their fatigue failure is significant. The early fatigue failure response signal is not obvious, however, because failure signals are susceptible to noise along the transmission path and to interference from working-environment factors. As vehicle design improves, fatigue-life indicators for clutch release bearings have become an increasingly important requirement. Contact fatigue is the main failure mode of the rolling components of a release bearing. Acoustic emission techniques have unique advantages for detecting contact fatigue failure, offering a highly sensitive nondestructive testing method. When a bearing is monitored with acoustic emission, signals are collected from multiple sensors; each signal contains partial fault information, and the fault information in the signals overlaps. Fusing the source information received simultaneously by the sensors into a complete acoustic emission signature of the rolling-bearing fault is therefore the key issue for accurate fault diagnosis. A release bearing comprises an outer ring, an inner ring, rolling balls, and a cage. When a failure such as cracking or pitting occurs, the other components impact the damaged point and produce an acoustic emission signal. The emitted waveform propagates mainly as a Rayleigh wave: elastic waves travel from the source and are scattered at the bearing surface. Dynamic simulation of rolling-bearing failure contributes to a deeper understanding of failure characteristics and provides a theoretical basis and foundation for the monitoring and fault diagnosis of rolling bearings.
Jo, Min-Woo; Lee, Hyeon-Jeong; Kim, Soo Young; Kim, Seon-Ha; Chang, Hyejung; Ahn, Jeonghoon; Ock, Minsu
2017-01-01
Few attempts have been made to develop a generic health-related quality of life (HRQoL) instrument and to examine its validity and reliability in Korea. We aimed to do this in our present study. After a literature review of existing generic HRQoL instruments, a focus group discussion, in-depth interviews, and expert consultations, we selected 30 tentative items for a new HRQoL measure. These items were evaluated by assessing their ceiling effects, difficulty, and redundancy in the first survey. To validate the HRQoL instrument that was developed, known-groups validity and convergent/discriminant validity were evaluated and its test-retest reliability was examined in the second survey. Of the 30 items originally assessed for the HRQoL instrument, four were excluded due to high ceiling effects and six were removed due to redundancy. We ultimately developed a HRQoL instrument with a reduced number of 20 items, known as the Health-related Quality of Life Instrument with 20 items (HINT-20), incorporating physical, mental, social, and positive health dimensions. The results of the HINT-20 for known-groups validity were poorer in women, the elderly, and those with a low income. For convergent/discriminant validity, the correlation coefficients of items (except vitality) in the physical health dimension with the physical component summary of the Short Form 36 version 2 (SF-36v2) were generally higher than the correlations of those items with the mental component summary of the SF-36v2, and vice versa. Regarding test-retest reliability, the intraclass correlation coefficient of the total HINT-20 score was 0.813 (p<0.001). A novel generic HRQoL instrument, the HINT-20, was developed for the Korean general population and showed acceptable validity and reliability.
Uncemented glenoid component in total shoulder arthroplasty. Survivorship and outcomes.
Martin, Scott David; Zurakowski, David; Thornhill, Thomas S
2005-06-01
Glenoid component loosening continues to be a major factor affecting the long-term survivorship of total shoulder replacements. Radiolucent lines, cement fracture, migration, and loosening requiring revision are common problems with cemented glenoid components. The purpose of this study was to evaluate the results of total shoulder arthroplasty with an uncemented glenoid component and to identify predictors of glenoid component failure. One hundred and forty-seven consecutive total shoulder arthroplasties were performed in 132 patients (mean age, 63.3 years) with use of an uncemented glenoid component fixed with screws between 1988 and 1996. One hundred and forty shoulders in 124 patients were available for follow-up at an average of 7.5 years. One shoulder in which the arthroplasty had failed at 2.4 years and for which the duration of follow-up was four years was also included for completeness. The preoperative diagnoses included osteoarthritis in seventy-two shoulders and rheumatoid arthritis in fifty-five. Radiolucency was noted around the glenoid component and/or screws in fifty-three of the 140 shoulders. The mean modified ASES (American Shoulder and Elbow Surgeons) score (and standard deviation) improved from 15.6 +/- 11.8 points preoperatively to 75.8 +/- 17.5 points at the time of follow-up. Eighty-five shoulders were not painful, forty-two were slightly or mildly painful, ten were moderately painful, and three were severely painful. Fifteen (11%) of the glenoid components failed clinically, and ten of them also had radiographic signs of failure. Eleven other shoulders had radiographic signs of failure but no symptoms at the time of writing. Three factors had a significant independent association with clinical failure: male gender (p = 0.02), pain (p < 0.01), and radiolucency adjacent to the flat tray (p < 0.001). In addition, the annual risk of implant revision was nearly seven times higher for patients with radiographic signs of failure. 
Clinical survivorship was 95% at five years and 85% at ten years. The failure rates of the total shoulder arthroplasties in this study were higher than those in previously reported studies of cemented polyethylene components with similar durations of follow-up. Screw breakage and excessive polyethylene wear were common problems that may lead to additional failures of these uncemented glenoid components in the future.
(n, N) type maintenance policy for multi-component systems with failure interactions
NASA Astrophysics Data System (ADS)
Zhang, Zhuoqi; Wu, Su; Li, Binfeng; Lee, Seungchul
2015-04-01
This paper studies maintenance policies for multi-component systems in which failure interactions and opportunistic maintenance (OM) are involved. This maintenance problem can be formulated as a Markov decision process (MDP). However, since the action set and state space of the MDP expand exponentially as the number of components increases, traditional approaches are computationally intractable. To deal with the curse of dimensionality, we decompose such a multi-component system into mutually influential single-component systems. Each single-component system is formulated as an MDP with the objective of minimising its long-run average maintenance cost. Under some reasonable assumptions, we prove the existence of the optimal (n, N) type policy for a single-component system. An algorithm to obtain the optimal (n, N) type policy is also proposed. Based on the proposed algorithm, we develop an iterative approximation algorithm to obtain an acceptable maintenance policy for a multi-component system. Numerical examples show that failure interactions and OM have significant effects on the maintenance policy.
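The single-component subproblem can be illustrated with a renewal-reward evaluation of age-based preventive-maintenance thresholds. This sketch scans candidate thresholds N rather than solving the MDP, and all failure probabilities and cost figures are assumed for illustration, not taken from the paper.

```python
def average_cost_rate(N, p_fail, c_pm, c_cm):
    """Renewal-reward cost rate of an age-replacement policy: replace
    preventively on reaching age N (cost c_pm) or correctively on failure
    (cost c_cm). p_fail[k] is the probability that a component of age k
    fails during the next period."""
    surv, exp_cost, exp_len = 1.0, 0.0, 0.0
    for k in range(N):
        exp_len += surv                 # expected periods in service
        fail_now = surv * p_fail[k]
        exp_cost += fail_now * c_cm     # corrective repair on failure
        surv -= fail_now
    exp_cost += surv * c_pm             # survived to age N: preventive repair
    return exp_cost / exp_len

def best_threshold(p_fail, c_pm, c_cm):
    """Scan all replacement thresholds and return the cheapest."""
    costs = {N: average_cost_rate(N, p_fail, c_pm, c_cm)
             for N in range(1, len(p_fail) + 1)}
    return min(costs, key=costs.get)
```

With an increasing failure probability and corrective repair ten times the cost of preventive repair, the optimal threshold falls in the interior: replacing too early wastes component life, too late incurs expensive failures. The (n, N) policy of the paper resolves the same trade-off with the added opportunistic-maintenance dimension.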
Failure detection in high-performance clusters and computers using chaotic map computations
Rao, Nageswara S.
2015-09-01
A programmable medium includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable medium includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
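The detection idea can be sketched with the logistic map: healthy replicas iterating the same map from the same seed stay bitwise identical, while any arithmetic fault makes the trajectories diverge exponentially. This is a simplified illustration of the mechanism described, not the patented implementation.

```python
def logistic_trajectory(x0, steps, r=3.9):
    """Iterate the logistic map x <- r*x*(1-x), which is chaotic at r = 3.9."""
    x, traj = x0, []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        traj.append(x)
    return traj

def detect_fault(traj_a, traj_b, tol=1e-12):
    """Compare trajectories computed by two replicas from the same seed.
    Healthy, identical hardware produces identical floats; a computational
    fault is amplified exponentially by the map, so the first index whose
    difference exceeds tol flags the failure. Returns None if no fault."""
    for i, (a, b) in enumerate(zip(traj_a, traj_b)):
        if abs(a - b) > tol:
            return i
    return None
```

A perturbation of 1e-9 in the seed, standing in for a single corrupted low-order bit, is amplified above the comparison tolerance within a few iterates, which is why chaotic maps make sensitive, low-cost health probes.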
21 CFR 892.1715 - Full-field digital mammography system.
Code of Federal Regulations, 2012 CFR
2012-04-01
... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...
21 CFR 892.1715 - Full-field digital mammography system.
Code of Federal Regulations, 2013 CFR
2013-04-01
... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...
21 CFR 892.1715 - Full-field digital mammography system.
Code of Federal Regulations, 2014 CFR
2014-04-01
... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...
21 CFR 892.1715 - Full-field digital mammography system.
Code of Federal Regulations, 2011 CFR
2011-04-01
... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...
40 CFR 721.6498 - Modified polyisocyanates (generic).
Code of Federal Regulations, 2013 CFR
2013-07-01
... that contain them, an industrial hygiene and safety program should be operative. Important components... efficient and well-maintained application equipment, engineering controls and personal protective equipment.... Engineering controls should serve as the first, most effective means of reducing airborne polyisocyanate and...
Common Cause Failure Modeling in Space Launch Vehicles
NASA Technical Reports Server (NTRS)
Hark, Frank; Ring, Rob; Novack, Steven D.; Britton, Paul
2015-01-01
Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are dependent failures that can be caused, for example, by system environments, manufacturing, transportation, storage, maintenance, and assembly. Because many factors contribute to CCFs, they can be reduced but are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent failures and CCFs. Because common cause failure data are limited in the aerospace industry, the Probabilistic Risk Assessment (PRA) Team at Bastion Technology Inc. is estimating CCF risk using generic data collected by the Nuclear Regulatory Commission (NRC). Consequently, common cause risk estimates based on this database are highly uncertain when applied to other industries. It is therefore important to account for a range of values for independent and CCF risk and to communicate the uncertainty to decision makers. An existing methodology for reducing CCF risk during design includes a checklist of more than 40 factors grouped into eight categories. Using this checklist, an approach is being investigated that quantitatively relates these factors to produce a beta factor estimate. In this example, the checklist is tailored to space launch vehicles, a quantitative approach is described, and an example of the method is presented.
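The beta-factor model mentioned in the abstract above can be sketched numerically. All values below are invented for illustration and are not taken from the NRC database:

```python
import math

# Illustrative beta-factor treatment of CCF: a fraction beta of the total
# failure rate is attributed to common cause, the remainder to independent
# failures, so redundancy only multiplies out the independent part.

def beta_factor_split(lam_total, beta):
    """Split a total failure rate into (independent, common-cause) parts."""
    return (1.0 - beta) * lam_total, beta * lam_total

def two_train_failure_prob(lam_total, beta, t):
    """Approximate P(both redundant trains fail by time t)."""
    lam_ind, lam_ccf = beta_factor_split(lam_total, beta)
    p_ind = 1.0 - math.exp(-lam_ind * t)   # one train fails independently
    p_ccf = 1.0 - math.exp(-lam_ccf * t)   # both trains fail by common cause
    return p_ind * p_ind + p_ccf           # rare-event approximation

lam, beta, t = 1e-4, 0.1, 100.0  # per-hour rate, beta factor, mission hours
p_naive = (1.0 - math.exp(-lam * t)) ** 2          # redundancy with no CCF
p_ccf_aware = two_train_failure_prob(lam, beta, t)
assert p_ccf_aware > 5 * p_naive  # the CCF term dominates redundant-system risk
```

This is why a defensible beta estimate matters: with even a modest beta, the common cause term, not the independent product, sets the redundant-system risk.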
Nonlinear viscoelasticity and generalized failure criterion for biopolymer gels
NASA Astrophysics Data System (ADS)
Divoux, Thibaut; Keshavarz, Bavand; Manneville, Sébastien; McKinley, Gareth
2016-11-01
Biopolymer gels display a multiscale microstructure that is responsible for their solid-like properties. Upon external deformation, these soft viscoelastic solids exhibit a generic nonlinear mechanical response characterized by pronounced stress- or strain-stiffening prior to irreversible damage and failure, most often through macroscopic fractures. Here we show on a model acid-induced protein gel that the nonlinear viscoelastic properties of the gel can be described in terms of a 'damping function' which predicts the gel mechanical response quantitatively up to the onset of macroscopic failure. Using a nonlinear integral constitutive equation built upon the experimentally-measured damping function in conjunction with power-law linear viscoelastic response, we derive the form of the stress growth in the gel following the start up of steady shear. We also couple the shear stress response with Bailey's durability criteria for brittle solids in order to predict the critical values of the stress σc and strain γc for failure of the gel, and how they scale with the applied shear rate. This provides a generalized failure criterion for biopolymer gels in a range of different deformation histories. This work was funded by the MIT-France seed fund and by the CNRS PICS-USA scheme (#36939). BK acknowledges financial support from Axalta Coating Systems.
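In generic notation (the symbols below are illustrative, not the authors' exact formulation), the approach combines a damping-function integral constitutive equation with a power-law relaxation modulus and Bailey's durability criterion:

```latex
% Power-law linear viscoelastic response and nonlinear damping function h(\gamma):
G(t) = S\, t^{-n}, \qquad
\sigma(t) = \int_{-\infty}^{t} m(t - t')\, h\!\big(\gamma(t,t')\big)\, \gamma(t,t')\, \mathrm{d}t',
\qquad m(s) = n S\, s^{-(n+1)}.

% Bailey's durability criterion: failure occurs at time t_f when the
% accumulated damage integral reaches unity,
\int_0^{t_f} \frac{\mathrm{d}t}{\tau\big(\sigma(t)\big)} = 1,
% which yields the scaling of the critical stress \sigma_c and strain
% \gamma_c with the applied shear rate \dot{\gamma}.
```

Combining the stress growth predicted by the integral equation after start-up of steady shear with the damage integral is what produces a failure criterion valid across deformation histories.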
Agudelo, M.
2012-01-01
Animal models of infection have been used to demonstrate the therapeutic failure of “bioequivalent” generic products, but their applicability for this purpose requires the accurate identification of those products that are truly bioequivalent. Here, we present data comparing one intravenous generic product of metronidazole with the innovator product in a neutropenic mouse thigh anaerobic infection model. Simultaneous experiments allowed comparisons (generic versus innovator) of potency and the concentration of the active pharmaceutical ingredient (API), analytical chemistry (liquid chromatography/mass spectrometry [LC/MS]), in vitro susceptibility testing, single-dose serum pharmacokinetics (PK) in infected mice, and in vivo pharmacodynamics (PD) against Bacteroides fragilis ATCC 25825 in synergy with Escherichia coli SIG-1 in the neutropenic mouse thigh anaerobic infection model. The Hill dose-response model followed by curve-fitting analysis was used to calculate and compare primary and secondary PD parameters. The generic and the innovator products were identical in terms of the concentration and potency of the API, chromatographic and spectrographic profiles, MIC and minimal bactericidal concentrations (MBC) (2.0 mg/liter), and mouse PK. We found no differences between products in bacteriostatic doses (BD) (15 to 22 mg/kg of body weight per day) or the doses needed to kill 1 log (1LKD) (21 to 29 mg/kg per day) or 2 logs (2LKD) (28 to 54 mg/kg per day) of B. fragilis under dosing schedules of every 12 h (q12h), q8h, or q6h. The area under the concentration-time curve over 24 h in the steady state divided by the MIC (AUC/MIC ratio) was the best PD index to predict the antibacterial efficacy of metronidazole (adjusted coefficient of determination [AdjR2] = 84.6%), and its magnitude to reach bacteriostasis in vivo (56.6 ± 5.17 h) or to kill the first (90.8 ± 9.78 h) and second (155.5 ± 22.2 h) logs was the same for both products. 
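The Hill dose-response fit used to extract the PD parameters above can be sketched as follows. The parameter values and the coarse grid search are illustrative only; they are not the metronidazole estimates from the study, which used proper curve-fitting:

```python
# Sketch of the Hill (sigmoid Emax) dose-response model used to compare PD
# parameters between generic and innovator products.

def hill(dose, e0, emax, ed50, h):
    """Effect (e.g., change in log10 CFU) as a function of daily dose."""
    return e0 + emax * dose**h / (ed50**h + dose**h)

# Synthetic "observations" generated from known parameters...
true = dict(e0=1.0, emax=-4.0, ed50=25.0, h=2.0)
doses = [5, 10, 20, 40, 80, 160]
obs = [hill(d, **true) for d in doses]

# ...recovered with a coarse grid search on ED50 and the Hill coefficient
# (a real analysis would use nonlinear least squares on all four parameters).
best = min(
    ((ed50, h) for ed50 in range(5, 60) for h in (1.0, 1.5, 2.0, 2.5, 3.0)),
    key=lambda p: sum((hill(d, true["e0"], true["emax"], p[0], p[1]) - o) ** 2
                      for d, o in zip(doses, obs)),
)
assert best == (25, 2.0)  # the grid search recovers the generating parameters
```

Therapeutic equivalence then amounts to showing that both products yield statistically indistinguishable fitted parameters (BD, 1LKD, 2LKD, AUC/MIC targets).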
Animal models of infection allow a thorough demonstration of the therapeutic equivalence of generic antimicrobials. PMID:22330928
Faunce, Thomas A
2007-01-01
Industrial renewal in the bio/nanopharma sector is important for the long term strength of the Australian economy and for the health of its citizens. A variety of factors, however, may have caused inadequate attention to focus on systematically promoting domestic generic and small biotechnology manufacturers in Australian health policy. Despite recent clarifications of 'springboarding' capacity in intellectual property legislation, federal government requirements for specific generic price reductions on market entry and the potential erosion of reference pricing through new F1 and F2 categories for the purposes of Pharmaceutical Benefits Scheme (PBS) assessments, do not appear to be coherently designed to sustainably position this industry sector in 'biologics,' nanotherapeutics and pharmacogenetics. There also appears to have been little attention paid in this context to policies fostering industry sustainability and public affordability (as encouraged by the National Medicines Policy). One notable example is the failure to consider facilitating mutual exchanges on regulatory assessment of health technology safety and cost-effectiveness (including reference pricing) in the context of ongoing free trade negotiations between Australia and China (the latter soon to possess the world's largest generic pharmaceutical manufacturing capacity). The importance of a thriving Australian domestic generic pharmaceutical and bio/nano tech industry, in terms of biosecurity, similarly appears to have been given insufficient policy attention. Reasons for such policy oversights may relate to increasing interrelationships between generic and 'brand-name' manufacturers and the scale of investment required for the Australian generics and bio/nano technology sector to be a significant driver of local production.
It might also result from singularly effective lobbying pressure exerted by Medicines Australia, the 'brand-name' pharmaceutical industry association, utilising controversial interpretations of reward of pharmaceutical 'innovation' provisions in the Australia-US Free Trade Agreement (AUSFTA) through the policy-development mechanisms of the AUSFTA Medicines Working Group and most recently an Innovative Medicines Working Group with the Department of Health and Ageing. This paper critically analyses such arguments in the context of emerging challenges for sustainable industrial renewal in Australia's bio/nanopharma sector. PMID:17543114
Yokozawa, T; Dong, E; Oura, H
1997-02-01
The effects of a green tea tannin mixture and its individual tannin components on methylguanidine were examined in rats with renal failure. The green tea tannin mixture caused a dose-dependent decrease in methylguanidine, a substance which accumulates in the blood with the progression of renal failure. Among individual tannin components, the effect was most conspicuous with (-)-epigallocatechin 3-O-gallate and (-)-epicatechin 3-O-gallate, while other components not linked to gallic acid showed only weak effects. Thus, the effect on methylguanidine was found to vary among different types of tannin.
Acoustic emissions (AE) monitoring of large-scale composite bridge components
NASA Astrophysics Data System (ADS)
Velazquez, E.; Klein, D. J.; Robinson, M. J.; Kosmatka, J. B.
2008-03-01
Acoustic Emissions (AE) monitoring has been successfully used with composite structures both to locate damage and to give a measure of damage accumulation. The current experimental study uses AE to monitor large-scale composite modular bridge components. The components consist of a carbon/epoxy beam structure as well as a composite-to-metallic bonded/bolted joint. The bonded joints consist of double-lap aluminum splice plates bonded and bolted to carbon/epoxy laminates representing the tension rail of a beam. The AE system monitors the bridge component during failure loading to assess failure progression, with time-of-arrival analysis giving insight into the origins of the failures. In addition, a feature in the AE data called Cumulative Acoustic Emission (CAE) counts is used to estimate the severity and rate of damage accumulation. For the bolted/bonded joints, the AE data are used to interpret the source and location of the damage that induced failure in the joint. These results are used to investigate the use of bolts in conjunction with the bonded joint. A description of each of the components (beam and joint) is given with AE results. A summary of lessons learned for AE testing of large composite structures, as well as insight into failure progression and location, is presented.
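The cumulative-count feature described above is simple to illustrate. The hit counts and the onset threshold below are invented, not data from the bridge tests:

```python
# Toy illustration of the Cumulative Acoustic Emission (CAE) count feature:
# summing per-load-step AE hit counts and flagging when the damage-accumulation
# rate jumps, as a crude indicator of approaching failure.

hits_per_step = [2, 3, 2, 4, 3, 5, 12, 30, 55]  # AE hits recorded per load step

cae, total = [], 0
for h in hits_per_step:
    total += h
    cae.append(total)          # running cumulative AE count

# rate of damage accumulation = first difference of the CAE curve
rates = [b - a for a, b in zip(cae, cae[1:])]
onset = next(i for i, r in enumerate(rates, start=1) if r > 3 * rates[0])

assert cae[-1] == sum(hits_per_step)
assert onset == 6  # rapid accumulation begins at load step 6 in this toy data
```

A sharp upturn in the CAE slope is the kind of signature used to estimate the severity and rate of damage accumulation before macroscopic failure.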
Woods-Giscombé, Cheryl L.; Lobel, Marci
2008-01-01
Based on prior research and theory, the authors constructed a multidimensional model of stress in African American women comprised of race-related, gender-related, and generic stress. Exposure to and appraisal of these three types of stress were combined into a higher-order global stress factor. Using structural equation modeling, the fit of this stress factor and its ability to predict distress symptoms were examined in 189 socioeconomically diverse African American women aged 21 to 78. Results support the multidimensional conceptualization and operationalization of stress. Race-related, gender-related, and generic stress contributed equally to the global stress factor, and global stress predicted a significant amount of variance in distress symptoms and intensity. This model exhibited better fit than a model without a global stress factor, in which each stress component predicted distress directly. Furthermore, race-related, gender-related, and generic stress did not contribute to distress beyond their representation in the global stress factor. These findings illustrate that stress related to central elements of identity, namely race and gender, cohere with generic stress to define the stress experience of African American women. PMID:18624581
Model-Based Fault Diagnosis: Performing Root Cause and Impact Analyses in Real Time
NASA Technical Reports Server (NTRS)
Figueroa, Jorge F.; Walker, Mark G.; Kapadia, Ravi; Morris, Jonathan
2012-01-01
Generic, object-oriented fault models, built according to causal-directed graph theory, have been integrated into an overall software architecture dedicated to monitoring and predicting the health of mission- critical systems. Processing over the generic fault models is triggered by event detection logic that is defined according to the specific functional requirements of the system and its components. Once triggered, the fault models provide an automated way for performing both upstream root cause analysis (RCA), and for predicting downstream effects or impact analysis. The methodology has been applied to integrated system health management (ISHM) implementations at NASA SSC's Rocket Engine Test Stands (RETS).
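The root cause and impact analyses described above reduce to reachability queries over a causal directed graph. The graph and component names below are invented for illustration, not taken from the RETS implementation:

```python
# Minimal sketch of root-cause analysis (upstream traversal) and impact
# analysis (downstream traversal) over a causal directed graph of fault modes.
from collections import defaultdict

edges = [  # cause -> effect
    ("valve_stuck", "low_flow"),
    ("pump_degraded", "low_flow"),
    ("low_flow", "low_tank_pressure"),
    ("low_tank_pressure", "engine_underperformance"),
]

fwd, rev = defaultdict(set), defaultdict(set)
for cause, effect in edges:
    fwd[cause].add(effect)
    rev[effect].add(cause)

def reach(graph, start):
    """All nodes reachable from start (depth-first over the causal graph)."""
    seen, stack = set(), [start]
    while stack:
        for nxt in graph[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# upstream root-cause analysis from an observed anomaly...
assert reach(rev, "low_tank_pressure") == {"low_flow", "valve_stuck", "pump_degraded"}
# ...and downstream impact analysis from a detected fault
assert reach(fwd, "valve_stuck") == {"low_flow", "low_tank_pressure", "engine_underperformance"}
```

Event-detection logic decides *when* to run these traversals; the graph itself encodes *what* could explain or be affected by the event.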
Low-speed aerodynamic characteristics of a generic forward-swept-wing aircraft
NASA Technical Reports Server (NTRS)
Ross, J. C.; Matarazzo, A. D.
1982-01-01
Low-speed wind-tunnel tests were performed on a generic forward-swept-wing aircraft model in the 7- by 10-Foot Wind Tunnel (No. 2) at Ames Research Center. The effects of various configurational changes and control-surface deflections on the performance of the model were measured. Six-component force measurements were augmented by flow-visualization photographs, using both surface oil-flow and tufts. It was found that the tendency toward premature root separation on the forward-swept wing could be reduced by use of either canards or leading-edge wing strakes and that differential canard deflections can be used to produce a direct side-force control.
Hybrid stretchable circuits on silicone substrate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, A., E-mail: adam.1.robinson@nokia.com; Aziz, A., E-mail: a.aziz1@lancaster.ac.uk; Liu, Q.
When rigid and stretchable components are integrated onto a single elastic carrier substrate, large strain heterogeneities appear in the vicinity of the deformable-non-deformable interfaces. In this paper, we report on a generic approach to manufacture hybrid stretchable circuits where commercial electronic components can be mounted on a stretchable circuit board. Similar to printed circuit board development, the components are electrically bonded on the elastic substrate and interconnected with stretchable electrical traces. The substrate—a silicone matrix carrying concentric rigid disks—ensures both the circuit elasticity and the mechanical integrity of the most fragile materials.
Guyen, Olivier; Lewallen, David G; Cabanela, Miguel E
2008-07-01
The Osteonics constrained tripolar implant has been one of the most commonly used options to manage recurrent instability after total hip arthroplasty. Mechanical failures were expected and have been reported. The purpose of this retrospective review was to identify the observed modes of failure of this device. Forty-three failed Osteonics constrained tripolar implants were revised at our institution between September 1997 and April 2005. All revisions related to the constrained acetabular component only were considered as failures. All of the devices had been inserted for recurrent or intraoperative instability during revision procedures. Seven different methods of implantation were used. Operative reports and radiographs were reviewed to identify the modes of failure. The average time to failure of the forty-three implants was 28.4 months. A total of five modes of failure were observed: failure at the bone-implant interface (type I), which occurred in eleven hips; failure at the mechanisms holding the constrained liner to the metal shell (type II), in six hips; failure of the retaining mechanism of the bipolar component (type III), in ten hips; dislocation of the prosthetic head at the inner bearing of the bipolar component (type IV), in three hips; and infection (type V), in twelve hips. The mode of failure remained unknown in one hip that had been revised at another institution. The Osteonics constrained tripolar total hip arthroplasty implant is a complex device involving many parts. We showed that failure of this device can occur at most of its interfaces. It would therefore appear logical to limit its application to salvage situations.
Might generic OCs create contraceptive price war?
1987-02-01
Genora 1/35 and 1/50, the 1st generic oral contraceptives (OCs) in the world, are now being marketed in the US. Clinicians interviewed by "Contraceptive Technology Update" (CTU) offer differing opinions as to what this new OC may mean in the marketplace. Products of Rugby Laboratories, the pills are copy products of Ortho Pharmaceutical's ON 1/35 and ON 1/50 formulations. Most clinicians believe that Genora's success or failure in the OC market depends on its eventual retail price. The price difference of $3-$4 may be sufficiently substantial for retailers to charge less for the generic OCs. If that is the case, many doctors may prescribe a pill which will save their patients $4/month. Dr. Mildred Hanson, a Minneapolis gynecologist/obstetrician, feels any cost savings from Genora will have a significant impact on the OC market. She suggests that the less expensive OCs will catch the attention of health maintenance organizations (HMOs) and the business of women who participate in such health plans. Yet James Burns, director of family planning services for the Hartford City Health Department, thinks that even a full-scale retail price war won't have much effect from a clinic standpoint. He reports that clinics are able to obtain contraceptive supplies rather inexpensively through the contracting system. Hanson also expressed doubt over the potential popularity of Genora 1/50 as clinical concerns about the effects of combined OCs on serum lipid levels and carbohydrate metabolism have resulted in a nationwide push toward OCs containing less than 50 micrograms of estrogen. She indicated concern that declines in pharmaceutical house products from pricing competition with generic pills might have a negative impact on contraceptive research and development. Dick Haskitt, director of business planning for Syntex Laboratories, Inc., who will produce the OCs for Rugby, reports that their market research shows that people are very interested in having a generic OC available.
Initially, Genora will be marketed only to retail pharmacies, wholesalers, and drug chains.
Blowout Prevention System Events and Equipment Component Failures : 2016 SafeOCS Annual Report
DOT National Transportation Integrated Search
2017-09-22
The SafeOCS 2016 Annual Report, produced by the Bureau of Transportation Statistics (BTS), summarizes blowout prevention (BOP) equipment failures on marine drilling rigs in the Outer Continental Shelf. It includes an analysis of equipment component f...
2006-12-01
... control action, engine component life usage, and designing an intelligent control algorithm embedded in the FADEC. This paper evaluates the LEC, based on critical components research, to demonstrate how an... simulation code for each simulator. One is typically configured to operate as a Full-Authority Digital Electronic Controller (FADEC)...
NASA Technical Reports Server (NTRS)
Monaghan, Mark W.; Gillespie, Amanda M.
2013-01-01
During the Shuttle era, NASA utilized a failure reporting system called Problem Reporting and Corrective Action (PRACA); its purpose was to identify and track system non-conformance. Over the years the PRACA system evolved from a relatively simple way to identify system problems into a very complex tracking and report-generating database. It became the primary method to categorize any and all anomalies, from corrosion to catastrophic failure. The systems documented in PRACA range from flight hardware to ground or facility support equipment. While the PRACA system is complex, it does capture the failure modes, times of occurrence, lengths of system delay, parts repaired or replaced, and corrective actions performed. The difficulty lies in mining those data and then using them to estimate component, Line Replaceable Unit (LRU), and system reliability metrics. In this paper, we identify a methodology to categorize qualitative data from the ground system PRACA database for common ground or facility support equipment. A heuristic developed for reviewing the PRACA data then determines which reports identify a credible failure. These data are then used to determine inter-arrival times and to estimate a reliability metric for a repairable component or LRU. This analysis is used to determine the failure modes of the equipment, estimate the probability of each component failure mode, and support various quantitative techniques for repairable system analysis. The result is an effective and concise reliability estimate for components used in manned spaceflight operations. The advantage is that the components or LRUs are evaluated in the same environment and conditions that occur during the launch process.
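The inter-arrival-time step described in this abstract can be sketched with invented report dates (the exponential model and the 30-day window below are assumptions for illustration):

```python
from datetime import date
import math

# Illustrative credible-failure report dates for one repairable LRU.
failure_dates = [date(2005, 3, 1), date(2006, 1, 15), date(2007, 9, 2),
                 date(2009, 4, 20), date(2010, 2, 11)]

# inter-arrival times between successive credible failures
inter_arrival_days = [(b - a).days for a, b in zip(failure_dates, failure_dates[1:])]

mtbf_days = sum(inter_arrival_days) / len(inter_arrival_days)  # mean time between failures
lam = 1.0 / mtbf_days                                          # failures per day

# reliability over a 30-day window under an assumed exponential model
reliability_30d = math.exp(-lam * 30)
assert sum(inter_arrival_days) == (failure_dates[-1] - failure_dates[0]).days
assert 0.0 < reliability_30d < 1.0
```

In practice the heuristic screening of PRACA reports matters as much as the arithmetic: only credible failures should enter the inter-arrival sequence.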
Analytical Method to Evaluate Failure Potential During High-Risk Component Development
NASA Technical Reports Server (NTRS)
Tumer, Irem Y.; Stone, Robert B.; Clancy, Daniel (Technical Monitor)
2001-01-01
Communicating failure mode information during design and manufacturing is a crucial task for failure prevention. Most processes use Failure Modes and Effects types of analyses, as well as prior knowledge and experience, to determine the potential modes of failures a product might encounter during its lifetime. When new products are being considered and designed, this knowledge and information is expanded upon to help designers extrapolate based on their similarity with existing products and the potential design tradeoffs. This paper makes use of similarities and tradeoffs that exist between different failure modes based on the functionality of each component/product. In this light, a function-failure method is developed to help the design of new products with solutions for functions that eliminate or reduce the potential of a failure mode. The method is applied to a simplified rotating machinery example in this paper, and is proposed as a means to account for helicopter failure modes during design and production, addressing stringent safety and performance requirements for NASA applications.
Enhanced Component Performance Study: Turbine-Driven Pumps 1998–2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-11-01
This report presents an enhanced performance evaluation of turbine-driven pumps (TDPs) at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The TDP failure modes considered are failure to start (FTS), failure to run for less than or equal to one hour (FTR≤1H), failure to run for more than one hour (FTR>1H), and, for normally running systems, FTS and failure to run (FTR). The component reliability estimates and the reliability data are trended for the most recent 10-year period, while yearly reliability estimates are provided for the entire active period. Statistically significant increasing trends were identified for TDP unavailability, for the frequency of start demands for standby TDPs, and for run hours in the first hour after start. Statistically significant decreasing trends were identified for start demands for normally running TDPs and for run hours per reactor critical year for normally running TDPs.
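The demand-based estimates behind a failure mode such as FTS are typically Bayes updates with the Jeffreys noninformative prior, as is conventional in these component performance studies. The failure and demand counts below are invented, not TDP data from the report:

```python
# Sketch of a failure-to-start (FTS) probability estimate: a Bayes update of
# the Jeffreys prior Beta(0.5, 0.5) with f failures in d demands.

failures, demands = 3, 1200

alpha = 0.5 + failures             # posterior Beta parameters
beta = 0.5 + (demands - failures)
fts_mean = alpha / (alpha + beta)  # posterior mean = (f + 0.5) / (d + 1)

assert abs(fts_mean - (failures + 0.5) / (demands + 1)) < 1e-15
```

The full posterior, not just the mean, is what supports the uncertainty intervals and trend tests reported in the study.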
Murray, James L.; Hu, Peixu; Shafer, David A.
2015-01-01
We have developed novel probe systems for real-time PCR that provide higher specificity, greater sensitivity, and lower cost relative to dual-labeled probes. The seven DNA Detection Switch (DDS)-probe systems reported here employ two interacting polynucleotide components: a fluorescently labeled probe and a quencher antiprobe. High-fidelity detection is achieved with three DDS designs: two internal probes (internal DDS and Flip probes) and a primer probe (ZIPR probe), wherein each probe is combined with a carefully engineered, slightly mismatched, error-checking antiprobe. The antiprobe blocks off-target detection over a wide range of temperatures and facilitates multiplexing. Other designs (Universal probe, Half-Universal probe, and MacMan probe) use generic components that enable low-cost detection. Finally, single-molecule G-Force probes employ guanine-mediated fluorescent quenching by forming a hairpin between adjacent C-rich and G-rich sequences. Examples provided show how these probe technologies discriminate drug-resistant Mycobacterium tuberculosis mutants, Escherichia coli O157:H7, oncogenic EGFR deletion mutations, hepatitis B virus, influenza A/B strains, and single-nucleotide polymorphisms in the human VKORC1 gene. PMID:25307756
NASA Technical Reports Server (NTRS)
Campbell, Colin
2015-01-01
As the Shuttle/ISS EMU Program exceeds 35 years in duration and is still supporting the needs of the International Space Station (ISS), a critical benefit of such a long running program with thorough documentation of system and component failures is the ability to study and learn from those failures when considering the design of the next generation space suit. Study of the subject failure history leads to changes in the Advanced EMU Portable Life Support System (PLSS) schematic, selected component technologies, as well as the planned manner of ground testing. This paper reviews the Shuttle/ISS EMU failure history and discusses the implications to the AEMU PLSS.
Shuttle/ISS EMU Failure History and the Impact on Advanced EMU PLSS Design
NASA Technical Reports Server (NTRS)
Campbell, Colin
2011-01-01
As the Shuttle/ISS EMU Program exceeds 30 years in duration and is still successfully supporting the needs of the International Space Station (ISS), a critical benefit of such a long running program with thorough documentation of system and component failures is the ability to study and learn from those failures when considering the design of the next generation space suit. Study of the subject failure history leads to changes in the Advanced EMU Portable Life Support System (PLSS) schematic, selected component technologies, as well as the planned manner of ground testing. This paper reviews the Shuttle/ISS EMU failure history and discusses the implications to the AEMU PLSS.
Reliability Quantification of Advanced Stirling Convertor (ASC) Components
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward
2010-01-01
The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system- and component-level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test-data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationships among the design variables based on physics, mechanics, material behavior models, the interaction of different components, and their respective disciplines such as structures, materials, fluids, thermal, mechanical, and electrical. In addition, these models are based on the available test data; they can be updated and the analysis refined as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has demonstrated itself to be superior to conventional reliability approaches that utilize failure rates derived from similar equipment or expert judgment.
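A physics-of-failure assessment of this kind is often evaluated by Monte Carlo sampling of the design variables against a limit state. The distributions and the stress-vs-strength limit state below are illustrative assumptions, not the ASC models:

```python
import random

# Hedged sketch of a physics-based reliability estimate: sample design
# variables from assumed distributions, evaluate a limit-state model, and
# estimate the probability of failure by Monte Carlo.
random.seed(1)

def one_trial():
    stress = random.gauss(100.0, 10.0)    # applied stress (MPa), assumed normal
    strength = random.gauss(150.0, 12.0)  # material strength (MPa), assumed normal
    return strength <= stress             # limit-state violation = failure

n = 100_000
p_fail = sum(one_trial() for _ in range(n)) / n
assert 0.0 < p_fail < 0.01  # well-separated stress/strength -> rare failure
```

Replacing the toy limit state with coupled structural, thermal, and material models, and updating the input distributions as test data arrive, is the essence of the approach the abstract describes.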
DEPEND - A design environment for prediction and evaluation of system dependability
NASA Technical Reports Server (NTRS)
Goswami, Kumar K.; Iyer, Ravishankar K.
1990-01-01
The development of DEPEND, an integrated simulation environment for the design and dependability analysis of fault-tolerant systems, is described. DEPEND models both hardware and software components at a functional level, and allows automatic failure injection to assess system performance and reliability. It relieves the user of the work needed to inject failures, maintain statistics, and output reports. The automatic failure injection scheme is geared toward evaluating a system under high stress (workload) conditions. The failures that are injected can affect both hardware and software components. To illustrate the capability of the simulator, a distributed system which employs a prediction-based, dynamic load-balancing heuristic is evaluated. Experiments were conducted to determine the impact of failures on system performance and to identify the failures to which the system is especially susceptible.
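Automated failure injection of the kind DEPEND performs can be illustrated with a toy simulation. The node names, injection schedule, and workload below are invented, and the dispatch rule is a stand-in for the prediction-based heuristic:

```python
# Toy illustration of automated failure injection during simulation: tasks are
# dispatched to the least-loaded node while an injector takes node "B" down at
# a scheduled time; the final assignment counts show the impact.

nodes = {"A": 0, "B": 0, "C": 0}   # node -> tasks assigned so far
failed_at = {"B": 40}              # injection schedule: node -> failure time

for t in range(120):               # one task arrives per time step
    alive = [n for n in sorted(nodes) if failed_at.get(n, float("inf")) > t]
    target = min(alive, key=lambda n: nodes[n])  # least-loaded dispatch
    nodes[target] += 1

# B receives no work after its injected failure at t = 40
assert nodes["B"] < nodes["A"] and nodes["B"] < nodes["C"]
assert sum(nodes.values()) == 120
```

A real environment would also inject failures into software components, collect statistics automatically, and sweep injection times to find the workload conditions under which the system is most susceptible.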
10 CFR 34.101 - Notifications.
Code of Federal Regulations, 2010 CFR
2010-01-01
... written report to the NRC's Office of Federal and State Materials and Environmental Management Programs... shielded position and secure it in this position; or (3) Failure of any component (critical to safe... overexposure submitted under 10 CFR 20.2203 which involves failure of safety components of radiography...
A model for the progressive failure of laminated composite structural components
NASA Technical Reports Server (NTRS)
Allen, D. H.; Lo, D. C.
1991-01-01
Laminated continuous fiber polymeric composites are capable of sustaining substantial load induced microstructural damage prior to component failure. Because this damage eventually leads to catastrophic failure, it is essential to capture the mechanics of progressive damage in any cogent life prediction model. For the past several years the authors have been developing one solution approach to this problem. In this approach the mechanics of matrix cracking and delamination are accounted for via locally averaged internal variables which account for the kinematics of microcracking. Damage progression is predicted by using phenomenologically based damage evolution laws which depend on the load history. The result is a nonlinear and path dependent constitutive model which has previously been implemented to a finite element computer code for analysis of structural components. Using an appropriate failure model, this algorithm can be used to predict component life. In this paper the model will be utilized to demonstrate the ability to predict the load path dependence of the damage and stresses in plates subjected to fatigue loading.
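A phenomenological damage evolution law of the kind described can be sketched generically (this is not the authors' actual constitutive model; the law and constants below are hypothetical): let a scalar damage variable D grow each load cycle at a rate that increases as damage accumulates, and count cycles until D reaches 1.

```python
def cycles_to_failure(delta_s, c=1.0e-8, m=2.0, d0=0.0):
    """Integrate the hypothetical evolution law dD/dN = c * delta_s**m / (1 - D)
    one load cycle at a time until the damage variable D reaches 1."""
    d, n = d0, 0
    while d < 1.0 and n < 10_000_000:
        d += c * delta_s**m / max(1.0 - d, 1.0e-9)
        n += 1
    return n

# Stress amplitude 100 gives c * delta_s**m = 1e-4; the closed form for this
# law is N_f = 1 / (2 * c * delta_s**m) = 5000 cycles.
n_sim = cycles_to_failure(100.0)
```

The per-cycle integration tracks the closed-form life closely, and the same loop structure extends to load-history-dependent laws where no closed form exists.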
45 CFR 310.1 - What definitions apply to this part?
Code of Federal Regulations, 2010 CFR
2010-10-01
... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...
45 CFR 310.1 - What definitions apply to this part?
Code of Federal Regulations, 2013 CFR
2013-10-01
... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...
45 CFR 310.1 - What definitions apply to this part?
Code of Federal Regulations, 2014 CFR
2014-10-01
... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...
45 CFR 310.1 - What definitions apply to this part?
Code of Federal Regulations, 2011 CFR
2011-10-01
... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...
45 CFR 310.1 - What definitions apply to this part?
Code of Federal Regulations, 2012 CFR
2012-10-01
... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...
The Generic Spacecraft Analyst Assistant (gensaa): a Tool for Developing Graphical Expert Systems
NASA Technical Reports Server (NTRS)
Hughes, Peter M.
1993-01-01
During numerous contacts with a satellite each day, spacecraft analysts must closely monitor real-time data. The analysts must watch for combinations of telemetry parameter values, trends, and other indications that may signify a problem or failure. As the satellites become more complex and the number of data items increases, this task is becoming increasingly difficult for humans to perform at acceptable performance levels. At NASA GSFC, fault-isolation expert systems are in operation supporting this data monitoring task. Based on the lessons learned during these initial efforts in expert system automation, a new domain-specific expert system development tool named the Generic Spacecraft Analyst Assistant (GenSAA) is being developed to facilitate the rapid development and reuse of real-time expert systems to serve as fault-isolation assistants for spacecraft analysts. Although initially domain-specific in nature, this powerful tool will readily support the development of highly graphical expert systems for data monitoring purposes throughout the space and commercial industry.
Failure detection and identification
NASA Technical Reports Server (NTRS)
Massoumnia, Mohammad-Ali; Verghese, George C.; Willsky, Alan S.
1989-01-01
Using the geometric concept of an unobservability subspace, a solution is given to the problem of detecting and identifying control system component failures in linear, time-invariant systems. Conditions are developed for the existence of a causal, linear, time-invariant processor that can detect and uniquely identify a component failure, first for the case where components can fail simultaneously, and then for the case where they fail only one at a time. Explicit design algorithms are provided when these conditions are satisfied. In addition to time-domain solvability conditions, frequency-domain interpretations of the results are given, and connections are drawn with results already available in the literature.
Enhanced Component Performance Study: Air-Operated Valves 1998-2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-11-01
This report presents a performance evaluation of air-operated valves (AOVs) at U.S. commercial nuclear power plants. The data used in this study are based on operating experience failure reports from fiscal years 1998 through 2014 for component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The AOV failure modes considered are failure to open/close, failure to operate or control, and spurious operation. The component reliability estimates and the reliability data are trended for the most recent 10-year period, while yearly reliability estimates are provided for the entire active period. One statistically significant trend was observed in the AOV data: the frequency of demands per reactor year for high-demand valves (those with more than twenty demands per year) recording the fail-to-open or fail-to-close failure modes was found to be decreasing, by about three percent over the ten-year period trended.
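Demand-related failure probabilities in studies like this one are typically estimated from failure and demand counts; a common convention in such component performance studies is the Jeffreys posterior mean, (f + 0.5)/(d + 1), from a Beta(0.5, 0.5) prior with a binomial likelihood. A sketch with hypothetical counts (not figures from this report):

```python
def jeffreys_mean(failures, demands):
    """Posterior mean of a demand failure probability under the
    Jeffreys Beta(0.5, 0.5) prior with a binomial likelihood."""
    return (failures + 0.5) / (demands + 1.0)

# Hypothetical pooled experience: 12 fail-to-open/close events in 80,000 demands
p_ftoc = jeffreys_mean(12, 80_000)   # about 1.6e-4 per demand
```

With zero observed failures and zero demands the estimate falls back to the prior mean of 0.5, which is why the convention behaves sensibly for sparsely demanded components.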
Bhojani, Upendra; Kolsteren, Patrick; Criel, Bart; De Henauw, Stefaan; Beerenahally, Thriveni S; Verstraeten, Roos; Devadasan, Narayanan
2015-01-01
Many efficacious health service interventions to improve diabetes care are known. However, there is little evidence on whether such interventions are effective while delivered in real-world resource-constrained settings. To evaluate an intervention aimed at improving diabetes care using the RE-AIM (reach, efficacy/effectiveness, adoption, implementation, and maintenance) framework. A quasi-experimental study was conducted in a poor urban neighborhood in South India. Four health facilities delivered the intervention (n=163 diabetes patients) and the four matched facilities served as control (n=154). The intervention included provision of culturally appropriate education to diabetes patients, use of generic medications, and standard treatment guidelines for diabetes management. Patients were surveyed before and after the 6-month intervention period. We did field observations and interviews with the doctors at the intervention facilities. Quantitative data were used to assess the reach of the intervention and its effectiveness on patients' knowledge, practice, healthcare expenditure, and glycemic control through a difference-in-differences analysis. Qualitative data were analyzed thematically to understand adoption, implementation, and maintenance of the intervention. Reach: Of those who visited intervention facilities, 52.3% were exposed to the education component and only 7.2% were prescribed generic medications. The doctors rarely used the standard treatment guidelines for diabetes management. The intervention did not have a statistically and clinically significant impact on the knowledge, healthcare expenditure, or glycemic control of the patients, with marginal reduction in their practice score. Adoption: All the facilities adopted the education component, while all but one facility adopted the prescription of generic medications. 
Implementation: There was poor implementation of the intervention, particularly with regard to the use of generic medications and the standard treatment guidelines. Doctors' concerns about the efficacy, quality, availability, and acceptability by patients of generic medications explained limited prescriptions of generic medications. The patients' perception that ailments should be treated through medications limited the use of non-medical management by the doctors in early stages of diabetes. The other reason for the limited use of the standard treatment guidelines was that these doctors mainly provided follow-up care to patients who were previously put on a given treatment plan by specialists. Maintenance: The intervention facilities continued using posters and television monitors for health education after the intervention period. The use of generic medications and standard treatment guidelines for diabetes management remained very limited. Implementing efficacious health service intervention in a real-world resource-constrained setting is challenging and may not prove effective in improving patient outcomes. Interventions need to consider patients' and healthcare providers' experiences and perceptions and how macro-level policies translate into practice within local health systems.
Cameron, A; Ewen, M; Ross-Degnan, D; Ball, D; Laing, R
2009-01-17
WHO and Health Action International (HAI) have developed a standardised method for surveying medicine prices, availability, affordability, and price components in low-income and middle-income countries. Here, we present a secondary analysis of medicine availability in 45 national and subnational surveys done using the WHO/HAI methodology. Data from 45 WHO/HAI surveys in 36 countries were adjusted for inflation or deflation and purchasing power parity. International reference prices from open international procurements for generic products were used as comparators. Results are presented for 15 medicines included in at least 80% of surveys and four individual medicines. Average public sector availability of generic medicines ranged from 29.4% to 54.4% across WHO regions. Median government procurement prices for 15 generic medicines were 1.11 times corresponding international reference prices, although purchasing efficiency ranged from 0.09 to 5.37 times international reference prices. Low procurement prices did not always translate into low patient prices. Private sector patients paid 9-25 times international reference prices for lowest-priced generic products and over 20 times international reference prices for originator products across WHO regions. Treatments for acute and chronic illness were largely unaffordable in many countries. In the private sector, wholesale mark-ups ranged from 2% to 380%, whereas retail mark-ups ranged from 10% to 552%. In countries where value added tax was applied to medicines, the amount charged varied from 4% to 15%. Overall, public and private sector prices for originator and generic medicines were substantially higher than would be expected if purchasing and distribution were efficient and mark-ups were reasonable. Policy options such as promoting generic medicines and alternative financing mechanisms are needed to increase availability, reduce prices, and improve affordability.
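The WHO/HAI methodology's central statistic, reported throughout the abstract above, is the median price ratio: local unit prices divided by the international reference unit price, summarized by the median. A minimal sketch (the prices below are hypothetical, not survey data):

```python
from statistics import median

def median_price_ratio(local_unit_prices, reference_unit_price):
    """WHO/HAI-style median price ratio: local unit prices (already
    converted to the reference currency) divided by the international
    reference unit price for the same medicine."""
    return median(p / reference_unit_price for p in local_unit_prices)

# Hypothetical unit prices for one generic medicine across five outlets,
# against a reference unit price of 0.05:
mpr = median_price_ratio([0.10, 0.12, 0.30, 0.08, 0.50], 0.05)
```

An MPR of 1.0 means patients pay the international reference price; the abstract's figure of lowest-priced generics selling at 9-25 times reference prices corresponds to MPRs of 9-25.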
NASA Technical Reports Server (NTRS)
Ko, William L.; Chen, Tony
2006-01-01
The previously developed Ko closed-form aging theory has been reformulated into a more compact mathematical form for easier application. A new equivalent loading theory and empirical loading theories have also been developed and incorporated into the revised Ko aging theory for the prediction of a safe operational life of airborne failure-critical structural components. The new set of aging and loading theories were applied to predict the safe number of flights for the B-52B aircraft to carry a launch vehicle, the structural life of critical components consumed by load excursion to proof load value, and the ground-sitting life of B-52B pylon failure-critical structural components. A special life prediction method was developed for the preflight predictions of operational life of failure-critical structural components of the B-52H pylon system, for which no flight data are available.
Arnould, Carlyne; Vandervelde, Laure; Batcho, Charles Sèbiyo; Penta, Massimo; Thonnard, Jean-Louis
2012-01-01
Objectives Several ABILHAND Rasch-built manual ability scales were previously developed for chronic stroke (CS), cerebral palsy (CP), rheumatoid arthritis (RA), systemic sclerosis (SSc) and neuromuscular disorders (NMD). The present study aimed to explore the applicability of a generic manual ability scale unbiased by diagnosis and to study the nature of manual ability across diagnoses. Design Cross-sectional study. Setting Outpatient clinic homes (CS, CP, RA), specialised centres (CP), reference centres (CP, NMD) and university hospitals (SSc). Participants 762 patients from six diagnostic groups: 103 CS adults, 113 CP children, 112 RA adults, 156 SSc adults, 124 NMD children and 124 NMD adults. Primary and secondary outcome measures Manual ability as measured by the ABILHAND disease-specific questionnaires, diagnosis and nature (ie, uni-manual or bi-manual involvement and proximal or distal joints involvement) of the ABILHAND manual activities. Results The difficulties of most manual activities were diagnosis dependent. A principal component analysis highlighted that 57% of the variance in the item difficulty between diagnoses was explained by the symmetric or asymmetric nature of the disorders. A generic scale was constructed, from a metric point of view, with 11 items sharing a common difficulty among diagnoses and 41 items displaying a category-specific location (asymmetric: CS, CP; and symmetric: RA, SSc, NMD). This generic scale showed that CP and NMD children had significantly less manual ability than RA patients, who had significantly less manual ability than CS, SSc and NMD adults. However, the generic scale was less discriminative and responsive to small deficits than disease-specific instruments. Conclusions Our finding that most of the manual item difficulties were disease-dependent emphasises the danger of using generic scales without prior investigation of item invariance across diagnostic groups. 
Nevertheless, a generic manual ability scale could be developed by adjusting and accounting for activities perceived differently in various disorders. PMID:23117570
Callaghan, John J; O'Rourke, Michael R; Goetz, Devon D; Lewallen, David G; Johnston, Richard C; Capello, William N
2004-12-01
Constrained acetabular components have been used to treat certain cases of intraoperative instability and postoperative dislocation after total hip arthroplasty. We report our experience with a tripolar constrained component used in these situations since 1988. The outcomes of the cases where this component was used were analyzed for component failure, component loosening, and osteolysis. At average 10-year followup, for cases treated for intraoperative instability (2 cases) or postoperative dislocation (4 cases), the component failure rate was 6% (6 of 101 hips in 5 patients). For cases where the constrained liner was cemented into a fixed cementless acetabular shell, the failure rate was 7% (2 of 31 hips in 2 patients) at 3.9-year average followup. Use of a constrained liner was not associated with an increased osteolysis or aseptic loosening rate. This tripolar constrained acetabular liner provided total hip arthroplasty construct stability in most cases in which it was used for intraoperative instability or postoperative dislocation.
NASA Astrophysics Data System (ADS)
Eck, M.; Mukunda, M.
The proliferation of space vehicle launch sites and the projected utilization of these facilities portends an increase in the number of on-pad, ascent, and on-orbit solid-rocket motor (SRM) casings and liquid-rocket tanks which will randomly fail or will fail from range destruct actions. Beyond the obvious safety implications, these failures may have serious resource implications for mission system and facility planners. SRM-casing failures and liquid-rocket tankage failures result in the generation of large, high velocity fragments which may be serious threats to the safety of launch support personnel if proper bunkers and exclusion areas are not provided. In addition, these fragments may be indirect threats to the general public's safety if they encounter hazardous spacecraft payloads which have not been designed to withstand shrapnel of this caliber. They may also become threats to other spacecraft if, by failing on-orbit, they add to the ever increasing space-junk collision cross-section. Most prior attempts to assess the velocity of fragments from failed SRM casings have simply assigned the available chamber impulse to available casing and fuel mass and solved the resulting momentum balance for velocity. This method may predict a fragment velocity which is high or low by a factor of two depending on the ratio of fuel to casing mass extant at the time of failure. Recognizing the limitations of existing methods, the authors devised an analytical approach which properly partitions the available impulse to each major system-mass component. This approach uses the Physics International developed PISCES code to couple the forces generated by an Eulerian modeled gas flow field to a Lagrangian modeled fuel and casing system. The details of a predictive analytical modeling process as well as the development of normalized relations for momentum partition as a function of SRM burn time and initial geometry are discussed in this paper. 
Methods for applying similar modeling techniques to liquid-tankage-over-pressure failures are also discussed. These methods have been calibrated against observed SRM ascent failures and on-orbit tankage failures. Casing-quadrant sized fragments with velocities exceeding 100 m/s resulted from Titan 34D-SRM range destruct actions at 10 s mission elapsed time (MET). Casing-quadrant sized fragments with velocities of approx. 200 m/s resulted from STS-SRM range destruct actions at 110 s MET. Similar sized fragments for Ariane third stage and Delta second stage tankage were predicted to have maximum velocities of 260 and 480 m/s respectively. Good agreement was found between the predictions and observations for five specific events and it was concluded that the methods developed have good potential for use in predicting the fragmentation process of a number of generically similar casing and tankage systems.
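The naive momentum balance the authors criticize simply assigns the full chamber impulse to the combined casing-plus-fuel mass, so its factor-of-two error tracks the fuel-to-casing mass ratio at the time of failure. A back-of-envelope sketch with illustrative numbers (not the PISCES results):

```python
def naive_fragment_velocity(impulse, casing_mass, fuel_mass):
    """Simple momentum balance v = I / (m_casing + m_fuel); it ignores
    how the impulse actually partitions among gas, fuel, and casing."""
    return impulse / (casing_mass + fuel_mass)

# Same available impulse, early versus late in the burn (hypothetical masses):
v_early = naive_fragment_velocity(5.0e6, 10_000.0, 40_000.0)  # much fuel left
v_late = naive_fragment_velocity(5.0e6, 10_000.0, 15_000.0)   # most fuel burned
```

Because the extant fuel mass changes so much over the burn, the same balance predicts velocities differing by a factor of two; a momentum-partitioning analysis of the kind described is needed to resolve which mass actually carries the impulse.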
An oil fraction neural sensor developed using electrical capacitance tomography sensor data.
Zainal-Mokhtar, Khursiah; Mohamad-Saleh, Junita
2013-08-26
This paper presents novel research on the development of a generic intelligent oil fraction sensor based on Electrical Capacitance Tomography (ECT) data. An artificial Neural Network (ANN) has been employed as the intelligent system to sense and estimate oil fractions from the cross-sections of two-component flows comprising oil and gas in a pipeline. Previous works only focused on estimating the oil fraction in the pipeline based on fixed ECT sensor parameters. With fixed-design ECT sensors, an oil fraction neural sensor can be trained to deal only with ECT data from those particular sensor parameters; hence the neural sensor is not generic. This work focuses on the development of a generic neural oil fraction sensor based on training a Multi-Layer Perceptron (MLP) ANN with various ECT sensor parameters. On average, the proposed oil fraction neural sensor has been shown to give a mean absolute error of 3.05% for various ECT sensor sizes.
Overview of NASA Lewis Research Center free-piston Stirling engine activities
NASA Technical Reports Server (NTRS)
Slaby, J. G.
1984-01-01
A generic free-piston Stirling technology project is being conducted to develop technologies generic to both space power and terrestrial heat pump applications in a cooperative, cost-shared effort. The generic technology effort includes extensive parametric testing of a 1 kW free-piston Stirling engine (RE-1000), development of a free-piston Stirling performance computer code, design and fabrication under contract of a hydraulic output modification for RE-1000 engine tests, and a 1000-hour endurance test, under contract, of a 3 kWe free-piston Stirling/alternator engine. A newly initiated space power technology feasibility demonstration effort addresses the capability of scaling a free-piston Stirling/alternator system to about 25 kWe; developing thermodynamic cycle efficiency greater than or equal to 70 percent of Carnot at temperature ratios on the order of 1.5 to 2.0; achieving a power conversion unit specific weight of 6 kg/kWe; operating with noncontacting gas bearings; and dynamically balancing the system. Planned engine and component design and test efforts are described.
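The 70-percent-of-Carnot goal can be made concrete: for a heater-to-cooler absolute temperature ratio Tr, Carnot efficiency is 1 - 1/Tr, so the target cycle efficiency over the stated Tr range of 1.5 to 2.0 is straightforward to tabulate.

```python
def carnot_efficiency(temp_ratio):
    """Carnot efficiency for a heater/cooler absolute temperature ratio Tr."""
    return 1.0 - 1.0 / temp_ratio

def target_efficiency(temp_ratio, fraction_of_carnot=0.70):
    """Cycle efficiency goal expressed as a fraction of Carnot."""
    return fraction_of_carnot * carnot_efficiency(temp_ratio)

lo = target_efficiency(1.5)   # about 0.23 at Tr = 1.5
hi = target_efficiency(2.0)   # 0.35 at Tr = 2.0
```

So the stated goal corresponds to absolute cycle efficiencies of roughly 23 to 35 percent across the temperature-ratio range.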
SmartSIM - a virtual reality simulator for laparoscopy training using a generic physics engine.
Khan, Zohaib Amjad; Kamal, Nabeel; Hameed, Asad; Mahmood, Amama; Zainab, Rida; Sadia, Bushra; Mansoor, Shamyl Bin; Hasan, Osman
2017-09-01
Virtual reality (VR) training simulators have started playing a vital role in enhancing surgical skills, such as hand-eye coordination in laparoscopy, and practicing surgical scenarios that cannot be easily created using physical models. We describe a new VR simulator for basic training in laparoscopy, i.e. SmartSIM, which has been developed using a generic open-source physics engine called the simulation open framework architecture (SOFA). This paper describes the systems perspective of SmartSIM including design details of both hardware and software components, while highlighting the critical design decisions. Some of the distinguishing features of SmartSIM include: (i) an easy-to-fabricate custom-built hardware interface; (ii) use of a generic physics engine to facilitate wider accessibility of our work and flexibility in terms of using various graphical modelling algorithms and their implementations; and (iii) an intelligent and smart evaluation mechanism that facilitates unsupervised and independent learning. Copyright © 2016 John Wiley & Sons, Ltd.
From Domain Specific Languages to DEVS Components: Application to Cognitive M&S
2011-04-01
...that is devoid of any DEVS and programming language constructs (Figure 4). The key idea being that domain specialists need not delve into the DEVS world to ... DSL. DSLs can be created using many available tools and technologies such as: Generic Modeling Environment (GME) [23], Xtext, Ruby, Scala and many
System Lifetimes, The Memoryless Property, Euler's Constant, and Pi
ERIC Educational Resources Information Center
Agarwal, Anurag; Marengo, James E.; Romero, Likin Simon
2013-01-01
A "k"-out-of-"n" system functions as long as at least "k" of its "n" components remain operational. Assuming that component failure times are independent and identically distributed exponential random variables, we find the distribution of system failure time. After some examples, we find the limiting…
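With iid exponential component lifetimes, a k-out-of-n system fails at the (n-k+1)-th component failure, and the memoryless property reduces its expected lifetime to a sum of independent exponential gaps: E[T] = (1/lambda) * (1/n + 1/(n-1) + ... + 1/k). A sketch comparing that closed form to simulation (parameters illustrative):

```python
import random

def expected_lifetime(k, n, lam):
    """E[T] for a k-out-of-n system of iid Exp(lam) components: by the
    memoryless property, T is a sum of n-k+1 independent exponential
    gaps with rates n*lam, (n-1)*lam, ..., k*lam."""
    return sum(1.0 / (i * lam) for i in range(k, n + 1))

def simulate_lifetime(k, n, lam, trials=20_000, seed=3):
    """Monte Carlo estimate: the system dies at the (n-k+1)-th failure."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        times = sorted(rng.expovariate(lam) for _ in range(n))
        total += times[n - k]   # (n-k+1)-th order statistic, 0-indexed
    return total / trials
```

For a 2-out-of-4 system with unit failure rate, the closed form gives 1/2 + 1/3 + 1/4 = 13/12, and the simulation reproduces it.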
Ground Operations Autonomous Control and Integrated Health Management
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Walker, Mark; Wilkins, Kim; Johnson, Robert; Sass, Jared; Youney, Justin
2014-01-01
An intelligent autonomous control capability has been developed and is currently being validated in ground cryogenic fluid management operations. The capability embodies a physical architecture consistent with typical launch infrastructure and control systems, augmented by a higher-level autonomous control (AC) system enabled to make knowledge-based decisions. The AC system is supported by an integrated system health management (ISHM) capability that detects anomalies, diagnoses causes, determines effects, and could predict future anomalies. AC is implemented using the concept of programmed sequences, which can be considered building blocks of more generic mission plans. A sequence is a series of steps, each of which executes actions once the conditions for the step are met (e.g., desired temperatures or fluid states are achieved). For autonomous capability, conditions must also consider health management outcomes, as these determine whether an action is executed, how it is executed, or whether an alternative action is executed instead. Aside from health, higher-level objectives can also drive how a mission is carried out. The capability was developed using the G2 software environment (www.gensym.com) augmented by a NASA toolkit that significantly shortens time to deployment. G2 is a commercial product for developing intelligent applications and is fully object oriented. The core of the capability is a Domain Model of the system in which all elements of the system are represented as objects (sensors, instruments, components, pipes, etc.). Reasoning and decision making can be done with all elements in the domain model. The toolkit also enables implementation of failure modes and effects analyses (FMEAs), which are represented as root-cause trees. FMEAs are programmed graphically and are reusable, as they address generic failure modes for classes of subsystems or objects and their functional relationships.
User interfaces for integrated awareness by operators have been created.
Compression Strength of Composite Primary Structural Components
NASA Technical Reports Server (NTRS)
Johnson, Eric R.
1998-01-01
Research conducted under NASA Grant NAG-1-537 focused on the response and failure of advanced composite material structures for application to aircraft. Both experimental and analytical methods were utilized to study the fundamental mechanics of the response and failure of selected structural components subjected to quasi-static loads. Most of the structural components studied were thin-walled elements subject to compression, such that they exhibited buckling and postbuckling responses prior to catastrophic failure. Consequently, the analyses were geometrically nonlinear. Structural components studied were dropped-ply laminated plates, stiffener crippling, pressure pillowing of orthogonally stiffened cylindrical shells, axisymmetric response of pressure domes, and the static crush of semi-circular frames. Failure of these components motivated analytical studies on an interlaminar stress postprocessor for plate and shell finite element computer codes, and global/local modeling strategies in finite element modeling. These activities are summarized in the following section. References to literature published under the grant are listed on pages 5 to 10 by a letter followed by a number under the categories of journal publications, conference publications, presentations, and reports. These references are indicated in the text by their letter and number as a superscript.
Common-Cause Failure Treatment in Event Assessment: Basis for a Proposed New Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana Kelly; Song-Hua Shen; Gary DeMoss
2010-06-01
Event assessment is an application of probabilistic risk assessment in which observed equipment failures and outages are mapped into the risk model to obtain a numerical estimate of the event's risk significance. In this paper, we focus on retrospective assessments to estimate the risk significance of degraded conditions such as equipment failure accompanied by a deficiency in a process such as maintenance practices. In modeling such events, the basic events in the risk model that are associated with observed failures and other off-normal situations are typically configured to be failed, while those associated with observed successes and unchallenged components are assumed capable of failing, typically with their baseline probabilities. This is referred to as the failure memory approach to event assessment. The conditioning of common-cause failure probabilities for the common-cause component group associated with the observed component failure is particularly important, as it is insufficient to simply leave these probabilities at their baseline values; doing so may result in a significant underestimate of the event's risk significance. Past work in this area has focused on the mathematics of the adjustment. In this paper, we review the Basic Parameter Model for common-cause failure, which underlies most current risk modelling, discuss the limitations of this model with respect to event assessment, and introduce a proposed new framework for common-cause failure that uses a Bayesian network to model underlying causes of failure and has the potential to overcome the limitations of the Basic Parameter Model with respect to event assessment.
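In the Basic Parameter Model referenced above, the total failure probability of a specific component in a common-cause group of size m combines independent and common-cause contributions: Q_t = sum over k of C(m-1, k-1) * Q_k, where Q_k is the probability of a basic event that fails a specific set of k components. A sketch with hypothetical Q_k values:

```python
from math import comb

def total_failure_probability(q, m):
    """Basic Parameter Model: q[k-1] is Q_k, the probability of a basic
    event failing a specific set of k components in a group of size m.
    A given component belongs to comb(m-1, k-1) such sets for each k."""
    return sum(comb(m - 1, k - 1) * q[k - 1] for k in range(1, m + 1))

# Hypothetical common-cause group of three redundant pumps:
q = [1.0e-3, 5.0e-5, 1.0e-5]          # Q1 (independent), Q2, Q3
qt = total_failure_probability(q, 3)  # 1e-3 + 2*5e-5 + 1e-5
```

Conditioning these probabilities on an observed failure in the group, rather than leaving them at baseline, is exactly the adjustment the paper argues must not be skipped in event assessment.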
Code of Federal Regulations, 2014 CFR
2014-01-01
... pool slide shall be such that no structural failures of any component part shall cause failures of any... such fasteners shall not cause a failure of the tread under the ladder loading conditions specified in... without failure or permanent deformation. (d) Handrails. Swimming pool slide ladders shall be equipped...
Code of Federal Regulations, 2012 CFR
2012-01-01
... pool slide shall be such that no structural failures of any component part shall cause failures of any... such fasteners shall not cause a failure of the tread under the ladder loading conditions specified in... without failure or permanent deformation. (d) Handrails. Swimming pool slide ladders shall be equipped...
1996-01-01
failure as due to an adhesive layer between the foil and inner polypropylene layers. Under subcontract, NFPA provided HACCP draft manuals for the...parameters of the production process and to ensure that they are within their target values. In addition, a HACCP program was used to assure product...played an important part in implementing Hazard Analysis Critical Control Points (HACCP) as part of the Process and Quality Control manual. The National
Schools of pharmacology: retinoid update.
Scheinfeld, Noah
2006-10-01
The most widely used retinoids include topical tretinoin (Retin-A), adapalene (Differin), topical tazarotene (Tazorac), isotretinoin (Accutane), and acitretin (Soriatane). This article will review new uses and developments in tazarotene (its failure to secure FDA approval in oral form for psoriasis), adapalene (its new 0.3% gel form and use in rosacea), alitretinoin (its use in photoaging), bexarotene (its use for psoriasis and chronic hand dermatitis), isotretinoin (the IPledge program, its use for neuroblastoma and branded formulation pharmacological superiority to generics), and retinoic acid metabolism-blocking agents (RAMBAs) (liarazole use for ichthyosis and psoriasis).
2016-09-01
an instituted safety program that utilizes a generic risk assessment method involving the 5-M (Mission, Man, Machine, Medium and Management) factor...the Safety core value is hinged upon three key principles—(1) each soldier has a crucial part to play, by adopting safety as a core value and making...it a way of life in his unit; (2) safety is an integral part of training, operations and mission success, and (3) safety is an individual, team and
Generic Health Management: A System Engineering Process Handbook Overview and Process
NASA Technical Reports Server (NTRS)
Wilson, Moses Lee; Spruill, Jim; Hong, Yin Paw
1995-01-01
Health Management, a System Engineering Process, is one of the processes, techniques, and technologies used to define, design, analyze, build, verify, and operate a system from the viewpoint of preventing, or minimizing, the effects of failure or degradation. It supports all ground and flight elements during manufacturing, refurbishment, integration, and operation through combined use of hardware, software, and personnel. This document integrates the six phases of the Health Management Process into five phases in such a manner that it is never a stand-alone task or effort that separately defines independent work functions.
A Hybrid Procedural/Deductive Executive for Autonomous Spacecraft
NASA Technical Reports Server (NTRS)
Pell, Barney; Gamble, Edward B.; Gat, Erann; Kessing, Ron; Kurien, James; Millar, William; Nayak, P. Pandurang; Plaunt, Christian; Williams, Brian C.; Lau, Sonie (Technical Monitor)
1998-01-01
The New Millennium Remote Agent (NMRA) will be the first AI system to control an actual spacecraft. The spacecraft domain places a strong premium on autonomy and requires dynamic recoveries and robust concurrent execution, all in the presence of tight real-time deadlines, changing goals, scarce resource constraints, and a wide variety of possible failures. To achieve this level of execution robustness, we have integrated a procedural executive based on generic procedures with a deductive model-based executive. A procedural executive provides sophisticated control constructs such as loops, parallel activity, locks, and synchronization which are used for robust schedule execution, hierarchical task decomposition, and routine configuration management. A deductive executive provides algorithms for sophisticated state inference and optimal failure recovery planning. The integrated executive enables designers to code knowledge via a combination of procedures and declarative models, yielding a rich modeling capability suitable to the challenges of real spacecraft control. The interface between the two executives ensures both that recovery sequences are smoothly merged into high-level schedule execution and that a high degree of reactivity is retained to effectively handle additional failures during recovery.
The effects of type of knowledge upon human problem solving in a process control task
NASA Technical Reports Server (NTRS)
Morris, N. M.; Rouse, W. B.
1985-01-01
The question of what the operator of a dynamic system needs to know was investigated in an experiment using PLANT, a simulation of a generic dynamic production process. Knowledge of PLANT was manipulated via different types of instruction, so that four different groups were created: (1) minimal instructions only; (2) minimal instructions and guidelines for operation (procedures); (3) minimal instructions and dynamic relationships (principles); and (4) minimal instructions, procedures, and principles. Subjects controlled PLANT in a variety of situations which required maintaining production while also diagnosing familiar and unfamiliar failures. Despite the fact that these manipulations resulted in differences in subjects' knowledge, as assessed via a written test at the end of the experiment, instructions had no effect upon achievement of the primary goal of production, or upon subjects' ability to diagnose unfamiliar failures. However, those groups receiving procedures controlled the system in a more stable manner. Possible reasons for the failure to find an effect of principles are presented, and the implications of these results for operator training and aiding are discussed.
2014-05-01
shelters, tents and fabric covers, mechanical aerial delivery parts and components, kitchens, and combat feeding items (see Figure 4). NSRDEC’s PIF is...generic terms and refrain from revealing confidential or classified information. Research hypotheses are as follows: H1: The PIF leadership predicts
Children's Use of Categories and Mental States to Predict Social Behavior
ERIC Educational Resources Information Center
Chalik, Lisa; Rivera, Cyrielle; Rhodes, Marjorie
2014-01-01
Integrating generic information about categories with knowledge of specific individuals is a critical component of successful inductive inferences. The present study tested whether children's approach to this task systematically shifts as they develop causal understandings of the mechanisms that shape individual action. In the current study, 3-and…
A Multidimensional Curriculum Model for Heritage or International Language Instruction.
ERIC Educational Resources Information Center
Lazaruk, Wally
1993-01-01
Describes the Multidimensional Curriculum Model for developing a language curriculum and suggests a generic approach to selecting and sequencing learning objectives. Alberta Education used this model to design a new French-as-a-Second-Language program. The experience/communication, culture, language, and general language components at the beginning,…
A distributed telerobotics construction set
NASA Technical Reports Server (NTRS)
Wise, James D.
1994-01-01
During the course of our research on distributed telerobotic systems, we have assembled a collection of generic, reusable software modules and an infrastructure for connecting them to form a variety of telerobotic configurations. This paper describes the structure of this 'Telerobotics Construction Set' and lists some of the components which comprise it.
Development and Command-Control Tools for Many-Robot Systems
2005-01-01
been components such as pressure sensors and accelerometers for the automobile market. In fact, robots of any size have yet to appear in our daily..." mode, so that the target hardware is neither reprogrammable nor rechargeable. The goal of this paper is to propose some generic tools that the
A Three-Level Analysis of Collaborative Learning in Dual-Interaction Spaces
ERIC Educational Resources Information Center
Lonchamp, Jacques
2009-01-01
CSCL systems which follow the dual-interaction spaces paradigm support the synchronous construction and discussion of shared artifacts by distributed or colocated small groups of learners. The most recent generic dual-interaction space environments, either model based or component based, can be deeply customized by teachers for supporting…
Advanced Self-Calibrating, Self-Repairing Data Acquisition System
NASA Technical Reports Server (NTRS)
Medelius, Pedro J. (Inventor); Eckhoff, Anthony J. (Inventor); Angel, Lucena R. (Inventor); Perotti, Jose M. (Inventor)
2002-01-01
An improved self-calibrating and self-repairing Data Acquisition System (DAS) for use in inaccessible areas, such as onboard spacecraft, capable of autonomously performing required system health checks and failure detection. When required, self-repair is implemented utilizing a "spare parts/tool box" system. The available number of spare components primarily depends upon each component's predicted reliability, which may be determined using Mean Time Between Failures (MTBF) analysis. Failing or degrading components are electronically removed and disabled to reduce power consumption before being electronically replaced with spare components.
An overview of fatigue failures at the Rocky Flats Wind System Test Center
NASA Technical Reports Server (NTRS)
Waldon, C. A.
1981-01-01
Potential small wind energy conversion system (SWECS) design problems were identified to improve product quality and reliability. Mass-produced components such as gearboxes, generators, bearings, etc., are generally reliable due to their widespread uniform use in other industries. The likelihood of failure increases, though, in the interfacing of these components and in SWECS components designed for a specific system use. Problems relating to the structural integrity of such components are discussed and analyzed with techniques currently used in quality assurance programs in other manufacturing industries.
Current Challenges for HTCMC Aero-Propulsion Components
NASA Technical Reports Server (NTRS)
DiCarlo, James A.; Bansal, Narottam P.
2007-01-01
In comparison to the best metallic materials, HTCMC aero-propulsion engine components offer the opportunity of reduced weight and higher temperature operation, with corresponding improvements in engine cooling requirements, emissions, thrust, and specific fuel consumption. Although much progress has been made in the development of advanced HTCMC constituent materials and processes, major challenges still remain for their implementation into these components. The objectives of this presentation are to briefly review (1) potential HTCMC aero-propulsion components and their generic material performance requirements, (2) recent progress at NASA and elsewhere concerning advanced constituents and processes for meeting these requirements, (3) key HTCMC component implementation challenges that are currently being encountered, and (4) on-going activities within the new NASA Fundamental Aeronautics Program that are addressing these challenges.
Morgan, Steven G
2002-01-01
Objective To quantify the relative and absolute importance of different factors contributing to increases in per capita prescription drug costs for a population of Canadian seniors. Data Sources/Study Setting Data consist of every prescription claim from 1985 to 1999 for the British Columbia Pharmacare Plan A, a tax-financed public drug plan covering all community-dwelling British Columbians aged 65 and older. Study Design Changes in per capita prescription drug expenditures are attributed to changes to four components of expenditure inflation: (1) the pattern of exposure to drugs across therapeutic categories; (2) the mix of drugs used within therapeutic categories; (3) the rate of generic drug product selection; and (4) the prices of unchanged products. Data Collection/Extraction Methods Data were extracted from administrative claims files housed at the UBC Centre for Health Services and Policy Research. Principal Findings Changes in drug prices, the pattern of exposure to drugs across therapeutic categories, and the mix of drugs used within therapeutic categories all caused spending per capita to increase. Incentives for generic substitution and therapeutic reference pricing policies temporarily slowed the cost-increasing influence of changes in product selection by encouraging the use of generic drug products and/or cost-effective brand-name products within therapeutic categories. Conclusions The results suggest that drug plans (and patients) would benefit from more concerted efforts to evaluate the relative cost-effectiveness of competing products within therapeutic categories of drugs. PMID:12479495
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-08-01
The Knowledge and Abilities Catalog for Nuclear Power Plant Operators: Boiling-Water Reactors (BWRs) (NUREG-1123, Revision 1) provides the basis for the development of content-valid licensing examinations for reactor operators (ROs) and senior reactor operators (SROs). The examinations developed using the BWR Catalog, along with the Operator Licensing Examiner Standards (NUREG-1021) and the Examiner's Handbook for Developing Operator Licensing Written Examinations (NUREG/BR-0122), will cover the topics listed under Title 10, Code of Federal Regulations, Part 55 (10 CFR 55). The BWR Catalog contains approximately 7,000 knowledge and ability (K/A) statements for ROs and SROs at BWRs. The catalog is organized into six major sections: Organization of the Catalog, Generic Knowledge and Ability Statements, Plant Systems grouped by Safety Functions, Emergency and Abnormal Plant Evolutions, Components, and Theory. Revision 1 to the BWR Catalog represents a modification in form and content of the original catalog. The K/As were linked to their applicable 10 CFR 55 item numbers. SRO-level K/As were identified by 10 CFR 55.43 item numbers. The plant-wide generic and system generic K/As were combined in one section with approximately one hundred new K/As. Component Cooling Water and Instrument Air Systems were added to the Systems section. Finally, High Containment Hydrogen Concentration and Plant Fire On Site evolutions were added to the Emergency and Abnormal Plant Evolutions section.
DATMAN: A reliability data analysis program using Bayesian updating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, M.; Feltus, M.A.
1996-12-31
Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
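The kind of Bayesian updating DATMAN automates can be sketched with the standard conjugate gamma prior for a Poisson failure rate. The prior parameters and plant evidence below are illustrative placeholders, not DATMAN defaults or values from the report.

```python
# Conjugate Bayesian update of a component failure rate: gamma prior,
# Poisson likelihood. Prior parameters are illustrative assumptions.
alpha0, beta0 = 1.5, 10_000.0    # prior shape, prior exposure (hours)

# New plant evidence: observed failures over accumulated operating time.
failures, hours = 3, 25_000.0

alpha1 = alpha0 + failures       # posterior shape
beta1 = beta0 + hours            # posterior exposure

prior_mean = alpha0 / beta0      # failures per hour, before evidence
post_mean = alpha1 / beta1       # failures per hour, after evidence
print(prior_mean, post_mean)
```

With these numbers the posterior mean moves from the generic prior toward the plant-specific evidence, which is exactly the update PM and RCM programs need as operating experience accumulates.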
Wodecki, P; Sabbah, D; Kermarrec, G; Semaan, I
2013-10-01
Total hip replacements (THR) with modular femoral components (stem-neck interface) make it possible to adapt to extramedullary femoral parameters (anteversion, offset, and length), theoretically improving muscle function and stability. Nevertheless, adding a new interface has its disadvantages: reduced mechanical resistance, fretting corrosion and material fatigue fracture. We report the case of a femoral stem fracture of the female part of the component where the modular Morse taper of the neck is inserted. An extended trochanteric osteotomy was necessary during revision surgery because the femoral stump could not be grasped for extraction, so that a long stem had to be used. In this case, the patient had the usual risk factors for modular neck failure: he was an active overweight male patient with a long varus neck. This report shows that the female part of the stem of a small femoral component may also be at increased failure risk and should be added to the list of risk factors. To our knowledge, this is the first reported case of this type of failure. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Validation of PV-RPM Code in the System Advisor Model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klise, Geoffrey Taylor; Lavrova, Olga; Freeman, Janine
2017-04-01
This paper describes efforts made by Sandia National Laboratories (SNL) and the National Renewable Energy Laboratory (NREL) to validate the SNL-developed PV Reliability Performance Model (PV-RPM) algorithm as implemented in the NREL System Advisor Model (SAM). The PV-RPM model is a library of functions that estimates component failure and repair in a photovoltaic system over a desired simulation period. The failure and repair distributions in this paper are probabilistic representations of component failure and repair based on data collected by SNL for a PV power plant operating in Arizona. The validation effort focuses on whether the failure and repair distributions used in the SAM implementation result in estimated failures that match the expected failures developed in the proof-of-concept implementation. Results indicate that the SAM implementation of PV-RPM provides the same results as the proof-of-concept implementation, indicating the algorithms were reproduced successfully.
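A minimal sketch of the kind of failure/repair sampling a PV-RPM-style model performs over a simulation period, assuming exponential time-to-failure and time-to-repair. The MTBF and MTTR values are illustrative assumptions, not parameters from the SNL plant data.

```python
# Sample one component's failure/repair history over a simulation window,
# in the spirit of PV-RPM. Distribution choices and parameters are
# illustrative assumptions.
import random

SIM_YEARS = 25.0
MTBF_YEARS = 8.0       # assumed mean time between failures (exponential)
MTTR_DAYS = 14.0       # assumed mean time to repair (exponential)

def count_failures(sim_years, mtbf_years, mttr_years, rng):
    """Count failures of one component over the simulation window."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(1.0 / mtbf_years)   # time to next failure
        if t > sim_years:
            return n
        n += 1
        t += rng.expovariate(1.0 / mttr_years)   # downtime before restart

runs = [count_failures(SIM_YEARS, MTBF_YEARS, MTTR_DAYS / 365.0,
                       random.Random(i))
        for i in range(2000)]
print(sum(runs) / len(runs))   # close to SIM_YEARS / MTBF_YEARS
```

Repeating the sampling many times, as above, yields the distribution of failure counts that a validation exercise would compare between two implementations.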
Modelling the failure behaviour of wind turbines
NASA Astrophysics Data System (ADS)
Faulstich, S.; Berkhout, V.; Mayer, J.; Siebenlist, D.
2016-09-01
Modelling the failure behaviour of wind turbines is an essential part of offshore wind farm simulation software, as it leads to optimized decision making when specifying the necessary resources for the operation and maintenance of wind farms. In order to optimize O&M strategies, a thorough understanding of a wind turbine's failure behaviour is vital and is therefore being developed at Fraunhofer IWES. Within this article, the failure models of existing offshore O&M tools are first presented to show the state of the art, and the strengths and weaknesses of the respective models are briefly discussed. A conceptual framework for modelling different failure mechanisms of wind turbines is then presented. This framework takes into account the different wind turbine subsystems and structures as well as the failure modes of a component by applying several influencing factors representing wear and break failure mechanisms. A failure function is set up for the rotor blade as an exemplary component, and simulation results have been compared to a constant failure rate and to empirical wind turbine fleet data as a reference. The comparison and the breakdown of specific failure categories demonstrate the overall plausibility of the model.
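One simple reading of the conceptual framework described above is a failure function that scales a baseline failure rate by wear and break influencing factors. The multiplicative form, the baseline rate, and the factor values below are illustrative assumptions, not the Fraunhofer IWES model.

```python
# Sketch of a component failure function that adjusts a baseline failure
# rate with wear and break influencing factors. Values are illustrative.
def failure_rate(baseline, wear_factors, break_factors):
    """Combine influencing factors multiplicatively with a baseline rate."""
    rate = baseline
    for f in list(wear_factors) + list(break_factors):
        rate *= f
    return rate

# Rotor blade example: assumed baseline of 0.2 failures per turbine-year,
# adjusted for site turbulence (wear) and lightning exposure (break).
adjusted = failure_rate(0.2, wear_factors=[1.3], break_factors=[1.1])
print(adjusted)
```

With no influencing factors the function falls back to the constant failure rate used as the reference case in the article's comparison.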
NASA Technical Reports Server (NTRS)
Davis, Robert N.; Polites, Michael E.; Trevino, Luis C.
2004-01-01
This paper details a novel scheme for autonomous component health management (ACHM) with failed actuator detection and failed sensor detection, identification, and avoidance. This new scheme has features that far exceed the performance of systems with triple-redundant sensing and voting, yet requires fewer sensors and could be applied to any system with redundant sensing. Relevant background to the ACHM scheme is provided, and the simulation results for the application of that scheme to a single-axis spacecraft attitude control system with a 3rd order plant and dual-redundant measurement of system states are presented. ACHM fulfills key functions needed by an integrated vehicle health monitoring (IVHM) system. It is: autonomous; adaptive; works in real time; provides optimal state estimation; identifies failed components; avoids failed components; reconfigures for multiple failures; reconfigures for intermittent failures; works for hard-over, soft, and zero-output failures; and works for both open- and closed-loop systems. The ACHM scheme combines a prefilter that generates preliminary state estimates, detects and identifies failed sensors and actuators, and avoids the use of failed sensors in state estimation with a fixed-gain Kalman filter that generates optimal state estimates and provides model-based state estimates that comprise an integral part of the failure detection logic. The results show that ACHM successfully isolates multiple persistent and intermittent hard-over, soft, and zero-output failures. It is now ready to be tested on a computer model of an actual system.
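A minimal sketch of the prefilter idea described above: compare each redundant sensor reading against a model-based estimate and exclude readings whose residual exceeds a threshold. The function name, threshold, and values are illustrative assumptions, not the ACHM algorithm itself.

```python
# Model-based sensor failure detection sketch: flag readings whose
# residual against a model prediction exceeds a threshold, so failed
# sensors are avoided in state estimation. Values are illustrative.
def select_sensors(model_estimate, readings, threshold):
    """Return (healthy readings, indices of flagged sensors)."""
    healthy, failed = [], []
    for i, y in enumerate(readings):
        if abs(y - model_estimate) > threshold:
            failed.append(i)     # catches hard-over, soft, zero-output
        else:
            healthy.append(y)
    return healthy, failed

# Dual-redundant measurement; sensor index 1 has a zero-output failure.
healthy, failed = select_sensors(model_estimate=5.2,
                                 readings=[5.1, 0.0], threshold=1.0)
print(healthy, failed)   # [5.1] [1]
```

In the full scheme the surviving readings would then feed a fixed-gain Kalman filter, whose state prediction in turn supplies the next model estimate.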
A case study in nonconformance and performance trend analysis
NASA Technical Reports Server (NTRS)
Maloy, Joseph E.; Newton, Coy P.
1990-01-01
As part of NASA's effort to develop an agency-wide approach to trend analysis, a pilot nonconformance and performance trending analysis study was conducted on the Space Shuttle auxiliary power unit (APU). The purpose of the study was to (1) demonstrate that nonconformance analysis can be used to identify repeating failures of a specific item (and the associated failure modes and causes) and (2) determine whether performance parameters could be analyzed and monitored to provide an indication of component or system degradation prior to failure. The nonconformance analysis of the APU did identify repeating component failures, which possibly could be reduced if key performance parameters were monitored and analyzed. The performance-trending analysis verified that the characteristics of hardware parameters can be effective in detecting degradation of hardware performance prior to failure.
Minzi, Omary M S; Marealle, Ignace A; Shekalaghe, Seif; Juma, Omar; Ngaimisi, Eliford; Chemba, Mwajuma; Rutaihwa, Mastidia; Abdulla, Salim; Sasi, Philip
2013-05-30
Existence of anti-malarial generic drugs with low bioavailability marketed on sub-Saharan Africa raises a concern on patients achieving therapeutic concentrations after intake of such products. This work compared bioavailability of one generic tablet formulation with innovator's product. Both were fixed dose combination tablet formulations containing artemether and lumefantrine. The study was conducted in Dar Es Salaam, Tanzania, in which a survey of the most abundant generic containing artemether-lumefantrine tablet formulation was carried out in retail pharmacies. The most widely available generic (Artefan®, Ajanta Pharma Ltd, Maharashtra, India) was sampled for bioavailability comparison with Coartem® (Novartis Pharma, Basel, Switzerland)--the innovator's product. A randomized, two-treatment cross-over study was conducted in 18 healthy Tanzanian black male volunteers. Each volunteer received Artefan® (test) and Coartem® (as reference) formulation separated by 42 days of drug-free washout period. Serial blood samples were collected up to 168 hours after oral administration of a single dose of each treatment. Quantitation of lumefantrine plasma levels was done using HPLC with UV detection. Bioequivalence of the two products was assessed in accordance with the US Food and Drug Authority (FDA) guidelines. The most widely available generic in pharmacies was Artefan® from India. All eighteen enrolled volunteers completed the study and both test and reference tablet formulations were well tolerated. It was possible to quantify lumefantrine alone, therefore, the pharmacokinetic parameters reported herein are for lumefantrine. The geometric mean ratios for Cmax, AUC0-t and AUC0-∞ were 84% in all cases and within FDA recommended bioequivalence limits of 80%-125%, but the 90% confidence intervals were outside FDA recommended limits (CI 49-143%, 53-137%, 52-135% respectively). 
There were no statistically significant differences between the two formulations with regard to PK parameters (P > 0.05). Although the ratios of AUCs and Cmax were within the acceptable FDA range, bioequivalence between Artefan® and Coartem® tablet formulations was not demonstrated due to failure to comply with the FDA 90% confidence interval criteria. Based on the observed total drug exposure (AUCs), Artefan® is likely to produce a similar therapeutic response as Coartem®.
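The FDA criterion the study applies can be sketched as a 90% confidence interval on the test/reference geometric-mean ratio, computed here from paired log-differences. The data below are synthetic illustrations, not the study's measurements, and a full crossover analysis would use an ANOVA rather than this simplified paired t-interval.

```python
# Sketch of an FDA-style bioequivalence check: 90% CI for the
# test/reference geometric-mean ratio from paired log-differences.
# The log-ratio data are synthetic, not the study's values.
import math

# ln(AUC_test / AUC_reference) per volunteer (synthetic, n = 18)
log_ratios = [-0.35, -0.10, 0.05, -0.42, -0.20, 0.12, -0.30, -0.05,
              -0.15, 0.20, -0.25, -0.08, -0.38, 0.02, -0.18, -0.28,
              -0.12, -0.22]
n = len(log_ratios)
mean = sum(log_ratios) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in log_ratios) / (n - 1))
t_crit = 1.740                    # t(0.95, df = 17), from tables
half = t_crit * sd / math.sqrt(n)

point = math.exp(mean)            # geometric-mean ratio
lo, hi = math.exp(mean - half), math.exp(mean + half)
# FDA criterion: both lo and hi must lie within [0.80, 1.25].
print(point, lo, hi)
```

As in the study, a point estimate inside the 80%-125% range does not establish bioequivalence unless the entire 90% interval also falls within it.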
NASA Astrophysics Data System (ADS)
Lescinsky, D. T.; Wyborn, L. A.; Evans, B. J. K.; Allen, C.; Fraser, R.; Rankine, T.
2014-12-01
We present collaborative work on a generic, modular infrastructure for virtual laboratories (VLs, similar to science gateways) that combine online access to data, scientific code, and computing resources as services that support multiple data intensive scientific computing needs across a wide range of science disciplines. We are leveraging access to 10+ PB of earth science data on Lustre filesystems at Australia's National Computational Infrastructure (NCI) Research Data Storage Infrastructure (RDSI) node, co-located with NCI's 1.2 PFlop Raijin supercomputer and a 3000 CPU core research cloud. The development, maintenance and sustainability of VLs is best accomplished through modularisation and standardisation of interfaces between components. Our approach has been to break up tightly-coupled, specialised application packages into modules, with identified best techniques and algorithms repackaged either as data services or scientific tools that are accessible across domains. The data services can be used to manipulate, visualise and transform multiple data types whilst the scientific tools can be used in concert with multiple scientific codes. We are currently designing a scalable generic infrastructure that will handle scientific code as modularised services and thereby enable the rapid/easy deployment of new codes or versions of codes. The goal is to build open source libraries/collections of scientific tools, scripts and modelling codes that can be combined in specially designed deployments. Additional services in development include: provenance, publication of results, monitoring, workflow tools, etc. The generic VL infrastructure will be hosted at NCI, but can access alternative computing infrastructures (i.e., public/private cloud, HPC). The Virtual Geophysics Laboratory (VGL) was developed as a pilot project to demonstrate the underlying technology. 
This base is now being redesigned and generalised to develop a Virtual Hazards Impact and Risk Laboratory (VHIRL); any enhancements and new capabilities will be incorporated into a generic VL infrastructure. At the same time, we are scoping seven new VLs and, in the process, identifying other common components to prioritise and focus development.
2017-01-01
Objectives Few attempts have been made to develop a generic health-related quality of life (HRQoL) instrument and to examine its validity and reliability in Korea. We aimed to do this in our present study. Methods After a literature review of existing generic HRQoL instruments, a focus group discussion, in-depth interviews, and expert consultations, we selected 30 tentative items for a new HRQoL measure. These items were evaluated by assessing their ceiling effects, difficulty, and redundancy in the first survey. To validate the HRQoL instrument that was developed, known-groups validity and convergent/discriminant validity were evaluated and its test-retest reliability was examined in the second survey. Results Of the 30 items originally assessed for the HRQoL instrument, four were excluded due to high ceiling effects and six were removed due to redundancy. We ultimately developed a HRQoL instrument with a reduced number of 20 items, known as the Health-related Quality of Life Instrument with 20 items (HINT-20), incorporating physical, mental, social, and positive health dimensions. The results of the HINT-20 for known-groups validity were poorer in women, the elderly, and those with a low income. For convergent/discriminant validity, the correlation coefficients of items (except vitality) in the physical health dimension with the physical component summary of the Short Form 36 version 2 (SF-36v2) were generally higher than the correlations of those items with the mental component summary of the SF-36v2, and vice versa. Regarding test-retest reliability, the intraclass correlation coefficient of the total HINT-20 score was 0.813 (p<0.001). Conclusions A novel generic HRQoL instrument, the HINT-20, was developed for the Korean general population and showed acceptable validity and reliability. PMID:28173686
Game-Theoretic strategies for systems of components using product-form utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Ma, Cheng-Yu; Hausken, K.
Many critical infrastructures are composed of multiple systems of components which are correlated so that disruptions to one may propagate to others. We consider such infrastructures with correlations characterized in two ways: (i) an aggregate failure correlation function specifies the conditional failure probability of the infrastructure given the failure of an individual system, and (ii) a pairwise correlation function between two systems specifies the failure probability of one system given the failure of the other. We formulate a game for ensuring the resilience of the infrastructure, wherein the utility functions of the provider and attacker are products of an infrastructure survival probability term and a cost term, both expressed in terms of the numbers of system components attacked and reinforced. The survival probabilities of individual systems satisfy first-order differential conditions that lead to simple Nash Equilibrium conditions. We then derive sensitivity functions that highlight the dependence of infrastructure resilience on the cost terms, correlation functions, and individual system survival probabilities. We apply these results to simplified models of distributed cloud computing and energy grid infrastructures.
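The product-form utilities described above can be illustrated numerically with a toy reinforcement/attack game: each player's payoff is a survival-probability term times a cost term in the numbers of components reinforced (r) and attacked (a). The contest-style survival function, the cost factors, and the grid size are all illustrative assumptions, not the paper's model.

```python
# Toy game with product-form utilities: provider picks r components to
# reinforce, attacker picks a to attack. Forms and costs are assumptions.
def survival(r, a):
    """Infrastructure survival probability given r reinforced, a attacked."""
    return (1.0 + r) / (1.0 + r + a)

def u_provider(r, a, cost=0.05):
    return survival(r, a) * (1.0 - cost) ** r        # survival x cost term

def u_attacker(r, a, cost=0.08):
    return (1.0 - survival(r, a)) * (1.0 - cost) ** a

N = 10  # components each side can commit

def provider_br(a):
    return max(range(N + 1), key=lambda r: u_provider(r, a))

def attacker_br(r):
    return max(range(N + 1), key=lambda a: u_attacker(r, a))

# Pure-strategy Nash equilibria: mutual best responses on the grid.
equilibria = [(r, a) for r in range(N + 1) for a in range(N + 1)
              if r == provider_br(a) and a == attacker_br(r)]
print(equilibria)
```

The exhaustive grid check stands in for the paper's first-order Nash equilibrium conditions, which serve the same purpose analytically for continuous strategy variables.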
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groth, Katrina M.; Zumwalt, Hannah Ruth; Clark, Andrew Jordan
2016-03-01
Hydrogen Risk Assessment Models (HyRAM) is a prototype software toolkit that integrates data and methods relevant to assessing the safety of hydrogen fueling and storage infrastructure. The HyRAM toolkit integrates deterministic and probabilistic models for quantifying accident scenarios, predicting physical effects, and characterizing the impact of hydrogen hazards, including thermal effects from jet fires and thermal pressure effects from deflagration. HyRAM version 1.0 incorporates generic probabilities for equipment failures for nine types of components, and probabilistic models for the impact of heat flux on humans and structures, with computationally and experimentally validated models of various aspects of gaseous hydrogen release and flame physics. This document provides an example of how to use HyRAM to conduct analysis of a fueling facility. This document will guide users through the software and how to enter and edit certain inputs that are specific to the user-defined facility. Description of the methodology and models contained in HyRAM is provided in [1]. This User’s Guide is intended to capture the main features of HyRAM version 1.0 (any HyRAM version numbered as 1.0.X.XXX). This user guide was created with HyRAM 1.0.1.798. Due to ongoing software development activities, newer versions of HyRAM may have differences from this guide.
Vulnerability of bridges to scour: insights from an international expert elicitation workshop
NASA Astrophysics Data System (ADS)
Lamb, Rob; Aspinall, Willy; Odbert, Henry; Wagener, Thorsten
2017-08-01
Scour (localised erosion) during flood events is one of the most significant threats to bridges over rivers and estuaries, and has been the cause of numerous bridge failures, with damaging consequences. Mitigation of the risk of bridges being damaged by scour is therefore important to many infrastructure owners, and is supported by industry guidance. Even after mitigation, some residual risk remains, though its extent is difficult to quantify because of the uncertainties inherent in the prediction of scour and the assessment of the scour risk. This paper summarises findings from an international expert workshop on bridge scour risk assessment that explores uncertainties about the vulnerability of bridges to scour. Two specialised structured elicitation methods were applied to explore the factors that experts in the field consider important when assessing scour risk and to derive pooled expert judgements of bridge failure probabilities that are conditional on a range of assumed scenarios describing flood event severity, bridge and watercourse types and risk mitigation protocols. The experts' judgements broadly align with industry good practice, but indicate significant uncertainty about quantitative estimates of bridge failure probabilities, reflecting the difficulty in assessing the residual risk of failure. The data and findings presented here could provide a useful context for the development of generic scour fragility models and their associated uncertainties.
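One common way structured elicitations combine expert judgements of a failure probability is a weighted linear opinion pool, sketched below. The weights and probabilities are illustrative placeholders, not the workshop's data or its specific elicitation method.

```python
# Weighted linear opinion pool of expert-judged failure probabilities.
# Weights and probabilities are illustrative assumptions.
def pool(probs, weights):
    """Weighted average of expert probabilities (weights need not sum to 1)."""
    total = sum(weights)
    return sum(p * w for p, w in zip(probs, weights)) / total

experts = [0.02, 0.05, 0.01, 0.08]   # P(bridge failure | scenario), per expert
weights = [1.0, 0.5, 2.0, 0.5]       # performance-based weights
print(pool(experts, weights))
```

Methods such as Cooke's classical model derive the weights from experts' performance on calibration questions; the pooling step itself reduces to the weighted average shown here.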
Performance of fuselage pressure structure
NASA Technical Reports Server (NTRS)
Maclin, James R.
1992-01-01
There are currently more than 1,000 Boeing airplanes around the world over 20 years old. That number is expected to double by the year 1995. With these statistics comes the reality that structural airworthiness will be in the forefront of aviation issues well into the next century. The results of previous and recent test programs Boeing has implemented to study the structural performance of older airplanes, with respect to pressurized fuselage sections, are described. Testing included flat panels with multiple site damage (MSD), a full-scale 737 and two 747s, as well as panels representing a 737 and 777 and a generic aircraft in large pressure-test fixtures. Because damage is a normal part of aging, the focus is on the degree to which structural integrity is maintained after failure or partial failure of any structural element, including multiple site damage (MSD) and multiple element damage (MED).
Murray, James L; Hu, Peixu; Shafer, David A
2014-11-01
We have developed novel probe systems for real-time PCR that provide higher specificity, greater sensitivity, and lower cost relative to dual-labeled probes. The seven DNA Detection Switch (DDS)-probe systems reported here employ two interacting polynucleotide components: a fluorescently labeled probe and a quencher antiprobe. High-fidelity detection is achieved with three DDS designs: two internal probes (internal DDS and Flip probes) and a primer probe (ZIPR probe), wherein each probe is combined with a carefully engineered, slightly mismatched, error-checking antiprobe. The antiprobe blocks off-target detection over a wide range of temperatures and facilitates multiplexing. Other designs (Universal probe, Half-Universal probe, and MacMan probe) use generic components that enable low-cost detection. Finally, single-molecule G-Force probes employ guanine-mediated fluorescent quenching by forming a hairpin between adjacent C-rich and G-rich sequences. Examples provided show how these probe technologies discriminate drug-resistant Mycobacterium tuberculosis mutants, Escherichia coli O157:H7, oncogenic EGFR deletion mutations, hepatitis B virus, influenza A/B strains, and single-nucleotide polymorphisms in the human VKORC1 gene. Copyright © 2014 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Extended Testability Analysis Tool
NASA Technical Reports Server (NTRS)
Melcher, Kevin; Maul, William A.; Fulton, Christopher
2012-01-01
The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.
Enhanced Component Performance Study. Emergency Diesel Generators 1998–2013
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2014-11-01
This report presents an enhanced performance evaluation of emergency diesel generators (EDGs) at U.S. commercial nuclear power plants. This report evaluates component performance over time using Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES) data from 1998 through 2013 and maintenance unavailability (UA) performance data using Mitigating Systems Performance Index (MSPI) Basis Document data from 2002 through 2013. The objective is to present an analysis of factors that could influence the system and component trends in addition to annual performance trends of failure rates and probabilities. The factors analyzed for the EDG component are the differences in failures between all demands and actual unplanned engineered safety feature (ESF) demands, differences among manufacturers, and differences among EDG ratings. Statistical analyses of these differences are performed, and the results show whether pooling is acceptable across these factors. In addition, engineering analyses were performed with respect to time period and failure mode. The factors analyzed are: sub-component, failure cause, detection method, recovery, manufacturer, and EDG rating.
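Component studies of this kind typically trend a failure-on-demand probability estimated from failure and demand counts; a common choice is a Bayesian update with a Jeffreys Beta(0.5, 0.5) prior. The counts below are illustrative, not ICES data.

```python
# Sketch of a Jeffreys-prior failure-on-demand estimate, the kind of
# quantity trended in component performance studies. Counts are
# hypothetical, not from the ICES database.

def failure_on_demand(failures, demands):
    """Posterior mean failure probability under a Jeffreys Beta(0.5, 0.5) prior."""
    return (failures + 0.5) / (demands + 1.0)

# Hypothetical EDG fail-to-start record: 4 failures in 2000 demands
p = failure_on_demand(4, 2000)
print(f"estimated p(fail to start) = {p:.5f}")
```

Pooling across manufacturers or ratings is acceptable when the per-group estimates are statistically indistinguishable; otherwise each group gets its own estimate.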
Ultra Reliable Closed Loop Life Support for Long Space Missions
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Ewert, Michael K.
2010-01-01
Spacecraft human life support systems can achieve ultra reliability by providing sufficient spares to replace all failed components. The additional mass of spares for ultra reliability is approximately equal to the original system mass, provided that the original system reliability is not too low. Acceptable reliability can be achieved for the Space Shuttle and Space Station by preventive maintenance and by replacing failed units. However, on-demand maintenance and repair requires a logistics supply chain in place to provide the needed spares. In contrast, a Mars or other long space mission must take along all the needed spares, since resupply is not possible. Long missions must achieve ultra reliability, a very low failure rate per hour, since they will take years rather than weeks and cannot be cut short if a failure occurs. Also, distant missions have a much higher mass launch cost per kilogram than near-Earth missions. Achieving ultra reliable spacecraft life support systems with acceptable mass will require a well-planned and extensive development effort. Analysis must determine the reliability requirement and allocate it to subsystems and components. Ultra reliability requires reducing the intrinsic failure causes, providing spares to replace failed components and having "graceful" failure modes. Technologies, components, and materials must be selected and designed for high reliability. Long duration testing is needed to confirm very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The system must be designed, developed, integrated, and tested with system reliability in mind. Maintenance and reparability of failed units must not add to the probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass should start soon since it must be a long term effort.
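The spares argument above can be made concrete: if component failures follow a Poisson process, the probability of exhausting the spares carried is the Poisson tail beyond that count. The failure rate and mission length below are illustrative assumptions, not values from the paper.

```python
# Minimal spares-sufficiency sketch under a Poisson failure model.
# Rate and mission duration are hypothetical.
import math

def prob_spares_exhausted(rate_per_hour, mission_hours, spares):
    """P(more failures than spares) for a Poisson failure process."""
    lam = rate_per_hour * mission_hours
    p_covered = sum(math.exp(-lam) * lam**k / math.factorial(k)
                    for k in range(spares + 1))
    return 1.0 - p_covered

# Hypothetical: 1e-4 failures/hour over a 3-year (26,280 h) Mars mission
for n in (2, 4, 6):
    print(n, prob_spares_exhausted(1e-4, 26280, n))
```

The tail shrinks quickly with each added spare, which is why spares mass comparable to the original system mass can buy ultra reliability when the base failure rate is not too high.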
Reeves, Ashley; Trepanier, Angela
2016-02-01
Multiplex genetic carrier screening is increasingly being integrated into reproductive care. Obtaining informed consent becomes more challenging as the number of screened conditions increases. Implementing a model of generic informed consent may facilitate informed decision-making. Current Wayne State University students and staff were invited to complete a web-based survey by blast email solicitation. Participants were asked to determine which of two generic informed consent scenarios they preferred: a brief versus a detailed consent. They were asked to rank the importance of different informational components in making an informed decision and to provide demographic information. Comparisons between informational preferences, demographic variables and scenario preferences were made. Six hundred ninety three participants completed the survey. When evaluating these generic consents, the majority preferred the more detailed consent (74.5%), and agreed that it provided enough information to make an informed decision (89.5%). Those who thought it would be more important to know the severity of the conditions being screened (p = .002) and range of symptoms (p = .000) were more likely to prefer the more detailed consent. There were no significant associations between scenario preferences and demographic variables. A generic consent was perceived to provide sufficient information for informed decision making regarding multiplex carrier screening with most preferring a more detailed version of the consent. Individual attitudes rather than demographic variables influenced preferences regarding the amount of information that should be included in the generic consent. The findings have implications for how clinicians approach providing tailored informed consent.
A probabilistic based failure model for components fabricated from anisotropic graphite
NASA Astrophysics Data System (ADS)
Xiao, Chengfeng
The nuclear moderator for high temperature nuclear reactors is fabricated from graphite. During reactor operations, graphite components are subjected to complex stress states arising from structural loads, thermal gradients, neutron irradiation damage, and seismic events. Graphite is a quasi-brittle material. Two aspects of nuclear grade graphite, i.e., material anisotropy and different behavior in tension and compression, are explicitly accounted for in this effort. Fracture mechanics methods are useful for metal alloys, but they are problematic for anisotropic materials with a microstructure that makes it difficult to identify a "critical" flaw. In fact, cracking in a graphite core component does not necessarily result in the loss of integrity of a nuclear graphite core assembly. A phenomenological failure criterion that does not rely on flaw detection has been derived that accounts for the material behaviors mentioned. The probability of failure of components fabricated from graphite is governed by the scatter in strength. The design protocols being proposed by international code agencies recognize that design and analysis of reactor core components must be based upon probabilistic principles. The reliability models proposed herein for isotropic graphite and for graphite that can be characterized as transversely isotropic are another set of design tools for the next generation of very high temperature reactors (VHTR) as well as molten salt reactors. The work begins with a review of phenomenologically based deterministic failure criteria. A number of failure models of this genre are compared with recent multiaxial nuclear grade failure data, and aspects of each are shown to be lacking. The basic behavior of different failure strengths in tension and compression is exhibited by failure models derived for concrete, but attempts to extend these concrete models to anisotropy were unsuccessful. The phenomenological models are directly dependent on stress invariants.
A set of invariants, known as an integrity basis, was developed for a non-linear elastic constitutive model. This integrity basis allowed the non-linear constitutive model to exhibit different behavior in tension and compression and moreover, the integrity basis was amenable to being augmented and extended to anisotropic behavior. This integrity basis served as the starting point in developing both an isotropic reliability model and a reliability model for transversely isotropic materials. At the heart of the reliability models is a failure function very similar in nature to the yield functions found in classic plasticity theory. The failure function is derived and presented in the context of a multiaxial stress space. States of stress inside the failure envelope denote safe operating states. States of stress on or outside the failure envelope denote failure. The phenomenological strength parameters associated with the failure function are treated as random variables. There is a wealth of failure data in the literature that supports this notion. The mathematical integration of a joint probability density function that is dependent on the random strength variables over the safe operating domain defined by the failure function provides a way to compute the reliability of a state of stress in a graphite core component fabricated from graphite. The evaluation of the integral providing the reliability associated with an operational stress state can only be carried out using a numerical method. Monte Carlo simulation with importance sampling was selected to make these calculations. The derivation of the isotropic reliability model and the extension of the reliability model to anisotropy are provided in full detail. Model parameters are cast in terms of strength parameters that can (and have been) characterized by multiaxial failure tests. 
Comparisons of model predictions with failure data are made, and a brief comparison is made to reliability predictions called for in the ASME Boiler and Pressure Vessel Code. Future work is identified that would provide further verification and augmentation of the numerical methods used to evaluate model predictions.
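The reliability integral described above can be illustrated with a toy calculation: treat the strength parameter as a random variable and estimate, by sampling, the probability that a stress state lies inside the failure envelope. This is plain Monte Carlo on a scalar stress-strength comparison; the thesis uses a multiaxial failure function and importance sampling, and the distribution and values here are illustrative only.

```python
# Toy stress-strength reliability estimate by direct Monte Carlo.
# Scalar comparison and N(30, 4) strength are illustrative assumptions,
# not the thesis's transversely isotropic failure function.
import random

def reliability_mc(stress, mean_strength, sd_strength, n=200_000, seed=1):
    """Estimate P(strength > stress) by direct Monte Carlo sampling."""
    rng = random.Random(seed)
    survive = sum(rng.gauss(mean_strength, sd_strength) > stress
                  for _ in range(n))
    return survive / n

# Hypothetical graphite component: stress 20 MPa, strength ~ N(30, 4) MPa
print(reliability_mc(20.0, 30.0, 4.0))  # close to Phi(2.5), about 0.994
```

Importance sampling becomes necessary when the failure probability is very small, since direct sampling would then need prohibitively many draws to see any failures at all.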
The Local Wind Pump for Marginal Societies in Indonesia: A Perspective of Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Gunawan, Insan; Taufik, Ahmad
2007-10-01
Many efforts have been made to reduce the investment cost of well-established hybrid wind pumps applied to rural areas. A recent study on a local wind pump (LWP) for marginal societies in Indonesia (traditional farmers, peasants, and tribes) was one such effort, reporting a new application area. The objectives of the study were to measure the reliability of the LWP under fluctuating wind intensity and low wind speed, to account for economic considerations arising from a prolonged economic crisis and the availability of local components, and to sustain the economic (agricultural) productivity of the society. In the study, a fault tree analysis (FTA) was deployed as one of three methods used to assess the LWP. In this article, the FTA is discussed thoroughly with the aim of improving the performance of the LWP applied in the dry-land watering system of the Mesuji district of Lampung province, Indonesia. In the early stage, all local components of the LWP were classified by function into four groups. All sub-components of each group were then subjected to the failure modes of the FTA, namely (1) primary failure modes, (2) secondary failure modes, and (3) common failure modes. In the data-processing stage, an available software package, ITEM, was deployed. The components were observed to have a relatively long operational life of 1,666 hours. To further enhance the performance of the LWP, the maintenance schedule, the critical sub-components suffering failure, and the overhaul priority were identified quantitatively. Based on a year-long pilot project, it can be concluded that the LWP is a reliable product for enhancing the economic productivity of these societies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brett Emery Trabun; Gamage, Thoshitha Thanushka; Bakken, David Edward
This disclosure describes, in part, a system management component and failure detection component for use in a power grid data network to identify anomalies within the network and systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, desired latency, minimum acceptable latency, and a priority for each subscription. The failure detection component may identify an anomaly within the network and a source of the anomaly. Based on the identified anomaly, data rates and/or data paths may be adjusted in real time to ensure that the power grid data network does not become overloaded and/or fail.
Gutiérrez, Sergio; Greiwe, R Michael; Frankle, Mark A; Siegal, Steven; Lee, William E
2007-01-01
There has been renewed interest in reverse shoulder arthroplasty for the treatment of glenohumeral arthritis with concomitant rotator cuff deficiency. Failure of the prosthesis at the glenoid attachment site remains a concern. The purpose of this study was to examine glenoid component stability with regard to the angle of implantation. This investigation entailed a biomechanical analysis to evaluate forces and micromotion in glenoid components attached to 12 polyurethane blocks at -15 degrees, 0 degrees, and +15 degrees of superior and inferior tilt. The 15 degrees inferior tilt had the most uniform compressive forces and the least amount of tensile forces and micromotion when compared with the 0 degrees and 15 degrees superiorly tilted baseplate. Our results suggest that implantation with an inferior tilt will reduce the incidence of mechanical failure of the glenoid component in a reverse shoulder prosthesis.
Independent Orbiter Assessment (IOA): Weibull analysis report
NASA Technical Reports Server (NTRS)
Raffaelli, Gary G.
1987-01-01
The Auxiliary Power Unit (APU) and Hydraulic Power Unit (HPU) Space Shuttle Subsystems were reviewed as candidates for demonstrating the Weibull analysis methodology. Three hardware components were identified as analysis candidates: the turbine wheel, the gearbox, and the gas generator. Detailed review of subsystem level wearout and failure history revealed the lack of actual component failure data. In addition, component wearout data were not readily available or would require a separate data accumulation effort by the vendor. Without adequate component history data being available, the Weibull analysis methodology application to the APU and HPU subsystem group was terminated.
Ferrographic and spectrometer oil analysis from a failed gas turbine engine
NASA Technical Reports Server (NTRS)
Jones, W. R., Jr.
1982-01-01
An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. It was concluded that a severe surge may have caused interference between rotating and stationary compressor components that either directly or indirectly ignited the titanium components. Several engine oil samples (before and after the failure) were analyzed with a Ferrograph, a plasma spectrometer, an atomic absorption spectrometer, and an emission spectrometer to see if this information would aid in the engine failure diagnosis. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism nor a high level of wear debris was detected in the engine oil sample taken just prior to the test in which the failure occurred. However, low concentrations (0.2 to 0.5 ppm) of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations ( 2 ppm) were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure. The oil analyses eliminated a lubrication system bearing or shaft seal failure as the cause of the engine failure.
ERIC Educational Resources Information Center
Smith, Calvin; Worsfold, Kate
2015-01-01
This paper describes the impacts of work-integrated learning (WIL) curriculum components on general employability skills--professional work-readiness, self-efficacy and team skills. Regression analyses emphasise the importance of the "authenticity" of WIL placements for the development of these generic outcomes. Other curricular factors…
Composite structural materials
NASA Technical Reports Server (NTRS)
Ansell, G. S.; Loewy, R. G.; Wiberley, S. E.
1982-01-01
Research in the basic composition, characteristics, and processing science of composite materials and their constituents is balanced against the mechanics, conceptual design, fabrication, and testing of generic structural elements typical of aerospace vehicles, so as to encourage the discovery of unusual solutions to problems. Detailed descriptions of the progress achieved in the various component parts of this program are presented.
Addressing Small Computers in the First OS Course
ERIC Educational Resources Information Center
Nutt, Gary
2006-01-01
Small computers are emerging as important components of the contemporary computing scene. Their operating systems vary from specialized software for an embedded system to the same style of OS used on a generic desktop or server computer. This article describes a course in which systems are classified by their hardware capability and the…
ERIC Educational Resources Information Center
Carton, Janet; Jerrams, Steve
2008-01-01
Graduate education platforms have received general acclaim as key components in the future structural development of third-level and fourth-level education in Europe. In Ireland the Higher Education Authority (HEA) has endorsed the restructuring of postgraduate education to incorporate the training of research students in key generic and…
Command Structure for Theater Warfare: The Quest for Unity of Command
1984-09-01
…an air, ground, and sea component. These are generic commands which control all combat operations in the media of the air, ground, and sea.
Diverse Redundant Systems for Reliable Space Life Support
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2015-01-01
Reliable life support systems are required for deep space missions. The probability of a fatal life support failure should be less than one in a thousand in a multi-year mission. It is far too expensive to develop a single system with such high reliability. Using three redundant units would require only that each have a failure probability of one in ten over the mission. Since system development cost scales inversely with the failure probability, this would cut cost by a factor of one hundred. Using replaceable subsystems instead of full systems would further cut cost. Using full sets of replaceable components improves reliability more than using complete systems as spares, since a set of components could repair many different failures instead of just one. Replaceable components would require more tools, space, and planning than full systems or replaceable subsystems. However, identical system redundancy cannot be relied on in practice. Common cause failures can disable all the identical redundant systems. Typical levels of common cause failures will defeat redundancy greater than two. Diverse redundant systems are required for reliable space life support. Three, four, or five diverse redundant systems could be needed for sufficient reliability. One system with lower level repair could be substituted for two diverse systems to save cost.
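The redundancy arithmetic above, and the way common cause failure defeats it, can be sketched with a beta-factor model, in which a fraction beta of each unit's failure probability is assumed to disable all units at once. The beta value below is illustrative.

```python
# Sketch of n-fold redundancy with a beta-factor common-cause term.
# beta = 0.05 is an illustrative assumption, not a value from the paper.

def redundant_failure_prob(p_unit, n, beta=0.0):
    """Failure probability of n redundant units; a fraction beta of each
    unit's failure probability is treated as common-cause (fails all n)."""
    independent = ((1.0 - beta) * p_unit) ** n
    common_cause = beta * p_unit
    return independent + common_cause

print(redundant_failure_prob(0.1, 3))            # 0.001, as in the text
print(redundant_failure_prob(0.1, 3, beta=0.05)) # common cause dominates
```

With any realistic beta, the common-cause term quickly becomes the floor on achievable reliability, which is the quantitative reason the paper calls for diverse rather than identical redundant systems.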
Biau, D J; Meziane, M; Bhumbra, R S; Dumaine, V; Babinet, A; Anract, P
2011-09-01
The purpose of this study was to define immediate post-operative 'quality' in total hip replacements and to study prospectively the occurrence of failure based on these definitions of quality. The evaluation and assessment of failure were based on ten radiological and clinical criteria. The cumulative summation (CUSUM) test was used to study 200 procedures over a one-year period. Technical criteria defined failure in 17 cases (8.5%), those related to the femoral component in nine (4.5%), the acetabular component in 32 (16%) and those relating to discharge from hospital in five (2.5%). Overall, the procedure was considered to have failed in 57 of the 200 total hip replacements (28.5%). The use of a new design of acetabular component was associated with more failures. For the CUSUM test, the level of adequate performance was set at a rate of failure of 20% and the level of inadequate performance set at a failure rate of 40%; no alarm was raised by the test, indicating that there was no evidence of inadequate performance. The use of a continuous monitoring statistical method is useful to ensure that the quality of total hip replacement is maintained, especially as newer implants are introduced.
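The CUSUM monitoring described above can be sketched for binary outcomes using the abstract's performance levels: an adequate failure rate p0 = 0.20 and an inadequate rate p1 = 0.40. The outcome sequence and alarm threshold below are illustrative.

```python
# Minimal Bernoulli CUSUM for monitoring a procedure failure rate.
# p0/p1 follow the abstract; the outcomes and threshold are hypothetical.
import math

def bernoulli_cusum(outcomes, p0=0.20, p1=0.40):
    """Return the running CUSUM statistic for binary failure outcomes."""
    w_fail = math.log(p1 / p0)            # score added on a failure
    w_ok = math.log((1 - p1) / (1 - p0))  # (negative) score on a success
    s, path = 0.0, []
    for failed in outcomes:
        s = max(0.0, s + (w_fail if failed else w_ok))
        path.append(s)
    return path

path = bernoulli_cusum([1, 0, 0, 1, 1, 0, 1, 1])
alarm = any(s > 2.0 for s in path)  # hypothetical alarm threshold h
```

The statistic resets to zero whenever performance is adequate, so an alarm is raised only by a sustained run of failures, not by isolated ones; in the study no alarm was raised over the 200 procedures.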
A two-stage storage routing model for green roof runoff detention.
Vesuviano, Gianni; Sonnenwald, Fred; Stovin, Virginia
2014-01-01
Green roofs have been adopted in urban drainage systems to control the total quantity and volumetric flow rate of runoff. Modern green roof designs are multi-layered, their main components being vegetation, substrate and, in almost all cases, a separate drainage layer. Most current hydrological models of green roofs combine the modelling of the separate layers into a single process; these models have limited predictive capability for roofs not sharing the same design. An adaptable, generic, two-stage model for a system consisting of a granular substrate over a hard plastic 'egg box'-style drainage layer and fibrous protection mat is presented. The substrate and drainage layer/protection mat are modelled separately by previously verified sub-models. Controlled storm events are applied to a green roof system in a rainfall simulator. The time-series modelled runoff is compared to the monitored runoff for each storm event. The modelled runoff profiles are accurate (mean Rt² = 0.971), but further characterization of the substrate component is required for the model to be generically applicable to other roof configurations with different substrate.
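A much-simplified stand-in for the two-stage structure described above is to model each layer as a linear reservoir (outflow proportional to storage), with the substrate outflow feeding the drainage layer. The storage constants and the rainfall pulses below are illustrative assumptions, not the paper's verified sub-models.

```python
# Two linear reservoirs in series as a toy two-stage routing model.
# k values and inflow are hypothetical.

def route_reservoir(inflow, k, dt=1.0):
    """Route a time series through a linear reservoir: outflow = storage / k."""
    storage, outflow = 0.0, []
    for q_in in inflow:
        storage += q_in * dt
        q_out = storage / k
        storage -= q_out * dt
        outflow.append(q_out)
    return outflow

rain = [5.0, 10.0, 0.0, 0.0, 0.0]           # hypothetical rainfall pulses
substrate_out = route_reservoir(rain, k=3.0)  # stage 1: substrate
runoff = route_reservoir(substrate_out, k=2.0)  # stage 2: drainage layer
```

Keeping the two stages separate is what lets the substrate sub-model be recalibrated for a different roof build-up without touching the drainage-layer component.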
The development of a post-test diagnostic system for rocket engines
NASA Technical Reports Server (NTRS)
Zakrajsek, June F.
1991-01-01
An effort was undertaken by NASA to develop an automated post-test, post-flight diagnostic system for rocket engines. The automated system is designed to be generic and to automate the rocket engine data review process. A modular, distributed architecture with a generic software core was chosen to meet the design requirements. The diagnostic system is initially being applied to the Space Shuttle Main Engine data review process. The system modules currently under development are the session/message manager, and portions of the applications section, the component analysis section, and the intelligent knowledge server. An overview is presented of a rocket engine data review process, the design requirements and guidelines, the architecture and modules, and the projected benefits of the automated diagnostic system.
Comín-Colet, Josep; Anguita, Manuel; Formiga, Francesc; Almenar, Luis; Crespo-Leiro, María G; Manzano, Luis; Muñiz, Javier; Chaves, José; de Frutos, Trinidad; Enjuanes, Cristina
2016-03-01
Although heart failure negatively affects the health-related quality of life of Spanish patients there is little information on the clinical factors associated with this issue. Cross-sectional multicenter study of health-related quality of life. A specific questionnaire (Kansas City Cardiomyopathy Questionnaire) and a generic questionnaire (EuroQoL-5D) were administered to 1037 consecutive outpatients with systolic heart failure. Most patients with poor quality of life had a worse prognosis and increased severity of heart failure. Mobility was more limited and rates of pain/discomfort and anxiety/depression were higher in the study patients than in the general population and patients with other chronic conditions. The scores on both questionnaires were very highly correlated (Pearson r =0.815; P < .001). Multivariable linear regression showed that being older (standardized β=-0.2; P=.03), female (standardized β=-10.3; P < .001), having worse functional class (standardized β=-20.4; P < .001), a higher Charlson comorbidity index (standardized β=-1.2; P=.005), and recent hospitalization for heart failure (standardized β=6.28; P=.006) were independent predictors of worse health-related quality of life. Patients with heart failure have worse quality of life than the general Spanish population and patients with other chronic diseases. Female sex, being older, comorbidity, advanced symptoms, and recent hospitalization are determinant factors in health-related quality of life in these patients. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
NASA Astrophysics Data System (ADS)
Zeng, Yajun; Skibniewski, Miroslaw J.
2013-08-01
Enterprise resource planning (ERP) system implementations are often characterised with large capital outlay, long implementation duration, and high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is the key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches that have been mostly focused on meeting project budget and schedule objectives, the proposed approach intends to address the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system implementation usage failure and quantify the impact of critical component failures or critical risk events in the implementation process.
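Fault tree analysis of the kind proposed above combines basic-event probabilities through AND/OR gates up to a top event. The sketch below assumes independent basic events; the tree structure and probabilities are hypothetical, not taken from the paper.

```python
# Minimal fault tree gate evaluation under an independence assumption.
# The example tree and probabilities are illustrative.

def and_gate(probs):
    """All inputs must fail (independence assumed)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """Any input failing triggers the gate (independence assumed)."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

# Hypothetical tree: usage failure if (bad data migration AND weak
# user training) OR a critical module configuration error.
p_top = or_gate([and_gate([0.3, 0.2]), 0.05])
print(f"P(usage failure) = {p_top:.3f}")  # → P(usage failure) = 0.107
```

Evaluating the tree bottom-up like this also identifies the minimal cut sets, i.e., the root-cause combinations that dominate the usage-failure probability.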
Correlation study between vibrational environmental and failure rates of civil helicopter components
NASA Technical Reports Server (NTRS)
Alaniz, O.
1979-01-01
An investigation of two selected helicopter types, namely, the Models 206A/B and 212, is reported. An analysis of the available vibration and reliability data for these two helicopter types resulted in the selection of ten components located in five different areas of the helicopter, consisting primarily of instruments, electrical components, and other noncritical flight hardware. The potential for advanced technology in suppressing vibration in helicopters was assessed. There are still several unknowns concerning both the vibration environment and the reliability of helicopter noncritical flight components. Vibration data for the selected components were either insufficient or inappropriate. The maintenance data examined for the selected components were inappropriate due to variations in failure mode identification, inconsistent reporting, or inaccurate information.
Microtensile bond strength of etch and rinse versus self-etch adhesive systems.
Hamouda, Ibrahim M; Samra, Nagia R; Badawi, Manal F
2011-04-01
The aim of this study was to compare the microtensile bond strength of the etch and rinse adhesive versus one-component or two-component self-etch adhesives. Twelve intact human molar teeth were cleaned and the occlusal enamel of the teeth was removed. The exposed dentin surfaces were polished and rinsed, and the adhesives were applied. A microhybrid composite resin was applied to form specimens of 4 mm height and 6 mm diameter. The specimens were sectioned perpendicular to the adhesive interface to produce dentin-resin composite sticks, with an adhesive area of approximately 1.4 mm². The sticks were subjected to tensile loading until failure occurred. The debonded areas were examined with a scanning electron microscope to determine the site of failure. The results showed that the microtensile bond strength of the etch and rinse adhesive was higher than that of one-component or two-component self-etch adhesives. The scanning electron microscope examination of the dentin surfaces revealed adhesive and mixed modes of failure. The adhesive mode of failure occurred at the adhesive/dentin interface, while the mixed mode of failure occurred partially in the composite and partially at the adhesive/dentin interface. It was concluded that the etch and rinse adhesive had higher microtensile bond strength when compared to that of the self-etch adhesives. Copyright © 2010 Elsevier Ltd. All rights reserved.
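The reported bond strengths are simply the debond load divided by the bonded area; a tiny helper makes the unit arithmetic explicit. The load value below is illustrative, not a measurement from the study.

```python
# Microtensile bond strength from load and bonded area (N / mm^2 = MPa).
# The 28 N debond load is a hypothetical example value.

def microtensile_strength_mpa(load_n, area_mm2):
    """Tensile bond strength in MPa."""
    return load_n / area_mm2

# Hypothetical stick: 28 N debond load over the ~1.4 mm^2 interface
print(microtensile_strength_mpa(28.0, 1.4))
```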
Toward an expert project management system
NASA Technical Reports Server (NTRS)
Silverman, Barry G.; Murray, Arthur; Diakite, Coty; Feggos, Kostas
1987-01-01
The purpose of the research effort is to prescribe a generic reusable shell that any project office can install and customize for the purposes of advising, guiding, and supporting project managers in that office. The prescribed shell is intended to provide both: a component that generates prescriptive guidance for project planning and monitoring activities, and an analogy (intuition) component that generates descriptive insights of previous experience of successful project managers. The latter component is especially significant in that it has the potential to: retrieve insights, not just data, and provide a vehicle for expert PMs to easily transcribe their current experiences in the course of each new project managed.
Analyses of Transistor Punchthrough Failures
NASA Technical Reports Server (NTRS)
Nicolas, David P.
1999-01-01
The failure of two transistors in the Altitude Switch Assembly for the Solid Rocket Booster followed by two additional failures a year later presented a challenge to failure analysts. These devices had successfully worked for many years on numerous missions. There was no history of failures with this type of device. Extensive checks of the test procedures gave no indication of the cause. The devices were manufactured more than twenty years ago and failure information on this lot date code was not readily available. External visual exam, radiography, PEID, and leak testing were performed with nominal results. Electrical testing indicated nearly identical base-emitter and base-collector characteristics (both forward and reverse) with a low resistance short emitter to collector. These characteristics are indicative of a classic failure mechanism called punchthrough. In failure analysis, punchthrough refers to a condition where a relatively low voltage pulse causes the device to conduct very hard, producing localized areas of thermal runaway or "hot spots". At one or more of these hot spots, the excessive currents melt the silicon. Heavily doped emitter material diffuses through the base region to the collector forming a diffusion pipe shorting the emitter to base to collector. Upon cooling, an alloy junction forms between the pipe and the base region. Generally, the hot spot (punchthrough site) is under the bond and no surface artifact is visible. The devices were delidded and the internal structures were examined microscopically. The gold emitter lead was melted on one device, but others had anomalies in the metallization around the intact emitter bonds. The SEM examination confirmed some anomalies to be cosmetic defects while other anomalies were artifacts of the punchthrough site. Subsequent to these analyses, the contractor determined that some irregular testing procedures, heretofore unreported, occurred at the time of the failures.
These testing irregularities involved the use of a breakout box and were the likely cause of the failures. There was no evidence to suggest a generic failure mechanism was responsible for the failure of these transistors.
BioJS DAGViewer: A reusable JavaScript component for displaying directed graphs
Micklem, Gos
2014-01-01
Summary: The DAGViewer BioJS component is a reusable JavaScript component made available as part of the BioJS project and intended to be used to display graphs of structured data, with a particular emphasis on Directed Acyclic Graphs (DAGs). It enables users to embed representations of graphs of data, such as ontologies or phylogenetic trees, in hyper-text documents (HTML). This component is generic, since it is capable (given the appropriate configuration) of displaying any kind of data that is organised as a graph. The features of this component which are useful for examining and filtering large and complex graphs are described. Availability: http://github.com/alexkalderimis/dag-viewer-biojs; http://github.com/biojs/biojs; http://dx.doi.org/10.5281/zenodo.8303. PMID:24627804
A review of typical thermal fatigue failure models for solder joints of electronic components
NASA Astrophysics Data System (ADS)
Li, Xiaoyan; Sun, Ruifeng; Wang, Yongdong
2017-09-01
For electronic components, cyclic plastic strain accumulates fatigue damage more readily than elastic strain. When solder joints undergo thermal expansion or contraction, the mismatch in coefficients of thermal expansion between the electronic component and its substrate produces differential thermal strain, leading to stress concentration. Under repeated cycling, cracks initiate and gradually propagate [1]. In this paper, the typical thermal fatigue failure models of solder joints of electronic components are classified and the methods of obtaining the parameters in the models are summarized based on domestic and foreign literature research.
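One widely cited member of this model family is the Coffin-Manson relation, which ties cyclic plastic strain range to cycles to failure: Δγ/2 = εf·(2·Nf)^c. A minimal sketch, with illustrative placeholder material constants rather than values endorsed by the paper:

```python
# Coffin-Manson low-cycle fatigue relation: delta_gamma / 2 = ef * (2 * Nf) ** c,
# solved for cycles to failure Nf. The constants ef (fatigue ductility
# coefficient) and c (fatigue ductility exponent) below are illustrative
# placeholders, not values recommended by the paper.

def coffin_manson_cycles(delta_gamma: float, ef: float = 0.325, c: float = -0.5) -> float:
    """Cycles to failure Nf from the cyclic shear strain range delta_gamma."""
    return 0.5 * (delta_gamma / (2.0 * ef)) ** (1.0 / c)

# A larger plastic strain range should give fewer cycles to failure.
print(coffin_manson_cycles(0.01))  # small strain range
print(coffin_manson_cycles(0.05))  # larger strain range
```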
NASA Astrophysics Data System (ADS)
Lebon, G.; Grmela, M.; Lhuillier, D.
2003-03-01
Our main objective is to describe non-Fickean thermodiffusion in binary fluids within the framework of three recent theories of non-equilibrium thermodynamics, namely Extended Irreversible Thermodynamics (EIT), GENERIC (General Equation for the Non-Equilibrium Reversible Irreversible Coupling) and Thermodynamics with Internal Variables (IVT). In the first part presented in this paper, we develop the EIT description. For pedagogical reasons, we start from the simplest situation to end with the most intricate one. Therefore, we first examine the simple problem of mass diffusion at uniform temperature. Then we study heat transport in a one-component fluid before considering the more complex coupled heat and mass transfer. In Part II developed in the accompanying paper, we follow the same hierarchy of situations from the point of view of GENERIC. Finally, in Part III, we present the point of view of the thermodynamic theory of internal variables. Similarities and differences between EIT, GENERIC and IVT are stressed. In the present work, we have taken advantage of the problem of heat conduction to revisit the notion of caloric.
Finch, Aureliano Paolo; Brazier, John Edward; Mukuria, Clara; Bjorner, Jakob Bue
2017-12-01
Generic preference-based measures such as the EuroQol five-dimensional questionnaire (EQ-5D) are used in economic evaluation, but may not be appropriate for all conditions. When this happens, a possible solution is adding bolt-ons to expand their descriptive systems. Using review-based methods, studies published to date claimed the relevance of bolt-ons in the presence of poor psychometric results. This approach does not identify the specific dimensions missing from the generic preference-based measure's core descriptive system, and is inappropriate for identifying dimensions that might improve the measure generically. This study explores the use of principal-component analysis (PCA) and confirmatory factor analysis (CFA) for bolt-on identification in the EQ-5D. Data were drawn from the international Multi-Instrument Comparison study, which is an online survey on health and well-being measures in five countries. Analysis was based on a pool of 92 items from nine instruments. Initial content analysis provided a theoretical framework for PCA results interpretation and CFA model development. PCA was used to investigate the underlying dimensional structure and whether EQ-5D items were represented in the identified constructs. CFA was used to confirm the structure. CFA was cross-validated in random halves of the sample. PCA suggested a nine-component solution, which was confirmed by CFA. This included psychological symptoms, physical functioning, and pain, which were covered by the EQ-5D, and satisfaction, speech/cognition, relationships, hearing, vision, and energy/sleep, which were not. These latter factors may represent relevant candidate bolt-ons. PCA and CFA appear useful methods for identifying potential bolt-on dimensions for an instrument such as the EQ-5D. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
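The PCA step described above amounts to eigendecomposition of the item correlation matrix, with the number of retained components indicating how many distinct constructs the item pool contains. A toy sketch on synthetic two-construct item data (invented, not the Multi-Instrument Comparison responses):

```python
# Toy PCA-based construct detection: six synthetic items, three loading on
# each of two latent constructs. The data are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 500
f1, f2 = rng.normal(size=n), rng.normal(size=n)  # two latent constructs
items = np.column_stack(
    [f1 + 0.3 * rng.normal(size=n) for _ in range(3)]
    + [f2 + 0.3 * rng.normal(size=n) for _ in range(3)]
)

# PCA via eigendecomposition of the item correlation matrix.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # descending order

# Components with eigenvalue > 1 (Kaiser criterion) suggest the number of
# distinct constructs present in the item pool.
n_components = int(np.sum(eigvals > 1.0))
print(n_components)
```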
Product component genealogy modeling and field-failure prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Caleb; Hong, Yili; Meeker, William Q.
Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.
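The competing-risks structure with a generational design change can be sketched with a small simulation: the system fails when its first component fails, and a redesigned part number in a later generation shifts that component's life distribution. All Weibull parameters below are invented for illustration, not the paper's estimates:

```python
# Illustrative competing-risks simulation with a part-number change between
# production generations. All distribution parameters are invented.
import random

random.seed(1)

def system_lifetime(gen: int) -> float:
    # Component A: unchanged across generations.
    a = random.weibullvariate(1000.0, 1.5)
    # Component B: the redesigned part (generation 2) lasts longer on average.
    scale_b = 400.0 if gen == 1 else 900.0
    b = random.weibullvariate(scale_b, 1.5)
    return min(a, b)  # competing risks: first component failure ends the system

def mean_life(gen: int, n: int = 20000) -> float:
    return sum(system_lifetime(gen) for _ in range(n)) / n

print(mean_life(1), mean_life(2))  # generation 2 should survive longer on average
```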
Clinical assessment of pacemaker power sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bilitch, M.; Parsonnet, V.; Furman, S.
1980-01-01
The development of power sources for cardiac pacemakers has progressed from a 15-year usage of mercury-zinc batteries to widely used and accepted lithium cells. At present, there are about 6 different types of lithium cells incorporated into commercially distributed pacemakers. The authors reviewed experience over a 5-year period with 1711 mercury-zinc, 130 nuclear (P238) and 1912 lithium powered pacemakers. The lithium units have included 698 lithium-iodide, 270 lithium-silver chromate, 135 lithium-thionyl chloride, 31 lithium-lead and 353 lithium-cupric sulfide batteries. 57 of the lithium units have failed (91.2% component failure and 5.3% battery failure). 459 mercury-zinc units failed (25% component failure and 68% battery depletion). The data show that lithium powered pacemaker failures are primarily component, while mercury-zinc failures are primarily battery related. It is concluded that mercury-zinc powered pulse generators are obsolete and that lithium and nuclear (P238) power sources are highly reliable over the 5 years for which data are available. 3 refs.
Packaging-induced failure of semiconductor lasers and optical telecommunications components
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharps, J.A.
1996-12-31
Telecommunications equipment for field deployment generally has specified lifetimes of > 100,000 hr. To achieve this high reliability, it is common practice to package sensitive components in hermetic, inert gas environments. The intent is to protect components from particulate and organic contamination, oxidation, and moisture. However, for high power density 980 nm diode lasers used in optical amplifiers, the authors found that hermetic, inert gas packaging induced a failure mode not observed in similar, unpackaged lasers. They refer to this failure mode as packaging-induced failure, or PIF. PIF is caused by nanomole amounts of organic contamination which interact with high intensity 980 nm light to form solid deposits over the emitting regions of the lasers. These deposits absorb 980 nm light, causing heating of the laser, narrowing of the band gap, and eventual thermal runaway. The authors have found PIF is averted by packaging with free O{sub 2} and/or a getter material that sequesters organics.
Systematic Destruction of Electronic Parts for Aid in Electronic Failure Analysis
NASA Technical Reports Server (NTRS)
Decker, S. E.; Rolin, T. D.; McManus, P. D.
2012-01-01
NASA analyzes electrical, electronic, and electromechanical (EEE) parts used in space vehicles to understand failure modes of these components. Operational amplifiers and transistors are two examples of EEE parts critical to NASA missions that can fail due to electrical overstress (EOS). EOS is the result of voltage or current over time conditions that exceed a component's specification limit. The objective of this study was to provide known voltage pulses over well-defined time intervals to determine the type and extent of damage imparted to the device. The amount of current was not controlled but measured so that pulse energy was determined. The damage was ascertained electrically using curve trace plots and optically using various metallographic techniques. The resulting data can be used to build a database of physical evidence to compare to damaged components removed from flight avionics. The comparison will provide the avionics failure analyst with the necessary information about voltages and times that caused flight or test failures when no other electrical data is available.
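The pulse-energy bookkeeping described above follows from the applied voltage and measured current: energy is the time integral of instantaneous power V(t)·I(t). A minimal sketch with an invented sampled waveform:

```python
# Pulse energy as the trapezoidal integral of V(t) * I(t) over time.
# The sampled waveform below is invented for illustration.

def pulse_energy_joules(t, v, i):
    """Trapezoidal integration of instantaneous power V*I over time samples."""
    energy = 0.0
    for k in range(1, len(t)):
        p0, p1 = v[k - 1] * i[k - 1], v[k] * i[k]
        energy += 0.5 * (p0 + p1) * (t[k] - t[k - 1])
    return energy

# Example: a 10 V, 2 A rectangular pulse lasting 1 ms carries 20 mJ.
t = [0.0, 0.0005, 0.001]
v = [10.0, 10.0, 10.0]
i = [2.0, 2.0, 2.0]
print(pulse_energy_joules(t, v, i))  # 0.02
```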
New understandings of failure modes in SSL luminaires
NASA Astrophysics Data System (ADS)
Shepherd, Sarah D.; Mills, Karmann C.; Yaga, Robert; Johnson, Cortina; Davis, J. Lynn
2014-09-01
As SSL products are being rapidly introduced into the market, there is a need to develop standard screening and testing protocols that can be performed quickly and provide data surrounding product lifetime and performance. These protocols, derived from standard industry tests, are known as ALTs (accelerated life tests) and can be performed in a timeframe of weeks to months instead of years. Accelerated testing utilizes a combination of elevated temperature and humidity conditions as well as electrical power cycling to control aging of the luminaires. In this study, we report on the findings of failure modes for two different luminaire products exposed to temperature-humidity ALTs. LEDs are typically considered the determining component for the rate of lumen depreciation. However, this study has shown that each luminaire component can independently or jointly influence system performance and reliability. Material choices, luminaire designs, and driver designs all have significant impacts on the system reliability of a product. From recent data, it is evident that the most common failure modes are not within the LED, but instead occur within resistors, capacitors, and other electrical components of the driver. Insights into failure modes and rates as a result of ALTs are reported with emphasis on component influence on overall system reliability.
Substandard drugs: a potential crisis for public health
Johnston, Atholl; Holt, David W
2014-01-01
Poor-quality medicines present a serious public health problem, particularly in emerging economies and developing countries, and may have a significant impact on the national clinical and economic burden. Attention has largely focused on the increasing availability of deliberately falsified drugs, but substandard medicines are also reaching patients because of poor manufacturing and quality-control practices in the production of genuine drugs (either branded or generic). Substandard medicines are widespread and represent a threat to health because they can inadvertently lead to healthcare failures, such as antibiotic resistance and the spread of disease within a community, as well as death or additional illness in individuals. This article reviews the different aspects of substandard drug formulation that can occur (for example, pharmacological variability between drug batches or between generic and originator drugs, incorrect drug quantity and presence of impurities). The possible means of addressing substandard manufacturing practices are also discussed. A concerted effort is required on the part of governments, drug manufacturers, charities and healthcare providers to ensure that only drugs of acceptable quality reach the patient. PMID:24286459
van der Linden, Helma; Austin, Tony; Talmon, Jan
2009-09-01
Future-proof EHR systems must be capable of interpreting information structures for medical concepts that were not available at the build-time of the system. The two-model approach of CEN 13606/openEHR using archetypes achieves this by separating generic clinical knowledge from domain-related knowledge. The presentation of this information can either itself be generic, or require design time awareness of the domain knowledge being employed. To develop a Graphical User Interface (GUI) that would be capable of displaying previously unencountered clinical data structures in a meaningful way. Through "reasoning by analogy" we defined an approach for the representation and implementation of "presentational knowledge". A proof-of-concept implementation was built to validate its implementability and to test for unanticipated issues. A two-model approach to specifying and generating a screen representation for archetype-based information, inspired by the two-model approach of archetypes, was developed. There is a separation between software-related display knowledge and domain-related display knowledge and the toolkit is designed with the reuse of components in mind. The approach leads to a flexible GUI that can adapt not only to information structures that had not been predefined within the receiving system, but also to novel ways of displaying the information. We also found that, ideally, the openEHR Archetype Definition Language should receive minor adjustments to allow for generic binding.
Modelling Wind Turbine Failures based on Weather Conditions
NASA Astrophysics Data System (ADS)
Reder, Maik; Melero, Julio J.
2017-11-01
A large proportion of the overall costs of a wind farm is directly related to operation and maintenance (O&M) tasks. By applying predictive O&M strategies rather than corrective approaches these costs can be decreased significantly. Here, especially wind turbine (WT) failure models can help to understand the components’ degradation processes and enable the operators to anticipate upcoming failures. Usually, these models are based on the age of the systems or components. However, latest research shows that the on-site weather conditions also affect the turbine failure behaviour significantly. This study presents a novel approach to model WT failures based on the environmental conditions to which they are exposed. The results focus on general WT failures, as well as on four main components: gearbox, generator, pitch and yaw system. A penalised likelihood estimation is used in order to avoid problems due to, for example, highly correlated input covariates. The relative importance of the model covariates is assessed in order to analyse the effect of each weather parameter on the model output.
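Penalised likelihood estimation with correlated weather covariates can be illustrated with L2-penalised logistic regression of a binary failure indicator on two correlated covariates. The data, covariate names, and coefficients below are synthetic stand-ins, not results from the study:

```python
# Sketch of penalised-likelihood failure modelling: L2-penalised logistic
# regression on two correlated synthetic "weather" covariates. All data and
# parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 2000
x1 = rng.normal(size=n)                    # e.g. wind speed (standardised)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)   # correlated, e.g. turbulence
X = np.column_stack([np.ones(n), x1, x2])
p_true = 1.0 / (1.0 + np.exp(-(-2.0 + 1.0 * x1 + 0.5 * x2)))
y = (rng.random(n) < p_true).astype(float)  # binary failure indicator

def fit_penalised_logistic(X, y, lam=1.0, lr=0.1, steps=2000):
    """Gradient ascent on the mean log-likelihood with an L2 penalty on all
    coefficients (kept simple; practice often leaves the intercept unpenalised)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (y - p) / len(y) - lam * w / len(y)
        w += lr * grad
    return w

w = fit_penalised_logistic(X, y)
print(w)  # negative intercept, positive weather effects expected here
```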
Ouyang, Min; Tian, Hui; Wang, Zhenghua; Hong, Liu; Mao, Zijun
2017-01-17
This article studies a general type of initiating events in critical infrastructures, called spatially localized failures (SLFs), which are defined as the failure of a set of infrastructure components distributed in a spatially localized area due to damage sustained, while other components outside the area do not directly fail. These failures can be regarded as a special type of intentional attack, such as bomb or explosive assault, or a generalized modeling of the impact of localized natural hazards on large-scale systems. This article introduces three SLFs models: node centered SLFs, district-based SLFs, and circle-shaped SLFs, and proposes a SLFs-induced vulnerability analysis method from three aspects: identification of critical locations, comparisons of infrastructure vulnerability to random failures, topologically localized failures and SLFs, and quantification of infrastructure information value. The proposed SLFs-induced vulnerability analysis method is finally applied to the Chinese railway system and can be also easily adapted to analyze other critical infrastructures for valuable protection suggestions. © 2017 Society for Risk Analysis.
Sensor Failure Detection of FASSIP System using Principal Component Analysis
NASA Astrophysics Data System (ADS)
Sudarno; Juarsa, Mulya; Santosa, Kussigit; Deswandri; Sunaryo, Geni Rina
2018-02-01
In the Fukushima Daiichi nuclear reactor accident in Japan, the damage to the core and pressure vessel was caused by the failure of its active cooling system (the diesel generators were inundated by the tsunami). Thus, research on passive cooling systems for nuclear power plants is performed to improve the safety aspects of nuclear reactors. The FASSIP system (Passive System Simulation Facility) is an installation used to study the characteristics of passive cooling systems at nuclear power plants. Accurate sensor measurement in the FASSIP system is essential, because it is the basis for determining the characteristics of a passive cooling system. In this research, a sensor failure detection method for the FASSIP system is developed, so that indications of sensor failure can be detected early. The method used is Principal Component Analysis (PCA) to reduce the dimensionality of the sensor data, with the Squared Prediction Error (SPE) and Hotelling's T² statistic as criteria for detecting sensor failure indications. The results show that the PCA method is capable of detecting the occurrence of a failure at any sensor.
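The SPE criterion in such a scheme works by fitting PCA on normal-operation sensor data and flagging samples whose residual, after projection onto the retained components, exceeds a threshold (the Hotelling statistic monitors the retained components themselves in the same spirit). A sketch on synthetic sensor data with an injected bias fault; nothing here is a FASSIP measurement:

```python
# PCA-based sensor fault detection via the Squared Prediction Error (SPE).
# Training data and the injected sensor fault are synthetic.
import numpy as np

rng = np.random.default_rng(7)
n, n_sensors = 1000, 5
# Normal operation: five sensors driven by two latent process variables.
latent = rng.normal(size=(n, 2))
mixing = rng.normal(size=(2, n_sensors))
train = latent @ mixing + 0.05 * rng.normal(size=(n, n_sensors))

mean = train.mean(axis=0)
Xc = train - mean
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:2].T  # retain two principal components

def spe(x):
    """Squared Prediction Error: squared residual after PC projection."""
    d = x - mean
    r = d - d @ P @ P.T
    return float(r @ r)

# Control limit from the normal data, e.g. the 99th percentile of SPE.
limit = np.percentile([spe(x) for x in train], 99)

normal_sample = train[0]
faulty_sample = train[0].copy()
faulty_sample[3] += 5.0  # simulate a stuck/biased sensor
print(f"SPE normal={spe(normal_sample):.4f}, faulty={spe(faulty_sample):.4f}, limit={limit:.4f}")
```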
Uncertainty Quantification of the FUN3D-Predicted NASA CRM Flutter Boundary
NASA Technical Reports Server (NTRS)
Stanford, Bret K.; Massey, Steven J.
2017-01-01
A nonintrusive point collocation method is used to propagate parametric uncertainties of the flexible Common Research Model, a generic transport configuration, through the unsteady aeroelastic CFD solver FUN3D. A range of random input variables are considered, including atmospheric flow variables, structural variables, and inertial (lumped mass) variables. UQ results are explored for a range of output metrics (with a focus on dynamic flutter stability), for both subsonic and transonic Mach numbers, for two different CFD mesh refinements. A particular focus is placed on computing failure probabilities: the probability that the wing will flutter within the flight envelope.
NASA Technical Reports Server (NTRS)
1979-01-01
Graphite/polyimide (Gr/PI) bolted and bonded joints were investigated. Possible failure modes and the design loads for the four generic joint types are discussed. Preliminary sizing of a type 1 joint, bonded and bolted configuration is described, including assumptions regarding material properties and sizing methodology. A general purpose finite element computer code is described that was formulated to analyze single and double lap joints, with and without tapered adherends, and with user-controlled variable element size arrangements. An initial order of Celion 6000/PMR-15 prepreg was received and characterized.
Stephens, Robert P
2011-01-01
Addiction films have been shaped by the internal demands of a commercial medium. Specifically, melodrama, as a genre, has defined the limits of the visual representation of addiction. Similarly, the process of intermedialization has tended to induce a metamorphosis that shapes disparate narratives with diverse goals into a generic filmic form and substantially alters the meanings of the texts. Ultimately, visual representations shape public perceptions of addiction in meaningful ways, privileging a moralistic understanding of drug addiction that makes a complex issue visually uncomplicated by reinforcing "common sense" ideas of moral failure and redemption. Copyright © 2011 Informa Healthcare USA, Inc.
Mental health services then and now.
Mechanic, David
2007-01-01
Over the past twenty-five years, psychiatric services have shifted from hospital to community. Managed care reinforces this trend. Mental illness is better understood and less stigmatized, and services are more commonly used. But many in need do not receive care consistent with evidence-based standards, or at all. Challenges are greatest for people with serious and persistent mental illnesses who depend on generic health and welfare programs and integrated services. Evidence-based rehabilitative care is often unavailable. Failures in community care lead to arrest; jail diversion and treatment are required. Despite progress, implementing an effective, patient-centered care system remains a formidable challenge.
Analytic gravitational waveforms for generic precessing compact binaries
NASA Astrophysics Data System (ADS)
Chatziioannou, Katerina; Klein, Antoine; Cornish, Neil; Yunes, Nicolas
2017-01-01
Gravitational waves from compact binaries are subject to amplitude and phase modulations arising from interactions between the angular momenta of the system. Failure to account for such spin-precession effects in gravitational wave data analysis could hinder detection and completely ruin parameter estimation. In this talk I will describe the construction of closed-form, frequency-domain waveforms for fully-precessing, quasi-circular binary inspirals. The resulting waveforms can model spinning binaries of arbitrary spin magnitudes, spin orientations, and masses during the inspiral phase. I will also describe ongoing efforts to extend these inspiral waveforms to the merger and ringdown phases.
Giardina, M; Castiglia, F; Tomarchio, E
2014-12-01
Failure mode, effects and criticality analysis (FMECA) is a safety technique extensively used in many different industrial fields to identify and prevent potential failures. In the application of traditional FMECA, the risk priority number (RPN) is determined to rank the failure modes; however, the method has been criticised for having several weaknesses. Moreover, it is unable to adequately deal with human errors or negligence. In this paper, a new versatile fuzzy rule-based assessment model is proposed to evaluate the RPN index to rank both component failure and human error. The proposed methodology is applied to potential radiological over-exposure of patients during high-dose-rate brachytherapy treatments. The critical analysis of the results can provide recommendations and suggestions regarding safety provisions for the equipment and procedures required to reduce the occurrence of accidental events.
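The traditional RPN that the fuzzy model replaces is simply the product of severity, occurrence, and detection scores, each conventionally rated on a 1-10 scale. A minimal sketch with invented example scores:

```python
# Traditional FMECA risk priority number: RPN = severity * occurrence * detection,
# each scored 1-10. The failure modes and scores below are invented examples.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    return severity * occurrence * detection

failure_modes = {
    "component failure": (8, 3, 4),   # RPN = 96
    "human error":       (9, 2, 6),   # RPN = 108
}
ranked = sorted(failure_modes, key=lambda m: rpn(*failure_modes[m]), reverse=True)
print(ranked)  # highest-risk mode first
```

This crisp product is what the fuzzy rule-based model above is designed to improve on, since equal RPNs can mask very different risk profiles.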
A Sixteen Node Shell Element with a Matrix Stabilization Scheme.
1987-04-22
coordinates with components x, y and z are defined on the shell midsurface in addition to global coordinates with components X, Y and Z. The x, y and z axes... midsurface while a3 is normal to the surface. The a1, a2 and a3 vectors are given at each node as an input. In addition, they are defined at each integra...drawn from the point on the midsurface to the generic material point, t is the shell thickness and the nondimensional coordinate C runs from -1 to 1
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-06
... controls, even if those parts or components are ``specially designed'' for a defense article on the USML or... using broad, design-intent based catch- all controls over generic types of items such as ``parts'' and... that multilateral regimes do not require that the United States control non-significant parts and the...
76 FR 14028 - Generic Drug User Fee; Notice of Public Meeting; Reopening of the Comment Period
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-15
... is reopening the comment period for the expected duration of the active negotiation phase to ensure... throughout the negotiation phase, FDA is reopening the comment period until June 30, 2011. FDA expects that the public component of the GDUF negotiations will be complete by the end of June 2011. Therefore, the...
Earth mineral resource of the month: asbestos
Virta, Robert L.
2010-01-01
The article discusses the characteristics and features of asbestos. According to the author, asbestos is a generic name for six needle-shaped minerals that possess high tensile strengths, flexibility, and resistance to chemical and thermal degradation. These minerals are actinolite, amosite, anthophyllite, chrysotile, crocidolite and tremolite. Asbestos is used for strengthening concrete pipe, plastic components, and gypsum plasters.
Can Holistic Processing Be Learned for Inverted Faces?
ERIC Educational Resources Information Center
Robbins, Rachel; McKone, Elinor
2003-01-01
The origin of "special" processing for upright faces has been a matter of ongoing debate. If it is due to generic expertise, as opposed to having some innate component, holistic processing should be learnable for stimuli other than upright faces. Here we assess inverted faces. We trained subjects to discriminate identical twins using up to 1100…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-01
....'' This LR-ISG revises an NRC staff-recommended aging management program (AMP) in NUREG-1801, Revision 2, ``Generic Aging Lessons Learned (GALL) Report,'' and the NRC staff's aging management review procedure and... for piping and components within the scope of the Requirements for Renewal of Operating Licenses for...
Developing Student Programming and Problem-Solving Skills with Visual Basic
ERIC Educational Resources Information Center
Siegle, Del
2009-01-01
Although most computer users will never need to write a computer program, many students enjoy the challenge of creating one. Computer programming enhances students' problem solving by forcing students to break a problem into its component pieces and reassemble it in a generic format that can be understood by a nonsentient entity. It promotes…
Reliability of hybrid microcircuit discrete components
NASA Technical Reports Server (NTRS)
Allen, R. V.
1972-01-01
Data accumulated during 4 years of research and evaluation of ceramic chip capacitors, ceramic carrier mounted active devices, beam-lead transistors, and chip resistors are presented. Life and temperature coefficient test data, and optical and scanning electron microscope photographs of device failures are presented and the failure modes are described. Particular interest is given to discrete component qualification, power burn-in, and procedures for testing and screening discrete components. Burn-in requirements and test data will be given in support of 100 percent burn-in policy on all NASA flight programs.
Ferrographic and spectrometer oil analysis from a failed gas turbine engine
NASA Technical Reports Server (NTRS)
Jones, W. R., Jr.
1983-01-01
An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. It was concluded that a severe surge may have caused interference between rotating and stationary compressor parts that either directly or indirectly ignited the titanium components. Several engine oil samples (before and after the failure) were analyzed with a Ferrograph, and with plasma, atomic absorption, and emission spectrometers to see if this information would aid in the engine failure diagnosis. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism nor a high level of wear debris was detected in the engine oil sample taken just prior to the test in which the failure occurred. However, low concentrations (0.2 to 0.5 ppm) of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations (2 ppm) were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure. The oil analyses eliminated a lubrication system bearing or shaft seal failure as the cause of the engine failure. Previously announced in STAR as N83-12433
NASA Technical Reports Server (NTRS)
Moas, Eduardo; Boitnott, Richard L.; Griffin, O. Hayden, Jr.
1994-01-01
Six-foot diameter, semicircular graphite/epoxy specimens representative of generic aircraft frames were loaded quasi-statically to determine their load response and failure mechanisms for the large deflections that occur in airplane crashes. These frame/skin specimens consisted of a cylindrical skin section co-cured with a semicircular I-frame. The skin provided the necessary lateral stiffness to keep deformations in the plane of the frame in order to realistically represent deformations as they occur in actual fuselage structures. Various frame laminate stacking sequences and geometries were evaluated by statically loading the specimens until multiple failures occurred. Two analytical methods were compared for modeling the frame/skin specimens: a two-dimensional shell finite element analysis and a one-dimensional, closed-form, curved beam solution derived using an energy method. Flange effectivities were included in the beam analysis to account for the curling phenomenon that occurs in thin flanges of curved beams. Good correlation was obtained between experimental results and the analytical predictions of the linear response of the frames prior to the initial failure. The specimens were found to be useful for evaluating composite frame designs.
Physics-based Entry, Descent and Landing Risk Model
NASA Technical Reports Server (NTRS)
Gee, Ken; Huynh, Loc C.; Manning, Ted
2014-01-01
A physics-based risk model was developed to assess the risk associated with thermal protection system failures during the entry, descent and landing phase of a manned spacecraft mission. In the model, entry trajectories were computed using a three-degree-of-freedom trajectory tool, the aerothermodynamic heating environment was computed using an engineering-level computational tool and the thermal response of the TPS material was modeled using a one-dimensional thermal response tool. The model was capable of modeling the effect of micrometeoroid and orbital debris impact damage on the TPS thermal response. A Monte Carlo analysis was used to determine the effects of uncertainties in the vehicle state at Entry Interface, aerothermodynamic heating and material properties on the performance of the TPS design. The failure criterion was set as a temperature limit at the bondline between the TPS and the underlying structure. Both direct computation and response surface approaches were used to compute the risk. The model was applied to a generic manned space capsule design. The effects of material property uncertainty and MMOD damage on the risk of failure were analyzed. A comparison of the direct computation and response surface approaches was undertaken.
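The Monte Carlo risk computation described above can be sketched in miniature. The snippet below is an illustrative toy, not the NASA tool: `bondline_temperature` is a made-up one-line surrogate standing in for the trajectory, aerothermodynamic, and thermal-response codes, and all distributions and limits are invented.

```python
import random

def bondline_temperature(heat_load, conductivity):
    # Hypothetical surrogate: bondline temperature rises with integrated
    # heat load and falls with TPS conductivity margin (illustrative only).
    return 300.0 + 0.8 * heat_load / conductivity

def failure_probability(n_samples=100_000, temp_limit=560.0, seed=1):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        # Sample uncertain inputs: aerothermodynamic heating and a
        # TPS material property (assumed Gaussian spreads).
        heat_load = rng.gauss(250.0, 30.0)    # notional heat load
        conductivity = rng.gauss(1.0, 0.05)   # normalized material property
        if bondline_temperature(heat_load, conductivity) > temp_limit:
            failures += 1
    return failures / n_samples

print(f"P(failure) ~ {failure_probability():.4f}")
```

A response-surface variant would replace `bondline_temperature` with a polynomial fit to a small number of expensive direct computations.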
On the Mechanical Properties and Microstructure of Nitinol for Biomedical Stent Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, Scott W.
2006-01-01
This dissertation was motivated by the alarming number of biomedical device failures reported in the literature, coupled with the growing trend towards the use of Nitinol for endovascular stents. The research is aimed at addressing two of the primary failure modes in Nitinol endovascular stents: fatigue-crack growth and overload fracture. The small dimensions of stents, coupled with their complex geometries and variability among manufacturers, make it virtually impossible to determine generic material constants associated with specific devices. Instead, the research utilizes a hybrid of standard test techniques (fracture mechanics and x-ray micro-diffraction) and custom-designed testing apparatus for the determination of the fracture properties of specimens that are suitable representations of self-expanding Nitinol stents. Specifically, the role of texture (crystallographic alignment of atoms) and the austenite-to-martensite phase transformation on the propagation of cracks in Nitinol was evaluated under simulated body conditions and over a multitude of stresses and strains. The results determined through this research were then used to create conservative safe operating and inspection criteria to be used by the biomedical community for the determination of specific device vulnerability to failure by fracture and/or fatigue.
Lim, Renly; Liong, Men Long; Leong, Wing Seng; Lau, Yong Khee; Khan, Nurzalina Abdul Karim; Yuen, Kah Hay
2018-02-01
To assess the impact of stress urinary incontinence (SUI) on individual components of quality of life (QoL) using both condition-specific and generic questionnaires, and to compare the results of the 2 instruments with a control group. Women with or without SUI aged ≥21 years were recruited. Subjects completed the International Consultation on Incontinence-Urinary Incontinence Short Form (ICIQ-UI-SF), International Consultation on Incontinence-Lower Urinary Tract Symptoms Quality of Life (ICIQ-LUTSqol), and EQ-5D questionnaires. A total of 120 women with SUI and 145 controls participated. The ICIQ-LUTSqol total score (mean ± standard deviation) was significantly higher in the SUI group (38.96 ± 10.28) than in the control group (20.78 ± 2.73) (P <.001). When adjusted for significant confounders, the SUI group continued to have significantly poorer QoL than the control group (P <.001). The negative effects of SUI on "physical activities" and "jobs" were the 2 most frequently reported and burdensome components of the ICIQ-LUTSqol, with approximately 50% of women with SUI affected "moderately" or "a lot." When measured using the EQ-5D questionnaire, significantly higher percentages of patients with SUI had problems with usual activities, pain or discomfort, and anxiety or depression (P <.05). Women suffering from SUI have significantly poorer QoL than continent women when measured using both condition-specific and generic QoL measures. Clinicians should pay closer attention to the impact of SUI on individual components of QoL, particularly limitations on physical activities and jobs, which were the 2 most impairing and frequently reported components of QoL. Copyright © 2017 Elsevier Inc. All rights reserved.
Defense strategies for asymmetric networked systems under composite utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell
We consider an infrastructure of networked systems with discrete components that can be reinforced at certain costs to guard against attacks. The communications network plays a critical, asymmetric role of providing the vital connectivity between the systems. We characterize the correlations within this infrastructure at two levels using (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network, and (b) first-order differential conditions on system survival probabilities that characterize component-level correlations. We formulate an infrastructure survival game between an attacker and a provider, who attack and reinforce individual components, respectively. They use composite utility functions composed of a survival probability term and a cost term; the previously studied sum-form and product-form utility functions are their special cases. At Nash Equilibrium, we derive expressions for individual system survival probabilities and the expected total number of operational components. We apply and discuss these estimates for a simplified model of a distributed cloud computing infrastructure.
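A minimal sketch of such an attacker-defender game on a grid of discrete effort levels. The ratio-form contest success function, the component count, and the unit costs below are assumptions for illustration, not the paper's model; the utilities follow the stated composite form (a survival-probability term minus a cost term).

```python
import itertools

N = 10                      # number of components (assumed)
C_DEF, C_ATK = 1.0, 1.2     # unit reinforcement / attack costs (assumed)

def survival(x, y):
    # Ratio-form contest success function (a common choice in this
    # literature, assumed here): probability a component survives
    # x units of attack against y units of reinforcement.
    return y / (x + y) if x + y > 0 else 1.0

def u_def(x, y):
    # Composite utility: survival-probability term minus cost term
    return N * survival(x, y) - C_DEF * y

def u_atk(x, y):
    return N * (1.0 - survival(x, y)) - C_ATK * x

def pure_nash(max_effort=10):
    """Exhaustively find pure-strategy Nash equilibria on the grid."""
    strategies = range(max_effort + 1)
    eqs = []
    for x, y in itertools.product(strategies, repeat=2):
        best_x = all(u_atk(x, y) >= u_atk(x2, y) for x2 in strategies)
        best_y = all(u_def(x, y) >= u_def(x, y2) for y2 in strategies)
        if best_x and best_y:
            eqs.append((x, y))
    return eqs

print(pure_nash())
```

With these numbers the grid search recovers the interior equilibrium predicted by the continuous contest model (attacker and defender efforts near 2).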
NASA Technical Reports Server (NTRS)
Frady, Greg; Nesman, Thomas; Zoladz, Thomas; Szabo, Roland
2010-01-01
For many years, the ability to determine the root cause of component failures was limited by the available analytical tools and the state of the art in data acquisition systems. With this limited capability, many anomalies were resolved by adding material to the design to increase robustness, without the ability to determine whether the design solution was satisfactory until after a series of expensive test programs was complete. The risk of failure and of multiple design, test, and redesign cycles was high. During the Space Shuttle Program, crack investigations in high-energy-density turbomachines, such as the SSME turbopumps, and in high-energy flows in the main propulsion system led to the discovery of numerous root-cause failures and anomalies arising from the coexistence of acoustic forcing functions, structural natural modes, and a high-energy excitation, such as an edge tone or shedding flow. These investigations led the technical community to understand many of the primary contributors to extremely high frequency, high-cycle-fatigue fluid-structure interaction anomalies. The contributors have been identified using advanced analysis tools and verified during component ground tests, systems tests, and flight. The structural dynamics and fluid dynamics communities have developed a special sensitivity to fluid-structure interaction problems and have been able to solve them quickly enough to meet the budget and schedule deadlines of operational vehicle programs such as the Space Shuttle Program.
NASA Astrophysics Data System (ADS)
Chan, Kwai S.; Enright, Michael P.; Moody, Jonathan; Fitch, Simeon H. K.
2014-01-01
The objective of this investigation was to develop an innovative methodology for life and reliability prediction of hot-section components in advanced turbopropulsion systems. A set of generic microstructure-based time-dependent crack growth (TDCG) models was developed and used to assess the sources of material variability due to microstructure and material parameters such as grain size, activation energy, and crack growth threshold for TDCG. A comparison of model predictions and experimental data obtained in air and in vacuum suggests that oxidation is responsible for higher crack growth rates at high temperatures, low frequencies, and long dwell times, but oxidation can also induce higher crack growth thresholds (ΔKth or Kth) under certain conditions. Using the enhanced risk analysis tool and material constants calibrated to IN 718 data, the effect of TDCG on the risk of fracture in turboengine components was demonstrated for a generic rotor design and a realistic mission profile using the DARWIN® probabilistic life-prediction code. The results of this investigation confirmed that TDCG and cycle-dependent crack growth in IN 718 can be treated by a simple summation of the crack increments over a mission. For the temperatures considered, TDCG in IN 718 can be considered as a K-controlled or a diffusion-controlled oxidation-induced degradation process. This methodology provides a pathway for evaluating microstructural effects on multiple damage modes in hot-section components.
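The "simple summation of the crack increments over a mission" can be illustrated as follows. The Paris-law and time-dependent constants below are notional placeholders, not calibrated IN 718 values, and the geometry factor assumes a simple edge crack.

```python
import math

# Notional material constants (NOT calibrated IN 718 values)
C_CYC, M_CYC = 1e-11, 3.0   # Paris-law constants, da/dN in m/cycle
C_TD, M_TD = 1e-9, 2.0      # time-dependent crack growth, da/dt in m/s
DK_TH = 5.0                 # threshold stress-intensity range, MPa*sqrt(m)

def delta_k(stress_range, a):
    # Edge-crack approximation: dK = Y * dS * sqrt(pi * a), Y ~ 1.12
    return 1.12 * stress_range * math.sqrt(math.pi * a)

def grow_over_mission(a0, segments):
    """Sum cycle-dependent and time-dependent (TDCG) crack increments
    over mission segments: (stress_range MPa, n_cycles, dwell_seconds)."""
    a = a0
    for stress_range, n_cycles, dwell in segments:
        dk = delta_k(stress_range, a)
        if dk > DK_TH:
            a += n_cycles * C_CYC * dk ** M_CYC   # fatigue increment
            a += dwell * C_TD * dk ** M_TD        # dwell (TDCG) increment
    return a

# Notional mission: climb cycles, a long cruise dwell, descent cycles
mission = [(300.0, 100, 0.0), (200.0, 1000, 600.0), (300.0, 100, 0.0)]
a_final = grow_over_mission(1e-3, mission)
print(f"crack grew from 1.000 mm to {a_final * 1e3:.3f} mm")
```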
The Common Data Acquisition Platform in the Helmholtz Association
NASA Astrophysics Data System (ADS)
Kaever, P.; Balzer, M.; Kopmann, A.; Zimmer, M.; Rongen, H.
2017-04-01
Various centres of the German Helmholtz Association (HGF) started in 2012 to develop a modular data acquisition (DAQ) platform, covering the entire range from detector readout to data transfer into parallel computing environments. This platform integrates generic hardware components like the multi-purpose HGF-Advanced Mezzanine Card or a smart scientific camera framework, adding user value with Linux drivers and board support packages. Technically, the scope comprises the DAQ chain from FPGA modules to computing servers, notably frontend-electronics interfaces, microcontrollers, and GPUs with their software, plus high-performance data transmission links. The core idea is a generic and component-based approach, enabling the implementation of specific experiment requirements with low effort. This so-called DTS platform will support standards like MTCA.4 in hardware and software to ensure compatibility with commercial components. Its capability to deploy on other crate standards or FPGA boards with PCI Express or Ethernet interfaces remains an essential feature. Competences of the participating centres are coordinated in order to provide a solid technological basis for both research topics in the Helmholtz Programme "Matter and Technology": "Detector Technology and Systems" and "Accelerator Research and Development". The DTS platform aims at reducing costs and development time and will ensure access to the latest technologies for the collaboration. Due to its flexible approach, it has the potential to be applied in other scientific programs.
Loads and Performance Data from a Wind-Tunnel Test of Generic Model Helicopter Rotor Blades
NASA Technical Reports Server (NTRS)
Yeager, William T., Jr.; Wilbur, Matthew L.
2005-01-01
An investigation was conducted in the NASA Langley Transonic Dynamics Tunnel to acquire data for use in assessing the ability of current and future comprehensive analyses to predict helicopter rotating-system and fixed-system vibratory loads. The investigation was conducted with a generic model helicopter rotor system using blades with rectangular planform, no built-in twist, uniform radial distribution of mass and stiffnesses, and a NACA 0012 airfoil section. Rotor performance data, as well as mean and vibratory components of blade bending and torsion moments, fixed-system forces and moments, and pitch link loads were obtained at advance ratios up to 0.35 for various combinations of rotor shaft angle-of-attack and collective pitch. The data are presented without analysis.
A generic multibody simulation
NASA Technical Reports Server (NTRS)
Hopping, K. A.; Kohn, W.
1986-01-01
Described is a dynamic simulation package which can be configured for orbital test scenarios involving multiple bodies. The rotational and translational state integration methods are selectable for each individual body and may be changed during a run if necessary. Characteristics of the bodies are determined by assigning components consisting of mass properties, forces, and moments, which are the outputs of user-defined environmental models. Generic model implementation is facilitated by a transformation processor which performs coordinate frame inversions. Transformations are defined in the initialization file as part of the simulation configuration. The simulation package includes an initialization processor, which consists of a command line preprocessor, a general purpose grammar, and a syntax scanner. These permit specifications of the bodies, their interrelationships, and their initial states in a format that is not dependent on a particular test scenario.
A Generic Software Architecture For Prognostics
NASA Technical Reports Server (NTRS)
Teubert, Christopher; Daigle, Matthew J.; Sankararaman, Shankar; Goebel, Kai; Watkins, Jason
2017-01-01
Prognostics is a systems engineering discipline focused on predicting end-of-life of components and systems. As a relatively new and emerging technology, there are few fielded implementations of prognostics, due in part to practitioners perceiving a large hurdle in developing the models, algorithms, architecture, and integration pieces. As a result, no open software frameworks for applying prognostics currently exist. This paper introduces the Generic Software Architecture for Prognostics (GSAP), an open-source, cross-platform, object-oriented software framework and support library for creating prognostics applications. GSAP was designed to make prognostics more accessible and enable faster adoption and implementation by industry, by reducing the effort and investment required to develop, test, and deploy prognostics. This paper describes the requirements, design, and testing of GSAP. Additionally, a detailed case study involving battery prognostics demonstrates its use.
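GSAP itself is an open-source C++ framework; the sketch below does not use its API but illustrates the core model-based prognostics idea it supports: propagate a degradation model forward until a health threshold is crossed and report the time to threshold (remaining useful life). The battery fade model and all parameters are hypothetical.

```python
def predict_eol(state, model, threshold, dt=1.0, max_steps=10_000):
    """Propagate a degradation model until the health variable crosses
    its end-of-life threshold; return the time to threshold (RUL)."""
    t = 0.0
    for _ in range(max_steps):
        if state <= threshold:
            return t
        state = model(state, dt)   # one step of the degradation model
        t += dt
    return None  # threshold not reached within the prediction horizon

# Hypothetical battery model: capacity fades ~0.01% per cycle
fade = lambda capacity, dt: capacity * (1.0 - 1e-4 * dt)

rul = predict_eol(state=1.0, model=fade, threshold=0.8, dt=1.0)
print(f"predicted RUL: {rul:.0f} cycles")
```

A real prognoser adds state estimation from noisy sensor data and uncertainty propagation (e.g., Monte Carlo over model parameters), which is where a framework pays off.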
Open architectures for formal reasoning and deductive technologies for software development
NASA Technical Reports Server (NTRS)
Mccarthy, John; Manna, Zohar; Mason, Ian; Pnueli, Amir; Talcott, Carolyn; Waldinger, Richard
1994-01-01
The objective of this project is to develop an open architecture for formal reasoning systems. One goal is to provide a framework with a clear semantic basis for the specification and instantiation of generic components; for the construction of complex systems by interconnecting components; and for making incremental improvements and tailoring to specific applications. Another goal is to develop methods for specifying component interfaces and interactions to facilitate the use of existing and newly built systems as 'off the shelf' components, thus helping bridge the gap between producers and consumers of reasoning systems. In this report we summarize results in several areas: our data base of reasoning systems; a theory of binding structures; a theory of components of open systems; a framework for specifying components of open reasoning systems; and an analysis of the integration of the rewriting and linear arithmetic modules in Boyer-Moore using the above framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilkinson, V.K.; Young, J.M.
1995-07-01
The US Army's Project Manager, Advanced Field Artillery System/Future Armored Resupply Vehicle (PM-AFAS/FARV) is sponsoring the development of technologies that can be applied to the resupply vehicle for the Advanced Field Artillery System. The Engineering Technology Division of the Oak Ridge National Laboratory has proposed adding diagnostics/prognostics systems to four components of the Ammunition Transfer Arm of this vehicle, and a cost-benefit analysis was performed on the diagnostics/prognostics to show the potential savings that may be gained by incorporating these systems onto the vehicle. Possible savings could be in the form of reduced downtime, less unexpected or unnecessary maintenance, fewer regular maintenance checks, and/or lower collateral damage or loss. The diagnostics/prognostics systems are used to (1) help determine component problems, (2) determine the condition of the components, and (3) estimate the remaining life of the monitored components. The four components on the arm that are targeted for diagnostics/prognostics are (1) the electromechanical brakes, (2) the linear actuators, (3) the wheel/roller bearings, and (4) the conveyor drive system. These would be monitored using electrical signature analysis, vibration analysis, or a combination of both. Annual failure rates for the four components were obtained along with specifications for vehicle costs, crews, number of missions, etc. Accident scenarios based on component failures were postulated, and event trees for these scenarios were constructed to estimate the annual loss of the resupply vehicle, crew, or arm, or mission aborts. A levelized cost-benefit analysis was then performed to examine the costs of such failures, both with and without some level of failure reduction due to the diagnostics/prognostics systems. Any savings resulting from using diagnostics/prognostics were calculated.
NASA Astrophysics Data System (ADS)
Amirat, Yassine; Choqueuse, Vincent; Benbouzid, Mohamed
2013-12-01
Failure detection has always been a demanding task in the electrical machines community; it has become more challenging in wind energy conversion systems because the sustainability and viability of wind farms are highly dependent on the reduction of operational and maintenance costs. Indeed, the most efficient way of reducing these costs is to continuously monitor the condition of these systems. This allows for early detection of generator health degeneration, facilitating a proactive response, minimizing downtime, and maximizing productivity. This paper then provides an assessment of a failure detection technique based on the homopolar component of the generator stator current and highlights the use of ensemble empirical mode decomposition as a tool for failure detection in wind turbine generators for stationary and non-stationary cases.
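The homopolar (zero-sequence) component mentioned above is simply the mean of the three phase currents, which is near zero for a healthy balanced machine. A minimal sketch, with a simulated 10% phase-amplitude imbalance standing in for a stator fault (the fault model and thresholds are assumptions):

```python
import math

def homopolar(ia, ib, ic):
    # Zero-sequence (homopolar) component of a three-phase signal
    return [(a + b + c) / 3.0 for a, b, c in zip(ia, ib, ic)]

# Balanced three-phase currents sum to ~zero; a stator fault breaks
# the symmetry and leaks energy into the homopolar component.
n, f, fs = 1000, 50.0, 5000.0
t = [k / fs for k in range(n)]
ia = [math.sin(2 * math.pi * f * x) for x in t]
ib = [math.sin(2 * math.pi * f * x - 2 * math.pi / 3) for x in t]
ic = [math.sin(2 * math.pi * f * x + 2 * math.pi / 3) for x in t]

healthy = homopolar(ia, ib, ic)
# Simulated imbalance (hypothetical fault): phase a gains 10% amplitude
faulty = homopolar([1.1 * a for a in ia], ib, ic)

rms = lambda s: math.sqrt(sum(v * v for v in s) / len(s))
print(f"healthy RMS = {rms(healthy):.4f}, faulty RMS = {rms(faulty):.4f}")
```

In the paper's approach the resulting homopolar signal would then be decomposed (e.g., by ensemble empirical mode decomposition) rather than summarized by a single RMS value.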
Developing Reliable Life Support for Mars
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2017-01-01
A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, or directly supply water or oxygen, or if necessary bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human-tended closed-system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty in achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher reliability systems.
The limitations of budget, schedule, and technology may suggest accepting lower and less certain expected reliability. A plan to develop reliable life support is needed to achieve the best possible reliability.
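The spares-sizing argument above can be made concrete with a Poisson failure model: choose the smallest spare count whose coverage probability meets the goal, then see how coverage collapses if the failure rate was underestimated. All rates and durations below are notional.

```python
import math

def poisson_cdf(k, mu):
    # P(N <= k) for N ~ Poisson(mu)
    return sum(math.exp(-mu) * mu ** i / math.factorial(i)
               for i in range(k + 1))

def spares_needed(failure_rate, mission_hours, goal=0.99):
    """Smallest spare count k with P(no more than k failures) >= goal."""
    mu = failure_rate * mission_hours  # expected failures over the mission
    k = 0
    while poisson_cdf(k, mu) < goal:
        k += 1
    return k

# Notional component: 1 failure per 10,000 h, ~2.5-year Mars mission
k = spares_needed(1e-4, 22_000)
print(f"spares for the stated rate: {k}")
# If the true rate is 2x the estimate, the same stock may fall short:
print(f"coverage at 2x rate: {poisson_cdf(k, 2e-4 * 22_000):.3f}")
```

This is the quantitative version of the abstract's warning: a 2x error in the failure-rate estimate drops the coverage well below the original goal.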
Risk Based Inspection Methodology and Software Applied to Atmospheric Storage Tanks
NASA Astrophysics Data System (ADS)
Topalis, P.; Korneliussen, G.; Hermanrud, J.; Steo, Y.
2012-05-01
A new risk-based inspection (RBI) methodology and software is presented in this paper. The objective of this work is to allow management of the inspections of atmospheric storage tanks in the most efficient way, while, at the same time, accident risks are minimized. The software has been built on the new risk framework architecture, a generic platform facilitating efficient and integrated development of software applications using risk models. The framework includes a library of risk models and the user interface is automatically produced on the basis of editable schemas. This risk-framework-based RBI tool has been applied in the context of RBI for above-ground atmospheric storage tanks (AST) but it has been designed with the objective of being generic enough to allow extension to the process plants in general. This RBI methodology is an evolution of an approach and mathematical models developed for Det Norske Veritas (DNV) and the American Petroleum Institute (API). The methodology assesses damage mechanism potential, degradation rates, probability of failure (PoF), consequence of failure (CoF) in terms of environmental damage and financial loss, risk and inspection intervals and techniques. The scope includes assessment of the tank floor for soil-side external corrosion and product-side internal corrosion and the tank shell courses for atmospheric corrosion and internal thinning. It also includes preliminary assessment for brittle fracture and cracking. The data are structured according to an asset hierarchy including Plant, Production Unit, Process Unit, Tag, Part and Inspection levels and the data are inherited / defaulted seamlessly from a higher hierarchy level to a lower level. The user interface includes synchronized hierarchy tree browsing, dynamic editor and grid-view editing and active reports with drill-in capability.
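A toy version of the risk logic (risk = PoF x CoF, with the next inspection due before the risk target is exceeded) is sketched below. The linear PoF model and every number are illustrative assumptions, not the DNV/API formulation.

```python
def risk_at(years, corrosion_rate, thickness, t_min, cof):
    """Toy RBI model (illustrative, not the DNV/API formulation):
    PoF rises linearly as predicted wall loss consumes the corrosion
    allowance; risk = PoF * CoF."""
    allowance = thickness - t_min
    loss = corrosion_rate * years
    pof = min(1.0, max(0.0, loss / allowance))
    return pof * cof

def next_inspection(corrosion_rate, thickness, t_min, cof, risk_target):
    # Step forward in quarter-year increments until the risk target
    # would be exceeded; inspect no later than the last safe step.
    years = 0.0
    while risk_at(years + 0.25, corrosion_rate, thickness, t_min,
                  cof) <= risk_target:
        years += 0.25
    return years

# Tank floor plate: 8 mm nominal, 5 mm minimum, 0.25 mm/yr soil-side
# corrosion, CoF of $2M (all values notional)
interval = next_inspection(0.25, 8.0, 5.0, 2e6, risk_target=1e5)
print(f"inspect within {interval:.2f} years")
```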
[Clinical experience with 53 consecutive heart transplants].
Villavicencio, Mauricio; Rossel, Víctor; Larrea, Ricardo; Peralta, Juan Pablo; Larraín, Ernesto; Sung Lim, Jong; Rojo, Pamela; Gajardo, Francesca; Donoso, Erika; Hurtado, Margarita
2013-12-01
Heart transplantation is the therapy of choice for advanced heart failure. Our group developed two transplant programs at Instituto Nacional del Tórax and Clínica Dávila. We report our clinical experience based on distinctive clinical policies. Fifty-three consecutive patients were transplanted between November 2008 and April 2013, representing 51% of all Chilean cases. Distinctive clinical policies include intensive donor management, generic immunosuppression, and VAD (ventricular assist device) insertion. Ischemic and dilated cardiomyopathy were the main indications (23 (43%) each); mean age was 48 ± 13 years and 48 (91%) were male. Transplant listing status: IA 14 (26%) (VAD or 2 inotropes), IB 14 (26%) (1 inotrope), and II 25 (47%) (no inotropes). Mean waiting time was 70 ± 83 days. Twelve (24%) were transplanted during VAD support (median support: 36 days). Orthotopic bicaval transplantation was performed with an ischemia time of 175 ± 54 min. Operative mortality: 3 (6%), all due to right ventricular failure. Re-exploration for bleeding 2 (4%), stroke 3 (6%), mediastinitis 0 (0%), pneumonia 4 (8%), and transient dialysis 6 (11%). Mean follow-up was 21 ± 14 months. Three-year survival was 86 ± 6%. One patient died of Pneumocystis jirovecii pneumonia and another died suddenly (non-compliance). Freedom from rejection requiring specific therapy was 80 ± 7% at 3 years of follow-up. Four hundred eighty-four endomyocardial biopsies were done; 11 (2.3%) showed 2R rejection. All survivors are in NYHA (New York Heart Association) functional class I and all but one have normal biventricular function. Mid-term results are similar to those reported by the registry of the International Society for Heart and Lung Transplantation. This experience has a higher proportion of VAD support than previous national series. Rejection rates are low in spite of generic immunosuppression.
Catastrophic Fault Recovery with Self-Reconfigurable Chips
NASA Technical Reports Server (NTRS)
Zheng, Will Hua; Marzwell, Neville I.; Chau, Savio N.
2006-01-01
Mission critical systems typically employ multi-string redundancy to cope with possible hardware failure. Such systems are only as fault tolerant as there are many redundant strings. Once a particular critical component exhausts its redundant spares, the multi-string architecture cannot tolerate any further hardware failure. This paper aims at addressing such catastrophic faults through the use of 'Self-Reconfigurable Chips' as a last resort effort to 'repair' a faulty critical component.
Reliability Centred Maintenance (RCM) Analysis of Laser Machine in Filling Lithos at PT X
NASA Astrophysics Data System (ADS)
Suryono, M. A. E.; Rosyidi, C. N.
2018-03-01
PT. X uses automated machines that work for sixteen hours per day; therefore, the machines should be maintained to keep them available. The aim of this research is to determine maintenance tasks according to the causes of component failure using Reliability Centred Maintenance (RCM) and to determine the optimal inspection frequency for the machines in the filling lithos process. In this research, RCM is used as an analysis tool to determine the critical component and to find optimal inspection frequencies that maximize the machine's reliability. From the analysis, we found that the critical machine in the filling lithos process is the laser machine in Line 2. We then determined the causes of the machine's failure. The Lastube component has the highest Risk Priority Number (RPN) among components such as the power supply, lens, chiller, laser siren, encoder, conveyor, and mirror galvo. Most of the components have operational consequences; the others have hidden-failure consequences and safety consequences. Time-directed life-renewal tasks, failure-finding tasks, and servicing tasks can be used to address these consequences. The results of the data analysis show that inspection of the laser machine must be performed once a month as preventive maintenance to lower the downtime.
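The RPN used above is the standard FMEA product of severity, occurrence, and detection ratings. A minimal sketch with invented ratings (not PT X's actual data):

```python
# FMEA-style Risk Priority Number: RPN = severity * occurrence * detection
# (ratings 1-10; the values below are invented for illustration only)
ratings = {
    "lastube":      (8, 7, 6),
    "power supply": (7, 4, 3),
    "lens":         (5, 5, 4),
    "chiller":      (6, 3, 4),
    "mirror galvo": (6, 4, 5),
}

rpn = {name: s * o * d for name, (s, o, d) in ratings.items()}
for name, value in sorted(rpn.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:12s} RPN = {value}")
```

Components at the top of the ranking are the candidates for time-directed or failure-finding tasks.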
NASA Astrophysics Data System (ADS)
Yao, Hua-Dong; Davidson, Lars
2018-03-01
We investigate the interior noise caused by turbulent flow past a generic side-view mirror. A rectangular glass window is placed downstream of the mirror. The window vibration is excited by the surface pressure fluctuations and emits the interior noise into a cuboid cavity. The turbulent flow is simulated using a compressible large eddy simulation method. The window vibration and interior noise are predicted with a finite element method. The wavenumber-frequency spectra of the surface pressure fluctuations are analyzed. The spectra exhibit new features that cannot be explained by the Chase model for turbulent boundary layers. The spectra contain a minor hydrodynamic domain in addition to the hydrodynamic domain caused by the main convection of the turbulent boundary layer. The minor domain results from the local convection of the recirculating flow. These domains are formed in bent elliptic shapes; the spanwise expansion of the wake is found to cause the bending. Based on the wavenumber-frequency relationships in the spectra, the surface pressure fluctuations are decomposed into hydrodynamic and acoustic components. The acoustic component is more efficient than the hydrodynamic component in generating interior noise. However, the hydrodynamic component is still dominant at low frequencies, below approximately 250 Hz, since it has low transmission losses near the hydrodynamic critical frequency of the window. The structural modes of the window determine the low-frequency interior tonal noise. The combination of the mode shapes of the window and cavity greatly affects the magnitude distribution of the interior noise.
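A wavenumber-frequency spectrum of the kind analyzed above is a 2-D Fourier transform of the wall pressure over space and time; a disturbance convecting at speed Uc shows up as a ridge along omega = Uc * k. A self-contained sketch with a synthetic single-wave pressure field (all values notional):

```python
import numpy as np

# Synthetic convecting wall-pressure field: p(x, t) = cos(k0*x - w0*t),
# a disturbance convecting at Uc = w0 / k0 = 25 m/s (values notional).
nx, nt = 64, 256
dx, dt = 0.01, 1e-4
k0, w0 = 2 * np.pi * 25.0, 2 * np.pi * 625.0   # rad/m, rad/s (on-grid)
x = np.arange(nx) * dx
t = np.arange(nt) * dt
p = np.cos(k0 * x[:, None] - w0 * t[None, :])

# Wavenumber-frequency spectrum: 2-D FFT over space and time
P = np.fft.fftshift(np.fft.fft2(p))
spec = np.abs(P) ** 2
k = np.fft.fftshift(np.fft.fftfreq(nx, dx)) * 2 * np.pi
w = np.fft.fftshift(np.fft.fftfreq(nt, dt)) * 2 * np.pi

# The spectral peak sits on the convective ridge |omega| = Uc * |k|
i, j = np.unravel_index(np.argmax(spec), spec.shape)
print(f"peak at k = {k[i]:.1f} rad/m, omega = {w[j]:.1f} rad/s")
print(f"implied convection speed ~ {abs(w[j] / k[i]):.1f} m/s")
```

In the real analysis, acoustic and hydrodynamic components are separated by which wavenumber-frequency region their energy occupies.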
NASA Astrophysics Data System (ADS)
Sang, Z. X.; Huang, J. Q.; Yan, J.; Du, Z.; Xu, Q. S.; Lei, H.; Zhou, S. X.; Wang, S. C.
2017-11-01
Protection is an essential part of any power device, especially those in the power grid, as failures may cause great losses to society. This paper presents a study of the voltage and current abnormalities in the power electronic devices of a Distribution Electronic Power Transformer (D-EPT) during failures of its switching components, together with the operational principles of its 10 kV rectifier, 10 kV/400 V DC-DC converter, and 400 V inverter. From a discussion of the effects of voltage and current distortion, the fault characteristics and a fault diagnosis method for the D-EPT are derived.
NASA Technical Reports Server (NTRS)
Dyall, Kenneth G.; Faegri, Knut, Jr.
1990-01-01
The paper investigates bounds failure in calculations using Gaussian basis sets for the solution of the one-electron Dirac equation for the 2p1/2 state of Hg(79+). It is shown that bounds failure indicates inadequacies in the basis set, both in terms of the exponent range and the number of functions. It is also shown that overrepresentation of the small component space may lead to unphysical results. It is concluded that it is important to use matched large and small component basis sets with an adequate size and exponent range.
Efficient 3-D finite element failure analysis of compression loaded angle-ply plates with holes
NASA Technical Reports Server (NTRS)
Burns, S. W.; Herakovich, C. T.; Williams, J. G.
1987-01-01
Finite element stress analysis and the tensor polynomial failure criterion predict that failure always initiates at the interface between layers on the hole edge for notched angle-ply laminates loaded in compression. The angular location of initial failure is a function of the fiber orientation in the laminate. The dominant stress components initiating failure are shear. It is shown that approximate symmetry can be used to reduce the computer resources required for the case of uniaxial loading.
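The tensor polynomial criterion referred to above is commonly written in its Tsai-Wu plane-stress form. A sketch with notional lamina strengths (not the study's values), showing how a shear-dominated stress state drives the failure index past 1:

```python
import math

# Tensor polynomial (Tsai-Wu) failure criterion for a single lamina in
# plane stress. Strengths below are notional carbon/epoxy values (MPa).
XT, XC = 1500.0, 1200.0   # fiber-direction tensile / compressive strength
YT, YC = 50.0, 200.0      # transverse tensile / compressive strength
S = 70.0                  # in-plane shear strength

F1, F2 = 1/XT - 1/XC, 1/YT - 1/YC
F11, F22, F66 = 1/(XT*XC), 1/(YT*YC), 1/S**2
F12 = -0.5 * math.sqrt(F11 * F22)   # common default interaction term

def tsai_wu(s1, s2, t12):
    """Failure index: >= 1.0 predicts lamina failure."""
    return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2
            + F66*t12**2 + 2*F12*s1*s2)

print(f"index at (100, 10, 80) MPa = {tsai_wu(100, 10, 80):.2f}")  # fails
print(f"index at (100, 10, 20) MPa = {tsai_wu(100, 10, 20):.2f}")  # safe
```

In the finite element analysis, the index is evaluated at each interface point and the location where it first reaches 1 marks the predicted initiation site.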
DOE Office of Scientific and Technical Information (OSTI.GOV)
Irazola, L; Terron, J; Sanchez-Doblado, F
2015-06-15
Purpose: Previous measurements with Bonner spheres [1] showed that normalized neutron spectra are equal for the majority of existing linacs [2]. This information, in addition to the thermal neutron fluences obtained in the characterization procedure [3], would allow estimation of neutron doses accidentally received by exposed workers, without the need for an extra experimental measurement. Methods: Monte Carlo (MC) simulations demonstrated that the thermal neutron fluence distribution inside the bunker is quite uniform, as a consequence of multiple scattering in the walls [4]. Although the inverse square law is approximately valid for the fast component, a more precise calculation can be obtained with a generic fast-fluence distribution map around the linac, from MC simulations [4]. Thus, measurements of thermal neutron fluences performed during the characterization procedure [3], together with a generic unitary spectrum [2], allow estimation of the total neutron fluences and H*(10) at any point [5]. As an example, we compared estimations with Bonner sphere measurements [1] for two points in five facilities: 3 Siemens (15-23 MV), Elekta (15 MV) and Varian (15 MV). Results: Thermal neutron fluences obtained from characterization are within (0.2-1.6)x10^6 cm^-2 Gy^-1 for the five studied facilities. This implies ambient dose equivalents ranging from 0.27-2.01 mSv/Gy at 50 cm from the isocenter and 0.03-0.26 mSv/Gy at the detector location, with an average deviation of +/-12.1% with respect to Bonner measurements. Conclusion: The good results obtained demonstrate that neutron fluence and H*(10) can be estimated based on: (a) the characterization procedure established for patient risk estimation in each facility, (b) a generic unitary neutron spectrum and (c) a generic MC map of the fast-component distribution. [1] Radiat. Meas. (2010) 45:1391-1397; [2] Phys. Med. Biol. (2012) 57:6167-6191; [3] Med. Phys. (2015) 42:276-281; [4] IFMBE (2012) 39:1245-1248; [5] ICRU Report 57 (1998)
The Geoinformatica free and open source software stack
NASA Astrophysics Data System (ADS)
Jolma, A.
2012-04-01
The Geoinformatica free and open source software (FOSS) stack is based mainly on three established FOSS components, namely GDAL, GTK+, and Perl. GDAL provides access to a very large selection of geospatial data formats and data sources, a generic geospatial data model, and a large collection of geospatial analytical and processing functionality. GTK+ and the Cairo graphics library provide generic graphics and graphical user interface capabilities. Perl is a programming language, for which there is a very large set of FOSS modules for a wide range of purposes and which can be used as an integrative tool for building applications. In the Geoinformatica stack, data storages such as the FOSS RDBMS PostgreSQL with its geospatial extension PostGIS can be used below the three above-mentioned components. The top layer of Geoinformatica consists of a C library and several Perl modules. The C library comprises a general-purpose raster algebra library, hydrological terrain analysis functions, and visualization code. The Perl modules define a generic visualized geospatial data layer and subclasses for raster and vector data and graphs. The hydrological terrain functions are already rather old and suffer, for example, from the requirement of in-memory rasters. Newer research conducted using the platform includes basic geospatial simulation modeling, visualization of ecological data, linking with a Bayesian network engine for spatial risk assessment in coastal areas, and developing standards-based distributed water resources information systems on the Internet. The Geoinformatica stack constitutes a platform for geospatial research, which is targeted towards custom analytical tools, prototyping and linking with external libraries. Writing custom analytical tools is supported by the Perl language and the large collection of tools that are available especially in GDAL and Perl modules.
Prototyping is supported by the GTK+ library, the GUI tools, and the support for object-oriented programming in Perl. New feature types, geospatial layer classes, and tools can be defined as extensions with specific features, used, and studied. Linking with external libraries is possible using the Perl foreign function interface tools or generic tools such as SWIG. We are interested in implementing and testing links between Geoinformatica and existing or new, more specific hydrological FOSS.
T-MATS Toolbox for the Modeling and Analysis of Thermodynamic Systems
NASA Technical Reports Server (NTRS)
Chapman, Jeffryes W.
2014-01-01
The Toolbox for the Modeling and Analysis of Thermodynamic Systems (T-MATS) is a MATLAB/Simulink (The MathWorks, Inc.) plug-in for creating and simulating thermodynamic systems and controls. The package contains generic parameterized components that can be combined with a variable input iterative solver and optimization algorithm to create complex system models, such as gas turbines.
Cultivating Self-Management and Leadership Skills among Hong Kong Students
ERIC Educational Resources Information Center
Lee, John Chi-Kin; Law, Edmond Hau-fai; Chun, Derek Wai-sun; Chan, Kim Nim-chi
2017-01-01
Developing generic skills is a core component of the current educational reforms in Asia, in response to global demands over the last two decades to shift towards a knowledge-based economy. The study adopted a mixed approach, using a survey and interviews to find out the views of a group of students and teachers about the effects of the school…
Critical Incident Stress Management in Schools: Mental Health Component.
ERIC Educational Resources Information Center
Tortorici Luna, Joanne M.
This manual provides a brief framework of organization that serves as a response tool for a wide spectrum of crisis circumstances encountered by schools. It is meant to be a generic guide for school teams and should be customized by each school that uses it. Even with emergency procedures in place, each crisis at a school needs to be evaluated as…
ERIC Educational Resources Information Center
Riggs, Anne E.; Kalish, Charles W.; Alibali, Martha W.
2014-01-01
In any learning situation, children must decide the level of generality with which to encode information. Cues to generality may affect children's memory for different components of a learning episode. In this research, we investigated whether one cue to generality, generic language, affects children's memory for information about social categories…
Ran, Xiang; Wang, Zhenzhen; Zhang, Zhijun; Pu, Fang; Ren, Jinsong; Qu, Xiaogang
2016-01-11
We present a nucleic acid-controlled AgNC platform for latent fingerprint visualization. The versatile emission of aptamer-modified AgNCs was regulated by the nearby DNA regions. Multi-color images for the simultaneous visualization of fingerprints and exogenous components were successfully obtained. A quantitative detection strategy for exogenous substances in fingerprints was also established.
NASA Astrophysics Data System (ADS)
Biham, Ofer; Malcai, Ofer; Levy, Moshe; Solomon, Sorin
1998-08-01
The dynamics of generic stochastic Lotka-Volterra (discrete logistic) systems of the form w_i(t+1) = λ(t) w_i(t) + a w̄(t) - b w_i(t) w̄(t) is studied by computer simulations. The variables w_i, i = 1,...,N, are the individual system components and w̄(t) = (1/N) Σ_i w_i(t) is their average. The parameters a and b are constants, while λ(t) is randomly chosen at each time step from a given distribution. Models of this type describe the temporal evolution of a large variety of systems such as stock markets and city populations. These systems are characterized by a large number of interacting objects, and the dynamics is dominated by multiplicative processes. The instantaneous probability distribution P(w,t) of the system components w_i turns out to fulfill a Pareto power law P(w,t) ~ w^(-1-α). The time evolution of w̄(t) presents intermittent fluctuations parametrized by a Lévy-stable distribution with the same index α, showing an intricate relation between the distribution of the w_i's at a given time and the temporal fluctuations of their average.
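The multiplicative update above is straightforward to simulate. The following sketch uses assumed values for N, a, b and a uniform λ distribution (none of which are specified numerically in the abstract) to illustrate the heavy-tailed spread of the components:

```python
import numpy as np

# Sketch of the generic stochastic Lotka-Volterra update
#   w_i(t+1) = lambda(t) * w_i(t) + a * wbar(t) - b * w_i(t) * wbar(t)
# All parameter values and the uniform lambda distribution are
# illustrative assumptions, not taken from the paper.
def simulate(N=1000, steps=2000, a=0.01, b=0.01, seed=0):
    rng = np.random.default_rng(seed)
    w = np.ones(N)
    for _ in range(steps):
        lam = rng.uniform(0.9, 1.1, size=N)  # fresh multiplicative factor per component
        wbar = w.mean()
        w = lam * w + a * wbar - b * w * wbar
    return w

w = simulate()
# A heavy-tailed (Pareto-like) spread shows up as a large max-to-mean ratio
print(w.max() / w.mean())
```

With a = b, the average w̄ settles near a/b, so the additive term acts as a floor that keeps small components alive while the multiplicative noise generates the power-law tail.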
Scientific Data Management (SDM) Center for Enabling Technologies. 2007-2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ludascher, Bertram; Altintas, Ilkay
Over the past five years, our activities have both established Kepler as a viable scientific workflow environment and demonstrated its value across multiple science applications. We have published numerous peer-reviewed papers on the technologies highlighted in this short paper and have given Kepler tutorials at SC06, SC07, SC08, and SciDAC 2007. Our outreach activities have allowed scientists to learn best practices and better utilize Kepler to address their individual workflow problems. Our contributions to advancing the state of the art in scientific workflows have focused on the following areas; progress in each is described in subsequent sections. Workflow development: the development of a deeper understanding of scientific workflows "in the wild" and of the requirements for support tools that allow easy construction of complex scientific workflows. Generic workflow components and templates: the development of generic actors (i.e., workflow components and processes) which can be broadly applied to scientific problems. Provenance collection and analysis: the design of a flexible provenance collection and analysis infrastructure within the workflow environment. Workflow reliability and fault tolerance: the improvement of the reliability and fault tolerance of workflow environments.
NASA Astrophysics Data System (ADS)
Abbas, Mohammad
Recently developed methodology that provides the direct assessment of traditional thrust-based performance of aerospace vehicles in terms of entropy generation (i.e., exergy destruction) is modified for stand-alone jet engines. This methodology is applied to a specific single-spool turbojet engine configuration. A generic compressor performance map along with modeled engine component performance characterizations are utilized in order to provide comprehensive traditional engine performance results (engine thrust, mass capture, and RPM) for on- and off-design engine operation. Details of the exergy losses in engine components, across the entire engine, and in the engine wake are provided, and the associated engine performance losses are discussed. Results are provided across the engine operating envelope as defined by operational ranges of flight Mach number, altitude, and fuel throttle setting. The exergy destruction that occurs in the engine wake is shown to be dominant with respect to other losses, including all exergy losses that occur inside the engine. Specifically, the ratio of the exergy destruction rate in the wake to the exergy destruction rate inside the engine itself ranges from 1 to 2.5 across the operational envelope of the modeled engine.
Holden, Richard J; Kulanthaivel, Anand; Purkayastha, Saptarshi; Goggins, Kathryn M; Kripalani, Sunil
2017-12-01
Personas are a canonical user-centered design method increasingly used in health informatics research. Personas, empirically derived user archetypes, can be used by eHealth designers to gain a robust understanding of their target end users, such as patients. The objective was to develop biopsychosocial personas of older patients with heart failure using quantitative analysis of survey data. Data were collected using standardized surveys and medical record abstraction from 32 older adults with heart failure recently hospitalized for acute heart failure exacerbation. Hierarchical cluster analysis was performed on a final dataset of n=30. Nonparametric analyses were used to identify differences between clusters on 30 clustering variables and seven outcome variables. Six clusters were produced, ranging in size from two to eight patients per cluster. Clusters differed significantly on these biopsychosocial domains and subdomains: demographics (age, sex); medical status (comorbid diabetes); functional status (exhaustion, household work ability, hygiene care ability, physical ability); psychological status (depression, health literacy, numeracy); technology (Internet availability); healthcare system (visit by home healthcare, trust in providers); social context (informal caregiver support, cohabitation, marital status); and economic context (employment status). Tabular and narrative persona descriptions provide an easy reference guide for informatics designers. Personas development using approaches such as clustering of structured survey data is an important tool for health informatics professionals. We describe insights from our study of patients with heart failure, then recommend a generic ten-step personas development process. Methodological strengths and limitations of the study, and of personas development generally, are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
Engine materials characterization and damage monitoring by using x ray technologies
NASA Technical Reports Server (NTRS)
Baaklini, George Y.
1993-01-01
X ray attenuation measurement systems that are capable of characterizing density variations in monolithic ceramics and damage due to processing and/or mechanical testing in ceramic and intermetallic matrix composites are developed and applied. Noninvasive monitoring of damage accumulation and failure sequences in ceramic matrix composites is used during room-temperature tensile testing. This work resulted in the development of a point-scan digital radiography system and an in situ x ray material testing system. The former is used to characterize silicon carbide and silicon nitride specimens, and the latter is used to image the failure behavior of silicon-carbide-fiber-reinforced, reaction-bonded silicon nitride matrix composites. State-of-the-art x ray computed tomography is investigated to determine its capabilities and limitations in characterizing density variations of subscale engine components (e.g., a silicon carbide rotor, a silicon nitride blade, and a silicon-carbide-fiber-reinforced beta titanium matrix rod, rotor, and ring). Microfocus radiography, conventional radiography, scanning acoustic microscopy, and metallography are used to substantiate the x ray computed tomography findings. Point-scan digital radiography is a viable technique for characterizing density variations in monolithic ceramic specimens. But it is very limited and time consuming in characterizing ceramic matrix composites. Precise x ray attenuation measurements, reflecting minute density variations, are achieved by photon counting and by using microcollimators at the source and the detector. X ray computed tomography is found to be a unique x ray attenuation measurement technique capable of providing cross-sectional spatial density information in monolithic ceramics and metal matrix composites. X ray computed tomography is proven to accelerate generic composite component development. 
Radiographic evaluation before, during, and after loading shows the effect of preexisting volume flaws on the fracture behavior of composites. Results from one-, three-, five-, and eight-ply ceramic composite specimens show that x ray film radiography can monitor damage accumulation during tensile loading. Matrix cracking, fiber-matrix debonding, fiber bridging, and fiber pullout are imaged throughout the tensile loading of the specimens. In situ film radiography is found to be a practical technique for estimating interfacial shear strength between the silicon carbide fibers and the reaction-bonded silicon nitride matrix. It is concluded that pretest, in situ, and post-test x ray imaging can provide greater understanding of ceramic matrix composite mechanical behavior.
On-Board Particulate Filter Failure Prevention and Failure Diagnostics Using Radio Frequency Sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sappok, Alex; Ragaller, Paul; Herman, Andrew
The increasing use of diesel and gasoline particulate filters requires advanced on-board diagnostics (OBD) to prevent and detect filter failures and malfunctions. Early detection of upstream (engine-out) malfunctions is paramount to preventing irreversible damage to downstream aftertreatment system components. Such early detection can mitigate the failure of the particulate filter resulting in the escape of emissions exceeding permissible limits and extend the component life. However, despite best efforts at early detection and filter failure prevention, the OBD system must also be able to detect filter failures when they occur. In this study, radio frequency (RF) sensors were used to directly monitor the particulate filter state of health for both gasoline particulate filter (GPF) and diesel particulate filter (DPF) applications. The testing included controlled engine dynamometer evaluations, which characterized soot slip from various filter failure modes, as well as on-road fleet vehicle tests. The results show high sensitivity for detecting conditions resulting in soot leakage from the particulate filter, as well as potential for direct detection of structural failures including internal cracks and melted regions within the filter media itself. Furthermore, the measurements demonstrate, for the first time, the capability to employ a direct and continuous monitor of particulate filter diagnostics to both prevent and detect potential failure conditions in the field.
The Analysis of the Contribution of Human Factors to the In-Flight Loss of Control Accidents
NASA Technical Reports Server (NTRS)
Ancel, Ersin; Shih, Ann T.
2012-01-01
In-flight loss of control (LOC) is currently the leading cause of fatal accidents based on various commercial aircraft accident statistics. As the Next Generation Air Transportation System (NextGen) emerges, new contributing factors leading to LOC are anticipated. The NASA Aviation Safety Program (AvSP), along with other aviation agencies and communities, is actively developing safety products to mitigate the LOC risk. This paper discusses the approach used to construct a generic integrated LOC accident framework (LOCAF) model based on a detailed review of LOC accidents over the past two decades. The LOCAF model comprises causal factors from the domains of human factors, aircraft system component failures, and the atmospheric environment. The multiple interdependent causal factors are expressed in an Object-Oriented Bayesian belief network. In addition to predicting the likelihood of LOC accident occurrence, the system-level integrated LOCAF model is able to evaluate the impact of new safety technology products developed in AvSP. This provides valuable information to decision makers in strategizing NASA's aviation safety technology portfolio. The focus of this paper is on the analysis of human causal factors in the model, including the contributions from flight crew and maintenance workers. The Human Factors Analysis and Classification System (HFACS) taxonomy was used to develop human-related causal factors. The preliminary results from the baseline LOCAF model are also presented.
Defense Strategies for Asymmetric Networked Systems with Discrete Components.
Rao, Nageswara S V; Ma, Chris Y T; Hausken, Kjell; He, Fei; Yau, David K Y; Zhuang, Jun
2018-05-03
We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models.
Failure: A Source of Progress in Maintenance and Design
NASA Astrophysics Data System (ADS)
Chaïb, R.; Taleb, M.; Benidir, M.; Verzea, I.; Bellaouar, A.
This approach allows failure to be used as a source of progress in maintenance and design: to detect the most critical components in equipment, to determine the priority order of maintenance actions, and to direct the exploitation procedure towards the most penalizing links in the equipment, even defining the necessary changes and recommendations for future improvement. In this way the pathological behaviour of the material can be appreciated and its availability increased, even extending its lifespan and improving its future design. In this context, and in light of these points, failures are important in managing the maintenance function. Indeed, it has become important to understand the phenomena of failure and degradation of equipment in order to establish an appropriate maintenance policy for the rational use of mechanical components, and to move towards the practice of proactive maintenance [1] and maintenance at the design stage [2].
Parts and Components Reliability Assessment: A Cost Effective Approach
NASA Technical Reports Server (NTRS)
Lee, Lydia
2009-01-01
System reliability assessment is a methodology which incorporates reliability analyses performed at the parts and components level, such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA), and Fault Tree Analysis (FTA), to assess risks and perform design tradeoffs, and therefore to ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standards-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published in United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages, when the system design is still in development and hard failure data is not yet available, or when manufacturers are not contractually obliged by their customers to publish reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success in an efficient, low-cost manner under a tight schedule.
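As a concrete illustration of standards-based prediction, a parts-count style estimate sums per-part failure rates (a series-system assumption) and converts the total into mission reliability via the exponential model. The part names and rates below are hypothetical placeholders, not values from any particular handbook:

```python
import math

# Hypothetical parts-count reliability sketch (series-system assumption):
# the system failure rate is the sum of the part failure rates, and
# reliability over a mission time t follows the exponential model.
part_failure_rates = {          # failures per 1e6 hours (illustrative values)
    "capacitor": 0.5,
    "relay": 2.0,
    "connector": 0.1,
    "microcontroller": 1.2,
}

lambda_system = sum(part_failure_rates.values())   # failures per 1e6 h
mission_hours = 10_000
reliability = math.exp(-lambda_system * 1e-6 * mission_hours)

print(f"system failure rate: {lambda_system} per 1e6 h")
print(f"mission reliability over {mission_hours} h: {reliability:.4f}")
```

The series assumption (any part failure fails the system) is the conservative default in parts-count predictions; redundant architectures require the fuller block-diagram treatment.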
Preventing blood transfusion failures: FMEA, an effective assessment method.
Najafpour, Zhila; Hasoumi, Mojtaba; Behzadi, Faranak; Mohamadi, Efat; Jafary, Mohamadreza; Saeedi, Morteza
2017-06-30
Failure Mode and Effect Analysis (FMEA) is a method used to assess the risk of failures and harms to patients during the medical process and to identify the associated clinical issues. The aim of this study was to conduct an assessment of the blood transfusion process in a teaching general hospital, using FMEA as the method. A structured FMEA was conducted in our study, performed in 2014, and corrective actions were implemented and re-evaluated after 6 months. Sixteen 2-h sessions were held to perform FMEA on the blood transfusion process, comprising five steps: establishing the context, selecting team members, analysis of the processes, hazard analysis, and developing a risk reduction protocol for blood transfusion. Failure modes with the highest risk priority numbers (RPNs) were identified. The overall RPN scores ranged from 5 to 100, among which four failure modes were associated with RPNs over 75. The data analysis indicated that the failures with the highest RPNs were: labelling (RPN: 100), transfusion of blood or the component (RPN: 100), patient identification (RPN: 80) and sampling (RPN: 75). The results demonstrated that mis-transfusion of blood or a blood component is the most important error, which can lead to serious morbidity or mortality. Provision of training to the personnel on blood transfusion, raising awareness of hazards and appropriate preventative measures, as well as developing standard safety guidelines are essential, and must be implemented during all steps of blood and blood component transfusion.
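The risk priority numbers in such a study are products of severity, occurrence, and detectability scores. A minimal sketch follows, with hypothetical S/O/D splits chosen only so the products match the RPNs reported in the abstract (the individual scores are not published there):

```python
# Illustrative FMEA Risk Priority Number calculation:
#   RPN = severity * occurrence * detectability, each typically scored 1-10.
# The individual S/O/D scores below are hypothetical splits, not the
# study's actual scoring sheets; only the resulting RPNs match the abstract.
failure_modes = [
    ("labelling",                      10,  5, 2),
    ("transfusion of blood/component", 10, 10, 1),
    ("patient identification",          8,  5, 2),
    ("sampling",                        5,  5, 3),
]

ranked = sorted(
    ((name, s * o * d) for name, s, o, d in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)
for name, rpn in ranked:
    print(f"{name}: RPN = {rpn}")
```

Ranking by RPN is what lets an FMEA team target corrective actions at the highest-risk steps first.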
A Comparison of Functional Models for Use in the Function-Failure Design Method
NASA Technical Reports Server (NTRS)
Stock, Michael E.; Stone, Robert B.; Tumer, Irem Y.
2006-01-01
When failure analysis and prevention, guided by historical design knowledge, are coupled with product design at its conception, shorter design cycles are possible. By decreasing the design time of a product in this manner, design costs are reduced and the product will better suit the customer's needs. Prior work indicates that similar failure modes occur with products (or components) with similar functionality. To capitalize on this finding, a knowledge base of historical failure information linked to functionality is assembled for use by designers. One possible use for this knowledge base is within the Elemental Function-Failure Design Method (EFDM). This design methodology and failure analysis tool begins at conceptual design and keeps the designer cognizant of failures that are likely to occur based on the product's functionality. The EFDM offers potential improvement over current failure analysis methods, such as FMEA, FMECA, and Fault Tree Analysis, because it can be implemented hand in hand with other conceptual design steps and carried throughout a product's design cycle. These other failure analysis methods can only truly be effective after a physical design has been completed. The EFDM, however, is only as good as the knowledge base that it draws from, and therefore it is of utmost importance to develop a knowledge base that will be suitable for use across a wide spectrum of products. One fundamental question that arises in using the EFDM is: at what level of detail should functional descriptions of components be encoded? This paper explores two approaches to populating a knowledge base with actual failure occurrence information from Bell 206 helicopters. Functional models expressed at various levels of detail are investigated to determine the necessary detail for an applicable knowledge base that can be used by designers in both new designs as well as redesigns.
High-level and more detailed functional descriptions are derived for each failed component based on NTSB accident reports. To best record these data, standardized functional and failure mode vocabularies are used. Two separate function-failure knowledge bases are then created and compared. Results indicate that encoding failure data using more detailed functional models allows for a more robust knowledge base. Interestingly, however, when applying the EFDM, high-level descriptions continue to produce useful results when using the knowledge base generated from the detailed functional models.
NASA Technical Reports Server (NTRS)
Roth, J. P.
1972-01-01
Methods for the development of logic design together with algorithms for failure testing, a method for the design of logic for ultra-large-scale integration, the extension of quantum calculus to describe the functional behavior of a mechanism component by component and to compute tests for failures in the mechanism using the diagnosis algorithm, and the development of an algorithm for the multi-output two-level minimization problem are discussed.
Hainsworth, S V; Fitzpatrick, M E
2007-06-01
Forensic engineering is the application of engineering principles or techniques to the investigation of materials, products, structures or components that fail or do not perform as intended. In particular, forensic engineering can involve providing solutions to forensic problems by the application of engineering science. A criminal aspect may be involved in the investigation, but often the problems are related to negligence, breach of contract, or providing information needed in the redesign of a product to eliminate future failures. Forensic engineering may include the investigation of the physical causes of accidents or other sources of claims and litigation (for example, patent disputes). It involves the preparation of technical engineering reports, and may require giving testimony and providing advice to assist in the resolution of disputes affecting life or property. This paper reviews the principal methods available for the analysis of failed components and then gives examples of different component failure modes through selected case studies.
Reliability and availability analysis of a 10 kW@20 K helium refrigerator
NASA Astrophysics Data System (ADS)
Li, J.; Xiong, L. Y.; Liu, L. Q.; Wang, H. R.; Wang, B. M.
2017-02-01
A 10 kW@20 K helium refrigerator has been established at the Technical Institute of Physics and Chemistry, Chinese Academy of Sciences. To evaluate and improve this refrigerator's reliability and availability, a reliability and availability analysis is performed. According to the mission profile of the refrigerator, a functional analysis is performed. The failure data of the refrigerator components are collected and failure rate distributions are fitted with the software Weibull++ V10.0. A Failure Modes, Effects & Criticality Analysis (FMECA) is performed and the critical components with the highest risks are identified. The software BlockSim V9.0 is used to calculate the reliability and availability of the refrigerator. The results indicate that the compressors, turbine and vacuum pump are the critical components and key units of this refrigerator. Mitigation actions covering design, testing, maintenance and operation are proposed to reduce the major and medium risks.
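A block-diagram availability calculation of the kind BlockSim performs can be sketched for the three critical units; the MTBF/MTTR figures below are illustrative assumptions, not data from the refrigerator itself, and the units are treated as a series system:

```python
# Hedged sketch: steady-state availability of a series system from
# per-component MTBF/MTTR figures. All numbers are illustrative
# placeholders, not the refrigerator's measured failure data.
components = {
    "compressor":  {"mtbf_h": 20_000, "mttr_h": 48},
    "turbine":     {"mtbf_h": 15_000, "mttr_h": 72},
    "vacuum_pump": {"mtbf_h": 30_000, "mttr_h": 24},
}

system_availability = 1.0
for name, c in components.items():
    a = c["mtbf_h"] / (c["mtbf_h"] + c["mttr_h"])  # component availability
    system_availability *= a                        # series assumption

print(f"system availability: {system_availability:.4f}")
```

The multiplication step makes clear why the least-reliable, slowest-to-repair unit dominates the system figure, which is why the analysis singles out compressors, turbine and vacuum pump for mitigation.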
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perkins, M P; Ong, M M; Crull, E W
2009-07-21
During lightning strikes, buildings and other structures can act as imperfect Faraday cages, enabling electromagnetic fields to develop inside the facilities. Some equipment stored inside these facilities may unfortunately act as antenna systems. It is important to have techniques developed to analyze how much voltage, current, or energy dissipation may develop over valuable components. In this discussion we demonstrate the modeling techniques used to accurately analyze a generic missile-type weapons system as it goes through different stages of assembly. As work is performed on weapons systems, detonator cables can become exposed. These cables will form different monopole and loop type antenna systems that must be analyzed to determine the voltages developed over the detonator regions. Due to the low frequencies of lightning pulses, a lumped-element circuit model can be developed to help analyze the different antenna configurations. We show an example of how numerical modeling can be used to develop the lumped-element circuit models used to calculate voltage, current, or energy dissipated over the detonator region of a generic missile-type weapons system.
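A minimal numerical version of the lumped-element idea: drive a series R-L loop (standing in for a hypothetical exposed detonator-cable loop) with a standard double-exponential lightning stroke current and integrate for the voltage developed across the detonator resistance. All component values below are illustrative assumptions, not values from this study.

```python
import numpy as np

# Double-exponential lightning stroke current I(t) = I0*(exp(-a t) - exp(-b t));
# amplitude and time constants are illustrative, not from the report.
I0, a, b = 30e3, 1.4e4, 6.0e6        # A, 1/s, 1/s

M = 1e-6     # H, assumed mutual inductance between stroke path and cable loop
L = 5e-6     # H, assumed loop self-inductance
R = 10.0     # ohm, assumed detonator-region resistance

dt = 1e-8
t = np.arange(0.0, 2e-4, dt)
dIdt = I0 * (-a * np.exp(-a * t) + b * np.exp(-b * t))

# Loop equation: L di/dt + R i = M dI/dt, integrated with explicit Euler.
i = np.zeros_like(t)
for k in range(t.size - 1):
    i[k + 1] = i[k] + dt * (M * dIdt[k] - R * i[k]) / L

v_det = R * i                        # voltage over the detonator region
print(f"peak detonator voltage ~ {v_det.max():.0f} V")
```

In a real assessment, M, L, and R would come from a numerical field model of the actual cable geometry; the lumped model is what makes sweeping assembly configurations cheap.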
Quality of Life for Saudi Patients With Heart Failure: A Cross-Sectional Correlational Study.
AbuRuz, Mohannad Eid; Alaloul, Fawwaz; Saifan, Ahmed; Masa'deh, Rami; Abusalem, Said
2015-06-25
Heart failure is a major public health issue and a growing concern in developing countries, including Saudi Arabia. Most related research was conducted in Western cultures and may have limited applicability for individuals in Saudi Arabia. Thus, this study assessed the quality of life of Saudi patients with heart failure. A cross-sectional correlational design was used with a convenience sample of 103 patients with heart failure. Data were collected using the Short Form-36 and the Medical Outcomes Study-Social Support Survey. Overall, the patients' scores were low for all domains of quality of life. The Physical Component Summary and Mental Component Summary mean ± SD scores were 36.7±12.4 and 48.8±6.5, respectively, indicating poor quality of life. Left ventricular ejection fraction was the strongest predictor of both the physical and mental summaries. Identifying factors that impact quality of life for Saudi heart failure patients is important in identifying and meeting their physical and psychosocial needs.
Modeling joint restoration strategies for interdependent infrastructure systems.
Zhang, Chao; Kong, Jingjing; Simonovic, Slobodan P
2018-01-01
Life in the modern world depends on multiple critical services provided by infrastructure systems, which are interdependent at multiple levels. To effectively respond to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction processes are presented. Both models consider failure types, infrastructure operating rules, and interdependencies among systems. Second, an optimization model for determining an optimal joint restoration strategy at the infrastructure component level, by minimizing the economic loss from the infrastructure failures, is proposed. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems.
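A toy single-crew version of the restoration-sequencing problem conveys the flavor of the optimization: pick the repair order of failed components that minimizes total economic loss, with each component accruing loss until its repair completes. Component names and numbers are hypothetical. For this simple cost model, the brute-force optimum coincides with the classic rule of scheduling by loss-rate-to-repair-time ratio.

```python
from itertools import permutations

# Hypothetical failed components: (name, repair time in h, economic loss rate in $/h)
failed = [("water_pump", 8.0, 500.0),
          ("feeder_line", 2.0, 900.0),
          ("substation", 5.0, 1200.0)]

def total_loss(order):
    """Loss accrued while each component waits for, then undergoes, repair."""
    t, loss = 0.0, 0.0
    for name, dur, rate in order:
        t += dur
        loss += rate * t        # component accrues loss until its repair finishes
    return loss

# Brute force over all repair orders (fine at this scale; the paper's model
# must also respect interdependencies, which this sketch omits).
best = min(permutations(failed), key=total_loss)
print([c[0] for c in best], total_loss(best))
```

Here the optimum repairs the high loss-rate, short-duration feeder line first; adding interdependencies (e.g., the pump needs the substation's power) is exactly what makes the full joint problem hard.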
Anger, hostility, and hospitalizations in patients with heart failure.
Keith, Felicia; Krantz, David S; Chen, Rusan; Harris, Kristie M; Ware, Catherine M; Lee, Amy K; Bellini, Paula G; Gottlieb, Stephen S
2017-09-01
Heart failure patients have a high hospitalization rate, and anger and hostility are associated with coronary heart disease morbidity and mortality. Using structural equation modeling, this prospective study assessed the predictive validity of anger and hostility traits for cardiovascular and all-cause rehospitalizations in patients with heart failure. 146 heart failure patients were administered the STAXI and Cook-Medley Hostility Inventory to measure anger, hostility, and their component traits. Hospitalizations were recorded for up to 3 years following baseline. Causes of hospitalizations were categorized as heart failure, total cardiac, noncardiac, and all-cause (sum of cardiac and noncardiac). Measurement models were separately fit for Anger and Hostility, followed by a Confirmatory Factor Analysis to estimate the relationship between the Anger and Hostility constructs. An Anger model consisted of State Anger, Trait Anger, Anger Expression Out, and Anger Expression In, and a Hostility model included Cynicism, Hostile Affect, Aggressive Responding, and Hostile Attribution. The latent construct of Anger did not predict any of the hospitalization outcomes, but Hostility significantly predicted all-cause hospitalizations. Analyses of individual trait components of each of the 2 models indicated that Anger Expression Out predicted all-cause and noncardiac hospitalizations, and Trait Anger predicted noncardiac hospitalizations. None of the individual components of Hostility were related to rehospitalizations or death. The construct of Hostility and several components of Anger are predictive of hospitalizations that were not specific to cardiac causes. Mechanisms common to a variety of health problems, such as self-care and risky health behaviors, may be involved in these associations.
Viscous Dissipation in One-Dimensional Quantum Liquids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matveev, K. A.; Pustilnik, M.
2017-07-20
We develop a theory of viscous dissipation in one-dimensional single-component quantum liquids at low temperatures. Such liquids are characterized by a single viscosity coefficient, the bulk viscosity. We show that for a generic interaction between the constituent particles this viscosity diverges in the zero-temperature limit. In the special case of integrable models, the viscosity is infinite at any temperature, which can be interpreted as a breakdown of the hydrodynamic description. In conclusion, our consideration is applicable to all single-component Galilean-invariant one-dimensional quantum liquids, regardless of the statistics of the constituent particles and the interaction strength.
NASA Technical Reports Server (NTRS)
Powers, L. M.; Jadaan, O. M.; Gyekenyesi, J. P.
1998-01-01
The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as in advanced turbine engine systems. Design lives for such systems can exceed 10,000 hours. The long life requirement necessitates subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this paper is to present a design methodology for predicting the lifetimes of structural components subjected to creep rupture conditions. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep strain distributions (stress relaxation). The creep life of a component is discretized into short time steps, during which the stress and strain distributions are assumed constant. The damage is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. Failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity. The corresponding time will be the creep rupture life for that component. Examples are chosen to demonstrate the Ceramics Analysis and Reliability Evaluation of Structures/CREEP (CARES/CREEP) integrated design program, which is written for the ANSYS finite element package. Depending on the component size and loading conditions, it was found that in real structures one of two competing failure modes (creep or slow crack growth) will dominate. Applications to benchmark problems and engine components are included.
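The time-stepping damage rule in this abstract can be sketched directly; the power-law rupture-life correlation and its constants below are placeholders, not the CARES/CREEP material data:

```python
def rupture_time(stress_mpa, A=1e13, n=5.0):
    """Hypothetical creep-rupture life (hours) from a power-law correlation;
    a modified Monkman-Grant criterion would play this role in CARES/CREEP."""
    return A * stress_mpa ** (-n)

def creep_life(stress_history, dt=1.0):
    """March through time steps with the stress held constant in each step,
    accumulate damage dt / t_r(stress), and fail when damage reaches unity."""
    damage, t = 0.0, 0.0
    for stress in stress_history:
        damage += dt / rupture_time(stress)
        t += dt
        if damage >= 1.0:
            return t           # creep rupture life
    return None                # component survives the analyzed history

# Constant 100 MPa: closed-form life is 1e13 * 100**-5 = 1000 h,
# which the discretized summation recovers.
life = creep_life([100.0] * 2000, dt=1.0)
print(life)
```

In the real analysis, the stress history per time step comes from the finite element solution, so stress relaxation automatically slows or speeds the damage accumulation.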
Myers, T J; Kytömaa, H K; Smith, T R
2007-04-11
Fiberglass reinforced plastic (FRP) composite materials are often used to construct tanks, piping, scrubbers, beams, grating, and other components for use in corrosive environments. While FRP typically offers superior and cost-effective corrosion resistance relative to other construction materials, the glass fibers traditionally used to provide the structural strength of the FRP can be susceptible to attack by the corrosive environment. The structural integrity of traditional FRP components in corrosive environments is usually dependent on the integrity of a corrosion-resistant barrier, such as a resin-rich layer containing corrosion-resistant glass fibers. Without adequate protection, FRP components can fail, at loads well below their design values, by an environmental stress-corrosion cracking (ESCC) mechanism when simultaneously exposed to mechanical stress and a corrosive chemical environment. Failure of these components can result in significant releases of hazardous substances into plants and the environment. In this paper, we present two case studies where fiberglass components failed due to ESCC at small chemical manufacturing facilities. As is often typical, the small chemical manufacturing facilities relied largely on FRP component suppliers to determine materials appropriate for the specific process environment and to repair damaged in-service components. We discuss the lessons learned from these incidents and the precautions companies should take when interfacing with suppliers and other parties during the specification, design, construction, and repair of FRP components in order to prevent similar failures and chemical releases from occurring in the future.
An application of artificial intelligence theory to reconfigurable flight control
NASA Technical Reports Server (NTRS)
Handelman, David A.
1987-01-01
Artificial intelligence techniques were used, along with statistical hypothesis testing and modern control theory, to help the pilot cope with the issues of information, knowledge, and capability in the event of a failure. An intelligent flight control system is being developed which utilizes knowledge of cause-and-effect relationships between all aircraft components. It will screen the information available to the pilot, supplement the pilot's knowledge, and, most importantly, utilize the remaining flight capability of the aircraft following a failure. The list of failure types the control system will accommodate includes sensor failures, actuator failures, and structural failures.
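The statistical-hypothesis-testing ingredient can be illustrated with a Wald sequential probability ratio test (SPRT) on a sensor residual stream: decide between "healthy" (zero-mean residual) and "failed" (biased residual) as samples arrive. The SPRT is a standard choice for this kind of detection, but the parameters and noise model here are textbook assumptions, not taken from the paper.

```python
import math
import random

def sprt(residuals, mu1=1.0, sigma=1.0, alpha=0.001, beta=0.001):
    """Wald SPRT on a sensor residual stream:
    H0: zero mean (healthy) vs H1: mean mu1 (failed), Gaussian noise."""
    upper = math.log((1.0 - beta) / alpha)   # cross above: declare failure
    lower = math.log(beta / (1.0 - alpha))   # cross below: declare healthy
    llr = 0.0
    for k, r in enumerate(residuals, 1):
        llr += (mu1 / sigma**2) * (r - mu1 / 2.0)  # Gaussian log-likelihood ratio
        if llr >= upper:
            return "fail", k
        if llr <= lower:
            return "healthy", k
    return "undecided", len(residuals)

random.seed(0)
healthy = [random.gauss(0.0, 1.0) for _ in range(400)]
biased = [random.gauss(2.0, 1.0) for _ in range(400)]
print(sprt(healthy), sprt(biased))
```

The appeal for flight control is that the test decides as early as the evidence allows, so the reconfiguration logic can switch to the remaining flight capability quickly after a real failure.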
Composite structural materials
NASA Technical Reports Server (NTRS)
Ansell, G. S.; Loewy, R. G.; Wiberley, S. E.
1984-01-01
Progress is reported in studies of constituent materials, composite materials, generic structural elements, processing science and technology, and maintaining long-term structural integrity. Topics discussed include: mechanical properties of high performance carbon fibers; fatigue in composite materials; experimental and theoretical studies of moisture and temperature effects on the mechanical properties of graphite-epoxy laminates and neat resins; numerical investigations of the micromechanics of composite fracture; delamination failures of composite laminates; effect of notch size on composite laminates; improved beam theory for anisotropic materials; variation of resin properties through the thickness of cured samples; numerical analysis of composite processing; heat treatment of metal matrix composites; and the RP-1 and RP-2 gliders of the sailplane project.
NASA Astrophysics Data System (ADS)
Monicke, A.; Katajisto, H.; Leroy, M.; Petermann, N.; Kere, P.; Perillo, M.
2012-07-01
For many years, layered composites have proven essential for the successful design of high-performance space structures, such as launchers or satellites. A generic cylindrical composite structure for a launcher application was optimized with respect to objectives and constraints typical for space applications. The studies included the structural stability, laminate load response and failure analyses. Several types of cylinders (with and without stiffeners) were considered and optimized using different lay-up parameterizations. Results for the best designs are presented and discussed. The simulation tools, ESAComp [1] and modeFRONTIER [2], employed in the optimization loop are elucidated and their value for the optimization process is explained.
Why Clothes Don't Fall Apart: Tension Transmission in Staple Yarns
NASA Astrophysics Data System (ADS)
Warren, Patrick B.; Ball, Robin C.; Goldstein, Raymond E.
2018-04-01
The problem of how staple yarns transmit tension is addressed within abstract models in which the Amontons-Coulomb friction laws yield a linear programming (LP) problem for the tensions in the fiber elements. We find there is a percolation transition such that above the percolation threshold the transmitted tension is in principle unbounded. We determine that the mean slack in the LP constraints is a suitable order parameter to characterize this supercritical state. We argue the mechanism is generic, and in practical terms, it corresponds to a switch from a ductile to a brittle failure mode accompanied by a significant increase in mechanical strength.
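The LP structure can be seen in a deliberately reduced chain model: tensions at points along one fibre, with the Amontons-Coulomb law capping how much tension each frictional contact can add, and the objective of maximizing the tension delivered at the end. The geometry and numbers are a toy stand-in for the paper's network model.

```python
import numpy as np
from scipy.optimize import linprog

mu = 0.3
N = np.array([2.0, 1.5, 4.0, 2.5])   # normal loads at 4 contacts (arbitrary units)
caps = mu * N                        # Amontons-Coulomb cap on tension picked up per contact
n = len(caps)                        # tensions T_0 .. T_n along the chain

# Maximize T_n  <=>  minimize -T_n
c = np.zeros(n + 1)
c[-1] = -1.0

# |T_{i+1} - T_i| <= caps[i]  ->  two inequality rows per contact
A = np.zeros((2 * n, n + 1))
b = np.zeros(2 * n)
for i in range(n):
    A[2 * i, i], A[2 * i, i + 1] = -1.0, 1.0        #   T_{i+1} - T_i <= cap
    A[2 * i + 1, i], A[2 * i + 1, i + 1] = 1.0, -1.0  # -(T_{i+1} - T_i) <= cap
    b[2 * i] = b[2 * i + 1] = caps[i]

bounds = [(0.0, 0.0)] + [(None, None)] * n          # free fibre end carries no tension
res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
print(res.x[-1])   # maximal transmitted tension = sum of the friction caps
```

In the chain the optimum is trivially the sum of the caps; in the full network model the interesting regime is the supercritical one, where the analogous LP becomes unbounded, the signature of unbounded transmitted tension noted in the abstract.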
C-Based Design Methodology and Topological Change for an Indian Agricultural Tractor Component
NASA Astrophysics Data System (ADS)
Matta, Anil Kumar; Raju, D. Ranga; Suman, K. N. S.; Kranthi, A. S.
2018-06-01
The failure of tractor components and their replacement has now become very common in India because of re-cycling, re-sale, and duplication. To overcome the problem of failure, we propose a design methodology for topological change, co-simulated with software tools. In the proposed design methodology, the designer checks Paxial, Pcr, Pfailure, and τ by hand calculation, from which refined topological changes of the R.S. arm are derived. We explain several techniques employed in the component for the reduction and removal of rib material to change the center of gravity and centroid point, using SystemC for mixed-level simulation and faster topological changes. The design process in SystemC can be compiled and executed with the TURBO C7 software. The modified component is developed in ProE and analyzed in ANSYS. The topologically changed component, with a 120 × 4.75 × 32.5 mm slot at the center, showed greater effectiveness than the original component.
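The hand checks named in the abstract (axial capacity and critical buckling load) follow standard strength-of-materials formulas. Below is a sketch with placeholder section properties; none of these dimensions, material values, or loads are the actual R.S. arm data.

```python
import math

# Illustrative strut: steel, rectangular section (all values are placeholders).
E = 200e9              # Pa, Young's modulus
b, h = 0.032, 0.00475  # m, section width and thickness
L = 0.5                # m, effective length
K = 1.0                # end-condition factor (pinned-pinned)

I = b * h**3 / 12.0                      # second moment of area, weak axis
P_cr = math.pi**2 * E * I / (K * L)**2   # Euler critical buckling load

A = b * h
P_axial = 1e3                            # N, assumed applied axial load
sigma = P_axial / A                      # axial stress
print(f"P_cr = {P_cr:.0f} N, axial stress = {sigma / 1e6:.1f} MPa, "
      f"buckling margin = {P_cr / P_axial:.2f}")
```

A margin (P_cr / P_axial) comfortably above 1 is what the hand check looks for before the refined topology is taken into the ANSYS analysis.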
Orbital construction support equipment
NASA Technical Reports Server (NTRS)
1977-01-01
Approximately 200 separate construction steps were defined for the three solar power satellite (SPS) concepts. Detailed construction scenarios were developed which describe the specific tasks to be accomplished and identify general equipment requirements. The scenarios were used to perform a functional analysis, which resulted in the definition of 100 distinct SPS elements. These elements are the components, parts, subsystems, or assemblies upon which construction activities take place. The major SPS elements for each configuration are shown. For those elements, 300 functional requirements were identified in seven generic processes. Cumulatively, these processes encompass all functions required during SPS construction/assembly. Individually, each process is defined such that it includes a specific type of activity. Each SPS element may involve activities relating to any or all of the generic processes. The processes are listed, and examples of the requirements defined for a typical element are given.
Generic worklist handler for workflow-enabled products
NASA Astrophysics Data System (ADS)
Schmidt, Joachim; Meetz, Kirsten; Wendler, Thomas
1999-07-01
Workflow management (WfM) is an emerging field of medical information technology. It appears to be a promising key technology for modeling, optimizing, and automating processes, for the sake of improved efficiency, reduced costs, and improved patient care. The application of WfM concepts requires the standardization of architectures and interfaces. A component of central interest proposed in this report is a generic work list handler: a standardized interface between a workflow enactment service and an application system. Application systems with embedded work list handlers will be called 'workflow-enabled application systems'. In this paper we discuss the functional requirements of work list handlers, as well as their integration into workflow architectures and interfaces. To lay the foundation for this specification, basic workflow terminology, the fundamentals of workflow management, and, later in the paper, the available standards as defined by the Workflow Management Coalition are briefly reviewed.
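The handler's contract (the enactment service offers work items; the application views, claims, and completes them) can be made concrete with a minimal in-memory sketch. The class and method names are invented for illustration; the WfMC interface specifications define the real contract.

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    item_id: str
    activity: str
    state: str = "offered"        # offered -> claimed -> completed

class WorklistHandler:
    """Minimal generic work list handler between a workflow engine and an application."""

    def __init__(self):
        self._items = {}

    def offer(self, item_id, activity):
        """Engine side: publish a work item to the list."""
        self._items[item_id] = WorkItem(item_id, activity)

    def worklist(self):
        """Application side: view items still available for claiming."""
        return [w for w in self._items.values() if w.state == "offered"]

    def claim(self, item_id):
        """Application side: take exclusive ownership of an offered item."""
        w = self._items.get(item_id)
        if w and w.state == "offered":
            w.state = "claimed"
            return w
        return None               # unknown, or already taken

    def complete(self, item_id):
        """Application side: report the claimed item as done."""
        w = self._items.get(item_id)
        if w and w.state == "claimed":
            w.state = "completed"
            return True
        return False

h = WorklistHandler()
h.offer("wi-1", "review-ct-study")
h.offer("wi-2", "schedule-followup")
w = h.claim("wi-1")
h.complete("wi-1")
print(len(h.worklist()), w.state)
```

The claim step is what prevents two applications from performing the same activity, which is the core concurrency requirement a standardized handler must satisfy.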
Qualification and issues with space flight laser systems and components
NASA Astrophysics Data System (ADS)
Ott, Melanie N.; Coyle, D. B.; Canham, John S.; Leidecker, Henning W., Jr.
2006-02-01
The art of flight-quality solid-state laser development is still relatively young, and much is still unknown regarding the best procedures, components, and packaging required for achieving the maximum possible lifetime and reliability when deployed in the harsh space environment. One of the most important issues is the limited and unstable supply of quality, high-power diode arrays with significant technological heritage and market lifetime. Since Spectra Diode Labs Inc. ended their involvement in the pulsed array business in the late 1990's, there has been a flurry of activity from other manufacturers, but little effort focused on flight quality production. This forces NASA, inevitably, to examine the use of commercial parts to enable space flight laser designs. System-level issues such as power cycling, operational derating, duty cycle, and contamination risks to other laser components are some of the more significant unknown, if unquantifiable, parameters that directly affect transmitter reliability. Designs and processes can be formulated for the system and the components (including thorough modeling) to mitigate risk based on the known failure modes, as well as lessons learned that GSFC has collected over the past ten years of space flight operation of lasers. In addition, knowledge of the potential failure modes related to the system and the components themselves can allow the qualification testing to be done in an efficient yet effective manner. Careful test plan development, coupled with physics-of-failure knowledge, will enable cost-effective qualification of commercial technology. Presented here are lessons learned from space flight experience, a brief synopsis of known potential failure modes, mitigation techniques, and options for testing from the system level to the component level.
X-33 LH2 Tank Failure Investigation Findings
NASA Technical Reports Server (NTRS)
Niedermeyer, Mindy; Clinton, R. G., Jr. (Technical Monitor)
2000-01-01
This presentation focuses on the tank history, test objectives, failure description, investigation, and conclusions. The test objectives include verifying structural integrity at 105% of the expected flight load limit while varying the following parameters: cryogenic temperature, internal pressure, and mechanical loading. The failure description covers the structural component of the aft body, the quad-lobe design, and the sandwich (honeycomb graphite-epoxy) construction.
Microscopic observations during longitudinal compression loading of single pulp fibers
Irving B. Sachs
1986-01-01
Paperboard components (linerboard and corrugating medium) fail in edgewise compression because of failure of single fibers, as well as fiber-to-fiber bonds. While fiber-to-fiber-bond failure has been studied extensively, little is known about the longitudinal compression failure of a single fiber. In this study, surface alterations on single loblolly pine kraft pulp...
Bisset, S A; Knight, J S; Bouchet, C L G
2014-02-24
A multiplex PCR-based method was developed to overcome the limitations of microscopic examination as a means of identifying individual infective larvae from the wide range of strongylid parasite species commonly encountered in sheep in mixed sheep-cattle grazing situations in New Zealand. The strategy employed targets unique species-specific sequence markers in the second internal transcribed spacer (ITS-2) region of ribosomal DNA of the nematodes and utilises individual larval lysates as reaction templates. The basic assay involves two sets of reactions designed to target the ten strongylid species most often encountered in ovine faecal cultures under New Zealand conditions (viz. Haemonchus contortus, Teladorsagia circumcincta, Trichostrongylus axei, Trichostrongylus colubriformis, Trichostrongylus vitrinus, Cooperia curticei, Cooperia oncophora, Nematodirus spathiger, Chabertia ovina, and Oesophagostomum venulosum). Five species-specific primers, together with a pair of "generic" (conserved) primers, are used in each of the reactions. Two products are generally amplified, one by the generic primer pair regardless of species (providing a positive PCR control) and the other (whose size is indicative of the species present) by the appropriate species-specific primer in combination with one or other of the generic primers. If necessary, any larvae not identified by these reactions can subsequently be tested using primers designed specifically to detect those species less frequently encountered in ovine faecal cultures (viz. Ostertagia ostertagi, Ostertagia leptospicularis, Cooperia punctata, Nematodirus filicollis, and Bunostomum trigonocephalum). 
Results of assays undertaken on >5500 nematode larvae cultured from lambs on 16 different farms distributed throughout New Zealand indicated that positive identifications were initially obtained for 92.8% of them, while a further 4.4% of reactions gave a generic but no visible specific product and 2.8% gave no discernible PCR products (indicative of insufficient or poor quality DNA template). Of the reactions which yielded only generic products, 91% gave positive identifications in an assay re-run, resulting in a failure rate of just ∼0.4% for reactions containing amplifiable template. Although the method was developed primarily to provide a reliable way to identify individual strongylid larvae for downstream molecular applications, it potentially has a variety of other research and practical applications which are not readily achievable at present using other methods.
Kinetics of Hole Nucleation in Biomembrane Rupture
Evans, Evan; Smith, Benjamin A
2011-01-01
The core component of a biological membrane is a fluid-lipid bilayer held together by interfacial-hydrophobic and van der Waals interactions, which are balanced for the most part by acyl chain entropy confinement. If biomembranes are subjected to persistent tensions, an unstable (nanoscale) hole will emerge at some time to cause rupture. Because of the large energy required to create a hole, thermal activation appears to be requisite for initiating a hole and the activation energy is expected to depend significantly on mechanical tension. Although models exist for the kinetic process of hole nucleation in tense membranes, studies of membrane survival have failed to cover the ranges of tension and lifetime needed to critically examine nucleation theory. Hence, rupturing giant (~20 μm) membrane vesicles ultra-slowly to ultra-quickly with slow to fast ramps of tension, we demonstrate a method to directly quantify kinetic rates at which unstable holes form in fluid membranes, at the same time providing a range of kinetic rates from < 0.01 s−1 to > 100 s−1. Measuring lifetimes of many hundreds of vesicles, each tensed by precision control of micropipet suction, we have determined the rates of failure for vesicles made from several synthetic phospholipids plus 1:1 mixtures of phospho- and sphingo-lipids with cholesterol, all of which represent prominent constituents of eukaryotic cell membranes. Plotted on a logarithmic scale, the failure rates for vesicles are found to rise dramatically with increase of tension. Converting the experimental profiles of kinetic rates into changes of activation energy versus tension, we show that the results closely match expressions for thermal activation derived from a combination of meso-scale theory and molecular-scale simulations of hole formation. 
Moreover, we demonstrate a generic approach to transform analytical fits of activation energies obtained from rupture experiments into energy landscapes characterizing the process of hole nucleation along the reaction coordinate defined by hole size. PMID:21966242
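The tension dependence described above can be illustrated with a toy thermally activated rate law, k(τ) = k₀·exp(−E(τ)/k_BT). The functional form and every parameter value below are assumptions for illustration, not the paper's fitted landscape:

```python
import math

# Hypothetical thermally activated hole-nucleation rate with a toy
# activation energy that decreases linearly with tension (in units of kT).
# All parameter values are illustrative assumptions.
def nucleation_rate(tension, k0=1e6, e0=25.0, alpha=2.0):
    """Rate (1/s) at a given tension; activation energy e0 - alpha*tension (kT)."""
    activation_energy = max(e0 - alpha * tension, 0.0)
    return k0 * math.exp(-activation_energy)

# rates rise steeply with tension, spanning the <0.01 to >100 1/s range
for tau in (2.0, 5.0, 8.0):
    print(tau, nucleation_rate(tau))
```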
Improving online risk assessment with equipment prognostics and health monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coble, Jamie B.; Liu, Xiaotong; Briere, Chris
The current approach to evaluating the risk of nuclear power plant (NPP) operation relies on static probabilities of component failure, which are based on industry experience with the existing fleet of nominally similar light water reactors (LWRs). As the nuclear industry looks to advanced reactor designs that feature non-light water coolants (e.g., liquid metal, high temperature gas, molten salt), this operating history is not available. Many advanced reactor designs use advanced components, such as electromagnetic pumps, that have not been used in the US commercial nuclear fleet. Given the lack of rich operating experience, we cannot accurately estimate the evolving probability of failure for basic components to populate the fault trees and event trees that typically comprise probabilistic risk assessment (PRA) models. Online equipment prognostics and health management (PHM) technologies can bridge this gap to estimate the failure probabilities for components under operation. The enhanced risk monitor (ERM) incorporates equipment condition assessment into the existing PRA and risk monitor framework to provide accurate and timely estimates of operational risk.
Vibration detection of component health and operability
NASA Technical Reports Server (NTRS)
Baird, B. C.
1975-01-01
In order to prevent catastrophic failure and eliminate unnecessary periodic maintenance of environmental control system components in the shuttle orbiter program, some means of detecting incipient failure in these components is required. The utilization of vibrational/acoustic phenomena as one of the principal physical parameters on which to base the design of this instrumentation was investigated. Baseline vibration/acoustic data were collected from three aircraft-type fans and two aircraft-type pumps over a frequency range from a few hertz to greater than 3000 kHz. The baseline data included spectrum analysis of the baseband vibration signal, spectrum analysis of the detected high-frequency bandpass acoustic signal, and amplitude distribution of the high-frequency bandpass acoustic signal. A total of eight bearing defects and two unbalancings were introduced into the five test items. All defects were detected by at least one of a set of vibration/acoustic parameters with a margin of at least 2:1 over the worst-case baseline. The design of a portable instrument using this set of vibration/acoustic parameters for detecting incipient failures in environmental control system components is described.
Comparison between four dissimilar solar panel configurations
NASA Astrophysics Data System (ADS)
Suleiman, K.; Ali, U. A.; Yusuf, Ibrahim; Koko, A. D.; Bala, S. I.
2017-12-01
Several studies on photovoltaic systems have focused on how such systems operate and the energy required to operate them; little attention has been paid to their configurations, the modeling of mean time to system failure, availability, cost-benefit analysis, and comparisons of parallel and series-parallel designs. In this research work, four system configurations were studied. Configuration I consists of two sub-components arranged in parallel with 24 V each, configuration II consists of four sub-components arranged logically in parallel with 12 V each, configuration III consists of four sub-components arranged in series-parallel with 8 V each, and configuration IV has six sub-components with 6 V each arranged in series-parallel. Comparative analysis was made using the Chapman-Kolmogorov method. Explicit expressions for the mean time to system failure, steady-state availability, and cost-benefit analysis were derived for the comparison. A ranking method was used to determine the optimal configuration of the systems. Analytical and numerical solutions for system availability and mean time to system failure were determined, and it was found that configuration I is the optimal configuration.
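For intuition about such comparisons, the mean time to system failure of simple redundancy layouts can be sketched for independent exponential components; the failure rate and layouts below are illustrative assumptions, not the paper's configurations:

```python
# MTTF sketches for i.i.d. exponential components with failure rate lam.
def mttf_parallel(n, lam):
    """n identical units in parallel: MTTF = (1/lam) * sum_{k=1..n} 1/k."""
    return sum(1.0 / k for k in range(1, n + 1)) / lam

def mttf_series(rates):
    """Series system of independent exponential components fails at the rate sum."""
    return 1.0 / sum(rates)

lam = 0.01  # assumed failures per hour
print(round(mttf_parallel(2, lam), 1))  # two units in parallel: 150.0 h
print(round(mttf_parallel(4, lam), 1))  # four units in parallel: ~208.3 h
```

Note the diminishing return of added parallel redundancy: doubling the unit count from two to four raises the MTTF by less than 40%.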
Probabilistic finite elements for fracture and fatigue analysis
NASA Technical Reports Server (NTRS)
Liu, W. K.; Belytschko, T.; Lawrence, M.; Besterfield, G. H.
1989-01-01
The fusion of the probabilistic finite element method (PFEM) and reliability analysis for probabilistic fracture mechanics (PFM) is presented. A comprehensive method for determining the probability of fatigue failure for curved crack growth was developed. The criterion for failure, or performance function, is stated as: the fatigue life of a component must exceed the service life of the component; otherwise failure will occur. An enriched element that has the near-crack-tip singular strain field embedded in the element is used to formulate the equilibrium equation and solve for the stress intensity factors at the crack-tip. Performance and accuracy of the method are demonstrated on a classical mode 1 fatigue problem.
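The failure criterion above (fatigue life must exceed service life) lends itself to a simple Monte Carlo illustration; the lognormal fatigue-life model and all numbers below are assumptions for the sketch, not the paper's PFEM formulation:

```python
import random

# Monte Carlo estimate of P(failure) under the criterion g = N_f - N_s < 0,
# with an assumed lognormal fatigue life N_f and a fixed service life N_s.
random.seed(1)

def prob_failure(mu=11.0, sigma=0.5, service_life=40000.0, trials=100_000):
    """Fraction of sampled fatigue lives that fall short of the service life."""
    fails = sum(1 for _ in range(trials)
                if random.lognormvariate(mu, sigma) < service_life)
    return fails / trials

print(round(prob_failure(), 3))  # roughly 0.21 with these assumed parameters
```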
Generic Amplicon Deep Sequencing to Determine Ilarvirus Species Diversity in Australian Prunus
Kinoti, Wycliff M.; Constable, Fiona E.; Nancarrow, Narelle; Plummer, Kim M.; Rodoni, Brendan
2017-01-01
The distribution of Ilarvirus species populations amongst 61 Australian Prunus trees was determined by next generation sequencing (NGS) of amplicons generated using a genus-based generic RT-PCR targeting a conserved region of the Ilarvirus RNA2 component that encodes the RNA dependent RNA polymerase (RdRp) gene. Presence of Ilarvirus sequences in each positive sample was further validated by Sanger sequencing of cloned amplicons of regions of each of RNA1, RNA2 and/or RNA3 that were generated by species specific PCRs and by metagenomic NGS. Prunus necrotic ringspot virus (PNRSV) was the most frequently detected Ilarvirus, occurring in 48 of the 61 Ilarvirus-positive trees and Prune dwarf virus (PDV) and Apple mosaic virus (ApMV) were detected in three trees and one tree, respectively. American plum line pattern virus (APLPV) was detected in three trees and represents the first report of APLPV detection in Australia. Two novel and distinct groups of Ilarvirus-like RNA2 amplicon sequences were also identified in several trees by the generic amplicon NGS approach. The high read depth from the amplicon NGS of the generic PCR products allowed the detection of distinct RNA2 RdRp sequence variant populations of PNRSV, PDV, ApMV, APLPV and the two novel Ilarvirus-like sequences. Mixed infections of ilarviruses were also detected in seven Prunus trees. Sanger sequencing of specific RNA1, RNA2, and/or RNA3 genome segments of each virus and total nucleic acid metagenomics NGS confirmed the presence of PNRSV, PDV, ApMV and APLPV detected by RNA2 generic amplicon NGS. However, the two novel groups of Ilarvirus-like RNA2 amplicon sequences detected by the generic amplicon NGS could not be associated to the presence of sequence from RNA1 or RNA3 genome segments or full Ilarvirus genomes, and their origin is unclear. 
This work highlights the sensitivity of genus-specific amplicon NGS in detection of virus sequences and their distinct populations in multiple samples, and the need for a standardized approach to accurately determine what constitutes an active, viable virus infection after detection by molecular based methods. PMID:28713347
A Summary of Taxonomies of Digital System Failure Modes Provided by the DigRel Task Group
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu T. L.; Yue M.; Postma, W.
2012-06-25
Recently, the CSNI directed WGRisk to set up a task group called DIGREL to initiate a new task on developing a taxonomy of failure modes of digital components for the purposes of PSA. It is an important step towards standardized digital I&C reliability assessment techniques for PSA. The objective of this paper is to provide a comparison of the failure mode taxonomies provided by the participants. The failure modes are classified in terms of their levels of detail. Software and hardware failure modes are discussed separately.
A failure management prototype: DR/Rx
NASA Technical Reports Server (NTRS)
Hammen, David G.; Baker, Carolyn G.; Kelly, Christine M.; Marsh, Christopher A.
1991-01-01
This failure management prototype performs failure diagnosis and recovery management of hierarchical, distributed systems. The prototype, which evolved from a series of previous prototypes following a spiral model for development, focuses on two functions: (1) the diagnostic reasoner (DR) performs integrated failure diagnosis in distributed systems; and (2) the recovery expert (Rx) develops plans to recover from the failure. Issues related to expert system prototype design and the previous history of this prototype are discussed. The architecture of the current prototype is described in terms of the knowledge representation and functionality of its components.
Bian, Boyang; Kelton, Christina M L; Guo, Jeff J; Wigle, Patricia R
2010-01-01
Angiotensin-converting enzyme (ACE) inhibitors and angiotensin receptor blockers (ARBs) are widely prescribed for the treatment of hypertension and heart failure, as well as for kidney disease prevention in patients with diabetes mellitus and the management of patients after myocardial infarction. To (a) describe ACE inhibitor and ARB utilization and spending in the Medicaid fee-for-service program from 1991 through 2008, and (b) estimate the potential cost savings for the collective Medicaid programs from a higher ratio of generic ACE inhibitor utilization. A retrospective, descriptive analysis was performed using the National Summary Files from the Medicaid State Drug Utilization Data, which are composed of pharmacy claims that are subject to federally mandated rebates from pharmaceutical manufacturers. For the years 1991-2008, quarterly claim counts and expenditures were calculated by summing data for individual ACE inhibitors and ARBs. Quarterly per-claim expenditure as a proxy for drug price was computed for all brand and generic drugs. Market shares were calculated based on the number of pharmacy claims and Medicaid expenditures. In the Medicaid fee-for-service program, ACE inhibitors accounted for 100% of the claims in the combined market for ACE inhibitors and ARBs in 1991, 80.6% in 2000, and 64.7% in 2008. The Medicaid expenditure per ACE inhibitor claim dropped from $37.24 in 1991 to $24.03 in 2008 when generics accounted for 92.5% of ACE inhibitor claims; after adjusting for inflation for the period from 1991 to 2008, the real price drop was 59.2%. Brand ACE inhibitors accounted for only 7.5% of the claims in 2008 for all ACE inhibitors but 32.1% of spending; excluding the effects of manufacturer rebates, Medicaid spending would have been reduced by $28.7 million (9%) in 2008 if all ACE inhibitor claims were generic. The average price per ACE inhibitor claim in 2008 was $24.03 ($17.64 per generic claim vs. 
$103.45 per brand claim) versus $81.98 per ARB claim. If the ACE inhibitor ratio had been 75% in 2008 rather than 64.7%, the Medicaid program would have saved approximately 13% or about $41.8 million, again excluding the effects of manufacturer rebates. If the ACE inhibitor ratio had been 90% in 2008, the cost savings for the combined Medicaid fee-for-service programs would have been about 33% or about $102.3 million. The total cost savings opportunity with 100% generic ACE inhibitor utilization in 2008 and an ACE inhibitor ratio of 75% was $75.1 million (24%) or $142.3 million (46%) with a 90% ACE inhibitor ratio. Factors that affect Medicaid spending by contributing to increased utilization of ACE inhibitors and ARBs, such as the rising prevalence of hypertension, heart disease, and diabetes, can be offset by reduction in the average price attained through a higher proportion of ACE inhibitors and a higher percentage of generic versus brand ACE inhibitors.
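The savings figures above reduce to a substitution calculation; a back-of-envelope sketch (the per-claim prices come from the abstract, while the total claim volume is an assumed round number chosen to be consistent with the reported $41.8 million):

```python
# Savings from shifting claims from brand ARBs to generic ACE inhibitors.
def savings(total_claims, ace_ratio_now, ace_ratio_target,
            arb_price=81.98, generic_ace_price=17.64):
    """Per-claim prices from the abstract; shifted claims move ARB -> generic ACE."""
    shifted = (ace_ratio_target - ace_ratio_now) * total_claims
    return shifted * (arb_price - generic_ace_price)

# ~6.3 million combined ACE/ARB claims assumed for 2008 (illustrative)
print(round(savings(6_300_000, 0.647, 0.75) / 1e6, 1))  # ~41.8 (million dollars)
```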
Ramtinfar, Sara; Chabok, Shahrokh Yousefzadeh; Chari, Aliakbar Jafari; Reihanian, Zoheir; Leili, Ehsan Kazemnezhad; Alizadeh, Arsalan
2016-10-01
The aim of this study is to compare the discriminant function of multiple organ dysfunction score (MODS) and sequential organ failure assessment (SOFA) components in predicting Intensive Care Unit (ICU) mortality and neurologic outcome. A descriptive-analytic study was conducted at a level I trauma center. Data were collected from patients with severe traumatic brain injury admitted to the neurosurgical ICU. Basic demographic data and SOFA and MOD scores were recorded daily for all patients. Odds ratios (ORs) were calculated to determine the relationship of each component score to mortality, and the area under the receiver operating characteristic (AUROC) curve was used to compare the discriminative ability of the two tools with respect to ICU mortality. The most common organ failure observed was respiratory, detected by SOFA in 26% and by MODS in 13% of cases; the second most common was cardiovascular, detected by SOFA in 18% and by MODS in 13%. No hepatic or renal failure occurred, and coagulation failure was reported as 2.5% by both SOFA and MODS. Cardiovascular failure defined by both tools correlated with ICU mortality, and the correlation was stronger for SOFA (OR = 6.9, CI = 3.6-13.3, P < 0.05 for SOFA; OR = 5, CI = 3-8.3, P < 0.05 for MODS; AUROC = 0.82 for SOFA; AUROC = 0.73 for MODS). The relationship of cardiovascular failure to dichotomized neurologic outcome was not statistically significant. ICU mortality was not associated with respiratory or coagulation failure. Cardiovascular failure defined by either tool was significantly related to ICU mortality. Compared with MODS, SOFA-defined cardiovascular failure was a stronger predictor of death. ICU mortality was not affected by respiratory or coagulation failures.
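Odds ratios such as the OR = 6.9 reported above come from a standard 2×2 outcome table; a sketch with invented counts chosen to reproduce that value:

```python
# Odds ratio from a 2x2 outcome table (counts invented for illustration).
def odds_ratio(a, b, c, d):
    """a: failure & died, b: failure & survived,
       c: no failure & died, d: no failure & survived."""
    return (a * d) / (b * c)

print(odds_ratio(30, 20, 25, 115))  # 6.9
```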
Air Force Systems Command Research Planning Guide (Research Objectives).
1987-07-15
potential for producing alloys with superior properties. Titanium and Iron Aluminides - Basic research to identify approaches leading to the formation...performance of nickel, aluminum, and titanium alloys and ceramics are required to provide future Air Force weapon systems components with structural...seriously block full exploitation. Aluminum and Titanium Alloys - Three generic families of alloys are being investigated for both alloy
Document for 270 Voltage Direct Current (270 V dc) System
NASA Astrophysics Data System (ADS)
1992-09-01
The paper presents the technical design and application information established by the SAE Aerospace Recommended Practice concerning the generation, distribution, control, and utilization of aircraft 270 V dc electrical power systems and support equipment. Also presented are references and definitions making it possible to compare various electrical systems and components. A diagram of the generic 270 V dc high-voltage direct current system is included.
ERIC Educational Resources Information Center
Tomba, J. Pablo
2015-01-01
The thermodynamic formalism of ideal solutions is developed in most of the textbooks postulating a form for the chemical potential of a generic component, which is adapted from the thermodynamics of ideal gas mixtures. From this basis, the rest of useful thermodynamic properties can be derived straightforwardly without further hypothesis. Although…
NASA Technical Reports Server (NTRS)
Lim, H. S.; Verzwyvelt, S. A.
1989-01-01
KOH concentration effects on the cycle life of a Ni/H2 cell have been studied by carrying out a cycle life test of ten Ni/H2 boiler plate cells containing electrolytes of various KOH concentrations. Failure analyses of these cells were carried out after completion of the life test, which accumulated up to 40,000 cycles at an 80 percent depth of discharge over a period of 3.7 years. These failure analyses included studies on changes of electrical characteristics of test cells and component analyses after disassembly of the cell. The component analyses included visual inspections, dimensional changes, capacity measurements of nickel electrodes, scanning electron microscopy, BET surface area measurements, and chemical analyses. Results have indicated that the failure mode and change in the nickel electrode varied as the concentration was varied, especially when the concentration was changed from 31 percent or higher to 26 percent or lower.
Analysis of Emergency Diesel Generators Failure Incidents in Nuclear Power Plants
NASA Astrophysics Data System (ADS)
Hunt, Ronderio LaDavis
In early years of operation, emergency diesel generators had a minimal rate of demand failures. Emergency diesel generators (EDGs) are designed to operate as a backup when the main source of electricity has been disrupted. Recently, EDGs at nuclear power plants (NPPs) around the United States have been failing, causing either station blackouts or loss of onsite and offsite power. These failures were of a specific type called demand failures. This thesis evaluated a problem of concern to the nuclear industry: the rate of EDG demand failures rose from an average of 1 per year in 1997 to an excessive event of 4 demand failures in a single year in 2011. To determine when the next such excessive event might occur, and its possible cause, two analyses were conducted: a statistical analysis and a root cause analysis. The statistical analysis applied an extreme event probability approach to estimate the year of the next excessive event as well as the probability of that event occurring. The root cause analysis examined potential causes of the excessive event by evaluating EDG manufacturers, aging, policy changes/maintenance practices, and failure components, and investigated the correlation between the demand failure data and historical data. Final results from the statistical analysis showed that an excessive event could be expected within a fixed range of probability, with a wider range of probability obtained from the extreme event probability approach. 
Conclusions showed that predicting, with an acceptable confidence level, the year and probability of the next excessive demand failure event was difficult, but it was likely that this type of failure will not be a 100-year event. Notably, as of 2005 the majority of EDG demand failures occurred within the main components. Based on the percentages obtained, the overall analysis indicated that the excessive event was caused by the overall age (wear and tear) of the emergency diesel generators in nuclear power plants. Future work will be to better determine the return period of the excessive event, once it has occurred a second time, by implementing the extreme event probability approach.
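One simple way to gauge how rare a 4-failure year is: treat annual EDG demand-failure counts as Poisson with the historical mean of about 1 per year. This is an illustrative model consistent with the abstract's "not a 100-year event" conclusion, not the thesis's extreme-event method:

```python
import math

# Poisson model: P(at least k failures in a year) given a mean annual rate.
def prob_at_least(k, mean=1.0):
    return 1.0 - sum(math.exp(-mean) * mean**i / math.factorial(i)
                     for i in range(k))

p = prob_at_least(4)   # chance of a year with 4 or more demand failures
print(round(p, 4))     # ~0.019
print(round(1 / p))    # implied return period ~53 years, i.e. not a 100-year event
```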
Failure mode analysis to predict product reliability.
NASA Technical Reports Server (NTRS)
Zemanick, P. P.
1972-01-01
The failure mode analysis (FMA) is described as a design tool to predict and improve product reliability. The objectives of the failure mode analysis are presented as they influence component design, configuration selection, the product test program, the quality assurance plan, and engineering analysis priorities. The detailed mechanics of performing a failure mode analysis are discussed, including one suggested format. Some practical difficulties of implementation are indicated, drawn from experience with preparing FMAs on the nuclear rocket engine program.
Hansen, R A; Qian, J; Berg, R L; Linneman, J G; Seoane-Vazquez, E; Dutcher, S; Raofi, S; Page, C D; Peissig, P L
2018-02-01
Authorized generics are identical in formulation to brand drugs, manufactured by the brand company but marketed as a generic. Generics, marketed by generic manufacturers, are required to demonstrate pharmaceutical and bioequivalence to the brand drug, but repetition of clinical trials is not required. This retrospective cohort study compared outcomes for generics and authorized generics, which serves as a generic vs. brand proxy that minimizes bias against generics. For the seven drugs studied between 1999 and 2014, 5,234 unique patients were on brand drugs prior to generic entry and 4,900 (93.6%) switched to a generic. During the 12 months following the brand-to-generic switch, patients using generics vs. authorized generics were similar in terms of outpatient visits, urgent care visits, hospitalizations, and medication discontinuation. The likelihood of emergency department (ED) visits was slightly higher for authorized generics compared with generics. These data suggest that generics were clinically no worse than their proxy brand comparators. © 2017 American Society for Clinical Pharmacology and Therapeutics.
Reliability considerations in the placement of control system components
NASA Technical Reports Server (NTRS)
Montgomery, R. C.
1983-01-01
This paper presents a methodology, along with applications to a grid type structure, for incorporating reliability considerations in the decision for actuator placement on large space structures. The method involves the minimization of a criterion that considers mission life and the reliability of the system components. It is assumed that the actuator gains are to be readjusted following failures, but their locations cannot be changed. The goal of the design is to suppress vibrations of the grid and the integral square of the grid modal amplitudes is used as a measure of performance of the control system. When reliability of the actuators is considered, a more pertinent measure is the expected value of the integral; that is, the sum of the squares of the modal amplitudes for each possible failure state considered, multiplied by the probability that the failure state will occur. For a given set of actuator locations, the optimal criterion may be graphed as a function of the ratio of the mean time to failure of the components and the design mission life or reservicing interval. The best location of the actuators is typically different for a short mission life than for a long one.
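The reliability-weighted performance criterion described above can be sketched directly: the expected cost is the performance index of each failure state weighted by that state's probability. The probabilities and performance values below are invented for illustration:

```python
# Expected performance = sum over failure states of P(state) * cost(state).
def expected_performance(states):
    """states: list of (probability, integral-square modal amplitude)."""
    return sum(p * cost for p, cost in states)

states = [
    (0.90, 1.0),   # all actuators healthy
    (0.08, 2.5),   # one actuator failed, gains readjusted
    (0.02, 6.0),   # two actuators failed
]
print(round(expected_performance(states), 2))  # 1.22
```

Actuator placements can then be compared by this expected value rather than by nominal (all-healthy) performance alone.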
Prognostics for Microgrid Components
NASA Technical Reports Server (NTRS)
Saxena, Abhinav
2012-01-01
Prognostics is the science of predicting future performance and potential failures based on targeted condition monitoring. Moving away from the traditional reliability-centric view, prognostics aims at detecting and quantifying the time to impending failures. This advance warning provides the opportunity to take actions that can preserve uptime, reduce cost of damage, or extend the life of the component. The talk will focus on the concepts and basics of prognostics from the viewpoint of condition-based systems health management. Differences with other techniques used in systems health management and philosophies of prognostics used in other domains will be shown. Examples relevant to microgrid systems and subsystems will be used to illustrate various types of prediction scenarios and the resources it takes to set up a desired prognostic system. Specifically, the implementation results for power storage and power semiconductor components will demonstrate specific solution approaches of prognostics. The role of constituent elements of prognostics, such as the model, prediction algorithms, failure threshold, run-to-failure data, requirements and specifications, and post-prognostic reasoning will be explained. A discussion on performance evaluation and performance metrics will conclude the technical discussion, followed by general comments on open research problems and challenges in prognostics.
Overview of the Smart Network Element Architecture and Recent Innovations
NASA Technical Reports Server (NTRS)
Perotti, Jose M.; Mata, Carlos T.; Oostdyk, Rebecca L.
2008-01-01
In industrial environments, system operators rely on the availability and accuracy of sensors to monitor processes and detect failures of components and/or processes. The sensors must be networked in such a way that their data is reported to a central human interface, where operators are tasked with making real-time decisions based on the state of the sensors and the components that are being monitored. Incorporating health management functions at this central location aids the operator by automating the decision-making process to suggest, and sometimes perform, the action required by current operating conditions. Integrated Systems Health Management (ISHM) aims to incorporate data from many sources, including real-time and historical data and user input, and extract information and knowledge from that data to diagnose failures and predict future failures of the system. By distributing health management processing to lower levels of the architecture, less bandwidth is required for ISHM, data fusion is enhanced, systems and processes are made more robust, and resolution is improved for the detection and isolation of failures in a system, subsystem, component, or process. The Smart Network Element (SNE) has been developed at NASA Kennedy Space Center to perform intelligent functions at the sensor and actuator level in support of ISHM.
Enhanced Component Performance Study: Emergency Diesel Generators 1998–2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-11-01
This report presents an enhanced performance evaluation of emergency diesel generators (EDGs) at U.S. commercial nuclear power plants. This report evaluates component performance over time using (1) Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES) data from 1998 through 2014 and (2) maintenance unavailability (UA) performance data from Mitigating Systems Performance Index (MSPI) Basis Document data from 2002 through 2014. The objective is to show estimates of current failure probabilities and rates related to EDGs, trend these data on an annual basis, determine if the current data are consistent with the probability distributions currently recommended for use in NRC probabilistic risk assessments, show how the reliability data differ for different EDG manufacturers and for EDGs with different ratings, and summarize the subcomponents, causes, detection methods, and recovery associated with each EDG failure mode. Engineering analyses were performed with respect to time period and failure mode without regard to the actual number of EDGs at each plant. The factors analyzed are: sub-component, failure cause, detection method, recovery, manufacturer, and EDG rating. Six trends with varying degrees of statistical significance were identified in the data.
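A common way to produce the kind of failure-on-demand probability estimates such a study trends is a Bayesian update of demand-failure counts; the Jeffreys prior and the counts below are illustrative assumptions, not the report's data:

```python
# Beta-binomial update of an EDG failure-on-demand probability.
# Jeffreys prior Beta(0.5, 0.5) is a conventional noninformative choice.
def posterior_mean(failures, demands, a=0.5, b=0.5):
    """Posterior mean of p after observing `failures` in `demands` demands."""
    return (a + failures) / (a + b + demands)

# e.g. 2 failures observed in 500 demands (counts invented for illustration)
print(round(posterior_mean(2, 500), 4))  # ~0.005 per demand
```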
A Critical Analysis of the Conventionally Employed Creep Lifing Methods
Abdallah, Zakaria; Gray, Veronica; Whittaker, Mark; Perkins, Karen
2014-01-01
The deformation of structural alloys presents problems for power plants and aerospace applications due to the demand for elevated temperatures for higher efficiencies and reductions in greenhouse gas emissions. The materials used in such applications experience harsh environments which may lead to deformation and failure of critical components. To avoid such catastrophic failures and also increase efficiency, future designs must utilise novel/improved alloy systems with enhanced temperature capability. In recognising this issue, a detailed understanding of creep is essential for the success of these designs by ensuring components do not experience excessive deformation which may ultimately lead to failure. To achieve this, a variety of parametric methods have been developed to quantify creep and creep fracture in high temperature applications. This study reviews a number of well-known traditionally employed creep lifing methods, with some more recent approaches also included. The first section of this paper focuses on predicting the long-term creep rupture properties, which is an area of interest for the power generation sector. The second section looks at pre-defined strains and the reproduction of full creep curves based on available data, which is pertinent to the aerospace industry where components are replaced before failure. PMID:28788623
Surrogate oracles, generalized dependency and simpler models
NASA Technical Reports Server (NTRS)
Wilson, Larry
1990-01-01
Software reliability models require the sequence of interfailure times from the debugging process as input. It was previously illustrated that using data from replicated debugging could greatly improve reliability predictions. However, inexpensive replication of the debugging process requires the existence of a cheap, fast error detector. Laboratory experiments can be designed around a gold version which is used as an oracle or around an n-version error detector. Unfortunately, software developers cannot be expected to have an oracle or to bear the expense of n versions. A generic technique is being investigated for approximating replicated data by using the partially debugged software as a difference detector. It is believed that the failure rate of each fault depends significantly on the presence or absence of other faults; thus, in order to discuss a failure rate for a known fault, the presence or absence of each of the other known faults needs to be specified. Also of interest are simpler models which use shorter input sequences without sacrificing accuracy; in fact, a possible gain in performance is conjectured. To investigate these propositions, NASA computers running LIC (RTI) versions are used to generate data. This data will be used to label the debugging graph associated with each version. These labeled graphs will be used to test the utility of a surrogate oracle, to analyze the dependent nature of fault failure rates, and to explore the feasibility of reliability models which use the data of only the most recent failures.
Komiya, Akira; Suzuki, Hiroyoshi; Awa, Yusuke; Egoshi, Ken-ichi; Onishi, Tetsuro; Nakatsu, Hiroomi; Ohki, Takemasa; Mikami, Kazuo; Sato, Naohide; Araki, Kazuhiro; Ota, Sho; Naya, Yukio; Ichikawa, Tomohiko
2010-06-01
To investigate the benefit of the alpha1-adrenoceptor antagonist naftopidil on the quality of life (QOL) of patients with lower urinary tract symptoms suggestive of benign prostatic hyperplasia (BPH/LUTS). A total of 99 men with BPH/LUTS were prospectively recruited. The Short Form-8 (SF-8) was used for generic QOL assessment and each parameter was compared with the norm in these patients. Longitudinal changes were evaluated using the SF-8 and the International Prostatic Symptoms Score (I-PSS) at baseline, 4 and 8 weeks after naftopidil administration. The relationship between SF-8 and I-PSS was analyzed. Five of eight components in the SF-8 were significantly lower than the Japanese national norm at baseline. SF-8 score was improved by naftopidil at 4 and 8 weeks in general health (GH) and physical component summary (PCS) in the patients in their 70s. Mental health (MH) and mental component summary (MCS) were improved at 8 weeks in patients in their 60s. When analyzing the whole cohort, SF-8 GH, role emotional (RE) and MH had improved at 8 weeks, which was similar to the norm, and bodily pain (BP) results were better. Compared with the baseline, total I-PSS, storage/voiding symptoms and QOL index scores improved significantly under naftopidil. Each component of I-PSS (except for hesitancy) correlated with SF-8 sub-scales (except for BP) to some extent. BPH/LUTS impairs generic QOL, which is improved by naftopidil treatment. SF-8 can be a useful instrument to assess the efficacy of BPH/LUTS treatment because of its simplicity to complete and analyze and its meaningful relationship to I-PSS.
ERIC Educational Resources Information Center
Simpson, Amber; Maltese, Adam
2017-01-01
The term failure typically evokes negative connotations in educational settings and is likely to be accompanied by negative emotional states, low sense of confidence, and lack of persistence. These negative emotional and behavioral states may factor into an individual not pursuing a degree or career in science, technology, engineering, or…
2017-06-30
along the intermetallic component or at the interface between the two components of the composite. The availability of microscale experimental data in...obtained with the PD model; (c) map of strain energy density; (d) the new quasi-damage index is a predictor of failure. As in the case of FRCs, one...which points are most likely to fail, before actual failure happens. The "quasi-damage index", shown in the formula below, is a point-wise measure
Forensic applications of metallurgy - Failure analysis of metal screw and bolt products
NASA Astrophysics Data System (ADS)
Tiner, Nathan A.
1993-03-01
It is often necessary for engineering consultants in liability lawsuits to consider whether a component has a manufacturing and/or design defect, as judged by industry standards, as well as whether the component was strong enough to resist service loads. Attention is presently given to the principles that must be appealed to in order to clarify these two issues in the cases of metal screw and bolt failures, which are subject to fatigue and brittle fractures and ductile dimple rupture.
Digital Systems Validation Handbook. Volume 2
1989-02-01
TABLE 7.2-3. FAILURE RATES FOR MAJOR RDFCS COMPONENTS (COMPONENT / UNIT FAILURE RATE): Pitch Angle Gyro 303; Roll Angle Gyro 303; Yaw Rate Gyro 200...Airplane Weight 314,500 lb; Altitude 35 ft; Angle of Attack 10.91°; Indicated Air Speed 168 kts; Flap Deployment 22°. Transition capability was added to go...various pieces of information into the form needed by the FCCs. For example, roll angle and pitch angle are converted to three-wire AC signals, properly
Controlling stress corrosion cracking in mechanism components of ground support equipment
NASA Technical Reports Server (NTRS)
Majid, W. A.
1988-01-01
The selection of materials for mechanism components used in ground support equipment, such that failures resulting from stress corrosion cracking will be prevented, is described. General criteria to be used in designing for resistance to stress corrosion cracking are also provided. Stress corrosion can be defined as the combined action of sustained tensile stress and corrosion causing premature failure of materials. Various aluminum alloys, steels, nickel, titanium, and copper alloys, along with their tempers, are evaluated for stress corrosion cracking in corrosive environments.
Modular space vehicle boards, control software, reprogramming, and failure recovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judd, Stephen; Dallmann, Nicholas; McCabe, Kevin
A space vehicle may have a modular board configuration that commonly uses some or all components and a common operating system for at least some of the boards. Each modular board may have its own dedicated processing, and processing loads may be distributed. The space vehicle may be reprogrammable, and may be launched without code that enables all functionality and/or components. Code errors may be detected and the space vehicle may be reset to a working code version to prevent system failure.
Generic trending and analysis system
NASA Technical Reports Server (NTRS)
Keehan, Lori; Reese, Jay
1994-01-01
The Generic Trending and Analysis System (GTAS) is a generic spacecraft performance monitoring tool developed by NASA Code 511 and Loral Aerosys. It is designed to facilitate quick anomaly resolution and trend analysis. Traditionally, the job of off-line analysis has been performed using hardware and software systems developed for real-time spacecraft contacts; these systems were then supplemented with a collection of tools developed by Flight Operations Team (FOT) members. Since the number of upcoming missions is increasing, NASA can no longer afford to operate in this manner. GTAS improves control center productivity and effectiveness because it provides a generic solution across multiple missions. Thus, GTAS eliminates the need for each individual mission to develop duplicate capabilities. It also allows for more sophisticated tools to be developed because it draws resources from several projects. In addition, the GTAS software system incorporates commercial off-the-shelf (COTS) software packages and reuses components of other NASA-developed systems wherever possible. GTAS has incorporated lessons learned from previous missions by involving the users early in the development process. GTAS users took a proactive role in requirements analysis, design, development, and testing. Because of user involvement, several special tools were designed and are now being developed. GTAS users expressed considerable interest in facilitating data collection for long term trending and analysis. As a result, GTAS provides easy access to large volumes of processed telemetry data directly in the control center. The GTAS archival and retrieval capabilities are supported by the integration of optical disk technology and a COTS relational database management system.
Fracture Damage and Failure of Cannon Components by Service Loading
1983-02-01
the result of normal service conditions. Details of the failure and the redesign of the cannon have been described elsewhere. The brief review...here is intended to describe the extreme situation of very severe damage and failure of a cannon. In fact, this failure led to many fracture-safe...criterion; elastic-perfectly plastic material properties. The experiments summarized in Figure 6 used cannon tubes in which a 6.4 mm deep semi
Preliminary Failure Modes and Effects Analysis of the US DCLL Test Blanket Module
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee C. Cadwallader
2010-06-01
This report presents the results of a preliminary failure modes and effects analysis (FMEA) of a small tritium-breeding test blanket module design for the International Thermonuclear Experimental Reactor. The FMEA was quantified with “generic” component failure rate data, and the failure events are binned into postulated initiating event families and frequency categories for safety assessment. An appendix to this report contains repair time data to support an occupational radiation exposure assessment for test blanket module maintenance.
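The quantification step described above (generic failure rates binned into frequency categories) can be sketched as follows. The failure modes, rates, component counts, and bin thresholds below are invented for illustration; they are not values from the FMEA report.

```python
# Each failure mode's yearly frequency is approximated as
# (generic rate per hour) x (component count) x (hours of exposure),
# then binned into a frequency category for safety assessment.
FAILURE_MODES = [
    # (name, generic rate per hour, component count) -- illustrative values
    ("pipe leak",   1e-9, 50),
    ("pump fails",  1e-5,  2),
    ("valve stuck", 1e-6, 10),
]
HOURS_PER_YEAR = 8760.0

def frequency_category(freq_per_year):
    """Illustrative bin edges, not the report's actual category definitions."""
    if freq_per_year >= 1e-2:
        return "anticipated"
    if freq_per_year >= 1e-4:
        return "unlikely"
    return "extremely unlikely"

binned = {name: frequency_category(rate * count * HOURS_PER_YEAR)
          for name, rate, count in FAILURE_MODES}
```

Grouping the binned events into initiating event families then gives the starting point for the safety assessment the abstract mentions.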
Preliminary Failure Modes and Effects Analysis of the US DCLL Test Blanket Module
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee C. Cadwallader
2007-08-01
This report presents the results of a preliminary failure modes and effects analysis (FMEA) of a small tritium-breeding test blanket module design for the International Thermonuclear Experimental Reactor. The FMEA was quantified with “generic” component failure rate data, and the failure events are binned into postulated initiating event families and frequency categories for safety assessment. An appendix to this report contains repair time data to support an occupational radiation exposure assessment for test blanket module maintenance.
ELECTRONIC COMPONENT COOLING ALTERNATIVES: COMPRESSED AIR AND LIQUID NITROGEN
The goal of this study was to evaluate topics used to troubleshoot circuit boards with known or suspected thermally intermittent components. Failure modes for thermally intermittent components are typically mechanical defects, such as cracks in solder paths or joints, or broken b...
Comparison of generic-to-brand switchback patterns for generic and authorized generic drugs
Hansen, Richard A.; Qian, Jingjing; Berg, Richard; Linneman, James; Seoane-Vazquez, Enrique; Dutcher, Sarah K.; Raofi, Saeid; Page, C. David; Peissig, Peggy
2018-01-01
Background While generic drugs are therapeutically equivalent to brand drugs, some patients and healthcare providers remain uncertain about whether they produce identical outcomes. Authorized generics, which are identical in formulation to corresponding brand drugs but marketed as a generic, provide a unique post-marketing opportunity to study whether utilization patterns are influenced by perceptions of generic drugs. Objectives To compare generic-to-brand switchback rates between generics and authorized generics. Methods A retrospective cohort study was conducted using claims and electronic health records data from a regional U.S. healthcare system. Ten drugs with authorized generics and generics marketed between 1999 and 2014 were evaluated. Eligible adult patients received a brand drug during the 6 months preceding generic entry, and then switched to a generic or authorized generic. Patients in this cohort were followed for up to 30 months from the index switch date to evaluate occurrence of generic-to-brand switchbacks. Switchback rates were compared between patients on authorized generics versus generics using Kaplan-Meier curves and Cox proportional hazards models, controlling for individual drug effects, age, sex, Charlson comorbidity score, pre-index drug use characteristics, and pre-index healthcare utilization. Results Among 5,542 unique patients that switched from brand-to-generic or brand-to-authorized generic, 264 (4.8%) switched back to the brand drug. Overall switchback rates were similar for authorized generics compared with generics (HR=0.86; 95% CI 0.65-1.15). The likelihood of switchback was higher for alendronate (HR=1.64; 95% CI 1.20-2.23) and simvastatin (HR=1.81; 95% CI 1.30-2.54) and lower for amlodipine (HR=0.27; 95% CI 0.17-0.42) compared with other drugs in the cohort. 
Conclusions Overall switchback rates were similar between authorized generic and generic drug users, indirectly supporting similar efficacy and tolerability profiles for brand and generic drugs. Reasons for differences in switchback rates among specific products need to be further explored. PMID:28152215
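The Kaplan-Meier comparison of switchback rates used above can be illustrated with a minimal hand-rolled estimator. The follow-up times and event indicators below are invented toy data, not values from the study.

```python
def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = switchback observed, 0 = censored.
    Returns [(t, S(t))] at each time where an event occurred."""
    pairs = sorted(zip(times, events))
    n = len(pairs)
    surv, out, at_risk, i = 1.0, [], n, 0
    while i < n:
        t = pairs[i][0]
        same_t = [p for p in pairs if p[0] == t]
        deaths = sum(e for _, e in same_t)
        if deaths:
            # product-limit update: survival drops by the event fraction at t
            surv *= (1.0 - deaths / at_risk)
            out.append((t, surv))
        at_risk -= len(same_t)
        i += len(same_t)
    return out

# Three patients: switchbacks at months 1 and 2, one censored at month 3.
curve = kaplan_meier([1, 2, 3], [1, 1, 0])
```

In the study itself this curve would be computed per cohort (authorized generic vs. generic) and the hazard ratio estimated with a Cox model adjusting for the listed covariates.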
CARES/Life Software for Designing More Reliable Ceramic Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Powers, Lynn M.; Baker, Eric H.
1997-01-01
Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software eases this task by providing a tool to optimize the design and manufacture of brittle material components using probabilistic reliability analysis techniques. Probabilistic component design involves predicting the probability of failure for a thermomechanically loaded component from specimen rupture data. Typically, these experiments are performed using many simple-geometry flexural or tensile test specimens. A static, dynamic, or cyclic load is applied to each specimen until fracture. Statistical strength and slow crack growth (SCG, or fatigue) parameters are then determined from these data. Using these parameters and the results obtained from a finite element analysis, the time-dependent reliability for a complex component geometry and loading is then predicted. Appropriate design changes are made until an acceptable probability of failure has been reached.
NASA Technical Reports Server (NTRS)
Wong, J. T.; Andre, W. L.
1981-01-01
A recent result shows that, for a certain class of systems, the interdependency among the elements of such a system, together with the elements themselves, constitutes a mathematical structure: a partially ordered set. This is called a loop-free logic model of the system. On the basis of an intrinsic property of the mathematical structure, a characterization of system component failure in terms of maximal subsets of bad test signals of the system was obtained. As a consequence, information concerning the total number of failed components in the system was also deduced. Detailed examples are given to show how to restructure real systems containing loops into loop-free models to which the result is applicable.
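A toy version of the loop-free (partially ordered) structure can be sketched as a DAG of signal dependencies, with the maximal elements of the set of bad test signals singled out. The graph, signal labels, and the specific notion of "maximal" used here are illustrative assumptions, not the paper's construction.

```python
def maximal_bad_signals(edges, bad):
    """edges: dict mapping each signal to the set of signals downstream of it
    (the covering relation of the partial order). A bad signal is maximal if
    no other bad signal lies strictly above (downstream of) it."""
    def downstream(a):
        seen, stack = set(), [a]
        while stack:
            x = stack.pop()
            for y in edges.get(x, ()):
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return seen
    return {b for b in bad if not (downstream(b) & set(bad))}

# Signal 'a' feeds 'b', which feeds 'c'; 'd' independently feeds 'c'.
edges = {"a": {"b"}, "b": {"c"}, "d": {"c"}}
maximal = maximal_bad_signals(edges, {"a", "b", "d"})
```

The intuition matching the abstract: the maximal bad signals bound where failed components can sit, since anything strictly below a bad signal in a loop-free model cannot alone explain it.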
Quality of Life for Saudi Patients With Heart Failure: A Cross-Sectional Correlational Study
AbuRuz, Mohannad Eid; Alaloul, Fawwaz; Saifan, Ahmed; Masa’Deh, Rami; Abusalem, Said
2016-01-01
Introduction: Heart failure is a major public health issue and a growing concern in developing countries, including Saudi Arabia. Most related research was conducted in Western cultures and may have limited applicability for individuals in Saudi Arabia. Thus, this study assesses the quality of life of Saudi patients with heart failure. Materials and Methods: A cross-sectional correlational design was used on a convenience sample of 103 patients with heart failure. Data were collected using the Short Form-36 and the Medical Outcomes Study-Social Support Survey. Results: Overall, the patients’ scores were low for all domains of Quality of Life. The Physical Component Summary and Mental Component Summary mean scores (±SD) were 36.7±12.4 and 48.8±6.5, respectively, indicating poor Quality of Life. Left ventricular ejection fraction was the strongest predictor of both physical and mental summaries. Conclusion: Identifying factors that impact quality of life for Saudi heart failure patients is important in identifying and meeting their physical and psychosocial needs. PMID:26493415
Modeling joint restoration strategies for interdependent infrastructure systems
Simonovic, Slobodan P.
2018-01-01
Life in the modern world depends on multiple critical services provided by infrastructure systems which are interdependent at multiple levels. To effectively respond to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction process are presented. Both models consider the failure types, infrastructure operating rules, and interdependencies among systems. Second, an optimization model for determining an optimal joint restoration strategy at the infrastructure component level, by minimizing the economic loss from the infrastructure failures, is proposed. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high level failures in interdependent systems; the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers to understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance safety of infrastructure systems. PMID:29649300
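One simplified ingredient of such restoration optimization can be sketched directly: with a single repair crew and a constant per-period economic loss for each failed component, total loss is minimized by ordering repairs by the classic weighted-shortest-processing-time rule. This is a heavily reduced illustration, not the paper's model, and all component names and numbers below are invented.

```python
def restoration_order(components):
    """components: list of (name, repair_time, loss_per_period).
    WSPT rule: repair in increasing order of repair_time / loss_per_period."""
    return sorted(components, key=lambda c: c[1] / c[2])

def total_loss(order):
    """Each failed component accrues loss until its repair completes."""
    t, loss = 0.0, 0.0
    for _, duration, rate in order:
        t += duration
        loss += rate * t
    return loss

comps = [("pump station", 4.0, 1.0),
         ("power line",   1.0, 5.0),
         ("substation",   2.0, 2.0)]
best = restoration_order(comps)
```

The real model additionally captures interdependencies (e.g., a water pump needing a restored substation), which is what makes the joint, rather than per-system, strategy matter.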
Probabilistic framework for product design optimization and risk management
NASA Astrophysics Data System (ADS)
Keski-Rahkonen, J. K.
2018-05-01
Probabilistic methods have gradually gained ground within engineering practice, but it is still the industry standard to use deterministic safety-margin approaches to dimension components and qualitative methods to manage product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes due to the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
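The recommended Monte Carlo load-resistance process can be sketched in a few lines: failure occurs when a sampled load exceeds a sampled resistance, and the failure probability is the observed failure fraction. The normal distributions and their parameters below are illustrative assumptions, not values from the paper.

```python
import random

def mc_failure_probability(n=200_000, seed=1):
    """Estimate P(load > resistance) by direct Monte Carlo sampling."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        load = rng.gauss(100.0, 10.0)        # applied load model
        resistance = rng.gauss(150.0, 20.0)  # component strength model
        if load > resistance:
            failures += 1
    return failures / n

pf = mc_failure_probability()
# For two independent normals the exact answer is
# Phi((mu_L - mu_R) / sqrt(sd_L^2 + sd_R^2)), a useful sanity check.
```

With realistic (small) failure probabilities, variance-reduction techniques or importance sampling are usually needed on top of this naive scheme, which is part of what makes the harmonized framework in the paper useful.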
NESTEM-QRAS: A Tool for Estimating Probability of Failure
NASA Technical Reports Server (NTRS)
Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.
2002-01-01
An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions, providing a needed input for estimating the success rate of any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, along with a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface, and walks step by step through the process the interface uses, illustrated with an example.
NESTEM-QRAS: A Tool for Estimating Probability of Failure
NASA Astrophysics Data System (ADS)
Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.
2002-10-01
An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions, providing a needed input for estimating the success rate of any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, along with a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface, and walks step by step through the process the interface uses, illustrated with an example.
A new class of asymptotically non-chaotic vacuum singularities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klinger, Paul, E-mail: paul.klinger@univie.ac.at
2015-12-15
The BKL conjecture, stated in the 1960s and early 1970s by Belinski, Khalatnikov and Lifschitz, proposes a detailed description of the generic asymptotic dynamics of spacetimes as they approach a spacelike singularity. It predicts complicated chaotic behaviour in the generic case, but simpler non-chaotic behaviour in cases with symmetry assumptions or certain kinds of matter fields. Here we construct a new class of four-dimensional vacuum spacetimes containing spacelike singularities which show non-chaotic behaviour. In contrast with previous constructions, no symmetry assumptions are made. Rather, the metric is decomposed in Iwasawa variables and conditions on the asymptotic evolution of some of them are imposed. The constructed solutions contain five free functions of all space coordinates, two of which are constrained by inequalities. We investigate continuous and discrete isometries and compare the solutions to previous constructions. Finally, we give the asymptotic behaviour of the metric components and curvature.
Generic Business Model Types for Enterprise Mashup Intermediaries
NASA Astrophysics Data System (ADS)
Hoyer, Volker; Stanoevska-Slabeva, Katarina
The huge demand for situational and ad hoc applications desired by the mass of business end users has led to a new kind of Web application, well known as Enterprise Mashups. Users with no or limited programming skills are empowered to leverage, in a collaborative manner, existing Mashup components, combining and reusing company-internal and external resources within minutes into new value-added applications. Thereby, Enterprise Mashup environments act as intermediaries to match the supply of providers and the demand of consumers. Following the design science approach, we propose an interaction phase model artefact based on market transaction phases to structure the required intermediary features. By means of five case studies, we demonstrate the application of the designed model and identify three generic business model types for Enterprise Mashup intermediaries (directory, broker, and marketplace). So far, intermediaries following a real marketplace business model do not exist in the context of Enterprise Mashups, and this emerging paradigm requires further research.
Monk, Andrew; Hone, Kate; Lines, Lorna; Dowdall, Alan; Baxter, Gordon; Blythe, Mark; Wright, Peter
2006-09-01
Information and communication technology applications can help increase the independence and quality of life of older people, or people with disabilities who live in their own homes. A risk management framework is proposed to assist in selecting applications that match the needs and wishes of particular individuals. Risk comprises two components: the likelihood of the occurrence of harm and the consequences of that harm. In the home, the social and psychological harms are as important as the physical ones. The importance of the harm (e.g., injury) is conditioned by its consequences (e.g., distress, costly medical treatment). We identify six generic types of harm (including dependency, loneliness, fear and debt) and four generic consequences (including distress and loss of confidence in ability to live independently). The resultant client-centred framework offers a systematic basis for selecting and evaluating technology for independent living.
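The abstract's two-component definition of risk (likelihood of harm times severity of its consequences) can be sketched as a simple scoring scheme. The 1-5 ordinal scales and the category thresholds below are invented for illustration; the paper's framework is qualitative and client-centred rather than numeric.

```python
def risk_score(likelihood, consequence):
    """Risk combines likelihood of a harm with the severity of its
    consequences; both here on an ordinal 1 (low) .. 5 (high) scale."""
    return likelihood * consequence

def risk_category(score):
    """Illustrative thresholds for a 5x5 matrix, not the paper's."""
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# e.g. a likely harm (4) with severe consequences (5) scores 20 -> "high";
# harms like loneliness or dependency would be scored on the same scales.
example = risk_category(risk_score(4, 5))
```

In the paper's framework the six generic harms and four generic consequences would populate such a matrix per client, making the technology-selection trade-offs explicit.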
Mini-mast CSI testbed user's guide
NASA Technical Reports Server (NTRS)
Tanner, Sharon E.; Pappa, Richard S.; Sulla, Jeffrey L.; Elliott, Kenny B.; Miserentino, Robert; Bailey, James P.; Cooper, Paul A.; Williams, Boyd L., Jr.; Bruner, Anne M.
1992-01-01
The Mini-Mast testbed is a 20 m generic truss highly representative of future deployable trusses for space applications. It is fully instrumented for system identification and active vibration control experiments and is used as a ground testbed at NASA-Langley. The facility has actuators and feedback sensors linked via fiber optic cables to the Advanced Real Time Simulation (ARTS) system, where user-defined control laws are incorporated into generic controls software. The objective of the facility is to conduct comprehensive active vibration control experiments on a dynamically realistic large space structure. A primary goal is to understand the practical effects of simplifying theoretical assumptions. This User's Guide describes the hardware and its primary components, the dynamic characteristics of the test article, the control law implementation process, and the necessary safeguards employed to protect the test article. Suggestions for a strawman controls experiment are also included.